February 15, 2011

Jeopardy! and Artificial Intelligence

Colby Cosh writes:
Having lived through the hype over IBM’s 1997 Deep Blue challenge to human chessplayers, I find myself intensely irritated at IBM’s 2011 assault on Jeopardy! ...
Jeopardy!, after all, doesn’t demand that much in the way of language interpretation. Watson has to, at most, interpret text questions of no more than 25 or 30 words—questions which, by design, have only a single answer. It handles puns and figures of speech impressively, for a computer. But it doesn’t do so in anything like the way humans do. IBM’s ads would have you believe the opposite, but it bears emphasizing that Watson is not “getting” the jokes and wordplay of the Jeopardy! writers. It’s using Bayesian math on the fly to pick out key nouns and phrases and pass them to a lookup table. If it sees “1564” and “Pisa”, it’s going to say “Galileo”.

26 comments:

  1. "It’s using Bayesian math on the fly to pick out key nouns and phrases and pass them to a lookup table. If it sees “1564″ and “Pisa”, it’s going to say “Galileo”."

    Of course, as a former human It's Academic player, I can attest that this was mostly how we played. I fondly remember the time I was able to answer "John Locke" just by recognizing his nose.

    The benefit of Watson's method is that when you actually do the Bayesian math, you know how good your answers are.
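    Roughly, something like this - a toy Python sketch, not Watson's actual pipeline; the candidates, evidence scores, and buzz threshold are invented for illustration. Summed evidence per candidate is normalized into a confidence, and the system only buzzes when the top confidence clears a threshold.

    ```python
    # Hypothetical illustration, not Watson's real code: turn per-candidate
    # evidence scores into normalized confidences and a buzz/no-buzz decision.
    import math

    def answer_confidences(evidence_scores, buzz_threshold=0.5):
        """evidence_scores maps each candidate answer to a list of
        log-scale evidence scores gathered from different sources."""
        totals = {cand: sum(scores) for cand, scores in evidence_scores.items()}
        # A softmax over the summed scores yields a posterior-like confidence.
        top = max(totals.values())
        exps = {cand: math.exp(t - top) for cand, t in totals.items()}
        z = sum(exps.values())
        conf = {cand: e / z for cand, e in exps.items()}
        best = max(conf, key=conf.get)
        return conf, best, conf[best] >= buzz_threshold

    # Toy evidence for a clue mentioning "1564" and "Pisa"; the numbers are made up.
    scores = {
        "Galileo":     [2.3, 1.9, 0.8],   # matches both the date and the place
        "Shakespeare": [2.3, -1.0, 0.1],  # matches the date only
        "Fibonacci":   [-0.5, 1.9, 0.0],  # matches the place only
    }
    conf, best, buzz = answer_confidences(scores)
    print(best, round(conf[best], 2), "buzz" if buzz else "pass")
    ```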

  2. While I agree that this is overhyped, I wonder how Colby Cosh knows how human beings arrive at their answers and store of knowledge, and how he knows it is radically different from what Watson does. No one really knows how the mind works, so some modesty would be appropriate when criticizing AI attempts to replicate human decision making. It's quite possible that the human nervous system just has far more parallel recursive circuits than we can build in computers today.

  3. Hubert Dreyfus' "What Computers Still Can't Do" (1992, MIT Press) is still relevant. Human brains are computers, too, but no one has a clue as to how they work.

  4. Yeah, the dream of "real" understanding and computer cognition has proved to be a side issue. When I google for the best way to brine a turkey, the question of whether Google has chosen the top answer because it has thought really hard about it never enters my mind.

    Notions of computer thinking, so fascinating to me in my youth, seem like an afterthought in terms of what actually gets me useful answers.

    I think we probably will get machine sentience at some point, but I think it continues, as it ever has, to be about 20 years off.

  5. How about man vs computer rap battle?

  6. it's more sophisticated than colby cosh says. the guys working on it have made it better than just a pattern recognition system. sometimes it learns on its own and starts to get ideas right that it used to get wrong. when it begins to pick up on a new concept, say, gender, it synthesizes that idea with its huge database of facts. it's not at a human level of information processing yet but it's not just a mindless data lookup machine anymore. it gets spooky once in a while.

    this is the subject i studied in college, robotics, and later i actually worked on a project with chris welty, who appears to be the main technical developer of watson. there's so much to know in this field, it gets too hard i think. had to study all the psychology related to how humans do this stuff (pinker wasn't right about some of this stuff), learn computer science, learn physics for movement and vision (an enormous problem, so, SO hard, and the reason we don't have robots in our daily lives just doing all our chores). i guess the only thing you don't have to learn is the biology for various life processes (machines don't need the krebs cycle et cetera, but they do need some energy source, so back to physics). oh, and of course, you had to be at a near english major's level of command of english, to do all the language work. you better have a deep understanding of how to parse english into its constituent parts of speech.

    my only disappointment with watson so far is that they decided not to go for audio natural language processing on the incoming clues. guess that was just too hard, too many technical challenges all at once. audio natural language processing was not that good when i last worked with it 10 years ago, guess it hasn't improved much. must still be hidden markov models and such.
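    for the curious, here's a toy sketch of the kind of hidden markov model scoring used in that era of speech recognition - every state, symbol, and probability below is invented for illustration, not taken from any real recognizer: each word gets an HMM, and recognition picks the model that gives the observed acoustic frames the highest likelihood via the forward algorithm.

    ```python
    # Toy illustration only: isolated-word recognition with the forward algorithm.
    # All states, symbols, and probabilities are made up for the example.
    def forward_likelihood(obs, start_p, trans_p, emit_p):
        """Probability of an observation sequence under one word's HMM."""
        states = list(start_p)
        alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
        for o in obs[1:]:
            alpha = {t: sum(alpha[s] * trans_p[s][t] for s in states) * emit_p[t][o]
                     for t in states}
        return sum(alpha.values())

    start = {"s1": 0.8, "s2": 0.2}
    word_models = {
        "yes": {"trans": {"s1": {"s1": 0.6, "s2": 0.4}, "s2": {"s1": 0.1, "s2": 0.9}},
                "emit":  {"s1": {"a": 0.7, "b": 0.3}, "s2": {"a": 0.2, "b": 0.8}}},
        "no":  {"trans": {"s1": {"s1": 0.9, "s2": 0.1}, "s2": {"s1": 0.5, "s2": 0.5}},
                "emit":  {"s1": {"a": 0.1, "b": 0.9}, "s2": {"a": 0.6, "b": 0.4}}},
    }

    observed = ["a", "a", "b"]  # stand-in for a frame-by-frame acoustic symbol sequence
    scores = {w: forward_likelihood(observed, start, m["trans"], m["emit"])
              for w, m in word_models.items()}
    print(max(scores, key=scores.get), scores)  # picks the most likely word
    ```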

  7. the non-specialist, the man on the street, thinks this whole project is easy. "Just connect it to the internet". in fact that's what jeopardy producer harry friedman worried it was gonna be like when IBM first contacted him.

    but that's not only non-trivial, it's the wrong approach and WAY too slow. watson HAS to be a self-contained system. it has to get the answer in 1 second. the internet is FAR too slow for that. WAY, WAY too slow. in fact, you could see that ken jennings and brad rutter knew most of the answers that watson got correct, they just couldn't get the buzzer to trigger fast enough. try to use the internet for watson and it loses every time. it gets blitzed by good players.

    now, if you wanted to crack "Who wants to be a Millionaire?", then you would eventually develop a watson-like system which could seek out particular, domain-specific information on the internet. because it would get lots of field-specific questions whose answers it could never already know, but might be able to figure out with 1 minute of thinking. THAT would be TREMENDOUSLY impressive, and far more advanced than even watson.

    say, the question is about some specific term in, oh, carpentry or plumbing or color theory. no machine could know all that, could have all the expert knowledge of every specialist in every field of human experience. but maybe with 60 seconds of internet searching it could figure out a good amount of those questions.

  8. there's a similar problem in the autonomous vehicle field, where the "brain" of the car or truck or tank has to be self-contained, right there inside the vehicle and that's it. no outside connection to a supercomputer is allowed, all the "thinking" has to happen right there inside the car, otherwise it happens too slowly.

    car's radar sees stuff, signals to supercomputer 400 miles away in some air conditioned laboratory, car waits, supercomputer gets information, processes, signals to car, car waits, car gets supercomputer "thoughts", meanwhile, car has already crashed into boulder or into another car. i'm impressed by the guys who drive the mars rovers, that 40 minute delay they deal with has to be agonizing.

    this is something i worked on, so i know it will be scary when we have big tanks running around out there with a nuclear reactor for a power plant, a railgun for a cannon, and a supercomputer for a brain. what i wonder is whether, if they get that good, they will be able to handle incoming fire from helicopters (a tank's main weakness), since the supercomputer brain will be able to react so fast to incoming threats. i made a post last year about autocannon CIWS and how it can't save surface ships from good missiles, but laser CIWS might be able to, and maybe you could put that on a nuclear tank.

    also, as an aside, it's funny how the google PR machine had that initiative last year where they did all those press releases about how they were "inventing" the self-driving car, as if nobody else had been working on it for decades before google got into it. it looked to me like they were just funding sebastian thrun, who is the head professor of robotics at stanford and who won the DARPA grand challenge with volkswagen's support. and before him, there were lots of other guys too.

  9. The Western World has been terrified by the prospect of machine intelligence for some time now. The default plot on the original Star Trek was always about Kirk (the human) somehow triumphing over some machine or over Spock's machine-like logic. When they revived the franchise later they replaced Spock with an actual machine - Mr. Data.

    Similarly the whole Terminator series is based on the supposition that intelligent machines would be our implacable enemies.

    Clearly the SciFi writers are worried.

    For simplicity let's assume that humans last until the ice returns. It's been a little more than ten thousand years since the ice pulled back, and the Eemian interglacial lasted just about that long. So let's say that humans have only two thousand years left. What then?

    Well, if Moore's Law continues to be valid, then that IBM computer should double its abilities many times over. It seems impossible that a machine won't be smarter than any human within just a few years - certainly within two millennia.

    The real issue won't be total brains but rather the cost effectiveness of humans versus machines. For at least a century, raising a human will still be cheaper than building a purpose-built machine.

    After that it will be up to them.

    Albertosaurus

  10. I hear that AI geeks who get worked up over PR stunts get all the laydeez.

    It's just an IBM promo. No one normal cares about the maths or whatever that guy is complaining about.

    --Sighing in NYC.

  11. Just an interesting aside, my neighbor here in Pittsburgh has been working on Watson for years at Carnegie Mellon University.

  12. They say Trebek is hung like a Clydesdale...

  13. While the mechanics of Bayesian inference may be inhumanly computational - cut and dried, quick and stable - the results it produces are a good approximation of certain kinds of human intuition.

    Fed geo-coded crime and demographic statistics, for instance, a Bayesian-powered, anthropomorphically White robot could justifiably come to the popular conclusion that it's a bad idea to walk down a good many city streets. Fed any number of mainstream media articles condemning such "racist" conclusions, it could also conclude that it shouldn't mention them, and should instead focus on inane applications like televised game shows.

  14. This guy is just a hater. The fact that Watson can even figure out what the Jeopardy! question is asking represents a major leap forward in natural language processing.

    Being able to parse what the question is even asking you to spit out is a challenge for a lot of human contestants. I can definitely see a modified version of Watson being highly useful for domain-specific applications.

    Still, haters gon' hate.

  15. IBM was sporting enough to agree to a graphic at the bottom of the screen giving Watson's top three answers. Numbers two and three are often ludicrous. (I'm still trying to figure out how Frank Sinatra was a potential answer for a question about the Beatles song "Maxwell's Silver Hammer.") Even when not absurd, the backup answers often show failure to understand the clue.

    Sometimes a bad answer breaks through to number one. To a punning clue with "What is class?" as the intended answer - "Stylish elegance, or students who all graduated in the same year" - Watson answered, "What is chic?" He had Harry Potter at number one for a question with the intended answer of Voldemort, but Rutter rang in first.

  16. Chief Seattle:

    I haven't watched it or seen the ads. But I imagine people in general find it less impressive in the age of google/wiki where any drunk at a bar can whip out their phone and find any trivia they like.

    It would be a lot more impressive if Watson used speech recognition to understand the answers instead of just having them typed in.

  17. Is the point of Watson to prove that a computer can think LIKE a human, or that it can think AS WELL AS humans in a fairly demanding cognitive challenge?

    I admit I don't understand the impulse to minimize the achievement that Watson represents. I realize that this is a specialized application and that the Watson project may represent only incremental progress beyond what computers could do before IBM decided to take on Jeopardy! -- but it's still pretty damn impressive when you think about it.

  18. in fairness to the bot, isn't that "sorta" what humans already do??

    the bot is more straightforward about it.

  19. "Fed geo-coded crime and demographic statistics for instance, a Bayesian-powered anthropomorphically White robot could justifiably come to the popular conclusion that it's a bad idea to walk down a good many city streets."

    Remember, when you order a new GPS for your car, to specify White.
    Gilbert Pinfold.

  20. "you had to be at a near english major's level of command of english, to do all the language work. you better have a deep understanding of how to parse english into it's constituent parts of speech."

    Ha, ha, ha! English and linguistics aren't related anymore at all.

  21. You make these contests sound like the winner is likely to be the most impulsive of a group of nerds.

  22. I was looking at the reactions of various figures in the field, notably those of Chomsky and Minsky.

    http://www.framingbusiness.net/archives/1287

    Chomsky apparently believes that it's just a more powerful computer, calling it a bigger steamroller that can perform more calculations; it doesn't delve into the deeper and more significant aspects of AI that he says he is working on.

    Minsky, probably more significant in the field of AI, has only positive things to say, judging from the NOVA documentary.

  23. But, as Tim Krabbe has pointed out, computers still can't play chess. Maybe you AI guys could fix that first before falling flat on your faces in ANOTHER game?

  24. ". . .computers still can't play chess. Maybe you AI guys could fix that first before falling flat on your faces in ANOTHER game?

    "A computer beat me at chess once. But, it was no match for me at kick boxing."
    -- Emo Phillips

  25. Here's a question: How are the questions fed to Watson? Is it the case that Watson's getting the questions fed directly (in typed or keyed in form) just as Alex ends his verbal question, or is Watson actually parsing the verbal question itself? If it's the latter, then Watson is really a massive step forward. If it's the former, then that, as much as anything, is a huge disadvantage for the humans. Speech recognition is really a huge problem, although Watson is still a very impressive machine. I'd also like to know how the Bayesian posteriors (and of course, the priors) are formulated. This is by far the most interesting thing going on right now.

  26. @Decken:
    Watson is given the question as a .txt file. I'm not sure if it's sent the moment it appears onscreen or the moment Trebek finishes speaking.
    Speech recognition/perception is a whole different creature from syntax and language parsing. I think it's fair for Watson to receive his information as a .txt file - since the other two contestants have English as an L1, it isn't quite 'fair' to ask Watson to do twice as much work. Though if the other human contestants had learned English as adults, then you could talk about speech recognition. The fact is we don't know how people learn language, and Watson wasn't designed as that kind of machine (though there have been other computers that test this, with varying degrees of success; Rumelhart and McClelland did probably the most famous of this work, with a computer 'learning' the regular English past tense).

    Also, it doesn't surprise me Chomsky's not a fan of Watson. It doesn't fit into his minimalist fantasy lol.


Comments are moderated, at whim.