UPDATE: See who prevailed—man or machine?—in the third and final round!
IBM’s Watson Jeopardy computer and its human opponents, Ken Jennings and Brad Rutter.
What a difference a day makes in the life of an artificial intelligence.
The game picked up from Monday’s scores: Brad Rutter tied with Watson for first at $5,000, and Ken Jennings in last with $2,000.
Ken was first to pick a category, but after host Alex Trebek read the clue, Watson buzzed in faster. From then on, the computer just kept going, buzzing in and answering correctly seven times in a row and amassing $21,035. Ken and Brad stood there, helpless. The IBMers in the audience grinned and clapped.
Which brings me to my first question about this whole thing: How does Watson ring the buzzer? Was something implemented to make the buzzing fairer to the human competitors, who are not electrically wired to the game hardware? Update: Here’s how Watson receives the clue and rings the buzzer: It gets the clue as a text file at the moment the clue appears on the stage screen, so in principle at the same time the clue “hits Brad Rutter’s and Ken Jennings’ retinas.” To buzz in, Watson waits for a signal indicating that the “buzzer enable” light has turned on, and only then can it activate a robotic finger to press the physical buzzer. Though some may disagree, IBM claims this is a fair design for competing with human contestants.
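For the programming-minded, the buzz-in protocol described above boils down to three steps: receive the clue as text, wait for the enable signal, then physically actuate the buzzer. Here is a minimal sketch of that logic; every class and method name is hypothetical, since IBM hasn’t published the actual interface:

```python
import time


class BuzzerInterface:
    """Hypothetical stand-in for the game hardware: an enable light
    plus a physical buzzer that records when it was pressed."""

    def __init__(self):
        self.enabled = False
        self.pressed_at = None

    def enable(self):
        # Corresponds to the "buzzer enable" light turning on.
        self.enabled = True

    def press(self):
        # Only a press after the enable signal counts.
        if self.enabled and self.pressed_at is None:
            self.pressed_at = time.monotonic()


def watson_turn(clue_text, hardware, answer_ready):
    """Illustrative turn logic: Watson gets the clue as a text file,
    and buzzes only if it has a confident answer AND the enable
    signal has arrived."""
    if answer_ready(clue_text) and hardware.enabled:
        hardware.press()  # robotic finger presses the buzzer
        return True
    return False


hw = BuzzerInterface()
hw.enable()
buzzed = watson_turn(
    "Maurice LaMarche found his inner Orson Welles to voice this...",
    hw,
    answer_ready=lambda clue: True,  # pretend confidence is high
)
```

The key point the sketch captures is that Watson cannot buzz early: the `press` call does nothing until the enable signal has fired, which is the basis of IBM’s fairness claim.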
Anyway, after the seventh correct answer, the category was “The Art of the Steal” and an interesting clue came up. Watch what happened:
Clearly, Watson didn’t quite understand the clue, which called for an art period, not an artist, as the answer. Curiously, the computer had the correct answer listed among its choices, but with a low probability. The humans had no problem understanding the clue, but they got the art period wrong.
Watson’s confusion didn’t last, though. Soon, the machine was again dominating the game, this time getting six straight correct answers and expanding its lead. Ken and Brad would occasionally get an answer right, but it was a Watson show.
The highlight of the night came at the end, during the Final Jeopardy round, in which contestants wager an amount up to their total score before seeing the final clue. The category was “U.S. Cities,” and Watson had $36,681, Rutter $5,400, and Jennings $2,400. Watch:
Toronto????? Ooohhh. You can hear the IBMers gasping, terrified that this humiliating mistake is going to cost Watson everything. But nope. The smarty-pants (or smarty-racks) machine didn’t go all in, its wagering-strategy algorithm deciding to bet just $947. (Here’s how IBM explains the flub.)
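IBM’s actual wagering algorithm is more sophisticated than this (see their own explanation of the flub), but the basic arithmetic of a “safe” Final Jeopardy bet for a runaway leader is easy to sketch: as long as the leader’s bet stays below its lead over twice the second-place score, even a wrong answer can’t drop it out of first. This toy calculation, with hypothetical function names, ignores the two-game combined-score format of this particular match:

```python
def max_safe_wager(leader: int, second: int) -> int:
    """Largest bet the leader can lose and still finish strictly
    ahead, assuming second place bets everything and answers
    correctly (doubling up). Illustrative only."""
    worst_case_second = 2 * second
    return max(0, leader - worst_case_second - 1)


# Scores going into Final Jeopardy, per the post:
safe = max_safe_wager(36_681, 5_400)
print(safe)  # any bet up to this amount keeps Watson in first
```

By this arithmetic Watson could have bet up to $25,880 and still been guaranteed first even after missing; its $947 wager was far more conservative than that bound requires.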
So the night ended with Ken at $4,800, Brad at $10,400, and Watson at $35,734. The LCD-faced machine, with its HAL 9000 voice, vastly outperformed the best brains at this game. A massacre.
Which brings me to my second question: What is Watson good for other than playing Jeopardy? Will it help advance AI for real or is this just an entertaining challenge, much like the Deep Blue vs. Kasparov matches?
IBM, well aware of the PR opportunity, made sure to include a video segment in which its execs and scientists tout Watson’s potential “to transform many industries.” Their comments, however, were vague—things like “Life is about questions and answers” and “This changes the paradigm in which we work with computers”—and the most concrete example they gave was using Watson to help clinicians diagnose a hard case involving lots of data.
The whole thing looks like a giant commercial for IBM, but hey, I’m not complaining; I was thoroughly entertained and want to learn more about how Watson works. And I’m looking forward to tonight’s round. Do Watson’s mistakes mean there’s hope for Ken and Brad? What do you think will happen tonight?