Watson AI Crushes Humans in Second Round of Jeopardy

IBM’s Watson computer dominates the game—but makes some embarrassing mistakes


UPDATE: See who prevailed—man or machine?—in the third and final round!

IBM’s Watson Jeopardy computer and its human opponents, Ken Jennings and Brad Rutter.

What a difference a day makes in the life of an artificial intelligence.

After an unimpressive debut on Monday, Watson, the IBM Jeopardy-playing computer, crushed its carbon lifeform opponents last night.

The game started with Monday’s scores carried over: Brad Rutter tied with Watson for first at $5,000, and Ken Jennings in last with $2,000.

Ken was first to pick a category, but after host Alex Trebek read the clue, Watson buzzed faster. From then on, the computer just kept on going, buzzing and answering correctly seven times in a row, amassing $21,035. Ken and Brad stood there, hopeless. The IBMers in the audience grinned and clapped.

Which brings me to my first question about this whole thing: How does Watson ring the buzzer? Was something implemented to make the buzzing fairer to the human competitors, who are not electrically wired to the game hardware? Update: Here’s how Watson receives each clue and buzzes in: It gets the clue as a text file at the moment the clue appears on the stage screen, so in principle at the same time the clue “hits Brad Rutter’s and Ken Jennings’ retinas.” To buzz in, Watson waits for a signal indicating that the “buzzer enable” light has turned on, and only then can it activate a robotic finger to press the button. Though some may disagree, IBM claims this design lets it compete fairly with human contestants.
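To make that concrete, here’s a minimal sketch of the buzz-in flow in Python. Everything here is my illustration, not IBM’s code: the class, the function names, and the confidence cutoff are all hypothetical stand-ins for the process IBM describes.

```python
import time
from typing import List, Optional, Tuple

CONFIDENCE_THRESHOLD = 0.5  # hypothetical cutoff; IBM hasn't published the real value


def rank_candidate_answers(clue_text: str) -> List[Tuple[str, float]]:
    """Stand-in for Watson's question-answering pipeline:
    returns (answer, confidence) pairs, best first."""
    return [("What is Toronto?", 0.14), ("What is Chicago?", 0.11)]


class StageHardware:
    """Stand-in for the game interface: the clue arrives as text, and a
    signal fires when the 'buzzer enable' light turns on."""

    def wait_for_buzzer_enable(self) -> None:
        time.sleep(0.1)  # pretend the enable light just came on

    def press_buzzer(self) -> None:
        print("robotic finger presses the buzzer")


def play_clue(clue_text: str, game: StageHardware) -> Optional[str]:
    # 1. Watson gets the clue as a text file the moment it hits the stage screen.
    best_answer, confidence = rank_candidate_answers(clue_text)[0]
    # 2. Wait for the buzzer-enable signal, just as the humans wait for the light.
    game.wait_for_buzzer_enable()
    # 3. Buzz in only if confident enough; the press is a mechanical actuation.
    if confidence >= CONFIDENCE_THRESHOLD:
        game.press_buzzer()
        return best_answer
    return None
```

The interesting part is step 2: Watson can’t anticipate the enable light any better than a human can, but once the signal fires, a machine’s reaction time is very hard to beat, which is exactly what the fairness debate is about.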

Anyway, after the seventh correct answer, the category was “The Art of the Steal” and an interesting clue came up. Watch what happened:

Clearly, Watson didn’t quite understand the clue, which called for an art period, not an artist, as the answer. Curiously, the computer had the correct answer listed among its candidates, but with a low probability. The humans had no problem understanding the question, but they got the art period wrong.

Watson’s confusion didn’t last, though. Soon, the machine was again dominating the game, this time getting six straight correct answers and expanding its lead. Ken and Brad would occasionally get an answer right, but it was a Watson show.

The highlight of the night came at the end, during the Final Jeopardy round, when contestants see only the category, wager any amount up to their total score, and then see the final clue. The category was “U.S. cities,” and Watson had $36,681, Rutter $5,400, and Jennings $2,400. Watch:

Toronto????? Ooohhh. You can hear the IBMers gasping, terrified that this humiliating mistake is going to cost Watson everything. But nope. The smarty-pants (or smarty-racks) machine didn’t go all in, its wagering-strategy algorithm deciding to bet just $947. (Here’s how IBM explains the flub.)
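The arithmetic behind that caution is worth a quick sketch. Going into Final Jeopardy, Watson had what players call a “lock game”: more than double Rutter’s score. The bound below is the standard textbook calculation, my illustration rather than IBM’s actual model (which, per the linked post, runs a fuller risk analysis):

```python
def max_safe_wager(leader: int, runner_up: int) -> int:
    """Largest bet that still guarantees the win in a 'lock game':
    even if the leader answers wrong and the runner-up doubles up,
    the leader finishes at least $1 ahead."""
    return leader - 2 * runner_up - 1


# Scores going into Final Jeopardy that night:
print(max_safe_wager(36_681, 5_400))  # 25880 -- anything up to this is risk-free
# Watson stayed far below even that cap, wagering just $947.
```

So even after answering “Toronto” in a U.S. cities category, Watson couldn’t lose the night on that clue; the wagering algorithm had already made the outcome safe.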

So the night ended with Ken at $4,800, Brad at $10,400, and Watson at $35,734. The LCD-faced machine, with its HAL 9000 voice, vastly outperformed the best brains at this game. A massacre.

Which brings me to my second question: What is Watson good for other than playing Jeopardy? Will it help advance AI for real or is this just an entertaining challenge, much like the Deep Blue vs. Kasparov matches?

IBM, well aware of the PR opportunity, made sure to include a video segment in which its execs and scientists brag about Watson’s potential “to transform many industries.” Their comments, however, were vague, things like “Life is about questions and answers” or “This changes the paradigm in which we work with computers,” and the most concrete example they gave was using Watson to help clinicians diagnose a hard case involving lots of data.

The whole thing looks like a giant commercial for IBM, but hey, I’m not complaining; I was very entertained and feel like I want to learn more about how Watson works. And I’m looking forward to tonight’s round. Do Watson’s mistakes mean there’s hope for Ken and Brad? What do you think will happen tonight?
