IBM’s Watson Goes to Med School
This AI program mastered “Jeopardy!” Next up, oncology
In the final round of a televised game show that pitted top players against IBM’s AI program Watson, a humbled human jotted down an aside to his written response: “I for one welcome our new computer overlords.”
Now even doctors are speaking that way. “I’d like to shake Watson’s hand,” says Mark Kris, an oncologist at Memorial Sloan-Kettering Cancer Center, in New York City. He talks excitedly about the day in late 2013 when Watson—now his student—will be fully trained and ready to assist physicians at the cancer center with their diagnoses and treatment plans.
It will be quite a career move for Watson, but one that IBM scientists envisioned from the get-go. They hope health care will be the killer app for Watson, an AI with phenomenal skills in natural-language processing. Watson first demonstrated its powers on “Jeopardy!,” a game that employs puns and wordplay in its trivia clues. For each clue, Watson had to make sense of the messy English language, parse complicated phrasing, and search through up to 200 million pages of text.
Soon, after the equivalent of medical school, Watson will be able to examine a patient’s history and test results, search the medical literature, and make a recommendation for the patient’s treatment. To make the task manageable, the computer program’s studies have so far been limited to oncology: Watson is studying lung and breast cancer now and will start on several other cancer types soon.
Kris, a lung cancer specialist, is working with the IBM team on the first iteration. The project is an experiment on the frontiers of medicine and technology, Kris admits, but he thinks it will result in a practical tool. He notes that for many cancer cases today, it’s not obvious which chemotherapy drug will be most effective. “Sometimes it’s very clear‑cut; there’s a genetic change, so you give the drug targeted to that genetic change. But for the vast majority of patients today, there isn’t an obvious biology-linked treatment,” Kris says. “We have many medicines that can help, and making the best choice can make a big difference for a patient.”
That’s where Watson can come in. It can comb through thousands of similar cancer cases and examine patient outcomes, review the most recent findings from hundreds of medical journals, and make a recommendation. The goal, Kris explains, is to replicate the decision-making process of a Sloan-Kettering oncologist. “Let’s say you have an oncologist in Smalltown, U.S.A. Suddenly he has access to every medical journal and the expertise of the top specialists at Sloan-Kettering,” Kris says. Watson will never replace a human physician, Kris stresses, but it can provide advice and a top-notch second opinion. “It’s a great tool for the doc,” he says, “and for the patient it’s a great comfort.”
In the game show face-off, Watson was positioned between the two most successful “Jeopardy!” players of all time. On its animated avatar, the whizzing beams of lights around the IBM “smarter planet” logo usually glowed green, indicating that Watson was on a winning streak.
The program parsed the game’s complex clues with ease. For example, a clue in the category “Literary Characters APB” read: “Wanted for a 12-year crime spree of eating King Hrothgar’s warriors; officer Beowulf has been assigned the case.” In its choppy, computerized voice, Watson replied correctly: “Who is Grendel?” A panel at the bottom of the screen showed TV viewers Watson’s top three search results along with its level of confidence for each. When it named the monster that devoured the king’s men in the epic poem Beowulf, Watson was 97 percent confident.
The IBM Research team knew that Watson couldn’t win at “Jeopardy!” by virtue of a know-it-all database alone. The program also had to learn how to interpret a complicated clue. As a child does, it had to learn how to understand. But IBM didn’t have time to explain the world to a computer program, so it used sophisticated machine learning techniques to get Watson up to speed. The program was given tens of thousands of “Jeopardy!” clue-and-response pairs so it could establish its own rules for what constituted a correct response. Then it was tested with new clues. When it got an answer right, Watson took note of which algorithms had produced the correct search path and answer.
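In spirit, that training process resembles ordinary supervised learning: score each candidate answer with several search algorithms, then learn how much to trust each algorithm from labeled examples. The sketch below is an illustration of that idea only, not Watson's actual pipeline; the feature values, training pairs, and the use of simple logistic regression are all assumptions made for the example.

```python
import math

# Illustrative training data: each candidate answer is a vector of scores
# from three hypothetical search algorithms, labeled 1 if it was the
# correct response to the clue and 0 otherwise.
TRAIN = [
    ([0.9, 0.8, 0.7], 1),  # algorithms agree strongly -> correct answer
    ([0.2, 0.1, 0.3], 0),
    ([0.8, 0.9, 0.6], 1),
    ([0.3, 0.4, 0.2], 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=500, lr=0.5):
    """Learn a weight per algorithm by logistic regression (gradient descent)."""
    w = [0.0] * len(data[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = y - p
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def confidence(w, b, x):
    """Confidence that a candidate with feature scores x is correct."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

w, b = train(TRAIN)

# Rank new candidate answers by learned confidence, as the on-screen
# panel did during the broadcast.
candidates = {"Grendel": [0.85, 0.9, 0.8], "Hrothgar": [0.4, 0.3, 0.5]}
ranked = sorted(candidates, key=lambda c: confidence(w, b, candidates[c]),
                reverse=True)
print(ranked[0])  # the top-ranked candidate
```

The key point the example captures is that the system never hand-codes what a "correct" response looks like; it learns which evidence sources to weight from the labeled pairs themselves.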
Martin Kohn, chief medical scientist of care delivery systems at IBM Research, says a similar process is now under way at Sloan-Kettering. “It will be given cases and treatment guidelines, and it will give its suggestions,” Kohn says. Just as in “Jeopardy!,” Watson will come up with a ranked list of possible solutions and display its confidence level for each. “Then one of the oncologists will say, ‘Yes, what Watson suggested was reasonable’ or ‘Watson was off the wall,’ ” Kohn says. In that way, Watson will learn, and it will establish its credibility.
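The feedback loop Kohn describes can be thought of as online learning: each expert verdict nudges the weights on the evidence sources behind a suggestion. This is a minimal sketch of that loop under assumed names and numbers, not IBM's actual method.

```python
# Hypothetical evidence sources behind a treatment suggestion, each with a
# trust weight that the feedback loop adjusts over time.
weights = {"journal_match": 1.0, "case_similarity": 1.0, "guideline_match": 1.0}
LEARNING_RATE = 0.1

def score(features):
    """Overall confidence-style score for a suggestion's evidence."""
    return sum(weights[k] * v for k, v in features.items())

def feedback(features, reasonable):
    """Reward or penalize the evidence sources that produced a suggestion.

    reasonable=True  -> the oncologist judged the suggestion reasonable
    reasonable=False -> the suggestion was "off the wall"
    """
    sign = 1 if reasonable else -1
    for k, v in features.items():
        weights[k] += sign * LEARNING_RATE * v

# One round of feedback: the oncologist approves a suggestion, so the
# evidence sources behind it gain weight and similar future suggestions
# score higher.
suggestion = {"journal_match": 0.8, "case_similarity": 0.6, "guideline_match": 0.9}
before = score(suggestion)
feedback(suggestion, reasonable=True)
after = score(suggestion)
```

Credibility, in this framing, is just the accumulated effect of many such verdicts on the weights.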
Right now, the Sloan-Kettering team is giving Watson cases that have all the information needed to devise a treatment plan, says Ari Caroline, Sloan-Kettering’s director of strategic initiatives and quantitative analysis, who has been overseeing Watson’s machine learning process at the cancer center. A next step is to give the program incomplete cases and get Watson to notice what’s missing.
“Watson could actually prompt the user,” says Caroline. “It could say, ‘I can give you an answer with 30 percent confidence right now, which is not very useful. In order to give a more confident answer, you would have to provide the molecular pathology information around these particular tests.’ ”
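Caroline's scenario amounts to gating the answer on confidence and naming what is missing. The toy sketch below illustrates one way such gating could work; the field names, the threshold, and the idea that confidence scales with case completeness are all assumptions made for the example.

```python
# Hypothetical fields a complete oncology case would supply.
REQUIRED_FIELDS = ["history", "imaging", "molecular_pathology"]
CONFIDENCE_THRESHOLD = 0.7  # below this, prompt instead of answering

def assess(case):
    """Return a recommendation, or a prompt listing the missing fields."""
    missing = [f for f in REQUIRED_FIELDS if f not in case]
    conf = (len(REQUIRED_FIELDS) - len(missing)) / len(REQUIRED_FIELDS)
    if conf < CONFIDENCE_THRESHOLD:
        return {"status": "need_more_data", "confidence": conf, "missing": missing}
    return {"status": "recommendation", "confidence": conf}

# An incomplete case: no molecular pathology results yet, so the system
# asks for them rather than giving a low-confidence answer.
partial_case = {"history": "...", "imaging": "..."}
result = assess(partial_case)
print(result["status"], result["missing"])
```

The useful behavior is the refusal: rather than a 30-percent-confidence guess, the user gets a concrete list of what to supply next.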
During Watson’s triumphant performance on “Jeopardy!,” David Ferrucci, IBM’s principal investigator for that project, spoke about the company’s motives for investing heavily in Watson. “It is irresistible to pursue this,” said Ferrucci, “because as we pursue understanding natural language, we pursue the heart of what we think of when we think of human intelligence.”
Natural-language processing may be the gateway to a broad spectrum of applications, and IBM is already thinking of other business opportunities for Watson, like financial analysis. Yet the specialization required by “Dr.” Watson is telling: IBM isn’t aiming to make it a general practitioner, nor even an all-around oncologist, but rather an expert on a few types of cancers. It seems that each field of endeavor Watson tackles brings its own specialized language and elaborate new problems.
No one knows that better than Caroline, who has worked on the details of Watson’s medical training. “This is nothing close to a plug and play,” says Caroline. “There’s no such thing as a natural-language-processing tool that you just plug in and it automatically interprets everything.”
But specialized though it is, Watson represents a big step in the direction of general intelligence, compared to IBM’s earlier foray into world-beating, game-playing AIs. Its chess-playing system Deep Blue, which defeated then-world champion Garry Kasparov in 1997, couldn’t do anything else, not even play checkers.
It remains to be seen whether Watson can build on its success, adding new realms of practical expertise, one after the other. The project shows the fascinating potential of machines that can speak our language, but it’s also a reminder that we don’t have to bow down to our computer overlords just yet.
This article originally appeared in print as "Watson Goes to Med School."