Most experiments with artificial intelligence in medicine thus far have worked on the diagnostic side. AI systems have used computer vision to examine images like X-rays or pathology slides, and they have combed through data in electronic medical records to spot subtle patterns that humans can miss.
Just last week, IEEE Spectrum reported on hospitals that are trying out AI systems that identify patients with the first signs of sepsis, a life-threatening condition where the body responds to infection with widespread inflammation, which can lead to organ failure. Sepsis is a leading cause of death in hospitals, as well as one of the most expensive ailments to treat.
But the technology that goes by the name AI Clinician, described today in a paper in Nature Medicine, doesn’t diagnose—it makes decisions. It takes all the information about a patient with sepsis and recommends a course of treatment.
“It’s not mimicking the perceptual ability of the doctor, where the doctor sees certain symptoms and says the patient is going into septic shock,” says Aldo Faisal, an associate professor of bioengineering and computing at Imperial College London and one of the paper’s authors. “It’s really cognition that is captured here. We’re not just making the AI see like a doctor, we’re making it act like a doctor.”
The researchers didn’t try out their system on real patients; the technology isn’t ready for the clinic yet. Instead, they trained and tested AI Clinician on medical record databases from intensive care units (ICUs) in the United States. They first used 17,000 cases to teach the model about sepsis treatment, and then had it issue recommendations for 79,000 cases.
Overall, the treatments that the AI recommended were more likely to keep patients alive than those administered by the human doctors.
Anthony Gordon, an Imperial College professor of critical-care medicine and a coauthor of the study, explains that an international effort to reduce deaths from sepsis has resulted in guidelines for treatment, which hospitals try to follow. “The international guidelines tell you, on average, what’s a good thing to do,” he says.
Part of the treatment is to give patients intravenous fluids and drugs called vasopressors that constrict the blood vessels and increase blood pressure: These actions ensure that blood is reaching the organs. However, there’s considerable debate about how much to give, and when.
The researchers trained AI Clinician to issue recommendations on fluids and vasopressors. Gordon says these basic recommendations are just a start, and that the team has already been working on a model that includes more treatment factors.
Even such a rudimentary AI system could be a big help, Gordon says: “As a senior doctor, I have lots of experience, but there are times that I have uncertainty.” The AI won’t replace doctors at the bedside, he says, but it can be a useful tool. “If I have a tool that helps me in my decision-making and can advise me, I see that as a bonus,” he says.
This initial version made treatment decisions at 4-hour intervals, but an AI caretaker could make decisions and change treatment parameters much more frequently. In the ICU, data streams off patients’ bedside monitors every second—and while no human doctor could make sense of all that information, the AI could.
Theoretically, an AI could control electronic pumps that deliver IV fluids and medications. “It would be the most personal doctor you can imagine, relentlessly watching over you,” Faisal says.
In reality, there would certainly be a “human in the loop” to oversee the AI in some way. (For more on this topic, see the recent Spectrum feature article “AI in the ICU.”)
AI Clinician was trained with a machine learning technique called reinforcement learning, which essentially comes down to trial and error. The trainers establish a goal—such as winning a game, achieving a high score, or keeping a sepsis patient alive—and link it to a reward. (In this case, the AI was programmed to maximize credits, and it earned credits for each patient who stayed alive and lost credits for each who died.) The AI tries out a sequence of actions at random, and if it achieves its goal, it gets the payoff. Over many repetitions, it learns which combinations of actions are most likely to result in the reward.
This self-directed learning was a good match for ICU data, says Faisal, as there’s an abundance of data about the patient’s state and every action taken by the medical staff. “We knew what condition the patient was in, what the medical team did, what was the outcome,” he says. “So we could tell the algorithm: From all this data, try to find the sequence of actions that will steer a patient from a bad state to a good state.”
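The trial-and-error loop and the credit scheme described above can be sketched with a toy tabular Q-learning example. Everything in this sketch is invented for illustration: the three simplified patient states, the three dose "actions," and the transition probabilities are all assumptions, and the actual AI Clinician learns from recorded ICU data over a far larger state space.

```python
import random

# Toy sketch of the reward scheme described in the article: tabular
# Q-learning over invented patient states and treatment actions.
STATES = ["critical", "unstable", "stable"]
ACTIONS = ["low_fluid", "high_fluid", "vasopressor"]
SURVIVED, DIED = 100, -100  # credits earned or lost at the end of a stay

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

# Q-table: expected long-term credit for each (state, action) pair
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def step(state, action):
    """Invented transition model: certain actions tend to improve certain states."""
    helpful = {("critical", "vasopressor"),
               ("unstable", "low_fluid"),
               ("stable", "low_fluid")}
    p_improve = 0.8 if (state, action) in helpful else 0.3
    idx = STATES.index(state)
    if random.random() < p_improve:
        idx = min(idx + 1, len(STATES) - 1)  # patient improves
    else:
        idx = max(idx - 1, 0)                # patient deteriorates
    return STATES[idx]

def run_episode(steps=8):
    """One simulated stay: a sequence of treatment decisions at fixed intervals."""
    state = "critical"
    trajectory = []
    for _ in range(steps):
        if random.random() < EPSILON:                      # explore
            action = random.choice(ACTIONS)
        else:                                              # exploit what was learned
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = step(state, action)
        trajectory.append((state, action, next_state))
        state = next_state
    # The reward arrives only at the end, from the outcome of the whole stay
    reward = SURVIVED if state == "stable" else DIED
    for i, (s, a, s2) in enumerate(trajectory):
        terminal = (i == len(trajectory) - 1)
        target = reward if terminal else GAMMA * max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])

random.seed(0)
for _ in range(5000):
    run_episode()

# The learned policy: the best action per state according to the Q-table
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(policy)
```

Because the credit is awarded only at the end of a simulated stay, the update loop propagates it backward through the earlier decisions, which is how a delayed reward like patient survival can shape every step of a treatment policy.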
After AI Clinician experimented with the 17,000 cases in its training data set, it was tested on 79,000 cases it had never seen before. Overall, the AI recommended lower doses of IV fluids and higher doses of vasopressors than the patients actually received in those test cases. Patients who received doses similar to those recommended by AI Clinician had the lowest mortality.
The researchers plan to test their system in a real hospital—though for safety’s sake, the first trials won’t affect patient care. Initially the AI Clinician will get real-time data from the hospital’s electronic medical record system and will issue recommendations, but the doctors won’t see them or act on them. The researchers will observe patient outcomes, and determine whether patients fare better when the doctors independently decide on the same course of treatment that the AI recommends.
If the trials prove AI Clinician’s worth, the researchers will work toward commercial software that can be put in place in hospitals across the world.
“With sepsis, we’ve struggled to find new treatments,” says Gordon. “But optimizing our current therapies can have a big effect. Even if we can change mortality by just a few percentage points, we can save tens of thousands of lives.”