Does AI in Healthcare Need More Emotion?

Emotion AI, also known as affective computing, remains rare in healthcare applications. Should emotion be included as a parameter?

3 min read
Illustration of an eye with a heart over a medical symbol
Illustration: Getty Images

For the last 25 years, researchers have sought to teach computers to measure, understand, and react to human emotion. Technologies developed in this wave of emotion AI—also called affective computing or artificial emotional intelligence—have been applied in a variety of ways: capturing consumer reactions to advertisements; measuring student engagement in class; and detecting customer moods over the phone at call centers, among others.

Emotion AI has been less widely applied in healthcare, as a recent literature review shows. In a meticulous analysis of 156 papers on AI in pregnancy health, a team in Spain found only two papers in which emotions were used as inputs. Their review, published in the journal IEEE Access, concluded that expanded use of affective computing could help improve health outcomes for pregnant women and their infants.

“There is a lot of evidence that stress, anxiety, and negative feelings can make [outcomes] worse for pregnant women,” says study co-author Andreea Oprescu, a PhD student at the University of Seville. An app or wearable that takes these feelings into account could help detect and monitor certain conditions more effectively, she notes.

Other researchers are more hesitant to apply emotion AI to healthcare. “I caution people about trying to use a variable that is so difficult to understand and estimate in anything that has to do with medical diagnosis or interpretation,” says Aleix Martinez, director of the Computational Biology and Cognitive Science Lab at Ohio State University, which uses machine learning and computer vision to study emotion and intent.

One healthcare field where affective computing has already been applied is autism. Affective computing pioneer Rosalind Picard, a professor at MIT, began designing headsets and minicomputers that displayed emotional cues in the 1990s, and others have more recently developed wearables that rely on AI to help children with autism interpret the facial reactions of people around them.

In Spain, a team of doctors at the Virgen del Rocío University Hospital in Seville was interested in developing a mobile application or wearable for detecting pregnancy health conditions, and they reached out to computer scientist María del Carmen Romero-Ternero at the University of Seville to explore what was already in development in the field.

Using a recognized methodological framework for reviewing health research evidence, Romero-Ternero’s team identified 156 papers that used artificial intelligence in pregnancy studies. They found that over the last 12 years, AI has been used for detecting and monitoring fetal health, congenital birth defects, risk of preterm birth, and gestational diabetes. Most of the studies were in early stages and not yet supported by clinical testing, notes Oprescu. “For an algorithm to be used in a medical field, it needs to go through a lot of tests before physicians can actually use it,” she adds.

Because emotional stress, anxiety, and depression have been associated with complications during pregnancy and birth, the researchers also searched the literature for AI studies that include emotional parameters. They found only two: one about the relationship between pre-eclampsia and worry, and one concerning patient reactions to prenatal tests. “The majority [of the studies] didn’t look into the emotional aspects of the patient,” says Oprescu.
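To make concrete what using emotion as an input to such an algorithm could look like, here is a minimal sketch assuming a hypothetical self-reported anxiety score added alongside conventional clinical features in a simple risk classifier. The feature names, synthetic data, and model choice are invented for illustration and are not drawn from the reviewed studies.

```python
# Illustrative sketch only: an emotional parameter (a hypothetical self-reported
# anxiety score) added as one more input feature to a toy pregnancy-risk
# classifier. All features and data are synthetic and invented for this example.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Conventional clinical features (synthetic): maternal age, blood pressure, BMI.
age = rng.normal(30, 5, n)
systolic_bp = rng.normal(118, 12, n)
bmi = rng.normal(26, 4, n)

# Hypothetical emotional parameter: weekly self-reported anxiety on a 0-10 scale.
anxiety = rng.uniform(0, 10, n)

# Synthetic outcome loosely tied to blood pressure and anxiety, for illustration.
logit = 0.04 * (systolic_bp - 118) + 0.15 * (anxiety - 5) + 0.02 * (bmi - 26)
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([age, systolic_bp, bmi, anxiety])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
print("coefficient on anxiety feature:", model.coef_[0][-1])
```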

This is perhaps not surprising for the relatively young field. While affective computing has worked well in areas such as image searches for emotions and teaching robots to interact socially with humans, computers in general are not great at interpreting human emotions, says Martinez, who was not involved in the pregnancy study.

That is largely because we humans aren’t good at it ourselves. Study after study shows that people are pretty bad at knowing what they feel internally, much less at understanding how other people feel, let alone at teaching a computer to understand. “If I don’t know what I am experiencing, how am I going to tell you what I am experiencing to collect a good dataset that can be used to train a machine learning algorithm?” says Martinez. “There’s no hope for that.” Plus, understanding emotion needs to take variation, culture, and context into account, he notes. For example, is a person screaming because he is mad or because he just scored the winning goal?
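Martinez’s point about unreliable self-reports can be illustrated with a small label-noise experiment. The sketch below uses entirely synthetic data rather than any real emotion dataset: it randomly flips a fraction of the training labels to mimic inconsistent self-reports and shows how test accuracy degrades.

```python
# Illustrative sketch: how noisy self-reported labels degrade a classifier.
# The data and the two-class setup are synthetic, invented for this example.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for noise in (0.0, 0.2, 0.4):
    # Flip a fraction of the training labels to mimic unreliable self-reports.
    flip = rng.uniform(size=len(y_train)) < noise
    y_noisy = np.where(flip, 1 - y_train, y_train)
    acc = LogisticRegression(max_iter=1000).fit(X_train, y_noisy).score(X_test, y_test)
    print(f"label noise {noise:.0%}: test accuracy {acc:.2f}")
```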

While AI continues to be applied widely to healthcare, there is still a long way to go before emotional states can be automatically recorded and applied to health tools. But there is no doubt that in many areas of healthcare, including pregnancy, “emotion is important,” says Oprescu. “We think that should not be overlooked.”

This article appears in the January 2021 print issue as “Health Care Needs Empathic AI.”

Will AI Steal Submarines’ Stealth?

Better detection will make the oceans transparent—and perhaps doom mutually assured destruction

11 min read
The Virginia-class fast attack submarine USS Virginia cruises through the Mediterranean in 2010. Back then, it could effectively disappear just by diving.
Photo: U.S. Navy

Submarines are valued primarily for their ability to hide. The assurance that submarines would likely survive the first missile strike in a nuclear war and thus be able to respond by launching missiles in a second strike is key to the strategy of deterrence known as mutually assured destruction. Any new technology that might render the oceans effectively transparent, making it trivial to spot lurking submarines, could thus undermine the peace of the world. For nearly a century, naval engineers have striven to develop ever-faster, ever-quieter submarines. But they have worked just as hard at advancing a wide array of radar, sonar, and other technologies designed to detect, target, and eliminate enemy submarines.

The balance seemed to tip with the emergence of nuclear-powered submarines in the early 1960s. In a 2015 study for the Center for Strategic and Budgetary Assessments, Bryan Clark, a naval specialist now at the Hudson Institute, noted that the ability of these boats to remain submerged for long periods made them “nearly impossible to find with radar and active sonar.” But even these stealthy submarines produce subtle, very-low-frequency noises that can be picked up from far away by networks of acoustic hydrophone arrays mounted on the seafloor.
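As a rough illustration of how such a narrowband tonal signature might be pulled out of background noise, the sketch below runs a Welch power-spectral-density estimate on synthetic hydrophone data. The sample rate, tone frequency, and detection threshold are invented for demonstration and do not reflect any real sonar system.

```python
# Illustrative sketch: flagging a faint low-frequency tonal in synthetic
# hydrophone data with a Welch power-spectral-density estimate.
# Frequencies, sample rate, and threshold are invented for demonstration.
import numpy as np
from scipy.signal import welch

fs = 1000                      # sample rate, Hz (synthetic)
t = np.arange(0, 60, 1 / fs)   # 60 seconds of data

# Synthetic signal: a weak 50 Hz machinery tonal buried in broadband noise.
rng = np.random.default_rng(0)
signal = 0.1 * np.sin(2 * np.pi * 50 * t) + rng.normal(0, 1, t.size)

f, psd = welch(signal, fs=fs, nperseg=8192)

# Flag narrowband bins that rise well above the median background level.
background = np.median(psd)
peaks = f[psd > 5 * background]
print("candidate tonal frequencies, Hz (may include adjacent bins):", peaks)
```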
