AI Cardiologist Aces Its First Medical Exam

A neural network outperforms human cardiologists in a task involving heart scans

3 min read
Photo-illustration of an AI computer analyzing heart scans.
Photo-illustration: Rima Arnaout

Rima Arnaout wants to be clear: The AI she created to analyze heart scans, which easily outperformed human experts on its task, is not ready to replace cardiologists. 

It was a limited task, she notes, just the first step in what a cardiologist does when evaluating an echocardiogram (the image produced by bouncing sound waves off the heart). “The best technique is still inside the head of the trained echocardiographer,” she says.

But with experimental artificial intelligence systems making such rapid progress in the medical realm, particularly on tasks involving medical images, Arnaout does see the potential for big changes in her profession. And when her 10-year-old cousin expressed the desire to be a radiologist when she grows up, Arnaout had some clear advice: “I told her that she should learn to code,” she says with a laugh. 

Arnaout, an assistant professor and practicing cardiologist at UC San Francisco, is keeping up with the times through her research in computational medicine; she published this new study in the journal Digital Medicine.

In the study, Arnaout and her colleagues used deep learning, specifically something called a convolutional neural network, to train an AI system that can classify echocardiograms according to the type of view shown.
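
For readers curious what such a view classifier looks like in code, here is a minimal sketch in Python using PyTorch, assuming grayscale echocardiogram frames and 15 candidate views; the layer sizes and names are illustrative choices, not the architecture the researchers published.

```python
# A minimal sketch of a convolutional view classifier, assuming grayscale
# echocardiogram frames and 15 candidate views. Illustrative only; this is
# not the network published by Arnaout and her colleagues.
import torch
import torch.nn as nn

class EchoViewClassifier(nn.Module):
    def __init__(self, num_views: int = 15):
        super().__init__()
        # Convolution + pooling blocks extract spatial features from a
        # single-channel (grayscale) input frame.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse to one value per feature map
        )
        # A small fully connected head turns those features into view scores.
        self.classifier = nn.Linear(64, num_views)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# A dummy low-resolution grayscale frame -> scores over 15 candidate views.
scores = EchoViewClassifier()(torch.randn(1, 1, 60, 80))
print(scores.shape)  # torch.Size([1, 15])
```

In practice such a network would be trained with a cross-entropy loss on frames labeled by view; the point here is only to show the overall shape of the approach.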

This classification is a cardiologist’s first step when examining an image of the heart. Because the heart is such a complex structure—it’s an asymmetrical organ with four chambers, four valves, and blood constantly flowing in and out through several vessels—echocardiographers take videos from many different positions. When the doctors are ready to analyze those videos, they first have to figure out which view they’re looking at and which anatomical features they can see. 

Typically the cardiologist would look at a relatively high-resolution video of the echocardiogram, showing a shifting image captured as the imaging tool was moved around the patient’s chest. But the AI had a much harder task. It was given still images taken from video clips, and the images had been shrunk to just 60 by 80 pixels each. 
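
As a rough illustration of that input format, the snippet below shows how a single frame might be converted to a 60-by-80-pixel grayscale array; the file name is hypothetical, and this is a sketch rather than the study's actual preprocessing pipeline.

```python
# A sketch of the preprocessing described above: one still frame, converted
# to grayscale and shrunk to 60 x 80 pixels. The file name is hypothetical.
import numpy as np
from PIL import Image

frame = Image.open("echo_frame.png").convert("L")      # grayscale still image
small = frame.resize((80, 60), Image.BILINEAR)         # PIL takes (width, height)
pixels = np.asarray(small, dtype=np.float32) / 255.0   # scale to [0, 1]
print(pixels.shape)  # (60, 80), ready to hand to a view classifier
```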

Six echocardiogram images showing different views of the heart.

Image: A. Madani et al.
The AI had to sort heart scan images into categories based on which view of the heart they presented.

When both the AI and expert cardiologists were asked to sort these tiny black-and-white images into 15 categories of views, the AI achieved an accuracy of 92 percent. The humans got only 79 percent correct. “These were excellent echocardiographers,” Arnaout says, “but it’s a hard task. We’re not used to seeing the images shrunken down and out of context.”

The AI only performed this first step in the analysis of a heart image and the making of a diagnosis. A human cardiologist looks at many of these scans to examine more than 20 structures within the heart, then synthesizes that information to arrive at a conclusion.

Arnaout is now working on a new version of the technology that can take the next steps to identify different diseases and heart problems. “A human echocardiographer can look at any heart, no matter what the defect, and figure out what’s going on,” Arnaout says. “I’m interested in building a platform that can do that.”

Even if she accomplishes her goal, though, she doesn’t think human cardiologists will be put out of their jobs. “As cardiologists, we read the images and then go see the patient,” she says. “So we’re both reading images and practicing medicine. I don’t think that second piece will be taken over so quickly.”
