
This AI Can Predict Heart Attack Risk From an Eye Scan

The method detects subtle changes in affordable and widely available retinal scans

[Image: an optometrist giving an eye exam to a senior patient. Credit: Bill Oxford/Getty Images]

A heart attack might seem like a completely random event, but there are signs when a person might be at risk. What if you could pick up on these signs from a simple, relatively cheap scan of a person’s eye?

“You get it every time you go to [get] a pair of glasses,” said Alex Frangi, a professor of computing and medicine at the University of Leeds, in England. “So that would be really cool.”


Frangi was one of the authors of a new study that attempted to do just that. Subtle changes in the tiny blood vessels of the retina, the light-sensitive tissue at the back of the eye, can signal that a person is at risk of a heart attack, and AI can be trained to detect these differences. Though applications of AI in medicine have become increasingly common, detecting conditions including Alzheimer’s disease and cancer, many of these techniques rely on data from expensive medical imaging technology. Retinal scans, in contrast, are cheap and easily accessible. In the new study, the technique was also reasonably effective, giving an accurate prediction 70 to 80 percent of the time. The study is preliminary, and it’s possible the method may not work as well with real patients as it did in the study. Still, experts say, it’s a promising innovation that could offer an accessible way to gauge heart attack risk.

“To me, [this is a] really good example of using machine learning to take information that's already there, but to use it in a novel way to help identify risk in a different disease state,” said Dr. Deepak Bhatt, the executive director of interventional cardiovascular programs at Brigham and Women’s Hospital Heart & Vascular Center and a professor of medicine at Harvard Medical School, who was not involved in the study.

Researchers pulled data from the UK Biobank, a huge biomedical database of health information from half a million people in the United Kingdom. To train the system, they used data from 71,515 people who had undergone retinal scans, as well as a different set of 5,663 people who had both a retinal scan and a cardiac magnetic resonance (CMR) scan. The system comprised two networks. The first, called a multichannel variational autoencoder, was trained using the smaller data set, featuring both types of images—essentially enabling it to translate a retinal scan into a reconstructed cardiac image. Once the first network was trained and tested, it served as the template for a second network, which used retinal scans from the larger data set (along with patient information such as age, gender, body mass index, blood pressure, and smoking status) to predict heart attack risk.
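The overall shape of that two-stage design can be sketched in a few lines of NumPy. This is a deliberately simplified illustration, not the study’s method: plain linear least squares stands in for the multichannel variational autoencoder, a toy logistic score stands in for the risk network, and all data, dimensions, and variable names are hypothetical. Only the structure follows the study: stage 1 learns a retina-to-cardiac translation on paired scans, and stage 2 predicts risk from translated retinal features plus demographics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: paired retinal and cardiac features (stand-in for the 5,663
# subjects with both scan types). Fit a linear map from retina to cardiac.
retina_paired = rng.normal(size=(200, 8))
cardiac_paired = (retina_paired @ rng.normal(size=(8, 4))
                  + 0.1 * rng.normal(size=(200, 4)))
translator, *_ = np.linalg.lstsq(retina_paired, cardiac_paired, rcond=None)

# Stage 2: the larger retina-only cohort (stand-in for the 71,515 subjects).
# Translate retinal features into reconstructed "cardiac" features, then
# concatenate demographics (age, BMI, blood pressure, and so on).
retina_only = rng.normal(size=(1000, 8))
demographics = rng.normal(size=(1000, 5))
reconstructed_cardiac = retina_only @ translator
features = np.hstack([reconstructed_cardiac, demographics])

# Toy logistic risk score; in the study the weights would be learned.
weights = rng.normal(size=features.shape[1])
risk = 1.0 / (1.0 + np.exp(-(features @ weights)))
```

The point of the intermediate translation step, as the article notes later, is that the risk model then reasons over cardiac-like features rather than raw eye images.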

To train and test the system, the researchers divided each data set into 10 parts, using nine parts for training and the remaining one for testing. They found that the system correctly predicted a heart attack from the combination of the retinal scan and the demographic data between 70 and 80 percent of the time, depending on the type of measurement. For comparison, the researchers also made a separate calculation of heart attack risk using only demographic data, which is the type of data usually used to determine whether someone might be at risk. Predictions based on that data were about 10 percent less accurate.
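The 10-part split described above is standard 10-fold cross-validation. A minimal sketch of the splitting scheme, with hypothetical names and sizes:

```python
import random

def ten_fold_splits(n_samples, n_folds=10, seed=0):
    """Yield (train_indices, test_indices) pairs, rotating which
    fold is held out so every sample is tested exactly once."""
    indices = list(range(n_samples))
    random.Random(seed).shuffle(indices)
    fold_size = n_samples // n_folds
    for k in range(n_folds):
        test = indices[k * fold_size:(k + 1) * fold_size]
        train = indices[:k * fold_size] + indices[(k + 1) * fold_size:]
        yield train, test

# Each of the 10 rounds trains on 9 parts and tests on the held-out part.
folds = list(ten_fold_splits(100))
```

Averaging the per-fold accuracy over the 10 rounds gives the kind of 70-to-80-percent figure the study reports.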

The researchers also wanted to see if the system would be accurate if they used a completely different set of data taken from the National Eye Institute’s Age-Related Eye Disease Study. While they found that differences in image quality and demographic data didn’t impact the system much, they discovered that it was less effective at analyzing images from patients with an eye condition called macular degeneration.

“When you have a patient that has macular degeneration, your ability to predict correctly the heart attack outcome somehow seems to be masked,” said Frangi. With data from these patients included, the system was around 60 percent accurate, but its accuracy rose to around 70 percent when they were excluded from the study sample.

While the system is not necessarily more accurate than using cardiac imaging alone to make a prediction of heart attack risk, retinal scans are far cheaper and easier to access. Bhatt said the method also could help identify people at risk who might otherwise be missed because they don’t regularly see a primary care doctor. In some cases, an eye doctor can already pick up signs that a person might have high blood pressure or diabetes from a retinal exam.

“This to me is a logical extension of something that can already happen when you go to the eye doctor,” said Bhatt.

For this study, the researchers chose to generate a predicted cardiac scan from a retinal image and use that to calculate risk. Other studies have taken a more direct approach, using only retinal images to calculate risk. Frangi said the researchers used this method to try to ensure that the system made the correct associations between the eye and heart scans and would not pick up patterns from eye scans that didn’t correlate with any impact on the heart. But without a direct comparison between the two methods in the study, it’s hard to say which approach works better.

“I would have liked to have seen a baseline experiment—just using the retina to predict [heart attack],” said Dr. Ronald Summers, the senior investigator of the U.S. National Institutes of Health’s Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, who was not involved in the study. Summers has, however, had past and ongoing research collaborations with some of the authors.

Frangi said that he and his colleagues are working on improving the system, for instance by incorporating additional medical and even genetic information about a patient. But he also understands that such complex data might not always be available, something this study, with its use of multiple data sets containing varying amounts of patient information, takes into account.

“Sometimes what is needed to make a difference is not necessarily sophistication,” he said.
