This AI Can Predict Heart Attack Risk From an Eye Scan

The method detects subtle changes in affordable and widely available retinal scans

4 min read
Optometrist giving eye exam to senior patient
Bill Oxford/Getty Images

A heart attack might seem like a completely random event, but there are often warning signs that a person is at risk. What if you could pick up on those signs from a simple, relatively cheap scan of a person’s eye?

“You get it every time you go to [get] a pair of glasses,” said Alex Frangi, a professor of computing and medicine at the University of Leeds, in England. “So that would be really cool.”

Frangi was one of the authors of a new study that attempted to do just that. Subtle changes in the tiny blood vessels of the retina, the light-sensing tissue at the back of the eye, can signal that a person is at risk of a heart attack, and AI can be trained to detect those changes. Though applications of AI in medicine have become increasingly common, detecting conditions including Alzheimer’s disease and cancer, many of these techniques analyze data from expensive medical imaging technology. Retinal scans, in contrast, are cheap and easily accessible. In the new study, the technique was also reasonably effective, giving an accurate prediction 70 to 80 percent of the time. The study is preliminary, and the method may not work as well with real patients as it did in the study. Still, experts say, it’s a promising innovation that could offer an accessible way to gauge heart attack risk.

“To me, [this is a] really good example of using machine learning to take information that's already there, but to use it in a novel way to help identify risk in a different disease state,” said Dr. Deepak Bhatt, the executive director of interventional cardiovascular programs at Brigham and Women’s Hospital Heart & Vascular Center and a professor of medicine at Harvard Medical School, who was not involved in the study.

Researchers pulled data from the UK Biobank, a huge biomedical database of health information from half a million people in the United Kingdom. To train the system, they used data from 71,515 people who had undergone retinal scans, as well as a different set of 5,663 people who had both a retinal scan and a cardiac magnetic resonance (CMR) scan. The system comprised two networks. The first, called a multichannel variational autoencoder, was trained using the smaller data set, featuring both types of images—essentially enabling it to translate a retinal scan into a reconstructed cardiac image. Once the first network was trained and tested, it was used as the template for using retinal scans in the larger data set (along with patient information like age, gender, body mass index, blood pressure, and smoking status) to predict heart attack risk.
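The two-stage design described above can be sketched in code. The following is a minimal, heavily simplified illustration, not the study's actual model: the dimensions, weights, and layer shapes are all hypothetical stand-ins, and the real multichannel variational autoencoder involves probabilistic latent variables and learned image encoders. The sketch only shows the flow of data: a retinal feature vector is encoded into a shared latent space, a cardiac representation is reconstructed from it, and the latent code is combined with demographic variables to produce a risk score.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- not taken from the paper.
RETINA_DIM, CARDIAC_DIM, LATENT_DIM, DEMO_DIM = 128, 64, 16, 5

# Stage 1: a simplified stand-in for the multichannel variational
# autoencoder. Retinal features are encoded into a shared latent space,
# and a decoder reconstructs a cardiac representation from that latent
# code, so a retinal scan alone can yield a "translated" cardiac image.
W_enc_retina = rng.normal(size=(RETINA_DIM, LATENT_DIM)) * 0.1
W_dec_cardiac = rng.normal(size=(LATENT_DIM, CARDIAC_DIM)) * 0.1

def retina_to_cardiac(retina_features):
    """Encode retinal features, then decode a cardiac reconstruction."""
    z = np.tanh(retina_features @ W_enc_retina)   # shared latent code
    return z, z @ W_dec_cardiac                   # cardiac reconstruction

# Stage 2: risk prediction from the latent code plus demographic
# variables (the study used age, gender, BMI, blood pressure, and
# smoking status).
W_risk = rng.normal(size=(LATENT_DIM + DEMO_DIM,)) * 0.1

def predict_risk(retina_features, demographics):
    z, _ = retina_to_cardiac(retina_features)
    logit = np.concatenate([z, demographics]) @ W_risk
    return 1.0 / (1.0 + np.exp(-logit))           # probability in (0, 1)

retina = rng.normal(size=RETINA_DIM)
demo = rng.normal(size=DEMO_DIM)
risk = predict_risk(retina, demo)
```

In the study itself, the first network was trained on the 5,663 paired retina-and-CMR scans before the second stage was fit on the larger retina-only data set; the sketch above simply uses random weights to show the shapes involved.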

To train and test the system, the researchers divided each data set into 10 parts, using nine parts to train it and the remaining one to test it, a standard approach known as 10-fold cross-validation. They found that the system correctly predicted a heart attack from the combination of the retinal scan and the demographic data between 70 and 80 percent of the time, depending on the type of measurement. For comparison, the researchers also made a separate calculation of heart attack risk using only demographic data, which is the type of data usually used to determine if someone might be at risk. Predictions based on that data were about 10 percent less accurate.
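The splitting scheme the researchers used, nine parts for training and one held out for testing, can be sketched as below. This is a generic illustration of the procedure, not code from the study; the sample size is the larger retinal-scan data set's 71,515 records, and the shuffling seed is arbitrary.

```python
import numpy as np

def ten_fold_splits(n_samples, n_folds=10, seed=0):
    """Shuffle indices and yield (train, test) index arrays:
    nine folds for training, the held-out fold for testing."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, n_folds)
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        yield train, test

# Across the ten rounds, every sample is tested exactly once.
n = 71515  # size of the larger retinal-scan data set in the study
counts = np.zeros(n, dtype=int)
for train, test in ten_fold_splits(n):
    assert len(train) + len(test) == n
    counts[test] += 1
```

Because each record appears in exactly one test fold, the reported 70-to-80-percent figure reflects predictions on data the model never saw during training.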

The researchers also wanted to see if the system would be accurate if they used a completely different set of data taken from the National Eye Institute’s Age-Related Eye Disease Study. While they found that differences in image quality and demographic data didn’t impact the system much, they discovered that it was less effective at analyzing images from patients with an eye condition called macular degeneration.

“When you have a patient that has macular degeneration, your ability to predict correctly the heart attack outcome somehow seems to be masked,” said Frangi. With data from these patients included, the system was around 60 percent accurate, but its accuracy rose to about 70 percent when they were excluded from the study sample.

While the system is not necessarily more accurate than using cardiac imaging alone to make a prediction of heart attack risk, retinal scans are far cheaper and easier to access. Bhatt said the method also could help identify people at risk who might otherwise be missed because they don’t regularly see a primary care doctor. In some cases, an eye doctor can already pick up signs that a person might have high blood pressure or diabetes from a retinal exam.

“This to me is a logical extension of something that can already happen when you go to the eye doctor,” said Bhatt.

For this study, the researchers chose to generate a predicted cardiac scan from a retinal image and use that to calculate risk. Other studies have taken a more direct approach, using only retinal images to calculate risk. Frangi said his team chose its method to help ensure that the system learned genuine associations between the eye and heart scans, rather than picking up patterns in eye scans that have no bearing on the heart. Without a direct comparison between the two approaches in the study, though, it’s hard to say which works better.

“I would have liked to have seen a baseline experiment—just using the retina to predict [heart attack],” said Dr. Ronald Summers, the senior investigator of the U.S. National Institutes of Health’s Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, who was not involved in the study. Summers has, however, had past and ongoing research collaborations with some of the authors.

Frangi said that he and his colleagues are working on improving the system, for instance by incorporating additional medical and even genetic information about a patient. But he also understands that such rich data might not always be available, something this study, with its use of multiple data sets containing varying amounts of patient information, takes into account.

“Sometimes what is needed to make a difference is not necessarily sophistication,” he said.
