Medical Imaging AI Software Is Vulnerable to Covert Attacks

An attacker could manipulate medical software to rig a clinical trial or justify unnecessary procedures


Artificial intelligence systems meant to analyze medical images are vulnerable to attacks designed to fool them in ways that are imperceptible to humans, a new study warns.

There may be enormous incentives to carry out such attacks for healthcare fraud and other nefarious ends, the researchers say.

"The most striking thing to me as a researcher crafting these attacks was probably how easy they were to carry out," says study lead author Samuel Finlayson, a computer scientist and biomedical informatician at Harvard Medical School in Boston. "This was in practice a relatively simple process that could easily be automated."

"In addition to how easy they are to carry out, I was also surprised by how relatively unknown these weaknesses are to the medical community," says study co-author Andrew Beam, a computer scientist and biomedical informatician at Harvard Medical School in Boston. "There is a lot of coverage about how accurate deep learning can be in medical imaging, but there is a dearth of understanding about potential weaknesses and security issues."

AI systems known as deep learning neural networks are increasingly helping analyze medical images. For example, last week, a group of German scientists and their colleagues showed these systems are better than experienced dermatologists at detecting skin cancer.

On 11 April, the U.S. Food and Drug Administration announced the approval of the first AI system that can be used for medical diagnosis without the input of a human clinician. Given the cost of healthcare in the United States, AI could help make medical imaging cheaper by taking humans out of the loop, Finlayson, Beam, and their colleagues suggest in the study.

"We as a society stand to receive enormous benefit from the deliberate application of machine learning in healthcare," Finlayson says. "However, as we integrate these incredible tools into the healthcare system, we need to be acutely aware of their potential downsides as well."

Characteristic results of adversarial manipulation. In these examples, the percentages listed represent the probability, according to the model, that each image shows evidence of disease. Green tags indicate that the model was correct in its analysis, and red tags indicate the model was incorrect. Images: Harvard Medical School/MIT/arXiv

The researchers examined how difficult it is to fool medical image analysis software. Computer scientists regularly test deep learning systems with so-called "adversarial examples," inputs crafted to make the AIs misclassify them, in order to probe the limitations of current deep learning methods.

The scientists note there may be major incentives to attack medical image analysis software. The healthcare economy is huge: the United States alone spent roughly $3.3 trillion, or 17.8 percent of GDP, on healthcare in 2016. Medical fraud is already routine; one 2014 study estimated that it cost as much as $272 billion in 2011.

In the new study, the researchers tested deep learning systems with adversarial examples on three popular medical imaging tasks: classifying diabetic retinopathy from retinal images, pneumothorax from chest X-rays, and melanoma from skin photos. In such attacks, pixels within an image are modified in a way that looks, at most, like faint noise to a human, but can trick these systems into classifying the picture incorrectly.

The scientists note their attacks could make deep learning systems misclassify images up to 100 percent of the time, and that the modified images were indistinguishable from unmodified ones to the human eye. They add that such attacks could work on any image and could even be incorporated directly into the image-capture process.
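To give a concrete sense of how such a perturbation is built, the sketch below implements the fast gradient sign method (FGSM), one of the simplest and best-known adversarial attacks. It illustrates the general technique rather than the exact attack used in the study; the PyTorch classifier, batched image tensor in the [0, 1] range, and epsilon value are all assumptions made for the example.

```python
# Minimal FGSM sketch: nudge each pixel slightly in the direction that most
# increases the model's loss, so a tiny, near-invisible change flips the
# prediction. Illustrative only; not the study's exact attack.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return a copy of `image` perturbed by at most `epsilon` per pixel."""
    image = image.clone().detach().requires_grad_(True)

    # Forward pass: the model's prediction on the clean image.
    logits = model(image)
    loss = F.cross_entropy(logits, true_label)

    # Backward pass: how should each pixel change to raise the loss?
    loss.backward()

    # Step each pixel by +/- epsilon along the sign of its gradient,
    # then keep pixel values in the valid [0, 1] range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical usage (model and data loading omitted):
# adv_xray = fgsm_perturb(pneumothorax_model, xray_batch, labels, epsilon=0.004)
# With epsilon this small, adv_xray looks identical to xray_batch to a human,
# yet the model's output can change completely.
```

The key point the code makes visible is that the attacker needs nothing exotic: one forward pass, one backward pass, and a single line of pixel arithmetic.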

"One criticism that we have received is that if someone has access to the underlying data, then they could commit many different kinds of fraud, not just using adversarial attacks," Beam says. "This is true, but we feel that adversarial attacks are particularly pernicious and subtle, because it would be very difficult to detect that the attack has occurred."


There are many possible motives for attacking deep learning systems to commit medical fraud, the researchers say. With eye images, they note, insurers might want to reduce the rate of surgeries they have to pay for. With chest X-rays, companies running clinical trials might want to engineer the results they want, given that one 2017 study estimated the median revenue for an individual cancer drug was as high as $1.67 billion four years after approval. With skin photos, the researchers note that dermatology in the United States operates under a fee-for-service model, in which a physician or practice is paid for each procedure performed, and some dermatologists perform large numbers of unnecessary procedures to boost revenue.

Such attacks might also be carried out to sabotage the test results of patients so they do not get the treatment they need. "However, medical fraud is much more pervasive than medical sabotage, and we expect this will likely remain the case even as technology advances," Finlayson says. "Deep learning may be a new technology, but the humans who use it, for good or ill, are driven by the same motivations we've always been, and greed is sadly a fairly universal vice."

Finlayson notes that "computer scientists are working hard to build machine learning models that aren't susceptible to adversarial attacks in the first place. This is a promising area of research, but has yet to deliver a golden bullet—we have still yet to see a model that is both highly accurate and highly resistant to attacks."

Another way to defend against these kinds of attacks is to shore up medical infrastructure. "We can work on building medical IT systems that carefully track medical images and ensure that they aren't being manipulated," Finlayson says. "Even basic measures to implement these sorts of infrastructural defenses could do a lot to prevent adversarial attacks, but I don't think there is a simple golden bullet on this end either, since there are new types of adversarial attacks being discovered every day."
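As a rough illustration of what such an infrastructural defense might look like, the sketch below records a cryptographic fingerprint of an image at capture time and refuses to analyze it if the pixels have changed since then. The file names and workflow are hypothetical; a real deployment would tie this into the hospital's imaging and audit systems.

```python
# Sketch of an integrity check: hash each image when it is captured and
# verify the hash before the AI system scores it. Any pixel-level tampering,
# including an adversarial perturbation, changes the digest.
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of an image file."""
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            sha.update(chunk)
    return sha.hexdigest()

# At capture time, store the digest alongside the image record.
original_digest = fingerprint("retina_scan_001.png")

# Before diagnosis, recompute and compare.
if fingerprint("retina_scan_001.png") != original_digest:
    raise ValueError("Image modified since capture; refusing to score it.")
```

A check like this does not make the model itself any more robust, but it narrows the window in which an attacker can quietly swap in a manipulated image.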

"The greatest tragedy in my mind would be if someone took the existence of adversarial examples as proof that machine learning shouldn't be developed or used in healthcare," Finlayson says. "All of us on this paper are extremely bullish on deep learning for healthcare. We just think that it's important to be aware of how these systems could be abused and to safeguard against this abuse in advance."
