The AI Arms Race to Combat Fake Images Is Even—For Now

Detectors can spot fakes, but generative AI is becoming more subtle



This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

The recent and dramatic increase in AI-generated images is blurring the line between what's real and what's fake, underscoring the need for better tools to distinguish between the two.

In a recent study, researchers in Italy analyzed a suite of AI models designed to identify fake images, finding that current methods are fairly effective. But the results, published in the May-June issue of IEEE Security & Privacy, also point to an AI arms race to keep pace with evolving generative AI tools.

Luisa Verdoliva is a professor at the University of Naples Federico II in Italy who was involved in the study. She notes that while AI-generated images may be great in terms of entertainment, they can be harmful when used in more serious contexts.

“For example, a compromising image can be created for a political figure and used to discredit him or her in an election campaign,” Verdoliva explains. “In cases like this, it becomes essential to be able to determine whether the image was acquired by a camera or was generated by the computer.”


There are two types of clues that hint at whether an image is generated by AI. The first are “high-level” artifacts, or defects, in the images that are obvious to the human eye, such as odd shadows or asymmetries in a face. But as Verdoliva notes, these blatant errors will become less obvious as image generators improve over time.

Deeper within the layers of an image are artifacts that aren't obvious to the human eye and emerge only through statistical analysis of the image's data. Each of these “low-level” artifacts is unique to the generator that created the image.
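
One widely used way to surface these low-level artifacts is to strip away the visible scene and study the noise left behind. Here is a minimal sketch of that idea, assuming grayscale images stored as NumPy arrays; the exact denoising filter is an illustrative choice rather than the method used in the study.

```python
# A minimal sketch of low-level artifact analysis, assuming single-channel
# (grayscale) images as NumPy arrays. The median filter is an illustrative
# choice of denoiser, not the specific method used in the study.
import numpy as np
from scipy.ndimage import median_filter

def noise_residual(image: np.ndarray) -> np.ndarray:
    """Suppress scene content, keeping the high-frequency noise pattern."""
    denoised = median_filter(image, size=3)
    return image.astype(np.float64) - denoised

def residual_spectrum(image: np.ndarray) -> np.ndarray:
    """Log-magnitude Fourier spectrum of the noise residual. Many generators
    leave periodic patterns that appear here as bright off-center peaks."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(noise_residual(image))))
    return np.log1p(spectrum)

# Averaging spectra over many images from one generator sharpens its
# "fingerprint" while the (uncorrelated) scene content averages out:
# fingerprint = np.mean([residual_spectrum(img) for img in images], axis=0)
```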

The concept is akin to firearm forensics, in which a fired bullet exhibits unique scratches from the barrel of the gun that shot it. In this way, a bullet can be traced back to the gun that fired it.

Similarly, each fake image has a distinct data pattern based on the AI generator that created it. Ironically, the best way to pick up on these signatures is by creating new AI models trained to identify them and link them back to a specific image generator.
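
To make the analogy concrete, attribution can be sketched as a nearest-fingerprint match: average the residuals of many images from each known generator to form its fingerprint, then assign a new image to whichever fingerprint its own residual correlates with best. The code below is a hypothetical stand-in for the trained models the researchers actually evaluated, not a reimplementation of them.

```python
# A hypothetical, simplified attribution scheme: correlate an image's noise
# residual against the average "fingerprint" of each known generator. All
# residuals are assumed to be same-shape NumPy arrays (see the sketch above).
import numpy as np

def normalize(x: np.ndarray) -> np.ndarray:
    """Zero-mean, unit-norm version of x, so comparisons are fair."""
    x = x - x.mean()
    return x / (np.linalg.norm(x) + 1e-12)

def build_fingerprints(residuals: dict[str, list[np.ndarray]]) -> dict[str, np.ndarray]:
    """Average many residuals per generator so its signature dominates."""
    return {name: normalize(np.mean(r, axis=0)) for name, r in residuals.items()}

def attribute(residual: np.ndarray, fingerprints: dict[str, np.ndarray]) -> str:
    """Name of the generator whose fingerprint best matches this residual."""
    r = normalize(residual)
    return max(fingerprints, key=lambda name: float(np.sum(r * fingerprints[name])))
```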

In their study, Verdoliva and her colleagues tested 13 AI models—capable of detecting fake images and/or identifying their generator—against thousands of images known to be real or fake. Unsurprisingly, the models were generally very effective at identifying image defects and generators they were trained to find. For example, one model trained on a dataset of real and synthetic images was able to identify images created by the generator DALL-E with 87 percent accuracy, and images generated by Midjourney with 91 percent accuracy.

More surprisingly, the detection models could still flag some AI-generated images that they weren’t specifically trained to find. This is because most current AI generators utilize very similar approaches to image creation, resulting in somewhat similar defects across their generated images.

The challenge, Verdoliva notes, is to detect previously unseen defects from new and emerging AI generators—the new forensic “fingerprints” that aren’t already on our radars.

“At the end of the day, there is no approach that works well all the time. After all, this is a competition between two players. The detectors get better and better, but the generators also get better, and both learn from their failures,” says Verdoliva.

To tackle this problem moving forward, Verdoliva emphasizes the need to use a variety of models for detecting fake images. This will increase the chances that unusual defects from novel generators will be picked up.
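
In code, that ensemble strategy can be as simple as polling every available detector and flagging an image if any one of them is confident enough. The detector functions here are hypothetical placeholders, not the models evaluated in the study.

```python
# A toy ensemble, with hypothetical detectors: each maps an image to an
# estimated probability that it is synthetic. Flagging on the maximum score
# means a novel generator only has to trip one detector to be caught.
from typing import Callable, Sequence
import numpy as np

def ensemble_flag(image: np.ndarray,
                  detectors: Sequence[Callable[[np.ndarray], float]],
                  threshold: float = 0.9) -> bool:
    scores = [detector(image) for detector in detectors]
    return max(scores) >= threshold
```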

But above all else, she emphasizes, human discretion is key. It's important that people learn not to trust multimedia from unverified sources and to instead seek out information from reputable ones.

“This is the first and most important defense,” she says, noting that no AI will be able to protect us from all attacks. “In the meantime, the scientific community will continue to provide tools and methods to compete in this arms race.”
