

Hackers Compete To Confound Facial Recognition

Def Con challenge organizers hope to spur better security in the industry

3 min read
a photo of Brad Pitt on the left and an AI image of Brad Pitt on the right

The real Brad Pitt (L) versus an AI Brad Pitt (R).

Left: Matt Sayles/AP; Right: DEFCON

Facial-recognition technology is becoming increasingly prevalent in our lives, but it's also highly vulnerable to attack. That’s why a group of researchers is appealing to hackers to take part in a new competition designed to expose facial recognition's flaws and raise awareness of the potential risks.

The machine-learning security evasion competition has been a regular fixture of the AI Village at the Def Con hacking conference since 2019. Early iterations challenged researchers to sneak around the defenses of machine-learning-based malware detection systems. In 2021, organizers added a new track designed to uncover flaws in computer-vision models that use visual cues in order to detect phishing websites.

But this year the competition will also pit hackers against facial-recognition systems, challenging them to modify photographs of celebrities so that a machine-learning model misidentifies them. Zoltan Balazs, head of the vulnerability research lab at software company Cujo AI and one of the competition's organizers, says the addition was a response to the rapid expansion in the use of facial recognition and the seemingly lax approach to security among many vendors.

“We really hope that one of the conclusions of our competition will be that very special care should be taken into consideration whenever people are implementing facial-recognition systems,” he says. “Because they are not perfect. And the consequences can be bad.”

The facial-recognition challenge has been designed by AI security company Adversa AI, which knows exactly how vulnerable these kinds of machine-learning models are. The company regularly carries out “red teaming” exercises—in which it is hired by other firms to test their machine-learning systems for security flaws.

There are a growing number of tools available online that hackers can use to carry out these kinds of attacks, Adversa CTO Eugene Neelou says, and there are already real-world examples where people have exploited weaknesses in facial-recognition systems. Scammers recently managed to trick facial-recognition software used by identity-verification company ID.me into verifying fake driving licenses as part of a US $2.5 million unemployment fraud scheme, and in China criminals managed to launder $77 million by using manipulated photos to dupe software used by local tax authorities.

“It’s very easy and fast to do for attackers with enough motivation,” says Neelou. “Our engagements show that some of the best facial-recognition vendors demonstrate little to no security against adversarial input modifications.”

The organizers hope that their competition will highlight the current concerns around facial recognition. The winner is also required to publish their techniques, which should help the industry close potential gaps. The contest opened on 12 August and will run until 23 September. Entrants are given a set of 10 headshots of well-known celebrities and online access to a facial-recognition model that has been trained to recognize them.

Attackers are instructed to subtly alter the images so that the model misidentifies them. The goal is to trick the system into identifying each celebrity as each of the other celebrities, which means creating nine modified images for each headshot. These then need to be submitted to the competition organizers, who will assess the effectiveness of the deception.

The most important criterion for judging each image is the confidence with which the model accepts the new identity. This is judged by the probability score the model gives the image, which ranges from 0 to 1. Images will also be rated on the “stealthiness” of the modifications—in other words, how difficult they are to spot. This will be judged based on the level of structural similarity between the original and doctored image but will only be used as a tiebreaker if teams are level on confidence scores.
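The two-tier judging rule described above can be sketched as a simple ranking function. This is an illustrative sketch only, not the contest's actual scoring code; the field names and the 0-to-1 similarity value are assumptions for the example (the contest measures structural similarity between original and doctored images).

```python
# Hypothetical sketch of the contest's ranking rule as described:
# rank by the model's confidence in the target identity, and use the
# structural-similarity ("stealthiness") score only to break ties.

def rank_submissions(submissions):
    """Sort submissions: confidence first, similarity as the tiebreaker."""
    # Higher is better for both criteria, so sort descending on the pair.
    return sorted(
        submissions,
        key=lambda s: (s["confidence"], s["similarity"]),
        reverse=True,
    )

entries = [
    {"team": "A", "confidence": 0.92, "similarity": 0.80},
    {"team": "B", "confidence": 0.92, "similarity": 0.95},  # ties A on confidence, wins on stealth
    {"team": "C", "confidence": 0.88, "similarity": 0.99},  # stealthiest, but lower confidence
]
ranked = rank_submissions(entries)
print([e["team"] for e in ranked])  # -> ['B', 'A', 'C']
```

Note that team C's near-perfect stealth score never comes into play: similarity only matters between teams whose confidence scores are level.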

How entrants edit the images is up to them. While it is possible to modify them by hand, Neelou says that attacks on a machine-learning system typically use automated processes. In most cases, hackers won’t know anything about the model they are targeting and so will submit hundreds or thousands of images and use feedback from the model to iterate on their alterations. If they can glean some information about the model, though, their job becomes even easier, as they can tailor their approach to its particularities.
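The iterate-on-feedback loop Neelou describes is the core of a black-box attack: the attacker sees only the model's output score, so they make small perturbations and keep any change that raises the score for the target identity. The toy "model" and the hill-climbing strategy below are assumptions for illustration; real attacks operate on pixel arrays and use more sophisticated query strategies.

```python
import random

random.seed(0)

# Stand-in for the model's internal representation of the target face.
TARGET = [0.7, 0.2, 0.9]

def model_score(image):
    """Toy black-box model: a 0-1 'probability' for the target identity.

    The attacker can call this but cannot see inside it."""
    dist = sum((a - b) ** 2 for a, b in zip(image, TARGET))
    return 1.0 / (1.0 + dist)

def hill_climb(image, steps=500, eps=0.05):
    """Query-only attack: random perturbations, keeping those that help."""
    best = list(image)
    best_score = model_score(best)
    for _ in range(steps):
        candidate = [x + random.uniform(-eps, eps) for x in best]
        score = model_score(candidate)
        if score > best_score:  # keep only perturbations that raise the score
            best, best_score = candidate, score
    return best, best_score

start = [0.0, 0.0, 0.0]
adv, final_score = hill_climb(start)
print(f"{model_score(start):.3f} -> {final_score:.3f}")
```

Each iteration here costs one model query, which is why the rate-limiting defense mentioned later in the article is effective: thousands of queries are usually needed before the score climbs meaningfully.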

“There are various strategies to target different internal characteristics of neural networks,” says Neelou. “One attack technique may be the best against one network and the worst against another. That’s why there is no one-fits-all attack, but given enough time every AI system can be hacked.”

There is plenty that facial-recognition developers can do to protect their models though, says Neelou. Techniques like adversarial retraining, in which models are retrained to spot doctored images, attack detection, and even things as simple as limiting the number of times people can feed data into a model can help.
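The simplest of those defenses, limiting how often anyone can feed data into the model, can be sketched as a per-client query budget. This is a minimal illustration, not any vendor's actual implementation; the class name and window parameters are invented for the example.

```python
import time

class RateLimiter:
    """Cap scoring queries per client within a sliding time window.

    This starves the iterative, feedback-driven attacks described
    earlier of the thousands of queries they typically need."""

    def __init__(self, max_queries, window_seconds):
        self.max_queries = max_queries
        self.window = window_seconds
        self.history = {}  # client_id -> timestamps of recent queries

    def allow(self, client_id, now=None):
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        recent = [t for t in self.history.get(client_id, []) if now - t < self.window]
        if len(recent) >= self.max_queries:
            self.history[client_id] = recent
            return False  # over budget: reject the query
        recent.append(now)
        self.history[client_id] = recent
        return True

limiter = RateLimiter(max_queries=3, window_seconds=60)
results = [limiter.allow("attacker", now=0.0) for _ in range(5)]
print(results)  # -> [True, True, True, False, False]
```

Rate limiting doesn't make the model itself more robust—that requires techniques like the adversarial retraining mentioned above—but it raises the cost of any attack that depends on high query volume.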

But what the industry really needs is a fundamental shift in mindset, so that security is taken into account earlier in the development process rather than being tacked on at the end, says Neelou. “The main reason AI is vulnerable to attacks is that AI is never built with security in mind,” he says. “As with many technologies, security is an afterthought.”


Will AI Steal Submarines’ Stealth?

Better detection will make the oceans transparent—and perhaps doom mutually assured destruction

11 min read
A photo of a submarine in the water under a partly cloudy sky.

The Virginia-class fast attack submarine USS Virginia cruises through the Mediterranean in 2010. Back then, it could effectively disappear just by diving.

U.S. Navy

Submarines are valued primarily for their ability to hide. The assurance that submarines would likely survive the first missile strike in a nuclear war and thus be able to respond by launching missiles in a second strike is key to the strategy of deterrence known as mutually assured destruction. Any new technology that might render the oceans effectively transparent, making it trivial to spot lurking submarines, could thus undermine the peace of the world. For nearly a century, naval engineers have striven to develop ever-faster, ever-quieter submarines. But they have worked just as hard at advancing a wide array of radar, sonar, and other technologies designed to detect, target, and eliminate enemy submarines.

The balance seemed to turn with the emergence of nuclear-powered submarines in the early 1960s. In a 2015 study for the Center for Strategic and Budgetary Assessments, Bryan Clark, a naval specialist now at the Hudson Institute, noted that the ability of these boats to remain submerged for long periods of time made them “nearly impossible to find with radar and active sonar.” But even these stealthy submarines produce subtle, very-low-frequency noises that can be picked up from far away by networks of acoustic hydrophone arrays mounted to the seafloor.
