Experts Bet on First Deepfakes Political Scandal

Researchers wager on a possible Deepfake video scandal during the 2018 U.S. midterm elections

Illustration of a politician’s face being altered to show a different expression. Illustration: iStockphoto

A quiet wager has taken hold among researchers who study artificial intelligence techniques and the societal impacts of such technologies. They’re betting on whether, by the end of 2018, someone will create a so-called Deepfake video about a political candidate that receives more than 2 million views before being debunked.

The actual stakes in the bet are fairly small: Manhattan cocktails as a reward for the “yes” camp and tropical tiki drinks for the “no” camp. But the technology behind the bet’s premise could reshape governments and undermine societal trust in the idea of shared facts. It all comes down to when the technology may mature enough to digitally create fake but believable videos of politicians and celebrities saying or doing things that never actually happened in real life.

“We talk about these technologies and we see the fact you can simulate Obama’s voice or simulate a Trump video, and it seems so obvious that there would be a lot of financial interest in seeing the technology used,” says Tim Hwang, director of the Ethics and Governance of AI Initiative at the Harvard Berkman Klein Center and the MIT Media Lab. “But one thing in my mind is, why haven’t we seen it yet?”

Deepfake technology enables a source actor (bottom left) to fully control a faked video portrait (right). Images: SIGGRAPH

The Deepfake technology in question first gained notoriety in December 2017, when a person going by the pseudonym “DeepFakes” showed how deep learning—a popular AI technique based on neural networks—could digitally stitch celebrities’ faces onto those of actors in pornographic videos. Since that time, social networks such as Twitter and Reddit have attempted to clamp down on a slew of amateur-created Deepfake videos, most of which are pornographic.

Such technology relies on generative adversarial networks (GANs). One network, the generator, learns the patterns in images or videos in order to re-create, say, a particular celebrity’s face as its output. The second network acts as the discriminating viewer, trying to figure out whether a given image or video frame is authentic or a synthetic fake. That second network’s feedback then reinforces and strengthens the believability of the first network’s output.
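
To make that two-network loop concrete, here is a minimal GAN training step sketched in PyTorch. It is illustrative only: the tiny fully connected networks, the flattened 64-by-64 input, and the hyperparameters are assumptions chosen for brevity, not the architecture of any actual Deepfake tool (real face-swap pipelines typically pair convolutional networks or autoencoders with face-alignment steps).

```python
import torch
import torch.nn as nn

latent_dim = 100  # size of the random noise vector fed to the generator

# Network 1, the generator: learns to turn noise into images that mimic
# the patterns in the training set (e.g., a particular face).
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64 * 3), nn.Tanh(),  # pixels scaled to [-1, 1]
)

# Network 2, the discriminator: the "discriminating viewer" that scores
# each image as authentic (near 1) or synthetic (near 0).
discriminator = nn.Sequential(
    nn.Linear(64 * 64 * 3, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def train_step(real_images):
    """real_images: (batch, 64*64*3) tensor of flattened frames in [-1, 1]."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Train the discriminator to separate real frames from generated ones.
    fakes = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real_images), real_labels) +
              loss_fn(discriminator(fakes), fake_labels))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator on the discriminator's feedback, which is what
    # "reinforces and strengthens" the believability of its output.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, latent_dim))),
                     real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```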

Experts have been investigating and refining the deep-learning techniques behind such Deepfake videos. Beyond just face swapping, researchers have shown how to digitally mimic both the appearance and voice of individuals in order to create the equivalent of digital puppets. Stanford University researchers recently unveiled some of the most realistic-looking examples to date in their “Deep Video Portraits” paper that will be presented at the SIGGRAPH 2018 annual conference on computer graphics in Vancouver from August 12 to 16.

Such technological developments have been on Hwang’s radar ever since he served as global public policy lead for artificial intelligence and machine learning at Google. More recently, he started the Deepfakes wager discussion on Twitter from a skeptical stance, wondering why there have not been “high-profile examples” of such videos making a significant impact. He has since taken a “no” position in the bet.

The bet has attracted more than a dozen experts from both technology and social science backgrounds, with Hwang acting as the bookie. Many involved in the wager seem to fall into the “no” camp, thinking Deepfake videos will not make a huge splash during the campaign season for the 2018 U.S. midterm elections. But most agree that the technology could become more problematic by the next U.S. presidential election, in 2020. And a common refrain is that it’s a question of when, not if, Deepfake videos become more widespread and potentially problematic.

“My assumption is that by 2020, this stuff has spread a bit farther and become cheap enough so that we’ve enlarged the pool of jokers sufficiently so that somebody does this,” says Jack Clark, strategy and communications director at OpenAI, a nonprofit AI research company, who has taken a “no” position on the bet. “The technology is still kind of fiddly and a bit difficult.”

The threat may go beyond malicious hackers attempting to subvert democratic elections with Deepfake videos. The bored Internet masses looking for entertainment could cause even more problems if Deepfake technology becomes as commonly available as today’s meme-generating apps. Government cybersecurity departments dedicated to tracking and countering foreign hackers may be helpless in the face of online crowds flooding social media with Deepfake videos.

“What I think of is, what happens when creating amusing fakes just becomes another pastime like creating memes today?” Clark says. “I think people who are just kind of having fun are, in aggregate, more dangerous than individual bad actors.”

The threat is real enough that the U.S. Defense Advanced Research Projects Agency (DARPA) has funded a Media Forensics (MediFor) program aimed at finding ways to automatically screen for Deepfake videos and similarly deceptive examples of digital media.

But Clark also wants to see lawmakers and political leaders get a head start in talking about Deepfake technologies so that they don’t get caught unawares by a political scandal. His concern is that surprised lawmakers could enact “suboptimal laws” without having taken the time to think everything through. “I want that conversation to happen in a reasonable, slow-burning way,” he says.

Given that even experts remain uncertain about the scope of the Deepfakes problem, any anticipatory regulation by lawmakers may risk tackling the wrong issues, says Rebecca Crootof, executive director of the Information Society Project and a research scholar and lecturer in law at Yale Law School. Instead, she suggests that tort law may be the better mechanism for dealing with Deepfake technology on a “tailored, case-by-case basis” in courtrooms.

Assuming that everyone involved was American, the release of an influential Deepfake video could trigger a defamation lawsuit from the “puppet” victim, Crootof explains. The Deepfake creator would likely try to fall back upon a First Amendment defense based on the right to political speech. But in her view, the weight of legal arguments would likely go in favor of the “puppet.”

“That would spark an interesting argument, but the defamation side has the stronger claim—the government is allowed to regulate defamatory speech, if it is knowingly false,” Crootof says. “With a Deepfake, the representation was false, was known to be false by the maker, was made with the intent to deceive, and the ‘puppet’ likely suffered damages as a result.”

The debate could also spill over into international law if a Deepfake video targeting a U.S. political candidate happens to originate from another country. A similar discussion is already taking place among international law attorneys about the Russian state-sponsored hack of the Democratic National Committee. But Crootof doubted a Deepfake video would count as either a prohibited intervention or violation of state sovereignty under existing international regulations.

As one of the earliest respondents to Hwang’s wager, Crootof took a “yes” position on the Deepfakes bet to help make it a sporting bet. But regardless of the exact timeline, her view is that a major Deepfake political scandal is just a question of time.

“There is very little disagreement that it’s going to happen and it will happen in a way that fools us all—and it won’t be proven that it’s happened until long after a number of people have been convinced by it, and possibly policies have been shaped by it,” Crootof says.

Another expert who agrees with Crootof’s “yes” bet is Michael Horowitz, a professor of political science and the associate director of Perry World House at the University of Pennsylvania. He described how Deepfake videos could exploit modern societies split by partisanship into echo chambers where information—authentic or not—tends to reinforce preexisting beliefs among people.

“Would you be that surprised if the week before the midterm elections, a Deepfake video came out that was some kind of Russian agitation designed to inflame Americans regardless of which side they were on?” Horowitz says. “In some ways, at this point I would be surprised if the Russians didn’t try—the question is how much pickup it gets.”

Miles Brundage, research fellow at the University of Oxford’s Future of Humanity Institute and a Ph.D. candidate in Human and Social Dimensions of Science and Technology at Arizona State University, did weigh in early on to suggest that the 2018 U.S. midterm elections would be a likely first opportunity for malicious actors to sow confusion with Deepfakes, but decided to stay on the sidelines of the wager because of the uncertainty.

“It’s somewhat analogous to cyber vulnerabilities—don’t use them until it really matters because people have a chance to adapt if you move too soon,” Brundage says. “It’s also plausible, though, that more handcrafted, non-Deepfake tools are still better for these sorts of purposes, so I didn’t make a bet on it.”

For now, Deepfake technology remains brittle and prone to failure in many scenarios, Hwang says. The computing power required to generate a believable fake remains a barrier for casual computer users. And many cheaper and more widely available software techniques exist that don’t necessarily require deep-learning algorithms to digitally alter the appearance or sound of videos.

Researchers have also been investigating ways to detect Deepfake videos and reveal their inauthentic nature, Hwang says. That could lead to effective countermeasures by the time Deepfake technology becomes more believable and widely available. Horowitz suggests that major Internet platforms such as Facebook and YouTube will have some incentive to help figure out monitoring systems to weed out Deepfakes.
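
One common baseline in that detection research is simply a binary classifier trained on video frames labeled real or fake. The sketch below is a hypothetical example of the idea using PyTorch and torchvision: the ImageNet-pretrained ResNet-18, the 224-by-224 frame format, and the 0.5 decision threshold are all assumptions for illustration, not a description of DARPA’s or any platform’s actual tooling.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained ResNet-18 and replace its final
# layer with a single logit: higher means "more likely authentic".
detector = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
detector.fc = nn.Linear(detector.fc.in_features, 1)

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-4)

def train_step(frames, labels):
    """frames: (batch, 3, 224, 224) tensors; labels: 1.0 = real, 0.0 = fake."""
    detector.train()
    logits = detector(frames).squeeze(1)
    loss = loss_fn(logits, labels.float())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def looks_fake(frame, threshold=0.5):
    """Score one preprocessed frame; the threshold is an assumption."""
    detector.eval()
    with torch.no_grad():
        prob_real = torch.sigmoid(detector(frame.unsqueeze(0))).item()
    return prob_real < threshold
```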

Half a year’s time will tell who proves triumphant in the Deepfake wager of 2018. But the technology’s development and deployment could end up surprising experts focused on the possibility of malicious hackers perpetrating political scandals, Clark says. After all, the technology first emerged through its use in pornography by ordinary Internet users.

“The first really big use of [the technology] was in Deepfakes, which is pornography,” Clark says. “And I think that highlights how the domain may surprise us in the same way Deepfakes did.”
