The FCC’s Ban on AI in Robocalls Won’t Be Enough

AI is making it too easy for bad actors to create realistic, targeted scams


[Illustration: a red robot hand holding a blue phone, with a line emanating from it in the shape of a face. Credit: iStock]

In the days before the U.S. Democratic Party’s New Hampshire primary election on 23 January, potential voters began receiving calls with AI-generated audio of a fake President Biden urging them not to vote until the general election in November. In Slovakia, a Facebook post carried fake, AI-generated audio of a presidential candidate planning to steal the election, which may have tipped the result in another candidate’s favor. Recent elections in Indonesia and Taiwan have been marred by AI-generated misinformation, too.

In response to the faux-Biden robocall in New Hampshire, the U.S. Federal Communications Commission moved to make AI-generated voices in robocalls illegal on 8 February. But experts IEEE Spectrum spoke to aren’t convinced that the move will be enough, even as generative AI brings new twists to old robocall scams and offers opportunities to turbocharge efforts to defraud individuals.

The total lost to phone scams and spam in the United States in 2022 is thought to be US $39.5 billion, according to Truecaller, which makes a caller-ID and spam-blocking app. That same year, the average amount of money lost by scam victims in the United States was $431.26, according to a survey by Hiya, a company that provides call-protection and identity services. Hiya says that figure stands to grow as scammers adopt generative AI.

“In aggregate, it’s mind-boggling how much is lost to fraud perpetrated through robocalls,” says Eric Burger, the research director of the Commonwealth Cyber Initiative at Virginia Tech.

“I don’t think we can appreciate just how fast the telephone experience is going to change because of this.” —Jonathan Nelson, Hiya

AI Will Make It Easier for Scammers to Target Individuals

“The big fear with generative AI is it’s going to take custom-tailored scams and take them mainstream,” says Jonathan Nelson, director of product management at Hiya. In particular, he says, generative AI will make it easier to carry out spear-phishing attacks.

The Cost of Phone Fraud

The average amount of money lost by a phone-scam victim in 2022, in U.S. dollars:
  • United States: $431.26
  • United Kingdom: $324.04
  • Canada: $472.87
  • France: $360.62
  • Germany: $325.87
  • Spain: $282.35

Source: Hiya

Generally, phishing attacks aim to trick people into parting with personal information, such as passwords and financial details. Spear-phishing is more targeted: The scammer knows exactly whom they’re going after and hopes a tailored approach will bring a bigger payout. Now, Nelson says, generative AI lets a scammer scrape social-media sites, draft text, and even clone a trusted voice to part unsuspecting individuals from their money en masse.

With the FCC’s unanimous vote to make generative AI in robocalls illegal, the question naturally turns to enforcement. That’s where the experts whom IEEE Spectrum spoke to are generally doubtful, although many also see it as a necessary first step. “It’s a helpful step,” says Daniel Weiner, the director of the Brennan Center’s Elections and Government Program, “but it’s not a full solution.” Weiner says that it’s difficult for the FCC to take a broader regulatory approach in the same vein as the general prohibition on deepfakes being mulled by the European Union, given the FCC’s scope of authority.

Burger, who was the FCC’s chief technology officer from 2017 to 2019, says that the agency’s vote will ultimately have an impact only if it starts enforcing the ban on robocalls more generally. Most types of robocalls have been prohibited since Congress passed the Telephone Consumer Protection Act in 1991, which the FCC enforces. (There are exceptions, such as a prerecorded message from your dentist’s office reminding you of an upcoming appointment.)

“Enforcement doesn’t seem to be happening,” says Burger. “The politicians like to say, ‘We’re going after the bad guys,’ and they don’t—not with the vigor we’d like to see.”

Robocall Enforcement Tools May Not Be Enough Against AI

The key method to identify the source of a robocall, and thereby stop bad actors from making more of them, is to trace the call back through the complex network of telecom infrastructure to its originating point. Tracebacks used to be complicated affairs, because a call typically traverses infrastructure maintained by multiple network operators, such as AT&T and T-Mobile. In 2020, however, the FCC approved a mandate for network operators to begin implementing a caller-ID authentication framework called STIR/SHAKEN that would, among other antirobocall measures, make one-step tracebacks possible. Under STIR/SHAKEN, the carrier where a call enters the network cryptographically signs it, attesting to how well it knows the caller, and every network downstream can check that signature.
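
To make that concrete, here is a minimal Python sketch of what a STIR/SHAKEN signature carries. Each signed call includes a PASSporT (RFC 8225), a JWT-style token placed in the call’s SIP Identity header; the code below decodes one without verifying its signature. The field names come from the standard, but the helper functions are this sketch’s own, not any vendor’s API.

```python
# Decode a STIR/SHAKEN PASSporT (RFC 8225) without verifying its signature.
import base64
import json

def b64url_decode(part: str) -> bytes:
    # JWT segments strip base64 padding; restore it before decoding.
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def inspect_passport(token: str) -> dict:
    """Return the traceback-relevant fields of a PASSporT token."""
    header_b64, payload_b64, _signature = token.split(".")
    header = json.loads(b64url_decode(header_b64))
    claims = json.loads(b64url_decode(payload_b64))
    return {
        # URL of the certificate of the carrier that signed the call
        "cert_url": header.get("x5u"),
        # "A", "B", or "C": how well the originating carrier knows the caller
        "attestation": claims.get("attest"),
        # opaque identifier that makes one-step traceback possible
        "origination_id": claims.get("origid"),
        "calling_number": claims.get("orig", {}).get("tn"),
        "called_numbers": claims.get("dest", {}).get("tn"),
    }
```

It is the origination identifier in that token, combined with the signing carrier’s certificate, that lets investigators reach the call’s first network in a single step rather than subpoenaing one operator at a time.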

“One-step traceback has been borne out,” says Burger. Traceback, for example, identified the source of the fake Biden calls targeting New Hampshire voters as a Texas-based company called Life Corporation. The problem, Burger says, is that the FCC, the U.S. Federal Bureau of Investigation, and state agencies aren’t providing the resources to make it possible to go after the sheer number of illegal robocall operations. Historically, the FCC has gone after only the very largest perpetrators.

“There is no stopping these calls,” says Hiya’s Nelson—at least not entirely. “Our job isn’t to stop them, it’s to make them unprofitable.” Hiya, like similar companies, aims to do that by reducing how often fraud succeeds: its protective services flag where a call was created and by whom, making it less likely that an individual answers the call in the first place.
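
As a toy illustration of that kind of call labeling (and emphatically not Hiya’s actual system), a protection service might combine a call’s STIR/SHAKEN attestation level with a reputation list to decide what appears on the recipient’s screen. Every score, threshold, and list entry below is invented for the example:

```python
# Toy call-labeling heuristic. The attestation levels are real
# STIR/SHAKEN values, but the risk scores, thresholds, and
# spam-origin list are invented; this is not any vendor's system.
RISK_BY_ATTESTATION = {"A": 0.1, "B": 0.4, "C": 0.7}

KNOWN_SPAM_ORIGINS = {"gateway-xyz-017"}  # hypothetical origination IDs

def label_call(attestation: str | None, origination_id: str | None) -> str:
    """Choose the label shown on the recipient's screen before they answer."""
    risk = RISK_BY_ATTESTATION.get(attestation, 0.9)  # unsigned calls score worst
    if origination_id in KNOWN_SPAM_ORIGINS:
        risk = max(risk, 0.95)
    if risk >= 0.9:
        return "Likely scam"
    if risk >= 0.5:
        return "Suspected spam"
    return "Verified caller" if attestation == "A" else "Unknown caller"
```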

However, Nelson worries that generative AI will lower the barrier to entry so far that those preventive measures become less effective. Today’s scams, for example, still almost always require transferring the victim to a live agent in a call center to close out the scam successfully. With AI-generated voices, scam operators will eventually be able to cut out the call center entirely.

“In aggregate, it’s mind-boggling how much is lost to fraud perpetrated through robocalls.” —Eric Burger, Virginia Tech

Nelson is also concerned that as generative AI improves, it will become harder for people to recognize that they weren’t speaking with an actual person in the first place. “That’s where we’re going to start to lose our footing,” says Nelson. “We may have an increase in call recipients not realizing it’s a scam at all.” Scammers posing as charities, for example, could successfully solicit “donations” without the donors ever realizing what had happened.

“I don’t think we can appreciate just how fast the telephone experience is going to change because of this,” says Nelson.

One other complicating issue for enforcement is that the majority of illegal robocalls in the United States originate from beyond the country’s borders. The Industry Traceback Group found that in 2021, for example, 65 percent of all such calls were international in origin.

Burger points out that the FCC has taken steps to combat international robocalls. The agency made it possible for carriers to refuse to pass along traffic from gateway providers (the network operators that connect domestic infrastructure to international networks) that let scam calls onto U.S. networks. In December 2023, for example, the FCC ordered two companies, Solid Double and CallWin, to stop transmitting illegal robocalls or risk having other carriers required to refuse their traffic.

“Enforcement doesn’t seem to be happening . . . not with the vigor we’d like to see.” —Eric Burger, Virginia Tech

The FCC’s recent action against generative AI in robocalls is the first of its kind, and it remains to be seen whether regulatory bodies in other countries will follow. “I certainly think the FCC is setting a good example in swift and bold action in the scope of its regulatory authority,” says Weiner. However, he also notes that counterpart regulators in other democracies, many of which have broader authority, will likely end up taking more comprehensive approaches.

It’s hard to say how the FCC’s actions will stack up against those of other regulators, according to Burger. The agency is sometimes well ahead of the curve, as it was with spectrum sharing, and sometimes well behind, as it was with mid-band 5G.

Nelson says he expects to see revisions to the FCC’s decision within a couple of years, because it currently prevents companies from using generative AI for legitimate business practices.

It also remains to be seen whether the FCC’s vote will have any real effect. Burger points out that, in the case of calls like the fake Biden one, it was already illegal to place those robocalls and impersonate the president, so making another aspect of the call illegal likely won’t be a game-changer.

“By making it triply illegal, is that really going to deter people?” Burger says.
