How Facebook’s AI Tools Tackle Misinformation

Facebook’s weapons against hate speech improve their aim, identify permutations of misinformation, CTO says

3 min read
Illustration: Andriy Onufriyenko/Getty Images

Facebook today released its quarterly Community Standards Enforcement Report, in which it reports actions taken to remove content that violates its policies, along with how much of this content was identified and removed before users brought it to Facebook’s attention. That second category relies heavily on automated systems developed through machine learning.

In recent years, these AI tools have focused on hate speech. According to Facebook CTO Mike Schroepfer, the company’s automated systems identified and removed three times as many posts containing hate speech in the third quarter of 2020 as in the third quarter of 2019. Part of the credit for that improvement, he indicated, goes to a new machine learning approach that continuously improves on live, online data instead of just offline data sets. The technology, dubbed RIO, for Reinforced Integrity Optimizer, monitors a metric tracking the overall prevalence of hate speech on the platform and tunes its algorithms to push that number down.
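Facebook has not published RIO's internals, so as a loose illustration only, the idea of tuning a moderation system against a live prevalence metric (rather than a fixed offline test set) can be sketched like this; the target value, the simulated post stream, and the simple threshold adjustment are all assumptions for the sake of the example:

```python
import random

random.seed(0)

TARGET_PREVALENCE = 0.02  # assumed target: fraction of surfaced posts that violate policy


def classifier_score(post):
    # Stand-in for a learned hate-speech model: returns a violation score in [0, 1].
    return post["score"]


def run_online_tuning(stream, threshold=0.9, step=0.02, rounds=20, batch=500):
    """Crude sketch: adjust a removal threshold using live feedback on how much
    violating content slips past the filter, instead of a handcrafted offline setting."""
    for _ in range(rounds):
        posts = [next(stream) for _ in range(batch)]
        surfaced = [p for p in posts if classifier_score(p) < threshold]
        # Prevalence: share of violating content among posts the filter let through.
        prevalence = sum(p["violating"] for p in surfaced) / max(len(surfaced), 1)
        if prevalence > TARGET_PREVALENCE:
            threshold -= step  # be stricter: remove more borderline posts
        else:
            threshold += step  # relax: avoid over-removal
    return threshold


def post_stream():
    # Synthetic stream: violating posts tend to score higher, with noise.
    while True:
        violating = random.random() < 0.1
        score = min(1.0, max(0.0, random.gauss(0.7 if violating else 0.2, 0.15)))
        yield {"score": score, "violating": violating}


final = run_online_tuning(post_stream())
```

In this toy version the threshold drifts down from its initial value until the measured prevalence falls to the target, which captures the direction of the idea Schroepfer describes, though the real system optimizes model parameters, not a single threshold.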

“The idea of moving from a handcrafted off-line system to an online system is a pretty big deal,” Schroepfer said. “I think that technology is going to be interesting for us over the next few years.”

During 2020, Facebook’s policies toward misinformation grew steadily tighter, though many would say not tight enough. The company in April announced that it would be directly warning users exposed to COVID-19 misinformation. In September it announced expanded efforts to remove content that would suppress voting and a plan to label claims of election victory before the results were final. In October it restricted the spread of a questionable story about Hunter Biden. And throughout the year it applied increasingly explicit tags on content identified as misinformation, including a screen that blocks access to the post until the user clicks on it. Guy Rosen, Facebook vice president of integrity, reported that only five percent of users take that extra step.

That’s the policy. Enforcing that policy takes both manpower and technology, Schroepfer pointed out in a Zoom call with journalists on Wednesday. At this point, he indicated, AI isn’t used to determine if the content of an original post falls into the categories of misinformation that violates its standards—that is a job for human fact-checkers. But after a fact-checker identifies a problem post, the company’s similarity matching system hunts down permutations of that post and removes those automatically.

Facebook wants to automatically catch a post, says Schroepfer, even if “someone blurs a photo or crops it… but we don’t want to take something down incorrectly.”

“Subtle changes to the text—a no, or not, or just kidding—can completely change the meaning,” he said. “We rely on third-party fact checkers to identify it, then we use AI to find the flavors and variants.”
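Facebook hasn't disclosed how its similarity matching guards against these meaning-flipping edits. As a rough sketch of the problem, the following toy matcher (the word lists, threshold, and function name are all assumptions, not Facebook's method) only treats a post as a permutation of a fact-checked claim if the text is near-identical and no negation or joke marker has been added:

```python
from difflib import SequenceMatcher

NEGATORS = {"no", "not", "never"}
JOKE_MARKERS = {"just kidding", "jk", "satire"}


def same_claim(flagged: str, candidate: str, threshold: float = 0.8) -> bool:
    """Treat `candidate` as a permutation of the fact-checked `flagged` post only
    if the text is near-identical AND no small meaning-flipping edit was made."""
    a, b = flagged.lower(), candidate.lower()
    if SequenceMatcher(None, a, b).ratio() < threshold:
        return False
    # A single added "no"/"not" or a joke marker can invert the meaning.
    if sum(w in NEGATORS for w in a.split()) != sum(w in NEGATORS for w in b.split()):
        return False
    if any(m in a for m in JOKE_MARKERS) != any(m in b for m in JOKE_MARKERS):
        return False
    return True


claim = "Face masks cause oxygen deficiency in healthy adults"
print(same_claim(claim, "Face  masks cause oxygen deficiency in healthy adults!"))  # → True
print(same_claim(claim, "Face masks do not cause oxygen deficiency in healthy adults"))  # → False
```

The point of the sketch is the asymmetry Schroepfer describes: punctuation and spacing changes should still match, while a two-letter negation must break the match.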

The company reported in a blog post that a new tool, SimSearchNet++, is helping this effort. Developed through self-supervised learning, it looks for variations of an image, adding optical character recognition when text is involved.
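SimSearchNet++ itself is a learned model and its details are not public, but the classic "difference hash" technique illustrates how an image fingerprint can survive edits like blurring: downscale the image, record only which neighboring pixels are brighter, and compare fingerprints by Hamming distance. This toy version on a synthetic grayscale gradient is purely illustrative and uses no Facebook code:

```python
def dhash(pixels, size=8):
    """Difference hash: downscale a grayscale image to (size+1) x size cells and
    record whether each cell is brighter than its right neighbour."""
    h, w = len(pixels), len(pixels[0])

    def sample(r, c, rows, cols):
        # Average the source block that maps to downscaled cell (r, c).
        r0, r1 = r * h // rows, max(r * h // rows + 1, (r + 1) * h // rows)
        c0, c1 = c * w // cols, max(c * w // cols + 1, (c + 1) * w // cols)
        block = [pixels[i][j] for i in range(r0, r1) for j in range(c0, c1)]
        return sum(block) / len(block)

    bits = []
    for r in range(size):
        row = [sample(r, c, size, size + 1) for c in range(size + 1)]
        bits += [row[c] > row[c + 1] for c in range(size)]
    return bits


def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))


# Synthetic 64x64 gradient "image" and a box-blurred copy of it.
img = [[x + y for x in range(64)] for y in range(64)]
blurred = [[sum(img[min(63, y + dy)][min(63, x + dx)] for dy in (0, 1) for dx in (0, 1)) // 4
            for x in range(64)] for y in range(64)]

dist = hamming(dhash(img), dhash(blurred))  # small distance → likely the same image
```

A real system would pair a perceptual fingerprint like this with OCR on any overlaid text, as the blog post describes for SimSearchNet++.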

Facebook's technology identified these posts as misinformation. Image: Facebook

As an example, Schroepfer pointed to two posts about face masks identified as misinformation (above).

Thanks to these efforts, Rosen indicated, Facebook directly removed 12 million posts with dangerous COVID misinformation between March and October, and put warnings on 167 million more COVID-related posts debunked by fact checkers. It took action on similar numbers of posts related to the U.S. election, he reported.

Schroepfer also reported that Facebook has deployed weapons to fight deepfakes. Thanks to the company’s Deepfake Detection Challenge, launched in 2019, Facebook does have a deepfake detector in operation. “Luckily,” he said, “that hasn’t been a top problem” to date.

“We are not done in terms of where we want to be,” he said. “But we are nowhere near out of ideas for how to improve the capability of our systems.”


Will AI Steal Submarines’ Stealth?

Better detection will make the oceans transparent—and perhaps doom mutually assured destruction

11 min read

The Virginia-class fast attack submarine USS Virginia cruises through the Mediterranean in 2010. Back then, it could effectively disappear just by diving. Photo: U.S. Navy

Submarines are valued primarily for their ability to hide. The assurance that submarines would likely survive the first missile strike in a nuclear war and thus be able to respond by launching missiles in a second strike is key to the strategy of deterrence known as mutually assured destruction. Any new technology that might render the oceans effectively transparent, making it trivial to spot lurking submarines, could thus undermine the peace of the world. For nearly a century, naval engineers have striven to develop ever-faster, ever-quieter submarines. But their adversaries have worked just as hard at advancing a wide array of radar, sonar, and other technologies designed to detect, target, and eliminate enemy submarines.

The balance seemed to turn with the emergence of nuclear-powered submarines in the early 1960s. In a 2015 study for the Center for Strategic and Budgetary Assessments, Bryan Clark, a naval specialist now at the Hudson Institute, noted that the ability of these boats to remain submerged for long periods of time made them “nearly impossible to find with radar and active sonar.” But even these stealthy submarines produce subtle, very-low-frequency noises that can be picked up from far away by networks of acoustic hydrophone arrays mounted to the seafloor.
