Will Foreign Agents Rig the U.S. Midterm Elections Through Social Media?

Samantha Bradshaw, an expert on computational propaganda, weighs in on whether Facebook, Twitter, and others are doing enough to curb political social media bots


Illustration of a thumbs-down icon and a Twitter bird made up of American election-related icons. Illustration: iStockphoto

On 5 September, the U.S. Senate Intelligence Committee held its fourth and final public hearing on the manipulation of social media by foreign agents to influence American politics. The hearings were prompted by the discovery, largely by academic researchers, that Russian botnets and fake accounts had targeted U.S. voters in the run-up to the November 2016 elections. Under mainly gentle questioning, Sheryl Sandberg, chief operating officer of Facebook, and Jack Dorsey, cofounder and CEO of Twitter, highlighted what their respective social media platforms have been doing to prevent a repeat of 2016.

IEEE Spectrum’s Jean Kumagai spoke to Samantha Bradshaw, a researcher at the Computational Propaganda Project and a doctoral candidate at the Oxford Internet Institute, who’s been tracking the phenomenon of political manipulation through social media.

IEEE Spectrum: Remind us what happened during the November 2016 U.S. elections that prompted the Senate to conduct these hearings.

Samantha Bradshaw, a researcher at the Oxford Internet Institute’s Computational Propaganda Project. Photo: Computational Propaganda Project

Samantha Bradshaw: During 2016, researchers, including our team at Oxford, were starting to identify lots of Russian-run accounts on all the social media platforms spreading junk news and disinformation. Some were human-run, and some were automated bot accounts.

Bot accounts mainly identified and amplified posts on Twitter, liking, sharing, and retweeting at a much quicker pace than a real human could. Human-operated accounts also engaged in highly polarizing debates or set up comment threads, groups, or pages to spread divisive messaging.

Russian agents also purchased ads on Facebook and Google, targeting specific groups of people with specific messages around highly contentious debates, such as immigration, gun rights, or LGBTQ rights. Ads that focused on the candidates typically supported either Donald Trump or Bernie Sanders, but they always attacked Hillary Clinton.

Overall, the quality of information being shared on social media was very low and highly polarizing. Junk news and disinformation were spreading like wildfire. Visceral attacks on both the left and the right were strategically targeted and amplified to divide the American electorate.

Amid growing concern about Russian collusion and mounting evidence of government meddling via social media platforms, the Senate Intelligence Committee began its investigation into what happened during the 2016 elections.

Spectrum: Did Russian influence have a measurable effect?

Bradshaw: It’s difficult to draw a clear connection between what people see on social media and how they vote. Communication scholarship tells us that lots of different factors go into the opinion-formation process. It’s not just what we consume via social media—people will have conversations with friends and family, read the newspaper, and watch the news on TV. Political opinions also form over time: We rarely see one story and instantly change our minds.

In terms of measurable effects, we can say a few things about social media. First, many people do rely on social media platforms as a source of news and information. Based on research we conducted for the Computational Propaganda Project, we also know that coordinated disinformation campaigns targeted voters in swing states, where only a few votes could determine whether the state went red or blue. By strategically targeting those voters, a disinformation campaign that sways only 5 or 10 people can still be quite powerful.

Facebook chief operating officer Sheryl Sandberg (left) and Twitter chief executive officer Jack Dorsey testified during a Senate Intelligence Committee hearing on 5 September on foreign influence in American politics via social media. Photo: Drew Angerer/Getty Images

Spectrum: During the hearing, Dorsey, Sandberg, and several senators referred to being caught “flatfooted” by the revelation of Russian influence on social media. Should people have been surprised?

Bradshaw: I’ve always been a little bit critical of social media and the power they have in shaping what we see. So I personally wasn’t surprised. The platforms also should not have been surprised or caught “flatfooted”: These are huge corporations with massive amounts of power and money going into them, and they have a responsibility to protect their user base from harmful information and behavior.

Part of the problem was that until recently, their focus was on more traditional cybersecurity, such as preventing accounts from getting hacked or stopping the spread of spam. Bad actors, meanwhile, used the platforms exactly as they were intended to be used, exploiting those affordances to make disinformation go viral and to target voters with divisive and polarizing advertisements.

Spectrum: Senator Harris (Calif.) quoted one of your colleagues, Lisa-Maria Neudert, on how social media algorithms amplify the most conspiratorial or misleading content, because that’s what generates the most user engagement. How do political social media bots take advantage of this mechanism?

Bradshaw: Bot accounts want real users to engage with their content, and inflammatory content tends to spread further than factual information. Bots use this negative, divisive messaging to try to get more “organic” engagement, where real users like, click, or share their stories. By artificially inflating engagement through likes, shares, and retweets, bots can get social media algorithms to surface those stories more readily to users, because the algorithms promote content based on what’s popular or trending.
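To make the dynamic Bradshaw describes concrete, here is a minimal sketch in Python of a toy feed ranker that orders stories purely by raw engagement counts. It is not any platform’s actual algorithm; the Story fields, the scoring weights, and the example numbers are all hypothetical, chosen only to show how a botnet’s artificial likes and shares can push an inflammatory story above better-reported content.

```python
# Toy illustration (assumed, simplified model): rank a feed by engagement alone,
# then show how bot-inflated counts change the ordering.

from dataclasses import dataclass


@dataclass
class Story:
    title: str
    likes: int
    shares: int


def engagement_score(story: Story) -> float:
    # Hypothetical weighting; real ranking systems use many more signals
    # (recency, relationships, predicted engagement, and so on).
    return story.likes + 2.0 * story.shares


def rank_feed(stories: list[Story]) -> list[Story]:
    # Most "popular" content first.
    return sorted(stories, key=engagement_score, reverse=True)


feed = [
    Story("Local budget report", likes=120, shares=30),
    Story("Inflammatory conspiracy story", likes=40, shares=10),
]

# A botnet of fake accounts likes and shares the conspiracy story.
feed[1].likes += 500
feed[1].shares += 500

for story in rank_feed(feed):
    print(f"{engagement_score(story):>7.1f}  {story.title}")
```

Run as-is, the conspiracy story jumps to the top of the ranking once the fake engagement is added, which is the “organic amplification” effect bots are trying to trigger: the algorithm cannot tell whether popularity came from real users or coordinated accounts.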

Spectrum: Several senators mentioned that other countries are now following “Russia’s playbook” when it comes to social media manipulation. Is there in fact direct copying of Russian methods?

Bradshaw: We are definitely seeing more state actors experimenting with using manipulative tactics on social media. Last month, Facebook announced that it shut down Iranian and Russian botnets trying to undermine the U.S. midterm elections. That is a clear example of one authoritarian regime taking inspiration from another. 

After the 2016 elections, Professor Philip Howard [director of the Computational Propaganda Project] and I started putting together an annual inventory that looks at state actors who are investing in capabilities to manipulate public opinion via social media. In 2016, when we began this project, the debate was largely focused on Russian activity during the U.S. election.

But we soon realized this was a much broader phenomenon than just one bad actor, and that even legitimate political parties are using tools and techniques of computational propaganda to shape what citizens see and share on social media. Our global inventory compares how effective, resourced, and skilled the state actors are at leveraging computational propaganda.

Spectrum: How do you gather that information?

Bradshaw: It’s a three-part methodology. First, we conduct a content analysis. This year, we worked in 10 different languages, selecting specific keywords and examining whether or not reporters have identified any cases of state-sponsored manipulation in their own country’s context.

Second, we corroborate this evidence with other secondary literature, such as government budgets, think-tank reports, and academic studies. Finally, we consult country-specific experts to make sure the data we collected is accurate and to point us to other relevant literature or examples.

In 2017 we identified 28 countries where states were actively investing in social media manipulation campaigns, but we were only looking at English-language sources. In 2018, working in 10 languages, we detected 48 countries. That increase is partly explained by an improvement in our methods, but also by the fact that there is more attention on these issues and people have started looking for evidence of computational propaganda.

We also saw quite a few new political-party actors starting to experiment with these tools and techniques during elections that took place in 2017, following the release of the first report.

Spectrum: During the hearing, Facebook and Twitter highlighted steps they’ve taken since the 2016 elections to rein in “inauthentic” and automated activity. Twitter blocks a half million suspicious logins per day, and it challenges 8.5 million to 10 million accounts each week that it suspects of misusing automation or generating spam. Facebook disabled 1.27 billion fake accounts, and its safety and security team has doubled to more than 20,000 people handling content in 50 languages. Both companies are investing in machine-learning algorithms and artificial intelligence that can automatically spot and remove other undesirable activity [see, for example, “AI-Human Partnerships Tackle ‘Fake News’ ”]. And so on. Is all that enough to prevent such activity?

Bradshaw: I think a lot of the measures that platforms are adopting don’t address some of the deeper systemic issues that give rise to computational propaganda and the spread of disinformation in the first place. The business model that turns our personal data into advertising revenue by driving engagement is what incentivizes information to spread based on virality rather than veracity.

I do commend Facebook and Twitter for removing fake accounts and trying to moderate their content with a little more nuance and care. For quite some time, content moderation was a hidden lever of power: making content visible or invisible. Users were happy to see funny cat pictures and satiric news stories, and all was good in Internet land.

But social media has become a fundamental platform for news consumption and political debate. As these platforms have become so ingrained in our democracies, we really need to have serious discussions about how much control they have in shaping our online experience.

I don’t think the burden of fixing the problem should fall only on social media companies. There is a role for government in this, such as creating legislation like the Honest Ads Act, which aims for more transparency around political ads on social media platforms. Improving media literacy in school systems and investing in local journalism are other options. But simply flagging and removing fake accounts and junk information are Band-Aid solutions to much deeper problems.

Spectrum: Senator King (Maine) mentioned a recent meeting with representatives from Lithuania, Estonia, and Latvia, who reported that they’ve contended with social media interference from Russia “for years.”

Bradshaw: There is plenty of great academic and journalistic investigation into Russian interference on social media in the Baltic states. Often fake accounts would be used to spread pro-Russian viewpoints, as well as conspiratorial content or disinformation. In Ukraine, for example, following Russia’s annexation of Crimea, Russia used social media manipulation alongside its military interventions.

We saw similar social media strategies used in Ukraine being applied to the 2016 U.S. election, such as promoting multiple narratives to distract, divide, and confuse. Russian bots were active in tweeting multiple competing narratives about the Clinton campaign, such as the pizzagate conspiracy about a pedophile ring in the basement of a Washington, D.C., pizzeria, or stories about Clinton’s failing health, after she collapsed at a 9/11 ceremony. All these competing stories serve to slowly push people away from the truth.

Spectrum: Senator King went on to suggest that Twitter and Facebook use an eBay-style rating system, where people can evaluate how trustworthy or misleading they deem content or users to be. Do you think such a system might work, or could it be gamed?

Bradshaw: All of these systems can be gamed, as bad actors will always try to break technology. We already see problems with rating systems like those on Yelp, TripAdvisor, and Amazon, where fake accounts leave fake reviews to boost the rankings of certain products or services.

If you start applying ratings to user accounts, you might end up in a “Black Mirror” situation. In the episode “Nosedive,” everyone in society has a score that’s based on every social interaction they have, and the higher the rating, the more benefits a person gets in society. The main character wants to boost her rating to gain those benefits, but things start to go wrong, her rating plummets, and even more terrible things start to happen.

Social media also makes us compare ourselves to our friends and family, and a rating system might exacerbate some of the anxiety and insecurities that people might face using these platforms on a daily basis. 

Spectrum: Facebook has tweaked its algorithm to downgrade content originating from media outlets in favor of content shared among family and friends.

Bradshaw: I think that deemphasizing news stories, especially coming from high-quality news outlets, isn’t necessarily the answer. Social media has disrupted the traditional business models that supported high-quality journalism, and news organizations now rely on social media platforms to reach audiences and drive traffic to their own websites, to generate ad revenue and keep them afloat.

Part of the problem with today’s media environment is due to the changing nature of our media business models. News outlets are struggling and shutting down, especially at the local level, which creates a disconnect for people in those communities. And downgrading local news could cause more harm to democracy and to our shared sense of community. It’s good that we’re seeing more authentic content from our friends and family. But local news and news from high-quality outlets, that’s important to see, too.

That a social media platform decides what content, and how much of it, users should see in their feeds raises a deeper question that we as a society need to reflect on: Where is the user choice and power in all of this? Yet most people do not understand the fundamentals of how these platforms work, the extent to which they’re being targeted, or why they’re seeing what they’re seeing in their news feeds in the first place.

There needs to be more transparency and control for the user, but right now, platforms hold the majority of control, and decisions are kept hidden in these algorithmic black boxes.

Spectrum: Senator Collins (Maine) asked Twitter to consider archiving its data and making it available to academic researchers beyond a three-year window. Would that help in the kind of research you do at the Computational Propaganda Project?

Bradshaw: I definitely think so. The best data about political life is no longer in the archives of the Library of Congress. Instead, it’s being collected, aggregated, and stored by social media platforms. For quite some time, researchers have wanted to access this wealth of information to better understand not only issues to do with politics, but all aspects of human behavior, since so much of our lives are carried out on digital platforms. More collaboration with academics could provide unique insights that could better democracy and humanity more broadly.

Spectrum: Does Facebook make its data available to academic researchers?

Bradshaw: Facebook has recently launched a data-sharing initiative with academics. It’s still in its early days, so it’s hard to tell how valuable it will be. But in my field, there’s a lot of excitement about the initiative, and I think that is really positive. In general, the more data that academics can access from the platforms the better, as it will be really valuable for understanding all sorts of political and social challenges.

Spectrum: So what can we expect leading up to the midterms? Has your group detected any increase or decrease in social-media manipulation?

Bradshaw: It’s still a little bit too early to tell. Manipulation campaigns require careful and strategic planning and take time to set up. There’s a whole life cycle for these kinds of campaigns. Accounts might have to be hacked, and botnets that appear to be legitimate take time to develop, because they require a history of engagement and authentic behavior or else they risk being taken down. It’s in the month before an election that a campaign really takes off.

The fact that Facebook has already identified some Russian and Iranian botnets suggests that we can expect to see computational propaganda during the midterm elections. But how much manipulation and what kind of impact these campaigns will have on the outcome are still up for debate.
