How Bots Win Friends and Influence People

Social and computer scientists parse online bot discourse


A stylized computer-drawn chat bubble shown over 1s and 0s. Illustration: iStockphoto

Every now and then sociologist Phil Howard writes messages to social media accounts accusing them of being bots. It’s like a Turing test of the state of online political propaganda. “Once in a while a human will come out and say, ‘I’m not a bot,’ and then we have a conversation,” he said at the European Conference for Science Journalists in Copenhagen on June 29.

In his academic writing, Howard calls bots “highly automated accounts.” By definition, the accounts publish messages on Twitter, Facebook, or other social media sites at rates even a teenager couldn’t match. Human puppet-masters manage them, like the Wizard of Oz behind his curtain, but with a wide variety of commercial aims and political repercussions. Howard and colleagues at the Oxford Internet Institute in England published a working paper [PDF] last month examining the influence of these social media bots on politics in nine countries.
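
To make the “highly automated” label concrete, here is a minimal sketch of how such an account might operate. Everything in it is hypothetical: the post_message function, the account handle, and the talking points are placeholders, not code from any real botnet or any platform’s API.

```python
import random
import time

# Hypothetical stand-in for a real social network API call; an actual bot
# would authenticate with a platform client library and an account's keys.
def post_message(account: str, text: str) -> None:
    print(f"[{account}] {text}")

# Canned talking points (illustrative only).
TALKING_POINTS = [
    "Candidate X is surging! #election",
    "What the media won't tell you... #election",
    "RT if you agree! #election",
]

def run_bot(account: str, posts_per_day: int = 2000) -> None:
    """Post around the clock at a pace no human user could sustain."""
    interval = 86_400 / posts_per_day  # seconds between posts
    while True:
        post_message(account, random.choice(TALKING_POINTS))
        time.sleep(interval)

run_bot("@definitely_a_human")  # runs until interrupted
```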

“Our goal is to produce large amounts of evidence, gathered systematically, so that we can make some safe, if not conservative, generalizations about where public life is going,” Howard says. The working paper, available ahead of peer-review in draft form, reports on countries with a mixture of different types of governments: Brazil, Canada, China, Germany, Poland, Russia, Taiwan, Ukraine, and the United States.

“My biggest surprise (maybe disappointment) is how it’s seemingly taken the 2016 U.S. election outcome to elevate the conversation and concerns related to this issue... because it’s not new,” says John F. Gray, co-founder of Mentionmapp, a social media analytics company in Vancouver, Canada. For years, bot companies have flooded protest movements’ hashtags with pro-government spam, from Mexico [PDF] to Russia [PDF]. More sophisticated bots replicate real-life human networks and post or promote “fake news” and conspiracy theories seeking to sway voters. Indiana University researchers are building a taxonomy of social-network bots to simplify research (see “Taxonomy Goes Digital: Getting a Handle on Social Bots,” IEEE Spectrum, 9 June 2017).

Howard and colleagues have taken a social science approach: They found informants willing to provide access to the programmers behind the botnets and have spent time with those programmers, getting to know their business models and motivations. One of their discoveries, Howard says, is that bot networks are “not really bought and sold: they’re rented.” That’s because the older a profile is and the more varied its activity, the easier it is to evade detection by social networks’ security teams.
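
A toy heuristic can illustrate why aged, varied accounts are harder to flag. The record schema, the scoring formula, and the example accounts below are invented for this sketch; they do not reflect how any platform’s security team actually screens accounts.

```python
from datetime import datetime, timezone
from math import log2

# Illustrative account records; the schema is invented for this sketch.
ACCOUNTS = [
    {"name": "fresh_spammer",
     "created": datetime(2017, 6, 1, tzinfo=timezone.utc),
     "actions": {"post": 980, "like": 10, "reply": 5, "share": 5}},
    {"name": "rented_veteran",
     "created": datetime(2011, 3, 1, tzinfo=timezone.utc),
     "actions": {"post": 300, "like": 350, "reply": 180, "share": 170}},
]

def suspicion_score(account: dict) -> float:
    """Crude heuristic: young accounts with one-note activity look bot-like."""
    age_years = (datetime.now(timezone.utc) - account["created"]).days / 365.25
    total = sum(account["actions"].values())
    # Shannon entropy of the activity mix: low entropy means monotonous behavior.
    entropy = -sum((n / total) * log2(n / total)
                   for n in account["actions"].values() if n)
    # Youth and monotony both push the score up (max entropy here is 2 bits).
    return 1.0 / (1.0 + age_years) + (2.0 - entropy)

for acct in ACCOUNTS:
    print(f"{acct['name']}: {suspicion_score(acct):.2f}")
```

By this toy measure, the long-lived account with a well-rounded activity mix scores far lower, which is exactly the property that makes aged accounts worth renting out rather than building fresh.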

Private companies, not just governments and political parties, are major botnet users, Howard adds. The big business of renting botnets to influence public conversations may encourage firms to create ever-more realistic bots. The computation needed to spread propaganda via bots, Howard says, isn’t that complicated. Yet Gray says the sophistication of botnet design, coordination, and manipulation of social media has been “discouragingly impressive.”
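
As a rough sketch of how uncomplicated that coordination can be, the following uses an invented fleet of rented handles and a trivial text mutation to sidestep exact-duplicate filters. None of it reflects code that Howard’s team observed; it only shows that the basic mechanics fit in a few lines.

```python
import random
import time

# Invented fleet of rented handles; real operators would hold credentials
# for aged accounts like those described above.
FLEET = [f"@rented_account_{i}" for i in range(5)]

def mutate(text: str) -> str:
    """Trivial wording variation to sidestep exact-duplicate filters."""
    return text + random.choice(["", " !!", "...", " #MustRead"])

def amplify(seed: str) -> None:
    # Each account echoes the seed after a small random delay, mimicking
    # organic spread rather than one suspicious simultaneous burst.
    for account in random.sample(FLEET, k=len(FLEET)):
        time.sleep(random.uniform(0.1, 0.5))
        print(f"{account} shares: {mutate(seed)}")

amplify("Shocking report reveals the truth about Candidate X")
```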

Both Howard and Gray say they are pessimistic about the ability of regulations to keep up with the fast-changing social bot-verse. Howard and his team are instead trying to examine each country’s situation case by case, and in the working paper they call on social media firms to revise their designs to promote democracy.

Gray calls it a literacy problem: Humans must get better at evaluating a message’s source in order to decide how much to believe the message itself, he says.

Note: Journalist Lucas Laursen attended the 2017 European Conference for Science Journalists, where Howard spoke, with a travel grant but had no obligation to cover conference events.
