Because it’s just six years old, Twitter has been around for only one U.S. presidential election, in 2008. As the tide of Twitter users swells and the political stakes rise, candidates will increasingly use the platform to inflate voters’ perception of their popularity and tarnish that of their opponents. There is ample opportunity to spread misinformation cheaply, argue Wellesley College computer scientists Panagiotis “Takis” Metaxas and Eni Mustafaraj in the journal Science this week. Metaxas, whose students’ willingness to believe what they read on the Internet first drew him to the issue, says that the ability to manipulate social media and search results remains underappreciated by both the media and the public.
IEEE Spectrum: Has social media manipulation increased with the current election?
Panagiotis Metaxas: In the past, I have done research on specific cases. During the Massachusetts special election in 2010, spikes in Google searches about the candidates led me to examine what was happening on Twitter [PDF] and some of the games being played there. At the time, it appeared to be an attempt to test the waters, to see how effective certain techniques would be. Some of them appear to be quite effective, since we see the same techniques in use today. It costs very little, for example, to throw a Twitter bomb. It takes just a few hours of programming, or you can probably buy a program that will do it for you. Its low cost gives it a lot of potential. [Editor’s note: A Twitter bomb is the sending of unsolicited replies to specific users through “bots.” While Twitter can shut these bots down, retweets can still proliferate. In the 2010 special election, nine fake accounts sent out about 1000 tweets before being blocked by Twitter. Retweeting potentially reached 60 000 accounts in a day.]
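The editor’s note suggests the rough arithmetic behind a Twitter bomb: a modest number of bot tweets, amplified by retweets, can reach tens of thousands of accounts. A back-of-the-envelope sketch, where the retweet rate and average follower count are hypothetical values chosen only to show how quickly exposure compounds:

```python
def estimated_reach(original_tweets, retweet_rate, avg_followers):
    """Rough daily reach of a Twitter bomb: the accounts that see the
    original unsolicited replies, plus the followers exposed when a
    fraction of recipients retweet the message."""
    retweets = original_tweets * retweet_rate
    return int(original_tweets + retweets * avg_followers)

# ~1000 bot tweets, as in the 2010 case; a 20 percent retweet rate and
# ~295 followers per retweeter are hypothetical assumptions that would
# put total exposure near the 60 000 accounts reported.
print(estimated_reach(original_tweets=1000, retweet_rate=0.2,
                      avg_followers=295))
```

The point is not the particular numbers but the shape of the model: cost scales with the original tweets, while reach scales with other people’s followers.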
IEEE Spectrum: Why would Twitter manipulation be an effective strategy?
Panagiotis Metaxas: Persuasion is based on emotion; it is not based on logic. If everybody knew that 1 percent of the [Twitter] accounts are responsible for 30 percent of the traffic, people would think about what they read more carefully. But the vast majority of people don’t know that. Information is propagated through retweets from others who are either fooled or agree. At the end of the day, people receive some piece of misinformation, and many will not realize that it has come from one account that has been blasting messages. They will think that it came from friends.
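The 1-percent/30-percent figure Metaxas cites is a claim about traffic concentration, which is straightforward to measure if you have per-account tweet counts. A minimal sketch (the counts in the example are made up):

```python
def top_share(tweet_counts, top_fraction=0.01):
    """Share of total traffic produced by the most active
    top_fraction of accounts."""
    counts = sorted(tweet_counts, reverse=True)
    k = max(1, int(len(counts) * top_fraction))
    return sum(counts[:k]) / sum(counts)

# Four accounts, one hyperactive: the top 25 percent (one account)
# produces 70 percent of the traffic.
print(top_share([70, 10, 10, 10], top_fraction=0.25))
```

A heavily skewed result from a measurement like this is one signal that a few accounts, possibly automated, are driving a conversation.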
IEEE Spectrum: How has the battleground shifted in the past four years?
Panagiotis Metaxas: Far more people are on social media and use search engines daily. The political campaigns are aware of that, so they are much better at focusing their message on each particular user, because there is so much information in the digital traces we leave on the Web. Every time we use a search engine, every time we visit Facebook, we leave information about who we are and what we like. The campaigns and the spammers can target their messages to us much more precisely. This is called microtargeting.
IEEE Spectrum: Can you tell us about your work developing software that detects such social media manipulation and how it connects to the Truthy project at Indiana University?
Panagiotis Metaxas: We try to create semiautomatic tools that will help the receiver of information evaluate and research its trustworthiness. The tools will provide, for example, a button that gives information about the reputation of the sender, and another button to check who else may be broadcasting the same information, as well as any connection the two may share. The final decision about whether to believe the message would rest with the user.
Truthy is a project developed at Indiana University, inspired by our earlier work on misinformation in the Massachusetts special election. Truthy looks at “bursty” activity on Twitter, since it is indicative of interesting events. They make their data available to the crowd, which evaluates it and decides whether there is some foul play in it.
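Truthy’s actual pipeline (meme clustering, network analysis) is far richer than this, but the core idea of flagging “bursty” activity can be sketched as a simple z-score test: an interval is bursty when its tweet volume far exceeds the trailing window’s average. The window size and threshold below are arbitrary illustrative choices:

```python
import statistics

def bursty_hours(hourly_counts, window=24, threshold=3.0):
    """Return indices whose volume exceeds the trailing window's mean
    by more than `threshold` standard deviations."""
    flagged = []
    for i in range(window, len(hourly_counts)):
        past = hourly_counts[i - window:i]
        mu = statistics.mean(past)
        sigma = statistics.pstdev(past) or 1.0  # guard against a flat window
        if hourly_counts[i] > mu + threshold * sigma:
            flagged.append(i)
    return flagged

# A day of quiet traffic followed by a sudden spike: only the
# spike hour (index 24) is flagged.
print(bursty_hours([10] * 24 + [200]))
```

A spike flagged this way is not proof of manipulation, which is why Truthy hands the flagged data to human evaluators.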
IEEE Spectrum: Have there been any indications of social media manipulation in this election?
Panagiotis Metaxas: For the coming elections we have so far seen few examples, apart from the surge of fake accounts that began following Governor Romney in a single day. But for spammers, the congressional elections may be more interesting than the presidential election. Control of Congress is very important, and it flies below the radar of the press. We expect to see a surge in the last week before the elections, when there will be little time for debunking. In any event, people should be educated about how misinformation propagates, and they should be ready to use our old personal technology, our brain, to decide whether to trust what they hear.
IEEE Spectrum: Why should media organizations be especially wary of retweeting without comment?
Panagiotis Metaxas: Because people tend to trust journalists, tend to trust newspapers. They are expecting that when a journalist says something, he or she has done a little bit of research to see whether this is valid or not.
IEEE Spectrum: Could you give us a real-world example?
Panagiotis Metaxas: It was recently tweeted that [Syrian President Bashar al-] Assad was dead…. It was rebroadcast by some journalist, and within an hour of Assad’s supposed death, oil prices climbed by a dollar.
IEEE Spectrum: What are prefab tweet factories, and how do they attempt to influence journalists?
Panagiotis Metaxas: It is an effort to pressure journalists into supporting your cause. You make it appear to a journalist that he or she is being monitored by grassroots people who object to the reporting, which puts pressure on him or her to change it. A reporter who is unaware of being targeted will likely be influenced by this effort. This is a form of what is called astroturfing [PDF].
IEEE Spectrum: How useful is social media in predicting elections?
Panagiotis Metaxas: Not very useful at this point [PDF]. We have seen some research indicating that they could predict elections, but on close examination these papers refer to analysis done after the results were known. Replicating these methods, we found no predictive power in social media. Given that social media accounts are not representative of voting citizens, and given the possibilities for social media manipulation, it is unlikely that today's social media data can predict elections. In fact, no paper claiming predictability verifies the previously published methods; instead, each develops new ones. I guess they did not find the previous methods predictive, so they had to develop new ones that fit their particular data.