Incoming messages for straight men on dating sites are… rare. Yet many of the men who tried out Ashley Madison, a site aimed at the already-married, got messages soon after signing up. To see the messages, the men had to pay. The more perceptive among them soon noticed that their pen pals wrote similar come-ons, logged in and out at the same time every day, and, oddest of all, had not visited the men's profiles. Ashley Madison was using more than 70,000 bots to lure in users, Gizmodo found in a 2015 investigation.
The message-sending profiles were one iteration of a growing army of bots that populate our online social networks, affecting everything from our wallets to our politics. Now they are attracting academic study and government research dollars.
“For the first time, humans are beginning to share their social ecosystem with a new species,” says computer science graduate student Gregory Maus, of Indiana University. And because not everybody is as attentive as the Ashley Madison user who blew the whistle on the fembots, human users of social networks are susceptible to everything from outright scams to subtler political influence by bots promoting fake news. In response, two years ago the Defense Advanced Research Projects Agency (DARPA) challenged researchers to identify "influence bots" and is now funding further research on social networks.
Maus will present one of a growing number of socialbot taxonomies at the ACM Web Science conference in Troy, New York, this June. His taxonomy seeks to expand on earlier ones that focused on identifying different types of botnets and on categorizing malicious socialbots, such as those that flood a Twitter hashtag used to organize political protests with spam. Another recent paper began mapping benign bots. Maus says he hopes his new taxonomy will be a more “broad, flexible framework useful for researchers” seeking both to understand and to interact with bots.
“The interesting aspect of the current work is that it considers five different dimensions,” says computational social scientist Taha Yasseri of Oxford University in the United Kingdom, who earlier this year published a case study of an unexpected years-long conflict between Wikipedia maintenance bots.
Maus' paper sketches out categories along five dimensions: the degree to which a bot tries to pretend to be human; who its owner is; how the bot interacts with other bots; whether it hides its connection to its owner; and its mission. Some of these dimensions have their own sub-categories. Yasseri adds that it would be useful to examine how the different types of bots interact with each other, rather than just studying each type in isolation. The interaction of human and machine networks is the focus of Yasseri's European Union-funded project, HUMANE.
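Those five dimensions can be pictured as fields of a record describing any given bot. The sketch below is illustrative only; the field and category names are this article's shorthand, not Maus' actual labels.

```python
from dataclasses import dataclass
from enum import Enum

class HumanMimicry(Enum):
    # Illustrative labels, not Maus' actual category names.
    DECLARED_BOT = "openly declares itself a bot"
    PARTIAL = "mixes automated and human-like behavior"
    IMPERSONATOR = "actively pretends to be human"

@dataclass
class SocialBotProfile:
    """One bot, described along the five dimensions in Maus' taxonomy."""
    human_mimicry: HumanMimicry  # does it try to pass as human?
    owner: str                   # who operates it
    bot_interaction: str         # how it interacts with other bots
    owner_transparency: bool     # does it disclose its link to its owner?
    mission: str                 # what it is trying to accomplish

# The Ashley Madison message-senders, roughly classified:
ashley_bot = SocialBotProfile(
    human_mimicry=HumanMimicry.IMPERSONATOR,
    owner="site operator",
    bot_interaction="none observed",
    owner_transparency=False,
    mission="lure users into paying to read messages",
)
```

A researcher could then group observed bots by any one dimension, or study how combinations of dimensions co-occur, which is the kind of cross-type analysis Yasseri suggests.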
In fact, variety and interactivity also characterize the humans studying bot taxonomies. Researchers come from a wide range of backgrounds. Maus, whose undergraduate degree is in philosophy, worked in marketing before joining the Networks & agents Network group at IU. His colleagues there have a mixture of backgrounds in psychology, mathematics, physics, and computer science.
Maus says students or others interested in working on social network taxonomy can get an immediate start by studying the APIs of a social network and reaching out to other researchers working on these problems. His supervisor, Filippo Menczer, accepts potential students through any of three different Ph.D. tracks. The area of bot taxonomy is young enough—and complex enough—that the variety of human profiles almost matches that of the bots.
Lucas Laursen is a journalist covering global development by way of science and technology with special interest in energy and agriculture. He has lived in and reported from the United States, United Kingdom, Switzerland, and Mexico.