Crowdsource Control

Researchers show how easy it is to manipulate online opinions


Stephen Cass: Hi, this is Stephen Cass for IEEE Spectrum’s “Techwise Conversations.”

The idea of aggregating the opinions of online users to produce valuable results dates back to John Brunner’s 1975 science fiction novel, The Shockwave Rider. Today, crowdsourced opinions are very much a fact of daily life.

User ratings play a role in how we choose the movies we watch, the news stories we read, and even in major decisions such as picking a doctor or hospital. But how trustworthy are these ratings? Can the crowd be manipulated to produce wisdom to order?

My guest today, Sinan Aral, has spent the last few years studying how influence and ideas spread through social networks, working at the NYU Stern School of Business and the MIT Sloan School of Management. Recently, he and two colleagues published a paper in the journal Science, demonstrating how they tried to influence the popularity of comments on articles at a news website. Sinan joins us by phone from his apartment in New York.

Sinan, welcome to the podcast.

Sinan Aral: Thanks for having me.

Stephen Cass: So your paper is titled “Social Influence Bias: A Randomized Experiment.” Before we get to the results of that experiment, can you tell us a little about the motivation behind it?

Sinan Aral: Sure. My lab has been very interested in trying to understand how people influence one another in social networks and how this results in the diffusion or cascading effect of behaviors throughout society. And in the past we’ve done very large-scale experiments with Facebook and with Yahoo to try and understand social influence between peers in a social network, and in this study what we wanted to do was go a step further and see how this type of social influence affects our decision making and how we perceive value in things.

Stephen Cass: Can you tell us a little bit about how you actually set up this experiment with this website?

Sinan Aral: Sure. Well, what we’re really interested in here is whether the prior opinions of others, the ratings that something has, or whether or not something has been popular in the past, affect people’s future perception of the value or quality of those items. And this is actually a very difficult question to answer, because without a randomized experiment it’s difficult to know whether something is popular today because it was popular yesterday, or whether it was popular yesterday and today because it just happens to be a very good item. And in order to break that relationship between the quality of the items being rated and their prior ratings, we randomly manipulated the votes or ratings on items to understand how this process worked.

Stephen Cass: So, using this, were you actually able to change the outcome of comments? Make a comment more popular or less popular?

Sinan Aral: Yeah, so what we did was a very small manipulation. We took comments on this social news aggregation website, which is very similar to Reddit or Digg.com, and we simply gave them a single up-vote or down-vote, chosen at random. So we didn’t choose the comments; they were randomly selected. And then we saw how the subsequent voting changed for comments that were randomly positively manipulated or randomly negatively manipulated, and since the comments were randomly chosen, the only difference between their ratings should come from this single positive or negative vote, which was the manipulation of our experiment.

And what we found was actually quite striking: It turned out that we could create a very big effect from just a single positively or negatively manipulated vote. And there were really three major, top-level results from the study. The first is that the positive manipulation, a single randomly chosen up-vote, created a herding behavior in ratings that resulted in a 25 percent increase in the ratings of positively manipulated comments on average, and that’s a really big increase in ratings from just a single positive signal at the beginning of the deliberation process, if you will.

The second major result was that this herding bias was asymmetric between negative and positive voting. While the positive manipulation created this snowball of popularity or positivity toward an item, the negative manipulation had no effect, and that surprised us as well, because there’s no intuitive reason a priori why you would expect us to follow the opinions of others in a positive direction but not in a negative direction. And when we dug deeper, what we found was that while people did tend to herd on negatively manipulated comments, meaning they were more likely to vote negative after our negative manipulation, they were even more likely to go in and quote-unquote “correct” the negative manipulation by voting positive, which is what we call a “correction effect.” So an intuitive explanation for this asymmetry is that we tend to go along with the positive opinions of others, but we tend to be skeptical of the negative opinions of others, and so we go in and correct what we think is an injustice: something voted down that doesn’t seem like it should be voted down. But if something is voted up, we say, “Oh, it must be better than I thought it was; I’ll go ahead and up-vote it as well.”

And the third major result was that these effects varied by topic. So in business and society, culture, and politics, we found substantial susceptibility to positive herding, whereas in general news, economics, and IT, we found no such herding effects in either the positive or negative direction.
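To make the mechanics concrete, here is a minimal, purely illustrative Python sketch of the kind of design Aral describes: each simulated comment is randomly seeded with a single up-vote, a single down-vote, or nothing, and simulated voters lean along with positive signals but partially “correct” negative ones. The voter model and every probability in it are assumptions made up for illustration; none of this is the study’s code, data, or parameters. The point is only that, because the seed vote is assigned at random, any gap in average final score between the three groups must come from the manipulation itself rather than from comment quality.

```python
import random
import statistics

# Toy sketch only: an invented voter model, not the study's actual code,
# data, or parameters.

def vote(visible_score):
    """One voter's up-vote (+1) or down-vote (-1), given the comment's visible score."""
    if visible_score > 0:
        return +1 if random.random() < 0.60 else -1  # herd along with positive signals
    if visible_score < 0:
        return +1 if random.random() < 0.55 else -1  # "correction": push back against negatives
    return +1 if random.random() < 0.50 else -1      # neutral baseline

def final_score(treatment, n_voters=100):
    """treatment is +1 (seed up-vote), -1 (seed down-vote), or 0 (control)."""
    score = treatment
    for _ in range(n_voters):
        score += vote(score)
    return score

random.seed(0)
groups = {+1: [], -1: [], 0: []}
for _ in range(3000):
    t = random.choice([+1, -1, 0])    # random assignment breaks the link between
    groups[t].append(final_score(t))  # comment quality and its prior rating

for t, label in [(+1, "up-treated"), (-1, "down-treated"), (0, "control")]:
    print(f"{label:12s} mean final score: {statistics.mean(groups[t]):+.2f}")
```

Running the sketch shows the up-treated group drifting to a higher mean score while the down-treated group ends up close to the control mean, echoing the positive-herding and correction asymmetry described above.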

Stephen Cass: So in terms of the herding behavior that you’re observing, what are some of the real-world implications of this behavior that you’ve seen?

Sinan Aral: Well, so, I think you can take this in several layers. At the first layer, you can think about the implications of these results for online commerce. So, ratings are one of the cornerstones of consumer confidence in online commerce transactions. In fact, Nielsen did a survey last year, and they surveyed about 30 000 people, and they found that 75 percent of people believe that online ratings are truthful and accurate, and those types of ratings are second only to the opinions of friends and family as the most trusted source for opinions about items that people were interested in purchasing online. So it’s really an important part of how we decide what to consume online or what products to purchase.

And if the results that we find generalize to all the different types of ratings online, then there’s substantial bias and incentive for manipulation. If I’m the purveyor of products online, I have a tremendous incentive to vote those products up and to create this snowball of positivity in my product ratings. In the online ratings case, for consumers it means perhaps we should be slightly more skeptical of positive ratings and take them with a grain of salt, and it obviously matters to the purveyors of these types of ratings: Amazon, Netflix, Reddit, and so on. As scientists, we want to understand how these types of phenomena work, and then, as the entrepreneurs and designers of the websites that use ratings, we need to make sure they’re designed in a way that’s less susceptible to this kind of herding or popularity bias.

Stephen Cass: So in some of your other research, you looked at some interesting patterns in who are the kind of people who are more likely to influence others, and who’s more likely to be susceptible to influence. Can you tell us a little bit more about those patterns?

Sinan Aral: Yeah, sure. In a study we published last summer in Science, we did a randomized experiment about influence in social networks on Facebook among 1.5 million people. And what we found was that you could really measure influence very well online, and you could tell who was influential and who wasn’t, and we found some pretty interesting patterns that were potentially specific to the film industry, which was the context we were studying. These types of results may vary if you’re talking about fashion, or if you’re talking about politics; someone else may be more influential than who we found to be influential. But in terms of the film industry, what we found was that influence and susceptibility tended to trade off. So if you were very influential, you were less likely to be susceptible. We found that influence increased with age and susceptibility to influence decreased with age. We found that, for instance, you could identify influential individuals who themselves had influential friends, and these could potentially be superspreaders in a network, and you may want to get your message out to them before anyone else in order to spread the message by word-of-mouth communication.

And when we were thinking about this result and about the result from the social-influence bias paper, it really struck us how important these things might be to very real and important outcomes that we see in society. So, obviously, the paper about social influence, about who’s more influential and who’s more susceptible, might be useful to a political campaign that’s trying to get the word out. And the social-influence bias paper, the one about ratings, we actually wrote during the 2012 presidential elections in the United States. And every time I saw an election poll, I couldn’t stop thinking about what it meant for the election, because we take these polls as predictors of election results, but I kept asking myself whether the prior opinions of others bias the opinions of those who see how other people are rating an item. Do election polls predict election results, or do they drive them? If I see, oh, 87 percent of people think Obama is doing a great job, do I say to myself, “Oh, maybe he’s doing better than I thought he was, and maybe I’ll vote for him even though I wasn’t planning to”?

Stephen Cass: Well, that’s great, Sinan. Thank you for coming on and talking about your clearly influential research.

Sinan Aral: Thanks so much for having me.

Stephen Cass: We’ve been speaking with Sinan Aral, associate professor at the MIT Sloan School of Management, about how crowdsourced ratings can be manipulated. For IEEE Spectrum’s “Techwise Conversations,” I’m Stephen Cass.

This interview was recorded Wednesday, 28 August 2013.

Segment producer: Barbara Finkelstein

Segment producer: Stephen Cass

Audio engineer: Francesco Ferorelli

Read more “Techwise Conversations,” find us in iTunes, or follow us on Twitter.

NOTE: Transcripts are created for the convenience of our readers and listeners and may not perfectly match their associated interviews and narratives. The authoritative record of IEEE Spectrum’s audio programming is the audio version.
