Can You Trust Crowd Wisdom?

MIT Tech Review covers new social-networking research in an untrustworthy way


Can you trust crowd wisdom? An article this week on the MIT Technology Review website asks that question and answers it in the negative - or rather, says that new research indicates the answer is no: "Researchers say online recommendation systems can be distorted by a minority of users."

When searching online for a new gadget to buy or a movie to rent, many people pay close attention to the number of stars awarded by customer-reviewers on popular websites. But new research confirms what some may already suspect: those ratings can easily be swayed by a small group of highly active users.
Vassilis Kostakos, an assistant professor at the University of Madeira in Portugal and an adjunct assistant professor at Carnegie Mellon University (CMU), says that rating systems can tap into the "wisdom of the crowd" to offer useful insights, but they can also paint a distorted picture of a product if a small number of users do most of the voting. "It turns out people have very different voting patterns," he says, varying both among individuals and among communities of users.

What's the official informal-fallacy name for bait-and-switch? This Tech Review article commits it. It wants you to think the research casts doubt on recommendation systems, but it doesn't. It wants you to think there's a hidden problem of only a few people voting, when the research is really talking about the fact that a relatively small fraction of people do a large share of the total voting at places like IMDb.

That's not to say that there aren't problems with the voting at IMDb. Is Inglourious Basterds really the 43rd best movie ever made, better than The Departed (#57), Slumdog Millionaire (#75), Braveheart (#100), Unforgiven (#110), No Country For Old Men (#116), Million Dollar Baby (#150), or Crash (#232), each of which won the Academy Award for Best Picture in its respective year? Of course not. But the problem isn't a handful of voters swaying the vote - the fewest votes cast for any one of these movies is 85,000. The problem is 18-year-olds with no historical memory of cinema giving a movie a 10 the same night they see it, while those of us over 40 are carefully weighing whether Yojimbo gets an 8 or a 9.

Suppose for the sake of argument there's an 80/20 rule for IMDb voting - that is, 80 percent of all votes are cast by 20 percent of the people who vote. Is that a problem? What if it turns out there's an 80/20 rule for electoral voting in the United States? Does that invalidate the election process?

In other words, consider the entire aggregation of election votes cast by everyone alive who has ever voted. It might very well be the case that a handful of people turn out to every election, casting votes for every county supervisor and municipal judge election, while a large number of people turn out once every four years to vote for the U.S. President, while another large group votes even less frequently than that. It might well turn out that 20 percent of all citizens cast 80 percent of the votes. In fact, in the absence of Soviet-style mandatory voting, it would be surprising if something like that weren't the case.
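The 80/20 argument above can be made concrete with a small simulation. This is a hypothetical sketch (the voter counts, vote counts, and uniform rating distribution are all assumptions, not figures from the paper): if heavy voters and light voters rate items from the same underlying distribution, the aggregate mean stays close to the true mean even when a 20 percent minority casts 80 percent of the votes.

```python
import random

random.seed(42)

def mean_rating(rating_scale, n_light=8000, n_heavy=2000, heavy_votes=16):
    """Simulate an aggregate rating where light voters cast 1 vote each
    and heavy voters cast many, all drawn from the same distribution."""
    votes = [random.choice(rating_scale) for _ in range(n_light)]
    for _ in range(n_heavy):
        votes.extend(random.choice(rating_scale) for _ in range(heavy_votes))
    return sum(votes) / len(votes)

# Hypothetical setup: everyone rates uniformly on a 1-10 scale (true mean 5.5).
scale = list(range(1, 11))
avg = mean_rating(scale)

# 2,000 heavy voters x 16 votes = 32,000 of 40,000 total votes.
share_heavy = (2000 * 16) / (2000 * 16 + 8000)
print(f"heavy-voter share of all votes: {share_heavy:.0%}")
print(f"aggregate mean rating: {avg:.2f}")
```

Under these assumptions the heavy voters cast exactly 80 percent of the votes, yet the aggregate mean lands near 5.5 - vote concentration alone doesn't distort the result unless the heavy voters' tastes actually differ from everyone else's.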

As might be expected, the paper itself, which was presented at the 2009 IEEE International Conference on Social Computing and is available from Kostakos's website here [PDF], isn't about the unreliability of crowd wisdom at all. It looked at three different online voting systems with different barriers to entry for voting. (Its conclusion that experts can be encouraged to vote more often by lowering those barriers seems to me rather circular and obvious, given that it defines experts simply as people who vote often.)

The paper takes for granted that if an item has been reviewed or voted on only a couple of times, the result is unreliable, and it doesn't seem to have anything particular to say about the reliability of a recommendation based on a large number of votes or reviews. It doesn't, by the way, even contain the word "distorted" - that seems to have come from a conversation or interview with Kostakos, not from the paper itself.

Nor does the paper have anything to say about "online recommendation systems" - when discussing Amazon, for example, it considers only the voting and reviewing on the Amazon site, and not the feature by which it recommends other products based on what other people looked at or bought. This reviewer's recommendations: One shaky thumb up for the research, two firm thumbs down for Tech Review's report on it.
