
Can You Trust Crowd Wisdom?

Researchers say online recommendation systems can be distorted by a minority of users.

When searching online for a new gadget to buy or a movie to rent, many people pay close attention to the number of stars awarded by customer-reviewers on popular websites. But new research confirms what some may already suspect: those ratings can easily be swayed by a small group of highly active users.

Vassilis Kostakos, an assistant professor at the University of Madeira in Portugal and an adjunct assistant professor at Carnegie Mellon University (CMU), says that rating systems can tap into the “wisdom of the crowd” to offer useful insights, but they can also paint a distorted picture of a product if a small number of users do most of the voting. “It turns out people have very different voting patterns,” he says, varying both among individuals and among communities of users.


Kostakos studied voting patterns on Amazon, the Internet Movie Database (IMDb), and the book review site BookCrossings. The research was presented last month at the 2009 IEEE International Conference on Social Computing. His team looked at hundreds of thousands of items and millions of votes across the three sites. In each case, they found that a small number of users accounted for a large share of the ratings. For example, only 5 percent of active Amazon users cast votes on more than 10 products, while a handful of users voted on hundreds of items.
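
The pattern is easy to see in miniature. Below is a rough sketch, in Python, of the kind of per-user vote counting the study describes; the tiny (user, item, stars) dataset and the threshold N are made up for illustration and are not Kostakos's data or code.

```python
from collections import Counter

# Hypothetical votes: (user, item, stars). Real datasets are far larger.
votes = [
    ("u1", "book-a", 5), ("u1", "book-b", 4), ("u1", "book-c", 5),
    ("u2", "book-a", 1), ("u3", "book-c", 5), ("u4", "book-b", 3),
]

votes_per_user = Counter(user for user, _, _ in votes)

# Share of all votes cast by the single most active user.
top_user, top_count = votes_per_user.most_common(1)[0]
print(f"{top_user} cast {top_count / len(votes):.0%} of all votes")

# Fraction of users who voted on more than N items (cf. the 10-product figure above).
N = 2
heavy = sum(1 for count in votes_per_user.values() if count > N)
print(f"{heavy / len(votes_per_user):.0%} of users voted on more than {N} items")
```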

“If you have two or three people voting 500 times,” says Kostakos, the results may not be representative of the community overall. He suspects this may be why ratings often tend toward extremes.
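
A small, made-up example shows how that happens: when a couple of prolific voters flood the ratings, the raw per-vote average drifts toward their extreme, while averaging each user's ratings first, one voice per person, gives a different picture. The data and the per-user averaging step below are illustrative assumptions, not part of the published study.

```python
from collections import defaultdict
from statistics import mean

# Two prolific users cast 50 five-star votes each; five casual users rate once.
votes = [("fan1", 5)] * 50 + [("fan2", 5)] * 50 + [
    ("u1", 2), ("u2", 3), ("u3", 2), ("u4", 3), ("u5", 2),
]

per_vote_mean = mean(stars for _, stars in votes)  # every vote counts equally

by_user = defaultdict(list)
for user, stars in votes:
    by_user[user].append(stars)
per_user_mean = mean(mean(s) for s in by_user.values())  # one voice per person

print(f"per-vote average: {per_vote_mean:.2f}")   # pulled toward 5 by the two heavy voters
print(f"per-user average: {per_user_mean:.2f}")   # closer to the wider community's view
```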

Jahna Otterbacher, an assistant professor at Illinois Institute of Technology who studies online rating systems, says that previous research has hinted that rating systems can be skewed by factors such as the age of a review. But she notes that some sites, including Amazon, already incorporate mechanisms designed to control the quality of ratings, such as allowing users to vote on the helpfulness of other users’ reviews.

Kostakos proposes further ways to make recommendations more reliable. He suggests making it easier to vote, in order to encourage more users to take part.

Niki Kittur, an assistant professor at CMU who studies user collaboration on Wikipedia and was not involved with Kostakos’s work, says that providing more information about voting patterns to users could also be helpful. Kittur suggests that sites could create ways to easily summarize and represent other users’ contributions to reveal any obvious biases. “There are both intentional and unintentional sources of bias,” says Kittur. “In the end, what we really need [are] tools and transparency.”
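
One way to read that suggestion, purely as a sketch rather than anything the sites actually implement, is a per-user summary showing how many ratings each person has cast and how one-sided they are. The dataset and the "uniformly extreme" threshold below are hypothetical.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical (user, stars) ratings; the flagging rule is an assumption, not a site policy.
votes = [("u1", 5), ("u1", 5), ("u1", 5), ("u1", 5),
         ("u2", 1), ("u2", 2),
         ("u3", 4), ("u3", 2), ("u3", 3)]

by_user = defaultdict(list)
for user, stars in votes:
    by_user[user].append(stars)

# Summarize each contributor: volume, average score, and an extremeness flag.
for user, stars in sorted(by_user.items()):
    avg = mean(stars)
    note = "  <- uniformly extreme" if avg >= 4.5 or avg <= 1.5 else ""
    print(f"{user}: {len(stars)} ratings, average {avg:.1f}{note}")
```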

Kostakos also suggests removing overly negative and overly positive reviews, so a site won’t be too positive or too negative overall. But Otterbacher, who is examining reviews from IMDb, Amazon, and Yelp, worries that such a policy could discourage many people from taking part. “People who write reviews want to say something about the item, and they can be pretty passionate about their opinions,” she says.
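
One plausible reading of "removing overly negative and overly positive reviews" is a trimmed mean, which discards a fixed slice of the highest and lowest ratings before averaging. The sketch below assumes that interpretation; it is not the specific method Kostakos describes.

```python
from statistics import mean

def trimmed_mean(ratings, trim_fraction=0.1):
    """Average the ratings after discarding the top and bottom trim_fraction."""
    ordered = sorted(ratings)
    k = int(len(ordered) * trim_fraction)
    kept = ordered[k:len(ordered) - k] if k else ordered
    return mean(kept)

ratings = [1, 1, 2, 3, 3, 4, 5, 5, 5, 5]
print(f"raw mean:     {mean(ratings):.2f}")               # 3.40
print(f"trimmed mean: {trimmed_mean(ratings, 0.2):.2f}")   # 3.67, extremes discarded
```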
