
When searching online for a new gadget to buy or a movie to rent, many people pay close attention to the number of stars awarded by customer-reviewers on popular websites. But new research confirms what some may already suspect: those ratings can easily be swayed by a small group of highly active users.

Vassilis Kostakos, an assistant professor at the University of Madeira in Portugal and an adjunct assistant professor at Carnegie Mellon University (CMU), says that rating systems can tap into the “wisdom of the crowd” to offer useful insights, but they can also paint a distorted picture of a product if a small number of users do most of the voting. “It turns out people have very different voting patterns,” he says, varying both among individuals and among communities of users.

Kostakos studied voting patterns on Amazon, the Internet Movie Database (IMDb), and the book review site BookCrossings. The research was presented last month at the 2009 IEEE International Conference on Social Computing. His team looked at hundreds of thousands of items and millions of votes across the three sites. In each case, they found that a small number of users accounted for a large share of the ratings. For example, only 5 percent of active Amazon users cast votes on more than 10 products, while a handful of users voted on hundreds of items.
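
To make that concentration concrete, the sketch below shows one way to measure it, assuming a site's ratings can be exported as simple (user, item, stars) records; the data shape, the function name, and the example numbers are illustrative assumptions, not figures from Kostakos's study.

```python
from collections import Counter

def vote_concentration(ratings, top_fraction=0.05):
    """Share of all votes cast by the most active `top_fraction` of users.

    `ratings` is an iterable of (user_id, item_id, stars) tuples -- a
    hypothetical flat export of a site's rating data.
    """
    votes_per_user = Counter(user for user, _, _ in ratings)
    counts = sorted(votes_per_user.values(), reverse=True)
    top_n = max(1, int(len(counts) * top_fraction))
    return sum(counts[:top_n]) / sum(counts)

# Illustrative data: two prolific voters alongside 100 casual ones.
ratings = (
    [("u1", f"item{i}", 5) for i in range(500)]
    + [("u2", f"item{i}", 1) for i in range(400)]
    + [(f"casual{i}", "item0", 3) for i in range(100)]
)
print(f"Top 5% of users cast {vote_concentration(ratings):.0%} of all votes")
```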

“If you have two or three people voting 500 times,” says Kostakos, the results may not be representative of the community overall. He suspects this may be why ratings often tend toward extremes.

Jahna Otterbacher, an assistant professor at Illinois Institute of Technology who studies online rating systems, says that previous research has hinted that rating systems can be skewed by factors such as the age of a review. But she notes that some sites, including Amazon, already incorporate mechanisms designed to control the quality of ratings, such as allowing users to vote on the helpfulness of other users’ reviews.

Kostakos proposes further ways to make recommendations more reliable. He suggests making it easier to vote, in order to encourage more users to join in.

Niki Kittur, an assistant professor at CMU who studies user collaboration on Wikipedia and was not involved with Kostakos’s work, says that providing more information about voting patterns to users could also be helpful. Kittur suggests that sites could create ways to easily summarize and represent other users’ contributions to reveal any obvious biases. “There are both intentional and unintentional sources of bias,” says Kittur. “In the end, what we really need [are] tools and transparency.”
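
One hypothetical way a site could act on Kittur's suggestion is to attach a compact summary to each reviewer's contributions; the sketch below illustrates that idea and is not a description of any site's actual feature.

```python
from collections import Counter

def contribution_summary(star_votes):
    """Compact profile of one reviewer: vote count, average score, and the
    share of 1- and 5-star votes. `star_votes` is a hypothetical list of
    that user's ratings on a 1-5 scale.
    """
    counts = Counter(star_votes)
    total = len(star_votes)
    return {
        "votes": total,
        "average": round(sum(star_votes) / total, 2),
        "share_1_star": counts[1] / total,
        "share_5_star": counts[5] / total,
    }

print(contribution_summary([1, 1, 2, 1, 1, 1]))  # a persistently negative voter stands out
print(contribution_summary([4, 5, 3, 4, 2, 5]))  # a more balanced voter
```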

Kostakos also suggests filtering out overly negative and overly positive reviews, so that a small number of extreme scores doesn’t push an item’s overall rating too far in either direction. But Otterbacher, who is examining reviews from IMDb, Amazon, and Yelp, worries that such a policy could discourage many people from taking part. “People who write reviews want to say something about the item, and they can be pretty passionate about their opinions,” she says.
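
The article doesn't describe how such filtering would be implemented; one simple reading is a trimmed mean, in which the highest and lowest scores are dropped before averaging, as in the illustrative sketch below.

```python
def trimmed_rating(stars, trim_fraction=0.1):
    """Average rating after dropping the lowest and highest `trim_fraction`
    of scores -- one simple interpretation of discarding the most extreme
    reviews before computing an item's overall rating.
    """
    if not stars:
        raise ValueError("no ratings to average")
    ordered = sorted(stars)
    k = int(len(ordered) * trim_fraction)
    kept = ordered[k:len(ordered) - k] or ordered  # keep all if trimming removes everything
    return sum(kept) / len(kept)

# A few 1-star votes drag the plain average down more than the trimmed one.
stars = [4, 4, 5, 4, 4, 5, 4, 4, 1, 1]
print(sum(stars) / len(stars))                   # plain mean: 3.6
print(trimmed_rating(stars, trim_fraction=0.2))  # trimmed mean: 4.0
```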
