
Amazon’s Mechanical Turk is the ultimate in nearly anonymous outsourcing: any task that can be completed online can be accomplished by a combination of an automated marketplace and human labor. Those who sign up to complete tasks - known as Turkers - are paid wages as low as pennies per chore to do everything from data entry to folk art.

Mechanical Turk is designed to complete tasks that are easy for humans and hard for machines, such as categorizing or identifying the content of images. The problem for Amazon and all its imitators, however, is that machines are getting better at many tasks, while the humans on Mechanical Turk, for reasons I’ll explore in tomorrow’s post, are getting worse.
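To give a sense of how the marketplace works from a requester’s side, here is a minimal sketch of posting a business-categorization task as a HIT. It uses the boto3 MTurk client purely for illustration; the business name, reward, timing, and question XML below are invented for the example and are not details from the Yelp study described next.

```python
# Hypothetical sketch: posting one business-categorization HIT to Mechanical Turk.
# The reward, timing, business name, and question text are illustrative assumptions.
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

# MTurk questions are submitted as XML; this QuestionForm asks a single
# multiple-choice question about one (made-up) business.
question_xml = """<QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
  <Question>
    <QuestionIdentifier>category</QuestionIdentifier>
    <QuestionContent><Text>Which category best fits Joe's Garage, 123 Main St?</Text></QuestionContent>
    <AnswerSpecification>
      <SelectionAnswer>
        <Selections>
          <Selection><SelectionIdentifier>restaurant</SelectionIdentifier><Text>Restaurant</Text></Selection>
          <Selection><SelectionIdentifier>shopping</SelectionIdentifier><Text>Shopping</Text></Selection>
          <Selection><SelectionIdentifier>automotive</SelectionIdentifier><Text>Automotive</Text></Selection>
        </Selections>
      </SelectionAnswer>
    </AnswerSpecification>
  </Question>
</QuestionForm>"""

hit = mturk.create_hit(
    Title="Categorize a local business",
    Description="Pick the category that best describes this business.",
    Keywords="categorization, local business",
    Reward="0.02",                    # pennies per chore
    MaxAssignments=3,                 # three workers see each business
    LifetimeInSeconds=86400,
    AssignmentDurationInSeconds=300,
    Question=question_xml,
)
print(hit["HIT"]["HITId"])
```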

Recently, for example, researchers working at the online review site Yelp released a paper (pdf) on their experience matching thousands of Mechanical Turkers against a supervised learning algorithm.

The results weren’t pretty: in order to find a population of Turkers whose work was passable, the researchers first used Mechanical Turk to administer a test to 4,660 applicants. It was a multiple-choice test to determine whether a Turker could identify the correct category for a business (Restaurant, Shopping, etc.) and verify, via its official website or by phone, its correct phone number and address.

Only 79 passed. This was an extremely basic multiple-choice test; it makes one wonder how the other 4,581 were smart enough to operate a web browser in the first place.

These 79 “high-quality” workers were then thrown at the problem of verifying business information, three workers to a task. This allowed the researchers to take only the results that a simple majority of Turkers agreed were correct, or, in some cases, to take the result chosen by the Turker who had historically been the most accurate.
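To make that aggregation step concrete, here is a small Python sketch of the two decision rules just described - accept any answer a simple majority of the three workers agrees on, and otherwise fall back to the answer of the historically most accurate worker. The data structures and the exact fallback behavior are assumptions for illustration, not the paper’s implementation.

```python
from collections import Counter

def aggregate(answers, accuracy):
    """Combine three workers' answers for one business.

    answers:  dict mapping worker id -> submitted answer
    accuracy: dict mapping worker id -> historical accuracy (0..1)

    Rule sketched from the article: accept any answer a simple majority
    agrees on; otherwise defer to the historically most accurate worker.
    """
    answer, votes = Counter(answers.values()).most_common(1)[0]
    if votes >= 2:                      # simple majority of three workers
        return answer
    # No majority: trust the worker with the best track record.
    best_worker = max(answers, key=lambda w: accuracy.get(w, 0.0))
    return answers[best_worker]

# Example: two of three workers agree, so the majority answer wins.
print(aggregate(
    {"w1": "Restaurant", "w2": "Restaurant", "w3": "Shopping"},
    {"w1": 0.91, "w2": 0.78, "w3": 0.85},
))  # -> Restaurant
```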

The researchers then threw a naive Bayes classifier at the same set of problems. This is a kind of supervised learning algorithm - and one that, according to a 2006 comparison of such systems, isn’t even the best kind out there.
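For readers curious what such a classifier looks like in code, below is a minimal sketch using scikit-learn’s multinomial naive Bayes over simple bag-of-words counts. The tiny inline training set stands in for Yelp’s pool of user-submitted reviews; the actual features and implementation used in the paper aren’t reproduced here.

```python
# Minimal sketch of a naive Bayes business-category classifier.
# The toy data stands in for Yelp's review corpus; MultinomialNB over
# bag-of-words counts is an illustrative choice, not necessarily the
# paper's exact setup.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "great burgers and friendly waitstaff",
    "the pasta was overcooked but the wine list is solid",
    "huge selection of shoes and clothes, good sale rack",
    "they fixed my brakes and rotated the tires same day",
]
train_labels = ["Restaurant", "Restaurant", "Shopping", "Automotive"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

# Unseen review text is categorized by which class its words favor.
print(model.predict(["they rotated my tires quickly"])[0])  # -> Automotive
```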

The Bayes classifier won handily.

In almost every case, the algorithm, which was trained on a pool of 12 million user-submitted Yelp reviews, correctly identified the category of a business a third more often than the humans. In the automotive category, the computer was twice as likely as the assembled masses to correctly identify a business.

These results don’t necessarily suggest that business categorization is a problem like chess, where the human computer has finally been exceeded by its mechanical counterpart. Rather, they suggest that something about Mechanical Turk itself is broken – either the incentive system or its mechanisms for policing quality. It’s long been known that the wages on Mechanical Turk are quite low - workers are making, on average, between two and three dollars an hour for their labors, and it’s likely that this is part of the problem. Economists have only just begun to address the question; more on that tomorrow.

Follow Mims on Twitter or contact him via email.
