
AI needs to face up to its invisible-worker problem

Machine-learning models are trained by low-paid online gig workers. They’re not going away—but we can change the way they work, says Saiph Savage.
SAIPH SAVAGE

Many of the most successful and widely used machine-learning models are trained with the help of thousands of low-paid gig workers. Millions of people around the world earn money on platforms like Amazon Mechanical Turk, which allow companies and researchers to outsource small tasks to online crowdworkers. According to one estimate, more than a million people in the US alone earn money each month by doing work on these platforms. Around 250,000 of them earn at least three-quarters of their income this way. But even though many work for some of the richest AI labs in the world, they are paid below minimum wage and given no opportunities to develop their skills. 

Saiph Savage is the director of the human-computer interaction lab at West Virginia University, where she works on civic technology, focusing on issues such as fighting disinformation and helping gig workers improve their working conditions. This week she gave an invited talk at NeurIPS, one of the world’s biggest AI conferences, titled “A future of work for the invisible workers in AI.” I talked to Savage on Zoom the day before she gave her talk. 

Our conversation has been edited for clarity and length.  

You talk about the invisible workers in AI. What sorts of jobs are these people doing?  

A lot of tasks involve labeling data—especially image data—that gets fed into supervised machine-learning models so that they understand the world better. Other tasks involve transcribing audio. For instance, when you talk to Amazon’s Alexa you might have workers transcribing what you say so that the voice recognition algorithm learns to understand speech better. And I just had a meeting with crowdworkers in rural West Virginia. They get hired by Amazon to read out a lot of dialogue to help Alexa understand how people in that region talk. You can also have workers labeling websites that might be filled with hate speech or pedophilia. This is why, when you search for images on Google or Bing, you're not exposed to those things.

People are hired to do these tasks on platforms like Amazon Mechanical Turk. Large tech companies might use in-house versions—Facebook and Microsoft have their own, for instance. The difference with Amazon Mechanical Turk is that anyone can use it. Researchers and startups can plug into the platform and draw on this invisible workforce.

What problems do these invisible workers have?

I don’t actually see crowdwork as a bad thing; it’s a really good idea. It has made it very easy for companies to add an external workforce.

But there are a number of problems. One is that workers on these platforms earn very low wages. We did a study where we followed hundreds of Amazon Mechanical Turk workers for several years, and we found that they were earning around $2 per hour. This is much less than the US minimum wage. There are people who dedicate their lives to these platforms; it’s their main source of income.

And that brings other problems. These platforms cut off future job opportunities as well, because full-time crowdworkers are not given a way to develop their skills—at least not ones that are recognized. We found that a lot of people don’t put their work on these platforms on their résumé. If they say they worked on Amazon Mechanical Turk, most employers won’t even know what that is. Most employers are not aware that these are the workers behind our AI.

It’s clear you have a real passion for what you do. How did you end up working on this?

I worked on a research project at Stanford where I was basically a crowdworker, and it exposed me to the problems. I helped design a new platform, which was like Amazon Mechanical Turk but controlled by the workers. But I was also a tech worker at Microsoft, and that opened my eyes to what it’s like working within a large tech company. You become faceless, which is very similar to what crowdworkers experience. And that really spurred me to want to change the workplace.

You mentioned doing a study. How do you find out what these workers are doing and what conditions they face?

I do three things. I interview workers, I conduct surveys, and I build tools that give me a more quantitative perspective on what is happening on these platforms. I have been able to measure how much time workers invest in completing tasks. I’m also measuring the amount of unpaid labor that workers do, such as searching for tasks or communicating with an employer—things you’d be paid for if you had a salary.

You’ve been invited to give a talk at NeurIPS this week. Why is this something that the AI community needs to hear?

Well, they’re powering their research with the labor of these workers. I think it’s very important to realize that a self-driving car or whatever exists because of people who aren’t paid even minimum wage. While we’re thinking about the future of AI, we should think about the future of work. It’s helpful to be reminded that these workers are humans.

Are you saying companies or researchers are deliberately underpaying?

No, that’s not it. I think they might underestimate what they’re asking workers to do and how long it will take. But a lot of the time they simply haven’t thought about the other side of the transaction at all.

Because they just see a platform on the internet. And it’s cheap.

Yes, exactly.

What do we do about it?  

Lots of things. I’m helping workers get an idea of how long a task might take them to do, so they can evaluate whether a task is going to be worth it. I’ve been developing an AI plug-in for these platforms that helps workers share information and coach one another about which tasks are worth their time and which help them develop certain skills. The AI learns what type of advice is most effective: it takes in the text comments that workers write to each other, learns which advice leads to better results, and promotes that advice on the platform.

Let’s say workers want to increase their wages. The AI identifies what type of advice or strategy is best suited to help workers do that. For instance, it might suggest that you do these types of task from these employers but not these other types of task over there. Or it will tell you not to spend more than five minutes searching for work. The machine-learning model is based on the subjective opinion of workers on Amazon Mechanical Turk, but I found that it could still increase workers’ wages and develop their skills.

So it’s about helping workers get the most out of these platforms?

That’s a start. But it would be interesting to think about career ladders. For instance, we could guide workers to do a number of different tasks that let them develop their skills. We can also think about providing other opportunities. Companies putting jobs on these platforms could offer online micro-internships for the workers.

And we should support entrepreneurs. I've been developing tools that help people create their own gig marketplaces. Think about these workers: they are very familiar with gig work and they might have new ideas about how to run a platform. The problem is that they don’t have the technical skills to set one up, so I’m building a tool that makes setting up a platform a little like configuring a website template.  

A lot of this is about using technology to shift the balance of power.

It’s about changing the narrative, too. I recently met with two crowdworkers that I’ve been talking to and they actually call themselves tech workers, which—I mean, they are tech workers in a certain way because they are powering our tech. When we talk about crowdworkers they are typically presented as having these horrible jobs. But it can be helpful to change the way we think about who these people are. It’s just another tech job.  
