While the prevailing narrative around AI is that it automates work away, in practice it has largely changed the nature of work—in many cases creating more menial and more tedious jobs. The insatiable need of deep-learning algorithms for massive quantities of labeled training data, for example, has spawned an entire cottage industry of human labelers. Both the New York Times and GQ China have covered the emergence of data factories in China, where workers spend hours on end manually labeling the content of images or passages of text for very low wages.
Now a startup named Vainu has found a new source of cheap labor: prison inmates. It has been partnering with two prisons in Finland over the last few months, specifically to improve its Finnish natural-language processing (NLP) capabilities. The cofounder told The Verge that while it uses Amazon Mechanical Turk to crowdsource labor for English NLP training, it initially struggled to find a scalable Finnish alternative at the same low cost. The company now pays as much as it would for Mechanical Turk directly to the government agency that oversees the prisons. It’s not known how much actually reaches the prisoners.
Vainu has publicized its effort as “a prime example of a company creating work because of AI [... to] employ and empower the new working class.” In reality, it highlights a growing concern among AI experts and labor activists: the technology may create even more banal and soul-crushing tasks than those it is designed to eliminate.
Data labeling is just one among many examples, including the work of safety drivers who monotonously sit behind the wheel of self-driving cars and content moderators who mindlessly sift through Facebook posts and YouTube videos to clean up after imprecise algorithms.
All these jobs fall into the category of what anthropologist Mary L. Gray and computer scientist Siddharth Suri call “ghost work,” a type of labor that fuels the mirage of automation but is devalued because it’s meant to be invisible. In other words, there is indeed a “new working class,” but it certainly isn’t being empowered.
This story originally appeared in our AI newsletter The Algorithm. To have it directly delivered to your inbox, sign up here for free.