Facebook’s leaked moderation rules show why Big Tech can’t police hate speech
Society asked Big Tech to shut down hate speech online. We got exactly what we asked for.
The news: The New York Times’s Max Fisher published extracts from more than 1,400 pages of internal Facebook documents, containing rules for the company’s global army of more than 7,500 content moderators. (Motherboard had previously published some of the same material.)
What’s inside? A sprawling hodgepodge of guidelines, restrictions, and classifications. The rules on hate speech alone “run to 200 jargon-filled, head-spinning pages.” They include details on how to interpret emoji (the same emoji can count as either “bullying” or “praising,” apparently) and lists of people or political parties to monitor for possible hate speech. The documents show Facebook to be “a far more powerful arbiter of global speech” than it has admitted, Fisher writes.
The problem: The guidelines are not only byzantine; some are out of date or contain errors. They also vary widely depending on how much pressure the company is under: “Facebook blocks dozens of far-right groups in Germany, where the authorities scrutinize the social network, but only one in neighboring Austria.” Moderators, most of whom work for outsourcing companies and get minimal training, are expected to make complex judgments in a matter of seconds, processing a thousand posts a day under rules that change frequently in response to political events, often with the help of Google Translate.
The takeaway: This strips away any remaining pretense that Facebook is just a neutral publishing platform. Political judgments permeate every page of these guidelines.
But what did you expect? As Facebook’s former chief security officer, Alex Stamos, told me in October, we’ve demanded that tech platforms police hate speech, and that only gives them more power. “That’s a dangerous path,” Stamos warned. “Five or ten years from now, there could be machine-learning systems that understand human languages as well as humans. We could end up with machine-speed, real-time moderation of everything we say online.”