
Facebook’s leaked moderation rules show why Big Tech can’t police hate speech

December 28, 2018

Society asked Big Tech to shut down hate speech online. We got exactly what we asked for.

The news: The New York Times’s Max Fisher published extracts from more than 1,400 pages of internal Facebook documents, containing rules for the company’s global army of more than 7,500 content moderators. (Motherboard had previously published some of the same material.)

What’s inside? A sprawling hodgepodge of guidelines, restrictions, and classifications. The rules on hate speech alone “run to 200 jargon-filled, head-spinning pages.” They include details on how to interpret emoji (the same emoji can count as both “bullying” and “praising,” apparently) and lists of people and political parties to monitor for possible hate speech. The documents show Facebook to be “a far more powerful arbiter of global speech” than it has admitted, Fisher writes.

The problem: The guidelines are not only byzantine; some are out of date or contain errors. They also vary widely depending on how much pressure the company is under: “Facebook blocks dozens of far-right groups in Germany, where the authorities scrutinize the social network, but only one in neighboring Austria.” Moderators, most of whom work for outsourcing companies and get minimal training, are expected to make complex judgments in a matter of seconds, processing a thousand posts a day, with rules that change frequently in response to political events, and often using Google Translate.

The takeaway: This strips away any remaining pretense that Facebook is just a neutral publishing platform. Political judgments permeate every page of these guidelines.

But what did you expect? As Facebook’s former chief security officer, Alex Stamos, told me in October, we’ve demanded that tech platforms police hate speech, and that only gives them more power. “That’s a dangerous path,” Stamos warned. “Five or ten years from now, there could be machine-learning systems that understand human languages as well as humans. We could end up with machine-speed, real-time moderation of everything we say online.”

Illustration by Rose Wong