
Facebook’s leaked moderation rules show why Big Tech can’t police hate speech

December 28, 2018

Society asked Big Tech to shut down hate speech online. We got exactly what we asked for.

The news: The New York Times’s Max Fisher published extracts from more than 1,400 pages of internal Facebook documents, containing rules for the company’s global army of more than 7,500 content moderators. (Motherboard had previously published some of the same material.)

What’s inside? A sprawling hodgepodge of guidelines, restrictions, and classifications. The rules on hate speech alone “run to 200 jargon-filled, head-spinning pages.” They include details on how to interpret emoji (the same emoji can apparently count as both “bullying” and “praising,” depending on context) and lists of people or political parties to monitor for possible hate speech. The documents show Facebook to be “a far more powerful arbiter of global speech” than it has admitted, Fisher writes.

The problem: The guidelines are not only byzantine; some are out of date or contain errors. They also vary widely depending on how much pressure the company is under: “Facebook blocks dozens of far-right groups in Germany, where the authorities scrutinize the social network, but only one in neighboring Austria.” Moderators, most of whom work for outsourcing companies and get minimal training, are expected to make complex judgments in a matter of seconds, processing a thousand posts a day, with rules that change frequently in response to political events, and often using Google Translate.

The takeaway: This strips away any remaining pretense that Facebook is just a neutral publishing platform. Political judgments permeate every page of these guidelines.

But what did you expect? As Facebook’s former chief security officer, Alex Stamos, told me in October, we’ve demanded that tech platforms police hate speech, and that only gives them more power. “That’s a dangerous path,” Stamos warned. “Five or ten years from now, there could be machine-learning systems that understand human languages as well as humans. We could end up with machine-speed, real-time moderation of everything we say online.”

