MIT Technology Review

Facebook’s leaked moderation rules show why Big Tech can’t police hate speech

Society asked Big Tech to shut down hate speech online. We got exactly what we asked for.

The news: The New York Times’s Max Fisher published extracts from more than 1,400 pages of internal Facebook documents, containing rules for the company’s global army of more than 7,500 content moderators. (Motherboard had previously published some of the same material.)

What’s inside? A sprawling hodgepodge of guidelines, restrictions, and classifications. The rules on hate speech alone “run to 200 jargon-filled, head-spinning pages.” They include details on how to interpret emoji (use of the same emoji can count as both “bullying” and “praising,” apparently) and lists of people and political parties to monitor for possible hate speech. The documents show Facebook to be “a far more powerful arbiter of global speech” than it has admitted, Fisher writes.

The problem: The guidelines are not only byzantine; some are out of date or contain errors. They also vary widely depending on how much pressure the company is under: “Facebook blocks dozens of far-right groups in Germany, where the authorities scrutinize the social network, but only one in neighboring Austria.” Moderators, most of whom work for outsourcing companies and get minimal training, are expected to make complex judgments in a matter of seconds, processing a thousand posts a day, with rules that change frequently in response to political events, and often using Google Translate.

The takeaway: This strips away any remaining pretense that Facebook is just a neutral publishing platform. Political judgments permeate every page of these guidelines.

But what did you expect? As Facebook’s former chief security officer, Alex Stamos, told me in October, we’ve demanded that tech platforms police hate speech, and that only gives them more power. “That’s a dangerous path,” Stamos warned. “Five or ten years from now, there could be machine-learning systems that understand human languages as well as humans. We could end up with machine-speed, real-time moderation of everything we say online.”
