
Facebook is getting better at detecting hate speech, but it still has a long way to go

The social network released its first content moderation report today. Here are the numbers you need to know.

In the first quarter of 2018 ...

- 583 million fake accounts were closed. An estimated 3 to 4 percent of Facebook’s monthly users are fake.
- Facebook took action against 836 million instances of spam.
- Facebook took enforcement action against 21 million posts containing nudity.
- The company found 2.5 million posts containing hate speech, a 56 percent increase over the last quarter of 2017.
- The number of terrorism-related posts removed increased by 73 percent over the previous quarter. The company says machine-learning algorithms are being used to locate older posts, hence the increase.

Why the numbers matter: The report gives us a picture of the sheer quantity of content Facebook’s software and human moderators are churning through. It’s important to remember that these figures cover only the posts and accounts Facebook actually identified; violating content the company missed isn’t counted.

The road ahead: Facebook’s software for finding hate speech is getting better, but it’s nowhere near ready to take over sole responsibility for policing content: 62 percent of the hate speech posts on which action was taken were first reported by users. Using AI to moderate posts is, it turns out, really hard.

