MIT Technology Review

Facebook is getting better at detecting hate speech, but it still has a long way to go

The social network released its first content moderation report today. Here are the numbers you need to know.

In the first quarter of 2018:

- 583 million fake accounts were closed. An estimated 3 to 4 percent of Facebook’s monthly users are fake.
- Facebook took action against 836 million instances of spam.
- Facebook took enforcement action against 21 million posts containing nudity.
- The company found 2.5 million posts containing hate speech, a 56 percent increase over the last quarter of 2017.
- The number of terrorism-related posts removed increased by 73 percent over the previous quarter. The company says machine-learning algorithms are being used to locate older posts, hence the increase.

Why the numbers matter: The report gives us a picture of the sheer quantity of content Facebook’s software and human moderators are churning through. It’s important to remember that these numbers represent only the posts and users that have actually been identified.

The road ahead: Facebook’s software for finding hate speech is getting better, but it’s nowhere near ready to take over sole responsibility for policing content: 62 percent of the hate speech posts on which action was taken were first reported by users. Using AI to moderate posts is, it turns out, really hard.