The social network released its first content moderation report today. Here are the numbers you need to know.
In the first quarter of 2018:
- 583 million fake accounts were closed. The company estimates that 3 to 4 percent of Facebook’s monthly users are fake.
- 836 million instances of spam had action taken against them.
- Facebook took enforcement action against 21 million posts containing nudity.
- The company found 2.5 million posts containing hate speech, a 56 percent increase over the last quarter of 2017.
- The number of terrorism-related posts removed increased by 73 percent over the previous quarter. The company attributes the jump to machine-learning algorithms now being used to locate older posts.
Why the numbers matter: The report gives us a picture of the sheer quantity of content Facebook’s software and human moderators are churning through. It’s important to remember that these figures cover only the posts and accounts the company actually identified; the true totals are unknown.
The road ahead: Facebook’s software for finding hate speech is getting better, but it’s nowhere near ready to take over sole responsibility for policing content: 62 percent of the hate speech posts on which action was taken were first reported by users. Using AI to moderate posts is, it turns out, really hard.