Artificial intelligence

This is how Facebook’s AI looks for bad stuff

November 29, 2019
How Facebook's machine learning identifies people and objects. Credit: Facebook

The context: The vast majority of Facebook’s moderation is now done automatically by the company’s machine-learning systems, reducing the amount of harrowing content its moderators have to review. In its latest community standards enforcement report, published earlier this month, the company claimed that 98% of terrorist videos and photos are removed before anyone has the chance to see them, let alone report them. 

So, what are we seeing here? The company has been training its machine-learning systems to identify and label objects in videos, from the mundane, such as vases or people, to the dangerous, such as guns or knives. Facebook's AI uses two main approaches to look for dangerous content. One is to employ neural networks that look for the features and behaviors of known objects and label them with varying percentages of confidence (as we can see in the video above).
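To make that labeling step concrete, here is a minimal sketch using an off-the-shelf pretrained detector (torchvision's Faster R-CNN, trained on the COCO dataset). Facebook has not published its production models, so this is purely illustrative of how objects in a frame get tagged with confidence scores:

```python
# A minimal sketch of confidence-scored object detection, using an
# off-the-shelf pretrained detector (torchvision's Faster R-CNN, trained
# on COCO). This is not Facebook's production model; it just illustrates
# the labeling step shown in the demo video.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Excerpt of the COCO class-ID mapping this detector uses.
COCO_LABELS = {1: "person", 44: "bottle", 49: "knife", 86: "vase"}

def label_frame(image_path, threshold=0.5):
    """Return (label, confidence) pairs for objects detected in one frame."""
    frame = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        detections = model([frame])[0]
    return [
        (COCO_LABELS.get(int(cls), f"class_{int(cls)}"), round(float(score), 2))
        for cls, score in zip(detections["labels"], detections["scores"])
        if score >= threshold
    ]

# e.g. [("person", 0.98), ("knife", 0.61)]: each object is tagged with a
# confidence percentage, as in the video above.
print(label_frame("frame.jpg"))
```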

Training in progress: These neural networks are trained on a combination of videos pre-labeled by Facebook's human reviewers, reports from users, and, soon, videos taken by London's Metropolitan Police. The neural nets use this information to guess what the entire scene might be showing, and whether it contains any behavior or images that should be flagged. Facebook gave more details on how its systems work at a press briefing this week.
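In broad strokes, that supervised step looks something like the toy sketch below. The tiny model, random stand-in frames, and hyperparameters are placeholders, since Facebook has not published its training setup; the point is only that human-labeled examples drive the learning:

```python
# A toy sketch of supervised training on human-labeled frames (PyTorch).
# Model, data, and hyperparameters are placeholders, not Facebook's.
import torch
from torch import nn

NUM_CLASSES = 2  # e.g. 0 = benign, 1 = should be flagged

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, NUM_CLASSES))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-ins for frames that human reviewers have already labeled.
frames = torch.rand(16, 3, 32, 32)
labels = torch.randint(0, NUM_CLASSES, (16,))

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(frames), labels)  # penalize wrong predictions
    loss.backward()                        # compute gradients
    optimizer.step()                       # nudge weights toward the labels
```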

Then what? If the system decides that a video contains problematic images or behavior, it can remove it automatically or send it to a human content reviewer. If the video breaks the rules, Facebook can then create a hash, a unique string of numbers that serves as a fingerprint for the file, and propagate it throughout the system so that matching content is automatically deleted if anyone tries to re-upload it. These hashes can also be shared with other social-media firms so they can take down copies of the offending file.
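Here is a simplified sketch of that hash-and-block step. For brevity it uses a cryptographic SHA-256 digest, which only matches byte-identical files; Facebook's real systems use perceptual hashes (it open-sourced its PDQ and TMK+PDQF matching algorithms in 2019) so that re-encoded or lightly edited copies still match:

```python
# A simplified sketch of hash-based blocking. SHA-256 stands in for the
# perceptual hashes (e.g. Facebook's PDQ) a production system would use.
import hashlib

banned_hashes = set()  # propagated across the system, and shared with peers

def file_hash(path):
    """Compute a digest, a unique string of numbers, for a file's bytes."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def ban(path):
    """Record a rule-breaking file so future uploads of it are rejected."""
    banned_hashes.add(file_hash(path))

def allow_upload(path):
    """Reject any upload whose hash matches known violating content."""
    return file_hash(path) not in banned_hashes
```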

“These [Metropolitan Police] videos are incredibly useful for us. Terrorist events are rare, thankfully, but it means the amount of training data is so small,” engineering manager Nicola Bortignon said on a call.

One weak spot: Facebook is still struggling to automate its understanding of the meaning, nuance, and context of language. That’s why the company relies on people to report the overwhelming majority of bullying and harassment posts that break its rules: just 16% of these posts are identified by its automated systems. As the technology advances, we can expect to see that figure increase. However, getting AI to truly understand language remains one of the field’s biggest challenges.
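As a toy illustration of why language is harder than object recognition, consider a simple bag-of-words classifier (scikit-learn, with made-up example posts, nothing like Facebook's actual systems). It sees only word frequencies, so the same words used as banter between friends and as abuse toward a stranger look identical to it:

```python
# A toy sketch of why text moderation needs context: a bag-of-words model
# cannot tell teasing from abuse. Training data here is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_posts = [
    "you are an idiot and everyone hates you",    # harassment
    "nobody wants you here, just leave",          # harassment
    "great game last night, you played so well",  # benign
    "happy birthday, hope you have a great day",  # benign
]
train_labels = [1, 1, 0, 0]  # 1 = harassment, 0 = benign

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_posts, train_labels)

# Affectionate banter shares words with harassment, so a context-blind
# model can easily misfire on it.
print(clf.predict_proba(["you absolute idiot, I love you"]))
```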

The bigger picture: In March, a terrorist killed 51 people at two mosques in Christchurch, New Zealand. He live-streamed the massacre on Facebook, and videos of it circulated on the site for months afterward. It was a wake-up call for the industry. If it happened again now, there would be a better chance of the footage being caught and removed quickly.
