
Facebook says it’s going to use machine learning to catch revenge porn

March 15, 2019

That Facebook thinks it can tackle the problem this way shows just how powerful its ability to identify people with AI has become.

The news: Facebook announced today that it will use machine learning to detect and block nude or near-nude images and videos that have been shared without permission—before they have even been reported. Revenge porn (the sharing of sexual images or videos of someone, usually a former partner, without consent) has become a serious problem, with devastating consequences for victims. Facebook also says it will overhaul the process by which victims can report unapproved images.
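
Facebook has not published technical details, but proactive screening of this kind typically runs each new upload through an image classifier and routes high-confidence hits to human reviewers rather than deleting them automatically. Here is a minimal sketch in Python/PyTorch, assuming a hypothetical fine-tuned checkpoint (`nudity_classifier.pt`) and an arbitrary 0.9 review threshold; this is an illustration of the general approach, not Facebook's actual pipeline:

```python
# Sketch of proactive upload screening with a binary image classifier.
# The checkpoint file and the 0.9 threshold are hypothetical.
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet-style preprocessing for a ResNet backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# ResNet-50 with a two-way head: [benign, flagged].
model = models.resnet50()
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.load_state_dict(torch.load("nudity_classifier.pt"))  # hypothetical weights
model.eval()

def screen_upload(path: str, threshold: float = 0.9) -> str:
    """Return a routing decision for a newly uploaded image."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(image), dim=1)[0]
    # High-confidence hits are escalated to a human reviewer instead of
    # being removed outright, keeping a person in the loop.
    return "send_to_human_review" if probs[1].item() >= threshold else "allow"
```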

Face time: Facebook didn't say exactly what sort of machine learning it was going to use, but it has an almost unparalleled ability to identify people in images, thanks to a vast corpus of labeled training data supplied by its own users.

Training sets: Although many users are unaware, social-media photos are widely used to train machine-learning algorithms. State-of-the-art programs are now often better than humans at recognizing people in snaps.
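
For a sense of how accessible this capability has become, the open-source face_recognition library can match faces against labeled examples in a few lines. The sketch below assumes two placeholder image files on disk; it uses the library's public API and has nothing to do with Facebook's internal systems:

```python
# Toy face matching with the open-source face_recognition library.
# File names are placeholders; assumes each image contains at least one face.
import face_recognition

# Encode a known face from a labeled photo.
known_image = face_recognition.load_image_file("known_person.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Encode every face detected in a new, unlabeled photo.
unknown_image = face_recognition.load_image_file("new_upload.jpg")
unknown_encodings = face_recognition.face_encodings(unknown_image)

# compare_faces returns one boolean per candidate; a lower tolerance
# means a stricter match (0.6 is the library default).
for encoding in unknown_encodings:
    match = face_recognition.compare_faces([known_encoding], encoding,
                                           tolerance=0.6)
    print("match" if match[0] else "no match")
```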

Coming threat: New detection technology could become especially important as it becomes ever easier to generate convincing-looking fake video with AI. The rise of easy-to-use face-swapping software has already led to a proliferation of fake celebrity porn and other weird video mashups.

Silver bullets: The Facebook effort is a worthwhile use of machine learning, but AI is no silver bullet for dealing with harassment, abuse, or fake news on social media (regardless of what Mark Zuckerberg might tell Congress). Humans will always find ways to outwit the best algorithms. Besides that, the problem sadly extends far beyond the walls of Facebook.
