Artificial intelligence

Facebook’s “radioactive data” tracks the images used to train an AI

An image of a pile of photographs. Jon Tyson | Unsplash

The news: A team from Facebook AI Research has developed a way to track exactly which images in a data set were used to train a machine-learning model. By making imperceptible tweaks to images, creating a kind of watermark, the researchers induced tiny corresponding changes in the behavior of an image classifier trained on those images, without impairing its overall accuracy. This let them later match a model with the images that were used to train it.

Why it matters: Facebook calls the technique “radioactive data” because it is analogous to the use of radioactive markers in medicine, which show up in the body under x-ray. Highlighting what data has been used to train an AI makes models more transparent, flagging potential sources of bias—such as a model trained on an unrepresentative set of images—or revealing when a data set was used without permission or for inappropriate purposes. 

Make no mistake: A big challenge was to change the images without breaking the resulting model. Tiny tweaks to an AI’s input can sometimes lead it to make stupid mistakes, such as identifying a turtle as a gun or a sloth as a racecar. Facebook made sure to design its watermarks so that this did not happen. The team tested the technique on ImageNet, a widely used data set of more than 14 million images, and found that it could detect the use of radioactive data in a particular model with high confidence even when only 1% of the images had been marked.
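The general idea can be illustrated with a toy experiment: nudge the training data faintly along a fixed secret direction, then check whether a trained classifier’s weights have drifted toward that direction. This is a minimal sketch of that intuition, not Facebook’s actual method; the data, the perturbation size, and the linear model are all invented for illustration, and the mark here is exaggerated so the effect shows up in a tiny example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 500 samples, 50 features, binary labels tied to feature 0.
n, d = 500, 50
X = rng.normal(size=(n, d))
y = (X[:, 0] > 0).astype(float)

# The "mark": a fixed random unit direction, added faintly to the
# training data (sign flipped per class so a linear model picks it up).
u = rng.normal(size=d)
u /= np.linalg.norm(u)
eps = 0.5  # exaggerated for this toy; a real mark would be far fainter
X_marked = X + eps * np.sign(y - 0.5)[:, None] * u

def train_logreg(X, y, lr=0.1, steps=300):
    # Plain gradient-descent logistic regression.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def alignment(w, u):
    # Cosine similarity between classifier weights and the mark.
    return float(w @ u / (np.linalg.norm(w) * np.linalg.norm(u)))

w_marked = train_logreg(X_marked, y)  # trained on marked data
w_clean = train_logreg(X, y)          # trained on clean data

print("marked model alignment:", round(alignment(w_marked, u), 3))
print("clean model alignment: ", round(alignment(w_clean, u), 3))
```

The model trained on marked data ends up measurably more aligned with the secret direction than one trained on clean data, even though both classify equally well; in spirit, that alignment is the “radioactive” trace the detector looks for.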
