Artificial intelligence

Deepfakes could anonymize people in videos while keeping their personality

September 17, 2019
Woman in video with face blurred. Ms. Tech; original photo: Unsplash

AI could generate faces that match the expressions of anonymous subjects to grant them privacy—without losing their ability to express themselves.

The news: A new technique uses generative adversarial networks (GANs), the technology behind deepfakes, to anonymize someone in a photo or video.

How it works: The algorithm extracts information about the person’s facial expression by locating key points on the eyes, ears, shoulders, and nose. It then uses a GAN, trained on a database of 1.5 million face images, to generate an entirely new face with the same expression, and blends it into the original photo so the background is retained.
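The pipeline described above can be sketched in a few lines. This is a minimal illustration, not the researchers’ code: `extract_keypoints` and `generate_face` are hypothetical stand-ins (a fixed set of coordinates and random pixels) for the real keypoint detector and the GAN generator, and the blending step is a hard paste rather than a learned composite.

```python
import numpy as np


def extract_keypoints(image):
    """Stand-in for the keypoint detector (eyes, ears, shoulders, nose).

    A real system runs a detection network; here we just return fixed
    coordinates relative to the image size.
    """
    h, w, _ = image.shape
    return {
        "left_eye": (w // 3, h // 3),
        "right_eye": (2 * w // 3, h // 3),
        "nose": (w // 2, h // 2),
    }


def generate_face(keypoints, size, rng):
    """Stand-in for the GAN generator.

    A real generator would sample a new face conditioned on the keypoints;
    here we emit random pixels of the right shape.
    """
    return rng.integers(0, 256, size=(size, size, 3), dtype=np.uint8)


def anonymize(image, rng=None):
    """Replace the face region with a generated one, keeping the background."""
    if rng is None:
        rng = np.random.default_rng(0)
    keypoints = extract_keypoints(image)
    h, w, _ = image.shape
    # Face bounding box inferred from the keypoints (here: a central square).
    size = min(h, w) // 2
    top, left = (h - size) // 2, (w - size) // 2
    new_face = generate_face(keypoints, size, rng)
    out = image.copy()
    # Blend the synthetic face into the original photo (hard paste here).
    out[top:top + size, left:left + size] = new_face
    return out
```

The key property the sketch preserves is the one that matters for anonymization: no pixel of the subject’s original face survives in the output, while everything outside the face box is untouched.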

Glitch: Developed by researchers at the Norwegian University of Science and Technology, the technique is still highly experimental. It works on many types of photos and faces, but still trips up when the face is partially occluded or turned at particular angles. The technique is also very glitchy for video.

Other work: This isn’t the first AI-based face anonymization technique. A paper published in February by researchers at the University at Albany used deep learning to transplant key elements of a subject’s facial expressions onto someone else. That method, however, required a consenting donor to offer their face as the new canvas for the expressions.

Why it matters: Face anonymization is used to protect the identity of someone, such as a whistleblower, in photos and footage. But traditional techniques, such as blurring and pixelation, risk being incomplete (the person can still be identified) or stripping away the person’s personality entirely (by erasing facial expressions). Because GANs don’t use any of the subject’s original face, they eliminate the first risk; and because they can re-create facial expressions in high resolution, they offer a solution to the second.

Not always the bad guy: The technique also demonstrates a new value proposition for GANs, which have developed a bad reputation for lowering the barrier to producing persuasive misinformation. While this study was limited to visual media, by extension it shows how GANs could also be applied to audio to anonymize voices.

