
Deepfakes could anonymize people in videos while keeping their personality

September 17, 2019
Woman in video with face blurred. Ms. Tech; original photo: Unsplash

AI could generate faces that match the expressions of anonymous subjects to grant them privacy—without losing their ability to express themselves.

The news: A new technique uses generative adversarial networks (GANs), the technology behind deepfakes, to anonymize someone in a photo or video.

How it works: The algorithm extracts information about the person’s facial expression by locating the eyes, ears, shoulders, and nose. It then uses a GAN, trained on a database of 1.5 million face images, to generate an entirely new face with the same expression, and blends that face into the original photo while keeping the background intact.
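In rough outline, the pipeline works as in the sketch below. This is an illustrative stand-in, not the researchers’ code: the function names (detect_keypoints, generate_face, anonymize) are hypothetical, and the keypoint detector and GAN generator are stubs where a real system would plug in trained networks.

```python
import numpy as np

def detect_keypoints(image: np.ndarray) -> np.ndarray:
    """Stub: return (x, y) keypoints for eyes, ears, nose, and shoulders.
    A real pipeline would run a trained keypoint/pose detector here."""
    h, w = image.shape[:2]
    return np.random.rand(7, 2) * [w, h]

def generate_face(keypoints: np.ndarray, context: np.ndarray) -> np.ndarray:
    """Stub: a trained GAN generator would synthesize a brand-new face,
    conditioned on the keypoints and the surrounding (face-removed) context."""
    return np.random.rand(*context.shape)

def anonymize(image: np.ndarray, face_box: tuple) -> np.ndarray:
    """Replace the face inside face_box with a synthetic one that keeps the
    original pose and expression cues but none of the original identity."""
    x0, y0, x1, y1 = face_box
    keypoints = detect_keypoints(image)      # expression/pose cues only
    context = image.copy()
    context[y0:y1, x0:x1] = 0                # remove the original face pixels
    new_face = generate_face(keypoints, context[y0:y1, x0:x1])
    out = image.copy()
    out[y0:y1, x0:x1] = new_face             # blend the generated face back in
    return out

if __name__ == "__main__":
    frame = np.random.rand(256, 256, 3)      # placeholder for a video frame
    print(anonymize(frame, (64, 64, 192, 192)).shape)
```

The key design point the sketch tries to show: the generator never sees the original face pixels, only the keypoints and the surrounding context, which is why the output carries the expression but not the identity.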

Glitch: Developed by researchers at the Norwegian University of Science and Technology, the technique is still highly experimental. It works on many types of photos and faces, but still trips up when the face is partially occluded or turned at particular angles. The technique is also very glitchy for video.

Other work: This isn’t the first AI-based face anonymization technique. A paper published in February by researchers at the University at Albany used deep learning to transplant key elements of a subject’s facial expressions onto someone else’s face. That method required a consenting donor to offer his or her face as the new canvas for the expressions.

Why it matters: Face anonymization is used to protect the identity of someone, such as a whistleblower, in photos and footage. But traditional techniques, such as blurring and pixelation, run the risk of being incomplete (i.e., the person’s identity can be discovered anyway) or completely stripping away the person’s personality (i.e., by removing facial expressions). Because GANs don’t use the subject’s original face at all, they eliminate any risk of the former problem. They can also re-create facial expressions in high resolution, thus offering a solution to the latter.

Not always the bad guy: The technique also demonstrates a new value proposition for GANs, which have developed a bad reputation for lowering the barrier to producing persuasive misinformation. While this study was limited to visual media, by extension it shows how GANs could also be applied to audio to anonymize voices.
