Artificial intelligence

An AI app that turns you into a movie star has risked the privacy of millions

September 4, 2019
An image of the Chinese AI app ZAO. Da Qing/AP

ZAO, a viral Chinese app that uses AI to swap users' faces with those of famous actors, is now embroiled in a major privacy controversy.

The news: On Friday, ZAO, a new app from the Chinese social-media developer Momo, instantly went viral on Chinese social media. It lets users upload a single portrait and, within seconds, see their face swapped onto actors in iconic movie scenes. By Sunday, it had become the most downloaded free entertainment app on China’s iOS App Store.

AI fakery: It’s the latest—and perhaps most impressive—application of generative adversarial networks, or GANs, the AI algorithms behind deepfakes. While GANs have been used for face-editing and face-swapping before (increasingly so in Hollywood films), ZAO’s use of a single photo, coupled with the speed and seamlessness of its swap, demonstrates how far the state of the art in media fakery has advanced.
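
For readers who want a sense of what “adversarial” means here, the sketch below shows the bare-bones GAN training loop on toy tensors: a generator learns to produce fake samples while a discriminator learns to tell them from real ones, and each improves against the other. It is a conceptual illustration only; the network sizes, data, and training schedule are assumptions for demonstration, not anything from ZAO, FaceApp, or production deepfake systems.

# A minimal, illustrative GAN training loop on toy data -- not ZAO's model.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32   # illustrative sizes (assumptions)

# Generator: maps random noise to fake samples.
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# Discriminator: scores samples as real (1) or fake (0).
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_data = torch.randn(batch, data_dim)   # stand-in for real face images

for step in range(200):
    # Discriminator step: push real samples toward 1, generated ones toward 0.
    fake = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real_data), torch.ones(batch, 1))
              + loss_fn(discriminator(fake), torch.zeros(batch, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: produce samples the discriminator scores as real.
    fake = generator(torch.randn(batch, latent_dim))
    g_loss = loss_fn(discriminator(fake), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()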

The controversy: Within hours of its release, ZAO began to spark privacy concerns, specifically over a clause in its user agreement that gave the developers the right to use all uploaded photos for free, in perpetuity, and to transfer that right to any third party without the user’s permission. Legal experts in China said the clause was not legal, and by Saturday the developer had caved under pressure and removed it. WeChat, China’s top social-networking app, also banned the sharing of any footage or photos from ZAO.

Déjà vu: The episode echoed a similar controversy over FaceApp, a photo-editing app that went viral in July. That app also used GANs to retouch people’s portraits and had amassed over 150 million photos of faces since its launch. ZAO met a much quicker and sharper backlash, but it too had likely been used by millions of people by the time it revised its policy. On one hand, the frequency of such incidents shows how easily users’ personal data can be co-opted and repurposed beyond their control. On the other, it shows that people have become more sensitive to privacy and are less willing to give it up without a fight.

