
An AI app that “undressed” women shows how deepfakes harm the most vulnerable

DeepNude has now been taken offline, but it won’t be the last time such technology is used to target vulnerable populations.
June 28, 2019
Image: a woman with her body blurred. Ms. Tech / Original image: Getty

Attention to deepfakes and synthetic media has grown in recent months. But while the conversation has focused primarily on their potential impact on politics, several experts in human rights and tech ethics have warned that another harm has been overlooked: the potentially devastating consequences for women and other vulnerable populations who are targeted with the technology and cannot protect themselves.

Now the latest deepfake experiment—an app called DeepNude that “undressed” photos of women—is playing out those nightmares. First reported by Vice, it used generative adversarial networks, or GANs, to swap the women’s clothes for highly realistic nude bodies. The article quickly inspired a viral backlash, and the app’s creator shut it down.

“The DeepNude app proves our worst fears about the unique way audiovisual tools can be weaponized against women,” says Mutale Nkonde, a fellow at the Data & Society Research Institute, who advised a bill introduced in Congress by Representative Yvette Clarke that would create mechanisms for victims of such malicious deepfakes to seek legal recourse for reputational damage.

The app specifically targeted women: Vice found that the software generated images only of the female body, even when given a picture of a man. The anonymous creator confirmed that he had trained the GAN algorithm only on nude photos of women—more than 10,000 in this case—because they were easier to find online. He said, however, that he intended eventually to make a male version as well.

Though the deepfakes didn’t depict the women’s actual bodies—the images were entirely synthesized by the algorithm—they still had the potential to cause significant emotional and reputational damage. They could easily be mistaken for the real thing and used as revenge porn or as a powerful tool for silencing women. Indeed, this has happened before: a female journalist in India had her face grafted onto a porn video after she began uncovering government corruption. The video instantly went viral, subjecting her to intense harassment and rape threats, and she had to go offline for several months.

Deepfakes are not a new threat; manipulated media has been around since long before AI. But the technology has accelerated and broadened existing trends, says Sam Gregory, program director at the human rights nonprofit Witness. Algorithms have made it far easier for far more people to generate ever more convincing fake media. As a result, everything people have used manipulated media to do in the past, such as attacking journalists, implying corruption, or obfuscating evidence, will become increasingly commonplace and dangerously difficult to detect.

The app is no different, he says: image-based sexual abuse of women was already a problem, and deepfakes are now adding fuel to the fire.

By the same logic, Nkonde worries that women won’t be the only vulnerable targets of deepfakes. Minorities, LGBTQ folks, and other groups often subject to the most severe online harassment will likely become victims too—though perhaps in different ways. During the 2016 US presidential campaign, for example, Russian operatives used fake African-American personas and related imagery as part of a Facebook disinformation campaign to heighten racial tensions among Americans.

“This was a new way for voter suppression, and it was through misappropriating people’s identities online,” Nkonde says. Deepfake technology would be a natural addition to the toolkit of malicious actors who impersonate others to disrupt communities and cause harm.

So where do we go from here? Both Nkonde and Gregory have shared similar recommendations with MIT Technology Review in the past: companies and researchers who produce tools for deepfakes must also invest in countermeasures, and social-media and search companies should integrate those countermeasures directly into their platforms. Nkonde also urges regulators to act quickly. “Unless government finds a way to protect the rights of consumers, apps like this are going to proliferate,” she says.

“Technology is not neutral,” says Gregory. “This [DeepNude] app is not dual use. It’s single use for a malicious purpose, and it is being created amorally.

“We need to really focus on the ethics of creating and sharing generative media tools,” he adds. “We should repeatedly call this out.”

