
Twitter says it may warn users about deepfakes—but won’t remove them

November 12, 2019

The news: Twitter has drafted a deepfake policy that would warn users about synthetic or manipulated media, but not remove it. Specifically, it says it would place a notice next to tweets that contain deepfakes, warn people before they share or like tweets that include deepfakes, or add a link to a news story or Twitter Moment explaining that it isn’t real. Twitter has said it may remove deepfakes that could threaten someone’s physical safety or lead to serious harm. People have until November 27 to give Twitter feedback on the proposals.

The context: It’s become relatively easy to make convincing doctored videos thanks to advances in artificial intelligence. That’s led to a huge panic over the potential for deepfakes to subvert democracy, as they can be used to make politicians seem to say or do whatever the creator wants.

A real threat?: The most notorious political deepfakes so far either have not actually been deepfakes (the Nancy Pelosi video released in May was simply slowed down) or have been created by people warning about deepfakes rather than by bad actors. For example, in the UK today two new deepfakes were released of the prime minister, Boris Johnson, and the leader of the opposition, Jeremy Corbyn, endorsing each other ahead of the election on December 12. But they were created by a social enterprise trying to raise awareness of the issue.

The real problem: There is no denying that deepfakes pose a significant new threat. But so far, they are mostly a threat to women, particularly famous actors and musicians. A recent report found that 96% of deepfakes are porn, virtually always created without the consent of the person depicted. Such content already violates Twitter’s existing rules and would be removed.

An issue for the whole industry: That said, it is refreshing to see a social-media company grappling with its content moderation responsibilities so openly. The varying responses to the Pelosi video (YouTube removed it, Facebook flagged it as false, and Twitter let it stand) show what a complex, thorny problem manipulated videos can pose. And unfortunately, we can’t expect deepfake detection technology to fix it, either. We’ll need social and legal solutions, too.

Illustration by Rose Wong

Get the latest updates from
MIT Technology Review

Discover special offers, top stories, upcoming events, and more.

Thank you for submitting your email!

Explore more newsletters

It looks like something went wrong.

We’re having trouble saving your preferences. Try refreshing this page and updating them one more time. If you continue to get this message, reach out to us at with a list of newsletters you’d like to receive.