Artificial intelligence

At the White House, the idea of digital fakery is eroding the truth

November 8, 2018

The frightening future of digital fakery has arrived, in the form of a video of CNN reporter Jim Acosta. The footage in question appears to show Acosta roughly handling a White House aide during a press conference yesterday. Or does it?

Fact or fake: A fight has broken out on social media over whether the clip was, in fact, doctored by an editor at the right-wing conspiracy site Infowars to make it seem as if Acosta was being more physically aggressive than he actually was. That would be alarming because the White House press secretary, Sarah Sanders, later retweeted the clip as justification for revoking Acosta’s press credentials.

Truth or scare: As a number of keen-eyed Twitter users have pointed out, it looks like the clip was ever-so-slightly sped up at the moment contact is made.

It’s possible that this was an artifact of turning the clip into a jittery animated GIF, says Hany Farid, a world-renowned expert on digital forensics and a professor at Dartmouth, who has looked at the clip but has not analyzed it in detail. “A combination of a reduction in the quality of the video, a slowing down of the video, and the particular vantage point of the C-SPAN video gives the appearance that there was more contact between the reporter and the intern than there probably was,” he adds.
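For a sense of how little sophistication such an edit requires, here is a minimal Python sketch, assuming OpenCV is installed; the filenames and the one-second window are hypothetical stand-ins. It speeds up just a short segment of a clip by dropping every other frame and leaves the rest untouched. Because playback continues at the original frame rate, the only visible trace is motion that is briefly too fast.

```python
# A minimal sketch, assuming OpenCV (pip install opencv-python) is available.
# The filenames and the speed-up window below are hypothetical stand-ins.
import cv2

SRC = "press_conference.mp4"   # hypothetical input clip
DST = "doctored.mp4"           # hypothetical output clip
SPEED_WINDOW = (3.0, 4.0)      # seconds of the clip to speed up (illustrative)

cap = cv2.VideoCapture(SRC)
fps = cap.get(cv2.CAP_PROP_FPS)
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
out = cv2.VideoWriter(DST, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    t = frame_idx / fps  # timestamp of this frame, in seconds
    in_window = SPEED_WINDOW[0] <= t < SPEED_WINDOW[1]
    # Inside the window, keep only every other frame. Played back at the
    # original fps, the surviving motion looks roughly twice as fast.
    if not in_window or frame_idx % 2 == 0:
        out.write(frame)
    frame_idx += 1

cap.release()
out.release()
```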

AI trickery: The incident is all the more troubling given that artificial intelligence is making it ever easier to manipulate video footage. Even I was able to create a ridiculous clip of a Ted Cruz doppelganger with relative ease. As these digital tools become commonplace, the power video holds as “ground truth” will erode, and it will become easier for those in power to dismiss genuine evidence against them as just more “fake news.”

Truth out there? But the clip also shows you don’t need really clever AI to mislead people or stir up controversy. Videos that have been carefully staged and edited can be just as effective. As Farid says: “This is a good example of precisely the problem that emerges when video can be easily manipulated—anyone can claim that a video is fake, and that claim is credible. In many ways, this may be the larger threat than the actual fake footage.”
