Artificial intelligence

At the White House, the idea of digital fakery is eroding the truth

November 8, 2018

The frightening future of digital fakery has arrived, in the form of a video of CNN reporter Jim Acosta. In a side-by-side comparison circulating online, the footage on the right shows Acosta roughly handling a White House aide during a press conference yesterday—or does it? (The original clip is on the left.)

Fact or fake: A fight has broken out on social media over whether the right-hand clip was, in fact, doctored by an editor at the right-wing conspiracy site Infowars to make it seem as if Acosta was being more physically aggressive than he actually was. That would be alarming because the White House press secretary, Sarah Sanders, later retweeted the clip as justification for revoking Acosta’s press credentials.

Truth or scare: As a number of keen-eyed Twitter users have pointed out, it looks like the clip was ever-so-slightly sped up at the moment contact is made.

It’s possible that this was an artifact of turning the clip into a jittery animated GIF, says Hany Farid, a world-renowned expert on digital forensics and a professor at Dartmouth. “A combination of a reduction in the quality of the video, a slowing down of the video, and the particular vantage point of the CSPAN video gives the appearance that there was more contact between the reporter and the intern than there probably was,” he adds. Farid has looked at the clip, but he has not analyzed it in detail.
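For readers curious how such a timing check might work in practice, here is a minimal sketch in Python with OpenCV. The file names are hypothetical placeholders, not the actual footage. It simply reports each clip's frame rate, frame count, and duration; if the same real-world moment spans fewer frames or less time in one version, frames were dropped or the footage was sped up, although, consistent with Farid's point, conversion to a low-quality GIF alone can produce both effects.

```python
# Rough timing comparison of two versions of the same clip.
# Requires opencv-python; file names below are placeholders.
import cv2

def clip_stats(path):
    """Return (fps, frame_count, duration_in_seconds) for a video file."""
    cap = cv2.VideoCapture(path)
    if not cap.isOpened():
        raise IOError(f"could not open {path}")
    fps = cap.get(cv2.CAP_PROP_FPS)
    frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    cap.release()
    duration = frames / fps if fps else float("nan")
    return fps, frames, duration

for label, path in [("original", "cspan_original.mp4"),
                    ("reposted", "reposted_clip.mp4")]:
    fps, frames, duration = clip_stats(path)
    print(f"{label}: {fps:.2f} fps, {frames} frames, {duration:.2f} s")
```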

AI trickery: The incident is all the more troubling given that artificial intelligence is making it ever easier to manipulate video footage. Even I was able to create a ridiculous clip of a Ted Cruz doppelganger with relative ease. The power videos hold as “ground truth” will be eroded as these digital tools become more commonplace. And this will also make it easier for those in power to discredit evidence against them as just more “fake news.”

Truth out there? But the clip also shows you don’t need really clever AI to mislead people or stir up controversy. Videos that have been carefully staged and edited can be just as effective. As Farid says: “This is a good example of precisely the problem that emerges when video can be easily manipulated—anyone can claim that a video is fake, and that claim is credible. In many ways, this may be the larger threat than the actual fake footage.”
