
The Defense Department has produced the first tools for catching deepfakes

Fake video clips made with artificial intelligence can also be spotted using AI—but this may be the beginning of an arms race.
August 7, 2018

The first forensics tools for catching revenge porn and fake news created with AI have been developed through a program run by the US Defense Department.

Forensics experts have rushed to find ways of detecting videos synthesized and manipulated using machine learning because the technology makes it far easier to create convincing fake videos that could be used to sow disinformation or harass people.

The most common technique for generating fake videos involves using machine learning to swap one person’s face onto another's. The resulting videos, known as “deepfakes,” are simple to make, and can be surprisingly realistic. Further tweaks, made by a skilled video editor, can make them seem even more real.

Video trickery relies on a machine-learning technique known as generative modeling, which lets a computer learn from real data before producing fake examples that are statistically similar. A recent twist on this pits two neural networks against each other, an arrangement known as a generative adversarial network, to produce ever more convincing fakes (see “The GANfather: The man who’s given machines the gift of imagination”).
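To make that adversarial setup concrete, here is a minimal sketch in Python using PyTorch; the framework choice and every number in it are illustrative assumptions, not anything DARPA or the forgers are known to use. The generator learns to mimic samples from a simple one-dimensional distribution, while deepfake systems apply the same training loop to images of faces.

```python
# Minimal GAN sketch: a generator learns to mimic a 1-D Gaussian,
# while a discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

def real_samples(n):
    # "Real data": draws from a Gaussian with mean 4 and std 1.25.
    return 4.0 + 1.25 * torch.randn(n, 1)

for step in range(2000):
    real = real_samples(64)
    fake = generator(torch.randn(64, 8))

    # Discriminator step: label real samples 1 and generated samples 0.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# The generator's output mean should drift toward the real data's mean (~4).
print(generator(torch.randn(1000, 8)).mean().item())
```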

The tools for catching deepfakes were developed through a program—run by the US Defense Advanced Research Projects Agency (DARPA)—called Media Forensics. The program was created to automate existing forensics tools, but has recently turned its attention to AI-made forgery.

"We've discovered subtle cues in current GAN-manipulated images and videos that allow us to detect the presence of alterations,” says Matthew Turek, who runs the Media Forensics program.

Four video stills in which Tucker Carlson gets his own Nicolas Cage makeover, with the actor’s face swapped onto the speaker.
University at Albany, SUNY

One remarkably simple technique was developed by Siwei Lyu, a professor at the State University of New York at Albany, and one of his students. “We generated about 50 fake videos and tried a bunch of traditional forensics methods. They worked on and off, but not very well,” Lyu says.

Then, one afternoon, while studying several deepfakes, Lyu realized that the faces made using deepfakes rarely, if ever, blink. And when they do blink, the eye movement is unnatural. That’s because deepfake algorithms are trained on still images, which tend to show a person with his or her eyes open.
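A blink-rate check in the spirit of Lyu’s observation might look something like the Python sketch below. It assumes per-frame eye landmarks (six points per eye, as in common 68-point face-landmark schemes) have already been extracted by some detector, and the thresholds are illustrative assumptions rather than the team’s published values.

```python
# Sketch: flag videos whose subject blinks far less often than people normally do.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: (6, 2) array of landmark coordinates around one eye."""
    # Vertical distances between upper- and lower-eyelid landmarks.
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    # Horizontal distance between the eye corners.
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)  # small value = eye closed

def count_blinks(ear_per_frame, closed_thresh=0.2, min_closed_frames=2):
    """Count blinks as runs of consecutive frames with a low aspect ratio."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < closed_thresh:
            run += 1
        else:
            if run >= min_closed_frames:
                blinks += 1
            run = 0
    return blinks

def looks_suspicious(ear_per_frame, fps=30.0, min_blinks_per_minute=4.0):
    """True if the clip's blink rate is implausibly low (thresholds assumed)."""
    minutes = len(ear_per_frame) / fps / 60.0
    rate = count_blinks(ear_per_frame) / max(minutes, 1e-9)
    return rate < min_blinks_per_minute
```

As Lyu notes below, a heuristic this simple is easy to defeat once forgers know about it, which is part of why detection work tends to stay a few steps ahead in private before being published.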

Others involved in the DARPA challenge are exploring similar tricks for automatically catching deepfakes: strange head movements, odd eye color, and so on. “We are working on exploiting these types of physiological signals that, for now at least, are difficult for deepfakes to mimic,” says Hany Farid, a leading digital forensics expert at Dartmouth College.

DARPA’s Turek says the agency will run more contests “to ensure the technologies in development are able to detect the latest techniques.”

The arrival of these forensics tools may simply signal the beginning of an AI-powered arms race between video forgers and digital sleuths. A key problem, says Farid, is that machine-learning systems can be trained to outmaneuver forensics tools.

Lyu says a skilled forger could get around his eye-blinking tool simply by collecting images that show a person blinking. He adds that his team has developed an even more effective technique, but he’s keeping it secret for the moment. “I’d rather hold off at least for a little bit,” Lyu says. “We have a little advantage over the forgers right now, and we want to keep that advantage.”

 
