
Deepfake-busting apps can spot even a single pixel out of place

Two startups are using algorithms to track when images are edited—from the moment they’re taken.
November 1, 2018
Courtesy of Serelay

Falsifying photos and videos used to take a lot of work. Either you used CGI to generate photorealistic images from scratch (both challenging and expensive) or you needed some mastery of Photoshop—and a lot of time—to convincingly modify existing pictures.

Now the advent of AI-generated imagery has made it easier for anyone to tweak an image or a video with confusingly realistic results. Earlier this year, MIT Technology Review senior AI editor Will Knight used off-the-shelf software to forge his own fake video of US senator Ted Cruz. The video is a little glitchy, but it won’t be for long.

That same technology is creating a growing class of footage and photos, called “deepfakes,” that have the potential to undermine truth, confuse viewers, and sow discord at a much larger scale than we’ve already seen with text-based fake news.

These are the possibilities that disturb Hany Farid, a computer science professor at Dartmouth College who has been debunking fake imagery for 20 years. “I don’t think we’re ready yet,” he warns. But he’s hopeful that growing awareness of the issue and new technological developments could better prepare people to discern true images from manipulated creations.

An original image of Oxford University’s Brasenose College.
Courtesy of Serelay
An example of how the original image could be edited to remove the LGBTQ rainbow flag on the roof.
Courtesy of Serelay

There are two main ways to deal with the challenge of verifying images, explains Farid. The first is to look for modifications in an image. Image forensics experts use computational techniques to pick out whether any pixels or metadata seem altered. They can look for shadows or reflections that don’t follow the laws of physics, for example, or check how many times an image file has been compressed to determine whether it has been saved multiple times.
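For a concrete sense of the compression check, here is a minimal sketch of error level analysis, one classic forensic technique: re-save a JPEG at a known quality and look at where the result differs from the original, since edited or pasted-in regions often recompress differently from the rest of the frame. The function name and quality setting are illustrative, not any particular vendor's method.

```python
# A minimal sketch of error level analysis (ELA). Requires Pillow:
#   pip install Pillow
import io
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    # Recompress the image in memory at a fixed JPEG quality.
    buf = io.BytesIO()
    original.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)
    # Per-pixel absolute difference; unusually bright regions recompressed
    # differently and are candidates for closer inspection.
    return ImageChops.difference(original, resaved)

# ela = error_level_analysis("suspect.jpg")  # hypothetical file name
# ela.save("ela_map.png")
```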

The second, newer method is to verify an image’s integrity the moment it is taken. This involves performing dozens of checks to make sure the photographer isn’t trying to spoof the device’s location data and time stamp. Do the camera’s coordinates, time zone, altitude, and nearby Wi-Fi networks all corroborate one another? Does the light in the image refract as it would for a three-dimensional scene? Or is someone taking a picture of another two-dimensional photo?
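As an illustration of what one such cross-check might look like, here is a sketch that asks whether a device's reported UTC offset agrees with the time zone implied by its GPS coordinates. The field names are hypothetical and the startups' actual checks are proprietary; this version leans on the open-source timezonefinder package.

```python
# A sketch of one capture-time consistency check, with hypothetical
# metadata fields. Requires: pip install timezonefinder  (Python 3.9+)
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo
from timezonefinder import TimezoneFinder

def timezone_matches_coordinates(lat: float, lng: float,
                                 capture_time: datetime,
                                 reported_offset_hours: float) -> bool:
    """True if the device's reported UTC offset agrees with the time zone
    at its reported GPS position at the moment of capture."""
    tz_name = TimezoneFinder().timezone_at(lat=lat, lng=lng)
    if tz_name is None:  # e.g., coordinates in the middle of the ocean
        return False
    expected = capture_time.astimezone(ZoneInfo(tz_name)).utcoffset()
    return expected == timedelta(hours=reported_offset_hours)

# A photo claiming to be taken in Oxford (51.75 N, 1.26 W) at UTC+0:
# timezone_matches_coordinates(51.75, -1.26,
#     datetime(2018, 11, 1, 12, 0, tzinfo=timezone.utc), 0.0)  # -> True
```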

Farid thinks this second approach is particularly promising. Considering the two billion photos that are uploaded to the web daily, he thinks it could help verify images at scale. 

Serelay allows users to upload suspect photos to check whether they have been doctored. The system performs a series of checks to determine where modifications, if any, have been made.
Courtesy of Serelay

Two startups, US-based Truepic (which Farid consults for) and UK-based Serelay, are now working to commercialize this idea. They have taken similar approaches: each has free iOS and Android camera apps that use proprietary algorithms to automatically verify photos when taken. If an image goes viral, it can be compared against the original to check whether it has retained its integrity. 

While Truepic uploads its users’ images and stores them on its servers, Serelay stores only a digital fingerprint of sorts, computed as roughly a hundred mathematical values from each image. (The company claims these values are enough to detect even a single-pixel edit and to determine roughly which section of the image was changed.) Truepic says it stores the full images so that users can safely delete sensitive photos from their devices; in some instances, Truepic users operating in high-threat scenarios, like a war zone, need to remove the app immediately after documenting a scene. Serelay, in contrast, believes that not storing the photos affords users greater privacy.
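Serelay has not published how its hundred-odd values are computed, but a grid of per-block hashes gives a rough sense of how a compact fingerprint can both detect and localize an edit: changing even one pixel alters its block's hash. This sketch is purely illustrative.

```python
# A toy localizable fingerprint: one hash per cell of a 10 x 10 grid,
# i.e., 100 values per image. Requires Pillow: pip install Pillow
import hashlib
from PIL import Image

GRID = 10  # 10 x 10 grid -> 100 values

def fingerprint(path: str) -> list[str]:
    img = Image.open(path).convert("RGB")
    w, h = img.size
    hashes = []
    for row in range(GRID):
        for col in range(GRID):
            box = (col * w // GRID, row * h // GRID,
                   (col + 1) * w // GRID, (row + 1) * h // GRID)
            hashes.append(hashlib.sha256(img.crop(box).tobytes()).hexdigest())
    return hashes

def changed_blocks(fp_a: list[str], fp_b: list[str]) -> list[tuple[int, int]]:
    """Grid coordinates (row, col) of blocks whose hashes differ."""
    return [(i // GRID, i % GRID)
            for i, (a, b) in enumerate(zip(fp_a, fp_b)) if a != b]
```

Note one design caveat: exact cryptographic hashes like these would flag even a benign re-save as a change, so a production fingerprint would need per-block values that tolerate ordinary recompression while still exposing deliberate edits.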

Serelay is able to catch and highlight the missing flag in the photo.
Courtesy of Serelay

As an added layer of trust and protection, Truepic also stores all photos and metadata using a blockchain—the technology behind Bitcoin that combines cryptography and distributed networking to securely store and track information.
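The details of Truepic's blockchain integration are not public, but the underlying idea, an append-only log in which each record commits to the one before it, can be sketched in a few lines with hypothetical record fields. Tampering with any stored entry breaks every subsequent link, which is what makes the log auditable.

```python
# A minimal hash-chained log of image fingerprints and metadata.
import hashlib
import json

def append_record(chain: list[dict], image_hash: str, metadata: dict) -> None:
    prev = chain[-1]["record_hash"] if chain else "0" * 64
    record = {"image_hash": image_hash, "metadata": metadata, "prev": prev}
    payload = json.dumps(record, sort_keys=True).encode()
    record["record_hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)

def verify_chain(chain: list[dict]) -> bool:
    prev = "0" * 64
    for record in chain:
        body = {k: v for k, v in record.items() if k != "record_hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if record["prev"] != prev or \
           hashlib.sha256(payload).hexdigest() != record["record_hash"]:
            return False  # an entry was altered or reordered
        prev = record["record_hash"]
    return True
```

A Bitcoin-style blockchain layers distributed consensus on top of this chaining, so that no single party, including the company itself, can quietly rewrite the log.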

“It’s not bulletproof,” Farid admits, and he says there are some downsides. For instance, users must shoot through the verification app rather than their phone’s built-in camera app. He also notes that companies attempting to commercialize this kind of technology may prioritize monetization over security. “There is some trust we are putting in the companies building these apps,” he says.

But there are also mitigating strategies. Truepic and Serelay both offer software development kits that make their technology available to third-party platforms. The idea is to one day make capture-time verification an industry standard for camera apps, including Facebook’s, Snapchat’s, or even Apple’s native camera app. In that scenario, an unaltered image posted on social media could automatically receive a check mark, like a Twitter verification badge, indicating that it matches an image in the verification database, a sign that Serelay hopes would establish trustworthiness.

“The vast majority of the content we’re seeing online is taken with mobile devices,” says Farid. “There’s basically a handful of cameras out there that can incorporate this type of technology into their system, and I think you’d have a pretty good solution.”

Each startup is now in early talks with social-media companies to explore the possibility of a partnership, and Serelay is also part of a new Facebook accelerator program called LDN_LAB.

While the technology is not yet prevalent, Farid encourages people to use it by default when documenting high-stakes scenarios, whether those be political campaign speeches, human rights violations, or pieces of evidence at a crime scene. Truepic has already seen citizens use its app to document crises in Syria. Al Jazeera then used the verified footage to produce several videos. Both companies have also marketed their technology in the insurance industry as a verified way to document damage.

Farid says it’s important for companies doing this work to be transparent about their processes and work with trusted partners. That can help maintain user trust and keep bad actors away.

We still have a way to go to be fully prepared for the proliferation of deepfakes, he says. But he’s hopeful. “The Truepic-type technology and the Serelay-type technology is in good shape,” he says. “I think we’re getting ready.”
