It was nearly twice as good at identifying manipulated images as humans.
The research: Researchers from Adobe and UC Berkeley have created a tool that uses machine learning to identify when photos of people’s faces have been altered. The deep-learning tool was trained on thousands of images scraped from the internet. In a series of experiments, it was able to correctly identify edited faces 99% of the time, compared with a 53% success rate for humans.
The context: There’s growing concern over the spread of fake images and “deepfake” videos. However, machine learning could be a useful weapon in the detection (as well as the creation) of fakes.
Some caveats: It’s understandable that Adobe wants to be seen to be acting on this issue, given that its own products are used to alter pictures. The downside is that the tool works only on images that were altered using Adobe Photoshop’s Face Aware Liquify feature.
It's just a prototype, but the company says it plans to take this research further and provide tools to identify and discourage the misuse of its products across the board.
This story first appeared in our daily newsletter The Download.