It was nearly twice as good at identifying manipulated images as humans.
The research: Researchers from Adobe and UC Berkeley have created a tool that uses machine learning to identify when photos of people’s faces have been altered. The deep-learning tool was trained on thousands of images scraped from the internet. In a series of experiments, it was able to correctly identify edited faces 99% of the time, compared with a 53% success rate for humans.
Some caveats: It’s understandable that Adobe wants to be seen acting on this issue, given that its own products are used to alter pictures. The downside is that the tool works only on images edited with Adobe Photoshop’s Face Aware Liquify feature.
It’s just a prototype, but the company says it plans to take the research further and build tools to identify and discourage the misuse of its products across the board.
This story first appeared in our daily newsletter The Download.