It was nearly twice as good at identifying manipulated images as humans.
The research: Researchers from Adobe and UC Berkeley have created a tool that uses machine learning to identify when photos of people’s faces have been altered. The deep-learning tool was trained on thousands of images scraped from the internet. In a series of experiments, it was able to correctly identify edited faces 99% of the time, compared with a 53% success rate for humans.
Some caveats: It’s understandable that Adobe wants to be seen taking action on this issue, given that its own products are used to alter pictures. The catch is that this tool works only on images edited with Adobe Photoshop’s Face Aware Liquify feature.
It’s just a prototype, but the company says it plans to build on this research and provide tools to identify and discourage the misuse of its products across the board.
This story first appeared in our daily newsletter, The Download.