It was nearly twice as good at identifying manipulated images as humans.
The research: Researchers from Adobe and UC Berkeley have created a tool that uses machine learning to identify when photos of people’s faces have been altered. The deep-learning tool was trained on thousands of images scraped from the internet. In a series of experiments, it was able to correctly identify edited faces 99% of the time, compared with a 53% success rate for humans.
Some caveats: It’s understandable that Adobe wants to be seen acting on this issue, given that its own products are used to alter pictures. The downside is that this tool works only on images that were made using Adobe Photoshop’s Face Aware Liquify feature.
It's just a prototype, but the company says it plans to take this research further and provide tools to identify and discourage the misuse of its products across the board.
This story first appeared in our daily newsletter The Download.