Facebook has demonstrated a neat, and slightly creepy, trick: its AI can now automatically open people’s eyes in photos.
Eye-opening: The technology could help salvage photos in which someone has blinked at the wrong moment. It also shows how much easier it will become to manipulate images and video in the coming years, thanks to progress in artificial intelligence.
Dueling networks: Facebook's researchers used what's known as a "generative adversarial network," which pits two neural networks against each other. One network learns from a data set (photos of open and closed eyes) and tries to generate synthetic examples. The other tries to tell the fakes from the real thing, pushing the first to create ever more convincing fakes.
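The adversarial loop above can be sketched in miniature. The following is a hypothetical toy example, not Facebook's actual system: instead of images of eyes, the "real data" is just numbers drawn from a Gaussian, the generator and discriminator are single-parameter-pair models, and gradients are computed by hand. It illustrates only the core idea, i.e. that the generator improves because the discriminator punishes unconvincing fakes.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy setup (all names and numbers here are illustrative assumptions):
# real data  ~ N(4, 1)
# generator  g(z) = a*z + b, with noise z ~ N(0, 1)
# discriminator D(x) = sigmoid(w*x + c), outputs P(x is real)
a, b = 1.0, 0.0      # generator parameters
w, c = 0.0, 0.0      # discriminator parameters
lr, batch = 0.05, 64

for step in range(2000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0,
    # using manual gradients of the standard GAN cross-entropy loss.
    x_real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w -= lr * np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    c -= lr * np.mean(-(1 - d_real) + d_fake)

    # Generator step: push D(fake) toward 1 (non-saturating loss),
    # i.e. make the fakes harder for the discriminator to reject.
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    a -= lr * np.mean(-(1 - d_fake) * w * z)
    b -= lr * np.mean(-(1 - d_fake) * w)

# After training, the generator's samples should have drifted from
# a mean of 0 toward the real data's mean of 4.
samples = a * rng.normal(0.0, 1.0, 10000) + b
print(float(np.mean(samples)))
```

Real systems replace the scalar parameters with deep convolutional networks and the Gaussian with a photo data set, but the alternating two-player training is the same.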
Kinda creepy: In testing, Facebook's eye-opening software often fooled human judges, too. But the results can sometimes look a bit strange, for example when a person's closed eyes are partly covered by hair. That failure mode is a reminder that the underlying system has no idea what eyes actually are.