Human brains and AIs can be hacked with these weird tweaked photos
Computer vision algorithms aren’t the only forms of “intelligence” that can be tricked by manipulated photos.
Uh-oh: Researchers have used an AI to design the first photos that fool both humans and computer vision algorithms: an unaltered image of a cat alongside a version that’s been tweaked to look weirdly like a dog.
For science! Finding human weaknesses in this way could help improve AI systems. In the paper describing these manipulated photos—coauthored by Ian Goodfellow, the creator of generative adversarial networks (GANs)—the researchers point out that if a certain class of altered images can’t fool the human mind, a “similar mechanism” for resisting them might exist in machine learning systems too. Most AI systems are loosely based on the human brain, after all.
Why it matters: So-called adversarial examples are no longer just a lab curiosity—stickers placed on physical objects can already confuse computer vision systems in the real world. And until autonomous vehicles can guarantee that their systems recognize every stop sign, they won’t be ready for the road.
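The basic recipe behind perturbations like these can be illustrated with the fast gradient sign method (FGSM), an attack introduced in earlier work by Goodfellow and colleagues: nudge every input value a tiny step in the direction that most increases the model’s loss. The toy logistic “model,” its weights, and the step size below are illustrative assumptions for the sketch, not details from the paper.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def sign(v):
    return (v > 0) - (v < 0)

def fgsm_perturb(x, w, b, y_true, eps=0.25):
    """One FGSM step: move each input value eps along the sign of the
    loss gradient, which barely changes the input but raises the loss."""
    p = sigmoid(dot(w, x) + b)                  # model's probability of class 1
    grad_x = [(p - y_true) * wi for wi in w]    # d(logistic loss)/dx
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad_x)]

# Toy "trained" weights and a four-pixel "image" (assumed for illustration).
w = [0.8, -1.2, 0.5, 0.3]
b = 0.0
x = [0.4, -0.2, 1.0, 0.1]

x_adv = fgsm_perturb(x, w, b, y_true=1.0)
# Confidence in the true class drops even though no pixel moved by more
# than eps = 0.25.
print(sigmoid(dot(w, x) + b), sigmoid(dot(w, x_adv) + b))
```

The surprise in the paper is that perturbations built this way, when transferred across many models at once, can also shift what a human glimpses in the image.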