To those with long memories, the hype surrounding artificial intelligence is becoming ever more reminiscent of the dot-com boom.
Billions of dollars are being invested into AI startups and AI projects at giant companies. The trouble, says Zachary Lipton, is that the opportunity is being overshadowed by opportunists making overblown claims about the technology’s capabilities.
During a talk at MIT Technology Review’s EmTech conference today, Lipton warned that the hype is blinding people to the technology’s limitations. “It’s getting harder and harder to distinguish what’s a real advance and what is snake oil,” he said.
The AI technique known as deep learning has proved very powerful at tasks like image recognition and voice translation, and it now helps power everything from self-driving cars to translation apps on smartphones.
But the technology still has significant limitations. Many deep-learning models only work well when fed vast amounts of data, and they often struggle to adapt to fast-changing real-world conditions.
In his presentation, Lipton also highlighted the tendency of AI boosters to claim human-like capabilities for the technology. The risk is that the AI bubble will lead people to place too much faith in algorithms governing things like autonomous vehicles and clinical diagnoses.
“Policymakers don’t read the scientific literature,” warned Lipton, “but they do read the clickbait that goes around.” The media business, he says, is complicit here because it’s not doing a good enough job of distinguishing between real advances in the field and PR fluff.
Lipton isn’t the only academic sounding the alarm. In a recent blog post, “Artificial Intelligence—The Revolution Hasn’t Happened Yet,” Michael Jordan, a professor at the University of California, Berkeley, argues that AI is too often bandied about as “an intellectual wildcard,” which makes it harder to think critically about the technology’s potential impact.
Still, Lipton acknowledges that he faces an uphill struggle in trying to puncture the hype. “I feel like I’m just a pebble in a stream,” he says.