Artificial intelligence is often overhyped—and here’s why that’s dangerous
To those with long memories, the hype surrounding artificial intelligence is becoming ever more reminiscent of the dot-com boom.
Billions of dollars are being invested into AI startups and AI projects at giant companies. The trouble, says Zachary Lipton, is that the opportunity is being overshadowed by opportunists making overblown claims about the technology’s capabilities.
During a talk at MIT Technology Review’s EmTech conference today, Lipton warned that the hype is blinding people to the technology’s limitations. “It’s getting harder and harder to distinguish what’s a real advance and what is snake oil,” he said.
The AI technology known as deep learning has proved very powerful at tasks like image recognition and voice translation, and it now helps power everything from self-driving cars to translation apps on smartphones.
But the technology still has significant limitations. Many deep-learning models only work well when fed vast amounts of data, and they often struggle to adapt to fast-changing real-world conditions.
In his presentation, Lipton also highlighted the tendency of AI boosters to claim human-like capabilities for the technology. The risk is that the AI bubble will lead people to place too much faith in algorithms governing things like autonomous vehicles and clinical diagnoses.
“Policymakers don’t read the scientific literature,” warned Lipton, “but they do read the clickbait that goes around.” The media business, he says, is complicit here because it’s not doing a good enough job of distinguishing between real advances in the field and PR fluff.
Lipton isn’t the only academic sounding the alarm. In a recent blog post, “Artificial Intelligence—The Revolution Hasn’t Happened Yet,” Michael Jordan, a professor at the University of California, Berkeley, argues that AI is all too often bandied about as “an intellectual wildcard,” which makes it harder to think critically about the technology’s potential impact.
Still, Lipton acknowledges that he faces an uphill struggle in trying to deflate the hype. “I feel like I’m just a pebble in a stream,” he says.