Artificial intelligence is often overhyped—and here’s why that’s dangerous

AI has huge potential to transform our lives, but the term itself is being abused in very worrying ways, says Zachary Lipton, an assistant professor at Carnegie Mellon University.
September 13, 2018

To those with long memories, the hype surrounding artificial intelligence is becoming ever more reminiscent of the dot-com boom.

Billions of dollars are being invested into AI startups and AI projects at giant companies. The trouble, says Zachary Lipton, is that the opportunity is being overshadowed by opportunists making overblown claims about the technology’s capabilities.

During a talk at MIT Technology Review’s EmTech conference today, Lipton warned that the hype is blinding people to its limitations. “It’s getting harder and harder to distinguish what’s a real advance and what is snake oil,” he said.

AI technology known as deep learning has proved very powerful at performing tasks like image recognition and voice translation, and it’s now helping to power everything from self-driving cars to translation apps on smartphones.

But the technology still has significant limitations. Many deep-learning models only work well when fed vast amounts of data, and they often struggle to adapt to fast-changing real-world conditions.

In his presentation, Lipton also highlighted the tendency of AI boosters to claim human-like capabilities for the technology. The risk is that the AI bubble will lead people to place too much faith in algorithms governing things like autonomous vehicles and clinical diagnoses.

“Policymakers don’t read the scientific literature,” warned Lipton, “but they do read the clickbait that goes around.” The media business, he says, is complicit here because it’s not doing a good enough job of distinguishing between real advances in the field and PR fluff.

Lipton isn’t the only academic sounding the alarm. In a recent blog post, “Artificial Intelligence—The Revolution Hasn’t Happened Yet,” Michael Jordan, a professor at the University of California, Berkeley, argues that AI is all too often bandied about as “an intellectual wildcard,” which makes it harder to think critically about the technology’s potential impact.

Still, Lipton acknowledges that he faces a struggle in trying to burst the hype. “I feel like I’m just a pebble in a stream,” he says.

