The Case Against Deep-Learning Hype
Is there more to AI than neural networks? Gary Marcus, professor of psychology at NYU and former director of Uber's AI lab, thinks so. He's published a critique of deep learning, the neural-network approach driving the current AI boom, and it skewers some of the surrounding hype.
Deep learning’s limits: Marcus identifies 10 major hurdles facing deep learning, including data hunger and lack of generalization. For what it’s worth, we’re tempted to agree that it’s not the silver bullet many think (see “Is AI Riding a One-Trick Pony?”).
The risk of hype: He argues that overselling the abilities of deep learning creates "fresh risk for seriously dashed expectations" that could bring on another AI winter, while also discouraging researchers from exploring new ideas.
What now? But Marcus doesn’t dismiss deep learning entirely: instead, he suggests that we should “conceptualize it, not as a universal solvent, but simply as one tool among many.”