Is there more to AI than neural networks? Gary Marcus, a professor of psychology at NYU and former director of Uber's AI lab, thinks so. He's published a critique of deep-learning systems built on neural nets, and it skewers some of the current AI hype.
Deep learning’s limits: Marcus identifies 10 major hurdles facing deep learning, including data hunger and lack of generalization. For what it’s worth, we’re tempted to agree that it’s not the silver bullet many think (see “Is AI Riding a One-Trick Pony?”).
The risk of hype: He argues that overselling the abilities of deep learning creates "fresh risk for seriously dashed expectations" that could bring on another AI winter, and could blinker AI researchers, discouraging them from exploring new ideas.
What now? But Marcus doesn’t dismiss deep learning entirely: instead, he suggests that we should “conceptualize it, not as a universal solvent, but simply as one tool among many.”