Unthinking Machines

Artificial intelligence needs a reboot, say experts.

Some of the founders and leading lights in the fields of artificial intelligence and cognitive science gave a harsh assessment last night of the lack of progress in AI over the last few decades.

During a panel discussion that kicked off MIT’s Brains, Minds, and Machines symposium, moderated by linguist and cognitive scientist Steven Pinker, panelists called for a return to the style of research that marked the early years of the field, one driven by curiosity rather than narrow applications.

“You might wonder why aren’t there any robots that you can send in to fix the Japanese reactors,” said Marvin Minsky, who pioneered neural networks in the 1950s and went on to make significant early advances in AI and robotics. “The answer is that there was a lot of progress in the 1960s and 1970s. Then something went wrong. [Today] you’ll find students excited over robots that play basketball or soccer or dance or make funny faces at you. [But] they’re not making them smarter.”

Patrick Winston, director of MIT’s Artificial Intelligence Laboratory from 1972 to 1997, echoed Minsky. “Many people would protest the view that there’s been no progress, but I don’t think anyone would protest that there could have been more progress in the past 20 years. What went wrong went wrong in the ’80s.”

Winston blamed the stagnation in part on the decline in funding after the end of the Cold War and on early attempts to commercialize AI. But the biggest culprit, he said, was the “mechanistic balkanization” of the field, with research focusing on ever-narrower specialties such as neural networks or genetic algorithms. “When you dedicate your conferences to mechanisms, there’s a tendency to not work on fundamental problems, but rather [just] those problems that the mechanisms can deal with,” said Winston.

Winston said he believes researchers should instead focus on those things that make humans distinct from other primates, or even what made them distinct from Neanderthals. Once researchers think they have identified the things that make humans unique, he said, they should develop computational models of these properties, implementing them in real systems so they can discover the gaps in their models, and refine them as needed. Winston speculated that the magic ingredient that makes humans unique is our ability to create and understand stories using the faculties that support language: “Once you have stories, you have the kind of creativity that makes the species different to any other.”

Emilio Bizzi, one of the founding members of MIT’s McGovern Institute for Brain Research, agreed that researchers should focus on important elements of human intellect, such as the ability to generalize from learning experiences or to fluidly plan movements that avoid obstacles while achieving a goal, like grasping a pair of glasses. “I am optimistic that in the next few years, we will make a lot of progress, and the reason is that there are many laboratories scattered in various parts of the world that are pursuing humanoid robotics.”

The two linguists on the panel, Noam Chomsky and Barbara Partee, both made seminal contributions to our understanding of language by considering it as a computational, rather than purely cultural, phenomenon. Both also felt that understanding human language was the key to creating genuinely thinking machines. “Really knowing semantics is a prerequisite for anything to be called intelligence,” said Partee.

Chomsky derided researchers in machine learning who use purely statistical methods to produce behavior that mimics something in the world, but who don’t try to understand the meaning of that behavior. Chomsky compared such researchers to scientists who might study the dance made by a bee returning to the hive, and who could produce a statistically based simulation of such a dance without attempting to understand why the bee behaved that way. “That’s a notion of [scientific] success that’s very novel. I don’t know of anything like it in the history of science,” said Chomsky.

Sydney Brenner, who deciphered the three-letter DNA code with Francis Crick and teased out the complete neural structure of the worm C. elegans at the cellular level, agreed that researchers in both artificial intelligence and neuroscience might be getting overwhelmed by surface details rather than seeking the bigger questions underneath. Looking at attempts to replicate his C. elegans neural “wiring diagram” in more complex organisms, Brenner worried that neuro- and cognitive scientists were being “overzealous” in these efforts, and said they should refocus on higher-level problems instead. He offered the analogy of someone taking a picture with a smartphone: no one today would bother to give a transistor-level description of that action; it is much more useful to discuss the process in terms of higher-level subsystems and software.
