Minsky on AI's Future

To move artificial intelligence forward, we must unpack human mental states.
March 12, 2007

As you start this review, you might be reading to see whether you’d like to read more. That might seem like a simple task, yet Marvin Minsky, an artificial-intelligence pioneer, says it is in fact an orchestration of many smaller mental processes.

The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind
By Marvin Minsky
Simon & Schuster, 2006, $26.00

While you read, your eyes scan the page, and you recognize and process words and sentences. At a higher cognitive level, you might be comparing what you read with your own experiences. Higher still, you may be gauging your level of interest in the words you’re reading. Each of these processes involves still more subprocesses, and it’s this complexity that programmers in artificial intelligence strive to replicate.

In his new book, The Emotion Machine, Minsky, professor of media arts and sciences at MIT, writes, “We all admire great accomplishments in the sciences, arts, and humanities – but we rarely acknowledge how much we achieve in the course of our everyday lives.” He takes on terms we may all recognize and understand but have a hard time explaining, such as emotions, consciousness, and thinking. Minsky calls these “suitcase words”: they contain many smaller concepts that can be unpacked and analyzed. For example, he identifies more than 20 different processes involved in a single instance of being conscious of one’s own actions. By breaking down a mental state into discrete mental processes, Minsky hopes, AI programmers might one day be able to build those mental states back up, in the form of a humanlike robot.

The Emotion Machine doesn’t spend much time on actual advances in artificial intelligence. A few examples here and there tease rather than satisfy. For instance, Minsky describes how, in 1965, he and a team he headed set out to design a robot that could recognize various shapes, analyze their spatial relationships, and use that information to build structures like arches and tables out of blocks.

But in this book Minsky is less interested in AI’s history than in its future. He wants, he writes, “to find more complicated ways to explain our most familiar mental events.” Such a pursuit, he argues, includes breaking down the conventional idea of a self. Minsky believes that we do not have a central essence but, rather, a collection of neural associations and connections rooted in memory, experience, and evolution. To those who say his approach makes us machines, Minsky says that in a way, we are, and should be proud of it.

“It’s degrading or insulting to say somebody is a good person or has a soul,” he says. “Each person has built this incredibly complex structure, and if you attribute it to a magical pearl in the middle of an oyster that makes you good, that’s trivializing a person and keeps you from thinking of what’s really happening.”

Recent Books
From the MIT community

The Laws of Simplicity
By John Maeda, MIT Media Lab professor
MIT Press, 2006, $20.00

Project Valuation Using Real Options: A Practitioner’s Guide
By Prasad Kodukula and Chandra Papudesu, SM ‘98
J. Ross Publishing, 2006, $54.95

Inside the Economist’s Mind: Conversations with Eminent Economists
Edited by Paul A. Samuelson and William A. Barnett ‘63
Blackwell Publishing, 2007, $29.95

The Welfare State Nobody Knows: Debunking Myths about U.S. Social Policy
By Christopher Howard, SM ‘90, PhD ‘93
Princeton University Press, 2006, $29.95

Reaching: Love Affairs with Industry
By Richard Muther ‘38, SM ‘41
Leathers Publishing, 2006, $19.95

The Yale Book of Quotations
Edited by Fred R. Shapiro ‘74
Yale University Press, 2006, $50.00

The Definitive Drucker
By Elizabeth Haas Edersheim, PhD ‘79; foreword by A. G. Lafley
McGraw-Hill, 2007, $27.95

The Giant Book of Animal Jokes: Beastly Humor for Grownups
By Richard Lederer and James D. Ertner, OCE ‘75, SM ‘75
Stone and Scott, 2005, $19.95

Please submit titles of books and papers published in 2006 and 2007 to be considered for this column.
