
Marvin Minsky on Common Sense and Computers That Emote

As artificial intelligence research celebrates its 50th birthday, the MIT icon asks what makes the minds of three-year-olds tick.
July 13, 2006

Top computer scientists from around the world are meeting today at Dartmouth College in Hanover, NH, to mark the 50th anniversary of “artificial intelligence.” Back in 1956, John McCarthy, then a member of Dartmouth’s mathematics faculty, invented the term for the field’s seminal gathering, the Dartmouth Summer Research Project on Artificial Intelligence. McCarthy and four other participants in the 1956 project, including MIT’s Marvin Minsky, are participating in this week’s meeting, which focuses on AI’s next 50 years.

Marvin Minsky, emeritus professor of media arts and sciences at MIT, was one of the original participants in the Dartmouth Summer Research Project on Artificial Intelligence in 1956. He will co-open the 50th anniversary commemorative conference at Dartmouth today. (Courtesy of Coveney/MIT.)

Mathematical and philosophical breakthroughs by Alan Turing, John von Neumann, Herbert Simon, Allen Newell, and other giants of computer science made the 1950s a time of great optimism about machine intelligence. Researchers believed they would soon be able to program computers to simulate many forms of human reasoning. Expert systems would embody and manipulate knowledge in the form of symbolic logic. Artificial neural networks would be trained to evolve toward correct answers.

This optimism even spilled over into popular culture, where HAL, the intelligent (and profoundly disturbed) computer in Stanley Kubrick’s 1968 film 2001: A Space Odyssey, upstaged the human actors.

But by the late 1960s it was clear that approximating even child-like human reasoning in a computer would require vastly complex webs of logical equations or neural connections. So researchers retrenched. They began breaking down problems, focusing on replicating simple human feats such as moving children’s blocks (the subject of Stanford computer scientist Terry Winograd’s now-famous program SHRDLU, which used natural-language instructions to manipulate blocks with a simulated robotic arm).

Minsky, who will open the Dartmouth conference with McCarthy, admired Winograd’s work. But he’s long eschewed reductionistic demonstrations in favor of exploring the real mechanisms behind human thought. Working with Seymour Papert in the MIT AI Lab, for instance, Minsky began in the 1970s to develop the “Society of Mind” theory, which posits that layers of purposeful yet mindless “agents” work together to generate consciousness.

Technology Review interrupted Minsky on July 11, as he was proofing the galleys for his forthcoming book, The Emotion Machine, which reinterprets the human mind as a “cloud of resources,” or mini-machines that turn on and off depending on the situation and give rise to our various emotional and mental states.

Technology Review: Can you believe that it’s been 50 years since the first Dartmouth AI meeting? Does it feel like five decades have passed?

Marvin Minsky: I haven’t experienced many intervals of 50 years, so it’s hard for me to say.

TR: Fair enough. So, what are your thoughts about the state of AI research today, compared to where it was in 1956?

MM: What surprises me is how few people have been working on higher-level theories of how thinking works. That’s been a big disappointment. I’m just publishing a big new book on what we should be thinking about: How does a three- or four-year-old do the common-sense reasoning that they’re so good at and that no machine seems to be able to do? The main difference is that when you’re having trouble understanding something, you usually think, “What’s wrong with me?” or “What’s wasting my time?” or “Why isn’t this way of thinking working? Is there some other way of thinking that might be better?”

But the kinds of AI projects that have been happening for the last 30 or 40 years have had almost no reflective thinking at all. It’s all reacting to a situation and collecting statistics. We organized a conference on common-sense thinking about three years ago, and we were only able to find about a dozen researchers in the whole world who were interested in it.

TR: Why do people shy away from the common-sense problem?

MM: I think people look around to see what field is currently popular, and then waste their lives on that. If it’s popular, then to my mind you don’t want to work on it. Now, physics is different. There, people say “This popular theory works pretty well, but it doesn’t explain this or that – so I should look at that.” But when people write AI papers, they only tell what their program did, and not how it failed or what kinds of problems it couldn’t solve. People don’t consider the important problem to be the one their system hasn’t solved. People have gotten neural networks to recognize that if you are looking for a taxi, for example, you should look for a yellow moving object. But they don’t ask how come these networks can’t answer other kinds of questions.

TR: But understanding common sense is a much harder problem, isn’t it? Couldn’t that explain why so many AI researchers go into other areas?

MM: That’s true. Back when I was writing The Society of Mind, we worked for a couple of years on making a computer understand a simple children’s story: “Mary was invited to Jack’s party. She wondered if he would like a kite.” If you ask the question “Why did Mary wonder about a kite?” everybody knows the answer – it’s probably a birthday party, and if she’s going that means she has been invited, and everybody who is invited has to bring a present, and it has to be a present for a young boy, so it has to be something boys like, and boys like certain kinds of toys like bats and balls and kites. You have to know all of that to answer the question. We managed to make a little database and got the program to understand some simple questions. But we tried it on another story and it didn’t know what to do. Some of us concluded that you’d have to know a couple million things before you could make a machine do some common-sense thinking.

TR: As people have realized how difficult it is to get a computer to understand even simple common-sense situations, would you say that some of the optimism around the possibilities for AI in the 1950s and 1960s has dissipated?

MM: I don’t think optimism is the right word. I think we were asking good questions, but somehow most of the people working on what they called AI started looking for one of these universal solutions. In physics, that worked; there were Newton’s equations and then Maxwell’s and then relativity and quantum theory. Most AI people are trying to imitate that and find a general theory. But humans have 100 different brain centers that all work in slightly different ways. You shouldn’t be working on a single solution; you should be working on a host of gadgets.

TR: A lot of the funding for AI has come from the Defense Advanced Research Projects Agency (DARPA), where there’s a pretty clear demand for practical results. In fact, they’re one of the sponsors of the Dartmouth AI conference. How has DARPA shaped the direction of AI research?

MM: In the early days, DARPA supported people rather than proposals. There was a lot of progress starting in 1963; for about ten years the kinds of things I am talking about did flourish. And then in the early 1970s there was a kind of funny accident. Senator Mike Mansfield, quite a liberal, decided that the Department of Defense shouldn’t be supporting civilian research. So he was responsible for ARPA becoming DARPA and having to strain not to compete with industrial and civilian research. That made it much harder for the agency to support visionary researchers.

At the same time, the American corporate research community started to disappear in the early 1970s. Bell Labs and RCA and the others essentially disappeared from this sort of activity. And another thing happened: the entrepreneur bug hit. By the 1980s, many people were trying to patent things, found startups, and make products, and that coincided with a general disappearance of young scientists. People who could have become productive scientists are now going into law and business.

So there’s no way to support this research. If you have a good idea, it’s hard to get it published because people say “Where’s your experiment?” But the trouble with common-sense thinking is that you can’t experiment until you have a big common-sense database. There is one called Cyc, started by Doug Lenat in 1985. And we have the Open Mind database, which is publicly available but not very well structured yet. But it’s a whole research project just to figure out how to open up the Open Mind database.

TR: You mentioned that a computer needs to know a couple million things in order to make common-sense connections. But Lenat and his colleagues have been working on exactly that, spending years feeding common-sense knowledge into Cyc. Why is another database needed?

MM: When Lenat started Cyc in 1985, it was pretty ambitious, and there was no other such project. My colleagues and I said let’s wait and see how this works. And then nothing happened for a while.

Lenat has done some very good things. The trouble is that Cyc is very hard to use and it’s proprietary, so it isn’t used by researchers much. And there are a lot of problems with his system that didn’t show up earlier because there wasn’t any competition.

They’ve made it consistent, so it actually doesn’t know much. Should a whale be considered a mammal or a fish? Whales have many fish-like characteristics, so most people are surprised when they hear it’s a mammal. But the real answer is, it should be both. A common-sense database shouldn’t necessarily be logically consistent. Lenat finally realized that they should restructure Cyc by providing for the different contexts in which a question may come up. But the database was originally structured to make things very logical, and its language is predicate calculus. Our hope is to make the Open Mind system use natural language – which is of course full of ambiguities, but ambiguities are both good and bad.

TR: What are some of the main arguments or research recommendations in your upcoming book, The Emotion Machine?

MM: The main idea in the book is what I call resourcefulness. Unless you understand something in several different ways, you are likely to get stuck. So the first thing in the book is that you have got to have different ways of describing things. I made up a word for it: “panalogy.” When you represent something, you should represent it in several different ways, so that you can switch from one to another without thinking.

The second thing is that you should have several ways to think. The trouble with AI is that each person says they’re going to make a system based on statistical inference or genetic algorithms, or whatever, and each system is good for some problems but not for most others. The reason for the title The Emotion Machine is that we have these things called emotions, and people think of them as mysterious additions to rational thinking. My view is that an emotional state is a different way of thinking.

When you’re angry, you give up your long-range planning and you think more quickly. You are changing the set of resources you activate. A machine is going to need a hundred ways to think. And we happen to have a hundred names for emotions, but not for ways to think. So the book discusses about 20 different directions people can go in their thinking. But they need to have extra meta-knowledge about which way of thinking is appropriate in each situation.

TR: Are you saying that computers should get angry?

MM: If somebody is in your way, and they won’t get out of your way, you have to intimidate them or scare them or make them be afraid. That’s a perfectly reasonable way to solve the problem if you’re in a hurry and if something bad will happen if you can’t get around them. I propose that we need about 20 different words for these ways of thinking. Then you can throw “rational” away.

Minsky’s The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind is scheduled to be published in hardcover by Simon & Schuster in November 2006. Minsky has published a draft of the book online.
