Top computer scientists from around the world are meeting today at Dartmouth College in Hanover, NH, to mark the 50th anniversary of “artificial intelligence.” Back in 1956, John McCarthy, then a member of Dartmouth’s mathematics faculty, coined the term for the field’s seminal gathering, the Dartmouth Summer Research Project on Artificial Intelligence. McCarthy and four other veterans of the 1956 project, including MIT’s Marvin Minsky, are attending this week’s meeting, which focuses on AI’s next 50 years.
Mathematical and philosophical breakthroughs by Alan Turing, John von Neumann, Herbert Simon, Allen Newell, and other giants of computer science made the 1950s a time of great optimism about machine intelligence. Researchers believed they would soon be able to program computers to simulate many forms of human reasoning. Expert systems would embody and manipulate knowledge in the form of symbolic logic. Artificial neural networks would be trained to evolve toward correct answers.
This optimism even spilled over into popular culture, where HAL, the intelligent (and profoundly disturbed) computer in Stanley Kubrick’s 1968 film 2001: A Space Odyssey, upstaged the human actors.
But by the late 1960s it was clear that approximating even child-like human reasoning in a computer would require vastly complex webs of logical equations or neural connections. So researchers retrenched. They began breaking down problems, focusing on replicating simple human feats such as moving children’s blocks (the subject of Stanford computer scientist Terry Winograd’s now-famous program SHRDLU, which used natural-language instructions to manipulate a robotic arm).
Minsky, who will open the Dartmouth conference with McCarthy, admired Winograd’s work. But he’s long eschewed reductionistic demonstrations in favor of exploring the real mechanisms behind human thought. Working with Seymour Papert in the MIT AI Lab, for instance, Minsky began in the 1970s to develop the “Society of Mind” theory, which posits that layers of purposeful yet mindless “agents” work together to generate consciousness.
Technology Review interrupted Minsky on July 11, as he was proofing the galleys for his forthcoming book, The Emotion Machine, which reinterprets the human mind as a “cloud of resources,” or mini-machines that turn on and off depending on the situation and give rise to our various emotional and mental states.
Technology Review: Can you believe that it’s been 50 years since the first Dartmouth AI meeting? Does it feel like five decades have passed?
Marvin Minsky: I haven’t experienced many intervals of 50 years, so it’s hard for me to say.
TR: Fair enough. So, what are your thoughts about the state of AI research today, compared to where it was in 1956?
MM: What surprises me is how few people have been working on higher-level theories of how thinking works. That’s been a big disappointment. I’m just publishing a big new book on what we should be thinking about: How does a three- or four-year-old do the common-sense reasoning that they’re so good at and that no machine seems to be able to do? The main difference is that if you are having trouble understanding something, you usually think, “What’s wrong with me?” or “What’s wasting my time?” or “Why isn’t this way of thinking working? Is there some other way of thinking that might be better?”
But the kinds of AI projects that have been happening for the last 30 or 40 years have had almost no reflective thinking at all. It’s all reacting to a situation and collecting statistics. We organized a conference on common-sense thinking about three years ago, and we were able to find only about a dozen researchers in the whole world who were interested in that.