What Marvin Minsky Still Means for AI
Marvin Minsky, a pioneering mathematician, cognitive scientist, and computer engineer, and a father of the field of artificial intelligence, passed away at his home on Sunday at age 88.
Minsky was a uniquely brilliant, creative, and charismatic person, and his intellect and imagination shone through in his work. His ideas helped shape the computer revolution that has transformed modern life over the past few decades, and they can still be felt in modern efforts to build intelligent machines—one of the most exciting and important endeavors of our age.
Minsky grew up in New York City, and he attended Harvard, where his curiosity led him to study an eclectic range of subjects, including mathematics, biology, and music. He then completed a PhD in the prestigious mathematics program at Princeton, where he mingled with scientists including the physicist Albert Einstein and the mathematician and computer pioneer John von Neumann.
Inspired by mathematical work on logic and computation, Minsky believed that the human mind was fundamentally no different than a computer, and he chose to focus on engineering intelligent machines, first at Lincoln Lab, and then later as a professor at MIT, where he cofounded the Artificial Intelligence Lab in 1959 with another pioneer of the field, John McCarthy.
Minsky’s early achievements include building robotic arms and grippers, computer vision systems, and the first electronic learning system: a device he called Snarc, which simulated the functioning of a simple neural network fed visual stimuli. Remarkably, while at Harvard in 1956, he also invented the confocal scanning microscope, an instrument that is still widely used today in medical and scientific research.
Minsky was also central to a split in AI that is still highly relevant. In 1969, together with Seymour Papert, an expert on learning, Minsky wrote a book called Perceptrons, which pointed to key problems with nascent neural networks. The book has been blamed for steering researchers away from neural networks for many years.
Today, the shift away from neural networks may seem like a mistake, since advanced neural networks, known as deep learning systems, have proven incredibly useful for all sorts of tasks.
In fact, the picture is a little more complicated. Perceptrons highlighted important problems that needed to be overcome in order to make neural networks more useful and powerful; Minsky often argued that a purely “connectionist,” neural-network-focused approach would never be sufficient to imbue machines with genuine intelligence. Indeed, many modern-day AI researchers, including those who have pioneered work in deep learning, are increasingly embracing this same vision.
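The limitation at the heart of Perceptrons can be seen in a few lines of code. A single-layer perceptron can only learn functions whose classes are linearly separable: it masters AND but can never compute XOR, the simplest case of the parity functions the book analyzed. The sketch below (an illustration, not code from the book) trains a perceptron with the classic learning rule on both problems:

```python
# Illustrative sketch of the limitation Perceptrons formalized: a single
# linear threshold unit can learn a linearly separable function like AND,
# but no setting of its weights lets it compute XOR.

def train_perceptron(samples, epochs=25, lr=1.0):
    """Classic perceptron learning rule on two-input boolean samples."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def accuracy(w, b, samples):
    correct = 0
    for (x1, x2), target in samples:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        correct += pred == target
    return correct / len(samples)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

w, b = train_perceptron(AND)
print(accuracy(w, b, AND))  # AND is linearly separable: converges to 1.0

w, b = train_perceptron(XOR)
print(accuracy(w, b, XOR))  # XOR is not: accuracy never reaches 1.0
```

No amount of extra training changes the XOR result, because no line through the plane separates its two classes. The later fix, networks with hidden layers trained end to end, is precisely the architecture that modern deep learning builds on.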
Overall, though, Minsky made colossal contributions to artificial intelligence. He published important work on the theory of computation, and did much to advance the symbolic approach, which involved high-level conceptual representations of logic and thought. Researchers made significant progress with this approach in the early years.
A later book by Minsky, The Society of Mind, also presented a highly original and creative theory of human intelligence, inspired by efforts to build thinking machines. It suggested that intelligence emerges not from one system but from the interactions of numerous simple components, or “agents.”
Interestingly, as AI has experienced a renaissance in recent years, another aspect of Minsky’s thinking could prove important. In contrast to alarmist warnings about the dangers of AI, he often took a philosophically positive view of a future in which machines might truly be capable of thought. He believed that AI might eventually offer a way to solve some of humanity’s biggest problems.
For those who worked with Minsky, were taught by him, or simply met him, though, his restless creativity, wit, and curiosity will not easily be forgotten. Nor will his passion for a problem that will likely enchant us for some time yet.
As Minsky recalled of his days as an undergraduate, speaking to the writer of a wonderful New Yorker profile published in 1981:
“Genetics seemed to be pretty interesting, because nobody knew yet how it worked,” he said. “But I wasn’t sure that it was profound. The problems of physics seemed profound and solvable. It might have been nice to do physics. But the problem of intelligence seemed hopelessly profound. I can’t remember considering anything else worth doing.”
MIT Technology Review visited Minsky at his home last year, and recorded a video interview about his life working on artificial intelligence.