
What Marvin Minsky Still Means for AI

How Marvin Minsky, a pioneer of artificial intelligence who died on Sunday, still influences the field today.
January 26, 2016

Marvin Minsky, a pioneering mathematician, cognitive scientist, and computer engineer, and a father of the field of artificial intelligence, passed away at his home on Sunday at age 88.

Minsky was a uniquely brilliant, creative, and charismatic person, and his intellect and imagination shone through in his work. His ideas helped shape the computer revolution that has transformed modern life over the past few decades, and they can still be felt in modern efforts to build intelligent machines—one of the most exciting and important endeavors of our age.

Minsky grew up in New York City, and he attended Harvard, where his curiosity led him to study an eclectic range of subjects, including mathematics, biology, and music. He then completed a PhD in the prestigious mathematics program at Princeton, where he mingled with scientists including the physicist Albert Einstein and the mathematician and computer pioneer John von Neumann.

Inspired by mathematical work on logic and computation, Minsky believed that the human mind was fundamentally no different from a computer, and he chose to focus on engineering intelligent machines, first at Lincoln Laboratory and later as a professor at MIT, where he cofounded the Artificial Intelligence Lab in 1959 with another pioneer of the field, John McCarthy.

Minsky’s early achievements include building robotic arms and grippers, computer vision systems, and the first electronic learning system, a device he called SNARC, which simulated a simple neural network fed visual stimuli. Remarkably, while at Harvard in 1956, he also invented the confocal scanning microscope, an instrument that is still widely used today in medical and scientific research.

Minsky was also central to a split in AI that is still highly relevant. In 1969, together with Seymour Papert, an expert on learning, Minsky wrote a book called Perceptrons, which pointed to key problems with nascent neural networks. The book has been blamed for steering research away from neural networks for many years.
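One of the limitations the book made famous can be shown in a few lines of code: a single-layer perceptron can learn linearly separable functions such as AND, but no choice of weights lets it compute XOR. The sketch below (an illustration of the general point, not code from the book) trains a perceptron with the classic learning rule on both functions.

```python
# A single-layer perceptron learns AND (linearly separable)
# but can never learn XOR (not linearly separable).

def perceptron_train(samples, epochs=20, lr=0.1):
    """Classic perceptron learning rule on 2-input boolean samples."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def accuracy(w, b, samples):
    correct = 0
    for (x1, x2), target in samples:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        correct += pred == target
    return correct / len(samples)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

w, b = perceptron_train(AND)
print(accuracy(w, b, AND))  # converges: 1.0

w, b = perceptron_train(XOR)
print(accuracy(w, b, XOR))  # provably stuck below 1.0
```

The fix, as later researchers showed, is to stack layers with nonlinear units between them, which is exactly what modern deep learning systems do at enormous scale.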

Today, the shift away from neural networks may seem like a mistake, since advanced neural networks, known as deep learning systems, have proven incredibly useful for all sorts of tasks.

In fact, the picture is a little more complicated. Perceptrons highlighted important problems that needed to be overcome in order to make neural networks more useful and powerful; Minsky often argued that a purely “connectionist,” neural-network-focused approach would never be sufficient to imbue machines with genuine intelligence. Indeed, many modern-day AI researchers, including pioneers of deep learning, are increasingly embracing this same vision.

Overall, though, Minsky made colossal contributions to artificial intelligence. He published important work on the theory of computation, and did much to advance the symbolic approach, which involved high-level conceptual representations of logic and thought. Researchers made significant progress with this approach in the early years.

A later book by Minsky, The Society of Mind, also presented a highly original and creative theory of human intelligence, inspired by efforts to build thinking machines. It suggested that intelligence emerges not from one system but from the interactions of numerous simple components, or “agents.”

Interestingly, as AI has experienced a renaissance in recent years, another aspect of Minsky’s thinking could prove important. In contrast to alarmist warnings about the dangers of AI, he often took a philosophically positive view of a future in which machines might truly be capable of thought. He believed that AI might eventually offer a way to solve some of humanity’s biggest problems.

For those who worked with Minsky, were taught by him, or simply met him, though, his restless creativity, wit, and curiosity will not easily be forgotten. Nor will his passion for a problem that will likely enchant us for some time yet.

As Minsky recalled of his undergraduate days, speaking to the writer of a wonderful New Yorker profile published in 1981:

“Genetics seemed to be pretty interesting, because nobody knew yet how it worked,” he said. “But I wasn’t sure that it was profound. The problems of physics seemed profound and solvable. It might have been nice to do physics. But the problem of intelligence seemed hopelessly profound. I can’t remember considering anything else worth doing.”

MIT Technology Review visited Minsky at his home last year, and recorded a video interview about his life working on artificial intelligence.

