MIT News feature

Of mice, men, and computers

How three giants of early computing—Claude Shannon, SM ’40, PhD ’40, Marvin Minsky, and J.C.R. Licklider—got their ideas and got them across.
Shannon, Minsky and Licklider
MIT Museum

In March of 1952, readers of Popular Science were introduced to an unusual critter. The story got top billing, above a guide to fixing houses and a feature on a new hot rod. “This mouse is smarter than you are,” the tantalizing headline read.

The mouse in question was actually a block of wood outfitted with a magnet, three wheels, and a set of copper whiskers. The invention came from Claude Shannon, SM ’40, PhD ’40. Its nickname was Theseus, and it had one skill: it could solve a maze.

Theseus wasn’t just another Mickey wannabe. It was a proof of concept for an idea that would revolutionize computing: that circuit design could be boiled down to a set of yes-or-no questions (see “Mighty mouse,” MIT News, January/February 2019). In fact, if you look closely enough, the annals of computing history are filled with important—if fake—mice.

Shannon was a soft-spoken, gizmo-loving mathematician. As a grad student at MIT in the late 1930s, he worked with a massive analog computer used to solve differential equations. Shannon realized the machine could be used to solve logical problems, too, as long as you constructed those problems as a series of binary decisions.

Later, in a 1948 paper called “A Mathematical Theory of Communication,” he explained how any communicable material—a poem, a photograph, a radio wave—could be broken down into units of pure information, a series of yeses and nos. These units “may be called binary digits, or more briefly bits,” Shannon wrote. Thus represented, the material could be stored, manipulated, or sent from one place to another, all with great fidelity.
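
To make the idea concrete, here is a tiny Python illustration (ours, not Shannon's): a short string is broken down into binary digits and then reassembled, with nothing lost along the way.

```python
# Any message can be reduced to a series of yes/no answers: bits.
# Here each character of a short string becomes eight binary digits,
# and the original text is recovered from them intact.
message = "maze"
bits = "".join(format(b, "08b") for b in message.encode("ascii"))
print(bits)  # 01101101011000010111101001100101

round_trip = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
print(round_trip.decode("ascii"))  # maze
```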

Shannon built the first prototype for Theseus in 1950, while working at Bell Laboratories in New Jersey. Thanks to an array of telephone relay switches positioned underneath the maze, the mouse could ask a single question over and over: Is there a wall here, or not? In other words, it solved the maze bit by bit. Once Theseus successfully explored the maze—reaching the brass wedge of cheese at the end—the map of open and closed switches it left behind guided it straight to the target on later tries.
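
For the curious, here is a minimal sketch in Python of that two-phase strategy: an exploration pass that probes walls one yes/no question at a time and remembers which turn worked, followed by a replay pass that simply reads back the stored answers. The maze layout, coordinates, and helper names are invented for illustration; the real Theseus did all of this with relay switches, not software.

```python
# A sketch of Theseus's two-phase strategy (in software; the original
# used relay switches). The 3x3 maze below is invented for illustration.
MOVES = {"N": (0, -1), "S": (0, 1), "E": (1, 0), "W": (-1, 0)}
WALLS = {((0, 0), "E"), ((0, 1), "E"), ((1, 1), "S")}  # blocked moves
SIZE, CHEESE = 3, (2, 2)

def is_wall(cell, d):
    """The one question the mouse asks, over and over."""
    x, y = cell[0] + MOVES[d][0], cell[1] + MOVES[d][1]
    return (cell, d) in WALLS or not (0 <= x < SIZE and 0 <= y < SIZE)

memory = {}  # plays the role of the relay switches under the maze

def explore(cell, seen):
    """First run: trial and error, one wall question at a time."""
    if cell == CHEESE:
        return True
    seen.add(cell)
    for d in MOVES:
        nxt = (cell[0] + MOVES[d][0], cell[1] + MOVES[d][1])
        if not is_wall(cell, d) and nxt not in seen and explore(nxt, seen):
            memory[cell] = d  # remember the turn that worked
            return True
    return False

explore((0, 0), set())

cell, route = (0, 0), []  # later runs: just read back the stored map
while cell != CHEESE:
    d = memory[cell]
    route.append(d)
    cell = (cell[0] + MOVES[d][0], cell[1] + MOVES[d][1])
print("remembered route:", route)
```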

Shannon took a similar path through life. Over and over, he bumped into seemingly intractable problems and found a mathematical way through, making sure to absorb their lessons for next time. Along the way, he “single-handedly laid down the general rules of modern information theory,” establishing the foundation for digital computing and becoming “a giant [in] the industry,” as an anonymous eulogizer wrote in the Times of London after his death in 2001.

Shannon went on to join the MIT faculty and its Research Laboratory of Electronics, staying at the Institute until his retirement in 1978. Although he kept dreaming up gadgets—including a juggling robot, Styrofoam shoes that let him walk on water, and a Roman numeral calculator, which he called “THROBAC I”—he never returned to rodents in mazes. But just a year after Theseus first grabbed the brass cheese, another computer science pioneer picked up the torch: Marvin Minsky.

Minsky and Shannon at the 1956 Dartmouth workshop on AI.
Academy of Achievement

As a child in New York City, Minsky had borrowed his father’s copies of the works of Sigmund Freud, and he developed a keen interest in the human mind. He brought this curiosity with him to Harvard, where he studied physics, and to Princeton, where he earned a PhD in mathematics.

His peers were using computers to solve increasingly complex numerical problems, but “people didn’t seem to have any theories of how thinking worked,” he later recalled. He wondered whether these machines could be used to imitate—and better understand—the brain.   

In 1951, he decided to see for himself. That summer, along with physicist Dean Edmonds, he began working on the Stochastic Neural Analog Reinforcement Calculator, a.k.a. SNARC. Over long days and nights in a lab at Harvard, the two constructed what would become the first artificial neural network—a machine that could learn from its own mistakes to become better at a task, like a human brain.

Today, similar networks run on powerful computers. They can recognize images and translate between languages. SNARC was made out of “about 400 vacuum tubes and a couple of hundred relays and a bicycle chain,” Minsky told the Infinite History Project in 2008. But like Theseus, it could do only one thing: solve a maze.

Minsky would boot up his artificial brain and choose a point within it, which he called a “rat.” Then he’d choose another point to be the “cheese.” The computer tried over and over to connect the rat and the cheese. A feedback loop reinforced correct choices by increasing the probability that the computer would make them again—a more complicated version of Shannon’s method, and a level closer to how our minds really work. Eventually, the rat learned the maze.
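
A toy version of that feedback loop, in Python rather than vacuum tubes: the “rat” picks turns at random, and every turn on a run that reaches the “cheese” becomes a little more probable, with quicker runs reinforced more strongly. The maze wiring and the reinforcement rule here are simplified assumptions, not a reconstruction of SNARC's circuitry.

```python
import random

# A toy sketch of SNARC-style reinforcement. The tiny maze is invented:
# numbered junctions, two turns each, cheese at junction 3.
NEXT = {0: {"L": 1, "R": 2}, 1: {"L": 3, "R": 0}, 2: {"L": 0, "R": 3}}
weights = {j: {"L": 1.0, "R": 1.0} for j in NEXT}

def run_once():
    junction, choices = 0, []
    for _ in range(20):  # give up after 20 moves
        w = weights[junction]
        turn = random.choices(["L", "R"], [w["L"], w["R"]])[0]
        choices.append((junction, turn))
        junction = NEXT[junction][turn]
        if junction == 3:  # found the cheese
            for j, t in choices:
                # Reinforce every choice on the successful path;
                # shorter runs get a bigger boost per choice.
                weights[j][t] += 1.0 / len(choices)
            return True
    return False

for _ in range(500):
    run_once()
print(weights)  # the direct turns (L at 1, R at 2) typically win out
```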

Like Shannon with his bits, Minsky realized that something recognizable as “intelligence” might arise from discrete and manageable parts. He spent his career delving ever deeper into the workings of the human mind, approximating what he found there with increasingly complex programs and machines. Along with Shannon, John McCarthy (then a professor at Dartmouth), and computer scientist Nathaniel Rochester, he helped organize the “Dartmouth workshop” of 1956, now considered the founding event of artificial intelligence. Minsky joined the MIT faculty in 1958, wrote a number of influential books, and helped to found both the Artificial Intelligence Lab—which later merged with the Lab for Computer Science to become CSAIL—and the Media Lab. He won the Turing Award in 1969. Without him, “the intellectual landscape would be unrecognizable,” President L. Rafael Reif said after Minsky’s death in 2016.

(Editor’s note: This piece was written before recent allegations about Minsky’s involvement with accused sex trafficker Jeffrey Epstein came to light.)

Shannon unlocked the potential of computing, and Minsky pushed it into a new realm. But if it weren’t for a third computing legend, their achievements might have remained too heady for the rest of us. While they were watching their mice scurry and their programs churn, our final pioneer was ensuring that these world-changing technologies didn’t leave anyone behind.

Joseph Carl Robnett Licklider, known to most as “Lick,” first came to MIT in 1950, as an associate professor. More of a tinkerer than a coding whiz, he had a background in psychology and audiology. But over the next three and a half decades, both in and out of the Institute, Lick dedicated his imagination, empathy, and polymathic skill set to making machines accessible and relevant to everyday life, eventually becoming known as the “Johnny Appleseed of Computing.”

A psychologist among physicists and engineers, Lick quickly realized he could offer a unique perspective. In 1951, he began working on Project Charles, a study program that helped the US Air Force develop a computer network—known as the Semi-Automatic Ground Environment, or SAGE—that would help detect and respond to enemy threats. His job, he later recalled, was to work on “display and control”: to make sure that the computer programs the engineers developed were intuitive for the people using them.

When he left MIT in 1957 for the technology company Bolt Beranek and Newman, these experiences stuck with him. As he wrote in his 1960 paper “Man-Computer Symbiosis,” he began envisioning a world in which people and computers cooperate. Computing would no longer be the domain of trained experts: Lick’s machines would be networked and easily searchable. They would talk to people and to each other.

Two years after he published this vision, a new job in Washington gave Lick the ability to actually pursue it. As a program director at the US Defense Department’s Advanced Research Projects Agency (ARPA), Lick sent ideas and cash to labs around the country that were working to bring together human and machine. From afar, he guided MIT’s “Project MAC,” led by Robert Fano, which managed to divvy up a mainframe’s processing power among a network of remote computers, allowing a gaggle of people to all work at once. (Minsky was heavily involved in Project MAC until his group split off from it to form MIT’s AI Lab, and Licklider himself would run Project MAC for a time when he returned to MIT in 1968.) A series of memoranda he wrote eventually formed the basis for ARPAnet, the first global computer network. In other words, Lick dreamed up the internet in a memo. Other contemporary computing mainstays he either funded or inspired include e-commerce, online banking, interface “windows,” and hypertext.

He also helped with the creation of something slightly simpler, but even more foundational. In 1964, as displays were becoming more complex, another researcher Lick funded—Douglas Engelbart of Stanford Research Institute—decided to figure out a simple way for users to switch between different parts of the screen.

The winning solution? A piece of wood with wheels, which moved an onscreen cursor. Nearly 70 years after Theseus, people and computers remain connected through a mouse.
