MIT News feature

Patrick Winston ’65, SM ’67, PhD ’70

The professor who told stories—and taught computers to understand them.
August 21, 2019
Patrick Winston
Jason Dorfman/MIT CSAIL

When Patrick Winston ’65, SM ’67, PhD ’70, was a young graduate student at MIT, unsure of what he wanted to do in life and aware that his father had “started talking darkly about law school,” he attended a lecture by Marvin Minsky, the renowned founder of the school’s artificial-intelligence laboratory. Minsky was working to build and program machines that could behave in ways humans considered intelligent. “There was such joy in his talk, such pride in what his students had done, and such passion for what would be done in the future that I left the lecture saying to my friend, ‘I want to do what he does,’” Winston recalled at the “Hello World, Hello MIT” event celebrating the MIT Stephen A. Schwarzman College of Computing in February.

In graduate work supervised by Minsky, Winston used children’s blocks, a robot, and computer programming to create a system that could recognize arches. As Minsky explained in a grainy video from the time, Winston presented the machine with well-chosen examples of arches, and for each example the machine “jump[ed] to some sort of conclusion.” It learned more about what an arch can be—say, with a wedge on top instead of a flat block. Importantly, Winston included examples that showed what an arch is not. “It takes a good teacher,” Minsky said, but the machine “can learn very fast.”
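The learning loop Minsky describes can be sketched in a few lines: each positive example generalizes the concept, while each counterexample (a "near miss") rules properties out. This is only a toy illustration in the spirit of that work; the property names and representation are invented here, not taken from Winston's actual program.

```python
# Toy sketch of example-driven concept learning: positive examples
# generalize the concept, near misses mark properties as forbidden.
def learn_concept(examples):
    """examples: list of (properties: set[str], is_arch: bool)."""
    required = None       # properties common to every arch example so far
    seen_in_arch = set()  # properties seen in at least one arch
    forbidden = set()     # properties that appear to rule an arch out

    for properties, is_arch in examples:
        if is_arch:
            seen_in_arch |= properties
            # Generalize: keep only properties shared by all arches.
            required = set(properties) if required is None else required & properties
        else:
            # A near miss: its novel properties become forbidden.
            forbidden |= properties - seen_in_arch
    return required, forbidden

examples = [
    ({"two_uprights", "flat_top", "gap_between_uprights"}, True),
    ({"two_uprights", "wedge_top", "gap_between_uprights"}, True),  # top may vary
    ({"two_uprights", "flat_top", "uprights_touching"}, False),     # not an arch
]
required, forbidden = learn_concept(examples)
print(sorted(required))   # ['gap_between_uprights', 'two_uprights']
print(sorted(forbidden))  # ['uprights_touching']
```

After the wedge-topped example, the learner drops `flat_top` as a requirement, mirroring how the machine in the video "learned more about what an arch can be"; the touching-uprights counterexample teaches it what an arch is not.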

Winston, who died in July, would prove to be an exceptional teacher. (When he was named a MacVicar Faculty Fellow in 2011, a student noted that he’d learned the names of every student—nearly 150 of them—in his 6.034 class.) And throughout his career, his research focused on what he called the “cognitive and thinking part of AI,” in contrast to the brute-force “statistical” approach to machine learning that is dominant today (which involves simply inundating computers with examples). He wanted to know what makes humans uniquely smart, and he worked to program computers to learn in similarly structured ways. “I belong to the lunatic fringe that still works on symbolic reasoning,” he joked in an interview with MIT Technology Review in December. 

Patrick Winston
MIT Museum

In 1972, just two years after receiving his PhD, Winston was named director of the MIT AI Lab. “There’s controversy about how that came to be,” he told the “Hello World, Hello MIT” audience. “Some say I had arranged a coup d’état. Others say I was tricked into it.” Whatever the case, Winston directed the lab for 25 years, working closely with the government’s Advanced Research Projects Agency. “He understood that creativity can’t be straitjacketed, so there was a lot of freedom for researchers in the lab,” says Berthold Horn, SM ’68, PhD ’70, a fellow MIT computer scientist. “Yet he was able to bring in these umbrella contracts and explain how the work all fit together and could benefit the country.”

Under Winston’s direction, much of the lab’s research focused on machine vision, machine learning, and robotics, with an eye toward developing an “embodied intelligence”—a system that interacts with the environment. That meant projects involving perceptual systems, robotic manipulators, and—in between them—“a thinking machine, some sort of logic box,” Horn says. The work yielded a range of applications. In 1986, Winston spun off Ascent Technologies, which helps airlines, restaurants, and hotels optimize scheduling and resources. In 1992, lab member Marc Raibert, PhD ’77, founded the robotics company Boston Dynamics. Some of the lab’s work also contributed to the development of Apple’s virtual assistant Siri.

Since then, Winston, who was the Ford Professor of AI and Computer Science, focused on what he saw as a central form of human intelligence: our narrative facility. “There’s a little bit of magic in our human brains that’s different from the brains of other species, including chimpanzees,” he said. “It gives us stories, and we argue that much of thinking is about stories, much of common sense is about stories, much of education is about stories.” 

Humans have an inner symbolic language, Winston explained, and his research group created computer models to study how it works—and why it matters. In one project, he and his students used a summary of Macbeth to teach computers about ambition, passion, and revenge. “At the very bottom we have a variety of rule types that tend to put causal connections between the events in the story,” he observed in a lecture. For instance, after Macbeth kills Duncan, the program understands that Duncan is dead. It also analyzes events that are farther apart in time to determine that the victory is Pyrrhic, with Macbeth doomed to die on the battlefield at the hands of Macduff.
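The flavor of those bottom-level causal rules can be conveyed with a minimal sketch: rules that attach consequences to matched events. The rule set and event format here are invented for illustration; Winston's actual system was far richer.

```python
# Toy sketch of causal rules over story events: each rule maps a verb
# to the consequence it entails. Rules and event format are invented.
RULES = {
    "kills": lambda actor, obj: ("is_dead", obj),
    "becomes_king": lambda actor, obj: ("has_power", actor),
}

def infer(events):
    """Derive causal consequences from (verb, actor, object) events."""
    facts = set(events)
    for verb, actor, obj in events:
        if verb in RULES:
            facts.add(RULES[verb](actor, obj))
    return facts

story = [("kills", "Macbeth", "Duncan"), ("becomes_king", "Macbeth", None)]
facts = infer(story)
print(("is_dead", "Duncan") in facts)  # True
```

A single pass like this captures only local cause and effect; judging a victory Pyrrhic requires chaining such inferences across events widely separated in the story, which is where the harder work lies.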

Winston and his group also programmed computers to understand when stories share a theme like revenge; to analyze how Eastern and Western readers might understand tales differently; and to manipulate narrative so that one character seems more sympathetic than another. He argued that this work might benefit a wide range of professionals, including diplomats seeking to understand geopolitical conflict. (Winston himself saw storytelling potential everywhere, even in engineering and cooking: “A recipe is a sequence of actions,” he once said, “so it’s a special case of a story.”)

A consummate storyteller himself, Winston was known as a great lecturer. “He told me early on to tell lies,” recalls Horn, who looked to Winston for advice when he began teaching. “First you tell a story that is very simplified and wrong, and then you refine it, so you have a succession of refined lies, and that’s how you teach.” Winston likened this to the ways in which scientists thought about gravity before Newton, after Newton, and then after Einstein, working with theories that helped them make sense of the physical world even if they weren’t entirely correct.

Winston’s “How to Speak” lectures, which he gave during IAP for nearly four decades, formalized his tips to colleagues and students and achieved near cult status. Among his kernels of advice: use a numbered outline to help listeners keep track of where they are in the lecture, and repeat important points three times. (If 20% of the audience is not paying attention at any given moment, the chance that an individual will be fogged out all three times falls below 1%.)
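The arithmetic behind that claim is simple, assuming each of the three tellings is an independent 20% chance of tuning out:

```python
# Probability a listener misses any one telling of a point
p_miss = 0.20

# With three independent tellings, missing all three is the product
p_miss_all = p_miss ** 3

print(f"{p_miss_all:.1%}")  # 0.8% -- below the 1% threshold
```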

To bring a talk to a close, he recommended steering clear of thanking the audience, to avoid suggesting that they were doing the speaker a favor by attending. Rather, he told listeners how much he enjoyed being with them. And if you’re able to end with a joke, go ahead. As he liked to say, it will make the audience “think they’ve had fun the whole time.”


With additional reporting by Will Knight
