
Radia Perlman ’73, SM ’76, PhD ’88

The mathematician who made networks work
Illustration of Radia Perlman by Patrick Leger

Today the idea of computers sharing information over a network seems obvious. But it wasn’t when Radia Perlman arrived at Bolt Beranek and Newman in 1976 with two MIT math degrees and three years’ experience as a researcher at the MIT AI Lab. When she decided to leave grad school, a friend suggested she apply for a job at BBN. Although she had been designing software and hardware to teach children programming at MIT, Perlman was assigned to the group helping create routers and switches for the early internet. (“The only reason that I got into networking was that a friend stopped by,” she confesses.)

Back then, most computers were stand-alone machines. Some early home computers could “dial up” mainframes over phone lines, but they typically ran software that made them act as “dumb terminals”—a mere screen and keyboard for the remote system.

That’s where routers and switches come in. They let computers exchange data in chunks called packets that find their way through the data network like a package moving by truck, rail, or air. Packet routing lets any two computers on the network communicate with each other, no matter how many links in the network the packets must traverse.
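To make the idea concrete, here is a toy sketch in Python of hop-by-hop forwarding. The router names and tables are invented for illustration; real routers build forwarding tables with routing protocols rather than having them written by hand.

```python
# Toy sketch of hop-by-hop packet forwarding (names are hypothetical).
# Each router maps a destination to the next hop on the way there.
FORWARDING_TABLES = {
    "A": {"C": "B"},   # from router A, packets bound for C go via B
    "B": {"C": "C"},   # router B can deliver to C directly
}

def forward(dest: str, start: str) -> list[str]:
    """Trace the hops a packet takes from `start` to `dest`."""
    path = [start]
    node = start
    while node != dest:
        node = FORWARDING_TABLES[node][dest]  # one table lookup per hop
        path.append(node)
    return path

print(forward("C", "A"))  # ['A', 'B', 'C']
```

Each router makes only a local decision; the path emerges from the chain of lookups, which is what lets packets cross any number of intermediate links.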

This is a complicated business, and the early algorithms for routing packets had major flaws that could crash the entire internet. Perlman used her knowledge of mathematics to design more robust algorithms for routers and prove that they were mathematically correct. Her famous “spanning tree” algorithm, which she invented at Digital Equipment Corp. in 1984, is, with slight modifications, still used today.
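The core idea behind the spanning-tree algorithm can be sketched in a few lines of Python. The switch IDs and topology below are hypothetical, and this is a centralized simplification: Perlman’s actual protocol is distributed, with each bridge exchanging messages rather than seeing the whole network. But the goal is the same, since network links can form loops that would let packets circulate forever, the switches agree on a root and keep only the links on shortest paths to it, so the loopy topology behaves like a tree.

```python
# Simplified, centralized sketch of the spanning-tree idea (hypothetical
# topology; the real protocol is distributed among the switches).
from collections import deque

# Links between switches 1-4; links (1,2), (2,3), (1,3) form a loop.
links = {(1, 2), (2, 3), (1, 3), (3, 4)}

def spanning_tree(links):
    neighbors = {}
    for a, b in links:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)
    root = min(neighbors)               # root election: lowest ID wins
    tree, seen, queue = set(), {root}, deque([root])
    while queue:                        # BFS finds shortest paths to root
        node = queue.popleft()
        for nxt in sorted(neighbors[node]):
            if nxt not in seen:
                seen.add(nxt)
                tree.add((min(node, nxt), max(node, nxt)))
                queue.append(nxt)
    return tree                         # links to keep; others are blocked

active = spanning_tree(links)
print("forwarding links:", sorted(active))   # [(1, 2), (1, 3), (3, 4)]
print("blocked links:", sorted(links - active))  # [(2, 3)]
```

Blocking the redundant link breaks the loop while leaving every switch reachable; if an active link fails, the tree is simply recomputed over the surviving links.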

Perlman went on to do more fundamental work in networking, security, and privacy. She has more than 130 patents; her most recent (#10,298,551, for an algorithm that helps preserve privacy when devices share information) was granted on May 21, 2019.
