5 MIT patents that changed computing
Few patents actually change the world, but these did.
MIT filed 358 US patents and had 435 issued in the year ending June 30, 2021, according to the MIT Technology Licensing Office.
To be awarded, a patent must describe an invention in sufficient detail that a person with “ordinary skill in the art” will be able to implement the claimed invention—assuming that the person has sufficient money, time, and other resources. Once filed, the patent application is reviewed by an examiner, who awards it, rejects it, or requests further clarification. The back-and-forth between the patent attorney and the examiner can go on for years. A 2019 analysis by Reuters found that of the 1,614 patents filed by MIT between 2012 and 2017, only 44.8% were subsequently granted.
Most patents never amount to much, but some are transformative. These are the patents that open up new areas of technological pursuit, giving their licensees the confidence to invest the substantial sums required to perfect the technology, market it, and sell it to the world. Here are five MIT patents that changed computing.
Multicoordinate Digital Information Storage Device
May 11, 1951
February 28, 1956 (4 years, 9 months, 17 days)
Jay W. Forrester
Every computer has two fundamental parts. The first is the processing unit that fetches instructions from memory, reads data from memory, performs calculations, and writes the result back to memory. The second part is the memory itself.
By the early 1940s, engineers had constructed reasonably good processors using vacuum tubes, but memory was not satisfactory. One design, the acoustical delay line, stored bits as moving compression waves in a long tube of mercury. A transducer emitted each bit into the liquid. The bits then traveled at 1,450 meters per second to the other end, where another circuit read each one, cleaned it up, and reinjected it at the start. The storage was slow, and the tubes were big and heavy: the first UNIVAC, delivered to the US Census Bureau in 1951, had seven memory units (with 18 mercury-filled tubes), each weighing roughly 800 pounds.
In 1949, MIT professor Jay W. Forrester came up with another approach: a ring of ferromagnetic material, serving as a transformer core, that could be magnetized in one of two directions, representing a 0 or a 1. If he arranged the rings in an array and threaded them with horizontal and vertical wires, he could select any bit for reading or writing by energizing the right pair of crossing wires. It was the first true random-access memory, though the word “random” does not appear in the 1951 patent.
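In software terms, a core plane behaves like a small addressable bit array with a destructive read. This toy Python model is a sketch of the idea only; the class, method names, and behavior summarized in the comments are my own simplification, not the patent's circuitry:

```python
# A toy model of coincident-current core addressing (illustrative only;
# the names and the read/write behavior are simplified assumptions).

class CoreArray:
    """A grid of magnetic cores, each holding one bit (0 or 1)."""

    def __init__(self, rows, cols):
        self.bits = [[0] * cols for _ in range(rows)]

    def write(self, row, col, value):
        # Energizing one row wire and one column wire, each at half the
        # switching current, flips only the core where they cross; every
        # other core on those wires sees half current and is unchanged.
        self.bits[row][col] = value

    def read(self, row, col):
        # Reading is destructive: the selected core is driven toward 0,
        # and a sense wire detects whether it flipped (it held a 1).
        value = self.bits[row][col]
        self.bits[row][col] = 0
        return value  # the controller must rewrite the value afterward

array = CoreArray(4, 4)
array.write(2, 3, 1)
assert array.read(2, 3) == 1  # destructive read returns the stored bit
assert array.read(2, 3) == 0  # the core was reset by the first read
```

The destructive read is why real core controllers always followed a read with a rewrite cycle.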
Forrester’s approach was broadly adopted by the computer industry—especially by IBM. But IBM and MIT couldn’t agree on license terms.
Forrester wasn’t the first to store bits on ferromagnetic cores: An Wang filed his own core memory patent on October 21, 1949. Wang’s patent wasn’t issued until May 17, 1955, by which time Forrester’s invention was already in wide use. Wang sold his patent to IBM on March 4, 1956, for $500,000 and used the money to start Wang Laboratories. In 1964, IBM purchased the rights to Forrester’s patent as well: at $13 million, it was reportedly the largest patent settlement ever at the time.
Semiconductor memory started displacing core memory in the 1970s, but the older technology still found use for spaceflight thanks to its low power requirements and its ability to resist radiation. Today, programmers still call a computer’s main memory its “core,” a throwback to magnetic storage.
Cryptographic Communications System and Method
4,405,829 (a prime number!)
December 14, 1977
September 20, 1983 (5 years, 9 months, 6 days)
Ronald L. Rivest, Adi Shamir, Leonard M. Adleman
Three MIT professors were having dinner in April 1977 when the discussion turned to cryptography. Was it possible, they wondered, to design a system that used one mathematical function for encryption and another for decryption? Such a system would allow anyone in the world to send an encrypted message to a specific recipient. The recipient, in turn, would be the only person who could decrypt the message.
By the end of the year, the professors had designed such a system, using a branch of number theory involving prime numbers, exponentiation, and factoring. They also wrote a technical paper and filed a patent.
Today we call the invention the RSA encryption system, named for its three inventors—Ronald L. Rivest, Adi Shamir, and Leonard M. Adleman. It was the first public-key cryptography system to be widely adopted. Although it was designed to encrypt email, 15 years later RSA also enabled the successful commercialization of the internet by making it easy to transmit credit card numbers securely. And because RSA was the first secure encryption system based purely on the difficulty of an underlying mathematical problem (factoring), it opened up new lines of research in cryptography and number theory that remain fruitful to this day.
Encryption algorithms use a small block of data, called a key, to encode information so that it cannot be deciphered by someone who intercepts a message as it travels from sender to recipient. With RSA, each key consists of two parts: a public key and a private key. In a typical system, users create their own keys and then publish their public keys in a directory. If someone wants to send you a secret message, that person can go to the directory, get a copy of your public key, and use that key to encrypt the message. When you get the message, you can decrypt it because you have a copy of your private key. But no one else has a copy of your private key, so anyone who intercepts your message will be unable to decipher it.
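The arithmetic behind this scheme fits in a few lines. Here is a toy RSA round trip in Python, using deliberately tiny primes; it is an illustrative sketch, not a secure implementation (real keys use primes hundreds of digits long, plus padding schemes omitted here):

```python
# Toy RSA key generation, encryption, and decryption (insecure; for
# illustration only).

p, q = 61, 53
n = p * q                  # modulus, part of both keys
phi = (p - 1) * (q - 1)    # Euler's totient of n
e = 17                     # public exponent, chosen coprime with phi
d = pow(e, -1, phi)        # private exponent: e * d ≡ 1 (mod phi)

message = 42                       # must be smaller than n
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
plaintext = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
assert plaintext == message
```

Security rests on the fact that recovering d from the public key (e, n) requires factoring n into p and q, which is believed to be infeasible for sufficiently large primes.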
The RSA algorithm could not be patented outside the US because it had been publicly described before the patent was filed. Nevertheless, it launched an entire industry. In 2002, the invention also earned Rivest, Shamir, and Adleman the Turing Award, regarded by many as computing’s equivalent of the Nobel Prize.
Three-Dimensional Printing Techniques
December 8, 1989
April 20, 1993 (3 years, 4 months, 12 days)
Emanuel M. Sachs ’75, SM ’76, PhD ’83, John S. Haggerty, Michael J. Cima, Paul A. Williams, SM ’90
The core idea of 3D printing—often called additive manufacturing—is to build a solid object by accreting material in three dimensions according to a digital plan. Rather than starting with a large block of material and cutting it away, it creates objects by adding material to a substrate. Today, 3D printing is used to create otherwise unavailable replacement parts for appliances and cars, and to manufacture implanted medical devices. Companies are experimenting with 3D-printed houses and 3D-printed wind turbines.
The roots of 3D printing go back to 1945, but the field took off in the 1980s with the development of three distinctly different approaches. Chuck Hull invented stereolithography, a technique that used a steerable ultraviolet light to selectively harden a liquid polymer. Hull printed his first object in 1983, filed for his patent in 1984, and created 3D Systems, the first 3D-printing company, in 1986. Another approach, called fused deposition modeling, forces a filament through a device that heats and extrudes the material. That’s the technology used by low-cost 3D printers popular in the maker movement.
The MIT approach, developed by mechanical engineering professor Emanuel Sachs, uses ink-jet printing technology to selectively spray a binder into a tub of fine powder. The printing head scans back and forth through the powder, depositing the binder and slowly moving up. Then the powder is removed, leaving the completed object, which can be made from a wide variety of materials, including metal. This is the approach that Sachs named 3D printing. The technology was licensed in 1994 to Z Corporation, a startup founded by Marina Hatsopoulos, SM ’93, Walter Bornhorst, SM ’64, PhD ’66, James Bredt ’82, SM ’87, PhD ’95, and Tim Anderson. Z Corp. was acquired by 3D Systems in 2012 for $135.5 million.
Today, 3D printing is commonly used for prototyping, art, and small-batch production. The US military is exploring the use of 3D printing to make replacement parts in the field. NASA sent a 3D printer to the International Space Station. It is now used to create dental implants and crowns, and research is being done on using human stem cells to 3D-print replacement organs.
(For other 3D printing patents by MIT researchers, see this list.)
Methods and Apparatus for Motion Estimation in Motion Picture Processing
April 3, 1987
June 13, 1989 (2 years, 2 months, 10 days)
Dennis M. Martinez and Jae S. Lim ’74
The US standardized its broadcast color television in 1953. Called NTSC, after the National Television System Committee, it attracted jokes that the acronym stood for “Never Twice the Same Color.” The standard called for 525 lines smeared across the screen at 29.97 frames per second—specs that seem weird today but were ingeniously designed so that color transmissions would fit within existing broadcast bands and be compatible with existing black-and-white televisions.
Following NTSC, engineers outside the US developed and deployed numerous analog systems to deliver higher-quality pictures. Europe adopted the Phase Alternating Line (PAL) standard, which provided 625 lines. Inspired by the television coverage of the 1964 Tokyo Summer Olympics, Japan’s NHK developed Hi-Vision, which had 1,035 lines. Broadcasts started in the 1980s, and by the early 1990s, Hi-Vision sets with amazing clarity lined Tokyo’s shopping districts. Some thought Hi-Vision was the future of TV.
Those people weren’t at MIT.
Jae S. Lim ’74, SM ’75, EE ’78, ScD ’78, joined the faculty of MIT and the MIT Research Laboratory of Electronics in 1978. His research focused on approaches for improving and compressing sound and video. In 1987 Lim and Dennis M. Martinez ’82, SM ’82, EE ’83, ENG ’83, PhD ’86, filed a patent that described how to take an analog video signal and compress it into a digital stream. Over the next 10 years, Lim filed more; eventually he was issued 18 patents related to digital television in total.
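Motion estimation, the subject of the 1987 patent’s title, exploits the fact that consecutive video frames mostly show the same scene shifted slightly, so a block of pixels can be transmitted as a small motion vector instead of raw data. This bare-bones block-matching sketch in Python uses invented helper names and parameters for illustration; the patented methods are considerably more sophisticated:

```python
# Block-matching motion estimation: for a block in the current frame,
# search a small window of the previous frame for the best match.

def sad(block_a, block_b):
    """Sum of absolute differences between two equal-size blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def block(frame, y, x, size):
    """Extract a size-by-size block with top-left corner (y, x)."""
    return [row[x:x + size] for row in frame[y:y + size]]

def best_motion_vector(prev, curr, y, x, size=4, search=2):
    """Find where the block at (y, x) in curr came from in prev."""
    target = block(curr, y, x, size)
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            py, px = y + dy, x + dx
            if 0 <= py <= len(prev) - size and 0 <= px <= len(prev[0]) - size:
                cost = sad(target, block(prev, py, px, size))
                if best is None or cost < best[0]:
                    best = (cost, (dy, dx))
    return best[1]  # transmit this vector instead of the raw pixels
```

An encoder sends the vector plus a small residual correction; the decoder copies the matched block from the previous frame and applies the correction.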
None of these inventions would be useful unless digital television was standardized and adopted, so Lim became an active member of the Federal Communications Commission’s Advanced Television standardization process. In 1993 Lim helped create the so-called HDTV “Grand Alliance,” with participants from MIT and six companies.
Lim and others on the Grand Alliance Technical Oversight Group decided to recommend that HDTV sound use technology from Dolby Laboratories, a decision that, after a tangled patent lawsuit was settled, resulted in a $30 million payment from Dolby to MIT.
Lim, in turn, received $8 million from MIT as part of the Institute’s sharing policy.
Content delivery networks (Akamai)
Global Hosting System
Provisional patent filed July 14, 1998
May 19, 1999
August 22, 2000 (2 years, 1 month, 8 days)
F. Thomson Leighton, Daniel M. Lewin, SM ’98
In the early years of the World Wide Web, anyone accessing the MIT homepage at https://web.mit.edu/ would get the page and all its embedded images from a server in Cambridge, Massachusetts. This worked fine—at first. But soon it was clear that it made more sense for people in Cambridge, England, to have their data come from a server on the eastern side of the Atlantic: distributed web servers would result in speedier page loads and decreased communication costs. Such servers were called “mirror sites” and had been used for years.
“Unfortunately, mirror sites place unnecessary economic and operational burdens on Content Providers, and they do not offer economies of scale,” reads patent 6,108,703. The problem was that each content provider had to purchase and deploy its own mirror sites. A second problem was that the mirror sites didn’t offer fault tolerance. If one site went down, users had to manually choose another. Yet another problem was keeping the content on mirror sites synchronized.
MIT professor Thomson Leighton and his student Daniel Lewin, SM ’98, solved all these problems at once with an algorithm that balanced load between multiple servers in a single cluster, and an approach for directing web browsers to download images and other large objects from geographically closer hosts that were part of the globally distributed network.
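The load-balancing side of this work is closely associated with consistent hashing, which maps objects to servers so that adding or removing a server moves only a small fraction of the objects. This minimal Python sketch uses illustrative server names and parameters of my own choosing, not Akamai’s actual system:

```python
# A minimal consistent-hashing ring (illustrative sketch only).
import bisect
import hashlib

def _hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    """Maps object names to servers; removing a server only remaps
    the objects that were assigned to it."""

    def __init__(self, servers, replicas=100):
        # Each server gets many "virtual" points to smooth the load.
        self._ring = sorted(
            (_hash(f"{s}#{i}"), s)
            for s in servers
            for i in range(replicas)
        )
        self._keys = [h for h, _ in self._ring]

    def server_for(self, obj: str) -> str:
        # Walk clockwise around the ring to the first server point
        # at or after the object's hash.
        idx = bisect.bisect(self._keys, _hash(obj)) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["us-east", "eu-west", "ap-tokyo"])
server = ring.server_for("/images/logo.gif")
assert server in {"us-east", "eu-west", "ap-tokyo"}
```

The stability property falls out of the geometry: dropping a server removes only its points from the ring, so every other object still hashes to the same successor.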
In September 1997 Leighton and Lewin, along with MIT Sloan student Preetish Nijhawan, MBA ’98, were among six finalists for the annual MIT $50K Entrepreneurship Competition. MIT filed a provisional patent in July 1998; that August, Leighton and Lewin incorporated Akamai, obtaining an exclusive license from MIT.
Akamai grew fast and went public on October 29, 1999. The patent was granted 10 months later. Akamai soon had competitors. It tried to buy one of them, an Arizona-based company called Limelight. When that deal fell through, Akamai sued Limelight in 2006. The case cycled through the federal court system, making a trip to the US Supreme Court in 2014, and was finally resolved in August 2016, with Limelight agreeing to license the Akamai patents for $54 million.
Simson Garfinkel ’87, PhD ’05, is listed as the inventor on seven US patents but has yet to see revenue from even one.