MIT News magazine

The Quantum Frontier

MIT researchers have been leading the way toward quantum computers—devices that could solve problems too big for all today’s supercomputers put together.

In May 1981, at a conference hosted by MIT’s Laboratory for Computer Science, Richard Feynman ‘39 described a theoretical device that he called a “quantum computer,” which would perform calculations by harnessing the strange behavior of matter at very small scales. Theoretical physicists took up the idea, showing that quantum computers could, in principle, do anything ordinary computers could and arguing that they might be able to do some things much, much faster. Still, for more than a decade quantum computation remained, for all but a few enthusiasts, a subject only for idle speculation.

From left: Professors Peter Shor, Scott Aaronson, and Edward Farhi

That changed spectacularly in 1994, when Peter Shor, PhD ‘85, now a professor of applied mathematics at MIT, described a quantum algorithm for finding a number’s prime factors. A quantum computer running Shor’s algorithm would be able to perform factoring tasks that today’s computers couldn’t finish in the lifetime of the universe. Since the difficulty of factoring large numbers is all that guarantees the security of RSA and other widely used public-key cryptosystems, the rest of the world—and particularly organizations like the National Security Agency—had to take notice. “Shor showed that if you could build quantum computers, then there would be people who wanted to buy them,” says Seth Lloyd, a professor of mechanical engineering who studies quantum computing. “Shor’s algorithm constituted the killer app that got everybody interested.”

Although fully functional, general-purpose quantum computers are probably decades away, Shor’s algorithm turned quantum computation into a teeming area of research. Today, Lloyd says, the number of researchers worldwide working on quantum computing is probably somewhere around 5,000. “I think that if we get another 300 members at the American Physical Society, we’ll be a division of the American Physical Society,” he says. And in every respect, from discovering new algorithms to dreaming up new types of computers, MIT researchers are in the thick of the fray.

Quantum Possibility

Quantum computation is rooted in the central mystery of quantum physics: that tiny particles of matter can inhabit multiple, seemingly mutually exclusive states at the same time. Fire a single photon—a particle of light—at a barrier with two slits in it, and it will pass through both slits at once. Electrons have a property called “spin,” which can be thought of as clockwise or counterclockwise rotation—but a single electron can simultaneously spin clockwise and counterclockwise. This ability to be in more than one state at a time, which physicists call superposition, is, as Feynman once put it, “impossible, absolutely impossible, to explain in any classical way.” To make things even weirder, if you have a quantum particle that’s in multiple states at once and you perform some measurement on it, it instantly snaps into only one of those states. And which of them it assumes is totally random. (That’s the basis of another famous physics quote—Einstein’s insistence, against the dicta of quantum physics, that “God doesn’t play dice with the universe.”)

In computer science, the fundamental unit of information is the bit, which can take one of two values, usually expressed as 0 and 1. The quantum-computing pioneers realized that since a quantum particle can be in two states at once, it could represent 0 and 1 at the same time. Two quantum bits—or qubits—could represent four values at once; three qubits, eight; four qubits, 16; and so on. A single calculation involving N qubits would be like doing 2^N calculations at once.
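
To make that bookkeeping concrete, here is a minimal Python sketch (illustrative only; the variable names are ours, not from any quantum-computing library) of what a classical computer must do just to write down an N-qubit state: track one complex amplitude for each of the 2^N basis states.

```python
import itertools

# A classical description of an N-qubit register: one amplitude per
# basis state, so the list doubles in length with every added qubit.
N = 3
basis_states = ["".join(bits) for bits in itertools.product("01", repeat=N)]
amplitudes = [1 / (2 ** (N / 2))] * (2 ** N)  # uniform superposition

for state, amp in zip(basis_states, amplitudes):
    print(f"|{state}>  amplitude = {amp:.4f}")
print(f"{N} qubits -> {2 ** N} amplitudes tracked at once")
```

Adding one qubit doubles the list, which is why classically simulating even a few dozen qubits strains the largest machines.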

Suppose, however, that you have eight qubits, representing the results of 256 simultaneous calculations. If you perform a measurement on the qubits, the superposition collapses: each qubit immediately assumes a value of either 0 or 1. You’re left with only one of the 256 initial possibilities, and that one chosen at random. How do you guarantee that it’s the one you want?
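
A toy simulation of that collapse, assuming a uniform superposition over all 256 states (Python; a sketch of the statistics, not of how a real device is programmed):

```python
import random

# Born-rule measurement of an 8-qubit register: the superposition over
# 256 basis states collapses to a single outcome, chosen at random with
# probability |amplitude|^2.
N = 8
amplitudes = [1 / (2 ** (N / 2))] * (2 ** N)   # uniform superposition
probs = [abs(a) ** 2 for a in amplitudes]

outcome = random.choices(range(2 ** N), weights=probs, k=1)[0]
print(f"measured |{outcome:0{N}b}>; the other {2 ** N - 1} possibilities are gone")
```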

The First Quantum Algorithm

That’s the question Shor answered, in what remains “the major result in our field,” according to Edward Farhi, the director of MIT’s Center for Theoretical Physics, who also researches quantum computation.

When a particle in superposition randomly assumes a single state, it does so in accord with certain probabilities: over many repeated measurements, particles will snap into some states more often than others. Those probabilities can be depicted as a curve that happens to look a lot like the crest of a wave. It turns out that the same math that describes the physics of waves also describes the physics of quantum probabilities.

When waves collide, they interfere with each other, either constructively or destructively. If two crests intersect, the result is a bigger crest; if a crest intersects with a trough, they cancel each other out. Shor found an ingenious way to represent the problem of factoring with probability waves, so that the right answers would tend to reinforce each other while the wrong ones would essentially disappear. The result is still a probability wave, but when the superposition collapses, the odds are very high that you’ll get the right answer.
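
A toy numerical illustration of that cancellation (this is just the arithmetic of interfering amplitudes, not Shor’s actual construction):

```python
# Two computational "paths" can lead to the same outcome. If their
# amplitudes have the same sign they reinforce (constructive
# interference); opposite signs cancel (destructive interference).
# Shor's algorithm arranges the arithmetic of factoring so that wrong
# answers cancel this way.
path_a, path_b = 0.5, 0.5
right_answer_amp = path_a + path_b   # crests align: amplitude 1.0
wrong_answer_amp = path_a - path_b   # crest meets trough: amplitude 0.0

print("P(right) =", abs(right_answer_amp) ** 2)  # 1.0
print("P(wrong) =", abs(wrong_answer_amp) ** 2)  # 0.0
```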

Shor began working on the algorithm in 1993, when he was at AT&T’s Bell Labs research center, after hearing a talk on quantum computing by Umesh Vazirani ‘81, a professor at the University of California, Berkeley. “Not working on it full time, of course,” he says. “In fact, I didn’t really tell anyone I was working on it until I came up with it.”

Actually, the first quantum algorithm that Shor told anyone about, in April 1994, was one for calculating logarithms, a problem closely related to factorization. “I gave a talk about the algorithm at Bell Labs on a Tuesday,” he says. “That Saturday, I was home with a bad cold, and Umesh Vazirani called me up from California, very excited, and said, ‘I hear you know how to factor with a quantum computer.’ ” In fact, he did: in the intervening four days, he had adapted his algorithm to that very problem. “The Economist interviewed me not long after that,” Shor says. Soon “I was getting tons of e-mails about the algorithm, and I still hadn’t written the paper yet.” When he gave his first public talk on it at Cornell in early May, he says, laughing, “someone from the NSA talked to me about it afterwards.”

Harnessing MRI Technology

When Seth Lloyd came to MIT in 1994, “I’d written a bunch of papers about quantum computing,” he says. “I and maybe, like, five or six other people had been working on this before 1994.” For Lloyd, the announcement of Shor’s algorithm had a very concrete effect: “It made it a lot easier to get tenure.”

One of the papers Lloyd had published, in Science in 1993, proposed the first feasible design for a quantum computer. “You could think of it as a cup full of molecules,” he says. In each molecule, qubits were represented by different types of atoms, and all the molecules in the cup would perform the same computation at the same time.

In separate research, visiting professor David Cory and Neil Gershenfeld, head of the Media Lab’s Physics and Media Group, showed that Lloyd’s design could be realized using nuclear magnetic resonance (NMR), the phenomenon underlying magnetic resonance imaging. A powerful magnet would align the spins of the atoms making up the molecules. Different frequencies of radio waves could then put some of the atoms in superposition and flip the spins of others. The atoms with definite spins would represent data, and the spins in superposition would represent the results of multiple operations performed on that data.

In 1998, Gershenfeld teamed up with Isaac Chuang ‘90, ‘91, SM ‘91, then at IBM Almaden Research Center in San Jose, California, and Mark Kubinec of UC Berkeley to build the first computing system that used NMR to execute a quantum algorithm. It had two qubits.

In 2000, Chuang returned to MIT, where he’s now a professor of physics and electrical engineering and computer science. The next year, he and IBM colleagues built a seven-qubit NMR computer that successfully executed Shor’s algorithm for the first time. It determined that the prime factors of 15 are very probably three and five.

One problem with NMR quantum computing is that since qubits are represented by different atoms in a single molecule, more complex calculations require more complex molecules. But the larger the molecule, the stronger its electromagnetic field, and the greater the difficulty of distinguishing the electromagnetic signal produced by a single atom. Some researchers are investigating tiny sensors that can read magnetic signals from individual molecules. But Chuang, among others, has turned to quantum computers that use ions trapped in electromagnetic fields as qubits, a technique proposed in 1995 by researchers at Austria’s University of Innsbruck.

Ion-trap quantum computing uses oscillating electromagnetic fields to isolate individual ions, and laser light rather than radio pulses to change the ions’ quantum states. While that gives researchers more precise control over qubits than existing NMR techniques do, it also demands such precision. Electrons orbiting a nucleus can be in different energy states. Add enough energy to an electron, and it will pop up to the next energy level; if it loses just a little energy, it will drop back down. Ion-trap quantum computing requires keeping electrons in different, precisely specified energy states. This is so tricky that some researchers began to consider whether there could be a quantum computer that just stayed in its lowest energy state.

An Adiabatic Approach

In 2000, MIT physicists Edward Farhi and Jeffrey Goldstone, the math department’s Michael Sipser, and Northeastern’s Sam Gutmann ‘73, PhD ‘77, proposed a new type of quantum computer, called an adiabatic quantum computer, which is always in its lowest energy state. (Objects tend to seek the lowest energy states they can find, so low energy states tend to be more stable than high ones.) The paper didn’t specify how the qubits would be realized. But it was based on the recognition that solutions to computational problems could be represented as the lowest energy states of a physical system.

Two magnets, for example, will tend to line up north pole to south pole because it takes less energy than forcing the north poles together. A bunch of magnets placed arbitrarily on a board will therefore start flipping so that as many of them as possible are aligned north-south. In theory, if you placed magnets in the right pattern and set their initial orientations in just the right way, you could encode a computational problem. As the magnets flipped to find their lowest-energy orientation, they would converge on the problem’s solution. Adiabatic quantum computing is similar, but it could explore lots of possible solutions simultaneously, since it would use qubits rather than magnets.
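
The magnets-on-a-board picture is essentially what physicists call an Ising model. Here is a brute-force Python sketch of finding the lowest-energy spin configuration, the state such a machine would relax into (the coupling matrix is made up for illustration):

```python
import itertools

# Tiny Ising chain: each "magnet" is a spin of +1 or -1, and coupled
# neighbors pay an energy penalty when they point the same way. The
# lowest-energy configuration is the "answer" the system settles into.
J = [[0, 1, 0, 0],   # hypothetical coupling matrix: which pairs interact
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
n = len(J)

def energy(spins):
    return sum(J[i][j] * spins[i] * spins[j]
               for i in range(n) for j in range(i + 1, n))

best = min(itertools.product([-1, 1], repeat=n), key=energy)
print("lowest-energy configuration:", best, "energy:", energy(best))
```

Brute force like this checks all 2^n configurations, which is exactly what becomes hopeless as n grows; the adiabatic approach aims to let physics find the minimum instead.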

With adiabatic quantum computing, a quantum-mechanical physical system would be set up in its lowest energy state, called the ground state. Initially, the system would encode a much simpler problem than the one it’s intended to solve. But over time, some control parameter of the system—say, the strength of its electromagnetic field—would be gradually changed, until the system ended up encoding the more difficult problem. If the change took place slowly enough, the system would stay in its ground state, so it would end up representing the solution to the problem.
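
In the math, that gradual change is an interpolation between two energy operators, or Hamiltonians. A small numerical sketch of the idea (Python with NumPy; the 2x2 matrices are illustrative, not a real problem encoding):

```python
import numpy as np

# Start in the easily prepared ground state of H0, then slowly morph
# toward H1, whose ground state encodes the answer. Swept slowly enough,
# the system tracks the instantaneous ground state the whole way.
H0 = np.array([[0.0, -1.0], [-1.0, 0.0]])   # simple starting Hamiltonian
H1 = np.array([[1.0, 0.0], [0.0, -1.0]])    # "problem" Hamiltonian

for s in np.linspace(0.0, 1.0, 5):
    H = (1 - s) * H0 + s * H1               # interpolated Hamiltonian
    energies, states = np.linalg.eigh(H)    # eigenvalues in ascending order
    print(f"s={s:.2f}  ground energy={energies[0]:+.3f}  "
          f"ground state={np.round(states[:, 0], 3)}")
```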

“Some people think that if you had a quantum computer that was required to stay in its ground state—by, let’s say, making it very cold—that that might make the system a little bit less prone to error,” Farhi says. “Because if you’re always cold, and you’re always in the ground state, that’s probably an easier place to be than in some excited state that you have to carefully control.”

The problem with the adiabatic approach is that the system has to change slowly to keep it from jumping to a higher energy state, and no one knows how slowly is slowly enough. “If it’s infinitely slow, we know it will work,” Farhi says. But if the system has to change too slowly, it will offer no advantages over conventional computers.

Farhi continues to investigate the question of how rapidly an adiabatic quantum computing system can change, both through computer modeling of relatively simple systems and through mathematical analysis. Meanwhile, in 2002, Lloyd and Bill Kaminsky, a graduate student in his group, proposed a way to realize an adiabatic quantum computer using superconducting electrical circuits, in which current flow can be in superposition: effectively, the current is flowing clockwise and counterclockwise at once. The direction of current flow represents the value of a qubit, and the system’s total energy depends on the direction of current flow in adjacent circuits. When a magnetic field is applied, the currents go into superposition. When measured, the system then snaps into its lowest energy state, revealing the answer. Two years later, Lloyd; Wim van Dam, a postdoc in Farhi’s group; and four other researchers from four different universities proved that an adiabatic quantum computer could, in principle, perform any calculation that a “conventional” quantum computer could.

In 2007, a company in Burnaby, British Columbia, demonstrated what it said was a 16-qubit adiabatic quantum computer that used superconducting circuits. At the end of 2008, the company, D-Wave, announced that it had gotten the qubit count up to 128. Many experts have been skeptical, but in a Nature paper published earlier this year, D-Wave researchers demonstrated that their eight-qubit cell does exhibit quantum effects. The company has raised more than $65 million in funding, and in May it sold its first commercial device, a 128-qubit D-Wave One, to Lockheed Martin.

Surpassing the Supercomputers

Part of the problem with demonstrations like D-Wave’s, or even with NMR quantum computers like Chuang’s, is that the quantum circuits are too simple to perform calculations that conventional computers can’t. Scott Aaronson, an associate professor of computer science who, at 30, is the youngest of MIT’s high-profile quantum-computing researchers, is trying to address that problem by, as he puts it, “meeting the experimentalists halfway.”

In 2011, Aaronson and his grad student Aleksandr Arkhipov proposed an experiment that, if it worked, would perform a computation not even the most powerful of today’s supercomputers could perform. The experimental setup should, he believes, be much simpler to build than a full-scale quantum computer.

The experiment would use a series of beam splitters, devices used in optical networking to split laser beams in two. In 1987, physicists at the University of Rochester discovered that if two photons arrived at a beam splitter at exactly the same time, quantum-mechanical interactions would force both to go either right or left. They would never exit the beam splitter in different directions, as the laws of probability alone would predict.
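
That cancellation, known as the Hong-Ou-Mandel effect, falls out of a two-line amplitude calculation. A sketch in Python, using the standard convention that reflection at a 50/50 beam splitter adds a 90-degree phase:

```python
# The "one photon exits each side" outcome has two contributing paths:
# both photons transmit, or both reflect. Their amplitudes cancel
# exactly, so that outcome is never observed.
t = 1 / 2 ** 0.5      # transmission amplitude
r = 1j / 2 ** 0.5     # reflection amplitude (90-degree phase shift)

both_transmit = t * t   # +0.5
both_reflect = r * r    # -0.5
print("P(one photon per output) =", abs(both_transmit + both_reflect) ** 2)  # 0.0
```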

Aaronson and Arkhipov propose routing a finite number of photons—say, 20—through a series of beam splitters to a set of light detectors—say, about 400. Calculating the frequency with which different numbers of photons would arrive at different detectors is probably beyond the computational capacity of all the computers in the world. But, Aaronson and Arkhipov proved, so is calculating statistically plausible outcomes for even a couple of dozen runs of the experiment. This is a problem, however, that a couple of dozen successful runs of the experiment would solve.
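
The source of that classical hardness, central to Aaronson and Arkhipov’s proof though not spelled out above, is that the photon statistics depend on the permanent of a matrix describing the optical network, a quantity with no known efficient algorithm. A naive Python sketch:

```python
import itertools
import math

# The permanent is like the determinant but with no minus signs, which
# destroys the algebraic shortcuts: the sum below has n! terms. Boson-
# sampling outcome probabilities are proportional to |permanent|^2 of
# submatrices of the network, which is why classical simulation becomes
# infeasible at a few dozen photons.
def permanent(m):
    n = len(m)
    return sum(
        math.prod(m[i][sigma[i]] for i in range(n))
        for sigma in itertools.permutations(range(n))
    )

print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10
```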

When they first described their experiment, Terry Rudolph, an advanced research fellow in Imperial College London’s Quantum Optics and Laser Science group, said that “it has the potential to take us past what I would like to call the ‘quantum singularity,’ where we do the first thing quantumly that we can’t do on a classical computer.”

Experimental physicists at several universities have taken up Aaronson and Arkhipov’s challenge and are confident that in relatively short order they will get the experiment to work with maybe four photons. A version with 20 photons will take longer, and a fully functioning quantum computer could take longer still. But when that computer is finally built and the history of its invention is written, the early chapters will be studded with the names of MIT professors.
