
Google’s Quantum Dream Machine

Physicist John Martinis could deliver one of the holy grails of computing to Google—a machine that dramatically speeds up today’s applications and makes new ones possible.
December 18, 2015

John Martinis used the arm of his reading glasses to indicate the spot where he intends to demonstrate an almost unimaginably powerful new form of computer in a few years. It is a cylindrical socket an inch and a half across, at the bottom of a torso-sized stack of plates, blocks, and wires of brass, copper, and gold. The day after I met with him this fall, he loaded the socket with an experimental superconducting chip etched with a microscopic Google logo and cooled the apparatus to a hundredth of a degree Celsius above absolute zero. To celebrate that first day of testing the machine, Martinis threw what he called “a little party” at a brewpub with colleagues from his newly outfitted Google lab in Santa Barbara, California.

John Martinis has been researching how quantum computers could work for 30 years. Now he could be on the verge of finally making a useful one.

That party was nothing compared with the celebration that will take place if Martinis and his group can actually create the wonder computer they seek. Because it would harness the strange properties of quantum physics that arise in extreme conditions like those on the ultracold chip, the new computer would let a Google coder run calculations in a coffee break that would take a supercomputer of today millions of years. The software that Google has developed on ordinary computers to drive cars or answer questions could become vastly more intelligent. And earlier-stage ideas bubbling up at Google and its parent company, such as robots that can serve as emergency responders or software that can converse at a human level, might become real.

The theoretical underpinnings of quantum computing are well established. And physicists can build the basic units, known as qubits, out of which a quantum computer would be made. They can even operate qubits together in small groups. But they have not made a fully working, practical quantum computer.

Martinis is a towering figure in the field: his research group at the University of California, Santa Barbara, has demonstrated some of the most reliable qubits around and gotten them running some of the code a quantum computer would need to function. He was hired by Google in June 2014 after persuading the company that his team’s technology could mature rapidly with the right support. With his new Google lab up and running, Martinis guesses that he can demonstrate a small but useful quantum computer in two or three years. “We often say to each other that we’re in the process of giving birth to the quantum computer industry,” he says.

Google and quantum computing are a match made in algorithmic heaven. The company is often said to be defined by an insatiable hunger for data. But Google has a more pressing strategic addiction: to technology that extracts information from data, and even creates intelligence from it. The company was founded to commercialize an algorithm for ranking Web pages, and it built its financial foundations with systems that sell and target ads. More recently, Google has invested heavily in the development of AI software that can learn to understand language or images, perform basic reasoning, or steer a car through traffic—all things that remain tricky for conventional computers but should be a breeze for quantum ones. “Machine learning is a core, transformative way by which we’re rethinking how we’re doing everything,” Google’s CEO, Sundar Pichai, recently informed investors. Supporting that effort would be the first of many jobs for Martinis’s new quantum industry.

Dream maker

As recently as last week the prospect of a quantum computer doing anything useful within a few years seemed remote. Researchers in government, academic, and corporate labs were far from combining enough qubits to make even a simple proof-of-principle machine. A well-funded Canadian startup called D-Wave Systems sold a few of what it called “the world’s first commercial quantum computers” but spent years failing to convince experts that the machines actually were doing what a quantum computer should (see “The CIA and Jeff Bezos Bet on Quantum Computing”).

Then NASA summoned journalists to building N-258 at its Ames Research Center in Mountain View, California, which since 2013 has hosted a D-Wave computer bought by Google. There Hartmut Neven, who leads the Quantum Artificial Intelligence lab Google established to experiment with the D-Wave machine, unveiled the first real evidence that it can offer the power proponents of quantum computing have promised. In a carefully designed test, the superconducting chip inside D-Wave’s computer—known as a quantum annealer—had performed 100 million times faster than a conventional processor.


However, this kind of advantage needs to be available in practical computing tasks, not just contrived tests. “We need to make it easier to take a problem that comes up at an engineer’s desk and put it into the computer,” said Neven, a talkative machine-learning expert. That’s where Martinis comes in. Neven doesn’t think D-Wave can get a version of its quantum annealer ready to serve Google’s engineers quickly enough, so he hired Martinis to do it. “It became clear that we can’t just wait,” Neven says. “There’s a list of shortcomings that need to be overcome in order to arrive at a real technology.” He says the qubits on D-Wave’s chip are too unreliable and aren’t wired together densely enough. (D-Wave’s CEO, Vern Brownell, responds that he’s not worried about competition from Google.)

Google will be competing not only with whatever improvements D-Wave can make, but also with Microsoft and IBM, which have substantial quantum computing projects of their own (see “Microsoft’s Quantum Mechanics” and “IBM Shows Off a Quantum Computing Chip”). But those companies are focused on designs much further from becoming practically useful. Indeed, a rough internal time line for Google’s project estimates that Martinis’s group can make a quantum annealer with 100 qubits as soon as 2017. D-Wave’s latest chip already has 1,097 qubits, but Neven says a high-quality chip with fewer qubits will probably be useful for some tasks nonetheless. A quantum annealer can run only one particular algorithm, but it happens to be one well suited to the areas Google most cares about. The applications that could particularly benefit include pattern recognition and machine learning, says William Oliver, a senior staff member at MIT Lincoln Laboratory who has studied the potential of quantum computing.
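That one algorithm is, at bottom, a search for the lowest-energy configuration of a network of interacting magnetic spins, a formulation into which many optimization and pattern-recognition problems can be translated. The sketch below is a classical toy in Python with invented coefficients (not anything from Google or D-Wave); it shows the kind of objective an annealer minimizes, solved by brute force at a size where that is still feasible:

```python
import itertools

# Toy Ising-style objective of the kind a quantum annealer minimizes:
# E(s) = sum_i h[i]*s_i + sum_{i<j} J[i,j]*s_i*s_j, with each s_i in {-1, +1}.
# The h and J values here are invented purely for illustration.
h = {0: 0.5, 1: -0.3, 2: 0.1}
J = {(0, 1): -1.0, (1, 2): 0.8, (0, 2): 0.2}

def energy(spins):
    """Energy of one spin configuration under the toy model."""
    e = sum(h[i] * s for i, s in enumerate(spins))
    e += sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    return e

# Brute force is fine at three spins; an annealer's job is to find the
# minimum when there are far too many configurations to enumerate.
best = min(itertools.product([-1, 1], repeat=3), key=energy)
print(best, energy(best))
```

An annealer earns its keep when the number of spins is large enough that enumeration, whose cost doubles with every added spin, becomes hopeless.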

John Martinis, 57, is the perfect person to wrestle a mind-bogglingly complex strand of quantum physics research into a new engineering discipline. Not only can he dive into the esoteric math, but he loves to build things. Operating even a single qubit is a puzzle assembled from deep quantum theory, solid-state physics, materials science, microfabrication, mechanical design, and conventional electronics. Martinis, who is tall with a loud, friendly voice, makes a point of personally mastering the theory and technical implementation of every piece. Giving a tour of his new lab at Google, he is as excited about the new soldering irons and machine tools in the conventional workshop area as he is about the more sophisticated equipment that chills chips and operates them. “To me it’s fun,” he says. “I’ve been able to do experiments no one else could do, because I could build my own electronics.”

This experimental chip, etched with the Google logo, is cooled to just above absolute zero in order to generate quantum effects.

Martinis and his team have to be adept at so many things because qubits are fickle. They can be made in various ways—Martinis uses loops of aluminum that become superconducting when chilled, carrying tiny electrical currents—but all represent data by means of delicate quantum states that are easily distorted or destroyed by heat and electromagnetic noise, potentially ruining a calculation.

Qubits use their fragile physics to do the same thing that transistors use electricity to do on a conventional chip: represent binary bits of information, either 0 or 1. But qubits can also attain a state, called a superposition, that is effectively both 0 and 1 at the same time. Qubits in a superposition can become linked by a phenomenon known as entanglement, which means an action performed on one has instant effects on the other. Those effects allow a single operation in a quantum computer to do the work of many, many more operations in a conventional computer. In some cases, a quantum computer’s advantage over a conventional one should grow exponentially with the amount of data to be worked on.
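Those ideas can be made concrete with a small classical simulation. The Python sketch below is a pedagogical toy, not code from any quantum computing project: it builds the state vector of two qubits, puts one into superposition with a Hadamard operation, and entangles the pair with a CNOT:

```python
import numpy as np

# Two-qubit state vector over the basis |00>, |01>, |10>, |11>.
state = np.zeros(4, dtype=complex)
state[0] = 1.0  # start in |00>

# A Hadamard on the first qubit creates an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
state = np.kron(H, I) @ state

# CNOT (first qubit controls the second) entangles the pair: measuring
# one qubit now instantly fixes the outcome for the other.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
state = CNOT @ state

print(state)  # amplitude 1/sqrt(2) on |00> and on |11>: a Bell state
```

Note that the state vector doubles in length with every qubit added; a few dozen qubits already outrun the memory of any conventional machine, which is one way to see where the exponential advantage comes from.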

The difficulty of creating qubits that are stable enough is the reason we don’t have quantum computers yet. But Martinis has been working on that for more than 11 years and thinks he’s nearly there. The coherence time of his qubits, or the length of time they can maintain a superposition, is tens of microseconds—about 10,000 times the figure for those on D-Wave’s chip.
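Some back-of-the-envelope arithmetic shows why that figure matters. Assuming, purely for illustration, a 20-nanosecond operation time and a simple exponential decay of the superposition (both assumptions mine, not numbers from Martinis or D-Wave):

```python
import math

# Rough decoherence arithmetic with illustrative numbers.
# A superposition decays roughly as exp(-t / T_coherence).
gate_time = 20e-9            # assume ~20 ns per quantum operation
t_long = 50e-6               # "tens of microseconds" -> take 50 us
t_short = t_long / 10_000    # ~10,000x shorter, per the article's comparison

for label, T in [("long coherence", t_long), ("short coherence", t_short)]:
    survival = math.exp(-100 * gate_time / T)  # after 100 operations
    print(f"{label}: {survival:.3f} of the superposition survives")
```

At tens of microseconds of coherence, a hundred operations barely dent the quantum state; at the shorter figure, it is gone almost immediately.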

Martinis’s confidence in his team’s hardware even has him thinking he can build Google an alternative to a quantum annealer that would be even more powerful. A universal quantum computer, as it would be called, could be programmed to take on any kind of problem, not just one kind of math. The theory behind that approach is actually better understood than the one for annealers, in part because most of the time and money in quantum computing research have been devoted to universal quantum computing. But qubits have not been reliable enough to translate the theory into a working universal quantum computer.

This structure of metal plates is necessary to cool and shield quantum chips.

Until March, that is, when Martinis and his team became the first to demonstrate qubits that crossed a crucial reliability threshold for a universal quantum computer (see “Google Researchers Make Quantum Computing Components More Reliable”). They got a chip with nine qubits to run part of an error-checking program, called the surface code, that’s necessary for such a computer to operate (IBM has since gotten part of the surface code working on four qubits). “We demonstrated the technology to a point where I knew we could scale up,” says Martinis. “This was for real.”
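The surface code itself is far too elaborate to reproduce here, but the classic three-bit repetition code below (a textbook warm-up, not the code Martinis ran) illustrates the principle they share: measuring parities, known as syndromes, pinpoints an error without reading out the protected data directly:

```python
import random

def encode(bit):
    # Spread one logical bit across three physical bits.
    return [bit, bit, bit]

def add_noise(bits, p=0.1):
    # Flip each bit independently with probability p.
    return [b ^ (random.random() < p) for b in bits]

def correct(bits):
    # Syndromes: parities of neighboring pairs localize a single flip.
    s1, s2 = bits[0] ^ bits[1], bits[1] ^ bits[2]
    if s1 and not s2:
        bits[0] ^= 1
    elif s1 and s2:
        bits[1] ^= 1
    elif s2:
        bits[2] ^= 1
    return bits

noisy = add_noise(encode(1))
print(correct(noisy))  # recovers [1, 1, 1] unless two or more bits flipped
```

The surface code applies the same syndrome-measurement idea across a two-dimensional grid of qubits, which is part of why it consumes so much of a chip's capacity.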

Martinis aims to show off a complete universal quantum computer with about 100 qubits around the same time he delivers Google’s new quantum annealer, in about two years. That would be a milestone in computer science, but it would be unlikely to help Google’s programmers right away. The surface code is so demanding that although a chip with 100 qubits could run the error-checking program, it would have no capacity left over for useful work, says Robert McDermott, who leads a quantum computing research group at the University of Wisconsin. Yet Martinis thinks that once he can get his qubits reliable enough to put 100 of them on a universal quantum chip, the path to combining many more will open up. “This is something we understand pretty well,” he says. “It’s hard to get coherence but easy to scale up.”

Stupid algorithms

When Martinis explains why his technology is needed at Google, he doesn’t spare the feelings of the people working on AI. “Machine-learning algorithms are really kind of stupid,” he says, with a hint of wonder in his voice. “They need so many examples to learn.”

Indeed, the machine learning used by Google and other computing companies is pathetic next to the way humans or animals pick up new skills or knowledge. Teaching a piece of software new tricks, such as how to recognize cars and cats in photos, generally requires thousands or millions of carefully curated and labeled examples. Although a technique called deep learning has recently produced striking advances in the accuracy with which software can learn to interpret images and speech, more complex faculties like understanding the nuances of language remain out of machines’ reach.

Figuring out how Martinis’s chips can make Google’s software less stupid falls to Neven. He thinks that the prodigious power of qubits will narrow the gap between machine learning and biological learning—and remake the field of artificial intelligence. “Machine learning will be transformed into quantum learning,” he says. That could mean software that can learn from messier data, or from less data, or even without explicit instruction. For instance, Google’s researchers have designed an algorithm they think could allow machine-learning software to pick up a new trick even if as much as half the example data it’s given is incorrectly labeled. Neven muses that this kind of computational muscle could be the key to giving computers capabilities today limited to humans. “People talk about whether we can make creative machines; the most creative systems we can build will be quantum AI systems,” he says.

More practically, with only D-Wave’s machine to practice on for now, Google’s researchers can’t do much more than speculate about what exactly they could or should do with the chips Martinis is building. Even when they do get their hands on them, it will take time to invent and build the infrastructure needed to operate large numbers of the exotic devices so they can contribute materially to Google’s business.

Neven is confident that Google’s quantum craftsmen and his team can get through all that. He pictures rows of superconducting chips lined up in data centers for Google engineers to access over the Internet relatively soon. “I would predict that in 10 years there’s nothing but quantum machine learning; you don’t do the conventional way anymore,” he says. A smiling Martinis warily accepts that vision. “I like that, but it’s hard,” he says. “He can say that, but I have to build it.”
