Scientists Are Defining Quantum-Computing Terms Because Everyone Is Confused

August 23, 2017

One person’s trapped ion is another’s electrostatically defined quantum dot. I’m talking about qubits, by the way—the quantum-computing equivalent of the bits in regular computers. But if you don’t quite follow, don’t worry: you’re far from being alone.

“Confusions exist on what quantum computing or a quantum computer means,” says Hidetoshi Nishimori, a professor at the Tokyo Institute of Technology who specializes in quantum computing. And if he thinks that, then God help the rest of us. But do not fear. For the Institute of Electrical and Electronics Engineers is so painfully aware of the confused language in quantum-computing circles—whether it’s to do with quantum tunneling, superposition, quantum entanglement, or something else entirely—that it’s kicked off a project to bring a little order and understanding to the proceedings.

The snappily titled IEEE P7130 Standard for Quantum Computing Definitions Project will corral experts and define the most important terms in the field so that everybody is reading from the same page. That, says the IEEE, will “make quantum computing more accessible to a larger group of contributors, including developers of software and hardware, materials scientists, mathematicians, physicists, engineers, climate scientists, biologists and geneticists.” The project’s only just starting, but it sounds as if it’s going to be very useful.

And, frankly, the timing is perfect. IBM, Google, Intel, and others are all racing to build the first practical quantum computer, which is why we made quantum computing one of our 10 Breakthrough Technologies of 2017. In fact, Google has already promised that it will build and test a quantum device that beats a regular computer—demonstrating so-called quantum supremacy—before the year is out, and IBM plans to do the same “over the next few years.” Maybe by then we’ll all know what the hell we’re talking about, too.

