Supercomputer Powered by Mobile Chips Suggests New Threat to Intel

Supercomputers can’t keep getting faster unless they start eating less power. Chips like those in your phone could make that possible.

The world’s biggest chip maker, Intel, is struggling because it missed out on the market for mobile chips. Now those same chips are coming for a market Intel has long had mostly to itself: supercomputers.

Supercomputers are used in government, academia, and industry for research on topics as varied as nuclear weapons and potential new drugs. Intel chips power more than 90 percent of the 500 most powerful of these machines, and the company also dominates the server and PC markets. But smartphones and tablets almost all run on chips built from designs licensed from the U.K. company ARM, which has long prioritized energy efficiency (see “Intel Outside”).

Fujitsu said this week that it will use ARM-based processors to build a successor to an existing Japanese supercomputer known as the K computer. Fujitsu is building the "Post-K" machine for the Riken Advanced Institute for Computational Science, which plans to use it for biomedical, climate, and energy research. The computer is slated to be installed and switched on in 2020.

A replacement for the K supercomputer at the Riken Advanced Institute for Computational Science in Kobe, Japan, will be built using chips similar to those in smartphones.

Fujitsu announced that plan at the International Supercomputing Conference in Germany, where there was more bad news for Intel. A new list of the world’s most powerful supercomputers was revealed, and the new top machine is not based on Intel’s x86 technology.

The makers of the Chinese TaihuLight system, at the National Supercomputing Center in the city of Wuxi, used a custom-built processor based on a homegrown Chinese architecture whose details have not been disclosed (see “New Fastest Supercomputer Is Chinese Through and Through”).

The power of a supercomputer is measured by the number of floating-point operations it can perform per second, a metric known as FLOPS. TaihuLight performs at 93 petaflops; a petaflop is a quadrillion operations per second.
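To make the units concrete, here is a minimal sketch of the arithmetic in Python (the 93-petaflop figure comes from the article; the rest is just SI prefixes):

```python
# Minimal sketch of the FLOPS unit arithmetic described above.
# "peta" is the SI prefix for 10**15, i.e., a quadrillion.
PETA = 10**15

taihulight_petaflops = 93                      # performance figure cited in the article
taihulight_flops = taihulight_petaflops * PETA

print(f"TaihuLight: {taihulight_flops:.2e} floating-point operations per second")
# -> TaihuLight: 9.30e+16 floating-point operations per second
```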

TaihuLight commands an enormous amount of computing power, but even that isn’t enough. And the prospects for making supercomputers faster have looked increasingly murky in recent years. Using more powerful chips, usually from Intel, once delivered predictable gains in high-performance computing. But other factors, like the speed at which data can be moved around inside a system, have become limiting. And the power bills racked up by top supercomputers have become a major headache. The race to build bigger machines has seemingly hit a wall.
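One way to see why power has become the limiting factor is to express performance per watt, the metric energy-conscious designers optimize for. Below is a minimal sketch, assuming TaihuLight draws roughly 15.4 megawatts, a figure from public Top500 reporting rather than from this article:

```python
# Minimal sketch of a performance-per-watt calculation.
# The ~15.4 MW power draw is an assumption based on public Top500
# reporting; the 93-petaflop figure comes from the article.
PETA, GIGA, MEGA = 10**15, 10**9, 10**6

performance_flops = 93 * PETA      # sustained performance
power_watts = 15.4 * MEGA          # assumed power draw under load

gflops_per_watt = performance_flops / power_watts / GIGA
print(f"~{gflops_per_watt:.1f} gigaflops per watt")
# -> ~6.0 gigaflops per watt
```

At that efficiency, an exaflop-class machine (a thousand petaflops) would draw well over 100 megawatts, which is why builders are chasing chips that deliver more operations per watt.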

Computers, from supercomputers down to mobile devices, used to get steadily more powerful as chip makers crammed ever more, ever smaller transistors onto chips, a trend known as Moore’s Law. But transistors are no longer shrinking as quickly, and the power consumption of chips is getting out of control. Supercomputer builders have started looking to alternative designs that could allow their machines to keep getting faster. One of those is ARM.

“It’s a disruptive time to be in high performance computing,” says James Cuff, assistant dean for research computing at Harvard University. “Those that design machines inside the power envelope with the right support of key algorithms and codes are going to be the players that ultimately win in this new game.”

ARM has been investing in getting its chips into high-performance computers since 2011. It has struck partnerships with IBM and the graphics chip maker Nvidia, and it recently formed software partnerships to ensure that popular research software will run on ARM-based processors.

ARM’s strategy has yet to be fully tested, since no supercomputer based on its chip designs has been built, points out Jack Dongarra, a professor of computer science at the University of Tennessee in Knoxville and one of the authors of the list of the 500 most powerful supercomputers. But in supercomputing’s new energy-conscious era, it could make sense. “I think ARM has great potential,” he says. “It hasn’t been demonstrated in a large-scale machine so far. But there is nothing in the design that would limit its use.”

Update: An earlier version of this story incorrectly stated that a petaflop is equal to a thousand billion per second. It is a quadrillion per second. 
