Smart Phone Chips Calling for Data Centers

The processors are slower, but they use much less energy—a huge boon for those who run massive data centers.
November 10, 2011

It’s no secret that the demands placed on data centers are growing rapidly—all those 800 million Facebook profiles have to be stored somewhere. Not surprisingly, the companies that operate these vast warehouses are concerned about the costs of using all that energy. In September, Google said that its global operations continuously draw 260 million watts of power, roughly a quarter of the output of a nuclear power plant.

Last week, Hewlett-Packard announced it would partner with a Texas-based processor startup, Calxeda, to use extremely low-power ARM chips in a new generation of data-center servers. These chips are similar to the ones found in iPhones, iPads, and other mobile devices, and use significantly less energy than Intel’s traditional server chips.

“Every watt that you use on a CPU, you spend one more watt to cool it down,” says Sergis Mushell, an analyst with Gartner Research. “If you reduce the box’s [energy demands] by one watt, you save yourself two watts of power.”

Scale that up to the size of a company like Google or Facebook, and there’s a huge incentive to bring down those energy requirements. 
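Mushell’s rule of thumb is simple arithmetic: every watt drawn by the CPU costs roughly another watt of cooling, so a one-watt cut at the server saves about two watts at the facility. A minimal sketch of that math (the wattage figures are hypothetical, chosen only for illustration):

```python
# Illustrative arithmetic only: per Gartner's Sergis Mushell, each watt of
# server power requires roughly one more watt of cooling, so cutting a
# server's draw saves about twice that amount at the facility level.

def total_power(server_watts, cooling_overhead=1.0):
    """Facility power: server draw plus a proportional cooling load."""
    return server_watts * (1 + cooling_overhead)

before = total_power(160)   # a 160 W server costs 320 W at the facility
after = total_power(159)    # shave one watt off the server itself
print(before - after)       # the facility-level saving is two watts, not one
```

Multiplied across the tens of thousands of servers in a large data center, that two-for-one saving is what makes low-power chips attractive.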

Calxeda is one of many companies that license low-power processor designs from U.K.-based ARM Holdings, which was spun out of Acorn Computers in 1990; the original ARM processor was designed at Acorn in the mid-1980s. Calxeda is the first company to put ARM-based processors into data-center servers.

Nowadays, ARM-based processors can be found in more than 90 percent of the world’s cell phones. As smart phones have increasingly come to resemble full-fledged computers, these chips have migrated into other areas. About 22 percent of laptops will use ARM chips by 2015, according to one estimate.

Smarter server: Calxeda builds very-low-power servers using ARM chips, like the one in the center of this image.

“Low-power embedded processors are turning out to be a real disruptive technology,” says Steve Furber, a professor of computer science at the University of Manchester, and one of the original designers of the ARM chip.

“When they come in, they don’t bring anything radically new to the business,” he says. “They sneak into a niche that the high-end guys don’t care about. But then they eat the high-end guys’ lunch all over the place. I see data centers as the obvious next move.”

Calxeda and other companies that have licensed the ARM chip clearly agree. “It looks like our platform can deliver 10 times the performance for the same amount of power,” says Karl Freund, Calxeda’s vice president of marketing. “These processors are inherently slower, but they’re more energy-efficient.”

The drawback, of course, is that for processor-intensive tasks, like video or image rendering, an ARM-based computer may not be the way to go.

Freund says most processors running on servers draw about 160 watts under normal operation, and still draw about 80 watts when they’re idle. Calxeda’s processors, by contrast, draw just five watts under normal operation and half a watt “when our server is doing nothing,” Freund says.
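Freund’s figures imply a dramatic gap in annual energy use. A rough back-of-the-envelope comparison, assuming a hypothetical 50/50 split between active and idle time (the duty cycle is an assumption for illustration, not a number from the article):

```python
# Compare annual energy use per server using the draw figures Freund cites:
# traditional server chips ~160 W active / ~80 W idle,
# Calxeda's ARM-based parts ~5 W active / ~0.5 W idle.
# The 50% active fraction below is a hypothetical assumption.

HOURS_PER_YEAR = 24 * 365

def annual_kwh(active_w, idle_w, active_fraction=0.5):
    """Kilowatt-hours per year at a given active/idle duty cycle."""
    avg_w = active_w * active_fraction + idle_w * (1 - active_fraction)
    return avg_w * HOURS_PER_YEAR / 1000

traditional = annual_kwh(160, 80)   # averages 120 W around the clock
arm_based = annual_kwh(5, 0.5)      # averages 2.75 W around the clock
print(round(traditional / arm_based, 1))  # roughly a 44x gap per server
```

The exact ratio depends on the duty cycle, but under any plausible workload the ARM-based server uses a small fraction of the energy—before counting the matching cooling savings.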

He says the new servers wouldn’t be appropriate for more “latency-sensitive” server applications, such as algorithmic stock trading or fast database queries, but he thinks they’d be perfect for companies whose customers are everyday users of sites like Google and Facebook.

Furber expects even supercomputers to eventually use ARM processors. “There is no way forward now for computer technology except to go parallel, and once you have adopted massive parallelism, you can choose between delivering your performance requirement from, say, 1,000 complex processors or 10,000 simple processors,” he says. “These two solutions will deliver roughly the same computing power at roughly the same silicon cost, but the simple processors will deliver it at a tenth of the energy consumption, and energy is increasingly the principal cost of computing.”
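Furber’s tradeoff can be sketched numerically. The per-core performance and wattage figures below are hypothetical values chosen only to match the ratios he describes—equal aggregate throughput, a tenth of the energy—not real chip specifications:

```python
# Sketch of Furber's comparison: 1,000 complex cores vs. 10,000 simple
# cores deliver the same aggregate performance, but the simple cores
# do it at a tenth of the power. All per-core numbers are hypothetical.

complex_cores = dict(count=1_000, perf_each=10.0, watts_each=100.0)
simple_cores = dict(count=10_000, perf_each=1.0, watts_each=10.0 / 10)

for name, cfg in [("complex", complex_cores), ("simple", simple_cores)]:
    perf = cfg["count"] * cfg["perf_each"]    # aggregate throughput
    power = cfg["count"] * cfg["watts_each"]  # aggregate draw in watts
    print(f"{name}: performance={perf:g}, power={power:g} W")
# Both configurations deliver 10,000 performance units,
# but at 100 kW for the complex cores vs. 10 kW for the simple ones.
```

The sketch assumes perfect parallel scaling, which real workloads rarely achieve—but it shows why, once software is massively parallel anyway, the energy argument favors many simple cores.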
