It takes energy to run the computers inside data centers—and then more energy to cool them down. With demand for cloud computing growing rapidly, the companies that run these centers are looking for ways to save on energy costs. The microprocessors inside their computers look to be an ideal target.
For years, Intel and AMD have dominated the microprocessor market with high-performance chips. But as the cost of cooling chips becomes a bigger issue, these companies will face competition from low-power upstarts, some of which use chip architectures originally developed for cell phones and other mobile devices.
The ARM chip design, licensed to chipmakers by ARM, a company based in Cambridge, U.K., originated in battery-constrained mobile devices and is therefore inherently low-power. The design is relatively simple and trades processing power for energy savings. Unlike Intel and AMD chips, ARM chip designs can be modified by licensees and optimized for specific tasks. ARM chips also integrate components normally found elsewhere in a server onto a single chip, an approach that saves space and cost.
ARM chips are also already produced in greater numbers than Intel and AMD chips. In the long term, this could mean greater innovation and lower production costs, because competition among different producers may drive better designs and production methods.
Because each ARM chip offers less raw performance than a high-end server processor, a company would need to deploy more of them to handle demanding workloads.
“Producers of these ARM chips don’t have any secret sauce that gets them around the laws of physics,” says Tom Halfhill, industry analyst and editor of Microprocessor Report. “They’re not talking a whole lot right now about how much power their chips are really going to save, but the basic fact is that performance costs power.”
Viren Shah, senior director of Marvell’s enterprise business unit, says that the chips are best used in systems where networking is the processing bottleneck. Good examples of this would be Web servers and cloud-computing applications, where simple processing tasks can be distributed across a network.
In that case, he says, the quad-core chips being used by Marvell (which have four central processing units that work in parallel) could use less than 10 watts in situations where most other processors commonly use more than 80 watts. But for tasks that need a lot of processing power, like heavy database applications and high-speed trading, the chips would likely offer no power savings, he says.
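A back-of-envelope calculation shows how those figures translate into savings. The 10-watt and 80-watt numbers come from Shah's comments above; the assumption that four ARM chips match one high-performance chip on a network-bound workload is purely illustrative, not a vendor benchmark:

```python
# Back-of-envelope power comparison for a network-bound web workload.
# Watt figures are those cited in the article; the chip-count ratio is
# a hypothetical assumption for illustration only.

ARM_WATTS = 10        # quad-core ARM server chip (per the article)
X86_WATTS = 80        # typical high-performance server chip (per the article)

# Assume, hypothetically, that four ARM chips together serve as much
# traffic as one x86 chip on this I/O-bound workload.
arm_chips_needed = 4

arm_total = arm_chips_needed * ARM_WATTS   # total draw of the ARM cluster
savings = 1 - arm_total / X86_WATTS        # fraction of power saved

print(f"ARM cluster draws {arm_total} W vs. {X86_WATTS} W, "
      f"saving {savings:.0%}")
```

Under these assumptions the ARM cluster draws 40 watts against 80, a 50 percent saving; a less favorable performance ratio would shrink or erase that gap, which is Halfhill's point that performance ultimately costs power.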
There are also major hurdles to overcome before ARM chips are widely accepted. The server industry has 20 years of experience fine-tuning software for the "x86" instruction set used by Intel and AMD, and because ARM uses a different instruction set, ARM chips cannot run operating systems and other software developed for x86-based systems. They are compatible with open-source systems such as Linux, Halfhill says, but specialized software has typically been developed for x86 platforms.
ARM is also a 32-bit architecture, whereas data centers typically run 64-bit systems. The software written for those systems assumes it can address more memory and handle larger chunks of data than a 32-bit ARM chip can, which makes rewriting it for ARM harder.
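The gap between the two architectures is easy to quantify: a 32-bit pointer can address at most 2^32 bytes of memory, while 64-bit server software routinely assumes a vastly larger address space. This short sketch just works out the two limits:

```python
# Addressable memory under 32-bit vs. 64-bit pointers.
# A pointer of n bits can distinguish 2**n byte addresses.

GIB = 2**30                       # bytes in one gibibyte

addressable_32 = 2**32 // GIB     # limit for a 32-bit architecture
addressable_64 = 2**64 // GIB     # limit for a 64-bit architecture

print(f"32-bit address space: {addressable_32} GiB")
print(f"64-bit address space: {addressable_64} GiB")
```

The 32-bit limit works out to 4 GiB per process, far below what large database or in-memory workloads in a data center expect, which is why porting such software to 32-bit ARM is more than a recompile.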
Gary Lauterbach, chief technology officer of another company offering low-power server chips, SeaMicro, says that ARM-based servers could commonly provide energy savings of 50 percent or more after a year of implementation. But he believes that ARM servers will only succeed by drawing an active open-source community to build and optimize software. If that happens, he says, “we are in for a battle that will likely help consumers significantly.”
SeaMicro, based in Santa Clara, California, is designing server chips based on Intel’s low-power, x86-compatible Atom chipset for mobile devices. SeaMicro’s first product, the SM10000 server, offers twice the performance per watt of a comparable high-end server, according to Microprocessor Report.
“You can count on the industry providing many more low-power alternatives in the near future,” says Wu Feng, a computer scientist and energy-efficiency expert at Virginia Tech.
“More than half of the data centers out there claim that electricity use is their number-one facility issue,” he adds. “Right now everyone in the server industry is looking for that one product that could completely change how servers use energy.”