A report issued today estimates that the overall electricity used by servers, the computers that make up the networks of organizations from small businesses to giant financial institutions, doubled between 2000 and 2005. The growth was driven by an increase in the number of servers installed in data centers and by the demands of auxiliary equipment such as cooling fans and facility lighting, says Jonathan Koomey, staff scientist at Lawrence Berkeley National Laboratory and author of the report. “I was surprised by the doubling,” says Koomey. “I expected some growth, but not quite as large.”
In 2005, servers and their auxiliary equipment accounted for an estimated 1.2 percent of all power consumption in the United States and 0.8 percent worldwide, the report states. Koomey says that server power consumption in the United States in 2005 was roughly equal to that of the entire state of Mississippi; that year, 20 other states used less power. The report, titled “Estimating Total Power Consumption by Servers in the U.S. and the World,” was funded by AMD and peer-reviewed by the major companies that sell servers, including Intel, IBM, Hewlett-Packard, Sun, and Dell.
Prior to the report, the server and data-center industry hadn’t had an up-to-date, well-researched estimate of server power consumption, says Koomey. The report relies on detailed data from IDC, a research firm, on the number of preexisting installed servers and shipments of servers, as well as on measured data and estimates of the power used per server for the most common models in each class of server.
The report is important to the industry, says Koomey, because once power consumption can be quantified, companies can make better decisions about how to reduce it and save money. “The industry sees that one of the first things you need to do to address the problem is to figure out how big it is,” he says.
Even without this report, companies in the industry have assumed that servers and data centers consume more power than they should, which has prompted the development of technology and practices to curb power consumption. Intel and AMD, for instance, have been touting more-energy-efficient microprocessors during the past year. And for years HP has been working with estimates that closely match the report’s findings, which have been used to inform data-center research at the company, says Chandrakant Patel, an HP fellow. “It corroborates our thinking and gives us quantification that might have been lacking before. The good news is, reports such as this … serve as a good reference.”
The report’s estimates, although based on reputable data, still contain some uncertainties. One of these, Koomey says, involves measuring the number and type of Google’s servers. Google, and potentially other companies, uses servers that don’t fall into any of the three main categories of server that the report includes: volume-class servers (which cost less than $25,000 per unit), mid-range systems (between $25,000 and $500,000 per unit), and high-end systems (more than $500,000 per unit). The company instead buys motherboards, the main circuitry of a PC, and uses them to custom-design servers. “Those wouldn’t come under the IDC definition of ‘server,’” says Koomey. “They fall under the PC category.” To estimate the potential impact of Google’s servers on the findings, Koomey used an estimate of 450,000 Google servers reported by the New York Times. Assuming this and other power estimates are correct, Koomey calculates that Google’s servers would increase electricity consumption in the volume class of server by about 1.7 percent.
Still, the starting point is solid enough for researchers to use the findings as input for a study on data-center power consumption mandated by a recent Congressional bill. In December 2006, the Senate approved legislation introduced by Representatives Anna G. Eshoo (D-California) and Mike Rogers (R-Michigan) requiring research into reducing the power consumption of servers and data centers. “This will be used to inform that process,” Koomey says.
Koomey expects that the report will spark industry-wide improvements, from energy-saving microprocessors and chip architectures to more-efficient cooling technology and software that distributes workloads across data centers more efficiently. “I think in the data-center area, there is a lot of opportunity for improvement,” Koomey says.