Chip-maker AMD is looking to address the energy issue by designing a processing unit that, among other things, eliminates data “bottlenecks,” according to Brent Kirby, director of marketing. The physical arrangement of processors, memory, and input and output devices in a server is critical, he says. A traditional design forces bits of data to merge at times into a single pipeline, much like highway traffic funneling into one lane. The AMD architecture, built around the company’s Opteron processors, instead uses a grid-like layout that allows data to flow more freely to all parts of the unit. And when bits of data don’t stall in bottlenecks, less power is needed to push them through.
Even with more efficient processors, though, a room with racks full of servers can become excessively hot, and heat can hinder processor speed, as well as damage equipment. Such rooms need to be kept cool – and sometimes the solution is surprisingly simple.
“We do physical modeling of the air flow within the server, and we calibrate the system to maximum efficiency,” says Alex Yost, director of product management at IBM. Using these models, IBM engineers strategically place fans, which are less power-hungry than standard air-conditioning units, to direct air so that critical components, such as the processors and memory, get the freshest air, Yost says. Of course, air conditioners still need to be used, but with the cleverly placed fans, they do not have to run at full tilt.
Sun’s Hetherington points out that Silicon Valley technology companies, including his Santa Clara-based firm, endure brownouts in the summer – last year it happened at Sun about a half-dozen times. “In our offices in the midafternoon, our lights are dimmed” as a way to conserve electricity, he says. “We’re sitting in the dark – and we’re wondering whether energy-efficient data centers make sense? It couldn’t be clearer to us.”