MIT Technology Review

Fifty Years of Water Cooling

Why everything old is new again in data centers.

Everything we take for granted – the Internet, the Cloud, all that on-demand media, stock markets, your checking account balance – has a physical reality in a place called a data center. We are as dependent on data centers as we are on our most critical pieces of infrastructure – power plants, water treatment facilities, hospitals. And it wasn’t always this way.

An early installation of a Tandem NonStop I computer in a data center, courtesy HP

“The idea of a stand-alone data center is a more recent development in the history of computing,” says Bill Kosik, lead data center energy technologist at Hewlett-Packard. Like the earliest power plants, data centers were originally purpose-built and attached to the businesses they served, be they steel plants, office buildings, or automobile factories.


Rob Taylor, director of infrastructure technology outsourcing in HP’s enterprise services wing, remembers graduating to his first data center from what had literally been a closet. The center was full of equipment from IBM, but also from Data General and Digital Equipment Corporation – the latter, at that time, the second-largest computer manufacturer in the world.


At that time, says Taylor, “the data center was a place where, if you needed cooling and power, you went to go and get it. We didn’t think about the future; we were just trying to think about a place to put our stuff.”

Kosik remembers a time when the IBM 360 mainframe was ubiquitous. These water-cooled behemoths would occasionally spring a leak – and “they were not waterproof,” says Kosik. Burst hoses and leaking valves would lead to an immediate shutdown and a visit from a technician. The IBM 360 even offered an optional bolt-on water storage tank in case the building’s cooling system failed.

The physics of convective heat transfer haven’t changed in the decades since, which is why water cooling is making a slow comeback, says Kosik. For supercomputing clusters with extremely high power density – up to 100 kilowatts per cabinet of servers – it’s a must. Circulating water is simply far more effective than air at carrying heat away from where it’s not wanted.
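Water’s advantage comes down to density and specific heat: per unit of volume, water absorbs vastly more heat than air for the same temperature rise. A back-of-the-envelope sketch (the property values below are typical room-temperature figures, not numbers from the article, and the 10 K temperature rise is an illustrative assumption):

```python
# Rough comparison of water vs. air as a coolant.
# Heat carried per cubic meter of coolant: q = rho * c_p * delta_T,
# where rho is density and c_p is specific heat at constant pressure.

RHO_WATER = 997.0   # kg/m^3, water near room temperature
CP_WATER = 4186.0   # J/(kg*K)
RHO_AIR = 1.2       # kg/m^3, air at sea level
CP_AIR = 1005.0     # J/(kg*K)

def heat_per_m3(rho: float, cp: float, delta_t: float) -> float:
    """Joules removed per cubic meter of coolant for a given temperature rise."""
    return rho * cp * delta_t

DELTA_T = 10.0  # assume the coolant warms by 10 K passing through a cabinet
water = heat_per_m3(RHO_WATER, CP_WATER, DELTA_T)
air = heat_per_m3(RHO_AIR, CP_AIR, DELTA_T)

print(f"water: {water / 1e6:.1f} MJ/m^3, air: {air / 1e6:.3f} MJ/m^3")
print(f"water carries roughly {water / air:.0f}x more heat per unit volume")
```

By this estimate, each cubic meter of water carries on the order of a few thousand times the heat of a cubic meter of air at the same temperature rise – which is why a 100-kilowatt cabinet is far easier to cool with water loops than with fans.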

“Computers are not getting less power intensive,” says Kosik. “It will force a move to water cooling eventually.”

Kosik cautions, however, that there are significant capital expenses associated with running a secondary web of water pipes through a building, and he believes we’ll always have air-cooled computers of one kind or another in data centers.

That could be because data centers themselves are slow to change – even as computers become more sophisticated, legacy systems remain entrenched.

“We still run every generation of technology [in our customers’ data centers], and we don’t see that changing,” says Doug Oathout, vice president of marketing in HP’s converged infrastructure division. “It isn’t all moving to one thing or another,” he adds.


Follow Mims on Twitter or contact him via email.
