
An early installation of a Tandem NonStop I computer in a data center. Courtesy of HP.

Everything we take for granted, from the Internet and the Cloud to on-demand media, stock markets, and your checking account balance, has a physical reality in a place called a data center. We are as dependent on them as we are on our most critical pieces of infrastructure: power plants, water treatment facilities, hospitals. And it wasn’t always this way.

“The idea of a stand-alone data center is a more recent development in the history of computing,” says Bill Kosik, lead data center energy technologist at Hewlett-Packard. Like the earliest power plants, data centers were originally purpose-built and attached to the businesses they served, be they steel plants, office buildings, or automobile factories.

Rob Taylor, director of infrastructure technology outsourcing in HP’s enterprise services wing, remembers graduating to his first data center from what had literally been a closet. The center was full of equipment from IBM, but also from Data General and from Digital Equipment Corporation, at that time the second-largest computer manufacturer in the world.

At that time, says Taylor, “the data center was a place where, if you needed cooling and power, you went to go and get it. We didn’t think about the future; we were just trying to think about a place to put our stuff.”

Kosik remembers a time when the IBM 360 mainframe was ubiquitous. These water-cooled behemoths would occasionally spring a leak, quite literally, and “they were not waterproof,” says Kosik. A burst hose or a leaking valve meant an immediate shutdown and a visit from a technician. In case the building’s cooling system failed, the IBM 360 even offered an optional bolt-on water storage tank.

The physics of convective heat transfer haven’t changed in the decades since, which is why water cooling is making a slow comeback, says Kosik. For supercomputing clusters with extremely high power density, up to 100 kilowatts per cabinet of servers, it’s a must. Circulating water is simply far more effective than air at carrying heat away from where it’s not wanted.
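To see the gap in rough numbers (a back-of-envelope sketch using standard textbook values, not figures from the article): the volumetric heat capacity of water is

\[ \rho c_p \approx 1000~\mathrm{kg/m^3} \times 4186~\mathrm{J/(kg\,K)} \approx 4.2~\mathrm{MJ/(m^3\,K)}, \]

versus roughly $1.2~\mathrm{kJ/(m^3\,K)}$ for air, a factor of about 3,500. Removing $Q = 100$ kilowatts of heat at a coolant temperature rise of $\Delta T = 10~\mathrm{K}$ therefore takes a flow of

\[ \dot{V} = \frac{Q}{\rho c_p\, \Delta T} \approx 2.4~\mathrm{L/s} \]

of water, but roughly $8.3~\mathrm{m^3/s}$ of air.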

“Computers are not getting less power intensive,” says Kosik. “It will force a move to water cooling eventually.”

Kosik cautions, however, that running a secondary web of water pipes through a building carries significant capital expenses, and he believes we’ll always have air-cooled computers of one kind or another in data centers.

That could be because data centers themselves are slow to change: even as computers become more sophisticated, legacy systems remain entrenched.

“We still run every generation of technology [in our customers’ data centers], and we don’t see that changing,” says Doug Oathout, vice president of marketing in HP’s converged infrastructure division. “It isn’t all moving to one thing or another,” he adds.

Follow Mims on Twitter or contact him via email.
