Tesla M2050 GPUs are now available in Amazon’s cloud computing offerings
(Image courtesy of NVIDIA)

Perhaps it was inevitable: the cloud already parses enormous quantities of information at high speed for the world’s webmasters; why not diversify its processor types and apply that power to problems that previously required in-house supercomputing resources?

That’s the pitch behind Amazon’s new GPU instances for its Elastic Compute Cloud (EC2) on-demand computing service, built on NVIDIA’s Tesla GPUs. Amazon’s on-demand computing resources have long been used for processing chunks of data too large for in-house systems; famously, the New York Times used EC2 to parse 405,000 giant TIFF files in order to make 71 years of its archives available to the public.
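The shape of a job like the Times’ archive conversion is easy to sketch. The toy Python below is not the Times’ actual pipeline (the conversion step here is a hypothetical placeholder); it only shows the pattern: many independent conversions fanned out across workers, whether those workers are local processes or rented EC2 nodes.

```python
from multiprocessing import Pool

def convert(path):
    # Hypothetical stand-in for the real per-file work
    # (e.g. rendering a scanned TIFF page into a PDF).
    return path.replace(".tif", ".pdf")

def convert_all(paths, workers=4):
    # Each file is independent, so the conversions can be mapped
    # across worker processes -- the same fan-out that EC2 performs
    # across rented machines, just at a smaller scale.
    with Pool(workers) as pool:
        return pool.map(convert, paths)

if __name__ == "__main__":
    print(convert_all(["page1.tif", "page2.tif"]))
```

Because no conversion depends on any other, the job scales almost linearly with the number of workers, which is exactly what makes it a natural fit for renting capacity by the hour.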

Offering GPU-based servers that can accomplish the same thing is a logical extension of Amazon’s existing CPU-based offerings. Amazon has also taken extra steps to make sure these servers are well suited to high-performance computing, including 10 Gbps Ethernet interconnects “with the ability to create low latency, full bisection bandwidth HPC clusters.”

What’s especially interesting about this development is that outside of graphics-intensive operations and the odd password crack, for which GPUs are naturally suited, most high-performance software has yet to be translated to run on GPU servers. Amazon, not to mention IBM and the other vendors building the servers that power Amazon’s new offering, is therefore placing a bet on the general utility of GPU servers and the continued migration of software to these platforms.

The migration to GPU computing is by no means assured. Some problems simply may not be transferable, and, as Thom Dunning, director of the National Center for Supercomputing Applications, told Technology Review not long ago, programming for GPUs remains something of a dark art.

Dunning also admitted, however, that current approaches to supercomputing can’t get us to the next generation of computing power, and that GPU computers might be a step in the right direction.
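Part of the “dark art” Dunning describes is that serial code must be recast as whole-array, data-parallel operations before a GPU can help. As a toy illustration only (NumPy stands in here for a real CUDA kernel launch; this is not NCSA code), compare two forms of the same a·x + y computation:

```python
import numpy as np

def saxpy_loop(a, x, y):
    # Serial form: one element at a time. Natural on a CPU core,
    # but it leaves a GPU's thousands of parallel lanes idle.
    out = [0.0] * len(x)
    for i in range(len(x)):
        out[i] = a * x[i] + y[i]
    return out

def saxpy_parallel(a, x, y):
    # Data-parallel form: one operation over every element at once.
    # This is the restructuring a GPU port demands; on real GPU
    # hardware the array expression would become a kernel launch.
    return a * np.asarray(x) + np.asarray(y)
```

For a one-line formula like this the rewrite is trivial; for irregular, branch-heavy scientific codes, finding an equivalent data-parallel formulation is often the hard part of the port, which is why so much HPC software has yet to make the move.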

Possibly the most notable feature of Amazon’s “new” offering is how much it resembles existing supercomputing setups, at least from an end user’s perspective. Most scientists and even businesses reserve or buy time on existing systems, which are kept fully booked to maximize utilization. By provisioning supercomputing resources in the cloud, Amazon is simply making itself a potential vendor of choice for scientists and engineers who would otherwise obtain the same services from their university’s supercomputing center or consortium.

If Amazon can provide those resources at a lower price, it’s worth asking whether the company is about to eat the lunch of low- and mid-tier supercomputing centers, leaving only the largest and most specialized HPC resources in demand.

Follow Mims on Twitter or contact him via email.


