Did Amazon Just Move Supercomputing to the Cloud?
Perhaps it was inevitable: the cloud already parses enormous quantities of information at high speed for the world's webmasters; why not diversify its processor types and apply that power to problems that previously required in-house supercomputing resources?

That's the pitch behind Amazon's new GPU-powered Elastic Compute Cloud (EC2) instances, built on NVIDIA's Tesla GPUs. Amazon's on-demand computing resources have long been used to process chunks of data too large for in-house systems. Famously, the New York Times used EC2 to parse 405,000 giant TIFF files in order to make 71 years of its archives available to the public.
Offering GPU-based servers that can accomplish the same thing is a logical extension of Amazon's existing CPU-based server technology. Amazon has also taken extra steps to make sure these servers are well suited to high-performance computing applications, including 10 Gbps Ethernet interconnects "with the ability to create low latency, full bisection bandwidth HPC clusters."
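As a rough sketch of what provisioning such a cluster involves (using the modern AWS CLI, which postdates this announcement; the AMI ID is a placeholder, and `cg1.4xlarge` is the Cluster GPU instance type Amazon introduced with this offering):

```shell
# A "cluster" placement group is the mechanism behind the low-latency,
# full-bisection-bandwidth networking described above.
aws ec2 create-placement-group --group-name hpc-demo --strategy cluster

# ami-12345678 is a placeholder image ID; cg1.4xlarge was the Cluster GPU
# instance type, with two NVIDIA Tesla GPUs per instance.
aws ec2 run-instances \
    --image-id ami-12345678 \
    --instance-type cg1.4xlarge \
    --count 8 \
    --placement "GroupName=hpc-demo"
```

The placement group, not the instance type, is what makes the result behave like an HPC interconnect rather than a loose collection of servers.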
What's especially interesting about this development is that, outside of graphics-intensive operations and the odd password crack, for which GPUs are naturally suited, most high-performance software has yet to be ported to run on GPU servers. Amazon, not to mention IBM and the other vendors creating the servers that power Amazon's new offering, is therefore placing a bet on the general utility of GPU servers and on the continued migration of software to these platforms.
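The password-crack case shows why some workloads port easily: each candidate can be tested independently of every other, so the problem is embarrassingly parallel. A minimal Python sketch of the pattern, using CPU threads as stand-ins for the thousands of lightweight threads a GPU runs (the target hash and search space are made up for illustration):

```python
import hashlib
import itertools
import string
from concurrent.futures import ThreadPoolExecutor

# Made-up target: the MD5 of "cat". A real cracker would load this from a dump.
target = hashlib.md5(b"cat").hexdigest()

def try_candidate(word):
    # One independent unit of work: no shared state, no ordering constraints.
    # On a GPU, each such unit would map to its own hardware thread.
    return word if hashlib.md5(word.encode()).hexdigest() == target else None

def crack(length=3):
    # Enumerate all lowercase strings of the given length and test in parallel.
    candidates = ("".join(t) for t in
                  itertools.product(string.ascii_lowercase, repeat=length))
    with ThreadPoolExecutor(max_workers=8) as pool:
        for hit in pool.map(try_candidate, candidates):
            if hit:
                return hit
    return None

print(crack())  # → cat
```

Workloads with heavy branching or tight data dependencies between steps lack this structure, which is why they resist the same translation.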
The migration to GPU computing is by no means assured. Some problems simply may not be transferable, and, as Thom Dunning, director of the National Center for Supercomputing Applications, told Technology Review not long ago, programming for GPUs remains something of a dark art.
Dunning also admitted, however, that current approaches to supercomputing can’t get us to the next generation of computing power, and that GPU computers might be a step in the right direction.
Possibly the most notable feature of Amazon's "new" offering is how much it resembles existing supercomputing setups, at least from an end-user's perspective. Most scientists, and even businesses, reserve or buy time on existing systems, which are kept fully booked to maximize utilization. By provisioning supercomputing resources in the cloud, Amazon is simply making itself a potential vendor of choice for scientists and engineers who would otherwise obtain the same services from their university's supercomputing center or consortium.
If Amazon can provide those resources at a lower price, it’s worth asking whether the company is about to eat the lunch of low and mid-level supercomputing centers, leaving only the largest and most specialized HPC resources in demand.