Perhaps it was inevitable: the cloud is already parsing enormous quantities of information at a high speed for the world’s webmasters; why not diversify its processor types and apply that power to problems that previously required in-house supercomputing resources?
That’s the pitch behind Amazon’s new GPU-powered Elastic Compute Cloud (EC2) on-demand computing resources, powered by NVIDIA’s Tesla GPUs. Amazon’s on-demand computing resources have long been used to process chunks of data too large for in-house resources; famously, the New York Times used EC2 to parse 405,000 giant TIFF files in order to make 71 years of its archives available to the public.
Making GPU-based servers that can accomplish the same thing is a logical extension of Amazon’s existing CPU-based server technology. Amazon has also taken extra steps to make sure that these servers are well-suited to high performance computing applications, including 10 Gbps Ethernet interconnects “with the ability to create low latency, full bisection bandwidth HPC clusters.”
What’s especially interesting about this development is that outside of graphics-intensive operations and the odd password crack, for which GPUs are naturally suited, most high performance software has yet to be translated so that it can run on GPU servers. Amazon, not to mention IBM and the other vendors building the servers that power Amazon’s new offering, is therefore placing a bet on the general utility of GPU servers and the continued migration of software to these platforms.
The migration to GPU computing is by no means assured. Some problems simply may not be transferable, and, as Thom Dunning, director of the National Center for Supercomputing Applications, told Technology Review not long ago, programming for GPUs remains something of a dark art.
Dunning also admitted, however, that current approaches to supercomputing can’t get us to the next generation of computing power, and that GPU computers might be a step in the right direction.
Possibly the most notable feature of Amazon’s “new” offering is how much it resembles existing supercomputing setups, at least from an end-user’s perspective. Most scientists, and even businesses, reserve or buy time on existing systems, which are kept fully booked to maximize utilization. By provisioning supercomputing resources in the cloud, Amazon is simply making itself a potential vendor of choice for scientists and engineers who would otherwise obtain the same services from their university’s supercomputing center or consortium.
If Amazon can provide those resources at a lower price, it’s worth asking whether the company is about to eat the lunch of low and mid-level supercomputing centers, leaving only the largest and most specialized HPC resources in demand.