Perhaps it was inevitable: the cloud is already parsing enormous quantities of information at high speed for the world’s webmasters; why not diversify its processor types and apply that power to problems that previously required in-house supercomputing resources?
That’s the pitch behind Amazon’s new GPU-equipped Elastic Compute Cloud (EC2) instances, built on NVIDIA’s Tesla GPUs. Amazon’s on-demand computing resources have long been used for processing chunks of data too large for in-house resources: famously, the New York Times used EC2 to parse 405,000 giant TIFF files in order to make 71 years of its archives available to the public.
Making GPU-based servers that can accomplish the same thing is a logical extension of Amazon’s existing CPU-based server technology. Amazon has also taken extra steps to make sure that these servers are well-suited to high performance computing applications, including 10 Gbps Ethernet interconnects “with the ability to create low latency, full bisection bandwidth HPC clusters.”
What’s especially interesting about this development is that outside of graphics-intensive operations and the odd password crack, for which GPUs are naturally suited, most high-performance software has yet to be translated to run on GPU servers. Amazon (not to mention IBM and the other vendors creating the servers that power Amazon’s new offering) is therefore placing a bet on the general utility of GPU servers and the continued migration of software to these platforms.
The migration to GPU computing is by no means assured. Some problems simply may not be transferable, and, as Thom Dunning, director of the National Center for Supercomputing Applications, told Technology Review not long ago, programming for GPUs remains something of a dark art.
Dunning also admitted, however, that current approaches to supercomputing can’t get us to the next generation of computing power, and that GPU computers might be a step in the right direction.
Possibly the most notable feature of Amazon’s “new” offering is how much it resembles existing supercomputing setups, at least from an end-user’s perspective. Most scientists and even businesses reserve or buy time on existing systems, which are kept fully booked to maximize utilization. By provisioning supercomputing resources in the cloud, Amazon is simply making itself a potential vendor of choice for scientists and engineers who would otherwise obtain the same services from their university’s supercomputing center or consortium.
If Amazon can provide those resources at a lower price, it’s worth asking whether the company is about to eat the lunch of low- and mid-level supercomputing centers, leaving only the largest and most specialized HPC resources in demand.