Perhaps it was inevitable: the cloud already parses enormous quantities of information at high speed for the world’s webmasters; why not diversify its processor types and apply that power to problems that previously required in-house supercomputing resources?
That’s the pitch behind Amazon’s new GPU-powered instances for its Elastic Compute Cloud (EC2), built on NVIDIA’s Tesla GPUs. Amazon’s on-demand computing resources have long been used for processing chunks of data too large for in-house systems; famously, the New York Times used EC2 to parse 405,000 giant TIFF files in order to make 71 years of its archives available to the public.
Making GPU-based servers that can accomplish the same thing is a logical extension of Amazon’s existing CPU-based server technology. Amazon has also taken extra steps to make sure that these servers are well-suited to high performance computing applications, including 10 Gbps Ethernet interconnects “with the ability to create low latency, full bisection bandwidth HPC clusters.”
What’s especially interesting about this development is that outside of graphics-intensive operations and the odd password crack, for which GPUs are naturally suited, most high-performance software has yet to be translated so that it can run on GPU servers. Amazon, not to mention IBM and the other vendors building the servers that power Amazon’s new offering, is therefore placing a bet on the general utility of GPU servers and on the continued migration of software to these platforms.
The migration to GPU computing is by no means assured. Some problems simply may not be transferable, and, as Thom Dunning, director of the National Center for Supercomputing Applications, told Technology Review not long ago, programming for GPUs remains something of a dark art.
Dunning also admitted, however, that current approaches to supercomputing can’t get us to the next generation of computing power, and that GPU computers might be a step in the right direction.
Possibly the most notable feature of Amazon’s “new” offering is how much it resembles existing supercomputing setups, at least from an end user’s perspective. Most scientists and even businesses reserve or buy time on existing systems, which are kept fully booked to maximize utilization. By provisioning supercomputing resources in the cloud, Amazon is simply making itself a potential vendor of choice for scientists and engineers who would otherwise obtain the same services from their university’s supercomputing center or consortium.
If Amazon can provide those resources at a lower price, it’s worth asking whether the company is about to eat the lunch of low- and mid-level supercomputing centers, leaving only the largest and most specialized HPC resources in demand.