This disparity means that future supercomputing centers simply might not be able to afford separate graphics-processing units. “At petascale, [separate graphics-processing units] are less cost-effective,” says Hank Childs, a computer systems engineer and visualization expert at Lawrence Berkeley National Laboratory. Childs points out that a dedicated visualization cluster, like the one for Argonne’s Intrepid supercomputer, often costs around $1 million, but in the future that cost might increase by a factor of 20.
Pat McCormick, who works on visualization on the world’s fastest supercomputer, the AMD Opteron and IBM Cell-powered “Roadrunner” at Los Alamos National Laboratory, says that Peterka’s work on direct visualization of data is critical because “these machines are getting so big that you really don’t have a choice.” Existing GPU-based visualization methods will remain appropriate only for certain kinds of simulations, McCormick says.
“If you’re going to consume an entire supercomputer with calculations, I don’t think you have a choice,” says McCormick. “If you’re running at that scale, you’ll have to do the work in place, because it would take forever to move it out, and where else will you be able to process that much data?”
Peterka, McCormick, and Childs envision a future in which supercomputers perform what’s known as in-situ processing, in which simulations are visualized as they’re running, rather than after the fact.
“The idea behind in-situ processing is you bypass I/O altogether,” says Childs. “You never write anything to disk. You take visualization routines and link them directly to simulation code and output an image as it happens.”
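The coupling Childs describes can be shown in a minimal Python sketch. This is an illustrative toy, not code from any of the labs mentioned: a small diffusion simulation calls a rendering routine on the data while it is still in memory, so no raw field is ever written to disk. The names `render_slice` and `simulate` are invented for this example.

```python
import numpy as np

def render_slice(field):
    """Stand-in for a visualization routine linked into the simulation:
    map the current 2-D field to an 8-bit grayscale image in memory."""
    lo, hi = field.min(), field.max()
    if hi == lo:  # uniform field: nothing to scale
        return np.zeros(field.shape, dtype=np.uint8)
    return ((field - lo) / (hi - lo) * 255).astype(np.uint8)

def simulate(steps=10, n=64):
    """Toy heat-diffusion simulation with an in-situ rendering hook."""
    field = np.zeros((n, n))
    field[n // 2, n // 2] = 100.0  # point heat source
    images = []
    for _ in range(steps):
        # simple 4-neighbor diffusion update
        field = 0.25 * (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
                        np.roll(field, 1, 1) + np.roll(field, -1, 1))
        # in-situ step: render from data still in memory -- the image,
        # not the full simulation state, is what leaves the machine
        images.append(render_slice(field))
    return images
```

The key point is in the loop: the visualization call sits inside the simulation’s time step, so only compact images need to be saved or shipped out, bypassing the I/O bottleneck entirely.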
This approach is not without its pitfalls, however. For one thing, rendering each image can take a second or more, which rules out interacting with three-dimensional models in a natural, real-time fashion. Another pitfall is that interacting with data in this way burns up cycles on the world’s most expensive machines.
“Supercomputers are incredibly valuable resources,” notes Childs. “That someone would do a simulation and then interact with the data for an hour–that’s a very expensive resource to hold hostage for an hour.”
As desktop computers follow supercomputers and GPUs into the world of multiple cores and massively parallel processing, Peterka speculates that there could be a trend away from processors specialized for particular functions. Already, AMD supports OpenCL, an open standard that makes it possible to run code designed for a GPU on any x86 chip–and vice versa.
Xavier Cavin, founder and CEO of Scalable Graphics, a company that designs software for the largest graphics-processing units used by businesses, points out that the very first parallel volume-rendering algorithm ran on the CPUs of a supercomputer. “After that, people started to use GPUs and GPU clusters to do the same thing,” Cavin says. “And now it comes back to CPUs. It’s come full circle.”
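The algorithm Cavin refers to maps naturally to CPUs because each pixel’s ray through the volume is independent of every other. A minimal Python sketch of the core operation, front-to-back compositing along one axis, might look like this (the function name and the fixed `opacity` parameter are illustrative choices, not taken from any particular implementation):

```python
import numpy as np

def volume_render(volume, opacity=0.05):
    """Front-to-back emission/absorption compositing along the z axis.
    volume: 3-D array of scalar densities in [0, 1], indexed (z, y, x)."""
    h, w = volume.shape[1], volume.shape[2]
    color = np.zeros((h, w))         # accumulated brightness per pixel
    transmittance = np.ones((h, w))  # fraction of light still passing through
    for z in range(volume.shape[0]):             # march one slice at a time
        sample = volume[z]
        alpha = np.clip(sample * opacity, 0.0, 1.0)
        color += transmittance * alpha * sample  # emit, dimmed by what is in front
        transmittance *= 1.0 - alpha             # absorb as the ray goes deeper
    return color
```

Because every pixel is computed independently, the image can be tiled across CPU cores, GPU threads, or cluster nodes with a final compositing pass, which is why the same algorithm has moved back and forth between CPUs and GPUs as the hardware economics shifted.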