
How “Bullet Time” Will Revolutionise Exascale Computing

The famous Hollywood filming technique will change the way we access the huge computer simulations of the future, say computer scientists

The exascale computing era is almost upon us and computer scientists are already running into difficulties. One exaflop is 10^18 floating point operations per second, a thousand petaflops. On its current trajectory, computing should reach this kind of capability by 2018 or so.

The problem is not processing or storing this amount of data–Moore’s law should take care of all that. Instead, the difficulty is uniquely human. How do humans access and make sense of the exascale data sets? 

In a nutshell, the problem is that human senses have a limited bandwidth–our brains can receive information from the external world at roughly gigabit rates. So a computer simulation at exascale data rates simply overwhelms us. The famous aphorism compares data overload to drinking from a fire hose. This is more like stopping a tidal wave with a bucket.

The answer, of course, is to find some way to compress the output data without losing its essential features. Today, Akira Kageyama and Tomoki Yamada from Kobe University in Japan put forward a creative solution. These guys say the trick is to use “bullet time”, the Hollywood filming technique made famous by movies like The Matrix.

Bullet time is a special effect that slows down ordinary events while the camera angle changes as if it were flying around the action at normal speed. The technique involves plotting the trajectory of the camera in advance and then placing many high speed cameras along this route. All these cameras then film the action as it occurs.

This footage is later edited together to look as if the camera position has moved. And because the cameras are all high speed, the footage can be slowed down. The results are impressive, as anyone who has seen the Matrix movies or played the video games can attest.

Kageyama and Yamada say the same technique could revolutionise the way humans access exascale computer simulations. Their idea is to surround the simulated action with thousands, or even millions, of virtual cameras that all record the action as it occurs.

Humans can later “fly” through the action by switching from one camera angle to the next, just like bullet time.
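The camera-switching step can be sketched in a few lines of Python. This is an illustrative toy, not code from the paper: it assumes the virtual cameras are laid out on a latitude/longitude sphere around the scene, and that "flying" means streaming the pre-rendered movie of whichever camera sits nearest the viewer's desired direction.

```python
import math

def camera_grid(n_lat, n_lon):
    """Unit direction vectors for virtual cameras on a lat/lon sphere."""
    cameras = []
    for i in range(n_lat):
        lat = math.pi * (i + 0.5) / n_lat - math.pi / 2
        for j in range(n_lon):
            lon = 2 * math.pi * j / n_lon
            cameras.append((
                math.cos(lat) * math.cos(lon),
                math.cos(lat) * math.sin(lon),
                math.sin(lat),
            ))
    return cameras

def nearest_camera(cameras, view_dir):
    """Index of the camera whose direction best matches view_dir."""
    # Largest dot product = smallest angle between the two directions.
    return max(range(len(cameras)),
               key=lambda k: sum(a * b for a, b in zip(cameras[k], view_dir)))

cams = camera_grid(18, 36)                 # 648 virtual cameras
idx = nearest_camera(cams, (1.0, 0.0, 0.0))  # viewer looking along +x
```

Because every camera's movie is rendered in advance, this lookup is all that needs to happen at viewing time–no part of the exascale simulation has to be re-run.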

All this sounds computationally complex but it is actually a useful way to compress the data. The compression arises because each camera records a 2-dimensional image of a 3-dimensional scene: a simulation volume of 1,000 x 1,000 x 1,000 cells holds a billion values, while a 1,000 x 1,000 image of it holds only a million.

Kageyama and Yamada say that the footage from a single camera can be compressed into a file of, say, 10 megabytes. So even if there are a million cameras recording the action, the total amount of data they produce is of the order of 10 terabytes. That’s tiny compared to the exascale size of the simulation.

These guys have tested the idea on the much smaller scales that are possible today. They simulated the way seismic waves propagate in a 10 GB simulation. They used 130 virtual cameras to record the action and compressed the resulting movies to 1.7 GB. “Our movie data is an order of magnitude smaller,” they say, adding: “This gap will increase much more in larger scale simulations.”
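The arithmetic behind these estimates is easy to check. A minimal sketch using the figures quoted above (the numbers come from the article; the variable names are mine):

```python
# Figures from the article; everything else is illustrative.
MB, GB, TB = 10**6, 10**9, 10**12

# A million virtual cameras, each producing ~10 MB of compressed movie:
total_movie_data = 1_000_000 * 10 * MB
assert total_movie_data == 10 * TB   # 10 terabytes, as stated

# The Kobe test: a 10 GB seismic-wave simulation recorded by 130 cameras
# compresses to 1.7 GB of movie data.
ratio = (10 * GB) / (1.7 * GB)
print(round(ratio, 1))               # -> 5.9, roughly an order of magnitude
```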

That’s an interesting and exciting idea that could have big implications for the way we access “big data”. In fact, it’s not hard to imagine the film and gaming industries that inspired the idea embracing it for future productions. And 2018 isn’t far away now.

Ref: arxiv.org/abs/1301.4546: An Approach to Exascale Visualization: Interactive Viewing of In-Situ Visualization
