
Lightspeed Animation

A new lighting-preview system lets movie directors fine-tune animated shots in seconds.

Perfecting a shot for an animated film or a special-effects sequence is a highly iterative process: every time the director tweaks the lighting, an enormous amount of computation is required to render the new image, and that rendering can take a very long time.

A light touch: The top image is a preview, the approximation that directors work with while editing a film. Its two halves show how much a shot can change when the lighting is adjusted: the bottom half is the untouched original, while the top half shows the image after the animators altered the lighting using Lightspeed. The bottom image shows the final product as it appears in a theater.

Now a team of computer scientists from MIT, Tippett Studios, and Industrial Light and Magic (ILM) has devised a system that reduces the time required to render a preview image from nearly an hour to seconds, allowing directors to fine-tune the lighting in a shot immediately. ILM tested the system, called Lightspeed, on the movie Transformers, and it plans to deploy it throughout the company in the next couple of weeks.

“We are still rolling it out,” says Christophe Hery, the lead engineer of research and development at ILM. “But potentially, what used to take three or four days to produce might be compressed into a single day.”

The team’s solution is based on the fact that lighting designers work at the end of the production process. Since everything else in the image has largely been set, much of the data involved in rendering is identical from one iteration to the next. To accelerate the process, Lightspeed identifies the data that does not change between renders and compresses it, so that work is not repeated every time the image is re-rendered.
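The idea can be pictured with a toy relighting loop. In this minimal Python sketch (the names and the simple Lambertian shading are illustrative assumptions, not ILM's actual code), everything that depends only on geometry and materials is computed once per pixel and cached; when a light is moved or recolored, only the cheap lighting term is re-evaluated.

from dataclasses import dataclass

@dataclass
class PixelSample:
    # Light-independent data: computed once by the full renderer, then cached
    position: tuple   # world-space point visible through this pixel
    normal: tuple     # surface normal at that point (unit length)
    albedo: tuple     # material color (RGB)

def shade(sample, light_pos, light_color):
    # Cheap light-dependent term (simple Lambertian shading), re-run for
    # every cached sample each time the director tweaks a light.
    d = [light_pos[i] - sample.position[i] for i in range(3)]
    dist = (d[0]*d[0] + d[1]*d[1] + d[2]*d[2]) ** 0.5 or 1.0
    ndotl = max(0.0, sum(n * di / dist for n, di in zip(sample.normal, d)))
    return tuple(a * c * ndotl for a, c in zip(sample.albedo, light_color))

# One expensive render fills the cache; every lighting tweak afterwards
# only loops over the cached samples and calls shade().
cache = [PixelSample((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.8, 0.2, 0.2))]
preview = [shade(s, light_pos=(1.0, 1.0, 2.0), light_color=(1.0, 1.0, 0.9)) for s in cache]

A full frame would hold one such cached record per pixel, so the cost of each lighting tweak is just the shade() loop rather than a complete re-render.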

Multimedia

  • See Lightspeed's effect on different types of images.

  • Watch as Lightspeed lets a director improve an image in real time.

Next, Lightspeed takes advantage of high-performance graphics processors (GPUs). Traditionally, when a lighting designer renders an image, all of that work is performed on a central processing unit (CPU). The Lightspeed system, in contrast, caches the unchanging data on the CPU and performs the remaining computations, re-executing the lighting programs, on the GPU. Managing the data this way makes previewing an image orders of magnitude faster than running everything on the CPU.

“The first big step is eliminating work that doesn’t have to be recomputed every frame,” says Jonathan Ragan-Kelley, a computer scientist at MIT and a Lightspeed team member. “The next big acceleration comes from taking that data [that] lighting designers are editing, and then mapping it onto a processor that can execute it much more efficiently.”
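A rough way to picture that second step: once the cached, light-independent inputs are laid out as flat per-pixel arrays, re-executing the lighting program becomes the same small computation applied uniformly to every pixel, which is exactly the kind of data-parallel workload a GPU handles well. The NumPy sketch below uses whole-array operations as a stand-in for that parallel hardware (the buffer names and shading model are illustrative assumptions, not Lightspeed's).

import numpy as np

# Cached, light-independent buffers for an H x W preview frame
H, W = 270, 480
rng = np.random.default_rng(0)
positions = rng.random((H, W, 3))                       # world-space points
normals = rng.random((H, W, 3)) - 0.5
normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
albedo = rng.random((H, W, 3))                          # material colors

def relight(light_pos, light_color):
    # Re-evaluate only the light-dependent term, for every pixel at once.
    # On the real system this kernel runs on the GPU; here NumPy's
    # array operations stand in for that data-parallel execution.
    to_light = light_pos - positions
    to_light /= np.linalg.norm(to_light, axis=-1, keepdims=True)
    ndotl = np.clip((normals * to_light).sum(axis=-1, keepdims=True), 0.0, None)
    return albedo * ndotl * light_color

preview = relight(np.array([2.0, 3.0, 1.0]), np.array([1.0, 0.95, 0.9]))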

The Lpics preview system used by Pixar Animation Studios employs a similar method to render preview images quickly. But Lpics requires a programmer to identify by hand which data in an image will change and which won't when making different preview images. Moreover, that work has to be redone any time the lighting programs change to capture a different lighting effect, which happens often during production.

The other improvement over Lpics, according to Ragan-Kelley, is that the Lightspeed preview system supports additional effects, such as motion blur and transparency, in which more than one point in a scene contributes to the color of an individual pixel.
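One way to picture what that requires (an illustration, not a description of Lightspeed's internals): instead of caching a single surface point per pixel, the cache holds a short list of relit samples per pixel, say a semi-transparent surface in front of an opaque one, or several instants in time for motion blur, and those samples are then composited into the final pixel color. A minimal front-to-back compositing step in Python might look like this:

def composite(samples):
    # Front-to-back "over" compositing of relit per-pixel samples.
    # Each sample is (rgb, alpha), ordered nearest to farthest; several
    # samples per pixel are what allow transparency (and, with samples
    # spread over time, motion blur) to show up in the preview.
    color = [0.0, 0.0, 0.0]
    transmittance = 1.0
    for rgb, alpha in samples:
        for i in range(3):
            color[i] += transmittance * alpha * rgb[i]
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:   # early out once the pixel is effectively opaque
            break
    return tuple(color)

# A 40%-opaque red surface in front of an opaque blue one
pixel = composite([((0.9, 0.1, 0.1), 0.4), ((0.1, 0.1, 0.9), 1.0)])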

“They went for a very nice solution that guarantees accuracy, especially in small scenes with lots of details,” says Fabio Pellacini, a computer scientist at Dartmouth College and one of the creators of Pixar’s Lpics system. “We are seeing improvements coming very quickly online these days, but difficulties remain for handling complex images where light reflects across a variety of objects from different angles. Hopefully, these problems will be solved soon.”
