
Automating Animation

Digital screen stars are becoming more believable, and more affordable
December 1, 2002

Today’s digital movie characters are more realistic, interactive, and endearing than ever before. But creating them is still expensive and labor-intensive. Most are either hand-drawn frame by frame with digital pen and ink (with computers filling in some of the gaps) or based on “motion capture” techniques, which record and digitally mimic the movements of live actors to create on-screen characters. As movies and video games pack more and more digital creatures onto the screen, however, animators are turning to physics and new artificial-intelligence methods for faster, more efficient ways to bring their stories to life.


Physics equations guide NaturalMotion’s characters. (Image courtesy of NaturalMotion)

NaturalMotion, a spinoff of the University of Oxford in the United Kingdom, has developed artificial-intelligence software that gives digital characters the power to animate themselves. The approach: a programmer specifies a character’s physical shape and properties and adds equations to govern the movement of its body parts. When animators apply a simulated force like gravity or a push from behind, the character responds realistically without further programming. “What you see on the screen is not a computer graphic of the character” dumbly mimicking recorded motions, says NaturalMotion’s chief executive, Torsten Reil. “It is the actual character.” The company’s goal is to generate interactive animations in real time, allowing animators to use the technology in video games as well as films, Reil says.

At the Computer Graphics Laboratory of Stanford University, researchers Katherine Pullen and Christoph Bregler have combined automation techniques with the traditional motion-capture approach. Using their system, animators can manually draw the most active parts of a character (its legs, say, during a walking sequence) and use archival motion-capture data to automatically complete the body and give it extra texture. “Within five years, all high-end games will use methods like these,” says Casey Muratori, lead developer at RAD Game Tools in Kirkland, WA.

Making characters not only move but also emote convincingly requires a different level of technology: software that generates “believable facial expressions and body language,” says Ken Perlin, director of the Media Research Laboratory at New York University. Perlin’s group is developing tools that will make it easy for animators to add layers of subtle gestures, for instance a raised eyebrow or drooping shoulders. Such effects must be drawn by hand today. Though farther from commercialization, Perlin’s software is a step closer to animation’s larger goal: to dazzle our hearts as well as our eyes.
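To make the physics-driven idea behind NaturalMotion's approach concrete, here is a minimal sketch, assuming point-mass body parts connected by damped-spring joints. The names, constants, and integration scheme are illustrative choices, not NaturalMotion's actual system.

```python
# A minimal sketch of physics-driven character animation (illustrative, not
# NaturalMotion's code): each body part is a point mass, joints act as damped
# springs, and the character responds to forces such as gravity or a push
# with no hand-drawn keyframes.

import math

GRAVITY = -9.81   # m/s^2, along the y axis
DT = 1.0 / 60.0   # one simulation step per frame at 60 fps

class BodyPart:
    def __init__(self, name, mass, x, y):
        self.name, self.mass = name, mass
        self.x, self.y = x, y          # position
        self.vx, self.vy = 0.0, 0.0    # velocity
        self.fx, self.fy = 0.0, 0.0    # force accumulated this step

    def apply_force(self, fx, fy):
        self.fx += fx
        self.fy += fy

    def step(self):
        # Semi-implicit Euler integration of F = m*a, plus gravity
        self.vx += (self.fx / self.mass) * DT
        self.vy += (self.fy / self.mass + GRAVITY) * DT
        self.x += self.vx * DT
        self.y += self.vy * DT
        self.fx = self.fy = 0.0

def spring_joint(a, b, rest_length, stiffness=200.0, damping=5.0):
    """A damped spring standing in for the equations that govern a joint."""
    dx, dy = b.x - a.x, b.y - a.y
    dist = math.hypot(dx, dy) or 1e-9
    nx, ny = dx / dist, dy / dist
    # Hooke's law plus damping along the joint axis
    rel_v = (b.vx - a.vx) * nx + (b.vy - a.vy) * ny
    f = stiffness * (dist - rest_length) + damping * rel_v
    a.apply_force(f * nx, f * ny)
    b.apply_force(-f * nx, -f * ny)

# A two-part "character" (torso and head) given a push from behind.
torso = BodyPart("torso", 40.0, 0.0, 1.0)
head = BodyPart("head", 5.0, 0.0, 1.6)
torso.apply_force(150.0, 0.0)  # the animator's simulated push
for frame in range(120):       # two seconds of motion
    spring_joint(torso, head, rest_length=0.6)
    torso.step()
    head.step()
```

The point of the approach is that no pose is ever keyframed: the push, gravity, and the joint equations alone determine every frame.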
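The Stanford hybrid can be sketched in the same spirit. The fragment-matching step below is a toy reconstruction under assumed details (the joint names, fragment length, and synthetic "mocap library" are all hypothetical, not Pullen and Bregler's actual system): the animator's hand-drawn curve for one joint is matched against short fragments of captured data, and the corresponding captured motion of another joint is borrowed to complete the body.

```python
# A toy sketch of keyframe-plus-mocap synthesis: match the animator's
# hand-drawn hip curve against fragments of archival motion capture, then
# borrow each matching fragment's shoulder motion, inheriting its texture.

import numpy as np

FRAG = 10  # fragment length in frames

# Hypothetical mocap library: hip and shoulder angles captured together,
# so their correlation can be reused. Here it is synthesized for the sketch.
rng = np.random.default_rng(0)
t = np.linspace(0, 8 * np.pi, 400)
mocap_hip = np.sin(t) + 0.05 * rng.standard_normal(t.size)
mocap_shoulder = -0.6 * np.sin(t + 0.4) + 0.05 * rng.standard_normal(t.size)

# The animator's hand-drawn hip curve for a 60-frame walking sequence.
drawn_hip = np.sin(np.linspace(0, 1.2 * np.pi, 60))

# For each fragment of the drawn curve, find the closest mocap fragment
# (sum-of-squares distance) and take its shoulder motion.
synth_shoulder = []
for start in range(0, drawn_hip.size - FRAG + 1, FRAG):
    target = drawn_hip[start:start + FRAG]
    best, best_err = 0, float("inf")
    for s in range(mocap_hip.size - FRAG):
        err = np.sum((mocap_hip[s:s + FRAG] - target) ** 2)
        if err < best_err:
            best, best_err = s, err
    synth_shoulder.append(mocap_shoulder[best:best + FRAG])

# The automatically completed shoulder track for the drawn walk.
synth_shoulder = np.concatenate(synth_shoulder)
```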
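Finally, the layered-gesture idea can be sketched loosely, again with assumed details rather than the actual tools from Perlin's group: each gesture is a small, weighted, noise-driven offset composited onto the base animation, so an animator can dial a raised eyebrow or drooping shoulders up or down without redrawing frames.

```python
# A sketch of layered procedural gesture: weighted noise-driven offsets are
# added on top of a base pose, one layer per gesture.

import random

def noise(t, seed=0):
    # Cheap smooth pseudo-noise standing in for true gradient noise.
    random.seed(seed + int(t))
    a = random.random()
    random.seed(seed + int(t) + 1)
    b = random.random()
    f = t - int(t)
    f = f * f * (3 - 2 * f)            # smoothstep interpolation
    return a + (b - a) * f

def pose_at(t, base_pose, gestures):
    """Composite weighted gesture offsets onto the base pose at time t."""
    pose = dict(base_pose)
    for joint, amplitude, speed, seed in gestures:
        pose[joint] += amplitude * (noise(t * speed, seed) - 0.5)
    return pose

base = {"eyebrow": 0.0, "shoulder": 0.0}
gestures = [
    ("eyebrow", 0.3, 1.5, 1),    # subtle eyebrow movement
    ("shoulder", -0.2, 0.4, 2),  # slow shoulder droop and sway
]
# Three seconds of animation at 30 fps, gestures layered automatically.
frames = [pose_at(f / 30.0, base, gestures) for f in range(90)]
```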
