
Smarter animation bridges the gap between the physical and digital worlds.

Hao Li remembers watching Jurassic Park as a kid: “That moment of seeing something that didn’t exist in reality, but it looked so real—that was definitely the one that made me think about doing this,” he says. Li tells me the story one afternoon while we dine at the cafeteria of Industrial Light & Magic, the famed San Francisco visual-effects studio where he has been working on a way to digitally capture actors’ facial expressions for the upcoming Star Wars movies. When Jurassic Park came out, Li was 12 years old and living in what he calls the “boonie” town of Saarbrücken, Germany, where his Taiwanese parents had moved while his father completed a PhD in chemistry. Now, 20 years later, if all goes to plan, Li’s innovation will radically alter how effects-laden movies are made, blurring the line between human and digital actors.

Visual-effects artists typically capture human performances by placing small balls or tags on an actor’s face and body to track movement. The motion of those markers is recorded and converted into a digital file that can be manipulated. But markers are distracting and uncomfortable for actors, and they’re not very good at capturing subtle changes in facial expression. Li’s breakthrough involved depth sensors, the same technology used in motion-sensing game controllers like Microsoft’s Kinect for the Xbox. When a camera with depth sensors is aimed at an actor’s face, Li’s software analyzes the data to figure out how the facial shapes morph from one frame to the next. As the actor’s lips curl into a smile, the algorithm keeps track of the expanding and contracting lines and shadows, essentially “identifying” the actor’s lips. Then the software maps the actor’s face onto a digital version. Li’s work improves the authenticity of digital performances while speeding up production.
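One way to picture the frame-to-frame step is as a small fitting problem: given a digital face that can be posed by blending a set of expression shapes, find the blend that best matches what the depth camera sees in the current frame. The Python sketch below illustrates that general idea with hypothetical inputs (a neutral mesh, a set of expression “blendshapes,” and vertex positions estimated from a depth frame); it is not Li’s software, just a minimal stand-in for the kind of per-frame solve such a pipeline might perform.

```python
# A minimal sketch (not Li's actual code) of fitting expression blend weights
# to markerless depth data, one frame at a time.
import numpy as np

def fit_expression_weights(neutral, blendshapes, observed):
    """
    neutral     : (V, 3) vertex positions of the neutral (resting) face mesh
    blendshapes : (K, V, 3) per-expression vertex offsets from the neutral pose
    observed    : (V, 3) vertex positions estimated from the current depth frame
    returns     : (K,) blend weights, one per expression shape
    """
    K, V, _ = blendshapes.shape
    # Each blendshape's offsets become one column of the design matrix.
    A = blendshapes.reshape(K, V * 3).T          # shape (3V, K)
    b = (observed - neutral).reshape(V * 3)      # shape (3V,)
    # Least-squares solve: which mix of expressions best explains the frame?
    weights, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.clip(weights, 0.0, 1.0)            # keep weights in a plausible range

# Per frame: estimate vertex positions from the depth image (the hard,
# markerless-tracking part), solve for weights, then use those weights
# to drive the digital character's face.
```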

Li is amiably brash, unembarrassed about proclaiming his achievements, his ambitions, and the possibilities of his software. His algorithm is already in use in some medical radiation scanners, where it keeps track of the precise location of a tumor as a patient breathes. In another project, the software has been used to create a digital model of a beating heart. Ask him whether his technology could be used to read human emotions, or about some other far-off possibility, and he’s likely to say, “I’m working on that, too.”

When I ask if he speaks German, Li smiles and says he does—“French, German, Chinese, and English.” This fall, he will begin working in Los Angeles as an assistant professor in a University of Southern California computer graphics lab. But Hollywood movies are not the end game. “Visual effects are a nice sandbox for proof of concepts, but it’s not the ultimate goal,” Li says. Rather, he sees his efforts in data capture and real-time simulation as just a step on the way to teaching computers to better recognize what’s going on around them.

Farhad Manjoo


Credits: Photographs courtesy of Hao Li, video by Whitney Dinneweth | Above the Cut Films, © 2013 MIT Technology Review

Tagged: Computing, EmTech2014

