MIT Technology Review

Virtual Expressions
Computer graphics technique transfers facial expressions

Results: Researchers from MIT and Mitsubishi Electric Research Laboratories have created a computer model that allows them to capture from video the facial expression, speech-related mouth shapes, and other key identifying features of one person’s face and digitally transfer a select combination of those attributes to video of another person’s face. In one example, the researchers took a surprised look from one person and the mouth position from a second person and placed those two features on the face of a third person filmed with a blank expression; in the resulting image, she looked surprised.

Why It Matters: Making digital facial movements look natural is a major challenge in computer animation. These new tools could be used to give computer-generated characters in films and video games more-realistic faces, based on the movements made by live actors. Existing techniques, such as those used in movies like The Polar Express, typically capture the motion of a live actor using reflective markers stuck to the actor’s body and face. The MIT method can capture motion and expressions from a video recording of the actor without the need for markers, making this kind of computer animation potentially simpler and cheaper.

Methods: Daniel Vlasic of MIT and his colleagues built their model from 3-D scans of 31 subjects making different facial expressions and mouthing different sounds. They then filmed subjects performing (singing, for instance), tracked the subjects' facial movements, and fed that tracking data into the model. The model used the data to change the expressions or mouth movements of a second person, and the researchers imposed those changes on video of that person. Because the model lets attributes such as a smile or identifying features be manipulated independently of one another, the researchers could transfer, say, a smile to a person without changing that person's identity. – By Corie Lok
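The key idea behind this independence is that a multilinear model factors face geometry into separate modes, here identity and expression, combined through a core tensor, so changing the weights of one mode leaves the others untouched. The following is a minimal sketch of that structure in Python with NumPy; the dimensions, the random core tensor, and the names (`alice`, `bob`, `smile`) are illustrative stand-ins, not the researchers' actual data, which came from the 3-D scans described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: V mesh-vertex coordinates, I identity modes, E expression modes.
V, I, E = 9, 4, 3

# In the paper the core tensor is learned from 3-D scans; here it is a random stand-in.
core = rng.standard_normal((V, I, E))

def synthesize(core, identity_w, expression_w):
    """Contract the core tensor with identity and expression weight vectors
    (a mode product in each mode) to produce a face-geometry vector."""
    return np.einsum('vie,i,e->v', core, identity_w, expression_w)

# Illustrative weight vectors for two identities and two expressions.
alice, bob = rng.standard_normal(I), rng.standard_normal(I)
neutral, smile = rng.standard_normal(E), rng.standard_normal(E)

# Because the modes factor independently, an expression estimated from one
# person's video can be applied to another identity unchanged:
bob_neutral = synthesize(core, bob, neutral)
bob_smiling = synthesize(core, bob, smile)
```

The model is linear in each mode separately (hence "multilinear"): holding identity fixed, the output varies linearly with the expression weights, which is what makes estimating one mode from tracked video and swapping it onto another face tractable.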

Source: Vlasic, D., et al. 2005. Face transfer with multilinear models. ACM Transactions on Graphics 24:426-433.

