MIT Technology Review

One of the greatest living psychologists is an American called Paul Ekman. In the 1970s, Ekman and a colleague developed a way to categorise and assess human facial expressions.

At the time, many psychologists believed that the expressions conveying specific emotions vary from one culture to another. But in a ground-breaking set of experiments carried out with cultures all over the world, Ekman showed that all humans share the same facial expressions for six basic emotions: anger, fear, joy, surprise, disgust and sadness.

He went on to develop a taxonomy of facial expressions called the Facial Action Coding System, or FACS, which identifies the facial muscle movements associated with each expression.
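FACS decomposes expressions into numbered Action Units (AUs), each tied to a specific muscle movement. As a rough illustration, the table below sketches commonly cited AU combinations for the six basic emotions; treat these codings as an approximation drawn from the general FACS literature, not the definitive prototypes.

```python
# Illustrative mapping of Ekman's six basic emotions to FACS
# Action Units (AUs). These combinations follow commonly cited
# prototypes; exact codings vary between sources.
BASIC_EMOTION_AUS = {
    "joy":      [6, 12],              # cheek raiser, lip corner puller
    "sadness":  [1, 4, 15],           # inner brow raiser, brow lowerer, lip corner depressor
    "surprise": [1, 2, 5, 26],        # brow raisers, upper lid raiser, jaw drop
    "fear":     [1, 2, 4, 5, 20, 26], # brows raised and drawn together, lips stretched, jaw drop
    "anger":    [4, 5, 7, 23],        # brow lowerer, lid raiser, lid tightener, lip tightener
    "disgust":  [9, 15, 16],          # nose wrinkler, lip corner depressor, lower lip depressor
}

def describe(emotion):
    """Return the AU codes for a basic emotion, e.g. 'AU6+AU12' for joy."""
    return "+".join(f"AU{au}" for au in BASIC_EMOTION_AUS[emotion])
```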

This work has been hugely influential.  FACS is particularly useful for psychologists studying the role that emotion plays in everyday life.  

But another group has benefited too: animators. FACS provides a straightforward way to give computer-generated characters realistic expressions. Indeed, FACS has inspired an MPEG-4 standard for encoding facial expressions in computer-generated characters.

That in turn has helped psychologists, who can now produce exactly reproducible expressions of emotion on demand in virtual characters. That's hugely useful in research projects.

However, there’s a problem. While plenty of people have evaluated and calibrated expressions in humans, nobody has done the same for virtual characters. That’s significant because humans may not interpret facial expressions in virtual characters in the same way as they do in humans. 

What’s more, since researchers generate their own virtual characters, the way expressions vary from one project to another may mean the results are not comparable.

All that could be solved with a standard set of expressions that have been comprehensively evaluated and calibrated by real human subjects. 

Today, Joost Broekens and pals at the Man-Machine Interaction department at Delft University in the Netherlands do exactly that.

These guys have created a set of six virtual expressions based on FACS. Each expression is a set of vectors that together specify how different parts of an animated face should move to simulate a basic emotion. A virtual character simply imports these vectors to take on that expression.
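The vector idea can be sketched in a few lines. The control names and activation values below are hypothetical stand-ins, not the team's actual data format: the point is simply that an expression is a vector of intensities, one per facial control, which any character rig can import and apply.

```python
import numpy as np

# Hypothetical facial controls, one per vector component.
CONTROLS = ["brow_raise", "brow_lower", "lid_raise", "cheek_raise",
            "nose_wrinkle", "lip_corner_pull", "lip_corner_depress", "jaw_drop"]

# Illustrative expression vectors (activations in [0, 1]); real
# FACS-based vectors would be derived from the coded muscle movements.
EXPRESSIONS = {
    "joy":      np.array([0.0, 0.0, 0.0, 0.8, 0.0, 1.0, 0.0, 0.2]),
    "surprise": np.array([1.0, 0.0, 0.9, 0.0, 0.0, 0.0, 0.0, 0.7]),
}

def apply_expression(face_rig, name, intensity=1.0):
    """Scale the stored vector by intensity and set each rig control."""
    vector = EXPRESSIONS[name] * intensity
    for control, value in zip(CONTROLS, vector):
        face_rig[control] = float(value)
    return face_rig

# A character "imports" the expression by applying the vector to its rig.
rig = apply_expression({}, "joy", intensity=0.5)
```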

They then asked human volunteers to evaluate each expression: to determine the emotion it represents and its intensity, both when the virtual character is near and further away and when viewed from the side.

The results show that these virtual expressions communicate emotions in more or less the same way as human faces. There are one or two minor differences: a fearful expression also tends to look surprised, and disgust can be confused with anger, something that other researchers have also found. But these are minor concerns.

As a further check, the team also asked the volunteers to evaluate blends of two basic expressions to produce so-called blended emotions. For example, joy and anger together communicate evil or naughtiness, but this has never been properly measured in virtual characters before.
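Given the vector representation, one plausible way to form such a blend is a weighted combination of two expression vectors, clipped to the valid activation range. This is a sketch of the general idea, not the paper's exact blending rule, and the vectors are the same illustrative stand-ins as above.

```python
import numpy as np

def blend(expr_a, expr_b, weight=0.5):
    """Mix two expression vectors; weight=0.5 gives an even blend."""
    mixed = (1 - weight) * expr_a + weight * expr_b
    return np.clip(mixed, 0.0, 1.0)  # keep activations in [0, 1]

# Illustrative vectors (hypothetical 8-control rig, as sketched earlier).
joy   = np.array([0.0, 0.0, 0.0, 0.8, 0.0, 1.0, 0.0, 0.2])
anger = np.array([0.0, 1.0, 0.7, 0.0, 0.0, 0.0, 0.0, 0.0])

naughty = blend(joy, anger)  # the joy + anger "evil grin" blend
```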

One of the team’s important findings is that the volunteers were all able to identify anger easily in the tests. However, other blended emotions such as enthusiasm (joy + surprise) did not fare so well.

An important point here is that Broekens and buddies are not attempting to create the most realistic or believable expressions. Instead, they have produced a set of clearly calibrated expressions that are easy to reproduce exactly in more or less any virtual environment.

That will be hugely useful for researchers wanting to produce comparable data in a variety of different settings and tests.  It will also be useful for any animators who want an easy way to make their characters convey a very specific emotion.  

For those who want to download the expressions, the team has made them publicly available from http://www.joostbroekens.com. The Microsoft paper clip may never look the same again.

 

Ref: arxiv.org/abs/1211.4500: Dynamic Facial Expression of Emotion Made Easy
