
Psychologists Release Emotion-On-Demand Plug-In For Virtual Characters

Downloadable facial expressions for virtual characters are guaranteed to convey specific emotions, say psychologists

One of the greatest living psychologists is an American called Paul Ekman. In the 1970s, Ekman and a colleague developed a way to categorise and assess human facial expressions.

At the time, many psychologists believed that the expressions conveying specific emotions vary from one culture to another. But in a ground-breaking set of experiments carried out with cultures all over the world, Ekman showed that all humans share the same facial expressions for six basic emotions–anger, fear, joy, surprise, disgust and sadness. 

He went on to develop a taxonomy of facial expressions called the Facial Action Coding System, or FACS, which identifies the facial muscle movements associated with each expression.
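
To get a feel for how FACS codes an expression, here is a minimal Python sketch. The action-unit combinations below follow widely cited EMFACS-style mappings rather than anything in the paper discussed here, and the names are purely illustrative.

```python
# Illustrative sketch: the six basic emotions as FACS action units (AUs).
# Each AU labels one facial muscle movement, e.g. AU12 = lip corner puller.
# The combinations follow commonly cited EMFACS-style mappings and are
# given only as an example of how FACS codes an expression.
BASIC_EMOTIONS = {
    "joy":      (6, 12),                  # cheek raiser + lip corner puller
    "sadness":  (1, 4, 15),               # inner brow raiser, brow lowerer, lip corner depressor
    "surprise": (1, 2, 5, 26),            # brow raisers, upper lid raiser, jaw drop
    "fear":     (1, 2, 4, 5, 7, 20, 26),  # raised, drawn-together brows, stretched lips
    "anger":    (4, 5, 7, 23),            # lowered brows, raised lids, tightened lips
    "disgust":  (9, 15, 16),              # nose wrinkler, depressed lip corners
}

def describe(emotion):
    """Print the action units that code a basic emotion."""
    aus = BASIC_EMOTIONS[emotion]
    print(f"{emotion}: AU" + " + AU".join(str(a) for a in aus))

describe("joy")  # joy: AU6 + AU12
```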

This work has been hugely influential.  FACS is particularly useful for psychologists studying the role that emotion plays in everyday life.  

But there is another group that has benefited too–animators. FACS provides a straightforward way to give computer-generated characters realistic expressions relatively easily. Indeed, FACS has inspired an MPEG-4 standard for encoding facial expressions in computer-generated characters.

That in turn has helped psychologists, who can now produce exactly reproducible emotions on demand in the expressions of virtual characters. That’s hugely useful in research projects.

However, there’s a problem. While plenty of people have evaluated and calibrated expressions in humans, nobody has done the same for virtual characters. That’s significant because humans may not interpret facial expressions in virtual characters in the same way as they do in humans. 

What’s more, since researchers generate their own virtual characters, expressions vary from one project to another, which may mean the results are not comparable.

All that could be solved with a standard set of expressions that have been comprehensively evaluated and calibrated by real human subjects. 

Today, Joost Broekens and pals at the Man-Machine Interaction department at Delft University of Technology in the Netherlands do exactly that.

These guys have created a set of six virtual expressions based on FACS. Each expression is a set of vectors that together specify how different parts of an animated face should move to simulate a basic emotion. A virtual character simply imports these vectors to take on that expression.
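
Here is a minimal sketch of that idea, assuming a face model with a handful of control points. All the names and numbers are hypothetical, not the team’s actual data format.

```python
import numpy as np

# Hypothetical sketch of an importable expression: a vector of intensities,
# one per facial control point. The face model stores, for each control
# point, the displacement it undergoes at full intensity; applying an
# expression scales those displacements and adds them to the neutral face.
# All names and values here are illustrative, not the paper's format.
N_POINTS = 8
neutral_face = np.zeros((N_POINTS, 3))          # control-point positions (x, y, z)
displacements = np.ones((N_POINTS, 3)) * 0.05   # stand-in per-point offsets

def apply_expression(intensities):
    """Offset each control point by its displacement, scaled by an intensity in [0, 1]."""
    return neutral_face + intensities[:, np.newaxis] * displacements

anger = np.zeros(N_POINTS)
anger[[0, 1, 4]] = 1.0      # fully activate, say, the brow and lip points
posed_face = apply_expression(anger)
```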

They then asked human volunteers to evaluate each expression, determining the emotion it represents and its intensity when the virtual character is near, when it is further away and when it is viewed from the side.

The results show that these virtual expressions communicate emotions in more or less the same way as human faces. There are one or two minor differences: a fearful expression also tends to look surprised and disgust can be confused with anger, something that other researchers have also found. But these are minor concerns.

As a further check, the team also asked the volunteers to evaluate blends of two basic expressions to produce so-called blended emotions. For example, joy and anger together communicate evil or naughtiness but this has never been properly measured in virtual characters before.  
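
One plausible way to realise such a blend is as a weighted mix of two expression vectors. Here is a sketch under that assumption; the team’s actual blend operator may well differ, and the intensity values are made up.

```python
import numpy as np

# Sketch of a blended emotion as a weighted mix of two expression vectors.
# The weighted averaging here is an assumption for illustration, not
# necessarily how the team combines expressions; intensities are invented.
joy      = np.array([0.0, 0.9, 0.0, 1.0, 0.0, 0.0])
surprise = np.array([1.0, 0.0, 0.8, 0.0, 0.7, 0.0])

def blend(a, b, w=0.5):
    """Mix two expression vectors: w=0 gives pure a, w=1 gives pure b."""
    return (1.0 - w) * a + w * b

enthusiasm = blend(joy, surprise)   # joy + surprise, a candidate "enthusiasm"
```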

One of the team’s important findings is that the volunteers were all able to identify anger easily in the tests. However, other blended emotions, such as enthusiasm (joy + surprise), did not fare so well.

An important point here is that Broekens and buddies are not attempting to create the most realistic or believable expressions. Instead, they have produced a set of clearly calibrated expressions that are easy to reproduce exactly in more or less any virtual environment.

That will be hugely useful for researchers wanting to produce comparable data in a variety of different settings and tests.  It will also be useful for any animators who want an easy way to make their characters convey a very specific emotion.  

For those who want to download the expressions, the team has made them publicly available from http://www.joostbroekens.com. The Microsoft paper clip may never look the same again.

Ref: arxiv.org/abs/1211.4500: Dynamic Facial Expression of Emotion Made Easy
