MIT Technology Review

In a completely new wrinkle on the old adage, “Smile and the world smiles with you, frown and you frown alone,” Japanese researchers are producing a generation of robots that can identify human facial expressions and then respond to them. A team led by Fumio Hara, a mechanical engineering professor at the Science University of Tokyo, has built a female robotic head that can both recognize and express fear, happiness, surprise, sadness, anger, and disgust.

The main goal of Hara’s project, which is supported by a five-year, $3 million grant from the Japanese government, is not merely to produce a robotic version of monkey-see, monkey-do. Instead, the aim is to create robots that will “empathize” with us and make us feel more comfortable as they read emotional changes expressed in our faces. The researchers expect that such visually emotive robots would be appreciated by factory workers forced to share the line with electronic workmates. The robots may even be useful as teaching aids for certain autistic children who have a communication disorder that makes it difficult for them to understand facial expressions and respond to them correctly.

One surprising feature of Hara’s research is its genesis. During the great economic expansion in Japan in the 1970s, accidents began to occur in newly constructed chemical plants. Hara surmised that plant operators were having difficulty spotting problems by reading banks of digital readout panels. “I thought what they needed was a more global view of what was going on in the plant,” he says, “and if something went wrong, it might be expressed, for example, as a sad face.”

Alas, the engineers soon confronted major hurdles. They found not only that it was difficult to assign facial expressions to the wide range of operational problems, from a messy plant floor to dangerous changes in temperature or pressure in the manufacturing process, but also that individual human operators interpreted the same expressions in different ways. The task of tying facial expressions to plant conditions eventually proved so complex that Hara gave up on the research.

But in the mid-1980s, when one of his students expressed interest in robotic research, Hara wondered if the facial-expression approach, although a failure on a plant level, might work between individual robots and humans. He began by making use of work by Paul Ekman, a professor of psychology at the University of California at San Francisco, who divided the movements of human facial expressions into 44 categories, or “action units.” Each action unit would correspond to an individual movement, such as eyebrows going up or down or lips pursing. Combining action units in various ways produced different expressions. For example, disgust entails lowering the brow, wrinkling the nose, and raising the chin.
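The idea of building expressions from combinations of action units can be sketched as a simple lookup. In this minimal Python sketch, the unit names and every recipe except disgust (which follows the article's description) are illustrative assumptions, not Ekman's actual catalog of 44 action units:

```python
# Each expression is modeled as a set of elementary facial movements
# ("action units"). Only the "disgust" recipe comes from the article;
# the other entries and all unit names are hypothetical placeholders.

EXPRESSIONS = {
    # From the article: disgust entails lowering the brow,
    # wrinkling the nose, and raising the chin.
    "disgust": {"brow_lower", "nose_wrinkle", "chin_raise"},
    # Hypothetical recipes for two other basic expressions:
    "surprise": {"brow_raise", "eyes_widen", "jaw_drop"},
    "happiness": {"cheek_raise", "lip_corner_pull"},
}

def classify(observed_units):
    """Return the expression whose action-unit set best overlaps the
    observed units, scored by Jaccard similarity (intersection/union)."""
    def score(recipe):
        return len(recipe & observed_units) / len(recipe | observed_units)
    return max(EXPRESSIONS, key=lambda name: score(EXPRESSIONS[name]))
```

A recognizer in this style first detects which individual movements are present on a face, then matches that set against the known combinations; expression, in this model, is entirely a property of which units co-occur.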
