How to Make a Robot Smile

In a completely new wrinkle on the old adage, “Smile and the world smiles with you, frown and you frown alone,” Japanese researchers are producing a generation of robots that can identify human facial expressions and then respond to them. A team led by Fumio Hara, a mechanical engineering professor at the Science University of Tokyo, has built a female robotic head that can both recognize and express fear, happiness, surprise, sadness, anger, and disgust.

The main goal of Hara’s project, which is supported by a five-year, $3 million grant from the Japanese government, is not merely to produce a robotic version of monkey-see, monkey-do. Instead, the aim is to create robots that will “empathize” with us and make us feel more comfortable as they read emotional changes expressed in our faces. The researchers expect that such visually emotive robots would be appreciated by factory workers forced to share the line with electronic workmates. The robots may even be useful as teaching aids for certain autistic children, who have a communication disorder that makes it difficult for them to understand facial expressions and respond to them correctly.

One surprising feature of Hara’s research is its genesis. During the great economic expansion in Japan in the 1970s, accidents began to occur in newly constructed chemical plants. Hara surmised that plant operators were having difficulty spotting problems by reading banks of digital readout panels. “I thought what they needed was a more global view of what was going on in the plant,” he says, “and if something went wrong, it might be expressed, for example, as a sad face.”

Alas, the engineers soon confronted major hurdles. They found not only that it was difficult to assign facial expressions to the wide range of operational problems, from a messy plant floor to dangerous changes in temperature or pressure in the manufacturing process, but also that individual human operators interpreted the same expressions in different ways. The task of tying facial expressions to plant conditions eventually proved so complex that Hara gave up on the research.

But in the mid-1980s, when one of his students expressed interest in robotic research, Hara wondered if the facial-expression approach, although a failure on a plant level, might work between individual robots and humans. He began by making use of work by Paul Ekman, a professor of psychology at the University of California at San Francisco, who divided the movements of human facial expressions into 44 categories, or “action units.” Each action unit would correspond to an individual movement, such as eyebrows going up or down or lips pursing. Combining action units in various ways produced different expressions. For example, disgust entails lowering the brow, wrinkling the nose, and raising the chin.
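To make the idea concrete, here is a minimal illustrative sketch, in Python, of how expressions might be stored as combinations of named action units. The unit names and the compose() helper are assumptions made for illustration; only the disgust combination (lowered brow, wrinkled nose, raised chin) comes from the description above, and this is not the Tokyo group’s actual software.

```python
# Illustrative sketch: facial expressions as combinations of named action units.
# The unit names and compose() are hypothetical; only the "disgust" combination
# (brow lowered, nose wrinkled, chin raised) is taken from the article.

EXPRESSIONS = {
    "disgust": {"brow_lowerer", "nose_wrinkler", "chin_raiser"},
    # The other five basic expressions would each get their own set of units.
}

def compose(expression):
    """Return the set of action units that together produce an expression."""
    return EXPRESSIONS.get(expression, set())

if __name__ == "__main__":
    print(sorted(compose("disgust")))
    # ['brow_lowerer', 'chin_raiser', 'nose_wrinkler']
```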

Beginning in the 1990s, Hara’s group set about creating six expressions (fear, happiness, surprise, sadness, anger, and disgust) which, according to Ekman, are universal to all human cultures. The team constructed an aluminum robot head with 18 air-pressure-driven microactuators (in essence, tiny gears) that could mimic 26 facial movements. The next step was to cast a face in silicone rubber from a mold taken from one of the male students in the laboratory. Because the all-male group desired a female presence in the lab, they feminized the male face by adding a natural hair wig, rouged cheeks, and lipstick. The head was also fitted with false teeth.

Through trial and error, the researchers hooked up tiny wires from the actuators to spots on the mask that, when moved, would recreate the action units required to reproduce the desired six expressions. The robot’s head and eyeballs were also engineered to make humanlike movements. Finally, the Japanese engineers put a tiny camera in the robot’s left eye to scan a human face when positioned about one meter away. A computer connected to the camera determined the person’s expression by searching for brightness variations in different areas of the face.
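To illustrate the actuation side, the following sketch shows one way a named expression could be expanded into a command vector for the 18 actuators. The actuator indices, target positions, and the send_to_hardware() stub are assumptions for illustration; the article does not describe the robot’s real control interface.

```python
# Illustrative sketch: expanding a named expression into actuator commands.
# Indices, positions, and send_to_hardware() are hypothetical placeholders.

NUM_ACTUATORS = 18  # the head uses 18 air-pressure-driven microactuators

# Sparse target positions (0.0 = rest, 1.0 = full travel) per expression.
EXPRESSION_POSES = {
    "neutral": {},                           # every actuator at rest
    "surprise": {0: 1.0, 1: 1.0, 10: 0.8},   # e.g. raise both brows, drop the jaw
}

def pose_vector(expression):
    """Expand a sparse pose into a full command vector, one value per actuator."""
    pose = EXPRESSION_POSES.get(expression, {})
    return [pose.get(i, 0.0) for i in range(NUM_ACTUATORS)]

def send_to_hardware(commands):
    """Stand-in for whatever driver actually moves the pneumatic actuators."""
    print("actuator commands:", commands)

if __name__ == "__main__":
    send_to_hardware(pose_vector("surprise"))
```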

The computer observed changes in the dark areas (the eyes, mouth, nose, and eyebrows) that occur when a face moves from its neutral, emotionless expression to one showing any of the six emotions. Using a neural-network-based self-training program, the computer was eventually able to recognize, within 60 milliseconds, how changes in the brightness patterns of an individual’s face related to the expression of a given feeling. Such processing speed, combined with refinements in the design of the actuators, enabled the robot’s silicone face to respond to changes in expression with humanlike speed.
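The recognition pipeline described above can be sketched roughly as follows: measure the brightness of the dark facial regions relative to the neutral face, then let a small neural network map those changes to one of the six expressions. The region coordinates, the training data, and the use of scikit-learn here are assumptions for illustration only, not the team’s actual software.

```python
# Illustrative sketch of the recognition pipeline: compare the brightness of
# dark facial regions against the neutral face, then feed the differences to a
# small neural-network classifier.  Region boxes, training data, and the use of
# scikit-learn are assumptions for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical (row, col, height, width) boxes for the dark regions of a
# 64x64 grayscale face image: brows, eyes, nose, and mouth.
REGIONS = {
    "brows": (10, 12, 8, 40),
    "eyes":  (20, 12, 8, 40),
    "nose":  (30, 26, 10, 12),
    "mouth": (44, 20, 10, 24),
}

def region_brightness(image):
    """Mean brightness of each dark facial region."""
    return np.array([image[r:r + h, c:c + w].mean()
                     for (r, c, h, w) in REGIONS.values()])

def features(image, neutral):
    """Brightness change of each region relative to the neutral face."""
    return region_brightness(image) - region_brightness(neutral)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    neutral = rng.random((64, 64))

    # Synthetic training set: random "faces" with made-up labels, purely to
    # show the shape of the data such a classifier would learn from.
    labels = ["fear", "happiness", "surprise", "sadness", "anger", "disgust"]
    X = np.array([features(rng.random((64, 64)), neutral) for _ in range(120)])
    y = [labels[i % 6] for i in range(120)]

    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    clf.fit(X, y)
    print(clf.predict(features(rng.random((64, 64)), neutral).reshape(1, -1)))
```

In the real system the network was of course trained on images of actual faces rather than random noise; the sketch only shows the shape of the features and labels involved.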

The robot was surprisingly accurate, correctly guessing the expressions of test subjects, on average, 85 percent of the time. It also fared equally well as a facial actor. In fact, a group of students correctly identified the robot’s expressions 83 percent of the time. By comparison, the same students identified the facial expressions of professional actors 87 percent of the time.

Since accounts of the robot’s performance first appeared in the early 1990s, Hara has been approached by some unexpected parties. These included an artist interested in creating what he believes is a new art form (humans and robots reacting to each other’s expressions) and several psychologists in Japan who think such a robot could help certain handicapped children overcome difficulty manifesting appropriate expressions. The psychologists would have the robot act as a kind of two-way prompt, for example, demonstrating what a happy smile looks like and, after assuming a neutral expression, indicating when the child has smiled by smiling back.

More immediately, Hara’s team is working on a “mouth robot” whose actuators would realistically mimic lip movements during speech. Such a robot might help people with speech or language disabilities, Hara says, because studies show that more than 50 percent of speech understanding stems from facial expression and movements.
