Beginning in the 1990s, Hara’s group set about creating six expressions (fear, happiness, surprise, sadness, anger, and disgust) which, according to Ekman, are universal to all human cultures. The team constructed an aluminum robot head with 18 air-pressure-driven microactuators (in essence, tiny gears) that could mimic 26 facial movements. The next step was to cast a face in silicone rubber from a mold taken from one of the male students in the laboratory. Because the all-male group desired a female presence in the lab, they feminized the male face by adding a natural-hair wig, rouged cheeks, and lipstick. The head was also fitted with false teeth.

Through trial and error, the researchers hooked up tiny wires from the actuators to spots on the mask that, when moved, would recreate the action units required to reproduce the six desired expressions. The robot’s head and eyeballs were also engineered to make humanlike movements. Finally, the Japanese engineers put a tiny camera in the robot’s left eye to scan a human face positioned about one meter away. A computer connected to the camera determined the person’s expression by searching for brightness variations in different areas of the face.
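The wiring step described above amounts to two lookup tables: each target expression maps to the action units (AUs) that compose it, and each action unit maps to the actuators wired to produce it. A minimal sketch of that idea in Python follows; the specific AU and actuator assignments here are illustrative guesses, not the lab’s actual wiring.

```python
# Hypothetical mapping from the six basic expressions to facial action
# units (AUs), and from each AU to the pneumatic microactuators that
# drive it. All AU and actuator numbers are illustrative only.

EXPRESSION_TO_AUS = {
    "happiness": [6, 12],        # cheek raiser, lip corner puller
    "surprise":  [1, 2, 5, 26],  # brow raisers, upper lid raiser, jaw drop
    "fear":      [1, 2, 4, 5, 20],
    "sadness":   [1, 4, 15],
    "anger":     [4, 5, 7, 23],
    "disgust":   [9, 15, 16],
}

AU_TO_ACTUATORS = {
    1: [0], 2: [1], 4: [2], 5: [3], 6: [4], 7: [5],
    9: [6], 12: [7, 8], 15: [9], 16: [10], 20: [11, 12],
    23: [13], 26: [14],
}

def actuators_for(expression):
    """Return the sorted actuator indices to fire for an expression."""
    ids = set()
    for au in EXPRESSION_TO_AUS[expression]:
        ids.update(AU_TO_ACTUATORS[au])
    return sorted(ids)

print(actuators_for("happiness"))  # → [4, 7, 8]
```

With 18 actuators covering 26 movements, several movements necessarily share hardware, which is why a set union (rather than a one-to-one list) is the natural structure here.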
The computer observed changes in the dark areas (the eyes, mouth, nose, and eyebrows) that occur when a face moves from its neutral, emotionless expression to one showing one of the six emotions. Using a neural-network-based self-training program, the computer was eventually able to recognize within 60 milliseconds how changes in the brightness patterns of an individual’s face related to the expression of a given feeling. Such processing speed, combined with refinements in the design of the actuators, enabled the robot’s silicone face to respond to changes in expression with humanlike speed.
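The recognition pipeline above measures brightness in a few dark facial regions, subtracts the neutral baseline, and classifies the resulting difference vector. The following sketch illustrates that idea; where the original system used a self-training neural network, a nearest-prototype classifier stands in for brevity, and all region names and numbers are invented for illustration.

```python
# Sketch of brightness-difference expression recognition, assuming a
# face image already reduced to mean brightness per region. The real
# system used a self-training neural network; a nearest-prototype
# classifier stands in here. All values are invented.

REGIONS = ["left_eye", "right_eye", "brows", "mouth", "nose"]

# Hypothetical prototype brightness-change vectors (expressive minus
# neutral) for two of the six expressions.
PROTOTYPES = {
    "surprise":  [-0.30, -0.30, -0.25, -0.40, 0.00],   # eyes and mouth widen
    "happiness": [-0.05, -0.05, 0.00, -0.35, -0.10],   # mouth corners move
}

def features(neutral, expressive):
    """Brightness change per region relative to the neutral face."""
    return [e - n for n, e in zip(neutral, expressive)]

def classify(neutral, expressive):
    """Pick the expression whose prototype is nearest in squared error."""
    f = features(neutral, expressive)
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(f, PROTOTYPES[label]))
    return min(PROTOTYPES, key=dist)

neutral   = [0.50, 0.50, 0.45, 0.60, 0.55]
surprised = [0.21, 0.20, 0.21, 0.22, 0.54]
print(classify(neutral, surprised))  # → surprise
```

Comparing against the neutral baseline, rather than raw brightness, makes the features largely independent of lighting and skin tone, which is presumably what made a brightness-based scheme workable at all.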
The robot was surprisingly accurate, correctly guessing the expressions of test subjects an average of 85 percent of the time. It fared nearly as well as a facial actor: a group of students correctly identified the robot’s expressions 83 percent of the time, compared with 87 percent for the facial expressions of professional actors.
Since accounts of the robot’s performance first appeared in the early 1990s, Hara has been approached by some unexpected parties. These included an artist interested in creating what he believes is a new art form (humans and robots reacting to each other’s expressions) and several psychologists in Japan who think such a robot could help certain children with disabilities overcome difficulty producing appropriate expressions. The psychologists would have the robot act as a kind of two-way prompt: demonstrating, for example, what a happy smile looks like and, after assuming a neutral expression, indicating when the child has smiled by smiling back.
More immediately, Hara’s team is working on a “mouth robot” whose actuators would realistically mimic lip movements during speech. Such a robot might help people with speech or language disabilities, Hara says, because studies show that more than 50 percent of speech understanding stems from facial expressions and movements.