Some, however, are much better than others at reading microexpressions. Ekman’s University of San Francisco colleague Maureen O’Sullivan has tested some 20,000 people over two decades and identified 50 individuals among that number who consistently detect lying with better than 80 percent accuracy, with a very few approaching perfect accuracy. Clearly, some specific set of perceptual capabilities underlies these rare individuals’ success.
Since trained FACS experts generally replay footage for three hours in order to analyze just a single minute of a subject’s facial twitches and blinks on video, it made sense to ask whether a computer system could automate the process of microexpression analysis and match O’Sullivan’s human “wizards.” Ekman first considered the challenge in the late 1980s. On a sabbatical in London, he visited Brunel College, where an engineer who had developed one of the first parallel-processing computers was training an artificial neural network to recognize terrorists. The engineer’s problem was that subjects’ varied facial expressions made it difficult for his system to recognize their identities, while Ekman’s difficulty tended to be the reverse: he needed to disregard his subjects’ individual physiognomies to recognize the emotions revealed by their expressions. So the two men worked together. “Within three days, we taught the machine to recognize three different emotions on different people,” he says. “Back in the U.S., I wrote up a grant proposal for the NIH, which turned it down, claiming parallel-processing computers didn’t exist.” Ekman expressed his frustration to a friend who was a Nobel Prize-winning physicist; the friend contacted Terry Sejnowski, the cross-disciplinary eminence of computational neurobiology at the Salk Institute, whose lab had the necessary computers. Ekman and Sejnowski teamed up and got the grant.
Mark Frank, a former postdoctoral student of Ekman’s and now a professor at the University at Buffalo, in New York, has had the greatest success automating FACS. Frank, working out of Buffalo’s Center for Unified Biometrics and Sensors, has worked with a group of computer scientists at the University of California, San Diego (mostly former students of Sejnowski’s) to turn FACS into a technology called the Computer Expression Recognition Toolbox (CERT). I asked him how the project was going.
“We’ve done it,” Frank told me. “We have a system that operates in real time. In terms of machine learning, we had to give the machines good audiovisual material with real emotions and expressions. Then it’s just a matter of training, testing, training, testing.” CERT works about as well as a human expert, he says, but it’s a little faster.
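The “training, testing, training, testing” cycle Frank describes is the standard supervised-learning loop: labeled examples of real expressions go in, a classifier is fit, and it is then checked against held-out material. The sketch below is purely illustrative and much simpler than CERT’s actual pipeline; the feature vectors (imagined as FACS action-unit intensities), the labels, and the nearest-centroid classifier are all my assumptions, not Frank’s method.

```python
# Illustrative sketch only: CERT's real system is far more sophisticated.
# Each face is reduced to a hypothetical vector of facial action unit (AU)
# intensities, and a simple nearest-centroid classifier is trained on
# labeled examples, then tested on unseen ones.

def train(examples):
    """Average the AU vectors for each emotion label: one centroid per class."""
    sums, counts = {}, {}
    for features, label in examples:
        sums.setdefault(label, [0.0] * len(features))
        sums[label] = [s + f for s, f in zip(sums[label], features)]
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [s / counts[lbl] for s in vec] for lbl, vec in sums.items()}

def classify(centroids, features):
    """Return the emotion whose centroid is closest in squared distance."""
    def dist(lbl):
        return sum((a - b) ** 2 for a, b in zip(centroids[lbl], features))
    return min(centroids, key=dist)

# Hypothetical training data: (AU-intensity vector, emotion label).
training_set = [
    ([0.9, 0.1, 0.8], "happiness"),  # e.g. strong cheek raiser + lip corner pull
    ([0.8, 0.2, 0.9], "happiness"),
    ([0.1, 0.9, 0.1], "anger"),      # e.g. strong brow lowerer
    ([0.2, 0.8, 0.2], "anger"),
]
model = train(training_set)

# Testing phase: classify new, unseen expression vectors.
print(classify(model, [0.85, 0.15, 0.85]))  # → happiness
print(classify(model, [0.15, 0.85, 0.1]))   # → anger
```

In practice the features would come from computer vision (detecting the face, then measuring muscle movements frame by frame), and the classifier would be retrained and retested repeatedly as new labeled footage arrived, which is the iteration Frank is alluding to.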
A technology that accurately detects people’s true emotions possesses tremendous political, social, and commercial potential. What if political commentators had applied it to footage of last year’s U.S. presidential debates, for instance, to reveal whether McCain or Obama was lying? Or if lawyers used it to analyze video depositions presented during court trials to determine whether a witness had lied, a finding that could be cited as evidence? Indeed, since the technology mines ordinary video, it might be commodified as a cheap Web service so that everybody could use it: people might record job interviews, business negotiations, prenuptial-agreement signings, wedding ceremonies, or any other kind of civil transaction, with an eye toward reviewing them to ascertain the good faith of those involved. “You wonder what you do when the cat comes out of the bag,” Frank says. “And can you get it back in?”