
The argument for admitting such evidence in court seems straightforward. To be admissible, a technology must satisfy one of two legal standards; the Daubert test (from the 1993 U.S. Supreme Court case Daubert v. Merrell Dow Pharmaceuticals) is the one used in most jurisdictions. “Daubert requires that scientific testimony must qualify as reliable ‘scientific knowledge,’” says Edward Imwinkelried, a law professor at the University of California, Davis, who is an expert on the admissibility of scientific evidence. “The Supreme Court defines ‘scientific knowledge’ as knowledge validated by a specific methodology, which it described in classic terms as, firstly, the formulation of an hypothesis and, secondly, the subsequent controlled experimentation or systematic field observation to verify or falsify the hypothesis.” Given FACS’s three decades of acceptance and CERT’s record of accuracy, automated facial-expression analysis might well meet those criteria.

Making this argument, however, would require the support of expert witnesses like Frank or Ekman, and that’s not forthcoming. Frank, for instance, supports CERT’s use by the U.S. government for purposes of national security, which he guesses may happen by 2011, but he doesn’t want to see the technology spread much further: “Though we get a call every two weeks from people wanting to make the big bucks by marketing this as lie detection, I’m proud that nobody involved in the science has thus far gone beyond what it supports.”

What the science confirms is that both FACS and CERT can reveal much about any human subject’s real emotions, but those results must be construed intelligently, especially in the context of detecting deception. Otherwise, Ekman summed up, users risk what he calls “Othello’s error”: “Othello read Desdemona’s fear accurately. But he didn’t recognize that the fear of being disbelieved is just like the fear of being caught. Yes, our faces reveal what emotions we’re experiencing, if you can read the signs. What our faces don’t necessarily reveal is what triggered that emotion.” If you don’t know that, interpretation can go far astray. “Rule out all the possible explanations before you conclude that what you’re seeing is a sign of lying about a criminal act,” Ekman warns. “Because very often, it’s not.”

Mark Williams is a contributing editor to Technology Review.


Credit: Associated Press


