
Machine Vision Algorithm Learns to Recognize Hidden Facial Expressions

Microexpressions reveal your deepest emotions, even when you are trying to hide them. Now a machine vision algorithm has learned to spot them, with wide-ranging applications from law enforcement to psychological analysis.

Most people are good at recognizing the ordinary emotions on other people’s faces. But there is another set of facial expressions that most people are almost entirely unaware of. In the late 1960s, psychologists discovered that when humans try to hide their emotions, they often display their real feelings in “microexpressions” that appear and disappear in the blink of an eye.

These fleeting facial expressions have fascinated psychologists and the general public ever since. It turns out that while most people are entirely oblivious to microexpressions, a tiny subset of individuals can spot them accurately and use them to tell when people are hiding their true feelings or when they are downright lying.

A significant industry has grown up around training people to recognize microexpressions. Law enforcement officials and antiterrorism agents are often trained in this way in the hope that it will help them spot individuals who are up to no good. Whether this training works is the subject of much debate; it may be that most people simply lack the sensory and cognitive skills to catch microexpressions, regardless of the training they receive.

But there is another way to spot microexpressions. In recent years, machine vision has improved at a rate so rapid that it has surprised even experts in the field. Today, machines equipped with the best artificial-intelligence algorithms can routinely outperform humans at object recognition and facial recognition, and have begun to match them at recognizing expressions and the emotional charge they carry.

That raises an interesting prospect. Could machines soon become better at recognizing microexpressions than humans? Today we get an answer thanks to the work of Xiaobai Li at the University of Oulu in Finland and a few pals. These guys have built and tested the first machine vision system capable of spotting and recognizing microexpressions, and they say it is already better than humans at the task.

The rapid developments in artificial intelligence in recent years have come about partly because of improved methods of computing. But these machines are useless without vast and accurate databases to train them.

So the first task for Li and co was to create a database of videos showing microexpressions in realistic conditions. This is easier said than done. Microexpressions tend to occur when individuals hide their feelings under conditions of relatively high stakes.

That’s not easy to reproduce. Indeed, much previous work has focused on posed expressions, but various psychologists have pointed out the limitations of this method, not least that microexpressions look significantly different from posed expressions.

Li and co tackled this problem by asking a group of 20 individuals to watch a series of videos designed to invoke strong emotions. These people were given a strong incentive to avoid showing any emotion during the task: they were told that they would have to fill in a long, boring questionnaire explaining any emotions they did display.

As a result, 16 of the 20 individuals produced 164 microexpressions between them, which the team recorded on a high-speed camera at 100 frames per second. The team linked the emotions on display to the emotional content of the videos, giving them a gold-standard database with which to train their machine-learning algorithm.

The task of recognizing microexpressions falls into two parts. The first is to pick out the fleeting changes in facial features that characterize a microexpression. The second is to identify the emotion they display.

The team tackled the first problem by using a single frame showing the subject’s face as a standard and comparing all subsequent frames against it to determine how the expression changed. Any change beyond a certain threshold was defined as a microexpression, and these frames were set aside for further analysis.
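
In outline, that spotting stage can be sketched in a few lines of Python. This is a minimal illustration rather than the paper’s implementation: the function name, the per-frame feature vectors, the distance measure, and the threshold are all placeholders for whatever descriptors Li and co actually compute.

```python
import numpy as np

def spot_microexpression_frames(features, threshold, baseline_index=0):
    """Flag frames whose facial features deviate from a baseline frame.

    features: array of shape (n_frames, n_features), one descriptor
    vector per video frame (a stand-in for the paper's own features).
    Returns indices of frames whose deviation from the baseline frame
    exceeds the threshold; these are candidate microexpression frames.
    """
    baseline = features[baseline_index]
    # Distance of every frame's descriptor from the reference frame
    deviation = np.linalg.norm(features - baseline, axis=1)
    return np.where(deviation > threshold)[0]

# Example with stand-in data: 100 frames of 32-dimensional features,
# with a brief, pronounced change injected around frame 40
rng = np.random.default_rng(0)
feats = rng.normal(scale=0.1, size=(100, 32))
feats[40:45] += 1.0
print(spot_microexpression_frames(feats, threshold=2.0))  # [40 41 42 43 44]
```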

Recognizing the emotion in a microexpression is generally harder, because these expressions tend to be far less pronounced than ordinary ones. “One major challenge for microexpression recognition is that the intensity levels of facial movements are too low to be distinguishable,” say Li and co.

The team solved this using an algorithm that “magnifies” expressions. This works by identifying the parts of the face in motion when an expression changes and distorting the face to move them further.

This process has to be applied carefully, however. Li and co say they cannot use it during the spotting stage, because the algorithm magnifies all movement, such as head turns, not just the expressions. So it is applied only to the frames identified by the spotting process described above.
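
One well-known way to amplify subtle facial motion is Eulerian video magnification, which bandpass-filters each pixel’s brightness over time and scales up the result. The sketch below captures that idea; the filter band, the amplification factor alpha, and the use of scipy here are illustrative assumptions, not the paper’s actual method or settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def magnify_motion(frames, fps, low_hz=0.4, high_hz=3.0, alpha=10.0):
    """Amplify subtle motion by boosting temporal pixel variation.

    frames: float array (n_frames, height, width) of aligned face crops.
    Each pixel's time series is bandpass-filtered to isolate subtle
    changes, scaled by alpha, and added back to the original frames.
    The band and alpha are illustrative, not the paper's settings.
    """
    nyquist = fps / 2.0
    b, a = butter(1, [low_hz / nyquist, high_hz / nyquist], btype="band")
    subtle = filtfilt(b, a, frames, axis=0)  # per-pixel temporal filter
    return frames + alpha * subtle

# Example: 100 frames at 100 fps, matching the team's recording rate
rng = np.random.default_rng(0)
clip = 0.5 + 0.01 * rng.normal(size=(100, 64, 64))
magnified = magnify_motion(clip, fps=100)
```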

Finally, the algorithm classifies the emotion on display as positive, negative or surprise, a process it learns from the training database.
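As a sketch of this final stage, one could train any standard three-class classifier on feature vectors extracted from the magnified clips. The linear SVM pipeline and the random stand-in data below are assumptions for illustration (the 164 samples echo the number of recorded microexpressions, but the features are synthetic); the paper’s actual descriptors and classifier may differ.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

LABELS = ["positive", "negative", "surprise"]

# Stand-in features: in practice these would be descriptors extracted
# from the magnified clips, paired with the gold-standard labels from
# the training database described above.
rng = np.random.default_rng(0)
X = rng.normal(size=(164, 64))            # one vector per microexpression
y = rng.integers(0, len(LABELS), size=164)

clf = make_pipeline(StandardScaler(), LinearSVC())
clf.fit(X, y)
print(LABELS[clf.predict(X[:1])[0]])      # prints the predicted label name
```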

An interesting question is how well this approach works compared to human performance. To find out, the team asked 15 people to identify the expression displayed in videos containing just the microexpressions (so they didn’t need to pick out the microexpressions from longer sequences). Another 15 people watched the entire videos and had to spot each microexpression as well as identify it.

The results make for interesting reading. Li and co’s machine matched human ability to spot and recognize microexpressions and significantly outperformed humans at the recognition task alone.

“Our method is the first system that has ever been tested on a hard spontaneous microexpression data set, containing natural microexpressions,” say the team. “It outperforms humans at microexpression recognition by a significant margin, and performs comparably to humans at the combined microexpression spotting and recognition task.”

That’s not bad for a first try, and these machines are clearly going to improve quickly.

It’s not hard to come up with applications. Li and co pick out lie detection, law enforcement, and psychotherapy, but it’s easy to imagine this being used in job interviews and assessments and even in Google Glass-type devices in everyday life.

Soon, there will be nowhere to hide.

Ref: arxiv.org/abs/1511.00423: Reading Hidden Emotions: Spontaneous Microexpression Spotting and Recognition
