Even seasoned parents can find it tough to tell the difference between a baby in pain and a baby who is hungry. But now a face-recognition system is being developed that could help lift the veil on infant communication and allow us to know when babies are genuinely experiencing pain.

If it proves successful, this kind of software could be used in neonatal intensive-care units (NICUs) to help alert medical staff when an infant becomes seriously distressed, says Sheryl Brahnam, an information scientist at Missouri State University at Springfield. “The problem is, they can’t articulate pain verbally,” she says. To make matters worse, an infant’s repertoire of facial expressions is very limited, so it’s not always easy to determine when a baby is actually experiencing pain.

Currently, clinicians use “objective scales” of pain indicators for neonates, says Gilbert Martin, director of the NICU at Citrus Valley Medical Center in West Covina, CA. Such pain scales take into account a variety of factors, including body posture, blood pressure, and sensitivity to touch, as well as facial expression. But there is usually still an element of subjectivity in assessing a patient, he says.
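To make the idea concrete, here is a minimal Python sketch of how a multi-factor pain scale might be tallied. The indicator names, the 0-to-2 ratings, and the equal weighting are assumptions made up for this example; they are not the actual scale used at Citrus Valley.

```python
# Hypothetical sketch of a multi-factor neonatal pain scale. The
# indicators, ratings, and equal weighting are illustrative only.

def pain_score(ratings: dict) -> int:
    """Sum per-indicator ratings (each 0, 1, or 2) into a total score."""
    factors = ["facial_expression", "body_posture",
               "blood_pressure", "touch_sensitivity"]
    return sum(ratings.get(f, 0) for f in factors)

obs = {"facial_expression": 2, "body_posture": 1,
       "blood_pressure": 0, "touch_sensitivity": 1}
print(pain_score(obs))  # 4; a clinician would interpret this against a threshold
```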

Until fairly recently, the general consensus was that newborn babies couldn’t experience pain. In fact, until the mid-1990s it was common for infants to undergo surgery without any kind of anesthetic or pain relief, says Martin. “It’s really terrible to think of,” he says. But the belief was that a newborn’s nervous system wasn’t mature enough to experience pain, he explains.

Brahnam’s system, called Classification of Pain Expressions (COPE), uses facial-recognition techniques to extract and examine features of the baby’s expression, such as how scrunched up the eyes are, the angle of the mouth, and the furrow of the brow.
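The article does not describe how COPE turns these cues into numbers. As a rough sketch only, assuming 2-D facial-landmark coordinates are already available from some hypothetical upstream detector, the three cues could be reduced to a small feature vector:

```python
import numpy as np

# Illustrative only: the article names the cues COPE examines but not
# how they are encoded. All landmark names below are hypothetical, and
# the coordinates would come from some upstream face-landmark detector.

def expression_features(lm: dict) -> np.ndarray:
    # Scrunched-up eyes -> small vertical eyelid opening.
    eye_opening = np.linalg.norm(lm["eye_top"] - lm["eye_bottom"])
    # Angle of the mouth corner relative to the mouth center.
    dx, dy = lm["mouth_corner"] - lm["mouth_center"]
    mouth_angle = np.arctan2(dy, dx)
    # Furrowed brow -> inner brow points pulled closer together.
    brow_gap = np.linalg.norm(lm["brow_inner_left"] - lm["brow_inner_right"])
    return np.array([eye_opening, mouth_angle, brow_gap])

landmarks = {k: np.array(v, dtype=float) for k, v in {
    "eye_top": (10, 40), "eye_bottom": (10, 36),
    "mouth_corner": (20, 10), "mouth_center": (15, 12),
    "brow_inner_left": (8, 50), "brow_inner_right": (12, 50),
}.items()}
print(expression_features(landmarks))  # e.g. [4.0, -0.38, 4.0]
```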

The system relies on a neural-network learning algorithm that has been trained on a database of 204 photographic images of 26 different infants. Of these, 60 showed the babies in pain. These photos were taken during a standard heel prick, a procedure used to draw blood that is widely acknowledged to be painful. The rest of the images were taken when the infants were making very similar facial expressions but had not been stimulated by pain. These images were obtained using other stimuli, such as blowing gently on the babies’ faces. “And rubbing their heel causes their face to scrunch up,” says Brahnam.
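As an illustration of this kind of supervised setup, not Brahnam’s actual code, the sketch below fits a small neural-network classifier to 204 examples with 60 “pain” labels. The feature vectors are random placeholders standing in for whatever COPE extracts from the images:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Stand-in data matching the counts in the article: 204 images of 26
# infants, 60 labeled "pain" (heel prick) and 144 labeled "no pain"
# (similar expressions evoked without pain). The features are random
# placeholders; real inputs would be derived from the face images.
rng = np.random.default_rng(0)
X = rng.normal(size=(204, 3))       # e.g., the 3-value feature vectors sketched earlier
y = np.array([1] * 60 + [0] * 144)  # 1 = pain, 0 = non-pain expression

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.predict(X[:5]))
```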

Preliminary tests showed that the system was more than 90 percent accurate. This is remarkable, given how similar these expressions can look, says Brahnam. Even so, she is quick to point out the limitations of using such a small training set and still images instead of video. “We have a long way to go to see if this would really work in a clinical setting,” she says.
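The article does not say how that accuracy was measured. On a dataset of only 204 images, a figure like this would typically be estimated with cross-validation rather than a single held-out test set; a minimal sketch, using the same stand-in data as above:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Same placeholder data as the training sketch above. The article does
# not state which evaluation protocol Brahnam used; 10-fold cross-
# validation is a common choice for a dataset this small.
rng = np.random.default_rng(0)
X = rng.normal(size=(204, 3))
y = np.array([1] * 60 + [0] * 144)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
print(f"mean accuracy across folds: {scores.mean():.2%}")
```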
