Assessing pain in infants
Even seasoned parents can find it tough to tell the difference between a baby in pain and a baby who is hungry. But now a face-recognition system is being developed that could help lift the veil on infant communication and allow us to know when babies are genuinely experiencing pain.
If it proves successful, this kind of software could be used in neonatal intensive-care units (NICU) to help alert medical staff when an infant becomes seriously distressed, says Sheryl Brahnam, an information scientist at Missouri State University at Springfield. “The problem is, they can’t articulate pain verbally,” she says. To make matters worse, an infant’s repertoire of facial expressions is very limited, so it’s not always easy to determine when a baby is actually experiencing pain.
Currently, clinicians use “objective scales” of pain indicators for neonates, says Gilbert Martin, director of the NICU at the Citrus Valley Medical Center in West Covina, CA. Such pain scales take into account a variety of factors, including body posture, blood pressure, and sensitivity to touch, as well as facial expression. But there is usually still an element of subjectivity in assessing a patient, he says.
Until fairly recently, the general consensus was that newborn babies couldn’t experience pain. In fact, until the mid-1990s it was common for infants to undergo surgery without any kind of anaesthetic or pain relief, says Martin. “It’s really terrible to think of,” he says. But the belief was that a newborn’s nervous system wasn’t mature enough to experience pain, he explains.
Brahnam’s system, called Classification of Pain Expressions (COPE), uses facial-recognition techniques to extract and examine features of the baby’s expression, such as how scrunched up the eyes are, the angle of the mouth, and the furrow of the brow.
The system relies on a neural-network learning algorithm that has been trained on a database of 204 photographic images of 26 different infants. Of these, 60 showed the babies in pain. These photos were taken during a standard heel prick, a blood-drawing procedure that is widely acknowledged to be painful. The rest of the images were taken when the infants were pulling very similar facial expressions but had not been stimulated by pain; these were obtained using other stimuli, such as blowing gently on the babies’ faces or rubbing their heel. “And rubbing their heel causes their face to scrunch up,” says Brahnam.
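The core idea is standard supervised learning: reduce each face image to numeric features (how scrunched the eyes are, the mouth angle, the brow furrow) and train a classifier on examples labeled pain or no-pain. The sketch below is purely illustrative of that pipeline, not the COPE implementation; the feature names, values, and the single-unit network are assumptions for demonstration.

```python
import math

def train(samples, labels, epochs=2000, lr=0.5):
    """Train a single sigmoid unit (the simplest neural network)
    with gradient descent on log-loss. Returns weights and bias."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
            err = p - y                       # gradient of log-loss w.r.t. z
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Return 1 (pain) if the unit's output crosses 0.5, else 0."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Hypothetical features per image: [eye scrunch, mouth-angle depression,
# brow furrow], each scaled to 0-1. Values are made up for illustration.
pain    = [[0.9, 0.8, 0.9], [0.8, 0.9, 0.7], [0.95, 0.7, 0.85]]
no_pain = [[0.2, 0.1, 0.3], [0.3, 0.2, 0.1], [0.1, 0.3, 0.2]]
X = pain + no_pain
y = [1, 1, 1, 0, 0, 0]

w, b = train(X, y)
print(predict(w, b, [0.85, 0.8, 0.8]))  # strongly scrunched features
```

A real system would use far richer features and a multi-layer network, but the training loop (forward pass, error, weight update) is the same in kind.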
Preliminary tests showed that the system was more than 90 percent accurate. This is remarkable, given how similar these expressions can look, says Brahnam. Even so, she is quick to point out the limitations of using such a small training set and still images instead of video. “We have a long way to go to see if this would really work in a clinical setting,” she says.
There is a real need for this technology, says Martin. “It would be very welcome if you could remove some of the subjectivity,” he says. There is also evidence to suggest that allowing infants to experience pain can impair their neurological development over time, he says. “We’re talking about an immaturity in children responding to stress years later.”
Pain is a frontline defense mechanism and often the first sign that something is wrong, says Brahnam. So apart from the issue of preventing another human being from suffering, there are medical benefits to alleviating pain as soon as possible, she says. If fitted above NICU cots, a system like COPE could help medical staff automatically detect when a patient develops problems.
There is now a lot of interest in finding ways to automatically detect pain, says Rosalind Picard, director of affective computing research at Massachusetts Institute of Technology, in Cambridge. And the applications are not limited to neonates, she says.
Picard has already been approached by anaesthesiologists who are keen to find ways to monitor the pain that patients are experiencing during surgery. “There are some facial expressions that are involuntary,” she says. There are also countless examples of people regaining consciousness during surgery. Horrifically, these patients later report having felt everything the surgeon was doing to them and yet being unable to tell anyone because of the paralyzing effects of the drugs. In theory, it may be possible to detect the involuntary muscle movements of the face to determine when this is happening, Picard says.
Brahnam and her colleagues’ work will appear in a forthcoming issue of the journal Decision Support Systems. They are now working on a follow-up study involving 500 infants and using video images. Moving images should allow the researchers to investigate the dynamic characteristics of pain expressions, she says.