Brain Scans Teach Humans to Empathize with Bots
When we watch a human express a powerful emotion - anger, fear, disgust - large swaths of our brains light up, including regions rich in so-called “mirror neurons,” which are unusual because they fire both when we produce a given action and when we perceive someone else performing it. They are the basis of what neuroscientists call resonance.

Resonance describes the mechanism by which the neural substrates involved in the internal representation of actions, as well as emotions and sensations, are also recruited when we perceive another individual experiencing the same action, emotion, or sensation.
To test whether the brain regions activated when a person watches a robot express powerful emotions are the same as those activated when watching another human do so, an international group of researchers put volunteers into an fMRI machine - which can, with limited spatial and temporal resolution, determine which parts of the brain are active at any given moment - and played them clips of humans and robots making identical facial expressions.

On a very basic level, the researchers were asking whether humans empathize with even obviously mechanical robots.
The results, published last week in the journal PLoS ONE, were about what you would expect: in the default condition, in which participants were told to concentrate on the motion of the facial gesture itself, their brains showed significantly less activation when they watched robots expressing emotion than when they watched humans doing the same thing.
But a funny thing happened when participants were told to concentrate on the emotional content of the robots’ expressions: their brains showed significantly increased activity, including in the areas that contain mirror neurons.
So when humans are asked to think about what a robot expressing an emotion might be feeling, we are instantly more likely to empathize with it. The very instruction - please concentrate on what the robot is feeling - presupposes that the robot has emotions at all.
Whether or not the robot is actually feeling something is therefore up to us - it depends on our beliefs about the robot’s sentience or lack of it. It’s not hard to convincingly simulate at least an animal-like level of emotion in a robot with even the most primitive gestural vocabulary - that’s the basis of the success of robotic therapy as carried out with, for example, the robotic baby seal Paro.
Below, I’ve embedded the very video that participants were shown while in the fMRI scanner. The robot itself is barely recognizable as human, and its gestures even less so, which makes it all the more intriguing that participants were able to imagine, if only for an instant, that it has feelings too. What might an even more humanoid - or more familiar - robot accomplish?