
Don’t I Know You?

New research sheds light on how the brain recognizes faces.

It takes only a fraction of a second to recognize celebrities on TV while flipping through the channels: Rachael Ray hawking coffee, David Hasselhoff judging a talent show, or Charles Gibson relaying the latest tragedy in Iraq. While it seems easy, recognizing those faces is a cognitively complex task. Your brain must identify the object you're seeing as a face, regardless of its size or viewing angle; interpret the expression encoded by the particular arrangement of eyes and mouth; and access the brain's memory centers to determine whether the face is familiar. By combining two of the most important tools in neuroscience, brain imaging and electrical recordings from single brain cells, scientists are poised to finally understand how the brain performs these complex computations.

Facial recognition: Scientists showed images of faces and other objects (pictured above) to monkeys and measured their brains’ response using functional magnetic resonance imaging. The researchers identified several regions, highlighted in orange, that respond selectively to faces.

“Shape recognition is one of the biggest unsolved questions in visual biology,” says David Hubel, an emeritus neuroscientist at Harvard Medical School who won a Nobel Prize for his research on the visual system. “Combining these different techniques has tremendous power.”

The visual system works like a series of relay stations. Visual information is fed into the brain via the retina and the optic nerve and is then shunted to different processing centers. This visual information, encoded as neural signals, is continually processed and rerouted (different areas analyze color, movement, and form) and is ultimately summed. This allows the brain to recognize objects, such as a moving truck, a steaming kettle, or a familiar face.

Facial recognition is an extremely important component of human social interaction, and our brains appear to have evolved a specialized processing center to carry out the complex task. Brain-imaging studies show that a particular region is active when people look at faces as opposed to other objects, such as houses or cars. And a stroke affecting a particular part of the brain can knock out the ability to process faces, a disorder known as prosopagnosia, or face blindness.

However, because of the relatively low resolution of brain-imaging technologies, scientists know little about how the brain actually processes faces. Doris Tsao, a neuroscientist at the University of Bremen, in Germany, aims to change that. To study facial processing step by step, she is combining functional magnetic resonance imaging (fMRI), a brain-imaging technology only recently applied to animals, with single-cell electrical recording.

In research published last year, Tsao and her colleagues identified several parts of the brain in monkeys that respond selectively to faces. She then used the detailed anatomical picture generated by MRI to guide an electrode precisely to one of those spots. By recording activity from a number of cells there, she found that different cells respond to different facial characteristics, such as the overall shape of the face or the size of the eyes. This exquisite level of detail would have been impossible to achieve using brain imaging alone, and it yields important clues about how our brains detect faces. "The combination of [brain imaging] and electrode recording allowed her to get really amazing insight into the behavior of these face cells," says Hubel.

The findings also help confirm one of the basic assumptions of functional MRI. The technology measures changes in blood flow to brain cells, which neuroscientists use as a proxy for neural activity. Finding a population of cells that respond specifically to faces within the face-processing region highlighted by MRI “shows that the assumption everyone operates under is correct,” says Christof Koch, a neuroscientist at the California Institute of Technology, in Pasadena.

Tsao is now studying the different properties of each face-processing region in more detail. One face patch, for example, appears to be involved in detecting the overall shape of the face. “Our hypothesis is that it measures ratios [between facial features], but that it hasn’t made the identity of the face explicit yet,” she says. “I think the three anterior regions are encoding other aspects of faces–expression, movement, memory, identity.”

To truly understand how the brain processes visual information, scientists must figure out how disparate pieces of information, such as the shape of a face and the sense of recognizing it, are bound together to create our perception of the face. Using dyes detectable with MRI to trace the connections between neurons, Tsao plans to record activity from multiple connected cells to determine how visual information is summed and shaped as it travels through the brain. "I think that seeing how this information is transformed will clarify a lot of what the brain is doing," she says.

Ultimately, Tsao’s work could shed light on how neural activity leads to conscious visual perception. “It’s a step toward answering the age-old question, how does visual conscious perception arise from the underlying neural activity?” says Koch. “What is the relation between the mind and the brain?”
