It only takes a millisecond to recognize celebrities on TV while flipping through the channels: Rachael Ray hawking coffee, David Hasselhoff judging a talent show, or Charles Gibson relaying the latest tragedy in Iraq. While it seems easy, recognizing those faces is a cognitively complex task. Your brain must identify the object you’re seeing as a face, regardless of its size or angle; interpret the expression encoded by the particular arrangement of eyes and mouth; and access the memory centers of the brain to determine whether the face is familiar. By combining two of the most important tools in neuroscience–brain imaging and electrical recordings from single brain cells–scientists are poised to finally understand how the brain performs these complex computations.
“Shape recognition is one of the biggest unsolved questions in visual biology,” says David Hubel, an emeritus neuroscientist at Harvard Medical School who won a Nobel Prize for his research on the visual system. “Combining these different techniques has tremendous power.”
The visual system works like a series of relay stations. Visual information is fed into the brain via the retina and the optic nerve and is then shunted to different processing centers. This visual information, encoded as neural signals, is continually processed and rerouted–different areas analyze color, movement, and form–and is ultimately summed. This allows the brain to recognize objects, such as a moving truck, a steaming kettle, or a familiar face.
Facial recognition is an extremely important component of human social interaction, and our brains appear to have evolved a special processing center to carry out the complex task. Brain-imaging studies show that a particular region is active when people look at faces as opposed to other objects, such as houses or cars. And damage to that part of the brain–from a stroke, for example–can knock out face-processing ability, causing a disorder known as prosopagnosia, or face blindness.
However, because of the relatively low resolution of brain-imaging technologies, scientists know little about how the brain actually processes faces. Doris Tsao, a neuroscientist at the University of Bremen, in Germany, aims to change that. To study facial processing step by step, she is combining magnetic resonance imaging (MRI), a brain-imaging technology only recently applied to animals, with single-cell electrical recording.
In research published last year, Tsao and her colleagues identified several parts of the monkey brain that respond selectively to faces. She then used the detailed anatomical picture generated by MRI to guide an electrode precisely to one of those spots. By recording activity from a number of cells there, she found that different cells respond to different facial characteristics–the overall shape of the face or the size of the eyes, for example. This exquisite level of detail would have been impossible to achieve using brain imaging alone, and it yields important clues to how our brains detect faces. “The combination of [brain imaging] and electrode recording allowed her to get really amazing insight into the behavior of these face cells,” says Hubel.
The findings also help confirm one of the basic assumptions of functional MRI. The technology measures changes in blood flow to brain cells, which neuroscientists use as a proxy for neural activity. Finding a population of cells that respond specifically to faces within the face-processing region highlighted by MRI “shows that the assumption everyone operates under is correct,” says Christof Koch, a neuroscientist at the California Institute of Technology, in Pasadena.
Tsao is now studying the different properties of each face-processing region in more detail. One face patch, for example, appears to be involved in detecting the overall shape of the face. “Our hypothesis is that it measures ratios [between facial features], but that it hasn’t made the identity of the face explicit yet,” she says. “I think the three anterior regions are encoding other aspects of faces–expression, movement, memory, identity.”
To truly understand how the brain processes visual information, scientists must figure out how disparate pieces of information–the shape of the face and a sense of recognition of the face, for example–are bound together to create our perception of the face. Using dyes detectable with MRI to trace connections between different neurons, Tsao will record activity from multiple connected cells to determine how visual information is summed and shaped as it travels through the brain. “I think that seeing how this information is transformed will clarify a lot of what the brain is doing,” she says.
Ultimately, Tsao’s work could shed light on how neural activity leads to conscious visual perception. “It’s a step toward answering the age-old question, how does visual conscious perception arise from the underlying neural activity?” says Koch. “What is the relation between the mind and the brain?”