For the first time, MIT neuroscientists have identified a neural population in the human auditory cortex that responds specifically to music, but not to speech or other environmental sounds.
Whether such a population of neurons exists has been the subject of widespread speculation, says Josh McDermott, an assistant professor of neuroscience at MIT. “One of the core debates surrounding music is to what extent it has dedicated mechanisms in the brain and to what extent it piggybacks off of mechanisms that primarily serve other functions,” he says.
Using functional magnetic resonance imaging (fMRI), McDermott and colleagues scanned the brains of 10 human subjects listening to 165 sounds, including different types of speech and music as well as everyday sounds such as footsteps, a car engine starting, and a telephone ringing.
Mapping the auditory system has proved difficult because fMRI, which measures blood flow as an index of neural activity, lacks fine spatial resolution. In fMRI, “voxels”—the smallest unit of measurement—can reflect the response of millions of neurons.
To tease apart these responses, the researchers used a technique that models each voxel as a mixture of multiple underlying neural responses. This revealed six populations of neurons—the music-selective population, a set of neurons that respond selectively to speech, and four sets that respond to other acoustic properties such as pitch and frequency.
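The idea of unmixing voxels can be sketched numerically. The snippet below is a hypothetical illustration, not the study's actual method: it treats each voxel's response across 165 sounds as a non-negative mixture of a few underlying component profiles, and recovers those components with a minimal non-negative matrix factorization written from scratch (all sizes and names here are illustrative assumptions).

```python
import numpy as np

# Hypothetical setup: each voxel's response to 165 sounds is a
# non-negative mixture of a few underlying component profiles.
rng = np.random.default_rng(0)
n_components, n_voxels, n_sounds = 3, 50, 165
true_profiles = rng.random((n_components, n_sounds))  # component response per sound
weights = rng.random((n_voxels, n_components))        # how each voxel mixes components
V = weights @ true_profiles                           # observed voxel-by-sound matrix

def nmf(V, k, n_iter=500, eps=1e-9):
    """Minimal NMF via multiplicative updates -- a stand-in for the
    study's decomposition, used only to show the mixture idea."""
    rng = np.random.default_rng(1)
    W = rng.random((V.shape[0], k))   # voxel weights on each component
    H = rng.random((k, V.shape[1]))   # component response profiles
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

W, H = nmf(V, n_components)
error = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative reconstruction error: {error:.4f}")
```

In this toy case the factorization recovers the mixture almost exactly; in real fMRI data, a component whose recovered profile is high for music sounds and low for everything else would correspond to a music-selective population.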
Those four acoustically responsive populations overlap with regions of “primary” auditory cortex, which performs the first stage of cortical sound processing. The speech- and music-selective neural populations lie beyond this primary region.
“We think this provides evidence that there’s a hierarchy of processing where there are responses to relatively simple acoustic dimensions in this primary auditory area. That’s followed by a second stage of processing that represents more abstract properties of sound related to speech and music,” says postdoc Sam Norman-Haignere, PhD ’15, lead author of the study, published in Neuron.
Nancy Kanwisher ’80, PhD ’86, a professor of cognitive neuroscience and an author of the study, says that even though music-selective responses exist in the brain, that doesn’t mean they reflect an innate brain system. “An important question for the future will be how this system arises in development: how early it is found in infancy or childhood, and how dependent it is on experience,” she says.