For the first time, MIT neuroscientists have identified a neural population in the human auditory cortex that responds specifically to music, but not to speech or other environmental sounds.
Whether such a population of neurons exists has been the subject of widespread speculation, says Josh McDermott, an assistant professor of neuroscience at MIT. “One of the core debates surrounding music is to what extent it has dedicated mechanisms in the brain and to what extent it piggybacks off of mechanisms that primarily serve other functions,” he says.
Using functional magnetic resonance imaging (fMRI), McDermott and colleagues scanned the brains of 10 human subjects listening to 165 sounds, including different types of speech and music as well as everyday sounds such as footsteps, a car engine starting, and a telephone ringing.
Mapping the auditory system has proved difficult because fMRI, which measures blood flow as an index of neural activity, lacks fine spatial resolution. In fMRI, “voxels”—the smallest units of measurement—can each reflect the combined response of millions of neurons.

To tease apart these responses, the researchers used a technique that models each voxel as a mixture of multiple underlying neural responses. This revealed six populations of neurons—the music-selective population, a set of neurons that respond selectively to speech, and four sets that respond to other acoustic properties such as pitch and frequency.
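The idea can be illustrated with a toy decomposition. The sketch below is a minimal stand-in that uses non-negative matrix factorization to factor a voxel-by-sound response matrix into a small number of component response profiles; it is not the study's actual method, and the matrix sizes, variable names, and random data are all illustrative assumptions.

```python
# A minimal sketch of voxel decomposition, assuming NMF as a stand-in
# for the study's custom method. All sizes and data here are illustrative.
import numpy as np
from sklearn.decomposition import NMF

n_voxels, n_sounds, n_components = 5000, 165, 6

# Hypothetical data: each row is one voxel's response to all 165 sounds.
rng = np.random.default_rng(0)
voxel_responses = rng.random((n_voxels, n_sounds))

# Factor the voxel-by-sound matrix so that, approximately,
# voxel_responses = voxel_weights @ component_profiles.
model = NMF(n_components=n_components, init="nndsvda", max_iter=500)
voxel_weights = model.fit_transform(voxel_responses)  # (n_voxels, 6)
component_profiles = model.components_                # (6, n_sounds)

# Each row of component_profiles is one inferred population's response
# to the 165 sounds, so it can be inspected to see, for example,
# whether a component responds mostly to the music clips.
```

In this framing, each voxel's measured response is a weighted sum of a few underlying component responses, which is how distinct neural populations can be recovered even when individual voxels mix them together.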
Those four acoustically responsive populations overlap with regions of “primary” auditory cortex, which performs the first stage of cortical sound processing. The speech- and music-selective neural populations lie beyond this primary region.
“We think this provides evidence that there’s a hierarchy of processing where there are responses to relatively simple acoustic dimensions in this primary auditory area. That’s followed by a second stage of processing that represents more abstract properties of sound related to speech and music,” says postdoc Sam Norman-Haignere, PhD ’15, lead author of the study, published in Neuron.
Nancy Kanwisher ’80, PhD ’86, a professor of cognitive neuroscience and an author of the study, says that even though music-selective responses exist in the brain, that doesn’t mean they reflect an innate brain system. “An important question for the future will be how this system arises in development: how early it is found in infancy or childhood, and how dependent it is on experience,” she says.