77 Mass Ave

Musical Neurons

Researchers zero in on a neural population that responds to music.
February 23, 2016

For the first time, MIT neuroscientists have identified a neural population in the human auditory cortex that responds specifically to music, but not to speech or other environmental sounds.

Whether such a population of neurons exists has been the subject of widespread speculation, says Josh McDermott, an assistant professor of neuroscience at MIT. “One of the core debates surrounding music is to what extent it has dedicated mechanisms in the brain and to what extent it piggybacks off of mechanisms that primarily serve other functions,” he says.

Using functional magnetic resonance imaging (fMRI), McDermott and colleagues scanned the brains of 10 human subjects listening to 165 sounds, including different types of speech and music as well as everyday sounds such as footsteps, a car engine starting, and a telephone ringing.

Mapping the auditory system has proved difficult because fMRI, which measures blood flow as an index of neural activity, lacks fine spatial resolution. In fMRI, “voxels”—the smallest unit of measurement—can reflect the response of millions of neurons.

To tease apart these responses, the researchers used a technique that models each voxel as a mixture of multiple underlying neural responses. This revealed six populations of neurons—the music-selective population, a set of neurons that respond selectively to speech, and four sets that respond to other acoustic properties such as pitch and frequency.
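The idea of expressing each voxel as a mixture of shared response components can be sketched with a toy non-negative matrix factorization. This is an illustrative stand-in, not the study's actual algorithm: the matrix sizes, the noise level, and the multiplicative-update NMF method are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 100 voxels, the article's 165 sounds,
# and the 6 underlying neural populations the analysis revealed.
n_voxels, n_sounds, n_components = 100, 165, 6

# Simulate ground-truth component response profiles (components x sounds)
# and non-negative voxel weights, then build a noisy voxel x sound matrix.
true_R = rng.random((n_components, n_sounds))
true_W = rng.random((n_voxels, n_components))
V = true_W @ true_R + 0.01 * rng.random((n_voxels, n_sounds))

# Factor V ~ W @ R with non-negativity constraints, using classic
# Lee-Seung multiplicative updates as a stand-in decomposition method.
W = rng.random((n_voxels, n_components)) + 1e-3
R = rng.random((n_components, n_sounds)) + 1e-3
eps = 1e-9
for _ in range(500):
    R *= (W.T @ V) / (W.T @ W @ R + eps)   # update component profiles
    W *= (V @ R.T) / (W @ R @ R.T + eps)   # update voxel weights

# A small relative error means a few shared components explain
# most of the structure across all voxels.
err = np.linalg.norm(V - W @ R) / np.linalg.norm(V)
print(f"relative reconstruction error: {err:.3f}")
```

Each row of `R` plays the role of one inferred population's response profile across the 165 sounds; in the study, one such profile responded selectively to the music clips.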

Those four acoustically responsive populations overlap with regions of “primary” auditory cortex, which performs the first stage of cortical sound processing. The speech- and music-selective neural populations lie beyond this primary region.

“We think this provides evidence that there’s a hierarchy of processing where there are responses to relatively simple acoustic dimensions in this primary auditory area. That’s followed by a second stage of processing that represents more abstract properties of sound related to speech and music,” says postdoc Sam Norman-Haignere, PhD ’15, lead author of the study, published in Neuron.

Nancy Kanwisher ’80, PhD ’86, a professor of cognitive neuroscience and an author of the study, says that even though music-selective responses exist in the brain, that doesn’t mean they reflect an innate brain system. “An important question for the future will be how this system arises in development: how early it is found in infancy or childhood, and how dependent it is on experience,” she says.

Illustration by Rose Wong