For the first time, MIT neuroscientists have identified a neural population in the human auditory cortex that responds specifically to music, but not to speech or other environmental sounds.
Whether such a population of neurons exists has been the subject of widespread speculation, says Josh McDermott, an assistant professor of neuroscience at MIT. “One of the core debates surrounding music is to what extent it has dedicated mechanisms in the brain and to what extent it piggybacks off of mechanisms that primarily serve other functions,” he says.
Using functional magnetic resonance imaging (fMRI), McDermott and colleagues scanned the brains of 10 human subjects listening to 165 sounds, including different types of speech and music as well as everyday sounds such as footsteps, a car engine starting, and a telephone ringing.
Mapping the auditory system has proved difficult because fMRI, which measures blood flow as an index of neural activity, lacks fine spatial resolution. In fMRI, “voxels”—the smallest unit of measurement—can reflect the response of millions of neurons.
To tease apart these responses, the researchers used a technique that models each voxel as a mixture of multiple underlying neural responses. This revealed six populations of neurons—the music-selective population, a set of neurons that respond selectively to speech, and four sets that respond to other acoustic properties such as pitch and frequency.
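The idea of modeling each voxel as a weighted mixture of a small number of underlying response profiles can be sketched with standard matrix factorization. This is a minimal illustration, not the study's actual method (the researchers used their own voxel-decomposition algorithm); here non-negative matrix factorization from scikit-learn stands in, and all data are simulated.

```python
# Hypothetical sketch: recover component response profiles from simulated
# fMRI voxel data. Each voxel's response to each sound is modeled as a
# non-negative weighted sum of a few underlying neural components.
# NMF is a stand-in for the paper's custom voxel-decomposition method.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_sounds, n_voxels, n_components = 165, 1000, 6  # 165 sounds, 6 components as in the study

# Simulated ground truth: component response profiles (sounds x components)
# and per-voxel mixing weights (components x voxels), both non-negative.
true_profiles = rng.gamma(2.0, 1.0, size=(n_sounds, n_components))
true_weights = rng.gamma(2.0, 1.0, size=(n_components, n_voxels))

# Observed data: every voxel mixes the underlying components.
voxel_responses = true_profiles @ true_weights

# Factor the voxel data back into profiles and weights.
model = NMF(n_components=n_components, init="nndsvda",
            random_state=0, max_iter=500)
profiles = model.fit_transform(voxel_responses)  # (165, 6) response profiles
weights = model.components_                      # (6, 1000) voxel weights

# Relative reconstruction error; small when the mixture model fits.
err = (np.linalg.norm(voxel_responses - profiles @ weights)
       / np.linalg.norm(voxel_responses))
print(profiles.shape, weights.shape, round(err, 3))
```

In this framing, a "music-selective" component would be one whose recovered response profile is high for the music sounds and low elsewhere, with its voxel weights showing where in the cortex that component is expressed.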
Those four acoustically responsive populations overlap with regions of “primary” auditory cortex, which performs the first stage of cortical sound processing. The speech- and music-selective neural populations lie beyond this primary region.
“We think this provides evidence that there’s a hierarchy of processing where there are responses to relatively simple acoustic dimensions in this primary auditory area. That’s followed by a second stage of processing that represents more abstract properties of sound related to speech and music,” says postdoc Sam Norman-Haignere, PhD ’15, lead author of the study, published in Neuron.
Nancy Kanwisher ’80, PhD ’86, a professor of cognitive neuroscience and an author of the study, says that even though music-selective responses exist in the brain, that doesn’t mean they reflect an innate brain system. “An important question for the future will be how this system arises in development: how early it is found in infancy or childhood, and how dependent it is on experience,” she says.