MIT News: 77 Mass Ave

Learning to hear

Only a few types of sound reach babies in the womb, but that may help them learn to process auditory input as they grow.

August 24, 2022
[Image: close-up view of a sleeping baby, seen from the side. Getty Images]

Human fetuses can begin to hear at around 20 weeks of gestation, but only low-frequency sounds penetrate the muffled environment of the womb. A new study suggests that this is a feature, not a bug. 

Using simple computer models of human auditory processing, professor of vision and computational neuroscience Pawan Sinha, SM ’92, PhD ’95, and colleagues showed that the models performed better on tasks such as identifying emotions from a voice clip when their training input was initially limited to these low frequencies.
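The womb-like input described above can be approximated by low-pass filtering audio before it is fed to a model. A minimal sketch of that preprocessing step, assuming an FFT-based filter and an illustrative 500 Hz cutoff (the article does not specify the team's actual filter or cutoff):

```python
import numpy as np

def low_pass(signal, sample_rate, cutoff_hz):
    """Zero out all frequency components above cutoff_hz, mimicking
    the muffled, low-frequency-only environment of the womb."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    spectrum[freqs > cutoff_hz] = 0.0  # discard high frequencies
    return np.fft.irfft(spectrum, n=len(signal))

# Example: a 200 Hz tone (kept) mixed with a 3 kHz tone (removed)
sr = 16000                      # sample rate in Hz
t = np.arange(sr) / sr          # one second of samples
mixed = np.sin(2 * np.pi * 200 * t) + np.sin(2 * np.pi * 3000 * t)
filtered = low_pass(mixed, sr, cutoff_hz=500)
```

In a curriculum of the kind the study suggests, early training examples would pass through a filter like this, with the cutoff raised (or the filter removed) as training progresses.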

Along with a previous study by the same team, which showed that early exposure to blurry images of faces improves computer models’ subsequent performance in face recognition, the findings suggest that initially receiving low-quality sensory input may be key to some aspects of brain development, especially when it comes to absorbing information over larger areas or longer periods of time.

“Instead of thinking of the poor quality of the input as a limitation that biology is imposing on us, this work takes the standpoint that perhaps nature is being clever and giving us the right kind of impetus to develop the mechanisms that later prove to be very beneficial when we are asked to deal with challenging recognition tasks,” Sinha says.

In practical terms, the new findings suggest that babies born prematurely may benefit from being exposed to lower-frequency sounds rather than the full spectrum that they now hear in neonatal intensive care units, the researchers say.
