
Researchers in Japan and Canada and at Stanford University have designed a computer program that, by listening to samples of speech, can identify different categories of sounds without any human guidance. The findings shed light on how human infants learn language.

“In the past, there has been a strong tendency to think that language is very special and that the mechanisms involved are predetermined by evolutionary constraints, and are not very general,” says James McClelland, a cognitive neuroscientist at Stanford University who worked on the project. “What we are saying is, Look, we can use a very general approach and do quite well learning aspects of language.”

The researchers built the program by incorporating machine-learning techniques into a neural network model. They then recorded mothers talking to their babies: Canadian mothers speaking English and Japanese mothers speaking Japanese. From these recordings they extracted the acoustic parameters of the vowels the mothers used and presented the program with samples drawn from the mothers' distributions of vowel sounds. The researchers tested four vowel sounds.
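The article does not spell out the algorithm, but the general setup it describes, unsupervised discovery of vowel categories from acoustic measurements, can be sketched as mixture-model clustering over formant values. The sketch below is a minimal illustration of that idea, not the researchers' actual model; the synthetic data, the two-formant feature representation, and the use of scikit-learn's Gaussian mixture are all assumptions made for demonstration.

```python
# Minimal sketch of unsupervised vowel-category discovery (illustrative only;
# not the model described in the PNAS paper). Assumes each vowel token is
# represented by its first two formant frequencies (F1, F2) in hertz, and that
# categories can be recovered by fitting a Gaussian mixture with no labels.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-in for the mothers' vowel tokens: four clusters of (F1, F2)
# values, roughly where four distinct vowels might sit. Real input would be
# formants measured from the recorded infant-directed speech.
centers = np.array([[300, 2300],
                    [500, 1900],
                    [750, 1200],
                    [350,  900]])
tokens = np.vstack([c + rng.normal(scale=[40, 120], size=(200, 2)) for c in centers])

# Fit a four-component mixture with no category labels: the program only
# "hears" the distribution of vowel sounds and must find structure on its own.
model = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
labels = model.fit_predict(tokens)

# Each discovered component plays the role of a learned vowel category.
for k, mean in enumerate(model.means_):
    count = np.sum(labels == k)
    print(f"category {k}: mean F1={mean[0]:.0f} Hz, F2={mean[1]:.0f} Hz, {count} tokens")
```

In a setup like this, the learner is never told how many vowels exist or which token belongs to which vowel; the categories emerge from the statistics of the input alone, which is the point the researchers make about infant learning.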

The program grouped the sounds it heard into only a few vowel categories, sorting them into the four categories more than 80 percent of the time. The report appears in the current issue of the Proceedings of the National Academy of Sciences.

The next step is to determine whether the program can handle larger ensembles of sounds in a language, says McClelland. “That will definitely push the limits of the model, and from there we can gain even further insight into how the brain learns.”
