
Is Language Innate or Learned?

An international team of researchers has created a computer program that leads them to believe the answer is the latter.

Researchers in Japan, Canada, and at Stanford University have designed a novel computer program that, by listening to samples of speech, can identify different categories of sounds without any human guidance. These findings shed light on how human infants learn language.

“In the past, there has been a strong tendency to think that language is very special and that the mechanisms involved are predetermined by evolutionary constraints, and are not very general,” says James McClelland, a cognitive neuroscientist at Stanford University who worked on the project. “What we are saying is, Look, we can use a very general approach and do quite well learning aspects of language.”


The researchers developed the program by incorporating features of machine learning into a neural network model. They then recorded mothers talking to their babies: Canadian mothers speaking English and Japanese mothers speaking Japanese. From these recordings they extracted the acoustic parameters of the mothers' vowel sounds and presented the program with samples drawn from the mothers' distributions of vowels. They tested four vowel sounds.
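The article does not spell out the learning algorithm, but the general idea, discovering vowel categories from unlabeled acoustic measurements, can be illustrated with a short sketch. Below, a Gaussian mixture model (fitted with scikit-learn, on synthetic formant values rather than the study's recordings) clusters vowel tokens into four categories with no human guidance; it is an illustration of unsupervised category learning, not the authors' model.

```python
# Illustration only: unsupervised clustering of vowel measurements
# (F1/F2 formant frequencies) into categories, with no labels given.
# The mixture model and the synthetic data below are assumptions made
# for this sketch; they are not the authors' algorithm or recordings.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic "vowel tokens": each row is (F1, F2) in Hz, drawn from four
# hypothetical vowel categories with talker-to-talker variability.
centers = np.array([
    [300, 2300],   # roughly /i/-like
    [700, 1200],   # roughly /a/-like
    [400, 800],    # roughly /u/-like
    [550, 1800],   # roughly /e/-like
])
tokens = np.vstack([
    c + rng.normal(scale=(60.0, 120.0), size=(250, 2)) for c in centers
])

# Fit a mixture of Gaussians without saying which token belongs to which vowel.
gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
labels = gmm.fit_predict(tokens)

print("Discovered category means (F1, F2):")
print(np.round(gmm.means_))
```

With well-separated synthetic categories like these, the fitted means land near the four generating vowels; real infant-directed speech is far noisier.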


The program grouped the sounds it heard into only a few vowel categories, sorting the vowel sounds into the four categories more than 80 percent of the time. The report appears in the current issue of the Proceedings of the National Academy of Sciences.
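The article does not say how that figure was computed. One common way to score unsupervised clusters against known vowel categories is to map each discovered cluster to its best-matching category and count the fraction of correctly grouped tokens; the sketch below does this with hypothetical labels, not the study's data.

```python
# Sketch of one common scoring approach (an assumption, not the authors'
# procedure): match discovered clusters to true vowel categories with the
# Hungarian algorithm, then report the fraction of correctly grouped tokens.
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(true_labels, cluster_labels):
    """Best one-to-one match between clusters and categories, then accuracy."""
    true_labels = np.asarray(true_labels)
    cluster_labels = np.asarray(cluster_labels)
    n = max(true_labels.max(), cluster_labels.max()) + 1
    # Contingency table: rows = clusters, columns = true categories.
    counts = np.zeros((n, n), dtype=int)
    for c, t in zip(cluster_labels, true_labels):
        counts[c, t] += 1
    rows, cols = linear_sum_assignment(-counts)  # maximize matched counts
    return counts[rows, cols].sum() / len(true_labels)

# Example with hypothetical labels for 10 tokens:
true = [0, 0, 1, 1, 2, 2, 3, 3, 3, 0]
found = [2, 2, 0, 0, 3, 3, 1, 1, 1, 2]   # same grouping, different names
print(clustering_accuracy(true, found))  # -> 1.0
```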

The next step is to determine whether the program can handle larger ensembles of sounds in a language, says McClelland. “That will definitely push the limits of the model, and from there we can gain even further insight into how the brain learns.”
