Researchers in Japan, in Canada, and at Stanford University have designed a novel computer program that, by listening to samples of speech, was able to identify different categories of sounds without any human guidance. These findings shed light on how human infants learn language.
“In the past, there has been a strong tendency to think that language is very special and that the mechanisms involved are predetermined by evolutionary constraints, and are not very general,” says James McClelland, a cognitive neuroscientist at Stanford University who worked on the project. “What we are saying is, Look, we can use a very general approach and do quite well learning aspects of language.”
The researchers developed the program's software by incorporating features of machine learning into a neural network model. They then recorded the speech of mothers talking to their babies: Canadian mothers speaking English, and Japanese mothers speaking Japanese. The researchers extracted the parameters of the vowel sounds the mothers were using and fed the program samples drawn from the mothers' distributions of vowel sounds. The researchers tested four vowel sounds.
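The article does not describe the model's internals, but the general idea of discovering sound categories from an unlabeled distribution of samples can be sketched with a simple clustering algorithm. The sketch below is purely illustrative: it uses k-means on synthetic two-dimensional formant values (F1, F2), standing in for real vowel measurements, rather than the researchers' actual neural network model.

```python
# Hypothetical sketch: unsupervised discovery of vowel categories from
# unlabeled samples. The real study used a neural network model; k-means
# on synthetic formant data stands in only for the general principle.
import random

def kmeans(points, k, iters=50):
    # Crude deterministic initialization: centers at the samples with
    # the lowest and highest second-formant (F2) values.
    centers = [min(points, key=lambda p: p[1]), max(points, key=lambda p: p[1])]
    for _ in range(iters):
        # Assign each sample to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                  + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # Move each center to the mean of its assigned samples.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    labels = [min(range(k),
                  key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
              for p in points]
    return centers, labels

# Synthetic formant samples (F1, F2 in Hz), loosely resembling the
# vowels /i/ and /a/; the values here are invented for illustration.
rng = random.Random(1)
vowel_i = [(rng.gauss(280, 30), rng.gauss(2250, 100)) for _ in range(50)]
vowel_a = [(rng.gauss(730, 30), rng.gauss(1090, 100)) for _ in range(50)]
samples = vowel_i + vowel_a

centers, labels = kmeans(samples, k=2)
# With well-separated vowels, each synthetic category maps to one cluster,
# so the program recovers the two categories without any labels.
```

The key parallel to the study is that the algorithm is never told how many categories exist in the speech or which sample belongs to which vowel; structure in the distribution of sounds alone drives the grouping.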