For years, cognitive scientists have described the human brain as operating like a computer when it comes to language, meaning it interprets letters and sounds in a binary, one-step-at-a-time fashion. It’s either a Labrador or a laptop.
But a recent study, led by Cornell psycholinguist and associate professor Michael Spivey, suggests that the mind may be comprehending language in a more fluid way.
“Our results have shown that the various parts of the brain that participate in language processing pass their continuous, partially activated results on to the next stage, rather than waiting until each is done to share information,” says Spivey. “It’s a lot more like a distributed neural network.”
Distributed networks are a familiar concept to computer users as well. But the distributed neural networks found in biological systems process information (in this case, language) in decidedly different ways from artificial distributed networks. Whereas computers still perform calculations in a linear order, the human brain can make a continuous series of computations at the same time, passing information back and forth in a non-linear, self-organizing manner.
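The contrast between the two views can be sketched in a few lines of code. This is a purely illustrative toy (not the study's model): in the serial version, a second processing stage receives nothing until the first stage finishes; in the cascaded version, partial activations flow forward on every timestep, so the second stage starts working before the first is done. All names and parameters here are invented for the illustration.

```python
# Toy contrast between serial and cascaded processing stages.
# Activations grow toward a threshold at a fixed rate per timestep.

def serial(input_strength, steps=10, threshold=1.0, rate=0.2):
    """Stage 2 receives nothing until stage 1 crosses its threshold."""
    s1 = s2 = 0.0
    history = []
    for _ in range(steps):
        s1 = min(threshold, s1 + rate * input_strength)
        if s1 >= threshold:                 # stage 1 must finish first
            s2 = min(threshold, s2 + rate * s1)
        history.append((round(s1, 2), round(s2, 2)))
    return history

def cascaded(input_strength, steps=10, threshold=1.0, rate=0.2):
    """Stage 2 receives stage 1's partial activation immediately."""
    s1 = s2 = 0.0
    history = []
    for _ in range(steps):
        s1 = min(threshold, s1 + rate * input_strength)
        s2 = min(threshold, s2 + rate * s1)  # partial result passed on
        history.append((round(s1, 2), round(s2, 2)))
    return history
```

Running both with the same input shows the difference: after three timesteps the serial model's second stage is still at zero, while the cascaded model's second stage is already partially active, which is the "continuous, partially activated results" idea in miniature.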
Ironically, Spivey used computer modeling, as well as one of the most common PC accessories, a mouse, to demonstrate that human language comprehension differs from computer processing.
In the study, published in the Proceedings of the National Academy of Sciences in late June, 42 undergraduates followed instructions to click a mouse on one of two pictures on a computer monitor. Sometimes the images were different-sounding objects, such as “candle” and “jacket.” At other times, they were similar, such as “candle” and “candy.”
Researchers found that when the objects’ names sounded quite different, the students’ mouse movements followed a straight-line trajectory to the correct picture. When the words sounded similar, however, the trajectories were slower and arced. In the latter cases, Spivey hypothesized, subjects begin processing a word at its first sound, then continue in an ambiguous state as they move the mouse.
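One common way to turn "straight versus arced" into a number is to measure how far a trajectory bows away from the straight line joining its start and end points. The sketch below is only illustrative; the study's actual trajectory analysis may have differed, and the sample coordinates are invented.

```python
import math

def max_deviation(points):
    """Largest perpendicular distance of any point on the trajectory
    from the straight line joining its first and last points."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    if length == 0:
        return 0.0
    # Perpendicular distance via the 2-D cross-product formula.
    return max(abs(dy * (x - x0) - dx * (y - y0)) / length
               for (x, y) in points)

# Hypothetical trajectories: one straight, one bowed toward a competitor.
straight = [(0, 0), (1, 1), (2, 2), (3, 3)]
arced = [(0, 0), (1, 2), (2, 3), (3, 3)]
print(max_deviation(straight))  # 0.0
print(max_deviation(arced))     # noticeably greater than zero
```

On this measure, a trajectory that heads directly to the target scores zero, while one that drifts toward the similar-sounding competitor before settling on the target scores higher, matching the arced paths the researchers observed for "candle" versus "candy."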
If the linear computer model of language comprehension were valid, the researchers reasoned, subjects would take one of three actions: at some point they would recognize the word and decide on its meaning, making a straight mouse movement; they would mistake one similar-sounding word for the other and then correct themselves, as discrete packets of information were processed; or they would wait to move until the entire word was understood.