For years, cognitive scientists have described the human brain as operating like a computer when it comes to language, meaning it interprets letters and sounds in a binary, one-step-at-a-time fashion. It’s either a Labrador or a laptop.
But a recent study, led by Cornell psycholinguist and associate professor Michael Spivey, suggests that the mind may be comprehending language in a more fluid way.
“Our results have shown that the various parts of the brain that participate in language processing are passing their continuous, partially activated results on to each next stage, not waiting until they’re done to share information,” says Spivey. “It’s a lot more like a distributed neural network.”
Distributed networks are a familiar concept to computer users as well. But distributed neural networks found in biological systems process information (in this case, language) in decidedly different ways from artificial distributed networks. Whereas computers still perform calculations in a linear order, the human brain can make a continuous series of computations at the same time, passing information back and forth in a non-linear, self-organizing manner.
Ironically, Spivey used computer modeling, as well as the most common PC accessory – a mouse – to demonstrate that human language comprehension is different from computer processing.
In the study, published in the Proceedings of the National Academy of Sciences in late June, 42 undergraduates followed instructions to click a mouse on one of two pictures on a computer monitor. Sometimes the images were different-sounding objects, such as candle and jacket. At other times, they were similar, such as candle and candy.
Researchers found that when the objects’ names were quite different, the mouse movements of the students followed a straight-line trajectory to the correct picture. When the words were similar, however, the trajectories were slower and arced. In the latter cases, Spivey hypothesized, subjects began processing a word at the first sound, then continued in an ambiguous state as they moved the mouse.
If the linear computer model of language comprehension were valid, the researchers expected that subjects would perform one of three actions: at some point, they’d recognize the word and decide on its meaning, making a linear mouse movement; they would make a mistake between similar-sounding words and correct themselves, as packets of information were processed; or they would wait to move until the entire word was understood.
Instead, the curving mouse trajectories seemed to reveal an ongoing process, one that Spivey sometimes compares to a state of quantum superposition – in this case, a mind existing in a “grey” area.
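The arced-versus-straight distinction the researchers observed can be made concrete with a simple geometric measure. The sketch below is purely illustrative – it is not the study’s actual analysis – and computes one common trajectory statistic: the maximum perpendicular deviation of a recorded mouse path from the straight line between its start and end points. The sample coordinates are invented.

```python
# Illustrative sketch (not the study's analysis): quantify how much a
# mouse trajectory arcs away from the straight start-to-end line.
import math

def max_deviation(points):
    """Maximum perpendicular distance of any trajectory point from the
    straight line joining the first and last recorded point."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    if length == 0:
        return 0.0
    # Perpendicular distance from each (px, py) to the start-end line.
    return max(abs(dy * (px - x0) - dx * (py - y0)) / length
               for px, py in points)

# Invented example paths: one direct, one that arcs toward a competitor.
straight = [(0, 0), (1, 1), (2, 2), (3, 3)]
curved   = [(0, 0), (1, 2), (2, 3), (3, 3)]
print(max_deviation(straight))  # 0.0
print(max_deviation(curved))    # larger value: the path bows outward
```

A straight-line movement scores zero, while a trajectory drawn partway toward the similar-sounding competitor scores higher – the kind of graded signal the study interprets as evidence of continuous, partial activation.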
While Spivey, who co-authored the study with Marc Grosjean of the University of Dortmund, Germany, and Gunther Knoblich of Rutgers University, believes their latest work could eventually lead to a new model for understanding cognition, he admits that it’s a small step.
And a hot-button topic.
“It’s quite a big debate right now [over the best model of language acquisition],” he says. “There are a lot of traditional cognitive scientists who are essentially digging in their heels.”
Jim Magnuson, an assistant psychology professor at the University of Connecticut and a researcher at the Haskins Laboratories in New Haven, CT, which specializes in the biological bases of speech and language, thinks their work shows promise. But he also lays out the position of some critics: that the mouse movement “is potentially much more under conscious control than an eye movement.”
Measuring eye movements is the traditional method of cognitive studies – and a costly and complex one. As Magnuson points out, humans average two to four eye movements per second, “and you are not aware of most of them, but with a mouse movement, it’s quite intended.”
Rather than discounting Spivey’s use of a mouse, though, Magnuson sees it as a potential methodological innovation – one that could also lead to less expensive experiments.
“Eye tracking has been an important tool in usability testing for websites,” Magnuson says. “Now you could end up using mouse tracking much the same way.”
Finally, besides hinting at new understandings of human cognition and new kinds of computer-assisted research and design, Spivey’s study might have implications for a field somewhere in the middle: artificial intelligence. As Spivey points out, biological neural networks might be a better model for creating AI applications, such as language-recognition systems, than binary-based computers.
“If you want to invent a mind, you probably don’t want to be using a computer format,” Spivey says.