
Musings from a Mouse

A study that used a computer mouse in language experiments may have implications for fields as seemingly diverse as cognitive theory and website design.
August 15, 2005

For years, cognitive scientists have described the human brain as operating like a computer when it comes to language: it interprets letters and sounds in a binary, one-step-at-a-time fashion. A word is either a Labrador or a laptop, with nothing in between.

But a recent study, led by Cornell psycholinguist and associate professor Michael Spivey, suggests that the mind comprehends language in a more fluid way.

“Our results have shown that the various parts of the brain that participate in language processing are passing their continuous, partially activated results onto each next stage, not waiting till it’s done to share information,” says Spivey. “It’s a lot more like a distributed neural network.”

Distributed networks are a familiar concept to computer users as well. But the distributed neural networks found in biological systems process information, in this case language, in decidedly different ways from artificial distributed networks. Whereas computers still perform their calculations in a fixed, linear order, the human brain carries out many computations simultaneously and continuously, passing information back and forth in a nonlinear, self-organizing manner.

Ironically, Spivey used computer modeling, along with the most common PC accessory, a mouse, to demonstrate that human language comprehension differs from computer processing.

In the study, published in the Proceedings of the National Academy of Sciences in late June, 42 undergraduates followed spoken instructions to use a mouse to click on one of two pictures shown on a computer monitor. Sometimes the pictures depicted objects with very different-sounding names, such as “candle” and “jacket.” At other times the names sounded similar, such as “candle” and “candy.”

The researchers found that when the objects’ names sounded quite different, the students’ mouse movements followed a straight-line trajectory to the correct picture. When the names sounded similar, however, the trajectories were slower and arced. In the latter cases, Spivey hypothesized, subjects began processing the word at its first sound and then remained in an ambiguous state as they moved the mouse.
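
To give a sense of what such a measurement can look like, here is a brief sketch in Python of one simple way to quantify how much a recorded mouse path arcs: its maximum deviation from the straight line joining the start and end points. This is an illustration only, not necessarily the measure the researchers used, and the example trajectories are made up.

```python
import math

def max_deviation(path):
    """Largest perpendicular distance of any recorded point from the straight
    line joining the path's start and end points. Straight movements score
    near zero; movements that arc toward the competing picture score higher."""
    (x0, y0), (x1, y1) = path[0], path[-1]
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    if length == 0:
        return 0.0
    return max(abs(dy * (x - x0) - dx * (y - y0)) / length for x, y in path)

# Hypothetical screen-coordinate trajectories, purely for illustration
straight = [(0, 0), (5, 5), (10, 10)]
arced = [(0, 0), (7, 2), (10, 10)]
print(max_deviation(straight))  # 0.0
print(max_deviation(arced))     # about 3.5: the path bows away from the straight line
```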

If the linear computer model of language comprehension were valid, the researchers expected, subjects would do one of three things: at some point they would recognize the word, settle on its meaning, and make a straight mouse movement; they would confuse the similar-sounding words and then correct themselves as “packets” of information were processed; or they would wait to move until the entire word was understood.

Instead, the curving mouse trajectories seemed to reveal an ongoing process, one that Spivey sometimes compares to a state of quantum superposition – in this case, a mind existing in a “grey” area.
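
To make that “grey” area concrete, the toy sketch below tracks a graded activation for each candidate word as the sounds of “candle” arrive one by one. It is an illustration, not the study’s model: the words are spelled with letters rather than real speech sounds, and the scoring rule is invented. Still, it shows the basic idea: a similar-sounding competitor like “candy” stays nearly tied with “candle” until the disambiguating sound arrives, while an unrelated word like “jacket” drops out at once, which is one way to picture why the similar pairs produced slower, arced mouse movements.

```python
def activations(heard, candidates):
    """Toy cascaded activation: score each candidate by how much of the input
    heard so far it matches, then normalize so the candidates compete."""
    scores = {}
    for word in candidates:
        match = 0
        for a, b in zip(heard, word):
            if a != b:
                break
            match += 1
        scores[word] = match / len(heard)
    total = sum(scores.values()) or 1.0
    return {w: round(s / total, 2) for w, s in scores.items()}

candidates = ["candle", "candy", "jacket"]
for t in range(1, len("candle") + 1):
    heard = "candle"[:t]
    print(heard.ljust(6), activations(heard, candidates))

# "candle" and "candy" stay nearly tied through "cand"; "jacket" is gone after
# the first sound. An all-or-nothing model would allow no such in-between state.
```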

While Spivey, who co-authored the study with Marc Grosjean of the University of Dortmund, Germany, and Gunther Knoblich of Rutgers University, believes their latest work could eventually lead to a new model for understanding cognition, he admits that it’s a small step.

And a hot-button topic.

“It’s quite a big debate right now [over the best model of language acquisition],” he says. “There are a lot of traditional cognitive scientists who are essentially digging in their heels.”

Jim Magnuson, an assistant psychology professor at the University of Connecticut and a researcher at Haskins Laboratories in New Haven, CT, which specializes in the biological bases of speech and language, thinks the work shows promise. But he also lays out the position of some critics: that a mouse movement “is potentially much more under conscious control than an eye movement.”

Measuring eye movements is the traditional method in such cognitive studies, and a costly and complex one. As Magnuson points out, humans make an average of two to four eye movements per second, “and you are not aware of most of them, but with a mouse movement, it’s quite intended.”

Rather than discounting Spivey’s use of a mouse, though, Magnuson feels that it could be a “methodological innovation,” one that also opens the door to less expensive experiments.

“Eye tracking has been an important tool in usability testing for websites,” Magnuson says. “Now you could end up using mouse tracking much the same way.”

Finally, besides hinting at new understandings of human cognition and new kinds of computer-assisted research and design, Spivey’s study might have implications for a field somewhere in the middle: artificial intelligence. As Spivey points out, biological neural networks might be a better model than binary-based computers for building AI applications such as language-recognition systems.

“If you want to invent a mind, you probably don’t want to be using a computer format,” Spivey says.
