For years, cognitive scientists have described the human brain as operating like a computer when it comes to language, meaning it interprets letters and sounds in a binary, one-step-at-a-time fashion. It’s either a Labrador or a laptop.

But a recent study, led by Cornell psycholinguist and associate professor Michael Spivey, suggests that the mind may be comprehending language in a more fluid way.

“Our results have shown that the various parts of the brain that participate in language processing are passing their continuous, partially activated results onto each next stage, not waiting till it’s done to share information,” says Spivey. “It’s a lot more like a distributed neural network.”

Distributed networks are a familiar concept to computer users as well. But distributed neural networks found in biological systems process information (in this case, language) in decidedly different ways than artificial distributed networks. Whereas computers still perform calculations in a linear order, the human brain can make a continuous series of computations at the same time, passing information back and forth in a non-linear, self-organizing manner.
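The idea that candidate words stay partially active as each sound arrives, rather than being resolved one discrete step at a time, can be illustrated with a toy sketch. This is a hypothetical illustration, not the model used in the study; the vocabulary and the equal-sharing rule are assumptions made for clarity.

```python
def partial_activations(heard_so_far, vocabulary):
    """Return an activation level for every word still consistent with
    the sounds heard so far. Competitors share activation equally until
    the incoming input disambiguates them -- a crude stand-in for
    continuous, parallel lexical competition."""
    matches = [w for w in vocabulary if w.startswith(heard_so_far)]
    if not matches:
        return {}
    share = 1.0 / len(matches)
    return {w: share for w in matches}

vocab = ["candle", "candy", "jacket"]

# After hearing "cand...", both similar-sounding words remain active:
print(partial_activations("cand", vocab))   # {'candle': 0.5, 'candy': 0.5}

# One more sound settles the competition:
print(partial_activations("candl", vocab))  # {'candle': 1.0}
```

On this view, downstream processes (such as guiding the hand) can act on the 50/50 state immediately instead of waiting for the word to finish, which is what produces ambiguous, in-between behavior.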

Ironically, Spivey used computer modeling, as well as the commonest PC accessory – a mouse – to demonstrate that human language comprehension is different from computer processing.

In the study, published in the Proceedings of the National Academy of Sciences in late June, 42 undergraduates were instructed to use a mouse to click on one of two pictures on a computer monitor. Sometimes the pictures showed objects with different-sounding names, such as “candle” and “jacket.” At other times, the names were similar, such as “candle” and “candy.”

Researchers found that when the objects’ names were quite different, the mouse movements of the students followed a straight-line trajectory to the correct picture. When the words were similar, however, the trajectories were slower and arced. In the latter cases, Spivey hypothesized, subjects began processing the word at its first sound and then continued in an ambiguous state as they moved the mouse.

If the linear computer model of language comprehension were valid, the researchers reasoned, subjects would do one of three things: at some point they’d recognize the word, decide on its meaning, and make a straight-line mouse movement; they’d briefly mistake one similar-sounding word for the other and correct themselves, as discrete “packets” of information were processed; or they’d wait to move until the entire word was understood.
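The study’s actual analysis appears in the PNAS paper; as a minimal sketch of how one might quantify whether a trajectory is straight or arced, a standard mouse-tracking measure is the maximum perpendicular deviation of the path from the straight line joining its start and end points. The coordinates below are invented for illustration.

```python
import math

def max_deviation(path):
    """Maximum perpendicular distance of a recorded mouse path from the
    straight line joining its first and last points -- a common way to
    quantify how much a trajectory 'arcs' toward a competitor item."""
    (x0, y0), (x1, y1) = path[0], path[-1]
    length = math.hypot(x1 - x0, y1 - y0)
    return max(
        abs((x1 - x0) * (y0 - y) - (x0 - x) * (y1 - y0)) / length
        for (x, y) in path
    )

# A direct movement to the target deviates not at all; an arced
# movement (as when 'candy' competes with 'candle') deviates measurably.
straight = [(0, 0), (1, 1), (2, 2), (3, 3)]
arced    = [(0, 0), (1, 2), (2, 3), (3, 3)]

print(max_deviation(straight))  # 0.0
print(max_deviation(arced))     # ~0.707
```

A continuously arced path with deviation growing smoothly over the movement fits graded, parallel activation; a sharp mid-flight correction would instead suggest a discrete re-decision.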
