MIT Technology Review



So far, Guenther and Kennedy have programmed the synthesizer to play back sounds within 50 milliseconds (that is, almost instantaneously) of when Ramsey first “voiced” them in his head. This audio playback has allowed Ramsey to practice mentally voicing vowels: he hears his initial “utterance,” then adjusts his mental sound representation to improve the next playback. Jonathan Brumberg, a PhD student in Guenther’s lab, says that while each trial has been slow going, since it takes great effort on Ramsey’s part, the results have been promising. “At this point, he can do these vowel sounds pretty well,” says Brumberg. “We’re now fairly confident the same can be accomplished with consonants.”
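The playback loop described above can be sketched as a simple real-time decoder: each frame of firing rates from the electrode is mapped to a pair of vowel formant frequencies that a synthesizer could voice within the 50-millisecond budget. The following Python sketch is purely illustrative; the linear decoder, random weights, and formant ranges are assumptions for the example, not the team’s actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CHANNELS = 56          # distinct neural signals the electrode captures
FRAME_MS = 50            # playback latency budget reported by the team

# Hypothetical linear decoder: firing rates -> two vowel formants (F1, F2).
# Real weights would be fit from training trials; here they are random.
W = rng.normal(scale=10.0, size=(2, N_CHANNELS))
b = np.array([500.0, 1500.0])  # rough center of a vowel formant space (Hz)

def decode_formants(rates):
    """Map one frame of 56 firing rates to (F1, F2) in Hz."""
    f1, f2 = W @ rates + b
    # Clamp to a plausible vowel range so a synthesizer would stay stable.
    f1 = float(np.clip(f1, 200.0, 1000.0))
    f2 = float(np.clip(f2, 800.0, 2500.0))
    return f1, f2

# Simulate one frame of neural activity and decode it.
rates = rng.poisson(lam=5.0, size=N_CHANNELS).astype(float)
f1, f2 = decode_formants(rates)
```

In a closed loop, the decoded formants would drive audio playback, and the listener’s adjustment on the next trial changes the firing rates fed back in, which is the practice cycle the article describes.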

However, as there are four times as many consonants as vowels, it may take years for the team to decode all the sounds, not to mention string them together to recognize and produce fluent speech. Brumberg says that the team may need to implant more electrodes, in areas solely devoted to the tongue, lips, or mouth, to get an accurate picture of more-complex sounds such as consonants.

“The electrode is only capturing about 56 distinct neural signals,” says Brumberg. “But you have to think: there are billions of cells in the brain with trillions of connections, and we are only sampling a very small portion of what is there.”

The team has no immediate plans to implant Ramsey with additional electrodes. However, Guenther is also exploring noninvasive methods of studying speech production in normal volunteers. He and Brumberg are scanning the brains of normal speakers using functional magnetic resonance imaging (fMRI). As volunteers perform various tasks, such as naming pictures and mentally repeating various sounds and words, active brain areas light up in response.

Guenther and Brumberg plan to analyze these scans for common patterns, zeroing in on specific regions related to certain sounds, with the goal of one day implanting additional electrodes in these regions. The researchers say that decoding signals within these areas may help translate speech for people with disorders such as locked-in syndrome and other forms of paralysis.
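The group-analysis step, searching the scans for activation patterns shared across volunteers, can be sketched under strong simplifying assumptions as averaging per-subject activation maps and thresholding the result. The synthetic maps, planted “speech” voxel, and fixed threshold below are illustrative only; a real fMRI study would use registered whole-brain volumes and corrected statistics.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical group analysis: each volunteer's scan is a small activation
# map (z-scores per voxel) recorded while mentally repeating a sound.
n_subjects, shape = 8, (4, 4, 4)
maps = rng.normal(size=(n_subjects, *shape))

# Plant a shared "speech-related" region so the example has something to find.
maps[:, 1, 2, 3] += 4.0

group_mean = maps.mean(axis=0)

# Voxels consistently active across volunteers (simple fixed threshold;
# real studies would use multiple-comparison-corrected statistics).
common = group_mean > 2.0
peak = np.unravel_index(np.argmax(group_mean), shape)
```

Zeroing in on the peak of the averaged map is the kind of region-finding step that could guide where additional electrodes are placed.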

“For patients with certain kinds of speech-related disorders originating in the peripheral nervous system, this approach is highly promising,” says Vincent Gracco, director of the Center for Research on Language, Mind and Brain at McGill University. “There is the potential to provide a useful means of communicating for patients with no functioning speech, in ways that have not been explored.”


Credit: Frank Guenther, Boston University


