
For more than eight years, Erik Ramsey has been trapped in his own body. At 16, Ramsey suffered a brain-stem injury after a car crash, leaving him with a condition known as “locked-in” syndrome. Unlike patients with other forms of paralysis, locked-in patients can still feel sensation, but they cannot move on their own, and they are unable to control the complex vocal muscles required to speak. In Ramsey’s case, his eyes are his only means of communication: skyward for yes, downward for no.

Now researchers at Boston University are developing brain-reading computer software that in essence translates thoughts into speech. Combined with a speech synthesizer, this brain-machine interface technology has enabled Ramsey to vocalize vowels in real time, a huge step toward recovering full speech for Ramsey and other patients with paralyzing speech disorders. The researchers are presenting their work at the annual Acoustical Society of America meeting in Paris this week.

“The question is, can we get enough information out that produces intelligible speech?” asks Philip Kennedy of Neural Signals, a brain-computer interface developer based in Atlanta. “I think there’s a fair shot at this at this point.”

Kennedy and Frank Guenther, an associate professor in Boston University’s Department of Cognitive and Neural Systems, have been decoding activity within Ramsey’s brain for the past three years via a permanent electrode implanted beneath the surface of his brain, in a region that controls movement of the mouth, lips, and jaw. During a typical session, the team asks Ramsey to mentally “say” a particular sound, such as “ooh” or “ah.” As he repeats the sound in his head, the electrode picks up local nerve signals, which are sent wirelessly to a computer. The software then analyzes those signals for common patterns that most likely denote that particular sound.
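The article does not say which algorithm the software uses to find those patterns, so the Python sketch below is only an illustration of the general idea: it averages simulated firing-rate features across repetitions of each imagined vowel to form a “template,” then labels a new trial by the nearest template. The channel count, signal values, and nearest-centroid rule are all assumptions made for this example, not details of the Boston University system.

```python
# Illustrative sketch only: find a "common pattern" per imagined vowel from
# made-up data, then classify a new trial. Not the researchers' actual method.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical firing-rate features: 20 repetitions x 10 recording channels
# for each imagined vowel ("ooh" and "ah").
reps_ooh = rng.normal(loc=1.0, scale=0.3, size=(20, 10))
reps_ah = rng.normal(loc=2.0, scale=0.3, size=(20, 10))

# "Common pattern" for each sound: the average neural response across repetitions.
template_ooh = reps_ooh.mean(axis=0)
template_ah = reps_ah.mean(axis=0)

def classify(trial):
    """Label a new trial by whichever template it sits closest to (nearest centroid)."""
    d_ooh = np.linalg.norm(trial - template_ooh)
    d_ah = np.linalg.norm(trial - template_ah)
    return "ooh" if d_ooh < d_ah else "ah"

# A fresh trial drawn from the "ooh"-like distribution should be labeled "ooh".
print(classify(rng.normal(loc=1.0, scale=0.3, size=10)))
```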

The software is designed to translate neural activity into what are known as formant frequencies, the resonant frequencies of the vocal tract. For example, if your mouth is open wide and your tongue is pressed to the floor of the mouth, a certain sound frequency is created as air flows through, based on the position of the vocal musculature. Different muscle positioning creates a different frequency. Guenther trained the computer to recognize patterns of neural signals linked to specific movements of the mouth, jaw, and lips. He then translated these signals into the corresponding sound frequencies and programmed a sound synthesizer to project these frequencies back out through a speaker in audio form.
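The article does not describe the synthesizer’s internals, but the source-filter idea behind formant synthesis is standard: a buzzing glottal source is shaped by the resonances of the vocal tract. The Python sketch below is a minimal illustration under that assumption, using rough textbook formant values for an open “ah”-like vowel and simple second-order resonator filters; it is not the researchers’ implementation.

```python
# Minimal formant-synthesis sketch (assumed textbook values, not the real system):
# a glottal impulse train is passed through resonators tuned to formant frequencies.
import numpy as np
from scipy.signal import lfilter

fs = 16000                          # sample rate (Hz)
t = np.arange(int(0.5 * fs)) / fs   # half a second of audio

# Glottal source: a 110 Hz impulse train (the "buzz" produced by the vocal folds).
source = np.zeros_like(t)
source[::int(fs / 110)] = 1.0

def resonator(x, freq, bandwidth, fs):
    """Second-order IIR resonator that boosts energy near one formant frequency."""
    r = np.exp(-np.pi * bandwidth / fs)
    theta = 2 * np.pi * freq / fs
    a = [1.0, -2.0 * r * np.cos(theta), r * r]
    return lfilter([1.0 - r], a, x)

# Assumed formants (Hz) and bandwidths for an open "ah"-like vowel, applied in cascade.
out = source
for freq, bw in [(700, 80), (1200, 90), (2600, 120)]:
    out = resonator(out, freq, bw, fs)

out /= np.abs(out).max()            # normalize; write `out` to a WAV file to listen
```

Changing those formant values over time is what turns one vowel into another, and it is those frequencies that the brain-reading software estimates from Ramsey’s neural activity and feeds to the synthesizer.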

Credit: Frank Guenther, Boston University
