
Tongue Control

Sensory feedback via the tongue might improve neural prostheses.
November 24, 2008

Could the tongue help a paraplegic pick up a slippery glass or a soft, squishy kitten? Justin Williams and his colleagues at the University of Wisconsin are testing electrical stimulation of the tongue as an adjunct to visual feedback for brain-controlled computer interfaces, such as those used to control prosthetic arms. In research presented at the Society for Neuroscience conference in Washington, DC, last week, the team showed that volunteers can control the movement of a cursor on a computer screen using electrical stimulation of the tongue just as well as they can using visual feedback.

Brain machines: A volunteer tests a brain-computer interface being developed at the University of Wisconsin. It consists of an electroencephalogram (EEG) cap that records brain activity and a device that stimulates the tongue. The position of the yellow ball on the screen is represented in electrical activity on the volunteer’s tongue. The volunteer uses brain activity (imagined movements) to move the ball to the red target.

“The tongue device opens new ways for devices to interact with the brain and the body in a collaborative way,” says Gerwin Schalk, a research scientist at the Wadsworth Center in Albany, NY, who was not involved in the research. “If we have more ways of providing feedback, it would enhance performance.”

The tongue stimulator consists of a thin-film array of 144 electrodes, a bit larger than a quarter, that sits on the surface of the tongue. A stimulator delivers electrical signals based on visual information, in this case the movement of a dot on a computer screen. “It acts like a low-resolution monitor with a 12-by-12 array of pixels,” says Williams. A similar device is already in use for people with balance disorders (tongue stimulation tells the user whether her head is upright) and is also being tested as a visual aid for the blind.
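The "low-resolution monitor" idea can be made concrete: encoding a point on the screen means picking the corresponding electrode in the 12-by-12 grid. A minimal sketch, assuming normalized screen coordinates and a row-major electrode numbering (the function name and layout are illustrative, not the actual device interface):

```python
# Hypothetical sketch: map a normalized cursor position onto the
# nearest electrode of a 12x12 tongue-display grid (144 electrodes).
# The row-major numbering and coordinate convention are assumptions.

GRID_SIZE = 12  # 12 x 12 = 144 electrodes, per the article

def cursor_to_electrode(x: float, y: float) -> int:
    """Convert cursor coordinates in [0, 1] x [0, 1] to a
    row-major electrode index in [0, 143]."""
    col = min(int(x * GRID_SIZE), GRID_SIZE - 1)
    row = min(int(y * GRID_SIZE), GRID_SIZE - 1)
    return row * GRID_SIZE + col

# A cursor at the top-left corner maps to electrode 0; one at the
# screen center maps to an electrode near the middle of the array.
print(cursor_to_electrode(0.0, 0.0))  # → 0
print(cursor_to_electrode(0.5, 0.5))  # → 78
```

In practice a real display would likely activate a small patch of neighboring electrodes rather than a single one, so the stimulus is easier to localize on the tongue.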

For the current project, scientists attached the tongue device to a cap that reads electrical activity from the brain through the scalp using electroencephalography (EEG). The EEG cap is being developed as part of a neural prosthesis for people who are severely paralyzed. Specially developed software can detect when the user is thinking of moving his or her arms or feet, which can then be used to control a cursor on the screen.
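Motor-imagery BCIs of this general kind commonly translate changes in sensorimotor rhythm power over the motor cortex into cursor movement. A minimal illustrative sketch, not the Wisconsin group's actual pipeline; the sampling rate, electrode names (C3/C4), band edges, and control rule are all assumptions:

```python
import numpy as np

FS = 256           # assumed EEG sampling rate (Hz)
MU_BAND = (8, 12)  # mu rhythm; imagined movement suppresses its power

def band_power(signal: np.ndarray, fs: int, band: tuple) -> float:
    """Mean spectral power of `signal` within the given frequency band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[mask].mean())

def cursor_velocity(c3: np.ndarray, c4: np.ndarray, gain: float = 1.0) -> float:
    """Map the left/right asymmetry in mu-band power over motor-cortex
    electrodes C3 and C4 to a horizontal cursor velocity.
    Positive values move the cursor right, negative values left."""
    return gain * (band_power(c4, FS, MU_BAND) - band_power(c3, FS, MU_BAND))
```

With feedback delivered on the tongue instead of the screen, only the output side of a loop like this changes: the velocity updates the ball's position, which is then rendered as a stimulation pattern rather than a pixel.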

Users usually determine how well they are performing a task, such as using brain activity to move a ball to a target on a screen, by looking at the position of the ball. Williams and his colleagues found that the volunteers could perform just as quickly and accurately when the ball’s location was represented as a pattern of electrical activity on the tongue with no visual information.

The researchers hope to use the device to augment prosthetic limbs. When we pick up a cup, we use visual signals to bring our hand to it, but we use tactile and other information, such as its heaviness or texture, to judge how hard it is to grasp. This type of feedback is missing from existing prosthetic limbs. “A prosthetic hand may be able to do wonderful things mechanically, but if the subject cannot feel anything on the fingertips, it vastly limits the amount of manipulation they may be able to do,” says Marc Schieber, a physician and scientist at the University of Rochester Medical School, who was not involved in the research.

While the tongue may seem like an odd place to deliver sensory information, Williams says that it is actually an ideal spot. “It is one of the most densely innervated organs; it has receptors for taste, pain, and temperature,” he says. Under some conditions, the device can even elicit taste sensations, such as sweet and sour. As for how the electrical jolts feel: “Sort of like Pop Rocks,” says Williams.
