EEG Cap Helps Paralyzed Patients

A noninvasive brain-recording device being developed for home use can help seriously disabled patients communicate.
April 3, 2006

For a person with advanced Lou Gehrig’s disease, communicating can be an enormous challenge. Patients with this progressive neuromuscular disorder, also known as amyotrophic lateral sclerosis (ALS), can think just fine, but they gradually lose their ability to move, speak, and breathe. Now, a noninvasive device that detects brain waves is helping these patients interact with the world.

The device consists of a cap wired with electrodes to record the electrical signals coming from the brain. The cap is connected to an amplifier, which is in turn connected to a computer that processes the electrical signals. The setup allows a patient to “click” on choices presented on a computer screen just by thinking about them.

The screen displays a matrix of icons representing, for example, the surrounding environment, personal comfort, or word processing. The rows of the matrix are highlighted in a random order; when the row containing the desired item is highlighted, the user’s brain emits a characteristic brain wave pattern, which the computer can recognize.

The user can scroll through different menus this way, selecting, say, “environment,” then “room temperature,” then “warmer.” Patients use a similar matrix of letters and numbers to compose e-mails. Scientists say the device could be useful for patients with extensive loss of muscle control, such as those with ALS, spinal-cord injuries, strokes, or cerebral palsy.
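To make the selection scheme described above concrete, here is a toy sketch in Python. It is not the Wadsworth team's software; it simply simulates the idea: rows of a small icon matrix are highlighted in random order, a made-up score stands in for the characteristic brain-wave response recorded by the electrodes, and after several repetitions the row with the strongest average response is treated as the user's choice. All names, matrix entries, and numbers here are assumptions for illustration only.

```python
import random

# Hypothetical icon matrix like the one shown on the patient's screen.
MATRIX = [
    ["environment", "comfort", "word processing"],
    ["e-mail",      "alarm",   "room temperature"],
    ["warmer",      "cooler",  "help"],
]

def simulated_eeg_response(flashed_row, target_row):
    """Stand-in for the score a classifier would compute from the EEG signal.

    When the row the user is attending to is highlighted, the brain produces
    a characteristic response; here that is faked with a higher mean score
    plus noise. Real systems average many flashes to overcome the noise.
    """
    base = 1.0 if flashed_row == target_row else 0.0
    return base + random.gauss(0.0, 0.4)

def select_row(target_row, repetitions=10):
    """Highlight each row repeatedly in random order and return the row
    with the strongest average simulated response."""
    scores = {row: [] for row in range(len(MATRIX))}
    for _ in range(repetitions):
        order = list(range(len(MATRIX)))
        random.shuffle(order)  # rows highlighted in random order
        for row in order:
            scores[row].append(simulated_eeg_response(row, target_row))
    # Averaging over repetitions separates the signal from the noise.
    means = {row: sum(vals) / len(vals) for row, vals in scores.items()}
    return max(means, key=means.get)

if __name__ == "__main__":
    # Suppose the user is attending to the row containing "room temperature".
    chosen = select_row(target_row=1)
    print("Selected row:", MATRIX[chosen])
```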

The system, which uses electroencephalography (EEG) to record brain activity, has had significant success in laboratory tests. But now Jonathan Wolpaw and colleagues at the Wadsworth Center, part of the New York State Department of Health, say they are intent on bringing the technology to patients in their homes. They have developed a simplified version of the device and are starting a small-scale, in-home trial with severely disabled patients, such as those with advanced ALS.

“Something like this could make a huge difference in quality of life for someone with ALS and their families,” says Jennifer Brand, director of patient services at the ALS Association, a California-based patient advocacy group.

Patients with severe neuromuscular diseases or spinal-cord injuries can sometimes communicate using systems that detect small muscle movements, such as an eyebrow twitch or eye movement. But gradually, some people lose the ability to control even the smallest muscle. “Right now, the options for locked-in patients are extremely limited,” says Joseph J. Pancrazio, a program director at the National Institute of Neurological Disorders and Stroke who oversees neuroprosthetics research. “There are eye-blink kinds of interfaces, but I think we’re going to find out that the BCI [brain-computer interface] approaches are easier and faster to use.”

Roberta Miller, a home-care physician in New York who works with immobilized patients, has had several of her patients try out the device as part of laboratory testing. “They are excited and hopeful about the technology and hope it’s ready for them when they need it,” she says.

One ALS patient, a biomedical scientist in Delaware, has already started testing the system in his home. In an e-mail he wrote using the device, he said, “[The brain-computer interface] improved my quality of life immediately. I couldn’t use my optical input device anymore.” He has been using the system for four to six hours a day to send e-mails and perform other tasks, and he eventually plans to use it to write scientific manuscripts.

Wolpaw’s team is now selecting more patients to test out the new, simplified version of the device, which the Wadsworth Center created in collaboration with Cambridge Consultants, a product development company based in Boston and the United Kingdom. “Initially, the software had 10,000 different parameters. But if a caregiver is faced with a piece of software with 10,000 parameters, they may get frustrated and walk away,” says Mark Manasas, who manages Cambridge Consultants’ part of the project. “We want people with minimal computer understanding to be able to get this up and running with a couple of mouse clicks.”

The team also redesigned the EEG cap to make it easier to use. Most EEG caps used in research must be specifically fitted to a subject by a technician to provide reliable recordings. But the new cap can be adjusted to an individual once, then worn every day, and still yield reliable readings.

The scientists will use results from the current trial to figure out exactly what patients want and how they use the system. They ultimately hope to develop a small, portable version of the technology that costs less than $3,000 and would be covered by medical insurance.

Wolpaw’s device is one of a number of technologies under development to help people with neurodegenerative disease or spinal-cord injuries use computers or even robotic arms. However, many of these devices are invasive, requiring an implanted electrode to record or transmit electrical signals. While implanted devices will ultimately provide a wider range of capabilities, they present a greater risk to patients and will be more difficult to develop for broad use. “The field has really exploded in the last five years, but now the focus needs to be on showing it can do some good,” says Wolpaw. “There have been lots of lab successes, but in terms of providing people with things they can use and benefit from, that hasn’t happened yet.”
