Technology-assisted mind-reading is inching closer to reality, with advances that could help those unable to communicate on their own. According to research presented at the Society for Neuroscience conference in Chicago this week, scientists can determine which vowels and consonants a person is thinking of by recording activity from the surface of the brain. The system, which has an accuracy rate of about 50 to 70 percent, could one day be used as a neural prosthesis for people with severe paralysis, translating their thoughts into actions on a computer or prosthetic limb.
Gerwin Schalk and colleagues at the Wadsworth Center, in Albany, NY, used a technology called electrocorticography (ECoG), in which a sheet of electrodes is laid directly on the surface of a patient's brain. The procedure is currently used to locate the source of seizures in patients with severe epilepsy that is resistant to drugs. Neuroscientists take advantage of the unparalleled access to the human brain during the test, which can last for days, by asking these patients to participate in experiments.
In the new experiments, researchers asked patients to say or imagine words flashed on a screen while their brain activity was recorded. Schalk’s team then used specially designed decoder algorithms to predict the vowels and consonants of the word, using only the pattern of brain activity.
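The article doesn't describe the Wadsworth group's decoder, but the basic idea of classifying a phoneme from a pattern of electrode activity can be sketched with a simple nearest-centroid classifier. Everything here is a labeled assumption: the feature vectors, electrode count, noise level, and class structure are all synthetic stand-ins, not real ECoG data or the team's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for ECoG features: each trial is a vector of
# per-electrode power values, and each vowel class has its own
# (synthetic) characteristic activity pattern.
n_electrodes = 16
vowels = ["a", "e", "i", "o", "u"]
class_means = {v: rng.normal(0, 1, n_electrodes) for v in vowels}

def simulate_trial(vowel, noise=0.8):
    """One noisy 'recording' of the activity pattern for a vowel."""
    return class_means[vowel] + rng.normal(0, noise, n_electrodes)

# "Train": estimate a centroid per class from labeled trials.
centroids = {v: np.mean([simulate_trial(v) for _ in range(40)], axis=0)
             for v in vowels}

def decode(trial):
    """Nearest-centroid decoder: pick the class whose mean pattern
    is closest to the observed activity."""
    return min(centroids, key=lambda v: np.linalg.norm(trial - centroids[v]))

# Evaluate on fresh simulated trials.
correct = sum(decode(simulate_trial(v)) == v for v in vowels for _ in range(20))
acc = correct / (len(vowels) * 20)
print(f"accuracy: {acc:.2f}")
```

On well-separated synthetic classes a decoder like this scores far above the 20 percent chance level; real neural data is much noisier, which is why the reported 50-to-70-percent figures are notable.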
They found that speaking and imagining the word gave roughly the same level of accuracy, which is essential if the system is to be used in people so severely paralyzed that they have lost the ability to speak. The device isn't yet ready for clinical use: at its current accuracy, it would guess the wrong word a significant portion of the time. But "if we can boost accuracy to 90 percent, we'll have a genuine thought-translation device," says Schalk.
Schalk and collaborators Eric Leuthardt and Dan Moran at Washington University School of Medicine have launched a St. Louis-based start-up, called Neurolutions, to develop a smaller version of the ECoG device for use in paralyzed people.
Nicholas Schiff and colleagues at Weill Cornell Medical College in New York are working on a different variety of mind-reading, developing a way to use functional magnetic resonance imaging (fMRI) to communicate with people who have suffered serious brain damage and remain in what’s called a minimally conscious state. (After severe brain trauma, some patients are diagnosed as being in a vegetative state, in which they are totally unaware of their environment, while others remain in a minimally conscious state, in which they occasionally laugh or cry, reach for objects, or even respond to simple questions.)
The research builds on a finding published in 2006 by Adrian Owen and colleagues at the Medical Research Council in Cambridge, England, showing via fMRI that a patient who was diagnosed as being in a persistent vegetative state could follow commands, an experiment I wrote about in a feature two years ago.
Owen and his coworkers created a brain-imaging test they hoped would indicate whether someone was actually aware of his or her environment. The patient was instructed to imagine playing tennis when she heard the word "tennis," or to imagine walking through her house when she heard the word "house"; she was then positioned in the scanner and given auditory prompts. The test was designed to evaluate both short-term memory, because the instructions were given well before the prompts, and the capacity for sustained attention, because the patient was told to continue imagining a scene until asked to stop. Most important, it was designed to require intentional action.
If you’re healthy, imagining that you’re playing tennis or navigating your house activates specific parts of your brain–respectively, the supplementary motor areas, which control motor responses, and the parahippocampal gyrus, which plays a role in memory of scenes. So the scientists knew exactly what to look for in patients with impaired consciousness. Their subject was a 23-year-old woman who’d been left in a vegetative state after a car accident in 2005. At the time of the study, five months had elapsed since her accident, meaning that statistically, she had a 20 percent chance of some recovery. She showed no outward signs of awareness.
The results of the test were shocking, even “spectacular,” according to a commentary accompanying their publication in the journal Science last fall. “When we cued her with the word ‘tennis,’ her brain would activate in a way that is indistinguishable from a healthy person,” says Owen. The same was true for the word “house.” “We think the fMRI demonstrated unequivocally that she is aware,” he says.
While the patient met all the clinical requirements for being in a vegetative state, her fMRI clearly showed a brain capable of relatively complex stimulus-processing. Still, it's not yet certain what conclusions can be drawn from her case. "We have studied over 60 patients in Belgium and have never seen activation compatible with conscious perception," says Steven Laureys, a neurologist at the University of Liège in Belgium. "I definitely think this is the exception, but I can't tell if it's a one-in-a-thousand or a one-in-a-million case." Owen now plans to run the same tests on more patients, using a variation of fMRI that shows brain responses in real time.
In the new research, Jonathan Bardin, a researcher in Schiff’s lab, aims to take this finding a step further, using the technique to communicate with patients by using characteristic brain activity–in this case generated by imagining swimming–as an affirmative answer to questions. “We think this could facilitate communication with at least some subset of these patients,” he says.
Researchers first tested the approach in healthy people, showing them a card and asking a multiple-choice question to identify it: "If this card is a queen, imagine swimming, now stop. If this card is a jack, imagine swimming, now stop." And so forth. Using brain imaging alone, all 10 healthy volunteers clearly communicated that their card was an ace of diamonds.
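The logic of that protocol can be sketched in a few lines: each candidate answer gets its own time window, and the answer is the window in which the "imagined swimming" region shows elevated activity. This is a toy illustration under stated assumptions, with synthetic signals standing in for fMRI data; the option names and numbers are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

options = ["queen", "jack", "king", "ace"]
true_answer = "ace"  # the card the (simulated) subject is holding

def window_signal(option, n_scans=10):
    """Hypothetical signal from the brain region engaged by imagined
    swimming, during the prompt window for one answer option:
    baseline noise, plus sustained activation for the 'yes' option."""
    signal = rng.normal(0.0, 1.0, n_scans)
    if option == true_answer:
        signal += 3.0  # subject imagines swimming only for this option
    return signal

# Decode: the option whose window shows the highest mean activity.
means = {opt: window_signal(opt).mean() for opt in options}
decoded = max(means, key=means.get)
print(decoded)
```

The same scheme turns any yes/no question into a readable answer, which is why Bardin's group sees it as a potential communication channel for minimally conscious patients.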
They then tested the approach on a patient in a minimally conscious state. Using fMRI, researchers found that the patient's ability to follow commands was very similar to that of the patient in Owen's research, but the results on the multiple-choice tests were less clear. "It appears that the patient is trying to do the task, but that they are coming up with the wrong answer," says Bardin. Her pattern of brain activity appeared delayed compared with that of healthy people, a pattern previously seen in people with brain damage.
The patient could indicate the identity of the card using eye movements, so it's not yet clear why the brain-imaging results are more ambiguous, he says. "Maybe imagining the task was more complicated or maybe the patient was less aroused when she was in the scanner." People in a minimally conscious state can have highly variable levels of attentiveness and responsiveness.
Researchers hope the technology could ultimately help in diagnosing these patients, who are often misdiagnosed. For example, patients with locked-in syndrome, a consequence of a type of brainstem stroke that leads to severe paralysis, are cognitively normal but cannot move, leading some of them to be diagnosed as being in a vegetative or minimally conscious state.