For decades, brain-computer interfaces have been imagined as a way for people who are paralyzed or have lost limbs to do everyday tasks like brushing their hair or clicking a TV remote, just by thinking about it.
Such robotic devices exist today: so far, a handful of patients in research labs around the world have tried them, gaining a limited range of motion. But researchers are still years away from making these devices practical for use in people’s homes, says Andrew Schwartz, distinguished professor of neurobiology at the University of Pittsburgh.
Speaking at MIT Technology Review’s annual EmTech MIT conference in Cambridge, Massachusetts, on Tuesday, Schwartz said these interfaces will need a number of modifications in order for that to happen. He said he’s working on such a model with Draper Laboratory, based in Cambridge, but hasn’t been able to get funding to move the project along.
“This is very much on the outskirts of science,” said Schwartz, an early pioneer of these interfaces.
Today’s brain-computer interfaces involve electrodes or chips that are placed in or on the brain and communicate with an external computer. These electrodes collect brain signals and then send them to the computer, where special software analyzes them and translates them into commands. These commands are relayed to a machine, like a robotic arm, that carries out the desired action.
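The translation step can be illustrated with a toy version of the population vector approach that Schwartz helped pioneer in his monkey studies. This is a minimal sketch, not the labs' actual software, and every number in it is made up: each recorded neuron "votes" for a preferred movement direction, weighted by how far its firing rate rises above baseline, and the summed votes become a velocity command for the robotic arm.

```python
# Hypothetical preferred directions (x, y) for four recorded neurons.
# Real decoders use hundreds of neurons calibrated per patient.
PREFERRED = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]

def decode_velocity(firing_rates, baselines):
    """Translate firing rates (spikes/sec) into a 2-D velocity command
    by summing each neuron's preferred direction, weighted by how far
    its rate is above its baseline."""
    vx = vy = 0.0
    for rate, base, (px, py) in zip(firing_rates, baselines, PREFERRED):
        weight = rate - base
        vx += weight * px
        vy += weight * py
    return vx, vy

# Neuron 0 (rightward-preferring) fires well above baseline, so the
# decoded command points mostly rightward.
command = decode_velocity([40.0, 12.0, 8.0, 10.0], [10.0] * 4)
print(command)  # -> (32.0, 2.0)
```

In practice the mapping from firing rates to movement is learned per patient during calibration sessions, and, as Schwartz notes below, known codes like this cover only a narrow slice of what the motor cortex actually does.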
The embedded chips, which are about the size of a pea, attach to so-called pedestals that sit on top of the patient’s head and connect to a computer via a cable. The robotic limb also attaches to the computer. This clunky setup means patients can’t yet use these interfaces in their homes.
In order to get there, Schwartz said, researchers need to size down the computer so it’s portable, build a robotic arm that can attach to a wheelchair, and make the entire interface wireless so that the heavy pedestals can be removed from a person's head.
Schwartz said he hopes paralyzed patients will someday be able to use these interfaces to control all sorts of objects beyond just a robotic arm.
“Just imagine someone using telemetry going into a smart home and being able to operate all these devices merely by thinking about them,” he said.
The big hurdle is that the science behind the technology is so complex. The interface relies on decoding the “neural code”—that is, the pattern of activity of neurons in the brain—into specific movement commands. Currently, the gestures people can perform with these interfaces are limited because scientists know little about all the different patterns in which neurons fire.
For example, Schwartz and his team have been able to get monkeys, as well as a few human participants, to grasp objects using a brain-computer interface and a robotic arm. But applying force to objects, such as by pushing or pulling, is more complicated and requires a different set of neural codes that the computer algorithms need to learn.
“We don’t have a good understanding yet of how motion and force are mixed together to allow us to interact with objects,” Schwartz said. Scientists will need to study the brain more to figure out what these signals look like.