In July 2006, a paper in Nature described how a paralyzed man with a chip implanted in his brain used his mind to move a computer cursor and a robotic arm. The chip is one of the most successful examples to date of a neural prosthetic. Such devices pick up neural signals from a part of the brain involved in a given activity, such as the neurons in the motor cortex that fire as a person imagines moving a computer mouse by hand. Then they interpret those signals and direct a physical action accordingly, say, moving a cursor to the left.
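To make that decoding step concrete, here is a minimal sketch, assuming a simple linear readout from firing rates to cursor velocity. The array size, weights, and baseline are illustrative placeholders, not the method used in the system described in the Nature paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_neurons = 96                              # e.g., electrodes on a motor-cortex array
firing_rates = rng.poisson(20, n_neurons)   # spikes per second in the current time bin

# Hypothetical decoding weights: one row per cursor dimension (x, y),
# in practice estimated by regressing intended movement against recorded activity.
weights = rng.normal(0, 0.01, (2, n_neurons))
baseline = np.full(n_neurons, 20.0)         # each neuron's average firing rate

# Deviations from baseline firing are mapped to a cursor command.
cursor_velocity = weights @ (firing_rates - baseline)
print("decoded cursor velocity (x, y):", cursor_velocity)
```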
Neural prosthetics promise to empower people with neurodegenerative diseases and spinal-cord injuries. But because they can involve many combinations of brain regions and hardware, each new prototype has needed its own software. Designing new algorithms from scratch slows development, says Lakshminarayan Srinivasan, SM ’03, PhD ’06, a neurosurgery research fellow at the Massachusetts General Hospital and a medical student in the Harvard-MIT Division of Health Sciences and Technology. So Srinivasan is developing general algorithms that could lead to software compatible with all such devices.
Technologies for detecting brain activity include functional magnetic resonance imaging (fMRI), electroencephalography (EEG), and functional near-infrared spectroscopy, each of which generates different kinds of data. Output devices might include TVs, computers, or robotic arms, each of which responds to a different set of user commands.
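That mismatch between inputs and outputs is what a general algorithm would have to bridge. The sketch below, using hypothetical class and method names rather than Srinivasan's actual software, shows one way a device-agnostic decoder could sit between any signal source and any output device.

```python
from typing import Protocol
import numpy as np

# Hypothetical interfaces: any recording technology that can produce a feature
# vector, and any device that can act on a command vector, would plug in here.
class SignalSource(Protocol):
    def read_features(self) -> np.ndarray: ...

class OutputDevice(Protocol):
    def apply_command(self, command: np.ndarray) -> None: ...

class GenericDecoder:
    """One decoding algorithm reused across signal sources and output devices."""
    def __init__(self, weights: np.ndarray):
        self.weights = weights  # fit offline for a given user and setup

    def step(self, source: SignalSource, device: OutputDevice) -> None:
        features = source.read_features()
        device.apply_command(self.weights @ features)

# Toy usage: the same decoder drives a cursor from simulated EEG-like features.
class FakeEEG:
    def read_features(self) -> np.ndarray:
        return np.ones(8)

class CursorScreen:
    def apply_command(self, command: np.ndarray) -> None:
        print("move cursor by", command)

GenericDecoder(np.full((2, 8), 0.1)).step(FakeEEG(), CursorScreen())
```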