
Mind-Machine Merger

Devices that connect the brain with computers could lead to mind-controlled robots, help treat neurological disorders, and even improve memory.

Ted Berger is a mind reader. The minds of rats, that is. In his lab at the University of Southern California, the neurobiologist places a tiny array of electrodes onto a slice of a rat’s brain in a petri dish. With the flip of a switch, graduate student Walid Soussou starts the flow of electrical signals into the tissue. The brain cells respond by generating their own electrical impulses. This swirling pattern of neural signals is picked up by the electrodes and appears on a nearby computer screen as a wash of colors ranging from brilliant red to dark blue.

For the next few hours, Berger and his team will map out the circuitry behind one of the brain’s most complex functions: memory. It’s basic research, but they are doing it with a big technological goal in mind. Berger’s group aims to use the information to build an advanced “brain-machine interface”: a device that links the biological circuits of a brain to the silicon circuits of a computer and that will change how the mind thinks.

In recent years, research groups around the country have implanted electrodes in the brains of animals (and even a few humans) and have used signals detected by those electrodes to move robot arms, levers, and cursors on computer screens (see “Other Brain-Machine Research” below). The aim of the work has been to give paralyzed patients the ability to control prosthetic limbs and simple communication tools. But Berger’s objective is even more far-reaching: to build a computer chip that will restore the cognitive abilities of the brain itself, aiding memory in patients who suffer from such neurological disorders as Alzheimer’s disease and stroke, and perhaps eventually enhancing the abilities of healthy minds. To do so, the researchers have to understand neural processes that may be more complicated than those that govern, say, the control of a prosthetic arm. “It’s one of the most ambitious projects in the whole field,” says Christof Koch, an expert on computation and neural systems at Caltech.

As bold as it is, Berger’s team is not the only group breaking new ground in what researchers sometimes call neural prostheses. A two-year, $24 million program from the U.S. Defense Advanced Research Projects Agency, launched last fall, is rapidly expanding the boundaries of brain-machine interface research. The six projects funded by DARPA’s program, including Berger’s at the University of Southern California, aim to develop technologies that will not only restore but also augment human capabilities, says Alan Rudolph, program manager of the DARPA initiative. This coordinated, well-funded “big science” approach to understanding how minds and machines can interact, he says, could have “transformational consequences for defense and society.”

The effort will yield a new generation of electrodes, computer chips, and software that might eventually equip soldiers, for example, to control superfast artificial limbs, pilot remote vehicles, and guide mobile robots in hazardous environments, using only the power of their thoughts. Even more remarkably, such devices could enhance decision-making, upgrade memory and cognitive skills, and even allow one person’s brain to communicate wirelessly with another’s.

Although such applications are as speculative as they are spectacular, scientists no longer view them as pure fantasy. Their new optimism is fueled in part by a host of recent advances in neuroscience, interface hardware, and signal processing. And the influx of money certainly doesn’t hurt. “DARPA is putting much larger resources into the area than has ever been seen before,” says William Heetderks, director of the Neural Prosthesis Program at the National Institutes of Health. And because researchers in this field have no shortage of innovative ideas, he adds, the new funding “will have a tremendous effect.”

Remote Control

Among the rolling hills of Durham, NC, Duke University’s Miguel Nicolelis is attempting to teach old monkeys new tricks. But first, their brains must learn to listen.

Over the last few years, Nicolelis and his team have shown that brain signals picked up by electrodes implanted in animals’ brains can provide rudimentary control of robot arms. But there’s a hitch: the animals don’t know they are controlling anything. To get to the point where animals, and eventually humans, can take on more sophisticated tasks, Nicolelis says, real-time communication between mind and machine must become a two-way street.

So in Nicolelis’s lab a rhesus monkey is not only controlling a robot arm through brain signals picked up by electrodes implanted in its head, it is also getting feedback from the robot: for now, in the form of a cursor on a screen that shows the robot’s movements. Kept in separate rooms, the monkey and robot arm are linked via cables, a microcomputer, and a parallel processor. The next step will be to implement tactile feedback. When the monkey tries to use the robot arm to grab a rubber beer mug, the robot arm will send signals to force transducers placed on the animal’s upper arm; these motors will vibrate vigorously when the robot’s grip tightens. And eventually, Nicolelis says, the system could provide even more direct feedback by electrically stimulating sensory regions of the brain. “The trick is to give the right kind of feedback so the monkey’s brain will incorporate the robot as if it were a part of its own body,” he says.
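To make “closing the loop” concrete, here is a minimal sketch in Python of the read-decode-act-feedback cycle such a system runs. Everything in it, from the linear decoder to the toy neural signals, is an illustrative assumption, not Nicolelis’s actual software.

```python
import numpy as np

rng = np.random.default_rng(0)

N_NEURONS = 32
BASELINE = 10.0  # assumed resting firing rate, spikes per time bin

# Hypothetical linear decoder mapping rate modulations to 2-D arm velocity.
decoder = rng.normal(scale=0.1, size=(2, N_NEURONS))

arm_pos = np.zeros(2)          # robot arm position, shown to the animal
target = np.array([5.0, 3.0])  # where the toy "brain" wants the arm to go

for step in range(500):
    # 1. Read: sample neural activity; this stand-in brain modulates its
    #    firing according to the remaining error it sees on the screen.
    error = target - arm_pos
    rates = rng.poisson(lam=np.clip(BASELINE + decoder.T @ error, 0.0, None))

    # 2. Decode: turn rate modulations into a movement command.
    velocity = decoder @ (rates - BASELINE)

    # 3. Act and feed back: move the arm; the updated position is what the
    #    animal sees now, and would feel via force transducers later.
    arm_pos += 0.05 * velocity

print("final distance to target:", np.linalg.norm(target - arm_pos))
```

The essential point is step 3: without it, the brain issues commands blind; with it, the animal can learn to treat the arm as part of its own body.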

Once they “close the loop” of brain-machine interaction, Nicolelis says, researchers can begin to think realistically about designing systems whose physical capabilities surpass those of normal people. One example: by bypassing nerves and muscles and connecting the brain directly to a robotic limb, he says, it may be possible to cut reaction times by a factor of six. He predicts that many labs will demonstrate such augmentation of basic physical abilities over the next five years.

As Nicolelis works to replicate and augment such everyday capabilities as grasping and lifting, researchers at the University of Michigan are pushing brain-machine interfaces into new realms of physical control. Biomedical engineer Daryl Kipke and his team are teaching rats and monkeys how to guide the movements of a fleet of mobile robots using only their minds. Feedback is important, Kipke says, because it allows the animals to gain experience interacting with a device that is completely foreign: in this case, a half-meter-long, six-legged robotic critter named RHex (pronounced “rex”).

For now, the agile robot must be either programmed to run in a certain direction or remotely directed by a hand-controlled wireless link. But brain-machine interfaces, the Michigan researchers say, could allow for faster and better-coordinated control. In the distant future, soldiers or rescue personnel, possibly at multiple locations, might plug their minds into a central computer to control a fleet of RHexes in the field. Guided by brain impulses, the robots would carry out search-and-rescue missions in war zones and disaster areas, while sending audio, visual, and tactile feedback to their controllers. “That’s the home run,” says Kipke.

Although reaching that goal is probably still decades away, Kipke’s team is working toward it by extracting signals from neurons in the areas of the brain that are involved in planning and executing movements. With all the noise from surrounding cells, it’s like trying to listen to specific conversations in a baseball stadium. Within a year, the researchers will surgically implant arrays of silicon electrodes, each no wider than a hair, in an animal’s brain and connect each array to a flexible low-power circuit that looks like a one-square-centimeter Band-Aid on the animal’s skin. The circuit will speed up the overall processing of the signals and allow them to be sent wirelessly to a central computer. There, custom software will translate the signals into movements of a computer cursor, which the animal will watch. The next step, says Kipke, will be connecting the cursor to RHex’s wireless control system so that when the cursor moves left, the robot does the same.
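One standard way such translation software can work is a population-vector decoder: each recorded neuron is assigned a preferred movement direction, and the cursor is driven along the rate-weighted average of those directions. The sketch below assumes idealized cosine-tuned units; whether the Michigan software uses this particular scheme is an assumption made only to show the idea.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 64
BASELINE = 10.0  # assumed resting rate, spikes per second

# Assign each unit a random preferred direction in the plane.
angles = rng.uniform(0.0, 2.0 * np.pi, N)
preferred = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # shape (N, 2)

def decode(rates):
    """Map N firing rates to a 2-D cursor velocity: the population vector."""
    modulation = rates - BASELINE        # firing above or below baseline
    return modulation @ preferred / N    # rate-weighted mean direction

# Simulated trial: classic cosine tuning, each unit firing above baseline
# in proportion to how well the intended movement matches its preference.
intended = np.array([1.0, 1.0]) / np.sqrt(2.0)  # up and to the right
rates = BASELINE + 8.0 * (preferred @ intended)

print(decode(rates))  # points up-right, ready to drive a cursor (or RHex)
```

With enough units the decoded vector closely matches the intended direction; the hard engineering is doing this robustly, in real time, from noisy wireless recordings.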

By summer the Michigan team, together with physiologist Dan Moran at Washington University, plans to have a monkey in St. Louis navigate RHex through an obstacle course in Ann Arbor, MI. The control signals will pass back and forth via the Internet, and the monkey will monitor a graphical representation of the robot’s position and movements on a screen. The overarching goal of the current project is to test whether such interfaces can engage the brain, making use of both neural commands and feedback, to control increasingly remote and complex devices. “Within five years, we’ll know if we can do this,” Kipke says.

Pumping up Perception

While Nicolelis and Kipke are boosting the brain’s ability to control external devices, others in the DARPA initiative are aiming to manipulate the brain’s inner workings, specifically those that send, receive, and process sights and sounds. By tapping into the visual and auditory regions of the mind, researchers are testing whether such information can be transmitted between brains and computers to enhance perception and communication. If successful, the projects could lead to astounding new interfaces that enhance humans’ ability to recognize faces, objects, and speech and to make decisions. They might even enable brain-to-brain wireless communication, says DARPA’s Rudolph.

Before they can devise such systems, researchers must learn how to “read out” information from the brain, as well as “write in” information, says Tomaso Poggio, an expert on artificial intelligence at MIT. Poggio and MIT neurophysiologist James DiCarlo, both principal investigators in the DARPA program, are working with visual perception and object recognition in rhesus monkeys. The researchers will present objects such as abstract shapes, cars, and animals on a computer screen. One possible experiment is based on previous collaborations with MIT neuroscientist Earl Miller: the researchers could train a monkey to decide whether a computer-generated animal on a screen looks more like a cat or a dog. Software would blur the line, creating, for example, an image that is 60 percent cat and 40 percent dog. While the monkey is making its decision, the researchers would use implanted electrodes to record signals from neurons in the visual cortex: some of these cells fire when the monkey views a cat, others when it sees a dog.
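The kind of readout such recordings would enable can be sketched in a few lines: treat the cat-preferring and dog-preferring units as two populations and let their average firing rates vote. The population sizes, rates, and decision rule below are invented for illustration and are not the MIT group’s analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

cat_fraction = 0.6                 # the morphed image: 60 percent cat
n_cat_cells = n_dog_cells = 20     # hypothetical numbers of recorded units

# Assume each population's firing scales with how much of "its" category
# is present in the image (Poisson spike counts around that mean rate).
cat_rates = rng.poisson(lam=5.0 + 20.0 * cat_fraction, size=n_cat_cells)
dog_rates = rng.poisson(lam=5.0 + 20.0 * (1.0 - cat_fraction), size=n_dog_cells)

# Read out the percept: whichever population fires harder wins the vote.
evidence = cat_rates.mean() - dog_rates.mean()
print("readout:", "cat" if evidence > 0 else "dog")  # usually "cat" at 60/40
```

Near a 50/50 morph the two populations fire about equally, and the readout, like the monkey, starts making errors.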

Silicon Cognition

Back at the University of Southern California, Berger’s team is pushing the farthest frontier of brain-machine interfaces. Once they have mapped out the signal patterns of several regions of the brain, the researchers plan to manipulate the ways the brain processes information and communicates with itself; in short, how the brain thinks. This work could one day lead to neural prostheses that restore and even enhance such cognitive processes as memory. Imagine going to the doctor to recover memories long since faded or buying hardware that sharpens your ability to remember people’s names.

Berger’s team is taking a baby step toward that vision by developing a computer chip that mimics the signal processing of the hippocampus, a spiral-shaped region of the brain that is instrumental in learning and forming memories. Fortunately, the information flow in the hippocampus of rats is straightforward, says Berger, and the circuit looks similar, though more complicated, in the human hippocampus.

What makes things challenging is that, at least in Berger’s view, memory in the brain is represented in the dynamic firing patterns of neurons, not in a fixed arrangement of bits like that of a computer’s memory. “If any part of the brain looks like RAM, we haven’t found it yet,” Berger says. And neurons are inherently tricky. To get one to fire, timing is everything: it may take a combination of impulses from surrounding neurons or repeated inputs from one messenger spaced in time just so.
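A toy model makes the timing point concrete. The leaky integrate-and-fire sketch below is a textbook simplification with made-up parameters, not one of Berger’s fitted hippocampal models: the same two input impulses make the cell fire when they arrive close together, but not when they are spread out, because the membrane potential leaks away in between.

```python
import math

def fires(input_times, tau=10.0, weight=0.7, threshold=1.0, t_max=100):
    """Leaky integrate-and-fire: does this schedule of inputs cause a spike?"""
    v = 0.0  # membrane potential
    for t in range(t_max):
        v *= math.exp(-1.0 / tau)   # potential leaks away each millisecond
        if t in input_times:
            v += weight             # an incoming impulse bumps the potential
        if v >= threshold:
            return True             # threshold crossed: the neuron fires
    return False

print(fires({20, 22}))  # True: the impulses summate before they can leak away
print(fires({20, 60}))  # False: same inputs, but too far apart to summate
```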

To capture these dynamics, Berger’s team has developed mathematical models of the individual neurons in question and has begun to implement the models in hardware. If neuron A sends a particular pattern of impulses to neuron B, says University of Southern California biomedical engineer Vasilis Marmarelis, the model tells you what pattern neuron B will send to neuron C. “It isn’t sexy,” he says, “but it’s the first step of a very long journey.” From there, the researchers will put thousands of neuron models onto a low-power silicon chip.

Later this year, says Berger, the proof-of-principle experiment will go like this: In a slice of a rat’s hippocampus, the scientists will demonstrate that electrical signals from region A are processed by region B and sent on to region C. They will then remove neurons from region B and show that the output of region C is disrupted. Finally, they will reroute the signals through a prototype chip, in place of region B, to see whether that completes the circuit and produces the same overall pattern of signals as the healthy slice.
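The logic of that experiment can be run as a simulation. In the sketch below the three regions are stand-in functions, and both the transformations and the size of the chip’s model error are arbitrary assumptions: if the chip reproduces region B’s input-output behavior closely enough, region C’s output should barely change when the chip replaces the tissue.

```python
import numpy as np

rng = np.random.default_rng(3)

def region_b(x):
    # The biological transformation (unknown in practice; assumed here).
    return np.tanh(1.5 * x)

def chip_b(x):
    # The prosthetic model of B, imperfect by a small fitting error.
    return np.tanh(1.5 * x) + rng.normal(scale=0.01, size=x.shape)

def region_c(x):
    # Downstream processing of whatever B (or the chip) hands it.
    return np.maximum(0.0, x - 0.2)

stimulus = rng.normal(size=1000)           # signals arriving from region A
intact = region_c(region_b(stimulus))      # the healthy slice
bypassed = region_c(chip_b(stimulus))      # the slice with the chip as B

print("mean output mismatch:", np.abs(intact - bypassed).mean())
```

A small mismatch is the success criterion: the closer the chip’s mapping is to region B’s, the closer the rerouted circuit’s output comes to the healthy one.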


If this is successful, the next step will be to test the chip in an animal. Within three years Berger’s group plans to turn its interface over to a team led by physiologist Sam Deadwyler at Wake Forest University. Deadwyler is training monkeys to remember clip-art pictures flashed on a screen and to pick the images from a subsequent lineup. At the same time, he is recording signals from the hippocampus that allow him to identify which neurons are important for the task, and even to predict whether the monkey will choose correctly. When Berger’s interface is ready, says Deadwyler, the researchers will temporarily inactivate the hippocampus so the primate can no longer do the task; then they will plug the chip into the affected area to see whether the interface can restore the monkey’s performance.

Eventually, Berger and Deadwyler plan to determine whether the chip can augment memory: they will implant the chip in an animal whose hippocampus is intact. With the chip, the monkey might be able to remember a picture for a longer period or pick it out of a larger lineup of distractions. In the future, says Deadwyler, it might be possible to connect a person’s brain to hardware that makes memories last longer or that allows one to keep track of ever-increasing amounts of information, as when you’re dashing through a busy airport and need to remember a phone number for a few seconds. But don’t expect to see this anytime soon. “We’re a long way from improving on paper and pencil,” says the NIH’s Heetderks.

For one thing, Berger’s group faces the skepticism of some scientists who don’t buy into the fundamental premise that memory consists solely of dynamic patterns of neuron activity. And it faces many of the practical challenges other neural-prosthesis research teams grapple with. For now, nobody knows exactly which neurons, or how many, need to be tapped to build useful devices. Depending on the application, the researchers may need to access thousands of brain cells all at once. And there are computational hurdles to overcome before the interfaces can process massively parallel streams of neural data in real time.

But perhaps the greatest technical challenge lies in physically connecting rigid hardware to delicate brain cells and sustaining those connections for months or even years at a time, says John Chapin, a physiologist at the State University of New York Downstate Medical Center who helped pioneer methods for accessing brain signals in the mid-1990s. Because neurons continually shift their positions and alter their connections, the interface must be flexible, biocompatible, and adaptable to changes in the signals it receives. With this in mind, DARPA’s Rudolph is pushing for a standardized electrode platform across the initiative so that each team doesn’t reinvent the wheel. But this is easier said than done. “Scientists would rather use each other’s toothbrushes than each other’s electrodes,” says Caltech’s Koch.

Even if the interface technologies work, they might face a long road to acceptance. Paralyzed patients eager to gain enhanced physical abilities may be willing to accept the risks of surgery and to live with hardware implanted in their brains, but most healthy people would probably balk at the proposition. In fact, says Rudolph, “we really don’t envision implanting healthy people with these kinds of devices.” The key to being able to restore or augment human capabilities, he says, will be gaining access to brain signals in an unobtrusive way: ideally, without wires, electrodes, or surgery.

Before DARPA, or anyone else for that matter, invests in that next generation of brain-signal-detection technology, researchers must determine whether neural prostheses will be practical in their new applications. “If successful,” says Rudolph, “we will have seeded the important work to demonstrate that this can be done and, if a noninvasive tool can be found to extract the same kinds of information, that human performance enhancement can be envisioned.” And though this vision is still years away, our minds may already be on the road to a new way of thinking.

Other Brain-Machine Research
Richard Andersen, Caltech: Electrode systems for recording brain impulses
Niels Birbaumer, University of Tübingen (Germany): Noninvasive brain-signal detectors
John Donoghue, Brown University and Cyberkinetics (Providence, RI): Neural prostheses that give paralyzed patients control over computers
Philip Kennedy, Neural Signals (Atlanta, GA): First human tests of brain implants for restoring communication in completely paralyzed patients
Andrew Schwartz, University of Pittsburgh: Neural prostheses that control robot arms
Harvey Wiggins, Plexon (Dallas, TX): Hardware and software for recording and analyzing brain signals
