Rat-Brained Robot

Rat neuron cells on silicon are the brains behind a new robot, a breakthrough that may lead to better computer chips.
December 18, 2002

Steve Potter’s brand-new robot would probably never make it to the second round of BattleBots. The size of a coffee mug, the cylindrical robot slides across a round, meter-sized playpen on an apparently chaotic path. But this robot is a thinker, not a fighter, and it does its thinking with a network of neurons, culled from rat embryos, that resides a few feet away on an electrode-studded silicon chip.

The device, which Potter calls a hybrot, is in essence a rat-controlled robot, and it marks the first time cultured neurons have been used to control a robotic mechanism. And while the hybrot’s movements may appear less than graceful, the knowledge gained could lead to computer chips modeled on biological systems, and perhaps even to computers that incorporate biological components. Such computers might one day learn, repair themselves, and perform certain tasks, such as dictation, at which binary-based systems are miserable. “I’m banking my whole career on the fact that there is a world of emergent properties in these neural networks that we don’t know anything about,” says Potter, a professor of biomedical engineering at the Georgia Institute of Technology.

In his experiment, Potter places a droplet of solution containing thousands of rat neuron cells onto a silicon chip embedded with 60 electrodes connected to an amplifier. The electrical signals that the cells fire at one another are picked up by the electrodes, which pass the amplified signal to a computer. The computer, in turn, wirelessly relays the data to the robot.

The robot then manifests this neuronal activity as physical motion, each of its movements a direct result of neurons talking to neurons. The robot also sends information back to the cells: equipped with light sensors, it receives input about its location in the playpen from infrared emitters lining the borders.

This proximity data is sent back through the computer and into the cells as electrical pulses. “On the one side, we record activity from the cells and use them to control the motors in the robot, and on the other, we take sensory input from the robot and translate it into stimuli for the cells in the dish,” says Potter. The entire feedback loop takes less than a tenth of a second. “Basically, we’ve taken these cells in a dish and given them back a body.”
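The loop Potter describes, record, actuate, sense, stimulate, can be sketched in code. This is purely an illustrative simulation, not Potter's actual software: every function name, the split of the 60 channels into left/right motor groups, and the proximity-to-pulse scaling are assumptions made up for the sketch.

```python
import random

def read_electrodes(n_channels=60):
    """Stand-in for sampling the 60-electrode array; real hardware
    would return amplified voltages, here we fake them."""
    return [random.random() for _ in range(n_channels)]

def spikes_to_motor_command(signals, threshold=0.5):
    """Hypothetical mapping from firing activity to wheel speeds:
    activity on one half of the array drives one motor."""
    left = sum(s for s in signals[:30] if s > threshold)
    right = sum(s for s in signals[30:] if s > threshold)
    return left, right

def proximity_to_stimulus(distance_to_wall):
    """Translate the robot's infrared proximity reading into a
    stimulation strength for the dish (assumed linear scaling:
    the closer the wall, the stronger the pulses)."""
    return max(0.0, 1.0 - distance_to_wall)

def feedback_step(distance_to_wall):
    """One pass through the loop: record neural activity, derive a
    motor command, then compute the stimulus sent back to the cells."""
    signals = read_electrodes()
    motors = spikes_to_motor_command(signals)
    stimulus = proximity_to_stimulus(distance_to_wall)
    return motors, stimulus
```

In the real system this cycle completes in under a tenth of a second, fast enough that the cells' responses and the robot's sensations form a continuous closed loop.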

Potter records the patterns of neural signaling over long periods with a high-speed camera. He is looking for evidence that the cells are learning from the feedback, and he has observed that some stimuli do in fact cause changes in the brain cells that last for several days. “The ‘brain’ is definitely developing,” he says.

According to Rolf Pfeifer, professor of computer science at the University of Zurich, Switzerland, this work could have implications for constructing self-healing computer systems. “The neural substrate has this ability for self-repair and enormous plasticity that is still lacking in standard tech systems,” Pfeifer says. “So I can imagine that when you have computer applications where some aspects really require adaptive behavior, you might be able to combine biological substrates with standard technology.”

Currently, Steven DeWeerth, professor of electrical engineering at Georgia Tech, is using Potter’s findings to build actual circuits in silicon, although this work is still preliminary. Potter can also see the knowledge gained from this research leading to breakthroughs in clockless, or asynchronous, chips: chips that do not operate according to the metronomic rhythm of an internal clock.

For now, Potter is still studying the symbiotic relationship between his cultured cells and the hybrot. It may not be long before these living networks start leading to ideas that computer designers won’t be able to ignore.
