What We Can Learn from Robots

For Japan’s Mitsuo Kawato, robotics explains how the human brain works.

On a crisp October day last year, Carnegie Mellon University’s Robotics Institute kicked off its 25th-anniversary celebration, as the world’s robotics experts came to Pittsburgh to see C-3PO, Shakey the robot, Honda’s Asimo, and Astro Boy inducted into the Robot Hall of Fame. The next day saw demonstrations of running, snaking, and bagpipe-playing bots. On the third day, it was Mitsuo Kawato’s turn to speak. The lights went down, and the director of the ATR Computational Neuroscience Laboratories in Kyoto, Japan, made his way to the stage to the beat of rock music.

Despite such a welcome, Kawato is an outsider here, dismissive of the self-congratulation that creeps into conversations about modern robotics. He begins his presentation by shuffling slowly across the stage, imitating how stiffly and deliberately today’s humanoid robots walk. What this suggests, he says, is that scientists don’t really understand how the human brain controls the body. If they did, they could re-create the process in a robot. Indeed, Kawato doesn’t talk about improving robot vision or navigational controls, as many other speakers at the gala do. Instead, he describes the role of brain regions such as the cerebellum and basal ganglia in the acquisition of motor skills, carefully couching his explanations in terms that roboticists understand.

On Kawato’s lapel is a button that reads “I love Robots!” But there is a difference between him and other attendees. Kawato loves robots not because they are cool, but because he believes they can teach him how the human brain works. “Only when we try to reproduce brain functions in artificial machines can we understand the information processing of the brain,” he says. It’s what he calls “understanding the brain by creating the brain.” By programming a robot to reach out and grasp an object, for instance, Kawato hopes to learn the patterns in which electrical signals flow among neurons in the brain to control a human arm.

It’s a surprising and controversial idea. Despite the increasing number of humanlike machines, robots and people are nothing alike. The human brain has billions of neurons interconnected in complex ways that no computer program can yet simulate. But Kawato believes that experiments on humanoid robots can, at least, provide simplified models of what certain groups of neurons in the brain are doing. Then, using advanced imaging techniques, he checks whether the activity of neurons in monkeys and humans matches those models.

“This is very different from the usual justification for building humanoid robots – that they are economically useful or will help take care of the elderly,” says Christopher Atkeson, a robotics expert at Carnegie Mellon. Rather, Kawato’s motivation lies in using robots to gain insights into how people think, make decisions, and interact with the world. That information could help doctors design therapies for patients with brain injuries, strokes, and neurological disorders – even cognitive and behavioral problems. Seeing what it takes to design a socially interactive robot, for example, might motivate a search for areas in the brain that are switched off in cases of autism. (Neural circuits in the basal ganglia are prime candidates.) A robot arm that becomes unstable when feedback signals are delayed might suggest a new source of tremors in the cerebella of Parkinson’s patients.

As a tool for understanding the mind, robots are “extremely valuable,” says Antonio Damasio, head of neurology at the University of Iowa and the author of three books on the brain that have popularized the notion of “embodied intelligence.” “Robots can implement and test how processes like movement can occur,” he says. By extending these models to develop a broader theory of the mind, Damasio adds, “we’ll know more and more about what it takes for, say, human consciousness to operate.”

Lost in Translation
There’s a Japanese proverb that says, “To teach is to learn.” Down the hall from Kawato’s office at ATR, robot school is in session. In one corner, a researcher teaches the humanoid robot DB, short for Dynamic Brain, to interact with people. Built like a good-sized person, 1.9 meters tall and 80 kilograms, DB also moves like one: it’s fast and graceful. The researcher stands in front of the robot, waving around a stuffed dog. DB watches, apparently intently, tilting its head and tracking the toy with its camera eyes. Then it reaches out with a hydraulic arm and pats the dog, a bit clumsily, on the head. A big screen nearby displays what the robot sees, as well as which algorithms it’s running.

But this isn’t just another robot showing off its humanlike skills. Gordon Cheng, head of the humanoid robotics group at ATR, thinks of DB as an experimental subject that eats electricity and bleeds hydraulic fluid. Working with robots, says Cheng, teaches “how the pieces fit together to build a rich system” that can emulate the human brain and body.

To control DB’s arm, for instance, software computes what commands will produce the right sequence of joint movements to achieve a certain goal. Kawato and Cheng believe a similar process happens in the human brain: they think we use “internal models” to calculate relationships between neural signals and the resulting body movements. For example, when you’re about to pick up a cup, neurons in your brain access internal models to figure out what series of signals to send to your shoulder, elbow, and wrist. It’s as if your brain were carrying out calculations every time you drink your coffee.
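For readers who think in code, a minimal sketch can make the idea concrete. The fragment below assumes a drastically simplified arm with two joints moving in a plane; the segment lengths, function names, and geometry are illustrative inventions, not DB’s actual control software. The inverse model answers the question the brain (or the robot) faces when reaching for a cup: which joint angles put the hand where I want it? The forward model makes the opposite prediction.

```python
import math

L1, L2 = 0.3, 0.25  # upper-arm and forearm lengths in meters (illustrative)

def inverse_model(x, y):
    """Given a desired hand position (x, y), return the shoulder and elbow
    angles that put the hand there: a mapping from goal to motor command."""
    d2 = x * x + y * y
    # Law of cosines gives the elbow bend needed to span the distance.
    cos_elbow = (d2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if abs(cos_elbow) > 1:
        raise ValueError("target is out of reach")
    elbow = math.acos(cos_elbow)
    shoulder = math.atan2(y, x) - math.atan2(L2 * math.sin(elbow),
                                             L1 + L2 * math.cos(elbow))
    return shoulder, elbow

def forward_model(shoulder, elbow):
    """Predict where the hand will end up for a given pair of joint angles."""
    x = L1 * math.cos(shoulder) + L2 * math.cos(shoulder + elbow)
    y = L1 * math.sin(shoulder) + L2 * math.sin(shoulder + elbow)
    return x, y

# Reach for a cup 40 cm ahead and 10 cm above the shoulder, then check
# that the forward model predicts the hand arriving where we intended.
angles = inverse_model(0.40, 0.10)
print(angles, forward_model(*angles))
```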

It is a system design that might seem intuitive to a roboticist, but for years most neuroscientists found it ridiculous. How, they asked, could neurons perform such complex computations? They believed the command signals from the brain were much simpler, and that muscles and reflexes – not some abstract model – largely explained motor behaviors. But over the last decade, Kawato has offered strong evidence to the contrary, arguing that internal models are in fact necessary for eye and arm movements and may even be important for interactions with people and with objects in the world.

In practice, however, it’s difficult to draw direct connections between robots and humans. To do so would require the robots and their algorithms to mirror human physiology and neurology as closely as possible. Yet DB’s brain doesn’t even reside in its head; it occupies several racks of computers, and a different scientist is needed to fire up each of the robot’s many behaviors, such as reaching or juggling. How DB carries out a task may or may not have much to do with how a human brain operates. To find out, Kawato’s team is studying how people learn to solve problems.

In experiments conducted in Kawato’s lab, subjects lie in a magnetic-resonance imaging machine and learn to use an unfamiliar tool, a modified computer mouse, to follow a moving target on a screen. Certain areas in the cerebellum light up, indicating increased blood flow in certain clusters of neurons. The researchers believe these neurons represent an internal model of the coordinated actions required for using the tool – much like the ones programmed into DB.

By combining magnetic-resonance imaging, which offers millimeter-level resolution, with electrical and magnetic recording techniques, which resolve brain activity down to milliseconds, Kawato’s group hopes to understand more of the details of what is happening among these neurons. It’s what Kawato calls “mind decoding” – reading out a person’s intent based solely on neural signal patterns. If successful, it would be a breakthrough in understanding how the mind works.
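In code, the decoding step can be illustrated with a toy pattern classifier. The example below is not the method used at ATR; it simply shows, with synthetic stand-in data and the scikit-learn library, how an intent (here, reaching left versus right) might be read out from nothing but a pattern of activity values.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50

# Synthetic stand-in for imaging data: two intents ("reach left" = 0,
# "reach right" = 1), each nudging a different group of voxels above noise.
intent = rng.integers(0, 2, n_trials)
activity = rng.normal(0.0, 1.0, (n_trials, n_voxels))
activity[intent == 0, :10] += 0.8
activity[intent == 1, 10:20] += 0.8

# Decoding: can the activity pattern alone predict the intent?
decoder = LogisticRegression(max_iter=1000)
scores = cross_val_score(decoder, activity, intent, cv=5)
print(f"cross-validated decoding accuracy: {scores.mean():.2f}")
```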

Translating the brain’s messages into language that a robot can understand is a step toward realizing a long-term technological ambition: a remote “brain-machine interface” that lets a user participate in events occurring thousands of kilometers away. A helmet could monitor a person’s brain activity and report it, over the Internet, to a remote humanoid robot; in nearly real time, the person’s actions could be replicated by a digital double. To build the system, researchers will need to look in the brain for specific signals, translate them, transmit the data wirelessly without large delays, and use them to control a device on the other end. The puzzle is far from complete, but Kawato’s mix of neuroscience and robotics could at least snap the first few pieces into place.
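Sketched as a program, that pipeline might look like the loop below. Every function in it is a hypothetical placeholder rather than an existing API; the point is only to show the chain the article describes: record the brain signals, decode them into a command, transmit the command, and keep the whole round trip fast enough to feel like real time.

```python
# Every function here is a hypothetical placeholder, not an existing API.
import json
import socket
import time

def read_brain_signals():
    """Stand-in for the helmet's sensors (say, 64 channels of activity)."""
    return [0.0] * 64

def decode_intent(sample):
    """Stand-in decoder that maps a signal sample to a motor command."""
    return {"joint": "elbow", "angle": 0.3}

def send_to_robot(command, host="robot.example.org", port=9000):
    """Ship the decoded command over the network to the remote humanoid."""
    payload = json.dumps(command).encode()
    with socket.create_connection((host, port), timeout=0.05) as conn:
        conn.sendall(payload)

for _ in range(500):  # run the control loop for a fixed number of cycles
    start = time.monotonic()
    command = decode_intent(read_brain_signals())
    try:
        send_to_robot(command)
    except OSError:
        pass  # a dropped command matters less than a late one
    # Keep the loop near real time: if the round trip grows much beyond
    # tens of milliseconds, the user's movements and the robot's drift apart.
    time.sleep(max(0.0, 0.02 - (time.monotonic() - start)))
```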

Robots ‘R’ Us
Using robots to understand the human brain could also produce more autonomous robots. That may not be saying much. MIT artificial-intelligence pioneer Marvin Minsky says, “Robots today seem uniformly stupid, unable to solve even simple, commonsense problems.” The most successful product from iRobot in Burlington, MA, a leading robotics company, is a vacuum cleaner. Industrial robots paint cars and build microchips but can’t do anything they’re not programmed to do. But there is increasing interest, especially in Japan and Europe, in developing new humanoid robots using insights from neuroscience.

That development has already begun in Kawato’s lab. As part of a five-year, $8 million project, DB is getting an overhaul, based in part on what Kawato has learned from probing the human brain. The new robot – designed, like DB, by Sarcos of Salt Lake City, UT – will be more humanlike in its anatomy, brain architecture, power requirements, and strength. It will have powerful legs that will allow it to walk and run. (By contrast, the current DB can’t walk.) Once the new bot is operational in late 2005, one of its first uses will be as a test platform for studying gait disorders and falls among elderly people.

Kawato is also laying the foundation for a grander collaboration between robotics and neuroscience. Together with Sony and Honda, he is lobbying the Japanese government to help fund a worldwide project to build a humanoid robot that would have the intelligence and capabilities of a five-year-old child. In addition to the technological payoff, says Kawato, the benefits to neuroscience would be immense, though he believes it will take upwards of $500 million a year for 30 years to make it happen.

The evolution of robots into something more humanlike is probably inevitable. Experts agree there is nothing magical about how the brain works, nothing that is too inherently complex to figure out and copy. As Kawato is learning in his lab, the ultimate value in closing the gap between humans and machines might lie in what new generations of robots can teach us about ourselves.
