On a crisp October day last year, Carnegie Mellon University’s Robotics Institute kicked off its 25th-anniversary celebration, as the world’s robotics experts came to Pittsburgh to see C-3PO, Shakey the robot, Honda’s Asimo, and Astro Boy inducted into the Robot Hall of Fame. The next day saw demonstrations of running, snaking, and bagpipe-playing bots. On the third day, it was Mitsuo Kawato’s turn to speak. The lights went down, and the director of the ATR Computational Neuroscience Laboratories in Kyoto, Japan, made his way to the stage to the beat of rock music.
Despite such a welcome, Kawato is an outsider here, dismissive of the self-congratulation that creeps into conversations about modern robotics. He begins his presentation by shuffling slowly across the stage, imitating how stiffly and deliberately today’s humanoid robots walk. What this suggests, he says, is that scientists don’t really understand how the human brain controls the body. If they did, they could re-create the process in a robot. Indeed, Kawato doesn’t talk about improving robot vision or navigational controls, as many other speakers at the gala do. Instead, he describes the role of brain regions such as the cerebellum and basal ganglia in the acquisition of motor skills, carefully couching his explanations in terms that roboticists understand.
On Kawato’s lapel is a button that reads “I love Robots!” But there is a difference between him and other attendees. Kawato loves robots not because they are cool, but because he believes they can teach him how the human brain works. “Only when we try to reproduce brain functions in artificial machines can we understand the information processing of the brain,” he says. It’s what he calls “understanding the brain by creating the brain.” By programming a robot to reach out and grasp an object, for instance, Kawato hopes to learn the patterns in which electrical signals flow among neurons in the brain to control a human arm.
It’s a surprising and controversial idea. Despite the increasing number of humanlike machines, robots and people are nothing alike. The human brain has billions of neurons interconnected in complex ways that no computer program can yet simulate. But Kawato believes that experiments on humanoid robots can, at least, provide simplified models of what certain groups of neurons in the brain are doing. Then, using advanced imaging techniques, he looks at whether brain cells in monkeys and humans accord with the models.
“This is very different from the usual justification for building humanoid robots – that they are economically useful or will help take care of the elderly,” says Christopher Atkeson, a robotics expert at Carnegie Mellon. Rather, Kawato’s motivation lies in using robots to gain insights into how people think, make decisions, and interact with the world. That information could help doctors design therapies for patients with brain injuries, strokes, and neurological disorders – even cognitive and behavioral problems. Seeing what it takes to design a socially interactive robot, for example, might motivate a search for areas of the brain that are switched off in cases of autism. (Neural circuits in the basal ganglia are prime candidates.) A robot arm that becomes unstable when feedback signals are delayed might suggest a new source of tremors in the cerebella of Parkinson’s patients.
As a tool for understanding the mind, robots are “extremely valuable,” says Antonio Damasio, head of neurology at the University of Iowa and the author of three books on the brain that have popularized the notion of “embodied intelligence.” “Robots can implement and test how processes like movement can occur,” he says. By extending these models to develop a broader theory of the mind, Damasio adds, “we’ll know more and more about what it takes for, say, human consciousness to operate.”