Lost in Translation
There’s a Japanese proverb that says, “To teach is to learn.” Down the hall from Kawato’s office at ATR, robot school is in session. In one corner, a researcher teaches the humanoid robot DB, short for Dynamic Brain, to interact with people. Built like a good-sized person, 1.9 meters tall and 80 kilograms, DB also moves like one: it’s fast and graceful. The researcher stands in front of the robot, waving around a stuffed dog. DB watches, apparently intently, tilting its head and tracking the toy with its camera eyes. Then it reaches out with a hydraulic arm and pats the dog, a bit clumsily, on the head. A big screen nearby displays what the robot sees, as well as which algorithms it’s running.

But this isn’t just another robot showing off its humanlike skills. Gordon Cheng, head of the humanoid robotics group at ATR, thinks of DB as an experimental subject that eats electricity and bleeds hydraulic fluid. Working with robots, says Cheng, teaches “how the pieces fit together to build a rich system” that can emulate the human brain and body.

To control DB’s arm, for instance, software computes what commands will produce the right sequence of joint movements to achieve a certain goal. Kawato and Cheng believe a similar process happens in the human brain: they think we use “internal models” to calculate relationships between neural signals and the resulting body movements. For example, when you’re about to pick up a cup, neurons in your brain access internal models to figure out what series of signals to send to your shoulder, elbow, and wrist. It’s as if your brain were carrying out calculations every time you drink your coffee.
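
The idea behind such an internal model can be sketched very roughly in code. The toy two-joint arm, its link lengths, and the Jacobian-transpose update below are assumptions chosen purely for illustration; this is not the controller that actually drives DB:

```python
# Illustrative sketch only: a toy "internal model" for a two-joint planar arm.
# Link lengths, gains, and the Jacobian-transpose rule are assumptions for
# illustration, not ATR's actual DB controller.
import numpy as np

L1, L2 = 0.45, 0.45  # assumed link lengths in meters

def forward_model(q):
    """Forward kinematics: joint angles -> predicted hand position."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    """Sensitivity of the hand position to small joint-angle changes."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def inverse_model(target, q=np.array([0.3, 0.3]), gain=0.3, steps=300):
    """Iteratively find joint angles whose predicted hand position reaches the target."""
    for _ in range(steps):
        error = target - forward_model(q)       # compare prediction with the goal
        q = q + gain * jacobian(q).T @ error    # Jacobian-transpose correction
    return q

# Example: command the "hand" toward a cup 60 cm ahead and 20 cm to the side.
q_goal = inverse_model(np.array([0.6, 0.2]))
print(q_goal, forward_model(q_goal))
```

The design choice mirrors the article's claim: the model predicts what a command will do, compares the prediction with the goal, and adjusts until the two agree.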

It is a system design that might seem intuitive to a roboticist, but for years most neuroscientists found it ridiculous. How, they asked, could neurons perform such complex computations? They believed the command signals from the brain were much simpler, and that muscles and reflexes – not some abstract model – largely explained motor behaviors. But over the last decade, Kawato has offered strong evidence to the contrary, arguing that internal models are in fact necessary for eye and arm movements and may even be important for interactions with people and with objects in the world.

In practice, however, it’s difficult to draw direct connections between robots and humans. To do so would require the robots and their algorithms to mirror human physiology and neurology as closely as possible. Yet DB’s brain doesn’t even reside in its head; it occupies several racks of computers, and a different scientist is needed to fire up each of the robot’s many behaviors, such as reaching or juggling. How DB carries out a task may or may not have much to do with how a human brain operates. To find out, Kawato’s team is studying how people learn to solve problems.

In experiments conducted in Kawato’s lab, subjects lie in a magnetic-resonance imaging machine and learn to use an unfamiliar tool, a modified computer mouse, to follow a moving target on a screen. Certain areas in the cerebellum light up, indicating increased blood flow in particular clusters of neurons. The researchers believe these neurons represent an internal model of the coordinated actions required for using the tool – much like the ones programmed into DB.

By combining magnetic-resonance imaging, which offers millimeter-level resolution, with electrical and magnetic recording techniques, which resolve brain activity down to milliseconds, Kawato’s group hopes to understand more of the details of what is happening among these neurons. It’s what Kawato calls “mind decoding” – reading out a person’s intent based solely on neural signal patterns. If successful, it would be a breakthrough in understanding how the mind works.
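
Stripped to its statistical core, “mind decoding” amounts to matching a new pattern of brain activity against patterns learned from earlier trials. The sketch below uses simulated feature vectors and a nearest-centroid rule purely for illustration; it is not the lab’s fMRI or magnetoencephalography analysis pipeline:

```python
# Illustrative sketch only: "mind decoding" reduced to pattern matching.
# The data are simulated feature vectors standing in for brain activity;
# the nearest-centroid rule is an assumption chosen for simplicity.
import numpy as np

rng = np.random.default_rng(0)
INTENTS = ["reach_left", "reach_right", "rest"]

def simulate_trials(n_per_intent=50, n_features=20):
    """Fake 'brain activity': each intent gets its own mean pattern plus noise."""
    X, y = [], []
    for label, intent in enumerate(INTENTS):
        pattern = rng.normal(size=n_features)          # assumed signature pattern
        trials = pattern + 0.8 * rng.normal(size=(n_per_intent, n_features))
        X.append(trials)
        y.extend([label] * n_per_intent)
    return np.vstack(X), np.array(y)

def fit_centroids(X, y):
    """Learn one average activity pattern per intended action."""
    return np.stack([X[y == k].mean(axis=0) for k in range(len(INTENTS))])

def decode(centroids, x):
    """Predict intent: which learned pattern is closest to this trial's activity?"""
    distances = np.linalg.norm(centroids - x, axis=1)
    return INTENTS[int(np.argmin(distances))]

X, y = simulate_trials()
centroids = fit_centroids(X, y)
print(decode(centroids, X[0]))   # decodes a trial drawn from the "reach_left" class
```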

Translating the brain’s messages into language that a robot can understand is a step toward realizing a long-term technological ambition: a remote “brain-machine interface” that lets a user participate in events occurring thousands of kilometers away. A helmet could monitor a person’s brain activity and report it, over the Internet, to a remote humanoid robot; in nearly real time, the person’s actions could be replicated by a digital double. To build the system, researchers will need to look in the brain for specific signals, translate them, transmit the data wirelessly without large delays, and use them to control a device on the other end. The puzzle is far from complete, but Kawato’s mix of neuroscience and robotics could at least snap the first few pieces into place.
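
The four stages the paragraph lists can be roughed out as a stubbed pipeline. The signal format, the toy decoding rule, and the JSON framing below are assumptions made for illustration, not a working brain-machine interface:

```python
# Illustrative sketch only: read signals, translate them, transmit them, and
# drive a remote device. Every detail here is a placeholder assumption.
import json
import time
import numpy as np

def read_brain_signals():
    """Stage 1: sample activity from the helmet's sensors (simulated here)."""
    return np.random.default_rng().normal(size=32)

def translate_to_command(signals):
    """Stage 2: decode the signals into a command the robot understands."""
    # Toy rule: average activity sets the reach direction (an assumption, not real decoding).
    return {"action": "reach", "direction": float(np.tanh(signals.mean()))}

def transmit(command):
    """Stage 3: serialize and send; a real system would stream this with low latency."""
    packet = json.dumps(command).encode("utf-8")
    return packet  # stand-in for a network send

def actuate_remote_robot(packet):
    """Stage 4: the remote humanoid applies the command."""
    command = json.loads(packet.decode("utf-8"))
    print(f"robot executes {command['action']} toward {command['direction']:+.2f}")

# One tick of the near-real-time loop the article describes.
start = time.perf_counter()
actuate_remote_robot(transmit(translate_to_command(read_brain_signals())))
print(f"end-to-end latency: {(time.perf_counter() - start) * 1000:.1f} ms")
```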
