Mirroring Motion

Robots gain agility by watching their makers.

At the Advanced Telecommunications Research Institute in Kyoto, Japan, a million-dollar humanoid robot is learning to play air hockey. Using stereo video cameras, “DB” watches as a researcher strikes the puck with his paddle. Then, using its hydraulically powered arm, the robot imitates the motion. After a few false starts, DB is able to hit the puck, and its movements are surprisingly graceful. This sort of “imitation learning” is yielding smarter, more adaptive robots for physical therapy, search-and-rescue missions, and space applications.

Imitation learning combines artificial-intelligence software with cutting-edge neuroscience. To learn arm movements for air hockey, or even for hitting a tennis ball, the robot uses machine vision algorithms to determine the position and velocity of a person’s limbs and maps this information to its hydraulic joints. Comparing its own movement with the original, the robot makes adjustments in real time. This is a more efficient way to make a robot perform human tasks than brute-force programming or trial and error, says computer scientist Stefan Schaal of the University of Southern California. Schaal is teaching DB new tricks in collaboration with the Japanese research group, Carnegie Mellon University, and Sarcos, a robotics company in Salt Lake City.
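In outline, that loop is simple: read the demonstrated pose, compare it with the robot’s own, and correct a fraction of the difference at each control step. The short Python sketch below illustrates the feedback idea in joint space; the function name track_demonstration, the gain value, and the two-joint demonstration are illustrative assumptions, not the actual DB control software.

import numpy as np

def track_demonstration(demo_joint_angles, gain=0.5, steps_per_target=5):
    """Follow a demonstrated joint-angle trajectory with simple
    proportional feedback: compare the robot's pose with the
    demonstration and correct a fraction of the error each step."""
    pose = demo_joint_angles[0].copy()      # start at the demo's first pose
    executed = [pose.copy()]
    for target in demo_joint_angles[1:]:
        for _ in range(steps_per_target):
            error = target - pose           # compare with the original motion
            pose = pose + gain * error      # adjust toward it in small steps
        executed.append(pose.copy())
    return np.array(executed)

# Hypothetical demonstration: a two-joint arm sweeping through a stroke.
t = np.linspace(0.0, 1.0, 20)
demo = np.stack([np.sin(np.pi * t), 0.5 * np.cos(np.pi * t)], axis=1)
replay = track_demonstration(demo)
print("final tracking error:", np.abs(replay[-1] - demo[-1]).max())

A real system would, of course, first have to recover the demonstrator’s limb positions from stereo video and translate them into the robot’s joint coordinates; the sketch assumes that conversion has already been done.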

Until recently, however, “there was a large component of Simon Says with these robots. They didn’t understand what they were doing,” says Carnegie Mellon robotics expert Chris Atkeson. Changing one of their familiar tasks even slightly, in other words, would befuddle the machines. To add flexibility, researchers are now teaching robots to divide movements they’ve seen before into pieces that serve intermediate goals. Robots can learn to splice these building blocks together to adapt their behaviors (reaching, balancing, and even walking) to new situations. Within a few years, says Schaal, such resourceful robots could perform hazardous rescue missions, analyze stroke patients’ movements to help them refine motor skills, and replace space-walking astronauts on repair missions.
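A toy illustration of that building-block idea: each primitive produces a short trajectory segment toward a subgoal, and a new behavior is assembled by splicing segments end to end. The reach and splice functions below are hypothetical simplifications for the sketch, not the researchers’ methods, which use far richer movement primitives.

import numpy as np

def reach(start, goal, steps=10):
    """Primitive: move linearly from the current pose toward a subgoal."""
    alphas = np.linspace(0.0, 1.0, steps)[:, None]
    return (1 - alphas) * start + alphas * goal

def splice(subgoals, start):
    """Chain primitives so each building block begins where the last ended."""
    pose, path = np.asarray(start, float), []
    for goal in subgoals:
        segment = reach(pose, np.asarray(goal, float))
        path.append(segment)
        pose = segment[-1]                  # the next block starts here
    return np.vstack(path)

# Re-sequencing the same building blocks yields a new behavior:
trajectory = splice([[1.0, 0.0], [1.0, 1.0], [0.0, 0.5]], start=[0.0, 0.0])
print(trajectory.shape)                     # (30, 2): three spliced segments

The point of the decomposition is exactly this reusability: once a robot has a library of primitives, adapting to a new situation means choosing a new sequence of subgoals rather than relearning a whole movement from scratch.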
