
Google taught this robotic dog to learn new tricks by imitating a real one

April 3, 2020

Google researchers are using imitation learning to teach autonomous robots how to pace, spin, and move in more agile ways.

What they did: Using motion capture data recorded by sensors attached to a real dog, the researchers taught a quadruped robot named Laikago several movements that are hard to achieve through traditional hand-coded robotic controls.

How they did it: First, they used the motion data from the real dog to construct simulations of each maneuver, including a dog trot, a side-step, and … a dog version of the classic ’80s dance move, the running man. (The last one was not, in fact, performed by the real dog. The researchers manually animated the simulated dog to dance to see if that would translate to the robot as well.) They then matched key joints on the simulated dog to those on the robot, so the simulated robot moved the same way as the animal. Using reinforcement learning, the simulated robot then learned to stabilize the movements and correct for differences in weight distribution and design. Finally, the researchers ported the final control algorithm onto a physical robot in the lab, though some moves, like the running man, weren’t entirely successful.
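Motion-imitation systems of this kind typically train the controller with a pose-tracking reward: at each timestep the robot is rewarded for how closely its joint angles match the reference motion from the capture clip. The sketch below is a minimal, hypothetical illustration of that idea (the function name, joint-angle vectors, and `scale` parameter are assumptions for the example, not details from the paper):

```python
import math

def imitation_reward(robot_pose, reference_pose, scale=5.0):
    """Hypothetical pose-tracking reward for motion imitation.

    robot_pose / reference_pose: sequences of joint angles (radians),
    one entry per matched joint on the simulated robot and the dog.
    Returns 1.0 for a perfect match, decaying toward 0 as error grows.
    """
    # Sum of squared joint-angle differences between robot and reference
    error = sum((r - m) ** 2 for r, m in zip(robot_pose, reference_pose))
    # Exponentiated negative error keeps the reward bounded in (0, 1]
    return math.exp(-scale * error)
```

An RL algorithm would maximize this reward over a whole clip, which is what lets the simulated robot "stabilize" the motion despite its different weight distribution.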

Why it matters: Teaching robots the complex and agile movements necessary to navigate the real world has been a long-standing challenge in the field. Imitation learning of this kind instead allows such machines to easily borrow the agility of animals and even humans.

Future work: Jason Peng, the lead author on the paper, says there are still a number of challenges to overcome. The heaviness of the robot limits its ability to learn certain maneuvers, like big jumps or fast running. Additionally, capturing motion sensor data from animals isn’t always possible. It can be incredibly expensive and requires the animal’s cooperation. (A dog is friendly; a cheetah, not so much.) The team plans to try using animal videos instead, which would make their technique far more accessible and scalable.


