
Robot Toddler Learns to Stand by “Imagining” How to Do It

Instead of being programmed, a robot uses brain-inspired algorithms to “imagine” doing tasks before trying them in the real world.
November 6, 2015

Like many toddlers, Darwin sometimes looks a bit unsteady on its feet. But with each clumsy motion, the humanoid robot is demonstrating an important new way for androids to deal with challenging or unfamiliar environments. The robot learns to perform a new task by using a process somewhat similar to the neurological processes that underpin childhood learning.

Darwin tries moving its torso around under the control of several neural networks.

Darwin lives in the lab of Pieter Abbeel, an associate professor at the University of California, Berkeley. When I saw the robot a few weeks ago, it was suspended from a camera tripod by a piece of rope, looking a bit tragic. A little while earlier, Darwin had been wriggling around on the end of the rope, trying to work out how best to move its limbs in order to stand up without falling over.

Darwin’s motions are controlled by several simulated neural networks—algorithms that mimic the way learning happens in a biological brain, as connections between neurons strengthen and weaken over time in response to input. The approach relies on especially complex neural networks, known as deep-learning networks, which have many layers of simulated neurons.

For the robot to learn how to stand and twist its body, for example, it first performs a series of simulations in order to train a high-level deep-learning network how to perform the task—something the researchers compare to an “imaginary process.” This provides overall guidance for the robot, while a second deep-learning network is trained to carry out the task while responding to the dynamics of the robot’s joints and the complexity of the real environment. The second network is required because when the first network tries, for example, to move a leg, the friction experienced at the point of contact with the ground may throw it off completely, causing the robot to fall.
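The division of labor between the two networks can be sketched roughly as follows. This is an illustrative toy, not the researchers' actual architecture: the random weights stand in for a network trained in simulation, and the correction step stands in for the second network that adapts to real joint dynamics at runtime.

```python
import numpy as np

rng = np.random.default_rng(0)

def high_level_policy(body_state, W):
    """'Imagined' plan: a network trained in simulation maps the robot's
    body state to target joint angles (tanh keeps the outputs bounded)."""
    return np.tanh(W @ body_state)

def low_level_correction(target_angles, sensed_angles, gain):
    """Runtime adaptation: nudge the motor command from where the joints
    actually are toward the high-level targets, compensating for effects
    like friction that the simulation did not capture."""
    return sensed_angles + gain * (target_angles - sensed_angles)

# Toy 3-joint setup; weights and gain are illustrative stand-ins.
W = rng.standard_normal((3, 3)) * 0.5
body_state = np.array([0.2, -0.1, 0.05])

target = high_level_policy(body_state, W)   # plan from the "imaginary process"
sensed = np.zeros(3)                        # joints stuck short of the plan
command = low_level_correction(target, sensed, gain=0.5)
```

Each control step, the command moves the joints partway toward the imagined plan rather than trusting the plan blindly, which is what lets the real robot absorb surprises like slipping at the point of contact.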

Darwin the robot performs various actions after virtual and real-world learning.

The researchers had the robot learn to stand, to move its hand to perform reaching motions, and to stay upright when the ground beneath it tilts.

“It practices in simulation for about an hour,” says Igor Mordatch, a postdoctoral researcher at UC Berkeley who carried out the study. “Then at runtime it’s learning on the fly how not to slip.”

Abbeel’s group has previously shown how deep learning can enable a robot to perform a task, such as passing a toy building block through a shaped hole, through a process of trial and error. The new approach is important because it may not always be possible for a robot to indulge in an extensive period of real-world testing. And simulations lack the complexities found in the real world, where small discrepancies can cascade into catastrophic failure for a robot.

“We’re trying to be able to deal with more variability,” says Abbeel. “Just even a little variability beyond what it was designed for makes it really hard to make it work.”

The new technique could prove useful for robots working in all sorts of real environments, but it might prove especially useful for achieving more graceful legged locomotion. The conventional approach is to design an algorithm that models the dynamics of a process such as walking or running (see “The Robots Walking This Way”). But such models can struggle with variation in the real world, as many of the humanoid robots in the DARPA Robotics Challenge demonstrated by falling over when walking on sand, or by unbalancing themselves when reaching out to grasp something (see “Why Robots, and Humans, Struggled with DARPA’s Challenge”). “It was a bit of a reality check,” Abbeel says. “That’s what happens in the real world.”

Dieter Fox, a professor in the computer science and engineering department at the University of Washington who specializes in robot perception and control, says neural network learning has huge potential in robotics. “I’m very excited about this whole research direction,” Fox says. “The problem is always if you want to act in the real world. Models are imperfect. Where machine learning, and especially deep learning comes in, is learning from the real-world interactions of the system.”
