Making robots self-aware could be the key to enabling them to become more resilient to damage, according to roboticists in New York.
They have designed a robot that is capable of building internal models of its own body to enable it to sense and recover from damage. “It continuously models itself and updates those models on the fly to reflect the current state of its body,” says Josh Bongard, who carried out the research with colleagues at Cornell University, in Ithaca, NY.
This is not the first time robots have used sensors to monitor their bodies and recover from damage, says Bongard, who is now based at the University of Vermont, in Burlington. His innovation is in the way the robots recover, he says.
The researchers hope that making robots self-aware in this way will make them better able to cope when operating in dangerous or difficult environments.
The robot creates a self-model by checking the position of various parts of its body, “and then [using] those models to internally rehearse behaviors before trying them out in reality,” Bongard says.
It’s a very original idea, says Andy Tyrrell, an intelligent-systems researcher and expert in self-repairing systems at the University of York, UK. As robots are made increasingly complex, the idea of enabling them to perform continuous self-modeling becomes very attractive, he says.
The greatest challenge for robots is usually surviving their environment. Typically, roboticists handle this by creating maps or models of the robots’ surroundings. To be effective, this usually has to be an ongoing process as the robot’s environment changes, or as its position within the environment alters.
But Bongard believes this modeling idea could work equally well for the robots themselves. Robots can change either through damage or degradation, which is why it is important to make them self-aware, he says. And while sensors monitoring limb movement can help detect damage, they don’t tell the robot how it needs to adapt to complete its mission.
Working with colleagues Hod Lipson and Victor Zykov at Cornell, Bongard built a four-legged robot that tracks its own movement via tilt and angle sensors in its joints. Initially, the robot doesn’t know how it has been assembled, says Bongard. So to create an internal model of its own structure, it first has to go through a process of sending signals to its motors while simultaneously monitoring its sensors. This information is then fed into a type of optimization program called a genetic algorithm, which uses a digital version of natural selection to try to work out how the robot is assembled.
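The shape of that first step can be sketched in a few lines of code. The sketch below is purely illustrative (the researchers' actual system evolves full physical simulations of the robot's body, not the toy linear model used here): a genetic algorithm searches over candidate body models, scoring each by how well it predicts the sensor readings observed after a set of motor commands.

```python
import random

random.seed(0)  # for reproducibility of this sketch

# Toy stand-in for the robot's unknown body: a few joint parameters
# that determine how motor commands map to sensor readings. Here that
# mapping is linear for simplicity.
TRUE_PARAMS = [0.8, -0.3, 1.2, 0.5]

def predict(params, command):
    # What a candidate body model says the sensors should report
    # for a given motor command.
    return sum(p * c for p, c in zip(params, command))

# Motor commands the robot sent out, and the sensor values it observed.
commands = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(20)]
observed = [predict(TRUE_PARAMS, c) for c in commands]

def fitness(params):
    # A candidate model is better the closer its predictions come
    # to what the sensors actually reported (negated squared error).
    return -sum((predict(params, c) - o) ** 2
                for c, o in zip(commands, observed))

def mutate(params, scale=0.1):
    return [p + random.gauss(0, scale) for p in params]

# Simple genetic algorithm with elitism: keep the best candidate
# models each generation and fill the population with their mutants.
population = [[random.uniform(-2, 2) for _ in range(4)] for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(20)]

best_model = max(population, key=fitness)
```

After enough generations, `best_model` closely matches the hidden parameters, which is the digital-selection step the researchers use to let the robot work out how it has been assembled.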
Once it has figured this out, the robot uses another genetic algorithm to generate possible gaits, so it can move. But rather than testing each candidate gait on its actual body, which could take considerable time and potentially do more harm than good, the robot uses its internal model to act out the movements first and determine which is the most efficient.
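This second search can be sketched the same way. Everything below is hypothetical shorthand (a real self-model is a physics simulation, and a gait is far more than a vector of joint amplitudes), but it shows the key point: every candidate gait is scored inside the model, and only the winner is ever sent to the physical motors.

```python
import random

random.seed(1)  # for reproducibility of this sketch

# Hypothetical stand-in for the learned self-model: it predicts how
# far the robot would travel with a given gait, without moving the
# real hardware. A "gait" here is just a vector of joint amplitudes,
# and the model arbitrarily favors moderate, balanced amplitudes.
def self_model_distance(gait):
    return sum(a * (1.0 - a) for a in gait)

def random_gait(n_joints=4):
    return [random.random() for _ in range(n_joints)]

def mutate(gait):
    # Perturb each amplitude slightly, clamped to the valid range.
    return [min(1.0, max(0.0, a + random.gauss(0, 0.05))) for a in gait]

# Genetic algorithm over gaits, evaluated entirely inside the
# internal model rather than on the robot's body.
population = [random_gait() for _ in range(20)]
for generation in range(100):
    population.sort(key=self_model_distance, reverse=True)
    elite = population[:5]
    population = elite + [mutate(random.choice(elite)) for _ in range(15)]

best_gait = max(population, key=self_model_distance)
# Only best_gait would now be executed on the real motors.
```

Because the rehearsal happens in the model, a damaged robot can discard hundreds of bad gaits without risking further harm to its body.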
The researchers, who published their results in the journal Science, showed that when they shortened one of the robot’s legs, it adapted its gait more effectively when it used an internal model than when it had none.
But some remain unconvinced that there is a genuine benefit to using internal models in this way. It is unlikely that cockroaches have internal models of themselves, says Inman Harvey, a roboticist at the University of Sussex, in Brighton, UK. “And yet if a cockroach’s leg comes off, it manages to change from a six-legged gait to a five-legged one,” he says. It may turn out that in engineering terms, there is an advantage, but that has yet to be shown, Harvey says.
According to Bongard, having a self-model is the only way that he and his colleagues can internally rehearse new behaviors before actually trying them out in reality. In real situations, the robot itself is changing all the time. “The strength of the motors and the reliability of the sensors gently degrade over time, and the robot would need to discover and update this information on its own,” Bongard says. A human operator could not supply this reliably, he says, because the information is too subtle and would have to be updated continuously.
“Self modelling is important,” says Igor Aleksander, a neural-systems engineer at Imperial College London, UK. In order to plan its actions, a robot does need to know where it is and what its limbs are up to. However, the danger, Aleksander says, is to read too much into this and start attributing some form of consciousness to the robot.
Bongard agrees and says that this sort of self-awareness should not be confused with consciousness. The ability to build up an understanding of one’s own body seems sufficient to explain a lot of human behavior, without having to resort to the mysterious concept of consciousness, he says.