MIT Technology Review

Adroit Droids

New sensors and software are giving robots a better sense of their “bodies.”

After 50 years of research, scientists have yet to build a robot that can learn to manipulate new objects as proficiently as a one-year-old child. Robots don’t react well to new situations; most of their movements must be programmed in advance. Some use sensors to fine-tune their movements in real time, but they generally don’t retain and interpret the sensor data. So while they might navigate a room without bumping into things, they can’t stop to help rearrange the furniture.

But now advances in sensors, software, and computer architecture are beginning to give robots a sense of their “bodies” and of what sorts of actions are safe and useful in their environments. The results could eventually include more effective robotic assistants for the elderly and autonomous bots for exploring battlefields and space.


This summer, one of the world’s most advanced robots passed an important test at NASA’s Johnson Space Center in Houston, TX. The dexterous humanoid robot learned to use tools to tighten bolts on a wheel. Rather than having to be separately programmed for each of several possible situations, the robot showed it could recover if a tool slipped from its grasp or was moved around – and that it was flexible enough in its routine to tighten the bolts in any order requested. “Now, within limits, the robot can adjust to changes in its environment,” says Vanderbilt University electrical-engineering professor Alan Peters, one of the project leaders.

The key advance: a new framework for robot learning. Peters’s software gives the NASA robot, called Robonaut, a short-term memory that lets it keep track of where it is and what it’s doing. By correlating actions like reaching for and grasping a tool with information from its 250 sensors – visual, tactile, auditory – the robot gets a feel for which movements achieve what kinds of goals. It can then apply that information to the acquisition of new skills, such as using a different tool. Maja Mataric, codirector of the University of Southern California’s Robotics Research Lab, calls Peters’s work “important for bringing together research on sensory-motor learning and applying it to real-world, highly complex robots.”
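
In rough outline, the idea can be sketched in a few lines of Python. This is an illustrative toy, not Peters's actual software: the class, the three-element sensor vectors, and the nearest-neighbor matching are assumptions standing in for a system that correlates readings from 250 sensor channels.

# Toy sketch of sensory-motor learning with a short-term memory.
# All names and numbers are hypothetical, not Robonaut's real code.

import math

class SensorimotorMemory:
    """Stores (sensors, action, goal) episodes and recalls the action
    whose recorded sensory context best matches the current one."""

    def __init__(self):
        self.episodes = []  # each entry: (sensor_vector, action, goal)

    def record(self, sensors, action, goal):
        # Remember which action, taken in this sensory context,
        # achieved which goal.
        self.episodes.append((list(sensors), action, goal))

    def recall(self, sensors, goal):
        # For the desired goal, return the remembered action taken in
        # the most similar sensory context (nearest neighbor).
        candidates = [(s, a) for s, a, g in self.episodes if g == goal]
        if not candidates:
            return None
        def distance(s):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(s, sensors)))
        return min(candidates, key=lambda sa: distance(sa[0]))[1]

# The robot records what worked, then reuses it in a new but similar
# situation -- for example, grasping a slightly different tool.
memory = SensorimotorMemory()
memory.record(sensors=[0.9, 0.1, 0.0], action="close_gripper", goal="grasp")
memory.record(sensors=[0.1, 0.8, 0.2], action="rotate_wrist", goal="tighten")

print(memory.recall(sensors=[0.85, 0.15, 0.05], goal="grasp"))  # close_gripper

The point of the sketch is the loop: record which action, in which sensory context, achieved which goal, then fall back on the closest remembered match when a similar situation comes up.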
