Adroit Droids

New sensors and software are giving robots a better sense of their “bodies.”
November 1, 2004

After 50 years of research, scientists have yet to build a robot that can learn to manipulate new objects as proficiently as a one-year-old child. Robots don’t react well to new situations; most of their movements must be programmed in advance. Some use sensors to fine-tune their movements in real time, but they generally don’t retain and interpret the sensor data. So while they might navigate a room without bumping into things, they can’t stop to help rearrange the furniture.

But now advances in sensors, software, and computer architecture are beginning to give robots a sense of their “bodies” and of what sorts of actions are safe and useful in their environments. The results could eventually include more effective robotic assistants for the elderly and autonomous bots for exploring battlefields and space.

This summer, one of the world’s most advanced robots passed an important test at NASA’s Johnson Space Center in Houston, TX. The dexterous humanoid robot learned to use tools to tighten bolts on a wheel. Rather than having to be separately programmed for each of several possible situations, the robot showed it could recover if a tool slipped from its grasp or was moved around – and that it was flexible enough in its routine to tighten the bolts in any order requested. “Now, within limits, the robot can adjust to changes in its environment,” says Vanderbilt University electrical-engineering professor Alan Peters, one of the project leaders.

The key advance: a new framework for robot learning. Peters’s software gives the NASA robot, called Robonaut, a short-term memory that lets it keep track of where it is and what it’s doing. By correlating actions like reaching for and grasping a tool with information from its 250 sensors – visual, tactile, auditory – the robot gets a feel for which movements achieve what kinds of goals. It can then apply that information to the acquisition of new skills, such as using a different tool. Maja Mataric, codirector of the University of Southern California’s Robotics Research Lab, calls Peters’s work “important for bringing together research on sensory-motor learning and applying it to real-world, highly complex robots.”
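
To make the idea concrete, here is a minimal sketch of how a robot might correlate actions with sensed outcomes in a short-term memory and reuse those correlations to pick an action for a goal. Everything in it (the SensorimotorMemory class, the action names, the simulated sensor outcomes) is a hypothetical illustration under simple assumptions, not Robonaut's actual software.

```python
# Toy illustration of sensory-motor learning with a short-term memory.
# All names and data are invented for the sketch; this is not NASA code.
from collections import defaultdict
import random

class SensorimotorMemory:
    """Correlates (action, sensed context) pairs with observed outcomes."""

    def __init__(self):
        # For each (action, context), count how often each outcome followed.
        self.episodes = defaultdict(lambda: defaultdict(int))

    def record(self, action, context, outcome):
        """Store one experience: doing `action` in `context` led to `outcome`."""
        self.episodes[(action, context)][outcome] += 1

    def best_action(self, context, goal, actions):
        """Pick the action most often observed to produce `goal` in `context`."""
        def score(action):
            outcomes = self.episodes[(action, context)]
            total = sum(outcomes.values())
            return outcomes[goal] / total if total else 0.0
        return max(actions, key=score)

if __name__ == "__main__":
    memory = SensorimotorMemory()
    actions = ["reach", "grasp", "turn_wrench"]

    # Simulated practice: the robot tries actions and records what its
    # sensors report afterwards (purely made-up training episodes).
    for _ in range(100):
        action = random.choice(actions)
        if action == "turn_wrench":
            outcome = "bolt_tightened" if random.random() < 0.8 else "tool_slipped"
        else:
            outcome = "tool_in_hand" if random.random() < 0.7 else "tool_dropped"
        memory.record(action, context="tool_visible", outcome=outcome)

    # The accumulated correlations then guide action selection for a goal.
    print(memory.best_action("tool_visible", "bolt_tightened", actions))
```

In this toy version, "learning" is just outcome counting; the real system described above integrates hundreds of sensor channels and richer internal state, but the basic loop of acting, sensing the result, and remembering the correlation is the same.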
