Adroit Droids

New sensors and software are giving robots a better sense of their “bodies.”
November 1, 2004

After 50 years of research, scientists have yet to build a robot that can learn to manipulate new objects as proficiently as a one-year-old child. Robots don’t react well to new situations; most of their movements must be programmed in advance. Some use sensors to fine-tune their movements in real time, but they generally don’t retain and interpret the sensor data. So while they might navigate a room without bumping into things, they can’t stop to help rearrange the furniture.

But now advances in sensors, software, and computer architecture are beginning to give robots a sense of their “bodies” and of what sorts of actions are safe and useful in their environments. The results could eventually include more effective robotic assistants for the elderly and autonomous bots for exploring battlefields and space.

This summer, one of the world’s most advanced robots passed an important test at NASA’s Johnson Space Center in Houston, TX. The dexterous humanoid robot learned to use tools to tighten bolts on a wheel. Rather than having to be separately programmed for each of several possible situations, the robot showed it could recover if a tool slipped from its grasp or was moved around – and that it was flexible enough in its routine to tighten the bolts in any order requested. “Now, within limits, the robot can adjust to changes in its environment,” says Vanderbilt University electrical-engineering professor Alan Peters, one of the project leaders.

The key advance: a new framework for robot learning. Peters’s software gives the NASA robot, called Robonaut, a short-term memory that lets it keep track of where it is and what it’s doing. By correlating actions like reaching for and grasping a tool with information from its 250 sensors – visual, tactile, auditory – the robot gets a feel for which movements achieve what kinds of goals. It can then apply that information to the acquisition of new skills, such as using a different tool. Maja Mataric, codirector of the University of Southern California’s Robotics Research Lab, calls Peters’s work “important for bringing together research on sensory-motor learning and applying it to real-world, highly complex robots.”
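The article doesn't detail Robonaut's software, but the core idea it describes (an episodic memory that pairs each action with its sensor context and recalls what worked in similar states) can be sketched in a few lines of Python. Everything below, from the Episode record to the distance metric and the channel names, is a hypothetical illustration, not Robonaut's actual code:

```python
from dataclasses import dataclass


@dataclass
class Episode:
    """One sensory-motor event: the action taken, the fused sensor
    readings that accompanied it, and whether it achieved its goal."""
    action: str                # e.g. "reach", "grasp", "turn_wrench"
    sensors: dict[str, float]  # fused visual/tactile/auditory readings
    success: bool


class ShortTermMemory:
    """Keep a rolling buffer of recent episodes and recall which
    action last succeeded in the most similar sensory state."""

    def __init__(self, capacity: int = 100) -> None:
        self.capacity = capacity
        self.episodes: list[Episode] = []

    def record(self, episode: Episode) -> None:
        self.episodes.append(episode)
        if len(self.episodes) > self.capacity:
            self.episodes.pop(0)  # forget the oldest event

    @staticmethod
    def _similarity(a: dict[str, float], b: dict[str, float]) -> float:
        # Negative squared distance over the sensor channels both share.
        shared = a.keys() & b.keys()
        if not shared:
            return float("-inf")
        return -sum((a[k] - b[k]) ** 2 for k in shared)

    def suggest_action(self, sensors: dict[str, float]) -> str | None:
        """Return the action from the most similar successful episode,
        or None if nothing relevant has been remembered yet."""
        successes = [e for e in self.episodes if e.success]
        if not successes:
            return None
        best = max(successes, key=lambda e: self._similarity(sensors, e.sensors))
        return best.action


# Toy usage: a firm grip succeeded before, so a similar state recalls "grasp".
memory = ShortTermMemory()
memory.record(Episode("grasp", {"grip_force": 0.8, "tool_in_view": 1.0}, True))
memory.record(Episode("grasp", {"grip_force": 0.1, "tool_in_view": 1.0}, False))
print(memory.suggest_action({"grip_force": 0.7, "tool_in_view": 1.0}))  # "grasp"
```

A real system would fuse hundreds of continuous sensor streams and learn a far richer state representation, but even this toy version captures the shift described above: the robot retains and interprets its sensor data rather than discarding it after each motion.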
