
After 50 years of research, scientists have yet to build a robot that can learn to manipulate new objects as proficiently as a one-year-old child. Robots don’t react well to new situations; most of their movements must be programmed in advance. Some use sensors to fine-tune their movements in real time, but they generally don’t retain and interpret the sensor data. So while they might navigate a room without bumping into things, they can’t stop to help rearrange the furniture.

But now advances in sensors, software, and computer architecture are beginning to give robots a sense of their “bodies” and of what sorts of actions are safe and useful in their environments. The results could eventually include more effective robotic assistants for the elderly and autonomous bots for exploring battlefields and space.

This summer, one of the world’s most advanced robots passed an important test at NASA’s Johnson Space Center in Houston, TX. The dexterous humanoid robot learned to use tools to tighten bolts on a wheel. Rather than having to be separately programmed for each of several possible situations, the robot showed it could recover if a tool slipped from its grasp or was moved around – and that it was flexible enough in its routine to tighten the bolts in any order requested. “Now, within limits, the robot can adjust to changes in its environment,” says Vanderbilt University electrical-engineering professor Alan Peters, one of the project leaders.
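The article doesn't show any of the control code, but the behavior it describes, checking the sensors to see whether a grasp actually held and retrying instead of following a fixed script, can be illustrated with a toy sense-act-retry loop. Everything in this Python sketch (the function names, the slip probability, the bolt numbering) is invented for illustration and is not Robonaut's actual software.

```python
import random

def tool_in_grasp():
    """Hypothetical tactile check; stands in for Robonaut's real grip sensing."""
    return random.random() > 0.3  # in this toy model, the tool slips 30% of the time

def tighten_bolt(bolt, max_attempts=3):
    """Sense-act-recover loop: rather than assuming the grasp succeeded,
    re-check the sensors and re-grasp, in the spirit of the demo described above."""
    for attempt in range(1, max_attempts + 1):
        if tool_in_grasp():
            print(f"Tightening bolt {bolt} (attempt {attempt})")
            return True
        print(f"Tool slipped; re-grasping before bolt {bolt}")
    return False

# Bolts can be requested in any order; nothing in the loop assumes a fixed sequence.
for bolt in [4, 1, 3, 2]:
    tighten_bolt(bolt)
```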

The key advance: a new framework for robot learning. Peters’s software gives the NASA robot, called Robonaut, a short-term memory that lets it keep track of where it is and what it’s doing. By correlating actions like reaching for and grasping a tool with information from its 250 sensors – visual, tactile, auditory – the robot gets a feel for which movements achieve what kinds of goals. It can then apply that information to the acquisition of new skills, such as using a different tool. Maja Mataric, codirector of the University of Southern California’s Robotics Research Lab, calls Peters’s work “important for bringing together research on sensory-motor learning and applying it to real-world, highly complex robots.”
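Again, no implementation details are published, but the core idea, a bounded episodic memory that pairs each action with before-and-after sensor snapshots so the robot can judge which movements tend to achieve a goal, can be sketched in a few lines of Python. The Episode fields, action names, and sensor channels below are assumptions made for the sake of the example.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Episode:
    """One remembered sensory-motor event: what the robot did and what it sensed."""
    action: str            # e.g. "reach", "grasp", "turn_wrench" (hypothetical labels)
    sensors_before: dict   # sensor channel -> reading, standing in for 250 real channels
    sensors_after: dict
    succeeded: bool

class ShortTermMemory:
    """A bounded buffer of recent episodes, in the spirit of the article's
    short-term memory that correlates actions with sensor data."""
    def __init__(self, capacity=50):
        self.episodes = deque(maxlen=capacity)

    def record(self, episode):
        self.episodes.append(episode)

    def success_rate(self, action):
        """How often has this action recently achieved its goal?"""
        relevant = [e for e in self.episodes if e.action == action]
        if not relevant:
            return None  # no experience with this action yet
        return sum(e.succeeded for e in relevant) / len(relevant)

# Usage sketch: a grasp slips once, then succeeds; the memory reflects both outcomes.
memory = ShortTermMemory()
memory.record(Episode("grasp", {"grip_force": 0.0}, {"grip_force": 0.2}, succeeded=False))
memory.record(Episode("grasp", {"grip_force": 0.0}, {"grip_force": 0.9}, succeeded=True))
print(memory.success_rate("grasp"))  # 0.5, a cue that the grasp routine needs adjusting
```

In this toy version the correlation is just a success rate per action; the real system would have to relate continuous streams from visual, tactile, and auditory sensors to motor commands, which is what makes transferring a skill to a new tool possible.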
