MIT Technology Review
Katz says that his team was inspired by the work of Paul Fitzpatrick, a researcher at the LIRA-Lab at the University of Genoa, in Italy. In Fitzpatrick’s research, a robot tapped an object to distinguish it from its visual background. “What I like about the Amherst work, compared to my own, is that they are extracting a lot more information from essentially the same action,” says Fitzpatrick. This is “the robot equivalent of ‘fumbling around’ with an object, where you don’t really know enough about it to manipulate it dexterously.”
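Fitzpatrick's tap-and-watch idea rests on a simple cue: pixels that change when the robot taps belong to the object, not the background. A minimal frame-differencing sketch of that cue (a hypothetical illustration, not code from either lab; the function name and threshold are inventions) might look like this:

```python
import numpy as np

def moving_mask(before, after, thresh=10):
    """Boolean mask of pixels that changed appreciably between two
    grayscale frames, e.g. captured just before and after a tap."""
    # Work in a signed type so the subtraction can't wrap around.
    diff = np.abs(after.astype(np.int16) - before.astype(np.int16))
    return diff > thresh
```

The pixels flagged `True` approximate the object's silhouette, separating it from the static visual background.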

As of now, UMan is not equipped to pick up objects; instead, it manipulates them on the surface of the table. It has successfully learned how to manipulate scissors, shears, and several different kinds of wooden toys. A little shorter than the average human, it has a single arm that’s about a meter long. The arm’s seven degrees of freedom make it “very similar to a human arm in its flexibility,” according to Katz. The arm has a three-fingered hand and is mounted on a rotating base.

The researchers expect that UMan will soon be able to use past experience as a guide to handling new objects. In computer simulations, they’ve tested a learning algorithm for UMan, so that “the next time [it] sees a similar object, [it] can generalize and use the same action,” says Katz. For example, “you learn something about a pair of scissors, and next time you see a stapler you understand it has a similar structure.” In the simulations, the algorithm was able to identify joints by pushing objects in only one direction, as opposed to the six that UMan currently uses. But Katz hopes that eventually the robot won’t even need to touch a new object: it will generalize about it on the basis of visual observation alone. Katz expects to test the learning algorithm in the real world in the next year.
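The joint-finding step described above can be illustrated in miniature: track points on each rigid part before and after a push, fit a rigid transform to each part, and classify the joint from their relative motion. The sketch below is a hypothetical reconstruction under those assumptions, not the team's actual algorithm; `fit_rigid_2d`, `classify_joint`, and the angle threshold are all inventions for illustration.

```python
import numpy as np

def fit_rigid_2d(p, q):
    """Least-squares rotation R and translation t with q_i ~= R @ p_i + t
    (the Kabsch/Procrustes method) for N x 2 point arrays."""
    pc, qc = p.mean(axis=0), q.mean(axis=0)
    H = (p - pc).T @ (q - qc)          # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, qc - R @ pc

def classify_joint(a0, a1, b0, b1, angle_thresh=np.deg2rad(5)):
    """Label the joint between parts A and B as revolute or prismatic,
    given each part's tracked points before (t0) and after (t1) a push."""
    Ra, _ = fit_rigid_2d(a0, a1)
    Rb, _ = fit_rigid_2d(b0, b1)
    R_rel = Ra.T @ Rb                  # B's rotation relative to A
    angle = abs(np.arctan2(R_rel[1, 0], R_rel[0, 0]))
    return "revolute" if angle > angle_thresh else "prismatic"
```

For a scissors-like pair, rotating one blade about the pivot while the other stays put yields "revolute"; sliding one part along the other, as in a drawer, yields "prismatic".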

“This work seems like a step toward a more humanlike, manipulation-sensing-perception process,” says Josh Smith, who works on sensing for robotic grasping at Intel. The UMass approach, Smith says, is “philosophically interesting in the way it combines manipulation with sensing and perception.”


Credit: Dov Katz

