
How Robots Can Quickly Teach Each Other to Grasp New Objects

It may take hours for a robot to figure out how to grasp a new object. But hundreds of robots could accelerate the process by sharing knowledge.
November 17, 2015

Grabbing a pen or pair of sunglasses might be effortless for you or me, but it’s fiendishly difficult for a robot, especially if the object in question is unfamiliar or positioned awkwardly.

The Baxter in Stefanie Tellex’s lab tries to grasp two objects at once.

Practice makes perfect, though, as one robot is proving. It is teaching itself to grasp all sorts of objects through hours of repetition. The robot uses cameras and infrared sensors to examine an unfamiliar object from various angles before attempting to pick it up. It then tries several different grasps, shaking the object each time to make sure it is held securely. It may take dozens of tries for the robot to find the right grasp, and dozens more to make sure an object won't slip.

That might seem like a tedious process, but once the robot has learned how to pick something up, it can share that knowledge with other robots that have the same sensors and grippers. The researchers behind the effort eventually hope to have hundreds of robots learn collectively how to grasp a million different things.
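The article doesn't describe the sharing format, but a minimal sketch helps show what "sharing a learned grasp" might involve in practice. Everything below is illustrative: the field names, the JSON encoding, and the values are assumptions, not the Brown team's actual schema.

```python
import json

# Hypothetical record a robot might publish after learning a grasp.
# The article only says the data is "encoded in a format that allows
# it to be shared online"; this structure is an assumption.
grasp_record = {
    "object_id": "mustard_bottle_004",
    "robot_model": "Baxter",          # reuse assumes matching hardware
    "gripper": "parallel_electric",
    "scans": ["rgb_view_01.png", "depth_view_01.pcd"],  # images + 3-D scans
    "best_grasp": {
        "position_m": [0.52, -0.11, 0.08],            # pose in robot frame
        "orientation_quat": [0.0, 0.707, 0.0, 0.707],
        "gripper_width_m": 0.06,
    },
    "success_rate": 0.94,             # estimated from repeated trials
}

# Serialize for upload so other Baxters with the same sensors and
# grippers can download and reuse the grasp.
payload = json.dumps(grasp_record, indent=2)
print(payload)
```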

The work was done by Stefanie Tellex, an assistant professor at Brown University, together with one of her graduate students, John Oberlin. They used a two-armed industrial robot called Baxter, made by the Boston-based company Rethink Robotics.

At the Northeast Robotics Colloquium, an event held at Worcester Polytechnic Institute this month, Oberlin demonstrated the robot’s gripping abilities to members of the public.

Enabling robots to manipulate objects more easily is one of the big challenges in robotics today, and it could have major industrial significance (see “Shelf-Picking Robots Will Vie for Amazon Prize”).

Tellex says robotics researchers are increasingly looking for more efficient ways of training robots to perform tasks such as manipulation. “We have powerful algorithms now—such as deep learning—that can learn from large data sets, but these algorithms require data,” she says. “Robot practice is a way to acquire the data that a robot needs for learning to robustly manipulate objects.”

Tellex also notes that there are around 300 Baxter robots in various research labs around the world today. If each of those robots were to use both arms to examine new objects, she says, it would be possible for them to learn to grasp a million objects in 11 days. “By having robots share what they’ve learned, it’s possible to increase the speed of data collection by orders of magnitude,” she says.
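A quick back-of-the-envelope check makes the scale of that estimate concrete. This sketch assumes round-the-clock operation; the per-object practice time it derives is inferred from the other numbers, not stated in the article.

```python
# Back-of-the-envelope check of the "million objects in 11 days" figure.
# Assumes all robots run around the clock.
robots = 300
arms_per_robot = 2
target_objects = 1_000_000
days = 11

arm_hours = robots * arms_per_robot * days * 24      # 158,400 arm-hours
objects_per_arm_hour = target_objects / arm_hours    # ~6.3 objects/arm-hour
minutes_per_object = 60 / objects_per_arm_hour       # ~9.5 minutes/object

print(f"{arm_hours:,} arm-hours total")
print(f"{objects_per_arm_hour:.1f} objects per arm per hour")
print(f"{minutes_per_object:.1f} minutes of practice per object")
```

In other words, the claim works out to each arm spending roughly ten minutes per object, which is only feasible because the fleet shares results rather than each robot learning every object itself.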

To grasp each object, the Brown researchers’ robot scans it from various angles using one of the cameras in its arms and the infrared sensors on its body. This allows it to identify possible locations at which to grasp. The researchers used a mathematical technique to optimize the process of practicing different grips. With this technique, the team’s Baxter robot picked up objects as much as 75 percent more reliably than it did using its regular software. The information acquired for each object—the images, the 3-D scans, and the correct grip—is encoded in a format that allows it to be shared online.
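The article doesn't name the mathematical technique, but one common way to frame the problem of deciding which candidate grasp to practice next is as a multi-armed bandit. The sketch below uses Thompson sampling over candidate grasps; it is an illustrative stand-in under that assumption, not the Brown team's actual algorithm.

```python
import random

def thompson_grasp_practice(candidate_grasps, try_grasp, n_trials=100):
    """Choose which candidate grasp to practice next via Thompson sampling.

    candidate_grasps: list of grasp parameters (e.g., poses from the scans)
    try_grasp: callable(grasp) -> bool, True if the lift-and-shake test holds
    Returns per-grasp [successes, failures] counts.
    """
    # Beta(1, 1) prior on each grasp's unknown success probability.
    stats = {i: [1, 1] for i in range(len(candidate_grasps))}
    for _ in range(n_trials):
        # Sample a plausible success rate for each grasp; try the best one.
        sampled = {i: random.betavariate(a, b) for i, (a, b) in stats.items()}
        i = max(sampled, key=sampled.get)
        if try_grasp(candidate_grasps[i]):
            stats[i][0] += 1   # success: object held through the shake
        else:
            stats[i][1] += 1   # failure: object slipped or grasp missed
    return stats

# Toy usage: hidden success rates stand in for the physical robot.
true_rates = [0.2, 0.8, 0.5]
results = thompson_grasp_practice(
    candidate_grasps=[0, 1, 2],
    try_grasp=lambda g: random.random() < true_rates[g],
)
print(results)  # trials concentrate on the most reliable grasp
```

The appeal of a bandit formulation is that it spends most of its dozens of trials refining the grasps that already look promising instead of testing every candidate equally, which is exactly the kind of efficiency gain the reported 75 percent improvement suggests.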

Other groups are developing methods to allow robots to learn to perform various tasks, including grasping. One of the most promising ways to achieve this is deep learning using so-called neural networks, which are simulations loosely modeled on the way nerves in the brain process information and learn (see “Robot Toddler Learns to Stand by ‘Imagining’ How to Do It”).

Although humans acquire the ability to grasp through learning, a child doesn't need to spend nearly as much time handling different objects, and can draw on previous experience to figure out very quickly how to pick up something new. Tellex says the ultimate goal of her project is to give robots similar abilities. "Our long-term aim is to use this data to generalize to novel objects," she says.
