
This robot learns to pick up mugs by first learning a theory of mugness

March 19, 2019

For all of the recent progress in machine intelligence, robots still struggle to adapt relatively simple tasks to new situations. Take, for example, picking up a mug and hanging it on a mug rack; even small changes in a mug’s shape, size, color, and orientation can throw a robot off.

In a new paper, researchers at MIT propose a technique for helping robots generalize what they learn from relatively little data. They train a neural network to extract just a few key points from a mug or other object that needs to be picked up and placed, giving the robot a visual road map for how to grasp and orient it. In testing, they found the robot needed only three key points for a mug (one on the center of its side, one on the bottom, and one on the handle) and six key points for a shoe.
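
The article doesn't include any code, but a minimal sketch can illustrate the idea. Assuming the network returns 3D keypoint positions, a standard least-squares (Kabsch/Procrustes) alignment gives the rigid transform that carries the detected mug keypoints onto target locations on the rack; the keypoint names and all coordinates below are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch, not the authors' code: given 3D keypoints detected on a mug,
# solve for the rigid rotation R and translation t that carry them onto target
# keypoint locations on the rack (standard Kabsch/Procrustes alignment).
import numpy as np

def rigid_transform(source: np.ndarray, target: np.ndarray):
    """Least-squares R, t such that R @ p + t ≈ q for paired points p, q."""
    src_center, tgt_center = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_center).T @ (target - tgt_center)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_center - R @ src_center
    return R, t

# Hypothetical detections (meters, camera frame): the three mug keypoints.
detected = np.array([
    [0.42, 0.10, 0.05],   # center of the side
    [0.40, 0.12, 0.00],   # bottom
    [0.47, 0.10, 0.06],   # handle
])

# Where those same keypoints should sit once the mug hangs on the rack.
target = np.array([
    [0.00, 0.30, 0.22],
    [0.00, 0.33, 0.17],
    [0.00, 0.25, 0.24],
])

R, t = rigid_transform(detected, target)
print("rotation:\n", R)
print("translation:", t)   # this pose would be handed to a motion planner
```

Reducing each object to a handful of task-relevant key points is what keeps the pose computation this simple, whatever the mug's exact shape, size, or color.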

Unlike previous techniques, which require hundreds or even thousands of examples before a robot can pick up a mug it has never seen, this approach needs only a few dozen: the researchers trained the neural network on 60 scenes of mugs and 60 scenes of shoes and reached a level of performance comparable to those data-hungry methods. When the system initially failed to pick up high heels, because there were none in the data set, they fixed the problem quickly by adding a few labeled scenes of high heels to the training data.
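
The high-heels fix hints at the workflow: label a handful of new scenes and keep training the keypoint detector on them. Here is a rough, hypothetical sketch of that step; the paper's actual architecture and training setup are not described in the article, so the stand-in CNN, dataset tensors, and hyperparameters below are assumptions for illustration only.

```python
# Illustrative sketch, not the paper's pipeline: continue training a keypoint
# regressor on a few newly labeled scenes of a missing object type.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

NUM_KEYPOINTS = 6                      # six key points per shoe, per the article

# Stand-in detector: a tiny CNN regressing (x, y) pixel coordinates for each
# keypoint from a 64x64 RGB crop. The real model is almost certainly larger.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, NUM_KEYPOINTS * 2),
)

# A few labeled scenes of the new object type; random tensors stand in for
# real images and hand-labeled keypoint coordinates here.
images = torch.rand(8, 3, 64, 64)
keypoints = torch.rand(8, NUM_KEYPOINTS * 2)
loader = DataLoader(TensorDataset(images, keypoints), batch_size=4, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Continue training on the small added set; in practice the new scenes would be
# mixed with the original ones so earlier objects aren't forgotten.
for epoch in range(5):
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```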

The team hopes to use the approach to tackle bigger tasks next, like unloading a dishwasher or wiping down a kitchen counter.

This story originally appeared in our AI newsletter, The Algorithm.
