
A Blob for a Robot Hand

A robotic hand made up of grains in a bag proves to be surprisingly effective.

A team of roboticists has taken a radically new approach to designing robotic hands by creating a versatile gripper out of a beanbag. The group showed that this simple yet effective design lets the robotic hand pick up a range of objects and even pour a glass of water.

Grasping is a challenge for robots because highly articulated fingers that imitate human ones take a lot of power to control and are delicate, expensive, and not always good at picking up objects they haven't encountered before. But effective robotic hands could have a host of applications in military situations (such as disabling bombs), homes, hospitals, and manufacturing settings.

The team, made up of scientists from the University of Chicago, Cornell University, and iRobot, filled a stretchy, balloon-like bag with coffee grounds. Coffee grounds, like any granular material (sugar, salt, glass beads), flow easily when the grains are loose and have room to move, but solidify into a mass when the space constricts and the grains can no longer slide past each other. The squishy ball is attached to a vacuum pump that changes how much space the grains have.

To grasp something, a robotic arm presses the beanbag gently onto the target object. The bag holds enough air that the loose grains contour around the object's shape. When the hand is ready to grip, a vacuum connected to the arm sucks a small amount of air out of the bag, jamming the grains around the object firmly enough to pick it up.
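The press-jam-lift sequence can be sketched as a simple state model. This is a minimal illustration of the cycle described above, not the researchers' actual control software; the `GripperRig` class and its methods are hypothetical names invented for this sketch.

```python
class GripperRig:
    """Toy state model of the beanbag jamming gripper (hypothetical API)."""

    def __init__(self):
        self.state = "loose"   # grains flow freely; the bag is compliant
        self.contoured = None  # object the bag has molded around
        self.holding = None    # object held after jamming

    def press(self, obj):
        # The arm presses the compliant bag onto the object so the
        # loose grains contour around its shape.
        assert self.state == "loose", "bag must be compliant before pressing"
        self.contoured = obj

    def evacuate(self):
        # The vacuum pump removes a small amount of air; the grains jam
        # against one another and the bag rigidifies around the object.
        self.state = "jammed"
        self.holding = self.contoured

    def release(self):
        # Re-admitting air unjams the grains and lets go of the object.
        self.state = "loose"
        dropped, self.holding = self.holding, None
        return dropped


def pick(rig, obj):
    """One grasp cycle: press the bag on, jam the grains, lift."""
    rig.press(obj)
    rig.evacuate()
    return rig.holding
```

Under this sketch, `pick(GripperRig(), "pen")` returns the gripped object, and calling `release()` afterward drops it and returns the bag to its compliant state.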

This work looks like an extension of the ChemBot that iRobot demoed last October, a robot that used the same jamming principle to inflate and deflate its body. The idea in that case was to create a robot that could squeeze under a door or through a hole, then regain its shape, for surveillance missions. The new DARPA-funded work was detailed in a recent PNAS paper.

The video below shows the hand in action. Halfway into the video, the hand picks up and pours a glass of water, and it even picks up a pen and writes. However, there might be certain objects it would have difficulty grasping (for example, something soft).
