
Ng's team developed an alternative that simplifies the process. Instead of collecting data on many points across an object, the researchers' algorithm identifies the midpoint of a graspable portion of the object, such as a handle, by computing the object's edges and comparing them with the edges of statistically similar objects in the database. The software then locates that midpoint in the images from both cameras and triangulates its distance. "This was the key idea that made all of our grasping things work," Ng says. "We've now done things like load items from a dishwasher."
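The triangulation step is standard stereo geometry, and a short sketch makes it concrete. The Python below shows how a grasp point matched across a rectified pair of camera images yields a 3-D position; the function, camera parameters, and pixel values here are illustrative assumptions, not details of the Stanford system.

```python
import numpy as np

def triangulate_grasp_point(uv_left, uv_right, focal_px, baseline_m,
                            principal=(320.0, 240.0)):
    """Triangulate the 3-D position of a matched grasp point from a
    rectified stereo pair.

    uv_left, uv_right: (u, v) pixel coordinates of the same point in the
    left and right images (rows match after rectification).
    focal_px: focal length in pixels; baseline_m: camera separation in metres.
    Returns (X, Y, Z) in the left camera's frame, in metres.
    """
    cx, cy = principal
    disparity = uv_left[0] - uv_right[0]      # pixels; larger = closer
    if disparity <= 0:
        raise ValueError("grasp point must have positive disparity")
    z = focal_px * baseline_m / disparity     # depth from disparity
    x = (uv_left[0] - cx) * z / focal_px      # back-project to metric X, Y
    y = (uv_left[1] - cy) * z / focal_px
    return np.array([x, y, z])

# Example: the midpoint of a mug handle, matched across the two images.
point = triangulate_grasp_point((352.0, 238.0), (318.0, 238.0),
                                focal_px=525.0, baseline_m=0.12)
print(point)  # roughly [0.113, -0.007, 1.853] metres
```

The appeal of this approach is economy: the robot needs only one well-chosen matched point per graspable region, not a dense 3-D reconstruction of the whole object.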

Robots still need to learn the finer points of automatic manipulation, Ng adds. STAIR was designed only to grasp objects, not to adjust its grasp to the situation. For instance, it wasn't built to pour coffee from a pot, a task that might require a different grasp position and a different amount of pressure than simply picking up the pot and placing it on a shelf. Additionally, the software doesn't know the consistency of an object, whether it's squishy or solid. But researchers are working on these problems, and ultimately, a personal robot will combine sensing technologies and software that together let it pick up and manipulate an object. (See "Robots That Sense Before They Touch.")

It could be years before all the technologies are integrated well enough for robots to handle complex household chores on their own, but the Stanford work is pushing the dream forward. "If I had to pick one thing that's holding back this vision of personal robotics, it would be the ability to pick things up and manipulate them," says Josh Smith, senior research scientist at Intel Research in Seattle. "We need more grasping strategies, like [the Stanford researchers'], that don't require an explicit 3-D model of the object." He adds that beyond improved computer vision, the robot's hand itself will most likely carry a number of sensors that can feel whether an object is moving or the grasp isn't right. "Much richer sensing in the hand will be an important part of the solution," Smith says.


Image credit: Computer Science Department, Stanford University
