Robots That Teach Each Other

What if robots could figure out more things on their own and share that knowledge among themselves?

Availability: 3-5 years

    by Amanda Schaffer

    Many of the jobs humans would like robots to perform, such as packing items in warehouses, assisting bedridden patients, or aiding soldiers on the front lines, aren’t yet possible because robots still don’t recognize and easily handle common objects. People generally have no trouble folding socks or picking up water glasses, because we’ve gone through “a big data collection process” called childhood, says Stefanie Tellex, a computer science professor at Brown University. For robots to do the same types of routine tasks, they also need access to reams of data on how to grasp and manipulate objects. Where does that data come from? Typically it has come from painstaking programming. But ideally, robots could get some information from each other.

    Robots Teaching Robots
    • Breakthrough: Robots that learn tasks and send that knowledge to the cloud for other robots to pick up later.
    • Why It Matters: Progress in robotics could accelerate dramatically if each type of machine didn’t have to be programmed separately.
    • Key Players in Advanced Robotics:
      - Ashutosh Saxena, Brain of Things
      - Stefanie Tellex, Brown University
      - Pieter Abbeel, Ken Goldberg, and Sergey Levine, University of California, Berkeley
      - Jan Peters, Technical University of Darmstadt, Germany

    That’s the theory behind Tellex’s “Million Object Challenge.” The goal is for research robots around the world to learn how to spot and handle simple items from bowls to bananas, upload their data to the cloud, and allow other robots to analyze and use the information.

    This story is part of our March/April 2016 Issue

    Tellex’s lab in Providence, Rhode Island, has the air of a playful preschool. On the day I visit, a Baxter robot, an industrial machine produced by Rethink Robotics, stands among oversized blocks, scanning a small hairbrush. It moves its right arm noisily back and forth above the object, taking multiple pictures with its camera and measuring depth with an infrared sensor. Then, with its two-pronged gripper, it tries different grasps that might allow it to lift the brush. Once it has the object in the air, it shakes it to make sure the grip is secure. If so, the robot has learned how to pick up one more thing.
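
    In rough outline, the loop the Baxter runs is: scan, propose grasps, test each one, and keep whatever survives a shake. The sketch below is illustrative only; every name in it (camera, gripper, propose_grasps, save_grasp, and so on) is a hypothetical stand-in, not the lab's actual software.

```python
# Illustrative sketch of the scan-grasp-verify loop described above.
# Every name here (camera, depth_sensor, gripper, propose_grasps, database)
# is a hypothetical stand-in, not the actual software running on the Baxter.

def learn_object(name, camera, depth_sensor, gripper, database):
    """Scan one object, try candidate grasps, and record the first that holds."""
    # Sweep the arm over the object, collecting images and a depth map.
    images = [camera.capture(pose) for pose in camera.sweep_poses()]
    depth_map = depth_sensor.measure()

    # Generate candidate grasps from the scan and try them one at a time.
    for grasp in propose_grasps(images, depth_map):
        gripper.move_to(grasp.pose)
        gripper.close()
        gripper.lift()
        gripper.shake()                 # shake to check the grip is secure
        if gripper.still_holding():
            database.save_grasp(name, images, depth_map, grasp)
            return grasp                # one more object learned
        gripper.release()
    return None                         # no reliable grasp found this session
```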

    Stefanie Tellex and a Baxter robot at Brown University.

    The robot can work around the clock, frequently with a different object in each of its grippers. Tellex and her graduate student John Oberlin have gathered—and are now sharing—data on roughly 200 items, starting with such things as a child’s shoe, a plastic boat, a rubber duck, a garlic press and other cookware, and a sippy cup that originally belonged to her three-year-old son. Other scientists can contribute their robots’ own data, and Tellex hopes that together they will build up a library of information on how robots should handle a million different items. Eventually, robots confronting a crowded shelf will be able to “identify the pen in front of them and pick it up,” Tellex says.

    Projects like this are possible because many research robots use the same standard framework for programming, known as ROS. Once one machine learns a given task, it can pass the data on to others—and those machines can upload feedback that will in turn refine the instructions given to subsequent machines. Tellex says the data about how to recognize and grasp any given object can be compressed to just five to 10 megabytes, about the size of a song in your music library.
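
    One way to picture what gets shared: each learned object becomes a compact record of scan data plus the grasps that worked, which can be serialized, compressed, and posted to a common store. The sketch below is only an illustration of that idea; the record fields and the upload URL are hypothetical, not the Million Object Challenge's actual format or endpoint.

```python
# Loose illustration of packaging one object's grasp knowledge for sharing.
# The record fields and the upload URL are hypothetical, not the real format.
import gzip
import json
import urllib.request

def package_grasp_record(name, images, depth_map, grasps):
    """Serialize and compress one object's data into a blob of a few megabytes."""
    record = {
        "object": name,
        "images": [img.tolist() for img in images],   # assumes numpy-style arrays
        "depth": depth_map.tolist(),
        "grasps": [g.as_dict() for g in grasps],       # poses that survived shaking
    }
    return gzip.compress(json.dumps(record).encode("utf-8"))

def upload(blob, url="https://example.org/million-object-challenge"):
    """POST the compressed record to a shared store other robots can query."""
    request = urllib.request.Request(url, data=blob,
                                     headers={"Content-Encoding": "gzip"})
    return urllib.request.urlopen(request)
```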

    Tellex was an early partner in a project called RoboBrain, which demonstrated how one robot could learn from another’s experience. Her collaborator Ashutosh Saxena, then at Cornell, taught his PR2 robot to lift small cups and position them on a table. Then, at Brown, Tellex downloaded that information from the cloud and used it to train her Baxter, which is physically different, to perform the same task in a different environment.
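
    That transfer can be pictured as the reverse trip: pull down another robot's record and retarget it to a machine with a different body. Again, this is a hypothetical sketch; cloud.fetch, transform_to_base, and the rest are illustrative stand-ins, not RoboBrain's API.

```python
# Illustrative sketch of reusing another robot's shared grasp data, in the
# spirit of the RoboBrain transfer described above. All names are hypothetical.

def reuse_shared_grasp(object_name, cloud, robot):
    """Fetch grasps learned elsewhere and retarget them to this robot."""
    record = cloud.fetch(object_name)            # e.g. data a PR2 uploaded earlier
    for grasp in record["grasps"]:
        # Re-express the stored grasp pose in this robot's coordinate frame
        # and check it against the local gripper's reach and width limits.
        local_pose = robot.transform_to_base(grasp["pose"])
        if robot.gripper.can_reach(local_pose):
            return robot.try_grasp(local_pose)   # refine with a quick local trial
    return None
```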

    Such progress might seem incremental now, but in the next five to 10 years, we can expect to see “an explosion in the ability of robots,” says Saxena, now CEO of a startup called Brain of Things. As more researchers contribute to and refine cloud-based knowledge, he says, “robots should have access to all the information they need, at their fingertips.”

    Each time the robot determines the best way to grasp and hold something, it files that data away in a format other robots can use.
