    3-D scans of rooms that are part of the ScanNet data set.

    Intelligent Machines

    A Massive New Library of 3-D Images Could Help Your Robot Butler Get Around Your House

    Using three-dimensional images is a better way of mimicking the way animals perceive things.

    For a robot to be of any real help around the home, it will need to be able to tell the difference between a coffee table and a child’s crib—a simple task that most robots can’t do today.

    A huge new data set of 3-D images captured by researchers from Stanford, Princeton, and the Technical University of Munich might help. The data set, known as ScanNet, includes thousands of scenes with millions of annotated objects like coffee tables, couches, lamps, and TVs. Computer vision has improved dramatically in the past five years, thanks in part to the release of a much simpler 2-D data set of labeled images called ImageNet, generated by another research group at Stanford. ScanNet is meant to provide a similarly large trove of annotated data for understanding scenes in three dimensions.

    “ImageNet had a critical amount of annotated data, and that sparked the AI revolution,” says Matthias Niessner, a professor at the Technical University of Munich and one of the researchers behind the data set.

    The hope is that ScanNet will give machines a deeper understanding of the physical world, and that this could have practical applications. “The obvious scenario is a robot in your home,” Niessner says. “If you have a robot, it needs to figure out what’s going on around it.”

    An off-the-shelf 3-D scanner was used to capture each room.

    Niessner, who did the work while he was a visiting associate professor at Stanford University, believes researchers will apply deep learning—the same machine-learning technique used on ImageNet—to train computers to better understand 3-D scenes (see “10 Breakthrough Technologies 2013: Deep Learning”). He created the data set with Angela Dai, one of his students at Stanford; Thomas Funkhouser, a professor at Princeton; and several of his other students.

    The researchers describe their approach in a paper posted online recently. They built the data set by scanning 1,513 scenes using a 3-D camera similar to the Microsoft Kinect. This device uses both a conventional camera and an infrared depth sensor to create a 3-D picture of the scene in front of it. The researchers then had volunteers, recruited through Amazon’s Mechanical Turk crowdsourcing platform, annotate the scans using an iPad app. To improve overall accuracy, one set of participants painted and labeled the objects in a scan, and another group was asked to re-create the scene using 3-D models.
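
    To make the depth-sensing step more concrete, here is a minimal sketch (in Python with NumPy, not the researchers’ own code) of how a single depth image from such a sensor can be back-projected into a 3-D point cloud under a pinhole camera model. The intrinsics fx, fy, cx, and cy are illustrative values, not ScanNet’s.

    ```python
    import numpy as np

    def depth_to_point_cloud(depth, fx, fy, cx, cy):
        """Back-project a depth image (in meters) into a 3-D point cloud.

        Assumes a simple pinhole camera model; fx, fy, cx, cy are
        hypothetical sensor intrinsics, not values from ScanNet.
        """
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))  # per-pixel coordinates
        z = depth
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
        return points[points[:, 2] > 0]  # drop pixels with no depth reading

    # Example with made-up intrinsics and a random 480x640 depth frame
    cloud = depth_to_point_cloud(np.random.rand(480, 640) * 3.0,
                                 fx=525.0, fy=525.0, cx=319.5, cy=239.5)
    print(cloud.shape)  # roughly (307200, 3)
    ```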

    Stefanie Tellex, an assistant professor at Brown University who is doing research aimed at enabling home robots, says ScanNet is much bigger than anything available previously. “Making a data set that is an order of magnitude larger is a big contribution,” she says. “3-D information is critical for robots to perceive and interact with their environment, yet there is a real lack of data for such tasks.”

    A room showing annotated items in different colors.

    Niessner says the team behind the data set tried applying deep learning and found that it could recognize many objects reliably using only their depth information, or their shape. This already suggests that the 3-D data will provide a deeper understanding of the physical world, he says. He adds that using 3-D information is a better way of mimicking the way animals perceive things.
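
    As a rough illustration of the kind of shape-only recognition Niessner describes, here is a minimal sketch (using PyTorch, not the team’s code) of a small 3-D convolutional network that classifies objects from voxel occupancy grids, i.e., from shape alone with no color information. The grid size and class count are arbitrary assumptions, not details taken from ScanNet.

    ```python
    import torch
    import torch.nn as nn

    class VoxelClassifier(nn.Module):
        """Toy 3-D CNN that predicts an object class from a 32x32x32
        occupancy grid, using shape alone (no color)."""

        def __init__(self, num_classes=20):  # 20 classes is an arbitrary choice
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),  # 32 -> 16
                nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),  # 16 -> 8
            )
            self.classifier = nn.Linear(32 * 8 * 8 * 8, num_classes)

        def forward(self, voxels):
            x = self.features(voxels)
            return self.classifier(x.flatten(start_dim=1))

    # One forward pass on a batch of 4 random occupancy grids
    model = VoxelClassifier()
    logits = model(torch.rand(4, 1, 32, 32, 32))
    print(logits.shape)  # torch.Size([4, 20])
    ```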

    Siddhartha Srinivasa, a professor at the Robotics Institute at Carnegie Mellon University, says the new data set could be a “good start” toward enabling machines to understand the insides of homes. “The popularity of ImageNet was partly due to the immensity of the data set and largely due to the immediate and numerous applications of image labeling, especially in Web applications,” says Srinivasa. He says there are fewer obvious applications for a 3-D data set besides robotics and architecture, but says applications could emerge quickly.

    Srinivasa adds that others are using synthetic or virtual scenes to train machine-vision systems. “Although simulating real-life imagery is often unrealistic, as you can see from the CGI in movies, simulating depth is quite realistic,” he says.
