
A Massive New Library of 3-D Images Could Help Your Robot Butler Get Around Your House

Using three-dimensional images is a better way of mimicking the way animals perceive things.
April 24, 2017

For a robot to be of any real help around the home, it will need to be able to tell the difference between a coffee table and a child’s crib—a simple task that most robots can’t do today.

A huge new data set of 3-D images captured by researchers from Stanford, Princeton, and the Technical University of Munich might help. The data set, known as ScanNet, includes thousands of scenes with millions of annotated objects like coffee tables, couches, lamps, and TVs. Computer vision has improved dramatically in the past five years, thanks in part to the release of a much simpler 2-D data set of labeled images called ImageNet, generated by another research group at Stanford. ScanNet is meant to provide a similar foundation for teaching machines to interpret 3-D scenes.

“ImageNet had a critical amount of annotated data, and that sparked the AI revolution,” says Matthias Niessner, a professor at the Technical University of Munich and one of the researchers behind the data set.

The hope is that ScanNet will give machines a deeper understanding of the physical world, and that this could have practical applications. “The obvious scenario is a robot in your home,” Niessner says. “If you have a robot, it needs to figure out what’s going on around it.”

An off-the-shelf 3-D scanner was used to capture each room.

Niessner, who did the work while he was a visiting associate professor at Stanford University, believes researchers will apply deep learning—the same machine-learning technique used on ImageNet—to train computers to better understand 3-D scenes (see “10 Breakthrough Technologies 2013: Deep Learning”). He created the data set with Angela Dai, one of his students at Stanford, and Thomas Funkhouser, a professor at Princeton, as well as several of his other students.

The researchers describe their approach in a paper posted recently online. They built the data set by scanning 1,513 scenes using a 3-D camera similar to the Microsoft Kinect. This device uses both a conventional camera and an infrared depth sensor to create a 3-D picture of the scene in front of it. The researchers then had workers recruited through Amazon's Mechanical Turk crowdsourcing platform annotate the scans using an iPad app. To improve overall accuracy, one set of participants painted and labeled the objects in a scan, and another group was asked to re-create each scene using 3-D models of the objects.
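
As a rough illustration of how a depth camera like the Kinect turns a flat depth image into 3-D geometry, the sketch below back-projects each pixel into a point cloud using the standard pinhole-camera relationship. The intrinsic values and the helper function are illustrative placeholders, not ScanNet's actual calibration or code.

import numpy as np

# Illustrative camera intrinsics (placeholder values, not ScanNet's calibration).
fx, fy = 525.0, 525.0   # focal lengths in pixels
cx, cy = 319.5, 239.5   # principal point

def depth_to_point_cloud(depth):
    """Back-project a depth image (meters, shape H x W) into an N x 3 point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx   # pinhole model: pixel offset scaled by depth
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]   # drop pixels with no depth reading

# Example: a fake 480 x 640 depth frame with everything two meters away.
cloud = depth_to_point_cloud(np.full((480, 640), 2.0))

A scanner builds a full model of a room by fusing many such frames captured from different viewpoints.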

Stefanie Tellex, an assistant professor at Brown University who is doing research aimed at enabling home robots, says ScanNet is much bigger than anything available previously. “Making a data set that is an order of magnitude larger is a big contribution,” she says. “3-D information is critical for robots to perceive and interact with their environment, yet there is a real lack of data for such tasks.”

A room showing annotated items in different colors.

Niessner says the team behind the data set tried applying deep learning and found that it could recognize many objects reliably using only their depth information, or their shape. This already suggests that the 3-D data will provide a deeper understanding of the physical world, he says. He adds that using 3-D information is a better way of mimicking the way animals perceive things.
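
For readers curious what applying deep learning to shape data can look like, here is a minimal sketch of a 3-D convolutional classifier over a voxel occupancy grid. The architecture, grid size, and 20-class output are assumptions made for illustration; this is not the model described in the ScanNet paper.

import torch
import torch.nn as nn

# Illustrative 3-D CNN over a 32 x 32 x 32 occupancy grid; layer sizes and the
# 20-class output are assumptions for this sketch, not ScanNet's published model.
class VoxelClassifier(nn.Module):
    def __init__(self, num_classes=20):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                            # 32 -> 16
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                            # 16 -> 8
        )
        self.classifier = nn.Linear(32 * 8 * 8 * 8, num_classes)

    def forward(self, voxels):                          # voxels: (batch, 1, 32, 32, 32)
        x = self.features(voxels)
        return self.classifier(x.flatten(1))

# Example forward pass on a random occupancy grid.
model = VoxelClassifier()
logits = model(torch.rand(4, 1, 32, 32, 32))            # -> (4, 20) class scores

The key difference from a 2-D image classifier is that the convolutions slide over a three-dimensional grid, so the network sees an object's shape rather than its appearance in a photo.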

Siddhartha Srinivasa, a professor at the Robotics Institute at Carnegie Mellon University, says the new data set could be a “good start” toward enabling machines to understand the insides of homes. “The popularity of ImageNet was partly due to the immensity of the data set and largely due to the immediate and numerous applications of image labeling, especially in Web applications,” says Srinivasa. There are fewer obvious applications for a 3-D data set beyond robotics and architecture, he notes, but new ones could emerge quickly.

Srinivasa adds that others are using synthetic or virtual scenes to train machine-vision systems. “Although simulating real-life imagery is often unrealistic, as you can see from the CGI in movies, simulating depth is quite realistic,” he says.
