
A New Kind of Computer Vision Can’t Be Tricked by Weird Lighting

November 21, 2017

Computer vision has come a long way since ImageNet, a large, open-source data set of labeled images, was released in 2009 for researchers to use to train AI, but images with tricky or bad lighting can still confuse algorithms. Researchers have tried either hand-crafting rules about how light interacts with objects or training on data sets that cover as many lighting situations as possible. But the real world contains a nearly limitless combination of objects and lighting, which handicaps both approaches.

A new paper by researchers from MIT and DeepMind details a process that can interpret images under different lighting without having to hand-code rules or train on a huge data set. The process, called a rendered intrinsics network (RIN), automatically separates an image into reflectance, shape, and lighting layers. It then recombines the layers into a reconstruction of the original image.
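In rough terms, that means pairing a decomposition network with a learned shading step. The snippet below is a minimal, hypothetical PyTorch sketch of the decompose-then-recombine idea; the layer sizes, the four-number lighting code, and the shading network are illustrative assumptions, not the architecture described in the paper.

```python
# Hypothetical sketch of a decompose-then-recombine pipeline in the spirit of RIN.
# Layer sizes, the lighting code, and the shader are simplifying assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(),
    )

class IntrinsicDecomposer(nn.Module):
    """Predicts reflectance, surface normals (shape), and lighting from an image."""
    def __init__(self):
        super().__init__()
        self.encoder = conv_block(3, 32)
        self.reflectance_head = nn.Conv2d(32, 3, kernel_size=3, padding=1)  # per-pixel albedo
        self.shape_head = nn.Conv2d(32, 3, kernel_size=3, padding=1)        # per-pixel normals
        self.lighting_head = nn.Linear(32, 4)                               # global lighting code

    def forward(self, image):
        features = self.encoder(image)
        reflectance = torch.sigmoid(self.reflectance_head(features))
        normals = torch.tanh(self.shape_head(features))
        lighting = self.lighting_head(features.mean(dim=(2, 3)))  # pool to a global code
        return reflectance, normals, lighting

class Shader(nn.Module):
    """Learned renderer: combines shape and lighting into a shading image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(3 + 4, 32), nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, normals, lighting):
        b, _, h, w = normals.shape
        light_map = lighting.view(b, -1, 1, 1).expand(b, lighting.shape[1], h, w)
        return torch.sigmoid(self.net(torch.cat([normals, light_map], dim=1)))

decomposer, shader = IntrinsicDecomposer(), Shader()
image = torch.rand(1, 3, 64, 64)                          # stand-in input image
reflectance, normals, lighting = decomposer(image)
reconstruction = reflectance * shader(normals, lighting)  # shading modulates albedo
loss = nn.functional.mse_loss(reconstruction, image)      # trained to reproduce the input
```

The point of this structure is that the reconstruction itself can serve as the training signal: if the reflectance, shape, and lighting layers recombine into the original image, the decomposition is plausible, and no labels are required for that check.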

To train RIN, the researchers created a data set of five shapes (cubes, spheres, cones, cylinders, and toruses) and rendered each with 10 different orientations and 500 different colors. As a proof of concept, the researchers showed how breaking an image down into the three layers could help a computer identify what an item in an image is, or infer its shape. For example, after being trained only on the basic sample shapes, the model learned to handle much more complicated objects, such as the classic 3D test models the Stanford bunny, the Utah teapot, and Blender's Suzanne, without ever seeing labeled examples.
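Those numbers imply a fairly modest synthetic training set. As a rough, hypothetical illustration of its scale (the shape names come from the article; the orientation and color indices are placeholders):

```python
# Enumerate the training combinations the article describes:
# five primitive shapes, 10 orientations each, and 500 colors.
from itertools import product

shapes = ["cube", "sphere", "cone", "cylinder", "torus"]
orientations = range(10)   # placeholder orientation indices
colors = range(500)        # placeholder color indices

combinations = list(product(shapes, orientations, colors))
print(len(combinations))   # 5 * 10 * 500 = 25,000 rendered training images
```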

Beyond offering a new way to overcome the problem of infinite lighting situations for an image, RIN is also an example of learning with unlabeled data. Most AI still needs labeled data to learn, and preparing it takes hours of repetitive human labor. Finding a way to learn from unlabeled data is one of the next frontiers in artificial intelligence.

 
