MIT Technology Review

Revealing the invisible

A new neural network developed by MIT engineers spies transparent objects in the dark.

Small imperfections in a wine glass or tiny creases in a contact lens can be tricky to make out, even in good light. But MIT engineers have developed a machine-learning technique that can reveal these “invisible” features and objects in the dark.

The key was a neural network, a type of software that can be trained to associate certain inputs with specific outputs—in this case, dark, grainy images of transparent objects and the objects themselves.


The team fed the network extremely grainy images of more than 10,000 transparent etching patterns from integrated circuits. The images were taken in very low lighting conditions, with about one photon per pixel—far less light than a camera would register in a dark, sealed room. Then they showed the neural network a new grainy image, not included in the training data, and found that it was able to reconstruct the transparent object that the darkness had obscured.
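The photon-starved regime described above can be illustrated with a short simulation: photon arrival is a Poisson process, so at roughly one photon per pixel a measurement is dominated by shot noise and faint transparent features all but vanish. The image size, 10 percent contrast pattern, and Poisson sampling below are illustrative assumptions, not details from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_low_light(clean, photons_per_pixel=1.0):
    """Simulate a photon-limited exposure of a normalized image.

    Photon counts follow a Poisson distribution, so at about one photon
    per pixel the measurement is dominated by shot noise, producing the
    'extremely grainy' images described in the article.
    """
    clean = np.asarray(clean, dtype=float)
    # Scale so the mean pixel collects photons_per_pixel photons.
    scale = photons_per_pixel / max(clean.mean(), 1e-12)
    return rng.poisson(clean * scale)

# A synthetic stand-in for a transparent etching: nearly uniform, with a
# faint 10%-contrast feature that shot noise easily swamps.
clean = np.ones((64, 64))
clean[20:44, 20:44] += 0.1

noisy = simulate_low_light(clean, photons_per_pixel=1.0)
print(noisy.mean())  # averages to roughly 1 photon per pixel
```

At this light level the faint square is essentially invisible in any single exposure, which is why a learned reconstruction is needed at all.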


The researchers set their camera to take images slightly out of focus, which provides evidence, in the form of ripples in the detected light, that a transparent object may be present.

But defocusing also creates blur, which can muddy a neural network’s computations. To produce a sharper, more accurate image, the researchers incorporated into the neural network a law in physics that describes how light creates a blurring effect when a camera is defocused.
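As a rough illustration of why knowing the blur physics helps, the sketch below models defocus as convolution with a known point-spread function and then inverts it with a classical Wiener filter. The Gaussian PSF and the separate Wiener step are stand-in assumptions; in the actual work the defocus physics is folded into the neural network itself rather than applied as a separate filter.

```python
import numpy as np

def gaussian_psf(shape, sigma=2.0):
    """Gaussian point-spread function, a simple stand-in for the
    defocus blur model of a real camera."""
    h, w = shape
    y = np.arange(h) - h // 2
    x = np.arange(w) - w // 2
    g = np.exp(-(y[:, None] ** 2 + x[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def defocus(image, psf):
    """Forward model: blurring is convolution with the PSF (via FFT)."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * H))

def wiener_deblur(blurred, psf, noise_power=1e-3):
    """Invert the same forward model with a Wiener filter: because the
    blur physics is known exactly, deblurring is well-posed."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    W = np.conj(H) / (np.abs(H) ** 2 + noise_power)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

# Round trip: blur a test pattern, then recover it with the known PSF.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
psf = gaussian_psf(img.shape, sigma=2.0)
blurred = defocus(img, psf)
restored = wiener_deblur(blurred, psf)
```

The restored image is measurably closer to the original than the blurred one; embedding the same forward model in a network gives it that advantage while still letting it learn to handle noise the filter cannot.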

The team repeated their experiments with another 10,000 images of more general and varied objects, including people, places, and animals. Even after training on this broader dataset, the neural network with the embedded physics model was able to reconstruct an image of a transparent etching taken in the dark.

The results demonstrate that neural networks may be used to illuminate transparent features, such as biological tissues and cells, in images taken with very little light.

“If you blast biological cells with light, you burn them, and there is nothing left to image,” says George Barbastathis, a professor of mechanical engineering. “If you expose a patient to x-rays, you increase the danger they may get cancer. What we’re doing here [means] you can get the same image quality, but with a lower exposure to the patient. And in biology, you can reduce the damage to biological specimens when you want to sample them.”

