
Lensless Camera Takes Multiple-View Pictures

A new class of imaging device from Bell Labs can take more than one view of a scene at the same time or use the same data to create a single high-resolution image

Earlier this month, we looked at a new kind of camera built at Bell Labs that creates pictures using no lenses and only a single pixel. This lensless design is simple and easy to construct, and it suffers from none of the aberrations usually associated with lenses; indeed, nothing in the scene is ever out of focus. So it's easy to imagine that lensless cameras are threatening to change the way we think about imaging.

Today Hong Jiang and pals from Bell Labs show off another capability of their new design. The original camera makes an ordinary image with a single pixel. Jiang and co show how, with two pixels, it’s possible to create two different images of the scene.  

The arXiv Blog’s original post explained how the device works using a technique known as compressive sensing:

“It consists of an LCD panel that acts as an array of apertures that each allow light to pass through and a single sensor capable of detecting light in three colours.

“Each aperture in the LCD array is individually addressable and so can be open to allow light to pass through or closed. An important aspect of this kind of imaging is that the array of open and closed apertures must be random.

“The process of creating an image is straightforward. It begins with the sensor recording the light from the scene that has passed through a random array of apertures in the LCD panel. It then records the light from a different random array and then another and so on.

“Although seemingly random, each of these snapshots is correlated because they record the same scene in a different way. And this is the key that the team use to reassemble an image. The process of compressive sensing analyses the data, looking for this correlation which it then uses to recreate the image.

“Clearly, the more snapshots that are taken, the better the image will be. But it is possible to create a pretty good image using just a tiny fraction of the data that a conventional image would require.”
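The steps quoted above can be sketched numerically. The toy below is entirely hypothetical: a 1-D stand-in for the scene (256 "pixels", a few bright points), random ±1 aperture patterns (a ±1 pattern can be realized physically by measuring a 0/1 mask and its complement and taking the difference, a standard single-pixel-camera trick), and ISTA as a generic off-the-shelf sparse solver. None of these specific choices come from the Bell Labs paper; the sketch only illustrates that far fewer snapshots than pixels can suffice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy scene: 256 pixels, sparse (5 bright points), flattened.
n = 256
x = np.zeros(n)
x[rng.choice(n, size=5, replace=False)] = 1.0

# Each snapshot is one random aperture pattern; the single pixel records
# one number per snapshot: the total light that got through.
m = 80                                   # snapshots: far fewer than 256
A = rng.choice([-1.0, 1.0], size=(m, n))
y = A @ x                                # one measurement per pattern

# Reconstruct with ISTA (iterative soft-thresholding), a textbook
# compressive-sensing solver -- a stand-in for whatever algorithm
# the Bell Labs team actually uses.
L = np.linalg.norm(A, 2) ** 2            # step-size bound (Lipschitz const.)
lam = 0.1 * np.max(np.abs(A.T @ y))      # sparsity weight (rough heuristic)
x_hat = np.zeros(n)
for _ in range(2000):
    z = x_hat - (A.T @ (A @ x_hat - y)) / L
    x_hat = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
```

With 80 snapshots standing in for 256 pixels' worth of data, the recovered scene lands close to the original, and taking more snapshots drives the error down further, matching the "more snapshots, better image" trade-off described above.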

That’s a cool way to make an image with a single sensor, and a second pixel behind the array works in exactly the same way. However, its view of the scene is slightly different. Jiang and co clearly show how they can reconstruct an image for this second sensor too.

There are a couple of interesting corollaries. Instead of creating two different images, Jiang and co show how to combine the data from both pixels to generate a single image (provided the scene is sufficiently far away). That allows them to create the same quality image in half the time.
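The halving works because a distant scene looks (approximately) the same to both pixels, so their measurement sets describe one unknown image and can be pooled into a single system of equations. The sketch below, under the same hypothetical toy setup as the compressive-sensing description above (1-D sparse scene, random ±1 patterns, ISTA solver, none of it from the paper), has each pixel take only half the snapshots:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sparse scene: 256 pixels, 5 bright points.
n = 256
x = np.zeros(n)
x[rng.choice(n, size=5, replace=False)] = 1.0

# Each pixel records only 40 snapshots -- half the session -- but both
# see (approximately) the same distant scene x, so their measurements
# can be pooled into one system.
A1 = rng.choice([-1.0, 1.0], size=(40, n))
A2 = rng.choice([-1.0, 1.0], size=(40, n))
y1, y2 = A1 @ x, A2 @ x

A = np.vstack([A1, A2])                 # 80 equations, gathered in half the time
y = np.concatenate([y1, y2])

# Reconstruct from the pooled system with ISTA, as in the single-pixel case.
L = np.linalg.norm(A, 2) ** 2
lam = 0.1 * np.max(np.abs(A.T @ y))
x_hat = np.zeros(n)
for _ in range(2000):
    z = x_hat - (A.T @ (A @ x_hat - y)) / L
    x_hat = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
```

Forty snapshots from each pixel give the solver the same 80 equations that one pixel would need 80 snapshots to collect, which is why the image quality holds while the acquisition time halves.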

And there is yet another use for this second pixel. Since it views the scene through the apertures from a slightly different angle, Jiang and co use the data to reconstruct a higher resolution image than is possible from a single aperture array. The measurements, they say, “may be used to reconstruct an image of the higher resolution than the number of aperture elements.”
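A toy way to see why a second viewing angle buys resolution: if the second pixel sees each aperture element shifted against the scene, its measurements constrain combinations of fine-grained scene cells that the first pixel's measurements cannot distinguish. The geometry below (each aperture element covering two fine cells, the second view offset by half an element) is an assumption for illustration, not the paper's actual optical model; the rank comparison just shows that the pooled two-view system pins down more fine-grid unknowns than either view alone.

```python
import numpy as np

rng = np.random.default_rng(2)

p = 16          # aperture elements in a 1-D LCD row (toy size)
n = 2 * p       # fine grid: twice the aperture resolution

# Assumed geometry (hypothetical): each aperture element integrates two
# adjacent fine cells; the second pixel sees the array shifted by half
# an element (wrap-around at the edge keeps the toy tidy).
S1 = np.zeros((p, n))
S2 = np.zeros((p, n))
for i in range(p):
    S1[i, 2 * i] = S1[i, (2 * i + 1) % n] = 1.0            # view 1 footprint
    S2[i, (2 * i + 1) % n] = S2[i, (2 * i + 2) % n] = 1.0  # shifted view 2

# Random +/-1 mask patterns for each sensor's snapshots.
M1 = rng.choice([-1.0, 1.0], size=(p, p))
M2 = rng.choice([-1.0, 1.0], size=(p, p))
A1 = M1 @ S1    # how sensor 1's measurements see the fine grid
A2 = M2 @ S2    # how sensor 2's measurements see it

r_one = np.linalg.matrix_rank(A1)                  # capped by p apertures
r_two = np.linalg.matrix_rank(np.vstack([A1, A2]))  # pooled two-view system
```

One view can never constrain more than `p` independent combinations of fine cells, however many snapshots it takes; the half-element offset makes the second view's equations linearly independent of the first's, which is the sense in which the pooled data supports an image finer than the aperture count.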

Just where we’ll see practical applications of this kind of camera isn’t yet clear. But people who photograph slow-moving objects with expensive gear ought to be particularly interested. Since the lens or mirror is the most expensive part of any telescope, particularly space telescopes, perhaps astronomers will be first in the queue.

Ref: arxiv.org/abs/1306.3946: Multi-View In Lensless Compressive Imaging
