
You may be about to see the world in a whole new way. MIT researchers, reporting in this month’s issue of Nature Materials, have demonstrated that nearly transparent webs made up of novel semiconducting fibers could replace lenses and sensors in cameras, and, among other things, lead to uniforms or automobile exteriors that give people a continuous view of their surroundings.

Each fiber is made of a semiconducting glass core, lined along its full length by wires that act as positive and negative electrodes, and surrounded by a transparent polymer. When light hits the photosensitive core, the electrical current in the fiber changes, registering the hit.

A mesh of these fibers can then be used to identify the location of the light on a surface. In the Nature Materials paper, the researchers, led by materials scientist Yoel Fink and physicist John Joannopoulos, demonstrate that the fibers, in addition to locating a point of light, can be used to determine the direction from which a light beam comes and can also sense light from a scene to form an image. “Here’s a structure that’s close to being invisible – but can see,” says one of the team members, Ayman Abouraddy, a research scientist at MIT.
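
To illustrate the idea in the simplest terms, the sketch below (in Python, using hypothetical fiber readouts rather than the researchers' actual signal processing) locates a light spot on a mesh by finding the row fiber and column fiber whose photocurrents change the most; the spot sits at their crossing.

import numpy as np

def locate_hit(row_currents, col_currents):
    """Estimate where a light spot strikes a fiber mesh.

    row_currents, col_currents: 1-D arrays holding the change in
    photocurrent reported by each horizontal and vertical fiber.
    The spot is taken to lie at the crossing of the two fibers
    with the strongest response.
    """
    row = int(np.argmax(np.abs(row_currents)))
    col = int(np.argmax(np.abs(col_currents)))
    return row, col

# Example: a spot illuminating row fiber 3 and column fiber 7 of a 16 x 16 mesh
rows = np.zeros(16); rows[3] = 1.0
cols = np.zeros(16); cols[7] = 0.9
print(locate_hit(rows, cols))   # -> (3, 7)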

For direction sensing, the researchers formed a grid of fibers into a sphere. A light beam from a flashlight first hits one side of the sphere and the grid registers the location. The light then passes through the sphere and out the other side, where it is detected again. Then an integrated circuit compares the entrance and exit points to calculate the path of the light.
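
The geometry behind that comparison is straightforward. The sketch below (again a hypothetical illustration, not code from the paper) takes the 3-D coordinates of the entry and exit points registered on the sphere and returns a unit vector along the beam.

import numpy as np

def beam_direction(entry_point, exit_point):
    """Return a unit vector along a beam, given the coordinates of the
    fibers that registered it entering and leaving the sphere."""
    d = np.asarray(exit_point, dtype=float) - np.asarray(entry_point, dtype=float)
    return d / np.linalg.norm(d)

# Example with a sphere of radius 1 and made-up hit coordinates:
entry = (0.0, 0.0, -1.0)   # beam enters at the bottom of the sphere
exit_ = (0.6, 0.0, 0.8)    # and leaves on the upper hemisphere
print(beam_direction(entry, exit_))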

Using a similar technique, the researchers were also able to record a scene, not just points of light. Light from a scene passes through two flat, parallel fiber grids, one after the other, and each grid registers the light's intensity. However, because there is no lens to focus light from a given plane onto a detector, as in a camera, the grids receive only a blurry image. To compensate for the lack of a lens, the researchers wrote algorithms that compare slight differences between the images recorded by the two fiber grids. These differences allow them to trace the light back to its source and mathematically reconstruct an in-focus image. Because this “focusing” happens after the data has been recorded, it's also possible to refocus on various objects in a scene after a picture has been taken.
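
The paper's reconstruction algorithm isn't spelled out here, but the underlying idea can be sketched for the simplest possible case, a single bare point source: with no lens, the source casts a wider intensity spot on the farther of the two grids, so comparing the two spots gives both its lateral position and its depth. The toy model below rests on those assumptions (isotropic point source, 1-D grids, made-up units) and should not be read as the authors' method.

import numpy as np

def spot(z, x0, xs):
    """Intensity profile a point source at depth z and lateral position x0
    casts on a 1-D grid sampled at positions xs (inverse-square falloff
    plus obliquity, up to a constant factor)."""
    r = xs - x0
    return z / (z**2 + r**2) ** 1.5

def locate_source(i1, i2, xs, d):
    """Recover the lateral position and depth of a single point source from
    intensity profiles i1 and i2 on two parallel grids a distance d apart."""
    x0 = xs[np.argmax(i1)]            # the peak sits directly under the source
    w1 = i1.sum() / i1.max()          # effective spot width on the near grid
    w2 = i2.sum() / i2.max()          # and on the far grid
    # Spot width grows linearly with depth: w2 / w1 = (z + d) / z
    z = d / (w2 / w1 - 1.0)
    return x0, z

xs = np.linspace(-500, 500, 2001)     # fiber positions (arbitrary units)
d = 5.0                               # separation between the two grids
i1 = spot(20.0, 3.0, xs)              # intensities measured on the near grid
i2 = spot(25.0, 3.0, xs)              # and on the far grid (5 units farther)
print(locate_source(i1, i2, xs, d))   # approximately (3.0, 20.0)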
