
Superlenses and Smaller Computer Chips

Researchers report using metamaterials to make devices that could transform computing, data storage, and optical microscopy.
March 23, 2007

How small transistors can be made, how much detail an optical microscope can reveal, and how much data can be squeezed onto a DVD are all limited by the way light moves through materials. But several separate advances reported this week in Science describe new materials for manipulating light in exotic ways, potentially leading to vastly improved electronic circuitry, microscopy, and data storage.

Hyperlens: A new lens overcomes the limits of optical microscopes that make it impossible to see the real-time movement of viruses. Light (orange arrows) passes through designs with nanoscale features etched into a sheet of chromium (light blue). It then encounters a series of alternating silver and aluminum-oxide layers. These layers magnify the image carried by light waves until it is big enough to be observed with a conventional optical microscope.

The three Science papers are part of a fledgling field of research called metamaterials, in which novel optical properties arise from combining multiple materials in structures smaller than the wavelength of the electromagnetic waves being manipulated, whether those are microwaves or visible light. The researchers were able to manipulate visible wavelengths by assembling metals (such as gold or silver) with other materials in precise nanoscale layers.

Last year researchers reported metamaterials that could make an object invisible to microwaves by smoothly routing the waves around it. (See “Cloaking Breakthrough.”) The new devices reported this week manipulate visible light in the green to ultraviolet range. While such devices could cloak only very small, probably microscopic objects, manipulating visible light could be useful in optical microscopes, photolithography, and optical storage such as DVDs.

In one study, researchers at the University of California, Berkeley, used a metamaterials-based lens paired with a conventional optical lens to reveal patterns too small to be discerned with an ordinary optical microscope. In one experiment, the lens was able to distinguish two 35-nanometer lines etched 150 nanometers apart. Without the metamaterials, the microscope showed only one thick line.
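To see why those 150-nanometer spacings are out of reach for an ordinary microscope, the conventional resolution limit can be estimated from the Abbe diffraction formula. The sketch below uses assumed, illustrative values (365-nanometer near-UV light and an air objective), not figures from the paper:

```python
# Rough Abbe diffraction limit for a conventional optical microscope:
# the smallest resolvable separation is about d = wavelength / (2 * NA).
# The wavelength and numerical aperture are illustrative assumptions.
wavelength_nm = 365.0        # assumed near-UV illumination
numerical_aperture = 1.0     # assumed ordinary air objective

d_min = wavelength_nm / (2 * numerical_aperture)
print(f"conventional limit: {d_min:.1f} nm")

# Lines 150 nm apart fall below this limit, so a conventional
# microscope blurs them into one thick line, as the article describes.
print("resolvable conventionally?", 150 >= d_min)
```

Under these assumptions the limit comes out near 180 nanometers, so the 150-nanometer spacing sits just below what a conventional instrument can separate.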

Such a lens could be used to watch cellular processes that have been impossible to see. Conversely, it could be used to project an image with extremely fine features onto a photoresist as a first step in photolithography, a process used to make computer chips. Such detailed resolution would also make it possible to represent more data on the surface of a DVD.

The other two papers describe related advances. Caltech researchers built a microscopic prism from metamaterials that bends green light in the direction opposite to that of an ordinary prism. This could make it possible to create lenses in shapes not possible now, such as space-saving flat lenses.

In the third paper, researchers from the University of Maryland built a lens that can magnify rays of blue-green light emanating from dots just 70 nanometers across. The rays become big enough to be seen by an ordinary optical microscope, giving the device an effective resolution of 70 nanometers. Even better resolution might be observed if smaller dots were used, says Igor Smolyaninov, a Maryland research scientist and author of the paper. He estimates that the method could resolve features as small as 10 nanometers.

Only the Berkeley researchers, however, demonstrated the ability to see an actual image: first two parallel lines, and then the letters O and N. To make their lens, the researchers carved a half-cylinder shape out of a piece of quartz, leaving behind a U-shaped valley. They then deposited alternating layers of silver and aluminum oxide, each just 35 nanometers thick, with each layer curved by the quartz scaffold.
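The magnification of a curved layer stack like this scales roughly with the ratio of its outer radius to the inner radius where the object sits. The numbers below are assumptions chosen for illustration, not the actual dimensions of the Berkeley device:

```python
# A cylindrical hyperlens magnifies an image roughly by the ratio of the
# outer radius of the layer stack to the inner radius where the object sits.
# The radii and feature size below are illustrative assumptions.
r_inner_nm = 125.0
r_outer_nm = 1000.0

magnification = r_outer_nm / r_inner_nm    # 8x for these assumed radii
feature_nm = 35.0
image_nm = feature_nm * magnification

print(f"a {feature_nm:.0f} nm feature projects out as {image_nm:.0f} nm")
```

With these assumed radii, a 35-nanometer feature would be stretched to 280 nanometers at the outer edge of the stack, large enough for a conventional objective to pick up.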

The purpose of this arrangement is to capture a type of light wave, called an evanescent wave, that comes from the smallest details of a surface. Normally, such waves decay too quickly to be captured by a conventional light microscope. But because of the lens’s novel materials and structure, the wave doesn’t decay as long as it is inside the lens. The image carried by the wave is then magnified by the curved form of the lens layers, making it possible to see the image with an optical microscope.
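That rapid decay can be sketched numerically: an evanescent wave's amplitude falls off exponentially with distance from the surface. The decay length used here is an assumed, illustrative value, not one from the papers:

```python
import math

# An evanescent wave's amplitude falls off exponentially with distance z
# from the surface: amplitude ~ exp(-z / d). The decay length d below is
# an assumed value, chosen only to show how fast fine detail is lost.
decay_length_nm = 50.0

for z_nm in (0, 50, 100, 200):
    amp = math.exp(-z_nm / decay_length_nm)
    print(f"z = {z_nm:3d} nm -> relative amplitude {amp:.3f}")
```

A few decay lengths away from the surface almost nothing is left, which is why the wave must be captured inside the lens before it fades.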

So far the lens, because it’s cylindrical, can be used only to see things lined up along the bottom of its U-shaped valley. The researchers are developing a spherical version that would allow them to see a whole surface at once, a prerequisite both for optical movies and for photolithography.

Still, much research remains. The new lens’s resolution is limited in part by the absorption of light by metal, says David Smith, professor of electrical and computer engineering at Duke University. Also, in the current design, the surface to be imaged has to be pressed right up against the lens; otherwise, the evanescent waves decay. That limits the lens’s applications. For example, it could not be used to improve a telescope.

Nevertheless, Smith, who himself works on metamaterials, says that the results of the three papers are encouraging. “It’s a great indication as to how this field is flourishing to see three such papers published in the same issue,” he says.
