
Lithography Past Light’s Limits

A new optical etching technique could lead to faster microchips.

The laws of physics dictate that traditional lenses can’t focus light onto a spot narrower than half the wavelength of the light. But converting the light into waves called plasmons can get around this limitation. Plasmonic lithography, which uses plasmon-generated radiation to carve physical features into a substrate, promises to revolutionize optical storage and computing, enabling ultradense DVDs and powerful microprocessors. Now, researchers at the University of California, Berkeley, have surmounted the biggest obstacle to plasmonic lithography by building a prototype that brings a plasmonic lens very close to the substrate.
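To put numbers on that half-wavelength limit, here is a back-of-the-envelope sketch (the wavelengths are illustrative examples, not figures from the article):

```python
def diffraction_limited_spot_nm(wavelength_nm: float) -> float:
    """Smallest spot a conventional lens can focus light onto:
    roughly half the wavelength of the light."""
    return wavelength_nm / 2

# 193 nm deep-ultraviolet light, widely used in chip lithography
print(diffraction_limited_spot_nm(193))  # 96.5
# 405 nm blue-violet laser light, as in Blu-ray optical storage
print(diffraction_limited_spot_nm(405))  # 202.5
```

So even short-wavelength ultraviolet light, focused conventionally, cannot reach the sub-20-nanometer regime discussed later in the article; plasmons are one way past the barrier.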

As the lens flies: This simulation shows how air moves around a microscale bearing that’s the key component of a prototype device for a new kind of high-resolution optical lithography. The red lines show the flow of air around the device; color gradations from dark blue to red indicate air pressure, from low to high. Buoyed by air, the bearing keeps an array of lenses within 20 nanometers of a spinning disc coated with a light-sensitive chemical.

Led by Berkeley mechanical-engineering professors Xiang Zhang and David Bogy, the researchers created what they call a flying plasmonic lens: an array of light concentrators that passes over a surface at a height of only 20 nanometers. The light concentrators are concentric circles patterned onto a thin film of silver; illuminating them with a laser causes electrons on their surfaces to oscillate. The oscillating electrons in turn emit a type of radiation that’s more tightly focused than light passing through conventional optics, but it can travel only about 100 nanometers from the lens surface. So the Berkeley researchers mounted the lenses on a device that uses so-called air bearings: the shape of the device causes a cushion of air to form under it, holding the lenses about 20 nanometers from a surface. In the researchers’ prototype, described in a paper in Nature Nanotechnology, the bearing moves the lens array over a disc spinning at 4 to 12 meters per second, much as the arm on a turntable holds the needle over a record.
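Those disc speeds imply very short exposure windows. A rough sketch of the arithmetic (my illustration, not a calculation from the paper): at the quoted speeds, a point on the disc crosses an 80-nanometer-wide spot in a matter of nanoseconds.

```python
def dwell_time_ns(feature_nm: float, speed_m_per_s: float) -> float:
    """Time a point on the moving disc spends under a spot of the
    given width, in nanoseconds."""
    return (feature_nm * 1e-9) / speed_m_per_s * 1e9

print(dwell_time_ns(80, 4))   # 20.0 ns at the slow end of the range
print(dwell_time_ns(80, 12))  # roughly 6.7 ns at the fast end
```

This is one reason distance control matters so much: the lens has only nanoseconds to deliver enough energy to each spot, and the plasmon-generated radiation fades within about 100 nanometers of the lens surface.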

Kenneth Crozier, a professor of natural sciences at Harvard University, says that the Berkeley researchers’ use of the air bearing overcomes “one of the key technological challenges in plasmonics.” Over the past few years, Crozier and others have used plasmonics to concentrate light onto ever smaller spots, but they haven’t successfully addressed the practical issue of distance control. The Berkeley device, Crozier adds, also offers far faster scanning speeds than other devices do.

The speed and precision of the system are equivalent to flying a Boeing 747 two millimeters above the ground, says Zhang. Indeed, the design of the air bearing is in some ways analogous to that of an airplane. One pair of pads on the bearing controls roll; another controls pitch, the equivalent of moving a plane’s nose up or down.

Plasmonic lithography is “a technology that bears looking at because we need better solutions for sub-20-nanometer lithography than we have today,” says John Hartley, director of the Advanced Lithography Center at the University of Albany’s College of Nanoscale Science and Engineering. In optical lithography, light shines through a mask (a type of stencil) onto a substrate, such as a silicon wafer, that’s coated with a light-sensitive chemical called a photoresist. The photoresist hardens where the light strikes it; elsewhere, it can be rinsed away, reproducing the pattern of the mask. It’s possible to make finer features by using shorter-wavelength light, but this approach quickly becomes impractical, says Zhang. Shorter-wavelength light has higher energy, and producing it requires expensive lasers or, in the case of extreme ultraviolet light, a synchrotron. Other technologies, such as electron beams, can etch very fine features without masks, but they’re slow. The Berkeley flying lens is much faster and will become faster still, says Zhang, when the number of plasmonic lenses in an array is increased from the current 16 to 100,000.
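If each lens writes in parallel, throughput should scale roughly linearly with the lens count. A minimal sketch of that scaling, assuming no other bottleneck (my assumption, not a claim from the paper):

```python
def relative_throughput(n_lenses: int, baseline_lenses: int = 16) -> float:
    """Patterning throughput relative to the 16-lens prototype,
    assuming lenses write independently and in parallel."""
    return n_lenses / baseline_lenses

# Scaling from the current 16-lens array to the proposed 100,000
print(relative_throughput(100_000))  # 6250.0
```

Under that simple linear model, the proposed 100,000-lens array would pattern about 6,250 times as much area per unit time as the current prototype.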

So far, the Berkeley researchers have demonstrated that they can use the technology to etch 80-nanometer lines, which is coarse compared with the best optical-lithography techniques currently in use. But engineering the distance-control system was the hard part, Zhang says; making the concentrators smaller, for example, will increase the technique’s resolution.

But higher-resolution light beams won’t do much good without a new generation of photoresists that can resolve features that are only five nanometers or so across; the photoresists on the market were designed to work with wider beams of light. Zhang says that he is currently collaborating with chemists to address this problem.

Zhang says that the air-bearing design could also enable other applications of plasmonics, particularly high-resolution imaging. “If we can print 50 nanometers, we can image 50 nanometers,” he says. The flying lenses could be used as probes for evaluating the quality of computer chips or for biological imaging, allowing biologists to watch processes unfolding in living cells at the molecular level.

Zhang is in the process of spinning out a company to develop the technology and has been contacted by major semiconductor companies, he says.
