
Prototype Display Lets You Say Goodbye to Reading Glasses

Researchers are developing technology that can adjust an image on a display so you can see it clearly without corrective lenses.
July 23, 2014

Those of us who need glasses to see a TV or laptop screen clearly could ditch the eyewear thanks to a display technology that corrects vision problems.

Picture this: Researchers use a camera and lenses to simulate an eye with nearsightedness looking at an “E” shown on a modified iPod Touch display.

The technology uses algorithms to alter an image based on a person’s glasses prescription, in combination with a light filter set in front of the display. The algorithm adjusts the light coming from each individual pixel so that, after passing through the tiny holes in the plastic filter, the rays reach the retina in a way that re-creates a sharp image. The idea, researchers say, is to anticipate how your eyes will naturally distort whatever’s onscreen (something glasses or contacts typically correct) and to adjust the image beforehand so that what you see appears clear.
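
A rough way to picture that "undoing" step in code is to model the eye's defocus as a blur kernel and apply a regularized inverse (Wiener-style) filter to the image before it is shown. The sketch below is illustrative only, not the researchers' actual algorithm, and every function name and parameter in it is an assumption.

```python
import numpy as np

def disk_psf(radius_px, shape):
    """Circular defocus kernel: a crude stand-in for the blur an uncorrected eye adds."""
    y, x = np.indices(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    psf = ((y - cy) ** 2 + (x - cx) ** 2 <= radius_px ** 2).astype(float)
    return psf / psf.sum()

def prefilter(image, psf, noise=1e-2):
    """Wiener-style inverse filter: boosts the frequencies the eye's blur will attenuate."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    W = np.conj(H) / (np.abs(H) ** 2 + noise)           # regularized inverse of the blur
    out = np.real(np.fft.ifft2(np.fft.fft2(image) * W))
    return np.clip(out, 0.0, 1.0)                        # a display cannot emit negative light

# Usage: the viewer's defocus then blurs the prefiltered image back toward the original.
image = np.random.rand(256, 256)                         # stand-in for the on-screen image
corrected = prefilter(image, disk_psf(radius_px=4, shape=image.shape))
```

Plain inverse filtering like this tends to lose contrast, which is why the researchers pair the prefiltered image with the pinhole light-field hardware described below.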

Brian A. Barsky, a University of California, Berkeley, computer science professor and affiliate professor of optometry and vision science who coauthored a paper on it, says it’s like undoing what the optics in your eyes are about to do. The technology is being developed in collaboration with researchers at MIT and Microsoft.

In addition to making it easier for people with simple vision problems to use all kinds of displays without glasses, the technique may help those with more serious vision problems caused by physical defects that can’t be corrected with glasses or contacts, researchers say. This includes spherical aberration, which causes different parts of the lens to refract light differently.

While similar methods have been tried before, the new approach produces a sharper, higher-contrast image. Barsky and his coauthors will present the paper in August at Siggraph, the annual International Conference and Exhibition on Computer Graphics and Interactive Techniques, in Vancouver, Canada.

For the paper, the researchers took images of things like a rainbow-colored hot-air balloon and a detail of a Vincent van Gogh self-portrait and applied algorithms that warped each image to compensate for the specific eye condition being simulated. They then showed the images on an iPod Touch whose display they had covered with an acrylic slab topped by a plastic screen pierced with thousands of tiny, evenly spaced holes.

Gordon Wetzstein, who coauthored the paper while a research scientist at MIT’s Media Lab, says the screen allows a regular two-dimensional display to work as what’s known as a “light field display.” This means the screen controls the way individual light rays emanate from the display, leading to a sharper image without degrading contrast.
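
As a rough illustration of the pinhole idea (a sketch with assumed geometry, not the prototype's actual dimensions): each hole sits a small distance above a block of pixels, and which pixel in that block lights up determines the direction of the ray leaving the hole.

```python
import numpy as np

# Hypothetical geometry: 5x5 pixels behind one pinhole at the iPod Touch's
# 326-ppi pixel pitch, with an assumed 6 mm gap between the pixels and the mask.
pixel_pitch_mm = 25.4 / 326                      # ~0.078 mm per pixel at 326 ppi
gap_mm = 6.0                                     # assumed spacer thickness, not from the paper

def ray_angle_deg(pixel_offset_mm, gap_mm):
    """Direction of the ray leaving a pinhole when the pixel at this offset is lit."""
    return np.degrees(np.arctan2(pixel_offset_mm, gap_mm))

offsets = (np.arange(5) - 2) * pixel_pitch_mm    # the 5 pixel positions under one hole
print([round(ray_angle_deg(o, gap_mm), 2) for o in offsets])
# Each pixel under the hole emits along a slightly different angle, so the display
# can steer different intensities toward different points on the viewer's pupil.
```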

The researchers tested their device using a Canon DSLR camera with its focus set to simulate vision problems such as farsightedness.

Wetzstein says the next step is to build prototype displays that people can use in the real world—something he expects could take a few years.

There are still challenges to work out. For instance, the technique depends on the viewer’s focal length: the version the researchers tested requires the user either to keep his or her eyes still or to rely on software that tracks head movement and adjusts the image accordingly. Barsky expects this won’t be much of a problem, though; when a display doesn’t look right, he says, we naturally shift around until it comes into focus.

And while the technology can be adjusted for different viewers, it won’t currently work for several people with different vision needs at the same time. However, Ramesh Raskar, an associate professor at the MIT Media Lab who coauthored the paper, says that with a display of high enough resolution (about double the 326 pixels per inch of the iPod Touch used in the paper), the technology could serve more than one viewer at once.
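
A back-of-the-envelope sketch of why resolution matters (the pinhole spacing below is an assumed figure, not one from the paper): more pixels behind each hole means more independently steerable rays, which is what would let the display serve more than one prescription at a time.

```python
def rays_per_pinhole(ppi, pinhole_pitch_mm=0.39):     # pinhole spacing is an assumption
    pixel_pitch_mm = 25.4 / ppi
    per_side = int(round(pinhole_pitch_mm / pixel_pitch_mm))
    return per_side ** 2                              # pixels (distinct rays) under one hole

for ppi in (326, 652):                                # iPod Touch vs. roughly double
    print(ppi, "ppi ->", rays_per_pinhole(ppi), "rays per pinhole")
# At these assumed dimensions: 326 ppi -> 25 rays per hole, 652 ppi -> 100 rays per hole.
```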
