Eyeglasses That Can Focus Themselves Are on the Way

Deep Optics is working on glasses with liquid-crystal lenses that can constantly refocus; it could be good news for aging eyes and virtual reality, too.
March 9, 2016

An Israeli startup is making glasses with lenses that can automatically adjust their optical power in real time, which may be a boon to people with age-related trouble focusing on nearby objects and could also be helpful for making virtual reality less nauseating.

Called Deep Optics, the startup has spent the last three years building lenses with a see-through liquid-crystal layer that can change its refractive index—that is, the way light bends while passing through it—when subjected to an electrical current that depends on sensor data about where a wearer’s eyes are trying to focus. This month it announced it had brought in $4 million in venture capital to help make this happen; investors include Essilor, a French company that makes eyeglass lenses.

While the technology is not entirely new—it’s been used in smartphone camera lenses in the past, for instance—Deep Optics claims to be able to use it in lenses that are larger and more optically powerful.

A prototype of Deep Optics’s self-adjusting lens is shown surrounded by a printed circuit board (left), while another lens is packed in a plastic handle that has an adapter to connect it to a computer (right).

The company initially hopes its technology can be useful for people with presbyopia, the very common loss of near focus that sets in as people reach their mid-40s and beyond. Typically, this is addressed by wearing glasses with progressive lenses, which have different degrees of focusing power in different areas. Such specs limit a person’s field of view, though, Deep Optics cofounder and CEO Yariv Haddad argues, and force people to learn a new behavior to see clearly.

The idea behind Deep Optics, he says, is that when the glasses are not operating electrically, they’ll be focused on the far distance, like a normal pair of glasses. But when you look at an object close up, like a book, or at an intermediate distance, like a computer display, sensors tracking your eyes will send data about the distance between your pupils to a tiny processor built into the glasses. The processor will calculate where you’re looking and adjust the focus by up to three diopters, which Deep Optics says covers the same visual range as a pair of multifocal lenses.
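As a rough illustration of the geometry involved (this is not Deep Optics’s actual algorithm, whose details are unpublished), the viewing distance can be estimated from how far the pupils converge, and the lens correction needed then follows from the standard diopter relation D = 1/d. All the numbers below are illustrative assumptions:

```python
import math

def gaze_distance_m(ipd_m: float, convergence_angle_rad: float) -> float:
    """Estimate viewing distance from interpupillary distance (IPD)
    and the angle each eye rotates inward. Simple isosceles-triangle
    geometry; a real eye-tracking system must cope with noisy data."""
    # The two lines of sight meet at (ipd/2) / tan(angle) in front of the eyes.
    return (ipd_m / 2.0) / math.tan(convergence_angle_rad)

def required_add_diopters(distance_m: float) -> float:
    """Optical power needed to focus at distance_m, relative to a lens
    focused at infinity: D = 1/d, with d in meters."""
    return 1.0 / distance_m

# A book held about 33 cm away needs roughly a 3-diopter add --
# the top of the range Deep Optics quotes.
print(round(required_add_diopters(0.33), 1))  # 3.0
```

This is why a three-diopter adjustment range spans everything from infinity down to reading distance, matching what multifocal lenses cover.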

“The user doesn’t have to control it, doesn’t have to look through a specific area of the lens,” Haddad says. “[They] just have to look through the glasses as they would with any glasses prior to that.”

You won’t be able to buy glasses that include this technology any time soon. While the company has the basics of a working prototype, including functional lenses and other components, it still has a lot of work to do when it comes to perfecting the lenses and the system for detecting pupil distance, Haddad says, not to mention figuring out how to shrink everything down so it can fit into something as slim as a pair of eyeglasses. He expects that it will be two years before Deep Optics will start having people test the glasses extensively.

Haddad says the Deep Optics technology may be useful for other things besides vision problems. For example, it may offer a way to focus your eyes more naturally when wearing a virtual-reality headset. In a headset, your eyes must focus on a flat display at a fixed distance even as they converge on 3-D images that appear closer, and that mismatch makes some people feel sick. Haddad thinks the constantly adjusting lenses can help.
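The mismatch Haddad describes can be expressed in diopters: the difference between the optical power the eyes need to focus on the display and the power implied by the virtual object’s apparent distance. A minimal sketch, using assumed example values (a 2 m display focal distance and a virtual object rendered 0.5 m away):

```python
def vergence_accommodation_conflict(display_focal_m: float,
                                    virtual_object_m: float) -> float:
    """Difference, in diopters, between where the eyes must focus
    (the display) and where they converge (the virtual object)."""
    return abs(1.0 / display_focal_m - 1.0 / virtual_object_m)

# Illustrative numbers only: the farther a virtual object sits from the
# display's focal distance, the larger the conflict the lens must absorb.
print(vergence_accommodation_conflict(2.0, 0.5))  # 1.5
```

A lens that can shift its power by that difference in real time would, in principle, let the eyes focus at the virtual object’s depth rather than at the screen’s.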

Jesse Schell, a professor at Carnegie Mellon University’s Entertainment Technology Center and CEO of Schell Games, says the technology does sound potentially useful for virtual reality. How well it could work and how practical it could be, though, is harder to say.

“I think part of the challenge is it’s going to have to respond really fast, and that’s not trivial,” he says.
