
Nvidia’s Eye-Tracking Tech Could Revolutionize Virtual Reality

A phenomenon first observed by Leonardo da Vinci is being used to make virtual images look more realistic.
July 21, 2016

Look at a clock on a nearby wall. The clock at the center of your gaze is sharp, while the scene around it is blurred, as if your brain were sketching your surroundings, or, in computer graphics terms, rendering a low-resolution version of the scene.

Nvidia is applying the same trick to rendering virtual reality, and it could significantly improve the realism of virtual worlds. By concentrating graphics rendering power on a smaller area, it is possible to sharpen the image a person sees.

Leonardo da Vinci was the first person to notice this visual phenomenon, called foveal vision, in the 15th century. David Luebke, together with four other researchers at Nvidia, has spent the last nine months attempting to mimic the principle in VR by fully rendering only the specific area where a player is looking, and leaving the rest of the scene at a far lower resolution.

This virtual scene was rendered using Nvidia’s foveal vision approach, which tracks the focus of the user’s gaze and blurs peripheral vision around it.
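The idea can be made concrete with a small sketch. The snippet below is a rough illustration rather than Nvidia's actual implementation: it assigns each pixel a rendering resolution based on its distance from the tracked gaze point, and the region radii and resolution scales are assumed values chosen only for illustration.

```python
# A rough illustration of gaze-dependent rendering resolution.
# The region radii and resolution scales below are assumed values,
# not Nvidia's actual parameters.

import math

def resolution_scale(pixel_xy, gaze_xy, fovea_radius=200, mid_radius=500):
    """Return the fraction of full resolution at which to render a pixel,
    based on its distance (in pixels) from the tracked gaze point."""
    dist = math.hypot(pixel_xy[0] - gaze_xy[0], pixel_xy[1] - gaze_xy[1])
    if dist <= fovea_radius:   # the area the eye is actually fixating
        return 1.0             # full resolution
    if dist <= mid_radius:     # transition band
        return 0.5
    return 0.25                # far periphery, rendered coarsely

# Example: a pixel near the gaze point versus one at the edge of the frame.
gaze = (960, 540)
print(resolution_scale((1000, 560), gaze))  # 1.0  -> full detail
print(resolution_scale((100, 100), gaze))   # 0.25 -> coarse detail
```

In a real renderer this decision would be made per tile or per shading sample on the GPU rather than per pixel on the CPU, but the principle is the same.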

When the player using the Nvidia system focuses on a new area of the scene, eye-tracking software shifts the focus of the rendering in kind. To render a full scene at 90 frames per second, the lowest acceptable frame rate in VR before users begin to report nausea, four million pixels must be redrawn almost a hundred times a second. By focusing the rendering only on the player’s line of sight, however, the system can make huge computational savings. “The performance gains are too large to be ignored,” says Luebke.
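A quick back-of-envelope calculation, using the figures quoted above and an assumed split between a small full-resolution fovea and a quarter-resolution periphery, shows where those savings come from.

```python
# Back-of-envelope check of the figures above, plus a rough estimate of the
# savings from foveation. The split (10% of pixels at full resolution, the
# rest at quarter resolution) is an assumption for illustration only.

pixels_per_frame = 4_000_000        # "four million pixels"
frames_per_second = 90              # minimum comfortable VR frame rate

full_rate = pixels_per_frame * frames_per_second
print(f"Full rendering: {full_rate:,} pixels per second")       # 360,000,000

fovea_fraction = 0.10               # assumed share of pixels near the gaze
periphery_scale = 0.25              # assumed resolution scale elsewhere

foveated_per_frame = pixels_per_frame * (
    fovea_fraction * 1.0 + (1.0 - fovea_fraction) * periphery_scale
)
foveated_rate = foveated_per_frame * frames_per_second
print(f"Foveated rendering: {foveated_rate:,.0f} pixels per second")
print(f"Roughly {full_rate / foveated_rate:.1f}x fewer pixels to shade")
```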

The principle is not new in VR research. Indeed, the Kickstarter-backed Fove headset uses a similar system (see "Point, Click, and Fire in Virtual Reality—With Just Your Eyes"). Luebke has spent much of the past 15 years studying the area, first as a professor at the University of Virginia and now at Nvidia. Previously, however, eye-tracking technology has struggled to keep up with the whip-quick speed of human eye movements, causing a stomach-churning latency effect when a user shifts focus from, say, the left side of a scene to the right. A new prototype eye-tracking VR display from SensoMotoric Instruments is capable of accurate, low-latency eye tracking at 250 hertz. “For the first time we have eye-trackers that you can’t outrun with your eyes,” explains Luebke.
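The arithmetic behind that claim is straightforward, as the short calculation below shows: at 250 hertz, a fresh gaze sample arrives several times within the budget of a single 90-frames-per-second frame.

```python
# Simple latency arithmetic behind the 250 Hz figure: gaze updates arrive
# several times per rendered frame, so the focal region can follow the eye.

tracker_hz = 250
frame_rate = 90

gaze_period_ms = 1000 / tracker_hz   # ~4 ms between gaze samples
frame_time_ms = 1000 / frame_rate    # ~11.1 ms to render one frame

print(f"New gaze sample every {gaze_period_ms:.1f} ms")
print(f"One 90 fps frame takes {frame_time_ms:.1f} ms")
print(f"About {frame_time_ms / gaze_period_ms:.1f} gaze updates per frame")
```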

Even with this capability, Nvidia’s team needed to spend a great deal of time calculating exactly how much it could lower the resolution of the periphery of a scene before a viewer notices. “Peripheral vision is very good at detecting flicker,” explains Luebke. “It’s used to help us see tigers in the woods.”

As such, any flicker from the degradation is disconcerting. Likewise, if the periphery becomes too blurred, it can create a tunnel vision effect, as if the viewer is looking through a pair of binoculars. “You can tell something’s wrong, even if you can’t quite put your finger on what,” says Luebke.

To solve the issue, Nvidia’s researchers found that if they increase the contrast of the peripheral scene while lowering the resolution, the human mind is completely fooled.
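The snippet below sketches that idea in simplified form: a low-resolution peripheral render has its contrast raised around the mean before being composited under a full-resolution fovea. The global contrast gain and the blend mask are illustrative assumptions; the researchers' actual technique is more involved.

```python
# Simplified sketch of contrast-boosted foveation: composite a sharp fovea
# over a low-resolution periphery whose contrast has been raised to mask the
# blur. The global contrast gain and the blend mask are illustrative
# assumptions, not the researchers' actual parameters.

import numpy as np

def foveate_with_contrast(full_res, low_res, fovea_mask, contrast_gain=1.3):
    """Blend a full-resolution fovea over a contrast-boosted low-res periphery.

    full_res   : H x W x 3 float image in [0, 1], full-resolution render
    low_res    : H x W x 3 float image, upscaled low-resolution render
    fovea_mask : H x W float mask, 1.0 at the gaze point falling to 0.0
    """
    mean = low_res.mean(axis=(0, 1), keepdims=True)
    periphery = np.clip(mean + contrast_gain * (low_res - mean), 0.0, 1.0)
    mask = fovea_mask[..., None]           # broadcast mask over color channels
    return mask * full_res + (1.0 - mask) * periphery
```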

While Nvidia has no products in production that facilitate the technique, the company, which provides hardware and software for many VR companies, hopes its findings will encourage the major headset makers to include eye-trackers in their future head-mounted displays. “Part of what we are doing here is helping to define the rules of the road for VR,” says Luebke.

The technology is unlikely to appear outside of VR—in laptops, for example—since eye-trackers become far less effective the farther they are from one’s face. VR, by contrast, where the tracker sits a few centimeters from the eye, offers the ideal pairing. The technology will likely shape the company’s future graphics cards, giving developers the opportunity to prioritize computational processing on specific pixels and to redefine rendering algorithms.
