Instead of a Password, Security Software Just Checks Your Eyes

Everybody has a different pattern of veins in the whites of their eyes. New security software makes use of that.
December 3, 2012

Typing a password into your smartphone might be a reasonable way to access the sensitive information it holds, but a startup called EyeVerify thinks it would be easier—and more secure—to just look into the phone’s camera lens and move your eyes to the side.

EyeVerify’s software identifies you by your “eyeprints,” the pattern of veins in the whites of your eyes. Everybody has four eyeprints, two in each eye on either side of the iris. The company claims that its method is as accurate as a fingerprint or iris scan, without requiring any special hardware.

The Kansas City, Kansas-based company plans to roll out its software in the first half of next year. CEO and founder Toby Rush envisions a range of uses for it, including authenticating people who want to use smartphones to access their online medical records or bank accounts. Rush says phone manufacturers are interested in embedding the software into handsets so that many applications can use it for authenticating people, though he declined to name any prospective partners.

The technology behind EyeVerify comes from Reza Derakhshani, associate professor of computer science and electrical engineering at the University of Missouri–Kansas City. Derakhshani, the company’s chief scientist, was a co-recipient of a patent for the eye-vein biometrics behind EyeVerify in 2008.

On the user’s end, EyeVerify seems pretty simple (though somewhat awkward in its prototype stage). To access data on a smartphone that’s locked with EyeVerify, you would look to the right or the left, enabling EyeVerify to capture eyeprints from each of your eyes with the camera on the back of the smartphone. (Eventually, EyeVerify expects to take advantage of a smartphone’s front-facing camera, but for now the resolution is not high enough on most of these cameras, Rush says.) EyeVerify’s software processes the images, maps the veins in your eye, and matches that against an eyeprint stored on the phone.
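
The article doesn’t detail EyeVerify’s algorithms, but the capture-map-match loop it describes can be sketched in a generic way. The OpenCV-based pipeline below is a minimal illustration under assumed choices (CLAHE contrast enhancement, ORB features, a brute-force matcher, and arbitrary thresholds); it is not the company’s implementation.

```python
import cv2

def extract_eyeprint(image_bgr):
    """Turn a photo of the white of the eye into a set of local features.
    A real system would first segment the sclera; this sketch assumes the
    image is already cropped to it."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Veins are fine, low-contrast structures; local histogram equalization
    # (CLAHE) makes them stand out before feature detection.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)
    orb = cv2.ORB_create(nfeatures=500)
    _, descriptors = orb.detectAndCompute(enhanced, None)
    return descriptors

def matches_template(probe, template, min_good_matches=40):
    """Compare a fresh capture against the eyeprint enrolled on the phone.
    Both thresholds here are arbitrary illustrative values."""
    if probe is None or template is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(probe, template)
    good = [m for m in matches if m.distance < 50]
    return len(good) >= min_good_matches

# Enrollment stores a template on the phone; verification compares a new
# capture against it and unlocks only on a match.
template = extract_eyeprint(cv2.imread("enrolled_sclera.jpg"))
probe = extract_eyeprint(cv2.imread("login_attempt.jpg"))
print("unlocked" if matches_template(probe, template) else "denied")
```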

Rush says the software can tell the difference between a real person and an image of a person. It randomly challenges the smartphone’s camera to adjust settings such as focus, exposure, and white balance and checks whether it receives an appropriate response from the object it’s focused on.
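
The article describes this liveness test only at a high level. The sketch below shows the general idea of a randomized challenge against the camera; the camera interface (capture(), set_exposure()) is hypothetical, and the brightness-based response check is an assumption, not EyeVerify’s actual test.

```python
import random
import numpy as np

def liveness_check(camera):
    """Challenge the camera with a random setting change and confirm the
    scene responds the way a live eye in front of the lens should.
    `camera` is a hypothetical interface with capture() -> numpy array
    and set_exposure(step)."""
    baseline = camera.capture().astype(np.float32)

    step = random.choice([+1, -1])          # the challenge is unpredictable
    camera.set_exposure(step)
    response = camera.capture().astype(np.float32)

    # The response is measured against a challenge chosen at random, so a
    # prerecorded clip cannot anticipate it; a scene imaged live by the
    # real camera should shift in the commanded direction.
    shift = response.mean() - baseline.mean()
    return shift > 5.0 if step > 0 else shift < -5.0   # arbitrary margin
```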

The look of the veins in your eyes changes over time, and you might burst a blood vessel one day. But Rush says long-term changes would be slow enough that EyeVerify could “age” its template to adjust. And the software only needs one proper eyeprint to authenticate you, so unless you bloody up both eyes, you should be able to use EyeVerify after a bar fight.
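
Putting those two points together, verification can accept a match on any one of the four stored eyeprints and then slowly blend the new capture into the matched template. The routine below is a minimal sketch of that idea under assumed data structures and an assumed blending rate; it is not EyeVerify’s actual aging scheme.

```python
def authenticate_and_age(captures, templates, match_fn, blend=0.1):
    """Authenticate if any single eyeprint matches its stored template,
    then nudge that template toward the new capture so gradual changes in
    the vein pattern are absorbed over time.

    captures, templates: dicts keyed by eyeprint position (e.g.
    "left_eye_nasal") with numeric feature vectors as values;
    match_fn(a, b) -> bool is whatever comparison the system uses.
    These names and the 10 percent blending rate are illustrative
    assumptions."""
    for position, capture in captures.items():
        template = templates.get(position)
        if template is not None and match_fn(capture, template):
            # Exponential moving average: mostly old template, a little new.
            templates[position] = (1 - blend) * template + blend * capture
            return True
    return False
```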

Kevin Bowyer, chair of the University of Notre Dame’s computer science and engineering department, whose research includes iris biometrics, thinks the technology has promise, but he’s skeptical that it’s as accurate as fingerprint scanning.

Indeed, EyeVerify still needs to do more to prove that. Rush says that in tests of 96 people, the eyeprint system was 99.97 percent accurate. The company is working with Purdue University researchers to judge the accuracy of its software on 250 subjects—or another 500 eyes.
