Iris scanner can distinguish dead eyeballs from living ones

In theory, an iris scanner can be hacked using an eyeball plucked from the victim. Now researchers have trained a machine-vision system to tell the difference between dead irises and live ones.

The 1993 film Demolition Man is set in the fictional future of the 2030s, where people gain access to more or less everything via iris scans. That leads to an unsurprising plot device in which a prisoner escapes from jail by cutting out the warden’s eyeball and using it to spoof the biometric scanners.

This raises an interesting question. Is it possible for a scanner to tell the difference between a living iris and a dead one?

Today we get an answer thanks to the work of Mateusz Trokielewicz at Warsaw University of Technology in Poland and a couple of his colleagues. These guys have created a database of iris scans from living people and from dead bodies and then trained a machine-learning algorithm to spot the difference.

They say their algorithm can distinguish a living iris from a dead one with 99 percent accuracy. But their results offer criminals a potential way to beat the detection system.

First some background. Ophthalmologists have long recognized that the intricate structure of the iris is unique in every individual. The details are particularly apparent in near-infrared iris images, and iris images at this wavelength are widely used in various security applications.

But the system isn’t perfect. Last year, hackers unlocked an iris-scanning Samsung smartphone by printing an image of the owner’s iris onto a contact lens and then placing the contact lens onto a dummy eyeball.

The more gruesome hack from Demolition Man is another way to circumvent these systems. But until now, nobody had worked out whether this form of attack can be detected.

The research is made possible by an unusual database—the Warsaw BioBase PostMortem Iris dataset, which includes 574 near-infrared iris images collected from 17 people at various times after they had died. The images date from five hours to 34 days after death.

The team also collected 256 images of live irises. They took care to use the same iris camera used on the cadavers so that the machine-learning algorithm couldn’t be fooled into recognizing images based on the characteristics of different cameras.

The team also checked the dataset for obvious bias, such as differences in the way different operators take pictures and how this influences image intensity. They found little to distinguish the images in this respect.

However, there is an obvious difference in the way live and dead irises often look in images. This arises because the eyelids of cadavers are often held open with metal retractors, which rarely appear in live iris images and are easy for a machine-vision algorithm to spot. For this reason, the team cropped the images to show just the iris.
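The article does not spell out how the cropping is done, but the goal is clear: keep only a tight patch around the iris so the classifier never sees telltale context such as retractors or eyelids. A minimal sketch of that kind of preprocessing, assuming the iris center and radius have already been found by a separate segmentation step (those inputs are assumptions, not details from the paper), might look like this:

```python
import numpy as np


def crop_to_iris(image: np.ndarray, cx: int, cy: int,
                 radius: int, margin: int = 8) -> np.ndarray:
    """Crop a near-infrared eye image to a square patch around the iris.

    The center (cx, cy) and radius are assumed to come from an earlier
    segmentation step; margin adds a few pixels of context around the iris.
    """
    r = radius + margin
    top, bottom = max(cy - r, 0), min(cy + r, image.shape[0])
    left, right = max(cx - r, 0), min(cx + r, image.shape[1])
    return image[top:bottom, left:right]
```

Everything outside the patch (eyelids, retractors, skin) is discarded, so the classifier is forced to rely on the iris tissue itself.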

Finally, they used most of the dataset to train a machine-learning system to distinguish dead irises from live ones. They used the rest of the dataset to test the algorithm.
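The article does not say which learning algorithm the team used, so the following is only an illustrative sketch: a stratified train/test split and a generic support-vector classifier from scikit-learn, trained on flattened, cropped iris patches. The file names and the 70/30 split are assumptions for the example, not details from the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

# X: flattened, cropped iris patches; y: 1 for post-mortem, 0 for live.
# Both files are hypothetical stand-ins for the Warsaw dataset.
X = np.load("iris_patches.npy")   # shape (n_samples, n_pixels)
y = np.load("iris_labels.npy")    # shape (n_samples,)

# Hold out part of the data for testing, as the team does with its dataset.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

clf = SVC(kernel="rbf").fit(X_train, y_train)
print(confusion_matrix(y_test, clf.predict(X_test)))
```

From the confusion matrix you can read off the two figures the team reports: how often a post-mortem sample is classified as live, and how often a live sample is classified as dead.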

The results suggest that the algorithm accurately spots all dead irises and rarely misclassifies live ones. “No post-mortem sample gets mistakenly classified as a live one, with a probability of misclassifying a live sample as a dead one being around 1 percent,” says the team.

However, there is a caveat. This accuracy applies only to irises that have been dead for 16 hours or more. “Samples collected briefly after death (i.e., five hours in our study) can fail to provide post-mortem changes that are pronounced enough to serve as cues for liveness detection,” say Trokielewicz and co.

That gives these gruesome hackers a window of opportunity since freshly plucked eyeballs should work a treat. Worried readers can surely take some comfort from the knowledge that plucked eyeballs lose their hacking potency just a few hours later.

Ref: arxiv.org/abs/1807.04058: Presentation Attack Detection for Cadaver Irises
