The 1993 film Demolition Man is set in the fictional future of the 2030s, where people gain access to more or less everything via iris scans. That leads to an unsurprising plot device in which a prisoner escapes from jail by cutting out the warden’s eyeball and using it to spoof the biometric scanners.
This raises an interesting question. Is it possible for a scanner to tell the difference between a living iris and a dead one?
Today we get an answer thanks to the work of Mateusz Trokielewicz at Warsaw University of Technology in Poland and a couple of his colleagues. These guys have created a database of iris scans from living people and from dead bodies and then trained a machine-learning algorithm to spot the difference.
They say their algorithm can distinguish a living iris from a dead one with 99 percent accuracy. But their results offer criminals a potential way to beat the detection system.
First some background. Ophthalmologists have long recognized that the intricate structure of the iris is unique in every individual. The details are particularly apparent in near-infrared iris images, and iris images at this wavelength are widely used in various security applications.
But the system isn’t perfect. Last year, hackers unlocked an iris-scanning Samsung smartphone by printing an image of the owner’s iris onto a contact lens and then placing the contact lens onto a dummy eyeball.
The more gruesome hack from Demolition Man is another way to circumvent these systems. But nobody has worked out whether this form of attack can be detected, until now.
The research is made possible by an unusual database—the Warsaw BioBase PostMortem Iris dataset, which includes 574 near-infrared iris images collected from 17 people at various times after death, ranging from five hours to 34 days.
The team also collected 256 images of live irises. They took care to use the same iris camera used on the cadavers so that the machine-learning algorithm couldn’t be fooled into recognizing images based on the characteristics of different cameras.
The team also checked the dataset for obvious bias in the images, such as differences in the way different operators may take pictures and the way this influences image intensity. They found there was little to distinguish the images in this respect.
However, there is an obvious difference in the way alive and dead irises often look in images. This arises because the eyelids of cadavers are often held open using metal retractors, unlike for most live iris images. These are easy for a machine-vision algorithm to spot. For this reason, the team cropped the images to show just the iris.
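The cropping step matters because a classifier will happily latch onto retractors and other scene differences rather than the iris itself. As a rough illustration (this is not the authors' code, and it assumes the iris center and radius have already been located by some prior detection step), the crop amounts to slicing a square region around the iris:

```python
import numpy as np

def crop_to_iris(img, cx, cy, r):
    """Crop a square patch around a pre-detected iris.

    img      : 2-D grayscale image array
    cx, cy   : iris center (column, row), assumed already located
    r        : half-width of the crop, covering the iris only
    """
    return img[cy - r:cy + r, cx - r:cx + r]

# Toy example: a 100x100 image, iris centered at (50, 50).
frame = np.zeros((100, 100))
patch = crop_to_iris(frame, cx=50, cy=50, r=20)
print(patch.shape)  # (40, 40) -- eyelids and retractors fall outside
```

By discarding everything outside the square, cues like metal retractors never reach the classifier.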
Finally, they used most of the dataset to train a machine-learning system to recognize dead and alive irises. They used the rest of the dataset to test the algorithm.
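The overall recipe is the standard train/test split for a binary classifier. The sketch below is only a schematic stand-in, not the paper's method: it uses synthetic feature vectors in place of real near-infrared scans, and a logistic regression in place of whatever model the team actually trained, just to show the shape of the pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in "texture features" mirroring the dataset sizes:
# 256 live irises and 574 post-mortem ones. Real inputs would be
# cropped near-infrared images, not random vectors.
live = rng.normal(loc=0.0, scale=1.0, size=(256, 16))
dead = rng.normal(loc=1.5, scale=1.0, size=(574, 16))
X = np.vstack([live, dead])
y = np.array([0] * len(live) + [1] * len(dead))  # 0 = live, 1 = dead

# Train on most of the data, hold the rest back for evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
print(clf.score(X_test, y_test))  # accuracy on the held-out set
```

Holding out a test set that the model never sees during training is what lets the team quote the roughly 99 percent figure as a genuine generalization estimate rather than a memorization score.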
The results suggest that the algorithm accurately spots all dead irises and rarely misclassifies live ones. “No post-mortem sample gets mistakenly classified as a live one, with a probability of misclassifying a live sample as a dead one being around 1 percent,” says the team.
However, there is a caveat. This accuracy applies only to irises that have been dead for 16 hours or more. “Samples collected briefly after death (i.e., five hours in our study) can fail to provide post-mortem changes that are pronounced enough to serve as cues for liveness detection,” say Trokielewicz and co.
That gives these gruesome hackers a window of opportunity since freshly plucked eyeballs should work a treat. Worried readers can surely take some comfort from the knowledge that plucked eyeballs lose their hacking potency just a few hours later.
Ref: arxiv.org/abs/1807.04058: Presentation Attack Detection for Cadaver Irises