How to Teach an Iris Scanner That the Eye It’s Looking at Is Dead

New imaging and deep-learning techniques for iris recognition will foil a favorite trick of Hollywood villains

illustration of eyes on bright pink background
Illustration: Adam Voorhes/Gallery Stock


No matter how many times you hold open a cadaver’s eyelids to image the irises, each time is uniquely memorable.

One of us (Maciejewicz) once fielded phone calls day and night from mortuary workers at the Medical University of Warsaw’s hospital, in Poland. The calls were often placed immediately after a death or the arrival of a cadaver, so that Maciejewicz could get to the mortuary as soon as possible. The bodies would often come in with traces of their final moments: tiny bits of debris from a traffic accident, or electrodes on their skin from failed resuscitations. Maciejewicz may have been focused on imaging the irises, but for him, these traces made every encounter personal, as if he were meeting these people after death. After his work was done, he would thank each person aloud.

Maciejewicz was scanning eyes in a room full of cadavers to help answer some lingering questions about the security of iris-recognition systems. As iris scanning goes mainstream, such questions are becoming more urgent. Around the world, these systems help us skip security lines at many international airports, withdraw money from ATMs, and unlock our smartphones with a glance (Samsung, for example, uses iris scanning, while competitors such as Apple have opted for facial recognition). Governments, including those of Ghana, Tanzania, and Somaliland, have used iris recognition to identify duplicate voter registration records ahead of elections. The world’s largest biometric project, operated by the Unique Identification Authority of India, uses iris recognition along with other biometric identification to issue a unique ID number, or Aadhaar, to Indian residents. So far, Aadhaars have been issued to 1.2 billion Indian residents.

Iris recognition is gaining favor for these applications because the iris’s structure, like that of a fingerprint, is unique to every individual and doesn’t change over the course of one’s life. Identical twins, who have genetically identical eyes, do not share the same patterns. Even your own left and right irises are distinct. And while fingerprints are also commonly used to identify an individual, the iris is particularly attractive because it’s more complex and therefore more discriminating than other options. In theory, at least, that additional complexity makes it easier to correctly identify an individual and harder to fake someone else’s iris.

But it’s precisely because of the growing popularity of iris-recognition systems that it’s fair to ask how well those systems work. How well can these systems distinguish between a real iris and a replica, such as a high-quality image? Can they recognize an iris that has been—as gruesome as it may sound—plucked from a cadaver? And what about the rare case in which the iris does change, because of disease or injury?

Conventional wisdom has held that the iris begins decaying only minutes after death. Thanks to Maciejewicz’s work, we discovered that wisdom to be wrong: If cadavers are kept cool, the eyes can still be used for identification for up to three weeks postmortem. Researchers had also assumed that recognition systems could not accurately identify dead eyes; in fact they can, which means that such systems are vulnerable to exploitation. To be completely secure, future generations of iris-recognition systems will need more-advanced detection mechanisms capable of recognizing dead eyes. Otherwise, we could find ourselves in a science-fiction future where people use someone else’s dead eyes to access information or locations they’re not supposed to. Less fantastically, the systems will also need to be flexible enough to adapt when disease changes an iris, and precise enough to tell whether the iris in question is a fake.

Eye of the Beholder: The iris of the human eye has tremendous detail, and every person’s irises are unique, even in twins. Structures such as the crypts of Fuchs (the oval-shaped openings surrounding the iris’s outer rim) are complex enough to serve as clear markers of identity. Photo: Joe McNally/Getty Images

Today, commercial iris-recognition systems use near-infrared light to illuminate the eye for a scan. Near infrared works well for iris scanning because it is largely not absorbed by melanin, the pigment that among other things determines the iris’s color.

Iris-recognition systems also rely on methods to segment parts of the image into iris and non-iris areas. This segmentation allows the system to use an image in which part of the iris is obstructed—for example, by eyelashes, eyelids, or light reflecting off the eye—or when the entire iris is misshapen or damaged, as in more severe cases of postmortem or diseased eyes. Following segmentation, the prevailing approach is to filter the image to make the pattern more pronounced. The pattern is then converted into a binary code. Readily available software can then very quickly hunt for a match between this code and others in a database.
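The final matching step described above is usually a comparison of binary iris codes by fractional Hamming distance, counting only bits that segmentation marked as valid in both images. Here is a minimal numpy sketch of that comparison; the 2,048-bit code length and the 0.32 decision threshold are typical figures from the literature, not values specific to any one commercial system, and the toy codes below are random stand-ins for real filtered iris patterns.

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris codes.

    Bits flagged invalid in either mask (eyelids, lashes, reflections
    found during segmentation) are excluded from the comparison.
    """
    valid = mask_a & mask_b                  # bits usable in both codes
    disagreeing = (code_a ^ code_b) & valid  # valid bits that differ
    return disagreeing.sum() / valid.sum()

# Toy 2,048-bit codes: identical except for 100 flipped bits.
rng = np.random.default_rng(0)
code_a = rng.integers(0, 2, 2048, dtype=np.uint8)
code_b = code_a.copy()
flip = rng.choice(2048, size=100, replace=False)
code_b[flip] ^= 1
mask = np.ones(2048, dtype=np.uint8)

hd = hamming_distance(code_a, code_b, mask, mask)
# Same-eye comparisons cluster near 0; unrelated eyes near 0.5.
# A threshold around 0.32 is commonly cited for a match decision.
print(f"fractional Hamming distance: {hd:.3f}")  # 100/2048, about 0.049
```

Because the comparison is just bitwise operations and a sum, a database of millions of codes can be scanned very quickly, which is why this step is so fast in practice.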

One notable characteristic of these systems is that they all work with a still photo of the iris. So a simple test to automatically check whether the eye is “alive”—shining a light on it to see if the pupil contracts—is not always available. And there are other issues. For example, we found in our time at the hospital mortuary that the cornea—that protective, transparent outer layer of the eye—becomes cloudy soon after death. That cloudiness is noticeable in visible light, but near-infrared light basically sees right through it. Furthermore, after death, pupils become fixed in the “cadaveric position,” a mid-dilated position similar to the one considered ideal for recognition systems. The cadaveric position makes it tricky to tell a dead eye from a living eye at a glance in normal lighting conditions.

These factors, and our research on iris-recognition algorithms, support our finding that irises remain identifiable up to 21 days after death, a discovery that actually has an important and positive implication: Iris recognition could become a powerful new option for forensic examiners when they need to verify the identity of a corpse. Traditionally, identifications are made from fingerprints, dental records, or DNA, but those options can take hours or even days. Iris scanning, using the same method we used to catalog cadaver irises, could deliver the identification almost immediately.

As we’ve mentioned, the macabre downside is the possibility of someone using a dead person’s eyeball to gain access to secure locations or information. Sci-fi movies have already trotted out such grisly scenarios, though it has never happened in the real world. Still, if iris-recognition systems are more vulnerable than expected, it’s reasonable to ask how else they might be tricked. There have been plenty of instances in which more mundane schemes have been used to get past these systems.

Back in 2002, a group of German researchers and hackers demonstrated that the sensors in commercial recognition systems could be tricked by a person holding up to the scanner a paper printout of a photograph of an iris, with a hole in place of the pupil. Despite that revelation, recognition systems remain vulnerable to this very trick. In 2017, Chaos Computer Club, which bills itself as Europe’s largest association of hackers dedicated to making the public aware of data security issues, showed that the iris scanner on the Samsung Galaxy S8 could be deceived by using a photograph of an iris with a contact lens laid on top of it. The one catch is that the camera used to take the photo must be capable of capturing near-infrared light. Digital cameras typically have a filter that removes this light before it reaches the image sensor, but for many digital single-lens reflex (DSLR) cameras it’s not difficult to remove the filter.

Photo: Adam Czajka, Mateusz Trokielewicz & Piotr Maciejewicz

This clear, healthy eye was photographed with a Canon EOS 1000D camera.

Successful biometric attacks like the Galaxy S8 demonstration have inspired security researchers to step up their work on detecting and handling such attacks. Those efforts have resulted in a host of effective countermeasures.

One possible measure relies on the use of photometric stereo, a computer-vision technique that captures the three-dimensional structure of an object. The technique works by illuminating the object from multiple directions, one direction at a time, and photographing the results. It’s possible, using the resulting collection of images, to determine the orientation of the object’s surface at any point by comparing how light reflects from different angles. We showed earlier this year that photometric stereo can detect when someone is wearing contact lenses with someone else’s iris pattern, which might bamboozle an ordinary iris-recognition system. Fortunately, most modern iris-recognition systems can be adapted to use photometric stereo without swapping out hardware. Another possibility is that systems could be adjusted to spot telltale anomalies that printers leave on printed images.
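Under the common Lambertian (matte-surface) assumption, photometric stereo reduces to a per-pixel least-squares problem: each pixel’s brightness is the dot product of the surface normal with the light direction, scaled by the surface’s reflectivity. The sketch below recovers normals from three images of a synthetic flat surface; a flat printed contact lens would yield flat normals like these, while a real iris’s 3D relief would not.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover per-pixel surface normals from images lit from known directions.

    images:     array of shape (k, h, w), one image per light direction
    light_dirs: array of shape (k, 3), unit vectors toward each light
    Assumes a matte (Lambertian) surface: intensity = albedo * (normal . light).
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                            # (k, pixels)
    # Least-squares solve light_dirs @ g = I, where g = albedo * normal.
    g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, pixels)
    albedo = np.linalg.norm(g, axis=0)
    normals = g / np.maximum(albedo, 1e-12)
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)

# Synthetic sanity check: a flat surface facing the camera (normal = +z).
lights = np.array([[0.0, 0.0, 1.0],
                   [0.5, 0.0, 0.866],
                   [0.0, 0.5, 0.866]])
true_n = np.array([0.0, 0.0, 1.0])
imgs = np.stack([np.full((4, 4), lights[i] @ true_n) for i in range(3)])
normals, albedo = photometric_stereo(imgs, lights)
print(normals[0, 0])  # approximately [0, 0, 1] everywhere: a flat surface
```

In a real system, the multiple illumination directions can come from the near-infrared LEDs the scanner already has, fired one at a time, which is why existing hardware can often be adapted.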

Several techniques could help these identification systems detect a dead eye. One is to add a thermal sensor to detect an eye too cold to be part of a living person. Adding a sensor would be a relatively expensive proposition, however.

It’s also possible to tell the system to pay attention to the pupil as well as the iris. In a healthy, living eye, the pupil contracts and dilates in response to changes in lighting. With that involuntary reaction in mind, we built a general model of how a human pupil reacts to changes in brightness. Our goal was to develop a method to verify whether an iris is alive or not based on that reaction.
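The essence of such a check can be sketched in a few lines: track the pupil’s radius across video frames after a bright stimulus and require a measurable constriction. This is a toy version, not our published model; the 10 percent contraction threshold and one-second window are illustrative values chosen for the example.

```python
import numpy as np

def pupil_is_alive(radii_px, stimulus_frame, fps=30, min_contraction=0.10):
    """Toy liveness check from a sequence of measured pupil radii.

    radii_px:        pupil radius (pixels) in each video frame
    stimulus_frame:  frame index at which the light was brightened
    min_contraction: required fractional shrinkage (10% is an
                     illustrative threshold, not a published one)

    A living pupil constricts within roughly a second of a bright
    stimulus; a cadaver pupil stays fixed in mid-dilation.
    """
    baseline = np.mean(radii_px[:stimulus_frame])
    window = radii_px[stimulus_frame:stimulus_frame + fps]  # ~1 s after
    contraction = (baseline - np.min(window)) / baseline
    return contraction >= min_contraction

# Simulated living eye: radius shrinks from 20 px to 15 px after frame 10.
living = np.concatenate([np.full(10, 20.0), np.linspace(20, 15, 30)])
fixed = np.full(40, 20.0)   # cadaveric pupil: no response to light
print(pupil_is_alive(living, 10))  # True
print(pupil_is_alive(fixed, 10))   # False
```

A real model must also account for the pupil’s characteristic latency and recovery dynamics, which are harder to fake than a simple size change.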


Dead-Eye Accuracy: To identify an individual, iris-recognition systems use segmentation to separate the iris and non-iris portions of the image. Machine learning can accurately segment both diseased eyes (top) with distorted irises and dead eyes (bottom), which may have begun to decay. Photos: Adam Czajka, Mateusz Trokielewicz & Piotr Maciejewicz

The method we developed images the eye multiple times to see whether it’s actually responding to changes in brightness; the process takes about 3 seconds to verify the iris is alive. For comparison, even iris-recognition systems that require just one snapshot usually take about that much time to make an identification. And there’s no reason why the two techniques couldn’t be used in parallel. That means it would be possible to use our method to verify that an iris is part of a living eye while also confirming identity with a still image, with no additional time required.

Let’s say, however, that you want a recognition system that can quickly detect whether or not an iris is dead using only a single snapshot. For that, you’ll have to turn to convolutional neural networks, a subset of machine learning geared mainly toward analyzing images. In 2018, we trained a convolutional neural network on reference images of living and dead irises, and let it determine for itself what sets the two categories apart. After approximately 2 hours of training, the network could correctly decide whether an image showed a live iris or a dead one 99 percent of the time.
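Our trained network isn’t reproduced here, but the forward pass of a convolutional classifier can be sketched in plain numpy to show the shape of the computation: learned filters slide over the image, the responses are pooled, and a final layer produces a probability. The filters and weights below are random stand-ins, so the output is meaningless until the network is trained on labeled live and dead irises.

```python
import numpy as np

def conv2d(image, kernels):
    """Valid-mode 2D convolution: (h, w) image, (n, kh, kw) filter bank."""
    n, kh, kw = kernels.shape
    h, w = image.shape
    out = np.zeros((n, h - kh + 1, w - kw + 1))
    for f in range(n):
        for i in range(h - kh + 1):
            for j in range(w - kw + 1):
                out[f, i, j] = np.sum(image[i:i+kh, j:j+kw] * kernels[f])
    return out

def tiny_cnn_forward(image, kernels, weights, bias):
    """One conv layer -> ReLU -> global average pool -> logistic output."""
    features = np.maximum(conv2d(image, kernels), 0.0)  # ReLU nonlinearity
    pooled = features.mean(axis=(1, 2))                 # one value per filter
    logit = pooled @ weights + bias
    return 1.0 / (1.0 + np.exp(-logit))  # P(iris is alive), once trained

rng = np.random.default_rng(0)
iris_image = rng.random((32, 32))          # stand-in for a segmented iris
kernels = rng.standard_normal((8, 3, 3))   # 8 untrained 3x3 filters
weights = rng.standard_normal(8)
p_alive = tiny_cnn_forward(iris_image, kernels, weights, bias=0.0)
print(f"P(alive) = {p_alive:.3f}")  # arbitrary until the weights are trained
```

Practical networks stack many such layers and learn the filters by gradient descent, which is what the two hours of training accomplish.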

So there’s reason for optimism that iris recognition will rise to future security challenges. The ability to recognize iris patterns even after death may break new ground in forensics. In addition, the use of video to record the iris’s reflexive response to light could form the basis of a robust recognition system immune to spoofing schemes that make use of cadaver parts or detailed fakes.

But one big problem remains: A person’s iris pattern can undergo changes because of disease. These changes can be significant enough to render an iris-scanning system unable to recognize the iris.

Several different ocular diseases can alter the pattern of the iris. Rubeosis iridis, iridodialysis, and synechiae all distort the shape of the iris and the pupil. Pterygium, bacterial keratitis, and hyphema make an iris’s pattern less visible. All of these conditions, we found, can wreak havoc with iris-recognition systems. They can change the iris so much it can’t be recognized, of course. But they can also make it difficult or impossible to record a person’s pattern because the shapes have been distorted beyond the system’s ability to accommodate them.

Iris-biometrics researchers are still debating how best to address recognition failures due to medical conditions. The U.S. National Institute of Standards and Technology (NIST), for example, hosted a meeting this past June for its Iris Experts Group to discuss the problem. One partial solution is to have people re-enroll their eyes in recognition systems after any medical treatment that might alter the iris. However, this solution doesn’t get at the underlying problem, which is that medical conditions make iris patterns harder to record because such conditions blur and distort the iris’s features.

We began studying the effects of ocular diseases on iris-recognition systems in 2013. We found that in many cases where disease has made recognition impossible, it’s because the recognition system had erroneously segmented the image of the iris. (As a reminder, segmentation is the technique that recognition systems use to separate the iris and non-iris portions of the eye so that the pattern of the iris can be properly identified.) The problem isn’t that the system can’t spot the pattern; it’s that the system can’t distinguish between what is and what isn’t an iris in the first place.

Essentially, all iris-recognition software written to date assumes that the shape of the iris is circular, or nearly so. That assumption is typically true, and it speeds up and simplifies image processing. However, some eye diseases can change the shape of the pupil or of the outer rim of the iris, or both, leaving the overall shape unrecognizable to a scanner.
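The circularity assumption typically enters at segmentation, where the pupil and the iris’s outer rim are each fit as a circle. A minimal least-squares circle fit (the classic Kasa method, shown here as an illustration rather than any particular product’s algorithm) makes the idea concrete; when disease distorts the boundary, points stop lying near any circle, the fit’s residual grows, and segmentation breaks down.

```python
import numpy as np

def fit_circle(x, y):
    """Least-squares circle fit (Kasa method) to boundary points.

    Rewrites x^2 + y^2 = 2*a*x + 2*b*y + c as a linear system,
    then recovers center (a, b) and radius r = sqrt(c + a^2 + b^2).
    """
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a**2 + b**2)
    return a, b, r

# Noise-free points on a circle of radius 5 centered at (10, -3),
# standing in for detected pupil-boundary points in a healthy eye.
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
cx, cy, r = fit_circle(10 + 5 * np.cos(t), -3 + 5 * np.sin(t))
print(cx, cy, r)  # approximately 10, -3, 5
```

The speed advantage is clear: a circle is three numbers, so the boundary search is cheap. The cost is brittleness whenever the true boundary isn’t circular.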

The best way to make iris-recognition systems more accessible to more people is to relax the assumptions these systems make about the iris’s shape. Remember how we used convolutional neural networks to distinguish between live eyes and dead ones? We and other researchers are now using the same techniques to create systems that make more flexible assumptions about what an iris should look like. What we’re all striving for is an all-in-one system that can reliably distinguish among healthy, diseased, and dead eyes.

Photo: Adam Czajka, Mateusz Trokielewicz & Piotr Maciejewicz

This eye isn't healthy, but you might not be able to tell from this image, taken with the Canon camera.

Another intriguing future possibility is recognition systems that would identify an individual based on minute details of the iris, rather than the iris pattern as a whole. This technique would be similar to how forensic scientists match fingerprints today, by comparing just a few points of interest rather than the entire whorl. Crypts, for example, are minuscule holes in the iris tissue, usually located near the pupil, that adjust as the pupil changes shape. It may be that an individual’s crypts are as unique as the ridges of their fingerprints.
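To illustrate what minutiae-style matching might look like, here is a toy score computed over hypothetical crypt locations: the fraction of detected feature points in a probe image that land near a point in the enrolled image. Everything here is an assumption for the sake of the sketch; because real crypts shift as the pupil dilates, a practical matcher would first normalize for pupil size, and the 2-pixel tolerance is purely illustrative.

```python
import numpy as np

def match_feature_points(probe, gallery, tol=2.0):
    """Score how well probe feature points (e.g. crypt centers) match a gallery.

    probe, gallery: (n, 2) arrays of (x, y) feature locations.
    Returns the fraction of probe points with a gallery point within
    `tol` pixels, a crude analogue of fingerprint minutiae matching.
    """
    # Pairwise distances between every probe point and every gallery point.
    d = np.linalg.norm(probe[:, None, :] - gallery[None, :, :], axis=2)
    matched = (d.min(axis=1) <= tol).sum()
    return matched / len(probe)

crypts_a = np.array([[10.0, 12.0], [25.0, 8.0], [18.0, 30.0]])
crypts_b = crypts_a + 0.5  # same eye, slight localization noise
crypts_c = np.array([[40.0, 40.0], [5.0, 45.0], [33.0, 2.0]])  # different eye
print(match_feature_points(crypts_a, crypts_b))  # 1.0
print(match_feature_points(crypts_a, crypts_c))  # 0.0
```

As with fingerprints, the appeal is robustness: a handful of stable local features can survive damage or occlusion that would corrupt a whole-pattern code.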

And if crypts don’t work out, there are other options. In a study carried out in 2017, we asked participants to examine two iris images and tell us if they were the same iris or not. While our participants scrutinized the images, we tracked their own eye movements to see where they looked. We’ve scoured these gaze maps, as they’re called, to identify the spots people tended to focus on. The next step for us is to direct convolutional neural networks to focus on the same locations as they attempt to match an iris scan.

Iris-recognition systems have been around for 25 years, but only now are we and others finally addressing some longstanding flaws. In so doing, we are advancing our knowledge about the iris. As we better understand the assumptions we’ve built into these systems, and improve our techniques for working around those assumptions, future recognition systems will capture more information in the blink of an eye.

This article appears in the September 2019 print issue as “The Eyes Have It.”

About the Authors

Adam Czajka is an assistant professor of computer science and engineering at the University of Notre Dame. Mateusz Trokielewicz is an assistant professor at the Biometrics and Machine Intelligence Lab, NASK in Warsaw, Poland. Piotr Maciejewicz is an ophthalmologist at the Medical University of Warsaw.
