Computing

Latest Tests of Biometric Systems Show Wide Range of Abilities

On 7 January, International Biometric Group (IBG; New York City) released the results of its latest round of biometric technology testing. IBG has been conducting comparative testing of biometric systems since 1998 and has evaluated 50 different systems to date. In its most recent research, sponsored by Honeywell, Microsoft, and the Financial Services Technology Consortium, the firm tested seven fingerprint scanners, two facial recognition systems, an iris recognition device, and a hand-geometry recognition system. The results showed wide variances in critical metrics among the different systems. IBG’s director of marketing, Trevor W. Prout, told IEEE Spectrum associate editor Samuel K. Moore why in a 9 January interview.

Your study found a wide variance in system capabilities. For instance, failure-to-enroll rates ranged from 0 percent to more than 23 percent, meaning that as many as 23 percent of people could not use at least one of the systems. What does such a failure rate mean to someone thinking of deploying a biometric system? And what’s behind the wide variance?

Failure to enroll is important for potential deployers to understand because this is the percentage of people who are going to have difficulty enrolling in the system in the first place, let alone being correctly verified on a daily basis. The wide variance in the performance of the technology is important for [purchasers of the technology] to understand, because you can’t make blanket statements about the performance of any given biometric technology. For example, in the fingerscan space, there are dozens of different vendors, some of whom have very strong technology and some whose technology is not quite ready for prime time.

So it’s not necessarily the category of technology that leads to enrollment failures, it’s the vendor?

That’s right, although with any biometric technology there are strengths, weaknesses, and inherent limitations. For example, with fingerscanning technology, the conventional wisdom is that one or two percent of the population doesn’t have fingerprints of sufficient quality to enroll and verify on a biometric system. That’s important, because some systems do a good job of handling people with poor-quality prints, and other systems don’t. In any large-scale biometrics project, it’s important to understand what percentage of the population is going to have difficulty enrolling and to calculate how that will affect the overall process.

Your study also found that false acceptance rates ranged from 0 percent to 5 percent. Is there any reason that anything but 0 percent would be acceptable?

People tend to focus a lot on false acceptance rates, or false matches. But if you think about it, depending on what the application is, it’s really false rejections (false non-matches) that are going to be a headache in an operational environment on a daily basis.

One of the key metrics that we look at when we’re evaluating test performance is what we call the ability to verify, or ATV, which is a function of both the failure-to-enroll rate and the false-rejection rate [ATV = (1 − failure-to-enroll rate) × (1 − false-rejection rate)]. It gives you an idea of what percentage of your population is going to be able to use the system on a daily basis.
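As a concrete illustration, here is a minimal sketch of the ATV calculation in Python; the sample rates are hypothetical, not figures from the IBG study.

```python
def ability_to_verify(failure_to_enroll_rate: float, false_rejection_rate: float) -> float:
    """Ability to verify (ATV): the fraction of the population that can both
    enroll in the system and be correctly verified when they use it."""
    return (1 - failure_to_enroll_rate) * (1 - false_rejection_rate)

# Hypothetical example: 2 percent of users cannot enroll, and enrolled users
# are falsely rejected 5 percent of the time.
atv = ability_to_verify(0.02, 0.05)
print(f"ATV = {atv:.3f}")  # 0.931, i.e. about 93 percent of users can verify
```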

When you look at false acceptance rates and false rejection rates, there’s always going to be a trade-off. When you enroll, you are creating a digital template of your biometric feature. Then when you go to verify, you’re creating another template, and the two are being compared. You’re never going to have an exact match. Instead, what you have is a probability that the two are a match. Where you set the threshold of what’s an acceptable probability is going to determine the trade-off between false matches and false non-matches.
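To make that threshold trade-off concrete, here is a minimal sketch in Python; it assumes you already have similarity scores for genuine and impostor comparison attempts, and the sample scores below are invented for illustration.

```python
def error_rates(genuine_scores, impostor_scores, threshold):
    """Compute false-rejection and false-acceptance rates at a given
    similarity threshold: scores at or above the threshold count as matches."""
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return frr, far

# Invented similarity scores (0 = no similarity, 1 = identical)
genuine = [0.91, 0.85, 0.78, 0.88, 0.62, 0.95]
impostor = [0.30, 0.45, 0.71, 0.22, 0.55, 0.40]

for threshold in (0.5, 0.7, 0.9):
    frr, far = error_rates(genuine, impostor, threshold)
    print(f"threshold={threshold:.1f}  FRR={frr:.2f}  FAR={far:.2f}")
# Raising the threshold lowers the false-acceptance rate but raises the
# false-rejection rate, and vice versa.
```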

In some cases, potential deployers of the technology decide that any more than one in a million false matches is not going to be acceptable. But it’s important to consider, from a practical point of view, how many imposter attempts you’re really having. Probably not that many. Looking at it the other way around, if you were to have even a one percent false rejection rate, that’s going to cause a huge operational problem, and you’re going to need to have some sort of workaround in place.
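As a rough, hypothetical illustration of that point (the workload figures below are assumptions, not numbers from the interview or the IBG study):

```python
# Hypothetical workload: 10,000 legitimate verification attempts per day
# and only a handful of impostor attempts.
legitimate_attempts_per_day = 10_000
impostor_attempts_per_day = 5

false_rejection_rate = 0.01        # 1 percent
false_acceptance_rate = 1e-6       # one in a million

expected_false_rejections = legitimate_attempts_per_day * false_rejection_rate
expected_false_acceptances = impostor_attempts_per_day * false_acceptance_rate

print(f"Expected false rejections per day: {expected_false_rejections:.0f}")    # 100
print(f"Expected false acceptances per day: {expected_false_acceptances:.6f}")  # 0.000005
# Even a modest false-rejection rate generates far more daily exceptions to
# handle than a strict false-acceptance rate does.
```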

I think there’s a little too much emphasis on false acceptance rates. We tend to see a lot fewer false acceptances than we see false rejections, and I think that’s in part a matter of where thresholds are being set, and it’s reflective of a mindset that vendors will be criticized more for having a false match than a false rejection.

Are these systems adjustable so that the balance between false acceptance and false rejection can be tweaked?

Typically, the security settings are configurable. And in fact, for our comparative biometric testing, we ask each of the vendors, although we don’t require it, to set three different security thresholds--high, medium, and low--to represent different usage scenarios. In a low-security environment, you’re simulating a case where false rejections are going to be a very bad thing, and you’re willing to live with maybe some greater probability of false matches.

Your study found that false rejection rates on the day of enrollment ranged as high as 35 percent, and after six weeks they could be as high as 65 percent. Why does it change?

There are two things we’re looking at there. One is the extent to which the biometric that we’re using, whether it’s your finger or your face or your hand, is changing over time and how much of an impact that’s going to have on the usability of the system. The other is a certain lack of habituation with the system. With a fingerscan system, for example, I might come back six weeks [after my last] interaction with the system and place my finger differently than I did before.

Different biometric systems have different ways of handling those sorts of changes, where they’re adapting the enrollment templates over time. In our most recent round, we tested the smallest fingerscanner we’ve ever tested, the AES 3500 (from AuthenTec Inc., Melbourne, Fla.). It’s only [about 40 mm²]. They’re using it in millions of cellphones in Japan now to secure access to the phone. As you know, as mobile networks become more sophisticated, people are increasingly using their phones or their PDAs to access personal data or even sensitive corporate data. These AuthenTec fingerscanners are embedded in the phone, and one of the ways that their system adapts to different placements of the finger is that it continually adds to the template--getting a bigger and bigger picture of your finger than is seen in just the [40 mm² of the scanner]. So it’s less sensitive to how I place my finger. Hand-geometry technology is also capable of tuning the template over time, as are various facial recognition systems.
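Here is a minimal, hypothetical sketch in Python of that kind of adaptive enrollment; it is not AuthenTec’s actual algorithm, just an illustration of folding newly accepted samples into the stored template so the matcher becomes less sensitive to finger placement.

```python
class AdaptiveTemplate:
    """Toy illustration of a template that grows as new samples are accepted.
    Real systems store proprietary feature data, not raw samples."""

    def __init__(self, first_sample, match_threshold=0.5, max_samples=10):
        self.samples = [first_sample]          # features captured at enrollment
        self.match_threshold = match_threshold
        self.max_samples = max_samples

    def similarity(self, a, b):
        # Placeholder metric; a real matcher compares minutiae or ridge patterns.
        shared = len(set(a) & set(b))
        return shared / max(len(set(a) | set(b)), 1)

    def verify(self, new_sample):
        # Match against every stored view of the finger; take the best score.
        best = max(self.similarity(new_sample, s) for s in self.samples)
        accepted = best >= self.match_threshold
        if accepted and len(self.samples) < self.max_samples:
            # Fold the accepted sample into the template so future placements
            # that resemble this one will also match.
            self.samples.append(new_sample)
        return accepted


template = AdaptiveTemplate(first_sample={"r1", "r2", "r3", "r4"})
# A slightly shifted placement still matches and is absorbed into the template.
print(template.verify({"r2", "r3", "r4", "r5"}))   # True
# A further-shifted placement now matches against the absorbed sample, even
# though it would not have matched the original enrollment sample alone.
print(template.verify({"r3", "r4", "r5", "r6"}))   # True
```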

There were some technologies that were tested for the first time in this round: match-on-card technology, for instance. What’s that?

Match-on-card means an application where you’re using a smart card with a biometric template, say your fingerprint, stored on the card. So if you’re walking through a physical access turnstile, you’d swipe your card to claim your identity and then place your finger to verify your identity. The smart card has a chip on it, so the matching actually happens on the card as opposed to on an external server. The difference is transparent to the user, but has implications for how the system is set up… Using the card even to just store the biometric template means the biometric doesn’t have to be stored in a central database, which is something that raises privacy concerns for some people.
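A minimal sketch in Python of that division of labor, with the class and method names invented for illustration: the enrolled template and the comparison stay on the card, while the terminal only captures the live sample and receives a yes-or-no answer.

```python
class SmartCard:
    """Toy model of a match-on-card credential: the enrolled template never
    leaves the card, and the comparison runs on the card's own chip."""

    def __init__(self, card_id, enrolled_template):
        self.card_id = card_id                 # presented as the identity claim
        self._template = enrolled_template     # stays on the card

    def match_on_card(self, live_features, threshold=0.8):
        # Placeholder comparison; a real card runs a proprietary matcher.
        score = self._compare(live_features, self._template)
        return score >= threshold              # only the decision is returned

    @staticmethod
    def _compare(a, b):
        shared = len(set(a) & set(b))
        return shared / max(len(set(a) | set(b)), 1)


class Turnstile:
    """Toy terminal: captures the live sample and asks the card for a verdict."""

    def admit(self, card, live_features):
        # Swiping the card is the identity claim; the fingerprint verifies it.
        return card.match_on_card(live_features)


card = SmartCard("employee-42", enrolled_template={"m1", "m2", "m3", "m4", "m5"})
print(Turnstile().admit(card, live_features={"m1", "m2", "m3", "m4", "m5"}))  # True
print(Turnstile().admit(card, live_features={"x1", "x2", "x3"}))              # False
```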

This was also the first time IBG looked at enrolling on one device and verifying identity on another. Why was that important?

One of the key issues with fingerscan technology in particular, since there are so many vendors in the marketplace, each with proprietary solutions, is interoperability. What one vendor, BIO-key International Inc. (Eagan, Minn.), demonstrated in its setup for this round was the ability to enroll on one of the two main types of fingerscan technology and verify on the other. For enrollment, BIO-key used an optical scanner, which takes a very high-resolution picture, and for verification it used an STMicroelectronics TouchChip silicon [on-chip sensor array]. Interestingly, the optical scanner is the same device that’s being used as part of US-VISIT [a program implemented on 5 January in which people entering the United States from certain countries are electronically fingerprinted and photographed].

Do biometrics systems generally interoperate well?

It’s certainly not plug and play. And it’s a significant issue in biometric deployment whether you’re storing images or templates, which are digital representations of the biometric that cannot be turned back into an image. Today there’s really very little interoperability among templates and the algorithms that the systems are using for matching. Those are proprietary technologies. So if you as an organization are considering deploying biometric technologies, you need to be confident that the vendor you’re working with now will still be around in five years and supporting the product. The cost to you of re-enrolling all your subjects could be significant. There are a number of different standards groups in the biometrics industry that are working toward greater interoperability, and they’ve made a good deal of progress, but there’s still a ways to go.

IBG is predicting that the $719-million market for biometric technologies will grow 36.7 percent in 2004. Where’s the growth likely to come from?

Certainly in 2004, civil identification applications such as US-VISIT are going to be a key driver of growth. One way to look at biometric technologies is to focus on who the end user is: a citizen, an employee, or a consumer. In the short term, it’s the citizen-facing applications that are really driving the growth, although we see growth in the employee-facing sector as well--physical access applications, such as gaining access to your building or network, time-and-attendance solutions, and so on. It’s the consumer-facing applications that are just beginning to emerge and where the real growth is still some ways down the road.

The adoption of biometrics has been slower than expected. Why is that?

A few different reasons. Most importantly, the global downturn in the economy. Now that things are starting to turn around, that benefits the industry. Also, large-scale programs like US-VISIT and a number of other government-funded projects that were envisioned in the wake of 9/11 have just been slower to emerge. The delay is partly a result of the formation of the Department of Homeland Security. For a while, all the component agencies were in transition, and it wasn’t clear whose purview it was to spend money. Even as the department came together, there was still the issue of getting funds through Congress. So the public-sector projects have just been slower to emerge than expected.
