This is part of IEEE Spectrum's special report: Critical Challenges 2002: Technology Takes On
Present ID to suspicious guard. Step through metal detector while briefcase is X-rayed. Swipe keycard and enter PIN to access authorized areas. Once, this sequence of events was a good way to set the stage in a James Bond movie. But today, for an increasing number of employees, some or all of these checkpoints are the start to a regular workday.
Although terrorism is probably the focus of most people's current concern, the need for protection from such nightmares as workplace and school violence has been fanning a demand for more secure environments for some time. Proposed solutions range from the simple, like putting better locks on doors, to the experimental, like automatic face-recognition systems. On a long-term basis, they even include architectural measures: designing buildings so that bomb-laden trucks can't readily approach them. But when the demand is met, do we really end up safer? And how much, if any, privacy must we surrender for that security?
The answer, not surprisingly, depends on the situation. Keeping terrorists off airplanes is different technically and sociolegally from keeping unauthorized employees out of certain parts of a factory, which is different yet again from fighting crime in public parks. Technically, it is much easier to verify that an employee is who he says he is than to identify a shadowy figure who has no proper picture on file. Sociolegally, it is lawful and usually acceptable for an employer to control access to parts of its plant and to subject employees to some scrutiny. But, in the United States at least, ordinary citizens have expectations of privacy, and as we shall discuss, some programs for monitoring public places with cameras--especially cameras equipped with face-recognition systems--have alarmed people as politically diverse as Barry Steinhardt, associate director of the Manhattan-based American Civil Liberties Union (ACLU), and Texas Republican Richard K. Armey, outgoing majority leader of the U.S. House of Representatives.
Courteous access control
For businesses, the first line of defense--controlling who and what enters the building--is often also the last line. "The biggest concern is access control," said Richard McCormick, vice president of the Business Risks International division of Pinkerton, the oldest corporate security firm in the United States, headquartered in Chicago. "A lot of companies don't do anything. You walk in the door and maybe there's somebody sitting there. You say you're here to see someone, and they say 'Fine, just go down the hall, it's the third door!'" McCormick observed.
Adding more security needn't make an office lobby feel like Checkpoint Charlie. "Good access control, if it's done well, does not have to interfere with people's rights. In fact, good access control is somewhat transparent," said McCormick, suggesting that, say, a policy of making visitors wait in the lobby until they can be escorted to their destination by an employee can be as much about good business courtesy as good security.
In McCormick's view, technology is a critical element. "Generally, you try to use technology as opposed to personnel. It's cheaper and in many cases does a better job," he said, citing the problem of securing parked aircraft in countries where drug smuggling is rife. Rather than relying on guards, who may be part of a smuggling operation themselves, McCormick would equip planes with an intrusion-monitoring system that would download a report to a returning pilot's handheld computer. The pilot could then search indicated areas.
Similarly, although some personnel will always be required to respond to alarms and so on, McCormick would much prefer to monitor an area with cameras than have patrols. But others are bothered by the introduction of such surveillance equipment into workplaces.
"I think any camera in the workplace is unacceptable," said Stanley Aronowitz, a professor of sociology at the City University of New York. He sees no justification for surveillance technology in most business settings and feels it can harm both employees and the company. Employers are unlikely to resist the temptation to review the tapes for workplace performance; and in most workplaces, Aronowitz believes, work becomes intolerable unless a certain amount of soldiering (working just hard enough to appear busy) is tolerated during the day. Even if the cameras are not monitored, their presence in effect forces an unbroken eight-hour workday on employees. "Any good industrial psychologist will tell you, anybody who doesn't have [some flexibility], even informal time off, will perform badly," he continued.
McCormick, despite feeling that cameras do have a place in the work environment, also stressed the importance of not undermining employees' good will. "It comes down to employee awareness--and not going overboard," he cautioned. Don't put cameras where they are unnecessary and where a reasonable person would object to them--in a bathroom, say, or watching a bathroom entrance.
McCormick also warned against instituting policies that introduce inconvenience without any return in security: "In some companies, visitors sign in and are issued a visitor's badge--that doesn't make any sense unless all the employees are wearing badges and have been trained to challenge anyone without a badge." The danger in creating security measures that are perceived as pointless is that employees soon start going around the system, and hostility can build against all security measures, good and bad.
Additionally, overdoing high-tech gadgetry can divert resources from low-tech measures that might produce real gains in security, a problem noted by Mary Green, a member of the technical staff at Sandia National Laboratories, in Albuquerque, N.M., who specializes in school security. In the wake of a gun-related incident, the reaction might be to purchase a metal detector for US $3000-$4000, but that money might instead allow a school to rekey locks that are maybe 30 years old and have too many copies of their keys floating around, she explained.
Green also worries that not enough thought is given to the difficulties and costs of operating a technology before it is deployed. Schools can afford the metal detectors, but not the staff to run them properly, she said. Writing in a 1999 research report to the U.S. National Institute of Justice, she noted that for one well-run weapons detection program in a New York City school, nine security officers were required for two hours each morning to process a student population of about 2000. A similar concern for effectiveness leads her to believe that unless an incident occurs, or an area is under specific investigation, schools shouldn't bother to review camera recordings, let alone monitor cameras in real time. "Schools can't afford to dedicate someone to looking at the monitors. Plus, no one does a good job of it," she explained.
Sandia, in research conducted in the 1970s for the U.S. Department of Energy, found that even a highly motivated observer's attention becomes less than effective after just 20 minutes of monitoring. Of course, automatic face-recognition systems may do away with this problem, but they are likely to be expensive and controversial for some time to come.
The use of automatic face recognition most often comes up in the context of airport security, where the promise of recognizing terrorists and keeping them from boarding planes has an understandable appeal. Theoretically, FacE-REcognition Technology, or Feret, as it has been dubbed by the U.S. Department of Defense, can be an enlightened alternative to racial or ethnic profiling, according to Jeffrey Rosen, an associate professor at the George Washington University Law School and legal affairs editor at The New Republic.
As he outlined the ideal scheme to IEEE Spectrum, the only faces in the database against which travelers' faces would be compared would be the most dangerous of all terrorists, perhaps an international watch list of prime suspects. The faces of everyone going through a security checkpoint would be scanned, but normally no records would be kept. Only when a match was found would cameras be turned on to surveil people with a live video feed, and then the authorities would verify the match and decide whether to stop the individual whose face matched one in the database.
"The problem with this cheerful scenario," he pointed out, "is that there's not a single country in the world that's used biometric surveillance in which the databases are actually limited to suspected terrorists." And for good reason: terrorism has no face--people who are planning mayhem do not begin with a trip to FBI headquarters to have their picture taken. So, since there are very few photos of terrorists to put in a database, government officials have tended to justify their investments in Feret by filling the databases with criminals who are easily identified--most often the low-level offenders who are in abundant supply.
Nowhere was this tendency more evident than at last year's Super Bowl in Tampa, Fla., which the ACLU's Steinhardt likes to refer to as the Snooperbowl. At that football game, without the public's being informed, cameras took pictures of all the fans entering through the turnstiles and compared them with a database of mug shots of supposed violent criminals and terrorists. But all 19 matches that the system came up with were petty criminals, like pickpockets and ticket scalpers.
The point, Steinhardt and Rosen agree, is that these databases seem to have an inevitable tendency to expand beyond their original purpose of identifying terrorists and other violent criminals. The fear, then, is that as law-enforcement databases and camera systems are centralized, the population will find itself increasingly scanned, identified, and matched against the records of all its past misdeeds, trivial and otherwise. "This," Rosen asserted, "would really be a transformation of public space. It would challenge our traditional right to be anonymous in public spaces."
In an op-ed piece on national ID cards in The New York Times of 13 October, respected Harvard law professor Alan M. Dershowitz asserted that no right to anonymity is hinted at in the U.S. Constitution. But Steinhardt and Rosen say he is mistaken. Rosen called Dershowitz's assertion "bewildering in light of the Supreme Court's recent decision in connection with the right of ballot distributors to remain anonymous." In that case, both the majority and Justice Clarence Thomas, in a separate concurring opinion, issued ringing defenses of the right to anonymity as being deeply rooted in American history.
This year's Super Bowl, which will take place on 3 February at the Superdome in New Orleans, La., has been designated by the Bush administration as a National Special Security Event, which puts it on a par with presidential inaugurations and meetings of the United Nations General Assembly. At such events, overall security planning is in the hands of the Secret Service. Will face-recognition technology be part of the Service's security plan? Possibly. According to Service spokesman Mark Connelly, "typically the Secret Service does not comment on the methods and means it utilizes to carry out a protective responsibility."
U.S. Representative Armey and the ACLU are similarly apprehensive about surveillance of public spaces, and viewed with concern a plan by the Colorado Department of Motor Vehicles to create a database of faces of all those applying for driver's licenses. In a joint statement, they noted "an alarming potential for misuse of all of these systems. Used in conjunction with face-recognition software, for example, the Colorado database could allow the public movements of every citizen in the state to be identified, tracked, recorded, and stored."
That, of course, is a genuine threat to personal freedom. But then so is identity theft, which victimizes 750 000 people a year, according to Thomas J. Colatosti, president and chief executive officer of Viisage Technology Inc., Littleton, Mass., a manufacturer of Feret systems. By recognizing that a person applying for a license already has one under a different name, the Colorado system could help prevent such identity theft, he says.
Colatosti pooh-poohs the threat of face recognition, saying that "...the technology only automates [the detection of] what is easily observable by a law enforcement officer. A nanosecond face scan is a trivial privacy invasion relative to the common and accepted invasive airport body scan by hand wand that is run up one's crotch and down the butt."
Does it work?
These are all serious concerns. Also serious is the accusation, most notably by Steinhardt, that Feret simply does not work. One shortcoming, he said, is that the prerequisite of any effective system--a database of known terrorists--does not exist.
A second shortcoming, in his view, is that the technology itself is ineffective, being nowhere near accurate enough to justify the threat it poses to privacy. That this is a point to consider became clear at the end of a Strategic Research Institute conference on biometrics in New York City last month. The final item on the conference agenda was a visit to the premises of International Biometric Group (IBG), a company that evaluates biometric technology. During that visit, conference attendees participated in demonstrations of various biometric technologies, including face recognition.
The results brought the maturity of the technologies into question. To be effective, given the current state of technology, face recognition appears to require cooperative subjects who remove their hats and sunglasses and look directly at the camera, one person at a time. That is a reasonable requirement for checking people getting ready to board an airplane, but it is hardly realistic for scanning a crowd.
There are even problems with Feret under controlled conditions. At the demo, one volunteer enrolled in a face-recognition database looking straight ahead wearing a deadpan expression. He was then rejected by the system when he turned his head and smiled. The problem may have been that the system's acceptance threshold was set too high--that it was demanding too perfect a match. Of course, if the threshold is set too low, the high false rejection rate (FRR) is simply exchanged for an excessive false acceptance rate (FAR). The trick is to set the threshold at the right spot for the application.
For a school lunch line, where the technology is used to discreetly determine whether students are entitled to free or discounted meals, the threshold would be set low. Little harm is done if a few unqualified children get free lunches, compared with forcing poor people to pay for meals they can't afford.
For accessing an ATM, the threshold would be set higher. It might even be varied depending on the size of the transaction: withdrawing a large amount of money might require a better match than withdrawing a smaller amount. The challenge would be to keep the FRR from growing so large that users get frustrated and switch banks.
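The threshold trade-off described above can be illustrated with a toy simulation. Everything in this sketch is invented for illustration--the one-number "faces," the noise model, and the threshold values come from no real biometric system--but it shows how raising the acceptance threshold exchanges false acceptances for false rejections:

```python
import random

def match_score(enrolled, probe):
    """Toy similarity score in [0, 1]; 1.0 is a perfect match."""
    return 1.0 - min(1.0, abs(enrolled - probe))

def evaluate(threshold, genuine_pairs, impostor_pairs):
    """Return (false rejection rate, false acceptance rate) at a threshold."""
    frr = sum(match_score(e, p) < threshold for e, p in genuine_pairs) / len(genuine_pairs)
    far = sum(match_score(e, p) >= threshold for e, p in impostor_pairs) / len(impostor_pairs)
    return frr, far

random.seed(42)
# Genuine pairs are the same "face" seen twice with small sensing noise;
# impostor pairs are two unrelated faces.
genuine = [(x, x + random.gauss(0, 0.05)) for x in [random.random() for _ in range(1000)]]
impostor = [(random.random(), random.random()) for _ in range(1000)]

for t in (0.99, 0.9, 0.5):
    frr, far = evaluate(t, genuine, impostor)
    print(f"threshold={t}: FRR={frr:.2%}, FAR={far:.2%}")
```

Run with a demanding threshold, the system turns away most legitimate users while admitting almost no impostors; run with a lax one, the opposite--exactly the school-lunch-line versus ATM distinction.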
Jake Hong, an associate at IBG, told the group how he enrolled his face in a database, then moved the equipment to another room and was not recognized by the system--whether because of the change in lighting, background, or both, he could not say. These observations are not a substitute for controlled experiments, but they do make clear the need for them.
IBG has done some of those studies, but since selling the results is how the company makes its money, Hong declined to give detailed comparative figures, saying only that false-rejection error rates are well into double digits.
Colatosti acknowledges that face recognition is not perfect, but asserts that it is nonetheless valuable and should be regarded as one more item in the crime-fighting toolbox. Even if a system misses half the terrorists that go through it, it will still nab one out of two, he said. "At a cost of about 5 cents a ticket, this certainly is a very cost-effective security tool," he told Spectrum. Joseph J. Atick, president and chief executive officer of Visionics Corp., of Jersey City, N.J., another leader in facial recognition, agrees.
Atick warned that there is no responsible short answer to the question of how accurate facial recognition is. It hinges, he said, on too many factors and parameters like the quality of the database images. For the crucial application of airport security, the problem, as he puts it, is that "we don't have the luxury of enrolling terrorists--we have to use images as they exist, as they come to us from intelligence-gathering agencies."
So, using both simulations of typical terrorist photos and real input from the FBI, Visionics has been running large trials since shortly after 11 September. From those trials, the company has found that for a watch list of a couple of thousand people, the probability that someone on the list will be spotted is 60-90 percent, depending on the quality of the image in the database. You never get much above 90 percent, he said, because for a terrorist database you're dealing with single images taken under uncontrolled conditions. You typically know neither the camera-to-subject distance nor the lighting conditions, so some information is missing. Nevertheless, he claims, these figures are on a par with the best tools for X-raying luggage and screening for metal weapons.
How does Feret work, and why is there so much debate about its accuracy? First, there is more than one kind of face-recognition technology. Viisage Technology, which has made a specialty of creating databases of images on driver's licenses, uses an algorithm based on what it calls eigenfaces, in which real faces are represented as linear combinations of stereotyped grayscale face patterns.
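The eigenface idea can be sketched in a few lines. This is a minimal caricature, not Viisage's implementation: random vectors stand in for real images, and the image size, training-set size, and number of retained components are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 50 "face images" flattened to 64-pixel vectors.
faces = rng.normal(size=(50, 64))
mean_face = faces.mean(axis=0)

# Eigenfaces are the principal components of the mean-centered training set:
# stereotyped grayscale patterns from which real faces are rebuilt.
_, _, vt = np.linalg.svd(faces - mean_face, full_matrices=False)
eigenfaces = vt[:10]          # keep the 10 strongest patterns

def encode(face):
    """Represent a face by its linear-combination coefficients."""
    return eigenfaces @ (face - mean_face)

def similarity(a, b):
    """Negative distance between coefficient vectors; higher is a better match."""
    return -np.linalg.norm(encode(a) - encode(b))

probe = faces[3] + rng.normal(scale=0.1, size=64)   # same face, slight noise
scores = [similarity(probe, f) for f in faces]
print("best match is face", int(np.argmax(scores)))
```

Matching then reduces to comparing short coefficient vectors rather than full images, which is what makes the approach fast enough for large license databases.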
In contrast, local feature analysis--the Visionics approach--determines the positions of the eyes, nose, and mouth relative to each other and creates a topographical map of the face, on which it locates a dozen or so features, called landmarks. With eigenfaces, matching is done on a pixel-to-pixel basis to establish the coefficient of the match. In the local feature analysis case, stereotyped template matching is used only to find the position of the facial landmarks. Once they have been found, the pixel data is discarded; the actual match is based on the topographical map [see figure].
With local feature analysis, comparisons can be made without any need for actual face images at the comparison point. All that's needed is a highly compressed description of the landmark configuration from which the original face cannot be reconstructed. Thus, people can be scanned and compared with a watch list without creating a record that could be used to prove they were at a certain place at a certain time. As Atick describes it, only when a match is detected would a real image of the person be sent from a central file to the comparison point, where the authorities can use human judgment to decide whether the resemblance is close enough to justify further action.
Local feature analysis provides a redundant description, which is good because some landmarks may be occluded in any given picture. Statistical models of human faces tell you that if you locate a fraction of the landmarks, you can predict where the others will be. In fact, locate about 14 of them, and the locations of the others can be predicted from the known statistical regularity of the human face.
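The notion of a compact landmark signature from which the face cannot be reconstructed can be sketched as follows. The landmark names, the inter-eye normalization, and the tolerance are illustrative assumptions, not the Visionics algorithm:

```python
import math

# Hypothetical landmark labels; a real system locates a dozen or more.
LANDMARKS = ["left_eye", "right_eye", "nose_tip", "mouth_left", "mouth_right"]

def signature(points):
    """Compress a landmark configuration into scale-normalized pairwise
    distances. No pixel data survives, and the image cannot be rebuilt."""
    # Normalize by the inter-eye distance so camera distance drops out.
    scale = math.dist(points["left_eye"], points["right_eye"])
    pairs = [(a, b) for i, a in enumerate(LANDMARKS) for b in LANDMARKS[i + 1:]]
    return [math.dist(points[a], points[b]) / scale for a, b in pairs]

def matches(sig_a, sig_b, tolerance=0.05):
    """True when every normalized distance agrees within the tolerance."""
    return all(abs(x - y) < tolerance for x, y in zip(sig_a, sig_b))

enrolled = {"left_eye": (30, 40), "right_eye": (70, 40), "nose_tip": (50, 60),
            "mouth_left": (38, 80), "mouth_right": (62, 80)}
# The same face photographed closer to the camera: every coordinate doubles.
probe = {k: (2 * x, 2 * y) for k, (x, y) in enrolled.items()}

print(matches(signature(enrolled), signature(probe)))  # True: scale drops out
```

Only the short list of ratios would travel to the comparison point, which is why a match can be checked without creating a record that places a particular face at a particular spot.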
Smile, you're on camera
When all is said and done, most people are fairly tolerant of intrusions at airports, where the benefit of increasing security is obvious. Bringing cameras into more ordinary public spaces, however, raises more questions about their effectiveness--and about the intent of the authorities who brought them in. Stephen Maybank, a researcher in computer vision at the University of Reading, in Berkshire, UK, noted at a recent press briefing that his country "has an enormous number of cameras." He estimates that there are two million surveillance cameras in public areas, thanks in large part to public support and money. The government funds surveillance schemes all the time, commented Maybank. In August 2001 it released £79 million (US $110 million) to fund the installation of surveillance cameras in a number of new city centers.
Points to Ponder
AIRPORT SECURITY About one in eight dangerous objects went undetected by FAA tests in 1978; in 1987, the fraction rose to one in five. The current rate is classified.
GAMBLING BUSINESS There are 3572 gambling casinos in the world, all potential users of face-recognition technology.
FAST GROWTH Biometrics is expected to grow by an order of magnitude, from a $200 million business in 1998 to a $2 billion industry by 2004.
Despite this massive investment, the results are disappointing, according to the ACLU's Steinhardt. Surveillance doesn't reduce crime, he said; it just moves crime from where the cameras are to where they are not. Steinhardt also noted that surveillance cameras lend themselves to illegitimate uses, including racial discrimination. "People of color in the UK have been singled out to be followed by the cameras," he said. The cameras have also resulted in video voyeurism, with young women in particular being scrutinized by camera operators.
So how can organizations ensure effective security without being needlessly costly or intrusive? Rudolph V. Matalucci, a distinguished member of the technical staff at Sandia, explained how the laboratory tries to answer this question through a Risk Assessment Methodology (RAM): "We've used it on dams, high-voltage power transmission, and for some buildings," he said.
Safer by design
The first step in the RAM is to determine the threat to a building. "Is the threat explosives, or chemical agents, or even a disgruntled employee?" Matalucci amplified. The second step is to assess the value and significance of the structure and its contents. "That's very gut-wrenching, because sometimes we have to put a price tag on a human being," as well as on property and intellectual assets, he observed.
The third step identifies vulnerabilities that would allow a threat to be realized. "Can someone come in with four men with machine guns and mow everybody down? Can someone drive straight into the building and detonate 100 kilograms of explosives? Is there enough structural redundancy in the building so that if one column comes down, the whole building doesn't come down?" asked Matalucci.
Taking into account the threat level and the value of what is being protected, the final step is to design appropriate safeguards for identified vulnerabilities. Sometimes these take the form of physical barriers, but usually, Matalucci stressed, "we are not interested in building a fortress." Barriers can be very unobtrusive. A good example, he noted, is the Albuquerque federal courthouse [see figure], where short pillars (bollards) were hewn out of local stone and placed on the surrounding sidewalks, so that no one can drive from the street up through the front of the building.
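The four RAM steps might be caricatured as a toy risk-scoring calculation. The multiplicative formula, the 0-to-1 rating scales, and every number below are hypothetical illustrations, not Sandia's actual methodology:

```python
# Toy sketch of a risk assessment: rate each scenario, then rank them.

def risk(threat, value, vulnerability):
    """Each input is rated 0.0 (negligible) to 1.0 (severe)."""
    return threat * value * vulnerability

# Steps 1-3: rate the threat, the asset's value, and each vulnerability.
building = {
    "vehicle bomb at entrance": risk(threat=0.7, value=0.9, vulnerability=0.8),
    "armed intruder in lobby":  risk(threat=0.4, value=0.9, vulnerability=0.6),
    "column collapse cascade":  risk(threat=0.2, value=1.0, vulnerability=0.9),
}

# Step 4: design safeguards for the highest-risk scenarios first.
for name, score in sorted(building.items(), key=lambda kv: -kv[1]):
    print(f"{score:.2f}  {name}")
```

The point of any such scoring, however crude, is to force the safeguards budget toward the scenarios where threat, value, and vulnerability coincide--bollards before biometrics, if that is what the numbers say.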
Dennis S. Miyoshi is the director of Sandia's security systems and technology center, and, along with the rest of his staff, he wants security concerns to be taken into account before a building is designed, rather than in retroactive assessments and modifications. Security retrofits, he believes, tend to use personnel rather than technology, creating an intrusive system that encourages a police mentality. Early planning might lead to built-in automated features that provide greater, yet less intrusive, security.
Sandia is working with architects and civil engineers on a series of guidelines for the design of buildings to make them easier to secure and, in the event that the worst happens, to mitigate the damage. One guideline is to use a type of glass that, when struck by a blast, fragments into very small pieces rather than life-threatening shards.
But it is sometimes difficult to get such security-enhancing features included. "The problem is building owners who say, 'If it's not in the building code, I don't want you to spend the money,'" complained Matalucci. Eventually, such technologies will make their way into the general building codes. It may take years, but only when security is viewed through the lens of such long-term institutions can real gains be made in protecting citizens--and the waste and abuse inherent in near-term, reactive security measures be avoided.
To Probe Further
A firsthand account of lassitude in human intelligence gathering and analysis is presented in "The Counterterrorist Myth" (The Atlantic Monthly, July/August 2001). The article, written by Reuel Marc Gerecht, a former CIA case officer who became disillusioned with the agency, is available at http://www.theatlantic.com/issues/2001/07/gerecht.htm
The report of the National Commission on Terrorism, "Countering the Changing Threat of International Terrorism," contains sections on intelligence challenges. Parts of the report, which was prepared for the U.S. Congress, are available in PDF format at http://w3.access.gpo.gov/nct/.
James Bamford's most recent book on the National Security Agency, Body of Secrets, was published by Doubleday, New York, in 2001.