"Surveillance Technology: Tracking Terrorists and Protecting Public Places"

Transcript of IEEE Spectrum/New York Academy of Sciences Press Briefing

31 October 2001

We convened this press briefing because the technical, economic, and political issues surrounding surveillance technologies are very complicated, and they’re coming on faster than lawmakers, and certainly faster than the nontechnical public, can understand how they should be used. One of the places where surveillance technology issues will be discussed is in the press, and we hope this briefing will be informative for the press in attendance. Our speakers–who come to surveillance technology-related issues from different sides of the problem–will address questions of utility–what these technologies are and how they work (or don’t work)–and the issues surrounding the appropriate use of these technologies in public places.

Speakers at the Briefing: Mr. Rodney Nichols, Executive Director of the New York Academy of Sciences; Ms. Susan Hassler, Editor, IEEE Spectrum; Professor Stephen Maybank, University of Reading; Dr. Joseph Atick, CEO, Visionics; Mr. Michael Vatis, Director, Institute for Security Technology Studies at Dartmouth; Mr. Barry Steinhardt, Associate Director, American Civil Liberties Union; and Mr. Robert Freeman, Executive Director of the New York State Committee on Open Government.

Mr. Rodney Nichols, Executive Director, New York Academy of Sciences:

Thank you very much. Welcome to all of you to what I hope will be a very interesting afternoon. The issues we’re going to talk about today are quite critical in two senses, which then have to be brought together. One sense is technological innovation, and the reliability of the technology. And that applies to the older technologies as well as new technological tools that will probably be invented in the next few years or decade. On the other hand, we have to cope with society’s reactions to the applications of these tools, whether they really work, whether they provide surveillance and protection and defense or not, and the degree to which other kinds of freedoms and values that we hold will be sustained.

As a young physicist, I worked in the Pentagon, and in research and development in the defense business and national security business there’s often a real sense of urgency. I think that’s what all of us who live in New York have felt for the last six weeks, a sense of urgency. Yet at the same time it’s extremely important to think through the stakes and the consequences. There is that rueful saying in Washington, "you want results and you get consequences." We are very likely to be seeing that as we implement a variety of things.

The New York Academy of Sciences’ role is to convene groups like this, I hope always with a value-added sense. We have a sense of service to society, with a base of science and technology, and we’re delighted to be playing a small role this fall in bringing people together to think about recovery and next steps after the horrific attacks of September 11.

I want to introduce Susan Hassler, the moderator for the session this afternoon. I've known Susan in a professional way for a number of years. She was, as some of you know, a very distinguished editor of Nature Biotechnology before taking her present position as editor of IEEE Spectrum. She’s an imaginative, energetic, and highly professional editor. Many of us in other parts of the scientific community read Spectrum, but may not read journals produced by other professional societies. So I think you’ll enjoy having Susan as your moderator.

Ms. Hassler: Good afternoon. I want to welcome everybody on behalf of IEEE Spectrum and on behalf of the Academy of Sciences, both those of you in the room and those listening in by telephone.

Why did we decide to do this? Obviously, before September 11, one of the issues Spectrum was covering was privacy, and one of the concerns people had was privacy. People were worried about identity theft, and how in transactions over the Internet your information would go on from one place to another, and you couldn’t conduct financial transactions without people being privy to where you were going and when and how and what you were buying. There was concern about New York's E-ZPass system being used for issuing traffic violations. There was concern–quite a preoccupation, I think–with privacy issues in an electronic setting.

Obviously, after September 11 everything transmuted into a huge concern about security and protection. There have been estimates that if we were to actively pursue all of the security measures and ideas that have been put forward, we could spend, in the United States alone, hundreds of billions to one and a half trillion dollars on protecting against cyber attacks and biological and radiological attacks and providing security for critical infrastructure.

So we convened this meeting because the technical, economic, and political issues that surround surveillance technologies are very complicated, and they’re coming on faster than lawmakers, and certainly faster than the nontechnical public, can process them and understand how they should be used.

We’ve just seen the Congress significantly broaden the government’s ability to monitor people in the United States and beyond, using a variety of these technologies. And one of the places where all of this will be discussed is in the press. That discussion covers both the utility of these technologies–what they are, how they work, what advances might be made with them, and how we would balance them with things already in play–and also how to manage and maintain them so that they do what they’re supposed to do without crossing the boundary into checking up on people who are not on any to-be-checked-up-on list.

That said, I would like to tell you a little bit about our speakers, and about the format. Each of the speakers has agreed to make a very short presentation, after which we will go to questions. I'm estimating that the presentations will take about 45 minutes. We’ll have 45 minutes of Q & A.

Our first speaker is Professor Stephen Maybank, who has come all the way from the University of Reading in the United Kingdom today to talk about the technology of tracking people and vehicles. Another thing he will talk about is the use of anomaly detection or behavior recognition, and also about some of the problems that go with image archiving. In other words, if you’re going to start taking pictures of everybody, what do you do with these pictures, how do you store them, and how do you get them from place to place? Professor Maybank was the chairman of the third IEEE International Workshop on Visual Surveillance, and he is a researcher in computer vision at the University of Reading.

Dr. Maybank: I work on computer analysis of images taken by closed-circuit television (CCTV). So the kinds of things we want to do with these images are to recognize in them people and vehicles, for example, and to track the people and vehicles as they move around in front of the camera.

Not only can we locate these objects in images, we can also work out the way in which they’re moving in space in front of the camera. So, for example, if a car passes in front of a CCTV camera, then we can say how far away from the camera it is and how fast it is going.

After tracking, we want to make deductions about the ways in which people and vehicles are behaving. We’re thinking at this stage of very simple things. So, for example, if a car moves in front of the camera and then stops, we’d like the system to track it, work out its motion in 3-D, and then make the simple deduction that the car has stopped. Maybe it stops for 30 seconds and then starts up and moves on again. And for certain security applications, there are people very interested in whether the vehicle moving through has stopped for a while, and if so, whether anybody got out of it.
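
[Editor's note: the deduction Professor Maybank describes reduces to a simple rule over a tracked object's estimated ground-plane trajectory. A minimal sketch in Python, with invented thresholds, might look like this:]

    # Illustrative sketch: decide that a tracked vehicle has "stopped."
    # Thresholds are invented for illustration, not taken from any real system.

    SPEED_THRESHOLD = 0.5    # metres/second: below this, treat as stationary
    MIN_STOP_SECONDS = 30.0  # stationary this long -> report a stop

    def detect_stop(track):
        """track: list of (time_seconds, x_metres, y_metres) samples."""
        stop_start = None
        for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
            dt = t1 - t0
            if dt <= 0:
                continue
            speed = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt
            if speed < SPEED_THRESHOLD:
                if stop_start is None:
                    stop_start = t0
                if t1 - stop_start >= MIN_STOP_SECONDS:
                    return stop_start      # time the vehicle came to rest
            else:
                stop_start = None          # moving again; reset
        return None                        # no qualifying stop found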

Now, as you build up a security system based on CCTV, you tend to get very large numbers of cameras. So, for example, the London Underground is a large subway network with about 4000 cameras. They archive all the images from all the cameras for three weeks–an enormous amount of data, maybe about 9000 terabytes.
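
[Editor's note: for the curious, the quoted archive size implies a sustained recording rate per camera that can be checked on the back of an envelope:]

    # Sanity check on the quoted archive size (illustrative arithmetic only).
    cameras = 4000
    days = 21                              # three weeks
    archive_bytes = 9000e12                # ~9000 terabytes

    rate = archive_bytes / cameras / (days * 24 * 3600)
    print(f"{rate / 1e6:.2f} MB/s per camera")
    # ~1.24 MB/s, i.e. roughly 10 megabits/second of video per camera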

The way that data is used, people want to search back through it after an event to find any recordings relevant to that event. It’s a very difficult search problem because of the mass of data involved, and we’d like to make this problem simpler by attaching to the images basic information about what was going on. So, for example, you can put a label on an image saying, "one person moving," and then maybe a week later somebody searching the archive can say, "give me all the images showing one person moving," and you’re already looking at only a very small part of that archive, rather than the whole of it.
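
[Editor's note: the annotation scheme just described amounts to indexing images by content labels at ingest time and searching the labels rather than the raw video. A minimal sketch, with an invented label vocabulary and camera names:]

    # Sketch of annotation-assisted archive search: label images as they
    # are stored, then search the small index instead of the whole archive.

    from datetime import datetime

    archive_index = []   # (timestamp, camera_id, label) entries

    def annotate(timestamp, camera_id, label):
        """Attach a content label (e.g. "one person moving") at ingest."""
        archive_index.append((timestamp, camera_id, label))

    def search(label, start, end):
        """Return only matching entries in the time window, so an operator
        reviews a small slice of the archive rather than the whole of it."""
        return [entry for entry in archive_index
                if entry[2] == label and start <= entry[0] <= end]

    annotate(datetime(2001, 10, 24, 9, 15), "platform-3", "one person moving")
    hits = search("one person moving",
                  datetime(2001, 10, 24), datetime(2001, 10, 25))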

The UK is unusual in that it has an enormous number of cameras–maybe two million. If you go to any city center and look around, you’ll see them. The government funds surveillance schemes all the time. Only this August, it released £79 million to fund CCTV surveillance in a number of new city centers. Not only is it promoted publicly, it’s also appearing in private households. So you can buy very simple systems at the local Do-It-Yourself (DIY) store, put the camera up on the outside of your house, and you’ll be filming whoever comes to the front door and recording that film on a video cassette recorder inside the house. And this is not just a few technical people playing games; it’s hundreds of thousands of ordinary people with an interest in doing this.

If you install CCTV in a city center, you reduce the crime rate there. You can reduce muggings, car crime, and so on. I should say here that the CCTV is not used on its own. You can’t just put cameras up and hope for the best. You need to have a plan for the way in which you’re going to enforce the law in a city center, to organize police patrols, and the CCTV system forms a part of that overall plan for crime reduction in city centers.

The UK is also a little unusual in that we have little legislation controlling CCTV. It is legal to put up a camera anywhere in a public place and film. The only requirement is that you do not cause an obstruction. There are codes of practice governing CCTV images, but at the moment they are voluntary and do not have the force of law.

Research on visual surveillance is worldwide. In the United States, Europe, Japan, China, everywhere, it’s recognized as an important scientific problem. It’s also recognized industrially and socially as an important area where we need new techniques for analyzing images. Research areas include motion detection: is anything happening in an image, has there been a change? Also tracking, face recognition, behavior recognition, image archiving, and building up libraries of scene activity. So, for example, you could have two or three cameras looking over a public place, and you might say between 9 o’clock and 10 o’clock most of the people are moving north, and between 10 o’clock and 11 o’clock most are moving south, and so on. You can build up automatically over time a library of the kind of movements that you expect to see in that public place.
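
[Editor's note: the "library of expected movements" can be as simple as per-hour counts of observed motion directions, against which new observations are compared. An illustrative sketch, with invented parameters:]

    # Sketch of learning expected movement patterns in a public place and
    # flagging departures from them. Parameters are invented for illustration.

    from collections import Counter, defaultdict

    expected = defaultdict(Counter)    # hour of day -> Counter of directions

    def observe(hour, direction):
        """Accumulate long-run statistics, e.g. observe(9, "north")."""
        expected[hour][direction] += 1

    def is_unusual(hour, direction, min_share=0.05, min_history=100):
        """Flag a direction of motion rarely seen at this hour of day."""
        total = sum(expected[hour].values())
        if total < min_history:        # not enough history to judge yet
            return False
        return expected[hour][direction] / total < min_share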

We’re involved in a project on CCTV surveillance called Advisor; it’s funded by the European Union, and it involves teams from universities and from industry. Its aim is to produce a commercial system for managing CCTV in metro stations, and the leader is Thales Research, based near us in Reading.

Here’s an example of people-tracking. You’re looking here at a CCTV image from a metro station in London. The system draws outlines around the figures of the people, approximating their shape, and then uses these outlines to track the people as they move across the image. The camera has been calibrated, and as a result of this, it can tell how far away the people are from it and how quickly they’re moving over the ground plane.
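
[Editor's note: "calibrated" here means the mapping from image coordinates to the ground plane is known. A minimal sketch of what that buys you, using a hypothetical homography matrix H:]

    # Sketch: with an image-to-ground homography H (from a one-off
    # calibration), a tracked figure's foot point gives a metric position,
    # and successive positions give speed. The matrix below is hypothetical.

    import numpy as np

    H = np.array([[0.02,   0.001, -3.1],
                  [0.0005, 0.025, -7.4],
                  [1e-5,   4e-4,   1.0]])

    def image_to_ground(u, v):
        """Map an image pixel (u, v) to metres on the ground plane."""
        x, y, w = H @ np.array([u, v, 1.0])
        return x / w, y / w

    def ground_speed(p0, p1, dt):
        """Speed in m/s between two foot-point detections dt seconds apart."""
        (x0, y0), (x1, y1) = image_to_ground(*p0), image_to_ground(*p1)
        return float(np.hypot(x1 - x0, y1 - y0)) / dt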

Regarding future technology, three things seem to me most salient. The first is that cameras are going to be a lot better. We’re used to seeing something the size of a shoebox nailed up on the wall, [but] you can already see cameras the size of your thumb. They will become tiny, cheap, almost invisible. They will record high-quality pictures all the time. [Next,] the algorithms that we’re developing for people tracking and behavior recognition will become better. There’s a worldwide research effort in this area. It will deliver, maybe not next year, maybe not the year after, but we will get there.

Finally, we’re going to see large coordinated camera networks. At the moment there are networks in which you have a few cameras scattered through a town center. Perhaps they’re wired up to a central hub. Perhaps somebody sitting at that central hub is watching these screens. What you’ll see is cameras scattered over a wide area connected up to each other in complicated ways. The infrastructure for this is being established, and it will enable analysis and description of people’s activities over wide areas, including the tracking of a single person as they move from one camera to another. So that completes my presentation.

Ms. Hassler: Next we’re going to hear from Joseph Atick, who’s the chairman, CEO, and cofounder of Visionics, of Jersey City, N.J. If some of you saw Jeffrey Rosen’s piece in The New York Times Sunday magazine a few weeks back, [you’ll have seen] some discussion of the technology that Visionics is working on. But basically today, Dr. Atick will be talking about the general area of biometrics, or identification-based counterterrorism activities. In particular, he’ll talk about a technology that’s been in the news a lot–face recognition technology.

Dr. Atick: Good afternoon, everyone. I'd like to start by reiterating what I see as the cornerstone of our defense of the civilized world against crime and terrorism in this new era. I believe it’s going to be our ability, in the context of a free society, to identify those who pose a threat to public safety and to prevent their actions.

Essential to the success of this defense strategy are intelligence data and identification technology, such as facial biometrics. The fact is terrorism and terrorists do not emerge overnight. They require indoctrination. They require constant reinforcement over an extended period of time. This affords intelligence agencies opportunities to establish their identities and to build watch lists. Ultimately terror is not faceless.

Let me give you one slide. There are about a hundred people here–actual fugitives, terrorists, faces wanted by the FBI–and this is only the tip of the iceberg. These individuals are at large, committing crimes, whether murder or [some] terrorist activity, and today there is no technology, apart from what I’m talking about in terms of biometrics, that can stop these individuals from entering airports and facilities and conducting their activities.

According to published news reports, two of the terrorists of the September 11 hijacking were already on a watch list, sought by the FBI since August 23, but we had no facial recognition mechanism to stop them from entering the airport. A third was already known to the French authorities. I suspect when the dust settles, we’ll find out that several others were already known to the German, Belgian, French, British, and Israeli intelligence organizations that have been collecting data about terror.

While there is no guarantee that all terrorists will be known in advance, at the very least we have the responsibility to try to prevent the actions of the thousands already known, just like these. Given a watch list, just like this, the question becomes: does the technology exist that can spot the individuals as they enter a country or attempt to board a plane?

The demands on such a technology are very high. It has to be able to do three things. One is scale, in that it should work across many security checkpoints at hundreds of airports and borders and not at just one location. It has to work as part of a network of cameras–it’s not enough to just plug a hole in one door and leave every other door open. You have to be able to scale the application.

Second, just to give you a sense of the scale of the technological challenge involved: you have to be able to sift through about six hundred million faces a year in the United States alone, as people board planes and enter security checkpoints, and spot the terrorists and criminals among them without interfering with the passenger flow. We do not want to create Draconian methods and barricades. The public will not accept that, nor will the airlines, nor will the airport authorities. We have to maintain throughput.

Third, we have to function without infringing on the rights of the honest majority or inconveniencing them. We have to deliver a solution to a problem, but without giving up something we have cherished so much, which is our privacy.

I believe there is good news here, which is that there is a technology. It is computerized facial recognition and facial scanning, such as the FaceIt® face recognition technology, which I can speak about because I’m not only the CEO of Visionics, the company that has commercially developed the technology, but I’m one of its main inventors. I’ve spent the last 14 years working on facial recognition and identification technologies, starting with my days in academia. I used to be the head of two laboratories where the human brain was studied to try to explore how we solve this problem.

The technology works as follows (it’s very simple): you have a standard camera–it could be any video camera. It connects to what’s called a FaceIt appliance, a small box where facial recognition runs. This technology captures each of the faces it sees in front of it, locates them in a crowd. It analyzes the face and creates a mathematical code, a digital code called a faceprint, which is essentially a description of the relationships between the landmarks of your face. It’s some analytical measurement of the skull as it protrudes through the skin of your face. So it’s some number, some mathematical relationship that’s called a faceprint. It’s only about 84 bytes of data, less than two sentences in an e-mail you send to a friend–that’s what captures your identity.

Now this faceprint is encoded, encrypted, and can be sent by a network connection to a database where a watch list exists, a most-wanted database, for matching. The faceprint is a code that only a computer can interpret. It’s encrypted; it cannot be used to reconstitute the image of a face. Given the faceprint, you cannot see what the face looks like. It’s unique to a given face, and it does not change with age, lighting, or viewing conditions. It ignores facial hair and other superficial changes of the face. In a sense, it’s a fingerprint in your face.
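
[Editor's note: the actual FaceIt algorithm is proprietary; what follows is only a generic sketch of how a compact face template of the kind Dr. Atick describes could be built–project a normalized face image onto a fixed set of reference patterns, as in eigenface-style methods, and quantize the coefficients down to 84 bytes. Every name and number here is illustrative:]

    # Generic sketch of an 84-byte face template (NOT the FaceIt algorithm).

    import numpy as np

    def make_faceprint(face_pixels, basis):
        """face_pixels: normalized grayscale face as a flat float array.
        basis: 84 reference patterns, e.g. from PCA on a training set.
        Returns an 84-byte template: one signed byte per coefficient."""
        coeffs = basis @ face_pixels                 # 84 projection coefficients
        coeffs /= np.linalg.norm(coeffs) + 1e-9      # normalize for matching
        quantized = np.clip(np.round(coeffs * 127), -127, 127)
        return quantized.astype(np.int8).tobytes()   # exactly 84 bytes

    rng = np.random.default_rng(0)
    basis = rng.standard_normal((84, 64 * 64))       # stand-in basis
    template = make_faceprint(rng.random(64 * 64), basis)
    assert len(template) == 84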

Let’s look at it at a system level. These cameras can exist at the security checkpoints as people are walking through them. This is a controlled environment, so you can control the lighting as people walk through it. The camera automatically captures the face, and through the appliance, encodes it into a faceprint, and through the network sends it to a matcher that compares the faceprints against the watch list of the most wanted. It could be in Washington; it could be in the airport.

If a match is successful and beyond a certain level of confidence, then it sends a message to an alarm system. The system is similar to burglar or fire alarms. They are monitored by a central agency, which says, "there’s an alarm that happened, let me check." Video will not be shipped to that location. [Instead,] at the point when the alarm happens, an image of the person going through the security checkpoint and an image from the database appear on the screen in front of the person in the control room monitoring the alarm. If that person believes it is a true match, then they can signal back to the security guard at the gate–via a wireless connection or whatever mechanism is appropriate–and ask them to intercept and interview that passenger.

I want to emphasize that if there is no match, then there is no memory–the image is dropped. This is not a recording system. It does not record any video, nor will you see any video from the other side. All that is shipped over the network is the 84 bytes of data. The system does not record, store, or alter the watch-list database in any way. The watch-list database cannot be hacked into, because it only accepts faceprint queries; it doesn’t accept any delete, add, or change operations.
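
[Editor's note: the match-and-alarm logic just described can be sketched as follows; the similarity measure and the threshold are illustrative, not the vendor's:]

    # Sketch: compare an incoming faceprint against a read-only watch list;
    # alarm only above a confidence threshold; keep nothing on a non-match.

    import numpy as np

    def similarity(a, b):
        """Cosine similarity between two 84-byte templates."""
        va = np.frombuffer(a, dtype=np.int8).astype(float)
        vb = np.frombuffer(b, dtype=np.int8).astype(float)
        return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-9))

    def check_against_watch_list(probe, watch_list, threshold=0.9):
        """watch_list: dict of subject_id -> stored template (never modified).
        Returns (subject_id, score) on a confident match, else None.
        Either way the probe is discarded by the caller: no match, no memory."""
        best_id, best_score = None, -1.0
        for subject_id, stored in watch_list.items():
            score = similarity(probe, stored)
            if score > best_score:
                best_id, best_score = subject_id, score
        return (best_id, best_score) if best_score >= threshold else None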

Over the years, we have seen successive technologies adopted to enhance security. Today at security checkpoints like these, X-ray luggage scanners, metal detectors, chemical trace detectors are deployed to check for concealed weapons and explosives on our body or in our carry-on luggage. I see facial scanning and matching against the watch list as an integral component in tomorrow’s airport security systems. I believe it’s time to ensure that airports are no longer safe havens for criminals and terrorists. The American public agrees. In a recent Harris Poll conducted after September 11, 86 percent of the public endorsed the use of facial recognition to spot terrorists.

Still, there have been some criticisms of this technology. I would like to quickly address those. On the issue of privacy, it’s important to emphasize that the FaceIt surveillance system is not a national ID. It does not identify you or me. It is simply an alarm system that alerts when a terrorist on a watch list passes through a metal detector at the airport. If there is no match, I repeat, there is no memory.

Furthermore, such a system delivers security in a nondiscriminatory fashion. This is very important. FaceIt technology is based on analytical measurements that are independent of race, ethnic origin, or religion. It is free of human prejudices or profiling. It does not care where you come from or what your skin color is. We have gone further, actually, and have called for congressional oversight and federal legislation to ensure the watch lists contain only individuals who threaten public safety and to penalize misuse of such technology down the line. I believe Congress will take action in due time, but at the moment its priority is the real and present danger of terrorism, not the theoretical potential of misuse down the line.

Another objection concerns the effectiveness of the technology. Actually, the same people who raise the objection about privacy have also raised objections about the ineffectiveness of the technology. Some have used old data, for example, going back to a 1996 INS study. I’ll give you the facts about that study. INS began in 1996 to try a mechanism to allow people to expedite their passage through the border. That was a very ambitious program early on in 1996. However, through reorganization of INS, the control of the border for vehicles was assigned to the Department of Transportation. As the Department of Transportation had no experts in biometrics, it suspended the project without any data being collected, without any results being analyzed. This was 1996; the world was far different then than it is today.

They also used data out of context, such as a Defense Department study, which, in fact, was a comparative analysis to check which algorithms are worthy of adoption for the embassy security project. In fact, DARPA ended up recommending to Congress a $50 million four-year project called Human ID at a Distance, to adopt facial recognition for needs in embassies outside the United States.

So a lot of talk citing this type of data has been out of context and old, and has asserted, without explaining it, that the technology does not work. I have two responses to that. First, technology is constantly evolving and advancing. Anybody who is in the science and technology business knows that the state of the art today is a quantum leap from where it was even a year ago, let alone five years ago. And of course, with the accelerated R&D initiatives under way around the world, with university people as well as industrial people working together, the technology will rapidly become even more reliable and robust. It’s a matter of time; whether it’s this year, next year, or the year after, it will be there.

FaceIt has already been used in many real-world environments and has produced significant results: the Mexican election system, police mug shot systems in many places around the world, criminal alarm systems in London, Birmingham, Iceland, Tampa International Airport, and so on.

But this is not my main point on this issue. My main point is this: facial scanning at airports is a tool, just like metal detectors and luggage scanners. They enhance security without having to be technologically perfect. A facial scanning system at the security checkpoint will alert the security guard to investigate, just like they do today when the metal detector beeps. Such a system will deter terrorists from boarding planes, just like metal detectors deter them from taking weapons on board, even though we all know metal detectors or luggage scanners are nowhere near a hundred percent accuracy. So if you say that facial recognition is not a hundred percent, well then, let’s go ahead and take out all metal detectors and all luggage scanners, and let’s see what happens to airport security.

We owe it to the traveling public to do everything in our capacity to ensure their safety. We have the technology today, as a nation, to peacefully and responsibly make a difference in the war against terror and to restore the public’s trust in the travel process, without a cost, in my opinion, to the privacy of the honest majority. I see no legitimate reason why we should not do it.

Thank you.

Ms. Hassler: Next we’re going to move away from the technology proper and hear from Michael Vatis, who is the director of the Institute for Security Technology Studies at Dartmouth. He is going to talk about organizational issues with respect to organizing counterterrorism efforts across different technologies.

Mr. Vatis: Good afternoon. What I’m going to talk about today, as Susan said, is not a particular technology per se, but basically issue a clarion call–and I should hear trumpets at this point–for a new Manhattan Project. And this time not to develop an atomic bomb to win a conventional war, but for counterterrorism technology to deal with the types of threats we’ve now seen since September 11 and that have become all too real.

What we’ve witnessed since September 11 is what people in the national security field have called asymmetric warfare or asymmetric threats. That is, forms of attack that don’t use conventional weaponry or armaments but take advantage of our openness as a society to use what some have called technological jujitsu, using our own technology against us.

So [in] the example we saw on September 11, 19 men hijacked airplanes and used them as sophisticated airborne bombs to kill thousands of people. And in the letter-borne anthrax incidents of recent weeks, again, anthrax that could have been obtained from research institutes [was] then delivered through a relatively primitive means to cause death and widespread panic.

So chemical and biological, as well as nuclear and radiological, weapons are one set of asymmetric threats. Another is the threat of cyber attacks, the use of computers as a weapon to attack other computer networks to cause, not necessarily death and destruction, but more likely mass disruption. So much of our society and the critical services that we rely on every day–electric power, telecommunications, banking and finance systems, government operations–are controlled and operated by computer networks. Which means that if someone can get into those networks, he can cause a great deal of disruption.

So if you consider the asymmetrical threats, it’s now quite apparent that we need to do something about this. And yet until September 11, the main focus by far of our whole national security apparatus in Washington, led by the Department of Defense, was on the conventional threat, the threat of traditional military apparatus being directed against the U.S. and its allies. And our R&D programs followed that focus. So we spent billions upon billions of dollars to develop new conventional weaponry, to perfect our existing nuclear stockpile, and the like, with comparatively paltry amounts spent on dealing with the asymmetrical threats that we’ve seen over the last six weeks, and on dealing with the cyber threat, which is very real, as is evidenced by the destruction and disruption caused by recent worms and viruses and computer intrusions.

And so, as we recognize how ill-prepared we have been for these types of asymmetric threats, I have issued a call for a Manhattan Project. Others have as well. The November issue of Spectrum has a piece by me, essentially laying out the terms of the project that I call for, and I would recommend your reading it for detail. But essentially what I’m urging is a mechanism to bring together the expertise from academia, private industry, and the government to develop an R&D agenda, to determine what are the most important priorities for research–an agenda to counter the threat. Is it things like facial recognition software, or are other things of greater priority? And then to put the dollars against that prioritized agenda so that we can advance the ball as quickly as possible, again tapping into the expertise across all of the sectors.

The recent Defense Department request for proposals (RFP) has gotten some play in the press and some amount of ridicule, frankly, because it seems like a laundry list of projects. The RFP is put out to the public so that anybody can submit a proposal. There is a place for RFPs because we want to make sure that we don’t leave any source of expertise untapped. But I think a worrisome thing about it is that it is relatively unfocused. And we don’t know exactly what expertise is out there, and we don’t have a sense of prioritization. How did the Defense Department come up with that list of topics that they want research done against? Was this a government-wide project that led to this list or not?

The situation right now is that numerous government agencies have relatively small budgets for R&D without a national R&D agenda, and that is hampering us significantly, I think. So I think a Manhattan Project that puts serious government funding behind research against a prioritized agenda is the way to move this forward–so that the agenda is not driven by the commercial interests of private industry or by the sometimes narrow interests of particular government agencies, but is set comprehensively and in a neutral fashion, with all the expertise being tapped.

Now I’d like to mention, before stepping down, one significant difference from the original Manhattan Project in this field–the fact that, as part of this process, we need to review the legal implications of any new technology as it’s being developed or even before development is initiated. Because when you’re dealing with asymmetric threats, you necessarily get into some very sticky legal issues, because the nature of asymmetric threats is that they happen largely within our borders. Which means countering those threats will bring into question civil liberties concerns and civil rights concerns that aren’t really encountered when you’re talking about dealing abroad with traditional military threats.

And so I think it’s incumbent on us to consider the First Amendment and Fourth Amendment and lots of other Constitutional and statutory implications of technology and consider them in depth up front so that we’re not dealing with them after the fact, after technology is funded and perhaps deployed, and only then do people raise questions about these things.

I think through that approach, we can play to our strengths. We can take advantage of our technological prowess without undermining the civil rights and civil liberties that make America the beacon of liberty around the world. And I look forward to taking your questions. Thank you.

Ms. Hassler: Thank you, Michael. Barry Steinhardt is associate director of the American Civil Liberties Union, and he will pick up basically where Michael Vatis left off, talking about some of the legal and social issues.

Mr. Steinhardt: Good afternoon. Let me begin by quoting a couple of great Americans. First, since this is the New York Academy of Sciences’ building, let me quote one of our original scientists, Ben Franklin, who said, "A society that will trade a little liberty for a little order will lose both and deserve neither." I’m sure some of you have heard that quote fairly often, including from the floors of the Congress over the last few weeks. I want to update that for you and suggest that a society that will trade its liberty for a false sense of security will get neither and deserves neither.

I also want to quote another great American. This one may surprise you, the source, and that’s Ronald Reagan, who in assessing some of the promises made by the old Soviet Union about missile defense issues, said, "Trust but verify." And that is what we have sought to do with a number of these technologies over the past few years. We understand these assurances that are made by people like Joseph Atick about the technologies, and we trust them, but we want to verify them.

Now what do I mean by that? As we look at any surveillance technology that’s proposed to be introduced into our society, whether for airport security or other forms of law enforcement use or security, we have to ask a number of basic questions. The first question is: does it, in fact, improve our security? That’s really the first question we need to ask, because if the answer is no, we don’t have to reach other questions like what are the costs to privacy. And I want to spend a few minutes talking about face recognition in that context.

I do, however, want to remind you of the other high-profile use of video surveillance and face recognition recently in the United States, and that was at the last Super Bowl in Tampa, Florida, which some of us refer to as the Snooper Bowl. Approximately 75 000 people went through the gates in Tampa and had their video captured. Those photographs were compared to what the proponents, the proprietors of the system, referred to as a database of terrorists and violent criminals. That was their terminology.

Seventy-five thousand people went through. We know that they got 19 hits, 19 hits out of approximately 75 000 people. Members of the Tampa police force have now conceded that a number of those hits were false positives; they were not, in fact, people in the database. What they have also said was that those 19 people were essentially people with petty criminal records, ticket scalpers and pickpockets. No terrorists, no violent criminals.

So when these extravagant claims are made about the uses of this technology, remember the Super Bowl experience: 19 people identified, not all of them properly, we don’t know what percentage, and it turned out that they weren’t terrorists and violent criminals; they were petty criminals at best. And no arrests were made, in part because the technology was very difficult–it was very difficult to do much in the setting of 75 000 people streaming through the gates of that stadium. So how much it will improve our security is an important question. Remember the Super Bowl when you think about that.
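
[Editor's note: the arithmetic behind Mr. Steinhardt's point is the familiar base-rate problem–when almost nobody in the crowd is on the watch list, even a small false-match rate produces hits that are mostly false. All rates below are hypothetical, for illustration only:]

    # Illustrative base-rate arithmetic for a Super Bowl-sized crowd.
    crowd = 75_000
    truly_on_list = 5           # hypothetical number actually present
    false_match_rate = 0.001    # hypothetical: 0.1% of innocents flagged
    true_match_rate = 0.8       # hypothetical: 80% of listed people caught

    false_hits = (crowd - truly_on_list) * false_match_rate   # ~75
    true_hits = truly_on_list * true_match_rate               # 4
    print(f"expected hits: {false_hits + true_hits:.0f}, "
          f"of which genuine: {true_hits:.0f}")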

Then the question is: how much does it threaten our liberties? And I want to talk for a moment about some of what you have heard about this afternoon, which is the combination of video surveillance, of the kind of widespread video surveillance that goes on in the United Kingdom. I was astounded by the number Professor Maybank mentioned–I think the number was two million CCTV cameras. I have been in the London Underground and walked a bit on the streets of London, and I certainly know it is widespread, but that number is somewhat astounding. And what are the consequences of that video surveillance, particularly when paired with facial recognition technology?

Now we in the United States have at the government’s disposal a huge photographic database, which is quickly being digitized by companies like Joseph’s, and that is all those driver’s license photos that all of us submit to as a condition of being able to drive and getting a driver’s license. In Department of Motor Vehicles offices across this country are tens of millions of photographs, many of which I’m sure are in the process of being digitized, and which can be entered into a huge photographic database.

Think for a moment about this hypothetical, and remember the images of the London Underground before you think that this is somehow fantastic. And remember the kinds of things that were proposed for this society since September 11.

I want you for a moment to think about this possibility. A young black man–let’s leave it in New York State for the moment–goes to Scarsdale, a predominantly white, affluent suburb, to visit friends. He’s from inner city New York, let’s say from the South Bronx, and he goes to Scarsdale and his video image is captured.

Nothing happens to him right away, but his video image is captured, and as you saw in London, those video images are not discarded when we have this kind of widespread video surveillance. They are then compared to this preexisting photographic database of driver’s license photos. The young fellow has a driver’s license, and New York State has been good enough to provide that driver’s license photographic database to law enforcement, as the Department of Justice has proposed in a number of places, including, actually, the counties adjacent to Tampa in Florida.

And what happens? There’s been a robbery in the neighborhood. The next day, police from Scarsdale show up at his door, wanting to know what he was doing in Scarsdale that night. Now he’s got a perfectly innocent explanation–he was just visiting his friend–but he’s been subjected to police investigation because, you know, he’s suspicious. He’s under suspicion because he’s a young black man in the wrong place–a predominantly white neighborhood at night–at the wrong time.

The notion that this will end racial profiling, I think, is illusory. What it’s going to do is create new forms of racial profiling. Mass surveillance and identification of people is going to create a new crime or a new level of suspicion, which is being the wrong person in the wrong place at the wrong time, and being subjected to additional surveillance as a result of that.

So we have to ask how it will threaten our liberties. And finally we need to ask whether or not, with any of these systems, the benefits will outweigh the risks. Let me talk about face recognition for a moment. I’ll be quick–I know we need to get on with the questions and answers. Face recognition at airports is really a technological Band-aid that will not make us any safer and that offers us a false sense of security. Now what do I mean by that? You’ve already heard a good deal from Joseph about how face recognition works. Let me tell you what some of the problems are. And these problems are real. And this is why I say, "trust but verify."

The ACLU has filed a so-called Sunshine Act request; the Sunshine Act is the Florida equivalent of the Freedom of Information Act. As those of you who remember all those ballots made available to the members of the press in Florida will know, Florida has a fairly broad freedom of information act. We have made a Sunshine Act request to the Tampa city officials, both about the Super Bowl uses of face recognition and about the uses in Ybor City, which is their entertainment district. And they have totally stonewalled us. At some point we’re going to have to sue them over it. Very interesting when you remember those images of all those ballots that were turned over. We can’t even get the minutes of city council meetings in Tampa. Those of you in the press, perhaps, can help us with that.

So we do practice trust but verify, but, you know, there are many people out there who will tell you–including the government experts who study this–that face recognition does have problems, that changes in the subject’s appearance make a difference, that age makes a difference. The NIST study, which looked at images taken even under ideal conditions, compared images taken at point A to images that were 18 months or more older and found a significant error rate. Age makes a difference. Changes in lighting make a difference. Changes in the camera angle make a difference. Changes in the busyness of the background make a difference. Whether the person’s wearing sunglasses makes a difference. Facial hair makes a difference. All those things make a difference. Think about that in the context of an airport and how useful this technology will, in fact, be.

The fact is, I think, that face recognition in the airports has a number of problems. One is the underlying problem with the technology. I understand the assurances that it will get better, but we’ve got to deal with it today. I'm someone who last year flew a hundred thousand miles, so I'm concerned, and probably about as much as anybody in this room, about the safety of air travel. I'm not betting my safety on face recognition.

I know two things. One is that the technology has problems; two is that the database, the photographic database of terrorists, simply does not exist. There is no terrorist photo database. You saw a picture of a hundred wanted criminals on the FBI list; how many of them were terrorists? How many of them were the 19 that we’ve identified from September 11? The fact is we don’t know who the terrorists are. Only two of the 19 September 11 terrorists were even known by the CIA and the FBI. They’ve only been able to produce one prior picture, a grainy surveillance photo taken, I believe, in Kuala Lumpur. The fact is it’s not the kind of photograph that Joseph and his colleagues are going to be able to use to do any sort of identification. And the fact is that these terrorists are not lining up to submit their photos to the FBI.

There is a real problem here. The problem is that the use of this technology at the airports is going to falsely accuse some good guys and let the bad guys go. So the first question we need to ask, when any of this technology is proposed, is: does it work? We can sit here and we can debate the privacy implications, and I’m happy to do that. But we first have to ask: does it work? Are the government agencies that are prepared to introduce it prepared to answer our Freedom of Information Act requests? Are they prepared to submit to independent scientific examinations before it’s introduced? I think the answer to that is no. Certainly the answer to the question of will they give us any information has been no. That’s not just Tampa; it’s been Boston, which has ignored us. It’s been Providence, Rhode Island, which has ignored us. And so the question is: does it work and can we verify that it works? Thank you.

Ms. Hassler: Our final speaker is Robert Freeman, who is an attorney and also the executive director of the Committee on Open Government, which is part of the New York State Department of State. Mr. Freeman will talk about issues related to the Freedom of Information Act and what you do with it relative to surveillance technologies. If, for example, you were planning to keep any kind of database of people, and, say, individuals wanted to get access to that information, how would they try to do that?

Mr. Freeman: Thank you. I always wonder in advance what am I doing in a place like this. You read about the subject matter; I'm hearing from all kinds of techies with a great deal of expertise. I'm a lawyer. I work for state government in Albany. And how does what I do relate to what we’ve discussed today?

The fact is that I think that I've gained something of significance from each of the speakers, and I think that I can relate it to what I do in terms of freedom of information and privacy.

Just to give you a touch of background, I work for New York State. I have the best job in state government. It mystifies me every day of my life that the government pays me to do what I do. All I do all day is give advice, legal advice, either orally or in writing, to anybody who has a question about public access to government information. And by anybody, I mean exactly that. We take lots of calls from members of the public, people from state government, local government, members of the news media. Our only goal is to give what we believe to be the right answer under the law, regardless of who asks the question.

One of our evolving roles has been to encourage people, particularly government officials, to think about what they do before they do it. My guess is that I was invited here today based upon an appearance at another forum downtown in Greenwich Village. Many of you probably know that the New York City Police Department has for quite some time set up surveillance cameras in Washington Square Park. I went to NYU Law School, and the park was kind of a wild place in the late ’60s and early ’70s. It seems to have become a little bit quieter since.

I did a little test as I was on my way to the forum. I asked the first person I saw walking through the park, "By the way, do you know that there are surveillance cameras here?" And he said, "Sure. They’re over there and over there and over there." And when I got to the forum there were a number of speakers who expressed their views regarding the use of the surveillance cameras in Washington Square Park. And if I were to predict, I would have said, gee, these are the old hippies. They are concerned about their civil liberties. They probably would have objected vociferously to the use of those surveillance cameras.

But that wasn’t the reaction. The old hippies now own businesses, and they have kids. And they said, gee, we like the cameras, because they give us safety in the park. And as Professor Maybank suggested, yes, the presence of the cameras may, in fact, deter crime. But then, after those people spoke, their neighbors who live or run businesses three blocks away got up and said, well, wait a minute. All the bad guys have moved three blocks away. Now we’re the ones who are being robbed and our children are being terrorized.

And of course, I asked the representative from the police department, what are your statistics regarding crime in the area that surrounds Washington Square Park? And the answer from the police was, well, we don’t break down our statistics to that extent. So they couldn’t provide a good answer.

But that’s really an issue that kind of concerns me, and to some extent it goes to what Mr. Steinhardt was saying. What is our real gain if we do engage in surveillance, and does it only work if people know that the surveillance cameras are there? I don’t know what the answer is, but our host at the beginning suggested that, yes, we all want results, but in some cases there are consequences that we see as a result of whatever our actions might be. I guess the theme, based upon what I’ve heard, is that we should think before we act.

In relation to Mr. Atick’s talk, I am not terribly worried based upon what he said about the FaceIt technology. After all, it doesn’t record and it doesn’t store. But what I am concerned about to this day is what the government collects. Barry may remember a lawsuit that dates back maybe 20 years or so, which involved the collection of so-called noncriminal intelligence files regarding any number of people who might have been involved in protests, who might have been accused of being reds back in the McCarthy era. It took a long time for the Civil Liberties Union to get the police agencies to stop engaging in those practices and to enable people to gain access to the records about themselves.

Now let’s go back to Greenwich Village and let’s consider that in relation to the Freedom of Information Law. The tapes were running; the videos were stored. I asked the police department what do they do with the tapes. And to relate this to the Freedom of Information Law, it’s an access-to-records law, and in New York, since 1978, we have had on the books what many have characterized as an electronic FOIL, an electronic Freedom of Information Law. It defines the term record expansively to mean any information in any physical form whatsoever kept, held, filed, produced or reproduced by, with or for a government agency. So it clearly includes not only the 8 1/2-by-11-inch sheet of paper, it includes the tapes, computer tapes, disks, microfilm, e-mail, anything that stores information in some physical form that’s maintained by the government.

And I asked the police what if somebody came in and made a Freedom of Information request for the videotape of what occurred in the park last Tuesday. They’d never gotten a request, but they said, well, we would deny access. My question was: Why would you deny access to a record that simply captures what anybody in the park could have seen? Is that an invasion of privacy to disclose something like that? I don’t know what the answer is.

The standard in our law–in New York it’s the same [as] under the federal Freedom of Information Act–is that government can withhold insofar as disclosure would constitute an unwarranted invasion of personal privacy. If there can be unwarranted invasions of privacy, we all know that there must be permissible invasions of personal privacy. What do we do in the case of requests like that? How does anybody define what constitutes an unwarranted invasion of privacy?

The reality is, as a society, what we believed about privacy was different five years ago than it is today. It’s going to be different five years from now than it is today. And each one of you has a different personal line of demarcation between what you consider to be offensive as opposed to innocuous. Who makes the decisions? Is it going to be the government, and can we, in every instance, trust the government to do the right thing?

The threat of the cyber attack was mentioned by Mr. Vatis. The point that I want to make is that legislative bodies have a terribly tough time enacting legislation that is in any way consistent with the technology. You know, back in the old days, if we had records that were stored in a filing cabinet, we locked up the filing cabinet, locked the room, and wrote a procedure indicating how to get into that filing cabinet. If you didn’t have the key to the room and you didn’t have the key to the filing cabinet, you couldn’t get in. Which would mean that the record describing the procedure for entry could be public; there would be no harmful effects of disclosure. Again, without the key, there was no harm.

But what about now? What about a government agency’s security protocols? What about the descriptions of the software that it has designed to protect its electronic information systems? Generally speaking, those kinds of records, if push came to shove, would have been public under the state’s Freedom of Information law until two weeks ago. That was when the governor, after three years of trying on the part of our office, finally signed legislation which says that government has the ability to withhold records which, if disclosed, would jeopardize a government agency’s capacity to guarantee the security of its information systems.

Now, I grant you that that has little to do with physical security, but often the question is how easy do we want to make it for the world at large to gain access to records when disclosure would in some way result in at least the possibility of jeopardy.

I guess the final point would relate, again, to Mr. Steinhardt’s comments, this notion of the false sense of security. Many of you read a couple of weeks ago in The New York Times about a not-for-profit organization that acquired copies of voter registration lists in New York City. And the voter registration list has always been public. And it identifies all of you who are good citizens by name, by your residence address, by your political party affiliation. Now most people would probably say, I don’t want my home address to be disclosed, and yet we have lots of laws on the books that say, yes, our home addresses are public. If you’re a good citizen, your home address is public. It’s available to anybody for any reason. If you own a home, the assessment roll is public; it indicates where you live, how much your property is worth. That’s available to anybody.

Do we want to change the law? Do we want to preclude the government from disclosing those kinds of records when, in fact, there is a good reason for requiring disclosure? How do we square the capacity or the desire to protect privacy with the reality that in many instances there is a need to know based upon the balance?

If you buy a home and your taxes are jacked up, what do you do? You go to the assessment record to find out what your neighbors are paying. The legislature, perhaps without knowing so directly, decided that the minimal invasion of privacy that enables you to know whether the government is treating you fairly outweighs the extent to which your neighbor’s privacy might be invaded.

I think that we have to consider the balance between access and privacy. We have to consider the nature of information about all of us that the government collects. We have to consider what rights of access might be under the Freedom of Information Law. We have to think about the nature of privacy legislation that we might need to guarantee against abuses, not only in terms of disclosures to members of the public but, from my perspective, particularly disclosures to others within any number of units of government.

The quick solutions, it seems to me, are practically impossible to find. And in addition to the terrific technological advances that we’ve heard about today, it seems to me we also have to think about what we do in terms of the impact on society and again, the balance we have to try to strike, at the very least, between access and privacy, the expectations of privacy on the part of our citizens.

And with that, I'm sure that there will be lots of questions. Thank you.

(Applause.)

Ms. Hassler: I think we’re ready to go to the Q&A part of our session. I would ask, you in this room and people on the phone line, if you would identify yourself, when you ask your question. Tell us your name and your affiliation.

Mr. Rosen: My name is Stephen Rosen. I'm chairman of the Science and Technology Advisory Board. I have a question for Mr. Steinhardt and Dr. Atick. Mr. Steinhardt, I have been told by Constitutional lawyers that the Constitution somewhat addresses the issue of who owns their face. In the case of Marilyn Monroe, it’s one matter; that’s a public face and there are copyright issues. But in the case of the rest of us, I think it’s another matter, and I'd like you to address that as a lawyer. And Joseph, I would like to know what the true and false positives and negatives are, and under what circumstances.

Mr. Steinhardt: I wish you’d asked those questions in reverse.

Mr. Rosen: Okay, done.

Dr. Atick: I'll be happy to address them in reverse, if you’d like.

Mr. Steinhardt: Let me begin by saying that there are probably only nine Constitutional lawyers that count–the nine justices of the U.S. Supreme Court. There’s no simple answer to the question of who owns your facial image. What we’re talking about here is the ability of the government to conduct surveillance, to conduct what amounts to searches, and that’s, of course, governed by the Fourth Amendment. There aren’t a lot of clear answers here.

Generally speaking, the government can observe things that are in plain view. But the question is–and it’s never really been answered by a court–once the government goes beyond observing what’s in plain view and uses technology to conduct the kinds of searches that would require a warrant in other circumstances, what law applies?

It’s almost the question of picking your analogy here. For example, when you’re subjected to video surveillance and then an identity check using face recognition, what is that? Is that like the cop on the beat who notices that it’s Barry Steinhardt from his memory who’s walking down the street, or is it the cop on the beat who stops me without any cause, without any reasonable suspicion, and asks me to produce my driver’s license so that he can check my identity? In the latter circumstance, it does require some level of suspicion; it may even require a warrant. So it’s a question the courts have not addressed yet about which analogy applies here.

Now as to the question of false positives and false negatives, we're going to give you very different answers. But I think the telling point here is that these matters have been studied by government agencies. Face recognition has not, as a technology, fared well in the tests that have been done. And we're prepared to see independent parties test face recognition in settings like airports and find out whether, in fact, it works. I don't think we should be relying solely on assurances from vendors who have a financial interest in the technology. Let's go to the independent experts, ask them what they have discovered, first of all, and secondly whether or not they're willing to take this technology and test it in the kinds of circumstances in which it's now being proposed for use.

Dr. Atick: Well, in addressing this issue before a scientific crowd, one would explain in detail the circumstances under which you can achieve levels of facial recognition performance consistent with that of a single fingerprint. But in this crowd, where Barry has been addressing more the social implications, I would turn the question around and ask Barry what level of performance he would deem an acceptable standard, one that adds value. Would capturing eight out of 10 terrorists be acceptable? Five out of 10? Would the possibility of having actually stopped two or three of those 19 people have been of value? Knowing that we can deploy the technology in a manner that does not invade anybody's privacy, what is the benchmark? Before I agree to go down that path, what benchmark would the ACLU deem acceptable?

And I would add, taking it scientifically: go back and tell me what you think the performance is for luggage scanners and metal detectors. Given, by the way, that last week in New Orleans somebody managed to get a gun onto an airplane, and given that no technology is ever going to be a hundred percent, my concern is that we are camouflaging what should properly be a privacy agenda as a technology-assessment agenda–and I respect Barry Steinhardt's position on privacy. Concerns about the technology from a technology point of view should be left to the experts who know it. We should go back and address the issues that concern you, Barry. And tell me what would be a number. Give me the false accept rate that you would be happy with.
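
[To make the quantities in this exchange concrete: a false accept rate (FAR) is the fraction of non-matching faces a system wrongly accepts, and a false reject rate (FRR) is the fraction of genuine matches it misses; both fall out of where the operator sets a match-score threshold. The sketch below is illustrative only–the score distributions and threshold are invented assumptions, not FaceIt data.]

```python
# Illustrative sketch of how FAR and FRR follow from a score threshold.
# The normal score distributions below are invented assumptions for
# exposition, not measurements of any vendor's system.
import numpy as np

rng = np.random.default_rng(0)
impostor_scores = rng.normal(0.30, 0.10, 100_000)  # non-matching pairs
genuine_scores = rng.normal(0.75, 0.10, 100_000)   # matching pairs

threshold = 0.55  # the deployer's operating point

far = float(np.mean(impostor_scores >= threshold))  # impostors accepted
frr = float(np.mean(genuine_scores < threshold))    # genuine matches missed
print(f"FAR = {far:.4f}, FRR = {frr:.4f}")
# Raising the threshold lowers FAR but raises FRR, and vice versa --
# which is why "what error rate is acceptable?" is as much a policy
# question as a technical one.
```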

Mr. Steinhardt: I want something a little higher than we can get from a Ouija board. The problem here is not whether or not you can — you’ve got several problems. One is whether or not you could accurately identify a terrorist who is in a photographic database. The bigger problem is whether or not they’re in the photographic database. The fact is that none of those 19 terrorists from September 11 would have been in any photographic database.

Dr. Atick: Barry, three. Three. Three out of 19. I have the photographs. The ones I showed you up there–they're of the ones who are alive, not the ones who died.

Mr. Steinhardt: Can I finish my sentence? You had photographs–the FBI and the CIA had in their possession photographs?

Dr. Atick: Since August 23.

Mr. Steinhardt: In fact, those photographs were not distributed. They were not likely to be in the database. As for the kind of photographs they have distributed–the FBI has since distributed the photos of these 19 people–I know technologists who have attempted to enroll those people in a database using the FaceIt technology and those photographs, and they haven't been successful doing it. You can check the EPIC Web site (https://www.epic.org), E-P-I-C. Check Richard Smith's results in attempting to enroll those 19 people in a database.

The question is not whether or not this technology works in the abstract–I think there are problems with it in the abstract. But is it going to work in the real world? Do we have a photographic database that's going to work here? Are we using it in a circumstance where we're going to be able to make a match? We've got to go beyond the hypothetical one-in-whatever-thousand possibility here. There are things we can do that will make us far safer. As someone who travels a lot on airplanes, I can tell you that we have suggested a number of things that will make us far safer than relying on this technology will. The notion that we're only concerned about the privacy and sociological issues is not true. I think I made it pretty clear that the first question to be asked is: does it work; will it make us safer? We don't have to reach the other questions until we've affirmatively answered that one.

Ms. Hassler: I'd like to take a call from the phone. Do we have somebody there?

Operator: Thank you. We do have a question coming from Joe Shea with American Reporter. Please state your question or comment.

Mr. Shea: Again, on this issue of civil liberties versus surveillance technology, I'm addressing this to the gentleman from the ACLU: you offered an example of an individual in Scarsdale who was being singled out because his face appeared in a surveillance image. You seemed to imply that he was singled out because he was black. But, of course, if the suspect was black, they'd look for a black suspect. If it was a white suspect, they wouldn't bother. It seems to me that your concern is just–and frankly I often feel this way about the ACLU–overextended, particularly in a time when surveillance technology can save lives.

For instance, as a journalist right now, you can go up to anybody on any public street in America, and take their photograph and put it in the paper. There’s a presumption that a person out in the public street is not in a private place and is not entitled to privacy rights in that respect. So would you destroy the freedom of the press to establish these new privacies? I'd like to hear you address some of these issues.

Mr. Steinhardt: Okay, I'll try. The answer is of course not. We’re not talking about the question of whether or not the press can take a photograph on the public streets. As I said before, the analogy here may be, can the press walk up to someone and demand, at some penalty, that they identify themselves, perhaps by turning over a driver’s license? We need to be very careful here. We need not to set up a false tension between privacy or civil liberties and security.

I know, from having had this conversation with many journalists over the last two or three weeks, that's the sort of question that you’d like to ask and the sort of quote that you’re looking for from the ACLU. But we have to answer the other question first. The burden is really on Joseph Atick and those in his industry to demonstrate that this is going to make us safer, that this is going to work. Only when they have met that burden, which they can’t meet by just making assurances–let them subject this to independent testing before they make those assurances–only when they have met that burden should we be even engaging in a debate about whether or not there is a privacy or civil liberties consequence that’s too high.

We really need to ask those questions first. I don't think the evidence is there, but I'm prepared to see it tested by independent experts, rather than relying solely on industry assurances–from the same industry that told us it had a database of terrorists and violent felons for the Tampa Super Bowl and came up with 19 pickpockets and ticket scalpers, not all of whom were even properly identified.

Dr. Atick: There are a couple of misconceptions that I have to address. First of all, Visionics was not involved in the Super Bowl–that was not us.

Mr. Steinhardt: I said the same industry.

Dr. Atick: Second, the notion about the Scarsdale example is interesting because, as I mentioned, FaceIt is not a recording system. If your face does not match somebody on the watch list, you won't be remembered. There's no match, there's no memory. So that example does not apply to facial surveillance used in that way.

Third is when you ask about the value proposition the technology can offer. If you look at the places where we've implemented it and the customers who have used it–in the Mexican election program, I'm very proud to say that the results of having eliminated duplication in the Mexican election of July 2 of last year would have been enough to alter the direction of democracy in that country. These were people who would vote again and again and again–vote often; vote early. With facial recognition in place, once your faceprint was locked in, you couldn't vote again.
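
[To make the deduplication idea concrete, here is a minimal sketch of enrollment-time duplicate checking: a new faceprint is enrolled only if it does not match anyone already in the gallery. The cosine-similarity comparison and the 0.8 threshold are illustrative assumptions, not the actual FaceIt mechanism.]

```python
# Minimal sketch of enrollment-time deduplication, as in the voter
# registration example. The similarity measure and threshold are
# illustrative assumptions, not the actual FaceIt mechanism.
import numpy as np

def enroll(gallery: list, faceprint: np.ndarray,
           threshold: float = 0.8) -> bool:
    """Add faceprint to the gallery unless it duplicates an entry."""
    probe = faceprint / np.linalg.norm(faceprint)
    for enrolled in gallery:
        score = float(probe @ (enrolled / np.linalg.norm(enrolled)))
        if score >= threshold:
            return False  # likely duplicate: refuse a second enrollment
    gallery.append(faceprint)
    return True
```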

The reduction in crime in London after the installation of FaceIt technology, over and above what would have happened from CCTV control alone, was so dramatic that you cannot deny a very strong deterrence factor. We can debate the technological capabilities of all kinds of technologies for the next three to five years–does the cell phone work?–but at the end of the day it's not Barry Steinhardt who's going to determine that; it's ultimately the people who have that need, who say we have a clear and present problem.

We have a danger here. We need to restore the public’s sense of safety and we need to deter terrorists from thinking that our airports are safe havens. Never will there be a hundred percent technology, but there is one that can deliver significant value without a cost to our privacy. Why are we debating it?

Ms. Hassler: Excuse me. Excuse me. Another question please, and we’ll allow the dueling to continue.

Mr. Donovan: Thank you. My name's Tim Donovan. I'm from Technology Space. We've created real-time capabilities to make videos searchable. But there seem to be three separate issues here. One is whether or not the technology works. The technology works better in some instances and worse in others–it's a good question whether or not the cell phone works. We have to keep that in perspective.

The next aspect is the notion of personal database indexes. If, in fact, we are storing data, then a fundamental issue, I would think, for the ACLU is whether or not we’re storing information on personal database indexes. That is the fulcrum of this whole discussion.

And the third one is security–private buildings versus public spaces. And in the area of terrorists, if we really get back to the nub of the issue, there is a visa issue, a visitor issue, in the United States that's been left to go to seed. If we addressed it when we allow people to come in–essentially creating a personal index on each visitor, with a photograph–then when you run a capture at an airport against that photograph, you can verify whether the person is who they say they are. We really have to separate these issues. And I'd like to hear someone up there try to address those three separate issues. Thank you.

Ms. Hassler: Any takers?

Dr. Maybank: Personal indexing is an interesting and controversial issue. You can imagine a scenario in which Mr. Brown leaves his home and he's photographed. Then his wife leaves the home; then his two children leave the home. And then a corrupt operator tells a nearby burglar that the house is empty. You could have a scenario like that–why not? So quite apart from any questions of liberties, we have to control this information simply to prevent certain new sorts of crime that will become possible.

Mr. Donovan: That also happens with the post office when you ask someone to hold onto your mail. We rely on the postal service to handle that in confidence, and frankly we do.

Dr. Atick: But there is a more effective way of doing it when it comes to visas. We do require people coming to the United States to submit a photograph of themselves before we grant them the visa. And we do have the technology today to take that photograph submitted from the embassy and check it against the watch list of known terrorists and just stop them from entering the country. Leave them home where they are, and don’t let them into the country. That’s a legitimate application of facial recognition that I think could be very effective.
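
[The visa-screening application Dr. Atick describes is, mechanically, a one-to-many watch-list search. The sketch below assumes a hypothetical encode_face function that turns a photo into a faceprint vector; the function, similarity measure, and threshold are all illustrative stand-ins, not a description of any deployed system.]

```python
# Sketch of the visa-screening idea: compare an applicant's photo
# against a watch list before the visa is granted. `encode_face` is a
# hypothetical stand-in for a faceprint extractor; the threshold is an
# illustrative assumption.
import numpy as np

def screen_applicant(photo: np.ndarray,
                     watch_list: dict,
                     encode_face,
                     threshold: float = 0.8) -> list:
    """Return watch-list identities whose faceprints match the photo."""
    probe = encode_face(photo)
    probe = probe / np.linalg.norm(probe)
    hits = []
    for name, template in watch_list.items():
        score = float(probe @ (template / np.linalg.norm(template)))
        if score >= threshold:
            hits.append((name, score))
    return hits  # an empty list means no watch-list match
```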

Ms. Kim: My name is Judy Kim. I'm the director of Multimedia Information Technologies in the Office of (unintelligible) Liaison at NYU. We've been talking about very broad policy and legal questions–fine. But there are several competitors, I believe, in your space. Are there principles that are common to your company and to your competitors' companies? Is there a common principle, or is each company developing and selling a different type of technology, based on different principles? How are you the same; how are you different?

Dr. Atick: From the technology point of view, at least, there are three classes of technologies out there today. There are the local feature analysis technologies, which analyze the face as a reducible pattern, by looking at the subcomponents of the face and how they relate to each other; that's our technology. There is the holistic approach, which analyzes the face as a whole pattern, as a unit, and doesn't break it up into pieces. And then the third is neural-network based: you throw up your hands, you don't know what's going on, so you just decide to train a network to recognize people.
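
[Of the three classes, the holistic one is the simplest to sketch; the eigenface method, built on principal component analysis, was the best-known holistic technique of this period. The toy version below uses synthetic data in place of real face images and illustrates the class, not any vendor's algorithm.]

```python
# Toy eigenface-style sketch of the "holistic" class: each whole face
# image is projected onto principal components and compared as a single
# pattern. Random vectors stand in for real face images.
import numpy as np

rng = np.random.default_rng(1)
faces = rng.random((50, 32 * 32))     # 50 flattened 32x32 "face" images
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# Principal components ("eigenfaces") via SVD of the centered data.
_, _, components = np.linalg.svd(centered, full_matrices=False)
eigenfaces = components[:10]          # keep the top 10 components

gallery = centered @ eigenfaces.T     # coefficients for enrolled faces

def identify(img: np.ndarray) -> int:
    """Return the index of the closest enrolled face."""
    probe = eigenfaces @ (img - mean_face)
    return int(np.argmin(np.linalg.norm(gallery - probe, axis=1)))

print(identify(faces[7]))  # -> 7: the probe matches its own enrollment
```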

Ms. Kim: So in your faceprint, are you focused more on mathematical ratios between features and points; is that what you’re doing?

Dr. Atick: Pretty much you're looking at a topographic map of the face. Just as you can describe New York by saying Manhattan has Grand Central Station, the Empire State Building, and Penn Station, and by describing where all the landmarks are relative to each other, you end up with a complete description of the identity of the city you're in.
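
[The landmark-map analogy can be illustrated with a toy faceprint built from the relative distances between facial landmarks, normalized so the description doesn't change when the camera moves closer or farther away. The landmark set and comparison below are hypothetical simplifications, not the actual local feature analysis.]

```python
# Toy version of the landmark-map idea: describe a face by the pairwise
# distances between landmarks (eye corners, nose tip, mouth corners...),
# normalized for scale. A hypothetical simplification for exposition.
from itertools import combinations
import numpy as np

def faceprint(landmarks: np.ndarray) -> np.ndarray:
    """landmarks: (n, 2) array of (x, y) points detected on a face."""
    dists = np.array([np.linalg.norm(a - b)
                      for a, b in combinations(landmarks, 2)])
    return dists / dists.sum()  # relative geometry, independent of scale

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    return 1.0 - float(np.abs(faceprint(a) - faceprint(b)).sum()) / 2.0

face = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 2.5], [2.0, 4.0]])
same_face_closer = face * 1.7  # same geometry seen at a larger scale
print(similarity(face, same_face_closer))  # -> 1.0
```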

Ms. Kim: So there are three different types of approach to this problem, and there’s a possibility there could be alternative methods as well.

Dr. Atick: There could be other methods as well.

Ms. Kim: So it's not really possible to talk about the effectiveness of "the technology" in general; we need to distinguish which technology we're using and how it applies in the current situation.

Dr. Atick: There’s a lot of research going on in academia where more ideas and algorithms could be developed. Some are in our labs actually.

Mr. Steinhardt: But there are policy makers out there, politicians, some of whom might well adopt a technology because it makes people feel good rather than because it actually works. Policy makers are making real decisions, spending tax dollars on various technologies, and doing it on the basis of claims made by various vendors who have a product to sell. And we need to be able to trust but verify. We need to be assured that this technology works.

I've been on a stage with one of Dr. Atick's competitors who will sit there and trash [Atick's] technology and tell you that his version of face recognition is better. And I'm sure that Dr. Atick will tell you that his is better than Viisage's, for example. But, you know, we need to have some independent examination of whether any of these technologies work.

Ms. Kim: Wouldn’t that be an ongoing assessment? You know Consumer Reports comes out every month.

Dr. Atick: Correct.

Mr. Steinhardt: Well, we certainly want to test this before we begin to rely on it to protect us in our nation’s airports. We don’t want to buy a pig in a poke.

Ms. Hassler: I'm going to take a phone question.

Operator: Thank you. Our next question is coming from Kevin Coughlin with the Newark Star-Ledger. Please state your question or comment.

Mr. Coughlin: Hi. My question is for Michael Vatis. I was wondering if you could just elaborate on your Manhattan Project a bit as to how it would get started. Would the Office of Homeland Security be a focal point for something like this? How would you organize it and how would you get it going?

Mr. Vatis: The essential principle would be to have a nongovernmental entity, and also a nonindustry entity, be the vehicle for funding R&D of new counterterrorism technologies. But it would have government funding, so that it would not depend on industry's desire for profit and would not be driven by agency agendas per se.

In terms of what government agency or entity should be the vehicle for delivering the funds, I don't think it particularly matters. I doubt the Homeland Security Office is the right vehicle, simply because they're part of the White House and don't have budget authority over other agencies, and it just doesn't seem to fit their role as it's currently set out. But that office does offer the advantage of being not part of Defense, not part of Justice, not part of the FBI or Commerce. Still, I don't think that's really the most important thing. The key is to have the funding done by the government but have the work done outside the government, so you have a somewhat neutral arena in which to make the decisions.

One of the functions it could perform would be the role that Barry set forth–being a neutral measurer of vendors' claims about technology that could be useful in the cause of counterterrorism. There's no one presently doing that sort of neutral evaluation of technology to see if, indeed, it does work. My institute, along with the SANS Institute, currently performs that sort of function for cybersecurity technology that's put out there by vendors–intrusion detection systems, for instance.

And vendors make all sorts of claims about having a 99.9 percent success ratio, and then when you examine their claims against actual performance, there's a significant discrepancy. So a Manhattan Project, whatever it would officially be called, could perform that role on behalf of the government.

Ms. Hassler: Do you have a follow-up?

Mr. Coughlin: How much money do you think we’d need? Someone mentioned $1.5 trillion if you were to implement everything that’s been bandied about.

Mr. Vatis: It's impossible to put a dollar figure on it. But there were a couple of expert commissions over the last three years that looked at the question of how much government funding would be needed just in the cybersecurity arena, putting aside chemical, biological, nuclear, and radiological weapons. And they came up with an estimate of $100 million to $150 million a year to develop cybersecurity technologies.

I think that’s a reasonable estimate. It is a drop in the bucket of the billions of dollars we’re spending on conventional weaponry, which is not going to be of all that great utility in the war against terrorism. You can only gain so much by bombing caves, bombing tents, and bombing camels. What we need is technology to fight the threat within the United States, within allied nations where the terrorists, the cyber attackers, all reside–a much more complicated and completely underfunded effort right now.

Ms. Davis: My name is Ellen Davis with ABC News. And I have a two-part question: the first part is to Professor Maybank and Dr. Atick.

Can you tell me whether, in the aftermath of September 11, you've experienced any kind of spike in interest or in contracts from city, state, or federal governments? Are the people who would have the authority to do that considering it?

Dr. Maybank: I haven’t so far seen concrete evidence, but I suspect that it has changed the slope of the pitch. And I believe there is a change.

Dr. Atick: The level of inquiry has been phenomenal since September 11. There's been a nonstop barrage of federal, local, and state agencies, as well as international organizations concerned with security, whether at airports or at public venues and facilities. One other big thing we're seeing is interest in ways to combat identity fraud, because, as you know, we hear a lot about false documents, passport traders, identity-fraud schemes, and so forth.

Ms. Davis: Thank you. And then–I'm going to be sorry for asking this. But, Barry, I do need to ask you: we've done a number of polls–excuse me–about the legislation that was recently passed, about the issues we're talking about today, about, generally, whether people are willing to compromise some of their freedom, some of their civil liberties, to feel safer. And a stunning majority of people are. Now, separate and apart from whatever Constitutional challenges end up before the courts, how do you respond to that? What do you say to those people? How do you assure them that we can do things to make them feel more secure that don't, in your view, trample on civil liberties?

Mr. Steinhardt: Well I think the question for people is: are you willing to trade your civil liberties for a false sense of security?

Ms. Davis: Well, you keep saying that. So what’s a real sense of security?

Mr. Steinhardt: A real sense of security comes from a technological measure for surveillance, or a database, or whatever, that will actually make us safer. Take the antiterrorism legislation for a moment. In a sense, Congress sold America a real bill of goods, right? They passed something they called antiterrorism legislation that significantly increased the powers of government to surveil, to arrest, and, in some cases, to indefinitely detain people–powers that had very little to do with terrorism, with anything people would recognize as terrorism, like the biological attack or the attack by airplane that occurred on September 11. It applied to crimes in general.

Part of our role is to try to explain to people: you need to ask some questions about what exactly is being proposed here and what exactly the benefits are. We, of course, have also been engaged in talking to people about these issues, and yes, you can find polls out there showing that people are, in general, willing to give up their civil liberties for security. But one thing you find is that when you drill down a bit and ask people, do you trust government to do X, Y, and Z, often they will say no–they don't think they're going to get any benefit from a particular proposal.

Ms. Davis: But you mentioned that you have some alternative proposals. Could you just tell us briefly what some of them are?

Mr. Steinhardt: Well, I'll tell you about a couple of the proposals, and then I want to tell you an anecdote. For example, making the cockpit doors more secure is clearly something we ought to be doing–something the airlines are beginning to do and the FAA is beginning to do. More intense training and better professionalism among the airport security workers, the screeners, is something we ought to be doing. Congress is engaged in a bitter debate now about who should be running the system. We need to ask who is best able to provide us with security.

But let me tell you about a recent experience. About two weeks ago, I and a number of my colleagues from Washington had a meeting with some people who handle security for the FAA. I flew down from here, from La Guardia airport, on one of the shuttles. That's probably got the highest level of security of any air route in America right now. We went through the initial security screening. People are then, either randomly or as the result of some sort of profiling, selected for additional screening at the gate, where a wand is passed over their bodies to look for metal. And not surprisingly, when it got to a man's zipper, it routinely would go off, and he'd be groped in public. A very intrusive kind of search; I felt very fortunate that I wasn't selected.

I then got on the airplane. It’s the shuttle, and because it’s the shuttle and you pay a premium for it, they gave out free liquor. How did they give us the liquor, both the beer and wine? In glass beer bottles. And I sat there thinking, my God, they just physically groped people at the gate in public, and then they passed out these glass beer bottles. They passed out a weapon and they passed out alcohol; the weapon contained alcohol. How foolish was that?

So we need to ask what other simple things we can do that don't intrude on people's privacy or their civil liberties and that are going to make us a lot safer. It doesn't make a lot of sense to grope people because their metal zipper set off a metal detector, and then hand them a weapon on the plane.

Ms. Hassler: I’d like to take another phone call, if we could.

Operator: Thank you. Our next question is coming from Claire Haney with Business A.M. Please state your question or comment.

Ms. Haney: It's Business A.M., a Scottish newspaper. This question is for Dr. Maybank. Looking specifically at capturing terrorists, I wonder whether the CCTV systems in London have actually resulted in anything?

Dr. Maybank: They've been used retrospectively. For example, in about '99, we had terrorist explosions in Brixton. A couple of people were injured, but the culprit was caught very quickly because the police could go through databases of video images taken in Brixton and identify suspects. But going through those databases of images is a very labor-intensive process that you would only undertake for some large danger, like those bombs.

There was also another case, a very terrible and horrifying one from about '93, in which a young child was kidnapped and killed, and the child being led away through a shopping center was captured on video. This example of the use of CCTV in a kidnapping has impressed itself very firmly on the minds of the British public, and it's one of the reasons they're in favor of this kind of surveillance.

Dr. Atick: I don't underestimate the power of deterrence of a ring of cameras like the one placed in London. Basically, terrorists are always going to look for areas of vulnerability. It's just like the locks we all have on our doors: very few of us get burglars coming through our doors, but if the doors didn't have locks, you'd have many more people trying to get in.

So I would say that in London there has been a significant success, in that there might have been terrorist attacks that did not take place as a result of this.

Ms. Hassler: Do we have a final question? Then we'll wrap up, and people can talk to individual speakers here. Yes, sir.

Mr. Rocher: My name is Marty Rocher. I'm part of a think-tank group in data technology. My question is for Dr. Atick. According to your tests, what is the level of accuracy, and under what conditions? And what other initiatives are you pursuing to make it better?

Dr. Atick: I'm going to give you a facial recognition demonstration right now to show you why quantifying the kind of numbers you're asking me for is difficult. I'll give you facial recognition that is 100 percent accurate [faces audience]: this is it. I'll give you facial recognition that is 0 percent accurate [turns back of head to audience]: this is it. So the whole notion of what constitutes the accuracy of facial recognition, when a crowd of people is streaming through without any subject intervention, depends entirely on the level of control you're willing to impose. If you're willing to impose lighting conditions and control, and multiple cameras–so that even if a person has turned their face down, another camera picks them up–then you can perform just as you would in the lab, in the high-90s percentile.
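
[Dr. Atick's point about control and redundant cameras can be put in back-of-envelope arithmetic: if each camera independently yields a matchable capture with probability p, then k cameras miss the subject with probability (1 - p)^k. The per-camera probability below is an invented value for illustration only.]

```python
# Back-of-envelope version of the "multiple cameras" argument. The
# per-camera capture probability p = 0.6 is an invented illustration,
# not a measured figure; cameras are assumed independent.
p = 0.6
for k in (1, 2, 3, 4):
    print(f"{k} camera(s): detection probability = {1 - (1 - p) ** k:.3f}")
# 1 camera: 0.600; 2: 0.840; 3: 0.936; 4: 0.974 -- redundancy and
# controlled conditions are how field accuracy approaches lab accuracy.
```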

Mr. Rocher: But you would have to put some parameters around that to communicate it to us. If this were a TV I was buying, I wouldn't buy that TV.

Dr. Atick: Right. And those parameters are communicated to the people who are in the operations business, who operate airports, who understand what needs to get done. At the end of the day, you need to be safe. You need to have safety and security, and the operations people need to know what they must do to deliver that level of performance. So that's a technical discussion that is being engaged in with the relevant authorities. It's certainly not a technical discussion I would have with the ACLU, because how to make things work is not their domain. We talk to the airport authorities about how to make things work, and we give them all the criteria.

Ms. Hassler: I'm going to actually–I'm going to bring this part to a close. I'd like to thank you all in the room for participating and thank our telephone listeners and thank the speakers for an interesting afternoon.
