John H. Gass is still not a happy person. On the 5th of April, he received a letter dated the 22nd of March from the Massachusetts Registry of Motor Vehicles telling him that his driving license had been revoked, and that he must immediately stop driving.
Mr. Gass, who had not received a traffic violation in years, was flagged as a person suspected of having a fake identity by an automated anti-terrorism facial recognition system used by the RMV, an article in the Boston Globe reported. At least 34 other states use the same or similar software, the Globe says, much of it paid for in part by grants from the US Department of Homeland Security.
It turns out that the face recognition software flagged Mr. Gass's picture as looking like another Massachusetts driver, hence the letter from the Massachusetts RMV. The Globe says that it took Mr. Gass ten days of wrestling with the RMV bureaucracy to prove to them that he was indeed who he said he was before he was able to get his license back.
According to the Globe story, based on results of the recognition system, last year the "State Police obtained 100 arrest warrants for fraudulent identity, and 1,860 licenses were revoked as a result of the software."
The Massachusetts State Police explained to the Globe how the RMV facial recognition system works:
"The system looks at each driver’s license photograph stored in the state’s computers, mapping thousands of facial data points and generating algorithms that compare the images to others in the mathematical database... The software then displays licenses with similar-looking photographs - those with two or more images that have a high score for being the same person. Registry analysts review the licenses and check biographical information, criminal records, and drivers’ histories, in part to rule out cases with legitimate explanations, such as drivers who are identical twins."
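The pipeline the State Police describe — reduce each photo to facial data points, score every pair of images, and send high scorers to human review — can be sketched in a few lines. Everything below is hypothetical (the toy feature vectors, the 0.98 threshold, the choice of cosine similarity as the score); it illustrates the general shape of such a system, not the RMV's actual software.

```python
# A minimal sketch (not the RMV's real system) of threshold-based
# duplicate flagging: each license photo is reduced to a feature
# vector, all pairs are scored, and high-scoring pairs are queued
# for human review.

from itertools import combinations
from math import sqrt

def cosine_similarity(a, b):
    """Similarity score between two facial feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

def flag_similar_licenses(licenses, threshold=0.98):
    """Return pairs of license IDs whose photos score above threshold."""
    flagged = []
    for (id_a, vec_a), (id_b, vec_b) in combinations(licenses.items(), 2):
        if cosine_similarity(vec_a, vec_b) >= threshold:
            # In the real system, these go to registry analysts who check
            # biographical information, criminal records, and driver history.
            flagged.append((id_a, id_b))
    return flagged

# Toy vectors standing in for "thousands of facial data points".
licenses = {
    "gass":  [0.91, 0.42, 0.13],
    "other": [0.90, 0.43, 0.12],   # a near-identical-looking stranger
    "smith": [0.10, 0.95, 0.70],
}
print(flag_similar_licenses(licenses))   # the "gass"/"other" pair is flagged
```

Note that the flagged pair here is two different people who merely look alike — exactly Mr. Gass's situation — which is why the human-review step matters.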
However, apparently no one at the RMV or the State Police keeps track of the number of false positives the system generates. In each flagged case, the person identified has to come to the RMV to prove his or her identity. In Mr. Gass's case, both he and the other driver identified were told to come in.
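That missing false-positive count matters more than it might seem: when every photo is scored against every other photo, the number of comparisons grows with the square of the database size, so even an excellent per-pair error rate produces a steady stream of innocent matches. A back-of-the-envelope sketch, using entirely hypothetical figures (neither the driver count nor the error rate comes from the RMV):

```python
# Back-of-the-envelope arithmetic (hypothetical numbers, not RMV data):
# all-pairs comparison makes even a tiny false-match rate add up fast.

def expected_false_matches(num_photos, false_match_rate):
    """Expected number of innocent pairs flagged when every photo
    is compared against every other photo."""
    num_pairs = num_photos * (num_photos - 1) // 2   # ~n^2/2 comparisons
    return num_pairs * false_match_rate

# Assume ~4.5 million licensed drivers and a one-in-a-billion
# false-match rate per comparison.
print(expected_false_matches(4_500_000, 1e-9))   # roughly ten thousand flags
```

Under those assumptions, a one-in-a-billion error rate still yields on the order of ten thousand innocent pairs flagged — which is why the absence of any false-positive bookkeeping is striking.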
Mr. Gass, who needs to drive for his job, is now suing Massachusetts for "... unspecified damages and an injunction blocking the Registry from revoking licenses without a hearing."
The RMV Registrar Rachel Kaprielian apparently has little sympathy for Mr. Gass, saying that protecting the public far outweighs any inconvenience Gass or anyone else might experience.
"A driver’s license is not a matter of civil rights. It’s not a right. It’s a privilege,"
Registrar Kaprielian told the Globe, adding that it is the individual's "burden" to clear his or her name of any mistakes made by the RMV.
I assume this is the same "false positive burden" that innocent drivers flagged in the other 34 states using facial recognition software must also bear. It is likely an even larger burden for the great number of people who wish to fly into the US. For according to this story in the Washington Post,
"The State Department since 2009 has been using electronic facial recognition techniques on all visa applications. It now has 142 million images in its database."
The Post story reports no data on the number of false positives generated by the State Department's facial recognition software, but I am guessing from reading the story that those flagged as false positives are routinely denied visas to visit the US. There is a recourse offered for a denied visa, but I don't know how easy it is to overcome a false positive incident.
The burden of proving you are who you say you are is only likely to grow.
In a related story, this time in Reuters, police departments in Massachusetts and other states are about to roll out the Mobile Offender Recognition and Information System - or MORIS - made by BI2 Technologies, in Plymouth, Massachusetts. This rollout is also apparently paid for by federal grant money. MORIS allows for both iris and facial scanning of a person. Reuters states that:
"When attached to an iPhone, MORIS can photograph a person's face and run the image through software that hunts for a match in a BI2-managed database of U.S. criminal records. Each unit costs about $3,000."
Apparently, says a story in the Chicago Tribune, there are no overt US constitutional issues involving at least the facial recognition aspect of the MORIS scanner. As the story explains,
"... using a camera-like device to snap random passersby poses no apparent constitutional issue, since it doesn't involve a search or a seizure, any more than video surveillance cameras do. It merely duplicates what a police officer might do with his eyes - looking at individuals in public in search of bad guys whose appearance he has memorized. Iris scans, by contrast, being more invasive and possibly requiring a stop, are more akin to taking fingerprints, which police may not do without 'reasonable suspicion.' "
The "merely duplicates what a police officer might do with his eyes" line may be technically correct, but show me a police officer who can remember thousands of faces and the names that go with them.
Studies a few years ago indicated that plastic surgery can throw off facial recognition algorithms as much as 98% of the time in some circumstances, hence the move to include iris-scanning capability.
However, privacy advocates worry that the scanner, which can currently be used from up to four feet away to scan a person's iris, may be abused by some in law enforcement. Police groups, the Tribune story reports, say not to worry. These types of assurances need to be taken with a grain of salt, as UK citizens recently discovered in relation to widespread unauthorized police access to supposedly confidential databases.
The proliferation of facial and other types of photo/video-based recognition systems has exploded in the past few years. Facebook, for instance, rolled out late last year a controversial capability that allows users to identify their friends in pictures. This has led photographers to try to see how many people can be "Facebook-tagged" in one photo (with the police interested in the results as well).
Casinos in Canada have started to use facial recognition software to keep "problem gamblers" out of their establishments, while some bars are going to use it to report the current ratio of men to women in attendance.
In many ways, I find what advertisers are planning to do with facial recognition software creepier. As this Wall Street Journal article from earlier in the year notes, advertising agencies are looking into recognition technologies that "... actually recognizes faces. If you raise your eyebrow, it can track that."
"The new systems can detect and interpret motions as subtle as nodding or frowning. Some facial-recognition technologies can even identify individuals, one of the reasons the industry's progress in the field is likely to raise privacy concerns."
Nice of them to think of that issue, although I suspect it isn't for more than a millisecond.
Of course, just what is considered private information is a bit squishy. For instance, the state of Florida made $63 million last year selling drivers' "names, addresses, dates of birth, a list of the vehicles they drive." The data management companies that buy the information promise not to harass people as a result of the information they now own.
You may want to add another grain of salt to those promises as well.
Let me know what you think of this widespread application of facial and other photo/video recognition systems and their implications for society, now as well as in, say, 10 years.
Update: 26 July 2011
The Wall Street Journal and others are reporting today that Google has acquired a face-recognition technology company called Pittsburgh Pattern Recognition. Google won't say why it made the acquisition, although it says it doesn't plan to use the technology until strong privacy measures are in place.
I suggest buying more grains of salt.
Update: 27 July 2011
Reuters is reporting today that Facebook has agreed to make it easier to opt-out of the use of its facial-recognition tagging feature. Maybe you can take back a grain of salt, at least for now.
Contributing Editor Robert N. Charette is an acknowledged international authority on information technology and systems risk management. A self-described “risk ecologist,” he is interested in the intersections of business, political, technological, and societal risks. Along with being editor for IEEE Spectrum’s Risk Factor blog, Charette is an award-winning author of multiple books and numerous articles on the subjects of risk management, project and program management, innovation, and entrepreneurship. A Life Senior Member of the IEEE, Charette was a recipient of the IEEE Computer Society’s Golden Core Award in 2008.