Facial Recognition Faces More Proposed Bans Across U.S.

Critics say the technology is not very accurate, and is often rolled out first in minority communities

Photo-illustration of a man of color with facial recognition technology reading his face. Photo-illustration: iStockphoto

THE INSTITUTE
Several companies, including Amazon and Clarifai, are working to create reliable facial recognition technology for use by government agencies and law enforcement to catch criminals and find missing children.

Amazon’s Rekognition can identify, analyze, and track people in real time. Within seconds, the software can compare information it collects against databases housing millions of images. Law enforcement agencies have used the technology to help find missing people and to identify suspects in terrorist attacks.
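The matching step described above (comparing a newly captured face against a database of millions of images) typically comes down to nearest-neighbor search over face "embeddings," numeric vectors extracted from each photo. The sketch below is a minimal Python illustration of that idea using cosine similarity; the function name, threshold, and data layout are assumptions for illustration, not Rekognition's actual API.

```python
import numpy as np

def best_match(probe, database, threshold=0.6):
    """Return (index, score) of the closest face embedding in the
    database, or None if no score clears the similarity threshold.

    `probe` is a 1-D embedding vector; `database` is an (N, d) array
    of stored embeddings. All names here are illustrative.
    """
    # Normalize vectors so dot products become cosine similarities.
    probe = probe / np.linalg.norm(probe)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    scores = db @ probe
    idx = int(np.argmax(scores))
    if scores[idx] < threshold:
        return None  # no confident match in the database
    return idx, float(scores[idx])
```

The threshold is the operationally important knob: set it too low and the system returns confident-looking matches for faces that are not in the database at all, which is one root of the misidentification concerns discussed below.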

While this technology may have benefits, it has recently faced a backlash. Many people are concerned about racial bias and protecting citizens’ privacy.

This month, San Francisco became the first U.S. city to ban the use of facial recognition technology by its police force and city agencies. This ban was prompted by concerns that facial recognition surveillance unfairly targets and profiles certain members of society, especially people of color. According to CPO Magazine, when law enforcement agencies adopt this technology, they usually roll it out in high-crime, low-income neighborhoods, and those disproportionately are neighborhoods with a high percentage of people of color.

The ban gives the city’s Board of Supervisors oversight over all surveillance technology used by agencies and law enforcement authorities. The departments are now required to conduct an audit of all existing surveillance technology, such as automatic license plate readers, and to write an annual report on how the technology is being used and how data is being shared. The board must approve the purchase of all new surveillance technologies. The ban does not cover use of the technology by individuals or businesses.

Some proponents of facial recognition technology say the legislation passed by the city goes too far, and that the technology should merely be suspended until it improves rather than banned outright.

While speaking with National Public Radio, Daniel Castro, vice president of the Information Technology and Innovation Foundation, a think tank, said, “[The Board of Supervisors] is saying, let’s ban the technology across the board, and that seems extreme, because there are many uses of the technology that are perfectly appropriate.” He explains that the government and the companies that produce the software want to use it to fight sex trafficking, for example. “To ban it wholesale is a very extreme reaction to a technology that many people are just now beginning to understand,” Castro says.

Similar legislation to San Francisco’s is under consideration in Oakland, Calif. Massachusetts is considering imposing a moratorium on facial recognition software in the state until the technology improves.

Concerns about the technology aren’t unfounded. In a study published by the MIT Media Lab earlier this year, researchers found that commercial facial-analysis software misclassified the gender of darker-skinned women far more often than that of lighter-skinned men, according to news website The Verge.
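Audits like the MIT study come down to breaking error rates out by demographic subgroup rather than reporting a single overall accuracy number, which can hide large disparities. A minimal sketch of that kind of disaggregated evaluation (the record format and group labels here are hypothetical, not the study's actual data):

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute the misclassification rate for each demographic group.

    `records` is an iterable of (group, predicted, actual) tuples;
    the field layout is illustrative, not the MIT study's format.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    # Per-group error rate: wrong predictions / total predictions.
    return {g: errors[g] / totals[g] for g in totals}
```

A system can score well on aggregate accuracy while one subgroup's error rate is many times another's, which is exactly the pattern the study reported.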

According to the results of an investigation released earlier this month, New York City’s police officers were found to have edited suspects’ photos—and uploaded celebrity lookalikes—in an effort to manipulate facial recognition results. An NYPD representative emphasized the investigative value of this technology when asked about the allegations by The Verge, but he did not dispute any of the claims. 

“It doesn’t matter how accurate facial recognition algorithms are if police are putting very subjective, highly edited, or just wrong information into their systems,” Clare Garvie, a senior associate at the Center on Privacy and Technology who specializes in facial recognition, said in an interview with NBC News. “They’re not going to get good information out. They’re not going to get valuable leads. There’s a high risk of misidentification. And it violates due process if they’re using it and not sharing it with defense attorneys.”

Should more municipalities ban the use of this technology or impose a moratorium? Does the good this technology can do outweigh the concerns being raised?

How Duolingo’s AI Learns What You Need to Learn

The AI that powers the language-learning app today could disrupt education tomorrow

This playful illustration shows Duolingo’s owl mascot, cut away down the midline, revealing a high-tech skeleton suggestive of an AI robot. Illustration: Eddie Guy

It’s lunchtime when your phone pings you with a green owl who cheerily reminds you to “Keep Duo Happy!” It’s a nudge from Duolingo, the popular language-learning app, whose algorithms know you’re most likely to do your 5 minutes of Spanish practice at this time of day. The app chooses its notification words based on what has worked for you in the past and the specifics of your recent achievements, adding a dash of attention-catching novelty. When you open the app, the lesson that’s queued up is calibrated for your skill level, and it includes a review of some words and concepts you flubbed during your last session.

Duolingo, with its gamelike approach and cast of bright cartoon characters, presents a simple user interface to guide learners through a curriculum that leads to language proficiency, or even fluency. But behind the scenes, sophisticated artificial-intelligence (AI) systems are at work. One system in particular, called Birdbrain, is continuously improving the learner’s experience with algorithms based on decades of research in educational psychology, combined with recent advances in machine learning. But from the learner’s perspective, it simply feels as though the green owl is getting better and better at personalizing lessons.
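Duolingo has published research on “half-life regression,” a spaced-repetition model in which the probability of recalling an item decays exponentially with time since the last practice. Birdbrain’s current internals are not public in detail, so the sketch below shows only that basic published forgetting curve; the function and parameter names are illustrative.

```python
def recall_probability(days_since_practice, half_life_days):
    """Estimated probability that a learner still recalls an item,
    using the exponential-forgetting form from Duolingo's published
    half-life regression work: p = 2 ** (-delta / h).

    `half_life_days` is the number of days after which recall odds
    drop to 50 percent; Birdbrain's actual model may differ.
    """
    return 2.0 ** (-days_since_practice / half_life_days)
```

Under this model, an item practiced one half-life ago has a 50 percent recall probability, which is roughly when a review is most valuable; scheduling reviews around that point is the core idea behind the personalized lesson queue described above.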
