Artificial Intelligence

Protecting Privacy in Surveillance Video While Mining It for Data

A new technique may help defend privacy while permitting useful analysis of surveillance data

[Image: Security-camera footage showing people walking with trails of private internet data behind them. Credit: Jose-Luis Olivares/MIT]

Surveillance cameras have proliferated across the globe, raising privacy concerns that have only deepened as machine-learning tools enable automated video analysis on a massive scale. Now a new security system aims to defend privacy in a way that supports honest analysis of video footage while confounding malicious spying.

There are now “hundreds of millions of surveillance cameras out there across the world,” notes Frank Cangialosi, a computer scientist at MIT and lead author on a study of the system. In the past, these cameras were occasionally monitored manually, if at all, and largely used for security purposes. But steady advances in artificial intelligence have now made it possible for computers to analyze this video data en masse.

Automated analysis of surveillance footage has many potential applications: helping health officials measure the proportion of people wearing masks; letting transportation departments monitor the density and flow of vehicles, pedestrians, and bicycles to decide where to add sidewalks and bike lanes; and giving businesses better insight into shopping behavior to plan promotions. However, such mass surveillance risks intruding on privacy at an unprecedented scale.

“Video analytics is an exciting potential area, but I think our community also has this huge responsibility to think carefully about how it could be misused and put equal effort towards addressing that,” Cangialosi says.

Attempts to defend privacy against such technology often involve blurring faces or covering them with black boxes. Those methods can hinder useful analysis of the video while still failing to preserve anonymity.

“So, citizens aren’t going to feel protected, and analysts aren’t going to feel it’s useful enough for them,” Cangialosi says. “It doesn't satisfy anyone, which is why these approaches aren’t actually widely used in practice. And after thinking about it a bit, we realized that these are fundamental issues, so there’s this need for a totally different approach.”

Now, Cangialosi and his colleagues have developed a new system called Privid that lets analysts examine video for statistical data without revealing personally identifiable information.

“Privid might enable us to actually [make more productive use of] tons of footage from all of the cameras we already have around the world [and do so] in a safe way,” Cangialosi says. “They have tons of coverage and are very versatile, so I think there’s really a lot of potential.”

Privid works by first accepting code from an analyst containing a query, say, an automatic count of the number of people wearing masks in a video feed, or a measure of crowd density. The system then breaks the video footage into segments and runs the code on each chunk. Instead of reporting the results from each segment back to the analyst, Privid aggregates the data and adds some noise before returning the results. The aim is to let analysts with honest queries get the answers they want, while restricting access to the raw surveillance data that would let malicious actors learn too much.
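To make that flow concrete, here is a minimal Python sketch of a chunk-then-aggregate pipeline of the kind described above. It is illustrative only: the chunk length, the per-chunk clamp, the Laplace noise, and every parameter name are assumptions made for this example, not Privid's actual interface or the exact mechanism in the paper.

```python
import numpy as np

def privid_style_count(video_frames, per_chunk_query,
                       frames_per_chunk=300, max_per_chunk=5.0, epsilon=1.0):
    """Illustrative sketch of a chunk-then-aggregate query (hypothetical API,
    not Privid's actual interface)."""
    # 1. Split the footage into fixed-length chunks.
    chunks = [video_frames[i:i + frames_per_chunk]
              for i in range(0, len(video_frames), frames_per_chunk)]

    # 2. Run the analyst-supplied query on each chunk, clamping its output
    #    so no single chunk can shift the total by more than max_per_chunk.
    clamped = [min(max(per_chunk_query(chunk), 0.0), max_per_chunk)
               for chunk in chunks]

    # 3. Release only a noisy aggregate; per-chunk results stay hidden.
    noise = np.random.laplace(loc=0.0, scale=max_per_chunk / epsilon)
    return sum(clamped) + noise
```

In this sketch the analyst supplies per_chunk_query (any function that returns a number for a chunk of frames) and receives only the single noisy total, never the per-chunk values.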

For example, consider a video feed covering several city intersections: both an honest and a malicious query might claim to count the number of people who pass by each hour. The well-intentioned query from an urban-planning department really does count pedestrians in order to better plan crosswalks, while the malicious query actually aims to track a few specific people by looking for their faces.

If Privid executes both queries, the small amount of noise it adds barely affects the honest analyst's count of passersby. But because the malicious query is really trying to identify a few specific people, that same noise has a large, confounding effect on the attempt to misuse the data. Privid can also tell analysts how much error it adds to results, which honest analysts can account for in their research so that they can still detect valuable patterns and trends.
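A rough numerical illustration (the numbers below are invented for this example, not taken from the study) shows why the same noise affects the two queries so differently:

```python
import numpy as np

noise = np.random.laplace(0.0, 5.0)  # same noise scale applied to every query result

honest_count = 480   # e.g., pedestrians counted in an hour (illustrative value)
face_match = 1       # "did this specific person appear?" is essentially 0 or 1

print(honest_count + noise)  # still close to 480: a small relative error
print(face_match + noise)    # the 0-or-1 signal is drowned out by the noise
```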

Cangialosi stresses that “we are not encouraging surveillance.” With the idea of surveillance, he admits, “lots of negative things, understandably, immediately come to mind—the idea of being watched, Big Brother, and so on. But this is exactly what we want to prevent, full stop. Our fundamental idea of privacy is this idea that we should only be able to use cameras for things that don’t identify people. And there’s lots of examples of this that can benefit society, such as urban safety, public health, and so on.”

A common technical question Cangialosi gets is whether the privacy guarantee applies only to a single camera. “The short answer is no,” he says. “The exact implications are a bit detailed, but the high-level point is that no matter how many cameras’ image feeds are in the system, and no matter how many cameras an analyst aggregates across, an individual will still be protected and can’t be tracked across location and time.”

The researchers note that adding noise to the results may defend privacy, but it also makes the analyses imperfect. Still, across a variety of videos and queries, Privid returned answers that were between 70 and 99 percent as accurate as those of a comparable nonprivate system.

“Privid isn’t a panacea,” Cangialosi notes. “I think there are lots of use cases where privacy and utility aren’t really at odds, and so we can get a good balance of ensuring privacy without doing too much harm to accuracy or utility. Privid is great for these use cases.”

On the other hand, he cautions, “there are some cases where privacy and utility really are fundamentally at odds. In security-critical applications, like locating a missing person or a stolen car, the entire point is to identify an individual,” Cangialosi says. In such cases, the solution may not be a technical one, “but rather good policies.”

Cangialosi notes that while the scientists focused on the compromise between utility and privacy with Privid, they did not worry about computational efficiency. “An important next step is incorporating a lot of the optimizations the rest of the community has worked on towards making video analytics more efficient,” he says. “The challenge, of course, is doing it carefully, in such a way that we can still maintain the same formal privacy guarantees.”

Future research can also explore different types of video feeds, such as dash cams and videoconference calls, as well as audio and other data. “These data sources represent even more untapped potential for analytics, but they’re obviously in some very privacy-sensitive scenarios,” Cangialosi says. “I think it'll be really exciting to expand the set of domains where we can have computers learn some important information that can help society, while also making sure [that data can't be used to] harm anyone.”

The scientists detailed their findings on 4 April at the USENIX Symposium on Networked Systems Design and Implementation (NSDI) in Renton, Wash.
