“Algorithmic Destruction” Policy Defangs Dodgy AI

New regulatory tactic of deleting ill-gotten algorithms could have bite


The U.S. Federal Trade Commission has set its sights on tech companies, finding ways to thwart deceitful data practices.

On 4 March, the FTC issued a settlement order against WW International (the company previously known as Weight Watchers) and its subsidiary, Kurbo, with FTC chair Lina Khan stating that “Weight Watchers and Kurbo marketed weight management services for use by children as young as eight, and then illegally harvested their personal and sensitive health information.” The FTC required the companies to “delete their ill-gotten data, destroy any algorithms derived from it, and pay a penalty for their lawbreaking.”

An algorithm is a finite sequence of instructions and rules that a computer program follows to process data. In the case of AI, machine learning algorithms are trained on data to build models that can predict certain actions or make specific decisions.
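To make that concrete, the sketch below (built entirely on synthetic data, and not drawn from any company named in this story) shows the pipeline in miniature: a learning algorithm is fit to collected records and produces a model whose parameters are derived from that data.

```python
# Minimal illustration of "algorithm trained on data yields a model."
# All data here is synthetic; the features and labels are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical harvested records: 200 users, 3 numeric features each
# (say, age, weight, activity score), with a binary outcome label.
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Fitting the learning algorithm to the data produces the model --
# the revenue-generating "work product" a destruction order targets.
model = LogisticRegression().fit(X, y)

print(model.predict(X[:5]))  # predictions for five records
print(model.coef_)           # parameters derived from the collected data
```

Deleting the raw records after the fact leaves the fitted model untouched, which is exactly the gap that algorithmic destruction orders are meant to close.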

“When an algorithm is trained on private data, it would be simple to figure out some or all of that data from the algorithm. This means that just deleting the private data wouldn’t be an effective remedy and wouldn’t prevent future privacy harms,” says Kit Walsh, senior staff attorney at the Electronic Frontier Foundation. “So when you have an important interest like privacy and it’s necessary to delete the algorithm to address the harm, that’s when algorithmic destruction orders are on the firmest footing.”
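As a rough illustration of the leakage Walsh describes, the toy example below, again using only synthetic data, trains a deliberately overfit model. The gap between its near-perfect accuracy on training records and its lower accuracy on unseen ones is the kind of signal that lets someone holding only the model infer which records it was trained on, a simple membership-inference idea.

```python
# Toy sketch of training-data leakage from a model (synthetic data only).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

# Synthetic stand-in for sensitive user records, with noisy labels.
X = rng.normal(size=(400, 5))
y = (X.sum(axis=1) + rng.normal(scale=0.5, size=400) > 0).astype(int)
X_train, y_train = X[:200], y[:200]   # records the model was trained on
X_out, y_out = X[200:], y[200:]       # records it never saw

# An unrestricted decision tree effectively memorizes its training set.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

print("accuracy on training records:", model.score(X_train, y_train))  # ~1.0
print("accuracy on unseen records:  ", model.score(X_out, y_out))      # noticeably lower

# Because the model answers its own training records far more reliably,
# whether it classifies a given record perfectly hints at membership in
# the training set -- information that survives deletion of the raw data.
```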

Aside from curbing privacy harms, algorithmic destruction could hold organizations liable not only for how they gather data but also for how they process it. “It’s adding this twofold approach to holding companies accountable when they go about harvesting data through deceptive means and using that data to generate algorithms,” says Divya Ramjee, a senior fellow at American University’s Center for Security, Innovation, and New Technology and a fellow at Washington College of Law’s Tech, Law & Security Program.

Destroying algorithms could render software useless and negatively affect a company’s bottom line. “At the end of the day, companies are doing this work for money,” Ramjee says. “They’re collecting data and creating algorithms that are essentially a product being sold and generating more money for them. So when you have to destroy that algorithm, there’s a financial consequence for the company because that’s their work product generating revenue that they have to give up.”

The FTC is increasingly applying algorithmic destruction as a tool to keep tech firms in check. In a 2021 settlement, the commission said that Everalbum, a now-defunct cloud photo-storage company, “must obtain consumers’ express consent before using facial recognition technology on their photos and videos,” and required the company to “delete the photos and videos of Ever app users who deactivated their accounts and the models and algorithms it developed by using the photos and videos uploaded by its users.”

A similar directive was issued to Cambridge Analytica, ordering the consulting firm to delete or destroy the information it collected about Facebook users through an app, as well as “any information or work product, including any algorithms or equations, that originated, in whole or in part, from this Covered Information.”

“The algorithm itself often communicates private information and leads to repeated privacy violations when shared or used. So the justification for deleting it is comparable to deleting the data that the company unlawfully collected,” Walsh says. “It would undermine people’s privacy even further if companies could violate the law and essentially get away with taking people’s private information simply because they turned it into algorithms before getting caught. This [WW International and Kurbo] settlement is a good sign that regulators aren’t going to fall for that.”

And although directing companies to destroy the algorithms they developed using ill-acquired data may not completely prevent deceitful data practices, it’s a move in the right direction, Ramjee says.

“There are always companies that are going to do things deceptively, but this is a good tool for trying to put a pause on how they’re aggregating and using data,” says Ramjee. “It’s a first step to show that these companies can’t just run rampant, especially when you have these big companies with multimillion-dollar fines slapped on them. It acts as a deterring factor to show you can’t simply get away with this.”

Legislative bodies across the globe are recognizing the need to hold firms responsible for illegally collecting data and using it to develop or train algorithms. As a result, more regulations are likely to emerge to mitigate the issues that come with these practices. The European Union (E.U.) is already leading the way with its General Data Protection Regulation (GDPR), and it is also proposing an Artificial Intelligence Act. “The E.U. proposal does include a potential remedy of deletion or retraining,” Walsh says.

The FTC will likely continue investigating companies and ordering algorithmic destruction, but Ramjee believes a more comprehensive federal privacy law is crucial to stop deceitful companies in their tracks. “You can also have a presumably unbiased third party checking data and helping validate what’s going on in those platforms,” she says.

The onus now falls on companies to establish more ethical data practices. “As consumers, we’re now much more concerned about how our data is being used because we know these algorithms are working in ways that cater to us—whether good or bad,” says Ramjee. “Consumers are pushing for privacy and thirsting for transparency, so it would behoove companies to be up front about what and how data is being used in a way that’s digestible for the average consumer.”
