After WikiLeaks, a Recap of How the U.S. Government Discloses a Zero-Day


The latest WikiLeaks dump rattled the Internet this week with documents that appear to indicate the U.S. Central Intelligence Agency found a slew of software vulnerabilities that it potentially used to break into Apple and Android devices, and even turn Samsung smart TVs into secret microphones to eavesdrop on owners.  

Those accusations have raised questions among security researchers about whether the U.S. government is following through on its commitment to responsibly disclose new software vulnerabilities (also known as zero-days) that it discovers in consumer products. The WikiLeaks documents show that the CIA appears to have built secret software programs to exploit many of the vulnerabilities it found. Based on those records, the CIA possessed at least 14 exploits for Apple iOS and around two dozen for Android devices, plus others that targeted Microsoft Windows and Linux.

Whether or not the CIA should disclose software vulnerabilities—and how quickly it should make a disclosure—is a complicated issue known as the “equities” problem. On the one hand, disclosure allows companies to patch the flaws, which protects citizens from nefarious parties who find the same vulnerabilities. On the other hand, it means the CIA loses its ability to gather intelligence through those holes.

In the past, the U.S. government has agreed that disclosing most of the vulnerabilities it finds is in everyone’s best interest. Michael Daniel, who was cybersecurity coordinator for the White House under President Obama, spelled it out in a 2014 blog post: “Building up a huge stockpile of undisclosed vulnerabilities while leaving the Internet vulnerable and the American people unprotected would not be in our national security interest.”

However, the decision to disclose any specific vulnerability is often a difficult one, so there is a process in place through which agencies are supposed to work together to determine whether or not a vulnerability should be shared, and how quickly. It’s known as the Vulnerabilities Equities Process, or VEP.

In his blog post, Daniel described the government’s process for evaluating vulnerabilities, presumably the VEP, as “a deliberate process that is biased toward responsibly disclosing the vulnerability.” Much of what we know about how the VEP actually works comes from documents obtained in 2014 through a Freedom of Information Act request and subsequent lawsuit filed by the Electronic Frontier Foundation.

Based on those documents, the process works like this: Federal agencies report any vulnerability they find that is “both newly discovered and not previously known” to an Executive Secretariat. Then, an interagency Equities Review Board (whose composition remains classified) reviews each case and determines, by majority vote, whether or not the vulnerability will be disclosed.

Apparently, the ERB uses a set of criteria to make its determination, but those details remain classified. In his blog post, Daniel spelled out several points that he considers in this process, including whether the unpatched vulnerability poses “significant risk,” how likely it is that others have found it, and how badly the agency needs it in order to collect intelligence.

During its deliberations, the ERB discusses the case with representatives from any agency with an interest in the vulnerability in question. Agencies that are not happy with the ERB’s final decision may appeal it. The entire process is overseen by the Information Assurance Directorate of the National Security Agency (NSA).

Aside from that information, there’s little transparency about how this process works in reality. The ERB does not publish how many vulnerabilities the government finds and chooses to disclose versus those it decides to keep secret, or how long on average it knows about a vulnerability before disclosing it.

Admiral Michael Rogers, director of the NSA, said in 2014 that, “by orders of magnitude, the greatest number of vulnerabilities we find, we share.” But it’s impossible to know whether that’s true, or whether his comments reflect what happens in other agencies.

In 2016, Harvard researchers described the VEP and recommended changes to improve its transparency. They proposed an annual report that would provide high-level statistics on how many vulnerabilities were found and disclosed each year, and how long those vulnerabilities were held before disclosure, to prevent agencies from keeping them secret indefinitely. They also called for an executive order to formalize the VEP across agencies, and suggested that its oversight be transferred to the Department of Homeland Security.

Robert Cattanach, a partner with Dorsey & Whitney who specializes in cybersecurity and privacy issues, points out that policies such as the VEP are only as strong as their enforcement. Though agencies may have been instructed to follow this process, it’s difficult to know how faithfully they stick to it.

The VEP was established as part of a directive issued by former U.S. President George W. Bush and later reinforced under his successor, President Barack Obama. The new Trump administration may have an entirely different approach to handling this issue.

So how worried should you be about the vulnerabilities exposed in this latest leak? The CIA has refused to comment on the authenticity of the WikiLeaks documents, and it’s not clear from those documents whether any of the vulnerabilities described are still live. Since the files date from 2013 to 2016, it’s possible that the CIA has already disclosed these zero-days to manufacturers, or that the manufacturers behind these products have independently found and patched them.

At least in the case of Samsung TVs, researchers demonstrated a similar hack at security conferences as early as 2013. And Apple responded to the leak by saying that many of the vulnerabilities listed had already been fixed through software updates. For the time being, WikiLeaks has not released the code that would be required to actually build and deploy exploits for these vulnerabilities, though the organization claims to have that code in its possession.

Gail-Joon Ahn, director of the Center for Cybersecurity and Digital Forensics at Arizona State University, says he wasn’t the least bit surprised to learn that the CIA knows about many vulnerabilities and has developed programs to exploit them. But he is concerned about how these tools are controlled and deployed, and hopes the leak will prompt more public discussion on these topics.

“There's an assumption that [the CIA is] using this cyber weapon for a good purpose,” he says. “But there needs to be some debate about the security and use of these weapons.”

Tech Talk

IEEE Spectrum’s general technology blog, featuring news, analysis, and opinions about engineering, consumer electronics, and technology and society, from the editorial staff and freelance contributors.
