Does the NSA Really Need “Direct Access”?

What to call U.S. intelligence agencies’ activities comes down to semantics


Protesting the Program: Activists gathered in Washington, D.C., on June 14th to rally against U.S. government surveillance programs.

We’re now well into the second stage of the controversy surrounding the allegations that the NSA is conducting large-scale surveillance of U.S. citizens. Whistleblower/leaker/traitor (the exact term varying according to individual opinion) Edward Snowden is being scrutinized, as are the articles written by Glenn Greenwald for The Guardian newspaper.

That Snowden’s perceived reliability, or lack thereof, has become a major part of the story is an entirely predictable consequence of his decision to reveal his identity. Back in 2004, Dina Rasor, then working under the auspices of the National Whistleblower Center in Washington, D.C., told IEEE Spectrum that going public in this way was like “setting your hair on fire for one glorious minute.” Whistleblowers were well advised to remain anonymous so that the revelation “becomes the issue, and not you.” (As has been pointed out in several places, if we’d known that Deep Throat was a senior FBI official angry at being passed over for promotion, his accusations about Watergate might not have been taken so seriously.)

That the focus of the discussion has shifted to Greenwald’s reporting is also not surprising in light of that 2004 article. IEEE Fellow Stephen H. Unger, a former chairman of the IEEE Ethics Committee, cautioned against the dangers of hastiness, or of making the slightest factual error, when bringing any revelations to light: “Don't exaggerate at all… You could be 99 percent right, but if you make one little mistake, they'll focus on that to discredit you.”

The biggest substantive criticisms of Greenwald’s reporting so far have centered on his contention that companies like Google and Apple provided “direct access,” so that the NSA could come in and snoop around however it liked, grabbing information in real time if need be.

But does the NSA really need access to Google’s internal servers to run a system like PRISM? The U.S. intelligence community certainly has the technical ability to conduct significant eavesdropping programs on other nations’ communications systems. As for its domestic capabilities, back in 2006, another whistleblower, Mark Klein, alleged that the NSA had placed a room full of equipment (pdf) in a San Francisco AT&T facility for the express purpose of tapping Internet fiber-optic backbone traffic. In response, the Electronic Frontier Foundation filed a class-action lawsuit, which was ultimately dismissed because the U.S. Congress gave immunity to telecom companies cooperating with eavesdropping programs.

So, assuming that the NSA can already eavesdrop on our Internet traffic, does it really need access to Apple’s and Google’s server farms? After all, there’s nothing irreproducible about their systems: the rise of cloud computing in recent years means that these companies’ servers are virtual constructs in any case, running on fungible hardware. With enough storage space and computing power, it is technically possible to imagine shadow servers that emulate the relevant functions of a number of companies’ online services and are kept synchronized with data from Internet backbone taps at telcos. It might not be a perfect copy of what’s on the real servers, but such a system would still allow extensive historical searches in many cases. With such a system, “direct access” versus “intercepting traffic in transit” becomes a distinction without a difference.
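To make that idea concrete, here is a minimal, purely hypothetical sketch of such a shadow store. It assumes intercepted traffic has already been reassembled into simple (timestamp, service, account, payload) records; every class, field, and account name below is an illustrative invention, not a description of any actual agency system.

```python
# Hypothetical sketch only: a "shadow store" that indexes intercepted
# traffic so that historical searches never touch the real provider's servers.
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class InterceptedMessage:
    timestamp: float   # when the message crossed the tapped link
    service: str       # e.g. a webmail provider (name is illustrative)
    account: str       # account identifier observed in the traffic
    payload: str       # reassembled message body, assumed unencrypted


class ShadowStore:
    """Searchable shadow copy of traffic captured at a backbone tap."""

    def __init__(self) -> None:
        self._by_account = defaultdict(list)

    def ingest(self, msg: InterceptedMessage) -> None:
        # Keep the shadow copy synchronized as traffic is reassembled.
        self._by_account[(msg.service, msg.account)].append(msg)

    def history(self, service: str, account: str, since: float = 0.0) -> list:
        # An after-the-fact query, answered entirely from the local copy.
        return [m for m in self._by_account[(service, account)]
                if m.timestamp >= since]


if __name__ == "__main__":
    store = ShadowStore()
    store.ingest(InterceptedMessage(1.0, "webmail", "alice@example.com", "meeting at noon"))
    store.ingest(InterceptedMessage(2.0, "webmail", "alice@example.com", "change of plans"))
    for m in store.history("webmail", "alice@example.com"):
        print(m.timestamp, m.payload)
```

The point of the sketch is simply that, once the traffic has been captured, a historical query looks the same to an analyst whether it runs against the provider’s own machines or against a copy rebuilt from taps.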

True, many cloud-based services, such as Gmail, do encrypt the traffic between users and their servers, but many inter-service communications are not encrypted. And if Chinese hackers have been able to penetrate U.S. companies, to at least some degree, what chance would these firms really have against a determined U.S. intelligence agency operating on its own soil?

Whether or not such a scenario actually reflects reality may therefore be more a question of legal frameworks and restrictions than of technical limitations.

Photo: Win McNamee/Getty Images
