Reckoning With Tech Before It Becomes Invisible

Facial recognition, route-finding software, and more have the potential for dire consequences

Illustration: Edmon de Haro

Ten years ago, venture capitalist Marc Andreessen proclaimed that software was eating the world. Today, the hottest features in the latest phones are software updates or AI improvements, not faster chips or new form factors. Technology is becoming more mundane and, ultimately, invisible.

This probably doesn’t bother you. But even as technologies fade into the background of our lives, they still play a pervasive role. We still need to examine how technologies might be affecting us, even if—especially if—they’re commonplace. 

For example, Waze’s navigation software has been influencing drivers’ behavior in the real world for years, algorithmically routing too many cars onto residential streets and clogging them. The devices and apps from home-security company Ring have turned neighborhoods into panopticons in which your next-door neighbor can become the subject of a notification. Connected medical devices can let an insurance company know if a patient isn’t using the device appropriately, allowing the insurer to stop covering the gadget.

Using technology to create or reinforce social norms might seem benign or even beneficial, but it doesn’t hurt to ask which norms the technology is enforcing. Likewise, technologies that promise to save time might be saving time for some at the expense of others. Most important, how do we know if a new technology is serving a greater good or policy goal, or merely boosting a company’s profit margins? Underneath concerns about Amazon and Facebook and Google is an understanding that big tech is everywhere, and we have no idea how to make it work for society’s goals rather than a company’s or an individual’s.

A big part of the problem is that we haven’t even established what those societal benefits should be. Let’s take the idea of legislating AI, or even computer-mediated decisions in general. Should we declare such technology illegal on its face? Many municipalities in the United States are trying to ban law enforcement from using facial-recognition software to identify individuals. Then again, the FBI has used it to find people who participated in the 6 January insurrection at the U.S. Capitol.

To complicate the issue further, it’s well established that facial-recognition systems (and algorithms in general) are biased against Black faces and women’s faces. Personally, I don’t think the solution is to ban facial recognition outright. The European Union, for example, has proposed legislation that would regularly audit the outcomes of facial-recognition algorithms to ensure that policy goals are being met. There’s no reason the United States and the rest of the world can’t do the same.
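
What would such an audit look like in practice? Here is a minimal sketch, in Python, with entirely hypothetical data and a made-up disparity threshold: an auditor computes the system’s false-match rate separately for each demographic group and flags any group whose rate drifts too far from the best-performing one.

# A minimal sketch of an outcome audit for a face-matching system.
# All data, group names, and thresholds here are hypothetical illustrations.
records = [
    # (demographic_group, truly_a_match, system_said_match)
    ("group_a", False, True),
    ("group_a", True,  True),
    ("group_b", False, False),
    ("group_b", False, True),
    # ...a real audit would use thousands of labeled trials per group
]

MAX_DISPARITY = 1.5  # hypothetical policy line: no group's false-match rate
                     # may exceed 1.5x that of the best-performing group

def false_match_rate(rows):
    # Fraction of true non-matches the system wrongly declared a match.
    non_matches = [r for r in rows if not r[1]]
    if not non_matches:
        return 0.0
    return sum(1 for r in non_matches if r[2]) / len(non_matches)

groups = sorted({g for g, _, _ in records})
rates = {g: false_match_rate([r for r in records if r[0] == g]) for g in groups}
best = min(rates.values())
for group in groups:
    flagged = best > 0 and rates[group] / best > MAX_DISPARITY
    print(f"{group}: false-match rate {rates[group]:.0%}"
          + ("  <-- exceeds disparity threshold" if flagged else ""))

The point is less the arithmetic than the plumbing: an agency office with access to labeled test data could run this kind of check on a schedule and publish the results.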

And while some in the technology industry have called for the United States to create a separate regulatory body to govern AI, I think the country and its policymakers are better served by adding offices and experts within existing agencies to audit algorithms and determine whether they help meet each agency’s goals. For example, the U.S. Justice Department could monitor, or even be charged with approving, the programs used to decide whom to release on bail, keeping an eye out for potential bias.

The United States already has a model of how this might work. The Federal Communications Commission relies on its Office of Engineering and Technology to help regulate the airwaves. Crucially, the office hires experts in the field rather than political appointees. The government can build the same infrastructure into other agencies that can handle scientific and technological inquiry on demand. Doing so would make the invisible visible again—and then we could all see and control the results of our technology.

This article appears in the July 2021 print issue as “Reckoning With Tech.”

Will AI Steal Submarines’ Stealth?

Better detection will make the oceans transparent—and perhaps doom mutually assured destruction

The Virginia-class fast attack submarine USS Virginia cruises through the Mediterranean in 2010. Back then, it could effectively disappear just by diving.

U.S. Navy

Submarines are valued primarily for their ability to hide. The assurance that submarines would likely survive the first missile strike in a nuclear war and thus be able to respond by launching missiles in a second strike is key to the strategy of deterrence known as mutually assured destruction. Any new technology that might render the oceans effectively transparent, making it trivial to spot lurking submarines, could thus undermine the peace of the world. For nearly a century, naval engineers have striven to develop ever-faster, ever-quieter submarines. But they have worked just as hard at advancing a wide array of radar, sonar, and other technologies designed to detect, target, and eliminate enemy submarines.

The balance seemed to turn with the emergence of nuclear-powered submarines in the early 1960s. In a 2015 study for the Center for Strategic and Budgetary Assessments, Bryan Clark, a naval specialist now at the Hudson Institute, noted that the ability of these boats to remain submerged for long periods made them “nearly impossible to find with radar and active sonar.” But even these stealthy submarines produce subtle, very-low-frequency noises that can be picked up from far away by networks of hydrophone arrays mounted on the seafloor.
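
Why does a faint, steady tone stand out at all? A rough illustration, using synthetic numbers rather than real acoustic data: averaging the power spectra of many short recording segments beats random broadband noise down toward a flat floor, while a persistent narrowband machinery line stays put, so a tone well below the noise floor becomes the tallest spectral peak. All parameters below are invented for the sketch.

import numpy as np

# Synthetic illustration only; frequencies and levels are not real acoustic data.
fs = 1000.0        # sample rate, Hz
duration = 60.0    # seconds of simulated hydrophone recording
tone_hz = 55.0     # hypothetical narrowband machinery tone
tone_amp = 0.1     # tone sits roughly 23 dB below the broadband noise power

t = np.arange(0.0, duration, 1.0 / fs)
rng = np.random.default_rng(0)
recording = tone_amp * np.sin(2 * np.pi * tone_hz * t) + rng.normal(0.0, 1.0, t.size)

# Average the power spectra of many short windowed segments (Welch-style).
# Random noise averages toward a flat floor; the steady tone does not.
seg_len = 4096
n_segs = t.size // seg_len
window = np.hanning(seg_len)
psd = np.zeros(seg_len // 2 + 1)
for i in range(n_segs):
    seg = recording[i * seg_len:(i + 1) * seg_len] * window
    psd += np.abs(np.fft.rfft(seg)) ** 2 / n_segs

freqs = np.fft.rfftfreq(seg_len, 1.0 / fs)
print(f"Strongest spectral line: {freqs[np.argmax(psd)]:.2f} Hz")  # ~55 Hz

In practice, larger hydrophone arrays and longer integration times push that detection floor lower still.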
