For centuries, we humans have lacked the all-knowing, all-seeing mechanisms to credibly predict and prevent bad actions by others. Now these very powers of preemption are perhaps within our grasp, thanks to a confluence of technologies.
In the foreseeable future, governments, and perhaps some for-profit corporations and civil-society groups, will design, construct, and deploy surveillance systems that aim to predict and prevent bad actions—and to identify, track, and neutralize people who commit them.
And when contemplating these systems, let’s broadly agree that we should prevent the slaughter of children at school and the abduction, rape, and imprisonment of women. And let’s also agree that we should thwart lethal attacks against lawful government.
Of late, the U.S. government gets most of the attention in this arena, and for good reason. The National Security Agency, through its vast capacity to track virtually every phone call, e-mail, and text message, promises new forms of preemption through a system security experts call persistent surveillance.
The Boston Marathon bombing, in April, reinforced the impression that guaranteed prevention against unwanted harm is elusive, if not impossible. Yet the mere chance of stopping the next mass shooting or terror attack persuades many people of the benefits of creating a high-tech version of the omniscient surveillance construct that, in 1787, the British philosopher Jeremy Bentham conceived as a panopticon: a prison with a central viewing station for watching all the inmates at once.
Some activists complain about the potential of such a system to violate basic freedoms, including the right to privacy. But others will be seduced by the lure of techno fixes. For example, how could anyone object to a digital net that protects a school from abusive predators?
Ad hoc surveillance will inevitably proliferate. Dropcam and other cheap surveillance products, already popular among the tech-savvy, will spread widely. DIY and vigilante panopticons will complicate matters. Imagine someone like George Zimmerman, the Florida neighborhood watchman, equipped not with a gun but with a digital surveillance net, allowing him to track pretty much anything—on his smartphone.
With data multiplying exponentially and technology inexorably advancing, the question is not whether all-encompassing surveillance systems will be deployed. The question is how, when, and how many.
In the absence of settled laws and norms, the role of engineers looms large. They will shoulder much of the burden of designing systems in ways that limit the damage to innocents while maximizing the pressures brought to bear on bad guys.
But where do the responsibilities of engineers begin and end?
It is too early to answer conclusively, but engineers would do well to keep a few fundamental principles in mind:
Keep humans in the loop, but insist they follow the “rules of the road.” Compiling and analyzing data can be done by machines. But it would be best to design these surveillance systems so that a human reviews and ponders the data before any irreversible actions are taken. If citizens want to spy on one another, as they inevitably will, impose binding rules on how they do so.
Design self-correcting systems that eject tainted or wrong information fast and inexpensively.

Create a professional ethos and explicit standards of behavior for engineers, code writers, and designers who contribute significantly to the creation of panopticon-like systems.
Delete the old stuff routinely. Systems should mainly contain real-time data. They should not become archives tracing the lives of innocents.
Responsible engineering is no guarantee that panopticons will not come to control us. But it can be part of getting this brave new world right.
About the Author
G. Pascal Zachary is the author of Endless Frontier: Vannevar Bush, Engineer of the American Century (Free Press, 1997). He teaches at Arizona State University.