When I started my career, I was sometimes reluctant to confess that I was an engineer. But I became proud of my profession. I thought of all that we had accomplished, and I would say that we had changed the world. We had created the Internet, cellphones, GPS, lasers, computers, and so much else that was an integral part of modern life.
Of course, when I and other engineers said that we had changed the world, the implication was that we had made it better. Now that claim has been called into question. Every day, it seems, there are stories in the media saying that tech has gone out of control and is causing harm: Privacy has been lost, the cellphone is dangerously addictive, spam and scams are omnipresent, security is weak, conspiracies and fake news abound, powerful monopolies have evolved, jobs will be lost to AI and robotics, and so forth.
Growing numbers of advocates are calling for more control over the evolution of technology. Ash Carter, former United States secretary of defense, wrote an article in The Atlantic titled “America Needs to Align Progress in Tech With a Public Purpose.” There are conferences and articles on “ethical AI,” and the European Union has come up with a set of requirements for ethical AI. Nor has the IEEE been idle, recently releasing the first edition of Ethically Aligned Design, which lays out principles for the design and operation of autonomous and intelligent systems. These include respecting human rights, going beyond efficiency or profit in judging success, and incorporating mechanisms for accountability and transparency.
The very existence of these efforts to establish ethical engineering implies that what tech has done has been unethical, or at least capable of becoming unethical.
No one can argue in favor of unethical engineering, but I do have some concern about who decides what is ethical and how those decisions will be used to control technological evolution.
I think of engineers as tool builders. Historically, tools can be used for both good and ill. Fritz Haber was awarded the Nobel Prize in 1918 for developing a process that synthesized ammonia from atmospheric nitrogen. Two principal materials flowed from the subsequent abundance of fixed nitrogen: fertilizer for crops to feed the world and explosives for the world's armies. So it is with most technological developments, including the Internet.
Even in the early days of the Internet, we recognized that it made spreading conspiracy theories and propaganda easy. We worried about the unfathomable scale of the Net, which was sure to include malevolent participants, and we were concerned about inevitable nationalistic urges to Balkanize it. Then, as the Internet evolved, there were unforeseen emergent developments and unintended consequences.
Historically, technology has followed its own path. It is as if technology's whole future evolution has already been written, and our only job as engineers is to uncover it, page by page. Although I am skeptical that we have the vision, wisdom, and power to control that evolution, I recognize that there will be incentives to incorporate social engineering into the fabric of our designs. Much of our research funding today comes from governments, and it is likely to be directed toward chosen social goals. Moreover, the market, which provides the larger share of support for product development, will also tend to reward certain social considerations, though these may well differ from those of governments.
The good news for engineers is that new classes of systems-design problems will emerge from all this, and having tough problems is good for engineers.