Let’s Shape AI Before AI Shapes Us

It’s time to have a global conversation about how AI should be developed

Artificial intelligence is like a beautiful suitor who repeatedly brings his admirer to the edge of consummation only to vanish, dashing hopes and leaving an unrequited lover to wonder what might have been.

Once again, big shots are hearing the siren song of AI and warning of hazards ahead. Visionary entrepreneur Elon Musk thinks that AI could be more dangerous than nuclear weapons. Physicist Stephen Hawking warns that AI “could spell the end of the human race.” Even Bill Gates, who usually obsesses over such prosaic tasks as eliminating malaria, advises careful management of digital forms of “super intelligence.”

Will today’s outsized fears of AI become fodder for tomorrow’s computer comedy?

Past AI scares now seem silly in hindsight. In the 1950s, building on excitement over the advent of digital computers, scientists foresaw machines that would instantly translate Russian to English. Rather than the predicted 5 years, machine translation took more than 50. “Expert systems” have experienced a similarly long gestation, and even now these programs, built around knowledge gleaned from human experts, deliver little. Meanwhile, HAL, the Terminator, Ava, and other computer-generated rivals remain the stuff of Hollywood.

In recent years, some claims for AI have finally been realized. Computers can now pick faces out of a crowd and handle customer service calls by simulating real conversation. Driverless cars and package-delivering drones promise to revolutionize the movement of things and people. Bombs that select their own targets and robots that kill are within reach. Daily life already seems impossible without digital devices that record, alert, and advise their owners on actions and plans.

The beautiful suitor is back, more fetching than ever. Now the human embrace of robots is closer, and probably inevitable.

Because betrayal is central to romance, humans, jealous of their supremacy in the world, worry about losing it to AI. Digital minds may emulate, then resent, and finally attack their human creators.

From this dark space, thoughts of existential doom arise. An evil genius might conquer the world with a malevolent AI army. Software agents could knock out essential systems from which a stricken society could not recover. To Daniel H. Wilson, author of How to Survive a Robot Uprising, humans need not wait for the first AI catastrophe before installing a “steel-reinforced panic room” where they can hide from disobedient digital servants.

Dark fantasies, however, distract attention from more urgent questions. How will AI affect employment, especially higher-paying work? When will robot writers and artists alter the way humans consume creative content? Who will be held accountable for accidents when humans are no longer in the decision or action loop?

Instead of “wolf” criers of the Musk sort, humans need a serious discussion about new norms and practices that will shape and govern AI. Here are a few suggestions on how to direct the global conversation in ways fruitful rather than fearful:

  • Resist the precautionary principle: Bans rarely work; careful testing and technical revision do. Civil society needs safe spaces in which to conduct experiments with synthetic intelligences. Governments should encourage controlled tests by private actors on the condition that data and analyses be widely shared.
  • Engineer in equity and diversity: The disturbing truth is that the digital frontier is dominated by men living in North America, Europe, Japan, and China. Adventures in AI ought to reflect the aspirations of women as much as men, and the cultures and values of the global South as well as the North.
  • Help the losers: The spread of AI will hurt some, mostly by reducing the demand for human labor. Responses should include helping people to use their time in useful and appealing ways. Governments might also consider giving every human, as a legal right, dominion over a set number of robots. Such policies would at least create a measure of equity, because surely wealthy folks will invest in assembling their own armies of bots.

G. Pascal Zachary is a professor of practice at Arizona State University and author of Endless Frontier: Vannevar Bush, Engineer of the American Century (MIT Press, 1999).