‘AI Pause’ Open Letter Stokes Fear and Controversy

IEEE signatories say they worry about ultrasmart, amoral systems without guidance



The recent call for a six-month “AI pause”—in the form of an online letter demanding a temporary artificial intelligence moratorium—has elicited concern among IEEE members and the larger technology world. The Institute contacted some of the members who signed the open letter, which was published online on 29 March. The signatories expressed a range of fears and apprehensions, including concerns about the rampant growth of AI large language models (LLMs) and about unchecked AI media hype.

The open letter, titled “Pause Giant AI Experiments,” was organized by the nonprofit Future of Life Institute and had been signed by 27,565 people as of 8 May. It calls for a pause in the training of “all AI systems more powerful than GPT-4.”

It’s the latest in a host of recent “AI pause” proposals, including a suggestion by Google’s François Chollet of a six-month “moratorium on people overreacting to LLMs” in either direction.

In the news media, the open letter has inspired straight reportage, criticism that it does not go far enough (“shut it all down,” Eliezer Yudkowsky wrote in Time magazine), and criticism that it is both a mess and an alarmist distraction from the real AI challenges ahead.

IEEE members have expressed a similar diversity of opinions.

“AI can be manipulated by a programmer to achieve objectives contrary to moral, ethical, and political standards of a healthy society,” says IEEE Fellow Duncan Steel, a professor of electrical engineering, computer science, and physics at the University of Michigan, in Ann Arbor. “I would like to see an unbiased group without personal or commercial agendas create a set of standards that has to be followed by all users and providers of AI.”

IEEE Senior Life Member Stephen Deiss—a retired neuromorphic engineer from the University of California, San Diego—says he signed the letter because the AI industry is “unfettered and unregulated.”

“This technology is as important as the coming of electricity or the Net,” Deiss says. “There are too many ways these systems could be abused. They are being freely distributed, and there is no review or regulation in place to prevent harm.”

Eleanor “Nell” Watson, an AI ethicist who has taught IEEE courses on the subject, says the open letter raises awareness of near-term concerns such as AI systems cloning voices and automating conversations, which she says present a “serious threat to social trust and well-being.”

Although Watson says she’s glad the open letter has sparked debate, she confesses “to having some doubts about the actionability of a moratorium, as less scrupulous actors are especially unlikely to heed it.”


IEEE Fellow Peter Stone, a computer science professor at the University of Texas at Austin, says some of the biggest threats posed by LLMs and similar big-AI systems remain unknown.

“We are still seeing new, creative, unforeseen uses—and possible misuses—of existing models,” Stone says.

“My biggest concern is that the letter will be perceived as calling for more than it is,” he adds. “I decided to sign it and hope for an opportunity to explain a more nuanced view than is expressed in the letter.”

“I would have written it differently,” he says of the letter. “But on balance I think it would be a net positive to let the dust settle a bit on the current LLM versions before developing their successors.”

IEEE Spectrum has extensively covered one of the Future of Life Institute’s previous campaigns, which urged a ban on “killer robots.” The outlines of that debate, which began with a 2016 open letter, parallel the criticism now being leveled at the “AI pause” campaign: that the field faces real problems and challenges that, in both cases, are at best poorly served by sensationalism.

One outspoken AI critic, Timnit Gebru of the Distributed AI Research Institute, is similarly critical of the open letter. She describes the fear promoted by the “AI pause” campaign as stemming from what she calls “longtermism”: locating AI’s threats in some futuristic, dystopian sci-fi scenario rather than in the present day, where AI’s bias-amplification and power-concentration problems are already well known.

IEEE Member Jorge E. Higuera, a senior systems engineer at Circontrol in Barcelona, says he signed the open letter because “it can be difficult to regulate superintelligent AI, particularly if it is developed by authoritarian states, shadowy private companies, or unscrupulous individuals.”

IEEE Fellow Grady Booch, chief scientist for software engineering at IBM, signed the letter, although in his discussion with The Institute he also cited Gebru’s work and her reservations about AI’s pitfalls.

“Generative models are unreliable narrators,” Booch says. “The problems with large language models are many: There are legitimate concerns regarding their use of information without consent; they have demonstrable racial and sexual biases; they generate misinformation at scale; they do not understand but only offer the illusion of understanding, particularly for domains on which they are well-trained with a corpus that includes statements of understanding.

“These models are being unleashed into the wild by corporations who offer no transparency as to their corpus, their architecture, their guardrails, or the policies for handling data from users. My experience and my professional ethics tell me I must take a stand, and signing the letter is one of those stands.”

Please share your thoughts in the comments section below.

The Conversation (8)
David Aswad, 10 May 2023

Stupid article, stupid authors, stupid letter. Humans have been failing to control sociopaths and psychopaths for millennia. Engineers are particularly susceptible to failing - JMO - since so many are logic bound, and these 'paths' work outside logic. A psychopath or sociopath plus AI will just be more aggressively monstrous. If you liked (and millions do/did) Hitler, Stalin, Putin, and Trump, an AI app just means any low-capability 'path' can now equal their capabilities, and a really capable 'path' will be able to conquer the world. You may begin worshipping - now! That was not a suggestion.

James Isaak, 9 May 2023

Things are changing rapidly. AI in 2023 is a different beast, and 2024 will be different again. In all cases it is a double-edged sword, able to serve good or evil, with increasing power in both directions. Additional thoughts, and the seed for related seminars, are at https://www.jimisaak.com/Home/ai-and-reality. We (IEEE technologists) need to join others to track and respond to the opportunities and threats as they emerge.

W George Mckee, 9 May 2023

If you believe that "two heads are smarter than one," then we already have ultrasmart, amoral entities loose in the world. They're called multinational corporations. So far, leading paperclip manufacturer ACCO Brands has failed to accomplish its legally mandated goal of creating a fearsome "paperclip apocalypse."

We've had the ability to trigger terrifying extinction-level events for 50 years, ever since the creation of the nuclear-armed ICBM. Not to mention the civilization-endangering threats of climate change and ecological collapse. AI fearmongers need to stop trying to jump to the front of the line.