The AI Apocalypse: A Scorecard
How worried are top AI experts about the threat posed by large language models like GPT-4?
What should we make of OpenAI’s GPT-4, anyway? Is the large language model a major step on the way to an artificial general intelligence (AGI)—the insider’s term for an AI system with a flexible human-level intellect? And if we do create an AGI, might it be so different from human intelligence that it doesn’t see the point of keeping Homo sapiens around?
If you query the world’s best minds on basic questions like these, you won’t get anything like a consensus. Consider the question of GPT-4’s implications for the creation of an AGI. Among AI specialists, convictions range from Eliezer Yudkowsky’s view that GPT-4 is a clear sign of the imminence of AGI, to Rodney Brooks’s assertion that we’re absolutely no closer to an AGI than we were 30 years ago.
On the topic of the potential of GPT-4 and its successors to wreak civilizational havoc, there’s similar disunity. One of the earliest doomsayers was Nick Bostrom; long before GPT-4, he argued that once an AGI far exceeds our capabilities, it will likely find ways to escape the digital world and methodically destroy human civilization. On the other end are people like Yann LeCun, who reject such scenarios as sci-fi twaddle.
In between are researchers who worry about the abilities of GPT-4 and future generative AI systems to cause major disruptions in employment, to exacerbate the biases in today’s society, and to generate propaganda, misinformation, and deepfakes on a massive scale. Worrisome? Yes, extremely so. Apocalyptic? No.
Many worried AI experts signed an open letter in March asking all AI labs to immediately pause “giant AI experiments” for six months. While the letter didn’t succeed in pausing anything, it did catch the attention of the general public, and suddenly made AI safety a water-cooler conversation. Then, at the end of May, an overlapping set of experts—academics and executives—signed a one-sentence statement urging the world to take seriously the risk of “extinction from AI.”
Below, we’ve put together a kind of scorecard. IEEE Spectrum has distilled the published thoughts and pronouncements of 22 AI luminaries on large language models, the likelihood of an AGI, and the risk of civilizational havoc. We scoured news articles, social media feeds, and books to find public statements by these experts, then used our best judgment to summarize their beliefs and to assign them yes/no/maybe positions below. If you’re one of the luminaries and you’re annoyed because we got something wrong about your perspective, please let us know. We’ll fix it.
And if we’ve left out your favorite AI pundit, our apologies. Let us know in the comments section below whom we should have included, and why. And feel free to add your own pronouncements, too.
This article appears in the August 2023 print issue as “The AI Apocalypse Matrix.”
- 10 Graphs That Sum Up the State of AI in 2023
- Superintelligent AI May Be Impossible to Control; That’s the Good News
- GPT-4, AGI, and the Hunt for Superintelligence
- Members Advocate for AI Regulations During Visit to U.S. Congress
- Why AI Should Move Slow and Fix Things
- Weighing the Prophecies of AI Doom