The AI Apocalypse: A Scorecard

How worried are top AI experts about the threat posed by large language models like GPT-4?


What should we make of OpenAI’s GPT-4, anyway? Is the large language model a major step on the way to an artificial general intelligence (AGI)—the insider’s term for an AI system with a flexible human-level intellect? And if we do create an AGI, might it be so different from human intelligence that it doesn’t see the point of keeping Homo sapiens around?

If you query the world’s best minds on basic questions like these, you won’t get anything like a consensus. Consider the question of GPT-4’s implications for the creation of an AGI. Among AI specialists, convictions range from Eliezer Yudkowsky’s view that GPT-4 is a clear sign of the imminence of AGI, to Rodney Brooks’s assertion that we’re absolutely no closer to an AGI than we were 30 years ago.

On the topic of the potential of GPT-4 and its successors to wreak civilizational havoc, there’s similar disunity. One of the earliest doomsayers was Nick Bostrom; long before GPT-4, he argued that once an AGI far exceeds our capabilities, it will likely find ways to escape the digital world and methodically destroy human civilization. On the other end are people like Yann LeCun, who reject such scenarios as sci-fi twaddle.


In between are researchers who worry about the abilities of GPT-4 and future instances of generative AI to cause major disruptions in employment, to exacerbate the biases in today’s society, and to generate propaganda, misinformation, and deep fakery on a massive scale. Worrisome? Yes, extremely so. Apocalyptic? No.

Many worried AI experts signed an open letter in March asking all AI labs to immediately pause “giant AI experiments” for six months. While the letter didn’t succeed in pausing anything, it did catch the attention of the general public, and suddenly made AI safety a water-cooler conversation. Then, at the end of May, an overlapping set of experts—academics and executives—signed a one-sentence statement urging the world to take seriously the risk of “extinction from AI.”

Below, we’ve put together a kind of scorecard. IEEE Spectrum has distilled the published thoughts and pronouncements of 22 AI luminaries on large language models, the likelihood of an AGI, and the risk of civilizational havoc. We scoured news articles, social media feeds, and books to find public statements by these experts, then used our best judgment to summarize their beliefs and to assign them yes/no/maybe positions below. If you’re one of the luminaries and you’re annoyed because we got something wrong about your perspective, please let us know. We’ll fix it.

And if we’ve left out your favorite AI pundit, our apologies. Let us know in the comments section below whom we should have included, and why. And feel free to add your own pronouncements, too.



This article appears in the August 2023 print issue as “The AI Apocalypse Matrix.”

The Conversation (18)
Milton Boden, 29 Jun 2023

My concern is that AI can generate content so rapidly that it could flood the Internet, bogging it down like a massive denial-of-service attack. This could happen if competing groups try to outdo each other. It would disrupt logistics, finance, and basic communication, and overload data-center resources.

mike archbold, 22 Jun 2023

It would be good to survey the wider community, including developers and users, rather than rely on the (sometimes self-declared) "AI experts" who spend hours self-promoting and flaming up social media.

Angelo Luvison, 11 Sep 2023

I would add to the list, for example, Stuart Russell, Peter Norvig, and Nello Cristianini, well-known experts in the field.