10 Graphs That Sum Up the State of AI in 2023

The AI Index tracks breakthroughs, GPT training costs, misuse, funding, and more

The Stanford Institute for Human-Centered Artificial Intelligence (HAI) has assembled a year’s worth of AI data, providing a comprehensive picture of today’s AI world, as it has done annually for six years. And I do mean comprehensive: this year’s report came in at 302 pages. That’s a nearly 60 percent jump from the 2022 report, thanks in large part to the 2022 boom in generative AI demanding attention and to an increasing effort to gather data on AI and ethics.

Those of you as eager as I was to pore over the entire 2023 Artificial Intelligence Index Report can dive in here. But for a snapshot of the entire set of findings, below are 10 charts capturing essential trends in AI today.

Large language models don’t come cheap

While the power of large language models, like ChatGPT, has increased dramatically, so has the price of training them. And of all machine-learning systems, language models are sucking up the most computing resources.


Carbon costs are also high

While it’s not easy to estimate carbon emissions of an AI system, the AI Index team gave it their best shot, considering the number of parameters in a model, the energy efficiency of data centers, and the type of power generation used to deliver electricity. It concluded that a training run for even the most efficient of the four models considered, BLOOM, emitted more carbon than the average U.S. resident uses in a year.
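
The report doesn’t publish the formula behind these estimates, but the approach it describes boils down to multiplying the energy a training run consumes by the carbon intensity of the electricity powering it, with a multiplier for data-center overhead. Here is a minimal sketch of that arithmetic in Python; every input figure below is an illustrative placeholder, not a number from the AI Index.

```python
# Back-of-the-envelope carbon estimate for a single training run.
# Every constant below is a hypothetical placeholder, not a figure
# from the AI Index Report.

def training_emissions_kg(accelerator_hours: float,
                          avg_power_kw: float,
                          pue: float,
                          grid_kg_co2_per_kwh: float) -> float:
    """Estimate CO2 emissions, in kilograms, for one training run.

    accelerator_hours:    total GPU/TPU-hours the run consumed
    avg_power_kw:         average draw per accelerator, in kilowatts
    pue:                  data-center power usage effectiveness (overhead)
    grid_kg_co2_per_kwh:  carbon intensity of the local electricity grid
    """
    energy_kwh = accelerator_hours * avg_power_kw * pue
    return energy_kwh * grid_kg_co2_per_kwh

# Illustrative run: 1 million GPU-hours at 0.4 kW each, a PUE of 1.2,
# on a relatively clean grid emitting 0.06 kg of CO2 per kWh.
print(f"{training_emissions_kg(1_000_000, 0.4, 1.2, 0.06):,.0f} kg CO2")
# Prints "28,800 kg CO2", i.e. tens of tonnes, which is the scale at
# which per-model training emissions are typically discussed.
```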


It’s the government’s turn to step up

For the first time in a decade, private AI investment decreased, falling by about a third from 2021, to $189.6 billion. The reason isn’t clear. Says Ray Perrault, codirector of the AI Index Steering Committee: “We do know that there was an overall drop in private investment in startups in 2022; we didn’t get to answer the question of whether AI startup investment shrunk more or less than the rest.”

On the bright side for AI research, government spending is up, according to the report, at least in the United States. The AI Index Report indicated that nondefense U.S. government agencies allocated $1.7 billion to AI R&D in 2022, up 13.1 percent from 2021. And the U.S. Department of Defense requested $1.1 billion for nonclassified AI-specific research for fiscal year 2023, up 26.4 percent from 2022 funding. These numbers were hard to come by, Perrault said. The AI Index team took several different measurement approaches that produced roughly similar numbers, but it was unable to gather comparable data from around the world.

The increase, Perrault indicated, has a couple of potential sources. “There was a national security committee looking at AI that released its report in 2021, recommending about a billion in increased funding for AI proper and another billion for high-performance computing,” he said. “It looks like that’s having some effect. And it used to be that AI was being funded out of a small number of agencies, like DARPA, NSF, and some DOD groups, but now I suspect that, given that AI is seen as being relevant to problems in a broader range of interests, like biology, it’s spreading the areas in which funding is happening.”


Industry, not academia, is drawing new AI Ph.D.’s

In 2021 (the latest year for which numbers are available), 65.4 percent of all new AI Ph.D.’s went to industry, compared with 28.2 percent who took jobs in academia, according to the AI Index Report. (The rest, not shown here, were self-employed, unemployed, or reported “other.”) This split has steadily widened since 2011, when the percentages were nearly equal.


Industry is also the place for new machine learning models

With greater numbers of Ph.D.’s, it’s no surprise that industry has raced ahead of academia in producing new machine learning models. Until 2014, most new machine learning models came from academia, but since then industry has quickly surged ahead. In 2022, according to data collected by HAI, there were 32 industry-produced machine learning models, compared with only three produced by academia. The AI Index Report notes that industry also has an advantage in access to the large amounts of data, computing power, and money necessary to build state-of-the-art AI systems.

Given this trend, Perrault says, “one of the big questions is the extent to which universities will be given resources to build their own large models rather than tinker with models from the outside.”


It was a great year for AI technical breakthroughs

The AI Index Steering Committee selected the most significant technical developments in AI during 2022, presented in chronological order. This “model of the month” feature, Perrault says, was something new for the team, which is increasingly gathering data internally rather than relying solely on studies published by others. “We have plenty of ideas for other things we should tackle, but the flexibility to do original work is limited by funding,” he continued.


With use comes abuse

Using data from the AI, Algorithmic, and Automation Incidents and Controversies (AIAAIC) Repository, a publicly available database, the AI Index reported that the number of incidents involving the misuse of AI is shooting up. The data runs roughly a year behind, allowing time for reports to be vetted, though it includes some incidents from early 2022, like a deepfake of Ukrainian President Volodymyr Zelenskyy surrendering and news that Intel had developed a system for monitoring student emotions over Zoom, a technology that raised privacy and discrimination concerns.


The law is starting to catch up

The number of AI-related laws passed in the 127 countries surveyed has jumped, HAI reported, from just one in 2016 to 37 in 2022. These include an amendment to Latvia’s National Security Law enabling restrictions on organizations important for national security, including a commercial company developing AI, and a Spanish act requiring that AI algorithms used in public administrations take bias-minimization criteria into account.


In China, citizens are generally fans of AI; in France, Canada, the Netherlands, and the U.S., not so much

According to a survey conducted by the global research firm Ipsos, 78 percent of Chinese respondents agreed that products and services using AI have more benefits than drawbacks. In the United States, only 35 percent saw a net benefit to AI, and France came in at the bottom, at 31 percent. Generally, men have a more positive attitude toward AI than women, Ipsos reported.


Only a third of researchers think AI could cause catastrophe. Only?

A group of U.S. researchers surveyed natural-language-processing researchers, identified by their publications, to get a handle on what AI experts think about AI research, HAI reported. While nearly 90 percent indicated that the net impact of AI, past and future, is good, they aren’t ignoring its power or its risks. A large majority (73 percent) expect AI to soon lead to revolutionary social change, while a not-insignificant minority (36 percent) think AI could cause a nuclear-level catastrophe.

“That was quite an interesting survey result,” Perrault says, “given that these are mostly people who know what they are talking about. Those numbers are about a year old; it would be interesting to see them now, given what has been happening” with large language models. “This is something that needs to be followed.”
