AI in the 2020s Must Get Greener—and Here’s How

The push for energy efficient “Green AI” requires new strategies

Image of a green circuit board with AI on it, also in green.
Illustration: Jorg Greuel/Getty Images

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

The environmental impact of artificial intelligence (AI) has been a hot topic lately—and I believe it will be a defining issue for AI this decade. The conversation began with a recent study from the Allen Institute for AI that argued for prioritizing “Green AI” efforts, which focus on the energy efficiency of AI systems.

This study was motivated by the observation that many high-profile advances in AI have staggering carbon footprints. A 2018 blog post from OpenAI revealed that the amount of compute required for the largest AI training runs has increased by 300,000 times since 2012. And while that post didn't calculate the carbon emissions of such training runs, others have done so. According to a paper by Emma Strubell and colleagues, an average American is responsible for about 36,000 pounds of CO2 emissions per year; training and developing one machine translation model that uses a technique called neural architecture search was responsible for an estimated 626,000 pounds of CO2.
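
To put those two figures side by side, here is a quick back-of-the-envelope comparison in Python; the numbers are simply the rounded estimates quoted above, not new measurements:

```python
# Back-of-the-envelope comparison using the rounded figures cited above (pounds of CO2).
avg_american_annual_lbs = 36_000   # approximate annual CO2 footprint of an average American
nas_training_lbs = 626_000         # estimated CO2 from developing one NAS-based translation model

ratio = nas_training_lbs / avg_american_annual_lbs
print(f"One such project ~= {ratio:.1f} average Americans' annual emissions")
# Prints roughly 17.4
```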

Unfortunately, these so-called “Red AI” projects may be even worse from an environmental perspective than what's being reported, as a project's total cost in time, energy, and money is typically an order of magnitude more than the cost of generating the final reported results.

Many high-profile advances in AI have staggering carbon footprints.

Moreover, the reality is that some high-profile areas of Red AI—like developing new object-detection models to improve autonomous navigation in complex environments, or learning rich text representations from massive amounts of unstructured web data—will remain off-limits to everyone but the researchers with the most resources (in other words, those working for big tech companies). The sheer size of the datasets and the cost of the compute required keep smaller players out.

So what can be done to push Green AI forward? And should we prioritize Green AI at all costs?

Red AI Isn't All Bad

Many of today's Red AI projects are pushing science forward in natural language processing, computer vision, and other important areas of AI. While their carbon costs may be significant today, the potential for positive societal impact is also significant.

As an analogy, consider the Human Genome Project (HGP), which took US $2.7 billion and 13 years to map the human genome. The HGP's outcome was originally viewed as a mixed bag due to its cost and the dearth of immediate scientific breakthroughs. Now, however, we can map an individual's genome in a few hours for around $100 using sequencing technology that relies on the main artifact of the HGP (the reference genome). While the HGP was far from efficient, it nonetheless helped pave the way for personalized medicine.

Similarly, it's critical to measure both the input and the output of Red AI projects. Many of the artifacts produced by Red AI experiments (for example, image representations for object recognition, or word embeddings in natural language processing) are enabling rapid advances in a wide range of applications.
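
As one concrete illustration of artifact reuse, pretrained word embeddings produced by an expensive training run can simply be downloaded and applied; the sketch below uses the gensim downloader and GloVe vectors as an example choice, not one specified in the article:

```python
# Reusing a pretrained word-embedding artifact instead of retraining it.
# (gensim and the GloVe vectors named here are an illustrative choice.)
import gensim.downloader as api

# Downloads the pretrained GloVe vectors on first use, then caches them locally.
vectors = api.load("glove-wiki-gigaword-100")

# The expensive training has already been paid for; lookups are cheap to run.
print(vectors.most_similar("energy", topn=3))
```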

The Move Toward Green AI

Yet regardless of its underlying scientific merits, Red AI isn't sustainable, due to both environmental concerns and the barriers to entry that it introduces. To continue the analogy, the HGP did succeed in sequencing the human genome, but novel DNA sequencing technologies were required to drastically reduce costs and make genome sequencing broadly accessible. The AI community simply must aim to reduce energy consumption when building deep learning models.

Here are my suggestions for steps that would turn the industry toward Green AI:

Emphasize reproducibility: Reproducibility, along with the sharing of intermediate artifacts, is crucial to increasing the efficiency of AI development. Too often, AI research is published without code, or else researchers find that they can't reproduce results even with the code. Additionally, researchers can face internal hurdles in making their work open source. These factors are significant drivers of Red AI today, as they can force duplicated efforts and prevent efficient sharing. This situation is changing slowly, as conferences like NeurIPS now expect reproducible code to accompany research papers.
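
As a minimal sketch of what reproducible sharing can look like in practice, the snippet below pins random seeds and records the software environment alongside a run; the library choices and the save_run_metadata helper are assumptions for illustration, not practices mandated by any conference:

```python
# Minimal reproducibility sketch (illustrative; library choices are assumptions).
import json
import platform
import random

import numpy as np
import torch


def set_seed(seed: int = 42) -> None:
    """Pin the random seeds that most deep learning experiments depend on."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)


def save_run_metadata(path: str = "run_metadata.json") -> None:
    """Record the environment so others can rerun the experiment."""
    metadata = {
        "python": platform.python_version(),
        "torch": torch.__version__,
        "numpy": np.__version__,
        "cuda_available": torch.cuda.is_available(),
    }
    with open(path, "w") as f:
        json.dump(metadata, f, indent=2)


set_seed(42)
save_run_metadata()
```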

Increase hardware performance: We're currently witnessing a proliferation of specialized hardware that offers not only better performance on deep learning tasks but also greater efficiency (performance per watt). The AI community's demand for GPUs led to Google's development of TPUs and pushed the entire chip market toward more specialized products. In the next few years we'll see NVIDIA, Intel, SambaNova, Mythic, Graphcore, Cerebras, and other companies bring more focus to hardware for AI workloads.
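
To make the performance-per-watt framing concrete, accelerators can be compared by dividing sustained throughput by average power draw; the figures in this sketch are placeholders, not benchmarks of any real chip:

```python
# Illustrative performance-per-watt comparison (all numbers are placeholders).
def perf_per_watt(throughput_tflops: float, avg_power_watts: float) -> float:
    """Sustained throughput (TFLOPS) divided by average board power (W)."""
    return throughput_tflops / avg_power_watts


accelerators = {
    "general_purpose_gpu": {"throughput_tflops": 100.0, "avg_power_watts": 300.0},
    "specialized_ai_chip": {"throughput_tflops": 150.0, "avg_power_watts": 250.0},
}

for name, spec in accelerators.items():
    print(f"{name}: {perf_per_watt(**spec):.2f} TFLOPS/W")
```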

Understand deep learning: We know that deep learning works. But although the technique's roots go back several decades, we as a research community still don't fully understand how or why it works. Uncovering the underlying science behind deep learning, and formally characterizing its strengths and limitations, would help guide the development of more accurate and efficient models.

Democratize deep learning: Pushing the limit on deep learning's accuracy remains an exciting area of research, but as the saying goes, “perfect is the enemy of good.” Existing models are already accurate enough to be deployed in a wide range of applications. Nearly every industry and scientific domain can benefit from deep learning tools. If many people in many sectors are working on the technology, we'll be more likely to see surprising innovations in performance and energy efficiency.
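
One common way existing models are reused rather than retrained from scratch is transfer learning from a pretrained network; below is a minimal PyTorch sketch, assuming a torchvision install and a hypothetical 10-class downstream task (API details vary across torchvision versions):

```python
# Transfer learning sketch: reuse a pretrained network instead of training from scratch.
# (The torchvision model choice and the 10-class task are assumptions for illustration.)
import torch
import torchvision

# Load a model pretrained on ImageNet; no large-scale training run is repeated here.
model = torchvision.models.resnet18(weights="DEFAULT")

# Freeze the pretrained backbone so only a small head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer for the hypothetical 10-class downstream task.
model.fc = torch.nn.Linear(model.fc.in_features, 10)

# Only the new head's parameters need gradient updates.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```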

Partner more: Most of the world's largest companies don't have the talent to build AI efficiently, but their leaders realize that AI and deep learning will be key components of future products and services. Rather than go it alone, companies should look for partnerships with startups, incubators, and universities to jumpstart their AI strategies.

While it's easy to look at a self-driving car whizzing down a road in Silicon Valley and think that we've reached a technological peak, it's important to understand that we're still in the very early days of AI.

In aviation, the “pioneer age” of flight in the early 1900s was characterized by incredibly important but slow progress coming from disparate projects around the world. Fifty years later, in the “jet age,” the aviation industry had developed a continuous cycle of advancement, making planes bigger, safer, faster, and more fuel efficient. Why? Because fundamental advances in engineering (such as turbine engines) and society (such as the advent of regulatory agencies) provided the necessary building blocks and infrastructure to democratize powered flight.

The 2020s may see incredible advances in AI, but in terms of infrastructure and efficient use of energy we're still in the pioneer age. As AI research progresses, we must insist that the best platforms, tools, and methodologies for building models are easy to access and reproducible. That will lead to continuous improvements in energy-efficient AI.
