This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.
The environmental impact of artificial intelligence (AI) has been a hot topic of late, and I believe it will be a defining issue for AI this decade. The conversation began with a recent study from the Allen Institute for AI that argued for prioritizing “Green AI” efforts, which focus on the energy efficiency of AI systems.
This study was motivated by the observation that many high-profile advances in AI have staggering carbon footprints. A 2018 blog post from OpenAI revealed that the amount of compute required for the largest AI training runs has increased by 300,000 times since 2012. And while that post didn't calculate the carbon emissions of such training runs, others have done so. According to a paper by Emma Strubell and colleagues, an average American is responsible for about 36,000 pounds of CO2 emissions per year; training and developing one machine translation model that uses a technique called neural architecture search was responsible for an estimated 626,000 pounds of CO2.
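A quick back-of-the-envelope check puts these figures in perspective (a sketch using only the numbers quoted above; the roughly six-year window from 2012 to 2018 is my assumption, not a figure from the OpenAI post):

```python
import math

# 1) How many doublings does a 300,000x increase in training compute imply,
#    and how often must compute have doubled over roughly 2012-2018?
growth_factor = 300_000
doublings = math.log2(growth_factor)              # ~18.2 doublings
months_elapsed = 6 * 12                           # assumed ~6-year window
months_per_doubling = months_elapsed / doublings  # ~4 months per doubling

# 2) Compare the neural-architecture-search estimate from Strubell et al.
#    with the average American's annual carbon footprint.
nas_emissions_lbs = 626_000
per_american_lbs = 36_000
person_years = nas_emissions_lbs / per_american_lbs  # ~17 person-years

print(f"{doublings:.1f} doublings, roughly one every {months_per_doubling:.1f} months")
print(f"~{person_years:.0f} average-American years of CO2 emissions")
```

In other words, developing that single model emitted about as much CO2 as 17 average Americans do in a year.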
Unfortunately, these so-called “Red AI” projects may be even worse from an environmental perspective than what's being reported, as a project's total cost in time, energy, and money is typically an order of magnitude more than the cost of generating the final reported results.
Moreover, the reality is that some high-profile areas of Red AI—like developing new object-detection models to improve autonomous navigation in complex environments, or learning rich text representations from massive amounts of unstructured web data—will remain off-limits to everyone but the researchers with the most resources (in other words, those working for big tech companies). The sheer size of the datasets and cost of compute required keeps out smaller players.
So what can be done to push Green AI forward? And should we prioritize Green AI at all costs?
Red AI Isn't All Bad
Many of today's Red AI projects are pushing science forward in natural language processing, computer vision, and other important areas of AI. While their carbon costs may be significant today, the potential for positive societal impact is also significant.
As an analogy, consider the Human Genome Project (HGP), which took US $2.7 billion and 13 years to map the human genome. The HGP's outcome was originally viewed as a mixed bag due to its cost and the dearth of immediate scientific breakthroughs. Now, however, we can map an individual's genome in a few hours for around $100 using sequencing technology that relies on the main artifact of the HGP (the reference genome). While the HGP lacked in efficiency, it nonetheless helped pave the way for personalized medicine.
Similarly, it's critical to measure both the input and the output of Red AI projects. Many of the artifacts produced by Red AI experiments (for example, image representations for object recognition, or word embeddings in natural language processing) are enabling rapid advances in a wide range of applications.
The Move Toward Green AI
Yet regardless of its underlying scientific merits, Red AI isn't sustainable, due to both environmental concerns and the barriers to entry that it introduces. To continue the analogy, the HGP did succeed in sequencing the human genome, but novel DNA sequencing technologies were required to drastically reduce costs and make genome sequencing broadly accessible. The AI community simply must aim to reduce energy consumption when building deep learning models.
Here are my suggestions for steps that would turn the industry toward Green AI:
Emphasize reproducibility: Reproducibility and the sharing of intermediate artifacts are crucial to increasing the efficiency of AI development. Too often, AI research is published without code, or researchers find that they can't reproduce results even with the code. Additionally, researchers can face internal hurdles in making their work open source. These factors are significant drivers of Red AI today, as they force duplicated effort and prevent efficient sharing. This situation is changing slowly, as conferences like NeurIPS are now requiring reproducible code submissions along with research papers.
Increase hardware performance: We're currently witnessing a proliferation of specialized hardware that not only offers better performance on deep learning tasks, but also increased efficiency (performance per watt). The AI community's demand for GPUs led to Google's development of TPUs and pushed the entire chip market toward more specialized products. In the next few years we'll see NVIDIA, Intel, SambaNova, Mythic, Graphcore, Cerebras, and other companies bring more focus to hardware for AI workloads.
Understand deep learning: We know that deep learning works. But although the technique's roots go back several decades, we as a research community still don't fully understand how or why it works. Uncovering the underlying science behind deep learning, and formally characterizing its strengths and limitations, would help guide the development of more accurate and efficient models.
Democratize deep learning: Pushing the limit on deep learning's accuracy remains an exciting area of research, but as the saying goes, “perfect is the enemy of good.” Existing models are already accurate enough to be deployed in a wide range of applications. Nearly every industry and scientific domain can benefit from deep learning tools. If many people in many sectors are working on the technology, we'll be more likely to see surprising innovations in performance and energy efficiency.
Partner more: Most of the world's largest companies don't have the talent to build AI efficiently, but their leaders realize that AI and deep learning will be key components of future products and services. Rather than go it alone, companies should look for partnerships with startups, incubators, and universities to jumpstart their AI strategies.
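The reproducibility point above can be made concrete with a small example: fixing every source of randomness up front so that a rerun produces identical results. This is a minimal pure-Python illustration (the `run_experiment` toy function is hypothetical); real deep learning code would also need to seed NumPy and the framework's own random number generator, and to record library versions.

```python
import random

def run_experiment(seed: int) -> list[float]:
    """Toy 'training run': its outputs depend only on the seed, so any
    rerun with the same seed reproduces the exact same numbers."""
    rng = random.Random(seed)  # a local RNG avoids hidden global state
    return [rng.gauss(0.0, 1.0) for _ in range(5)]

# Two runs with the same seed are identical...
assert run_experiment(42) == run_experiment(42)
# ...while a different seed gives different, but still reproducible, results.
assert run_experiment(42) != run_experiment(7)
```

Publishing the seed alongside the code is what turns "we got these numbers once" into a result anyone can regenerate without burning the compute to rediscover it.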
While it's easy to look at a self-driving car whizzing down a road in Silicon Valley and think that we've reached a technological peak, it's important to understand we're still in the very early days of AI.
In aviation, the “pioneer age” of flight in the early 1900s was characterized by incredibly important but slow progress coming from disparate projects around the world. Fifty years later, in the “jet age,” the aviation industry had developed a continuous cycle of advancement, making planes bigger, safer, faster, and more fuel efficient. Why? Because fundamental advances in engineering (such as turbine engines) and society (such as the advent of regulatory agencies) provided the necessary building blocks and infrastructure to democratize powered flight.
The 2020s may see incredible advances in AI, but in terms of infrastructure and efficient use of energy we're still in the pioneer age. As AI research progresses, we must insist that the best platforms, tools, and methodologies for building models are easy to access and reproducible. That will lead to continuous improvements in energy-efficient AI.
Ameet Talwalkar is an assistant professor in the Machine Learning Department at Carnegie Mellon University, and also co-founder and chief scientist at Determined AI. He led the initial development of the MLlib project in Apache Spark, is a co-author of the textbook Foundations of Machine Learning (MIT Press), and created an award-winning edX MOOC on distributed machine learning.