Open-Source AI Is Good for Us

But current regulatory trends threaten to quash transparency and competition in AI


This is a guest post. For the other side of the argument about open-source AI, see the recent guest post “Open-Source AI Is Uniquely Dangerous.”

A culture war is emerging in AI between those who believe that model development should be restricted by default and those who believe it should be open by default. In 2024, that clash is spilling over into the law, and it has major implications for the future of open innovation in AI.

Today, the AI technologies under most scrutiny are generative AI models that have learned how to read, write, draw, animate, and speak, and that can be used to power tools like ChatGPT. Intertwined with the wider debate over AI regulation is a heated and ongoing disagreement over the risk of open models—models that can be used, modified, and shared by other developers—and the wisdom of releasing their distinctive settings, or “weights,” to the public.

Since the launch of powerful open models like the Llama, Falcon, Mistral, and Stable Diffusion families, critics have pressed to keep other such genies in the bottle. “Open source software and open data can be an extraordinary resource for furthering science,” wrote two U.S. senators to Meta (creator of Llama), but “centralized AI models can be more effectively updated and controlled to prevent and respond to abuse.” Think tanks and closed-source firms have called for AI development to be regulated like nuclear research, with restrictions on who can develop the most powerful AI models. Last month, one commentator argued in IEEE Spectrum that “open-source AI is uniquely dangerous,” echoing calls for the registration and licensing of AI models.

The debate is surfacing in recent efforts to regulate AI. First, the European Union has just finalized its AI Act to govern the development and deployment of AI systems. Among its most hotly contested provisions was whether to apply these rules to “free and open-source” models. Second, following President Biden’s executive order on AI, the U.S. government has begun to compel reports from the developers of certain AI models, and will soon launch a public inquiry into the regulation of “widely-available” AI models.

However our governments choose to regulate AI, we need to promote a diverse AI ecosystem: from large companies building proprietary superintelligence to everyday tinkerers experimenting with open technology. Open models are the bedrock for grassroots innovation in AI.

I serve as head of public policy for Stability AI (makers of Stable Diffusion), where I work with a small team of passionate researchers who share media and language models that are freely used by millions of everyday developers and creators around the world. My concern is that this grassroots ecosystem is uniquely vulnerable to mounting restrictions on who can develop and share models. Eventually, these regulations may lead to limits on fundamental research and collaboration in ways that erode this culture of open development, which made AI possible in the first place and helps make it safer.

Open models promote transparency and competition

Open models play a vital role in helping to drive transparency and competition in AI. Over the coming years, generative AI will support creative, analytic, and scientific applications that go far beyond today’s text and image generators; we’ll see such applications as personalized tutors, desktop healthcare assistants, and backyard film studios. These models will revolutionize essential services, reshape how we access information online, and transform our public and private institutions. In short, AI will become critical infrastructure.

As I have argued before the U.S. Congress and U.K. Parliament, the next wave of digital services should not rely solely on a few “black box” systems operated by a cluster of big tech firms. Today, our digital economy runs on opaque systems that feed us content, control our access to information, determine our exposure to advertising, and mediate our online interactions. We’re unable to inspect these systems or build competitive alternatives. If models—our AI building blocks—are owned by a handful of firms, we risk repeating what played out with the Internet.

We’ve seen what happens when critical digital infrastructure is controlled by just a few companies.

In this environment, open models play a vital role. If a model’s weights are released, researchers, developers, and authorities can “look under the hood” of these AI engines to understand their suitability and to mitigate their vulnerabilities before deploying them in real-world tools. Everyday developers and small businesses can adapt these open models to create new AI applications, tune safer AI models for specific tasks, train more representative AI models for diverse communities, or launch new AI ventures without spending tens of millions of dollars to build a model from scratch.

We know from experience that transparency and competition are the foundation for a thriving digital ecosystem. That’s why open-source software like Android powers most of the world’s smartphones, and why Linux can be found in data centers, nuclear submarines, and SpaceX rockets. Open-source software has contributed as much as US $8.8 trillion in value globally. Indeed, recent breakthroughs in AI were only possible because of open research like the transformer architecture, open code libraries like PyTorch, and open collaboration from researchers and developers around the world.

Regulations may stifle grassroots innovation

Fortunately, no government has ventured to abolish open models altogether. If anything, governments have resisted the most extreme calls to intervene. The White House declined to require premarket licenses for AI models in its executive order. And after a confrontation with its member state governments in December, the E.U. agreed to partially exempt open models from its AI Act. Meanwhile, Singapore is funding a US $52 million open-source development effort for Southeast Asia, and the UAE continues to bankroll some of the largest available open generative AI models. French President Macron has declared “on croit dans l’open-source”—we believe in open-source.

However, the E.U. and U.S. regulations could put the brakes on this culture of open development in AI. For the first time, these instruments establish a legal threshold beyond which models will be deemed “dual use” or “systemic risk” technologies. Those thresholds are based on a range of factors, including the computing power used to train the model. Models over the threshold will attract new regulatory controls, such as notifying authorities of test results and maintaining exhaustive research and development records, and they will lose E.U. exemptions for open-source development.

In one sense, these thresholds are a good-faith effort to avoid overregulating AI. They focus regulatory attention on future models with unknown capabilities instead of restricting existing models. Few existing models will meet the current thresholds, and the first to cross them will be models from well-resourced firms that are equipped to meet the new obligations.

In another sense, however, this approach to regulation is troubling, and augurs a seismic shift in how we govern novel technology. Grassroots innovation may become collateral damage.

Regulations could hurt everyday developers

First, regulating “upstream” components like models could have a disproportionate chilling effect on research in “downstream” systems. Many of the restrictions for above-the-threshold models assume that developers are sophisticated firms with formal relationships to those who use their models. For example, the U.S. executive order requires developers to report on individuals who can access the model’s weights, and detail the steps taken to secure those weights. The E.U. legislation requires developers to conduct “state of the art” evaluations and systematically monitor for incidents involving their models.


Yet the AI ecosystem is more than a handful of corporate labs. It also includes countless developers, researchers, and creators who can freely access, refine, and share open models. They can iterate on powerful “base” models to create safer, less biased, or more reliable “fine-tuned” models that they release back to the community.

If governments treat these everyday developers the same as the companies that first released the model, there will be problems. Developers operating from dorm rooms and dining tables won’t be able to comply with the premarket licensing and approval requirements that have been proposed in Congress, or the “one size fits all” evaluation, mitigation, and documentation requirements initially drafted by the European Parliament. And they would never contribute to model development—or any other kind of software development—if they thought a senator might hold them liable for how downstream actors use or abuse their research. Individuals releasing new and improved models on GitHub shouldn’t face the same compliance burden as OpenAI or Meta.

The thresholds for restrictions seem arbitrary

Second, the criteria underpinning these thresholds are unclear. Before we put up barriers around the development and distribution of a useful technology, governments should assess the initial risk of the technology, the residual risk after considering all available legal and technical mitigations, and the opportunity cost of getting it wrong.

Yet there is still no framework for determining whether these models actually pose a serious and unmitigated risk of catastrophic misuse, or for measuring the impact of these rules on AI innovation. The preliminary U.S. threshold—10^26 floating-point operations (FLOPs) of training computation—first appeared as a passing footnote in a research paper. The E.U. threshold of 10^25 FLOPs is an order of magnitude more conservative, and didn’t appear at all until the final month of negotiation. We may cross that threshold in the foreseeable future. What’s more, both governments reserve the right to move these goalposts for any reason, potentially bringing into scope a massive number of smaller but increasingly powerful models, many of which can be run locally on laptops or smartphones.
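To make these thresholds concrete, here is a minimal sketch of how a developer might estimate where a model falls relative to them. It uses the widely cited rule of thumb that training compute is roughly 6 × parameters × training tokens; this approximation and the example model size are illustrative assumptions, not the regulators’ official methodology.

```python
# Rough illustration of training-compute thresholds.
# Assumes the common approximation FLOPs ~ 6 * parameters * tokens,
# which is an estimate, not a legally defined measurement method.

US_THRESHOLD = 1e26  # U.S. executive order reporting threshold, in FLOPs
EU_THRESHOLD = 1e25  # E.U. AI Act "systemic risk" threshold, in FLOPs

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in floating-point operations."""
    return 6 * params * tokens

# Hypothetical model: 70 billion parameters trained on 2 trillion tokens.
flops = training_flops(70e9, 2e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Over E.U. threshold:", flops > EU_THRESHOLD)
print("Over U.S. threshold:", flops > US_THRESHOLD)
```

Under these assumptions the example model lands around 8.4 × 10^23 FLOPs, an order of magnitude or two below both thresholds; but because the thresholds are fixed numbers while training compute keeps growing, models of this class may cross them in the foreseeable future.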

Restrictions are based on speculative risks

Third, there is no consensus about precisely which risks justify these exceptional controls. Online safety, election disinformation, smart malware, and fraud are some of the most immediate and tangible risks posed by generative AI. Economic disruption is possible too. However, these risks are rarely invoked to justify premarket controls for other helpful software technologies with dual-use applications. Photoshop, Word, Facebook, Google Search, and WhatsApp have contributed to the proliferation of deepfakes, fake news, and phishing scams, but our first instinct isn’t to regulate their underlying C++ or Java libraries.

Instead, critics have focused on “existential risk” to make the case for regulating model development and distribution, citing the prospect of runaway agents or homebuilt weapons of mass destruction. However, as a recent paper from Stanford’s Institute for Human-Centered Artificial Intelligence (HAI) notes of these claims, “the weakness of evidence is striking.” If these arguments are to justify a radical departure from our conventional approach to regulating technology, the standard of proof should be higher than speculation.

We should regulate AI while preserving openness

There is no debate that AI should be regulated, and all actors—from model developers to application deployers—have a role to play in mitigating emerging risks. However, new rules must account for grassroots innovation in open models. Right now, well-intended efforts to regulate models run the risk of stifling open development. Taken to their extreme, these frameworks may limit access to foundational technology, saddle hobbyists with corporate obligations, or formally restrict the exchange of ideas and resources between everyday developers.

In many ways, models are regulated already, thanks to a complex patchwork of legal frameworks that governs the development and deployment of any technology. Where there are gaps in existing law—such as U.S. federal law governing abusive, fraudulent, or political deepfakes—they can and should be closed.

However, presumptive restrictions on model development should be the option of last resort. We should regulate for emerging risks while preserving the culture of open development that made these breakthroughs possible in the first place, and that drives transparency and competition in AI.
