OpenAI’s GPT-OSS, released on August 5, was never going to grab the spotlight. The company’s release schedule guaranteed that: it quickly followed GPT-OSS with GPT-5, its largest and most capable large language model.
Yet GPT-OSS is, in many ways, a more notable and surprising model. Released in two versions (GPT-OSS-20B and GPT-OSS-120B), it is OpenAI’s first open large language model since GPT-2’s launch in 2019 (though many would argue neither version is truly open; more on that to come).
It’s also released under an Apache 2.0 license, which is among the most permissive licenses in common use. Dustin Carr, co-founder and CTO of the AI startup Darkviolet.ai, called it a “maximally permissive license” and said the OpenAI release was a “very positive, very surprising development.” Carr’s company uses open models to power AI tools for educational websites.
OpenAI returns to “open weights” with style
The Apache 2.0 license that accompanies GPT-OSS imposes no limits on commercial use and, unlike some other open-source licenses, allows those who build on Apache 2.0–licensed software to release derivative works under a different license. The license also includes a patent grant: contributors license any patents they hold that cover the software, which helps shield those who build on GPT-OSS from future infringement claims by the model’s contributors.
That approach contrasts with the licenses attached to many popular open models, including Meta’s Llama and Alibaba’s Qwen. These models are also freely available for anyone to download, but they have a few strings attached.
Meta’s Llama license requires derivative models to include the Llama name and comply with Meta’s brand guidelines. Most Alibaba Qwen 2.5 models are under an Apache 2.0 license, but some, including the largest and most capable variant, are released under a research license that includes restrictions on commercial use for large organizations.
OpenAI’s release also impressed developers on another, more nuanced point: launch logistics.
Releasing a model under a permissive license is just one part of what’s required to make it useful. It must also be broadly available and optimized for common hardware.
OpenAI tackled that challenge head-on with a broad release: GPT-OSS was immediately available not only on model repositories like Hugging Face but also in leading open LLM “front-ends” like Ollama and LM Studio. These front-ends give users a ChatGPT-like interface, letting them run models on their own computers and interact with them without writing code. The launch was flanked by day-one support from hardware companies, including Nvidia and AMD, and cloud providers, including Microsoft Azure and Amazon Web Services, to ensure that the model ran smoothly on a variety of systems.
“You had the models instantly,” said Carr. “You didn’t have to wait a week for the models to get ready for LM Studio, and so on. It was an impressive execution.”
Open weights, but still not open source
The release of GPT-OSS made waves across the open AI community. It also highlighted deep divisions.
Hanna Hajishirzi, a senior research director at the research institute AI2 and a professor at the University of Washington, perhaps best summarized the divide, remarking that “meaningful progress in AI is best achieved in the open—not just with open weights, but with open data, transparent training methods, intermediate checkpoints from pre-training and mid-training, and shared evaluations.” The Open Source Initiative responded to the GPT-OSS release with a tweet linking to its definition of Open Source AI.
The critical point: There’s a difference between models that release open weights and those that are fully open source.
Models that are open weight, such as GPT-OSS, include the weights: the trained numerical parameters needed to run the model. Those weights can also be modified through techniques like fine-tuning.
However, the model weights are only the end result of the model’s training. The weights don’t provide information needed to reproduce the model, such as the data used to train it. That means it’s impossible to rebuild the model from scratch. OSI’s Open Source AI Definition 1.0 requires that model releases include all training code and details on the data used for training.
The GPT-OSS release doesn’t fulfill those requirements.
Open or no, GPT-OSS turns up the heat
Even so, many developers seem willing to overlook GPT-OSS’s lack of full transparency. The reason? The model is still useful.
Carr said the 20B variant generated 45 to 50 tokens per second on his M4 MacBook. That speed, he said, is comparable to far smaller models from a year ago, but because GPT-OSS-20B is a larger model, it delivers a major jump in quality. “Nothing of this quality has come close to that speed,” he said.
Brendan Ashworth, co-founder of Bunting Labs, argues that criticizing GPT-OSS for not meeting full open-source standards sets the bar too high. “Expecting them to open source more is kind of a weird thing to complain about,” he said, pointing to the model’s free availability and permissive license.
That view carries weight given Ashworth’s credentials as an open-source developer. His company recently released Mundi.ai, an AI-powered geographic information system (GIS), under an open-source license. While he acknowledges a fully open-source GPT-OSS would be preferable, he sees OpenAI’s return to open weights as a win for developers overall.
The enthusiastic reaction to GPT-OSS could pressure companies like Meta and Alibaba to loosen their license terms. Meta, in particular, stands to lose out, given the rocky launch of its most recent open model, Llama 4. Prior to this year, Meta was regarded as the obvious U.S. leader in open-weight models. GPT-OSS is the most serious threat yet to that title.
Matthew S. Smith is a freelance consumer technology journalist with 17 years of experience and the former Lead Reviews Editor at Digital Trends. An IEEE Spectrum Contributing Editor, he covers consumer tech with a focus on display innovations, artificial intelligence, and augmented reality. A vintage computing enthusiast, Matthew covers retro computers and computer games on his YouTube channel, Computer Gaming Yesterday.