
Grokking X.ai’s Grok—Real Advance or Just Real Troll?

Grok-1 is the largest open-source LLM yet, though not without caveats


Last weekend, X.ai released Grok-1, which it billed as the world’s largest “open-source” large language model (LLM). At 314 billion parameters, it far exceeds predecessors like the 180-billion-parameter Falcon 180B.

That’s impressive, but the timing of X.ai’s release—just a few weeks after Elon Musk, founder of X.ai, filed suit against OpenAI for an alleged lack of openness—raised eyebrows. Is Grok-1 a useful effort to move open-source AI forward, or more of a ploy to prove Musk’s company is more open than its rival?


“There are two strategies to open source that usually big AI companies are taking. One strategy is truly open source. And the second strategy is more of an ‘open weight’ strategy,” said Mahyar Ali, a product leader at the Casper, Wyo.-based AI firm Smodin. “What X.ai has done here is gone with the open weight strategy.”

Grok doesn’t settle AI’s open-source debate

The definition of “open-source” is a contentious point in the AI community, as the release of Meta’s Llama 2 proved. Meta provides the model weights, evaluation code, and documentation, but the company hasn’t revealed the model’s training data or related code. There are commercial limitations, too: Anyone using it for a product or service with more than 700 million monthly active users must request a license from Meta.

Grok-1 scores a win on the license. It’s released under Apache 2.0, a common open-source license introduced over twenty years ago. “The Apache 2.0 license is important because the definition of open source fluctuates a lot. The gold standard is the Apache 2.0 license,” said Cameron R. Wolfe, director of AI at the Minneapolis-based e-commerce platform Rebuy Engine. The license allows use by organizations of any size and has no restrictions on commercial use.


But while the license is open, Grok’s release wasn’t accompanied by the documentation and benchmarks Meta released alongside Llama and Llama 2. And like Meta, X.ai hasn’t released information on how the model was trained, nor has the company released the code used to do so. Wolfe contrasted Grok’s release to OLMo, an open-source LLM that includes not only model weights and documentation but also training code, logs, and metrics. “It’s open source, but [X.ai is] holding stuff back,” said Wolfe.

For Ali, the data missing from Grok’s release is a problem, as it hampers Grok’s usefulness for AI research. Anyone can download the model weights and deploy the model, but it will prove difficult to analyze and understand. “If a company has released the model, can I replicate that model? That is truly open source,” said Ali.

More parameters, more problems?

Sheer size is the most noteworthy aspect of Grok-1’s release. At 314 billion parameters, it’s roughly four and a half times larger than Meta’s largest Llama 2 model, Llama-2-70b.

“The difference between what’s been released so far from the open source perspective, and Grok-1, is that Grok is really big. Which is cool, because maybe it’s closer to what OpenAI is doing,” said Wolfe. Grok’s size could help open-source catch up to more capable closed models, like OpenAI’s GPT-4 and Anthropic’s Claude-3 Opus. (In both cases, the number of parameters in the model remains undisclosed, but popular estimates place each well beyond a trillion parameters.)


However, size is both a blessing and a curse. Adding parameters can improve model quality but also makes the model more difficult for developers to deploy.

“It’s relatively easy for smaller companies and the open-source community to fine-tune a small model,” said Ali. “But with such a gigantic model, to even load [it], you would need a GPU that would cost you around 15 to 20 dollars per hour [to rent], just to run this model. And to fine-tune it, you would need 20 or 30 of them.”
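Ali’s point tracks with simple arithmetic on memory requirements. A minimal back-of-the-envelope sketch (the precision figures are illustrative assumptions, not X.ai’s published requirements, and it ignores activation memory and optimizer state, which make fine-tuning far costlier still):

```python
# Rough VRAM needed just to hold an LLM's weights, at common precisions.
# Illustrative estimate only; real deployments also need memory for
# activations, the KV cache, and (for fine-tuning) gradients and optimizer state.

def weight_vram_gb(params_billion: float, bytes_per_param: float) -> float:
    """GB of memory to store the weights alone."""
    return params_billion * 1e9 * bytes_per_param / 1e9

for precision, bytes_per_param in [("fp16/bf16", 2), ("int8", 1), ("int4", 0.5)]:
    gb = weight_vram_gb(314, bytes_per_param)
    print(f"Grok-1 (314B) at {precision}: ~{gb:.0f} GB of weights")
```

Even at aggressive 4-bit quantization, the weights alone exceed the memory of any single commodity GPU, which is why loading the model requires a multi-GPU node of the kind Ali priced by the hour.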

That’s an especially relevant problem for Grok-1 because, unlike most publicly released models, it’s not fine-tuned. That means it isn’t adapted for a particular use (like chat) and lacks the baked-in trust and safety measures fine-tuned models often include.

Wolfe said this isn’t an unusual move: OLMo also released a base model first, then followed with a fine-tuned model. On the other hand, Musk’s social media posts against what he called “woke” AI suggest he may view a lack of safeguards as a feature, not a bug. X.ai hasn’t said when, or if, it will release a fine-tuned model.

[Image: A wireframe model of a falcon flies across a purple sky.] Earlier large open-source LLMs—like Falcon 180B, whose logo is pictured here—weren’t popular with the open-source community. Credit: Technology Innovation Institute

The history of other such models suggests fine-tuning and deploying Grok-1 will be difficult. Several prior models, including Meta’s Galactica 120B and TII’s Falcon 180B, were clearly designed to deliver the benefits of a large model to the broader AI development community. But the models didn’t prove popular, and the Hugging Face Open LLM Leaderboard remains dominated by models with 7 billion to 72 billion parameters. If Grok-1 fails to buck that trend, it will serve as a warning for future open-source models: Pursue size at your own risk.

“Grok has gathered a lot of headlines. ... But I think in the long term, this model will be forgotten, as were other big models,” said Ali. “What works are the smaller models that individual researchers can use and expand on. This is hype, and I think it’s going to go away very soon.”


UPDATE: 25 March 2024: The story was updated to correct a mistranscription. Mahyar Ali described an “open weight” strategy pursued by the coders who created the Grok-1 LLM, not an “open way” strategy, as was originally reported.
