From the October 2023 issue of IEEE Spectrum.


Embedded Generative AI Will Power Game Characters

Unity’s Sentis lets game developers incorporate standalone neural networks

Orb is a 3D character that spouts dialogue produced by an AI model.

Unity, the world’s most popular 3D real-time development environment, recently unveiled Sentis, a feature to help developers incorporate generative AI models into games and other applications built using its platform. It might seem a natural, even simple, addition. Unity is frequently used as a game engine, and video games have used AI for decades. But cutting-edge generative models, which are powerful yet unpredictable, present unique challenges.

“It makes sense, because I do see that game developers of multiple levels of scale, whether they’re small creators or big studios, are curious and interested about these new AI technologies,” says Dr. Jeff Orkin, cofounder and CEO of Central Casting AI, a startup that provides developers with pretrained nonplayer characters (NPCs) to populate their games. “They’re concerned about the cost. They don’t want to be beholden to some third-party company that, every time a user interacts with your game or a character in the game, you need to make an API call.”

Orkin developed the AI for F.E.A.R., a 2005 title praised for introducing games to the concept of “automated planning,” a goal-oriented approach that produces more effective and dynamic AI agents. Central Casting AI meshes this with recent advancements in generative AI to construct large “planning domains” that support a wide range of AI actions including dialogue and interaction with in-game objects.
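The planning approach Orkin pioneered can be sketched in a few lines: give each action preconditions and effects, then search for a sequence of actions that transforms the current world state into one satisfying a goal. The minimal Python sketch below illustrates the idea; the action names and world-state keys are invented for illustration and don't come from F.E.A.R. or Central Casting's product.

```python
from collections import deque

# A minimal goal-oriented action planner in the spirit of the
# "automated planning" F.E.A.R. popularized. Actions and state
# keys here are illustrative only.
ACTIONS = {
    # name: (preconditions, effects)
    "draw_weapon": ({"armed": False}, {"armed": True}),
    "take_cover":  ({"in_cover": False}, {"in_cover": True}),
    "attack":      ({"armed": True, "in_cover": True}, {"threat_gone": True}),
}

def satisfied(conds, state):
    return all(state.get(k) == v for k, v in conds.items())

def plan(start, goal):
    """Breadth-first search for an action sequence that reaches the goal."""
    queue = deque([(dict(start), [])])
    seen = set()
    while queue:
        state, steps = queue.popleft()
        if satisfied(goal, state):
            return steps
        key = frozenset(state.items())
        if key in seen:
            continue
        seen.add(key)
        for name, (pre, eff) in ACTIONS.items():
            if satisfied(pre, state):
                queue.append(({**state, **eff}, steps + [name]))
    return None  # goal unreachable with the available actions

start = {"armed": False, "in_cover": False, "threat_gone": False}
print(plan(start, {"threat_gone": True}))
# → ['draw_weapon', 'take_cover', 'attack']
```

The appeal is that designers author actions rather than scripts: sequences like the one above emerge from the search, which is what makes planned NPCs feel more dynamic than hand-scripted ones.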

This tech is powerful, yet it highlights the limits developers run into when building more advanced AI. The planning domain is extensive but fixed, so behavior outside it won’t appear. And because Central Casting’s product runs on Amazon Web Services, an Internet connection is required. These traits can be advantages or disadvantages, depending on a developer’s needs, but they represent only one possible path.

Central Casting’s AI theater shows off the company’s AI, which can be implemented in Roblox.
Central Casting AI

Unity’s Sentis, currently in closed beta, provides an alternative route previously impossible for developers to explore. “With Unity Sentis, designers can build game loops that rely on inference—the process of feeding data through a machine-learning model—on devices from mobile to console to Web and PC, without cloud-compute costs or latency issues,” Luc Barthelet, Unity’s chief technology officer, said in a press release. “This will be used to run NPC characters...or restylize a game without requiring all-new artwork (for a night scene, for example, very much as Hollywood does it), or it could replace a physics engine by something 1,000 times more efficient.”
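The “inference” Barthelet describes is, at bottom, a forward pass: pushing input numbers through a trained network’s weights to get an output. A toy pure-Python sketch makes the idea concrete; the two-neuron network, its weights, and the “should this NPC flee?” framing are all invented, standing in for the much larger trained models Sentis would actually run on-device.

```python
import math

# A toy two-layer network with hard-coded weights, standing in for the
# trained model a game would ship. All numbers are made up.
W1 = [[0.5, -0.2],   # input -> hidden (one row per hidden neuron)
      [0.1,  0.4]]
W2 = [0.3, 0.7]      # hidden -> output

def inference(x):
    """One forward pass: feed a 2-element input through the network."""
    hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x)))   # ReLU
              for row in W1]
    z = sum(w * h for w, h in zip(W2, hidden))
    return 1.0 / (1.0 + math.exp(-z))                          # sigmoid

# Runs entirely on local hardware: no API call, no cloud-compute cost.
p = inference([1.0, 0.0])
print(round(p, 3))  # → 0.555
```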

Put more simply, Sentis gives developers the option to embed generative AI models inside a Unity app and run them on consumer-grade hardware—which includes everything from an iPhone to an Xbox. It’s a first for a 3D real-time development environment and a significant change from Unity’s previous effort, the ML Agents Toolkit, which functioned outside the runtime, meaning it wasn’t integrated into the code actually driving the game environment in real time.

“[Unity ML Agents] became popular with students and AI researchers, who could more easily use Unity to build experimental environments. But running the model in a separate process makes it more complicated to ship a game depending on the model and comes with a performance penalty,” explains Julian Togelius, associate professor of computer science and engineering at New York University. “Integrating into the Unity runtime can help with both performance issues and with shipping a packaged product, especially when deploying to multiple platforms.”
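The performance penalty Togelius mentions is easy to see in miniature: a model call that stays in-process costs a function call, while one that crosses a process boundary also pays for interprocess communication and serialization. The toy Python comparison below spawns a fresh process per call, which exaggerates the overhead but illustrates the shape of the problem; the “model” is just a stand-in arithmetic function.

```python
import subprocess
import sys
import time

def fake_inference(x: float) -> float:
    # Stand-in for a tiny model's forward pass inside the game process.
    return x * 2.0 + 1.0

# In-process: the model lives inside the runtime, so a "call" is just a call.
t0 = time.perf_counter()
in_proc = fake_inference(3.0)
in_proc_time = time.perf_counter() - t0

# Out-of-process: every call pays for process startup (exaggerated here)
# plus serializing inputs and outputs across the boundary.
t0 = time.perf_counter()
out = subprocess.run(
    [sys.executable, "-c", "print(3.0 * 2.0 + 1.0)"],
    capture_output=True, text=True, check=True,
)
ipc_result = float(out.stdout)
ipc_time = time.perf_counter() - t0

print(in_proc == ipc_result)    # same answer either way
print(in_proc_time < ipc_time)  # but the round trip costs far more
```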

Developers wrestle with generative AI’s unpredictable potential

Sentis may help developers meet the challenge of implementing an AI model in Unity, but that doesn’t mean it’s a slam dunk.

Jeremy Tryba reinforces this point. His company builds tools to help developers bring generative AI to 3D real-time environments but focuses on creating so-called assets, such as the textures that are composited on top of the geometric definitions of a wall or an NPC’s body to make them look realistic. Creating assets is a costly and time-intensive element of any 3D game, film, or app. “A lot of being able to build good models is understanding the training sets, and I think that we’ve got a long way to go before the right data exists to drive the real-time models that people really want to be inside game engines,” says Tryba.

This points to a familiar problem: generative AI models are unpredictable. Tryba’s company helps developers use generative AI to create assets, but those assets are fixed once implemented. Running an AI model in real time, as Sentis allows, will challenge developers with unexpected results.

Unity teased generative AI’s potential with Orb, an interstellar visitor with surprising insight into human hair styles.
Unity

Even so, Sentis could prove alluring to developers looking for shortcuts—something all software developers, and game developers in particular, desperately need. Improperly redacted filings from the FTC’s attempt to block Microsoft’s acquisition of Activision-Blizzard revealed that The Last of Us 2, a hit action-adventure game recently adapted into a series by HBO, cost $220 million to develop over six years. Large companies, like Sony and Microsoft, can pay for these Herculean endeavors, but smaller development studios are on the hunt for ways to achieve more with less.

“What it comes down to is, a lot of game developers want to focus on building the game, right? They don’t want to focus on things that are a little more parallel to the game or separate from the core,” says Aaron Vontell, founder and CEO of Regression Games. “What I’ve been seeing is that a lot of studios want to use AI tools to make it easier to do some of those more mundane and difficult tasks.”

And while embedding an AI model in a game’s runtime may introduce more unpredictability to begin with, it provides hope of eventually bringing a model more firmly under the game developer’s control. That’s an important distinction. A general-purpose, third-party AI model, such as ChatGPT, is opaque and supports a variety of functions that are likely irrelevant to a particular game or app. Bringing models into the runtime offers an opportunity to build more predictable models with precise capabilities.


“I don’t think there’s ever an end to plugging every possible hole people can find to trick [general purpose models] into saying things,” says Orkin. “If you can run the models in your own engine, that means you have control over the model itself, and you can choose what data to train it on, which can give you more control of the things it could do.”

This possibility will take years to come to fruition, but Unity’s decision to bring AI into the runtime with Sentis is a first step, and one its competitors—like Unreal Engine—are likely to follow.
