You’re following a gloomy corridor into a large boiler room, dimly lit by a flickering fluorescent lamp and echoing with the rhythms of unseen machinery. Three enemy soldiers suddenly appear on a catwalk high above the floor. They split up, one of them laying down suppressive fire, which forces you to take cover. Although you shoot back, the attackers still manage to creep forward behind a curtain of smoke and flying debris.
Moments later, a machine gun rings out, and you are cut down in a shower of bullets. Then, as you lie dying, you glimpse the soldier who flanked you from behind while his two buddies drew your attention.
Thankfully, it was only a video game, so in fact you’re not mortally wounded. Still, your ego might well be bruised, because you were not only outgunned but also outsmarted by artificial intelligence (AI).
The game is called F.E.A.R., short for First Encounter Assault Recon, and its use of AI, along with its impressive graphics, is its prime attraction. The developer, Monolith Productions of Kirkland, Wash., released it in 2005 to rave reviews, including the GameSpot Web site’s Best Artificial Intelligence award. Such recognition means a lot to the game’s creators, who face stiff competition in what has become a multibillion-dollar industry.
The game is a far cry from the traditional diversions that AI researchers like ourselves have long studied, such as chess and checkers. Whereas the goal in the past was to write computer programs capable of beating expert players at such board games, now the metric of success for AI is whether it makes video games more entertaining.
Because a high fun factor is what sells, the video-game industry has become increasingly keen to make use of developments in AI research—and computer scientists have taken notice. A watershed came in 2000, when John E. Laird, a professor of engineering at the University of Michigan, and Michael van Lent, now chief scientist at Soar Technology, in Ann Arbor, Mich., published a call to arms that described commercial video games as “AI’s killer application.” Their point was that research to improve AI for such games would create spin-offs in many other spheres.
The main challenge is to make computer-generated characters—dubbed bots—act realistically. They must, of course, look good and move naturally. But, ideally, they should also be able to engage in believable conversations, plan their actions, find their way around virtual worlds, and learn from their mistakes. That is, they need to be smart.
Today many video games create only an illusion of intelligence, using a few programming tricks. But in the not-so-distant future, game bots will routinely use sophisticated AI techniques to shape their behavior. We and our colleagues in the University of Alberta GAMES (Game-playing, Analytical methods, Minimax search and Empirical Studies) research group, in Edmonton, Canada, have been working to help bring about such a revolution.
The AI of F.E.A.R. is based loosely on an automated planner called STRIPS (for STanford Research Institute Problem Solver), which Richard E. Fikes and Nils J. Nilsson, both now of Stanford University, developed way back in 1971. The general idea of STRIPS was to establish one or more goals along with a set of possible actions, each of which could be carried out only when its particular preconditions were satisfied. The planning system kept track of the physical environment and determined which actions were allowed. Carrying out one of them in turn modified the state of the environment, which therefore made other actions possible.
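The STRIPS idea fits in a few lines of code: a world state is a set of facts, each action lists its preconditions, the facts it adds, and the facts it deletes, and the planner chains together actions whose preconditions are currently satisfied. The sketch below is a minimal illustration of that scheme; the action and fact names are invented for this example, not taken from F.E.A.R. or the original STRIPS system.

```python
# Minimal STRIPS-style planner sketch: states are sets of facts; each
# action lists its preconditions, the facts it adds, and the facts it
# deletes. Action and fact names are illustrative, not from any real game.
from collections import deque

ACTIONS = {
    "draw_weapon":   ({"has_weapon"}, {"weapon_drawn"}, set()),
    "move_to_cover": ({"weapon_drawn"}, {"in_cover"}, {"exposed"}),
    "attack":        ({"weapon_drawn", "in_cover"}, {"target_suppressed"}, set()),
}

def plan(state, goal):
    """Breadth-first search over world states for a sequence of actions
    that makes every goal fact true."""
    frontier = deque([(frozenset(state), [])])
    seen = {frozenset(state)}
    while frontier:
        facts, steps = frontier.popleft()
        if goal <= facts:                      # all goal facts satisfied
            return steps
        for name, (pre, add, delete) in ACTIONS.items():
            if pre <= facts:                   # preconditions satisfied
                nxt = frozenset((facts - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan({"has_weapon", "exposed"}, {"target_suppressed"}))
# -> ['draw_weapon', 'move_to_cover', 'attack']
```

Note that nothing in the action list says "flank the player"; useful multi-step behavior falls out of chaining preconditions, which is exactly the emergent effect described above.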
The designers of F.E.A.R. gave its soldiers such goals as patrolling, killing the player’s character, and taking cover to protect their own virtual lives. The makers of the game also gave each kind of bot a set of possible actions with which to accomplish each of its goals. One advantage of this approach is that it saves the developers the burden of trying to specify a response to every situation that might arise. Further, it allows seemingly intelligent behaviors to appear almost magically—such as the maneuver described above.
In that instance, the three attackers were carrying out two types of basic actions. One is to move to covered positions that are as close as possible to the player’s character. The other is simply to move around obstacles. The combination creates something that was not explicitly programmed into the game at all: a devastating flanking maneuver.
The spontaneous emergence of such complex behaviors is important because it provides a sense of deeper intelligence. That’s really what gets your heart pounding when you play the game. But you’d also like your adversaries to become more cunning over time, and F.E.A.R. has no mechanism for accomplishing that.
Why do bots need to get smarter? Imagine a game of badminton in which your opponent always reacts to your serves in the same way, always falls for your drops, and never attempts to anticipate your smashes. It would be a boring match. Until recently, AI could offer video gamers nothing better: the imps of Doom, released in 1993, never shoot their fireballs preemptively, and the civil-protection officers in Half-Life 2 (2004) always take the nearest cover while reloading their weapons—to mention just two examples from well-known releases.
The standard solution is to add an element of randomness to the code that controls a bot’s decision making. Doing so varies a player’s experience, but the result does not necessarily come across as being intelligent.
A better approach is for the computer to learn about the player and to adapt a bot’s tactics and strategy appropriately. Of course, you don’t want the bot to become so good that it will win all the time; you just want it to give the human player a good run for the money. This capability, known as machine learning, is found in very few commercial games: Creatures, from the now-defunct Creature Labs, employed machine learning as early as 1997, as did Black & White, developed by the UK-based Lionhead Studios a few years later. But most video games are not able to “learn” on the fly or otherwise adapt to the person playing. Our group is hoping to push things forward in this regard using a system we’ve created for research purposes called PaSSAGE, which stands for Player-Specific Stories via Automatically Generated Events.
PaSSAGE, as its name implies, is all about storytelling, which has long been a staple of various role-playing games. But video games of all types rely to some extent on engaging storytelling. You can categorize such games by the way they vary their repertoire to appeal to different people.
Some games—Half-Life (1998), for example—are immensely popular even though they feature just a single linear story. So good scriptwriting can clearly go a long way. Other games, such as Star Wars: Knights of the Old Republic (2003), offer several alternatives to the main plot. This gives you the impression that you can shape your virtual fate—what psychologists call a sense of agency. That feeling of being in control is usually limited, however, because the branching plot lines often merge later on.
Titles like The Elder Scrolls IV: Oblivion (2006) and S.T.A.L.K.E.R.: Shadow of Chernobyl (2007) work similarly, taking one main story and complementing it with episodes drawn from a library of side quests. Other games, such as The Sims 2 (2004), go a step further by dispensing with a scripted plot altogether and creating an open-ended world in which players can effectively design their own happenings.
Although each of these techniques has enjoyed success, they all force the designer to make a trade-off between scriptwriter expressiveness and player agency. The approach we’ve taken with PaSSAGE avoids that conundrum by having the computer learn players’ interests and preferences and mold the story to suit them as the game progresses.
PaSSAGE uses the same game engine as Neverwinter Nights, a fantasy adventure set in medieval times, produced by BioWare of Edmonton. With PaSSAGE, scriptwriters determine only the most general arc to the story and provide a library of possible encounters the player’s character may have. The computer studies the player as he or she progresses and serves up the kinds of experiences that player most desires. For instance, if you like fighting, the game will provide ample opportunities for combat. If you prefer to amass riches, the game will conjure up ways for you to be rewarded for your actions. The software is able to make the sequence of events globally consistent by maintaining a history of the virtual world’s changing state and modifying the player’s future encounters appropriately. The game will therefore always appear to make sense, even though it unfolds quite differently for different people—or even for the same person as his or her moods and tastes change.
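The article does not spell out PaSSAGE’s internals, but one way to picture the player-modeling loop is as a running tally of preference weights that bias which encounter the game schedules next. The player styles, encounter names, and weights below are all invented for illustration; they are not PaSSAGE’s actual model.

```python
# Hypothetical sketch of preference-driven encounter selection, loosely
# in the spirit of PaSSAGE (not its actual implementation): the game
# keeps a weight per play style and picks the encounter whose appeal
# best matches what it has observed about the player so far.
PLAYER_MODEL = {"fighter": 0.0, "collector": 0.0, "storyteller": 0.0}

# Each candidate encounter appeals to the play styles listed with it.
ENCOUNTERS = {
    "ambush_in_the_pass":    {"fighter": 1.0},
    "hidden_treasure_vault": {"collector": 1.0},
    "mysterious_stranger":   {"storyteller": 0.7, "collector": 0.3},
}

def observe(style, strength=1.0):
    """Update the model whenever the player's choice reveals a taste."""
    PLAYER_MODEL[style] += strength

def next_encounter():
    """Choose the encounter whose appeal best matches the model."""
    def score(appeal):
        return sum(PLAYER_MODEL[s] * w for s, w in appeal.items())
    return max(ENCOUNTERS, key=lambda name: score(ENCOUNTERS[name]))

observe("fighter")            # the player picked a fight...
observe("fighter")            # ...and another one
observe("collector", 0.5)     # but also pocketed some loot
print(next_encounter())       # -> ambush_in_the_pass
```

A real system would also have to respect story consistency, which is why PaSSAGE keeps a history of the world’s state rather than scoring encounters in isolation.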
Machine learning can also be used to formulate the tactics that bots use, a job that a game’s designers must now do by hand. Pieter Spronck and his colleagues at Tilburg University, in the Netherlands, demonstrated this ability in 2005 using Neverwinter Nights. Spronck had one computer play against computerized opponents, programming it to get better over time by choosing the combat tactics that most often led to victory.
Members of our research group have been following through on Spronck’s work with Neverwinter Nights, using a different learning algorithm. Other colleagues of ours at the University of Alberta aim to do something similar with a multiplayer online game called Counter-Strike (2003), which pits a group of terrorists against a squad of antiterrorist commandos. Each character can be controlled either by a person or by the computer. As with F.E.A.R., players view the virtual world from the perspective of the characters they manipulate, making Counter-Strike an example of what’s known as a first-person-shooter game.
This project has so far produced a formal system for analyzing and classifying a team’s opening moves. That may not sound like much, but this task proved immensely challenging, because positions and actions are not nearly as constrained as they are in a game like chess. Researchers in our group have used this formalism to analyze computer logs of more than 50 hours of tournament-level play between seasoned Counter-Strike teams. Soon, we expect, computer bots programmed to learn tactics from such logs will play reasonably well—doing things a person might do. It will be a long time before these bots can beat expert human players, but that’s not the objective, after all: they just need to make for entertaining adversaries.
Jeff Orkin and Deb Roy of MIT are undertaking a similar effort with something they call The Restaurant Game, for which they are applying machine learning to the task of making bots speak and act believably in social settings. In this case, the bots’ behaviors are based on observations gleaned from more than 10 000 sessions of human play.
Machine learning can also pay off for poker, which has become an especially hot game in recent years with the explosion of opportunities for playing it online. The strongest programs for two-player fixed-bet-size poker attempt to calculate the mathematically optimal solution for winning each hand. It turns out that finding such solutions is computationally infeasible, at least right now—there are just too many possible combinations of cards and betting sequences. But members of our research group have devised ways to calculate near-optimal strategies using certain simplifying assumptions. For example, instead of allowing four rounds of betting—which is permitted in competition poker—the program sets the limit at three. By further reducing the complexity of the game in clever ways, the computational burden can be reduced to a reasonable level. BioTools, a commercial spin-off of our research group in Edmonton, has incorporated some of our group’s work in this area in its Poker Academy software.
Although this program plays poker pretty well, it can’t yet do what is most required—spot and exploit the other player’s weaknesses. Figuring out how to program a computer to do that is extraordinarily hard. Why so? Studying an opponent should be easy, after all—and it is, but only if you have thousands of poker hands to analyze. What do you do if you have only a few? To make matters worse, human poker players make a point of changing their style so as to be hard to predict.
Right now, the best poker-playing programs to come out of our research group will make money off your average human player, and they are beginning to beat even some of the best in the world in organized competitions. This suggests that poker is just now joining the ranks of chess and checkers—games at which computers have trounced even world champions.
One lesson that computer scientists learned from working on chess and checkers is that programs must strike a balance in how they decide what move to make next. At one extreme, the computer can look all the way to the end of a game, examine every possible final position, and evaluate whether each one constitutes a win, a draw, or a loss. Then it can work backward from those possibilities, assuming best play by both sides at every stage, to select the optimal move. But searching that far ahead would take a great deal of time—for chess, enough for the sun to burn out.
The alternative is to use an evaluation function that incorporates knowledge of the game, enough to go beyond just recognizing an outright win to sense, rather, the slightest inkling of an advantage. In the ideal case, such a program would play perfectly while looking only a single move ahead. Of course, such a sophisticated evaluation would also require a lot of computational power.
In actuality, chess-playing programs operate somewhere between these two extremes. The computer typically examines all the possibilities several moves ahead and evaluates each, say, by tallying points, having assigned a different number of points to a pawn, a knight, a rook, and so forth. The computer then works backward to the current board position. The result is a ranking of all the available next moves, making it easy to pick the best one.
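The backward-working procedure just described is depth-limited minimax search: look a fixed number of moves ahead, score the resulting positions with the evaluation function, and back the values up assuming each side picks its best option. Here is a bare-bones, game-agnostic sketch, tried out on a tiny invented game tree with made-up leaf scores:

```python
# Bare-bones depth-limited minimax: search a few moves ahead, score the
# leaves with an evaluation function, and back the values up the tree
# assuming best play by both sides.
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    options = moves(state)
    if depth == 0 or not options:          # search horizon or dead end
        return evaluate(state), None
    best_move = None
    if maximizing:
        best = float("-inf")
        for m in options:
            value, _ = minimax(apply_move(state, m), depth - 1, False,
                               moves, apply_move, evaluate)
            if value > best:
                best, best_move = value, m
    else:
        best = float("inf")
        for m in options:
            value, _ = minimax(apply_move(state, m), depth - 1, True,
                               moves, apply_move, evaluate)
            if value < best:
                best, best_move = value, m
    return best, best_move

# Toy two-ply game tree; leaf numbers are invented evaluation scores.
TREE = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
SCORES = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}

value, move = minimax("root", 2, True,
                      moves=lambda s: TREE.get(s, []),
                      apply_move=lambda s, m: m,
                      evaluate=lambda s: SCORES.get(s, 0))
print(value, move)  # -> 3 a   (branch 'b' looks tempting because of the
                    #    9, but the minimizer would answer with b1 = 2;
                    #    branch 'a' guarantees at least 3)
```

In a chess program, `moves` would generate legal moves, `apply_move` would update the board, and `evaluate` would tally material points as described above.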
The trade-off between blind searching and employing specialized knowledge is a central topic in AI research. In video games, searching can be problematic because there are often vast sets of possible game states to consider and not much time and memory available to make the required calculations. One way to get around these hurdles is to work not on the actual game at hand but on a much-simplified version. Abstractions of this kind often make it practical to search far ahead through the many possible game states while assessing each of them according to some straightforward formula. If that can be done, a computer-operated character will appear as intelligent as a chess-playing program—although the bot’s seemingly deft actions will, in fact, be guided by simple brute-force calculations.
Take, for example, the problem of moving around intelligently in a virtual world—such as finding the shortest path to take from one spot to another. That’s easy enough to figure out if you can fly like a crow. But what if you’re earthbound and there are obstacles to contend with along the way?
A general algorithm for determining the best route between two points on a map has been around since the late 1960s. The problem with this scheme—known as A*—is that the amount of time the solution takes to compute scales with the size of the territory, and the domains of video games are normally quite large. So there isn’t time to calculate the optimal path in this way. In some games, the computer needs to move hundreds—or even thousands—of bots around their virtual stomping grounds without the action grinding to a crawl, which means that computation times must often be kept to just a few milliseconds per bot.
To address this issue, our research group has developed a series of pathfinding algorithms that simplify the problem. Rather than considering each of the vast number of possible positions each bot can take, these algorithms seek good paths by using coarser versions of the game map. Some of these algorithms can use a set amount of time for planning each move, no matter how vast the playing field, so they can be applied to game worlds of any size and complexity. They are also suitable for environments that change frequently, for instance when paths are blocked, bridges destroyed, doors closed, and so forth. BioWare will be using some of our group’s pathfinding algorithms in its forthcoming Dragon Age: Origins.
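A* itself is easy to state: repeatedly expand the position that minimizes the cost traveled so far plus a heuristic estimate of the distance remaining. The grid-world sketch below shows plain A*, the baseline our group’s algorithms improve on; it is not the group’s abstraction-based method, and the grid is invented for the example.

```python
# Plain A* on a 4-connected grid; '#' cells are blocked, '.' cells open.
# Returns the length of a shortest path, or None if the goal is unreachable.
import heapq

def a_star(grid, start, goal):
    def h(cell):  # Manhattan distance: never overestimates on a grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start)]      # (estimated total, cost so far, cell)
    best_g = {start: 0}
    while frontier:
        _, g, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            return g
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] != "#"
                    and g + 1 < best_g.get((nr, nc), float("inf"))):
                best_g[(nr, nc)] = g + 1
                heapq.heappush(frontier, (g + 1 + h((nr, nc)), g + 1, (nr, nc)))
    return None

GRID = ["....",
        ".##.",
        "....",
        "...."]
print(a_star(GRID, (0, 0), (0, 3)))  # -> 3 (straight along the top row)
```

The scaling problem described above is visible here: the open-list and `best_g` bookkeeping grow with the map, which is why running the same search on a coarsened copy of the map, then refining, can keep per-move planning time bounded.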
This same general approach can help computers master real-time strategy games, such as the Warcraft series, introduced in 1994, which was developed by Blizzard Entertainment of Irvine, Calif. In this popular genre, players control armies of game characters that work together to gather resources and battle enemies on uncharted terrain. The fast pace and large numbers of bots make these games too complex for today’s AI systems to handle, at least at a level that would challenge good human players.
Our research tries to address this problem by considering only the relatively small set of high-level strategies each player can follow, such as having your army of characters rush the opponent or expand aggressively so as to take over more territory. The computer simulates what the outcome would be, given the current state of play, if each side picked one of these strategies and kept to it for the duration of the game. By taking into account whether its human opponent is using all or just a few particular strategies, the computer can choose the counterstrategy that is most likely to succeed. This approach works better than the scripted maneuvers computers now employ in real-time strategy games when pitted against a human player.
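The selection step described above can be sketched as picking a best response: given simulated outcomes for each pair of strategies and a belief about which strategy the opponent is following, choose the strategy with the highest expected outcome. The strategy names and win probabilities below are invented for illustration, not measured from any game.

```python
# Hypothetical best-response sketch for a real-time strategy game. Rows
# are our high-level strategies, columns the opponent's; entries are
# simulated win probabilities (invented numbers for illustration).
PAYOFF = {
    ("rush",   "rush"): 0.50, ("rush",   "expand"): 0.70, ("rush",   "turtle"): 0.35,
    ("expand", "rush"): 0.30, ("expand", "expand"): 0.50, ("expand", "turtle"): 0.65,
    ("turtle", "rush"): 0.60, ("turtle", "expand"): 0.40, ("turtle", "turtle"): 0.50,
}
STRATEGIES = ["rush", "expand", "turtle"]

def counter_strategy(opponent_belief):
    """Pick the strategy with the best expected simulated outcome,
    weighting each opponent strategy by how likely it seems."""
    def expected(ours):
        return sum(prob * PAYOFF[(ours, theirs)]
                   for theirs, prob in opponent_belief.items())
    return max(STRATEGIES, key=expected)

# If the opponent almost always rushes, a defensive posture looks best:
print(counter_strategy({"rush": 0.8, "expand": 0.1, "turtle": 0.1}))
# -> turtle
```

The hard part in practice is everything this sketch assumes away: estimating the payoff table by simulation from the current game state, and inferring the opponent’s likely strategy from incomplete scouting information.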
The need for better AI in commercial video games is readily apparent—especially to the people playing them. And their thirst for more computer-generated intelligence will only continue to grow. Yet game makers rarely have the time or resources to conduct the research required to solve the many thorny problems involved, which is why they have come to recognize the value of engaging the scholarly community—a community that is hard at work in such places as Georgia Tech; Simon Fraser University, in Burnaby, B.C., Canada; the University of Teesside, in the UK; and the Technical University of Lisbon, to name but a few of the many research centers around the world involved in this kind of work.
With the increased participation of academics in game-related AI research, it will not be long before major improvements are apparent in the quality of the games entering the market. But there is a more significant reason to applaud the growing interest of AI researchers in the video-game industry—something Laird and van Lent pointed out to us and other computer scientists nearly a decade ago. The work we must do to make games feel more realistic will also take us a long way toward our ultimate goal of developing general-purpose machine intelligence. Now that sounds like a smart move.
About the Author
VADIM BULITKO, JONATHAN SCHAEFFER, and MICHAEL BURO are all part of the GAMES group at the University of Alberta, in Canada. As its acronym suggests, their research group creates software for games, with the goal of beating—or at least seriously challenging—the human competitor. In 1994, Chinook, the team’s checkers program, became the first game software to win a championship against humans, earning it a place in Guinness World Records.
To Probe Further
The full range of interests of the authors’ research group is described at http://games.cs.ualberta.ca. The inner workings of F.E.A.R. are detailed in “Three States and a Plan: The A.I. of F.E.A.R.,” by Jeff Orkin, available at http://web.media.mit.edu/~jorkin/gdc2006_orkin_jeff_fear.pdf. Laird and van Lent’s call to action can be found in “Human-level AI’s Killer Application: Interactive Computer Games,” AI Magazine, Summer 2001.