Custom AI Programs Take on Top-Ranked Humans in StarCraft

As far as StarCraft goes, humans aren’t doomed (yet)


StarCraft screenshot. Image: Blizzard Entertainment

Every year, the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE) hosts a competition for computer programs that play StarCraft: Brood War against each other. StarCraft is a complex real-time strategy (RTS) game that poses a significant challenge to AI research “because of hidden information, vast state and action spaces, and the requirement to act quickly,” as the AIIDE website explains. “The best human players still have the upper hand in RTS games, but in the years to come this will likely change, thanks to competitions like this one.”

This year, 22 programs were entered into the competition at the University of Alberta, Canada, playing against each other continuously for two weeks on 12 virtual machines (that’s over 1,800 games each). At the end of the competition, three of the best AIs played some exhibition matches against a Russian StarCraft player who goes by Djem5 and is “widely regarded as one of the best non-Korean Protoss players in the world.”

So did the AI win? Are humans doomed? Spoiler alert: nope. As far as StarCraft goes, we’re still way better than the machines, but that doesn’t make it any less interesting to watch these pro human vs. AI matches unfold.

Doing well at StarCraft depends on (among other things) how many actions per second a player can take and how well that player can micromanage groups of units in multiple places at once. So you might expect that a computer (which is far better at both raw actions per second and multitasking) would be able to totally wreck a human player without much trouble. But the reason StarCraft is (still) played in tournaments worldwide is the depth of strategy involved, something that might not be immediately obvious from the look of the game. Winning is much more complex than just pushing more buttons in more places faster than your opponent does.

Note that this is a StarCraft competition, not a StarCraft 2 competition. Blizzard, the company that makes both games, is supportive of AIIDE, but the API that was developed to allow direct AI control of the game relies on reading and writing directly to game memory, which could potentially be used as a cheating mechanism to give players an unfair advantage. So while Blizzard is okay with the existing API for the older version of the game, it understandably doesn’t want to open up StarCraft 2 to exploits.
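For a sense of what “direct AI control” looks like from a bot author’s side, here is a minimal sketch of a bot module written against BWAPI, the community-built Brood War API the competition uses. The class name and the idle-worker behavior below are purely illustrative and not taken from any competition entry:

#include <BWAPI.h>

// Minimal BWAPI bot skeleton: the framework calls onFrame() once per game
// frame, and the bot reads the game state and issues orders through the
// BWAPI::Broodwar interface.
class SketchAIModule : public BWAPI::AIModule
{
public:
  void onStart() override
  {
    BWAPI::Broodwar->sendText("Hello from a very simple bot!");
  }

  void onFrame() override
  {
    // Example order: send any idle worker to the nearest mineral patch.
    for (auto &unit : BWAPI::Broodwar->self()->getUnits())
    {
      if (unit->getType().isWorker() && unit->isIdle())
      {
        if (BWAPI::Unit patch = unit->getClosestUnit(BWAPI::Filter::IsMineralField))
          unit->gather(patch);
      }
    }
  }
};

Everything a competition bot does, from scouting to build orders to unit micro, ultimately funnels through calls like these.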

Anyway, let’s watch one of the games: this is the winning AI, Tscmoo, playing against Djem5. There’s some nice commentary so you can tell what’s going on:

“In game one of the series, Djem5 did a standard Zealot rush vs. Tscmoo, who did a Hatchery-first Hydralisk build. Tscmoo tried to defend the rush by placing static defenses (Sunken Colonies) near its expansion; however, the human player realized that he was able to run by those defenses into the main base, where he was able to kill the bot's workers, crippling its economy and eventually winning the game. This game is a good example of the type of calculation that humans are quite good at - recognizing the global impact of actions rather than concentrating on local battles. Most of the bots in the competition would have seen the Sunken Colonies in the expansion and decided that they were the first target they should attack, unaware of the long-term benefits of running by them to the main worker line. This sort of long-term decision making is what makes RTS games such a difficult domain for traditional AI methods.”

You can watch the rest of the AI vs. human matches here, but sadly, the bots don’t win a single one.

Each developer was also asked to provide an FAQ, in which they answer a few questions about their AI bots and the strategies that they decided to use. Here’s one fascinating excerpt about how Tscmoo decides whether to attack an opponent:

“My bot separates all enemy units into distinct combat groups. Enemy units that are near each other are added to the same group, though one unit can only be in one group. It then sorts each group by a value which is, roughly: worker units are super valuable and army units have negative value (unless they are attacking our base, in which case they're super valuable too). Then it uses a combat simulation to determine how many army units to assign to each group. It essentially assigns enough units to beat the enemy units. I use my own pathfinding to make sure it only sends units to a group that it can actually reach. If a unit must go through another combat group to reach the target group then it will not be assigned. If my bot has surplus units left after assigning enough units to each group, then the rest will simply be sent to the largest group.

The result from this is that my bot will assign units to attack multiple locations at once, and it will prioritize attacking workers or small groups of enemy units. It will also constantly reschedule units on demand; if one of our bases is attacked it should send enough units to defend, but it will not pull its entire army back unless it is needed. At least in theory.

Despite not actually wanting to engage the enemy army, it will often do so regardless, simply because they are blocking the path to any more valuable target, or because enough units have been dispatched to them.

Each combat group also generates a threat area around itself, in a radius around each unit slightly longer than its attack range. This is primarily used when units are retreating; for each group, the combat simulation is used to determine if our units should attack or retreat. If they retreat, then each unit is sent to the edge of the retreat area. With large numbers of units, this will create a concave of our units towards the enemy units, where they might regroup or run back and forth like idiots and suffer more losses than necessary. When my bot has far fewer units than the enemy in that group, the result is often that the opponent bot will chase after our units and they will be led on a sightseeing trip around the map.”

This gives a good sense of some of the complexities involved in the game, and it helps explain why some of the bot’s seemingly nonsensical behavior is really just the bot following rules that make sense in general, but that don’t apply to the specific situation the game has evolved into.
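To make the grouping-and-assignment idea a bit more concrete, here is a rough, self-contained C++ sketch of that kind of logic. To be clear, this is not Tscmoo’s actual code: the types, the scoring constants, and the stand-in “combat simulation” below are all invented for illustration.

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical stand-ins for real game objects; a real bot would read these
// from the game state through BWAPI every frame.
struct Unit {
    double x = 0, y = 0;    // map position
    bool isWorker = false;  // workers are high-value targets
    double strength = 1.0;  // crude substitute for a real combat simulation
};

struct CombatGroup {
    std::vector<Unit> enemies;     // enemy units that are near each other
    double value = 0.0;            // higher = more attractive target
    std::size_t assignedCount = 0; // how many of our units get sent at it
};

// Cluster enemy units: a unit joins an existing group if it is within
// `radius` of any member, otherwise it starts a new group.
std::vector<CombatGroup> clusterEnemies(const std::vector<Unit> &enemies, double radius) {
    std::vector<CombatGroup> groups;
    for (const Unit &e : enemies) {
        CombatGroup *home = nullptr;
        for (CombatGroup &g : groups) {
            for (const Unit &member : g.enemies) {
                if (std::hypot(e.x - member.x, e.y - member.y) <= radius) { home = &g; break; }
            }
            if (home) break;
        }
        if (!home) { groups.emplace_back(); home = &groups.back(); }
        home->enemies.push_back(e);
        // Rough scoring in the spirit of the FAQ: workers are very valuable,
        // ordinary army units count against the group.
        home->value += e.isWorker ? 10.0 : -1.0;
    }
    return groups;
}

// Assign just enough of our units to beat each group, most valuable group
// first; any surplus is simply sent to the largest group.
void assignArmy(const std::vector<Unit> &ours, std::vector<CombatGroup> &groups) {
    std::sort(groups.begin(), groups.end(),
              [](const CombatGroup &a, const CombatGroup &b) { return a.value > b.value; });
    std::size_t next = 0;  // index of the next unassigned friendly unit
    for (CombatGroup &g : groups) {
        double need = 0.0;
        for (const Unit &e : g.enemies) need += e.strength;
        double have = 0.0;
        while (next < ours.size() && have <= need) {  // "enough units to beat the enemy"
            have += ours[next++].strength;
            ++g.assignedCount;  // a real bot would issue attack orders here
        }
    }
    if (next < ours.size() && !groups.empty()) {
        auto largest = std::max_element(groups.begin(), groups.end(),
            [](const CombatGroup &a, const CombatGroup &b) { return a.enemies.size() < b.enemies.size(); });
        largest->assignedCount += ours.size() - next;
    }
}

A real bot would re-run something like this every frame or two as the game state changes, and would layer on the pathfinding-based reachability checks and the retreat logic the FAQ describes.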

Despite these quirks, there’s still a lot of optimism about the future of the competition, according to its organizers:

“It is obvious from the results of the human vs. machine match that StarCraft AI bots still have a long way to go before beating the best expert human players. However, despite Djem5 making the bots look silly this year, several bots such as Tscmoo and krasi0 have done additional testing vs. human players on ICCup, in which they were able to defeat D-ranked, and even some C-ranked, players. After the human players have played one or two games against the bots, they are easily able to detect and exploit small mistakes that the bots make in order to win the majority of games. So while bots are able to win some games against decent amateur human players, they do not do well in best-of-n game scenarios where the humans get to learn their behaviours.

With the skill level of bots continuing to rise, and by keeping the competition open source and fixing the blunders that the bots made this year, I believe that in a year or two we will have several bots which will be able to easily defeat amateur human players. We are still a long way from beating the professionals though!”

All of the code that the bots use is available at the website below, and the StarCraft: Brood War API can be found on GitHub. Even more videos, including some amazing bot vs. bot matches with live commentary, can be seen on YouTube here.

[ 2015 AIIDE StarCraft AI Competition ] via [ Hacker News ]
