On Saturday I got to witness something that has only happened a handful of times: a computer beating a professional at a game of go. Now, to be fair, the game was rigged to give the program, Many Faces of Go, a huge advantage (it was allowed to place seven stones on the board before the match started), and even so the human lost by only 4.5 points. But the fact that a computer program can win at any handicap level means that it's finally possible to make quantitative estimates about how long we have until the best human falls prey to the Deep Blue of go.
The match I watched took place at the annual meeting of the American Association for the Advancement of Science, held in Chicago over the weekend. It was part of a talk about computer science and games. Humans have been playing go in Asia for three to four thousand years, and until recently even amateurs could easily beat the best software out there. Back in 2007, Feng-Hsiung Hsu wrote "Cracking Go" for IEEE Spectrum, in which he discussed how brute-force computing techniques were finally starting to make progress in computer go.
That progress finally resulted in the first computer go win in August 2008. In addition to Many Faces of Go, programs named Crazy Stone and MoGo have combined to win five more high handicap games since then, including the one in Chicago. These victories allowed the organizer, Robert A. Hearn, a researcher at Dartmouth College, to make a back-of-the-envelope calculation: if Moore's law and improvements to go algorithms continue at the current pace, he predicts that computers will be able to beat the best players in the world (in an even game) in about 28 years (roughly in agreement with an informal poll of the computer go mailing list, where the average estimate was 20 years).
In this match, the software ran on a 32-node cluster and played out about 7 million complete games for each move it had to make. As the game concluded, James Kerwin, the human player, noted that he lost the game when he made a blunder in the middle of the board, which cost him a few points. That made me wonder about the differences in the kinds of mistakes that computers and humans make. "Humans are more likely to miss an entire branch and variation of gameplay," Hearn told me, but programs often make moves that no human player would ever make.
IEEE Spectrum editor Philip Ross has chronicled this stylistic difference in the (computationally) easier game of chess: from the near-parity of human-computer chess six years after Deep Blue's much-publicized victory, to Kasparov's anti-computer strategies just five years ago, to the anti-anti-computer strategies the program Fritz began perfecting in 2005. Most chess programs rely on tree searches, in which all possible moves are searched out to a certain depth and the resulting positions are evaluated. But as Hsu's article explains, go has many more legal positions, which makes these types of searches exponentially more difficult. Checkers, for instance, which was formally solved in 2007, has on the order of 10^20 possible legal positions; chess has on the order of 10^44. For go, the 19x19 board has on the order of 10^171 positions. In addition, go scores are only tallied at the end of a game, so it's very hard to determine who is ahead after a given number of moves.
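To make the idea concrete, here's a minimal sketch of that kind of fixed-depth tree search (minimax). The nested-list tree and its leaf scores are toy assumptions of mine, not anything from a real engine; actual chess programs add pruning and search far deeper, but the principle is the same, and you can see why the cost explodes as the number of moves per position grows.

```python
# Minimax over a toy game tree. Interior nodes are lists of child positions;
# integer leaves stand in for the evaluation of a position at maximum depth.

def minimax(node, maximizing=True):
    """Return the best score the current player can guarantee from this node."""
    if isinstance(node, int):        # leaf: an already-evaluated position
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Two plies: the maximizer picks a branch, then the minimizer picks a leaf.
tree = [[3, 12], [2, 4], [14, 5]]
print(minimax(tree))  # -> 5: branch values are min(3,12)=3, min(2,4)=2, min(14,5)=5
```

With a branching factor b and depth d, this visits on the order of b^d positions, which is exactly why go's far larger move count overwhelms the approach that works for chess.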
To overcome this limitation, top go programs have increasingly turned to Monte Carlo methods. In a Monte Carlo search, the computer plays out lots of random games all the way to their conclusion. Some of these games share intermediate configurations (called nodes). At each node, the program keeps track of the winning percentage and the number of games that have passed through it. This allows the software to quickly identify the most useful nodes for further exploration. Such a technique also scales better with parallel processing, because more cores can simply play out more random games. Hsu was cautiously optimistic about Monte Carlo methods in 2007, but he wrote, "My hunch, however, is that they won't play a significant role in creating a machine that can top the best human players in the 19-by-19 game." Now, however, it looks like Monte Carlo methods are the future of computer go.
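Here's a minimal sketch of the core idea, using the simplest ("flat") form of Monte Carlo search on a toy game of Nim rather than go; the game choice and parameters are my own assumptions to keep the example short. For each legal move, the program plays many random games to completion and keeps the move with the best win rate; the real go programs layer tree statistics on top of this, tracking winning percentages at shared nodes as described above.

```python
import random

def random_playout(pile, to_move):
    """Finish a Nim game (take 1-3 stones; taking the last stone wins)
    with uniformly random moves. Return the winner, player 0 or 1."""
    while True:
        pile -= random.randint(1, min(3, pile))
        if pile == 0:
            return to_move
        to_move = 1 - to_move

def best_move(pile, n_playouts=2000):
    """Flat Monte Carlo search: score each legal move for player 0 by the
    fraction of random playouts won after making it, and pick the best."""
    win_rate = {}
    for take in range(1, min(3, pile) + 1):
        if pile - take == 0:          # taking the last stone wins outright
            return take
        wins = sum(random_playout(pile - take, to_move=1) == 0
                   for _ in range(n_playouts))
        win_rate[take] = wins / n_playouts
    return max(win_rate, key=win_rate.get)

random.seed(0)
print(best_move(5))  # -> 1: leaving the opponent a pile of 4 is a losing position
```

Note that the playouts evaluate a move purely by final wins and losses, which is exactly what makes the approach attractive for go, where no reliable mid-game score exists. And since each playout is independent, more cores really do just mean more random games.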
The researchers on the panel also discussed the ways that go can impact computer science. Elwyn Berlekamp has applied game theory to go and proved that certain configurations near the end of games can be solved analytically. He's now working to understand where the reductionism of Western science (which can analytically solve late-stage go games) meets the more holistic approach of Eastern cultures. Berlekamp has also developed a variation of the game, which he calls "coupon go," that gives researchers a way to probe the quantitative value of any given move. It's worth checking out.
Hearn is more interested in undecidable problems, for which no conceivable algorithm could ever be designed that always gives the right answer. While he and his team have created several artificial games that qualify as undecidable, he also pointed to another go variation that might: a bizarre game called Rengo Kriegspiel, in which players inherently have incomplete information.
I'm looking forward to watching programs get better and better at go. Maybe I should learn to play before it's an obsolete human skill.