The December 2022 issue of IEEE Spectrum is here!


AI’s Grandmaster Status Overshadows Chess Scandal

Magnus Carlsen–Hans Niemann controversy underscores humans’ perpetual underdog role


Magnus Carlsen [left] and Hans Niemann compete during the 2022 Sinquefield Cup at the Saint Louis Chess Club.

Crystal Fuller/Saint Louis Chess Club

Last week Magnus Carlsen, the world chess champion, directly accused Hans Niemann, a U.S. grandmaster, of cheating during their game at the Sinquefield Cup, in St. Louis, Mo. He thus made plain an accusation he had been hinting at for weeks.

Carlsen has so far provided no evidence to back up his charge, nor has he specified how the cheating took place. Everyone agrees, however, that if there was cheating, then it must have involved computers, because nothing else could dismay Carlsen, whose rating of 2856 is higher than that of any other player. And everyone seems to have chosen sides.

Those who back Carlsen point to Niemann’s own admission that he used computers to cheat in online play at least twice—once at age 14 and again at 16; Niemann is now 19. Others note that his performance has risen very rapidly in the past two years. Still others raise an eyebrow at the large number of games he has played in recent years that computer analysis scores as nearly perfect. And behind it all are statements from leading players that they are convinced cheating happens all the time nowadays, though hardly anybody ever gets caught.


What makes the scandal so big is not merely the level of the players. In 1961 the great Bobby Fischer wrote an article for Sports Illustrated titled “The Russians Have Fixed World Chess.” He alleged that Soviet chess players arranged draws to ensure that one of them would win a tournament.

Nor is the scandal notable for flagrancy. In 1967 Milan Matulović, a Yugoslavian grandmaster, shockingly took back a move he had just played and only then said “J’adoube,” the French phrase uttered when a player merely adjusts the position of a chessman. Players thereafter called him “J’adoubavić.”

No, what makes today’s accusations resonate is the pervasive role of chess computers. They give children around the world sparring partners that earlier generations couldn’t have dreamed of facing, even if they’d lived next to the Moscow Central Chess Club. No wonder prodigies of the game have gotten younger and younger.

And computers do so much to help with home preparation of the opening, the early moves of a game, that players, including Carlsen, will sometimes deliberately play a second-best move just to force their opponents out of “book.”

Finally, the computer analysis offered during Internet broadcasts of ongoing tournaments can look 12 moves ahead within a second or two. It shows the amateurs in the audience much that the grandmasters miss, creating the illusion that the amateurs actually understand what’s going on. Of course, any viewer could give illicit help to a player if provided a means of communication.

Several things are at stake. There is the prize money, which runs to the hundreds of thousands of dollars for the circuit of which the Sinquefield Cup tournament is a part. There are the invitations to future events, which are often contingent on doing well in qualifying events. Then there are the rating points. Carlsen cares deeply about this metric: Although he recently declined to defend his World Championship title in 2023, he insists that he will continue to play in the hope of raising his rating to an unprecedented 2900.

The cheating to which Niemann does admit—in his younger years, during online play—was itself detected with the aid of the computers of the online playing forum in question. Recently, however, the Wall Street Journal reported that an internal investigation by that forum found that Niemann had in fact cheated in more than 100 online games, most recently when he was 17. The company did not impugn the grandmaster’s over-the-board play.


Online play is fast and loose, and its computerized basis may provide clues that a cheat-detection algorithm can catch. But over-the-board play offers less data. Often there are only one or two key points in a game at which cheating might occur; a little hint, offered at such a point, is enough to make the difference to a grandmaster. Even a duffer, when shown a chess problem, may be truly stumped. But told that it is “mate in three moves,” the duffer may see the light. Just a phrase—“the rook,” say, or “double attack”—may also make the idea apparent.
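One signal such an algorithm can start from is how often a player’s choices coincide with an engine’s first choice. The sketch below is only a naive illustration of that idea: the function name and the sample moves are invented here, and real detection systems weight each move by position difficulty, player rating, and statistical baselines.

```python
# Illustrative sketch of an "engine match rate" screen. All data here is
# hypothetical; this is not any site's actual detection method.

def engine_match_rate(player_moves, engine_best_moves):
    """Fraction of moves on which the player chose the engine's top move."""
    matches = sum(p == e for p, e in zip(player_moves, engine_best_moves))
    return matches / len(player_moves)

# Hypothetical game: the player's moves vs. an engine's first choices.
player = ["e4", "Nf3", "Bb5", "Ba4", "O-O", "Re1"]
engine = ["e4", "d4", "Bb5", "Ba4", "O-O", "Re1"]

rate = engine_match_rate(player, engine)
print(f"Engine match rate: {rate:.0%}")  # 5 of 6 moves match: 83%
```

A consistently high match rate across many games, far above what a player’s rating predicts, is what prompts closer statistical scrutiny; a single game proves nothing.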

A key hint can be encoded in just a few bits of data, which means it might be transmitted, perhaps via a buzzer in the player’s shoe, on his body—or inside it. Do not laugh, but innuendos have been made concerning the possible use of a buzzing sex toy. As a joke, Niemann declared that he was willing to play naked. A camsite called Stripchat promptly offered him [US] $1 million to do so.
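The “few bits” claim is easy to check with the standard information-theoretic bound: singling out one move among n equally likely candidates takes about log2(n) bits. A quick back-of-the-envelope:

```python
import math

def bits_to_identify(n_choices):
    """Bits needed to single out one item among n equally likely choices."""
    return math.ceil(math.log2(n_choices))

print(bits_to_identify(35))  # a typical middlegame has ~35 legal moves -> 6 bits
print(bits_to_identify(3))   # picking among 3 candidate moves -> 2 bits
```

Even naming the exact move in an average position takes six bits; a mere nudge (“the rook”) takes fewer still, which is why a buzzer delivering a handful of pulses would suffice.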

Computers loom so large because they now play chess like gods. The best free program, Stockfish 14, is rated at 3534—678 points ahead of Carlsen. That’s enough of a gap to predict a winning expectancy of 99 percent.
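That winning expectancy follows from the standard Elo formula, under which the stronger player’s expected score is 1/(1 + 10^(-gap/400)). A quick check (the plain logistic model puts a 678-point gap at roughly 98 percent, in the same near-certain range; the exact figure depends on the model’s constants):

```python
def elo_expectancy(rating_gap):
    """Expected score for the stronger player under the standard Elo model."""
    return 1 / (1 + 10 ** (-rating_gap / 400))

print(f"{elo_expectancy(678):.3f}")  # ~0.980 for a 678-point gap
```

An expected score that close to 1.0 means the human is not merely an underdog but, for practical purposes, out of the running.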

In the early days, when chess programs were a lab project for AI, they played like idiots. Then the programmers began to enter their creations in competitions, and the programs got good. I learned that the hard way.

In late 1974, at a student tournament held in Evanston, Ill., I was paired against Northwestern University’s Chess 4.0 program, played the Sicilian Defense, blundered a knight for two pawns, mentally kicked myself, and hastily resigned. David Slate, the programmer, waited patiently as I completed the ritual of resignation, which involves signing the score sheet and handing it to the tournament director—in this case, him. Only then did he tell me that if I’d just played on, I would have gotten a draw.


“It can’t play endgames,” Slate said. I kicked myself again.

Back then I was rated somewhere in the 1600s, about average for an amateur. Still, I was the highest-rated player any machine had yet beaten in a tournament game. It’s my claim to fame. Chess 4.0 went on to beat another guy higher rated than me, somewhat soothing my wounded pride.

It took years for the Northwestern program to reach 2000. Other university programs then took the lead, until at last a machine originating at Carnegie Mellon and redomiciled at IBM reached 2600, about grandmaster strength. That was strong enough to beat my old, 1600-rated self 99.9 percent of the time. In 1997 an even stronger version of the IBM machine, dubbed Deep Blue, beat Garry Kasparov, the reigning world champion.

Deep Blue filled a room. Today, a smartphone can crush any human player.

The Conversation (3)
Matthias Bendewald, 06 Oct 2022

Have you noticed how much stronger engines are today compared with humans? Giving a human an engine to consult would create a bizarre situation in which the human makes no difference at all.

And yes, the engines have already changed chess a lot. Variations that even beginners play today were not known 20 years ago. If you want to see humans competing against each other, engines have to be excluded.

Suthikshn Channarayapatna, 06 Oct 2022

I think it’s better to accept the ubiquitous presence of AI. I would advise the chess world to allow the players to search for better moves using AI, then select and play the moves. It would be like other sports, such as football or basketball, where coaches are present on the field. Chess should evolve, adapting to rapid technological development.


Why Functional Programming Should Be the Future of Software Development

It’s hard to learn, but your code will produce fewer nasty surprises

Shira Inbar

You’d expect the longest and most costly phase in the lifecycle of a software product to be the initial development of the system, when all those great features are first imagined and then created. In fact, the hardest part comes later, during the maintenance phase. That’s when programmers pay the price for the shortcuts they took during development.

So why did they take shortcuts? Maybe they didn’t realize that they were cutting any corners. Only when their code was deployed and exercised by a lot of users did its hidden flaws come to light. And maybe the developers were rushed. Time-to-market pressures almost guarantee that their software contains more bugs than it otherwise would.
