Pictionary-Playing AI Sketches the Future of Human-Machine Collaborations

As either “guesser” or “drawer,” the Allen Institute’s new AI cooperates with a human player

4 min read
IEEE Spectrum editor Eliza Strickland draws and uses icons to convey the phrase "repairing an engine in the parking lot" for Iconary.
Photo: Randi Klett

What do the games of chess, Jeopardy!, Go, Texas Hold’em, and StarCraft have in common? In each of these competitive arenas, an AI has resoundingly beaten the best human players in the world. These victories are astounding feats of artificial intelligence—yet they’ve become almost humdrum. Another day, another triumph over humans.

At the Allen Institute for Artificial Intelligence (AI2), in Seattle, researchers set out to do something different. Their AI, dubbed AllenAI, collaborates with a human player in a Pictionary-style drawing-and-guessing game that can be won only through human-AI cooperation.

Want to see for yourself? Go play it. AI2 has just launched a public version of the game, a simplified take on Pictionary that it calls Iconary. The current version of AllenAI has limited abilities—but as it engages with a diverse set of players, with different aptitudes and game strategies, it will get better.

However, the goal isn’t to build “the world’s best Pictionary player,” says Ali Farhadi, senior research manager of the computer vision group at AI2. Rather, Farhadi sees Pictionary as a way to cultivate skills that can be transferred out of the game realm.

“To play Pictionary, the AI has to do some common sense reasoning, it has to know about abstraction, and it needs a little bit of theory of mind,” Farhadi tells IEEE Spectrum. (When you hypothesize about another person’s emotions or thoughts, that’s what psychologists call theory of mind.) “By learning to play Pictionary, the AI can acquire skills and knowledge that transfer to real world applications.”

The lessons learned may be applicable to any system that requires human-AI communication, says Farhadi. He imagines more productive interactions with voice assistants like Alexa, and with helper robots that adapt based on feedback.

The Allen Institute isn’t the only institution that’s investigating collaborative AI via gameplay. OpenAI, a San Francisco-based research organization, has a team of five AI agents that work together to play the video game Dota 2, matching their skills against a team of humans (the AIs took on human champions last summer, but didn’t win). And Spectrum recently covered a competition that requires AI agents to cooperate in the game of Minecraft.

Those efforts require AIs to collaborate with each other, as a step toward human-AI collaboration. But the Iconary project jumps straight to that end goal.

In the classic game of Pictionary, one member of a team has to draw a sketch representing a certain word or phrase, and the other team members have to guess it. In Iconary, AllenAI can play the role of either drawer or guesser. When it’s the drawer, it pops up a series of icons, and the human tries to guess the phrase. If the human is stumped, the AI will elaborate on its image.

When AllenAI is guessing, the human player draws on a sketch pad, then selects the icon that best represents whatever she was trying to draw. By repeating this process and arranging the resulting icons on a board, she creates a little narrative to help AllenAI to guess the phrase.
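The drawer-and-guesser exchange described above can be sketched as a simple turn loop. This is purely illustrative—AI2 has not published its implementation—and every class and method name here is hypothetical:

```python
# Hypothetical sketch of an Iconary-style round: the drawer arranges icons,
# the guesser proposes a phrase, and the drawer elaborates on the board
# until the guess is right or the round runs out of turns.

def play_round(drawer, guesser, phrase, max_turns=5):
    board = drawer.initial_icons(phrase)      # drawer's first icon arrangement
    for _ in range(max_turns):
        guess = guesser.guess(board)          # guesser interprets the board
        if guess == phrase:
            return True                       # cooperative win
        board = drawer.elaborate(board, phrase, guess)  # add or rearrange icons
    return False

# Toy agents, just to make the loop concrete: the drawer "reveals" one word
# of the phrase per turn as an icon, and the guesser reads the board literally.
class ToyDrawer:
    def initial_icons(self, phrase):
        return phrase.split()[:1]

    def elaborate(self, board, phrase, guess):
        words = phrase.split()
        return words[:len(board) + 1]

class ToyGuesser:
    def guess(self, board):
        return " ".join(board)
```

In the real game the "icons" carry far more ambiguity than words do, which is exactly what forces both partners to reason about each other.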

Currently, the game includes 75,000 phrases, which must be depicted via 12,000 icons. The researchers say they restricted the number of icons so both the AI and human players would have to be creative about combining them, using simple elements to build up to larger concepts.

Throughout the course of a game, AllenAI adapts to the individual player’s needs, says Aniruddha Kembhavi, a research scientist at AI2 who co-led the Iconary project. This is where it displays a rudimentary theory of mind, he says. “It needs to put itself in the mind of its partner, and decide, ‘What do I need to draw to get this human to guess right?’”

For AllenAI’s training, it observed some 100,000 Iconary games played by workers on the crowd-labor platform Mechanical Turk, gradually learning successful game strategies. To accelerate the learning curve, it also played games of Iconary against itself, racing through games at a speed no human could match.

That type of self-play has been key to other AI gaming victories, such as DeepMind’s AlphaZero system, which taught itself to play chess, Go, and shogi given nothing but the rules. But Kembhavi says his team couldn’t rely exclusively on self-play to train AllenAI: “It might do a great job of playing Pictionary with itself—but its drawings might not be understandable to humans,” he says.
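The two-stage recipe described above—imitate logged human games, then continue with accelerated self-play—can be illustrated abstractly. This is a generic sketch of that training pattern, not AI2's published method; the agent interface is invented for illustration:

```python
# Generic two-stage training sketch: imitation learning on human
# demonstrations, followed by self-play. Purely illustrative.

def train(agent, human_games, selfplay_rounds):
    # Stage 1: imitation -- learn from logged human games
    # (AI2 reportedly used some 100,000 Mechanical Turk games).
    for game in human_games:
        agent.update_from_demonstration(game)
    # Stage 2: self-play -- the agent plays both roles against itself,
    # racing through games far faster than a human partner could.
    for _ in range(selfplay_rounds):
        game = agent.play_against_self()
        agent.update_from_outcome(game)
    return agent

# Toy agent that simply counts what it has learned from.
class ToyAgent:
    def __init__(self):
        self.demos_seen = 0
        self.selfplay_games = 0

    def update_from_demonstration(self, game):
        self.demos_seen += 1

    def play_against_self(self):
        return "game-record"

    def update_from_outcome(self, game):
        self.selfplay_games += 1
```

Kembhavi's caveat corresponds to keeping stage 1 in the mix: pure self-play can drift toward conventions only the AI itself understands.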

The AI2 team has yet to publish any papers on the project or its methods. Several AI researchers told IEEE Spectrum that they can’t comment on the work’s scientific significance without a better understanding of “what’s going on beneath the hood,” as Mark Riedl, an associate professor at Georgia Tech, puts it. Still, he says the Iconary project seems to be “a positive step.” Riedl, who directs the Entertainment Intelligence Lab, works on AI collaboration and creativity; he’s currently working on an AI that can play the role-playing game Dungeons & Dragons.

Riedl argues that AI research needs to move away from games with a known set of rules and possible actions, which includes complex games like Go and StarCraft. He sees real potential for the advancement of artificial intelligence in games that are unconstrained and open, that require ingenuity and imagination. While Iconary is a greatly simplified version of Pictionary, “I think it has worthy goals,” he says. “A full version of Pictionary could be a very interesting research problem.”

Even in the current version, Riedl says, he appreciates the emphasis on common sense reasoning. An AI system that can transfer its skills from a game to real-world applications will need a grounding of basic knowledge, he says: “How does the world work? How do social interactions work? What are the scripts we tend to follow?”

AI2’s Farhadi says that even as Iconary helps the AI learn about society, the human players will also learn about AI. The hoopla about AIs defeating human grandmasters in certain games has led to a misperception that AI will soon surpass humans in general intelligence—when in fact, he says, “it’s not as intelligent as a dog.”

While Elon Musk and some other tech luminaries have raised the specter of a superintelligent AI that will wipe out humanity, or maybe just take all our jobs, Farhadi puts those ideas in the category of science fiction. “It’s far from reality, and it’s far from our vision,” he says. “We envision a world in which humans and AI are working together.”

Will AI Steal Submarines’ Stealth?

Better detection will make the oceans transparent—and perhaps doom mutually assured destruction

11 min read
The Virginia-class fast attack submarine USS Virginia cruises through the Mediterranean in 2010. Back then, it could effectively disappear just by diving.

U.S. Navy

Submarines are valued primarily for their ability to hide. The assurance that submarines would likely survive the first missile strike in a nuclear war and thus be able to respond by launching missiles in a second strike is key to the strategy of deterrence known as mutually assured destruction. Any new technology that might render the oceans effectively transparent, making it trivial to spot lurking submarines, could thus undermine the peace of the world. For nearly a century, naval engineers have striven to develop ever-faster, ever-quieter submarines. But they have worked just as hard at advancing a wide array of radar, sonar, and other technologies designed to detect, target, and eliminate enemy submarines.

The balance seemed to turn with the emergence of nuclear-powered submarines in the early 1960s. In a 2015 study for the Center for Strategic and Budgetary Assessment, Bryan Clark, a naval specialist now at the Hudson Institute, noted that the ability of these boats to remain submerged for long periods of time made them “nearly impossible to find with radar and active sonar.” But even these stealthy submarines produce subtle, very-low-frequency noises that can be picked up from far away by networks of acoustic hydrophone arrays mounted to the seafloor.
