For Better AI, Turn Up the Contrast on Reality

Humans are sometimes criticized for seeing the world in black and white, but maybe AI should learn the same trick


Many years ago, I was touring a working orchard with a friend. His son, who was the orchard’s manager, was describing his work. His father and I, being engineers, got into a discussion about how a robot might be instructed to pick the fruit.

The son stopped and stared at us in consternation. “What are you guys talking about!? It’s simple—you see it, you pick it.”

Not so simple: It’s only now, decades later, that commercial fruit-picking robots are on the radar. Many everyday tasks seem trivial yet are difficult to describe and structure for automation. Humans have the advantage of common-sense reasoning, which runs far deeper than most people realize.

In my February column, I wrote about our success in creating computer programs that can master games like chess and poker. By their descriptions, these games are extraordinarily simple—a small number of immutable rules involving a few elements, whether they be chess pieces or playing cards. But there is a paradox, because underneath this simplicity is an enormous complexity. Nonetheless, that complexity is precisely defined, and that’s what we engineers are good at.

However, life is fuzzy and often ill defined. (If only real-life tasks could be modeled as board games, we’d be in business.) I love the idea of fuzzy logic, but on reflection, I actually do want my computer to be precise.

But maybe there could be some mechanism that would take fuzziness as an input and hand off well-defined output to a computational unit. This unit could then bring to bear the kind of techniques we’ve used to master games.

In real life we have such mechanisms. Consider American football, for example. There are rules about what happens following a forward pass that depend on whether it is caught or not caught. But “catch” is a fuzzy concept. So we have a device that digitizes the analog “catch.” It is called a referee. In law we have the equivalent in the courts, where judges and juries use subjective standards such as “reasonableness” to determine whether an action falls on one side of the law or the other. And if we don’t like the court’s digitization, we treat it as an analog result and send it through another court.
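In software terms, this referee is just a thresholding function. Here is a minimal sketch (my own illustration, not anything proposed in the column), where the function name, threshold value, and the idea of a stricter appellate threshold are all assumptions for the example:

```python
def digitize(fuzzy_score, threshold=0.5):
    """Collapse an analog degree of truth in [0, 1] into a crisp
    yes/no ruling -- the role the referee plays for a 'catch'."""
    return fuzzy_score >= threshold

def appeal(fuzzy_score, appellate_threshold=0.7):
    """Re-digitize the same analog evidence under a stricter
    standard, as a higher court might."""
    return digitize(fuzzy_score, appellate_threshold)

# A borderline catch judged at 0.6 confidence:
print(digitize(0.6))  # True: ruled a catch on the field
print(appeal(0.6))    # False: overturned under the stricter standard
```

The downstream game logic then never sees the fuzziness at all; it receives only the crisp ruling, which is exactly what a rules engine built for board games expects.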

As a manager, I was the digital arbiter on many personnel decisions—who got raises and in what dollar amounts, who got promoted, who got fired, and so forth. People always asked me what criteria I used for these decisions. “What is the algorithm you use?” they wanted to know. In truth, I wanted an algorithm too. There was a lot of fuzziness involved.

Besides being fuzzy, much of life is influenced by luck. The best team doesn’t always win, and the best person doesn’t always get the promotion. Life isn’t always fair, but this does shake up the pieces, so I’m not sure whether that’s a bug or a feature. Many board games, like Monopoly, do combine luck with skill. Maybe the fuzziness converter could add a bit of randomness.
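Injecting luck into the digitizer is a one-line change: perturb the fuzzy score before thresholding, so borderline calls can go either way while clear-cut ones stay stable. Again a hedged sketch of my own; the noise magnitude and function name are invented for illustration:

```python
import random

def digitize_with_luck(fuzzy_score, threshold=0.5, noise=0.1, rng=None):
    """Threshold a fuzzy score in [0, 1] after adding a small random
    perturbation -- borderline cases become a coin flip, but scores
    far from the threshold are unaffected."""
    rng = rng or random.Random()
    return (fuzzy_score + rng.uniform(-noise, noise)) >= threshold

# A clear catch survives the luck; with noise=0.1 a score of 0.9
# can never fall below the 0.5 threshold:
print(digitize_with_luck(0.9))  # True
```

Only calls within `noise` of the threshold are ever decided by chance, which matches the intuition that luck matters most in close contests.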

But I’m just dreaming about this. Whatever real-life task we’re trying to automate, someone will ask why it’s taking so long. It’s simple, they’ll say. But it really isn’t.

This article appears in the May 2019 print issue as “AI’s Achilles’ Heel: Ambiguity.”
