Will the Future of AI Learning Depend More on Nature or Nurture?

NYU's Yann LeCun and Gary Marcus debated how much AI algorithms will need the innate understanding of the world that humans and animals are born with

NYU's Yann LeCun and Gary Marcus debate whether AI needs more built-in cognitive machinery, similar to that of humans and animals, to achieve comparable intelligence.
Illustration: Science Photo Library/Getty Images

A self-driving car powered by one of the more popular artificial intelligence techniques may need to crash into a tree 50,000 times in virtual simulations before learning that it’s a bad idea. But baby wild goats scrambling around on incredibly steep mountainsides do not have the luxury of living and dying millions of times before learning how to climb without falling to their deaths. And a psychologist’s 3-year-old daughter did not need to practice millions of times before she figured out, on a whim, how to climb through an opening in the back of a chair.

Today’s most powerful AI techniques learn almost everything about the world from scratch, with the help of vast computational resources. By comparison, humans and animals seem to intuitively understand certain concepts—objects, places, sets of related things—that allow them to quickly learn how the world works. That raises an important “nature vs. nurture” question: Will AI require built-in versions of the innate cognitive machinery that humans and animals possess in order to achieve a similar level of general intelligence?

Two leading researchers in AI and psychology went head-to-head debating that topic in an event hosted by New York University’s Center for Mind, Brain and Consciousness last night.

“None of the AI techniques we have can build representations of the world, whether through structure or through learning, that are anywhere near what we observe in animals and humans,” said Yann LeCun, a computer scientist at NYU and director of Facebook Artificial Intelligence Research.

LeCun helped pioneer the AI field of deep learning, which has helped tech giants automate popular services such as recognizing friends’ faces on Facebook or translating between Chinese and English in Google Translate. Deep-learning algorithms figured out how to perform all those tasks without the AI equivalent of the innate cognitive machinery that humans and animals have. Instead, they gradually learn to recognize certain patterns about the world while filtering huge amounts of data—a process that works for certain perception tasks, such as image recognition, when you have the vast computational resources of Facebook, Google, or Microsoft.
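
The mechanics of that pattern-learning are easier to see in code than in prose. Below is a minimal, hypothetical sketch of supervised deep learning, assuming PyTorch; the data, dimensions, and architecture are toy stand-ins invented for illustration, not anything Facebook or Google actually runs. The network starts with no built-in knowledge and simply adjusts its weights, step by step, to fit hand-labeled examples.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a labeled perception dataset: 256 examples of
# 64-dimensional inputs, each hand-labeled with one of 10 classes.
inputs = torch.randn(256, 64)
labels = torch.randint(0, 10, (256,))

# A small network with no built-in knowledge of the task.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)  # compare predictions to hand labels
    loss.backward()                        # compute gradients of the error
    optimizer.step()                       # nudge weights toward the patterns
```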

Everyone agrees that current AI techniques such as deep learning still fall short of enabling a general AI with intelligence comparable to that of animals or humans. Still, LeCun believes AI can make progress toward that general intelligence through unsupervised deep learning, a recent development that removes much of the need for humans to hand-label the data that machines learn from.
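
For contrast, here is an equally minimal sketch of one common unsupervised approach, an autoencoder, again assuming PyTorch and invented toy data. The training signal is reconstruction of the input itself, so no human-provided labels are required; this is just one illustrative flavor of unsupervised learning, not LeCun’s specific method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

data = torch.randn(256, 64)  # unlabeled observations; no human annotation

# Compress each observation down to 8 numbers, then try to reconstruct it.
encoder = nn.Sequential(nn.Linear(64, 8), nn.ReLU())
decoder = nn.Linear(8, 64)
params = list(encoder.parameters()) + list(decoder.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)

for step in range(200):
    optimizer.zero_grad()
    reconstruction = decoder(encoder(data))
    # The "label" is the input itself: error is measured against the data.
    loss = F.mse_loss(reconstruction, data)
    loss.backward()
    optimizer.step()
```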

"My quest is to minimize the amount of innate machinery to learn with the amount of data we have available."

The success of modern AI has relied in large part upon not building in assumptions or structured concepts of how the world works, LeCun pointed out. In that sense, he favors keeping AI algorithms simple, with minimal built-in structure. And he imagines doing so without having to leverage the insights of human linguists, psychologists, or cognitive scientists. “My quest is to minimize the amount of innate machinery to learn with the amount of data we have available,” LeCun said.

Not so fast, said Gary Marcus, a research psychologist at NYU and founder of the startup Geometric Intelligence (now owned by Uber’s AI group). He acknowledged that unsupervised deep learning has a chance of success. But he believes that such algorithms can only succeed if they have a “richer set of primitives and representations than just pixels” to understand the world.

"We want the kinds of things that kids have, which are representations and primitives that are built for comprehending the behavior of objects and entities and physics of the world."

“We want the kinds of things that kids have, which are representations and primitives that are built for comprehending the behavior of objects and entities and physics of the world,” Marcus said.

Marcus wants to see AI researchers “borrow a little more generously from cognitive science” by building more structured algorithms that can represent cognitive concepts such as objects, sets, places, and spatial-temporal continuity. He cited his own work and that of colleagues such as Elizabeth Spelke, a cognitive psychologist at Harvard University, showing that human children can perceive concepts such as persons, objects, sets, and places very early on. His suggestion: Why not give AI some built-in structure that maps onto those same concepts?

Even LeCun’s own pioneering work on convolutional neural networks—a technique that allows for more efficient computational processing in object recognition tasks—is a good example of how using a more structured approach to constrain the amount of information an AI must filter can help it better comprehend the world, Marcus said.
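
The structural prior Marcus is pointing to is easy to quantify. In the hypothetical PyTorch comparison below, mapping a 3-channel, 32-by-32 image to 16 feature maps takes a few hundred parameters with convolution’s built-in assumptions of locality and weight sharing, versus roughly 50 million for an unstructured fully connected layer that must learn everything from data.

```python
import torch.nn as nn

def count_params(module):
    return sum(p.numel() for p in module.parameters())

# Both layers map a 3-channel 32x32 image to 16 feature maps of the same size.
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)  # local, shared weights
dense = nn.Linear(3 * 32 * 32, 16 * 32 * 32)       # no built-in structure

print(count_params(conv))   # 448
print(count_params(dense))  # 50,348,032
```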

“What we really need, I think, are systematic thoughts and analyses of what happens when we embed different amounts of innate machinery into machine learning,” Marcus said.

LeCun agreed that AI needs some structure to help it comprehend the world. But he wondered whether there is a “single learning algorithm or principle or procedure” in biological brains, or whether it’s more like a nonsensical collection of “hacks” with no underlying organizing principle. In his view, AI could benefit greatly from a single learning principle—or collection of such principles—that would arise with or without having built-in structure modeled on innate cognitive machinery.

“What’s missing is a principle that would allow our machine to learn how the world works by observation and by interaction with the world,” LeCun said. “A learning predictive world model is what we’re missing today, and in my opinion is the biggest obstacle to significant progress in AI.”

The essence of intelligence is the ability to predict, because predicting the future is a very special case of “filling in the blanks” about the state of the world, LeCun said. Common sense enables humans and animals to fill in missing information based on their knowledge of how the world works. That is why human drivers do not need to crash into a tree 50,000 times before they realize that is a bad idea; humans already have a sense of what might happen if they steer their car into a tree.
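
One concrete reading of “filling in the blanks”: hide part of each observation and train a model to predict the hidden part from the visible part, so the learning signal comes from the world itself rather than from labels. The sketch below, again assuming PyTorch and invented toy data, is a generic masked-prediction setup of that flavor, not LeCun’s actual proposal.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

data = torch.randn(256, 64)
# Treat the last 16 dimensions of each observation as the "blank" to fill in.
visible, hidden = data[:, :48], data[:, 48:]

predictor = nn.Sequential(nn.Linear(48, 128), nn.ReLU(), nn.Linear(128, 16))
optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-3)

for step in range(200):
    optimizer.zero_grad()
    # Predict the hidden part from the visible part; no labels involved.
    loss = F.mse_loss(predictor(visible), hidden)
    loss.backward()
    optimizer.step()
```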

LeCun hopes that unsupervised learning can lead AI to eventually develop a sense of how the world works from a physics standpoint, if not some crude form of common sense. “I would be happy at the end of my career if we have a machine as smart as a cat,” LeCun said. “Or a rat.”

The debate over whether AI learning is ultimately more “nature” or “nurture” is far from settled. But LeCun and Marcus agreed on what would show that one or the other was right: If unsupervised-learning algorithms eventually require built-in structure resembling cognitive representations of objects, sets, places, and so forth, Marcus could claim victory. If unsupervised learning succeeds without such structure, LeCun would be proven correct.

“A little innate structure might help you get a long way towards that,” Marcus said.

“A minimal amount of it, yes,” LeCun clarified.
