The Institute

Deep Learning Can’t Be Trusted, Brain Modeling Pioneer Says

Stephen Grossberg explains why his ART model is better

Photo: Stephen Grossberg

During the past 20 years, deep learning has come to dominate artificial intelligence research and development through a series of useful commercial applications. But underneath the dazzle are some deep-rooted problems that threaten the technology’s ascension.

The inability of a typical deep learning program to perform well on more than one task, for example, severely limits the technology’s application to specific tasks in rigidly controlled environments. More seriously, it has been claimed that deep learning is untrustworthy because it is not explainable, and unsuitable for some applications because it can experience catastrophic forgetting. Said more plainly, if the algorithm does work, it may be impossible to fully understand why. And while the tool is slowly learning a new database, an arbitrary part of its previously learned memories can suddenly collapse. It might therefore be risky to use deep learning in any life-or-death application, such as a medical one.

Now, in a new book, IEEE Fellow Stephen Grossberg argues that an entirely different approach is needed. Conscious Mind, Resonant Brain: How Each Brain Makes a Mind describes an alternative model for both biological and artificial intelligence based on cognitive and neural research Grossberg has been conducting for decades. He calls his model Adaptive Resonance Theory (ART).

Grossberg—an endowed professor of cognitive and neural systems, and of mathematics and statistics, psychological and brain sciences, and biomedical engineering at Boston University—based ART on his theories about how the brain processes information.

“Our brains learn to recognize and predict objects and events in a changing world that is filled with unexpected events,” he says.

Based on that dynamic, ART uses supervised and unsupervised learning methods to solve such problems as pattern recognition and prediction. Algorithms using the theory have been included in large-scale applications such as classifying sonar and radar signals, detecting sleep apnea, recommending movies, and providing computer-vision-based driver assistance.

ART can be used with confidence because it is explainable and does not experience catastrophic forgetting, Grossberg says. He adds that ART solves what he has called the stability-plasticity dilemma: how a brain or other learning system can autonomously learn quickly (plasticity) without experiencing catastrophic forgetting (stability).
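
To make the dilemma concrete, here is a minimal, unsupervised ART-1-style sketch in Python. This is an illustration under simplifying assumptions, not the models from Grossberg’s book; the class name, vigilance value, and binary inputs are hypothetical choices. An input either resonates with a stored category, which is then refined, or it recruits a new category, so new learning does not overwrite old prototypes.

```python
import numpy as np

# Minimal ART-1-style sketch for binary input patterns (illustrative only;
# names and parameter values are hypothetical, not taken from the book).
# The vigilance parameter controls the stability-plasticity trade-off:
# higher vigilance yields more, narrower categories.

class SimpleART1:
    def __init__(self, vigilance=0.6, beta=1.0):
        self.vigilance = vigilance
        self.beta = beta      # small constant in the category-choice rule
        self.weights = []     # one binary prototype per learned category

    def train(self, pattern):
        x = np.asarray(pattern, dtype=float)
        # Rank existing categories by how strongly they match the input.
        scores = [np.sum(np.minimum(x, w)) / (self.beta + np.sum(w))
                  for w in self.weights]
        for j in np.argsort(scores)[::-1]:
            w = self.weights[j]
            # Vigilance test: accept the category only if the match is good enough.
            if np.sum(np.minimum(x, w)) / np.sum(x) >= self.vigilance:
                # Resonance: refine this prototype; no other memory is touched.
                self.weights[j] = np.minimum(x, w)
                return j
        # No category passes the vigilance test, so recruit a new one,
        # leaving previously learned prototypes intact.
        self.weights.append(x.copy())
        return len(self.weights) - 1

art = SimpleART1(vigilance=0.6)
for p in [[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1]]:
    print(art.train(p))   # category index assigned to each input: 0, 0, 1
```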

Grossberg, who formulated ART in 1976, is a pioneer in modeling how brains become intelligent. He is the founder and director of Boston University’s Center for Adaptive Systems and the founding director of the Center of Excellence for Learning in Education, Science, and Technology. Both centers have sought to understand how the brain adapts and learns, and to develop technological applications based on their findings.

For Grossberg’s “contributions to understanding brain cognition and behavior, and their emulation by technology,” he received the 2017 IEEE Frank Rosenblatt Award, named for the Cornell professor considered by some to be the “father of deep learning.”

Grossberg attempts to explain in his nearly 800-page book how “the small lump of meat that we call a brain” gives rise to thoughts, feelings, hopes, sensations, and plans. In particular, he describes biological neural models that attempt to explain how that happens. The book also covers the underlying causes of conditions such as Alzheimer’s disease, autism, amnesia, and post-traumatic stress disorder.

“Understanding how brains give rise to minds is also important for designing smart systems in computer science, engineering and tech, including AI and smart robots,” he writes. “Many companies have applied biologically inspired algorithms of the kind that this book summarizes in multiple engineering and technological applications.”

The theories in the book, he says, are not only useful for understanding the brain but can also be applied to the design of intelligent systems that are capable of autonomously adapting to a changing world. Taken together, they describe the fundamental processes that enable people to be intelligent, autonomous, and versatile.

THE BEAUTY OF ART

Grossberg writes that the brain evolved to adapt to new challenges. There is a common set of brain mechanisms that control how humans retain information without forgetting what they have already learned, he says.

“We retain stable memories of past experiences, and these sequences of events are stored in our working memories to help predict our future behaviors,” he says. “Humans have the ability to continue to learn throughout their lives, without new learning washing away memories of important information that we learned before.”


One of the problems faced by classical AI, he says, is that it often built its models of how the brain might work on concepts and operations that could be derived from introspection and common sense.

“Such an approach assumes that you can introspect internal states of the brain with concepts and words people use to describe objects and actions in their daily lives,” he writes. “It is an appealing approach, but its results were all too often insufficient to build a model of how the biological brain really works.”

The problem with today’s AI, he says, is that it tries to imitate the results of brain processing instead of probing the mechanisms that give rise to the results. People’s behaviors adapt to new situations and sensations “on the fly,” Grossberg says, thanks to specialized circuits in the brain. People can learn from new situations, he adds, and unexpected events are integrated into their collected knowledge and expectations about the world.

ART’s networks are derived from thought experiments on how people and animals interact with their environment, he adds. “ART circuits emerge as computational solutions of multiple environmental constraints to which humans and other terrestrial animals have successfully adapted….” This fact suggests that ART designs may in some form be embodied in all future autonomous adaptive intelligent devices, whether biological or artificial.

“The future of technology and AI will depend increasingly on such self-regulating systems,” Grossberg concludes. “It is already happening with efforts such as designing autonomous cars and airplanes. It’s exciting to think about how much more may be achieved when deeper insights about brain designs are incorporated into highly funded industrial research and applications.”