Deep Learning Isn't Deep Enough Unless It Copies From the Brain

Jeff Hawkins, the Palm Pilot's creator, thinks AI needs much more neuroscience in the mix


A Thousand Brains: A New Theory of Intelligence by Jeff Hawkins
Book cover: Basic Books

“I believe that general-purpose AI will dominate machine intelligence in the latter part of the twenty-first century,” says Jeff Hawkins in his new book, A Thousand Brains: A New Theory of Intelligence. Hawkins isn’t talking about today’s artificial intelligence, which can perform stunningly well on very narrow tasks. He’s imagining flexible thinking machines that can learn just about anything, as humans do—the artificial general intelligence, or AGI, that would represent the next quantum leap in AI. 

Hawkins’s storied career includes the invention of the Palm Pilot in the mid-1990s, followed by a hard pivot to theoretical neuroscience. In 2005, he founded the company Numenta to pursue research on the neocortex, the part of the brain responsible for intelligence.

The main objective of Hawkins’s new book is to present his grand theory of intelligence: both how the human brain produces this feature, and what it really means to be intelligent. Happily for engineers, he then goes on to consider how his theories could impact AI research. He spoke with IEEE Spectrum about the path forward for machine learning. 

Jeff Hawkins on…

  1. What's Better Than Deep Learning
  2. The Role of Roboticists
  3. Continuous Learning
  4. Why AI Won’t Get Emotional
  5. Whether AI Poses an Existential Threat to Humanity
  6. Figuring Out Consciousness
  7. Growing Interest in Brain-Inspired AI Research
    What's Better Than Deep Learning

    IEEE Spectrum: Today, most of the money and energy in AI is going into deep learning. Given that you think it's the wrong track to general intelligence, do you anticipate another AI winter when people realize that deep learning can't bring us all the way?

    Jeff Hawkins: Well, I think deep learning is great, and I made a point of saying that it can do amazing things. But I do think it's not the track to true machine intelligence. It's very hard to know whether there'll be another winter—I suspect not. In the previous AI summers, they never really got to commercial value. In this case, there really is commercial value.

    Jeff Hawkins. Photo: Tri Nguyen Photography

    The question is, how can we transition from today's deep-learning technologies to truly intelligent machines? How do we get there? I don't think many researchers understand the depth of the issues that have to be addressed.

     Spectrum:  There’s a lot of enthusiasm today for hybrid systems: so-called neuro-symbolic systems that are trying to use symbolic reasoning and logic while also harnessing the power of deep learning. Do you think this could be a useful way forward?

    Hawkins: We have learned a lot about how actual brains represent knowledge using neurons. Will we have to do it the same way in machines as in the brain? Probably. The way brains do it is so elegant and scalable it is hard to imagine a better way.  

    One thing every truly intelligent system needs is to have structure for its knowledge. The brain uses what I call reference frames, which is a structure that works across all sensory modalities and even for abstract knowledge. Reference frames also explain how movement is used to create knowledge and to access knowledge. I explain these ideas in more detail in the book.
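Hawkins does not spell out an algorithm in the interview, but the gist of reference frames can be sketched in a few lines of Python. The toy below is illustrative only, with invented names and structure rather than Numenta's actual model: features of an object are stored at locations in an object-centric frame, and movements shift the current location both when learning and when recalling.

```python
# Toy illustration of "reference frames" as a knowledge structure.
# Hypothetical sketch for intuition only, not Numenta's model.

class ObjectModel:
    def __init__(self):
        # Map from a location in the object's reference frame to the
        # feature observed there, e.g. {(0, 0): "flat metal top"}.
        self.features_at = {}

    def learn(self, location, feature):
        """Associate a sensed feature with a location in the reference frame."""
        self.features_at[location] = feature

    def predict(self, location):
        """Predict what should be sensed at a location, if known."""
        return self.features_at.get(location, "unknown")


def move(location, displacement):
    """Apply a movement (displacement) to the current location."""
    return tuple(p + d for p, d in zip(location, displacement))


# Learning a stapler-like object by moving a sensor over it.
stapler = ObjectModel()
loc = (0, 0)
stapler.learn(loc, "flat metal top")
loc = move(loc, (3, 0))              # movement creates a new location to learn at
stapler.learn(loc, "hinge")

# Later, movement is also how stored knowledge is accessed.
print(stapler.predict(move((0, 0), (3, 0))))   # -> "hinge"
```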


    The Role of Roboticists

    Spectrum: I'm interested in your assertion that movement is essential to this endeavor. It’s easy to see how movement might be necessary to understand the geographical world and to understand objects through manipulation. But how does it apply beyond that?

    Hawkins: If you believe that AI systems need common sense knowledge, then they do need to learn about environments and objects and how to use tools. They need to know how balls bounce and what things do when you interact with them. I’ll never understand what a stapler is unless I open and close it and observe staples coming out. You're not going to get there with a list of rules.

    Now the question you asked is, do we need that kind of movement for abstract knowledge and high-level thoughts? The architecture of different areas of the neocortex looks remarkably the same, suggesting that the same underlying principles apply to all aspects of intelligence. For example, when we're doing mathematics, we don't have to physically move, but our cortex is doing something equivalent. It applies an action, such as a mathematical operation, that updates a location in a reference frame. Solving a mathematics problem requires finding a set of operations that gets you to the right result.

    So it's hard for me to imagine achieving general intelligence, a system that can understand all of these different domains and converse about things, without having this movement component. I can't see another way of doing it.

    Spectrum: In the section of the book where you talk about the basic requirement for AGI, you say that roboticists can play a role. Why do you think they could be important in this quest for AGI?

    Hawkins:  Because of the imperative for learning via movement, I concluded that you can't have intelligence without some sort of embodiment. I make a very specific prescription: You need to have one or more sensors; they have to move to sense different parts of the world; and the AI has to know where its sensors are relative to the world. Roboticists can play an important role because they are already using reference frames to represent the locations of a robot’s body, limbs, and sensors. They are used to thinking about reference frames for capturing knowledge. Most AI systems today don’t have reference frames, so it is natural these two fields will merge at some point in the future.
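Composing reference frames is everyday robotics. A minimal 2D sketch, using standard homogeneous-transform math and made-up poses, shows how a robot keeps track of where a sensor is relative to the world:

```python
# Tracking a sensor's pose in the world by chaining reference-frame
# transforms; the poses below are arbitrary example values.
import numpy as np

def transform(x, y, theta):
    """Homogeneous transform for a pose (x, y, heading theta) in a parent frame."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

world_T_body = transform(2.0, 1.0, np.pi / 2)   # robot body pose in the world frame
body_T_sensor = transform(0.1, 0.0, 0.0)        # sensor mounted 10 cm ahead of body center

# Where is the sensor in the world frame? Compose the transforms.
world_T_sensor = world_T_body @ body_T_sensor
print(world_T_sensor[:2, 2])                    # -> approximately [2.0, 1.1]
```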


    Continuous Learning

    Spectrum: In terms of acquiring that common sense knowledge through movement: It seems like a really flexible general intelligence would have to spend a lot of time acquiring models of the world. It takes a while to learn about a stapler, and then you have to learn about a cat, and then you have to learn about… How do you get around that? Is it going to take forever?

    “Humans never stop learning, and when we learn something new, we don't forget other things. Most AI systems today don't learn this way.”

    Hawkins: No, it won’t take forever. To be intelligent requires learning a model of the world, so we can’t skip that step. Humans never stop learning, and when we learn something new, we don’t forget other things. If I learn to drive one kind of car, and then I rent a different kind of car, I can figure it out, and I don’t forget how to ride a bicycle. Most AI systems today don’t learn this way. These systems learn by batch training; they relearn everything about the world each time they are trained.
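The contrast can be made concrete with a toy sketch. This is not a real continual-learning method, just an illustration of the difference: a batch-trained system refits on everything it has ever seen, while an incremental learner folds each new example into its model without storing or replaying old data.

```python
# Illustrative only: an incremental learner that never revisits old data,
# unlike batch training, which retrains on the full dataset every time.
import numpy as np

class RunningMeanClassifier:
    """Classifies by nearest per-class running mean, updated one example at a time."""
    def __init__(self):
        self.means, self.counts = {}, {}

    def learn_one(self, x, label):
        x = np.asarray(x, dtype=float)
        if label not in self.means:
            self.means[label], self.counts[label] = x.copy(), 1
        else:
            self.counts[label] += 1
            # Incremental mean update; old examples are never stored or replayed.
            self.means[label] += (x - self.means[label]) / self.counts[label]

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        return min(self.means, key=lambda c: np.linalg.norm(x - self.means[c]))

clf = RunningMeanClassifier()
clf.learn_one([0.0, 0.0], "cat")      # learn one thing...
clf.learn_one([5.0, 5.0], "stapler")  # ...then another, without forgetting the first
print(clf.predict([0.5, -0.2]))       # -> "cat"
```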

    One advantage AI systems have over humans is that we can clone the system. You can say, okay, now that we have a trained system, we can make more copies just like it. But some AI systems will have to experience the world and learn one item at a time.

    Spectrum: Kind of like raising a kid.

    Hawkins: Like raising a kid, but when your first kid is twenty years old, you could say, I like how that one turned out, let's copy it.


    Why AI Won't Get Emotional

    Spectrum: You mentioned embodiment. One thing I've wondered about is whether, if embodiment is necessary, human-like emotions are also necessary for human-level intelligence. For example, could an AI really understand why humans do the things they do if it doesn't understand the fear of death?

    Hawkins: General intelligence, in my mind, doesn't mean you have to understand humans. There will be a lot of intelligent machines that don’t need to know that stuff. In the book I gave an example of robotic construction workers on Mars: I don't think they need to understand human emotions and human desires to be able to construct things.

    But there will be some applications for interacting with humans that will need a model of human behavior. And so, yeah, that's an issue. But I don't think it's a core issue. If I want to create an aide for an elderly person, that aide needs to have some ability to either mimic or understand human emotions. But many other types of AI won’t need that kind of knowledge. It's an optional component.

    If I think about people I’m not familiar with—sometimes I have trouble understanding why they feel certain ways, even though we are both human and both have emotions. I just don't have exactly the same ones as they do. We deal with that issue all the time. You can even think about animals. I've had cats, and I don't think my cats really understood my emotional states. And I didn't really understand theirs. But we still got along quite well.


    Whether AI Poses an Existential Threat to Humanity

    Spectrum: Speaking of emotions and the old brain drives that make humans tick. The book has an interesting discussion of whether AI poses an existential threat to humanity. You say that intelligent machines won’t have old brain drives such as the urge for dominance unless we put them there. And that's your answer for why they won't turn on us.

    Hawkins: Well, that's a simplistic view of it. The threat of AI is an issue that many people are concerned about, so I want to give it due consideration, not just a glib answer. Let me state up front, I think AI can be a very dangerous technology. Humans can misuse it to cause a lot of harm. But the question I wanted to address was, on its own, does AI represent an existential risk? I tried to peel apart the main arguments: the lack of control argument and the intelligence explosion argument.

    Humans have a neocortex, which is the organ of intelligence. It learns a model of the world which is the core of what makes us smart. We also have other brain structures that create our emotions and motivations. An AI system that has the equivalent of a neocortex but not the other parts of the brain will not spontaneously develop human-like emotions and drives. So if we don't put them in machines, they won't just suddenly appear.

    Spectrum: The existential risk argument that you referred to as the lack of control problem is also sometimes called the goal misalignment problem. People talk about how an AI with misaligned goals could turn all the world into paperclips, which is an example I always enjoyed.

    Hawkins: Where would these goals that could get misaligned come from? The general assumption is that they will spontaneously appear, or that they’re a natural consequence of being intelligent. But when you look at brains, that's not the way it works. Many goals come from other parts of your brain, not your cortex, not the intelligent part of the brain.

    The builder of the AI machine will get to decide the goals of that machine. That person could be malevolent or incompetent, and design bad goals. A simple example would be a self-driving car. This would be like saying the car takes your first instruction—to go to the supermarket—and then decides that that's all it's going to do forever, and it refuses to go to the doctor instead. Or, the designer could instruct a car to drive through a crowded square if that was the shortest path. These would be bad outcomes, but not because the AI took over; they would happen because the AI designer misused the technology.

    Spectrum: Can you also talk about the second existential risk argument, and why you don’t fear the singularity?

    Hawkins: The singularity theory says that if we create machines that are smarter than us, they will create the next intelligent machines which are smarter still, and then those will create even more intelligent machines, and so on.

    That's operating under the assumption that somehow we can just build in superintelligence, but we can't; machines have to learn to be smart. They have to learn the model of the world; you can't just program it in. You could make a bigger machine or a faster machine, but it will still have to acquire knowledge, and knowledge cannot be acquired instantaneously. Knowledge is not like a dial where you can just turn it up and say, be smarter.

    “There's no way you can build the Hubble Space Telescope in one second. It's just not going to happen. I don't care if you have robotic workers or not.”

    Most knowledge requires interacting with the world. You can't sit around and think and figure everything out. If we're talking about scientific knowledge, it requires creating scientific instruments and collecting data over years. There's no shortcut to that. I don't care how smart you are, you can't answer questions about the physical universe without measuring it and detecting things. And there's no way you can build the Hubble Space Telescope in one second. It's just not going to happen. I don't care if you have robotic workers or not.

    I think we actually can accelerate the accretion of knowledge dramatically, and I’d like to do that. I think that's one of the benefits of AI, but it's not going to be this sort of exponential growth.

    Spectrum: I'm curious to see how this book will be received. Because you definitely challenge some big ideas in AI.

    Hawkins: I recognize that some of these ideas will take people by surprise, but I hope they will see the logic behind my proposals and perhaps see the world a little differently. If I ruffle some feathers, that wasn’t my goal. My intention is to lay out logical arguments and have people debate them.


    Figuring Out Consciousness

    Spectrum: Another controversial part might be the section that discusses whether machines can be conscious, and where you basically say that consciousness is not a hard problem.

    Hawkins: When people say consciousness is a hard problem, they often mean that it can’t be understood or that it is much harder than understanding the brain. I don’t agree. I break down consciousness into its parts. First, I argue that our sense of awareness is caused by our ability to remember our thoughts and play them back from memory. We have this constant stream of, “I was doing this a moment ago, I'm doing this now, I'm going to do this next.” This is the essence of why we feel present in the world. Next I explore the issue of qualia, which is the question of why things feel like anything: why does pain feel like pain, or why does green feel different than red? I argue that we can readily understand the various components of consciousness, and even if we don’t understand it all today, we will continue to make progress on it.

    I made the analogy to figuring out what life is. About 120 years ago, we didn't understand what life was. There was a concept called élan vital, which was a mysterious force that people thought existed that turned regular matter into living matter. Understanding what made something living was a very difficult problem. It took a good part of a century to figure out the complex biochemistry of life, but we figured it out. Understanding what life is was a hard problem, yes, but it wasn't impossible. We did it. And I think understanding consciousness is very similar.

    Spectrum: You think that intelligent machines that have awareness and some sense of qualia could perhaps be considered conscious. But you still wouldn't feel bad about turning them off?

    Hawkins: Let’s say we all agree some machine is conscious. The issue then is, what's the harm of turning it off? Would the machine care? If it feared being unplugged the way you and I fear death, then we might have to consider its wishes. But fear of being unplugged doesn’t just appear because a machine is intelligent and conscious. If we don’t go out of our way to put these fears and emotions into a machine, they won’t exist. So it wouldn’t care about being turned off.


    Growing Interest in Brain-Inspired AI Research

    Spectrum: You say that most people have a skewed view of how intelligent AI systems are because they're looking at performance on specific tasks like driving or the game of Go, where AI can achieve great things.

    Hawkins: I don't say that people have a skewed view, I say that's the current state of AI. AI today is mostly focused on relatively narrow tasks.  Benchmarks are used to measure progress on those tasks.  But there is a growing understanding among AI researchers that today’s systems are not intelligent in the way we are intelligent.

    Spectrum: There’s a benchmark that François Chollet is developing, which is a sort of IQ test. It asks the AI system to learn to recognize patterns as a way to get at its ability to learn. I think that comes closer to what you consider a measure of intelligence.

    Hawkins: I’m not familiar with that project in detail. But focusing on how systems learn and what intelligence fundamentally is, that’s the core of what I'm arguing for. That means not focusing on specific tasks. What we've learned from the brain is that it's a general-purpose learning system. The brain learns a model of the world using different sensors and it can master thousands of tasks. The closer we get to that type of general-purpose learning system, the closer we’ll get to AGI.

    Spectrum: Do you see any trends toward people developing the kind of AI you're interested in?

    Hawkins: There have always been some AI researchers interested in neuroscience. I have the impression that the number is growing. Many leaders in AI and deep learning have realized that they are reaching limits in the existing approaches and say we have to come up with new ways of doing AI. Maybe understanding the brain can give us a new roadmap.

    Cold Spring Harbor Laboratory recently held a conference called NAISys, which was about the merging of brains and AI. This kind of conference didn't exist just a few years ago. That’s exciting progress.

    Spectrum: And what role do you see for Numenta? I thought you were just doing theoretical neuroscience; I didn't realize you were doing AI development.

    Hawkins: If you came to our office two years ago, you'd have seen a bunch of people doing neuroscience research. And today we're about 90 percent transitioned to applying the principles we've learned from neuroscience to machine learning.

    We started with sparsity. We've done groundbreaking work in sparsity to speed up existing deep learning networks. We've recently put out a white paper showing how we achieved over 50X speedups. We're currently working on implementing mechanisms that we've learned from the brain to do continuous learning, which is another problem in AI.
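The interview does not go into the white paper's details. As a generic illustration of what activation sparsity means (not Numenta's code), a k-winners-take-all step keeps only the k largest activations in a layer and zeroes the rest, so downstream computation can skip most units:

```python
# Generic activation-sparsity illustration: k-winners-take-all.
import numpy as np

def k_winners_take_all(activations, k):
    """Zero out everything except the k largest activations."""
    activations = np.asarray(activations, dtype=float)
    if k >= activations.size:
        return activations
    threshold = np.partition(activations, -k)[-k]   # value of the k-th largest activation
    return np.where(activations >= threshold, activations, 0.0)

dense = np.random.randn(128)
sparse = k_winners_take_all(dense, k=8)   # roughly 94 percent of the units become zero
print(np.count_nonzero(sparse))           # -> 8 (ties could make it slightly higher)
```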

    Spectrum: What are your goals for Numenta’s work in AI? Do you have ambitions to make commercial products or do you just want to add to the knowledge in the world?

    Hawkins:  Well, Numenta is an unusual company. It’s structured as a for-profit company, but without expectations of near-term revenue. We don’t focus on how to commercialize our ideas today because we know that would take us away from our principal focus, which is accelerating the path of true machine intelligence.  And along the way, I think we can create a lot of value. We're building up a lot of expertise and a large intellectual property portfolio that is going to be valuable in AI.

