I'm just going to come out and say it: Human cognition might have nothing whatsoever to do with computation.
Yes, I am well aware that the computational theory of mind is a deeply entrenched one, starting with the work in the early 1940s of Warren McCulloch and Walter Pitts in Chicago, and then later at MIT, where they were joined by Jerome Lettvin and Humberto Maturana. But over the course of human history, lots of theories have been widely but wrongly held, sometimes for decades.
Consider the phlogiston theory of combustion. For over a century, starting in 1667, most scientists believed that what made disparate things burn was a common substance, later named phlogiston, that dissipated into the air via fire. Air had only a finite capacity to absorb it, which explained why fire was extinguished if there was only a small amount of air available.
By the end of the 19th century, Newtonian physics was pretty good at explaining the behavior of physical objects that could be directly observed. But right at the beginning of the 20th century, Einstein set off two revolutions almost at once: relativity, and the idea that energy is quantized. The fallout from the second of these is quantum mechanics. More than 100 years later, its consequences are still being discovered and often still disbelieved. Quantum entanglement, which is leading to new forms of secure communications, is still routinely described as "spooky." Measuring a property in one place appears to instantly influence a measurement made far away, sooner than any light-speed signal could connect the two events.
For the past 30 years, the presumed mechanism of Alzheimer's disease has been the accumulation of amyloid protein plaques in the brain. A person with Alzheimer's always has clumps of plaque in the brain. But recent trials of drugs that reduce these clumps failed to relieve the disease. The theory has come under fire, and it is now postulated that the plaques are a side effect of the disease, not its causal mechanism. There have been accusations that alternative approaches to understanding the disease were underfunded, because peer reviewers regarded such proposals as outside the mainstream.
The point is that long-held theories get superseded pretty frequently, and then everyone gets to claim that they thought they were a bit kooky all along. And over time the heroes of any particular science sometimes get knocked out of the pantheon for being fundamentally wrong.
Today we all assume that intelligence, thought, cognition—whatever you want to call it—is a product of computation. Computational neuroscience is the respectable way to approach the understanding of these phenomena in all animals, including humans. And artificial intelligence, the engineering counterpart to neuro-"science," likewise assumes that to build an intelligent system we should write computer programs. In John McCarthy's proposal for the famous 1956 Dartmouth Workshop on AI, the field's foundational event, he argued precisely this position on the very first page.
Computationalism is not at all central to other aspects of our technology. New satellite-launch companies don't set out to write computer programs as the primary mechanism for getting people into orbit. Yes, there are inevitably a lot of computer programs involved, but the central mechanism is burning phlogiston-free rocket fuel with oxygen in a booster. Rocket engines turn that combustion into thrust, which, in accordance with classical Newtonian physics, lifts the vehicle out of the clutches of gravity. A Python script by itself just can't get the job done. Likewise, crystals do not arise because there's something computing where atoms should go. The atoms self-organize as the result of the interplay of fundamental forces.
Inside your brain are two-dimensional sheets of neurons called maps. Researchers long ago established that neural activity in these maps corresponds very precisely to sensory stimulation, for example in the retina. Many of these researchers describe these neural actions as having been computed, and they contend that these computations are what give us conscious experiences of the world. But perhaps that is all wrong. Maybe instead these conscious experiences come from some kind of self-organization. The computation we associate with these sensations could be simply an invention of our own to explain the mechanism of sentience, not the primary cause of it.
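The kind of self-organization at issue can at least be illustrated, admittedly by simulation, with a Kohonen self-organizing map: a 2-D sheet of units whose stimulus preferences end up smoothly ordered across the sheet, even though no unit is ever told where its neighbors' preferences should lie. This is a minimal sketch with made-up parameters (an 8x8 sheet, a uniform 2-D stimulus space), not a model of any actual cortical map.

```python
# Minimal Kohonen self-organizing map: order emerges on the sheet
# from purely local "nudge the winner and its neighbors" updates.
import math
import random

random.seed(0)

SIZE = 8       # 8x8 sheet of units (an arbitrary choice)
STEPS = 2000

# Each unit starts with a random 2-D "preferred stimulus".
weights = [[(random.random(), random.random()) for _ in range(SIZE)]
           for _ in range(SIZE)]

for t in range(STEPS):
    # Draw a random stimulus from a uniform 2-D space.
    x = (random.random(), random.random())

    # Find the best-matching unit: closest preferred stimulus.
    bi, bj = min(((i, j) for i in range(SIZE) for j in range(SIZE)),
                 key=lambda ij: (weights[ij[0]][ij[1]][0] - x[0]) ** 2 +
                                (weights[ij[0]][ij[1]][1] - x[1]) ** 2)

    # Learning rate and neighborhood radius both shrink over time.
    lr = 0.5 * (1 - t / STEPS)
    radius = 1 + (SIZE / 2) * (1 - t / STEPS)

    # Nudge the winner and its sheet-neighbors toward the stimulus.
    for i in range(SIZE):
        for j in range(SIZE):
            d2 = (i - bi) ** 2 + (j - bj) ** 2
            if d2 <= radius ** 2:
                h = math.exp(-d2 / (2 * radius ** 2))
                wi, wj = weights[i][j]
                weights[i][j] = (wi + lr * h * (x[0] - wi),
                                 wj + lr * h * (x[1] - wj))

def smoothness(w):
    """Mean distance between horizontally adjacent units' preferences."""
    total, n = 0.0, 0
    for i in range(SIZE):
        for j in range(SIZE - 1):
            a, b = w[i][j], w[i][j + 1]
            total += math.hypot(a[0] - b[0], a[1] - b[1])
            n += 1
    return total / n

# After training, neighboring units prefer neighboring stimuli:
# the sheet has organized itself into a smooth map of the input space.
print(round(smoothness(weights), 3))
```

The irony, of course, is that this illustration is itself a computation; the open question is whether the brain's maps arise from something more like the crystal's interplay of forces than like this loop.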
Now that sounds both kooky and spooky to our current way of thinking. To me it is exciting. I like being a kook.
Rodney Brooks is the Panasonic Professor of Robotics (emeritus) at MIT, where he was director of the AI Lab and then CSAIL. He is a cofounder of iRobot, Rethink Robotics, and Robust AI, where he is currently CTO.