The Consciousness Conundrum

The wetware that gives rise to consciousness is far too complex to be replicated in a computer anytime soon

This is part of IEEE Spectrum's Special Report: The Singularity

I'm 54, with all that entails. Gray hair, trick knee, trickier memory. I still play a mean game of hockey, and my love life requires no pharmaceutical enhancement. But entropy looms ever larger. Suffice it to say, I would love to believe that we are rapidly approaching “the singularity." Like paradise, technological singularity comes in many versions, but most involve bionic brain boosting. At first, we'll become cyborgs, as stupendously powerful brain chips soup up our perception, memory, and intelligence and maybe even eliminate the need for annoying TV remotes. Eventually, we will abandon our flesh-and-blood selves entirely and upload our digitized psyches into computers. We will then dwell happily forever in cyberspace where, to paraphrase Woody Allen, we'll never need to look for a parking space. Sounds good to me!

Notably, singularity enthusiasts tend to be computer specialists, such as the author and retired computer scientist Vernor Vinge, the roboticist Hans Moravec, and the entrepreneur Ray Kurzweil. Intoxicated by the explosive progress of information technologies captured by Moore's Law, such singularitarians foresee a “merger of biological and nonbiological intelligence," as Kurzweil puts it, that will culminate in “immortal software-based humans." It will happen not within a millennium, or a century, but no later than 2030, according to Vinge. These guys—and, yes, they're all men—are serious. Kurzweil says he has adopted an antiaging regimen so that he'll “live long enough to live forever."

Specialists in real rather than artificial brains find such bionic convergence scenarios naive, often laughably so. Gerald Edelman, a Nobel laureate and director of the Neurosciences Institute, in San Diego, says singularitarians vastly underestimate the brain's complexity. Not only is each brain unique, but each also constantly changes in response to new experiences. Stimulate a brain with exactly the same input, Edelman notes, and you'll never see the same signal set twice in response.

“This is a wonderful project—that we're going to have a spiritual bar mitzvah in some galaxy," Edelman says of the singularity. “But it's a very unlikely idea."

Neuroscience is indeed thriving. Membership in the Society for Neuroscience has surged from 500, when it was founded in Washington, D.C., in 1970, to almost 40 000 today. New brain journals seem to spring up daily, crammed with data from ever-more-powerful brain probes such as magnetic-resonance imaging and transcranial magnetic stimulation. In addition to such noninvasive methods, scientists can stick electrodes in brains to monitor and stimulate individual neurons. Researchers are also devising electrode-based “neural prostheses" to help people with nervous-system disorders such as deafness, blindness, paralysis, and memory loss.

In spite of all those advances, neuroscientists still do not understand at all how a brain (the squishy agglomeration of tissue and neurons) makes a conscious mind (the intangible entity that enables you to fall in love, find irony in a novel, and appreciate the elegance of a circuit design). “No one has the foggiest notion," says the neuroscientist Eric Kandel of Columbia University Medical Center, in New York City. “At the moment all you can get are informed, intelligent opinions." Neuroscientists lack an overarching, unifying theory to make sense of their sprawling and disjointed findings, such as Kandel's Nobel Prize–winning discovery of the chemical and genetic processes that underpin memory formation in sea slugs.

The brain, it seems, is complex enough to conjure fantasies of technotranscendence and also to foil their fulfillment.

A healthy adult brain contains about 100 billion nerve cells, or neurons. A single neuron can be linked via axons (output wires) and dendrites (input wires) across synapses (gaps between axons and dendrites) to as many as 100 000 other neurons. Crank the numbers and you find that a typical human brain has quadrillions of connections among its neurons. A quadrillion is a one followed by 15 zeroes; a stack of a quadrillion U.S. pennies would go from the sun out past the orbit of Jupiter.
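
For readers who like to check the arithmetic, here is a rough back-of-the-envelope sketch in Python. The neuron and synapse counts are the round figures quoted above; the penny thickness of about 1.52 millimeters and Jupiter's average distance from the sun are assumptions added just for this check.

```python
# A back-of-the-envelope check of the figures above. The inputs are round
# numbers, not measurements.

NEURONS = 100e9                  # roughly 100 billion neurons in an adult brain
SYNAPSES_PER_NEURON = 10_000     # a typical count; 100 000 is the upper bound cited above

connections = NEURONS * SYNAPSES_PER_NEURON
print(f"Synaptic connections: ~{connections:.0e}")    # ~1e+15, i.e. a quadrillion

PENNY_THICKNESS_M = 1.52e-3      # a U.S. penny is about 1.52 mm thick (assumed)
JUPITER_ORBIT_KM = 7.8e8         # average sun-to-Jupiter distance, ~778 million km (assumed)

stack_km = 1e15 * PENNY_THICKNESS_M / 1000
print(f"Quadrillion-penny stack: ~{stack_km:.2e} km, "
      f"about {stack_km / JUPITER_ORBIT_KM:.1f} times the sun-to-Jupiter distance")
```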

Adding to the complexity, synaptic connections constantly form, strengthen, weaken, and dissolve. Old neurons die and—evidence now indicates, overturning decades of dogma—new ones are born.

Far from being stamped from a common mold, neurons display an astounding variety of forms and functions. Researchers have discovered scores of distinct types just in the visual system. Neurotransmitters, which carry signals across the synapse between two neurons, also come in many different varieties. In addition to neurotransmitters, neural-growth factors, hormones, and other chemicals ebb and flow through the brain, modulating cognition in ways both profound and subtle.

Indeed, the more you learn about brains, the more you may wonder how the damn things work. And in fact, sometimes they don't. They succumb to schizophrenia, bipolar disorder, depression, Alzheimer's disease, and many other disorders that resist explanation and treatment.

Nevertheless, the brain is a computer, singularitarians insist. It just has an extremely messy wiring diagram. According to this perspective, neurons resemble transistors, absorbing, processing, and reemitting the electrochemical pulses known as action potentials. With an amplitude of one-tenth of a volt and a duration of one millisecond, action potentials are remarkably uniform, and they do not dissipate even when zipping down axons a meter long (yes, a full meter). Also called spikes, to reflect their appearance on oscilloscopes, action potentials supposedly serve as the brain's basic units of information.

Nothing New About Singularity Fantasies

“The singularity" is just the latest manifestation of our hopes, and fears, of techno-transcendence. The industrial revolution inspired dark sci-fi novels such as Mary Shelley's Frankenstein, in 1818, and Samuel Butler's Erewhon, in 1872; 20th-century thinkers offered more upbeat visions. Some notable examples:

In his 1929 essay "The World, the Flesh, and the Devil," the Irish scientist and Marxist J.D. Bernal predicts that we will soon start improving our minds and bodies by tinkering with our genes. Eventually, we will abandon our fleshy substrates entirely and evolve into "masses of atoms in space communicating by radiation, and ultimately perhaps resolving [ourselves] entirely into light."

In The Future of Man, a collection of essays published posthumously in 1959, the Jesuit philosopher Pierre Teilhard de Chardin speculates that our minds are becoming increasingly interconnected and will eventually fuse into a collective consciousness. This process will culminate in an "omega point," when we shed our physical selves and converge with the divine consciousness. Teilhard de Chardin is a bit vague on how exactly that will happen.

In a 1978 lecture, the physicist Freeman Dyson sought to allay concerns that in an eternally expanding, "open" universe, human consciousness will eventually succumb to the "heat death" implicit within the second law of thermodynamics. Dyson calculates that through shrewd conservation of energy, intelligence can persist forever, perhaps in the form of a cloud of "dust grains carrying positive and negative charges, organizing itself and communicating with itself by means of electromagnetic forces."

In their 1986 book, The Anthropic Cosmological Principle, the physicists Frank Tipler and John Barrow offer a rosier picture of our prospects in a closed universe. By the time the universe begins collapsing, they predict, we will have evolved into superintelligent machines that transform the cosmos into one big computer. As the cosmic computer shrinks toward an infinitely compressed omega point, its information-processing capacity spikes toward infinity, and the computer becomes omniscient and omnipotent. In Tipler's 1994 book, The Physics of Immortality, he proposes that this God-like computer will resurrect all of us within a virtual paradise, in which all our desires, including sexual ones, are fulfilled. Count me in!

Within a decade or so, computers will surpass the computational power of brains, many singularitarians say. They base this claim on the assumption that those spikes represent the brain's total computational capacity. If the brain contains one quadrillion synapses processing on average 10 action potentials per second, then the brain performs 10 quadrillion operations per second. At some point in the near future, some singularitarians say, computers will surpass that processing rate and leave us in their cognitive dust unless we embrace them through bionic convergence or uploading.
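
The arithmetic behind that figure takes one line. A minimal sketch, using the round numbers above and the contested assumption that one spike crossing one synapse counts as one operation:

```python
# One spike crossing one synapse counts as one "operation" here, which is
# precisely the assumption the rest of this article calls into question.
SYNAPSES = 1e15            # about one quadrillion synapses
SPIKES_PER_SECOND = 10     # assumed average firing rate

print(f"{SYNAPSES * SPIKES_PER_SECOND:.0e} ops/s")   # 1e+16, i.e. 10 quadrillion per second
```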

We've heard such prophecies before. A half century ago, artificial-intelligence pioneers such as Marvin Minsky of MIT and Herbert Simon of Carnegie Mellon University predicted that computers would exceed human intelligence within a generation. Their prophecies inspired sci-fi writers like Arthur C. Clarke—creator of the cybervillain HAL—as well as younger AI visionaries like Kurzweil, Moravec, and Vinge.

But even Minsky admits that computers are still idiot savants. “I wish I could tell you that we have intelligent machines, but we don't," he says. The world's most powerful computers, he acknowledges, lack the common sense of a toddler; they can't even distinguish cats from dogs unless they are explicitly and painstakingly programmed to do so.

Nevertheless, singularitarians are quite right that, if current trends continue, supercomputers will exceed 10 quadrillion operations per second within a decade. IBM's Blue Gene/P supercomputer, introduced nearly a year ago, can be configured to process up to 3 quadrillion operations per second, although no customer has yet ordered one with the full complement of 884 736 processors that would be needed to get that kind of a processing rate. Argonne National Laboratory, in Illinois, is now completing the upgrade of a Blue Gene/P that should be good for around half a quadrillion operations per second.
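
The catch-up arithmetic is just as short. Here is a minimal sketch; the 18-month doubling time for supercomputer performance is an added assumption, while the two throughput figures are the ones quoted above.

```python
from math import log2

# How long until supercomputers pass the 10-quadrillion-ops/s brain estimate,
# if performance keeps doubling every 18 months? (The doubling time is an
# assumption; the throughput figures are the ones quoted in the article.)

BRAIN_ESTIMATE_OPS = 1e16      # the singularitarians' estimate of brain throughput
BLUE_GENE_P_OPS = 3e15         # fully configured Blue Gene/P
DOUBLING_TIME_YEARS = 1.5

doublings_needed = log2(BRAIN_ESTIMATE_OPS / BLUE_GENE_P_OPS)
print(f"{doublings_needed:.1f} doublings, roughly "
      f"{doublings_needed * DOUBLING_TIME_YEARS:.0f} years")   # ~1.7 doublings, ~3 years
```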

So would a fully configured Blue Gene/P be cognitive, perhaps like a monkey or a tree frog, if not like us? Of course not. As any singularitarian would agree, intelligence requires software at least as much as hardware. And that software will soon be available, the singularitarians say, because scientists will in the next couple of decades reverse engineer the brain's software, yielding all sorts of benefits. First, the brain's programming tricks will be transferred to computers to make them smarter. Moreover, given the right interface, our brains and computers will communicate as readily as Macs and PCs. And eventually, of course, our personal software will be extracted from our frail flesh and blood and uploaded into advanced robots or computers. (Don't forget to back yourself up on a hard drive!) We'll walk the earth in impervious titanium-boned bodies. Or we'll inhabit impossibly lush virtual paradises specifically created to please and stimulate our disembodied, digital psyches.

Many neuroscientists do assume that, just as computers operate according to a machine code, the brain's performance must depend on a “neural code," a set of rules or algorithms that transforms those spikes into perceptions, memories, meanings, sensations, and intentions. If such a neural code exists, however, neuroscientists still have no idea what that code is. Or, more accurately, like voters in a U.S. presidential primary, researchers have a surfeit of candidates, each seriously flawed.

The first neural code was discovered more than 70 years ago by the British electrophysiologist Edgar Adrian, who found that when he increased the pressure on neurons involved in the sense of touch, they fired at an increased rate. That so-called rate code has now been demonstrated in many different animals, including Homo sapiens. But a rate code is a crude, inefficient way to convey information; imagine trying to communicate solely by humming at different pitches.

Neuroscientists have long suspected that the brain employs subtler codes. One of them might be a temporal code, in which information is represented not just in a cell's rate of firing but also in the precise timing between spikes. For example, a rate code would treat the spike sequences 010101 and 100011 as identical because they have the same number of 0 and 1 bits. But a temporal code would assign different meanings to the two strings because the bit sequences are different. That's a vital distinction: the biophysicist William Bialek of Princeton University calculates that temporal coding would boost the brain's information-processing capacity close to the Shannon limit, the theoretical maximum that information theory allows for a given physical system.
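
To see the distinction concretely, consider the toy sketch below. It treats a spike train as a six-bit string, as in the example above; it illustrates the coding distinction only and is not a model of any real neuron.

```python
from math import log2

# Two spike trains with the same firing rate (three spikes in six time bins)
# but different timing.
a, b = "010101", "100011"

# Rate code: only the spike count matters, so the two trains read as identical.
print(a.count("1") == b.count("1"))    # True

# Temporal code: the exact pattern matters, so they carry different messages.
print(a == b)                          # False

# Rough capacity of each scheme for six-bin words:
n = 6
rate_bits = log2(n + 1)    # only the count, 0 through 6, is informative
temporal_bits = n          # every one of the 2**6 patterns is informative
print(f"rate code: ~{rate_bits:.1f} bits per word; temporal code: {temporal_bits} bits per word")
```

Even in this toy case, timing more than doubles how much information a single six-bin word can carry, which is the flavor of Bialek's argument that temporal coding pushes the brain toward the Shannon limit.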

Some neuroscientists suspect that temporal codes predominate in the prefrontal cortex and other brain structures associated with “higher" cognitive functions, such as decision making. In these regions, neurons tend to fire on average only one or two times per second, compared with the 100 or more times of sensory and motor neurons.

Other neural-coding theories abound. On a more macro level, researchers are seeking “population codes" involving the correlated firing of many neurons. Edelman, at the Neurosciences Institute, has advocated a scheme called neural Darwinism, in which our recognition of, say, an animal emerges from competition between large populations of neurons representing different memories: Dog? Cat? Weasel? Rat? The brain quickly settles on the population that most closely matches the incoming stimulus. Perhaps because Edelman has cloaked it in impenetrable jargon, neural Darwinism has not caught on.
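
The competition Edelman describes can be caricatured in a few lines of code. The sketch below simply picks, from a handful of stored activity patterns, the one that best matches a noisy stimulus; it is a cartoon of the competition metaphor, with made-up random patterns, and not an implementation of Edelman's theory.

```python
import numpy as np

# A cartoon of "competition between populations": each candidate memory is a
# stored activity pattern, and the winner is the one that best matches the
# incoming stimulus. The patterns here are random placeholders.

rng = np.random.default_rng(0)
templates = {name: rng.random(50) for name in ["dog", "cat", "weasel", "rat"]}

stimulus = templates["cat"] + 0.2 * rng.standard_normal(50)   # a noisy "cat"

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def winner(stimulus, templates):
    # The population whose pattern correlates best with the stimulus wins.
    return max(templates, key=lambda name: cosine(stimulus, templates[name]))

print(winner(stimulus, templates))   # prints "cat"
```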

Wolf Singer of the Max Planck Institute for Brain Research, in Frankfurt, has won more support for a code involving many neurons firing at the same rate and time. Do such synchronous oscillations play a crucial role in cognition and perhaps even underpin consciousness? Singer thinks they might.

Consciousness is not easy to define, let alone create in a machine. The psychologist William James described it succinctly as attention plus short-term memory. It's what you possess right now as you read this article, and what you lack when you are asleep and between dreams, or under anesthesia.

In 1990, the late Nobel laureate Francis Crick and his colleague Christof Koch proposed that the 40-hertz synchronized oscillations found a year earlier by Singer and his collaborator were a neuronal signature of consciousness. But Singer says the brain probably employs many different codes in addition to oscillations. He also emphasizes that researchers are “only at the beginning of understanding" how neural processes “bring forth higher cognitive and executive functions." Bear in mind, too, that it's still a very long way from grasping those functions to understanding how they give rise to consciousness. And without that understanding, it's hard to imagine how anyone could build an artificial brain sophisticated enough to sustain and nurture an individual human consciousness indefinitely.

Given our ignorance about the brain, Singer calls the idea of an imminent singularity “science fiction."

Koch shares Singer's skepticism. A neuroscientist at Caltech, Koch was a close friend and collaborator of Crick, who together with James Watson unraveled the structure of DNA in 1953. During the following decade or so, Crick and other researchers established that the double helix mediates an astonishingly simple genetic code governing the heredity of all organisms. Koch says, “It is very unlikely that the neural code will be anything as simple and as universal as the genetic code."

Neural codes seem to vary in different species, Koch notes, and even in different sensory modes within the same species. “The code for hearing is not the same as that for smelling," he explains, “in part because the phonemes that make up words change within a tiny fraction of a second, while smells wax and wane much more slowly."

Evidence from research on neural prostheses suggests that brains even devise entirely new codes in response to new experiences. “There may be no universal principle" governing neural-information processing, Koch says, “above and beyond the insight that brains are amazingly adaptive and can extract every bit of information possible, inventing new codes as necessary."

Theoretical quibbles notwithstanding, singularitarians insist that neural prostheses are already leading us toward bionic convergence. By far the most successful prosthesis is the cochlear implant. During the past few decades, about 100 000 hearing-impaired people around the world have been equipped with the devices, which restore hearing by feeding signals from an external microphone to the auditory nerve via electrodes. But as the deaf memoirist Michael Chorost points out, cochlear implants are far from perfect.

In his 2005 book, Rebuilt: How Becoming Part Computer Made Me More Human, Chorost recounts how he learned to live with an implant after losing his hearing in 2001. Although thrilled by the device, which restored his social life, he also recognizes its limitations. Because a cochlear implant provides a crude simulacrum of our innate auditory system, it generally requires a breaking-in period, during which technicians tweak the device's settings to optimize its performance. With that assistance, the brain—perhaps by devising a brand-new coding scheme—learns how to exploit the peculiar, artificial signals. Even then, the sound quality is often poor, especially in noisy settings. Chorost says he still occasionally relies on lip reading and contextual guessing to decipher what someone is saying to him. Cochlear implants do not work at all in some people, for reasons that are not well understood.

By far the most ambitious neural-prosthesis program involves computer chips that can restore or augment memory. Researchers at the University of Southern California, in Los Angeles, have designed chips that mimic the firing patterns of tissue in the hippocampus, a minute seahorse-shaped neural structure thought to underpin memory. Biomedical engineering professor Theodore Berger, a leader of the USC program, has suggested that one day brain chips might allow us to instantly upload expertise. But the memory chips are years away from testing. In rats.

Discussions of memory chips leave Andrew Schwartz cold. A neural-prosthesis researcher at the University of Pittsburgh, Schwartz has shown that monkeys can learn to control robotic arms by means of chips embedded in the brain's motor cortex. But no one has any idea how memories are encoded, Schwartz says. “We know so little about the higher functions of the brain that it seems ridiculous to talk about enhancing things like intelligence and memory," he says. Moreover, he says, downloading complex knowledge directly into the brain would require not just stimulating millions of specific neurons but also altering synaptic connections throughout the brain.

That brings us to the interface problem, the most practical obstacle to bionic convergence and uploading. For now, electrodes implanted into the brain remain the only way to precisely observe and fiddle with neurons. It is a much messier, more difficult, and more dangerous interface than most people realize. The electrodes must be inserted into the brain through holes drilled in the skull, posing the risk of infection and brain damage. They often lose contact with neurons; at any one moment an array of 100 electrodes might make contact with only half that many cells. Scar tissue or blood can encrust the electrode, cells around it might shift their position or die, and electrodes have been known to corrode.

Researchers are testing various strategies for improving contact between neurons and electronics. They are making electrodes out of conducting polymers, which are more compatible with neural tissue than silicon or metal; coating electrodes with naturally occurring glues, called cell-adhesion molecules, which helps cells in the brain and elsewhere stick together; and designing electrode arrays that automatically adjust the position of the electrodes to maximize the reception of neural signals.

At Caltech and elsewhere, engineers have designed hollow electrodes that can inject fluids into the surrounding tissue. The fluids could consist of nerve-growth factors, neurotransmitters, and other substances. The nerve-growth factors encourage cells to grow around electrodes, while the neurotransmitters enhance or supplement electrical-stimulation treatment. Neuroscientists are also testing optical devices that can monitor and stimulate neurons, as well as genetic switches that turn neurons on or off.

To be sure, it's promising work. Terry Sejnowski, a neuroscientist at the Salk Institute for Biological Studies, in San Diego, says the new technologies will make it possible “to selectively activate and inactivate specific types of neurons and synapses as well as record from all the neurons in a volume of tissue." That, in turn, might make it possible to build more effective and reliable neural prostheses.

But again, it's a fantastically long way from there to consciousness uploading. Even singularitarians concede that no existing interface can provide what is required for bionic convergence and uploading: the precise, targeted communication, command, and control of billions of neurons. So they sidestep the issue, predicting that all current interfaces will soon yield to very small robots, or “nanobots." Remember the 1966 motion picture Fantastic Voyage? That's the basic idea. But try to imagine, in place of Raquel Welch in a formfitting wet suit, robotic submarines the size of blood cells. They infiltrate the entire brain, then record all neural activity and manipulate it by zapping neurons, tinkering with synaptic links, and so on. The nanobots will be equipped with some sort of Wi-Fi so that they can communicate with one another as well as with electronic systems inside and outside the body.

Nanobots have inspired some terrific “X-Files" episodes as well as the Michael Crichton novel Prey. But they have as much basis in current research as fairy dust [see “Rupturing the Nanotech Rapture"].

Steven Rose has nothing against technoenhancement. The neurobiologist at England's Open University wears eyeglasses and is proud of his titanium knee and dental implants. He says a lot can be done to improve the brain's performance through improved drugs, neural prostheses, and perhaps genetic engineering. But he calls the claims about imminent consciousness uploading “pretty much crap."

Rose disputes the singularitarians' contention that computers will soon surpass the brain's computational capacity. He suspects that computation occurs at scales above and below the level of individual neurons and synapses, via genetic, hormonal, and other processes. So the brain's total computational power may be many orders of magnitude greater than what singularitarians profess.

Rose also rejects the basic premise of uploading, that our psyches consist of nothing more than algorithms that can be transferred from our bodies to entirely different substrates, whether silicon or glass fibers or as-yet-unimaginable quantum computers. The information processing that constitutes our selves, Rose asserts, evolved within—and may not work in any medium other than—a social, crafty, emotional, sex-obsessed flesh-and-blood primate.

To dramatize that point, Rose poses a thought experiment involving a “cerebroscope," which can record everything that happens in a brain, at micro and macro levels, in real time. Let's say the cerebroscope (hey, maybe it's based on nanobots!) records all of Rose's neural activity as he watches a red bus coming down a street. Could the cerebroscope reconstruct Rose's perception? No, he says, because his neural response to even that simple stimulus grows out of his brain's entire previous history, including the incident in his childhood when a bus almost ran him over.

To interpret the neural activity corresponding to any moment, Rose elaborates, scientists would need “access to my entire neural and hormonal life history" as well as to all his corresponding experiences. Scientists would also need detailed knowledge of the changing social context within which Rose has lived; his attitude toward buses would be different if terrorists recently had attacked one. The implication of his thought experiment is that our psyches will never be totally reducible, computable, predictable, and explainable. Or, disappointingly enough, downloadable into everlasting new containers.

Perhaps the old joke is right after all: If the brain were simple enough for us to understand, we wouldn't be smart enough to understand it.

Let's face it. The singularity is a religious rather than a scientific vision. The science-fiction writer Ken MacLeod has dubbed it “the rapture for nerds," an allusion to the end-time, when Jesus whisks the faithful to heaven and leaves us sinners behind.

Such yearning for transcendence, whether spiritual or technological, is all too understandable. Both as individuals and as a species, we face deadly serious problems, including terrorism, nuclear proliferation, overpopulation, poverty, famine, environmental degradation, climate change, resource depletion, and AIDS. Engineers and scientists should be helping us face the world's problems and find solutions to them, rather than indulging in escapist, pseudoscientific fantasies like the singularity.

For more articles, videos, and special features, go to The Singularity Special Report.
