In the Future, Machines Will Borrow Our Brain’s Best Tricks

Neuroscience will give us what we’ve sought for decades: computers that think like we do

Illustration: Chad Hagen

Steve sits up and takes in the crisp new daylight pouring through the bedroom window. He looks down at his companion, still pretending to sleep. “Okay, Kiri, I’m up.”

She stirs out of bed and begins dressing. “You received 164 messages overnight. I answered all but one.”

In the bathroom, Steve stares at his disheveled self. “Fine, give it to me.”

“Your mother wants to know why you won’t get a real girlfriend.”

He bursts out laughing. “Anything else?”

“Your cholesterol is creeping up again. And there have been 15,712 attempts to hack my mind in the last hour.”

“Good grief! Can you identify the source?”

“It’s distributed. Mostly inducements to purchase a new RF oven. I’m shifting ciphers and restricting network traffic.”

“Okay. Let me know if you start hearing voices.” Steve pauses. “Any good deals?”

“One with remote control is in our price range. It has mostly good reviews.”

“You can buy it.”

Kiri smiles. “I’ll stay in bed and cook dinner with a thought.”

Steve goes to the car and takes his seat.

Car, a creature of habit, pulls out and heads to work without any prodding.

Leaning his head back, Steve watches the world go by. Screw the news. He’ll read it later.

Car deposits Steve in front of his office building and then searches for a parking spot.

Steve walks to the lounge, grabs a roll and some coffee. His coworkers drift in and chat for hours. They try to find some inspiration for a new movie script. AI-generated art is flawless in execution, even in depth of story, but somehow it doesn’t resonate well with humans, much as one generation’s music does not always appeal to the next. AIs simply don’t share the human condition.

But maybe they could if they experienced the world through a body. That’s the whole point of the experiment with Kiri…

It’s sci-fi now, but by midcentury we could be living in Steve and Kiri’s world. Computing, after about 70 years, is at a momentous juncture. The old approaches, based on CMOS technology and the von Neumann architecture, are reaching their fundamental limits. Meanwhile, massive efforts around the world to understand the workings of the human brain are yielding new insights into one of the greatest scientific mysteries: the biological basis of human cognition.

The dream of a thinking machine—one like Kiri that reacts, plans, and reasons like a human—is as old as the computer age. In 1950, Alan Turing proposed to test whether machines can think by comparing their conversation with that of humans. He predicted computers would pass his test by the year 2000. Computing pioneers such as John von Neumann also set out to imitate the brain. They had only the simplest notion of neurons, based on the work of neuroscientist Santiago Ramón y Cajal and others in the late 1800s. And the dream proved elusive, full of false starts and blind alleys. Even now, we have little idea how the tangible brain gives rise to the intangible experience of conscious thought.

Today, building a better model of the brain is the goal of major government efforts such as the BRAIN Initiative in the United States and the Human Brain Project in Europe, joined by private efforts such as those of the Allen Institute for Brain Science, in Seattle. Collectively, these initiatives involve hundreds of researchers and billions of dollars.

With systematic data collection and rigorous insights into the brain, a new generation of computer pioneers hopes to create truly thinking machines.

If they succeed, they will transform the human condition, just as the Industrial Revolution did 200 years ago. For nearly all of human history, we had to grow our own food and make things by hand. The Industrial Revolution unleashed vast stores of energy, allowing us to build, farm, travel, and communicate on a whole new scale. The AI revolution will take us one enormous leap further, freeing us from the need to control every detail of operating the machines that underlie modern civilization. And as a consequence of copying the brain, we will come to understand ourselves in a deeper, truer light. Perhaps the first benefits will be in mental health, organizational behavior, or even international relations.

Such machines will also improve our health in general. Imagine a device, whether a robot or your cellphone, that keeps your medical records. Combining this personalized data with a sophisticated model of all the pathways that regulate the human body, it could simulate scenarios and recommend healthy behaviors or medical actions tailored to you. A human doctor can correlate only a few variables at once, but such an app could consider thousands. It would be more effective and more personal than any physician.

Re-creating the processes of the brain will let us automate anything humans now do. Think about fast food. Just combine a neural controller chip that imitates the reasoning, intuitive, and mechanical-control powers of the brain with a few thousand dollars’ worth of parts, and you have a short-order bot. You’d order a burger with your phone, and then drive up to retrieve your food from a building with no humans in it. Many other commercial facilities would be similarly human free.

That may sound horrifying, given how rigid computers are today. Ever call a customer service or technical support line, only to be forced through a frustrating series of automated menus by a pleasant canned voice asking you repeatedly to “press or say 3,” at the end of which you’ve gotten nowhere? The charade creates human expectations, yet the machines frequently fail to deliver and can’t even get angry when you scream at them. Thinking machines will sense your emotions, understand your goals, and actively help you achieve them. Rather than mechanically running through a fixed set of instructions, they will adjust as circumstances change.

That’s because they’ll be modeled on our brains, which are exquisitely adapted to navigating complex environments and working with other humans. With little conscious effort, we understand language and grasp shades of meaning and mood from the subtle cues of body language, facial expression, and tone of voice. And the brain does all that while consuming astonishingly little energy.

That 1.3-kilogram lump of neural tissue you carry around in your head accounts for about 20 percent of your body’s metabolism. Thus, with an average basal metabolism of 100 watts, each of us is equipped with the biological equivalent of a 20-W supercomputer. Even today’s most powerful computers, running at 20 megawatts, can’t come close to matching the brain.
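The arithmetic behind that comparison is straightforward; here it is as a minimal Python sketch, using only the round numbers quoted above (the 20-megawatt figure stands in for a large supercomputer, not any specific machine):

```python
# Back-of-the-envelope comparison, using the round figures from the text.
basal_metabolism_w = 100.0        # average human basal metabolic rate, in watts
brain_fraction = 0.20             # share of that budget consumed by the brain
supercomputer_w = 20_000_000.0    # rough power draw of a large supercomputer, in watts

brain_w = basal_metabolism_w * brain_fraction
print(f"Brain power budget: {brain_w:.0f} W")                              # -> 20 W
print(f"Supercomputer-to-brain ratio: {supercomputer_w / brain_w:,.0f}x")  # -> 1,000,000x
```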

How does the brain do it? It’s not that neurons are so much more efficient than transistors. In fact, when it comes to moving signals around, neurons have one-tenth the efficiency. It must be the organization of those neurons and their patterns of interaction, or “algorithms.” The brain has relatively shallow but massively parallel networks. At every level, from deep inside cells to large brain regions, there are feedback loops that keep the system in balance and change it in response to activity from neighboring units. The ultimate feedback loop is through the muscles to the outside world and back through the senses.
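As a toy illustration of such a balancing loop (the set point, gain, and inputs below are invented purely for illustration, not drawn from neuroscience), a unit can be driven by its neighbors while a corrective term keeps pulling its activity back toward a target level:

```python
# Toy negative-feedback loop: neighboring activity drives the unit up,
# and a corrective term pulls it back toward a set point. Illustrative values only.
target = 1.0          # desired activity level
gain = 0.3            # strength of the corrective feedback
activity = 0.0

neighbor_drive = [0.8, 1.5, 2.0, 0.2, 1.0, 1.0, 1.0, 1.0]
for drive in neighbor_drive:
    activity += 0.5 * drive                    # excitation from neighboring units
    activity -= gain * (activity - target)     # feedback nudges activity toward the target
    print(f"activity = {activity:.2f}")
```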

Traditionally, neurons were viewed as units that collect thousands of inputs, transform them computationally, and then send signals downstream to other neurons via connections called synapses. But it turns out that this model is too simplistic; surprising computational power exists in every part of the system. Even a single synapse contains hundreds of different protein types having complex interactions. It’s a molecular computer in its own right.
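That traditional picture is essentially the abstraction behind today’s artificial neural networks. A minimal sketch of it in Python, with invented inputs and synaptic weights purely for illustration:

```python
import math

def neuron_output(inputs, weights, bias):
    """Classic simplified neuron: a weighted sum of inputs squashed by a sigmoid.
    As the text notes, real neurons and synapses do far more than this."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # output "firing rate" between 0 and 1

# Illustrative values only: three upstream signals and their synaptic strengths.
print(neuron_output(inputs=[0.9, 0.1, 0.4], weights=[1.2, -0.7, 0.3], bias=-0.5))
```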

And there are hundreds of different types of neurons, each performing a special role in the neural circuitry. Most neurons communicate through physical contact, so they grow long skinny branches to find the right partner. Signals move along these branches via a chain of amplifiers. Ion pumps keep the neuron’s cell membrane charged, like a battery. Signals travel as short sharp changes of voltage, called spikes, which ripple down the membrane.
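A common computational caricature of that behavior is the leaky integrate-and-fire neuron: the membrane voltage charges up under input, leaks back toward rest, and emits a spike (then resets) when it crosses a threshold. The constants in this sketch are arbitrary, chosen only to show the cycle, not taken from physiology:

```python
# Leaky integrate-and-fire sketch. All constants are illustrative, not physiological.
v_rest, v_threshold, v_reset = 0.0, 1.0, 0.0
leak = 0.1                     # fraction of the voltage that leaks away each step
v = v_rest

input_current = [0.3] * 20     # constant drive over 20 time steps
for t, i in enumerate(input_current):
    v += i - leak * (v - v_rest)   # integrate the input while leaking toward rest
    if v >= v_threshold:
        print(f"spike at step {t}")
        v = v_reset                # the membrane resets after each spike
```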

The power of the brain goes beyond its internal connections to its ability to communicate with other brains. Some animals form swarms or social groups, but only humans form deep hierarchies. This penchant, more than any unique cognitive ability, enables us to dominate the planet and construct objects of exquisite complexity. Collectively, we humans are capable of achieving truly great things.

Now we are combining machine intelligence with our own. As our systems—industrial, technological, medical—grow in sophistication and complexity, so too must the intelligence that operates them. Eventually, our tools will think for themselves, perhaps even become conscious. Some people find this a scary prospect. If our tools think for themselves, they could turn against us. What if, instead, we create machines that love us?

Steve arrives home full of dread. Inside, the place is pristinely clean. A delicious aroma wafts from the new oven. Kiri is on the back porch, working at an easel. He walks up behind her. “How was your day?”

“I made a new painting.” She steps away to show him. The canvas contains a photo-perfect rendition of the yard, in oils.

“Um, it’s nice.”

“You’re lying. I can tell from your biosensors.”

“Listen, Kiri, I have to take you back to the lab. They say you’ve progressed as far as you can with me.”

“I like it here. Please let me stay. I’ll be anything you want.”

“That’s the problem. You try so hard to please me that you haven’t found yourself.”

Water trickles down her cheek. She wipes it and studies her moist hand. “You think all this is fake.”

Steve takes Kiri in a tight embrace and holds her for a long time. He whispers, “I don’t know.”

This article appears in the June 2017 print issue as “The Dawn of the Real Thinking Machine.”

About the Author

Fred Rothganger is a senior member of the technical staff at Sandia National Laboratories, in Albuquerque, N.M. His novel SuSAn imagines the first robot to attain sentience, and her relationship with humans.
