Stop us if you’ve heard this one before: In the near future, we’ll be able to build machines that learn, reason, and even emote their way to solving problems, the way people do.
If you’ve ever been interested in artificial intelligence, you’ve seen that promise broken countless times. Way back in the 1960s, the relatively recent invention of the transistor prompted breathless predictions that machines would outsmart their human handlers within 20 years. Now, 50 years later, it seems the best we can do is automated tech support, intoned with a preternatural calm that may or may not send callers into a murderous rage.
So why should you believe us when we say we finally have the technology that will lead to a true artificial intelligence? Because of MoNETA, the brain on a chip. MoNETA (Modular Neural Exploring Traveling Agent) is the software we’re designing at Boston University’s department of cognitive and neural systems, which will run on a brain-inspired microprocessor under development at HP Labs in California. It will function according to the principles that distinguish us mammals most profoundly from our fast but witless machines. MoNETA (the goddess of memory—cute, huh?) will do things no computer ever has. It will perceive its surroundings, decide which information is useful, integrate that information into the emerging structure of its reality, and in some applications, formulate plans that will ensure its survival. In other words, MoNETA will be motivated by the same drives that motivate cockroaches, cats, and humans.
Researchers have suspected for decades that real artificial intelligence can’t be done on traditional hardware, with its rigid adherence to Boolean logic and vast separation between memory and processing. But that knowledge was of little use until about two years ago, when HP built a new class of electronic device called a memristor. Before the memristor, it would have been impossible to create something with the form factor of a brain, the low power requirements, and the instantaneous internal communications. Turns out that those three things are key to making anything that resembles the brain and thus can be trained and coaxed to behave like a brain. In this case, form is function, or more accurately, function is hopeless without form.
Basically, memristors are small enough, cheap enough, and efficient enough to fill the bill. Perhaps most important, they have key characteristics that resemble those of synapses. That’s why they will be a crucial enabler of an artificial intelligence worthy of the term.
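The synapse-like behavior at the heart of that claim is that a memristor's resistance depends on the history of charge that has passed through it, the way a synapse's strength depends on its history of activity. A minimal sketch of that idea, loosely following HP's published linear ion-drift description of the device (all parameter values here are illustrative assumptions, not HP's actual figures):

```python
# Toy memristor: resistance depends on the history of charge through the
# device. The state variable w (0..1) is the normalized width of the doped
# region; parameter values below are illustrative, not HP's.

R_ON = 100.0      # resistance when fully doped (ohms), assumed
R_OFF = 16_000.0  # resistance when fully undoped (ohms), assumed
DT = 1e-3         # time step (s), assumed
K = 10_000.0      # lumped drift coefficient (1/C), assumed

def step(w, voltage):
    """Advance the state w by one time step under an applied voltage."""
    resistance = R_ON * w + R_OFF * (1.0 - w)
    current = voltage / resistance
    w = min(1.0, max(0.0, w + K * current * DT))  # state tracks charge flow
    return w, resistance

# Repeated positive pulses lower the resistance, the way repeated activity
# strengthens ("potentiates") a biological synapse.
w = 0.1
history = []
for _ in range(200):
    w, r = step(w, 1.0)
    history.append(r)
```

The point is not the particular equations but the memory effect: unlike a transistor, the device's present behavior encodes its past, which is exactly the property a synapse-dense artificial brain needs.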
The entity bankrolling the research that will yield this new artificial intelligence is the U.S. Defense Advanced Research Projects Agency (DARPA). When work on the brain-inspired microprocessor is complete, MoNETA’s first starring role will likely be in the U.S. military, standing in for irreplaceable humans in scout vehicles searching for roadside bombs or navigating hostile terrain. But we don’t expect it to spend much time confined to a niche. Within five years, powerful, brainlike systems will run on cheap and widely available hardware.
How brainlike? We’re not sure. But we expect that the changes MoNETA will foment in the electronics industry over the next couple of decades will be astounding.
Artificial intelligence hasn’t stood still over the past half century, even if we never got the humanlike assistants that some thought we’d have by now. Computers diagnose patients over the Internet. High-end cars help keep you from straying out of your lane. Gmail’s Priority Inbox does a pretty decent job of prioritizing your e-mails.
But even the most helpful AI must be programmed explicitly to carry out its one specific task. What we want is a general-purpose intelligence that can be set loose on any problem; one that can adapt to a new environment without having to be retrained constantly; one that can tease the single significant morsel out of a gluttonous banquet of information the way we humans have evolved to do over millions of years.
Think about that MoNETA-enabled military scout vehicle for a moment. It will be able to go into a mission with partially known objectives that change suddenly. It will be able to negotiate unfamiliar terrain, recognize a pattern that indicates hostile activity, make a new plan, and hightail it out of the hostile area. If the road is blocked, it will be able to make a spur-of-the-moment decision and go off-road to get home. Intuition, pattern recognition, improvisation, and the ability to negotiate ambiguity: All of these things are done really well by mammalian brains—and absolutely abysmally by today’s microprocessors and software.
Consider Deep Blue, IBM’s 1.4-ton supercomputer, which in 1997 faced then world chess champion Garry Kasparov. In prior years, Kasparov had defeated the computer’s predecessors five times. After a taut series comprising one win apiece and three draws, Deep Blue finally trounced Kasparov in game six. Nevertheless, Deep Blue was not intelligent. To beat Kasparov, its special-purpose hardware used a brute-force strategy of simply calculating the value of 200 million possible chess moves each second. In the same amount of time, Kasparov could evaluate roughly two chess positions.
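That brute-force strategy is, at its core, minimax search: score every line of play to some depth and pick the move with the best guaranteed value. A toy sketch (the game tree here is made up, not chess; the point is the exhaustive enumeration):

```python
# Minimax over a tiny made-up game tree. Inner lists are positions where the
# next player chooses; numbers are static evaluations of terminal lines.
# Deep Blue did this over chess positions, 200 million evaluations per second.

def minimax(node, maximizing):
    if isinstance(node, (int, float)):  # leaf: static evaluation
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

tree = [[3, 12], [2, 4], [14, 5, 2]]  # three candidate moves for the maximizer
best = max(range(len(tree)), key=lambda i: minimax(tree[i], False))
```

The maximizer inspects every branch before choosing; nothing in the procedure resembles a grandmaster's intuition about which handful of lines deserve attention.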
Over the next 10 years, computing capabilities skyrocketed: By 2007 the processing power of that 1.4-ton supercomputer had been contained within a Cell microprocessor roughly the size of a thumbnail. In the decade between them, transistor counts had jumped from 7.5 million on an Intel Pentium II to 234 million on the Cell. But that explosion of computing power did not bring artificial intelligence the slightest bit closer, as DARPA’s Grand Challenge has amply demonstrated.
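A quick back-of-the-envelope check shows that the jump the figures above describe is just Moore's law doing its usual work, which is precisely why raw transistor counts alone never delivered intelligence:

```python
# 7.5 million transistors (Pentium II, 1997) to 234 million (Cell, ~2007):
# roughly a 31x increase, i.e. about five doublings in ten years.
import math

growth = 234e6 / 7.5e6                  # ~31x
doublings = math.log2(growth)           # ~5 doublings
years_per_doubling = 10 / doublings     # ~2 years per doubling
```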