
How Livermore Scientists Will Put IBM's Brain-Inspired Chips To The Test

The TrueNorth neuromorphic chip may help engineers reach the exascale

IBM fellow Dharmendra Modha with 16-chip arrays of the company's TrueNorth processor. The one on the right is now installed at Lawrence Livermore National Laboratory.
Photo: IBM Research

Last week, Dharmendra Modha said goodbye to a computer some six years in the making: a set of 16 interconnected TrueNorth chips built to mimic the ultra-low-energy, highly parallel operation of the human brain.

On Thursday, a team from IBM Research-Almaden in California hopped in a car and drove the unit some 75 minutes north to the U.S. Department of Energy’s Lawrence Livermore National Laboratory. There, scientists and engineers will evaluate whether the technology could be a useful weapon in their computing arsenal.

It was a big moment for the IBM program, which devised the TrueNorth concept in 2010 and unveiled the first chip in 2014. Developed in collaboration with Cornell University, the TrueNorth chips use traditional digital components to implement decidedly brain-like behavior; each 5.4-billion-transistor chip can consume as little as 70 milliwatts (for more on how that could possibly work, see our 2014 story “How IBM Got Brainlike Efficiency From the TrueNorth Chip”).

Although these were not the first TrueNorth chips to ship, the array is notable, Modha says, because it integrates 16 chips onto a single board, demonstrating that the company can “scale up” the approach to larger and larger systems. The entire 16-chip array can draw as little as 2.5 watts (other components, such as the communications fabric, add some overhead to that).
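A quick back-of-the-envelope check of those numbers (my own arithmetic, using only the figures quoted in this story; both power figures are the quoted minimums, not typical draws):

```python
# Back-of-the-envelope power math for the 16-chip TrueNorth array,
# using only the numbers quoted in this article.

chip_power_w = 0.070    # "as little as 70 milliwatts" per chip
num_chips = 16

chips_only_w = chip_power_w * num_chips      # 1.12 W for the silicon alone
array_power_w = 2.5                          # quoted minimum for the full array
overhead_w = array_power_w - chips_only_w    # ~1.4 W left for the communications
                                             # fabric and other board-level overhead

print(f"Chips alone:      {chips_only_w:.2f} W")
print(f"Array minimum:    {array_power_w:.2f} W")
print(f"Implied overhead: {overhead_w:.2f} W")
```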

Livermore, which has some of the world’s fastest supercomputers and signed a $1 million contract with IBM for the TrueNorth unit, will be exploring how this new technology might play a role in areas such as cybersecurity and physical simulation. 

I was particularly excited to see exascale computing mentioned in the press release announcing the system. Probably the biggest looming question among high-performance computer makers is how we’ll reach the exascale, when machines will be some 30 times as fast as the fastest supercomputer today, without also running up staggering (and probably infeasibly expensive) utility bills.
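For a sense of the scale of that problem, here is some rough arithmetic of my own (the baseline figures describe China’s Tianhe-2, currently the fastest machine on the TOP500 list; they do not come from the press release):

```python
# Rough arithmetic behind the exascale power problem. Baseline numbers
# are approximate figures for Tianhe-2, today's fastest supercomputer.

current_flops = 33.9e15     # ~33.9 petaflops (Linpack)
current_power_mw = 17.8     # ~17.8 megawatts, including cooling

exa_flops = 1e18
speedup = exa_flops / current_flops           # ~30x, matching the figure above
naive_power_mw = current_power_mw * speedup   # power if efficiency stayed flat

print(f"Required speedup: {speedup:.0f}x")                  # ~30x
print(f"Naive power at exascale: {naive_power_mw:.0f} MW")  # ~500 MW
```

Several hundred megawatts is power-plant territory, which is why efficiency, not raw speed, dominates the exascale conversation.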

But as it turns out, chances are slim that we’ll be simulating nuclear weapons or designing tomorrow’s nuclear reactors on supercomputers composed entirely of chips modeled on the human brain. Although TrueNorth can, in principle, perform any computation, the speed and efficiency of such neuromorphic chips shine only in particular applications, such as pattern recognition. Traditional computers will still be with us, Modha says: “What we’re offering is a complementary architecture.”

Engineers are still sorting out the best way to build an exascale supercomputer, says Brian Van Essen, a computer scientist at Livermore’s Center for Applied Scientific Computing. Heterogeneous computing, which would mix different computing technologies such as CPUs, graphics processing units, FPGAs, and neuromorphic chips, “is definitely one potential path,” he says. But, he adds, “it’s not clear what the final system design is going to look like.”

Van Essen says one area Livermore hopes to explore with the TrueNorth chips is their potential role in large-scale simulation. “As we scale simulations and modeling [of] physical systems up to large sizes, sometimes the simulations can get into an area where the numerics get kind of garbled up,” he says.

He says a team is in the midst of evaluating whether machine learning can be used to detect problems before a simulation crashes and correct for the behavior. Van Essen says that if the approach looks promising, one could envision chips distributed throughout the system that would monitor the progress of a simulation. It would take a “nontrivial amount of horsepower to monitor the system,” Van Essen says, adding that it would be a good application for a low-power technology such as TrueNorth.
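To make that idea concrete, here is a minimal sketch of what such a watchdog might look like. Everything in it is illustrative rather than anything Livermore has described: the hypothetical `step` function stands in for the real simulation, and the crude `looks_garbled` check stands in for a trained, low-power pattern recognizer of the kind TrueNorth could host.

```python
import numpy as np

def looks_garbled(field: np.ndarray) -> bool:
    """Stand-in for a trained pattern recognizer: flag state that is
    drifting toward the 'garbled' numerics Van Essen describes."""
    # NaN/inf values, or a blow-up past an arbitrary illustrative threshold
    return (not np.all(np.isfinite(field))) or np.abs(field).max() > 1e6

def run_with_watchdog(state, step, n_steps, check_every=10):
    """Advance a simulation while periodically screening its state.

    `step` is a hypothetical function that advances the simulation one
    timestep. In the scheme sketched above, the screening itself would
    run on a low-power neuromorphic coprocessor, not the host CPU.
    """
    for i in range(n_steps):
        state = step(state)
        if i % check_every == 0 and looks_garbled(state):
            # In practice one might shrink the timestep, roll back to a
            # checkpoint, or alert the host before the run crashes.
            raise RuntimeError(f"suspect numerics detected at step {i}")
    return state
```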

If you’re looking to keep track of TrueNorth developments, Dharmendra Modha maintains a detailed blog.

Follow Rachel Courtland on Twitter at @rcourt.

