Novel Annealing Processor Is the Best Ever at Solving Combinatorial Optimization Problems

Tokyo Tech engineers say their CMOS processor bests current technologies in solving the traveling salesman conundrum and other complex puzzles

3 min read
STATICA processor on a fingertip for scale
Photo: Tokyo Institute of Technology

During the past two years, IEEE Spectrum has spotlighted several new approaches to solving combinatorial optimization problems, particularly Fujitsu’s Digital Annealer and more recently Toshiba’s Simulated Bifurcation Algorithm. Now, researchers at the Tokyo Institute of Technology, with help from colleagues at Hitachi, Hokkaido University, and the University of Tokyo, have engineered a new annealer architecture to handle this kind of task, which has proven too taxing for conventional computers.

Dubbed STATICA (Stochastic Cellular Automata Annealer Architecture), the processor is designed to take on challenges such as portfolio, logistic, and traffic flow optimization when they are expressed in the form of Ising models.

Originally used to describe the spins of interacting magnets, Ising models can also be used to solve optimization problems. That’s because the evolving magnetic interactions in a system progress towards the lowest-energy state, which conveniently mirrors how an optimization algorithm searches for the best—i.e. ground state—solution. In other words, the answer to a particular optimization question becomes the equivalent of searching for the lowest energy state of the Ising model.
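
The mapping can be made concrete with a toy sketch. In the example below (the couplings `J` and fields `h` are arbitrary illustrative values, not from the STATICA paper), the Ising energy of a spin configuration is computed, and the ground state is found by brute force; an annealer performs this same search heuristically when enumerating all 2^n configurations is intractable.

```python
import numpy as np
from itertools import product

# Toy 3-spin Ising model; J (symmetric couplings, zero diagonal) and
# h (external fields) are made-up values for illustration.
J = np.array([[ 0.0, 1.0, -0.5],
              [ 1.0, 0.0,  0.3],
              [-0.5, 0.3,  0.0]])
h = np.array([0.1, -0.2, 0.0])

def energy(s):
    # H(s) = -1/2 * s.J.s - h.s  (the 1/2 because J counts each pair twice)
    return -0.5 * s @ J @ s - h @ s

# Brute-force the ground state over all 2^3 spin configurations.
ground = min(product([-1, 1], repeat=3),
             key=lambda s: energy(np.array(s)))
```

With these values the minimum-energy configuration is `(-1, -1, 1)`; encoding an optimization problem into `J` and `h` so that its best answer is this ground state is exactly the translation step the article describes.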

Current annealers such as D-Wave’s quantum annealer computer and Fujitsu’s Digital Annealer calculate spin-evolutions serially, points out Professor Masato Motomura at Tokyo Tech’s Institute of Innovative Research and leader of the STATICA project. As one spin affects all the other spins in a given iteration, spin switchings are calculated one by one, making it a serial process. But in STATICA, he notes, that updating is performed in parallel using stochastic cellular automata (SCA). That is a means of simulating complex systems using the interactions of a large number of neighboring “cells” (spins in STATICA) with simple updating rules and some stochasticity (randomness).

In conventional annealing systems, if one spin flips, it affects all of the connected spins and therefore all the spins must be processed in the next iteration. But in STATICA, SCA introduces copies (replicas) of the original spins into the process. All original spin-spin interactions are redirected to their individual replica spins.

Diagrams comparing conventional and proposed spin-spin interactions

“In this method, all the replica spins are updated in parallel using these spin-spin interactions,” explains Motomura. “If one original spin flips, it affects its replica spin but not any of the other original spins, because there is no interaction between them, unlike conventional annealing. And in the next iteration, the replica spins are interpreted as original spins and the parallel spin-update is repeated.”
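
One common formulation of such a stochastic-cellular-automata update can be sketched as follows. This is not STATICA's actual circuit, just an illustrative model: every replica spin is refreshed simultaneously from the local field produced by all current spins, plus a self-coupling term `q` that pins each replica to its original spin; `beta` (inverse temperature) and `q` are made-up values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
J = rng.normal(size=(n, n))
J = (J + J.T) / 2              # symmetric spin-spin couplings
np.fill_diagonal(J, 0)         # no direct self-coupling in J
h = rng.normal(size=n)
s = rng.choice([-1, 1], size=n)

beta, q = 2.0, 1.0             # illustrative annealing parameters
for _ in range(100):
    field = J @ s + h + q * s                  # local field on each replica
    p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * field))
    s = np.where(rng.random(n) < p_up, 1, -1)  # all spins update at once
```

Because the update of every spin depends only on the previous iteration's values, the whole sweep is a single matrix-vector product plus an element-wise sampling step, which is what makes the hardware parallelism possible.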

As well as enabling parallel processing, STATICA also uses pre-computed results to reduce computation. “So if there is no spin-flip, there is nothing to compute,” says Motomura. “And if the influence of a flipped spin has already been computed, that result is reused.”
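
The reuse idea can be illustrated with a simple serial-annealer sketch (again, not STATICA's implementation): each spin's local field is computed once and cached, and after a flip only the one affected column of `J` is folded back into the cache, rather than recomputing the full matrix-vector product. If nothing flips, nothing is recomputed.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
J = rng.normal(size=(n, n))
J = (J + J.T) / 2
np.fill_diagonal(J, 0)
s = rng.choice([-1, 1], size=n)

field = J @ s                      # local fields, computed once up front
for _ in range(500):
    i = rng.integers(n)
    dE = 2 * s[i] * field[i]       # energy change if spin i were flipped
    if dE < 0 or rng.random() < np.exp(-dE):   # Metropolis rule, beta = 1
        s[i] = -s[i]
        field += 2 * J[:, i] * s[i]  # O(n) cache update; no flip => no work
```

The cached `field` stays consistent with a fresh `J @ s` throughout, so accepted flips cost O(n) instead of the O(n²) of recomputing every interaction.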

STATICA processor design
Image: Tokyo Institute of Technology

For proof of concept, the researchers had a 3-by-4-mm STATICA chip fabricated using a 65-nm CMOS process operating at a frequency of 320 megahertz and running on 649 milliwatts. Memory comprises a 1.3 megabit SRAM. This enabled an Ising model of 512 spins, equivalent to 262,000 connections, to be tested.

“Scaling by at least two orders of magnitude is possible,” notes Motomura. And the chip can be fabricated using the same process as standard processors and can easily be added to a PC as a co-processor, for instance, or added to its motherboard.

STATICA chip mounted on a circuit board with a USB connection and connected to a laptop PC as proof of concept
Photo: Tokyo Institute of Technology

“At the ISSCC Conference in February, where we presented a paper on STATICA, we mounted the chip on a circuit board with a USB connection,” he says, “and demonstrated it connected to a laptop PC as proof of concept.”

To compare STATICA’s performance against existing annealing technologies (using results given in published papers), the researchers employed a Maxcut benchmark test of 2,000 connections. STATICA came out on top in processing speed, accuracy, and energy efficiency. Compared with its nearest competitor, Toshiba’s Simulated Bifurcation Algorithm, STATICA took 0.13 milliseconds to complete the test, versus 0.5 ms for SBA. In energy efficiency, STATICA ran on an estimated 2 watts of power, far below the 40 watts for SBA. And in histogram comparisons of accuracy STATICA also came out ahead, according to Motomura.

For the next step, he says the team will scale up the processor and test it out using realistic problems. 

Other than that, there are no more technological hurdles to overcome. 

“STATICA is ready,” states Motomura. “The only question is whether there is sufficient market demand for such an annealing processor. We hope to see interest, for instance, from ride-sharing companies like Uber, and product distributors such as Amazon. Local governments wanting to control problems such as traffic congestion might also be interested. These are just a few examples of how STATICA might be used besides more obvious applications like portfolio optimization and drug discovery.”


Why Functional Programming Should Be the Future of Software Development

It’s hard to learn, but your code will produce fewer nasty surprises

11 min read
A plate of spaghetti made from code
Illustration: Shira Inbar

You’d expect the longest and most costly phase in the lifecycle of a software product to be the initial development of the system, when all those great features are first imagined and then created. In fact, the hardest part comes later, during the maintenance phase. That’s when programmers pay the price for the shortcuts they took during development.

So why did they take shortcuts? Maybe they didn’t realize that they were cutting any corners. Only when their code was deployed and exercised by a lot of users did its hidden flaws come to light. And maybe the developers were rushed. Time-to-market pressures would almost guarantee that their software would contain more bugs than it would otherwise.
