"Tardis" Memory Could Enable Huge Multi-Core Computer Chips

MIT's new technique could more easily enable multi-core chips with hundreds or thousands of cores

Illustration: MIT

Future generations of computer chips could become much more powerful, with processors containing hundreds or even thousands of cores. But these huge multi-core processors will also require loads of memory so their directories can keep track of data on each individual core and coordinate updates to shared data. A new MIT technique promises to greatly reduce the required memory for such coordination as multi-core processors scale up in the coming years.

The new MIT system, called Tardis, coordinates the shared data on multi-core processors much more efficiently than existing directory techniques, according to an MIT press release. As a result, the required memory grows with the logarithm of the number of cores rather than in direct proportion to it. So, a future 128-core chip might need only one-third the memory of a comparable chip based on an existing directory system. The savings become even greater as the number of cores increases: 80 percent for a 256-core chip and 96 percent for a 1,000-core chip.
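To make the scaling argument concrete, here is a rough back-of-the-envelope comparison in Python. It assumes a full bit-vector directory that stores one sharer bit per core for every cache line, versus a timestamp scheme whose per-line metadata grows only with the logarithm of the core count. The field widths are illustrative assumptions, not figures from the MIT paper, so the printed ratios only approximate the savings quoted above.

```python
# Illustrative comparison (not the paper's exact accounting) of per-cache-line
# coherence metadata: a bit-vector directory needs O(N) bits, while a
# timestamp-plus-owner-ID scheme needs roughly O(log N) extra bits.
import math

def directory_bits(cores: int) -> int:
    # Full bit-vector directory: one presence bit per core for each tracked line.
    return cores

def timestamp_bits(cores: int, ts_width: int = 64) -> int:
    # Assumption: one fixed-width timestamp per line plus a log2(N)-bit owner ID.
    return ts_width + math.ceil(math.log2(cores))

for cores in (128, 256, 1024):
    d, t = directory_bits(cores), timestamp_bits(cores)
    print(f"{cores:5d} cores: directory {d:5d} bits/line, "
          f"timestamps {t:3d} bits/line  ({100 * (1 - t / d):.0f}% smaller)")
```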

MIT’s group plans to present a paper on its new technique at the International Conference on Parallel Architectures and Compilation Techniques in October.

So how does Tardis improve on the existing system? Today’s multi-core chips have directories to make sure no conflicts arise whenever one of the cores needs to update the data shared by all the cores. When a shared update takes place, the chip’s directory system checks in with other cores working on the same shared data and sends messages to invalidate the local “stale copies” of the data on those other cores.
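The snippet below is a deliberately simplified sketch of that directory behavior, written in Python for readability. It tracks a single shared value and the set of cores holding a copy; real directories manage many cache lines and full coherence states, but the sketch shows the traffic pattern the article describes: every update triggers an invalidation message to each other sharer.

```python
# Toy model of directory-based coherence for one shared value.
class Directory:
    def __init__(self):
        self.value = 0
        self.sharers = set()   # IDs of cores holding a (possibly stale) local copy

    def read(self, core_id):
        self.sharers.add(core_id)   # record the new sharer
        return self.value

    def write(self, core_id, new_value):
        # Before the update commits, tell every other sharer to drop its copy.
        for other in sorted(self.sharers - {core_id}):
            print(f"invalidate -> core {other}")
        self.sharers = {core_id}
        self.value = new_value

d = Directory()
for core in (0, 1, 2):
    d.read(core)    # cores 0, 1, and 2 each cache the shared value
d.write(0, 42)      # core 0's update invalidates cores 1 and 2
```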

But Tardis uses a logical-time ordering system that may not just save memory for huge multi-core chips, but also boost their performance. That’s because it eliminates the directory system’s requirement to send out all the messages invalidating the local data copies. Instead, it assigns timestamps to each computing operation and ensures that the operations all follow the timestamp order, said Xiangyao Yu, an MIT graduate student in electrical engineering and computer science and the paper’s lead author.

If core “A” updates some shared data, it takes ownership of that data and receives the latest timestamp. Other cores can continue working on their local copies of the same data with older timestamps. But each time a core reads shared data, it receives a “lease” that eventually needs to be renewed. If one of the other cores – let’s say core “B” – eventually needs to renew its lease and read the shared data again, it must coordinate with core “A” to read the updated data and receive the latest timestamp. That leap ahead in time inspired the MIT group to name their system Tardis in honor of the time-traveling spacecraft in the long-running TV series Doctor Who.
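Here is a toy model of that lease-and-timestamp idea, again in Python. The fixed lease length and the rule that a write simply jumps its logical timestamp past all outstanding leases are simplifying assumptions of this sketch; the actual Tardis protocol, which keeps separate read and write timestamps per cache line, is considerably more involved. The point the sketch captures is that no invalidation messages are ever sent.

```python
# Toy model of Tardis-style leases and logical timestamps for one cache line.
LEASE = 10  # logical ticks for which a read may rely on its local copy (assumed)

class TardisLine:
    def __init__(self):
        self.value = 0
        self.wts = 0   # logical timestamp of the last write
        self.rts = 0   # latest timestamp up to which reads have been leased

    def read(self, core_ts):
        # A read advances the core's clock to at least the last write
        # and extends the lease on the current value.
        core_ts = max(core_ts, self.wts)
        self.rts = max(self.rts, core_ts + LEASE)
        return self.value, core_ts

    def write(self, core_ts, new_value):
        # A write invalidates nothing; it simply happens "later" in logical
        # time than every outstanding lease.
        core_ts = max(core_ts, self.rts + 1)
        self.wts = core_ts
        self.value = new_value
        return core_ts

line = TardisLine()
_, b_ts = line.read(core_ts=0)               # core B reads and gets a lease at time 0
a_ts = line.write(core_ts=0, new_value=7)    # core A writes "in the future", past B's lease
_, b_ts = line.read(core_ts=b_ts)            # B renews its lease and leaps forward to A's time
print(a_ts, b_ts)                            # -> 11 11
```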

Tardis’ simple timestamp system sidesteps the limitations of directory systems, which rely on physical time, Christopher Hughes, a principal engineer at Intel Labs, told MIT News. The directory system limits computing performance because it allows only one core to work on shared data at any given time and prevents other cores from touching it for a certain period of physical time. By comparison, Tardis can simply “advance the clock” with its timestamps, allowing individual cores to keep working on their local data copies.

Whether chipmakers will quickly switch over to the new Tardis system remains to be seen.


Why Functional Programming Should Be the Future of Software Development

It’s hard to learn, but your code will produce fewer nasty surprises

A plate of spaghetti made from code
Illustration: Shira Inbar

You’d expect the longest and most costly phase in the lifecycle of a software product to be the initial development of the system, when all those great features are first imagined and then created. In fact, the hardest part comes later, during the maintenance phase. That’s when programmers pay the price for the shortcuts they took during development.

So why did they take shortcuts? Maybe they didn’t realize that they were cutting any corners. Only when their code was deployed and exercised by a lot of users did its hidden flaws come to light. And maybe the developers were rushed. Time-to-market pressures almost guarantee that their software will contain more bugs than it otherwise would.
