Future generations of computer chips could become much more powerful, with processors containing hundreds or even thousands of cores. But these huge multi-core processors will also require loads of memory so their directories can keep track of data on each individual core and coordinate updates to shared data. A new MIT technique promises to greatly reduce the required memory for such coordination as multi-core processors scale up in the coming years.
The new MIT system, called Tardis, coordinates the shared data on multi-core processors much more efficiently than existing directory techniques, according to an MIT press release. As a result, the required memory grows with the logarithm of the number of cores instead of in direct proportion to the number of cores. A future 128-core chip might therefore need only one-third the memory of a comparable chip based on an existing directory system. The savings become even greater as the core count increases: 80 percent for a 256-core chip and 96 percent for a 1,000-core chip.
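The scaling difference is easy to see in rough numbers. A conventional full-map directory keeps one presence bit per core for every cache block, so its per-block overhead grows linearly with the core count, while a timestamp scheme needs only fixed-width timestamps plus an owner ID of about log2(N) bits. The sketch below illustrates that contrast; the specific bit widths are illustrative assumptions, not figures from the MIT paper, so the ratios will not exactly match the article's percentages.

```python
from math import ceil, log2

def full_map_directory_bits(cores):
    # Conventional directory: one sharer/presence bit per core, per block.
    return cores

def tardis_style_bits(cores, timestamp_bits=14):
    # Timestamp scheme: a read/write timestamp pair of fixed (assumed) width,
    # plus an owner ID that needs ceil(log2(cores)) bits.
    return 2 * timestamp_bits + ceil(log2(cores))

for n in (128, 256, 1024):
    print(f"{n} cores: directory {full_map_directory_bits(n)} bits/block, "
          f"timestamp-based {tardis_style_bits(n)} bits/block")
```

The linear term dominates quickly: at 1,024 cores the full-map directory needs 1,024 bits per block, while the timestamp state stays near 38 bits under these assumptions.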
MIT’s group plans to present a paper on its new technique at the International Conference on Parallel Architectures and Compilation Techniques in October.
So how does Tardis improve on the existing system? Today’s multi-core chips have directories to make sure no conflicts arise whenever one of the cores needs to update the data shared by all the cores. When a shared update takes place, the chip’s directory system checks in with other cores working on the same shared data and sends messages to invalidate the local “stale copies” of the data on those other cores.
But Tardis uses a logical-time ordering system that may not just save memory for huge multi-core chips, but also boost their performance. That’s because it eliminates the directory system’s need to broadcast messages invalidating the local data copies. Instead, it assigns timestamps to each computing operation and ensures that the operations all follow the timestamp order, said Xiangyao Yu, an MIT graduate student in electrical engineering and computer science and the paper’s lead author.
If core “A” updates some shared data, it takes ownership of that data and receives the latest timestamp. Other cores can continue working on their local copies of the same data with older timestamps. But each time a core reads shared data, it receives a “lease” that eventually needs to be renewed. If one of the other cores – let’s say core “B” – eventually needs to renew its lease and read the shared data again, it must coordinate with core “A” to read the updated data and receive the latest timestamp. That leap ahead in time inspired the MIT group to name their system Tardis in honor of the time-traveling spacecraft in the long-running TV series Doctor Who.
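The lease-and-leap behavior described above can be sketched in a few lines. This is a simplified, single-threaded illustration of the idea, not the actual protocol: each cache line carries a write timestamp and a read-lease expiry in logical time, each core carries its own program timestamp, and a writer simply jumps its clock past any outstanding lease instead of sending invalidation messages. The field names and the fixed lease length are assumptions made for the sketch.

```python
class SharedLine:
    """One cache line's logical-time state: wts marks the last write,
    rts marks the end of the current read lease."""
    def __init__(self, value=0):
        self.value = value
        self.wts = 0   # write timestamp
        self.rts = 0   # read-lease expiry (logical time)

class Core:
    def __init__(self, name):
        self.name = name
        self.pts = 0   # this core's program timestamp

    def read(self, line, lease=10):
        # Reading moves the core forward to at least the line's write time,
        # then extends the read lease; no invalidation traffic is needed.
        self.pts = max(self.pts, line.wts)
        line.rts = max(line.rts, self.pts + lease)
        return line.value

    def write(self, line, value):
        # A write must be ordered after every outstanding lease, so the
        # writer's clock leaps past rts -- the "time travel" in Tardis.
        self.pts = max(self.pts, line.rts + 1)
        line.value = value
        line.wts = line.rts = self.pts

# Core A updates shared data while core B still holds a lease on a stale copy.
line = SharedLine(value=1)
a, b = Core("A"), Core("B")
b.read(line)        # B leases the line through logical time 10
a.write(line, 2)    # A's clock jumps to 11; B is never interrupted
assert b.read(line) == 2   # B renews its lease and catches up to A's time
```

Core B keeps computing on its old copy until its lease runs out; only when it reads again does it coordinate with core A and advance to the newer timestamp, which is exactly the renewal step the article describes.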
Tardis’ simple timestamp system eliminates the limitations of the directory system that relies upon physical time, Christopher Hughes, a principal engineer at Intel Labs, told MIT News. The directory system limits computing performance because it allows only one core to work on shared data at any given time and prevents other cores from touching it for a certain period of physical time. By comparison, Tardis can simply “advance the clock” with its timestamps, allowing individual cores to keep working on their local data copies.
Whether chipmakers will quickly switch over to the new Tardis system remains to be seen.
Jeremy Hsu has been working as a science and technology journalist in New York City since 2008. He has written on subjects as diverse as supercomputing and wearable electronics for IEEE Spectrum. When he’s not trying to wrap his head around the latest quantum computing news for Spectrum, he also contributes to a variety of publications such as Scientific American, Discover, Popular Science, and others. He is a graduate of New York University’s Science, Health & Environmental Reporting Program.