A Reality Check for the World’s Largest Radio Telescope

Chinese supercomputer tests software for the world’s biggest telescope

The Tianhe-2 supercomputer. Photo: Yutong Lu

The construction of the world’s largest radio telescope, the Square Kilometre Array, or SKA, will begin in 2017. When completed in 2023, the largest part of the project will be an array of thousands of telescope dishes in the Murchison region of Australia, spread across a region 200 kilometers wide. Operating at the higher end of the SKA’s frequency range, the array will produce radio images of the universe with an unprecedented angular resolving power, equivalent to that of a single dish 200 km across. A smaller area, 70 km across, will be populated with about 250,000 antennas covering a lower frequency range. A third antenna park, for mid-range frequencies, will be located in South Africa, near Cape Town.

Signals from all these antennas will be collected and integrated into a data stream that can then be further processed for use by the scientific community. Software and computing power are among the main challenges of the SKA project, says Andreas Wicenec, head of data-intensive astronomy at the International Centre for Radio Astronomy Research (ICRAR) in Perth, Australia. “The computing power that we need will correspond to what can be achieved with the now-fastest computers,” he adds. Two dedicated computers, one based in Cape Town and one in Perth, each with a speed of 150 petaflops, will handle the data stream of the SKA.

Recently, a part of the software under development ran on the world’s second-fastest supercomputer, the Tianhe-2, located in the National Supercomputer Center in Guangzhou, China. “For the time being, we do mostly deployment scaling capability tests, rather than real computing tests. The reason that we are doing this so early is that deployment will demand the highest processing power of the SKA computers,” says Wicenec.

The results of the test with the Tianhe-2 computer were encouraging. Reports Wicenec, “We are quite happy that the architecture is sound and rigid. We applied a few small changes to address failures in a more robust way. Failures in such very large computing systems are part of the normal operation. Our current prototype only implements a very naive failure handling, but we are working on a complete architecture just to address this issue. This will be prototyped and integrated over the next half year or so and will then be part of future large-scale tests.”
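The article doesn’t describe that failure-handling architecture in detail, but the underlying principle, treating node failures as routine and rescheduling work rather than aborting, can be illustrated with a minimal Python sketch. All names and the failure rate below are hypothetical, not taken from the SKA prototype.

```python
import random
import time

# Hypothetical sketch, not SKA code: at this scale, individual node
# failures are routine, so task submission must anticipate them.

class NodeFailure(Exception):
    """Raised when a compute node dies mid-task."""

def run_on_node(node_id, task):
    """Stand-in for dispatching a task to one node; fails at random
    to mimic the routine hardware failures of a very large cluster."""
    if random.random() < 0.2:  # illustrative failure rate, not measured
        raise NodeFailure(f"node {node_id} lost while running {task!r}")
    return f"{task} finished on node {node_id}"

def submit_with_failover(task, nodes, retries=5):
    """Reschedule a failed task on another node instead of aborting the run."""
    for attempt in range(1, retries + 1):
        node = random.choice(nodes)
        try:
            return run_on_node(node, task)
        except NodeFailure as err:
            print(f"attempt {attempt}: {err}; rescheduling")
            time.sleep(0.1)  # brief back-off before trying elsewhere
    raise RuntimeError(f"{task!r} failed {retries} times; escalating")

if __name__ == "__main__":
    print(submit_with_failover("image-gridding", nodes=list(range(16))))
```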

The experiment also revealed potential bottlenecks involving different types of data streams. “This will be resolved by creating independent interfaces, one for small messages and events, one for logging and alarms, and one for bulk data,” says Wicenec.
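To make the idea concrete, here is a minimal sketch of such a separation (the channel names are hypothetical, not the SKA interfaces): each traffic class gets its own queue and consumer thread, so a large bulk transfer can never delay a small event or an alarm.

```python
import queue
import threading

# Hypothetical sketch, not SKA code: three independent interfaces so
# that bulk data never sits in front of small messages or alarms.
channels = {
    "events": queue.Queue(),  # small messages and events
    "alarms": queue.Queue(),  # logging and alarms
    "bulk":   queue.Queue(),  # bulk data
}

def drain(name, channel):
    """Each channel has its own consumer thread, so a slow bulk
    transfer never blocks delivery on the other two interfaces."""
    while True:
        item = channel.get()
        if item is None:  # sentinel: shut this channel down
            break
        print(f"[{name}] handled: {item}")

threads = [threading.Thread(target=drain, args=(name, ch))
           for name, ch in channels.items()]
for t in threads:
    t.start()

channels["bulk"].put("visibility data block (several gigabytes)")
channels["events"].put("node 42 came online")  # not stuck behind bulk data
channels["alarms"].put("ALARM: disk array degraded")

for ch in channels.values():
    ch.put(None)  # stop sentinels
for t in threads:
    t.join()
```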

The whole data processing system, called the SKA Science Data Processor, will reside in the two dedicated SKA computers. Ideally, these would be custom-built machines, but that option is too expensive. “We are looking into common, off-the-shelf hardware, but assembled in a way that is optimized for us,” says Wicenec. And there is a multitude of options: microservers, integrated CPUs with ARM processors, and so on. He adds, “We don’t know yet what will be better, and this is why we are doing the software development right now, and testing the different options.”

The options for data storage, which will be off-site, and for data distribution are also now the subject of research. An attractive feature for the research community will be that data will automatically be processed and presented in a way that is useful to astronomers. Here, too, computing power has to be taken into account: more processing demands more computing power. “There will be a compromise between computer costs, network and capital costs,” says Wicenec, who adds that the computing grid developed by CERN is an interesting precedent. “They distribute data across the world, and we are looking into it.”
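The compromise Wicenec describes can be sketched as a toy cost model; every coefficient below is an illustrative placeholder, not an SKA figure. Doing more processing on-site raises compute costs but shrinks the data volume that must be shipped over the network.

```python
# Hypothetical toy model, not SKA numbers: vary how much processing is
# done before data leaves the site and look for the cheapest compromise.

def total_cost(p):
    """p in [0, 1]: fraction of the data reduction done on-site."""
    compute = 100.0 * p ** 2    # heavier processing: compute cost grows fast
    network = 80.0 * (1.0 - p)  # less reduction: more raw data to ship
    capital = 50.0              # fixed capital outlay
    return compute + network + capital

# Scan candidate operating points; the optimum sits between the extremes.
cost, best_p = min((total_cost(i / 100), i / 100) for i in range(101))
print(f"cheapest compromise at p = {best_p:.2f}, relative cost {cost:.1f}")
```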
