U.S. Blacklisting of China's Supercomputers May Backfire

A U.S. decision to block Intel from upgrading a Chinese supercomputer won't stop China in the long run

2 min read
Photo: Imaginechina/AP Photo

When China wanted to upgrade Tianhe-2, currently the world’s fastest supercomputer, it turned to U.S. chipmaker Intel. But the U.S. government has blocked Intel from helping with the upgrade and blacklisted several Chinese supercomputing centers over concerns about their involvement in nuclear weapons development. Experts warn that in the long run such a move may hurt the business of U.S. chipmakers and encourage China to speed up its homegrown chip development.

The U.S. Commerce Department initially denied Intel an export license for supplying chips to China’s supercomputing centers last fall, according to the Wall Street Journal. The Commerce Department followed up by posting a notice on 18 February that listed the Chinese supercomputing centers as having been “involved in activities contrary to the national security and foreign policy interests of the United States.” Specifically, the notice cited those centers’ involvement in “nuclear explosive activities,” a phrase that could cover anything from research and development to testing.

Chuck Mulloy, an Intel spokesperson, gave HPC Wire and other news media the following statement:

Intel was informed in August by the U.S. Department of Commerce that an export license was required for the shipment of Xeon and Xeon Phi parts for use in specific previously disclosed supercomputer projects with Chinese customer INSPUR. Intel complied with the notification and applied for the license, which was denied. We are in compliance with U.S. law.

The four blacklisted Chinese centers are the National University of Defense Technology (NUDT), the National Supercomputing Center in Changsha, the National Supercomputing Center in Guangzhou, and the National Supercomputing Center in Tianjin. China’s Tianhe-2 supercomputer was built by NUDT and is located at the Guangzhou center. An earlier system, Tianhe-1A, resides at the Tianjin center.

Tianhe-2 currently rules the Top 500 list of supercomputers ranked according to calculation speeds. The Chinese supercomputer relies upon 80,000 Intel Xeon chips to achieve a peak computing performance of 33 petaflops (33 thousand trillion calculations per second). The sale of additional Intel chips would have helped Tianhe-2 upgrade its peak performance beyond 110 petaflops.
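As a rough illustration of what those headline figures imply, here is a back-of-envelope sketch in Python. It assumes, purely for arithmetic purposes, that peak performance is spread evenly across the 80,000 Xeon chips the article cites; the machine’s real performance profile is more complicated than a simple per-chip average.

```python
# Back-of-envelope arithmetic from the figures quoted above.
# Assumption (illustration only): peak throughput is spread evenly
# across all 80,000 Intel Xeon chips cited in the article.

PETAFLOP = 1e15  # floating-point operations per second

peak_flops = 33 * PETAFLOP   # Tianhe-2's current peak
num_chips = 80_000

# Average throughput attributed to each chip
per_chip = peak_flops / num_chips
print(f"Average per-chip throughput: {per_chip / 1e9:.1f} gigaflops")

# How large a jump the planned 110-petaflop upgrade would be
upgraded = 110 * PETAFLOP
print(f"Planned upgrade factor: {upgraded / peak_flops:.1f}x")
```

By this crude measure, each chip contributes a few hundred gigaflops, and the blocked upgrade would have more than tripled the machine’s peak performance.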

Such an upgrade would help keep Tianhe-2 ahead in the worldwide supercomputing race. The U.S. government has invested $325 million in building two supercomputers capable of achieving peak performances of 100 petaflops by 2017. The United States has also struck a separate deal with Intel to build a 180-petaflops Aurora supercomputer at the Argonne National Laboratory in Illinois. The latter is scheduled to go online by 2019.

Experts both inside and outside government overwhelmingly criticized the U.S. government’s decisions as merely delaying China’s supercomputing ambitions while hurting U.S. business prospects in the long run. China now has every reason to focus on homegrown high-performance computing chips rather than U.S. ones, lest the U.S. government block future supercomputing projects, according to VR World, the tech news publication that broke this story.

“The Chinese will be more incentivized to develop their own technology, and U.S. manufacturers will be seen as less reliable and potentially not able to satisfy foreign orders,” said Horst Simon, deputy director of the U.S. Department of Energy’s Lawrence Berkeley National Laboratory, in a Wall Street Journal interview.

Jack Dongarra, a computer scientist at the University of Tennessee and the leading editor of the Top 500 supercomputing list, voiced similar concerns in an interview with the official Chinese news agency Xinhua.

"The U.S. government is trying to stop the spread of high performance computer systems in China," Dongarra told Xinhua. "The ban will probably accelerate the development of a processor designed in China for use in high performance computers."


Why Functional Programming Should Be the Future of Software Development

It’s hard to learn, but your code will produce fewer nasty surprises

11 min read
A plate of spaghetti made from code
Shira Inbar

You’d expect the longest and most costly phase in the lifecycle of a software product to be the initial development of the system, when all those great features are first imagined and then created. In fact, the hardest part comes later, during the maintenance phase. That’s when programmers pay the price for the shortcuts they took during development.

So why did they take shortcuts? Maybe they didn’t realize they were cutting corners. Only when their code was deployed and exercised by a lot of users did its hidden flaws come to light. And maybe the developers were rushed: time-to-market pressures would almost guarantee that their software would contain more bugs than it otherwise would.
