In June 2020, OpenAI, an independent artificial-intelligence research lab based in San Francisco, announced GPT-3, the third generation of its massive Generative Pre-trained Transformer language model, which can write everything from computer code to poetry.
A year later, with much less fanfare, Tsinghua University’s Beijing Academy of Artificial Intelligence released an even larger model, Wu Dao 2.0, with 10 times as many parameters (the neural-network values that encode information). While GPT-3 boasts 175 billion parameters, Wu Dao 2.0’s creators claim it has a whopping 1.75 trillion. Moreover, the model can generate not only text, as GPT-3 does, but also images from textual descriptions, like OpenAI’s 12-billion-parameter DALL-E model, and it uses a scaling strategy similar to that of Google’s 1.6-trillion-parameter Switch Transformer.
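A parameter count is simply a tally of the learned weights and biases in a network. As a rough, hypothetical sketch of how such counts arise and scale (the layer sizes below are illustrative only, not the architecture of Wu Dao 2.0 or GPT-3):

```python
def dense_layer_params(n_in: int, n_out: int) -> int:
    """A fully connected layer has one weight per input-output pair,
    plus one bias per output."""
    return n_in * n_out + n_out

def count_params(layer_sizes: list[int]) -> int:
    """Total parameters of a stack of dense layers,
    given the width of each layer."""
    return sum(dense_layer_params(a, b)
               for a, b in zip(layer_sizes, layer_sizes[1:]))

# Illustrative only: widening every layer 10x multiplies the parameter
# count roughly 100x, which is one reason trillion-parameter models
# follow so quickly once billion-parameter recipes are established.
small = count_params([1024, 1024, 1024])
large = count_params([10240, 10240, 10240])
print(small, large, round(large / small, 1))
```

Because the count grows with the product of adjacent layer widths, headline figures like 175 billion versus 1.75 trillion reflect architectural scale, not necessarily a tenfold difference in capability.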
A researcher on the Wu Dao project said in a recent interview that the group built an even bigger, 100-trillion-parameter model in June, though it has not been trained to “convergence,” the point at which a model stops improving. “We just wanted to prove that we have the ability to do that,” the researcher said.
This isn’t simple one-upmanship. On the one hand, it’s how research progresses. But on the other, it is emblematic of an intensifying competition between the world’s two technology superpowers. Whether the researchers involved like it or not, their governments are eager to adopt each AI advance into their national security infrastructure and military capabilities.
That matters, because dominance in the technology means probable victory in any future war. Even more important, such an advantage likely guarantees the longevity and global influence of the government that wields it. Already, China is exporting its AI-enabled surveillance technology—which can be used to quash dissent—to client states and is espousing an authoritarian model that promises economic prosperity as a counter to democracy, something that the Soviet Union was never able to do.
Ironically, China is a competitor that the United States abetted. It’s well known that the U.S. consumer market fed China’s export engine, itself outfitted with U.S. machines, and led to the fastest-growing economy in the world since the 1980s. What’s less well-known is how a handful of technology companies transferred the know-how and trained the experts now giving the United States a run for its money in AI.
Blame Bill Gates, for one. In 1992, Gates led Microsoft into China’s fledgling software market. Six years later, he established Microsoft Research Asia, the company’s largest basic and applied computer-research institute outside the United States. People from that organization have gone on to found or lead many of China’s top technology institutions.
Ever hear of TikTok? In 2012, Zhang Yiming, a Microsoft Research Asia alum, founded the video-sharing platform’s parent company, ByteDance, which today is one of the world’s most successful AI companies. He hired a former head of Microsoft Research Asia, Zhang Hongjiang, to lead ByteDance’s Technical Strategy Research Center. This Zhang is now head of the Beijing Academy, the organization behind Wu Dao 2.0, currently the largest AI system on the planet. That back-and-forth worries U.S. national-security strategists, who plan for a day when researchers and companies are forced to take sides.
“That's when the Chinese started saying, ‘We're moving beyond attrition warfare’ to what they referred to as systems confrontation, the confrontation between their operational system and the American operational system,” says Robert O. Work, former U.S. Deputy Secretary of Defense and vice chairman of the recently concluded National Security Commission on Artificial Intelligence. “Their theory of victory is what they refer to as system destruction.”
“The Chinese and the Americans see this much the same way,” says Work, calling it a hot competition. “If one can blow apart their adversary’s battle network, the adversary won't be able to operate and won't be able to achieve their objectives.”
System-destruction warfare is part and parcel of what the People’s Liberation Army thinks of as “intelligentized” warfare, in which war is waged not only in the traditional physical domains of land, sea, and air but also in outer space, nonphysical cyberspace, and electromagnetic and even psychological domains—all enabled and coordinated with AI.
Work says the first major U.S. AI effort toward intelligentized warfare was to use computer vision to analyze thousands of hours of full-motion video being downloaded from dozens of drones. Today, that effort, dubbed Project Maven, detects, classifies, and tracks objects within video images, and it has been extended to acoustic data and signals intelligence.
The Chinese have kept pace. According to Georgetown University’s Center for Security and Emerging Technology, China is actively pursuing AI-based target recognition and automatic-weapon-firing research, which could be used in lethal autonomous weapons. Meanwhile, the country may be ahead of the United States in swarm technology, according to Work. Georgetown’s CSET reports that China is developing electromagnetic weapon payloads that can be attached to swarms of small unmanned aerial vehicles and flown into enemy airspace to “disrupt or block the enemy's command and decision-making.”
“I worry about their emphasis on swarms of unmanned systems,” says Work, adding that the Chinese want to train swarms of a hundred vehicles or more, including underwater systems, to coordinate navigation through complex environments. “While we also test swarms, we have yet to demonstrate the ability to employ these types of swarms in a combat scenario.”
[Photo: Chinese firm Baidu’s comparatively modest Sunnyvale, Calif., office, pictured in 2018. Baidu is one of the largest Internet companies in the world. Smith Collection/Gado/Getty Images]
This type of research and testing has prompted calls for preemptive bans on lethal autonomous weapons, but neither country is willing to declare an outright prohibition. Absent such a ban, many people believe that China and the United States, along with other countries, should begin negotiating an arms-control agreement prohibiting the development of systems that could autonomously order a preemptive or retaliatory attack. Such systems might inadvertently trigger “flash wars,” just as AI-driven autonomous trading has triggered flash crashes in the financial markets.
“Neither of us wants to get into a war because an autonomous-control system made a mistake and ordered a preemptive strike,” Work says, referring to the United States and China.
All of this contributes to a dilemma facing the twin realms of AI research and military modernization. The international research community, collaborative and collegial, prefers to look the other way and insist that its work serves only the interests of science. But the governments that fund that research have clear agendas, and military enhancement is undeniably one of them.
Geoffrey Hinton, regarded as one of the godfathers of deep learning, the kind of AI transforming militaries today, left the United States and moved to Canada largely because he didn’t want to depend on funding from the Defense Advanced Research Projects Agency, or DARPA. The agency, the largest funder of AI research in the world, is responsible for the development of emerging technologies for military use.
Hinton instead helped put deep learning on the map in 2012, when his group at the University of Toronto produced a now-famous neural net called AlexNet. But Hinton was also in close contact with the Microsoft Research lab in Redmond, Wash., before and after his group validated AlexNet, according to one of Hinton’s associates there, Li Deng, then a principal researcher and manager and later chief scientist of AI at Microsoft.
In 2009 and 2010, Hinton and Deng worked together at Microsoft on speech recognition. Deng, then editor-in-chief of IEEE Signal Processing Magazine, was invited in 2011 to lecture at several academic organizations in China, where he said he shared the published successes of deep learning in speech processing. Deng said he was also in close contact with former Microsoft colleagues at Baidu, the Chinese search-engine and AI giant, and at iFlyTek, a spin-off of Deng’s undergraduate alma mater.
When Hinton’s group achieved its 2012 breakthrough with deep neural networks trained by backpropagation, he sent an email to Deng in Washington, and Deng said he shared it with Microsoft executives, including Qi Lu, who led the development of the company’s search engine, Bing. Deng said he also sent a note to his friends at iFlyTek, which quickly adopted the approach and became an AI powerhouse, famously demonstrated in 2017 with a convincing video of then-President Donald Trump speaking Chinese.
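Part of why the breakthrough spread so fast is that backpropagation itself is conceptually simple: the chain rule of calculus, applied layer by layer to push error gradients backward through a network. A minimal toy sketch (a tiny two-layer network fit to a fixed dataset; this illustrates only the general technique, not any actual Microsoft, Toronto, or iFlyTek code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn y = x1 XOR x2 with a tiny two-layer network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)          # hidden activations
    p = sigmoid(h @ W2 + b2)          # predictions
    losses.append(float(np.mean((p - y) ** 2)))

    # Backward pass: chain rule, layer by layer
    dp = 2 * (p - y) / len(X)
    dz2 = dp * p * (1 - p)            # through output sigmoid
    dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
    dh = dz2 @ W2.T                   # propagate error to hidden layer
    dz1 = dh * h * (1 - h)            # through hidden sigmoid
    dW1, db1 = X.T @ dz1, dz1.sum(axis=0)

    # Gradient-descent step
    lr = 0.5
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The same loop, scaled up to millions of parameters and run on GPUs, is what made the 2012 results reproducible within days by any lab that read the papers.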
Qi Lu went on to become COO of Baidu, where, Deng said, another Microsoft alum, Kai Yu, who also knew Hinton well, had already seized on Hinton’s breakthrough.
Literally within hours of Hinton’s results, according to Deng, researchers in China were working on repeating his success.
Had they not learned of Hinton’s work through the research grapevine, they still would have read about it in published papers and heard about it through international conferences. Research today has no borders. It is internationally fungible.
But the United States has since tried to limit this cross-pollination, barring Chinese nationals known to have worked for China’s military or intelligence organizations from working with U.S. research institutions. Yet research continues to flow back and forth between the two countries: Microsoft maintains its research lab in Beijing, for example, and the Chinese Internet and AI giant Baidu has a research lab in Silicon Valley.
The Wu Dao project researcher says decoupling the two countries would slow China’s AI research—not because it would stop the flow of ideas, but because it would cut China off from the advanced semiconductors needed to train AI models. He said his group is working on chip designs to speed AI training. China, meanwhile, is working to build extreme ultraviolet lithography machines and upgrade its semiconductor foundries to free itself from Western control.
While the U.S. government must negotiate with private-sector organizations and researchers to enlist them in its military modernization, China’s National Intelligence Law compels its companies and researchers to cooperate when asked.
China began pouring billions of dollars into AI research in 2017, following Google subsidiary DeepMind’s success at defeating the world Go champion with its AI model AlphaGo. Among the organizations set up with that funding was Tsinghua’s Beijing Academy, where the project leader and his team built Wu Dao 2.0.
By most metrics, Wu Dao 2.0 has surpassed OpenAI’s GPT-3. The researcher says it was trained on 4.9 terabytes of clean data, including Chinese-language text, English-language text, and images. OpenAI has said that GPT-3 was trained on just 570 gigabytes of clean, primarily English-language text.
The Wu Dao researcher says his group is now working on video with the goal of generating realistic video from text descriptions. “Hopefully, we can make this model do something beyond the Turing test,” he says, referring to an assessment of whether a computer can generate text indistinguishable from that created by a human. “That's our final goal.”