Civilian AI Is Already Being Misused by the Bad Guys

And the AI community needs to do something about it


This is a guest post. The views expressed here are solely those of the authors and do not represent positions of IEEE Spectrum or the IEEE.

Last March, a group of researchers made headlines by revealing that they had developed an artificial-intelligence (AI) tool that could invent potential new chemical weapons. What’s more, it could do so at an incredible speed: It took only 6 hours for the AI tool to suggest 40,000 of them.

The most worrying part of the story, however, was how easy it was to develop that AI tool. The researchers simply repurposed a machine-learning model normally used to screen new drugs for toxicity. Instead of having the model weed out harmful compounds, they paired a generative model with a toxicity data set and directed it to design new toxic molecules.

The paper was not promoting an illegal use of AI (chemical weapons have been banned since the Chemical Weapons Convention entered into force in 1997). Instead, the authors wanted to show just how easily peaceful applications of AI can be misused by malicious actors—be they rogue states, nonstate armed groups, criminal organizations, or lone wolves. Exploitation of AI by malicious actors presents serious and insufficiently understood risks to international peace and security.


People working in the field of life sciences are already well attuned to the problem of misuse of peaceful research, thanks to decades of engagement between arms-control experts and scientists.

The same cannot be said of the AI community, and it is well past time for it to catch up.

We serve with two organizations that take this cause very seriously, the United Nations Office for Disarmament Affairs and the Stockholm International Peace Research Institute. We’re trying to bring our message to the wider AI community, notably future generations of AI practitioners, through awareness-raising and capacity-building activities.

A blind spot for responsible AI

AI can improve many aspects of society and human life, but like many cutting-edge technologies it can also create real problems, depending on how it is developed and used. These problems include job losses, algorithmic discrimination, and a host of other harms. Over the last decade, the AI community has grown increasingly aware of the need to innovate more responsibly. Today, there is no shortage of “responsible AI” initiatives—more than 150, by some accounts—which aim to provide ethical guidance to AI practitioners and to help them foresee and mitigate the possible negative impacts of their work.

The problem is that the vast majority of these initiatives share the same blind spot. They address how AI could affect areas such as health care, education, mobility, employment, and criminal justice, but they ignore international peace and security. The risk that peaceful applications of AI could be misused for political disinformation, cyberattacks, terrorism, or military operations is rarely considered, unless very superficially.

This is a major gap in the conversation on responsible AI that must be filled.

Most of the actors engaged in the responsible AI conversation work on AI for purely civilian end uses, so it is perhaps not surprising that they overlook peace and security. There’s already a lot to worry about in the civilian space, from potential infringements of human rights to AI’s growing carbon footprint.

AI practitioners may believe that peace and security risks are not their problem, but rather the concern of states. They might also be reluctant to discuss such risks in relation to their work or products due to reputational concerns, or for fear of inadvertently promoting the potential for misuse.

The misuse of civilian AI is already happening

The diversion and misuse of civilian AI technology are, however, not problems that the AI community can or should shy away from. There are very tangible and immediate risks.

Civilian technologies have long been a go-to for malicious actors, because misusing such technology is generally much cheaper and easier than designing or accessing military-grade technologies. There is no shortage of real-life examples, a famous one being the Islamic State’s use of hobby drones as both explosive devices and tools to shoot footage for propaganda films.


The fact that AI is an intangible and widely available technology with great general-use potential makes the risk of misuse particularly acute. In the cases of nuclear power technology or the life sciences, the human expertise and material resources needed to develop and weaponize the technology are generally hard to access. In the AI domain there are no such obstacles. All you need may be just a few clicks away.

As one of the researchers behind the chemical weapon paper explained in an interview: “You can go and download a toxicity data set from anywhere. If you have somebody who knows how to code in Python and has some machine-learning capabilities, then in probably a good weekend of work, they could build something like this generative model driven by toxic data sets.”

We’re already seeing examples of the weaponization of peaceful AI. The use of deepfakes, for example, demonstrates that the risk is real and the consequences potentially far-ranging. Less than 10 years after Ian Goodfellow and his colleagues designed the first generative adversarial network, GANs have become tools of choice for cyberattacks and disinformation—and now, for the first time, in warfare. During the current war in Ukraine, a deepfake video circulated on social media that appeared to show Ukrainian president Volodymyr Zelenskyy telling his troops to surrender.

The weaponization of civilian AI innovations is also one of the most likely ways that autonomous weapons systems (AWS) could materialize. Nonstate actors could exploit advances in computer vision and autonomous navigation to turn hobby drones into homemade AWS. Such weapons could be highly lethal and disruptive (as depicted in the Future of Life Institute’s advocacy video Slaughterbots), and their use would very likely violate international law, ethical principles, and agreed standards of safety and security.

Nation states can’t address AI risks alone

Another reason the AI community should get engaged is that the misuse of civilian products is not a problem that states can easily address on their own, or purely through intergovernmental processes. This is not least because governmental officials might lack the expertise to detect and monitor technological developments of concern. What’s more, the processes through which states introduce regulatory measures are typically highly politicized and may struggle to keep up with the speed at which AI tech is advancing.

Moreover, the tools that states and intergovernmental processes have at their disposal to tackle the misuse of civilian technologies, such as stringent export controls and safety and security certification standards, may also jeopardize the openness of the current AI innovation ecosystem. From that standpoint, not only do AI practitioners have a key role to play, but it is strongly in their interest to play it.

AI researchers can be a first line of defense, as they are among the best placed to evaluate how their work may be misused. They can identify and try to mitigate problems before they occur—not only through design choices but also through self-restraint in the diffusion and trade of the products of research and innovation.

AI researchers may, for instance, decide not to share specific details about their research (the researchers who repurposed the drug-testing AI did not disclose the specifics of their experiment), while companies that develop AI products may decide not to develop certain features, restrict access to code that might be used maliciously, or add by-design security measures such as antitamper software, geofencing, and remote switches. They may also apply the know-your-customer principle, for example through token-based authentication, as sketched below.
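To make that last idea concrete, here is a minimal sketch, in Python, of what a know-your-customer-style gate in front of a hosted model could look like: access tokens are issued only to vetted users, every request is checked against them, and calls are logged for later review. All of the names here (ISSUED_TOKENS, issue_token, handle_request, run_model) are hypothetical illustrations rather than any real product’s API, and a production system would add proper identity vetting, secure credential storage, and rate limiting.

```python
# Minimal sketch of a know-your-customer gate around a model endpoint.
# Hypothetical names throughout; not a production design.
import hashlib
import hmac
import secrets

# In practice this registry would live in a secure store. Tokens are kept
# hashed so a leaked registry does not expose usable credentials.
ISSUED_TOKENS = {
    # user id -> SHA-256 hash of a token issued after identity/use-case vetting
    "vetted-lab-001": hashlib.sha256(b"example-token-abc123").hexdigest(),
}

def issue_token(user_id: str) -> str:
    """Issue a fresh token to a user who has passed vetting checks."""
    token = secrets.token_urlsafe(32)
    ISSUED_TOKENS[user_id] = hashlib.sha256(token.encode()).hexdigest()
    return token  # shown to the user once, never stored in plain text

def is_authorized(user_id: str, token: str) -> bool:
    """Constant-time check that the presented token matches the issued one."""
    expected = ISSUED_TOKENS.get(user_id)
    if expected is None:
        return False
    presented = hashlib.sha256(token.encode()).hexdigest()
    return hmac.compare_digest(expected, presented)

def run_model(prompt: str) -> str:
    """Stand-in for the actual model call being protected."""
    return f"model output for: {prompt}"

def handle_request(user_id: str, token: str, prompt: str) -> str:
    """Gate every inference request behind the token check and log the caller."""
    if not is_authorized(user_id, token):
        raise PermissionError("unknown or invalid credentials")
    # Logging requests per vetted user supports later misuse investigations.
    print(f"request from {user_id}: {prompt[:60]!r}")
    return run_model(prompt)

if __name__ == "__main__":
    new_token = issue_token("vetted-lab-002")
    print(handle_request("vetted-lab-002", new_token, "benign query"))
```

The design choice being illustrated is simply that access to a capable model becomes something a provider grants deliberately, to identified parties, and can revoke; none of this prevents misuse outright, but it raises the cost of anonymous abuse and leaves an audit trail.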

Such measures will certainly not eliminate the risks of misuse entirely—and they may also have drawbacks—but they can at least help to reduce them. These measures can also help keep at bay potential governmental restrictions, for example on data sharing, which could undermine the openness of the field and hold back technological progress.

The responsible AI movement has tools that can help

To engage with the risks that the misuse of AI poses to peace and security, AI practitioners do not have to look further than existing recommended practices and tools for responsible innovation. There is no need to develop an entirely new tool kit or set of principles. What matters is that peace and security risks are regularly considered, particularly in technology-impact assessments. The appropriate risk-mitigation measures will flow from there.

Responsible AI innovation is not a silver bullet for all the societal challenges brought by advances in AI. However, it is a useful and much-needed approach, especially when it comes to peace and security risks. It offers a bottom-up approach to risk identification, in a context where the multipurpose nature of AI makes top-down governance approaches difficult to develop and implement, and possibly detrimental to progress in the field.

Certainly, it would be unfair to expect AI practitioners alone to foresee and address the full spectrum of ways in which their work could be harmful. Governmental and intergovernmental processes are absolutely necessary, but international peace and security, and thus everyone’s safety, are best served by the AI community getting on board. The steps AI practitioners can take do not need to be very big, but they could make all the difference.

Authors’ note: This post was drafted as part of a joint SIPRI-UNODA initiative on Responsible Innovation in AI, which is supported by the Republic of Korea. All content is the responsibility of the authors.
