How Can We Make Sure Autonomous Weapons Are Used Responsibly?

Technical challenges of AI are exacerbated in autonomous weapons systems


This article is part of our Autonomous Weapons Challenges series. The IEEE Standards Association is looking for your feedback on this topic, and invites you to answer these questions.

International discussions about autonomous weapons systems (AWS) often focus on a central question: Is it legal for a machine to make the decision to take a human life? But woven into this question is another fundamental issue: Can an autonomous weapons system be trusted to do what it’s expected to do?

If the technical challenges of developing and using AWS can’t be addressed, then the answer to both questions is likely “no.”


AI Challenges Are Magnified When Applied to Weapons

Many of the known issues with AI and machine learning become even more problematic when applied to weapons. For example, AI systems could process image data far faster than human analysts can, and the majority of the results would be accurate. But the algorithms behind this capability are known to introduce or exacerbate bias and discrimination, targeting certain demographics more than others. Given that, is it reasonable to use image-recognition software to help humans identify potential targets?
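
To make that concern concrete, here is a minimal sketch of one way developers audit for this kind of disparity: comparing a classifier's false-positive rate across demographic groups. The data, group labels, and code are all hypothetical, and a real audit would be far more involved.

```python
# Minimal sketch: measuring false-positive-rate disparity across groups.
# All data here is hypothetical; a real audit would use representative,
# independently labeled evaluation sets.

from collections import defaultdict

def false_positive_rate_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples,
    where 1 means 'flagged as a potential target' and 0 means not."""
    false_pos = defaultdict(int)  # false positives per group
    negatives = defaultdict(int)  # actual negatives per group
    for group, truth, predicted in records:
        if truth == 0:
            negatives[group] += 1
            if predicted == 1:
                false_pos[group] += 1
    return {g: false_pos[g] / negatives[g] for g in negatives}

# Hypothetical evaluation results from an image-recognition model.
records = [
    ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

print(false_positive_rate_by_group(records))
# {'group_a': 0.33..., 'group_b': 0.66...}: a disparity that would need investigation
```

Even a simple check like this raises the hard question the article poses: what disparity, if any, would be acceptable when the output feeds a targeting decision?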

But concerns about the technical abilities of AWS extend beyond object recognition and algorithmic bias. Autonomy in weapons systems requires a slew of technologies, including sensors, communications, and onboard computing power, each of which poses its own challenges for developers. These components are often designed and programmed by different organizations, and it can be hard to predict how the components will function together within the system, as well as how they’ll react to a variety of real-world situations and adversaries.

Testing for Assurance and Risk

It’s also not at all clear how militaries can test these systems to ensure the AWS will do what’s expected and comply with International Humanitarian Law. And yet militaries typically want weapons to be tested and proven to act consistently, legally, and without harming their own soldiers before the systems are deployed. If commanders don’t trust a weapons system, they likely won’t use it. But standardized testing is especially complicated for an AI program that can learn from its interactions in the field—in fact, such standardized testing for AWS simply doesn’t exist.
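
As one illustration of what assurance testing might look like in principle, the sketch below replays a fixed library of scenarios and fails if anything labeled as protected is ever flagged for engagement. Everything in it is hypothetical, and the limitation is the article's point: a fixed scenario library cannot certify behavior in the open-ended conditions a fielded system will face.

```python
# Hypothetical acceptance-test sketch: replay a fixed scenario library and
# require that nothing marked as protected is ever flagged for engagement.
# 'classify' is a stand-in for whatever decision function an AWS would use.

SCENARIOS = [
    {"object": "ambulance",       "protected": True},
    {"object": "armored_vehicle", "protected": False},
    {"object": "school_bus",      "protected": True},
]

def classify(scenario):
    """Placeholder decision function; True means 'flag for engagement'."""
    return scenario["object"] == "armored_vehicle"

def run_assurance_suite(scenarios, decide):
    failures = [s["object"] for s in scenarios if s["protected"] and decide(s)]
    if failures:
        raise AssertionError(f"Protected objects flagged for engagement: {failures}")
    return f"{len(scenarios)} scenarios passed"

print(run_assurance_suite(SCENARIOS, classify))  # "3 scenarios passed"
```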

We know that software updates can alter a system’s behavior and may introduce bugs that cause it to behave erratically. But an autonomous weapons system powered by AI may also update its behavior based on real-world experience, and those changes could be much harder for users to track. New information that the system accesses in the field could even cause it to begin shifting away from its original goals.
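
One general technique for catching this kind of drift, sketched below with made-up numbers, is to keep a frozen reference set of inputs and compare the system's current outputs on that set against a baseline recorded when the system was accepted. The scores and threshold here are hypothetical.

```python
# Sketch of behavioral-drift monitoring: score a frozen reference set with the
# fielded model and compare against the baseline recorded at acceptance time.
# Scores and threshold are hypothetical.

def mean_absolute_shift(baseline_scores, current_scores):
    """Average absolute change in model output over a fixed reference set."""
    pairs = list(zip(baseline_scores, current_scores))
    return sum(abs(b - c) for b, c in pairs) / len(pairs)

baseline = [0.10, 0.85, 0.40, 0.05, 0.92]  # outputs recorded at acceptance
current  = [0.12, 0.60, 0.55, 0.30, 0.95]  # outputs after learning in the field

DRIFT_THRESHOLD = 0.10  # hypothetical; would be set from validation data

shift = mean_absolute_shift(baseline, current)
if shift > DRIFT_THRESHOLD:
    print(f"Behavioral drift detected (mean shift {shift:.2f}); re-run assurance tests.")
else:
    print(f"Within tolerance (mean shift {shift:.2f}).")
```

A check like this can flag that behavior has changed, but it says nothing about whether the new behavior is still lawful, which is exactly the gap described above.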

Similarly, cyberattacks and adversarial attacks pose a known threat, which developers try to guard against. But if an attack is successful, what would testing look like to identify that the system has been hacked, and how would a user know to implement such tests?
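
A partial answer, sketched below under deliberately simplified assumptions, is to fingerprint both the stored model and its behavior on a small, separately held challenge set before each use; a mismatch means the system should be pulled for inspection rather than trusted. The file paths, challenge set, and model named here are hypothetical.

```python
# Sketch of a pre-mission tamper check: hash the model file and verify the
# model's answers on a fixed challenge set whose expected fingerprint is
# stored separately. Everything named here is hypothetical.

import hashlib

def file_fingerprint(path):
    """SHA-256 of the model artifact as stored on the system."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def behavior_fingerprint(model, challenges):
    """Hash of the model's answers to a fixed challenge set."""
    answers = ",".join(str(model(x)) for x in challenges)
    return hashlib.sha256(answers.encode()).hexdigest()

def verify(model_path, model, challenges, expected_file_hash, expected_behavior_hash):
    """Return True only if both the stored artifact and its behavior match."""
    ok_file = file_fingerprint(model_path) == expected_file_hash
    ok_behavior = behavior_fingerprint(model, challenges) == expected_behavior_hash
    return ok_file and ok_behavior
```

Checks like these only catch tampering with the stored model; they say nothing about an adversary who manipulates the system's sensor inputs at run time, which is part of why the testing question remains open.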

Physical Challenges of Autonomous Weapons

Though recent advances in artificial intelligence have heightened concern about the use of AWS, the technical challenges of autonomy in weapons systems extend beyond AI. Physical challenges already exist for conventional weapons and for nonweaponized autonomous systems, but these same problems are further exacerbated and complicated in AWS.

For example, many autonomous systems are getting smaller even as their computational needs grow to cover navigation, data acquisition and analysis, and decision making, potentially all while out of communication with commanders. Can the autonomous weapons system maintain the necessary and legal functionality throughout the mission, even if communication is lost? How is data protected if the system falls into enemy hands?

Issues similar to these may also arise with other autonomous systems, but the consequences of failure are magnified with AWS, and extra features will likely be necessary to ensure that, for example, a weaponized autonomous vehicle on the battlefield doesn’t violate International Humanitarian Law or mistake a friendly vehicle for an enemy target. Because these problems are so new, weapons developers and lawmakers will need to work with and learn from experts in the robotics space to solve the technical challenges and create useful policy.

Many technical advances will contribute to various types of weapons systems. Some will prove far more difficult to develop than expected, while others will likely be developed faster. That means AWS development won’t be a single leap from conventional weapons systems to full autonomy, but will instead proceed in incremental steps as new autonomous capabilities emerge. This could create a slippery slope on which it’s unclear whether a line has been crossed from acceptable to unacceptable use of the technology. Perhaps the solution is to look at specific robotic and autonomous technologies as they’re developed and ask whether society would want a weapons system with that capability, or whether action should be taken to prevent it.

What Do You Think?

We want your feedback! To help bring clarity to these AWS discussions, the IEEE Standards Association convened an expert group in 2020 to consider the ethical and technical challenges of translating AWS principles into practice and what that might mean for future development and governance. Last year, the expert group published its findings in a report titled “Ethical and Technical Challenges in the Development, Use, and Governance of Autonomous Weapons Systems.” Many of the AWS challenges are similar to those arising in other fields that are developing autonomous systems. We expect and hope that IEEE members and readers of IEEE Spectrum will have insights from their own fields that can inform the discussion around AWS technologies.

We’ve put together a series of questions in the Challenges document that we hope you’ll answer, to help us better understand how people in other fields are addressing these issues. Autonomous capabilities will increasingly be applied to weapons systems, much as they are being applied in other technical realms, and we hope that by looking at the challenges in more detail, we can help establish effective technical solutions, while contributing to discussions about what can and should be legally acceptable. Your feedback will help us move toward this ultimate goal. Public comments will be open through 7 December 2022.
The Conversation

John Mcbain, 22 Nov 2022

As a product safety engineer, I'm more focused on how equipment breaks - and how to mitigate the consequences - than on how it works. Too many R&D engineers believe that how it is supposed to work is identical to how it does work. Must robots behave perfectly ethically when humans do not, and make no mistakes while humans do? Not going to happen. "War crimes" have been defined, agreed to be avoided, and consistently committed by humans, so robots will follow that path. Inevitable? Perhaps not, but neither is global climate change. The Doomsday Clock should consider AWS now.

Fotios Sotiropoulos, 22 Nov 2022

Since AI, if used for military applications, will be a very advanced weapon, I will respond to this topic with the following question: How can we make sure weapons are used responsibly by a group of humans, and how does our society define the bad guys? I believe that instead of trying to find the bad guys on Earth, it is proper to redefine our role in the universe, confront the real challenges our planet faces, and be unified as humankind before we destroy ourselves!

William Croft, 22 Nov 2022

Suggest you watch the videos produced by the Future of Life Institute, which consists of scientists and engineers who are opposed to autonomous weapons. Their video "Slaughterbots" is both chilling and a preview of where we are headed if this direction is not curtailed. Any engineer developing such robots has lost connection to their heart and soul.