
Industry Urges United Nations to Ban Lethal Autonomous Weapons in New Open Letter

Representatives from 116 companies around the world, including Elon Musk, renew a call for the UN to ban lethal autonomous weapon systems


Today (or, yesterday, but today Australia time, where it's probably already tomorrow), 116 founders of robotics and artificial intelligence companies from 26 countries released an open letter urging the United Nations to ban lethal autonomous weapon systems (LAWS). This is a follow-up to the 2015 anti-"killer robots" UN letter that we covered extensively when it was released, but this time with a focus on industry, in an attempt to help convince the UN to get something done.

Here's the letter in full:

As companies building the technologies in Artificial Intelligence and Robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm.

We warmly welcome the decision of the UN’s Conference of the Convention on Certain Conventional Weapons (CCW) to establish a Group of Governmental Experts (GGE) on Lethal Autonomous Weapon Systems. Many of our researchers and engineers are eager to offer technical advice to your deliberations.

We commend the appointment of Ambassador Amandeep Singh Gill of India as chair of the GGE. We entreat the High Contracting Parties participating in the GGE to work hard at finding means to prevent an arms race in these weapons, to protect civilians from their misuse, and to avoid the destabilizing effects of these technologies.

We regret that the GGE’s first meeting, which was due to start today, has been cancelled due to a small number of states failing to pay their financial contributions to the UN. We urge the High Contracting Parties therefore to double their efforts at the first meeting of the GGE now planned for November.

Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.

We therefore implore the High Contracting Parties to find a way to protect us all from these dangers.

The press release accompanying the letter mentions that it was signed by Elon Musk, Mustafa Suleyman (co-founder and Head of Applied AI at Google’s DeepMind), Esben Østergaard (founder and CTO of Universal Robots), and a bunch of other people who you may or may not have heard of. You can read the entire thing here, including all 116 signatories.

For some context on this, we spoke with Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales in Sydney and one of the organizers of the letter.

Why was it important to release this second open letter? What has happened in the two years since the first letter was released?

There are two reasons it is important to put out this second open letter. First, we wanted to demonstrate that the industry putting AI and Robotics into our lives supports the concerns of the research community who signed the first letter. Second, we wanted to add more impetus to the talks at the UN. It is very unfortunate that, despite all sides agreeing on the need to meet quickly, the first talks have been postponed. We also felt the public needed to know that this issue was stalled for want of a few hundred thousand dollars. We should be angry that the UN is hampered from finding a solution to this issue due to the lack of a pathetically small amount of money.

What is your concern about lethal autonomous weapons? What kind of future are you worried about?

In the short term, I worry more about stupid AI than smart AI. We'll give the responsibility to make life-and-death decisions to machines that cannot comply with international humanitarian law. In the longer term, I am worried we will industrialize war, introducing machines that we cannot defend ourselves against, resulting in an arms race that will further destabilise an already delicate world. It sickens me to think that the AI technologies we work on might be used to cause such harm. I would be much happier if the focus was on all the ways AI could improve our lives: improving health care, education, and road safety, and removing the mundane and repetitive from our jobs and many other aspects of our lives.

The letter ends with a request to “find a way to protect us all from these dangers.” How, specifically, do you hope that can be accomplished?

I believe an international ban, similar to those we have for chemical and biological weapons, and for other weapon types like blinding lasers and anti-personnel mines, is likely the best way to limit the role of these technologies on the battlefield.

While this most recent letter renews the call for a United Nations ban on lethal autonomous weapon systems and makes the perspective of a subset of robotics companies more explicit than before, we haven't been able to identify much other tangible progress toward an actual ban over the past two years. This may be the normal pace of operations for the UN, but the questions and concerns that we (and others) raised about the last “killer robots” letter remain largely unresolved. Here's a big pile of links to our past coverage:

One of the primary critiques of a ban on lethal autonomous weapon systems is that it would be practically impossible to implement: autonomous systems are useful in all kinds of other applications, there is minimal separation between commercial and military technology, and there can be very little difference between an autonomous system and a weaponized autonomous system, or between a weaponized system with a human in the loop and one without.

Meanwhile, we asked Clearpath Robotics CTO Ryan Gariepy if, as someone who knows probably way too much about robots and was the first person to sign this letter, he had any ideas about where to start when it comes to crafting a lethal autonomous weapons ban that might actually work. (We should note that Ryan is not speaking as a representative of the folks behind this letter; these are his personal opinions.)

Do you think that there is a realistic way to implement a purely technological ban on lethal autonomous weapons?

At present, I haven't identified (which I don't think would be surprising) any particular aspect that both marks a system's transition from a semi-autonomous to a fully autonomous lethal weapon system and is auditable in a straightforward manner by a third party.

What practical steps do you think could be used to help ensure the safety of autonomous weapons systems?

Proper, auditable fail-safes. Not to prevent a system from using weapons on its own, but more as an accountability measure against the person who did use these weapons, who chose to authorize that system to take lethal action. There's a lot of technical development that can be done along those lines.

Are you then talking about accountability for a human who authorizes a system to take lethal action autonomously, or verifying that there's a human in the loop making all the decisions about whether or not a system can take a lethal action?

It's more about the human in the loop. There are open questions about when you authorize an [autonomous] system, what are you authorizing? The release of a single weapon? Prosecuting a target for a defined amount of time? But I think this approach is not only beneficial in cases of autonomous weapons, it would also be immediately applicable to semi-autonomous weapons. We'd like there to be traceability of the person who looked at a particular situation and took action; that accountability gap is a major concern.

Fundamentally, Gariepy told us, one of the most important things that could come out of the UN discussions is an understanding that the use of lethal autonomous weapon systems is simply not the way that warfare should happen. That could help put pressure on governments not to use them, even if a specific ban does not exist.

A basic question that needs to be addressed in all this is what autonomy means, and what having a human in the loop means, since (as Gariepy alludes to) there are lots of loops, and those loops can get very big and complicated and messy. While I may not agree that a complete ban on autonomous weapons is the right thing to do, I certainly agree that verifiable accountability is vital, and not just when it comes to autonomous systems. If this is the approach that the UN decides to take, as opposed to an outright ban with dubious technical enforceability, I'm all for that.

I've already ranted on this topic six ways to Sunday, so I'll end this article with reactions to the letter from three Australian robotics and AI experts. Please feel free to share what you think in the comments.

James Harland, Associate Professor in Computational Logic in the School of Computer Science and IT at RMIT University in Melbourne, Australia:

In the past, technology has often advanced much faster than legal and cultural frameworks, leading to technology-driven situations such as mutually assured destruction during the Cold War, and the proliferation of landmines. I think we have a chance here to establish this kind of legal framework in advance of the technology for a change, and thus allow society to control technology rather than the other way around.

Michael Harre, Lecturer in the Complex Systems Group and PM Program in the Faculty of Engineering & Information Technologies at the University of Sydney:

It is an excellent idea to consider the positives and the negatives of autonomous systems research and to ban research that is unethical. An equally important question is the potential for non-military autonomous systems to be dangerous, such as trading bots in financial markets that put billions of dollars at risk.

Soon we will also have autonomous AIs that have a basic psychology, an awareness of the world similar to that of animals. These AIs may not be physically dangerous, but they may learn to be dangerous in other ways, just as Tay, Microsoft's chatbot, learned to be anti-social on Twitter.

So what are our ethical responsibilities as researchers in these cases? These issues deserve a closer examination of what constitutes ‘ethical’ research.

Mary-Anne Williams, Director of Disruptive Innovation at the Office of the Provost at the University of Technology Sydney (UTS), Founder and Director of the Innovation and Enterprise Research Lab (The Magic Lab), and Fellow at Stanford University:

From its earliest beginnings, human history is a tale of an arms race littered with conflicts aimed at achieving more power and control over resources. In the near future, weaponized robots could be like the velociraptors in Jurassic Park, with agile mobility and lightning-fast reactions, able to hunt humans with high-precision sensors augmented with information from computer networks. Imagine a robot rigged as a suicide bomber, able to detect body heat or a heartbeat, that might be remotely controlled or able to make its own decisions about who and what to seek and destroy.

I signed the killer robot ban in 2015 because state-sponsored killer robots are a terrifying prospect. However, enforcing such a ban is highly problematic, and it might create other problems, such as stopping countries like Australia from developing defensive killer robots, leaving them vulnerable to other countries and groups that ignore the ban. Furthermore, today the potential loss of human life is a deterrent to conflict initiation and escalation, but when the main casualties are robots, the disincentives change dramatically and the likelihood of conflict increases.

So a ban on killer robots cannot be the only strategy. The nature of destructive weapons is changing; they are increasingly DIY. One can 3D print a gun, launch a bomb from an off-the-shelf drone, and turn ordinary cars into weapons.

[ Press Release ]
