Debating Slaughterbots and the Future of Autonomous Weapons

People can look at the same technology and disagree about how it will shape the future, explains Paul Scharre as he shares a final perspective on the Slaughterbots debate

In "Slaughterbots," a video produced by the Future of Life Institute, AI-powered micro-drones are built en masse and used to kill thousands of people around the world.
Image: Slaughterbots/YouTube

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

Stuart Russell, Anthony Aguirre, Ariel Conn, and Max Tegmark recently wrote a response to my critique of their "Slaughterbots" video on autonomous weapons. I am grateful for their thoughtful article, and I welcome the exchange; this kind of dialogue helps illuminate exactly where we disagree. Because the issue touches so many disciplines, it is particularly important that the dialogue on autonomous weapons be cross-disciplinary, including roboticists, AI scientists, engineers, ethicists, lawyers, human rights advocates, military professionals, political scientists, and others.

I appreciate their thorough, point-by-point reply. My intent in this response is not to argue with them, but rather to illuminate for readers the points of disagreement. It is important and meaningful that people who look at the same technology, and who agree on what is technologically feasible, can still disagree sharply about how that technology is likely to play out. These disagreements have as much to do with sociology, politics, and how human institutions react to technology as they do with science and engineering.

I see the central point of disagreement as an issue of scale. There is no question that autonomy allows an increase in the scale of attacks. In just the past few weeks, we have seen multiple non-state actors launch saturation attacks with drones, including 13 homemade aerial drones launched against a Russian air base in Syria and three remote-controlled boats used to attack a Saudi-flagged oil tanker in the Red Sea. I expect more attacks of this kind over time, at larger scales, with greater autonomy for the drones, and eventually with cooperative autonomy ("swarming"). I do not think it likely, however, that non-state actors will gain access to sufficient scale and capability to launch attacks that would make it reasonable to consider these drones "weapons of mass destruction."


With regard to the likelihood that nations would build and deploy millions of lethal micro-drones for anti-personnel attacks against civilian populations, I see no evidence of that today. It is certainly possible. Countries deliberately targeted civilians in terror bombing attacks during World War II. But the current trajectory of development in autonomy in weapons appears to be aimed primarily at building increasingly autonomous weapons to fight other military hardware. There are some examples of simple robotic anti-personnel weapons: the South Korean SGR-A1 sentry gun, Israeli Guardium unmanned ground vehicle, U.S. Switchblade drone, and a number of Russian ground robotic systems. The main impetus for greater autonomy, however, is gaining an advantage over other nations' military forces: tanks, radars, ships, aircraft, etc.

Even if nations did build lethal micro-drones for use as weapons of mass destruction, a host of countermeasures could be deployed against them: missiles, guns, electronic jammers, cyber weapons, high-powered microwaves, and even passive defenses such as nets. Militaries are already working on countermeasures against small-drone attacks today. As with virtually all military hardware, the efficacy of these countermeasures depends on the specific situation and how they are deployed. Drone attacks have been used to harass U.S. and partner forces in Iraq and Syria; their effectiveness today is roughly comparable to that of small flying improvised explosive devices. This is a serious threat, and the United States should be (and is) taking measures to build more effective countermeasures.

In other cases, existing countermeasures have worked. Russia took down all 13 of the drones used to attack its air base with a combination of surface-to-air missiles and electronic warfare measures, and the remote-controlled boat attack was similarly thwarted. Going forward, some of these attacks will succeed and some will fail: terrorists will find new ways of attacking, and nations will develop new countermeasures. It is certainly not the case, however, that militaries today are defenseless against micro-drones. Micro-drones are small and fragile and are susceptible to a variety of hard-kill (kinetic) and soft-kill (non-kinetic) means of disruption. (I have a friend who shot one down with an M4 rifle.) The most significant problem militaries face today is finding cost-effective ways of countering drones at scale, but they are working on it, and the challenges do not seem insurmountable.

The lethal micro-drones featured in the "Slaughterbots" video carry miniature explosives and use AI to autonomously target specific individuals.
Image: Slaughterbots/YouTube

In a world where nations actually built lethal micro-drones by the millions for use as weapons of mass destruction, these countermeasures would take on a whole new urgency. If weaponized micro-drones were to shift from a tool of harassment to a weapon of mass destruction, then finding ways to defeat them would become a national priority. Relatively onerous defensive measures, such as deploying large amounts of netting or fencing, would become entirely reasonable, much as concrete barricades to protect against car bombs have become common around secure buildings, a precaution that was rare two decades ago. Even if countries were to build micro-drones en masse as weapons of mass destruction, there are good reasons to think they would not be effective against another modern country with sophisticated defenses.

With regard to proliferation, Russell and his colleagues point to the global proliferation of AK-47 assault rifles as evidence of the likelihood of lethal micro-drone proliferation. I don't think AK-47s are a useful comparison when thinking about efforts to control sensitive military technology. There is virtually no effort to control the spread of AK-47 rifles around the world or limit their proliferation. Semi-automatic AK-47s are legal for sale in the United States. Certainly, a world in which lethal autonomous micro-drones were available for purchase at Walmart would be horrifying. A more relevant comparison is nations' ability to limit the proliferation of sensitive military items that are withheld from the general public, such as rocket launchers or anti-aircraft missiles. While weapons of this type frequently appear in war zones such as Syria, they are not readily available in developed nations. Nor is it trivial to smuggle them into developed nations for attacks, which is why terrorists resort to other more accessible (sometimes makeshift) weapons such as airplanes, cars, guns, or homemade explosives.

For all of these reasons, I think the fear of lethal micro-drones being used as weapons of mass destruction in the hands of terrorists is not realistic. Smaller-scale attacks are certainly possible and, in fact, are already occurring today. These are serious threats, and nations should respond accordingly, but I do not see the scenario depicted in the "Slaughterbots" video as plausible.

An aircraft deploys a swarm of lethal micro-drones in a scene from "Slaughterbots."
Image: Slaughterbots/YouTube

On a broader note, setting aside predictions about how the technology will unfold, I fundamentally disagree with the authors about the best methods of engaging the general public on important policy matters. They explain that they made the video because, in their experience, "serious discourse and academic argument are not enough to get the message through." I disagree completely. Both experts and the general public are more than capable of listening to and engaging in serious discussions on technology policy, including autonomous weapons.

The authors note their perception that some senior defense officials "fail to understand the core issues" surrounding autonomous weapons. If national security leaders seem unconcerned about the risks the authors have highlighted, I do not think it is because government officials have failed to listen or to take the issue seriously. More likely, they remain unconvinced by the authors' arguments, perhaps because of the issues I raise above. In fact, I see a vibrant debate within the U.S. defense community on the future role of autonomy and human control in weapons. The Vice Chairman of the Joint Chiefs of Staff, a former Deputy Secretary of Defense, a U.S. Senator on the Armed Services Committee, and a former four-star general have all commented publicly on this issue, and their statements suggest a thoughtful attempt to grapple with a thorny problem.

I think it is also critical to engage the broader public on these issues, but the most constructive way to do so is through reasoned arguments of the kind the authors present in their response, for which I am very grateful.
