Autonomous Robots in the Fog of War
Robots in Combat
Books have been written about the feasibility and ethics of weaponizing robots, and it's not my intent to explore that topic in any great detail here. The fact is, weaponized robots—missile-launching unmanned combat air vehicles, rifle-toting unmanned combat ground vehicles, and mine-deploying unmanned combat underwater vehicles—are already a reality.
At present, the decision of whether these robots attack is still left to humans. But as robots gain more autonomy, will we or won't we allow them to decide to fire weapons on their own? The U.S. Defense Department continues to mull the issue. In 2007, for instance, it released a report called Unmanned Systems Safety Guide for DOD Acquisition [PDF], which includes a section on designing weaponized unmanned systems. It lays out a number of ethical, legal, and technical areas of concern that any designer of armed autonomous robots should be prepared to address. These include the inadvertent firing of weapons, erroneous target discrimination, and the possibility of the enemy taking control of the unmanned system.
John Canning of the Naval Surface Warfare Center Dahlgren Division, in Virginia, has pointed out that deploying weaponized robots while maintaining a human operator to do the actual firing is costly. He's put forth several concepts of operation that might allow autonomous armed robots to coexist on the battlefield with other manned and unmanned systems. One of Canning's key concepts is to "let machines target other machines." That is, design armed unmanned systems so that they can automatically identify, target, and neutralize or destroy the weapons used by adversaries, but not the people using the weapons.
In those instances when it becomes necessary to target humans, Canning proposes that an armed unmanned system not be allowed to act autonomously but rather be remotely controlled by humans. The machine, he suggests, should be designed with "dial-a-level" autonomy so that it can switch among operational modes according to its environment and other circumstances. It would also be equipped with both nonlethal and lethal weapons, the former for convincing the enemy to abandon its arms and the latter for actually destroying those weapons.
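Canning's two ideas compose naturally: a mode dial plus a rule that autonomous engagement is permitted only against machines, never against people. The sketch below is purely illustrative, not drawn from any actual Navy design; the names `Mode`, `TargetKind`, and `may_engage_autonomously` are invented for this example.

```python
from enum import Enum, auto

class Mode(Enum):
    """Canning's 'dial-a-level' autonomy, sketched as three settings."""
    REMOTE_CONTROL = auto()   # a human operator makes every firing decision
    SEMI_AUTONOMOUS = auto()  # the system proposes, a human approves
    AUTONOMOUS = auto()       # the system may act on its own (machines only)

class TargetKind(Enum):
    MATERIEL = auto()  # weapons, vehicles, other unmanned systems
    HUMAN = auto()

def may_engage_autonomously(mode: Mode, target: TargetKind) -> bool:
    """Encodes the 'let machines target other machines' rule: autonomous
    engagement is allowed only in full-autonomy mode and only against
    materiel; any human target requires a human in the loop."""
    return mode is Mode.AUTONOMOUS and target is TargetKind.MATERIEL
```

Under this rule, dialing the mode down (or encountering a human target) always routes the decision back to an operator, which is the fail-safe behavior the concept of operations calls for.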
Ronald C. Arkin, director of the Mobile Robot Laboratory at Georgia Tech, has been looking at ways to imbue robots with a sense of "ethics" [PDF] and even an artificial "conscience" so that they adhere to international rules of warfare. That should make it possible, he believes, for autonomous robots to conduct themselves on the battlefield at least as well as humans—and probably better.
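One way to picture an architecture in this spirit is a rule-checking layer that vetoes any proposed action unless every constraint derived from the rules of warfare passes. The following is a minimal sketch of that veto pattern only, not Arkin's actual system; the function names and the dictionary-based engagement record are assumptions made for illustration.

```python
from typing import Callable, Dict, List

# A constraint inspects a proposed engagement and returns True if permitted.
Constraint = Callable[[Dict], bool]

def governor_permits(engagement: Dict, constraints: List[Constraint]) -> bool:
    """Permit the engagement only if no constraint is violated; a single
    failed check vetoes the action (a conservative, deny-by-default layer)."""
    return all(check(engagement) for check in constraints)

# Two example constraints echoing concerns named in the DOD safety guide:
def target_is_discriminated(e: Dict) -> bool:
    # Guards against erroneous target discrimination.
    return e.get("target_confirmed_military", False)

def collateral_within_limits(e: Dict) -> bool:
    # Engagement must fall under a preset collateral-damage ceiling.
    return e.get("estimated_collateral", float("inf")) <= e.get("collateral_limit", 0.0)

proposed = {
    "target_confirmed_military": True,
    "estimated_collateral": 0.05,
    "collateral_limit": 0.1,
}
print(governor_permits(proposed, [target_is_discriminated, collateral_within_limits]))
```

The deny-by-default shape matters: adding a new rule can only make the system more restrictive, never less, which is why Arkin argues such a machine could behave at least as conservatively as a human soldier.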
—Lora G. Weiss