Teaching Robots the Rules of War

Can we trust battlefield robots designed to kill?


In May, we posted about a group of researchers from Georgia Tech who have been working on an “ethical governor” for military robots. Dr. Ronald Arkin, director of Georgia Tech’s Mobile Robot Laboratory, was interviewed by H+ magazine on the subject, and we’ve got some choice excerpts below:

In his recent book, Governing Lethal Behavior in Autonomous Robots, Dr. Arkin explores a number of complex real-world scenarios where robots with ethical governors would “do the right thing” — in consultation with humans on the battlefield. These scenarios include adherence to the Rules of Engagement (ROE) and Laws of War (LOW) (Taliban and Iraq), discrimination (Korean DMZ), and proportionality and tactics (urban sniper).

Arkin’s “rules” rework Asimov’s laws into something more like this:

1. Engage and neutralize targets as combatants according to the ROE.
2. Return fire with fire proportionately.
3. Minimize collateral damage — intentionally minimize harm to noncombatants.
4. If uncertain, invoke tactical maneuvers to reassess combatant status.
5. Recognize surrender and hold POWs until human forces can take custody.

Dr. Arkin and his colleagues at Georgia Tech have developed a proof-of-concept prototype of the ethical governor, though the software architecture is likely years away from use on the battlefield.
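To make the idea a little more concrete, here’s a minimal sketch in Python of what a governor-style check on a firing decision could look like. This is not Arkin’s actual architecture or code; all of the class names, fields, and thresholds below are hypothetical, invented purely for illustration.

```python
# Minimal sketch of an "ethical governor"-style check. This is a toy
# illustration, NOT Arkin's actual architecture; every name here is hypothetical.

from dataclasses import dataclass


@dataclass
class Target:
    is_combatant_confidence: float  # 0.0-1.0, from a (hypothetical) perception system
    has_surrendered: bool
    expected_collateral: int        # estimated noncombatants put at risk by engaging


def governor_permits_engagement(target: Target,
                                roe_allows_engagement: bool,
                                confidence_threshold: float = 0.95,
                                collateral_limit: int = 0) -> tuple[bool, str]:
    """Return (permitted, reason). The weapon stays locked unless every check passes."""
    if not roe_allows_engagement:
        return False, "Rules of Engagement do not authorize engagement"
    if target.has_surrendered:
        return False, "Target has surrendered; hold as POW for human forces"
    if target.is_combatant_confidence < confidence_threshold:
        return False, "Combatant status uncertain; reassess with tactical maneuvers"
    if target.expected_collateral > collateral_limit:
        return False, "Expected collateral damage exceeds proportionality limit"
    return True, "Engagement permitted under ROE"
```

The key design point, in Arkin’s telling, is that the governor vetoes by default: lethal action is released only when every constraint is satisfied, rather than being blocked only after a violation is detected.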

h+: Some researchers assert that no robots or AI systems will be able to discriminate between a combatant and an innocent, that this sensing ability currently just does not exist. Do you think this is just a short-term technology limitation? What technological assumptions do you make in the design of your ethical governor?

RA: I agree this discrimination technology does not effectively exist today, nor is it intended that these systems should be fielded in current conflicts. These are for the so-called war after next, and the DoD would need to conduct extensive additional research in order to develop the accompanying technology to support the proof-of-concept work I have developed. But I don’t believe there is any fundamental scientific limitation to achieving the goal of these machines being able to discriminate better than humans can in the fog of war, again in tightly specified situations. This is the benchmark that I use, rather than perfection. But if that standard is achieved, it can succeed in reducing noncombatant casualties and thus is a goal worth pursuing in my estimation.

This is pretty much exactly what we were saying back in February when the media freak-out of the week was killer robots: in a nutshell, a robot soldier can be programmed to perform as well as, and in some cases more effectively than, a human soldier in specific combat situations. We can’t do it yet, which is why robots currently don’t have direct, unsupervised control over their own weaponry.

h+: Do you envision robots ever disobeying military orders on the battlefield to “do the right thing?” If so, under what circumstances?

RA: Asimov originated the use of ethical restraint in robots many years ago and presented all the quandaries that it can generate. In our prototype ethical governor (and in the design itself) we do provide the robot with the right to refuse an order it deems unethical. It must provide some explanation as to why it has refused such an order. With some reluctance, we have engineered a human override capability into the system, but one which forces the operator to explicitly assume responsibility for any ethical infractions that might result as a consequence of such an override.
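That refusal-plus-override behavior can be pictured as a small extension of the sketch above. Again, this is a hypothetical illustration, not the actual system; the function names and logging format are invented.

```python
# Hypothetical sketch of order refusal and operator override, extending the
# governor_permits_engagement() sketch above. Names and logging are illustrative only.

import logging

log = logging.getLogger("governor")


def handle_engagement_order(target, roe_allows_engagement,
                            operator_id=None, operator_override=False):
    permitted, reason = governor_permits_engagement(target, roe_allows_engagement)
    if permitted:
        return "ENGAGE"
    # The robot refuses the order and must explain why.
    log.warning("Order refused: %s", reason)
    if operator_override and operator_id is not None:
        # Override is possible, but responsibility is explicitly transferred
        # to the human operator and recorded.
        log.warning("Override by operator %s; operator assumes responsibility "
                    "for any ethical infraction (refusal reason: %s)",
                    operator_id, reason)
        return "ENGAGE (OVERRIDDEN)"
    return "HOLD FIRE"
```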

h+: Do you ever foresee a scenario where both sides in a conflict are strictly robotic?

RA: Not really. I view these unmanned systems as highly specialized assets that will be working alongside our troops, not directly replacing them. They will conduct specialized operations (for example, building clearing, counter sniper operations, and so forth) that will provide an asymmetric advantage to our war fighters. A human presence on the battlefield will be maintained, and some would argue that it must be so for a range of reasons.

See? There’s no reason to panic: some very smart people are working on this robot ethics issue, and it’s not even an issue yet. If you take a step back and look at it logically, you realize that, just like humans, robots can be taught to follow rules, obey regulations, and make ethical decisions… And they can probably do it more strictly and reliably than humans can.

You can read the rest of the interview at h+, and get Ronald Arkin’s book on the subject here.

[ h+ Magazine ] via [ Boing Boing ]
