The SGR-A1 sentry robot, developed by Samsung Techwin Co. for the South Korean government, uses a low-light camera and pattern-recognition software to distinguish humans from animals or other objects and, if necessary, can fire its built-in machine gun.
PHOTO: Kim Dong-Joo/AFP/Getty Images
In today's New York Times, Cornelia Dean writes about the ethical concerns raised by the eventual arrival of autonomous robots on the battlefield. Can we design robots with killing capabilities that behave ethically -- perhaps even more ethically than humans -- and so never violate the Geneva Conventions or battlefield rules of engagement?
Some may think it's preposterous to debate robot ethics on the battlefield when human ethics on the battlefield is a much bigger problem these days. That's a fair point, but it doesn't mean we can't address both issues. After all, the weaponized robots are coming.
As my blog colleague Mikell noted in an earlier post, "the bulk of military robotics has been oriented toward surveillance, security, and disposal of mines and IEDs -- situations where most everyone can agree that it's a good idea to keep a human out of the way." But, she added, "things are changing." In addition to missile-equipped drones, companies are now adding weapons payload options to ground robots. There's still a human at the trigger, but that could change soon. In the military robotics world, "autonomous" is a word you hear often.
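To make that distinction concrete, here's a minimal sketch, in Python, of where the human sits in a fire-control loop. Everything in it -- the names, the Track type, the confidence threshold -- is invented for illustration; it is not the SGR-A1's software or any real weapon-control API. The point is how small the code change is that takes the human out of the loop.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable


class TargetClass(Enum):
    HUMAN = auto()
    ANIMAL = auto()
    OBJECT = auto()


@dataclass
class Track:
    target_class: TargetClass
    confidence: float  # classifier confidence in [0, 1]


def authorize_engagement(track: Track, autonomous: bool,
                         operator_confirms: Callable[[Track], bool]) -> bool:
    """Decide whether the weapon may engage a classified track."""
    if track.target_class is not TargetClass.HUMAN:
        return False  # the sentry scenario treats only human tracks as threats
    if track.confidence < 0.99:
        return False  # refuse to act on an uncertain classification
    if autonomous:
        return True   # no human in the loop: the classifier is the last gate
    return operator_confirms(track)  # "a human at the trigger"


def operator_console(track: Track) -> bool:
    """Stand-in for a real operator's confirm/deny console."""
    return False  # cautious default for the sketch


# Supervised mode, roughly today's deployments: a person confirms each shot.
print(authorize_engagement(Track(TargetClass.HUMAN, confidence=0.995),
                           autonomous=False,
                           operator_confirms=operator_console))
```

Notice that the entire policy difference between "surveillance tool" and "autonomous weapon" comes down to a single flag and who is allowed to set it.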
The Times' Dean interviewed a range of roboticists, philosophers, and other experts, and I thought it'd be a good idea to summarize their positions -- below, snippets from the Times story (with a toy sketch of the constraint-checking idea after the excerpts):
* "My research hypothesis is that intelligent robots can behave more ethically in the battlefield than humans currently can," said Ronald C. Arkin, a computer scientist at Georgia Tech, who is designing software for battlefield robots under contract with the Army.
* Randy Zachery, who directs the Information Science Directorate of the Army Research Office, which is financing Dr. Arkin's work, said the Army hoped this "basic science" would show how human soldiers might use and interact with autonomous systems and how software might be developed to "allow autonomous systems to operate within the bounds imposed by the warfighter." ... "It doesn't have a particular product or application in mind," said Dr. Zachery, an electrical engineer. "It is basically to answer questions that can stimulate further research or illuminate things we did not know about before."
* [Noel Sharkey, a computer scientist at the University of Sheffield in Britain] said he would ban lethal autonomous robots until they demonstrate they will act ethically, a standard he said he believes they are unlikely to meet. Meanwhile, he said, he worries that advocates of the technology will exploit the ethics research "to allay political opposition."
* "We don't want to get to the point where we should have had this discussion 20 years ago," said Colin Allen, a philosopher at Indiana University and a co-author of "Moral Machines: Teaching Robots Right From Wrong," published this month by Oxford University Press. ... In their book, Dr. Allen and his co-author, Wendell Wallach, a computer scientist at the Yale Interdisciplinary Center for Bioethics, note that an engineering approach "meant to cover the range of challenges" will probably seem inadequate to an ethicist. And from the engineer's perspective, they write, making robots "sensitive to moral considerations will add further difficulties to the already challenging task of building reliable, efficient and safe systems." But, Dr. Allen added in an interview, "Is it possible to build systems that pay attention to things that matter ethically? Yes."
* Daniel C. Dennett, a philosopher and cognitive scientist at Tufts University, agrees. "If we talk about training a robot to make distinctions that track moral relevance, that's not beyond the pale at all," he said. But, he added, letting machines make ethical judgments is "a moral issue that people should think about."
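As promised above, here's a toy sketch of the constraint-checking idea that runs through these comments: lethal action is suppressed unless every encoded battlefield rule is satisfied, with hold-fire as the default. The three rules and all the names below are invented for illustration -- real rules of engagement are far richer than three booleans, and this is not Arkin's actual system.

```python
from dataclasses import dataclass


@dataclass
class Situation:
    target_is_combatant: bool          # distinction (Geneva Conventions)
    noncombatants_in_blast_area: bool  # proportionality / collateral risk
    inside_roe_engagement_zone: bool   # rules-of-engagement geometry


# Each constraint maps a situation to True (permitted) or False (forbidden).
CONSTRAINTS = [
    lambda s: s.target_is_combatant,
    lambda s: not s.noncombatants_in_blast_area,
    lambda s: s.inside_roe_engagement_zone,
]


def lethal_action_permitted(situation: Situation) -> bool:
    """Permit lethal force only if no encoded constraint forbids it.

    The default is hold-fire: any failed check suppresses the action.
    """
    return all(check(situation) for check in CONSTRAINTS)


# A combatant target, but with noncombatants nearby: the shot is suppressed.
assert not lethal_action_permitted(Situation(
    target_is_combatant=True,
    noncombatants_in_blast_area=True,
    inside_roe_engagement_zone=True,
))
```

The skeptics' point survives the sketch: the hard part isn't the all() call, it's whether perception and classification can ever deliver those booleans reliably under battlefield conditions.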
So they all appear to agree that battlefield robots can be designed to behave ethically, or at least that we need to have this discussion now, which is a good thing. Still, most of the discussion is based on theoretical considerations or computer simulations. We're still a long way from being able to claim that a completely autonomous gun-toting robot, with lots of subsystems, loaded with thousands of lines of code, subject to unexpected, untested conditions, won't shoot the wrong thing, or person. Or worse, that it won't go completely rogue. That said, I'm an optimist when it comes to our engineering capabilities. We'll get there.
PS: And if we don't ... there might be other means of stopping the 'bots. Over at our neighboring blog Risk Factor, Bob Charette wrote about WAR Defence, a London startup developing "weapons systems against robots, detection and monitoring of robotic entities, and 'robo-viruses.'" As Charette noted, "I never thought of fighting future robots as an emerging career opportunity, but with the military involvement in robots, the singularity supposedly approaching, maybe ..."