My company is pretty firmly entrenched in the defense industry. In fact, many robotics companies are -- defense contracting is a good way to pay the bills while growing other areas of research and development. But while robots are really amazing things to work on in and of themselves, the technology is slowly advancing toward greater capability and autonomy -- and for those of us working defense contracts, this has some uncomfortable implications.
Bluefin's AUVs aren't weapons (when people ask, I remind them that there is already a word for an autonomous submarine that explodes -- "torpedo"), and most other companies aren't actively weaponizing their robots. To date, the bulk of military robotics has been oriented toward surveillance, security, and disposal of mines and IEDs -- situations where most everyone can agree that it's a good idea to keep a human out of the way.
But things are changing. Even if companies aren't putting guns on their robots, they're at least putting on gun mounts. Early last month Wired reported on newly weaponized ground robots. Other companies are building in weapons payload options: recently a Reaper aerial drone made history as the first Army unmanned military vehicle to kill (thank you for the correction, Kevin); its remote operators used it to locate two men suspected of placing an IED and dropped its "precision munitions" on the targets.
What do the users of these robots think? At the OceanTech Expo in early September, I attended an AUV panel; one of the panelists, Bill Schopfel, is the event manager at the Office of Naval Research. He spoke specifically to the role of robotic vehicles in underwater mine countermeasures: for the foreseeable future, he said, the decision to engage and neutralize mines will not be autonomous. Even if the vehicle is capable of performing neutralization measures, there will still be a person in the loop making the decision to engage. As for autonomous vehicles that operate without a human's control, a DoD proposal from last year discusses the idea that humans target humans and machines target machines -- though that proposal has not yet passed legal review.
The Army's Future Combat Systems initiative is becoming a reality, and it demands careful consideration of how we choose to employ this technology -- even as our military needs demand immediate technological solutions. How can we make sure ethics and technology develop at the same pace?