Benjamin Wittes at Lawfare publishes a note from John C. Dehn of the West Point Military Academy about “killer” robots. Dehn goes into how the report is problematic with its definitions:
The report might be discussing only those weapons on the most autonomous end of the spectrum, at one point referring to “fully autonomous weapons that could select and engage targets without human intervention” and at another as “a robot [that] could identify a target and launch an attack on its own power.” Somewhat confusingly, though, the report includes three types of “unmanned weapons” in its definition of “robot” or “robotic weapon”—human-in-the-loop; human-on-the-loop; and human-out-of-the-loop. (p. 2) Thus, the report potentially generates confusion about the precise level of autonomy that the authors of the report intended to target (pun intended), though human-(totally-)out-of-the-loop weapons are the obvious candidate.
Even assuming the report clearly intends “fully autonomous weapons” to include only weapons that independently identify/select and then engage targets, the discussion here (particularly between Ben and Tom) demonstrates that this definition of the term is not without its problems. These problems include: (1) what types of targets should be cause for concern (humans, machines, buildings, infrastructure (roads, bridges, etc.), or munitions (such as rockets and artillery or mortar rounds)); and (2) what is meant by target “selection” or “identification.”
The take-away:
Those of us who have spent many years training soldiers on what constitutes “hostile intent” or a “hostile act” justifying the proportionate use of responsive force are familiar with the endless “what ifs” that accompany any hypothetical example chosen. Ultimately, we tell soldiers to use their best “judgment” in the face of potentially infinite variables. This seems to me a particularly human endeavor. While artificial intelligence can deal with an extremely large set of variables with amazing speed and accuracy, it may never be possible to program a weapon to detect and analyze the limitless minutia of human behavior that may be relevant to an objective analysis of whether a use of force is justified or excusable as a moral and legal matter.
Ultimately, it seems, one’s view of the morality and legality of “fully autonomous weapons” depends very much upon what function(s) they believe those weapons will perform. Without precision as to those functions, however, it is hard to have a meaningful discussion. In any case, I fully agree with Ben that existing international humanitarian law and domestic policy adequately deals with potentially indiscriminate weapons, rendering the report’s indiscriminate recommendation unnecessary.
At the beginning of the post, Wittes rounds up his discussion with Kenneth Anderson and Matthew Waxman. Previous RobotCentral round-ups here and here.