On the subject of autonomous armed drones, Marcelo Rinesi would rather the killer be robotic and not human:
The counterargument is obvious: have you seen what already happens in human-driven battlefields? Empirically, soldiers’ ethical constraints are anything but foolproof (naturally so, given their training and the context of war); there’s no reason to think even buggy software will be worse, and software, at least, can be debugged and improved.
The more important issue:
Ultimately, the problem of having a killer drone flying over your head is nothing but the problem of having a killer anything flying over your head. The fact of killing by specifically trained and organized groups of people with the explicit backing of their societies is where the locus of ethical concern has always lain, and should continue to lie.
Wendell Wallach is concerned about new wars, among other issues (via io9):
“A common concern among some military pundits is that it lowers the barriers to starting new wars,” says Wallach, “that it presents the illusion of a quick victory and without much loss of force – particularly human losses.” It’s also feared that these machines would escalate ongoing conflicts and use indiscriminate force in the absence of human review, and there’s the potential for devastating friendly fire.