This article started out as an effort to dig into the interesting technological and sociological ramifications of developing and embedding a conscience into robots, as Ronald Arkin of Georgia Tech’s College of Computing is working to do. I read article after article trying to find and stitch together the technology thread. Instead, I grew more and more irate at the prospect of one of our commanders giving a robot an order to fire and the robot declining because the target was in a mosque.
What boggles my mind is that the DoD gave Ron $290K last year to fund this research.
Battery power? Check.
Ammunition? Check.
Geneva Convention Rules? Loaded.
Ron says he would create a “multidimensional mathematical decision space of possible behaviour (sic) actions” against which the robot would compare the given circumstances and decide whether the target is legal.
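To be fair, here is roughly what I take that to mean: a set of constraints, drawn from the rules of engagement, that a proposed action has to satisfy before the robot is permitted to fire. The sketch below is strictly my own back-of-the-envelope illustration of that idea; every name, field, and threshold in it is made up for this post, not anything from Ron’s research.

```python
# Hypothetical sketch of a "legal to fire?" constraint check.
# All names (Circumstance, CONSTRAINTS, is_engagement_legal) and the
# 0.3 collateral threshold are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Circumstance:
    target_is_combatant: bool
    target_in_protected_site: bool  # e.g., a mosque or hospital
    expected_collateral: float      # 0.0 (none) .. 1.0 (severe)

# Each constraint encodes one rule of engagement as a predicate
# the circumstances must satisfy for the shot to be allowed.
CONSTRAINTS = [
    ("target must be a combatant",
     lambda c: c.target_is_combatant),
    ("target must not be in a protected site",
     lambda c: not c.target_in_protected_site),
    ("collateral damage must be proportionate",
     lambda c: c.expected_collateral < 0.3),
]

def is_engagement_legal(c: Circumstance) -> tuple[bool, list[str]]:
    """Return (legal?, list of violated constraints)."""
    violated = [name for name, ok in CONSTRAINTS if not ok(c)]
    return (not violated, violated)

if __name__ == "__main__":
    legal, why_not = is_engagement_legal(
        Circumstance(target_is_combatant=True,
                     target_in_protected_site=True,
                     expected_collateral=0.1))
    print("FIRE" if legal else "HOLD: " + ", ".join(why_not))
```

Run it and the robot holds fire, reporting “target must not be in a protected site”, which is exactly the mosque scenario that set me off in the first place: the commander says fire, and the math says no.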
Over the next two months, Ron will be visiting several military installations to get the soldiers’ take on this thing. His angle is that it benefits the soldiers because the robots would be prevented from doing anything as “embarrassing” as what happened in Abu Ghraib.
What Ron is forgetting is that today’s military robots already have a conscience, assuming he believes that our own soldiers have a conscience. Behind every robot in our armed forces is an American soldier. When that soldier sees a target in the cross-hairs of his computer screen and pushes a button, that robot better damn well fire.