Shad Callister

sci-fi and techno-thriller author


Human Rights Watch fears “Killer Robots”

Human Rights Watch just released a 50-page report warning of some of the dangers autonomous combat machines may pose (http://www.hrw.org/news/2012/11/19/ban-killer-robots-it-s-too-late). It was swiftly rebutted by a number of other newspapers and sites pointing out where HRW gets the issue wrong. I’ll just say that yes, there are serious ethical, technological, and philosophical issues with machines fighting wars for us, and no, it’s not as simple as “killer robots are bad and should be banned!” I’m glad the discussion on this subject is heating up.

One interesting point that emerged from the discussion, which I’d like to echo: not only do we have fully autonomous weapons in the field now, but we’ve had them for many, many years if you count land mines. These are machines that use a single simple sensor to kill, nearly indiscriminately, and by extension I suppose any booby trap used since the dawn of time fits the description. While these examples clearly aren’t the same as the computerized robots we’re talking about here, they might be useful in forming thought experiments to help us understand machine-warrior ethics.

Robots are capable of discriminating among targets very carefully. Does that make them an improvement over land mines, or even more deplorable?
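To make that contrast concrete, here’s a deliberately toy sketch in Python. Every name and threshold in it is invented for the sake of the thought experiment, and it doesn’t model any real weapon system; it only shows how a mine’s “decision” reduces to a single sensor reading, while a discriminating machine can gate the same lethal act behind several checks:

    # Toy illustration only; every name and threshold here is invented.

    PRESSURE_THRESHOLD_KG = 5.0  # a mine's single, indiscriminate input

    def mine_should_detonate(pressure_kg):
        """A land mine 'decides' on one reading: anything heavy enough."""
        return pressure_kg >= PRESSURE_THRESHOLD_KG

    def robot_should_engage(is_armed, combatant_confidence, civilians_nearby):
        """A hypothetical robot gates the same act behind several checks."""
        return is_armed and combatant_confidence > 0.95 and not civilians_nearby

    # To the mine, a soldier and a child stepping on it look identical:
    print(mine_should_detonate(30.0))  # True, no matter who is standing there

    # The robot can, at least in principle, decline the same encounter:
    print(robot_should_engage(False, 0.2, True))  # False

Whether that extra gating makes the machine an improvement or something worse is exactly the question.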

Can we think of autonomous killing machines simply as highly advanced traps? If we compare them to the deadfall traps or spiked pits used since primitive times to kill animals and humans (and sometimes the wrong ones), it’s pretty clear that the man who dug the pit is completely responsible for whatever falls into it. No one would take seriously a trapper who, confronted with the body of an unintended victim of his deadfall, tried to claim “it’s the boulder’s fault!” Perhaps this tells us exactly where to place the responsibility for robotic homicides: on the human who most directly issued the command that resulted in the killing.

As for the removal of thinking, moral humans from their acts of war, robots are just another tick mark on the sliding scale that began at Wooden Club and progressed through Spear, Arrow, Bullet, and Tactical Ballistic Missile. We’ve been distancing ourselves from the act of killing for a long time now. Part of that distancing was the creation of a warrior class, the delegation of fighting to a man who kills so that the rest of his clan doesn’t have to. It seems we now have a new class: the warrior machine.

Ethics of Robotic Warfare

Slate did a fun “ethics and accountability of robot warriors” piece yesterday. Here’s the interesting part:

“An experiment conducted by the Human Interaction With Nature and Technological Systems Lab at the University of Washington had a robot, named Robovie, lie to students and cheat them out of a $20 reward. Sixty percent of victims could not help feeling that Robovie was morally responsible for deceiving them.

“Commenting on the future use of robots in war, the HINTS experimenters noted in their final report that a military robot will probably be perceived by most ‘as partly, in some way, morally accountable for the harm it causes. This psychology will have to be factored into ongoing philosophical debate about robot ethics, jurisprudence, and the Laws of Armed Conflict.’”
