This week a Washington Post article busted four myths about ‘killer robots’ … and left The Big Myth About ‘Killer Robots’ unchallenged. (More)

Busting ‘Killer Robots’ Myths … Or Not

The Washington Post Monkey Cage is usually a good source for insights from political science. From the myth of think tank independence to how and why ISIS is rewriting history, from whether smiley faces affect political dialogue to the science of why Reddit sucks, the Monkey Cage consistently offers engaging, well-researched content.

So last week’s article titled The SkyNet factor: Four myths about science fiction and the killer robot debate was … surprising.

Yes, the article by University of Massachusetts-Amherst political science professor Charli Carpenter does address four myths about autonomous weapons systems, more often referred to as ‘killer robots.’ More specifically, Dr. Carpenter looks at four myths about groups that campaign against such robots:

Myth #1: Campaigners Are Reacting to Hype and Robopocalyptic Hyperbole.

Wrong: campaigners are reacting to a concern over the ethical implications of developments in real-world military robotics that they see as increasingly taking humans out of the loop when it comes to targeting decisions.[…]

Myth #2: The Issue Got Media Attention Because the Campaign used “Killer Robots” in its Name.

Actually, it’s the other way around. The media was using Terminator and Battlestar Galactica references to report on developments in autonomous weaponry long before NGOs picked up the issue – as long ago as 2007.[…]

Myth #3: The Campaign Builds its Case on Robopocalyptic Metaphors.

Aside from adopting the label “killer robots” the campaign has generally avoided sci-fi metaphors and built its case on real-world substance.[…]

Myth #4: Sensationalistic Terminator References are Unnecessarily Scaring the Public.

It’s easy to chalk public antipathy to killer robots up to robopocalyptic fiction stoked by disarmament campaigners. But actually, survey research has shown the average U.S. citizen is equally horrified by the idea of autonomous weapons whether they are referred to as autonomous weapons or killer robots. And they are equally horrified whether or not they report ever having seen the film Terminator.

So there really is a campaign against ‘killer robots’ based on what the campaigners “see as” turning robots loose to kill people without any human decision-makers. In fact, states parties to the UN Convention on Certain Conventional Weapons discussed that very issue back in May. And the media began calling these weapons ‘killer robots’ and invoking images of Battlestar Galactica and Terminator before the campaign began, so the campaigners didn’t invent that link. But they did hop all over it:

Mary Wareham, coordinator of the Campaign to Stop Killer Robots, admits it was a bit much. “We put killer robots in the title of our report to be provocative and get attention,” she says. “It’s shameless campaigning and advocacy, but we’re trying to be really focused on what the real life problems are, and killer robots seemed to be a good way to begin the dialogue.”

And in discussions with military lawyers, the campaigners don’t talk about cyborgs or SkyNet but instead “refer to ‘fully autonomous weapons,’ principles of ‘proportionality and distinction,’ ‘situational awareness,’ and ‘meaningful human control.'” And polls show most people don’t want governments to develop such weapons, regardless of whether the poll calls them “fully autonomous weapons” or “killer robots,” and regardless of whether the people surveyed have seen Terminator.

So should we all join the Campaign to Stop Killer Robots?

Perhaps we should begin by asking whether the U.S. or any other military power has designed, built, or even outlined a plan to design or build ‘killer robots’ – that is, fully autonomous weapons systems that would select and fire on human targets without any human operator or decision-maker.

And the answer is … well, no.

Even the Human Rights Watch report with the breathless title Losing Humanity: The Case Against Killer Robots acknowledges:

While emphasizing the desirability of increased autonomy, many of these military documents also stress that human supervision over the use of deadly force will remain, at least in the immediate future. According to the US Department of Defense, “[f]or the foreseeable future, decisions over the use of force and the choice of which individual targets to engage with lethal force will be retained under human control in unmanned systems.” The UK Ministry of Defence stated in 2011 that it “currently has no intention to develop systems that operate without human intervention in the weapon command and control chain.” Such statements are laudable but do not preclude a change in that policy as the capacity for autonomy evolves.

The specific examples of already-existing systems cited in the Human Rights Watch report include the AEGIS and Iron Dome missile defense systems. But those target incoming missiles or artillery shells, not human beings. The report also mentions sentry robots deployed by South Korea along the DMZ and by Israel along the border with Gaza. But despite unconfirmed rumors to the contrary, both systems require a human operator to push a button before they open fire.

Consider the oft-cited United States Air Force Unmanned Aircraft Systems Flight Plan 2009-2047, a 2009 report that begins with current capabilities and reaches into informed speculation. Here’s the critical passage:

Authorizing a machine to make lethal combat decisions is contingent upon political and military leaders resolving legal and ethical questions. These include the appropriateness of machines having this ability, under what circumstances it should be employed, where responsibility for mistakes lies and what limitations should be placed upon the autonomy of such systems. The guidance for certain mission such as nuclear strike may be technically feasible before UAS safeguards are developed. On that issue in particular, Headquarters Air staff A10 will be integral to develop and vet through the Joint Staff and COCOMS the roles of UAS in the nuclear enterprise. Ethical discussions and policy decisions must take place in the near term in order to guide the development of future UAS capabilities, rather than allowing the development to take its own path apart from this critical guidance.

In other words, military planners see the thorny ethical issues raised by truly autonomous weapons systems, and they want a full and open debate before any such systems are developed.

Finally, a growing body of data shows that even highly trained humans make deadly mistakes. Using news reports, the Killed By Police Facebook page has documented 1450 deaths since May 1, 2013, and Reuben Fischer-Baum and Al Johri at FiveThirtyEight sampled 10% of those entries to test the reliability of the data. They found that 93% of the stories documented killings by police officers in the line of duty, or by off-duty officers acting under color of law. That works out to about 1000 police homicides per year, as compared to the roughly 400 justifiable police homicides per year reported by the FBI.

You can’t simply subtract 400 from 1000 and say that over half of the police homicides each year are not justified. The data don’t map that cleanly. But there is evidence that police officers sometimes do kill innocent people, including bystanders.
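The extrapolation behind that ‘about 1000 per year’ figure is easy to check with a little arithmetic. Here’s a minimal back-of-the-envelope sketch in Python; the analysis date (late August 2014) is my assumption, since the post gives only the start of the counting window:

from datetime import date

# Back-of-the-envelope check of the "about 1000 per year" figure.
# The analysis date below is an assumption; the post gives only the start date.
total_reported = 1450         # deaths documented by Killed By Police since May 1, 2013
confirmed_share = 0.93        # share of the FiveThirtyEight sample confirmed as police homicides

start = date(2013, 5, 1)
analysis = date(2014, 8, 25)  # assumed date of the FiveThirtyEight analysis
years = (analysis - start).days / 365.25

confirmed = total_reported * confirmed_share   # ~1349 confirmed police homicides in the window
per_year = confirmed / years                   # annualized rate

print(f"Window length: {years:.2f} years")
print(f"Annualized police homicides: {per_year:.0f}")  # roughly 1000, vs ~400 in FBI data

Shift the assumed end date by a month in either direction and the annualized figure still lands near 1000, well above the FBI’s count of roughly 400 justifiable police homicides per year.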

We can debate whether well-programmed robots would be more or less likely to make such deadly mistakes, who should be accountable when mistakes happen, or whether the risks are too great to even explore such technology. But in those debates, we should not pretend the alternative to ‘killer robots’ is calm, rational humans making reasoned moral decisions. The alternative is stressed, sometimes angry humans making split-second decisions … and many of those human decisions are wrong.

The Big Myth About ‘Killer Robots’ … is that humans must be better.

+++++

Happy Friday!