3 Questions on Killer Robots
Fully autonomous weapons should be outlawed before they are developed, says a human-rights scholar.
International human-rights law does not account for the use of weapons that could kill targets on their own.
Delegates to the United Nations Convention on Certain Conventional Weapons are meeting this week in Geneva to discuss fully autonomous weapons—machines that could decide to kill someone without any human input. Though this technology does not exist yet, some national-security experts say it’s plausible, given the development of “semi-autonomous” missile defense systems and unmanned aircraft that can take off, fly, and land on their own. Today a person pushes the button when a drone fires on a target, but in the near future, nations might try to develop weapons that don’t need a human in the loop. In advance of the meeting, a group from Harvard Law School and Human Rights Watch released a report calling for an international treaty banning these technologies as soon as possible. The report’s lead author, Bonnie Docherty, a lecturer at Harvard Law School and a senior researcher at Human Rights Watch, spoke to Mike Orcutt of MIT Technology Review.
Since fully autonomous weapons don’t yet exist, why isn’t a ban premature?
We believe this is a technology that could revolutionize warfare, and we think we should act now, before countries invest too much in the technology and then don’t want to give it up. There are many concerns about these weapons—ethical and legal concerns, concerns about how to determine accountability, and the risk of an arms race, to name a few. The precautionary principle says that if there is a serious threat of public harm, even the scientific uncertainty we have in this case should not stand in the way of action to prevent the harm.
Isn’t it difficult to define a “fully autonomous” weapon?
Our definition, which would not be a legal definition but one meant to get people on the same page, is a weapons system that can select and kill a target without what we call meaningful human control. A treaty would have to define meaningful human control more precisely, but we think it’s a good starting point. It’s when you lose that human control that you cross a threshold into something most people don’t want.
In addition to the errors that could lead an autonomous weapon to kill civilians, what are some of the novel legal problems they could cause?
If these machines did come into existence, there would be no way to hold anyone accountable if they violated international law. The programmer, the manufacturer, the commander, and the operator would all escape liability under existing law. It’s also important to note that our report examines both criminal law and civil law, and we found an accountability gap under both. Even under civil law, which has lower standards for establishing accountability, the programmer or manufacturer couldn’t be held responsible, because the military and its contractors would have immunity. There would also be other evidentiary hurdles. So it’s really a broad-based international, domestic, criminal, and civil accountability gap that we’re worried about.