3 Questions on Killer Robots

Fully autonomous weapons should be outlawed before they are developed, says a human-rights scholar.
April 17, 2015

Delegates to the United Nations Convention on Certain Conventional Weapons are meeting this week in Geneva to discuss fully autonomous weapons—machines that could decide to kill someone without any human input. Though this technology does not exist yet, some national-security experts say it’s plausible, given the development of “semi-autonomous” missile defense systems and unmanned aircraft that can take off, fly, and land on their own. Today a person is pushing the button when a drone fires on a target, but in the near future, nations might try to develop weapons that don’t need a human in the loop. In advance of the meeting, a group from Harvard Law School and Human Rights Watch released a report that calls for an international treaty banning these technologies as soon as possible. The report’s lead author, Bonnie Docherty, a lecturer at Harvard Law School and a senior researcher at Human Rights Watch, spoke to Mike Orcutt of MIT Technology Review.

Since fully autonomous weapons don’t yet exist, why isn’t a ban premature?

We believe this is a technology that could revolutionize warfare, and we think we should act now, before countries invest so much in the technology that they don’t want to give it up. There are many concerns about these weapons, including ethical and legal concerns, questions about how to determine accountability, and the risk of an arms race, to name a few. The precautionary principle says that when there is a serious threat of public harm, even scientific uncertainty, as we have in this case, should not stand in the way of action to prevent the harm.

Isn’t it difficult to define a “fully autonomous” weapon?

Our definition, which would not be a legal definition but one meant to get people on the same page, is a weapons system that can select and kill a target without what we call meaningful human control. In a treaty there would have to be more of a definition of what meaningful human control is, but we think it’s a good starting point. It’s when you lose that human control that you cross a threshold into something that most people don’t want.

In addition to the errors that could lead an autonomous weapon to kill civilians, what are some of the novel legal problems they could cause?

If these machines did come into existence, there would be no way to hold anyone accountable if they violated international law. The programmer, the manufacturer, the commander, and the operator would all escape liability under existing law. It’s also important to note that our report looks at both criminal law and civil law, and we found an accountability gap under both. Even under civil law, which has lower standards for establishing accountability, the programmer or manufacturer couldn’t be held responsible, because the military and its contractors would have immunity. There would also be other evidentiary hurdles. So it’s really a broad-based international, domestic, criminal, and civil accountability gap that we’re worried about.
