3 Questions on Killer Robots

Fully autonomous weapons should be outlawed before they are developed, says a human-rights scholar.
April 17, 2015

Delegates to the United Nations Convention on Certain Conventional Weapons are meeting this week in Geneva to discuss fully autonomous weapons—machines that could decide to kill someone without any human input. Though this technology does not exist yet, some national-security experts say it’s plausible, given the development of “semi-autonomous” missile defense systems and unmanned aircraft that can take off, fly, and land on their own. Today a person is pushing the button when a drone fires on a target, but in the near future, nations might try to develop weapons that don’t need a human in the loop. In advance of the meeting, a group from Harvard Law School and Human Rights Watch released a report that calls for an international treaty banning these technologies as soon as possible. The report’s lead author, Bonnie Docherty, a lecturer at Harvard Law School and a senior researcher at Human Rights Watch, spoke to Mike Orcutt of MIT Technology Review.

Since fully autonomous weapons don’t yet exist, why isn’t a ban premature?

We believe this is a technology that could revolutionize warfare, and we think we should act now, before countries invest too much in the technology and then don’t want to give it up. There are many concerns about these weapons, including ethical and legal concerns, concerns about how to determine accountability, and the risk of an arms race, to name a few. The precautionary principle says that when there is a serious threat of public harm, scientific uncertainty of the kind we have in this case should not stand in the way of action to prevent that harm.

Isn’t it difficult to define a “fully autonomous” weapon?

Our definition, which would not be a legal definition but one meant to get people on the same page, describes a weapons system that can select and kill a target without what we call meaningful human control. In a treaty there would have to be a fuller definition of what meaningful human control is, but we think it’s a good starting point. It’s when you lose that human control that you cross a threshold into something that most people don’t want.

In addition to the errors that could lead an autonomous weapon to kill civilians, what are some of the novel legal problems these weapons could cause?

If these machines did come into existence, there would be no way to hold anyone accountable if they violated international law. The programmer, the manufacturer, the commander, and the operator would all escape liability under existing law. It’s also important to note that our report looks at both criminal law and civil law, and we found that there’s an accountability gap under both. Even under civil law, which has lower standards for establishing accountability, the programmer or manufacturer couldn’t be held responsible, because the military and its contractors would have immunity. There would also be other evidentiary hurdles. So it’s really a broad-based international, domestic, criminal, and civil accountability gap that we’re worried about.
