Military Robots: Armed, but How Dangerous?

The debate over using artificial intelligence to control lethal weapons in warfare is more complex than it seems.
August 3, 2015

An open letter calling for a ban on lethal weapons controlled by artificially intelligent machines was signed last week by thousands of scientists and technologists, reflecting growing concern that swift progress in artificial intelligence could be harnessed to make killing machines more efficient, and less accountable, both on the battlefield and off. But experts are more divided on the issue of robot killing machines than you might expect.

The letter, presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, was signed by many leading AI researchers as well as prominent scientists and entrepreneurs including Elon Musk, Stephen Hawking, and Steve Wozniak. The letter states:

“Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is—practically if not legally—feasible within years not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

Rapid advances have indeed been made in artificial intelligence in recent years, especially within the field of machine learning, which involves teaching computers to recognize often complex or subtle patterns in large quantities of data. And this is leading to ethical questions about real-world applications of the technology (see “How to Make Self-Driving Cars Make Ethical Decisions”).

Meanwhile, military technology has advanced to allow actions to be taken remotely, for example using drone aircraft or bomb disposal robots, raising the prospect that those actions could be automated.

The issue of automating lethal weapons has been a concern for scientists as well as military and policy experts for some time. In 2012, the U.S. Department of Defense issued a directive banning the development and use of “autonomous and semi-autonomous” weapons for 10 years. Earlier this year the United Nations held a meeting to discuss the issue of lethal automated weapons, and the possibility of such a ban.

But while military drones or robots could well become more automated, some say the idea of fully independent machines capable of carrying out lethal missions without human assistance is more fanciful. With many fundamental challenges remaining in the field of artificial intelligence, it’s far from clear when the technology needed for fully autonomous weapons might actually arrive.

“We’re pushing new frontiers in artificial intelligence,” says Patrick Lin, a professor of philosophy at California Polytechnic State University. “And a lot of people are rightly skeptical that it would ever advance to the point where it has anything called full autonomy. No one is really an expert on predicting the future.”

Lin, who gave evidence at the recent U.N. meeting, adds that the letter does not touch on the complex ethical debate behind the use of automation in weapons systems. “The letter is useful in raising awareness,” he says, “but it isn’t so much calling for debate; it’s trying to end the debate, saying ‘We’ve figured it out and you all need to go along.’”

Stuart Russell, a leading AI researcher and a professor at the University of California, Berkeley, dismisses this idea. “It’s simply not true that there has been no debate,” he says. “But it is true that the AI and robotics communities have been mostly blissfully ignorant of this issue, maybe because their professional societies have ignored it.”

One point of debate, which the letter does acknowledge, is that automated weapons could conceivably help reduce unwanted casualties in some situations, since they would be less prone to error, fatigue, or emotion than human combatants.

Those behind the letter have little time for this argument, however.

Max Tegmark, an MIT physicist and a founding member of the Future of Life Institute, which coordinated the letter signing, says the idea of ethical automated weapons is a red herring. “I think it’s rather irrelevant, frankly,” he says. “It’s missing the big point about what is this going to lead to if one starts this AI arms race. If you make the assumption that only the U.S. is going to build these weapons, and the number of conflicts will stay exactly the same, then it would be relevant.”

The Future of Life Institute has also issued a more general warning about the long-term risks posed by unfettered AI.

“This is quite a different issue,” Russell says. “Although there is a connection, in that if one is worried about losing control over AI systems as they become smarter, maybe it’s not a good idea to turn over our defense systems to them.”

While many AI experts seem to share this broad concern, some see it as a little misplaced. For example, Gary Marcus, a cognitive scientist and artificial intelligence researcher at New York University, has argued that computers do not need to become artificially intelligent in order to pose many other serious risks, to financial markets or air-traffic systems, for example.

Lin says that while the concept of unchecked killer robots is obviously worrying, the issue of automated weapons deserves a more nuanced discussion. “Emotionally, it’s a pretty straightforward case,” says Lin. “Intellectually I think they need to do more work.”
