
Why AI researchers shouldn’t turn their backs on the military

The author of a new book on autonomous weapons says scientists working on artificial intelligence need to do more to prevent the technology from being weaponized.
August 14, 2018

More than 2,400 AI researchers recently signed a pledge promising not to build so-called autonomous weapons—systems that would decide on their own whom to kill. This follows Google’s decision not to renew a contract to supply the Pentagon with AI for analysis of drone footage after the company came under pressure from many employees opposed to its work on a project known as Maven.

Paul Scharre, the author of a new book, Army of None: Autonomous Weapons and the Future of War, believes that AI researchers need to do more than opt out if they want to bring about change.


A former Army Ranger who served in Iraq and Afghanistan, Scharre is now a senior fellow at the Center for a New American Security. He argues that AI experts should engage with policymakers and military professionals, explain why researchers are concerned, and help them understand the limitations of AI systems.

Scharre spoke with MIT Technology Review senior editor Will Knight about the best way to halt a potentially dangerous AI arms race.


How keen is the US military to develop AI weapons?

US defense leaders have repeatedly stated that their intention is to keep a human “in the loop” and responsible for lethal-force decisions. Now, the caveat is they’ve also acknowledged that if other countries build autonomous weapons, then they may be forced to follow suit. And that’s the real risk—that if one country crosses this line then others may feel they have to respond in kind just to remain competitive.

Can these promises really be trusted, though?

I think senior US defense officials are sincere that they want humans to remain responsible for the use of lethal force. Military professionals certainly don’t want their weapons running amok. Having said that, it remains an open question how to translate a broad concept like human responsibility over lethal force into specific engineering guidance on what kinds of weapons are allowed. The definition of what constitutes an “autonomous weapon” is contested already, so there may be differing views on how to put those principles into practice.

Why do technologists need to be involved?

AI researchers must be part of these conversations, because their technical expertise is vital to shaping policy choices. We need to take into account AI bias, transparency, explainability, safety, and other concerns. AI technology is powerful today, but it also has many vulnerabilities, much as computers come with cyber risks. Unfortunately, governments seem to have gotten the first part of that message (AI is powerful) but not the second (it comes with risks). AI researchers can help governments and militaries better understand why they are so concerned about the consequences of weaponizing this technology. To make that case effectively, AI researchers need to be part of a constructive dialogue.

What do you make of the recent pledge against autonomous weapons, organized by the Future of Life Institute?

It is not the first call to action by AI scientists; it builds on prior open letters on autonomous weapons from 2015 and 2017. But these letters are symbolic gestures, and they probably have diminishing returns. Countries have been discussing autonomous weapons at the United Nations since 2014, and pressure from AI scientists adds an important dimension to that conversation, but it has yet to sway major military powers to support a comprehensive ban. It would be more impactful to have more AI researchers attend the UN meetings and help policymakers understand why AI scientists are so concerned.

What about Google’s decision not to renew its contract with the Pentagon?

It was a bit surprising because Maven didn’t actually involve autonomous weapons or targeting and appeared to be compliant with Google’s recently released AI principles. But the competition for top AI talent is fierce, and I suspect Google couldn’t risk some of its best engineers resigning in protest.

Do you think such gestures will help slow down the development of autonomous weapons?

When it comes to Maven, Google wasn't even involved in building human-controlled weapons, much less autonomous ones, so there isn't a direct connection there. The pledge letter is, of course, aimed directly at autonomous weapons. But I don't think either is likely to have a major effect on how militaries incorporate AI and autonomy into their weapons, since weapons are likely to be built by defense contractors. If major tech companies like Google opt out of working with militaries, that could slow the incorporation of AI technology into vital support functions like data analysis, which is what Maven was doing. But eventually other companies will step in to fill the gap. Already, we've seen other companies say quite publicly that they want to work with the military.

Could these efforts have unintended consequences too?

Painting many legitimate uses of AI as unacceptable could drive a further wedge between the technical and policy communities and make reasonable discourse harder. Engineers absolutely should refrain from working on projects they cannot support, but when those personal motivations shift to pressuring others not to work on important and legitimate national security applications, they harm public safety and impinge on other engineers' right to follow their own conscience. Democratic countries will need to use AI technology for a variety of important and lawful national security purposes: intelligence, counterterrorism, border security, cybersecurity, and defense.

Is the US already in an AI weapons arms race with China?

China has publicly declared its intention to become the global leader in artificial intelligence by 2030 and is increasing its research and recruiting top talent from around the globe. China’s model of military-civil fusion also means that AI research will readily flow from tech firms into the military without the kind of barriers that some Google employees aim to erect in the United States. China has already begun to lay the foundations for an AI-empowered techno-surveillance state.

If AI researchers’ tactics only succeed in slowing the adoption of AI tools in open, democratic societies that value ethical behavior, their work could contribute to ushering in a future where the most powerful technology is in the hands of regimes who care the least about ethics and the rule of law.

In your book, you point out that defining autonomy can be tricky. Won’t this complicate the discussion over military uses of AI?

The authors of the recent pledge object to autonomous weapons that would kill a person but acknowledge that some kinds of autonomous systems would be needed to defend against other such weapons. The real challenge lies in the gray area: situations where autonomy might be needed to defend against such weapons but a person is still on board the target, as when engaging a fighter jet or a submarine. Balancing these competing objectives is not simple, and policymakers will face real choices as they adopt this technology.

AI engineers will be most impactful in shaping these choices if they engage in a constructive, continuous dialogue with policymakers, rather than opting out. AI researchers who care about how the technology is used will be more effective if they move beyond pressure campaigns and start helping to educate policymakers about AI technology today and its limitations.
