More than 2,400 AI researchers recently signed a pledge promising not to build so-called autonomous weapons—systems that would decide on their own whom to kill. This follows Google’s decision not to renew a contract to supply the Pentagon with AI for analysis of drone footage after the company came under pressure from many employees opposed to its work on a project known as Maven.
Paul Scharre, the author of a new book, Army of None: Autonomous Weapons and the Future of War, believes that AI researchers need to do more than opt out if they want to bring about change.
A former Army Ranger who served in Iraq and Afghanistan, Scharre is now a senior fellow at the Center for a New American Security. He argues that AI experts should engage with policymakers and military professionals to explain why researchers are concerned and help them understand the limitations of AI systems.
Scharre spoke with MIT Technology Review senior editor Will Knight about the best way to halt a potentially dangerous AI arms race.
How keen is the US military to develop AI weapons?
US defense leaders have repeatedly stated that their intention is to keep a human “in the loop” and responsible for lethal-force decisions. Now, the caveat is they’ve also acknowledged that if other countries build autonomous weapons, then they may be forced to follow suit. And that’s the real risk—that if one country crosses this line then others may feel they have to respond in kind just to remain competitive.
Can these promises really be trusted, though?
I think senior US defense officials are sincere that they want humans to remain responsible for the use of lethal force. Military professionals certainly don’t want their weapons running amok. Having said that, it remains an open question how to translate a broad concept like human responsibility over lethal force into specific engineering guidance on what kinds of weapons are allowed. The definition of what constitutes an “autonomous weapon” is contested already, so there may be differing views on how to put those principles into practice.
Why do technologists need to be involved?
AI researchers must be a part of these conversations, as their technical expertise is vital to shaping policy choices. We need to take into account AI bias, transparency, explainability, safety, and other concerns. AI technology has these twin features today—it’s powerful but also has many vulnerabilities, much like computers and cyber risks. Unfortunately, governments seem to have gotten the first part of that message (AI is powerful) but not the second (it comes with risks). AI researchers can help governments and militaries better understand why they are so concerned about the consequences of weaponizing this technology. To make that case effectively, AI researchers need to be part of a constructive dialogue.
What do you make of the recent pledge against autonomous weapons, organized by the Future of Life Institute?
It’s not the first call to action by AI scientists; it builds on prior open letters on autonomous weapons in 2015 and 2017. But these letters are a symbolic gesture and probably have diminishing returns in their effectiveness. Countries have also been discussing autonomous weapons at the United Nations since 2014, and the pressure from AI scientists adds an important dimension to the conversation but has yet to sway major military powers to support a comprehensive ban. It would be more impactful to have more AI researchers attending the UN meetings and helping policymakers understand why AI scientists are so concerned.
What about Google’s decision not to renew its contract with the Pentagon?
It was a bit surprising because Maven didn’t actually involve autonomous weapons or targeting and appeared to be compliant with Google’s recently released AI principles. But the competition for top AI talent is fierce, and I suspect Google couldn’t risk some of its best engineers resigning in protest.
Do you think such gestures will help slow down the development of autonomous weapons?
When it comes to Maven, Google wasn’t involved in building even human-controlled weapons, much less autonomous weapons, so there isn’t a direct connection there. The pledge letter is of course directly aimed at autonomous weapons. But I don’t think either is likely to have a major effect on how militaries incorporate AI and autonomy into their weapons, since weapons are likely to be built by defense contractors. If major tech companies like Google opt out of working with militaries, then that could slow the incorporation of AI technology into vital support functions like data analysis, which Maven was doing. But eventually other companies will step in to fill the gap. Already, we’ve seen other companies say quite publicly that they want to work with the military.
Could these efforts have unintended consequences too?
Painting many legitimate uses of AI as unacceptable could further drive a wedge between the technical and policy communities and make reasonable discourse harder. Engineers absolutely should refrain from working on projects they cannot support, but when those personal motivations shift to pressuring others not to work on important and legitimate national security applications, they harm public safety and impinge on the rights of other engineers to follow their own conscience. Democratic countries will need to use AI technology for a variety of important and lawful national security purposes: intelligence, counterterrorism, border security, cybersecurity, and defense.
Is the US already in an AI weapons arms race with China?
China has publicly declared its intention to become the global leader in artificial intelligence by 2030 and is increasing its research and recruiting top talent from around the globe. China’s model of military-civil fusion also means that AI research will readily flow from tech firms into the military without the kind of barriers that some Google employees aim to erect in the United States. China has already begun to lay the foundations for an AI-empowered techno-surveillance state.
If AI researchers’ tactics only succeed in slowing the adoption of AI tools in open, democratic societies that value ethical behavior, their work could contribute to ushering in a future where the most powerful technology is in the hands of regimes who care the least about ethics and the rule of law.
In your book, you point out that defining autonomy can be tricky. Won’t this complicate the discussion over military uses of AI?
The authors of the recent pledge against autonomous weapons object to autonomous weapons that would kill a person but acknowledge that some kinds of autonomous systems would be needed to defend against other such weapons. The real challenge lies in the gray area where autonomy might be needed to defend against weaponry that still has a person on board, as when targeting a fighter jet or submarine. Balancing these competing objectives is not simple, and policymakers will face real choices as they adopt this technology.
AI engineers will be most impactful in shaping these choices if they engage in a constructive, continuous dialogue with policymakers, rather than opting out. AI researchers who care about how the technology is used will be more effective if they move beyond pressure campaigns and start helping to educate policymakers about AI technology today and its limitations.