
AI arms control may not be possible, warns Henry Kissinger

March 1, 2019

Henry Kissinger, former US secretary of state and a controversial giant of American foreign policy, believes it may be a lot harder to control the development of AI weapons than nuclear ones.

Warning shots: Speaking at an event at MIT yesterday, Kissinger warned that new developments in AI will bring all sorts of dangers. In particular, he worries that AI weapons could be harder to control than nukes because development of the technology will happen in secret: “With AI, the other side’s ignorance is one of your best weapons—sharing will be much more difficult.”

Peace movement: He isn’t the only person worried about an AI arms race. A significant number of academics, industry researchers, and tech luminaries have backed a campaign to ban the use of autonomous weapons.

Arms dealing: The issue is challenging, however. As security researcher Paul Scharre writes in a recent book, Army of None, autonomy has been creeping into weapons systems for decades. It isn’t always easy to draw a line around autonomous systems, and the technology is alluring because it can be used to make weapons more reliable.

Dumbing down: Kissinger has been something of an AI naysayer lately. After learning about AlphaGo, he wrote an article in the Atlantic warning that the technology could alter the nature of human knowledge and discovery in ways that ultimately harm humanity (“How the Enlightenment Ends”).

“By mastering certain competencies more rapidly and definitively than humans, [AI] could over time diminish human competence and the human condition itself as it turns it into data,” Kissinger wrote. “Philosophically, intellectually—in every way—human society is unprepared for the rise of artificial intelligence.”
