AI arms control may not be possible, warns Henry Kissinger
Henry Kissinger, former US secretary of state and a controversial giant of American foreign policy, believes it may be a lot harder to control the development of AI weapons than nuclear ones.
Warning shots: Speaking at an event at MIT yesterday, Kissinger warned that new developments in AI will bring a range of dangers. In particular, he worries that AI weapons could be harder to control than nukes because development of the technology will happen in secret: “With AI, the other side’s ignorance is one of your best weapons—sharing will be much more difficult.”
Peace movement: He isn’t the only person worried about an AI arms race. A significant number of academics, industry researchers, and tech luminaries have backed a campaign to ban the use of autonomous weapons.
Arms dealing: The issue is challenging, however. As security researcher Paul Scharre writes in a recent book, Army of None, autonomy has been creeping into weapons systems for decades. It isn’t always easy to draw a line around autonomous systems, and the technology is alluring because it can be used to make weapons more reliable.
Dumbing down: Kissinger has been something of an AI naysayer lately. After learning about AlphaGo, he wrote an article in The Atlantic warning that the technology could alter the nature of human knowledge and discovery in ways that ultimately harm humanity (“How the Enlightenment Ends”).
“By mastering certain competencies more rapidly and definitively than humans, [AI] could over time diminish human competence and the human condition itself as it turns it into data,” Kissinger wrote. “Philosophically, intellectually—in every way—human society is unprepared for the rise of artificial intelligence.”