Elon Musk, the founders of DeepMind, and other AI luminaries have signed a pledge promising not to develop “lethal autonomous weapons.” It’s the latest effort to draw attention to the moral risks raised by AI weapons, but prohibiting the technology may ultimately prove challenging.
Big shots: The letter was signed by Musk; DeepMind’s Demis Hassabis, Shane Legg, and Mustafa Suleyman; Skype founder Jaan Tallinn; and the well-known AI researchers Stuart Russell, Yoshua Bengio, and Jürgen Schmidhuber.
Peace movement: Tech companies are increasingly being forced to examine military uses of their technology. Employee outrage recently prompted Google to promise that it wouldn’t let its AI be used to make weapons, and other companies face similar pressure.
Arms race: In practice, it may prove difficult to prohibit autonomous weapons. A few fully autonomous weapon systems are already available, many others have some degree of partial autonomy, the underlying technology is widely accessible, and many companies are eager to win lucrative military contracts.