Big names in AI vow not to build autonomous weapons
Elon Musk, the founders of DeepMind, and other AI luminaries have signed a pledge promising not to develop “lethal autonomous weapons.” It’s the latest effort to draw attention to the moral risks raised by AI weapons, but prohibiting the technology may ultimately prove challenging.
Big shots: The letter was signed by Musk; DeepMind’s Demis Hassabis, Shane Legg, and Mustafa Suleyman; Skype founder Jaan Tallinn; and the well-known AI researchers Stuart Russell, Yoshua Bengio, and Jürgen Schmidhuber.
Peace movement: Tech companies are being forced to examine military uses of their technology. Employee outrage recently prompted Google to promise that it wouldn’t let its AI be used to make weapons. Other companies are facing similar pressure.
Arms race: In practice, it may prove tricky to prohibit autonomous weapons. A few fully autonomous weapon systems are already available, and many others have some degree of partial autonomy. The underlying technology is also already widely available, and many companies are eager to fulfill lucrative military contracts.
Deep Dive
Policy
Is there anything more fascinating than a hidden world?
Some hidden worlds, whether in space, deep in the ocean, or in the form of waves or microbes, remain stubbornly unseen. Here's how technology is being used to reveal them.
What Luddites can teach us about resisting an automated future
Opposing technology isn’t antithetical to progress.
Africa’s push to regulate AI starts now
AI is expanding across the continent and new policies are taking shape. But poor digital infrastructure and regulatory bottlenecks could slow adoption.
Yes, remote learning can work for preschoolers
The largest-ever humanitarian intervention in early childhood education shows that remote learning can produce results comparable to a year of in-person teaching.