Ethical Tech

Elon Musk, the founders of DeepMind, and other AI luminaries have signed a pledge promising that they will not develop “lethal autonomous weapons.” It’s the latest effort to draw attention to the moral risks raised by AI weapons, but actually prohibiting the technology may prove challenging.

Big shots: The letter was signed by Musk; DeepMind’s Demis Hassabis, Shane Legg, and Mustafa Suleyman; Skype founder Jaan Tallinn; and the well-known AI researchers Stuart Russell, Yoshua Bengio, and Jürgen Schmidhuber.

Peace movement: Tech companies are increasingly being forced to reckon with military uses of their technology. Employee outrage recently prompted Google to promise that it wouldn’t let its AI be used to make weapons, and other companies face similar pressure from their own workers.

Arms race: In practice, a ban may be hard to enforce. A few fully autonomous weapon systems are already available, and many more have some degree of partial autonomy. The underlying technology is also widely accessible, and many companies are eager to win lucrative military contracts.