Tech firms and universities interested in building AI-powered weapons for lucrative military contracts are, predictably, facing some significant pushback.
The news: A letter circulating at Google, signed by thousands of employees, protests the company’s involvement in creating technology for the Defense Department, according to the New York Times. The letter comes after Gizmodo and the Intercept both revealed that Google was giving the Pentagon special access to its machine-learning software to help analyze images from drones.
Also: A group of 50 leading AI researchers is boycotting Korean university KAIST over its plan to develop autonomous weapons, reports the Verge. The university announced in February that it was launching a joint research center with South Korean defense company Hanwha Systems.
Why it matters: Autonomous weapons could easily become a reality, but determining whether using them is a good idea is fraught with tricky ethical arguments on both sides. A campaign to ban autonomous weapons is under way, but so far it has had little success in halting their development.