Artificial intelligence

Big organizations may like killer robots, but workers and researchers sure don’t

April 4, 2018

Tech firms and universities interested in building AI-powered weapons for lucrative military contracts are, predictably, facing some significant pushback.

The news: A letter circulating at Google, signed by thousands of employees, protests the company’s involvement in creating technology for the Defense Department, according to the New York Times. The letter comes after Gizmodo and the Intercept both revealed that Google was giving the Pentagon special access to its machine-learning software to help analyze images from drones.

Also: A group of 50 leading AI researchers is boycotting Korean university KAIST over its plan to develop autonomous weapons, reports the Verge. The university announced in February that it was launching a joint research center with South Korean defense company Hanwha Systems.

There’s more: These aren’t the only such efforts. Russian weapons maker Kalashnikov is developing combat robots, and China’s plans for armed autonomous submarines were leaked earlier this month, for instance.

Why it matters: Autonomous weapons could well become a reality, but determining whether using them is a good idea is fraught with tricky ethical arguments on both sides. There is a campaign to ban autonomous weapons, but so far it has had little success in halting their development.

