Tech policy

Don’t be AI-vil: Google says its algorithms will do no harm

Google has created a set of principles for its artificial-intelligence researchers to live by—and they prohibit weapons technology.
Google has created an artificial-intelligence code of ethics that prohibits the development of autonomous weapons. But the principles leave sufficient wiggle room for Google to benefit from lucrative defense deals down the line.

The announcement comes in the wake of significant internal protest over the use of Google’s AI technology by a Department of Defense initiative called the Algorithmic Warfare Cross-Functional Team. The goal of this venture, known internally as Project Maven, is to improve the accuracy of drone strikes, among other things.

Last month a dozen Google workers quit over the scandal, and many more signed an open letter of protest.

The uproar captures the fears many have about how technology might help automate warfare in the future. The situation is not simple, however.

Artificial intelligence could help make some weapons systems safer and less error prone. There are also many mundane applications of AI across the defense industry. Google doesn’t want to disavow this huge potential market for its cloud AI technology.

Google’s CEO, Sundar Pichai, announced the new code in a blog post today. It suggests seven principles for guiding Google’s use of AI, stating that it should benefit society; avoid algorithmic bias; respect privacy; be tested for safety; be accountable to the public; maintain scientific rigor; and be made available to others in accordance with the same principles.

But Pichai also took pains to state that Google would not allow its AI technology to be used to develop anything that could cause harm, including “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”

Pichai said Google would also avoid developing surveillance technology that violates internationally accepted norms on human rights, or any technology that contravenes international laws.

Artificial intelligence is developing quickly, and Google has run into other problems involving AI projects. One of its computer vision systems, for example, repeatedly misidentified people of African heritage as gorillas. The company abandoned its “Don’t be evil” motto this April, but it retains an idealistic culture.

Military uses of artificial intelligence could be increasingly contentious as the technology is adopted in new ways and companies seek to sell their cloud AI technology as widely as possible.

Machine learning and artificial intelligence will inevitably become more important for intelligence and defense work. Other US tech companies, including Amazon and Microsoft, have bid on a multibillion-dollar cloud computing project with the Pentagon.

