
Google appoints an “AI council” to head off controversy, but it proves controversial

A team that includes philosophers, engineers, and policy experts will determine how ethical Google’s AI projects are—but some have already criticized its makeup.
March 26, 2019

Developing and commercializing artificial intelligence has proved an ethical minefield for companies like Google. The company has seen its algorithms accused of perpetuating racial and gender bias and of fueling efforts to build autonomous weapons.

The search giant now hopes that a team of philosophers, engineers, and policy experts will help it navigate the moral hazards presented by artificial intelligence without press scandals, employee protests, or legal trouble.

Kent Walker, Google’s senior vice president for global affairs and chief legal officer, announced the creation of a new independent body to review the company’s AI practices at EmTech Digital, an AI conference in San Francisco organized by MIT Technology Review. 

Walker said that the group, known as the Advanced Technology External Advisory Council (ATEAC), will review the company’s projects and plans and produce reports to help determine whether any of them contravene Google’s own AI principles. The council will not have a set agenda, Walker said, and it will not have the power to veto projects itself. But he said the group’s reports “will help keep us honest.”

The first ATEAC will feature a philosopher, an economist, a public policy expert, and several researchers in data science, machine learning, and robotics. Several of those chosen actively research issues such as algorithmic bias. The full list is as follows: Alessandro Acquisti, Bubacarr Bah, De Kai, Dyan Gibbens, Joanna Bryson, Kay Coles James, Luciano Floridi, and William Joseph Burns.

But it is hard for tech companies to prove they are sincere about ethical concerns. The announcement has already provoked a backlash from some AI experts, who question the inclusion of Gibbens and James.

The former is the founder and CEO of a drone company, a choice that seems tone-deaf after Google faced an employee backlash and a storm of negative press for its involvement in Maven, a project to supply cloud AI to the US Air Force for analyzing drone imagery. That fallout prompted Google to announce its AI principles in the first place. The latter is the president of the conservative think tank the Heritage Foundation, who has pushed an anti-LGBTQ agenda and whose organization has spread misinformation about climate change, among other things.

The controversial announcement comes amid a series of scandals that Google and other big tech companies have faced over the development and use of artificial intelligence. For example, algorithms used for face recognition and for filtering job applicants have been shown to exhibit racial bias.

Walker said on stage that Google already vets its AI projects carefully. He noted that the company has chosen not to supply face recognition technology over fears it could be misused. In another instance, he said the company had chosen to release a lip-reading AI algorithm despite worries that it might be used for surveillance, because it was judged that the potential benefits outweighed the risks. 

At EmTech, Walker acknowledged that the council would need to consider emerging AI risks, and he identified misinformation and AI-powered video manipulation as particular concerns. “How do we detect this across our platforms? We are working very hard on this,” he said. “We are a search engine, not a truth engine.”
