DeepMind’s New Ethics Team Wants to Solve AI’s Problems Before They Happen

October 4, 2017

From automation’s erosion of jobs to killer robots, there are plenty of thorny social issues surrounding AI to chew on. Google’s machine learning division, DeepMind, has now decided to try to head off some of the most contentious of these problems by establishing its own ethics and society research team.

In a blog post announcing the news, DeepMind says the new unit will “help technologists put ethics into practice, and to help society anticipate and direct the impact of AI so that it works for the benefit of all.” It will focus on six areas: privacy, transparency, and fairness; inclusion and equality; governance and accountability; misuse and unintended consequences; AI morality and values; and AI and the world’s complex challenges.

They’re broad topics indeed—though DeepMind does give examples of some open questions that it will try to answer. They include challenges surrounding Elon Musk’s dreaded weaponized robots, AI’s impact on the labor market, and the troubling problem of building biased machines.

According to Wired UK, the team currently comprises eight internal staff members and six external fellows, and is expected to grow to 25 members within a year. But DeepMind cofounder Mustafa Suleyman tells the magazine that it’s going to “be collaborating with all kinds of think tanks and academics,” adding that he thinks it’s “exciting to be a company that is putting sensitive issues, proactively, up-front, on the table, for public discussion.”

The new team is far from the first effort to investigate the societal threats of AI. A similar research center already exists at Carnegie Mellon University. And DeepMind is actually already part of an industry-wide effort known as the Partnership on Artificial Intelligence to Benefit People and Society, which intends to, well, work out how artificial intelligence can benefit people and society.

But that partnership might not be moving fast or far enough for DeepMind, if its aspirations are anything to go by. “We want these [AI] systems in production to be our highest collective selves,” Suleyman tells Wired UK. “We want them to be most respectful of human rights, we want them to be most respectful of all the equality and civil rights laws that have been so valiantly fought for over the last 60 years.”


Illustration by Rose Wong
