MIT Technology Review

DeepMind’s Cofounder Thinks AI Should Get Ethical in 2018

Mustafa Suleyman, who cofounded Google’s deep-learning subsidiary, wants the artificial-intelligence community to focus on ethics in 2018.

His argument: Writing in Wired UK, Suleyman explains that machine learning has the potential to reduce or worsen inequality in the world. To make sure it ends up being a net positive, he says, research into AI ethics needs to be prioritized.


What’s been done: This isn’t a new concern for Suleyman. DeepMind established its own ethics and society research team earlier this year to work on these sorts of issues. Other industry groups, like AI Now and the Partnership on AI, are looking into these questions too.

What’s left to do: A lot. Suleyman writes that we still have to figure out “what ethical AI really means,” which is why his ethics and society research team has broad topics to consider, like “transparency” and “inclusion.” We’ll be lucky to get a definition of ethical AI in 2018—let alone a solution.
