DeepMind’s Cofounder Thinks AI Should Get Ethical in 2018
Mustafa Suleyman, who cofounded Google's deep-learning subsidiary, wants the artificial-intelligence community to focus on ethics in 2018.
His argument: Writing in Wired UK, Suleyman explains that machine learning has the potential to either reduce or worsen inequality in the world. To make sure it ends up being a net positive, he says, research into AI ethics needs to be prioritized.
What's been done: This isn't a new concern for Suleyman. DeepMind established its own ethics and society research team earlier this year to work on these sorts of issues, and industry groups like AI Now and the Partnership on AI are looking into them too.
What's left to do: A lot. Suleyman writes that we still have to figure out "what ethical AI really means," which is why his ethics and society research team has broad topics to consider, like "transparency" and "inclusion." We'll be lucky to get a definition of ethical AI in 2018—let alone a solution.