DeepMind’s Cofounder Thinks AI Should Get Ethical in 2018
Mustafa Suleyman, who cofounded Google's deep-learning subsidiary, wants the artificial-intelligence community to focus on ethics in 2018.
His argument: Writing in Wired UK, Suleyman explains that machine learning has the potential to reduce or deepen inequality in the world. To make sure it ends up being a net positive, he says, research into AI ethics needs to be prioritized.
What's been done: This isn't a new concern for Suleyman. DeepMind established its own ethics and society research team earlier this year to work on these sorts of issues. And there are other industry groups, like AI Now and Partnership on AI, that are looking into it too.
What's left to do: A lot. Suleyman writes that we still have to figure out "what ethical AI really means," which is why his ethics and society research team has broad topics to consider, like "transparency" and "inclusion." We'll be lucky to get a definition of ethical AI in 2018—let alone a solution.