DeepMind’s New Ethics Team Wants to Solve AI’s Problems Before They Happen
From automation’s erosion of jobs to killer robots, there are plenty of thorny social AI issues to chew on. Google’s machine learning division, DeepMind, has now decided to try to head off some of the most contentious problems facing AI by establishing its own ethics and society research team.
In a blog post announcing the news, DeepMind says the new unit will “help technologists put ethics into practice, and to help society anticipate and direct the impact of AI so that it works for the benefit of all.” It will focus on six areas: privacy, transparency, and fairness; inclusion and equality; governance and accountability; misuse and unintended consequences; AI morality and values; and AI and the world’s complex challenges.
They’re broad topics indeed—though DeepMind does give examples of some open questions that it will try to answer. They include challenges surrounding Elon Musk’s dreaded weaponized robots, AI’s impact on the labor market, and the troubling problem of building biased machines.
According to Wired UK, the team currently comprises eight internal staff members and six external fellows, with the team expected to grow to 25 members within a year. But DeepMind cofounder Mustafa Suleyman tells the magazine that it’s going to “be collaborating with all kinds of think tanks and academics,” adding that he thinks that it’s “exciting to be a company that is putting sensitive issues, proactively, up-front, on the table, for public discussion.”
The new team is far from the first effort to investigate the societal threats of AI. A similar research center already exists at Carnegie Mellon University. And DeepMind is actually already a part of an industry-wide effort known as the Partnership on Artificial Intelligence to Benefit People and Society, which intends to, well, work out how artificial intelligence can benefit people and society.
But that partnership might not be moving fast or far enough for DeepMind, if its aspirations are anything to go by. “We want these [AI] systems in production to be our highest collective selves,” Suleyman tells Wired UK. “We want them to be most respectful of human rights, we want them to be most respectful of all the equality and civil rights laws that have been so valiantly fought for over the last 60 years.”