From automation’s erosion of jobs to killer robots, there are plenty of thorny social AI issues to chew on. Google’s machine learning division, DeepMind, has now decided to try to head off some of the most contentious problems facing AI by establishing its own ethics and society research team.
In a blog post announcing the news, it says the new unit will “help technologists put ethics into practice, and to help society anticipate and direct the impact of AI so that it works for the benefit of all.” It will focus on six areas: privacy, transparency and fairness; inclusion and equality; governance and accountability; misuse and unintended consequences; AI morality and values; and AI and the world’s complex challenges.
They’re broad topics indeed—though DeepMind does give examples of some open questions that it will try to answer. They include challenges surrounding Elon Musk’s dreaded weaponized robots, AI’s impact on the labor market, and the troubling problem of building biased machines.
According to Wired UK, the team currently comprises eight internal staff members and six external fellows, with the team expected to grow to 25 members within a year. But DeepMind cofounder Mustafa Suleyman tells the magazine that it’s going to “be collaborating with all kinds of think tanks and academics,” adding that he thinks that it’s “exciting to be a company that is putting sensitive issues, proactively, up-front, on the table, for public discussion.”
The new team is far from the first effort to investigate the societal threats of AI. A similar research center already exists at Carnegie Mellon University. And DeepMind is already part of an industry-wide effort known as the Partnership on Artificial Intelligence to Benefit People and Society, which intends to, well, work out how artificial intelligence can benefit people and society.
But that partnership might not be moving fast or far enough for DeepMind, if its aspirations are anything to go by. “We want these [AI] systems in production to be our highest collective selves,” says Suleyman to Wired UK. “We want them to be most respectful of human rights, we want them to be most respectful of all the equality and civil rights laws that have been so valiantly fought for over the last 60 years.”