From automation’s erosion of jobs to killer robots, there are plenty of thorny social AI issues to chew on. Google’s machine learning division, DeepMind, has now decided to try to head off some of the most contentious problems facing AI by establishing its own ethics and society research team.
In a blog post announcing the news, it says the new unit will “help technologists put ethics into practice, and to help society anticipate and direct the impact of AI so that it works for the benefit of all.” It will focus on six areas: privacy, transparency and fairness; inclusion and equality; governance and accountability; misuse and unintended consequences; AI morality and values; and AI and the world’s complex challenges.
They’re broad topics indeed—though DeepMind does give examples of some open questions that it will try to answer. They include challenges surrounding Elon Musk’s dreaded weaponized robots, AI’s impact on the labor market, and the troubling problem of building biased machines.
According to Wired UK, the team currently comprises eight internal staff members and six external fellows, with the team expected to grow to 25 members within a year. But DeepMind cofounder Mustafa Suleyman tells the magazine that it’s going to “be collaborating with all kinds of think tanks and academics,” adding that he thinks that it’s “exciting to be a company that is putting sensitive issues, proactively, up-front, on the table, for public discussion.”
The new team is far from the first effort to investigate the societal threats of AI. A similar research center already exists at Carnegie Mellon University. And DeepMind is actually already a part of an industry-wide effort known as the Partnership on Artificial Intelligence to Benefit People and Society which intends to, well, work out how artificial intelligence can benefit people and society.
But that partnership might not be moving fast or far enough for DeepMind, if its aspirations are anything to go by. “We want these [AI] systems in production to be our highest collective selves,” says Suleyman to Wired UK. “We want them to be most respectful of human rights, we want them to be most respectful of all the equality and civil rights laws that have been so valiantly fought for over the last 60 years.”