
Tech Titans Join Forces to Stop AI from Behaving Badly

The new partnership is also designed to head off unwanted regulation.
September 28, 2016

When it comes to policing artificial intelligence, technology leaders think there is safety in numbers.

A new organization called the Partnership on Artificial Intelligence to Benefit People and Society will seek to foster public dialogue and create guidelines for developing AI so that systems do not misbehave. The companies involved include Google and its subsidiary DeepMind, Facebook, Amazon, Microsoft, and IBM. The partnership is founded on eight tenets, including the ideas that AI should benefit as many people as possible, that the public should be involved in its development, that research should be conducted in an open way, and that AI systems should be able to explain their reasoning.

That such fierce rivals would come together in this way shows how important the companies feel it is to head off public concern and speculation over the potential impacts of AI. These businesses are all reaping huge rewards from advances in AI, and they do not wish to see their industry subject to strict government regulation, which could slow or alter the technology’s progress at a critical moment in its evolution. But at the same time, rapid recent advances have raised concerns about the potential for AI systems to discriminate, disadvantage, and displace people.

“We all share a duty to take the field forward in a thoughtful and positive and, importantly, ethical way,” says Mustafa Suleyman, cofounder of Google DeepMind and interim co-chair of the organization. “The positive impacts of AI will depend not only on the quality of our algorithms, but on the level of public engagement, of transparency, and ethical discussion that takes place around it.”

AI is advancing at such a breakneck speed that there are legitimate concerns about it being deployed in ways that have unintended or unwanted effects. For instance, a machine-learning system designed to identify disease that is fed biased data might discriminate against certain people. The companies involved in the partnership recognize that a backlash could emerge in response to such effects.

“AI is an incredibly important technology for Amazon,” says Ralf Herbrich, director of machine learning for the e-commerce giant. “[And] the biggest asset in the customer experience is customer trust.”

It’s unclear precisely how the organization will engage with the public or those developing AI. It will not seek to enforce its guidelines so much as to lead an open discussion, according to Suleyman.

There are also legitimate concerns over the potential for AI and related technologies such as robotics to displace people and increase inequality (see “How Technology Is Destroying Jobs”). But recent progress has led to some more outlandish and futuristic warnings about the potential for AI to pose an existential threat to humanity (see “Our Fear of Artificial Intelligence”). One concern among technologists is that these warnings could stoke unwarranted public outcry and inspire unnecessary government regulations.

“With all the hyperbole about AI over the last two to four years, there’s been concern that in that echo chamber of anxiety the government itself will be misinformed,” says Eric Horvitz, managing director of Microsoft Research and the organization’s other co-chair.

The U.S. government has signaled an interest in ensuring that artificial intelligence does not have unwelcome consequences, coordinating a series of workshops this year aimed at exploring its potential effects.

Some ethical conundrums involving AI will be more complex to resolve than others. For instance, it may prove challenging to devise systems that take differing ethical perspectives into account. This issue has been raised, in a largely theoretical way, concerning the behavior of self-driving cars. How to make AI systems more transparent and accountable is also an open question, and one that may itself require progress in the technology (see “AI’s Language Problem”).

The idea for the partnership emerged at an event coordinated in New York this January by Facebook to discuss ethical issues surrounding AI. Other companies and organizations involved in developing AI, such as Apple and the Allen Institute for Artificial Intelligence, have discussed becoming involved in the effort in some capacity, the founding members say.

Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, says it will be important for the organization to broach the issue of how AI may take away jobs. He adds that the biggest challenge may be finding a way for competitors to work together effectively. “These are some of the leading high-tech companies in the world,” he says. “They will need to work together here.”
