A Billion-Dollar Effort to Make a Kinder AI

A new nonprofit will aim to make artificial intelligence that “benefits humanity.”
December 12, 2015

For all the hand-wringing over the potential dangers of super-intelligent AI, there’s been little practical effort to address the issue. Now, some big-name entrepreneurs have created a billion-dollar nonprofit, called OpenAI, which will dedicate itself to building artificial intelligence that won’t leave humans behind. Here’s how the website for OpenAI describes the effort:

“Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. […] We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely.”

The effort is backed by some major entrepreneurs, including Elon Musk, Sam Altman of Y Combinator, Reid Hoffman of LinkedIn, and Peter Thiel. OpenAI also includes some prominent engineers, such as Ilya Sutskever, a wunderkind deep-learning expert at Google (and one of our Innovators of the Year for 2015).

The announcement coincides with the biggest technical conference focused on AI, the Neural Information Processing Systems (NIPS) meeting, held this week in Montreal. I spent the week there, and noticed that many AI researchers are starting to think about the long-term implications of AI. There was a symposium dedicated to the ethical issues—from unemployment to the long-term existence of the human race.

However, this contrasted with most of the technical content of the meeting, which consisted of novel mathematical approaches and algorithms for improving the latest machine learning methods. Hardly the kind of thing to make you worry about the future of our species.

Undoubtedly, AI has made some spectacular progress in recent years, especially thanks to deep learning. But while this method has resulted in amazing progress in perceptual tasks such as image and voice recognition, it seems likely that much more will be needed to achieve even toddler-like levels of intelligence. 

Still, with machine learning becoming increasingly integral to everyday life, it isn’t a bad time to talk about the implications of this technology. It’ll certainly be interesting to see how the OpenAI effort develops.

