For all the hand-wringing over the potential dangers of super-intelligent AI, there’s been little practical effort to address the issue. Now, some big-name entrepreneurs have created a billion-dollar nonprofit, called OpenAI, which will dedicate itself to building artificial intelligence that won’t leave humans behind. Here’s how the website for OpenAI describes the effort:
“Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. […] We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely.”
The backers include Elon Musk, Sam Altman of Y Combinator, Reid Hoffman of LinkedIn, and Peter Thiel. OpenAI also includes some prominent engineers, such as Ilya Sutskever, a wunderkind deep-learning expert at Google (and one of our Innovators of the Year for 2015).
The announcement coincides with the biggest technical conference focused on AI, the Neural Information Processing Systems (NIPS) meeting, held this week in Montreal. I spent the week there, and noticed that many AI researchers are starting to think about the long-term implications of AI. There was a symposium dedicated to the ethical issues—from unemployment to the long-term existence of the human race.
However, this contrasted with most of the technical content of the meeting, which consisted of novel mathematical approaches and algorithms for improving the latest machine learning methods. Hardly the kind of thing to make you worry about the future of our species.
Undoubtedly, AI has made some spectacular progress in recent years, especially thanks to deep learning. But while this method has resulted in amazing progress in perceptual tasks such as image and voice recognition, it seems likely that much more will be needed to achieve even toddler-like levels of intelligence.
Still, with machine learning becoming increasingly integral to everyday life, it isn’t a bad time to talk about the implications of this technology. It’ll certainly be interesting to see how the OpenAI effort develops.