
Intelligent Machines

This Supercomputer Will Try to Find Intelligence on Reddit

Researchers at OpenAI are developing algorithms capable of learning language by reading the Web, and of learning to control robots through practice.

Is it possible that the secret to building machine intelligence lies in spending endless hours reading Reddit?

That’s one question a team of researchers at OpenAI, a nonprofit backed by several Silicon Valley luminaries, hopes to answer with a new kind of supercomputer developed by chipmaker Nvidia. The researchers are also training robots to do the dishes through experimentation, and they are building algorithms capable of learning to play a wide variety of computer games.

The new machine, called a DGX-1, is optimized for the form of machine learning known as deep learning, which involves feeding data to a large network of crudely simulated neurons and has resulted in great strides in artificial intelligence in recent years. The DGX-1 will let AI researchers train deep-learning systems more quickly using more data. As a rough comparison, computations that would take 250 hours on a conventional computer take about 10 hours on the DGX-1.
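To make the technique concrete, here is a minimal sketch in Python of a "network of crudely simulated neurons": a tiny two-layer network learning the XOR function by gradient descent. The task, layer sizes, and learning rate are invented for illustration and have nothing to do with OpenAI's models; the point of hardware like the DGX-1 is to run this same kind of arithmetic at vastly larger scale.

    import numpy as np

    # A toy two-layer neural network trained on XOR by gradient descent.
    # All choices here (8 hidden units, learning rate 0.5) are illustrative.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(5000):
        h = sigmoid(X @ W1 + b1)       # hidden-layer activations
        out = sigmoid(h @ W2 + b2)     # network prediction
        # Backpropagation: push the prediction error back through the layers.
        grad_out = (out - y) * out * (1 - out)
        grad_h = (grad_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ grad_out
        b2 -= 0.5 * grad_out.sum(axis=0)
        W1 -= 0.5 * X.T @ grad_h
        b1 -= 0.5 * grad_h.sum(axis=0)

    print(out.round(2))  # converges toward [[0], [1], [1], [0]]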

Nvidia’s CEO, Jen-Hsun Huang, delivers the first DGX-1 to Elon Musk’s OpenAI.

At OpenAI, the performance boost achieved with the new hardware may be seen most immediately in language understanding. The OpenAI researchers are feeding message threads from the popular website Reddit to algorithms that build a probabilistic understanding of the conversation. If fed enough examples, the underlying language model will be good enough to hold a conversation itself, the researchers hope. And the hardware will make it possible to feed many more snippets of text into the model, and to apply more computing power to the problem.
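The "probabilistic understanding" the researchers are after can be illustrated with a toy model. The Python sketch below is a simple bigram counter: it tallies which word tends to follow which in a made-up scrap of conversation, then samples new text from those counts. OpenAI's actual models are neural networks trained on vastly more data, but the core idea of estimating next-word probabilities from examples is the same.

    import random
    from collections import Counter, defaultdict

    # Invented miniature "conversation" corpus, just for illustration.
    corpus = (
        "i think this is great . i think you are right . "
        "you are not wrong . this is not great ."
    ).split()

    # Count how often each word follows each other word.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def sample_next(word):
        # Draw the next word in proportion to how often it followed this one.
        words, weights = zip(*follows[word].items())
        return random.choices(words, weights)[0]

    word, line = "i", ["i"]
    for _ in range(8):
        word = sample_next(word)
        line.append(word)
    print(" ".join(line))  # e.g. "i think this is not great . i think"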

Andrej Karpathy, a research scientist at OpenAI, says that modern machine-learning techniques tend to become smarter as they get larger. “Deep learning is a very special class of models because as you scale up the models, they always work better,” Karpathy says in a video released by Nvidia today.

Language remains a very tricky problem for artificial intelligence, but in recent years researchers have made progress in applying deep learning to the problem (see “AI’s Language Problem”). Researchers at Google, for instance, fed movie dialogue to a deep-learning system originally designed to perform translation and then showed that it could answer some questions remarkably well.

OpenAI’s researchers also plan to explore whether a robot could learn to use language by interacting with people and the real world. This research is at an early stage, though, and it will be easier to scale up the work involving Reddit. Karpathy says the models trained on Reddit data could go from consuming months of conversation to years thanks to the new hardware.

Nvidia, which makes graphics processing units for gaming, has benefited from the deep-learning boom because its hardware is well suited to the parallel computations required. In recent years the company has sought to exploit this advantage, and the DGX-1, developed at a cost of about $2 billion, is essentially a bank of graphics chips optimized for deep learning. The chips can process data very quickly (at a peak of about 170 teraflops, or 170 trillion floating-point operations per second) and can share data with one another more easily.
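Those figures invite some back-of-the-envelope arithmetic, sketched below. Only the 170-teraflop peak and the 250-hour-versus-10-hour comparison come from the reporting; the assumption of sustained peak throughput is a deliberate simplification.

    # Rough arithmetic on the article's numbers (peak throughput assumed).
    dgx1_peak = 170e12          # 170 teraflops = 1.7e14 operations per second
    hours_conventional = 250
    hours_dgx1 = 10

    print(f"speedup: ~{hours_conventional / hours_dgx1:.0f}x")   # ~25x

    # At peak, a 10-hour DGX-1 run would churn through roughly:
    ops = dgx1_peak * hours_dgx1 * 3600
    print(f"~{ops:.1e} floating-point operations")               # ~6.1e+18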

Andrew Ng, chief scientist at the Chinese Internet company Baidu, has taken a close look at the DGX-1, which Baidu plans to use. “The capabilities it provides will allow us to try new ways of scaling our training process,” Ng says. “This will allow us to train models on larger data sets, which we have found leads to progress in AI.”

OpenAI is involved in a range of bleeding-edge AI research. Besides deep learning, its researchers are focused on developing algorithms capable of learning through extensive trial and error, a field of research known as reinforcement learning.
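The trial-and-error loop at the heart of reinforcement learning is simple to sketch. The toy Python example below runs tabular Q-learning, a classic reinforcement-learning algorithm, on an invented one-dimensional corridor where an agent must discover that walking right leads to a reward. It is a cartoon of the technique, not of OpenAI's robotics work.

    import random

    # Tabular Q-learning on a made-up corridor of 6 cells; reward at cell 5.
    N = 6
    ACTIONS = (-1, +1)                 # step left or step right
    Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
    alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration

    for episode in range(200):
        s = 0
        while s != N - 1:
            # Mostly act on current estimates, but sometimes explore at random.
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            s2 = min(max(s + a, 0), N - 1)
            r = 1.0 if s2 == N - 1 else 0.0
            # Nudge the estimate toward reward plus discounted future value.
            best_next = max(Q[(s2, act)] for act in ACTIONS)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2

    # After training, the greedy policy steps right (+1) from every cell.
    print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N - 1)])

After a few hundred episodes of blundering around, the reward alone has taught the agent which way to walk; that is the same learn-from-consequences principle that, scaled up enormously, researchers hope will work for household robots.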

OpenAI hopes to use reinforcement learning to build robots capable of performing useful chores around the home, although this may prove a time-consuming challenge (see “This Is the Robot Maid Elon Musk Is Funding” and “The Robot You Want Most Is Far from Reality”).

The researchers at OpenAI are also exploring ways for AI algorithms to learn far more efficiently by generating their own models, or theories, about what a data set means. An algorithm might learn to play a range of computer games, for example, by determining that collecting coins usually helps push the score up.
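A cartoon version of that coin example appears below: from an invented log of game events and score changes, a program estimates each event's average effect on the score and treats the result as its "theory" of the game. Every number and event name here is made up for illustration.

    from collections import defaultdict

    # Invented gameplay log: (event, score change) pairs.
    log = [
        ("coin", +10), ("step", 0), ("coin", +10), ("enemy", -5),
        ("step", 0), ("coin", +10), ("enemy", -5), ("step", 0),
    ]

    # Estimate each event's average effect on the score.
    totals, counts = defaultdict(float), defaultdict(int)
    for event, delta in log:
        totals[event] += delta
        counts[event] += 1
    theory = {e: totals[e] / counts[e] for e in totals}

    print(theory)  # {'coin': 10.0, 'step': 0.0, 'enemy': -5.0}

    # An agent holding this "theory" would seek out whatever triggers coins.
    print("seek out:", max(theory, key=theory.get))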

Ilya Sutskever, research director at OpenAI and a prominent figure in AI, says work in this area could ultimately lead to better algorithms that can learn more effectively. “Once all these improvements are made, it should be possible to build agents that can achieve more sophisticated goals using much less experience,” he says.

OpenAI was founded in 2015 with $1 billion in funding from tech industry VIPs including Elon Musk, CEO of Tesla and SpaceX, and Sam Altman, chairman of Y Combinator. The goal of the nonprofit is to do open AI research and help ensure that AI benefits humanity.

Jen-Hsun Huang, CEO of Nvidia, says the decision to give the first DGX-1 to OpenAI reflects a belief in OpenAI’s goals. “Nvidia’s strategy is to democratize AI,” Huang said in an interview. “We would like this technology, as powerful as it is, to move in a direction that is good for society.”
