The Man Selling Shovels in the Machine-Learning Gold Rush

Nvidia’s CEO says his hardware will revolutionize robotics and that his chips can learn from Google’s AlphaGo.

Jen-Hsun Huang, CEO of the chipmaker Nvidia, is either very prescient or very lucky. His company was built around graphics processing units (GPUs) for video games. But those same chips are now widely used in artificial-intelligence projects such as efforts to build self-driving cars.

Nvidia’s chips turned out to be especially efficient for training the neural networks used in a technique called deep learning, which has recently made software much smarter and caused tech giants and investors to pile money into machine-learning research. This week the company announced a new chip designed specifically for the task (see “A $2 Billion Chip to Accelerate Artificial Intelligence”). Huang spoke with Will Knight, MIT Technology Review’s senior editor for AI and robotics, at the company’s annual technology conference in San Jose.

What do you expect will be the next big market for your hardware?

I think robotics is going to be huge. The reason we chose [to make a chip for] self-driving cars is that it’s the easiest robotics challenge. Deep learning has finally given us an algorithm that allows a robot to learn for itself, from high-level goals, and to discover solutions through iteration. I don’t think it’s possible to teach a robot that by writing programs.

Deep learning has certainly been successful, but it’s only a very approximate simulation of what goes on in the brain. Are you interested in developing hardware that works more like the underpinnings of biological intelligence?

We’re trying to build a better plane rather than figure out how a bird works. Some people describe it as neurons, but the analogy to the brain is very loose. To us it’s a whole bunch of mathematics that extracts the important features out of images, voice, or sensor data. Any analogy to the brain isn’t necessarily that important.

Google DeepMind’s software AlphaGo recently defeated the world’s top Go player. Will cutting-edge AI research like that shape future hardware?

We work very closely with the DeepMind guys, and there is no question AlphaGo was a milestone in human endeavor. It’s amazing that a machine could learn the deep intuition needed to play. I’d love to see us advance these new ideas, whether it’s memory, reinforcement learning, transfer learning, or unsupervised learning. All of these areas of research will dramatically expand the capabilities of this tool called deep learning. As soon as I learn the challenges of today’s architectures, I can put those ideas into the next architecture.
