Hardware design, rather than algorithms, will help us achieve the next big breakthrough in AI. That’s according to Bill Dally, Nvidia’s chief scientist, who took the stage Tuesday at EmTech Digital, MIT Technology Review’s AI conference. “Our current revolution in deep learning has been enabled by hardware,” he said.
As evidence, he pointed to the history of the field: many of the algorithms we use today have been around since the 1980s, and the breakthrough of using large quantities of labeled data to train neural networks came during the early 2000s. But it wasn’t until the early 2010s—when graphics processing units, or GPUs, entered the picture—that the deep-learning revolution truly took off.
“We have to continue to provide more capable hardware, or progress in AI will really slow down,” Dally said.
Nvidia is now exploring three main paths forward: developing more specialized chips; reducing the computation required during deep learning; and experimenting with analog rather than digital chip architectures.
Nvidia has found that highly specialized chips designed for a specific computational task can outperform GPU chips that are good at handling many different kinds of computation. The difference, Dally said, could be as much as a 20% increase in efficiency for the same level of performance.
Dally also referenced a study that Nvidia did to test the potential of “pruning”—the idea that you can reduce the number of calculations that must be performed during training, without sacrificing a deep-learning model’s accuracy. Researchers at the company found they were able to skip around 90% of those calculations while retaining the same learning accuracy. This means the same learning tasks can take place using much smaller chip architectures.
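The article doesn't describe Nvidia's exact method, but the idea can be illustrated with a minimal magnitude-pruning sketch in NumPy. Everything here is hypothetical for illustration — the layer shape, the magnitude criterion, and the 90% threshold merely echo the figure quoted above; real pruning is applied to trained networks, where small weights contribute little and accuracy is preserved.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical fully connected layer: dense weight matrix and one input.
W = rng.standard_normal((256, 256))
x = rng.standard_normal(256)

# Magnitude pruning: zero out the ~90% of weights with the smallest
# absolute values, keeping only the largest ~10%.
threshold = np.quantile(np.abs(W), 0.90)
mask = np.abs(W) >= threshold
W_pruned = W * mask

sparsity = 1.0 - mask.mean()
print(f"fraction of weights removed: {sparsity:.2f}")  # ~0.90

# On sparse hardware, the pruned layer skips ~90% of its
# multiply-accumulate operations. With these random (untrained)
# weights the outputs merely stay correlated; in a trained network,
# retraining after pruning is what recovers full accuracy.
dense_out = W @ x
sparse_out = W_pruned @ x
corr = np.corrcoef(dense_out, sparse_out)[0, 1]
print(f"output correlation with dense layer: {corr:.2f}")
```

The point of the sketch is the hardware implication: once most entries of `W_pruned` are zero, a chip with sparsity support never needs to fetch or multiply them, which is why the same task can run on a much smaller architecture.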
Finally, Dally mentioned that Nvidia is now experimenting with analog computation. Digital computers store almost all information, including numbers, as strings of 0s and 1s. Analog computation would instead encode intermediate values—such as 0.3 or 0.7—directly in a physical quantity. That could unlock much more efficient computation, because numbers can be represented more succinctly, though Dally said his team isn't yet sure how analog will fit into the future of chip design.
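To make the encoding contrast concrete, here is a small illustration (my own, not from Nvidia): a b-bit digital word can only approximate a value like 0.7 by snapping it to the nearest of 2^b discrete levels, whereas an ideal analog cell would hold 0.7 directly as a single voltage or charge.

```python
def quantize(x, bits):
    """Round x in [0, 1] to the nearest of 2**bits evenly spaced levels."""
    levels = (1 << bits) - 1
    return round(x * levels) / levels

# More bits shrink the error, but every extra bit of precision costs
# additional digital circuitry; an analog cell sidesteps the trade-off.
for bits in (2, 4, 8, 16):
    q = quantize(0.7, bits)
    print(f"{bits:2d} bits -> {q:.6f} (error {abs(q - 0.7):.6f})")
```

In practice analog storage trades this succinctness for noise and drift, which is part of why, as Dally notes, its place in future chip designs remains unsettled.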
Naveen Rao, corporate vice president and general manager of the AI Products Group at Intel, also took the stage and likened the evolution of AI hardware to evolution in biology. Rats and humans, he said, diverged a few hundred million years ago in evolutionary terms. Yet despite humans' vastly greater capabilities, both species are built from the same fundamental computing units.
The same principle holds true when it comes to chip designs, Rao said. Any chip—whether specialized or flexible, digital or analog, optical or otherwise—is simply a substrate for encoding and manipulating information. But depending on how that substrate is designed, it could be the difference between the capabilities of a rat and a human.
Insects, like rats, he said, are built from the same fundamental units as humans, but insects have fixed architectures whereas humans have more flexible ones. Neither, he argued, is superior to the other; they simply evolved to suit different purposes. Insects could likely survive a nuclear war, while humans have far more sophisticated capabilities.
Again, those principles apply to chip design. As more smart devices come online, it won't always make sense to send their data to the cloud for processing by a deep-learning model. Instead, it may make sense to run a small, efficient deep-learning model on the device itself. This idea, known as "AI on the edge," could benefit from specialized, fixed chip architectures that maximize efficiency. Data centers that power "AI on the cloud," on the other hand, would run on fully flexible, programmable chip architectures to handle a much broader spectrum of learning tasks.
Rao noted that whatever chip designs Intel and Nvidia decide to pursue, the effect on the evolution of AI will be significant. Throughout history, individual civilizations evolved in very different ways because of the unique materials at their disposal. Likewise, the operations that Intel and Nvidia make easier through different chip designs will heavily influence the kinds of learning tasks the AI community will pursue.
“We’re in this rapid Precambrian explosion [for chip architectures] right now,” Rao said, “and not every solution is going to win.”