
Intel Outside as Other Companies Prosper from AI Chips

The world’s leading chip maker missed a huge opportunity in mobile devices. Now the rise of artificial intelligence gives the company another chance to prove itself.

Back in 1997, Andy Grove, then chief executive officer of Intel, became one of the first corporate titans to embrace the teachings of Harvard Business School professor Clayton Christensen. Sensing that Intel might be undercut by PC chip rivals with cheaper wares, Grove invited Christensen to speak to his team about industrial leaders of the past who had waited too long to address emerging threats. Within a few quarters, Intel had brought out a line of lower-end Celeron chips for PCs, which pretty much smashed the dreams of Intel wannabes such as Advanced Micro Devices. “Innovator’s dilemma” averted.


Intel is no longer a case study in adaptability. On the contrary, it has whiffed in the market for mobile chips used in smartphones and tablets, by far the largest new opportunity for chip makers in the past 10 years. On April 19, the same day it said it would cut 12,000 jobs, Intel scrapped development of some of its mobile Atom chips despite years of heavy investment. And for the past few years, the world’s largest chip maker has seemed indifferent to another potentially vast market: the one in chips designed for the artificial-intelligence technique known as deep learning.

This once-obscure corner of AI research has blossomed into one of tech’s hottest trends (see “10 Breakthrough Technologies,” May/June 2013). Large Internet companies are using it to roll out online services that understand images and speech, and deep-learning chips are being designed into drones, driverless cars, and other products in the much-ballyhooed “Internet of things.” That’s especially dangerous for Intel, because CEO Brian Krzanich has said that the company’s future depends on its performance in big data centers and the Internet of things.

Intel is only now introducing its first chip designed specifically for deep learning. It’s a new version of the Xeon Phi coprocessor, which works in tandem with Intel’s flagship x86 microprocessors. But even though the chip is well suited for many deep-learning jobs, the company that essentially monopolized the PC market with its “Intel Inside” strategy remains far behind in developing the programming tools that customers need with such chips. Smaller rival Nvidia has established early dominance by offering such tools, says Bryan Catanzaro, a senior researcher with Baidu, a big user of deep-learning hardware. When it builds these systems, Baidu packs in four times more chips from Nvidia than from Intel. “Intel can be a major player, but it’s a question of focus,” Catanzaro says. “They’re in the process of cutting back in a lot of areas, so you have to wonder if they have the institutional will.”

So far, the financial damage to Intel is minimal. Amazon, Google, and other cloud giants will buy just over $133 million worth of chips to run their deep-learning systems this year, according to Tractica, a market research firm. That’s a pittance next to Intel’s 2015 revenue of $56 billion. Rather than promise revolutionary innovations, Intel suggests that its current chips will suffice for many jobs and that it has the engineering prowess to create new chips as the market matures, says Catanzaro. And the company is determined not to focus on deep learning to the exclusion of other AI approaches. After all, Intel veterans have seen AI crazes take hold in the past; they fear that deep learning is not the panacea many make it out to be. “We’ve seen these cycles before,” says Nidhi Chappell, director of machine learning for Intel’s Data Center Group.

Intel cuts wafers like this into chips in the Xeon Phi family of products. The chips are designed to handle deep-learning tasks.

For Nvidia, however, deep learning is starting to generate revenue growth. The company’s first-quarter sales to big cloud companies jumped 63 percent. Based near Intel in Santa Clara, California, Nvidia used to sell its graphics processing units (GPUs) primarily to makers of PCs and game consoles. But it has taken a commanding lead in the nascent deep-learning market since big Internet companies discovered how well graphics chips could handle AI-related jobs. Now, Nvidia says, it is working with 3,500 customers in industries ranging from automotive to pharmaceuticals to financial services.

Nvidia isn’t the only company trying to cash in while Intel plays it cool. Qualcomm is introducing software tools to help customers use its mobile chips for deep learning. And startups such as Knupath and Nervana are coming up with even more radically redesigned deep-learning chips. Tractica projects this market will be worth $3.6 billion by 2024.

Knupath, which was started by former NASA chief Dan Goldin, announced an AI chip called Hermosa in June, along with software to link up 512,000 Hermosas and other chips. The first version will focus on recognizing unexpected voices in noisy environments—say, so you could sign into your bank using only your voice while driving in a convertible with the radio on. The company has raised $100 million in funding, on the assumption that existing chip architectures will not be able to satisfy future demand. “We are entering the very early stages of machine intelligence and machine learning. It’s like the Wild West,” says Goldin. “Some wildly crazy things are going to happen.”

Hole in the market

When the likes of Facebook, Google, and Microsoft teach software how to detect the content of images or identify speech, they build what are often called neural networks, in which enormous amounts of data are run through layers of simple, densely connected processing nodes. Eventually the machines can recognize patterns on their own and make judgments accordingly. In March, a Google neural network beat one of the world’s best players of the board game Go in four out of five contests.
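The mechanics are easier to sketch than the results suggest. The short Python script below is a toy illustration only, nothing like Google’s production systems: it trains a tiny two-layer network on the XOR pattern, a textbook problem that no single linear rule can capture. Every size and setting here is an arbitrary choice made for the sketch.

```python
import numpy as np

# Toy training set: the XOR pattern. A lone linear rule cannot
# separate these, but a small neural network learns them easily.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(2, 8))  # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))  # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: push the data through the connected layers.
    h = sigmoid(X @ W1 + b1)    # hidden-layer activations
    out = sigmoid(h @ W2 + b2)  # the network's current guesses

    # Backward pass: nudge every weight to shrink the error a little.
    err = out - y
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h
    b1 -= 0.5 * grad_h.sum(axis=0)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]
```

Systems like Google’s Go player follow the same recipe, only with millions of weights, far more layers, and vastly more data, which is why the hardware doing the arithmetic matters so much.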

In such applications, Intel’s x86 microprocessors usually do little more than digital housekeeping. While a top-of-the-line Intel processor packs more than enough punch to run sprawling financial spreadsheets or corporate operations software, chips optimized for deep learning break particular types of problems—such as understanding voice commands or recognizing images—into millions of bite-size chunks. Because GPUs like Nvidia’s consist of thousands of tiny processor cores crammed together on one slice of silicon, they can handle thousands of these chunks simultaneously. Assigning an Intel processor to such work would be a huge waste of resources, since each of these processors contains a few dozen cores that are designed to run complex algorithms. Deep-learning chips don’t need to do that much thinking to handle all those micro-tasks. Graphics-processor cores have the right amount of arithmetic muscle for a quick once-over to properly classify an image or other piece of data.
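To see why the work fans out so naturally, note that a single deep-learning layer boils down to thousands of small, independent dot products. The Python sketch below (with made-up sizes, using NumPy rather than any GPU library) computes one layer both ways: chunk by chunk, the way a lone general-purpose core would grind through it, and as one bulk operation of the kind a graphics chip spreads across its thousands of cores.

```python
import numpy as np

rng = np.random.default_rng(1)
batch = rng.random((64, 512))     # 64 inputs, 512 features each (illustrative sizes)
weights = rng.random((512, 256))  # a layer producing 256 outputs per input

# Chunk by chunk: 64 x 256 = 16,384 independent little dot products,
# queued up one after another on a single core.
out_slow = np.empty((64, 256))
for i in range(64):
    for j in range(256):
        out_slow[i, j] = batch[i] @ weights[:, j]

# In bulk: the same 16,384 chunks expressed as one matrix multiply,
# which the library fans out across whatever parallel hardware it has.
# A GPU takes the idea to its extreme, roughly one tiny core per chunk.
out_fast = batch @ weights

assert np.allclose(out_slow, out_fast)  # identical answers either way
```

The two versions produce identical numbers; the only difference is how many chunks are in flight at once, which is precisely the advantage Nvidia’s processors exploit.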

This Nvidia chip is meant for large Internet data centers and deep-learning applications.

Catanzaro, who helped launch Nvidia’s deep-learning assault before going to Baidu, is testing the Xeon Phi coprocessor and says it can handle some deep-learning tasks around 90 percent as effectively as graphics processors. But he’s skeptical. Not only has Intel not developed any of the software tools Nvidia offers to help customers refine and maintain neural networks, but also, he says, Intel must do a better job of getting its chips into the hands of the deep-learning luminaries pushing the field forward. So far, Intel has endeavored to sell the Xeon Phi in volume to big corporate buyers for well-understood applications, says Catanzaro. “I’m pulling for Intel,” he says. “It’s not good for anyone if Nvidia is the only viable alternative, so we need Intel in this market. But they have to start focusing.”

In May, Google surprised the AI world by announcing that it had been using a chip of its own creation, called the Tensor Processing Unit, for more than a year. Although Google has happily poured billions into “moon shot” projects such as driverless cars, this was the first time it had delved into the expensive, difficult chip business. Why bother? It was the only way to “push our machine-learning-powered applications forward,” Norm Jouppi, a distinguished hardware engineer at Google, wrote in an e-mail. While Google will continue using Intel processors in its computing infrastructure, he said, “we needed more than what was available in the market.”

Feeling the heat

Intel has also been quiet in another promising corner of the deep-learning market: the one for chips that embed the wisdom learned by neural networks inside phones, cars, and other devices we want to make smarter. DJI, the world’s largest drone maker, included a “visual processing unit” made by Movidius in its new Phantom 4 model. The chip processes what the Phantom’s cameras see, enabling the craft to avoid crashes a human pilot may not be skilled enough to head off from the ground. It’s designed to use very little battery power—again, not Intel’s specialty.

These chips could prove far less profitable than the processors that made Intel a household name, but the volumes could be too large to resist should the components become standard in smarter MRI machines, manufacturing robots, and surveillance cameras, says Jim McGregor, founder of Tirias Research, a chip-industry research firm. Most tantalizing is the market for self-driving cars, which could reach tens of millions of units a year. If each vehicle has many of these chips, this market alone could rival the size of the PC market.

Intel’s Chappell doesn’t dismiss such projections, but she says Intel’s opportunity lies in taking a broader, pragmatic view of the market. AI researchers’ most pressing challenge is to create ways to train neural networks much faster—say, in an afternoon rather than over the course of a few weeks. The new Xeon Phi chip will help solve this problem, she says, in part because researchers can use it to design a training system on their own computers and keep using it as they expand to larger networks of servers and, eventually, to massive scale in the cloud.

In the longer term, Intel could build chips that work in everything from those training systems to low-power devices in the Internet of things, says Chappell. In that scenario, graphics processors and other specialized deep-learning chips would be at a disadvantage relative to general-purpose, jack-of-all-trades microprocessors. Thanks to Intel’s engineering talent and manufacturing capabilities, the company may be able to stuff deep-learning circuitry into future processors at little incremental cost. If Intel can create a common set of software tools for managing everything from neural networks to drones, it could make deep learning accessible to far more companies—and give Intel a strategic lock on their business.

These are the tricks that helped Intel monopolize the PC industry. Even now, few are willing to count the company out. “The last time I checked, they had $15 billion in the bank, and they are not stupid people,” says Remi El-Ouazzane, Movidius’s CEO. “But at this point at least, we’re not feeling the heat.”
