
The Intel Lookout

Rethinking Corporate Research: Why does the world’s biggest chip maker outsource so much research?

Last time around, we examined how IBM overhauled its research management. This time, let’s look at a dramatically different approach to research: Intel’s.

Unlike IBM, Intel avoids deep internal research whenever it can. This unusual approach to research is rooted in the experience of Intel’s founders: Gordon Moore, Robert Noyce and Andrew Grove. All worked in Fairchild Semiconductor’s research organization. All found it enormously difficult to get research out of the lab and into actual production. And all vowed to avoid that problem when they started Intel.

Intel chose instead to rely on excellence in the “D” part of R&D and effectively to outsource a great deal of its “R.”

This sounds easy to do: just don’t hire any researchers, and don’t pay for research. In practice, though, it is quite tricky to manage in a technology-intensive industry.

Controlling Without Creating

How does Intel gain access to critical new technologies if it does not create them itself? How can Intel plan its future product developments if it does not control the key technologies in those products? How will Intel avoid being overtaken by companies with greater technological capability if it does not do enough research to know where the technologies are moving? After all, many agree with Alan Kay that “the best way to predict the future is to invent it.”

First, I should qualify what I’ve stated above about Intel’s research. Intel does have a large staff of researchers. Three laboratories house them: the Intel Architecture Lab, the Intel Microprocessor Lab and the Intel Component Research Lab. Intel also has created a large number of smaller labs (80 by the latest count) around the world. However, Intel manages these labs quite differently from IBM and expects different things.

A Fab Approach

For starters, there’s location, location, location. Each of the three major labs sits immediately adjacent to an Intel chip fabrication facility (a “fab” in industry jargon). This simplifies the transfer of new ideas and methods, since it’s a quick walk across the parking lot to introduce them into the production environment.

In addition, Intel takes great pains to make sure that the equipment and tooling in its labs are closely related to the equipment already in use in its fabs, so that new processes and technologies are more easily replicated in the fab. This is the essence of Intel’s famed “Copy Exactly” methodology of scaling up its production.

Once a fab creates a process for a new chip, it is “copied exactly” in a second fab so that the many headaches of ramping up volume are diminished. As a result, new Intel products get into high volume and high yield, and on to higher profits, faster.

Consistent with this philosophy, each of the labs reports to the head of one of Intel’s business groups. (At IBM and many other companies with central research facilities, research is a separate function that reports to the CEO’s office as a peer of the operating businesses.) So Intel has structured and managed its labs with technology transfer to the fab foremost in mind.

Not that this is the only activity underway in the in-house labs. The Intel Architecture Lab focuses on developing new software and hardware standards for the PC and Internet industries for Intel’s customers and their customers. The Microprocessor Lab conducts research into new microprocessor features, functions and structures. The Component Research Lab focuses on the intricate supply chain that provides the tools and equipment to make semiconductors. The smaller labs might focus on one aspect of technology development, to feed into Intel’s overall system.

This emphasis on manufacturing has paid off for Intel. When the industry adopted new standards for semiconductors, such as the larger 200-millimeter and 300-millimeter wafer sizes, Intel put them into high-volume production faster than most of its competitors. Indeed, there were instances in which AT&T or IBM helped to fund a new semiconductor technology and got the earliest access to it, yet Intel still got it into production faster!

Leveraging Universities

But Intel does not passively sit and wait for new research discoveries to arise. Instead, it has been proactive in identifying, funding and leveraging the research discoveries of others.

In particular, Intel runs an elaborate program to fund hundreds of university research projects in over 15 selected U.S. universities and a similar number of foreign universities. These projects are in areas that Intel deems critical to its future business success, and it manages these relationships not only with individual researchers but also with their institutions.

Intel’s spending on these programs exceeds $100 million, making it, outside the federal government, one of the largest sources of research funding available to university researchers in its targeted areas. As a result, Intel can solicit research proposals from hundreds of excellent university scientists and engineers, which allows it to survey the landscape of research opportunities before it invests a dollar.

If this programmatic approach to funding external research sounds familiar, it should: Intel has even hired a former senior DARPA official, David Tennenhouse, to lead the effort. In late August, Tennenhouse announced the opening of three new Intel labs at Berkeley, Carnegie Mellon and the University of Washington, to focus on collaborative research efforts with university scientists and engineers. (See “Intel Revamps R&D”.)

Rethinking Roles

This philosophy changes the role of the researcher. Instead of defending a particular solution of their own, Intel’s researchers evaluate others’ solutions. It also allows Intel to hedge its technical bets by funding multiple approaches to a given problem. Indeed, this flexibility is one of the primary benefits of Intel’s model.

Once Intel identifies and funds a promising research proposal, it must negotiate with the university to assure that it gets access to the results of that research. Intel does not want to fund the research only to find that the university has later patented the technology and now wants Intel to pay royalties to use it.

Intel must also decide whether and when a research project is ready to be put into the fab. The labs select the ones that look to be ready for prime time and then get deeply involved in the transfer of an external research project into Intel’s production environment.

Additionally, Intel devotes significant time and energy to industry groups such as Sematech, which hammers out the path for future semiconductor developments. That helps to coordinate complex, fundamental standards that dictate the investment of billions of dollars in equipment and processes. While Intel plays a leadership role in the organization, it cannot and does not unilaterally dictate future technologies for the industry. Instead, it must persuade others to agree to a particular approach and must sometimes sacrifice its own preferred agenda to iron out final differences so that the industry can move ahead.

The Limits of Central Research

Intel’s approach suggests some important principles for research in the future:

  • We don’t have to own the research to profit from it.
  • Not all of the smart people in the world work for us.
  • We have to be smart enough, though, to recognize excellent people and strong research projects when we see them. That knowledge requires us to do some amount of internal research.
  • While we must compete in order to win, we must also collaborate to advance our technology, particularly in creating standards upstream in our supply chain and downstream in our architectures.
  • Research can help define when and how we cooperate.

Moreover, there is another side to Intel’s approach to research, reflecting how many important new technologies are coming out of startup organizations rather than central research laboratories. That will be the subject of my next column.
