Moonlight over Academe

Slicing billions of dollars from corporate laboratories hasn’t hurt U.S. competitiveness. One reason: companies hire professors to do R&D moonlighting.

I’m tired of the whining about innovation. Tired of hearing ossified academics and weary policy-meisters chronicle the decimation of corporate support for “pure” research. Tired of doomsday rhetoric that predicts the imminent demise of American technology. Tired of industrial ostriches complaining about the “short-term” focus of such high-tech pacesetters as Intel, Microsoft, and Sun.

Let’s face it: The naysayers are dead wrong. Slicing billions of dollars from corporate laboratories hasn’t made a dent in U.S. competitiveness. Indeed, from AT&T to IBM to Xerox, American industry is healthier because it has slimmed down its bloated and centralized research staffs. Across the spectrum of information technologies, from the Web to chips to software, U.S. ingenuity reigns supreme. Ditto for agrotechnology, aerospace, materials, and telecommunications. Only in biotechnology and pharmaceuticals does the United States have foreign rivals with deep pockets backed by first-class research.

This reemergence of American dominance has been led not by the national government and not by whole sectors of industry but by individual companies. To paraphrase the language of historian Thomas Hughes, we live in a time when individual companies are “transcendent”: They define the terms on which industry after industry operates.

But if individual firms reign supreme, and those same firms are trimming their research establishments, does fundamental research have any role in the resurgence we’re seeing? Absolutely. The irony is that, far from being banished from the corporate tent by cutbacks, serious researchers are playing a growing role in innovation at the level of individual firms. The explanation for this apparent paradox is that innovative companies aren’t looking for full-time scientists; they want moonlighting academics, professors willing to work on specific projects for often-lucrative piece rates.

“There’s almost no company that I’m aware of that doesn’t have heavy involvement from professors,” says Michael Crow, who oversees research and development at Columbia University. “Professors are playing a much more significant role than 25 years ago in firm-level innovation.”

Consider Barbara Hayes-Roth, a cognitive scientist at Stanford University who is a world leader in creating “intelligent agents,” or digital characters, for interactive media. The characters can carry on conversations and offer advice, playing off key words, pattern recognition, and their own “knowledge” of the world they inhabit.

When Hayes-Roth began her research a decade ago, interactive media was in its infancy and the Web, where most of her characters reside, didn’t even exist. But her research interests drew the support of the Advanced Research Projects Agency and a host of corporations; over time the commercial relevance of her work became overwhelming. For years she stood firm, limiting her consulting and, in some years, confining her work to academia. Last year, she decided to scale back her duties at Stanford and devote more time to a company she’s launched to commercialize intelligent-agent software.

The shift came easily, she says. “In academia, you’re already a kind of entrepreneur. You’re creating ideas and bringing in the resources to make your research possible.” The only difference, she adds, is that “in the university, we sell our idea before we make it and outside we sell it after we make it.”

This alliance of industry and academia would seem to bring wide benefits, yet it’s deeply troubling to people wedded to older models of how universities should interact with the private sector. And to be sure, there are at least two big dangers in industry’s growing reliance on professors. As they become more profit-driven, academics threaten to undermine one of the hallmarks of a liberal education: the largely unfettered exchange of ideas. While intellectual property claims remain the exception rather than the rule, it isn’t absurd to imagine that, at least in some scientific and technical fields, one professor will have to pay a royalty simply to read another’s published findings.

Then there’s industry’s emphasis on technologies that are fast and cheap. If one thing separates university researchers from those in corporations, it’s the academic’s insistence on pursuing solutions that are interesting, without regard to efficiency. “I always like to see a research project hung on a problem, but then I don’t like to see any constraints on how the problem is pursued,” says Michael Lynch, an applied mathematician at the U.K.’s University of Cambridge. Even though Lynch has formed two successful software companies while a professor, he worries about the tendency of academics to become midwives to industry. “The danger is that we kill the golden goose because we ask it to lay too many eggs,” he says.

These concerns are quite real, but the fact is that the Ivory Tower has fallen, and its pieces, like Humpty Dumpty’s, can’t be put back together again in the same way.

Just ask Craig Barrett. In the early 1970s Barrett was a rising star at Stanford, a PhD in materials science. One summer Intel came knocking, asking for a student to help sift through a problem with the ceramic packaging around one of the company’s new chips. On a lark, Barrett offered himself. He quickly solved the problem and was hooked. Recently, he was named Intel’s CEO, succeeding Andrew Grove.
