Measure for Measure
A young genius in a low-budget lab toils to uncover the workings of cancer cells. Physicists from several universities collaborate to coax never-before-seen particles from a supercollider. Teams of astronomers ply huge telescopes to scan the far reaches of the universe, capturing stunning images of black holes and nascent stars. Eureka moments can occur in almost any kind of setting. Behind all those modes of inquiry, however, is often a common thread: U.S. government funding.
But what is the best way for federal funding agencies like the National Science Foundation (NSF) and the National Institutes of Health (NIH) to invest in science? Do comparatively small grants for individual researchers spark the most groundbreaking ideas, or does it take long-term awards for large teams? For that matter, should the goal be to shoot for occasional big breakthroughs or consistent, incremental advances in knowledge? Or is the whole process of scientific discovery and technological innovation too complex for any general rules to apply?
These questions seem especially pressing today. In 2009, Congress allocated more than $20 billion in stimulus money for science, giving the funding agencies plenty of decisions to make. Such a large injection of cash is unlikely to be repeated any time soon, however, and overall federal funding for science–about $148 billion in fiscal year 2010–may soon be subject to intense political sparring.
So it would behoove scientific organizations to wring all the value they can out of their budgets. But until recently, no one had really studied what makes scientists productive. “There are a lot of anecdotes and stories but no serious empirical basis for studying the funding of science,” says Julia Lane, program director for the Science of Science and Innovation Policy group at the NSF.
Read the research papers mentioned in this story
Azoulay, Graff Zivin, Wang, "Superstar Extinction"
Azoulay, Graff Zivin, Manso, "Incentives and Creativity"
Jones, "Age and Great Invention"
Jones, "The Burden of Knowledge"
Jones, Wuchty, Uzzi, "The Increasing Dominance of Teams in the Production of Knowledge"
Murray, "The Oncomouse that Roared"
Murray, Huang, "Does Patent Strategy Shape the Long-Run Supply of Public Knowledge?"
Within the last decade, however, a number of economists with MIT links have been shedding new light on the ways scientists work. “This topic has had a high ratio of pontificating to actual research achievements,” says Pierre Azoulay, PhD ’01, an associate professor at MIT’s Sloan School of Management. “But we want to bring the scientific method to bear on the scientific enterprise.” In recent years, the NSF and the NIH have both started programs to that end, while in academia, this subspecialty–economists who study scientists–has grown to the point where about 200 researchers attended the conference on innovation that the National Bureau of Economic Research (NBER) held in 2009.
“To an economist, this issue is extremely important, because an economist thinks innovation is what drives growth and progress,” says Ben Jones, PhD ’03, an MIT-trained economist and associate professor at Northwestern’s Kellogg School of Management. Indeed, Nobel Prize-winning research by MIT economist Robert Solow, HM ’90, among others, has shown that technological innovation accounts for a large portion of economic growth. Today, some economists continue to study the relationship between innovation and growth in an overarching, macroeconomic way. Others, including Jones, look in detail at the state of the laboratory, examining how scientists collaborate and what kinds of incentives spur discoveries and new technologies.
“There is a real sense that we’re beginning to think about science not just as an enterprise carried out by brilliant people who are unmanageable, so that you simply give them money and hope they go away and do something clever,” says Fiona Murray, an associate professor at Sloan, who has studied scientific innovation extensively. “Scientists clearly value their autonomy, but that doesn’t mean you can’t think about science as an organizational activity.”
New tools for studying scientists
In the 1990s, a few labor economists (including MIT’s Joshua Angrist) discovered new ways to conduct “natural experiments,” studies that mimic laboratory-style randomized trials. They began using historical data to pinpoint the impact a single difference makes between two otherwise equivalent groups of workers. At the same time, detailed Internet citation databases began cropping up, giving economists a source of hard data for natural experiments that assessed the influence, productivity, and teamwork of equivalent groups of scientists. These developments enabled economists to study scientists closely for the first time, says Scott Stern of Kellogg, a former MIT economist and a prominent figure in the analysis of science.
Azoulay and his colleagues put these new tools to work to study the widely held belief that working with the top people in a given field makes other scientists more productive. The project began in 2002, when Azoulay, a voluble native of France, gave a talk about the biotechnology industry and discovered an audience member who was equally knowledgeable about the subject: Joshua Graff Zivin, an economist at the University of California, San Diego. Soon the two were studying the effect of “superstar” scientists on their colleagues. They found that scientists who worked alongside these leading lights indeed had more impressive publication records than those who did not.
But after presenting some findings to the NBER in 2004, they recognized that they had an unresolved problem. Were the collaborators more productive because they worked in the orbit of their fields’ stars? Or did they get the opportunity to work with the stars because they were more capable scientists in the first place?
Going back to the drawing board, Azoulay and Graff Zivin found the answer by examining the performance of laboratories whose star researchers had died suddenly. After combing through obituaries in such publications as the New York Times, the economists compiled a list of 161 such scientists and then scrutinized the records of more than 8,000 researchers who had coauthored papers with them. The result? The productivity of the collaborators dropped 5 to 8 percent after the superstars died. The finding, which they published this year with MIT PhD candidate Jialan Wang in the Quarterly Journal of Economics, quantifies the extent to which top scientists infuse their fields with new ideas and research topics.
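The logic of this natural experiment can be illustrated with a toy difference-in-differences calculation: compare how collaborators’ output changed after a star’s sudden death against the change in a matched control group over the same period. The numbers and groups below are hypothetical, for illustration only, and are not drawn from the actual Azoulay, Graff Zivin, and Wang dataset.

```python
# Toy sketch of the "superstar extinction" natural experiment.
# All figures are made up for illustration; the real study used
# 161 deceased stars and over 8,000 coauthors.

def mean(xs):
    return sum(xs) / len(xs)

# Annual papers per collaborator: (before, after) the star's death.
treated = [(3.0, 2.7), (2.0, 1.9), (4.0, 3.6), (2.5, 2.3)]
# Matched control scientists over the same calendar window.
control = [(3.0, 3.0), (2.0, 2.1), (4.0, 3.9), (2.5, 2.5)]

def change(group):
    """Fractional change in the group's average output."""
    before = mean([b for b, _ in group])
    after = mean([a for _, a in group])
    return (after - before) / before

# Difference-in-differences: the treated group's change minus the
# control group's change isolates the effect of losing the star.
did = change(treated) - change(control)
print(f"estimated effect of losing the star: {did:.1%}")
```

The control group is what distinguishes this from a naive before/after comparison: it nets out field-wide trends (funding cycles, career aging) that would have affected the collaborators even if the star had lived.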
The freedom to fail
Natural experiments also allow economists to study how different types of grants affect scientists. For instance, it turns out that scientists whose funding affords them unusual long-term freedom in the lab are more likely to generate breakthroughs, according to a November 2009 working paper by Azoulay, Graff Zivin, and Gustavo Manso, an assistant professor at Sloan.
To reach this conclusion, they compared the productivity of two groups of scientists from 1998 through 2006: investigators at the Howard Hughes Medical Institute (HHMI) in Maryland and researchers given NIH grants. The HHMI scientists were encouraged to take risks and received five years of financial support, with a two-year grace period after funding was terminated. The standard NIH grants, known as R01 grants, lasted three to five years, and recipients were monitored more closely; funding ceased immediately if the grant was not renewed. The researchers found that papers by the HHMI scientists were far more likely to be heavily cited and covered a broader range of subjects. Those scientists also mentored more young colleagues who went on to win prizes.
“If you want people to branch out in new directions, then it’s important to provide for their long-term horizons, to give them time to experiment and potentially fail,” Azoulay says. “You can generate innovation, but the details matter. What you want to provide incentives for is future performance, not performance today.”
Naturally, the HHMI welcomed the findings. “HHMI has identified highly creative scientists and given them the freedom to pursue critical medical research, even if it takes them years and means a change of research direction,” says Avice Meehan, the institute’s vice president for communications and public affairs. (An NIH spokesman, Don Ralbovsky, says NIH staffers considered the study “interesting,” but he refrained from further comment until the paper was officially published.) But couldn’t it be that the HHMI recruited better scientists to begin with? That would be a version of the same problem Azoulay and Graff Zivin encountered when studying the effects of star scientists.
This time the researchers anticipated this potential objection from the start and designed their study accordingly. They identified 73 well-regarded HHMI researchers and found matching groups of high-flying scientists among the NIH awardees: one set of 393 who had won early-career prizes, and another group of 92 who had received so-called Merit funds, reserved for highly promising projects. The HHMI researchers produced twice as many papers in the top 5 percent of their fields in terms of citations, and three times as many in the top 1 percent, as the prize-winning NIH-backed scientists; they also published 50 percent more papers in the top 1 percent than the Merit recipients.
So it might be best just to hand scientists money and leave them alone after all–but now at least we have some empirical support for that approach. On the other hand, as Azoulay notes, scientific progress might still depend on a combination of radical insights and incremental advances. If so, there is not one inherently superior type of grant, and funding agencies should be looking for the right mix.
From lab to market
Basic lab research is only one link in the chain through which scientific research leads to economic growth; discoveries must be turned into commercial products. This, too, is rich territory for study. When researchers began patenting newly isolated genes in the 1990s (a practice made possible by the Bayh-Dole Act of 1980, which allowed researchers who receive government funding to keep control of their inventions), the ensuing debate raised an important question: does doing this dissuade other scientists from studying those same genes?
Patents in bioscience have become a favorite source of insight for Murray, a self-described “lapsed chemist” who studied chemistry at Oxford and got a PhD in applied science at Harvard. “I’m the kind of person who doesn’t have the patience to let the cake bake in the oven until it’s finished,” she says. “Not a good quality if you want to be a chemist.” Instead, Murray has broken new economic ground by examining the impact of intellectual-property practices in the life sciences.
In a series of detailed studies, Murray and several coauthors have found that patenting genes can–at least initially–depress subsequent research. In a 2008 paper with Kenneth Huang of Singapore Management University, she looked at thousands of bioscience publications and discovered that the research on a publicly disclosed gene sequence diminishes by 5 percent after that gene is patented. And yet, as she argues in a paper forthcoming in the American Journal of Sociology, “over time that negative effect goes away.”
Today, patents can represent an invitation to collaborate. “People used to think that if I were running a lab and patenting a lot, it meant that the quality of my science had gone down, I wasn’t publishing anymore, and I had sold out to commercial interests,” says Murray, whose new paper examines research practices before and after DuPont gained patent rights in the 1980s over a mouse developed at Harvard to study cancer. But after studying researchers at labs like that of MIT’s Eric Lander, founding director of the Broad Institute and a leader of the Human Genome Project, she doesn’t see it that way: “If people have a really good set of ideas, now they tend to both publish and file intellectual-property claims. And this kind of activity is really important, because it not only contributes to our long-term stock of knowledge but has potential applications.”
The death of the Renaissance man
Science may lead to technological innovation, and innovation may lead to economic growth, but in recent decades, America’s science and technology infrastructure has grown faster than the overall economy. “If you look at the number of people in research and development in science and technology, and the money spent on it, you see that our collective effort level is increasing demonstrably,” Jones says. “Yet our growth rate is not improving.” A corollary, he says, is that “the contributions of individuals [to economic growth] seem to be declining over time.”
Why? Jones believes that increased specialization in the sciences is a major factor. “Because there is more and more knowledge, it’s increasingly difficult for any individual to have a share of that knowledge,” he says.
Jones, along with Northwestern colleagues Stefan Wuchty and Brian Uzzi, has quantified this change. In a paper published in Science in 2007, they looked at 19.9 million papers and 2.1 million patents since 1955 and found that the average number of authors per paper has nearly doubled, from 1.9 to 3.5; the number of inventors per patent has increased from 1.7 to 2.3.
“Your probability of writing a home-run paper as a solo author has declined dramatically,” Jones says. Perhaps surprisingly, the same trend is seen in the social sciences, where 52 percent of papers are written in teams–up from 18 percent in 1955. “The fact that we see the same pattern everywhere suggests it’s the human capital that matters most,” Jones adds. Nothing is more valuable than the accretion of knowledge, but now it takes more people to produce it.
It also takes scientists more time to acquire the knowledge expected of them before they are granted full professional status. In a 2008 paper, “The Burden of Knowledge and the ‘Death of the Renaissance Man,’ ” Jones observed that the age of scientists receiving PhDs rose across all major fields starting in the late 1960s; the duration of a PhD program in the life sciences has also expanded since the 1960s; and today’s Nobel Prize winners received their PhDs substantially later than those who received the awards in the early 20th century.
It may even be that science is now demanding too much of students before stamping them with PhDs, preventing talented young researchers from making breakthroughs or setting the research agendas for their own labs. “In medical research, there is a palpable sense that people are being delayed before getting the chance to do research,” says Jones. Indeed, Elias Zerhouni, director of the NIH from 2002 to 2008, has called this the most important challenge facing the federal funding agencies.
A quixotic quest?
Despite the progress they have made in establishing a new area of research, economists who study the scientific enterprise increasingly find themselves preoccupied with a new problem: getting people outside economics, and inside science, to look at–and act on–their findings. Executives at the NIH and NSF generally embrace economic studies. But active scientists, as Azoulay points out, tend to be too focused on their own research to imbibe the social-science literature. “If we do our job correctly, we’ll convince scientists and scientific leaders they should build these kinds of evaluations into their projects,” he says. “But we’re not there yet.”
Jones agrees. “The agencies are actually quite receptive to evidence,” he says. “But the scientific community hasn’t fully absorbed these findings yet.” Murray says it has been “a hard struggle to convince people that studying science and scientists is a valid activity and that the social sciences have something useful to say.”
The NSF started its Science of Science and Innovation Policy program in 2005–in part, Lane says, because President George W. Bush’s science advisor, John Marburger, insisted that, in the years ahead, science would need more detailed documentation of its spending patterns to justify the federal backing it receives. The NIH’s Science of Science Management unit held its first major conference on the subject in October 2008. Among other goals, the agencies hope to create common reports on the outcomes of grants across agencies, so that information about the effectiveness of funding can be compiled more readily. “We want to build a common empirical infrastructure along with the universities,” says the NSF’s Lane. “This is a hard problem, but that’s never made scientists run screaming into the night before.”
In the process, economists will have to persuade scientific leaders that, for instance, citations really measure the value of a paper and do not overrate papers that are more controversial than substantive. Multiple studies by economist Manuel Trajtenberg of Tel Aviv University have shown that the number of citations a patent receives does correspond to its innovative value, but it is harder to judge the significance of citations for papers.
Scientists may also be reluctant to let economists define what constitutes a research advance. Take the paper in which Azoulay, Manso, and Graff Zivin compared NIH and HHMI researchers. The HHMI investigators generated about 10 percent more variety in the keywords they used to describe their own work, a statistic that the researchers used as a proxy for “creativity” in the life sciences. And yet, Azoulay acknowledges, “we don’t have a secret indicator for scientific creativity. That is a somewhat quixotic quest.”
And there is always the danger that statistics about scientists could be yanked out of context in political debates. Suppose, Azoulay says, that 10 percent of all published papers represent significant advances in knowledge. Politicians trying to cut science funding might spin that as a low number. “Politically, such a finding could be a disaster,” he says. “But substantively, one big paper out of 10 could be a very good batting average.”
Because studying science and innovation is such a complex undertaking, economists and science administrators alike say their next big step is to get scientists involved. “We have to get scientists engaged,” says Lane. “It’s too important to mess this up, so it has to be a collaborative activity.” Scientists need to work together to develop tools that objectively measure the impact of their own work. And then they need to use them.
The more conclusively science can prove that it is indeed an engine of innovation and growth, Lane believes, the more effectively the science agencies can insulate themselves from potential cuts at a time when the idea of fiscal discipline, fairly or not, is increasingly hard to avoid. Given the climate in Washington, “I think all the evidence is that we’re going to have to have more documentation as science agencies,” Lane says. “We’ve got the anecdotes, but that isn’t going to do for much longer.”