A young genius in a low-budget lab toils to uncover the workings of cancer cells. Physicists from several universities collaborate to coax never-before-seen particles from a supercollider. Teams of astronomers ply huge telescopes to scan the far reaches of the universe, capturing stunning images of black holes and nascent stars. Eureka moments can occur in almost any kind of setting. Behind all those modes of inquiry, however, is often a common thread: U.S. government funding.
But what is the best way for federal funding agencies like the National Science Foundation (NSF) and the National Institutes of Health (NIH) to invest in science? Do comparatively small grants for individual researchers spark the most groundbreaking ideas, or does it take long-term awards for large teams? For that matter, should the goal be to shoot for occasional big breakthroughs or consistent, incremental advances in knowledge? Or is the whole process of scientific discovery and technological innovation too complex for any general rules to apply?
These questions seem especially pressing today. In 2009, Congress allocated more than $20 billion in stimulus money for science, giving the funding agencies plenty of decisions to make. Such a large injection of cash is unlikely to be repeated any time soon, however, and overall federal funding for science (about $148 billion in fiscal year 2010) may soon be subject to intense political sparring.
So it would behoove scientific organizations to wring all the value they can out of their budgets. But until recently, no one has really studied what makes scientists productive. “There are a lot of anecdotes and stories but no serious empirical basis for studying the funding of science,” says Julia Lane, program director for the Science of Science and Innovation Policy group at the NSF.
Within the last decade, however, a number of economists with MIT links have been shedding new light on the ways scientists work. “This topic has had a high ratio of pontificating to actual research achievements,” says Pierre Azoulay, PhD ’01, an associate professor at MIT’s Sloan School of Management. “But we want to bring the scientific method to bear on the scientific enterprise.” In recent years, the NSF and the NIH have both started programs to that end, while in academia, this subspecialty (economists who study scientists) has grown to the point where about 200 researchers attended the conference on innovation that the National Bureau of Economic Research (NBER) held in 2009.
“To an economist, this issue is extremely important, because an economist thinks innovation is what drives growth and progress,” says Ben Jones, PhD ’03, an MIT-trained economist and associate professor at Northwestern’s Kellogg School of Management. Indeed, Nobel Prize-winning research by MIT economist Robert Solow, HM ’90, among others, has shown that technological innovation accounts for a large portion of economic growth. Today, some economists continue to study the relationship between innovation and growth in an overarching, macroeconomic way. Others, including Jones, look in detail at the level of the laboratory, examining how scientists collaborate and what kinds of incentives spur discoveries and new technologies.
“There is a real sense that we’re beginning to think about science not just as an enterprise carried out by brilliant people who are unmanageable, so that you simply give them money and hope they go away and do something clever,” says Fiona Murray, an associate professor at Sloan, who has studied scientific innovation extensively. “Scientists clearly value their autonomy, but that doesn’t mean you can’t think about science as an organizational activity.”