The NSF started its Science of Science and Innovation Policy program in 2005, in part, Lane says, because President George W. Bush's science advisor, John Marburger, insisted that in the years ahead the agency would need more detailed documentation of its spending patterns to justify the federal backing it receives. The NIH's Science of Science Management unit held its first major conference on the subject in October 2008. Among other goals, the agencies hope to create common reports on the outcomes of grants across agencies, so that information about the effectiveness of funding can be compiled more readily. "We want to build a common empirical infrastructure along with the universities," says the NSF's Lane. "This is a hard problem, but that's never made scientists run screaming into the night before."
In the process, economists will have to persuade scientific leaders that, for instance, citations really do measure the value of a paper and do not overrate papers that are more controversial than substantive. Multiple studies by economist Manuel Trajtenberg of Tel Aviv University have shown that the number of citations a patent receives does correspond to its innovative value, but the significance of citations is harder to judge for papers.
Scientists may also be reluctant to let economists define what constitutes a research advance. Take the paper in which Azoulay, Manso, and Graff Zivin compared NIH and HHMI researchers. The HHMI investigators generated about 10 percent more variety in the keywords they used to describe their own work, a statistic that the researchers used as a proxy for “creativity” in the life sciences. And yet, Azoulay acknowledges, “we don’t have a secret indicator for scientific creativity. That is a somewhat quixotic quest.”
And there is always the danger that statistics about scientists could be yanked out of context in political debates. Suppose, Azoulay says, that 10 percent of all published papers represent significant advances in knowledge. Politicians trying to cut science funding might spin that as a low number. “Politically, such a finding could be a disaster,” he says. “But substantively, one big paper out of 10 could be a very good batting average.”
Because studying science and innovation is such a complex undertaking, economists and science administrators alike say their next big step is to get scientists involved. “We have to get scientists engaged,” says Lane. “It’s too important to mess this up, so it has to be a collaborative activity.” Scientists need to work together to develop tools that objectively measure the impact of their own work. And then they need to use them.
The more conclusively science can prove that it is indeed an engine of innovation and growth, Lane believes, the more effectively the science agencies can insulate themselves from potential cuts at a time when the idea of fiscal discipline, fairly or not, is increasingly hard to avoid. Given the climate in Washington, "I think all the evidence is that we're going to have to have more documentation as science agencies," Lane says. "We've got the anecdotes, but that isn't going to do for much longer."