With the exception of one-time stimulus funding, our national investment in fundamental research has flat-lined since the Bush years. But there’s more to the story than research spending.
Yes, scientists may have warned of climate change, parsed the origins of the universe, mapped the human genome, and started organizing the store of human knowledge. But far too few representatives of our national research enterprise can explain where science dollars really go or how research actually works. The current funding crunch makes answering these questions imperative and that means going beyond touting great new breakthroughs like GPS and the Internet.
Science funding isn’t likely to be stable until science agencies find the will and way to provide evidence on whether the current national investments are too low or too high. They also need to ensure that the funds the U.S. spends on research are being wisely allocated.
In our view, science agencies — in their zeal to report results to Congress — have gone down the wrong path in answering these questions. Too often, they try to attribute outcomes to a single grant or investigator — such as crediting the value of Google to NSF funding of Sergey Brin and Larry Page. They use outmoded data. And they focus on outcomes that are too narrow.
Research is a team sport, and success requires sustained work by large teams. Most of the sweat and toil of discovery and training takes place in collaborative groups working on multiple overlapping projects: a pastiche of grants that serve multiple purposes, often span many years, and depend on several funding agencies. Science agencies should recognize this basic fact to better manage science — just as baseball managers use “moneyball” techniques to evaluate whole teams, not just pitchers, and whole seasons, not just one game. (If big-league managers knew as little about how their teams work as we currently do about how science works, academics might be invited to take the field!)
Science agencies need better data. Science creates the data used to identify international terrorists, volcanic eruptions, and the “God particle.” Scientists can likewise create data that track the progress of science, drawn from their own administrative and accounting systems. Let’s use such data better by investing in automatic and scalable approaches to tracking science’s outcomes. Right now, by some estimates, manual reporting eats up as much as 40 percent of top scientists’ time. Let’s put their precious and expensive time to more productive use.
The Committee on Institutional Cooperation, a consortium of universities, is spearheading some promising initiatives. For starters, members are beginning to combine their own data with other data sets to:
- Calculate the combined contributions of scientific teams — professors, graduate students, postdoctoral fellows, and undergraduate students — to better understand how funding enables research and training
- Size up purchases from vendors that supply inputs to the research enterprise, so we can identify just how science investments also help business
- Describe the career trajectories of team members and the effects their work has in the organizations that they found or that employ them
Maintaining and expanding the U.S. lead in research and development requires a better understanding of what science investments actually do. The U.S. science community, led by the CIC, is applying 21st-century approaches to measuring impact and creating a community equipped to answer the questions taxpayers and their representatives are asking.
Now it’s up to Congress to ask federal agencies to do their part. That means adopting less burdensome, higher-quality approaches to science reporting; using the right unit of analysis — people, not documents; discoveries, not spending — and insisting that 21st-century scientific tools be developed to describe how science funding works and generates results. Or Congress could go one better: establishing an independent institution to provide impartial, evidence-based information about what works and what doesn’t — the other half of the story of science funding that the public needs to hear.
Julia Lane is a fellow at the American Institutes for Research and co-editor of Privacy, Big Data, and the Public Good (Cambridge University Press, 2014). Jason Owen-Smith is a professor of organizational studies at the University of Michigan.