As 2014 begins, I approach the challenges facing the scientific community with a renewed sense of optimism. Congress passed a budget agreement that represents a compromise on the competing issues facing our national fiscal system. This agreement lays the foundation for a somewhat predictable appropriations framework for science and other important areas for at least a couple of years, although the details of funding for science have not yet been determined.
At the same time, other key elements regarding the use of existing resources are under substantial discussion. For example, Congress has been discussing a potential reauthorization bill for the National Science Foundation termed the Frontiers in Innovation, Research, Science and Technology, or FIRST, Act. In the name of increased transparency and accountability, the FIRST Act draft states that:
Prior to any award of Federal funding described in subsection (a), the Foundation shall publish on its public website a written justification …(of the worthiness for funding)…along with the name of the employee or employees who made the determination and any other information about the research proposal the Director considers appropriate.
As I have previously written, I feel very strongly about the importance of transparency, but making this information available PRIOR to issuing an award appears potentially quite problematic. The draft bill does not elaborate on this issue, but one possible interpretation is that the goal is to allow public (or congressional) comment about whether a particular proposal is, in fact, suitable for funding prior to the issuance of an award. The intended and unintended consequences of this dramatic change in practice should be considered carefully before moving forward. I believe the negative consequences are very likely to outweigh any positive ones.
National Institutes of Health Director Francis S. Collins also recently commented (see here, at time 1:13 to 1:16, and here) that a discussion is planned at an upcoming meeting of the institute and center directors regarding the possibility of expanding programs, like the NIH Director’s Pioneer Award program, that fund “people, not projects.” Collins cited an extensive outcomes evaluation of the Pioneer program. The evaluation compared the Pioneer awardees from 2004 to 2006 with several groups: R01 investigators approximately matched for level of funding, Pioneer finalists who were not funded and investigators of the Howard Hughes Medical Institute.
A range of bibliometric analyses as well as expert opinion served as metrics. The report concluded that the Pioneer awardees generally outperformed the matched R01 portfolio but did not outperform the HHMI investigators. It noted that the Pioneer awards provided less support than HHMI awards, with HHMI providing about $650,000 in direct costs per investigator (“from discussions with HHMI staff and program leadership”). Commenting on this issue and on the demographics of the biomedical workforce, Collins said, “We run a meritocracy and the meritocracy is blind to other things (including the age of the investigator).”
This discussion led me to analyze the level of NIH funding for HHMI investigators. Using data from NIH Reporter, I found that more than 80 percent of the 330 investigators had some NIH funding. Of these, all but a few had funding from R awards (R01, R21, R03), P01 subprojects or research-focused (as opposed to resource-focused) cooperative agreements (U awards).
These investigators average 1.96 awards each and more than $750,000 in total (direct and indirect) funding, in both cases counting a multiple-PI award fractionally by dividing it among its principal investigators. More than 40 percent of the funded HHMI investigators have more than two awards, and more than 18 percent have more than three.
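For illustration, the fractional accounting just described can be sketched as follows. The figures below are hypothetical (the actual analysis used NIH Reporter records for the HHMI investigators), but the arithmetic, counting a multiple-PI award as 1/n of an award and 1/n of its dollars for each of its n principal investigators, is the same:

```python
# Per-investigator award accounting with fractional credit for multi-PI awards.
# Data are hypothetical, for illustration only.
# Each award is a (total_cost_in_dollars, n_principal_investigators) pair.
awards_by_investigator = {
    "inv_A": [(400_000, 1), (600_000, 2)],               # one sole-PI award, one two-PI award
    "inv_B": [(900_000, 1)],
    "inv_C": [(500_000, 1), (300_000, 3), (450_000, 1)],
}

def fractional_counts(portfolio):
    """Credit each multi-PI award as 1/n of an award and 1/n of its dollars."""
    n_awards = sum(1 / n_pis for _, n_pis in portfolio)
    funding = sum(cost / n_pis for cost, n_pis in portfolio)
    return n_awards, funding

counts = [fractional_counts(p) for p in awards_by_investigator.values()]
mean_awards = sum(n for n, _ in counts) / len(counts)    # ≈ 1.61 for these data
mean_funding = sum(f for _, f in counts) / len(counts)   # ≈ $883,333 for these data
```

Applied to the real NIH Reporter records, this procedure yields the averages of 1.96 awards and more than $750,000 per funded HHMI investigator cited above.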
HHMI supports many outstanding scientists and much outstanding science. However, these observations lead me to question the notion that the NIH funding system is a pure meritocracy based on peer-review scores. Preliminary results and previous publications are undeniably important factors in receiving a good score on an NIH proposal, and access to substantial amounts of other funding facilitates generating such results.
The National Institute of General Medical Sciences has a long-standing policy regarding awards to well-funded laboratories, defined as those with more than $750,000 in annual direct costs from any source, including HHMI. NIH is piloting a policy for well-funded investigators, defined as those receiving more than $1 million in annual direct costs from NIH alone, without consideration of funding from other sources. Note that these policies do not cap the amount of support but rather invoke special scrutiny for applications from well-funded investigators. As I have written elsewhere, I feel strongly that special scrutiny, rather than a cap, is the better policy, but only if the analysis is thorough and consequential. NIH is evaluating its policy, but I am skeptical that it will have much impact as constituted.
In the name of increasing transparency about the NIH and how it operates, the author of the valuable blog Medical Writing, Editing & Grantsmanship and I have written a book about the NIH funding process and how best to interact with NIH staff, titled “How the NIH Can Help You Get Funded.” In addition to discussing general aspects of the grant-application process, the book describes the NIH and its component institutes and centers and highlights some of the differences in policy and process among these units. We also obtained data from nine institutes regarding their R01 funding curves. The accessibility of such data, along with a clear articulation of each funding agency’s decision process, is critical for evaluating existing policies and for developing new policies for using the resources available to the scientific community to drive scientific progress and impact.
Jeremy Berg (firstname.lastname@example.org) is the associate senior vice-chancellor for science strategy and planning in the health sciences and a professor in the computational and systems biology department at the University of Pittsburgh.