Rethinking the NIH review system
Winston Churchill famously noted, “It has been said that democracy is the worst form of government except all the others that have been tried.” I would not say the same about the system of reviewing and funding grants at the National Institutes of Health. In fact, I am fully confident that the NIH has made every effort over the years to be as fair and equitable as possible. However, as with all enterprises that involve fallible human beings, it is not perfect; if asked, both applicants who have succeeded in generating research support and those who have failed in this regard (likely especially those who have been unsuccessful in successive rounds of submission) would have much to say about what they perceive as the system’s deficiencies and would have suggestions for its improvement.
As a long-time reviewer as well as an applicant who has failed more often than succeeded, I have undertaken the perhaps Sisyphean task of making suggestions that in my mind might be worthy of consideration. I have divided this article into three sections: (1) changes that I believe could be considered and instituted now; (2) serious suggestions for future consideration; and (3) a few more provocative ideas.
Immediate changes

A tiered system of funding:
My first suggestion relates to leveling the playing field. It has always struck me as quite unfair that nascent assistant professors, attempting to obtain their first competitive extramural grant, often are competing with full professors who have years of experience and publications behind them and are likely supported by large and productive laboratories.
Many years ago, some agency (in this case not an NIH review panel) decided to award additional funds to currently funded researchers in a particular cancer-related area. By far the best grant was submitted by a very well-known and high-profile investigator whose laboratory at that time was supported by approximately $4 million in grant funding. Of course, an unbiased review resulted in this being the best scored grant, and I do not disagree with the evaluation of the quality of the science proposed. But I could not help thinking how the money awarded to this investigator could have been put to better use as a number of smaller grants for young researchers just setting up their first laboratories or early-stage investigators with novel, even if unproven, ideas. It further occurred to me that this individual did not actually need these additional funds and could have performed the proposed work using available resources if they felt that the problem was worthy of investigation.
My solution is a tiered approach. Individuals at different career stages should be able to compete with others at approximately the same career level. That is, assistant professors would compete with other assistant professors, associate professors with other associate professors, and full professors with other full professors. Of course, some tinkering would be needed to define where individuals stood in their career paths reliably, but this could be accomplished quite easily.
The tiering would not be limited to the applicants’ career stage. The maximal funding for each level also would have tiered steps, such as three-year grants with maximal direct costs of $150,000 a year for the assistant professor level, four-year grants at the associate professor level at a maximum of $200,000 per year and five-year grants at the full professor level at $250,000 per year. Again, these tiers could be modified after consideration of different possibilities.
Furthermore, this approach would not prevent someone at the assistant professor level from submitting a five-year grant if they felt that they could compete with more established investigators. But it would not work the other way around, with full professors dipping into the assistant professor pool of funds. For many years, I have served as the PI at my home institution for the American Cancer Society Institutional Research Grants mechanism that supports junior investigators. It would be a travesty if senior investigators could compete for this same pool of funds.
A tiered approach might have the additional benefit of stretching out the available NIH funds, since, in general, only well-established investigators would apply for five-year grants.
The R21 mechanisms might be thought to have the same utility; however, the R21 allows for a maximum length of six pages for grant applications; in my experience (and I think others would agree), this is not sufficient to incorporate a comprehensive background, preliminary data and detailed experimental approaches. Although preliminary data is not officially required for an R21, I would postulate that rarely has an R21 been funded without it. I would eliminate this mechanism entirely and substitute the tiered system envisioned above.

The scoring system:
The system of scoring from 1 to 9 without intermediate values allows far less discrimination and subtlety than the prior system running between 1 and 5 with the option of scoring by decimal increments. I strongly advocate moving back to the previous system. Those of us who served on panels in the “old days” could distinguish between a grant that scored at 1.6 and one at 1.4 or better. Any score below 1.5 indicated that the grant fell into the outstanding range and should be considered for funding. The scientific review officers always were required to say that the study section does not actually make funding decisions; however, there was clearly an elephant parked in the center of the room where the grants were being evaluated, since that is precisely what every reviewer was thinking about during the scoring phase.
The current scoring situation is one that I personally find quite frustrating. A score of 1 is as rare as hen’s teeth. A score of 2 means “Yes, clearly support the grant for funding.” A score of 3 means “Good try, you came close, but you aren’t going to get the money; still, it’s likely worth trying again.” A score of 4 means “Unless you come up with some new ideas that will blow our socks off, a resubmission is probably not worth the bother.” And anything above a score of 5 largely adds insult to injury.
But what happens when the scores for what might be termed competitive grants fall between 2 and 3, as is likely often the case? The panel members who scored at 3 may have considered the proposal to be relatively strong but not ready for prime time. Perhaps a panel member made an offhand remark that diminished its worthiness in the minds of the group. Panel members scoring at 2 may have felt some of the same reservations as those who scored at 3, but being unable to score at 2.3, 2.4 or 2.5, these reviewers gave the applicant the benefit of the doubt. Similarly, the panel members who scored at 3 also might have wanted to place the project between 2 and 3 but were unable to do so.

The innovation criterion:
I expect serious disagreement with this suggestion but would argue that this criterion should be eliminated. The innovation criterion largely provides an effortless mechanism for reducing the enthusiasm for a grant, with little relevance to the potential value of the proposed research. A grant can be highly innovative yet have little chance of succeeding due to other deficiencies. More importantly, to my mind, a grant need not be innovative to have significant and groundbreaking impact. I suggest that this criterion be replaced by one such as potential impact, and see the sections below relating to the significance criterion.

The environment criterion:
The criterion of environment should not be scored. A reviewer might simply indicate whether the environment is adequate for the proposed research. In the rare cases where the environment is deemed to be inadequate, the grant could be eliminated from consideration.

Page length:
Twelve pages is far too short for effective presentation of preliminary data, background and approaches with alternative strategies. I am not suggesting we revert to the 25-page limit. However, 15 pages would be more realistic for an R01; the Department of Defense recently increased its limit to 14 pages. Eight pages might be appropriate for an R21.
Furthermore, one page is insufficient to respond to criticisms of the previous submission. Two pages should be allowed, which is still less than the previous system’s three-page response.
Suggestions for future consideration

The significance criterion:
I don’t have a problem with this criterion other than its vagueness. Why not substitute something more straightforward, such as “scientific rationale,” with wording such as “To what degree do the preliminary data support the proposed approaches?” or “Does the literature provide a sufficient foundation for the proposed studies?”

A related issue — the premise:
For a while, the NIH imposed its own definition on this terminology but could not sway reviewers from the standard meaning and finally gave up on trying to convince the reviewers that “premise” did not actually mean the underlying foundation for the grant. According to my dictionary (I guess in actuality Google), a premise is what forms the basis of a theory or a plot. In logic, the premise is the basic statement upon whose truth an argument is based.

Makeup of the panels:
Recently, the NIH decided to temporarily sideline reviewers who had reviewed a set maximum number of grants during the previous 12 years. I believe this was a consequence of complaints from applicants who observed that certain names frequently appeared on the roster of study sections where their grants had been reviewed and not funded. This led to the impression that perhaps their grants were not receiving a fair review, since the same panel members might have been involved repeatedly.
I don’t recall ever reviewing the same grant twice, although I cannot speak for other panel members. But I don’t believe this is a productive strategy. Experienced reviewers provide the foundation for the most solid and sound review process and should be recruited with enthusiasm. (Yes, of course I can’t wait to be invited back.)
A few more provocative ideas

Unscientific critiques:
What can be done about absurd or inflammatory critiques? All of us have received review comments that clearly came from left field and had no relevance to the topic at hand. Inappropriate or irrelevant criticisms of this type should be parsed further by NIH personnel and expunged, if at all possible, although I fully recognize the challenge of doing this and further realize how this would delay the delivery of summary statements to the applicants.

Internal review:
Does the NIH analyze which types of funded grants have had the greatest scientific impact? I believe this would be a highly productive exercise, and perhaps this has been occurring at intervals throughout the history of the NIH. There long has been a concern in the larger scientific community that the NIH is fundamentally conservative and largely hesitant to fund novel ideas.
Specifically, should there be fundamental change to what types of proposals are funded? Should there be far less focus on mechanistic aspects of research projects and more attention to the larger potential scientific impact? Should there be greater emphasis on supporting higher risk/higher gain projects? Should the NIH identify specific areas of need and solicit grant applications for these areas rather than allowing for the currently broad nature of the grant solicitation process? Should there be a secondary level of review, as is the case with the Integration Panels at the Department of Defense research programs, where grants scored in the outstanding range by the review panels have frequently not been funded?
I am not advocating for any of these approaches but only suggesting that these could be considered. Furthermore, in the case of a second level of review/oversight, there could be a real danger that the funding process could be politicized.

Continuous submission:
For a number of years, continuous submission, which allowed applicants an additional time period (I believe about six or eight weeks after the standard due dates) for grant applications to be submitted, was one of the few perks provided to scientists who had given generously of their time and expertise to review a certain number of grants or serve on a certain number of panels during the course of a year. Unfortunately, it recently was eliminated, entirely without explanation. Because I live in Richmond, Virginia, it was always fairly easy for me to attend study section meetings in the Washington, D.C., and Maryland area. But I very much admired those reviewers who lived on the West Coast and their commitment to carrying out this service. This perk should be reinstated.
I believe that our system of study section reviews and funding approaches is, like democracy, fundamentally strong. It has weathered both good and bad times, has withstood potential external interference and is overseen by scientists with the goal of supporting research that will improve the lives of our citizens. One can only marvel at the speed with which the COVID-19 vaccines were developed. We have to be grateful to industry but must at the same time acknowledge that these breakthroughs would never have been achieved in the absence of decades of basic research supported by the NIH in many disciplines. But this does not mean that we should become complacent and fail to consider how the system might be further improved.
Jack Yalowich at the Ohio State University read the drafts of this manuscript and made insightful comments and suggestions.