In May 2017, the National Institutes of Health announced plans to cap the number of awards that an individual investigator could receive in order to free up funds to invest in young investigators. The science-funding agency justified this decision in part by citing published analyses indicating that productivity per dollar awarded began to decline as investigators accumulated multiple concurrent R01 grants.

[Image: The James H. Shannon Building, also known as Building One, on the National Institutes of Health campus in Bethesda, Md. Courtesy of Lydia Polimeni/NIH]
The plans were quickly retracted (although the NIH’s neuroscience institute announced plans this spring to limit grants to well-funded labs), but the proposal highlighted the continuing desire of many institutional administrators to devise metrics for quantifying the productivity of academic scientists.
When looking at the analyses cited by the NIH, a universal theme becomes evident: the measures of productivity focus exclusively on either the number of publications generated or some derivative thereof, such as impact factor or citation index. No effort is made to assess, nor is any explicit value attached to, an investigator's contributions to the training of the next generation of scientists.
Counting only co-authored papers is like rewarding someone who produces a given quantity of lumber by clear-cutting a patch of forest to the same degree as someone who produces the identical quantity by carefully selecting the trees to be removed and replanting afterward. If we as a community are to inform our decision-making processes with data pertaining not just to the immediate quality but also the long-term sustainability of the biomedical and molecular life science research enterprise, then education and training must be incorporated into any discussion concerning metrics of productivity.
One may counter that a causal, linear relationship exists between the amount of publishable research a student or other trainee generates and the quality of the training received in the laboratory where the research was performed. However, while the output of individual students and postdoctoral trainees varies greatly, in general the capacity to generate publishable data grows with cumulative experience, reflecting all of a student's prior educational and training experiences, not just those in the current laboratory. The value we implicitly place on prior training becomes evident whenever a principal investigator decides how to staff a laboratory or what expectations to place on new and current group members. In both hiring and admissions, experience as an undergraduate research student, summer intern, graduate student and so forth carries weight across academia, government and industry. Yet no accepted mechanism exists for recognizing and crediting undergraduate and graduate research mentors for their contributions to their students' long-term success.
How can we remedy this? One way would be to give prior research mentors a share of the credit for their trainees' subsequent publications. Although an admittedly imperfect measure of a mentor's educational and training contributions, allowing former mentors to list themselves as "shadow" co-authors in progress reports and biographical sketches would let a single metric express both their immediate and long-term contributions to the research enterprise. It also acknowledges that our focus on publications as the ultimate currency for determining value likely will persist for many years to come. Under this model, progress reports and biographical sketches would include lists of papers on which a scientist-educator participated directly as well as those that benefited from their training activities.
Should papers by former trainees be counted the same as papers where the investigator is an explicit co-author? Should a paper published two years after moving on from a mentor’s tutelage count the same as one published a decade later? Current models focused on research productivity already struggle with weighing how much a given publication reflects the contributions of each of its authors: Should a three-author paper on which a researcher is listed as second author be weighted equally with a second authorship on a six-author paper? This has not, however, kept the scientific community from using paper counts in some form or another as the default metric for assessing productivity, progress or impact.
Given that current systems are both imperfect and persistent, how could (or should) we weigh papers by trainees? First, I would propose a time limit. Only papers authored by a former trainee during the next stage in their training would be eligible. So if one mentored an undergraduate research student for at least one academic year, as documented by transcripts, only papers containing work performed as a graduate student would be eligible. Similarly, a graduate student’s former major professor would be able to cite work done during the student’s (first) postdoctoral training position.
How do we translate this into a number that can be added to traditional research publications to give a total paper count? An undergraduate research mentor could be credited for, say, a tenth of a publication for every first-author paper their trainee produces in graduate school and perhaps 5 percent of second-author publications. Given the more intensive nature of graduate training, perhaps these figures could be raised to 20 percent and 10 percent, respectively, for a major professor. In this scenario, when a principal investigator fills out their progress report for a three-year grant award, they would be able to cite not just the three papers on which they were a co-author but also the two first- and three second-author papers published by their former graduate students as postdoctoral trainees during that same period and the three second-author papers published by their former undergraduate research students. So instead of a paper count of 3, their count would be 3.0 + (2 x 0.2) + (3 x 0.1) + (3 x 0.05) = 3.85, nearly 30 percent higher than someone who had no former students publish.
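The weighting scheme and worked example above can be sketched in code. The role names, the eligibility rule and the specific percentages (10 and 5 percent for undergraduate mentors, 20 and 10 percent for major professors) are the illustrative values proposed in this essay, not an established policy:

```python
# Hypothetical credit weights from the scheme proposed above,
# keyed by (mentor role, trainee's author position on the paper).
# The percentages are illustrative, not established policy.
WEIGHTS = {
    ("undergrad_mentor", 1): 0.10,  # former undergrad, first-author paper
    ("undergrad_mentor", 2): 0.05,  # former undergrad, second-author paper
    ("grad_mentor", 1): 0.20,       # former grad student, first-author paper
    ("grad_mentor", 2): 0.10,       # former grad student, second-author paper
}

def weighted_paper_count(own_papers, trainee_papers):
    """Total paper count: direct co-authored papers plus fractional
    credit for eligible papers by former trainees.

    own_papers     -- number of papers the investigator co-authored
    trainee_papers -- list of (mentor_role, author_position) tuples,
                      one per eligible former-trainee paper
    """
    return own_papers + sum(WEIGHTS.get(p, 0.0) for p in trainee_papers)

# The worked example from the text: three direct papers, plus two
# first-author and three second-author papers by former graduate
# students, and three second-author papers by former undergraduates.
papers = (
    [("grad_mentor", 1)] * 2
    + [("grad_mentor", 2)] * 3
    + [("undergrad_mentor", 2)] * 3
)
print(round(weighted_paper_count(3, papers), 2))  # prints 3.85
```

Papers outside the eligibility window (a former undergraduate's postdoctoral papers, say) would simply be left off the `trainee_papers` list, receiving a weight of zero.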
Does this formula give too much or too little credit for training contributions? Readers can and undoubtedly will raise numerous objections to my approach. However, what should not be in dispute is that in these times of tight funding, regulatory micromanagement and administrative obsessiveness with accountability, it is more important than ever to focus on the sustainability of the research enterprise when making strategic decisions such as where to allocate resources. To do so, we must more explicitly and generously reward the educational and training activities that develop the intellectual infrastructure upon which “productivity” relies.
Whatever the merits of the “credit for future publications” model described above, I hope it will provoke reflection and discussion of how we assess the success of scientist–educators.