In many parts of the world, faculty appointments, promotions and grant evaluations are based on the number of papers a scientist has published, combined with the impact factor of the journals in which the work appeared. A journal’s impact factor is the average number of citations received in a given year by papers published in that journal during the two preceding years. Journals that publish relatively few papers of high impact therefore have high impact factors. But not every paper in a high-impact journal is itself of high impact, and the publication of article retractions can actually enhance a journal’s impact factor, because retraction notices cite the retracted paper.
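The arithmetic behind the definition above is simple enough to sketch; the numbers here are invented for illustration, not real journal data:

```python
# Illustrative impact-factor arithmetic (made-up numbers, not real journal data).
# The 2024 impact factor would be: citations received in 2024 by items
# published in 2022-2023, divided by the citable items published in 2022-2023.

def impact_factor(citations_to_recent_items: int, citable_items: int) -> float:
    """Citations this year to papers from the two preceding years,
    divided by the number of citable items from those two years."""
    return citations_to_recent_items / citable_items

# A journal that published 200 citable items over two years and drew
# 1,000 citations to them this year has an impact factor of 5.0.
print(impact_factor(1000, 200))  # 5.0
```

Note that only "citable items" enter the denominator, which is exactly the loophole the next paragraph describes.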
There are other easy ways for a journal to manipulate its impact factor (1, 2). For example, well-written, timely review articles are widely cited, and journals such as the Annual Review of Biochemistry and Nature Reviews Molecular Cell Biology have some of the highest impact factors. (Now I better understand why an editor once encouraged me to cite previous reviews in the same series when drafting my own review article.) In addition, Nature “News and Views” pieces are wonderful for readers, but they are also wonderful for editors, because they count toward citations (when cited) but not toward the total-number-of-papers-published denominator. “News and Views” pieces always cite other articles within the same issue, further increasing the impact factor. Finally, a blockbuster paper can skew a journal’s impact factor significantly: in 2008, a single paper in Acta Crystallographica was cited more than 6,600 times, raising the journal’s impact factor from approximately 2 to 49.926, higher than that of Nature or Science.
Some search committees use the H index to compare the scientific impact of a candidate’s research (3, 4). According to Wikipedia, “The H index is based on the set of the scientist’s most cited papers and the number of citations that they have received in other people’s publications … a scholar with an index of h has published h papers each of which has been cited by others at least h times.” Another impact metric! Wouldn’t it be great if a simple algorithm could simplify comparison of scientific impact and stature? If only it were that simple.
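The Wikipedia definition quoted above translates directly into a short algorithm; this is a minimal sketch with invented citation counts:

```python
def h_index(citation_counts):
    """Largest h such that the researcher has h papers
    cited at least h times each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the rank-th most-cited paper has >= rank citations
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4 and 3 times: four papers have
# at least 4 citations, but not five with at least 5, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

As the algorithm makes plain, h depends only on the sizes of the citation counts, never on their recency, which is why it can only grow with time.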
Like the sizes of our noses and ears, H values reflect longevity as much as quality and can never decrease with age, even if an individual leaves science (3). Younger scientists are at an instant disadvantage because the total number of papers influences the value. H indices for female scientists also suffer in comparison with those for males, because women apparently publish fewer papers during their careers than their male counterparts (4). In addition, the H index of a mechanistic enzymologist could be very different from that of a molecular cell biologist because of differences in what types of papers are published in a given subfield and how often a group of researchers cites each other’s papers. If I happened to work in a smaller field, my findings might lead to the rewriting of textbooks without garnering many citations. And now, in the age of online libraries, fewer authors seem to cite original articles, relying instead on citations of review articles.
In 2007, the European Association of Science Editors issued a statement recommending that journal impact factors be used "only – and cautiously – for measuring and comparing the influence of entire journals, but not for the assessment of single papers, and certainly not for the assessment of researchers or research programs either directly or as a surrogate." This is an important document and has led to changes in Europe and elsewhere.
Earlier this year, the German funding agency Deutsche Forschungsgemeinschaft limited applicants to citing only particularly significant publications to reduce the importance placed on publication lists and numerical indices. The U.S. National Institutes of Health guidelines also have changed: NIH now encourages applicants to limit the list of selected peer-reviewed publications to no more than 15 based on importance to the field and/or relevance to the proposed research. Let us hope that similar policies that emphasize quality rather than quantity soon will be adopted worldwide.