January 2011

Impact factors – what the H?

In many parts of the world, faculty appointments, promotions and grant evaluations are based on the number of papers a scientist has published combined with the impact factor of the journals in which the work appeared. A journal’s impact factor is the average number of citations received in a given year by the papers that journal published during the two preceding years. Journals that publish a small number of papers of relatively high impact therefore have high impact factors. But not every paper in a high-impact journal is itself of high impact, and the publication of article retractions actually enhances a journal’s impact factor.
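
To make the arithmetic concrete, here is a worked example with made-up numbers (the journal and the figures are hypothetical, not drawn from any real citation index):

\[
\mathrm{IF}_{2010} \;=\; \frac{\text{citations received in 2010 by papers published in 2008--2009}}{\text{papers published in 2008--2009}} \;=\; \frac{400}{100} \;=\; 4.0
\]

So a journal whose 100 papers from the two preceding years attracted 400 citations this year would report an impact factor of 4.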

There are other easy ways for a journal to manipulate its impact factor (1, 2). For example, well-written, timely review articles are widely cited, and journals such as the Annual Review of Biochemistry and Nature Reviews Molecular Cell Biology have some of the highest impact factors. (Now I better understand why an editor once encouraged me to cite previous reviews in the same review series when drafting my own review article.) In addition, Nature “News and Views” pieces are wonderful for readers, but they also are wonderful for editors, because they count toward citations (when cited) yet don’t count toward the total-number-of-papers-published denominator. “News and Views” pieces always include citations of other articles within a given issue, further increasing the impact factor. Finally, a blockbuster paper can skew a journal’s impact factor significantly: a single 2008 paper in Acta Crystallographica Section A was cited more than 6,600 times, raising the journal’s impact factor from approximately two to a value of 49.926 – higher than that of Nature or Science.

Some search committees use the H index to compare the scientific impact of a candidate’s research (3, 4). According to Wikipedia, “The H index is based on the set of the scientist’s most cited papers and the number of citations that they have received in other people’s publications … a scholar with an index of h has published h papers each of which has been cited by others at least h times.” Another impact metric! Wouldn’t it be great if a simple algorithm could simplify comparison of scientific impact and stature? If only it were that simple.
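
For readers who prefer to see the definition spelled out, here is a minimal sketch of how an h value can be computed from a list of per-paper citation counts. The function name and the sample numbers are purely illustrative, not part of any standard bibliometric tool:

    # Minimal sketch: compute an h index from per-paper citation counts.
    # The function name and the example numbers are illustrative only.
    def h_index(citations):
        # Rank papers from most to least cited, then find the largest rank h
        # such that the paper at rank h has at least h citations.
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # A scientist whose papers were cited 25, 8, 5, 4, 3 and 1 times has h = 4:
    # four of the papers have each been cited at least four times.
    print(h_index([25, 8, 5, 4, 3, 1]))  # -> 4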

Like the sizes of our noses and ears, H values reflect longevity as much as quality and can never decrease with age, even if an individual leaves science (3). Younger scientists are at an instant disadvantage because the total number of papers published influences the value. H indices for female scientists also suffer in comparison with those for males, because women apparently publish fewer papers during their careers than their male counterparts do (4). In addition, the H index of a mechanistic enzymologist could be very different from that of a molecular cell biologist because of differences in the types of papers published in a given subfield and in how often researchers within it cite one another’s papers. If I happened to work in a smaller field, my findings might lead to the rewriting of textbooks without ever garnering many citations. And now, in the age of online libraries, fewer authors seem to cite original articles, relying instead on citations of review articles.

In 2007, the European Association of Science Editors issued a statement recommending that journal impact factors be used "only – and cautiously – for measuring and comparing the influence of entire journals, but not for the assessment of single papers, and certainly not for the assessment of researchers or research programs either directly or as a surrogate." This is an important document and has led to changes in Europe and elsewhere.

Earlier this year, the German funding agency Deutsche Forschungsgemeinschaft limited applicants to citing only particularly significant publications to reduce the importance placed on publication lists and numerical indices. The U.S. National Institutes of Health guidelines also have changed: NIH now encourages applicants to limit the list of selected peer-reviewed publications to no more than 15 based on importance to the field and/or relevance to the proposed research. Let us hope that similar policies that emphasize quality rather than quantity soon will be adopted worldwide.

COMMENTS:

All the attempts to come up with quantitative values are flawed, as pointed out by the article. They have some value when considered with the caveats of longevity, gender, etc. But what really should matter most in promotions is what leaders in your field have to say about the value of your contributions in confidential letters.

Henry Jay Forman, Professor of Chemistry and Biochemistry, University of California, Merced; Adjunct Professor of Gerontology, University of Southern California; President-elect, Society for Free Radical Biology and Medicine

 

I believe the concerns over research metrics are overblown and ignore some of the good that they do by making journal prestige a widely known quantity. It is currently in vogue to bash quantitative analysis of research productivity, but we should not forget that the alternative is a scientific evaluation dominated by bias and what are essentially established social networks, also known as ‘old boys’ clubs’. Science strives to be a merit-based profession and quantitative metrics can act as an antidote to personal politics by revealing highly productive individuals who might not be otherwise famous because they: spend less time on the conference circuit (perhaps to spend more time with family), work outside one of the geographical hotspots of science, or are new to science or a particular field. It is true that a journal’s impact factor (or any other immediate measure) cannot be an accurate measure of a paper’s impact on a field, which in some cases is only apparent after decades have passed. The reason th

 

To consider "times you were cited just to pad the introduction section" as a negative sounds like the idea of someone who never did anything of consequence. Most articles cited in an introduction are cited positively: they are seminal, they added new concepts or insights, or they summarized the area in a clear, concise fashion. The omission of these citations would be intellectually dishonest and a denial of their impact on the author's work.

Joseph A. D'Anna

 

The Acta Crystallographica paper was G.M. Sheldrick's paper on the SHELX program. His papers are cited by thousands of authors. There must be a lot of crystallographers out there. Journals seeking to increase their impact factors need to seek out people like this.

 

Impact factors are largely a waste of time. If we really wanted to measure the value of a scientist's publications, we would read the articles and evaluate the science. Simple numbers are what we use as a "metric" because they are easy. They do not measure value; they mainly reflect popularity. Value and depth are what science should be about, not popularity.

David Ballou, University of Michigan

 
