January 2011

Impact factors – what the H?

Students and postdoctoral fellows still feel that publishing in one of the very elite journals is essential to their success. Why is this so? It’s primarily because scientists who sit on funding and hiring panels are easily wowed by candidates who do publish in the top journals. Looking at the name of a journal gives panel members an excuse to be lazy and not read the paper itself. Each of us has to be the judge of scientific significance, and we must not forget that elite journals tend to seek out trendy science – and simply refuse to make additional space for the broad array of elegant analysis and the diversity of outstanding discoveries that might be submitted to them. That’s OK; trendy science sells magazines, but a lot of excellence can be found elsewhere.

Attention, evaluators: when reading a curriculum vitae, do not rely solely on journal names; please look more closely at the work and judge its impact for yourself. Hiring and grant panelists are often asked to evaluate science (or scientists) outside their particular research areas. Authors and applicants must therefore make it especially clear why their findings are important, both to those working within a particular area and to all biochemists and molecular biologists. How did this work change thinking in the field or answer a longstanding question? At every opportunity, we must all explain, with clarity, the importance of our science. The simple act of highlighting a project’s significance will guide our focus toward the most important questions in biochemistry, molecular biology and biomedical research.


ASBMB President Suzanne Pfeffer (pfeffer@stanford.edu) is a biochemistry professor at the Stanford University School of Medicine.



COMMENTS:

All the attempts to come up with quantitative values are flawed, as the article points out. They have some value when considered with the caveats of longevity, gender, etc. But what really should matter most in promotions is what leaders in your field have to say about the value of your contributions in confidential letters. Henry Jay Forman, Professor of Chemistry and Biochemistry, University of California, Merced; Adjunct Professor of Gerontology, University of Southern California; President-elect, Society for Free Radical Biology and Medicine

 

I believe the concerns over research metrics are overblown and ignore some of the good that they do by making journal prestige a widely known quantity. It is currently in vogue to bash quantitative analysis of research productivity, but we should not forget that the alternative is a scientific evaluation dominated by bias and what are essentially established social networks, also known as ‘old boys’ clubs’. Science strives to be a merit-based profession and quantitative metrics can act as an antidote to personal politics by revealing highly productive individuals who might not be otherwise famous because they: spend less time on the conference circuit (perhaps to spend more time with family), work outside one of the geographical hotspots of science, or are new to science or a particular field. It is true that a journal’s impact factor (or any other immediate measure) cannot be an accurate measure of a paper’s impact on a field, which in some cases is only apparent after decades have passed. The reason th

 

To consider "times you were cited just to pad the introduction section" as a negative sounds like the idea of someone who never did anything of consequence. Most articles cited in introductions are cited positively, because they are seminal, they added new concepts or insights, or they summarized the area in a clear, concise fashion. Omitting these citations would be intellectually dishonest and a denial of their impact on the author's work. Joseph A. D'Anna

 

The Acta Crystall. paper was G. M. Sheldrick's paper on the SHELX program. His papers are cited by thousands of authors. There must be a lot of crystallographers out there. Journals seeking to increase their impact factors need to seek out people like this.

 

Impact factors are largely a waste of time. If we really wanted to measure the value of a scientist's publications, we would read the articles and evaluate the science. Simple numbers are what we use as a "metric" because they are easy. They do not measure value; they mainly reflect popularity. Value and depth are what science should be about, not popularity. David Ballou, University of Michigan

 
