Epistasis Blog

From the Computational Genetics Laboratory at Dartmouth Medical School (www.epistasis.org)

Thursday, August 16, 2012

Journal Impact Factor

There has been a lot of discussion online over the last few days about the value of journal impact factors for judging the quality of publications. This matters because faculty promotion and tenure decisions are often based on measures of impact. The following are several posts calling into question the use of journal impact factors.

1) Nature editorial from 2005 titled "Not-so-deep impact: Research assessment rests too heavily on the inflated status of the impact factor"

2) A 2008 note about impact factors from the Editor-in-Chief of Nature titled "Escape from the impact factor"

3) Another Nature editorial from 2010 titled "Dissecting our impact factor"

4) A PNAS editorial from 2010 that says "placing too much emphasis on publication in high impact factor journals is a recipe for disaster"

5) A 2010 opinion piece in Front. Psychology titled "Are scientists nearsighted gamblers? The misleading nature of impact factors" - be sure to read the criticism of this piece in the comments by Pep Pàmies. The numbers presented seem suspect.

6) A provocative blog post by Stephen Curry titled "Sick of impact factors"

7) A blog post by Tom Webb titled "My own personal Impact Factor"

8) Mendeley page by Jonathan Eisen on papers that discuss impact factors.

9) The San Francisco Declaration on Research Assessment (DORA) released in 2013

10) Editorial in Science by Bruce Alberts from 2013 commenting on DORA

11) Editorial in The EMBO Journal from 2013 commenting on DORA

My personal view is that we should judge faculty based on three measures. These were originally suggested by Dr. John Blangero.

1) Total Citations. This is a good measure of impact. The more your papers are cited, the better your impact. This is especially true for your first-author and senior-author papers.

2) Total Publications. This is a good measure of how hard you work. It takes time to write, submit, revise, and publish papers.

3) Competitive Funding. This is a good measure of how your peers view your work. It is tough to get a grant funded by the NIH or NSF if you are not doing timely, innovative, and significant work with solid methods.

1 Comment:

At 9:59 PM, Blogger David J. States said...

As Curry points out, impact factors use the wrong statistics. Applying an average to a highly skewed distribution with a long tail is simply not meaningful. Second, as is well known, different classes of articles generate different frequencies of citation. A number of journals intentionally began carrying review articles in order to boost their impact factors. Does the fact that a journal publishes review articles somehow magically make the original research that it publishes more meaningful? Finally, and most importantly, impact factors are an entirely introspective metric that says nothing about the impact scientific work has in the real world of medicine and commerce.
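The statistical point is easy to demonstrate. In a sketch with invented citation counts (the numbers below are hypothetical, not real journal data), the mean - which is what an impact-factor-style average reports - lands far above what a typical article in the journal actually receives:

```python
from statistics import mean, median

# Hypothetical citation counts for one journal's articles:
# most papers get a handful of citations, a few get very many
# (a long right tail), as is typical of citation distributions.
citations = [0, 1, 1, 2, 2, 3, 3, 4, 5, 6, 8, 150, 420]

avg = mean(citations)    # what an impact-factor-style average reports
mid = median(citations)  # what a typical article actually receives

print(f"mean = {avg:.1f}, median = {mid}")
# mean = 46.5, median = 3
```

The two highly cited outliers pull the mean to 46.5 while the median article has only 3 citations, so the "average" describes almost none of the journal's articles.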
