Impact Factors: A Broken System


How big is your impact? Sedan Plowshare Crater, 1962. From Flickr by The Official CTBTO Photostream

If you are a researcher, you are very familiar with the concept of a journal’s Impact Factor (IF). Basically, it’s a way to grade journal quality. From Wikipedia:

The impact factor (IF) of an academic journal is a measure reflecting the average number of citations to recent articles published in the journal. It is frequently used as a proxy for the relative importance of a journal within its field, with journals with higher impact factors deemed to be more important than those with lower ones.
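The arithmetic behind the headline number is simple, which is part of the problem. The standard two-year impact factor for year Y is the number of citations received in Y to items published in Y−1 and Y−2, divided by the number of citable items published in those two years. A minimal sketch of that calculation (the journal and its numbers are made up for illustration):

```python
def two_year_impact_factor(citations_in_year, citable_items):
    """Standard two-year journal impact factor.

    citations_in_year: citations received this year to articles
        published in the journal over the previous two years.
    citable_items: number of "citable" items (research articles,
        reviews) the journal published in those two years.
    """
    return citations_in_year / citable_items

# Hypothetical journal: 200 citations in 2013 to the 80 articles
# it published in 2011-2012.
print(two_year_impact_factor(200, 80))  # 200 / 80 = 2.5
```

Note that this is a mean over a highly skewed citation distribution, so a handful of heavily cited papers can dominate the score — one reason it says so little about any individual article in the journal.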

The IF was devised in the 1970s as a tool for research libraries to judge the relative merits of journals when allocating their subscription budgets. However, it is now being used as a way to evaluate the merits of individual scientists, something for which it was never intended. As Björn Brembs puts it, “…scientific careers are made and broken by the editors at high-ranking journals.”

In his great post, “Sick of Impact Factors”, Stephen Curry says that the real problem started when impact factors began to be applied to papers and people.

I can’t trace the precise origin of the growth but it has become a cancer that can no longer be ignored. The malady seems to particularly afflict researchers in science, technology and medicine who, astonishingly for a group that prizes its intelligence, have acquired a dependency on a valuation system that is grounded in falsity. We spend our lives fretting about how high an impact factor we can attach to our published research because it has become such an important determinant in the award of the grants and promotions needed to advance a career. We submit to time-wasting and demoralising rounds of manuscript rejection, retarding the progress of science in the chase for a false measure of prestige.

Curry isn’t alone. Just last week Bruce Alberts, Editor-in-Chief of Science, wrote a compelling editorial about Impact Factor distortions. Alberts’ editorial was inspired by the recently released San Francisco Declaration on Research Assessment (DORA). I think this is one of the more important declarations/manifestoes peppering the internet right now, and it has the potential to really change the way researchers approach scholarly publishing.

DORA was created by a group of editors and publishers who met at the Annual Meeting of the American Society for Cell Biology (ASCB) in 2012. Basically, it lays out all the problems with impact factors and provides a set of general recommendations for different stakeholders (funders, institutions, publishers, researchers, etc.). The goal of DORA is to improve “the way in which the quality of research output is evaluated”. Read more on the DORA website and sign the declaration (I did!).

An alternative to IF?

If most of us can agree that impact factors are not a great way to assess researchers or their work, then what’s the alternative? Curry thinks the solution lies in Web 2.0 (quoted from this post):

…we need to find ways to attach to each piece of work the value that the scientific community places on it through use and citation. The rate of accrual of citations remains rather sluggish, even in today’s wired world, so attempts are being made to capture the internet buzz that greets each new publication…

That’s right, skeptical scientists: he’s talking about buzz on the internet as a way to assess impact. Read more about “alternative metrics” in my blog post on the subject: The Future of Metrics in Science.  Also check out the list of altmetrics-related tools at altmetrics.org. The great thing about altmetrics is that they don’t rely solely on citation counts, plus they are capable of taking other research products into account (like blog posts and datasets).
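To make the idea concrete, here is a toy sketch of how an altmetrics-style score might aggregate several kinds of attention for a single research output. The event types and weights below are hypothetical illustrations, not the formula of any real altmetrics provider:

```python
# Toy altmetrics-style aggregation for one research output.
# All event types and weights here are hypothetical, chosen only
# to illustrate combining citation and non-citation signals.
WEIGHTS = {
    "citation": 5.0,       # traditional scholarly citation
    "blog_post": 3.0,      # discussion on a research blog
    "bookmark": 1.0,       # saved in a reference manager
    "tweet": 0.5,          # mention on social media
    "dataset_reuse": 4.0,  # the underlying dataset was reused
}

def altmetric_score(events):
    """Weighted sum over counts of attention events."""
    return sum(WEIGHTS.get(kind, 0.0) * count for kind, count in events.items())

# A paper with modest citations but lively online discussion:
paper_events = {"citation": 4, "blog_post": 2, "tweet": 30, "bookmark": 12}
print(altmetric_score(paper_events))  # 4*5 + 2*3 + 30*0.5 + 12*1 = 53.0
```

The point of the sketch is that signals accrue at very different speeds: citations take years, while blog posts, bookmarks, and tweets arrive within days of publication, so a composite like this can register impact long before the citation record catches up.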

