We need negative metrics, too / Nature

Keith Brown, Kelli Barr, and I have a short piece published in the new issue of Nature.

The correspondence also contains a link to a slightly revised version of our original submission. Since Nature keeps everything behind a paywall, here is that link.

Very interested in hearing everyone’s thoughts on the idea that seemingly negative events could be turned into indicators of positive impact.

7 thoughts on “We need negative metrics, too / Nature”

    • Thanks, Carl, for the comment. I’ve no doubt about the value of such improvements in traditional bibliometrics. But the main point we wanted to make was less that there’s something wrong with such metrics and more that such deficiencies allow researchers some room to maneuver. Put differently, the point isn’t that we need to improve existing approaches to bibliometrics (as we think a lot of developments in altmetrics do); it’s rather that all metrics are subject to interpretation, even to the point of turning supposedly negative indicators of impact — such as a refutation or an angry letter — into indicators of positive impact. Suppose the person refuting your work is really important in the field? I think that’s a good thing, as long as we’re not afraid of being proved wrong. Similarly, making certain people — even important ones — angry might well be a sign that one is on to something important.

      • “Suppose the person refuting your work is really important in the field?”

        Why does it matter that the person is important? Possibly, it is his work that is… results can refute other results. Let’s stick to that level and leave people out of scientific arguments.

        • Yes, well, I wouldn’t divide a person from her work in the way you suggest. By saying that a person is “important in the field,” I mean to include the idea that the person is important because of her work. But I do think your comment raises an important issue.

          I recognize the tendency to objectify research, and would agree that this tendency is particularly strong in science (in general). The whole point of replicating results and the concomitant fetishizing of ‘methods’ is precisely to satisfy this drive toward objectivity. Indeed, we tend only to attach a scientist’s name to his work once it’s been refuted or displaced (Newtonian physics) or in order to attempt to do so (charges of ‘Darwinism’ are a case in point).

          But really, don’t you think it’s prima facie absurd to suggest leaving people out of scientific arguments? If not, how about we leave the people out of scientific experiments? How about we leave the people out of writing grant proposals or scientific papers? How about we completely objectify science by having a computer program a computer to program a computer (not sure how far we’d have to go to completely remove all the people from this) to perform scientific research?

          I am not in favor of the death of the author. Nor am I in favor of the death of the PI. Scientists are, after all, people. And some of them are women. Both points are important and should not be objectified away. Need I stress that these are my opinions?

      • Indeed, I agree, and I think that many scholars would naturally already recognize the value (or irrelevance) of all of these things whether or not we call them “metrics.” As you observe, science is a social enterprise, and humans have been finely tuned to recognize social cues like “critiques from someone important” for a long, long time. Yet when we call these things metrics we try to turn them into something algorithmic. I recognize the value of doing so, however imperfect that may be, and only point out in my comment above that we are getting better at ways to add semantic meaning to things like citations, which can make them less crude as metrics. The human brain will always excel at the individual social cues, but the algorithmic metrics could benefit from semantic context. No doubt we need both.

        • I agree with everything you say here.

          We actually tried to use the more ambiguous term ‘indicators’ in place of ‘metrics’, as you can see here (http://cas-csid.cas.unt.edu/?p=4475), which is very close indeed to the version we submitted originally. Of course, there’s a lot of theory behind quantitative indicators, as well. We were really attempting to play with the fact that language, like metrics, is ambiguous (like all our attempts at ostension).

  1. Pingback: Research impact: We need negative metrics too | Reason & Existenz
