Scientists Offer New Formula to Predict Career Success – Percolator – The Chronicle of Higher Education

I’m posting this now without further comment. I hope to find time to think this through soon!

In the meantime, regular readers of the CSID blog will surely find this proposed enhancement of the h-index of interest.

UPDATE: I also just stumbled upon this related editorial in Nature, which contains a link to a little tool you can play with online to predict your future h-index.
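
For the curious, here is a rough sketch in Python of what this sort of calculation might look like. The h_index function is just the standard definition of the metric; predicted_h is a stand-in linear "tweak" of the kind the article describes, with made-up placeholder weights – the actual fitted coefficients live in the paper and the online tool linked above.

    from math import sqrt

    def h_index(citations):
        # h = the largest h such that h papers have at least h citations each
        ranked = sorted(citations, reverse=True)
        return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

    def predicted_h(h, n_papers, years_active, n_journals,
                    intercept=0.8, w_h=1.0, w_n=0.3, w_y=-0.1, w_j=0.02):
        # A linear adjustment of the current h-index using career features,
        # in the spirit of the predictor described above. Every weight here
        # is an illustrative placeholder, not a published coefficient.
        return (intercept + w_h * h + w_n * sqrt(n_papers)
                + w_y * years_active + w_j * n_journals)

    # Worked example: these citation counts give h = 4
    # (4 papers with at least 4 citations each).
    cites = [25, 8, 5, 4, 3, 0]
    h = h_index(cites)
    print(h)                                          # 4
    print(predicted_h(h, n_papers=len(cites),
                      years_active=6, n_journals=3))  # ~4.99

Note that the whole prediction step is just a weighted sum of résumé features, which is part of what makes it fast – and part of what the comment below pushes back on.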

One Response to Scientists Offer New Formula to Predict Career Success – Percolator – The Chronicle of Higher Education

  1. Kelli Barr says:

    Here’s what jumped out at me:

    Kording readily admitted his method—tweaking the h-index by adding numbers such as years of publication and number of distinct journals—cannot be perfect and shouldn’t be a substitute for thoughtful human analysis of a researcher’s past writings and future goals. But even careful subjective reviews have their limits, especially in the real world of deadline pressures and global competition, Kording said.

    “Both ways of evaluating people,” he said, “have advantages and disadvantages.”

    Ok. If we are just thinking of evaluation as a choice between objectivity and subjectivity, then sure; point taken. But even if we take this distinction to be the operative one (which I don’t – keep reading!), what are the advantages and disadvantages of each, and are they comparable? In other words, are the respective advantages of ‘subjective’ and ‘objective’ evaluation schemes equally advantageous, and their disadvantages equally disadvantageous?

    If you look at the technical advantages and disadvantages of each, they are not as dissimilar as one would think. The advantages and disadvantages of ‘subjective’ evaluation (i.e., peer review) boil down to something like “nuanced and detailed, but slow”; likewise, ‘objective’ evaluation boils down to something like “fast, but generic and homogenizing.” Each can promote perverse incentives, so the bottom-line distinction at play is fast vs. slow – which is insubstantial, to say the least.

    I would argue that the appropriate conclusion to draw is that weighing the technical advantages and disadvantages of each kind of scheme is really beside the point, for the following reason: why must we choose between ‘subjective’ evaluation (with its attendant disadvantages) and ‘objective’ evaluation (with its attendant disadvantages) at all? Why frame this as a choice between the two rather than posing a deeper question?

    Preliminarily, I would answer that presuming a choice between only these two betrays that the question of how to evaluate is being framed as a purely technical matter, one that treats peer review and metrics as equivalent in the sense that both are measuring instruments. Only one of them, however, was designed to be a measuring instrument, so the dichotomy is a false one.
