“It’s like we have a fresh snowfall across this docu-plain, and we have fresh footprints everywhere,” he says. “That has the potential to really revolutionize how we measure impact.”
I agree with this. But combine it with Open Access policies, and it gets even more interesting. That's because the so-called "downstream usage" of scholarship will include use by non-traditional users (that is, non-academics). So now we have the potential to track not merely impact on the scholarly community but also, dare I say it, broader impacts. The point is, once we have real altmetrics, we're dealing with a different sort of impact.
Peer review “has served scholarship well” but has become slow and unwieldy and rewards conventional thinking. Citation-counting measures such as the h-index take too long to accumulate. And the impact factor of journals gets misapplied as a way to assess an individual researcher’s performance, which it wasn’t designed to do.
Perhaps this is all true. But the same issue arises here with 'peer review' that arose with 'impact': now we're talking about something different, some sort of extension of traditional peer review. If altmetrics is to be taken seriously (and this is a political as well as an epistemological point), then we need to take seriously the question: who ought to count as a peer?
Jason Priem seems aware of this political dimension, at least with regard to citations:
“I’m not down on citations,” Mr. Priem says. “I’m just saying it’s only part of the story. It’s become the only part of the story we care about.”
The danger with any metric is that it becomes the whole story (the metric becomes the message). The article does a decent job of addressing this issue, though it gets a bit lost in the discussion of the technical difficulties associated with altmetrics.
The inclusive, diffuse approach that drives altmetrics may actually help protect it.
I hope altmetrics includes altpeerreview, too!