The Role of altmetrics and Peer Review in the Democratization of Knowledge

altmetrics12
ACM Web Science Conference 2012 Workshop
Evanston, IL, 21 June 2012

Kelli Barr
kelli.barr@unt.edu
University of North Texas, USA

Abstract

The altmetrics community, according to its manifesto, has grown around the assumption that the use of peer review as a filtering mechanism for quality scholarship has outlived its usefulness in the changing landscape of scholarly communication (Priem et al. 2010). It is certainly uncontroversial that the introduction of more web-based platforms for the dissemination and discussion of research has changed, and continues to change, the culture of scholarly communication. Journal articles, however, remain the dominant means of disseminating original research in many academic fields, and researchers continue to rely on peer review as a quality screening measure at multiple stages of both the research process and personal career advancement.

Peer review, of course, is not without faults (see especially Cole, Cole, and Simon 1981; Peters and Ceci 1982). Priem and Hemminger (2012) argue that pre-publication journal peer review has not undergone a major transformation since its inception in the 17th century with the Philosophical Transactions of the Royal Society, concluding that the map of peer review no longer corresponds to the shifting territory of academic research. Evidence detailing the shortcomings of peer review is plentiful, but of particular importance is that the sheer glut of information facing modern scholars further strains the ability of peer review to keep pace with recent innovations in scholarly communication. Priem et al. (2010) respond by proposing that better filters are sorely needed; altmetrics are an attempt to provide just that. Altmetricians propose that the simple answer to the shortcomings of current impact measures, including peer review, is more: more, and more diverse, metrics to capture the significance of a piece of academic work (Neylon and Wu 2009), on the basis of which its importance becomes clear. However, I argue that the altmetrics community should resist the attempt to supplant peer review with a host of altmetrics, no matter how diverse.

Any evaluation scheme is simultaneously a system of incentives, and so assessing the impact of research according to a suite of altmetrics will inevitably steer research in particular directions, as peer review has done. An assertion of the value of democratizing venues for academic communication, and therefore the importance of a diversity of measures to provide as complete a picture as possible of the scope of research impact, is implicit in the manifesto. Thus, one of the goals of altmetrics is to promote greater physical access to academic research for non-academics, thereby working to make research more accountable to its public benefactors (Priem et al. 2010). For example, because altmetricians value democratization of research assessment, altmetrics challenge traditional notions of who counts as a peer, a valuable check on the potential for peer review practices to become old boys' networks. But the desire to democratize impact implicitly expressed by the authors seems to be at odds with their explicit claim that better filters are needed to limit the volume of information to which academics are subjected. On the other hand, Neylon and Wu (2009), while embracing the goal of democratization, treat filters as a form of prioritization. They are adamantly opposed to ‘any response that favours publishing less,’ which they call nonsensical ‘either logistically, financially, or ethically.’

This reflects an important contradiction inherent in altmetrics themselves. Any measure of impact for published academic work already incorporates prior judgments from various peer review processes (Frodeman, Holbrook, and Barr 2012). Journal articles are subject to pre-publication peer review; citations, web mentions, or any example of an article’s re-use – to use Neylon’s (2011) terminology – only accrue to articles that have already been vetted by peer review judgments. But even prior to pre-publication peer review, the research that actually makes it to publication was previously subject to grant peer review to determine whether it was worth funding, and the researchers themselves were (and continue to be) subject to peer review evaluations of their academic portfolios for the purposes of departmental evaluations, promotions, hiring, and tenure.

Article impact measures, then, do not escape the shortcomings of the peer review judgments upon which they are based and which they reflect, regardless of what kind of review process takes place, be it traditional expert panels, web-based crowdsourcing, or third-party external certification (such as that proposed by Priem and Hemminger). Peer judgments rendered via web-based crowdsourcing would certainly speed up the review process, and would make reviews visible and accountable, but those judgments would not be immune to the promotion of conventionality (i.e. groupthink, or cultural exclusivity on a larger scale), nor would such a process limit the volume of research published. While using altmetrics to filter for impactful or significant research would somewhat democratize the evaluation of published research, it would also centralize and concentrate decisions regarding the direction of research trajectories into the hands of those who design and administer the metrics – only a subsection of the academic community.

Traditional peer review is a significant check on this capacity for concentration of power because it is the principal means through which academics assert their autonomy, including the right to determine the direction of research as a community. As Chubin and Hackett (1990) argue, peer review serves a multiplicity of functions – epistemic, sociological, political, and economic – within academic communities; that is, peer review itself has a more diverse meaning and functionality than just winnowing out what is excellent scholarship from what is not. The ‘added value’ of peer review for scholarly communication, the establishment of research trajectories, and the evaluation of academic work is that it is conscious of its own activity of judgment-making. Peer review allows for the exercise of autonomy within a framework that has the capacity to be self-critical. Thus, as the value of democratizing knowledge gains momentum in the academic community, altmetrics and peer review can balance one another: the former questions the latter on who counts as a peer, and the latter challenges the former on whose judgment counts in evaluating quality, impactful scholarship.

References:

Chubin, D.E. and E.J. Hackett (1990) Peerless Science: Peer Review and U.S. Science Policy. Albany, NY: State University of New York Press.

Cole, S., Cole, J.R., and G.A. Simon (1981) Chance and Consensus in Peer Review. Science, 214(4523): 881-886.

Frodeman, R., Holbrook, J.B., and K. Barr (2012) The University, Metrics, and the Good Life, in P. Brey, A. Briggle, and E. Spence (eds.) The Good Life in a Technological Age. Routledge Studies in Science, Technology, and Society. New York, NY: Routledge.

Neylon, C. and S. Wu (2009) Article-Level Metrics and the Evolution of Scientific Impact. PLoS Biology, 7(11): e1000242. doi:10.1371/journal.pbio.1000242.

Neylon, C. (2011) Re-use as Impact: How Re-assessing What we Mean by “Impact” Can Support Improving the Return on Public Investment, Develop Open Research Practice, and Widen Engagement. Available at: http://altmetrics.org/workshop2011/neylon-v0/.

Peters, D.P. and S.J. Ceci (1982) Peer-review Practices of Psychological Journals: The Fate of Published Articles, Submitted Again. Behavioral and Brain Sciences, 5(2): 187-195.

Priem J. and B.M. Hemminger (2012) Decoupling the Scholarly Journal. Frontiers in Computational Neuroscience, 6(19). doi: 10.3389/fncom.2012.00019.

Priem, J., Taraborelli, D., Groth, P., and C. Neylon (2010) altmetrics: A Manifesto. Available at: http://altmetrics.org/manifesto/.

Creative Commons License

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

2 Comments

  1. ian mulvany
    Posted June 20, 2012 at 9:01 am

    Thank you very much for the recent comment on http://partiallyattended.com/2012/06/18/some-thought-on-peerreview-and-altmetrics/

    I am in broad agreement with your point, and it is much clearer to me now what you are discussing. I would say that this point on transparency, and on understanding the motivations behind these kinds of decisions, is critical in any system.

    I strongly feel that, at the moment, impact-factor-led decision-making is not fit for purpose for many of the legitimate questions that may be asked of the research literature. For example, if the goal is to change government policy, it may be better to understand the amount of press coverage that a research topic gets, rather than the number of citations. To find out which papers will be of most use to an undergraduate class, it may be better to have information on which papers are being interacted with more by undergraduates across an entire country, or continent, rather than looking at the number of citations.

    These are the kinds of questions that I would like to see altmetrics tools be able to help answer. I know that http://altmetric.com/ and http://mendeley.com provide data that can answer these kinds of questions.
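    To make that concrete, here is a minimal sketch of what pulling such data might look like, assuming the public Altmetric v1 per-article endpoint; the URL and response field names below are illustrative assumptions and may differ from the live service:

        import json
        import urllib.error
        import urllib.request

        def fetch_attention(doi):
            # Query Altmetric's per-article endpoint; untracked DOIs return HTTP 404.
            url = "https://api.altmetric.com/v1/doi/" + doi
            try:
                with urllib.request.urlopen(url) as response:
                    data = json.load(response)
            except urllib.error.HTTPError:
                return {}  # article not tracked (or request refused)
            # Field names here are assumptions based on Altmetric's documented output.
            return {
                "news_mentions": data.get("cited_by_msm_count", 0),
                "tweets": data.get("cited_by_tweeters_count", 0),
                "mendeley_readers": data.get("readers", {}).get("mendeley", 0),
            }

        # e.g. the Neylon and Wu (2009) article cited in the abstract above
        print(fetch_attention("10.1371/journal.pbio.1000242"))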

    Having the use cases clearly described up front, and having transparency around decisions, are good goals. When these metrics start to make real material differences, then people will start to game them. I feel there is a lot to be learnt from the SEO wars on how to deal with such situations.

    In terms of transparency around impact, one of the big gaps is being able to track funding related to published outputs, as grant numbers are not easy to access and are not regularly associated with publications. It’s a complicated area, and one that I feel needs significant investment of effort from grant bodies, publishers, and the creators of tools to help track this information. I would also like to see movement in this direction from the altmetrics community, and I think that speaks directly to the concerns you have outlined in your abstract.

  2. Posted June 25, 2012 at 7:09 pm

    This in particular stood out to me: “When these metrics start to make real material differences, then people will start to game them. I feel there is a lot to be learnt from the SEO wars on how to deal with such situations.”

    I’m not so familiar with the SEO wars, but I agree that attaching material benefits to the outcomes of particular metrics is what drives gaming. This theme was definitely a main concern at the altmetrics12 workshop. But to a large extent, the response to gaming was couched in terms of making the metrics more efficient and effective, and better at resisting efforts to game them.

    I have some issues with this perspective, because I think it tacitly assumes that using metrics to make normative judgments, like what websites one ought to visit (in the case of SEO), is an appropriate application of them. Sometimes this is unavoidable – search efficiency, after all, has been immeasurably improved by page-ranking algorithms – but in many cases it is not only avoidable but ethically dubious. Decisions such as the determination of research impact and excellence have palpable social and political consequences that I would like to see the altmetrics community engage with more.

    In making this point, I’d like to note the existence of a kind of esoteric economy that deals in citation measures as proxies for research excellence and impact, but as of now the evidence for its existence is only anecdotal. Gregg Gordon (from SSRN) mentioned a noteworthy case in his keynote at the workshop, but one thing I would like to see from the community is an attempt to survey the scope of the use of more accepted measures, like citation analysis and the h-index, in promotion, tenure, hiring, and funding decisions in the academy. Richard Price (academia.edu) suggested doing as much, and I think this is critical for the community – getting a handle on how widespread these existing metrics are, and the importance placed on them, will only strengthen the argument that more inclusive and diverse metrics are a fairer use of such numbers for these kinds of decisions.
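    For illustration only (the citation counts are invented), here is a small sketch of how one of those accepted measures, the h-index, is computed from a list of per-paper citation counts:

        def h_index(citations):
            # A researcher has index h if h of their papers each have at least h citations.
            ranked = sorted(citations, reverse=True)
            h = 0
            for rank, count in enumerate(ranked, start=1):
                if count >= rank:
                    h = rank
                else:
                    break
            return h

        # Five papers cited 10, 8, 5, 3, and 1 times give h = 3: three papers have
        # at least 3 citations each, but there are not four with at least 4.
        print(h_index([10, 8, 5, 3, 1]))  # -> 3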

    And I absolutely agree that transparency and contextual appropriateness need to be emphasized when it comes to evaluation via metrics. Linking grants to subsequent publications is important for demonstrating impact. But ethically speaking, federally funded research has, I think, a duty to attempt to influence contexts outside as well as inside the academy. So while linking grants to published work is vital for establishing the trajectory of the research process, it is only part of the story. This needs to be coupled with a description of efforts on the part of researchers to influence situations outside of the academy. I think altmetrics can be incredibly useful for making such descriptions more persuasive.

    *Cross-posted at http://partiallyattended.com/2012/06/18/some-thought-on-peerreview-and-altmetrics/

One Trackback

  1. [...] interesting take on my abstract for the upcoming altmetrics12 workshop from Ian Mulvaney – Head of Technology for a neat new [...]
