Peer review, altmetrics, and ex ante broader impacts assessment – a proposal

altmetrics12
ACM Web Science Conference 2012 Workshop
Evanston, IL, 21 June 2012

Britt Holbrook
britt.holbrook@unt.edu
University of North Texas, USA

To state the obvious, peer review is used in all sorts of contexts other than the prepublication review of journal articles. The most common of these other contexts is the peer review of grant proposals for research funding. In this presentation, I consider whether altmetrics may help proposers and reviewers address what has become a divisive issue in the peer review of grant proposals: addressing and assessing the potential broader impacts of the proposed activities.

Research funding agencies have a relatively long history (at least since the middle of the 20th century) of using peer review to assist in their decisions regarding funding for grant proposals (Holbrook 2010; Frodeman et al. 2012). Generally, proposers have been asked to describe, and peer reviewers have been asked to provide ex ante evaluations of, the intellectual merit of the proposed activities. Despite some criticisms of the use of peer review in a research funding context (e.g., Roy 1985), for the most part proposers and reviewers have been fairly content to describe and to render judgments on the intellectual merit of the proposed activities.

Since the end of the 20th century, however, funding agencies have begun to ask proposers to address, and reviewers to assess, not only the intellectual merit of the proposed activities, but also their potential broader societal impacts. In contrast to their acceptance and use of intellectual merit criteria, both proposers and reviewers have shown marked resistance to such impact criteria (Holbrook 2005, 2012). Indeed, some (e.g., Bozeman and Boardman 2009; Sarewitz 2011) assert that asking researchers to address and assess broader impacts is either of “dubious benefit” or a matter of policy makers “passing the buck” on difficult questions of science policy to researchers who are not well-equipped to answer them.

Indeed, the introduction of impact considerations into the peer review of grant proposals has prompted critics to one-up each other on their criticisms of the system. Whereas Ziman (1983) decried peer review as “a higher form of nonsense,” Rip (2000) sees the introduction of impact criteria into the peer review process as creating “an even higher form of nonsense.” In the UK, where the so-called “impact agenda” has taken hold not only in the ex ante peer review of grant proposals, but also in the nationwide ex post quality assessment scheme of the Research Excellence Framework, researchers are decrying impact considerations as “sheer lunacy” and holding mock funerals for science. I have argued, however, that ex ante impact assessment is in principle no less plausible than ex ante assessment of intellectual merit (Holbrook and Frodeman 2011). Nevertheless, the fact of resistance to impact remains.

If I am correct in arguing that ex ante impact assessment is in principle no more fortune-telling than ex ante assessment of intellectual merit, then the question becomes less one of whether such a thing can be done and more one of how to go about it. One important point is that researchers need convincing not only that ex ante impact assessment is plausible, but also that it is a good thing for them to do. This will involve more closely linking intellectual merit, scholarly impact, and the notion of broader impacts.

Enter altmetrics, in particular the work of Cameron Neylon (at the Beyond Impact Workshop and at altmetrics11) and the subsequent work of Heather Piwowar and Jason Priem (at total-impact). The work of Neylon, Piwowar, and Priem presents exciting possibilities for going beyond the typical measures of the impact of research outputs, especially citations in the scholarly literature. Neylon has introduced the notion of thinking of impact as the “re-use” of research, and Piwowar and Priem have begun to develop tools for measuring re-use that go beyond citation counts or the h-index and include various non-traditional forms of scholarly publication and dissemination (for instance, research notes, data sets, blogs, Twitter, and so on).

Of course, interactions and mentions on Twitter and the like (for instance, Facebook “likes” or Google +1s) can only be captured during dissemination or after the research has been disseminated. Total-impact-type reports of re-use would therefore tend to lend themselves to formative (ex nunc) or summative (ex post) evaluations. One could, for instance, create a total-impact report for a research output, disseminate the output through various means, then monitor its re-use to see what is working (or where work still needs to be done) to encourage its re-use. Updating the report would give an account of the impact (in terms of re-use) the output has had up to that point. Yet more work needs to be done to make total-impact a tool that could be useful for ex ante impact assessment.
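The formative-evaluation loop described above could be sketched as follows. This is an illustrative sketch only: the metric names and the snapshot structure are assumptions for the example, not the actual total-impact report schema.

```python
# Sketch of the formative-evaluation loop: compare two snapshots of a
# total-impact-style report to see where re-use is growing and where
# dissemination work remains. Metric names are hypothetical.

def reuse_delta(earlier, later):
    """Return per-channel growth between two metric snapshots."""
    return {channel: later.get(channel, 0) - earlier.get(channel, 0)
            for channel in set(earlier) | set(later)}

# Two hypothetical snapshots of the same research output, months apart.
january = {"tweets": 12, "blog_mentions": 1, "dataset_downloads": 40}
june    = {"tweets": 30, "blog_mentions": 4, "dataset_downloads": 41}

growth = reuse_delta(january, june)

# Channels showing little growth flag where re-use still needs encouraging.
stalled = [channel for channel, delta in sorted(growth.items()) if delta < 5]
```

Rerunning such a comparison after each new dissemination effort is one way to make the "see what is working" step concrete.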

This is where Neylon’s notion of “re-usability” is key: “While the outputs of much research will see limited re-use, particularly in the short term, researchers can be judged on the extent to which they have maximised the ability of others to build upon or re-use the outputs of their research” (Neylon, altmetrics11). Put in terms of ex ante impact assessment, the “have maximised” would change into something like “propose to maximize.” One could imagine not only including in the grant proposal a plan to deposit one’s research in an institutional repository, to pay for Gold OA, to tweet its publication, and so on, but also citing the (still growing) relevant literature that supports the claim that such means of dissemination tend to increase the re-use of research. One could also propose to use total-impact for formative and summative evaluation.

Although altmetrics are an exciting supplement to ex ante judgments of both intellectual merit and broader impacts, they should not replace peer review. Moreover, even if altmetrics provide evidence of re-use and a justification for claims of maximizing re-usability – which I think they do – it remains to be seen whether (1) altmetrics can help researchers embrace the idea of impact, and (2) altmetrics and the notion of re-use are sufficient to account for broader impacts.

References

Bozeman, B and Boardman, C (2009). Broad impacts and narrow perspectives: passing the buck on science and social impacts. Social Epistemology 23(3-4), 183-198.

Frodeman, R, Holbrook, JB, Mitcham, C, and Hong, X (2012). Peer Review, Research Integrity, and the Governance of Science – Practice, Theory, and Current Discussions. Beijing: Peoples Publishing House.

Holbrook, JB (2005). Assessing the science-society relation: the case of the U.S. National Science Foundation’s second merit review criterion. Technology in Society 27(4), 437-451.

Holbrook, JB (2010). Peer Review, in The Oxford Handbook of Interdisciplinarity, Robert Frodeman, Julie Thompson Klein, and Carl Mitcham, eds. Oxford: Oxford University Press, 321-332.

Holbrook, JB (2012). Re-assessing the science – society relation: The case of the US National Science Foundation’s broader impacts merit review criterion (1997-2011).

Holbrook, JB and Frodeman, R (2011). Peer review and the ex ante assessment of societal impacts. Research Evaluation, 20(3), September 2011, pp. 239-246. DOI: 10.3152/095820211X12941371876788.

Neylon, C (2011). Beyond Impact Workshop Report, published version, 13 June.

Neylon, C (2011). Re-use as Impact: How re-assessing what we mean by “impact” can support improving the return on public investment, develop open research practice, and widen engagement [v0].

Piwowar, H and Priem, J (2011). Total-impact.

Rip, A (2000). Higher forms of nonsense. European Review 8(4), 467-486.

Roy, R (1985). Funding Science: The Real Defects of Peer Review and an Alternative to It. Science, Technology, & Human Values 10(3), 73-81.

Sarewitz, D (2011). The dubious benefits of broader impact. Nature 475, 141. Published online 13 July 2011. doi:10.1038/475141a

Ziman, J (1983). The collectivization of science. Proceedings of the Royal Society B219, 1-19.

Creative Commons License

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

2 Comments

  1. ian mulvany
    Posted June 21, 2012 at 12:35 pm | Permalink

    Being able to track grant money as it relates to published work would also help here.

  2. Posted June 22, 2012 at 12:34 pm | Permalink

    Thanks, Ian. Yes, I definitely think funding agencies could use altmetrics as a way to capture some of the impacts of the research they fund — impacts that go beyond mere publication counts or bibliometrics.

    In terms of ex ante determinations of impact, though — before the research is undertaken — I suspect that altmetrics could help build a kind of researcher profile that would give some indication of her or his involvement in activities that are likely to promote broader societal impact. Adding altmetrics to CVs, then, could help illuminate an area that many reviewers still find difficult to maneuver.
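The researcher-profile idea in the comment above could be sketched like this: roll per-output altmetrics up into a CV-style summary of engagement activity. The output titles and metric fields here are hypothetical, chosen only to illustrate the aggregation.

```python
# Illustrative sketch (not a real tool) of adding altmetrics to a CV:
# aggregate per-output metrics into profile-level totals that give
# reviewers an indication of a researcher's dissemination activity.

def profile_summary(outputs):
    """Aggregate per-output metrics into totals for a researcher profile."""
    totals = {}
    for output in outputs:
        for metric, count in output["metrics"].items():
            totals[metric] = totals.get(metric, 0) + count
    return totals

# Hypothetical CV entries with their associated altmetrics.
cv_outputs = [
    {"title": "Article A", "metrics": {"tweets": 8, "blog_mentions": 2}},
    {"title": "Dataset B", "metrics": {"downloads": 120, "tweets": 3}},
]

summary = profile_summary(cv_outputs)
```

Even a simple roll-up like this would let a reviewer see at a glance whether a proposer has a track record of activities likely to promote broader impact.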

    If anyone knows of examples of altmetrics added to CVs, I’d be interested to see them.

    Also very interested to see examples of altmetrics included in grant proposals. Of course, I understand if people are less willing to share those!

