Expanding altmetrics to include policy documents will boost their reputation

Altmetrics may prove to be a more flexible and versatile tool to inform research assessment, if academics get behind them
[Image: Altmetrics offer more than unscholarly tweet counting. Photograph: Ognen Teofilovski/Reuters]

Alternative metrics, or altmetrics as they are more commonly known, have received a lot of attention recently. Blogs, conferences and papers examine these new measures of attention surrounding published research and consider whether they are the good, the bad, or the ugly brother of bibliometrics – an indication of the number of research papers published and how often they are cited.

Is a direct comparison between the two appropriate? I am not so sure. Whereas bibliometrics are based on a (limited) set of citation data in closed systems, altmetrics seem to have no boundaries at all.

The roots of altmetrics lie in counting and analysing tweets (hence the direct comparison with citation counts). They have since evolved to encompass mentions or shares of articles on other forms of social and traditional media, online academic tools including post-publication peer-review forums and reference managers, and, in some instances, publicly available documents such as patent and policy documents.
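To make the contrast with a single citation count concrete, here is a minimal sketch in Python, using invented mention records; the source labels and fields are illustrative only, not any provider's actual schema:

```python
from collections import Counter

# Invented mention records for a single article. In practice these would
# come from an altmetrics data provider; the fields here are illustrative.
mentions = [
    {"source": "twitter", "url": "https://twitter.com/example/status/1"},
    {"source": "twitter", "url": "https://twitter.com/example/status/2"},
    {"source": "news",    "url": "https://news.example.org/story"},
    {"source": "blog",    "url": "https://blog.example.org/post"},
    {"source": "policy",  "url": "https://www.nice.org.uk/guidance/example"},
]

# An altmetric "attention profile" is a tally per source type, rather than
# the single citation count on which bibliometrics relies.
profile = Counter(m["source"] for m in mentions)
print(profile)
# Counter({'twitter': 2, 'news': 1, 'blog': 1, 'policy': 1})
```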

I would argue that altmetrics alone, in their current form, cannot be used to judge the quality of research or its outputs, either at the point of discovery or at the point of evaluation. Nevertheless, pending improvements to the underlying data sources, it is likely that altmetrics will play a crucial role in informing the research assessment and impact agenda.

Altmetrics: bridging the gap between social metrics and science metrics?

The advent of bibliometrics (as a consequence of the journal impact factor) led to what is commonly recognised as the Stem/non-Stem divide – whereby disciplines that publish more frequently, and in journals rather than books or monographs, tend to have better coverage. This was borne out by the 2009 Hefce bibliometrics pilot, run to assess whether citation metrics could serve as the basis of research assessment.

Would that trend of division continue with the use of supplementary altmetrics? I firmly believe that altmetrics offer a unique chance to bring Stem and non-Stem together again. What better way to build on the current process of research evaluation (and the recent introduction of impact assessment in the UK) than by gathering, publishing and analysing data on scholarly influence among audiences beyond the scholarly sphere?

Where to find evidence of such influence? In public policy documents. Altmetric for Institutions (a new tool that collates and disambiguates the online mentions of any published article) includes article mentions in such sources – currently NICE guidelines and the NHS Evidence base – and this coverage will hopefully be extended to track similar outputs from other areas of scholarly activity, in particular the humanities and social sciences.

Tracking the referencing of published research in policy documents lifts alternative metrics out of unscholarly tweet counting (the value of which is still being debated), and would be useful for all disciplines, Stem and non-Stem alike.
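As a rough illustration of what such tracking involves, the sketch below filters an article's mentions down to those originating from policy sources. The domain list and record format are hypothetical, not the actual Altmetric for Institutions pipeline:

```python
# Hypothetical filter: keep only mentions whose URL points at a known
# policy-document source. The domain list is illustrative and would need
# to grow as coverage extends to other areas of scholarly activity.
POLICY_DOMAINS = ("nice.org.uk", "evidence.nhs.uk")

def policy_mentions(mentions):
    """Return the subset of mention records that cite a policy source."""
    return [m for m in mentions
            if any(domain in m["url"] for domain in POLICY_DOMAINS)]

sample = [
    {"source": "twitter", "url": "https://twitter.com/example/status/1"},
    {"source": "policy",  "url": "https://www.nice.org.uk/guidance/example"},
]
print(policy_mentions(sample))
# [{'source': 'policy', 'url': 'https://www.nice.org.uk/guidance/example'}]
```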

Can altmetrics substitute for bibliometrics?

Any alternative form of measurement or assessment will face the same fate, and the same discussion, that the use of citations encountered: is a citation positive or negative? What about self-citation?

Euan Adie, founder of Altmetric, has clarified a key point about altmetrics: they are designed to highlight the attention a scholarly output has received, not to be a measure of quality (a role many ascribe to the impact factor). A lot of attention is not necessarily a good thing. Used as a supplementary tool alongside existing and trusted forms of research assessment, altmetrics may well prove to be more flexible and versatile in informing research assessment than bibliometrics have ever been.

What will the future of altmetrics look like?

It is a little presumptuous to speculate on the future of altmetrics when the basics and definitions of these measures are yet to be agreed. However, academia (its scholars, librarians and research administrators) has a chance to drive the development and application of altmetrics. With a defined, verified and trusted set of underlying data, we would have the opportunity to showcase the broader impact of our work.

There are plenty of opportunities for all involved to shape the future of altmetrics – for example, Hefce is currently consulting the scholarly community to establish how (and which) metrics could be used to further evaluate academic research.

Undoubtedly there will be intense scholarly dispute over how, where and to what extent altmetrics can or should be used. That is a healthy development: it is those involved in academia who should discuss and evaluate these new measures, so that the potential for altmetrics to assist in the research evaluation process is realised.

Juergen Wastl is head of research information at the University of Cambridge – follow him on Twitter @juergen_wastl
