A key challenge: the evaluation gap

What are the best ways to evaluate research? This question has received renewed interest in both the United Kingdom and the Netherlands. The changing dynamics of research, the increased availability of data about research, and the rise of the web as an infrastructure for research lead to this question being revisited regularly. The UK funding agency HEFCE has established a steering group to evaluate the evidence on the potential role of performance metrics in the next instalment of the Research Excellence Framework. The British ministry of science suspects that metrics may help to alleviate the pressure that this large-scale assessment exercise puts on the research community. In the Netherlands, a new Standard Evaluation Protocol has been published in which the number of publications is no longer an independent criterion for the performance of research groups. Like the British, the Dutch are putting much more emphasis on societal relevance than in previous assessment protocols. However, whereas the British are exploring new ways to make the best use of metrics, the Dutch pressure group Science in Transition is calling for an end to the bibliometric measurement of research performance.

In 2010 (three years before Science in Transition was launched), we started to formulate our new CWTS research programme and took a different perspective on research evaluation. We defined the key issue not as a problem of too many or too few indicators. Neither do we think that using the wrong indicators (either bibliometric or peer review based) is the fundamental problem, although misinterpretation or misuse of indicators certainly does happen. The most important issue is the emergence of a more fundamental gap between, on the one hand, the dominant criteria in scientific quality control (in peer review as well as in metrics-based approaches) and, on the other hand, the new roles of research in society. This was the main point of departure for the ACUMEN project, which aimed to address this gap (and has recently delivered its report to the European Commission).

This “evaluation gap” results in discrepancies at two levels. First, research has a variety of missions: to produce knowledge for its own sake; to help define and solve economic and social problems; to create the knowledge base for further technological and social innovation; and to give meaning to current cultural and social developments. These different missions are strongly interrelated and can often be served within one research project. Yet they do require different forms of communication and articulation work. The work needed to accomplish these missions is certainly not limited to the publication of articles in specialized scientific journals. Yet it is this type of work that figures most prominently in research evaluations. This has the paradoxical effect that the requirement to be more active in “valorization” and other forms of society-oriented scientific work is piled on top of the requirement to be excellent in publishing high-impact articles and books. No wonder many Dutch researchers regularly show signs of burnout (Tijdink, Rijcke, Vinkers, Smulders, & Wouters, 2014; Tijdink, Vergouwen, & Smulders, 2012). Hence, there is a need for diversification of quality criteria and a more refined set of evaluation criteria that take into account the real research mission of the group or institute being evaluated (instead of an ideal-typical research mission that is actually not much more than a pipe dream).

Second, research has become a huge enterprise, producing enormous amounts of research results with an increased complexity of interdisciplinary connections between fields. The current routines in peer review cannot keep up with this vast increase in scale and complexity. Sometimes there are simply not enough peers available to check the quality of new research. In addition, new forms of peer review of data quality are increasingly in demand. A number of experiments with new forms of review have been developed in response to these challenges. A common solution in massive review exercises (such as the REF in the UK or the judgement of large EU programmes) is the bureaucratization of peer review. This effectively turns the substantive orientation of expert peer judgement into a procedure in which the main role of experts is ticking boxes and checking whether researchers have fulfilled procedural requirements. Will this in the long run undermine the nature of peer review in science? We do not really know.

A possible way forward would be to re-assess the balance between qualitative and quantitative judgement of quality and impact. The fact that the management of large scientific organizations requires lots of empirical evidence, and therefore also quantitative indicators, does not mean that these indicators should inevitably take the leading role. The fact that the increased social significance of scientific and scholarly research means that researchers should be evaluated does not mean that evaluation should always be a formalized procedure in which researchers participate whether they like it or not. According to the Danish economist Peter Dahler-Larsen, the key characteristic of “the evaluation society” is that evaluation has become a profession in itself and has become detached from the primary process that it evaluates (Dahler-Larsen, 2012). We are dealing with “evaluation machines”, he argues. The main operation of these machines is to make everything “fit to be evaluated”. Because of this social technology, individual researchers or individual research groups are not able to evade evaluation without jeopardizing their career. At the same time, there is also a good non-Foucauldian reason for evaluation: evaluation is part of the democratic accountability of science.

This may be the key point in re-thinking our research evaluation systems. We must solve a dilemma: on the one hand we need evaluation machines because science has become too important and too complex to do without them, and on the other hand evaluation machines tend to take on a life of their own and reshape the dynamics of research in potentially harmful ways. Therefore, we need to re-establish the connection between evaluation machines in science and the expert evaluation that is already driving the primary process of knowledge creation. In other words, it is fine that research evaluation has become so professionalized that we have specialized experts in addition to the researchers involved. But this evaluation machine should be organized in such a way that the evaluation process becomes a valuable component of the very process of knowledge creation that it wants to evaluate. This, I think, is the key challenge for both the new Dutch evaluation protocol and the British REF.

Would it be possible to enjoy this type of evaluation?

References:

Dahler-Larsen, P. (2012). The Evaluation Society. Stanford, CA: Stanford University Press.

Tijdink, J. K., de Rijcke, S., Vinkers, C. H., Smulders, Y. M., & Wouters, P. (2014). Publicatiedrang en citatiestress. Nederlands Tijdschrift voor Geneeskunde, 158, A7147.

Tijdink, J. K., Vergouwen, A. C. M., & Smulders, Y. M. (2012). De gelukkige wetenschapper. Nederlands Tijdschrift Voor Geneeskunde, 156, 1–5.

The new Dutch research evaluation protocol

From 2015 onwards, the societal impact of research will be a more prominent measure of success in the evaluation of research in the Netherlands. Less emphasis will be put on the number of publications, while vigilance about research integrity will be increased. These are the main elements of the new Dutch Standard Evaluation Protocol, which was published a few weeks ago.

The new protocol aims to guarantee, improve, and make visible the quality and relevance of scientific research at Dutch universities and institutes. Three aspects are central: scientific quality; societal relevance; and the feasibility of the research strategy of the groups involved. As is already the case in the current protocol, research assessments are organized by institution, and the institutional board is responsible. Nationwide comparative evaluations by discipline are possible, but to realize this the institutions involved have to agree explicitly to organize their assessments in a coordinated way. In contrast to performance-based funding systems, the Dutch system does not have a tight coupling between assessment outcomes and funding for research.

This does not mean that research assessments in the Netherlands have no consequences. On the contrary, these may be quite severe, but they will usually be implemented by the university management with considerable leeway for interpretation of the assessment results. The main channel through which Dutch research assessments have implications is the reputation gained or lost by the research leaders involved. The effectiveness of the assessments is often decided by the way the international committee that performs the evaluation does its work. If its members see it as their main mission to celebrate their nice Dutch colleagues (as has happened in the recent past), the results will be complimentary but not necessarily very informative. On the other hand, they may also punish groups by using criteria that are not actually valid for those specific groups, although they may be standard for the discipline as a whole (this has also happened, for example when book-oriented groups work in a journal-oriented discipline).

The protocol does not include a uniform set of requirements or indicators. The specific mission of the research institutes or university departments under assessment takes the lead. As a result, research that is mainly aimed at having practical impact may be evaluated according to different criteria than research that aims to work at the international frontier of basic research. The protocol is not unified around substance but around procedure. Each group has to be evaluated every six years. Another new element in the protocol is that the assessment scale has been changed from a five-point to a four-point scale, ranging from “unsatisfactory”, via “good” and “very good”, to “excellent”. This scale will be applied to all three dimensions: scientific quality, societal relevance, and feasibility.

The considerable freedom that the peer committees have in evaluating Dutch research has been maintained in the new protocol. Therefore, it remains to be seen what the effects will be of the novel elements in the protocol. In assessing the societal relevance of research, the Dutch are following their British peers. Research groups will have to construct “narratives” which explain the impact their research has had on society, understood broadly. It is not yet clear how these narratives will be judged according to the scale. The criteria for feasibility are even less clear: according to the protocol a group has an “excellent” feasibility if it is “excellently equipped for the future”. Well, we’ll see how this works out.

With less emphasis on the number of publications in the new protocol, the Dutch universities, the funding agency NWO and the academy of sciences KNAW (which are collectively responsible for the protocol) have also responded to the increased anxiety about “perverse effects” in the research system triggered by the ‘Science in Transition’ group and to recent cases of scientific fraud. The Dutch minister of education, culture and science, Jet Bussemaker, welcomed this change. “Productivity and speed should not be leading considerations for researchers”, she said when she received the new protocol. I fully agree with this statement, yet this aspect of the protocol will also have to stand the test of practice. In many ways, the number of publications is still a basic building block of scientific or scholarly careers. For example, the h-index is very popular in the medical sciences (Tijdink, Rijcke, Vinkers, Smulders, & Wouters, 2014). This index combines a researcher’s number of publications with the citation impact of those publications: it is the largest number h such that h of the researcher’s publications have each been cited at least h times, which means the h-index can never be higher than the total number of publications. This means that if researchers are compared according to the h-index, the most productive ones will tend to prevail. We will have to wait and see whether the new evaluation protocol will be able to withstand this type of reward for high levels of article production.
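To make the arithmetic behind this cap concrete, here is a minimal sketch in Python (not part of the protocol or the cited article; the citation counts are made up for illustration) of how an h-index is computed:

```python
def h_index(citation_counts):
    """Largest h such that at least h publications have >= h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank  # the paper at this rank still supports a larger h
        else:
            break     # remaining papers are cited too few times
    return h

# Hypothetical researcher with 6 publications: the h-index is 4 here,
# and it could never exceed 6 no matter how often each paper is cited.
print(h_index([100, 44, 9, 5, 3, 0]))  # -> 4
```

Because h is bounded by the number of papers considered, researchers with equally well-cited work will differ in h mainly through output volume, which is exactly the reward for high production discussed above.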

Reference: Tijdink, J. K., de Rijcke, S., Vinkers, C. H., Smulders, Y. M., & Wouters, P. (2014). Publicatiedrang en citatiestress. Nederlands Tijdschrift voor Geneeskunde, 158, A7147.

Metrics in research assessment under review

This week the Higher Education Funding Council for England (HEFCE) published a call to gather “views and evidence relating to the use of metrics in research assessment and management” (http://www.hefce.ac.uk/news/newsarchive/2014/news87111.html). The council has established an international steering group which will perform an independent review of the role of metrics in research assessment. The review is supposed to contribute to the next instalment of the Research Excellence Framework (REF) and will be completed in spring 2015.

Interestingly, two members of the European ACUMEN project (http://research-acumen.eu/) are members of the 12-person steering group: Mike Thelwall (professor of cybermetrics at the University of Wolverhampton, http://cybermetrics.wlv.ac.uk/index.html) and myself. The group is led by James Wilsdon, professor of science and democracy at the Science Policy Research Unit (SPRU) at the University of Sussex. The London School of Economics scholar Jane Tinkler, co-author of the book The Impact of the Social Sciences, is also a member and has put together some reading material on the LSE Impact blog (http://blogs.lse.ac.uk/impactofsocialsciences/2014/04/03/reading-list-for-hefcemetrics/). So there will be ample input from the social sciences to analyze both the promises and the pitfalls of using metrics in the British research assessment procedures. The British clearly see this as an important issue: the creation of the steering group was announced by the British minister for universities and science, David Willetts, at the Universities UK conference on April 3 (https://www.gov.uk/government/speeches/contribution-of-uk-universities-to-national-and-local-economic-growth). In addition to science and technology studies experts, the steering group consists of scientists from the most important stakeholders in the British science system.

At CWTS, we responded enthusiastically to the invitation by HEFCE to contribute to this work, because this approach resonates so well with the CWTS research programme http://www.cwts.nl/pdf/cwts_research_programme_2012-2015.pdf. The review will focus on: identifying useful metrics for research assessment; how metrics should be used in research assessment; ‘gaming’ and strategic use of metrics; and the international perspective.

The steering group has put all the important questions about metrics on the table, among them:

- What empirical evidence (qualitative or quantitative) is needed for the evaluation of research, research outputs and career decisions?
- What metric indicators are useful for the assessment of research outputs, research impacts and research environments?
- What are the implications of the disciplinary differences in practices and norms of research culture for the use of metrics?
- What evidence supports the use of metrics as good indicators of research quality?
- Is there evidence that the move to more open access to the research literature will enable new metrics to be used or enhance the usefulness of existing metrics?
- What evidence exists around the strategic behaviour of researchers, research managers and publishers responding to specific metrics?
- Has strategic behaviour invalidated the use of metrics and/or led to unacceptable effects?
- What are the risks that some groups within the academic community might be disproportionately disadvantaged by the use of metrics for research assessment and management?
- What can be done to minimise ‘gaming’ and ensure the use of metrics is as objective and fit-for-purpose as possible?

The steering group also calls for evidence on these issues from other countries. If you wish to contribute evidence to the HEFCE review, please make it clear in your response whether you are responding as an individual or on behalf of a group or organisation. Responses should be sent to metrics@hefce.ac.uk by noon on Monday 30 June 2014. The steering group will consider all responses received by this deadline.

On citation stress and publication pressure

Our article on citation stress and publication pressure in biomedicine went online this week – co-authored with colleagues from the Free University and University Medical Centre Utrecht:

Tijdink, J.K., S. de Rijcke, C.H. Vinkers, Y.M. Smulders, P.F. Wouters, 2014. Publicatiedrang en citatiestress: De invloed van prestatie-indicatoren op wetenschapsbeoefening. Nederlands Tijdschrift voor Geneeskunde 158: A7147.

* Dutch only *

Tales from the field: On the (not so) secret life of performance indicators

* Guest blog post by Alex Rushforth *

In the coming months Sarah de Rijcke and I will present research from CWTS’s nascent EPIC working group at conferences in Valencia and Rotterdam. We very much look forward to drawing on collaborative work from our ongoing ‘Impact of indicators’ project on biomedical research in University Medical Centers (UMCs) in the Netherlands. One of our motivations behind the project is that there has been a wealth of social science literature in recent years about the effects of formal evaluation in public sector organisations, including universities. Yet too few studies have taken seriously the presence of indicators in the context of one of the universities’ core missions: knowledge creation. Fewer still have taken an ethnographic lens to the dynamics of indicators in the day-to-day context of academic knowledge work. These are deficits we hope to begin addressing through these conferences and beyond.

The puzzle we will be addressing here appears, at least at first glance, straightforward enough: what is the role of bibliometric performance indicators in the biomedical knowledge production process? Yet comparing provisional findings from two contrasting case studies of research groups from the same UMC – one a molecular biology group and the other a statistics group – it quickly becomes apparent that there can be no general answer to this question. As such we aim not only to provide an inventory of the different ‘roles’ of indicators in these two cases, but also to pose the more interesting analytical question of which conditions and mechanisms explain the observed variations in the roles indicators come to perform.

Owing to their persistent recurrence in the data so far, the indicators we will analyze are the journal impact factor, the h-index, and ‘advanced’ citation-based bibliometric indicators. It should be stressed that our focus on these particular indicators has emerged inductively from observing first-hand the metrics that research groups attended to in their knowledge-making activities. So what have we found so far?

Dutch UMCs constitute particularly apt sites through which to explore this problem, given how central bibliometric assessments have been to the formal evaluations carried out since their inception in the early 2000s. On one level it is argued that researchers in both cases encounter such metrics as ‘governance/managerial devices’, that is, as forms of information required of them by external agencies on whom they are reliant for resources and legitimacy. Examples can be seen when funding applications, annual performance appraisals, or job descriptions demand such information about an individual’s or group’s past performance. As the findings will show, the information the two groups need to produce their work effectively and the types of demands made on them by ‘external’ agencies vary considerably, despite their common location in the same UMC. This is one important reason why the role of indicators differs between the cases.

However, this coercive ‘power over’ account is but one dimension of a satisfying answer to our question about the role of indicators. Our emerging analysis also reveals the surprising finding that in fields characterized by particularly integrated forms of coordination and standardization (Whitley, 2000) – like our molecular biologists – indicators in fact have the propensity to function as a core feature of the knowledge-making process. For instance, a performance indicator like the journal impact factor was routinely mobilized informally in researchers’ decision-making as an ad hoc standard against which to evaluate the likely uses of information and resources, and in deciding whether time and resources should be spent pursuing them. By contrast, in the less centralized and integrated field of statistical research, such an indicator was not so indispensable to the routines of knowledge-making activities. In the case of the statisticians it is possible to speculate that indicators are more likely to emerge intermittently, as conditions to be met for gaining social and cultural acceptance by external agencies, but are less likely to inform day-to-day decisions. Through our ongoing analysis we aim to unpack further how disciplinary practices interact with the organisation of Dutch UMCs to produce quite varying engagements with indicators.

The extent to which indicators play central or peripheral roles in research production processes across academic contexts is an important sociological problem, and posing it will enhance our understanding of the complex role of performance indicators in academic life. We feel that much of the existing literature on the evaluation of public organisations has tended to paint an exaggerated picture of formal evaluation and research metrics as synonymous with empty ritual and legitimacy (e.g. Dahler-Larsen, 2012). Our emerging results show that – at least in the realm of knowledge production – the picture is more subtle. This theoretical insight prompts us to suggest that further empirical studies are needed of scholarly fields with different patterns of work organisation, in order to compare our results and develop middle-range theorizing on the mechanisms through which metrics infiltrate knowledge production processes to fundamental or peripheral degrees. In future this could mean venturing into fields far outside of biomedicine, such as history, literature, or sociology. For now, though, we look forward to expanding the biomedical project by conducting analogous case studies in a second UMC.

Indeed, it is through such theoretical developments that we can not only consider the appropriateness of one-size-fits-all models of performance evaluation, but also unpack and problematize discourses about what constitutes ‘misuse’ of metrics. And indeed, how convinced should we be that academic life is now saturated with and dominated by deleterious metric indicators?

References

Dahler-Larsen, P. (2012). The Evaluation Society. Stanford, CA: Stanford University Press.

Whitley, R. (2000). The Intellectual and Social Organization of the Sciences. Oxford: Oxford University Press.

How does science go wrong?

We are happy to announce that our abstract has been accepted for the 2014 Conference of the European Consortium for Political Research (ECPR), which will be held in Glasgow from 3 to 6 September. Our paper has been selected for a panel on ‘The role of ideas and indicators in science policies and research management’, organised by Luis Sanz-Menéndez and Laura Cruz-Castro (both at CSIC-IPP).

Title of our paper: How does science go wrong?

“Science is in need of fundamental reform.” In 2013, five Dutch researchers took the lead in what they hope will become a strong movement for change in the governance of science and scholarship: Science in Transition. SiT appears to voice concerns heard beyond national borders about the need for change in the governance of science (cf. The Economist, 19 October 2013; THE, 23 January 2014; Nature, 16 October 2013; Die Zeit, 5 January 2014). One of the most hotly debated concerns is quality control, which encompasses the implications of perceived increases in publication pressure, purported flaws in the peer review system, impact factor manipulation, irreproducibility of results, and the need for new forms of data quality management.

One could argue that SiT landed in fertile ground. In recent years, a number of severe fraud cases drew attention to possible ‘perverse effects’ in the management system of science and scholarship. Partly due to the juicy aspects of most cases of misconduct, these debates tend to focus on ‘bad apples’ and shy away from more fundamental problems in the governance of science and scholarship.

Our paper articulates how key actors construct the notion of ‘quality’ in these debates, and how they respond to each other’s position. By making these constructions explicit, we shift focus back to the self-reinforcing ‘performance loops’ that most researchers are caught up in at present. Our methodology is a combination of the mapping of the dynamics of media waves (Vasterman, 2005) and discourse analysis (Gilbert & Mulkay, 1984).

References

A revolutionary mission statement: improve the world. Times Higher Education, 23 January 2014.

Chalmers, I., Bracken, M. B., Djulbegovic, B., Garattini, S., Grant, J., Gülmezoglu, A. M., & Oliver, S. (2014). How to increase value and reduce waste when research priorities are set. The Lancet, 383(9912), 156–165.

Gilbert, G. N., & Mulkay, M. J. (1984). Opening Pandora’s Box. A Sociological Analysis of Scientists’ Discourse. Cambridge: Cambridge University Press.

Research evaluation: Impact. (2013). Nature, 502(7471), 287.

Rettet die Wissenschaft!: “Die Folgekosten können hoch sein.” Die Zeit, 5 January 2014.

Trouble at the lab. The Economist, 19 October 2013.

Vasterman, P. L. M. (2005). Media-Hype. European Journal of Communication, 20(4), 508–530.

On exploding ‘evaluation machines’ and the construction of alt-metrics

The emergence of web-based ways to create and communicate new knowledge is affecting long-established scientific and scholarly research practices (cf. Borgman 2007; Wouters, Beaulieu, Scharnhorst, & Wyatt 2013). This move to the web is spawning a need for tools to track and measure a wide range of online communication forms and outputs. By now, there is a large differentiation in the kinds of social web tools (e.g. Mendeley, F1000, Impact Story) and in the outputs they track (e.g. code, datasets, nanopublications, blogs). The expectations surrounding the explosion of tools and big ‘alt-metric’ data (Priem et al. 2010; Wouters & Costas 2012) marshal resources at various scales and gather highly diverse groups in pursuing new projects (cf. Brown & Michael 2003; Borup et al. 2006 in Beaulieu, de Rijcke & Van Heur 2013).

Today we submitted an abstract for a contribution to Big Data? Qualitative Approaches to Digital Research (edited by Martin Hand and Sam Hillyard and contracted with Emerald). In the abstract we propose to zoom in on a specific set of expectations around altmetrics: their alleged usefulness for research evaluation. Of particular interest to this volume is how altmetric information is expected to enable a more comprehensive assessment of (1) social scientific outputs (under-represented in citation databases) and (2) wider types of output associated with societal relevance (not covered in citation analysis and allegedly more prevalent in the social sciences).

In our chapter we address a number of these expectations by analyzing (1) the discourse in the “altmetrics movement”: the expectations and promises formulated by key actors involved in “big data” (including commercial entities); and (2) the construction of these altmetric data and their alleged validity for research evaluation purposes. We will combine discourse analysis with bibliometric, webometric and altmetric methods, in such a way that the methods also interrogate each other’s assumptions (Hicks & Potter 1991).

Our contribution will show, first of all, that altmetric data do not simply ‘represent’ other types of outputs; they also actively create a need for these types of information. These needs will have to be aligned with existing accountability regimes. Secondly, we will argue that researchers will develop forms of regulation that will partly be shaped by these new types of altmetric information. They are not passive recipients of research evaluation but play an active role in assessment contexts (cf. Aksnes & Rip 2009; Van Noorden 2010). Thirdly, we will show that the emergence of altmetric data for evaluation is another instance (following the creation of the citation indexes and the use of web data in assessments) of transposing traces of communication into a framework of evaluation and assessment (Dahler-Larsen 2012, 2013; Wouters 2014).

By making explicit the implications of transferring altmetric data from the framework of the communication of science to the framework of research evaluation, we aim to contribute to a better understanding of the complex dynamics in which new generations of researchers will have to work and be creative.

Aksnes, D. W., & Rip, A. (2009). Researchers’ perceptions of citations. Research Policy, 38(6), 895–905.

Beaulieu, A., van Heur, B. & de Rijcke, S. (2013). Authority and Expertise in New Sites of Knowledge Production. In A. Beaulieu, A. Scharnhorst, P. Wouters and S. Wyatt (Eds.), Virtual Knowledge -Experimenting in the Humanities and the Social Sciences. (pp. 25-56). MIT Press.

Borup, M, Brown, N., Konrad, K. & van Lente, H. 2006. “The sociology of expectations in science and technology.” Technology Analysis & Strategic Management 18 (3/4), 285-98.

Brown, N. & Michael, M. (2003). “A sociology of expectations: Retrospecting prospects and prospecting retrospects.” Technology Analysis & Strategic Management 15 (1), 3-18.

Costas, R., Zahedi, Z. & Wouters, P. (n.d.). Do ‘altmetrics’ correlate with citations? Extensive comparison of altmetric indicators with citations from a multidisciplinary perspective.

Dahler-Larsen, P. (2012). The Evaluation Society. Stanford University Press.

Dahler-Larsen, P. (2013). Constitutive Effects of Performance Indicators. Public Management Review, (May), 1–18.

Galligan, F., & Dyas-Correia, S. (2013). Altmetrics: Rethinking the Way We Measure. Serials Review, 39(1), 56–61.

Hicks, D., & Potter, J. (1991). Sociology of Scientific Knowledge: A Reflexive Citation Analysis of Science Disciplines and Disciplining Science. Social Studies of Science, 21(3), 459 –501.

Priem, J., Taraborelli, D., Groth, P., & Neylon, C. (2010). Altmetrics: A manifesto. http://altmetrics.org/manifesto/

Van Noorden, R. (2010) “Metrics: A Profusion of Measures.” Nature, 465, 864–866.

Wouters, P., Costas, R. (2012). Users, narcissism and control: Tracking the impact of scholarly publications in the 21st century. Utrecht: SURF foundation.

Wouters, P. (2014). The Citation: From Culture to Infrastructure. In B. Cronin & C. R. Sugimoto (Eds.), Next Generation Metrics: Harnessing Multidimensional Indicators Of Scholarly Performance (Vol. 22, pp. 48–66). MIT Press.

Wouters, P., Beaulieu, A., Scharnhorst, A., & Wyatt, S. (eds.) (2013). Virtual Knowledge – Experimenting in the Humanities and the Social Sciences. MIT Press.
