Jane Tinkler presents the latest case against the UK’s Research Excellence Framework (REF).
At the risk of coming off only as a staunch defender of the REF, I want to suggest that there’s something like a category mistake that keeps getting made in criticisms of it. Or perhaps a combination of a category mistake and a reactive attitude.
Although there are exceptions, researchers in the UK are generally resistant to the REF. This resistance usually takes the form of criticizing one or another aspect of the REF, protesting its very existence as unfair to academics, or a combination of both. Jane Tinkler’s objections fall into the third category.
Citing a description from guidance to ‘Panel C’ — the panel that will be in charge of assessing the impact of research in the social sciences — Tinkler suggests that the REF is inadequate for capturing the real essence of the sort of impact research in these disciplines might have, especially on policy making.
Although she acknowledges that policy makers often want advice regarding particular policy issues, she insists that what policy makers really want is an academic’s expertise, rather than the results of a specific research project. It is this expertise that government seeks when it places academics on advisory panels, for instance. Yet, according to Tinkler, the REF unfairly excludes participation on an advisory panel as evidence of impact.
Tinkler’s criticisms are thus twofold: (1) the REF incorrectly expects evidence of impact from particular research projects, when impact is more a matter of luck than skill; and (2) insofar as it is incapable of capturing the serendipity of expertise, the REF is unfair to academics.
I suggest that there’s something wrong-headed in expecting the REF to capture the serendipity of expertise. What academics ought to expect from the REF is that it provide enough room for a university to maneuver in making a case for impact. And this is how the REF ought to be judged.
But it seems that many critics make a category mistake in judging the REF to be inadequate. The category mistake involves expecting the REF to capture the impact of every project of every academic researcher. But the REF is designed for a higher level of analysis than that. It looks at universities over a period of time, and it allows universities to pick the best examples of impactful research. It does not look at each individual bit of research or each and every university.
The reactive attitude is more subtle. Evidence of this attitude, however, can be found in the idea of expecting the REF to capture impact. The idea is something like insisting on an assessment framework capable of capturing all that’s good about academic research when we academics do what we do. But this has things backwards — it looks only through the lens of autonomy, without any attempt to see through the lens of accountability.
In contrast to the ‘leave us alone to do what we do — and if you want to know the impact of what we do, develop a better instrument to capture it’ attitude, viewing the REF through an accountability lens will allow academics to take ownership of it. That’s good, because this will allow academics to exercise (rather than merely insisting on) their autonomy while satisfying demands for accountability.
So, another way to put the point about the REF allowing room to maneuver is to ask: does the REF allow for the exercise of academic autonomy? If the answer is ‘yes’, then academics ought to stop resisting it and start figuring out how to own it.
Let’s try to answer what I think are the right questions about the REF. In what follows, I’ll make reference to pages in this presentation on the REF, since it seems to have the latest information. Rather than asking what the REF requires, I suggest we ask: what does the REF allow us to do?
There are three things about the way the REF is organized that allow us (universities) room to maneuver:
- The REF defines impact loosely.
- The REF utilizes peer review.
- The REF asks universities to provide impact case studies in the form of narrative accounts.
Taking these one at a time, the first thing the REF allows us to do is to offer our own accounts of impact:
In drawing up its assessment criteria and the advice to submitting institutions, the main panel agreed that providing HEIs [Higher Education Institutions] with detailed lists of impacts and evidence and/or indicators for those impacts would be unhelpful because these could appear prescriptive or limiting.
Thank you very much, REF! This provides us with room to maneuver when presenting our cases for impact. This is an opportunity for those universities willing to seize it.
Second and third, the REF uses peer review of narrative accounts in order to judge impact:
Within their narrative account in the case study, institutions should provide the indicators and evidence most appropriate to the impact(s) claimed, and to support that chain. The sub-panels will use their expert judgement regarding the integrity, coherence and clarity of the narrative of each case study, but will expect the key claims made in the narrative to be supported by evidence and indicators.
The main panel anticipates that impact case studies will refer to a wide range of types of evidence, including qualitative, quantitative and tangible or material evidence, as appropriate. Individual case studies may draw on a variety of forms of evidence and indicators. The main panel does not wish to pre-judge forms of evidence. It encourages submitting units to use evidence most appropriate to the impact claimed. (p. 60)
Thanks again, REF! The fact that you don’t wish to prejudge forms of evidence, say, by limiting impact to some “objective” indicator, is wise. It’s also an opportunity for universities to present the best cases they can. Might the skillful use of certain indicators help certain cases? Of course! But the point is that no one-size-fits-all indicator is imposed. Universities get the chance (this is the point) to make their case for each case on a case-by-case basis. It hardly gets any better than that. But it could get a lot worse.
What about the fact that, as Tinkler brings up in her article, merely serving on an advisory committee is insufficient to demonstrate impact? Well, to be honest, that strikes me as fair. After all, it doesn’t make for much of a story to say that researcher X served in such a capacity. Did it make any difference? No? Then I would choose a different case — one that allowed me (the university) to take advantage of the opportunity the REF presents.
If it did make a difference, then I would want to ask what the difference was, how big it was, and how important it was. Moreover, I would want to ask how it happened. And I would want to answer all of these questions in my case study — that is, I would want to generate the best narrative account I could. The point is not to exclude participation on an advisory panel as a viable impact. Instead, the point is to ask for a richer description (a narrative account) in place of a simple description (researcher X served on advisory panel y).
The REF represents an opportunity, particularly for universities that cannot rely on a reputation for expertise, to demonstrate that they are making a difference in the world. Imagine if all that were required were a simple listing of the prestigious advisory panels your faculty were appointed to. My guess is that this would unfairly favor the Oxfords and Cambridges over the rest. One last point: my guess is that Oxford and Cambridge are clever enough to treat the REF as an opportunity. I wonder whether the rest will catch on in time.