Research evaluation

is the name for a field that got rolling in the 1990s, concerned with identifying ways to measure the outcomes of scientific and academic research. We created a systematic, publicly funded research culture in the years after World War II, in response to the successes of science in helping to win the war (radar, the Bomb) and as a spur for techno-capitalism. It was inevitable that we would create an audit culture to match it.

The audit culture isn’t driven simply by questions of economic efficiency. Its purpose is political as well: it is shifting the balance of power in the academy. Research evaluation empowers managers (deans, provosts, and boards of regents, who are, after all, often business people). Academics have long been independent: peer review has been the means for the self-policing of research and of the academy generally. External measures allow non-experts to claim the right to make funding decisions about areas outside their realm of expertise. Deans and provosts can now more easily compare departments when making decisions about funding, promotion, and tenure. Humanists disdain questions of the ‘philosophy of metrics’ or of research evaluation at their peril.

The same goes for ‘broader impacts’. The importation of broader impacts concerns within peer review is another indication of the loss of disciplinary autonomy and the imminent end of the cult of the expert. But it isn’t just the growing demand for accountability that’s upending the culture of expertise; it’s also the growth of ecological points of view, where everything is understood as connected. Expertise requires that things be separable from one another. Despite whatever pretensions to expertise they may have, ecologists and environmental philosophers are part of the leading edge of the deconstruction of expertise. It would thus seem that UNT Philosophy, with its focus on environmental philosophy, would also be a leading center for the philosophy of expertise, research evaluation, and the like.

Research evaluation raises its own set of theoretical snarls. As the field currently stands it is the province of social scientists. The old cliché of the drunk searching under the lamppost holds: research evaluation focuses on what is most easily measurable, namely the economic effects of research funding. (Not that economic outcomes from research are easy to identify; far from it.) Or the conversation turns to other countables: bibliometrics, citation analysis, counting patents.

Assessing the societal effects of research is the really tough question. Britt and I are about to leave for a London workshop on ways to assess the broader impacts of research. We’ll be the only philosophers at the meeting (though I should say that this will be a pretty philosophical group, as these things go). Now, if to a hammer every problem looks like a nail, to a philosopher every problem…. But really: the problem of how to evaluate research is a classically philosophic problem. It involves a series of philosophic questions: What should our goals or ends be? How do we balance competing senses of the good life? And what counts as success?

These questions will always resist ‘methodology’. They are matters of judgment, or to speak with Aristotle, phronesis. Of course, judgment reads as ‘subjectivity’ today; that’s part of the problem. That’s why we turn to numbers, even though numbers merely gloss over the question of the value of what’s being counted (a problem that a colleague of ours has called the ‘crypto-Ouija board’).

Now, they say that you can’t beat something with nothing. And so I have been told that railing against methodology is a loser. Advocate a different, or better, methodology: for instance, participatory democracy philosophers argue for a new participatory method on the grounds that this will lead to better science and a better world. Perhaps. But I believe that we should not miss the opportunity to challenge the overreliance on method at every turn. And to build moments of critique and of judgment into the methods that we use.

So how do we do a philosophy of research evaluation? A very good question. The easier part is the via negativa: offering critiques of the scientistic pretensions underlying many objectivist claims. The real challenge lies in finding ways to make positive contributions at the project level under real-world conditions. That’s field philosophy.

This is not much of an answer, perhaps, but I think the way forward lies in understanding philosophy’s contribution as ameliorative in nature. Work at the project level; ask questions; expand the imaginative bounds of the conversation; undermine easy answers; emphasize the aesthetic, ethical, metaphysical, and spiritual dimensions of our problems. I suppose this describes a kind of philosophical raconteur. But I can live with that. And it strikes me as rather profound, at that. For it is a belief in process as a kind of product: that one of the most valuable things we can do is improve the tenor and thoughtfulness of our conversations.

This entry was posted in Accountability, Future of the University.