A recent LSE Impact blog piece about the steering effects of publication pressures highlights that inattention to policy on the part of academics is largely the result of ‘rewarding A and hoping for B’ – Aristotle would call this a failure of practical wisdom, which amounts to incorrect judgment as to the appropriate means for achieving a desired goal (end). The academy is effectively working at cross-purposes: the emphasis on traditional outputs in decisions regarding tenure and promotion, hiring, and disciplinary prestige draws scholars away from having an impact on social contexts outside the academy – which, ironically, is a popular evaluative criterion for academics, typically called “public outreach” or “service.” And, the author argues (and I agree), our public policy is suffering for it.
Meanwhile, elsewhere in the websphere, Google makes surprising claims about its “editorial license” in response to criticisms of its page-ranking policies. Cory Doctorow’s analysis, as I read it, identifies the problem as one of ‘claiming to provide A, but actually doing B.’ Google’s page-ranking algorithms have been criticized for being unduly biased in favor of its own products, for failing to divulge the majority of the criteria included in the ranking algorithms (i.e. the ranking process remains a black box due to claims of proprietary information), and for being subjective. Previously, Google rushed to defend itself against the last point, insinuating, as Doctorow puts it:
that Google has discovered a mathematical model of relevance, a way of measuring some objective criteria that allows a computer to score and compare the relevance of different web-pages
Yet now, Google is embracing the “subjective” and aesthetic aspects of judging relevance, which Doctorow compares with magazine editing:
The argument, presumably, is that Google should put the “most relevant” listings at the top of the screen, not the ones that make it the most money, lest it strangle competing services.
In response, Google has advanced an argument based on editorial integrity. The company implies that a page of search results is effectively the table of contents for a custom-made magazine that is assembled on the fly in response to a user’s query. This is a major shift for Google… A magazine’s editor-in-chief looks at her table of contents as it is being formed through the month, moving things around, commissioning new items, deleting things and shifting others to greater prominence.
The judgments she makes are aesthetic ones. They reflect her distinctive expertise and vision for the publication, a vision and expertise that is honed from month to month by feedback from readers and colleagues, sales figures, public review, and pageviews in the online edition. Magazines rise and fall based on their e-i-cs, and a change in leadership can utterly transform the experience of reading the magazine… I think that Google’s best chance of maintaining its independence from regulatory interference in search results hinges on making this argument about editorial integrity. However, I wonder if Google is prepared to start telling low-ranked website owners that their rankings reflect its subjective judgments and not cold equations.
It’s one thing to be told that you’ve been banished to “ooooo”-space by the numbers, another thing altogether to learn that you’ve been buried on page 10,000 because Google’s engineers just don’t think you do very good work.
What do these two instances have in common? They are both grappling with issues of judgment. But this is not simply judgment broadly construed; the kind of judgment at stake is expert judgment.
In the first instance, academic reward structures act as filters for what counts as knowledge and for the standards of rigor passed from one academic generation to the next. These filters shape the character and direction of disciplines, which in turn reinforces what becomes codified as knowledge as such. Ultimately, publication pressures affect what constitutes expertise and who qualifies as an expert. In the second instance, Google itself is making expertise claims regarding the relevance of particular information. So in a sense, its expertise is being marketed as that which defines the difference between information and relevant or useful information – dare I call it the difference between information and knowledge?
And in both cases, this epistemic point is being made with regard to regulation – whether by university administrators or disciplinary standard-bearers (e.g. flagship journals), or by organizations pressing for legislation against Google’s ranking results in the interest of promoting “search neutrality” (e.g. the RIAA). We have argued before at CSID that these instances are not problems demanding a quick, reactive solution, but rather situations that arise from basic, fundamental philosophical questions – like that of the difference between information and knowledge, and who is to decide what constitutes that difference.
From the latter perspective, both academics and Google have an opportunity here to take charge of the situation by recognizing the basic questions at stake and making arguments that address those questions, rather than simply defending their entrenched, traditional positions. Google appears to have taken a step in that direction, and academics ought to prepare themselves to do the same – by insisting on the importance of retaining expert judgment in the face of mounting pressure to submit to numerical calculations of scholarly quality and success. To use Aristotle’s language: identifying scholarly goals – like facilitating greater interaction between legislators and academics (fundamentally, making the academy more receptive to societal engagement) – and determining how best to pursue them is an act of phronesis (practical wisdom), which combines techne (technical know-how), episteme (theoretical knowledge), and self-critical, reflexive evaluation. None of these alone – pure critique of the current state of affairs, purely technical ‘fixes’, or purely theoretical solutions – is enough to take charge of these situations in a way that results in something better.
While analysis of publication pressures and their undesirable consequences is valuable in its own right, we also need to be thinking of ways to promote alternative evaluation criteria that do not reject current trends toward metricizing scholarly communication, but take charge of them in pursuit of more worthy goals, like engaging public policy processes.