Peer Review: a personal history

I’ve been working on questions surrounding peer review for a number of years. But only recently have I started to figure out what I’ve gotten myself into.

The story begins with the National Science Foundation. NSF changed its peer (or “merit”) review criteria in 1997. Up to that point there had been four criteria, all of which centered on disciplinary standards of excellence. When I started sitting on panels around 2000, I was introduced to the two new criteria: intellectual merit and broader impacts. It was the latter term that presented a problem for reviewers: what counts as a broader impact? And once we figured out an answer to that question, how were we to relate it to the intellectual merit of a project?

Panels puzzled over these questions. But there was also a bit of sleight of hand going on. One did not hear “what counts as ‘intellectual merit’?” Everyone assumed they knew what intellectual merit was. They were experts, after all. (The situation reminds one of Augustine’s comments on time: “What, then, is time? If no one ask of me, I know; if I wish to explain to him who asks, I know not.”)

Questions concerning intellectual merit were hidden by pecking order. Besides sharing a discipline (e.g., geology or chemistry), the people around the table were disciplined in another way: older, louder, or more preeminent people set the agenda and expressed their judgments, and the newbies caved. (Dan Sarewitz’s essay in my Earth Matters (2000) offers a wonderful account of the breakdown of agreement over ‘intellectual merit’.)

The broader impacts criterion has had a tangled history. It was at first ignored by reviewers. Then in 2001, NSF required that every proposal address broader impacts in the required one-page project summary. Forced upon the scientific community, it was then ghettoized as education and public outreach (EPO). It still largely remains that way today, serving the interests of scientific triumphalism. But in a crucial decision, in what has become the nose of the camel in the tent, NSF decided to offer five framings of broader impacts to help proposal writers. It was the fifth ‘representative example’ of broader impacts that really opened things up: “What may be the benefits of the proposed activity to society?”

Now we were not simply talking about explaining the benefits of science to the unwashed, say through museum exhibits, or about expanding scientific opportunities for underrepresented groups by bringing more women and minorities into the lab. A critical element was introduced. Implicitly, society was now invited to judge science.

While the consequences of this are still being worked out, the implications are quite radical. Even under the restricted reading of broader impacts as EPO, the standards of peer review were being challenged. After all, if one of the criteria for judging a proposal involves communicating to school children, then shouldn’t experts in communication studies or education be involved in the process? Similarly, there was an implicitly disruptive element in the request to involve underrepresented groups, for their interpretation of the meaning or impacts of science might be at variance with that of others. The crucial point, however, is this: it was no longer disciplinary peers who were deciding things. Apples were not being compared to apples; the biologists could claim that the education folks did not fully understand the portentousness or weakness of a proposal, and the education experts could claim that the biologists were missing out on how innovative or stupid the EPO part of the proposal was. And so there was no functioning adjudication procedure.

The academic pecking order kept this from being realized. Academics in the fields of education and public outreach were happy to have any seat at the table, and were not in a position to offer fundamental critiques of the peer review process.

Now consider the implications of the fifth ‘representative activity’ of broader impacts: the benefits of the proposed activity to society. The floodgates were now opening. In streamed a variety of perspectives: ethicists, science policy analysts, standpoint epistemologists, and a wide variety of science studies types. The result has been the interdisciplining of peer review. No longer were chemistry proposals, for example, simply to be judged by chemists according to chemical criteria.

It’s important to see the implications here. The basic idea underlying scientific claims to truth is replicability. To count as true, an experiment must be demonstrable on demand, over and over again. For this to be possible, one must be able to control the parameters of an experiment and then vary one element at a time. Only in this way is it possible to demonstrate something as ‘true’.

Of course, this means that things that exist in time are not susceptible to scientific truth. To be ‘in time’ means that each moment is unique. There is, then, no possible replicability, because the conditions are no longer the same from experiment to experiment.

Science, then, is an adjudication procedure based on comparing apples to apples. And just as it must be possible to wall off the scientific experiment from outside influences, peer review depends on the walling off of other disciplines and wider points of view from the review process. Peer review depends on disciplinarity just as science depends on the controlled experiment.

The broader impacts criterion, then, threatens to make peer review impossible. The decision-making process within disciplinary peer panels might have been gamed by, e.g., stronger personalities or more prestigious people, but there was at least a fig leaf of plausibility to the claim that the process was objective. After all, everyone knew the same things. Everyone was, e.g., a biologist. (Of course this was bogus; there is no such thing as biology or chemistry except in undergraduate textbooks. Every biologist has different training and a different background from every other. But to proceed…)

But now differences in background were enshrined by the same basic process, disciplinarity, that previously made peer review possible. It was now obvious that there was no way to adjudicate between different disciplinary points of view: everyone was an expert in one or another area, and largely ignorant of the rest. And once this fact was realized, the door was open for anyone to claim a seat at the table. It was only a short distance from that to the belief that scientists are Democrats and that ‘climate science is the greatest hoax ever perpetrated on mankind’ (Senator Coburn).

All this is so far only dimly realized. Right now, piety surrounds broader impacts: it is a way for a scientist to demonstrate his or her good-heartedness. But there are also intimations that broader impacts could threaten the entire structure of peer review. We are tugging on the curtain that hides the Wizard. Who knows what will be revealed if we pull it aside?
