There is a strong positive bias in how scientific knowledge is generated, written about, and measured. It is easier to find research proving a hypothesis than replication studies that fail to confirm earlier findings. It is easier to access explanations of why certain technologies came to be than studies about why we don’t have flying cars, or some other breakthrough promised to us through the magnificence of science and technology. It’s an enormous hole in our understanding of the world, facilitated by the mores of the scientific reward system.
The same is true for metrics. While the number of ways one can assess the impact of a particular paper is growing, many of the emerging ‘alt’ metrics are still framed primarily in positive terms. At least that’s the proposition of J. Britt Holbrook and some of his colleagues at the University of North Texas. In a letter to Nature, they suggest that negative metrics have their value.
“We think that researchers can generate a more complete account of their impact by including seemingly negative indicators — such as confrontations with important people or legal action — as well as those that seem positive.”
This would mean that something like prompting a Congressional investigation (whether based on the fact that your research got funded or on the findings of that research) should somehow be noted and follow that paper. The letter’s authors offer a list of possible ‘negative’ metrics that could help give a more complete record of research impacts. See if they missed anything. I think the list reflects the academic perspective a bit too much, but I don’t think that could be helped, given the authors.
I think this is an excellent idea. I’d prefer to see it accompanied by greater attention (and credit) given to so-called ‘negative results’ papers. It would not surprise me to see these ‘negative’ metrics pop up more often for such papers. For one thing, the public and policymakers often make as much out of so-called ‘negative’ findings as positive ones.