A group of scientists in Britain has authored a list of the 40 most pressing, unanswered questions concerning the intersection of science and public policy, the result of a workshop at Cambridge University. Some have met the exercise with open arms, claiming that a culture of reflection is a good thing for science.
However, Sheila Jasanoff, a preeminent science, technology, and society scholar, expressed concerns that this may be a failure of science communication: “There is a surface consensus here” about the distinction between ‘science’ and ‘policy’, and what positively links them together, “which is probably misleading… The big challenge is that most scientists and policy-makers remain blissfully unaware of all that we do actually know.”
Our recent workshop at NSF regarding the societal and ethical implications of Transformative Research is an attempt to remedy such communication gaps: what do we know about transformative research, and why is the concept so difficult to nail down? Is the phrase merely window dressing for existing NSF policies – i.e., the Broader Impacts Merit Review Criterion – or is it a distinct phenomenon?
Though NSF has done its own list-making in the past, the arc of its policy has largely shifted toward more flexible definitions (e.g., of what counts as broader impacts), with an emphasis on community participation in making these concepts meaningful. Learning from NSF’s previous experiences, the workshop was therefore designed not to reach a more narrowly construed definition of transformative research to supplement NSF’s existing one. Rather, it sought to broaden and enrich our thinking about transformation and how (or whether!) this concept fits into the dynamic between science, policy, and society.
Lists are obviously useful: as tools through which we motivate ourselves to action, as external records of what might otherwise be forgotten, or as means of accounting for inclusiveness (to make sure nothing is left out). But it is important to remember that a list is only as good as those who make it. An interested and specialized group of scientists may have great insights about the science–policy relationship, but they are by no means the sole authority on the subject. Further, the act of list-making tends to lend credence to the list’s account of particular concepts rather than promote critical reflection on how those concepts were interpreted such that they were deemed ‘list-worthy’, an oversight that is, as I see it, an integral part of Jasanoff’s concerns.
I applaud these scientists for their initiative and creativity, and I agree that the culture of science would benefit from more of this kind of reflective activity. But efforts to bring in outside perspectives are conspicuously lacking, and lists are notorious for setting and limiting the terms of the debate: who is included, and whose input counts. This point was raised a number of times at our workshop: Is this exercise really democratic if academics talk only among themselves or to other interested intellectual professionals? What about soliciting the views of the very society that the institution of science purports to benefit? Do these views not count?