I came across this report from a workshop that discussed the need for synthetic biologists to consider the societal impacts, both beneficial and potentially harmful, of recent developments in their field. It seems to represent a kind of reflexive moment for the discipline (significant, though not especially deep, I would argue), in terms of recognizing that the pursuit of excellent science that may improve societal health and well-being can be a risky endeavor. While I see a number of hugely interesting trends and developments at work in this act of self-reflection, I want to highlight one that syncs up well with a recent discussion about scientific censorship. Take, in particular, this summary from the report of a presentation on ‘DIY’ synthetic biology:
[R]egional groups are establishing “community labs” where members of the general public can participate in hands-on workshops and educational events. While some professional scientists are involved, many of the participants in the DIYbio community have no formal laboratory training or professional experience. Bobe stated that we cannot expect these individuals to be up-to-speed on best practices in laboratory safety and proper disposal of biological waste, etc., and that survey results demonstrate a significant need to establish norms in the DIYbio community and provide practical biosafety resources.
I highlight this example precisely because the language of the report seems to indicate that participants in the workshop were generally dancing around an open and frank discussion of exactly what’s at stake in terms of societal impacts. DIY synthetic biologists? Is this not a fearsome prospect? Perhaps concerns such as these would be dismissed as fear-mongering in that audience, but maybe in this case a little fear-mongering is appropriate. DIYbio is addressed only once in the highlighted public policy areas needing further exploration:
Is the scientific establishment willing to concede that DIYbio does not constitute a significant public policy concern? Who is responsible for ensuring public safety and for enforcing proper laboratory protocol for handling hazardous biological materials? Precisely this question was recently discussed on Science‘s news website – a transcript is freely accessible here – between National Science Advisory Board for Biosecurity (NSABB) representative Michael Osterholm and Johns Hopkins virologist Andrew Pekosz.
Science and Nature were both approached by the NSABB about redacting specific parts of two papers under review for publication in each venue. The research results identify specific ways that avian (H5N1) flu could become more readily transmissible between humans, for the purpose of promoting preparedness in the event of a pandemic like 2009’s H1N1 (swine flu) outbreak. On the one hand, this can be considered an affront to scientific autonomy and censorship of the public availability of science which the public has funded and to which it has rights of access. On the other hand, the NSABB is concerned that publishing these results in full would enable the construction of more virulent strains of H5N1 by less-than-well-meaning individuals. Thus, it makes a case for the necessity of government involvement in ensuring public health and safety.
What I found so interesting about these two scenarios together is that they both reflect the ongoing confrontation between NSF and the scientific community over the broader impacts merit review criterion. As my colleague Britt has discussed in his recent posts, the controversy surrounding broader impacts is really about the perception of broader impacts as an affront to scientific autonomy; any ‘censorship’ of scientific research, such as redaction by the NSABB, is judged similarly. The NSABB’s involvement in the H5N1 debate was framed as if scientific autonomy and risk were competing values. Likewise, synthetic biologists do not seem overly concerned with the development of DIYbio, or the pursuit of other potentially dangerous research, provided that requirements for responsible research conduct are met. In other words, scientific responsibility is the only moral criterion to which science is bound.
Though scientists are devoted to a creed of responsibility, as Osterholm brought up during the discussion, the assumption that broader impacts are antagonistic to scientific autonomy (implicit in many of Pekosz’s responses) reflects an unwillingness to reconceptualize science’s moral obligations in light of the risks posed by contemporary scientific and technological development. One could argue that the need for such reconceptualization stems from the unprecedented scale of the risks associated with scientific and technological progress in the 21st century, and the scope of their potential fallout. Or perhaps this need derives from public policy’s increasing emphasis on accountability to taxpayers and its increasing attention to public well-being.
But whatever the origins of this need, thoughtfully addressing it does not require further entrenching a dichotomy between the goods of scientific autonomy and societal benefit. Rather, as Britt has also proposed, the meaning and political role of scientific autonomy ought to be reconsidered. If the scientific community embraced broader impacts as an element of its responsibility toward the public, whose taxes fund its research, this would preserve its autonomy, not infringe upon it. The question ultimately up for grabs is this: who sets the goals for science? There are assuredly many who would gladly pick up the torch if scientists drag their feet, and many who, in doing so, may not have the public interest at heart.