Knowing and acting: The precautionary and proactionary principles in relation to policy making, J. Britt Holbrook and Adam Briggle

SERRC —  April 16, 2013

Author Information: J. Britt Holbrook, Britt.Holbrook@unt.edu, and Adam Briggle, Adam.Briggle@unt.edu, University of North Texas

Holbrook, J. Britt and Adam Briggle. 2013. “Knowing and acting: The precautionary and proactionary principles in relation to policy making.” Social Epistemology Review and Reply Collective 2 (5): 15-37.

The PDF of the pre-print gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-KQ


This essay explores the relationship between knowledge (in the form of scientific risk assessment) and action (in the form of technological innovation) as they come together in policy, which itself is both a kind of knowing and acting. It first illustrates the dilemma of timely action in the face of uncertain unintended consequences. It then introduces the precautionary and proactionary principles as different alignments of knowledge and action within the policymaking process. The essay next considers a cynical and a hopeful reading of the role of these principles in public policy debates. We argue that the two principles, despite initial appearances, are not all that different when it comes to formulating public policy. We also suggest that principles in general can be used either to guide our actions, or to determine them for us. We argue that allowing principles to predetermine our actions undermines the sense of autonomy necessary for true action.

Keywords: Precautionary Principle; Proactionary Principle; Policy; Decision Procedure

Knowledge kills action. (Nietzsche)[1]

1. Knowing and acting

How are knowledge and action related? This question is asked less often than another: When do we know enough to justify taking action? In the context of making science and technology policy, the question assumes yet a different form: When do we have sufficient scientific risk assessments about a new technological activity to warrant promoting that activity and embedding it in society? In this paper, we explore how the relation between knowledge and action should be structured in policymaking.

Decision makers often confront a dilemma: Act too soon, and we create avoidable harms; but act too late, and we forfeit possible improvements. Consider, for example, the case of hydraulic fracturing. In 1947, engineers working for the Stanolind Oil and Gas Corporation conducted the first experimental trial of the “Hydrafrac” technique. They injected 1,000 gallons of gasoline thickened with naphthenic acid and palm oil (napalm) and a gel breaker to stimulate a gas well in the Hugoton gas field in Grant County, Kansas. The results were unimpressive. But they had reason to keep experimenting. Fracturing had been used since the 1860s, when nitroglycerin was used to stimulate hard rock wells in Pennsylvania. Though extremely dangerous, the technique had great success in “shooting” wells, or breaking up oil-bearing formations to increase initial flow and ultimate recovery.

In the 1930s, non-explosive liquid acids were experimented with in attempts to “pressure part” formations. Floyd Farris of Stanolind used these trials to establish a relationship between observed well performance and various treatments. This pioneering scientific study created a better understanding of the phenomena and led to the idea of hydraulic fracturing or Hydrafrac. Further experiments were conducted, now with Halliburton Oil Well Cementing Company owning an exclusive license to pump the new Hydrafrac process. In 1949, 332 wells were treated with far better results: average production increased by 75%.

By 2010, 2.5 million hydraulic fracturing treatments had been performed worldwide. The technique, modified and refined over the years, not only boosts well productivity but also has been credited with adding 9 billion barrels (bbl) of oil and more than 700 trillion standard cubic feet (Tscf) of gas to US reserves since 1949. Without this technique, those reserves would have remained uneconomical to develop (Montgomery and Smith 2010).

The scientists and engineers of the 1940s had reason to believe that high-pressure fluid injections could result in the desired outcomes of increased flow and, thus, profit. There had been earlier results that looked promising. They were not acting in the dark. Yet neither did they know for certain which technique would perform best. There was some level of knowledge that made the risk of further experimentation seem worthwhile. Nonetheless it remained risky. It could have been a bust.

But this is not the whole picture, because the knowledge discussed so far pertains only to the intended outcome of increasing well productivity. This confines the picture to the corporate players and reduces the question to one about their bottom line: is further experimentation a good risk to take in terms of maximizing profit? The unintended effects not directly related to the goal of productivity are left out of the picture. For example, what are the risks posed by various chemicals and techniques to groundwater? What will happen to the millions of gallons of chemicals that flow back up the well? Where will all the necessary water come from? Is the further production and consumption of fossil fuels a good thing?

How much knowledge do we require about unintended consequences prior to taking action? Your answer to that question is likely to depend on how you are situated relative to intended and unintended consequences. Some interests are better served by acting sooner rather than later. The corporations reap the benefits of increased productivity and do not bear the full burden of environmental contamination. Other interests are better served by delaying action until more certainty can be had about possible harms to their water. Some will advocate acting and fixing problems along the way. Others will advocate anticipating and resolving problems prior to action.

Decisions must be made, and public policy makers, at all levels of government, are struggling to balance these competing interests. Might they find useful guidance in a general principle?

2. The precautionary and proactionary principles

The precautionary and proactionary principles have both been advanced as useful tools for making such science and technology policies. Our goal is to explore the normative question of how we ought to think about these principles and their role in policy making.

Our understanding of ‘policy’ is the “social process of authoritative decision making by which the members of a community clarify and secure their common interests” (Clark 2002, p. 6). The policy process is a social dynamic that determines who gets what, when, and how (Lasswell 1950). It involves multiple actors, institutions, and perspectives engaged with defining and adjudicating problems. The policy process entails iterative interactions between intelligence gathering and advocacy (where knowledge and values are applied and contested), the anchoring decision or prescription (that stabilizes expectations in the form of rules), implementation and application (enforcing and sanctioning rules), and appraisal and termination (assessments that can feedback into the intelligence phase for new policies) (see Clark 2002, chap. 5).


Figure 1: The Policy Process

So, our main goal re-stated is to understand how the proactionary and precautionary principles would have us structure relationships between knowledge and action in this process.

The precautionary and proactionary principles have been developed as ways to structure policymaking where there is (a) the possibility of harm and benefit resulting from a new technological activity and (b) scientific uncertainty about the harms and benefits involved. They both pertain to the balancing of individual and corporate freedom (to pursue various intended desired outcomes) with the need for protections from the unintended adverse effects that the exercise of such liberties can produce.

In such situations we can generally attempt to prevent or restrain the activity until cause-effect relations are better understood (precaution); or we can generally promote the activity while learning more about cause-effect relations along the way (proaction). We can conceive of the technology as guilty until proven innocent (precaution, where the burden of proof lies with proponents of the activity) or as innocent until proven guilty (proaction, where the burden of proof lies with opponents of the activity). Precautionary politics tend to invoke scientific uncertainty to curb technological innovation (for example, we do not know enough about the health impacts of fracking to justify its promotion). Proactionary politics tend to encourage innovation as a way to test hypotheses (for example, we can improve drilling and production as we learn from successes and failures in practice).
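To make the contrast concrete, here is a minimal sketch, in Python, of how the placement of the burden of proof changes the default outcome when the evidence is inconclusive. The function and its parameters are our own illustrative abstraction, not a formulation drawn from either principle’s texts.

```python
def permit_activity(evidence_of_harm, evidence_of_safety, burden_on):
    """Toy model of a permitting decision under inconclusive evidence.

    burden_on='proponent' mimics a precautionary stance (the activity is
    treated as 'guilty until proven innocent'); burden_on='opponent' mimics
    a proactionary stance ('innocent until proven guilty'). The evidence
    arguments stand in for the outcome of a risk assessment: True means the
    relevant case has been demonstrated, None means it remains uncertain.
    """
    if burden_on == "proponent":
        return evidence_of_safety is True    # may proceed only if safety is shown
    if burden_on == "opponent":
        return evidence_of_harm is not True  # proceeds unless harm is shown
    raise ValueError("burden_on must be 'proponent' or 'opponent'")

# With wholly inconclusive evidence, the two stances give opposite defaults.
print(permit_activity(None, None, burden_on="proponent"))  # False: activity restrained
print(permit_activity(None, None, burden_on="opponent"))   # True: activity permitted
```

With wholly inconclusive evidence, the precautionary default is restraint and the proactionary default is permission; everything else turns on how that inconclusive evidence is produced and weighed in the policy process.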

But is this the best way to think of these principles, and do they really offer anything valuable to the policy process (cf. Stirling 2007; Luján and Todt 2012)?

The precautionary principle has not only been the subject of a great deal of academic discussion (see for instance Foster et al. 2000; Sunstein 2005; Stirling 2007; Luján and Todt 2012), but it has also been written into official science policy (European Commission 2000, Annex II). In contrast, the proactionary principle has attracted little academic attention outside of advocates of transhumanism (Fuller 2012a and 2012b are exceptions) and is not cited in any official policy.

In what follows, we first situate the discussion within a wider debate about the place of science in deliberative democracies. We then outline the precautionary and proactionary principles, noting how they relate to the policy process and the role of science therein. Next, we address the question of how, given the preceding discussion, we ought to think about the precautionary and proactionary principles in relation to science and technology policy making. Finally, we conclude with a set of recommendations about the use of such principles in policy making.

3. The diminishing role of science in deliberative democracies

Modern science was long seen as a source of truth that could secure universal consent and legitimate a common authority in pluralist societies (Shapin and Schaffer 1985). Science was thought to produce impartial and neutral knowledge that establishes facts outside of any particular point of view (see Lacey 2005). This “value-free ideal” holds that “value judgments internal to science, involving the evaluation and acceptance of scientific results at the heart of the research process, are to be as free as humanly possible of all social and ethical values” (Douglas 2009, p. 45). Thus insulated from “external” values, science, by virtue of its special epistemic authority, would garner the legitimacy to compel political action (see Sarewitz 1996; Pielke and Byerly 1998; Guston 2000). In line with this value-free ideal, decisions in complex, technologically developed societies rightly flow from the top down as scientific experts isolate problems and apply tools to solve them (Collins and Evans 2002). Indeed, the very term ‘policy’ seems implicitly to denote this rationalistic approach to problems as distinct from the irrational bargaining and power of ‘politics’ or ‘advocacy’ (cf. Pielke 2002).

But the notion that the universal method of science delivers certainty, dissolves disputes, and compels action began to unravel in the latter part of the 20th century (see Pielke 2007; Brown 2009; Oreskes and Conway 2010). Indeed, science often prolongs and exacerbates political controversies rather than resolving them (Sarewitz 1996; Sarewitz 2004). The value-free ideal is unable to explain persistent disagreements about such issues as climate change, biotechnology, and nuclear waste storage. One way to attempt to salvage the ideal is to posit that the science is ignored or misunderstood. Yet these explanations do not account for the fact that “the science” is often itself uncertain and contentious. Politics cannot be replaced by science or engineering. Ethics and values disputes are an inextricable part of science and technology policy, including the question of which science to treat as authoritative and which technologies to promote.

The conclusion that science does not automatically resolve political disputes and may in fact itself be ‘politicized’ has prompted Science, Technology, and Society (STS) scholars to conceive and construct alternative science policy relationships that respond to the following dilemma: “If it is no longer clear that scientists and technologists have special access to the truth, why should their advice be specially valued?” (Collins and Evans 2002, 236). Two efforts are most significant: (1) those that, focusing on policy for science, seek to reform knowledge production; and (2) those that, focusing on science for policy, seek to reform the roles of “experts” and “lay citizens” in science and technology policies. [2]

First, among those focusing on policy for science, several scholars seek to reconcile the supply of scientific knowledge with its demand in policy making contexts (see Sarewitz and Pielke 2007; Pielke 2007). Another important example is the idea of “Mode 2 knowledge production,” which outlines the need to replace investigator-initiated, discipline-based research with research that is problem-focused and interdisciplinary (Gibbons et al. 1994; Nowotny et al. 2001). Research on interdisciplinarity and interactive expertise has also flourished under increasing societal demand that knowledge producers be held accountable for achieving real-world outcomes (Klein 1996; Frodeman et al. 2010; Gorman 2010; Holbrook 2012a). Others are investigating how science funding agencies can and should respond to this accountability culture through the inclusion of broader societal impacts consideration in peer review processes (Bozeman and Boardman 2009; Frodeman and Briggle 2012; Holbrook 2009; 2012b; Holbrook and Frodeman 2011).

A second response to “post-normal” science is to focus on the use of science for policy making. A good example is the promotion of citizen participation in policy contexts where science and technology intersect with public interests. Since science cannot eliminate the need for political decisions about values, invoking the value-free ideal itself becomes a political tactic where choices are framed as “neutral,” while in reality they embody certain values and perspectives (Haraway 1997; Fischer 2000; Latour and Porter 2004; Evans 2006; Fischer 2009). The fuzziness of the line between politics and science (or facts and values) means that liberal democracies premised on the ideal of “government by discussion” must find ways to enrich deliberation about, broaden participation in, and improve the intelligibility of science and technology policies (Winner 1986; Fischer 1990; Kass 2002; Turner 2003).

At stake in the science for policy literature are moral and political questions about who to include, how to make decisions, how to refine and broaden the conceptual and normative language brought to bear in public deliberation, and how to define a “good” decision (Sclove 1995; Brown 2009). Some have blurred the line between “expert” and “public” by pointing out forms of “lay expertise” and “traditional knowledge” (Wynne 1996; Yearley 2000; Menzies 2006). Others have sought to justify and design more inclusive decision making arenas such as participatory, real-time, and constructive technology assessment (Rip, Misa, and Schot 1995; Guston and Sarewitz 2002; Sclove 2010), consensus conferences and citizen juries (Hamlett 2005), and policy advisory committees (Jasanoff 1990; Briggle 2009). Such “democratization” of scientific expertise raises questions about the legitimacy of deliberative governance (Lövbrand, Pielke, and Beck 2011). It also poses the “problem of extension” (Collins and Evans 2007): how far should the basis for technical decision making in the public domain be widened beyond certified experts? Is there a rationale for such expansion and what are its limits? What kinds of expertise exist and when should they be granted legitimacy in the public sphere?

4. The precautionary principle in science policy

Since others (for instance, Foster et al. 2000; Sunstein 2005; Stirling 2007; Luján and Todt 2012) have examined the precautionary principle on the level of theory, including its multiple formulations and their varying policy implications, we will not do so here. Instead, we focus on how the use of the precautionary principle in science policy contexts fits into our analysis of the diminishing role of science in science policy.

The precautionary principle has been incorporated into policy documents, especially in international environmental policy contexts, since the 1980s (European Commission 2000, Annex II). Its inclusion in policy documents is a move that seems to fit squarely in the policy for science category: it subjects scientific and technological development to the demand that proponents demonstrate a proposed development is safe, or at least that the level of uncertainty about the harm involved is reduced to an acceptable level of risk. The idea behind the precautionary principle, then, is to introduce a higher value (precaution) as a governor on both scientific knowledge production and technological development.

The application of the precautionary principle entails a distinction between the socio-political on the one hand and the scientific on the other, at least in the European context. The European Commission (2000) puts the point this way: “The Commission wishes to reaffirm the crucial importance it attaches to the distinction between the decision to act or not to act, which is of an eminently political nature, and the measures resulting from recourse to the precautionary principle, which must comply with the general principles applicable to all risk management measures.” Whether to invoke the precautionary principle is a political decision; once the principle is invoked, other principles, especially ‘scientific’ principles that guide risk management, are brought into play.[3]

However, it is not the case that this recognition of the political nature of the decision to invoke the precautionary principle implies a strict separation between politics and science. The political decision to invoke the precautionary principle is not made in isolation from scientific considerations. In fact, the decision to invoke the precautionary principle depends on both a prior identification of possible harm (a potential for the activity to lead to an undesirable outcome) and a level of scientific uncertainty concerning the risk (an unknown probability that the activity will lead to undesirable outcome). It is only if we both know that there may be risks and are uncertain about the extent of those risks that the precautionary principle is invoked.

In terms of the policy process, then, the precautionary principle inhabits both the intelligence gathering and values advocacy phases, as its invocation is a political decision about how to handle risk and uncertainty. Its application then structures the intelligence gathering phase, by calling for scientific evaluation to be “as complete as possible” and to identify “at each stage the degree of scientific uncertainty” (European Commission 2000).  The anchoring decision (or prescription) phase of the policy process hinges on whether the risks identified by the thorough scientific evaluation are acceptable or not.

At the anchoring phase of the policy process, in which rules are developed for specific cases, the precautionary principle takes on a diminished role. The development of these rules is governed by an additional set of principles, including proportionality, non-discrimination, consistency, cost-benefit analysis, ongoing monitoring of scientific developments, and the establishment of the burden of proof (European Commission 2000). These additional principles place limits on the application of the precautionary principle, so that the specific policies established, having been previously framed by the precautionary principle, also meet standards of fairness. Without these limits on the precautionary principle, it would be possible, for example, to single out a particular company and to prohibit it from developing a particular product until it proved the product safe even if another company were allowed to develop a similar product with similar risks without having to prove safety. Only in establishing the burden of proof, in the absence of existing state regulations, does the precautionary principle still apply in the anchoring phase.

The precautionary principle falls into the category of policy for science, since it imposes a general guiding value on knowledge production and use. The precautionary principle also falls into the category of science for policy, since it establishes science as the frame within which policy prescriptions are made. Policy making is situated as a response to the risks identified by science – the political decision is whether those risks are acceptable or not. Thus, the precautionary principle is limited in its application both by ‘scientific’ values (those associated with risk assessment and management, in particular) and in general by the principle of fairness in its implementation and transformation into specific policies. These limitations include guidance regarding the use of science in policy (for instance, in terms of establishing the burden of proof in particular cases).

Despite such limitations, some critics contend that the problem with the precautionary principle is neither that it is too prescriptive of particular rules nor that it is applied in discriminatory or inconsistent ways. Instead, they target their critique at the general idea they believe the precautionary principle entails: avoidance of risk. For this reason, they have developed an alternative, the proactionary principle, which entails the opposite value (risk taking).

5. The proactionary principle

The proactionary principle was explicitly designed as an alternative to the precautionary principle. It is in large part the brainchild of the futurist and transhumanist philosopher Max More, who attributes its origins to a 2004 conference of his Extropy Institute. More and the other ‘proactionaries’ in attendance at the conference argued that rational risk-taking defines the human essence and that solving complex problems depends on encouraging (rather than restricting) technoscientific innovation. Most technological developments bring undesired effects along with the desired ones. The proactionary principle “allows for handling mixed effects through compensation and remediation instead of prohibition” (More 2005).

The proactionary principle may be best seen in light of the more general “principles of extropy,” which outline a belief that the best kind of society is one that promotes individual freedom to think rationally, act independently, and create ways to liberate humanity from the bonds of our “biological heritage, culture, and environment” (More 2003). The proactionary principle states:

People’s freedom to innovate technologically is highly valuable, even critical, to humanity. This implies a range of responsibilities for those considering whether and how to develop, deploy, or restrict new technologies. Assess risks and opportunities using an objective, open, and comprehensive, yet simple decision process based on science rather than collective emotional reactions. Account for the costs of restrictions and lost opportunities as fully as direct effects. Favor measures that are proportionate to the probability and magnitude of impacts, and that have the highest payoff relative to their costs. Give a high priority to people’s freedom to learn, innovate, and advance. (More 2005)

More (2005) then offers his critique of the precautionary principle, which “biases decision making institutions toward the status quo, and reflects a reactive, excessively pessimistic view of technological progress.” By contrast, the proactionary principle would have decision makers “take into account all the consequences of an activity —good as well as bad — while apportioning precautionary measures to the real threats we face, in the context of an appreciation of the crucial role played by technological innovation and humanity’s evolving ability to adapt to and remedy any undesirable side-effects” (More 2005).

According to More (2005), the precautionary principle prevents the introduction of new technologies that are essential to progress. More breaks this broad critique into six specific problems with the precautionary principle: it (a) assumes worst-case scenarios; (b) underestimates established threats to health, especially natural risks; (c) assumes the effects of regulation are positive or neutral, never negative; (d) ignores the potential benefits of technology; (e) illegitimately shifts the burden of proof onto the proponent of an activity; and (f) conflicts with more balanced, common-law approaches to risk and harm.

More (2005) concludes his remarks with a section on “the essence of the proactionary principle”:

Being proactive involves not only anticipating before acting, but learning by acting. When technological progress is halted, people lose an essential freedom and the accompanying opportunities to learn through diverse experiments. We already suffer from an undeveloped capacity for rational decision making. Prohibiting technological change will only stunt that capacity further. Continuing needs to alleviate global human suffering and desires to achieve human flourishing should make obvious the folly of stifling our freedom to learn. Let a thousand flowers bloom! By all means, inspect the flowers for signs of infestation and weed as necessary. But don’t cut off the hands of those who spread the seeds of the future.

It is clear from this formulation that the proactionary principle conceives of policy making as itself a kind of ongoing scientific experiment. The proactionary principle emphasizes the role of science in the later implementation, application, and appraisal stages where the ‘flowers’ that have already been planted by policy action are ‘inspected.’ By contrast, the precautionary principle emphasizes science in the early, intelligence gathering, stages of policy making in a way that structures science more as something that precedes policy. To push the metaphor, it would have us inspect the seeds prior to sowing them—an activity that More sees as both obstructionist (because we will not get flowers) and hopeless (because we cannot predict outcomes without experimentation). We argue in the conclusion, however, that these differences are a matter of degree rather than of kind. More does not advocate taking action on the basis of total ignorance, so science plays some role prior to policy making for him. The question is how much knowledge to gather prior to action, not whether to gather knowledge at all.

Although the proactionary principle is of recent coinage, More (2005) suggests something like it has long guided society’s approach to technology policy: had we been following the precautionary principle, “progress would have ground to a halt” as we would have “no chlorination and no pathogen-free water; no electricity generation or transmission; no X-rays; no travel beyond the range of walking.” We would have remained stuck inspecting seeds rather than sowing them and reaping the fruits while managing the unintended problems.

6. The limits of principles

We see two ways of thinking about the precautionary and proactionary principles and their relation to policy. The first is deeply cynical: the principles are of use in policy making only as masks for legitimating political decisions. The second is more hopeful: the principles are useful platforms for articulating the fundamental values choices at stake in any given policy. The difference between the views hinges on the role such principles play in policy making (which we discuss in §7).

6.1 The Cynical Story

The cynical story suggests that although they each seem to prescribe something rather specific, the precautionary and proactionary principles are actually masks. That is to say, they are parasitic upon prior values commitments embedded in a more fundamental conceptual scheme. This means that one could easily make and justify polar opposite policy decisions by appeal to the same principle. Fuller (2012a) writes, “In social psychological terms, precautionary policymakers set their regulatory focus on the prevention of worst outcomes, whereas proactionary policymakers seek the promotion of the best available opportunities.” (This is what decision theorists call “maximin” and “maximax” strategies.) [4]  But in the case of fracking, an environmentalist’s worst outcome is pollution, whereas for a venture capitalist it is diminished returns on investment. They could both be ‘precautionary’ in advocating contrasting policies. The same holds for ‘best available opportunities’: a healthy environment or a healthy bottom-line? One could be ‘proactionary’ either way.
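The point can be illustrated with a small sketch in Python. The payoff figures below are invented for the sake of the example; they are not estimates of any actual costs or benefits of fracking.

```python
# Hypothetical payoffs for two policy options under two scenarios,
# scored from each actor's own point of view (higher is better).
# All figures are invented purely for illustration.
payoffs = {
    "environmentalist": {
        "allow_fracking":    {"contamination": -10, "no_contamination": 0},
        "restrict_fracking": {"contamination": -2,  "no_contamination": 2},
    },
    "investor": {
        "allow_fracking":    {"contamination": -3, "no_contamination": 10},
        "restrict_fracking": {"contamination": -8, "no_contamination": -8},
    },
}

def maximin(options):
    """'Precautionary' rule: choose the option whose worst case is least bad."""
    return max(options, key=lambda o: min(options[o].values()))

def maximax(options):
    """'Proactionary' rule: choose the option whose best case is best."""
    return max(options, key=lambda o: max(options[o].values()))

for actor, options in payoffs.items():
    print(actor, "-> maximin:", maximin(options), "| maximax:", maximax(options))
```

On these invented numbers, both rules recommend restriction to the environmentalist and permission to the investor: whichever principle one invokes, the prescription tracks the prior valuation of outcomes, which is exactly the cynical point.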

Cass Sunstein (2008) has similarly argued that the precautionary principle is “deeply incoherent.” Precautions also create risks, “hence the principle bans what it simultaneously requires.” As one example, Sunstein points to the “drug lag” that ensues when government takes a highly precautionary approach to new pharmaceuticals: “Stringent review protects people against inadequately tested drugs; but it will also prevent people from receiving the benefits of new medications. Is it ‘precautionary’ to require extensive testing, or to do the opposite?”

This reading of principles has precedent in bioethics. In his landmark 1973 essay, “Bioethics as a discipline,” Daniel Callahan argued that ideally this emerging discipline would provide a “good methodology” or decision procedure. This would allow policy makers to reach “specific conclusions at specific times.” He notes that the only ethical methodologies capable of doing that are ones, like Roman Catholic scholasticism, that are essentially deductive. Given well-established first principles (presupposed cultural conditions and shared worldviews), one can deduce the right decision in any specific instance. Alas, as Callahan notes, in a pluralistic society we share no such common moral world.

Callahan argues that bioethics must find “some commonly shared principles” that could function in a society composed of many worldviews the same way first principles function in Roman Catholicism. “Short of finding that,” he concludes, “I do not see how ethical methodologies can be developed which will include methods for reaching quick and viable solutions in specific cases.” And indeed much of the subsequent history of bioethics has been occupied with the development of such ‘universal’ principles. Principlism in bioethics maintains that moral controversies can be resolved and policies made through the application of the principles of respect for autonomy, beneficence, and justice (see Beauchamp and Childress 1979).

Echoing the cynical story about the precautionary and proactionary principles, K. Danner Clouser and Bernard Gert (1990) argue that bioethics principles can be at best a check-list of important things to remember. They cannot serve as decision procedures, because each principle has a kind of relativism built into it. For example, ‘justice’ as a principle could denote any number of theoretical accounts of justice from Aristotle through Kant to Bentham, which rely on incommensurable premises. This means people motivated by deeply divergent value systems could invoke the same principle (for example, justice) to justify contradicting policy options. Similarly, a transhumanist will see the use of performance enhancements as the epitome of human autonomy (e.g., Savulescu 2007) whereas a bioconservative will see them as a threat to autonomy (e.g., President’s Council on Bioethics 2003). ‘Autonomy’ means fundamentally different things to them – differences that are masked behind the principle.

To focus on the principles is in fact to let the tail wag the dog. Whether one adheres to the precautionary or the proactionary principle is driven by something deeper about that person’s worldview. Those who tend to value science over technology will also tend to value precaution. Those who tend to value technology over science will also tend toward proaction. In the case of fracking, those with an ecocentric or environmental values commitment will generally find the language of precaution more appealing. In general, those with anthropocentric or libertarian worldviews will find the language of proaction more appealing.

Allegiances to principles can also shift on the basis of how the technology at issue accords with one’s worldview. For example, the Sierra Club is precautionary about fracking but proactionary about solar and wind technologies. Allegiances can also hinge on more local interests rather than comprehensive worldviews. A libertarian may advocate precaution when it comes to fracking if it is likely to diminish his ability to enjoy his property.

When included as part of the policy process, the principles can be used to justify decisions one has already made (or wants to make) on the basis of foundational values or particular interests. They are there to provide the cover of a principled decision process, so that it can appear that one’s desired outcome is the result of a fair, objective, and scientific assessment rather than a foregone conclusion. According to the cynical story, then, the principles are masks for politics as usual.

6.2 The Hopeful Story

The hopeful story tells a markedly different tale, according to which the precautionary and proactionary principles serve as platforms for articulating and discussing basic values orientations. According to the hopeful story, the principles can serve as starting points for discussing underlying values and coming to grips with the societal implications of advances in scientific knowledge and technology. We see such a conversation already beginning in the scholarly literature, despite the fact that — or perhaps because — the precautionary principle has received the lion’s share of scholarly attention. [5]

Steve Fuller is at this point the main scholarly interlocutor coming from a proactionary orientation. Fuller (2012b) addresses precaution and proaction as the ‘new ideological divide’ coming to dominate the 21st century (with the new right — or “down” — characterized as precautionary and the new left — or “up” — as proactionary). Fuller suggests that a proactionary ideology would unite libertarians and technocrats in support of a re-imagined welfare state that would both reward risk taking among its citizens and protect citizens from the negative consequences of their risk taking behaviors. Putting the state in charge of risk taking would protect citizens, according to Fuller, from the interests of corporations whose values are dominated by the maximization of profits rather than the free pursuit of the realization of human potential.

Fuller’s call for such a proactionary welfare state should be understood in conjunction with a particular view of history and a particular view of ethics. Fuller describes what he calls a ‘Tory historiography’ (2003, chapter 19) or ‘deviant interdisciplinarity’ (2010) as taking an alternative view — an alternative to the dominant historical view, known as ‘Whig history’ — on how things have come to be as they are. Whereas Whig histories tell the story of how we advanced — and how we inevitably had to advance — along the path to our present state of knowledge, a Tory approach to history sees history’s ‘losers’ as having begun to venture down paths not yet taken. Those whom the Whig historian would describe as ‘losers’ are redescribed by the Tory historian as ‘deviant’ – though this description is a term of approbation, as with the appropriation of terms such as ‘rebel’ or ‘dangerous’ or ‘queer’ for positive purposes. Fuller (2012c) also outlines a kind of moral exemplar he terms the ‘moral entrepreneur’, a master of the “fine art of recycling evil into good” (63). A moral entrepreneur is one who takes a crisis (often one of her own making, according to Fuller) and converts it into an opportunity for learning something new. For the moral entrepreneur, the world is ‘reversible’ – losers can become winners, evil can become good.

Tory histories and tales of deviant interdisciplinarians past may help inspire the moral entrepreneur, but the proactionary welfare state helps provide the social and political conditions for the moral entrepreneur to operate. Reversal requires risk, and the conditions have to be right to allow that risk to pay off. Hence, the proactionary welfare state both rewards risk-taking on the part of moral entrepreneurs and protects the rest of society from the negative consequences of the moral entrepreneur’s risk-taking. Tory history looks back to show how past ‘losers’ can become winners; the proactionary welfare state looks forward to help bring about a future society friendly to moral entrepreneurs.

Fuller’s call for the establishment of a reformed welfare state to promote the proactionary adventures of moral entrepreneurs may sound utopian at best; or one might dismiss Fuller as a Nietzschean madman calling for a social revolution to promote the development of a transhuman Übermensch. We think this is part of his strategy. Fuller (2012a) writes:

In my popular book, The Intellectual, I defended a person who is more concerned with the whole truth than only the truth (Fuller 2005). The intellectual would prefer to utter falsehoods that are subsequently eliminated, attenuated or mitigated than utter truths that turn out to prevent the pursuit of further truths, either by declaring an end to a line of inquiry or threatening that a heterodox line of inquiry would render the inquirer pathological. In short, overstatement invites participation from others – however negative the consequences for the utterer herself – whereas understatement carries what Paul Grice used to call the “implicature” that individuals should worry most of all about their own personal epistemic status. (My recent interest in proactionary vs. precautionary attitudes towards risk – discussed below – is arguably an outgrowth of this awareness.) (page 5, all italics in original)

If we are correct, then Fuller’s advocacy of the proactionary principle, including his outrageous tone, is an effort to begin a critical discussion, to ‘invite participation’ from others, about the role of principles — specifically the precautionary principle — in policy making.

Another recent attempt to provoke such a discussion that also moves beyond, while still incorporating, the precautionary principle is René von Schomberg’s (2013) attempt to outline the conditions for Responsible Research and Innovation (RRI). Although von Schomberg argues that lack of precautionary measures is a violation of RRI, he also pairs the precautionary principle with technology foresight in an effort to explore alternative paths to innovation: “Rather than a constraint, the precautionary principle can thus provide an incentive to open up alternative research and development trajectories” (2013, 19). The point of RRI is precisely to lay out a framework that “invites the participation of others” in the discussion of innovation:

Definition: Responsible Research and Innovation is a transparent, interactive process by which societal actors and innovators become mutually responsive to each other with a view to the (ethical) acceptability, sustainability and societal desirability of the innovation process and its marketable products (in order to allow a proper embedding of scientific and technological advances in our society). (von Schomberg 2013, 19 – italics in original)

Although von Schomberg’s tone is different, he seems to be aiming for something along the same lines as Fuller. Both feel the need to open up a conversation. Neither believes the precautionary principle alone is sufficient to determine policy prescriptions. Both believe that values that go beyond market values should be discussed. [6]  Both suggest that we need to have a forward-looking bent to our deliberations about science and technology. If they do not agree on every detail, at least there is sufficient common ground — as well as sufficient difference of opinion — on which to base a discussion. Or so the hopeful story goes.

7. Implications and recommendations for science policy

We see several implications of the preceding discussion for the role of principles in general for science policy.

  1. In order to avoid the problems outlined in our discussion of the cynical story regarding principles (in §6.1, above), principles should not be treated as decision procedures. Treating principles as decision procedures is a category mistake, one that leads to several confusions. Among them is the idea that principles will tell us what to do, if only we follow them. But in fact, principles are not sufficient in and of themselves to render specific rules. The formulation of such rules during the prescription stage of the policy process always requires judgments, which may be oriented, but not fully determined, by principles.
  2. Another confusion resulting from treating principles as decision procedures is to think that the use of principles is confined to the prescription stage of the policy process. On such a view, values are predetermined, intelligence is gathered, and this is fed into the principles, which spit out prescriptions. Used as guides, rather than as decision procedures, principles can be operative in many stages of policy formation. Principles can be used, for instance, to orient the appraisal stage of the policy process for assessing the specific rules rendered in the prescription stage.
  3. Finally, treating principles as decision procedures warps our perception of the policy process in a specific manner. Our focus is drawn to the promotion stage of the policy process, and our discussion is limited to the values we ought to instantiate in our decision procedure. Discussing values is an important part of the policy process, one that is arguably often undervalued as ‘merely’ subjective. But if establishing principles renders automatic decisions, then the importance of the promotion stage of the policy process is wildly exaggerated.

We also have several recommendations concerning the use of the precautionary and proactionary principles in science policy.

  1. As a policy for science and technology, invoking the precautionary principle ought to be expressly limited to situations in which there is scientific uncertainty regarding the extent of risk. We take this point to be consistent with the use of the precautionary principle in policy in the EU. Pace More and Fuller, the point of the precautionary principle is not to discourage anyone from taking risks, per se. The point, instead, is to reduce uncertainty (an unknown unknown) to risk (a known unknown) prior to taking action.
  2. In both our policies and our theoretical discussions, we should avoid discussing the precautionary principle in terms of ‘burden of proof’. Under most interpretations, when potential risk is identified but cannot be quantified (due to scientific uncertainty), the precautionary principle places the burden of proof to show that an action is safe on the proposer of the action. Rather than interpreting the precautionary principle to suggest that uncertainty about risk is relevant to determining the appropriate burden of proof – and further, that this burden ought to lie with those proposing an action — we suggest that the precautionary principle should be discussed in terms of burden of risk (see next point).
  3. Invoking the precautionary principle should result in a discussion of the appropriate burden of risk. Although the precautionary principle as instantiated in the EU includes the idea that the risk of inaction, as well as action, should be considered, we think that this should be supported by providing some sort of institutionalized process of discussion (von Schomberg’s outline for Responsible Research and Innovation and David Resnik’s version of the precautionary principle (Resnik 2012) might provide the starting point). In addition to discussing the pros and cons of action and inaction, however, this discussion ought also to result in specific policy recommendations about:

a. who should bear the responsibility for reducing uncertainty to risk (and this burden could be distributed among various stakeholders);

b. who should bear the risk of what potential harms; and

c. how potential harms will be mitigated, and by whom.

These policy recommendations should be seen in terms of recasting the precautionary principle in more proactionary terms. On a theoretical level, we are suggesting that we come to see the precautionary principle as neither based upon nor entailing the idea of risk-aversion. To put the point differently, from a theoretical point of view, the precautionary principle is not a claim about the essential epistemic value of certainty. If the hopeful story we tell above has any hope of coming to pass, the discussion must begin here.

The precautionary and proactionary principles are not fundamentally opposed. Rather, they map out vague destinations along a continuum (see Figure 2).


Figure 2. How much knowledge should we have prior to acting?

More suggests that the principles are polar opposites. But he is only able to do so by casting the precautionary principle in extreme terms. His reading of precaution makes it sound as though it is a recipe for squelching any and every innovation. To follow it consistently would trap us in the Stone Age. One could make a similarly extreme caricature of the proactionary principle as a recipe for a wanton, reckless, and cavalier free-for-all. To follow it consistently would be to license a total disregard for any potential risks.

But these extremes are caricatures. Indeed, the two principles have rather striking similarities. Most importantly, both acknowledge the significance of freedom to engage in and protection from the pitfalls of risk taking. They both acknowledge the importance of evaluating risks at least to some extent prior to taking action. No one, not even the most staunch proactionary, is seriously advocating widespread deployment of totally unknown chemicals without any prior knowledge of their properties. Similarly, no one is seriously calling for the cessation of all industrial activity.

More frames his idea as an “alternative” to the precautionary principle. But he is still speaking from within the same framework of risk evaluation and management. He just demands that such processes also consider the risks of regulating technologies rather than only the risks of not regulating them, a claim that many advocates of the precautionary principle also endorse. [7] He acknowledges the need for restrictive measures when the “potential impact of an activity has both significant probability and severity.” This leaves plenty of interpretive room for one to apply restrictions.

The significant middle ground shared by the two principles means that they could be used to arrive at very similar policy prescriptions. The principles are not simple decision procedures, which, if followed, would eliminate the need for interpretation and judgment in specific policy scenarios. This means, in turn, that the principles are far from sufficient for making policy. Polarizing characterizations of competing interests under the labels ‘precautionaries’ and ‘proactionaries’ only makes the policy process more difficult.

Fuller, like More, portrays ‘precautionaries’ as risk-averse. Invoking the argument between William James and William Kingdon Clifford about the ethics of belief (James 2009 [1896]), Fuller (2012a) links ‘proactionaries’ to James and ‘precautionaries’ to Clifford:

For the Jamesian voluntary believer, epistemology is about leveraging what we know now into a future we would like to see. For the Cliffordian ethical believer, epistemology is about shoring up what we know so that it remains secure as we move into an uncertain future. The former seeks risks and hence errs on the side of overestimating our knowledge, while the latter avoids risk and hence errs on the side of underestimating our knowledge.

However, as we argue above, the precautionary principle does not entail an avoidance of risk. On the contrary, the precautionary principle actually seeks risk, insofar as it requires the reduction of uncertainty (unknown unknowns) to risk (known unknowns). Whereas the dispute between James and Clifford turns on the epistemic risk of uncertainty, the dispute between ‘proactionaries’ and ‘precautionaries’ turns on the questions of how we reduce uncertainty to risk, who is responsible for doing so, and when we need to do so relative to taking action.

In fact, both ‘proactionaries’ and ‘precautionaries’ agree that reduction of uncertainty to risk ought to be based on science. But who should pay for the studies? Must the studies be conducted before the potentially harmful activity takes place? How many studies or how much evidence will be sufficient? Who should be held responsible, assuming harms actually result from the activity? We suggest that these are questions that ought to be answered on a case by case basis, with the joint participation of those in favor of and opposed to the action in a discussion of the burden of risk. Rather than invoking principles as a justification for our tendencies either to overestimate or underestimate our knowledge, in each case we should weigh the risks of both action and inaction and pursue the course of action we judge to be most likely to mitigate the harms of both.
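As a minimal sketch of what such case-by-case weighing might look like once uncertainty has been reduced to risk, consider the following comparison of expected harms. The probabilities and harm estimates are hypothetical placeholders; in any real case they would be the contested products of the studies and stakeholder discussion described above.

```python
# Hypothetical expected-harm comparison for a single policy decision.
# Each course of action lists (probability, harm) pairs for its salient
# adverse outcomes; the figures are illustrative placeholders only.
courses = {
    "act_now": [(0.10, 80), (0.30, 10)],  # e.g. contamination, minor spills
    "delay":   [(0.60, 25), (0.20, 15)],  # e.g. forgone benefits, higher imports
}

def expected_harm(outcomes):
    """Probability-weighted sum of harms for one course of action."""
    return sum(p * h for p, h in outcomes)

for course, outcomes in courses.items():
    print(course, "expected harm:", expected_harm(outcomes))
# act_now: 0.10*80 + 0.30*10 = 11.0; delay: 0.60*25 + 0.20*15 = 18.0
```

The arithmetic is trivial; the hard and genuinely political work lies in agreeing on which outcomes to count, who supplies the estimates, and who bears the residual risk.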

8. Knowing and acting

We open this paper with a quote from Nietzsche: knowledge kills action. Yet, throughout the paper, we have sought to answer the question of how much knowledge we should have before acting. We return now to the more fundamental question: How are knowledge and action related? Nietzsche’s claim — knowledge kills action — seems to offer an easy answer: acting requires that we lack knowledge. Expressed in Schopenhauerian terms, Nietzsche continues: “action requires the veils of illusion” (1967 [1872], p. 60).

We prefer to put the point in more Kantian terms: acting requires autonomy. To say that acting requires autonomy means that acting requires that we own our actions. We own our actions by giving ourselves the principle according to which we ought to act. If anything else determines our actions for us, we are in a condition of heteronomy. This is why principles cannot be predetermined. Acting on the basis of predetermined principles is not really acting — that is, acting on the basis of predetermined principles rules out the possibility of giving ourselves the principles according to which we act (since they are already given to us). If we know in advance of acting what we ought to do, we are determined by that knowledge, and therefore we are not autonomous. Autonomy — acting on our own — requires that we not know ahead of time how we ought to act. Knowledge kills action in the sense that predetermination of principles kills autonomy.

References

Beauchamp, Tom and James Childress. 1979. Principles of Biomedical Ethics. 1st ed. Oxford: Oxford University Press.

Bozeman, Barry, and Craig Boardman. 2009. “Broad Impacts and Narrow Perspectives: Passing the Buck on Science and Social Impacts.” Social Epistemology 23 (3-4): 183-198.

Briggle, Adam, Robert Frodeman, and J. Britt Holbrook. 2006. “Introducing a Policy Turn in Environmental Philosophy.” Environmental Philosophy 3 (1): 70-77.

Briggle, Adam. 2009. “The Kass Council and the Politicization of Ethics Advice,” Social Studies of Science, 39 (2): 309-326.

Briggle, Adam. 2013. “Let Politics, Not Science Decide the Fate of Fracking,” Slate, March 12. http://www.slate.com/blogs/future_tense/2013/03/12/fracking_bans_let_politics_not_science_decide.html.

Brown, Mark. 2009. Science in Democracy: Expertise, Institutions, and Representation. Cambridge, MA: MIT Press.

Callahan, Daniel. 1973. “Bioethics as a Discipline,” Hastings Center Studies 1: 66-73.

Clouser, K. Danner and Bernard Gert. 1990. “A Critique of Principlism.” The Journal of Medicine and Philosophy 15: 219-236.

Collins, H.M., and Robert Evans. 2002. “The Third Wave of Science Studies: Studies of Expertise and Experience.” Social Studies of Science 32 (2): 235-296.

Collins, H.M., and Robert Evans. 2007. Rethinking Expertise. Chicago, IL: University of Chicago Press.

Douglas, Heather. 2009. Science, Policy, and the Value-free Ideal. Pittsburgh, PA: University of Pittsburgh Press.

Evans, John. 2006. “Between Technocracy and Democratic Legitimation: A Proposed Compromise Position for Common Morality Public Bioethics.” Journal of Medicine and Philosophy 31: 213–34.

Fischer, Frank. 1990. Technocracy and the Politics of Expertise. London: Sage.

Fischer, Frank. 2000. Citizens, Experts, and the Environment. London: Duke University Press.

Fischer, Frank. 2009. Democracy and Expertise: Reorienting Policy Inquiry. Oxford: Oxford University Press.

Foster, Kenneth R., Paolo Vecchia and Michael H. Repacholi. 2000. “Science and the Precautionary Principle.” Science 288: 979–981.

Frodeman, Robert, Julie Thompson Klein and Carl Mitcham, eds. 2010. The Oxford Handbook of Interdisciplinarity. Oxford: Oxford University Press.

Frodeman, Robert and Adam Briggle. 2012. “The Dedisciplining of Peer Review.” Minerva 50 (1): 3-19.

Frodeman, Robert, Adam Briggle and J. Britt Holbrook. 2012. “Philosophy in the Age of Neoliberalism.” Social Epistemology 26 (3-4): 311-330.

Fuller, Steve. 2012a. “Social Epistemology: A Quarter-Century Itinerary.” Social Epistemology 26 (3-4): 267-283.

Fuller, Steve. 2012b. “Precautionary and Proactionary as the New Right and the New Left of the Twenty-First Century Ideological Spectrum.” International Journal of Politics, Culture, and Society 25 (4): 157-174.

Gibbons, Michael, Camille Limoges, Helga Nowotny, Simon Schwartzman, Peter Scott, and Martin Trow. 1994. The New Production of Knowledge: The Dynamics of Science and Research in Contemporary Societies. London: Sage.

Gorman, Michael, ed. 2010. Trading Zones and Interactional Expertise. Cambridge, MA: MIT Press.

Guston, David. 2000. Between Politics and Science: Assuring the Integrity and Productivity of Research. New York: Cambridge University Press.

Guston, David and Daniel Sarewitz. 2002. “Real-time Technology Assessment.” Technology in Society 24: 93-109.

Hamlett, Patrick. 2005. “Consensus Conferences” in The Encyclopedia of Science, Technology, and Ethics. 4 vols, ed. Carl Mitcham, 412-414. New York: Macmillan Reference.

Haraway, Donna. 1997. Modest_Witness@Second_Millennium.FemaleMan_Meets_OncoMouse: Feminism and Technoscience. New York: Routledge.

Holbrook, J. Britt. 2009. “Editor’s Introduction.” Social Epistemology 23 (3-4): 177-181.

Holbrook, J. Britt. 2012a. “What is interdisciplinary communication? Reflections on the very idea of disciplinary integration.” Synthese. DOI:10.1007/s11229-012-0179-7.

Holbrook, J. Britt. 2012b. “Re-assessing the science – society relation: The case of the US National Science Foundation’s broader impacts merit review criterion (1997 – 2011),” in Peer Review, Research Integrity, and the Governance of Science – Practice, Theory, and Current Discussions, eds. Robert Frodeman, J. Britt Holbrook, Carl Mitcham, and Hong Xiaonan, 328-362. Beijing: People’s Publishing House.

Holbrook, J. Britt and Robert Frodeman. 2011. “Peer review and the ex ante assessment of societal impacts.” Research Evaluation 20 (3): 239–246.

James, William. 2009 [1896]. “The Will to Believe,” in The Will to Believe and Other Essays in Popular Philosophy. Project Gutenberg edition. http://www.gutenberg.org/files/26659/26659-h/26659-h.htm.

Jasanoff, Sheila. 1990. The Fifth Branch: Science Advisors as Policymakers. Cambridge, MA: Harvard University Press.

Jasanoff, Sheila. 2010. “A Field of its Own: The Emergence of Science and Technology Studies,” in The Oxford Handbook of Interdisciplinarity, eds. Robert Frodeman, Julie Thompson Klein and Carl Mitcham, 191-205. Oxford: Oxford University Press.

Kass, Leon. 2002. Life, Liberty, and the Defense of Dignity: The Challenge for Bioethics. San Francisco: Encounter Books.

Klein, Julie. 1996. Crossing Boundaries: Knowledge, Disciplinarities, and Interdisciplinarities. Charlottesville: University of Virginia Press.

Kleinman, Daniel Lee, Jason Delborne and Robyn Autry. 2008. “Beyond the Precautionary Principle in Progressive Politics: Toward the Social Regulation of Genetically Modified Organisms.” Tailoring Biotechnologies 4 (1/2): 41-54.

Lacey, Hugh. 2005. Is Science Value Free? Values and Scientific Understanding. London: Routledge.

Latour, Bruno. 2004. Politics of Nature: How to Bring the Sciences into Democracy, trans. Catherine Porter. Cambridge, MA: Harvard University Press.

Lövbrand, Eva, Roger Pielke and Silke Beck. 2011. “A Democracy Paradox in Studies of Science and Technology.” Science, Technology, and Human Values 36 (4): 474-496.

Luján, José Luis and Oliver Todt. 2012. “Precaution: A Taxonomy.” Social Studies of Science 42 (1): 43-57. DOI: 10.1177/0306312711431836.

Menzies, Charles, ed. 2006. Traditional Ecological Knowledge and Natural Resource Management. Lincoln, NE: University of Nebraska Press.

Montgomery, Carl and Michael Smith. 2010. “Hydraulic Fracturing: History of an Enduring Technology.” Journal of Petroleum Technology (December): 26-41.

More, Max. 2003. “Principles of Extropy.” www.extropy.org/principles.htm.

More, Max. 2005. “The Proactionary Principle.” www.maxmore.com/proactionary.html.

Nietzsche, Friedrich. 1967 [1872]. The Birth of Tragedy and The Case of Wagner, trans. Walter Kaufmann. New York: Random House.

Nowotny, Helga, Peter Scott, and Michael Gibbons. 2001. Re-Thinking Science: Knowledge and the Public in an Age of Uncertainty. Cambridge: Polity.

Oreskes, Naomi and Erik Conway. 2010. Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. New York: Bloomsbury.

Pielke, Roger, Jr. and Rad Byerly. 1998. “Beyond Basic and Applied.” Physics Today 51 (2): 42-46.

Pielke, Roger, Jr. 2002. “Policy, Politics, and Perspective.” Nature 416: 367-68.

Pielke, Roger, Jr. 2007. The Honest Broker: Making Sense of Science in Policy and Politics. Cambridge: Cambridge University Press.

President’s Council on Bioethics. 2003. Beyond Therapy: Biotechnology and the Pursuit of Happiness. Washington, DC: U.S. Government Printing Office.

Resnik, David. 2012. Environmental Health Ethics. Cambridge: Cambridge University Press.

Rip, Arie, Thomas J. Misa and Johan Schot. 1995. Managing Technology in Society: The Approach of Constructive Technology Assessment. London: Pinter.

Sarewitz, Daniel. 1996. Frontiers of Illusion: Science, Technology, and the Politics of Progress. Philadelphia, PA: Temple University Press.

Sarewitz, Daniel. 2004. “How Science Makes Environmental Controversies Worse.” Environmental Science & Policy 7 (5): 385–403.

Sarewitz, Daniel and Roger Pielke Jr. 2007. “The Neglected Heart of Science Policy: Reconciling Supply of and Demand for Science.” Environmental Science & Policy 10: 5-16.

Savulescu, Julian. 2007. “Genetic Interventions and the Ethics of Enhancement of Human Beings,” in The Oxford Handbook of Bioethics, ed. Bonnie Steinbock. Oxford: Oxford University Press.

Sclove, Richard E. 1995. Democracy and Technology. New York: Guilford Press.

Sclove, Richard E. 2010. “Reinventing Technology Assessment: A 21st Century Model.” http://loka.academia.edu/RichardSclove/Papers/238116/Reinventing_Technology_Assessment_A_21st_Century_Model.

Shapin, Steven, and Simon Schaffer. 1985. Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life. Princeton, NJ: Princeton University Press.

Stirling, Andrew. 2007. “Risk, precaution and science: towards a more constructive policy debate.” EMBO Reports 8 (4): 309-15.

Sunstein, Cass. 2005. Laws of Fear: Beyond the Precautionary Principle. Cambridge: Cambridge University Press.

Sunstein, Cass. 2008. “Throwing Precaution to the Wind.” Boston Globe July 13: http://www.boston.com/bostonglobe/ideas/articles/2008/07/13/throwing_precaution_to_the_wind/?page=full.

Turner, Stephen. 2003. Liberal Democracy 3.0: Civil Society in an Age of Experts. London: Sage.

Von Schomberg, René. 2012. “The Precautionary Principle: Its Use Within Hard and Soft Law.” European Journal of Risk Regulation 2: 147-56.

Von Schomberg, René. 2013. “A vision of responsible innovation.” In Responsible Innovation, eds. R. Owen, M. Heintz and J. Bessant. London: John Wiley, forthcoming.

Winner, Langdon. 1986. The Whale and the Reactor: A Search for Limits in an Age of High Technology. Chicago: University of Chicago Press.

Wynne, Brian. 1996. “May the Sheep Safely Graze? A Reflexive View of the Expert–Lay Knowledge Divide,” in Risk, Environment & Modernity: Towards a New Ecology, eds. Scott Lash, Bronislaw Szerszynski and Brian Wynne, 44–83. London: Sage.

Yearley, Steven. 2000. “Making Systematic Sense of Public Discontents with Expert Knowledge: Two Analytical Approaches and a Case Study.” Public Understanding of Science 9 (2): 105–22.


[1] Nietzsche (1967 [1872]), p. 60.

[2] This is not to discount reflexive efforts to determine the proper role for philosophers and STS scholars in relation to issues of science and technology policy (Briggle, Frodeman, and Holbrook 2006; Fuller 2012a; Frodeman, Briggle, and Holbrook 2012; Jasanoff 2010).

[3] Stirling (2007) argues convincingly that the precautionary principle is compatible with principles of risk assessment, as well as risk management.

[4] Thanks to David Resnik for pointing this out at a presentation of an earlier version of this paper.

[5] In the remainder of this section, we focus mainly on the proactionary principle in a small effort to redress the balance.

[6] See Kleinman, Delborne, and Autry (2008) for a similar discussion of the need to move beyond the “precautionary trap” that commits its adherents to a scientific framing of their concerns. Like us, they argue that what is needed is a conversation that directly and explicitly confronts underlying values. See also Briggle 2013.

[7] Cf. von Schomberg’s (2012) discussion of the role of the precautionary principle with regard to ‘soft’ regulation in the case of nanotechnology: “Rather than stifling research and innovation, the precautionary principle acts within the [European] Code of Conduct as a focus for action, in that it calls for funding for the development of risk methodologies, the execution of risk research, and the active identification of knowledge gaps” (p. 155). See also Resnik (2012).


