[Updated June 1, 5:51 p.m. | Don't miss David Leonhardt's column on this same topic in the upcoming Sunday magazine.]
It should come as no surprise that experts in preventing and stopping blowouts of oil and gas wells long ago saw the deep-ocean drilling frontier as particularly dangerous terrain. But industry — with federal assent, and our assent — drilled on.
So now, Gulf Coast residents face months, likely years, of environmental and economic disruption as a result of a known, but inadequately addressed, risk. There are plenty of signs of failed government oversight and hints of corporate malfeasance in the steps that led to the seabed gusher in the Gulf of Mexico. But the larger question of how best to manage certain hazards looms, too.
Over the weekend, I published a post exploring our seeming inability to deal well with hazards that are foreseeable, but complex and rare — the gulf oil gusher being the example of the moment.
I sent the piece to a variety of specialists in risk perception and response. Some of the varied, and valuable, reactions are appended below. I’ll add more and, of course, welcome your thoughts. Here goes:
Robert Frodeman, University of North Texas:
The point — a quixotic one, I suppose — is less to understand complexity than to lessen it. We inevitably generate messes like the gulf oil spill, the result of massive, ultimately ungovernable technical expertise — expertise that is itself in the service of childish hungers (e.g., our idiotic energy culture).
Paul Slovic, Decision Research:
Certainly this worry about complexity is not new. Charles Perrow’s “Normal Accidents” is but one example from an earlier day. And many of us in the psychology realm have been documenting mistaken and nonrational thinking for decades. We do know a lot about how we can go wrong. I take your piece, and Brooks’s, as a call to action. We need to design our world and its protective systems in a way that is truly respectful of what we know about risk. We need risk school, but we also need public and private partnering in a mission to make our world less vulnerable to disaster.
We know the natural catastrophes that await our coastal communities. What more should we be doing to mitigate this enormous risk? What new legal and institutional arrangements are needed in the face of these prophecies of disaster? I think about this in my studies of psychic numbing — which is but one of numerous vestiges of a brain that evolved to deal with much simpler and more recognizable forms of risk.
Sure, everybody needs to have a better understanding of risk, and in particular the biases that make them under- or overestimate risks. And it’s worth pointing out that our understanding of these biases, however much it may seem to match with folk wisdom, is actually a precise product of careful scientific investigation in rather recent times.
However, we need to distinguish carefully among different types of “risk” that are cognitively and practically very different. Everyday risks (smoking, fire, cars) are well understood, and we know how best to avoid them. The risks posed by complex systems are an entirely different breed of cat. These, too, have been subjected to a lot of study in the past half century, and we understand a great deal about them.
I recall reading an engineering text written in the 1950s that nailed it, long before Three Mile Island, etc. All great engineering catastrophes, it said, resulted from a combination of bad initial design, bad instrumentation that concealed the developing problem and bad training/procedures. I’ve been interested to see how well this has applied to every catastrophe I’ve read about since.
Since each great catastrophe is the result of a combination of small problems that would each be harmless in itself, the way to safety is to watch for the small problems that don’t lead to catastrophe and deal with each of them systemically. I haven’t looked closely into the current spill, but I’ll bet any amount that these old principles apply here — there would have been small problems, at this well or at others, that could have served as warnings but were overlooked.
Thus complex systems do NOT have to lead to great catastrophes, provided somebody (usually government, although in a few cases industry bodies) carefully tracks all potentially dangerous events and forces changes in design, instrumentation, and training/procedures as the warning experiences indicate. This is what has kept the nuclear industry relatively safe since Chernobyl.
Then there are risks like global warming for which, unlike smoking or oil rigs, we do NOT have previous experience to guide us (well, aside from the Paleocene-Eocene Thermal Maximum, not terribly useful). These require an entirely different approach, and normal training in risk handling is not relevant.
Dennis Mileti, professor emeritus of sociology, University of Colorado:
Every now and then, humanity experiences a catastrophic event with a probability of occurring somewhere around three standard deviations from the mean — at least if probabilities are calculated in terms of a short human lifetime. Events like these arise from natural and technological systems and even from humanity itself: for example, a tsunami half a planet wide, a broken nuclear power plant or collapsed oil platform, or a terrorist attack such as the one that happened on 9/11. These events give humanity reason to pause and ponder some variety of the question, “Why doesn’t humanity rationally act to manage 3-standard-deviation risks?”
Almost certainly, the aftermath of such events results in two recurring phenomena: (1) reactive policies that seek to manage with hindsight the event that was just experienced (which I would argue is not risk management at all), and (2) imaginative conversations about other 3-standard-deviation events that could happen, maybe, someday… perhaps a planetary asteroid impact extinction event, the eruption of a supervolcano, catastrophic changes in climate and ocean levels, and more.
The general principle that seems to sum all this up is that humanity may tend to ignore high-consequence, low-probability events that it hasn’t yet experienced, because people may find them too difficult to imagine, and too difficult to spend money on, until they occur. This tendency, if it exists, may be the seed for the extinction of our species, but if it is, no one will be around to accept or reject my hypothesis when the data become available.
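[An editor's aside: Mileti's "three standard deviations" framing can be made concrete with a little arithmetic. Under a normal (Gaussian) assumption (an assumption of this illustration, not part of his note), a one-sided 3-sigma event has a probability of roughly 0.13 percent per trial; if each trial is a year, the average wait between events is on the order of 740 years, well beyond a single human lifetime. A minimal sketch:]

```python
from math import erf, sqrt

def one_sided_tail(sigma: float) -> float:
    """P(X > mean + sigma * sd) for a normally distributed X."""
    return 0.5 * (1.0 - erf(sigma / sqrt(2.0)))

p = one_sided_tail(3.0)    # ~0.00135, i.e., about 0.13% per trial
return_period = 1.0 / p    # ~741 trials between events, on average
print(f"P(3-sigma event) per trial: {p:.5f}; mean return period: {return_period:.0f} trials")
```

[If a "trial" is a year, the event recurs on average once every seven centuries or so, which is exactly why lifetime-scale experience never teaches us to prepare for it.]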
David Ropeik, author of “How Risky is it, Really?”:
This is the same question you’ve asked before, in other contexts — most recently on April 6, regarding perceptions of climate change, when you first cited the Nature piece. Again you suggest, as do Gigerenzer and many, many others, that we’re just not smart enough to get risk right… and that ultimately the human faculty of reason is so superior that it can rise above the affective/emotional, instinctive, subconscious forces that still so powerfully inform decision making by the human ANIMAL. We just need to get smarter… go to “risk school.”
Reason is a fabulous goal, but an unachievable myth, at least at this point in human evolution. We CAN’T be perfectly rational, and there is lots of robust science that explains why, and which explains in rich detail how we actually do risk perception, and how that process is and always will be ‘affective’, a combination of fact and feeling, cognition AND intuition, reason AND gut reaction.
Frankly, it’s frustrating to listen to people on big soapboxes (Brooks, Gladwell, Revkin) speculating about all this, lamenting our lack of reason about risk, and all but ignoring the immense amount of evidence that reveals the affective characteristics of the risk perception process. I was at a dinner a couple of weeks back at which several journalists spoke on just this issue, and Shankar Vedantam and Chris Mooney made a good case for what I have also suggested (including in my reply to you on April 6): what’s really irrational is for smart people, in support of the myth of perfect rationality and frustrated by the public’s ‘ignorance’ about risk, to ignore the mountains of evidence from neuroscience and the social sciences about how human perception and decision making actually work, about risk or anything else. The people responsible for risk management are the ones who have to go to risk school — not a school that teaches how to understand complexity or get risk right ‘rationally,’ based only on the facts, but a school that teaches them how the human ANIMAL perceives and responds to risk, affectively, so the people in charge of keeping us safe can incorporate a more realistic understanding of human behavior into the way we manage risk.
Rather than just vent, I attach my April reply below, which tries to summarize the evidence to which I’m referring (and which is laid out in fuller detail in my book “How Risky Is It, Really? Why Our Fears Don’t Always Match the Facts”). I cite a few of the experts whose work helps us understand risk perception, and summarize their findings, in the hope that what they’ve taught us can be more widely known and take its rightful place in careful risk-management policy making.
Here’s a link to Ropeik’s April 6 reply, which provides a lot of helpful detail and background on the literature of risk and response.