CADDIS Volume 1: Stressor Identification

Causal Assessment Background

A Chronological History of Causation for Environmental Scientists

This brief historical review is intended to provide a basis for deeper inquiry into causation. The history is based on individual contributors and is divided into four sections:

  • Major Pre-Contemporary Authors (those published before the mid-20th century) are presented chronologically, with original sources cited by title and date (when known). This section is for those who want a deep background in causation.
  • Recent Philosophy of Causation includes professional philosophers, plus philosophical writings by statisticians and scientists.
  • Recent Epidemiology and Causation is reviewed because human health epidemiology is the major source of methods for ecological epidemiology—including CADDIS.
  • Recent Applied Ecology and Causation is reviewed for ecologists who want to know how other ecologists have proposed to determine causation.

Recent authors are listed chronologically within disciplinary categories by their earliest relevant publication, and their works are cited conventionally.

Inevitably, this review leaves out many authors who addressed causation. Pre-contemporary authors are included if they made an important contribution to concepts of causation in the contemporary philosophy of science. Contemporary authors are included if they propose ideas of causation that are potential contributors to our approach or potentially viable alternatives.

The review also neglects non-Western philosophies. This is not because they do not address the issue of causation. Indian philosophy has been particularly concerned with the nature of causation, notably in the doctrine of Karma. However, these philosophies have not contributed to scientific concepts of causality in general or to our approach in particular.

Throughout this review, causes and effects, as logical propositions or statistical parameters, are represented as C and E, respectively.

Pre-Contemporary Authors and Causation

This section addresses authors who are deceased and published their relevant work prior to the mid-20th century. Authors are arranged chronologically by the dates of their most important publications on causation. All disciplines are included, but most of these authors were philosophers. 

Highlights of this period begin with Galileo's development of a scientific concept of causation involving necessary and sufficient conditions. David Hume showed that regular association cannot prove causation but is the basis for belief that a relationship is causal. J.S. Mill was the first to make experimentation the definitive method for determining causation. John Herschel was the first to define causation in terms of a set of characteristics, which set the stage for Hill's criteria and the CADDIS method. Koch's postulates provide an alternative set of causal criteria. Pearson developed probabilistic causation, which is expressed by correlation. Fisher linked Pearson's probabilistic causation with Mill's experimental approach.

Pre-Ancients

Prior to the ancient philosophers, causation was attributed to conscious agents (gods, humans, demons, etc.). Hence, causal explanation was a matter of assigning responsibility (Frankfort 1946, Mithen 1998).

Plato (427-347 BCE)

In Phaedo, Plato attributed to Socrates the rejection of empirical explanations of why things come into being and pass away, in favor of reliance on reason. Things are as they are because they participate in a form, but the forms are eternal and uncaused (Mittelstrass 2007). In Timaeus, he wrote “whatever becomes must necessarily become, owing to some cause; without a cause it is impossible for anything to achieve becoming.” However, he did not indicate what the causes of things might be.

Aristotle (384-322 BCE)

In Analytics and Physics, Aristotle established the deductive method in science. That is, truths can be derived as a series of consequences from a few fundamental principles.

He defined four kinds of causes: material, formal, efficient, and final. The final cause of a thing is its purpose (telos). The efficient cause (the one that acts) is how a thing happens, and corresponds most nearly to the modern concept. The material cause is that out of which a thing is made. The formal cause is the form into which the thing is made. All are aspects of causation in the sense of “why is something the way it is” (Mittelstrass 2007). Aristotle's analysis of causation was intended to be deductive. A sculpture is caused to be because a sculptor (efficient cause) takes a piece of stone (material cause) and imposes the form of his patron (formal cause) to earn a fee (final cause).

In Metaphysics, he wrote “that which is called Wisdom is concerned with the primary causes and principles.” He differed from modern philosophers of causation in his concern with telos and in lacking a concept of a regular functional relationship. His teleology referred to an internal principle that guides natural processes, not to a supernatural director of nature. Whatever “we ascribe to chance has some definite cause” (Physics).

Roman and Medieval Periods

In imperial Rome, and in both medieval Europe and the Muslim world, commentaries on and elaborations of Plato and Aristotle dominated writings on causation (see Ch. 3-5 in Machamer and Wolters (2007)).

Galileo Galilei (1564-1642)

In Discourses and Mathematical Demonstrations Concerning the Two New Sciences (Discorsi, 1638), Galileo distinguished description from explanation. A description is an empirical law; explanation is teleological. He advocated empirical description and warned against seeking causes in the Aristotelian, teleological sense.

Galileo also made the important distinction between necessary and sufficient conditions. In Dialogue on the Two Chief World Systems (Dialogo, 1632), he wrote, “If it is true that an effect has a single primary cause, and that between the cause and the effect there be a firm and constant connection, then it necessarily follows that whenever a firm and constant alteration is perceived in the effect, there will be a firm and constant alteration in the cause.”

He also wrote, “That and no other is to be called cause, at the presence of which the effect always follows, and at whose removal the effect disappears.” This is a practical definition of Aristotle's efficient cause. It is not clear whether this is the first manipulationist theory of causation or just a counterfactual statement. Ducheyne (2006) argued that Galileo was the first to propose a manipulationist theory of causation based on his scientific practices.

In the Dialogo, Galileo acknowledged that there may be a causal complex, but stated that there is always a unitary primary cause.

Galileo established causal explanation as the heart of natural philosophy (science). On the other hand, he did not hypothesize causes when he could only identify law-like regularities. For example, he determined that the velocity of a falling object was a function of the square of time, but he refused to speculate as to whether something (a causal agent) was pulling or pushing the falling object.

Francis Bacon (1561-1626)

Bacon was the father of the inductive method in science, but he criticized induction by enumeration of confirming cases as childish (Novum Organum, 1620).

He advocated examining both positive and negative instances, but emphasized the importance of negative instances: “The induction which is to be available for the discovery and the demonstration of sciences and arts must analyze nature by proper rejections and exclusions; and then after a sufficient number of negatives, come to a conclusion on the affirmative instances.”

Bacon also emphasized predictive performance. If a hypothesis (axiom) is “larger and wider” than the existing factual basis, in the sense of making a prediction beyond it, then an experiment verifying the chancy prediction “confirms its wideness and largeness by indicating new particulars as a kind of collateral security” (Novum Organum).

He warned against four delusions: idols of the tribe (perceptual illusions), cave (personal bias), marketplace (linguistic confusion), and theater (dogmatic systems). Each of these may lead to errors in causal assessments.

Isaac Newton (1642-1727)

Newton held that causes must be verae causae, known to exist in nature (i.e., based on evidence independent of the phenomena being explained) (Philosophiæ Naturalis Principia Mathematica, 1687). His advice against framing hypotheses (Hypotheses non fingo) had inordinate influence, but apparently he meant hypotheses about unknown or supernatural causes. Gravity is one such unknown cause: in describing the effects of gravity without hypothesizing what it is, he followed Galileo's practice.

Newton provided “Rules of Philosophizing” in the Principia:

  • “No more causes of natural things should be admitted than are both true and sufficient to explain their phenomena.”
  • “The causes assigned to natural effects of the same kind must be, as far as possible, the same.”
  • “Those qualities of bodies that cannot be increased or diminished and that belong to all bodies on which experiments can be made should be taken as qualities of all bodies universally.”
  • “In experimental philosophy, propositions gathered from phenomena by induction should be considered either exactly or very nearly true notwithstanding any contrary hypotheses, until yet other phenomena make such propositions either more exact or liable to exceptions.”

Gottfried Leibniz (1646-1716)

In a letter to Huygens, as an argument against Newton's theory of gravitation, which was all mathematical law and not physical mechanism, Leibniz stated “The fundamental principle of reasoning is, nothing without cause” (Gleick 2003). That is, Leibniz required a causal agent.

John Locke (1632-1704)

Locke was the father of empiricism and the empirical epistemology of causation (causation is something we perceive rather than an ideal or entity), followed by Berkeley and Hume. In An Essay Concerning Human Understanding, Book II (1690), he wrote “That which produces any simple or complex idea we denote by the general name 'cause,' and that which is produced, 'effect'.” Locke also suggested an early version of inference to the best explanation (see C.S. Peirce).

Baruch Spinoza (1632-1677)

In Improvement of the Understanding (published posthumously), Spinoza broke with Plato, the academicians, and Descartes in arguing that things are self-caused and self-sustained. He recognized two types of efficient causes: intrinsic self-moving and self-sustaining properties (e.g., the planets exist and move without outside intervention) and extrinsic causes (e.g., a collision of an asteroid with a planet would cause its orbit to change).

George Berkeley (1685-1753)

In Concerning Motion (1721), Berkeley argued that causation is purely mental and, therefore, the real causes of motion (i.e., causal agency) are a matter for metaphysics, not mechanics. He thought that Newton had shown that there was no mechanical causation, and terms like attraction and force convey the illusion of an explanation (McMullin 2000).

David Hume (1711-1776)

In An Enquiry Concerning Human Understanding (1748) and A Treatise of Human Nature (1739), Hume combined Locke's empiricism (causation based on impressions from experience) with a neo-skeptical stance and greater attention to the issue of causation. In the Enquiry, he wrote “All reasonings concerning cause and effect are founded on experience, and all reasonings from experience are founded on the supposition that the course of nature will continue uniformly the same.”

We may think that reason and experience can prove causation, but Hume demonstrated that is not true. We must assume “the uniformity of nature” or “conformability to the past,” and that assumption may not hold. We believe that the future will be like the past because the future has been like the past in the past, but that is a circular argument. We believe based on “constant conjunction” and “lively or vivid impression.” His terminology is not consistent, but causal criteria are expressed in the Treatise as contiguity, priority, and constant conjunction. He defined causation as “an object, followed by another, and where all the objects similar to the first are followed by objects similar to the second.” Hence, Hume replaced the concept of necessity in causation with regularity. The cause of a unique event cannot be determined, because there can be no constant conjunction. Therefore, singular causal events must be instances of a general causal relationship.

In addition to this regularity definition, Hume also presented in the Enquiry a counterfactual definition, “an object followed by another…where, if the first had not existed, the second had never existed.” He apparently considered this to be equivalent to the regularity definition, but Lewis (1973) showed that they are different.

Hume said that our idea of causation comes from our experience of manipulation.

Hume was skeptical about proving causation, but in the Treatise, he allowed that our inductive processes are genuinely “correspondent” to natural processes and are “essential to the subsistence of human creatures.” He called this inherent propensity to extrapolate from the past an instinct, held in common with the beasts. His solution in the Enquiry was to “explain where you cannot justify.”

He approved of Newton's physics, which quantified relationships (laws of nature) without appealing to unobserved underlying causal mechanisms (ether or phlogiston) (McMullin 2000).

Thomas Bayes (1702-1761)

Bayes argued that truth is objective, but knowledge is subjective. In particular, he accepted deductive logic but realized that it depended on the truth of premises and we typically are not certain of our premises. Hence, we must begin with our belief in the premises and then apply logic, including Bayes's theorem of conditional probability. He was more of an inferential pessimist than Hume in the sense that, while Hume showed that inductive proof is impossible, Bayes showed that deductive proof is also uncertain.
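
In modern notation (a standard statement of the theorem, not Bayes's original formulation), evidence D updates the prior belief P(H) in hypothesis H to a posterior belief:

```latex
P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D)}
```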

Immanuel Kant (1724-1804)

Causality is an a priori category of understanding (Critique of Pure Reason, 1781). In response to Hume, Kant tried to bridge the gap between rationalist and empiricist approaches by positing that human perception is filtered through innate categories of ideas such as time and space. Hence, “Every event is caused” is a synthetic a priori truth. This innate concept enables us to organize, label, and read experiences and generate causal laws. He followed Hume in arguing that causal laws apply to experience, not things.

Pierre-Simon Laplace (1749-1827)

In the Philosophical Essay on Probabilities, which is an introduction to The Analytical Theory of Probability (1812), Laplace presented a mechanistic and deterministic theory of causality. This theory states that, if we knew the state of the universe at a moment and had sufficient knowledge of natural laws and sufficient computational capability, we could predict future states. This determinism may seem surprising given that Laplace is one of the founders of probability theory. However, he ascribed uncertainty to our state of knowledge: “Ignorance of the different causes involved in the production of events, as well as their complexity, taken together with the imperfection of analysis, prevents our reaching the same certainty about the vast majority of phenomena.”

Georg W. F. Hegel (1770-1831)

In Science of Logic (1813 & 1817), Hegel, a German idealist, struggled with the idea of causation because the real world is so contingent and unpredictable. In particular, he argued that causation does not apply to living things, because they react to external influences and thereby modify their effect. “Whatever has life does not allow the cause to reach its effect, that is, cancels it as a cause.” Also, “That which acts upon living matter is independently changed and transmuted by such matter...Thus it is inadmissible to say that food is the cause of blood, or certain dishes or cold or damp the cause of fever, and so on.”

John Herschel (1792-1881)

In A Preliminary Discourse on the Study of Natural Philosophy (1830), Herschel, an English astronomer, physicist, and philosopher of science, advocated verae causae, causes which are known to exist in nature and not “mere hypotheses or figments of the mind,” an idea taken from Newton. However, he believed that verae causae must be agents that are observed in nature and for which we have inductive evidence that they may cause the sort of phenomenon that we are trying to explain. This redefinition of a vera causa requires direct experience.

Herschel defined five characters of causal relations:

    1. “Invariable antecedent of the cause and consequence of the effect, unless prevented by some counteracting cause.”
    2. “Invariate negation of the effect with the absence of the cause, unless some other cause be capable of producing the same effect.”
    3. “Increase or diminution of the effect with the increased or diminished intensity of the cause.”
    4. “Proportionality of the effect to its cause in all cases of direct unimpeded action.”
    5. “Reversal of the effect with that of the cause.”

He did not believe that these prove causation, but, rather, “That any circumstance in which all the facts without exception agree, may be the cause in question, or, if not, at least a collateral effect of the same cause...”

Herschel was apparently elaborating on Hume, but he appealed to the authority of Bacon and Newton. He was particularly concerned that “counteracting or modifying causes may subsist unperceived and annul the effects of the cause we seek...” He suggested the importance of natural or contrived experiments to resolve those concerns, but did not use the term or emphasize the idea.

He dealt with multiple causes by suggesting that, as each cause is understood, its effects can be “subducted,” leaving “a residual phenomenon to be explained.”

William Whewell (1794-1866)

Whewell was an English astronomer, historian, and philosopher of science but, unlike the empiricists Herschel and Mill, a Kantian. In The Philosophy of the Inductive Sciences Founded Upon Their History (1840), he suggested that we should accumulate observations and then derive a hypothesis that explains the data. He called this colligation, or seeing facts in a new light. Tests of colligations are that they

    1. are empirically adequate (account for all available data);
    2. provide successful predictions; and
    3. provide consilience of inductions (bring unrelated hypotheses together), which implies that simplicity is the goal.

He condemned Darwin for starting with his hypothesis, then gathering cases to support it. However, following Kant, he believed that the evidence simply stimulated the mind to recognize self-evident a priori truths. Hence, he opposed Herschel and Mill, who believed that the fundamental ideas of science were inductions from experience (Hull 1983).

He developed the concept of consilience—the results of one science should be consistent with those of other sciences. He also applied consilience to the process of identifying theories that cover a variety of phenomena or types of evidence.

As a Kantian, he believed that knowledge was possible only because the mind supplied certain fundamental ideas such as force, time, space, mass, and causality (Scarre 1998).

John Stuart Mill (1806-1873)

As an empiricist, Mill argued that no knowledge exists independent of experience (Mill 1843). He avoided Hume's problem of induction by assuming uniformity, which he said was justified by experience of the universality of causation (causal relationships are consistent across space and time). He either did not appreciate the circularity of that justification or did not care.

Mill believed that scientific hypotheses are alternatives to be eliminated by experiment or observation. Axioms, however, are not derived by elimination but by enumerative induction (simple generalizations from experience).

He identified four qualitative classes of causal explanation.

  • The best “is a law or body of laws that gives necessary and sufficient conditions for any state of a system”
  • Next best is necessary and sufficient conditions for a state.
  • Next, necessary conditions for a state.
  • Finally, mere regular association, “the method of concomitant variation.”

Association comes from observation; the others require experiments. Mill is the first philosopher of science to clearly argue the priority of experiments over uncontrolled observations. He wrote that “…we have not yet proved that antecedent to be the cause until we have reversed the process and produced the effect by means of that antecedent artificially, and if, when we do, the effect follows, the induction is complete…”

However, Mill believed that social sciences are too complex for experimentation. Hence, we must deduce laws for wholes from laws for parts and then test against history. (This approach assumes there are no emergent properties.)

He argued against Whewell (whom he opposed as a Kantian idealist) and for experiments by pointing out that more than one hypothesis might account for observations so only a controlled manipulation could distinguish among them (Scarre 1998).

He did not believe that predicted associations were any better than observed prior associations.

He was a Monist, that is, he believed that there is only one cause of an effect, but in most cases, it is a network of conditions. This belief is associated with the idea that causes are unconditional and necessary. In the System of Logic he wrote, “That which is necessary, that which must be, means that which will be, no matter what supposition we may make in regard to all other things.” Hence, like Hume, he believed that necessity was the feature that distinguishes causation from other associations.

He derived his philosophy of science after he abandoned botany for chemistry and physics (Scarre 1998), which may be due to the conceptual difficulty of causation in complex and hierarchical living systems.

Charles Darwin (1809-1882)

Darwin tried to follow Herschel but could not resist framing hypotheses. In the end, he was unapologetic. He admitted that he framed his theory of coral reef formation before he had seen a reef and then performed observations needed to confirm his hypothesis (Hull 1983).

“The line of argument often pursued throughout my theory is to establish a point as a probability by induction and to apply it as a hypothesis to other parts and see whether it will solve them” (quoted in Hull 1983). This is analogous to using observations to derive potential causes and then making predictions about results of proposed studies or about existing observational data not used in hypothesis formation.

“He [Hutton] is one of the very few who see that the change of species cannot be directly proved, and that the doctrine must sink or swim according as it groups and explains phenomena” (quoted in Hull 1983).

Charles Sanders Peirce (1839-1914)

Peirce was a geologist, astronomer (U.S. Coast and Geodetic Survey), philosopher (the founder of pragmatism; followers include William James, Thorstein Veblen, and John Dewey), mathematician, and an early contributor to semiotics. His “principle of pragmatism” is that the meaning of a proposition depends on its practical consequences. That is, how might it conceivably modify our conduct? Hence, pragmatism is often described by the aphorism “thinking is for doing.” His writings were voluminous but mostly unpublished in his lifetime, so the standard reference is the Collected Papers.

Peirce argued that there are three types of inference: deduction, induction, and abduction. Abduction, which is his creation, is inference to the best explanation. That is, the hypothesis that best explains the existing information is most likely to be true. The strength of evidence analysis in the Stressor Identification Guidance and in CADDIS is an example of abductive inference.

Peirce strung the three inferential approaches together into a general scientific method, which he termed the inductive method (abductive inference plus the hypothetico-deductive method).

    1. Abduction is used to frame a most likely explanatory hypothesis.
    2. Deduction derives testable consequences from the hypothesis.
    3. Induction evaluates the hypothesis in light of the observed consequences.

According to the Oxford English Dictionary (1971), he also was the first to use the term “weight-of-evidence” in print. He provided a formal definition for the weight-of-evidence for two hypotheses in terms of the odds ratio.
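
A compact modern rendering of that definition (the notation follows later usage, such as I.J. Good's, rather than Peirce's own symbols) takes the weight of evidence E for hypothesis H1 against H2 to be the logarithm of the likelihood ratio, the factor by which the odds on H1 are multiplied:

```latex
W(H_1 : H_2 \mid E) = \log\frac{P(E \mid H_1)}{P(E \mid H_2)},
\qquad
\frac{P(H_1 \mid E)}{P(H_2 \mid E)} = \frac{P(H_1)}{P(H_2)} \cdot \frac{P(E \mid H_1)}{P(E \mid H_2)}
```
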
Peirce recognized that the interpretation of probability as frequency relied on the problem being solved consisting of a series of repeated events (coin tosses, card draws, etc.). However, we want to make probabilistic arguments about unique events. So, he developed the concept of confidence and the concept of confidence intervals, which was made rigorous by Jerzy Neyman.

He believed that we could gain confidence in an inference based on the repetition of measurements or observations, because they will converge. Hence, they are self-correcting. Through “collective enquiry,” they will gradually stabilize on a reliable answer. Hence, science is a belief of the community (see the discussion by Hacking (2001)). However, he was a philosophical objectivist and opposed subjectivism in that sense.

In Illustrations of the Logic of Science, he propounded tychism: there is absolute chance in the universe, so the laws of nature are probabilistic. This belief was due to the influence of Darwin, but Peirce defended the role of chance in evolution as part of a pattern in science in general, citing gas laws and variance in astronomical observations.

Robert Koch (1843-1910)

Koch's postulates (also known as the Henle-Koch postulates) provide a standard of proof for identifying pathogens responsible for diseases. However, Koch never explicitly wrote them, so they must be inferred from cases in which he applied them. The four-step version is:

    1. The microorganism in question must be shown to be consistently present in diseased hosts.
    2. The microorganism must be isolated from the diseased host and grown in pure culture.
    3. Microorganisms from pure culture, when injected into a healthy susceptible host, must produce the disease in the host.
    4. Microorganisms must be isolated from the experimentally infected host, grown in pure culture, and compared with the microorganisms in the original culture.

Henri Poincaré (1854-1912)

In Les Méthodes Nouvelles de la Mécanique Céleste (1892), Poincaré anticipated a bit of chaos theory, but not the role of nonlinearity. He wrote “A very small cause which escapes our notice determines a considerable effect that we cannot fail to see, and then we say that the effect is due to chance....it may happen that small differences in the initial conditions produce very great ones in the final phenomenon.”

Karl Pearson (1857-1936)

In The Grammar of Science, 2nd edition (1900), Pearson took Galton's concept of “co-relation,” developed it as a quantitative tool, and made causation, at most, a subset of it. He argued that everything we can know about causation is contained in contingency tables: “Once the reader realizes the nature of such a table, he will have grasped the essence of the concept of association between cause and effect” (The Grammar of Science, 3rd edition (1911)). Hence, causation is probabilistic consistency of association.

Bertrand Russell (1872-1970)

In Mysticism and Logic (1912), Russell was dismissive of causality: “The law of causality...like much that passes muster among philosophers, is a relic of a bygone age, surviving like the monarchy, only because it is erroneously supposed to do no harm.”

He pointed out that the laws of physics are symmetrical (F=ma and a=F/m) so F can be the cause of acceleration (the conventional interpretation) or its effect. In contrast, causation is taken to be asymmetrical. Why do we say that a force causes acceleration and not the other way around? For Russell (1913, cited in Sloman 2005), invariant natural laws are simply mathematical, not causal, because they do not imply agency.

In Principia Mathematica (1910) with A.N. Whitehead, Russell attempted to establish a formal symbolic logic based on set theory, which apparently included all of mathematics and formal logic but not causation. They argued that there is no one relationship that encompasses everything we call causation. The best we can do is “material implication” (the presence of A implies the presence of B).

In Human Knowledge (1948), Russell continued to reject the conventional concept of causation but developed a theory of process causation based on continuous physical processes that form causal lines. “When two events belong to one causal line the earlier may be said to 'cause' the later. In this way laws of the form 'A causes B' may preserve a certain validity.”

F. A. Moritz Schlick (1882-1936)

Schlick, a philosopher trained in physics, was a founder of the Vienna Circle and an originator of logical positivism. He made Hume's regularity a basis for assuming causation. “Every ordering of events in the temporal direction, of whatever kind it may be, is to be viewed as a causal relation. Only complete chaos, and utter lawlessness, could be described as non-causal occurrence, as pure chance; every trace of order would already signify dependence, and hence causality” (Schlick 1931). Epistemologically, he believed that “the true criterion of regularity, the essential mark of causality, is the fulfillment of predictions” (Schlick 1931). This fits with the logical positivists' belief that the meaning of a proposition is its method of verification.

Ronald Fisher (1890-1962)

Fisher argued that experiments in which treatments are randomly assigned to replicate systems can demonstrate causal relationships, but the relationship is probabilistic due to variance among replicates and treatments and sampling error (Fisher 1937).

He was operationally a falsificationist, providing a method for rejecting a null hypothesis (the hypothesis that there is no relationship between C and E). He allowed acceptance of a causal hypothesis by assuming that if the null hypothesis is rejected, there is only one causal alternative that can then be accepted. This may be justifiable in well-designed and well-controlled experimental systems but not in the uncontrolled real world. He was influenced by Hume and attempted to design an inferential method in which regularity of association is indicative of causation (Armitage 2003).

Fisher famously opposed the idea that smoking causes cancer. He argued that smoking does not cause lung cancer, because most smokers do not contract lung cancer. He stated that smoking could be the result of lung cancer (e.g., tobacco smoke might relieve symptoms of early lung cancer) or that smoking and lung cancer may have a common cause (e.g., a genetic factor) (Fisher 1959, Salsburg 2001).

Sewall Wright (1889-1988)

A founder, with R.A. Fisher and J.B.S. Haldane, of theoretical population genetics, Wright was the first to publish a causal network model and developed path analysis to quantify it (Wright 1921). He began by modeling the determination of a phenotypic trait (coat pattern in guinea pigs) by incorporating genetic and chance environmental factors.

Karl Popper (1902-1994)

In The Logic of Scientific Discovery (1934 & 1968), Popper famously argued that we cannot prove hypotheses, only disprove them. He denied induction and emphasized the deduction of testable consequences of hypotheses. No number of corroborating cases increases the confidence in a cause, because we may be making the same error repeatedly or observing the same special case. Consistent cases “corroborate” but do not “confirm.” Hence, he was a falsificationist but limited his arguments almost entirely to deterministic science and “universal theories.” He held that, if action is needed, we may tentatively accept those theories that have withstood the most severe tests (which is effectively induction).

He separated the psychological problem of induction from the logical problem. The psychological problem is, why do we make inductive generalizations when they are not justified? The answer is that learning from experience works for us as it does for other animals and even bacteria. Hence, it is a selectively advantageous strategy. That leaves the logical problem, which Popper said we can avoid by avoiding induction and using deduction. That is, (1) make a conjecture, c, (2) deduce a testable implication of c, (3) perform the test, (4) reject (eliminate) c if the test fails, else tentatively accept c.

Hans Reichenbach (1891-1953)

Causality was central to Reichenbach's attempt to create a philosophical explication of relativity theory and quantum mechanics. In The Direction of Time (1956), he espoused a causal theory of time and space in which the directionality of time was due to the directionality of causation. It is a more philosophical version of the causal structure of the Lorentzian manifold, presented as a network of potentially causal interactions of events in space-time.

Reichenbach also developed the common cause principle. A correlation between events E1 and E2 indicates that E1 is a cause of E2, or that E2 is a cause of E1, or that E1 and E2 have a common cause. It has been abbreviated as “no correlation without causation” (Pearl 2000). Reichenbach explained that a common cause screens off the effects. That is, when we account for the common cause, the effects are no longer correlated. If the effects are not fully screened off, the identified common cause is incomplete. The causal Markov condition is a generalization of this principle for analysis of larger numbers of probabilistically associated variables. Reichenbach's networks and common cause principle have been important to the development of models of causal networks (Pearl 2000, Spirtes et al. 2000).
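
Screening off has a standard compact statement: conditional on the common cause C, the correlated effects E1 and E2 become probabilistically independent:

```latex
P(E_1 \wedge E_2 \mid C) = P(E_1 \mid C)\,P(E_2 \mid C)
```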

Carl Hempel (1905-1997)

Like Reichenbach and Schlick, Hempel was a member of the Vienna Circle. With Oppenheim in 1948, he formalized the covering law concept of causal explanation, calling it the deductive-nomological model. That is, a phenomenon requiring an explanation (the explanandum) is explained by premises (the explanans) consisting of at least one scientific law and suitable facts concerning initial conditions. Later, in Aspects of Scientific Explanation, he extended the concept to inductions from probabilistic observations, the inductive-statistical model. This theory made explanations and predictions equivalent by making them implementations of the same causal laws.

Recent Philosophy of Causation

Although this section is dominated by professional philosophers, it also includes philosophical writings by statisticians such as I.J. Good and scientists such as Steven Weinberg. It is organized chronologically by the date of the author's publication that is most important to causal analysis.

Herbert Simon showed that the understanding of complex systems is typically insufficient to develop reliable causal models, but sufficient causal understanding for decision making is attainable. John Mackie clarified the concept of a set of causes which together are sufficient. Judea Pearl and other advocates of causal network modeling are currently highly influential and their methods are being used in ecology. 

Herbert Simon (1916-2001)

Simon was a Nobel Laureate and pioneer of econometrics, quantitative political science, computer science (for which he received the Turing Award), theory of science, operations research, and artificial intelligence. He described himself as a monomaniac about decision making. He also made important contributions to analysis of causation including pioneering the application of Wright's path analysis method to other types of causal networks (Simon 1952, Simon 1954, Simon and Rescher 1966). He argued that we can eliminate the philosophical objections to causality by treating it as a property of models. That transfers the problem to defending the legitimacy of causal models, but for this, he appealed to the reassuring respectability of models of electrical and mechanical systems.

Simon also recognized that many systems are too complex to model with sufficient reliability to guide decision making. For such cases, he developed the theory of bounded rationality, dealing with cases of incomplete knowledge (Simon 1983). The key original concept of the theory is satisficing, an alternative to optimizing, in which solutions are found that meet a defined set of criteria. He argued against Pareto that people ordinarily make good enough decisions, not optimal decisions. He also argued that satisficing is consistent with Darwinian evolution. That is, natural selection does not result in the fittest, but the fit enough to persist.

Mario Bunge

Beginning in 1959, the Argentine-born physicist and Canadian philosopher Mario Bunge tried to revive causality in the face of quantum mechanics and logical positivism in Causality and Modern Science (Bunge 1979). He provided the definition, “Causation is not a category of relation among ideas, but a category of connection and determination corresponding to an actual trait of the factual (external and internal) world, so that it has an ontological status...”

His theses are

    1. The causal relation is a relation among events.
    2. Every effect is somehow produced (generated) by its cause(s).
    3. The causal generation of events is lawful.
    4. Causes can modify propensities (in particular, probabilities), but they are not propensities.
    5. The world is not strictly causal.

Although he attempted to rigorously define causation in a generally applicable way, he recognized that “Almost every philosopher and scientist uses his own definition of cause...”

Clive W. J. Granger

Beginning in 1962, Granger (2007) developed an information-theoretic definition of cause, known as G-causality, to identify causes in time series data. He pointed out that the path models of Pearl, Glymour, and others have no time sequence; “Exactly the same graph with the arrows reversed will have the same likelihood” (Granger 2007).
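
A common textbook statement of G-causality (not Granger's exact formulation) is that X Granger-causes Y if the history of X carries predictive information about Y beyond that contained in Y's own history:

```latex
X \to Y \quad \text{iff} \quad
P(Y_{t+1} \mid Y_t, Y_{t-1}, \ldots, X_t, X_{t-1}, \ldots) \neq P(Y_{t+1} \mid Y_t, Y_{t-1}, \ldots)
```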

Granger disagreed with manipulationist theories of causation because they assume that the manipulation does not change the causal relationships. That is not true of systems that include humans, and possibly not of any living system.

He defined a good causal theory as one that makes good predictions and, in particular, leads to good decisions in realistic settings. “In an applied area, the effect of a causal definition on the decisions taken by a decision maker in a realistic setting is the only way that its usefulness can be discussed” (Granger 2007).

J. L. Mackie

Mackie (1965) developed a more realistic version of Hume's regularity account of causation by incorporating the “plurality of causes.” It may be summarized as C is a cause of E if and only if

  1. C and E are both actual
  2. C occurs before E
  3. C is an INUS condition.

INUS conditions are Insufficient but Necessary parts of Unnecessary but Sufficient set of conditions. For example, a fish kill may be caused by elevated copper concentrations (an insufficient condition), along with low pH, susceptible fish, and low levels of dissolved organic matter (the sufficient set). The set would not cause a kill without copper, so it is a necessary part. However, this set is unnecessary because other sets of conditions also cause fish kills. Mackie recognized that we will not specify all members of the sufficient set; some must be treated as background. He called these unspecified conditions the “causal field.” This approach was more fully developed in Mackie (1974).
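
The fish-kill example can be written as a Boolean sketch (illustrative shorthand, not Mackie's notation). Copper is an insufficient but necessary part of the first conjunction, which is itself an unnecessary but sufficient condition for the kill K; Y stands for the other sufficient sets, and the causal field is left implicit:

```latex
K \iff (\mathit{Cu} \wedge \text{low pH} \wedge \text{susceptible fish} \wedge \text{low DOM}) \vee Y
```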

Patrick Suppes

Suppes (1970) developed a probabilistic theory of causality in which C is identified as a prima facie cause if

  1. C precedes E
  2. C is real [i.e., P(C) > 0]
  3. C is correlated with E or raises the probability of E [i.e., P(E|C) > P(E)].

In addition, the relationship must be non-spurious. That is, the relationship must not be explained by any third variable. He defined C and E as events.

He argued that experiments can demonstrate all of these criteria. They demonstrate the reality of the cause and time sequence by imposing the cause on experimental subjects. The results demonstrate the difference in probability given the imposed cause. Good experimental design with replication and randomization eliminates spurious relationships.

Suppes also responded to Russell by pointing out that current physics often refers to causation. This is particularly true of complex physical systems, which are not readily characterized by a mathematical formula. He suggested that in such cases, causation is an abbreviation for the set of processes that determine the effect.

David Lewis

The modern counterfactual theory of causation dates to Lewis (1973). Counterfactual theory (had C not occurred, E would not have occurred) is an alternative to regularity analysis (Humean association or constant conjunction—E occurred because C occurred). Lewis's solution to the problem of defining the counterfactual situation was to define alternative worlds, which he considered to be actual and not just conceptual conveniences. He distinguished between counterfactual dependence between propositions and causal dependence, which applies the same logic to events (see also his chapters in Collins et al. (2004)).

Counterfactual theory suffers from cases of preemption (C2 would have been the cause but C1 acted first) and overdetermination (C1 and C2 both acted) in which neither is a counterfactual cause.

Donald Rubin

Donald Rubin developed a theory of causality for observational studies based on Neyman's concept of potential outcomes (Rubin 1974). In this approach, the effect is defined as the difference between results for two or more treatments of a unit, only one of which is observed. Various statistical techniques are used to estimate those differences, based on the observed outcomes and covariates. His potential outcomes concept, which is known as the Rubin Causal Model, is particularly popular in psychology and the social sciences.
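
In the standard potential-outcomes notation (a textbook summary, not Rubin's wording), each unit i has two potential outcomes, only one of which is observed; the unit-level effect and the average treatment effect estimated from observed outcomes and covariates are:

```latex
\tau_i = Y_i(1) - Y_i(0), \qquad \mathrm{ATE} = \mathbb{E}[Y(1) - Y(0)]
```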

Frederick Mosteller and John Tukey

In their classic text on regression analysis, Mosteller and Tukey (1977) recognized that regression models and other statistical associations do not demonstrate causation. They suggested that the following ideas are needed to support causation.

  1. Consistency—Other things being equal, the relationship between C and E is consistent across populations in direction and perhaps in amount.
  2. Responsiveness—If we intervene and change C for some individuals, a property E will change accordingly.
  3. Mechanism—C is related, often step by step, with E, in a way that it would be natural to say “this causes that.”

Wesley Salmon

Salmon (1984) proposed a theory of physical causation based on causal processes and updated it in Salmon (1994, 1998). It involves the transmission of an invariant or conserved quantity (e.g., charge, mass, energy, and momentum) in an exchange between two processes. For example, the breaking of a window by a baseball can be characterized as the exchange of linear momentum between the ball in flight and the window at rest. Salmon considered this theory to be superior to the counterfactual theory (i.e., the window would not have broken if not for being hit by the ball).

Salmon (1998) described two accounts of probabilistic causation. Aleatory causation involves physical processes that may be probabilistic. Statistical causality involves statistical derivation of probabilities of putatively causal associations. Statistical causality is applied to general causal relationships. Aleatory causality, however, can be applied to specific cases as well as general relationships.

Richard Miller

Miller considered causation to be a primitive concept like number or art. Each science has a core concept of acceptable causation, which is extended by subsequent work, just as Jackson Pollock extended the concept of art (Miller 1987). Over time, a science's core causes are extended to create new accepted causes. There are no a priori principles for justifying causal attribution. New causes come from associations and relevant principles, which are themselves sustained by empirical findings or core definitions. Hence, we should begin by asking, given that the possible list of causes is potentially infinite, what causes are adequate given the circumstances at hand. For example, what must be part of the cause, and what can be considered background? The type of answer that is acceptable depends on the field. Causes must conform to an appropriate “standard causal pattern.”

Marjorie Grene

Hierarchy theory shows that causation is a matter of trios of levels: focal, lower, and higher (Grene 1987). For reproduction, the focal level (organisms) performs reproduction, the lower level (genes) determines what is reproduced, and the higher level (demes) constrains the amount of reproduction. There are two types of evolutionary biological hierarchies: genealogical (taxonomic) and ecological (economic-type interactions).

Paul Humphreys

Humphreys (1989) presented a probability raising theory of causal explanation in science. C is a cause of E if the occurrence of C increases the probability of E when circumstances are compatible. He based his concept of probability on the lack of knowledge of all of the contributing factors, not on frequency. As science increases knowledge of the causal factors in a system, the causal model approaches determinism.

Judea Pearl

Pearl (2009) has been concerned with incorporating concepts of causation into artificial intelligence. His fundamental definitions are

    Causation = encoding of behavior under interventions
    Interventions = surgeries on mechanisms
    Mechanisms = stable functional relationships = physical laws, which are sufficient to determine all events of interest.

These definitions were summarized as “Y is a cause of Z if we can change Z by manipulating Y.” Pearl's scheme for causal computation consists of directed acyclic graphs (DAGs—box and arrow diagrams with no loops) and equations or probability values defining the relationships between nodes in the DAG. His examples of causal models included circuit diagrams and Sewall Wright's phenotype determination. Pearl presented his models as bases for counterfactual analyses, but he argued that other counterfactual analyses fail to properly account for confounding by covariates.
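
A minimal sketch of the “surgery” idea, with invented probabilities (this is not Pearl's code; the numbers and variable names are hypothetical). A hidden common cause U confounds C and E, so conditioning on C mixes causation with confounding, while the do-operator deletes the arrow from U to C before fixing C:

```python
# Sketch of Pearl-style intervention on a three-node DAG:
# U -> C, U -> E, C -> E, where U is a hidden common cause.
# All probabilities are invented for illustration.
from itertools import product

p_u = {1: 0.3, 0: 0.7}                     # P(U)
p_c_given_u = {1: 0.9, 0: 0.2}             # P(C=1 | U)
p_e_given_cu = {(1, 1): 0.9, (1, 0): 0.6,  # P(E=1 | C, U)
                (0, 1): 0.5, (0, 0): 0.1}

def joint(u, c, e):
    """Observational joint P(U=u, C=c, E=e) from the DAG factorization."""
    pc = p_c_given_u[u] if c == 1 else 1 - p_c_given_u[u]
    pe = p_e_given_cu[(c, u)] if e == 1 else 1 - p_e_given_cu[(c, u)]
    return p_u[u] * pc * pe

# Conditioning: P(E=1 | C=1) mixes C's effect with confounding by U.
num = sum(joint(u, 1, 1) for u in (0, 1))
den = sum(joint(u, 1, e) for u, e in product((0, 1), repeat=2))
p_obs = num / den

# Intervention ("surgery"): delete the U -> C arrow and set C = 1, so
# P(E=1 | do(C=1)) = sum over u of P(U=u) * P(E=1 | C=1, U=u).
p_do = sum(p_u[u] * p_e_given_cu[(1, u)] for u in (0, 1))

print(f"P(E=1 | C=1)     = {p_obs:.3f}")   # 0.798: inflated by the common cause
print(f"P(E=1 | do(C=1)) = {p_do:.3f}")    # 0.690: the causal effect alone
```

Running the sketch gives P(E=1 | C=1) ≈ 0.80 but P(E=1 | do(C=1)) = 0.69; the difference is the contribution of the common cause.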

Michael Ruse

Ruse (1998) argued that causation is an epigenetic rule; those who inferred causes left more offspring. This differs from Kant's a priori concepts in that it is contingent, not necessary. It is Humean. “Hume's propensities correspond exactly to Wilson's epigenetic rules” (Ruse 1998).

Colin Howson

Howson (2000) developed a response to Hume's argument that induction is circular. (We believe that what has happened in the past will happen in the future, because in the past, the past was like the future.) To avoid circularity, we insert some assumption, which he claims takes the form of a Bayesian prior. “Inductive reasoning is justified to the extent that it is sound, given appropriate premises. These consist of initial assignments of positive probability that cannot themselves be justified in any absolute sense.”

Steven Weinberg

Weinberg is a Nobel laureate in physics who indulges in philosophy of science. He argued that physicists explain regularities while biologists, meteorologists, historians, etc. must explain individual events or phenomena (Weinberg 2002). Hence, physicists can explain things by deducing them from fundamental laws, which are not causes in the usual sense. A fundamental problem for others is to decide which of the many things that affect the state of the environment or other complex system will be considered the cause.

Nancy Cartwright

Cartwright provided critical analyses of causation, often metaphysical and always logical, in a series of papers for the London School of Economics (Cartwright 2003a, 2003b, 2003c) and a collection of essays (Cartwright 2007). She reviewed the current dominant accounts of causal laws and found them all wanting. She explained that each is legitimate but highly limited in scope by its inherent assumptions. She concluded that there is more than one type of cause. She advocated thick causal terms and laws: rather than saying cause, use smother, compress, attract, etc.

Cartwright (2007) argued that those who make causal claims must answer three questions. (1) What do they mean? (usually answered by philosophers) (2) How do we confirm them? (usually answered by methodologists) (3) What use can be made of them? (usually answered by policy makers or their consultants). She further argued that these three activities should be integrated and subjected to consistent analyses. For example, inductive analyses of causation in real systems may justify causal hypotheses if assumptions are met.

James Woodward

Woodward (2003) provided a counterfactual theory of causation based on manipulation. That is, we know that E would not have occurred without C because we have withdrawn or added C, because we believe that we could have performed that manipulation or because we can imagine performing the manipulation. He refers to this extended manipulationist concept as intervention. In this theory, a manipulationist can provide a causal explanation of the extinction of dinosaurs by stating that, had someone diverted the asteroid, the extinction would not have occurred. This is equivalent to Pearl's interventions in network models, but Woodward is concerned with the meaning of causal claims rather than the use of models to identify causal relationships.

Robert Northcott

Northcott (2003) proposed two definitions of causal strength:

  • Difference magnitude—How much effect did C have? For example, if the elevation in temperature were eliminated, how much would that change the biotic community?
  • Potency magnitude—How much did C matter? For example, temperature may be capable of impairing a community, but, if the effect is defined as impairment status, and if low flow is impairing the community, then increased temperature does not matter.

He subsequently contrasted analysis of variance as a measure of causal strength with an explicitly causal model: E(C1 & W) - E(C0 & W) (Northcott 2008). C1 is the actual level of the agent, C0 is the baseline level of the agent, and W is background conditions. Note that C0 is not the counterfactual absence of C but rather a chosen baseline, because absence of the cause is nonsense for some causes such as temperature. He stated that this definition is more intuitive, that it is more flexible, and that, unlike ANOVA, it does not give nonsense results. He attributed ANOVA's emphasis on variance rather than differences to Fisher's reaction to logical positivism and its dismissal of causation.

Philip Dawid

Dawid is a statistician with an interest in the philosophy and logic of data analysis. He developed a Bayesian decision theoretic approach to the analysis of causal relationships, which he believed corrects some conceptual problems in the approaches of Rubin, Pearl, and others (Dawid 2000, 2004). In particular, he argued that counterfactual analyses are metaphysical rather than scientific. He also argued that causality is a theoretical concept, in Popper's sense, that only indirectly relates to the physical world. However, the link can be formalized as personal belief defined in terms of de Finetti's exchangeability theory. Probabilistic causal hypotheses can be formulated and tested by calibration of the probabilistic hypothesis against actual outcomes in the world using Borel criteria (i.e., only P = 0 or 1 have direct external referents, so we should create an event which our theory assigns P = 1 and attempt to falsify it by finding that the event does not occur).

Jim Bogen

Bogen (2004) argued that the regularists are wrong; neither factual nor counterfactual regularity is necessary. Causes may act sporadically. His example: “women bear children” is true but not invariable. Hence, “the goodness of a causal explanation of one or more cases does not depend in any interesting way upon how many other cases satisfy the relevant generalities.”

D.M. Armstrong

Armstrong (2004) espoused a singularist theory of causation. Rather than Humean regularity, we should determine the cause in each singular incident. He referred to fixing the reference, that is, C causes E in this particular circumstance. Causation is a “theoretical entity” (from Menzies), an undefined primitive defined by platitudes of folk psychology. “Causation is that intrinsic relationship between singular events for which the causal platitudes hold.” The platitudes include regularity, agency (manipulative theory), counterfactual dependence, and probability raising. That is, we know it when we see it.

Armstrong rejected probabilistic causation. In any instance, C causes E, or it does not. This eliminates probabilistic causation, but we still have the concept “probability of causing” for general causation.

Steven Sloman

Sloman (2005) advocated Bayesian causal models for cognition (i.e., Bayesian networks). Contrary to Gigerenzer, he argued that Bayesian causal models are models of the way we think about and identify causal relationships. His argument includes the following points.

  • Causal relations associate events with other events.
  • Mechanisms take causes and induce effects.
  • “No matter how many times A and B occur together, mere co-occurrence cannot reveal whether A causes B, or B causes A, or something else causes both.” Hence, correlation leads only to a class of Markov-equivalent models.
  • Every arrow in a causal graph is a cause.
  • We have an inherent concept of causality that is based on contingency rather than association (i.e., C predicts E).
  • The advantage of experiments is that experimenters manipulate one thing at a time. Their disadvantage is that they manipulate only one thing.
  • The disadvantages of Bayesian network models are that they have difficulty representing continuous processes and cannot deal with feedbacks.

Phillip Wolff

Wolff (2007) is an experimental psychologist concerned with how people actually represent causation when making judgments about the causal nature of events. He criticized “dependency models” of causation (i.e., counterfactual and probability models) and espoused his own version of physical process models (like those of Salmon and Dowe), which he terms “dynamics models” after Talmy. He argued that people routinely infer physical dynamics when they observe kinetic interactions and they view the dynamic processes as causation. People view nonphysical (i.e., psychological and social) interactions analogously, as resulting from inferred dynamics. He supported this argument with results from experiments in which subjects view animations and report why the observed events occurred. He stated that dependency models simply capture “the side effects of causation.”

Christopher Hitchcock

Hitchcock (2007) presented various versions of causal pluralism. The most relevant is Philosophical Methodological Pluralism. Just as science uses various methods to elucidate causal relationships for different problems, he advocated using various philosophical concepts of causation depending on the problem. All fail in some circumstances (he gives examples).

Steven Pinker

Pinker (2008) is a linguistic Kantian. That is, he adopted Kant's a priori categories (space, time, substance, and causation) and made them basic linguistic concepts. He rejected the idealistic aspects of Kant (we can engage the real world directly, and the categories are not more real than the world we perceive) but argued that we tend to filter experience through those categorical concepts. The evidence that the concepts are in our heads rather than in the world is that, when we apply them, they work most of the time, but we encounter paradoxes when we analyze atypical cases, advanced science, or philosophical thought experiments. He showed how paradoxes arise in both associationist and counterfactual accounts of causation.

Recent Epidemiology and Causation

The attempts of epidemiologists to determine the causes of observed patterns of disease and injury are closely analogous to the attempts of ecologists (ecoepidemiologists) to determine the causes of biological impairments in the nonhuman environment. However, epidemiologists are concerned more with generic causation (does trichloroethylene cause cancer?) than specific causation (did trichloroethylene cause the cancer cluster in Woburn, Massachusetts?). Ecologists are more likely to address individual cases. This brief review of the voluminous literature on causation in epidemiology is intended to represent the major themes and to emphasize the literature that addresses specific causation. It is organized chronologically by the date of the author's first publication that is important to causal analysis.

The causal criteria of Hill, Susser, and others are a major source of CADDIS’s inferential approach. However, most epidemiologists have focused on developing sophisticated estimates of the degree of association between effects and putative causes. Federica Russo and Jon Williamson pointed out that, in practice, causation is accepted if two criteria are met: clear correlation and a well-defined mechanism.

J. Yerushalmy and Carroll E. Palmer

Yerushalmy and Palmer (1959) generalized Koch's postulates to causes of disease other than pathogens.
  1. The cause is more common in people with the disease.
  2. The disease is more common in people with the cause.
  3. The association must be tested in other cases for validity (requires specificity).

This weak version for epidemiologists does not require experimental testing, only verification in another case.

U.S. Department of Health, Education, and Welfare

Five criteria were applied to tobacco smoke as a cause of cancer by the Surgeon General's Committee on Smoking and Health: Temporality, Strength, Specificity, Consistency, and Coherence (U.S. Department of Health, Education, and Welfare 1964). This list was expanded by A.B. Hill to form his famous “criteria” (Hill 1965).

Austin Bradford Hill (1897-1991)

A.B. Hill is credited with being a founder of medical statistics. He designed and conducted the first randomized clinical trial and, with Richard Doll, conducted the first epidemiological studies of smoking and lung cancer. After Fisher attacked his smoking studies, he came to realize that statistics alone could not determine causality in epidemiology. In response, he proposed nine criteria (which he called considerations, features, or viewpoints) for causation (Hill 1965).

    1. Strength
    2. Consistency
    3. Specificity
    4. Temporality
    5. Biological gradient
    6. Plausibility
    7. Coherence
    8. Experiment
    9. Analogy

He argued that these considerations were not actually criteria. Rather, they answered the question: “What aspects of this association should we especially consider before deciding that the most likely interpretation of it is causation?”

He took a pragmatic approach to identifying causal relationships, as shown in the following quotations from Chalmers (2003): “…one had to be content with far from perfect evidence and draw on the most likely explanation.” “In the strict sense of the word ‘proven’ you can prove nothing, but you can make one interpretation of the data more probable than any other (e.g., smoking and cancer of the lung).”

He also argued that the amount of evidence required should depend on the consequences of the interventions that would follow from the causal conclusion.

Richard Doll

Doll was the senior author of the first epidemiological study of smoking and lung cancer (Doll and Hill 1950). He summarized and illustrated his approach to causality in the 23rd Fisher Memorial Lecture (Doll 2002). He believed that causation could be proved beyond a reasonable doubt in epidemiological studies by

    1. Demonstrating that the association cannot reasonably be explained by chance, by methodological bias, or by confounding.
    2. Identifying positive support for causality by applying Hill's guidelines (criteria).

Jack D. Hackney and William S. Linn

Hackney and Linn (1979) adapted Koch's postulates to diseases caused by chemicals in the environment. They used four postulates, but Postulates 2 and 4 are quite different from those of the usual four-part version.

    1. A definable environmental chemical agent must be plausibly associated with a particular observable adverse health effect.
    2. The environmental agent must be available in the laboratory in a form that permits realistic and ethically acceptable exposure studies to be performed.
    3. Laboratory exposures to realistic concentrations of the agent must be associated with effects comparable to those observed in real-life exposures.
    4. The preceding findings must be confirmed in at least one investigation independent of the original.

Mervyn Susser

Susser (1973, 1986, 1988) clarified and revised Hill's criteria for inferring causation to

    Strength of association
    Consistency
    Specificity in effects of a cause
    Specificity in causes of an effect
    Time order
    Theoretical Coherence
    Biological Coherence
    Factual Coherence
    Statistical Coherence
    Predictive performance
    Probability

In addition, he used multiple + and - codes for scoring a cause with respect to each criterion, which increases transparency and encourages rigor.
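Susser's coding scheme lends itself to simple tabulation. The following minimal sketch is purely illustrative (the candidate causes, the subset of criteria, and the scores are all hypothetical); it shows how tallying + and - codes makes the basis for a judgment transparent.

```python
# A minimal sketch of Susser-style scoring; the candidate causes, the subset
# of criteria, and the +/- codes below are hypothetical, chosen only to show
# how tabulation makes the basis for a causal judgment transparent.
scores = {
    "strength of association": {"cause A": "++", "cause B": "+"},
    "consistency":             {"cause A": "+",  "cause B": "-"},
    "time order":              {"cause A": "+",  "cause B": "+"},
    "biological coherence":    {"cause A": "++", "cause B": "--"},
    "predictive performance":  {"cause A": "+",  "cause B": "-"},
}

for cause in ("cause A", "cause B"):
    # Net score: number of '+' codes minus number of '-' codes.
    net = sum(row[cause].count("+") - row[cause].count("-")
              for row in scores.values())
    print(f"{cause}: net score {net:+d}")
```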

Susser (1988) responded to Popperians by providing a philosophical justification for Hill's approach. Popper's best hypothesis is one that is refutable, but readily refuted hypotheses are the least useful. However, we may follow Popper to the extent of claiming somewhat greater decisiveness for falsification than affirmation, but we cannot do without induction and affirmative tests. He advocated abductive inference without using the term: “The central concern of causal inference is to establish preferences among theories at a given point in time, Pt.”

He also rejected statistical hypothesis testing: “The limits are arbitrary and do not override logical inference based on other criteria” (Susser 1988).

Kenneth Rothman

Rothman is a prominent epidemiologist, the author of numerous texts in the field, and a skeptic of causal analyses. He argued that in biological systems we often know a cause (a contributing event, condition, or characteristic) that contributes to the occurrence of an effect, but not the sufficient cause (e.g., smoking causes cancer but is not sufficient) (Rothman 1986, Rothman 1988, Rothman and Greenland 2005).

Rothman rejected probabilistic causation. He argued that probabilistic causation is due to unknown causes. Hence, his underlying model is deterministic, and probabilistic models reflect ignorance.

As a Popperian, he was skeptical of anything but disproof. He rejected all of Hill's criteria except temporality. For example, the strength of a cause is a function of its rarity relative to other component causes, so strength is not a good criterion.

Rothman's (1988) possible solutions to the problem of induction are

  • Consensus of experts (e.g., NIH consensus development conferences),
  • Ad hominem appeals to authority or guilt,
  • Post hoc ergo propter hoc,
  • Disproof (Popperian), or
  • Consensus of the community.

He approved of Lanes's (1988) proposal to present the data and leave identification of causes to policy makers. He suggested that, given the conceptual problems with determining causality, epidemiologists should focus on estimating the magnitude of effects in a presumed causal relationship (Rothman and Greenland 2005). “Causal inference in epidemiology is better viewed as an exercise in measurement of an effect rather than as a criterion-guided process for determining whether an effect is present or not.”

Ronald E. Gots

Gots (1986) proposed causal criteria for individual human cases, focusing on specific clinical and morphological symptoms:

  • Can the agent cause the disease?
    • Animal data
    • Human data
  • Did it cause the disease in this case?
    • Alternative causes ruled out?
    • Confirmed exposure?
    • Sufficient exposure?
    • Appropriate clinical pattern?
    • Appropriate morphological pattern?
    • Temporality?
    • Appropriate latency?

Douglas L. Weed

Weed (1988) initially argued for Popper and against Hill. Hypotheses must be predictive and testable. Since refutation never gives a cause, we must reach a decision by less reliable means. Weed proposed the two Popperian criteria: predictability (can we deduce consequences from the causal hypothesis?) and testability (can we devise a test that could refute the prediction?) but couched them in terms of critical debates rather than critical experiments.

Weed later accepted and even recommended criteria-based inference, pointing out that epidemiologists have a moral obligation to inform public health decisions (Weed 1997, Weed 2002). That includes nutrients as well as toxicants (Potischman and Weed 1999). He indicated that meeting all of Hill's criteria is too demanding and is antithetical to public health goals and the precautionary principle (Weed 2005). Although this argument would seem to call for reducing the number of criteria or weakening rules of inference, he indicated that we do not know enough about how well the criteria work to modify them. Rather, he argued for greater clarity concerning what criteria are used, why they are used, and how they are used (Parascandola and Weed 2001, Weed 2001). In particular, he argued that epidemiologists should address plausibility more explicitly, defining the possible mechanisms and laying out the evidence for each (Weed and Hursting 1998). He concluded that judgment-free causal analysis is impossible and undesirable (Weed 2007).

Stephan Lanes

Lanes is another Popperian: “We cannot calculate the probability that a theory is true” (Lanes 1988). We cannot even reject a hypothesis probabilistically, despite Fisher, because a statement of probability cannot be refuted.

Lanes attacked subjectivism, weight-of-evidence, degree of belief, Bayes, Hill, etc. He argued that we believe that cigarettes cause lung cancer because alternatives have been effectively refuted, not because of the Surgeon General's and Hill's beliefs.

He argued that Popperianism is about the acceptability of scientific theories. In contrast, the acceptability of actions (i.e., to eliminate a cause) depends on the consequences of an action. Scientists should determine the best explanation or lay out unrefuted alternatives and allow policy makers or the public to weigh the consequences.

Sander Greenland

Greenland is a conceptually oriented statistical epidemiologist. He argued that Popperianism strictly does not allow statistics; if you allow chance, you may as well allow induction (Greenland 1988). Greenland advocated Bayesian analysis. “To a subjective Bayesian, labeling a result as due to chance or random variation is analogous to diagnosing an illness as idiopathic in that it is just a way of making ignorance sound like technical explanation.” In other words, observations that are in conflict with a true hypothesis are due to our inability to take into consideration all conditions. He clarified his disagreement with Popper and the role of induction and deduction in Greenland (1998). Recently, he has propounded a counterfactual approach based on potential outcome models, structural equation models, or analysis of causal diagrams (Greenland et al. 1999, Greenland 2005).

U.S. Environmental Protection Agency

The U.S. EPA (2005) published guidelines for assessing risks from carcinogens that include guidance on how to evaluate causality in epidemiological studies. They recommend evaluating study design, exposure issues, biological markers, confounding factors, likelihood of the observed effects, and biases, and then evaluating the evidence for causality using Hill's “criteria.” These are not to be treated as mandatory criteria; instead, they “should be used to determine the strength of the evidence for concluding causality.”

P.S. Guzelian

Guzelian et al. (2005) addressed causality in toxicology. They argued that there is no probabilistic causation—either a thing is a cause, or it is not.

They distinguished general causality, addressed by Hill and Susser, from specific causality. Their minimum criteria for specific causation are

    1. General causation
    2. Dose-response
    3. Temporality
    4. Alternative cause (eliminate confounders)
    5. Coherence

They demanded that the criteria be authoritative rather than comparative: if they are truly criteria, then they must be satisfied. They condemned consensus and subjective judgments.

Paolo Vineis and David Kriebel

In their review of causation in epidemiology, Vineis and Kriebel (2006) identified two eras of medical causality:

    1. Agent of disease is a single necessary cause, as described by Koch
    2. Chronic diseases such as cancer and heart disease have causal webs

Causal webs are required for individual cases (smoking plus susceptibility or some other factor causes a given case of lung cancer, so smoking is neither necessary nor sufficient), but we can speak of a single necessary cause at the population level (smoking is the cause of the increase in the frequency of lung cancer).

They argued that we must consider interactions, which the standard multiple regression approach does not do, and they recommended Pearl's network models as heuristic tools because they make causal assumptions explicit.

Federica Russo and Jon Williamson

Russo and Williamson (2007) argued that there is only one type of cause in medical inference and practice, but two criteria.

    (1) Probabilistic evidence is consistency of association of cause and effect.
    (2) Mechanism is an explanation of how the causal relationship occurs.

They divided Hill's criteria between these two types of evidence: Criteria 2, 4, 5, 8, and 9 involve mechanistic considerations; Criteria 1, 3, 7, and 8 involve probabilistic considerations (Criterion 8, experiment, bears on both); and Criterion 6 is ambiguous. They argued that medical science seldom adopts a causal claim until both types of evidence are provided.

Top of page


Recent Applied Ecology and Causation

The issue of causation in applied ecology received increasing attention in the 1990s because environmental monitoring programs were revealing biological impairments, but the causes of those impairments were unknown or controversial. Most of the proposed methods for ecological causal analysis have been adapted from epidemiology. In particular, Woodman and Cowling, Fox, Suter, Chapman, Kapustka, Beyers, Forbes and Calow, the U.S. EPA, and others have developed variants of Koch’s postulates or Hill’s criteria as evidence of causation. However, others have advocated quantitative or semi-quantitative modeling methods to determine causation, including Westman, Landis, and Newman.

This review does not include publications that attempted to determine the cause of an ecological impairment but did not advocate a method. It also does not include methods for determining the chemical or chemical classes that are the causes of toxicity such as toxicity identification evaluation (TIE) (Norberg-King et al. 2005). The section is organized chronologically by the date of the author's first publication that is important to causal analysis.

Walter Westman

Westman (1979) used “path analysis to determine the most likely route of causation of the decline in native cover” of coastal sage scrub in Southern California. He concluded that mean annual oxidant concentration was the primary proximate cause and elevation was a secondary cause. This appears to be the first application of path analysis to an environmental impairment.

In a subsequent study of the causes of species distributions in Southern California plant communities, he reverted to the more conventional technique, multiple linear regression (Westman 1981). He did not explain the change in approach, but it seems likely that he could not define a causal path model for the 21 species and 43 potential predictor variables.

In both publications, he acknowledged that the results were only suggestive of likely causes and should be followed by experimental studies. Many ecologists since have used multiple regression and various forms of network analysis to model causal relationships, but usually for species management or academic problems.

James Woodman and Ellis Cowling

Woodman and Cowling (1987) adapted a three-part version of Koch's postulates to determine whether air pollution caused observed effects on forest trees.

  1. The injury or dysfunction symptoms observed in the case of individual trees in the forest must be associated consistently with the presence of the suspected causal factors.
  2. The same injury or dysfunction symptoms must be seen when healthy trees are exposed to the suspected causal factors under controlled conditions.
  3. Natural variance in susceptibility observed in forest trees also must be seen when clones of these same trees are exposed to the suspected causal factors under controlled conditions.

They indicated that these criteria can be achieved for relatively simple systems “that involve one, two, or possibly three interacting causal factors.” For more complex systems, they suggested a synoptic approach that involves “initial surveys to identify important variables, multiple regression analysis to generate hypotheses, and field tests to verify diagnoses.”

Cowling (1992) subsequently proposed another set of causal criteria for air pollution effects based on Mosteller and Tukey (1977):

  1. A clear pattern of spatial and/or temporal consistency must be found between dysfunction in the system and one or more specific airborne chemicals;
  2. a clear relationship must be found between dose of airborne chemical(s) and response of the system; and
  3. a proven biological mechanism or a series of stepwise biological processes must be found by which dysfunction in the ecosystem can be linked to one or more specific airborne chemicals.

Kelly Munkittrick

Munkittrick and Dixon (1989a, 1989b) proposed that declines in fish populations could be diagnosed as caused by one of a set of standard causal mechanisms, based on metrics commonly obtained in fishery surveys. This method was subsequently refined and expanded (Gibbons and Munkittrick 1994) and applied to assessments of Canadian rivers (Munkittrick et al. 2000). Numerous metrics contribute to the symptomology, but they are condensed to three responses: age distribution, energy expenditure, and energy storage. The most recent list of types of causes is exploitation, recruitment failure, multiple stressors, food limitation, niche shift, metabolic redistribution, chronic recruitment failure, and null response. These are mechanistically based categories of causes, not causes per se. As the authors acknowledged, once this method has identified the category of cause, follow-up studies are needed to determine the actual cause. This method has been incorporated into the causal analysis component of the Canadian Environmental Effects Monitoring Program (see Hewitt, below).

Glenn Suter

Suter (1990) proposed a four-part version of Koch's postulates as a general standard of proof of causation in ecological epidemiology. He showed that those requirements were met by many ecoepidemiological studies although the postulates were not employed by those authors (Suter 1993, 1998). A version specifically for toxicants from Suter (1993) is:

  1. The injury, dysfunction, or other putative effects of the toxicant must be regularly associated with exposure to the toxicant and any contributory causal factors.
  2. Indicators of exposure to the toxicant must be found in the affected organisms.
  3. The toxic effects must be seen when healthy organisms are exposed to the toxicant under controlled conditions, and any contributory factors should contribute in the same way during the controlled exposures.
  4. The same indicators of exposure and effects must be identified in the controlled exposures as in the field.

He argued that, when Koch's postulates could not be met, causation should be inferred by eliminating refuted causes and then applying Hill's or Susser's criteria or his own combination of those criteria (Suter 1993, 1998). Subsequently, he joined Susan Cormier and Susan Norton in developing the U.S. EPA's stressor identification guidance and CADDIS support system (see U.S. EPA, below).

Robert Elner and Robert Vadas

Elner and Vadas (1990) argued for hypothesis rejection and particularly strong inference (Platt 1964) as the only way to determine causes of ecological phenomena. They used population explosions of sea urchins and resulting replacement of macroalgae beds with “coralline barrens” as a case study. They showed that prior studies that relied on “weak” inference (i.e., inferences that include positive evidence), particularly those arguing that the cause was loss of lobster predation, had been inconclusive. However, their review of the case also demonstrates that there is no clear limit to the number of hypotheses to be rejected and that the outcomes of field experiments may have multiple interpretations.

Robert Peters

Peters (1991) argued that causality is an unnecessary and potentially misleading concept. The test of good science is prediction, not explanation. He preferred “instrumentalist science,” particularly empirical models such as allometric equations and QSARs, which go directly for prediction. He also argued that causally driven research never ends because one can extrapolate causes back up the chain, identify more steps between cause and effect, or laterally extend to more contributing causes (Peters 1991, Fig. 5.8). Hence, one can never identify “the cause.”

Glen Fox

Fox (1991) proposed that ecologists should use Susser's (1986) 11 criteria for causal analysis. This has been the most influential paper on the topic of causal analysis for applied ecology. That is, in part, because Susser's criteria and scoring system are more appealing to many ecologists than Hill's completely informal criteria, in part, because Fox presented Susser's system in a clear and compelling manner, and in part, because the system was immediately applied to a set of high profile problems: the declines of fishes, birds, and reptiles in the Laurentian Great Lakes.

Kristin Shrader-Frechette

Shrader-Frechette is a philosopher of environmental science who has argued that conventional scientific methods are not useful for environmental problem solving because of the complexity of the systems and the problems (Shrader-Frechette 1985, Shrader-Frechette and McCoy 1993). Her solution is the “method of case studies.” That is, rather than seeking generalities, environmental scientists should focus on solving individual real problems and thereby learn to be good practitioners. In this approach, the solution to the problem of causation is “to subject our inductive and causal judgments to repeated criticism, reevaluation and discussion and to seek independent evidence for them” (Shrader-Frechette and McCoy 1993). In part, this can be accomplished by making it a point to seek alternative analyses of the same case, or to seek evaluations of a particular case study by persons with divergent scientific, epistemological, and personal presuppositions.

Peter Chapman

Chapman (1995) proposed a three-part adaptation of Koch's postulates for determining whether contaminated sediments are causing an effect. He called them exploratory postulates:

  1. Contaminant(s) must be found in all cases of the effect(s) in question.
  2. Similar effect(s) must be demonstrable in the laboratory with field-collected sediments and/or in the field with in situ experimentation.
  3. Similar effect(s) must be demonstrable in the laboratory with direct exposure to contaminant(s).

This is different from his better known Sediment Quality Triad in that, as in CADDIS assessments, the observation of an effect is assumed, and the purpose is to identify a cause. However, it is limited to contaminants as potential causes.

Allan Stewart-Oaten

The environmental statistician Stewart-Oaten (1996) reviewed designs for determining whether human activities are causing environmental effects. He concluded that none of the pseudo-experimental designs (Before-After Control-Impact and its variants) is reliable. Hence, he advocated using Hill's criteria to avoid “causal uncertainty.” For the statistical analysis, he recommended estimating differences between exposed and unexposed sites, with associated confidence intervals, rather than testing causal hypotheses.

Wayne Landis

Landis (1996) briefly presented a concept of causal analysis for toxic effects on ecosystems. First, laboratory studies, including toxicity tests, analysis of contaminant concentrations, and comparative toxicology provide a mechanistic basis. Second, multispecies studies reveal broad patterns of response to particular stressor types. Third, biomarkers and measurements of structure and function are used to establish causes.

More recently, Landis and colleagues have incorporated causal analysis into his relative risk model (Landis et al. 2004, Landis 2005). Alternate sources and stressors are identified for the effects of concern and are assigned ranks using a variety of criteria. Then the ranks for sources are multiplied by the ranks for connected stressors and then by 0 or 1 depending on whether that pathway is connected to the effect. These arithmetically combined ranks for a hypothetical pathway are called stressor rank scores and are taken to represent the relative risk that the stressors posed as causes of the effects. The authors claimed that their relative risk model subsumes both Chapman's triad approach and the use of causal criteria derived from Hill's considerations or Koch's postulates (Landis et al. 2004).
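The rank arithmetic described above can be sketched as follows; the sources, stressors, ranks, and connectivity flags are hypothetical, and the published relative risk model (Landis et al. 2004) defines its own ranking rules.

```python
# A minimal sketch of the rank arithmetic described above, with hypothetical
# sources, stressors, and ranks. Each pathway's stressor rank score is
# (source rank) x (stressor rank) x (1 if the pathway reaches the effect,
# else 0); higher totals suggest a higher relative risk as a cause.
source_ranks = {"mine drainage": 6, "urban runoff": 4}
stressor_ranks = {"dissolved metals": 6, "sedimentation": 2}

# Pathways: (source, stressor, connected-to-effect flag)
pathways = [
    ("mine drainage", "dissolved metals", 1),
    ("mine drainage", "sedimentation",    0),
    ("urban runoff",  "dissolved metals", 1),
    ("urban runoff",  "sedimentation",    1),
]

totals = {}
for source, stressor, connected in pathways:
    score = source_ranks[source] * stressor_ranks[stressor] * connected
    totals[stressor] = totals.get(stressor, 0) + score
    print(f"{source} -> {stressor}: score {score}")

# Rank stressors by their summed pathway scores.
for stressor, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{stressor}: total stressor rank score {total}")
```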

For a study of the cause of decline of a Pacific herring population, Landis and Bryant (2010) used a comparative weight of evidence analysis.

Lawrence Kapustka

Kapustka (1996) presented an adaptation of Koch's postulates for “forensic ecology” of toxic chemicals.

    1. Observe field conditions (injury); characterize the symptoms, magnitude, and extent of the problem in the field.
    2. Identify putative contaminants.
    3. Characterize mode of action and symptomology.
    4. Establish dose-response relationships.
    5. Demonstrate the presence of putative toxic substances in the field within the “effects range” concentrations.
    6. Demonstrate the opportunity for exposure.

He then added two confirmation requirements:

    7. Apply weight-of-evidence criteria to establish the relationship between contaminant and observed effect.
    8. Document the uncertainty of the conclusions.

Robert Bode

Some groups and organizations have attempted to use biotic community characteristics as symptoms to diagnose the causes of community impairments but with limited success (Norton et al. 2000, Yoder and Rankin 1995). Currently, the best developed and most successful of these is the Impact Source Determination (ISD) system developed for the state of New York by Bode et al. (1996, 2002). ISD attempts to relate the macroinvertebrate community of a stream to one of the following classes:

  • Natural
  • Nonpoint nutrients, pesticides
  • Toxic
  • Organic (sewage effluent or animal waste)
  • Complex (municipal/industrial)
  • Siltation
  • Impoundment

For each of these classes, 5-13 model communities have been created by cluster analysis of relative abundance data from sampled communities that have been judged to belong to the class. The multiple model communities within a class are judged to represent differences in response due to natural factors. An attempt to verify the classifications of sites by the ISD found that the system discriminated fairly well among nonpoint nutrient, siltation, complex and natural categories, but not others (Riva-Murray et al. 2002). Difficulties were attributed to the lack of species level taxonomic data and the lack of data concerning hydrology, habitat, and many chemicals. Because most of these classes are categories of sources rather than causes and because of the potential for misclassification, results of ISD would serve primarily to help define candidate causes.
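The matching step can be sketched as follows, using hypothetical relative-abundance data; Bray-Curtis similarity is used here purely for illustration, and ISD's actual similarity measure and model communities differ.

```python
# A minimal sketch of community matching in the spirit of ISD, using
# hypothetical relative-abundance data and Bray-Curtis similarity (the
# actual ISD similarity measure and model communities differ).
def bray_curtis_similarity(x, y):
    """Similarity of two relative-abundance vectors (1 = identical)."""
    num = sum(min(a, b) for a, b in zip(x, y))
    return 2 * num / (sum(x) + sum(y))

# Hypothetical model communities (relative abundances of four taxa).
models = {
    "natural": [0.40, 0.30, 0.20, 0.10],
    "organic": [0.05, 0.10, 0.25, 0.60],
    "toxic":   [0.10, 0.70, 0.15, 0.05],
}

sample = [0.08, 0.12, 0.22, 0.58]

# Assign the sample to the class of its most similar model community.
best = max(models, key=lambda k: bray_curtis_similarity(sample, models[k]))
for k, m in models.items():
    print(f"{k}: similarity {bray_curtis_similarity(sample, m):.2f}")
print(f"best-matching class: {best}")  # 'organic' for this sample
```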

C. J. Sindermann

Sindermann (1997) advocated the Hill/Susser/Fox approach to causal analysis in marine pollution studies, combined with the precautionary principle to justify acting without proof of causation when information is incomplete or uncertain.

Daniel Beyers

Beyers (1998) combined Hill's 9 criteria with “Suter's second rule” (the second of his adaptations of Koch's postulates) to obtain 10 criteria for causation. He applied them to determining whether an insecticide caused impairment of the benthic invertebrate community in the Little Missouri River, North Dakota.

Bruce Chessman and Paul McEvoy

Community diagnostics may be possible for some classes of agents but not others; however, they will require genus- or species-level identification (Chessman and McEvoy 1998).

U.S. EPA (Risk Assessment Forum)

The Agency's Guidelines for Ecological Risk Assessment endorsed Fox's criteria, both Woodman and Cowling's and Suter's versions of Koch's postulates, and toxicity identification evaluation (TIE) for identifying causes of ecological impairments (U.S. EPA 1998).

Joseph Germano

Germano (1999) was strongly critical of the use of hypothesis testing statistics for causal analysis. He pointed out that hypothesis testing answers a question in which we are seldom interested: “Given that the null hypothesis (H0) is true, what is the probability of these (or more extreme) data?” On the other hand, he believed we should not resort to expert judgment because it is even less reliable than correlations. Rather, we need a better understanding of associations, which comes from fully specifying contingency tables and using Bayesian analyses.
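The contrast Germano drew can be sketched as follows: rather than computing P(data | H0), a fully specified 2×2 contingency table supports a direct Bayesian estimate of how probable it is that impairment is more frequent where the candidate cause is present. The counts, Beta(1, 1) priors, and Monte Carlo comparison below are illustrative assumptions, not Germano's prescription.

```python
# A minimal sketch (hypothetical counts): instead of asking P(data | H0),
# estimate P(impairment rate is higher where the candidate cause is present
# | data) using a Beta posterior for each row of a 2x2 contingency table.
import random

random.seed(1)

# Hypothetical 2x2 table: sites cross-classified by candidate cause and status.
#                 impaired  unimpaired
# cause present      18          7
# cause absent        6         19
impaired_present, total_present = 18, 25
impaired_absent, total_absent = 6, 25

# Beta(1, 1) priors; the posterior for each impairment rate is Beta(s+1, f+1).
draws = 100_000
wins = sum(
    random.betavariate(impaired_present + 1, total_present - impaired_present + 1)
    > random.betavariate(impaired_absent + 1, total_absent - impaired_absent + 1)
    for _ in range(draws)
)
print(f"P(rate higher where cause present | data) ~ {wins / draws:.3f}")
```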

U.S. EPA (S.M. Cormier, S.B. Norton, and G.W. Suter II)

The U.S. EPA (2000) developed the Stressor Identification Guidance to determine the causes of biological impairments in individual aquatic ecosystems. The methodology includes three inferential methods: elimination, diagnosis, and strength of evidence. The strength of evidence method was inspired by Fox (1991) and Susser (1986) but highly modified. In particular, it distinguishes evidence from the case from evidence from elsewhere and evidence based on integrating multiple types of evidence. The methodology and a case study are available in the open literature (Cormier et al. 2002, Norton et al. 2002, Suter et al. 2002).

In response to user feedback, the method was refined and expanded in a decision support system for causal analysis called CADDIS (www.epa.gov/caddis). The methodology changed in three ways from the Stressor Identification Guidance. First, elimination and diagnosis were integrated into the strength-of-evidence analysis to simplify the process. Second, the types of evidence were rewritten to make them clearer to users. Third, to help with the comparison of candidate causes, a set of basic causal characteristics was identified that summarized the 17 types of evidence.

Jerome Diamond

Studies of the causes of impairment of fish, mussel, and benthic macroinvertebrate communities used stepwise regression to relate land uses in the Clinch and Powell River watershed to biological indexes and metrics (Diamond et al. 2002, Diamond and Serveiss 2001). Although statistically significant relationships to land use were found, the model explained little of the biological variance.

Michael Newman

Newman argued that Bayesian conditional probabilities are the appropriate expression of causality. He wrote that “Belief in a causal hypothesis can be determined by simple or iterative application of Bayes's theorem” (Newman and Evans 2002). He repeated this position in Newman et al. (2007).
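A minimal sketch of what iterative application of Bayes's theorem can look like is given below; the pieces of evidence and their likelihoods are hypothetical, not Newman's.

```python
# A minimal sketch of iterative Bayesian updating: each piece of evidence
# multiplies the prior odds of the causal hypothesis C by a likelihood ratio
# P(evidence | C) / P(evidence | not C). All values below are hypothetical.
prior = 0.50  # initial belief that C causes E

# (description, P(evidence | C), P(evidence | not C)) -- illustrative values
evidence = [
    ("effect co-occurs with exposure", 0.90, 0.30),
    ("laboratory test reproduces effect", 0.80, 0.20),
    ("effect declines after exposure is removed", 0.70, 0.25),
]

belief = prior
for description, p_given_c, p_given_not_c in evidence:
    odds = belief / (1 - belief) * (p_given_c / p_given_not_c)
    belief = odds / (1 + odds)
    print(f"after '{description}': P(C | evidence so far) = {belief:.3f}")
```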

Newman was critical of Hill's and Fox's weight-of-evidence approaches because they are qualitative and subject to cognitive errors (Newman et al. 2007). However, he has acknowledged that they can be useful and presented a good case study of the application of Hill's criteria to determining the cause of hepatic lesions in English sole from Puget Sound, Washington (Newman 2001).

William Clements

Clements et al. (2002) applied the Stressor Identification methodology (U.S. EPA 2000) to observational and experimental data to demonstrate that metals were responsible for effects on benthic invertebrates in Colorado streams.

Valery Forbes and Peter Calow

Forbes and Calow (2002) modified prior causal criteria, reducing the number to seven and framing them as yes/no questions.

  1. Is there evidence that the target is or has been exposed to the agent?
  2. Is there evidence for correlation between adverse effects on the target and exposure to the agent either in time or space?
  3. Do the measured or predicted environmental concentrations exceed quality criteria for water, sediment, or body burden?
  4. Have the results of controlled experiments in the field or laboratory led to the same effect?
  5. Has removal of the agent led to the amelioration of effects in the target?
  6. Is there an effect in the target known to be specifically caused by exposure to the agent?
  7. Does the proposed causal relationship make sense logically and scientifically?

They also provided a logic diagram based on answering the questions in sequence, which generates a verdict of unlikely, possibly, likely, very likely, or don't know.
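Forbes and Calow's actual logic diagram is given in their paper; the sketch below is illustrative only, with invented branching rules, and shows just the general form of a sequential yes/no scheme that ends in one of their verdict categories.

```python
# An illustrative sketch only: the branching rules below are invented to show
# the general form of answering yes/no questions in sequence to reach one of
# the verdict categories; the published logic diagram differs.
def verdict(answers):
    """answers: dict mapping question number (1-7) to True/False/None."""
    if answers.get(1) is False:
        return "unlikely"  # no evidence of exposure
    if answers.get(2) is False and answers.get(4) is False:
        return "unlikely"  # neither correlation nor experimental support
    yeses = sum(1 for q in range(1, 8) if answers.get(q) is True)
    unknowns = sum(1 for q in range(1, 8) if answers.get(q) is None)
    if unknowns > 3:
        return "don't know"
    if yeses >= 6:
        return "very likely"
    if yeses >= 4:
        return "likely"
    return "possibly"

# Example: exposure confirmed, correlation and experiment positive,
# remediation untested, no agent-specific effect, logically coherent.
print(verdict({1: True, 2: True, 3: True, 4: True, 5: None, 6: False, 7: True}))
# -> 'likely' (5 yes answers, 1 unknown)
```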

Forbes and Calow's method was applied to determining the cause of brown trout declines in Swiss rivers, but with modifications of both the questions and the logic (Burkhardt-Holm and Scheurer 2007).

Mark Hewitt

Hewitt et al. (2003, 2005) developed a system for determining if a pulp mill is the cause of apparent effects on fish and invertebrates in Canadian monitoring programs. It has seven increasingly detailed tiers beginning with (1) confirming the effect and (2) relating it to the mill and ending with (7) identifying the specific causative chemicals. In the 2003 version, the second tier is based on Fox's explication of Susser's criteria, and the third tier, which looks for characteristic biological response patterns, is based on community or population diagnostics. In the 2005 version, only the response patterns are used. The higher tiers are based on testing of waste streams, chemical fractions, and individual chemicals in a manner similar to the U.S. EPA's toxicity identification evaluation (TIE) procedures. An earlier version of this system is presented in Ch. 9 of Munkittrick et al. (2000). The system has been applied to Canadian metal mines as well (Munkittrick, personal communication).

Tracy Collier and Marshall Adams

Collier and Adams compiled 14 papers on causal analysis for ecological field studies in a special 2003 issue of Human and Ecological Risk Assessment (Volume 9, Issue 1). They requested that the authors use a set of seven causal criteria:

    (1) strength of association,
    (2) consistency of association,
    (3) specificity of association,
    (4) time order,
    (5) biological gradient,
    (6) experimental evidence, and
    (7) biological plausibility.

The papers were inconsistent in their interpretation and use of the criteria, but several of them constitute useful case studies.

IPCC

The Intergovernmental Panel on Climate Change (IPCC) developed a method for “attributing physical and biological impacts to anthropogenic climate change” (Rosenzweig et al. 2008). It used a list of environmental changes that are statistically significantly related to temperature as the effects to be analyzed. The causal analysis consisted of

    (1) determining whether the trend is consistent with temperature as the cause,
    (2) determining whether the change spatially co-occurred with increases in temperature, and
    (3) elimination of alternative causes.

Hence, their two criteria for causation were mechanistic plausibility and co-occurrence. They applied the same criteria to climate and the alternative causes. They performed source identification by appealing to other IPCC analyses to state that the temperature changes were due to anthropogenic greenhouse gases.

Dick de Zwart and Leo Posthuma

De Zwart and Posthuma's method uses multivariate linear statistical models for a river basin to diagnose the causes of individual taxon abundances at specific sites, with habitat variables and toxicity as the possible causes (de Zwart et al. 2009). Toxicity is expressed as the acute multistressor Potentially Affected Fraction (msPAF), which is derived from Species Sensitivity Distributions and assumptions about the additivity of the chemicals (de Zwart and Posthuma 2005). Because of statistical limitations, particularly correlations among explanatory variables, the authors consider this to be a “first-tier approach” that may be followed up “by other lines of evidence.” Because of inevitable data limitations, local causes that are not included in a regional model, and inherent variability, the predominant cause in their test case is “unknown.”
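A minimal sketch of the msPAF calculation under response addition is given below; the SSD parameters and concentrations are hypothetical, and the published method (de Zwart and Posthuma 2005) additionally uses concentration addition for chemicals sharing a toxic mode of action.

```python
# A minimal sketch, not the authors' implementation: per-chemical PAF values
# are read from log-normal species sensitivity distributions, then combined
# across chemicals by response addition, msPAF = 1 - prod(1 - PAF_i).
# SSD parameters and site concentrations below are hypothetical.
import math

def paf(conc, ssd_log10_mean, ssd_log10_sd):
    """Potentially Affected Fraction: lognormal SSD evaluated at conc (ug/L)."""
    z = (math.log10(conc) - ssd_log10_mean) / ssd_log10_sd
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Hypothetical site concentrations and SSD parameters (log10 ug/L).
chemicals = {
    "zinc":   (50.0, 2.5, 0.7),  # concentration, SSD mean, SSD sd
    "copper": (10.0, 1.8, 0.6),
}

pafs = [paf(c, m, s) for c, m, s in chemicals.values()]
ms_paf = 1 - math.prod(1 - p for p in pafs)
for name, p in zip(chemicals, pafs):
    print(f"{name}: PAF = {p:.2f}")
print(f"msPAF (response addition) = {ms_paf:.2f}")
```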

Top of page

