P-value

From Wikipedia, the free encyclopedia

In statistical significance testing, the p-value is the probability of obtaining a test statistic at least as extreme as the one that was actually observed, assuming that the null hypothesis is true.[1] One often "rejects the null hypothesis" when the p-value is less than the significance level α (Greek alpha), which is often 0.05 or 0.01. When the null hypothesis is rejected, the result is said to be statistically significant.

Although there is often confusion, the p-value is not the probability of the null hypothesis being true, nor is the p-value the same as the Type I error rate, α.[2]

Coin flipping example

For example, an experiment is performed to determine whether a coin flip is fair (a 50% chance of landing heads or tails) or unfairly biased (a probability other than 50% for each outcome).

Suppose that the experimental results show the coin turning up heads 14 times out of 20 total flips. The p-value of this result would be the chance of a fair coin landing on heads at least 14 times out of 20 flips. The probability that 20 flips of a fair coin would result in 14 or more heads can be computed from binomial coefficients as


\begin{align}
& \operatorname{Prob}(14\text{ heads}) + \operatorname{Prob}(15\text{ heads}) +  \cdots + \operatorname{Prob}(20\text{ heads}) \\
& = \frac{1}{2^{20}} \left[ \binom{20}{14} + \binom{20}{15} + \cdots + \binom{20}{20} \right] = \frac{60,\!460}{1,\!048,\!576} \approx 0.058
\end{align}

This probability is the (one-sided) p-value. It measures the chance that a fair coin would give a result at least this extreme.
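
To make the arithmetic above concrete, here is a minimal Python sketch (the function name one_sided_p_value is ours, chosen only for this illustration; only the standard library is used):

    from math import comb

    def one_sided_p_value(heads, flips):
        # Probability that a fair coin gives at least `heads` heads in `flips`
        # tosses: sum the binomial probabilities for heads, heads + 1, ..., flips.
        return sum(comb(flips, k) for k in range(heads, flips + 1)) / 2 ** flips

    print(one_sided_p_value(14, 20))  # 0.0576... = 60,460 / 1,048,576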

[Figure: Example of a p-value for sample size 1, where the test statistic is the single observed value. The curve shown is a probability density function; for a discrete statistic a probability histogram would appear instead.]

Data yielding a p-value of 0.05 means there is only a 5% chance of obtaining the observed (or a more extreme) result if no real effect exists.[3] This definition is rather counter-intuitive and leads to many misunderstandings and misinterpretations. "Common sense" tells us to judge our hypotheses by how well they fit the observed evidence, but that is not what a p-value describes. Instead, it describes the probability of observing data at least as extreme as those actually observed, given that the null hypothesis is true.

Interpretation

Traditionally, one rejects the null hypothesis if the p-value is less than or equal to the significance level,[1] often represented by the Greek letter α (alpha). (Greek α is also used for Type I error; the connection is that a hypothesis test that rejects the null hypothesis for all samples that have a p-value less than α will have a Type I error of α.) A significance level of 0.05 would deem as extraordinary any result that is within the most extreme 5% of all possible results under the null hypothesis. In this case a p-value less than 0.05 would result in the rejection of the null hypothesis at the 5% (significance) level.

When we ask whether a given coin is fair, often we are interested in the deviation of our result from the equality of numbers of heads and tails. In this case, the deviation can be in either direction, favoring either heads or tails. Thus, in this example of 14 heads and 6 tails, we may want to calculate the probability of getting a result deviating by at least 4 from parity in either direction (two-sided test). This is the probability of getting at least 14 heads or at least 14 tails. As the binomial distribution is symmetrical for a fair coin, the two-sided p-value is simply twice the above calculated single-sided p-value; i.e., the two-sided p-value is 0.115.

In the above example we thus have:

  • null hypothesis (H0): fair coin; P(heads) = 0.5
  • observation O: 14 heads out of 20 flips; and
  • p-value of observation O given H0 = Prob(≥ 14 heads or ≥ 14 tails) = 2 × Prob(≥ 14 heads) ≈ 0.115.
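
The two-sided value quoted above can be reproduced in the same way; the following sketch (again using an illustrative helper name of our own) simply doubles the one-sided tail probability, which is valid here because the binomial distribution of a fair coin is symmetric:

    from math import comb

    def two_sided_p_value(heads, flips):
        # Two-sided p-value for a fair coin, assuming heads >= flips / 2:
        # by symmetry it is twice the one-sided tail probability.
        tail = sum(comb(flips, k) for k in range(heads, flips + 1)) / 2 ** flips
        return 2 * tail

    print(two_sided_p_value(14, 20))  # 0.1153... ≈ 0.115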

The calculated p-value exceeds 0.05, so the observation is consistent with the null hypothesis — that the observed result of 14 heads out of 20 flips can be ascribed to chance alone — as it falls within the range of what would happen 95% of the time were the coin in fact fair. In our example, we fail to reject the null hypothesis at the 5% level. Although the coin did not fall evenly, the deviation from expected outcome is small enough to be consistent with chance.

However, had one more head been obtained, the resulting p-value (two-tailed) would have been 0.0414 (4.14%). This time the null hypothesis – that the observed result of 15 heads out of 20 flips can be ascribed to chance alone – is rejected when using a 5% cut-off.
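
The 5% cut-off can also be checked by simulation. The sketch below (the helper name and the number of trials are our own choices) repeatedly flips a fair coin 20 times and rejects whenever the two-sided p-value falls below 0.05; the long-run rejection rate is the Type I error rate mentioned above, which for a discrete statistic such as this is at most α rather than exactly α:

    import random
    from math import comb

    def two_sided_p_value(heads, flips):
        # Measure the deviation from parity in either direction and double
        # the one-sided tail probability, capping the result at 1.
        k = max(heads, flips - heads)
        tail = sum(comb(flips, i) for i in range(k, flips + 1)) / 2 ** flips
        return min(1.0, 2 * tail)

    random.seed(0)
    alpha, flips, trials = 0.05, 20, 100_000
    rejections = sum(
        two_sided_p_value(sum(random.random() < 0.5 for _ in range(flips)), flips) < alpha
        for _ in range(trials)
    )
    print(rejections / trials)  # about 0.04: below alpha because the test statistic is discrete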

Misunderstandings

Comparing the p-value to a significance level yields one of two results: either the null hypothesis is rejected, or the null hypothesis cannot be rejected at that significance level (which does not imply that the null hypothesis is true). A small p-value that indicates statistical significance does not mean that an alternative hypothesis is ipso facto correct.

Despite the ubiquity of p-value tests, this particular test for statistical significance has come under heavy criticism due both to its inherent shortcomings and the potential for misinterpretation.

There are several common misunderstandings about p-values.[4][5]

  1. The p-value is not the probability that the null hypothesis is true.
    In fact, frequentist statistics does not, and cannot, attach probabilities to hypotheses. Comparison of Bayesian and classical approaches shows that a p-value can be very close to zero while the posterior probability of the null is very close to unity (if there is no alternative hypothesis with a large enough a priori probability that would explain the results more easily). This is the Jeffreys–Lindley paradox.
  2. The p-value is not the probability that a finding is "merely a fluke."
    As the calculation of a p-value is based on the assumption that a finding is the product of chance alone, it patently cannot also be used to gauge the probability of that assumption being true. This differs from the p-value's real meaning: the chance of obtaining such results if the null hypothesis is true.
  3. The p-value is not the probability of falsely rejecting the null hypothesis. This error is a version of the so-called prosecutor's fallacy.
  4. The p-value is not the probability that a replicating experiment would not yield the same conclusion.
  5. 1 − (p-value) is not the probability of the alternative hypothesis being true (see (1)).
  6. The significance level of the test is not determined by the p-value.
    The significance level of a test is a value that should be decided upon by the agent interpreting the data before the data are viewed, and is compared against the p-value or any other statistic calculated after the test has been performed. (However, reporting a p-value is more useful than simply saying that the results were or were not significant at a given level, and allows the reader to decide for himself whether to consider the results significant.)
  7. The p-value does not indicate the size or importance of the observed effect (compare with effect size). The two do vary together, however: the larger the effect, the smaller the sample size required to obtain a significant p-value.

Problems

Critics of p-values point out that the criterion used to decide "statistical significance" is based on the somewhat arbitrary choice of level (often set at 0.05).[6] If significance testing is applied to hypotheses that are known to be false in advance, a non-significant result will simply reflect an insufficient sample size. The definition of "more extreme" data depends on the intentions of the investigator; for example, the situation in which the investigator flips the coin 100 times has a set of extreme data that is different from the situation in which the investigator continues to flip the coin until 50 heads are achieved.[7]

As noted above, the p-value p is the main result of statistical significance testing. Fisher proposed p as an informal measure of evidence against the null hypothesis. He called on researchers to combine p in the mind with other types of evidence for and against that hypothesis, such as the a priori plausibility of the hypothesis and the relative strengths of results from previous studies.[2] Many misunderstandings concerning p arise because statistics classes and instructional materials ignore or at least do not emphasize the role of prior evidence in interpreting p. A renewed emphasis on prior evidence could encourage researchers to place p in the proper context, evaluating a hypothesis by weighing p together with all the other evidence about the hypothesis.[8]

Related quantities

A closely related concept is the E-value,[9] which is the expected number of times in multiple testing that one would obtain a test statistic at least as extreme as the one actually observed, assuming that the null hypothesis is true. The E-value is the product of the number of tests and the p-value.
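
As a minimal illustration of that relationship (the function name and the numbers are our own, chosen only for the example), a per-test p-value of 0.001 spread over 500 tests corresponds to an E-value of 0.5:

    def e_value(p_value, num_tests):
        # Expected number of tests, out of `num_tests`, that yield a statistic
        # at least as extreme as the observed one when the null hypothesis is true.
        return num_tests * p_value

    print(e_value(0.001, 500))  # 0.5 expected "hits" by chance alone across 500 tests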

References

  1. ^ a b P-value - About.com Economics Dictionary, definition of p-value
  2. ^ a b Hubbard, R.; Lindsay, R. M. (2008). "Why P Values Are Not a Useful Measure of Evidence in Statistical Significance Testing". Theory & Psychology 18 (1): 69–88. DOI:10.1177/0959354307086923. http://wiki.bio.dtu.dk/~agpe/papers/pval_notuseful.pdf.  Paper that explains the difference between Fisher's evidential p-value and the Neyman–Pearson Type I error rate α.
  3. ^ Siegfried, Tom. (2010, March 27). "Odds are, it's wrong: science fails to face the shortcomings of statistics", The Free Library. (2010). Retrieved December 22, 2011 from http://www.thefreelibrary.com/Odds%20are,%20it%27s%20wrong:%20science%20fails%20to%20face%20the%20shortcomings%20of...-a0223598536.
  4. ^ Sterne JAC, Smith GD (2001). "Sifting the evidence—what's wrong with significance tests?". BMJ 322 (7280): 226–231. DOI:10.1136/bmj.322.7280.226. PMC 1119478. PMID 11159626. //www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=1119478. 
  5. ^ Schervish MJ (1996). "P Values: What They Are and What They Are Not". The American Statistician 50 (3): 203–206. DOI:10.2307/2684655. JSTOR 2684655. 
  6. ^ Sellke, Thomas; Bayarri, M.J.; Berger, James (2001). "Calibration of p values for testing precise null hypotheses". The American Statistician 55 (1): 62–71. DOI:10.1198/000313001300339950. JSTOR 2685531. 
  7. ^ Johnson, Douglas H. (1999). "The Insignificance of Statistical Significance Testing". Journal of Wildlife Management 63 (3): 763–772. DOI:10.2307/3802789. http://www.stats.org.uk/statistical-inference/Johnson1999.pdf. 
  8. ^ Goodman, SN (1999). "Toward Evidence-Based Medical Statistics. 1: The P Value Fallacy.". Annals of Internal Medicine 130: 995–1004. 
  9. ^ National Institutes of Health definition of E-value

External links

  • Free online p-values calculators for various specific tests (chi-square, Fisher's F-test, etc.).
  • Understanding P-values, including a Java applet that illustrates how the numerical values of p-values can give quite misleading impressions about the truth or falsity of the hypothesis under test.