The standard error is a measure of the variation among these differing estimates. It can be used to gauge the precision with which an estimate from a particular sample approximates the expected result of all possible samples. Standard errors can also be used to define a range, or confidence interval, around an estimate. For instance, the 90 percent confidence level means that if all possible samples were selected, and an estimate of a value and its sampling error were computed for each, then for approximately 90 percent of the samples the interval from 1.6 standard errors below the estimate to 1.6 standard errors above the estimate would include the "true" value. For example, the 90 percent confidence interval for an index percent change estimate of 5.0 percent with a standard error of 1.1 percentage points would be 5.0 percent plus or minus 1.8 percentage points (1.6 standard errors times 1.1 percentage points), or 3.2 to 6.8 percent.
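
As a rough illustration, the short Python sketch below reproduces the interval arithmetic from this example. The 5.0 percent estimate, the 1.1 percentage point standard error, and the 1.6 multiplier are taken from the text above; the code itself is illustrative and not part of the ECI documentation.

    # Reproduce the 90 percent confidence interval from the example above.
    estimate = 5.0        # index percent change estimate, in percent
    standard_error = 1.1  # standard error, in percentage points
    z_90 = 1.6            # approximate critical value for 90% confidence

    margin = z_90 * standard_error  # 1.6 x 1.1 = 1.76, about 1.8 points
    lower = estimate - margin       # 3.24, about 3.2 percent
    upper = estimate + margin       # 6.76, about 6.8 percent

    print(f"90% confidence interval: {lower:.1f} to {upper:.1f} percent")
    # 90% confidence interval: 3.2 to 6.8 percent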

The chances are about 68 out of 100 that an estimate from the survey differs from the true population figure by less than one standard error. The chances are about 90 out of 100 that this difference would be within 1.6 standard errors. This means that in the example above, the chances are 90 out of 100 that the estimated index percent change is between 3.2 and 6.8 percent.
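
These probabilities follow from assuming a normal sampling distribution. The sketch below (again an illustration, not part of the documentation) recovers them with the standard error function; note that the 1.6 multiplier actually yields about 89 percent coverage, which rounds to the "90 out of 100" figure cited.

    import math

    # Assuming a normal sampling distribution, the probability that an
    # estimate falls within k standard errors of the true value is
    # erf(k / sqrt(2)).
    for k in (1.0, 1.6):
        prob = math.erf(k / math.sqrt(2))
        print(f"within {k:.1f} standard errors: {prob:.0%}")
    # within 1.0 standard errors: 68%
    # within 1.6 standard errors: 89%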

Comparative statements appearing in ECI publications are statistically significant at the 90 percent level of confidence, unless otherwise indicated. This means that for each difference cited, the estimated difference is greater than 1.6 times the standard error of that difference.
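
A hypothetical sketch of this rule follows; the function and figures are illustrative, not published ECI estimates, and it assumes the two estimates are independent, so that the standard error of their difference is the square root of the sum of the squared standard errors.

    import math

    def is_significant(estimate_a, se_a, estimate_b, se_b, z=1.6):
        """Test a difference at the 90 percent confidence level."""
        difference = abs(estimate_a - estimate_b)
        # For independent estimates, the standard error of the difference
        # is the square root of the sum of the squared standard errors.
        se_difference = math.sqrt(se_a**2 + se_b**2)
        return difference > z * se_difference

    # Hypothetical percent changes of 5.0 and 2.0 with standard errors of
    # 1.1 and 0.9 percentage points: the difference of 3.0 exceeds
    # 1.6 x 1.42, so it is significant at the 90 percent level.
    print(is_significant(5.0, 1.1, 2.0, 0.9))  # True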
