1996 National Household Survey on Drug Abuse: Preliminary Results




APPENDIX 2: LIMITATIONS OF THE DATA

I. Target Population

An important limitation of the NHSDA estimates of drug use prevalence is that they are only designed to describe the target population of the survey, the civilian noninstitutionalized population. Although this includes more than 98% of the total U.S. population, it does exclude some important and unique subpopulations who may have very different drug-using patterns. The survey excludes active military personnel, who have been shown to have significantly lower rates of illicit drug use. Persons living in institutional group quarters, such as prisons and residential drug treatment centers, are not covered in the NHSDA and have been shown in other surveys to have higher rates of illicit drug use. Also excluded are homeless persons not living in a shelter on the survey date, another population shown to have higher than average rates of illicit drug use. Appendix 3 describes other surveys that provide data for these populations.

II. Sampling Error and Statistical Significance

The sampling error of an estimate is the error caused by the selection of a sample instead of conducting a census of the population. Sampling error is reduced by selecting a large sample and by using efficient sample design and estimation strategies such as stratification, optimal allocation, and ratio estimation.

With the use of probability sampling methods in the NHSDA, it is possible to develop estimates of sampling error from the survey data. These estimates have been calculated for all prevalence estimates presented in this report using a Taylor series linearization approach that takes into account the effects of the complex NHSDA design features. The sampling errors are used to identify unreliable estimates and to test for the statistical significance of differences between estimates.
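As a rough illustration of the linearization idea, the sketch below computes a design-based variance for a weighted proportion under a with-replacement, stratified-cluster approximation; it is a simplified stand-in for the NHSDA production variance estimation, and the function name and data structures are hypothetical.

```python
import numpy as np

def linearized_variance(y, w, stratum, psu):
    """Sketch of a Taylor series linearization variance estimate for a weighted
    proportion p_hat = sum(w * y) / sum(w), treating primary sampling units
    (PSUs) as drawn with replacement within strata (a simplified stand-in for
    the NHSDA design-based variance estimation)."""
    y, w = np.asarray(y, dtype=float), np.asarray(w, dtype=float)
    stratum, psu = np.asarray(stratum), np.asarray(psu)
    p_hat = np.sum(w * y) / np.sum(w)
    # Linearized (influence) values of the ratio estimator p_hat.
    z = w * (y - p_hat) / np.sum(w)
    var = 0.0
    for h in np.unique(stratum):
        in_h = stratum == h
        # Total of the linearized values for each PSU in stratum h.
        totals = np.array([z[in_h & (psu == c)].sum() for c in np.unique(psu[in_h])])
        n_h = len(totals)  # assumes at least two PSUs per stratum
        var += n_h / (n_h - 1) * np.sum((totals - totals.mean()) ** 2)
    return p_hat, var
```

The standard error of a prevalence estimate is the square root of this quantity; the relative standard error used in the suppression rule described below follows from it.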

Estimates considered to be unreliable due to unacceptably large sampling error are not shown in this report and are noted by asterisks (*) in the tables in the appendix. The criterion used for suppressing estimates was based on the relative standard error (RSE), defined as the ratio of the standard error of an estimate to the estimate itself. The log transformation of the proportion estimate (p) was used to calculate the RSE. Specifically, rates and the corresponding estimated numbers of users were suppressed if:

RSE[-ln(p)] > 0.175 when p < .5

or RSE[-ln(1-p)] > 0.175 when p ≥ .5.

Estimates were also suppressed if they rounded to zero or 100 percent. This occurs if p < .0005 or if p ≥ .9995. Statistical tests of significance have been computed for comparisons of estimates from 1995 with 1994. Results are shown in the appendix 5 tables. As indicated in the footnotes, significant differences are noted by "a" (significant at the .05 level) and "b" (significant at the .01 level). All changes described in this report as increases or decreases were tested and found to be significant at least at the .05 level, unless otherwise indicated.
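For concreteness, the suppression rule can be sketched as follows, assuming a delta-method approximation in which SE[-ln(p)] ≈ SE(p)/p; the exact computation used by OAS may differ, and the function name and inputs are illustrative only.

```python
import math

def suppress(p, se_p):
    """Illustrative sketch of the RSE-based suppression rule described above.

    p    : estimated proportion (0 < p < 1)
    se_p : estimated standard error of p
    Returns True if the estimate (and the corresponding number of users)
    would be suppressed.
    """
    # Estimates that round to 0 or 100 percent are always suppressed.
    if p < 0.0005 or p >= 0.9995:
        return True
    if p < 0.5:
        # Delta-method approximation: SE[-ln(p)] ~ SE(p)/p, so
        # RSE[-ln(p)] ~ (SE(p)/p) / (-ln(p)).
        rse = (se_p / p) / (-math.log(p))
    else:
        # Symmetric rule applied to 1 - p when p >= .5.
        rse = (se_p / (1.0 - p)) / (-math.log(1.0 - p))
    return rse > 0.175

# Example: a 2.0 percent prevalence estimate with a 0.4 percentage point standard error
print(suppress(0.02, 0.004))   # False: RSE[-ln(p)] is about 0.05
```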

Nonsampling errors such as nonresponse and reporting errors may affect the outcome of significance tests. Also, keep in mind that while a significance level of .05 is used to determine statistical significance in these tables, large differences associated with slightly higher p-values (specifically those between .05 and .10) may be worth noting along with their p-values. Furthermore, statistically significant differences are not always meaningful, because the magnitude of the difference may be small or because the significance may have occurred simply by chance. In a series of twenty independent tests, one test is expected to indicate significance merely by chance even if there is no real difference in the populations compared. When more than one comparison is made among three or more percentages (for example, when comparing percentages within a table), no attempt has been made to adjust the level of significance to account for making simultaneous inferences (often referred to as multiple comparisons). Therefore, the probability of falsely rejecting the null hypothesis at least once in a family of k comparisons is higher than the significance level given for individual comparisons (in this report, either .01 or .05).
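For concreteness (a standard calculation, not one taken from the NHSDA documentation), with a per-comparison level of $\alpha = .05$ and $k = 20$ independent tests,

$$
P(\text{at least one false rejection}) \;=\; 1 - (1-\alpha)^k \;=\; 1 - (0.95)^{20} \;\approx\; 0.64,
\qquad
E(\text{false rejections}) \;=\; k\alpha \;=\; 1.
$$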

When making comparisons of estimates for different population subgroups from the same data year, the covariance term, which is usually small and positive, has typically been ignored. This results in somewhat conservative tests of hypotheses that will sometimes fail to establish statistical significance when in fact it exists.
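Writing out the variance of a difference between two subgroup estimates (a standard identity, not a formula specific to the NHSDA) makes the direction of this conservatism explicit:

$$
\operatorname{Var}(\hat{p}_1 - \hat{p}_2) \;=\; \operatorname{Var}(\hat{p}_1) + \operatorname{Var}(\hat{p}_2) - 2\operatorname{Cov}(\hat{p}_1, \hat{p}_2).
$$

Because the omitted covariance enters with a negative sign and is positive, dropping it overstates the variance of the difference and thus understates the test statistic.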

III. Nonsampling Error

Nonsampling errors occur from nonresponse, coding errors, computer processing errors, errors in the sampling frame, reporting errors, and other errors. Nonsampling errors are reduced through data editing, statistical adjustments for nonresponse, and close monitoring and periodic retraining of interviewers.

Although nonsampling errors can often be much larger than sampling errors, measurement of most nonsampling errors is difficult or impossible. However, some indication of the effects of some types of nonsampling errors can be obtained through proxy measures such as response rates and from other research studies.

Of the 56,912 eligible households sampled, 52,770 were successfully screened, for a screening response rate of 92.7%. In these screened households, a total of 23,240 sample persons were selected, and completed interviews were obtained from 18,269 of these sample persons, for an interview response rate of 78.6%. A total of 2,300 sample persons (9.9%) were classified as refusals, 1,795 (7.7%) were not available or never at home, and 876 (3.8%) did not participate for various other reasons, such as physical or mental incompetence or a language barrier. The response rate was highest among the 12-17 year-old age group (82%). Response rates were also higher among Hispanics (81%) than among blacks (79%) and whites (77%).

Among survey participants, item response rates were above 98% for most questionnaire items. However, inconsistent responses are common for some items, including the drug use items. Estimates of drug use from the NHSDA are based on responses to multiple questions, so that the maximum amount of information is used in determining whether a respondent is classified as a drug user. Inconsistencies in responses are resolved through a logical editing process that involves some judgment on the part of survey analysts and is a potential source of nonsampling error. A typical case is a respondent who reports that their most recent use of a drug was more than a month ago, but who reports use within the past month in a later question. (This could occur because the interviewer may have developed greater rapport with the respondent in the latter stages of the interview, leading to more openness on the part of the respondent.) Such a respondent is classified as a past month user. For 1996, 23% of the estimate of past month marijuana use and 40% of the estimate of past month cocaine use are based on such cases.
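A minimal sketch of this kind of logical edit is shown below; the rule, the recency coding, and the variable names are illustrative assumptions rather than the actual NHSDA editing specifications.

```python
# Recency categories, from most to least recent (illustrative coding).
PAST_MONTH, PAST_YEAR, LIFETIME, NEVER = 1, 2, 3, 4

def edited_recency(reported_recency, later_answers):
    """Resolve inconsistent recency reports in favor of the most recent use
    indicated anywhere in the interview (a simplified stand-in for the
    NHSDA logical editing rules described above)."""
    recency = reported_recency
    for answer in later_answers:
        # A later question may reveal more recent use (e.g., within 30 days).
        recency = min(recency, answer)
    return recency

# Respondent first reports use "more than a month ago" (past year), but a later
# question implies use within the past month: classified as a past month user.
print(edited_recency(PAST_YEAR, [PAST_MONTH]))   # prints 1 (PAST_MONTH)
```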

NHSDA estimates are based on self-reports of drug use, and their value depends on respondents' truthfulness and memory. Although many studies have generally established the validity of self-report data and the NHSDA procedures were designed to encourage honesty and recall, some degree of underreporting is assumed. No adjustment to NHSDA data is made to correct for this (Appendix 4 lists a number of references addressing the validity of self-reported drug use data). The methodology used in the NHSDA has been shown to produce more valid results than other self-report methods (e.g., by telephone) (Turner, Lessler, and Gfroerer 1992; Aquilino 1994). However, comparisons of NHSDA data with data from surveys conducted in classrooms suggest that underreporting of drug use by youths in their homes may be substantial (Gfroerer 1993; Gfroerer, Wright, and Kopstein, in press).

The incidence estimates discussed in section 9 of this report are based on retrospective reports of age at first drug use by survey respondents interviewed during 1994-96, and may be particularly subject to several biases.

Bias due to differential mortality occurs because some persons who were alive and exposed to the risk of first drug use in the historical periods shown in the tables died before the 1994, 1995, and 1996 NHSDAs were conducted. This bias is probably very small for the estimates shown in this report. Incidence estimates are also affected by memory errors, including recall decay (the tendency to forget events occurring long ago) and forward telescoping (the tendency to report that an event occurred more recently than it actually did). These memory errors would tend to result in estimates for earlier years (i.e., the 1960s and 1970s) that are downwardly biased (because of recall decay) and estimates for later years that are upwardly biased (because of telescoping). There is also likely to be some underreporting bias due to concerns about the social acceptability of drug use behaviors and respondents' fear of disclosure. This is likely to have the greatest impact on recent estimates, which reflect more recent use and reporting by younger respondents. Finally, for drugs whose use is frequently initiated at age 10 or younger, estimates based on retrospective reports one year later underestimate total incidence because 11-year-old children are not sampled by the NHSDA. Prior analyses showed that alcohol and cigarette (any use) incidence estimates could be significantly affected by this. Therefore, for these drugs no 1994 estimates were made, and 1993 estimates were based only on the 1995 NHSDA.

Overall, these biases are likely to have the greatest effect on the most recent estimates, i.e., 1993-95, primarily because they reflect recent drug use and because they are heavily based on the reports of adolescents. Thus, the estimates for recent years may be less reliable than estimates for earlier periods. Analyses of estimates based on single years of NHSDA data have been done to attempt to better understand the effects of these biases and to assess the reliability of estimates for recent years. So far, no clear evidence of significant bias has been found.

IV. Estimation of Heavy Drug Use

While the NHSDA collects data on the most severely affected drug users, the survey design is less well suited to estimating these problems. The limitations that preclude more accurate estimates are primarily the sample size, coverage, and the reliance on self-reports. Because heavy drug use is relatively rare in the general population, the NHSDA captures a small number of these users, resulting in relatively large sampling errors. In addition to this instability resulting from the small sample, underestimation is believed to occur because many heavy drug users may not maintain stable addresses and, if located, may not be available for an interview. Finally, as with all NHSDA respondents, heavy drug users who participate in the survey may not always report their drug use accurately during the interview.

A new estimation procedure was designed at OAS to produce improved estimates of heavy drug use (Wright, Gfroerer and Epstein 1995). This procedure uses external counts of the number of people in treatment for drug problems (from the National Drug and Alcoholism Treatment Unit Survey) and the number of arrests for non-traffic offenses (from the F.B.I.'s Uniform Crime Reports) to adjust NHSDA data. This ratio estimation procedure provides a partial adjustment that accounts for undercoverage of hard-to-reach populations and also adjusts for underreporting of drug use by survey respondents. However, it does not reduce sampling error.

Application of this adjustment has resulted in 40-80 percent higher estimates of past month and past year heroin use and 20-40 percent higher estimates of frequent cocaine use.
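The sketch below illustrates only the general idea of a ratio adjustment driven by an external benchmark; the single-benchmark form, the variable names, and the example numbers are simplifying assumptions and do not reproduce the actual Wright, Gfroerer and Epstein (1995) procedure.

```python
def ratio_adjusted_estimate(survey_total, survey_benchmark, external_benchmark):
    """Illustrative ratio adjustment: scale a survey-based total by the ratio
    of an external benchmark count (e.g., persons in drug treatment) to the
    survey's own estimate of that same quantity."""
    return survey_total * (external_benchmark / survey_benchmark)

# Hypothetical numbers: the survey estimates 400,000 persons in treatment,
# while an external census of treatment programs counts 600,000.
print(ratio_adjusted_estimate(2_000_000, 400_000, 600_000))   # 3,000,000.0
```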

V. Adjustment of 1979-1993 NHSDA Estimates to Account for the New Survey Methodology Used in the 1994 and 1995 NHSDAs

The NHSDA is an important source of data for policy makers, not only because it provides measures of substance abuse for a single year, but also because the series of surveys over the last several years provides a measure of change in substance abuse in the population over time. Beginning in 1994, the NHSDA began using an improved questionnaire and estimation procedure based on a series of studies and consultations with drug survey experts and data users. Because this new methodology produces estimates that are not directly comparable to previous estimates, the 1979-1993 NHSDA estimates presented in this report were adjusted to account for the new methodology that was begun in 1994.

Nearly all of the 1979-1993 substance use prevalence estimates presented in this report were adjusted using a simple ratio correction factor that was estimated at the total population level using data from the pooled 1993 and 1994 NHSDAs. The remaining substance use prevalence estimates were adjusted by formally modeling the effect of the new methodology, relative to the old methodology, using data from the 1994 NHSDA. The modeling procedure was used for the more prevalent substance use measures that changed significantly between the old- and new-version NHSDA questionnaires. The modeling procedure was particularly desirable for the more prevalent measures because it was able to use a greater number of potentially significant explanatory variables in the adjustment than the simple ratio correction factor. Each of the procedures is discussed below.

Ratio Adjustment

Most of the 1979-1993 NHSDA estimates were adjusted using a ratio correction factor that measured the effect of the new methodology, relative to the old methodology, using data from the 1993 and 1994 NHSDAs. As explained in the Introduction to this report, the 1994 NHSDA was designed to generate two sets of estimates. The first set, referred to in previous reports as the 1994-A estimates, was based on the same questionnaire and editing method that was used in 1993 (and earlier). The second set, referred to as the 1994-B estimates, was based on the new NHSDA survey methodology. Since the 1994-A estimates were generated from a sample roughly one-fourth the size of the 1994-B sample, the 1994-A sample was pooled with the 1993 sample to increase the precision of the ratio correction factor.

The 1979-1993 NHSDA estimates that were adjusted using the ratio correction factor included estimates of lifetime, past year and past month use of cocaine, crack, inhalants, hallucinogens (including PCP and LSD), heroin, any psychotherapeutic, stimulants, sedatives, tranquilizers, analgesics, any illicit drug other than marijuana, and smokeless tobacco, as well as estimates of past year frequency of use of marijuana, cocaine and alcohol. This adjustment was computed at the total sample level and was applied equally to all corresponding estimates computed among subgroups of the total population. Consequently, for example, the same ratio adjustment was used to correct all estimates of past year cocaine use, regardless of the demographic subgroup under consideration. Mathematically, this ratio adjustment can be expressed as follows:

Suppose i denotes the sampled respondent, y_i denotes a 0/1 variable to indicate nonuse or use of some particular substance, and w_i denotes the sample weight. Then the ratio adjustment was computed as:

$$
R \;=\; \frac{\sum_{i \in S_{1994\text{-}B}} w_i\, y_i}{\sum_{i \in S_{1993\,\cup\,1994\text{-}A}} w_i\, y_i}
\;=\; \frac{\bar{y}_{1994\text{-}B}}{\bar{y}_{1993\,\cup\,1994\text{-}A}}
$$

The latter equality is true because the sample weights in the pooled 1993 and 1994-A sample were adjusted slightly so that they would sum to the same demographic control totals as the 1994-B sample across the variables typically used in the NHSDA post stratification procedure.
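As a concrete illustration, the ratio correction and its application to a pre-1994 estimate can be sketched as follows; the data and the 1985 figure are hypothetical, and the sketch takes the weight re-calibration described above as given.

```python
import numpy as np

def ratio_correction(w_new, y_new, w_old, y_old):
    """Weighted prevalence under the new methodology (1994-B sample) divided
    by weighted prevalence under the old methodology (pooled 1993 and
    1994-A samples), i.e., the ratio adjustment R shown above."""
    prev_new = np.average(y_new, weights=w_new)   # y-bar for 1994-B
    prev_old = np.average(y_old, weights=w_old)   # y-bar for 1993 union 1994-A
    return prev_new / prev_old

# Tiny hypothetical samples: 0/1 use indicators and sample weights.
R = ratio_correction(
    w_new=[1.2, 0.8, 1.0, 1.5], y_new=[1, 0, 1, 0],
    w_old=[1.1, 0.9, 1.3, 1.2], y_old=[1, 0, 0, 0],
)

# The same factor is then applied to every 1979-1993 estimate of that measure,
# regardless of demographic subgroup (here, a hypothetical 1985 prevalence).
adjusted_1985_estimate = 0.028 * R
```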

Model-Based Adjustment

A model-based method of computing adjustments to account for the changes in the NHSDA methodology was used for estimates of the use of the more prevalent drugs, including lifetime, past year and past month use of alcohol, marijuana, cigarettes, and any illicit drug, as well as past month binge drinking and past month heavy drinking. It was also used for measures of perceived risk of harm, but only for items that had wording changes in 1994.

The model that was used is based on a constrained exponential model originally proposed by Deville & Särndal (1992). Like the ratio adjustment, this method of adjusting previous estimates models the combined effect of all measurement error differences between the new and old methodologies. The model offers two primary advantages: (1) it allows a greater number of potentially significant explanatory variables in the adjustment, and (2) it bounds the resulting adjustment between predetermined thresholds. This a priori bounding eliminates extreme adjustments that might otherwise occur, particularly for small subpopulations. Additionally, the model fitting procedure used to compute the adjustment forces the adjusted 1994-A estimates to equal the 1994-B estimates within the subpopulations represented by the dummy variables in the vector of model predictors. Mathematically, this model can be expressed as follows:

$$
R_i \;=\; \frac{L(U-1) \;+\; U(1-L)\, e^{-A X_i \beta}}{(U-1) \;+\; (1-L)\, e^{-A X_i \beta}} \qquad (1)
$$

where the ratio adjustment $R_i$ can be interpreted as

$$
R_i \;=\; \frac{\text{Probability of reporting use with the new survey methodology}}{\text{Probability of reporting use with the old survey methodology}}.
$$

In equation (1) the constant $A$ is simply a scale factor set equal to $(U-L)\,/\,[(1-L)(U-1)]$, $\beta$ is the vector of model coefficients, and $X_i$ is a vector of explanatory variables. The explanatory variables considered in the models consisted of categorical indicator variables for age group and race/ethnicity. The parameters $L$ and $U$ are the predetermined constants that force the estimated $R_i$ to satisfy

$$
L \;\le\; R_i \;\le\; U \qquad \text{for all } i \text{ and for any value of } X_i \beta.
$$

Notice that if the constant $L$ is set equal to zero and $U$ approaches $\infty$, then the constant $A$ approaches 1, and equation (1) reduces to the familiar, unconstrained exponential model:

$$
R_i \;=\; e^{-X_i \beta}.
$$

The model parameter vector $\beta$ in (1) was estimated by solving the generalized raking equations

$$
\sum_{i \in S_{1994\text{-}A}} w_i\, R_i\, X_i^T\, y_i \;=\; \sum_{i \in S_{1994\text{-}B}} w_i\, X_i^T\, y_i
$$

subject to the constraints.

Notice from the above raking equations that the estimated adjustment $R_i$ forces the 1994-A estimate to equal the 1994-B estimate for any subpopulation represented by an indicator variable in $X_i$. Therefore, for example, if an appropriate indicator for the 12-17 year-old age group was included in $X_i$, then the model-based estimates of the $R_i$'s would produce an adjusted prevalence estimate using the 1994-A sample that exactly equaled the prevalence estimate generated from the 1994-B sample for the 12-17 year-old age group.
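To make the bounded form of equation (1) concrete, here is a minimal sketch of the adjustment factor as a function of $X_i\beta$, assuming illustrative bounds $L$ and $U$ and hypothetical linear predictors; it is not the fitting procedure itself, which solves the raking equations above for $\beta$.

```python
import numpy as np

def constrained_adjustment(x_beta, L=0.5, U=2.0):
    """Constrained exponential adjustment factor R_i of equation (1).

    x_beta : linear predictor X_i * beta for respondent i (scalar or array)
    L, U   : predetermined bounds on R_i, with 0 <= L < 1 < U (illustrative values)
    """
    A = (U - L) / ((1 - L) * (U - 1))   # scale factor defined in the text
    e = np.exp(-A * np.asarray(x_beta, dtype=float))
    return (L * (U - 1) + U * (1 - L) * e) / ((U - 1) + (1 - L) * e)

# Every adjustment stays within [L, U]; at X_i * beta = 0 the factor equals 1.
x_beta = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
print(constrained_adjustment(x_beta))   # values between 0.5 and 2.0
```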
