LIST 4-3-A: MEASURING AND EVALUATING ERROR
- SAMPLE SELECTION, FRAMES AND COVERAGE - ADEQUACY OF FRAME
- Sources of error:
- Limitations of the frame - undercoverage/overcoverage of schools or institutions, duplicates, and cases of unknown eligibility;
- Listing error - failure of initial respondents to include or exclude prospective respondents per instruction; and
- Selection of sampling units and respondent units within sampling units.
- Evaluation of survey coverage - examples:
- Comparison of estimated counts to reliable independent sources;
- Matching studies to earlier versions of the same data source or to other data sources, and the use of dual system estimation;
- Analysis of survey returns for deaths, duplicates, changes in classification, and out-of-scope units; and
- Field work, such as area listings.
- Correcting for coverage error - examples:
- Use a dual frame approach for survey estimation; and
- Employ post-stratification procedures.
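The second correction above, post-stratification, can be sketched in a few lines: respondent weights are rescaled within each post-stratum so that weighted totals match control totals from a reliable external source. The function name and data below are illustrative, not taken from the standard.

```python
# Hypothetical sketch of a post-stratification weight adjustment.
# Within each post-stratum, base weights are scaled so that the
# weighted total equals a known control total from an external source.

def poststratify(weights, strata, control_totals):
    """Return adjusted weights; strata[i] is the post-stratum of case i."""
    stratum_weight_sums = {}
    for w, s in zip(weights, strata):
        stratum_weight_sums[s] = stratum_weight_sums.get(s, 0.0) + w
    factors = {s: control_totals[s] / stratum_weight_sums[s]
               for s in stratum_weight_sums}
    return [w * factors[s] for w, s in zip(weights, strata)]

weights = [10.0, 10.0, 20.0, 20.0]
strata = ["urban", "urban", "rural", "rural"]
controls = {"urban": 30.0, "rural": 30.0}  # illustrative external totals
adj = poststratify(weights, strata, controls)
# Weighted totals now equal the control totals within each post-stratum.
```

After adjustment, the weighted stratum totals reproduce the external counts exactly, which is the defining property of the method.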
- MEASUREMENT ERRORS - DATA COLLECTION
- Sources of error:
- Questionnaire design, content, wording and instructions;
- Length of reference period;
- Interview mode(s);
- Interviewers - characteristics, training, and supervision;
- Respondent rules - self versus proxy respondents;
- Use of records by respondents;
- Other respondent effects;
- Consistency and time-in-sample bias for longitudinal studies;
- Responses to related multiple measures within a questionnaire;
- Statistics derived for related measures from different questionnaires within a survey system; and
- Responses to related measures from multiple respondents in a sampled unit (e.g., parent/student).
- Evaluation of measurement errors - examples:
- Pilot or field test survey and procedures;
- Cognitive research methods;
- Reinterview studies;
- Response variance;
- Randomized experiments;
- Behavior coding;
- Interviewer variance studies;
- Interviewer observation studies;
- Record check studies; and
- Comparisons of related measures within questionnaires, across respondents, and across questionnaires within a survey system.
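Reinterview studies and response variance, listed above, are often summarized with a gross difference rate: the share of cases whose original and reinterview answers disagree. A minimal sketch, with illustrative data:

```python
# Hypothetical sketch: gross difference rate from a reinterview study,
# a simple summary of response variance for a categorical item.

def gross_difference_rate(original, reinterview):
    """Fraction of cases whose original and reinterview responses disagree."""
    pairs = list(zip(original, reinterview))
    return sum(a != b for a, b in pairs) / len(pairs)

original_answers = ["yes", "no", "yes", "yes", "no"]
reinterview_answers = ["yes", "yes", "yes", "no", "no"]
gdr = gross_difference_rate(original_answers, reinterview_answers)
# 2 of 5 cases disagree, so gdr is 0.4
```

A high rate flags items whose wording or administration produces unstable answers, which feeds directly into the corrections listed next.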
- Correcting for measurement errors - examples:
- Use the results from a pilot or field test to modify the questionnaire and/or procedures;
- Use input from cognitive research to modify the questionnaire;
- Where possible, use results from comparisons of related measures; and
- Employ interviewer retraining and feedback.
- DATA PREPARATION ERROR
- Sources of error:
- Pre-edit coding;
- Clerical review;
- Data entry; and
- Editing.
- Evaluation of processing errors - examples:
- Pre-edit coding;
- Clerical review verification;
- Data entry verification;
- Editing verification for manual edits;
- Edit rates;
- Coder error variance estimates; and
- Rating and scoring error variance estimates.
- Correcting for data preparation errors - examples:
- Resolution of differences identified in verification;
- Increased training;
- Feedback during rating and coding; and
- Edits for lack of internal agreement, where appropriate.
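An edit for lack of internal agreement can be as simple as a balance check: flag any record whose component items do not sum to its reported total. A minimal sketch with hypothetical field names:

```python
# Hypothetical sketch of a balance edit: flag records whose component
# counts do not sum to the reported total (lack of internal agreement).
# Field names ("male", "female", "total") are illustrative.

def balance_edit(records, parts, total, tol=0):
    """Return indices of records failing the edit: sum(parts) != total."""
    failures = []
    for i, rec in enumerate(records):
        if abs(sum(rec[p] for p in parts) - rec[total]) > tol:
            failures.append(i)
    return failures

records = [
    {"male": 40, "female": 60, "total": 100},  # internally consistent
    {"male": 45, "female": 60, "total": 100},  # fails: components sum to 105
]
flags = balance_edit(records, ["male", "female"], "total")
# Only the second record (index 1) is flagged for resolution.
```

Flagged records would then go through the resolution and feedback steps listed under the corrections above.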
- SAMPLING AND ESTIMATION ERRORS
- Sources of error:
- Weighting procedures;
- Imputation procedures; and
- Sample survey estimation and modeling procedures.
- Evaluation of sampling and estimation errors - examples:
- Variance estimation;
- Analysis of the choice of variance estimator;
- Indirect estimates for reporting sampling error - use of generalized variance functions, small area estimates, and regression models;
- Comparison of final design effects with estimated design effects used in survey planning;
- Analysis of the frequency of imputation and the initial and final distributions of variables; and
- Analysis of the effect of changes in data processing procedures on survey estimates.
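Comparing final design effects with those assumed in planning requires an estimate of the design effect itself. One common approximation for the component due to unequal weighting is Kish's deff = n * sum(w_i^2) / (sum w_i)^2, which equals 1 for equal weights and grows with weight variability. A sketch with illustrative weights:

```python
# Hypothetical sketch: Kish's approximate design effect from unequal
# weighting, deff = n * sum(w^2) / (sum(w))^2. Equal weights give 1.0;
# variable weights inflate the effective variance of estimates.

def kish_deff(weights):
    n = len(weights)
    return n * sum(w * w for w in weights) / sum(weights) ** 2

equal_deff = kish_deff([2.0, 2.0, 2.0, 2.0])    # equal weights -> 1.0
unequal_deff = kish_deff([1.0, 1.0, 1.0, 5.0])  # unequal weights -> 1.75
```

A final deff well above the value assumed in planning signals that achieved precision falls short of the design targets.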
- Correcting for estimation errors - examples:
- Re-estimation using alternative techniques (e.g., outlier treatments, imputation procedures, and variance estimation procedures); and
- Explore fitting survey distributions to known distributions from other sources to reduce variance and bias.
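Fitting survey distributions to known distributions from other sources is often done by raking (iterative proportional fitting): weights are alternately scaled so that weighted marginals match each set of known totals in turn. A minimal sketch, assuming two categorical margins and illustrative data:

```python
# Hypothetical sketch of raking (iterative proportional fitting):
# alternately scale weights so weighted marginals match known totals
# on each calibration variable until the adjustments stabilize.

def rake(weights, rows, cols, row_totals, col_totals, iters=50):
    w = list(weights)
    for _ in range(iters):
        for margin, labels in ((row_totals, rows), (col_totals, cols)):
            sums = {}
            for wi, lab in zip(w, labels):
                sums[lab] = sums.get(lab, 0.0) + wi
            w = [wi * margin[lab] / sums[lab] for wi, lab in zip(w, labels)]
    return w

base = [1.0, 1.0, 1.0, 1.0]
sex = ["m", "m", "f", "f"]
age = ["young", "old", "young", "old"]
raked = rake(base, sex, age,
             {"m": 60.0, "f": 40.0},        # illustrative known sex margin
             {"young": 50.0, "old": 50.0})  # illustrative known age margin
# Weighted marginals now match both external distributions at once.
```

Unlike post-stratification on a full cross-classification, raking needs only the marginal distributions, which is why it is attractive when cross-classified control totals are unavailable.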
- NONRESPONSE ERRORS
- Sources of error:
- Household/school/institution nonresponse;
- Person nonresponse; and
- Item nonresponse.
- Evaluation of nonresponse errors - examples (see Standard 4-4):
- Comparisons of respondents to known population characteristics
from external sources;
- Comparisons of respondents and nonrespondents across subgroups on available sample frame characteristics or, in the case of item nonresponse, on available survey data;
- Comparisons of characteristics of early and late responding
cases;
- Follow-up survey of nonrespondents for a reduced set of key
variables to compare with data from respondents; and
- Descriptions of items not completed, patterns of partial nonresponse, and characteristics of sampling units failing to respond to certain groups of items.
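The comparison of early and late responding cases above can start as a simple tabulation of a key variable by response wave; a large gap warns that remaining nonrespondents may differ further still. A sketch with illustrative values:

```python
# Hypothetical sketch: compare early and late responding cases on a key
# variable. Under a continuum-of-resistance view, late responders hint
# at what nonrespondents might look like. Data are illustrative.

def group_means(values, is_early):
    """Return (early_mean, late_mean) for a key survey variable."""
    early = [v for v, e in zip(values, is_early) if e]
    late = [v for v, e in zip(values, is_early) if not e]
    return sum(early) / len(early), sum(late) / len(late)

key_variable = [40.0, 50.0, 60.0, 30.0, 20.0]
responded_early = [True, True, True, False, False]
early_mean, late_mean = group_means(key_variable, responded_early)
# A wide early/late gap flags potential nonresponse bias for follow-up.
```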
- Correcting for nonresponse errors - examples (see Standards 3-2, 4-1, and 4-4):
- If response rates are low during initial phases of data collection and funds are not available for intensive follow-up of all nonrespondents, take a random subsample of nonrespondents and use a more intensive data collection method;
- Use nonresponse weight adjustments for unit nonresponse; and
- Use item imputations for item nonresponse.
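A standard unit-nonresponse weight adjustment, as in the second correction above, inflates each respondent's weight within a weighting class by the ratio of the class's total weight to its respondent weight, so respondents carry the weight of similar nonrespondents. A minimal sketch with illustrative data (nonrespondents' weights are simply zeroed out here):

```python
# Hypothetical sketch of a unit-nonresponse weight adjustment within
# weighting classes: each respondent's weight is multiplied by
# (total weight in class) / (respondent weight in class).

def nonresponse_adjust(weights, classes, responded):
    all_sums, resp_sums = {}, {}
    for w, c, r in zip(weights, classes, responded):
        all_sums[c] = all_sums.get(c, 0.0) + w
        if r:
            resp_sums[c] = resp_sums.get(c, 0.0) + w
    return [w * all_sums[c] / resp_sums[c] if r else 0.0
            for w, c, r in zip(weights, classes, responded)]

weights = [10.0, 10.0, 10.0, 10.0]
classes = ["A", "A", "B", "B"]
responded = [True, False, True, True]
adjusted = nonresponse_adjust(weights, classes, responded)
# In class A the lone respondent absorbs the nonrespondent's weight.
```

The adjustment preserves each class's total weight, which keeps population totals unbiased under the assumption that respondents and nonrespondents within a class are similar.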
- Methods for reducing nonresponse - examples (see Standards 3-2, 4-1, and 4-4):
- Employ pretest or embedded experiments to determine the efficacy of incentives to improve response rates;
- Use internal reporting systems to monitor nonresponse during collection;
- Use follow-up strategies for nonrespondents to encourage participation;
- Target a set of key data items for collection with unwilling respondents; and
- For ongoing surveys, consider separate research studies to examine alternative methods of improving response rates.