Statistical Standards Program
PROCESSING AND EDITING OF DATA


LIST 4-3-A: MEASURING AND EVALUATING ERROR

  1. SAMPLE SELECTION, FRAMES, AND COVERAGE - ADEQUACY OF FRAME
     
    1. Sources of error:
      1. Limitations of the frame - undercoverage or overcoverage of schools or institutions, duplicates, and cases of unknown eligibility;
      2. Listing error - failure of initial respondents to include or exclude prospective respondents as instructed; and
      3. Selection of sampling units and respondent units within sampling units.
         
    2. Evaluation of survey coverage - examples:
      1. Comparison of estimated counts to reliable independent sources;
      2. Matching studies against earlier versions of the same data source or other data sources, including the use of dual system estimation (see the sketch after this list);
      3. Analysis of survey returns for deaths, duplicates, changes in classification, and out-of-scope units; and
      4. Field work - such as area listings.
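
    A minimal sketch, in Python, of the dual system (capture-recapture) estimation named in item 2 above. All counts are invented for illustration: with n1 units on the survey frame, n2 on an independent list, and m matched in both, the Lincoln-Petersen estimator of the population size is n1*n2/m, and n1 divided by that estimate approximates frame coverage.

      # Dual system (capture-recapture) coverage check -- all counts
      # below are hypothetical, for illustration only.
      n1 = 1800  # schools on the survey frame
      n2 = 1650  # schools on an independent administrative list
      m = 1500   # schools matched in both sources after a matching study

      n_hat = n1 * n2 / m    # Lincoln-Petersen estimate of the true population size
      coverage = n1 / n_hat  # estimated share of the population the frame covers
      print(f"estimated population = {n_hat:.0f}, frame coverage = {coverage:.1%}")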
         
    3. Correcting for coverage error - examples:
      1. Use a dual frame approach for survey estimation; and
      2. Employ post-stratification procedures (see the sketch below).
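
    A minimal post-stratification sketch under assumed external control totals; the strata, weights, and counts are hypothetical rather than taken from the standard. Base weights are ratio-adjusted so that weighted counts match the known population total within each post-stratum.

      # Post-stratification weight adjustment -- toy data; the control
      # totals and base weights are hypothetical.
      population_totals = {"public": 84000, "private": 28000}  # external counts

      # (post_stratum, base_weight) for each responding school
      sample = [("public", 10.0), ("public", 12.0), ("private", 9.0), ("private", 11.0)]

      weighted_counts = {}
      for stratum, weight in sample:
          weighted_counts[stratum] = weighted_counts.get(stratum, 0.0) + weight

      # Adjustment factor: known total / weighted sample count, per post-stratum
      factors = {s: population_totals[s] / weighted_counts[s] for s in weighted_counts}

      adjusted = [(s, w * factors[s]) for s, w in sample]
      for s, w in adjusted:
          print(s, round(w, 1))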


  2. MEASUREMENT ERRORS - DATA COLLECTION
     
    1. Sources of error:
      1. Questionnaire design, content, wording, and instructions;
      2. Length of reference period;
      3. Interview mode(s);
      4. Interviewers - characteristics, training, and supervision;
      5. Respondent rules - self versus proxy respondents;
      6. Use of records by respondents;
      7. Other respondent effects;
      8. Consistency and time-in-sample bias for longitudinal studies;
      9. Responses to related multiple measures within a questionnaire;
      10. Statistics derived for related measures from different questionnaires within a survey system; and
      11. Responses to related measures from multiple respondents in a sampled unit (e.g., parent/student).
         
    2. Evaluation of measurement errors - examples:
      1. Pilot or field test survey and procedures;
      2. Cognitive research methods;
      3. Reinterview studies;
      4. Response variance (see the sketch below);
      5. Randomized experiments;
      6. Behavior coding;
      7. Interviewer variance studies;
      8. Interviewer observation studies;
      9. Record check studies; and
      10. Comparisons of related measures within questionnaires, across respondents, and across questionnaires within a survey system.
         
    3. Correcting for measurement errors - examples:
      1. Use the results from a pilot or field test to modify the questionnaire and/or procedures;
      2. Use input from cognitive research to modify the questionnaire;
      3. Where possible, use results from comparisons of related measures; and
      4. Employ interviewer retraining and feedback.
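
    To make the reinterview and response variance examples in item 2 concrete, a hedged sketch for a single yes/no item follows; the responses are invented. The gross difference rate (GDR) is the share of cases whose original and reinterview answers disagree, and the index of inconsistency scales the GDR by the disagreement expected from the two marginal distributions alone.

      # Reinterview consistency measures for one yes/no item -- toy data.
      original    = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]  # first-interview responses
      reinterview = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]  # reinterview responses

      n = len(original)
      gdr = sum(a != b for a, b in zip(original, reinterview)) / n

      p1 = sum(original) / n     # proportion "yes" in the original interview
      p2 = sum(reinterview) / n  # proportion "yes" in the reinterview
      expected = p1 * (1 - p2) + p2 * (1 - p1)  # disagreement expected by chance

      index_of_inconsistency = gdr / expected
      print(f"GDR = {gdr:.2f}, index of inconsistency = {index_of_inconsistency:.2f}")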
         
  3. DATA PREPARATION ERRORS
     
    1. Sources of error:
      1. Pre-edit coding;
      2. Clerical review;
      3. Data entry; and
      4. Editing.
         
    2. Evaluation of data preparation errors - examples:
      1. Pre-edit coding verification;
      2. Clerical review verification;
      3. Data entry verification (see the sketch below);
      4. Editing verification for manual edits;
      5. Edit rates;
      6. Coder error variance estimates; and
      7. Rating and scoring error variance estimates.
         
    3. Correcting for data preparation errors - examples:
      1. Resolution of differences identified in verification;
      2. Increased training;
      3. Feedback during rating and coding; and
      4. Edits for lack of internal agreement, where appropriate.
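
    A short sketch of two of the checks above: a double-keying comparison (data entry verification) and a simple range edit with its failure rate. The field names, bounds, and records are hypothetical.

      # Double-keying comparison: flag entries where independent keyers disagree.
      first_keying  = {"id1": "042", "id2": "117", "id3": "98"}
      second_keying = {"id1": "042", "id2": "177", "id3": "98"}

      discrepancies = [k for k in first_keying if first_keying[k] != second_keying[k]]
      print("keying discrepancies:", discrepancies,
            f"rate = {len(discrepancies) / len(first_keying):.0%}")

      # Range edit: enrollment must lie within plausible (hypothetical) bounds.
      records = [{"id": "id1", "enrollment": 420}, {"id": "id2", "enrollment": -5}]
      failures = [r["id"] for r in records if not 0 <= r["enrollment"] <= 10000]
      print("edit failures:", failures, f"rate = {len(failures) / len(records):.0%}")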
         
  4. SAMPLING AND ESTIMATION ERRORS
     
    1. Sources of error:
      1. Weighting procedures;
      2. Imputation procedures; and
      3. Sample survey estimation and modeling procedures.
         
    2. Evaluation of sampling and estimation errors - examples:
      1. Variance estimation (see the sketch after this list);
      2. Analysis of the choice of variance estimator;
      3. Indirect estimates for reporting sampling error - use of generalized variance functions, small area estimates, and regression models;
      4. Comparison of final design effects with estimated design effects used in survey planning;
      5. Analysis of the frequency of imputation and the initial and final distributions of variables; and
      6. Analysis of the effect of changes in data processing procedures on survey estimates.
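
    A hedged sketch tying together items 1 and 4 above: a delete-one jackknife variance for a weighted mean, divided by a naive simple-random-sampling variance to approximate a design effect. The values and weights are invented, and real surveys would typically drop whole PSUs or replicate groups rather than single units.

      # Delete-one jackknife variance and a rough design effect -- toy data.
      values  = [3.1, 2.8, 4.0, 3.5, 2.9, 3.7, 3.3, 4.2]
      weights = [10., 12., 9., 11., 10., 13., 8., 12.]
      n = len(values)

      def wmean(vals, wts):
          return sum(v * w for v, w in zip(vals, wts)) / sum(wts)

      theta = wmean(values, weights)  # full-sample weighted mean

      # Drop one unit at a time, re-estimate, and accumulate squared deviations.
      replicates = [wmean(values[:i] + values[i + 1:], weights[:i] + weights[i + 1:])
                    for i in range(n)]
      jk_var = (n - 1) / n * sum((r - theta) ** 2 for r in replicates)

      # Naive SRS variance of the unweighted mean, as a rough comparison base.
      mean = sum(values) / n
      srs_var = sum((v - mean) ** 2 for v in values) / (n - 1) / n
      print(f"estimate = {theta:.3f}, approximate design effect = {jk_var / srs_var:.2f}")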
         
    3. Correcting for estimation errors - examples:
      1. Re-estimation using alternative techniques, e.g., outlier treatments, imputation procedures, and variance estimation procedures (see the sketch below); and
      2. Explore fitting survey distributions to known distributions from other sources to reduce variance and bias.
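
    One way to act on item 1 above is to re-estimate a statistic under alternative imputation procedures and compare the results. The sketch below contrasts overall-mean imputation with a simple within-class hot-deck; the data, classes, and seed are hypothetical.

      # Re-estimation under two imputation procedures -- toy data.
      import random

      random.seed(1)  # reproducible donor draws

      # (class, reported_value_or_None); None marks item nonresponse
      data = [("A", 10.0), ("A", None), ("A", 12.0),
              ("B", 20.0), ("B", 22.0), ("B", None)]

      reported = [v for _, v in data if v is not None]
      overall_mean = sum(reported) / len(reported)

      # Procedure 1: impute the overall respondent mean for every missing value.
      mean_imputed = [v if v is not None else overall_mean for _, v in data]

      # Procedure 2: hot-deck -- borrow a random reported value from the same class.
      donors = {}
      for cls, v in data:
          if v is not None:
              donors.setdefault(cls, []).append(v)
      hot_deck = [v if v is not None else random.choice(donors[cls]) for cls, v in data]

      print(f"mean imputation:     {sum(mean_imputed) / len(mean_imputed):.2f}")
      print(f"hot-deck imputation: {sum(hot_deck) / len(hot_deck):.2f}")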
         
  5. NONRESPONSE ERRORS
     
    1. Sources of error:
      1. Household/school/institution nonresponse;
      2. Person nonresponse; and
      3. Item nonresponse.
         
    2. Evaluation of nonresponse errors - examples (see Standard 4-4):
      1. Comparisons of respondents to known population characteristics from external sources;
      2. Comparisons of respondents and nonrespondents across subgroups on available sample frame characteristics or, in the case of item nonresponse, on available survey data;
      3. Comparisons of characteristics of early and late responding cases (see the sketch at the end of this list);
      4. Follow-up survey of nonrespondents for a reduced set of key variables to compare with data from respondents; and
      5. Descriptions of items not completed, patterns of partial nonresponse, and characteristics of sampling units that fail to respond to certain groups of items.
         
    3. Correcting for nonresponse errors - examples (see Standards 3-2, 4-1, and 4-4):
      1. If response rates are low during initial phases of data collection and funds are not available for intensive follow-up of all nonrespondents, take a random subsample of nonrespondents and use a more intensive data collection method;
      2. Use nonresponse weight adjustments for unit nonresponse (see the sketch below); and
      3. Use item imputations for item nonresponse.
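
    A minimal weighting-class sketch of the unit nonresponse adjustment in item 2; the classes, weights, and response statuses are hypothetical. Within each class, respondent base weights are inflated by the inverse of the weighted response rate so that respondents also represent the class's nonrespondents.

      # Weighting-class nonresponse adjustment -- toy data.
      # (weighting_class, base_weight, responded) for each sampled unit
      cases = [("urban", 10.0, True), ("urban", 10.0, False), ("urban", 12.0, True),
               ("rural", 15.0, True), ("rural", 15.0, False), ("rural", 14.0, False)]

      totals, resp_totals = {}, {}
      for cls, w, responded in cases:
          totals[cls] = totals.get(cls, 0.0) + w
          if responded:
              resp_totals[cls] = resp_totals.get(cls, 0.0) + w

      # Inflate each respondent's weight by the inverse weighted response rate.
      adjusted = [(cls, w * totals[cls] / resp_totals[cls])
                  for cls, w, responded in cases if responded]
      for cls, w in adjusted:
          print(cls, round(w, 2))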
         
    4. Methods for reducing nonresponse - examples (see Standards 3-2, 4-1, and 4-4):
      1. Employ pretest or embedded experiments to determine the efficacy of incentives to improve response rates;
      2. Use internal reporting systems to monitor nonresponse during collection;
      3. Use follow-up strategies for nonrespondents to encourage participation;
      4. Target a set of key data items for collection with unwilling respondents; and
      5. For ongoing surveys, consider separate research studies to examine alternative methods of improving response rates.
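
    To illustrate the early-versus-late comparison in the evaluation list above, a rough sketch follows; late respondents are sometimes treated as a proxy for nonrespondents, so a gap on a key estimate flags potential nonresponse bias. The cutoff and data are invented.

      # Early vs. late respondent comparison on one key item -- toy data.
      # (days_to_respond, key_item_value) per responding unit
      responses = [(3, 52.0), (5, 48.0), (8, 61.0), (21, 44.0), (30, 40.0), (35, 43.0)]

      EARLY_CUTOFF_DAYS = 14  # hypothetical split between early and late
      early = [v for d, v in responses if d <= EARLY_CUTOFF_DAYS]
      late  = [v for d, v in responses if d > EARLY_CUTOFF_DAYS]

      early_mean = sum(early) / len(early)
      late_mean = sum(late) / len(late)
      print(f"early mean = {early_mean:.1f}, late mean = {late_mean:.1f}, "
            f"gap = {early_mean - late_mean:.1f}")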