Status of Education Reform in Public Elementary and Secondary Schools: Teachers' Perspective
NCES: 1999045
February 1999

Appendix A — Survey Methodology and Data Reliability

A two-stage sampling process was used to select teachers for the FRSS Public School Teacher Survey on Education Reform. At the first stage, a stratified sample of 758 schools was drawn from the 1993-94 NCES Common Core of Data (CCD) public school universe file, which included over 77,000 public elementary, middle, and high schools. Excluded from the frame were special education, vocational, and alternative/other schools, schools in the territories, and schools with a highest grade lower than grade one.
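
To make the frame construction concrete, the sketch below shows how such exclusions might be applied to a school universe file. It is only an illustration: the field names and category values are hypothetical, not the actual CCD record layout.

    # A minimal sketch of applying the frame exclusions described above.
    # Field names and category values are hypothetical, not the CCD layout.
    def in_frame(school):
        """Return True if a school belongs to the sampling frame."""
        if school["school_type"] in ("special education", "vocational",
                                     "alternative/other"):
            return False
        if school["in_territory"]:          # exclude schools in the territories
            return False
        if school["highest_grade"] < 1:     # highest grade lower than grade one
            return False
        return True

    universe = [
        {"school_type": "regular", "in_territory": False, "highest_grade": 6},
        {"school_type": "vocational", "in_territory": False, "highest_grade": 12},
    ]
    frame = [s for s in universe if in_frame(s)]   # keeps only the first record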

Sample Selection

The sample was stratified by instructional level (elementary, middle, secondary/combined), poverty status (as defined by the percent of students eligible for free or reduced-price lunch: less than 35 percent; 35 to 49 percent; 50 to 74 percent; 75 percent or greater), school size (less than 300; 300 to 499; 500 to 999; 1,000 to 1,499; and 1,500 or more), and locale (city, urban fringe, town, rural). The sample was allocated to the major strata in a manner expected to be reasonably efficient for national estimates, as well as for estimates for major subclasses.
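
The mechanics of such a stratified draw can be sketched as follows. The stratifiers mirror those listed above, but the field names are hypothetical and the per-stratum allocation is a placeholder; the report does not publish the actual sample sizes by stratum.

    import random

    def stratum_key(school):
        # Stratifiers described above; field names are hypothetical.
        return (school["level"], school["poverty_band"],
                school["size_band"], school["locale"])

    def stratified_sample(frame, allocation, seed=1996):
        """Draw a simple random sample of the allocated size within each
        stratum; `allocation` maps a stratum key to its sample size."""
        rng = random.Random(seed)
        by_stratum = {}
        for school in frame:
            by_stratum.setdefault(stratum_key(school), []).append(school)
        sample = []
        for key, n in allocation.items():
            sample.extend(rng.sample(by_stratum[key], n))
        return sample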

Teacher Sampling

The 758 schools in the sample were contacted by telephone during spring 1996 and asked to produce a list of eligible teachers for sampling purposes. Eligible teachers included all persons assigned to the school full time and teaching at least one class of children in grades 1-12. Excluded from the list were principals, itinerant teachers (unless at their home-based school), prekindergarten or kindergarten teachers, substitute teachers, teachers' aides, and unpaid volunteers. Using a list of randomly generated line numbers, a telephone interviewer specified the sequence numbers of the teachers on the list who were to be included in the survey. On average, one to two teachers were selected per school. The survey data were weighted to reflect these sampling rates (probability of selection) and were adjusted for nonresponse.
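
The within-school selection amounts to drawing random line numbers from each school's roster and weighting each selected teacher by the inverse of his or her selection probability. A minimal sketch, with hypothetical names:

    import random

    def select_teachers(roster, n, seed=0):
        """Pick n distinct line numbers from a school's teacher list, as the
        randomly generated line numbers read by the interviewer would."""
        rng = random.Random(seed)
        lines = sorted(rng.sample(range(1, len(roster) + 1), n))
        # Base within-school weight: inverse of the selection probability n/N.
        weight = len(roster) / n
        return [(line, roster[line - 1], weight) for line in lines]

    # e.g., select_teachers(["Adams", "Baker", "Chen", "Diaz", "Evans"], 2)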

Response Rates

At the first stage of sampling, 5 of the 758 schools were found to be out of the scope of the study. A response rate of 93.9 percent was obtained for the remaining 753 schools. In April 1996, questionnaires (Appendix C) were mailed to 1,445 teachers at their schools. Telephone followup of nonresponding teachers was initiated in early May and temporarily halted in late June because of school closings for summer vacation. Followup for nonresponse was resumed in September 1996. Of the sampled teachers, 9 were found to be out of scope. Data collection was completed on October 16, with a teacher response rate of 89.7 percent (1,288 of the 1,436 eligible teachers; table 12). The overall study response rate was 84.2 percent (the 93.9 percent school response rate multiplied by the 89.7 percent teacher response rate). The weighted overall response rate was 85.9 percent (the 94.9 percent weighted school response rate multiplied by the 90.5 percent weighted teacher response rate). Item nonresponse rates ranged from 0.0 to 4.9 percent, with rates under 1.0 percent for most items.
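
The overall rates follow directly from the component rates quoted above:

    teacher_rate = 1288 / 1436     # 0.897: the 89.7 percent teacher-level rate
    school_rate = 0.939            # school-level response rate
    overall = school_rate * teacher_rate     # 0.842 -> 84.2 percent overall

    weighted_overall = 0.949 * 0.905         # 0.859 -> 85.9 percent weighted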

Sampling and Nonsampling Errors

The response data were weighted to produce national estimates. The weights were designed to adjust for the variable probabilities of selection and differential nonresponse. A final poststratification adjustment was made so that the weighted teacher counts equaled the corresponding estimated teacher counts from the CCD frame within cells defined by instructional level, poverty status, school size, and locale. The findings in this report are estimates based on the sample selected and, consequently, are subject to sampling variability. The survey estimates are also subject to nonsampling errors that can arise because of nonobservation (nonresponse or noncoverage) errors, errors of reporting, and errors made in the collection of data. These errors can sometimes bias the data. Nonsampling errors may include such problems as differences in the respondents' interpretations of the meaning of the questions; memory effects; misrecording of responses; incorrect editing, coding, and data entry; differences related to the particular time the survey was conducted; or errors in data preparation. While general sampling theory can be used in part to determine how to estimate the sampling variability of a statistic, nonsampling errors are not easy to measure and, for measurement purposes, usually require that an experiment be conducted as part of the data collection procedures or that data external to the study be used.
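
The poststratification step is a ratio adjustment within each cell. The sketch below shows the idea under simplified assumptions (a single adjustment factor per cell; the function and variable names are illustrative, not the actual weighting code used for the survey):

    def poststratify(weights, cells, frame_totals):
        """Scale each respondent's weight so that, within every cell, the
        weighted teacher count matches the frame total for that cell.
        cells[i] is respondent i's cell (level, poverty, size, locale)."""
        weighted_counts = {}
        for w, c in zip(weights, cells):
            weighted_counts[c] = weighted_counts.get(c, 0.0) + w
        return [w * frame_totals[c] / weighted_counts[c]
                for w, c in zip(weights, cells)]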

To minimize the potential for nonsampling errors, the questionnaire was pretested with teachers similar to those who completed the survey. During the design of the survey and the survey pretest, an effort was made to check for consistency of interpretation of questions and to eliminate ambiguous terms. The questionnaire and instructions were extensively reviewed by the National Center for Education Statistics, the Office of Educational Research and Improvement, and the Planning and Evaluation Service. Manual and machine editing of the questionnaire responses were conducted to check the data for accuracy and consistency. Cases with missing or inconsistent items were recontacted by telephone. Imputations for item nonresponse were not implemented, as item nonresponse rates were very low (table 13). Data were keyed with 100 percent verification.

Variances

The standard error is a measure of the variability of an estimate due to sampling. It indicates the variability of a sample estimate that would be obtained from all possible samples of a given design and size. Standard errors are used as a measure of the precision expected from a particular sample. If all possible samples were surveyed under similar conditions, intervals of 1.96 standard errors below to 1.96 standard errors above a particular statistic would include the true population parameter being estimated in about 95 percent of the samples. This is what is called a 95 percent confidence interval. For example, the estimated percentage of teachers reporting that they understand the concept of new higher standards very well is 42 percent, and the estimated standard error is 2.1 percentage points. The 95 percent confidence interval for the statistic extends from [42 - (2.1 times 1.96)] to [42 + (2.1 times 1.96)], or from 37.884 to 46.116 percent.
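
In code, the interval is simply the estimate plus or minus 1.96 standard errors:

    estimate, se, z = 42.0, 2.1, 1.96
    half_width = z * se                        # 4.116 percentage points
    low, high = estimate - half_width, estimate + half_width
    # low is approximately 37.884; high is approximately 46.116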

Estimates of standard errors were computed using a technique known as jackknife replication. As with any replication method, jackknife replication involves constructing a number of subsamples (replicates) from the full sample and computing the statistic of interest for each replicate. The mean square error of the replicate estimates around the full-sample estimate provides an estimate of the variance of the statistic. To construct the replications, the full sample was divided into 40 subsamples, which were then dropped, one at a time, to define 40 jackknife replicates. A proprietary computer program (WESVAR), available at Westat, Inc., was used to calculate the estimates of standard errors.
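
A minimal sketch of the variance computation, assuming a simple delete-one-group (JK1-style) jackknife with equal-sized groups; the actual WESVAR replicate weighting is more involved:

    def jackknife_se(replicate_estimates, full_estimate):
        """JK1 variance: (G - 1)/G times the sum of squared deviations of
        the G replicate estimates from the full-sample estimate."""
        G = len(replicate_estimates)
        variance = (G - 1) / G * sum(
            (r - full_estimate) ** 2 for r in replicate_estimates)
        return variance ** 0.5

    # With 40 replicates, each replicate estimate is recomputed with one of
    # the 40 subsamples dropped and the remaining weights adjusted upward.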


Background Information

The survey was performed under contract with Westat, Inc., using the NCES Fast Response Survey System (FRSS). Westat's Project Director was Elizabeth Farris, and the Survey Managers were Debbie Alexander and Sheila Heaviside. Anjali Pandit was the Research Assistant. Judi Carpenter and Shelley Burns were the NCES Project Officers. The data were requested by the Office of Educational Research and Improvement (OERI) and the Planning and Evaluation Service (PES), U.S. Department of Education.

This report was reviewed by the following individuals:

Outside NCES

  • Daphne Hardcastle, PES
  • Nancy Loy, OERI
  • Valena Plisko, PES
  • Andrew Porter, University of Wisconsin-Madison
  • Ramsey Selden, American Institutes for Research

Inside NCES

  • Michael Cohen
  • Mary Frase
  • Arnold Goldstein
  • Elvie Germino Hausken

For more information about the Fast Response Survey System, visit http://nces.ed.gov.

Terms Defined on the Survey Questionnaire

Disability

An impairment that substantially limits one or more of the major life activities of an individual.

New higher standards/high standards

Recent and current education reform activities that seek to establish more challenging expectations for student achievement and performance, such as the National Council of Teachers of Mathematics standards for mathematics, state- or local-initiated standards in various subjects, and those outlined in Goals 2000.

Parent/school compact

Voluntary written agreements between the school and parents on what each will do to help students succeed in school.

SSI

National Science Foundation's Statewide Systemic Initiatives program. For this program, NSF has cooperative agreements with states to undertake comprehensive initiatives for education reform in science, mathematics, and technology.

USI

National Science Foundation's Urban Systemic Initiatives program. For this program, NSF has cooperative agreements with urban areas to undertake comprehensive initiatives for education reform in science, mathematics, and technology.

Classification Variables

  • Instructional level (elementary, middle, high school)
  • Geographic region (Northeast, Southeast, Central, West)
  • Enrollment size (less than 500, 500-999, 1,000 or more)
  • Locale (city, urban fringe, town, rural)
  • Percent of students eligible for free or reduced-price lunch (less than 35 percent, 35-49 percent, 50-74 percent, 75 percent or more)
  • Minority enrollment (less than 6 percent, 6-20 percent, 21-49 percent, 50 percent or more)
  • Number of years teaching (less than 10, 10 to 20, 21 or more)
  • Main subject area taught (self-contained, mathematics, science, social studies, and English/language arts)
