Arts Education in Public Elementary and Secondary Schools: 1999-2000
NCES 2002-131
May 2002

Appendix A - Survey Methodology

List of Appendix Tables

  • Table A-1: Number and percentage distribution of all regular public elementary schools in the study, and the estimated number and percentage distribution in the nation, by school characteristics: Fall 1999

  • Table A-2: Number and percentage distribution of all regular public secondary schools in the study, and the estimated number and percentage distribution in the nation, by school characteristics: Fall 1999

  • Table A-3: Number and percentage distribution of all regular public elementary school music specialists in the study, and the estimated number and percentage distribution in the nation, by school characteristics: Spring 2000

  • Table A-4: Number and percentage distribution of all regular public elementary school visual arts specialists in the study, and the estimated number and percentage distribution in the nation, by school characteristics: Spring 2000

  • Table A-5: Number and percentage distribution of all regular public elementary school self-contained classroom teachers in the study, and the estimated number and percentage distribution in the nation, by school characteristics: Spring 2000

Fast Response Survey System

The Fast Response Survey System (FRSS) was established in 1975 by the National Center for Education Statistics (NCES), U.S. Department of Education. FRSS is designed to collect small amounts of issue-oriented data with minimal burden on respondents and within a relatively short timeframe. Surveys are generally limited to three pages of questions, with a response burden of about 30 minutes per respondent. Sample sizes are relatively small (usually about 1,000 to 1,500 respondents per survey) so that data collection can be completed quickly. Data are weighted to produce national estimates of the sampled education sector. The sample size permits limited breakouts by classification variables. However, as the number of categories within the classification variables increases, the sample size within categories decreases, which results in larger sampling errors for the breakouts by classification variables. FRSS collects data from state education agencies, local education agencies, public and private elementary and secondary schools, public school teachers, and public libraries.

Sample Selection

The samples for the school surveys consisted of 753 elementary school and 755 secondary school principals of regular public schools. Included in the mailing to the elementary school principals was a form for collecting a list of music, visual arts, and self-contained classroom teachers. The samples were selected using the 1999-2000 Schools and Staffing Survey (SASS) universe file, which was created from the 1997-98 NCES Common Core of Data (CCD) Public School Universe File. The sample was designed to minimize overlap with other large NCES studies that were being conducted concurrently. The sampling frame included 81,405 regular public schools, consisting of 52,925 regular elementary schools, 27,055 regular secondary schools, and 1,425 regular combined schools. The frame included the 50 states and the District of Columbia and excluded special education, vocational, and alternative/other schools; schools in the U.S. territories; and schools whose highest grade was lower than grade 1 or that were ungraded. A school was defined as an elementary school if its lowest grade was grade 6 or lower and its highest grade was grade 8 or lower. A secondary school was defined as one whose lowest grade was grade 7 or higher. Combined schools were defined as those having grades higher than grade 8 and lower than grade 7.

Separate stratified samples of public elementary and secondary schools were selected to receive the appropriate survey instrument. Combined schools were given a chance of selection for both surveys, and if selected were asked to complete only one of the survey instruments. The sampling frame was stratified by instructional level (elementary, secondary, and combined) and school size (less than 300, 300 to 499, 500 to 999, 1,000 to 1,499, and 1,500 or more). Within the primary strata, schools were sorted by geographic region (Northeast, Southeast, Central, and West), locale (city, urban fringe, town, and rural), and percent minority enrollment (less than 5 percent, 5 to 19 percent, 20 to 49 percent, and 50 percent or more) to produce additional implicit stratification. A sample of 753 elementary schools and 755 secondary schools was then selected from the sorted frame with conditional probabilities that accounted for the selection of schools for other NCES studies, such as the National Assessment of Educational Progress (NAEP), the Early Childhood Longitudinal Study-Kindergarten Cohort (ECLS-K), and the Schools and Staffing Survey (SASS). The conditional probabilities were designed to minimize overlap with the samples selected for the other studies while ensuring that the overall probabilities of selection were roughly proportionate to the aggregate square root of the enrollment of schools in the stratum; a sketch of this style of selection appears below.

Respondents at each sampled elementary school were asked to send lists of the school's teachers (music specialists, visual arts specialists, and classroom teachers), from which a teacher sampling frame was prepared. The list collection instructions asked respondents to take their complete roster of teachers, identify music and visual arts specialists, and cross off teachers in the following categories: preschool teachers, teachers' aides, bilingual/ESL teachers, special education teachers, and non-full-time classroom teachers. Of the 753 sampled elementary schools, 634 provided a teacher list. Operations staff examined the records from the 634 schools that provided teacher lists and found that 85 percent had provided complete rosters of teachers, while the remaining 15 percent provided only a selected list of teachers. About three-fourths (73 percent) of the lists were provided by principals, and 5 percent by assistant principals. The remaining lists were provided by secretaries or teachers, or the provider was unspecified.
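To make the selection mechanics concrete, the following sketch (in Python) illustrates systematic PPS (probability proportional to size) selection in which the measure of size is the square root of enrollment. It is a minimal illustration under assumed field names (id, enrollment), not the actual FRSS selection program, and it omits the conditional-probability adjustments used to minimize overlap with NAEP, ECLS-K, and SASS.

import math
import random

def pps_sqrt_sample(schools, n):
    # Select n schools with probability roughly proportional to the
    # square root of enrollment, using systematic PPS selection on a
    # frame already sorted by the implicit stratification variables.
    # `schools` is a list of dicts with assumed keys "id" and
    # "enrollment".
    sizes = [math.sqrt(s["enrollment"]) for s in schools]
    total = sum(sizes)
    interval = total / n                     # sampling interval
    start = random.uniform(0, interval)      # random start point
    points = [start + i * interval for i in range(n)]

    sample, cumulative, j = [], 0.0, 0
    for school, size in zip(schools, sizes):
        cumulative += size
        while j < n and points[j] <= cumulative:
            sample.append(school)
            j += 1
    return sample

Using the square root of enrollment as the measure of size gives larger schools higher selection probabilities while damping the tendency of strict proportional-to-enrollment sampling to concentrate the sample in the very largest schools.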

Exactly one teacher was randomly selected from each of the following groups: classroom teachers, full- or part-time music specialists (if available at the school), and full- or part-time visual arts specialists (if available). A total of 1,614 teachers were sampled: 559 music specialists, 422 visual arts specialists, and 633 self-contained classroom teachers. On average, 2.6 teachers were sampled per elementary school.

Only elementary-level teachers, and not secondary-level teachers, were sampled, for two reasons. First, given the scope limitations of FRSS surveys, it was necessary to limit the teacher survey to either the elementary or the secondary level. Second, data collection at the secondary level would be constrained by the fact that arts instruction there is provided primarily through elective courses and is often taught by multiple specialists in each of the four arts subjects. In contrast, at the elementary level, arts instruction is usually limited to music and visual arts and is part of a standard curriculum in which all students participate. For the teacher-level surveys, only music specialists, visual arts specialists, and classroom teachers were sampled. The number of schools with dance and drama/theater teachers is small, so it was not feasible to select adequate samples of those teachers from the lists collected from the schools.

School-Level Respondents and Response Rates

Questionnaires and cover letters for the elementary and secondary school principal surveys were mailed in mid-September of 1999. Included in the elementary school mailing was a form for preparing a list of classroom teachers and arts specialists at the school. The cover letters indicated that the survey and list collection were designed to be completed by the school's principal.

Telephone follow up for those who did not respond to the initial questionnaire mailing was conducted from mid-October 1999 through mid-February 2000 for secondary principals, and through mid-March 2000 for elementary principals. Of the 755 secondary schools selected for the sample, 3 were found to be out of the scope of the survey. Of the 753 sampled elementary schools, 18 were out of scope for the survey, while 20 were out of scope for the list collection. The discrepancy between the number of out-of-scope schools for the elementary survey and the list collection was due to two schools with grade spans of 5 through 8. Although these schools were eligible for the survey, they had no teachers in self-contained classrooms and so could not provide teacher lists appropriate for this study.

This left a total of 752 eligible secondary school principals, 735 eligible elementary school principals, and 733 eligible respondents for the list collection. Completed questionnaires were received from 686 secondary school principals and from 640 elementary school principals. Completed lists were received from 634 elementary schools. The weighted response rates were 87.8 percent for the elementary school survey, 91.7 percent for the secondary school survey, and 87.3 percent for the teacher list collection.

The item nonresponse rates for individual questionnaire items appearing in the secondary school survey rose above 3 percent for 14 items, with 10 of the 14 questions ranging from 4.7 to 6.2 percent item nonresponse. The remaining four items involved the same question asked across the four arts subjects: what percentage of the budget designated for that subject came from outside sources. A possible explanation of the high rate of item nonresponse for these four items (8.7 to 18.9 percent) is that some respondents did not have ready access to this information, or that the information was not available.

All but seven of the questions appearing in the elementary school survey had item nonresponse rates of 3 percent or less. Six of the seven questions ranged between 4.7 and 6.6 percent. The question with the highest nonresponse rate (12.8 percent) was the same question that caused high item nonresponse in the secondary school survey, i.e., the percentage of the budget designated for (in this case) drama/theatre that came from outside sources. Again, a likely explanation for the high item nonresponse rate is that some survey respondents did not have ready access to this information or that the information was not available.

Teacher-Level Respondents and Response Rates

Questionnaires and cover letters were mailed to the sampled music specialists, visual arts specialists, and regular classroom teachers in late February 2000. The respondents were mailed one of three questionnaires that were tailored to each type of teaching assignment (see appendix C). Each cover letter indicated that the survey was designed to be completed by a music specialist, a visual arts specialist, or a regular classroom teacher.

Telephone follow up for questionnaire nonresponse was conducted from mid-April 2000 through late June 2000. Of the 1,614 teachers selected for the sample, 559 were music specialists, 422 were visual arts specialists, and 633 were regular classroom teachers. Of these, 31 music specialists, 36 visual arts specialists, and 50 regular classroom teachers were out of the scope of the survey. For both the music and visual arts surveys, respondents were out of scope because they were not employed at the sampled school at the time of the study, or did not primarily teach music or visual arts at an elementary school. Respondents found to be out of scope for the classroom teacher survey included non-self-contained classroom teachers, such as special education or math teachers; teachers teaching grades beyond the grade scope of the survey; teachers who no longer taught at the school; and part-time teachers. Other reasons included teachers on long-term sick leave, long-term substitutes, and rotating teachers.

A total of 528 eligible music specialists, 386 eligible visual arts specialists, and 583 eligible classroom teachers were left in the sample. Completed questionnaires were received from 453 music specialists, 331 visual arts specialists, and 497 regular classroom teachers. The weighted response rates were 84.5 percent for the music specialist survey, 83.4 percent for the visual arts specialist survey, and 85.6 percent for the classroom teacher survey. The overall weighted response rates were computed by multiplying the weighted response rate for the teacher list collection (87.3 percent) by the weighted response rates of the particular surveys. The overall weighted response rate for the music specialist survey was 73.8 percent (87.3 percent multiplied by 84.5 percent), 72.8 percent for the visual arts specialist survey (87.3 percent multiplied by 83.4 percent), and 74.7 percent for the classroom teacher survey (87.3 percent multiplied by 85.6 percent).
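As a check on the arithmetic, the overall rates can be reproduced directly from the two component rates reported above; a minimal sketch in Python:

# Overall weighted response rate = list-collection rate x survey rate,
# using the figures reported in the text.
list_rate = 0.873
survey_rates = {"music specialists": 0.845,
                "visual arts specialists": 0.834,
                "classroom teachers": 0.856}

for survey, rate in survey_rates.items():
    print(f"{survey}: {list_rate * rate:.1%}")
# music specialists: 73.8%
# visual arts specialists: 72.8%
# classroom teachers: 74.7%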

Item nonresponse for the three elementary teacher surveys was generally low, with only a few items over 3 percent. Six items from the music specialist survey had item nonresponse rates above 3 percent (ranging from 3.7 to 5.0 percent). For the visual arts specialist survey, item nonresponse ranged from 0 to 3 percent. Seven items from the self-contained classroom teacher survey had item nonresponse rates above 3 percent (ranging from 5.4 to 15.6 percent). All items with more than 3 percent item nonresponse dealt with the extent to which teachers felt that participating in specific professional development activities improved their teaching.

Sampling and Nonsampling Errors

The responses to the five surveys were weighted to produce national estimates (see tables A-1 through A-5). The weights were designed to adjust for the variable probabilities of selection and differential nonresponse. The findings in this report are estimates based on the samples selected and, consequently, are subject to sampling variability.
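The report does not publish its weighting formulas, but the general form of a weighting-class nonresponse adjustment can be sketched as follows. The sketch below (in Python) assumes each sampled unit carries a base weight (the inverse of its selection probability), an assumed weighting class, and a respondent flag; within each class, the base weight of nonrespondents is reallocated to respondents. The field names are hypothetical.

from collections import defaultdict

def nonresponse_adjusted_weights(units):
    # Generic weighting-class nonresponse adjustment: within each
    # class, respondents absorb the base weight of nonrespondents, so
    # that the adjusted weights of the respondents sum to the base
    # weights of all sampled units in the class.
    base_total = defaultdict(float)
    resp_total = defaultdict(float)
    for u in units:
        base_total[u["class"]] += u["base_weight"]
        if u["respondent"]:
            resp_total[u["class"]] += u["base_weight"]

    adjusted = {}
    for u in units:
        if u["respondent"]:
            factor = base_total[u["class"]] / resp_total[u["class"]]
            adjusted[u["id"]] = u["base_weight"] * factor
        else:
            adjusted[u["id"]] = 0.0
    return adjusted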

The survey estimates are also subject to nonsampling errors that can arise because of nonobservation (nonresponse or noncoverage) errors, errors of reporting, and errors made in data collection. These errors can sometimes bias the data. Nonsampling errors may include such problems as misrecording of responses; incorrect editing, coding, and data entry; differences related to the particular time the survey was conducted; or errors in data preparation. While general sampling theory can be used in part to determine how to estimate the sampling variability of a statistic, nonsampling errors are not easy to measure and, for measurement purposes, usually require that an experiment be conducted as part of the data collection procedures or that data external to the study be used.

To minimize the potential for nonsampling errors, the questionnaires were pretested with respondents like those who completed the survey. During the design of the surveys and the survey pretests, an effort was made to check for consistency of interpretation of questions and to eliminate ambiguous items. The questionnaires and instructions were extensively reviewed by the National Center for Education Statistics (NCES); the Office of Educational Research and Improvement (OERI), U.S. Department of Education; and the National Endowment for the Arts (NEA). Manual and machine editing of the questionnaire responses were conducted to check the data for accuracy and consistency. Cases with missing or inconsistent items were recontacted by telephone. Data were keyed with 100 percent verification.

Variances

The standard error is a measure of the variability of estimates due to sampling. It indicates the variability of a sample estimate that would be obtained from all possible samples of a given design and size. Standard errors are used as a measure of the precision expected from a particular sample. If all possible samples were surveyed under similar conditions, intervals of 1.96 standard errors below to 1.96 standard errors above a particular statistic would include the true population parameter being estimated in about 95 percent of the samples. This is a 95 percent confidence interval. For example, the estimated percentage of music specialists who reported that music was included in other academic subjects was 47.2, and the estimated standard error was 2.8.

The 95 percent confidence interval for the statistic extends from 47.2 - (1.96 × 2.8) to 47.2 + (1.96 × 2.8), or from 41.7 to 52.7 percent. Tables of standard errors for each table and figure in the report are provided in appendix B. Estimates of standard errors were computed using a technique known as jackknife replication. As with any replication method, jackknife replication involves constructing a number of subsamples (replicates) from the full sample and computing the statistic of interest for each replicate. The mean square error of the replicate estimates around the full-sample estimate provides an estimate of the variance of the statistic. To construct the replications, 50 stratified subsamples of the full sample were created and then dropped one at a time to define 50 jackknife replicates. A computer program (WesVar 4.0) was used to calculate the estimates of standard errors. WesVar 4.0 is a stand-alone Windows application that computes sampling errors for a wide variety of statistics (totals, percents, ratios, log-odds ratios, general functions of estimates in tables, linear regression parameters, and logistic regression parameters).
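The computation WesVar performs can be outlined in a few lines. The sketch below is a minimal drop-one-group jackknife standard error in the JK1 form consistent with the description above; it assumes the 50 replicate estimates have already been computed and is not the WesVar implementation itself.

import math

def jackknife_se(full_estimate, replicate_estimates):
    # Standard error from drop-one-group jackknife replicates: the
    # scaled sum of squared deviations of the replicate estimates
    # around the full-sample estimate (JK1 form for R groups).
    r = len(replicate_estimates)
    ss = sum((est - full_estimate) ** 2 for est in replicate_estimates)
    return math.sqrt((r - 1) / r * ss)

# With the figures cited above (estimate 47.2 percent, standard error
# 2.8), the 95 percent confidence interval is
# 47.2 - 1.96 * 2.8 = 41.7 to 47.2 + 1.96 * 2.8 = 52.7 percent.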

The test statistics used in the analysis were calculated using the jackknife variances and thus appropriately reflected the complex nature of the sample design. In particular, an adjusted chi-square test using Satterthwaite's approximation to the design effect was used in the analysis of the two-way tables. Finally, Bonferroni adjustments were made to control for multiple comparisons where appropriate. For example, for an "experiment-wise" comparison involving g pairwise comparisons, each difference was tested at the 0.05/g significance level to account for the fact that g differences were tested simultaneously.

The Bonferroni adjustment results in a more conservative critical value being used when judging statistical significance. This means that comparisons that would have been significant with a critical value of 1.96 may not be significant with the more conservative critical value. For example, the critical value for comparisons between any two of the four categories of poverty concentration is 2.64, rather than 1.96. This means that there must be a larger difference between the estimates being compared for there to be a statistically significant difference.
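For concreteness, the critical value of 2.64 cited above can be reproduced with a short computation using only the Python standard library; g = 6 is the number of pairwise comparisons among the four poverty-concentration categories.

from math import comb
from statistics import NormalDist

alpha = 0.05
g = comb(4, 2)  # 6 pairwise comparisons among 4 poverty categories
# Each two-sided test is run at alpha / g, so the critical value is
# the upper alpha / (2 * g) quantile of the standard normal.
critical = NormalDist().inv_cdf(1 - alpha / (2 * g))
print(round(critical, 2))  # 2.64, the value cited above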

Definitions of Analysis Variables

School instructional level-Schools were classified according to their grade span in the 1997-98 Common Core of Data (CCD) frame:

Elementary school-had grade 6 or lower and no grade higher than grade 8.

Secondary school-had no grade lower than grade 7 and had grade 7 or higher.

School enrollment size for the elementary school survey and the teacher surveys-total number of students enrolled on October 1, 1999, based on responses to question 17A on the elementary school survey:

Less than 300 students
300 to 599 students
600 or more students

School enrollment size for secondary school survey-total number of students enrolled on October 1, 1999, based on responses to question 21A on the secondary school survey:

Less than 400 students
400 to 999 students
1,000 or more students

Locale-as defined in the CCD:

City-a large or mid-size central city of a Metropolitan Statistical Area (MSA).

Urban Fringe-a place within an MSA of a central city, but not primarily its central city, and defined as urban by the Census Bureau.

Town-an incorporated place not within an MSA, with a population of 2,500 or more.

Rural-any incorporated place with a population density of less than 1,000 per square mile and designated as rural by the Census Bureau.

Geographic region:

Northeast-Connecticut, Delaware, District of Columbia, Maine, Maryland, Massachusetts, New Hampshire, New Jersey, New York, Pennsylvania, Rhode Island, and Vermont.

Southeast-Alabama, Arkansas, Florida, Georgia, Kentucky, Louisiana, Mississippi, North Carolina, South Carolina, Tennessee, Virginia, and West Virginia.

Central-Illinois, Indiana, Iowa, Kansas, Michigan, Minnesota, Missouri, Nebraska, North Dakota, Ohio, South Dakota, and Wisconsin.

West-Alaska, Arizona, California, Colorado, Hawaii, Idaho, Montana, Nevada, New Mexico, Oklahoma, Oregon, Texas, Utah, Washington, and Wyoming.

Percent minority enrollment-The percent of students enrolled in the school whose race or ethnicity is classified as one of the following: American Indian or Alaskan Native, Asian or Pacific Islander, Black non-Hispanic, or Hispanic, based on data in the 1997-98 CCD.

5 percent or less
6 to 20 percent
21 to 50 percent
More than 50 percent

Percent of students at the school eligible for free or reduced-price lunch-This was based on responses to question 19 on the elementary school survey and question 23 on the secondary school survey; if it was missing from the questionnaire, it was obtained from the 1997-98 CCD frame. This item served as the measure of the concentration of poverty at the school.

Less than 35 percent
35 to 49 percent
50 to 74 percent
75 percent or more

It is important to note that some of the school characteristics used for independent analyses may also be related to each other. For example, enrollment size and locale are related, with large schools tending to be in cities and small schools tending to be in towns and rural areas. Similarly, poverty concentration and minority enrollment are related, with schools with high minority enrollment also more likely to have a high concentration of poverty. Other relationships between analysis variables may exist. Because of the relatively small samples used in this study, it is difficult to separate the independent effects of these variables. These relationships, however, should be considered in the interpretation of the data presented in this report.

Comparisons with the 1994 Arts Education Study

The National Center for Education Statistics conducted a school-level study of arts education in public elementary and secondary schools in 1994, using the Fast Response Survey System (see appendix C for the 1994 survey questionnaires). Although many of the questions on the 1999 elementary and secondary surveys asked for information similar to that collected in 1994, the wording or organization of the questions differed to the extent that direct comparisons were not possible in this report.

The questions in the surveys were changed for several reasons. First, some items in the 1999 surveys were changed to model the arts assessment items of the 1997 National Assessment of Educational Progress (NAEP). Second, more space was available in the 1999 surveys, which allowed for certain questions to be elaborated upon, thereby making them more complex, but less comparable. Lastly, some of the 1994 questionnaire items contained limitations or wording problems that required that those questions be altered.

Although most items were not comparable across survey years, two were determined to be comparable between the 1994 and 1999 elementary questionnaires, and three were comparable between the 1994 and 1999 secondary questionnaires. The first comparable items between the 1994 and 1999 elementary surveys concerned whether schools had district-level curriculum guides in music and visual arts. No change was found between 1994 and 1999 in the percentage of elementary schools that indicated that their districts provided curriculum guides. Eighty-two percent of schools reported the availability of curriculum guides for music in 1994, as did 81 percent in 1999 (not shown in tables). In both 1994 and 1999, 78 percent of elementary schools reported curriculum guides for visual arts, also showing no change. The second comparison involved whether the district had a curriculum specialist or program coordinator in the arts. The percentage of schools with a district-level curriculum specialist or program coordinator in the arts increased from 38 percent in 1994 to 56 percent in 1999 (not shown in tables).

There were three possible comparisons between the 1994 and 1999 secondary surveys. The first concerned the number of different courses that schools offered in the arts. In 1994, the mean number of music courses offered to secondary school students was 4.5; in 1999, the data show an increase to a mean of 5.0. For the other three comparable subjects, visual arts, dance, and drama/theater, schools offered comparable numbers of courses in the two survey years. For visual arts, the mean number of courses was 4.7 in 1994 and 5.0 in 1999 (not shown in tables). In 1994, the mean number of courses offered in both dance and drama/theater was 2.1; in 1999, the mean was 2.2 for dance and 2.3 for drama/theater.

As with elementary schools, the availability of district-level curriculum guides in public secondary schools offering music and visual arts instruction did not change between 1994 and 1999. In 1994, 82 percent of secondary schools reported that their district provided curriculum guides in music, and 83 percent reported curriculum guides for visual arts. In 1999, 86 percent of schools had curriculum guides for music and 87 percent had them for visual arts (not shown in tables). The availability of a district curriculum guide for schools that offered dance and drama/theatre also remained unchanged in secondary schools (66 percent in 1994 and 68 percent in 1999 for dance, and 75 percent in both years for drama/theatre). Lastly, there was an increase in the percentage of school principals reporting a district-level curriculum specialist or program coordinator in the arts, from 36 percent in 1994 to 53 percent in 1999 (not shown in tables).

Background Information

The survey was performed by Westat, using the FRSS, under contract to the NCES. Westat's project director was Elizabeth Farris, and the survey manager was Nancy Carey. The operations manager was Debbie Alexander, and the research assistant for the project was Rebecca Porch. Shelley Burns was the NCES Project Officer. The data were requested by the National Endowment for the Arts (NEA) and the Office of Educational Research and Improvement (OERI), U.S. Department of Education.

This report was reviewed by the following individuals:

Outside NCES

  • Stephanie Cronen, American Institutes for Research
  • Michael Day, Professor, Department of Visual Arts, Brigham Young University
  • Lawrence Lanahan, American Institutes for Research
  • Scott C. Shuler, Arts Consultant, Bureau of Curriculum and Instruction, Connecticut State Department of Education
  • Carol F. Stole, Vice President, Council for Basic Education, and Director, Schools Around the World

Inside NCES

  • Kerry Gruber, Elementary/Secondary and Libraries Studies Division
  • Bill Hussar, Early Childhood, International, and Crosscutting Studies Division
  • Edith McArthur, Early Childhood, International, and Crosscutting Studies Division
  • Marilyn McMillen Seastrom, Chief Statistician
  • Valena Plisko, Associate Commissioner, Early Childhood, International, and Crosscutting Studies Division
  • John Ralph, Early Childhood, International, and Crosscutting Studies Division
  • Bruce Taylor, Office of the Deputy Commissioner
  • Sheida White, NAEP Development and Operations-Assessment Division

For more information about the FRSS or the surveys on arts education, contact Shelley Burns, Early Childhood, International, and Crosscutting Studies Division, National Center for Education Statistics, Office of Educational Research and Improvement, U.S. Department of Education, 1990 K Street NW, Washington, DC 20006, email: Shelley.Burns@ed.gov, telephone: (202) 502-7319. This report and other NCES reports are also available on the NCES Web Site at http://nces.ed.gov.
