

Help for the NAEP 1997 Arts Summary Data Tables

Click on a topic to learn more about it:

Types of Summary Data Tables
Reporting Groups
Assessment Instruments
How to Read the Summary Data Tables
NAEP Arts Scales and Scores
Minimum Sample Sizes for Reporting
Drawing Inferences from the Results
Using Acrobat Reader
For More Information

Types of Summary Data Tables

These data tables present music, visual arts, and theatre results from the 1997 National Assessment of Educational Progress at grade 8. For music and visual arts, representative samples of public and nonpublic school students were assessed. A special sample was assessed for theatre.

A key component of the arts program was the contextual information collected from the students, their teachers, and the administrators in the schools they attended. The students were asked a series of background questions about themselves, their classroom instruction, and their perceptions toward school. Teachers of students in the theatre assessment completed a questionnaire that included general questions about their experience and training as well as specific questions about instruction and programs. Principals or other administrators in each student's school completed a questionnaire about school policies and practices.

The arts summary data tables are based on responses to background items from the student, school, and (for theatre) teacher questionnaires. The results include average scale scores or mean percent-correct scores, percentages for each response alternative, and scale score percentiles for assessment questions. The results are enumerated for important demographic groups such as student gender, race/ethnicity, and parental education level.

Users can choose different types of summary data tables for each of the assessed arts disciplines by first selecting "Music," "Theatre," or "Visual Arts," then selecting the desired table type. The tables are grouped by the types of scores that were developed for each discipline. Separate IRT scales were constructed for music, theatre, and visual arts Responding exercises. In addition, Creating items in visual arts and music, Creating/Performing items in theatre, and Performing items in music were formed into separate percent-of-total-possible-points averages, with mean percent-correct scores reported at various levels.

Unless otherwise indicated in rows or column headings, the results presented in the music and visual arts tables are based on combined public- and nonpublic-school samples. Because the theatre assessment was a special targeted assessment, theatre results are not reported by type of school, even though there are some (very few) nonpublic-school students in the theatre sample.

Five types of summary data tables are provided:

1) Student Summary Data Tables

The Student Summary Data Tables present results based on data from the background questions, subject-area questions, and motivation questions that were administered to each student and on data derived from school-level information contained in the sampling frame. Separate tables are provided for each data variable, but all tables share a common format. The rows of the table contain information for the total sample and for the standard set of reporting groups. The column labeled N shows the number of students on which the summary is based. The column labeled CV shows the coefficient of variation for the estimated number of individuals in the population that belong to the indicated subgroup. The coefficient of variation, 100 x the standard deviation divided by the mean, is a measure of variability that does not depend on the scale.
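For readers who want the arithmetic spelled out, the following is a minimal sketch of the CV computation as defined above. The numbers are hypothetical, not values from the NAEP tables.

    def coefficient_of_variation(std_dev, mean):
        """Return the CV as a percentage: 100 times std_dev divided by mean."""
        return 100.0 * std_dev / mean

    # Hypothetical example: an estimated subgroup population of 250,000
    # students with a standard deviation of 30,000 gives a CV of 12 percent.
    cv = coefficient_of_variation(std_dev=30_000, mean=250_000)
    print(f"CV = {cv:.1f}%")  # CV = 12.0%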

Each of the remaining columns in the table contains the estimated percentage of students corresponding to each category of the background variable and their estimated average scores. Standard errors of these statistics are shown in parentheses.

The rightmost column, labeled "Missing," shows the percentage of students in the reporting group for whom membership in a particular response category is unknown because no response was given by the student. Average scores are also provided for this group of students, but they are not used in calculating percentages in the response categories.

2) Teacher Summary Data Tables (Theatre Only)

The Teacher Summary Data Tables contain results based on the responses of the theatre teachers of the students in the theatre assessment to a three-part questionnaire. The first section focused on the teacher's general background and experience; the second, on the teacher's background related to theatre; and the third, on classroom information about instructional practices in theatre. The Teacher Summary Data Tables are identical in format to the Student Summary Data Tables and contain the same types of statistics.

In the Teacher Summary Data Tables, N represents the total number of students in the sample (or in each reporting group) who could be matched to their teacher and whose teacher responded to the particular question. A small number of students could not be matched to their teacher because the teacher either was not identified or did not return a questionnaire.

3) School Summary Data Tables

The School Summary Data Tables contain results based on the responses obtained from participating schools to the School Characteristics and Policies Questionnaire. The questionnaire was given to the principal or other administrator in the school. It asked about school policies, programs, and facilities, and about the demographic composition and background of the students and teachers. The School Summary Data Tables are identical in format to the Student and Teacher Summary Data Tables and contain the same types of statistics. However, in the School Summary Data Tables, the N represents the number of students in the sample (or in each reporting group) who could be matched to a school questionnaire and whose school administrator responded to the question. Because not all schools returned the school questionnaire, a small percentage of students may not have school questionnaire information.

4) Test Question Summary Data Tables

In these data tables, response percentages are provided for the multiple-choice and constructed-response test questions that were administered to students in the assessments. Percentages are followed by their estimated standard errors, shown in parentheses. For multiple-choice questions, the percentage of students that responded to each option appears, with the correct answer denoted by an asterisk. For constructed-response questions, percentages are provided for each of the score levels. The percentage of students that omitted or did not reach each question is also reported. Students were classified as having omitted a question if they did not respond to it but did respond to subsequent questions in the same section (or "block") of questions. Students were classified as having not reached a question if they omitted it and all subsequent questions in the same block. For dichotomously scored constructed-response questions, off-task responses were grouped with omitted responses. For all questions, the percentages are calculated after excluding the not-reached responses.
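The omitted/not-reached distinction can be made concrete with a short sketch. This is an illustrative reconstruction of the rule stated above, not NAEP's actual scoring code; the function name and data layout are invented for the example.

    def classify_blanks(responses):
        """Label each unanswered question in a block 'omitted' or 'not reached'.

        A blank is 'omitted' if any later question in the block was answered;
        it is 'not reached' if it and every subsequent question are blank.
        None marks a question with no response.
        """
        labels = []
        for i, response in enumerate(responses):
            if response is not None:
                labels.append("answered")
            elif any(later is not None for later in responses[i + 1:]):
                labels.append("omitted")
            else:
                labels.append("not reached")
        return labels

    print(classify_blanks(["B", None, "A", None, None]))
    # ['answered', 'omitted', 'answered', 'not reached', 'not reached']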

5) Percentile Summary Data Tables (for the Responding Scales)

The Percentile Summary Data Tables provide estimated percentiles for the arts Responding scales. The following information is provided: 1) estimated mean; 2) estimated standard deviation; and 3) estimates of the 10th, 25th, 50th (or median), 75th, and 90th percentiles. All estimates are followed in parentheses by their estimated standard errors.



Reporting Groups

The summary data tables provide results for groups of students defined by shared characteristics. Based on statistically determined criteria, results are reported for subpopulations only when sufficient numbers of students are assessed and adequate school representation criteria are met. For both public- and nonpublic-school students, the minimum requirement is 62 students in a particular subgroup from at least 5 primary sampling units (selected geographical regions comprising a county, group of counties, or a metropolitan statistical area). However, the data for all students, regardless of whether their subgroup was reported separately, were included in computing overall results. Definitions of the subpopulations referred to in the summary data tables are presented below.

1) Gender

Results are reported separately for males and females.

2) Race/ethnicity

Results are presented for students of different racial/ethnic groups based on the students' self-identification of race/ethnicity according to six mutually exclusive categories: White, Black, Hispanic, Asian, Pacific Islander, and American Indian/Alaskan Native. The numbers of Pacific Islander and American Indian students did not meet minimum reporting requirements in any of the arts disciplines, so these subgroups do not appear as rows in the tables. Sufficient numbers of Asian students were available for reporting only in music and visual arts.

3) Parents' Highest Level of Education

Students were asked to indicate the extent of schooling for each of their parents: did not finish high school, graduated from high school, had some education after high school, or graduated from college. The response indicating the highest level of education was selected for reporting.

4) Type of School (Music and Visual Arts Only)

Overall results are presented for public schools and for nonpublic schools.

5) Region of the Country

Results are reported for four regions of the nation: Northeast, Southeast, Central, and West. States included in each region are shown in the following list. For theatre, only the West region met minimum subgroup requirements for reporting.

Northeast: Connecticut, Delaware, District of Columbia, Maine, Maryland, Massachusetts, New Hampshire, New Jersey, New York, Pennsylvania, Rhode Island, Vermont, Virginia (DC metropolitan statistical area only)
Southeast: Alabama, Arkansas, Florida, Georgia, Kentucky, Louisiana, Mississippi, North Carolina, South Carolina, Tennessee, Virginia (other than DC metro area), West Virginia
Central: Illinois, Indiana, Iowa, Kansas, Michigan, Minnesota, Missouri, Nebraska, North Dakota, Ohio, South Dakota, Wisconsin
West: Alaska, Arizona, California, Colorado, Hawaii, Idaho, Montana, Nevada, New Mexico, Oklahoma, Oregon, Texas, Utah, Washington, Wyoming

6) Type of Location

Results are provided for students attending schools in three mutually exclusive location types, as defined below: central city, urban fringe/large town, and rural/small town. The type of location variable indicates the geographical location of a student's school; it is not intended to indicate or imply social or economic meanings for these location types.

Central City: The Central City category includes central cities of all metropolitan statistical areas (MSAs). Central City is a geographic term and is not synonymous with "inner city."
Urban Fringe/Large Town: An Urban Fringe includes all densely settled places and areas within MSAs that are classified as urban by the Bureau of the Census. A Large Town is defined as places outside MSAs with a population greater than or equal to 25,000.
Rural/Small Town: Rural includes all places and areas with a population of less than 2,500 that are classified as rural by the Bureau of the Census. A Small Town is defined as places outside MSAs with a population of less than 25,000 but greater than or equal to 2,500.

7) Responding Score Groups

Low-, middle-, and high-scoring groups were formed by sorting the Responding scale scores from lowest to highest, calculating cumulative percentiles, and partitioning at the 25th and 75th percentiles.
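As an illustration of this partition, the following unweighted sketch cuts a set of scores at the 25th and 75th percentiles. The scores are hypothetical, and actual NAEP computations apply sampling weights.

    import statistics

    def score_groups(scores):
        """Partition scores into low/middle/high at the 25th and 75th percentiles."""
        q1, _, q3 = statistics.quantiles(scores, n=4)  # 25th, 50th, 75th
        return {
            "low":    [s for s in scores if s < q1],
            "middle": [s for s in scores if q1 <= s <= q3],
            "high":   [s for s in scores if s > q3],
        }

    groups = score_groups([110, 125, 140, 150, 155, 160, 175, 190])
    print({name: len(members) for name, members in groups.items()})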



Assessment Instruments

Student Booklets and Questionnaires

The assessments in music, theatre, and visual arts included "blocks" or sets of questions. Each block consisted of one or more stimuli and sets of multiple-choice, constructed-response, or Creating/Performing items to assess students' mastery of material.

The music assessment included five Creating/Performing blocks and four Responding blocks. The five Creating/Performing blocks were divided into three blocks for students in the general population and two blocks for students currently active in some type of music activity. All students sampled for the music assessment completed one of the Creating/Performing blocks for the general student population and two Responding blocks. In addition, a small sample of students who indicated current involvement in a music activity completed one of the two additional Creating/Performing blocks as a fourth block.

The theatre assessment included three Creating/Performing blocks and four Responding blocks. All theatre students completed one Creating/Performing block and two Responding blocks.

The visual arts assessment included three Creating blocks and four Responding blocks. All students sampled for visual arts completed either one Responding block and one Creating block or two Responding blocks.

Each booklet in the assessment also included several sets of background questionnaires. Students sampled for the arts assessment completed one 5-minute set of student demographic background questions and one 10-minute set of subject-specific background questions. The subject-specific background questionnaires were designed to gather contextual information about students, their instructional and out-of-school arts experiences, and their attitudes toward the art domain in which they were being assessed.

The subject-specific questionnaires asked three categories of questions. Questions about students' interest in the subject asked for students' ratings of their interest and ability in the subject. For example, in music, three of the statements to which students were asked to respond "Agree," "Not Sure," or "Disagree" were: "I like to listen to music," "I think I have talent for music," and "People tell me I am a good musician."

Students' in-school experiences were characterized by the frequency with which their teachers provided various subject-related instructional activities during class and by student participation in various arts-related activities during school. Students' out-of-school experiences were characterized by the frequency with which students were involved in various arts-related activities outside of school, not in connection with schoolwork.

Teacher Questionnaire

To supplement the information on instruction reported by students, the theatre teachers of the targeted students participating in the NAEP theatre assessment were asked to complete a questionnaire about instructional practices, teaching backgrounds, and characteristics. The results of the field tests in music and visual arts showed high percentages of missing data for the music and visual arts teachers' questionnaires. Because of this, teacher questionnaires were not administered in the operational music and visual arts assessments.

The theatre teacher questionnaire was divided into three parts. The first part contained 85 questions about teachers' general educational background and training. The second part contained 25 questions pertaining to teachers' background, activities, and preparation in theatre. The third part contained 50 questions on specific instructional practices.

School Questionnaire

Principals of students sampled for the assessment were asked to complete a questionnaire about the school's characteristics and students' access to instruction in the arts. The school questionnaire covered three broad areas. The first part pertained to the availability of courses in the arts and students' access to computers. The second part asked questions about the status of staff members teaching in the arts, the facilities and available resources for the arts, and the existence of special programs in the arts, such as artists-in-residence and summer arts programs. The final part of the school questionnaire pertained to demographics at the school, such as school enrollment. It also included variables used to describe the general climate of the school, such as attendance rates of students and staff, and the frequency of various problems in the school.



How to Read the Summary Data Tables

It is important to note that throughout these data tables the student is always the unit of analysis. The data from the questionnaires that a student's teacher and school administrator completed are linked to that student's record. Thus, even when information being reported is based on responses obtained from the teacher or school questionnaire, the results are presented for the percentages of students in each category. Furthermore, it should be remembered that all reported statistics were determined using sampling weights designed to produce consistent estimates of the quantity in question for each of the sampled populations.

Having the student as the unit of analysis and using sampling weights make it possible to describe instruction as received by representative samples of students. The perspective provided by using the student as the unit of analysis may differ somewhat from that which would be obtained by simply collecting information from a sample of teachers or from a sample of schools; however, the approach used is in keeping with NAEP's goal of providing information about the educational context and the performance of students.
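To make the role of the weights concrete, here is a minimal sketch of a weighted percentage and a weighted average score, assuming a simple per-student weight. The field names and numbers are hypothetical, and NAEP's actual weighting procedures are considerably more involved.

    # Each record is one student; the weight is the (hypothetical) number
    # of students in the population that this sampled student represents.
    students = [
        {"weight": 1200.0, "score": 148.0, "response": "Yes"},
        {"weight":  800.0, "score": 162.0, "response": "No"},
        {"weight": 1000.0, "score": 155.0, "response": "Yes"},
    ]

    total_weight = sum(s["weight"] for s in students)

    # Weighted percentage of students in the "Yes" response category.
    yes_weight = sum(s["weight"] for s in students if s["response"] == "Yes")
    print(f"Weighted % Yes: {100 * yes_weight / total_weight:.1f}")

    # Weighted average scale score across all students.
    average = sum(s["weight"] * s["score"] for s in students) / total_weight
    print(f"Weighted average score: {average:.1f}")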

The following is a description of the rows, columns, and notations used in the NAEP Arts Summary Data Tables:

1) Rows

Each table includes results for:

Total (Nation)
Gender
Race/ethnicity
Type of school (not available for Theatre)
Region of the country
Type of location
Responding scale score groups

2) Columns

N: Number of students on which the summary is based.
Weighted %: The weighted percent of the population represented by the reporting group, with the standard error of this estimate in parentheses.
CV: Coefficient of variation for the estimated number of individuals in the population that belong to the indicated demographic group.
Missing: The rightmost column indicates the percentage of students in the total sample or in a reporting group for whom membership in a particular response category is unknown because no response was given by the student.

Each of the remaining columns in the table contains the estimated percentage of students choosing the indicated option of the background question (or derived variable) and their estimated average score. Standard errors of these statistics are shown in parentheses.

3) Notations

(****) Indicates that the value of the percentage is extremely low or extremely high.
*****(****) Indicates an insufficient sample size for this statistic. For results to be reported for any subgroup, at least 5 PSUs must be represented in the subgroup. In addition, a minimum sample of 62 students per subgroup is required.
! Indicates that the coefficient of variation is greater than or equal to 20 percent; hence, the variability of the statistic cannot be accurately determined.



NAEP Arts Scales and Scores

For each arts domain, analyses were conducted to determine the percentage of students who gave various responses to each cognitive and background question. Item response theory (IRT) was used to estimate average proficiency for the nation and various subgroups of interest within the nation. Mean percent-correct scores were developed for items classified as Creating, Performing, or Creating/Performing; the percentage of students giving each response to each item was also computed, along with mean percent-correct scores for each level.

For the music, theatre, and visual arts Responding exercises, separate IRT scales were constructed. Because of concerns about multidimensionality, and because there were too few items to create IRT scales for Creating, Performing, or Creating/Performing, only items in the Responding categories of the arts framework were entered into the IRT scaling procedure. A single IRT scale was created for each of the three fields of art. In music, the single scale was a composite of two scales, which improved the fit of the model over that for a single scale.

Creating items in visual arts and music, Creating/Performing items in theatre, and Performing items in music were formed into separate percent-of-total-possible-points averages, with mean percent-correct scores reported at various levels. Certain theatre items that combined aspects of Responding and Creating (e.g., "draw a set design for this play") did not fit the Responding IRT scale, and so were not included in that scale, but were reported on an item-level basis. In addition, items with logical dependencies (e.g., write a new ending to a script, followed by a discussion of how that ending accomplished one's goals) were separated, with the discussions included in the IRT scale. The endings on which they depended were put in the Creating/Performing averages.

For each of the three subjects in the arts assessment, a Responding IRT scale was created using a generalized partial-credit model, with a mean of 150 and a standard deviation of 35, so that the great majority of students had scores between 45 and 255. Although the mean of the Responding scale for each subject has been set to 150, the scales measure different accomplishments. Comparisons cannot be made between student results on any pair of Responding scales, even though the scales share the same mean (150). In other words, a score of 165 in visual arts is not necessarily "better" than a score of 160 in music. It is useful to illustrate the level of performance of students with a given scale score in a given subject by identifying questions likely to be answered correctly by students with that scale score. This process is known as "mapping." The position of a question on the Responding scale for each arts area represents the scale score attained by students who had (1) at least a 65 percent probability of reaching a given score level on a constructed-response question, or (2) at least a 74 percent probability of correctly answering a multiple-choice question.
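As a rough illustration of mapping, the sketch below inverts a two-parameter logistic item response curve to find the scale score at which the probability of a correct answer reaches 74 percent. NAEP's actual scaling used a generalized partial-credit model, and the item parameters here are invented for the example.

    import math

    def map_multiple_choice_item(a, b, p=0.74):
        """Scale score theta at which P(correct) = p for a 2PL item:
        P(theta) = 1 / (1 + exp(-1.7 * a * (theta - b)))."""
        return b + math.log(p / (1 - p)) / (1.7 * a)

    # Hypothetical item on the mean-150, SD-35 Responding metric.
    print(f"Item maps at scale score {map_multiple_choice_item(a=0.03, b=150.0):.0f}")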



Minimum Sample Sizes for Reporting

In some instances, the number of students in some of the demographic subpopulations was not sufficiently high to permit accurate estimation of performance and/or background variable results. As a result, data are not provided for the subgroups with students from very few schools or for the subgroups with very small sample sizes. For results to be reported for any subgroup, at least 5 PSUs must be represented in the subgroup. In addition, a minimum sample of 62 students per subgroup is required. For statistical tests pertaining to subgroups, the sample size for both groups has to meet the minimum sample size requirements. In the summary data tables, the notation "*****(****)" appears in place of a result whenever minimum sample size requirements are not met.
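The reporting rule reduces to a simple check, sketched below with an invented function name.

    def reportable(n_students, n_psus):
        """Apply the minimum sample size rule: at least 62 students
        drawn from at least 5 primary sampling units (PSUs)."""
        return n_students >= 62 and n_psus >= 5

    print(reportable(n_students=75, n_psus=6))  # True: result is reported
    print(reportable(n_students=75, n_psus=3))  # False: shown as *****(****)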



Drawing Inferences from the Results

Because the percentages of students in the assessed subpopulations and their average scale scores are based on samples rather than on the entire population of eighth-graders in the nation, the numbers reported are estimates. As such, they are subject to a measure of uncertainty, reflected in the standard error of the estimate. When the percentages or average scale scores of certain groups are compared, the standard error should be taken into account, and observed similarities or differences should not be taken at face value. Therefore, the comparisons presented in the summary data tables are based on statistical tests that consider the standard errors of those statistics and the magnitude of the difference among the averages or percentages.

Using confidence intervals based on the standard errors provides a way to take into account the uncertainty associated with sample estimates, and to make inferences about the population averages and percentages in a manner that reflects that uncertainty. An estimated sample average scale score plus or minus two standard errors approximates a 95 percent confidence interval for the corresponding population quantity. This statement means that one can conclude with approximately a 95 percent level of confidence that the average performance of the entire population of interest is within plus or minus two standard errors of the sample average.
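In code, the plus-or-minus-two-standard-errors rule looks like this; the estimate and standard error are hypothetical, not actual NAEP results.

    def confidence_interval(estimate, std_error):
        """Approximate 95 percent confidence interval: estimate +/- 2 SE."""
        return estimate - 2 * std_error, estimate + 2 * std_error

    low, high = confidence_interval(estimate=150.0, std_error=1.2)
    print(f"Approximate 95% CI: ({low:.1f}, {high:.1f})")  # (147.6, 152.4)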

Similar confidence intervals can be constructed for percentages, provided that the percentages are not extremely large or extremely small. Extreme percentages should be interpreted with caution. Adding or subtracting the standard errors associated with extreme percentages could cause the confidence interval to exceed 100 percent or go below 0 percent, resulting in numbers that are not meaningful.



Using Acrobat Reader

The NAEP summary data tables are provided in Portable Document Format (PDF), a cross-platform, fully searchable, open file format that retains the fidelity of the original documents. To use PDF files you must install Adobe Acrobat Reader from Adobe Systems, Inc. With Acrobat Reader software, you can view, navigate, zoom in on, search, print, and extract information from PDF files. Versions of the Reader for Windows, Macintosh, MS-DOS, and Unix systems are available free of charge from Adobe's Web site (http://www.adobe.com/products/acrobat/readstep2_allversions.html). Installation instructions and system requirements are provided at the Adobe Web site.

Additional Extraction Capability

Acrobat Reader's features can be enhanced by plug-ins from several third-party software developers. For example, some plug-ins extend the Reader's text extraction features by allowing users to copy tables in formats that can be imported into popular spreadsheet and word processing programs. Information about these products and evaluation copies of the software are available on Adobe's Web site at http://www.adobe.com/products/acrobat/readstep2_allversions.html. Offline, you can get information by contacting

Adobe Systems Incorporated
345 Park Avenue
San Jose, California 95110-2704 USA
Tel: 408-536-6000
Fax: 408-537-6000



For More Information About NAEP and the Summary Data Tables

An overview of the design and analysis of the 1997 assessments is provided in the procedural appendix of the report NAEP 1997 Arts Report Card. For complete details, see The NAEP 1997 Arts Analysis Technical Report.

For questions about the NAEP Data Tool, contact:

Sherran Osborne
National Center for Education Statistics
Assessment Division, 8th Floor
1990 K Street, NW
Washington, DC 20006
Phone: (202) 502-7420
E-mail: sherran.osborne@ed.gov


Last updated 30 September 2002 (JBJ)