
Report in Brief: NAEP 1994 Trends in Academic Progress

October 1996

Authors: Jay R. Campbell, Clyde M. Reese, Christine O'Sullivan, and John A. Dossey

Download the complete report in a PDF file for viewing and printing (694K).


Introduction

Educational reform continues to be a major concern of parents, educators, and policy makers, as well as the general public. Reorganizing schools, enhancing the curriculum, establishing performance standards, and rethinking traditional instructional methods are just some of the efforts being made across the country to increase student achievement. As part of these efforts, in 1990 the President and governors adopted a set of six ambitious national education goals for the 21st century: ensuring that children start school ready to learn, raising high school graduation rates, increasing levels of educational achievement, promoting science and mathematics achievement as well as literacy and lifelong learning, and freeing schools of drugs and violence.1 In the spring of 1994, Congress broadened the goals to include improvements in teacher preparation and increased parental involvement in schools.2

Measuring students' progress toward higher achievement has been the purpose of the National Assessment of Educational Progress (NAEP) since its inception in 1969. Students in both public and nonpublic schools have been assessed in various subject areas on a regular basis. In addition, NAEP collects information about relevant background variables that provide an important context for interpreting the assessment results and for documenting the extent to which educational reform has been implemented.

One important feature of NAEP is its ability to monitor trends in academic achievement in core curriculum areas over an extended period of time. By readministering materials and replicating procedures from assessment to assessment, NAEP provides valuable information about progress in academic achievement and about whether the United States can meet the challenge of its national education goals.

The NAEP long-term trend assessments are separate from the main assessments conducted by NAEP, which involve more recently developed instruments. While the long-term trend assessments have used the same sets of questions and tasks so that trends across time can be measured, the main assessments in each subject area have been developed to reflect current educational priorities. The use of both long-term trend and main assessments allows NAEP to provide information about students' achievement over time and to assess their achievement of more contemporary educational objectives. Because each of these assessments is based on different sets of questions or tasks, the results from each cannot be directly compared.

This report presents results of the NAEP 1994 trend assessments in science, mathematics, reading, and writing. To provide a numeric summary of students' performance on the assessment questions and tasks, NAEP uses a 0 to 500 scale for each subject area. Comparisons of average scale scores are provided across the years in which trend assessments have been administered and among subpopulations of students. Nationally representative samples totaling approximately 31,000 students were involved in the NAEP 1994 trend assessments.

In the following sections of this report, trend assessment results are given in science, mathematics, reading, and writing. These results chart trends going back to the first year in which each NAEP assessment was given: 1969/70 in science, 1973 in mathematics, 1971 in reading, and 1984 in writing. Trends in average performance over these time periods are discussed for students at ages 9, 13, and 17 for the science, mathematics, and reading assessments and at grades 4, 8, and 11 for the writing assessment. Trends in average performance differences between White students and Black students, White students and Hispanic students, and male students and female students are also discussed.

The descriptions of trend results are based on statistical tests that consider both the estimates of average performance in each assessment year and the degree of uncertainty associated with these estimates. The purpose of basing descriptions on such tests is to restrict the discussion of observed trends and group differences to those that are statistically dependable. Hence, the patterns of results that are discussed are unlikely to be due to the chance factors associated with the inevitable sampling and measurement errors inherent in any large-scale survey effort like NAEP. Throughout this report, all cited trend patterns, differences between assessment years, and differences between subgroups of students are statistically significant at the .05 level.
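The kind of test described above can be illustrated with a short sketch. This is not the report's actual procedure or its data; it is a minimal, hypothetical example of testing whether two average scale scores differ at the .05 level, assuming each average comes with an estimated standard error and the two estimates are independent.

```python
import math

def significant_difference(mean1, se1, mean2, se2, z_crit=1.96):
    """Two-tailed z-test at the .05 level for the difference between
    two independent average scale scores with known standard errors."""
    diff = mean2 - mean1
    # Standard errors of independent estimates combine in quadrature.
    se_diff = math.sqrt(se1 ** 2 + se2 ** 2)
    return abs(diff / se_diff) > z_crit

# Hypothetical averages and standard errors (not actual NAEP values):
print(significant_difference(285.0, 1.1, 288.4, 1.0))  # True: |z| is about 2.29
print(significant_difference(285.0, 1.1, 286.0, 1.0))  # False: |z| is about 0.67
```

A difference of a few scale points can thus be statistically dependable or not depending on the precision of the two estimates, which is why the report describes only differences that pass such a test.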

Two distinct sets of statistical tests have been applied to the trend results. First, each sequence of assessment results (whether it be overall performance or differences in performance for race/ethnicity and gender subgroups) was tested for linear and quadratic trends. Separate tests were carried out in each subject area at each age or grade level. The purpose of this first set of general tests was to determine whether the results of the series of assessments in a given subject could be generally characterized by a line or a simple curve. A linear relationship indicates that results have steadily increased (or decreased) at a relatively constant rate over the time period of interest. Simple curvilinear (i.e., quadratic) relationships capture more complex patterns. For example, one possible pattern is to have initial score declines over part of the time period followed by score increases in more recent assessments. Another possible pattern is a sequence of several assessments in which scores increased, followed by a period of relatively stable assessment performance. These examples are two, but not all, of the simple curvilinear relationships that were tested.

Simple linear and curvilinear patterns do not always provide a satisfactory summary description of the pattern of trend results. Hence, tests of linear and quadratic trends were supplemented by a second set of statistical tests which compared results for selected pairs of assessment years within each trend sequence. Again, separate tests were carried out in each subject area at each age or grade level. Two families of pairwise tests were carried out. One family of tests consisted of comparing the results from each trend assessment year to the results for the first assessment year. The second family of tests consisted of comparing the results from each trend assessment year to the 1994 results. The statistical tests in both families were carried out at a significance level that adjusted for the multiple comparisons being carried out within each family. The characterizations of trend data that appear below are based on the combined results of both the general (i.e., linear and quadratic) and the two families of pairwise tests.
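The adjustment for multiple comparisons within each family of pairwise tests can also be sketched. A Bonferroni-style adjustment, shown below, is one standard way to hold the family-wise error rate at .05; the report does not specify which adjustment it used, so this is an assumption for illustration.

```python
from statistics import NormalDist

def familywise_z(alpha, n_comparisons):
    """Critical z value for each two-tailed pairwise test so that the
    family-wise error rate stays at alpha (Bonferroni adjustment)."""
    per_test_alpha = alpha / n_comparisons
    return NormalDist().inv_cdf(1 - per_test_alpha / 2)

# A single comparison uses the familiar 1.96 cutoff:
print(round(familywise_z(0.05, 1), 2))  # 1.96

# A family comparing 1994 to each of five earlier assessment years
# requires a stricter cutoff for every test in the family:
print(round(familywise_z(0.05, 5), 2))  # 2.58
```

The effect is that each individual comparison within a family must clear a higher bar, so the chance of declaring any spurious difference across the whole family remains at the .05 level.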


  1. Executive Office of the President, National Goals for Education. (Washington, DC: Government Printing Office, 1990).
  2. Goals 2000: Educate America Act, Pub. L. No. 103-227 (1994).



NCES 97-583 Ordering information

Last updated 23 March 2001 (RH)
