
Supplemental Notes

Note 4: National Assessment of Educational Progress (NAEP)

The National Assessment of Educational Progress (NAEP), governed by the National Assessment Governing Board (NAGB), is administered regularly in a number of academic subjects. Since its creation in 1969, NAEP has had two major goals: to assess student performance reflecting current educational and assessment practices and to measure change in student performance reliably over time. To address these goals, NAEP includes a main assessment and a long-term trend assessment. The two assessments are administered to separate samples of students at separate times, use separate instruments, and measure different educational content. Thus, results from the two assessments should not be compared.

MAIN NAEP

Indicators 12, 13, 14, 15, and 16 are based on the main NAEP. Begun in 1990, the main NAEP periodically assesses students’ performance in several subjects in grades 4, 8, and 12, following the assessment frameworks developed by NAGB and using the latest advances in assessment methodology. NAGB develops the frameworks from standards within the field, through a consensus process involving educators, subject-matter experts, and other interested citizens. Each round of the main NAEP includes a student assessment and background questionnaires (for the student, teacher, and school) to provide information on instructional experiences and the school environment at each grade.

Since 1990, NAEP assessments have also been conducted to give results for participating states. States that choose to participate receive assessment results that report on the performance of students within the state. In its content, the state assessment is identical to the assessment conducted nationally. However, because the national NAEP samples were not, and are not, designed to support the reporting of accurate and representative state-level results, separate representative samples of students are selected for each participating jurisdiction/state.

Beginning with the 2002 assessments, a combined sample of public schools was selected for both the state and national NAEP. This change responded to the 1998 NCES/NAGB redesign: with nearly all states participating in the state component of NAEP, separate national samples were judged unnecessary. Using all students from the state samples to produce national estimates greatly improves the precision of those estimates and somewhat reduces the burden of participation by decreasing the total number of sampled schools. The national NAEP sample is thus a combination of the state samples for those subjects in which state scores are available at grades 4 and 8.

Therefore, since 2002, on assessments with a state component, the main national sample has included all students assessed in the participating states. The typical sample per state is about 3,000 students from 100 schools for each grade and subject assessed, plus Trial Urban District Assessment (TUDA) samples where applicable. Should any state, or a significant part of a state, decline to participate, a small additional sample is selected from schools in the same stratum. This additional sample ensures that the national sample remains representative of the total national student population.
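The stratum-based backfill described above can be sketched in a few lines. The following is a hypothetical simplification for illustration only, not NCES's actual sampling procedure; the schools_by_stratum structure, the function name, and the draw logic are all assumptions.

```python
import random

def sample_with_stratum_backfill(schools_by_stratum, targets, refused_states):
    """Draw schools per stratum; if drawn schools sit in states that
    declined to participate, backfill from other schools in the same
    stratum so the stratum stays represented in the national sample.

    schools_by_stratum: dict mapping stratum id -> list of (school_id, state)
    targets: dict mapping stratum id -> number of schools to draw
    refused_states: set of state codes that declined to participate
    """
    sample = []
    for stratum, schools in schools_by_stratum.items():
        initial = random.sample(schools, targets[stratum])
        kept = [s for s in initial if s[1] not in refused_states]
        shortfall = targets[stratum] - len(kept)
        # Backfill candidates: same stratum, participating states,
        # not already drawn.
        pool = [s for s in schools
                if s[1] not in refused_states and s not in initial]
        kept += random.sample(pool, min(shortfall, len(pool)))
        sample.extend(kept)
    return sample
```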

The ability of the assessments to measure change in student performance over time is sometimes limited by changes in the NAEP framework. While shorter term trends can be measured in most NAEP subjects, data from different assessments are not always comparable. (Where the framework of a given assessment changes, linking studies are generally conducted to preserve comparability over time.) Recent main NAEP assessment instruments for science and reading have typically been kept stable for shorter periods, allowing comparisons across time. For example, from 1990 to 2005, assessment instruments in the same subject areas were generally developed using the same framework, shared a common set of questions, and used comparable procedures to sample and assess student populations. In 2005, NAGB revised the grade 12 mathematics framework to reflect changes in high school mathematics standards and coursework. As a result, even though many questions are repeated from previous assessments, the 2005 grade 12 results cannot be directly compared with those from previous years.
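A linking study places scores from a revised instrument onto the scale of an earlier one. NAEP's operational linking relies on common-item IRT methods; the mean-sigma transformation below is a deliberately minimal illustration of the idea, and every number and name in it is assumed for the example.

```python
def mean_sigma_link(new_scores, old_mean, old_sd, new_mean, new_sd):
    """Linearly map scores from a revised assessment onto the old scale
    so that the linked scores reproduce the old mean and spread.

    A score x on the new scale maps to a * x + b, where
    a = old_sd / new_sd and b = old_mean - a * new_mean.
    """
    a = old_sd / new_sd
    b = old_mean - a * new_mean
    return [a * x + b for x in new_scores]

# Hypothetical example: scores of 120, 150, and 180 on a revised form
# (mean 150, sd 30) linked back to an original scale (mean 250, sd 35)
# become 215.0, 250.0, and 285.0.
linked = mean_sigma_link([120.0, 150.0, 180.0],
                         old_mean=250.0, old_sd=35.0,
                         new_mean=150.0, new_sd=30.0)
```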

NAGB called for the development of a new mathematics framework for the 2005 assessment. The revisions were intended to reflect recent curricular emphases and to provide clearer, more specific objectives for each grade level. The new framework focuses on two dimensions: mathematical content and cognitive demand. By considering these two dimensions for each item in the assessment, the framework ensures that NAEP assesses an appropriate balance of content along with a variety of ways of knowing and doing mathematics. For grades 4 and 8, comparisons can be made between assessments administered before and after the implementation of the 2005 framework. For grade 12, the 2005 framework added questions on algebra, data analysis, and probability to reflect changes in high school mathematics standards and coursework, and merged the measurement and geometry content areas. As a result, grade 12 results could not be placed on the old NAEP scale and cannot be directly compared with those of previous years; the reporting scale for grade 12 mathematics was changed from 0–500 to 0–300. For more information regarding the 2005 framework revisions, see http://nces.ed.gov/nationsreportcard/mathematics/whatmeasure.asp.

The main NAEP results are reported in The Condition of Education in terms of both average scale scores and achievement levels. The achievement levels define what students who are performing at the Basic, Proficient, and Advanced levels of achievement should know and be able to do. NAGB establishes achievement levels whenever a new main NAEP framework is adopted. As provided by law, NCES, upon review of congressionally mandated evaluations of NAEP, has determined that achievement levels are to be used on a trial basis and should be interpreted with caution. NAEP achievement levels have been widely used by national and state officials. The policy definitions of the achievement levels that apply across all grades and subject areas are as follows:

  • Basic: This level denotes partial mastery of prerequisite knowledge and skills that are fundamental for proficient work at each grade assessed.

  • Proficient: This level represents solid academic performance for each grade assessed. Students reaching this level have demonstrated competency over challenging subject matter, including subject-matter knowledge, application of such knowledge to real-world situations, and analytical skills appropriate to the subject matter.

  • Advanced: This level signifies superior performance at each grade assessed.

Some indicators report the percentage of students at or above Proficient or at or above Basic. The percentage of students at or above Proficient includes students at both the Proficient and Advanced achievement levels; similarly, the percentage at or above Basic includes students at the Basic, Proficient, and Advanced achievement levels.
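The cumulative reporting rule is simple addition of the percentages in the mutually exclusive reporting categories. The distribution below is invented for illustration; it is not actual NAEP data.

```python
# Hypothetical percentage of students in each mutually exclusive
# achievement category (values invented for illustration).
distribution = {
    "Below Basic": 31.0,
    "Basic": 33.0,
    "Proficient": 29.0,
    "Advanced": 7.0,
}

# "At or above" percentages sum the categories at and above the level.
at_or_above_basic = (distribution["Basic"]
                     + distribution["Proficient"]
                     + distribution["Advanced"])       # 69.0
at_or_above_proficient = (distribution["Proficient"]
                          + distribution["Advanced"])  # 36.0
```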

Unlike estimates from other sample surveys presented in this report, NAEP estimates that are potentially unstable (large standard error compared with the estimate) are not flagged as potentially unreliable. This practice for NAEP estimates is consistent with the current output from the NAEP online data analysis tool. The reader should always consult the appropriate standard errors when interpreting these findings. For additional information on NAEP, including technical aspects of scoring and assessment validity and more specific information on achievement levels, see http://nces.ed.gov/nationsreportcard/.
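A common way other surveys flag unstable estimates is a coefficient-of-variation rule: flag an estimate whose standard error is large relative to the estimate itself. The function name and the 30 percent threshold below are assumptions for this sketch, not an NCES rule.

```python
def is_potentially_unstable(estimate, standard_error, cv_threshold=0.30):
    """Return True when the coefficient of variation (standard error
    divided by the estimate) exceeds the threshold.

    The 0.30 cutoff is hypothetical; surveys differ in the threshold
    they apply.
    """
    if estimate == 0:
        return True  # no meaningful CV can be computed
    return (standard_error / abs(estimate)) > cv_threshold

# Example: a scale score of 250 with a standard error of 1.2 passes;
# an estimate of 3.0 with a standard error of 1.5 is flagged.
print(is_potentially_unstable(250.0, 1.2))  # False
print(is_potentially_unstable(3.0, 1.5))    # True
```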

Student Accommodations

Until 1996, the main NAEP assessments excluded certain subgroups of students identified as “special needs students,” including students with disabilities and students with limited English proficiency. For the 1996 and 2000 mathematics assessments and the 1998 and 2000 reading assessments, the main NAEP included a separate assessment with provisions for accommodating these students (e.g., extended time, small-group testing, and having mathematics questions read aloud). Thus, for these years, there are results for both the unaccommodated and the accommodated assessments. For the 2002, 2003, and 2005 reading assessments and the 2003 and 2005 mathematics assessments, the main NAEP did not include a separate unaccommodated assessment; only a single accommodated assessment was administered. The switch to a single accommodated assessment instrument was made after it was determined that accommodations in NAEP did not have a significant effect on student scores. Indicators 12 and 13 present NAEP results with and without accommodations.

LONG-TERM TREND NAEP

The long-term trend NAEP measures basic student performance in reading, mathematics, science, and writing. Indicator 17 reports findings from the long-term reading and mathematics assessments. Since the early 1970s, the long-term trend NAEP has used the same instruments to provide a means to compare performance over time, but the instruments do not necessarily reflect current teaching standards or curricula. Results have been reported for students at ages 9, 13, and 17 in mathematics, reading, and science, and at grades 4, 8, and 12 in writing. Future assessments are scheduled to be conducted in reading and mathematics. Results from the long-term trend NAEP are presented as mean scale scores because, unlike the main NAEP, the long-term trend NAEP does not define achievement levels.


