
Interpreting NAEP Reading Results

Overview of the Assessment
Reporting the Assessment—Scale Scores and Achievement Levels
Description of Reading Performance by Item Maps for Each Grade
Results Are Estimates
NAEP Reporting Groups
Exclusion Rates
Statistical Significance
Cautions in Interpretations

Overview of the Assessment

NAEP assesses student performance in reading by administering assessments to samples that are representative of the nation's students. The content of the NAEP reading assessment is determined by a framework developed with the help of researchers, policymakers, and the interested public as well as expert perspectives about reading and its measurement. Read more about what the assessment measures, how it was developed, who took the assessment, and how the assessment was administered.

Beginning in 2002, the NAEP national sample was obtained by aggregating the samples of public school students from each state and jurisdiction, and then supplementing the aggregate sample with a nationally representative sample of students from nonpublic schools, rather than by obtaining an independently selected national sample. As a consequence, the national sample size increased, and smaller differences between years or between groups of students were found to be statistically significant than would have been detected in previous assessments. In keeping with past practice, all statistically significant differences are indicated in the current web results pages.

Comparisons are made to results from previous years in which the assessment was administered. In addition to the 2007 results, national results are reported from the 1992, 1994, 1998, 2002, 2003, and 2005 assessments and from the 2000 assessment for grade 4. State and/or jurisdiction results at grades 4 and 8 are reported from the 1992, 1994, 1998, 2002, 2003, 2005, and 2007 assessments. Results from 1998 or later are based on administration procedures in which testing accommodations were permitted for students with disabilities and English language learners. Accommodations were not permitted in earlier assessments. Read more about NAEP's policy of inclusion. Comparisons between results from 2003 and those from assessment years in which both types of administration procedures were used (i.e., 1998 and 2000 at grade 4, and 1998 at grade 8) are based on the results when accommodations were permitted. Changes in student performance across years or differences between groups of students within an administration year are discussed only if they have been determined to be statistically significant.


Reporting the Assessment—Scale Scores and Achievement Levels

The results of student performance on the NAEP reading assessment are presented on this website in two ways: as average scores on the NAEP reading scale and as the percentages of students attaining NAEP reading achievement levels. The average scale scores represent how students performed on the assessment. The achievement levels represent how that performance measured up against set expectations for achievement. Thus, the average scale scores represent what students know and can do, while the achievement-level results indicate the degree to which student performance meets expectations of what they should know and be able to do.

Average reading scale score results are based on the NAEP reading scale, which ranges from 0 to 500. The NAEP reading assessment scale is a composite combining separate scales for each reading context specified by the reading framework (at grade 4, reading for literary experience and reading for information, and at grade 8, those contexts and reading to perform a task). Average scale scores are computed for groups of students; NAEP does not produce individual student scores. The average scores are based on analyses of the percentages of students who answered each item successfully. The results for all three grades are placed together on one reporting scale. In the base year of the trend line, the three grades are analyzed together to create a cross-grade scale. In subsequent years, the data from each grade level are analyzed separately and then linked to the original cross-grade scale established in the base year. Comparisons of overall student performance across grade levels on a cross-grade scale are acceptable; however, other types of comparisons or inferences may not be supported by the available information. Note that while the scale is cross-grade, the skills tested and the material on the test increase in complexity and difficulty at each higher grade level, so different things are measured at the different grades even though a progression is implied.
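
To make the composite concrete, the sketch below combines subscale results into a single composite score as a weighted average. The weights shown are hypothetical; NAEP's actual weights reflect the framework's allocation of the assessment across the reading contexts.

```python
# Hypothetical grade 8 subscale weights; NAEP's actual weights come from the
# framework's allocation of the assessment across reading contexts.
GRADE8_WEIGHTS = {
    "literary_experience": 0.4,
    "reading_for_information": 0.4,
    "reading_to_perform_a_task": 0.2,
}

def composite_score(subscale_means: dict, weights: dict) -> float:
    """Combine subscale means into a composite score as a weighted average."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[name] * subscale_means[name] for name in weights)

print(composite_score(
    {"literary_experience": 264.0,
     "reading_for_information": 261.0,
     "reading_to_perform_a_task": 266.0},
    GRADE8_WEIGHTS,
))  # 263.2
```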

Achievement-level results are presented in terms of reading achievement levels adopted by the National Assessment Governing Board, and are intended to measure how well students' actual achievement matches the achievement desired of them. For each grade tested, the Governing Board has adopted three achievement levels: Basic, Proficient, and Advanced. For reporting purposes, the achievement-level cut scores are placed on the reading scales, resulting in four ranges: below Basic, Basic, Proficient, and Advanced.
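
The reporting logic implied by the cut scores is straightforward; a minimal sketch follows. The grade 4 reading cut scores shown (208, 238, 268) are the values published for NAEP reading, but should be verified against current Governing Board documentation before use.

```python
def achievement_level(scale_score: float, cuts: tuple) -> str:
    """Classify a scale score into one of the four reporting ranges."""
    basic, proficient, advanced = cuts
    if scale_score >= advanced:
        return "Advanced"
    if scale_score >= proficient:
        return "Proficient"
    if scale_score >= basic:
        return "Basic"
    return "below Basic"

GRADE4_READING_CUTS = (208, 238, 268)  # published grade 4 values; verify before use
print(achievement_level(245, GRADE4_READING_CUTS))  # Proficient
```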

The Governing Board established its achievement levels in 1996 based upon the reading content framework and a standard-setting process. A cross section of educators and interested citizens from across the nation was asked to judge what students should know and be able to do relative to the content reflected in the NAEP reading framework. As provided by law, NCES has determined that the achievement levels are to be considered on a trial basis and should be interpreted and used with caution. However, both NCES and the Governing Board believe these performance standards are useful for understanding trends in student achievement.

Description of Reading Performance by Item Maps for Each Grade

Item maps illustrate the knowledge and skills demonstrated by students performing at different scale points on the NAEP reading assessment. To provide additional context, the cut points for the three NAEP achievement levels are marked on the item maps. The map location for each question represents the scale score at which students had a 65 percent probability of successfully answering a constructed-response question, or a 74 percent probability of correctly answering a four-option multiple-choice question. For constructed-response questions, responses may be completely or partially correct; therefore, different types of responses to the same question could map onto the scale at different score levels.
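
A minimal sketch of that mapping, assuming a standard three-parameter logistic (3PL) item response model: the map location is the point at which the response probability reaches the criterion (0.74 for a four-option multiple-choice question, which builds in an allowance for guessing; 0.65 for a constructed-response question). The item parameters and the linear transform to the 0-500 reporting scale below are hypothetical.

```python
import math

def p_correct(theta: float, a: float, b: float, c: float = 0.0) -> float:
    """3PL item response function: probability of success at ability theta."""
    return c + (1.0 - c) / (1.0 + math.exp(-1.7 * a * (theta - b)))

def map_location(a: float, b: float, c: float, target: float) -> float:
    """Ability at which the response probability equals `target` (bisection)."""
    lo, hi = -6.0, 6.0
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if p_correct(mid, a, b, c) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Hypothetical item parameters; the 250 + 50*theta transform is illustrative only.
theta = map_location(a=1.0, b=0.3, c=0.25, target=0.74)  # four-option multiple choice
print(round(250 + 50 * theta))
```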

Approximately 30 reading questions have been selected and placed on an item map for each grade.

Results Are Estimates

The average scores and percentages presented on this website are estimates because they are based on representative samples of students rather than on the entire population of students. Moreover, the collection of subject-area questions used at each grade level is but a sample of the many questions that could have been asked. As such, NAEP results are subject to a measure of uncertainty, reflected in the standard error of the estimates. The standard errors for the estimated scale scores and percentages in the figures and tables presented on this website are available through the NAEP Data Explorer.


NAEP Reporting Groups

Results are provided for groups of students defined by shared characteristics—gender, race/ethnicity, eligibility for free/reduced-price school lunch, students with disabilities, and students identified as English language learners. Based on participation rate criteria, results are reported for various student populations only when sufficient numbers of students and adequate school representation are present. The minimum requirement is at least 62 students in a particular group from at least five primary sampling units (PSUs). However, the data for all students, regardless of whether their group was reported separately, were included in computing overall results. Explanations of the reporting groups are presented below.
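
A sketch of that minimum reporting rule exactly as stated above (at least 62 students drawn from at least five PSUs):

```python
MIN_STUDENTS = 62  # minimum group sample size stated above
MIN_PSUS = 5       # minimum number of primary sampling units

def group_is_reportable(n_students: int, n_psus: int) -> bool:
    """Apply the minimum reporting requirement described above."""
    return n_students >= MIN_STUDENTS and n_psus >= MIN_PSUS

print(group_is_reportable(80, 6))  # True
print(group_is_reportable(80, 3))  # False: too few PSUs
```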

Gender

Results are reported separately for males and females.

Race/Ethnicity

In all NAEP assessments, data about student race/ethnicity are collected from two sources: school records and student self-reports. Before 2002, NAEP used students' self-reports of their race and ethnicity on a background questionnaire as the source of race/ethnicity data. Starting in 2002, NAEP reports of students' race and ethnicity are based on school records, with students' self-reports used only if school data are missing. Information based on student self-reported race/ethnicity will continue to be reported in the NAEP Data Explorer.

In order to allow comparisons across years, assessment results presented are based on school-reported information for six mutually exclusive racial/ethnic categories: White, Black, Hispanic, Asian/Pacific Islander, American Indian (including Alaska Native), and Other. Students who identified with more than one of the first five categories or had a background other than the ones listed were categorized as Other.

Eligibility for Free/Reduced-Price School Lunch

As part of the Department of Agriculture's National School Lunch Program, schools can receive cash subsidies and donated commodities in return for offering free or reduced-price lunches to eligible children. Based on available school records, students were classified as either currently eligible for free/reduced-price school lunch or not eligible. Eligibility is determined by students' family income in relation to the federally established poverty level: students whose family income is at or below 130 percent of the poverty level qualify to receive free lunch, and students whose family income is between 130 percent and 185 percent of the poverty level qualify to receive reduced-price lunch. For the period July 1, 2006 through June 30, 2007, for a family of four, 130 percent of the poverty level was $26,000 and 185 percent was $37,000. The classification applies only to the school year when the assessment was administered (i.e., the 2006–07 school year) and is not based on eligibility in previous years. If school records were not available, the student was classified as "Information not available." If the school did not participate in the program, all students in that school were classified as "Information not available."
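
The thresholds above imply a family-of-four poverty level of $20,000 for 2006-07 (since $26,000 is 130 percent and $37,000 is 185 percent of that figure). A minimal sketch of the classification:

```python
def lunch_eligibility(family_income: float, poverty_level: float = 20_000) -> str:
    """Classify National School Lunch Program eligibility per the thresholds above.

    The default poverty level is the 2006-07 family-of-four figure implied by
    the text ($26,000 = 130% and $37,000 = 185% of $20,000).
    """
    if family_income <= 1.30 * poverty_level:
        return "free lunch"
    if family_income <= 1.85 * poverty_level:
        return "reduced-price lunch"
    return "not eligible"

print(lunch_eligibility(25_000))  # free lunch
print(lunch_eligibility(30_000))  # reduced-price lunch
print(lunch_eligibility(40_000))  # not eligible
```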

Students with Disabilities (SD)

Results are reported for students who were identified by school records as having a disability. A student with a disability may need specially designed instruction to meet his or her learning goals. A student with a disability will usually have an Individualized Education Program (IEP) which guides his or her special education instruction. Students with disabilities are often referred to as special education students and may be classified by their school as learning disabled (LD) or emotionally disturbed (ED).

English Language Learners (ELL)

Results are reported for students who were identified by school records as being English language learners. (Note that English language learners were previously referred to as limited English proficient (LEP).)

Type of School

The national results are based on a representative sample of students in both public schools and nonpublic schools. Nonpublic schools include private schools, Bureau of Indian Affairs schools, and Department of Defense schools. Private schools include Catholic, Conservative Christian, Lutheran, and other private schools. The state results are based on public school students only.

Type of Location

NAEP results are reported for four mutually exclusive categories of school location: city, suburb, town, and rural. The categories are based on standard definitions established by the Federal Office of Management and Budget using population and geographic information from the U.S. Census Bureau. Schools are assigned to these categories in the NCES Common Core of Data based on their physical address.

The classification system was revised for 2007; therefore, trend comparisons to previous years are not available. The new locale codes are based on an address's proximity to an urbanized area (a densely settled core with densely settled surrounding areas). This is a change from the original system based on metropolitan statistical areas. To distinguish the two systems, the new system is referred to as "urban-centric locale codes." The urban-centric locale code system classifies territory into four major types: city, suburban, town, and rural. Each type has three subcategories. For city and suburb, these are gradations of size—large, midsize, and small. Towns and rural areas are further distinguished by their distance from an urbanized area. They can be characterized as fringe, distant, or remote.
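
The twelve resulting categories can be represented as a simple mapping from major type to its three subcategories:

```python
# Urban-centric locale codes as described above: four major types, each with
# three subcategories (size for city/suburb, distance for town/rural).
URBAN_CENTRIC_LOCALES = {
    "city":   ("large", "midsize", "small"),
    "suburb": ("large", "midsize", "small"),
    "town":   ("fringe", "distant", "remote"),
    "rural":  ("fringe", "distant", "remote"),
}
```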

Region

Prior to 2003, NAEP results were reported for four NAEP-defined regions of the nation: Northeast, Southeast, Central, and West. As of 2003, to align NAEP with other federal data collections, NAEP analysis and reports have used the U.S. Census Bureau's definition of "region." The four regions defined by the U.S. Census Bureau are Northeast, South, Midwest, and West. The Central region used by NAEP before 2003 contained the same states as the Midwest region defined by the U.S. Census. The former Southeast region consisted of the states in the Census-defined South minus Delaware, the District of Columbia, Maryland, Oklahoma, Texas, and the section of Virginia in the District of Columbia metropolitan area. The former West region consisted of Oklahoma, Texas, and the states in the Census-defined West. The former Northeast region consisted of the states in the Census-defined Northeast plus Delaware, the District of Columbia, Maryland, and the section of Virginia in the District of Columbia metropolitan area. The table below shows how states are subdivided into these Census regions. All 50 states and the District of Columbia are listed. Other jurisdictions, including the Department of Defense Educational Activity schools, are not assigned to any region.

States within regions of the country defined by the U.S. Census Bureau

Northeast: Connecticut, Maine, Massachusetts, New Hampshire, New Jersey, New York, Pennsylvania, Rhode Island, Vermont

South: Alabama, Arkansas, Delaware, District of Columbia, Florida, Georgia, Kentucky, Louisiana, Maryland, Mississippi, North Carolina, Oklahoma, South Carolina, Tennessee, Texas, Virginia, West Virginia

Midwest: Illinois, Indiana, Iowa, Kansas, Michigan, Minnesota, Missouri, Nebraska, North Dakota, Ohio, South Dakota, Wisconsin

West: Alaska, Arizona, California, Colorado, Hawaii, Idaho, Montana, Nevada, New Mexico, Oregon, Utah, Washington, Wyoming

SOURCE: U.S. Department of Commerce Economics and Statistics Administration.

 


Parental Education

Parents' highest level of education is defined by the highest level reported by eighth-graders and twelfth-graders for either parent. Fourth-graders were not asked to indicate their parents' highest level of education because their responses in previous studies were highly variable, and a large percentage of them chose the "I don't know" option.

Exclusion Rates

All 50 states and two jurisdictions (District of Columbia and the Department of Defense Education Activity (DoDEA)) participated in the 2007 reading assessment. To ensure that the samples in each state are representative, NAEP has established policies and procedures to maximize the inclusion of all students in the assessment. Every effort is made to ensure that all selected students who are capable of participating meaningfully in the assessment are assessed. While some students with disabilities (SD) and/or English language learners (ELL) can be assessed without any special procedures, others require accommodations to participate in NAEP. Still other SD and/or ELL students selected by NAEP may not be able to participate. Local school authorities determine whether SD/ELL students require accommodations or should be excluded because they cannot be assessed. The percentage of SD and/or ELL students who are excluded from NAEP assessments varies from one jurisdiction to another and within a jurisdiction over time. Read more about the potential effects of exclusion rates on assessment results.

See additional information about the percentages of students with disabilities and English language learners.

See the types of accommodations permitted for students with disabilities and/or English language learners at the national level.

 

Statistical Significance

Differences between scale scores and between percentages that are discussed in the results on this website take into account the standard errors associated with the estimates. Comparisons are based on statistical tests that consider both the magnitude of the difference between the group average scores or percentages and the standard errors of those statistics. Throughout the results, differences between scores or between percentages are discussed only when they are significant from a statistical perspective.

All differences reported are significant at the 0.05 level with appropriate adjustments for multiple comparisons. The term "significant" is not intended to imply a judgment about the absolute magnitude or the educational relevance of the differences. It is intended to identify statistically dependable population differences to help inform dialogue among policymakers, educators, and the public.

Comparisons across states use a t-test (the method most commonly used to evaluate the difference in means between two groups) to determine whether a difference is statistically significant. There are four possible outcomes when comparing the average scores of jurisdictions A and B (a minimal sketch of such a test follows the list):

  • Jurisdiction A has a higher average score than jurisdiction B,
  • Jurisdiction A has a lower average score than jurisdiction B,
  • No difference in scores is detected between jurisdiction A and B, or
  • The sample does not permit a reliable statistical test. (This may occur when the sample size for a particular group is small.)
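
The sketch below uses the conventional large-sample formula in which the standard error of the difference is the square root of the sum of the squared standard errors. It treats the two samples as independent; NAEP's actual procedures also adjust the critical value for multiple comparisons and account for part-whole dependence (e.g., a state compared with the nation).

```python
import math

def compare_jurisdictions(mean_a, se_a, mean_b, se_b, critical=1.96):
    """Return which of the first three outcomes above applies (independent samples)."""
    se_diff = math.sqrt(se_a ** 2 + se_b ** 2)
    t = (mean_a - mean_b) / se_diff
    if t > critical:
        return "A higher than B"
    if t < -critical:
        return "A lower than B"
    return "no difference detected"

# Hypothetical jurisdiction results (mean scale score, standard error).
print(compare_jurisdictions(224.0, 0.9, 220.0, 1.1))  # A higher than B
print(compare_jurisdictions(222.0, 1.4, 220.0, 1.3))  # no difference detected
```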

When comparing all jurisdictions to each other, the testing procedures are based on all pairwise combinations of the jurisdictions in a particular year or pair of years. It is possible that a given state or jurisdiction has a higher average scale score than the nation or another state but that the difference is not statistically significant, while another state with the same average score shows a statistically significant difference compared with the nation or the other state. These situations arise because standard errors vary across states/jurisdictions and estimates.

Cautions in Interpretations

Users of this website are cautioned against interpreting NAEP results as implying causal relations. Inferences related to student group performance or to the effectiveness of public and nonpublic schools, for example, should take into consideration the many socioeconomic and educational factors that may also have an impact on performance.

The NAEP reading scale makes it possible to examine relationships between students' performance and various background factors measured by NAEP. However, a relationship that exists between achievement and another variable does not reveal its underlying cause, which may be influenced by a number of other variables. Similarly, the assessments do not reflect the influence of unmeasured variables. The results are most useful when they are considered in combination with other knowledge about the student population and the educational system, such as trends in instruction, changes in the school-age population, and societal demands and expectations.

Beginning in 2002, the NAEP national sample was obtained by aggregating the samples of public school students from each state and jurisdiction, and then supplementing the aggregate sample with a nationally representative sample of students from nonpublic schools, rather than by obtaining an independently selected national sample. As a consequence, the national sample size increased, and smaller differences between years or between groups of students were found to be statistically significant than would have been detected in previous assessments.

A caution is also warranted for some small population group estimates. At times in the results pages, smaller population groups show very large increases or decreases across years in average scores. For example, fourth-grade Hispanic students in Delaware are reported as having a 36-point score increase between 1998 and 2002. Such score gains often need to be interpreted with extreme caution. For one thing, the effects of exclusion-rate changes may be more marked for small groups than for the population as a whole. To continue with the Delaware example, 2 percent of Hispanic students were excluded in 1998; this figure increased to 21 percent in 2002. Also, the standard errors around the score estimates for small groups are often quite large, which in turn means the standard error around the gain is also large. While the Delaware Hispanic student scores went up 36 points, the standard error of the gain is almost 12 points, which means that statisticians are only confident that the estimate is correct to within roughly two standard errors, about 23.5 points (i.e., 36 ± 23.5 points).
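
In code, the Delaware arithmetic looks like this (the 23.5-point margin is two standard errors, a conventional approximation to a 95 percent confidence interval):

```python
gain = 36.0      # reported score increase, 1998 to 2002
se_gain = 11.75  # standard error of the gain, from the text
margin = 2 * se_gain
print(f"gain = {gain} +/- {margin} points")  # gain = 36.0 +/- 23.5 points
```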


