The Nation's Report Card: Mathematics

Interpreting NAEP Mathematics Results

Overview of the Assessment
Reporting the Assessment—Scale Scores and Achievement Levels
Description of Mathematics Performance by Item Maps for Each Grade
Results Are Estimates
NAEP Reporting Groups
Exclusion Rates
Statistical Significance
Cautions in Interpretations

Overview of the Assessment

NAEP assesses mathematics performance by administering assessments to samples that are representative of the nation's students. The content of the NAEP mathematics assessment is determined by a framework incorporating expert perspectives about mathematics knowledge and its measurement. Read more about what the assessment measures, how it was developed, who took the assessment, and how the assessment was administered. Find out how the main NAEP mathematics assessment differs from long-term trend mathematics.

Beginning in 2003, the NAEP national sample was obtained by aggregating the samples of public school students from each state and jurisdiction, and then supplementing the aggregate sample with a nationally representative sample of students from nonpublic schools, rather than by obtaining an independently selected national sample. As a consequence, the national sample size increased, and smaller differences between years or between groups of students were found to be statistically significant than would have been detected in previous assessments. In keeping with past practice, all statistically significant differences are indicated in the current web results pages.

Comparisons are made to results from previous years in which the assessment was administered. In addition to the 2007 results, national results are reported from the 1990, 1992, 1996, 2000, 2003, and 2005 assessments at grades 4 and 8. The 2005 mathematics framework for grade 12 introduced changes from the previous framework in order to reflect adjustments in curricular emphases and to ensure an appropriate balance of content. See below for a summary of the changes to the framework. Consequently, the twelfth-grade results in 2005 cannot be compared to previous assessments in mathematics. There were, however, some questions from the 2000 assessment that fit the requirements in the new framework and were used again in 2005. A special analysis was performed by the Human Resources Research Organization to see how students' performance on this set of items differed between the two years. To download a copy of this analysis (135K PDF), visit the Human Resources Research Organization website.

Changes to the grade 12 NAEP mathematics assessment in 2005
  • Content areas: four in 2005, with measurement and geometry combined into one because the majority of twelfth-grade measurement topics are geometric in nature, versus five content areas previously.
  • Distribution of questions across content areas (2005 vs. previous):
       Number properties and operations: 10% vs. 20%
       Measurement and geometry: 30% vs. 15% and 20%
       Data analysis and probability: 25% vs. 20%
       Algebra: 35% vs. 25%
  • Reporting scale: 0-300 single-grade scale in 2005, versus 0-500 cross-grade scale previously.
  • Calculators: in 2005, students were given the option to bring their own graphing or scientific calculator; previously, students were provided with a standard model scientific calculator.

The more recent national results (those from 1996 or later) are based on administration procedures in which testing accommodations were permitted for students with disabilities and for English language learners. Accommodations were not permitted in earlier assessments (1990 and 1992). Read more about NAEP's policy of inclusion. Comparisons between results from 2007 and those from assessment years in which both types of administration procedures were used (1996 and 2000) are discussed based on the results when accommodations were permitted, even though significant differences in results when accommodations were not permitted may be noted in figures and tables. Changes in student performance across years or differences between groups of students in 2007 are discussed only if they have been determined to be statistically significant.


Reporting the Assessment—Scale Scores and Achievement Levels

The results of student performance on the NAEP mathematics assessment are presented on this website in two ways: as average scores on the NAEP mathematics scale and as the percentages of students attaining NAEP mathematics achievement levels. The average scale scores represent how students performed on the assessment. The achievement levels represent how that performance measured up against set expectations for achievement. Thus, the average scale scores represent what students know and can do, while the achievement-level results indicate the degree to which student performance meets expectations of what they should know and be able to do.

Average mathematics scale score results are based on the NAEP mathematics scale, which ranges from 0 to 500 for grades 4 and 8 and 0 to 300 for grade 12. The 2005 mathematics framework initiated minor changes at grades 4 and 8 and more substantial changes at grade 12. This meant that the current trend line could be maintained at the lower grades but a new trend line needed to be established at grade 12. There were no further changes in the 2007 framework.

The NAEP mathematics scale is a composite combining separate scales for each of the mathematics content areas: (1) number properties and operations, (2) measurement, (3) geometry, (4) data analysis and probability, and (5) algebra. Average scale scores are computed for groups, not for individual students; they are based on analyses of the percentages of students who answered each item successfully, and NAEP does not produce individual student scores. The results for all three grades are placed together on one reporting scale. In the base year of the trend line, the three grades are analyzed together to create a cross-grade scale. In subsequent years, the data from each grade level are analyzed separately and then linked to the original cross-grade scale established in the base year. In 2005, the twelfth-grade results were removed from the cross-grade scale and put on a separate within-grade scale. Comparisons of overall student performance across grade levels on a cross-grade scale are acceptable; however, other types of comparisons or inferences may not be supported by the available information. Note that while the scale is cross-grade, the skills tested and the material on the test increase in complexity and difficulty at each higher grade level, so different things are measured at the different grades even though a progression is implied.
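The year-to-year linking described above can be illustrated with a simplified mean/sigma linear linking sketch. All figures below are hypothetical, and NAEP's operational linking uses IRT-based procedures rather than this two-moment shortcut; this only shows the idea of placing a new year's provisional scores on the established scale.

```python
# Simplified sketch of mean/sigma linear linking (hypothetical values;
# NAEP's actual linking uses IRT-based procedures).
import statistics

def linking_constants(base_year_scores, new_year_scores):
    """Find A and B so that A*x + B maps the new-year score metric onto
    the base-year scale, matching the mean and standard deviation of a
    common linking sample."""
    a = statistics.pstdev(base_year_scores) / statistics.pstdev(new_year_scores)
    b = statistics.mean(base_year_scores) - a * statistics.mean(new_year_scores)
    return a, b

# Hypothetical linking samples: base-year scale scores and new-year
# provisional (theta-like) scores for the same linking population.
base = [225.0, 240.0, 250.0, 265.0, 280.0]
new = [-0.8, -0.3, 0.0, 0.5, 1.1]
a, b = linking_constants(base, new)
linked = [a * x + b for x in new]  # new-year results on the base-year scale
```

By construction the linked scores reproduce the base-year mean and spread for the linking sample, which is what keeps the trend line comparable across years.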

Achievement-level results are presented in terms of mathematics achievement levels as adopted by the National Assessment Governing Board, and are intended to measure how well students' actual achievement matches the achievement desired of them. For each grade tested, the Governing Board has adopted three achievement levels: Basic, Proficient, and Advanced. For reporting purposes, the achievement-level cut scores are placed on the mathematics scales, resulting in four ranges: below Basic, Basic, Proficient, and Advanced.

The Governing Board established its achievement levels in 1990 based upon the mathematics content framework and a standard-setting process involving a cross section of educators and interested citizens from across the nation who were asked to judge what students should know and be able to do relative to the content set out in the NAEP mathematics framework. As provided by law, NCES has determined that the achievement levels are to be considered on a trial basis and should be interpreted and used with caution. However, both NCES and the Governing Board believe these performance standards are useful for understanding trends in student achievement.


Description of Mathematics Performance by Item Maps for Each Grade

Item maps illustrate the knowledge and skills demonstrated by students performing at different scale scores on the NAEP mathematics assessment. To provide additional context, the cut points for the three NAEP achievement levels are marked on the item maps. The map location for each question represents the scale score attained by students who had a specified probability of answering it successfully: 65 percent for a constructed-response question, 74 percent for a four-option multiple-choice question, and 72 percent for a five-option multiple-choice question. For constructed-response questions, responses may be completely or partially correct; therefore, a question can map to several points on the scale.
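Under an item response theory (IRT) model, that mapping amounts to solving for the scale point at which the model-predicted probability of success equals the chosen threshold. The sketch below uses a three-parameter logistic (3PL) model with made-up item parameters; NAEP's operational item-mapping procedures are more involved.

```python
# Sketch: locate an item on the scale as the theta at which the
# probability of success equals a target p, under the 3PL model
# P(theta) = c + (1 - c) / (1 + exp(-1.7 * a * (theta - b))).
# Item parameters a, b, c below are hypothetical.
import math

def map_location(a, b, c, p):
    """Invert the 3PL curve: theta such that P(theta) = p."""
    return b + math.log((p - c) / (1 - p)) / (1.7 * a)

# A four-option multiple-choice item maps at the point where students
# have a 74 percent chance of success; a constructed-response item
# (no guessing parameter) maps at the 65 percent point.
theta_mc = map_location(a=1.0, b=0.0, c=0.25, p=0.74)
theta_cr = map_location(a=1.0, b=0.0, c=0.0, p=0.65)
```

The higher threshold for multiple-choice items (74 versus 65 percent) compensates for the possibility of answering correctly by guessing.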

Approximately 30 mathematics questions per grade have been selected and placed on each item map. Explore the mathematics item maps.


Results Are Estimates

The average scores and percentages presented on this website are estimates because they are based on representative samples of students rather than on the entire population of students. Moreover, the collection of subject-area questions used at each grade level is but a sample of the many questions that could have been asked. As such, NAEP results are subject to a measure of uncertainty, reflected in the standard error of the estimates. The standard errors for the estimated scale scores and percentages in the figures and tables presented on this website are available through the NAEP Data Explorer.
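The standard errors mentioned above come from replication methods. NAEP's operational procedure uses paired-jackknife replicate weights; the sketch below shows the basic delete-one jackknife idea for the standard error of a mean, with made-up scores.

```python
# Simplified delete-one jackknife standard error for a group mean
# (NAEP's operational procedure uses paired-jackknife replicate
# weights; the scores below are made up for illustration).
import math

def jackknife_se(values):
    n = len(values)
    total = sum(values)
    # Recompute the mean with each observation left out in turn.
    loo_means = [(total - v) / (n - 1) for v in values]
    grand = sum(loo_means) / n
    return math.sqrt((n - 1) / n * sum((m - grand) ** 2 for m in loo_means))

scores = [228.0, 235.0, 241.0, 247.0, 254.0]
se = jackknife_se(scores)
```

For a simple mean this reproduces the familiar sd/sqrt(n) formula; the value of the jackknife is that the same recipe also yields standard errors for statistics with no closed-form formula under NAEP's complex sample design.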


NAEP Reporting Groups

Results are provided for groups of students defined by shared characteristics—gender, race or ethnicity, eligibility for free/reduced-price school lunch, students with disabilities, and students identified as English language learners. Based on participation rate criteria, results are reported for subpopulations only when sufficient numbers of students and adequate school representation are present. The minimum requirement is at least 62 students in a particular group from at least five primary sampling units (PSUs). However, the data for all students, regardless of whether their group was reported separately, were included in computing overall results. Explanations of the reporting groups are presented below.
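The minimum-reporting rule stated above (at least 62 students drawn from at least five primary sampling units) reduces to a simple check. The record layout below is a hypothetical simplification: one PSU identifier per assessed student in the group.

```python
# The minimum-reporting rule: a group's results are reported only when
# it has at least 62 students from at least five primary sampling
# units (PSUs). The record layout here is a hypothetical sketch.

MIN_STUDENTS = 62
MIN_PSUS = 5

def reportable(student_psus):
    """student_psus: one PSU identifier per assessed student in the group."""
    return len(student_psus) >= MIN_STUDENTS and len(set(student_psus)) >= MIN_PSUS

# 70 students spread across 7 PSUs: meets both criteria.
group_a = [i % 7 for i in range(70)]
# 70 students but only 3 PSUs: fails the PSU criterion.
group_b = [i % 3 for i in range(70)]
```

Note that even when a group fails this check and is not reported separately, its students still count toward the overall results, as the paragraph above states.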

Gender

Results are reported separately for males and females.

Race/Ethnicity

In all NAEP assessments, data about student race/ethnicity are collected from two sources: school records and student self-reports. Before 2002, NAEP used students' self-reports of their race and ethnicity on a background questionnaire as the source of race/ethnicity data. In 2002, NAEP changed the student race/ethnicity variable highlighted in its reports. Starting in 2002, NAEP reports of students' race and ethnicity are based on school records, with students' self-reports used only if school data are missing. Information based on student self-reported race/ethnicity will continue to be reported in the NAEP Data Explorer.

In order to allow comparisons across years, assessment results presented are based on school-reported information for six mutually exclusive racial/ethnic categories: White, Black, Hispanic, Asian/Pacific Islander, American Indian (including Alaska Native), and Other. Students who identified with more than one of the first five categories or had a background other than the ones listed were categorized as Other.

Eligibility for Free/Reduced-Price School Lunch

As part of the Department of Agriculture's National School Lunch Program, schools can receive cash subsidies and donated commodities in return for offering free or reduced-price lunches to eligible children. Based on available school records, students were classified as either currently eligible for free/reduced-price school lunch or not eligible. Eligibility for free and reduced-price lunches is determined by students' family income in relation to the federally established poverty level. Students whose family income is at or below 130 percent of the poverty level qualify to receive free lunch, and students whose family income is between 130 percent and 185 percent of the poverty level qualify to receive reduced-price lunch. For the period July 1, 2006 through June 30, 2007, for a family of four, 130 percent of the poverty level was $26,000 and 185 percent was $37,000. The classification applies only to the school year when the assessment was administered (i.e., the 2006–07 school year) and is not based on eligibility in previous years. If school records were not available, the student was classified as "Information not available." If the school did not participate in the program, all students in that school were classified as "Information not available."
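The thresholds above reduce to simple arithmetic. The sketch below infers the underlying poverty level from the figures in the text ($26,000 / 1.30 = $20,000 for a family of four in 2006-07); that inferred figure and the function name are illustrative, not part of the program's official definitions.

```python
# Eligibility thresholds from the text, as arithmetic. The poverty
# level is inferred from the stated figures: 130 percent of it is
# $26,000 and 185 percent is $37,000, implying $20,000 for a family
# of four in 2006-07.

POVERTY_LEVEL = 20_000  # family of four, 2006-07 (inferred)

def lunch_eligibility(family_income):
    if family_income <= 1.30 * POVERTY_LEVEL:   # at or below $26,000
        return "free lunch"
    if family_income <= 1.85 * POVERTY_LEVEL:   # at or below $37,000
        return "reduced-price lunch"
    return "not eligible"
```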

Students with Disabilities (SD)

Results are reported for students who were identified by school records as having a disability. A student with a disability may need specially designed instruction to meet his or her learning goals. A student with a disability will usually have an Individualized Education Program (IEP), which guides his or her special education instruction. Students with disabilities are often referred to as special education students and may be classified by their school as learning disabled (LD) or emotionally disturbed (ED).

English Language Learners (ELL)

Results are reported for students who were identified by school records as being English language learners. (Note that English language learners were previously referred to as limited English proficient (LEP).)

Type of School

The national results are based on a representative sample of students in both public schools and nonpublic schools. Nonpublic schools include private schools, Bureau of Indian Affairs schools, and Department of Defense schools. Private schools include Catholic, Conservative Christian, Lutheran, and other private schools. The state results are based on public school students only.

Type of Location

NAEP results are reported for four mutually exclusive categories of school location: city, suburb, town, and rural. The categories are based on standard definitions established by the Federal Office of Management and Budget using population and geographic information from the U.S. Census Bureau. Schools are assigned to these categories in the NCES Common Core of Data based on their physical address. The classification system was revised for 2007; therefore, trend comparisons to previous years are not available. The new locale codes are based on an address's proximity to an urbanized area (a densely settled core with densely settled surrounding areas). This is a change from the original system based on metropolitan statistical areas. To distinguish the two systems, the new system is referred to as "urban-centric locale codes." The urban-centric locale code system classifies territory into four major types: city, suburban, town, and rural. Each type has three subcategories. For city and suburb, these are gradations of size—large, midsize, and small. Towns and rural areas are further distinguished by their distance from an urbanized area. They can be characterized as fringe, distant, or remote.

Region

Prior to 2003, NAEP results were reported for four NAEP-defined regions of the nation: Northeast, Southeast, Central, and West. As of 2003, to align NAEP with other federal data collections, NAEP analysis and reports have used the U.S. Census Bureau's definition of "region." The four regions defined by the U.S. Census Bureau are Northeast, South, Midwest, and West. The Central region used by NAEP before 2003 contained the same states as the Midwest region defined by the U.S. Census. The former Southeast region consisted of the states in the Census-defined South minus Delaware, the District of Columbia, Maryland, Oklahoma, Texas, and the section of Virginia in the District of Columbia metropolitan area. The former West region consisted of Oklahoma, Texas, and the states in the Census-defined West. The former Northeast region consisted of the states in the Census-defined Northeast plus Delaware, the District of Columbia, Maryland, and the section of Virginia in the District of Columbia metropolitan area. The table below shows how states are subdivided into these Census regions. All 50 states and the District of Columbia are listed. Other jurisdictions, including the Department of Defense Educational Activity schools, are not assigned to any region.

States within regions of the country defined by the U.S. Census Bureau

Northeast

South

Midwest

West

Connecticut
Maine
Massachusetts
New Hampshire
New Jersey
New York
Pennsylvania
Rhode Island
Vermont

Alabama
Arkansas
Delaware
District of Columbia
Florida
Georgia
Kentucky
Louisiana
Maryland
Mississippi
North Carolina
Oklahoma
South Carolina
Tennessee
Texas
Virginia
West Virginia

Illinois
Indiana
Iowa
Kansas
Michigan
Minnesota
Missouri
Nebraska
North Dakota
Ohio
South Dakota
Wisconsin

Alaska
Arizona
California
Colorado
Hawaii
Idaho
Montana
Nevada
New Mexico
Oregon
Utah
Washington
Wyoming

SOURCE: U.S. Department of Commerce Economics and Statistics Administration.



Parental Education

Parents' highest level of education is defined by the highest level reported by eighth-graders and twelfth-graders for either parent. Fourth-graders were not asked to indicate their parents' highest level of education because their responses in previous studies were highly variable, and a large percentage of them chose the "I don't know" option.


Exclusion Rates

All 50 states and 2 other jurisdictions participated in the 2007 mathematics assessment. To ensure that the samples in each state are representative, NAEP has established policies and procedures to maximize the inclusion of all students in the assessment. Every effort is made to ensure that all selected students who are capable of participating meaningfully in the assessment are assessed. While some students with disabilities (SD) and/or English language learners (ELL) students can be assessed without any special procedures, others require accommodations to participate in NAEP. Still other SD and/or ELL students selected by NAEP may not be able to participate. Local school authorities determine whether SD/ELL students require accommodations or shall be excluded because they cannot be assessed. The percentage of SD and/or ELL students who are excluded from NAEP assessments varies from one jurisdiction to another and within a jurisdiction over time. Read more about the potential effects of exclusion rates on assessment results.

See additional information about the percentages of students with disabilities and English language learners.

See the types of accommodations permitted for students with disabilities and/or English language learners at the national level.

Exclusion rates for other subjects, as well as rates of use of specific accommodations, are available.


Statistical Significance

The differences between scale scores and between percentages discussed in the results on this website take into account the standard errors associated with the estimates. Comparisons are based on statistical tests that consider both the magnitude of the difference between the group average scores or percentages and the standard errors of those statistics. Throughout the results, differences between scores or between percentages are discussed only when they are significant from a statistical perspective.

All differences reported are significant at the 0.05 level with appropriate adjustments for multiple comparisons. The term "significant" is not intended to imply a judgment about the absolute magnitude or the educational relevance of the differences. It is intended to identify statistically dependable population differences to help inform dialogue among policymakers, educators, and the public.

Comparisons across states use a t-test (the method most commonly used to evaluate the differences in means between two groups) to detect whether a difference is statistically significant or not. There are four possible outcomes when comparing the average scores of jurisdictions A and B:

  • Jurisdiction A has a higher average score than jurisdiction B,
  • Jurisdiction A has a lower average score than jurisdiction B,
  • No difference in scores is detected between jurisdictions A and B, or
  • The sample does not permit a reliable statistical test. (This may occur when the sample size for a particular group is small.)

When comparing all jurisdictions to each other, the testing procedures are based on all pairwise combinations of the jurisdictions in a particular year or pair of years. It is possible that one state or jurisdiction has a higher average scale score than the nation or another state but that the difference is not statistically significant, while another state with the same average score shows a statistically significant difference compared to the nation or the other state. These situations arise because standard errors vary across states, jurisdictions, and estimates.
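The pairwise comparison described above can be sketched as a two-sample t-statistic built from the group means and their standard errors. The figures below are hypothetical, and the fixed 1.96 critical value is a simplification: NAEP's operational tests also adjust for multiple comparisons, which this sketch omits.

```python
# Sketch of a pairwise jurisdiction comparison: a t-statistic from
# two means and their standard errors, with |t| > 1.96 as the
# (unadjusted) 5-percent criterion. All figures are hypothetical.
import math

def compare(mean_a, se_a, mean_b, se_b, critical=1.96):
    t = (mean_a - mean_b) / math.sqrt(se_a ** 2 + se_b ** 2)
    if t > critical:
        return "A higher than B"
    if t < -critical:
        return "A lower than B"
    return "no difference detected"

# The same 3-point gap with different standard errors: only the more
# precisely estimated comparison is statistically significant, which
# is why two states with the same average score can be reported
# differently.
precise = compare(242.0, 0.8, 239.0, 0.9)     # t is about 2.49
imprecise = compare(242.0, 1.5, 239.0, 1.8)   # t is about 1.28
```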


Cautions in Interpretations

Users of this website are cautioned against interpreting NAEP results as implying causal relations. Inferences related to group performance or to the effectiveness of public and nonpublic schools, for example, should take into consideration the many socioeconomic and educational factors that may also impact performance.

The NAEP mathematics scale makes it possible to examine relationships between students' performance and various background factors measured by NAEP. However, a relationship that exists between achievement and another variable does not reveal its underlying cause, which may be influenced by a number of other variables. Similarly, the assessments do not reflect the influence of unmeasured variables. The results are most useful when they are considered in combination with other knowledge about the student population and the educational system, such as trends in instruction, changes in the school-age population, and societal demands and expectations.


A caution is also warranted for some small population group estimates. At times in the results pages, smaller population groups show very large increases or decreases across years in average scores. For example, fourth-grade Hispanic students in Delaware are reported as having a 36-point score increase between 1998 and 2002. Such score gains often need to be interpreted with caution. For one thing, the effects of exclusion-rate changes may be more marked for small groups than for the whole population: in the Delaware example, the percentage of Hispanic students excluded rose from 2 percent in 1998 to 21 percent in 2002. In addition, the standard errors around score estimates for small groups are often quite large, which in turn means the standard error around the gain is also large. While the Delaware Hispanic students' scores went up 36 points, the standard error of the gain is almost 12 points, which means that statisticians are confident only that the true gain lies within about 23.5 points of the estimate (i.e., 36 ± 23.5 points).
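The Delaware example works out as follows: a 95-percent confidence interval for a gain is roughly the estimated gain plus or minus 1.96 standard errors of the gain. The standard error is rounded to 12 points here for illustration.

```python
# The Delaware example as arithmetic: 95-percent confidence interval
# for a score gain, using the rounded standard error from the text.

gain = 36.0
se_gain = 12.0                        # "almost 12 points"
half_width = 1.96 * se_gain           # about 23.5 points
interval = (gain - half_width, gain + half_width)
```

The resulting interval runs from roughly 12 to 60 points, so while the gain is statistically significant (the interval excludes zero), its magnitude is estimated very imprecisely.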

Return to the mathematics subject information.



Last updated 16 April 2008 (JM)