Guide to Sources

National Center for Education Statistics (NCES) Sources

Common Core of Data

The Common Core of Data (CCD) is the Department of Education’s primary database on public elementary and secondary education in the United States. It is a comprehensive, annual, national statistical database of all public elementary and secondary schools and school districts containing data designed to be comparable across all states. This database can be used to select samples for other NCES surveys and provide basic information and descriptive statistics on public elementary and secondary schools and schooling in general. Some of the CCD’s component surveys date back to the 1930s. The integrated CCD was first implemented in the 1986-87 school year.

The CCD collects statistical information annually from approximately 100,000 public elementary and secondary schools and approximately 18,000 public school districts (including supervisory unions and regional education service agencies) in the 50 states, the District of Columbia, Department of Defense dependents schools (DoDDS), and the outlying areas. Three categories of information are collected in the CCD survey: general descriptive information on schools and school districts; data on students and staff; and fiscal data. The general descriptive information includes name, address, phone number, and type of locale; the data on students and staff include selected demographic characteristics; and the fiscal data pertain to revenues and current expenditures.

The EDFacts data collection system is the primary collection tool for the CCD. NCES works collaboratively with the Department of Education’s Performance Information Management Service to develop the CCD collection procedures and data definitions. Coordinators from State Education Agencies (SEAs) submit the CCD data at different levels (school, agency, and state) to the EDFacts collection system. Prior to submitting CCD files to EDFacts, SEAs must collect and compile information from their respective Local Education Agencies (LEAs) through established administrative records systems within their state or jurisdiction.

Once SEAs have completed their submissions, the CCD survey staff analyzes and verifies the data for quality assurance. Even though the CCD is a universe collection and thus not subject to sampling errors, nonsampling errors can occur. The two potential sources of nonsampling errors are nonresponse and inaccurate reporting. NCES attempts to minimize nonsampling errors through the use of annual training of SEA coordinators, extensive quality reviews, and survey editing procedures. In addition, each year, SEAs are given the opportunity to revise their state-level aggregates from the previous survey cycle.

The CCD survey consists of six components: the Public Elementary/Secondary School Universe Survey, the Local Education Agency (School District) Universe Survey, the State Nonfiscal Survey of Public Elementary/Secondary Education, the National Public Education Financial Survey (NPEFS), the School District Finance Survey, and the Teacher Compensation Survey.

Public Elementary/Secondary School Universe Survey

The Public Elementary/Secondary School Universe Survey collects information on all public schools providing education services to prekindergarten, kindergarten, grades 1-12, and ungraded students. Data include the school's operating status, locale, and type, as well as student enrollment for every grade; the number of students in each racial/ethnic group and the number eligible for free-lunch programs; and the number of reported full-time-equivalent (FTE) teachers.
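
To make the structure of these data concrete, here is a minimal sketch of how a school-level record with these fields might be represented in Python; the field names and types are illustrative assumptions, not the official CCD file layout.

```python
from dataclasses import dataclass

@dataclass
class SchoolRecord:
    """Illustrative CCD-style school record (hypothetical field names)."""
    name: str
    operating_status: str                  # e.g., "open", "closed", "new"
    locale: str                            # e.g., "city", "suburb", "town", "rural"
    school_type: str                       # e.g., "regular", "special education"
    enrollment_by_grade: dict[str, int]    # grade label -> student count
    enrollment_by_race: dict[str, int]     # racial/ethnic group -> student count
    free_lunch_eligible: int               # students eligible for free lunch
    fte_teachers: float                    # full-time-equivalent teachers

def total_enrollment(school: SchoolRecord) -> int:
    """Total enrollment is the sum of the grade-level counts."""
    return sum(school.enrollment_by_grade.values())
```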

Local Education Agency (School District) Universe Survey

The Local Education Agency (LEA) Universe Survey collects information on all school districts and administrative units providing education services to prekindergarten, kindergarten, grades 1-12, and ungraded students. Data include county location, metropolitan status, and type; the total number of students enrolled in every grade; the number of ungraded students; the number of English language learner (ELL) students served in appropriate programs; and the number of instructional, support, and administrative staff. Data also include the number of high school graduates, other completers, and dropouts. Since 2007-08, the high school dropout and completion data have been separated from the LEA universe survey data and released as standalone data.

State Nonfiscal Survey of Public Elementary/Secondary Education 

The State Nonfiscal Survey of Public Elementary/Secondary Education collects information on all students and staff aggregated to the state level, including the number of students by grade level; counts of FTE staff by major employment category; and high school completers by race/ethnicity.

National Public Education Financial Survey 

The National Public Education Financial Survey (NPEFS) collects detailed finance data at the state level, including average daily attendance, school district revenues by source (local, state, federal), and expenditures by function (instruction, support services, and noninstruction) and object (salaries, supplies, etc.). It also reports capital outlay and debt service expenditures. 
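
One statistic commonly derived from these elements is current expenditure per pupil: current expenditures divided by average daily attendance. The sketch below illustrates the arithmetic with made-up state-level figures; it is not an official NPEFS formula.

```python
# Hypothetical state-level figures (values are made up).
current_expenditures = 9_500_000_000    # total current expenditures, in dollars
average_daily_attendance = 950_000      # ADA, in students

# Per-pupil current expenditure: total current spending divided by ADA.
per_pupil = current_expenditures / average_daily_attendance
print(f"Current expenditure per pupil: ${per_pupil:,.0f}")   # $10,000
```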

School District Finance Survey

The School District Finance Survey collects detailed data by school district, including revenues by source, expenditures by function and subfunction, and enrollment.

Teacher Compensation Survey

The Teacher Compensation Survey collects total compensation, teacher status, and demographic data about individual teachers from multiple states.

Further information about the CCD and its survey components is available at http://nces.ed.gov/ccd/.

Fast Response Survey System

The Fast Response Survey System (FRSS) was established in 1975 to collect issue-oriented data quickly, with a minimal burden on respondents. The FRSS, whose surveys collect and report data on key education issues at the elementary and secondary levels, was designed to meet the data needs of Department of Education analysts, planners, and decision-makers when information cannot be collected quickly through NCES’s large recurring surveys. Findings from FRSS surveys have been included in congressional reports, testimony to congressional subcommittees, NCES reports, and other Department of Education reports. The findings are also often used by state and local education officials.

Data collected through FRSS surveys are representative at the national level, drawing from a universe that is appropriate for each study. The FRSS collects data from state education agencies and national samples of other educational organizations and participants, including local education agencies, public and private elementary and secondary schools, elementary and secondary school teachers and principals, and public libraries and school libraries. To ensure a minimal burden on respondents, the surveys are generally limited to three pages of questions, with a response burden of about 30 minutes per respondent. Sample sizes are relatively small (usually about 1,000 to 1,500 respondents per survey) so that data collection can be completed quickly.

Further information about the FRSS is available at http://nces.ed.gov/surveys/frss.

Integrated Postsecondary Education Data System

The Integrated Postsecondary Education Data System (IPEDS) is the core program that NCES uses for collecting data on postsecondary education. IPEDS is a single, comprehensive system that encompasses all identified institutions whose primary purpose is to provide postsecondary education. Before IPEDS, some of the same information was collected through the Higher Education General Information Survey (HEGIS).

IPEDS consists of eight interrelated components that are collected in the fall, winter, and spring each year. Data on institutional characteristics and completions are collected in the fall. Data on employees by assigned position (EAP), salaries, and fall staff are collected in the winter. Data on enrollment, student financial aid, finances, and graduation rates are collected in the spring. During the winter 2005-06 survey, the EAP, fall staff, and salaries components were merged into the human resources component. In 2007-08, the enrollment component was broken into two separate components: 12-month enrollment (collected in the fall) and fall enrollment (collected in the spring).

Researchers can use IPEDS to analyze information on (1) enrollments of undergraduates, first-time freshmen, and graduate and first-professional students by race/ethnicity and sex; (2) institutional revenue and expenditure patterns by source of income and type of expense; (3) completions (awards) by level of program, level of award, race/ethnicity, and sex; (4) characteristics of postsecondary institutions, including tuition, room and board charges, and calendar systems; (5) status of career and technical education programs; and (6) other issues of interest.
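
Most of these analyses reduce to grouped tabulations over institution-level extracts. As a rough illustration of item (3), the sketch below sums completions by award level and sex in pandas; the table and column names are hypothetical, not actual IPEDS variable names.

```python
import pandas as pd

# Hypothetical IPEDS-style completions extract (column names are
# illustrative, not official IPEDS variable names).
completions = pd.DataFrame({
    "award_level": ["bachelor's", "bachelor's", "master's", "master's"],
    "sex":         ["men", "women", "men", "women"],
    "awards":      [520, 610, 140, 190],
})

# Completions (awards) by level of award and sex.
print(completions.groupby(["award_level", "sex"])["awards"].sum())
```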

Beginning in 1993, completion of the IPEDS survey became mandatory for all postsecondary institutions with a Program Participation Agreement (PPA) with the Office of Postsecondary Education (OPE), U.S. Department of Education; that is, for institutions that participate in, or are eligible to participate in, any federal student financial assistance program authorized by Title IV of the Higher Education Act of 1965, as amended (20 USC 1094[a][17]). Such programs include Pell Grants and Stafford Loans given to students at 4-year-and-higher, 2-but-less-than-4-year, and less-than-2-year postsecondary institutions, including both degree-granting and non-degree-granting institutions. For institutions not eligible to participate in Title IV programs, participation in IPEDS is voluntary. For years prior to 1993, only national-level estimates, based on a sample of institutions, are available for private less-than-2-year institutions.

Further information about the IPEDS classification of educational institutions is available in Appendix C - Commonly Used Measures. Further information about IPEDS is available at http://nces.ed.gov/ipeds/.

National Assessment of Educational Progress

The National Assessment of Educational Progress (NAEP) is a series of cross-sectional studies initially implemented in 1969 to assess the educational achievement of U.S. students and monitor changes in those achievements. At the national level, NAEP is divided into two assessments: long-term trend NAEP and main NAEP.

Long-term trend

NAEP long-term trend assessments are designed to inform the nation of changes in the basic achievement of America’s youth. Nationally representative samples of students have been assessed in science, mathematics, and reading at ages 9, 13, and 17 since the early 1970s. Students were assessed in writing at grades 4, 8, and 11 between 1984 and 1996. To measure trends accurately, assessment items (mostly multiple choice) and procedures have remained unchanged since the first assessment in each subject. Recent trend assessments were conducted in 1994, 1996, 1999, 2004, and 2008. Results are reported as average scale scores for the nation, for regions, and for various subgroups of the population, such as racial and ethnic groups.

Main

In the main national NAEP, a nationally representative sample of students is assessed at grades 4, 8, and 12 in various academic subjects. Student assessments are not designed to permit comparison across grades. The main state NAEP assessed students at both grades 4 and 8 in at least one subject in 1990, 1992, 1994, 1996, 1998, 2000, 2002, and 2003. Since 2003, the main state NAEP has assessed students in at least two subjects, reading and mathematics, every 2 years at grades 4 and 8.

The assessments are based on frameworks developed by the National Assessment Governing Board (NAGB). Items include both multiple-choice and constructed-response (requiring written answers) items. Results are reported in two ways: by average score and by achievement level. Average scores are reported for the nation, for participating states and jurisdictions, and for subgroups of the population. Percentages of students meeting certain achievement levels are also reported for these groups. The achievement levels, developed by NAGB, are at or above Basic, at or above Proficient, and at or above Advanced.
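
The two ways of reporting can be illustrated with a small weighted calculation: an average scale score, and the percentage of students at or above each cut score. In the sketch below, the scores, weights, and cut points are made-up placeholders, not published NAEP values.

```python
import numpy as np

# Made-up student scale scores and sampling weights; the cut scores
# below are placeholders, not published NAEP cut points.
scores  = np.array([215.0, 248.0, 263.0, 281.0, 302.0])
weights = np.array([1.2, 0.9, 1.1, 1.0, 0.8])
cuts = {"Basic": 243, "Proficient": 281, "Advanced": 323}

# Average scale score: the weighted mean of student scores.
print(f"Average scale score: {np.average(scores, weights=weights):.1f}")

# Percentage of students at or above each achievement level.
for level, cut in cuts.items():
    pct = 100 * weights[scores >= cut].sum() / weights.sum()
    print(f"At or above {level}: {pct:.1f}%")
```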

From 1990 until 2001, main NAEP was conducted for states and other jurisdictions that chose to participate.

In 2002, under the provisions of the No Child Left Behind Act of 2001, all states began to participate in main NAEP, and an aggregate of all state samples replaced the separate national sample.

Mathematics assessments were administered in 2000, 2003, 2005, 2007, 2009, and 2011. In 2005, NAGB called for the development of a new mathematics framework. The revisions made to the mathematics framework for the 2005 assessment were intended to reflect recent curricular emphases and better assess the specific objectives for students at each grade level.

The revised mathematics framework focuses on two dimensions: mathematical content and cognitive demand. By considering these two dimensions for each item in the assessment, the framework ensures that NAEP assesses an appropriate balance of content, as well as a variety of ways of knowing and doing mathematics.

For grades 4 and 8, comparisons over time can be made among the assessments prior to and after the implementation of the 2005 framework. The changes to the grade 12 assessment were too drastic to allow the results to be directly compared with previous years. The changes to the grade 12 assessment included adding more questions on algebra, data analysis, and probability to reflect changes in high school mathematics standards and coursework, as well as the merging of the measurement and geometry content areas. The reporting scale for grade 12 mathematics was changed from 0-500 to 0-300.

For more information regarding the 2005 framework revisions, see http://nces.ed.gov/nationsreportcard/mathematics/whatmeasure.asp.

Reading assessments were administered in 2000, 2002, 2003, 2005, 2007, 2009, and 2011. In 2009, a new framework was developed for the 4th-, 8th-, and 12th-grade NAEP reading assessments.

Both a content alignment study and a reading trend or bridge study were conducted to determine if the "new" assessment was comparable to the "old" assessment. Overall, the results of the special analyses suggested that the old and new assessments were similar in terms of their item and scale characteristics and the results they produced for important demographic groups of students. Thus, it was determined that the results of the 2009 reading assessment could still be compared to those from earlier assessment years, thereby maintaining the trend lines first established in 1992. For more information regarding the 2009 reading framework revisions, see http://nces.ed.gov/nationsreportcard/reading/whatmeasure.asp.

Science assessments were administered in 1995-96, 2000, 2005, and 2009. In 2009, a new framework was developed for the 4th-, 8th-, and 12th-grade NAEP science assessments. The 2009 science framework organizes science content into three broad content areas (physical science, life science, and Earth and space sciences), keeping the content current with key developments in science curriculum standards, assessments, and research.

The 2009 framework change rendered the results from the 2009 assessment not comparable to the results from previous assessment years. For more information regarding the 2009 science framework and the specific content areas, see http://www.nagb.org/publications/frameworks/science-09.pdf.

Other NAEP assessments include the geography assessments in 1993-94, 2000-01, and 2009-10; the U.S. history assessments in 2001, 2006, and 2010; and the civics assessments in 1998, 2006, and 2010.

For additional information on NAEP, including technical aspects of scoring and assessment validity and more specific information on achievement levels, see http://nces.ed.gov/nationsreportcard/.

Analysis of Special Needs Students

Until 1996, the main NAEP assessments excluded certain subgroups of students identified as "special needs students": students with disabilities and students with limited English proficiency. For the 1996 and 2000 mathematics assessments and the 1998 and 2000 reading assessments, the main NAEP included a separate assessment with provisions for accommodating these students (e.g., extended time, small-group testing, and mathematics questions read aloud). Thus, for these years, there are results for both the unaccommodated assessment and the accommodated assessment. For the 2002, 2003, and 2005 reading assessments and the 2003 and 2005 mathematics assessments, the main NAEP did not include a separate unaccommodated assessment; only a single accommodated assessment was administered. The switch to a single accommodated assessment instrument was made after it was determined that accommodations in NAEP did not have any significant effect on student scores.

Since 1992, the percentage of students with disabilities excluded from the NAEP reading assessment has ranged from 3 to 5 percent. English language learners (ELLs) were excluded at a rate of between 1 and 2 percent.

Since 2005, the percentage of students with disabilities excluded from the NAEP mathematics assessment has ranged from 2 to 4 percent. ELLs were excluded at a rate of 1 percent or less.

Exclusion rates were also recorded for the science, geography, history, and civics assessments. For students with disabilities, the exclusion rates from these assessments generally ranged from 1 to 3 percent. The science assessment and accommodated history assessment had exclusion rates as high as 4 percent for students with disabilities. The unaccommodated geography and history assessments had exclusion rates as high as 7 percent. For ELLs, exclusion rates ranged from less than 1 to 2 percent.

Further information about exclusion rates for specific assessments and years is available at http://nces.ed.gov/nationsreportcard/about/inclusion.asp.

Private School Universe Survey

The purposes of the Private School Universe Survey (PSS) data collection activities are (1) to build an accurate and complete list of private schools to serve as a sampling frame for NCES sample surveys of private schools and (2) to report data on the total number of private schools, teachers, and students in the survey universe. Begun in 1989 under the U.S. Census Bureau, the PSS has been conducted every 2 years, and data for the 1989-90, 1991-92, 1993-94, 1995-96, 1997-98, 1999-2000, 2001-02, 2003-04, 2005-06, 2007-08, and 2009-10 school years have been released.

The target population for this universe survey is all private schools in the United States that meet the PSS criteria of a private school (i.e., the private school is an institution that provides instruction for any of grades K through 12, has one or more teachers to give instruction, is not administered by a public agency, and is not operated in a private home). The survey universe is composed of schools identified from a variety of sources. The main source is a list frame initially developed for the 1989-90 PSS. The list is updated regularly by matching it with lists provided by nationwide private school associations, state departments of education, and other national guides and sources that list private schools. The other source is an area frame search in approximately 124 geographic areas, conducted by the U.S. Census Bureau.
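
The frame-building step, combining the list frame with schools found in the area-frame search and dropping duplicates, can be sketched as follows. The schools and the simple name-plus-address matching key are hypothetical; actual frame matching is considerably more involved.

```python
# Hypothetical frames; real PSS matching is far more sophisticated.
list_frame = [
    {"name": "St. Mary School",   "address": "12 Oak St, Springfield"},
    {"name": "Hilltop Academy",   "address": "4 Elm Ave, Riverton"},
]
area_frame_finds = [
    {"name": "Hilltop Academy",   "address": "4 Elm Ave, Riverton"},    # duplicate
    {"name": "Pine Grove School", "address": "9 Birch Rd, Lakeside"},   # new school
]

def match_key(school: dict) -> tuple:
    """Simplified matching key: normalized name plus address."""
    return (school["name"].lower(), school["address"].lower())

universe = {match_key(s): s for s in list_frame}
for school in area_frame_finds:
    universe.setdefault(match_key(school), school)   # add only unmatched schools

print(len(universe))   # 3 schools remain after deduplication
```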

The PSS groups elementary and secondary schools according to one of seven program emphases: regular, Montessori, special program emphasis, special education, vocational, alternative, and early childhood.

Private schools are assigned to one of three major categories (Catholic, other religious, or nonsectarian) and, within each major category, one of three subcategories based on the school’s religious affiliation provided by respondents.

Further information on the PSS is available at http://nces.ed.gov/surveys/pss.

Program for International Student Assessment

Within the United States, NCES is responsible for administering assessments for the Program for International Student Assessment (PISA). PISA is a system of international assessments that focus on 15-year-olds' capabilities in reading literacy, mathematics literacy, and science literacy. PISA also includes measures of general, or cross-curricular, competencies such as learning strategies. PISA emphasizes functional skills that students have acquired as they near the end of mandatory schooling. PISA is organized by the Organization for Economic Co-operation and Development (OECD), an intergovernmental organization of industrialized countries, and was administered for the first time in 2000, when 43 countries participated. In 2003, 41 countries participated in the assessment; in 2006, 57 jurisdictions (30 OECD members and 27 nonmembers) participated; and in 2009, 65 jurisdictions (34 OECD members and 31 nonmembers) participated.

PISA is a 2-hour paper-and-pencil exam. Assessment items include a combination of multiple-choice and open-ended questions that require students to come up with their own response. PISA scores are reported on a scale with a mean score of 500 and a standard deviation of 100.
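
The reporting scale can be illustrated by its final standardization step. Actual PISA scaling rests on item response theory, and the mean of 500 and standard deviation of 100 were fixed for the OECD base year rather than recomputed for each sample; the sketch below shows only the linear transformation, applied to made-up proficiency estimates.

```python
import numpy as np

# Made-up proficiency estimates on an arbitrary internal scale.
theta = np.array([-1.3, -0.4, 0.0, 0.7, 1.6])

# Linear rescaling to a mean of 500 and a standard deviation of 100.
pisa_scale = 500 + 100 * (theta - theta.mean()) / theta.std()
print(pisa_scale.round(1))
```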

PISA is implemented on a 3-year cycle that began in 2000. Each PISA assessment cycle focuses on one subject in particular, although all three subjects are assessed every 3 years. These cycles allow countries to compare changes in trends for each of the three subject areas over time.

In the first cycle, PISA 2000, reading literacy was the major focus, occupying roughly two-thirds of assessment time. For 2003, PISA focused on mathematics literacy as well as the ability of students to solve problems in real-life settings. In 2006, PISA focused on science literacy. In 2009, PISA focused on reading literacy again.

To implement PISA, each participating country draws a scientifically based, nationally representative sample of 15-year-olds, regardless of grade level. In the United States, nearly 5,600 students from public and nonpublic schools took the PISA 2006 assessment.

In each country, the assessment is translated into the primary language of instruction; in the United States, all materials are written in English.

For more detailed information on sampling, administration, response rates, and other technical issues related to PISA data, see http://nces.ed.gov/pubs2011/2011004.pdf.

The PISA 2009 assessment was designed around the PISA 2009 Assessment Framework: Key Competencies in Reading, Mathematics, and Science, which the OECD developed in a collaborative effort with the PISA Governing Board and an international consortium. The framework acts as a blueprint for the assessment, outlining what should be assessed.

Reading literacy in PISA 2009 is defined as “understanding, using, reflecting on, and engaging with written texts in order to achieve one’s goals, to develop one’s knowledge and potential, and to participate in society.”

Mathematics literacy in PISA 2009 is defined as “an individual’s capacity to identify and understand the role that mathematics plays in the world, to make well-founded judgments and to use and engage with mathematics in ways that meet the needs of that individual’s life as a constructive, concerned and reflective citizen.”

Science literacy in PISA 2009 is defined as “scientific knowledge and use of that knowledge to identify questions, to acquire new knowledge, to explain scientific phenomena, and to draw evidence based conclusions about science-related issues, understanding of the characteristic features of science as a form of human knowledge and inquiry, awareness of how science and technology shape our material, intellectual, and cultural environments, and willingness to engage in science-related issues, and with the ideas of science, as a reflective citizen.” Details on the PISA 2009 framework and the reading, science, and mathematics literacy competencies can be found at http://www.oecd.org/dataoecd/11/40/44455820.pdf.

The PISA 2000 and 2009 OECD averages used in the analysis of trends in reading literacy scores over time are based on the averages of the 27 OECD countries with comparable data for 2000 and 2009. As a result, the reading literacy OECD average score for PISA 2000 differs from previously published reports, and the reading literacy OECD average score for PISA 2009 differs from the OECD average score used for analyses other than trend comparisons. The seven current OECD members not included in the OECD average for trend analysis are the Slovak Republic and Turkey, which joined PISA in 2003; Estonia and Slovenia, which joined PISA in 2006; Luxembourg, which experienced substantial changes in its assessment conditions between 2000 and 2003; and the Netherlands and the United Kingdom, which did not meet the PISA response rate standards in 2000. Though reading literacy scores can be compared across all PISA administration cycles (2000, 2003, 2006, and 2009), the U.S. averages in 2000 and 2009 are compared with OECD average scores in 2000 and 2009 because reading literacy was the major domain assessed in those years.

The PISA mathematics framework was revised in 2003. Because of changes in the framework, it is not possible to compare mathematics learning outcomes from PISA 2000 with those from PISA 2003, 2006, and 2009. The PISA science framework was revised in 2006. Because of changes in the framework, it is not possible to compare science learning outcomes from PISA 2000 and 2003 with those from PISA 2006 and 2009. Details on the changes to PISA since 2000 can be found at http://www.oecd.org/document/61/0,3746,en_32252351_32235731_46567613_1_1_1_1,00.html.

The PISA 2003 and 2009 OECD averages used in the analysis of trends in mathematics literacy scores over time are based on the 29 OECD countries with comparable data for 2003 and 2009. The five current members not included in the OECD average for trend analysis are Chile, Estonia, Israel, and Slovenia, which did not participate in 2003, and the United Kingdom, which did not meet PISA response rate standards for the 2003 assessment.

For science literacy trends, all 34 OECD countries are used.

The OECD excluded the data for Austria from the trend analysis in PISA 2009 Results: Learning Trends - Changes in Student Performance Since 2000 (Volume V) because of a concern over a data collection issue in 2009; however, after consultation with Austrian officials, NCES kept the Austrian data in the U.S. trend reporting.

For more information on the OECD, see Appendix C - International Education Definitions.
Further information about PISA is available at http://nces.ed.gov/Surveys/PISA and http://www.pisa.oecd.org.

Schools and Staffing Survey

The Schools and Staffing Survey (SASS) is a set of linked questionnaires used to collect data on the nation's public and private elementary and secondary teaching force, characteristics of schools and school principals, demand for teachers, and school/school district policies. SASS data are collected through a mail questionnaire with telephone follow-up. SASS was first conducted for NCES by the Census Bureau during the 1987-88 school year and was subsequently conducted in 1990-91, 1993-94, 1999-2000, 2003-04, and 2007-08. The 1990-91, 1993-94, 1999-2000, 2003-04, and 2007-08 administrations also obtained data on Bureau of Indian Education (BIE) schools (schools funded or operated by the BIE). As part of the 1999-2000 SASS, the universe of charter schools in operation in 1998-99 was asked to complete the Charter School Questionnaire. In subsequent SASS administrations, charter schools were not administered a separate questionnaire but were included in the public school sample.

Teacher certification is one way in which SASS stratifies the teacher subgroups. The regular certification category includes regular or standard state certificates and advanced professional certificates (for both public and private school teachers) and full certificates granted by an accrediting or certifying body other than the state (for private school teachers only). Probationary certificates are for those who have satisfied all requirements except the completion of a probationary period. Provisional certificates are for those who are still participating in an alternative certification program. Temporary certificates are for those who require additional college coursework and/or student teaching. Waivers or emergency certificates are for those with insufficient teacher preparation who must complete a regular certification program in order to continue teaching. No certification indicates that the teacher did not hold any certification in the state where the teacher had taught.

Further information on SASS is available at http://nces.ed.gov/surveys/sass.

School Survey on Crime and Safety

The School Survey on Crime and Safety (SSOCS) is administered to public primary, middle, high, and combined school principals in the spring of even-numbered school years; administering the survey at the end of the school year allows principals to report the most complete information possible. SSOCS was first administered in the spring of the 1999-2000 school year (SSOCS:2000) and has since been administered in the spring of the 2003-04, 2005-06, 2007-08, and 2009-10 school years (SSOCS:2004, SSOCS:2006, SSOCS:2008, and SSOCS:2010). SSOCS focuses on incidents of specific crimes/offenses and a variety of specific discipline issues in public schools. It also covers characteristics of school policies, school violence prevention programs and policies, and school characteristics that have been associated with school crime. The survey was conducted with a nationally representative sample of regular public elementary, middle, and high schools in the 50 states and the District of Columbia. Special education, alternative, and vocational schools; schools in other jurisdictions; and schools that taught only prekindergarten, kindergarten, or adult education were not included in the sample.

Further information about SSOCS is available at http://nces.ed.gov/surveys/ssocs.

Non-NCES Sources

American Community Survey (ACS)

The Census Bureau introduced the American Community Survey (ACS) in 1996. Fully implemented in 2005, it provides a large monthly sample of demographic, socioeconomic, and housing data comparable in content to the Long Form of the Decennial Census. Aggregated over time, these data will serve as a replacement for the Long Form of the Decennial Census. The survey includes questions mandated by federal law, federal regulations, and court decisions.

Since 2005, the survey has been mailed to approximately 250,000 addresses in the United States and Puerto Rico each month, or about 2.5 percent of the population annually. A larger proportion of addresses in small governmental units (e.g., American Indian reservations, small counties, and towns) also receive the survey. The monthly sample size is designed to approximate the ratio used in the 2000 Census, which requires more intensive distribution in these areas. The ACS covers the U.S. resident population, which includes the entire civilian noninstitutionalized population; incarcerated persons; other institutionalized persons; and active-duty military personnel who are in the United States. In 2006, the ACS began interviewing residents in group quarters facilities. Institutionalized group quarters include adult and juvenile correctional facilities, nursing facilities, and other health care facilities. Noninstitutionalized group quarters include college and university housing, military barracks, and other noninstitutional facilities such as workers' and religious group quarters and temporary shelters for the homeless.
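
The quoted sampling rate can be checked with simple arithmetic. The national address count below is an assumption (roughly 120 million residential addresses), not an official figure.

```python
monthly_addresses = 250_000
annual_addresses = monthly_addresses * 12      # 3,000,000 addresses per year
assumed_national_addresses = 120_000_000       # assumption, not an official count

annual_rate = annual_addresses / assumed_national_addresses
print(f"{annual_rate:.1%}")                    # about 2.5%
```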

National-level data from the ACS are available from 2000 onward. Annual results became available for areas with populations of 65,000 or more beginning in the summer of 2006, for areas with populations of 20,000 or more in the summer of 2008, and subsequently for all areas, down to the census tract level. This schedule is based on the time it takes to collect data from a sample size large enough to produce accurate results for geographic units of different sizes.

Further information about the ACS is available at http://www.census.gov/acs/www/.

Current Population Survey

The Current Population Survey (CPS) is a monthly survey of about 60,000 households conducted by the U.S. Census Bureau for the Bureau of Labor Statistics. The CPS is the primary source of labor force statistics for the U.S. noninstitutionalized population (it excludes, for example, military personnel and their families living on bases and inmates of institutions). In addition, supplemental questionnaires are used to provide further information about the U.S. population. Specifically, in October, detailed questions regarding school enrollment and school characteristics are asked. In March, detailed questions regarding income are asked.

The current sample design, introduced in July 2001, includes about 72,000 households. Each month, about 58,900 of the 72,000 households are eligible for interview, and of those, 7 to 10 percent are not interviewed because of temporary absence or unavailability. Information is obtained each month from household members who are 15 years of age and older, and demographic data are collected for children 0-14 years of age. Prior to July 2001, CPS data were collected from about 50,000 dwelling units. The samples are initially selected based on the decennial census files and are periodically updated to reflect new housing construction.

The estimation procedure employed for monthly CPS data involves inflating weighted sample results to independent estimates of characteristics of the civilian noninstitutional population in the United States by age, sex, and race. These independent estimates are based on statistics from decennial censuses; statistics on births, deaths, immigration, and emigration; and statistics on the population in the armed services.
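
This ratio adjustment, often called poststratification, scales the weights within each age-sex-race cell so that their weighted total matches the independent population estimate. The sketch below uses made-up numbers; the actual CPS weighting involves several additional stages.

```python
import pandas as pd

# Made-up sample records; "cell" stands for an age-sex-race cell.
sample = pd.DataFrame({
    "cell":   ["A", "A", "B", "B", "B"],
    "weight": [1200.0, 900.0, 1100.0, 1000.0, 800.0],
})
# Independent population estimates for each cell (also made up).
controls = {"A": 2_520_000, "B": 5_800_000}

# Scale each record's weight so cell totals match the controls.
cell_totals = sample.groupby("cell")["weight"].sum()
factors = {cell: controls[cell] / cell_totals[cell] for cell in controls}
sample["final_weight"] = sample["weight"] * sample["cell"].map(factors)

print(sample.groupby("cell")["final_weight"].sum())   # matches the controls
```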

Supplemental Questionnaires

Each year, the Annual Social and Economic (ASEC) Supplement and the October supplemental questionnaire contain questions of relevance to education policy. The ASEC Supplement, formerly known as the March CPS Supplement, is a primary source of detailed information on income and work experience in the United States. The October Supplement routinely gathers data on school enrollment, school characteristics, and educational attainment for elementary, secondary, and postsecondary education. Related data are also collected about preschooling and the general adult population. In addition, NCES funds additional items on education-related topics such as language proficiency, disabilities, computer use and access, student mobility, and private school tuition. Responses are collected for all household members age 3 and over.

CPS interviewers initially used printed questionnaires. Since 1994, however, the Census Bureau has used Computer-Assisted Personal and Telephone Interviewing (CAPI and CATI) to collect data. These technologies allow interviewers to administer a complex questionnaire more consistently and with fewer interviewer errors. In 1994, the survey methodology for the CPS was changed and the weights were adjusted. Further information about the CPS data collections is available at http://www.census.gov/apsd/techdoc/cps/cps-main.html.

Monitoring the Future Survey

The National Institute on Drug Abuse of the U.S. Department of Health and Human Services is the primary supporter of the long-term study entitled Monitoring the Future: A Continuing Study of American Youth, conducted by the University of Michigan Institute for Social Research. One component of the study deals with student drug abuse. Results of the national sample survey have been published annually since 1975.

Approximately 50,000 public and private school students are surveyed each year. Students complete self-administered questionnaires given to them in their classrooms by University of Michigan personnel. Each year, 8th-, 10th-, and 12th-graders are surveyed (12th-graders since 1975, and 8th- and 10th-graders since 1991). The 8th- and 10th-grade surveys are anonymous, while the 12th-grade survey is confidential. The 10th-grade samples involve about 17,000 students in 140 schools each year, while the 8th-grade samples have approximately 18,000 students in about 150 schools. The 12th-grade sample includes about 16,000 students in approximately 133 schools. Beginning with the class of 1976, a randomly selected sample from each senior class has been followed in the years after high school on a continuing basis.

From 1990 to 2010, the student response rate for 10th-graders ranged from 85 to 89 percent, and the student response rate for 12th-graders ranged from 79 to 86 percent.

Further information on Monitoring the Future is available at http://www.monitoringthefuture.org.
