Building Futures: Head Start Impact Study
Frequently Asked Questions


 

Study Background and Objectives

Study Program Participants

Random Assignment

Study Schedule

Confidentiality

Study Reports

Study Contacts

Measuring Program Effectiveness

 

Study Background and Objectives

  1. What is the Head Start Impact Study?

The Head Start Impact Study is a Congressionally mandated study being conducted across 84 nationally representative grantee/delegate agencies. Approximately 5,000 newly entering 3- and 4-year-old children applying for Head Start were randomly assigned either to a Head Start group that had access to Head Start program services or to a non-Head Start group that could enroll in available non-Head Start services in the community, selected by their parents. Data collection began in fall 2002 and is scheduled to continue through 2006, following children through the spring of their 1st grade year.

  2. Why is the study being done?

In 1998, as part of the Head Start reauthorization, Congress mandated that the U.S. Department of Health and Human Services conduct a national study of the impact of Head Start programs to: 1) determine, on a national level, how participation in Head Start affects the school readiness of the children it serves, and 2) identify the types of participants for whom, and the circumstances under which, the program is most effective.

  3. What is the relationship between this study and the ongoing Family and Child Experiences Survey (or FACES) and the Head Start National Reporting System (HSNRS)?

This study builds upon the knowledge gained from FACES. FACES studies a nationally representative sample of Head Start programs, children, and families and compares them to other national data. However, FACES does not use a random assignment study design and therefore studies only children enrolled in Head Start. The Impact Study includes a larger number of grantees/delegate agencies. Most importantly, it includes a comparison group of children and families not enrolled in Head Start, allowing their outcomes to be compared with those of Head Start children and families.

The National Reporting System is designed to create a national database on the progress and accomplishments of 4- and 5-year-old Head Start children on specific child outcomes. All Head Start programs administer a common NRS assessment to all 4- and 5-year-old children at the beginning and end of the program year in order to determine some of the skills with which they enter Head Start, their levels of achievement when they leave Head Start, and the progress they make during the Head Start year. The assessment information collected through the NRS will be used to strengthen Head Start program effectiveness; it is not an evaluation of the Head Start program. The HSIS, by contrast, is a longitudinal evaluation of the impact of Head Start on school readiness.

  4. What is the relationship between this study and the ongoing Head Start monitoring process?

This study is being conducted independently from regular Head Start program monitoring activities. Information collected as part of this study will not be shared with program monitors, except perhaps in an aggregated form across multiple programs. Information on individual grantees/delegate agencies and participants will not be shared.

 


 

Study Program Participants

  1. Who is participating in the study?

The study is based on a nationally representative sample of: (1) 84 randomly selected grantees/delegate agencies from among all 50 states, the District of Columbia, and the Commonwealth of Puerto Rico; (2) 383 randomly selected Head Start centers operated by the selected grantees and delegates; and (3) a total of 4,667 newly entering children (2,559 3-year-olds and 2,108 4-year-olds). The study includes newly enrolled 3- and 4-year-old children, in both full- and part-time program services and in both center- and home-based program options. However, children enrolled in Early Head Start, as well as grantees/delegate agencies serving migrant children and those operated by Tribal Organizations, are not included in the study.

  2. How were grantees/delegate agencies selected?

The sample used for this study is intended to be representative of all Head Start grantees/delegate agencies operating in the 50 states, the District of Columbia, and the Commonwealth of Puerto Rico.

Some grantees, however, were not eligible to participate in the study. As noted above, programs operated by Tribal Organizations and migrant grantees were excluded, as were children currently or previously enrolled in Early Head Start. Also excluded were new grantees/delegate agencies that had been in operation for less than 2 years, to minimize the potential that early start-up issues affecting the stability of program operations might impede the study.

To help ensure national representation, the remaining grantees and delegate agencies were organized into geographic clusters by: the availability of state-funded comprehensive preschool programs; region of the country; urban vs. rural location; and the race/ethnicity of the children being served. From this list, we randomly selected (i.e., by chance) a sample of 25 geographic clusters and subsequently created a list of all the grantees/delegate agencies operating within those geographic areas. These potentially eligible grantees/delegate agencies were then contacted by telephone so that the study could exclude grantees/delegate agencies operating in locations where Head Start “saturates” the community, i.e., where there are not enough unserved children to permit random assignment of a sufficient number of children to an unserved comparison group.

From those remaining on the list after these exclusions, a nationally representative sample of 84 grantees/delegate agencies was selected. This selection of grantees/delegate agencies was done to ensure representation of both the range of current program options (e.g., part- versus full-time, center- versus home-based, auspice, etc.) and the existing variation in the child care and early childhood program contexts in which Head Start operates.

A sample of 383 centers was randomly selected from the sampled grantees/delegate agencies.
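To make the multi-stage selection concrete, here is a minimal sketch in Python of how a cluster-then-grantee-then-center probability sample might be drawn. The sampling frame, the per-stage counts beyond those stated above, and all identifiers are invented for illustration; this is not the study's actual sampling program.

```python
import random

random.seed(42)  # reproducible illustration only

# Hypothetical frame: grantees grouped into geographic clusters, each
# operating some number of centers (all values invented for illustration).
grantees = [
    {"id": f"G{i:03d}", "cluster": random.randrange(100), "n_centers": random.randint(2, 12)}
    for i in range(600)
]

# Stage 1: randomly select geographic clusters (the study drew 25 clusters
# built from region, urbanicity, state pre-K availability, and the
# race/ethnicity of children served).
sampled_clusters = set(random.sample(range(100), k=25))

# Stage 2: randomly select grantees/delegate agencies within those clusters
# (84 in the actual study, after excluding "saturated" communities).
in_clusters = [g for g in grantees if g["cluster"] in sampled_clusters]
sampled_grantees = random.sample(in_clusters, k=min(84, len(in_clusters)))

# Stage 3: randomly select centers within each sampled grantee
# (383 centers in the actual study).
sampled_centers = [
    f"{g['id']}-C{c}"
    for g in sampled_grantees
    for c in random.sample(range(g["n_centers"]), k=min(3, g["n_centers"]))
]

print(f"{len(sampled_grantees)} grantees, {len(sampled_centers)} centers sampled")
```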

  3. How were children and families selected for the study?

A total of about 4,700 newly entering 3- and 4-year-old children were selected from the sampled grantees and centers. Because Congress asked us to measure the difference that Head Start makes compared to never having any Head Start assistance, children previously enrolled in Head Start or Early Head Start were not eligible to participate in the study. The study population includes children with disabilities. About 60 percent of the selected children were assigned to receive Head Start and 40 percent were assigned not to receive Head Start, although the latter could receive other preschool or childcare services that parents selected.

In the spring of 2002, the selected study grantees initiated their regular family recruitment activities using whatever procedures were in place. All parents were told that this study was being conducted, and what participation entailed for them, should they be selected. Parents were also told that only a subset of children and families would be affected, that Head Start program slots are limited (as noted above, the study included only those grantees/delegate agencies that are not serving all eligible children), and that final selection of program participants would involve a lottery-like process.

Once all (or most) applications were received by the selected grantees, program staff used their existing procedures to determine applicant eligibility for entry into Head Start during the 2002-2003 program year. After program staff made eligibility determinations, children were randomly assigned by the research team. Of the total number of children and families selected, some were assigned to the program, or Head Start group, and some to the comparison or control group. Grantees/delegate agencies were asked to follow their regular referral procedures to other child care programs for comparison group families.

  4. How did you find “similar” children and why did you need to use random assignment?

The Congressional mandate for this study required that it examine outcomes for Head Start children and compare them with those that “…would not have occurred without the participation in the program.”

Random assignment is the most powerful method for measuring the effect of a program on its participants: because assignment is determined by chance, any systematic differences observed between the program (Head Start) group and the comparison group can be attributed to the program’s services, while any other differences between the groups are random or chance differences. Given the importance of this study, it is in the best interest of the program that the most reliable and defensible method be used to determine how Head Start is affecting the lives of participating children and families. Any other research method would leave open far too many doubts.

The best way to find out what would have happened to the Head Start children had they not participated in the program is to randomly assign children whose parents want to enroll them into the program, and who are determined to be eligible for Head Start, to one of two groups — a program group that participates in Head Start, and a comparison group that does not participate in Head Start but who may be enrolled in some alternative child care or preschool program of their own choosing. Because the decision regarding which children end up in either of these two groups is determined by chance alone (e.g., by the “toss of a coin”), the two groups are comparable on important child and family background characteristics. Consequently, a comparison of what happens to both groups of children represents the best test of the effect of Head Start on children and their families. This is why Congress, and the national Advisory Committee on Head Start Research and Evaluation, recommended the use of random assignment for this study.

Because this selection of children is only done with grantees/delegate agencies in communities where there are more eligible children than can currently be served by the Head Start program, it is already understood that there will be more children desiring services than can be enrolled. The study did not result in any more children being denied services than would have occurred without the study. Equally important is the recognition that, on average, only a small percentage of children per center are involved in this random assignment study.
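As a concrete illustration of the logic described above, the sketch below (in Python, with invented scores and a simplified standardization) shows how an impact would be estimated as the difference in mean outcomes between the two randomly assigned groups, and how the resulting effect size would be labeled using the report's small/moderate/large conventions (see footnote 1). It is a sketch only, not the study's analysis code.

```python
import statistics

def impact_estimate(head_start_scores, comparison_scores):
    """Estimate a program impact as the difference in group means.

    Because group membership was determined by chance alone, a difference
    in mean outcomes is attributable to Head Start rather than to
    pre-existing differences between the groups. The effect size here is
    the difference divided by the standard deviation of the combined
    sample (one common, simplified convention).
    """
    diff = statistics.mean(head_start_scores) - statistics.mean(comparison_scores)
    sd = statistics.pstdev(head_start_scores + comparison_scores)
    effect_size = diff / sd
    # Conventions used in the report: < 0.2 small, 0.2-0.5 moderate, > 0.5 large.
    if abs(effect_size) < 0.2:
        label = "small"
    elif abs(effect_size) <= 0.5:
        label = "moderate"
    else:
        label = "large"
    return diff, effect_size, label

# Invented assessment scores for a handful of children in each group.
head_start = [92, 88, 95, 90, 93, 89, 91, 94]
comparison = [90, 87, 94, 89, 92, 88, 90, 94]
print(impact_estimate(head_start, comparison))  # e.g., (1.0, ~0.41, 'moderate')
```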

 


 

Random Assignment

  1. What is random assignment?

Random assignment means that chance alone (like a lottery) determines which children and families — among those selected for inclusion in the study — receive Head Start services, and which are in the comparison group. Study staff conducted the actual random assignment from lists of eligible applicants prepared by the selected grantees/delegate agencies.
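A minimal sketch of what such a lottery-like assignment might look like follows, written in Python with hypothetical applicant identifiers and the roughly 60/40 split described earlier; it is illustrative only, not the evaluation contractor's actual program.

```python
import random

def randomly_assign(applicant_ids, head_start_share=0.6, seed=None):
    """Assign eligible applicants to the Head Start or comparison group by chance alone.

    A lottery-like split at roughly the 60/40 ratio used in the study;
    an illustrative sketch, not the contractor's actual procedure.
    """
    rng = random.Random(seed)
    ids = list(applicant_ids)
    rng.shuffle(ids)  # chance alone determines the ordering
    n_head_start = round(len(ids) * head_start_share)
    head_start = set(ids[:n_head_start])
    return {child: ("Head Start" if child in head_start else "comparison") for child in ids}

# Hypothetical list of eligible applicants from one center.
applicants = [f"child_{i:02d}" for i in range(1, 21)]
assignments = randomly_assign(applicants, seed=2002)
print(sum(group == "Head Start" for group in assignments.values()),
      "of", len(applicants), "children assigned to Head Start")
```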

  2. What happened to those children who were assigned to the comparison group?

As part of many grantees’/delegate agencies’ customary procedures, program staff provide parents with information on other community programs available for low-income children at the time of notification, whether or not their child enrolls in Head Start. This same procedure was continued. It was up to the parents to decide the appropriate placement for their child, which is no different from what parents already do when they are informed that their child has not been accepted into a Head Start program.

  3. How was random assignment implemented?

Random assignment procedures were worked out in cooperation with local grantee/delegate agency staff to ensure that the process maintained the needed scientific rigor and reflected the realities of local recruitment and enrollment procedures.

The general strategy began at the time of application, with all Head Start applicants being informed that slots were limited and that applications were due by a certain “cut off” date. As is always the case, grantee/delegate agency staff determined which of these children were eligible to be enrolled in Head Start. This time, however, the choice of children to be enrolled (among those determined eligible) involved random selection, and the randomization itself was conducted by study staff.

For the most part, the usual process for recruiting families, assessing families’ needs, and determining eligibility was utilized by each of the selected programs, with the major difference being the need to recruit some additional families. That is, the selection criteria that each Head Start program used to establish eligibility were maintained. Applicants were randomly assigned after local grantee/delegate agency staff determined eligibility. Study staff monitored and verified the appropriate handling of sample children by the selected grantees/delegate agencies.

Working with the research team, grantees/delegate agencies were asked to:

  • Include a letter explaining the study with applications for the 2002/2003 school year, and after an application was received, provide the parent with a notification letter. In many cases, the latter was a second notification to parents.

  • Accumulate applications until an agreed-upon date for random assignment. (In some cases, there was more than one round of random assignment.)

  • Determine eligibility for all applicants according to the usual agency procedures.

  • Provide the research team with a ranked list of children being considered for enrollment in the centers selected for the study. This list included identifying information and listed all eligible children who were newly entering, returning, and on the waiting list. Applications also could not be held back past the random assignment date as a way of avoiding a child’s assignment to the Head Start or non-Head Start study group.

  • Identify the children on the list who were exempt from random assignment – those previously enrolled in Head Start or Early Head Start and a very small number of additional cases (discretionary exemptions).

  • Conduct random assignment among the highest-ranking children on the list.

  • Ensure that children in the non-Head Start research group were not admitted to Head Start, either initially or later.

  4. Did random assignment work?

Yes. In all the ways the two random assignment groups (the Head Start sample and the non-Head Start sample) could be compared—and in the secure procedures used to create them in the computer systems of the evaluation contractor—the integrity and comparability of the two groups have been upheld.

With respect to the individual-level characteristics measured prior to random assignment (child’s gender, child’s race/ethnicity, child’s language, parent’s language, child’s income eligibility), there are no statistically significant differences between the two randomly assigned groups, indicating that the groups do not differ to any discernible extent. The process of random assignment was cross-checked and made fail-safe at every point. It used a probability-based selection procedure implemented through a standard computer program created for this purpose and applied with absolute uniformity through all rounds of random assignment. In each round, the desired number and ratio of children entering the Head Start and non-Head Start groups were verified, as were cumulative totals, to assure fidelity to the design. Together, these elements ensure that the initial randomization was done with high integrity and that the samples can provide the necessary confidence in the validity of the impact estimates.

There was some observed occurrence of non-compliance. Some of the children assigned to the Head Start program did not show up to receive Head Start services (referred to as “no-shows”), and some of the children assigned to the non-Head Start group enrolled in the program (referred to as “crossovers”).

Although not to be dismissed, these instances of non-compliance with treatment assignment are not atypical of what has been found in other random assignment studies and do not undermine the basic validity of the study.
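As an illustration of the kind of baseline comparison described above, the following sketch tests whether a characteristic measured before random assignment (here, an invented count of girls in each group) differs between the Head Start and non-Head Start groups. The counts and the helper function are hypothetical; the study's actual statistical procedures may differ.

```python
import math

def two_proportion_ztest(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test of equal proportions in two independent groups.

    Used here to check baseline equivalence on a pre-randomization
    characteristic; all counts below are invented for illustration.
    """
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: number of girls in each randomly assigned group.
z, p = two_proportion_ztest(successes_a=1420, n_a=2800, successes_b=930, n_b=1867)
print(f"z = {z:.2f}, p = {p:.3f}")  # a large p-value is consistent with balanced groups
```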

  5. How representative is the sample?

Taking into account the Head Start grantees/delegate agencies and centers excluded because of saturation, the study sample is representative of 84.5 percent of the total universe of all newly entering 3- and 4-year-olds across the country.

  6. Were the children selected for the comparison group as 3-year-olds able to be admitted to Head Start as 4-year-olds?

Yes. The children who were assigned to the comparison group as 3-year-olds were allowed to enroll in Head Start as 4-year-olds during the subsequent year.

Study Schedule

  1. What is the planned schedule for the study?

Prior to the start of full-scale random assignment and data collection in the summer and fall of 2002, a field test of all study procedures was conducted, beginning in early spring of 2001.

Sites were recruited for the full-scale study during the spring of 2001, and study staff worked with local staff through the summer of 2002 to establish efficient procedures to maintain the integrity of the research within the program’s normal operating procedures. The selection and random assignment of children occurred during the spring/summer of 2002, with initial data being collected from the study participants in the fall of 2002 (at the start of the Head Start program year). To date, three rounds of data collection have been completed. The fourth round of data collection is taking place in spring 2005, when the 3-year-old cohort is in kindergarten and the 4-year-old cohort is in first grade. This will be the end of data collection for the 4-year-old cohort, while the 3-year-old cohort will be followed for one more year, through spring 2006.

  2. What types of data are being collected, how are data being collected, and when?

Data collection began in fall of 2002 and will continue through the spring of 2006, following children from age of entry into Head Start through the end of the preschool years, end of kindergarten, and end of 1st grade. Comparable data are being collected for both Head Start and non-Head Start children and consist of the following:

  • Measures of children’s development, including: 1) direct child assessments, 2) parent reports, and 3) teacher/care provider reports. Child outcomes are measured in the cognitive, social-emotional, and health domains. This information is collected in the spring of each program year.

  • Characteristics and quality of children’s home environments, as measured through parental reports of: 1) beliefs and attitudes about their child’s learning, accomplishments, and problems; 2) family household and demographic information; 3) their relationship with their child and activities done with the child; 4) child and family receipt of a variety of comprehensive services; 5) parenting practices; and 6) safety in the household and community.

  • Characteristics and quality of the primary preschool and child care arrangement as measured through: 1) interviews with center-based directors, 2) surveys of teachers or interviews with care providers, and 3) observations of these settings.

  • Characteristics and quality of the kindergarten and first grade years as measured through teacher surveys and administrative data.

  3. What assessments were used for children?

We have administered age-appropriate assessments across the cognitive, social-emotional, and health domains in both the program and comparison groups. Assessments will continue for the selected study children as they progress through 1st grade. The assessment battery contains a series of tasks drawn from a number of commonly used assessment tools. Constructs being measured include: pre-reading, pre-writing, vocabulary, oral comprehension, phonological awareness, early math, problem behaviors, social skills and approaches to learning, social competencies, access to health care, and health status.

  4. What are the consequences to a program if the children don’t do well on the assessments?

There are no consequences to a program. Only the study team will have information on individual grantees and delegate agencies, and this information will not be released to the public or to anyone in the Head Start Bureau other than the Federal Project Officer.

  5. What procedures were used to assess children who do not speak English as their primary language? Into how many languages are the child assessments translated?

The child assessments are in English and Spanish. At the time of the initial assessment, the interviewer/assessor asked the main care provider a series of questions to determine the appropriate language for the child assessment. For children requiring assessment in Spanish, a bilingual interviewer/assessor administered the assessment battery in Spanish, and also administered two subtests in English. In spring 2003, the children assessed in Spanish in fall 2002 were assessed primarily in English, along with the continued administration of two Spanish language measures. One exception is Puerto Rico where, because instruction is in Spanish, all children continue to be assessed only with the complete Spanish battery. For children who could not be assessed in either English or Spanish in fall 2002, a bilingual interviewer/assessor or an interpreter for the child’s language was used. The interviewer/assessor (or interpreter) used the English assessment booklet, translated the instructions into the child’s language, and administered four subtests that focused on color naming, counting, and letter naming. For the spring assessments, these children were all tested in English.

  6. Were parents interviewed in their home language?

Every effort was made to interview parents in their home language. For parents speaking a language other than Spanish or English, field staff who are fluent in the language were used. When field staff fluent in the language were not available, local interpreters were identified to assist with the interview.

 


 

Confidentiality

  1. What assurances of confidentiality were provided to families?

Parents were informed of the requirements of this study as part of our standard “informed consent” procedures. They were informed that no information on individual children or their parents will be released to anyone outside the small research team, and only those with an explicit need to know will have the ability to link data to individuals. All information will be held in strict confidence and protected. Participation is voluntary.

 


 

Study Reports

  1. What does this report cover?

This report is a preliminary examination of the impact of Head Start after one year in the program, for children who entered in 2002. It is just the precursor to the wealth of information that this study will eventually provide. In addition to describing the study methods, this report presents the results of the impact analyses. The impact of Head Start on children’s cognitive development is presented, focusing on seven different cognitive constructs (pre-reading skills, writing skills, vocabulary knowledge, oral comprehension, phonological awareness, early math skills, and parent reports of children’s literacy skills). The impact of Head Start on social-emotional development is presented, focusing on parent-reported measures of social competencies, positive approaches to learning, and problem behaviors. The impact of Head Start on children’s health status, access to health services, and parenting practices in the areas of educational activities, discipline practices, and child safety practices is also presented. In addition, the report discusses the impact of Head Start on the types of preschool and child care settings that parents selected for their children and provides descriptive information on the characteristics of different types of early care arrangements.

  2. When will other information and results be available?

Ongoing and updated information will be provided on the study web site as it becomes available. The URL is http://www.acf.hhs.gov/programs/opre/hs/impact_study. Future reports will examine additional areas of possible impact, will explore possible variation in impact by program characteristics (e.g. classroom quality, teacher educational level, full-day versus part-day programs, etc.) and community characteristics, and will follow children through the end of 1st grade.

 


 

Study Contacts

  1. Who is conducting the study?

The study is being sponsored by the Administration for Children and Families (ACF) of the U.S. Department of Health and Human Services. It is being conducted by Westat of Rockville, Maryland, in collaboration with Chesapeake Research Associates, the Urban Institute, the American Institutes for Research, and Decision Information Resources.

 


 

Measuring Program Effectiveness

  1. Will it be possible to determine if there are different effects for different types of children (e.g., by gender and race/ethnicity), and for different program models?

Yes. The overall study sample is large enough to allow us to assess differences in the program’s effect on different types of children, as well as under different types of program service-delivery “models.” For example, the first year report looks at differences in effects by race/ethnicity, child and home language, parental depression, age of mother at first birth, gender, parent marital status, and special needs. Future reports will extend analyses to examine variation in impact by program characteristics (e.g. classroom quality, teacher educational level, full-day versus part-day programs).

  2. Don’t you have to understand the broader picture of childcare and preschool programs for low-income children to really understand the role and contribution of Head Start?

Understanding the broader picture of childcare and preschool programs is definitely important. That is why we plan to collect extensive and ongoing information about: (1) the state and local early care environment in which the selected Head Start grantees/delegate agencies are operating; (2) the nature of the local “market” for childcare and preschool programs serving low-income children; (3) the full picture of services being provided to the participating study children (not just their Head Start experience); and (4) the quality of both Head Start and other alternative programs serving study children.

  3. What are the study response rates?

The individual response rates for both child assessments and parent interviews, completed for the two data collection periods addressed in this report, have been very good. Overall, at both points in time, 83 percent of parents completed interviews and 82 percent of the children were assessed. There is some difference in response rates between the Head Start and non-Head Start groups, but the gap narrowed slightly by the spring 2003 interview.

  4. What are the study findings?

The study quantifies the impact of Head Start separately for 3- and 4-year-old children across child cognitive, social-emotional, and health domains, as well as on parenting practices. For children in the 3-year-old group, the preliminary results from the first year of data collection demonstrate small to moderate[1] positive effects favoring the children enrolled in Head Start for some outcomes in each domain. There were fewer positive impacts found for children in the 4-year-old group. The key findings are summarized below and presented in Exhibit 1:

Cognitive Domain

The cognitive domain consists of six constructs each comprised of one or more measures. The key findings in this domain are:

  • There are small to moderate statistically significant positive impacts for both 3- and 4-year-old children on several measures across four of the six cognitive constructs, including pre-reading, pre-writing, vocabulary, and parent reports of children’s literacy skills.

  • No significant impacts were found for either age group on the other two constructs: oral comprehension and phonological awareness, and early mathematics skills.

Social-emotional Domain

The social-emotional domain consists of three constructs, each comprised of one or more parent-reported measures.[2] The key findings in this domain are:

  • For children who entered the study as 3-year-olds, there are small statistically significant impacts in one of the three social-emotional constructs, problem behaviors.

  • There were no statistically significant impacts on social skills and approaches to learning or on social competencies for 3-year-olds.

  • No significant impacts were found for children entering the program as 4-year-olds.

Health Domain

The key findings in this domain, consisting of two constructs, are:

  • For 3-year-olds, there are small to moderate statistically significant impacts in both constructs: higher parent reports of children’s access to health care and better reported health status for children enrolled in Head Start.

  • For children who entered the program as 4-year-olds, there are moderate statistically significant impacts on access to health care, but no significant impacts for health status.

Parenting Practices Domain

The key findings in this domain, consisting of three constructs, are:

  • For children who entered the program as 3-year-olds, there are small statistically significant impacts in two of the three parenting constructs, including a higher use of educational activities, and a lower use of physical discipline, by parents of Head Start children. There were no significant impacts for safety practices.

  • For children who entered the program as 4-year-olds, there are small statistically significant impacts on parents’ use of educational activities. No significant impacts were found for discipline or safety practices.

  5. What is the most important message from this report?
  • The findings indicate that the Head Start program has small to moderate significant impacts across a wide variety of outcomes for both 3- and 4-year-olds.

  • The effects of the program narrow, but do not close, the gap between Head Start children and the general population of 3- and 4-year-olds in the U.S.

  6. How does one obtain copies of the report?

The report will be posted on the web. The URL is http://www.acf.hhs.gov/programs/opre/hs/impact_study.

  7. Do the measures used in the study really assess the important outcomes well?
  • The evaluation team selected measures that have a strong track record of use in other studies. The full technical report includes data on the psychometrics of the instruments so readers can judge the reliability of the measures.

  • It is noteworthy that impacts were found with different types of measures—standardized instruments (such as the Woodcock-Johnson III and the adapted PPVT), parent reports (like the ratings of aggressive behavior), and live observations (ratings by the interviewers).

  8. Will these early impacts lead to important impacts on later child and family outcomes? Will Head Start children do better in school?
  • After one year of Head Start, there are statistically significant impacts across important domains of child development and parenting that are generally predictive of school success. It remains to be seen whether these early, small to modest impacts will lead to longer-term differences in child and family outcomes.

  • In order to assess the longer-term impact of Head Start, ACF is assessing this sample of children annually through first grade. This study will determine whether the promising patterns identified at ages 3 and 4 are maintained throughout the first two years of school. This study will also provide much needed information about the importance of children’s educational experiences after Head Start for school readiness skills.

  9. Will it be possible to look at the study findings for a particular state or even a particular region?

The purpose of this study is to make national estimates of the effect of the Head Start program. The sample has not been designed to develop findings for a particular state or individual program.

  10. Who do I contact if I have questions?

ACF
Maria Woolverton
Federal Project Officer and Director
Administration for Children and Families
202-205-4039

Westat
Ronna Cook, Project Director
Westat Inc.
1-888-280-5081




[1] For this report we have adopted the following conventions for interpreting effect sizes: an effect size of less than 0.2 is small, between 0.2 and 0.5 is moderate, and over 0.5 is large.

[2] Future reports will also examine this domain using teacher-reported data.

 
