
2.0 Methodology

2.1 Overview

This chapter outlines the procedures that were followed for the selection of the FACES Head Start programs and children, and for the collection of data from the parents of these children. The end of the chapter provides a discussion of the strengths and limitations of this study component and a description of the results of the data collection effort. Information on the concurrent assessments and observations of children and classroom observations is available in the FACES Technical Report II.

2.2 The Sample Universe and Sampling Method

The primary sampling objective for the Head Start FACES was to provide a national probability sample of Head Start children to be used for descriptive and analytic purposes. The desired number of completed primary caregivers’ interviews and children’s assessments at the baseline data collection point in the fall of 1997 was 3,200. For sampling purposes, these children were identified by their age at the beginning of the program year.

The Sample Universe

Information about the available universe of Head Start programs was drawn from the 1995-96 Head Start Program Information Report (PIR) database. The PIR is a compilation of the descriptive information that each program is required to submit at the conclusion of each program year. The universe of Head Start programs for this study comprised 1,734 programs (including both grantees that ran centers directly and delegate agencies that managed centers for grantees) that operated during the 1995-96 program year in the 50 States, Puerto Rico, and the Territories of the United States. The universe excluded programs designated as American Indian or Migrant programs and programs not serving 3- and 4-year-olds (Early Head Start). The 1,734 available Head Start programs served approximately 785,000 children aged 3 and older. Of the children enrolled in these programs, 38% were African American, 34% were White, and 24% were Hispanic; the remaining children were Asian/Pacific Islander (3%) and American Indian/Alaskan Native (1%). Approximately 30% of all children enrolled in the program universe were 3-year-olds, 64% were 4-year-olds, and 6% were older than 4 years of age.

The universe of programs was stratified on the basis of three variables: census region (Northeast, Midwest, South, and West), urbanicity (whether the zip code associated with the program address was located inside an urbanized area versus located outside an urbanized area), and the percentage of minority children in a program (greater than or equal to 50% minority enrollment versus less than 50% minority enrollment). The combination of these three stratification variables formed a 4 x 2 x 2 matrix with 16 cells. Exhibit 2-1 shows the total number of Head Start programs in each cell, the total number of study-eligible children enrolled, and the number of programs drawn from each cell for the sample.

Exhibit 2-1

Total Number of Programs Available, Total Enrollment of Children Aged 3 and Older, and the Number of Programs Drawn from Each Cell(a)

Minority Enrollment Under 50%
                          Northeast    Midwest     South       West
Urban   Programs          72           96          32          36
        Enrollment        23,765       37,191      13,542      14,039
        Selected          1            2           1           1
Rural   Programs          89           192         156         70
        Enrollment        19,068       63,600      48,202      15,363
        Selected          1            3           2           1

Minority Enrollment 50% or Higher
                          Northeast    Midwest     South       West
Urban   Programs          174          155         240         148
        Enrollment        71,296       93,614      177,878     106,316
        Selected          4            5           9           5
Rural   Programs          6            12          193         63
        Enrollment        1,663        4,338       75,283      19,646
        Selected          0            0           4           1

(a) Key to each cell of the table: total number of programs; total enrollment of children aged 3 and older; and actual number of selected programs.

 

The sampling approach used a three-stage design. The first stage was the selection of 40 Head Start programs; the 40 program selections were allocated to the 16 cells in proportion to the enrollment of children aged 3 and older reported in the 1995-96 PIR for each stratum. The second stage of sampling involved the selection of four centers from those operated by each of the selected programs. The average Head Start program operated nearly seven centers, with a range from 0 through 131 (a small number of programs were entirely home-based and counted as having zero centers). The third stage of sampling was the selection of individual children in the selected centers.

The First-Stage: The Sample of 40 Head Start Programs

In a multi-stage sample design, Head Start programs were the Primary Sampling Units (PSUs). Because 40 PSUs was a relatively small number, it was necessary to carefully stratify the Head Start programs to ensure that the selected programs were well distributed on those characteristics that were likely to be correlated with the variables being measured. Information on the location of each of the programs in the study universe, the racial/ethnic composition of the children served, and the enrollment of children aged 3 and older was taken from the PIR database and used for stratification.

The 40 Head Start programs for FACES were selected using probability proportional to size (PPS) sampling, which gave larger Head Start programs a greater chance of selection while providing each Head Start family in the sample with an approximately equal overall probability of selection. The measure of size for each program was its enrollment of children aged 3 and older.
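
The report does not reproduce the contractor's selection routine; the following is only a minimal sketch of systematic PPS selection within a single stratum, with hypothetical program IDs and enrollment counts standing in for the PIR measure of size.

    # Minimal sketch of systematic PPS selection within one stratum.
    # Program IDs and enrollments below are hypothetical, not FACES data.
    import random

    def pps_systematic_sample(programs, n_select, seed=0):
        """Select n_select programs with probability proportional to size.

        `programs` is a list of (program_id, enrollment_3_and_older) pairs;
        enrollment of children aged 3 and older is the measure of size.
        """
        total = sum(size for _, size in programs)
        interval = total / n_select                    # sampling interval
        start = random.Random(seed).uniform(0, interval)
        targets = [start + k * interval for k in range(n_select)]

        selected, cumulative, t = [], 0.0, 0
        for prog_id, size in programs:
            cumulative += size
            while t < len(targets) and targets[t] <= cumulative:
                selected.append(prog_id)               # larger programs are hit more often
                t += 1
        return selected

    # Example: four hypothetical programs in one stratum, two draws.
    stratum = [("A", 1200), ("B", 300), ("C", 2500), ("D", 500)]
    print(pps_systematic_sample(stratum, n_select=2))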

The universe of 1,734 programs was sorted into the four census regions (Northeast, Midwest, South, and West). In the 1995-96 PIR, the distribution of Head Start children aged 3 and older across the regions was: Northeast, 14.8%; Midwest, 25.3%; South, 40.1%; and West, 19.8%. Within each census region, the programs were sorted into two groups: 1) those located in a Metropolitan Statistical Area (MSA) county (urban), and 2) those located in a non-MSA county (rural). This sorting was done using a special data file that linked county-level data with the zip code of the program office, providing a distinction between programs located in urban and rural areas. According to the 1995-96 PIR, about two-thirds of Head Start children aged 3 and older were enrolled in programs whose offices were based in urban areas.

Within the MSA versus non-MSA grouping in each region, programs were sorted by whether minority enrollment was at or above 50% or below 50%. The use of these three stratifiers helped ensure that the sample of 40 programs was well distributed geographically and across urban versus rural locations, and also well distributed with respect to the racial/ethnic composition of the children being served. Thus, as shown in Exhibit 2-1, the first-stage sampling frame included 16 cells defined by three stratification variables: region (4) by urbanicity (2) by minority enrollment (2). The exhibit also shows that two of the cells contained very few programs (12 or fewer) and therefore had no sample programs drawn.
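
Purely as an illustration, the sketch below assigns a hypothetical program record to one of the 16 first-stage strata; the field names are invented and do not come from the PIR.

    # Hedged sketch: classify a program into one of the 16 strata
    # (region x urbanicity x minority enrollment). Fields are hypothetical.
    def stratum_key(program):
        urbanicity = "urban" if program["in_msa_county"] else "rural"
        minority = "minority >= 50%" if program["pct_minority"] >= 50 else "minority < 50%"
        return (program["census_region"], urbanicity, minority)

    example = {"census_region": "South", "in_msa_county": True, "pct_minority": 62.0}
    print(stratum_key(example))   # ('South', 'urban', 'minority >= 50%')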

The final sample included eight programs that provided a majority of their enrolled children with full-day services and 10 others that provided such services to a minority of their children (approximately one quarter of all programs provided such services). In addition, 16 programs provided home-based services to at least some of their children.

The Second-Stage: The Sample of Head Start Centers

The most efficient way to sample children was to start by selecting a random sample of Head Start centers.1 As shown in Exhibit 2-2, 36 of the selected programs had four or more centers. Because the PIR database did not contain information on enrollment within individual centers, each of the 40 selected programs was asked to provide a listing of its centers, along with the actual number of children enrolled in each center for the 1996-1997 school year.

Exhibit 2-2

Distribution of Centers Within Programs in FACES and in the 1995-96 PIR
                                             Programs selected       1995-1996 PIR
                                             for FACES (N = 40)      (N = 1,734)
Programs with less than 60 children total    0   (0.0%)              95    (5.5%)
Programs with 0 centers                      0   (0.0%)              3     (0.6%)
Programs with 1 center                       0   (0.0%)              283   (16.9%)
Programs with 2 centers                      1   (2.5%)              164   (9.5%)
Programs with 3 centers                      3   (7.5%)              149   (8.6%)
Programs with 4 or more centers              36  (90.0%)             1,040 (58.9%)

 

Prior to the project field test conducted in spring 1997 (see Section 2.8), a PPS sample of four centers was selected from each of the 40 programs, except for the four programs that had fewer than four centers. A total of 157 centers was selected in the second-stage sample.

When a new, larger cohort of children was selected for the main FACES study beginning in the fall of 1997, each sampled Head Start program was again asked to provide a current list of all their centers with an estimated number of 3- and 4-year-old children at each center who would be enrolling in Head Start for the first time that fall. Because the number of 3- and 4-year-old children to be selected was adjusted for each site to reflect the size of participating programs, additional centers (beyond the original four centers that participated in the spring 1997 field test) were added at some programs to provide the increased sample size. The total number of centers participating in the fall of 1997 was 180.

The Third-Stage: The Sample of Head Start Children

The final stage of sampling involved the selection of Head Start children and families. Class rosters were obtained from each Head Start center selected during the second stage of sampling; the rosters identified children new to Head Start, with the 3- and 4-year-old children listed separately within each class. In order to achieve the desired sample of 3,200 children and families, an over-sample of 3,648 was targeted. This over-sample, which assumed an 85% response rate, comprised 1,410 3-year-old children and 1,510 4-year-old children who were new to Head Start, plus an estimated 728 returning children who had participated in the spring 1997 field test.

To determine the distribution of 3- and 4-year-old children across programs, the desired sample size of 1,200 3-year-old children was first allocated across the sampling strata in proportion to the estimated number of 3-year-old children in each stratum. The number of 3-year-old children targeted for selection from each program was based on the proportion of 3-year-old children in the sampling stratum and the proportion of 3-year-old children new to the Head Start program in the fall of 1997, making the probability of selection of a 3-year-old child approximately equal within each stratum. A similar procedure was adopted for determining the number of 4-year-old children to be selected from the program.

Once the allocation of the sample was determined at the program level, the numbers of 3- and 4-year-old children to be selected at the center level were determined. For example, the number of 3-year-old children needed from a program was divided by the number of sampled centers in that program, and the result was multiplied by the inverse of the ratio of the number of 3-year-old children in the program to the total number of children in the program. Children were then randomly selected across the classes having the highest proportion of 3- and 4-year-old children new to Head Start.
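
Read literally, the center-level calculation described above works out as follows; the counts in this sketch are hypothetical and serve only to make the arithmetic concrete.

    # Hypothetical counts; a literal transcription of the center-level
    # allocation arithmetic described in the paragraph above.
    needed_3yo_from_program = 36   # 3-year-olds allocated to this program
    centers_sampled = 4            # centers drawn from this program
    program_3yo = 150              # 3-year-olds enrolled in the program
    program_total = 500            # all enrolled children in the program

    per_center = (needed_3yo_from_program / centers_sampled) * (program_total / program_3yo)
    print(round(per_center, 1))    # target number of children per sampled center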

2.3 Response Rate

A critical indicator of the success of any study is the actual participation or response rate of the individuals selected to participate. For this study, 3,648 families were targeted for participation, and 3,179 of these families provided signed consent forms prior to the beginning of the fall 1997 data collection, for an overall response rate (agreement to be in the study) of 87.1%. Exhibit 2-3 shows the number of completed interviews for each of the data collection waves.

Exhibit 2-3

Number of Completed Parent Interviews by Data Collection Wave
                           Fall 1997    Spring 1998    Spring 1999
Targeted for recruitment   3,648        3,648          3,648
Signed consent forms       3,179        3,179          3,179
Parent interviews          2,983        2,688          806(a) / 1,520(b)
Supplemental interviews    --           137(c)         --

(a) Only parents of children who returned to Head Start for a second year.

(b) Parents of children who left Head Start in spring 1998 and were completing kindergarten in spring 1999.

(c) Parents who were not interviewed in fall 1997.

 

A number of strategies were used both to encourage families’ continuing participation and to minimize sample attrition. FACES posters were used to advertise the upcoming site visits. Appointment reminder postcards and FACES refrigerator magnets were mailed to homes one week prior to the visit, and phone calls were made to each respondent the night before the interview to increase the probability that respondents would keep their scheduled interview appointments. A monetary incentive of $15 was given to each participant for completing the interview, and participating classrooms were given developmentally appropriate toys for the children. At the end of the parent interview, each respondent was asked to provide the names and addresses of three individuals who would always know the respondent’s whereabouts, and signed a release authorizing these individuals to provide this information to the study team, if necessary.

2.4 The Instruments

The research team developed a set of parent interview instruments in consultation with ACYF staff and investigators from the Head Start Quality Research Centers (1995-2000).2 One instrument was used at baseline, with adaptations used for the two subsequent data collections. The parent interviews were designed to collect the up-to-date information needed to paint a current picture of Head Start families while remaining sensitive to differences in the respondents’ backgrounds. Wherever possible, existing measures were included, depending on their length, reliability and validity, and appropriateness for the study goals. Both the English and Spanish parent interview forms are found in Appendices B1-B3.

During the baseline data collection, the typical administration time for the English parent interview was about 55 minutes. When interviews were conducted in Spanish, the interview length increased by about 10-15 minutes. Bilingual staff were available to conduct interviews in Spanish, as needed. Arrangements were made through the local programs to have interpreters available for families who spoke languages other than English or Spanish; these interpreters were paid by the study team and were not members of the local Head Start program staff.

Follow-up interviews were administered during the spring of 1998 and 1999. The baseline instrument was modified to include additional questions regarding the primary caregivers’ experiences and satisfaction with Head Start over the previous program years. Baseline demographic information about the child, the family, and how the family became linked with Head Start was not asked after the first interview. However, if for some reason a family was unable to complete the fall 1997 baseline interview but was participating in spring 1998, a supplemental parent interview was used to gather this information at the conclusion of the regular spring 1998 interview.

2.5 Staffing

Site visit teams were created for each program. Teams were led by a Site Manager from either Abt or CDM, and included trained, experienced field interviewers. Local Head Start program staff or parents were hired temporarily to serve as On-site Coordinators. The responsibilities for each of the positions related to the parent interview are described below. The additional field staff members who were responsible for child assessments and classroom observations are described in the FACES Technical Report II.

  • The Study Coordinators were senior staff from Abt and CDM who managed all site development activities with the programs, including materials development and all data collection logistics. Study Coordinators also supervised the training and work activities of the Site Managers, Field Interviewers, and On-site Coordinators.

  • The Site Managers, who were members of the Abt or CDM research staff, each had primary responsibility for one or more specific sites. While in the field, they conducted the staff interviews, coordinated the completion of the parent interviews, interviewed parents (as needed), and completed quality checks of the completed instruments before shipping them to Abt for data entry. Site Managers also conducted the home interviews with the case study families as well as the case study monthly telephone interviews between site visits (See Section IV for further information regarding the case study).

  • The Field Interviewers were drawn from a national pool of experienced data collectors, and included a number of bilingual staff who were able to interview both English-speaking and Spanish-speaking parents. Every attempt was made to culturally match interviewers to the study population. Their responsibility was to conduct parent interviews.

  • The On-site Coordinators (OSC) were local Head Start staff or parents, who were nominated by the local Head Start Directors, and worked under the supervision of the Abt and CDM Study Coordinators. They distributed project information to staff and parents, recruited parents, obtained consent forms, scheduled both parent and staff interviews prior to the visits, and assisted with the collection of attendance data throughout the year. At the end of each round of data collection, the OSCs received a stipend for their work. In some cases, this role was shared by more than one individual per program, based on the workload (number of children) and the distance from one selected center to another (centers in some programs were hundreds of miles apart). During the visits, the OSCs provided general logistical support but did not conduct interviews.

The Site Managers and Field Interviewers each attended two days of training in Washington, DC, prior to the first data collection; prior to each subsequent data collection, the field staff received a single day of training. Information from the pilot test site visits (see Section 2.8) and experience from previous work on the Descriptive Study of Head Start Health Services (Keane, O’Brien, Connell, & Close, 1996), conducted in 1994, provided the foundation for this training. Training manuals that included study background information, general interviewing and confidentiality procedures, and specific field and administrative procedures were provided to each member of the site visit teams. OSCs received detailed training, instruction, and close, ongoing supervision directly from the Study Coordinators.

2.6 Description of Data Collection Procedures

Following contact with the ACF Regional Offices and the mailing of letters from the Associate Commissioner of Head Start, the Study Coordinators called the 40 selected local programs to invite them to participate in the study. All selected programs agreed to participate and provided the information required to draw the subsequent samples of centers and children. OSCs were identified, and arrangements were made to recruit selected families into the study and to set up the logistics of the visits (e.g., space, interview schedule). Materials such as FACES brochures, posters, refrigerator magnets, and reminder postcards were used to inform parents about the project and the interview schedule.

A site visit team was sent to most programs for a two-week visit to conduct the parent and staff interviews, child assessments, and both child and classroom observations, as well as to collect the case-study data. One large program required four weeks to complete, while one small program required only a one-week visit. A description of the data collection methodology, as well as the findings from the child assessments and the child and classroom observations, can be found in the FACES Technical Report II.

In most instances, parents were interviewed privately in spaces arranged at their local Head Start centers, although some parents were interviewed at alternate locations, mostly homes. When parents were unavailable for their scheduled interviews, field staff worked with the OSCs to reschedule the interviews before the end of the site visit. Completed interviews were quality checked for missing data and coding errors, corrected if necessary, and forwarded to Abt for processing.

2.7 Confidentiality

Confidentiality was assured for all study respondents, parents and staff. At the time of recruitment, Head Start Directors were assured that this project was a descriptive study, and not an evaluation of their programs’ or centers’ effectiveness or compliance with the Program Performance Standards. Parents also received assurances prior to the interview that their responses would not be shared with Head Start program staff or subsequent school staff and would be reported only as part of group statistics for all the participating Head Start parents. Researchers obtained signed, informed consent (Appendix B4) from all parents prior to any participation by themselves or their children.

2.8 Tests of Procedures and Instruments

Pilot Test

During the development of the parent and staff interviews, a series of pilot interviews was completed to establish the readability and comprehensibility of the questions (in English and Spanish) with the target population, as well as the efficiency of the data collection procedures. The pilot test was completed at two Head Start programs, one urban and one rural, in February of 1997. The research team conducted interviews with appropriate Head Start staff and with four parents at each site, and completed child assessments and classroom observations. Many improvements to the parent interview resulted from respondent feedback and from debriefing sessions with the parent interviewers after the conclusion of the pilot data collection.

The pilot test not only assessed the instruments and data collection procedures but also tested the process for managing the multi-faceted data collection in a way that minimized the burden on program staff time and resources, the level of intrusion on normal program operations, and the burden placed on parents and children. The lessons learned from the hands-on experience of this pilot test were incorporated into the revised OMB clearance submission and used to amend the procedures for the spring 1997 field test.

Field Test

A large field test was completed in spring 1997 with approximately 2,400 children and families in all 40 of the sampled Head Start programs. The field test was an opportunity to assess the feasibility of interviewing and assessing parents and children on a large scale using the data collection instruments modified after the pilot test, and it provided valuable information on the status of Head Start programs, children, and families. The procedures and results of this field test can be found in the Head Start Program Performance Measures: Second Progress Report (1998b).3

2.9 Data Management and Child Weights

Questionnaires were reviewed in the field by the Site Managers, who noted any missing data that needed to be recovered and provided feedback to the interviewers as needed. A second review was completed when the forms were returned to the Abt project office. Upon completion of each site visit and the subsequent data checking and data entry, all written responses to open-ended questions were coded. The child-level data were then weighted to produce national Head Start estimates.

Weights 4 

Cross-sectional weights were generated for the fall 1997 and spring 1998 data, with additional weights created for use with the longitudinal findings. The fall 1997 child cross-sectional weights were calculated as the inverse of the product of the probabilities of selection at each stage of sampling. Using program level information from the PIR and center level information collected directly from the programs, three levels of weights -- program, center, and child -- were generated using the formulas below.

For each child, the final child weight = (program weight) x (center weight) x (child weight), where

program weight = (# 3- and 4-year-olds in stratum h) / (n_h × # 3- and 4-year-olds in program), where h = 1, 2, ..., 14 and n_h = # programs sampled in stratum h;

center weight = (# 3- and 4-year-olds in program) / (m × # 3- and 4-year-olds in center), where m = # centers sampled in program;

child weight for new 3-year-olds = (# new 3-year-olds listed in center) / (# new 3-year-old sampled respondents in center);

child weight for new 4-year-olds = (# new 4-year-olds listed in center) / (# new 4-year-old sampled respondents in center); and

child weight for children returning from the field test = (# returning children estimated for center) / (# returning field test children in center).
 

A final adjustment was made to each of these child weights so that they represented the full population of Head Start children. This adjustment was made by multiplying each child weight by the ratio of the expected number of children in Head Start in each category (new 3-year-olds, new 4-year-olds, returning 4-year-olds, as determined by the PIR) to the sum of the weights of the actual children in the study. As a result of the weighting procedure, the fall 1997 sample was weighted to represent a Head Start population of 779,785.
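
As a rough illustration of how these pieces combine, the sketch below computes a single fall 1997 child weight from hypothetical counts. It is a simplification, not the weighting program actually used, and it assumes the category weight totals have already been summed for the final adjustment.

    # Hypothetical counts; a simplified sketch of the fall 1997 child weight.
    def final_child_weight(stratum_3_4, n_programs_in_stratum, program_3_4,
                           centers_sampled, center_3_4,
                           listed_in_center, sampled_in_center,
                           expected_category_pop, sum_of_category_weights):
        program_w = stratum_3_4 / (n_programs_in_stratum * program_3_4)
        center_w = program_3_4 / (centers_sampled * center_3_4)
        child_w = listed_in_center / sampled_in_center
        base = program_w * center_w * child_w
        # Final adjustment: scale so that weights in this category sum to the
        # PIR-based expected number of Head Start children in the category.
        return base * (expected_category_pop / sum_of_category_weights)

    # One newly enrolled 3-year-old (all numbers hypothetical):
    w = final_child_weight(stratum_3_4=60_000, n_programs_in_stratum=3, program_3_4=400,
                           centers_sampled=4, center_3_4=80,
                           listed_in_center=30, sampled_in_center=10,
                           expected_category_pop=230_000, sum_of_category_weights=215_000)
    print(round(w, 1))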

The three spring 1998 child cross-sectional weights were generated by making adjustments to the original fall 1997 cross-sectional weights to account for the change in sample size from fall to spring. This is shown in the following formulas:

child weight for new 3-year-olds = (# new 3-year-olds in study in fall 1997) / (# new 3-year-olds remaining in study in spring 1998),

child weight for new 4-year-olds = (# new 4-year-olds in study in fall 1997) / (# new 4-year-olds remaining in study in spring 1998), and

child weight for children returning from the field test = (# returning field test children in study in fall 1997) / (# returning field test children in study in spring 1998).

 

As a result of this weighting procedure, the spring 1998 sample was weighted to represent a Head Start population of 763,671.
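
Reading the ratios above as multiplicative adjustments to the fall 1997 weights, the spring 1998 weight for one category can be sketched as follows; the counts are hypothetical.

    # Hypothetical counts; spring 1998 cross-sectional adjustment for one
    # category (new 3-year-olds), applied to a fall 1997 child weight.
    fall_weight = 200.6
    n_fall_1997 = 1_350       # new 3-year-olds in the study in fall 1997
    n_spring_1998 = 1_215     # new 3-year-olds remaining in spring 1998

    spring_weight = fall_weight * (n_fall_1997 / n_spring_1998)
    print(round(spring_weight, 1))   # children remaining in spring carry larger weights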

The child longitudinal weights were generated for two groups of families: 1) families in which the same respondent participated in both the fall 1997 and spring 1998 parent interviews, and 2) families in which the same respondent participated in the fall 1997, spring 1998, and spring 1999 parent interviews. In each case, the fall 1997 child weight was adjusted for non-response by multiplying it by a program-level factor that accounted for families that had a different interview respondent over time or that did not complete an interview because of a refusal, an inability to contact the family at the time of the visit (although the family was still enrolled in Head Start), or the parent's unavailability during the site visit. The factor was based on the following formula:

(# returning children in study in spring 1998) / (# returning children in study in spring 1998 + # unable to interview + # with different respondent from fall 1997).


The application of this weighting procedure to the longitudinal sample (families who were in Head Start from fall to spring) resulted in a weighted representation of 634,949 Head Start families.
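
A numeric sketch of this non-response adjustment for one hypothetical program follows. The counts are invented, and the sketch assumes the usual convention of inflating the weights of retained respondents (i.e., dividing by the retained share shown in the formula above); the report describes the factor only at a high level.

    # Hypothetical counts for one program; non-response adjustment of a
    # fall 1997 child weight for the fall 1997 - spring 1998 longitudinal file.
    returning_spring_1998 = 70        # same respondent completed both interviews
    unable_to_interview = 6
    different_respondent = 4

    retained_share = returning_spring_1998 / (
        returning_spring_1998 + unable_to_interview + different_respondent)
    fall_weight = 200.6
    longitudinal_weight = fall_weight / retained_share   # assumed non-response inflation
    print(round(retained_share, 3), round(longitudinal_weight, 1))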

Data Analysis

Analyses were conducted in SAS and SUDAAN using both unweighted and weighted data; weighted findings are presented in the report unless otherwise specified. As part of the routine data analysis strategy, care was taken to minimize the effects of multiple tests (i.e., inflated Type I error) by identifying and completing only those analyses that were meaningful to the study goal of providing a descriptive picture of Head Start families and staff. However, because this was a descriptive study, between-group differences are typically presented whether or not the differences were statistically significant. In the presentation of data, ‘N’ indicates that the entire sample was used, while ‘n’ indicates that the sample was smaller than the entire sample because of missing data, planned skip patterns in the questions, or the presentation of data for selected subsets of families. The Ns reported in the text and exhibits are unweighted.
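
The analyses themselves were run in SAS and SUDAAN; purely for illustration, the sketch below shows what a weighted percentage of the kind reported in the exhibits looks like when computed from hypothetical child-level records and weights.

    # Hypothetical records; a weighted percentage for illustration only
    # (the actual analyses used SAS and SUDAAN, not this code).
    records = [
        {"weight": 210.4, "mother_respondent": True},
        {"weight": 187.9, "mother_respondent": True},
        {"weight": 243.1, "mother_respondent": False},
    ]

    total_weight = sum(r["weight"] for r in records)
    weighted_pct = 100 * sum(
        r["weight"] for r in records if r["mother_respondent"]) / total_weight
    print(f"n = {len(records)} (unweighted); weighted percentage = {weighted_pct:.1f}%")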

2.10 Strengths and Limitations of the Research

The collection of data at three time points provides some ability to examine prediction and change over time, but the overall time period is still relatively limited: about 18 months for families who completed all three interviews, and 6-7 months for families who were in Head Start for only one year. With this in mind, it is recognized that the study has both strengths and weaknesses.

Strengths

The stratification plan used for the random sample provides a representative view of the general Head Start population, allowing child-level data to be weighted and national estimates produced. At the time of the data collection, this was the largest national sample of Head Start families ever studied, increasing the power to detect differences between subgroups of Head Start families. The large sample size also improved the ability to learn more about the many different populations represented among Head Start families, such as families with children having a diagnosed disability, families experiencing welfare reform, and different ethnic groups.

As a descriptive longitudinal study, FACES provides a unique, comprehensive look at a nationally representative group of children and families, including some who attended the program for two years. The ecological research design provides information from several different developmental contexts, including home, school, and neighborhoods, as well as information on how areas of broader social change influence Head Start children and families. This study is providing information that Head Start can use at both the national and local levels to effect programmatic changes that can quickly benefit the families that are served.

Limitations

A primary limitation of a descriptive study is that it does not provide conclusive findings regarding the actual impact of Head Start on children and families. Without a control or comparison group, it is difficult to infer causal relationships between positive or negative outcomes and a family’s Head Start experience.

The large number of topics addressed in the parent interview and the efforts to minimize the time burden on the participating families prevented the parent interview instrument from going into detail on any particular topic. While this strategy fit with the original goal of describing Head Start families, it has also left some questions unanswered.

2.11 Parent Interview Descriptors

The following tables present the basic information describing the collection of data at each of the three time points. Exhibit 2-4 shows the range of respondents (based on their relationship to the Head Start children) who were interviewed in fall 1997, while Exhibit 2-5 provides information on the relationship of the respondents, the location of the interviews, and the number of repeat respondents over the three data collection waves. As shown in these exhibits, the large majority of respondents were mothers (86.1% to 88.0% across the three time points), with fathers adding roughly another 5% (4.8% to 5.1%). A majority of the interviews were conducted in the Head Start centers (74.0% to 79.4%).

Exhibit 2-4

Relationships of the Fall 1997 Respondents to the Head Start Children
Relationship                           N       Weighted Percentage
Mother                                 2,670   87.8
Father                                 151     5.1
Stepmother                             10      0.3
Stepfather                             4       0.1
Grandmother                            125     4.2
Grandfather                            3       0.1
Great grandmother                      5       0.2
Great grandfather                      0       0.0
Sister/stepsister                      1       0.0
Brother/stepbrother                    0       0.0
Other relative or in-law (female)      21      0.7
Other relative or in-law (male)        1       0.0
Foster parent (female)                 34      1.1
Foster parent (male)                   1       0.0
Other non-relative (female)            4       0.1
Other non-relative (male)              0       0.0
Parent’s partner (female)              2       0.1
Parent’s partner (male)                1       0.0

 

Exhibit 2-5

Characteristics of the Parent Interviews over Three Data Collection Waves
Characteristics (Unweighted Percentages)        Fall 1997      Spring 1998    Spring 1999
                                                (N = 2,983)    (N = 2,688)    (N = 806)

Relationship of Respondent to Head Start Child
  Mother                                        87.8           88.0           86.1
  Father                                        5.1            4.8            4.8
  Grandmother                                   4.2            4.3            5.0
  Other                                         2.9            2.9            4.1

Location of Interview
  Head Start center                             79.4           76.0           74.0
  Home                                          14.4           17.6           20.1
  Other location                                3.0            6.4            5.8

Repeat Respondents
  Fall 97 and spring 98                         --             85.2           --
  Fall 97, spring 98, spring 99                 --             --             23.2(a)

(a) Percentage reflects families from the original sample who returned to Head Start for a second year.



1 While the use of the term ‘centers’ broadly refers to the unit of direct service delivery, some Head Start programs included home-based services. These services were generally provided in small units (or were incorporated into operating centers for the purposes of reporting enrollment) that were considered ‘centers’ for the purposes of sampling.

2 The Head Start Quality Research Centers (QRCs) represented a federally funded consortium of researchers with expertise in various areas of child and program development. This consortium was created to foster ongoing partnerships among ACYF, Head Start grantees, and the academic research community, with the goal of enhancing quality program practices and outcomes.

3 This report is available at http://www.acf.hhs.gov/programs/opre/hs/faces/index.html or can be requested by fax (703-683-5769) or email (hspmc6@mail.idt.net).

4 This subsection was adapted from work by Westat for the FACES Technical Report II.

 
