
III. Data Collection Instruments

To capture the breadth of the Head Start program, the evaluation used a variety of outcome measures for children and parents and documented important child, family, and program characteristics. These measures were selected to collect information about child and family background characteristics, family child care home implementation issues, the comprehensiveness and quality of Head Start services in the two settings, and child and family outcomes.

Criteria for Selecting Measures

The evaluation team used these criteria as the basis for selecting measures:

  • The content of the measure is appropriately descriptive or evaluative;

  • The measure can be administered reliably by trained interviewers rather than a trained clinician;

  • The time required for training and administration is reasonable;

  • The measure has adequate psychometric properties in terms of reliability and validity;

  • The measure takes into account respondents’ literacy levels and linguistic backgrounds;

  • The measure has been used with a wide variety of ethnic groups and populations similar to the target group and shown to be reliable and valid for those populations;

  • The measure has been translated and administered in Spanish or is translatable;

  • The measure has been used in other large-scale research studies; and

  • The measure has been shown to be sufficiently sensitive to program effects.

Exhibit III-1
Overview of Domains and Instruments

Child Development
  Child characteristics: Family Data Interview (RMC Research, 1993c)
  Physical functioning: Daberon-2 (Danzer, Lyons, Gerber, & Voress, 1991); Child Observation Record (High/Scope Educational Research Foundation, 1992); Kindergarten Teacher Interview (RMC Research, 1993d)
  Cognitive functioning: Daberon-2 (Danzer et al., 1991); Peabody Picture Vocabulary Test-Revised (Dunn & Dunn, 1981); Concepts About Print (RMC Research, 1993e); Child Observation Record (High/Scope Educational Research Foundation, 1992); Kindergarten Teacher Interview (RMC Research, 1993d)
  Social-emotional functioning: Child Adaptive Behavior Inventory-Revised (Schaefer, Hunter, & Edgerton, 1984); Child Observation Record (High/Scope Educational Research Foundation, 1992); Kindergarten Teacher Interview (RMC Research, 1993d)

Parent Functioning
  Family characteristics: Family Data Interview (RMC Research, 1993c)
  Involvement in Head Start; satisfaction with Head Start; parent-child literacy activities: Parent Perceptions of Head Start Services (RMC Research, 1993f)
  Parenting skills and attitudes: Family Routines Inventory (Boyce, Jensen, James, & Peacock, 1983); Parent Questionnaire-Revised (Stipek, Milburn, Clements, & Daniels, 1992)
  Parent attitudes toward discipline: Parenting Dimensions Inventory (Slater & Power, 1987)
  Adequacy of family resources: Family Resource Scale-Revised (Leet & Dunst, 1985)
  Family stressors: Significant Life Events Checklist (Holmes & Rohe, 1967)

Implementation Characteristics
  Recruitment of providers and families; coordination of services; supervision and support training; agency experience; record keeping; costs; FCC contractors or employees: Agency Staff Questionnaire (RMC Research, 1993a); Caregiver Characteristics Form (RMC Research, 1993b)

Comprehensiveness and Quality of Services
  Achievement of Head Start Performance Standards: Head Start On-Site Program Review Instrument (Head Start Bureau, 1993)
  Caregiver characteristics: Arnett Scale of Caregiver Behavior (Arnett, 1989); Caregiver Characteristics Form (RMC Research, 1993b); Agency Staff Questionnaire (RMC Research, 1993a)
  Program structure and dynamics: Head Start On-Site Program Review Instrument (Head Start Bureau, 1993); Developmental Practices Inventory (Goodson, 1990); Caregiver Characteristics Form (RMC Research, 1993b); Agency Staff Questionnaire (RMC Research, 1993a)

 

Exhibit III-2
Summary of Data Collection Instruments and Participants

Child and Family Background
  Family Data Interview; Parent Perceptions of Head Start Services; Family Routines Inventory; Family Resource Scale; Parenting Dimensions Inventory; Significant Life Events Checklist
    Administrators: local data collectors. Respondents: parents or children.

Program Quality and Comprehensiveness
  Head Start OSPRI
    Observation items. Administrators: data supervisors. Respondents: center teachers and FCC providers.
    Record review items. Administrators: local data collectors.
    Interview items. Administrators: data supervisors. Respondents: parents or children; center teachers and FCC providers.
  Agency Staff Questionnaire (a). Administrators: data supervisors.
  Caregiver Characteristics Form. Administrators: data supervisors. Respondents: center teachers and FCC providers.
  Developmental Practices Inventory. Administrators: data supervisors. Respondents: center teachers and FCC providers.
  Arnett Scale of Caregiver Behavior. Administrators: data supervisors. Respondents: center teachers and FCC providers.

Child Outcomes
  Peabody Picture Vocabulary Test; Daberon-2; Concepts About Print
    Administrators: local data collectors. Respondents: parents or children.
  Child Observation Record. Respondents: center teachers and FCC providers.
  Child Adaptive Behavior Inventory. Respondents: center teachers and FCC providers; kindergarten teachers.
  Kindergarten Teacher Interview. Respondents: kindergarten teachers.

(a) The family child care coordinator was the primary respondent, although other component coordinators or the Head Start director often participated in parts of the agency interview.

 

Child and Family Background Information

The evaluation team conducted interviews with parents to collect background information on participating children and families. Exhibit III-3 summarizes the parent interview components and timelines.

Exhibit III-3
Parent Interview Components

  Family Data Interview: fall and spring of the Head Start year
  Parent Perceptions of Head Start Services: fall and spring of the Head Start year; spring of kindergarten
  Family Routines Inventory: fall and spring of the Head Start year
  Family Resource Scale: fall and spring of the Head Start year
  Parenting Dimensions Inventory: fall and spring of the Head Start year
  Significant Life Events Checklist: fall of the Head Start year

Family Data Interview

The Family Data Interview (RMC Research, 1993c) served as a measure of family background characteristics. RMC Research staff designed this form specifically for this evaluation, using the Head Start Family Information System (HSFIS) response categories wherever possible to facilitate comparability between HSFIS data and the data from this evaluation. Exhibit III-4 presents the data elements included in the Family Data Interview.

Exhibit III-4
Elements of the Family Data Interview

Child Background Characteristics
  Age
  Gender
  Ethnicity/language: ethnicity; English speaking ability; primary language spoken
  Previous child care experience: type of previous care; age of entry into day care; length of time in day care
  Disabilities

Family Background Characteristics
  Family structure: number of adults in home; number of children in home; age of youngest child in home; older siblings in Head Start; family type
  Socioeconomic status: family income; public assistance received; length of time at present address; number of moves in past year; transportation availability; parent health
  Parent ethnicity/language: ethnicity; language spoken in the home; English speaking ability
  Parent education: parent schooling completed; school or training type
  Employment status: employment status; hours per week of employment; past employment experience and stability

Parent Perceptions of Head Start Services

The Parent Perceptions of Head Start Services (RMC Research, 1993f) interview served two purposes: to examine parent preferences and satisfaction with Head Start services and to assess parenting skills and involvement as a parent outcome. Past child care research has shown that families choose child care arrangements for a number of reasons, including the flexibility of the provider, the location, and the atmosphere (Kisker et al., 1989). The Parent Perceptions of Head Start Services interview was conducted three times. The fall interview gathered information about parents’ program setting preferences, parents’ satisfaction with the assigned setting, the features of child care important to the parents, and the frequency of parent-child literacy activities.

The spring interview assessed the social services support provided to families during the Head Start year, parents’ satisfaction with the Head Start setting, parent involvement with Head Start, and parent-child literacy activities. The spring of kindergarten year interview assessed the extent to which the parents felt the Head Start program had prepared their children for kindergarten and current parent involvement in their children’s education.

Family Routines Inventory—Modified

Head Start seeks to involve parents in the education and welfare of their children and to improve the quality of life for low-income families. Head Start also promotes good parenting through home visits and training programs for parents. The Family Routines Inventory (Boyce et al., 1983) was selected to measure family interactions as a component of parenting skills. Parents’ discipline style and family interactions are areas of parental influence that research has shown to be most significant for young children's school success (Powell, 1991).

The 27-item Family Routines Inventory measures individual families’ enactment of positive routines that are thought to be productive. Evaluation staff selected 13 inventory items that focus on the routines most likely to be influenced by Head Start and supplemented them with a 10-item Parent Questionnaire (Stipek et al., 1992) that examined learning activities in the home. Respondents rated items such as “I read or tell stories to my child” and “My child does household chores” in terms of the frequency with which the activities and practices were carried out in their families. The 3-point rating scale ranged from every day to twice a month or less. Prior to this evaluation the Family Routines Inventory had been widely used in family research on a diverse range of families, including Head Start families who participated in a study of the relationship between family routines and child outcomes (Keltner, 1990). This and other studies indicate that family routines appear to be both a rich source of descriptive information about individual families and a sensitive indicator of similarities and differences among families. The original Family Routines Inventory demonstrated 30-day test-retest reliability of .79 (Boyce et al., 1983).
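The test-retest figure cited above is simply the correlation between total scores from two administrations of the same items about a month apart. The short sketch below illustrates that computation with invented ratings on the 13 selected items (coded 1 to 3, from twice a month or less to every day); the data and the resulting value are purely illustrative, not the evaluation's analysis.

    import numpy as np

    # Hypothetical ratings for 8 families on the 13 selected Family Routines
    # Inventory items, coded 1 (twice a month or less) to 3 (every day).
    rng = np.random.default_rng(0)
    time1 = rng.integers(1, 4, size=(8, 13))
    # Second administration about 30 days later: mostly the same answers,
    # shifted by small random amounts.
    time2 = np.clip(time1 + rng.integers(-1, 2, size=(8, 13)), 1, 3)

    total1 = time1.sum(axis=1)   # total routines score, first administration
    total2 = time2.sum(axis=1)   # total routines score, second administration

    # Test-retest reliability is the Pearson correlation between the two totals.
    r = np.corrcoef(total1, total2)[0, 1]
    print(f"30-day test-retest reliability (Pearson r): {r:.2f}")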

Family Resource Scale

In this evaluation, self-sufficiency refers to the adequacy of resources such as money, time, social networks, and transportation to meet the needs of the family as a whole. This multifaceted definition of self-sufficiency (one that goes beyond a strictly financial definition) is necessary to capture the complex array of factors that contribute to a family's self-sufficiency. The Family Resource Scale (Leet & Dunst, 1985) assesses family self-sufficiency broadly and descriptively by determining the adequacy of different types of resources in the households of young children. The 25-item scale is composed of three subscales: (1) time (e.g., “to be by yourself, to be with your spouse, to be with your children, to be with your friends, to sleep”), (2) money (e.g., “to pay bills, to save, for child care”), and (3) basic needs (e.g., “food, clothing, housing, medical care, transportation”). Parents responded to the question “Do you have enough of the following things?” using a 3-point scale composed of the responses usually, sometimes, and rarely. The coefficient alpha computed from the average correlation is .92, and the split-half reliability is .95 (Leet & Dunst, 1985).
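Both reliability statistics reported for the Family Resource Scale follow from standard formulas. The sketch below applies those formulas to simulated responses on a 25-item, 3-point scale; the simulated data, and therefore the printed values, are illustrative only.

    import numpy as np

    rng = np.random.default_rng(1)
    # Simulated responses from 40 parents to the 25 items: a common "adequacy
    # of resources" factor plus item-level noise, binned onto the 3-point
    # scale (1 = rarely, 2 = sometimes, 3 = usually).
    factor = rng.normal(size=(40, 1))
    raw = factor + rng.normal(size=(40, 25))
    items = np.digitize(raw, bins=[-0.5, 0.5]) + 1.0

    # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance).
    k = items.shape[1]
    alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                           / items.sum(axis=1).var(ddof=1))

    # Split-half reliability: correlate odd- and even-item half scores, then
    # adjust to full test length with the Spearman-Brown formula.
    odd, even = items[:, 0::2].sum(axis=1), items[:, 1::2].sum(axis=1)
    r_half = np.corrcoef(odd, even)[0, 1]
    split_half = 2 * r_half / (1 + r_half)

    print(f"coefficient alpha: {alpha:.2f}, split-half reliability: {split_half:.2f}")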

Parenting Dimensions Inventory

The Parenting Dimensions Inventory (Slater & Power, 1987) measures nurturance, responsiveness, and discipline style. The evaluation team selected only the seven items related to discipline because parental discipline techniques are a set of skills that Head Start parent education activities are likely to address. Respondents were asked to tell whether they usually, sometimes, or rarely adhered to certain discipline procedures such as, “I follow through on discipline for my child, no matter how long it takes,” and “There are times when I just don't have the energy to make my child behave as he/she should.” The internal consistency for the discipline subscale ranged from .56 to .77, and scores were also predictive of children’s psychosocial adjustment to school (Slater & Power, 1987).

Significant Life Events Checklist

The Significant Life Events Checklist (Holmes & Rohe, 1972) served as a measure of family stressors. Rather than attempting to measure mental states, this approach examines life events that are highly correlated with stress, such as changes in family structure (birth, marriage, divorce), financial or employment situation changes (new job, loss of job), education changes (finishing school, entering a new school), and other events (moving, family crises, alcohol or other drug problems). The 24-item checklist used in this evaluation prompted respondents to indicate whether any of these events had happened to their families in the past 6 months. The score equaled the total number of items checked.
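Scoring the checklist is a simple count. A minimal illustration with a hypothetical subset of the 24 items:

    # Hypothetical responses for one family; True means the event occurred in
    # the past 6 months.  The score is the number of items checked.
    events = {
        "moved to a new address": True,
        "parent started a new job": True,
        "parent lost a job": False,
        "birth of a child": False,
        "divorce or separation": False,
    }
    score = sum(events.values())
    print(f"Significant Life Events score: {score} of {len(events)} items checked")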

Program Comprehensiveness and Quality

Evaluation staff expected the quality and comprehensiveness of Head Start services provided in family child care homes to vary over time and across the 18 demonstration project sites. The evaluation team used five instruments to assess program quality. The Head Start On-Site Program Review Instrument (OSPRI; Head Start Bureau, 1993) was administered in both settings during the spring 1995 data collection. Evaluation staff completed the Caregiver Characteristics Form (RMC Research, 1993b), the Developmental Practices Inventory (DPI; Goodson, 1990), and the Arnett Scale of Caregiver Behavior (Arnett, 1989) for each family child care home and comparison center classroom teacher in the fall and spring. The evaluation team also interviewed appropriate agency staff at each data collection point using the Agency Staff Questionnaire (RMC Research, 1993a). Exhibit III-5 summarizes the data collection plan and shows the instruments administered in each setting at each data collection point.

Exhibit III-5
Data Collection Plan for Assessing Program Comprehensiveness and Quality (Cohort 2)

FCC homes
  Fall 1994: Caregiver Characteristics Form; Agency Staff Questionnaire; Developmental Practices Inventory; Arnett Scale of Caregiver Behavior
  Spring 1995: Caregiver Characteristics Form; Agency Staff Questionnaire; Developmental Practices Inventory; Arnett Scale of Caregiver Behavior; OSPRI

Center classrooms
  Fall 1994: Caregiver Characteristics Form; Developmental Practices Inventory; Arnett Scale of Caregiver Behavior
  Spring 1995: Caregiver Characteristics Form; Developmental Practices Inventory; Arnett Scale of Caregiver Behavior; OSPRI

Head Start On-Site Program Review Instrument (OSPRI)

The OSPRI comprises the Head Start Program Performance Standards, Performance Standards on Services to Children With Disabilities, eligibility and recruitment regulations, administrative regulations, staffing and option regulations, and fiscal regulations. Regional monitoring teams use the OSPRI during site visits to examine grantees’ compliance with federal regulations. The complete OSPRI instrument consists of 256 items. In this evaluation, the evaluation team collected data using the 166 items in the components of education, health, mental health, nutrition, social services, parent involvement, and disabilities services. The remaining items concerning enrollment, administration, and staffing requirements were not included because they pertain to the entire agency and would not differ for family child care homes and center classrooms.

OSPRI items vary in their complexity and degree of importance for assessing and determining compliance with the overall philosophy and goals of Head Start. Programs are evaluated for compliance using one or more methods of assessment, including observation, interview, and record review. When regional monitoring teams conduct OSPRI site visits, team members gather data from a sample of records and observations across all of a grantee's Head Start programs and make decisions about compliance in each component area at the grantee level. In this evaluation the OSPRI instrument was administered in each participating center classroom and family child care home in the spring of the Head Start year.

The full OSPRI takes monitoring teams several days to complete and has been used primarily to identify program areas out of compliance with federal regulations. To simplify the data collection procedures in this evaluation, the OSPRI was divided into observation, record review, and interview items. The data supervisor completed the observation items at the same time as the Developmental Practices Inventory (Goodson, 1990) and the Arnett Scale of Caregiver Behavior (Arnett, 1989). The record review items were completed by the local data collectors. The interview items were included in the agency, caregiver, or parent interview protocols, as appropriate. Exhibit III-6 shows the distribution of items in each component area.

Exhibit III-6
Number of OSPRI Items by Data Collection Method
Component Observation Record Review Interview
Education 26 9 5
Health 4 23 6
Mental health 0 2 14
Nutrition 11 8 9
Social services 0 12 6
Parent involvement 2 14 5
Disabilities services 1 6 0
Total 44 74 45
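The totals in Exhibit III-6 are column sums over the seven component areas; a few lines of Python reproduce them from the counts transcribed above.

    # OSPRI items used in the evaluation, by component and data collection
    # method, transcribed from Exhibit III-6.
    ospri_items = {
        "Education":             {"observation": 26, "record review": 9,  "interview": 5},
        "Health":                {"observation": 4,  "record review": 23, "interview": 6},
        "Mental health":         {"observation": 0,  "record review": 2,  "interview": 14},
        "Nutrition":             {"observation": 11, "record review": 8,  "interview": 9},
        "Social services":       {"observation": 0,  "record review": 12, "interview": 6},
        "Parent involvement":    {"observation": 2,  "record review": 14, "interview": 5},
        "Disabilities services": {"observation": 1,  "record review": 6,  "interview": 0},
    }

    for method in ("observation", "record review", "interview"):
        total = sum(counts[method] for counts in ospri_items.values())
        print(f"{method}: {total} items")   # 44, 74, and 45, matching the Total row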

Developmental Practices Inventory

The DPI (Goodson, 1990) is a 30-item scale designed to assess the developmental appropriateness of the preschool environment. Twenty of the items were taken from the Classroom Practices Inventory (Hyson, Hirsh-Pasek, & Rescorla, 1989). The inventory is based on the National Association for the Education of Young Children’s (NAEYC) guidelines for developmentally appropriate practices for 4- and 5-year-olds. The DPI comprises two scales: developmental appropriateness and developmental inappropriateness, with 15 items devoted to each scale. The inventory requires 15 minutes to complete after a half-day of observation. The DPI items use a 5-point scale ranging from 1 (not at all like this classroom) to 5 (very much like this classroom).

The DPI has demonstrated adequate levels of reliability and validity in more than 200 observations in 58 programs in a range of settings. Internal consistency (Cronbach alpha) for the total scale is .96. Intercorrelations among the appropriate and inappropriate items were highly significant at r = –.82. Interobserver reliability within 1 scale point was 98% and exact agreement was 64% based on observations in 10 programs. Concurrent validity was established through the relationship between self-reported educational attitudes of the program teachers and DPI scores, as well as the programs’ community reputations as academic, play-oriented, or unstructured. In addition, a study by Love, Ryer, & Faddis (1992) reported that DPI scores were highly correlated with scores of caregiver behavior as measured by the Arnett scale (Arnett, 1989) and program quality as measured by the Assessment Profile (Abbott-Shim & Sibley, 1987).
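The two interobserver figures above, exact agreement and agreement within one scale point, are computed directly from paired ratings. A minimal sketch with made-up ratings from two observers of the same classroom on the 30 five-point DPI items:

    import numpy as np

    # Hypothetical ratings of one classroom by two observers on the 30 DPI
    # items, each rated 1 (not at all like this classroom) to 5 (very much
    # like this classroom).
    rng = np.random.default_rng(2)
    observer_a = rng.integers(1, 6, size=30)
    observer_b = np.clip(observer_a + rng.integers(-1, 2, size=30), 1, 5)

    diff = np.abs(observer_a - observer_b)
    exact_agreement = np.mean(diff == 0)   # identical ratings
    within_one = np.mean(diff <= 1)        # ratings no more than 1 point apart

    print(f"Exact agreement: {exact_agreement:.0%}")
    print(f"Agreement within 1 scale point: {within_one:.0%}")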

Arnett Scale of Caregiver Behavior

Caregiver behaviors have been shown to be related to child outcomes in a number of research studies. For example, children tend to score higher on scales of social development when they have caregivers who ask questions, interact frequently, and facilitate social problem solving (Clarke-Stewart, 1987). Based on this and other similar findings, the evaluation team felt that an assessment focused on caregiver behavior could yield valuable information about differences between caregivers in family child care homes and center-based programs. The Arnett Scale of Caregiver Behavior (Arnett, 1989) served as a measure of caregiver behavior.

The original rating scale designed by Arnett consists of 26 items organized into five areas: positive relationships, punitiveness, detachment, permissiveness, and prosocial interaction. Each item is rated on a 4-point scale indicating the extent to which the statement is characteristic of the caregiver. The evaluation team added to the scale 4 items that assess caregivers' promotion of self-help skills among children.

Factor analyses of the scale have shown either three or four factors. Sensitivity, detachment, and harshness factors were identified in the National Child Care Staffing Study (Whitebook, Howes, & Phillips, 1989), and attentive and encouraging, harsh and critical, detached, and controlling factors were found in the California Staff/Child Ratio Study (Love, Ryer, & Faddis, 1992). In the latter study the Arnett Scale scores were highly correlated with many program quality measures, including the developmental appropriateness scale of the DPI and the learning, curriculum, and interacting scales of the Assessment Profile.
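Factor solutions like those cited here are typically obtained through exploratory factor analysis of the item ratings. The sketch below runs such an analysis on synthetic 4-point ratings with scikit-learn; it illustrates the general procedure only, not the actual data or methods of those studies.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    # Synthetic data: 200 observed caregivers rated on the 26 Arnett items,
    # generated from three underlying factors plus noise and coerced to a
    # 4-point scale.
    rng = np.random.default_rng(3)
    latent = rng.normal(size=(200, 3))
    loadings = rng.normal(scale=0.8, size=(3, 26))
    ratings = np.clip(np.round(latent @ loadings + rng.normal(size=(200, 26)) + 2.5), 1, 4)

    fa = FactorAnalysis(n_components=3, random_state=0).fit(ratings)

    # fa.components_ has one row per factor and one column per item; items
    # with large loadings on the same row form the clusters that analysts
    # label (e.g., sensitivity, harshness, detachment).
    print(fa.components_.shape)   # (3, 26)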

Agency Staff Questionnaire

Data supervisors administered the Agency Staff Questionnaire (RMC Research, 1993a) to family child care coordinators in the fall and spring to gather information about such implementation characteristics as recruitment and training of family child care providers, recruitment of children, contracting with family child care providers or hiring them as Head Start employees, methods for coordinating the component services to family child care homes, service delivery adaptations for the family child care home setting, supervision and support for family child care providers, and record keeping issues in family child care homes.

Caregiver Characteristics Form

The data supervisors used the Caregiver Characteristics Form (RMC Research, 1993b) to obtain data from caregivers in the family child care homes and comparison group center classrooms on their background characteristics, training and education, child care experience, wages and benefits, supervision and support received, coordination with component coordinators, record keeping requirements, and program issues. In some instances, the form was completed by the caregivers and returned to the data supervisor, but in most cases the form was completed in an interview format.

Child Outcome Measures

Two key issues emerged in reviewing and selecting instruments for measuring child outcomes related to cognitive, social-emotional, and physical growth. First, it is challenging to divide children’s behavior neatly into cognitive, social-emotional, and physical compartments because of the natural integration of these domains (Aber, Molnar, & Phillips, 1986). For this reason, an instrument purporting to measure cognitive or social-emotional development may include tasks and questions that require responses involving several domains. To address this obstacle, the evaluation team placed a high priority on clearly defining cognitive, social-emotional, and physical development and on drawing data for each of these dimensions from multiple measures. The second challenge was ensuring that the instruments possessed adequate psychometric properties and addressed the broad range of development that is stressed in early childhood settings.

Cognitive Functioning

For this evaluation cognitive functioning was defined as:

  • General knowledge that contributes to school success (e.g., names and uses of common objects, colors, knowledge of body parts);

  • Language competence (e.g., ability to follow directions, use of prepositions);

  • Literacy skills (e.g., basic concepts about books, how print conveys meaning); and

  • Mathematical knowledge (e.g., counting, use of words to solve math problems, ordinal positions, basic geometric shapes).

Social Functioning

The social development literature uses a three-part definition to describe social functioning. This definition includes social knowledge and social reasoning, that is, the ability to talk and think about social situations (Shure & Spivack, 1976, 1979). It also includes social competence, which encompasses confidence, felt security, and impulse control (Pellegrini, 1988; Pellegrini & Glickman, 1990). Finally, it includes adaptive social behavior, which involves the integration of social knowledge, attitudes, and competence as applied in social settings (Schaefer & Edgerton, 1978). Based on this literature and a review of the social aspects of the Work Sampling System (Meisels, Marsden, & Jablon, 1992), social functioning was defined as:

  • Sociability factors that assist in making and maintaining friends;

  • Adaptability to a variety of social situations, such as the willingness to try new things and making transitions between activities;

  • Social adjustment factors that promote security, impulse control, self-direction, and focus; and

  • Social problem-solving skills that integrate social interaction with cognitive skills.

Physical Functioning

Physical functioning, as defined for this evaluation, had two aspects: fine motor development and gross motor development. The Work Sampling System (Meisels et al., 1992) describes the typical kindergarten child’s fine motor development as:

  • Using a pencil with a comfortable grasp;

  • Handling concrete materials, such as puzzles and blocks, to complete tasks;

  • Copying shapes such as squares and triangles; and

  • Turning pages in a book.

The typical kindergarten child’s gross motor development is described as:

  • Hopping with balance and control;

  • Performing gross motor locomotion tasks such as skipping and galloping; and

  • Demonstrating ball-handling skills, such as catching and throwing with direction.

Instrument Selection

Typically, national studies have settled for tools with well-established psychometric properties that focus on a narrow skill range, such as the Peabody Picture Vocabulary Test-Revised (Dunn & Dunn, 1981). However, Head Start has a broad range of developmental goals for children. Thus, the evaluation team’s approach to assessing child outcomes used psychometrically adequate instruments that reflected program goals and addressed a broad range of developmental outcomes. Exhibit III-7 provides an overview of the data collection plan.

Exhibit III-7
Data Collection Plan for Child Outcome Measures

  Peabody Picture Vocabulary Test-Revised: fall and spring of the Head Start year; spring of kindergarten
  Daberon-2: fall and spring of the Head Start year; spring of kindergarten
  Concepts About Print: fall and spring of the Head Start year; spring of kindergarten
  Child Adaptive Behavior Inventory: fall and spring of the Head Start year; spring of kindergarten
  Child Observation Record: fall and spring of the Head Start year
  Kindergarten Teacher Interview: fall of the Head Start year; spring of kindergarten

The Peabody Picture Vocabulary Test—Revised

The Peabody Picture Vocabulary Test—Revised (PPVT–R; Dunn & Dunn, 1981) was used to measure children's receptive language or vocabulary. Receptive vocabulary is frequently used as a quick estimate of verbal and mental abilities. One advantage of the test is its simple format and brief administration time (10 to 15 minutes). The PPVT–R has been used in a number of large research studies and surveys, including the national evaluations of the Comprehensive Child Development Program, the Even Start program, and the Head Start Transition Study.

The PPVT–R consists of 175 vocabulary items of increasing difficulty. The tester reads a word and the child selects one of four pictures that best describes the word’s meaning. The PPVT–R was standardized in 1979 on a nationally representative sample that included 5,000 individuals from a variety of demographic backgrounds who were between 30 months and 41 years of age. Split-half correlations for children and youth ranged from .67 to .88 on Form L and from .61 to .86 on Form M. The PPVT–R is also available and normed for Spanish-speaking children; the Spanish version, called the Test de Vocabulario en Imagenes Peabody (TVIP), was standardized in 1981–82 on more than 2,000 children in Mexico and Puerto Rico.

Daberon-2

The Daberon–2 (Danzer et al., 1991) is a screening tool designed to assess 10 skill and knowledge areas considered to be related to school readiness: body parts, color concepts, number concepts, prepositions, following directions, plurals, general knowledge, visual perception, gross motor skills, and categories. Designed for use with children between the ages of 3 and 7, the Daberon–2 is individually administered using a kit of game-like materials. The Daberon–2 was standardized on 1,647 children in 16 states. The norming sample had broad representation by race, geographic area of the United States, ethnicity, and family income. Reliability using Cronbach's coefficient alpha was computed by age with values for 4- to 6-year-olds ranging from .92 to .95 (Danzer et al., 1991).

The test developers established concurrent, predictive, and construct validity when the instrument was constructed. Concurrent validity of .83 was established for the Daberon–2 by correlating it with the Total Battery score from the Metropolitan Readiness Tests. Kindergarten-age children's scores correlated .84 with follow-up checklist ratings when the children entered first grade, establishing predictive validity. Construct validity was examined by considering several criteria: the scores should and do increase by age, with a correlation coefficient of .55; correlations with aptitude, as measured by the Detroit Test of Learning Aptitude—Primary, exceeded .50; the Daberon–2 cluster scores were intercorrelated and exceeded .30; and finally, the median item discrimination power of the items by age exceeded .30 except at age 7, suggesting that the discriminative powers of the test are strongest at ages 3 to 6 (Danzer et al., 1991).

Concepts About Print

Concepts About Print (RMC Research, 1993e) consists of four scales: book handling, concept of word, story comprehension, and publishing knowledge. It is based on Clay’s Concepts of Print Test (1979), the Book Handling Knowledge Task (Goodman & Altwerger, 1981), and Concepts About Print (Teale, 1986). The modified Concepts About Print (RMC Research, 1993e) instrument is a sensitive measure of language competence and literacy skills that reflects the current body of early language development research. The instrument focuses on children’s book- and print-related knowledge. During the individual administration of the instrument, the child is asked to perform literacy-related behaviors, such as “show me the front of the book” and “point to where I should start to read” while the examiner reads a story to the child. The popular children’s book Goodnight Moon was selected for use in this evaluation because of its length and availability in Spanish.

Child Observation Record

The Child Observation Record (COR; High/Scope Educational Research Foundation, 1992) examines child behavior through naturalistic observations by the child's teacher or caregiver. The assessment is divided into six domains: initiative, social relations, creative representation, music and movement, language and literacy, and logic and mathematics. Each of these domains includes four to six intrinsically meaningful items. For example, item one under initiative is expressing choices. The observer selects one of five descriptions of expressing choices to rate the child. Unlike some child assessments, the COR assesses development across a broad range of contexts rather than performance on specific test items in contrived situations. Thus, the instrument allows for flexible documentation of children’s development that can take into account cultural, language, and social variations. The nature of the COR assessment requires high levels of administration time, however, because it is based on observations over time. The COR has high ecological validity and is minimally intrusive because the assessment occurs in the course of everyday activities in the preschool setting.

The COR validation study determined the instrument's appropriateness for multiple early childhood curricula, established the COR as a valid and reliable instrument, and demonstrated its feasibility for use in Head Start programs. The COR has adequate psychometric properties. Alpha coefficients of internal consistency ranged from .80 to .93. Inter-rater reliability ranged from .61 to .72. The concurrent validity of COR ratings was assessed by examining correlations with the McCarthy Scales of Children’s Abilities (1972). Scales assessing similar constructs on these two instruments produced correlations of .53 between the COR language and literacy and the McCarthy verbal scales, .52 between the COR creative representation and the McCarthy perceptual-performance scales, and .42 between the COR logic and mathematics and the McCarthy quantitative scales.

Because the child’s caregiver or teacher was to complete the COR for each child in this evaluation, the evaluation team provided training during the fall data collection site visits. Caregivers and teachers then conducted observations during the fall and spring data collection periods.

Child Adaptive Behavior Inventory—Modified

Using the definition established for social-emotional functioning, the Child Adaptive Behavior Inventory (CABI; Schaefer et al., 1984) measured three of the clusters described:

  • Sociability (e.g., makes friends quickly and easily, is left out by other children);

  • School adaptability (e.g., catches on quickly, works carefully); and

  • Social adjustment (e.g., cries a lot, is easily distracted).

To enhance the CABI and to ensure that it better matched the definition of social functioning, the evaluation team added four items that were likely to cluster around social problem solving or conflict resolution skills. The version of the CABI used in this evaluation does not have technical data available (Schaefer et al., 1984). However, a factor analysis of an earlier version of the CABI instrument found that items clustered under two broad categories: academic competence or adaptability, and social adjustment (Schaefer & Edgerton, 1978).

Kindergarten Teacher Interview

The evaluation team developed the Kindergarten Teacher Interview (RMC Research, 1993d) to obtain information from kindergarten teachers concerning perceived readiness for kindergarten and progress in the areas of cognitive, social-emotional, and physical development. The teachers also provided information about the children’s attendance, the parents’ participation in school activities, and the kindergarten program.



 

 
