Department of Health and Human Services
Administration for Children and Families
Office of Planning, Research & Evaluation (OPRE)

INTRODUCTION TO THE EVALUATION DATA COORDINATION PROJECT (EDCP)

History

Every year, government agencies and philanthropic organizations award millions of dollars to support social programs and interventions designed to enhance the well-being of children, youth, and families. Often, these monies include funds for evaluation, that is, for studies investigating the effectiveness of these programs. Evaluation research is critical because it advances our understanding of the impact, or lack of impact, of different programs on adults, children, and families and helps policymakers decide how to invest scarce resources most effectively. Funders and evaluators know, however, that evaluation funds are precious and must be used wisely; every dollar spent on evaluation is a dollar not spent on the program. Thus, funders and evaluators share responsibility for maximizing what can be learned from the resources expended for research and for using research findings to improve services for those who need them. The EDCP is one step toward maximizing the efficiency of evaluation research.

The American Institutes for Research (AIR) and Child Trends began this project with the perspective that coordinating data collections across multiple evaluation projects is crucial for making comparisons across evaluations and for facilitating cross-study research after the evaluations have concluded. This coordination helps researchers be more certain that cross-program differences in impacts on the same construct are due to differences in the effectiveness of the programs (as implemented with different populations and in different contexts) rather than differences in how the construct is measured. Coordinating the inclusion of identical, well-established measures across multiple evaluation studies will markedly increase the usefulness of these data to researchers and policymakers in the years to come. The work undertaken in this process will also enhance data collection efforts in future research.

Purpose

The original purpose of the EDCP was to develop common measures of constructs and reporting formats for nine selected evaluation projects funded by the Department of Health and Human Services (HHS) and to use the lessons learned from those nine evaluations to create measurement modules for future research and evaluation. Once the EDCP team began project work and gathered input from experts during our first Work Group meeting (see Table 1 for a list of Work Group members), we drafted a brief proposal to the Administration for Children and Families (ACF) outlining a slightly revised scope of products that included options documents in each of four domains. Options documents provide information, in varying levels of detail, about a range of measures available for assessing a given construct, including their psychometric properties. They do not, however, recommend a specific measure or set of measures.

Table 1. List of Work Group Members

Enhanced Services for the Hard-to-Employ Demonstration and Evaluation Project
  Evaluation contractor members: Barbara Goldman, Pamela Morris, and Jo Anna Hunter (MDRC)
  Government Project Officer: Gerry (Girley) Wright (ACF)

Rural Welfare to Work Strategies Demonstration and Evaluation Project
  Evaluation contractor members: Michael Ponza and Robert Wood (Mathematica Policy Research, Inc.)
  Government Project Officer: Michael Dubinsky (ACF)

Employment Retention and Advancement Project
  Evaluation contractor members: Barbara Goldman, Pamela Morris, and Jo Anna Hunter (MDRC)
  Government Project Officer: John (Ken) Maniha (ACF)

Building Strong Families
  Evaluation contractor member: Robert Wood (Mathematica Policy Research, Inc.)
  Government Project Officer: Nancye Campbell (ACF)

Head Start Family and Child Experiences Survey
  Evaluation contractor member: Alberto Sorongon (Westat)
  Government Project Officer: Louisa Tarullo (ACF)

Early Head Start Evaluation and Tracking Pre-K Follow-up
  Evaluation contractor members: John Love and Cheri Vogel (Mathematica Policy Research, Inc.)
  Government Project Officer: Rachel Chazen Cohen (ACF)

National Head Start Impact Study
  Evaluation contractor member: Camilla Heid (Westat)
  Government Project Officer: Michael Lopez (ACF)

National Survey of Child and Adolescent Well-Being
  Evaluation contractor member: Kathryn Dowd (Research Triangle Institute, RTI)
  Government Project Officer: Mary Bruce Webb (ACF)

Evaluation of Child Care Subsidy Strategies
  Evaluation contractor members: Jean Layzer, Cindy Creps, Barbara Goodson, and Ann Collins (Abt Associates)
  Government Project Officer: Stephanie Curenton (ACF)

Focal Evaluation Projects

Per HHS’s request, the EDCP focuses on nine HHS evaluation projects:

  • Enhanced Services for the Hard-to-Employ Demonstration and Evaluation Project
  • Rural Welfare to Work Strategies Demonstration and Evaluation Project
  • Employment Retention and Advancement Project (ERA)
  • Building Strong Families
  • Head Start Family and Child Experiences Survey (FACES)
  • Early Head Start Evaluation and Tracking Pre-K (EHS and TPK)
  • National Head Start Impact Study
  • National Survey of Child and Adolescent Well-Being (NSCAW)
  • Evaluation of Child Care Subsidy Strategies

Drawing on the knowledge of our project staff and Work Group members, we also incorporated the following 13 additional evaluations/surveys into the options documents:

  • Panel Study of Income Dynamics Child Development Supplement (PSID-CDS)
  • National Survey of America’s Families (NSAF)
  • National Longitudinal Survey of Youth, 1997 (NLSY97)
  • National Longitudinal Study of Adolescent Health (Add Health)
  • Fragile Families and Child Well-being Study (Fragile Families)
  • NICHD Study of Early Child Care and Youth Development
  • Early Childhood Longitudinal Study—Kindergarten Cohort (ECLS-K)
  • Early Childhood Longitudinal Study—Birth Cohort (ECLS-B)
  • National Household Education Survey Program (NHES)
  • Current Population Survey (CPS)
  • Survey of Income and Program Participation (SIPP)
  • National Study of Child Care for Low-Income Families
  • National Child Care Staffing Study (NCCSS)

For a description of each of the nine ACF and 13 non-ACF evaluations, including the content covered by each, please see Appendices A and B.

Challenges

We anticipated two main challenges as we began the EDCP. One challenge was the difficulty of building consensus on such issues as key domains and constructs, owing in part to the cross-study differences in how the same constructs are measured, described earlier. The bigger challenge was the considerable diversity among the HHS evaluation studies with respect to scope, research goals and objectives, design, intervention, population, primary outcomes, and phase of work. We expected the phase of work in which each evaluation study was engaged to be an especially difficult issue because it would affect how each of the nine evaluations could participate in the EDCP. For many of the evaluations (e.g., FACES and the Early Head Start Evaluation and Tracking Pre-K), the EDCP came too late to inform the development or selection of measures. For other evaluations (e.g., Building Strong Families), the timing of the EDCP presented less of a challenge because instruments had not yet been developed, OMB clearance had not been granted, and data collection was far in the future. The experiences of the evaluation teams whose measures had already been developed and fielded, however, proved invaluable to the development of the final products and helped the evaluations that were still in formative stages.

Benefits

Many of the project’s challenges actually contributed to benefits perceived by EDCP members. The EDCP helped weave together common threads to facilitate the use of data sets for synthetic and comparative purposes, enhanced the accessibility of data sets for consumers and secondary users, consolidated measures and surveys for various constructs into a single source, and increased the likelihood of common measurement in future studies, thereby moving the field forward. The EDCP also exposed individuals to the ideas and experiences of other researchers, and participants in newer evaluations could draw on the EDCP and its consortium of experts to help shape and develop the measures for their evaluations.

Method

To meet the goals of the EDCP, we emphasized the importance of using a consensus-building process and the substantive experience of ACF Project Officers and contractor staff from each of the evaluation studies. We wanted to tap into the research teams’ large knowledge base and ensure a team-like atmosphere in building consensus.

To this end, one of the main tasks for the EDCP was the formation of a Work Group comprising one or more people from each of the nine HHS evaluation projects. The Work Group was convened for two meetings: the first to further develop project plans and lay the foundation for the project; the second to review products and offer information about gaps in current measurement tools. We did not view the Work Group as a static entity; rather, we treated it as an evolving body, asking that core Work Group members draw on the expertise of other individuals working on their respective evaluations.

The first Work Group meeting centered on the development of a constructs grid that listed the nine evaluations down the side and the key constructs, grouped by domain (as identified by Work Group members), across the top. Work Group members used this matrix to indicate, and ultimately vote on, which constructs were primary and which were secondary to their evaluations and how confident they were in the measurement of each of those constructs. We summarized these votes to narrow down the constructs on which the EDCP would focus. Appendix C contains the constructs grid, and Appendix D contains the summary of votes.
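The vote-summarizing step can be sketched as a simple tally over the constructs grid. This is an illustrative sketch only: the construct names, vote values, and counts below are hypothetical and are not drawn from the actual grid in Appendix C.

```python
from collections import Counter

# Hypothetical votes: each Work Group member marks a construct as
# "primary" or "secondary" for their evaluation (illustrative data only).
votes = {
    "income and earnings":   ["primary", "primary", "secondary", "primary"],
    "quality of child care": ["primary", "secondary", "primary", "secondary"],
    "parental monitoring":   ["secondary", "secondary", "primary", "secondary"],
}

# Tally votes per construct and rank by the number of "primary" marks,
# mimicking the narrowing-down step described above.
tallies = {c: Counter(v) for c, v in votes.items()}
ranked = sorted(tallies, key=lambda c: tallies[c]["primary"], reverse=True)

for construct in ranked:
    t = tallies[construct]
    print(f"{construct}: {t['primary']} primary, {t['secondary']} secondary")
```

In practice the summary in Appendix D also weighted members’ confidence in measurement; this sketch shows only the basic primary/secondary tally.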

Following the meeting, EDCP staff developed a memorandum of proposed domains, constructs, and products for ACF. Keeping budget and level of effort considerations in mind, we recommended developing options documents to address four domains: economic well-being, child care, parenting, and children’s socioemotional development. These domains were suggested based on the outcome of the constructs discussion during the Work Group meeting and the importance of these domains to each of the nine evaluations. Appendix E contains the memorandum.

ACF approved our proposal and EDCP staff developed a number of templates on which to base our products development, including an overall strategy template, an evaluation/study/survey background template, and an options document template. Appendix F contains the templates. As we developed the final products, we consulted with ACF staff and Work Group members, included other national surveys that were informative to our domains of focus, and circulated drafts of products to ACF staff and Work Group members for their review and suggestions.

The purpose of the second Work Group meeting was to obtain general and specific feedback from Work Group members on the products and to discuss gaps in current measurement tools. Expert discussants in each of our chosen domains delivered a presentation focused on the challenges related to the research they conduct. Each presentation was followed by a 45-minute discussion and reactions from our Work Group members.

As the EDCP team developed options documents, we took multiple approaches to reviewing surveys and measures for each construct, depending on the evaluation or survey. For relatively recent evaluations and surveys, we started with the baseline data collection, went through all baseline instruments, and identified those pertinent to our constructs; if necessary, we proceeded to the next wave of data collection. For longer-running evaluations and surveys (e.g., the CPS and SIPP), we took the most recent iteration of the surveys and measures and worked backward. We were careful to examine at multiple time points those surveys and evaluations that focused specifically on children and change over time, because we assumed that the content would change as appropriate for child age. Readers should consult the original surveys and measures for their own purposes, both to understand skip patterns and to examine each measure in full.

It is also important to note the types of measures on which the EDCP focused. The majority of the measurement information comes from the evaluations’ surveys, interviews, and observational measures. However, many of the evaluations also used other types of measurement tools, such as administrative data. For instance, MDRC uses administrative data to measure income in many of its evaluations. Specifically, administrative records provide data on monthly cash assistance and Food Stamps benefits, as well as quarterly earnings in jobs covered by the Unemployment Insurance (UI) system. For each year following random assignment, average annual parent income is computed as the sum of earnings, AFDC/TANF payments, and Food Stamps payments. Note that this income measure omits self-employment and informal earnings, other public transfers, private transfers, and earnings from family members other than the sample member. The EDCP, however, does not include administrative data in its options documents.
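As a concrete illustration of the income construction just described, the sketch below sums four quarters of UI-covered earnings with twelve months of AFDC/TANF and Food Stamps amounts for one follow-up year. The function name, record layout, and dollar amounts are hypothetical; actual administrative files and MDRC’s processing differ by state and study.

```python
def annual_income(quarterly_earnings, monthly_tanf, monthly_food_stamps):
    """Annual income measure as described above: the sum of UI-covered
    earnings, AFDC/TANF payments, and Food Stamps payments for one year
    following random assignment. Self-employment, informal earnings, and
    other transfers are omitted, as noted in the text."""
    assert len(quarterly_earnings) == 4    # four quarters of UI wage data
    assert len(monthly_tanf) == 12         # twelve months of cash assistance
    assert len(monthly_food_stamps) == 12  # twelve months of Food Stamps
    return sum(quarterly_earnings) + sum(monthly_tanf) + sum(monthly_food_stamps)

# Hypothetical sample member: modest earnings plus a partial year of benefits.
income = annual_income(
    quarterly_earnings=[2500, 2500, 3000, 3200],  # $11,200 in UI earnings
    monthly_tanf=[400] * 6 + [0] * 6,             # $2,400 in cash assistance
    monthly_food_stamps=[150] * 12,               # $1,800 in Food Stamps
)
print(income)  # 15400
```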

Products and Purpose of This Document

We developed two types of products for the EDCP: options documents and informational papers. This document presents the options documents; the informational papers appear in a separate document. In Appendix A, we describe the nine ACF evaluations that were the focus of the EDCP and outline the content covered by each evaluation. In Appendix B, we describe 13 additional data collection efforts relevant to the project goals, including the surveys and measures from each that are relevant to our domains. In Chapters II through V, we offer options documents for four constructs within four domains: income and earnings within the domain of economic well-being; quality of child care within the domain of child care; parental monitoring/awareness within the domain of parenting; and internalizing/externalizing behavior problems within the domain of children’s socioemotional development. In Appendix G, we offer additional information that the EDCP team gathered during the project and considers useful to researchers as they develop evaluations.

It is important to mention that these products are not meant to represent the full set of constructs and domains but rather to be a first step toward developing measurement modules. In all, the Work Group identified at least 60 constructs within eight domains as high priority for this set of evaluations. Each domain has numerous constructs that future iterations of the EDCP could explore.

It is also important to note that although Building Strong Families and the Evaluation of Child Care Subsidy Strategies are included in the EDCP, no options documents have been developed for these two evaluations at this time. Both evaluations were in the development stage when the EDCP was conducted and had not yet selected measures.

Definition of Key Terms

  • Evaluation: An evaluation is the systematic acquisition and assessment of information to provide useful feedback about some object (Trochim, 2000). For instance, if the object is an intervention program, an evaluation could use different instruments and measures, such as surveys, to determine its effectiveness.
  • Study: A study consists of all the information collected at a single time or for a single purpose or by a single principal investigator. A study may consist of one or more datasets and one or more files (UC San Diego, 2000).
  • Survey: A survey gathers and analyzes data on the attitudes, beliefs, practices, and opinions of a population or a sample. Survey data may be gathered in person (face-to-face) or by telephone; surveys can also be self-administered via mail, email, or fax (Stacks, 2002).
  • Domain: Domain is an overarching term referring to a broad substantive topical area. For instance, socio-emotional development or economic well-being are domains.
  • Construct: A construct is a more specific topic within a domain. For instance, “externalizing behavior” (i.e., acting out) is a construct within the domain of socio-emotional development.
  • Measure: A measure is a concrete way to assess, or “measure,” a construct. Measures can consist of one item or a series of items (scales or indices) that assess a given construct.
  • Scale: A scale is a series of related items that is used to measure a construct. Items in a scale are arranged in some order of intensity or purpose (Vogt, 1999).
  • Subscale: Many measurement instruments are multidimensional and measure more than one construct or more than one domain of a single construct. In such instances, subscales can be constructed in which the various items from a scale are grouped into subscales. Although a subscale could consist of a single item, in most cases subscales consist of multiple individual items that have been combined into a composite score.
  • Index: The terms index and scale are often used interchangeably, but they are slightly different in that items in an index do not have to be arranged in a particular order and each item is usually given the same weight (Vogt, 1999). In contrast to a scale, which comprises multiple related items that tap a single underlying construct, an index comprises varied items that may or may not be related and that cumulatively assess a broader construct. For example, depression as a single construct is typically measured by a scale, whereas family activities are measured by an index to reflect the substantial variation possible in activities that can fall under this construct, ranging from the arts to sports to shopping (Child Trends, 2000).
  • Item: An item is an individual question used to tap a given construct.
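The relationships among items, subscales, and a full scale can be made concrete in a few lines of code. The item names, response values, and scoring rule below are hypothetical, for illustration only; they are not drawn from any instrument discussed in this report.

```python
# Hypothetical responses to a six-item behavior-problems scale,
# each item rated 0 (not true) to 2 (often true).
responses = {
    "argues": 2, "fights": 1, "breaks_rules": 1,  # externalizing items
    "worries": 0, "withdrawn": 1, "sad": 0,       # internalizing items
}

# Subscales group related items from the full scale; the total scale
# score combines the subscale scores into a composite.
externalizing_items = ["argues", "fights", "breaks_rules"]
internalizing_items = ["worries", "withdrawn", "sad"]

externalizing = sum(responses[i] for i in externalizing_items)  # subscale score
internalizing = sum(responses[i] for i in internalizing_items)  # subscale score
total = externalizing + internalizing                           # full-scale score

print(externalizing, internalizing, total)  # 4 1 5
```

An index would be scored the same way arithmetically, but its items (e.g., varied family activities) need not tap a single underlying construct or be arranged in order of intensity.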

References

Child Trends. (2000). Children and welfare reform: A guide to evaluating the effects of state welfare policies on children. Washington, DC: Author.

Stacks, D. W. (2002). Dictionary of public relations measurement and research. Gainesville: University of Florida Institute for Public Relations.

Trochim, W. M. (2000). The research methods knowledge base (2nd ed.). Available at http://trochim.human.cornell.edu/kb/index.htm (version current as of August 2, 2000)

University of California at San Diego, Social Sciences and Humanities Library. (2000). Glossary of social science computer and social science data terms. San Diego: Author.

Vogt, W. P. (1999). Dictionary of statistics and methodology (2nd ed.). Thousand Oaks, CA: Sage Publications.
