Research and Evaluation

Research and evaluation are essential for continuing to expand the reach and impact of IECMHC. By gathering, analyzing, and reporting data, programs can learn about what they have been doing well, as well as what can be improved or made more efficient. Further, evaluations are essential for educating others – including policymakers, funders, and families – about the value of IECMHC.

The Center of Excellence has developed step-by-step guidance to help programs design and build upon their IECMHC program evaluations. Because a good program evaluation depends upon a well-defined program that is implemented with fidelity, programs should access resources available through the Center of Excellence for IECMHC to: ensure that they are familiar with the defining characteristics of IECMHC; develop manuals describing their specific programs; and track the fidelity with which their program is implemented.

Connect with us

The Center of Excellence provides individualized Technical Assistance (TA) for IECMHC programs at any stage of development. If you are a program manager and would like to speak with a member of our team about designing or tailoring your approach to program evaluation, please submit a request via our online portal.

Share with us

Would you like your IECMHC research findings to be shared on the Center of Excellence website? Do you have new ideas, requests, or corrections to the content provided above? We would love to hear from you! Please email iecmhc@georgetown.edu to share reports, articles, webinar ideas, questions, etc.

Five Steps for Evaluating IECMHC

[Graphic: Center of Excellence Cycle of Evaluation]

To organize concepts for both program development and evaluation purposes, theories of change and logic models serve as “roadmaps” that graphically depict the connections between the community context, the actions to be undertaken, and the desired outcomes. Please see the documents below for examples of theories of change and logic models as well as guidance for creating your own.

The evidence base for Infant and Early Childhood Mental Health Consultation has grown significantly in recent decades. Learning about evaluations of other IECMHC programs helps teams to develop their own questions that build upon the foundational evidence.

  • Evidence Synthesis: This brief summarizes the status of the evidence for IECMHC, including both peer-reviewed journal articles and evaluation reports. Future directions for research are provided.
  • Annotated Bibliography: Many key findings from innovative studies appear in peer-reviewed journal articles, but many professionals do not have access to these journals. The Center of Excellence has created a comprehensive Annotated Bibliography that summarizes the important contributions of each known, published study of IECMHC, cutting across different settings and outcomes of interest. Papers that describe IECMHC practices or theories but do not present new analyses are also included. This resource will be updated annually to provide consultants, administrators, students, and other interested individuals with current research findings on IECMHC. The Annotated Bibliography is intended to build general knowledge about IECMHC, to facilitate writing about IECMHC (for grant applications, reports, etc.), and to identify future directions for research on IECMHC. In this resource, you will have access to:
    • Descriptions of each article’s unique contribution to the empirical knowledge of IECMHC
    • APA-format citations for each article
    • Abstracts for each article

Program Evaluation Reports

  • An evaluation of IECMHC services in Alameda County measured both process and outcomes. Quantitative and qualitative data were collected from 2017 through 2019. Key findings included statistically significant increases in consultant self-efficacy, director self-efficacy, classroom emotional climate, children's attachment, children's self-regulation, and children's initiative, along with statistically significant decreases in children's risk of expulsion and consultant hopelessness. In line with this program's theory of change, consultants who received a higher 'dosage' of training and technical assistance reported higher self-efficacy, which was positively associated with improvements in child outcomes and classroom emotional climate. Further, more training and technical assistance predicted higher fidelity to the intervention standards, stronger relationships with teachers, and better outcomes for directors.
  • This report presents outcomes from four years of Arizona's statewide IECMHC program, Smart Support. In this large sample (over 1,000 children and 100 mental health consultants), there were positive outcomes across domains, including improved teacher-child relationships, classroom climate, and child social-emotional skills.
  • This report describes outcomes from 37 centers that engaged in Arkansas' IECMHC program, Project PLAY. Project PLAY is unique in that it prioritizes child care centers serving children in foster care. The evaluators reported significant improvements in the use of developmentally appropriate social-emotional supports, classroom environments, and children's behavior.
  • Colorado’s home visiting programs funded by MIECHV have incorporated IECMHC for home visitors since 2016. The evaluation team gathered interview and survey data from home visitors, supervisors, and consultants to report on the role of the consultant and the way consultation is implemented in home visiting. Barriers and facilitators to implementing IECMHC in this context are explored.
  • This report describes the first randomized controlled evaluation of an IECMHC program, Connecticut's Early Childhood Consultation Partnership (ECCP), a 12-week model in which school-based services were provided to infants, toddlers, and preschoolers. Results indicated that ECCP yielded significant improvements in child hyperactivity and oppositionality, as well as increased communication between school and home, but did not significantly affect classroom climate. Additionally, information was provided regarding the budget for ECCP in comparison to alternative responses to challenging behaviors.
  • These reports present evaluation findings from the fourth and fifth years of D.C.'s IECMHC program, Healthy Futures, in which IECMHC consultants were embedded in child development centers, primarily in low-income neighborhoods. Among other positive impacts, analyses indicated significant improvements in children's social-emotional skills and reduced expulsion rates after child-focused consultation, as well as significant improvements in teacher-child interactions and reductions in teacher turnover after programmatic consultation. A unique finding in Year 5 was the effect of consultation dosage, with a full year of consultation significantly predicting classroom- and individual-level improvements in behavioral concerns. Process outcomes and lessons learned were also articulated. Sample evaluation tools are included in the Appendix of the Year 4 evaluation.
  • For this report, 11 sites (236 centers) in Maryland’s IECMHC program participated in three arms of the evaluation: the Service Description Study, the Impact Study, and the Exit Study. A unique contribution of this evaluation was the qualitative Exit Study in which 35 IECMHC stakeholders (consultants, ECE providers and directors, and parents) were interviewed to learn more about their experiences of having a child exit an ECE program because of behavioral concerns.
  • A mixed-methods evaluation was conducted for Michigan's IECMHC program, the Childcare Expulsion Prevention Program (CCEP), which served center- and home-based childcare settings statewide. Results indicated some positive findings for children (e.g., decreased hyperactivity, improved social skills), parents (e.g., increased empowerment to advocate for their child, decreased work/school problems), and childcare providers (e.g., increased sense of competence to manage challenging behavior). Additionally, the team reported on fidelity to the model and parent and provider satisfaction with CCEP.
  • In 2016, the Pennsylvania ECMH Consultation Project contracted with the Georgetown University Center for Child and Human Development to conduct an external review aimed at situating the program within a national context. By analyzing Pennsylvania's program against insights from the national “What Works” (2009) study, the authors identified the program's strengths as well as areas for continued growth, which were then pursued by program leadership. For example, one strength was the program's strong data collection system, and one recommendation was to hire a reflective supervisor. In addition, two years of program evaluation data were analyzed; findings included improved child behavior, reduced teacher stress, and increased teacher adherence to the Pyramid Model.
  • To guide your research questions, identify and consult with stakeholders, including families, funders, and providers. Ideally, representatives from all interested parties should be included in all aspects of the evaluation. Families and community members from diverse populations can offer key insights to help formulate relevant questions and identify variables to measure.
  • Enhancing equity is a foundational goal of IECMHC; all evaluations should use their data to answer questions related to closing disparities, addressing biases, and providing culturally and linguistically appropriate care.
  • Primary questions will be driven in part by the funders, the policy climate, the model, and what can be measured accurately using reliable and valid tools. But useful program evaluations should also address questions that are relevant to the early childhood program, administrators, providers, and families. It is important to consider several key factors.
    • What questions MUST you answer (to meet reporting or other requirements)?
    • What questions would you LIKE to answer and WHY (how will you use the information)?
    • Expected effects: what is realistic, based upon your IECMHC program's implementation? Identify the aspects of early childhood practice that can be expected to change as a result of the “dose” of IECMHC your program is actually delivering. Consider collecting data about possible barriers to successful implementation as well.
    • What information can you readily collect? What reliable tools are available to collect these data?
  • For additional information on community-based participatory research specific to evaluations in tribal communities, please access A Roadmap for Collaborative and Effective Evaluation in Tribal Communities.
  • When developing the data collection plan, it is necessary to consider the participants you will likely be able to engage (e.g., parents, home visitors, children, etc.) as well as the languages and cultural backgrounds of all participants. Evaluators should learn about the psychometrics of each measure (if available) and translation procedures, and should examine measures item-by-item to ensure that they are appropriate for the participants and measuring what is intended.
  • Measure PROCESS and OUTCOME variables
    • The Center of Excellence created a searchable Outcome Measures for IECMHC tool that evaluators can use to explore outcome measures used in prior ECE-based IECMHC evaluations.
    • Process variables are critical for moving beyond asking whether consultation had an impact to answering WHY and HOW it had the effect it did. Process variables can lead to fruitful conversations about strengths and possible areas for improvement for the consultant or the consultation program overall. They can also allow programs to explain null findings and explore predictors of success. Questions that can be answered with process variables include:
      • What is the “dose” of consultation provided to each consultee?
      • Is consultation being delivered with fidelity (i.e., consistent with the program’s written guidelines or program manual)?
      • How strong is the relationship between the consultant and consultee?
      • How often does the consultant receive reflective supervision, and what is the quality of that supervision?
    • All evaluations should collect demographic information from all participants to allow for disaggregated analyses (a minimal example of such an analysis is sketched below).
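
As a rough illustration of how process variables and demographic information can feed disaggregated analyses, the sketch below summarizes pre/post change by subgroup and consultation "dosage." It is a minimal example in Python with pandas; the file name and column names (dosage_hours, pre_score, post_score, race_ethnicity) are hypothetical placeholders, not fields required by any IECMHC evaluation.

```python
# Minimal sketch (hypothetical file and column names): summarize pre/post
# change on one outcome, disaggregated by subgroup and consultation dosage.
import pandas as pd

df = pd.read_csv("evaluation_data.csv")

# Change score on one child outcome measure collected at baseline and follow-up
df["change"] = df["post_score"] - df["pre_score"]

# Bin a process variable (hours of consultation received) into dosage groups
df["dosage_group"] = pd.cut(
    df["dosage_hours"],
    bins=[0, 10, 25, float("inf")],
    labels=["low", "medium", "high"],
)

# Disaggregated summary: sample size and mean change by subgroup and dosage
summary = (
    df.groupby(["race_ethnicity", "dosage_group"], observed=True)["change"]
    .agg(["count", "mean"])
    .reset_index()
)
print(summary)
```

A table like this can show whether improvements are shared across groups or concentrated among consultees who received the most consultation.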
  • Consider Qualitative Data
    • Gathering qualitative data involves the collection of non-numerical information by various methods such as recording, transcribing, and analyzing interviews or holding focus groups with staff members, family members, or consultants. Qualitative data often take the form of personal stories or case studies that convey the details of an individual’s experience with IECMHC. These data can stand alone, or can help contextualize the results from quantitative analyses. Qualitative data are critical for weaving in a range of ways-of-knowing, honoring the wisdom of all stakeholders, and incorporating clinical insights and cultural values.
  • Data collection and management
    • It is essential to have a well-organized plan for your data that has been exempted or approved by the relevant Institutional Review Board (IRB) prior to beginning data collection. The written plan should address:
      • What information will be collected
      • Who will ensure that consent is obtained from all participants
      • The schedule for data collection (e.g., baseline, and then after every three months of consultation)
      • Who will be responsible for collecting data
      • Where the centralized data will be stored
      • Who will take responsibility for ensuring data are recorded, compiled, and reported, including monitoring for missing data and ensuring that they are managed in a HIPAA-compliant manner.
    • Given the likelihood of barriers to data collection, including staff turnover, create a system for double-checking that data collection is happening on schedule for each participant (a minimal completeness-check sketch follows this item). If you are gathering data pre- and post-intervention, it is essential to have procedures to ensure that post-intervention data are collected – analyses hinge upon having a sufficient quantity of data at the post-intervention time point.
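
As a minimal sketch of the double-checking system described above, the example below flags participants whose post-intervention data are overdue. It uses Python with pandas; the file name, column names, and three-month follow-up window are hypothetical assumptions, not part of any prescribed IECMHC data system.

```python
# Minimal sketch (hypothetical file and column names): flag participants whose
# post-intervention data are overdue so follow-up can be scheduled.
import pandas as pd

records = pd.read_csv("data_collection_log.csv", parse_dates=["baseline_date"])

# Assume post-intervention data are due three months after baseline
records["followup_due"] = records["baseline_date"] + pd.DateOffset(months=3)

# Participants past their due date with no post-intervention score recorded
overdue = records[
    (records["followup_due"] < pd.Timestamp.today())
    & (records["post_score"].isna())
]

print(f"{len(overdue)} participants are missing post-intervention data")
print(overdue[["participant_id", "site", "followup_due"]])
```

Running a check like this on a regular schedule makes it easier to catch gaps before the post-intervention time point has passed.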
  • Data analysis
    • Depending on the skills and expertise of the evaluation team, consider establishing a formal relationship with a university-based researcher or similarly trained professional. The analyst should be able to select a statistical test or qualitative analysis strategy that fits the type, quantity, and quality of the data collected and answers the research questions (a minimal pre/post analysis sketch follows this list).
    • Create a data analysis plan that addresses:
      • The type of data analysis to be performed
      • Who will complete the data analysis
      • How often the data will be analyzed
      • The format and process for sharing preliminary and final results
      • Expectations for disseminating the findings
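
To make the "type of data analysis to be performed" item concrete, the sketch below runs one common analysis for pre/post designs: a paired comparison of a single outcome measure. It uses Python with pandas and SciPy; the column names are hypothetical, and a paired t-test is only appropriate if its assumptions (e.g., approximately normal change scores, no clustering) reasonably hold, so treat this as an illustration rather than a recommended analysis.

```python
# Minimal sketch (hypothetical column names): paired pre/post comparison of one
# outcome measure, keeping only participants with data at both time points.
import pandas as pd
from scipy import stats

df = pd.read_csv("evaluation_data.csv")
complete = df.dropna(subset=["pre_score", "post_score"])

result = stats.ttest_rel(complete["post_score"], complete["pre_score"])
mean_change = (complete["post_score"] - complete["pre_score"]).mean()

print(f"n = {len(complete)}")
print(f"mean change = {mean_change:.2f}")
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```

A trained analyst would typically extend this with effect sizes, attention to nesting (e.g., children within classrooms or sites), and a plan for handling missing data.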
  • Communicating Results
    • The contents of any evaluation report depend on its purpose and intended audiences.
    • Evaluation data can be crafted into a message by blending science, marketing, communications, and graphical skills. The information can convey concrete take-away messages, comprehensible facts, and ideas for promoting consultation as a valuable service to young children, families, staff, and programs.
    • Make it accessible and meaningful! Can the public, policy-makers, and other stakeholders, including families and providers, easily understand what the data mean and what the implications are for children, families, staff, programs, and, ultimately, community well-being?
    • All system stakeholders should have an opportunity to review the data and offer interpretations. Community members should be informed not only of the results, but also of the potential implications of the findings.
    • Publication in a final report or professional journal should not be the primary means of disseminating community-based research. Ideally, the results should also be shared in user-friendly formats that are accessible to diverse populations.
  • Ongoing data collection is valuable not only for reporting to an external audience, but also for internal program improvement and development efforts. This is called Continuous Quality Improvement (CQI). Preparing interim and periodic reports provides ongoing opportunities for reflection, reviewing program performance, making mid-course corrections, and, hopefully, celebrating successes.
  • Examples of programs that effectively communicated their evaluation results:

Additional Resources for Evaluation

This product was developed [in part] under grant number 1H79SM082070-01 from the Substance Abuse and Mental Health Services Administration (SAMHSA), U.S. Department of Health and Human Services (HHS). The views, policies and opinions expressed are those of the authors and do not necessarily reflect those of SAMHSA or HHS.