
Early Care And Education Results-Based Accountability

Early care and education results-based accountability (also known as performance-based accountability) is a method for assessing the effectiveness of early care and education programs, services, and initiatives to ensure positive outcomes for young children. This method can be developed and used at the State, community, agency, and/or program level. Many States have multiple ongoing efforts to achieve measurable results for children and families.

Statewide Accountability Efforts

States and Territories use Child Care and Development Fund (CCDF) monies to support a variety of activities and services. CCDF Lead Agencies are implementing statewide accountability strategies to encourage and strengthen positive outcomes for young children.

Program and Service Evaluation

Representative examples have been included from the CCDF Plans for FY 2008–2009 to illustrate the different kinds of statewide accountability tools States are using to measure program and service effectiveness. These examples do not include all statewide accountability tools, but are meant to represent a range of approaches States have taken. Additional information is available in the Child Care and Development Fund Report of State and Territory Plans FY 2008–2009, which is available at http://nccic.acf.hhs.gov/pubs/stateplan2008-09/index.html. NCCIC does not endorse any practice, organization, publication, or resource.  

California

With the increased interest in the effectiveness of early education, the California Department of Education’s (CDE) Child Development Division (CDD) has revised its approach to evaluating the child care and development system to focus on the results desired from the system. This Desired Results for Children and Families system is compatible with the CDE’s standards and accountability system for elementary and secondary education. The system documents the progress of children and families in achieving desired results and provides information to help practitioners improve their child care and development services. Children’s progress is documented by teachers on the Desired Results Developmental Profiles, a developmental continuum used to measure the progress of children from birth to age thirteen.

In addition, preschool learning foundations are being developed to assist in preparing three- and four-year-old children for kindergarten. These foundations include language and literacy, mathematics, English-language development, and social-emotional development, to be followed in the next cycle by history-social science, science, visual and performing arts, health, and physical development. The foundations, currently under development, will be fully articulated with California's academic content standards for kindergarten through grade twelve. Emphasis will be placed on intentional instruction supported by a results-based accountability system.

Georgia

In cooperation with the National Association of State Boards of Education (NASBE), the Georgia Department of Early Care and Learning (DECAL) and the Georgia Department of Education (DOE) have been working to create a seamless education system through the development of a Pre-K through third grade model. This model would build on existing work to further align the curriculum, assessment, and performance standards for children from birth through the third grade. DOE is developing, in collaboration with DECAL, a new kindergarten assessment, the Georgia Kindergarten Inventory of Developing Skills (GKIDS), that will correlate with the Georgia Pre-K Assessment. Kindergarten teachers will receive training in the Georgia Pre-K Assessment as part of their GKIDS training. DOE and DECAL are exploring options for conducting cross-training between kindergarten and Pre-K teachers.

DECAL is also implementing a Balanced Scorecard to develop measures that will show how effective its initiatives are in meeting their stated objectives.

Indiana

The skills identified in Indiana's early learning foundations are referenced in the Indiana Standards Tool for Alternate Reporting (ISTAR). ISTAR is the measure of accountability for the progress of individual children within the assessment system. ISTAR holds the public schools and the State Education Agency (required users) accountable for improved performance as specified in No Child Left Behind and the Individuals with Disabilities Education Act. It also offers other entities using the foundations a means of being accountable for improved performance.

Maryland

Maryland’s Model of School Readiness is a comprehensive system of support and training for teachers, standards for children’s learning and program performance, information for parents, and assessment of children. The model, developed by the Maryland State Department of Education, is implemented in all public school kindergarten and prekindergarten classrooms and most early childhood special education classrooms, as well as in many child care and Head Start programs. Each fall, all kindergarten teachers assess children using a modified version of the Work Sampling System and report these data to the Department of Education. The Department of Education submits a report based on this and other data to the General Assembly each November on the level of school readiness statewide.

Pennsylvania

Pennsylvania provides recommendations of assessments that are aligned with the guidelines. In the Keystone STARS standards, programs at the highest two levels of the quality rating system must conduct child assessments three times per year using an assessment that is aligned with the guidelines. Additionally, Pennsylvania is creating an assessment and accountability system that incorporates the use of a child assessment tool that is aligned with the early learning guidelines. This assessment system will be piloted in programs starting in September 2007 and will be required across all early childhood programs starting in September 2008.

Evaluation of Quality Activities

Many Lead Agencies report in their CCDF Plans for FY 2008–2009 that they are in the process of evaluating, or have evaluated, the effectiveness of specific activities funded with CCDF quality funds. The following table shows the States and Territories that report having an evaluation in process and/or an evaluation completed for one or more of the quality activities supported by CCDF. Some States and Territories appear in multiple columns. For example, they may be in the process of evaluating one of their professional development activities and already have completed the evaluation of another professional development activity. The approaches taken for evaluation vary widely among the States and Territories. Some Lead Agencies have large-scale evaluation projects underway that examine in depth one or more quality activities, while others conduct evaluations on a regular basis (e.g., annual consumer satisfaction surveys or evaluations at the end of training courses). Many States that contract for the implementation of quality activities use performance-based contracting that specifies outcome benchmarks and evaluation processes to ensure that investments achieve expected results.

Evaluation of Quality Activities

| Activity | In Process (No.) | In Process (States/Territories) | Completed (No.) | Completed (States/Territories) |
| --- | --- | --- | --- | --- |
| Provides comprehensive consumer education | 13 | DC, GA, IL, ME, MN, MO, NC, ND, NE, NV, OR, TN, TX | 8 | CO, DC, DE, HI, ME, MN, TX, WY |
| Offers providers grants or loans to help them meet local standards | 12 | DE, IL, MD, ME, MN, MO, NC, NE, NV, OR, PR, WV | 2 | MN, PA |
| Monitors compliance with licensing and regulatory requirements | 12 | AR, CA, HI, MD, MO, ND, NE, NV, OH, TX, VT, WV | 8 | CO, HI, NC, NE, OH, OK, OR, TX |
| Provides professional development activities, including training, education, and technical assistance | 21 | AR, CA, CT, GA, HI, IL, KS, ME, MN, MO, MT, ND, NE, NV, OK, OR, RI, SC, TN, TX, VT | 5 | CA, KS, ND, OK, OR |
| Improves salaries and other compensation for child care providers | 11 | CA, CT, DE, ME, MT, NE, NV, PR, TX, VT, WI | 8 | CA, CO, GA, IL, ME, NE, TX, WA |
| Supports early language, literacy, prereading, and math concepts development | 9 | DC, IL, MD, ND, NE, NV, OR, TX, VT | 7 | AR, GA, MD, MI, MS, OK, OR |
| Promotes inclusive child care | 13 | AR, GA, HI, IL, ME, MI, MN, MT, ND, NV, OK, WA, WI | 4 | CO, IL, MS, OK |
| Implements Healthy Child Care America and other health activities, including those designed to promote the social-emotional development of children | 7 | GA, IL, MI, ND, NV, OK, TX | 2 | AR, CO |
| Implements activities that increase parental choice and improve the quality and availability of child care | 14 | AL, DC, GA, HI, IL, MA, MD, MN, ND, NE, NV, TN, TX, WA | 4 | AL, CO, HI, TX |

Performance-Based Contracting

In the CCDF Plans for FY 2008–2009, Lead Agencies reported on a range of monitoring strategies to ensure accountability and effective achievement of program goals when services or activities funded by CCDF are provided through other public or private entities.

Forty-five States and Territories (AK, AL, AZ, CA, CO, CT, DC, FL, HI, ID, IL, IN, KS, LA, MA, ME, MI, MN, MO, MS, MT, NC, ND, NE, NH, NJ, NV, NY, OH, OR, PA, PR, RI, SC, SD, TN, TX, UT, VA, VI, VT, WA, WI, WV, WY) implement services through contracts and agreements. In these States and Territories, the Lead Agency maintains oversight of all services through multilevel monitoring strategies, including caseload audits, onsite visits, financial audits, reviews of provider attendance and billing records, and other strategies. In addition, 26 of these States (AK, AL, CO, DC, DE, FL, IL, IN, KS, MA, ME, MN, MO, MS, NC, NH, NJ, NY, OK, OR, TN, TX, VA, WI, WV, WY) report that the Lead Agency specifies performance indicators or measurements in contracts with other entities.

Many States and Territories indicate that contracts include benchmarks or indicators to measure service accessibility, timeliness, and efficiency of service delivery. The following are some examples of benchmarks used by States for different types of performance-based contracts funded with CCDF funds. These examples do not include all benchmarks used by States, but are meant to represent a range of approaches States use to measure services provided through contracts. Each benchmark may be measured by a specific accuracy rate; by the number of children, parents, and/or providers served; or by some other measure.

Voucher Management Benchmarks

  • Increased accessibility to child care assistance;
  • Improved timeliness and efficiency of service delivery;
  • Accurate eligibility determination;
  • Accurate provider payment processes; and
  • Fiscal and financial compliance with Federal and State regulations.

Resource and Referral Services Benchmarks

  • Timely and accurate referral services for families;
  • Increased supply of child care programs that are safe, reliable, nurturing, geared to the ages of the children being served, and meet basic health and safety standards; and
  • Improved ability of consumers to make informed decisions about the quality of child care programs.

Quality Initiative Benchmarks

  • Increased parent and provider participation in quality initiatives;
  • Improved outcomes for children;
  • Improved ability of providers to support the inclusion of children with special needs/behavior problems in child care programs; and
  • Increased access to professional development opportunities.

Guiding Questions for Developing and Implementing Accountability Tools

The following are questions policymakers can use as they consider developing and implementing accountability tools.

Establishing a Culture of Accountability

  • Do processes exist for regularly evaluating how well programs are working?
  • Do managers/administrators value and use this information to assess progress, revise/revisit goals, adjust resources and staff focus, and/or develop new initiatives?
  • Are managers and staff committed to learning/continuous improvement through analysis and experimentation?
  • Are managers and staff knowledgeable about evaluation research and practice at a “familiarity level” (although they draw on recognized expertise)?

Developing a Long-Range Strategic Evaluation Plan

  • Does the agency have a legislative or driving goal that it is critical to link to?
  • Has a process been identified to engage key staff and stakeholders in developing the strategic plan?
  • Have those who need to be involved been identified?
  • Have the goal and purpose of the evaluation—in the short term (6 months to 1 year), intermediate term (1 to 2 years), and long term (2 to 5 years)—been clearly articulated?
  • Have strategic planning processes and appropriate resources (time, staff, and facilitator) been identified?

Partnering With Researchers and Experts

  • Are stakeholders part of the process to develop and execute evaluations?
  • Are external evaluators regularly used to plan and conduct evaluation?
  • Is expertise of stakeholders valued, used, and supported?
  • Does the agency need to contract with experts to determine methodology?
  • Has the agency tapped into available technical assistance or partner expertise?

Ensuring Data Quality

  • Are managers committed to ensuring data are accurate, timely, useful, and reliable?
  • Do processes exist to enter, store, and analyze data electronically?
  • Do administrative data systems need to be improved?
  • Does the budget include resources to support increased data capabilities over time?
  • Has the agency tapped into available technical assistance or partner expertise?

Engaging Families, Community Leaders, and Legislators

  • Are families and community leaders included in strategic planning and/or advisory meetings?
  • Are opportunities to interact and build relationships with families, community leaders, and legislators available and used to promote goals?
  • Are materials written in a language and format easily understood by families, community leaders, and legislators?
  • Are materials targeted to the needs/concerns of families, community leaders, and legislators?

Communicating Results Simply and Often

  • Have managers budgeted time and staff to analyze and communicate results?
  • Are data being collected on key questions of interest to primary stakeholders?
  • Have indicators been identified?
  • Has staff expertise in writing for a general audience been identified?
  • Do publications have one- or two-page summaries that quickly give key information and point to resources for further information?
  • Are presentations short and illustrative of real concerns and situations?

Publications

The following is a sample of publications with information about accountability systems, evaluation, and performance-based contracting.

Updated September 2008
