WORK WITH PARENTS & THE COMMUNITY
Are You Making Progress? Increasing Accountability Through Evaluation


    Glossary

    coverage—the extent to which the program is serving the intended target population.
    empowerment evaluation—evaluation that is designed to support program participants and staff in self-evaluation of their own programs (a form of internal evaluation).
    evaluation—systematic collection and use of program information for multiple purposes, including monitoring, program improvement, outcome assessment, planning, and policy-making.
    external evaluation—evaluation done by consultants or researchers not working for the same organization as the program.
    formative evaluation—evaluation that is designed to collect information that can be used for continuous program improvement.
    impacts—usually used to refer to long-term program outcomes.
    input—resources available to the program, including money, staff time, volunteer time, etc.
    internal evaluation—evaluation done by staff within the same organizational structure as the program.
    logic model—a flowchart or graphic display representing the logical connections between program activities and program goals.
    monitoring—a type of evaluation designed to ensure that program activities are being implemented as planned (e.g., the number of participants, number of hours, type of activities, etc.).
    output—the immediate products or activities of a program.
    outcome—ways in which the participants of a prevention program could be expected to change at the conclusion of the program (e.g., increases in knowledge, changes in attitudes or behavior, etc.).
    multivariate analysis—a statistical term referring to analyses that involve a number of different variables. For example, an analysis that looked at whether peer factors and individual factors both influence alcohol use would be called "multivariate".
    participatory evaluation—evaluation that involves key stakeholders in the design of the evaluation, in data collection, and in the interpretation of results.
    process evaluation—evaluation that is designed to document what programs actually do: program activities, participants, resources, and other outputs.
    stakeholders—those persons with an interest in the program and its evaluation (e.g., participants, funders, managers, persons not served by the program, community members, etc.).
    summative evaluation—evaluation that is designed to collect information about whether a program is effective in creating intended outcomes.
    triangulation—the use of multiple data sources and methods to answer the same research question.
    qualitative data—information that is reported in narrative form or that is based on narrative information, such as written descriptions of programs, testimonials, open-ended responses to questions, etc.
    quantitative data—information that is reported in numerical form, such as test scores, number of people attending, drop-out rates, etc.

    Western CAPT

