DEMONSTRATING YOUR PROGRAM'S WORTH
A Primer on Evaluation for Programs to Prevent Unintentional Injury

 

 


 

GENERAL INFORMATION 

INTRODUCTION

Evaluation is the process of determining whether programs—or certain aspects of programs—are appropriate, adequate, effective, and efficient and, if not, how to make them so. In addition, evaluation shows if programs have unexpected benefits or create unexpected problems.1

All programs to prevent unintentional injury need to be evaluated whether their purpose is to prevent a problem from occurring, to limit the severity of a problem, or to provide a service.

And evaluation is much easier than most people believe. A well-designed and well-run injury prevention program produces most of the information needed to appraise its effects. As with most tasks, the key to success is in the preparation. Your program’s accomplishments—and the ease with which you can evaluate those accomplishments— depend directly on the effort you put into the program’s design and operation.

Ah, there’s the rub: whether ’tis wiser to spend all your resources running the injury prevention program or to spend some resources determining if the program is even worth running. We recommend the second option: programs that can demonstrate, through evaluation, a high probability of success also have a high probability of garnering legislative, community, technical, and financial support.

HISTORY OF EVALUATION

Early attempts to evaluate programs took one of two forms:

  • Evaluation based on practical experience.

  • Evaluation based on academic rigor.

Practical evaluation was conducted by people involved in prevention programs or by program staff. They were careful not to disrupt program activities more than absolutely necessary and to divert as few resources as possible away from the people being served. As a result, the evaluation design was often weak, and the data produced by the evaluation lacked credibility.

In contrast, academic evaluation was, in general, well designed and rigorously conducted. However, it was labor-intensive, intrusive, and therefore not applicable to large portions of the population because the results represented only the knowledge, attitudes, beliefs, or behaviors of people who would complete a laborious regimen of evaluation procedures.2

Over the years, evaluation evolved. The discipline profited from both practical experience and academic rigor. Methods became more feasible for use in the program setting and, at the same time, retained much of their scientific value. Furthermore, we now understand that effective evaluation begins when the idea for a program is conceived. In fact, much of the work involved in evaluation is done while the program is being developed. Once the prevention program is in operation, evaluation activities interact—and often merge—with program activities.

PURPOSE OF EVALUATION

Data gathered during evaluation enable managers to create the best possible programs, to learn from mistakes, to make modifications as needed, to monitor progress toward the program’s goal, and to judge the program’s ultimate outcome (Figure 1).

Indeed, not evaluating an injury prevention program is irresponsible because, without evaluation, we cannot tell if the program benefits or harms the people we are trying to help. Just as we would not use a vaccine that was untested, we should not use injury interventions that are untested. Ineffective or insensitive programs can build public resentment and cause people to resist future, more effective, interventions.

Evaluation will also show whether interventions other than those planned by the program would be more effective. For example, program staff might ask police officers to talk to students about the hazards of drinking and driving. The hope might be that stories the police tell about the permanently injured and dead teenagers they see in car crashes would scare the students into behaving responsibly.

Evaluation might show, however, that many teenagers do not respect or trust police officers and therefore do not heed what they say. Evaluation would also show what type of people the students would listen to—perhaps sports stars or other young people (their peers) who are permanently injured because of drinking and driving.

The right message delivered by the wrong person can be nonproductive and even counterproductive.

 


Why Evaluate Injury-Prevention Programs?

  • To learn whether proposed program materials are suitable for the people who are to receive them.

  • To learn whether program plans are feasible before they are put into effect.

  • To have an early warning system for problems that could become serious if unattended.

  • To monitor whether programs are producing the desired results.

  • To learn whether programs have any unexpected benefits or problems.

  • To enable managers to improve service.

  • To monitor progress toward the program’s goals.

  •  To produce data on which to base future programs.

  • To demonstrate the effectiveness of the program to the target population, to the public, to others who want to conduct similar programs, and to those who fund the program.

Figure 1.

SIDE BENEFITS OF EVALUATION

A side benefit of formal evaluation is that the people who are served by the program get an opportunity to say what they think and to share their experiences. Evaluation is one way of listening to the people you are trying to help. It lets them know that their input is valuable and that the program is not being imposed on them.

Another side benefit is that evaluation can boost employee morale—program personnel have the pleasure of seeing that their efforts are not wasted. Evaluation produces evidence to show either that their work is paying off or that management is taking steps to see that needed improvements are made.

A third side benefit is that, with good evaluation before, during, and after your program, the results may prove so valuable that the news media or scientific journals will be interested in publishing them. In addition, other agencies or groups may see how well you have done and want to copy your program.

A COMMON FEAR OF EVALUATION: "It shows only what's wrong!"

Often, a major obstacle to overcome is the program personnel's concern that evaluation will reveal something bad that they are unaware of. And the truth is that evaluation will reveal new information, including any aspects of the program that are not working as effectively as planned. But that is not bad news; it is good news, because now something can be done to improve matters. We promise that evaluation will also bring news about aspects that are working better than expected.

Indeed, all evaluation produces four categories of information (See Table 1).



Table 1. Four Categories of Information Produced by Evaluation

1. Characteristics: Information staff knows already; indicates program is working well.
   Description: Data about aspects of program that work well, that program staff knows about, and that should be publicized whenever possible.

2. Characteristics: Information staff knows already; indicates program needs improvement.
   Description: Data about aspects of program that need improvement, that staff knows about and hopes will not be found out. Staff is unlikely to mention these aspects to the evaluator.

3. Characteristics: New information; indicates program is working well.
   Description: Data about aspects of program that work well, but staff does not know about them. All evaluation uncovers some pleasant surprises, but program staff rarely expects them.

4. Characteristics: New information; indicates program needs improvement.
   Description: Data about aspects of program that need improvement and about which staff is unaware. This is the type of information staff most expects when evaluation begins.


 

Well-designed evaluation always produces unexpected information. That information is just as likely to be about something that works well as it is to be about something that needs improvement.

So remember to expect pleasant surprises; and recognize that, by showing you why certain components of your program do not work, evaluation will often make what seemed an intractable problem easy to solve. With this change in perspective, evaluation ceases to be a threat and becomes an opportunity.

CHOOSING THE EVALUATOR

The first step in any evaluation is deciding who will do it. Should it be the program staff or should outside consultants be hired?

In almost all cases, outside consultants are best because they will look at your program from a new perspective and thereby provide you with fresh insights. However, outside consultants do not necessarily have to come from outside your organization. Evaluators within your organization who are not associated with your program and who have no personal interest in the results of an evaluation may serve your needs. Figure 2 contains a list of the most important characteristics of a consultant. Although these characteristics are listed with some regard to order of importance, the actual order depends on your program’s needs and the objectives for the evaluation.

Important factors to consider when selecting consultants are their professional training and experience. Some specialize in quantitative methods, others in qualitative methods. Some have experience with one stage of evaluation, others with another stage. Some consider themselves in partnership with program staff; others see themselves as neutral observers. Some had formal courses in evaluation; others learned evaluation on the job. In other words, the background experiences of evaluators can vary considerably. They can even come from different professional disciplines (e.g., psychology, mathematics, or medicine). Find a consultant whose tendencies, background, and training best fit your program's evaluation goals.

Another factor to consider is the consultant’s motivation (beyond receiving a fee). Consultants’ personal motivations will affect their perspective as they plan and implement the evaluation. For example, some consultants may be interested in publishing the results of your evaluation and consequently may shade results toward what they believe would interest journal editors. Other consultants may be interested in using the findings from your evaluation in their own research (e.g., they may be researching why certain people behave a certain way). Find consultants whose professional interests match the purpose of your evaluation. For example, if the purpose of your evaluation is to ensure that the program’s written materials are at the correct reading level for the people you are trying to reach, find a consultant whose interest is in producing data on which management can base decisions.

Listed next are some areas consultants specialize in:

  • Conducting basic research.

  • Producing data on which managers can base decisions (data may cover broad social issues or focus on a specific problem).

  • Solving problems associated with program management.

  • Increasing a program’s visibility to one or more audiences.

  • Documenting the final results of programs.

 

Make sure the consultants you hire have experience in conducting the evaluation methods you need, in evaluating programs similar to yours, and in producing the type of information you seek. Be sure to check all references before you enter into a contract with any consultant.

 

Characteristics of a Suitable Consultant

  • Is not directly involved in the development or running of the program being evaluated.

  • Is impartial about evaluation results (i.e., has nothing to gain by skewing the results in one direction or another).

  • Will not give in to any pressure by senior staff or program staff to produce particular findings.

  • Will give staff the full findings (i.e., will not gloss over or fail to report certain findings for any reason).

  • Has experience in the type of evaluation needed.

  • Has experience with programs similar to yours.

  • Communicates well with key personnel.

  • Considers programmatic realities (e.g., a small budget) when designing the evaluation.

  • Delivers reports and protocols on time.

  • Relates to the program.

  • Sees beyond the evaluation to other programmatic activities.

  • Explains both benefits and risks of evaluation.

  • Educates program personnel about conducting evaluation, thus allowing future evaluations to be done in house.

  • Explains material clearly and patiently.

  • Respects all levels of personnel.

Figure 2.


COST OF EVALUATION

Cost will vary depending on the experience and education of the consultant, the type of evaluation required, and the geographic location of your program. However, a good rule is for service programs (e.g., programs to distribute smoke detectors to qualified applicants) to budget about 10% to 15% of available funds for evaluation.
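For example, under this rule of thumb, a hypothetical service program with $200,000 in available funds would set aside roughly $20,000 to $30,000 for evaluation.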

Programs with experimental or quasi-experimental designs (see page 51) are essentially research projects, so evaluation is built into the design of the program: the cost of the program includes the cost of evaluation. Operating programs with an experimental or quasi-experimental design is more expensive than operating service programs, but experimental or quasi-experimental programs will show whether the service being provided to the target population produces the intended result. Indeed, such programs are likely to produce publishable information that can benefit other programs to prevent unintentional injuries.

Be sure to include the cost of evaluation in your proposals for grant funds.

DESIGNING YOUR PROGRAM SO THAT EVALUATION IS AN INTEGRAL PART

The information needed to evaluate the effects of your program will develop naturally and almost effortlessly if you put the necessary time and resources into designing a good program, pilot testing your proposed procedures and materials, and keeping meticulous records while the program is in operation.

To be the most effective, evaluation procedures and activities must be woven into the program’s procedures and activities. While you are planning the program, also plan how you will judge its success. 

Include the following components in the design of your program:

  • A plan for pilot testing all the program’s plans, procedures, activities, and materials (see "Formative Evaluation," page 25).

  • A method for determining whether the program is working as it should and whether you are reaching all the people your program planned to serve (see "Process Evaluation," page 27).

  • A system for gathering the data you will need to evaluate the final results of your program (see "Impact Evaluation," page 29, and "Outcome Evaluation," page 32).

 

COMPONENTS OF AN EVALUATION

Every evaluation must contain certain basic components (Figure 3):

  • A Clear and Definite Objective:
    Write a statement defining clearly and specifically the
    objective for the evaluation.

    Without such a statement, evaluators are unfocused and do not know what to measure. The statement will vary depending on the aspect of the program that is being evaluated. For example, before the program begins, you will need to test any materials you plan to distribute to
    program participants. In such a case, your evaluation objective might read something like this:

    To learn whether the people in our target population
    can understand our new brochure about the benefits of smoke detectors.

    Your evaluation objective for a completed program might
    read like this:

    To measure how many deaths were prevented as a
    result of our program to increase helmet use among teenage bicyclists in XYZ County.

 

  • A Description of the Target Population:
    Define the target population and, if possible, the comparison (control) group. Be as specific as possible.

    The target population will vary depending on the reason
    for the evaluation. In Section 2 (page 19), we discuss how to select an appropriate target population at each stage of evaluation. An example definition of a target population might read like this:

    All children from 8 through 10 years old who own bicycles and who attend public schools in XYZ County.

 

  • A Description of What Is To Be Evaluated:
    Write down the type of information to be collected and how that information relates to your program’s objectives.

    For example, if the goal of your program is to increase the
    use of smoke detectors among people with low incomes, the description of the information you need during the first stage of evaluation might read like this:

    For baseline information on our target population, we need to know the number and percentage of people with incomes below $_______ in the city of XYZ who now have smoke detectors in their homes.

 

  • Specific Methods:
    Choose methods that are suitable for the objective of the evaluation and that will produce the type of information you are looking for.

    In Section 2 (page 19), we discuss the four stages of evaluation and mention the methods most suitable for each stage. In Section 3 (page 35), we discuss the various methods in considerable detail.

 

  • Instruments To Collect Data:
    Design and test the instruments to be used to collect information.

    In Section 3 (page 35), we discuss various methods of collecting information and the most suitable instruments for each method. For example, you could collect information on people’s attitude toward wearing seatbelts by doing a survey (the method) using a questionnaire (the instrument).

 

  • Raw Information:
    Collect raw information from the members of the target
    population.

    Raw information is simply the information you collect as
    you run the program (e.g., the number of people who came to your location or the number of items you have distributed). Raw information is information that has not been processed or analyzed.

 

  • Processed Information:
    Put raw information into a form that makes it possible to analyze.

    Usually, that means entering the information into a computer database that permits the evaluator to do various statistical calculations (see the sketch following this list for a simple illustration).

 

  • Analyses:
    Analyzing either quantitative or qualitative information requires the services of an expert in the particular evaluation method used to gather the information.

    We discuss analysis when we describe each method in detail (see Section 3, page 35).

 

  • Evaluation Report:
    Write a report giving results of the analyses and the significance (if any) of the results.

    This report could be as simple as a memo explaining the
    results to the program manager. However, it could also be an article suitable for publication in a scientific journal or a report to a Congressional Committee. The type of report depends on the purpose of the evaluation and the significance of the results.
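To make the middle components more concrete, the sketch below shows one way raw questionnaire responses might be entered into a small database and summarized for analysis. It is a minimal illustration, not part of the primer's method: the question, response categories, respondent IDs, and data are hypothetical, and the choice of Python and SQLite is simply one convenient option.

```python
# A minimal, hypothetical sketch of the "raw information" and "processed
# information" components: questionnaire responses are recorded, entered into
# a small database, and summarized so an evaluator can analyze them.
import sqlite3

# Raw information: one record per respondent, exactly as collected with a
# (hypothetical) seatbelt questionnaire.
QUESTION = "How often do you wear a seatbelt when riding in a car?"
raw_responses = [
    ("R001", "always"),
    ("R002", "sometimes"),
    ("R003", "always"),
    ("R004", "never"),
    ("R005", "always"),
]

# Processed information: the same records placed in a database so the
# evaluator can run calculations on them.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE responses (respondent TEXT, answer TEXT)")
con.executemany("INSERT INTO responses VALUES (?, ?)", raw_responses)

# A simple analysis: the number and percentage of respondents giving each answer.
total = con.execute("SELECT COUNT(*) FROM responses").fetchone()[0]
print(QUESTION)
for answer, count in con.execute(
    "SELECT answer, COUNT(*) FROM responses GROUP BY answer ORDER BY COUNT(*) DESC"
):
    print(f"  {answer:<10} {count:>3}  ({count / total:.0%})")
```

In practice, the evaluator chooses the database and statistical tools; the point is simply that raw information becomes analyzable once it is recorded in a consistent, structured form.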

Steps Involved in Any Evaluation

1. Write a statement defining the objective(s) of the evaluation.

2. Define the target population.

3. Write down the type of information to be collected.

4. Choose suitable methods for collecting the information.

5. Design and test instruments appropriate to the chosen methods for collecting the information.

6. Collect raw information.

7. Process the raw information.

8. Analyze the processed information.

9. Write an evaluation report describing the evaluation’s results.

Figure 3.

 

 

