DEMONSTRATING YOUR PROGRAM'S WORTH
A Primer on Evaluation for Programs to Prevent Unintentional Injury
Introduction
Evaluation is the process of determining whether programs—or certain aspects of programs—are appropriate, adequate, effective, and efficient and, if not, how to make them so. In addition, evaluation shows if programs have unexpected benefits or create unexpected problems.1

All programs to prevent unintentional injury need to be evaluated, whether their purpose is to prevent a problem from occurring, to limit the severity of a problem, or to provide a service. And evaluation is much easier than most people believe. A well-designed and well-run injury prevention program produces most of the information needed to appraise its effects.

As with most tasks, the key to success is in the preparation. Your program's accomplishments—and the ease with which you can evaluate those accomplishments—depend directly on the effort you put into the program's design and operation. Ah, there's the rub: whether 'tis wiser to spend all your resources running the injury prevention program or to spend some resources determining if the program is even worth running. We recommend the second option: programs that can demonstrate, through evaluation, a high probability of success also have a high probability of garnering legislative, community, technical, and financial support.

HISTORY OF EVALUATION

Early attempts to evaluate programs took one of two forms:
Practical evaluation was conducted by people involved in prevention programs or by program staff. They were careful not to disrupt program activities more than absolutely necessary and to divert as few resources as possible away from the people being served. As a result, the evaluation design was often weak, and the data produced by the evaluation lacked credibility.

In contrast, academic evaluation was, in general, well designed and rigorously conducted. However, it was labor-intensive, intrusive, and therefore not applicable to large portions of the population because the results represented only the knowledge, attitudes, beliefs, or behaviors of people who would complete a laborious regimen of evaluation procedures.2

Over the years, evaluation evolved. The discipline profited from both practical experience and academic rigor. Methods became more feasible for use in the program setting and, at the same time, retained much of their scientific value. Furthermore, we now understand that effective evaluation begins when the idea for a program is conceived. In fact, much of the work involved in evaluation is done while the program is being developed. Once the prevention program is in operation, evaluation activities interact—and often merge—with program activities.

PURPOSE OF EVALUATION

Data gathered during evaluation enable managers to create the best possible programs, to learn from mistakes, to make modifications as needed, to monitor progress toward the program's goal, and to judge the program's ultimate outcome (Figure 1).

Indeed, not evaluating an injury prevention program is irresponsible because, without evaluation, we cannot tell if the program benefits or harms the people we are trying to help. Just as we would not use a vaccine that was untested, we should not use injury interventions that are untested. Ineffective or insensitive programs can build public resentment and cause people to resist future, more effective interventions.

Evaluation will also show whether interventions other than those planned by the program would be more effective. For example, program staff might ask police officers to talk to students about the hazards of drinking and driving. The hope might be that stories the police tell about the permanently injured and dead teenagers they see in car crashes would scare the students into behaving responsibly. Evaluation might show, however, that many teenagers do not respect or trust police officers and therefore do not heed what they say. Evaluation would also show what type of people the students would listen to—perhaps sports stars or other young people (their peers) who are permanently injured because of drinking and driving. The right message delivered by the wrong person can be nonproductive and even counterproductive.
Figure 1.
SIDE BENEFITS OF EVALUATION

A side benefit of formal evaluation is that the people who are served by the program get an opportunity to say what they think and to share their experiences. Evaluation is one way of listening to the people you are trying to help. It lets them know that their input is valuable and that the program is not being imposed on them.

Another side benefit is that evaluation can boost employee morale—program personnel have the pleasure of seeing that their efforts are not wasted. Evaluation produces evidence to show either that their work is paying off or that management is taking steps to see that needed improvements are made.

A third side benefit is that, with good evaluation before, during, and after your program, the results may prove so valuable that the news media or scientific journals will be interested in publishing them. In addition, other agencies or groups may see how well you have done and want to copy your program.

A COMMON FEAR OF EVALUATION: "It shows only what's wrong!"

Often, a major obstacle to overcome is the program personnel's concern that evaluation will reveal something bad that they are unaware of. The truth is that evaluation will reveal new information, including any aspects of the program that are not working as effectively as planned. But that is not bad news; it is good news, because now something can be done to improve matters. We promise that evaluation will also bring news about aspects that are working better than expected. Indeed, all evaluation produces four categories of information (see Table 1).
Table 1. The four categories of information produced by evaluation

1. Characteristics: Information staff knows already; indicates the program is working well.
   Description: Data about aspects of the program that work well, that program staff knows about, and that should be publicized whenever possible.

2. Characteristics: Information staff knows already; indicates the program needs improvement.
   Description: Data about aspects of the program that need improvement, that staff knows about and hopes will not be found out. Staff is unlikely to mention these aspects to the evaluator.

3. Characteristics: New information; indicates the program is working well.
   Description: Data about aspects of the program that work well, but staff does not know about them. All evaluation uncovers some pleasant surprises, but program staff rarely expects them.

4. Characteristics: New information; indicates the program needs improvement.
   Description: Data about aspects of the program that need improvement and about which staff is unaware. This is the type of information staff most expects when evaluation begins.
Well-designed evaluation always produces unexpected information. That information is just as likely to be about something that works well as it is to be about something that needs improvement. So remember to expect pleasant surprises, and recognize that, by showing you why certain components of your program do not work, evaluation will often make what seemed an intractable problem easy to solve. With this change in perspective, evaluation ceases to be a threat and becomes an opportunity.

CHOOSING THE EVALUATOR

The first step in any evaluation is deciding who will do it. Should it be the program staff, or should outside consultants be hired?

In almost all cases, outside consultants are best because they will look at your program from a new perspective and thereby provide you with fresh insights. However, outside consultants do not necessarily have to come from outside your organization. Evaluators within your organization who are not associated with your program and who have no personal interest in the results of an evaluation may serve your needs. Figure 2 contains a list of the most important characteristics of a consultant. Although these characteristics are listed with some regard to order of importance, the actual order depends on your program's needs and the objectives for the evaluation.

Important factors to consider when selecting consultants are their professional training and experience. Some specialize in quantitative methods, others in qualitative methods. Some have experience with one stage of evaluation, others with another stage. Some consider themselves in partnership with program staff; others see themselves as neutral observers. Some had formal courses in evaluation; others learned evaluation on the job. In other words, the backgrounds and experiences of evaluators can vary considerably. They can even come from different professional disciplines (e.g., psychology, mathematics, or medicine). Find a consultant whose tendencies, background, and training best fit your program's evaluation goals.

Another factor to consider is the consultant's motivation (beyond receiving a fee). Consultants' personal motivations will affect their perspective as they plan and implement the evaluation. For example, some consultants may be interested in publishing the results of your evaluation and consequently may shade results toward what they believe would interest journal editors. Other consultants may be interested in using the findings from your evaluation in their own research (e.g., they may be researching why certain people behave a certain way). Find consultants whose professional interests match the purpose of your evaluation. For example, if the purpose of your evaluation is to ensure that the program's written materials are at the correct reading level for the people you are trying to reach, find a consultant whose interest is in producing data on which management can base decisions.

Listed next are some areas in which consultants specialize:
• Producing data on which managers can base decisions (data may cover broad social issues or focus on a specific problem).
• Solving problems associated with program management.
• Increasing a program's visibility to one or more audiences.
• Documenting the final results of programs.
Make sure the consultants you hire have experience in conducting the evaluation methods you need, in evaluating programs similar to yours, and in producing the type of information you seek. Be sure to check all references before you enter into a contract with any consultant.
Figure 2. Characteristics of a Suitable Consultant
Cost will vary depending on the experience and education of the consultant, the type of evaluation required, and the geographic location of your program. However, a good rule is for service programs (e.g., programs to distribute smoke detectors to qualified applicants) to budget about 10% to 15% of available funds for evaluation.

Programs with experimental or quasi-experimental designs (see page 51) are essentially research projects, so evaluation is built into the design of the program: the cost of the program includes the cost of evaluation. Operating programs with an experimental or quasi-experimental design is more expensive than operating service programs, but such programs will show whether the service being provided to the target population produces the intended result. Indeed, such programs are likely to produce publishable information that can benefit other programs to prevent unintentional injuries.

Be sure to include the cost of evaluation in your proposals for grant funds.

DESIGNING YOUR PROGRAM SO THAT EVALUATION IS AN INTEGRAL PART

The information needed to evaluate the effects of your program will develop naturally and almost effortlessly if you put the necessary time and resources into designing a good program, pilot testing your proposed procedures and materials, and keeping meticulous records while the program is in operation.

To be most effective, evaluation procedures and activities must be woven into the program's procedures and activities. While you are planning the program, also plan how you will judge its success.

Include the following components in the design of your program:
COMPONENTS OF AN EVALUATION

Every evaluation must contain certain basic components (Figure 3):
In Section 2 (page 19), we discuss the four stages of evaluation and mention the methods most suitable for each stage. In Section 3 (page 35), we discuss the various methods in considerable detail.
Figure 3. Steps Involved in Any Evaluation

1. Write a statement defining the objective(s) of the evaluation.
2. Define the target population.
3. Write down the type of information to be collected.
4. Choose suitable methods for collecting the information.
5. Design and test instruments appropriate to the chosen methods for collecting the information.
6. Collect raw information.
7. Process the raw information.
8. Analyze the processed information.
9. Write an evaluation report describing the evaluation's results.