
Program Operations Guidelines for STD Prevention
Program Evaluation

INTRODUCTION

This chapter gives a brief description of program evaluation and describes how evaluation can be used to help reach program goals and objectives. It does not include all methods, philosophies, or approaches to evaluation and touches on only a few aspects of STD prevention programs.

There are many reasons for program evaluation, some with emphasis on scientific methods to collect data, some with emphasis on the process of monitoring, and others with emphasis on the use of data to inform program managers and other key policy makers about how well a program is meeting its goals and objectives. CDC emphasizes evaluation as a way to improve and account for public health actions using methods that are useful, feasible, proper, and accurate. To accomplish this, CDC recommends that specific, systematic evaluations be carried out throughout the life span of a program, from program inception and planning to implementation, sustained delivery, and re-design (MMWR, 1999).

Ongoing evaluation in STD prevention programs is critical to developing and sustaining high quality, appropriately targeted STD prevention efforts. Evaluation offers the opportunity to review, analyze, and modify STD prevention efforts as necessary. It allows STD prevention programs to know where they have been, where they currently are, how they got there, and where they are headed. Good program managers use evaluation to improve program performance (See Leadership and Program Management chapter) and to monitor progress toward achievement of goals and objectives.

In addition to program self-evaluation, evaluation may be needed in other situations. Some of these situations are:

  • To help prioritize activities and guide resource allocation
  • To inform funders of the program whether their contributions are being used effectively
  • To inform community members and stakeholders of the project's value (Rugh, 1996)
  • To provide information that can be useful in the design or improvement of similar projects (Rossi, 1998)

Regardless of the reason for the evaluation, different strategies are called for in different situations and at various stages in programs. In the development stage, evaluations focus on assessing the extent and severity of the issues to be addressed and on designing effective interventions to address them (Wong-Reiger, 1993a). Once programs are initiated, it is important to examine various methods of operation to improve program effect or decrease costs in producing the desired effect (Wong-Reiger, 1993b). An example is a program improvement which increases the number of patients who voluntarily return for treatment while also decreasing the cost of follow-up.

To aid in decisions concerning continuing, expanding, or curtailing programs, evaluation should also consider costs in relation to benefits. It can compare an intervention's cost effectiveness with that of alternative strategies. For either new or ongoing programs, impact assessments estimate the effects of the intervention and the degree of effectiveness in providing the target populations with the resources, services, and benefits that are intended (Rossi, 1998).

Whether an evaluation is comprehensive or tries to answer only one question, "the aim is to provide the most valid and reliable findings possible within political and ethical constraints and the limitations imposed by time, money, and human resources" (Rossi, 1998).

Recommendation

  • Programs should conduct appropriate, regular, and ongoing evaluation for self-assessment and quality improvement.

PLANNING AN EVALUATION

Evaluation should be part of program planning from its inception. There should be a plan of evaluation for each essential program component, including how and when each will be evaluated and how the evaluation will be used to improve the program. While evaluations are conducted after the program has started, early planning for evaluation enables gathering the right data, at the right time, for the right purpose. This is especially important for determining if the program's activities are having the desired outcomes, such as behavior change, and is essential in determining if the program was responsible for the desired impact.

While a single public health intervention is seldom shown to be the reason for achieving a particular end result, such as a reduction in disease morbidity, confidence and utility of most evaluations can be increased by designing the evaluation questions and methods when planning or changing program activities or interventions. For instance, if a comparison of indicators before and after the program is to be used, planning for this must be included in the beginning. In addition, managers and evaluators must be able to identify factors outside the program intervention which might confound the evaluation and affect the outcome. These should be taken into consideration in the design of the evaluation and the collection of data. (Wong-Reiger, 1993)

EXAMPLE: The STD program supports a risk reduction program in a local community-based organization which emphasizes delaying sexual intercourse for all teens. An adolescent female pregnancy prevention program in the same community-based organization also has an effort to persuade teens to delay the onset of sexual activity. The effect of the STD program's risk reduction program is therefore confounded by this similar effort, which must be taken into consideration when designing and conducting the evaluation.

As managers plan an evaluation, they should begin with a clear purpose in mind. They must gather background information concerning what is to be evaluated and why, and determine the stakeholders of the program and the evaluation, how findings will be used, and the amount of fiscal and human resources available to design and conduct the evaluation. With this information in hand, the steps can be undertaken to begin the evaluation process. Throughout the evaluation process, input from stakeholders, staff, or evaluators may alter the extent of the evaluation or the resources available. However, at each juncture, keeping the purpose in mind will be useful in making decisions (Patton, 1997; Herman, 1987).

Designing an evaluation requires that choices be made between various ways of obtaining information; each choice is subject to trade-offs between accuracy, time, and resource constraints. Some of those choices are: the type of information collected (e.g., descriptive or numeric), the timing of measurements (e.g., pre and post), the measurement techniques (e.g., single versus multiple measures), and who and what is measured. The quantity and quality of information to be produced and the costs associated with each must be considered in making these choices (NIDA, 1991).

Recommendations

  • Programs should plan evaluations early in the development of interventions.
  • Programs should have a plan of evaluation for all important program components, including how and when each will be evaluated.
  • Program evaluations should be designed and conducted with a clear purpose.

STEPS IN DESIGNING AND CONDUCTING AN EVALUATION

There are six essential steps in designing and conducting an evaluation. These steps are to 1) engage stakeholders in the evaluation, 2) describe the program, 3) focus the evaluation design, 4) collect credible evidence, 5) justify conclusions, and 6) ensure use and share results (MMWR, 1999). Each step is described in greater detail below.

1.) Engage stakeholders in the evaluation.

In practice, evaluation is often an effort of only program managers and evaluators (external or internal). However, for evaluation to be successful, it is necessary that other stakeholders are included in the planning, implementation, and interpretation of the evaluation and its findings.

The range of stakeholders includes participants who expect services, funders who expect results for their support, other agencies or groups who serve the same or similar clients, the staff or volunteers who run the programs, and the administrators who are responsible for the delivery of services (Wong-Reiger, 1993). There are stakeholders of the program and stakeholders of the evaluation and some are both. The more involved stakeholders are, especially in the decision making process, the more cooperative they will be in providing information and being open to unexpected results. It is important to understand what various stakeholders want from the evaluation and how rigorous they expect evaluation methodology to be. It is also likely that these different motivations and expectations will cause conflict if not accounted for or resolved.

Stakeholder involvement will vary with the type of evaluation. The choice of which stakeholders to involve and at what level is a function of the purpose of the evaluation and who will use the results. Some evaluations may involve stakeholders only in decision making while others may be completely "participatory". Participatory evaluations involve stakeholders in all aspects of the project including design, data collection, and analysis. The benefits of participatory evaluation are: 1) selecting appropriate evaluation methods, 2) developing questions that are grounded in the perceptions and experiences of clients, 3) facilitating the process of empowerment, 4) overcoming resistance to evaluation by participants and staff, and 5) fostering a greater understanding among stakeholders (Marris, 1998). Regardless of the level of involvement, it is important that responsibilities and roles of each person or group are clearly defined and agreed to at the beginning of the process.

2.) Describe the program, including the needs, expectations, activities, stage, and context.

Program managers will need to elicit information from a variety of sources including staff, data, and documents to fully describe the program. The description should include the mission and objectives and be detailed enough so that others may understand the program goals and strategies (MMWR, 1999).

In describing the program it is useful to have a logic model, a graphic presentation of the logical relationships among program components. A program logic model is ideally developed at the planning stage and assists in clarifying the relationships between activities, objectives, and goals of the program. The development of a logic model is similar to identifying goals and objectives. There are four main components in developing a logic model: 1) the activities (methods of operation), 2) the services delivered (process indicators), 3) the intermediate results (outcome indicators), and 4) the intended results (impact indicators), including targeted groups. The logic model is most useful if each element in it is linked to a quantified objective, so that process, outcome, and impact indicators are defined in terms of concrete numerical targets. To develop a logic model, managers must be able to clearly and accurately describe the program and who and what it intends to affect. Each of the program activities is measured by one or more service delivery results, which in turn measure the level at which that activity is provided. Each service delivery result is linked to one or more intermediate results, which are expected to occur as a result of participation in the program activity. For a program to propose that an intended result can be achieved, it must show that there are one or more intermediate results linked to the intended result. Further, there should be evidence that each step will indeed bring about the next step in the process.
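Although the guidelines describe logic models only in narrative terms, the chain of activities, process indicators, outcome indicators, and impact indicators can be sketched as a simple data structure. The Python sketch below is purely illustrative; the component descriptions, indicators, and numeric targets are hypothetical and are not drawn from the guidelines.

    from dataclasses import dataclass, field

    @dataclass
    class LogicModelElement:
        """One element of a logic model, tied to a quantified objective."""
        description: str   # what the element is (activity, service, result)
        indicator: str     # how it is measured
        target: float      # numeric target for the indicator
        leads_to: list = field(default_factory=list)  # elements it is expected to produce

    # Hypothetical chain: activity -> service delivered -> intermediate result -> intended result
    impact = LogicModelElement("Reduced chlamydia prevalence among women under 25",
                               "cases per 100,000", 200)
    outcome = LogicModelElement("Infected clients treated within 7 days",
                                "proportion treated", 0.95, leads_to=[impact])
    process = LogicModelElement("Clients screened at intake",
                                "proportion screened", 0.90, leads_to=[outcome])
    activity = LogicModelElement("Routine chlamydia screening offered in all clinics",
                                 "clinics offering screening", 3, leads_to=[process])

    def print_chain(element, depth=0):
        """Walk the chain from activity to intended result, showing each quantified target."""
        print("  " * depth + f"{element.description} "
              f"(target: {element.target} {element.indicator})")
        for nxt in element.leads_to:
            print_chain(nxt, depth + 1)

    print_chain(activity)

Writing the model down in a form like this makes missing links and unquantified objectives easier to spot, which is the same flaw-finding the guidelines describe.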

When program goals and objectives are appropriately written, that is, specific, measurable, realistic, and time-framed, the model is easier to develop. If not, flaws in the objectives (and program design) may also be easier to recognize. (See Appendix E-A for an example of a program logic model, and E-B for examples of good and poorly developed objectives.)

Adequately describing the program from beginning to end, both as it should be and as it is, will aid managers in determining whether the course the program is on is the correct one. For instance, if the services delivered are different from those which were planned, delivered in significantly smaller amounts, or delivered to the wrong populations, it is necessary to rethink what changes in the program are indicated. However necessary a particular program is believed to be, it is difficult to attribute any results to it if the activities were not delivered as planned (Wong-Reiger, 1993).

EXAMPLE: A manager implements a program to provide screening for syphilis in intake drug treatment facilities on the grounds that the exchange of drugs for sex is a part of the syphilis epidemic in the community. The expectation is that 98% of the clients will be tested and, if needed, treated for disease; 95% of those with the disease will be interviewed; and subsequently 80% of appropriate sex partners will also be examined and treated. However, for a variety of reasons, only 60% of the clients are treated for syphilis, 50% of them interviewed and only 40% of the named partners located. It would be unrealistic for this aspect of the program to be credited for a decrease in disease among drug users in the community.
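As a rough sketch of the comparison behind this example, the planned and actual service delivery indicators can be laid side by side; the figures below simply restate the hypothetical percentages above.

    # Planned versus actual service delivery indicators from the hypothetical
    # syphilis screening example above.
    planned = {"clients tested and treated": 0.98,
               "cases interviewed": 0.95,
               "partners examined and treated": 0.80}
    actual = {"clients tested and treated": 0.60,
              "cases interviewed": 0.50,
              "partners examined and treated": 0.40}

    for indicator, target in planned.items():
        shortfall = target - actual[indicator]
        print(f"{indicator}: target {target:.0%}, actual {actual[indicator]:.0%}, "
              f"shortfall {shortfall:.0%}")

A gap this large across every process indicator is the signal, described above, that end results cannot credibly be attributed to the program as designed.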

3.) Focus the evaluation design.

Before the design of the evaluation is decided, managers, evaluators, and stakeholders will need to determine the objectives of the evaluation. The objectives differ depending on what is being evaluated and how the evaluation is intended to be used, but it is important that the objectives are realistic, focused on the need at hand, and designed to answer the right questions. Evaluation objectives help clarify what aspect of the overall program is being assessed (Schechter, 1993). Setting the objectives for the evaluation will help focus it and keep the process from becoming too cumbersome and all-inclusive. It is also important to understand the difference between evaluation objectives and program objectives.

EXAMPLE: A program objective might be "Ensure that 95% of females who test positive for chlamydia in the STD clinic are appropriately treated within 7 days." An evaluation objective might be "Assess whether follow-up systems for clients are ensuring an adequate response rate."

There are a variety of evaluation designs, and not all are equally suited to the type of evaluation needed or wanted. It is necessary for managers to understand the differences and plan the evaluation using the most appropriate evaluation method. This will help ensure that the evaluation strategy has the greatest chance of being useful, feasible, ethical, and accurate (MMWR, 1999).

4.) Collect credible evidence.

Protocols and instruments may need to be developed for use in data collection activities. These activities should be supervised closely by the evaluation director since these data will be used for analysis. If evaluation was not part of the planning process, some data may be very difficult or impossible to collect once the program has been initiated. There must be a plan for who can provide data and who can gather the data. For process evaluation, decisions should be made whether to collect all available data on an ongoing basis, sample on an ongoing basis, or sample at specific times. For outcome/impact evaluation there are many methodological issues to consider; in this case, it is best to seek help from program evaluation specialists (Program Evaluation Toolkit, 1997).

Not all evidence for program evaluations is quantitative data. Some issues in the evaluation are best addressed through qualitative methods. Such methods include observations, semi-structured and unstructured interviews, and the collection of vignettes and interpretations about program aspects and functioning. They are often more useful in evaluation of the early stages of program development, or in assessment of the need for "midcourse corrections." Qualitative methods may help uncover aspects of the program, such as diverse understandings of its goals, that lead to revision of the logic model or a new frame for understanding problems. These methods are less appropriate for examining program outcomes.

EXAMPLE: Data obtained from STD*MIS can tell evaluators the length of time it takes to complete field work assignments, but a complete assessment of field work requires that supervisors observe how staff perform their activities. The quantitative data coupled with the information gleaned during observation is needed to determine how well that component is working and what changes may be necessary to improve field results.

Generally speaking, it is best to have trained evaluation staff who can assess the findings and objectively analyze the data. However, the person who analyzes the data will need to work closely with program managers to assist in the interpretation of findings. The evaluation report should not only document raw findings, but should also analyze and synthesize them (Schechter, 1993).

5.) Justify Conclusions

Once the evidence has been analyzed and synthesized, conclusions can be made about program activities. These conclusions must be linked to the evidence. However, an apparent linkage does not by itself mean that the conclusions are correct or acceptable to the stakeholders. Understanding the results within the program context is essential; without it, the results are often meaningless. Identifying evidence regarding the program's performance is not all that is needed to draw evaluation conclusions.

Conclusions made about the program lead to recommendations for action. Further, recommendations for continuing, expanding, redesigning, curtailing, or terminating a program are not the same as determining a program's effectiveness. Recommendations about program activities should be aligned with areas that stakeholders can control or influence and be acceptable to them.

6.) Ensure use and share findings

The practical use of evaluation results and recommendations is not automatic. Too frequently evaluations are performed and it is assumed that appropriate action will occur. Program managers need to plan for and take deliberate action to ensure that findings are disseminated appropriately and used properly. Frequent feedback to and from all the stakeholders is essential for ensuring use. Managers may need to develop a system of follow-up to determine the who, how, and when of operationalizing the recommendations.

EXAMPLE: An evaluation of an STD prevention program in a major city showed that 60% of women were being screened for chlamydia. Subsequently, a recommendation was made that all three clinics should begin routine screening. Program managers need to develop a plan for ensuring that each clinician is aware of the new policy, given the opportunity to discuss and agree on the change, and trained in testing procedures, and that a mechanism is developed to systematically track the number of women tested. In addition, mechanisms for corrective action should be anticipated.
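One sketch of what such a tracking mechanism could look like is shown below; the clinic names, counts, and screening target are invented for illustration, and a real program would pull these figures from clinic records or its data system.

    # Hypothetical monthly counts of women seen and women screened for chlamydia,
    # by clinic; real figures would come from clinic records or a data system.
    visits = {"Clinic A": 420, "Clinic B": 310, "Clinic C": 275}
    screened = {"Clinic A": 395, "Clinic B": 180, "Clinic C": 260}

    SCREENING_TARGET = 0.90  # assumed target under the new routine screening policy

    for clinic, seen in visits.items():
        rate = screened[clinic] / seen
        status = "meets target" if rate >= SCREENING_TARGET else "corrective action needed"
        print(f"{clinic}: {screened[clinic]}/{seen} screened ({rate:.0%}) - {status}")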

Disseminate findings broadly and in a timely fashion.

The results of an evaluation should always be shared with stakeholders and, when possible, with other prevention and control programs. The results should be disseminated in a timely and unbiased fashion (MMWR, 1999). If the dissemination of the results is significantly delayed, either the situation may have changed or stakeholders may perceive that the evaluation is unimportant to them, management, and the evaluators. Results that are delivered in a biased fashion, such as a punitive one, will be ignored or possibly subverted.

National conferences are one possibility for widespread dissemination of evaluation findings. However, programs which discover significant findings that could have important effects on the control of STD should seek other more immediate ways of getting the information to other programs. As electronic communications become more and more commonplace, there will be many opportunities for widespread, rapid dissemination of findings.

With the results of the evaluation, a new process should be undertaken to refine the program, cease activities which do not work, and/or develop new interventions in areas of need. Evaluations are opportunities to improve programs and plan for the future and should be conducted as such.

Recommendations

  • Program managers should develop a written description of the program, including the involvement of stakeholders.
  • Programs are encouraged to develop logic models for goals, objectives, activities, and the targeted groups.
  • Evaluation results should be shared with stakeholders.
  • Evaluation results should be used for program improvement and further program planning.

 




Content Source: Division of STD Prevention, National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention