HRSA CAREAction Newsletter

PROVIDING HIV/AIDS CARE IN A CHANGING ENVIRONMENT — MARCH 2005

Evaluation: More Crucial Than Ever

In this age of rising HIV/AIDS prevalence, rising health care costs, and increasing need for HIV/AIDS services, evaluation is more important than ever to both funders and providers. Evaluation helps measure program performance: it can demonstrate accomplishments, identify areas needing improvement, help funders decide which activities to support, and help providers ensure that programs run efficiently and provide maximum benefit to their clients.

Within the past 2 decades, health and human services providers have increasingly relied on evaluation to guide program planning and funding. Organizations that fund service delivery, such as the Health Resources and Services Administration (HRSA), have been among the leaders in this movement. This issue of HRSA CAREAction provides an overview of evaluation, how successful evaluation programs are structured, and how the evaluation process can benefit the HIV/AIDS services community.

Getting Started: Goals, Objectives and More

The first step to good evaluation—and to providing services successfully—is to articulate specific goals, objectives, outputs, and activities. Each should be measurable, related to one another, and related to a specific time frame (e.g., month, quarter, or year).

Goals

Goals are the long-term results expected from program activities. Goals are broad and are generally measured over long periods—often 3 years or more.

Example: A care provider’s goal might be “to decrease the AIDS-related death rate in our community during the next 3 years.”

Objectives

The intermediate changes needed for achieving goals are called objectives, or outcomes. Objectives should be categorized as long-term or short-term. A single goal may have as few as one or as many as five or six objectives.

Long-term objectives. Generally, two or three long-term objectives are needed for any given goal.

Example: A long-term objective related to the goal of decreasing the rate of AIDS deaths could be “to increase the number of people living with HIV/AIDS who are receiving medications that prevent or slow disease progression.”

Short-term objectives. Each long-term objective must be supported by related short-term objectives. Short-term objectives should be narrowly defined and should generally be limited to no more than two or three per long-term objective. Otherwise, conducting an evaluation can become unnecessarily complicated.

Example: Short-term objectives related to the long-term objective of increasing the number of people receiving HIV medications could include “increasing the number of seropositive people who know their individual serostatus and enroll in treatment and support programs” or “decreasing the number of people who drop out after enrolling in care.”

Outputs

To ensure that progress toward objectives can be measured, it is important for each short-term objective to be associated with one or more specific outputs. Outputs are the results of specific activities and are reported in numeric values.

Examples: Outputs might include the number of outreach contacts made during a quarter, the number of clients who receive HIV counseling and testing, and the number of HIV-positive persons enrolled in care.
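
Because outputs are numeric, they can be tallied directly from routine activity records. The sketch below, in Python, shows one way a provider might count quarterly outputs from a simple outreach log; the log fields and figures are hypothetical and not drawn from any HRSA reporting format.

```python
from datetime import date

# Hypothetical outreach log: one record per client contact.
outreach_log = [
    {"date": date(2005, 1, 12), "activity": "street outreach", "referred_to_ctr": True},
    {"date": date(2005, 1, 19), "activity": "community event", "referred_to_ctr": False},
    {"date": date(2005, 2, 3),  "activity": "street outreach", "referred_to_ctr": True},
]

def quarterly_outputs(log, year, quarter):
    """Tally numeric outputs for one calendar quarter from the outreach log."""
    months = range(3 * (quarter - 1) + 1, 3 * quarter + 1)
    records = [r for r in log if r["date"].year == year and r["date"].month in months]
    return {
        "outreach_contacts": len(records),
        "referrals_to_ctr": sum(r["referred_to_ctr"] for r in records),
    }

print(quarterly_outputs(outreach_log, 2005, 1))
# {'outreach_contacts': 3, 'referrals_to_ctr': 2}
```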

Activities

Activities are the things that service providers actually do. Specific activities come into play once objectives are determined. “Case management,” “clinical care,” and “community outreach” are common activities for HIV services. Activities may be grouped together under general categories, or activity sets. Each activity set should be accompanied by a short list of specific action steps. Action steps should be written out in detail using a workplan chart (see Figure 1).

Figure 1. Sample Workplan Chart
Goal: To decrease the AIDS-related death rate in our community over the next 3 years
Activity set: Community outreach to identify high-risk and seropositive persons

Action Step | Setting(s) | Responsible Person(s) | Time Frame | Documented in
Identify outreach locations | Agency office and community meetings | Outreach Coordinator and consumer advisory board (CAB) | First month (of project) | Report to Principal Investigator (PI)
Identify materials to be distributed | Agency office and Internet | Outreach Coordinator and CAB | First month | Report to PI
Develop outreach protocol | Agency office | Outreach Coordinator, CAB, and Program Evaluator | Second month | Draft submitted to PI
Develop data collection instruments | Agency office | Outreach Coordinator and Program Evaluator | Second month | Drafts submitted to PI
Recruit, hire, and train outreach staff | Agency office and community events | Outreach Coordinator and PI | Third month | Agency personnel records
Deploy staff | Community outreach locations | Outreach Coordinator | Fourth month through 2 months before the end of the project | Outreach logs and records
Collect outreach data and transfer to evaluation staff | Community outreach locations and agency office | Outreach Coordinator and staff | All months during which outreach is conducted | Evaluation activity records and database files

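A workplan chart such as Figure 1 can also be kept in a simple structured form so staff can filter action steps by month or by responsible person. The following is a minimal Python sketch of that idea; the class and field names are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class ActionStep:
    description: str
    setting: str
    responsible: str
    time_frame: str      # e.g., "First month (of project)"
    documented_in: str

@dataclass
class ActivitySet:
    name: str
    steps: list

# Two action steps from the sample workplan in Figure 1.
outreach = ActivitySet(
    name="Community outreach to identify high-risk and seropositive persons",
    steps=[
        ActionStep("Identify outreach locations",
                   "Agency office and community meetings",
                   "Outreach Coordinator and consumer advisory board (CAB)",
                   "First month (of project)",
                   "Report to Principal Investigator (PI)"),
        ActionStep("Develop outreach protocol",
                   "Agency office",
                   "Outreach Coordinator, CAB, and Program Evaluator",
                   "Second month",
                   "Draft submitted to PI"),
    ],
)

# List every step the Outreach Coordinator is responsible for.
for step in outreach.steps:
    if "Outreach Coordinator" in step.responsible:
        print(step.time_frame, "-", step.description)
```
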
A tool that can help ensure that goals, objectives, outputs, and activity sets are logically and coherently related is a logic model (Figure 2). Logic models align resources (often called “inputs”) with activity sets, outputs, short- and long-term outcomes, and goals. Originally developed in the commercial business sector, logic models are increasingly being used in service delivery. Both the Centers for Disease Control and Prevention (CDC) and HRSA recommend that service providers use logic models to plan programs and evaluations. More information on logic models can be found at www.cdc.gov/eval/resources.htm#logic%20model.

Before planning any evaluation, it is vital to spell out a program’s goals and objectives. Clear goals and objectives will make it much easier to decide what to evaluate and how to gather the information (data) needed for the evaluation. It will also help providers avoid collecting unnecessary information. Moreover, refinement of the goals and objectives will help determine how to conduct the evaluation so that it coordinates with and does not intrude upon the services being provided.

Figure 2. Example of a Logic Model
Goal: To decrease the AIDS-related death rate in our community over the next 3 years

Activity set 1: Establish a health care collaborative to increase counseling, testing, and referral (CTR) and enrollment in care; train provider staff
Resources/Inputs: Agency staff; staff of six other HIV and social service providers (i.e., the “Care Collaborative”)
Immediate Outputs: Six agencies are engaged in and oriented to participating in the Care Collaborative; a minimum of 12 staff are trained in serving the target population in a culturally competent manner; a clearly identified network with responsibilities and sufficient links and capacity to provide care is established
Short-Term Objective: Increased availability and accessibility of culturally competent HIV CTR and HIV care
Long-Term Objective: Increased perception among the target population of the availability and cultural competence of HIV CTR and care services

Activity set 2: Community outreach to identify high-risk persons (HRPs) and seropositive persons, including presence at community events, a monthly newsletter, a Web site, street outreach in parks, and peer outreach in Internet chat rooms
Resources/Inputs: Agency staff and Care Collaborative
Immediate Outputs: A minimum of 200 outreach contacts are made in clusters of 3 to 4 persons each; a minimum of 10 minutes of outreach occurs per contact cluster; a minimum of 10 HIV-infected persons not in care are linked or relinked to care; a minimum of 200 HRPs are identified and linked to CTR
Short-Term Objective: Increased awareness in the community of the purpose and goals of the outreach initiative and the Care Collaborative and their services; increased use of the Web site; increased enrollment of HIV-positive persons not in care into care systems; increased use by HRPs of CTR services
Long-Term Objective: Increased linking into care services of HIV-positive persons who either were never enrolled in care or have been lost to care

Activity set 3: Co-locate CTR and referral services within the agency office; enhance links for services via the collaborative
Resources/Inputs: Agency staff and Care Collaborative
Immediate Outputs: A minimum of 100 HRPs are engaged in pretesting; a minimum of 50 HRPs have an HIV test; a minimum of 5 HRPs test seropositive; all HIV-positive persons are enrolled in case management services
Short-Term Objective: Increased knowledge of individual serostatus; increased early identification of HIV-positive persons and enrollment into systems of care at earlier disease stages
Long-Term Objective: Increased number of high-risk or HIV-positive persons who use local HIV prevention and/or care services

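One way to keep a logic model like Figure 2 internally consistent is to store each activity set with its inputs, outputs, and objectives and check that nothing is left blank. The Python sketch below illustrates the idea with one row adapted from Figure 2; the dictionary keys and the check_logic_model helper are assumptions made for the example.

```python
# One activity set from the logic model in Figure 2, stored as a plain dictionary.
logic_model_row = {
    "goal": "Decrease the AIDS-related death rate in our community over the next 3 years",
    "activity_set": "Co-locate CTR and referral services within agency office",
    "inputs": ["Agency staff", "Care Collaborative"],
    "outputs": [
        "A minimum of 100 HRPs are engaged in pretesting",
        "A minimum of 50 HRPs have an HIV test",
    ],
    "short_term_objectives": ["Increased knowledge of individual serostatus"],
    "long_term_objectives": [
        "Increased number of high-risk or HIV-positive persons who use local "
        "HIV prevention and/or care services"
    ],
}

def check_logic_model(row):
    """Flag any logic-model component that was left empty."""
    required = ["goal", "activity_set", "inputs", "outputs",
                "short_term_objectives", "long_term_objectives"]
    return [key for key in required if not row.get(key)]

missing = check_logic_model(logic_model_row)
print("Missing components:", missing or "none")
```
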
The Evaluation Process

Evaluation is a process of looking back in order to look forward. Once goals, objectives, outputs, and activities have been crafted, the next step is to ask: What do we need or want to know as a result of what we’ve planned? The responses to this question will generate the evaluation questions, which form the basis of the evaluation.

Developing the evaluation questions is a fundamental part of the evaluation planning process.

Evaluation questions should be fairly narrow and related to the goals, outcomes, outputs, and activity sets. Major stakeholders should be consulted in developing the questions. Examples include “Where do we find clients who are lost to care, so we can re-engage them?” and “How well are clients adhering to their medication regimens?” (see Figure 3).

Types of Evaluation

Evaluation falls into four categories: resource, process, outcome, and impact. Each type of evaluation is tailored to specific aspects of a program plan and can be accomplished using either quantitative or qualitative methods. Quantitative evaluation, in which information is defined according to numeric values, is perhaps most familiar. It requires the assistance of someone who is knowledgeable about statistical analysis. In qualitative evaluation, information that is not readily quantifiable—such as individual client stories—is gathered and studied. Most successful evaluations actually use both methods in what is called a mixed-methods approach, which gives the most complete picture of what is happening in a program.

After developing the evaluation questions, consider what category of evaluation is needed to answer each question and what types of data could be gathered in each area to answer the questions. It is especially important to assess how the information can be gathered so as not to interfere with helping clients. If evaluation procedures interfere with serving clients or become too much of a burden for staff, the evaluation will not be faithfully implemented and will likely become worthless.

Resource Evaluation

Resource evaluation compares the planned use of resources (e.g., staff time, staff skills, facilities, equipment, supplies, and money) with what actually happened. It seeks to answer questions such as “Did we use our staff time, facilities, and funds as planned?” and “If not, why not?”

Common techniques for resource evaluations include budget reviews and audits, reviews of staff work records, personnel evaluations, equipment and supply reviews and inventories, and facility reviews. Many of these reviews are commonly done on a regular basis within public and private nonprofit organizations.
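
Much of resource evaluation comes down to comparing planned figures with actual ones. As a rough illustration, the Python sketch below compares a planned budget with actual spending; the categories and amounts are invented for the example.

```python
# Hypothetical planned versus actual resource use for one quarter.
planned = {"staff_time_hours": 480, "outreach_supplies_usd": 2000, "travel_usd": 600}
actual  = {"staff_time_hours": 512, "outreach_supplies_usd": 1650, "travel_usd": 720}

def resource_variance(planned, actual):
    """Return the difference (actual minus planned) for each resource category."""
    return {category: actual[category] - planned[category] for category in planned}

for category, diff in resource_variance(planned, actual).items():
    status = "over plan" if diff > 0 else "under plan" if diff < 0 else "on plan"
    print(f"{category}: {diff:+} ({status})")
```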

Process Evaluation

Process evaluation compares planned activities and outputs with what actually happened, focusing on three important issues: program reach, program fidelity, and quality assurance. Program reach examines whether the program engaged the intended target population (e.g., “Were Latina women and their partners served, or did a significant number of clients come from other demographic groups?”). Program fidelity focuses on whether the activities occurred as planned (e.g., “Did all newly identified HIV-infected clients receive a case management intake within 48 hours of being identified?” “Did the client risk-reduction groups follow the curriculum?”). Quality assurance looks at whether the activities conducted were consistent with professional standards of conduct and “best practices” (e.g., “Did the counseling and testing services offered follow the CDC guidelines for such services?”).

Common process evaluation techniques build on information that is routinely available, such as review of program and service records (e.g., outreach logs), client-level records (e.g., patient charts), and protocols for service delivery (e.g., case management manuals); direct observation of services as they are being delivered (e.g., periodic monitoring of peer counseling sessions); client surveys (e.g., brief customer satisfaction forms); and staff focus groups. All of these methods can be used to collect quantitative and qualitative information. As with resource evaluation, the basic questions are, Did we conduct our activities as planned? Did we achieve the outputs we anticipated? If not, why not? What lessons about our processes can we carry into the future or share with others? How can we improve our services?
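
Program reach, for instance, can be checked by comparing the demographic mix of clients actually served with the intended target population. The Python sketch below shows one way to do that from client-level records; the field names and the 75 percent threshold are assumptions for illustration, not a HRSA standard.

```python
# Hypothetical client-level records drawn from intake forms.
clients = [
    {"id": 1, "group": "Latina women"},
    {"id": 2, "group": "Latina women"},
    {"id": 3, "group": "other"},
    {"id": 4, "group": "Latina women"},
]

def program_reach(clients, target_group, threshold=0.75):
    """Report the share of clients from the target group and whether it meets the threshold."""
    if not clients:
        return 0.0, False
    share = sum(c["group"] == target_group for c in clients) / len(clients)
    return share, share >= threshold

share, met = program_reach(clients, "Latina women")
print(f"Target-group share: {share:.0%}; reach goal met: {met}")
```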

Outcome Evaluation

Outcome evaluation seeks to determine how program activities affected the intended outcomes. For example, program activities may include intensive case management and peer counseling on adherence, which in turn are intended to lead to a short-term outcome of greater adherence to antiretroviral (ARV) drug regimens in furtherance of the goal of decreasing the AIDS-related death rate. The process evaluation may show that all the activities intended to achieve this outcome were conducted as planned. But did the activities actually lead to greater adherence? The outcome evaluation may involve a review of client medical charts, interviews with clinical providers involved in ARV drug therapy, and surveys of clients themselves about how well they were able to accomplish their treatment goals. In this scenario, it would be important to examine whether any other activities or factors (e.g., a change in dosing procedures that made it easier to take medications and adhere to regimens) may have increased adherence. Only then could one judge whether the program made a difference.
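
In practice, that comparison often reduces to a before-and-after measure of adherence drawn from chart review or client self-report. The Python sketch below shows only the arithmetic; the client records and the 90 percent adherence cut-off are hypothetical.

```python
# Hypothetical chart-review data: share of prescribed doses each client took,
# before and after the case management and peer counseling activities.
before = {"client_a": 0.62, "client_b": 0.88, "client_c": 0.71}
after  = {"client_a": 0.93, "client_b": 0.95, "client_c": 0.84}

def adherent_share(doses_taken, cutoff=0.90):
    """Share of clients at or above the adherence cut-off."""
    return sum(v >= cutoff for v in doses_taken.values()) / len(doses_taken)

print(f"Adherent before: {adherent_share(before):.0%}")  # 0%
print(f"Adherent after:  {adherent_share(after):.0%}")   # 67%
```

Even a change this large only suggests that the program contributed; as noted above, other factors, such as a change in dosing procedures, would still have to be ruled out.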

Once evaluation of short-term outcomes takes place, long-term outcomes can be examined. For example, if the long-term goal is to achieve a decrease in the AIDS-related death rate through increased adherence to ARV therapies, data sources could include the agency’s client records as well as State health department birth and death records.

Note, however, that the further one gets from the realm and scope of an agency’s actual activities, the less one can directly connect them with any outcomes. Many other factors can influence observed changes. In the example above, a change in the mixture of medications prescribed could have lowered the death rate, or another HIV service provider may have undertaken a different initiative to address the same issue. In drawing conclusions, all possibilities must be considered. Frequently, the best one can do is to state that what an agency did contributed to a change but is not completely responsible for it.

Impact Evaluation

Impact evaluation measures how well a program achieved its goals. In truth, few service providers handle this type of evaluation directly because so many factors are involved. Although agency progress and success toward a broad goal can be partially linked, it is hard to attribute success solely to a particular service or set of services an agency may offer. For example, a decline in the rate of AIDS-related deaths in a community during a 3-year period may have occurred for many reasons, just one of which may be an agency’s services. Other reasons may include a population increase in the community, which lowers the death rate statistically regardless of what any service provider may have done.

Usually a mixture of extensive epidemiologic and program data must be examined when conducting an impact evaluation. As a result, public health agencies and large funding entities generally conduct impact evaluations. Small service providers that are interested in impact evaluation generally link their evaluation efforts with those of the State or local health department. Yet, as with long-term outcomes evaluation, the best an agency will probably be able to do is conclude whether what it did contributed to a given impact and (perhaps) the degree to which it contributed.

Putting an Evaluation Together

One of the biggest challenges in evaluating programs is keeping track of all the evaluation components. An evaluation plan chart (Figure 3) can help ensure that each staff person knows his or her role in conducting the evaluation and that important evaluation activities are not omitted.

Figure 3. Sample Evaluation Plan Chart
Project goal: To decrease AIDS-related deaths during the next 3 years

Evaluation question 1: Where do we find lost-to-care (LTC) clients, so we can re-engage them in care?
Type of Evaluation: Process
Relevant Data: LTC clients found during outreach; informal feedback from clients
Data Source: Outreach worker logs; focus group
Who Collects: Outreach workers; evaluation staff
Who Analyzes: Outreach coordinator; director of evaluation
Who Reports: Director of evaluation
Who Gets Reports: Program and services director

Evaluation question 2: How well are clients adhering to their medication regimens?
Type of Evaluation: Outcome
Relevant Data: Reports from clients during clinical visits; CD4, T-cell, and other physiological indicators
Data Source: Clinical visit records; client medical records
Who Collects: Nurse case manager
Who Analyzes: Evaluation staff
Who Reports: Director of evaluation
Who Gets Reports: Medical director

To use an evaluation plan chart, the staff responsible for planning the evaluation should first list all the relevant questions the evaluation is intended to answer. Next, they should identify as many types of evaluation as possible that could answer each question, then identify all types of relevant data that either are already collected or could be collected without unreasonably impeding services to clients. For each data item, it should be specified who will collect the data, who will analyze the information, who will report the results of the analysis, and who will receive the report. It may be desirable to identify what the people to whom the report is distributed will do with the information (e.g., incorporate it into a report to funders or prepare a presentation for an upcoming meeting of service providers).
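
Because the chart assigns several roles for each question, a simple completeness check can catch rows where a collector, analyst, or report recipient was never named. The Python sketch below applies such a check to one row patterned on Figure 3; the field names are illustrative assumptions.

```python
# One row of an evaluation plan chart, patterned on Figure 3.
plan_rows = [
    {
        "question": "How well are clients adhering to their medication regimens?",
        "evaluation_type": "Outcome",
        "data_sources": ["Clinical visit records", "Client medical records"],
        "who_collects": "Nurse case manager",
        "who_analyzes": "Evaluation staff",
        "who_reports": "Director of evaluation",
        "who_gets_reports": "",  # left blank by mistake
    },
]

REQUIRED_FIELDS = ["question", "evaluation_type", "data_sources",
                   "who_collects", "who_analyzes", "who_reports", "who_gets_reports"]

def incomplete_rows(rows):
    """Return (question, missing fields) for any row with unassigned responsibilities."""
    problems = []
    for row in rows:
        missing = [f for f in REQUIRED_FIELDS if not row.get(f)]
        if missing:
            problems.append((row["question"], missing))
    return problems

for question, missing in incomplete_rows(plan_rows):
    print(f"'{question}' is missing: {', '.join(missing)}")
```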

As the evaluation plan comes together, it will become clear that certain data cannot be collected because doing so is too difficult, the answers will not be reliable, or gathering the information will be too intrusive for clients. Those questions must be disregarded. In addition, some questions may lack enough data items to be answered meaningfully, or the data may be unreliable. A program evaluation specialist can help determine which data should be collected. Finally, the number of questions may simply be excessive in light of staff resources. In all these cases, some questions will have to be deleted from the evaluation plan. Although this process can be difficult, plenty of important questions generally remain. Also, discarded evaluation questions can be revisited when more resources are available or circumstances have changed.

Don’t Go It Alone

Evaluation is a science, and it requires expertise. Individuals conducting evaluation should have graduate-level training and, to maintain objectivity, should be autonomous from other program staff.

Many large organizations meet the evaluation staffing challenge by hiring a director of evaluation and one or more additional evaluation staff members. Small organizations often partner with a local college or university or hire a part-time evaluation consultant. Either way, it is important to plan for evaluation in the program budget. In fact, many funders are reluctant to make awards to organizations that do not include evaluation in their funding proposals.

In the sample evaluation plan chart (see Figure 3), the people analyzing and reporting evaluation data differ from those implementing the evaluation. This approach is beneficial for three reasons. First, the people analyzing and reporting information must be objective and able to report positive as well as disappointing news. It is difficult to maintain this objectivity when in the midst of providing services. Second, service delivery staff often have heavy demands on their time and find it difficult to step away from those demands to digest evaluation information. Third, the set of skills required to analyze and report data is often different from the skills required to provide services to clients.

Standards for a Good Evaluation

In planning an evaluation, organizations should refer to the Program Evaluation Standards, issued by the Joint Committee on Standards for Educational Evaluation in 1994 (see the Resources section). These standards help organizations decide among evaluation options and are particularly helpful in avoiding evaluations that are not useful or not ethical. The four major categories of standards are utility (i.e., the information needs of evaluation users are satisfied), feasibility (i.e., the evaluation is viable and pragmatic), propriety (i.e., the evaluation is conducted ethically and does not exploit clients or human subjects), and accuracy (i.e., the evaluation findings are technically correct).

Implementing the Evaluation

Once the evaluation plan is defined and staff roles are assigned, the only task left is to implement it. At this point, all staff involved in the evaluation gather the information they have been assigned and pass it on to the staff who are responsible for analyzing the data. They, in turn, analyze the data and draw conclusions. Each evaluation should identify specific lessons that can be disseminated internally as well as externally to funders and other service providers. One of the most important uses of an evaluation is to help an organization plan for the future, a process that involves both improving services and securing funding.

Any evaluation plan may need to be adjusted as it is implemented; doing so is acceptable, but the changes should not be too dramatic. Major changes in one area of a plan could negatively affect another part of the evaluation and invalidate the results. Rather than make extensive changes mid-course—or implement a series of small changes that add up to a significant change—it may be best to take a step back and rework the entire evaluation plan.

Once they get into the habit of simultaneous program planning and evaluation, many people find that evaluation is a lot easier than they thought. They also see important benefits to the organization, such as clearer evidence of program accomplishments, earlier identification of areas needing improvement, and stronger proposals to funders.

The important thing is to get started—and the HIV/AIDS Bureau is available to help. At a time when hundreds of thousands of people living with HIV/AIDS in the United States are still undiagnosed, not receiving appropriate care, or both, the HIV/AIDS services community is searching for increasingly effective mechanisms for linking people with appropriate services and ensuring that those services are as productive as possible. To this quest, evaluation has much to offer. By using just a small part of available resources, providers and funders alike can identify what’s working—and what could be working better. And in doing so, they can make sure that their resources are being used where they can do the most good.

John Hannay


Resources

Current Ryan White CARE Act providers may obtain technical assistance on evaluation issues by contacting their Project Officer at the HRSA HIV/AIDS Bureau. Agencies that subcontract through Titles I and II should do so through their local Title I and II granting agencies.

Acuff C, et al., eds. Evaluating services and programs. Chapter 14 in Mental Health Care for People Living With or Affected by HIV/AIDS: A Practical Guide. Washington, DC: National Mental Health Information Center; 1999.

Administration for Children and Families, U.S. Department of Health and Human Services. The Program Manager’s Guide to Evaluation. Available at: www.acf.hhs.gov/programs/core/pubs_reports/prog_mgr.html.

Centers for Disease Control and Prevention (CDC) Evaluation Working Group. Framework for program evaluation in public health. MMWR Morbid Mortal Wkly Rep. 1999;48(RR11):1-40. Available at: www.cdc.gov/mmwr/preview/mmwrhtml/rr4811a1.htm.

CDC Evaluation Working Group. Additional resources. Available at: www.cdc.gov/eval/resources.htm.

CDC Public Health Training Network. Practical evaluation of public health programs. Five-hour distance learning course. 2000. Available at: www.cdc.gov/phtn or 1-800-41-TRAIN. The workbook may be viewed at www.cdc.gov/eval/workbook.pdf.

CDC, National Center for HIV, STD and TB Prevention, Division of HIV/AIDS Prevention. Evaluation Web page. Available at: www.cdc.gov/hiv/eval.htm.

The Community Toolbox. Models for Promoting Community Health and Development. Interactive Web site. 2004. Available at: www.ctb.lsi.ukans.edu/tools/c30/ProgEval.htm.

Joint Committee on Standards for Educational Evaluation. Program Evaluation Standards. Kalamazoo, MI: Western Michigan University; 1994.

Milstein B, Wetterhall S, et al. A framework featuring steps and standards for program evaluation. Health Promotion Practice. 2000;1(3):221-228.

Outcome Measurement Resource Network. Measuring Program Outcomes: A Practical Approach (Item No. 0989). Washington, DC: United Way of America; 2003. Information on the Outcome Measurement Resource Network is available at: http://national.unitedway.org/outcomes/.

