Pink Book - Making Health Communication Programs Work



Stage 4: Assessing Effectiveness and Making Refinements

Questions to Ask and Answer
Why Outcome Evaluation Is Important
Revising the Outcome Evaluation Plan
Conducting Outcome Evaluation
Refining Your Health Communication Program
Common Myths and Misconceptions About Evaluation
Selected Readings

Questions to Ask and Answer

  • How can we use outcome evaluation to assess the effectiveness of our program?
  • How do we decide what outcome evaluation methods to use?
  • How should we use our evaluation results?
  • How can we determine to what degree we have achieved our communication objectives?
  • How can we make our communication program more effective?

In Stage 3, you decided how to use process evaluation to monitor and adjust your communication activities to meet objectives. In Stage 4, you will use the outcome evaluation plan developed in Stage 1 to identify what changes (e.g., in knowledge, attitudes, or behavior) did or did not occur as a result of the program. Together, the process and outcome evaluations will tell you how the program is functioning and why. (If you combine information from the two types of evaluation, be sure that you focus on the same aspects of the program, even though you look at them from different perspectives.) This section will help you revise your plans and conduct outcome evaluation. You should begin planning assessment activities either before or soon after you launch the program.

Why Outcome Evaluation Is Important

Outcome evaluation is important because it shows how well the program has met its communication objectives and points to what you might change or improve to make it more effective. Knowing how well the program has met its objectives is vital for:

  • Justifying the program to management
  • Providing evidence of success or the need for additional resources
  • Increasing organizational understanding of and support for health communication
  • Encouraging ongoing cooperative ventures with other organizations

Revising the Outcome Evaluation Plan

During Stage 1, you identified evaluation methods and drafted an outcome evaluation plan. At that time, you should have collected any necessary baseline data. The first step in Stage 4 is to review that plan to ensure it still fits your program. A number of factors will influence how your communication program’s outcomes should be evaluated, including the type of communication program, the communication objectives, budget, and timing. The outcome evaluation needs to capture intermediate outcomes and to measure the outcomes specified in the communication objectives. Doing so can allow you to show progress toward the objectives even if the objectives are not met.

Examples of Effectiveness Measures for Health Communication Programs

Knowledge
A public survey conducted before and after NCI’s 5 A Day campaign found that knowledge of the message (a person should eat 5 or more servings of fruits and vegetables each day for good health) increased by 27 percentage points.

Attitude
In 1988, the U.S. Surgeon General sent a pamphlet designed to influence attitudes on AIDS to every U.S. household. An evaluation conducted in Connecticut showed no change in attitude between residents who read the pamphlet and those who did not.

Behavior
The Pawtucket Heart Health Program evaluated a weight-loss awareness program conducted at worksites. More than 600 people enrolled, and they lost an average of 3.5 pounds each compared with their preprogram weight.

Consider the following questions to assess the Stage 1 outcome evaluation plan and to be sure the evaluation will give you the information you need:

  • What are the communication objectives?
    What should the members of the intended audience think, feel, or do as a result of the health communication plan in contrast to what they thought, felt, or did before? How can these changes be measured?
  • How do you expect change to occur?
    Will it be slow or rapid? What measurable intermediate outcomes (steps toward the desired behavior) are likely to take place before the behavior change can occur? The behavior change map you created in Stage 1 should provide the answers to these questions.
  • How long will the program last? What kinds of changes can we expect in that time period (e.g., attitudinal, awareness, behavior, policy changes)? Sometimes, programs will not be in place long enough for objectives to be met when outcomes are measured (e.g., outcomes measured yearly over a 5-year program). To help ensure that you identify important indicators of change, decide which changes could reasonably occur from year to year.
  • Which outcome evaluation methods can capture the scope of the change that is likely to occur?
    Many outcome evaluation measures are relatively crude, which means that a large percentage of the intended audience (sometimes an unrealistically large percentage) must make a change before it can be measured. If this is the case, the evaluation is said to "lack statistical power." For example, a public survey of 1,000 people has a margin of error of about 3 percent. In other words, if 50 percent of the survey respondents said they engage in a particular behavior, in all likelihood somewhere between 47 percent and 53 percent of the population represented by the respondents actually engages in the behavior. Therefore, you can conclude that a statistically significant change has occurred only if there is a change of 5 or more percentage points. It may be unreasonable to expect such a large change, and budgetary constraints may force you to measure outcomes by surveying the general population when your intended audience is only a small proportion of the population. (A brief calculation sketch illustrating this arithmetic follows this list.)
  • Which aspects of the outcome evaluation plan best fit with your organization’s priorities?
    Only rarely does a communication program have adequate resources to evaluate all activities. You may have to illustrate your program’s contribution to organizational priorities to ensure continued funding. If this is the case, it may be wise to evaluate those aspects most likely to contribute to the organization’s mission (assuming that those are also the ones most likely to result in measurable changes).
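
To make the margin-of-error arithmetic described above concrete, the following short Python sketch computes the margin of error for a single survey and the smallest pre/post change that two independent surveys of the same size could call statistically significant. It assumes a 95 percent confidence level and simple random sampling; the sample size and proportion are illustrative, not drawn from an actual survey.

```python
import math

Z_95 = 1.96  # critical value for a 95 percent confidence level

def margin_of_error(p, n):
    """Margin of error for a single proportion from a simple random sample."""
    return Z_95 * math.sqrt(p * (1 - p) / n)

def minimum_detectable_change(p, n_pre, n_post):
    """Approximate smallest pre/post difference that two independent surveys
    of sizes n_pre and n_post could declare statistically significant."""
    se_diff = math.sqrt(p * (1 - p) / n_pre + p * (1 - p) / n_post)
    return Z_95 * se_diff

# Illustrative numbers: 1,000 respondents, 50 percent reporting the behavior.
p, n = 0.50, 1000
print(f"Margin of error: +/- {margin_of_error(p, n):.1%}")                     # about 3.1%
print(f"Minimum detectable change: {minimum_detectable_change(p, n, n):.1%}")  # about 4.4 points
```

With 1,000 respondents per survey, the sketch shows why a change of roughly 4 to 5 percentage points is needed before it can be distinguished from sampling error.
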
Quantitative Versus Qualitative Evaluation

Quantitative research is used to gather objective information by asking a large number of people a set of identical questions. Results are expressed in numerical terms (e.g., 35 percent are aware of X and 65 percent are not). If the respondents are a representative random sample, quantitative data can be used to draw conclusions about an intended audience as a whole. Quantitative research is useful for measuring the extent to which a knowledge set, attitude, or behavior is prevalent in an intended audience.

Qualitative research is used to gather reactions and impressions from small numbers of intended audience members, usually by engaging them in discussion. Results are subjective and are not described numerically or used to make generalizations about the intended audience. Qualitative research is useful for understanding why people react the way they do and for understanding additional ideas, issues, and concerns.

Quantitative research methods are usually used for outcome evaluation because they provide the numerical data necessary to assess progress toward objectives. When evaluating outcomes, qualitative research methods are used to help interpret quantitative data and shed light on why particular outcomes were (or were not) achieved. See the Communication Research Methods section for detailed explanations of quantitative and qualitative research methods and the circumstances under which you should use each.

Conducting Outcome Evaluation

Conduct outcome evaluation by following these steps:

  1. Determine what information the evaluation must provide.
  2. Define the data to collect.
  3. Decide on data collection methods.
  4. Develop and pretest data collection instruments.
  5. Collect data.
  6. Process data.
  7. Analyze data to answer the evaluation questions.
  8. Write an evaluation report.
  9. Disseminate the evaluation report.
Evaluation Constraints

Every program planner faces limitations when conducting an outcome evaluation. You may need to adjust your evaluation to accommodate constraints such as the following:

  • Limited funds
  • Limited staff time or expertise
  • Length of time allotted to the program and its evaluation
  • Organizational restrictions on hiring consultants or contractors
  • Policies that limit your ability to collect information from the public
  • Difficulty in defining the program’s objectives or in establishing consensus on them
  • Difficulty in isolating program effects from other influences on the intended audience in "real world" situations
  • Management perceptions of the evaluation’s value

These constraints may make the ideal evaluation impossible. If you must compromise your evaluation’s design, data collection, or analysis to fit limitations, decide whether the compromises will make the evaluation results invalid. If your program faces severe constraints, do a small-scale evaluation well rather than a large-scale evaluation poorly. Realize that it is not sensible to conduct an evaluation if it is not powerful enough to detect a statistically significant change.

See a description of each step below.

1. Determine What Information the Evaluation Must Provide

An easy way to do this is to think about the decisions you will make based on the evaluation report. What questions do you need to answer to make those decisions?

2. Define the Data You Need to Collect

Determine what you can and should measure to assess progress on meeting objectives. Use the following questions as a guide:

  • Did knowledge of the issue increase among the intended audience (e.g., understanding how to choose foods low in fat or high in fiber, knowing reasons not to smoke)?
  • Did behavioral intentions of the intended audience change (e.g., intending to use a peer pressure resistance skill, intending to buy more vegetables)?
  • Did intended audience members take steps leading to the behavior change (e.g., purchasing a sunscreen, calling for health information, signing up for an exercise class)?
  • Did awareness of the campaign message, name, or logo increase among intended audience members?
  • Were policies initiated or other institutional actions taken (e.g., putting healthy snacks in vending machines, improving school nutrition curricula)?

3. Decide on Data Collection Methods

The sidebar Outcome Evaluation Designs describes some common outcome evaluation designs, the situations in which they are appropriate, and their major limitations. (See the Communication Research Methods section for more information.) Complex, multifaceted programs often employ a range of methods so that each activity is evaluated appropriately. For example, a program that includes a mass media component to reach parents and a school-based component to reach students might use independent cross-sectional studies to evaluate the mass media component and a randomized or quasi-experimental design to evaluate the school-based component.
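
As an illustration of how an independent cross-sectional design might be analyzed, the sketch below compares campaign awareness measured in two separate random samples, one taken before and one after a mass media effort. The counts are hypothetical, and the calculation is a standard two-proportion z-test; treat it as a minimal sketch, not a full analysis plan.

```python
import math
from scipy.stats import norm

def two_proportion_ztest(successes_pre, n_pre, successes_post, n_post):
    """Two-sided z-test for a difference between proportions measured in
    two independent cross-sectional samples (pre- and postintervention)."""
    p_pre = successes_pre / n_pre
    p_post = successes_post / n_post
    p_pool = (successes_pre + successes_post) / (n_pre + n_post)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_pre + 1 / n_post))
    z = (p_post - p_pre) / se
    p_value = 2 * norm.sf(abs(z))
    return p_post - p_pre, z, p_value

# Hypothetical awareness counts: 310 of 1,000 aware before, 380 of 1,000 aware after.
change, z, p = two_proportion_ztest(310, 1000, 380, 1000)
print(f"Change in awareness: {change:+.1%}, z = {z:.2f}, p = {p:.4f}")
```
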

The following limitations can make evaluation of your communication program difficult:

  • Lack of measurement precision (e.g., available data collection mechanisms cannot adequately capture change or cannot capture small changes). Population surveys may not be able to identify the small number of people making a change. Self-reported measures of behavior change may not be accurate.
  • Inability to conclusively establish that the communication activity caused the observed effect.
    Experimental designs, in which people are randomly assigned to either receive an intervention or not, allow you to assume that your program causes the only differences observed between the group exposed to the program and the control group. Outcome evaluations with experimental designs that run more than a few weeks, however, often wind up with contaminated control groups, either because people in the group receiving the intervention move to the control group, or because people in the control group receive messages from another source that are the same as or similar to those from your program.

The more complex your evaluation design is, the more you will need expert assistance to conduct your evaluation and interpret your results. The expert can also help you write questions that produce objective results. (It’s easy to develop questions that inadvertently produce overly positive results.) If you do not have an evaluator on staff, seek help to decide what type of evaluation will best serve your program. Sources include university faculty and graduate students (for data collection and analysis), local businesses (for staff and computer time), state and local health agencies, and consultants and organizations with evaluation expertise.

Outcome Evaluation Designs Appropriate for Specific Communication Programs
Programs Not Delivered to the Entire Population of the Intended Audience
For each evaluation design described below, the bulleted items list its major limitations.
Randomized experiment. Members of the intended audience are randomly assigned to either be exposed to the program (intervention group) or not (control group). Usually, the same series of questions is asked pre- and postintervention (a pretest and posttest); posttest differences between the two groups show change the program has caused.
  • Not appropriate for programs that will evolve during the study period.
  • Not likely to be generalizable or have external validity because of tight controls on program delivery and participant selection. Delivery during the evaluation may differ significantly from delivery when the program is widely implemented (e.g., more technical assistance and training may be available to ensure implementation is proceeding as planned).
  • For programs delivered over time, it is difficult to maintain the integrity of intervention and control groups; group members may leave the groups at different rates.
  • Often costly and time-consuming.
  • May deprive the control group of positive benefits of the program.
Quasi-experiment. Members of the intended audience are split into control and intervention groups based simply upon who is exposed to the program and who is not.
  • Same as randomized experiments.
  • Difficult to conclude that the program caused the observed effects because other differences between the two groups may exist.
Before-and-after studies. Information is collected before and after intervention from the same members of the intended audience to identify change from one time to another.
  • Difficult to say with certainty that the program (rather than some unmeasured variable) caused the observed change.
Independent cross-sectional studies. Information is collected before and after intervention, but it is collected from different intended audience members each time.
  • Cannot say with certainty that the program caused any observed change.
Panel studies. Information is collected at multiple times from the same members of the intended audience. When intended audience members are differentially exposed to the program, this design helps evaluators sort out the effects of different aspects of the program or different levels of exposure.
  • Generalizability may be compromised over time. As participants age, leave, or respond to repeated questions on the same subject, they may no longer closely represent the intended audience.
  • Can be difficult to say with certainty that the program caused the observed change.
Time series analysis. Pre- and postintervention measures are collected multiple times from members of the intended audience. Evaluators use the preintervention data points to project what would have happened without the intervention and then compare the projection to what did happen using the postintervention data points. (A brief sketch of this projection approach follows the table.)
  • A large number of pre- and postintervention data points is needed to model pre- and postintervention trends.
  • Normally restricted to situations in which governmental or other groups routinely collect and publish statistics that can be used as the pre- and postintervention observations.
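
To illustrate the logic of the time series design in the last row of the table, the sketch below fits a linear trend to hypothetical preintervention data points, projects that trend forward, and compares the projection with the postintervention observations. A real interrupted time series analysis would also account for seasonality and autocorrelation; the numbers here are invented for illustration.

```python
import numpy as np

# Hypothetical quarterly rates (e.g., percentage of adults reporting a screening behavior).
pre  = np.array([22.0, 22.4, 22.9, 23.1, 23.6, 24.0])   # before the campaign
post = np.array([26.2, 26.9, 27.5, 28.1])               # after the campaign

# Fit a linear trend to the preintervention points only.
t_pre = np.arange(len(pre))
slope, intercept = np.polyfit(t_pre, pre, deg=1)

# Project the preintervention trend into the postintervention period.
t_post = np.arange(len(pre), len(pre) + len(post))
projected = intercept + slope * t_post

# The gap between observed and projected values is the estimated program effect.
effect = post - projected
print("Projected without program:", np.round(projected, 1))
print("Observed with program:    ", post)
print("Estimated effect per quarter:", np.round(effect, 1))
```
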



Examples of Outcome Evaluation for Communication Programs

NCI’s Cancer Information Service
Customer satisfaction surveys are one means of gathering data about a program’s effects. Surveys of telephone callers to NCI’s Cancer Information Service (CIS) have shown that:

  • Eight out of 10 callers say they take positive steps to improve their health after talking with CIS staff.
  • Seventy percent of those who call about symptoms say the CIS information was helpful in their decision to see a doctor.
  • Fifty-five percent of those who call about treatment say they use CIS information to make a treatment decision.
  • Two-thirds of callers who are considering participation in a research study talk with a doctor after calling the CIS.

Note. From Morra, M. E. (Ed.). (1998). The impact and value of the Cancer Information Service: A model for health communication. Journal of Health Communication, 3(3), Suppl. 1, 7–8. Copyright Taylor & Francis. Adapted with permission.


The Right Turns Only Program
Right Turns Only is a video-based drug education series produced by the Prince George’s County, Maryland, school system. The effects of this series (including collateral print material) on student knowledge, attitudes, and behavioral intentions were tested among approximately 1,000 seventh grade students.

Twelve schools were assigned to one of four groups: three intervention groups and one control group. One intervention group received only the video-based education, a second received both the video-based and a traditional drug education curriculum, a third received only the traditional curriculum, and the control group received no drug abuse prevention education. All interventions were completed within a 3-week period.

The six outcomes measured were: 1) knowledge of substance abuse terminology, 2) ability to assess advertisements critically, 3) perception of family, 4) conflict resolution, 5) self-efficacy in peer relationships, and 6) behavioral intentions related to substance use/abuse prevention.

Changes were measured using data from questionnaires completed by students before and after the interventions. The data were analyzed to identify differences based on gender, race, grades (self-reported), and teacher. Groups that received drug education scored higher than the control group on all posttest measures except self-efficacy. On two of the six measures, the group receiving the combination of the video series and traditional curriculum scored significantly higher than other groups.

The evaluation demonstrated that instructional videos (particularly when used in conjunction with print materials and teacher guidance) could be an effective tool for delivering drug education in the classroom.


Note. Adapted from Evaluating the Results of Communication Programs (Technical Assistance Bulletin), by the Center for Substance Abuse Prevention, August 1998, Washington, DC: U.S. Government Printing Office. In the public domain.


NIDDK’s "Feet Can Last a Lifetime" Program
In 1995 the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) developed a feedback mechanism for the promotion of its kit, "Feet Can Last a Lifetime," that was designed to reduce the number of lower extremity amputations in people with diabetes. The first printing of the kit included a feedback form for health care providers to comment on the materials. Based on the feedback, NIDDK revised the kit in 1997. The new kit’s contents were then pretested extensively with practitioners for technical accuracy, usefulness, and clarity. The original kit was developed primarily for providers; based upon evaluation results, the revised kit also includes materials for patients. These include an easy-to-read brochure; a fact sheet with "foot care tips" and a "to do" list that contains steps for patients to follow to take care of their feet; and camera-ready, laminated tip sheets for providers to reproduce and give to patients.

4. Develop and Pretest Data Collection Instruments

Most outcome evaluation methods involve collecting data about participants through observation, a questionnaire, or another method. Instruments may include tally sheets for counting public inquiries, survey questionnaires, and interview guides. Select a method that allows you to best answer your evaluation questions based upon your access to your intended audience and your resources. To develop your data collection instruments, or to select and adapt existing ones, ask yourself the following questions:

Which Data?

The data you collect should be directly related to your evaluation questions. Although this seems obvious, it is important to check your data collection instruments against the questions your evaluation must answer. These checks will keep you focused on the information you need to know and ensure that you include the right measures. For example, if members of your intended audience must know more about a topic before behavior change can take place, make sure you ask knowledge-related questions in your evaluation.

From Whom?

You will need to decide how many members of each group you need data from in order to have a sufficiently powerful evaluation to assess change. Make sure you have adequate resources to collect information from that many people. Realize that you may also need a variety of data collection instruments and methods for the different groups from whom you need information.
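
One rough way to answer the "how many people" question is a standard sample size calculation for detecting a change in a proportion. The sketch below assumes a 95 percent confidence level, 80 percent power, and two independent samples of equal size; the baseline and expected proportions are illustrative assumptions, not program data.

```python
import math

Z_ALPHA = 1.96   # two-sided 95 percent confidence
Z_BETA  = 0.84   # 80 percent power

def sample_size_per_group(p_baseline, p_expected):
    """Approximate respondents needed in each of two independent samples to
    detect a change from p_baseline to p_expected."""
    p_bar = (p_baseline + p_expected) / 2
    numerator = (Z_ALPHA * math.sqrt(2 * p_bar * (1 - p_bar)) +
                 Z_BETA * math.sqrt(p_baseline * (1 - p_baseline) +
                                    p_expected * (1 - p_expected))) ** 2
    return math.ceil(numerator / (p_expected - p_baseline) ** 2)

# Illustrative objective: raise awareness from 30 percent to 40 percent.
print(sample_size_per_group(0.30, 0.40))  # about 356 respondents in each survey
```
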

How?

Before you decide how to collect your data, you must assess your resources. Do you have access to, or can you train, skilled interviewers? Must you rely on self-reports from participants?

Also consider how comfortable the participants will be with the methods you choose to collect data. Will they be willing and able to fill out forms? Will they be willing to provide personal information to interviewers? Will the interviews and responses need to be translated?

Conducting Culturally Competent Evaluation

When you evaluate communication programs, you form a set of assumptions about what should happen, to whom, and with what results. Recognize that these assumptions and expectations may vary, depending on the cultural norms and values of your intended audiences.

You may need to vary your methods of gathering information and interpreting results. Depending on the culture from which you are gathering information, people may react in different ways:

  • They may think it is inappropriate to speak out in a group, such as a focus group, or to provide negative answers. (This does not mean that you should not use focus groups within these cultures; observance of nonverbal cues may be more revealing than oral communication.)
  • They may be reluctant to provide information to a person from a different culture or over the telephone.
  • They may lack familiarity with printed questionnaires or have a limited ability to read English.

Remember that the culture of the evaluator your program uses can inadvertently affect the objectivity of your evaluation. When possible, try to use culturally competent evaluators when you examine program activities. If your program cuts across cultures and you adapt your evaluation methods to fit different groups, you may find it difficult to compare results across groups. This type of evaluation is more complicated, and if you plan to conduct one, enlist the help of an expert evaluator.

5. Collect Data

Collect postprogram data. You should have collected baseline data during planning in Stage 1, before your program began, to use for comparison with postprogram data.

6. Process Data

Put the data into usable form for analysis. This may mean organizing the data to give to professional evaluators or entering the data into an evaluation software package.
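
As one example of putting raw data into usable form, the sketch below reads hypothetical pre/post questionnaire responses from a spreadsheet file, drops incomplete records, and tabulates answers by survey wave. The file name and column names are invented for illustration; any statistical or spreadsheet package can perform the same steps.

```python
import pandas as pd

# Hypothetical file with one row per respondent and columns:
# wave ("pre" or "post"), aware_of_campaign ("yes"/"no"), servings_per_day.
responses = pd.read_csv("survey_responses.csv")

# Basic cleaning: drop rows with missing answers and normalize text values.
responses = responses.dropna(subset=["wave", "aware_of_campaign"])
responses["aware_of_campaign"] = responses["aware_of_campaign"].str.strip().str.lower()

# Tabulate awareness by survey wave; this table feeds the analysis in step 7.
awareness_table = pd.crosstab(responses["wave"], responses["aware_of_campaign"])
print(awareness_table)
print(responses.groupby("wave")["servings_per_day"].mean())
```
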

7. Analyze the Data to Answer the Evaluation Questions

Use statistical techniques as appropriate to discover significant relationships. Your program might consider involving university-based evaluators, providing them with an opportunity for publication and your program with expertise.
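
Continuing the hypothetical awareness example, one common technique for categorical outcomes is a chi-square test of independence between survey wave and response. The counts below are illustrative; in practice you would use the table produced in step 6.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts of campaign awareness, tabulated by survey wave.
#            aware  not aware
observed = [[310,   690],    # preintervention sample
            [380,   620]]    # postintervention sample

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, degrees of freedom = {dof}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Awareness differs significantly between the pre- and post-surveys.")
else:
    print("No statistically significant change in awareness was detected.")
```
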

8. Write an Evaluation Report

A report outlining what you did and why you did it, as well as what worked and what should be altered in the future, provides a solid base from which to plan future evaluations. Your program evaluation report explains how your program was effective in achieving its communication objectives and serves as a record of what you learned from both your program’s achievements and shortcomings. Be sure to include any questionnaires or other instruments in the report so that you can find them later.

See Appendix A for a sample evaluation report. As you prepare your report, you will need someone with appropriate statistical expertise to analyze the outcome evaluation data. Also be sure to work closely with your evaluators to interpret the data and develop recommendations based on them.

Why?

Writing an evaluation report will bring your organization the following additional benefits:

  • You will be able to apply what you've learned to future projects. Frequently, other programs are getting under way when evaluation of an earlier effort concludes, and program planners don’t have time to digest what has been learned and incorporate it into future projects. A program evaluation report helps to ensure that what has been learned will get careful consideration.
  • You will show your accountability to employers, partners, and funding agencies. Your program’s evaluation report showcases the program’s accomplishments. Even if some aspects of the program need to be modified based on evaluation results, identifying problems and addressing them shows partners and funding agencies that you are focused on results and intend to get the most benefit from their time and money.
  • You will be able to give evidence of your program’s and methods’ effectiveness. If you want other organizations to use your materials or program, you need to demonstrate their value. An evaluation report offers proof that the materials and your program were carefully developed and tested. This evidence will help you explain why your materials or program may be better than others, or what benefits an organization could gain from using its time and resources to implement your program.
  • You will provide a formal record that will help others. A comprehensive evaluation report captures the institutional memory of what was tried in the past and why, which partners had strong skills or experience in specific areas, and what problems were encountered. Everything you learned when evaluating your program will be helpful to you or others planning programs in the future.
Evaluation Report Helps CIS Promote Program Areas, Strengths

NCI’s CIS used an evaluation report, "Making a Difference," to show its partners, the research community, NCI/CIS leadership, and the media that its programs are effective. The document both quantified CIS results (e.g., making 100,000 referrals a year to research studies, providing information on breast cancer to 76,000 callers in 1996, providing information that increased fruit and vegetable consumption among callers) and put a human face on the calling public. Quotations from callers and leaders in the cancer community illustrated the personal impact of the service on people’s lives and health.

The report was written in lay language and used pullouts and simple charts to explain statistics. Ideas for using the report with regional partners, the media, and community leaders were included with the copies sent to each CIS office. To maximize opportunities for using the report, CIS has also made it available on computer disk and as a PowerPoint® slide presentation.

How?

Consider the Users

Before you write your evaluation, consider who will read or use it. Write your report for that audience. As you did when planning your program components in Stage 1, analyze your audiences for your report before you begin to compose. To analyze your audience, ask yourself the following questions:

  • Who are the audiences for this evaluation report?
    • Public health program administrators
    • Evaluators, epidemiologists, researchers
    • Funding agencies
    • Policymakers
    • Partner organizations
    • Project staff
    • The public
    • The media
  • How much information will your audience want?
    • The complete report
    • An executive summary
    • Selected sections of the report
  • How will your audience use the information in your report?
    • To refine a program or policy
    • To evaluate your program’s performance
    • To inform others
    • To support advocacy efforts
    • To plan future programs

Consider the Format

Decide the most appropriate way to present information in the report to your audience. Consider the following formats:

  • Concise, including hard-hitting findings and recommendations
  • General, including an overview written for the public at the ninth-grade level
  • Scientific, including a methodology section, detailed discussion, and references
  • Visual, including more charts and graphics than words
  • Case studies, including other storytelling methods

Selected Elements to Include

Depending on your chosen audience and format, include the following sections:

  • Program results/findings
  • Evaluation methods
  • Program chronology/history
  • Theoretical basis for program
  • Implications
  • Recommendations
  • Barriers, reasons for unmet objectives

9. Disseminate the Evaluation Report

Ask selected stakeholders and key individuals to review the evaluation report before it is released so that they can identify concerns that might compromise its impact. When the report is ready for release, consider developing a dissemination strategy for the report, just as you did for your program products, so the intended audiences you’ve chosen will read it. Don’t go to all the trouble of writing the report only to file it away.

Letting others know about the program results and continuing needs may prompt them to share similar experiences, lessons, new ideas, or potential resources that you could use to refine the program. In fact, feedback from those who have read the evaluation report or learned about your findings through conference presentations or journal coverage can be valuable for refining the program and developing new programs. You may want to develop a formal mechanism for obtaining feedback from peer or partner audiences. If you use university-based evaluators, the mechanism may be their publication of findings.

If appropriate, use the evaluation report to get recognition of the program’s accomplishments. Health communication programs can enhance their credibility with employers, funding agencies, partners, and the community by receiving awards from groups that recognize health programs, such as the American Medical Writers Association, the Society for Technical Communication, the American Public Health Association, and the National Association of Government Communicators. A variety of other opportunities exist, such as topic-specific awards (e.g., awards for consumer information on medications from the U.S. Food and Drug Administration) and awards for specific types of products (e.g., the International Communication Association’s awards for the top three papers of the year). Another way to get recognition is to publish articles about the program in professional journals or give a presentation or workshop at an organization meeting or conference.

Refining Your Health Communication Program

The health communication planning process is circular. The end of Stage 4 is not the end of the process but the step that takes you back to Stage 1. Review the evaluation report and consider the following to help you identify areas of the program that should be changed, deleted, or augmented:

  • Goals and objectives:
    • Have your goals and objectives shifted as you’ve conducted the program? If so, revise the original goals and objectives to meet the new situation.
    • Are there objectives the program is not meeting? Why? What are the barriers you’re encountering?
    • Has the program met all of your objectives, or does it seem not to be working at all? Consider ending the program.
  • Where additional effort may be needed:
    • Is there new health information that should be incorporated into the program’s messages or design?
    • Are there strategies or activities that did not succeed? Review why they didn't work and determine what can be done to correct any problems.
  • Implications of success:
    • Which objectives have been met, and by what successful activities?
    • Should successful communication activities be continued and strengthened because they appear to work well, or should they be considered successful and completed?
    • Can successful communication activities be expanded to apply to other audiences or situations?
  • Costs and results of different activities:
    • What were the costs (including staff time) and results of different aspects of the program?
    • Do some activities appear to work as well as, but cost less than, others?
  • Accountability:
    • Is there evidence of program effectiveness, and of continued need, that you can use to persuade your organization to continue the program?
    • Have you shared the results of your activities with the leadership of your organization?
    • Have you shared results with partners?
    • Do the assessment results show a need for new activities that would require partnerships with additional organizations?

Once you have answered the questions above and decided what needs to be done to improve the program, use the planning guidelines in Stage 1 to help determine new strategies, define expanded or different intended audiences, and rewrite/revise your communication program plan to accommodate new approaches, new tasks, and new timelines. Review information from the other stages as you plan the next phase of program activities.

Common Myths and Misconceptions About Evaluation

Myth: We can’t afford an evaluation.

Fact: Rarely does anyone have access to adequate resources for an ideal health communication program, much less an ideal evaluation. Nevertheless, including evaluation as a part of your work yields the practical benefit of telling you how well your program is working and what needs to be changed. With a little creative thinking, some form of useful evaluation can be included in almost any budget.

Myth: Evaluation is too complicated. No one here knows how to do it.

Fact: Many sources of help are available for designing an evaluation. Several pertinent texts are included in the selected readings at the end of this section. If your organization does not have employees with the necessary skills, find help at a nearby university or from someone related to your program (e.g., a board member, a volunteer, or someone from a partner organization). Also, contact an appropriate clearinghouse or Federal agency and ask for evaluation reports on similar programs to use as models. If the program has enough resources, hire a consultant with experience in health communication evaluation. Contact other communication program managers for recommendations.

Myth: Evaluation takes too long.

Fact: Although large, complicated outcome evaluation studies take time to design and analyze (and may require a sufficient time lapse for changes in attitudes or behavior to become clear), other types of evaluation can be conducted in a few weeks or months, or even in as little as a day. A well-planned evaluation can proceed in tandem with program development and implementation activities. Often, evaluation seems excessively time-consuming only because it is left until the end of the program.

Myth: Program evaluation is too risky. What if it shows our funding source (or boss) that we haven’t succeeded?

Fact: A greater problem is having no results at all. A well-designed evaluation will help you measure and understand the results (e.g., if an attitude or a perception did not change, why not?). This information can direct future initiatives and help the public health community learn more about how to communicate effectively. The report should focus on what you have learned from completing the program evaluation.

Myth: We affected only 30 percent of our intended audience. Our program is a failure.

Fact: Affecting 30 percent of the intended audience is a major accomplishment; it looks like a failure only if your program’s objectives were set unrealistically high. Remember to report your results in the context of what health communication programs can be expected to accomplish. If you think the program has affected a smaller proportion of the intended audience than you wanted, consult with experts (program planning, communication, or behavioral) before setting objectives for future programs.

Myth: If our program is working, we should see results very soon.

Fact: Results will vary depending on the program, the issue, and the intended audience. Don’t expect instant results; creating and sustaining change in attitudes and particularly in behavior or behavioral intentions often takes time and commitment. Your program may show shorter term, activity-related results when you conduct your process evaluation; these changes in knowledge, information seeking, and skills may occur sooner than more complex behavioral changes.

Selected Readings

Academy for Educational Development. (1995). A tool box for building health communication capacity. Washington, DC.

Agency for Toxic Substances and Disease Registry. (1994). Guidelines for planning and evaluating environmental health education programs. Atlanta.

Center for Substance Abuse Prevention. (1998). Evaluating the results of communication programs [Technical Assistance Bulletin]. Washington, DC: U.S. Government Printing Office.

Flay, B. R., & Cook, T. D. (1989). Three models for evaluating prevention campaigns with a mass media component. In R. E. Rice & C. K. Atkin (Eds.), Public communication campaigns (2nd ed.). Thousand Oaks, CA: Sage.

Flay, B. R., Kessler, R. C., & Utts, J. M. (1991). Evaluating media campaigns. In S. L. Coyle, R. F. Boruch, & C. F. Turner (Eds.), Evaluating AIDS prevention programs. Washington, DC: National Academy Press.

Morra, M. E. (Ed.). (1998). The impact and value of the Cancer Information Service: A model for health communication. Journal of Health Communication, 3(3) Suppl.

Muraskin, L. D. (1993). Understanding evaluation: The way to better prevention programs. Washington, DC: U.S. Department of Education.

Rice, R. E., & Atkin, C. K. (2000). Public communication campaigns (3rd ed.). Thousand Oaks, CA: Sage.

Rossi, P. H., Freeman, H. E., & Lipsey, M. W. (1998). Evaluation: A systematic approach (6th ed.). Thousand Oaks, CA: Sage.

Siegel, M., & Doner, L. (1998). Marketing public health: Strategies to promote social change. Gaithersburg, MD: Aspen.

Windsor, R. W., Baranowski, T. B., Clark, N. C., & Cutter, G. C. (1994). Evaluation of health promotion, health education and disease prevention programs (2nd ed.). Mountain View, CA: Mayfield.

