State Program Evaluation Guides: Developing an Evaluation Plan



The evaluation guide Developing an Evaluation Plan will help states and their partners think through the process of planning evaluation activities. The guide describes the components of an evaluation plan, discusses details to consider during plan development, provides sample templates, and outlines a step-by-step process. It uses one example, providing training, and carries it through all of the steps; this format is meant as a template and resource to help you think through and write your own evaluation plan. A feedback page is provided at the end of this guide. Your comments will be appreciated, especially after you have used the information to develop an evaluation plan.

An evaluation plan is part of your application for funding or plan of work. This plan should be based on the program objectives stated in the work plan and provide an approach to assess the extent to which those objectives have been achieved. A state’s evaluation plan should also include a method to document progress toward achieving the 5- and 10-year performance measures described in the program announcement.

The evaluation plan will also help states develop an overall picture of evaluation activities so that required staff time and resources can be identified. Just as the program work plan is a roadmap for implementing a program, the evaluation plan provides a roadmap for evaluation activities. Your plan is a fluid document that will change, based on budget, resources, work plan objectives, accomplishments, and expectations. A plan can, and eventually should, be developed to include the following two levels:

  1. Process evaluation: focuses on the quality and implementation of capacity building activities and interventions.
     
  2. Outcome evaluation: concentrates on assessing the achievement of expected outcomes of selected capacity building activities and interventions. Outcome evaluation should build on process evaluation.

States are not expected to engage in both levels of evaluation at the outset but to grow into them as capacity increases and programs develop.

When should you develop an evaluation plan? Ideally, you should draft the evaluation plan while you develop your program work plan. As you develop objectives, activities, and timelines, documenting their progress is a natural next step. Developing your evaluation plan as you develop your work plan helps you think realistically about the process of evaluation. It also encourages you to monitor and assess, from the beginning, your program’s implementation so that program improvements can be made. And, as the program or intervention budget is planned, evaluation costs can be estimated and included.

You can develop your evaluation plan either as one document that consolidates all of your state’s HDSP evaluation activities (Appendix 1) or as a component of each objective, integrated into your work plan (Appendix 2). If you choose the latter, you will eventually want to look at the evaluation activities of your program or intervention as a whole to get a broad picture of the job ahead.

Pre-plan Development

The foundation of your program evaluation plan is your HDSP program work plan. Developing your evaluation plan will be much easier if your work plan contains

  • Clear objectives that describe what you will accomplish.
  • A well-defined set of activities that systematically leads to achieving your objectives.
  • A specified level of expected performance (your performance measure).

Clarify Program Objectives

An essential activity before developing your evaluation plan is to clarify your program objectives. The evaluation plan is based on the stated objectives, activities listed to accomplish those objectives, and the performance or outcome measure(s) listed in your work plan. Your first step is to review each objective from your work plan and identify the following

  • The performance measure that indicates successful achievement of the objective.
  • The activities that you will carry out to achieve the objective.
  • The time frame for accomplishing those activities.

Before you begin your evaluation planning, ensure that stakeholders and staff understand the purpose and scope of the objectives. This might also be a good time to check in with program stakeholders to ensure that the stated objectives are still on point and relevant and to confirm that your performance measures agree with what they perceive “successful performance” to be.

Ensure that your objectives are “SMART.” As you review your program objectives, be sure they are written to identify the results to be achieved, i.e., they describe what the program expects to accomplish. Objectives should be expressed in SMART terms whenever possible. If your objectives are not written that way in your work plan, it is a good idea to revise them and restate them in both your work and evaluation plans.

SMART stands for the following:
 
   Specific – concrete, identifies what will change for whom.
   Measurable – able to count or otherwise measure activity or results.
   Attainable/Achievable – reasonable and feasible with given resources.
   Relevant – relates to the overall goals of the program.
   Time bound – achieved within a specified period of time.

Developing specific, measurable objectives requires time, orderly thinking, and a clear picture of the results expected from program activities. The more specific you can be, the easier it will be to demonstrate success. It is important that objectives be realistic and, whenever possible, based on data that identify a valid level of success. When such data are not available, states should base performance targets on goals they judge to be realistic. Often, talking to state partners or colleagues in other states who have implemented similar programs, or reading reports or articles on similar programs or interventions, can provide information about the amount of change that can be expected.

Here are some examples of SMART objectives:

  • By June 29, 2006 (time bound), increase the number of train-the-trainer sessions provided to HDSP partners on “Implementing and Evaluating System Change” (specific & relevant) from 10 to 14 (measurable & attainable).
     
  • By December 31, 2009 (time bound), increase the percentage of African Americans in Blueberry State who recognize all the signs and symptoms of heart attack and know to call 9–1–1 (specific & relevant) from 11% to 18% (measurable & attainable) (Baseline: BRFSS 2005).
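To make the anatomy of a SMART objective concrete, here is a minimal sketch in Python that records the measurable target and time frame from the first example above and checks progress against them. The class and field names are hypothetical and purely illustrative; the guide does not require any such structure.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class SmartObjective:
        """Hypothetical record of one SMART objective from a work plan."""
        description: str   # Specific and Relevant: what will change, for whom
        baseline: float    # Measurable: the starting value
        target: float      # Measurable and Attainable: the expected value
        due_date: date     # Time bound: when the change should be achieved

        def met(self, observed: float, on: date) -> bool:
            """True if the observed value reaches the target by the due date."""
            return observed >= self.target and on <= self.due_date

    # Based on the first sample objective above.
    training_objective = SmartObjective(
        description='Train-the-trainer sessions on "Implementing and '
                    'Evaluating System Change" provided to HDSP partners',
        baseline=10,
        target=14,
        due_date=date(2006, 6, 29),
    )
    print(training_objective.met(observed=14, on=date(2006, 6, 20)))  # True

Writing an objective down this way forces each SMART element to be stated explicitly, which is the same discipline the checklist above asks for.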

As you develop objectives, ask yourself the following questions:

  • Is the objective based on identified need?
  • Does the objective relate to the priority program areas?
  • Is it a policy, environmental, or system change or does it support this type of change?
  • Does the objective focus work at the highest level possible?

Develop Well-defined Activities

Your work plan should list a set of activities that will lead to the accomplishment of your stated objective. Your activities should be well defined and measurable, should be directly related to your objective, and should indicate the person(s) responsible for ensuring that the activities are carried out.

Specify Your Expected Level of Performance (Performance Measures)

Performance measures identify the expected effects or level of success needed to meet the stated work plan objective. These measures are often based on an expected change from a known baseline. SMART objectives can identify your performance measures because they provide specific information needed to identify the expected effects or the goals to be attained.

Determine Use and Users

As you begin planning your evaluation, it is most important to clarify the purpose of the evaluation and who will use the results. These two facts will help focus evaluation questions and the communication plan. Evaluating with the end user in mind will also increase the likelihood that the results will be used.

Plan Development

Once you have clarified your program or intervention objective(s) and the use and users of your evaluation, you can begin to develop your evaluation plan. Developing a plan includes these eight steps:

  1. Develop evaluation questions. What do you want to know?
  2. Determine indicators.  What will you measure? What type of data will you need to answer the evaluation question?
  3. Identify data sources. Where can you find these data?
  4. Determine the data collection method. How will you gather the data?
  5. Specify the time frame for data collection. When will you collect the data?
  6. Plan the data analysis. How will data be analyzed and interpreted?
  7. Communicate results. With whom and how will results be shared?
  8. Designate staff responsibility. Who will oversee the completion of this evaluation?
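If it helps to see all eight steps side by side, the short Python sketch below models one row of an evaluation plan, similar to the Appendix 1 template, with one field per step. The field names are illustrative only; any table, spreadsheet, or document that captures the same information works just as well.

    from dataclasses import dataclass

    @dataclass
    class EvaluationPlanRow:
        """One evaluation question and the plan for answering it (illustrative)."""
        evaluation_question: str      # Step 1: what you want to know
        indicators: list[str]         # Step 2: what you will measure
        data_sources: list[str]       # Step 3: where the data will come from
        data_collection_method: str   # Step 4: how you will gather the data
        time_frame: str               # Step 5: when data will be collected
        data_analysis: str            # Step 6: how data will be analyzed and interpreted
        communicate_results: str      # Step 7: with whom and how results will be shared
        staff_responsible: str        # Step 8: who will oversee completion

    # Filled in with evaluation question 1 from the sample plan developed later in this guide.
    row_1 = EvaluationPlanRow(
        evaluation_question="How many training sessions were conducted in 2006?",
        indicators=["Number of training sessions conducted during 2006"],
        data_sources=["Administrative records", "Training log"],
        data_collection_method="Count number of sessions conducted",
        time_frame="June 2006",
        data_analysis="Compare number in 2006 to 2005",
        communicate_results="Funders, via progress report",
        staff_responsible="(name the staff member or partner responsible)",
    )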

This guide uses the following SMART objective example to develop a sample evaluation plan:

By June 29, 2006, increase the number of training sessions on "Implementing and Evaluating System Change," provided to HDSP partners, from 10 to 14.

Keep in mind, this is only an example. If you were actually developing a plan based on this objective, you could choose to implement either a more basic or a more comprehensive evaluation, depending on several variables:

  • Available resources.
  • Other program evaluation needs.
  • The importance of this particular objective in achieving your expected outcomes.
  • The need for data for program improvement.
  • Your intent to repeat the intervention or activity.
  • The input of your stakeholders.

In working through this example, the guide will use the template provided in Appendix 1. This template is not required; it is simply provided as an example that you may use.

Step 1: Develop Evaluation Questions

Evaluation questions indicate those things that you want to learn. They allow you to

  • Focus your evaluation.
  • Measure your achievement against stated objectives (i.e., their impact).
  • Assess how well the objective's supporting activities worked (the process).
  • Reflect on your program’s stage of development and the resources available.

Evaluation questions typically center on the planning and implementation of your intervention or activity (reach of intervention or activity, number and quality of activities completed, adherence to the implementation plan, diversity among participants, barriers, reasons for non-adopters, etc.), the attainment of objectives, the impact on participants, and the impact on the system. Evaluation questions should focus on what happened, how well it happened, why it happened the way it did, and what the results were.

Begin by brainstorming a long list of potential questions with partners, stakeholders, and staff. Remember that your evaluation plan will, most likely, not answer every question posed. Your next step is to prioritize the list of questions based on the following:

  • Importance to stakeholders.
  • Feasibility—ability to collect data and the cost.
  • How the answers will be used.

As you prioritize your evaluation questions, compare them to your work plan to ensure that your work plan activities are sufficient to answer them. As an example, your stakeholders might wish to know if participants are satisfied with their training. If you don’t include a participant survey in your work plan, it will be hard to answer this question.

Let’s begin with our sample objective and pose the potential questions listed below that could be asked to evaluate its success. Question 1 is essential to measuring the achievement of the objective—an increase in the number of training sessions. To take your evaluation one step further and measure the processes of the training, you might ask questions 2 through 5. If you wanted to assess the impact of the increased training with some additional dimensions such as appropriateness of content or on-the-job application by participants, you could ask questions 6 and 7.

Objective: By June 29, 2006, increase the number of training sessions provided to HDSP partners on "Implementing and Evaluating System Change" from 10 to 14.
Evaluation Questions
1.  How many training sessions were conducted in 2006?
2.  Did the training sessions have defined goals and learning objectives?
3.  How satisfied are staff (or partners) with the training sessions offered?
4.  Did the appropriate partner representatives attend training sessions?
5.  Did participants increase their knowledge of key learning objectives?
6.  Are participants able to translate training into practice? Do participants intend to use the new knowledge in their workplace?
7.  Did participants use the knowledge gained during training sessions in their work? If not, why not?

These are just some examples of the evaluation questions you could ask about your training activities. The questions you generate should be based on your particular program and should reflect what you and your partners want to know about your training activities.

It might not be possible to answer all the evaluation questions you generate for this objective. Which questions you choose to answer should be based on needs and resources available for the evaluation.

Step 2: Determine Indicators

To answer your evaluation questions successfully, you will need to determine indicators, that is, what you will measure to obtain observable evidence of accomplishments, changes made, or progress achieved. Indicators describe the type of data you will need to answer your evaluation questions.

Examples of indicators you could use to answer the evaluation questions in our example are found in the following table. There could be more than one indicator for some evaluation questions.

Objective: By June 29, 2006, increase the number of training sessions provided to HDSP partners on "Implementing and Evaluating System Change" from 10 to 14.

1. How many training sessions were conducted in 2006?
   Indicator: Number of training sessions conducted during 2006.
2. Did the training sessions have defined goals and learning objectives?
   Indicator: Number of training sessions with stated goals and learning objectives.
3. How satisfied are staff (or partners) with the training sessions offered?
   Indicator: Level of participant satisfaction with training sessions.
4. Did the appropriate partner representatives attend training sessions?
   Indicator: Number and type of participants that meet target audience description.
5. Did participants increase their knowledge of key learning objectives?
   Indicator: Change in knowledge after completing training.
6. Are participants able to translate training into practice? Do participants intend to use the new knowledge in their workplace?
   Indicators: Percentage of participants able to identify one potential workplace application; of those, the percentage that will incorporate that application into their workplace.
7. Did participants use the knowledge gained during training sessions in their work? If not, why not?
   Indicators: Number of skills developed in training that are incorporated into workplace post-training, and the list of reported barriers.

Step 3: Identify Data Sources

The next step is to identify data sources that relate to your indicators and answer your evaluation questions. The sources you select will depend on what data are available that answer the evaluation questions most effectively. It might be necessary to collect new data as part of the evaluation. However, before you start, check to see what data are already available that can answer your evaluation questions and provide adequate information to assess your objectives.

Data sources for program evaluations include people, documents, observations, or existing data sources. To increase the credibility of your evidence, you should plan to collect data from more than one source whenever possible.

With data sources added, our example evaluation plan becomes the following:

Objective: By June 29, 2006, increase the number of training sessions provided to HDSP partners on "Implementing and Evaluating System Change" from 10 to 14.

1. How many training sessions were conducted in 2006?
   Indicator: Number of training sessions conducted during 2006.
   Data sources: Administrative records, training log.
2. Did the training sessions have defined goals and learning objectives?
   Indicator: Number of training sessions with stated goals and learning objectives.
   Data sources: Training document abstraction.
3. How satisfied are staff (or partners) with the training sessions offered?
   Indicator: Level of participant satisfaction with training sessions.
   Data sources: Participant satisfaction surveys, participant interviews.
4. Did the appropriate partner representatives attend training sessions?
   Indicator: Number and type of participants that meet target audience description.
   Data sources: Attendance logs, participant demographic sheets.
5. Did participants increase their knowledge of key learning objectives?
   Indicator: Change in knowledge after completing training.
   Data sources: Pre- and post-test scores.
6. Are participants able to translate training into practice? Do participants intend to use the new knowledge in their workplace?
   Indicators: Percentage of participants able to identify one potential workplace application; of those, the percentage that will incorporate that application into their workplace.
   Data sources: Participant survey questions: "Identify 1 way that you will incorporate new information learned today into your current work. If you cannot, please describe the barrier(s) to incorporating this information."
7. Did participants use the knowledge gained during training sessions in their work? If not, why not?
   Indicators: Number of skills developed in training that are incorporated into workplace post-training, and the list of reported barriers.
   Data sources: Follow-up written survey of attendees.

Step 4: Determine Data Collection Method

If the collection of new data is necessary to answer your evaluation questions, you must determine what method for collecting those data is most appropriate and feasible. There are two types of data collection methods: quantitative and qualitative.

Quantitative data are numerical and can be used to make calculations and draw conclusions in terms of percentages, proportions, and other values. Quantitative data are often easier to organize and analyze than qualitative data. Quantitative data answer questions such as “how much?” “how many?” or “to what extent?” For example: “How much of an increase in knowledge of signs and symptoms of stroke did we see in the priority population as a result of our health communication campaign?” “How many participants implemented, at their jobs, the skills or knowledge they learned in the training session?” “To what extent were you satisfied with the training you received?”

Qualitative data come in the form of notes, verbal answers, transcripts, and written responses. The data generally include respondents' thoughts, feelings, and perspectives and are primarily analyzed in terms of themes, ideas, events, personalities, histories, etc. These results must be interpreted and organized but cannot be measured in numerical terms. Qualitative data answer the question “why?” “Why was the increase in the priority population’s knowledge about the signs and symptoms of stroke less than we expected?” “Why did so few people implement the skills and knowledge they learned in the training session?”

For some aspects of your program, qualitative methods will be the most useful. For example, as you build partnerships, personal interviews with partners allow you to gauge the level of enthusiasm for and commitment to the program and the strength of the partnership. For others, quantitative methods will be more appropriate. Common methods of data collection include, but are not limited to, the following:

Quantitative

  • Record review and abstraction
  • Written or telephone surveys
  • Document review and analysis (activity logs, attendance sheets)

Qualitative

  • Observations
  • Focus groups
  • Personal interviews
When thinking about the method to use for collecting data, it is useful to consider the following:

  • Which method is more likely to secure the information needed?
  • Which method is more appropriate given the values, understanding, and capabilities of those who are being asked to provide the information?
  • Which method is less disruptive to the program and target populations?
  • Which method is more feasible given the available resources (money, personnel, skill level, etc.)?

Once you have decided what data you will collect, it is time to decide how you will collect them. Ask yourself the following:

  • When will the data be collected?
    • At one time
    • At specific times during the program or intervention
    • Continuously throughout the program or intervention
     
  • Will a sample be used? Or will data be collected from all participants or all participating sites?
  • Who will collect the data?
  • What is the schedule for data collection?
    • When will information be available?
    • When can the information be conveniently collected?
    • Where will the information collection take place?
    • When will data collection start and end?

Below is our evaluation plan with the data collection method added.

Objective: By June 29, 2006, increase the number of training sessions provided to HDSP partners on "Implementing and Evaluating System Change" from 10 to 14.

1. How many training sessions were conducted in 2006?
   Indicator: Number of training sessions conducted during 2006.
   Data sources: Administrative records, training log.
   Data collection method: Count number of sessions conducted.
2. Did the training sessions have defined goals and learning objectives?
   Indicator: Number of training sessions with stated goals and learning objectives.
   Data sources: Training document abstraction.
   Data collection method: Count number of sessions with defined goals and learning objectives.
3. How satisfied are staff (or partners) with the training sessions offered?
   Indicator: Level of participant satisfaction with training sessions.
   Data sources: Participant satisfaction surveys, participant interviews.
   Data collection method: Self-administered survey at end of session; telephone interview.
4. Did the appropriate partner representatives attend training sessions?
   Indicator: Number and type of participants that meet target audience description.
   Data sources: Attendance logs, participant demographic sheets.
   Data collection method: Percentage of attendees by the following demographics: job type, organization represented, position in organization, etc.
5. Did participants increase their knowledge of key learning objectives?
   Indicator: Change in knowledge after completing training.
   Data sources: Pre- and post-test scores.
   Data collection method: Self-administered survey conducted before and after completion of training session.
6. Are participants able to translate training into practice? Do participants intend to use the new knowledge in their workplace?
   Indicators: Percentage of participants able to identify one potential workplace application; of those, the percentage that will incorporate that application into their workplace.
   Data sources: Participant survey questions: "Identify 1 way that you will incorporate new information learned today into your current work. If you cannot, please describe the barrier(s) to incorporating this information."
   Data collection method: Self-administered survey at completion of training (question added to post-test survey).
7. Did participants use the knowledge gained during training sessions in their work? If not, why not?
   Indicators: Number of skills developed in training that are incorporated into workplace post-training, and the list of reported barriers.
   Data sources: Follow-up written survey of attendees.
   Data collection method: Self-administered mail survey conducted 1–3 months after training session.

Step 5: Specify Time Frame

The time frame should reflect when the data are to be collected. Depending on the type of data needed, collection can take a few days or a few months. It is important to be realistic about how long it will take to collect the data and the resources that will be required. Data collection should be feasible given the time and resources available for the task.

Step 6: Plan Data Analysis

Data analysis depends on the type of data collected. A number of quantitative and qualitative software packages are available to assist with analysis. Interpretation is the process of attaching meaning to analyzed data. Too often we analyze data but fail to take the next step—to put the results in context and draw conclusions. Numbers do not speak for themselves. They need to be interpreted based on careful and fair judgments. Similarly, narrative statements need interpretation. When interpreting data, you should consider the following:

  • Who should be involved in interpreting the results of data analysis?
  • What is the basis for interpreting the data?
  • Who sets the basis for comparison?
  • What are the conclusions and recommendations, especially for program or intervention improvement?
  • What did we learn?
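For instance, if your plan compares pre- and post-test knowledge scores (evaluation question 5 in the sample plan), a paired t test is one common analysis. A minimal sketch, using made-up scores and the SciPy library (an assumption; any statistical package would serve), might look like this:

    from scipy import stats

    # Hypothetical pre- and post-test knowledge scores for the same eight participants.
    pre_scores = [62, 55, 70, 48, 66, 59, 73, 51]
    post_scores = [78, 61, 82, 60, 75, 72, 85, 58]

    # Paired t test, because each participant contributes a before/after pair.
    result = stats.ttest_rel(post_scores, pre_scores)
    mean_gain = sum(b - a for a, b in zip(pre_scores, post_scores)) / len(pre_scores)

    print(f"Mean knowledge gain: {mean_gain:.1f} points")
    print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")

The numbers alone are not the interpretation: a statistically significant gain still has to be weighed against the size of the change, the baseline, and what stakeholders consider a meaningful improvement.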

Step 7: Communicate Results

Once evaluation results are developed, how will they be communicated and shared? When deciding how to share evaluation results, think about the following:

  • With Whom? Look back at who was identified early on as a key user. Target key decision makers with appropriate and hard-hitting information. Also, consider who else might, or should, be interested in the evaluation results. Because program improvement is important, staff and managers need the results.
     
  • How? The communication methods you use will depend upon your audience. A variety of possibilities exist, such as a written report, a short summary statement, a slide presentation, media releases, and internet postings. A useful approach would be to invite your audiences to suggest ways they would like to receive the information.

Not all information resulting from your evaluation has to be provided to all of your stakeholders. Some groups might be interested only in select results. It is important to know what type and amount of information your stakeholders want so that what you provide meets their needs. For example, based on the information needs and desires of your stakeholders, your evaluation could be reported in full, as an executive summary, as a PowerPoint presentation, or as a policy brief. Your full report could go to program staff to help determine how to plan and implement more effective trainings in the future. An executive summary could be most appropriate for other partner members, to provide them with a detailed summary of your training efforts. A policy brief that summarizes the training sessions and their impact on environmental or systems change would be a good way to inform policy makers. A PowerPoint presentation that highlights the need for training and the training results might be the best way to get your message across to other organizations that you want to approach for inclusion in future training efforts.

The evaluation plan below incorporates the time frame, data analysis, and communication of evaluation results. Note that not every question in your evaluation plan needs to name an audience or a way of disseminating results; simply indicate overall which stakeholders will receive results and how.

Objective: By June 29, 2006, increase the number of training sessions provided to HDSP partners on "Implementing and Evaluating System Change" from 10 to 14.

1. How many training sessions were conducted in 2006?
   Indicator: Number of training sessions conducted during 2006.
   Data sources: Administrative records, training log.
   Data collection method: Count number of sessions conducted.
   Time frame: June 2006.
   Data analysis: Compare number in 2006 to 2005.
   Communicate results: Funders, via progress report.
2. Did the training sessions have defined goals and learning objectives?
   Indicator: Number of training sessions with stated goals and learning objectives.
   Data sources: Training document abstraction.
   Data collection method: Count number of sessions with defined goals and learning objectives.
   Time frame: May–June 2006.
   Data analysis: Percentage exceeds 90%.
   Communicate results: Funders, via progress report.
3. How satisfied are staff (or partners) with the training sessions offered?
   Indicator: Level of participant satisfaction with training sessions.
   Data sources: Participant satisfaction surveys, participant interviews.
   Data collection method: Self-administered survey at end of session; telephone interview.
   Time frame: Aug 2005–May 2006.
   Data analysis: Percentage of respondents satisfied or very satisfied exceeds 80%.
   Communicate results: Program staff, via brief report.
4. Did the appropriate partner representatives attend training sessions?
   Indicator: Number and type of participants that meet target audience description.
   Data sources: Attendance logs, participant demographic sheets.
   Data collection method: Percentage of attendees by the following demographics: job type, organization represented, position in organization, etc.
   Time frame: May–June 2006.
   Data analysis: Descriptive statistics of attendees.
   Communicate results: Program staff, via brief report.
5. Did participants increase their knowledge of key learning objectives?
   Indicator: Change in knowledge after completing training.
   Data sources: Pre- and post-test scores.
   Data collection method: Self-administered survey conducted before and after completion of training session.
   Time frame: Dec 2005–May 2006.
   Data analysis: Compare pre- and post-training means using a t test.
6. Are participants able to translate training into practice? Do participants intend to use the new knowledge in their workplace?
   Indicators: Percentage of participants able to identify one potential workplace application; of those, the percentage that will incorporate that application into their workplace.
   Data sources: Participant survey questions: "Identify 1 way that you will incorporate new information learned today into your current work. If you cannot, please describe the barrier(s) to incorporating this information."
   Data collection method: Self-administered survey at completion of training (question added to post-test survey).
   Time frame: Aug 2005–May 2006.
   Data analysis: Calculate the percentage of participants that can name at least 1 workplace application of training, with >80% desired; identify themes related to barriers.
   Communicate results: Policy makers, partners, and stakeholders, via policy brief and written report.
7. Did participants use the knowledge gained during training sessions in their work? If not, why not?
   Indicators: Number of skills developed in training that are incorporated into workplace post-training, and the list of reported barriers.
   Data sources: Follow-up written survey of attendees.
   Data collection method: Self-administered mail survey conducted 1–3 months after training session.
   Time frame: Feb 2006–August 2006.
   Data analysis: Percentage of respondents using information or implementing skills >50%; analyze users and non-users by demographics; identify themes around non-use.
   Communicate results: Partners and stakeholders, via written report.

Step 8: Designate Staff Responsibility

Your evaluation plan should identify key staff and/or partners who will be responsible for ensuring that the evaluation is carried out. This will eliminate duplication of effort and omission of key tasks. Staff/partners named as responsible are not necessarily those persons who will actually conduct the evaluation, but rather the persons in your organization or partnership who will oversee the evaluation activities and ensure that the evaluation is carried out. Not all activities will be the responsibility of HDSP program staff. Partners and contractors might share in this responsibility.

Sample Evaluation Plans

Appendices 1 and 2 include two examples of evaluation plan formats for states to consider. Templates are available as Microsoft Word documents from your CDC Project Officer.

Evaluation Budget

The evaluation planning stage is the perfect time to develop an evaluation budget. As with program budgeting, estimating budget items for evaluation activities can be tricky. A checklist is provided (Appendix 3) to assist evaluators and others in thinking through the many expenses that should be considered when developing an evaluation budget. This checklist is divided into categories of evaluation costs and questions to prompt their consideration. In some cases, items on the checklist will not be applicable.

If you have no idea what to budget for evaluation activities, consult the following resources:

  • State government bid or quote systems might have contractors who have pre-quoted services.
  • The local American Evaluation Association affiliate might offer guidance for estimating local costs.
  • University evaluation centers.
  • Colleagues.

A very rough estimate for an evaluation budget is 5% to 10% of the intervention or program costs. Each evaluation has unique considerations, and your actual costs could differ from these estimates.
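As a quick plausibility check, and purely as a made-up illustration, you could compare an itemized estimate from the Appendix 3 checklist against this rough 5% to 10% range:

    # Hypothetical itemized evaluation costs (from an Appendix 3-style checklist).
    evaluation_costs = {
        "consultant fees": 6000,
        "printing and postage": 900,
        "participant incentives": 1200,
        "data analysis software": 400,
    }
    program_cost = 120_000  # hypothetical total intervention budget

    total = sum(evaluation_costs.values())
    low, high = 0.05 * program_cost, 0.10 * program_cost
    print(f"Itemized evaluation estimate: ${total:,.0f}")
    print(f"Rough 5%-10% guideline: ${low:,.0f} to ${high:,.0f}")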

Additional Resources

McNamara, C. Basic guide to program evaluation. Free Management Library located at http://www.managementhelp.org*

Taylor–Powell, E., Steele, S., & Douglah, M. (1996). Planning a program evaluation. Retrieved April 2002, from University of Wisconsin–Extension–Cooperative Extension, Program Development and Evaluation Unit Web site: http://learningstore.uwex.edu/pdf/G3658-1.PDF*  [PDF–326K]

Appendix 1: Consolidated Evaluation Plan Format

Objective:
 
The consolidated format is a table with one row for each evaluation question and the following columns:

  • Evaluation Questions: What you want to know.
  • Indicators: What type of data you will need.
  • Data Sources: Where you will get the data.
  • Data Collection: How you will get the data.
  • Time Frame: When you will collect the data.
  • Data Analysis: What you will do with the data.
  • Communication Plan: When and how you will share results.
  • Staff Responsible: Who will ensure this gets done.

Appendix 2: Integrated Evaluation Plan Format

Objective:
 
List the activities undertaken to achieve the objective, and for each activity indicate the person responsible, the timeline, and the indicators.
Evaluation Plan:
Describe in narrative or bullet form
  • Evaluation questions.
  • When and how data will be collected and analyzed.
  • When and how results will be reported.
  • Who is responsible.
Evaluation Results:
Describe in narrative or bulleted form your evaluation results including achievement of indicators, response rates, findings, and recommendations for improvement.

Appendix 3: Budget Template

For each applicable category of evaluation expense below, enter an estimated cost.

  • Will additional secretarial support, data entry services, graphic design services, transcribing, etc., be needed?
  • Are there consultant fees for evaluation design, statistical analysis, telephone surveyors, data collection, etc.?
  • What travel will be incurred for administration, data collection, participants, etc.? If you have one or more consultants, will their travel expenses be paid from the travel budget line or included in the consultant fee?
  • What postage or other forms of mail services will be required for mailing of surveys, notices, invitations, etc.? Will express services be needed?
  • What printing costs will be incurred as a part of the data collection process for surveys, interview guides, etc.? As part of the report submission?
  • Will you use telephones to collect data? Will long-distance charges be incurred? What are the charges per completed interview?
  • What supplies will be needed (CD–ROMs, notebooks, pencils)?
  • Are promotional materials (e.g., brochures, pamphlets) a product of the evaluation? Include graphics and printing charges.
  • What specialized equipment is needed for scanning surveys, recording responses, random telephone dialing, etc. (e.g., tape recorders and tapes, computer software, laptop)? Will it be purchased or rented?
  • Are there costs for data storage, transmission, or analysis?
  • Are translation services required?
  • Are there incentives for evaluation participants?
  • Other (list any additional expenses):

Acknowledgements

This guide was developed for the Division for Heart Disease and Stroke Prevention under the leadership of Susan Ladd and Jan Jernigan in collaboration with Nancy Watkins, Rosanne Farris, Belinda Minta, and Sherene Brown.

State Heart Disease and Stroke Prevention programs were invaluable in the development and fine-tuning of this guidance document. Their review contributed significantly to the clarity and utility of this guide. Special thanks are extended to the following:

Susan Mormann, North Dakota Department of Health,
Ghazala Perveen, Kansas Department of Health and Environment,
Ahba Varma, North Carolina Department of Health and Human Services, and
Namvar Zohoori, Arkansas Department of Health and Human Services.

We encourage readers to adapt and share the tools and resources in this document to meet program evaluation needs. For further information, contact the Division for Heart Disease and Stroke Prevention, Applied Research and Evaluation Branch at cdcinfo@cdc.gov or (770) 488–2424.


 
*Links to non–Federal organizations are provided solely as a service to our users. Links do not constitute an endorsement of any organization by CDC or the Federal Government, and none should be inferred. The CDC is not responsible for the content of the individual organization Web pages found at this link.
 
