Implicit in discussions of educational reform, but rarely
recognized, is the confusion between the terms change and progress.
. . . Change is not necessarily progress. Change must always be viewed
in relation to the particular values, goals, and outcomes it serves.
- Michael G. Fullan, The New Meaning of Educational Change
When you accepted the position of Drug Prevention and School Safety
Coordinator, you took on the exciting role of change agent. In
collaboration with school and community partners, you strive to
establish effective prevention initiatives that enhance health and
educational outcomes for students in grades K-12. There is no
question that you are fostering change in your schools and
districts. You have brought diverse individuals together in new
ways, unearthed and made sense of information about student needs,
and initiated new programs and activities to meet those needs.
However, it is critical to make sure that the efforts of everyone
involved in your prevention initiative are making the situation
better, not just different. That is where evaluation comes in;
through evaluation, you can determine whether or not your good
intentions are actually leading to great results!
Keep in mind that, in your role as a coordinator, you are not
expected to know everything about how to conduct an evaluation of
your schools' prevention initiative. You are, however, responsible
for making sure that a quality evaluation is conducted. This
five-day, facilitated event is designed to provide you with the
knowledge and skills necessary to initiate the process of evaluating
your schools' prevention efforts. By the end of this event, you will
be able to do the following:
Describe the benefits of conducting a thorough evaluation of
prevention activities.
Make informed decisions about the best way to approach your
evaluation.
Identify and select the right person to help you evaluate your
program.
Collaborate with your evaluator to develop a solid and practical
evaluation plan.
Access additional evaluation resources.
Although this event cannot address all the issues critical to
conducting a quality program evaluation, it will help you understand
the basics and consider how to make your evaluation as feasible and
useful as possible. You are ready to begin this event if you have
done the following:
This event was developed by the National Coordinator Training and
Technical Assistance Center and CSAP's Northeast Center for the
Application of Prevention Technologies at Education Development
Center, Inc.
Before beginning Day 1, please read this page to learn the answers to
the following frequently asked questions about on-line learning:
Why do you offer this training on-line?
Conducting training on-line has numerous benefits! On-line or
distance learning allows professionals working in different
locations within a district, throughout a state, or even across the
country to come together around a topic of interest and form a
community of practitioners.
Specifically, on-line learning offers these advantages:
- It saves time and money for participants who would otherwise have
to travel to a training.
- It provides access to information for participants who live and
work in places where resources may be unavailable or difficult to locate.
- It increases participants' familiarity and comfort with
technology.
- It provides flexibility, allowing participants to access
information at the time and pace most convenient for them.
- It gives participants an opportunity to engage thoughtfully in a
topic of interest by allowing them time to reflect before responding.
- It provides materials that participants can print for future
reference or share with colleagues and community members.
What will I learn from this event?
Are You Making Progress? Increasing Accountability Through Evaluation is a
five-part workshop designed to be completed over the course of five days. The
workshop is divided into these sections:
Day 1: Introduction to Evaluation
You will receive an overview of four types of evaluation, examine
different reasons to conduct evaluations, and learn about
traditional and participatory approaches to evaluation.
Day 2: Bringing an Evaluator on Board
You will learn about potential differences between internal and external evaluators,
as well as the steps for finding and hiring an external evaluator.
Day 3: Planning Your Evaluation
You will review four key steps in planning evaluation studies: involving key
stakeholders, focusing the evaluation, crafting research questions, and selecting
the right design.
Day 4: Conducting Your Evaluation
You will learn about the differences between quantitative and qualitative approaches
to data collection, the benefits of a mixed-method approach to evaluation, and
some important issues to consider prior to data collection.
Day 5: Summary and Wrap-Up
You will review a summary of the event discussion, explore additional resources,
and share any final questions or insights that you may have about program evaluation.
How is this Web site organized?
This site is composed of the following sections:
Home
This is the page that you see when you first access the site. It
includes a welcome to all participants, the event's learning
objectives, and information about the steps that you should have
completed prior to beginning this event.
Getting Started
You are currently viewing this section, which provides a detailed
introduction to and overview of the event.
Daily Materials
This is the heart of the event; you will acquire a basic
understanding of the event topic from the daily materials. Materials
appear in a variety of formats and can be printed for future
reference. Each day, you will also be asked to answer two or three
discussion questions that will help you reflect on and apply the
information that you are learning.
Discussion Summary
This section contains a summary of the previous day's on-line
discussion. On the final day of the event, it will contain a summary
of the entire week's discussion. Please read the summary before
beginning your day's work.
Resources & Links
This section houses three types of resources: session, general, and
additional. Session resources supplement the main text for each day
of the event; examples are tip sheets and practical tools. General
resources help you participate in this on-line training. Additional
resources include links to other organizations and publications with
information about the event topic.
Event Support
This section includes an on-line form you can submit if you need
technical assistance. If you have any problems during this event,
please do not hesitate to use this form. Center staff will promptly
address all requests for assistance.
Discussion Area
This area houses the on-line discussion among participants. In addition to
sharing your responses to the discussion questions found at the end of each session,
you may also post any questions or comments you have about event content. The
National Center's Director of Continuing Education will facilitate this discussion.
How much time is required each day?
We expect that it will take you approximately one hour per day to
review materials, complete activities, and contribute to the event
discussion. We ask that you visit the Discussion Area at least once
a day to share your ideas and experiences, as well as to review and
respond to the messages posted by your fellow participants and
Center staff. It is beneficial to visit the Discussion Area more
than once each day, if possible; participants in other on-line
workshops have found that more frequent visits allowed them to
better monitor and contribute to the on-line discussion.
You will have a more accurate sense of how much time you will need
to set aside for this event after you complete the first session.
Please make sure to allow enough time each day to complete all event
tasks; your full participation is the key to the success of this
training.
Can I print these materials?
All the materials on this site can be printed for future reference.
However, we strongly suggest that you review the materials on-line
before you print them, so that you can see how the various sections
fit together.
To print a specific page, go to that page, place your cursor on File
(at the top of the page), go to Print, and then press OK. Everything
on the screen in front of you will print. To print only the text
(minus the navigation bars at the top and side of the screen), you
must first open the site using a Web browser (e.g., Internet
Explorer). Then place your cursor on the page you want to print,
right-click with your mouse, select Print, and choose OK.
Unfortunately, it is impossible to print the entire site with a
single click of the mouse.
Where can I go for help?
If you have technical questions or problems, you can submit an
on-line request for assistance in the Event Support section. You may
also find answers to your questions in the following tip sheets
located in the Resources & Links section: Navigating This Site,
Participating in On-line Events, and Using the Discussion Area.
Whether you work with an evaluator from your school or district or
an outside consultant, it is important to understand the
fundamentals of the evaluation process so that you can fully
participate in it. The more you know about the purpose and nature of
evaluation, the better prepared you will be to work with your
evaluator and discuss the process and findings with your school and
community partners.
While many of you may be well acquainted with this topic, it
is still helpful to start at the beginning -- by defining
evaluation and reviewing why it is so important.
We are all evaluators, carefully examining and assessing the value
of everything around us. When purchasing produce, we smell the
cantaloupes and squeeze the tomatoes before making our selections.
When choosing a pain reliever, we consider price, reliability, and
potential side effects. Sometimes we even consider other data
sources, such as Consumer Reports or the opinions of friends and
family members.
What distinguishes program evaluation from these everyday evaluation
activities is the fact that it is systematic -- it involves the application
of methods and techniques that are designed to increase our certainty about the
validity of the results. We can use the information we collect through evaluation
to improve the effectiveness and make decisions about the future of a prevention
program. Although all types of interventions can be evaluated, this event will
focus on program evaluation.
Evaluation is the systematic collection of information
about program characteristics and outcomes in
order to improve effectiveness and make decisions.
Generally speaking, there are four different types of program evaluation: formative,
process, outcome, and impact. The type of evaluation you choose to conduct will
depend on the current state of your prevention activities and the nature of the
decisions you need to make about those activities.
-
Formative evaluation involves collecting data to
inform program development and delivery. An example is conducting a focus group
with students to shape the development of a tobacco prevention media campaign.
Formative evaluation activities are an excellent way to obtain feedback about
the feasibility of proposed activities and their fit with the intended settings
and participants. Although some people use this term synonymously with process
evaluation, most use it only within the context of program planning and development.
-
Process evaluation examines the degree to which actual program
delivery matches intended delivery -- for example, whether the program reached its
target population and delivered services according to the program design. The
emphasis here is on how the program is implemented rather than on its effects.
-
Outcome evaluation measures the direct effects of
program activities on targeted participants, such as the degree to which a program
increased knowledge about alcohol and other drug use among students. The emphasis
here is on the program's short-term effects.
-
Impact evaluation assesses the ultimate effects of
program activities on targeted participants. The emphasis here is on the program's
long-term effects. For example, impact evaluation might look at the extent
to which program activities contributed to a reduction in risk factors (e.g.,
school drop out) associated with a program's intended outcomes (e.g., reductions
in alcohol and other drug use among youth) among students.
Evaluation serves a variety of purposes, including the following:
It helps keep you on track. Despite everyone's
best intentions, program activities often change over time.
These changes can sometimes compromise program effectiveness.
A clear understanding of the implementation process can help
you bring actual delivery in line with intended delivery. Such
adjustments can ensure that program activities continue to
reflect the program's original goals and objectives.
It can improve program efficiency. Evaluation may reveal
opportunities to streamline program delivery or enhance coordination between program components.
Increased efficiency can help reduce cost or allow you to provide
more services to a larger audience at the same cost.
It may reveal unintended effects. Your program may contribute to
many different changes among participants, including changes that your planning
team does not expect. For example, some drug prevention programs
have been shown to reduce the prevalence of one type of substance
while simultaneously increasing the use of another.
It enhances accountability. Accountability is the hallmark of the
No Child Left Behind Act of 2001 -- and for good reason. Because resources are limited
and the education and well-being of young people are at stake, we
must do everything we can to ensure that the academic and
health-related activities students experience are of high quality in
design, delivery, and outcomes. Prevention activities must be held
accountable to the following stakeholders:
- funders who provide resources to support a program
- school staff who spend their time and energy implementing a program
- parents who entrust their children to schools
- students who rely on schools to help them reach their full potential
- other school and community partners invested in the well-being of
youth
Traditional vs. Participatory Evaluation
There are two primary approaches to conducting an evaluation:
traditional and participatory. In the traditional approach, the
evaluator is responsible for all decisions about how to conduct the
evaluation. The evaluator's contact with program staff is minimal:
program staff may be unaware of the evaluation's design and excluded
from evaluation tasks.
A relatively new but increasingly valued approach involves a
partnership between the evaluator and those who develop and deliver
program activities. This approach, known as collaborative or
participatory evaluation, relies on an evaluation team composed of
one or more individuals trained in evaluation, program staff, and
other stakeholders (e.g., students who are receiving the program,
parents, etc.). The diverse members of the evaluation team work
together to plan and implement all evaluation tasks.
Not only is the participatory approach to evaluation more closely
aligned with your own collaborative approach to prevention work, but
it can also help you produce a better, more
enjoyable, and less expensive evaluation. Click below to learn more
about the advantages and disadvantages of participatory evaluation.
Click here for some advantages.
Click here for some disadvantages.
Traditional and participatory approaches to evaluation embody
different values and result in different relationships, processes,
and outcomes. In the end, you must work with your planning team and
other key school and community partners to determine the best
approach for the evaluation of your program.
TRADITIONAL EVALUATION | PARTICIPATORY EVALUATION
Evaluator works separately from the program. | Evaluator works in concert with the program.
Evaluator makes decisions. | Evaluator advises on decisions.
Evaluator retrieves information from program staff as needed to plan and carry out the study. | Program staff participate in planning and carrying out the study.
Evaluator interacts (relatively infrequently) with the program director. | Evaluator interacts regularly with program staff and other stakeholders.
Discussion Questions
Please think about the questions below and share your
responses, comments, and/or any questions about today's
material in the Discussion Area.
Have you had experience with any of the four types of
evaluation (formative, process, outcome, and impact), either as a coordinator or in a
previous position? If so, please describe your experience.
If you have had some experience with evaluation, was it
within the context of a traditional or a participatory approach? How did that approach
seem to work for you and the others involved? Please explain.
Whether or not you have evaluation experience, how do you
feel about the traditional vs. the participatory approach? How do you think your school
and community partners would feel about these two approaches?
This completes today's work.
Please visit the Discussion Area to share your responses to
the discussion questions!
References for Day 1 materials:
Harding, W. (2000). Locating, hiring, and managing an evaluator.
Newton, Mass.: Northeast CAPT.
Jackson, E. T. (1998). Introduction. In E. T. Jackson & Y. Kassam
(Eds.), Knowledge shared: Participatory evaluation in development
cooperation (pp. 1-20). West Hartford/Ottawa: Kumarian/IDRC.
Muraskin, L. (1993). Understanding evaluation: The way to better
prevention programs. Rockville, MD: Westat.
Scheirer, M. A. (1994). Designing and using process evaluation. In
Wholey, J. S., Hatry, H. P., & Newcomer, K. E. (Eds.), Handbook of
Practical Program Evaluation. (pp. 40-68). San Francisco:
Jossey-Bass.
The participatory approach to evaluation offers many benefits to
program staff and stakeholders, including the following:
-
It promotes shared expectations. When an evaluation
plan is developed collaboratively, there is less potential for misunderstanding
and an increased likelihood that all involved will be "on the same page."
-
Personal investment improves quality. Program staff
and stakeholders tend to have a greater personal investment in the evaluation
than evaluation professionals. Their ongoing participation helps ensure that the
program will be assessed carefully and treated fairly.
-
It empowers program staff. By deepening their understanding
of good research, it enhances their capacities to conduct their own research and
review the research of others.
-
It increases the likelihood that the research questions and data
collection methods will be appropriate and relevant for the target populations.
Program staff and stakeholders may be more knowledgeable than an evaluator about
the needs, culture, and circumstances of the target population.
One of the negative connotations often associated with evaluation is
that it is something done to people. One is evaluated. Participatory evaluation,
in contrast, is a process controlled by the people in the program or community.
It is something they undertake as a formal, reflective process for their own development
and empowerment.
-- M. Patton (1990). Qualitative evaluation and research
methods (2nd ed.). Thousand Oaks, CA: Sage Publications, p. 129.
-
It increases the chance that results will be used.
When staff are integrally involved in the evaluation process, they are more likely
to understand, accept, and apply the findings of the evaluation.
-
It is more flexible and less costly than traditional models.
With more people involved, the balance of responsibilities can shift as needed
throughout the duration of the project. Furthermore, with program staff on board
to help, fewer tasks fall to the more costly evaluator.
Although there are many benefits to participatory evaluation, its
disadvantages include the following:
Participants may disagree about the distribution of labor. It is
sometimes difficult to work out a balance of responsibilities that everyone
finds comfortable, so try to be clear at the outset about who will
be doing what.
You may encounter resistance from program staff. They may feel
that they have too much to do already and that evaluation is not part of their
job. To increase buy-in, explain the importance of the evaluation
and how their involvement will contribute to its success.
Collaboration takes time. However, the time spent meeting, consulting, and sharing points of view will result in a more relevant and useful evaluation.
The potential for bias may be increased. Think critically about
who should and should not carry out specific evaluation tasks. For example, a
school administrator should not interview teachers about how well
they implemented a prevention program. The teachers are more likely
to be comfortable and candid discussing their implementation of
program activities with an evaluator.
It may be more difficult to locate an appropriate evaluator. Not all evaluators are committed to, or even interested in, collaborating with program
staff and stakeholders. If you would like to undertake a
participatory evaluation of your program, make sure to discuss this
approach with candidates during the screening process. The topic of
locating and hiring an evaluator will be covered in greater detail
on Day 2.
As a Drug Prevention and School Safety Coordinator, you are
responsible for making sure that you have the right people on
your evaluation team -- including a professional evaluator. It
is best to begin working with an evaluator as soon as
possible; the ideal situation is to bring an evaluator on
board as a member of your planning team. If you were not able
to connect with an evaluator that early in the process, then
there is no time like the present!
Internal vs. External Evaluators
Many of you may already be working with an evaluator. Your school or
district may have an in-house evaluator or you may have contracted
with an external consultant. For those of you who have not yet begun
to work with an evaluator (and have the option to make your own
selection), here are some things to consider:
 | Internal Evaluator | External Evaluator
Objectivity | May be perceived as less objective because he or she is closely connected to and invested in the program | May be perceived as more objective because he or she is not directly connected with the program
Credibility | May be perceived as having less evaluation expertise, and thus be less credible | May be perceived as more credible, provided he or she takes sufficient time to understand program functioning
Skills | Is skilled and knowledgeable about program functioning | Is skilled and knowledgeable about evaluation
Usefulness | May be more useful because he or she is more familiar with the program | May be less useful because he or she is less familiar with the program
Success | May be more successful in getting support from other program staff | May be less successful in getting support from other program staff
Cost | Is less expensive | Is more expensive
* Please note that some of the information in this table assumes
that (1) the internal evaluator is not an evaluation expert, and (2)
the external evaluator is working in a traditional, rather than a
participatory, manner.
You can increase the likelihood that your evaluation will be
successful by working with either a highly skilled internal
evaluator or an external evaluator committed to a truly
collaborative approach.
Hiring an External Evaluator
If you and your planning team decide to seek an external evaluator,
you will need to locate qualified candidates, narrow the applicant
pool, conduct interviews, check references, and develop a contract.
Click here for a checklist of the steps that will help your team
stay on track as you search for the right evaluator. Once you have
an evaluator on board, he or she will work with you to plan and
conduct a successful evaluation of your school's prevention program.
Discussion Questions
Please think about the questions below and share your
responses, comments, and/or any questions about today's
material in the Discussion Area.
Are you currently working with an evaluator? If so, is he or
she an internal or an
external evaluator?
If you are working with an internal evaluator, what benefits
and challenges have you experienced during your collaboration?
If you are working with an external evaluator, what process
did your team use to select the right candidate (e.g., what qualities did you
prioritize, and how did you screen candidates)? What benefits and
challenges have you experienced during your collaboration?
This completes today's work.
Please visit the Discussion Area to share your responses to
the discussion questions!
References for Day 2 materials:
Center for Substance Abuse Prevention. How to find and work with an evaluator.
Evaluation Basics PreventionDSS 3.0. Available on-line at:
http://www.preventiondss.org/Macro/Csap/dss_portal/templates/start1.cfm?sect_id=1&page=/macro/csap/dss_portal/portal_content/eval_intros/eval-nug8-30b.htm&topic_id=5&link_url=processevalintro.cfm&link_name=Evaluation%20Basics.
Harding, W. (2000). Locating, hiring, and managing an evaluator.
Newton, Mass.: Northeast CAPT.
Juvenile Justice Evaluation Center. Hiring and working with an evaluator.
Washington, DC: Author. Available on-line
at: http://www.jrsa.org/jjec/about/publications/evaluator.pdf.
Rabinowitz, P. Choosing evaluators. Community Tool Box. Available
on-line at:
http://ctb.lsi.ukans.edu/tools/EN/sub_section_main_1351.htm.
Including evaluators early in the prevention planning process allows
them to do the following:
Gain a thorough understanding of the program. Evaluators who are
involved in program planning will have a better understanding of the program's design
and intent, how each component is supposed to work, and how the
different components interrelate.
Conduct an evaluability assessment. Evaluability assessments can help you determine whether: (1) a program is mature enough to evaluate, (2) a program is functioning as intended, and (3) program outcomes or impacts can be measured. This type of assessment can reveal potential problems and prevent premature evaluations that waste valuable time and resources.
Design the evaluation. It takes time to develop and
agree on an evaluation design that is appropriate for your
program. Rushing through this phase can result in a flawed
design that does not adequately assess your program.
Evaluation designs will be explored in greater detail on Day
3.
Select the appropriate measures and develop instruments. The measures selected for the evaluation should be relevant to your program's objectives and goals and appropriate for program participants. Selecting the right measures takes time. Once chosen, the measures need to be organized into a questionnaire (or other data collection instruments), then
pilot-tested. If the instruments need to be translated into another language, the translation process could take several weeks. Ideally, all of this work should be completed before the program begins.
Complete the Institutional Review Board's (IRB) review
process. In many cases, you will not be able to collect data
until your procedures for protecting the participants have
been reviewed and approved by an IRB. The IRB review process
cannot begin until all instruments and participant protection
procedures have been developed. It often takes a month or two
to complete an IRB review.
Develop rapport with program staff. Program staff tend to be
suspicious of evaluators. Developing trust and good communication patterns usually takes time.
Waiting to get an evaluator on board until after the program has
been fully implemented can be a costly mistake. It often takes three
to six months of evaluation planning and preparation before data
collection can begin. If you wait too long, your evaluator may not
have sufficient time to help you collect the data you need to answer
your research questions.
Some points to consider when assessing candidates include the following:
Evaluation philosophy. Consider the evaluation philosophy that you
and your planning team find most comfortable and appealing. If you select the
participatory or collaborative approach, make sure that candidates
are committed -- or at least amenable -- to this model. Ask concrete
questions about their willingness to involve program staff and
stakeholders in the evaluation process.
Education and experience. If you cannot identify a candidate with
formal training in program evaluation, look for individuals with graduate-level
training in social science research methods. They should also have
professional experience in the areas of evaluation design, data
collection, and statistical analysis. Ideally, candidates will have
additional experience that is relevant to your specific program. Ask
candidates whether or not they have evaluated similar programs with
similar target populations. If they have, then they probably have
knowledge and resources (e.g., appropriate data collection
instruments) that can save you both time and money. To get a clear
sense of their work, ask to see the evaluation reports that they
prepared.
Communication skills. Evaluators must be able to
communicate effectively with a broad range of audiences. They
should avoid jargon; someone who cannot clearly explain
evaluation concepts is not a good candidate. To gather
accurate information, an evaluator needs to be able to connect
comfortably with program staff and participants. An evaluator
should be personable and engaging, as well as capable of
making evaluation results both compelling and accessible.
Cultural sensitivity. An evaluator needs to respect the cultures
of the communities with which he or she works. Mutual respect and some understanding and
acceptance of how others see the world is crucial. Genuine
sensitivity to the culture and community will help increase the
comfort level of program staff, participants, and other
stakeholders. It will also ensure that data collection tools are
appropriate and relevant, thus increasing the accuracy of the
findings.
Budget and cost. Ask for a detailed budget that
distinguishes between direct costs, such as labor, and indirect costs, such
as overhead. Overhead rates vary widely. It is not unusual to
see overhead costs of 100% or more, meaning that for every
dollar that goes toward conducting the study, another dollar
goes toward running the organization responsible for the
study. Sometimes you can get an organization to reduce its
indirect costs -- this saves you money without compromising
the quality of your study. (A worked example of this arithmetic follows this list.)
Time and access. Make sure candidates have the time to complete
the necessary work. Ask them about their current work commitments and how much
time they will be able to devote to your project. Compare their
responses to your estimate of the time needed to do the work. Make
sure to factor in frequent site visits and regular meetings. The
more contact your evaluator has with your program, the better he or
she will understand how it works and the more opportunities he or
she will have to monitor data collection activities. Regular
meetings also let you monitor the evaluator's performance. If the
evaluator is not local, travel expenses may increase the cost of
your evaluation.
Commitment to your agenda. Your program resources should not be
used to support someone's personal research agenda. Researchers, particularly those
attached to universities, may have their own reasons for embarking
on an evaluation. It may fit into a doctoral dissertation, a book
that a professor is writing, or a piece of long-term research that
will eventually be published. Researchers may also have strong
prejudices about the kind of research methods they want to use or
what they expect to find. You may want to discuss these
possibilities up front and specify in your contract that the
evaluator will make your program's needs a priority. Keep in mind,
however, that an evaluator with a strong agenda of his or her own
may actually prove to be more dedicated to your study and/or work
for less money. Just make sure that your agendas, if not the same,
are complementary.
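The overhead arithmetic described under "Budget and cost" above is easy to check
with a quick calculation. The sketch below uses invented dollar figures purely for
illustration.

```python
# Hypothetical illustration of how an overhead (indirect cost) rate
# affects the total price of an evaluation contract.

def total_cost(direct_costs: float, overhead_rate: float) -> float:
    """Return total cost given direct costs and an overhead rate.

    The overhead rate is expressed as a fraction of direct costs;
    1.0 corresponds to a 100% overhead rate.
    """
    indirect_costs = direct_costs * overhead_rate
    return direct_costs + indirect_costs

# Invented example: $20,000 of direct evaluation costs at a 100% overhead
# rate totals $40,000; negotiating the rate down to 60% saves $8,000.
print(total_cost(20_000, 1.00))  # 40000.0
print(total_cost(20_000, 0.60))  # 32000.0
```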
You may have several good candidates for the evaluator's
position. Click here for a tool that can help you document and
prioritize criteria for selecting the best evaluator for your
needs.
Criteria for Selecting an Evaluator
Your team can use the following tool to prioritize criteria for
screening potential evaluators. First, discuss each criterion as a
group and document team members' comments. Then go through the list
a second time and assign each criterion a rank. You may want to use
a blackboard or a large piece of paper for this process.
Criteria for Selecting an Evaluator | Team Members' Comments | Rank
- Evaluation philosophy | |
- Education and experience | |
- Communication skills | |
- Cultural sensitivity | |
- Budget and cost | |
- Time and access | |
- Commitment to your agenda | |
Now that you know what you are looking for, where do you find that
perfect evaluator? There are many different ways and places to
locate qualified candidates, including the following:
Programs similar to yours. Contact other schools or local agencies
that have implemented and evaluated similar drug and violence prevention
activities. They may be able to suggest local evaluators who will be
a good fit for your project. Be sure to ask whether there is anyone
with whom they were dissatisfied.
Safe and Drug-Free Schools Coordinators. Your school district's
Safe and Drug-Free Schools (Title IV) Coordinator is an excellent resource for
information about all phases of the prevention planning process.
This individual is likely to have extensive information about local
prevention resources, including qualified evaluators who may be
interested in working with your program.
State or local agencies. Most state or local government agencies
(e.g., departments of education or public health) have planning and evaluation
departments. You may be able to use individuals from these sections
or they may be able to direct you to other local organizations or
individuals who could work with you.
Local colleges and universities. Faculty in
departments of sociology, social work, education, community
psychology, and public health, and in university-based
research centers often have training and experience in program
evaluation. Some of these professors do work outside their
institutions, including program evaluations.
Research institutes and consulting firms. Professional service
firms and research organizations often employ experienced evaluators who can contract
with you to conduct an evaluation. You can find many of these
organizations in the Yellow Pages under "consultants".
Funder. Ask your funder to help you identify a suitable evaluator.
Funders see many evaluation reports and may know some good candidates in your area.
Furthermore, it makes sense to choose an evaluator whom your funder
knows and respects.
Technical assistance providers. The staff of organizations that
provide technical assistance in the fields of drug and violence prevention, such as
local or regional prevention centers, can be very useful in your
search for an evaluator. Help in identifying an evaluator is an
appropriate technical assistance request.
Professional associations. Associations such as the American
Evaluation Association and the Society for Prevention Research may be able to
provide names of local members who conduct program evaluations.
Evaluation literature. In the library or on the Internet,
look up published evaluation studies on programs like yours.
If the authors are local, contact them to discuss your
program. If the authors are not local, call and ask if they
know of qualified evaluators in your area. By reading these
articles, you will also learn a lot about how evaluation
studies of programs like yours are conducted.
Conference presentations. Look through agendas of conferences that
focus on school-based health promotion and risk prevention efforts. Contact
local researchers to discuss your program or ask researchers in
other areas for local contacts. You may also want to request copies
of the conference papers that they presented.
When selecting your evaluator, begin by considering these questions:
Who will be involved in the selection process? It is
usually a good idea to create a selection committee composed of a
diverse group of program staff and stakeholders. This will reduce
the possibility of individual preferences or prejudices
contaminating the hiring process, provide different perspectives on
the applicants, and make for livelier interviews. If you intend to
work with your evaluator in a collaborative or participatory manner,
a group interview also demonstrates this value and objective to
applicants. Try to keep the group to four or fewer; a larger number
can be seen as intimidating and cause scheduling challenges.
What materials will you request from candidates? In addition to
resumes, ask candidates for reports of evaluations they have conducted and
written up -- particularly if they have worked with programs
similar to your own. Ask for samples of the complete report(s), as
well as copies of executive summaries and presentations that they
have developed to share their findings. Ideally, you could also ask
candidates to prepare a written proposal for your evaluation --
though you might want to reserve this request for your pool of
finalists. To get a good proposal, provide candidates with clear
information about your program's goals, activities, and audience.
How many people will you interview? This will depend on the amount of time you have to devote to this process and the number of qualified candidates applying for the position.
How many levels of interviewing do you plan to do? Will there be a
second interview of the two or three best? Or even a (rare) third
interview?
What questions will you ask of all candidates? To better compare
candidates and protect yourself legally, it is a good idea to generate a list of
four to seven questions that you will ask everyone. Of course, you
can and should also ask other questions that are tailored to each
candidate's background.
How will you schedule the interviews? This can often be one of the
hardest parts of the process. Decide whether you want to schedule interviews
back-to-back (all on one or two successive days), or spread them out
over a longer period. Spreading them out may be easier logistically,
but will lengthen the hiring process and increase the possibility of
losing a good candidate to another job while the process is being
completed.
Choosing Applicants to Interview
Applicants will send you a variety of materials which your hiring
committee will have to read through and rank. There are two
approaches to scoring or rank-ordering applications: formal and
informal.
A formal approach involves assigning points to each criterion
(e.g., evaluation philosophy, education and experience, cost). Candidates accrue a
certain number of points for each criterion they meet: the better
their qualifications, the more points. You can also add points for
such things as living in the target community or particular kinds or
amounts of personal or work experience. You can also score cover
letters according to how well they are written, how much care was
taken with them, and how well they are presented. Once applicants
are scored, you can move forward in one of two ways: (1) you can
interview candidates with the highest scores, or (2) use the scores
as a starting point for group discussion.
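As a rough sketch of the formal, point-based approach described above, the snippet
below scores two invented candidates against weighted criteria. The criteria,
weights, and ratings are all hypothetical and would be replaced by your committee's
own.

```python
# Hypothetical point-based scoring of evaluator candidates.
# Criteria and weights are examples only; substitute the criteria your
# committee agreed on (philosophy, experience, cost, etc.).

criteria_weights = {
    "evaluation_philosophy": 3,   # fit with a participatory approach
    "education_experience": 3,    # relevant training and past evaluations
    "communication_skills": 2,
    "cultural_sensitivity": 2,
    "cost": 2,
}

# Each candidate is rated 0-5 on each criterion (invented ratings).
candidate_ratings = {
    "Candidate A": {"evaluation_philosophy": 5, "education_experience": 4,
                    "communication_skills": 4, "cultural_sensitivity": 3, "cost": 2},
    "Candidate B": {"evaluation_philosophy": 3, "education_experience": 5,
                    "communication_skills": 3, "cultural_sensitivity": 4, "cost": 4},
}

def weighted_score(ratings: dict) -> int:
    """Sum each rating multiplied by its criterion weight."""
    return sum(criteria_weights[c] * r for c, r in ratings.items())

# Rank candidates from highest to lowest total score; the scores can either
# determine whom to interview or simply seed the group discussion.
for name, ratings in sorted(candidate_ratings.items(),
                            key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(name, weighted_score(ratings))
```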
If you use an informal approach, you will rely on discussion rather than a scoring system to make your decisions. You will still need some structure to keep that discussion focused and productive. Either way, the real work usually gets
done during group discussions.
Interviewing Applicants
Interviews typically include introductions, information for the
candidate, questions for the candidate, and questions from the
candidate to the committee. Before conducting your interviews,
consider these questions:
How will you rate the applicants' performance? Consider having a
"training" session before the interviews, particularly for those for whom hiring is a
totally new experience. Taking notes or making an effort to remember
important features of an interview may not be obvious strategies for
someone who has never conducted an interview before. Framing
open-ended questions may take some practice as well.
Who will do what during the interviews? Before each interview,
decide who will facilitate (e.g., offer greetings, introductions, and explanations),
who will provide an overview of the program, who will keep track of
time, and who will move the interview to the next stage. Divide your
standard interview questions among committee members and allow time
for all members to ask any follow-up questions that they may have.
Will there be any other aspect to the interview besides
conversation? For example, will you take candidates on a tour of the school and/or
community? Will they meet with people other than the interview
committee? Will you ask them to demonstrate their competency at
something (e.g., explaining evaluation concepts)?
Following the Interview
If you have not already checked references, now is the time. Be sure
that the references include directors of programs with whom the
evaluator has worked. After references have been checked and
materials reviewed, sit down with your interview committee to
discuss the candidates. Have everyone state and support their
opinion. Hopefully, you will quickly arrive at a common,
mutually-accepted conclusion. If you encounter a disagreement that
you cannot work through, you may need to schedule another interview
-- perhaps one with a different format. Once you reach consensus,
and your candidate of choice accepts your offer, prepare a written
contract for him/her to sign. This will help make sure that you are
all on the same page and ready to proceed.
Developing an Evaluation Contract
The desired relationship between the evaluation team and the
external evaluator is one of partnership and should be reflected as
such in the contract. The contract should state, in a single
paragraph if possible, the evaluator's general responsibilities.
Also include in this paragraph a brief statement detailing your
intended decision-making process and the authority of the evaluation
team. In another paragraph, list and provide a timetable for the
contract deliverables. Many evaluation contracts also specify who
owns the data gathered during the evaluation as well as who has the
right to publish the results of the evaluation study. Finally,
indicate how the evaluator will bill for services rendered and a
schedule of payment. The contract should also detail the evaluation team's
responsibilities to provide the external evaluator with timely and
appropriate guidance, to review and approve evaluation instruments
and documents in a timely and constructive manner, and to assist the
evaluator in solving problems that arise during the evaluation.
Sample Contract
Please note: This sample contract illustrates content typically
included in such documents. It is not intended for use as a legal
contract. Before issuing your own contract, be sure to review it
with your own legal counsel.
The evaluator, _________________________, is responsible for
designing and conducting an evaluation of the Peers Making Peace
program at Taft Middle School. The evaluator is responsible for
guiding the evaluation process. In collaboration with the Evaluation
Team, a subgroup of the Taft Prevention Planning Team, the evaluator
will prepare the evaluation plan, identify and/or develop
appropriate data collection instruments, identify the program
participants who will complete the instruments, and administer the
instruments to the selected participants. The evaluator is further
responsible for data entry, conducting the appropriate statistical
analyses, writing the evaluation report, and presenting the
evaluation's results to the Evaluation Team. The evaluator will then
work with the Evaluation Team to interpret the results and prepare
presentations for school and community stakeholders.
The Evaluation Team will have oversight responsibility for the
evaluation. The evaluator will submit plans, instruments, and
reports to the Evaluation Team for approval. As an advisor to the
Evaluation Team, the evaluator will be expected to attend all team
meetings, unless informed otherwise. The evaluator will report to
Mr. Luis Cabrera, Drug Prevention and School Safety Coordinator and
Evaluation Team chair.
The Evaluation Team will be responsible for making timely decisions
regarding the overall evaluation plan and its components. If the
Evaluation Team recommends changes in the plan, the suggested
changes will be specific and feasible within the scope of this
contract. If the evaluator disputes the feasibility of the changes,
Mr. Cabrera will be the final arbiter. If the Evaluation Team
reverses one of its decisions, and the changes require additional
work on the part of the evaluator, the contract may be modified as
agreed to by Mr. Cabrera. The Evaluation Team will also be
responsible for assisting the evaluator in securing permission for
collecting the evaluation data, as well as assisting the evaluator
in resolving political or logistical barriers to conducting the
evaluation.
The Evaluation Team will assist the evaluator in developing a model
outline for the evaluation report. Finally, the Evaluation Team will
identify the person(s) to whom a presentation of the evaluation's
results will be made.
The evaluation contract will be in effect from July 1, 2003, through
June 30, 2004. The evaluator will deliver the following products at
the times specified below.
1. General evaluation plan | 7-15-03
2. Evaluation instruments | 8-31-03
3. Sampling plan and sampling frame | 9-30-03
4. Data-collection plan | 9-30-03
5. Data-analysis plan | 9-30-03
6. Collection of evaluation data | 3-15-04
7. Evaluation report, including data tape or disc | 5-31-04
8. Presentation (limit of 2) of evaluation results | 6-30-04
A deliverable will not be considered satisfactorily completed until
it is approved/accepted by the Evaluation Team. If a deliverable is
not approved/accepted by the Evaluation Team, specific reasons for
its disapproval/rejection must be provided to the evaluator within
two weeks of the deliverable's receipt.
The payment schedule for the contract is as follows:
- 10% after deliverable #1
- 20% after deliverable #2
- 10% after deliverables #3-5
- 30% after deliverable #6
- 20% after deliverable #7
- and 10% after deliverable #8
Accepted by:

_______________________                  _______________________
Taft Middle School Coordinator           Evaluator

_______________________
Taft Middle School Principal

_______________________                  _______________________
Date                                     Date
Adapted from:
Centers for Disease Control and Prevention & IOX Assessment
Associates. Booklet 7: Choosing and Using an Evaluator. The Handbook
for Evaluating HIV Education. Available on-line at:
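A percentage-based payment schedule like the one in the sample contract above is
easy to sanity-check. The sketch below verifies that the shares sum to 100% and
converts them to dollar amounts for an invented contract total.

```python
# Hypothetical check of a percentage-based payment schedule:
# the shares should sum to 100%, and each converts to a dollar amount.

contract_total = 50_000  # invented contract total for illustration

# payment milestone -> share of the total (mirrors the sample schedule)
schedule = {
    "after deliverable #1": 0.10,
    "after deliverable #2": 0.20,
    "after deliverables #3-5": 0.10,
    "after deliverable #6": 0.30,
    "after deliverable #7": 0.20,
    "after deliverable #8": 0.10,
}

assert abs(sum(schedule.values()) - 1.0) < 1e-9, "shares must total 100%"

for milestone, share in schedule.items():
    print(f"{milestone}: ${contract_total * share:,.0f}")
```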
Your team can use this worksheet to keep track of the different
steps involved in identifying and hiring the right evaluator for
your prevention initiative.
Locating the Right Candidate
Done? | Task | Notes
 | Have you explored these sources: programs similar to your own; your SDFS coordinator; state or local agencies; local colleges and universities; research institutes; consulting firms; your funder; technical assistance providers; professional associations; evaluation literature; conference presentations? |

Narrowing the Pool
Done? | Task | Notes
 | Convene an interview team (Note: Make sure that team members have the time to attend multiple interviews.) |
 | Select the materials you want candidates to submit (e.g., resume, evaluation reports, presentations) |
 | Identify and prioritize ideal qualities/characteristics the evaluator should possess |
 | Develop priorities and a ranking sheet to help you decide which candidates to interview and to evaluate candidates during interviews |
 | Develop a protocol for reviewing resumes (e.g., who will read materials, who will contact candidates, timelines) |
 | Select candidates to interview (Note: the number of candidates may depend on the quality of resumes you receive) |

Preparing for the Interview
Done? | Task | Notes
 | Develop an interview schedule |
 | Have all members of the interview team sign off on the schedule |
 | Develop an interview protocol (e.g., how the interview will be structured, questions to ask, who will ask what) |

Following the Interview
Done? | Task | Notes
 | Meet with team members to compare notes and initial impressions |
 | Review submitted materials, including time-line and budget (remembering to look closely at indirect costs) |
 | Decide which candidates to invite back for a second interview |
 | Assign follow-up tasks to team members (e.g., decide who will participate in follow-up interviews, contact candidates, check references, notify candidates regarding selection status) |
 | Check references |
 | Develop evaluator contract |
 | Contact candidates (by phone) who were interviewed but not selected |
 | Contact candidates (by mail) who were not interviewed |
With your evaluator on board, you are ready for the planning phase.
Your evaluator will work with you and your team members to complete
the following steps:
Involve Key Stakeholders
As with all aspects of prevention programming, it is critical to
identify and involve key stakeholders in the evaluation process.
When stakeholders are not appropriately involved, evaluation
findings are likely to be ignored, criticized, or resisted. When
stakeholders are involved, they can provide valuable assistance
during the evaluation process and become advocates for your
evaluation's findings.
Any or all of the stakeholder groups below may be interested in
participating in the evaluation process and/or learning about the
evaluation results:
- Program funders
- Program staff
- Program volunteers, collaborators, and supporters
- Participants' parents and other caregivers
- School administrators and other nonprogram school staff
- School board members
- County board members and elected officials
- Community leaders and activists
- Media
- Program participants
Click here for a chart showing some of the questions stakeholders may ask and the ways they might want to use
evaluation information.
Focus Your Evaluation
Many prevention initiatives include multiple components (e.g.,
student education, policy enforcement, community awareness,
and information dissemination). The second step in evaluation
planning is working with your evaluator to determine exactly
what you want to study: the whole program or just one part. To
make this decision, consider the following questions:
Which prevention activities will you be able to evaluate? Some
prevention activities are more difficult to evaluate than others. For example, media campaigns
that reach hundreds or even thousands of homes are more difficult to
evaluate than a classroom-based curriculum. Your team will need to
think carefully about how to use limited evaluation resources.
Which elements of the program are most likely to demonstrate
measurable effects? For example, a year-long prevention curriculum is much more likely
than a one-time motivational speaker to influence youth outcomes.
Which part(s) are key stakeholders interested in evaluating, and
why? Different stakeholders will have different perspectives on and questions about
your program, which need to be considered before you can clarify the
purpose of your evaluation.
Which elements require the most resources? It is important to
understand whether or not program components that are very expensive, time-consuming, or
labor-intensive are actually working well.
Which elements have a strong research basis? If strong evidence
shows that a particular component will be effective, you may want to monitor the
implementation of that component.
Another method for defining the scope of your evaluation is to use a
logic model to identify the most important program elements to
evaluate and the data that will be needed. A logic model is a visual
representation of the theory underlying a program. It displays the
relationships between program activities and intended short- and
long-term outcomes. Most of the exemplary and promising prevention
programs identified by the Safe and Drug-Free Schools Program have
logic models that can be adapted for local use following program
selection.
Click here to learn more about how to develop a program logic model.
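Because a logic model is essentially a chain linking goals, activities, and intended
short- and long-term outcomes, it can help to sketch one component of your initiative
in a simple structured form. The example below is entirely hypothetical; the program,
activities, and outcomes are invented for illustration.

```python
# Hypothetical, simplified representation of a program logic model:
# each strategy is linked to the goal it serves and the short- and
# long-term outcomes it is expected to produce.

logic_model = {
    "goal": "Reduce alcohol use among middle school students",
    "strategies": [
        {
            "activity": "10-session classroom prevention curriculum",
            "short_term_outcomes": [
                "Increased knowledge of alcohol-related risks",
                "Improved refusal skills",
            ],
            "long_term_outcomes": [
                "Reduced 30-day alcohol use",
            ],
        },
    ],
}

# A planning team can walk this structure to decide which links to
# evaluate and what data each outcome would require.
for strategy in logic_model["strategies"]:
    print(strategy["activity"], "->", strategy["short_term_outcomes"])
```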
Craft Your Research Questions
For any single program component, there are innumerable issues you
can examine and evaluate. Most fall under one of two areas:
implementation issues and participant outcomes.
Implementation issues. Before assessing program outcomes and/or
impact, you must determine the degree to which actual program delivery matches
intended delivery. As mentioned on Day 1, this type of evaluation is
known as process evaluation. Common process evaluation questions
include the following:
- Did the program recruit and serve individuals from the target
population?
- Did the program deliver services to participants according to the
program design?
- How much of each program service did the typical program participant
receive?
- Who delivered each service?
- How long did participants stay in the program?
- Why did participants drop out of the program?
Click here for more information about process evaluation.
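One common process evaluation measure is dosage: how much of each program service the
typical participant actually received. The sketch below tabulates dosage from invented
attendance records; the numbers are illustrative only.

```python
# Hypothetical dosage calculation for a process evaluation:
# how many of the planned sessions did each participant attend,
# and what was the average share received across participants?

planned_sessions = 10

# participant ID -> number of sessions attended (invented data)
attendance = {"S01": 10, "S02": 7, "S03": 9, "S04": 4}

dosage = {pid: attended / planned_sessions for pid, attended in attendance.items()}
average_dosage = sum(dosage.values()) / len(dosage)

print(dosage)          # per-participant share of the program received
print(average_dosage)  # 0.75 means participants received 75% of sessions on average
```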
Participant outcomes. Questions related to outcomes should assess
the degree to which a program directly affects participants and how it goes about
doing so. Outcome questions must be linked to the intended program
outcomes. You can ask separate questions about each intended outcome
or one broad question that addresses a variety of related outcomes
(e.g., Did the program reduce risk factors and increase protective
factors related to substance abuse among program participants?). The
following are some questions that you might use to evaluate a
school-based drug or violence prevention program:
- To what extent has X behavior (e.g., alcohol use, fighting)
decreased among students over the duration of this project?
- To what extent has academic failure declined over the duration of
this project?
- To what extent has school attendance improved over the duration of
this project?
- To what extent has the number of disciplinary referrals decreased
over the duration of this project?
It may also be beneficial to ask questions that will help you better
understand the outcomes you identify. For example, you might want to
ask whether the program is more effective for some types of
participants than for others, whether some program activities or
services account for more of the program's effects than others, or
whether length of participation is related to outcomes.
Click here for some tips on developing good evaluation questions.
Select the Right Design
If your evaluation team decides to focus on implementation issues,
then the next step is to develop a plan for your process evaluation
and a system for monitoring the delivery of program activities. For
ideas about how to gather information for a process evaluation, talk
to your evaluator and refer to the evaluation guides listed in the
Resources & Links section, as well as the archived event
Implementing Research-Based Prevention Programs in Schools.
If your team decides to examine participant outcomes, then you
will need to select an evaluation design. Four research
designs are commonly used to assess program outcomes:
post-test only, pre- and post-test, pre- and post-test with
comparison group, and pre- and post-test with control group.
These designs vary in their capacity to produce information
that allows you to link program outcomes to program
activities. The more confident you want to be about making
these connections, the more rigorous the design and the more costly the
evaluation. Your evaluator will help determine which design
will maximize your program's resources and answer your team's
evaluation questions with the greatest degree of certainty.
Click here to learn about the four evaluation designs.
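To see why adding a comparison group strengthens the link between outcomes and program
activities, the sketch below contrasts a simple pre/post change with a pre/post-with-
comparison-group estimate. All scores are invented for illustration.

```python
# Hypothetical illustration of two common outcome designs:
# (1) pre- and post-test only, and (2) pre- and post-test with a comparison group.
# Scores might be, for example, average knowledge-test scores.

program_pre, program_post = 60.0, 72.0
comparison_pre, comparison_post = 61.0, 66.0

# Pre/post only: attributes the whole change to the program.
pre_post_change = program_post - program_pre            # 12.0

# Pre/post with comparison group: subtracts the change that occurred
# anyway in a similar group that did not receive the program.
comparison_change = comparison_post - comparison_pre    # 5.0
difference_in_differences = pre_post_change - comparison_change  # 7.0

print(pre_post_change, difference_in_differences)
```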
As the coordinator of this planning process, it is important for you
to make sure that everyone is on the same page before proceeding
from one step to the next. To manage this process effectively, it is
a good idea to check in with evaluation team members and other
stakeholders from time to time to make sure that they have a clear
sense of the evaluation mandate and are comfortable with how the
evaluation plan is coming together.
Discussion Questions
Please think about the questions below and share your
responses, comments, and/or any questions about today's
material in the Discussion Area.
Have you talked with any school and community stakeholders
about the evaluation of your prevention activities? If so, what do they seem most
interested in learning? If not, who are the various
stakeholders you plan to talk to about evaluation issues?
Has your team made any decisions about which part(s) of your
prevention initiative to evaluate? If so, what do you intend to focus on and what
criteria did your team use to make that decision?
Will your team focus on process issues or participant
outcomes? Which processes? Which outcomes?
|
This completes today's work.
Please visit the Discussion Area to share your responses to
the discussion questions! |
References for Day 3 materials:
Center for Substance Abuse Prevention. Evaluation Basics PreventionDSS
3.0. Available on-line at:
http://www.preventiondss.org/Macro/Csap/dss_portal/templates/start1.cfm?sect_id=1&page=/macro/csap/dss_portal/portal_content/eval_intros/eval-nug8-30b.htm&topic_id=5&link_url=processevalintro.cfm&link_name=Evaluation%20Basics.
Harding, W. (2000). Locating, hiring, and managing an evaluator.
Newton, Mass.: Northeast CAPT.
Kantor, G. K. & Kendall-Tackett, K. (Eds.) A guide to family
intervention and prevention program evaluation. Edited and prepared
for electronic dissemination by C. M. Allen. Available on-line at:
http://www.fourh.umn.edu/evaluation/evaluationfiles/family/default.html.
Wandersman, A., Imm, P., Chinman, M., & Kaftarian, S. Getting to
outcomes: Methods and tools for planning, self-evaluation, and
accountability. Volume 1 available on-line at:
http://www.stanford.edu/~davidf/GTO_Volume_I.pdf. Volume 2 available
on-line at: http://www.stanford.edu/~davidf/GTO_Volume_II.pdf.
Western Center for the Application of Prevention Technologies.
Building a successful prevention program; Step 7: Evaluation.
Available on-line at:
http://www.unr.edu/westcapt/bestpractices/eval.htm.
This chart will help you identify the stakeholders that may be
interested in your evaluation results, questions they may want
answered, and ways they might want to use your results.
Stakeholder Group |
Questions of Interest |
Uses of Information |
Funders |
Is the program achieving what was promised?
Is the program working?
|
Accountability |
Program staff and managers |
Are we reaching our target population?
Are participants satisfied with program activities?
Is the program being run efficiently?
How can we improve the program?
|
Programming decisions, day-to-day operations |
Parents and community residents |
Is the program suited to my child's needs?
Is it suited to my community's needs?
What is the program doing?
|
Decisions about whether to participate and/or contribute support |
Public officials |
What is the program doing?
What difference is it making?
Is it reaching the target population?
What do participants think about the program?
|
Decisions about commitment and support, knowledge about the utility
of the program's approach |
Program participants |
Did the program help me?
What could improve the program for others?
|
Decisions about continuing with the program, whether to participate
in similar activities |
Adapted from:
Western Center for the Application of Prevention Technologies.
Building a successful prevention program; Step 7: Evaluation.
Available on-line at:
http://www.unr.edu/westcapt/bestpractices/eval.htm.
The logic model lays out what a program plans to achieve and how it
will work based on a series of logically ordered actions. It links
the following program characteristics:
Goals: the risk and protective factors your program will address
Strategies: the procedures and activities you will implement
Target group: the people who will participate in or will be
influenced by the program
Theory of change (or "If . . . then . . . statements"): the
program's assumptions about why it will work
Short-term outcomes: the immediate changes that are expected in
individuals, organizations, or communities
Long-term impacts: the program's final consequences
One Way to Create a Logic Model
Add information about your program to the chart below to
create a preliminary logic model.
You can also choose to group the different categories of
information into separate boxes and connect them with arrows
to indicate the intended flow. |
Goals
To address the level
of this risk or protective factor:
|
Strategies
We will do the following program activities:
|
Target Group
For these people and for this amount of time:
|
Theory of Change
We expect that this activity will lead to changes in these
factors:
which in turn will lead to our program goal.
|
Short-Term Outcomes
We will know these changes have occurred if:
|
Long-Term Impacts
We will know we are reaching our goals if:
|
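To make the template above more concrete, below is a minimal sketch,
written in Python purely for illustration, of how the pieces of a logic
model connect for a hypothetical classroom-based refusal-skills program.
Every goal, strategy, target group, and outcome named in the sketch is
invented for the example; substitute the details of your own program.

    # A hypothetical logic model captured as a simple data structure.
    # Each field mirrors one column of the template above.
    logic_model = {
        "goals": "Reduce favorable attitudes toward alcohol use (risk factor)",
        "strategies": "Deliver a 10-session refusal-skills curriculum in health class",
        "target_group": "All 8th-grade students, over one semester",
        "theory_of_change": ("IF students practice refusal skills, "
                             "THEN favorable attitudes toward use will decline, "
                             "which in turn supports the program goal"),
        "short_term_outcomes": "Improved refusal skills and attitudes on a post-survey",
        "long_term_impacts": "Lower self-reported alcohol use in annual surveys",
    }

    # Printing the model in order shows the intended flow from goals to impacts.
    for step, description in logic_model.items():
        print(f"{step.replace('_', ' ').title()}: {description}")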
You can build logic models to help with evaluation or to guide the
overall strategic planning process. Building a logic model offers
the following benefits:
It promotes understanding about what the program is, what it
expects to do, and how program success will be measured.
It facilitates monitoring by providing a plan that allows you to
track changes. This makes it easier to replicate successes and avoid
mistakes.
It reveals assumptions by forcing program planners to be more
deliberate about what they are doing and why.
It keeps you grounded by helping program planners and others
realize the limits and potential of any one program.
It enhances communication by providing a clear visual
representation of the program that others can understand.
It provides a framework for evaluation by revealing appropriate
evaluation questions and relevant data that are needed.
Programs are not usually implemented exactly as planned, but are
changed, adapted, and improved. Your logic model should provide a
"picture" of these changes. For more information about building a
logic model, refer to the Resources & Links section.
Through process evaluation, you can determine the degree to which
your program is being implemented as designed and explain any major
deviations. Process evaluation involves collecting information about
the program, including how the program is delivered, and about
program participants.
About the Program
Organizational context, such as type of service agency, size,
years in operation, experience providing services to the target community and
population, community linkages, and reputation in the community
Program setting, such as location, facilities, and community
environment
Target population, such as age range, race or ethnicity, common
risk factors, eligibility criteria, and sources of referral or recruitment into the program
Program staff, such as positions and full-time equivalents (FTEs), qualifications, staff-client congruency, training, and satisfaction with the program
Information about program services, such as:
- Types of services provided
- Frequency of service provision (number of times per week or month)
- Length of each service (number of minutes or hours)
- Duration of each service (number of days, weeks, or months)
- Method of delivery (e.g., one-on-one, group session, didactic
instruction)
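If your team plans to track service delivery electronically, even a very
simple session log can capture the program and service information listed
above. The sketch below is only an illustration, written in Python; the
field names (such as service_type and minutes) and the sample entry are
hypothetical, and your evaluator may recommend a different format or tool.

    import csv

    # Hypothetical fields for a session-level service log; adjust to match
    # the services and delivery methods your own program actually provides.
    FIELDS = ["date", "service_type", "delivery_method", "minutes",
              "staff_member", "participants_attending"]

    # One illustrative entry: a 45-minute group session of a (hypothetical)
    # refusal-skills lesson delivered by a prevention specialist.
    sessions = [
        {"date": "2003-10-07", "service_type": "refusal skills lesson",
         "delivery_method": "group session", "minutes": 45,
         "staff_member": "prevention specialist", "participants_attending": 22},
    ]

    # Writing the log to a CSV file makes it easy to total the frequency,
    # length, and duration of services later in the process evaluation.
    with open("service_log.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(sessions)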
A process evaluation should also document important problems that
were encountered while implementing the program. Deviations from a
program’s design are often caused by implementation difficulties.
For example, program participants may have greater service needs
than the planners anticipated; when program services are expanded, a
different staffing pattern may be required. Information on such
problems and the resulting program changes is useful when
interpreting outcome evaluation findings. This information is also
useful to others who may consider replicating the program.
About Program Participants
Age at program entry
Gender
Race or ethnic identification
Primary language (if appropriate)
Education level at program entry
Marital status (for adults)
Employment status (for adults)
Income sources (for adults)
Risk factors at program entry
You should also consider collecting data on client satisfaction. A
good assessment of client satisfaction requires data on client
perceptions of each program service received and overall
satisfaction with the program.
For more information on how to conduct a process evaluation, visit
the Resources & Links section of this event as well as the archived
event Implementing Research-Based Prevention Programs in Schools.
When crafting your evaluation questions, keep the following
considerations in mind:
Some questions are too broad to answer. Think about breaking down
larger questions into their component parts. For example, consider
replacing, "Is our program duplicating other efforts?" with, "What
other programs exist that are similar to ours? In what ways are they
similar? In what ways are they different? How do these programs
complement ours? How is our program unique?"
Questions should be relevant. For example, questions about a drug
prevention program designed to strengthen family bonding should ask
about family relationships, not school attachment.
Questions should be feasible. Many groups make the mistake of
choosing questions that are interesting and on target, but are impossible to
answer given available resources. Any question that requires more
effort to answer than the evaluation budget can support is not a
good question.
Questions should be useful. Good evaluation questions yield
information that can be directly applied to program management, service delivery,
program planning, and/or policymaking.
While there are many possible ways to structure your evaluation, the
following designs are the most common:
One-Group, Post-Only Design
IMPLEMENT PROGRAM |
ASSESS TARGET GROUP AFTER PROGRAM |
In this design, you would administer a post-test (e.g., survey) to
your target group after participants have received an intervention.
This design is common and relatively inexpensive, but it does not
allow you to statistically measure changes from baseline (before the
intervention), nor does it allow you to measure change in relation
to other groups of people who did not take part in the intervention.
-
One-Group, Pre- and Post-Program Design
ASSESS TARGET GROUP BEFORE PROGRAM |
IMPLEMENT PROGRAM |
ASSESS TARGET GROUP AFTER PROGRAM |
In this design, you assess your target group both before and after
program implementation. The strength of this design is that it
provides baseline information that you can compare with your
post-test data. For this design to work, you must administer the
same instrument in the same way both before and after the program.
This design can tell you whether your target group made
improvements, but cannot assure you that your program was
responsible for the outcomes. Alternative explanations are still
possible (e.g., change occurred because participants matured over
time).
-
Pre- and Post-Program with Comparison Group Design
ASSESS TARGET GROUP BEFORE PROGRAM |
IMPLEMENT PROGRAM |
ASSESS TARGET GROUP AFTER PROGRAM |
ASSESS COMPARISON GROUP BEFORE PROGRAM |
|
ASSESS COMPARISON GROUP AFTER PROGRAM |
In this design, you assess both your target group and another
similar group that does not receive the program, both before and
after implementation. The addition of a comparison group helps you
determine whether or not your target group would have improved over
time even if they had not experienced your program. The more similar
the two groups are with respect to variables that may affect program
outcomes (e.g., gender, race or ethnicity, socioeconomic status, and
education), the more confident you can be that your program
contributed to any detected changes. This design also helps control
for test effects (e.g., improvements on the post-test due to
participants' experience with the pre-test).
However, this design does increase both the expense and complexity
of your evaluation. It also leaves room for alternative
explanations, since the program and comparison groups may differ in
some important ways.
-
Pre- and Post-Program with Control Group Design
RANDOMLY ASSIGN PEOPLE FROM THE SAME TARGET POPULATION TO GROUP A OR
GROUP B |
TARGET GROUP A |
ASSESS TARGET GROUP A |
IMPLEMENT PROGRAM WITH TARGET GROUP A |
ASSESS TARGET GROUP A |
CONTROL GROUP B |
ASSESS CONTROL GROUP B |
|
ASSESS CONTROL GROUP B |
This design offers the greatest opportunity to attribute evaluation
outcomes to program activities. By adding a control group to your
pre- and post-program design, you introduce the element of random
assignment. When you randomly assign individuals to either a target
or a control group, all members of the target population have an
equal chance of winding up in either group. This fact should ensure
that members of the target and control groups are equivalent with
respect to many key variables that could affect their performance on
the pre- and post-tests. Of the four designs discussed here, this is
the most complex and expensive to conduct, but it also provides the
highest level of certainty that it was your program that caused any
changes detected by your evaluation.
Below are some important issues to consider when selecting a design:
Complex evaluation designs are more costly, but allow for greater
confidence in a study's findings.
Complex evaluation designs are more difficult to implement, and so
require higher levels of expertise in research methods and analysis.
Be prepared to encounter stakeholder resistance to the use of
comparison or control groups, such as a parent wondering why his or her child will
not receive a potentially beneficial intervention. Click here to see
some suggestions for dealing with objections to random assignment.
No evaluation design is immune to threats to its validity; there
is a long list of possible complications associated with any evaluation study.
However, your evaluator will help you maximize the quality of your
evaluation study.
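If it helps to see the logic of the most rigorous design in concrete
terms, the short sketch below, written in Python purely for illustration,
randomly assigns a recruited group of students to a program (target)
group and a control group and then compares the average change from
pre-test to post-test in each group. The roster size, score values, and
the assumption that lower scores are better are all hypothetical; in
practice your evaluator would use real instruments and appropriate
statistical tests rather than a simple comparison of averages.

    import random
    from statistics import mean

    random.seed(1)  # fixed seed so the illustration is reproducible

    # Hypothetical roster of students recruited for the program.
    students = [f"student_{i}" for i in range(40)]

    # Random assignment: every student has an equal chance of ending up
    # in the program (target) group or the control group.
    random.shuffle(students)
    target_group = students[:20]
    control_group = students[20:]

    # Hypothetical pre- and post-test scores (e.g., a risk-behavior scale
    # where lower scores are better). In a real evaluation these would come
    # from the same instrument administered the same way both times.
    pre_scores = {s: random.uniform(40, 60) for s in students}
    post_scores = {s: pre_scores[s] - (8 if s in target_group else 2)
                   + random.uniform(-3, 3) for s in students}

    def average_change(group):
        """Mean change from pre-test to post-test for one group."""
        return mean(post_scores[s] - pre_scores[s] for s in group)

    # Comparing the two changes hints at whether the program, rather than
    # maturation or other outside factors, accounts for the improvement.
    print("Target group change: ", round(average_change(target_group), 1))
    print("Control group change:", round(average_change(control_group), 1))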
|
Dealing with Objections to Random Assignment |
|
Most objections to using random assignment to evaluate drug and
violence prevention programs are based on perceptions that it is
unfair, denies program services to individuals, or withholds
effective services. These perceptions are often based on
misunderstandings about the nature of this technique.
Is random assignment unfair? The answer is no. Random assignment
provides everyone who is recruited for a program an equal chance of
receiving program services. This is not the case with all assignment
methods. For example, assigning individuals based on perceived need
is unfair because perceptions of need may be biased. Similarly,
assigning individuals on a first-come, first-served basis is unfair
because it discriminates against individuals whom you recruit later.
Does random assignment bring about denial of services? Again, the
answer is no. Few prevention programs have the resources to provide
services to all members of a target population. When a program
reaches its service capacity, new clients are turned away whether or
not random assignment is in place.
Does random assignment withhold effective services? The purpose of
evaluating a program is to determine service effectiveness. Random
assignment provides the best information possible about program
effectiveness. This knowledge will allow you to make good decisions
about the program's future -- including whether or not to expand it
to serve a greater percentage of your target population.
|
Your evaluation team's research questions will drive its selection
of research methods. Selecting methods before identifying questions
is like putting the cart before the horse. Research methods fall
into two general categories: quantitative and qualitative. Once your
evaluator helps your team choose the approach that is most
appropriate, you can go on to identify specific research methods
that will help answer the evaluation questions.
Quantitative vs. Qualitative Approaches
Generally speaking, quantitative approaches to data collection deal
with numbers and answer the questions who, what, where, and how
much. Qualitative approaches deal with words (stories) and answer
the questions why and how.
For example:
Quantitative: Disciplinary reports reveal a 10 percent decrease in
incidents of physical fighting on school premises.
Qualitative: According to a participant in the Peers Making Peace
program, "I have learned a lot about negotiation and mediation in this
program, and I've actually managed to help some students resolve their conflicts
peacefully here at school. It's a really good feeling!"
|
Quantitative data can be counted, measured, and reported
in numerical form. This approach is useful for describing concrete phenomena and
for statistically analyzing your results (e.g., calculating the percentage decrease
of cigarette use among 8th grade students). Some examples of quantitative data
include test scores, attendance rates, drop-out rates, and survey rating scales.
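As a small worked example of the arithmetic involved, the sketch below,
written in Python with made-up numbers, computes the percentage decrease
in an indicator between the pre-test and the post-test; the counts shown
are hypothetical.

    # Hypothetical counts of 8th-grade students reporting cigarette use
    # on the same survey, administered before and after the program.
    users_before = 60
    users_after = 45

    # Percentage decrease relative to the baseline count.
    percent_decrease = (users_before - users_after) / users_before * 100
    print(f"Cigarette use decreased by {percent_decrease:.0f} percent")  # 25 percent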
Benefits of collecting quantitative data include the following:
- Tools can be delivered more systematically.
- Data are easily compiled for analysis.
- Tools can be used with large numbers of study participants.
- Findings can be presented succinctly.
- Tools tend to be standardized, allowing for easy comparison within and across studies.
- Findings are more widely accepted as generalizable.
|
Qualitative data are reported in narrative form. Examples
include written descriptions of program activities,
testimonials of program effects, comments about how a program
was or was not helpful, stories, case studies, analyses of
existing files, focus groups, key informant interviews, and
observations. You can use qualitative information to describe
how your program functions and what it means to the people
involved. Through qualitative data, you can place your program
in context, and better understand and convey people's
perceptions of and reactions to it.
Benefits of collecting qualitative data include the following:
- It promotes understanding of diverse stakeholder perspectives (e.g., what the program means to different people).
- Stakeholders may find quotes and anecdotes easier to understand and more appealing.
- It may reveal or shed light on unanticipated outcomes.
- It can generate new ideas and/or theories.
|
Click here for a detailed overview of different quantitative and qualitative data collection methods. |
Benefits of a Mixed-Method Approach
The ideal evaluation uses a combination of quantitative and
qualitative methods. A mixed-method approach offers a range of
perspectives on your program's processes and outcomes. Benefits of
this type of approach include the following:
It increases the validity of your findings by allowing you to examine
the same phenomenon in different ways. This approach -- sometimes
called triangulation -- is most often mentioned as the main advantage
of the mixed-method approach. Think of a stool; there must be at least three legs
for it to be stable. Each leg represents a different approach to or method of
data collection, holding up the program as represented by the seat.
It can result in better data collection instruments. For example,
it is beneficial to conduct focus groups to inform the development or selection of a
questionnaire.
It promotes greater understanding of your findings. Quantitative data
can show that change occurred and how much change took place, while qualitative
data can help you understand why.
It offers something for everyone. While some stakeholders may
respond more favorably to a presentation featuring charts and graphs, others may
prefer anecdotes and stories.
Although it may increase the expense and complexity of your
evaluation, a mixed-method approach is still -- resources permitting
-- the way to go. By using different sources and methods at various
points in the evaluation process, your evaluation team can build on
the strengths of each type of data collection and minimize the
weaknesses of any single approach.
Protecting Program Participants
Before gathering data, your evaluation team should decide how to
address the following important issues:
Consent. Everyone participating in your evaluation must
be informed about the study's purpose, what will be expected
of them, and possible benefits and drawbacks of participation.
If they agree to participate, they must be given the
opportunity to skip questions or stop participating at any
time, with no negative consequences.
Active Parental Consent
Any data collection effort funded by the U.S. Department of
Education that asks minors about the following topics requires
active, or written, consent from their parents/guardians:
- political affiliations or beliefs of the student or the
student's family,
- mental and psychological problems of the student or the
student's family,
- sex behavior or attitudes,
- illegal, anti-social, self-incriminating, or demeaning
behavior,
- critical appraisals of other individuals with whom respondents
have close family relationships,
- legally recognized privileged or analogous relationships, such
as those of lawyers, physicians, and ministers;
- religious practices, affiliations, or beliefs of the student
or student's family,
- income (other than that required by law to determine eligibility for participation in a program or for receiving financial assistance under such program).
|
Confidentiality. This means that participants' responses will not
be shared with
anyone outside your evaluation team, unless the information shows
that a participant has an imminent intent to harm himself or herself
or others. Confidentiality protects the privacy of participants and
thus increases the likelihood that they will respond to your
questions candidly.
Anonymity. Whenever possible, try to collect data in a manner that
allows participants to remain anonymous. This means that no one, including
members of your evaluation team, has the capacity to match
participants to their responses.
It is important that your evaluation team becomes familiar with the
U.S. Department of Education's (USED) Family Educational Rights and
Privacy Act (FERPA) and Protection of Pupil Rights Amendment (PPRA).
These federal laws, which apply to schools and programs that receive
USED funding, are designed to protect the rights of parents and
students during the research process. Click here to visit the USED
Web site to learn more about these important provisions.
Discussion Questions
Please think about the questions below and share your
responses, comments, and/or any questions about today's
material in the Discussion Area.
How do you think the various stakeholders in your school and
community would
respond to quantitative vs. qualitative approaches to
collecting and presenting evaluation data? What approach(es)
and method(s) has your evaluator recommended, and why?
Are you using, or do you plan to use, a mixed-method
approach to the evaluation of
your school's prevention activities? If so, what are the
various methods you are using or might you use? How do they
complement one another?
What steps has your team taken, or will it take, to protect
the privacy of students and families during the data collection process?
|
This completes today's work.
Please visit the Discussion Area to share your responses to
the discussion questions! |
References for Day 4 materials:
Central Center for the Application of Prevention Technologies.
Approaches to prevention evaluation. Available on-line at:
http://www2.miph.org/capt_eval/.
Frechtling, J., Sharp, L., & Westat (Eds.). User-friendly handbook
for mixed-method evaluations. Available on-line at:
http://www.ehr.nsf.gov/EHR/REC/pubs/NSF97-153/start.htm.
McNamara, C. Basic guide to program evaluation. Available on-line
at:
http://www.mapnp.org/library/evaluatn/fnl_eval.htm#anchor1581634.
Wandersman, A., Imm, P., Chinman, M., & Kaftarian, S. Getting to
outcomes: Methods and tools for planning, self-evaluation, and
accountability. Volume 1 available on-line at:
http://www.stanford.edu/~davidf/GTO_Volume_I.pdf. Volume 2 available
on-line at: http://www.stanford.edu/~davidf/GTO_Volume_II.pdf.
Method |
Overall Purpose |
Advantages |
Challenges |
Questionnaires, surveys, and checklists |
When you need to quickly and/or easily get lots of information from people in a non-threatening way |
- Can be completed anonymously
- Is inexpensive to administer
- Is easy to compare and analyze
- Can be administered to many people
- Can produce a lot of data
- Sample questionnaires already exist |
- You might not get careful feedback
- Wording can bias a client's responses
- Can be quite impersonal
- Surveys may require a sampling expert
- Do not tell the full story |
Interviews |
When you want to fully understand someone's impressions or
experiences, or learn more about their answers to questionnaires |
- Collects full range and depth of information
- Develops relationship with client
- Can be flexible with client |
- Can take much time
- Can be hard to analyze and compare
- Can be costly
- Interviewer can bias a client's responses |
Documentation review |
When you want an impression of how a program operates without
interrupting the program; involves reviewing applications, finances,
memos, minutes, etc. |
- Collects comprehensive and historical information
- Does not interrupt program or client's routine in program
- Information already exists
- Involves few biases about information |
- Can take much time
- Information may be incomplete
- Need to be quite clear about what you are looking for
- Inflexible; data are restricted to what already exists |
Observation |
To gather accurate information about how a program
actually operates, particularly its processes |
- Views program operations as they occur
- Can adapt to events as they occur
|
- Can be difficult to interpret people's behaviors
- Can be complex to categorize observations
- Can influence behaviors of participants
- Can be costly
|
Focus groups |
To explore a topic in depth through group discussion, e.g.,
reactions to an experience or suggestion, understanding common
complaints, etc.; useful in evaluation and marketing |
- Quickly and reliably collects common impressions
- Can efficiently collect broad and deep information
- Can convey key information about programs |
- Can be hard to analyze responses
- Need a good facilitator
- Can be difficult to schedule |
Case studies |
To fully understand or depict a client's experiences in a program,
and conduct comprehensive examination through comparing cases |
- Fully depicts client's experience with program
- Powerfully portrays program to outsiders |
- Can take much time
- Collects deep, but not broad information |
Adapted from:
McNamara, C. Basic guide to program evaluation. Available on-line
at:
http://www.mapnp.org/library/evaluatn/fnl_eval.htm#anchor1581634.
Over the last four days, we have discussed the meaning and purpose
of program evaluation, how to select an evaluator to work with your
planning team, and some of the steps involved in designing an
evaluation. Still, with all that we managed to cover, this on-line
event has only scratched the surface of the topic of program
evaluation.
There are numerous organizations and materials that can help you and
your planning team dig deeper into this important topic. Some of
them are listed in the Resources & Links section. On this final day
of the event, please do the following:
Review the list of additional resources located in the
Resources & Links section. You will find information about
organizations, evaluation guides, and other resources.
Identify one resource that you find interesting, follow the
link, and spend some time learning about the organization or
reviewing the publication.
Visit the Discussion Area to share with your fellow
participants and the event facilitator the link you followed
and any interesting tips you learned.
Please also take some time today to read the summary of this week's
on-line discussion and share any additional thoughts -- either about
the topic of program evaluation or about this on-line event -- in
the Discussion Area.
Thank you for participating in Are You Making Progress? Increasing Accountability Through Evaluation.
We hope that you enjoyed the event! |
This week's discussion has been excellent! Thank you for sharing
your ideas and experiences in the area of program evaluation with
one another. Below is a
brief summary of your discussion from Day 4, as well as some
highlights from the discussion earlier this week.
DAY 4 SUMMARY:
Most coordinators appreciate both quantitative and qualitative data
collection methods and intend to use, or are in the process of
using, a mixed-method approach to evaluation.
"The stakeholders in our school and community believe that the mixed
method approach will provide us with valuable insight to determine
program effectiveness."
"We are using a mixed method approach that involves the collection
of quantitative data for academic achievement and incidents of
behavior. The data is then entered into our Safe Schools Alert
System. We also collect qualitative data through focus groups."
"I know that the stakeholders expect both quantitative and
qualitative evaluation data. I strongly believe that both are
beneficial. I already use both methods and feel comfortable that
everyone is able to take at least one thing from the evaluation data
and find it useful."
Surveys, focus groups, and statistics from school databases appear
to be a very popular combination for evaluations of prevention
activities.
"We will be using a mixed method approach. We have taken the Search
A&B Survey and I have held focus groups at the high school level. I
will be holding more focus groups in the fall. We are also
conducting a community survey, parent survey, and teacher survey to
measure attitudes, perceptions, and readiness. We have a school
database called Power School that allows easy access to absentee,
truancy, failure, etc. rates for the quarter, semester, year, and
past years. This is a great tool for gathering quantitative data."
"Since we utilize the participatory method of evaluation, we decided
that both quantitative and qualitative data would be the better way
to go in reporting our findings. This way, the stakeholders have a
more comprehensive picture of what is occurring in the schools, both
pre and post evaluation. Surveys and focus group interviews have
helped in gaining much of this information. In addition, we have our
data-base system that gives us up to date info on students, as well
as the state survey info that we use as well."
Although all coordinators recognize the value of the mixed-method
approach, some shared a preference for quantitative methods.
"For my survey on 'The Extent of Bullying in Your School', the
evaluator has recommended a quantitative approach. He has indicated
that we can reach more stakeholders by using this approach."
"Stakeholders in our district will be more comfortable with
quantifiable evaluation methods. Due to size of our middle school
campus with over 3000 students, data is readily available and the
utility and ease of presentation makes this approach easily
understandable. I have been successful in presenting in chart form
data that district staff had not seen before and have found it
useful in planning for the future years."
Other coordinators shared a preference for qualitative methods.
"We used some new anti-smoking materials in the health classes. I
collected some process stats on it, but I knew it was effective when
I was walking through the school and saw a student's English essay
where she wrote about the danger of second hand smoke and quoted
some of the new material. I just got chills from the top to bottom
and knew that we accomplished something. I got a copy of the essay
and saved it for the evaluator."
"I believe most of the stakeholders initially would prefer to use a
quantitative approach simply because it is easier. However, since
the initiation of our antibullying training, I am finding that the
teachers seem to respond more to a qualitative approach. One of
their major concerns is that no one hears them. By giving them an
opportunity to talk, I receive responses that are much more
enlightening than using a quantitative approach."
"I know as a teacher, I learned so much from my students and parents
when comments and evaluations were in narrative form."
Coordinators clearly recognize the importance of protecting the
privacy of students and their families, and are taking all necessary
precautions to do so.
"Protecting the confidentiality of children was difficult at first.
I pulled some kids that were recently identified as bullies for a
focus group and their teachers immediately wanted me to tell them
everything that one of the students had said. I said no and that
teacher was really angry. It was months before she would let me pull
kids from her class. A lot of people don't understand our roles. I
am not a cop, I am a coordinator. It might have been easier for an
evaluator to have done the focus group, but I have rapport with
those students. They knew that I was a school employee and were
willing to open up. I don't think they would do that for an
outsider. So I set ground rules. You say something in a focus group,
I write it down, but not your name. After the fact, I have no record
of who said what. (I also have someone from guidance there, so if an
issue comes up and the student wants to follow it up they can with
someone who knows what went on)."
"We require that each student and family that participates
understands that all the information is strictly confidential. We
have consent forms and surveys that families sign and fill out. We
have focus groups that are optional for students and parents.
Considering we have such a great turn out, I believe that our
families feel protected and confident that the information they
share is confidential."
"We go to great lengths to protect the anonymity of students. We
send out permission forms by postal service as well as have a copy
of the survey in the school office for parents to view."
"We are using an interview methodology that keeps everything private
in the data collection process. Only the responses are recorded. No
one knows who said what."
DISCUSSION HIGHLIGHTS FROM DAYS 1-3:
A few coordinators mentioned some problems that they have
encountered when working with traditional evaluators, including
difficulties getting on the same page and lack of input.
"It's been a challenge with our evaluator. Trying to get him to
understand what we want and how we want it. It's really
frustrating."
"I feel that traditional evaluation is very limited. It gives very
little opportunity for voice and choice."
Several coordinators voiced a clear preference for the participatory
approach to evaluation for the following reasons:
It enhances the skills of school-based personnel.
School and community stakeholders want to be included.
Collaboration can help shed light on program-related issues.
It fosters a positive working relationship between school personnel
and evaluators.
It offers practical benefits, including reduced cost and increased
buy-in.
However, a few coordinators voiced an appreciation for the
traditional approach to evaluation (e.g., potential for greater
objectivity), as well as some reservations about the participatory
approach to evaluation (e.g., greater time commitment).
"I prefer, in general, the traditional form of evaluation. My
reasoning is that all of the information is being interpreted by one
person and it leaves little room for discrepancy."
"One advantage to this approach is the evaluator is outside of the
school/district and has an objective position."
"Time remains the number one significant factor. With deadlines,
priorities, etc., it is extremely difficult to put in the time
needed to conduct a participatory evaluation in an effective
manner."
Although one coordinator mentioned that it can be difficult to get
school and community partners to share information with those
external to the school/district, another highlighted the importance
of familiarizing external evaluators with the unique characteristics
of your school setting.
"It's especially important to document and share situations
indigenous to one's campus when using an outside evaluator. For
example, some schools have high teacher turnover rates affecting
their program implementation, continuity, and outcomes. Combine that
with a student mobility rate of over 50% and a coordinator has a
formidable task given that the key to incident reductions across the
board is one's feeling of being connected to school. Of course there
are also positive indigenous factors that contribute to successes
that would otherwise go unexplained to an outside evaluator."
Some coordinators are working with evaluators who have both internal
and external qualities, which seems to be working very well.
"We are fortunate to work with an evaluator who was internal for
almost five years, but now she has moved to California, making her
external. She was familiar with the inner workings of every program
in our Department, which is a tremendous asset. Now that she is
external, we still have access to her and she knows us. She was here
for a two week period and came to our school sites to familiarize
herself when we send our information in. We communicate now by
e-mail and that works, since California has a three hour time
difference."
"Our evaluator has previously worked in our school district. He is
currently employed outside the school district, so I would consider
him a hybrid. His memory allows him to merge present observations
with past experiences to support us in our endeavor."
Whether internal or external, it is clear from many coordinators'
comments that it is imperative to maintain regular communication and
contact with your evaluator.
"In addition to the quarterly meeting, we turn in a detailed monthly
report on all of the activities related to MSC responsibilities. The
challenge of working with this evaluator is keeping up with the
paperwork, but the benefit is having a good working relationship
with the evaluator which translates into having a stronger
prevention program for our individual school sites."
"The evaluator has contact with all 10 middle school coordinators
either at our monthly meeting or through E-mail correspondence."
Several coordinators also mentioned the value of making sure that
evaluators truly understand the goals of your schools' prevention
initiatives.
"Each county has received a State Incentive Grant (SIG). This grant
is very similar to ours in terms of requirements etc. We have found
that the evaluator that is being used for the SIG Grants in our
counties fits our grant very well. We meet with her tomorrow to work
out the details for her to sign a contract. I am very excited
because all of our work between CESA and SIG and the MSC grant meld
wonderfully and I am hoping that this joint involvement will really
help with sustainability for everyone involved!"
"Our district contracted with the writer of our grant as the
evaluator. It was a positive move due to the fact he was very in
tune to our goals. Our district has now created a testing and
assessment dept. so we will in the future save on cost with an
internal evaluator."
The following stakeholders were cited as particularly important to
include in the evaluation process:
- Parents
- Students
- Community groups
- Law enforcement
- Teachers
- Guidance Counselors
- District Director of Instruction
- District Pupil Services Director
- District Safe and Drug-Free Schools Coordinator
- PTA
- School Advisory Council/School Board
- Principal
- Superintendent
Coordinators reported that their schools and other stakeholders were
particularly interested in the following topics:
Disciplinary data
Prevention curricula
Academic achievement
School climate issues
Finally, coordinators shared a genuine interest in logic models as a
tool for planning prevention activities and communicating
information about activities to diverse stakeholder groups.
"The logic model can be applied to many areas of the MSC initiative.
Presentation to stakeholders, rationale for selection of prevention
program, and use in crisis planning are the areas I know I will be
able to apply this model."
"I have used Logic Models before and feel this is a concrete way for
my Board to look at program accomplishments."
Below are just some of the available resources on program evaluation
that you can access via the World Wide Web. For more information,
you may want to consult with the staff at your local or regional
prevention center and departments of education and public health.
You can also ask your evaluator, your district's Safe and Drug-Free
Schools Coordinator, and other colleagues for recommended readings
on evaluation.
Organizations and Associations
The following organizations and associations can provide you with
valuable information about program evaluation:
Program Evaluation Guides
The following guides address many of the topics touched on in this
on-line event and provide step-by-step instructions for planning and
implementing program evaluations:
While many of the above guides include information about logic
models, you can learn even more about this topic from the following
resources:
For more information, check out the resources
compiled by the CDC Evaluation Working Group.
Whether you are a computer expert or novice, you may need some
guidance on using this Web site. This document will provide
you with an overview of site mechanics, including how to get
from one place to another. Two additional tip sheets,
Participating in On-line Events and Using the Discussion Area,
will orient you to issues associated with actual
participation.
There are several ways to move around this Web site. When you
enter the site, you will automatically arrive on the Home
page. Here you will find brief instructions for what to do and
where to go first in order to orient yourself to the event.
These instructions include links that you will be asked to
follow.
Using Links to Get Around: As you read through the event,
you will encounter links that can connect you to related materials and
resources. Links will always appear as underlined blue or
purple (depending on your browser) text.
Using the Menu at the Side of Your Screen to Get Around: If
you look to the side of your screen, you will notice a yellow
sidebar which lists each main section of the event Web site.
Just click on one of the titles to travel to that page. The
page you are currently on will be indicated by an arrow. When
you travel to different materials found within a main section
of the Web site, a link in the sidebar will allow you to
return to the main section. For example, clicking on the link
in the sidebar to the left will take you to the main Resources
& Links page.
Using the Yellow Box at the Bottom of Your Screen to Get
Around: Clicking on the yellow box at the bottom of your screen will
return you to the main page of that event section. For
example, clicking on the box at the bottom of this page will
take you to the main Resources & Links page.
Whatever method you choose to navigate this Web site, you can
always use the "Back" button on your browser to return to the
last document you were reading.
Facilitated communication among participants in this on-line
workshop will be asynchronous, meaning that coordinators can log on
to the event at their convenience to read and contribute messages.
Here are a few tips to keep in mind as you participate in this
exciting on-line event:
Your involvement is the key to event success! We hope to have
enjoyable and stimulating discussions, but that can only happen if
you log on and participate.
Make sure that you have adequate time to review new information and
messages.
Log in at least once a day and participate in the on-line
discussions as often as you can. You can share long or short messages, ask big
or small questions, or contribute brief reactions to the messages
posted by other coordinators and facilitators.
You can compose, review, and edit messages in a word-processing
program (e.g., Microsoft Word) or in the event's Discussion Area
prior to posting your messages on-line. Your messages will not
appear on-line until you actively choose to post them. This allows
you time to think about what you want to say and how you would like
to say it.
When you reply to a message that was posted by a fellow coordinator
or a facilitator, make sure to refer to the original message in your
response so that others can follow the conversation.
To participate more fully during the event, try enabling the
mailing list feature (see Using the Discussion Area). This will enable you
to receive all discussion postings by e-mail.
If you have any technical questions or problems, please do not
hesitate to submit a request for assistance to Event Support. We
promise you a prompt response.
Relax and have fun with this opportunity to learn and connect with
your fellow drug prevention and school safety coordinators!
These tips can help both experienced and novice Web users fully
participate in and benefit from on-line event discussion.
Understanding the Lingo
New technology often assigns new meaning to "old words." Here are
some commonly used terms you may encounter when you participate in
on-line discussions throughout this event.
WebBoard: The software used on the general Drug Prevention and
School Safety Coordinator Web site and the on-line event Web sites to
support on-line discussions among coordinators and Training Center
staff.
Discussions: On-line "conversations" taking place within central
Discussion Areas of the WebBoard. Discussions appear on the left side of the
screen. One or more Discussion Areas will be available to you during
an event.
Topic: A specific thread of discussion within a Discussion Area of
the WebBoard. Topics appear indented, under a particular discussion.
Post: Sending a message to the Discussion Area of the WebBoard
that begins or continues a thread of discussion. You must first select a
discussion before posting a new topic (or continuing an ongoing
one).
- Reply: A posting/message made in response to another
posting/message, always threaded under an existing topic on the WebBoard. Replies
appear indented beneath the messages to which they correspond.
For a more extensive list of definitions, visit the event Glossary.
Viewing Topics
To view topics within a discussion, click on the plus symbol [+]
next to a discussion name (or the discussion name itself). You can
also click on the plus symbol [+] next to a topic to view the
replies beneath it.
Posting a Message
If the message you are sending to a discussion begins a new topic
(rather than adding to a current topic), you are posting a message.
To post a message, follow these steps:
Select a discussion by clicking on its name.
Use the Post button on the top (color) toolbar, or open a message
in the topic and select Post from the options at the top of the message. You will
see a Message Creation Form. Enter your new topic in the topic
field.
Type your message in the text box.
Select the Post button on the form.
Responding to a Message
To respond to a message, follow these steps:
Open the message by clicking on it.
Select Reply from the menu at the top of the message.
You will see a Message Creation Form; the current message's topic
will appear in the topic field. You can edit this topic if you wish.
Type your response in the text box.
Click on the Post button. You're done!
Subscribing to a Discussion via E-mail
You have the option of subscribing to a discussion via e-mail. This
means that you can receive new discussion postings as regular
e-mails, and you can respond to them as regular e-mails as well. To
subscribe to a particular Discussion, follow these steps:
Select More from the toolbar.
Select Mailing Lists from the list of options.
Subscribe to the discussions you want by clicking on the
appropriate checkboxes.
Save your changes -- and you're done!
Attachments made to a WebBoard posting/message will not be carried
through e-mail. You must open the posting via the Web in order to
retrieve an attachment.
Attaching a File to a Message
If you use Netscape 3 or above (or Explorer 4 or above), you can
attach documents to a message in a WebBoard Discussion. To attach a
document, follow these steps:
Create a message either by posting or replying.
Select the Attach File checkbox by clicking on it.
Post the message. If you have Preview/Spell Check selected, you
must click on Post twice.
You will see a form for uploading documents. Click on the Browse
button to look for the document you wish to attach.
Select the document and click on Upload Now. You're done!
|
Glossary |
|
coverage: the extent to which the program is serving the
intended target population.
empowerment evaluation: evaluation that is designed to support
program participants and staff in self-evaluation of their own programs (a form
of internal evaluation).
evaluation: systematic collection and use of program information
for multiple purposes, including monitoring, program improvement, outcome assessment,
planning, and policy-making.
external evaluation: evaluation done by consultants or researchers
not working for the same organization as the program.
formative evaluation: evaluation that is designed to collect
information that can be used for continuous program improvement.
impacts: usually used to refer to long-term program outcomes.
input: resources available to the program, including money,
staff time, volunteer time, etc.
internal evaluation: evaluation done by staff within the
same organizational structure as the program.
logic model: a flowchart or graphic display representing
the logical connections between program activities and program goals.
monitoring: a type of evaluation designed to ensure that
program activities are being implemented as planned (e.g., the number of participants,
number of hours, type of activities, etc.).
output: the immediate products or activities of a program.
outcome: ways in which the participants of a prevention program
could be expected to change at the conclusion of the program (e.g., increases
in knowledge, changes in attitudes or behavior, etc.).
multivariate analysis: a statistical term referring to analyses
that involve a number of different variables. For example, an analysis that looked
at whether peer factors and individual factors both influence alcohol use would
be called "multivariate."
participatory evaluation: evaluation that involves key stakeholders
in the design, data collection, and interpretation of evaluation methods.
process evaluation: evaluation that is designed to document
what programs actually do: program activities, participants, resources, and other
outputs.
stakeholders: those persons with an interest in the program
and its evaluation (e.g., participants, funders, managers, persons not served
by the program, community members, etc.).
summative evaluation: evaluation that is designed to collect
information about whether a program is effective in creating intended outcomes.
triangulation: the use of multiple data sources and methods
to answer the same research question.
qualitative data: information that is reported in narrative
form or which is based on narrative information, such as written descriptions
of programs, testimonials, open-ended responses to questions, etc.
quantitative data: information that is reported in numerical
form, such as test scores, number of people attending, drop-out rates, etc.
Western CAPT |
This bibliography of selected resources on evaluation concepts for
practitioners was compiled by Social Science Research and Evaluation
(SSRE) and annotated by SSRE and the Northeast CAPT.
Andrews, F.M., Lem, L., Davidson, T.N., O'Malley, P., Rodgers,
W.L. (1978). A Guide for Selecting Statistical Techniques for Analyzing Social
Science Data, Ann Arbor, Michigan: Survey Research Center, Institute
for Social Research, University of Michigan.
This guide uses decision trees to map the choices involved in
selecting an appropriate statistical technique for a given analysis.
More than 100 different statistics or statistical techniques are
included in the guide. Some knowledge of statistics is assumed.
Carmona, M.C., Stewart, K., Gottfredson, D.C., Gottfredson, G.D.
(1998). A Guide for Evaluating Prevention Effectiveness, CSAP Technical
Report. Rockville, MD: U.S. Department of Health and Human Services,
Substance Abuse and Mental Health Services Administration, Center
for Substance Abuse Prevention.
This guide provides practitioners with basic evaluation concepts and
tools. It describes commonly used research designs and their
strengths and weaknesses. Qualitative and quantitative data
collection methods used in process and outcome evaluation are
described. Basic concepts in data analysis are also discussed. NCADI
publication number: 98-3237
French, J. F. and Kaufman, N. J. (Eds.) (1981). Handbook For
Prevention Evaluation: Prevention Evaluation Guidelines. Washington, D.C.:
National Institute on Drug Abuse, Publication No. ADM81-1145.
This handbook was written for evaluator-practitioner teams working
to apply their skills in the assessment and improvement of
prevention programs. Topics discussed include models of prevention,
evaluation design, indicators and measures for process and outcome
evaluation, and reporting evaluation results. It contains an
extensive appendix on instruments and data sources.
Hawkins, J.D. and Nederhood, B. (1987). Handbook for Evaluating
Drug and Alcohol Prevention Programs: Staff/Team Evaluation of Prevention
Programs. Washington, D.C.: U.S. Department of Health and Human
Services, Publication No. (ADM) 87-1512.
This handbook provides program managers with a comprehensive tool to
guide their evaluation efforts. It discusses instruments and
activities for determining program effectiveness (outcome
evaluation), and for documenting and monitoring the delivery of
services (process evaluation). The major topics it addresses are
evaluation design, measuring outcomes, measuring implementation,
data collection, data analysis, and reporting study findings.
Worksheets, sample instruments, and a bibliography are included.
Isaac, S. and Michael, W.B. (1983). Handbook in Research and
Evaluation: A Collection of Principles, Methods, and Strategies Useful in
Planning, Design, and Evaluation of Studies in Education and the
Behavioral Sciences (Second Edition), San Diego, California: EdITS
Publishers.
This book summarizes basic information on research and evaluation
methods. It is intended to help practitioners choose the best
technique for a particular study. The major topics discussed include
planning evaluation and research studies, research design and
methods, instrumentation and measurement, data analysis, and
reporting a research study. It contains many tables and worksheets.
W.K. Kellogg Foundation. (1998). W. K. Kellogg Foundation
Evaluation Handbook. Battle Creek, Michigan: Collateral Management Company.
This handbook provides a framework for thinking about evaluation as
a program tool. It was written for project directors with direct
responsibility for the evaluation of Kellogg Foundation-funded
projects. It discusses how to prepare for an evaluation (e.g.,
developing evaluation questions, budgeting for evaluation, selecting
an evaluator), designing and conducting an evaluation (e.g., data
collection methods, analyzing and interpreting data), and reporting
findings. The handbook contains worksheets, charts and a
bibliography on evaluation. Full text available on-line at:
http://www.wkkf.org/pubs/tools/evaluation/pub770.pdf
Kozel N.J., Sloboda Z. (1998). Assessing Drug Abuse Within and
Across Communities: Community Epidemiology Surveillance Networks on Drug
Abuse. Rockville, Maryland: National Institute on Drug Abuse, NIH
Publication No. 98-3614.
This guidebook is meant to help practitioners at the local,
regional, and state level assess local drug abuse patterns and
trends using indicator data. The types of data sources discussed
include: treatment data, medical examiner/coroner data, the Drug
Abuse Warning Network (DAWN), law enforcement data, national
surveys, HIV/AIDS data, census data, and telephone hotline data. The
guidebook includes references, a glossary, and appendices that
identify or discuss data sources. Full text available on-line at:
http://www.nida.nih.gov/DEPR/Assessing/Guideindex.html
Larson, M.J., Buckley, J. and Gabriel, R.M. (1997). A Community
Substance Abuse Indicators Handbook: How Do We Know We Are Making a
Difference? Boston, Massachusetts: Join Together.
This guide for communities describes indicators that community
coalitions and other groups can use to describe the nature and scope
of local substance abuse problems. The term "indicators" refers to
information that is usually already collected by an agency or
organization. The Guide discusses the sources and interpretation of
the data for 20 substance abuse indicators (e.g., licensed alcohol
outlets, arrests for driving under the influence, and substance abuse-related hospital
admissions). It includes contact information on state agencies and
organizations that collect/report indicator data. This document can
be ordered on-line at: http://www.jointogether.org/sa/ in the
resources/publications section. A summary of this document can be
found in: Beyond Anecdote: Using Local Indicators to Guide Your
Community Strategy to Reduce Substance Abuse. 1999 Monthly Action
Kit, Special 1999 Issue, Boston, Massachusetts: Join Together, 1999.
Miller, D.C. (1991). Handbook of Research Design and Social
Measurement (Fifth Edition). Newbury Park, California: Sage Publications, Inc.
This handbook provides procedures and guidance for three major types
of research: basic, applied, and evaluation. Discussion includes
research design, data collection (documentary resources,
questionnaires, interviews), statistical analysis, and scales and
indexes. It includes a guide to federal and private funding and to
the publication of research reports. Extensive bibliographies follow
each major section of the handbook.
Moberg, D. P. (1984). Evaluation of Prevention Programs: A Basic
Guide for Practitioners. Wisconsin: Board of Regents of the University of
Wisconsin System for the Wisconsin Clearinghouse.
This guide is intended for practitioners involved in planning and
delivering local prevention services. Definitions and uses of
program evaluation are described. Recommended steps for planning and
implementing a program evaluation are detailed.
Muraskin, L. D. (1993). Understanding Evaluation: The Way to
Better Prevention Programs. Washington, D.C.: U.S. Department of Education.
Publication No. ED/OESE92-41.
This handbook was written to help school and community agency staff
carry out the evaluations required under the Drug-Free Schools and
Communities Act. The premise of this book is that many evaluations
that use simple designs can be conducted without formal training in
program evaluation. The author outlines checkpoints in the
evaluation process where practitioners may want to consult with
evaluation specialists. Topics discussed include evaluation design,
data collection methods and instruments, and interpreting and
reporting findings. The handbook describes implementation of an
evaluation of a hypothetical prevention program. This publication
can be ordered through ERIC at: http://www.ed.gov/about/pubs/intro/pubdb.html
Thompson, N.J. and McClintock, H.O. (1998). Demonstrating Your
Program's Worth: A Primer on Evaluation for Programs to Prevent Unintentional
Injury. Atlanta, GA: National Center for Injury Prevention and
Control, Centers for Disease Control and Prevention.
Addressed to program managers, this guide describes the process
involved in conducting a simple evaluation (formative, process,
impact and outcome), how to hire an evaluator, and how to
incorporate evaluation activities into a prevention program.
Appendices include information on sample questionnaire/interview
items, events or activities to observe, and types of records to
maintain. This guide provides a glossary and a bibliography on
evaluation. It also includes sources of information on violence;
injuries that take place in the home, on the road or during leisure
activities; acute care, rehabilitation, and disabilities; and
general sources on injury control/prevention. Information on
ordering this publication can be found at:
http://www.cdc.gov/ncipc/pub-res/demonstr.htm