Overview of the Regional Training Sessions
Researchers conducted the
training at each of the MRC regional meetings held between July and December
2007. Exhibit 1 provides a summary of the regional training sessions, including
the number of training sessions and average number of participants in each
session.
With the exception of Region 2, each training session was scheduled for two hours; in Region 2, only one hour was scheduled for each session. To accommodate the shorter time slot, we significantly reduced the time that participants spent in their breakout groups completing the logic model. We also shortened the report-back portion, which left little time for questions. In our experience, approximately 90 minutes was the ideal length for the training.
Exhibit 1. Regional Training Sessions

Region       | Meeting date (2007) | Number of training sessions | Average number of people per session
1            | October 25-26       | 2                           | 20
2            | October 11-12       | 2                           | 25
3            | November 14         | 2                           | 20
4            | November 9          | 2                           | 22
5            | August 27-28        | 2                           | 23
6            | December 12-13      | 2                           | 25
7            | December 5          | 1                           | 30
8, 9, and 10 | July 17-20          | 3                           | 30
The first regional meeting
was held in July 2007 and was also the largest, with unit coordinators from
Regions 8, 9, and 10 in attendance. Because this was our first opportunity to
implement the evaluation training, we learned several critical lessons that we
used to improve the training for future meetings. These lessons, and the
subsequent changes that we made to the training, are described below.
Lessons Learned and Adjustments to the Training
Many participants worked in
a linear fashion through the logic model. They took each input or resource and
worked straight across the page to come up with an activity and an outcome for
that input rather than thinking about the desired outcomes of the unit as a
whole (Exhibit 2).
Exhibit 2. Participant Approach to Logic Models
The exhibit above is
representative of what many groups came up with for their logic model. Note
that the purpose of the logic model is to visualize the pathway by which a
program will operate to achieve its desired results. It should highlight the
relationships that exist among resources, activities, and outcomes, since these
rarely operate in a vacuum. In Exhibit 2, the stated outcomes (both short- and
long-term) are not indicative of why the MRC unit exists. For example, the unit
does not exist simply to have volunteers who are NIMS compliant. Rather, it
exists for some greater purpose, such as the ability to minimize morbidity and
mortality in an emergency by improving access to medical care. In a logic
model, long-term outcomes usually will be synonymous with the unit's
overarching goal. The tendency for participants to work linearly from one
resource to one activity to one outcome was a problem we observed in almost
every region.
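To make the contrast concrete, the short Python sketch below (every resource, activity, and outcome named in it is a hypothetical placeholder, not an actual MRC unit's plan) treats a logic model as a small network in which one outcome can depend on several activities and one activity can feed several outcomes, rather than as a set of independent rows:

    # A logic model as a network, not parallel rows. All names are hypothetical.
    logic_model = {
        "resources": ["volunteers", "training budget", "partner agencies"],
        # Each activity lists the resources it draws on.
        "activities": {
            "train volunteers in shelter operations": ["volunteers", "training budget"],
            "exercise with partner agencies": ["volunteers", "partner agencies"],
        },
        # Each outcome lists everything it depends on (many-to-many links).
        "outcomes": {
            "short term: volunteers ready to staff shelters": [
                "train volunteers in shelter operations",
                "exercise with partner agencies",
            ],
            "long term: reduced morbidity and mortality in an emergency": [
                "short term: volunteers ready to staff shelters",
            ],
        },
    }

    for outcome, depends_on in logic_model["outcomes"].items():
        print(outcome)
        for item in depends_on:
            print("  depends on:", item)

Printed this way, the short-term outcome traces back to two activities at once, and the long-term outcome sits at the end of the pathway, which is exactly the structure that one-resource-to-one-outcome rows fail to capture.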
We also found that many participants in the first meeting dwelled on the inputs or resources column of their logic model, trying to define the specific quantities of resources they would need before defining activities and outcomes. This was not the intent of the exercise; we wanted participants to concentrate on how the activities they would engage in were connected to the outcomes they wanted to achieve. Therefore, we adjusted the training slightly and gave participants specific information about the type and amount of resources available to them (Appendix B). Participants had to work within this set of predefined parameters and make assumptions about what they could realistically accomplish. This ensured that they spent most of their time on the activities and outcomes.
Another observation from the
initial meeting was that the participants did not always create performance
measures for their activities and outcomes. Because the original logic model handout
did not have a space to list performance measures, this step was often skipped.
Therefore, we added a space below the activity and outcome columns in the
modified handout (Appendix B) to remind participants to include performance
measures.
Most participants found the
interactive sessions to be a fun and very informative experience. They learned
a great deal from the different experiences that each person contributed from
his or her own community. At the same time, the diverse backgrounds of the participants
(e.g., nursing, fire, law enforcement) influenced their perspectives on the
strategic planning process. This diversity, while creating a rich learning
environment, also created some challenges. For example, it was difficult for the
participants to adhere to the hypothetical scenario they were given for the
logic model. Instead, they routinely reverted to the resources, activities, or
goals they knew from their unit. We addressed this by emphasizing the need for
each group to make, and hold to, certain planning assumptions based on the
hypothetical information they were provided.
We also spent extra time during the introductory presentation reviewing key points and walking the participants through the sample logic models. It was particularly important to stress that a
well-constructed logic model provides a visual pathway for how the entire unit is
expected to operate, and not just one aspect of the unit. We wanted
participants to think about what their unit seeks to accomplish beyond
recruiting volunteers and providing training. Finally, we emphasized the
importance of considering (and showing on a logic model) how the components of
their unit work together and how some activities or outcomes may be necessary
prerequisites to others.
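Where such prerequisite relationships exist, they amount to a dependency ordering. As a hypothetical sketch (the activity names are invented for illustration), Python's standard-library graphlib can surface an order in which every item appears after its prerequisites:

    from graphlib import TopologicalSorter

    # Hypothetical prerequisite map: each item lists what must happen before it.
    prerequisites = {
        "recruit volunteers": [],
        "train volunteers": ["recruit volunteers"],
        "exercise with community partners": ["train volunteers"],
        "volunteers ready for activation": [
            "train volunteers",
            "exercise with community partners",
        ],
    }

    # static_order() yields every item only after all of its prerequisites.
    print(list(TopologicalSorter(prerequisites).static_order()))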
In the next section we
discuss some of the major themes that we observed from our work with MRC
coordinators during the evaluation training sessions.
Emerging Themes from the Training Sessions
Overall, participants at the
regional meetings embraced the concepts of strategic planning, logic models,
and performance measurement. They recognized the utility of these activities
for building stronger and more sustainable MRC units. In addition, coordinators
noted that grant applications increasingly require the use of logic models to
show how a program will operate. This practical application for helping to
secure funding was especially influential in getting participant buy-in.
Most of the coordinators were enthusiastic about, and receptive to, the training guides that we developed. Some came to their regional meeting having already viewed the guides on the MRC national Web site. They were also appreciative of the volunteer satisfaction survey and felt that it would be a good resource for gauging how well volunteers accepted their program.
In many instances, the
groups developed very creative examples of activities, outcomes, and
performance measures for their logic models. Examples of these are provided in
Exhibit 3.
Exhibit 3. Sample Logic Model Components

Activities:
- Determine skill sets of volunteers and match these to expected roles of the MRC in staffing flu vaccination clinics
- Work with public health officials to establish clear roles and responsibilities for the MRC in staffing special needs shelters
- Actively participate with community partners in exercises and establish specific training objectives for the MRC
- Develop "just-in-time" training for volunteers

Outcomes:
- Increase knowledge about the health risks associated with obesity and a sedentary lifestyle
- Reduce the number of "walking wounded" who are treated in the acute hospital setting
- Increase the overall flu vaccination rate by 15% over the previous year's rate
- Increase the number of trained medical volunteers who are available to staff alternate care sites during an emergency

Performance measures:
- Number of presentations delivered on obesity over a six-month period (i.e., target = 15)
- Number of community exercises participated in over the last 12 months
- Number of partnerships formed and Memoranda of Understanding (MOU) established
- Time it takes to process a mock patient through a point of dispensing during an exercise
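One of the sample outcomes above, the 15% increase in the flu vaccination rate, is worth a quick worked illustration, because "over the previous year's rate" implies a relative increase rather than the addition of 15 percentage points. The sketch below uses hypothetical figures:

    # Hypothetical figure: 40% of the target population was vaccinated last year.
    previous_rate = 0.40

    # A 15% increase over last year's rate is relative,
    # not 15 percentage points added on.
    target_rate = previous_rate * 1.15

    print(f"target coverage: {target_rate:.1%}")  # prints "target coverage: 46.0%"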
Not surprisingly, we also
encountered some challenges. As noted earlier, a logic model should depict relationships
among the core components of a program that are essential for it to
operate as intended. We found that many participants had a difficult time
staying focused on the critical elements of their unit and instead got
sidetracked by the details. One person described it this way: "we started going real big, real fast." Overall, participants tended to be very task-oriented and
interested in developing a work plan. It was important to remind them that a
logic model is useful for getting a general sense of how a program operates,
but it does not provide the same level of detail as a work plan.
It was apparent from our
interactions with unit coordinators and program staff that the distinction
between a strategic plan and a work plan is often blurred. When coordinators indicated that they had developed a strategic plan, further discussion often revealed that what they actually had was a work plan outlining the specific tasks they were going to conduct. It was common for coordinators to have developed a work
plan without having a written strategic plan in place. In some instances, they stated
that because the MRC was incorporated into their host organization's strategic
plan, they did not require a strategic plan of their own.
Our findings from the needs
assessment revealed that many MRC coordinators did not differentiate between
the broad goal(s) of their unit and the more specific and measurable objectives
that would help them achieve their goal(s). This was also observed during the evaluation
training sessions. Participants used the terms interchangeably and often spoke
of the short-term outcomes listed in their logic model as the "goals" of their
unit.
Some groups did not illustrate
linkages between activities and outcomes on their logic model, tending instead
to simply generate lists of each. One of the benefits of developing a logic
model is that it requires a person to constantly question the validity of the
connections between activities and outcomes (or between multiple activities or
multiple outcomes). For example, a coordinator could ask "is there a correlation
between participating with my partner agencies in drills and exercises and
gaining a better understanding of my unit's role in a disaster response?" Groups that simply generated lists of activities and outcomes skipped this examination altogether.
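A small sketch shows how recording the links explicitly supports this questioning (the activity and outcome names are hypothetical): any item left unlinked becomes an immediate candidate for the "does this belong on the model?" discussion.

    # Hypothetical activities and outcomes, with explicit activity -> outcome links.
    activities = [
        "participate in drills with partner agencies",
        "develop just-in-time training",
        "recruit volunteers",
    ]
    outcomes = [
        "better understanding of the unit's role in a disaster response",
        "volunteers prepared for rapid activation",
    ]
    links = {
        "participate in drills with partner agencies":
            ["better understanding of the unit's role in a disaster response"],
        "develop just-in-time training":
            ["volunteers prepared for rapid activation"],
    }

    # Anything with no link never gets its connections questioned.
    linked_outcomes = {o for targets in links.values() for o in targets}
    for activity in activities:
        if activity not in links:
            print("unlinked activity:", activity)
    for outcome in outcomes:
        if outcome not in linked_outcomes:
            print("unlinked outcome:", outcome)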
Defining realistic and reliable performance measures was also challenging. In some instances, groups specified performance measures for which the supporting data would have been exceedingly difficult to collect and analyze. For example, one of the scenarios involved using MRC volunteers to conduct outreach education on the benefits of healthy eating and exercise to help combat the obesity epidemic. A group with this scenario decided to target school-age children and listed as one of their outcomes a 15% reduction in the number of obese children over a 12-month period. The performance measure chosen for this outcome was body mass index, measured among the target group prior to the intervention and again at 12 months. Obtaining and calculating these indices likely would have been too time- and resource-intensive for a volunteer organization like the MRC.
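A brief sketch of what that measure entails makes the data burden visible: every child must be weighed and measured twice, a year apart. All numbers below are hypothetical, and a single fixed BMI threshold stands in for the age- and sex-specific cutoffs actually used to classify childhood obesity.

    def bmi(weight_kg: float, height_m: float) -> float:
        """Body mass index: weight in kilograms divided by height in meters, squared."""
        return weight_kg / height_m ** 2

    # Hypothetical (weight_kg, height_m) pairs for the same children,
    # measured at baseline and again at 12 months.
    baseline = [(70.0, 1.50), (48.0, 1.45), (75.0, 1.55)]
    followup = [(66.0, 1.52), (47.0, 1.47), (76.0, 1.56)]

    THRESHOLD = 30.0  # simplified stand-in for pediatric obesity cutoffs

    def obese_count(cohort):
        return sum(1 for weight, height in cohort if bmi(weight, height) >= THRESHOLD)

    before, after = obese_count(baseline), obese_count(followup)
    reduction = (before - after) / before
    print(f"obese children: {before} -> {after} "
          f"({reduction:.0%} reduction; target was 15%)")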
Throughout the training
sessions, we emphasized that there is no "one way" to develop a logic model.
Some might find it easier to start with their available resources and work
left-to-right across the page through activities to outcomes. Others might
choose to start with their desired outcomes and work backward to identify the
appropriate activities and resources. We encouraged the groups to try both approaches, and the feedback we received suggested that each approach has benefits. Generally speaking, participants seemed to favor starting with the
outcomes and working backwards. From a teaching standpoint, this was good
because participants spent more time thinking about their unit's activities and
outcomes rather than dwelling on the resources they were given.
The breakout-group segment of the evaluation training provided a venue for informal discussions among the unit coordinators. These sidebar discussions often addressed issues such as recruiting, credentialing, and liability protection for volunteers. Other issues, related to interoperable communications and activation procedures, also were discussed. These conversations provided insight into the "top of mind" issues for the coordinators. More often than not, these issues involved structural challenges rather than planning challenges (the latter include issues related to the development of program goals and objectives).
It was evident from these conversations that most coordinators operate in an
environment where structural challenges have priority. Given the time
constraints of most coordinators, this can make it very difficult to ensure
adequate attention is focused on strategic planning and evaluation.
A commonly asked question
during the training sessions was, "where does the development of a logic model
fit in terms of doing strategic planning and performance measurement?" There
was some confusion regarding the correct "order" of conducting these
activities. The development of a logic model is an essential part of strategic
planning because it helps define the core components of a program and how they
relate to each other. This knowledge helps guide the unit's coordinator in
determining the best approach for achieving the program's long-term goals.
Therefore, logic model development is part of, and not separate from, the
strategic planning process. The training staff emphasized this point to
participants.