
Photovoice – Evaluation through Photography

A picture's worth a thousand words, and a method called photovoice takes advantage of pictures' compelling qualities by incorporating photography into research and evaluation. Photovoice is a participatory evaluation method in which program participants are given cameras to capture images that convey their feelings, beliefs, and experiences about an issue. The method is used frequently in advocacy projects, allowing less powerful stakeholders to communicate about issues that affect their lives.

Photovoice seems to be a particularly popular way to engage youth in projects or in evaluation. For examples of photovoice projects with teenagers, check out the two articles listed at the end of this blog entry. The project described in Necheles et al. used photovoice to engage teenagers in identifying influences over their own health behavior. These teens then developed materials such as posters to advocate for healthier lifestyles among their peers. The article by Strack, Magill and McDonagh presents a project in which teens identified problems in their neighborhoods through photovoice. Both articles provide abundant advice for conducting photovoice projects, including how to engage youth in analyzing photos and ideas for presenting results.

Some photovoice projects carry potential risks for participants. Participants also must be taught how to obtain and document consent from others who appear in their photos. Consequently, photovoice projects require above-average planning and management. For an excellent resource on managing photovoice projects, check out photovoice.org.

Resources:

Necheles JW, et al. The Teen Photovoice Project: a pilot study to promote health through advocacy. Prog Community Health Partnersh. 2007 Fall;1(3):221-229. Available at PubMed Central: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2837515/

Strack RW, Magill C, McDonagh K. Engaging youth through photovoice. Health Promot Pract. 2004;5(1):49-58. Available at http://www.ncbi.nlm.nih.gov/pubmed/14965435

Photovoice.org, particularly the organization’s methodology section at http://www.photovoice.org/shop/info/methodology-series

How Many Points on YOUR Rating Scale?

In survey design workshops, we are often asked whether rating scales designed to measure respondents' opinions and attitudes should have an odd number of points, including a neutral midpoint (i.e., "Neither agree nor disagree"), or whether it's better to have an even number of points, without a midpoint.  Our answer, which probably frustrates our participants to no end, is "it depends."  You have to think through clearly what a "neutral" answer means and choose accordingly. Here is a link to a clever blog entry by Jane Davidson that makes this point very well and gives you ways to think about how many points your rating scales should have:

Boxers or briefs? Why having a favorite response scale makes no sense

(There are some insightful readers’ comments to this blog post that you might find interesting as well.)
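
To make the trade-off concrete, here is a minimal sketch in Python using invented responses. It shows how a neutral midpoint lets you report ambivalence separately but also pulls summary statistics toward the center, while an even-point scale forces every respondent to lean one way.

    # Minimal sketch with invented responses. A 5-point scale keeps a visible
    # neutral midpoint; a 4-point scale forces every respondent to lean one way.
    # (1 = strongly disagree ... 5 = strongly agree; 3 = neither agree nor disagree)
    responses_5pt = [2, 3, 3, 4, 5, 3, 1, 4, 3, 5]

    neutral = responses_5pt.count(3)
    agree = sum(1 for r in responses_5pt if r > 3)
    disagree = sum(1 for r in responses_5pt if r < 3)
    print(f"agree: {agree}, neutral: {neutral}, disagree: {disagree}")

    # The midpoint lets you report ambivalence separately, but it also pulls
    # the mean toward the center; decide in advance whether "3" means
    # "no opinion," "don't know," or genuine ambivalence.
    print(f"mean: {sum(responses_5pt) / len(responses_5pt):.2f}")

    # On an even-numbered (4-point) scale the midpoint disappears entirely:
    responses_4pt = [2, 3, 3, 1, 4, 2, 3, 4]  # invented forced-choice data
    agree_share = sum(1 for r in responses_4pt if r >= 3) / len(responses_4pt)
    print(f"forced-choice agree share: {agree_share:.0%}")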

Need to Know – The 6Ds of Needs Assessment

At the OERC, we recommend using evaluation questions as a foundation for evaluation projects. The questions are useful in developing data collection methods, analyzing data, and organizing evaluation reports.  If you are planning a needs assessment, you can take advantage of a tip sheet that provides needs assessment questions for you: The 6Ds of Needs Assessment. This one-page document will help you identify the information you need to advocate for your project or design your program. The 6Ds of Needs Assessment was created by Kylie Hutchinson, principal evaluator for Community Solutions Planning and Evaluation.


How to Ignite Your Presentation: AEA Training Webinar

On July 27, 2012, Stephanie Evergreen, eLearning Initiatives Director for the American Evaluation Association, gave a half-hour webinar about the Ignite approach to giving presentations.  This approach involves a 5-minute presentation based on 20 slides, each shown for 15 seconds.  (Yes, this is similar to Pecha Kucha.)  The American Evaluation Association, which is conducting a "Potent Presentations" initiative to help its members improve their reporting skills, has made the recording and slides for this great presentation available in its free AEA Public Library.

In her short, practical webinar, Stephanie demonstrated the Ignite approach with a great presentation about “Chart Junk Extraction”—valuable tips for creating streamlined, readable charts with maximized visual impact.  Spend an enjoyable and enlightening few minutes viewing the fast-paced and interesting “Light Your Ignite Training Webinar”—you can even learn how to set your PowerPoint timer to move forward automatically every 15 seconds so that you can practice your Igniting!

How to Analyze Qualitative Data

Duncan V, Holtslander L. Utilizing grounded theory to explore the information-seeking behavior of senior nursing students. J Med Libr Assoc. 2012 Jan;100(1):20-27.

In this very practical article, the authors describe the steps they took to analyze qualitative data from written records that nursing students kept about their experiences with finding information in the CINAHL database.  They point out that, although the ideal way to gather data about students' information-seeking behavior would be via direct observation, that approach is not always practical.  Also, self-reporting via surveys and interviews may create bias because members of sample populations might "censor themselves instead of admitting an information need."  For this study, students were asked to document their search process using an electronic template that included "prompts such as resource consulted, reason for choice, terms searched, outcome, comments, and sources consulted (people)."

After reviewing these search journals, the authors followed up with interviews.

The "Data analysis and interpretation" section of this article provides a clear, concise description of the grounded theory approach to analyzing qualitative data through initial, focused, and theoretical coding, carried out with NVivo 8 software.  [Note: as of this writing, the latest version is NVivo 10.]  A toy illustration of this coding progression follows the list below.

  • Initial codes:  “participants’ words were highlighted to create initial codes that reflected as closely as possible the participants’ own words.”
  • Focused codes:  “more directed, selective, and conceptual than word-by-word, line-by-line, and incident-by-incident coding.”
  • Theoretical codes:  focused codes were compared and contrasted “in order to develop the emerging theory of the information-seeking process.”
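
The sketch below, in Python, is a conceptual illustration only: NVivo does this work interactively, and all of the excerpts and code names here are invented. It simply traces how raw participant wording might move through initial and focused codes toward the article's central theme of "discovering vocabulary."

    # Toy illustration of the three coding passes described above. All
    # excerpts and code names are invented; NVivo does this work
    # interactively, but the logic of the progression is the same.

    # Initial codes: stay as close as possible to the participants' own words.
    initial_codes = {
        "didn't know what word the database wanted": "unsure of search terms",
        "tried 'heart attack', then 'myocardial infarction'": "trying synonyms",
        "asked my instructor what to type": "asking a person for terms",
    }

    # Focused codes: group initial codes into more selective, conceptual categories.
    focused_codes = {
        "unsure of search terms": "vocabulary uncertainty",
        "trying synonyms": "vocabulary experimentation",
        "asking a person for terms": "vocabulary help-seeking",
    }

    # Theoretical coding: compare focused codes to articulate the emerging
    # theory; in the article, the central theme uniting the categories was
    # "discovering vocabulary."
    central_theme = "discovering vocabulary"

    for excerpt, initial in initial_codes.items():
        focused = focused_codes[initial]
        print(f"{excerpt!r} -> {initial} -> {focused} -> {central_theme}")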

The authors reviewed the coding in follow-up interviews with participants to check the credibility of their findings:  “The central theme that united all categories and explained most of the variation among the data was ‘discovering vocabulary.’”  They recommend “teaching strategies to identify possible words and phrases to use” when searching for information.

You can do this even if you don't have access to NVivo software.  Here's an illustration: "Summarize and Analyze Your Data" from the OERC's Collecting and Analyzing Evaluation Data booklet.

Lessons Learned from a Black and Minority Health Fair’s 15-Month Follow-up Counseling

In health information outreach, one of the most common ways to connect with the public is through a health fair.  Recent literature has shown that school health fairs improve health knowledge in the short term (Freedman, 2010) and that health fairs can be a valuable way to enhance public awareness of specific health professions or services (Stamat, Injety, Pohlod, & Aguwa, 2008).  Additionally, participating in health fairs as exhibitors is a manageable outreach activity that can help address health literacy and health information inequalities in specific communities (Pomerantz, Muhammad, Downey & Kind, 2010). For non-profit health organizations and medical libraries, health fairs provide important outreach opportunities and ways to reach beyond a traditional sphere of users.  Exhibitors can share information targeted to a specific population or demonstrate new online resources and applications for health information access and health literacy.

Even though the aforementioned studies show positive short-term effects of health fairs on health information and health literacy, it can be difficult to evaluate long-term effects or to determine whether attending a health fair changes participant behavior. As such, it can also be difficult to justify the staff time and monetary expense of hosting a health fair.  However, results from a recent research study show that health fairs have the potential to provide a foundation for ongoing and integrated outreach. In a study conducted in Indiana in 2006-2007, health professional contact 10 and 15 months after a health fair (originally undertaken to determine the fair's effectiveness) led to more in-depth and personalized involvement with participants, which resulted in greater changes in health behavior and attitudes.  Librarians can use health fairs to begin an engagement process with targeted individuals.  Additionally, librarians can evaluate health fairs using pre- and posttest methods similar to this study's, but focus more on assessing changes in participants' health information literacy and knowledge.

The study, published in 2011, evaluated both the short-term and long-term health effects of attending a health fair and assessed the impact of personalized follow-up health counseling sessions for select participants.  The Indiana Black and Minority Health Fair included more than 100 health education booths and addressed a variety of health topics, including prevention and screening as well as general healthy living.  Sponsored by the Indiana State Department of Health, the annual health fair is part of an ongoing effort to develop health literacy skills. The study was constructed as a pre-/posttest evaluation, with baseline data collected at the health fair and follow-up data collected 10 and 15 months later.  All participants from the original sample group received health education materials between the 10- and 15-month posttests, and a select group participated in health counseling sessions with a registered nurse. The researcher designed the study to explore a paradigm where "health fair encounters were aimed to translate from episodic experiences to long-term educational experiences via 15-month follow-up health counseling sessions in a partnership among the state government…local government…and a university…" (p. 898).

From a booth stationed at the health fair, the researchers used a pretest to establish a baseline measure of willing participants' health information knowledge. In this study, the pretest was a 16-item short-form questionnaire adapted from the Behavioral Risk Factor Surveillance System questionnaire, with questions on "perceived body weight, vigorous and moderate physical activity, TV/video watching hours, fruit and vegetable consumption, tobacco use, and perceived health status" (p. 899).  The pretest captured the major health indicators and behaviors of health fair participants.

The first posttest was conducted 10 months later as a follow-up mail survey to all health fair participants, in the form of a 99-item long-form questionnaire also adapted from the Behavioral Risk Factor Surveillance System.  This round had a response rate of 47%, with 266 participants responding. The posttest evaluated whether attending the health fair had any short-term effects on participants' health knowledge and other health habits, looking at "participants' health knowledge, eating behaviors, sedentary behaviors, exercise self-efficacy, exercise goal setting and adherence, and changes in health status" (p. 899). Results from this posttest showed that, 10 months after the health fair, more people perceived themselves as overweight (an increase of almost 7% from the baseline measure) and fewer people watched TV/videos 4 or more hours on a usual weekday (a drop of almost 22%). These changes were not matched by changes in any other notable health behaviors or conditions included in the questionnaires.
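
As a side note for anyone planning a similar evaluation, the arithmetic behind these follow-up figures is straightforward. A minimal sketch in Python, using the response-rate figure reported in the study and otherwise invented numbers:

    # Back out the approximate baseline sample from the reported figures:
    # 266 responses at a 47% response rate implies roughly 566 surveys mailed.
    responses = 266
    response_rate = 0.47
    print(f"approximate baseline sample: {round(responses / response_rate)}")

    # Pre-to-post change in a self-reported indicator (invented proportions):
    pre_share = 0.40    # hypothetical baseline share reporting the behavior
    post_share = 0.47   # hypothetical 10-month share
    print(f"change: {(post_share - pre_share) * 100:+.0f} percentage points")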

This lack of broad health behavior change is an important aspect of the study, and a revealing one for librarians.  Health fairs may not produce immediate changes, but they can provide an educational space where health professionals and librarians can connect with at-risk populations and start a conversation about health behaviors. This potential was seen in the intervention step of the study, in which health information materials and personalized counseling over the course of three to four months offered another opportunity to connect health information professionals with targeted populations.

All participants from the first posttest received mailed health education materials; this group was considered the comparison group. A small subset also agreed to personalized health counseling by a registered nurse via phone; this intervention group set at least one health goal and received four counseling phone calls over the course of 12 to 16 weeks.  All comparison and intervention group participants then received a second posttest questionnaire 15 months after the health fair (the same 99-item long-form questionnaire used in the first posttest).  This second posttest evaluated the effect of the personalized follow-up health counseling sessions, as well as the effect of receiving the health information materials.  Of the 266 first-posttest participants, 188 completed the second posttest. The second posttest showed some increases in self-reported measures of general health status, with the percentage of overweight or obese participants decreasing in both the intervention and comparison groups.  Additionally, participants in the intervention group showed a significant increase in choosing leaner meats and improvements in exercise self-efficacy, exercise goal setting and adherence, and health knowledge.  However, the gain in health knowledge was small, as the emphasis in the intervention group was on addressing individuals' health concerns.  A takeaway message from this study is that long-term follow-up interventions can provide health support over a longer period of time, which can increase the likelihood of healthy outcomes and health literacy.

Still, this study shows that health fairs can be an important stepping-stone for health information outreach and evaluation.  Health fairs are typically one-time interactions, and participants run the risk of information overload at unfocused fairs.  Librarians can use health fairs as an initial point of contact with population groups and then develop an ongoing relationship by providing consistent, meaningful health information materials and personalized outreach to former health fair participants.  Furthermore, evaluations of this ongoing outreach, in the form of written or online questionnaires, in-person focus groups, or even individual case studies, can provide compelling support for continued participation in health fairs.

Looking to the long term, Seo also states that the findings of this study have other implications for health promotion and education, in particular "explor[ing] the possibility of utilizing BMHF [Black and Minority Health Fair] participants as lay health workers or social support groups in their own communities to address low health literacy and high-risk lifestyles in this population" (p. 903).  In other words, you can reach new (and possibly elusive) groups of people or target small, specific high-risk populations by investing in a small group of participants over the long term, starting from a simple health fair.  Short-term and long-term evaluation can complement such an outreach program by assessing changes in teaching methods, changes in health information literacy within communities, or even differences over time in the health knowledge and health information literacy of the lay health workers themselves.  This study shows how health information professionals can build long-term relationships with targeted populations by using health fairs as an initial contact point. With coordinated evaluation strategies, health information professionals can ensure that their long-term health outreach activity is consistently meaningful, relevant, and adaptive to their targeted populations' needs.


Evaluating Websites with Web Analytics: Contextual Use and Interpretation

If you missed the previous post in this series on “Evaluating Websites with Web Analytics,” you can find it here!

We left off in our discussion of web analytics by talking about a few of the basic metrics and how they can help libraries and non-profit organizations assess their websites and online presence.  The "big three," as described by the Digital Analytics Association, are visits/sessions, unique visitors, and page views.  These three metrics are only the starting point; other metrics such as bounce rate, referral and direct traffic sources, traffic flow, and conversion rate can give a fuller picture of how users are accessing your online resources.  To get the most comprehensive, "big picture" view of your website's success and customer satisfaction, it's important to use these web metrics with other kinds of evaluative tools, including logic models, surveys, and formal usability studies (Marek, Chapter 4, 2011). Each metric should be evaluated within the context of other metrics and assessments, as web analytics contributes to the larger picture in understanding user patterns. Lastly, the organization's mission and specific goals will determine how, when, and which web analytics should be used and interpreted.
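
As a concrete illustration, here is a minimal Python sketch that computes the "big three" plus bounce rate from a handful of invented page-view records. Real analytics packages report these for you; the sketch just makes the definitions explicit.

    # Minimal sketch (invented log records): computing the "big three" plus
    # bounce rate from raw page-view data.
    from collections import defaultdict

    # Each record: (visitor_id, session_id, page_viewed)
    hits = [
        ("v1", "s1", "/home"), ("v1", "s1", "/guides"),
        ("v2", "s2", "/home"),                  # single-page session
        ("v1", "s3", "/blog"),                  # another single-page session
        ("v3", "s4", "/home"), ("v3", "s4", "/blog"),
    ]

    page_views = len(hits)
    sessions = {session for _, session, _ in hits}
    unique_visitors = {visitor for visitor, _, _ in hits}

    # A "bounce" is a session with exactly one page view.
    pages_per_session = defaultdict(int)
    for _, session, _ in hits:
        pages_per_session[session] += 1
    bounces = sum(1 for n in pages_per_session.values() if n == 1)

    print(f"page views: {page_views}")                      # 6
    print(f"visits/sessions: {len(sessions)}")              # 4
    print(f"unique visitors: {len(unique_visitors)}")       # 3
    print(f"bounce rate: {bounces / len(sessions):.0%}")    # 50%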

At the OERC, we use these comprehensive metrics, in addition to the big three, to assess the OERC blog and website separately.  Our web analytics tracking program is still fairly new and developing, but we wanted to share our process so you can see how easily web analytics can fit into the evaluation strategy of a non-profit organization or library.  Unlike for-profit companies, we aren't using web metrics to calculate return on investment (ROI) and assign a monetary value to the metrics.  However, we can mimic some of the steps for-profit companies take to guide our own use of web metrics.

At the OERC, we wanted to learn more about who accesses our blog and website and whether they are finding the information they need. As Marek suggests, you need to evaluate your website against clear goals "based on your library's mission and strategic plan" (Chapter 1, 2011).  From there, web analytics provides the raw data, which you can assess within the context of your organization's mission and strategic plan to make informed decisions and appropriate changes.  Web analytics statistics are never exact or 100 percent accurate, so capturing and evaluating trends over time is the best way to use web analytics as an assessment tool (Marek, Chapter 1, 2011). Web analytics can also be valuable when building evaluation instruments such as logic models to guide your organization's projects and outcomes. As you develop short-term outcomes at the individual level, web analytics can provide a quantitative way to measure the targets within your indicators: this is where you can look at patterns in bounce rate, traffic sources, referrals, and conversion rates to determine whether you see a change and whether your organization is achieving its logic model outcomes.  Unlike many evaluations, web analytics provides near-instant feedback, so an organization can consult the data and adjust its website right away.  This flexibility helps you reach your defined logic model outcomes and produce the best results from your structured plan.

At the OERC, we determined specific goals to frame our web analytics strategy and decided how we wanted web analytics to fit into the overall structure of our evaluation plan.  Our overall goal is to increase blog post views and push more blog visitors to the OERC website and its evaluation tools and resources. In our situation, we decided that we needed to track returning and new visitors as we publish new OERC blog posts. Then we can compare those web statistics against baseline traffic patterns during days or weeks without a new blog post.  For the OERC blog, traffic sources and traffic flow are some of the strongest indicators of our visitors' patterns. We can also separate new and returning visitors to the blog into different groups and evaluate their traffic sources separately.  This way, we can work with our overall marketing plan to see whether any marketing efforts have been successful in garnering new visitors or in bringing returning visitors back to the blog.

This step of web analytics complements other evaluation strategies because you can use web analytics to assess the same goals that other evaluation strategies are targeting.  For example, when an organization surveys users about how they obtain a particular form or brochure, web analytics can provide supporting evidence about online users and downloads.  If the survey and the web analytics both show that more users are accessing the needed form online, the organization can save printing costs on hard copies, and it can commit to expanding its online selection of forms and resources to support the greater interest in accessing them online. In this way, web analytics can corroborate patterns found in smaller assessments like surveys and help complete the picture.

At the OERC, we will be launching updated evaluation guides this year, and we will be tracking how often they are requested. Web analytics complements our tally of in-person requests: we will track conversion rate to determine how often visitors download these new materials after visiting other webpages.  Conversion rate complements the other metrics and can provide support for our specified communication goals and logic model outcomes.  Another OERC goal is to increase requests for informational materials, as well as requests for classes and webinars.  As we market new materials and new online resources over the next few months, any increase in new visitors to specific OERC webpages, as shown in the web analytics, will complement our evaluation of these marketing techniques.  Web traffic to specific webpages or blog posts will complement advertising efforts during webinars, social media outreach, and word of mouth.  In the next post in this series, we will take a closer look at the actual analysis and discuss what changes and patterns we see in the OERC's web analytics.
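
For the download-tracking step, the calculation itself is simple. Here is a hedged Python sketch with invented counts; note that how you define a "conversion" (here, a session that includes a guide download) is a choice you should make and document deliberately.

    # Hedged sketch with invented counts. A "conversion" here is a session
    # that includes a guide download; other definitions are equally valid.
    sessions_viewing_guide_page = 500   # hypothetical sessions reaching the page
    sessions_downloading_guide = 65     # hypothetical sessions that download

    conversion_rate = sessions_downloading_guide / sessions_viewing_guide_page
    print(f"guide download conversion rate: {conversion_rate:.1%}")  # 13.0%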

Web analytics provides valuable quantitative data that can support organizational goals and logic model outcomes. By integrating simple web metrics into your evaluation program, you can increase your understanding of visitor patterns, website use, and access to online materials.  With this data, you can adjust current practices to improve your online presence.  Combining the data with results from other evaluations, such as surveys or focus groups, can produce "big picture" statements, perfect for making larger decisions about future goals and actions.


Evaluating Websites with Web Analytics

It comes as no surprise, but as of 2008, 61% of US adults and approximately 83% of internet users had used the internet to find health information. Relatedly, many medical libraries and health organizations rely on their websites, blogs, and social media to connect with their users and distribute much-needed and requested health information. With more staff and time committed to maintaining social media and websites, how do we gauge the efficacy of these efforts? How do we know that our users are finding and accessing the information they need or want through our websites and other online tools? How do we know that users are actually following the breadcrumb trail from Twitter or Facebook to our website, and then navigating to relevant pages? Web analytics is the answer!

Web analytics is the study of a website's impact on its users, and it is a valuable tool for assessing and evaluating your online presence. We often think of assessment and evaluation as something we do only in conjunction with a specific event or program, and simple website evaluation can be as easy as asking yourself a few fundamental questions. But ongoing assessment of your web presence through web analytics is an easy way to understand what resources your users are accessing and how often they are visiting your site! A wide array of free and low-cost analytics tools is now available across a multitude of platforms, making it possible for non-profits to take advantage of these important statistics. In a 2008 study conducted in the Netherlands, Voorbij found that most cultural heritage institutions use web statistics to assess the production output and costs of digitization activities. These nonprofits used a variety of web analytics programs, but all used the statistics for "practical purposes, such as adapting the web site or setting priorities for further digitization" (Voorbij, 2008). Web analytics programs make it easy to set up and track specific website or social media statistics through customizable features and to follow what matters most to your organization or library.


Survey Research Problems and Solutions

Susan Starr's editorial in the January 2012 issue of JMLA ("Survey research: we can do better," J Med Libr Assoc. 2012 Jan;100(1):1-2) is a very clear presentation of three common problems that disqualify article submissions from being full-length JMLA research papers.  Making the point that the time to address these problems is during survey development (i.e., before the survey is administered), she also suggests solutions that are best practices for any survey:

Problem #1:  The survey does not answer a question of interest to a large enough group of JMLA readers.  (For example, a survey that is used to collect data about local operational issues.)
Solution #1:  Review the literature to identify an issue of general importance.

Problem #2:  The results cannot be generalized.  (Results might be biased if respondents differ from nonrespondents.)
Solution #2:  Address sample bias by sending the survey to a representative sample and using techniques to encourage a high response rate; including questions to help determine whether sample bias is a concern; and comparing characteristics of the sample and the respondents to the study population.  (A small illustration of that last comparison appears after the list below.)

Problem #3:  Answers to survey questions do not provide the information that is needed.  (For example, questions might be ambiguous or might not address all aspects of the issue being studied.)
Solution #3:  Begin survey development by interviewing a few representatives from the survey population to be sure all critical aspects of the topic have been covered, and then pretest the survey.
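
To show what the comparison in Solution #2 looks like in practice, here is a minimal Python sketch with invented percentages. The threshold for flagging a gap is arbitrary; choose one that makes sense for your study.

    # Minimal sketch (invented percentages): comparing respondent
    # characteristics with the study population to gauge sample bias.
    population_pct = {"academic": 0.55, "hospital": 0.35, "other": 0.10}
    respondent_pct = {"academic": 0.70, "hospital": 0.22, "other": 0.08}

    for group in population_pct:
        gap = respondent_pct[group] - population_pct[group]
        # Flag any group over- or under-represented by more than 10 points.
        flag = "  <- possible bias" if abs(gap) > 0.10 else ""
        print(f"{group:>9}: population {population_pct[group]:.0%}, "
              f"respondents {respondent_pct[group]:.0%}, gap {gap:+.0%}{flag}")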

eHealth Literacy Demands and Barriers: An Evaluation Matrix

Chan CV, Kaufman DR. A framework for characterizing eHealth literacy demands and barriers. J Med Internet Res. 2011;13(4):e94.

Researchers from Columbia University have developed a matrix of literacy types and cognitive complexity levels that can be used to assess an individual's eHealth competence and to develop eHealth curricula.  This tool can also be used to design and evaluate eHealth resources.  eHealth literacy is defined as "a set of skills and knowledge that are essential for productive interactions with technology-based health tools."  The authors' objectives were to understand the core skills and knowledge needed to use eHealth resources effectively and to develop a set of methods for analyzing eHealth literacy.  They adapted Norman and Skinner's eHealth literacy model to characterize six components of eHealth literacy:

  1. Computer literacy
  2. Information literacy
  3. Media literacy
  4. Traditional literacy
  5. Science literacy
  6. Health literacy

The authors used Amer’s revision of Bloom’s cognitive processes taxonomy to classify six cognitive process dimensions, ranked in order of increasing complexity:

  1. Remembering
  2. Understanding
  3. Applying
  4. Analyzing
  5. Evaluating
  6. Creating

They used the resulting matrix to characterize the demands of eHealth tasks (Table 3) and to describe an individual's performance on one of the tasks (Table 5), using a cognitive task analysis coding scheme based on the six cognitive process dimensions.  A small sketch of the matrix as a data structure follows.
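
Here is a minimal Python sketch of the 6 x 6 framework as a simple data structure. The literacy types and cognitive dimensions come from the article; the example task and its cell assignments are invented for illustration, not taken from the article's tables.

    # Sketch of the 6 x 6 framework as a simple data structure.
    literacy_types = [
        "computer", "information", "media",
        "traditional", "science", "health",
    ]
    cognitive_dimensions = [
        "remembering", "understanding", "applying",
        "analyzing", "evaluating", "creating",
    ]

    # An eHealth task can be characterized by the matrix cells it demands.
    # Hypothetical example: refilling a prescription through a patient portal.
    task_demands = {
        ("computer", "applying"): "navigate the portal's refill form",
        ("health", "understanding"): "recognize the medication and dosage",
        ("traditional", "remembering"): "recall and spell login credentials",
    }

    for (literacy, dimension), demand in sorted(task_demands.items()):
        # Sanity check: every demand maps to a valid cell of the matrix.
        assert literacy in literacy_types and dimension in cognitive_dimensions
        print(f"{literacy} literacy x {dimension}: {demand}")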