
Archive for the ‘Research Reads’ Category

Healthcare Services Managers’ Decision-Making and Information

MacDonald, J.; Bath, P.; Booth, A. “Healthcare services managers: What information do they need and use?” Evidence Based Library and Information Practice 2008; 3(3):18-38.

This paper presents research results that provide insight into how information influences healthcare managers’ decisions. Information needs included explicit Organizational Knowledge (such as policies and guidelines), Cultural Organizational Knowledge (situational factors such as buy-in, controversy, bias, and conflict of interest; and environmental factors such as politics and power), and Tacit Organizational Knowledge (gained experientially and through intuition). Managers tended to use internal information (already created or implemented within the organization) when investigating an issue and developing strategies. When selecting a strategy, managers either actively sought additional external information or simply made a decision without all of the information they would have liked to have. Managers may be more likely to use external information (i.e., research-based library resources) if their own internal information is well managed. The authors suggest that librarians may have a role in managing information created within an organization so that it can be integrated with externally created information resources.

Demystifying Survey Research: Practical Suggestions for Effective Question Design

An article entitled “Demystifying Survey Research: Practical Suggestions for Effective Question Design” was published in the journal Evidence Based Library and Information Practice (2007). The aim of the article is to provide practical suggestions for writing effective questions for written surveys. Sample survey questions in the article illustrate how basic techniques, such as choosing appropriate question forms and incorporating scales, can be used to improve survey questions.
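
To give a generic illustration of these techniques (my own example, not one drawn from the article): a leading yes/no item such as “Don’t you agree that the new library hours are an improvement?” invites agreement and yields little detail. Recasting it in a neutral form with a scale, such as “How satisfied are you with the new library hours?” with response options ranging from “very dissatisfied” to “very satisfied,” removes the leading language and captures degrees of opinion.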

Since this is a peer-reviewed, open-access journal, those interested may access the full-text article online at: http://ejournals.library.ualberta.ca/index.php/EBLIP/article/view/516/668.

In addition, for those interested in exploring survey research further, I have found the following print resources very helpful:

Converse, J.M.; Presser, S. Survey Questions: Handcrafting the Standardized Questionnaire. Thousand Oaks, CA: Sage Publications, 1986.

Fink, A. How to Ask Survey Questions. Thousand Oaks, CA: Sage Publications, 2003.

Fowler, F.J. Improving Survey Questions: Design and Evaluation. Thousand Oaks, CA: Sage Publications, 1995.

The Promise of Appreciative Inquiry in Library Organizations

Sullivan, M. “The Promise of Appreciative Inquiry in Library Organizations.” Library Trends. Summer 2004. 53(1):218-229.

According to Sullivan (2004), Appreciative Inquiry is a different approach to organizational change that “calls for the deliberate search for what contributes to organizational effectiveness and excellence” (p. 218). This perspective proposes moving from a traditional “deficit-based approach,” which emphasizes problems, to a more positive and collaborative framework. Appreciative Inquiry thus identifies positive experiences and achievements as a “means to create change based upon the premise that we can effectively move forward if we know what has worked in the past” (p. 219). Furthermore, this approach “engages people in an exploration of what they value most about their work” (p. 219).

Overall, this article discusses the origins and basic principles of Appreciative Inquiry. In particular, the author provides practical suggestions for how libraries can begin to apply the principles and practices of Appreciative Inquiry to foster a more positive environment for creating change in libraries. For example:

  • Start a problem-solving effort with a reflection on strengths, values, and best experiences.
  • Support suggestions, possible scenarios, and ideas.
  • Take time to frame questions in a positive light that will generate hope, imagination, and creative thinking.
  • Ask staff to describe a peak experience in their professional work or a time when they felt most effective and engaged.
  • Close meetings with a discussion of what worked well, and identify individual contributions to the success of the meeting.
  • Create a recognition program and make sure that it is possible (and easy) for everyone to participate.
  • Expect the best performance and assume that everyone has the best intentions in what they do.

In conclusion, Appreciative Inquiry entails a major shift in thinking about how change can occur in library organizations. By examining what is working, this approach provides a useful and positive framework for transforming libraries.

What Is “Appreciative Inquiry”?

Christie, C.A. “Appreciative inquiry as a method for evaluation: an interview with Hallie Preskill.”  American Journal of Evaluation. Dec 2006. 27(4): 466-474.

In this interview, Preskill defines appreciative inquiry as “…a process that builds on past successes (and peak experiences) in an effort to design and implement future actions” (p. 466). She points out that when we look for problems, we find them, and that this deficit-based approach can lead to feelings of powerlessness. In appreciative inquiry the focus is on what has worked well, and the use of affirmative and strengthening language improves morale. She suggests focusing on the positive through interviews that ask for descriptions of “peak experiences” that left people feeling energized and hopeful, and for information about what people value most. She cautions that skeptics will find this to be a “Pollyanna” approach that lacks scientific rigor.

What Does “Effective” Mean?

Schweigert, F.J. “The meaning of effectiveness in assessing community initiatives.” American Journal of Evaluation. Dec 2006. 27(4):416-426.

Evaluators have a way of coming up with answers to questions we didn’t know we had, such as: what does “effective” mean? This article points out that the meaning varies according to context. Sometimes a positive judgment means that the changes that occurred were the ones that were expected; in other contexts it requires that the changes were better than what would have occurred without any intervention (which requires evidence of cause and effect). In true academic evaluator fashion, the author presents three different meanings of “effectiveness”:

  • increased understanding through clarifying assumptions, documenting influences, identifying patterns, assessing expected and unexpected results, etc.
  • accountability through making decisions based on performance expectations and standards, such as in benchmarking.
  • demonstration of causal linkages through experimental and quasi-experimental evidence showing what works. “Although randomized experiments have been called the ‘gold standard’ of social science research and evaluation, evaluators are well aware that experimental designs are not always possible, feasible, necessary, or even desirable.” (p. 427)

Nuggets from the Health Program Evaluation Field

Grembowski, D. The Practice of Health Program Evaluation. Thousand Oaks, CA: Sage Publications, 2001. (Information about this book is available from Google Books.)

This is not a new book, but it is an interesting one, with information of potential use in thinking about evaluating health information outreach. Some general perspectives from the book:

  • Most evaluations are conducted to answer two questions:  Is the program working?  Why or why not?
  • All evaluation is political since judging worth is based on attaching values.
  • Evaluation as a 3-act play:  Act 1 is asking questions; Act 2 is answering them; Act 3 is using these answers in decision-making.
  • Evaluators’ roles range from objective researcher through participant, coach, and advocate.
  • Evaluations look at the “theories” behind programs, such as the causes and effects of implementing activities.
  • Premises underlying cost-effectiveness analysis: health care resources are scarce, resources have alternate uses, people have different priorities, and there are never enough resources to satisfy all (a rough worked example follows this list).
  • Evaluation standards include utility (results are intended to be used), feasibility (methods should be realistic and practical), propriety (methods should be ethical, legal, and respectful of the rights and interests of all participants), accuracy (produce sound information and conclusions that are related logically to data).
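
To make the cost-effectiveness premises above a little more concrete, here is a rough worked example using hypothetical numbers of my own (not from the book): suppose outreach program A costs $10,000 and reaches 500 people, while program B costs $18,000 and reaches 800 people. The incremental cost-effectiveness of B relative to A is ($18,000 - $10,000) / (800 - 500), or about $26.67 per additional person reached. Because resources are scarce and have alternate uses, the decision-makers’ question becomes whether each additional person reached is worth roughly $27 compared with other possible uses of those funds.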

The “LIMB” Model: Lay Information Mediary Behavior

Abrahamson, J.A.; Fisher, K.E. “‘What’s past is prologue’: towards a general model of lay information mediary behaviour.” Information Research 2007; 12(4).

Health information outreach is often aimed at information mediaries in addition to primary information seekers. The article defines lay information mediaries as “those who seek information in a non-professional or informal capacity on behalf (or because) of others without necessarily being asked to do so, or engaging in follow-up.” These individuals are also known as gatekeepers, change agents, communication channels, links, navigators, and innovators. The authors present a generalized model of information mediary characteristics, activities, motivations, barriers, and facilitators, and they raise the question of what differences exist between primary information seekers and information mediaries, since “the caregiver-as-person may have information needs that vary from the caregiver-as-caregiver.” These are factors we can take into account in community assessment activities.

The STAR Model for Developing Health Promotion Web Sites

Skinner, H.A.; Maley, O.; Norman, C.D. “Developing Internet-based ehealth promotion programs: The Spiral Technology Action Research (STAR) Model.” Health Promotion Practice 2006; 7(4):406-417.

The STAR model combines technology development with community involvement and continuous improvement through five cycles: listen, plan, do, study, and act. The “listen” cycle corresponds to community assessment: learning about needs and opportunities, and building partnerships and stakeholder buy-in. The “plan” and “do” cycles involve identifying objectives and strategies, followed by prototyping and design to address the identified community needs. The “study” cycle corresponds to process evaluation of web sites or prototypes, followed by the “act” cycle, in which decisions are made and actions taken based on evaluation results (promotion, ongoing feedback collection and refinement, and sustainability). The article presents a case study of using the model, along with methods for approaching each of the five cycles.

Storytelling and Behavior Change

Hinyard, L.J.; Kreuter, M.W. “Using narrative communication for health behavior change: a conceptual, theoretical, and empirical overview.” Health Education & Behavior 2007; 34(5):777-792.

This article advocates the use of narrative communication to motivate people to change their health behaviors, pointing out that “understanding any situation involves storing and retrieving stories from memory.” The authors speculate that narrative ways of learning and knowing may be especially useful for issues where reason and logic have limitations, such as morality, religion, values, and social relationships. Narratives can help overcome resistance to a message, facilitate observational learning, and foster identification with characters. Stories can be combined with more “scientific” methods to achieve optimal results.

Health Promotion Facilitators and Barriers

Robinson, K.L.; Driedger, M.S.; Elliott, S.J.; Eyles, J. “Understanding facilitators of and barriers to health promotion practice.” Health Promotion Practice 2006; 7:467-476.

The authors state that although the “field of health promotion has shifted to embrace a socioecological model of health recognizing the role of environmental and contextual factors on health promotion practice and health outcomes,” most health promotion research “continues to focus on behavioral or risk factor outcomes.” Published studies of health promotion facilitators and barriers have tended to focus on one of the three linked stages of health promotion practice: capacity building for planning and development; delivery of health promotion activities; and evaluation and/or research. Barriers to evaluation and research include: health promotion activities rarely have simple, direct cause-effect relationships to test; health interventions involve many factors and processes that cannot easily be quantified; monitoring in rural areas or at the community level poses significant logistical and financial barriers; and tension exists between “scientific rigor” and participatory evaluation processes that aim to influence practice.

The article characterizes facilitators and barriers to health promotion practice as internal (leadership, staffing, resources, priority/interest, infrastructure, and organization of teams and groups) and external (community buy-in, turnover of local contacts, partnerships or collaboration, socioeconomic/demographic/political contexts, and funding opportunities or cuts).