
Library Assessment Conference Wrap-Up

The penultimate session of the 2008 Library Assessment Conference was a panel discussion, Assessment Plans: Four Case Studies. Among the experiences and advice provided by the four expert panelists was this final observation about working with library staff on assessment projects: “If you include them, they’re your partner. If you exclude them, they’re your judge.”

The final session of the Library Assessment Conference featured a panel of speakers from academic librarianship and LIS education, each offering a brief summation of what they learned during the three days of the conference.

Deborah Carver, Dean of Libraries, University of Oregon Libraries
There is more to the assessment story than numbers: narratives are very important. Assessment is local, so think about what matters most to your institution. Borrow from others and share, but customize your methods for your own environment. Also, use what you have (i.e., data you’re already collecting) as much as possible.

Debra Gilchrist, Dean, Libraries and Media Services, Pierce College Library
Inquiry is central to learning. Accountability is local, and assessment provides vital signs. Stay ahead of the game so you can influence the future; as Betsy Wilson says, accelerate relevance. Can we call assessment something else? Focusing on assessment is like focusing on the test instead of the content of a class, so we should come up with a label that puts the focus on outcomes. We should also do more to link library assessment findings to what research says is important (e.g., about student learning).

Paul Beavers, Director, Information Services Group, Wayne State University Libraries
Assessment provides information that we can use in communicating with our patrons so that they can make more demands on us. We can help them understand that they can ask us to do more; we must make sure they have high expectations of us. Assessing the library’s contribution to educational outcomes is a “highest-hanging fruit” for academic libraries.

Peter Hernon, Professor, Simmons Graduate School of Library and Information Science
Evaluation and assessment are different from each other: program evaluation is collecting and using data to make improvements, while environmental assessment is taking a broad view of the world. LIS education is sadly lacking in equipping future library and information science professionals with research skills they can use in evaluating library programs.

Assessing Wineries and Libraries

Good news: wine assessment has a lot in common with library assessment! At the Library Assessment Conference on August 4, wine author and columnist Paul Gregutt described the winery rating system that he developed for his book about Washington wineries. He rated wineries’ quality according to four criteria, and it can make sense to apply the same criteria in assessing libraries’ quality:

  • Value: people who visit libraries and wineries are often under time stress and looking for answers about the best products.
  • Consistency: customers of libraries and wineries want a personalized experience that is comfortable, reliable, and won’t disappoint.
  • Style: a combination of physical characteristics, service, and collection strengths; a big winery or library must be well-organized, while smaller ones must demonstrate uniqueness and depth.
  • Contribution: to the wine industry, or to libraries’ stakeholders via outreach and community programs.

Library Assessment Challenges

The Library Assessment Conference took place in Seattle from August 4-7, and at the opening session the audience heard three academic library directors’ perspectives on the “Most Important Challenge for Library Assessment.”

Susan Gibbons, Dean of the University of Rochester’s River Campus Libraries, opened with observations about the attractions of quantitative data: they give you a sense of “precision” and a “correct” answer, they are perceived as weighty, and their collection can be automated. She emphasized the importance of thinking about what we are counting, and why, offering the example of a 10,000-question decrease in reference questions answered at the University of Rochester between 1996 and 2006. To learn the “why” behind this quantitative finding, the library used qualitative approaches: asking students to take pictures of what they carry with them all the time, to map out their daily movements, to indicate what is and is not useful by writing on a printed copy of the library’s web page, and to imagine what they would wish for if they had a magic wand. The library learned that all students carry cell phones but that the library’s phone number did not appear on its home page (in fact, 40% of ARL libraries’ home pages lack phone numbers!), and that students’ peak time period for studying is from 11pm to 1am. Those findings led to better visibility of the library’s phone number on the web and near library computers, and the magic wand exercise underscored the importance of providing skills and tools to graduate students early in their careers. She emphasized that local assessment methods are required, since every campus is unique and accountability is local, and that when opportunities are available for wide staff participation in assessment, changes come more easily and work better.

Rick Luce, Director of Libraries at Emory University, characterized assessment as a method of planning for improvement: a catalyst for change rather than a quick fix. Performance measures are an organization’s vital signs, metrics that show innovation, research leadership, brand identity, and gains in market share. Successful organizations offer something that others can’t do, do poorly, or have difficulty doing well. Satisfaction can be studied through questionnaires that function as “happiness meters,” investigation into what’s important, and comparison of how an organization rates against the best in its industry. He cautioned that assessment efforts can be hampered by pitfalls such as lack of accountability, too many initiatives, forgetting larger organizational drivers, and lack of discipline, and he reminded us that time and patience are needed for real change in organizations. Briefly mentioning the “Hedgehog” concept (a single, simple idea that guides great organizations’ efforts to be the best) from Jim Collins’ book Good to Great, he urged us to understand what we are passionate about, what we are best at, and what drives our economic engines.

Betsy Wilson, Dean of Libraries at the University of Washington, provided her perspective that the most important challenge for libraries is accelerating relevance. Assessment can help by providing fuel for that acceleration. So, it is extremely important for libraries that assessment becomes part of their organizational lifeblood, turning cultures of complaint into cultures of assessment.

The Promise of Appreciative Inquiry in Library Organizations

Sullivan, M. “The Promise of Appreciative Inquiry in Library Organizations.” Library Trends. Summer 2004. 53(1):218-229.

According to Sullivan (2004), Appreciative Inquiry is a different approach to organizational change that “calls for the deliberate search for what contributes to organizational effectiveness and excellence” (p. 218). This perspective proposes moving from a traditional “deficit-based approach” in which there is an emphasis on problems to a more positive and collaborative framework. Therefore, Appreciative Inquiry is a unique approach that includes the identification of positive experiences and achievements as a “means to create change based upon the premise that we can effectively move forward if we know what has worked in the past” (p. 219). Furthermore, this approach “engages people in an exploration of what they value most about their work” (p. 219).

Overall, this article discusses the origins and basic principles of Appreciative Inquiry. In particular, the author provides practical suggestions for how libraries can begin to apply the principles and practices of Appreciative Inquiry to foster a more positive environment for creating change in libraries. For example:

· Start a problem-solving effort with a reflection on strengths, values, and best experiences.

· Support suggestions, possible scenarios, and ideas.

· Take time to frame questions in a positive light that will generate hope, imagination, and creative thinking.

· Ask staff to describe a peak experience in their professional work or a time when they felt most effective and engaged.

· Close meetings with a discussion of what worked well and identify individual contributions to the success of the meeting.

· Create a recognition program and make sure that it is possible (and easy) for everyone to participate.

· Expect the best performance and assume that everyone has the best intentions in what they do.

In conclusion, Appreciative Inquiry entails a major shift in thinking about how change can occur in library organizations. By examining what is working, this approach provides a useful and positive framework for transforming libraries.

Evaluation 2008

The American Evaluation Association’s 2008 annual meeting, with its “Evaluation Policy and Evaluation Practice” theme, will be held November 5-8 in Denver. There will be three days chock full o’ presentations about evaluating almost any kind of program you can think of (including health promotion but not health sciences librarianship). To see if this is the meeting for you, take a look at the schedule.

What Is “Appreciative Inquiry”?

Christie, C.A. “Appreciative inquiry as a method for evaluation: an interview with Hallie Preskill.” American Journal of Evaluation. Dec 2006. 27(4):466-474.

In this interview, Preskill defines appreciative inquiry as “…a process that builds on past successes (and peak experiences) in an effort to design and implement future actions” (p. 466). She points out that when we look for problems we find them, and that this deficit-based approach can lead to feelings of powerlessness. In appreciative inquiry the focus is on what has worked well, and the use of affirmative, strengthening language improves morale. She suggests focusing on the positive through interviews that ask for descriptions of “peak experiences” that led to feelings of being energized and hopeful, and that ask what people value most. She cautions that skeptics will find this a “Pollyanna” approach that lacks scientific rigor.

What Does “Effective” Mean?

Schweigert, F.J. “The meaning of effectiveness in assessing community initiatives.” American Journal of Evaluation. Dec 2006. 27(4):416-426.

Evaluators have a way of coming up with answers to questions we didn’t know we had, such as, “what does ‘effective’ mean?” This article points out that the meaning varies according to context. Sometimes a positive judgment means the changes that occurred were the ones that were expected; in other contexts, it requires that the changes were better than what would have occurred without any intervention, which calls for evidence of cause and effect (a toy numeric sketch of this distinction follows the list below). In true academic evaluator fashion, the author presents three different meanings of “effectiveness”:

  • increased understanding through clarifying assumptions, documenting influences, identifying patterns, assessing expected and unexpected results, etc.
  • accountability through making decisions based on performance expectations and standards, such as in benchmarking.
  • demonstration of causal linkages through experimental and quasi-experimental evidence showing what works. “Although randomized experiments have been called the ‘gold standard’ of social science research and evaluation, evaluators are well aware that experimental designs are not always possible, feasible, necessary, or even desirable.” (p. 427)
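To make that distinction concrete, here is a minimal Python sketch contrasting the two judgments: meeting an expected target versus beating an estimated counterfactual (what would have happened without the program). This is an illustration only; every figure and variable name is invented, not drawn from the article.

```python
# Hypothetical illustration of two readings of "effectiveness."
# All figures are invented; a real evaluation would need defensible
# comparison-group data to justify the counterfactual estimate.

baseline = 0.40          # outcome rate before the program
observed = 0.55          # outcome rate after the program
expected_target = 0.50   # what planners said the program should achieve
counterfactual = 0.52    # estimated rate had the program not run,
                         # e.g., from a comparison community

# Reading 1: effective because the expected change occurred.
met_expectations = observed >= expected_target

# Reading 2: effective because the change beat the no-program scenario;
# this is the reading that demands cause-and-effect evidence.
program_effect = observed - counterfactual

print(f"Change from baseline: {observed - baseline:+.2f}")
print(f"Met expectations: {met_expectations}")
print(f"Estimated effect vs. counterfactual: {program_effect:+.2f}")
```

Note that the same observed change looks impressive against the baseline (+0.15) but modest against the counterfactual (+0.03), which is exactly why the second reading is the harder claim to support.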

Nuggets from the Health Program Evaluation Field

Grembowski, D. The Practice of Health Program Evaluation. Sage, 2001. Information about this book is available from Google Books.

Not a new book, but an interesting one, with information of potential use to us in thinking about evaluating health information outreach. Some general perspectives from the book:

  • Most evaluations are conducted to answer two questions:  Is the program working?  Why or why not?
  • All evaluation is political since judging worth is based on attaching values.
  • Evaluation as a 3-act play:  Act 1 is asking questions; Act 2 is answering them; Act 3 is using these answers in decision-making.
  • Evaluators’ roles range from objective researcher through participant, coach, and advocate.
  • Evaluations look at the “theories” behind programs, such as the causes and effects of implementing activities.
  • Premises underlying cost-effectiveness analysis: health care resources are scarce, resources have alternate uses, people have different priorities, and there are never enough resources to satisfy everyone (see the sketch after this list).
  • Evaluation standards include utility (results are intended to be used), feasibility (methods should be realistic and practical), propriety (methods should be ethical, legal, and respectful of the rights and interests of all participants), accuracy (produce sound information and conclusions that are related logically to data).
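As a concrete illustration of those cost-effectiveness premises (scarce resources with alternate uses), here is a minimal Python sketch comparing two hypothetical outreach options. The options, costs, and reach figures are all invented; this is not an example from Grembowski’s book.

```python
# Hypothetical comparison of two outreach options competing for the
# same scarce budget. All numbers are invented for illustration.

options = {
    "in-person training":  {"cost": 12000.0, "reached": 300},
    "web-based tutorials": {"cost":  5000.0, "reached": 180},
}

# Average cost per person reached for each option on its own.
for name, o in options.items():
    print(f"{name}: ${o['cost'] / o['reached']:.2f} per person reached")

# Incremental ratio: what each *additional* person reached costs if we
# choose the more expensive option over the cheaper one.
hi, lo = options["in-person training"], options["web-based tutorials"]
incremental = (hi["cost"] - lo["cost"]) / (hi["reached"] - lo["reached"])
print(f"Incremental cost per additional person reached: ${incremental:.2f}")
```

The incremental ratio captures the “alternate uses” premise: the real decision is rarely “program versus nothing” but “this use of the budget versus that one.”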

More from MLA on Library Value

This year’s Medical Library Association annual meeting in Chicago had several good sessions in which speakers presented experiences and approaches to assigning dollar values to library services and activities. These included:

  • “A Calculator for Measuring the Impact of Health Sciences Libraries and Librarians” presented by Betsy Kelly and Barb Jones of the MidContinental Region, National Network of Libraries of Medicine–Their calculators include the Valuing Library Services Calculator and the Cost Benefit and ROI Calculator. These have the potential to be very useful tools (a rough sketch of this kind of calculation appears after this list).
  • “Connecting with Administrators: Demonstrating the Value of Library Services” presented by Edward J. Poletti of the Central Arkansas Veterans Health Care System in Little Rock, AR–He and VA Library colleagues conducted value studies of shared electronic resources, ILL, and literature searches. His presentation included a list of sources of dollar values such as Fortney’s “Price History for Core Clinical Journals in Medicine and Nursing 2003-2007″ and “Doody’s core titles in the health sciences 2007: list overview and analysis.” This paper received honorable mention for the MLA Research Award, and a summary is available at the MLA Federal Libraries Section blog.
  • “Bridging the Gap: Using Dollar Values to Demonstrate the Value of Library Services” presented by Julia Esparza of Louisiana State University Health Sciences Center in Shreveport, LA–Her experience with assigning and tracking dollar values included analysis of copying/printing costs and article costs.
  • “Quantum Physics and Hospital Library Assessment” presented by Michele Klein-Fedyshin of UPMC Shadyside, Pittsburgh, PA–Assessment must be locally relevant and there are various possible foci, such as the financial impact of local consortia, the impact of library services on nursing certification, prevention of hospital acquired infections, cost savings from library contributions to pay-for-performance, library as drug information center, etc.
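Calculators like the ones Kelly and Jones presented generally work by multiplying service counts by estimated market unit values and comparing the total to the library’s cost. The Python sketch below is not their implementation; the services, counts, unit values, and budget are entirely invented, and it only illustrates the general shape of the arithmetic.

```python
# Hypothetical sketch of a library-value calculation: multiply service
# counts by estimated market unit values, sum the result, and compare
# it to the library's cost for a simple ROI figure. All numbers are
# invented; none come from the MLA calculators mentioned above.

services = [
    # (service, annual count, estimated market value per use in $)
    ("mediated literature search",   250, 75.00),
    ("interlibrary loan",            900, 18.00),
    ("journal article download",   15000,  3.50),
]

library_cost = 60000.0  # invented annual operating cost

total_value = sum(count * unit for _, count, unit in services)
roi = (total_value - library_cost) / library_cost

for name, count, unit in services:
    print(f"{name}: {count} x ${unit:.2f} = ${count * unit:,.2f}")
print(f"Total estimated value: ${total_value:,.2f}")
print(f"Simple ROI: {roi:.2f} (about ${1 + roi:.2f} returned per $1 spent)")
```

In practice the hard part is defending the unit values, which is why lists of price sources like the ones Poletti cited (Fortney’s price history, Doody’s core titles) matter so much.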

The “LIMB” Model: Lay Information Mediary Behavior

Abrahamson, J.A.; Fisher, K.E. “‘What’s past is prologue’: towards a general model of lay information mediary behaviour.” Information Research. October 2007. 12(4).

Health information outreach is often aimed at information mediaries in addition to primary information seekers. The article defines lay information mediaries as “those who seek information in a non-professional or informal capacity on behalf (or because) of others without necessarily being asked to do so, or engaging in follow-up.” These individuals are also known as gatekeepers, change agents, communication channels, links, navigators, and innovators. The authors present a generalized model of information mediary characteristics, activities, motivations, barriers, and facilitators, and they raise the question of what differences exist between primary information seekers and information mediaries, since “the caregiver-as-person may have information needs that vary from the caregiver-as-caregiver.” These are factors we can take into account in community assessment activities.