
Denver in November 7: Read More About It

Read even more about the American Evaluation Association meeting at the Eagle Dawg Blog, where Nikki Detmar has summarized her 10,431 words of notes.  Nikki attended many sessions different from the ones I went to, and where we were both in the audience for a session, she took more detailed notes than I did.  She's a fast typist who uses her laptop for notes; I'm a codger who writes with a pen in cursive scrawls on lined notebook paper.  Also, the Eagle Dawg Blog is just an all-around good read for Nikki's perspectives on life, the universe, health informatics, and medical librarianship.

Denver in November 6: Nonparametric Statistics

This was a half-day workshop on Sunday morning, November 9, ably taught by Jennifer Camacho Catrambone, Ruth M. Rothstein CORE Center, Chicago. Nonparametric statistics are those that are used with ordinal or nominal data, when data are skewed, or when sample sizes are small.

In contrast, parametric statistics are designed to be used with a minimum sample size of 30 subjects per group.   Dependent variables are expected to be interval-level; categorical (nominal) dependent variables are excluded (although independent variables are often categorical).

The Chi Square test is an example of a nonparametric test of association between variables. The workshop handout lists numerous others and provides descriptions and assumptions. The class was full of information, but note to self: don’t take statistics classes after spending four days in conference sessions–the brain is tired.
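To make the chi-square example concrete, here is a minimal Python sketch of a test of association between two nominal variables, using SciPy and a made-up 2x2 table of counts (the variables and numbers are invented for illustration, not drawn from the workshop):

    # Chi-square test of association between two categorical (nominal)
    # variables, using hypothetical counts invented for illustration.
    from scipy.stats import chi2_contingency

    # Rows: attended a training session (yes/no)
    # Columns: passed a follow-up quiz (yes/no)
    observed = [[30, 10],
                [20, 25]]

    chi2, p_value, dof, expected = chi2_contingency(observed)
    print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")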

Denver in November 5: Saturday Sessions 11-8-08

Going to meetings is hard work!  Especially meetings like the American Evaluation Association annual meeting, which is chock full of interesting sessions that make you think.  Saturday was a very full day, and quite rewarding.

Fine-tuning Evaluation Methodologies for Innovative Distance Education Programs (Debora Goetz Goldberg, John James Cotter, Virginia Commonwealth University)

VCU Medical School offers a PhD in Health Related Sciences via distance education that combines on-campus learning, asynchronous discussions, synchronous chat, podcasting, and other approaches.  Program evaluation followed these steps: define quality (support, course structure, curriculum, instruction), select important areas to review (were goals met, what skills were developed, was advising adequate, was IT adequate, overall program), identify data collection sources (course evaluations, follow-up assessments, interviews with instructors, feedback from students' employers), and collect and analyze data.  Findings showed areas where the curriculum needed adjustment, where technology could be enhanced (for example, offering streaming videos of lectures), and where supplementary use of teaching assistants was needed.  The supplementary TAs worked with students in the statistics course.

Evaluation of an Interactive Computer-based Instruction in Six Universities: Lessons Learned (Rama Radhakrishna, Marvin Hall, Kemirembe Olive, Pennsylvania State University)

In a USDA-sponsored (with institutional matching funds) project, Penn State collaborated with five other land-grant universities to develop and offer a one-semester agronomy course comprising 11 interactive modules.  Development took two years and addressed the funding agency's desire for collaborative courses that make collective use of expertise, share resources, and reduce duplication of effort.  Each module featured 20 knowledge questions plus items about the module's navigability, design, and layout.  Pre- and post-tests showed knowledge gain.  The project showed that multi-institutional collaboration can work, although it can be challenging.  In this case, IRB review was needed (because human subjects–the students–were involved) and the crop scientists were unfamiliar with that process.

The Use of a Participatory Multimethod Approach in Evaluating a Distance Education Program in Two Developing Countries (Charles Potter, Sabrina Liccardo, University of the Witwatersrand)

This radio-based series of English lessons for school children in South Africa and Bangladesh has grown significantly since it began in 1992.  In 1995 it was reaching 72,000 learners and as of 2005 it was reaching 1,800,000.  Evaluation has involved questionnaires, observations, focus groups, and photography; results have been used to report progress to stakeholders and to identify areas for improvement.

Building Evaluation Practice Into Online Teaching: An Action Research Approach to the Process Evaluation of New Courses (Juna Z Snow, InnovatEd Consulting)

The author has developed a Student Performance Portfolio that has been used with two online teacher education courses.  The portfolios allow students to conduct ongoing evaluation of their work and of the course, and include weekly goals, activities and time spent, with reflections on assignments and performance.   Students submit their portfolios each week.  To get the most from the portfolios, it is important to conduct ongoing content analysis and be responsive to students.

Incorporating Cellular Telephones into a Random-digit-dialed Survey to Evaluate a Media Campaign (Lance Potter and Andrea Piesse, Westat; Rebekah Rhoades and Laura Beebe, University of Oklahoma)

When both cellphones and landlines are included in telephone surveys, different sampling frames must be constructed for the groups who are cellphone-only, landline-only, or have both.  A tobacco intervention study found one significant difference among the three groups: those who have both cellphones and landlines smoke less, a difference theorized to stem from income and educational characteristics.  The sociology of cellphones is different from that of landlines.  For example, if a cellphone rings on a counter or a desk and its owner is not present, no one else will answer it.  In addition, many cellphone contracts require owners to pay for calls they receive; these owners will not want to use up their "minutes" answering survey questions.  This issue could be addressed by offering gift cards to participants or by conducting surveys on weekends.

The Growing Cell Phone-Only Population in Telephone Survey Research: Evaluators Beware (Joyce Wolfe, Brett Zollinger, Fort Hays State University)

Telephones have been fundamental tools for survey research, and cell phones are introducing new variables to consider.  At one time more than 90% of households had landlines, but now almost 16% of telephone users are cellphone-only, and that share is projected to grow.  Whether there are significant differences between people who have landlines and those who only use cellphones is a topic of debate.  The cellphone-only population tends to be young, unmarried, renting, and lower-income (and more likely to face financial barriers to treatment).  Samples of cell phone numbers can be obtained, but it is illegal to use automatic dialers with those numbers.  In addition, more screening is needed because cell phones are linked to individuals rather than to households or geographic locations, and those individuals can range in age down to elementary school students.

Perspectives on a Promising Practices Evaluation (Susan Ladd, Rosanne Farris, Jan Jernigan, Belinda Minta, Centers for Disease Control and Prevention; Pam Williams-Piehota, RTI International)

The Centers for Disease Control and Prevention's Division for Heart Disease and Stroke Prevention (DHDSP) has conducted evaluations of heart disease and stroke interventions to identify effective interventions and promising practices, with the intention of building evaluation capacity at the state level.  Lessons learned included: collaboration and comprehensive evaluation planning are time-consuming; better evaluability assessments are needed; and periodic reaffirmation of commitments and expectations is necessary.

Rapid Evaluation of Promising Asthma Programs in Schools (Marian Huhman, Dana Keener, Centers for Disease Control and Prevention)

The CDC’s Division of Adolescent and School Health (DASH) funds school-based programs for asthma management and uses a rapid evaluation model to help schools assess program impacts.  These evaluations are intended to be completed within one year, with two days devoted to conducting a site’s evaluability assessment and six months devoted to data collection.  These evaluations focus on short-term outcomes.

Best of the Worst Practices: What Every New Evaluator Should Know and Avoid in Evaluation Practice (Dymaneke Mitchell, National-Louis University; Amber Golden, Florida A&M University; Roderick L Harris, Sedgwick County Health Department; Nia K Davis, University of New Orleans)

Panel presenters discussed lessons they learned from their evaluation experiences in the American Evaluation Association/Duquesne University Graduate Education Diversity Internship program.  The experiences and lessons included the difficulties faced by an evaluator working with a group they feel sympathetic toward: it is hard to be an objective evaluator if you want to help the program succeed.  In working with nonprofits it is important to develop patience with ambiguity, to clarify short- and long-term goals, and to align goals with organizational readiness.  Strong negotiating skills are needed, along with a focus on building trust and credibility; evaluation seems to be 10% science and 90% relationships.  It is challenging to manage stakeholders' diverse and sometimes conflicting agendas.

Ethics and Evaluation: Respectful Evaluation with Underserved Communities
This excellent and thought-provoking session featured three presentations based on chapters in the recently published book The Handbook of Social Research Ethics by D.M. Mertens and P.E. Ginsberg (Sage, 2008).

1.  Ethical Responsibilities in Evaluations with Diverse Populations: A Critical Race Theory (CRT) Perspective
(Veronica Thomas, Howard University)

In traditional social science research, white men are normative.  Critical Race Theory (CRT) comes from the critical theory tradition, which views scholarship as a means to critique and change society and to counteract discrimination and oppression.  In the traditional positivist approach, research is explanatory; CRT instead uses a critical lens to foreground oppressed populations and to form conclusions and recommendations that promote social equity and justice.  IRBs, with their positivist emphasis on value-free research, can show little concern for the community impacts of projects.

2. Researching Ourselves Back to Life (Joan LaFrance, Mekinak Consulting)

Frustration has built up for many years among Native populations from their sense of being abused by researchers.  The traditional IRB approach to human subject protection can fail to address the question of whose voice speaks with authority about Aboriginal experiences.  Tribal members are beginning to understand that they can define the degree to which they make themselves available.  Five tribes have developed their own IRBs–capacity-building is needed for more tribes to do this.  Tribal approaches involve inclusive review teams, a clear definition of who is expert, broader reporting, an understanding of data ownership and publication approval needs, and a negotiation of how stories will be told.  Different ways of knowing are accepted:  traditional (knowledge of the past), empirical (evidence-based), and revealed (knowledge that comes through channels other than the intellect).

3.  Re-Conceptualizing Ethical Concerns in Underserved Communities (Katrina Bledsoe, Walter R McDonald and Associates Inc; Rodney Hopson, Duquesne University)

Underserved communities are those that suffer from a lack of resources that would allow them to thrive.  There is a need to reconceptualize traditional views of research and ethics.  Unintentional ethical violations have grown from inappropriate methods, use of data, and dissemination of results.  There is a power differential between researchers and participants and, in randomized controlled trials, a problem from the assumption of population homogeneity.  In a new philosophical perspective, evaluators will consider culture, history, community consent, social responsibility, and the differences between what is meaningful statistically and what is meaningful to a community.

ROI in MLA News

In the October 2008 issue of the MLA News, Terrance Burton presented a quick overview of how the business, dollar-based Return on Investment approach (profit gained from invested dollars) can be expanded into a view of what value is derived from whatever inputs you choose to work with.  This is more relevant in the library world, and in many of the health-related institutions libraries serve, where profit is not the main point.  Money is indeed invested in libraries, but it can be challenging to assess the outputs from that investment.  Burton suggests that we identify outcomes we want to measure, use a mix of approaches (quantitative, qualitative, speculative), and accept that any measure will be flawed.  Despite the flaws, using mixed approaches can provide various indicators to bolster a case.
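For comparison, here is the plain dollar-based arithmetic that Burton's expanded view starts from; the numbers below are invented purely to show the calculation and are not from the column:

    # Classic dollar-based ROI: net gain divided by the amount invested.
    # All figures are hypothetical.
    investment = 100_000        # e.g., an annual library budget
    estimated_value = 130_000   # estimated dollar value of services delivered

    roi = (estimated_value - investment) / investment
    print(f"ROI = {roi:.0%}")   # prints "ROI = 30%" for these made-up numbers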

Healthcare Services Managers' Decision-Making and Information

MacDonald, J., P. Bath, and A. Booth.  "Healthcare services managers: What information do they need and use?"  Evidence Based Library and Information Practice 3(3): 18-38.

This paper presents research results that provide insights into how information influences healthcare managers' decisions.  Information needs included explicit Organizational Knowledge (such as policies and guidelines), Cultural Organizational Knowledge (situational, such as buy-in, controversy, bias, and conflict of interest; and environmental, such as politics and power), and Tacit Organizational Knowledge (gained experientially and through intuition).  Managers tended to use internal information (already created or implemented within an organization) when investigating an issue and developing strategies.  When selecting a strategy, managers either actively looked for additional external information or simply made a decision without all of the information they would have liked to have.  Managers may be more likely to use external information (i.e., research-based library resources) if their own internal information is well managed.  The article's authors suggest that librarians may have a role in managing information created within an organization in order to integrate it with externally created information resources.

Denver in November 4: Friday Sessions 11-7-08

Friday at the American Evaluation Association meeting had a technology slant in most of the sessions I chose.

Blogging to the Beat of a New Drum: The Use of Blogs and Web Analytics in Evaluation (Cary Johnson and Stephen Hulme, Brigham Young University)

This roundtable was a discussion about possible uses of blogs and web analytics to assist in evaluations.  Content analysis could be used to study blog conversations; blogs could be used to identify survey participants.  Issues include lack of trust, lack of time, access to computers and networks, extraneous data, uneven representation, lack of comfort with written expression, influence of social desirability, and lack of knowledge about the people who are "most likely to blog."  Video blogging, in which participants can click and speak, can be used to collect student feedback in classes, but it is challenging to analyze the data.  Google Analytics is a free tool that can show what percentage of visitors view particular posts, who selects "read more" and how long they stay, and the top search terms.
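As a rough sketch of the kind of content analysis the roundtable discussed, the following Python snippet tallies theme keywords across blog comments; the themes, keywords, and comments are all invented for illustration:

    # Minimal keyword-based content analysis of blog comments.
    # Themes, keywords, and comments are hypothetical examples.
    from collections import Counter

    theme_keywords = {
        "access": ["computer", "network", "connection"],
        "trust":  ["privacy", "anonymous", "trust"],
        "time":   ["busy", "time", "schedule"],
    }

    comments = [
        "I don't trust posting this where everyone can see it.",
        "No time to blog this week, too busy with classes.",
        "The lab computer network was down again.",
    ]

    theme_counts = Counter()
    for comment in comments:
        text = comment.lower()
        for theme, keywords in theme_keywords.items():
            if any(word in text for word in keywords):
                theme_counts[theme] += 1

    print(theme_counts)  # e.g. Counter({'trust': 1, 'time': 1, 'access': 1})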

Voicethread: A New Way to Evaluate (Stephen Hulme and Tonya Tripp, Brigham Young University)

Voicethread.com is a new website where you can upload videos, images, documents, and presentations.  Viewers can then make audio and video comments.  This can make it possible to gather richer data than via a  survey or interview.

Can You Hear Me Now? Use of Audience Response Systems to Evaluate Educational Programming (Karen Ballard, University of Arkansas)

This was a hands-on demonstration of a personal/classroom/audience response ("clicker") system.  The Vanderbilt Center for Teaching maintains a bibliography of articles about clickers and provides a guide for using them in teaching.

Evaluation Dashboards: Practical Solutions for Reporting Results (Veena Pankaj and Ehren Reed, Innovation Network Inc.)

In the context of evaluation, a dashboard is a performance monitoring tool that provides a quick view of how well goals are being met.  Dashboards, borrowed from the corporate world, are useful in the nonprofit world. They display indicators and targets, and use simple visuals (such as color codes) to illustrate levels of achievement.  Although specialized software is available to help create them, Excel also works.  More information is available at Dashboard for Nonprofits.
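Since the presenters noted that Excel is enough to build a simple dashboard, here is a minimal Python sketch that writes a color-coded indicator sheet with the openpyxl library (my choice of tool, not the presenters'); the indicators, targets, and traffic-light thresholds are invented:

    # Minimal Excel dashboard sketch: one row per indicator, color-coded
    # green/yellow/red against a target.  Indicators and numbers are invented.
    from openpyxl import Workbook
    from openpyxl.styles import PatternFill

    GREEN, YELLOW, RED = "FF92D050", "FFFFFF00", "FFFF0000"

    indicators = [
        ("Workshops delivered", 24, 20),        # (name, actual, target)
        ("Partner sites reporting", 7, 12),
        ("Participant satisfaction (%)", 88, 85),
    ]

    wb = Workbook()
    ws = wb.active
    ws.append(["Indicator", "Actual", "Target", "Status"])

    for name, actual, target in indicators:
        ratio = actual / target
        color = GREEN if ratio >= 1 else YELLOW if ratio >= 0.75 else RED
        ws.append([name, actual, target, f"{ratio:.0%} of target"])
        ws.cell(row=ws.max_row, column=4).fill = PatternFill(
            start_color=color, end_color=color, fill_type="solid")

    wb.save("dashboard.xlsx")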

Walking the Talk: Evaluation Is as Evaluation Does (Matt Gladden, Michael Schooley, Rashon Lane, Centers for Disease Control and Prevention)

Representatives from CDC’s Division for Heart Disease and Stroke Prevention presented an overview of their strategies to build an organizational culture that values and routinely uses evaluation techniques. Evaluation capacity building approaches include planning evaluations, conducting evaluations, using results, and supporting evaluation through consultation, training, and resources.  Lessons learned included: establish systems that support evaluation; define boundaries and priorities; balance ability to implement a project with its potential benefit; recognize that an evaluator’s role is to identify issues but not necessarily solve them; differentiate between long-term and workplan goals.

Denver in November 3: Thursday Sessions, 11-6-08

My Thursday at the American Evaluation Association meeting had a late start due to my participation in a morning webinar.  Still, interesting sessions awaited after lunch and here are some notes:

Evaluation Methods and Experiences on Five Indian Reservations With the Federally Recognized Tribal Extension Program in Arizona and New Mexico (Roundtable Presentation from Linda Masters, Melvina Adolf, Gerald Moore, Matthew Livingston, and Jeannie Benally, all from the University of Arizona)

Extension faculty who work with the Federally Recognized Tribal Extension Program in Arizona and New Mexico presented a summary of evaluation methods they have used with the Navajo Nation, the San Carlos Apache Tribe, the Colorado River Indian Tribes, the Hualapai Tribe, and the Hopi Tribe.  The programs involve agricultural practices.  Emphasis was on how long it can take to establish trust (time spans of years) and the importance of finding an ally in the community to help open doors.  Faculty members are challenged to reconcile university requirements with tribal realities.  In the ensuing discussion, one possible approach that emerged was to build recognition of the need to establish trust into project logic models.  The roundtable also included overviews of approaches to workshops, including setting small, measurable goals and identifying early adopters among class participants (people who could help with further learning opportunities).  Workshop evaluations are usually done at the end of a session, before people leave, and feature questions with Likert scale responses.  Individual interviews and success stories have also proven useful.  Pre- and post-tests were not recommended because they could be intimidating.

Indicators of Success in a Native Hawaiian Educational System: Implications for Evaluation Policy and Practice in Indigenous Programs (Roundtable Presentation from Ormond Hammond and Sonja Evensen, Pacific Resources for Education and Learning)

In response to governmental indicators that do not reflect Native goals and values, a project to identify valid, reliable, and meaningful indicators of success for the Native Hawaiian Education Council (NHEC) resulted in a set of proposed target impacts (Resilience & Wellness; Hawaiian ‘Ike; Academic Achievement & Proficiency; Employment, Self-Sufficiency & Stewardship) and proposed target levels: Kanaka (individual/group), ‘Ohana (family), Kaiaulu (community), and ‘Onaehana (system).  For example, activities that address issues such as homelessness, life skills, or nutrition could be considered as affecting Resilience & Wellness at the Kanaka (individual/group) level.  This is an effort to express indicators in culturally meaningful ways.

Improving the Collection, Analysis, and Reporting of Survey Data (Gary Miron and Anne Cullen, Western Michigan University)

The presenters demonstrated tools that they have developed for their clients to help improve the collection, analysis, and reporting of survey data.  These included an Excel approach to analyzing open-ended questions (salient themes are column headers, comments are listed in the first column, and check marks are placed in the appropriate cells) and a preformatted Excel reporting file that produces graphs and tables of survey results.  Among their lessons learned: sometimes self-ratings of knowledge and skill decline from pre-test to post-test because class participants learn how much they don't know.
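Here is a minimal sketch of that theme-coding layout in pandas rather than Excel: comments down the first column, salient themes across the top, and check marks where a theme applies.  The comments, themes, and coding decisions are invented:

    # Theme-coding matrix for open-ended survey comments: one row per
    # comment, one column per salient theme, checks where a theme applies.
    # Comments, themes, and codings are hypothetical.
    import pandas as pd

    comments = [
        "The handouts were useful but the room was too small.",
        "More hands-on time with the software, please.",
        "Great instructor; the pace felt rushed near the end.",
    ]
    themes = ["Materials", "Facilities", "Hands-on practice", "Pacing"]

    # Manual coding decisions (True where the theme appears in the comment).
    codes = [
        [True,  True,  False, False],
        [False, False, True,  False],
        [False, False, False, True ],
    ]

    matrix = pd.DataFrame(codes, index=comments, columns=themes)
    print(matrix.replace({True: "X", False: ""}))  # check-mark style view
    print(matrix.sum())                            # counts per theme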

Denver in November 2: Using Stories in Evaluation

This was my second Wednesday 11-5 workshop at the American Evaluation Association annual meeting.  It was taught by Richard Krueger from the University of Minnesota, who provided this overview of why we would want to use stories in evaluation:

  • Relatively easy to remember
  • Involve emotions
  • Transmit culture, norms, tacit knowledge
  • Add descriptive details to quantitative evidence
  • Provide insights about individuals
  • Explore themes

With the definition of a story as "a brief narrative told for a purpose," we can use stories for evaluation that are systematically collected, are verifiable, respect confidentiality, and make a statement about truthfulness and representativeness.  Details about collection, analysis, and reporting should be provided.  A strategy for collecting stories can include decisions about what themes to look for (success stories, stories of challenges or opportunities, stories from participants about how a program works).  Analysis can be facilitated by looking at stories' features: do they follow a traditional outline of background-problem-resolution-purpose; are they persuasive stories with a protagonist, obstacles, awareness that allows the protagonist to prevail, and a transformation?

The instructor pointed out that success stories have benefits (inform stakeholders and persuade them to take action) but also risks (story might convey unrealistic expectations, might seem naive, might result in fewer resources allocated to a program).

Denver in November 1: Visual Presentation of Quantitative Data

The American Evaluation Association annual meeting is packed with useful and interesting sessions and workshops.  This year’s meeting began for me on Wednesday with two workshops and this was the first:

Visual Presentation of Quantitative Data

Taught by David Streiner (University of Toronto) and Stephanie Reich (University of California, Irvine)

Beginning with some basics about memory and information processing, the instructors emphasized that effective graphs use simple features to move viewers from pre-attention to paying attention, with pieces of information stored temporarily in working memory.  Working memory can hold only about nine elements of information at once (often fewer), so it is important to eliminate from graphs any details that might interfere with the main points.  The instructors made the case that graphs are best used when you want to display a comparison and make a key point immediately obvious, since exact numbers will be forgotten quickly; for exact numbers, use a table.  Here are some tips from this workshop (a short plotting sketch follows the list):

  • Sort data to be displayed, and experiment with switching axes–these can make a big difference in first impressions
  • Beware of stacked graphs–they make it hard to do comparisons and see trends
  • Be careful of what scales you use–they can be manipulated to make differences and changes seem larger than they really are (for example, if a scale does not begin at zero, or if it looks at percentage change)
  • Avoid 3-D graph displays–they are usually harder to read and rarely add anything useful beyond a standard two-dimensional format
  • Excel and PowerPoint create 3-D bar graphs differently, with the bars at different levels for the same data
  • Use caution with pie charts–it is difficult to compare slices that are similar in size
  • Pie charts in 3-D are especially heinous–they distort the wedges and make comparison impossible
  • Pie charts are not the same as gauges–gauges are calibrated and measure only one quantity
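To put a few of these tips into practice, here is a minimal matplotlib sketch with invented data: the categories are sorted, the value axis starts at zero, and the chart stays in plain two dimensions:

    # A plain 2-D bar chart that follows several of the workshop tips:
    # data are sorted, the axis starts at zero, and there is no 3-D effect.
    # Categories and values are invented for illustration.
    import matplotlib.pyplot as plt

    data = {"Site C": 42, "Site A": 65, "Site D": 18, "Site B": 57}
    items = sorted(data.items(), key=lambda kv: kv[1])   # sort for easy comparison
    labels, values = zip(*items)

    fig, ax = plt.subplots()
    ax.barh(labels, values)                              # horizontal bars read easily
    ax.set_xlim(0, max(values) * 1.1)                    # scale starts at zero
    ax.set_xlabel("Workshops completed")
    ax.set_title("Completed workshops by site (illustrative data)")
    plt.tight_layout()
    plt.show()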

Demystifying Survey Research: Practical Suggestions for Effective Question Design

An article entitled “Demystifying Survey Research: Practical Suggestions for Effective Question Design” was published in the journal Evidence Based Library and Information Practice (2007). The aim of this article is to provide practical suggestions for effective questions when designing written surveys. Sample survey questions used in the article help to illustrate how some basic techniques, such as choosing appropriate question forms and incorporating the use of scales, can be used to improve survey questions.

Since this is a peer reviewed, open-access journal, those interested may access the full-text article online at: http://ejournals.library.ualberta.ca/index.php/EBLIP/article/view/516/668.

In addition, for those interested in exploring survey research further, I have found the following print resources to be very helpful:

Converse, J.M., and S. Presser. Survey Questions: Handcrafting the Standardized Questionnaire. Thousand Oaks, CA: Sage Publications, 1986.

Fink, A. How to Ask Survey Questions. Thousand Oaks, CA: Sage Publications, 2003.

Fowler, F.J. Improving Survey Questions: Design and Evaluation. Thousand Oaks, CA: Sage Publications, 1995.