
Archive for January, 2012

Survey Research Problems and Solutions

Susan Starr's editorial in the January 2012 issue of JMLA ("Survey research: we can do better." J Med Libr Assoc. 2012 Jan;100(1):1–2) is a very clear presentation of three common problems that disqualify article submissions from being full-length JMLA research papers.  Making the point that the time to address these problems is during survey development (i.e., before the survey is administered), she also suggests solutions that are best practices for any survey:

Problem #1:  The survey does not answer a question of interest to a large enough group of JMLA readers.  (For example, a survey that is used to collect data about local operational issues.)
Solution #1:  Review the literature to identify an issue of general importance.

Problem #2:  The results cannot be generalized.  (Results might be biased if respondents differ from nonrespondents.)
Solution #2:  Address sample bias by sending the survey to a representative sample and using techniques to encourage a high response rate; by including questions that help determine whether sample bias is a concern; and by comparing the characteristics of the sample and the respondents to those of the study population (a sketch of this kind of comparison appears after Problem #3 below).

Problem #3:  Answers to survey questions do not provide the information that is needed.  (For example, questions might be ambiguous or might not address all aspects of the issue being studied.)
Solution #3:  Begin survey development by interviewing a few representatives from the survey population to be sure all critical aspects of the topic have been covered, and then pretest the survey.
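To illustrate the respondent-versus-population comparison mentioned in Solution #2, here is a minimal sketch in Python.  The category labels, the counts, and the use of a chi-square goodness-of-fit test are all hypothetical illustrations; the editorial does not prescribe a particular technique.

    # Minimal sketch of a nonresponse-bias check (hypothetical data; the
    # editorial does not prescribe a specific statistical test).
    from scipy.stats import chisquare

    # Known distribution of the study population by library type (hypothetical).
    population_share = {"hospital": 0.50, "academic": 0.35, "public": 0.15}

    # Observed counts among survey respondents (hypothetical).
    respondent_counts = {"hospital": 48, "academic": 42, "public": 10}

    total = sum(respondent_counts.values())
    observed = [respondent_counts[k] for k in population_share]
    expected = [population_share[k] * total for k in population_share]

    # A small p-value suggests respondents differ from the study population,
    # i.e., sample bias may be a concern.
    stat, p_value = chisquare(f_obs=observed, f_exp=expected)
    print(f"chi-square = {stat:.2f}, p = {p_value:.3f}")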

eHealth Literacy Demands and Barriers: An Evaluation Matrix

Chan, CV; Kaufman, DR.  “A framework for characterizing eHealth literacy demands and barriers.”  Journal of Medical Internet Research, 2011. 13(4): e94.

Researchers from Columbia University have developed a matrix of literacy types and cognitive complexity levels that can be used to assess an individual's eHealth competence and to develop eHealth curricula.  This tool can also be used to design and evaluate eHealth resources.  eHealth literacy is defined as "a set of skills and knowledge that are essential for productive interactions with technology-based health tools."  The authors' objectives were to understand the core skills and knowledge needed to use eHealth resources effectively and to develop a set of methods for analyzing eHealth literacy.  They adapted Norman and Skinner's eHealth literacy model to characterize six components of eHealth literacy:

  1. Computer literacy
  2. Information literacy
  3. Media literacy
  4. Traditional literacy
  5. Science literacy
  6. Health literacy

The authors used Amer’s revision of Bloom’s cognitive processes taxonomy to classify six cognitive process dimensions, ranked in order of increasing complexity:

  1. Remembering
  2. Understanding
  3. Applying
  4. Analyzing
  5. Evaluating
  6. Creating

They used the resulting matrix to characterize the demands of eHealth tasks (Table 3) and to describe an individual's performance on one of the tasks (Table 5), using a cognitive task analysis coding scheme based on the six cognitive process dimensions.
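As an illustration of how such a demands matrix might be represented when coding tasks, here is a minimal sketch in Python.  The two axes follow the literacy components and cognitive process dimensions listed above, but the data structure, the example task, and the cell entries are hypothetical and are not the authors' coding scheme.

    # Minimal sketch of an eHealth literacy demands matrix
    # (literacy type x cognitive process dimension).
    # Axes follow the lists above; the task and entries are hypothetical.
    LITERACIES = ["computer", "information", "media",
                  "traditional", "science", "health"]
    PROCESSES = ["remembering", "understanding", "applying",
                 "analyzing", "evaluating", "creating"]  # increasing complexity

    def empty_matrix():
        """Return a literacy-by-process grid with empty demand lists."""
        return {lit: {proc: [] for proc in PROCESSES} for lit in LITERACIES}

    # Hypothetical coding of one eHealth task: finding and judging information
    # about a new medication on a consumer health website.
    task_demands = empty_matrix()
    task_demands["computer"]["applying"].append("navigate the site's search form")
    task_demands["information"]["evaluating"].append("judge source credibility")
    task_demands["health"]["understanding"].append("interpret dosage instructions")

    # List which cells of the matrix this task touches.
    for lit in LITERACIES:
        for proc in PROCESSES:
            for demand in task_demands[lit][proc]:
                print(f"{lit} x {proc}: {demand}")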

Logic Models: Handy Hints

The American Evaluation Association's Coffee Break Demonstration webinar for Thursday, January 5 was "5 Hints for Making Logic Models Worth the Time and Effort."  CDC Chief Evaluation Officer Tom Chapel provided this list:

1.  See the model as a means, not an end in itself

His point here was that you may not NEED a logic model for successful program planning, but you will ALWAYS need a program description that lays out need, target groups, intended outcomes, activities, and causal relationships.  He advised us to identify "accountable" short-term outcomes, where the link between the project and subsequent changes can be made clear, and to differentiate between those and the longer-term outcomes to which the program contributes.  (A sketch of such a program description appears after this list.)

2.  Process use may be the highest and best use

Logic models are useful for ongoing evaluation and adjustment of activities for continuous quality improvement.

3.  Let form follow function

You can make a logic model as detailed and complex as you need it to be, and you can use whatever format works best for you.  He pointed out that the real "action" is in the middle of the model: its "heart" is the description of what the program does and who or what will change as a result.  He advised us to create simple logic models that focus on these key essentials to aid communication about a program.  A simple model like this can then serve as a frame of reference for more complexity, such as details about the program and how it will be evaluated.

4.  Use additional vocabulary sparingly, but correctly

Mediators such as activities and outputs help us understand the underlying “logic” of our program.  Moderators are contextual factors that will facilitate or hinder outcome achievement.

5.  Think “zebras,” not “horses”

This is a variation of the saying, “when you hear hoofbeats, think horses, not zebras.”   My interpretation of this hint is that it’s a reminder that in evaluation we are looking not only for expected outcomes but also unexpected ones.   According to Wikipedia, “zebra” is medical slang for a surprising diagnosis.
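Here is the program-description sketch promised under hint #1, written as a minimal Python data structure.  The field names and the example program are hypothetical illustrations of the elements Chapel mentions (need, target groups, activities, outcomes, and contextual moderators); they are not a CDC or AEA template.

    # Minimal sketch of a simple program description / logic model.
    # Field names and example content are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class LogicModel:
        need: str
        target_groups: list[str]
        activities: list[str]               # what the program does
        outputs: list[str]                  # direct products of activities
        short_term_outcomes: list[str]      # "accountable" outcomes
        long_term_outcomes: list[str]       # outcomes the program contributes to
        moderators: list[str] = field(default_factory=list)  # contextual factors

    training_program = LogicModel(
        need="Clinic staff underuse evidence-based online health resources",
        target_groups=["community clinic staff"],
        activities=["hands-on database training sessions"],
        outputs=["sessions held", "staff trained"],
        short_term_outcomes=["trainees report increased search confidence"],
        long_term_outcomes=["more evidence-informed patient care"],
        moderators=["staff turnover", "clinic internet access"],
    )

    # The "heart" of the model: what the program does and who or what changes.
    print(training_program.activities, "->", training_program.short_term_outcomes)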

You can find the slides for Tom Chapel’s presentation in the American Evaluation Association’s Evaluation eLibrary.

Can Tweets Predict Citations?

"Can Tweets Predict Citations? Metrics of Social Impact Based on Twitter and Correlation with Traditional Metrics of Scientific Impact."  G. Eysenbach, Journal of Medical Internet Research 2011; 13(4):e123

This article describes an investigation of whether tweets predict highly cited articles.  The author looked at 1573 "tweetations" (tweets about articles) of 55 articles in the Journal of Medical Internet Research and their subsequent citation data from Scopus and Google Scholar.  There was a correlation:  highly tweeted articles were 11 times more likely to be highly cited than less-tweeted articles.  The author proposes a "twimpact factor" (the cumulative number of tweetations within a certain number of days since publication) as a near-real-time measure of the reach of research findings.
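As an arithmetic illustration of the twimpact factor described above, here is a minimal sketch in Python.  The publication and tweet dates are hypothetical, and the 7-day window is chosen only for illustration, not taken as the article's definitive choice.

    # Minimal sketch of a "twimpact factor": the cumulative number of
    # tweetations (tweets about an article) within N days of publication.
    # Dates are hypothetical; the 7-day window is only for illustration.
    from datetime import date, timedelta

    def twimpact_factor(published, tweetation_dates, days=7):
        """Count tweetations occurring within `days` days of publication."""
        cutoff = published + timedelta(days=days)
        return sum(1 for d in tweetation_dates if published <= d <= cutoff)

    published = date(2011, 12, 1)
    tweetations = [date(2011, 12, 1), date(2011, 12, 2),
                   date(2011, 12, 5), date(2011, 12, 20)]

    print(twimpact_factor(published, tweetations))  # -> 3 within the 7-day window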

Strategies for Improving Response Rate

There are articles about strategies to improve survey response rates with health professionals in the open-access December 2011 issue of Evaluation and the Health Professions.  Each explored variations on Dillman's Tailored Design Method, also known as TDM (see this Issue Brief from the University of Massachusetts Medical School's Center for Mental Health Services Research for a summary of TDM).

"Surveying Nurses: Identifying Strategies to Improve Participation" by J. VanGeest and T.P. Johnson (Evaluation and the Health Professions, 34(4):487-511)

The authors conducted a systematic review of efforts to improve response rates to nurse surveys, and found that small financial incentives were effective and nonmonetary incentives were not effective.  They also found that postal and telephone surveys were more successful than web-based approaches.

"Surveying Ourselves: Examining the Use of a Web-Based Approach for a Physician Survey" by K.A. Matteson; B.L. Anderson; S.B. Pinto; V. Lopes; J. Schulkin; and M.A. Clark (Eval Health Prof 34(4):448-463)

The authors distributed a survey via paper and the web to a national sample of obstetrician-gynecologists and found little systematic difference between responses using the two modes, except that university physicians were more likely to complete the web-based version than private practice physicians.  Data quality was also better for the web survey, with fewer missing and inappropriate responses.  The authors speculate that university-based physicians may spend more time at computers than private physicians do.  However, given that response rates were good for both groups, the authors conclude that web-based surveys are appropriate for physician populations and suggest controlling for practice type.

"Effects of Incentives and Prenotification on Response Rates and Costs in a National Web Survey of Physicians" by J. Dykema; J. Stevenson; B. Day; S.L. Sellers; and V.L. Bonham (Eval Health Prof 34(4):434-447, 2011)

The authors found that response rates were highest in groups that were entered into a $50 or $100 lottery and that received a prenotification letter containing a $2 preincentive.  They also found that postal prenotification letters increased response rates, even though the small $2 token had no additional effect and was not cost-effective.  The authors conclude that larger promised incentives are more effective than nominal preincentives.

"A Randomized Trial of the Impact of Survey Design Characteristics on Response Rates among Nursing Home Providers" by M. Clark et al. (Eval Health Prof 34(4):464-486)

This article describes an experiment in maximizing participation by both the Director of Nursing and the Administrator of long-term care facilities.  One of the variables was incentive structure, in which the amount of incentive increased if both participated, and decreased if only one participated.  The authors found that there were no differences in the likelihood of both respondents participating by mode, questionnaire length, or incentive structure.