Archive for December, 2006

Community Collaboration Measures and Resources

The Aspen Institute Roundtable on Comprehensive Community Initiatives has a Measures for Community Success database with descriptions of more than 100 tools and methods for evaluating community initiatives. Many of these tools were designed for very specific types of initiatives, such as child development or substance abuse prevention, but the database is still useful for generating ideas about community-based indicators or about evaluating collaborations. Some entries appear to have more generic applications, such as the Collaboration Index and the Collaborative Assessment of Capacity. (Note: I could not find these online, so they may be more context-specific than their descriptions indicate.) Some entries will also lead you to names of experts who may have published articles of interest to you.

This is mainly a database of descriptions. If you find something of interest, you will have to dig for more information, either by searching the literature for the instrument or the author's name, or by contacting the person listed in each entry. Still, it seems to be a rich resource for learning how to work with community partners.

Scale to Measure eHealth Literacy

Source: Norman CD, Skinner HA. eHEALS: The eHealth Literacy Scale. Journal of Medical Internet Research 2006; 8(4): e27.

The eHealth Literacy Scale (eHEALS) is designed to measure consumers' knowledge, skill, and comfort in finding, evaluating, and using electronic health resources. A scale is a measurement instrument, designed for research and evaluation, that is composed of several (usually three or more) items. A participant's responses to these items are combined into one score (e.g., by averaging or summing) to provide a single measure of a specific concept, in this case eHealth literacy.

A reliable scale is one that is consistent, or stable, and that consistency can be evaluated in several ways. All items in this scale are supposed to measure the same concept, so the developers checked whether participants' answers were consistent across all of the items (internal consistency). Norman and Skinner also ran a factor analysis, which tests whether the eight items relate to one underlying "theme." This statistical method looks at patterns of responses and indicates how many themes (known as factors) are needed to explain variation in how people answered the questions; the researchers name the factors by looking at which items the statistics show belong together. For the eHealth Literacy Scale, one factor seems to be adequate, which further supports its reliability. Finally, the developers tested whether participants' answers remained consistent (stable) when they completed the scale on several occasions (test-retest reliability).
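
For readers curious what these reliability checks look like in practice, here is a minimal sketch in Python using simulated data. The eight items and the idea of summing responses into a single score come from the article; the 1-to-5 response coding, the sample size, and the data themselves are assumptions made purely for illustration, not Norman and Skinner's data or method.

import numpy as np

def cronbach_alpha(items):
    """Internal consistency: do answers hang together across items?

    items: respondents x items array (rows are people, columns are
    the scale's questions).
    """
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

rng = np.random.default_rng(0)

# Fabricated one-factor data: each respondent has a latent "eHealth
# literacy" level; each item response is that level plus noise,
# rounded and clipped to a 1-5 answer. Illustrative only.
latent = rng.normal(3.0, 1.0, size=(200, 1))
responses = np.clip(np.round(latent + rng.normal(0, 0.7, size=(200, 8))), 1, 5)

scores = responses.sum(axis=1)  # one summed score per respondent
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")

# Test-retest stability: simulate a second administration and
# correlate the two sets of summed scores.
retest = np.clip(np.round(latent + rng.normal(0, 0.7, size=(200, 8))), 1, 5)
r = np.corrcoef(scores, retest.sum(axis=1))[0, 1]
print(f"Test-retest correlation: {r:.2f}")

A high alpha and a high test-retest correlation correspond to the two kinds of consistency described above; the one-factor question would be addressed with a factor analysis routine (for example, FactorAnalysis in the scikit-learn library).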

Since many of us do not have the time or expertise to develop and test measurement scales ourselves, it is nice to have one with a track record available in the literature. (Norman and Skinner provide the full scale in this article.) One thing to remember, however, is that reliability is only one requirement for validity. Reliability tells us nothing about whether this scale actually measures eHealth literacy; a scale can be consistently wrong. Hopefully, Norman and Skinner or others will publish further evidence for the eHealth Literacy Scale's validity. In the meantime, that should not prevent the rest of us from using it. In evaluation, we seldom make decisions based on a single source of information, so we just need to check whether the scale's findings corroborate our other evaluation findings. If they do, you can probably feel comfortable using the data alongside your other findings. If they do not, you can explore the inconsistencies and possibly gain a deeper understanding of the program you are evaluating.