Wired for Health and Well-Being: The
Emergence of Interactive Health Communication
Editors: Thomas R. Eng, David H. Gustafson
Suggested Citation: Science Panel on Interactive Communication and Health. Wired for Health and Well-Being: The Emergence of Interactive Health Communication. Washington, DC: US Department of Health and Human Services, US Government Printing Office, April 1999.
Appendix E: Consumers' Guide to Evaluating IHC Applications
This guide is intended to help consumers interpret evaluation results reported by developers using the Panel's "Evaluation Reporting Template" in Appendix A. The template's standardized reporting of evaluation results should make it easier for you to interpret those results and to decide how well an IHC application meets your own needs.
To decide whether an IHC program will help meet your
specific needs, you will want to know general information about the application and its
intent, how the application was developed, how well it "runs," and whether the
application achieves its intended effects. The following are questions a consumer might
want answered in the best of circumstances. Unfortunately, some of this information may
not be easily accessible for many current IHC applications. The Panel wants to help
consumers avoid purchasing or using applications that do not provide the information and
support needed to make informed decisions.
1. Description of the Program
- What are the qualifications of the developers? Programs are
more likely to be good if developers are "experts" in the content area and have
previously developed effective IHC applications.
- Who sponsored or paid for the program? Programs supported by organizations that have something to gain (e.g., a tobacco company that might support a program on smoking) should be suspect.
- What is the IHC application intended to do? What are the
specific goals and objectives of the application? Do these match your needs?
- What type of user(s) was the application designed for? Some
applications are designed for certain age groups, men or women (or both), certain ethnic
or cultural groups, or certain socioeconomic groups. Is the application intended for a
specific type of user? Is it appropriate for you?
- How does the application protect your confidentiality or
anonymity? Who will be able to get information about the users?
2. Formative and Process Evaluation
These evaluations are normally part of the development and
testing of a new program. Developers use formative evaluation to create applications with
a better chance of succeeding in their goals and use process evaluation to make sure the
application "runs" well. The application could tell you:
- Where the content came from and what was done to ensure that it is valid and current. For example, content may come from the published scientific literature, individual "experts," or the consensus of several experts. What are the backgrounds of the experts? Who sponsored the development? Are the sources of information specified in the application? Can you trust these sources to be current, reliable, and without bias (objective)? Is the content updated frequently enough to remain accurate and current?
- Whether the content was presented in a way that makes it easy for you to learn. Is it easy to understand? Are the words familiar?
Graphics, video, and animation can make learning easier, but sometimes they are used just
to be fancy. Look to see whether these features are there and whether they actually help
you learn. Is the material presented in a way that is tailored to your particular needs or
do you have to search hard to find content that helps you?
- How the application was tested for reading level or understandability. Whom was it tested on, and how was the testing performed? Is it likely to be appropriate for you? (A rough illustration of one common reading-level formula appears after this list.)
- Ease of learning. It might be appropriate to have to spend
some time learning how to use an application if you plan to use it over and over. But even
then, the program should be easy to learn and use.
- Opportunities for users to suggest improvements to programs.
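To make the idea of a reading-level test concrete, here is a minimal sketch of one widely used formula, the Flesch-Kincaid grade level, which estimates the US school grade needed to understand a text. The crude syllable counter and the sample sentences are invented for illustration; real readability tools are more sophisticated, and the Panel does not prescribe any particular formula.

```python
import re

def count_syllables(word):
    # Very rough estimate: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    # Flesch-Kincaid grade level:
    #   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

# Hypothetical sample of consumer health text, invented for illustration.
sample = "Eat less fat. Walk every day. Ask your doctor about your medicines."
print(f"Estimated grade level: {flesch_kincaid_grade(sample):.1f}")
```

A lower grade level generally means the material is accessible to more readers, which is one reason developers test reading level during formative evaluation.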
3. Outcome Evaluation
Outcome evaluations test whether an application does what
it is supposed to do. Does it achieve its goals? Some applications try to help you change
your behavior (e.g., eat less fat), others try to help you choose between treatment
options (e.g., surgery vs. drug therapy), and still others try to provide you with social
interaction and support from others. Make sure the goals of the application match your
needs. Then, see if there have been outcome evaluations to answer the following questions:
- How much do users like the application?
- How helpful do users rate the application?
- Does the application increase users' knowledge?
- Do users change their beliefs or attitudes in a good way?
- Do users improve their behaviors?
- Do users get healthier and/or do their symptoms improve?
- Do users change their use of health care resources or the
costs of their health care?
- Did users experience any bad outcomes?
- Is your privacy protected? Who will use the information you
provide, and how will they use it?
The information about the program can also help you decide whether to believe the results and whether you are likely to get the same results. You do
not need to be an expert in evaluation to decide whether to believe evaluation results.
Here are some simple rules to follow.
1. How good is the evaluation design?
The most valid evaluation is a series of "randomized, double-blind, controlled trials." Controlled trials compare people who use the program to those who do not, to be sure that changes found would not have occurred without the program.
the program. Randomized means that people in the study were assigned randomly
(e.g., by flipping a coin) to either get or not get the program. Double blinding
means neither subjects nor evaluators know who got the program, so answers to evaluation
questions are not influenced by the excitement of being in the test group or
disappointment of being in the comparison group. It is difficult to "blind" a
computer program evaluation, unless everybody gets a computer, some containing the program
and some with general health information. Finally, a single study cannot prove program effectiveness; you need several, or a series of, studies. Studies that are controlled but not randomized or blinded can still be informative, although they are not proof. And although randomized, controlled trials are good for learning whether a program works, they do not tell you why. Many people like "qualitative studies," in which evaluators watch people use the program, interview them, or review computer records of how they used it. These studies can reveal a lot, even though they cannot "prove" that a program really helps. Bottom line: avoid "evaluations" based only upon user testimonials or expert endorsements; they are not worth much. If the risks of harm are small (including risks to time, money, or health), a less rigorous evaluation may be appropriate. As risks increase, you need more evaluation. If you will use the program to make important health decisions, you may want one that has been tested in several randomized, controlled evaluations.
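To illustrate what randomization buys you, here is a minimal sketch of a simulated controlled trial in which a virtual coin flip assigns each participant to the program group or the control group. The outcome scale, the assumed program effect, and all other numbers are invented for illustration; they do not come from any real IHC evaluation.

```python
import random
import statistics

def run_mock_trial(participants=1000, program_effect=5.0, seed=42):
    # Each participant gets a hypothetical baseline outcome score, then a
    # virtual coin flip assigns them to the program group or the control
    # group. The "true" program effect is a made-up number for this demo.
    rng = random.Random(seed)
    program_scores, control_scores = [], []
    for _ in range(participants):
        baseline = rng.gauss(50.0, 10.0)   # invented outcome scale
        if rng.random() < 0.5:             # randomization: the coin flip
            program_scores.append(baseline + program_effect)
        else:
            control_scores.append(baseline)
    return statistics.mean(program_scores), statistics.mean(control_scores)

program_mean, control_mean = run_mock_trial()
print(f"Program group mean:  {program_mean:.1f}")
print(f"Control group mean:  {control_mean:.1f}")
print(f"Observed difference: {program_mean - control_mean:.1f}")
```

Because assignment is random, the two groups should be alike on average before the program begins, so a difference in group means that holds up across repeated studies can be attributed to the program itself rather than to pre-existing differences between the groups.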
2. Is it likely that I will experience the same results?
Some evaluations are done using such unique participants,
or in such a different place and time, that the results may not apply to you. For example,
men and women or young and old users do not always have the same response to applications.
Moreover, evaluations from 10 years ago might not produce the same results if performed
today. Evaluations cannot be done for all types of people in all places and at all times.
Since many programs have different effects on people, some may be helped more than others.
Some may even be harmed. You must decide whether the people used in the evaluation (their
age, gender, location, education, living situation) are similar enough to your situation
that the results are likely to hold for you. It is therefore reasonable to expect the evaluators of a program to tell you what kinds of people were subjects in the evaluation, so you can decide whether they are enough like you. One way to determine this is to
look for personal stories in the program. They not only make learning easier but also
indicate the type of people for whom this program was designed.
3. Are the evaluators unbiased?
How much you believe the results of an evaluation could
depend upon who performed the evaluation. Users will want to know the answers to the
following questions:
- Do any of the evaluators have a financial interest in the
sale/dissemination of the application?
- Who funded the evaluation? Many evaluations will be carried out or financed by the developers themselves, people who want the application to succeed. Financial interest or funding by an "interested party" does not by itself invalidate an evaluation. However, because evaluation results can be presented in a way
that highlights positive findings and hides negative findings, a user might prefer that
evaluations are completed and reported by an independent party. Because that is likely to
be rare, users must be educated consumers who are on the lookout for ways in which
evaluation results may be "spun" to make them want to use an application.
- Is a copy of the evaluation report(s) available for review
on request? If the potential risks are great, you or someone you trust should review the
evaluation results.