Wired for Health and Well-Being: The Emergence of Interactive Health Communication
Editors: Thomas R. Eng, David H. Gustafson
Suggested Citation: Science Panel on Interactive Communication and Health. Wired for Health and Well-Being: The Emergence of Interactive Health Communication. Washington, DC: US Department of Health and Human Services, US Government Printing Office, April 1999.
Appendix E: Consumer's Guide to Evaluating IHC Applications
This guide is intended to help consumers interpret evaluation results reported by developers using the Panel's "Evaluation Reporting Template" in Appendix A. Standardized reporting should make evaluation results easier to interpret and help you decide how well IHC applications meet your own needs.
To decide whether an IHC program will help meet your specific needs, you will want to know general information about the application and its intent, how it was developed, how well it "runs," and whether it achieves its intended effects. The following are questions a consumer might want answered in the best of circumstances. Unfortunately, some of this information may not be easily accessible for many current IHC applications. The Panel wants to help consumers avoid purchasing or using applications that do not provide the information and support needed to make informed decisions.
1. Description of the Program
2. Formative and Process Evaluation
These evaluations are normally part of the development and testing of a new program. Developers use formative evaluation to create applications with a better chance of achieving their goals, and they use process evaluation to make sure the application "runs" well. The application could tell you:
3. Outcome Evaluation
Outcome evaluations test whether an application does what it is supposed to do. Does it achieve its goals? Some applications try to help you change your behavior (e.g., eat less fat), others try to help you choose between treatment options (e.g., surgery vs. drug therapy), and still others try to connect you with social interaction and support from other users. Make sure the goals of the application match your needs. Then, see if there have been outcome evaluations to answer the following questions:
The information about the program could also help you decide whether to believe the results and whether you are likely to experience the same results yourself. You do not need to be an expert in evaluation to decide whether to believe evaluation results. Here are some simple rules to follow.
1. How good is the evaluation design?
The most valid evaluation is a series of "randomized, double-blind, controlled trials." Controlled trials compare people who use the program with people who do not, to make sure that any changes found would not have occurred without the program. Randomized means that people in the study were assigned by chance (e.g., by flipping a coin) either to receive the program or not. Double-blind means that neither the participants nor the evaluators know who received the program, so answers to evaluation questions are not influenced by the excitement of being in the test group or the disappointment of being in the comparison group. It is difficult to "blind" an evaluation of a computer program unless everybody gets a computer, some loaded with the program and some with general health information. Finally, a single study cannot prove that a program is effective; several studies, or a series of studies, are needed.

Studies that are controlled but not randomized or blinded are not proof, but they can still be informative. And although randomized, controlled trials are good for learning whether a program works, they do not tell you why. Many people like "qualitative studies," in which evaluators watch people use the program, interview them, or examine computer records of how they used it. Such studies can teach a great deal, even though they cannot "prove" that a program really helps.

Bottom line: avoid "evaluations" based only on user testimonials or expert endorsements; they are not worth much. If the risks of harm are small (including risks to your time, money, or health), a less rigorous evaluation may be acceptable. As the risks increase, you need more evaluation. If you will use the program to make important health decisions, you may want one that has been tested in several randomized, controlled evaluations.
2. Is it likely that I will experience the same results?
Some evaluations are conducted with such unusual participants, or in such a different place and time, that the results may not apply to you. For example, men and women, or young and old users, do not always respond to applications in the same way, and an evaluation from 10 years ago might not produce the same results if performed today. Evaluations cannot be done for all types of people in all places and at all times. Because many programs affect people differently, some may be helped more than others, and some may even be harmed. You must decide whether the people who participated in the evaluation (their age, gender, location, education, and living situation) are similar enough to you that the results are likely to hold for you. It is therefore reasonable to expect a program's evaluators to tell you what kind of people were subjects in the evaluation, so you can decide whether they are enough like you. One way to gauge this is to look for personal stories in the program; they not only make learning easier but also indicate the type of people for whom the program was designed.
3. Are the evaluators unbiased?
How much you believe the results of an evaluation may depend on who performed it. You will want to know the answers to the following questions:
Comments: SciPICH@nhic.org Updated: 05/01/08