Wired for Health and Well-Being: The Emergence of Interactive Health Communication
Editors: Thomas R. Eng, David H. Gustafson

Suggested citation: Science Panel on Interactive Communication and Health. Wired for Health and Well-Being: The Emergence of Interactive Health Communication. Washington, DC: US Department of Health and Human Services, US Government Printing Office; April 1999.

Appendix A: Evaluation Reporting Template for IHC Applications

The template is divided into four sections. Section I covers identification of the developer(s), the source(s) of funding for the application, the purpose of the application and its intended audience(s), technical requirements, and issues of confidentiality. Assurance of confidentiality will become increasingly important as applications that collect and use personal health information, such as those that assess individual risk for sensitive health conditions, proliferate.

Section II covers the results of formative and process evaluations as contributors to application design and development. These items elicit information to help potential users and purchasers judge the validity of the content, the appropriateness of the application to their specific needs, and whether sufficient testing was done to ensure that the application functions as intended. This section goes beyond simple disclosure of descriptive elements (e.g., the identity of the developers and the sponsorship and purpose of the application) to encourage disclosure of whether and how potential users and other "experts" were involved in application development and how extensively the application was tested before release.

Section III covers the results of any outcome evaluations performed. The list of outcomes is not exhaustive but includes those most commonly encountered, ranging from user satisfaction to changes in morbidity or mortality, reduced costs, or organizational change.
Potential outcomes are broadly defined because individual developers, users, and purchasers may have very different needs and expectations. For example, one developer or potential purchaser may be interested in an application that improves management of specific chronic disease symptoms, while another may be interested solely in improving general patient satisfaction. Classifications of evaluation designs from the US Preventive Services Task Force are included to provide information relevant to the internal validity of the results (i.e., the strength of the evidence that the observed results are due to the intervention). Descriptions of samples are also included to provide information relevant to the "generalizability" of results.

Section IV covers information about the evaluators and the funding of the evaluation, so that potential users and purchasers can identify potential biases or conflicts of interest relevant to the evaluation. The template also attempts to increase accountability for IHC applications by encouraging disclosure of the person(s) responsible for design and content (Section I) and for evaluation (Section IV).

Evaluation Reporting Template for IHC Applications, Version 1.0
Science Panel on Interactive Communication and Health

This evaluation reporting template helps developers and evaluators of interactive health communication (IHC) applications report evaluation results to those who are considering purchasing or using their applications. Because the template is designed to apply to all types of applications and evaluations, some items may not apply to a particular application or evaluation. Complete only those items that apply.
This and subsequent versions of the template, along with other resources on evaluation of IHC, are available at: http://www.scipich.org

Comments and suggestions regarding the content, scope, utility, and practicality of this template should be directed to: SciPICH, Office of Disease Prevention and Health Promotion, US Department of Health and Human Services, 200 Independence Ave., SW, Room 738G, Washington, DC 20201, or by e-mail to: scipich@health.org

I. Description of Application
II. Formative and Process Evaluation*
III. Outcome Evaluation**
IV. Background of Evaluators
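The four-section structure above amounts to a simple reporting data model. The following Python sketch is illustrative only; the class and field names are assumptions, not part of the template, and the fields of Sections II-IV are reduced to free-text placeholders:

```python
from dataclasses import dataclass, field


@dataclass
class ApplicationDescription:
    """Section I: identification, funding, purpose, audience,
    technical requirements, and confidentiality assurances."""
    developers: list = field(default_factory=list)
    funding_sources: list = field(default_factory=list)
    purpose: str = ""
    intended_audiences: list = field(default_factory=list)
    technical_requirements: str = ""
    confidentiality_assurances: str = ""


@dataclass
class EvaluationReport:
    """One completed reporting template; items that do not apply stay empty."""
    description: ApplicationDescription           # Section I
    formative_process_results: str = ""           # Section II
    outcome_results: str = ""                     # Section III
    evaluator_background: str = ""                # Section IV
```

Leaving unused fields at their defaults mirrors the template's instruction to complete only those items that apply.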
* Formative evaluation is used to assess the nature of the problem and the needs of the target audience, with a focus on informing and improving program design before implementation. It is conducted prior to or during early application development and commonly consists of literature reviews, reviews of existing applications, and interviews or focus groups with "experts" or members of the target audience.

Process evaluation is used to monitor the administrative, organizational, or other operational characteristics of an intervention. It helps developers successfully translate the design into a functional application and is performed during application development. It commonly includes testing the application for functionality and may also be known as alpha and beta testing.

** Outcome evaluation is used to examine an intervention's ability to achieve its intended results under ideal conditions (i.e., efficacy) or under real-world circumstances (i.e., effectiveness), as well as its ability to produce benefits in relation to its costs (i.e., efficiency or cost-effectiveness). It helps developers learn whether the application is achieving its goals and objectives, and it is performed after the application has been implemented.

*** Evaluation design types are grouped according to level of quality of evidence as classified by the US Preventive Services Task Force and the Canadian Task Force on the Periodic Health Examination. (US Preventive Services Task Force. Guide to Clinical Preventive Services. 2nd ed. Washington, DC: US Department of Health and Human Services; 1996.)

I. Randomized controlled trials. Experiments in which potential users are randomly assigned to use the application or to a control group. Randomization promotes comparability between groups.
These designs can be (a) double-blinded: neither the participants nor the evaluators know which participants are in the intervention group and which are in the control group; (b) single-blinded: the participants are not aware of which experimental group they are in; or (c) non-blinded: both the participants and the evaluators are aware of who is in the intervention group and who is in the control group. Greater blinding lessens the chance of bias.

II-1. Nonrandomized controlled trials. Experiments comparing users and nonusers (or "controls") in which participants are not randomly assigned to the groups. For this type of design, specify how the participants were recruited, selected, and assigned to the groups, and how the groups compare (similarities and differences between users and nonusers) prior to the evaluation.

II-2. Cohort study/observational study. An evaluation of users with no comparison or control group.

II-3. Multiple time series. Observations of participants as they go through periods of use and nonuse of the application.

III. Descriptive studies, case reports, testimonials, "expert" committee opinions.

The original version was published in: Robinson TN, Patrick K, Eng TR, Gustafson D, for the Science Panel on Interactive Communication and Health. An evidence-based approach to interactive health communication: a challenge to medicine in the Information Age. JAMA. 1998;280:1264-1269.
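The design types above form an ordered hierarchy, from strongest to weakest internal validity. A minimal Python sketch of that ordering (the enum and helper names are illustrative assumptions, not part of the template):

```python
from enum import Enum


class EvidenceLevel(Enum):
    """Evaluation design types, ordered from strongest to weakest
    internal validity per the US Preventive Services Task Force grouping."""
    I = "Randomized controlled trial"
    II_1 = "Nonrandomized controlled trial"
    II_2 = "Cohort study/observational study"
    II_3 = "Multiple time series"
    III = "Descriptive studies, case reports, testimonials, expert opinions"


def stronger(a, b):
    """Return whichever of two designs provides stronger evidence."""
    order = list(EvidenceLevel)  # enum members keep definition order
    return a if order.index(a) <= order.index(b) else b
```

A purchaser comparing two reported evaluations could use such an ordering to weigh, for example, a randomized trial over a testimonial.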
Updated: 05/01/08