Wired for Health and Well-Being: The
Emergence of Interactive Health Communication
Editors: Thomas R. Eng, David H. Gustafson
Suggested Citation: Science Panel on Interactive Communication and Health. Wired for Health and Well-Being: The Emergence of Interactive Health Communication. Washington, DC: US Department of Health and Human Services, US Government Printing Office; April 1999.
Appendix A: Evaluation Reporting Template for IHC Applications
The template is divided into four sections. Section I
focuses on identification of the developer(s), the source(s) of funding for the
application, the purpose of the application and its intended audience(s), technical
requirements, and issues of confidentiality. Assurance of confidentiality will become
increasingly important as applications that collect and utilize personal health
information, such as those that assess individual risk for sensitive health conditions,
proliferate.
Section II focuses on the results of formative and process evaluations, which contribute to application design and development. These items elicit
information to help potential users and purchasers judge validity of the content,
appropriateness of the application to their specific needs, and whether sufficient testing
was done to ensure that the application functions as intended. This section attempts to go
beyond the simple disclosure of the descriptive elements (e.g., identity of the
developers, sponsorship and purpose of the application) to encourage disclosure of whether
and how potential users and other "experts" were involved in application
development and how extensively the application was tested prior to release.
Section III focuses on the results of any outcome
evaluations performed. The list of outcomes is not exhaustive but includes those most
commonly encountered, ranging from user satisfaction to changes in morbidity or mortality,
reduced costs, or organizational change. Potential outcomes are broadly defined because
individual developers, users, and purchasers may have very different needs and
expectations. For example, while one developer or potential purchaser may be interested in
an application that improves management of specific chronic disease symptoms, another may
be solely interested in improving general patient satisfaction. Classifications of
evaluation designs from the US Preventive Services Task Force are included to provide
information relevant to the internal validity of the results (i.e., the strength of
evidence that the observed results are due to the intervention). Descriptions of samples
also are included to provide information relevant to the "generalizability" of
results.
Section IV focuses on information about evaluators and
funding to provide potential users and purchasers with information about potential biases
or conflicts of interest relevant to the evaluation. The template also attempts to
increase accountability for IHC applications by encouraging the disclosure of the
person(s) responsible for design and content (Section I) and evaluation (Section IV).
Evaluation Reporting Template for IHC
Applications, Version 1.0, Science Panel on Interactive Communication and Health
This is an evaluation reporting template for developers
and evaluators of interactive health communication (IHC) applications to help them report
evaluation results to those who are considering purchasing or using their applications.
Because the template is designed to apply to all types of applications and evaluations,
some items may not apply to a particular application or evaluation. Complete only those
items that apply. This and subsequent versions of the template and other resources on evaluation of IHC are available at: http://www.scipich.org
Comments and suggestions regarding the content, scope,
utility, and practicality of this template should be directed to: SciPICH, Office of
Disease Prevention and Health Promotion, US Department of Health and Human Services, 200
Independence Ave., SW, Room 738G, Washington, DC 20201 or e-mail comments to:
scipich@health.org
I. Description of Application
- Title of product/application:
- Type of application (e.g., Web site, CD-ROM/DVD):
- Name(s) of developer(s):
- Relevant qualifications of developer(s):
- Contact(s) for additional information:
- Funding sources for development of the application (e.g.,
commercial company, government, foundation/nonprofit organization, individual):
- Category of application (e.g., clinical decision support,
individual behavior change, peer support, risk assessment):
- Specific goal(s)/objective(s) of the application (What is
the application intended to do? List multiple if applicable):
- Intended target audience(s) for the application (e.g., age
group, gender, educational level, types of organizations and settings, disease groups,
cultural/ethnic/population groups):
- Available in languages other than English? No / Yes (specify):
- Does the application include paid advertisements, content, or links? No / Yes
- Technological/resource requirements of the application
(e.g., hardware, Internet, on-site support available):
- Describe how confidentiality or anonymity of users is
protected:
- Indicate who will potentially be able to get information
about users:
II. Formative and
Process Evaluation*
- Indicate the processes and information source(s) used to
ensure the validity of the content (e.g., peer-reviewed scientific literature, in-house
"experts," recognized outside "experts," consensus panel of
independent "experts," updating and review processes and timing):
- Are the specific original sources of information cited within the application? Yes / No
- Describe the methods of instruction and/or communication used (e.g., drill and practice, modeling, simulations, reading generic online documents, interactive presentations of tailored information; specify methods used):
- Describe the media formats used (e.g., text, voice/sound,
still graphics, animation/video, color):
- For each applicable evaluation question below, indicate
(i) the characteristics of the sample(s) used and how they were selected, (ii) the
method(s) of assessment (e.g., specific measures used), and (iii) the evaluation results:
- If text or voice is used, how was the reading level or
understandability tested?
- What is the extent of expected use of the application (e.g.,
average length and range of time, number of repeat uses)?
- How long will it take to train a beginning user to use the
application proficiently?
- Describe how the application was beta tested and debugged (e.g., by what users, in what settings):
III. Outcome
Evaluation**
- For each applicable evaluation question below, indicate (i)
the type of evaluation design (I-III),*** (ii) the
characteristics of the sample(s) used and how they were selected, (iii) the method(s) of
assessment (e.g., specific measures used), and (iv) the evaluation results:
- How much do users like the application?
- How helpful/useful do users find the application?
- Do users increase their knowledge?
- Do users change their beliefs or attitudes (e.g.,
self-efficacy, perceived importance, intentions to change behavior, satisfaction)?
- Do users change their behaviors (e.g., risk factor
behaviors, interpersonal interactions, compliance, utilization of resources)?
- Are there changes in morbidity or mortality (e.g., symptoms,
missed days of school/work, physiologic indicators)?
- Are there effects on costs/resource utilization (e.g.,
cost-effectiveness analysis)?
- Do organizations or systems change (e.g., resource
utilization, effects on "culture")?
IV. Background of Evaluators
- Names and contact information for evaluator(s):
- Do any of the evaluators have a financial interest in the sale/dissemination of the application? No / Yes (specify):
- Funding sources for the evaluation(s) of the application (e.g., developer's funds, other commercial company, government, foundation/nonprofit organization):
- Has the evaluation been published in a peer-reviewed scientific journal? No / Yes
- Is a copy of the evaluation report(s) available for review on request? No / Yes (how to obtain):
* Formative evaluation
is used to assess the nature of the problem and the needs of the target audience with a
focus on informing and improving program design before implementation. This is conducted
prior to or during early application development and commonly consists of literature reviews, reviews of existing applications, and interviews or focus groups with
"experts" or members of the target audience. Process evaluation is used
to monitor the administrative, organizational, or other operational characteristics of an
intervention. This helps developers successfully translate the design into a functional
application and is performed during application development. This commonly includes
testing the application for functionality and also may be known as alpha and beta testing.
** Outcome evaluation
is used to examine an intervention's ability to achieve its intended results under
ideal conditions (i.e., efficacy) or under real-world circumstances (i.e., effectiveness),
and also its ability to produce benefits in relation to its costs (i.e., efficiency or
cost-effectiveness). This helps developers learn whether the application is successful at
achieving its goals and objectives, and is performed after the implementation of the
application.
*** Evaluation design types
are grouped according to level of quality of evidence as classified by the US Preventive
Services Task Force and the Canadian Task Force on the Periodic Health Examination. (US
Preventive Services Task Force. Guide to Clinical Preventive Services. 2nd Ed.
Washington, DC: US Department of Health and Human Services; 1996.)
I. Randomized controlled trials. Experiments in
which potential users are randomly assigned to use the application or to a control group.
Randomization promotes comparability between groups. These designs can be (a)
double-blinded: neither the participants nor the evaluators know which participants are in
the intervention group or the control group, (b) single-blinded: the participants are not
aware which experimental group they are in, or (c) non-blinded: both the participants and
the evaluators are aware of who is in the intervention group and who is in the control
group. Greater blinding lessens the chance of bias.
II-1. Nonrandomized controlled trials. Experiments
comparing users and nonusers (or "controls") in which participants are not randomly
assigned to these groups. For this type of design, specify how the participants were
recruited, selected, and assigned to the groups, and how the groups compare (similarities
and differences between users and nonusers) prior to the evaluation.
II-2. Cohort study/observational study. An
evaluation of users with no comparison or control group.
II-3. Multiple time series. Observations of
participants as they go through periods of use and nonuse of the application.
III. Descriptive studies, case reports, testimonials,
"expert" committee opinions.
Original version was published in: Robinson TN, Patrick K,
Eng TR, Gustafson D, for the Science Panel on Interactive Communication and Health. An
evidence-based approach to interactive health communication: a challenge to medicine in
the Information Age. JAMA. 1998;280:1264-1269.
Updated: 05/01/08