
HTA 101: VI. DETERMINING TOPICS FOR HTA


Organizations that conduct or sponsor HTAs have only limited resources for this activity. Given the great variety of potential assessment topics, HTA organizations need some practical means of determining what to assess. This section considers how assessment programs identify candidate assessment topics and set priorities among them.

Identify Candidate Topics

To a large extent, assessment topics are determined, or at least bounded, by the mission or purpose of an organization. For example, the US FDA [http://www.fda.gov/] is required to assess all new drugs systematically and to assess medical devices according to specific provisions made for particular classes of devices. For a new drug, a company normally files an Investigational New Drug Application (IND) with the FDA for permission to begin testing the drug in people; later, following successful completion of the necessary clinical trials, the company files a New Drug Application (NDA) to seek FDA approval to market the drug. For certain medical devices (i.e., new "Class III" devices that sustain or support life, are implanted in the body, or present a potential risk of illness or injury), the Investigational Device Exemption (IDE) and Premarket Approval (PMA) Application are analogous to the IND and NDA, respectively. The FDA is notified about many other devices when a company files a "510(k)" application seeking marketing clearance based on a device's "substantial equivalence" to another device that has already received FDA marketing approval.

Third-party payers generally assess technologies on a reactive basis; a new medical or surgical procedure that is not recognized by payers as being standard or established may become a candidate for assessment. For the US Centers for Medicare and Medicaid Services (CMS), assessment topics arise in the form of requests for national coverage policy determinations that cannot be resolved at the local level or that are recognized to be of national interest. These requests typically originate with the Medicare contractors that administer the program in their respective regions, Medicare beneficiaries (patients), physicians, health product companies, health professional associations, and government entities. CMS may request assistance in the form of evidence reports or other assessments by a sister agency, the Agency for Healthcare Research and Quality (AHRQ).

For the Evidence-based Practice Centers (EPC) program, also administered by AHRQ, the agency solicits topic nominations for evidence reports and technology assessments in a public notice in the US Federal Register. Topics have been nominated by a variety of other government agencies, payers, health systems and networks, health professions associations, employer and consumer groups, disease-based organizations, and others. In selecting topics, AHRQ considers not only the information about the topic itself, but also the plans of the nominating organization to make use of the findings of the assessment. Information required in these nominations is shown in Box 35.

The American College of Physicians (ACP) Clinical Efficacy Assessment Program (CEAP), which develops clinical practice guidelines, determines its guideline topics based upon evidence reports developed by the AHRQ EPC program. (Topics of the EPC program are nominated by outside groups, including ACP.) The topics undertaken by ECRI's technology assessment service are identified by request of the service's subscribers, including payers, providers, and others. For the Cochrane Collaboration, potential topics generally arise from members of the review groups, who are encouraged to investigate topics of interest to them, subject to the agreement of their review groups (Clarke 2003).

Box 35
Evidence-based Practice Centers Topic Nominations

Topic nominations for the AHRQ EPC program should include:

Source: Agency for Healthcare Research and Quality 2003.

Horizon Scanning

The demand for scanning of multiple types of sources for information about new health care interventions has prompted the development of "early warning" or "horizon scanning" functions in the US, Europe, and elsewhere (Douw 2003). Horizon scanning functions are intended to serve multiple purposes, including to:

 

Among the organizations with horizon scanning functions are:  

For example, CETAP, the horizon scanning program of the Canadian Coordinating Office for Health Technology Assessment (CCOHTA), draws its information from the Internet, published literature, CCOHTA committee members, and other experts. The products of CETAP include short Alerts that address very early technologies; as more evidence becomes available, CCOHTA publishes more in-depth, peer-reviewed Issues in Emerging Health Technologies bulletins. The purposes of EuroScan (the European Information Network on New and Changing Health Technologies), a collaborative network of more than a dozen HTA agencies, are to evaluate and exchange information on new and changing technologies, develop information sources, develop applied methods for early assessment, and disseminate information on early identification and assessment activities.

As shown in Box 36, a considerable variety of online databases, newsletters, and other sources provide streams of information pertaining to new and emerging health care interventions. Among the important sources for identifying new topics are bibliographic databases such as MEDLINE (accessible, e.g., via PubMed) and EMBASE. The Cochrane Collaboration's protocols are publicly available descriptions of systematic reviews currently underway, including the rationale for each review, its information sources, and its search strategies.

Although the major thrust of horizon scanning has been to identify "rising" technologies that eventually may merit assessment, horizon scanning may also turn in the other direction to identify "setting" technologies that may be outmoded or superseded by newer ones. In either case, horizon scanning provides an important input into setting assessment priorities.

Setting Assessment Priorities

Some assessment programs have explicit procedures for setting priorities; others set priorities only in an informal or vague way. Given very limited resources for assessment and increasing accountability of assessment programs to their parent organizations and others who use or are affected by their assessments, it is important to articulate how assessment topics are chosen.

Box 36
Information Sources for New and Emerging Health Care Interventions

*NDA: New Drug Application approvals; BLA: Biologics License Application approvals; PMA: Premarket Approval Application approvals; 510(k): substantially equivalent device application approvals.

Most assessment programs have criteria for topic selection, although these criteria are not always explicit. Is it most important to focus on costly health problems and technologies? What about health problems that affect large numbers of people, or health problems that are life-threatening? What about technologies that cause great public controversy? Should an assessment be undertaken if it is unlikely that its findings will change current practice? Examples of selection criteria that are used in setting assessment priorities are:  

The timing for undertaking an assessment may be sensitive to the availability of evidence. For example, the results of a recently completed RCT or meta-analysis may challenge standard practice and prompt an HTA to consolidate these results with other available evidence to inform clinical or payment decisions. Or, an assessment may be delayed pending the results of an ongoing study that has the potential to shift the weight of the body of evidence on that topic.

A systematic priority-setting process could include the following steps (Donaldson and Sox 1992; Lara and Goodman 1990).

  1. Select criteria to be used in priority setting.
  2. Assign relative weights to the criteria.
  3. Identify candidate topics for assessment (e.g., as described above).
  4. If the list of candidate topics is large, reduce it by eliminating those topics that would clearly not rank highly according to the priority-setting criteria.
  5. Obtain data for rating the topics according to the criteria.
  6. For each topic, assign a score for each criterion.
  7. Calculate a priority score for each topic.
  8. Rank the topics according to their priority scores.
  9. Review the priority topics to ensure that assessment of these would be consistent with the organizational purpose.

Processes for ranking assessment priorities range from the highly subjective (e.g., the informal opinion of a small group of experts) to the quantitative (e.g., using a mathematical formula) (Donaldson 1992; Eddy 1989; Phelps 1992). Box 37 shows a quantitative model for priority setting. The Cochrane Collaboration uses a more decentralized approach. Starting with topics suggested by their members, many Cochrane Collaboration review groups set priorities by considering burden of disease and other criteria, as well as input from discussions with key stakeholders and suggestions from consumers. These priorities are then offered to potential reviewers who might be interested in preparing and maintaining relevant reviews in these areas (Clarke 2003).

Of course, there is no single correct way to set priorities. The great diversity of potential assessment topics, the urgency of some policymaking needs, and other factors may diminish the practical benefits of using highly systematic and quantitative approaches. On the other hand, ad hoc, inconsistent, or nontransparent processes are subject to challenges and skepticism from policymakers and other observers who are affected by HTA findings. Certainly, there is a gap between the theory and the application of priority setting. Many priority-setting models are designed to support resource allocation that maximizes health gains, i.e., to identify health interventions which, if properly assessed and appropriately used, could result in substantial health improvements at reasonable costs. However, some potential weaknesses of these approaches are that they tend to set priorities among interventions rather than among the assessments that should be conducted, they do not address priority setting in the context of a research portfolio, and they do not adopt an incremental perspective (i.e., consideration of the net difference that conducting an assessment might make) (Sassi 2003).

Reviewing the process by which an assessment program sets its priorities, including the implicit and explicit criteria it uses in determining whether or not to undertake an assessment, can help to ensure that the HTA program is fulfilling its purposes effectively and efficiently.


Specify the Assessment Problem

One of the most important aspects of an HTA is to specify clearly the problem(s) or question(s) to be addressed; this will affect all subsequent aspects of the assessment. An assessment group should have an explicit understanding of the purpose of the assessment and of who its intended users are to be. This understanding might not be established at the outset of the assessment; it may take further probing, discussion, and clarification.

Box 37
A Quantitative Model for Priority Setting

A 1992 report by the Institute of Medicine provided recommendations for priority setting to the Agency for Health Care Policy and Research (now AHRQ). Seven criteria were identified:

The report offered the following formula for calculating a priority score for each candidate topic.

Priority Score = W1·ln(S1) + W2·ln(S2) + ... + W7·ln(S7)

where:

W1, ..., W7 are the relative weights of the seven priority-setting criteria,

S1, ..., S7 are the scores of a given candidate topic on each criterion, and

ln denotes the natural logarithm, applied to each criterion score.

Candidate topics would then be ranked according to their priority score.

Source: Donaldson 1992.
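
To make the Box 37 model concrete, the following is a minimal sketch in Python of steps 5 through 8 of the systematic priority-setting process described above. The criterion weights and topic scores are hypothetical placeholders for illustration, not values from the IOM report.

    import math

    # Hypothetical relative weights W1..W7 for the seven criteria.
    weights = [1.0, 0.8, 0.6, 0.6, 0.4, 0.4, 0.2]

    # Hypothetical criterion scores S1..S7 (each >= 1) for candidate topics.
    candidate_topics = {
        "Topic A": [9, 7, 5, 8, 6, 4, 7],
        "Topic B": [6, 9, 8, 5, 7, 6, 3],
        "Topic C": [4, 5, 9, 9, 8, 7, 5],
    }

    def priority_score(scores, weights):
        # Priority Score = W1*ln(S1) + W2*ln(S2) + ... + W7*ln(S7)
        return sum(w * math.log(s) for w, s in zip(weights, scores))

    # Score each topic, then rank by priority score (highest first).
    ranked = sorted(candidate_topics,
                    key=lambda t: priority_score(candidate_topics[t], weights),
                    reverse=True)

    for topic in ranked:
        print(topic, round(priority_score(candidate_topics[topic], weights), 2))

Because the logarithm is concave, it dampens the influence of any single very high criterion score, so a topic must rate reasonably well across several criteria to rank highly.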

The intended users or target groups of an assessment should influence its content, its presentation, and the dissemination of its results. Clinicians, patients, politicians, researchers, hospital managers, company executives, and others have different interests and levels of expertise. They tend to have different concerns about the effects or impacts of health technologies (health outcomes, costs, social and political effects, etc.). They also have different needs regarding the scientific or technical level of reports, the presentation of evidence and findings, and the format (e.g., length and appearance) of reports.

When the assessment problem and intended users have been specified, they should be reviewed by the requesting agency or sponsors of the HTA. The review of the problem by the assessment program may have clarified or focused the problem in a way that differs from the original request. This clarification may prompt a reconsideration or restatement of the problem before the assessment proceeds.

Problem Elements

There is no single correct way to state an assessment problem. In general, stating an assessment problem could entail specifying at least the following elements: health care problem(s); patient population(s); technology(ies); practitioners or users; setting(s) of care; and properties (or impacts or health outcomes) to be assessed.

For example, a basic specification of one assessment problem (a hypothetical illustration) might be: for postmenopausal women at elevated risk of fracture (health care problem and patient population), what are the effects of bone densitometry screening followed by pharmacological therapy (technologies), as ordered by primary care physicians (practitioners) in outpatient settings (settings of care), on fracture incidence and health-related quality of life (properties and outcomes to be assessed)?

Causal Pathways

A useful means of presenting an assessment problem is a "causal pathway," sometimes known as an "analytical framework." Causal pathways depict direct and indirect linkages between interventions and outcomes. Although often used to present clinical problems, they can also be used for organizational, financing, and other types of interventions or programs in health care.

Causal pathways provide clarity and explicitness in defining the questions to be addressed in an HTA, and they draw attention to pivotal linkages for which evidence may be lacking. They can be useful working tools for formulating or narrowing the focus of an assessment problem. For a clinical problem, a causal pathway typically includes a patient population, one or more alternative interventions, intermediate outcomes (e.g., biological markers), health outcomes, and other elements as appropriate. In instances where a topic concerns a single intervention for narrowly defined indications and outcomes, these pathways can be relatively straightforward. However, given the considerable breadth and complexity of some HTA topics, which may cover multiple interventions for a broadly defined health problem (e.g., screening, diagnosis, and treatment of osteoporosis in various population groups), causal pathways can become quite detailed. The objective is not a perfectly representative causal pathway; rather, the pathway need only be specified in sufficient detail for the sponsor of an HTA and the group that will conduct it to concur on the assessment problem. In short, it helps to draw a picture.

An example of a general causal pathway for a screening procedure with alternative treatments is shown in Box 23. As suggested in this example, the evidence that is assembled and interpreted for an HTA may be organized according to an indirect relationship (e.g., between a screening test and an ultimate health outcome) as well as various intervening direct causal relationships (e.g., between a treatment indicated by the screening test and a biological marker, such as blood pressure or cholesterol level).
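
Although causal pathways are usually drawn as diagrams, a minimal sketch such as the following (in Python, with hypothetical node names not taken from Box 23) shows how a pathway can be represented explicitly as a set of direct linkages, each marking a point where supporting evidence may or may not exist.

    # A simplified screening pathway as a directed graph of linkages.
    # Node names are hypothetical, for illustration only.
    causal_pathway = {
        "asymptomatic population": ["screening test"],
        "screening test": ["diagnosis"],
        "diagnosis": ["treatment A", "treatment B"],
        "treatment A": ["intermediate outcome (biological marker)"],
        "treatment B": ["intermediate outcome (biological marker)"],
        "intermediate outcome (biological marker)": ["health outcome"],
    }

    # Enumerate the direct linkages (edges) for which evidence is needed.
    for source, targets in causal_pathway.items():
        for target in targets:
            print(source, "->", target)

Each printed linkage corresponds to a question that the assessment may need to answer directly or through a chain of intervening evidence.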

Reassessment and the Moving Target Problem

Health technologies are "moving targets" for assessment (Goodman 1996). As a technology matures, changes occur in the technology itself or in other factors that can diminish the currency of an HTA report and its utility for health care policies. As such, HTA can be more of an iterative process than a one-time analysis. Some of the factors that would trigger a reassessment might include changes in the:

There are numerous instances of moving targets that have prompted reassessments. For example, since the inception of percutaneous transluminal coronary angioplasty (PTCA, approved by the US FDA in 1980), its clinical role vis-à-vis coronary artery bypass graft surgery (CABG) has changed as the techniques and instrumentation for both technologies have evolved, their indications have expanded, and competing, complementary, and derivative technologies have emerged (e.g., laser angioplasty, coronary artery stents, minimally invasive and "beating-heart" CABG). The emergence of viable pharmacological therapy for osteoporosis (e.g., with bisphosphonates and selective estrogen receptor modulators) has increased the clinical utility of bone densitometry. Long rejected for its devastating teratogenic effects, thalidomide has reemerged, under carefully managed conditions, for a variety of approved and investigational uses in leprosy and other skin diseases, certain cancers, chronic graft-vs.-host disease, and other conditions (Combe 2001; Richardson 2002).

While HTA programs cannot avoid the moving target problem, they can manage and be responsive to it. Box 38 lists approaches for managing the moving target problem.  

Box 38
Managing the Moving Target Problem

Aside from changes in technologies and their applications, even new interpretations of, or corrections in, existing evidence can prompt a new assessment. This was highlighted by a 2001 report of a Cochrane Center that prompted the widespread re-examination of screening mammography guidelines by government and clinical groups. The report challenged the validity of evidence indicating that screening for breast cancer reduces mortality, and suggested that breast cancer mortality is a misleading outcome measure (Olsen 2001).

Some research has been conducted on the need to reassess a particular application of HTA findings, i.e., clinical practice guidelines. For example, in a study of the validity of 17 guidelines developed in the 1990s by AHCPR (now AHRQ), investigators developed criteria defining when a guideline needs to be updated, surveyed members of the panels that prepared the respective guidelines, and searched the literature for relevant new evidence published since the appearance of the guidelines. Using a "survival analysis," the investigators determined that about half of the guidelines were outdated in 5.8 years, and that at least 10% of the guidelines were no longer valid by 3.6 years. They recommended that, as a general rule, guidelines should be reexamined for validity every three years (Shekelle, Ortiz 2001). Others counter that the factors that might prompt a reassessment do not arise predictably or at regular intervals (Browman 2001). Some investigators have proposed models for determining whether a guideline or other evidence-based report should be reassessed (Shekelle, Eccles 2001).

Changes in the volume or nature of publications may trigger the need for an initial assessment or a reassessment. A "spike" (sharp increase) in publications on a topic, such as in the number of research reports or commentaries, may signal trends that merit attention for assessment. However, further bibliometric research is needed to determine whether such publication events or trends are reliable indicators of the actual emergence of new technologies or of substantial changes in them or their use (Mowatt 1997).
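
As a hypothetical illustration of such a bibliometric trigger, the following Python sketch flags a "spike" when a year's publication count exceeds a chosen multiple of the mean count of all preceding years; the counts, threshold, and rule are assumptions for illustration, not a validated method.

    from statistics import mean

    # Hypothetical annual publication counts on a topic (e.g., from MEDLINE).
    annual_counts = {2000: 12, 2001: 15, 2002: 14, 2003: 41, 2004: 55}

    def spike_years(counts, factor=2.0, min_history=2):
        # Flag years whose count exceeds `factor` times the mean of all
        # preceding years (a simple, illustrative rule).
        years = sorted(counts)
        flagged = []
        for i, year in enumerate(years):
            if i < min_history:
                continue
            baseline = mean(counts[y] for y in years[:i])
            if counts[year] > factor * baseline:
                flagged.append(year)
        return flagged

    print(spike_years(annual_counts))  # [2003, 2004]

Whether a rule of this kind reliably indicates technology emergence is itself the empirical question that the paragraph above notes remains to be studied.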

Not all changes require a reassessment, nor must a reassessment entail a full HTA. A reassessment may require updating only certain aspects of an original report. In some instances, current clinical practices or policies may be recognized as being optimal relative to the available evidence, so that a new assessment would have little potential for impact; or the set of clinical alternatives and questions may have evolved so much since the original assessment that updating it would not be relevant.

In some instances, an HTA program may recognize that it should withdraw an existing assessment because to maintain it could be misleading to users and perhaps even have adverse health consequences. This may arise, for example, when an important flaw is identified in a pivotal study in the evidence base underlying the assessment, when new research findings appear to refute or contradict the original research base, or when the assumptions used in the assessment are determined to be flawed. The determination to maintain or withdraw the existing assessment while a reassessment is conducted, to withdraw the existing assessment and not conduct a reassessment, or to take other actions, depends on the risks and benefits of these alternative actions for patient health, and any relevant legal implications for the assessment program or users of its assessment reports.

Once an HTA program determines that a report topic is a candidate for being updated, the program should determine the need to undertake a reassessment in light of its other priorities. Assessment programs may consider that candidates for reassessment should be entered into the topic priority-setting process, subject to the same or similar criteria for selecting HTA topics.

