Current Methods of the U.S. Preventive Services Task Force: A Review of the Process

(Continued)

Translating Evidence into Recommendations

General Principles

Making recommendations for clinical practice involves considerations that extend beyond scientific evidence. Direct scientific evidence is of pre-eminent interest, but such issues as cost effectiveness, resource prioritization, logistical factors, ethical and legal concerns, and patient and societal expectations should also be considered.

Historically, the Task Force has taken a conservative, evidence-based approach to this process, making recommendations that reflect primarily the state of the evidence and refraining from making recommendations when they cannot be supported by evidence. This is done with the understanding that clinicians and policymakers must still consider additional factors in making their own decisions (34). The Task Force sees its purpose as providing users with information about the extent to which recommendations are supported by evidence, allowing them to make more informed decisions about implementation.

Another important issue in making recommendations is the amount and quality of evidence required. As evidence is rarely adequate to provide decision makers with completely valid information about all important outcomes for the population of interest, those creating guidelines must consider how far they are willing to generalize from imperfect evidence. As noted in the Extrapolation and Generalization section, the Task Force believes that such generalizations can be made under defined conditions.

The general principles the Task Force follows in making recommendations are outlined in Table 5. Most of these principles have been discussed in other parts of this article. They involve both the factors the Task Force considers in making recommendations (e.g., the most salient types of evidence, feasibility, harms, economic costs, and the target population) and the way in which it considers these factors (e.g., the place of subjectivity, the importance of the population perspective, and the extent to which the evidence connects the service with positive net benefits for patients).

Table 5. Principles for making recommendations



Codes and Wording of Statements

As in the past, the Task Force assigns letter codes to its recommendations and uses standardized phrasing for each category of recommendations (Table 6), but the details have changed from previous versions. The original five-letter scheme, which included a rarely used E recommendation category (6), has been replaced with a four-letter scheme (A-D) that allows only one classification (D) for recommendations against routinely providing a preventive service.

Table 6. Standard recommendation language, USPSTF


Recommendation: A
Language[a]: The USPSTF strongly recommends that clinicians routinely provide [the service] to eligible patients. (The USPSTF found good evidence that [the service] improves important health outcomes and concludes that benefits substantially outweigh harms.)

Recommendation: B
Language[a]: The USPSTF recommends that clinicians routinely provide [the service] to eligible patients. (The USPSTF found at least fair evidence that [the service] improves important health outcomes and concludes that benefits outweigh harms.)

Recommendation: C
Language[a]: The USPSTF makes no recommendation for or against routine provision of [the service]. (The USPSTF found at least fair evidence that [the service] can improve health outcomes but concludes that the balance of the benefits and harms is too close to justify a general recommendation.)

Recommendation: D
Language[a]: The USPSTF recommends against routinely providing [the service] to asymptomatic patients. (The USPSTF found at least fair evidence that [the service] is ineffective or that harms outweigh benefits.)

Recommendation: I
Language[a]: The USPSTF concludes that the evidence is insufficient to recommend for or against routinely providing [the service]. (Evidence that [the service] is effective is lacking, of poor quality, or conflicting, and the balance of benefits and harms cannot be determined.)

[a] All statements specify the population for which the recommendation is intended and are followed by a rationale statement providing information about the overall grade of evidence and the net benefit from implementing the service.
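
The standardized wording in Table 6 functions, in effect, as a set of sentence templates keyed by letter code, with the name of the service filled in. The short Python sketch below is purely illustrative (it is not part of the Task Force's process, and the function and variable names are invented for this example); it shows how the Table 6 language for the main statement could be represented as such a template lookup.

# Illustrative sketch only: Table 6 main-statement wording as templates keyed by letter code.
# "[the service]" in the standard language becomes a placeholder to be filled in.
STANDARD_LANGUAGE = {
    "A": "The USPSTF strongly recommends that clinicians routinely provide {service} to eligible patients.",
    "B": "The USPSTF recommends that clinicians routinely provide {service} to eligible patients.",
    "C": "The USPSTF makes no recommendation for or against routine provision of {service}.",
    "D": "The USPSTF recommends against routinely providing {service} to asymptomatic patients.",
    "I": ("The USPSTF concludes that the evidence is insufficient to recommend for or against "
          "routinely providing {service}."),
}

def recommendation_statement(code: str, service: str) -> str:
    # Fill the standard wording for a given letter code with a specific service name.
    return STANDARD_LANGUAGE[code].format(service=service)

# Hypothetical usage; the service name is invented for illustration.
print(recommendation_statement("B", "screening for condition X"))

In practice, as the table footnote notes, each statement also specifies the population to which it applies and is followed by the rationale language shown in parentheses in Table 6.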


Previous definitions for letter codes focused on whether the evidence supported "including the preventive service in the periodic health examination." Current thinking is that preventive services should also be delivered in other contexts, such as illness visits. The new wording thus focuses on whether the service should be "routinely provided."

In the past, the Task Force assigned a C code to recommendations with "insufficient evidence to make a recommendation." Previous Task Forces used this code for a wide assortment of circumstances and thus assigned it to a large proportion of the preventive services they reviewed. Evidence could be insufficient because no studies existed, available studies were of poor quality, studies were of reasonable quality but conflicting, or results were consistent but the magnitude of net benefit was small.

The C recommendation, because of its location in the hierarchical ranking of recommendation grades, implies that the service is less worthy of implementation than services that receive an A or a B recommendation. The current Task Force believes that such pejorative conclusions should be applied only when the evidence provides a basis for inferring that the magnitude of net benefit is smaller than for interventions that merit higher ratings. In other instances, in which evidence is of poor quality or conflicting, the possibility of substantial benefit (or substantial harm) cannot be excluded on scientific grounds and thus the Task Force can make no evidence-based judgments about the service.

To address these cases, the Task Force has created a new recommendation category, the I recommendation (insufficient evidence). It has also intentionally chosen a letter distant from the A-D hierarchy to signal its reluctance to pass judgment about the effectiveness of the interventions that receive this rating. The Task Force gives an I recommendation when studies are lacking or of poor quality or when they produce conflicting results that do not permit conclusions about likely benefits and harms.

For the A-D recommendations, the Task Force has adopted a more formalized process than in the past for translating the evidence into group judgments about how strongly to recommend the intervention. In earlier years, the simplistic notion was that services supported by RCTs always received A recommendations. The new approach recognizes that the importance of providing the preventive service depends not only on the quality of the evidence but also on the magnitude of net benefit to patients or populations. To ensure that both dimensions (quality and magnitude) are addressed systematically in assigning letter codes, the Task Force now uses a recommendation grid (Table 7) that makes the process more explicit.

Table 7. Recommendation grid
Quality of evidence    Net benefit
                       Substantial    Moderate    Small    Zero/negative
Good                   A              B           C        D
Fair                   B              B           C        D
Poor                   I (insufficient evidence, regardless of the estimated net benefit)

As shown, code A indicates that the quality of evidence is good and the magnitude of net benefits is substantial: The Task Force "strongly recommends" that these services be routinely provided (Table 6). The B code indicates that the Task Force has found that either the quality of the evidence or the magnitude of net benefits (or both) is less than would be needed to warrant an A. Primary care providers should not necessarily give higher priority to A over B services. Setting priorities for offering, providing, or reimbursing these services should include consideration of time and resource requirements, which are beyond the scope of the Task Force's review. Other groups have undertaken this important work (35).

The C code indicates that the quality of evidence is either good or fair but that the magnitude of net benefits, as judged in the subjective process outlined above, is too small to make a general recommendation. In these cases, the Task Force "makes no recommendation for or against routinely providing the service." Clinicians and policymakers may choose to offer the service for other reasons—such as considerations other than scientific evidence or because benefits for individual patients are expected to exceed those observed in studies—but the Task Force rating is meant to advise them that existing evidence does not document substantial net benefit for the average patient.

The D code indicates that the evidence is good or fair but that net benefit is probably either zero or negative. In these situations, the Task Force recommends against routine use of the service.

When the evidence is poor, the Task Force cannot distinguish between substantial or moderate net benefits on the one hand and small or zero/negative net benefits on the other. In these cases, the Task Force uses code I to indicate that it cannot make a recommendation for or against routinely providing the service. Because extant evidence cannot yet clarify whether the net benefits of the service are large or small (or negative), this rating advises clinicians and policymakers that determination of whether to provide these services routinely cannot be based on evidence; such decisions must be based on factors other than science.
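
Read as a decision rule, Table 7 maps the quality of the overall evidence and the magnitude of net benefit to a letter code, with poor-quality evidence always yielding an I statement regardless of the apparent net benefit. The following Python sketch is illustrative only (the function name and input labels are invented for this example and are not part of the Task Force's methods); it simply encodes the grid as a lookup.

# Illustrative sketch only: the Table 7 recommendation grid as a lookup table.
# Quality of evidence: "good", "fair", or "poor".
# Magnitude of net benefit: "substantial", "moderate", "small", or "zero/negative".
GRID = {
    ("good", "substantial"): "A",
    ("good", "moderate"): "B",
    ("good", "small"): "C",
    ("good", "zero/negative"): "D",
    ("fair", "substantial"): "B",
    ("fair", "moderate"): "B",
    ("fair", "small"): "C",
    ("fair", "zero/negative"): "D",
}

def recommendation_grade(quality: str, net_benefit: str) -> str:
    # Poor-quality evidence cannot distinguish among magnitudes of net benefit,
    # so it always maps to an I (insufficient evidence) statement.
    if quality == "poor":
        return "I"
    return GRID[(quality, net_benefit)]

# Examples: fair-quality evidence of substantial net benefit yields a B;
# any judgment based on poor-quality evidence yields an I.
assert recommendation_grade("fair", "substantial") == "B"
assert recommendation_grade("poor", "moderate") == "I"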

Drafting the Report

In its earliest days, the Task Force's background papers and recommendations were written by individual panel members assigned to those topics. In later years, they were written by staff under close oversight from the Task Force. Over time, a sharp demarcation has evolved between descriptions of the evidence and the recommendations themselves.

Thus, for the third Task Force, topic teams led by EPC staff write systematic evidence reviews. These reviews define the strengths and limits of the evidence but stop short of making recommendations.

Systematic evidence reviews are typically produced in two forms: the full version (available from AHRQ and accessible on its Web site) and a shorter summary, such as those available online. As a work product prepared under contract for AHRQ, the systematic evidence reviews must be approved by the agency before public release. The reviews remain pure descriptions of the science; because they are published separately, groups other than the Task Force can use them to formulate their own guidelines and recommendations.

The summary reviews are typically coupled with a "recommendation and rationale" document, written by the Task Force, which contains recommendations and their supporting rationales. Recommendations, which cross the line from science into policy, are based on formal voting procedures that include explicit rules for determining the views of the majority.

The Task Force has an explicit policy concerning conflict of interest. All members and EPC staff disclose at each meeting whether they have an important financial, organizational, or intellectual conflict for each topic being discussed. Task Force members and EPC staff with conflicts can participate in discussions about the evidence, but members with conflicts abstain from voting on recommendations about the topic in question.

Recommendations are independent of the government. They neither require clearance from nor represent the policy of AHRQ or the U.S. Public Health Service, although efforts are made to consult with relevant agencies to reduce unnecessary discrepancies among guidelines.

The Task Force chair or liaisons on the topic team generally compose the first draft of the recommendation and rationale statement, which the full panel then reviews and edits. These statements have the general structure of the chapters in previous editions of the Guide to Clinical Preventive Services (6). Specifically, they include a recommendation statement and code, a rationale statement, and a brief discussion of clinical interventions. The clinical intervention section is meant to provide more specific information and guidance to clinicians about the service, sometimes discussing factors beyond the quality of the evidence and the magnitude of net benefit that must be considered with implementation.

External Review

Before the Task Force makes its final determinations about recommendations on a given preventive service, the EPC and AHRQ send a draft systematic evidence review to four to six external experts and to federal agencies and professional and disease-based health organizations with interests in the topic. They ask the experts to examine the review critically for accuracy and completeness and to respond to a series of specific questions about the document. After assembling these external review comments and documenting the proposed response to key comments, the topic team presents this information to the Task Force in memo form. In this way, the Task Force can consider these external comments and a final version of the systematic review before it votes on its final recommendations about the service.

Conclusion

Methods for making evidence-based practice policies are evolving. At one extreme, guideline panels could insist on direct evidence or point to any information gap to justify a negative recommendation for almost any service. Such an approach would result in positive recommendations only for services with a very narrow confidence interval for net benefit, but many effective services would not be recommended. At the other extreme, guideline groups that accept incomplete data and allow easy extrapolation make many positive recommendations, but they have less certainty that the services they recommend actually produce more benefit than harm.

In avoiding these extremes, the Task Force has wrestled with several gaps in existing methodology for assessing the quality of evidence, for integrating bodies of evidence, and for translating evidence into guidelines. It continues to address several knotty questions: Can criteria for the internal validity of studies be consistently applied across preventive services? How reliable are such criteria in identifying studies with misleading results? How much weight should be given to various degrees of information gaps, particularly those concerning potential harms and generalizations from research studies to everyday practice? Should the Task Force modify any of these methods when dealing with counseling services?

More methodologic research is warranted in several key areas. Principal among these are efforts to determine the best factors to consider in using evidence-based principles to guide judgments about the magnitude of benefits and harms when the available evidence is fair in quality and when gaps exist in the framework supporting effectiveness. These and other challenges will make the methods of the Task Force, like those of other evidence-based guideline programs, a work in progress for many years.

Acknowledgements

This paper was developed by the Research Triangle Institute-University of North Carolina at Chapel Hill (RTI-UNC) and the Oregon Health Sciences University (OHSU) Evidence-Based Practice Centers under contracts from the Agency for Healthcare Research and Quality (contract nos. 290-97-0011 and 290-97-0018, respectively). We acknowledge the assistance of Jacqueline Besteman, J.D., M.A., EPC Program Officer; the AHRQ staff working with the third Task Force; and the staffs of the EPCs at RTI-UNC and at OHSU for their many hours of work in support of this effort. We also acknowledge the assistance of the Counseling and Behavioral Issues Work Group of the Task Force, Evelyn Whitlock, M.D., M.P.H., convenor. Finally, we also acknowledge the major contribution of the entire third U.S. Preventive Services Task Force for its support and intellectual stimulation.

The authors of this article are responsible for its contents, including any clinical or treatment recommendations. No statement in this article should be construed as an official position of the Agency for Healthcare Research and Quality or the U.S. Department of Health and Human Services.

References and Notes

1. Field MJ, Lohr KN, eds. Guidelines for clinical practice: from development to use. Washington, DC: National Academy Press, 1992 (for Institute of Medicine).

2. Woolf SH, George JN. Evidence-based medicine: interpreting studies and setting policy. Hematol Oncol Clin North Am 2000;14:761-784.

3. Mulrow CD, Cook D, eds. Systematic reviews: synthesis of best evidence for health care decisions. Philadelphia: American College of Physicians, 1998.

4. Cook D, Giacomini M. The trials and tribulations of clinical practice guidelines. JAMA 1999;281:1950-1951.

5. Lawrence RS, Mickalide AD, Kamerow DB, Woolf SH. Report of the U.S. Preventive Services Task Force. JAMA 1990;263:436-437.

6. U.S. Preventive Services Task Force. Guide to clinical preventive services: report of the U.S. Preventive Services Task Force, 2nd ed. Washington, DC: Office of Disease Prevention and Health Promotion, U.S. Government Printing Office, 1996.

7. Eddy DM. Clinical decision making: from theory to practice. A collection of essays from JAMA. Boston: Jones and Bartlett Publishers, 1995.

8. Pignone MP, Phillips CJ, Atkins D, Teutsch SM, Mulrow CD, Lohr KN. Screening and treating adults for lipid disorders. Am J Prev Med 2001;20(suppl 3):77-89.

9. Briss PA, Zaza S, Pappaioanou M, et al. Developing an evidence-based guide to community preventive services: methods. Am J Prev Med 2000;18(suppl 1):35-43.

10. Meade MO, Richardson WS. Selecting and appraising studies for a systematic review. In: Mulrow CD, Cook D, eds. Systematic reviews: synthesis of best evidence for health care decisions. Philadelphia: American College of Physicians, 1998:81-90.

11. Woolf SH, DiGuiseppi CG, Atkins D, Kamerow DB. Developing evidence-based clinical practice guidelines: lessons learned by the U.S. Preventive Services Task Force. Annu Rev Public Health 1996;17:511-538.

12. Battista RN, Fletcher SW. Making recommendations on preventive practices: methodological issues. Am J Prev Med 1988;4(suppl 4):53-67.

13. Mulrow C, Langhorne P, Grimshaw J. Integrating heterogeneous pieces of evidence in systematic reviews. In: Mulrow CD, Cook D, eds. Systematic reviews: synthesis of best evidence for health care decisions. Philadelphia: American College of Physicians, 1998:103-12.

14. Nelson HD, Helfand M. Screening for chlamydial infection. Am J Prev Med 2001;20(suppl 3):95-107.

15. Helfand M, Mahon SM, Eden KB, Frame PS, Orleans CT. Screening for skin cancer. Am J Prev Med 2001;20(suppl 3):47-58.

16. Wilson JMG, Jungner G. Principles and practice of screening for disease. Geneva: World Health Organization, 1968 (Public Health Papers No. 34).

17. Frame PS, Carlson SJ. A critical review of periodic health screening using specific screening criteria. J Fam Pract 1975;2:29-36, 123-9, 189-94, 283-9.

18. Bucher HC, Guyatt GH, Cook DJ, Holbrook A, McAlister FA. Users' guides to the medical literature. XIX. Applying clinical trial results. A. How to use an article measuring the effect of an intervention on surrogate end points. JAMA 1999;282:771-778.

19. Gøtzsche PC, Liberati A, Torri V, Rossetti L. Beware of surrogate outcome measures. Int J Technol Assess Health Care 1996;12:238-246.

20. Lohr KN, Carey TS. Assessing "best evidence": issues in grading the quality of studies for systematic reviews. J Qual Improv 1999;25:470-479.

21. Hornberger J, Wrone E. When to base clinical policies on observational versus randomized trial data. Ann Intern Med 1997;127:697-703.

22. Feinstein AR, Horwitz RI. Problems in the "evidence" of "evidence-based medicine." Am J Med 1997;103:529-535.

23. Oxman AD, Cook DJ, Guyatt GH, Evidence-Based Medicine Working Group. Users' guides to the medical literature: how to use an overview. JAMA 1994;272:1367-71.

24. Mulrow CD, Linn WD, Gaul MK, Pugh JA. Assessing quality of a diagnostic test evaluation. J Gen Intern Med 1989;4:288-295.

25. Guyatt GH, Sackett DL, Cook DJ, Evidence-Based Medicine Working Group. Users' guides to the medical literature. I. How to use an article about therapy or prevention. A. Are the results of the study valid? JAMA 1993;270:2598-601.

26. Laupacis A, Wells G, Richardson WS, Tugwell P, Evidence-Based Medicine Working Group. Users' guides to the medical literature V. How to use an article about prognosis. JAMA 1994;272:234-7.

27. Russell MA, Wilson C, Taylor C, Baker CD. Effect of general practitioners' advice against smoking. BMJ 1979;2:231-235.

28. Eddy DM. Comparing benefits and harms: the balance sheet. JAMA 1990;263:2493, 2498, 2501.

29. Braddick M, Stuart M, Hrachovec J. The use of balance sheets in developing clinical guidelines. J Am Board Fam Pract 1999;12:48-54.

30. Ewart RM. Primum non nocere and the quality of evidence: rethinking the ethics of screening. J Am Board Fam Pract 2000;13:188-196.

31. Fletcher SW, Black W, Harris R, Rimer B, Shapiro S. Report of the International Workshop on Screening for Breast Cancer. J Natl Cancer Inst 1993;85:644-656.

32. Elmore JG, Barton MB, Moceri VM, Polk S, Arena PJ, Fletcher SW. Ten-year risk of false positive screening mammograms and clinical breast examinations. N Engl J Med 1998;338:1089-1096.

33. Nease RF Jr, Kneeland T, O'Connor GT, et al. Variation in patient utilities for outcomes of the management of chronic stable angina: implications for clinical practice guidelines. JAMA 1995;273:1185-1190.

34. Woolf SH, Dickey LL. Differing perspectives on preventive care guidelines: a new look at the mammography controversy. Am J Prev Med 1999;17:260-268.

35. Coffield AB, Maciosek MV, McGinnis JM, et al. Priorities among recommended clinical preventive services. Am J Prev Med 2001. In press.

Author Affiliations

[a] Harris: School of Medicine and Cecil G. Sheps Center for Health Services Research, University of North Carolina at Chapel Hill, NC.
[b] Helfand: Division of Medical Informatics and Outcomes Research, and Evidence-based Practice Center, Oregon Health Sciences University and Portland Veterans Affairs Medical Center, Portland, OR.
[c] Woolf: Department of Family Practice, Medical College of Virginia, Virginia Commonwealth University, Fairfax, VA.
[d] Lohr: Research Triangle Institute, Research Triangle Park, and University of North Carolina at Chapel Hill, Program on Health Outcomes and School of Public Health, Chapel Hill, NC.
[e] Mulrow: Department of Medicine, University of Texas Health Science Center, San Antonio, TX.
[f] Teutsch: Outcomes Research and Management, Merck & Co, Inc., West Point, PA.
[g] Atkins: Center for Practice and Technology Assessment, Agency for Healthcare Research and Quality, Rockville, MD.

Footnotes

[1]Other members of the Methods Work Group include: Alfred O. Berg, M.D., M.P.H., University of Washington School of Medicine; Karen B. Eden, Ph.D., Oregon Health Sciences University; John Feightner, M.D., M.Sc., FCFP, University of Western Ontario-Parkwood Hospital; Susan Mahon, M.P.H., Oregon Health Sciences University; and Michael Pignone, M.D., M.P.H., University of North Carolina School of Medicine.

Copyright and Source Information

This document is in the public domain within the United States as stated in AHRQ's license agreement with the American Journal of Preventive Medicine. For information on reprinting, contact Randie Siegel, Director, Division of Printing and Electronic Publishing, Agency for Healthcare Research and Quality, Suite 501, 2101 East Jefferson Street, Rockville, MD 20852. Requests for linking or to incorporate content in electronic resources should be sent to: info@ahrq.gov.

Source: Harris RP, Helfand M, Woolf SH, Lohr KN, Mulrow CD, Teutsch SM, Atkins D, for the Methods Work Group, third U.S. Preventive Services Task Force. Current methods of the U.S. Preventive Services Task Force: a review of the process. Am J Prev Med 2001;20(3S):21-35 (http://www.elsevier.com/locate/ajpmonline).


Internet Citation:

Current Methods of the U.S. Preventive Services Task Force: A Review of the Process. Article originally in Am J Prev Med 2001;20(3S):21-35. Agency for Healthcare Research and Quality, Rockville, MD. http://www.ahrq.gov/clinic/ajpmsuppl/harris1.htm

