National Cancer Institute
U.S. National Institutes of Health | www.cancer.gov

Levels of Evidence for Cancer Screening and Prevention Studies (PDQ®)
Health Professional Version   Last Modified: 12/17/2004

Introduction
Evaluation of Evidence
Notes on Quality Assessment
Get More Information From NCI
Changes to This Summary (12/17/2004)
Questions or Comments About This Summary
More Information

Evaluation of Evidence

The two steps in evaluating evidence are described below.

  1. Description of the Evidence (The PDQ Editorial Board uses the same process for benefits and harms; the “evidence” referred to is the evidence relevant to the question of the magnitude of the health effects of widespread implementation.)

    Domains

    1. Study Design (evidence from the best studies available, ranked in descending order of strength; see the illustrative sketch after this list)
      1. Evidence obtained from randomized controlled trials (see below).
      2. Evidence obtained from nonrandomized controlled trials.
      3. Evidence obtained from cohort or case-control studies.
      4. Evidence obtained from ecologic and descriptive studies (e.g., studies of international patterns, time series).
      5. Opinions of respected authorities based on clinical experience, descriptive studies, or reports of expert committees.
    2. Internal Validity: “Quality” of Execution Within the Study Design
      The Editorial Board uses design-specific criteria within each research design to assess the internal validity of the evidence. At present the Board uses the criteria developed by the U.S. Preventive Services Task Force (see Table 3 in [1]). These criteria may be modified over time as needed.
    3. Consistency (coherence)/Volume of the Evidence
      • One study (small vs. large number of participants; agree vs. disagree).
      • Multiple studies (small vs. large number of participants; agree vs. disagree).
    4. Magnitude of Effects on Health Outcomes (both absolute and relative risks, stated as quantitatively as possible; may vary for different populations; see the worked sketch after this list)
      • Small positive/negative magnitude (benefits/harms).
      • Larger positive/negative magnitude (benefits/harms).
    5. External Validity
      • Extent to which the intervention can be applied to usual practice with the same effects as in efficacy studies.
      • Effects among people in the general population and how they may differ from effects observed in study participants.
  2. Assessment of the Evidence
    1. The level of certainty (solid, fair, inadequate) of our understanding of the direction and magnitude of the health effects (both benefits and harms) of widespread implementation.
    2. Example: Statement of Benefits

      Option 1: “Based on [solid/fair] evidence, use of intervention X [among population Y, where appropriate] leads to a reduction/increase in (a specific benefit).” [In the Evidence of Benefit section, the actual evidence is detailed, including evidence and assessment of the direction and magnitude of specific benefits.]

      Option 2: “The evidence is inadequate to make a clear determination of benefit.” (To be used when evidence is inadequate in amount or quality.) Alternative format, depending on the situation: “The evidence is inadequate to determine whether (preventive service) reduces (health problem) to a clinically or public health important degree.” (Further explanation or clarification as needed.) The outcome for which the evidence is inadequate should be stated (e.g., an effect on mortality).

    3. Example: Statement of Harms

      Option 1: “Based on [solid/fair] evidence, use of intervention X [among population Y] leads to a reduction/increase in (a specific harm).” [In the Evidence of Harm section, the actual evidence is detailed, including evidence and assessment of the direction and magnitude of specific harms.]

      Option 2: “The evidence is inadequate to make a clear determination of harm.” (To be used when evidence is inadequate in amount or quality.) Alternative format, depending on the situation: “The evidence is inadequate to determine whether (preventive service) increases (a specific harm) to a clinically or public health important degree.” (Further explanation or clarification as needed.) The outcome for which the evidence is inadequate should be stated (e.g., an effect on mortality).
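
As a purely illustrative aid to the Domain 1 ranking above, the following minimal Python sketch, which is not part of the PDQ process, encodes the study-design hierarchy as an ordered enumeration so that the strongest design represented in a body of evidence can be picked out; the class and member names are assumptions made for this example only.

    # Illustrative only (not an NCI artifact): the Domain 1 study-design hierarchy
    # expressed as an ordered enumeration, so the strongest design represented in a
    # body of evidence can be identified programmatically.
    from enum import IntEnum

    class StudyDesign(IntEnum):
        """Domain 1 designs, ranked in descending order of strength (1 = strongest)."""
        RANDOMIZED_CONTROLLED_TRIAL = 1
        NONRANDOMIZED_CONTROLLED_TRIAL = 2
        COHORT_OR_CASE_CONTROL = 3
        ECOLOGIC_OR_DESCRIPTIVE = 4
        EXPERT_OPINION = 5

    # Example: a hypothetical body of evidence containing two kinds of studies.
    available = [StudyDesign.COHORT_OR_CASE_CONTROL, StudyDesign.RANDOMIZED_CONTROLLED_TRIAL]
    print(min(available).name)  # lower number = stronger design -> RANDOMIZED_CONTROLLED_TRIAL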
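
Domain 4 asks for effect magnitudes in both absolute and relative terms. The sketch below, again not part of the PDQ process, shows how the standard measures (absolute risk in each arm, absolute risk reduction, relative risk, relative risk reduction, and number needed to screen) follow from hypothetical two-arm study counts; the function name and every number in it are illustrative assumptions.

    # Illustrative only: computing the absolute and relative effect measures named in
    # Domain 4 from hypothetical two-arm trial counts. All numbers are invented.
    def effect_magnitudes(events_intervention: int, n_intervention: int,
                          events_control: int, n_control: int) -> dict:
        """Return absolute and relative measures of effect for a two-arm study."""
        risk_i = events_intervention / n_intervention  # absolute risk, intervention arm
        risk_c = events_control / n_control            # absolute risk, control arm
        arr = risk_c - risk_i                          # absolute risk reduction (risk difference)
        rr = risk_i / risk_c                           # relative risk
        return {
            "risk_intervention": risk_i,
            "risk_control": risk_c,
            "absolute_risk_reduction": arr,
            "relative_risk": rr,
            "relative_risk_reduction": 1 - rr,
            # Number needed to screen to prevent one event (undefined when ARR is 0).
            "number_needed_to_screen": (1 / arr) if arr else float("inf"),
        }

    # Hypothetical example: 300 deaths among 100,000 screened vs. 400 among 100,000
    # controls. The relative reduction (25%) looks large while the absolute reduction
    # (0.1 percentage points) is small, which is why both measures are reported.
    for name, value in effect_magnitudes(300, 100_000, 400, 100_000).items():
        print(f"{name}: {value:.6g}")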

References

  1. Harris RP, Helfand M, Woolf SH, et al.: Current methods of the US Preventive Services Task Force: a review of the process. Am J Prev Med 20 (3 Suppl): 21-35, 2001.
