
Systematic Reviews, Updating

Full Title: Updating Systematic Reviews

September 2007



Structured Abstract

Background: Systematic reviews are often advocated as the best source of evidence to guide both clinical decisions and healthcare policy, yet we know very little about the extent to which they require updating.

Objectives:

  • To estimate the average time to changes in evidence sufficiently important to warrant updating systematic reviews (referred to as the survival time) and to identify any characteristics that increase or decrease these survival times.
  • To determine the performance characteristics of various surveillance protocols to identify important new evidence.
  • To assess the utility of rates and patterns of growth for evidence within clinical areas as predictors of updating needs.
  • To establish typical timeframes for the production and publication of systematic reviews in order to assess the extent to which they impact survival time (e.g., whether or not delays in the peer review and publication processes substantially shorten the time in the public domain before new evidence requires updating of a given systematic review).
  • To characterize current updating practices and policies of agencies that sponsor systematic reviews.

Design: Survival analysis for a cohort of 100 quantitative systematic reviews that were indexed in ACP Journal Club with an accompanying commentary; supplementary sample of Cochrane reviews meeting the same criteria and AHRQ evidence reports; internet-based survey of agencies that sponsor or undertake systematic reviews.
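
The survival framing above can be illustrated with a brief sketch. The following Python snippet is a minimal illustration of how signal-free survival and hazard ratios of the kind reported below might be estimated with the lifelines package; the column names, covariates, and example values are hypothetical and are not the data or code used in the report.

    import pandas as pd
    from lifelines import KaplanMeierFitter, CoxPHFitter

    # One row per systematic review (hypothetical values): time in years until a
    # qualitative or quantitative signal for updating, an event indicator
    # (1 = signal observed, 0 = censored at end of follow-up), and two
    # illustrative covariates.
    reviews = pd.DataFrame({
        "years_to_signal": [0.8, 2.1, 3.2, 4.6, 5.5, 6.3, 7.0, 9.1],
        "signal_observed": [1, 1, 1, 1, 1, 0, 0, 0],
        "cardiovascular":  [1, 1, 0, 0, 1, 1, 0, 0],
        "heterogeneity":   [1, 0, 1, 0, 0, 0, 1, 0],
    })

    # Kaplan-Meier estimate of median survival free of a signal for updating.
    kmf = KaplanMeierFitter()
    kmf.fit(reviews["years_to_signal"], event_observed=reviews["signal_observed"])
    print("Median signal-free survival (years):", kmf.median_survival_time_)

    # Cox proportional hazards model for features associated with shorter survival.
    cph = CoxPHFitter()
    cph.fit(reviews, duration_col="years_to_signal", event_col="signal_observed")
    cph.print_summary()
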

Sample: Eligible reviews evaluated the clinical benefit or harm of a specific (class of) drug, device, or procedure, were originally published between 1995 and 2005, and included at least one quantitative synthesis result in the form of an odds ratio, relative risk, risk difference, or mean difference. For the survey of updating policies and practices, we contacted 22 organizations that are well-known to produce or fund systematic reviews (including 12 AHRQ Evidence-based Practice Centers).

Data Sources: Systematic reviews indexed in ACP Journal Club and eligible new trials identified through five search protocols.

Measurements: Quantitative signals for updating consisted of changes in statistical significance or a relative change in effect magnitude of at least 50% involving one of the primary outcomes of the original systematic review or any mortality outcome. These signals were assessed by comparing the original meta-analytic results with updated results that included eligible new trials. Qualitative signals included substantial differences in characterizations of effectiveness, new information about harm, emergence of superior alternative treatments, and important caveats about the previously reported findings that would affect clinical decisionmaking.

The primary outcome of interest was the occurrence of either a qualitative or quantitative signal for updating the original systematic review. We also assessed the occurrence of a signal for updating within 2 years of publication, as some sources (e.g., The Cochrane Library) currently recommend updating systematic reviews every two years.
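
As a rough illustration of the quantitative signal criteria above, the sketch below flags an updated pooled estimate when statistical significance changes or when the effect magnitude changes by at least 50% relative to the original result. This is a simplified interpretation (the comparison is made on the log scale for a ratio measure); the function name, inputs, and example values are hypothetical, and the report's exact operational definition may differ.

    import math

    def quantitative_signal(orig_rr, orig_ci, updated_rr, updated_ci,
                            threshold=0.50):
        """Simplified check of the two quantitative signal criteria for a
        ratio measure such as a pooled relative risk (illustrative only).

        orig_ci and updated_ci are (lower, upper) 95% confidence intervals.
        """
        # 1. Change in statistical significance: the CI excludes 1.0 in one
        #    analysis but not in the other.
        orig_significant = not (orig_ci[0] <= 1.0 <= orig_ci[1])
        updated_significant = not (updated_ci[0] <= 1.0 <= updated_ci[1])
        significance_changed = orig_significant != updated_significant

        # 2. Relative change in effect magnitude of at least 50%, compared on
        #    the log scale so protective and harmful effects are treated alike.
        orig_effect = abs(math.log(orig_rr))
        updated_effect = abs(math.log(updated_rr))
        relative_change = (abs(updated_effect - orig_effect) / orig_effect
                           if orig_effect > 0 else float("inf"))
        magnitude_changed = relative_change >= threshold

        return significance_changed or magnitude_changed

    # Example: original pooled RR 0.70 (0.55-0.90); updated RR 0.85 (0.70-1.05).
    # Both criteria are met here, so a signal for updating would be flagged.
    print(quantitative_signal(0.70, (0.55, 0.90), 0.85, (0.70, 1.05)))
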

The survey measured existing updating policies, current strategies in use, and additional perceptions related to the updating process from the 19 organizations that responded.

Results: The cohort of 100 systematic reviews included a median of 13 studies (inter-quartile range: 8 to 21) and 2663 participants (inter-quartile range: 1281 to 8371) per review. A qualitative or quantitative signal for updating occurred for 57 systematic reviews. Median survival free of a signal for updating was 5.5 years (95% confidence interval [CI]: 4.6-7.6), but in 23 cases (95% CI: 15% to 33%), a signal for updating occurred in less than 2 years, and in 15 cases (95% CI: 9% to 24%) the signal occurred in less than 1 year. In 7 cases (95% CI: 3% to 14%), a signal had already occurred at the time of publication of the original review. Shorter survival was associated with cardiovascular medicine (hazard ratio of 3.26, 95% CI: 1.71 to 6.21; p=0.0003), heterogeneity in the original review (hazard ratio of 2.23, 95% CI: 1.22 to 4.09; p=0.01), and having a new trial larger than the previous largest trial (hazard ratio of 1.08, 95% CI: 1.02 to 1.15; p=0.01). Systematic reviews with more than the median of 13 included studies had increased survival (hazard ratio of 0.55; 95% CI: 0.31 to 0.98; p=0.04). No feature of the original review significantly predicted a signal for updating occurring within 2 years of publication.

Median time from the final search date to indexing was 1.4 years (inter-quartile range: 0.96-2.0 years). Lags from search to publication were shortest for Cochrane reviews (median 0.6 years; inter-quartile range: 0.42-1.25) and longest for journal reviews (median 1.3 years; inter-quartile range: 0.84-1.77), with technical reports falling in between (median 1.1 years; inter-quartile range: 0.87-1.42) (Kruskal-Wallis chi-square = 11.24, p=0.004).

Of the five search protocols tested for their effectiveness in identifying eligible new trials, the combination with the highest recall and lowest screening burden paired the strategy that used the PubMed Related Articles feature (applied to the three newest and three largest trials included in the original review) with the strategy that submitted a subject search (based on population and intervention) to the Clinical Query filter for therapy. This combination identified most new signaling evidence with a median screening burden of 71 new records per review.
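
For illustration, the sketch below shows how these two retrieval strategies might be combined programmatically using NCBI E-utilities via Biopython: fetching PubMed Related Articles for a set of seed trial PMIDs, and running a subject search restricted by a Clinical Queries-style therapy filter. The seed PMIDs, search terms, and email address are placeholders, and the filter shown is the standard narrow therapy (RCT) filter rather than the exact protocol used in the report.

    from Bio import Entrez

    Entrez.email = "reviewer@example.org"  # required by NCBI; placeholder address

    def related_articles(seed_pmids):
        """PubMed Related Articles for seed trials (e.g., the three newest and
        three largest trials in the original review)."""
        handle = Entrez.elink(dbfrom="pubmed", db="pubmed",
                              id=",".join(seed_pmids), linkname="pubmed_pubmed")
        record = Entrez.read(handle)
        handle.close()
        pmids = set()
        for linkset in record:
            for linksetdb in linkset.get("LinkSetDb", []):
                pmids.update(link["Id"] for link in linksetdb["Link"])
        return pmids

    def clinical_query_therapy(population_and_intervention):
        """Subject search combined with a narrow therapy (RCT) filter."""
        therapy_filter = ('(randomized controlled trial[Publication Type] OR '
                          '(randomized[Title/Abstract] AND '
                          'controlled[Title/Abstract] AND trial[Title/Abstract]))')
        handle = Entrez.esearch(db="pubmed", retmax=500,
                                term=f"({population_and_intervention}) AND {therapy_filter}")
        record = Entrez.read(handle)
        handle.close()
        return set(record["IdList"])

    # Combine the two strategies; the union is the set of new records to screen.
    candidates = related_articles(["11111111", "22222222"]) | \
                 clinical_query_therapy("atrial fibrillation AND warfarin")
    print(len(candidates), "records to screen")
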

For the survey of organizations involved in producing or funding systematic reviews, we received responses from 19 (86%) of the 22 organizations contacted. Approximately two thirds (68%) of respondents identified themselves as producers of systematic reviews and an additional 21% identified themselves as both funders and producers of systematic reviews. Only two respondents (11%) characterized themselves solely as funders of systematic reviews.

Approximately 80% of respondents characterized the importance of updating as 'high' or 'very high', although 68% acknowledged not having any formal policies for updating in place. Approximately two thirds (13/19; 68%) of respondents reported that over 20% of the reviews they commission or produce are out of date, and 32% of respondents (6/19) reported that at least 50% of their reviews were out of date. Barriers to updating identified by respondents included lack of appropriate methodologies, resource constraints, lack of academic credit, and limited publishing formats. The majority of the sample (16/19; 84%) indicated they 'somewhat' to 'strongly' favor the development of a central registry, analogous to efforts within the clinical trials community, to coordinate updating activities across agencies and review groups.

Conclusions: In a cohort of high quality systematic reviews directly relevant to clinical practice, signals for updating occurred frequently and within relatively short timelines. A number of features significantly affected survival, but none significantly predicted the need for updating within 2 years.

Currently, definitive recommendations about the frequency of updating cannot be made. A blanket recommendation, such as updating every two years, will miss a substantial number of important signals for updating that occur within shorter timelines, while more frequent updates will expend substantial resources. Methods for identifying reviews in need of updating based on surveillance for new evidence hold more promise than relying on features of the original review to identify reviews likely to need updating within a short time, but such approaches will require further investigation. Several of the methods tested were feasible, yielding good recall of relevant new evidence with modest screening burdens.

The majority of organizations engaged in the funding or production of systematic reviews view the importance of updating systematic reviews as high to very high. Despite this recognition, most organizations report having no formal policy in place for updating previous systematic reviews. Slightly less than half of organizations performed periodic literature searches to identify new evidence, but searching frequencies varied widely, from monthly to every two years.

If systematic reviews are to achieve their stated goal of providing the best evidence to inform clinical decisionmaking and healthcare policy, issues related to identifying reviews in need of updating will require much greater attention. In the meantime, publishers of systematic reviews should consider a policy of requiring authors to update searches performed more than 12 months prior to submission. Users of systematic reviews also need to recognize that important new evidence can appear within short timelines. When considering the results of a particular systematic review, users should search for more recent reviews or trials and determine whether the results are consistent with the previous review.


Download Technical Review

Updating Systematic Reviews

Evidence-based Practice Center: University of Ottawa
Topic Nominators: Agency for Healthcare Research and Quality (AHRQ)

Current as of September 2007


Internet Citation:

Updating Systematic Reviews, Structured Abstract. September 2007. Agency for Healthcare Research and Quality, Rockville, MD. http://www.ahrq.gov/clinic/tp/sysrevtp.htm


 
