
Some Cancer Trials Overstate Findings, Analysis Claims

Group-randomized research sometimes relies on inappropriate statistical analysis of prevention efforts.

(SOURCE: Ohio State University, news release, March 25, 2008)

    TUESDAY, March 25 (HealthDay News) -- The effectiveness of public campaigns or efforts to prevent cancer can often be overstated in certain kinds of cancer trials because of inappropriate statistical analysis, a new report claims.

    The review, published in the March 25 online issue of the Journal of the National Cancer Institute, suggests that some of the 75 group-randomized cancer trials it studied may have reported these interventions were effective when in fact they might not have been.

    "We cannot say any specific studies are wrong. We can say that the analysis used in many of the papers suggests that some of them probably were overstating the significance of their findings," review author David Murray, chairman of epidemiology in the College of Public Health at Ohio State University, said in a prepared statement.

    More than a third of the 75 trials in the review used statistical analyses that the reviewers considered inappropriate for assessing the intervention being studied. Most of those studies reported statistically significant intervention effects that, because of the analysis flaws, could mislead scientists and policymakers, the review authors stated.

    "If researchers use the wrong methods, and claim an approach was effective, other people will start using that approach. And if it really wasn't effective, then they're wasting time, money and resources and going down a path that they shouldn't be going down," Murray said.

    In group-randomized trials, researchers randomly assign identifiable groups to specific conditions and observe outcomes for members of those groups to assess the effects of an intervention under study.

    For example, a group-randomized trial might study the use of mass media to promote cancer screenings and then assess how many screenings result among groups that receive different kinds of messages.
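
    As a rough illustration, here is a minimal Python sketch of that design. The community names, group counts, and arm sizes are hypothetical, not drawn from any trial in the review.

        import random

        # Ten identifiable groups (hypothetical community names).
        communities = [f"community_{i}" for i in range(10)]

        # Randomize at the group level, not the individual level.
        random.seed(42)
        random.shuffle(communities)

        # Half the communities receive the intervention (say, a media
        # campaign promoting screening); the rest serve as controls.
        intervention_arm = communities[:5]
        control_arm = communities[5:]

        print("Intervention:", intervention_arm)
        print("Control:", control_arm)
        # Outcomes, such as how many members get screened, are then
        # measured on the individuals within each community.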

    The review is not an indictment of the study design. Murray is a proponent of such trials, and authored a 1998 textbook on the subject, "Design and Analysis of Group-Randomized Trials."

    "We're not trying to discourage people from using this design. It remains the best design available if you have an intervention that can't be studied at the individual level," he said.

    In analyzing the outcomes of such trials, researchers should account for any similarities among group members and any common influences affecting members of the same group, Murray said. However, the review found that this within-group correlation was often not factored into the final statistical analysis.

    "In science, generally, we allow for being wrong 5 percent of the time. If you use the wrong analysis methods with this kind of study, you might be wrong half the time. We're not going to advance science if we're wrong half the time," said Murray, who is also a member of the Cancer Control Program in Ohio State's Comprehensive Cancer Center.

    The review identified 75 articles, published in 41 journals, that reported intervention results from group-randomized trials related to cancer or cancer risk factors between 2002 and 2006. Thirty-four of the articles, or 45 percent, reported appropriate methods for analyzing the results. Twenty-six articles, or 35 percent, reported only inappropriate analytic methods. Six articles, or 8 percent, used a mix of appropriate and inappropriate methods, and the remaining nine, or 12 percent, gave too little information to judge whether the analytic methods were appropriate.

    The use of inappropriate analysis methods is not considered willful or in any way designed to skew results of a trial, Murray noted.

    Murray and his colleagues call for investigators to collaborate with statisticians familiar with group-randomized study methods, and for funding agencies and journal editors to ensure that such studies show evidence of proper design planning and data analysis.

    More information

    The American Cancer Society has more about prevention and early detection of cancer.

    Copyright © 2008 ScoutNews, LLC. All rights reserved.

    HealthDayNews articles are derived from various sources and do not reflect federal policy. healthfinder.gov does not endorse opinions, products, or services that may appear in news stories. For more information on health topics in the news, visit the healthfinder.gov health library.
