Department of Health and Human Services www.hhs.gov
EHC Component: EPC Project

Topic Title: An Exploration of Bayesian Mixed Treatment Comparisons Methods


Abstract – Final – Dec. 21, 2011

An Exploration of Bayesian Mixed Treatment Comparisons Methods

Topic Abstract

Background

A comparative effectiveness review typically seeks to compare a number of available interventions. Many randomized controlled trials (RCTs), however, compare an active drug only with placebo or with standard treatment, and direct head-to-head evidence on competing interventions is rare. The RCT is considered the “gold standard” for comparing drug efficacy, but naively comparing treatments across trials is unsound, since the benefits of randomization do not carry over to such cross-trial comparisons. When direct head-to-head evidence is unavailable or insufficient, methods such as adjusted indirect comparisons using a frequentist approach and, more recently, Bayesian mixed treatment comparisons (MTC) meta-analysis have been proposed.
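The frequentist adjusted indirect comparison mentioned above (commonly attributed to Bucher et al.) estimates A versus C through a common comparator B by subtracting the two effect estimates and adding their variances. A minimal sketch, with purely hypothetical log odds ratios and standard errors:

```python
import math

def bucher_indirect(d_ab, se_ab, d_cb, se_cb):
    """Adjusted indirect comparison of A vs C via a common comparator B.

    d_ab, d_cb: effect estimates (e.g., log odds ratios) for A vs B and
    C vs B from separate trials. Returns the indirect A-vs-C estimate
    and its standard error.
    """
    d_ac = d_ab - d_cb                        # difference of differences
    se_ac = math.sqrt(se_ab**2 + se_cb**2)    # variances of the links add
    return d_ac, se_ac

# Hypothetical estimates from two placebo-controlled trials:
d_ac, se_ac = bucher_indirect(d_ab=-0.50, se_ab=0.20, d_cb=-0.30, se_cb=0.25)
lo, hi = d_ac - 1.96 * se_ac, d_ac + 1.96 * se_ac   # 95% confidence interval
```

Because the variances add, the indirect estimate is always less precise than either of the direct estimates it is built from, which is the core limitation these methods share.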

Objectives

The overarching objective is to better understand the performance of Bayesian MTC meta-analysis methods. The specific aims are: (1) to compare how Bayesian MTC methods perform for different types of evidence network patterns, (2) to compare Bayesian MTC methods to frequentist indirect methods for various types of outcome measures, (3) to explore how meta-regression can be used with Bayesian MTC meta-analysis to investigate heterogeneity, and (4) for each of the evidence network scenarios, to determine how many equally sized studies are needed for each indirect comparison to match the validity of one same-sized direct comparison study.
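Aim (4) can be motivated by a back-of-the-envelope variance calculation under a deliberately simplified assumption that is not part of the protocol: every same-sized trial yields an effect estimate with the same sampling variance v. The helper name below is hypothetical.

```python
# Assume each equal-sized trial gives an effect estimate with variance v.
v = 1.0  # arbitrary units; only ratios matter

def indirect_variance(k, v=v):
    """Variance of an indirect A-vs-C estimate via a common comparator B
    when each link (A vs B, and C vs B) pools k equal-sized trials
    (hypothetical equal-variance setting)."""
    return v / k + v / k  # the variances of the two links add

direct_variance = v  # a single direct A-vs-C trial
```

Under this toy accounting, one trial per link gives an indirect variance of 2v, and it takes two trials per link (four trials in total) before the indirect estimate matches the precision of a single direct trial; the simulation study would probe how far this simple picture holds in realistic networks.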

Approach

We will implement Bayesian MTC methods under a variety of evidence network patterns, using data from current or recent systematic reviews as well as simulated data. The data will include binary and continuous outcomes. We will explore the robustness of the MTC modeling framework across different types of evidence network patterns and examine how different data structures affect model fit. We plan to compare results from each of these scenarios to those of frequentist indirect methods, under both fixed-effects and random-effects assumptions. The simulation study will explore how both the network pattern and the number of available studies affect the model’s ability to produce accurate and precise effect estimates.
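The binary-outcome arm of such a simulation study could be prototyped along the following lines. Everything here is an illustrative assumption (true effect sizes, event probability, trial size, replication count), and to keep the sketch self-contained the Bayesian MTC fit is replaced by a simple direct-versus-indirect frequentist contrast in a three-treatment network.

```python
import math
import random

random.seed(7)

def simulate_lor(true_lor, n=200, p_ref=0.30):
    """Simulate one two-arm binary-outcome trial (n patients per arm) and
    return the estimated log odds ratio, with a 0.5 continuity correction
    in every cell to guard against zero counts."""
    odds = (p_ref / (1 - p_ref)) * math.exp(true_lor)
    p_trt = odds / (1 + odds)
    ev_ref = sum(random.random() < p_ref for _ in range(n))
    ev_trt = sum(random.random() < p_trt for _ in range(n))
    a, b = ev_trt + 0.5, n - ev_trt + 0.5   # treatment arm: events, non-events
    c, d = ev_ref + 0.5, n - ev_ref + 0.5   # reference arm: events, non-events
    return math.log((a * d) / (b * c))

# Hypothetical true effects on the log odds ratio scale:
D_AB, D_CB = -0.50, -0.30   # A vs B and C vs B (B = common comparator)
D_AC = D_AB - D_CB          # implied A vs C under the consistency assumption

direct, indirect = [], []
for _ in range(500):
    direct.append(simulate_lor(D_AC))                         # one A-vs-C trial
    indirect.append(simulate_lor(D_AB) - simulate_lor(D_CB))  # via comparator B

def sd(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))
```

In this toy network both estimators are roughly unbiased for the A-versus-C effect, but the indirect one is about sqrt(2) times as variable; the planned study would extend this kind of replication loop to richer network patterns, continuous outcomes, and the Bayesian MTC model itself.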
