
Listening Session of the Federal Coordinating Council for Comparative Effectiveness Research

Washington, DC
June 10, 2009

 

Introduction

The purpose of the third listening session on comparative effectiveness research (CER) was to continue to gather public input from a broad range of diverse stakeholders on priorities, concerns, and ideas about how CER can empower patients and providers and improve care for all Americans.

The 2009 American Recovery and Reinvestment Act authorized $1.1 billion for comparative effectiveness research, including $300 million for the Agency for Healthcare Research and Quality, $400 million for the National Institutes of Health, and $400 million for the Secretary of Health and Human Services to support CER. The same law also created a 15-member Federal Coordinating Council for Comparative Effectiveness Research, which will assist federal agencies in coordinating comparative effectiveness and related health services research. The Council will submit a report to Congress on CER priorities and recommendations by June 30, 2009.

The Council held its first listening session in Washington, DC, on April 14, and its second listening session on May 13 in Chicago, IL. This was the third and final listening session prior to the completion and delivery of the Report to Congress.

 

Recommendations

The Council obtained public comment from a wide range of speakers. A summary of key comments and recommendations follows.

 

Why CER Matters

  • The value of CER is that it is an investment in better health for all of us.
  • CER can help to level the playing field and potentially accelerate adoption and implementation of more useful products that are more accessible to patients.

 

The CER Definition

  • The Council should be applauded for including a comprehensive array of health-related outcomes for diverse patient populations.
  • The Council’s definition and prioritization criteria should make clear that this comprehensive array of health-related outcomes must include comparisons of the broader, longer-term socioeconomic effects of different interventions.
  • The Council must ensure that the language regarding “prevalence of condition, burden of disease, variability in outcomes, and costs of care” does not exclude either broader or longer-term socioeconomic consequences; otherwise, comparisons and contrasts among diagnostic and therapeutic alternatives would be impaired.
  • The Council’s definition of CER is good. In particular, it supports the need to inform stakeholders about the effectiveness of a wide range of medical interventions for diverse patient populations, while at the same time acknowledging differences in health care settings.
  • The Council should be commended that its definition of CER acknowledges the need to develop better data sources and methods to assess comparative effectiveness.
  • The Council should include in the definition of CER the question of inpatient v. outpatient evaluation and treatment.

 

The CER Prioritization Criteria

  • The Council should be applauded for including in its draft criteria the evaluation of effectiveness in diverse populations and sub-populations. 

 

Stakeholder Involvement

  • Patients must be engaged in the care process, including playing a central role in their own treatment plans.
  • CER should be done at a science agency, and it must involve in its governance those with a direct stake in the results.
  • Public input into the research agenda is a social good and should be sought, including public-private advisory boards.

 

Concerns about CER

  • There is a concern that centralized comparative effectiveness decision-making, if allowed to dictate the practice of medicine, could replace the experience, wisdom, and knowledge of physicians with bureaucracies that reduce their decisions to formulas.
  • CER has the risk of stifling innovation.

 

Create a Global Network

  • The Council should consider establishing a global health-sharing network to find out what CER exists in the U.S. and elsewhere.

 

Develop Infrastructure

  • The Council should support efforts to improve clinical data sources.
  • The Council should consider combining the Minimum Data Set with data from pharmacy and lab providers.
  • Health care databases across the Federal government should be linked amongst themselves and to those systems used by providers under health information technology initiatives.
  • Health systems must learn how to analyze comparative patient data to discern trends and design positive change, and this work should be done by a neutral third party to bring confidence, integrity, and transparency to the process.
  • Development of useful data repositories requires a common language, an ongoing method and data repository, and research to validate and ensure the quality and accuracy of the data obtained.
  • The Council should consider funding a national patient library of evidence-based information as a trusted resource for patients to work with their providers to figure out what works best.
  • The Council should be applauded for its recognition of the importance of infrastructure investments, particularly patient registries.

 

Consider Registries

  • Robust clinical registries are great tools that support and enable research on diverse populations who are too often not included in randomized clinical trials.
  • Creating, interlinking, and using robust national registries should be given a high priority.
  • The Council should consider linking private clinical databases with CMS data sets.
  • The Council should consider establishing inter-operative databases that could service multiple purposes, including benchmarking, quality improvement, and public reporting.
  • The Council should consider creating a maternal and child data repository to help assess differences in treatments and the impact on both women and their neonates.

 

Prevention

  • CER is needed that focuses on primary prevention (which complements medical care and treatment).

 

Consider The Whole Health Care System

  • Comparative effectiveness research should focus on questions that reflect the interactions among all of the various components of the health care system, and have the greatest potential to empower medical specialists and patients to make the most appropriate decisions when faced with real world clinical situations.

 

Research Methodology

  • Emphasis should be placed on longitudinal studies.
  • Careful consideration needs to be given to the methodologies used in CER, and rigorous standards that weigh both the benefits and the challenges of each methodology need to be applied to the research methods selected.
  • Continuing investment is needed in the development and advancement of comparative effectiveness methods themselves (including advanced clinical trial design, data synthesis methods, observational methods, and mathematical modeling), as well as in the rigorous training of researchers in their use.
  • An evaluation expert should be involved from the outset of health improvement program design in order to ensure that the program is launched in a way that facilitates evaluation of its effectiveness.
  • There is a need to distinguish between optimum medicine and optimum health care delivery.
  • Funding should be used to create better models of disease diagnosis, treatment, and consequences. Without them, comparative effectiveness is meaningless, because clinical trials cannot evaluate what they cannot measure. 
  • A foundation of evidence rooted both in scientific methodology and established through observation and real-life experience is essential to the delivery of safe and effective medical care.
  • CER must move beyond randomized clinical trials. This is especially true for psychiatric treatments, where randomized trials reveal little about how these treatments work in the real world or which treatments within a given class are most effective.

 

Cost

  • Cost effectiveness has a legitimate role, but decisions regarding cost effectiveness should be made later, after scientific merit has been established. There should be a firewall between clinical and cost comparisons.
  • Costs are an important outcome along with comparative effectiveness; conducting comparative effectiveness without including the cost is like choosing from a menu without prices.  
  • Coverage decisions should not be the purview of the research agencies doing CER. 

 

Patient Involvement

  • CER must account for the individual nature of patient characteristics, including their specific preferences and personal values.
  •  CER must develop the ability to account for both the important individual differences in physiology and risk faced by patients making decisions about their care and for individual patient preferences.
    • Tools and methods must be developed to help patients and their doctors include these important characteristics.

 

Dissemination is Critical

  • The Council should consider broader dissemination of existing solid evidence.
  • CER should be disseminated to consumers in order to enable them to engage in more knowledgeable discussions with their physicians and to work with them to arrive at informed choices.

 

Prioritize Around Vulnerable Populations & Co-Morbidities

  • CER is needed on people with multiple chronic diseases, including:
    • a set of optimized protocols for various groups of complex patients.
    • a set of learnings about how to replicate data-driven improvement across multiple organizations.
  • The Council should consider using real-world data in the context of collaborative, rapid-cycle improvement to expand the evidence base for costly and vulnerable patient populations.
  • CER studies are needed to help address the needs of underserved, un-researched, and other priority populations who experience a disproportionate burden of health risk, disease, and cost.

 

Focus on Care Delivery

  • CER is needed to assess the appropriateness and effectiveness of care across and within settings.
  • Research is needed on delivery system design in order to understand which elements of structure and process are the strongest drivers of high-quality care.
  • Priority should be given to projects that are dedicated to the assessment of delivery-system operations.
    • These projects should advance a culture of learning in the health care enterprise.
    • These projects do not have to be designed to achieve definitive results (i.e., it is possible to learn and advance operational efficiency and effectiveness with studies of less rigorous design).
  • The Council should consider developing different criteria for the assessment of delivery-system operations research projects that use observational and quasi-experimental study designs (as opposed to the criteria used to evaluate tightly controlled experimental studies).
  • CER should be defined broadly enough to allow for the careful study of the processes of emergency care, with particular emphasis on diagnosis, disposition, and design.
    • Regarding diagnosis, the comparative effectiveness of different approaches to rapid diagnosis has not been rigorously evaluated, which represents a major missed opportunity to improve patient outcomes and to conserve resources.
    • Regarding disposition, the comparative effectiveness of inpatient v. outpatient treatment for many conditions has not been rigorously evaluated.
    • Regarding design, there are many unanswered questions related to the fundamental organization and delivery of emergency care.

 

Address Sub-Groups

  • CER is needed on people with multiple chronic diseases.

 

Other Specific Research Priorities

  • Consider research into rare diseases, and include an expert advisory panel to assist with study design and to determine the relative value and feasibility of conducting such studies.
  • Consider CER on diagnostic imaging, including the various cardiac imaging modalities and imaging for urological conditions.
  • Consider CER to assess the impact of long-term compliance with NQF performance measures.
  • Consider research into post-acute and long-term care.
  • CER is needed to compare different treatment options for localized prostate cancer.
  • Research is needed into obstetrics and pre-natal care delivery.
  • CER is needed to address trauma, as there is very little high quality translational or clinical research addressing comparative efficacy.
    • Many treatments provided are not evidence-based; there is a need to determine the best possible treatment for patients in situations where there is often an incredibly short amount of time to make the decision.
  • The Council should consider funding an initial three-year CER project to thoroughly evaluate patients in an environmental control unit in order to conduct, support, and synthesize research that compares the clinical outcomes, effectiveness, and appropriateness of items, services, and procedures that are used to prevent, diagnose, or treat illnesses.

 

Council’s Questions to Panelists

 

Dr. Graham asked Drs. Grover and Fasules about the importance of clinical databases (as opposed to administrative databases), and about the diversity in their organizations’ databases. Dr. Fasules replied that with registries, as opposed to clinical trials, you are able to capture everyone who has a particular procedure or treatment. He added that a registry’s strength is its breadth, and that when you also include a large claims database you get a double impact. Dr. Grover concurred, adding that the Society of Thoracic Surgeons specifically collects data on race and ethnicity in order to have reliable clinical data to make treatment decisions at the individual patient level. Pressed further about whether these databases can adequately capture some of the impact of various interventions on priority populations, Dr. Fasules noted that his organization was trying to extend the databases longitudinally.

 

Dr. Valuck asked Dr. Fasules to elaborate on his two-step process for incorporating cost into the comparative effectiveness discussion. In response, Dr. Fasules noted that cost is important, but that the cost evaluation should not be done while comparing the science or the treatments; otherwise, he said, you may not be evaluating the effectiveness of the difference in treatments.

 

Dr. Hunt asked Mr. Kanter how long it took him to make a treatment decision, and what tools he used to do so, once he discovered he had prostate cancer.  Mr. Kanter replied that it had taken him almost two years. He said he had started with WebMD, then talked to his urologist, and then spoke with the medical director of a nonprofit cancer institute. Mr. Kanter said that the medical director recommended a new treatment protocol at another institution—and he decided to go with that despite the lack of evidence to support the decision.

 

Dr. Kupersmith asked Dr. Fasules to what extent studies from the cardiology databases had informed the ACC’s guidelines (versus randomized clinical trials). Dr. Fasules replied that he could not give a percentage because randomized clinical trials are the first step. He added that the ACC was starting to mine several of the databases to look at how the data can be used to affect care, and then changing the guidelines as that happens.

 

Ms. Tanden asked Mr. Kanter for his recommendation about how to access international CER information. Mr. Kanter replied that much of the international data that his foundation has looked at thus far is not very good. He added that some major groups in the U.S., including Kaiser, have a lot of data that they have indicated they are willing to share with his global health sharing network.

 

Mr. Millman asked Dr. Cuddeback about the federal investment that would be needed to help operations such as his to use data warehouses. Dr. Cuddeback said that the Council’s strategic framework, as laid out, hits on a number of areas where there are needs, including methods development and looking at methods for enhanced adoption.

 

Dr. Delany asked Mr. Fox what has to change in order to put in place a viable “expert patient” model. In response, Mr. Fox said that technologies and methodologies need to be developed within the health information technology systems that allow a patient to enter clinically significant information into the system. Mr. Fox added that the starting point needs to be the clinical suite, and that the doctor and patient need to work together at a very primary level with the findings coming out of comparative effectiveness research.

 

Dr. Kilpatrick asked Mr. Fox if he could elaborate on the concept of involving the patient in the design, selection, and process of developing CER studies. Mr. Fox responded that patients are looking to get into the game. He noted that, when given the opportunity to participate, patients want to do so. They understand their symptoms, he said, so this is really a health literacy problem.

 

Ms. Tanden asked Dr. Vojta whether UnitedHealth’s effort to design insurance plans that interact with comparative effectiveness research was unique to that company or part of a new arena. Dr. Vojta replied that UnitedHealth did a great deal of qualitative and quantitative research before it rolled out its personalized benefit design plan, which reduces out-of-pocket costs for people with diabetes in exchange for compliance with American Diabetes Association standards.

 

Dr. Conway asked Drs. Cuddeback and Roberts how the U.S. government can fund both R01 work and the types of studies that are broader than a single question (perhaps addressing registries, translation, adoption, and infrastructure). In response, Dr. Cuddeback talked about the value of the data themselves, including the ability to take observational data that are the byproduct of providing care and essentially raise the standards for those data. He added that there was value in funding the process of using existing real-world data and, with the understanding that the process itself may raise the quality of the data, improving the usability of the data and the inferences that can be drawn from them. Dr. Roberts added that there were two types of research that need to be funded in this arena: the application of methods to particular problems that are not traditional basic science, and the development of strong, rigorous methodologies that advance the field.

 

Dr. Emanuel asked Dr. Buckley what the structure might be should his organization create a CER program. Dr.  Buckley replied by talking about the downsides of randomized clinical trials, and said that he would like to see a large, upfront investment in methodological issues. He also talked about the importance of incorporating all stakeholders into the process, including patients. Finally, Dr. Buckley talked about the need to focus on how the totality of the health care system (including care delivery and insurance benefit design) impacts patient care and outcomes.

 

Dr. Graham asked Dr. Fox what he would include in a true patient-provider partnership. Dr. Fox replied that what was needed was some kind of systematic measurement of the effectiveness of the communication between the clinician and the patient as CER is rolled out.

 

Dr. Clancy asked Dr. Roberts about the tension between standardization (i.e., RCTs) and continued methodological innovation. In response, Dr. Roberts noted that many involved in science have learned a lot not from randomized controlled trials but from thoughtful observations with rigorous underlying theory. He noted that there was some concern with the concept of certification in this area, in that certification itself assumes that the methods are understood well enough to determine what an appropriate certification would be. Dr. Roberts added that the true wisdom is not the trial that says 10 percent died on therapy A, but rather understanding why that happened and whether the subjects would have fared better on therapy B.

 

Dr. Clancy asked Dr. Lerner to elaborate on the concept of a national patient library and how it might be distinguished from MedlinePlus and other existing online resources. Dr. Lerner noted that a lot of the existing information is repurposed information (with lots of science and statistics) that is not really usable by the general public. He added that the information should be purpose-built for use by consumers in conjunction with their physicians.

 

Dr. Delany continued on the same theme, noting that he thought that a lot of CER would be needed just to design the system. He suggested that it would likely need to be an iterative process, because a foundational set of research studies would be needed first. Dr. Lerner noted that the first step could be a significant planning process that looks at what exists, where the gaps are, and what would be needed to fill them.

 

Dr. Kilpatrick asked Drs. Carr and Stewart how they would approach the issue of medical literacy, and involving patients in the process, when there isn’t yet a lot of evidence-based data [on trauma and emergency care processes]. Dr. Carr replied that the first step in looking at planning for emergency care is to recognize that it must be done from a population perspective (and not from a hospital-based individual perspective). He added that the second issue is that this is a discussion of undifferentiated complaints—people who don’t know what’s wrong with them—and that this is not the emphasis of most of the medical literature.

 

Dr. Millman asked Drs. Carr and Stewart for their recommendations for priorities for CER in the areas of trauma and emergency care. In response, Dr. Stewart said that hemorrhages, resuscitation, infection, disaster preparedness, burns, and traumatic brain injury were all high on his list. Dr. Carr added that, from a systems standpoint, shared infrastructure is critical.


Panelists

AcademyHealth
Polly Pittman, Executive Vice President

American College of Cardiology
James Fasules, Senior Vice President

American College of Emergency Physicians & Society for Academic Emergency Medicine
Brendan Carr, Assistant Professor

American Urological Association
Beth Kosiak, Associate Executive Director

Anceta – AMGA’s Collaborative Data Warehouse
John Cuddeback, Chief Medical Informatics Officer

Biotechnology Industry Organization
Ted Buckley, Director of Economic Policy

ECRI Institute
Jeffrey Lerner, President

Eunice Kennedy Shriver NICHD Maternal-Fetal Medicine Units Network
Matthew Hoffman, Director of Ob/Gyn Research

Fundamental Clinical Counseling, LLC
David Juba, Senior Research Analyst

Galen Institute
Grace-Marie Turner, President

Institute for Health Technology Studies
Martyn Howgill, Executive Director

Joseph H. Kanter Family Foundation
Joseph Kanter, Chairman

National Organization for Rare Disorders
Diane Dorman, Vice President

National Center for Patient Interactive Research
Bill Fox, Executive Director

National Trauma Center
Ronald Stewart, Board Chairman

Optimalpolicies
Eduardo Siguel, Director

Prevention Institute
Larry Cohen, Executive Director

RS Medical
Mark Pilley, Medical Director

Society for Medical Decision Making
Mark Roberts, President

Society of General Internal Medicine
Harry Selker, Executive Director,
Institute for Clinical Research and Health Policy Studies, Tufts Medical Center

Society of Thoracic Surgeons
Fred Grover, Chair, Council on Quality, Research, and Patient Safety

Universal American Corporation
Patricia Salber, Chief Medical Officer and Senior Vice President

UnitedHealth Group
Deneen Vojta, Senior Vice President

 

Presenters During the Open-Comment Period

American Society of Anesthesiologists
Alex Hannenberg, President-Elect

Care Management Technologies
Jack Gorman, Chief Scientific Officer

Executive Intelligence Review
Anton Chaitkin, Executive Editor

Foundation for Environmentally Triggered Illnesses
Diana Williams, Secretary

National Research Center for Women and Families
Diana Zuckerman, President


Council Members

Agency for Healthcare Research & Quality
Carolyn Clancy

Centers for Medicare and Medicaid Services
Tom Valuck

Department of Defense
Michael Kilpatrick

Health Resources & Services Administration
Mike Millman

National Institutes of Health
Lynn Hudson

Office of the Assistant Secretary for Planning and Evaluation
Jim Scanlon

Office of Management and Budget
Ezekiel Emanuel

Office of Minority Health, HHS
Garth Graham

Office of the National Coordinator, HHS
David Hunt

Office of the Secretary, HHS
Neera Tanden

Substance Abuse and Mental Health Services Administration
Pete Delany

Council Staff
Patrick Conway, Executive Director
Cecilia Rivera Casale, Deputy Director