Performance Plans for FY 2000 and 2001 and Performance Report for FY 1999

Appendix 5. Reports on Needs Assessment Activities

Outcomes Research

Note: The report on the evaluation study The Outcome of Outcomes Research at AHCPR is used as a primary resource for this section. Further discussion of the study can be found in Objective 4.1 of this GPRA report.

The full print version of The Outcome of Outcomes Research at AHCPR may be obtained by contacting Joanne Book at (301) 427-1488 or by writing to the Center for Outcomes and Effectiveness Research, AHRQ, 6010 Executive Boulevard, Rockville, MD 20852. The report is available online at http://www.ahrq.gov/clinic/outcosum.htm.

Background

In 1998-99, following a decade of investment in outcomes and effectiveness research (OER), AHRQ pursued several needs assessment and evaluation activities to ensure that future research investments would be informed by both a clear understanding of our customers' needs and an evaluation of prior successes and lessons learned. We held several meetings with stakeholders to obtain their input on future priorities and conducted quantitative analyses to set the stage for discussion. We also commissioned an evaluation titled The Outcome of Outcomes Research at AHCPR.

Consultation with stakeholders helped us identify several important customer needs:

  • More focus on outcomes improvement, i.e., understanding "what works" must be linked with strategies to enhance behavior and practice change.
  • A need for practical tools as well as publications. As one systems leader stated: "My job is to implement this research in my organization. What would make my job much easier is to get the chart review forms and other tools used by the researchers, rather than having to re-develop them myself."
  • Development of practice-based laboratories that can move the conduct of research closer to practice.
  • A strong interest in providing input to research initiatives in the formative phases.

The evaluation, conducted by the consulting firm The Lewin Group, was designed to:

  • Develop a framework for understanding and communicating the impact of OER on health care practice and outcomes.
  • Identify specific projects that illustrate the research impact framework.
  • Derive lessons and options from past efforts that can help develop strategies to increase the measurable impact of future research sponsored by AHCPR (1).

In addition to this report, the authors have written or contributed to several recent review articles about outcomes and effectiveness research (2, 3, 4).

Framework for assessing impact

A framework was developed that outlines an idealized process by which basic findings in OER are linked over time to increasingly concrete impacts on the health of patients. The four levels of impact are:

  • Level 1: Findings that contribute to but do not alone effect a direct change in policy or practice. These findings may add to an area's knowledge base and help focus subsequent research.
  • Level 2: Research that prompts the creation of a new policy or program.
  • Level 3: A change in practice, i.e., what clinicians or patients do.
  • Level 4: Actual changes in health outcomes.

This framework provided a context for linking progress in basic studies with changes in practice and improvement in outcomes.
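
To make the framework concrete, the following is a minimal sketch (in Python, with invented project names and findings) of how a portfolio of research findings might be tagged and tabulated by impact level; it is illustrative only, not a tool produced by the evaluation.

    # Hypothetical illustration of the four-level impact framework.
    from dataclasses import dataclass
    from enum import IntEnum

    class ImpactLevel(IntEnum):
        KNOWLEDGE = 1   # Level 1: adds to the knowledge base; focuses later research
        POLICY = 2      # Level 2: prompts a new policy or program
        PRACTICE = 3    # Level 3: changes what clinicians or patients do
        OUTCOMES = 4    # Level 4: measurably changes health outcomes

    @dataclass
    class Finding:
        project: str
        description: str
        impact: ImpactLevel

    # Invented examples; real findings would come from the PI survey below.
    findings = [
        Finding("Project A", "Documented regional variation in surgery rates",
                ImpactLevel.KNOWLEDGE),
        Finding("Project B", "Results adopted in a payer coverage policy",
                ImpactLevel.POLICY),
    ]

    # Tabulate the portfolio by level of impact reached.
    for level in ImpactLevel:
        n = sum(1 for f in findings if f.impact is level)
        print(f"Level {level.value} ({level.name.title()}): {n} finding(s)")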

Perspectives of principal investigators

Based on the premise that the researchers should have a clear understanding of their most important findings, a survey was mailed to all principal investigators (PIs) funded by COER during the period 1989-97, asking them to describe their most important work. Of the 95 PIs contacted, responses were received from 61 (64 percent). Results from the survey suggest that PIs have been most successful in providing increasingly accurate and detailed descriptions of what actually occurs in health care, developing tools for measuring costs of care and patient-reported outcomes, and identifying topics for future research. Few PIs reported findings that provide definitive information about the relative superiority of one treatment strategy over another. There also were relatively few examples of findings that have been incorporated into policy (level 2 impacts) or clinical decisions (level 3), or interventions that have measurably improved quality or decreased costs of care (level 4).

One of the main challenges for the next generation of outcomes studies is to move from description and development of methods to problem solving and quality improvement.

OER accomplishments

At least three conceptual developments have been strongly influenced by AHRQ-sponsored work:

  • The increasing recognition that evidence, rather than opinion, should guide clinical decisionmaking.
  • The acceptance that a broader range of patient outcomes needs to be measured in order to understand the true benefits and risks of health care interventions.
  • The perspective that research priorities should be guided in part by public health needs.

Other accomplishments include:

  • OER studies have often provided descriptive data that challenged prevailing clinical ideas about how best to manage specific clinical problems.
  • Tools and analytic methods have been developed, including strategies for conducting systematic reviews and meta-analysis (now used by AHRQ's Evidence-based Practice Centers and others), instruments for measuring health outcomes important to patients, and sophisticated techniques for analyzing observational data to adjust for disease severity and minimize bias (one such method is sketched after this list).
  • A growing appreciation of evidence-based medicine as a guiding framework for decisionmaking has intensified interest among clinicians, health systems leaders, and purchasers in information about the relationship between clinical and organizational interventions and patient outcomes. In particular, recent interest in quality measurement and improvement has resulted in increasing use of OER results as the basis for performance measures for report cards and accreditation.
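
As a concrete illustration of one of those analytic methods, the sketch below shows inverse-variance (fixed-effect) pooling, a standard meta-analytic technique; the study estimates are invented, and the code is not drawn from any AHRQ product.

    # Fixed-effect meta-analysis by inverse-variance weighting (illustrative).
    import math

    # (effect estimate, standard error) per study; e.g., log odds ratios.
    studies = [(-0.35, 0.12), (-0.20, 0.18), (-0.45, 0.25)]

    weights = [1.0 / se ** 2 for _, se in studies]   # inverse-variance weights
    pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

    print(f"Pooled log OR {pooled:.3f} (95% CI {lo:.3f} to {hi:.3f})")
    print(f"Pooled OR {math.exp(pooled):.2f}")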

Lessons learned about OER

Lessons were learned about study designs, use of data, and associated bias. Further work is needed to explore more systematically how to associate the features of a particular clinical problem with the most appropriate tools and methods to study that problem (given that the goal is to promote decisions that will improve outcomes of care). Additionally, research and experience have demonstrated that development and dissemination of high-quality, highly credible information is necessary to alter practices, but it is not enough. Enhanced knowledge must be linked with supportive practice environments and active implementation efforts.

Recommendations for future directions:

AHRQ can take steps to maintain its strength in methods and tools development, while increasing support for studies with greater potential for impact.
Response: In Fiscal Year 2000 two targeted research solicitations address methods development for understanding and eliminating racial and ethnic disparities in health care, and evaluation of strategies for translating research into practice. Grantees are expected to address explicitly how their methods and approaches will inform the needs of clinicians, health systems leaders and policymakers.

AHRQ could play a more active role in the transfer of knowledge and documenting change when it occurs.
Response: In addition to supporting targeted research solicitations focused on translating research into practice in Fiscal Year 1999 and Fiscal Year 2000, AHRQ will develop and implement a plan for making tools as well as information available to decisionmakers.

AHRQ could take on greater responsibility to make sure that once these critical knowledge gaps are identified, they are addressed in follow-on studies.
Response: This will be a high priority in Fiscal Year 2000, and this effort could be combined with a strategy for addressing the research agendas now generated by the Evidence-based Practice Centers.

AHRQ should leverage resources by seeking new partnerships in addition to maintaining collaborative efforts with HCFA and other payors, managed care organizations (MCOs) and medical groups, medical professional organizations, peer review organizations, and medical product manufacturers. Collaboration would ensure that potential studies are crafted to meet the applied needs of these organizations.
Response: A forthcoming task order contract with integrated delivery systems will provide a mechanism for working with health plans to use methods and information from OER. In addition, we have recently begun to solicit feedback from participants at AHRQ's User Liaison Program meetings on an ongoing basis about future research priorities. The new CERTs program also provides a mechanism for supporting public-private partnerships to improve the use of therapeutics.

There is a need for more attention to developing innovative methods and strategies, incorporating relevant environmental characteristics, for efficiently addressing the large number of unanswered questions about effectiveness and cost-effectiveness. There has been no concerted effort to craft a new methodological, organizational, and ethical framework for these studies. The conceptual infrastructure for conducting effectiveness trials needs further development.
Response: A methodological conference in October 1999 provided a forum for preliminary development of a new approach to assessing the effectiveness of clinical and organizational interventions. This conceptual work will continue in Fiscal Year 2000.

Improvements in outcomes measures and development of strategies to encourage their routine use are an essential future direction.
Response: AHRQ is soliciting research to encourage expanded use of outcomes measures in routine practice, and will use the new task order contract with integrated delivery systems for pilot demonstrations.

The Agency should consider developing the capacity to identify important research findings (generally level 1 impacts) and to assist in moving them to higher levels of impact.
Response: This is a high priority for Fiscal Year 2000.

A high level of interaction with stakeholders in the health care system will ensure that basic studies address real problems faced by those involved in health care delivery.
Response: We have developed and implemented several strategies to consult with stakeholders and customers on an ongoing basis: periodic meetings; publication of Federal Register notices to obtain input from customers prior to publishing a research initiative (done this year for CERTs and translating research into practice); soliciting feedback from participants in AHRQ's User Liaison Program meetings; periodic consultation with the National Advisory Council and others as we develop future initiatives.

AHRQ should support the development of practice-based laboratories that can move the conduct of research closer to practice.
Response: Two Fiscal Year 2000 initiatives, for practice-based research networks in primary care and a task order contract with integrated delivery systems, address this issue.


References

  1. Tunis S, Stryer D. The outcome of outcomes research at AHCPR. Rockville (MD): Agency for Health Care Policy and Research; 1999. AHCPR Pub. No. 99-R044.
  2. Clancy CM, Eisenberg JM. Outcomes research at the Agency for Health Care Policy and Research. Disease Management and Clinical Outcomes 1998;1(3):72-80.
  3. Clancy CM, Eisenberg JM. Outcomes research: measuring the end results of health care. Science 1998;282:245-6.
  4. Freund D, Lave J, Clancy C, Hawker G, Hasselblad V, Keller R, Schneiter E, Wright J. Patient outcomes research teams: contribution to outcomes and effectiveness research. Annu Rev Public Health 1999;20:337-59.
  5. Stryer D, Tunis S, Clancy CM. The outcome of outcomes research at the Agency for Health Care Policy and Research. J Gen Intern Med 1998;13(Suppl):51.

Health care quality, defined by the Institute of Medicine as "the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge," has proven far more difficult to measure and improve than to define. Traditionally, the assessment of quality in health care was accomplished through judgment of individual actions, supplemented by structural standards such as medical credentials and the clinical capabilities of a facility. Case review, subjective judgments of the skills of providers, facility inspections, and documentation of training comprised the bulk of quality measurement.

Over the last few years, however, patients, providers, purchasers, and policymakers have demanded more sophisticated means of measuring quality in health care. Quality of care is now measured through a combination of characteristics of the health care provider(s) and services (procedures or tests) that result in better outcomes for the patient. It can be measured through either experiential ratings or clinical performance measures. When the provider and services combine to improve the condition of the patient, and the patient is satisfied with his or her condition, the care is said to be of good quality. Quality is doing the right thing, for the right patient, at the right time, with the best results. AHRQ's efforts on improving quality have focused on three areas: quality measurement, quality improvement, and reporting of quality.
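
By way of illustration, most clinical performance measures take a numerator/denominator form: the share of eligible patients who received a recommended service. The sketch below, with an invented patient list and measure, shows the basic computation; it is not an AHRQ specification.

    # Rate-based clinical performance measure (illustrative data).
    patients = [
        {"id": 1, "diabetic": True,  "hba1c_tested": True},
        {"id": 2, "diabetic": True,  "hba1c_tested": False},
        {"id": 3, "diabetic": False, "hba1c_tested": False},
        {"id": 4, "diabetic": True,  "hba1c_tested": True},
    ]

    denominator = [p for p in patients if p["diabetic"]]        # eligible population
    numerator = [p for p in denominator if p["hba1c_tested"]]   # service received

    rate = len(numerator) / len(denominator)
    print(f"HbA1c testing rate among diabetic patients: {rate:.0%}")   # 67%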

This specification of our quality agenda is based on both formal and informal conversations with a wide range of users (and potential users) of AHRQ quality measurement products. Through our participation in the Committee on Performance Measurement (CPM) of the National Committee for Quality Assurance (NCQA), we learned of the need for a broader array of quality measures, particularly for vulnerable populations. This need was underscored by the National Rehabilitation Hospital Research Center and by members of AHRQ's own Advisory Committee. The Consumer Assessment of Health Plans (CAHPS®) project has provided numerous and regular opportunities for feedback from product users. CAHPS® schedules two user meetings each year as a means of training new users and obtaining feedback about existing products (and the need for new products) from current users. From these meetings, and from meetings of the CAHPS® grantee advisory committee, we learned of the need for a CAHPS® instrument that would allow patients to assess the quality of care provided by their doctors, medical groups, and clinics.

The need for individual provider, group or institutional level instruments has also been expressed by the American Medical Association and the Joint Commission on Accreditation of Healthcare Organizations. Development and testing of this instrument is on the CAHPS® agenda for the coming year.

Both CAHPS® users and the Work Group on Consumer Health Information have impressed on AHRQ the need to evaluate the effects of different reporting formats on the usability of quality information and how quality reports affect consumer and patient behavior. To determine the priority needs for quality measurement products for care given to children, AHRQ convened an expert meeting in 1999 along with other major children's health organizations, such as the David and Lucile Packard Foundation and the American Academy of Pediatrics. On a more "macro" level, the American Health Quality Association has underscored the importance of developing fundamental knowledge of what works in quality improvement, and in which situations. And experts from the United Kingdom and Europe (assembled at last year's Leeds Castle conference) recommended that AHRQ's Center for Quality Measurement and Instruments (CQMI) analyze the success of various mechanisms for translating research into practice.

The Quality Interagency Coordination Task Force (QuIC) has given AHRQ the opportunity to obtain feedback from our Federal partners, which has shaped our quality agenda. A QuIC subgroup on the working conditions of health care workers has encouraged us to consider developing new structural measures of health care that capture the influence of worker safety and working conditions on the quality of patient care.

1. Meeting the Need for a Wider Array of Quality Measures

Our pursuit of health care quality indicators and performance measures has produced a rapidly evolving and growing field of health services research and an increasingly complex array of quality yardsticks. Research sponsored by AHRQ has played a fundamental role in developing measures. In addition, accrediting bodies such as the National Committee for Quality Assurance (NCQA) and the Joint Commission on Accreditation of Healthcare Organizations have pushed the field significantly with their demands for valid, evidence-based metrics. As a result of these trends, AHRQ's COmputerized Needs-Oriented QUality Measurement Evaluation SysTem (CONQUEST), used for collecting and evaluating clinical performance measures, now has nearly 1,200 entries. This growth has required the development of an informal method of classifying measures.
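
The sketch below suggests how such a classification might be represented; the axes and sample entries are hypothetical, not CONQUEST's actual schema.

    # Hypothetical catalogue of performance measures with classification axes.
    from dataclasses import dataclass

    @dataclass
    class MeasureEntry:
        name: str
        condition: str        # clinical condition addressed
        population: str       # e.g., "children", "chronically ill", "general"
        level: str            # "plan", "group", or "individual provider"
        measure_type: str     # "structure", "process", or "outcome"
        evidence_based: bool

    catalog = [
        MeasureEntry("Beta-blocker after MI", "ischemic heart disease",
                     "general", "plan", "process", True),
        MeasureEntry("HbA1c testing", "diabetes", "general", "plan",
                     "process", True),
    ]

    # A classification scheme makes gap analysis straightforward: which
    # condition/level combinations lack a validated measure?
    covered = {(m.condition, m.level) for m in catalog if m.evidence_based}
    print(("depression", "individual provider") in covered)   # False: a gap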

Despite the explosive growth in quality measures, there are still major gaps in our capacity to capture important components of "quality" when examined against the framework of that taxonomy. There are a number of assessment instruments that help consumers choose among health plans, most notably the AHCPR-sponsored Consumer Assessment of Health Plans (CAHPS®) family of surveys, yet few measures that assist consumers in choosing among individual providers beyond the advice of trusted friends and relatives. For some common clinical conditions, such as heart failure, there are a number of evidence-based and validated quality measures.

But other common conditions that also have a major impact on quality of life and functional status, such as osteoarthritis and depression, have few extant measures that meet those criteria. A number of measures exist that can be applied to relatively healthy insured populations, but few are applicable to the most vulnerable segments of our population, including children and those who are chronically ill, disabled, or uninsured. Even where measures exist, there are fundamental questions to be resolved, including whether the data should be risk-adjusted and how they should be reported to decisionmakers at the clinical, organizational, and policymaking levels. Seemingly simple questions, such as whether the quality of health care in the United States is improving or declining, cannot be answered with the current measurement capacity.
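
To illustrate the risk-adjustment question, the sketch below applies indirect standardization, one common approach: observed events are compared with the number expected given each patient's risk. The risk predictions and data are invented.

    # Indirect standardization: observed-to-expected (O/E) ratio (illustrative).
    observed = [1, 0, 0, 1, 0]                   # e.g., 30-day mortality, per patient
    expected = [0.30, 0.05, 0.10, 0.40, 0.15]    # model-predicted probabilities

    oe_ratio = sum(observed) / sum(expected)     # >1.0 means worse than expected
    print(f"O/E ratio: {oe_ratio:.2f}")          # 2 / 1.00 = 2.00

    # A risk-adjusted rate rescales the O/E ratio by a reference population rate.
    reference_rate = 0.12
    print(f"Risk-adjusted rate: {oe_ratio * reference_rate:.1%}")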

In summary, we have an ever-increasing array of evidence-based and validated quality measures, yet they still apply only to a relatively narrow set of measurement levels, conditions, and populations. As a result, consumers, providers, and policymakers are often forced to rely on subjective judgments to inform important decisions regarding health care.

Research initiatives in quality measurement at the Agency are focused on broadening the availability of quality measures. In Fiscal Year 1999 these activities included efforts to expand quality measurement to the most vulnerable populations and to extend measurement down from the health plan level to the provider level. Specific activities included:

Quality Measurement for Vulnerable Populations RFA (HS-99-001). This RFA was issued to develop and test new quality measures that can be used in the purchase or improvement of health care services for populations identified as vulnerable in Quality First: Better Health Care for All Americans, the report of the President's Advisory Commission on Consumer Protection and Quality in the Health Care Industry.

Funded grants under this RFA include:

  • Measuring Patient Satisfaction: Low Literacy Populations (HS10299).
  • Measuring the Quality of Care for High-Risk Infants (HS10328).
  • A Patient-Centered Quality Measure for Asian-Americans (HS10316).
  • Measuring Quality of Care for Vulnerable Children (HS10317).
  • Quality Measurement in Residential Care (HS10315).
  • Prescription Benefits As A Quality Measure (HS10318).
  • Computerized Tool Assessment in Low Literacy Patients (HS10333).
  • Measuring the Quality of Care for Diabetes (HS10332).
  • Facility Effects on Racial Differences in New Hampshire Quality (HS10322).
  • Quality Measures for Severe/Persistent Mental Illness (HS10303).
  • Cultural Relevance of a Community of Care Measure (HS10335).
  • Using Census Data to Monitor Care to Vulnerable Groups (HS10295).

We also initiated an effort to develop a provider level CAHPS® instrument with an expert meeting in Fiscal Year 1999.

2. Developing the Basic Science of Quality Improvement

Although advances in the measurement of quality are a necessary component of health care quality improvement, they alone are not sufficient. Progress in quality measurement has not been complemented by comparable advancement in our ability to systematically translate that information into improvement. As a result, a substantial gap between quality information and improvement has developed which is likely to grow without focused research to provide an evidence base for the application of quality improvement strategies in clinical policy making. This was recognized by the President's Advisory Commission on Consumer Protection and Quality in the Health Care Industry, which recommended the continued development and dissemination of evidence-based information to guide management policies that can improve health care quality.

Over the last 3 decades a variety of approaches have been used to foster quality improvement in health care. The adoption of industrial models for quality improvement has been one method for addressing variations in health care quality. Additional methods employed to improve health care quality have included the use of regulations, focused incentives, behavioral interventions, academic detailing, and the use of information systems. There have been some documented success stories in applying these techniques to quality improvement. Recent state and regional efforts have also attested to the potential of quality improvement efforts for specific conditions such as ischemic heart disease.

Despite these successes, health care quality improvement efforts have often been met with skepticism from both providers and policymakers. The few published evaluations of quality improvement efforts conducted to date have shown mixed results. For example, the application of continuous quality improvement to the management of clinical outcomes has shown some promise in nonrandomized studies, but randomized trials have failed to show a meaningful impact on clinical outcomes or organization-wide improvement.

Recent work has identified significant barriers to the successful application of continuous quality improvement in health care, a first step toward overcoming them. Successful quality improvement programs have usually been conducted in single institutions, addressed one condition with one intervention, had modest sample sizes, and used historical controls. Consequently, limits on the interpretation and generalizability of these results have constrained their utility for achieving more global improvements in health care.

This situation is unlikely to change without a fundamental understanding of:

  • Which quality improvement efforts work for particular conditions, populations and circumstances.
  • The use of complementary strategies, and collaboration between provider institutions and organizations aimed at improving quality.

Quality improvement efforts resulting in error reduction, enhanced patient safety, improvements in appropriateness, service enhancements, and waste reduction are plausible means of providing Americans with high quality care at reasonable cost. A first step toward harnessing the potential of quality improvement is a rigorous analysis of improvement strategies to build a fundamental, "basic science" understanding of their relative merits. That understanding will foster the appropriate application of quality improvement techniques in the future.

Research initiatives in quality improvement at the Agency are focused on developing a fundamental and generalizable picture of what works to improve quality. Specific activities in Fiscal Year 1999 included:

Translating Research Into Practice RFA (HS-99-003). This RFA was issued to generate new knowledge about approaches, both innovative and established, that are effective and cost-effective:

  • In promoting the use of rigorously derived evidence in clinical settings.
  • In leading to improved health practice and sustained practitioner behavior change (with particular interest in studies that implement AHRQ-supported evidence-based tools and information).

Selected grants funded under this RFA include:

  • Do Urine Tests Increase Chlamydia Screening in Teens? (HS10537)
  • Improving Diabetes Care Collaboratively in the Community. (HS10479)
  • Evidence-based Surfactant Therapy for Preterm Infants. (HS10528)
  • Practice Profiling to Increase Tobacco Cessation. (HS10510)

Assessment of Quality Improvement Strategies in Health Care (HS-99-002). This RFA was issued to evaluate strategies for improving health care quality that are currently in widespread use by organized quality improvement systems. Funded projects are expected to expand the conceptual and methodological basis for improving clinical quality and to analyze the relative utility and costs of various approaches to quality improvement.

Selected grants funded under this RFA include:

  • Organizational Determinants of HIV Care Improvement. (HS10408)
  • Improving Heart Failure Care in Minority Communities. (HS10402)
  • Strategies for Continuous Quality Improvement (CQI) Efforts: A National Randomized Trial. (HS10403)
  • Hospital Performance and Beta-Blocker Use After MI. (HS10407)
  • Evaluating Quality Improvement Strategies. (HS10401)

3. Improving the Impact of Quality Information: Making Quality Count

A perfect quality measurement system is of limited value if its information is not accessible to decisionmakers. A particularly pressing challenge in this area is producing public quality reports that convey meaningful information. To date, the attention paid to the development of measures has been far greater than that given to the creation of reporting formats. Serious challenges lie ahead in the development of quality reports, including variations in the graphical presentation of information, availability at the time decisions are made, and the need to tailor the information to the audience. Additional challenges include ensuring that the information presented is balanced and fair, particularly with respect to differences in casemix.
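
One small example of these reporting challenges: translating a risk-adjusted score and its confidence interval into a consumer-facing category, while declining to classify providers whose intervals are too wide to support a judgment (a concern raised in reference 21). The thresholds and data below are invented.

    # Banding a risk-adjusted quality score into a report category (illustrative).
    def report_category(score, ci_low, ci_high, benchmark=0.80):
        if ci_high - ci_low > 0.20:
            return "insufficient data"     # too imprecise to classify fairly
        if ci_low > benchmark:
            return "above benchmark"
        if ci_high < benchmark:
            return "below benchmark"
        return "no different from benchmark"

    print(report_category(0.88, 0.84, 0.92))   # above benchmark
    print(report_category(0.75, 0.60, 0.90))   # insufficient data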

Research initiatives in quality reporting at the Agency are focused on developing an evidence-based approach to presenting meaningful information to decisionmakers. Specific activities in Fiscal Year 1999 included:

Making Quality Count Expert Meeting. This meeting brought together researchers, media specialists, and users of quality information to discuss the specific needs for reporting research. The research agenda derived from this meeting will be the basis for further activities in this area beginning in Fiscal Year 2000.


References

  1. Lohr KN, editor. Medicare: a strategy for quality assurance, Volume 1. Washington (DC): Institute of Medicine, National Academy Press; 1990.
  2. Blumenthal D. Part 1: Quality of care—what is it? NEJM 1996 Sep 19; 335(12):891-4.
  3. Brook RH, McGlynn EA, Cleary PD. Measuring quality of care. NEJM 1996; 335:966-70.
  4. Sennett C. Moving ahead, measure by measure. Health Aff (Millwood) 1998 Jul-Aug; 17(4):36-7.
  5. CONQUEST is available at http://www.ahrq.gov/qual/conquest.htm
  6. CAHPS® documentation is available at http://www.cahps.ahrq.gov/
  7. Mangione-Smith R, McGlynn EA. Assessing the quality of healthcare provided to children. Health Serv Res 1998 Oct; 33(4 Pt 2):1059-90.
  8. Schuster MA, McGlynn EA, Brook RH. How good is the quality of health care in the United States? Milbank Q 1998; 76(4):517-63.
  9. Green J, Wintfeld N, Krasner M, Wells C. In search of America's best hospitals: the promise and reality of quality assessment. JAMA 1997; 277:1152-5.
  10. Chassin MR, Galvin RW. The urgent need to improve health care quality. Institute of Medicine National Roundtable on Health Care Quality. JAMA 1998 Sep 16; 280(11):1000-5.
  11. President's Advisory Commission on Consumer Protection and Quality in the Health Care Industry. Quality First: Better Health Care for All Americans. Washington, DC: U.S. Government Printing Office; 1998.
  12. Laffel G, Blumenthal D. The case for using industrial quality management science in health care organizations. JAMA 1989; 262:2869-73.
  13. Evans RS, Pestotnik SL, Classen DC, et al. A computer-assisted management program for antibiotics and other anti-infective agents. NEJM 1998; 338:232-8.
  14. Soumerai SB, McLaughlin TJ, Gurwitz JH, Guadagnoli E, Hauptman PJ, Borbas C, Morris N, McLaughlin B, Gao X, Willison DJ, Asinger R, Gobel F. Effect of local medical opinion leaders on quality of care for acute myocardial infarction: a randomized controlled trial. JAMA 1998; 279:358-63.
  15. Marciniak TA, Ellerbeck EF, Radford MJ, Kresowik TF, Gold JA, Krumholz HM, Kiefe CI, Allman RM, Vogel RA, Jencks SF. Improving the quality of care for Medicare patients with acute myocardial infarction. JAMA 1998; 279:1351-7.
  16. Shortell SM, Bennett CL, Byck GR. Assessing the impact of continuous quality improvement on clinical practice: what it will take to accelerate the progress. Medical Care 1998, in press.
  17. Blumenthal D, Kilo CM. A report card on continuous quality improvement. Milbank Q 1998, in press.
  18. Berwick DM. As good as it should get: making health care better in the new millennium. Paper for the National Coalition on Health Care. Washington, DC; 1998.
  19. Lansky D. Measuring what matters to the public. Health Aff (Millwood) 1998 Jul-Aug; 17(4):40-1.
  20. Epstein AM. Rolling down the runway: the challenges ahead for quality report cards. JAMA 1998 Jun 3; 279(21):1691-6.
  21. Hofer TP, Hayward RA, Greenfield S, Wagner EH, Kaplan SH, Manning WG. The unreliability of individual physician "report cards" for assessing the costs and quality of care of a chronic disease. JAMA 1999 Jun 9; 281(22):2098-105.
