Evaluation of AHRQ's Partnerships for Quality Program

Chapter III. What Did Grantees Seek To Do? (continued)

D. Expected Outcomes and Evaluation Approaches

The AHRQ solicitation required all PFQ projects to evaluate the effects of their interventions, though it did not clearly specify how the evaluation was to be conducted or what purpose it would serve.10 As discussed in Chapter I, some originators of the PFQ concept viewed the evaluation requirement more as a mechanism for feedback than as research for its own sake. According to this view, evaluation was intended to document how projects were helping to move evidence-based research findings into practice on a large scale.

Grantees, however, interpreted the requirement in different ways, and some paid more attention to it than others. Grantees varied in how clearly they sought to measure the outcomes of their work, how rigorously they pursued their analyses, how much of the grant resources they allocated to the evaluation, and how they viewed the role of evaluation findings in achieving their overall goals.

The rest of this chapter reviews key characteristics of the evaluations proposed by grantees, including the outcomes measured, the research designs used, and the affiliations grantees developed to support the evaluation. Appendix Table A.3 provides more detail on evaluation approaches and measures for each grantee. The chapter concludes with a brief discussion of how the variation in evaluation approaches influences the ability of this evaluation to draw insights or compare results across grantees.

1. Evaluation Focus

The focus of evaluation efforts typically differed between clinical improvement and bioterrorism projects. Most of the clinical improvement projects sought to evaluate their success by measuring improvements in the process of care and in clinical outcomes. In contrast, the bioterrorism grants planned to measure success simply by whether they produced findings on how health care providers could improve emergency preparedness.

Projects Focused on Improving Clinical Quality. As discussed previously, 17 grants had this as their goal, including 15 that sought to directly influence provider behavior. Of the 15, all but three (AMA, JCAHO, RTI) planned to measure the changes in care processes that resulted from their work under the grants. The American Academy of Pediatrics grant, for example, planned to compare the percentage of patient charts demonstrating target levels of care for seven ADHD care components between practices that enrolled in e-QIPP and received AAP training support and practices that only entered data into the e-QIPP system. Ten projects (ACP, AHA, AMDA Foundation, ACNL/CalNOC, CHCA, ISIS, Lehigh Valley, NYS-DOH, PMSI, VNSNY) intended to go further by capturing data on patient outcomes of care as well.

The clinical outcomes were most often short-term changes in patient laboratory values, patient satisfaction, and similar measures that might be expected to change within the time frame of the project. The Lehigh Valley Hospital and Health Network project, for example, planned to evaluate its work on both process and outcome measures by monitoring diabetes process-of-care measures and selected indicators of diabetes control for patients in participating physician practices at baseline, 6 months, and 12 months post-intervention. Similarly, the New York State Department of Health planned to examine the degree to which facilities and staff implemented interventions (the process measures), as well as patient falls, hospitalizations, weight loss, and incontinence (the outcome measures), by comparing pre-post measures for two intervention groups and one control group. In addition, the American College of Physicians planned to conduct telephone surveys before, during, and after the intervention to evaluate patient satisfaction.

Two projects planned to collect financial information. The project led by the American Hospital Association/HRET planned to compare baseline financial data from three learning labs with post-program data from six learning labs. This metric was likely included because of the PI's interest in creating a business case for implementing palliative care units in hospitals.11 Lehigh Valley Hospital and Health Network also planned to obtain financial data to help it calculate the cost of its interventions.

To provide context for understanding these outcomes, some grantees proposed a process evaluation. For example, International Severity Information Systems planned to conduct staff focus groups and interviews to gauge staff satisfaction; it also planned to examine how the intervention supported the use of best-practice protocols in study units, became integrated into daily workflow, achieved process efficiencies, and gained user acceptance. The American Academy of Pediatrics planned to monitor the frequency of and participation in QI activities in treatment and control practices, as well as to collect qualitative information on the factors promoting AAP chapters' ability to develop and sustain QI activities. VNSNY also planned to track implementation experiences and perceptions of value by surveying CEOs and other staff in participating home health agencies.

Three of the 15 grantees focused on improving clinical care but did not plan to measure their success based on actual change in the process or outcomes of care (AMA, JCAHO, RTI). The AMA project's planned measure of success was the ability to show that physician groups could transfer clinical data electronically and that the data could be compared to AMA performance standards. JCAHO did not plan to formally evaluate its project, though it did plan to track progress through its survey of hospitals' perceptions of the value of JCAHO's core performance measures for quality improvement initiatives. The RTI project's primary measure of success was the production of lessons on how to create effective partnerships for translating research into practice, based on the experiences of its integrated delivery system partners in spreading effective quality improvement methods within and across their systems.

The purchaser-focused grants proposed to gauge their success by whether they could modify reimbursement systems and incentives to promote quality care, rather than by measuring changes in care per se. The most ambitious of these was The Leapfrog Group's plan to study whether purchaser incentives would influence employees' choice of hospitals if employees received a discount for using hospitals that met Leapfrog's patient safety standards. HealthFront proposed to measure the proportion of the insured population in two markets that was subject to "aligned incentives."

Projects Focused on Bioterrorism Preparedness. The bioterrorism-focused grants proposed to judge their success by whether they produced findings about what is needed to improve health care system preparedness. The exception was the Connecticut Department of Public Health, working with the Yale/New Haven Hospital System's Office of Emergency Preparedness, which planned to formally measure its success in improving physicians' knowledge of bioterrorism preparedness.

2. Research and Evaluation Approaches

Formal research designs were employed in 12 of the 15 clinical projects that focused on processes and outcomes of care, and in one of the bioterrorism preparedness projects. Rigor and approach varied across these grants. In most cases, investigators proposed quasi-experimental designs that involved pre-post measurement of relevant clinical or other indicators (sometimes with comparison groups), along with qualitative studies of implementation processes and participant experiences. Only one grantee, the AMDA Foundation, used a randomized design; it randomly assigned each participating nursing home to one of two clinical practice guideline implementation groups, each serving as a cross-control for the other. A few other grantees compared results for intervention groups with those for control groups by allowing members of the control group to participate in the intervention after the intervention group completed data collection.
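
As an illustration of the logic behind these designs (our sketch, not a formula specified by any grantee), the contrast that a pre-post design with a comparison group seeks to estimate can be written as

$$\Delta = \left(\bar{Y}^{I}_{\text{post}} - \bar{Y}^{I}_{\text{pre}}\right) - \left(\bar{Y}^{C}_{\text{post}} - \bar{Y}^{C}_{\text{pre}}\right),$$

where $\bar{Y}^{I}$ and $\bar{Y}^{C}$ denote the mean value of a clinical indicator in the intervention and comparison groups, respectively, measured before and after the intervention. Designs without a comparison group rely on the first difference alone and therefore cannot separate intervention effects from secular trends.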

3. Evaluation Responsibility

Many of the evaluations were carried out by the grantee organizations themselves, many of which are non-academic applied research groups, such as Altarum, ISIS, and RTI, or research arms of provider organizations, such as JCAHO's Division of Research, VNS of New York's Center for Home Care Policy and Research, Lehigh Valley Hospital and Health Network's Community Health Studies division, and AMA's Clinical Quality Performance Measurement unit.

Some grantees worked closely with researchers or quality improvement measurement experts from non-academic research institutions. For instance, the New York State Department of Health had co-PIs from the Research Division of the Hebrew Home for the Aged at Riverdale. HealthFront worked with researchers from the Park Nicollet Institute. The AMDA Foundation worked closely with Quality Partners of Rhode Island, the CMS-designated QIO support center for nursing home quality improvement.

A few projects engaged researchers from either academia or other research institutions to conduct independent evaluations of their projects. These included Catholic Health Partners, which had an academic researcher conduct a formative evaluation; the Leapfrog Group, which had three academic researchers conducting process and outcome evaluations of its pilot projects; and AMA, which sub-contracted with RAND for an evaluation.


E. Implications of Diverse Projects for Evaluation

In evaluating a program like PFQ, which includes grantees with diverse goals, one can evaluate outcomes against overall program goals as well as against the individual goals each grantee set for itself in the proposal that AHRQ funded.

In terms of overall goals, AHRQ clearly desired PFQ to have a broad reach in changing health care delivery. Hence, the scale of grantee efforts and their collective reach are important issues to examine as part of the overall evaluation of the PFQ program. To our knowledge, the agency was less prescriptive about strategies for translating research into practice, and about how trade-offs were to be made when projects offered the potential for influence through large-scale national sponsors but proposed approaches that were less directly or immediately tied to changing individual provider performance within the time period of the grant. In addition, AHRQ itself acknowledged that, given the novelty of the PFQ program, it expected grantees would learn as they went along. In this context, only a subset of grants might be expected to succeed even if the program as a whole was successful.

We can also assess grantees' successes against their own goals and their implementation progress, but only a subset of projects was designed to achieve (or measure) change in clinical practice. In the next chapter, we evaluate grantees' successes through an overall assessment of their collective experience, while remaining sensitive to the differences in the goals each grantee set and how concretely it planned to measure success.


10. The RFA stated, "AHRQ intends that funded projects be models, and as such yield information that may be useful to other organizations.  Evaluation relevant to an individual project must be part of all plans, with an emphasis on acquiring information that will permit assessment and reporting of progress against approved aims as well as internal decision making by the grantee and consortium members.  Cost and other resource dimensions must be addressed in evaluation at this level."

11. The RFA stated, "Documentation of results must include benefits to patients and also costs and benefits to individual providers and to the organizations that are likely to have a bearing on long-term adoption and sustainability of the changes [emphasis added].  In other words, it is desirable to 1) institute policy, organizational, or operational efforts that will motivate and support changes in practice to improve quality, and 2) provide evidence that the changes in quality are cost-beneficial to the relevant participants so that they can be expected to continue, independent of this or other grant funding."

