1002 — Profiling Facilities Using Three Different Performance Measurement Systems

Author List:
Kerr EA (Ann Arbor VA COE and University of Michigan)
Hofer TP (Ann Arbor VA COE and University of Michigan)
Hayward RA (Ann Arbor VA COE and University of Michigan)
Hogan MM (Ann Arbor VA COE)
Adams J (RAND Health)
Asch SM (Ann Arbor VA COE and University of Michigan)

Objectives:
Many performance measurement (PM) systems, including those endorsed by VA and the NCQA, attempt to distinguish the quality of care provided by health plans or facilities. Despite their expense, we know little about how different PM methods vary in their ability to discriminate among facilities. We therefore examined the quality of care in 26 VA facilities using three different standard PM approaches to determine how closely the three agreed on quality and how many patient charts must be reviewed to produce reliable facility-level quality profiles.

Methods:
We examined processes of care for 621 patients in 2 VISNs using three different PM tools: a focused explicit tool (41 measures for 6 conditions/prevention), a global explicit tool (376 measures for 26 conditions/prevention), and a structured implicit review instrument (physician-rated care). Trained nurse abstractors and physicians reviewed all medical records. Correlations between facility-level scores from the three PM tools were adjusted for the patient-level reliability of each tool using multilevel regression models.
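The abstract does not give the adjustment formula; one standard approach, sketched here for illustration rather than as the authors' exact method, is to disattenuate each observed facility-level correlation using the reliabilities estimated from the multilevel (patients-within-facilities) models:

\[ \hat{r}_{\text{adjusted}} = \frac{r_{\text{observed}}}{\sqrt{\rho_A\,\rho_B}} \]

where \(\rho_A\) and \(\rho_B\) denote the facility-level reliabilities of the two PM tools being compared.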

Results:
Intercorrelations of facility-level scores were generally high across all three tools. For example, overall quality was correlated at 0.5 between the implicit and global tools, 0.34 between the implicit and focused tools, and 0.72 between the global and focused tools. Correlations were even higher for conditions with a large evidence base (e.g., diabetes, preventive care). However, only 26 records per facility would be needed to produce an overall quality estimate with 0.8 reliability using the implicit tool, while 70 records would be needed using the focused tool and 290 using the global tool.
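The records-per-facility figures follow the logic of a Spearman-Brown projection; the abstract does not state the exact formula used, but under that standard approach the number of records \(n\) needed to reach a target facility-level reliability \(R\), given a tool's single-record reliability \(\rho\), is

\[ n = \frac{R}{1-R}\cdot\frac{1-\rho}{\rho}. \]

With \(R = 0.8\) the first factor equals 4, so the reported 26 records for the implicit tool would imply a single-record reliability of roughly \(\rho \approx 0.13\) under this formula (a back-calculation shown for illustration, not a figure reported in the abstract).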

Implications:
We found high agreement in facility-level scores across the three PM tools for most clinical areas, indicating that all three tools were tapping a similar underlying dimension of quality. However, the implicit tool reliably discriminated quality among facilities with far fewer records than either explicit tool.

Impacts:
Although often criticized for low reliability in distinguishing quality between individual cases, structured implicit review may be an excellent tool for detecting quality differences across facilities. While explicit review tools have other advantages (e.g., more actionable results), the number, type, and weighting of measures within each tool may influence its ability to reliably and efficiently classify facilities on quality.