Appendix B: Summaries of PFQ Grantee Activities (continued)
PFQ Grant Summary: Measuring Performance and Bioterrorism Preparedness: An Impact Study
Lead Organization: Joint Commission on Accreditation of Healthcare Organizations (JCAHO)
Partner Team: Technical Expert Panels; hospitals, community health centers, and other health care organizations
Title: Measuring Performance and Bioterrorism Preparedness: An Impact Study
Topic Area: Core Performance Measurement/Quality Improvement and Emergency Preparedness
Principal Investigator: Jerod M. Loeb, PhD, Executive Vice President, Division of Research
AHRQ Project Officer: Sally Phillips, PhD, RN
Total Cumulative Award: $1,181,351
Funding Period: 9/2002–9/2006
Project Status: Bioterrorism Preparedness: complete, pending submission of final report. Performance Measurement: data analysis continues; received a no-cost extension until September 2007.
1. Project Description
Goals. This project had two distinct components. The first sought to evaluate the
impact of evidence-based performance measurement on perceptions about and the
perceived value of quality improvement efforts. For this component, the
project examined evidence-based process-of-care practices for five core
performance measure sets: acute myocardial infarction, heart failure,
pneumonia, pregnancy and related conditions, and surgical infection prevention.
It analyzed relationships between core performance measure data and perceptions
about their value, actions taken, and the impact of interventions. The second
project sought to assess the existence of linkages for emergency
preparedness between health care organizations and community responders and
other stakeholders, including public health, public safety, and governmental
administrative agencies. This component planned to compare these linkages in
communities that had experienced a disaster with those that had not, and
identify exemplary practices.
Activities and Progress
Performance Measurement Project. In Year 1, to determine the accuracy, completeness,
and reliability of core measures records abstraction, JCAHO project staff
re-abstracted up to 30 medical records at 30 randomly selected test hospitals
for JCAHO core measure sets in acute myocardial infarction (AMI), heart failure
(HF), community-acquired pneumonia (PN), and pregnancy and related conditions
(PR). Project staff compared results of the re-abstractions, data element by
data element, to the original hospital data abstraction. Following this, 90
hospitals conducted their own re-abstraction of the core measure data. In Years
1 and 2, project staff analyzed the data and conducted interviews with hospital
staff to discuss discrepancies and identify systemic issues with the data
collection process.
During Years 1 and 2, surveys were sent to 1,971 hospitals to
investigate staff perceptions of quality improvement efforts and the value of
core performance measurement and actions taken in response to the measurement
process. The results were compared to hospitals' performance measure data.
Project staff conducted site visits to 40 of the hospitals that completed the
survey (36 on-site and 4 teleconference visits). During Year 3, invitations to
participate in an online survey were sent to the same hospitals. In Years 3/4,
in-person interviews were conducted at 29 hospitals, representing a mix of
those with high perception/high performance and those with low perception/low
performance. The in-person interviews were extensions of the surveys, providing
more detail about factors influencing perceptions and performance. Data
analysis is ongoing and will be completed during the one-year no-cost extension.
Bioterrorism Preparedness Project. In Year 1, the project assembled a Technical
Expert Panel (TEP) composed of nine panel members representing a range of
organizations and professions, including hospital administrators, emergency
response personnel, local and state public health officials, and law
enforcement, and engaged a project consultant. The grantee, with assistance
from the TEP, developed a framework of seven major topic areas to be used in
assessing the existence of linkages among health care organizations, community
responders, and stakeholders, and to identify exemplary practices.
In Year 2, based on the TEP's recommendations, the grantee developed a
questionnaire to be sent to a randomly selected sample of U.S. accredited and unaccredited medical/surgical hospitals from the American Hospital Association
database. Prior to implementation, the questionnaire was pilot-tested. The
project team invited 1,750 hospital CEOs to participate in the study, and the
final questionnaire was mailed to the CEO-designated contact person for the 678
hospitals that agreed to participate. Representatives of 575 hospitals returned
completed questionnaires. The project team analyzed the data to determine the
prevalence and breadth of hospital and community linkages related to emergency
preparedness. The aggregate results were sent to participating hospitals, as
agreed when they consented to participate in the study.
In Year 3, project staff continued to analyze the data from the hospital questionnaires
and developed and submitted a manuscript describing the results of the hospital
analyses. Project staff also identified potentially innovative practices for
inclusion in the Joint Commission publication, Standing Together: An
Emergency Planning Guide for America's Communities.
Also in Year 2, the grantee assembled a new Technical Expert Panel subgroup for
assessing community emergency preparedness linkages in health centers. The
eight-member panel drew on both existing TEP members and referrals from the
TEP, including an expert from the Health Resources and Services Administration
(HRSA) to lead the subgroup. This new subgroup examined the hospital
questionnaires and provided feedback and suggested revisions for the resulting
60-item questionnaire to be implemented in federally funded health centers. In
Year 3, the grantee mailed the health center questionnaires to the executive
directors of 890 federally funded CHCs, of which 307 responded. The project
staff worked with the TEP subgroup for health centers to develop a strategy for
analyzing data. The remainder of Year 3 was used to conduct an initial health
center data analysis, to convene the health center TEP subgroup for a
discussion of aggregate findings, and to develop and disseminate these findings.
A six-month no-cost extension (to March 2006) of the bioterrorism component of
the grant was requested following the scheduled project end date of September
30, 2005; this allowed completion of (1) multivariate analysis of
health center data, (2) identification of innovative health center practices,
(3) manuscript preparation (health center results), (4) dissemination of
innovative health center practices, (5) continued preparation and finalization
of project report, and (6) presentation of findings.
2. Partnership Structure/Function
JCAHO was the
primary leader and actor for both studies funded under this grant. The JCAHO
project team did not have any partners for the performance measurement project,
although it viewed the grant funding as an opportunity to get feedback from
hospitals on JCAHO's required performance measures, and how they might be
improved for use in quality improvement activities. For the bioterrorism
preparedness project, the grantee convened an advisory TEP and TEP subgroup.
The TEPs met with the JCAHO project staff approximately every six months.
Table 1. Major Partner Organizations and Roles in the Project

Lead Organization (grant recipient):
Joint Commission on Accreditation of Healthcare Organizations (JCAHO)
Role: Developed questionnaires; conducted and provided general oversight for
the studies; wrote reports and disseminated results.

Key Collaborators (Bioterrorism Project):
Technical Expert Panel (TEP)—hospitals
Role: Advisory group (included AHA); helped construct the hospital
questionnaire and guide the analysis.
Technical Expert Panel subgroup—health centers
Role: Advisory group of health center representatives, including DHHS/HRSA's
Bureau of Primary Health Care; helped construct the health center
questionnaire and guide the analysis.

Target Organizations:
Performance Measurement Project: Nearly 1,500 hospitals participated in the
2 surveys; 69 hospitals participated in the in-person interviews.
Role: Conducted data abstraction and re-abstraction; completed surveys and
submitted them to project staff; identified participants for the in-person
interviews. (Each of the 29 interviews in the second round of in-person
interviews took approximately 2 hours to complete.)
Bioterrorism Project: 1,750 Joint Commission accredited and unaccredited
hospitals (random sample); 890 federally funded health centers (population).
Role: Completed questionnaires and submitted results to JCAHO project staff.
3. Project Evaluation and Outcomes/Results
Performance Measurement Project. The baseline level of data reliability appears to be
acceptable for measures used to assess and improve hospital performance.
Twenty of 21 performance measures examined showed no statistically significant
differences when comparing originally abstracted with re-abstracted data using
the Chi-Square test statistic for rate-based measures and the Wilcoxon test
statistic for continuous variable measures. The one statistically different
measure reflected higher performance measure rates when derived from the
originally abstracted data (p <0.05). The mean data element agreement rate
for the 61 data elements evaluated was 91.9 percent and the mean kappa
statistic for binary data elements was 0.68. Preliminary findings indicate
that overall data element agreement rates varied among measure sets and that,
in general, JCAHO's independent abstractors identified more data element
discrepancies than did the hospitals' self-re-abstractors. In other words,
hospital self-abstracted data were fairly accurate and reliable, but
re-abstraction by an independent third party detected more errors. This
information is important to those considering tying payment to performance
measures.
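The two reliability statistics reported above can be illustrated with a short sketch. This is not the study's code: the paired abstraction records below are hypothetical, and the functions simply show how a data element agreement rate and a kappa statistic are computed for one binary data element.

```python
# Sketch of the two reliability statistics described above, computed for one
# binary data element. The paired records are hypothetical, not study data.

def agreement_rate(a, b):
    """Fraction of records on which the two abstractions agree."""
    return sum(1 for x, y in zip(a, b) if x == y) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement (kappa) for two binary abstractions."""
    n = len(a)
    p_obs = agreement_rate(a, b)
    p_a1 = sum(a) / n          # abstractor A's marginal rate of "1"
    p_b1 = sum(b) / n          # abstractor B's marginal rate of "1"
    # Agreement expected by chance if the abstractions were independent.
    p_exp = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (p_obs - p_exp) / (1 - p_exp)

original     = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # hospital's own abstraction
reabstracted = [1, 1, 0, 0, 0, 1, 1, 0, 1, 0]  # independent re-abstraction

print(agreement_rate(original, reabstracted))          # 0.8
print(round(cohens_kappa(original, reabstracted), 2))  # 0.6
```

By the conventional Landis and Koch benchmarks, the study's mean kappa of 0.68 falls in the "substantial agreement" range.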
For the first survey, project staff received
1,141 completed surveys from 851 hospitals. From these respondents, a sample
of 40 hospitals was recruited to participate in 36 in-person and 4
teleconference interviews. For the second survey, nearly 600 hospitals
responded, and 29 in-person interviews were completed.
Preliminary results suggest relationships between the perceived value of core
measure sets and a variety of quality improvement actions. Further analysis
will evaluate the relationships between improvement actions and measure rates,
and will assess the qualitative data obtained during the in-person interviews.
Bioterrorism Preparedness Project. Of the 678 hospitals that received questionnaires, 575
submitted completed surveys. The study found deficient linkages between
hospitals, public health, and other critical response entities. The abstract
of the article, published in Annals of Internal Medicine in June 2006,
reported:
"In a weighted analysis, most hospitals (88.2%)
engaged in community-wide drills and exercises, and most (82.2%) conducted a
collaborative threat and vulnerability analysis with community responders. Of
all respondents, 57.3% reported that their community plans addressed the
hospital's need for additional supplies and equipment, and 73.0% reported that
decontamination capacity needs were addressed. Fewer reported a direct link to
the Health Alert Network (54.4%) and around-the-clock access to a live voice
from a public health department (40.0%). Performance on many of 17 basic
elements was better in large and urban hospitals and was associated with a high
number of perceived hazards, previous national security event preparation, and
experience in actual response."
Of the 890 health centers that received
questionnaires, 307 returned the survey. While 80 percent reported that their
communities had a group or committee responsible for emergency preparedness or
response planning, only 54 percent reported being represented in the group by
either a staff member (46 percent) or by the Primary Care Association or
network/consortium (8 percent). About half (54 percent) of health centers
reported that the community had established a role for all (22 percent) or some
(32 percent) sites in the event of an emergency. Thirty percent reported that
their role had been documented in the local/county emergency operations plan.
Twenty-seven percent had completed a collaborative threat and vulnerability
analysis with community responders for all or some sites. Twenty-four percent
of health centers reported that all (5 percent) or some (19 percent) sites had
participated in community-wide drills/exercises since 2001. Thirty percent of
responding health centers reported having responded to an actual public health
emergency or disaster, while an additional 11 percent reported having responded
to a potential or suspected emergency.
Stepwise logistic regression analysis also was
performed. The main outcome variable for this analysis was a composite measure
of the strength of community linkages. Having the highest cumulative linkages
indicator score was associated with 7 items: health centers that had an
emergency operations plan that was developed collaboratively with the community
emergency management agency, and those that had participated in community-wide
training, were 3.4 and 3.6 times more likely to have the highest summary
indicator score, respectively. Those whose staff had seen the community
emergency plan were nearly 3 times more likely to have the highest indicator
score, and those who had staff who were involved in community planning were
more than twice as likely to have the highest score. Health centers whose
community plan addressed their health need for additional supplies and
equipment were 3 times more likely to have the highest summary indicator
scores. Health centers that reported having a community emergency management
agency with the ability to reach a health center contact around the clock, and
those that reported staff as present or being represented at the community
emergency operations center during a response, were approximately 2.3 times
more likely to have the highest summary indicator score.
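The "times more likely" multipliers above are odds ratios from the stepwise logistic regression: for a predictor with fitted coefficient beta, the odds ratio is e^beta. The sketch below shows only that conversion; the predictor names and coefficient values are hypothetical, chosen to roughly reproduce the reported magnitudes, and are not the study's fitted model.

```python
# Sketch: converting logistic regression coefficients to odds ratios, the
# source of "X times more likely" statements. Coefficients are hypothetical.
import math

coefficients = {
    "plan_developed_with_emergency_mgmt_agency": 1.22,  # e^1.22 ~ 3.4
    "participated_in_community_wide_training":  1.28,   # e^1.28 ~ 3.6
    "staff_had_seen_community_plan":            1.08,   # e^1.08 ~ 2.9
}

for predictor, beta in coefficients.items():
    print(f"{predictor}: odds ratio = {math.exp(beta):.2f}")
```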
4. Major Products
Performance Measures Project:
- Mebane-Sims IL, Williams SC, Schmaltz SP, Koss RG and Loeb JM.
"Influence of Perceptions About Performance Measurement on Actions Taken to
Improve the Quality of Patient Care." Paper presented at the Annual Research
Meeting 2006, Seattle, WA, June 25, 2006.
- Williams SC, Watt A, Schmaltz S, Koss RG, Loeb JM. "Assessing the
Reliability of Standardized Performance Measures: Self versus Independent
Reabstraction." Int J Qual Health Care 2006;18:246-255.
- Williams SC, Watt A, Schmaltz S, Koss RG, Loeb JM. "Reliability of
Standardized Performance Measures: Self versus Independent Reabstraction."
Paper presented at the American Health Quality Association 2006 meeting,
January 2006.
- Williams S,
"Assessing the Reliability of Standardized Health Care Quality Indicators
Implemented Across the United States." Paper presented at the International
Society for Quality in Health Care, Indicator Summit, Dallas, TX, November 2,
2003.
- Watt A, Williams S, Lee K, Robertson J, Koss RG, Loeb JM. "Keen Eye on Core
Measures." Journal of the American Health Information Management Association
2003;74(10):21-25.
- Watt A, "A
Reliability Assessment of Performance Measure Data." Poster presentation at the
Academy Health 2004 Annual Research Meeting, San Diego, CA, June 2004.
Bioterrorism Preparedness Project:
- Loeb JM, Braun
BI, Wineman NV and Schmaltz SP. "Emergency Preparedness Planning and Exercises:
Comparing Hospital and Health Center Community Integration." To be presented
at the American Public Health Association Annual Meeting, Boston, MA, November 2006.
- Wineman NV, Braun BI, Barbera JA, Schmaltz SP, Loeb JM. "The Integration of
Health Centers into Community Emergency Preparedness Planning: An Assessment
of Linkages." Presented at the Academy Health Annual Research Meeting,
Seattle, WA, June 2006.
- Braun BI, Wineman NV, Finn NL, Barbera JA, Schmaltz SP, Loeb JM.
"Integrating Hospitals into Community Emergency Preparedness Planning."
Annals of Internal Medicine 2006 Jun 6;144(11):799-811.
- Wineman NV, Braun BI, Finn NL, Schmaltz SP, Loeb JM. "The Integration of
Healthcare Organizations into Community Emergency Preparedness Planning: A
National Baseline Assessment." Poster presented at the American Public Health
Association Annual Meeting, December 2005, Philadelphia, PA.
- Finn N, Braun BI, Wineman NV. "The Integration of Hospitals into Community
Emergency Preparedness Planning and Response: A Baseline Assessment." Poster
presented at the Academy Health Annual Research Meeting, June 2005, Boston, MA.
5. Potential for Sustainability/Expansion after PFQ Grant Ends
Research findings from these projects
could generate new research opportunities following the end of the grant
period. Some of the findings may be useful in developing research questions to
evaluate relationships between core performance measure data and clinical
outcomes, and in evaluating and designing pay-for-performance systems. The
survey instrument for the bioterrorism component may serve as a useful
checklist of hospital emergency preparedness measures, and an examination of
the depth of community linkages could be undertaken.
PFQ Grant Summary: Using Incentives to Drive Leaps in Patient Safety
Lead Organization: The Leapfrog Group
Partner Team: Purchaser (employer) and payer (health plan) groups in 6 different markets; evaluators/researchers from 3 universities; consultants from Medstat, Towers Perrin, and Ropes & Gray
Title: Using Incentives to Drive Leaps in Patient Safety—Implementation Phase
Topic Area: Incentive and reward programs to motivate providers to improve quality
Principal Investigator: Suzanne Delbanco (Leapfrog)
AHRQ Project Officer: Michael Hagan
Total Cumulative Award: $1,295,537
Funding Period: 10/2002–9/2006
Project Status: Received no-cost extension until September 2007
1. Project Description
Goals. This project began with a one-year "planning grant," which developed and
recruited payer and purchaser groups to pilot-test financial incentive and
reward programs targeting hospitals and consumers, in order to speed the
adoption of The Leapfrog Group's recommended hospital patient safety practices.
On behalf of the millions of Americans for whom many of the nation's largest
corporations and public agencies buy health benefits, The Leapfrog Group aims
to use its members' collective leverage to initiate breakthrough improvements
in the safety, quality, and affordability of health care.
The goal of the subsequent three-year "implementation grant" was to implement these
pilot projects in at least six health care markets around the country and
evaluate their effectiveness. Specific aims were to (1) document and understand
payers' and purchasers' interest in incentive and reward programs, and identify
organizational and market characteristics related to integrating such programs
into their purchasing decisions; (2) document and understand the decision
making processes purchasers and payers use to design and implement
interventions aimed at improving hospital quality and safety; and (3) measure
the impact of their interventions on employees' choice of hospitals and
hospitals' adoption of Leapfrog's recommended quality and patient safety
practices.
Activities and Progress
Phase 1 pilots:
- GE, Verizon, and
Hannaford Brothers Collaborative/Albany-Schenectady market. These three
large employers collaborated in designing and implementing a bonus program for
hospitals and financial incentives for consumers to use hospitals meeting
Leapfrog hospital patient safety standards. The group chose to use Leapfrog's
Hospital Rewards Program quality and efficiency measures in five clinical
areas. Hospitals would be eligible for rewards based on how they performed in
each of the areas. Leapfrog provided and arranged for technical assistance to
this group, including hosting webcasts for local hospitals and health plans
about the program, and conducting outreach to hospitals to solicit their
participation. The pilot has not yet been implemented (it was on hold as of
June 2006) because of hospitals' reluctance to participate due to uncertainty
about the availability of bonus funds, and because the data vendor has not yet
agreed to release the data necessary to compile the measures. The evaluation
team has monitored the pilot's progress and had planned to conduct a survey of
hospitals regarding their willingness/unwillingness to participate, but this
survey also is on hold.
- Healthcare 21 (HC21) Business Coalition/Eastern and Central Tennessee. This
pilot worked to implement a "tier and steer"
incentive program to direct patients to high performing hospitals. Leapfrog
helped with measure development and legal assistance. HC21 constructed a
consumer guide on selecting hospitals based on Leapfrog's recommended patient
safety practices (aka "leaps"), and has been working with a few employers on
new benefit designs to encourage employees to use higher performing hospitals. The
majority of employers, however, were wary of proceeding with any benefit plan
changes because health plans in the state also are designing new benefit
packages along these lines, a role that employers believe health plans are
better suited to fill, and the project has stalled.
- Boeing Company/Seattle; Wichita, Kansas; and Portland, Oregon.
This pilot adopted a benefit differential to encourage certain members of its
PPO to use hospitals that met Leapfrog's quality and patient safety practices.
Under an arrangement negotiated with two unions representing certain Boeing
employees, the Hospital Safety Incentive allowed PPO-enrolled employees to
obtain 100% coverage after the deductible for services in a
"Leapfrog-compliant" hospital, versus 95% coverage in a non-compliant
hospital. Boeing does not plan
to continue the benefit design, but machinists with the benefit in their
current contracts will retain the design for three more years. Boeing worked
with Leapfrog, Medstat, and its plan administrator to identify which hospitals
met Leapfrog's standards. The evaluation team used a pre- and post-measurement
design of employees affected and unaffected by the program. Boeing currently
is examining the post-measurement results.
- Maine Health Management Coalition (MHMC)/Maine. This pilot created a bonus
pool of about $1 million for high performing hospitals. Hospitals could receive
bonus funds by meeting certain performance standards. The 10 participating
hospitals and 9 participating purchasers contributed to the bonus pool; the
funds from hospitals are redistributed from lower to higher performing
hospitals with purchasers contributing some "new money." Hospitals can lose
their contribution if they do not meet certain performance thresholds, or gain
a bonus for exceeding them. Medstat collected data to calculate a score based
on patient satisfaction, patient safety, clinical measures, and efficiency.
Leapfrog assisted with incentive and reward methodology and administration.
Intended to begin in July 2005, the pilot's implementation was delayed until
2006 when 2005 performance results were reported; Medstat issued the rewards in
the summer of 2006. The evaluator tracked the pilot's methodology and results,
and conducted a survey of employers and hospitals involved in the pilot to
determine their concerns.
Phase 2 pilots:
- Blue Shield of California. This pilot built on a hospital
tiering program (Network Choice), which was developed using Leapfrog's hospital
patient safety measures. Blue Shield used the grant resources to develop a
complementary "Physician Informational Tiering Project" to build awareness
among physicians and Blue Shield plan members about the cost and quality
differences between hospitals and ambulatory care facilities, and influence
their choice of hospitals and ambulatory surgery centers. The project surveyed
physician and member attitudes about the hospital tiering program to shape its
design in the future. Despite a monetary incentive, Blue Shield has struggled
to get physicians to participate in the survey.
- Buyers Health Care Action Group (BHCAG)/Minnesota. This
pilot aimed to (1) measure and publicly disseminate market-, employer-, and
plan-specific Opportunity Rate scores (the rate of admittance to
Leapfrog-compliant hospitals per opportunity) and (2) increase health plan
participation
in efforts to improve hospital quality by linking the plans' Opportunity Rate
scores to the "buy" decision. (Health plans would be tracked using the
National Business Coalition on Health's eValue8 tool, which health plans use to
submit information to purchasers about their clinical quality and
administrative efficiency.) The pilot is based on other research showing that,
even when hospital patient volume shifts do not occur as a result of incentives
or quality information, measurement and public dissemination of performance
data creates a competitive environment. Leapfrog provided ongoing assistance
with updates and applications of the Leapfrog algorithm to calculate
Opportunity Rates, as well as qualitative analysis and cataloguing of health
plan and employer practices. The pilot is currently on hold because of
turnover at Watson Wyatt, the consulting firm assisting BHCAG.
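The Opportunity Rate defined parenthetically above can be illustrated with a small sketch. One plausible reading, used here as an assumption, is admissions to Leapfrog-compliant hospitals divided by admissions where such a hospital was an available option; the function name and counts are hypothetical.

```python
# Hypothetical sketch of BHCAG's "Opportunity Rate": admissions to
# Leapfrog-compliant hospitals per opportunity to use one.

def opportunity_rate(compliant_admissions, opportunities):
    """Share of eligible admissions that went to a compliant hospital."""
    return compliant_admissions / opportunities

print(opportunity_rate(340, 1000))  # 0.34
```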
2. Partnership Structure/Function
The
partnership consisted of the lead organization, The Leapfrog Group, founded in
2000 by The Business Roundtable to mobilize employer purchasing power to
improve health care quality by recognizing and rewarding providers that take
"big leaps" in advancing quality, patient safety, and affordability. Leapfrog
recruited six groups from among its membership to conduct pilot projects; those
selected included two large employers (Boeing and the GE/Verizon/Hannaford
Brothers group), three employer health coalitions (in Maine, Minnesota, and
Tennessee), and one health plan (Blue Shield of California). Leapfrog arranged for
technical assistance to the pilot projects by three groups of consultants:
Towers Perrin (actuarial services), Medstat (data analysis), and Ropes and Gray
(legal counsel).
Each pilot functions separately, but Leapfrog conducts monthly calls with the entire
group, including external evaluators and some of the TA contractors. Leapfrog
held in-person meetings with grant participants in February 2005 and January
2006 to discuss lessons learned and key takeaways. Leapfrog also wrote and
distributed a newsletter in which they reported on the pilots' progress and
included links to tools and resources for the pilots.
In addition, Leapfrog engaged a group of three evaluators to conduct
individualized process and outcome evaluations of each of the pilots. The
evaluators communicated weekly with Leapfrog. With some of the pilots, the
evaluators acted both as consultants and evaluators. In Maine, for example,
the evaluators attended meetings and participated in teleconferences to provide
formative feedback. For the GE pilot, the evaluators also acted as consultants
and held discussions with them, attended meetings, and provided feedback. Other
pilots, such as BHCAG and HC21, did not ask evaluators for assistance.
Table 1. Major Partner Organizations and Roles in the Project

Lead Organization (grant recipient):
The Leapfrog Group
Role: Lead and coordinate grant activities; provide TA to pilot sites and
oversee other TA and the evaluation team.

Key Collaborators:
Pilot groups: 2 large employers, 3 business coalitions, and 1 health plan, in
CA, KS, ME, MN, NY, OR, TN, and WA
Role: Implement hospital incentive and reward programs in their respective
markets.
Evaluators: Dennis Scanlon (Penn State), John Christianson (U. Minnesota),
Eric Ford (Tulane-Texas Tech)
Role: Evaluate pilots; develop case studies.
Consultants: Towers Perrin (actuarial services), Medstat (data analysis),
Ropes & Gray (legal counsel)
Role: Provide technical assistance to the pilots.

Target Organizations:
Hospitals and selected other providers in the 6 health care markets
Role: Report data on performance measures selected by each purchaser group;
adopt Leapfrog or other hospital quality and patient safety standards.
3. Project Evaluation and Outcomes/Results
Only two of the projects (Boeing and MHMC) have
reached the implementation stage and been fully evaluated; the evaluation of a
third pilot project (Blue Shield of CA) is not yet complete. However, all six
of the pilots provided insights or lessons as to the challenges of implementing
incentive and reward programs through multi-stakeholder efforts. The
evaluators found the following results:
- Boeing: Leapfrog expected the Boeing pilot
to produce the most rigorous empirical findings about the impact of incentives
on behavior in the health system, because the evaluation compared the program's
effects on employees in the PPO with modified hospital benefit to those in
Boeing's regular PPO. However, the evaluation did not find that the program
had any effect on consumer choice of hospital, primarily because employees'
physicians did not refer or admit them to the higher performing hospitals.
Employees would not use hospitals where their physicians did not practice,
regardless of the extra cost. In addition, only a few hospitals in the three
Boeing markets qualified for the bonuses, so there were not enough options for
consumers or physicians. These findings may be useful to other organizations
seeking to alter health benefit designs so as to shift market share to better
performing hospitals.
- MHMC: Interviews with program
participants (hospitals and employers) revealed satisfaction with the pilot's
leadership and its structure, including the choice of measures, weighting of
the measures, and funding. There was uncertainty among participants about
whether the pilot should continue, with many citing the need for information
about the pilot's outcomes. The interviews provided insight into reasons such
a pilot may be unsustainable: insurance companies may develop similar
programs; administrative burden and costs may be too high; publicized
performance measures may be misinterpreted by the public; and the supply of
new bonus money may not be sustainable. Many respondents felt the pilot was
valuable in that it sent a signal to health plans about the interest in having
transparent and standardized measures and receiving rewards based on those
metrics. Without involving the health plans, however, many felt the program
would not be sustained. These findings from the interviews offer lessons to
similar incentive programs, particularly the need to involve hospitals,
purchasers, and health plans.
- Blue Shield of CA: When
completed, the physician survey will provide lessons on physicians' awareness
of the variation in hospital quality and safety and offer input into the design
of an insurance product that gives physicians incentives to steer patients to higher
performing hospitals.
Although the three other pilots have stalled, they do
offer lessons regarding the barriers that such purchaser-led efforts face. For
example, leadership constraints can impede progress, particularly if those
negotiating with hospitals and health plans lack the authority to make
decisions and enforce them in their organizations and benefit plans. In
addition, purchaser-led efforts to establish performance standards may run into
stakeholder opposition; at least one of the pilots encountered resistance from
hospitals regarding participation in the program. Strong leadership may help
with participation, but resistance is still likely. One pilot found it more
difficult than originally anticipated to align standards and monetary incentives
for providers. As the evaluators learned, hospital administrators do not think
that current performance measures are accurate, so they are unlikely to support
reimbursement models that put significant money at risk until measurement is
more sophisticated. Further, employers are unlikely to sustain incentive
programs without a positive return on investment.
4. Major Products
The following publications are planned but not yet
complete:
- Boeing Pre- and Post-Survey Analysis (estimated completion date Summer 2006;
as of October 2006, we had not heard whether this was completed).
- MHMC Pilot Case
Study (estimated completion date Fall 2006).
- A Multi-Purchaser
Incentive and Reward Program: Challenges and Barriers to Achieving Results
(from GE, Verizon, Hannaford Brothers pilot—estimated completion date
September 2006; as of October 2006, we had not heard whether this was completed).
- Assessing
Doctors' Potential Use of Comparative Patient Safety, Cost, and Quality
Reporting in California Surgery Centers (from Blue Shield pilot—estimated
completion date November 2006).
- Promise and
Problems with Supply Chain Management Approaches to Health Care Purchasing
(from GE, Verizon, Hannaford Brothers pilot—completion date TBD).
The documents
below were presented at Leapfrog's Incentives and Rewards Workshop in July
2006:
- "Incentives and
Rewards Best Practices Primer: Lessons Learned from Early Pilots," The Leapfrog
Group (lessons based on the 6 PFQ pilots and 7 in RWJF Rewarding Results
program).
- "The Leapfrog Group's
Incentive and Reward Pilots: Key Lessons Learned."
5. Potential for Sustainability/Expansion after PFQ Grant Ends
Leapfrog will not be sustaining the program, but some
of the individual pilots will likely continue. Leapfrog's idea for the program
was to start new projects and learn what it could from them. Since the pilots
began, the movement for incentives has taken off and Leapfrog feels there is no
need to continue them. They have used the lessons from the pilots to refine
the design of the Leapfrog Hospital Rewards Program so, in that sense, the
program is continuing. Furthermore, all of the pilots will continue their
relationship with Leapfrog, since they are also members of Leapfrog's Regional
Roll-Out program, in which Leapfrog employer members work with other local
employers, as well as local hospitals, health plans, physicians, unions,
consumer groups, and others, to implement the Leapfrog action plan in their
region.
MHMC will meet in August 2006 to decide whether to
sustain its program, and if so, how best to involve the major health plans in Maine and additional employers. Blue Shield of California is using the survey feedback to
support its ongoing pay-for-performance agenda. Boeing's benefit design is in
place for certain employees for three additional years, but the company does
not plan to continue or expand the design for other employees.