Chapter 6. "Lessons Learned" for Future Activities
In this section, we present a number of "lessons learned" during
our interviews concerning future directions for AHRQ in the development and
modification of QIs. Our discussion is organized in three parts. First,
we describe interviewees' perspectives on current, anticipated, and potential
development projects involving the QIs. Next, we discuss users' perspectives
on AHRQ as a measures developer and the ways in which users believe this
role could evolve or change in the future, especially in relation to other
potential providers of this service. Finally, we briefly discuss users' views
on the subject of market demand, in particular, user willingness to pay for
QIs.
6.1. Voices Of The Customer: Priorities
for Future Development of the QIs
A key function of this study was to provide AHRQ with feedback from interviewees
about priorities for future development efforts. In order to explore this topic
with users, we first solicited input from members of the AHRQ QI team about
current, anticipated and potential development projects. We then used these
responses in our interviews, which asked explicitly about interviewees' opinions
of the need for these projects as well as their own priorities for future development.
We grouped the development projects into three categories and asked interviewees
which category they would like to see given priority:
- Improvements in the current product line.
- Addition of new product lines.
- Improved support for the QI products.
Improving the current products was most frequently seen as the highest priority,
followed by both the addition of new products and improvements in service,
outreach, and user support for the measures (Table 6.1).
Many users told us that it was important to improve the current set of indicators
as much as possible and expand their use so that the QIs became more of a national
standard. One interviewee summarized this sentiment by saying, "One
solid measurement set with everyone's buy-in would be enormously positive." Another
user pointed out that "it would be good to focus on shoring up current
indicators because there is currently a lot of criticism around using them
for public reporting."
Many other users said that their recommendation to focus on improving current
AHRQ QIs was driven by a desire to overcome stakeholder opposition to indicators
based on administrative data. One interviewee summarized this line of
thought:
There is no way that every hospital in the country is going to do
primary quality data collection and even if they did, how could we enforce
consistency and timeliness? This is a battle that we have been fighting for
years, and we've been struggling because people tend to dismiss out of hand
any information based on administrative data. In the short term, until there
is progress with the electronic health record, administrative data is all
there is, and there is no convincing argument that we have exhausted all
possibilities to use this type of data for quality improvement.
Another user was more specific about how the indicators might improve:
The AHRQ QIs may not be perfect but they are a national standard, based
on readily available data that are not going away, and the indicators will
get better and better - especially with extended ICD-9 codes (and later the
move to ICD-10) and the addition of a flag for condition-present-on-admission
and things like that. I think there is an opportunity to improve the QIs
gradually over time, as the underlying data sources improve—as new
data elements are added with the introduction of the UB-04, and eventually
electronic health records.
Despite these sentiments, many interviewees also expressed a strong desire
for additional QIs covering new areas. Indeed, it was difficult for many
users to choose between adding new products and improving current products
as their top priority. Improving service and outreach was most frequently given
a low rating. In the following subsections, we discuss in more detail what
changes interviewees would like to see in the AHRQ QI program.
6.1.1. Improvements of the current product line
Apart from the expectation that AHRQ maintain and update the current QIs,
the most commonly requested improvement was the addition of data elements to
increase the specificity of the QIs, such as a flag for conditions present
on admission or for "do-not-resuscitate" orders, and the addition
of clinical data elements (Table 6.2). As mentioned above, the AHRQ QI team
is incorporating a flag for conditions present on admission in the next iteration
of QI specifications. Other improvements mentioned with some regularity
included validation studies, the development of composite measures (a project
that AHRQ is currently undertaking), and better risk adjustment, with coordination
of risk-adjustment methods across the subsets of QIs.
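As a rough illustration of why a present-on-admission flag increases specificity, the sketch below (hypothetical record layout, field names, and flag values; the actual AHRQ QI specifications and software differ) excludes secondary diagnoses that were already present on admission before counting a record toward a complication-based indicator:

```python
# Hypothetical discharge records. The field names and the "Y"/"N" flag
# values are illustrative only, not the actual AHRQ QI record layout.
records = [
    {"id": 1, "secondary_dx": [("decubitus_ulcer", "Y")]},  # present on admission
    {"id": 2, "secondary_dx": [("decubitus_ulcer", "N")]},  # arose during the stay
    {"id": 3, "secondary_dx": []},                          # no secondary diagnosis
]

def counts_as_complication(record, dx_name):
    """Count a diagnosis only when it was NOT present on admission,
    i.e. when it plausibly reflects in-hospital care."""
    return any(dx == dx_name and poa == "N" for dx, poa in record["secondary_dx"])

# Without the flag, records 1 and 2 would both enter the numerator;
# with it, only record 2 does.
numerator = sum(counts_as_complication(r, "decubitus_ulcer") for r in records)
print(numerator)  # 1
```

The point of the flag is exactly this filtering step: pre-existing conditions no longer inflate complication rates attributed to the hospital stay.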
6.1.2. Adding new product lines
Most interviewees were aware of, and appreciated, the roll-out of the pediatric
QIs as a new module, as this important population had been excluded from many
of the initial QIs. Almost half of the interviewees mentioned the additional
need for measures for hospital outpatient/ambulatory care, such as day surgery
and diagnostic procedures (Table 6.3). About a third of interviewees
mentioned the need for efficiency, physician-level, and emergency room care
measures. Nearly a quarter of interviewees expressed interest in integrating
data and indicators for inpatient and outpatient surgery, since an increasing
number of procedures are being shifted to outpatient settings. This, for example,
has created a real problem for constructing the laparoscopic cholecystectomy
indicator (IQI 23), because nearly all of those procedures are now done on
an outpatient basis.
Measures for rural/small hospitals were the next most commonly cited priority. However,
interviewees expressed differing views on the implications of having a dedicated
set of indicators for rural/small hospitals. On the one hand, many felt that
dedicated indicators were needed, because the low patient volume at rural/small
hospitals excludes those institutions from most of the current indicators.
Further, interviewees felt that some indicators should not be constructed for
those facilities. For example, since current ACOG guidelines do not recommend
VBAC for facilities without adequate infrastructure for emergency caesarean
section, the VBAC indicators (IQI 22 and 34) should not be used for many of
them. On the other hand, some interviewees expressed concern that dedicated
indicators would suggest that small and rural hospitals were second-class facilities
to which common quality standards did not apply.
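The small-numbers problem behind these concerns can be sketched as a minimum-volume reporting rule (the 30-case threshold below is purely illustrative, not an AHRQ specification): a hospital's rate is suppressed whenever its denominator is too small to yield a stable estimate, which is what excludes many rural/small hospitals from the current indicators.

```python
def observed_rate(numerator, denominator, min_cases=30):
    """Return the observed indicator rate, or None when the denominator
    falls below the reporting threshold (illustrative value, not an AHRQ rule)."""
    if denominator < min_cases:
        return None  # too few cases for a statistically stable rate
    return numerator / denominator

print(observed_rate(2, 400))  # larger hospital: rate is reported (0.005)
print(observed_rate(1, 12))   # small rural hospital: suppressed (None)
```

A dedicated rural/small-hospital indicator set would, in effect, be designed so that meaningful measurement is still possible below such volume thresholds.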
6.1.3. Improved services around QI products
One of the most common priorities for improved service among interviewees
(Table 6.4) was more explicit guidance from AHRQ on the use of the QIs for
public reporting and pay-for-performance (also discussed in Section 4.3.3). Users
were sometimes not aware that AHRQ had recently released documents on those
issues; the latest guidance document was released in December 2005, predating
these interviews by only a few
months.47
Another commonly listed priority for increased AHRQ service was for AHRQ to
provide guidance on the process that should be followed to improve quality
in areas where the QIs indicate a problem. Users had varying levels of
experience with quality improvement and varying levels of access to networks
that can be used to share quality improvement knowledge. It would be
helpful for users to have further guidance (e.g., a general methodology for
analyzing medical records following an abnormally high incidence of PSI 4,
failure to rescue, and a summary of available evidence on interventions that
could be implemented to lower the rate).
Interviewees suggested that AHRQ collaborate more closely with other organizations
in an attempt to forge more of a consensus on a "national standard" set
of quality indicators. The standardization of some AHRQ and Leapfrog indicators
and submission of some of the AHRQ QIs for NQF approval are steps that AHRQ has
already taken in this direction. Further efforts along these lines would
improve the usability of the AHRQ QIs for users.
Interviewees also suggested making the AHRQ QIs more user-friendly
and simpler to understand. A simple suggestion in this regard was to
promulgate official, simple names for the QIs in language understandable by
people with no clinical knowledge.
We asked users specifically about one aspect of AHRQ service: user
support. We received favorable feedback about the current level of support for
QI users. Of the 15 users who reported using AHRQ support, all but one explicitly
reported a good experience. Interviewees were impressed by the technical competence,
accessibility, and responsiveness of the helpdesk staff and argued that this
support function had played a major role in advancing the field of quality
measurement, because it removed the barriers that non-research institutions
face when implementing complex measurement systems. To provide a point of comparison,
several of the more experienced users recounted the difficulties they had encountered
in working with the HCUP indicator code.
One user reported being able to "feed complicated, technical questions
from hospitals to AHRQ," and that AHRQ user support was able to answer
those questions "from a greater depth and background" than the user had.
This user added that responses to inquiries were "based on evidence,
thoroughly considered and thought-through, with a quick turnaround." Another
user felt that there was "always someone you could get hold of to voice
concerns." A vendor commented that the AHRQ QI technical support had a "pretty
quick response time compared to what one would expect from a federal agency. They
would open a case the same day, send an email confirmation, assign it to a person—all
in the same day." On the other hand, other interviewees (7 of 54)
did suggest the need for increased responsiveness and speed of user support. These
interviewees generally wanted most questions to be answered the same day they
were asked.
6.2. User Perspectives: The
Future Role of AHRQ Compared to Other Players
We explored extensively the issue of how users perceive AHRQ as a measures
developer, what they think AHRQ's role should be in this area, and whether
some function that AHRQ currently performs could be taken over by other public
or private institutions.
Our interviewees held AHRQ in very high regard. They credited AHRQ for its
vision in pushing for the use of administrative data for quality measurement
well before the research and provider community was ready to exploit this data
source. The work of the AHRQ QI team was described as technically sound, sensitive
to the limitations of the underlying data, and transparent. AHRQ is regarded
as an intellectual leader and "go-to" institution for health services research and the
use of administrative data for hospital quality measurement.
As shown in our environmental scan for comparable products, no clear comparable alternative
to the AHRQ QIs has emerged or is likely to emerge. Several other developers,
especially JCAHO, CMS, HQA, and Leapfrog, are seen as prominent sources for measures
and may be used as alternatives, but their indicators differ in several important
ways and are generally regarded as complements to the AHRQ QIs, not true alternatives. We
asked QI users to visualize how the quality measurement landscape would change
if the AHRQ QI program disappeared.
One interviewee answered:
If AHRQ stopped the QI program, pieces would be picked up but there wouldn't
be a consistent, cohesive package as big as AHRQ is now. The public domain
issue is a big one. Providers have only been in the game because the
indicators come from a public source—if that public source goes away,
I think providers will stop doing it.
And another one said:
I can't imagine who else would pick up activities from AHRQ so instead,
probably activities would be broken down into orphan activities—any
one slice would be a different activity; specialty organizations would take
over certain types of measures (pediatric, for example).
Interviewees were quite comfortable with AHRQ having a leading role in national
quality indicator development. It was generally viewed as positive that a trustworthy
federal institution had defined open-source and well-documented quality measurement
standards. These standards were viewed as contributing to the transparency
of health care quality measurement and reducing the measurement burden for health
care providers by limiting the number of measurement tools they must use to satisfy
various reporting requirements. Many emphasized the need for even greater
leadership from the federal government in this area, either by developing measures
or by orchestrating public-private partnerships, so that standard measure sets
for various purposes would become available and accessible to everyone.
AHRQ's leading role was also seen as a challenge for AHRQ, because with
it comes the responsibility to maintain the QI program, on which so many programs
now depend. Our interviewees looked primarily to AHRQ to fill the obvious gaps
in the measurement science. Several commented that current funding levels for
AHRQ were not adequate to meet all those needs.
We discussed whether it could be a viable option for AHRQ to give up parts
of the current QI program in order to free up resources and set different priorities.
Specifically, we asked whether AHRQ could or should stop developing software
and providing user support in order to focus exclusively on indicator development.
Almost unanimously, interviewees rejected a model under which AHRQ would develop
and distribute the software without supporting it.
There was much concern that lack of user support would create enormous barriers
to the implementation of quality measurement initiatives, especially for new
users and non-research institutions. Using vendors to provide user support was
also not commonly regarded as an alternative, because many feared that vendors
would be prohibitively expensive or incapable of providing the same quality of
support as the original developers. The latter view was even shared by some of
the vendors who would potentially stand to gain from this model: one representative
stated that "we do not want to support AHRQ's software since we can't support
what we don't write."
We received mixed reactions to a model under which AHRQ would only develop
and release indicators and their technical specifications, but no longer provide
or support software. Many interviewees were familiar with such an arrangement,
as it would mirror the division of responsibilities between JCAHO and the Core
Measures vendors. But several drawbacks were brought to our attention, such
as vendor and license fees, as well as potential quality problems and comparability
issues (if different vendors implemented AHRQ specifications).
Several interviewees stated that such a model would represent a step backwards in
the development of a unified quality measurement infrastructure, since a transparent national
standard would be transformed into multiple proprietary systems at the very
moment when many entities, like CMS, JCAHO, and NQF, are trying to introduce
open-source consensus measures, as recommended by the IOM. At a minimum, a
rigorous certification program for vendors would be needed, and many interviewees
worried about the implications of such a change for the momentum that the hospital
quality measurement movement has gathered.
Finally, we asked interviewees which parts of the QI program AHRQ could give
up, if (hypothetical) budget cuts were to leave it with no other choice. Most
of the 54 interviewees stated that the program represented a unified entity
that should not be disassembled, although 12 interviewees said software development
and 11 said user support could be discontinued by AHRQ and those functions
assumed by others.
6.3. User Views: Willingness To Pay
for the AHRQ QIs
As an alternative to AHRQ realigning current funds, we asked interviewees
whether AHRQ might consider financing program growth by charging users for
the QIs. Not unexpectedly, this proposal was not met with
enthusiasm. More than a third of our interviewees (20 of 54) did not answer the question.
Five out of 36 current users stated that they would stop using the QIs in this
case. Three current users replied that they had invested so much into their
program based on the AHRQ QIs that they would have to accept charges, but emphasized
that they might not have selected the QIs in the first place if they had not
been a free resource. However, almost half of the interviewees (44%) expressed
willingness to pay a reasonable fee for access to the full QI
resources.p Two even said that the perceived
value of the QIs would increase if users had to pay for them:
"Marketing 101: If you don't charge anything, people aren't going to
ascribe value to it. If there is no cost attached, people can take or leave
it because it doesn't represent an investment."
A slight majority favored a subscription model (i.e., a flat charge independent of usage),
but some argued for a usage-based payment scheme. Most recommended differential
pricing by type of organization and purpose of use (e.g., commercial vendors who
resell the QIs or incorporate them into their products should pay a higher rate
than state agencies that operate public reporting programs). Interviewees also
felt that one-time use for research projects should be less expensive than ongoing
use for operational purposes.
p. We did not
elicit specific information on what users would consider to be a reasonable
fee. A market study would be required to determine what would be considered
"reasonable" among the current and potential users. Such an endeavor
was outside of the scope of our study.