Evaluation of the Use of AHRQ and Other Quality Indicators

Chapter 7. Discussion

In this chapter we discuss the limitations of this evaluation, briefly review some of the top-level findings from our assessment, and discuss their implications for AHRQ's future growth opportunities.

7.1. Limitations

This evaluation had several methodological limitations that should be considered when interpreting the results. The most important limitation is that the majority of interviewees were users of the AHRQ QIs. Organizations that have decided not to use the AHRQ QIs for quality measurement may be more likely than users to hold negative opinions about the AHRQ QI program. For this reason, we interviewed a small number of non-users. The non-users did not express any unique, substantially negative opinions of the AHRQ QIs, but a larger sample might have yielded different results.

A second limitation of this evaluation is that the environmental scan used to identify users of the AHRQ QIs probably failed to identify a large number of organizations that do not publicly release AHRQ QI results or publish descriptions of their AHRQ QI use. If these organizations differed systematically from those identified in the environmental scan, our results could present a biased view of AHRQ QI users' opinions.

7.2. What Is AHRQ's Current Market Position?

Our results show that the AHRQ QIs are regarded as the leading product for measuring the quality of hospital care from administrative data. In fact, the QIs are currently the only comprehensive measurement system in this area; our survey did not identify any comparable offerings. The QIs have gained a leading role not only within the United States but also, increasingly, in other countries. While a variety of vendors offer quality measurement systems that use administrative data, their products often embed the AHRQ QIs (either the actual software or the specifications).

The fact that the QIs are a free resource certainly helped them to gain market share, but our interviewees were adamant that the excellent quality of the product combined with AHRQ's reputation were the key prerequisites for the indicators' success. Many interviewees commended AHRQ for the rigor and unbiased nature of its research and felt that AHRQ would remain the natural home for the development of standards for quality measurement. The AHRQ QIs were widely described as a scientifically sound and well-documented set of measures that were easy to implement because of user-friendly software and good user support. The complete transparency of the indicator specifications, the risk adjustment methodology, and the underlying evidence were all credited as crucial factors for the acceptance of the product by various stakeholders.

Thus, the AHRQ QIs have achieved a strong position in their market segment, and no obvious alternative or competitor could be identified, although some organizations (notably JCAHO, CMS, and Leapfrog) have complementary indicator sets. This is unlikely to change: new users have an incentive to adopt the prevailing product, both because it makes their results comparable to those of many other users and because widespread use lends legitimacy to the product, which is critical in the often politicized debates about selecting quality indicators for such uses as public reporting and pay-for-performance. Indicator development based on rigorous science is also quite costly. As a result, other developers would face substantial barriers to entry if they tried to establish alternative measurement systems.

Our interviewees were quite comfortable with AHRQ having this position, although they pointed to two potential risks. First, the dominance of the AHRQ QIs, combined with the ease of access to administrative data, might stifle innovation in indicators that have more demanding data requirements. Second, that dominance also implied a responsibility for AHRQ to maintain the program and to keep expanding it, which was seen as challenging given AHRQ's budget limitations.

7.3. Where Are the Growth Opportunities for the AHRQ QI Program?

The market for the AHRQ QIs is large, growing rapidly, and changing as the indicators are put to new uses. A substantial number of organizations now use the AHRQ QIs for public reporting and pay-for-performance programs. As the prevalence of those activities increases, we expect the number of users to grow substantially, both for those programs themselves and for internal quality improvement programs and projects that will seek to align their target measures with standards for external accountability.

Our interviewees suggested that there is substantial demand for expansion of the AHRQ QI program. The most common requests were for improvements to the current sets, such as accommodation of non-UB-92 variables (e.g., present-on-admission flags) and methods for composite formation, followed by additional QI sets to close important measurement gaps, such as hospital outpatient care and emergency room care. Interviewees were largely aware of, and appreciated, AHRQ's current efforts to improve and expand the program, but expressed an interest in scaling up, and speeding up, those activities.

7.4. How Could Growth Be Financed?

Interviewees recognized that expansions of the AHRQ QI program would require additional resources and largely argued that this would be money well invested. Most believed that federal funding should be used to support those activities, while recognizing that this was a difficult proposition given the pressure on public budgets in general and on AHRQ's budget in particular. But the availability of scientifically sound indicators in the public domain was seen as a precondition for quality improvement efforts and for policy innovations like public reporting and pay-for-performance, so support from public funds seemed warranted.

It was frequently stated that pay-for-performance and public reporting programs in particular should be based on fully transparent methodologies, so that hospitals can understand how they are evaluated and identify opportunities for improvement. Proprietary indicators, the likely outcome of private funding, were seen as unsuitable for those applications.

We challenged interviewees to brainstorm about alternatives to increased public funding for the QI program. One option was the re-allocation of existing funds by reducing the scope of activities under the program and focusing on core competencies, for example by giving up the development and distribution of free software to construct the QIs or by ending user support. Most argued that indicator development was AHRQ's core competency and should never be given up, and interviewees were also reluctant to see AHRQ give up software development and user support.

We heard consistently that only the original developer of such specialized software can provide adequate support. Even some vendors, who might consider taking on the support role themselves, agreed with that assessment. Thus, continuing software development while ending user support does not appear to be a plausible way for AHRQ to free up funds for development activities.

Nor was there enthusiasm for the even more radical alternative of AHRQ focusing only on specification development and leaving software development and user support to vendors. Concerns centered on high fees, restricted access, and potential problems with quality and comparability if different vendors implemented the AHRQ specifications. At a minimum, a rigorous certification program for vendors would be needed. By and large, interviewees felt that this approach would be only slightly better than stopping the program altogether, and that it would greatly impede the spread of quality measurement activities.

As an alternative, we discussed the option of AHRQ continuing to provide specifications, software, and user support but charging for those services. While there was little enthusiasm for this prospect, only a few interviewees stated that they would stop using the product in that case. Most seemed willing to pay a reasonable amount, so charging users could be a viable way to support expansions of the QI program. However, the feasibility of implementing such an option would depend on many yet-to-be-answered questions:

  1. How much would users be willing to pay? Our study was not designed to investigate what users would consider a reasonable charge, so a market research study would be required to elicit willingness to pay.
  2. How would a fee affect the willingness of future users to implement the QIs? We talked mainly to current users of the QIs, who have already invested resources in implementing them for their particular purposes. Those current users are unlikely to adopt a different indicator system unless the cost of the AHRQ QIs were to become prohibitive. But non-users might select another product, or abandon their quality measurement activities entirely, if they had to pay for the AHRQ QIs.
  3. What are the rights of existing users who have invested in implementing the QIs under the assumption that they are a free resource?
  4. Should pricing differ for different types of users (e.g., researchers and re-sellers), and to what degree?
  5. What is the best pricing model (e.g., fee-per-use, subscription)?
  6. How should fees for international users be handled? These users are probably very price-sensitive because most of the current international uses are small initiatives by individual researchers. It is more difficult to collect money from researchers, but fairness would require charging them if domestic users are charged.

In summary, if AHRQ were to implement a charge-based model for the QIs, it would face the challenge of developing a comprehensive business plan. The size of the market would need to be determined to ensure that the expected revenue could make a meaningful contribution to the growth of the program once the added cost of operating a business is taken into account. AHRQ would also need to weigh the additional revenue it could expect to collect against the potential negative effects on the spread of the program to new users and new uses.
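To make that trade-off concrete, the short sketch below works through the arithmetic of a charge-based model: the net contribution to the program equals fee revenue minus the added cost of operating a business, under varying assumptions about how many prospective users a fee would deter. Every figure in the sketch (market size, fee level, operating cost, deterrence rates) is a hypothetical placeholder for illustration only, not an estimate from this evaluation; a real business plan would substitute market-research data.

    # Back-of-the-envelope sketch of a charge-based model for the QIs.
    # All numbers are hypothetical placeholders, not findings from this
    # evaluation; they only illustrate how revenue, operating cost, and
    # deterred adoption interact.

    def net_contribution(paying_users: int, annual_fee: float, operating_cost: float) -> float:
        """Fee revenue minus the added cost of running a charging operation."""
        return paying_users * annual_fee - operating_cost

    prospective_users = 1_000   # assumed size of the potential user market
    annual_fee = 500            # assumed charge per user per year (dollars)
    operating_cost = 200_000    # assumed yearly cost of billing, licensing, support

    # The fee itself deters some share of prospective users (the "negative
    # effect on spread" discussed above); vary that share to see its impact.
    for deterred_share in (0.1, 0.3, 0.5):
        paying = int(prospective_users * (1 - deterred_share))
        net = net_contribution(paying, annual_fee, operating_cost)
        print(f"{deterred_share:.0%} deterred: {paying} paying users, net ${net:,.0f}")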

In a sense, AHRQ is now in a situation comparable to that of other organizations that started out offering content or services on the Internet for free and are now contemplating whether to begin charging users. Thoughtful deliberation would be needed to find a business model that generates sufficient revenue but remains consistent with AHRQ's mission and values as a public agency.

