
AHRQ Quality Indicators 101



This is the transcript of a Webinar titled AHRQ Quality Indicators 101 that took place on September 17, 2008.

Select to access the PowerPoint® slide presentation [1.8 MB] (Web version).


Mamatha Pancholi: Thank you and welcome. My name is Mamatha Pancholi. I'm the Program Officer for the AHRQ [Agency for Healthcare Research and Quality] QIs [Quality Indicators]. I would like to welcome you to the Quality Indicators 101 Webinar. Along with my colleague Jeff Geppert, who will be speaking later in the session, our goal today is to give you some background information and an introduction to the AHRQ Quality Indicators.

This slide presents a brief overview of what we will be covering this afternoon. As you can see, it's a long list, and we will hopefully get through all of it. First, I would like to talk to you about where and how the Quality Indicators got started, as well as give you a look at where we are today. Learning objectives for today's session are listed here. We hope you will walk away with a better understanding of the Quality Indicators, including what their advantages and limitations are and how they're being used … and we will also talk a little bit about the National Quality Forum's recent endorsement of some of the Quality Indicators. Finally, we'll talk about the AHRQ Quality Indicators Learning Institute, which this Webinar is part of.

So let's get started with a little bit of background information on the QIs. We can't talk about the Quality Indicators without talking about HCUP [Healthcare Cost & Utilization Project]. HCUP is a family of health care databases that were developed through a Federal/State/industry partnership that was sponsored by AHRQ. It is a uniform data set that we use to develop the Quality Indicators. It is a set of inpatient discharge data sets provided to us by participating States, and together they represent approximately 90 percent of all the discharges in the Nation. It is a very powerful data set. One of the tools that has been created out of HCUP is the Quality Indicators.

Now what are the Quality Indicators? The QIs are a set of measures that were developed through a contract with the University of California-San Francisco and Stanford University at their Evidence-based Practice Center. As I mentioned previously, the QIs do use the HCUP data set, and we also use readily available data elements to help develop the Quality Indicators. We used several other HCUP tools to develop the Quality Indicators. The QIs are risk-adjusted measures, and there are currently five modules that comprise the QIs.

The process for developing an Indicator is not a simple one; it's a fairly rigorous process. We go through several iterations before obtaining a final definition of an Indicator. The AHRQ Quality Indicators team begins with a review of the literature to determine which areas are ripe for measure development. Then we do some empirical analysis before the initial measure is defined. The measure definition is evaluated by a technical panel of experts. The panel typically consists of 14 multidisciplinary professionals, and they also assist in refining the measure over time. After evaluation, the measure either proceeds to a final definition or is sent back to the panel for additional consideration and review. Recently we've added a validation and field-testing component to the measure development cycle, which we think has greatly improved the validity of the measures themselves.

So the QI definitions are based on ICD-9-CM [International Classification of Diseases, 9th Revision, Clinical Modification] codes, often with the APR-DRG [all patient refined diagnosis-related group] or MDC [major diagnostic category], and some basic demographic data elements, as well as admission type, admission source, and so on. The numerator is the number of cases that are flagged by the measure, which is the outcome of interest. This could be death in some cases or avoidable hospitalizations for asthma in others. The denominator is the population at risk. The observed rate, as you can see here, is the numerator over the denominator, and volume counts of procedures are also part of the indicator sets.
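
As a purely illustrative sketch (not the AHRQ QI software, and using made-up discharge records), the observed rate is simply the count of flagged cases divided by the population at risk:

```python
# Illustration only (not the AHRQ QI software): computing an observed rate
# as numerator / denominator from hypothetical discharge-level flags.

discharges = [
    # Each record notes whether it belongs to the population at risk
    # (denominator) and whether the outcome of interest occurred (numerator).
    {"id": 1, "at_risk": True,  "outcome": False},
    {"id": 2, "at_risk": True,  "outcome": True},
    {"id": 3, "at_risk": False, "outcome": False},  # excluded from the denominator
    {"id": 4, "at_risk": True,  "outcome": False},
]

denominator = sum(1 for d in discharges if d["at_risk"])
numerator = sum(1 for d in discharges if d["at_risk"] and d["outcome"])

observed_rate = numerator / denominator
print(f"Observed rate: {numerator}/{denominator} = {observed_rate:.3f}")
```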

There used to be just three modules of the QIs, but now we have five. They are the Prevention [PQIs], Inpatient [IQIs], Patient Safety [PSIs], Pediatric [PDIs], and Neonatal Quality Indicators (which are a subset of the Pediatric). I will briefly describe each of these as we go through the process.

The PQIs [Prevention Quality Indicators] are a set of measures that can be used with hospital discharge data to identify quality of care for ambulatory-care sensitive conditions. These are conditions for which good outpatient care can potentially prevent the need for hospitalization and also for which an early intervention can prevent complications or more severe disease. Even though the PQIs are based on hospital inpatient data, they provide a valuable insight into the community health care system and services outside of the hospital setting.

The IQIs [Inpatient Quality Indicators] reflect quality of care inside hospitals and include inpatient mortality for certain procedures and medical conditions. They also reflect the utilization of procedures, which poses questions of underuse, overuse, and misuse. There's evidence that a higher volume of procedures is associated with lower mortality for the procedures listed here on the next two slides. The IQIs can be used to help hospitals identify potential problem areas in terms of quality that might need further study.

Now we get to the PSIs [Patient Safety Indicators]. The PSIs are a set of Indicators that provide information on potential in-hospital complications and adverse events following surgeries, procedures, or childbirth. They can be used to help hospitals identify potential adverse events that might need further study as well. They also give us an opportunity to assess the incidence of adverse events and in-hospital complications using administrative data drawn from typical discharge records. They're listed here. You can see there are many PSIs, and several of these measures have been submitted to the National Quality Forum as well.

The PDIs [Pediatric Quality Indicators] are basically the Patient Safety Indicators modified for the pediatric population. They focus on potentially preventable complications and iatrogenic events for pediatric patients treated in the hospital and on preventable hospitalizations among pediatric patients, and they continue on this slide.

Next, we will talk about the advantages and some of the challenges associated with the Quality Indicators. A big advantage is that these are publicly available. There's extensive documentation and detail about each of the Indicators on the Quality Indicators Web site. These measures are available via a software tool, provided as a SAS program and as a Windows application. The measures are standardized, and they can be used with any administrative data set, so it doesn't have to be HCUP data; it can be your own hospital's internal data sets. Hospitals themselves can download the tool and replicate the measures, so you don't have to depend on anybody else for that information. The software and the measures are updated annually, so we do maintain the measures. We update the measures with respect to the ICD-9-CM updates and try to release them every February.

We currently have over 100 individual measures. The measures are stratified by basic demographic variables. We also include measures that capture certain priority populations and areas, such as child health, women's health, and so forth. We mainly focus on acute care, but we do cross over into community and outpatient care delivery settings as well. We like to say that the QIs are a window into other delivery settings.

Another advantage is that we are continually trying to harmonize with other measures out there. We recognize that there are a lot of measures in the field, and we do our best to harmonize with as many other organizations as possible; those efforts are still underway. We do, as I mentioned before, maintain the measures and do annual updates. We provide the software tools and extensive technical assistance; I'm sure many of you have used the QI E-mail address to get a question answered. We are also part of the National Healthcare Quality Report and Disparities Report; the measures are used at the national level and therefore are available as national benchmarks. HCUPnet, which I hope some of you have visited before, is an online, interactive query system where you can calculate various estimates, including the Quality Indicators, using the HCUP discharge data, but those are all national estimates.

Those are the advantages, but there are a few challenges. Most of the challenges have to do with the fact that we're using administrative data, so we inherit some of the characteristics of administrative data. For example, the typical discharge record does not include clinical detail. We have some challenges with respect to risk adjustment. The level of accuracy strongly depends upon the accuracy of the documentation, and the coding in particular is an issue as well. The data are subject to some gaming, which we also recognize, and there is a time lag. From a developmental perspective, the QI measures depend on the HCUP data, which are a year or two behind, while the measures themselves can be applied to your own data in real time.

We've undertaken a lot of work this past year or two to develop some Composite Measures, and we have made some additional enhancements as well. Recently, we've developed Composite Measures for reporting the Inpatient, Patient Safety, Prevention, and Pediatric Quality Indicators. Those Composite Measures, which Jeff will talk about a little bit later, are being put through the National Quality Forum endorsement process as well. We have also done some additional work on risk adjustment. About a year or two ago we convened a work group to look at the risk-adjustment methods in the Quality Indicators set. We did make some improvements; namely, we included hierarchical modeling in some of the risk adjustments that we have. We also updated the literature reviews, which we typically try to do on a reasonable cycle. This is where we look at the literature and try to update the measures based on the latest evidence available.
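
For intuition only, here is a minimal sketch of the shrinkage idea behind hierarchical modeling, not the actual AHRQ risk-adjustment implementation; the variance terms, rates, and volumes below are hypothetical:

```python
# Rough illustration of shrinkage toward the overall mean (the intuition behind
# hierarchical modeling of hospital rates), not the AHRQ QI risk-adjustment code.
# All numbers are hypothetical.

def shrunken_rate(hospital_rate, hospital_n, overall_rate, between_var):
    """Weight a hospital's own rate against the overall rate by its reliability."""
    # Within-hospital sampling variance of a proportion (binomial approximation).
    within_var = hospital_rate * (1 - hospital_rate) / hospital_n
    reliability = between_var / (between_var + within_var)
    return reliability * hospital_rate + (1 - reliability) * overall_rate

# A small hospital (n=50) is pulled strongly toward the overall rate;
# a large hospital (n=5000) keeps most of its own signal.
print(shrunken_rate(0.08, 50, 0.04, between_var=0.0004))    # ~0.049
print(shrunken_rate(0.08, 5000, 0.04, between_var=0.0004))  # ~0.079
```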

We have a tool called the Reporting Template, which some of you may have already heard about. It grew out of a larger effort around reporting: States have requested guidance, with a set of definitions and formats, for reporting the Quality Indicators in particular. We're working on making the Reporting Template available via a software package, and hopefully that will be available next year. We also submitted several of the measures to the National Quality Forum for endorsement, and we'll talk about that in more detail in a few minutes.

In terms of the future, one of the ideas that AHRQ has been exploring is what we call "joining forces." It is an effort to use health information technology [HIT] to help improve the timeliness of the data and to add clinical detail for accuracy and credibility in reporting. Using present on admission as a data element and adding lab values to help improve the accuracy of the measures has been a high priority for us in the past year or two. We hope to expand statewide outpatient data, to develop new data analysis tools for outpatient and clinically enriched data, and to do this with continued vigilance for patient privacy and data security. These are areas of focus for collaboration with HIT colleagues that we hope to pursue in the near future.

AHRQ recently sponsored a study showing that adding a few clinical data elements significantly improved quality assessment using administrative data. Then, as a followup to that study, we funded projects to support the inclusion of present on admission and lab values specifically into administrative databases. Those pilots are now in year two. They've completed their first year's activities, which were to develop an implementation plan, recruit hospitals, and develop a data collection method. I believe they are actually collecting data at this point and are developing methods for transmitting those data. In the end, what we hope to gain is a multihospital data set to assess data quality and to use the data to produce hospital-level reports on quality, with a final report and a synthesis of lessons learned that we can disseminate to others as well.

When we first developed the Quality Indicators, they were primarily used for internal hospital quality improvement. Over the years there's been growing use at the national, State, and regional reporting levels, and now we have a couple of organizations, particularly CMS [Centers for Medicare and Medicaid Services], that will be using the QIs for pay-for-performance. The biggest growth has been in comparative public reporting, which we will talk a little bit about and which has been the focus for the past year or two. Currently there are 12 States that use the QIs for public reporting. As I understand it, there might be several more, which is the motivation for this current Learning Institute. We believe there are actually five or six more States that may be coming on board next year, and so we hope this Learning Institute will provide an opportunity to help foster that.

I briefly mentioned the AHRQ QI Reporting Template previously. The idea for the Reporting Template originated as a request from the QI user community. The reports were designed specifically to report comparative information on hospital performance based on the AHRQ Quality Indicators. There are two model reports. The first one was developed in the middle of 2006; it grouped individual Indicators together and organized them into health topics, and it is known as the Health Topics Model Report. The second model, known as the Composite Model Report, was developed later in 2006 and is based on a set of four Composite Measures using a substantial portion of the Inpatient QIs, the Patient Safety QIs, and the Pediatric QIs. The Composites were constructed on the basis of substantial statistical analysis as well as expert review, so this was not something that we just threw together; it is based on the latest evidence. Shoshanna Sofaer of Baruch College (The City University of New York) was the lead on this particular effort. One thing to note here is that the Reporting Template is available if you want to access it; you just need to E-mail the QI support line for it. It will also be incorporated into an upcoming tool next year and hopefully will be made available in a more interactive way.

The Reporting Template is also based on a structured approach, with extensive research on hospital quality measurement reporting. There were interviews with various experts in the field, large purchaser coalitions, and executives of integrated health care systems. There were two focus groups with chief medical officers and systems quality managers from a broad mix of hospitals. There were four focus groups with members of the public who had recently experienced a hospital admission. There were also four rounds of cognitive interviews, about 65 interviews in all, to test drafts of the two model reports with members of the public with recent hospital experience, all with basic computer literacy but with widely varying levels of education. As you can see, this was quite an extensive project on our part.

This takes us to the National Quality Forum endorsement. I will preface this by saying this has been a major effort. We submitted a large number of Quality Indicators to the National Quality Forum for endorsement. I will hand it off to Jeff Geppert now, who will take us through the next several slides.

Jeffrey Geppert: Thank you, Mamatha. As Mamatha mentioned, there's an extensive development process that goes on with the creation of the Quality Indicators, involving a lot of literature review, clinical panels, and empirical assessment, but that evaluation is really ongoing. The NQF [National Quality Forum] submission was a continuation of the evaluation of the Quality Indicators, specifically an assessment of their suitability for both quality improvement and comparative reporting. There are a number of different initiatives under which the Quality Indicators were assessed by the National Quality Forum Consensus Development Process. The major initiative consisted of a large number of measures submitted in September of 2006 under the Hospital Priorities Project. There were 34 individual AHRQ Quality Indicators initially submitted, along with 4 Composite Measures and the 2 Reporting Frameworks that Mamatha just described. The Composite Measures are available in the software and there are reports about them on the QI Web site. There are two Composite Measures for the Inpatient Quality Indicators, one for mortality for selected procedures and another for mortality for selected conditions. There's a PSI Composite Measure and then a pediatric PSI Composite Measure. Those measures were all submitted to the NQF. The NQF created five technical advisory panels to assess the measures using the NQF's standard criteria of importance, scientific acceptability, usability, and feasibility. The five technical advisory panels covered patient safety; pediatrics; surgery and anesthesia; the composites; and the Reporting Template. Several of the other Indicators have been submitted to the NQF through other processes; more recently, the Perinatal Project and the Perioperative Project have continued the assessment of the QIs. In terms of those that have gone through the process and have received endorsement from the National Quality Forum, the Diabetes Project assessed the three PQIs related to diabetes, and they were the first set of Indicators endorsed by the NQF. The earlier version of the Hospital Priorities Project reviewed the pneumonia mortality measure, which is an inpatient measure, and that also received endorsement.

Then there was an Ambulatory Care Project that reviewed the remainder of the Prevention Quality Indicators. Those also received endorsement, along with a Composite of the PQIs. There are actually three Composites for the PQIs: an overall PQI measure, a Composite for acute conditions, and a Composite for chronic conditions. Under the Hospital Care 2007 Priorities Project, 8 Patient Safety Indicators were endorsed, along with 4 Pediatric Patient Safety Indicators, 2 of the pediatric measures, and 11 surgery and anesthesia measures. Most recently, the postoperative PE/DVT [pulmonary embolism/deep venous thrombosis] measure was endorsed under the Perioperative Project, and the birth trauma and nosocomial bloodstream infection in neonates measures were endorsed under the Perinatal Project.

There are a number of additional activities that also support the ongoing assessment and review of the Quality Indicators. As Mamatha mentioned, the Indicators are based on ICD-9-CM codes. Through input we've received from the NQF process, from other researchers who have evaluated the Indicators, and primarily from users such as yourselves who have submitted questions to the support line about specific cases and clinical scenarios, a lot of suggestions have been provided on ways in which the ICD-9-CM codes themselves could be improved to increase the accuracy of the measures and their usefulness in identifying potentially preventable adverse events and the like. These are just a couple of the more recent ICD-9-CM coding proposals that have been submitted and adopted by the Coordination and Maintenance Committee and that will continue to improve the measures as we go forward.

Along with that, as Mamatha mentioned, there's been some significant effort to achieve harmonization of the measures. We often hear from hospitals that there's a burden involved in requests for measurement from different organizations, so all of these organizations are highly committed to ensuring we have common definitions, specifications, and data sources for measures to the extent feasible. We've had ongoing collaborations with the Joint Commission and with CMS and researchers to make sure that the measures are harmonized to the greatest extent possible.

In addition to the development work done by the Evidence-based Practice Centers, we have most recently incorporated a validation component into the AHRQ QI development cycle. The validation component is really a collaborative effort between the QI support team and the users of the Quality Indicators. It's a volunteer effort on the part of the users who agree to participate in these studies. The users contribute a lot of information to the QI team, and the QI team provides a lot of guidance in terms of how the QIs can best be utilized. Currently, the validation projects are divided into phases. We've completed Phase I of the project, which focused on five of the Patient Safety Indicators: accidental puncture, pneumothorax, postoperative PE/DVT, postoperative sepsis, and selected infection due to medical care. We're currently initiating Phase II of the validation effort, which will focus on five additional Patient Safety Indicators.

The objective of these pilots is to gather additional evidence on the scientific acceptability of the Patient Safety Indicators—in particular, supplementing the other types of validation efforts that we undertake with medical records reviews to really assess what the evidence base is for each of the Indicators, and use that information to improve guidance on how the information can be used and interpreted in individual hospitals. Then we evaluate the information that we gain from the validation projects to determine if the Indicators can be improved through refinements either in coding, specifications, or data.

We've found some highly useful information from our initial Phase I validation work. We assessed all of the Patient Safety Indicators, primarily to determine their positive predictive value—which means, for those cases flagged by the administrative data, we tried to determine what percentage had actual adverse events. That information was highly useful in the NQF process in terms of assessing which ones were the most useful and/or most valid for the purposes of comparative reporting. Most importantly, the validation demonstrated the feasibility of conducting this type of evaluation, which we are using to design our Phase II analysis and to develop some methodologies which could be used to estimate the sensitivity of the Indicators. In other words, we would like to determine if there are adverse events that are documented in the medical record but not flagged by the administrative data. That's one of the primary emphases in Phase II. If you are interested in participating in the validation projects at all, please contact the QI support line. Here's the E-mail address of our validation pilot coordinator, Jennifer Cohen: cohenj@battelle.org.
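
To make the positive predictive value concrete, here is a small illustrative calculation; the counts are hypothetical and are not results from the Phase I study:

```python
# Illustration only: the positive predictive value (PPV) of an indicator is the
# fraction of cases flagged by administrative data that chart review confirms
# as true adverse events. The counts below are hypothetical.

def positive_predictive_value(confirmed_events, flagged_cases):
    """Return PPV, or None if nothing was flagged."""
    if flagged_cases == 0:
        return None
    return confirmed_events / flagged_cases

flagged_cases = 120      # cases flagged by the indicator in administrative data
confirmed_events = 96    # of those, confirmed by medical record review

ppv = positive_predictive_value(confirmed_events, flagged_cases)
print(f"PPV = {confirmed_events}/{flagged_cases} = {ppv:.2f}")  # 0.80
```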

One of the areas in which we're currently engaging in some development efforts is a project related to developing Indicators of health care emergency preparedness, which we're conducting in collaboration with colleagues in the Department of Health and Human Services. There's another project that is just getting underway to assess and refine the Prevention Quality Indicators specifically for the Medicaid population. There's a lot of interest in avoidable hospitalizations for that population and also some particular challenges related to enrollment of Medicaid beneficiaries, which that project is intended to address.

Mamatha, I will turn it back over to you at this time.

Mamatha Pancholi: Thank you, Jeff. What we wanted to do was take the remaining time to talk more specifically about the Quality Indicators Learning Institute. To remind us all of the Institute's purpose, we wanted to provide a means for users who are interested in using the Quality Indicators for public reporting. We wanted to give them a forum for discussing and facilitating the use of the QIs, since many organizations are telling us that they are on the cusp of using the QIs for reporting at the hospital level, but they need some help and information. They want some peer-to-peer learning opportunities. This was a forum that we thought would work well, where we would have experts in the field of reporting available to provide that kind of information and assistance. This is really your opportunity for discussion and peer-to-peer exchange through various means, like Webinars and conference calls and so forth. We hope that through the Learning Institute you will become more familiar with the QI Reporting Template, and there are other tools that we're working on as well. Our target audience, again, is the individuals and organizations who are leading, facilitating, or directly involved with public reporting efforts. We think they're going to be State agencies, task forces, hospital associations, and coalitions. Those are the types of organizations that have been involved in public reporting in the past. We're really looking for anyone who is a leader in that area for an organization that will be doing hospital-level public reporting, particularly with an emphasis on using the AHRQ Quality Indicators.

As we started thinking about the Learning Institute and what we wanted it to be, we decided to focus on key topics that have come up from users in various organizations across the country as they engaged in public reporting efforts. So we came up with these eight topics. These are just ideas that we've come up with on our own, and we're open to hearing from all of you; if there are topics you would like to see covered in the Learning Institute, please let us know. Many folks would like to start talking about topics such as: how to select a measure; how to deal with small sample sizes; how best to prepare the data for reports; how to talk to your organizations about those reports; how to deal with the consumer aspect; how to classify hospitals; how to display the data (which is really addressed by the Reporting Template that we talked about earlier and which will be discussed in more detail); language for explaining the Quality Indicators on your Web site (another topic that Shoshanna Sofaer will be covering); how to market and promote your report (some organizations are having particular trouble with marketing and promotion, so we're going to focus on that more specifically); and how to evaluate your public reporting program. There may be other topics that interest you, or you may feel, as you put together your public reporting initiatives, that you want other topics addressed. We do have good flexibility in adjusting the topics covered in the Institute, so we welcome hearing from you.

I want to reiterate one note. This particular Webinar is open to anyone who is interested, but future Webinars will only be accessible if you are an accepted member of the Learning Institute. So if any of those topics are of interest to you, I encourage you to apply to the Learning Institute, as there will be no other way to access those materials. Because of the great interest we have received in the Learning Institute and the maximum of 100 participants, we encourage you to apply sooner rather than later. We do have a limit because of the Web facilities that we have. The Learning Institute not only provides access to the Webinars; there's also an Internet site that is accessible only to Learning Institute members. Again, please remember that future Webinars will only be open to Learning Institute members, and we hope this will facilitate some peer-to-peer learning. The application deadline is September 30, 2008, and we really hope to hear from you soon. We do have a Webinar planned for the end of October, so the sooner we get your applications in, the sooner we'll be able to get started with this Webinar series.

With that I will end the formal part of the session. I would like to open it up for questions at this time.

Jeffrey Geppert: Thanks, Mamatha. So we have a few questions already and if people have additional questions they can type them in the right-hand side of their screens.

The first question is, How can people get a copy of the slides?

If you go to the QI Learning Institute Web site, which is http://www.ahrq.gov/qilearninginstitute/, there is a copy of the slides that you can download. You can also access this page through the QI homepage, which is http://qualityindicators.ahrq.gov, and under the News section there is a link to the Learning Institute page.

The second question is, Where can people get assistance for installation of the AHRQ QI software?

There's a support line, for which the E-mail address is support@qualityindicators.ahrq.gov. If you are having issues or difficulties installing the AHRQ QI software, just let us know. We'll respond within a few business days, most likely the same day, and help you sort through your issue. As Mamatha mentioned, there are two versions of the AHRQ QI software: a SAS version and a Windows version. To run the SAS version, you need a license for the Base SAS module. For the Windows version, you don't need any third-party software. The two versions are compatible in terms of producing the same information from the same input file, and the SAS version is typically used by users with very large data sets who want the flexibility of that particular software.

Does AHRQ still provide guidance on the calculation of confidence intervals, especially in regard to being able to calculate 95-percent or 99-percent confidence intervals for the observed-rate PSIs?

If you have a specific question on confidence intervals, I encourage you to submit it to the support line. We will probably include that topic in one of our future Learning Institute Webinars relating to empirical methods and sample sizes. The software does produce a 95-percent confidence interval, but some people want to compute different confidence intervals and different rates. If you would like to do so, again I would encourage you to submit a question to the QI support line and we can send you some guidance related to that.
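
As one generic way to compute such an interval (a normal-approximation formula for a proportion, not necessarily the exact method implemented in the AHRQ QI software), the sketch below shows a 95-percent and a 99-percent interval for an observed rate with hypothetical counts:

```python
# Generic normal-approximation confidence interval for an observed rate
# (a proportion). Illustration only; not necessarily the exact method used
# by the AHRQ QI software.
import math

def rate_confidence_interval(numerator, denominator, z=1.96):
    """Return (lower, upper) bounds; z=1.96 for 95 percent, 2.576 for 99 percent."""
    p = numerator / denominator
    se = math.sqrt(p * (1 - p) / denominator)
    return max(0.0, p - z * se), min(1.0, p + z * se)

print(rate_confidence_interval(12, 400))            # 95-percent interval
print(rate_confidence_interval(12, 400, z=2.576))   # 99-percent interval
```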

I've read studies that indicate teaching hospitals often are shown to have poorer quality than non-teaching hospitals, but the truth of the matter is that they tend to just document better. Thoughts?

The user is asking whether that's related to true performance or improved documentation. That is an excellent question and that is really the focus of our validation activities. The validation projects are really in some sense about ensuring uniformity in documentation and coding practices and identifying areas in which there is some variety. That's something we will learn a lot more about as we conduct these validation studies to learn whether these differences are related in fact to performance and to what extent they are also related to documentation.

How will the Quality Indicators be updated to reflect ICD-10?

There's an outstanding rule for implementing ICD-10, which is several years off. I imagine, Mamatha, that the Indicators would be updated as soon as that rule was fully implemented.

Mamatha Pancholi: Right. At the moment there are no plans to update to ICD-10. We know it's on the horizon, but until that rule is formalized, we don't have the resources to be able to do that upgrade for the QIs.

Is there a fee for the AHRQ QI Learning Institute?

The next question is about a fee for the Learning Institute. No, the Learning Institute is a free resource to anyone who applies. It is a service offered by the Department of Health and Human Services.

Jeffrey Geppert: You want to answer the next one, too?

Mamatha Pancholi: Yes.

The next question is, If you are a part of a hospital, are you able to join the Learning Institute?

The target audience is actually a little bit broader than a single hospital: a State agency or the entity involved in a public reporting effort. That's mainly because of our capacity; since we only have room for 100 organizations in this Learning Institute, we're giving priority to those organizations. I encourage you to apply if you are a hospital and you are interested, and if we're able to support you, we will. You may not be given priority if you are not part of a broader reporting effort; the target audience is really oriented toward those efforts. However, if you have questions or you need assistance, I encourage you to E-mail the QI support line. They're good about responding to hospital needs, and my guess is we'll be able to help you with whatever question you have.

Do you have any plans to expand into other arenas, such as hospice, ambulatory, or primary care?

In terms of other focus areas, or settings of care, at the moment we don't have explicit plans to go in those directions. There have been discussions at AHRQ about extending our efforts into the outpatient arena and into the emergency departments. Again, those development activities really are pending resources. At the moment, no, we don't have any explicit plans to go into those areas.

Jeffrey Geppert: Okay.

Where is it possible to find documentation on the Composite Indicators? We are unable to locate documentation in the PQI specifications.

There's a report on our QI Web site. If you go to the section of the site where the documentation for the PQIs is posted, there's a section that lists which of the individual Indicators are included in each of the individual Composites.

Does AHRQ have any recommendations for the States regarding collecting clinical data, such as lab data from hospitals?

There's a question related to recommendations to States regarding the documentation of clinical data, such as lab values. There are some ongoing pilot projects, as Mamatha mentioned, that AHRQ has supported with the States to answer that very question, in terms of which clinical data items are feasible and optimal to collect. Mamatha?

Mamatha Pancholi: Actually, if there are folks interested in that pilot project, please E-mail me. I'm happy to put you in contact with the AHRQ staff who are running that project. They're expecting another year's worth of work in that area, but sometime next year there will be a report available for folks who are interested.

Jeffrey Geppert: Okay.

Are any of the Composites NQF endorsed? If not, are the individual items in the Composites NQF endorsed?

For the four Composite Measures that were submitted to NQF, the process is still ongoing. There was a steering committee—a Technical Advisory Panel specifically—that NQF convened. Their first charge was to develop, consistent with existing NQF evaluation criteria, a set of companion criteria to apply to composites. NQF posted the criteria on their Web site in mid-August, and the panel is currently using the criteria to evaluate the AHRQ QI composites. So that process is still ongoing and it is expected to be completed by the end of the year.

It was mentioned that the Reporting Templates will be released "next year." Is there a more specific time period when this is expected to be released?

If you would like to see them, you can submit a request to the QI support line. We would be happy to send them to you. They're being implemented in a tool that will be available sometime, but no specific timeframe is in mind. Is that right, Mamatha?

Mamatha Pancholi: Right. We're hoping we'll have a beta version sometime around spring of next year. As with all development activities, it depends on what we have next year. At the moment the Reporting Template is available as a paper document, but what we're talking about is a software tool. The research has been done and the language that's been documented is certainly still available, so we're happy to share that. Send us an E-mail. We'll be happy to do that.

Can you please give us the address to obtain a copy of the slides again?

The E-mail address to obtain the slides is on the last slide, which I think is still up. If you go to the Quality Indicators Learning Institute Web site, the slides are posted underneath the event name. If you have trouble downloading them, please E-mail the QI Learning Institute E-mail address, which is QualityIndicatorsLearning@ahrq.hhs.gov, or you can E-mail the support line, which is support@qualityindicators.ahrq.gov, and we will be happy to get you those materials.

I work for a hospital and am interested in internal reporting. Would it benefit me to become a member? If yes, in what ways?

I guess it would, to the extent that your reporting efforts are part of a larger one, or if you want to get information and talk to other folks involved in similar efforts. Again, note that priority will be given to folks who are involved in a more systemic and broader reporting effort for their State.

Are there any plans to develop similar Quality Indicators for the ambulatory environment?

Again, we currently are focused on the hospital setting in terms of measure development, along with emergency preparedness and the Medicaid population. But really, we have no plans at the moment to go into any other settings of care. There are several questions here asking about ambulatory and other care settings. We hope to go in that direction, but at the moment we don't have that opportunity.

Which are the current PSIs endorsed by NQF, and when are you expecting to get the endorsement for the rest of the PSIs?

One thing that we are working on that we didn't describe in detail at all is a document that we're calling the Comparative Reporting Guide. This guide will go Indicator by Indicator and talk about the strengths and the weaknesses of the Indicator for reporting purposes. It will rank the evidence base for that Indicator and will include whether or not that Indicator was endorsed by the National Quality Forum. In terms of future submissions, at the moment we don't expect to have any significant number of PSIs or other measures submitted to the National Quality Forum at this time. I think our next focus will be on strengthening the Indicators we have currently. In the next year or two—or possibly later, depending on our validation efforts—we may or may not send some of the PSIs forward. Our goal is to strengthen them now for public reporting.

Jeffrey Geppert:

Do you expect that present on admission (POA) will change the face of quality reporting in the future?

We do expect that present on admission coding will be extremely important. For quality measurement, certainly, it makes it possible to assess various types of events and to assess severity levels for patients in a much more accurate way. A lot of activity over the next few years will be devoted to assessing and improving present on admission coding, to make sure that potential is realized.

When will the AHRQ population rate used for hospital comparison of PSIs be updated?

All of the population data and coding will be updated in the next release of the AHRQ QIs, which we update every year. The FY09 coding changes will be included in this next release, as well as updating all of the comparative data.

Mamatha Pancholi:

Explain how your Quality Indicators relate to success in attaining PHQ [Physician and Hospital Quality] certification with NCQA [National Committee for Quality Assurance].

I'm actually not sure if it will. Typically the QIs do not intersect much with the world of NCQA. My guess is it would not have much impact. Again, we've not worked with NCQA in this particular area.

Jeffrey Geppert:

You presented that one of the five modules is a "neonatal module." Is this a different toolset than the PDIs, PSIs, IQIs, and PQIs?

The Neonatal module is a subset of the Pediatric module. We don't anticipate creating a whole separate set of software and documentation for the Neonatal module, but rather will include it as an appendix or component of the Pediatric module. It's worth making the distinction because there are specific clinical and coding issues that warrant treating the neonatal population distinctly.

Are MS-DRGs [Medicare-severity DRGs] replacing APR-DRGs in the relevant Quality Indicators?

Currently, the Inpatient QIs use the APR-DRG software in the risk adjustment, and through an arrangement with 3M the software actually includes a limited-license version of the APR-DRG software, so we can make that tool available to users without cost. The specific question is, Would we stop using the APR-DRGs and replace them with the MS-DRGs? The short answer is, Not in the near future. We plan to continue using the APR-DRGs as our risk-adjustment methodology for the IQIs. Part of the reason is practical: in order to implement the risk adjustment, we need data to which the DRG system has been applied, most data using MS-DRGs are very recent, and we would need to conduct an assessment to evaluate whether that would be a worthwhile transition. Currently, there are no plans to do that.

Mamatha Pancholi: I believe that's our last question. I would like to take this opportunity to remind everyone that, if you are interested in any of the topics that I had listed in an earlier slide, I encourage you to apply for the Learning Institute. If you have ideas for topics for Webinars that I did not list, I welcome you sending us your ideas. We want to make this Learning Institute as useful to you as possible. This is a resource that I think has really grown and we hope to make this as helpful to you as possible.

Please remember the deadline for applications is September 30, 2008. Please E-mail us at QualityIndicatorsLearning@ahrq.hhs.gov with "Application Information for the QI Learning Institute" in the subject line, and we will respond with application details.

Again, I'm Mamatha Pancholi, and I want to thank you for joining us today. Please E-mail us any questions you may have. My E-mail address is Mamatha.Pancholi@ahrq.hhs.gov. You can also E-mail QualityIndicatorsLearning@ahrq.hhs.gov if you have any questions. Thank you, and I look forward to meeting you again in the Learning Institute.

Current as of October 2008


Internet Citation:

AHRQ Quality Indicators 101. Webinar transcript. October 2008. Agency for Healthcare Research and Quality, Rockville, MD. http://www.ahrq.gov/QILearningInstitute/917webinarslides/qi101-transcript.htm

