AQA Third Invitational Meeting Summary

Day Two Opening Remarks

Carolyn Clancy, Agency for Healthcare Research and Quality

Carolyn Clancy made brief opening remarks at the start of the April 28 session. She stressed the need to strike a balance between urgency (the need to move forward now) and inclusiveness and transparency, and noted that the starter set being put forth today by the workgroup on performance measurement was drawn from measures that had already been vetted. Finally, Clancy announced that nine of the Nation's largest health plans were working with AHRQ in a separate initiative to improve quality of care.

Report of the Performance Measurement Workgroup

Kevin Weiss, American College of Physicians

Kevin Weiss said the workgroup on performance measurement had been working since the January 2005 meeting to develop a starter set of measures for ambulatory care. He noted that the workgroup started with the list of "straw man" measures presented at the January meeting—all of which were part of the ambulatory care measurement set developed by CMS, the American Medical Association Consortium, and the National Committee for Quality Assurance and submitted to the National Quality Forum for expedited review.

Weiss reiterated the general parameters for performance measures (discussed at the January meeting), saying that the starter set was focused on areas where the Institute of Medicine had put its mark. He noted questions raised at the January meeting about the language specifying that measures be "limited to ambulatory care," saying that the workgroup discussed linkages to other types of care and to coordination of care. He said a key factor in developing the starter set was to create a manageable number of measures to meet needs while not overwhelming everyone. Weiss also stressed the workgroup's commitment to the parameter that the measures "reflect a spectrum rather than a single dimension of care."

Weiss said the workgroup used a modified "Delphi" exercise to help facilitate its discussion. Using this, participants considered and selected measures based on whether they met the following agreed-upon "supra-criteria":

  • Clinical importance and scientific validity.
  • Feasibility.
  • Relevance to physician performance.
  • Consumer relevance.
  • Purchaser relevance.

Weiss added that other factors under consideration included direction obtained during the January AQA meeting and whether the measures were preliminarily approved under NQF's expedited review process. Most notably, the workgroup had explored the question of efficiency measures, as directed by participants at the January meeting.

Next, Weiss discussed the starter set of measures. The 26 measures address:

  • Prevention—breast cancer screening, colorectal cancer screening, cervical cancer screening, tobacco use, advising smokers to quit, influenza vaccination, pneumonia vaccination.
  • Coronary artery disease—drug therapy for lowering LDL cholesterol, beta-blocker treatment after heart attack, beta-blocker therapy post-MI.
  • Heart failure—ACE inhibitor/ARB therapy, LVF assessment.
  • Diabetes—HbA1C management, HbA1C management control, blood pressure management, lipid management, LDL cholesterol level (<130 mg/dL), eye exam.
  • Asthma—use of appropriate medications for people with asthma, asthma: pharmacologic therapy.
  • Depression—antidepressant medication management (acute phase), antidepressant medication management (continuation phase).
  • Prenatal care—screening for human immunodeficiency virus, anti-D immune globulin.
  • Quality measures addressing overuse and misuse—appropriate treatment for children with upper respiratory infection, appropriate testing of children with pharyngitis.

Weiss noted that, in contrast to the overuse/misuse measures, there was not a lot of evidence about efficiency measures. Moreover, he said, the efficiency measures already in place were not developed with a great deal of transparency and had not been reviewed by NQF.

More broadly, Weiss stressed that the workgroup sees these performance measures as constituting a starter set, not a complete set. He noted that some type of public comment period might be useful, and suggested that evaluation of some or all of the measures might be required. Weiss added that implementation should include a test period for some or all of the measures, and noted that different measures may have sample size issues regarding their validity when used in small practices or by individual physicians.

Moving forward, Weiss noted that there was a need to move beyond quality measures to measures of efficiency, and to move beyond clinical measures to include other dimensions of care (including patient experience). Specialty measures in ambulatory care also need to be examined, he said.

Finally, Weiss stressed that any performance measurement activities should be accompanied by support for practice improvement opportunities, and said it was important to think about research and development to build structures to measure quality and to create better systems for doing so.

Discussion

The discussion opened with a workgroup member noting that the workgroup had discussed whether it was developing a starter set or the beginning of a starter set. He added that there had also been discussion about the need to recognize that this is not a full set of measures that will allow for significant evaluation of physician performance. The bigger question, he said, is, Where do we go from here? And how do we round out areas where there are some gaps?

A second workgroup member stressed that the list represented a good starting point, and noted that CMS Administrator Mark McClellan had acknowledged this in his remarks. Our issue moving forward is how to expand on this starter set, he said. Another participant observed that it was important to reach consensus on the starter set, especially by purchasers and health plans. If we view this as a starter set, he said, then we can discuss where we go from here. This is a well-defined way to begin, and it represents a huge change in how physicians (many of whom are in small practices) will do their work. While we recognize the need to push physicians in this direction, he added, we must have purchaser and plan support.

I hope that by the end of the day we can reach consensus that this is the starter set, agreed another participant. Implementation, obviously, is a different story, she added. This seems like a reasonable set of measures, said another participant, cautioning that expanding the set would depend on investments in electronic medical records. One person, however, expressed concern about approving a starter set the group was just seeing for the first time.

One participant agreed that measuring physicians for improvement was the way to go, adding that it was important to reward those who make improvements every bit as much as to reward physicians who are starting off at a high level.

A participant posed a question about the feasibility of the data set. In response, Weiss stressed that the key issue wasn't comprehensiveness, and he acknowledged that the measures were mostly based on administrative data. A couple of these measures will be tough to do, he said, but it is not our intent to strain the system. As electronic medical records become more prevalent, Weiss added, we will be able to use them more.

One person asked, In the workgroup discussions on the feasibility of implementing some of the administrative measures, was there discussion on attribution? Weiss responded that the workgroup had been much more concerned about defining the measure set. We recognize that the next step involves how far we can aggregate the data to make the measures "physician focused."

I have some concerns about feasibility, alignment, and the reduction of redundancies around measurement, said one participant. The issue of attribution is obviously very important, he said, yet I think that certifying boards as a group can be a vehicle for implementation.

One participant asked a question about the Institute of Medicine's six aims (making care safe, effective, patient-centered, timely, efficient, and equitable), as well as a question about whether some of the starter set measures were intended to stimulate better quality ambulatory care in physician groups. In response, Weiss said that the workgroup was limited by its need to live with measures that were going through the NQF process. We haven't yet had a chance to embrace all six aspects of quality in developing performance measures, he said. This led the person to reiterate that he believed the workgroup needed to make clear that the Ambulatory Care Quality Alliance intended to develop systems to achieve better quality of care along the six aims. I think it is important that we clarify our intent, he said.

A speaker from CMS reminded everyone that the U.S. Congress was looking at physician payment reform and stressed that the AQA's work was critical in informing the debate. We're heavily involved in politics, he said. We want to say that providers need stable and preferably higher reimbursement, but at the same time we want to improve quality while saving money. He asked whether the workgroup had addressed health disparities in its discussions and wondered whether this was an aim that should be set forth.

One participant termed the starter set "a modest beginning," and said "the modest measurement set won't stop the train from moving." There's urgency in the marketplace to have performance measures to address physician quality, he said. He added that it was important to move quickly to ameliorate the market confusion and accelerate the pace in proceeding to next steps. In response, Weiss said there was a very positive response in the workgroup to efficiency measures—but also concern about the "black box" nature of what's happening in the marketplace now. Purchasers and health plans need to open up and work with us.

This is a great starting point, said one participant, and the workgroup did a good job in aligning the measures with the National Quality Forum process—but what now? Many purchasers would like to see performance measures pushed into use as quickly as possible. He also expressed a need to adopt crosscutting measures. Another participant (representing a consumer group) stressed that any crosscutting measures adopted need to be consumer relevant. We're a long way from that, she said.

I think the physician community needs to participate in discussions about efficiency measures, commented one participant, because there is legitimate concern about what's being measured and what data are being used to do the measurement.

I appreciate that the workgroup wants to tackle efficiency, said one participant, and I hope we can set a timetable for achieving an efficiency performance measurement set. We need to provide certainty and let those who are looking for consensus know that their window of confusion is time limited. Another participant added that efficiency measures are critical in order to get to the right kind of care and to make it affordable. Another, however, asked, What is efficiency? What exactly are we measuring?

The efficiency metric has two terms in the equation, value and cost, said one person, and we need to engage with both in order to drive improved performance and outcomes. The starter set is a good start and a good compromise.

One participant noted that while the workgroup had focused on clinical underuse measures of quality (which are under review through the NQF process), overuse and relative-resource-use performance measures were also being looked at. We're also looking at reforming the patient survey, he said, to focus on patients' perceptions of care at the physician level. Measures of "systemness" need to be in the equation. He added that it was also important to focus on electronic medical records—and the data that are available now. While the starter set is a very important set of compromises, he added, we will face active tension between the measurement development phase and whatever comes next.

I also think systemness needs to be part of any ambulatory care set, agreed another participant. Purchasers want to know what's in the black box. We want to know how physicians are being designated as high performers or low performers.

One person noted the need for measures that look at the performance of physicians across specialties. Are there existing performance measures we could use for this? asked another.

In response to the previous questions, one participant noted that new measures needed to be developed with the input of the people who are practicing in that specialty. If not, he warned, we're likely to measure the wrong things. He added that if purchasers and plans think there's confusion in the marketplace, then imagine how physicians feel. We need to know where to invest our dollars and time, he said. If we do not involve physicians in the discussion, then we will have bad outcomes and no transparency—and black box management will be seen by physicians as plans trying to reduce costs without concern for quality.

This has to be about quality, the speaker continued. Even the efficiency measures have to be about quality—doing the right service at the right time. He warned against rushing forward with untested efficiency measures.

The efficiency piece is a big challenge, said a representative of the purchaser community. I don't know an employer who would debate the importance of quality measures, so that's a non-issue. But our urgency around efficiency measures is high because we're paying for the inefficiencies, he added.

Kevin Weiss said he thought physicians and employers have a lot more common ground around efficiency than many might think. We know where some of the inefficiencies are, he said, and changing them is important. But we need good performance measures. He asked those at the table to bring forward efficiency measures for the workgroup to review—and challenged everyone to open up their black boxes and let the workgroup evaluate the data.

From the point of view of the health plans, the current reality is that the way forward isn't clear, said one participant. We're not sure how to use additional electronic tools across all potential touchpoints involving patients, and we shouldn't lose the opportunity to see how the starter set actually improves quality.

Another participant pleaded for greater transparency. Physicians want to explore electronic health records, he said, but not even knowing what type of program or software to select is impeding forward progress. Physicians need transparency about what they are being measured on, and information about the criteria for setting up electronic health records. Also, what measures should I be able to report on in my own practice? he asked. Right now, it is overwhelming, and there is a lot of fear about moving forward. In response, another participant noted that efforts to certify electronic medical recordkeeping systems were taking place in a separate forum.

One participant addressed the issue of soliciting public comment on the starter measures. I would like to see this as a complementary activity to NQF's work, he said. He also issued a challenge to those at the table: There's been a lot of suggestion that we consider some kind of systematic or structural measures. I think we have an opportunity to insist that these be collected from prospective data (where the definitions are right on the sheet, improving quality and reducing the data collection burden because the activity is done as part of providing quality care). The employer/purchaser communities need to put positive incentives on the table to encourage action, he said.

A CMS participant said it was important to adopt the starter set and start operationalizing the measures, and reiterated Mark McClellan's point about the need to move forward and have physicians play a role in the process. He also discussed the notion of a value proposition, stressing the need to demonstrate that value works. Otherwise, he said, all we'll have is cost containment.

The CMS participant also referred to the work of the Hospital Quality Alliance (HQA) in making a point about AQA, stressing that HQA is not the only game in town. It's not that others can't do anything in the marketplace, he said; it's just that such work falls either within or outside of the alliance's activity. CMS does work outside HQA, as do other purchasers. But being part of the alliance's activity involves a certain understanding. Likewise, this is just a starter set. He added that if the stakeholders at the table could reach consensus on the starter set, then physician leadership would be critical in moving the process forward.

This isn't just about agreeing to a starter set of performance measures, said another person, it's about our willingness to stay involved with the process for the long haul. We also have a huge challenge to develop a way to communicate what we're doing to all stakeholders, he added.

Another speaker responded to the CMS reference to the HQA process by noting that there are a number of separate initiatives. There will at times be a divergence of views, he said, and there needs to be a way to capture a certain amount of tension. I think the starter set is a way to move in the direction of getting the full medical community's support for this. But we also need the support and representation of all stakeholders—and this means greater input from consumers and purchasers. He expressed concern that "the floor can quickly become the ceiling," warning that developing the infrastructure to report on measures (at the Federal, State, and local levels) can overwhelm all other activities.

In response, Carolyn Clancy said that along with an urgency to act there was also a need to give everyone time to absorb AQA's activities and to respond to its work. She added that participation from purchasers and consumers was critical, and suggested that osteopathic physicians should also be brought into the process.

A number of participants voiced support for the workgroup's efforts. One noted the need to keep in mind a balance between quality and efficiency, and said his organization's clinical assessment program was sold on quality improvement. Another stressed that the consumer community feels the urgency to act now, saying that "consumers have the most to gain if we get it right and the most to lose if we don't." She added, however, that there wasn't much that was consumer relevant in the starter set, although she was glad to hear that consumer relevance was part of the workgroup's criteria for moving forward. Still another participant said the AQA effort was the best effort to create uniform performance measures, and a fourth stressed the need to reach agreement at the meeting on adopting the starter set.

There was some disagreement. One participant said the starter set was too broad, and raised concern about overburdening practicing physicians while trying to improve quality. If this is a true starter set and there is true commitment to urgency and implementation, he said, then we need a starter set based solely on administrative data. He suggested that the starter set should be scaled back to include only measures that rely on administrative data—and allow entities and organizations that want to use these data to experiment. He added that there were some administrative data-derived measures (e.g., readmission rates for all diagnoses) that tie in quality improvement and efficiency. Regarding prospective data collection, he warned against confusing it with current performance.

In response, Kevin Weiss said the process wasn't about taking the easy road (which would be relying only on administrative data). That approach, he said, would take away the tension needed to push toward a patient-centered approach and patient outcomes. Instead, we need to push the field forward as part of the starter activity.

Regarding the use of administrative data, another person said he wanted people to start thinking about chart extraction. The measurement process is valuable because physicians trust the results (versus data that come from an administrative data set). Our real goal is to drive quality improvements, he said. We also need to get physicians to start to understand the value of medical recordkeeping.

Physicians want to do what's right, said one participant, and they want to be a part of the process. But they also want to know what criteria they are being judged on. They want to use their own charts, because they don't trust the administrative data. Going forward, she said, the process must be open and above board—and physicians must play a role.

It would be very easy to use just administrative data, said another participant, but I am convinced that we need to do more. For those measures that may require some chart extraction, we can get physicians to start using this prospectively. This is not difficult because physicians are already doing it.

A participant representing a health plan noted that while administrative data work quite well in some areas, studies have shown that in other areas, what we are looking at isn't what is actually going on. We need to measure what is really going on, she said.

One participant referred to estimates that it will take 3 to 5 years to get physicians onto electronic medical recordkeeping systems and asked, If we adopt these measures, does that mean it will take 3 to 5 years to start reporting? No, replied Weiss. The starter set includes some measures based on administrative data and some based on chart-extracted data. Putting the measures in place will allow us to begin to focus on quality improvement in these areas—and those with electronic health records will just be ahead of the game. But individual practices can begin using the performance measures within the next 6 months. Clancy added that many of these data are available now.

But aren't we better off with a starter set based only on administrative claims data? asked another participant. No, replied someone else, because more clinically oriented outcome measures are possible if more data are aggregated.

One person noted concern that purchasers and health plans would marginalize the process if the starter set relied only on administrative data. They're already using administrative data, he said. We need to get to the chart data.

One participant noted that as the process has moved forward, different people have wanted to become involved, adding that she was pleased to see the discussion turning on whether or not the process was moving fast enough. She also stressed that the process has been physician driven. The question on the table now, she said, is whether this is a reasonable place for CMS to start. I hope everyone will say yes, because CMS's involvement will bring clarity and allow the process to move forward and produce action.

Representing the consumer viewpoint, one participant stressed that consumers need information that will drive better decisions. We need to put forward that value proposition in order to drive the system in the right direction. She also cited the need to address equity issues. There's a lot of talk about health disparities, and a starting point would be to collect data so we can see what's going on and attack the problems.

One participant said there is significant concern within the employer community over competing initiatives. All parties should aggressively share everything they're doing, he said, both on quality and efficiency metrics.

What set of measures comes next? And is there one that applies only to primary care physicians? asked one participant. As we go from 26 to 100 to 200 performance measures, they're all going to apply to family physicians—who are sincerely concerned about that potential burden. Is there a way to address what's appropriate, standardized, and specific to family care?

From a consumer perspective, the issue of comprehensiveness is very important, said another participant. We have a sense of urgency to expand the performance measurement set to get at consumer needs. She added that she did not believe that efficiency measures could be considered without also looking at quality. Purchasers can shift their costs, but consumers have no vested interest in inefficiency and poor quality.

Should we start to think about system processes and structural measures in addition to quality and efficiency measures? asked one person. These are potentially important components of any pay for performance activities, he said.

One participant said that efficiency measurement was critical. The greatest disparities are between those who are insured and those who are not.

The American Academy of Pediatrics has developed some modules physicians can use to compare their work to the guidelines, noted one participant. This offers pediatricians a means to start looking at their charts and using them, and to set up systems to improve quality of care. Would plans and purchasers be willing to incentivize such activities? he asked.

A motion was offered: To adopt the Starter Set.

Result: The motion was adopted by consensus.


Carolyn Clancy said that having started this process, AHRQ recognizes that it is important to reach out to a broad group of stakeholders—including many who are not currently at the table. The timeline moving forward, she said, includes two tracks:

  • A timeline for what can be implemented right now.
  • A timeline for building upon the starter set.

Clancy noted that it was clear from the discussion that there was a need to focus on efficiency measures and not to lose sight of systemness.

We hope to reconvene the larger group in September, Clancy continued. Between now and then, I hope the workgroup can expand the starter set and start to see how quickly it might be possible to move forward on both efficiency and system-process measures.

We also need to have a communications strategy, said Clancy. Perhaps we need to issue a press announcement about where the Ambulatory Care Quality Alliance has been and how we plan to move forward. She also urged those at the table to start disseminating information on AQA's activities to date. The hallmark of this proceeding is transparency, she said.

Finally, Kevin Weiss committed the workgroup to developing a set of efficiency measures that can be reviewed at the September meeting.

