U.S. Department of Health and Human Services (www.hhs.gov)
Agency for Healthcare Research and Quality (www.ahrq.gov)

AQA Third Invitational Meeting Summary

Session on Reporting

Randy Johnson, Motorola

Randy Johnson noted that the workgroup on reporting created two separate subgroups:

  • One to develop principles for provider reporting (chaired by Nancy Nielsen).
  • A second to develop principles for consumer and purchaser reporting (chaired by Johnson).

Each group came back with a separate set of principles for reporting.

Principles for Reporting to Physicians and Hospitals

Nancy Nielsen discussed the principles for reporting to physicians and hospitals, noting that her subgroup had assumed that these reports would offer more information than those being developed for consumers. The principles address design, data collection, data accuracy, data aggregation, report format, report purpose and frequency, and the review period.

Regarding design, the principles state that:

  • Practicing physicians should be actively involved in the design and implementation of a performance reporting system.
  • Measures used for reporting should be evidence based, clinically relevant, statistically valid, and reliable.
  • Performance measures should be stable over time, unless there is compelling evidence or a justifiable reason not to be (in other words, said Nielsen, they shouldn't change every 6 months, and they should be measuring something that will enable quality improvements).
  • Physicians and hospitals should be notified in writing, in a timely manner, of any changes in program requirements and evaluation methods.
  • Methods, including risk-adjustment methods, should be disclosed and explained to physicians and hospitals.

Regarding data collection, the principles state that:

  • Accurate, auditable data should not be limited to administrative data, and should include data abstracted from medical records when appropriate.
  • Medical record data should be collected in a manner that minimizes burdens and disruptions to physician practices, hospitals, and health insurance plans. Prospective data collection should be encouraged where possible to minimize burdens.

Regarding data accuracy, the principles state that mechanisms to verify and correct reported data should be identified. Regarding data aggregation, aggregation from multiple sources should be encouraged in order to get a more comprehensive assessment. We need all the data to get a complete snapshot, said Nielsen.

Regarding the report format, the principles state that:

  • Reporting formats should be user friendly, easily understood, and pilot-tested before implementation.
  • Physician performance should be evaluated and reported to promote continuous quality improvement as well as to encourage meeting/exceeding agreed-upon targets. (The idea here is not to set up winners and losers, said Nielsen, but rather to encourage improvement.)
  • Results of individual provider performance should be displayed relative to others, without identification of others by name. Reports should focus on meaningful and actionable differences in performance. (Nielsen added that there was a lot of discussion at a National Quality Forum (NQF) pay-for-performance meeting about not "tournament ranking" physicians and of the need for incentives for those at the top to help others improve. The aim here is to encourage everyone to rise to the top, she said. If we reward only the top 10 or 20 percent, added another workgroup member, then we discourage everyone else from trying to do anything to improve.)

Regarding report purpose and frequency, the principles state that health insurance plans, physicians, and hospitals should collaborate to share pertinent information in a timely manner that promotes patient safety and quality improvement. Finally, regarding the review period, the principles state that physicians and hospitals should be able to review performance results prior to any public release.

Principles for Reporting to Consumers and Purchasers

Randy Johnson discussed the principles for health care reports to consumers and purchasers. The principles address comprehensive reporting, consumer-friendly formats, use for public reporting, use of standard measures and data collection methods, transparent methods, timely results, portrayal of performance differences, and full and fair attribution.

Regarding comprehensive reporting, the principles state that reports should focus on areas that have the greatest opportunities to make care safe, timely, effective, efficient, equitable, and patient centered. Information reported should include both information that consumers want, based on the literature, and information that is important for consumers to have. In addition, reports should address these areas for hospitals, physicians and physician groups, integrated delivery systems, health plans, and treatments.

Regarding consumer-friendly formats, the principles state that the reports should be formatted to reflect how consumers think about their care, evaluate data, and apply data to actions, using categories, formats, icons, and other depictions based on credible research. Regarding public reporting, the reports should be used to support informed choice of providers by patients, referring providers, health care purchasers, and health insurance plans. Reports should also be continually improved so that they are increasingly effective and can be evaluated by all users, including consumers with differing levels of literacy.

Regarding the use of standard measures and data collection methods, the principles state that the reports should rely on standard measures when available. All measures and data collection methods used for reporting should comply with the Consumer-Purchaser Disclosure Project's Guidelines for Purchaser, Consumer, and Health Plan Measurement of Provider Performance. In addition, the opportunities and advantages of reports should be considered in terms of the requirements for those who provide the data.

Regarding transparency, the principles state that measures and methods for scoring and ranking performance should be as transparent as possible so that users and those being measured know results are valid and reliable. Regarding timeliness, the principles state that in order for reports to be most useful, results should be based on data that reflect performance within a timeframe that is as recent as possible.

Regarding performance differences, the principles state that reports should identify performance differences that enable patients to make decisions with meaningful information. Finally, the principles state that, to the extent possible, results should accurately reflect all units of delivery that are accountable in whole or in part for the performance measured.

Johnson concluded his remarks by acknowledging that while there is little new about many of these principles, the workgroup wanted to affirm some of these points about how consumers think about care and evaluate data.

Discussion

The discussion opened with a question about the reference in the consumer reporting principles to standard measures and data collection methods. The issue was whether the process should not only comply with the Consumer-Purchaser Disclosure Project's Guidelines but also endorse their importance, or whether the Disclosure Project should instead endorse the Ambulatory Care Quality Alliance (AQA). The reason for the emphasis on AQA, said one participant, is the conscious effort to refocus attention on the starter set. There was discussion over which organization should take up the issue. One participant noted that there have been a lot of requests for health plans to support the Consumer-Purchaser Disclosure Project's Guidelines. I think AQA would like to get recognized somehow, and to have the Disclosure Project endorse what the workgroup has put forth would be an excellent start, she said.

Next, a participant commented on the issue of not ranking providers. While I agree that there is room for improvement, I also believe that we cannot tolerate poor-quality care, he said. We need to insist on excellent outcomes in health care regardless of setting. In response, Nancy Nielsen stressed that it was not the workgroup's intent to tolerate poor care.

While many of these principles are self-evident, said one participant, it is very important that we make this statement. This hasn't been done before by such a large set of stakeholders. He then addressed two specific issues. Regarding the provider reporting principles, he asked for clarification on whether data collection could be done by chart if not available electronically. Regarding the consumer reporting principles, he stressed that "full and fair attribution" is a critical issue. It is very important that this reflect that a sufficient and substantial level of actionability ought to be the basis of attribution, he said, and not an absolute standard. This also depends on what the information is being used for, he added. For example, there's a more restricted focus if we're talking about board competency and a broader focus if we're looking at what happens to a consumer seeking care. If this is public reporting for consumers and purchasers, then we need to be looking at broad accountability.

Physicians don't always know they are being measured, said another participant, adding that she believed that language on reporting frequency was needed to ensure that the reports reached their intended audience. She also noted that the data collection language (in the provider reporting principles) stated that they should not be limited to administrative data. If these principles are designed to be able to evolve to measure systemness, for example, then this language is limiting. Turning to the consumer reporting principles, she stressed that people are just beginning to look at how to convey information to consumers. We don't know everything yet, so perhaps we should say that "reports should convey the latest research on how to best report to consumers."

The discussion returned to the issue of whether or not to identify physicians relative to their peers. One participant suggested that the language be changed, saying that many physicians want to know who the top performers are for benchmarking purposes. The bigger concern, she said, is how health plans structure their incentives. Another participant said that if there is a minimum standard, then it was important to know what the "minimum" was.

Carolyn Clancy weighed in on the discussion, expressing concern about the word "meaningful." We do not yet have a way to calculate what is clinically meaningful, she said. She also suggested that these were "principles 1.0," not meant to be valid forever.

A member of the workgroup noted that there was some discussion around the term "clinically meaningful," and that the workgroup had pulled some technical language from its draft. The issue we looked at, he said, is that if you have people ranked 99, 98, 97, 96, would these be insignificant differentiations, or would they be four-star, three-star, and so forth? Another participant said she thought "minimally acceptable" was an odd standard because interpretation might vary considerably from one person to the next.

Randy Johnson noted that, over time, as the data aggregation and reporting got better, it would be possible to differentiate on quality. The minimum standard today may not be the same one in place tomorrow when we've improved the system overall.

In the principles for reporting to physicians, said another participant, I think we need to have more emphasis on reporting for quality improvement (i.e., the distinction between quality and improvement). Maybe I am a physician who isn't in the top 10 percent, but I am getting significantly better. We need to build in that "improvement in time" element, and not just compare the person's performance to others, he concluded.

One participant raised a question about transparency and public reporting. We might find patient safety concerns arising from the data we receive. If they occur at a large practice or a hospital, they would fall under the protected peer review process. Is it appropriate that there be some recognized overlap between when information is otherwise made transparent to the marketplace and the public and a time for physician review for quality improvement? Or are there times when quality concern will require active engagement? For example, he said, if a physician is a very poor performer, leading to concerns about patient care, is it possible that we can reveal concerns about that provider's care?

Nielsen noted that, like everything else in the public domain, that information would be considered in various ways, including by the courts. Carolyn Clancy suggested that this was an issue of real concern for physicians, and that it should be flagged for future review.

One participant expressed concern about the concept of not ranking physicians in relationship to their peers. Assuming that consumers are the end users, she said, they want this information and deserve it.

A second participant acknowledged that consumers want to know which physicians are top rated, but pointed out that physicians are concerned about doing this. In addition, what happens to the consumer who can't get in to see the top-ranked cardiologist, but instead gets number three or four (who's also probably a very good doctor)? Minimum standards are just that, she said, and I worry about ranking physicians on standards that may not be truly applicable to outcomes—especially once we throw in patient satisfaction, which is a much more subjective standard. We need to make sure that whatever we put forth is accurate, and we need to help the three-star physician become a four-star performer, and so forth. She concluded that if a physician meets the standards, then it's not clear why there would be a need to differentiate between three-star and four-star performers on consumer Web sites—unless there is a clear difference in outcomes.

We already rank physicians, said Randy Johnson. We're always asking friends and neighbors for recommendations—and we don't have the data now to make informed recommendations. Neither do physicians who make referrals.

Carolyn Clancy wrapped up the discussion by thanking the two subgroups for their hard work and asking for a sense of the group on whether to endorse the two sets of principles as a beta starter set.

A motion was offered: To strike the language "without identification of others by name."

Result: The motion was approved.

The group then agreed to endorse the beta starter set.

Next Steps

Wrapping up the meeting, Clancy said the Ambulatory Care Quality Alliance had made enormous progress. She indicated that the next meeting would be held in September. She noted that the data aggregation group had a lot of work to do, and said that it might be desirable to put out an interim report between now and September in order to address the sense of urgency coming from Capitol Hill. In conclusion, Clancy said she wanted to ensure broader representation moving forward, especially from consumer groups.

Current as of June 2005




Internet Citation:

Ambulatory Care Quality Alliance. Third Invitational Meeting. Summary. June 2005. Agency for Healthcare Research and Quality, Rockville, MD. http://www.ahrq.gov/qual/performance3/


