
AQA Third Invitational Meeting Summary

Report of the Data Sharing and Aggregation Workgroup

David Kibbe, American Academy of Family Physicians

David Kibbe started his remarks by echoing three statements previously made by participants:

  • Where you stand is tied to where you sit.
  • Data collection is feasible in the near term.
  • The way forward is not entirely clear.

But, he stressed, data sharing and aggregation are worth getting right.

The workgroup's objective, said Kibbe, was to reach consensus on a proposed data sharing and aggregation model. Data sharing is key, he said. How can we use technology to share this information going forward?

Kibbe outlined a set of principles for an effective data aggregation and sharing model. Almost all stakeholders agree that an effective aggregation model requires:

  • An independent, trusted third-party aggregator capable of maintaining appropriate restrictions on privacy and confidentiality.
  • Transparency with respect to framework, process, and rules.
  • A process that allows provider performance to be compared against both national and regional benchmarks and that otherwise supports national assessments of health care quality and efficiency.
  • A process that makes the data useful to physicians for improving the quality and efficiency of the care they provide to their patients, as well as for other purposes (e.g., maintenance of certification).
  • A process that results in the deployment of user-friendly and actionable information about physician quality and efficiency to the consumer.
  • The collection of both public and private data so that physician performance can be assessed as comprehensively as possible.
  • Standardized and uniform rules associated with measurement and data collection.
  • Protection of the privacy and confidentiality of data, as well as compliance with applicable laws, while ensuring necessary access to providers, plans, other data contributors, and consumers.
  • Systems or processes to share, collect, aggregate, and report quality and efficiency performance data that are affordable and that minimize burdens.

Based on this vision, said Kibbe, the workgroup recommends that the optimal data aggregation model have the following key attributes:

  • Data aggregation at the national level.
  • An independent governing board established to set rules, policies, and standards for data sharing and aggregation.
  • A trusted, third-party aggregator. (Kibbe noted that the workgroup is calling for a national, private-sector, third-party aggregator that can receive Medicare and Medicaid data and data from private-sector sources.)
  • A data stewardship entity to oversee aggregation and auditing activities and ensure that the rules, standards, and policies set by the independent board are implemented and complied with by all stakeholders.
  • Comprehensive assessment.
  • Aggregation parameters.

Regarding the aggregation parameters, the workgroup recommended that they include, but not be limited to: rules, policies, and standards addressing data collection, data validation, data analysis, data uses, and confidentiality; a transparent aggregation process; and an advisory committee to provide ongoing input and to review each year's aggregation process.

Next, Kibbe highlighted several outstanding issues. These include the relationship between regional initiatives and the national model and the particular challenge posed by small physician practices (many of which have little or no experience collecting or submitting data). Funding is another key issue. Who will pay for these activities?

Kibbe also raised a question about the need for unique patient and physician identifiers. The workgroup recognized that this effort would be very difficult without unique identifiers, so we need to find a way to address this difficult and costly issue. Finally, he raised the issue of how to incorporate new standards as they arise. Who will be responsible for developing new measures? Will the new board do this? Physician organizations? Or will NQF continue to undertake this activity?

Care Focused Purchasing

Following David Kibbe's remarks, there was a brief presentation on Care Focused Purchasing (CFP). Care Focused Purchasing is based on the premise that a new, consumer-driven market needs to be built that identifies and rewards better physicians, hospitals, and treatment options. CFP is also based on the premise that a more transparent, rational market for health care could reduce cost pressures, correct quality defects, and reverse the decreases in consumer confidence that are jeopardizing the current system.

Larry Becker of Xerox Corporation said that CFP is aligned with the Ambulatory Care Quality Alliance in seeking industry-standard provider performance metrics and in seeking to use data aggregation to enable the most credible deployment of metrics. In addition, he said, CFP will work with AQA to consider supporting Care Focused Purchasing as a national demonstration model for data aggregation and measures deployment.

We want data aggregation, said Becker. We're all dependent on having a critical mass of data. The more episodes you have per physician, the better you're going to be able to understand that physician's performance. Care Focused Purchasing is an effort to aggregate data, get a critical mass, and look at the point of convergence between cost efficiency and quality—and what it can tell us.

Earlier, during the discussion on the starter set of performance measures, a participant had referenced Care Focused Purchasing, saying it represented a great start and was aligned with some of the measures AQA was advancing. At the same time, a second person had noted that Care Focused Purchasing activities had been undertaken with input from health plans and expert panels. Our goal was to develop a set of measures, he said, that are as minimally burdensome to physicians as possible, that can be operationalized and implemented—and that are tied to actual performance improvement.

Discussion

The discussion opened with a question by Carolyn Clancy about the need to tie data aggregation efforts to technical assistance. Larry Becker noted that Version 2.0 of the CFP measures will start to correlate cost efficiency and performance improvement.

David Kibbe then brought the discussion back to the workgroup's activities. We assumed that there would be a starter set of performance measures that used clinical data from charts (both paper and, perhaps, also some electronic) and administrative data elements, and that these would have to go to some entity for storage, analysis, and so forth. This raises the question of how we operationalize these efforts fairly and without imposing an undue burden.

Another workgroup participant stressed that the workgroup—composed of a very diverse group of people—was able to agree on the set of principles being outlined today. We are saying that combining public and private data sources is key. This is an important set of principles that we thought would apply both to current regional efforts and to a new national standard.

A participant stressed the need to get data back to physicians at the point of care delivery. At the same time, she expressed concern about data confidentiality and fears about a national database falling into the wrong hands. A second participant also expressed concerns about data privacy, as the models clearly assume patient identification. A third person stressed the need to get good technical people involved in creating levels of protection.

Kibbe addressed the question of data security, noting that wherever data were aggregated there would be significant data stores and they would be vulnerable. He explained that the workgroup did not get into the issue of what level of data elements would be identified or encrypted by the various organizations.

Regarding data aggregation, one participant wondered whether the right model was top down (national to regional/local) or bottom up. If you create a highly structured national set of rules and protocols for data collection, when the aggregated results are sent to us at the national level, we will have a degree of comfort about the data we receive from health plans and others. This allows for easier feedback to physicians in the locality and the ability to benchmark the data at a national level, he said. Alternatively, we can aggregate at the local/regional level, but follow an absolutely standardized data collection methodology.

Kibbe noted that the workgroup had looked at both a model with a clear capability to aggregate data at the national level (into a very large database) and a virtual data collection model (where data are collected and reported using highly specified, standardized processes), where rolling up to the national organization would be dependent on what the national organization would do with the data.

Turning to the principles for a data aggregation model, one member of the workgroup noted that the group had discussed both equity and efficiency. While we would certainly entertain discussion of different ways to handle data aggregation, he said, these two elements are critically important to purchasers, providers, and consumers.

While I think the workgroup did a very good job in coming up with a data aggregation model, said one participant, this model clearly applies to some point in the future. We need to look at today's reality, where physicians are being told they are high/not-high performers based on very limited data. Perhaps we need two different perspectives: one on data aggregation and a second on data sharing (where health plans share what they have now so consumers have some reasonable assurance that the information is correct).

The workgroup looked at this issue, Kibbe replied, which is why we felt that there was a need for a national-level capability to examine data and risk-adjust it. Kibbe noted that physicians would like to be able to take reports on their performance and make sense of them. The current data from the health plans are not helpful, he said. If we don't do this at the national level, he warned, I think it will be harder and harder to standardize and usefully interpret the data.

There was also discussion about the concept of a third-party aggregator. If there's a third party, asked one participant, who are the other two parties? Kibbe responded that the Federal Government (CMS) and health plans (private sector) are two very important parties. The third party could be termed "public/private." The workgroup felt that this level of aggregation must include data from health plans and public (Medicare) data.

That's a good point, said another participant, but the key is that control needs to be owned by, and shared across, the multiple stakeholders within the independent board.

One participant expressed concern about "independent, trusted third-party aggregator," suggesting it implied a specific model. She proposed striking the language and substituting "any aggregation model must be capable of maintaining appropriate restrictions."

A participant from CMS also wondered whether the bullet on the third-party aggregator should be pulled. If this bullet point means we're endorsing a particular model, we're not ready to say yet what that model should be. Kibbe stressed that the workgroup has not yet agreed on a particular model.

Another participant wondered why CMS took issue with the concept of an "independent, trusted third party." To clarify, the CMS official said the concern wasn't with the language, but rather with whether the language applied to a particular model. Before we adopt a model, he said, I think we need to talk about how to collect the data in a standardized fashion—and then how we can ensure their quality and aggregate them. Once we have clarification around the rules of engagement (on how we collect data), then we can have a dialog around the proper model for data aggregation. A second CMS official clarified that CMS does not have a problem with the language being a principle for discussion and stressed that CMS was firmly committed to working with AQA to see these principles move forward.

As we continue to think about this, said another participant, I wonder if it would be helpful to make a clear distinction between aggregation of patient-specific data and quality indicators about a practice. It seems that we can make a strong case for aggregating the latter at the marketplace, but not the same case for pulling patient-specific data at the national level. With patient-specific data, she said, we need to compare marketplace to marketplace. It seems that calls for a different strategy, where aggregation occurs at the local and regional levels—and then we strip off the identifiers as the data roll upward.
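To make the roll-up strategy described above concrete, here is a minimal, purely illustrative sketch (not part of the meeting discussion) of how a local or regional aggregator might compute physician-level measure rates and strip patient identifiers before the data move upward; all field names, identifiers, and measures are hypothetical assumptions.

```python
from collections import defaultdict

# Hypothetical patient-level records held by a local/regional aggregator.
# All identifiers, field names, and measures are illustrative assumptions.
local_records = [
    {"patient_id": "P001", "physician_id": "DR-01", "measure": "hba1c_tested", "met": True},
    {"patient_id": "P002", "physician_id": "DR-01", "measure": "hba1c_tested", "met": False},
    {"patient_id": "P003", "physician_id": "DR-02", "measure": "hba1c_tested", "met": True},
]

def roll_up(records):
    """Aggregate patient-level records into de-identified physician-level rates.

    Patient identifiers are dropped at this step, so only practice/physician-level
    quality indicators are passed to the next (e.g., national) level.
    """
    tallies = defaultdict(lambda: {"num": 0, "den": 0})
    for rec in records:
        key = (rec["physician_id"], rec["measure"])
        tallies[key]["den"] += 1
        if rec["met"]:
            tallies[key]["num"] += 1
    return [
        {"physician_id": phys, "measure": meas,
         "rate": t["num"] / t["den"], "denominator": t["den"]}
        for (phys, meas), t in tallies.items()
    ]

print(roll_up(local_records))
# e.g., [{'physician_id': 'DR-01', 'measure': 'hba1c_tested', 'rate': 0.5, 'denominator': 2}, ...]
```

In this sketch, de-identification happens before any data leave the region, which is one way to read the participant's suggestion; a real implementation would follow whatever rules the independent board sets.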

Kibbe noted that there clearly are places where physicians currently have the opportunity to participate in regional efforts. But what do you do for the three physicians in Toledo, Ohio, where there is no State or regional entity for them to submit data to? They might want, from an efficiency point of view, to submit their data directly to a national entity.

We already have three levels of data—claim, patient, and physician-level—said one participant, and thus three levels of aggregation. She suggested that the starter set measures lend themselves well to patient-level data. One of the principles of the model might be to produce data at the highest level needed to minimize the risk of unnecessary information being exposed to potential breaches, she said.

Safety issues exist, said one participant, whether we're talking about a national database or data aggregation at the physician level. By the same token, someone needs to support a data standard—and this is something the Federal Government can produce. In addition, as we move to electronic medical records, we need to have a standard so that even if the vendor of the system containing the data goes out of business, the data can be exported elsewhere. He also suggested perhaps substituting "e-data custodian" for "independent, trusted third-party aggregator."

One member of the workgroup said that the concept of a national data aggregator model evolved in order to reduce the concern that a variety of regional initiatives would crop up without any structure or standardization for how data will be collected. We wanted to avoid that confusion, she said. Today a handful of regional and statewide activities are emerging, but nothing that is consistent across the marketplace. By default, the simplest way to make sure we have standardization is to go through a single standard-setting body.

A participant applauded the workgroup's efforts, saying that he has been seeking a common set of measures and a common set of data. At the same time, he said, as a supporter of the Care Focused Purchasing Initiative, we find ourselves going down the frightening path of contributing data to multiple data aggregation efforts. We need to make explicit that we are adopting a principle that recognizes that significant progress is being made around the country, and we need to set up a structured methodology to incorporate those multiple initiatives into this one effort.

One participant asked a question about data validation. Do the data need to be longitudinal if they are patient-centric? In response, Kibbe noted that she was raising another issue: the idea that we are now able to create a patient health profile (a standard set that includes diagnoses and medications)—and that this information could come from various sources of data (laboratory, physician, health plan, and so forth). The question is where that patient health profile resides, he said. There is a growing sense that patients will have access to the information that they create or assemble. They may even be able to carry it around with them on a USB drive (memory stick), he said. Or the data may be held in a physician's office and assembled on demand as needed. This is a very different paradigm from the one we have now, where we have multiple, privately held data sources.
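The patient health profile Kibbe describes can be thought of as a standard data structure assembled on demand from several contributors. The sketch below is a hedged illustration only, not anything presented at the meeting; the source names, fields, and merge behavior are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class PatientHealthProfile:
    """Illustrative 'standard set' assembled from multiple data sources.

    Field names and merge rules are hypothetical; a real profile would
    follow an agreed national standard.
    """
    patient_ref: str                      # an opaque reference, not a raw identifier
    diagnoses: set = field(default_factory=set)
    medications: set = field(default_factory=set)
    lab_results: list = field(default_factory=list)

    def merge(self, source_name, diagnoses=(), medications=(), lab_results=()):
        """Fold in data from one contributor (laboratory, physician, health plan)."""
        self.diagnoses.update(diagnoses)
        self.medications.update(medications)
        self.lab_results.extend({"source": source_name, **r} for r in lab_results)

# Assemble a profile on demand from several hypothetical contributors.
profile = PatientHealthProfile(patient_ref="ref-123")
profile.merge("laboratory", lab_results=[{"test": "HbA1c", "value": 6.8}])
profile.merge("physician", diagnoses={"type 2 diabetes"}, medications={"metformin"})
profile.merge("health plan", medications={"lisinopril"})
print(profile)
```

The point of the sketch is simply that the profile is assembled where it is needed (by the patient, or in a physician's office) rather than stored permanently in any one of the contributing systems.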

Much of this discussion goes back to the question of principles versus operationalizing, said another member of the workgroup. There seems to be agreement around the principles, but we still have a lot of work to do to implement data aggregation, whatever the model. A second issue is a funding model to support the system, which will be different depending on whether we have a local or a national model.

One participant, who was troubled by the "independent, trusted third-party aggregator" language, noted that principles five and six (addressing "deployment of user-friendly and actionable information about physician quality and efficiency to the consumer" and the collection of "public and private data so that physician performance can be assessed as comprehensively as possible") would be a very big deal. I'm prepared to endorse these, she said, but recognize that this is not going to be an easy sell.

We need to endorse a set of principles today, said one participant. I would suggest that we take a uniform one-track vision, and then try to figure out how to collect data at the regional level and move it up to the national level—so we can move up or roll down. This way we wouldn't have multiple aggregators (a physician concern) and we would have clarity (an employer concern). We need a uniform strategy and standardization. She added that validation also needs to be embedded in the principles.

Another participant expressed concern about endorsing the principles, thus making them binding on any new entity that emerges with broader representation (especially by consumers). I think we have to be very careful what we do today, she said. I think it is one thing to recommend that these be principles that require consideration, but I am loath to endorse them without broader consumer community representation.

There are at least three levels to this discussion, said one participant. How should we structure things to get what we want out of them? What are the rules by which they all operate? And what are the needs of the various people who will use these data? It seems that one set of rules addresses what the measures will be and a second set addresses process standards. The data steward is really an auditor. He noted that most of the discussion has been around centralized versus decentralized and national versus regional. But there are other data aggregators too, such as specialty organizations. In addition, purchasers, researchers, and others may also want to continue to be able to look at the data. The question then becomes, he said, how we structure the data so they are simple for both physicians and other parties to look at.

Returning to the issue of an "independent, trusted third-party aggregator," one participant noted that the key word was aggregation. This benefits everyone, he said, including payers, consumers, plans, and physicians. Without aggregation, we will wind up with different data. He added that it was important to think about efficiency (including cost efficiency) and cost. The cost of doing this is not inconsequential.

Another participant noted that the Cystic Fibrosis Foundation is already doing data aggregation and plans to make its process totally transparent to patients and physicians within a year. This will enable people to rank cystic fibrosis centers by any number of criteria, he said. Is this perhaps a model we can emulate?

There is urgency to move forward on this issue, said one participant, coming back to the need to endorse the set of principles. I think the role of AQA is to operationalize measurement sets that have been endorsed. We have a starter set, and our goal is to figure out how to operationalize it. We need a model, and we need to figure out a way to move this process forward. He suggested that the solution might be to move several models forward at first and test them out. That way we can see what's workable, and what's the best way to implement the starter set of performance measures.

David Kibbe told participants that sooner, rather than later, they needed to arrive at a consensus. It is going to be very difficult to support a data collection effort when the health plans are saying they're going to go off and do their own thing, and CMS says it cannot commit to a single way to handle data collection, and consumers are saying they're going to develop Regional Health Information Organizations. This is a way to build in defeat. If we cannot make progress on some national rule-setting system and a methodology, then this process will not go forward, he warned.

Kibbe's remarks prompted one participant to reiterate that she was concerned about the lack of broader consumer buy-in. Perception is important, she said, and we need broader representation on this. A second participant suggested perhaps tweaking the "independent, trusted third-party aggregator" bullet to add language saying "that serves the needs of consumers, providers, [and others]." She also raised the issue of what the system will look like in 6 months and 3-5 years down the road. We want to see an aggregation process in place in 6-12 months, so we need short-term implementation options, she said. But it might be dismantled down the road, so we also need a longer term solution.

Regarding the timetable, one participant suggested starting with the measures available now in order to have some statistical significance to what's going on in the world today. A member of the workgroup noted that there seemed to be a lot of agreement on the principles (except for perhaps bullet one), and that if participants could accept the principles, then the workgroup would have something to work with moving forward.

A participant from CMS said he hoped to see something put in place in the short term that can transition into the long term. The short-term issue we need to discuss is, now that we have a starter set, what kind of data are we talking about collecting? Claims data? Or are we really talking about abstracted electronic health records data? He added that different data sets would require different approaches to validation.

Finally, a participant noted that CMS was looking for a solution for handling data aggregation. If we cannot put forth a proposal on quality and efficiency, then the budget constraints will force cost containment, she warned. Data aggregation is a key piece of this.

Wrapping up the discussion, Carolyn Clancy said that the workgroup had made tremendous progress. She acknowledged that not everyone was comfortable with moving forward, and suggested that the group endorse the principles as a beta set. She added the proviso that the first principle needed to be wordsmithed a bit. There were no objections.

