
American Health Information Community

Quality Workgroup #14

Friday, December 14, 2007

Disclaimer

The views expressed in written conference materials or publications and by speakers and moderators at HHS-sponsored conferences do not necessarily reflect the official policies of HHS; nor does mention of trade names, commercial practices, or organizations imply endorsement by the U.S. Government.

>> Michelle Murray:

Thank you. This is Michelle Murray from ONC, and we’re ready to start this meeting. This is the 14th meeting of the Quality Workgroup. It’s a FACA committee meeting, which means it’s open to the public, it’s being broadcast over the Web, and there’ll be time at the close of the meeting for any public comments. Workgroup members and guests, I’d like to remind you to please identify yourself before you speak, please speak clearly and distinctly, and mute your phone line when you’re not speaking to reduce any background noise. If you’re listening to the Webcast as well as using the phone line, please mute the sound on your computers to reduce any feedback.

And at this time, we can do the roll call of the members on the phone line. Then we’ll do introductions here in the conference room at ONC. So ready for the roll call.

>> Chris:

Alison, do you want to go ahead with that?

>> Alison:

Sure, no problem. We have Rick Stephens from Boeing, Michael Rapp from CMS, Jane Metzger from First Consulting Group, Mark Carroll from IHS... Is that IHS Beth Israel?

>> :

Indian Health Service, probably?

>> :

Yeah, Indian Health Service.

>> Alison:

Thank you. Justine Carr from Deaconess Medical Center. Captain Richard Haberberger from DOD, Ann Janikula in for Susan Postal from HCA. We have Joyce Grissom from DOD, Jerry Osheroff from Thomson Healthcare, Charlene Underwood from Siemens AG, Margaret Van...

>> Margaret VanAmringe:

Amringe.

>> Alison:

Thank you. Margaret VanAmringe from the Joint Commission. I apologize if I’m destroying anyone’s name. Jonathan Teich from Harvard, Pam French from Boeing, Mike Kaszynski from OPM, Kristy Baus from CMS. I also have Robert Simons from Boeing, Carlo Morello from Boeing, and Luann Whittenberg from the Department of Defense. I believe that’s it.

>> Michelle Murray:

Great. Okay, here at ONC, Michelle Murray.

>> Carolyn Clancy:

Carolyn Clancy.

>> Kelly Cronin:

Kelly Cronin.

>> David Hunt:

David Hunt.

>> Kristine Martin-Anderson:

Kristine Martin-Anderson from Booz-Allen.

>> Debbie Jackson:

Debbie Jackson, (inaudible).

>> Alison:

That’s everybody. Now, Carolyn, you can take it over.

>> Carolyn Clancy:

Great. Well, good afternoon, everyone, and we’re coming up on the holidays. We seem to time our meetings very well that way. Last time we met, it was Halloween, as I recall.

We’ve got some very interesting work to do, and I just wanted to reflect on this past calendar year for a couple of minutes. But first of all, before I get into reflection mode, we need to accept the meeting summary for October 31. Could I have a motion?

>> :

So moved. So moved.

>> Carolyn Clancy:

Second?

>> Richard Stephens:

Rick Stephens, second.

>> Carolyn Clancy:

Thanks, Rick. Anyone notice any errors or areas that needed to be corrected? I did not. (Pause) Any objections to simply accepting the minutes? (Pause) Done.

Okay, now back to reflections. Over the past calendar year, I think that we have done a terrific job creating a roadmap, a roadmap that begins to help us see what steps we’re going to need to take (although they’re not in logical order right now, which is going to be our next challenge) to get from the current state, where we measure quality in silos, where we’ve got essentially very precise, we hope, snapshots, very brief snapshots of care. So we have sort of a photo album, if you will, for quality. And what we really want is a movie. We want to be able to get to a place where the measurement is less oriented to the site of care and much more oriented to the person who is traversing the health care system; that is to say, patient or consumer focused, which means that it’s got to be longitudinal measurement that accounts for patients intersecting with a number of different settings and takes into account a number of different dimensions that simply aren’t possible today.

And of course, along the way, what we’re hoping is that the same infrastructure that emerges from electronic health records will also support clinicians in their efforts to get it right in the first place, so that we’re not just doing a better job of driving faster while looking through the rearview mirror, but actually putting in place clinical decision support that makes it easier to do the right thing from the outset. I sometimes tell clinicians that if we get it right, or when we get it right, the quality enterprise will actually be value added to their work; if anyone finds it value added today, I hope that we can find them. My general assessment is that on a really good day, the quality enterprise is painless for physicians, but it is rarely value added, because it simply is not engineered to be value added.

So just looking at the agenda for today, we’ve got a couple of big topics coming up. One is that we’re going to hear a little bit more about data stewardship from Justine Carr, who actually co-chairs the Quality Workgroup for the National Committee on Vital and Health Statistics, and I will orient us to that discussion when we get there. They’ve been doing some terrific work and had an opportunity to present their work to date to the full American Health Information Community in mid-November. We’re then going to hear from Mike Rapp, and we’re really pleased he could be with us today.

We’ve been talking a lot about the fact that, based on the work we’ve heard about to date from the National Quality Forum, there does not seem to be a nice little set of data elements which will get us 80 percent of the way to being able to build our current quality measures into electronic health records, which is really too bad, because it would be lovely if there were. But I’m pretty convinced, having had a chance to hear the presentation again this morning at the National Quality Forum board of directors meeting, that there isn’t any obvious solution there. And that brings up the question of whether it makes sense to begin to think about building a minimum quality dataset.

So we’re going to hear from Mike about some of CMS’s experiences using these minimum datasets. We’re then going to return to our draft roadmap and potential recommendations for the future, and I’m going to challenge us to be as concrete and as specific as possible. And the frame I’d like us to put on it is not just “Is it important and good stuff to do?” but also “What are we likely to either accomplish this year or make a substantial down payment on this year, so that the Secretary, when he leaves office, metaphorically turning off the lights, can actually say, ‘We accomplished this’ or ‘We’ve moved the ball down the field in a tangible way,’ such that irrespective of politics or who the next administration is, we can actually see progress and there’s something to build on?” I don’t want to suggest for a moment that any of us lack that aspiration, but I think that we’re going to need to be very strategic about how we make our decisions, because if there’s one thing we’ve learned, it’s that there’s a really long list of issues that we could take on. We can’t solve them all at once. And then we will come back right at the end to talk about 2009 use case priorities.

So with that, let me turn it to you, Rick, for any comments, and then we’ll get into data stewardship.

>> Richard Stephens:

Carolyn, thanks very much. I, like you, think back and reflect about where we’ve been and where we’re going and what we need to do. A lot of great work, but I think we’ll be measured by how well we’ve postured going forward. And again, to follow on your words, Carolyn, it’s about what are the measures in this that are really important to get out there. What’s the quality dataset that’s important? And then how do you make sure you get that put in place, whether anyone wants to call that standards or an implementation plan, such that everyone is beginning to use it? Because again, in the end, we’re about trying to drive change in the system, and our focus is on quality and quality measures. We’ve got to have some concrete recommendations going forward so that when this activity gets handed off next year, all the hard work we’ve put in place will not be for naught. It will have resulted in others beginning to take action on our specific recommendations.

So I, too, am looking forward to this discussion today on the data stewardship, more understanding about the datasets, and then understanding how we’re going to take the roadmap and actually turn that into something that’s going to happen with a concrete set of recommendations. Carolyn, back to you.

>> Carolyn Clancy:

Great, thank you. So without further ado, I would like to introduce Justine Carr. Justine, as you heard, is the Senior Director for Clinical Resources Management at the Beth Israel Deaconess Medical Center in Boston, and she and I have had the opportunity to have conversations about the general challenge our Workgroup is taking on: How do we bring the quality and IT enterprises together in a way that has not happened previously? We’ve had this conversation on and off even from before the Quality Workgroup started.

She’s going to be presenting some recommendations to Health and Human Services that the National Committee on Vital and Health Statistics brought forward to the American Health Information Community at the last meeting of the full Community. Now, in case you’re wondering, “Do we have too many quality workgroups?”, the short answer is yes: we have hundreds if not thousands of them across the country right now, with many different organizations involved in the quality enterprise. But the really great thing here is that NCVHS has been around for, what, 50 years? A long, long time. It’s a longstanding advisory committee to the Secretary. And in general, they’ve had the opportunity, and really have done a terrific job, of taking on technical and policy issues that require quite a bit of depth.

So I want to assure all of you that they are not simply having the same conversations we’re having in a different building or under a different umbrella; rather, we’re working very hard to make sure that our efforts are complementary. So with that, Justine, I’ll turn it over to you.

>> Justine Carr:

Carolyn, thanks very much. I appreciate the opportunity. In October, I came through, and we had some preliminary findings, but we now are in a position where we’ve put together observations and recommendations. So I’ll go lightly over the things that I’ve covered before and spend a little more time on the recommendations that we’re putting forward. So thank you again.

So as you know, this report was developed by a workgroup of the National Committee on Vital and Health Statistics, and that’s an 18-member statutory public advisory committee that, for the last 60 years, has advised HHS on areas related to health data, statistics, health information, and privacy. Committee representation includes leaders from a range of fields, including health information technology, health statistics, clinical and administrative data standards, medical informatics, privacy, and population health. So today I’m here on behalf of Simon Cohn, who’s the Workgroup Chair, and Harry Reynolds, who’s the Co-Vice Chair.

So, yeah, next slide, thanks. Last spring, NCVHS was asked by ONC to develop a conceptual and policy framework to balance risk, sensitivity, benefits, obligations, and protections of various uses of health data, and to develop recommendations for HHS on data stewardship and other measures to enable optimal use of health data while respecting the privacy of individuals. In particular, there was an emphasis on appropriate protections surrounding uses of health data for quality measurement, reporting, and improvement. So do I say next slide? Next slide. Okay.

This is our timeline. The previous work of NCVHS is shown in pink on this slide. Particularly over the last few years, there have been a number of letters and reports related to HIPAA, privacy, the NHIN, quality, and data standards, and that has laid a strong foundation for the topic of uses of health data. To address this topic, we assembled an ad hoc workgroup reflecting the diversity of the expertise and perspectives of the full committee. So on this timeline, you can see we had a busy summer. We held 8 days of hearings. We heard from 58 testifiers representing providers, consumers, quality organizations, health information exchanges, vendors who process and use health data, researchers, and public health representatives. We published the first draft in late October and held a public call, and we then received further comment from provider organizations, professional associations, accrediting organizations, consumer groups, and health information exchanges, the same array of folks, as well as from private citizens.

Next slide. Yeah, thanks. We also drew upon related work and collateral documents that included the AHIC Quality Workgroup vision summary and quality use cases, the RFI on national health data stewardship, the AMIA work toward a national framework for secondary use of health data, and selected reports from NCVHS, as you see here.

Next slide. So when I was here in October, I brought this forward, and George Isham seconded it resoundingly: very early in the Workgroup deliberations, we observed that the term “secondary use” of data was troublesome. There’s no definition, and “secondary” suggests lesser importance. George suggested that rather than saying “secondary use,” we just say the specific use; he said we should forbid the term. So we’re, for now, encouraging people to simply specify the use rather than calling something “secondary use,” because the world of health care is changing so dramatically, and data sharing has become part of primary care delivery.

Next slide. So the question, then, is, why address uses of health data now? And the landscape is changing. Electronically available health data are no longer just claims data but include more clinically rich data, such as labs, medications, problem lists, etc.

Secondly, electronic data can be linked more readily with other databases. And this affords an opportunity to assess clinical outcomes over time. It also creates a risk of data being linked to databases that might jeopardize privacy, employment, or insurance eligibility.

The third issue is that sources of electronic health information are expanding beyond HIPAA protection of covered entities and business associates. So this would be, for example, providers who do not file claims electronically or the growing number of personal health records services.

And finally, electronic solutions to protect and secure data continue to evolve, including the emergence of approaches that will allow individual consent to follow data.

Next slide. Two major themes permeated the testimony that we heard. The first recognized the great benefit that can be achieved using electronic health data, including enhancement of quality measurement and reporting, with a more real-time quality improvement cycle, and acceleration of the accrual of cases for timely identification of complications from drugs or devices.

The second theme was the concern about the potential for harm, where two areas emerged. One is the issue of erosion of trust in the health care system, with potential compromise to health care that may occur when there’s divergence between the expected and the actual use of health data. A second concern was potential or actual discrimination or confidentiality breaches with the increased ability to collect longitudinal data, coupled with sophisticated means to re-identify data.

Next slide. So I’m going to speak about HIPAA and the Privacy Rule for two slides. HIPAA has provided a foundation for protection, a roadmap for stewardship. And we took the approach of evaluating who is covered by HIPAA and how that protection and stewardship can be strengthened; who is outside of HIPAA and how they might be protected; and where HIPAA is silent and more clarity is needed.

So just to review, HIPAA promoted electronic exchange of financial and administrative transactions. HIPAA’s legislative requirements for health information privacy resulted in the issuance of the Privacy Rule. As I mentioned, HIPAA only regulates covered entities that electronically transmit health information in connection with transactions for which HHS has standards, so basically payers, providers, and clearinghouses. Covered entities use business associates and their agents, persons or entities acting on behalf of the covered entity to perform a function regulated by HIPAA, via a business associate contract. So covered entities, business associates, and agents are all covered by HIPAA. Next slide.

As part of HIPAA, Congress required HHS to adopt regulations safeguarding the privacy of individually identifiable health information, and this is called the HIPAA Privacy Rule. So a couple of points here. One is, it covers only individually identifiable health information, in any form (paper, electronic, or oral) that is held or transmitted by a covered entity. That’s called protected health information. There’s also something else called personal health information, and that may be health information held by organizations that are not covered by HIPAA.

A second issue is that HIPAA requires authorization for disclosures of PHI. The exceptions there are treatment, payment, and health care operations, and the other is where disclosure is required by law, such as for public health. Within the array of activities included in health care operations are quality assessment, competency review, payment processes, compliance activities, business planning, and general administration.
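
[Editor’s note: To make the disclosure logic just described concrete, here is a minimal sketch in Python. The category names and function are hypothetical simplifications for illustration; the Privacy Rule itself is far more nuanced.]

    # Simplified sketch of the logic described above: disclosures of PHI need
    # individual authorization unless the purpose falls under treatment,
    # payment, health care operations (TPO), or a required-by-law category
    # such as public health. Category names here are hypothetical.

    TPO_EXCEPTIONS = {"treatment", "payment", "health_care_operations"}
    REQUIRED_BY_LAW = {"public_health_reporting"}

    def authorization_required(purpose: str) -> bool:
        """True if the stated purpose is outside the named exceptions."""
        return purpose not in (TPO_EXCEPTIONS | REQUIRED_BY_LAW)

    # Quality assessment is listed among health care operations above, so a
    # covered entity's quality-measurement use would not, by itself, require
    # authorization under this simplified model.
    assert not authorization_required("health_care_operations")
    assert authorization_required("marketing")  # a hypothetical non-excepted use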

And the final point to make is that HIPAA does not protect de-identified data. “De-identified” refers either to removal of 17 designated identifiers and anything else that identifies a patient, or to a statistical process. And actually, I’ll speak more about this: we heard concerns related to the sale of de-identified data.

>> Richard Stephens:

Justine, this is Rick Stephens.

>> Justine Carr:

Yes.

>> Richard Stephens:

I see your chart says “safe harbor.” Does that imply that this will be compliant with the safe harbor regulations in Europe? Is that what that...

>> Justine Carr:

No, actually it’s a statistical definition of de-identification.

>> Richard Stephens:

Thank you.

>> Justine Carr:

Sorry, I have to clarify that. Yeah, it’s either to a certain percentage of anonymity, you know, statistically calculated, or it’s just the removal of the 17 identifiers and anything else that could identify.

>> Richard Stephens:

Thank you.

>> Justine Carr:

So there are two methods. Thanks for asking.
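
[Editor’s note: A minimal Python sketch of the first of the two de-identification methods described above, removal of designated identifiers. The field list is abbreviated and illustrative, not the full regulatory list, and the statistical (expert) method is only noted in a comment.]

    # Illustrative "safe harbor"-style de-identification: drop designated
    # identifier fields from a record, keeping clinical content. The list
    # below is abbreviated for illustration; it is NOT the complete set.

    ILLUSTRATIVE_IDENTIFIERS = {
        "name", "street_address", "phone", "email",
        "ssn", "medical_record_number", "date_of_birth",
    }

    def strip_identifiers(record: dict) -> dict:
        """Remove identifier fields; clinical fields pass through."""
        return {k: v for k, v in record.items()
                if k not in ILLUSTRATIVE_IDENTIFIERS}

    record = {"name": "Jane Doe", "medical_record_number": "12345",
              "hemoglobin_a1c": 7.2, "medication": "metformin"}
    print(strip_identifiers(record))
    # {'hemoglobin_a1c': 7.2, 'medication': 'metformin'}

    # The second method is statistical: an expert certifies that the risk of
    # re-identifying individuals is very small. That is a risk-modeling
    # exercise, not a simple field-dropping transform like this one.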

Next slide. Based on the testimony and review of the landscape, we put together a health data stewardship conceptual framework. This affords the opportunity to assess a particular use of data in a systematic way. Understanding the user and the use, the benefit and the harm, one can then look at data stewardship attributes and their application for maximal data stewardship.

So this framework is intended to outline how an organization might approach evaluation of its intended use of data and recognize where it may elect to enhance its data stewardship processes. For example, a business associate of a payer that’s covered by HIPAA, who wishes to use identifiable data for quality measurement under health care operations, would describe the benefits of this use, consider the potential risks and harms, and consider how it would address each of the data stewardship considerations. In some areas, the user may believe it provides appropriate data stewardship, but in other areas it may see an opportunity, such as improved transparency or stronger security controls. This framework, then, is how we went on to put together our observations and recommendations, which I’ll go to now. Next slide.
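
[Editor’s note: A minimal sketch of how an organization might operationalize the framework’s gap check. The attribute names are paraphrased from this discussion, and the practice flags are hypothetical.]

    # Hypothetical gap check: compare an intended use's current practices to
    # the stewardship attributes discussed above and list where stewardship
    # could be strengthened.

    STEWARDSHIP_ATTRIBUTES = [
        "transparency", "security_controls", "accountability",
        "data_integrity", "de_identification_practices",
    ]

    def stewardship_gaps(current_practices: dict) -> list:
        """Return the attributes the intended use does not yet satisfy."""
        return [a for a in STEWARDSHIP_ATTRIBUTES
                if not current_practices.get(a, False)]

    # Example mirroring the text: a business associate using identifiable
    # data for quality measurement finds two areas to improve.
    practices = {"transparency": False, "security_controls": False,
                 "accountability": True, "data_integrity": True,
                 "de_identification_practices": True}
    print(stewardship_gaps(practices))  # ['transparency', 'security_controls']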

Before I actually get into those, I want to turn our attention to the guiding principles that we used. I think that if we learned anything from this very intense 6 months, we learned that use of health data is an extremely challenging area, and that despite ongoing evaluation and study of the same materials, thought leaders will differ on the optimal course of action. The complexity of this topic is further compounded by the evolving landscape. We are still early in our understanding of the NHIN.

In the development of the recommendations and borrowing from the Minnesota Health Records Act fact sheet, we have identified guiding principles against which each of the recommendations is evaluated, and these include the following: Protections should maintain or strengthen individuals’ health information privacy. Protections should enable improvements in the health of Americans and the health care delivery system of the nation. Protections should facilitate appropriate uses of electronic health information and increase the clarity and understanding of laws and regulations pertaining to privacy and security of health information. Protections should build upon existing legislation and regulations whenever appropriate and should not result in undue administrative burden.

Next slide. So this brings us to our observations and recommendations. Currently, in the age of electronic protected health information and electronic data exchange, the health industry relies heavily upon the HIPAA construct of covered entities and business associates to protect individually identifiable data. Once data are de-identified, they’re not under the HIPAA requirements, as I mentioned.

So as we transition to HIE and the NHIN, a transformation is needed. Our recommendations call for enhanced HIPAA protections and data stewardship for all uses of health data by all users, independent of whether an organization is covered by HIPAA. And HHS has a variety of means to achieve the enhanced protections called for in these recommendations. These means include guidance, such as was issued for security last December, or requirements for federal agency adoption; also inclusion in requirements for contracts, rules, incentives, conditions of participation, interagency collaboration, modification of existing regulations, or new regulations. Next slide.

Our recommendations fell into four categories. The first is health data stewardship; these are the attributes that we talked about in the framework, which I’ll come back to. Second, oversight of specific uses of health data; our recommendations also focus on the use of health data for quality measurement, reporting, and improvement, and we are including recommendations relating to monitoring, such as monitoring the uses and enhancing oversight as appropriate. Third is transition to the NHIN; as the industry makes the transition to HIE and the NHIN, there needs to be an evaluation of new tools and technologies as they become available. And finally, additional legislation, to broaden the scope of privacy coverage to all who may have access to personal health information and to address anti-discrimination consequences that may arise out of wrongful use of health data. Next slide.

So, recommendations on health data stewardship...

>> :

Thank you.

>> Justine Carr:

Specifically, NCVHS is recommending that covered entities strengthen the terms of business associate contracts to be more specific about what data will be used, how the data will be used, and by whom. There is also a recommendation that covered entities and their business associates periodically confirm compliance with what the business associate contract specifies.

With regard to transparency: transparency is very important to consumers, and the notice of privacy practices needs to be more meaningful. Individuals should be able to request and be supplied additional information about the specific uses and users of their data, drawn from the greater specificity in the business associate contracts. Next slide.

Principles of data stewardship also need to extend to personal health data and de-identified data. Today, HIPAA only protects protected health information held by covered entities, yet personal health information is being collected by many other organizations outside HIPAA’s jurisdiction, such as Web-based vendors. NCVHS is recommending that authorization for uses of personal health information outside of HIPAA be assured.

Further, when personal health information is de-identified, the HIPAA definition of “de-identified” should be used. We heard a great array of terms implying de-identification, and that variation sets up the risk that a given definition might not meet the HIPAA standard for de-identification. So our recommendation is that only the HIPAA definition be used. NCVHS also recommends that business associate contracts include what uses will be made of de-identified data and describe that in the contract.

The scope of issues related to de-identified data proved to be quite broad. Therefore, NCVHS has opted to further study this area before making recommendations. Within organizations, there are also readily available security safeguards and controls that could be applied, and data integrity and data quality also need to be assured. On to the next slide, please.

For specific uses of health data, NCVHS believes there should be recognition that quality (I’m sorry; this slide says “research,” but I mean quality) is within the scope of HIPAA health care operations. However, there may be additional risk as health data for quality measurement, reporting, and improvement cross more organizational lines. So NCVHS is recommending that organizations institute a more proactive oversight process to ensure compliance with HIPAA.

In the area of research, the Office for Human Research Protections is already working to clarify definitions within the Common Rule, but there needs to be harmonization of research regulations across the Common Rule, FDA, VA, and HIPAA, all of which cover research, but each only in specific instances, with some variations. Guidance on research should also be disseminated beyond the research communities, to others who may not typically conduct research but who may find, for example, that a quality study is evolving into research.

As findings from a quality study become generalizable to a broader population than the focus of the quality study, organizations are at risk of not meeting the HIPAA Privacy Rule requirements for Privacy Board or institutional IRB oversight.

I think this slide outlines the recommendations. I’m happy to go back and answer questions; I realize I’m going a little bit quickly through this. Next slide. Just a few more slides.

NCVHS has been active in envisioning the nationwide health information infrastructure, recommending functional requirements for an NHIN, and following up on specific tools and technologies, such as means to attach individual choices to data, improved de-identification techniques, and safeguards where the selling of data should be evaluated. Our recommendations here were to incorporate data stewardship attributes into NHIN activities and to use NHIN trial implementations to evaluate individual choice applications, data stewardship attributes in comprehensive databases, potential new de-identification techniques, chain-of-trust enhancements, and educational modalities to improve understanding. Next slide.

NCVHS recognizes that HIPAA has limits. Other protections beyond data stewardship are needed. NCVHS recommends more inclusive federal privacy legislation to cover all organizations that have access to personal health data. NCVHS believes that legislative or regulatory measures on antidiscrimination need to be stronger. And to promote interoperability, differences in state laws need to be understood and harmonized.

Next slide. Last slide, actually. Areas that we recognized in the course of these 6 months that need further consideration are, as I mentioned, uses of de-identified data and the overlap of quality and research. These are areas of focus within NCVHS going forward.

Next slide. Actually, one more slide. This is the NCVHS membership. So I’ll stop with that, and thank you for the opportunity. I’m happy to hear your comments and questions.

>> Carolyn Clancy:

Well, Justine, on behalf of the Workgroup, let me just say, “Wow.” This is a lot of work, and I have no trouble at all grasping how many days of hearings are required to get to this, but you’ve really done a phenomenal job of distilling this. Before just opening this up for questions or comments, let me say I was very pleased that you took the AHRQ RFI on behalf of AQA into consideration as well as our Quality Workgroup roadmap. What was the general sense when you presented to the AHIC in mid-November?

>> Justine Carr:

It was well received. I think one of the comments that was made was that they very much appreciated, in our guiding principles, that we were looking to build upon existing legislation. That was one of the main comments. They didn’t get into much detail; it was a lot to bring to that group late in the day, I might add. But it was a favorable response, not detailed in terms of asking questions.

>> Carolyn Clancy:

Were there areas that were confusing or not sort of immediately graspable? Did you have that sense?

>> Justine Carr:

You know, I think understanding the issues in the continuum between quality and research, and not just for that group, is something. Another area that we didn’t talk about in too much detail is stewardship of the data: the precision, the accuracy, the completeness, the aggregation techniques applied to it. We didn’t go into great depth with that group, but we have in the report.

>> Carolyn Clancy:

My day has been quite interesting, having started off with the NQF board of directors meeting, where the recent agreements between the Attorney General for New York State and a number of health plans were the subject of some considerably interesting conversation. But what it underscored, I think, for many people sitting in the room was that when we talk about transparency as it relates to reporting on quality, many people think of that as the report card that you see on the Web or get in a book somewhere.

Clearly, there’s been a whole lot of interest, which is what NQF is about, in making the measures themselves transparent. But that also is going to need to extend ultimately to transparency in the implementation and use of those measures, which is an area that someone this morning kept saying we need to have a bright yellow highlighter over, but we actually aren’t there yet at all. And the gist of the agreements in New York State begins to clarify or articulate that we need some very clear conventions for that. For example, the agreements all contain language providing an appeal process for physicians, who’ve been the major focus of these agreements. So I think that’s very important.

In your hearings, did you have the sense that the term “de-identified data” invokes a fair amount of confusion?

>> Justine Carr:

Absolutely. You know, we actually thought we were beginning to get a handle on de-identified data and some recommendations. And then the more we heard, the more applications of that term, and the more uses of de-identified data, we learned about. The other area that has come up is the sale of de-identified data. And that’s not to say there isn’t sale of de-identified data for very good purposes; but there are other areas where one would want to have better understanding, particularly with regard to the potential for re-identification of data. So to just say “de-identified data” doesn’t capture it; you can’t qualify it that simply, because it’s a spectrum. Even the sale of de-identified data is a spectrum. So we wanted to be very much better informed before we made any recommendations.

One other thing that came up in a number of settings is the need for education of the public. You mentioned transparency in how things are developed, but the public also needs to understand the tremendous benefit that will accrue to them and others in the health care system as we have the opportunity to look more quickly at a larger number of patients and recognize adverse medication effects that we would never have gotten from one-off FDA reports from, you know, four different hospitals. And I think John Lumpkin and a number of folks brought up that there really needs to be a systematic campaign of education about the benefit of being able to look at large datasets.

>> Carolyn Clancy:

Well, that’s very helpful. And let me just tell you, I could keep you here ‘til 6 o’clock asking you questions, and I may come back to a couple of them, but I want to give others an opportunity to ask questions or make comments.

>> Richard Stephens:

So this is Rick Stephens, Justine.

>> Justine Carr:

Yes, hi, Rick.

>> Richard Stephens:

What’s your sense of the next steps and the time associated with those next steps?

>> Justine Carr:

Let’s see. Well, I have about 10 answers. I’m trying to think for whom; let’s start with that. Next steps for the committee, next steps for the concept, or how do we move things forward?

>> Richard Stephens:

I guess it’s more of how do we move things forward, because you’ve got a lot of great information here about setting requirements and definitions and getting alignment, and this provides a great framework for consistency, which I think is critically important. What do you see as next in that sequence? Kind of at the macro level, as we try to work this into our overall draft vision of the roadmap: what’s the time frame where you’d expect people to start adopting what you’ve done a great job pulling together and integrating?

>> Justine Carr:

Yeah. Well, you’re right that this is high-level, and that is what NCVHS does, to try to say, “What’s the big picture?” But then comes the implementation. We made reference at the end toward asking, “Are there things that we can do in current demonstrations that will help us see what the real issues are as they play out?” I know we had talked about the definition of “operations” and “covered entity,” and the exchange of information across health information exchanges, and who the covered entities are. These are things that need to be better understood and vetted.

And, you know, the challenge is that we’re living in today’s world, and we have the vision, and we have some other documents about the NHIN, and trying to visualize that and align these principles, these recommendations, and this stewardship with that is a challenge. But I think if there are ways of using NHIN trial implementations to put these recommendations up against, and ask, “Is it working? Where is it falling short?”, that’s one thing. And on a very tangible level, the idea of the business associate agreements and getting more specificity, more accountability, more understanding of what data, where it goes, who’s using it, whether it’s being sold, and if so, how and for what purpose: those are things that are immediate.

And then we talked about the model of having some institutional or organizational oversight of uses of data for quality. I think there’s tremendous opportunity locally to apply these concepts of data integrity, data standardization, and precision, so that as we’re using the data, we understand what it’s being used for and how it’s being interpreted, so as to enhance understanding and add value.

>> Richard Stephens:

Thank you.

>> Carolyn Clancy:

Other comments or questions?

>> Charlene Underwood:

Justine, Charlene Underwood. I actually wanted to applaud the work that really looked at the basis of all the work that’s gone into building an infrastructure, for instance, for sharing financial transactions via EDI. And I would comment on the picture: I think it’s a great picture to show we need to move from the experience that we had implementing HIPAA and sharing data using EDI to HIE, but I would like to think it wouldn’t be such a strong chasm in the middle. I know it’s somewhat of a chasm, but there would be a smoother way to get there, and we could build on that and kind of make it a pathway forward rather than, you know, a jump across a chasm. So...

>> Justine Carr:

Yeah, well-said. Actually, that picture’s not in the report, but your point is well-taken.

>> Charlene Underwood:

Okay, (inaudible) concept, but I would

>> Justine Carr:

No, I think that’s an excellent point.

>> Charlene Underwood:

All right. Because I think there was so much experience, and we’ve got a data center that runs the stuff and we know it works, and if we could build on that... because it is still about people, you know, and a subset of the information, and there’s experience with that secondary data here. Sorry...

>> Justine Carr:

Uses of data.

>> Charlene Underwood:

(Laugh) Anyway, that was my comment.

>> Justine Carr:

Yeah, no, thank you very much. I mean, we put that in to be illustrative, and I think the graphics may have superseded the concept in some ways. That’s an excellent point, and thank you for making it.

>> Carolyn Clancy:

Other comments? (Pause) Well, I have a question while thoughts may still be percolating. Have you had an opportunity to bring this to the HHS Data Council?

>> Justine Carr:

No, no, we just approved all the recommendations at our November 28 NCVHS meeting, and truth be told, there’s still a little bit of wordsmithing going on in the introductory parts, but that makes a lot of sense.

>> Carolyn Clancy:

The reason I bring it up is that the Data Council, which AHRQ co-chairs (I don’t even want to pretend for a moment that that would be me personally; it’s actually one of my senior folks, Steve Cohen), has historically tended to define data interests as relevant mostly to surveys. And there’s a lot of work to be done there: precision, alignments, and avoidance of nonproductive redundancy, that kind of thing. But it’s fair to say that the Department also has some very important strategic data assets in the form of claims data, which tends to be the subject of a lot of interesting policy discussion. So I actually think that might be another useful group to present this to.

>> Justine Carr:

Yeah, that’s a great idea. That’s a really great idea. I’ll bring that back.

>> Carolyn Clancy:

Great. And I will also say, from the AHRQ world, the quality and research issues remain sensitive, shall we say. That’s another way of saying we hear continuously from grantees that trying to do research which is about quality of care, and the focus of which is, no matter how you cut it, to improve quality of care, runs into an IRB apparatus which isn’t necessarily streamlined to make it easy.

>> Justine Carr:

Well, right, and we heard a lot of that. I mean, we really heard a spectrum of interaction in different institutions. We had earmarked that as a potential area for additional hearings. Does that make sense to you? Would that add value? I know a lot has been written; the Hastings report, of course, and others. Is there more to be heard or more to be said, or can NCVHS, do you think, play a role there?

>> Carolyn Clancy:

Well, I do think so. I know Joanne Lynn has been trying to push this boulder uphill. She had the lead for the work done under the umbrella of the Hastings Center, and we funded that. But I think they’ve actually gotten to similar specific types of next steps, so that might be useful.

>> Justine Carr:

What might be useful, having hearings or that they’ve already got it covered?

>> Carolyn Clancy:

No, having hearings.

>> Justine Carr:

Oh, okay. All right. That’s good. And also, any thoughts from folks about de-identified data? Any additional thoughts about that? I mean, we’ve just been quite surprised at the spectrum of uses.

>> Carolyn Clancy:

Well, I will say it’s come up for us in a couple of contexts, one of which will be more apparent to the general public when the proposed rule for the Patient Safety and Quality Improvement Act is out in the relatively near future; it’s sitting at OMB right now. The issue of being able to aggregate de-identified data from the patient safety organizations is one where we had some very interesting challenges related to de-identified data. And when we get into specific recommendations, Justine, I’m hoping you can stay around; that would be great. If not, we can get back with you later.

>> Justine Carr:

Sure. Yeah.

>> Carolyn Clancy:

One of the areas that we have given a bit of thought to on our roadmap moving forward concerns the locus of de-identification and re-identification. Now, some of that is up-front education about what you can or cannot do with truly de-identified data; I mean, where it’s likely to be useful or not. But as it pertains to assessing quality of care, it’s very clear that there need to be some trusted entities that actually have the potential to re-identify data, how would I say this, in a way that I think we would all agree should be authorized, as opposed to figuring out how to do it for other reasons.

So if you can stick around for that, that would be very helpful.

>> Justine Carr:

Sure. Oh, sure. Yeah. Thank you.

>> Carolyn Clancy:

Well, if there are no other questions or comments, then I’d like to move into Mike Rapp’s presentation.

>> Michael Rapp:

Thank you, Carolyn. Today I’m going to talk about the post-acute care resident assessment minimum datasets and our CARE instrument, and how that might relate to some of the issues that we have been talking about in terms of a dataset, perhaps, in the hospital arena.

I do have on the phone a couple of my staff: Marty Rice and Judy Tobin. Marty is involved in the HIT arena in our group, and Judy was the Project Officer for the development of the CARE instrument. So if we get to a point where you have some questions that are perhaps very detailed, either from the HIT standpoint or with regard to the CARE instrument, I’ll ask them to chime in and answer those questions.

So what I’m hoping to accomplish today is to go through the purpose and history of the uses of post-acute datasets and their implications for use in the hospital setting. We’ll discuss the challenges associated with the current patient assessment instruments. And I want to identify the benefits of the CARE instrument as a driver of health care measures and outcomes.

So when we talk about a minimum dataset, I want to just back up to make sure everybody is aware of what we’re talking about. At least in the Medicare arena, we have a variety of sources of data from which we calculate quality measures. One of them, of course, is claims. That’s a frequent source of data that can be used to calculate quality measures, and we all know the limitations of that. Another source of data is chart abstraction; we use that particularly in the hospital arena, and of course, it is accompanied by the burden of having to go through and abstract data to get the primary source information. Another source of data that we use, in the post-acute arena, is the minimum dataset (the MDS 2.0, it’s called) for nursing homes.

There are assessment instruments used in other settings. Home health uses the OASIS instrument. The inpatient rehab facility uses the so-called IRF-PAI, or Inpatient Rehab Facility Patient Assessment Instrument. But what’s different about this is that it’s an actual collection of data dealing directly with the patient. And with the MDS in particular, an interesting aspect is that it’s considered part of the medical record, so it’s not considered a secondary source of data, which is how I would look at claims. With claims, you have to go back to the medical record, and even with a chart abstraction, you would have to go back and see if your abstraction is correct, with your primary source being the actual patient record. But the minimum dataset is not considered a secondary source of data; instead, it’s considered a primary source of data, equivalent to the medical record.

So again, I’m on the slide that’s titled “What Is the Minimum Dataset?” It is a federally mandated resident health status screening and assessment tool. It assists in a variety of ways, which I’ll go through, but in particular it assists certified Medicare/Medicaid nursing homes in developing a comprehensive resident care plan. And it’s something from which we can derive measures related to physical, psychological, and psychosocial functioning characteristics.

The statutory background on that, moving to the next slide, is that OBRA in 1987 was the first major legislative improvement in federal regulation of nursing homes since 1965. There was significant concern about the quality of care in nursing homes. That legislation was based upon recommendations from a 1986 IOM report and a 1987 GAO report. And it led to regulation which requires that nursing homes complete this assessment instrument, providing a comprehensive, accurate, standardized, and reproducible assessment of each long-term care facility resident’s functional capabilities. These assessments are done at standardized times: 5, 14, 30, 60, and 90 days in nursing homes for the post-acute setting, which is the Medicare arena.

Let’s move to the next slide. And I’m sorry; somebody’s calling me on the other line, but hopefully, they’ll hang up in a minute. The slide on uses of this instrument makes clear that there are several. First of all, there are data elements that are collected, as I mentioned, on every resident. Once those data elements are collected, they at times will trigger care planning that has to be done. In other words, if a certain data element comes up, the nursing home is required to engage in following a resident assessment protocol.

In some instances, aspects of the MDS are used for payment, for Medicare and, in some states, Medicaid. In addition, the data collected in the MDS instrument is used for survey and certification; the quality indicators come from that. The data is also used for quality improvement by the facilities themselves. And finally, there’s also the authority to use this data for the calculation and reporting of quality measures. You are probably all familiar with the Compare Web sites that we have for hospitals and other facilities; one of them is for nursing homes. And for nursing homes, we calculate from the MDS information 5 quality measures for the short stay and 14 for the long stay. They cover a variety of measures, including such things as pressure sores, whether residents were physically restrained, losing control of their bowels, and spending most of their time in a bed or wheelchair, for the long term. For the short term, we’ve added some things recently, including the percentage of short-stay residents given influenza vaccination and whether they were given pneumococcal vaccination, and we have other measures on there as well.

So those all derive, however, from this minimum dataset that the nursing homes are required to collect. And so it has significant advantages: one, it’s all patient data, just like when you go to chart abstraction; it’s not claims data, so you don’t have that limitation. And the other thing is, as is apparent, you can follow a patient over time, which is very helpful.
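
[Editor’s note: A minimal Python sketch of how repeated assessments can support a facility-level quality measure of the kind just described, such as the percentage of residents with pressure sores. The field names, timing, and measure logic are hypothetical simplifications, not actual MDS 2.0 specifications.]

    # Hypothetical simplification: each resident is assessed repeatedly
    # (e.g., at 5, 14, 30, 60, and 90 days), and a facility-level measure is
    # the share of residents whose latest assessment flags a condition.

    assessments = [
        # (resident_id, assessment_day, has_pressure_sore)
        ("R1", 5, False), ("R1", 14, True),
        ("R2", 5, False), ("R2", 14, False), ("R2", 30, False),
        ("R3", 5, True),  ("R3", 14, True),
    ]

    latest = {}  # resident_id -> (day, flag) for the most recent assessment
    for resident, day, flag in assessments:
        if resident not in latest or day > latest[resident][0]:
            latest[resident] = (day, flag)

    rate = sum(flag for _, flag in latest.values()) / len(latest)
    print(f"Pressure sore rate at latest assessment: {rate:.0%}")  # 67%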

Let’s go to the next slide, which is titled “Patient Assessment.” For all the benefits of the MDS, OASIS, and IRF-PAI instruments that I’ve just discussed, there are some significant problems, and they are listed there. They have incompatible data formats. They have different data storage sites. They use different measurement scales and different assessment periods. Hospitals and long-term care hospitals are not currently required to use any standardized form at all, so what we’re left with is a type of assessment that is confined to the post-acute care setting. Furthermore, you have the inability to really look at patients across the post-acute care settings. A patient with a similar condition, or the same condition, might go to a nursing home or might go to an inpatient rehab facility, but because of the different data, it’s not possible to adequately compare those patients, either for quality purposes or for payment purposes.

And so, going to the next slide, the “Continuity Assessment Record and Evaluation” slide: Congress has been concerned about this problem for quite a number of years. In the Deficit Reduction Act of 2005, they required CMS to develop a uniform assessment instrument that would measure and compare Medicare beneficiaries’ health and functional status across provider settings, at intervals, over time, and would start at hospital discharge. And we were required to test the instrument’s usefulness in a 3-year demonstration that would start in early 2008. We were given a very compressed timeline because, as you’ll remember, the Deficit Reduction Act was actually passed in early 2006, so we didn’t have too much time to do this. As a result, we worked quite diligently, and in fact, the instrument is basically completed and available for implementation for the demonstration.

I’ll tell you a little bit about it. First of all, as I mentioned, it starts at hospital discharge, and it’s got the same information that can be used in all of the post-acute settings. It provides for standard data collection. It will support safer transitions across settings because, unlike claims or even chart-abstracted data, for that matter, it assesses the patient over time and across settings as well. It contains major assessment elements from administrative to medical, cognitive, functional, prognosis, discharge status, symptoms, treatments, diagnosis, and so forth. So it has a broad (inaudible) that is collected.

And the other very important, critical part about this is that we took the opportunity not to confine ourselves to a paper instrument that would then have to be entered into some type of submission vehicle to send to CMS; instead, we sought to have an Internet-based (inaudible), where this information could be submitted through an Internet-based application. And let’s move to the next slide.

And we wanted to take advantage of all of the opportunities that might be available going forward, if this instrument were to be more widely implemented beyond just the demonstration; it could present tremendous opportunities in terms of a variety of providers being able to go into the instrument, for example. Just think for a second: if there’s a standardized assessment at hospital discharge, even for things like medication reconciliation, one could have the medications put in at that point, and then when the patient gets to the nursing home, it would greatly simplify the medication reconciliation, and so forth; just a lot of standardized information that could help coordinate care, provide better care, and, of course, provide a source of quality measures as well, in particular quality measures that go across episodes of care and could be combined with costs, for example. Even in the demonstration, one of the purposes is to compare costs as well as quality across the settings, possibly leading to payment reform. But in any event, for the (inaudible) efficiency of care and so forth, this leads to a possibility there.

Let’s go to the next slide, the IT standards, just to mention for the purposes of this Group that it is a Web-based application. It leverages a three-tiered Web architecture and an Oracle database, uses Health Level Seven standards and LOINC/SNOMED terminology, and there’s investigation going on into architecture to support exporting of data.
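
[Editor’s note: A minimal sketch of what a terminology-coded observation submitted by such a Web-based application might look like. The structure below is a generic illustration (LOINC 8480-6 is systolic blood pressure), not the actual CARE instrument schema or a CMS interface.]

    # Generic illustration of an HL7-style, LOINC-coded observation that a
    # Web-based assessment application might submit; NOT the actual CARE
    # instrument schema.
    import json

    observation = {
        "resource": "Observation",          # HL7-style resource name
        "code": {"system": "LOINC", "value": "8480-6",
                 "display": "Systolic blood pressure"},
        "value": {"quantity": 132, "unit": "mm[Hg]"},
        "subject": "patient/EXAMPLE-ID",    # hypothetical identifier
        "effective": "2007-12-14",
    }

    # In a three-tiered design, the browser (presentation tier) posts a
    # record like this to an application tier, which validates the
    # terminology bindings and persists it in the database tier (Oracle, in
    # the architecture described above).
    print(json.dumps(observation, indent=2))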

So we’ll go to the final slide, the summary. In short, the instrument would use standards set forth by ONC. It really looks forward to the possibility of this not being a medical record per se, but being a way that the assessment done on the patient is readily accessible to all providers who would deal with that patient; it would provide standardized information; it would help deal with the lack of coordinated care that frequently occurs; and it would deal, if one started at hospital discharge, with our sickest populations, in that the patients assessed would probably represent the most significantly ill, namely those who find themselves requiring acute care hospitalization. We would seek to leverage existing provider electronic health records and integrate with the clinical workflow already in place; in other words, not duplicate other things, but look to create interfaces with other electronic health care systems. And it would permit self-examination of internal procedures to ensure continued quality patient care, with outcomes that can be tracked as well using such an instrument.

So in short, the potential, we think, for the CARE instrument is quite significant, for the variety of reasons that I mentioned, but I just want to add something to get to the point about a minimum dataset. We probably don’t want to use that term outside the context of nursing homes, since it has a very specific meaning that refers to the assessment instrument currently used for nursing homes. But of course, hospitals currently are required to submit a significant amount of data on patients for quality measurement purposes; they have to do the chart abstraction specifically for that. And one can imagine, in connection with an instrument such as this, the potential that it would be a vehicle to have some assessment. For example, in our Hospital Compare measures, we have three specific conditions that we deal with (pneumonia, heart failure, and acute myocardial infarction), plus surgical infection prevention. One could imagine that assessment data with regard to those particular conditions might provide a vehicle to do quality measurement if they were placed in and collected through this vehicle. So we’re pretty excited about it, particularly the fact that the agency was interested in making this an Internet-based application and in taking advantage of, and paying attention to, the interoperability standards and so forth that we’re all working to promote.

So that’s sort of the conclusion of the presentation. I’d be happy to answer any questions on this. As I say, if they’re very detailed on some of the aspects of the CARE instrument or some of the HIT things, I have some additional staff who can help me answer those questions.

>> Carolyn Clancy:

Well, Mike, first let me thank you and your colleagues for a terrific presentation. I, for one, had never completely grasped where the MDS for nursing homes came from initially. I knew it was a lovely thing to have once you had it (and we have funded a fair amount of research using that dataset), but I actually didn’t know the genesis traced quite so clearly back to OBRA in ’87, so I very much appreciated that background.

I think what I’d like to do is to segment questions into two categories. I had the privilege of hearing about this, about the CARE instrument in particular, in some detail earlier this week, so I’d like to take specific comments and questions about that first. But I’d also like you to be thinking, Mike, about some lessons we might draw on if we were going to imagine a quality dataset that would help us make this transition to electronic health records, facilitating the use of electronic health records for measurement and reporting in the future.

And the great thing is, Janet Corrigan just joined us in the past 5 minutes; I wasn’t sure she’d be able to, given the timing of the board meeting. And to some extent, this begins to dovetail with the work that HITEP presented this morning and presented to our Workgroup last time. So let me just ask for specific comments and questions first.

>> Margaret VanAmringe:

This is Margaret. I have two comments. Thank you, Mike, for the presentation. I also heard it earlier this week, and I’m very excited about this, because I think one of the doors that this opens up (and I know this may depend in part upon what comes out of your demonstration) is that if it’s a Web-based tool, there now could be access for the many referring physicians who refer patients to hospitals and other post-acute facilities but don’t know, especially after patients are discharged, what has happened to them or even what kind of discharge instructions they’re on. So going downstream, the applications of this, well beyond what maybe Congress originally intended, are, I think, very, very great, and can help really solve one of the problems that we have in this country in terms of communication back to primary and referring physicians and clinicians. So I’m really very excited about this project as well.

The second comment gets a little bit more, I think, to Carolyn’s second point: I’m wondering how burdensome this instrument will be in terms of being able to complete it, and whether we can make it something that has the right data elements that we need, but not so many that there’ll be some type of pushback, especially from hospitals, in filling it out and getting it up and running. I wonder if you’ve had much feedback on that second aspect.

>> Michael Rapp:

Okay, well, I think I might let Judy Tobin chime in a little here because of all the involvement she’s had with the specific development. But let me just say a couple of things. I did neglect to mention that although I talked about the demonstration, another potential implementation opportunity for this instrument may be in the ninth scope of work, in which there’s a theme of care coordination in particular. This would dovetail right into that. In that context, there would be the potential to broaden this out the way you’re talking about, which is to have the information accessible by providers other than just the post-acute settings, potentially including physicians. So that is something that we’ve given some thought to as well.

With regard to the burden, as I say, I’ll let Judy comment. But I think that yes, there’s always a burden to completing anything; on the other hand, the utility may be quite substantial. And also, we would love to take advantage of interfaces so that (inaudible) the interoperability aspect: there are a variety of different information systems and so forth that the hospital would have, but conceivably, with proper interfacing of information, it wouldn’t necessarily even require the hospital to fill out something per se, in some instances. So that hopefully would reduce the burden. But Judy, do you have any comment there?

>> Judith Tobin:

Sure. And Margaret, thank you. I appreciate both the comments and the questions, and we’re particularly sensitive to the issue of burden. Actually, I’m a clinician by practice, a physical therapist; Mike is a physician, and Marty is a nurse. So I think we have that sense that you want to minimize the amount of time people have to spend with a chart but, again, get at the right mix of information that’s critical to communicate to another care provider.

And I believe for the demo, all the participants will be participating on a volunteer basis. And we’re really going to learn a lot from the demonstration in terms of what’s going to help inform quality and what will help inform payment policy, but also perhaps, as you mentioned, which questions are the right questions and which ones, in the long run, are not as informative, so that we can really pare the instrument down.

The other aspect or feature or characteristic of the instrument that doesn’t really come across when you see it in hardcopy is that there are quite a few skip patterns. When the instrument is displayed for public comment, we have the entire master list of every possible question. But the reality is that for certain sites, there’s a much more abbreviated version in terms of what actually appears to the user, so it’d be a subset of those items. And if this has usefulness beyond the demo, and with the ninth scope of work, I think we’ll learn quite a bit about what could be prepopulated from data sources, what questions need to be answered in which settings, and which information is very useful to transfer with that beneficiary.
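
To make the skip-pattern idea concrete, here is a minimal Python sketch: a master list of every possible question, filtered down to the subset a given care setting actually displays, with one answer-dependent gate. The item IDs, settings, and gating rule are invented for illustration and are not actual CARE items.

```python
# A rough sketch of the skip-pattern idea: the master list holds every
# possible question, but each care setting (and each earlier answer)
# determines which subset is actually shown. All IDs are invented.

MASTER_ITEMS = [
    {"id": "MOBILITY_01", "settings": {"hospital", "irf", "snf", "hha"}},
    {"id": "VENT_01", "settings": {"ltch", "hospital"}},
    # VENT_02 is only asked if VENT_01 was answered "yes"
    {"id": "VENT_02", "settings": {"ltch", "hospital"},
     "depends_on": ("VENT_01", "yes")},
    {"id": "HOME_SAFETY_01", "settings": {"hha"}},
]

def items_to_display(setting, answers):
    """Return the subset of master items shown in one setting,
    honoring skip patterns based on answers given so far."""
    shown = []
    for item in MASTER_ITEMS:
        if setting not in item["settings"]:
            continue                      # skipped entirely in this setting
        dep = item.get("depends_on")
        if dep and answers.get(dep[0]) != dep[1]:
            continue                      # skip pattern: gate not satisfied
        shown.append(item["id"])
    return shown

print(items_to_display("hha", {}))                   # home health subset
print(items_to_display("ltch", {"VENT_01": "yes"}))  # gated item now appears
```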

>> Margaret VanAmringe:

Thank you.

>> Judith Tobin:

You’re welcome.

>> Carolyn Clancy:

Other comments or questions?

>> Charlene Underwood:

Just a comment. Operationally, in terms of the timetable and location of the demonstration project, is that sorted out?

>> Michael Rapp:

Judy, do you want to answer that?

>> Judith Tobin:

Yes. They’re in the process of selecting the markets. It will be representative of markets across the country, urban, rural, and in between, anywhere from 10 to 15 markets, with a minimum of 150 providers. And among those providers will be hospitals, long-term care hospitals, home health agencies, and inpatient rehab facilities.

>> Charlene Underwood:

My question would be, and again, this is Charlene Underwood from Siemens (you know, we do service hospitals and practices), when you talked about alignment with the standards, like the export standards, in terms of populating CARE, and I like the approach, it seems like those are standards in development as we speak. Or am I missing something here in terms of the automatic export of the data in that form from an EHR?

>> Judith Tobin:

Well, actually, we’re deep into this portion of it. It won’t be operational for the beginning of the demo, but we’re speaking with vendors such as yours as well as larger hospital groups, ELTEC, about what systems their pharmacies run on right now for medications and, if we were to try to take a medication list and import it into the CARE instrument, what would be needed for that interoperability. And I think some of those conversations are going on this week and next week, and I believe Siemens is also going to be one of the groups we’re speaking with.

But Marty, you’re more of an informatics expert. I don’t know if you want to comment on that as well. I think we’re certainly trying to embrace all the standards that have come through the Office of the National Coordinator.

>> Martin Rice:

Charlene, it’s really hard to pinpoint exactly which standards we’re going to be using right now, because our goal is to make this process of exporting data to CMS as invisible to the clinician as possible. We want to make this an easy process that fits into the workflow. We’ve looked at the CDA. We’ve talked to people at IHE. There’s really no commitment yet, because we might have to make a variety of standards available; no one standard is really the accepted standard for transmitting data.
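
As a hedged sketch of “making a variety of standards available”: the clinician-facing workflow produces one internal record, and interchangeable exporters render it in whichever format a receiver accepts. This illustrates the design idea only; the formats shown are simplified stand-ins, not CMS’s implementation.

```python
# A sketch of keeping the workflow fixed while the export standard varies:
# one internal record, pluggable exporters. Formats are simplified stand-ins.
import json
import xml.etree.ElementTree as ET

record = {"patient_id": "12345", "item": "MOBILITY_01", "value": "Independent"}

def export_cda_like(rec):
    """Render the record as a simplified CDA-flavored XML observation."""
    obs = ET.Element("observation")
    ET.SubElement(obs, "id", root=rec["patient_id"])
    ET.SubElement(obs, "code", code=rec["item"])
    ET.SubElement(obs, "value").text = rec["value"]
    return ET.tostring(obs, encoding="unicode")

def export_json(rec):
    """Fallback exporter for receivers that accept plain JSON."""
    return json.dumps(rec)

EXPORTERS = {"cda": export_cda_like, "json": export_json}

def export(rec, standard):
    return EXPORTERS[standard](rec)   # same workflow, pluggable standard

print(export(record, "cda"))
print(export(record, "json"))
```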

>> Judith Tobin:

You know, what we have heard, though, since we’ve had a number of opportunities to speak nationally about this initiative, is a repeating message from hospitals and the larger vendors and providers: “Try to make your system work with our front-end systems.” Nobody wants to have to do a big overhaul. So we are trying to be very much aware of that and understand what the needs are out there.

>> Martin Rice:

And we welcome your input, Charlene. If you want to call me up, I would love to speak with you more about that.

>> Charlene Underwood:

Yeah, I would do that, because, I mean, this is one of those projects; you know, the Vendors Association’s been working on this stuff, but this is one where the scope is a little more manageable, if you will, and it actually is going to be a demonstration project. So it would be nice to have that conversation, because, you know, the vendors are almost aligned on an interoperability roadmap, to some extent, without a place to go. So...

>> Martin Rice:

And I’d love to speak with you further about it and put you in touch with the right people so you can, you know, give us your information.

>> Kelly Cronin:

Yeah, this is Kelly Cronin. I would also encourage an offline conversation about how to make sure that we’re mapping to the CCHIT criteria, because that is what is going to be driving standards adoption in these records over the next couple of years. And we already have over 40 percent of the ambulatory care market with certified products, so there is real traction. And HITSP is naming interoperability specifications that cover probably a majority of the things that you’re looking at, or at least it will over the next year.

So it’s not ONC standards per se; the standards are being done by many, many organizations on the outside that are looking at these in depth, and the Secretary is really the one who recognizes them. And then, through an executive order, all government funding, including through demonstration programs, really has to be consistent with those. So we can go through it in a lot more detail offline, and I just would encourage that conversation to happen maybe in the next few weeks, if possible.

>> Martin Rice:

And if somebody can send me some contact information, I’ll be more than happy to follow up on it.

>> Kelly Cronin:

Okay, that’d be great.

>> Carolyn Clancy:

Great. Janet?

>> Janet Corrigan:

Yeah. Mike, this is Janet Corrigan. And it’s great to listen to the presentation, even if it’s my second time. I learn things each time. But one comment: As I took a look here at sort of the five domains of the CARE instrument, which would be the domains for the core set of data, it strikes me that we have an opportunity to harmonize the language in the domains that you are using with the domains that are coming out of NQF’s comprehensive chronic care measurement framework, the framework for measures. For example, in our comprehensive framework, we have the domains of process, outcomes, care coordination, patient engagement and decisionmaking, and resource use. And as I look at the domains in the instrument, some of them probably map quite directly, and others probably don’t map quite so directly. But it strikes me that we probably should have an offline discussion just to see the extent to which we could fairly quickly, each of these having gone through a fairly extensive process, harmonize some of that language and get to more common domains that would guide both the NQF work and measure developers, and decide where we want to focus our attention and endorsement activities in a way very consistent with where things are going in terms of the data.

>> Michael Rapp:

That’s an excellent suggestion, Janet. Thank you.

>> Carolyn Clancy:

Other comments or questions? (Pause) Well, Mike and colleagues, let me just say that you always know that it’s a really good presentation, and that what you’re doing is fine work, when people are immediately, right out of the gate, saying, “But what about this?” and, “This is really cool.” You know, given how many people in what I guess we call the post-acute setting (which isn’t actually a setting; it’s a variety of settings) currently have not had the opportunity to have any continuity of information, the possibilities here, I think, are very, very exciting.

Now, having said that, on one level, I want to get back to Charlene’s point about this being a manageable kind of project. It’s a demonstration, it’s a discrete population, and so forth. (Inaudible) setting (inaudible) and the notion of a quality dataset. This is what I’ve understood, anyway; I may be overstretching here. But when you look at the fact that you at CMS are managing a process for physicians in particular to submit data on claims, and for hospitals to submit data through an electronic process that ultimately derives, for the vast majority of hospitals, from chart review, while also wanting to figure out how not to disincentivize the adoption of electronic health records, a quality dataset, which is one of the recommendations we’ll be discussing as we get into the recommendations piece, does sound like a fairly appealing idea and, in some ways, might actually be a logical build onto the health IT work. Have you given any thought to what that would take?

>> Michael Rapp:

Well, I have given some thought to it, particularly in the context of the CARE instrument. I think one way to look at quality measures in particular is that sometimes you take a confined dataset and you just see what you can do with it. I think that’s kind of the way the MDS was: it was an assessment instrument that wasn’t set up specifically to calculate quality measures, but once the data were there, we looked to see, “Well, what can we calculate?” And we’ve looked at even things like Part D data when that came about: what could you actually calculate with the limited set of data that you get from that?

So we will have, then, in the CARE instrument, with the assessment elements that were put into it, a certain number of quality measures that might be calculated from it. So if we were to start with that instrument in terms of a working effort, we would start with the elements that are there.

The second thing: I would see the hospitals as a particularly important and interesting area, especially given the recurrent concerns about the burden that hospitals currently have in abstracting and submitting data to not only us, of course, but to many other sources, or many other requests for data. So in coming up with a dataset, the next thing I would probably think of is to look at the measures that we’re currently interested in collecting and to see to what extent adding data elements to something like the CARE instrument would deal with that current need.

And then I would see that perhaps the third stage in all this would be to look more broadly and not think necessarily in terms of quality measures or this particular assessment, but in terms of what might round out the data elements that people broadly think would be desirable to collect on anyone who ends up hospitalized, and that could be carried forward from there.

So that’s kind of the three stages that I would see, to the extent that I can envision where this might go, but I definitely would be interested in the comments of the people at the meeting and on the call. I would point out that in the nursing home arena, in terms of adding to existing elements, when we became interested in influenza vaccination and so forth, that’s something that wasn’t originally in the MDS, but it’s something that we added to it. So it’s an example of how one is not necessarily confined to what one has, and if one goes forward and thinks broadly and has a lot of input, it can be a useful device, and, I think, to the point about having something manageable to deal with: this is manageable. It’s big, but it’s still manageable, and it’s focused. And I think having focused areas of effort like that, especially insofar as it couples well with the HIT development going on, might be a tremendous opportunity.

>> Carolyn Clancy:

Thank you. Any specific comments or questions before we transition into sort of recommendations and next steps for the Workgroup at large on this particular point? (Pause) Charlene, have you or any of your colleagues thought about this at all?

>> Charlene Underwood:

Actually, I’m going to be honest: The awareness has been low. We’ve been working more on the ambulatory side. But the tools should be similar, you know. We’ve been working on the standard for CCD to export data in a discharge summary, and we probably haven’t integrated the minimum dataset requirements into that planning, but we would hope there’d be some synergy there. So we’ve got to bring those together.

>> Jane Metzger:

Carolyn, this is Jane Metzger.

>> Carolyn Clancy:

Hey, Jane.

>> Jane Metzger:

Hello. As I was thinking about what you could do in a similar vein to what we’ve been hearing about the post-acute situation, it strikes me that if we’re talking about a minimum dataset for currently required sets of measures, like, say, the core measures, something similar to this is actually starting to happen. Hospitals are using quality nurses, or they’re starting to call them “clinical effectiveness nurses,” and they assemble the information needed for the core measures in real time as patients are in the hospital. And we’re even starting to see some automated tools that help in that process. That’s obviously not ideal: you’d rather get electronic information from the original documentation. But what made me think of it is that when you set up for this kind of process, you really have figured out all the information that you need for that specific set of quality measures, including all of the exclusions.

So you could think about this as an interim way of data capture as we move toward, hopefully, capture directly from electronic documentation. But you can even potentially set up these tools for the clinical effectiveness nurse so that, as electronic data become available, they can be brought over to reduce the need for manual data entry. This approach, however, requires that you’re thinking of a particular set of measures, not of quality in a broader sense.
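
A small Python sketch of the interim approach Jane describes: the worksheet for a measure set lists every needed element (including exclusion-type elements); whatever already exists in electronic feeds is pre-populated, and only the gaps are queued for the clinical effectiveness nurse. Element names and values are invented examples.

```python
# Pre-populating a core-measure abstraction worksheet from electronic feeds;
# remaining elements are flagged for manual abstraction. Names are invented.

CORE_MEASURE_ELEMENTS = [
    "admission_datetime", "pneumonia_dx", "antibiotic_start",
    "comfort_care_only",            # a typical exclusion-type element
]

electronic_feeds = {                # e.g., from ADT, pharmacy, lab systems
    "admission_datetime": "2007-12-14T09:30",
    "antibiotic_start": "2007-12-14T11:05",
}

worksheet = {}
manual_queue = []
for element in CORE_MEASURE_ELEMENTS:
    if element in electronic_feeds:
        worksheet[element] = electronic_feeds[element]   # pre-populated
    else:
        manual_queue.append(element)                     # nurse abstracts this

print("Pre-populated:", worksheet)
print("Needs manual abstraction:", manual_queue)
```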

>> Carolyn Clancy:

Although, as Mike pointed out, they have, downstream, augmented the MDS for nursing homes, which I knew about but hadn’t actually thought about in terms of the concrete steps before he pointed it out, because I’d seen the data and the results. And of course, I don’t think our sensitivity to the idea of people in nursing homes getting influenza vaccines was quite as acute 20 years ago. It should have been, but it wasn’t.

>> Judith Tobin:

And this is Judy Tobin. I just wanted to add that I think one of the overarching goals with this CARE instrument, and our instruments as they go forward, is to really build them and revise them in such a way that we’re only retaining useful items, maybe retiring items that have outlived their usefulness and replacing them with items that are relevant. So hopefully, we could meet that need of the end users, whether it’s a specialist or this nurse clinician.

>> Carolyn Clancy:

Thank you. I mean, one of the reasons I was thinking about this is that, again, I had the opportunity to hear the HITEP presentation, which, like wine, seems to be getting better with age. (Laugh) So (inaudible) was really on this morning.

And if you’ll recall from the last presentation, when this really terrific panel, after two full meetings, tried to come up with an analysis of whether you could derive a core set of data elements for even a prioritized subset of AQA and HQA measures, the short answer was, “Not exactly. Not really. No.” This report is going out for public comment through NQF and also from HHS, and it strikes me that we might be able to add one or two questions to that Federal Register notice, something like a short survey, to get some feedback on this question. So we might come back to you about how to word that more precisely.

I mean, one advantage of Mike and his colleagues doing this at CMS, and now I’m going to make all of this work sound really easy, is that there is a level at which you’ve got more control of the situation than if you needed to work with many measure developers and so forth, because you’re building on areas that you have a lot of control over already, which is a little bit different than trying to figure out how to deal with the many, many parties involved in, for example, physician measures and so forth. But (inaudible) the hospitals (inaudible) my brain is beginning to circulate.

If there are no other comments, what I would like to then turn to, in the next part do you have something? Yes.

>> Janet Corrigan:

Just one quick question. This is Janet again. And this is going to show my ignorance, but how does this work relate to the CCR, that Continuity of Care Record effort that’s been under way for quite a few years? Are there others more familiar with that?

>> Judith Tobin:

Yes, actually, and we’ve had a number of conversations with folks at ASPE as well. And I suppose where they would overlap is in terms of interoperability, in adopting the standards that are broadly accepted, with the idea that if you think of the CARE instrument almost as an item bank of data items, survey questions that are linked with an identifier or a LOINC code or something, that same question and response could be imported into a continuity of care document, a CCD. It’s gone under a number of acronyms. And the reverse could be true as well: if people are standardizing some of this data, something that’s already been answered through another instrument could be imported into the CARE item.
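
To illustrate the item-bank idea in rough code: because each question/response pair is linked to a code such as LOINC, it can be rendered as a simplified CCD-style observation and recovered from one, which is what makes the two-way exchange Judy describes plausible. The structure below is a deliberately simplified stand-in, not the real CCD schema, and the code value is hypothetical.

```python
# Round-tripping a LOINC-coded item-bank entry through a simplified
# CCD-style XML observation. Structure and code are illustrative only.
import xml.etree.ElementTree as ET

def item_to_ccd_observation(loinc_code, value):
    obs = ET.Element("observation")
    ET.SubElement(obs, "code", code=loinc_code, codeSystem="LOINC")
    ET.SubElement(obs, "value").text = value
    return obs

def ccd_observation_to_item(obs):
    """The reverse direction: recover (code, value) from an observation."""
    code = obs.find("code").get("code")
    value = obs.find("value").text
    return code, value

obs = item_to_ccd_observation("45593-0", "Independent")  # illustrative code
print(ET.tostring(obs, encoding="unicode"))
print(ccd_observation_to_item(obs))
```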

>> Janet Corrigan:

Thank you.

>> Carolyn Clancy:

That’s far from an ignorant question. I’m still getting my head around that one as well.

>> Judith Tobin:

I don’t know if that was a very clear explanation. Hopefully, that helps.

>> Carolyn Clancy:

CCD is, effectively, what VA uses for quality measurement?

>> Kelly Cronin:

I think it’s one of the things they use, but Indian Health Service is actually probably the most sophisticated in terms of having the ability to export data in a standardized fashion and have it in a repository. The VA and DOD are not that advanced. And I imagine Indian Health Service probably does use what’s becoming the CCD. It’s what used to be called the CCR; the most recent updated version was approved by HITSP, and really, it was created because of HITSP. It’s called the CCD. That’s why everyone has all these confusing acronyms.

>> Carolyn Clancy:

It’s not religious education, as some of us (laugh). There should be, like, an acronym bank, like URLs, right? I mean (inaudible) (laugh) (inaudible) this acronym.

Is someone from IHS on the phone?

>> :

There was.

>> Mark Carroll:

Yes, hi. This is Mark Carroll. I’m here.

>> Carolyn Clancy:

Mark, I couldn’t remember if it was Mark or Matt. Sorry about that.

>> Mark Carroll:

That’s okay.

>> Carolyn Clancy:

Are you familiar with how you do the quality measures? I remember that you’ve got a fabulous interface on what’s VistA, or a version of VistA.

>> Mark Carroll:

Right, through RPMS. We have a separate program that actually will enable us to, as Dr. Cullen would say, sort of scavenger-hunt for specific fields based on measures that are GPRA-driven as well as measures that are internally driven and externally advised and, from that, kind of collate and pull out measures. And now we have a different viewing capability, through a new program that was just completed, to look at them from many different perspectives, whether that be a patient-specific perspective, facility-specific, panels of patients, or even a community perspective. Terry Cullen is probably better able to speak to this in greater detail, and I’m sitting in for her today, but that’s a rough overview of what we do. I’m not sure if that’s helpful as a description in this discussion.

>> Carolyn Clancy:

So thank you very much for that, and please let Terry know that we may be pestering her for a (inaudible) fine work that you’ve been doing and, frankly, the results that have come about as a result of this work. But I think it may actually be helpful as we get further downstream.

So I’d like to transition to the next part of our meeting, which is to return to the quality vision roadmap and potential recommendations to the American Health Information Community. Now, some of this you’ve seen before already. It has been refined a bit, you’ll see, as we get into it, because the roadmap and descriptive slides have been updated. But just turning to the first slide, please: you’ll recall that we have defined milestones, timelines, and key players necessary to achieve the desired end state for each component included in the quality vision roadmap. Let me say, as I re-looked at this this morning, it occurred to me that we’ve defined key players more from the IT side. And I think as we walk through this, we want to be thinking about the key players from the quality side who need to be engaged here as well. Interesting that I only noticed this today.

Our challenge, then, to get back to where we started at the beginning of this meeting, is to try to figure out where we can make a difference in the next year. Now, that doesn’t mean this necessarily has to be bounded by “It can be done one year from today” (although if we identified one or more of those, that would be fine), but more that we have figured out that we can get some traction on a particular task and that the Quality Workgroup acting on it will make a difference. There’s clearly a lot of activity, and I, for one, certainly don’t want to, in any way, discount or dismiss the efforts that the vendor association and others have made working through their collaborative, IHE, and so on.

So based on our previous discussions, we’ve heard a number of ideas that have been brought up either as potential recommendations from this Group to the AHIC or as potential action items for this Group to consider undertaking. This slide presentation lays out the recommendations and action items with the vision roadmap component to which they are related. It was prepared in advance, and I do want to thank Michelle and Kelly for making this happen. If we have overlooked important ideas, now is the time to raise them, but I’m hoping that we can narrow the list of options.

So essentially, as we walk through this, what I want to do is go through it once and then come back sort of systematically for us to consider each item. For some, we may just think, “Whoa, too big. It’s got to happen sometime, but the potential for traction in the near term” (which I’m defining as this next year) “is pretty small, or it would require so much effort that to say it’s a unique function or opportunity for the Quality Workgroup doesn’t make a lot of sense.” Another response would be “Yes, that’s very promising.” And a third is “Huh, we may need to kick the tires a little bit,” in which case I think we could add a few discrete questions to this Federal Register notice seeking public comment on the HITEP document, though I expect most of the public comment to say, “Whoa, great stuff” (laugh), but beyond that.

So if you can turn to the next slide, please: the bullets on this slide effectively just drill down a little bit into what I was just saying. Do we agree, as a Group, that this idea or action item should be done, or do we want to discard it? Is there anything that we’re missing? Have we overlooked groups that need to be part of the scene or part of the activity? Should we advance this? Do we need more analysis or information? And so forth. Next slide, please.

So the next slide, which I’m hoping that you, like I, have a full-page printout of, is actually a notional draft of the roadmap. Now, referring back to our original charges, this is actually addressing the broader charge: developing health IT capabilities to achieve the Quality Workgroup vision, or, I would say, to support the quality enterprise. In essence, what’s really going to be required is that the quality enterprise and the health IT enterprise begin to converge, which is not precisely the current state. So you can see the future-state components that come right out of the roadmap, from incentives and measure set evolution at the top, down through data stewardship, provider record matching, quality datasets (where the particular options about inpatient and ambulatory care quality datasets are actually brought up), expanded data element standardization, and so forth.

The bottom bar, I’ll just call your attention to, refers to clinical decision support, which we have always articulated as a key part of where we’re trying to drive to. Kelly would be the first to remind me, so I’ll say it for her: because clinical decision support touches a number of the other workgroups that report to the American Health Information Community, there is a cross-cutting group that literally got together yesterday and has some more work teed up in the next few weeks, and we’ll be bringing that back to this Group. So I don’t think we need to get too, too bogged down in that aspect of it. Next slide, please.

I should also say, if anyone wants to jump in at any time, don’t hesitate.

>> Justine Carr:

Carolyn, this is Justine. One question I have as I look at this, and I think you’ve alluded to it, maybe earlier, maybe yesterday, is human factors: If a physician is doing all these things to make an electronic record of quality and capture the data, are we factoring in, in any way, what portion of that 20-minute visit is electronic and what portion is human interaction?

>> Carolyn Clancy:

That’s interesting. I’m glad that you went on to elaborate, because when you initially said “human factors,” I had something a little different in mind. I think there are some studies of that phenomenon. I have to say, I have found the ones I’ve seen a little bit unsatisfying, only because it’s never really clear whether the physicians in the particular study are using all the functionalities of the electronic health record, or where they are in the sort of training and adaptability phases, and some doctors take more time to learn this (I would be among the latter group, I’m sure) and so forth. I was thinking of the question of human factors as more about how data are collected as part of the workflow.

>> Justine Carr:

Yeah. I’m sorry. I meant that as well. I mean, as you have your 20 minutes and you’re doing all of these data collections and then e-prescribing and then med reconciliation, how are we allocating our time, and at what point have we eclipsed the time with the patient?

>> Carolyn Clancy:

Well, I think that’s something lots of clinicians would have very strong views about, but I don’t know of any quantitative estimates. Does anyone else?

>> Justine Carr:

I’m raising it more because, as we have a roadmap, we could have all these things but end up with a, you know, frustrated or hands-tied provider.

>> Janet Corrigan:

No, I wouldn’t necessarily assume that these activities are done either with or without the patient. It seems to me you can do med reconciliation with the patient; it’s a joint process, so you’re having (inaudible).

>> Justine Carr:

I’m just stating the extreme. I’m just saying that somewhere along the line, we should think about this, because we’ve changed the workflow: things that we might have done at the end of the visit or at the end of the day, some of them now have to be done in the moment. And I don’t know quite where it belongs, but I think it needs to be in the dialogue somewhere.

>> Carolyn Clancy:

No, I think it’s a very important point.

So if we could go to the next slide, please. One area that we have talked about in the roadmap relates to expanded data element standardization. Where we are today is that the work of the Health IT Expert Panel, or HITEP, and the use cases has helped guide HITSP’s identification of specifications and standards needed to facilitate development of interoperable health IT. And then the certification commission can add additional standards for certification on an annual basis. Clunky, but it works, and there are clear roles and responsibilities.

Clearly, where we think we’re headed is that there are continuous, ongoing efforts under way to standardize the data required to facilitate quality measurement, implemented through an established process. In other words, we shouldn’t need a HITEP to say, “Wow, we’ve got this process that we didn’t think about,” or, “We’ve got measures that were developed actually without thinking about any data source.” I don’t think they were particularly antagonistic to electronic health records; the current process mostly focuses on what’s the right thing to do and doesn’t really think at all about data sources. And so we’re imagining that in our end state, when we’re getting all this done, there will actually be a much more seamless, integrated process.

In the bottom box, you can see that there are key players to enable movement towards the vision. And I think all of those make sense, although I think the measure developers need to be there, at a bare minimum, as well. Would you agree with that, Janet?

>> Janet Corrigan:

Yes, I would. I also would make the distinction and have NQF there in its endorsement role, with the guidelines we set for measures and where we set the bar in terms of the measure specifications. We can drive parts of that, I think. And that’s separate and apart from HITEP.

>> Carolyn Clancy:

Yeah, no, I think that’s very important.

>> Margaret VanAmringe:

This is Margaret. I definitely would agree with both of those additions, because I see us, the Joint Commission, trying to do some of the same things, and we work through the (inaudible) for the collaborative. So putting (inaudible) our work in measure development and implementation would be very important to put on there.

>> Carolyn Clancy:

So if you turn to the next slide, one potential recommendation idea is to expand and promote standardized, consensus-based data elements that support automated quality measurement and reporting. Below that are several bullets that begin to get into how we would go about that. One is to work with Chief Information Officers and integrated delivery networks. One is that the Department should, in some fashion, work to gain buy-in from leaders in the community to utilize standard data elements and source systems. And then there’s one for AHRQ, here, that we should convene a group to prioritize areas for structuring clinical data.

What’s most interesting about this to me is that no matter how you cut the problem that we’re trying to address, from many different aspects, you keep coming back to clinical documentation. Right now, AHRQ is supporting a number of efforts exploring the feasibility of structured data entry. And I must say, the only one I saw demoed, as a busy clinician, I’m not sure I find very appealing. It’s still, I’d say, a work in progress. And this was presented in rapid-fire fashion, so it wasn’t a case of the person presenting speaking slowly.

Again, what I want to do is just walk through these, and then we’ll come back and sort of decide how much traction we have. But let me just ask, are there any comments on this?

>> Richard Stephens:

Well, Carolyn, this is Rick. The one thing you might add is a fourth bullet that says, you know, figure out how to put incentives in place to motivate people to go to these standards.

>> Carolyn Clancy:

Very, very good point.

>> Janet Corrigan:

That’s a great point.

>> Carolyn Clancy:

Thank you. When Janet was mentioning the role of NQF in setting data specifications and actually sending a very important signal back to measure developers, one incentive that I found very appealing this morning was the possibility that, at some future date, you might only get a relatively short, time-limited endorsement if what you were submitting was a measure with specifications that were not set for an electronic health record. Let me be very clear: This was not put up for a vote or consensus or anything, but the concept struck me as very appealing. Kristine?

>> Kristine Martin-Anderson:

This is Kristine Martin-Anderson. I just wanted to raise a point, which we haven’t talked about in a long time, that came up in our subgroup that was talking about these initially, which is that in this particular area, while electronic health records are very important, we really don’t have to wait for electronic health records. So that was one of the ideas around trying to educate the community: for instance, if your existing lab system were to convert to using a standard that comes out, or your existing pharmacy system that may or may not be part of an electronic health record system, there are individual CIOs and integrated delivery networks that have the ability to switch their standards for data capture even before they implement, which would help them ease into an electronic health record. So that’s just one of the points: trying to think about how we capture that opportunity this year, as compared to just getting into the 5-year queue.

>> Carolyn Clancy:

And we already know that since CMS is requiring a present-on-admission diagnosis for hospital discharge abstracts, you combine that with lab data and you’ve increased the precision of discharge abstracts alone in a pretty powerful way. So that’s a very important point. Thank you.
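
As a hedged sketch of why the present-on-admission (POA) flag sharpens discharge data: the same diagnosis code means something different depending on whether it was present at admission (a comorbidity, useful for risk adjustment) or arose during the stay (a possible complication, a quality signal). The codes and flags below are invented examples.

```python
# Classifying discharge diagnoses by POA status; codes are illustrative.
discharge_diagnoses = [
    {"icd9": "250.00", "poa": "Y"},   # e.g., diabetes, present on admission
    {"icd9": "998.59", "poa": "N"},   # e.g., postoperative infection, not POA
]

comorbidities = [d for d in discharge_diagnoses if d["poa"] == "Y"]
complications = [d for d in discharge_diagnoses if d["poa"] == "N"]

print("Comorbidities (risk adjustment):", comorbidities)
print("Possible complications (quality signal):", complications)
```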

Any other comments? (Pause) Then we’ll move to the next slide, which talks about coding improvements. Where we are now, and in the future, is continuous, ongoing efforts under way to standardize coding for diagnoses, procedures, and billing, which form the basis for determining inclusion and exclusion criteria for quality measures. Let me just say that this, I think, creates huge headaches right now. One derives from the limitations of using billing claims for quality metrics, even though they are electronic data and, to that extent, have a lot of fairly self-evident advantages over running around to find charts.

And it’s not just finding the charts, actually. It’s the inconsistency of documentation that is the real problem, because if you could wave your wand and have all the charts delivered to you in 5 minutes, that wouldn’t necessarily assure that there was good-quality information in there, because there’s not a lot of standardization in how people document. So leave it that way. And I’ll leave the nurses out of that, by the way, because nurses tend to be much more consistent. So the recommendation here would be

>> Kristine Martin-Anderson:

None have come forward for what this Workgroup could do, but this is an area that other groups have expressed interest in. So the real question here is, is there something that we could be doing in this area or not? And if not, move on.

>> Carolyn Clancy:

Okay, but at the moment, we don’t have a specific recommendation.

>> Kristine Martin-Anderson:

None have come forward so far.

>> Janet Corrigan:

Standardized coding.

>> Kristine Martin-Anderson:

We could improve coding. Right.

>> Carolyn Clancy:

So moving to the next slide, then, we’re going to get into data exchange.

>> Janet Corrigan:

Sorry, one question. One of the things out of HITEP was that we needed a more standardized coding system for the problem list, I thought, and a time- and date-stamped problem list. Isn’t that something where this Group can drive at least some developmental work? That seemed to be a pretty critical element for the denominator populations in most of the measures.

>> Justine Carr:

This is Justine. I would second that; I deal with that in my day job as well, across many different settings. I mean, there’s SNOMED coding, and there’s ICD-9 or -10 coding, and a lot of problem lists are not necessarily linked to those. And I think that’s going to be a key driver, even for the decision support piece and all that. I don’t know that anybody is raising their hand on that, but I think it’s critical.

>> Carolyn Clancy:

So I guess the problem list has at least two connotations here. One is, I’m going to say, for institution-based care, where there’s usually an incentive related to payment and other factors to have a list of diagnoses that the patient has. For ambulatory care, it often doesn’t exist at all. I mean, if I see a patient with four chronic illnesses, by and large, it doesn’t affect the bill very much which one I check off.

>> Justine Carr:

I’m not saying for the bill. I’m saying for the taxonomy.

>> Carolyn Clancy:

No, no, no; what I’m trying to get at, I’m sorry, is that if you look at the key players we’d need to engage to make some movement here, it feels like a lot of physician and other health care professional organizations would need to be engaged.

>> Janet Corrigan:

Yes, because in many ways, we were trying to get at two things, I think, with the problem list that we need. It drives not only the denominator populations; it’s also where you would get the data for most of the exclusions that are in a lot of measures. So I think you’re right: the performance measure community would be keenly interested, and one would need to take whatever standardized problem list is developed and bump it up against what all the needs are in the measures. And the idea here was that those data elements should be collected in the CARE instrument, I would think, because you want them to cross settings, and you want them time- and date-stamped, because if the list isn’t updated and time- and date-stamped, it’s not going to be useful. It’s got to keep being revised. It’s where you have the rule-outs, the diagnoses, and all of that, which is really critical to getting the measures right.
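
A minimal sketch of the time- and date-stamped problem list being described: each entry carries a standard code, a status so rule-outs and resolved problems are distinguishable, and a “last confirmed” stamp certifying the condition is still current. The field names and codes are illustrative assumptions, not a proposed standard.

```python
# A sketch of one time/date-stamped, coded problem-list entry, with the kind
# of status field that lets measure logic separate active problems from
# rule-outs and resolved problems. Field names and codes are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class Problem:
    code: str              # e.g., a SNOMED CT or ICD-9/10 code
    code_system: str
    description: str
    status: str            # "active", "resolved", "ruled_out"
    onset: date
    last_confirmed: date   # the time/date stamp discussed above

problems = [
    Problem("44054006", "SNOMED", "Type 2 diabetes", "active",
            date(2003, 5, 1), date(2007, 12, 1)),
    Problem("233604007", "SNOMED", "Pneumonia", "resolved",
            date(2007, 1, 10), date(2007, 2, 15)),
]

# Denominator/exclusion logic can then key off status plus recency:
active_now = [p for p in problems if p.status == "active"]
print([p.description for p in active_now])
```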

>> Carolyn Clancy:

So this is one little thing: I remember being impressed with the general idea, but there’s one specific point I don’t have a clear recollection about. If an individual patient has four chronic illnesses, the time and date stamp is certifying that each condition is still present?

>> Janet Corrigan:

Yes.

>> Carolyn Clancy:

That their (inaudible) didn’t go away.

>> Janet Corrigan:

That’s right, as I understand it, in my limited knowledge.

>> Carolyn Clancy:

Okay. That actually makes sense.

>> Kristine Martin-Anderson:

Or, during the care, say, in an inpatient stay, at what point the diagnosis appears, so you know if it’s a complication or a comorbidity, which would make the coding on admission more accurate, so...

>> Carolyn Clancy:

Got it. Yeah. Okay, that’s very helpful.

>> Jane Metzger:

Carolyn, this is Jane Metzger. I would argue that in the inpatient setting, there isn’t a problem list today. There’s some attention to it at admission and a fair amount of attention to it at discharge, and almost nothing in between. I know it’s on the roadmap for CCHIT certification. It’s needed for a whole variety of reasons. But the discipline of looking, every day, at the problem list and making sure that it’s up to date, especially for a medical admission: I think it’s a huge gap.

>> Margaret VanAmringe:

I agree. There are models. For example, I think iMD-Soft MetaVision has a problem list, and it then drives the documentation, and it can actually be reported out as, you know, problems over time or by date: you can sort it by problem; you can sort it by day. I mean, it has potential, but having that across the inpatient setting, you know, I see as a very important thing as well.

>> Carolyn Clancy:

That’s very helpful. What I was just reflecting on is that, actually, in the paper world, there are often multiple problem lists that don’t agree with each other at all. Now, sometimes that’s because one particular professional has specific issues in mind, like the nutritionist. Other times, though, and I recall this vividly from reviewing charts, the nursing list and the doctors’ list are simply different. I can tell you the nursing list, in my personal experience, is far more reliable, but that would be a topic for another day. And I don’t think what we’re trying to drive to is that nurses make problem lists. But okay, good.

Next slide. Okay, we’re on it. Sorry.

>> Ann Janikula:

Real quick, this is Ann Janikula, sitting in for Susan Postal. I just wanted to make a comment here that AHIMA’s doing some work in this area related to coding improvement and may be interested in discussing some ideas or recommendations for expanding this topic.

>> Carolyn Clancy:

Great. Thank you very much. So if we then move on to data aggregation: before you came, Janet, I was just reviewing for the Group the issue that came up in the discussion this morning, that we’ve got transparency in measures, and we have transparency in reporting, but the intervening steps look like a black box to many folks, or they’re not as clear as they could be, and there was discussion that the AG’s agreements in New York State might actually drive more transparency in that part.

So I think it’s fair to say that in the current state, there is limited aggregation and exchange of quality data across organizations, whether that’s patients moving to different points in a care continuum or patients seeing multiple clinicians or whatever. What data we have electronically, for the most part, is primarily claims, for which we already have interoperability standards. What we are imagining would happen in the mid-state is that we’ll see more aggregation and exchange of data across organizations (and I think the CARE instrument really is one example of setting the stage for that), that the data would be a mix of claims data and clinical data (so your point on the coding improvements, I thought, was very helpful, Kristine), and, frankly, that a lot of that increased aggregation and exchange would be driven by an increase in pay-for-performance, value-based purchasing programs, and so forth. And then, of course, the end state gets us to longitudinal data aggregation and so forth. Now, at the bottom here, again, the key players, I think, are a bit more health IT focused. I think NQF has got to be a driver here for data specs and the priority-setting process, and clearly the measure developers too, at least in interaction, if they’re not a driver. So I think they’ve got to be engaged.
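
As a toy illustration of the mid-state just described, the sketch below merges fabricated claims and clinical events into one longitudinal, patient-centered timeline; real aggregation would of course have to solve patient matching, coding, and governance first.

```python
# Merging claims and clinical events into one patient timeline.
# All sources, codes, and values are fabricated for illustration.
claims_events = [
    {"patient": "P1", "date": "2007-10-02", "source": "claims",
     "event": "office visit, CPT 99213"},
]
clinical_events = [
    {"patient": "P1", "date": "2007-10-02", "source": "ehr",
     "event": "HbA1c 7.2%"},
    {"patient": "P1", "date": "2007-11-15", "source": "ehr",
     "event": "HbA1c 6.9%"},
]

# ISO dates sort correctly as strings, giving a longitudinal view.
timeline = sorted(claims_events + clinical_events, key=lambda e: e["date"])
for e in timeline:
    print(e["date"], e["source"], "-", e["event"])
```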

Who else needs to be engaged here if we were to pursue a recommendation in this area? (Pause) Okay, well, moving to the next slide

>> Margaret VanAmringe:

This is Margaret. I’m wondering whether we need to put purchasers or insurers in here, because if you look at Medicare claims data and then you look at commercial claims data, I’m just wondering whether all of that is as interoperable as we would like. (Inaudible) can answer that.

>> Carolyn Clancy:

I would guess the answer to that question has to be “No.”

>> Margaret VanAmringe:

So I’m thinking if we’re ever going to be looking at, say, all payer data, then we might need others to play.

>> :

can’t use it when you do.

>> Richard Stephens:

Yeah, this is Rick. (Laugh) From an employer’s perspective, and we rely an awful lot on, you know, the insurers and those we use to process data to get a sense of what’s going on, I think having those groups involved will be important to make sure we get the data in the way that we want, even though the health care institutions may want to look at it another way, because we’re always looking to mine information about “What are the trends? What’s going on? What are the implications in terms of the behaviors we’re trying to work on with our employees?”

>> Carolyn Clancy:

You’re in. But I actually think that’s a very good point. And interoperability in this particular area only goes so far. I think the real challenges facing those who are trying to aggregate claims across multiple plans do not have anything to do with interoperability specifications. It’s internal coding specs and conventions that are making life so exciting and frustrating.

Next slide. So here we have teed up a number of potential action items. The first action item is to define various approaches to data aggregation and begin to articulate the issues and specific challenges involved in actually getting us to longitudinal quality measures. A second action item is to design and potentially conduct a scenario analysis so that we can understand some of the implications, strengths, and weaknesses of different approaches.

The next slide tees up the notion of specific and actionable options for a data aggregation strategy and talks about inventorying the various sources of data available for aggregation. We’ve been saying throughout our discussions that the future of quality measurement is going to involve drawing data from multiple sources, and the idea here would be to get some clarity on the uses of other data sources we aren’t necessarily thinking about, to be clearer about the benefits and challenges of using claims data for quality measurement and reporting, and to identify other existing data sources that would relate to longitudinal quality measurement. That is a mouthful.

>> Janet Corrigan:

Yeah, and you know, it does make you think, when I think of the other data sources (there was some discussion of this earlier today), of the registries, the outpatient registries. And you might want to think about getting some of the specialty societies and registry owners involved in this as well, very early on, because they frankly have some of the richest, you know, true quality data in them, if you can figure out how to mine them.

>> Carolyn Clancy:

Yeah, and I recall when something came up about registries under another topic, there seemed to be some confusion in the room about what was involved. So clarification is almost never a bad idea.

Potential Action Item 4 here would be to collaborate with the high-value health care project that’s now being led by Brookings. In order to aggregate data from multiple plans, instead of collecting that data, assembling it, and synthesizing the results (so that if I, Dr. Clancy, had my performance evaluated, there would be data derived from the multiple plans with whom I contract or who contract with me), what Brookings is doing is creating an algorithm that the plans use, and then the plans will submit information or reports rather than actual data. This has the potential to protect privacy, because the data aren’t quite so mobile; they’re staying with the plan, which presumably already has lots of safeguards in place for protecting personal health information and so forth. Presumably, although I don’t think anyone’s fleshed this out just yet, it also has the downside of making it take longer to get to the notion of clinicians getting feedback in anything approaching real time, or proximal to when the care was provided, so they can get better informed as a result of the measurement enterprise.
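
A rough sketch of the distributed model described above: each plan runs the same measure algorithm against its own data and submits only summary counts, and the aggregator combines numerators and denominators without the underlying records ever moving. All data and the measure itself are fabricated for illustration.

```python
# Federated measure computation: plans run a shared algorithm locally and
# share only summary counts. Data and measure logic are fabricated.

def measure_report(plan_records):
    """The shared algorithm each plan runs locally: eligible patients form
    the denominator; those meeting the measure form the numerator."""
    denominator = [r for r in plan_records if r["eligible"]]
    numerator = [r for r in denominator if r["met_measure"]]
    return {"numerator": len(numerator), "denominator": len(denominator)}

# Each plan computes its report locally; only these counts are shared.
plan_a = measure_report([{"eligible": True, "met_measure": True},
                         {"eligible": True, "met_measure": False}])
plan_b = measure_report([{"eligible": True, "met_measure": True},
                         {"eligible": False, "met_measure": False}])

num = plan_a["numerator"] + plan_b["numerator"]
den = plan_a["denominator"] + plan_b["denominator"]
print(f"Aggregate performance: {num}/{den} = {num/den:.0%}")
```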

My only brief comment here was that this is just getting started. I don’t know about the timeliness... oh, good.

>> :

Is there somebody already set to evaluate it as a part of the grant?

>> Carolyn Clancy:

Not that I know about. Well no, I don’t think they’ve gotten that far.

>> :

Okay. I didn’t know. Okay.

>> Carolyn Clancy:

And then, on the next slide, the potential fifth action relates to collaborating with the NCVHS Workgroup that we heard about from Justine. This gets into the implications of the locus of identification and de-identification in data aggregation, identifying the infrastructure necessary to support aggregation of longitudinal patient data, and considering some of the applicability and lessons learned within the NHIN trial implementations.
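
As one hedged illustration of the “locus of de-identification” question: if identifiers are replaced with a keyed hash before records leave the source, the same patient yields the same pseudonym everywhere, so longitudinal linkage remains possible downstream without exposing the raw identifier. The key name and scheme below are hypothetical; key management and re-identification risk are the hard parts.

```python
# De-identifying at the source with a keyed hash so downstream aggregation
# can still link records longitudinally. Key and ID are hypothetical.
import hashlib
import hmac

SITE_KEY = b"shared-secret-key"   # hypothetical; key management is the hard part

def pseudonym(patient_id: str) -> str:
    return hmac.new(SITE_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

# The same patient yields the same pseudonym at every source, enabling
# aggregation of longitudinal data without the raw identifier.
print(pseudonym("MRN-000123"))
print(pseudonym("MRN-000123") == pseudonym("MRN-000123"))  # True
```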

Comments on this one before we move on? (Pause) Okay. Next slide.

>> Kelly Cronin:

And also

>> Carolyn Clancy:

Yes.

>> Kelly Cronin:

A caveat there. I think when we’re talking about the NHIN trial implementations, there are several places in the country that might be doing this, just like the BQI pilots. We already advanced the recommendation on evaluating the BQI pilots, which CMS will be doing through the QIO contract, but I think this might be an opportunity to look at how data come from a lot of disparate sources. So beyond the BQI pilots, in other places in the country where health information exchange might not be as secure as it is in Indiana or Massachusetts, how are they making it work from a lot of disparate sources? There could be the concept of the health databank being integrated, or perhaps even information from a registry. I think we probably need to look at how they plan on conducting their work over the next year to know where all the data sources are coming from. But there could be a couple of different concepts tied in within this one.

>> Carolyn Clancy:

Okay, that’s very helpful.

>> Justine Carr:

And Carolyn, it’s Justine. So is this something I should bring back to the Executive Subcommittee?

>> Carolyn Clancy:

Well, I think what we’re going to do, Justine, is just keep walking through these. And the short answer is, yes, you ought to bring it back to the Executive Subcommittee, but in terms of where we go with it from here, I don’t want to make a normative statement about how much we’re committed to coordinating at this meeting.

>> Justine Carr:

Okay. Yeah.

>> Carolyn Clancy:

So the quality dataset, which gets back to the questions I was interrogating Mike with just a few minutes ago, refers to CMS’s demo that we heard about with the CARE instrument, and it also references the fact that the Joint Commission is working to establish a minimum dataset for exchange across care settings. Margaret, can you say a few words about that?

>> Margaret VanAmringe:

I’m sorry; someone just distracted me, Carolyn. Say a few words about what?

>> Carolyn Clancy:

The fact that the Joint Commission’s working to establish a minimum dataset for exchange across care settings.

>> Margaret VanAmringe:

Yes, definitely. We had a Workgroup earlier this year that was very interested in this specific topic. And of course, they looked at some of the other efforts that were under way, such as the CCR and so forth, to see what was in there. After some evaluation, they came up with a group of about 12 different data elements that the Joint Commission Workgroup and technical advisors felt were the absolute minimum elements that needed to be in an electronic health record that was going to share information about the discharge of a patient. For example, the meds, obviously the med status, were among those elements. And so we did forward those on to various people who are working in this area as what we thought were very, very critical elements. But it’s a very small list, because we thought that if we started with a small list, it would apply to every different health care entity out there that was discharging patients, as opposed to, like, a 200-item set.

>> Carolyn Clancy:

Thank you. And just following along from our roadmap, you can see that in the mid-state, we’re envisioning that minimum datasets, or quality datasets (and I want to learn to say “quality datasets” instead of “minimum,” because with “minimum” I always think of nursing homes immediately), are established for a variety of care settings.

And I don’t think the point here is to limit the amount of information, but in any pragmatic, practical way, you’re not going to ship every single piece of data or item of information that’s been collected, say, after a hospital discharge or any period of care; there’s going to be a selected subset. And then, obviously, the end state gets us to a longitudinal, patient-centric quality dataset. We’re not going to get there tomorrow. In terms of players enabling movement

>> Charlene Underwood:

Carolyn, just before you go on, I had a question on the last report.

>> Carolyn Clancy:

Yes.

>> Charlene Underwood:

In the JCAHO work, did you look at the HITEP work at all, or were those joined in any way, to see if there was any sweet spot between the two?

>> Margaret VanAmringe:

We did, and there are definitely some sweet spots, definitely. But again, our goal was to say, “If we have to start in kind of a more succinct manner with the vendors in terms of electronic health records, because they’re not going to be able to do everything all at once, what would be the most important elements to start with?”

>> Charlene Underwood:

And the reason I’m asking is, we’re paralleling this environment in the IHE domain, where there’s a patient care coordination (PCC) committee, and we’re working to define what that document looks like. It would seem to have to include your 12 elements. And at the same time, there are conversations happening with the quality domain to make sure there’s synergy between the document and the data elements being defined to support quality. So there are some parallel structures going on, and we could probably work to kind of connect these dots a little bit. And then it comes back to the discussion we had earlier: the minimum dataset for care. So there are, like, five dots we could connect.

>> Margaret VanAmringe:

Exactly. Well, I think we sent those out; I’m just not sure who Laurie sent them out to, whether, Michelle, you got them or someone else got them, but I can send them out again to everybody.

>> Charlene Underwood:

That’d be great.

>> Carolyn Clancy:

That’d be good.

>> Charlene Underwood:

All right. Do you participate in the IHE patient care coordination work?

>> Margaret VanAmringe:

No, we don’t. Well, I mean, we’ve certainly looked at what they were doing, but we’re not a participant.

>> Charlene Underwood:

That would be good stuff to do.

>> Margaret VanAmringe:

Okay.

>> Justine Carr:

Be able to

>> Janet Corrigan:

And this is Janet. I wonder if there’s any overlap between our HITEP committee, the PCC, and the Joint Commission’s group. It seems to me it would be really wise, in the very near term, to get a few people from each of these together to at least take all the data elements that were identified by HITEP, see if they’re in the PCC’s set, see if they’re in the Joint Commission’s, and check whether there’s consistency of definitions for those data elements. It would really be worth doing sooner rather than later.

>> Charlene Underwood:

Yes. Again, this is the kind of thing I mean: the vendors are working on this discharge output now. If we could bring a few of these efforts together, we would get more value.

>> Margaret VanAmringe:

That would be very nice. Yeah, absolutely. I think we disseminated widely to many of those folks, but I think it’s worth another go-round to make sure.

>> Charlene Underwood:

It’s just that sometimes they go to different people, because, like, we’ve got the nursing people looking at JCAHO, and the physicians are looking at the discharge work. You know what I’m saying?

>> Margaret VanAmringe:

I certainly do know what you’re saying. I do agree.

>> Kelly Cronin:

And Charlene, this is Kelly. It’d be helpful if we could put what the IHE process is in context with the other documents we’ve been working on for the measure specification process, just so we’re clear on how you guys are feeding into all that.

>> Charlene Underwood:

Well, it’s not exactly holding hands. Again, as you think of how things flow: the use cases are defined, HITSP defines the standard, and then IHE (and again, there’s some duplicate work that happens here) is the means that takes what’s been defined as a standard and builds the implementation guide. So they’ll say, “Okay, we’re going to use whatever the standard is, and this is exactly how we’ll specify it,” so that when the vendors get it and say, “Okay, I’m going to generate a discharge summary,” the IHE process, which we certify to, will guarantee that when you receive it, it will be the same. That’s what that process is about. So it’s the one that, for the vendors, actually gives you the plug-and-play capability.

So that’s kind of how it fits. We’ve got to be in all those domains. And I can’t say these are the most optimal processes, but they are what they are right now.

>> Carolyn Clancy:

Well, I think the overarching point for me is that we’ve got some very, very important dots to connect and some very important work.

So turning to the next slide, the specific recommendation here is to develop and implement a quality dataset, and I think we’ve literally just heard in real time about some very important dots to connect. I don’t think we need to get into the details here, because what we’re trying to get to is some consensus about where to move forward.

And the next slide basically says that what comes out of that quality dataset work would be fed into the certification commission. (Pause) So I am hearing a fair amount of excitement about that. This is probably a keeper. (Laugh)

Turning to the next slide, Patient and Provider Entity Records Matching: there’s no clear standard approach or convention for doing all of this. There’s some narrative here about the National Provider Identifier, and I really, really do not want to have a discussion about the National Provider Identifier, except to say that it’s been very challenging. Our vision, of course, suggests that we would be smarter about that in the mid- and longer term.

Next slide. So the specific recommendation says that we have an opportunity to understand some of these issues within the context of several of the NHIN trial implementations.

Comments or questions?

>> Kristine Martin-Anderson:

I just want to raise something that came up when the AHIC Quality Workgroup staff presented to the NCVHS Quality Workgroup staff and team. One of the things they raised on this one, and I don’t know if it puts some context into this, is that there are a lot of technical approaches, and those technical approaches are known. There are just some policy issues here: whoever tackles this really is tackling a policy issue.

>> Kelly Cronin:

There’s a level of risk you’re willing to take for sensitivity or specificity.

>> Kristine Martin-Anderson:

And right, that whole tradeoff.

>> Carolyn Clancy:

So is that technical, or is that policy?

>> Kristine Martin-Anderson:

I’m saying that, technically, there are all different ways to match, but deciding which match is best really comes down to agreement on how much error you’re willing to accept in the match; that, in turn, determines how data hungry it will be to do the matching. There are a whole variety of approaches already in place from folks who have made that tradeoff for themselves. But deciding what’s the right way to do it, because you do want some consistency, requires that debate about

>> Kelly Cronin:

And should that be a debate and a governance decision at a local level, or should it be something that we try to tackle at a more national level?

>> Carolyn Clancy:

Well, this strikes me as one where we probably need some additional information. It’s pretty critical. And again, just thinking back to this New York agreement with plans, you’d like to know that they’re all using the same approach. After all, the one thing doctors go completely bonkers about is that one plan says that they’re Dr. Superb and another says they’re Dr. Death, in quality terms. So

>> Janet Corrigan:

The other thing I would want to know is, for the variability in approaches that are out there, how much difference do they result in, in terms of false positives or negatives? And even where you have the greatest amount, is it going to make any difference in terms of getting comparable quality and performance data?

>> Margaret VanAmringe:

You know what? That’s what I was going to say, too; I think that’s the important part. Different organizations can use different methods, but as long as everybody knows the amount of error associated with those different methods, in both specificity and sensitivity, then at least people understand the value of the data. And I certainly agree with Kelly (I think it was you, Kelly, who said it; I’m not sure) that one of the things that is very important, too, is the data hunger, because that gets to the issues of privacy. Often, the better specificity you want, the more data you actually need, and sometimes it gets down to needing so much information that you’re needing patient-identifiable information. So there are a lot of things that go with this, too.
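
The tradeoff Kristine, Kelly, and Margaret are circling, match error versus “data hunger,” can be shown with a minimal sketch of weighted, field-by-field record matching. The fields, weights, and threshold below are illustrative assumptions, not any group’s endorsed method.

    # Minimal sketch of threshold-based record matching. Each agreeing field
    # adds weight; a candidate pair is declared a match above a threshold.
    # Using more fields (more "data hunger") raises specificity but requires
    # moving more, possibly patient-identifiable, data.

    FIELD_WEIGHTS = {          # illustrative weights, not a validated model
        "last_name": 4.0,
        "first_name": 2.0,
        "birth_date": 5.0,
        "zip_code": 1.5,
        "sex": 0.5,
    }

    def match_score(rec_a: dict, rec_b: dict, fields=FIELD_WEIGHTS) -> float:
        """Sum the weights of the fields on which the two records agree."""
        return sum(w for f, w in fields.items()
                   if rec_a.get(f) and rec_a.get(f) == rec_b.get(f))

    def is_match(rec_a: dict, rec_b: dict, threshold: float = 9.0) -> bool:
        # A lower threshold tolerates more false positives (links different
        # patients); a higher one yields more false negatives (splits one
        # patient's record). Choosing it is the policy decision under discussion.
        return match_score(rec_a, rec_b) >= threshold

    a = {"last_name": "Smith", "first_name": "Ann", "birth_date": "1950-01-02",
         "zip_code": "02118", "sex": "F"}
    b = {"last_name": "Smith", "first_name": "Anne", "birth_date": "1950-01-02",
         "zip_code": "02118", "sex": "F"}
    print(match_score(a, b), is_match(a, b))  # 11.0 True: a match despite the name variant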

>> Janet Corrigan:

I think this is an issue. I’m just not sure it’s ours (laugh) maybe somebody else’s issue.

>> Margaret VanAmringe:

(Laugh) It would be nice to give something away.

>> Carolyn Clancy:

To be honest, these are just minor details, but in terms of the letter that goes to the Secretary for the community, we could identify this as an issue. And you’re right: one big question is how important this is. Some of this reminds me of efforts some years back to make sure we were all measuring infant mortality the same way. It turned out we actually weren’t, but it also turned out that once you really, really got precise, the U.S. numbers didn’t change at all. (Laugh) We were more precisely not doing well.

So I’m going to try to move relatively quickly through some of these. And what I’m actually going to do is move past clinical decision support, because I think we’ll have more information to bring back to you at the next meeting, once we have touched base with others. This is not to say we’re not interested in it, just that we’re not there yet.

If you could get to slide 20, then: this just refers to data stewardship. And although we’ve heard a lot about this, and we look forward to coordinating with the NCVHS Quality Workgroup’s continued efforts on uses of data, we don’t have a specific recommendation here.

The next slide, on 21, talks about a legal framework for data sharing and refers to the Health Information Security and Privacy Collaboration. This multiple-state effort, 33 states and Puerto Rico, actually examined how both business practices and state laws and policies might impede health information exchange that could be used for quality and so forth. And I don’t think we have a specific recommendation here, either; is that correct?

The next slide talks about patient record de-identification. We’ve talked about why that is important, and I think Justine’s presentation really teed this up in a very nice way. We don’t have a specific recommendation here.

The next slide talks about measure set evolution. And this (there’s a lot of work to do here, to put it mildly) really refers to the fact that right now, at best, we’ve got setting-specific measures, which is not to be sneezed at. It’s taken a long time to put those into place and to put a rigorous process in place. And with new leadership at NQF, things are actually moving: the NQF has begun to set the stage for setting priorities for what we’re doing in measurement, which we pretty desperately need, and to begin to articulate and flesh out a framework for three key conditions, where we look at longitudinal measurement. We’d like to imagine that, as we move forward in time, this longitudinal, patient-centric approach continues to evolve and that we see more and more of it. Don’t you like how I just (laugh) really crystallized a very elaborate process? And ultimately, by the end state, that’s all we talk about.

We don’t have a specific recommendation here. I do think there may be some opportunity to think, probably more offline, between the work going on at NQF and the NHIN trial implementations, but I think we can move on.

>> Charlene Underwood:

I know you’ve been thinking about this minimum quality dataset, or whatever it is. It’s feeling like that’s a kernel of success here, and the reason I say that is twofold. One, it would be a piece that would start to balance the impact on the workflow against the measures we have to capture. And two, from an implementation perspective, the more you can carve out that minimum dataset, the less dependent you become on each of the vendors, each of the people implementing this, understanding the nuances of the measure development process. So I know you’ve said it a couple of times, and I don’t know quite where to link it into your recommendations, but it seems to carry a kernel of something that might help us.

>> Carolyn Clancy:

Well, actually, I think the comment is very timely. The very last slide (I’ll come back in 1 second, Charlene) talks about incentives; again, we took a very broad view in our roadmap. It describes the current state as a lot of innovation but, I’d say, at relatively small scale, in terms of transitioning to a place where we actually pay for results rather than per unit of service, and it ultimately envisions that substantial payment change or reform will be implemented coincident with our having a nationwide health information infrastructure and so forth. I think we all believe that. I don’t see a specific recommendation.

So now I want to cycle back to your notion, Charlene, because it seems to me when I look at the areas and I’m just whipping through slides here; I’m going to give the person on the Web a little break here for a moment the areas where we certainly heard a lot of excitement are

>> :

Data elements.

>> Carolyn Clancy:

data element standardization. And we may need to refine that a bit in terms of what the precise recommendation is. But I think my goal for today would be to get us to focus on a few key areas, again keeping the time frame, this next year, in mind. The second area would be what is labeled coding improvements, which gets back to this issue of a problem list. I think it also touches on the point that George Isham made when he was talking about data stewardship: it’s not just all about EHRs. It’s actually about how you draw from both administrative data, because you’re going to need that for date and time information, and clinical data. And I also think it begins to help us set the stage for, how would I say this, changing the mindset from one where it’s either paper or claims, or it’s the electronic health record. We’ve got to have an evolution, and I don’t think that’s very clear to people right at the moment. And then the other area, I think, is the quality dataset.

Let me ask: those three areas, does that make sense to people? Just to say them again. One is coding improvements, which includes the issue of a problem list that was teed up by HITEP. For those of you who aren’t clinicians, let me just comment briefly that it’s not that people don’t make problem lists; they just keep them in their heads and places like that, so this would really get at the heart and soul of some documentation issues. The second is data element standardization, and the third is a quality dataset, or one or more quality datasets. And the cool thing about that, as you pointed out, Charlene, when you teed up a conversation about it, is that there are several dots to connect there that I think would be incredibly helpful.

If we were to tee up those three areas as areas where we think we’re going to focus our work for the next year, are we missing anything that’s going to give anyone enormous heartburn? (Pause) I think that might be called a leading question. Let me frame that (inaudible) anything else that strikes people as incredibly important?

>> Charlene Underwood:

The only point I want to make, and this is around the problem list and/or the coding piece: I don’t know where the discussion of moving from ICD-9 to ICD-10 stands, and maybe the JCAHO folks know this, but I know we’re trying to move away from that coding infrastructure to SNOMED and other means. It feels like there’s an interim space someplace here where we might have to rely on a better coding infrastructure than we can accomplish today with ICD-9. So it may be irrelevant, but maybe you can look at that and research it.

>> Carolyn Clancy:

You know, I think, at a bare minimum, Charlene, and I’m really glad you brought it up, we need to bring back to this Group the current state of wisdom, for lack of a better descriptor, on how fast we can move, when it’s likely to happen, and so on and so forth. The last time I was involved in a conversation about this was about 3 years ago, and I vowed never to get near it again, but (laugh)

>> :

Carolyn, NCVHS is holding a hearing on this on January 29, and it’s linked to the 5010 transaction standard. And it is frightening, in a way, that there are so many linkages; at the moment, the implementation of ICD-10 looks to be a couple of years out, still.

>> Charlene Underwood:

Yeah, and, I mean, from the vendors’ side, we’ve got to gear up to do that. And if we’re thinking about doing it for the financial side, then what’s its relationship to the clinical side? It just needs to be considered in the conversation.

>> Margaret VanAmringe:

I agree with that. I think it’s a very sticky area, though, this whole area of coding. And I’m glad that NCVHS is having a hearing. But I think there are a lot of other issues that go with it: not just how fast we can move, but what we are moving to when we get there.

>> :

Yeah. Right.

>> Carolyn Clancy:

Well, we are going to lean very heavily on NCVHS here and are thrilled that you’re doing a hearing about this. I mean, I have actually heard payers say that they can’t get there until 2014 or beyond. But regardless, it’s a very important part of the context of our work going forward.

>> Kelly Cronin:

Yeah, I have a suggestion for us to think about. Since we’ve spent a fair amount of time as a Workgroup trying to piece together all of the different components of the roadmap that we think are needed to make all of this happen, and yet we may focus on just a couple of key areas for recommendations, it may be helpful not to lose all of the important pieces in whatever we end up drafting in a letter of recommendation, and to somehow figure out, even if there’s not a specific action with a specific home, how to hand it off in such a way that it’s known that these things have to happen for this vision to be realized. So for example, if we end up being much more data oriented and coding oriented, trying to make sure things are standardized over the next several years with specific actions in the next year, I think we also need to be mindful of the fact that the data have to be mobile and have to be aggregated. If you can’t get data across settings of care, we’re never going to get there. Right now, we’re so claims oriented, and we can’t share data across settings of care; it’s a major barrier.

So I think it would be very helpful not only for the Department to be contemplating these things; there are some essential ingredients that have to happen to realize value-based health care. It’s very helpful for the Secretary, AHIC, and the Department to understand what those critical things are that they have to enable.

>> Janet Corrigan:

I think that’s a great point. I wonder if there isn’t a fourth element here, a fourth critical thing that we ought to be focusing on in the coming year, and it relates closely to the quality dataset. Even as we move toward the quality dataset, you’ve got to take the quality measures that we’ve got and respecify them, retool them, or we’re not going to have anything that’s really actionable. For example, for those measures that have narrative descriptions of their numerators and denominators, the measure owners, the stewards, have to go back and rewrite those in SNOMED terminology or something they can actually bump up against the data. And I think it’s critical that we do that, so that at the end of the year, we not only have a quality dataset (I didn’t really want to use the term minimum, but at that point, it would be a minimum quality dataset); we also have a minimum set of quality measures that would run off of that dataset, if it existed. And that requires work on the measure side.

The other thing that’s important about doing both of those is this: although we’ve gone through the HITEP process of tracing out data elements, and a lot of good thinking went into specifying the CARE tool and the Joint Commission’s work as we begin to move toward that quality dataset, until we go back and bump it up against the measures and say, “Yes, we can rewrite these measures to run off of it,” we’re not really going to know whether we got the right data in there (laugh). I suspect a lot is going to come out in that process. So we should close the loop: go back, respecify the measures, and see if everything fits, so we have a package that we can put out.
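
A minimal sketch of Janet’s “close the loop” step: a measure whose numerator and denominator are respecified as coded criteria, then checked against the data elements a candidate quality dataset actually carries. All codes and field names here are hypothetical placeholders, not vetted value sets or anyone’s published specification.

    # Sketch: a retooled measure spec expressed as data, plus a check of
    # whether a candidate quality dataset carries the elements the measure
    # needs. Codes and field names are hypothetical placeholders.

    MEASURE_BETA_BLOCKER_POST_MI = {
        "id": "hypothetical-bb-post-mi",
        "denominator": {                       # narrative turned into coded criteria
            "condition_codes": {"SNOMED:MI-PLACEHOLDER"},
            "exclusion_codes": {"SNOMED:BB-CONTRAINDICATION-PLACEHOLDER"},
            "required_elements": {"patient_id", "condition_codes", "discharge_date"},
        },
        "numerator": {
            "medication_codes": {"RXNORM:BETA-BLOCKER-PLACEHOLDER"},
            "required_elements": {"medications_at_discharge"},
        },
    }

    def dataset_supports(measure: dict, dataset_elements: set) -> list:
        """Return the data elements the measure needs but the dataset lacks."""
        needed = (measure["denominator"]["required_elements"]
                  | measure["numerator"]["required_elements"])
        return sorted(needed - dataset_elements)

    quality_dataset = {"patient_id", "discharge_date", "condition_codes"}
    print(dataset_supports(MEASURE_BETA_BLOCKER_POST_MI, quality_dataset))
    # ['medications_at_discharge']  -> a gap caught before the dataset is set in stone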

>> Carolyn Clancy:

Well, on one level, getting started there, I think clarifying and, how would I say this, broadcasting, sharing the lessons from the AMA PCPI, NCQA, and sometimes CMS collaborative would be very helpful. I think we also have some other grant projects in place that just got started this year that will also be helpful there. But you’re absolutely right: it’s great to draw this terrific diagram.

So one level, as I’m hearing from you, Kelly, is actually clarifying in English (which would be tricky) literally what some of the data problems are. And before we put people to sleep (I know in my family, I just have to say “data” and people run for the doors (laugh); you don’t have to put that in the minutes (laugh)), and before we make people think, “Oh my God, this is not what I’m thinking about. I’m thinking about better care. I thought that’s why you were here,” we should succinctly summarize what some of the issues are.

And on some level, coming back to this agreement, or series of agreements, in New York State, I think that makes it very clear for people, because you’ve got to have transparency all along the process. They don’t need to know the details, but they should know that we could hand off a sort of to-do list that is going to need to be part of the continued evolution of the environment. And I have to say I was very struck this morning, Janet, by our conversation about where endorsement leaves off and that kind of work begins, because we don’t really have an entity in charge. We have many who are playing there, but none actually authorized. And that, in turn, is clearly very tightly linked, I think, to the concept of data stewardship. So Justine, we will continue to be in close communication.

>> Jerry Osheroff:

Carolyn, it’s Jerry Osheroff. Can I ask a question about the quality dataset item? It’s kind of on the opposite end of the spectrum of what Janet was just talking about.

>> Carolyn Clancy:

Sure.

>> Jerry Osheroff:

So she was sort of talking about the micro-implications of what exactly is going on with that dataset. And I’d like to raise a question that has come up in a bunch of settings recently, including the CDS meeting this week, and that is the notion of looking at some sort of dashboard of what all these needles are, what the quality metrics are that we’re trying to move on a national basis. We sort of have some of that in Hospital Compare and other settings like that, but there isn’t really a set of high-priority national dials that everybody’s looking at, with those dials linked back to the quality improvement efforts.

So I guess my question is, is it within the scope of this quality dataset item to be thinking about perhaps linking into the NQF priority partnership on some of the items that were on the later slide: identifying what some of these major needles are that we’re trying to move, and using that as something that we hold up in front of ourselves and in front of the rest of the health care system as something we’re trying to move toward, to help make sure that all these individual things that we’re doing are helping to close those most critical gaps?

>> Carolyn Clancy:

Janet.

>> Janet Corrigan:

Hear, hear. Couldn’t agree more. Really, really critical. I think what we have now is a dashboard, but we’re not clear that we’re measuring the high-priority things, because it kind of developed from the ground up, and that has certain advantages. But now we have the Priorities Partners effort; they’re meeting for the first time in January. Their second meeting is in March, when we’ll have an initial, quote, “dashboard of high-priority areas and metrics.” We would hope that by the spring, May or June, we will have been able to digest that and get it out for public comment. So it’s going to move at a fairly rapid clip.

Now, having said that, it’s not going to be exhaustive. We are focused very much on a limited number of very high-leverage, high-priority areas that will then quickly translate into a set of metrics, but we want to move through the process. We view it as an ongoing platform, so every couple of years, we’ll be updating it. There will be plenty of opportunities to put other ones on. We’re going to start very practical, with a small set, and try to get the process going.

>> Jerry Osheroff:

So that’s very exciting, and that kind of raises the question: What is the role of this Group or of any group in terms of coordinating what it is that those metrics are showing in terms of where we are, and how to close the gaps between there and where we want to be? What role does this Group, you know, have, for example, in that gap-closing activity?

>> Carolyn Clancy:

Well, let me deconstruct that just a little bit. One question, of course, will be, are the priorities that are selected the right ones? That’s number one. Number two, imagine we can flash forward and make it instantaneously happen: people are collecting data, even longitudinally, and all that. The ultimate question I think everyone wants to know is, does this actually impact health? Or have we just created a new enterprise, where we’re still spending as much on health care and getting about the same amount of value, which is not as much as we want by a long stretch, but we’re paying clinicians and hospital administrators plus data collectors more? And I’m being hyperbolic just for emphasis. What it does tee up for me is that AHRQ, by law, submits to the Congress an annual report on quality and disparities, and this year’s is going to be our fifth report, which gives us a time to reflect and a commitment to thinking about how we need to reengineer it for the future. There’s a real series of dots to connect there that I think would be incredibly exciting.

>> Janet Corrigan:

You know, the other thing the issue raises for me, and I’m sure everybody’s thought about this, is that we’re going to have to keep redoing certain things every year or two. As soon as these priorities come out, we need to reconvene. We will quickly translate and populate the priorities with measures; that’s the next step. And we’re going to find out that, in some cases, we don’t have the measures we need or want, and, in some cases, we will. So we’ll get that measure development and endorsement activity going. But we’re going to need to reconvene the HITEP to look at those measures and trace them down, to take a look at whether what we have in the quality dataset at that point will support the measures. We’re virtually certain that it won’t, because they’re going to come up with more longitudinal, complex measures than what we’ve probably anticipated. Maybe we’ll get lucky and most of it will be there, but I think we have to assume that we won’t, so we’ve got to go through these steps again. And it’ll help us then look at the quality dataset. Hopefully some of this will happen soon enough, before the quality dataset is set in stone.

But I think we will move to a higher level of understanding of what we mean by longitudinal quality measurement and assessment, because that’s where that Group is going. They’ll be targeting the key areas where we haven’t had a whole lot of measure development and experience. So it’ll advance our measures, which in turn will challenge what the quality dataset has in it. And it may be a fairly quick feedback loop that could really influence the quality dataset, I would hope, given the timing.

>> Kelly Cronin:

Yeah, and getting back to how we describe things in the letter: it might be helpful to say, if that’s an overarching goal, that this prioritization process is going to be not only informing all the measure development and endorsement activities, but really should be driving all the infrastructure development and all our processes around that over the next several years. If that fundamental point can be made, and then the mechanisms worked in through our recommendations, that would be helpful, just to make it a little more obvious.

>> Carolyn Clancy:

Yup. I think this is very exciting, but for those of you who don’t live and breathe quality measurement every day, it sometimes feels like we’re talking about so many challenges. How could we ever get this right? I did just want to take a moment and note that very recently, there was an article in the New England Journal of Medicine called “Eulogy for a Quality Measure.” The measure was the proportion of people who’d had a heart attack and were on beta blockers, among the people who didn’t have any reason not to be on that drug, and performance was so consistently high that it was no longer a meaningful measure. Now, I remember hearing Janet Corrigan a number of years ago, when she worked for another organization, saying that that was going to be part of the strategy. Psychologically it’s been a little difficult, because nobody wants to lose measures where they’re doing really well. But nonetheless, performance was so high and so consistently high that it’s gone. Now, that’s the good news. The bad news is, it took us 25 years to get there from the clinical trial. (Laugh) But the point is, I think, that we are making progress. So I didn’t want to leave us in such a dreary frame of mind, particularly with the holidays coming up. (Laugh)

So just recapping here: we’ve managed to focus our inquiry in a number of very important areas: the coding improvements, including the problem list; the standardization of data elements; and, to quote Charlene, the quality dataset, or sets, which feels like an opportunity for a kernel of success. We’ve talked a lot about the need to think about what it’s going to take to retool some of these measures, and I think that is going to have to stay in our workplan. And thankfully, when someone brought up the issue of the conversion from ICD-9 to ICD-10, man, was I excited to hear that NCVHS is having a hearing in late January, so thank you for that.

So Rick, let me just ask: See anything?

>> Richard Stephens:

I’m sorry, Carolyn. I missed the last part of your sentence.

>> Carolyn Clancy:

I say, are we missing anything? Do you want to add anything? Does this sound like a reasonable approach to you?

>> Richard Stephens:

I think it sounds like a reasonable approach. I’ve been working like mad trying to keep up with all the dialogue, so it’ll be good to see the notes. But I think we’re heading down the right path.

>> Carolyn Clancy:

Great. And actually, on the comments about keeping up: if there’s one thing I’ve become more acutely conscious of lately, it’s that people in Quality Land, I guess like people in Aviation Land or anyplace else, do get locked into lingo. And as we think about presenting to the full community, we’re going to have to be as clear as possible. I have to confess I, too, often rely on in-the-moment inspiration to remind me of what it is that they need to remember, like “quality measure developers don’t think about a data source,” that kind of thing.

The use case priorities: to be quite honest, I’m not clear what my script is; my battery went dead. I’ll turn it over to Kelly.

>> Kelly Cronin:

I think all of you received some email communication about this. We are going through our next round of priority setting, this time for 2009. We’ve contemplated a lot of different aspects through our requirements analysis over the last 6 months or so and in our recent Workgroup meetings, and we’ve really been focused on two major areas: enabling longitudinal measures, and, touching on it as well, CDS.

So in thinking, at least across staff, through what we might contemplate as a Workgroup for potential priorities, and maybe some sub-elements under those priorities, and in describing what aspects of these things are particularly important for standardization and interoperability, we thought that we could at least start with these two priority areas and then try to drill down within them to point out some of the more important considerations that we’ve been talking about. So it would be helpful to get your feedback on whether those two are the general categories you think we should be advancing as priorities, or whether there are others. And then it’d also be helpful to talk briefly about some of the things to concentrate on within those two priority areas.

Yeah, so let me offer some ideas on sub-elements for longitudinal measurement; this really builds off our conversation today. We talked about the need to do data aggregation across multiple providers and also over time, so it has to be able to pull up historical data when necessary, and obviously health information exchange would be a component of this, to be able to share data across settings of care over time. Standardization of data is obviously relevant and consistent with what we’ve talked about already. The hybrid approach, where we’d be merging data from disparate sources, most likely merging both claims and clinical data in the near term, would be another aspect of this. And then there’s figuring out specific aspects of record linking, or matching, for both patients and providers. So those are four potential aspects of longitudinal measurement that we could be specifying in advancing this.
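
As a minimal sketch of the hybrid approach Kelly lists: merging claims and clinical events for one patient into a single longitudinal timeline, assuming record matching has already resolved both sources to the same person. The field names, event shapes, and codes are assumptions for illustration only.

    # Sketch of the "hybrid approach": fold claims events (good for dates of
    # service across settings) and clinical events (richer detail) into one
    # longitudinal, patient-centric timeline.
    from datetime import date

    claims_events = [
        {"patient_id": "p1", "date": date(2007, 3, 1), "source": "claims",
         "event": "inpatient discharge", "code": "ICD9-PLACEHOLDER"},
    ]
    clinical_events = [
        {"patient_id": "p1", "date": date(2007, 3, 10), "source": "ehr",
         "event": "follow-up visit", "code": "SNOMED-PLACEHOLDER"},
    ]

    def longitudinal_record(patient_id: str, *sources) -> list:
        """Merge one patient's events from disparate sources, ordered in time."""
        merged = [e for src in sources for e in src if e["patient_id"] == patient_id]
        return sorted(merged, key=lambda e: e["date"])

    for e in longitudinal_record("p1", claims_events, clinical_events):
        print(e["date"], e["source"], e["event"])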

>> Carolyn Clancy:

Well, I’d have to say that right now, the concept of longitudinal measurement feels just fabulous and wonderful, but it actually gets hard to envision. I mean, there’s been a lot of terrific work that NQF has done looking at some aspects of this, but in pragmatic terms, when we bounded the discussion to a year, what’s the first thing somebody says? “Oh, no, no, no, it needs to be longer than that.” And I get that, but it does seem to me that we’re so far from that right now that this might be a very useful area to explore in the context of the implementations. What does this look like? What is the data capacity that you need to be able to do that? How difficult is it?

>> Kelly Cronin:

Right. Yeah, and the thing to remember, too, is that this is for 2009. So let’s say that use cases get developed over the next year and advanced to the Health IT Standards Panel. That would also potentially bring up another round of consideration for HITEP and for the other organizations that we know are involved in the standardization process: HITSP, CCHIT, and the next rounds of NHIN trial implementation. So it’s the whole spectrum of our activities having to do with vendors and interoperability. By the time this would be advanced, hopefully we’d have a little more thinking around what the set of applicable longitudinal measures might be, even if we’re not crystal clear on that today.

>> Janet Corrigan:

Kelly, I think it’s worth being very practical here, and even thinking about, quote, longitudinal measures that run from a hospital episode to 60 days after. (Inaudible)

>> :

We don’t have it.

>> Janet Corrigan:

Yeah, how do we take a few small steps before or after the hospitalization, and is that the beginning of trying to go longitudinal? (Inaudible) We know tremendously bad things happen (laugh) at those points, and to get some data on that would certainly be a big step forward.

>> Carolyn Clancy:

Well, you know, the other reason I think that would be incredibly helpful, even if they could move it up (laugh), and I don’t know if there’s that opportunity, is that Hospital Compare is now posting 30-day mortality rates for people who’ve had a heart attack or heart failure. This is for the Medicare patients only, because they’ve got the claims to do that.

Now, on one level, that, I think, is something to celebrate, and pneumonia is coming soon. It actually focuses people’s attention on the ultimate endgame that we all care about. At the same time, it immediately reveals that kind of black box in terms of what the handoff is from the hospital to wherever the patient went, and so forth. So I like that idea a lot.
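
Computationally, Janet’s “hospital episode to 60 days after” and the Hospital Compare 30-day measures reduce to windowed queries over a linked timeline. A minimal sketch under the same assumed event shapes as the merge example above; the window is a parameter, so 30- and 60-day variants are the same query.

    # Sketch: a windowed post-discharge query over a longitudinal timeline,
    # e.g., "what happened within 30 days of discharge." Event shapes follow
    # the illustrative merge sketch above.
    from datetime import timedelta

    def events_within(timeline: list, anchor_event: str, days: int) -> list:
        """Events occurring within `days` after the first `anchor_event`."""
        anchors = [e for e in timeline if e["event"] == anchor_event]
        if not anchors:
            return []
        start = anchors[0]["date"]
        return [e for e in timeline
                if start < e["date"] <= start + timedelta(days=days)]

    # e.g., events_within(timeline, "inpatient discharge", 30) surfaces what
    # happened in the handoff "black box" after the hospital stay.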

>> Janet Corrigan:

We actually have something to build off of in terms of (inaudible) and everything else.

>> Carolyn Clancy:

Yeah. Well, you’ve got people’s attention and engagement, because it’s up.

>> Kelly Cronin:

Right. Okay, and then, for CDS: obviously we haven’t had as much of an opportunity to talk about it extensively as a Workgroup, but one concept I think we have touched on is the development of a longitudinal record, so that whether it’s an electronic health record or whatever tool might be used at the point of care, this information would be available. You’d have specific information on patient characteristics and could make use of clinical decision support functionality at the point of care. So the concept of having the data available on the patient at the point of care, to enable the CDS, would be one aspect of this to consider.

And then obviously, there are a lot of workflow issues around integration of CDS into EHRs. I think other workgroups have been contemplating a lot of different kinds of applications that are out there now; not all of them are part of an interoperable EHR. Certainly, there are CPOE systems that have CDS, and different applications in the hospital environment in particular, that may not be categorized as a complete EHR. So integration of various types of applications would be another aspect of this.
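
To Kelly’s point that the same point-of-care data can drive CDS, here is a minimal sketch of a single decision support rule running off the kind of patient record discussed above, echoing the post-MI beta-blocker example Carolyn mentioned earlier. The field names, codes, and the rule itself are illustrative assumptions, not clinical guidance.

    # Sketch: one CDS rule evaluated at the point of care against the same
    # quality dataset used for measurement. Fields and codes are placeholders.
    from typing import Optional

    def beta_blocker_reminder(patient: dict) -> Optional[str]:
        """Fire a reminder if a post-MI patient has no beta blocker and no
        documented contraindication. Returns None when no action is needed."""
        had_mi = "MI-PLACEHOLDER" in patient.get("condition_codes", set())
        on_bb = "BETA-BLOCKER-PLACEHOLDER" in patient.get("medication_codes", set())
        contraindicated = "BB-CONTRAINDICATION-PLACEHOLDER" in patient.get(
            "exclusion_codes", set())
        if had_mi and not on_bb and not contraindicated:
            return "Consider beta blocker: post-MI patient, no contraindication."
        return None

    # Note the coupling Janet asks about next: the eligibility logic (post-MI,
    # no contraindication) is exactly the measure's denominator; CDS adds only
    # the point-of-care action.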

>> Janet Corrigan:

For those who’ve thought about this a lot more than I have: do we have any idea how much the hypothetical quality dataset (I guess it’s sort of hypothetical at this point) would have to expand to be able to satisfy the CDS needs associated with it (inaudible)? Clearly, what’s needed for the measures is needed for clinical decision support, but do you need more for clinical decision support than you do for the measures? Would it expand by 20 percent, and how so? (Inaudible) figure that out.

>> Kelly Cronin:

Yeah. So we want to so (inaudible)

>> Carolyn Clancy:

Yeah, we need to bring that to the meeting, because I think that’s exactly right. Clearly, it’s about tradeoffs and an incremental approach: how much are we missing, what would it take to expand the dataset, and so forth.

>> Jerry Osheroff:

That’s one of the benefits (this is Jerry Osheroff) of having some agreement about what the top-priority targets are that we’re going after: we can use those targets to answer those questions. Laying out the workflow, the data requirements, and so on, focused on those specific targets, can be a very powerful tool for answering Janet’s question, not just about the standards related to patient data, but about the standards related to the clinical decision support interventions themselves, those interoperability standards.

>> Janet Corrigan:

Let me say, my bias, and I’ll state it right up front, is this: if we have to make tradeoffs in terms of the data elements in that set and how expansive it is, my bias would be to address fewer conditions and clinical areas, but to make sure we have the data to support both measurement and clinical decision support. Otherwise, we really have this problem of not showing enough improvement for all of our quality measurement and reporting. And I think the best way we have to address that is to couple it with the clinical decision support, because we’re getting the data out there in some areas, but we’re not giving the providers the tools and information they need to make the (inaudible) improvement.

>> Jerry Osheroff:

Well, amen to that. I mean, that’s exactly the point I was trying to make earlier. So I think coupling the measurement and improvement arms of this thing is what it’s going to take to show that we’re moving the needle in the way that everybody’s hoping and expecting it will move.

>> Carolyn Clancy:

Are we ready for public comment?

>> Kelly Cronin:

I think so. If anyone else has any other ideas they want to contribute, please get back to us by email so that, as staff writes this up, we capture your thoughts.

>> Carolyn Clancy:

Does anyone want to make a public comment?

>> Alison:

Ryan, can you open up the line for the public?

>> Chris:

Yes. Just as a reminder for those who’ve been following along on the Web, the phone number is up on the screen to make a public comment right now. Once you get in the queue, just press star-1 to ask a question. And for those of you who have been listening on the phone, all you need to do is press star-1 to get into the queue. If you want to make any final wrap-up comments while we’re waiting, go ahead.

>> Carolyn Clancy:

Well, let me just say, I thought this was really a terrific meeting today. So thank you all for your participation and engagement. We get a summary back, what, in a couple of weeks?

>> Kelly Cronin:

About 3 weeks or so.

>> Carolyn Clancy:

About 3 weeks. So I hope that you’re feeling charged up; get some relaxation over the holidays, and then we’ll hit the ground running in 2008. I think that we’ve really got some concrete places to focus our attention.

I was asking about the timing of the summary because I think it would be great if we could get it around to people relatively early in January. It would also give us a chance, I think, to refine these specific areas in ways that make sense in plain English, more so than they certainly did emanating from my vocal cords today. But that’s what the holidays will help with, I think.

>> Richard Stephens:

Make it your New Year’s resolution.

>> Carolyn Clancy:

That’s right. Anything you want to add to that, Rick?

>> Richard Stephens:

No, Carolyn, I think you’ve got it. I think it’s a lot of great work, and now getting it packaged so we can look at it quickly and then sort out the game plan going forward in 2008. I hope we all rest well over the holidays.

>> Carolyn Clancy:

Perfect.

>> Alison:

And we don’t have any public comments.

>> Carolyn Clancy:

Terrific. Well, happy holidays, everyone, and again, remember, we’re going to be hitting the ground running in 2008. Take care.

>> :

Thank you, everybody.