Skip Navigation

American Health Information Community

Quality Workgroup Meeting #12

Wednesday, October 3, 2007

Disclaimer

The views expressed in written conference materials or publications and by speakers and moderators at HHS-sponsored conferences do not necessarily reflect the official policies of HHS; nor does mention of trade names, commercial practices, or organizations imply endorsement by the U.S. Government.

>> Judy Sparrow:

Welcome everybody to the 12th meeting of the Quality Workgroup. Just a reminder, we're operating under the auspices of FACA, which means the meeting is being Webcast and a transcript will be available. And also, a reminder to the Workgroup members to please speak clearly and distinctly and identify yourselves before you speak so we can properly identify you in the transcript. With that, I think I will ask Jennifer to introduce those members on the telephone and then we'll go around the room here.

>> Jennifer Macellaro:

Sure, Judy. We have a number of people on the phone today. Rick Stephens from the Boeing Company. Phyllis Torda from NCQA. Tammy Czarnecki from the VA. Susan Postal from HCA. Jerry Osheroff from Thomson Healthcare. Barry Straube from CMS. Steven Wojcik in for Helen Darling from the National Business Group on Health. Anne Easton from OPM. Pam French from the Boeing Company. Kristin Brinner is on the phone, from ONC. Doug Rosendale from the VA. Beth Feldpush from the American Hospital Association. Jonathan Teich from Harvard. Margaret VanAmringe from the Joint Commission. Jane Metzger from First Consulting Group. Marty Rice from CMS. Mike Kaszynski from the OPM. Lee Jones from GSI Health. Tom Murray from the American Medical Association. David Parker from Indian Health Service. And Dan Rosenthal from the National Quality Forum. Did I miss anyone on the phone?

>> George Isham:

George Isham, HealthPartners.

>> Jennifer Macellaro:

I'm sorry, okay.

>> Charlene Underwood:

Charlene Underwood, Siemens Medical Solutions.

>> Judy Sparrow:

Great, we've got a full house. And here in the room we have to my right --

>> Justine Carr:

Dr. Justine Carr, Beth Israel Deaconess Medical Center.

>> Michelle Murray:

Michelle Murray, ONC.

>> Kristine Martin Anderson:

Kristine Martin Anderson, Booz Allen Hamilton.

>> Carolyn Clancy:

Carolyn Clancy from AHRQ.

>> Greg Downing:

Greg Downing, Office of the Secretary.

>> Judy Sparrow:

Great. And we have a very full agenda and with that we'll turn it over to the co-chairs, Dr. Clancy and Mr. Stephens.

>> Carolyn Clancy:

Good afternoon everyone, and welcome. And I'm glad we have enthusiastic attendance because we have very important issues to review and to determine some next steps for our course of action. So I just want to remind us very, very briefly and succinctly that we are making good progress on the specific charge of our Workgroup and in fact, Dan Rosenthal, I might put you on the spot a little bit later but not right this second to tell us about the most recent part of that work that NQF is doing. And we're now continuing our discussion but I think feeling some urgency about coming to some specificity and next steps to look at the broader aspects of our charge.

Now, the broader aspects of our charge speak to aligning the capabilities of health IT with quality measurement, which, in my shorthand world, means both enterprises have to move and converge more towards each other so that we get to a place where, as George Isham once said, the quality and the health IT people actually know each other and work together on a regular basis, as opposed to being two somewhat isolated communities in a number of instances. So Rick, let me just ask if you have anything to add to that?

>> Rick Stephens:

Carolyn, thank you. I don’t have anything additional to add. I think you’ve hit on the key points. It is about aligning and integrating our activities so we press forward and get the results we're all looking for. Thank you.

>> Carolyn Clancy:

We'll come back to that very critical issue of alignment a little bit later. Rick keeps reminding me about the issue of not just kind of technically can we figure out how to do this but what are the right structures and processes to get broad buy-in across stakeholders in health care. And he's completely right.

If I could just have someone move to accept the meeting summary. I should ask if there are any issues with the minutes from the last meeting. Could I have a motion to accept the minutes, then?

>> Jonathan Teich:

Jonathan Teich.

>> Carolyn Clancy:

Thank you. I'll second it. So we will accept the minutes into the record.

This afternoon, between now and about 2:00, 2:10 we're going to hear first from Justine Carr, who is from Beth Israel Hospital in Boston, and she's going to give us an overview of what an ad hoc workgroup of the National Committee on Vital and Health Statistics has been doing by way of secondary uses of data. They've been looking at quality pretty explicitly and what I think is very important, getting back to the word alignment again, is that we've been working a lot to make sure our efforts are complementary rather than reinventing the wheel.

We're then going to hear from Greg Downing, who I've had the pleasure of working with quite a bit over the past year and a half, about what's happening with the Personalized Healthcare Workgroup, because the one area where they converge with us or overlap, intersect with us, relates to our stated goal of making sure that deploying health IT for the purposes of reporting on quality is not just about driving faster using the rearview mirror but actually is linked with the kind of decision support capability that can be used to give clinicians and patients information in real-time, so that they do well to begin with rather than simply making it easier to make a good report card. Clearly, clinical decision support is a really incredibly important set or group of applications that -- whose usefulness is not limited to quality. So I think we'll very much enjoy hearing from Greg.

Then we're going to hear from Lee Jones from the Health IT Standards Panel on clinical care document interoperability specifications. And I'm going to ask you to stay tuned to get into the specifics there. After those three presentations, we're then going to shift back to our requirements analysis and try to come to some closure today about areas where we need more in-depth work. There are many, many people and organizations across the country looking at the issues and challenges that we're trying to solve. There's a whole lot of movement, politically and about every other way you can imagine. Our challenge is to identify those key actions that I think are going to be indispensable to moving forward regardless of the kind of array of the players around us. So without further ado I'm going to turn it to Justine.

>> Justine Carr:

Thank you, Carolyn, good afternoon, and thank you for the opportunity to present today. So I'm here as a member of the National Committee on Vital and Health Statistics. As you may know, NCVHS's Work Group is still deeply immersed in integrating the excellent testimony that we have received. Today I will present a status report, as our final recommendations await ongoing deliberations taking place over the next four weeks. I want to thank Margaret Amatayakul, who has helped prepare these slides. However, I will also apologize to Margaret because in the interest of time I'm going to move very quickly through the background slides and spend more time on the final slides of where we are today. Next slide, please.

For over half a century, the National Committee -- let's see. There we go -- the National Committee on Vital and Health Statistics has been a statutory public advisory body to the Secretary of Health and Human Services on health data, health statistics, privacy, and national health information policy. NCVHS has a reputation for open, collaborative processes and also for the ability to deliver timely, thoughtful, and practical recommendations. Next slide, please.

The work that I will present today has been the product of an NCVHS ad hoc Work Group chaired by Simon Cohn, along with Harry Reynolds and myself, and many of the members are well-known to AHIC, including Mark Overhage, Mark Rothstein, Bill Scanlon, Paul Tang, and Kevin Vigilante. Next slide, please.

As you look at this outline, note that I will focus on the latter topics. Why the uses of health data are an issue at this time. What HIPAA covers and does not cover. The emerging and evolving concept of stewardship. And then finally, spend some time on the analytic framework that we're developing to ensure that issues are identified and addressed. Next slide, please.

The assistance of NCVHS was solicited by ONC; in particular, the request was to develop an initial set of recommendations to clarify allowable and appropriate approaches for the use of data for quality measurement and reporting, and to address areas where there is a lack of clarity in current constructs. Next slide, please.

Although the focus of ONC was quality, NCVHS elected to first develop a conceptual policy framework that provides guiding principles balancing risk, sensitivity, benefits, obligations, and protections of various uses of health data, and then to develop recommendations for HHS to facilitate the use of health data for advancing the quality of the nation's health and health care delivery system, while respecting the privacy of the individuals who are the sources of this data. Next slide, please.

So in terms of premises, I think an initial observation was that health information technology should not be the focus unto itself, but rather should be viewed as an enabler, facilitating the optimal use of health data for the common good while ensuring and protecting privacy. Next slide.

This summer was a marathon of education for the Work Group. Several of us participated in the AMIA conference in June and thereafter many public hearings and meetings were held with input from 58 testifiers representing key stakeholder groups. Next slide.

NCVHS has drawn upon the excellent work of AHIC, AHRQ, HISPC, and many others as part of this evaluation. In addition, we reviewed NCVHS documents. NCVHS has a long history of addressing many of these same issues in recommendations to the Secretary concerning privacy, NHIN, population-based data collection, and quality, and also in an annual report to Congress on HIPAA. Next slide, please.

So very early on, members of the Work Group raised a key question about the term secondary use. It was observed that the term has variable meanings; that it potentially implied that all secondary uses could be viewed in aggregate and treated as one; and that it also potentially implied that secondary uses were somehow less important than primary uses. Therefore, NCVHS is avoiding the term secondary and attempting instead to explicitly and uniquely describe each use of health data. Next slide.

A key question is why the issue of uses of health data has garnered so much attention over the last year. Simply put, health information technology affords great opportunity for improving quality of health care through rapid cycle improvement and responsiveness, and with this opportunity comes the challenge of ensuring protection of privacy, preventing harm, and maintaining trust in the health care system. Next slide.

So what has changed? One, electronic health data is expanding from claims data to more detailed clinical elements such as lab results and medications. Two, the power of data that can be linked over time is being recognized. Three, sources of electronic health information are expanding, and now include entities that are not covered by HIPAA. And finally, electronic solutions that can protect data are becoming available, including role-based access, mechanisms to ensure data integrity, signature authentication, and individual consent that can follow data. Next slide.
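
By way of illustration only, here is a minimal sketch of one of those protections, role-based access. The roles, resources, and permission table below are invented examples, not anything drawn from the testimony.

# Illustrative sketch of role-based access control for health data.
# The roles and permissions here are hypothetical examples, not a
# standard or a recommendation from the Work Group.

ALLOWED = {
    ("treating_clinician", "clinical_record"): {"read", "write"},
    ("quality_analyst", "clinical_record"): {"read"},
    ("billing_clerk", "claims_record"): {"read", "write"},
}

def may_access(role: str, resource: str, action: str) -> bool:
    """Return True if the role is permitted to take the action."""
    return action in ALLOWED.get((role, resource), set())

assert may_access("treating_clinician", "clinical_record", "write")
assert not may_access("billing_clerk", "clinical_record", "read")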

Our analysis began with a review of HIPAA. HIPAA's focus was the promotion of electronic exchange of data for administrative simplification. Therefore, HIPAA regulates entities that electronically transmit health information, including health payers, providers, and clearinghouses. HIPAA also regulates business associates and their agents. As mentioned previously, a key concern is the fact that there is a growing number of entities that are not covered by HIPAA, for example, vendors of personal health records. Another concern is the lack of detail on the expectations of business associates and their agents with regard to ongoing uses of health information. Next slide.

As part of HIPAA, Congress required HHS to adopt regulations safeguarding the privacy of individually identifiable health information. This is called the HIPAA privacy rule. It covers protected health information: individually identifiable health information in any form -- paper, electronic, or oral -- which is held or transmitted by a covered entity. The privacy rule requires authorization for disclosure of protected health information except for uses related to treatment, payment, or health care operations, known as TPO, or when disclosure is required by law or for public health. Health care operations is defined broadly and includes activities such as quality assessment, competency review, payment processes, compliance activities, business planning, and general administrative activities. Some testifiers have asked whether TPO needs greater definition or specificity. Also notable is that de-identified data is not protected under HIPAA, and may be used in any manner, including sale for marketing or other purposes. Next slide.
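
Purely for illustration, the disclosure logic just described can be sketched as follows. This is a drastic simplification of the privacy rule; the purpose labels are assumptions for the sketch, not regulatory categories.

# Drastically simplified sketch of the disclosure logic described
# above (illustrative only; the actual Privacy Rule has many more
# conditions and exceptions).

TPO = {"treatment", "payment", "health_care_operations"}

def authorization_required(is_phi: bool, purpose: str,
                           required_by_law: bool = False,
                           public_health: bool = False) -> bool:
    # De-identified data is not protected under HIPAA.
    if not is_phi:
        return False
    # TPO, legally required, and public health disclosures need no authorization.
    if purpose in TPO or required_by_law or public_health:
        return False
    # Everything else requires the individual's authorization.
    return True

print(authorization_required(True, "marketing"))              # True
print(authorization_required(True, "health_care_operations")) # False
print(authorization_required(False, "marketing"))             # False (de-identified)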

Data stewardship. In addition to the privacy rule, there is also a HIPAA security rule that creates standards for the security of electronic protected health information, ensuring its confidentiality, integrity, and availability. Together, the HIPAA privacy and security rules created an initial set of stewardship principles. Well, what is stewardship? According to Merriam-Webster, stewardship is the careful and responsible management of something entrusted to one's care. So when an individual provides private health information to a provider, the individual expects that it is held confidential, safeguarded, and used in certain ways. AMIA has defined health data stewardship as encompassing the responsibilities and accountabilities associated with managing, collecting, viewing, storing, sharing, disclosing, and otherwise making use of personal health information. Next slide.

So this is the slide -- thank you for bearing with me for this marathon. This is the slide I wanted to spend more time on, that helps you understand how we are thinking about pulling all this testimony and these framing documents together. So this is where we are today. On this slide you see a draft of our analytic framework that will shape much of the discussion over the coming weeks as we prepare our recommendations. As we discuss the various potential uses of data, we look first of all, on the top line, at user status and then at data status. Is it a covered entity or not a covered entity? Is it protected health information or is it de-identified data? And what is the intended use? Then we frame this as data stewardship, in terms of how we do the analysis and what approaches we would take. And I'm going to say in advance that as I look at this with fresh eyes, I see that some of it appears cryptic and, you know, still a work in progress. This is testimony to the fact that this is a work in progress. But if you look under analysis, as we think about the different uses we think about the intended benefits and societal benefits, and also the potential for harm. And we look at what Federal or State regulation and enforcement is currently in place. And we are also looking for best practices, particularly in these areas that are gray zones. We've heard testimony from many institutions that have really come up with some excellent ways to address these gray areas.
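
To make the axes of the draft framework concrete, a minimal data-structure sketch follows; the field names paraphrase the slide, and the example values are invented.

# Minimal sketch of the draft analytic framework's axes (field
# names paraphrase the slide; the example values are invented).

from dataclasses import dataclass

@dataclass
class DataUseScenario:
    user_status: str    # "covered_entity" or "non_covered_entity"
    data_status: str    # "protected_health_information" or "de_identified"
    intended_use: str   # e.g., "quality_measurement", "research"

    def analysis_questions(self):
        return [
            f"What are the intended and societal benefits of {self.intended_use}?",
            "What is the potential for harm?",
            "What Federal or State regulation and enforcement applies?",
            "Are there best practices for this gray area?",
        ]

scenario = DataUseScenario("covered_entity",
                           "protected_health_information",
                           "quality_measurement")
for q in scenario.analysis_questions():
    print(q)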

On the bottom you see approaches, and this is where it is kind of jumbled together, but what I want to point out is this: what we are particularly thinking about in the secondary uses with regard to quality. For one, you can see on the right, operations and quality: we have heard interest from many testifiers in having a clearer definition of quality activities and their scope under TPO. A second issue is differentiating quality and research, and addressing privacy issues when quality evolves into research. A third area is data aggregation oversight within quality, within TPO, understanding sophisticated uses of data and data integrity. A fourth area is record identification and de-identification, and what that means for matching patients with their records for longitudinal analysis. Next slide.

So we will be working vigorously over the next four months -- I'm sorry, next four weeks -- to develop recommendations, and they will likely include recommendations for legislation, regulation, guidance, and also for further study. There are some issues that need further clarification, and it's possible that some of the ongoing demonstration projects may provide an excellent laboratory to study some of these issues. Further, the NCVHS Quality Work Group has expressed an interest in holding hearings on issues and best practices relating to clarifying the boundaries between quality improvement and research. So I think with this, I will stop, and I'm happy to take questions or comments.

>> Carolyn Clancy:

Well, thank you, very much, Justine. That was a lot of information. And I really have to applaud your effort to lose, and I'll say almost lose, the term secondary uses because I've been dying to ask someone where did that definition come from, and who said it was secondary. Let me turn to people in the room, or on the phone who want to make comments.

>> George Isham:

I have a comment when you're ready.

>> Carolyn Clancy:

Ready right now, George.

>> George Isham:

George Isham, HealthPartners. I also applaud the presentation and effort that you're making. And I was going to make a comment about secondary use of data as well, along the lines of Carolyn's. I think you probably should be stronger in your statement of not only avoiding but actually forbidding the use of it. I feel very strongly about that. It's a pejorative term that takes a point of view not only in terms of one aspect of the use of data, but I think increasingly it takes a pejorative point of view with respect to, you know, the contemporary issues around patient management and quality improvement.

And let me explain that a little bit more. I think we often, when we get to the national discussion of quality, are looking at issues about what data we can collect and have collected through efforts in the health plan and hospital sectors, and what AQA is currently doing. And that represents what's agreeable to do today with today's technology of care. But as we look forward, I think increasingly coordination of care across sites of care and across organizations is identified as a critical quality of care and contemporary care issue. Because in this country we have increasingly no limitation on where people will get care under their various payment arrangements, they can get care in multiple organizations and multiple payer sites, and we're, I think, quite aware of the quality problems and dangers to individual patients of not knowing where all the prescriptions are being made, not knowing -- one physician not knowing what other treatments are being prescribed, and certainly the waste aspect of that is tremendous in terms of tests and so forth. So I really think it's becoming a primary tool of managing contemporary, modern patients well to make sure data is shareable across emergency rooms, office settings, you know, case management sites and so forth, without hindrance, and with the appropriate protections for, you know, patient-identified specific data. I just wanted to make that comment as clearly as I could, as strongly as I could, to point out that's increasingly a tool of modern quality management to match the modern American way of getting health care in many communities across this country.

>> Justine Carr:

Thanks very much. That resonates with the sense of the committee, but your strong language is appreciated and I will bring it back to the Work Group.

>> Carolyn Clancy:

I'm seeing a bumper sticker with secondary use on it and a big slash, something like that. Other comments from members of the Workgroup?

>> Rick Stephens:

This is Rick. I guess the question -- I guess I don't know enough about what NCVHS really does in terms of the details, and I would compliment you, Justine; this has been very helpful for me, particularly about secondary use and how it all ties in from a HIPAA standpoint. I guess the question that comes to my mind is: is NCVHS the organization that ought to be helping make sure we have standards in place across the entire health community that people can then plug into, with a common language and data dictionary, and then that secondary element about how we manage it and take care of it, keeping in mind our HIPAA requirements and data privacy? Or is NCVHS not the right organization to do that? Because from my limited view, that's where one of our gaps is: getting standards put in place.

>> Justine Carr:

Yes, thank you for clarifying that. I think our role is to hold hearings and to point out where the gaps are, and it's not just limited to HIPAA. And you know, I went carefully through HIPAA because I found it an excellent exercise for myself. I think the point is that we began this privacy and security work with HIPAA; that was the first pass. But the world has changed today, as you point out, and it goes beyond HIPAA. HIPAA applies if you exchange claims electronically, but when we think of all the places, from Websites to personal health records to health clubs, where health information is exchanged, we have a very huge task in front of us. And it is absolutely our intention to make clear -- and in fact we have in previous letters on privacy -- that HIPAA is good for what it does, but that there's a world beyond that that must be addressed. So thank you for bringing that forward.

>> Carolyn Clancy:

This is Carolyn. I just had a very basic question. I presume that there's a fairly precise legal meaning to the phrase chain of trust? Or is that --

>> Justine Carr:

No, actually, you know, I think that is -- what we're using it to mean here is: as we have data from the provider that is shared with -- or data from the covered entity that is then shared with business associates and then agents of business associates -- what is the accountability and awareness of what that data is, what is the use by the next group, and over time, where is that data going? I think that was the intent. I may be mistaken; I may share in your ignorance. But that is the context in which we're using it: that we need to have that trust, as the data traverses, that we understand the protections go with it.

>> Carolyn Clancy:

Very helpful. Thank you. And I think I had presumed or was projecting a very specific legal meaning. Mostly because I hear it from my colleagues in the Department who are lawyers.

>> Justine Carr:

I'm going to take that back and get clarification and circle back with you.

>> Carolyn Clancy:

Let me just also say, by way of data stewardship in the quality community, particularly within the AQA: George Isham and others have been really on the forefront and kind of ahead of the curve in anticipating some of the issues inherent in data aggregation and stewardship for the purposes of reporting on and improving quality of care. So I think you're aware that we published -- we, AHRQ, on behalf of AQA -- published an RFI. Jon White isn't here today, not just because he doesn't like the traffic coming out of here late in the afternoon, but also because he's very busy going over summaries of the responses. So we're going to be presenting that to the full AQA in a couple of weeks, on the 18th.

>> Justine Carr:

And I should have mentioned -- we've been working closely with Jon; he's been a part of the committee. He did a preliminary presentation when the RFA was going out -- RFI -- and also when he had some preliminary findings back, and we heard some preliminary impressions, but we look forward to hearing more about it, for sure.

>> Carolyn Clancy:

Great.

>> Phyllis Torda:

Carolyn, this is Phyllis Torda from NCQA, and I was going to comment on that point as well, as one of the organizations that responded to the RFI, along with the National Quality Forum and the Joint Commission. I think it's great that NCVHS is looking at privacy and security issues as part of data stewardship, but as I think their RFI communicated, just to be clear, the concept of data stewardship goes beyond privacy and security issues.

>> Justine Carr:

Absolutely right.

>> Carolyn Clancy:

Without question.

>> Justine Carr:

Absolutely. And again, I apologize if that seemed to be overlooked. We've spent a great deal of time discussing, and actually reviewing, some very interesting materials from the Hastings report about oversight of data and the sophistication and understanding of its use. So it's very much a part of our discussions.

>> Phyllis Torda:

Thank you.

>> Carolyn Clancy:

I just have to say, we funded some of that work, and NCQA and others are very well represented there. But you want to be careful -- they're very passionate, because they uncovered a lot of issues that are important to our deliberations.

>> Justine Carr:

Absolutely right. It's guidance; we're just covering the entire landscape.

>> Carolyn Clancy:

Great. I just wanted to note that Janet Corrigan has joined us as well, in person here, and I'm going to turn now to Greg Downing.

>> Greg Downing:

Thank you, Carolyn. I appreciate the opportunity to be here in front of the Quality Workgroup today. The Personalized Healthcare Workgroup is, among the ONC and AHIC family, the new kid on the block from about a year ago, and briefly I'd just like to tell you a little bit about that. One of the four areas that AHIC asked us to work on in our charge was the aspect of clinical decision support, and the title of this talk is really about understanding that role; that's where we've really been focused. Also on the call today, I believe, are Jerry Osheroff and Jonathan Teich, who represented a good deal of the work on the CDS roadmap of about a year ago and have been a key component of our activities within our Workgroup. I appreciate their joining the call today and invite them to jump in as needed during this presentation.

I believe we were asked to report on what our findings were, as they do, I think, contribute somewhat to the efforts of improving quality of health care. And in parallel with the efforts of the CDS roadmap, I think there are a lot of similarities in terms of where our Workgroup activities are going -- in terms of the timing and the early stages of where the medicine and science and art really come into play in health care delivery.

The PHC Workgroup sprang out of the Secretary's initiative in personalized health care, which in the short term is focused on patient-centric measures in health that are aimed at understanding, at a molecular level, the differences in patient responses to therapy and risks for particular diseases. In a nutshell, it's really building on the advances in the understanding of disease mechanisms and targeted therapies, and the interface of that with the emergence of health IT and bioinformatics overall. So we have a number of Workgroup activities overall in the Secretary's initiative, and the Workgroup with AHIC is one of those areas that we've been working on over the last year or so.

Our charge is really developing the standards and capability to bring interoperable and clinically useful genetic information and analytical tools into electronic health records, principally to support clinical decision-making for both the clinician and the consumer. And I will say that in this space we've been spending a fair amount of time focused on the PHR as well, and I think that will become evident as we go through these presentations.

Our Workgroup representation is fairly broad. The co-chairs are Doug Henley, who is a member of AHIC, as most of you know, and John Glaser from Harvard Partners HealthCare, who is coincidentally helping coordinate some of the activities with ONC and AHIC on clinical decision support, and both of these leaders have been very contributory in terms of their time and in identifying communities that we should reach out to. There are also a substantial number of senior advisors from Federal agencies, the VA and DOD, and our partnerships with them in terms of background information [inaudible].

Primarily, the vision and priorities area of our Workgroup is really focused on customization of diagnostic, treatment, and management plans for health care and health quality. The four perspectives that we identified about a year ago, and our perspectives overall for the initiative, were consumer-based, clinician-based, researcher, and health plans and health payers, and all of these have really contributed to the development of an image that's emerging in the health care system of what this notion of personalized medicine is about. The four priority areas are listed here. For genetic and genomic tests -- basically the standards for the incorporation of these medical tests into electronic health records -- we've been working with the standards organizations, and for both this and the family health history area we have developed recommendations for use case development that are now under way and were accepted back in -- at the July meeting of AHIC. These activities continue in terms of developing common datasets for the family history elements, and there are a fair number of family history clinical decision support tools that are not currently incorporated into electronic health records but are Web-based, and have been looked at by a number of health plans and other consumer-based organizations as important tools to facilitate the consumers' interface with the health care system. We also have charges to address confidentiality, privacy, and security areas that are unique to genetic and family history information, and we've been collaborating with the CPS Workgroup over the last six months or so on looking at the unique aspects of these pieces of information overall. And then finally there is the clinical decision support area. When we were asked by AHIC specifically to address this area at our initial meeting, I sort of sat there with a stone face, not knowing exactly what it was that we were being asked to do, but agreed wholeheartedly that it sounded like an important endeavor to engage in.

So we took about six months to really embrace what the CDS report meant in terms of our specific initiative, and we asked the important questions of our Workgroup along the way and tried to identify some unique aspects of this. Much of personalized medicine really is focused on the interactions between the health care system and a consumer: how can consumers assume more initiative in terms of the choice of therapies and the options that are presented to them, particularly for disease prevention and chronic disease management strategies? Another feature is that genetic information is inherently complex; it takes a fair amount of understanding of the science, and consumers recognize, through what occurs in family members, that genetic predispositions to disease can be an important predictor of some of the dilemmas that they themselves may face, so improving the quality of that information is a key facet for both providers and consumers alike. There is, as many of you know, a great deal of interest since the completion of the Human Genome Project in making some of this more relevant to medical care, and there are a number of technologies in this area coming into the marketplace, both through medical systems and through direct-to-consumer marketing. Along with that there's a lot of data, but oftentimes not much knowledge about what to do with that data. One aspect of how clinical decision support tools are designed is to try to improve the quality of the information that comes to consumers as a result of using either Web-based resources or direct-to-consumer resources. As listed here, our genomic test and family history areas have use cases under development, and in the area of confidentiality, privacy, and security issues, the RTI report did also address some elements of clinical decision support tools, and the privacy issues in particular when utilizing (inaudible) that are consumer-based and may have specific concerns about them. Over the last four or five months we've been working with the Office of the National Coordinator, AHIC, and AHRQ to address common goals and applications of how CDS tools should be configured in the context of electronic health records for personalized health care.

On the next slide, page 7: we basically took an organized approach, knowing this is new territory for most of our Workgroup. In establishing the vision, we looked at some examples of types of CDS tools that were not currently connected with EHRs but were represented as bona fide tools that were helpful in this area, and these included risk assessment algorithms, predictive tools, and preventive messaging. These take the form either of reminders to particular physicians, clinicians, and consumers about tests that would be of benefit to them, or of more integrated approaches in which treatment options would be engaged. In our own staff work within the Personalized Healthcare Workgroup we started doing an environmental scan and looked at what companies and health plans themselves were developing; that was very useful in producing a summary report of various tools in their very elemental state of development, which we provided as background to our Workgroup. These fact-finding aspects were coordinated with the CDS roadmap team that John Glaser was heading up.

One point I just wanted to get to in more detail here relates to what we did with that information. Our Workgroup held a full-day meeting on September 17th that was moderated by Jonathan Teich and Jerry Osheroff. This was a very interactive day where we brought in a good number of both private and public entities to really address two areas: what CDS tools, services, and systems were currently being developed in this space, and then also, importantly, the evidence development to support these tools. This was a great concern for our group: that in this new, emerging area we identify the pathways so that it is not just anyone developing a tool to guide particular decision-making, but that there is substantial evidence incorporated with the tools, so that the consumer and the provider know the basis upon which the judgments or recommendations coming through these tools are being made.

One of the interesting things is that we focused on a case study, and this was in large part in response to our Workgroup members: a day prior to our last meeting, the FDA made an announcement about the labeling of a particular drug called warfarin, or Coumadin, which is an anti-coagulant commonly used in treating patients to prevent stroke and heart disease. This new finding, of utilizing a genetic test as part of the dosing algorithm for establishing the appropriate level of anti-coagulant therapy, is now being recommended, and many health plans had questions about how to integrate this as a quality management tool within their infrastructure. Since this involves one of the more commonly used drugs, and one associated with a high degree of adverse events in terms of bleeding and other associated morbidities and mortalities, this was an area that we highlighted, and we brought in a number of different entities that have been developing tools that couple the diagnostic technology to the electronic health record, along with the electronic tool to help in the dosing algorithms. So this was a useful model to display and sort of work through all of the workflow issues, the complicated aspects of who needs what types of information, and under which circumstances a decision tool should be used and other times when it should not. I think the conclusion of these sessions overall was that a lot of work needs to be done in terms of the evaluation of these tools, and in understanding the evidence that supports the recommendations that they come up with.
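
As a structural illustration of the kind of coupling described -- a genetic test result feeding a dosing prompt -- here is a minimal sketch. CYP2C9 and VKORC1 are the genes referenced in the warfarin relabeling, but the genotype categories, rules, and messages below are invented placeholders, illustrative of the workflow only and not clinical guidance.

# Structural sketch of a CDS hook that couples a pharmacogenomic
# test result to a warfarin dosing prompt. CYP2C9 and VKORC1 are
# the genes referenced in the FDA relabeling, but the categories,
# rules, and messages below are invented placeholders --
# illustrative of the workflow, NOT clinical guidance.

def warfarin_cds(order: dict, genomics: dict) -> str:
    if order.get("drug") != "warfarin":
        return ""  # this rule only fires on warfarin orders
    if "CYP2C9" not in genomics or "VKORC1" not in genomics:
        return ("Genotype not on file: consider ordering CYP2C9/VKORC1 "
                "testing before finalizing the starting dose.")
    if genomics["CYP2C9"] in {"*1/*3", "*3/*3"} or genomics["VKORC1"] == "A/A":
        return ("Genotype suggests increased warfarin sensitivity: "
                "consider a lower starting dose per current labeling.")
    return "No genotype-based dose adjustment flagged."

print(warfarin_cds({"drug": "warfarin"},
                   {"CYP2C9": "*1/*3", "VKORC1": "G/A"}))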

I won't spend much time on the assumptions that we came in with, other than the recognition that much of this information, if it's going to be used in electronic health records, will require some form of electronic support in terms of helping clinicians and providers understand what to do with it. And there is full recognition within the Workgroup itself that this is a very early stage of development, and that some organized approaches to developing tools and facilitating their development would be a benefit overall to the industry, the communities, health care delivery systems, and providers as well.

The next page lists some of the findings that came out of this. Many of the CDS developers have had funding or important interactions with AHRQ leading up to the development of their product or service. Patient engagement and inclusion of patient preferences in CDS configurations will be critical, to understand the full breadth of options that patients are willing to consider when given a recommendation for their health care; meaning there may very well be a component of the tools that addresses whether patients are willing or able to take certain recommendations that come out of the tools themselves. We felt that the primary applications were in primary care and ambulatory care settings, where a lot of the impact could potentially be made and where work directed in this area would be of higher value, since many hospital-based systems are already up and running, and this is also where the consumer interests would be based. The design of CDS tools to fit into a natural workflow is of high importance: in the areas where I work personally, the time and focus don't allow you to jump from one Website to another to integrate this, and therefore figuring out the pathways by which natural decision support within the patient-provider interface occurs will be an important aspect, and working alongside a number of the systems that are evaluating these will be important. Another area that was mentioned in the roadmap, and has been a source of a number of discussions within our Workgroup, is the potential use of standards or a national repository to maximize the efforts and widely deploy the best evidence and best practices for these tools, and to avoid duplication of effort by many systems or companies at the rudimentary level; the family history tool is a good example of that. Basically, at this point we're developing a white paper that aligns the specific areas of personalized healthcare needs with the three general pillars of the CDS roadmap, and we've been tasked by our chairs to develop that white paper for our next meeting in October, to further delineate back to AHIC and ONC the unique aspects that we've discovered within the areas of personalized healthcare.

So some potential areas that we're looking at for the Workgroup are: examining shared decision-making models for connecting consumers and clinicians through the electronic health record; examining CDS tools in chronic care and prevention settings, particularly around the use of family history tools, and the best practices for workflows in their development; how we develop individual variations in widely deployed health care systems for CDS tools; the use of the CDS roadmap to consider establishing a repository, as just mentioned; and examining how present and new knowledge can be exchanged in consistent ways, so that as updates to the decision tools are accommodated, the patient's information in the electronic health record is updated as well, to reflect those changes in either health policies or best practices that are being implemented through the tools. And there are some areas of early success that we see as potentially being interfaced with electronic health records. These are the family history tools area, breast cancer risk assessment, and newborn screening, where tools fully developed and certified by professional organizations already exist; looking at some early wins in terms of deploying these in pilot programs may be an angle that we further pursue. And then seeing how the CDS tools integrate with electronic health systems, and how the information gained from these early discovery components feeds into the use case development, are the primary achievements we'll be looking at over the next year.

So I'll be happy to answer any questions, or certainly invite you to direct questions to Jerry and Jon, as they've been very helpful to our particular Workgroup in looking at the intersections of CDS and personalized healthcare. Thank you.

>> Carolyn Clancy:

To quote someone I spoke with last week, that was a very high protein presentation. Thank you very, very much. Very substantive and I appreciate it, and it distilled a huge amount of work and collaboration. Before just opening this up to questions or comments, I'd like to ask Jerry or Jonathan if you had anything to add in particular.

>> Jerry Osheroff:

Sure, I'd be happy to just make a comment. First of all, I think the presentation very much summarizes the things that went on, and we were very happy to be there. I think that's a good model, and it may be that there are similar collaborative engagements with some of the other AHIC Workgroups which this one kind of cuts across, to sort of look at the particular space. I think we learned some new things about the personalized healthcare space that we can take back.

My two comments are that there's something new and something old. I think what's new is that there are clearly going to be standard best tools and best ways of showing personalized healthcare information within the context of a family doc and his or her electronic health record, and that as we learn these, it's probably worthwhile to find a way to share them. And it may be that the use cases we develop can reflect what's different about PHC-based CDS compared to other CDS.

And the something old is that there's a real need for simplicity. We need to translate the fabulous science that has been performed in genomics and proteomics for the average doctor on the frontlines. There are a lot of places where putting all the information in front of a doctor is going to lead to probably more heat than light, and we're sort of hoping to learn how we can actually fit the best things into the best workflow. Instead of trying for 100 percent of the answer and getting nothing, we try for 90 percent of the answer and get a lot of improved care. Those are the two big things I took back, I think.

>> Carolyn Clancy:

Thank you. And the leadership that you have been exerting in this particular issue with AMIA, and in multiple places, I think is exemplary so I want to thank you for that. Let me just say, taking the chair's prerogative for a moment, I was quite thrilled to hear you emphasize the issues of workflow and partly I've been re-inspired on this issue recently by the folks working with the guidelines collaborative in Minnesota, ICSI, where they're starting to focus on this issue quite a bit.

I guess the two bullets that caught my attention, because I think it's a tension for us moving forward, are the one that talks about design of CDS tools to fit into natural workflow and the other about best practices. What I'm less clear about is, do we actually need a vocabulary first, or a way to commonly describe workflow, a common language around workflow? Because I'm not sure we know best practices (inaudible). The other question is, are we designing for natural workflow now, or for what might be a better workflow? And I think that's going to be a tension moving forward.

>> Greg Downing:

My own perspective is this has to go hand-in-hand with the electronic health record systems. As I've seen it happen in many care settings now, the integration of a new system in itself really does change the workflow, who does certain activities, and how they are aligned and integrated. The consumer's role in that is changing as well. I think it would be helpful to have some description of what that future workflow might look like for a traditional primary care visit -- what it looks like in terms of how the patient comes in the door and back out the door into the parking lot, and where the key decision points are that might need assistance. That would be pretty useful. That's my take.

>> Jerry Osheroff:

Carolyn, this is Jerry. Can I jump in with a comment? Well, you're probably getting tired of hearing me say this, because I think I've brought it up in each of the last couple of Workgroup meetings. But what Greg just finished describing is in fact one of the Quality Workgroup's recommendations to AHIC: that is, to develop those things.

>> Carolyn Clancy:

Without question, yeah. No, and I appreciate your reminding us, because it's very easy to get distracted by what feel like technical, very solvable problems that, if addressed in isolation of workflow, won't take us very far at all.

>> Jerry Osheroff:

I would agree with what Greg was saying, that there is going to be workflow change in important ways when you introduce clinical information systems. But if you look at what happens before the patient gets to the parking lot, after they get into the office, what happens in the office, and afterwards, then even though the nuances of the workflow change, there are certain things that happen in the flow that are fairly generic. These create opportunities for the individual stakeholders -- the patient, the nurse, the physician, all the other members of the care team -- to make certain decisions and take actions that could be informed by a clinical decision support intervention. So whether you're talking about the personalized healthcare domain specifically, or performance on core measures and these other things, there are these generic opportunities for support. And I think once we have that map down, we can start thinking about standardized ways and approaches and best practices, as you were saying, for delivering knowledge and gathering performance data during that workflow, and everybody will be able to move forward much more efficiently and effectively.

>> Greg Downing:

So much in this personalized healthcare space, for it to succeed, will require having the information ready in a timely fashion; it is an interaction between the history, the lab results, the prescribing patterns, and then the information they're giving the patient. All of that has to come together in new ways; otherwise, if the diagnostic test results are coming back three days after you've made the treatment decision, it doesn't make any (inaudible) in the decision. And the case study we presented actually was approaching how health systems really redesign their components to make sure that the information is there on time, when you need it, every time.

>> Jerry Osheroff:

Yeah, I agree with that. I mean, there have been a few attempts to isolate workflow and try to explain it. We've been using a model where there are sort of 12 different steps to a given clinical care episode. They're reasonably recognizable, or at least recognizable to us, and each of those kind of lends itself to a different kind of decision support. So that's one of several models that might be applied.

>> Carolyn Clancy:

I know Justine wanted to get in here.

>> Justine Carr:

Maybe you already addressed this, but the concept of sort of modular decision support. So for example with warfarin, what would be the recommendation, and then would that be -- is that what you were saying, you would go to the Web and get that, so that as it changes over time, you're not stuck with last year's decision?

>> Greg Downing:

I think there are two points to this. One is the emergence, in large part through professional societies and others who just want to improve care, of these Web-based, freely distributable tools that do have fairly credible science behind them. You have to kind of jump out of your usual workflow to go down and find them, and it's not intuitive where you go to get them.

The second aspect is, as you're working through it in an electronic environment, are you prompted for these things in the appropriate fashion? If you have a patient on anti-coagulant therapy, you don't want to see the prompt that has the algorithm in there every time you're renewing their prescription. There are a lot of nuances, and I think there just has to be a systematic way to structure these as you're designing these systems. And it's all a new frontier out there as we see it.
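
A tiny sketch of that prompt-timing point, purely illustrative: fire the dosing prompt at therapy initiation, suppress it on routine renewals. The event names are assumptions, not from the meeting materials.

# Tiny sketch of context-aware prompt timing: show the dosing
# algorithm at therapy initiation, suppress it on routine renewals.
# Event names ("new_order", "renewal") are assumptions for
# illustration.

def should_show_dosing_prompt(event: str, genotype_on_file: bool) -> bool:
    if event == "new_order":
        return True                  # initiation: the prompt is relevant
    if event == "renewal":
        return not genotype_on_file  # renewal: only if data is missing
    return False

assert should_show_dosing_prompt("new_order", genotype_on_file=False)
assert not should_show_dosing_prompt("renewal", genotype_on_file=True)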

>> George Isham:

I want to ask a question at some point, too.

>> Carolyn Clancy:

Take it away, George.

>> George Isham:

Great topic and great presentation. I just want to make this observation; Carolyn has heard me make it before. You know, in Minnesota, among the groups that have adopted automated or electronic medical records so far, there's really no difference between those groups and those that haven't with respect to quality performance in our various collaboratives. And I think it's the topic of this Workgroup, personalized healthcare and clinical decision support, where we get at the next generation of tools for health records that actually enable better performance, building on the capabilities of the electronic medical record. And that observation I don't think is generally perceived across the country. I mean, people think that if you get an automated medical record, quality will be better and performance is going to improve. And I think you're indicating that that depends upon workflow and how it's deployed and those kinds of subtleties, which are critical. So I wanted to make that observation.

And then secondly, when you get to that point of deploying decision support based on tools that are developed by professional society groups that have good evidence, whose professional society, against which particular standards, are we going to use to put into the electronic medical record application for decision support at any given point in time? Is it going to be the American Association of XYZ, or the American Association of ABC, or any number of professional groups which may have overlap for some clinical areas but differences of opinion about how a particular clinical situation is handled?

So I think those are issues that are beginning to be raised by the decision support kind of question. In Minnesota, which of the seven preventive care standards for children shall we put in decision support tools, or when the clinician gets to that point should all seven pop up and the clinician choose on the spot? How do we handle that? Those kinds of issues I think are also issues that I hope your group is thinking about.

>> Jerry Osheroff:

Carolyn, can I respond briefly to George's first point?

>> Carolyn Clancy:

Yes, and then I think we may need to move on.

>> Jerry Osheroff:

I just want to say that's an excellent point; it's going to take a lot of doing to get these things incorporated effectively into workflow. And I just want to point out that CMS, in the DOQ-IT program, is doing some work to specifically try to sort out those details and define some best practices for using clinical decision support in EHRs and care management in the outpatient practice. So I think that's a first crack at it. And I think the outgrowth of that activity will hopefully address the first point that you made.

>> Rick Stephens:

This is Rick Stephens, and I really appreciate, George, your bringing that up. My experience is that everyone likes to go to a standard process so long as it's theirs. And I think that's a really important point that I can't emphasize enough, about how to get alignment; it goes to how you get agreement among all the clinicians to work this way, if aligning around a data system is going to make sense and be successful.

>> George Isham:

This is a critical issue for quality going forward, because quality against what standard? Now that we're at the point in history where you've got the potential to use technology to enable consistently high performance against the latest standard, it becomes a more critical issue which standards you use. So I think this is a critical social and political issue.

>> Charlene Underwood:

Carolyn, this is Charlene. I just want to make one comment -- or an observation -- and again this just reinforces, I think, the comments that have been made to date. You know, from a vendor perspective, as we see, for instance, adoption of a clinically validated tool like the Braden Scale to be predictive of pressure sores and those types of things, those are the kinds of things that the vendors then start to actually operationalize and build into their workflows, and figure out how to optimize and get feedback on. So that direction, to actually move toward those types of scales and clinical decision support that can be endorsed and accepted, will help you with the workflow problem.

>> Jonathan Teich:

And regarding the competing guidelines -- this is Jonathan -- in the roadmap work we were either extremely collaborative or complete cowards altogether, but we allowed for the fact that there were these conflicts. The technology and the ability to share CDS based on guidelines is something where we think we might support a lot of different ones, with tabs within the actual things you can download, so that if you want the stuff that comes from the American College of Cardiology, then you search using that filter and we'll get you all of theirs, and if you want the stuff from AAP then we'll give you theirs. There's a way to support multiple different ones and to share multiple different ones, obviously, that skirts the issue of whether we can actually then reconcile them.
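
One minimal way to picture that tab-and-filter idea, with invented catalog entries: tag each shareable CDS artifact with its issuing organization and filter at retrieval time.

# Minimal sketch of the filter idea: tag each shareable CDS
# artifact with its issuing organization and let the user filter.
# The catalog entries are hypothetical examples.

CATALOG = [
    {"title": "Post-MI beta-blocker reminder", "source": "American College of Cardiology"},
    {"title": "Pediatric immunization schedule", "source": "AAP"},
]

def by_source(catalog, source):
    return [item for item in catalog if item["source"] == source]

for item in by_source(CATALOG, "AAP"):
    print(item["title"])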

>> George Isham:

You could convert it into a scientific debate, you know, where you basically have differences of opinion drive scientific discussion, based upon data that's collected from practice as well as from research about what the issues are as opposed to making it a political sort of battle between our group feels this way and your group feels that way. And you know, you're a jerk and I'm not because I have my standard. That kind of thing.

>> Jonathan Teich:

Absolutely.

>>

This kind of brings up the whole concept we've been talking about: clinical trials -- in generic terms, not specific drug terms -- on some of these clinical decision support tools, especially as we start to move down the electronic health record direction.

>>

Yeah.

>> Carolyn Clancy:

This has been really a terrific discussion. Since I raised a complex question early on, by way of a segue to hearing from Lee Jones, I'll just mention that I did hear a very simple request recently about workflow and clinical decision support. A good friend who is an ER physician came to me and said, you have a lot of good stuff on your site for emergency medicine physicians, and I want it on my hip. That's all. It's got to be in that PDA; that's the only place it will actually -- because I'm not sitting. We're moving all the time. It's going to be an interesting challenge moving forward.

Just by way of the conflicts, George, I've often thought that the opportunity that we don't exploit enough is technology that could actually cut across multiple different guidelines and highlight where common -- points of commonality. No matter what happens, the following three things are always keyed as important and of high value to the patient. But it may take us a little while to get there.

We're now going to shift gears to hear from Lee Jones. Lee is a former colleague of ours at HHS, and is now working with the Health IT Standards Panel, and he's going to tell us about the Continuity of Care Document interoperability specifications. Just by way of a word of context, let me say that in terms of addressing our specific charge we've had the unenviable task of trying to take measures that were developed without any eye, thought, or even a moment of speculation about where the data would come from -- but with an enormous focus on what's the right thing to do here -- and trying to figure out how those can be, I don't know if retrofitted is the right word, Charlene, and if there's a more pleasant, euphemistic one you can tell me later, but it's kind of a backend fix to existing electronic health records. As we transition to our broader charge, the question is what is most likely to be a standard way of getting a lot of clinical content in the future. Right now this issue has a lot of traction and a lot of support, and I think it has been used quite a bit by the VA. So I'm going to turn it over to you, Lee, and welcome.

>> Lee Jones:

Thank you very much. I want to first just thank everyone for giving me the audience. I have had some trouble getting into the Web presentation so I'm not able to follow along. So I assume that the slide that you’re looking at is the first one, my title slide; is that right?

>> Carolyn Clancy:

Absolutely.

>> Lee Jones:

I am the program manager for the Health Information Technology Standards Panel, and I'm sure that most of you know what that is, but we essentially are a group that has been charged with canvassing the industry for the available health IT standards that are germane to a given context or use case prioritized by AHIC, and then endorsing certain standards for the implementation of those use cases. So we aren't trying to choose standards that apply across the board or across use cases; rather, they're contextually chosen. So today I'm going to talk a little bit about the Continuity of Care Document, or CCD, which is one of the standards that we've chosen in a few different contexts, and just give you a general feel for what it is we're talking about, so that you can use it as an input.

So I'm going to move to the next slide, slide 2, which is labeled What are Clinical Documents Anyway? Typically, when we talk about clinical documents in this context, we're talking about a way to codify a number of medical concepts so that they can be transmitted and interpreted by computers. The dominant standard for codifying just about any kind of information these days is XML, or extensible markup language, and these clinical documents are usually coded in that as well; they represent a collection of concepts. The prominent ones that are very much in the fore when it comes to representing what are typically referred to as summaries are the Continuity of Care Record, which is a standard out of ASTM, and the Clinical Document Architecture, which is a standard out of HL7. When we were looking, over a year ago, at selecting a standard for a document-based transmission of some information, those two came to the fore, and we were able to choose what we viewed as a compromise between the two, which is the Continuity of Care Document. The Continuity of Care Document was essentially the result of a joint effort between HL7 and ASTM to codify all of the data elements and clinical concepts, if you will, from the ASTM standard, which is the CCR, into the Clinical Document Architecture's XML schema. So the physical representation of the concepts would be in CDA, but the semantics, if you will -- the meaning, or the paradigm for what those elements are and how they relate to one another -- would come from the CCR. So when we're talking about these summaries and core datasets in general, those are sort of overloaded terms that don't have a precise definition, but we're typically talking about these kinds of standards to represent them when we're talking about transmitting them between machines. Next slide, please.

So on the next slide, this is an example of the XML representation of a diagnosis using CCR and CCD, and I'm not asking you to really interpret what this means, but there are two points here. One is that the same concept can be represented in multiple standards, so I don't think that one standard should dictate what the concepts are; rather, we should figure out what standard is appropriate to encompass all of the different concepts that we want to have. And two, this kind of representation has some useful information in the real world and then a lot of overhead in order to keep track of things in the digital world. So things are broken down into codes and numbers and IDs and references that probably don't have as much utility in the delivery of care but are necessary for the understanding and processing of it by machines. Next slide, please.
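
As a rough illustration of the contrast Lee is describing -- this is not the actual slide content, and the markup below uses simplified, hypothetical stand-ins for the real ASTM CCR and HL7 CDA/CCD schemas -- the same diagnosis might be carried in the two formats roughly like this:

    # Sketch only: simplified stand-ins for CCR-style and CDA/CCD-style markup.
    import xml.etree.ElementTree as ET

    ccr_style = """<Problem>
      <Description>
        <Text>Type 2 diabetes mellitus</Text>
        <Code><Value>250.00</Value><CodingSystem>ICD9</CodingSystem></Code>
      </Description>
    </Problem>"""

    ccd_style = """<observation classCode="OBS" moodCode="EVN">
      <code code="250.00" codeSystemName="ICD9"
            displayName="Type 2 diabetes mellitus"/>
    </observation>"""

    for doc in (ccr_style, ccd_style):
        root = ET.fromstring(doc)
        name = root.findtext(".//Text")       # where the CCR-style form puts it
        if name is None:                      # the CCD-style form uses an attribute
            name = root.find(".//code").get("displayName")
        print(name)                           # same concept either way

The point is that the concept survives translation between standards; it is the wrappers, IDs, and coding overhead that differ.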

So in the Continuity of Care Document there are a number of different areas, taken from the Continuity of Care Record, where you can represent information. I've listed them here -- this sort of comes out of the table of contents for the Continuity of Care Document -- and there are three main sections: a header and footer, which give a lot of contextual information about who is involved, what patients are involved, what clinicians are involved, et cetera; and then the body, in the center there, which contains all of the different clinical statements that are germane to care. When I say clinical statement, what I'm really saying is there's some concept, like a diagnosis or a procedure or a lab result, that we are making a statement about by placing it into this framework. We want to communicate the idea that someone has had a test done, or a result has come back for that test, or someone has a particular diagnosis. Those are all clinical statements, and the CCD represents an organization of those clinical statements. I don't know that all of these different areas are germane to quality metrics; however, to the extent that they are, they are mostly representations that include all those different coding and ID paradigms that I showed you on the previous slide. Next slide, please.
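
To make the header/body/footer organization concrete, here is a minimal sketch -- our illustration, not HITSP's -- of walking the body of a CDA/CCD instance and pulling out one kind of clinical statement. The namespace is the published CDA one; the section code shown (11450-4, a LOINC problem-list code) is an assumption about how a particular document is organized.

    # Minimal sketch: extract problem statements from a CCD's body sections.
    import xml.etree.ElementTree as ET

    NS = {"cda": "urn:hl7-org:v3"}            # CDA's XML namespace

    def problem_statements(path):
        root = ET.parse(path).getroot()
        for section in root.iterfind(".//cda:section", NS):
            code = section.find("cda:code", NS)
            if code is not None and code.get("code") == "11450-4":
                # Each entry in the section wraps one clinical statement.
                for obs in section.iterfind(".//cda:observation", NS):
                    c = obs.find("cda:code", NS)
                    if c is not None:
                        yield c.get("displayName") or c.get("code")

    # e.g.: for p in problem_statements("summary.xml"): print(p)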

So when it comes to HITSP, when we chose the Continuity of Care Document last fall and then sort of ratified it in the spring, it proved to be one of the most engaging subjects we've encountered, because there are really strong camps with strong opinions about which standards are the right ones. Some people really aligned with the Continuity of Care Record, or CCR -- and they tend to be a very vocal and active population -- and some people were in the HL7 camp, citing worldwide usage of the Clinical Document Architecture. We viewed the CCD as a good compromise between the two, but even today people still debate the topic. Next slide.

So if we just drill down for one moment to see what the debate is about, I think there are a few things we can see. One is that the CCD is, fundamentally speaking, a CDA standard, or Clinical Document Architecture standard, and that means it is an HL7 standard, not an ASTM standard, even though the concepts it represents come from ASTM. So as that standard evolves over time it will be advanced within HL7, not within ASTM. And CDA is a larger, more generic, more extensible kind of framework than the CCR; or said another way, the CCR had a very focused application, whereas the CDA was designed as a greater platform on which you could build different kinds of documents. So as the notion of things that might want to be included in the CCD expands, they're probably going to draw from the notions contained in the CDA as opposed to going back to the CCR. Another thing that is out there is that because HITSP has endorsed CCD, sometimes the underlying standard, which is the CDA -- or at least the notion that we have really picked a CDA document -- is lost. And so if we then go back and say we're now endorsing a CDA document that has some other sort of clinical statements, some people think it's sort of a bait and switch: you weren't really committed to CDA, et cetera. In addition, there are a number of real-world implementations using the CCR, especially by a number of smaller organizations that have been very vocal in saying that the switching costs are very high for them to move off of CCR to adopt the CCD, and that the CCD has additional levels of overhead that make those implementations more difficult. Next slide, please.

So HITSP's view on this is that we really are being asked, for each context or use case, to solve a different problem, even though they may be similar or analogous or even overlapping. So we need a fundamental framework that's extensible. While the CCR has a simple and effective representation of clinical summaries for a snapshot in time, and a lot of thought and effort went into designing what those data elements are, its extensibility is not obvious. Said another way, the CDA was really designed to house a number of different clinical documents and was not designed with any single summary or other concept in mind. So the CCD is really an expression of the CCR within this larger CDA framework, and choosing CDA allows us to expand on that same standard as we move into different contexts. To the extent that we're able to reuse the CCD and the statements contained in it, we want to do that; but to the extent that a new use case mandates something that isn't easily represented in the CCD, we may have to augment it appropriately. Next slide, please.

So this diagram tries to give you a sense of how these things relate. If you think about the white circle as all health care summaries, the CDA standard tries to accommodate all of the different clinical statements or concepts that could make up a health care summary, and then there's some specific aggregation of those statements that comprises the CCD. There are some statements that are not within the CCD that one could still call a health care summary. There are some that are outside the concept of a health care summary, depending upon your viewpoint, that could still be described by the Clinical Document Architecture, and then there could even be statements that are outside of what CDA currently represents and may be represented in other kinds of documents that HL7 has. So the fundamental unit here is really the clinical statement. When you're trying to put together a core dataset for quality metrics, or a summary for emergency care, or whatever the context is, the key thing is to figure out what it is we're really trying to represent, and then we can try to select the right aggregation of those data elements into a standard that is appropriate. Next slide, please.

So currently HITSP has several different documents that are based on CDA, including a couple that are based on the CCD, which is, again, a CDA document. I've listed them here and they are CDA standards, and we try to put additional constraints on those standards so that we can be very precise about how they would be implemented by an organization. Next slide.

So because a number of the use cases that have come up are getting more and more to this idea of sharing some -- or defining some shared set of data that they want to pass around and use for different purposes, we in HITSP have created what is listed here as an X-TC WG, which means a cross technical committee workgroup around the summary document architecture. So we are trying to get a group together that's drawn from all of the workgroups that work on the use cases and have them really be thoughtful about how is it that we define these different summaries so that we can be consistent across all of the different use cases and the standards that we choose and the way things are represented. And so this group will be taking up the question of is CCD appropriate for whatever the issue is at hand. And our general principle is we want to reuse the CCD and the clinical statements it contains to the extent possible and then only move outside of that whenever the context really calls for it. Next slide, please.

So the current work cut out for HITSP is to remain consistent in our use of the CDA, making sure the concepts are represented consistently across different use cases to maximize the opportunity for reuse, so that we can leverage CCD everywhere possible, for example -- but not at the expense of solving the problem. We don't want to shoehorn a quality use case into the CCD in a way that somehow compromises it, if it really would benefit from something outside of CCD. And then CDA has a number of different levels that you can use in order to be more and more granular and structured about how data is represented, so we want to make sure that we choose the right levels. The next slide is sort of my final slide, and I certainly am open to entertaining any questions.

>> Carolyn Clancy:

Thank you very much for that, Lee. I think that, like your boss -- John Halamka, I think -- you made a lot of really, really hard work with people with strong opinions seem very easy. I guess it must be a required skill set if you work at HITSP. Let me ask if there are questions on Lee's presentation. And then I have some questions.

>> Karen Bell:

Carolyn, could I ask one?

>> Carolyn Clancy:

Yes.

>> Karen Bell:

Lee, this is Karen Bell, and I'm also on the EHR Workgroup, as is Carolyn. That Workgroup is very interested in laboratory results as well. HITSP has created standards for the transmission of lab results in two different ways: one through the EHR process and another through the CCD. Clearly this Workgroup is interested in extraction -- or actually semantic interoperability -- of lab results as it relates to various quality measures. Could you comment on the two different approaches to lab results, for instance, and how that would apply to quality measurement?

>> Lee Jones:

I think what we've come across, in lab and in other areas, is this tension between a message-based paradigm and a document-based paradigm. There's more and more interest going forward in groups representing things as documents that have persistence and can be stored and recalled, et cetera. Whereas by far the predominant way to communicate health care in most settings is through message-based paradigms, which are more transactional -- HL7 2.x, you know, is very dominant. And while the same kinds of concepts are represented in the HL7 messaging standard and in these clinical documents like CCD, the messages tend to be, in practice, more temporal and transactional and don't really hold together. So I think the intention is not to have different data represented in those different paradigms; it really was just to accommodate and bridge the current dominant model to the emerging future model -- the message-based paradigm into the document-based paradigm. I think that was the reasoning behind two different approaches with the lab message in particular.
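
A schematic way to see the tension Lee describes -- both payloads below are invented for illustration, not drawn from any HITSP specification -- is to put a transactional HL7 2.x-style result segment next to the same result expressed as a persistent clinical statement:

    # Message-based paradigm: a pipe-delimited HL7 2.x-style result segment.
    obx = "OBX|1|NM|2345-7^Glucose^LN||182|mg/dL|70-105|H|||F"
    fields = obx.split("|")
    print(fields[3], fields[5], fields[6])    # test, value, units

    # Document-based paradigm: the same result as one clinical statement,
    # in simplified CCD-style markup, meant to persist and be recalled.
    ccd_result = ('<observation classCode="OBS" moodCode="EVN">'
                  '<code code="2345-7" codeSystemName="LOINC"/>'
                  '<value value="182" unit="mg/dL"/>'
                  '</observation>')

The message is optimized for the moment of the transaction; the document form carries enough context to stand on its own later, which is the bridge the two lab approaches were meant to provide.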

>> Karen Bell:

Thanks, Lee.

>> Lee Jones:

Sure.

>> Carolyn Clancy:

Yes, Kristine.

>> Kristine Martin Anderson:

Hi, Lee, this is Kristine Martin Anderson. I want to see if I understand the gist of your presentation, so I want to try something here. Suppose we're thinking about data that we're going to be using for quality purposes -- data that arise in a number of different areas, could be lab results, could be whatever. From what I heard you say, once you have that set of data that you know you want, it might be in CCD, right, because it might already be represented in a clinical statement. Or it might not be in CCD but might already be represented through something in CDA -- and I don't know if it could then become part of CCD; that's another part of the question, how does one get something adopted into CCD over time, and that would be the preference if it's not already in CCD. And then if it exists just somewhere else, and it needs to exist that way for that purpose, you would want to choose another standard. I want to see if I understand this concept of trying to represent a set of data through a set of standards -- if they're going to be message-based data, is that what you sort of endorse CCD for under certain contexts?

>> Lee Jones:

I think that your characterization of what I was saying is right. The last part of your exact question, I'm not sure I understood. Are you asking if we're endorsing CCD? I'm not sure exactly what you're asking.

>> Kristine Martin Anderson:

You only deal with it in the contexts you've already considered? You're not necessarily saying that CCD should be used for any other purpose. You've only evaluated it for the purposes that have come forth through the use cases?

>> Lee Jones:

Oh, okay, yes --

>> Kristine Martin Anderson:

So let me try it another way. We're trying to get at this idea of a defined core dataset for quality and we're trying to understand what the intersection is between that and CCD. That's really where I'm going. Do you want to add things to CCD? From our perspective, outside of the standard world, how should we think about CCD in that context?

>> Lee Jones:

Okay, so I think there are two things. One is yes, in HITSP we do choose the standards within a given context, so a context that's not been given to us did not really factor into the choice of the standard -- we didn't anticipate what quality would need in particular. And so we don't endorse the standards per se beyond the contexts in which we've chosen them.

However, the second point would be that these standards, you know, like the CCD, which is based on the CCR, do represent by and large the kind of data that is available in these clinical systems and that is actually flowing. So while it may not have been chosen for some other context, if you want that other context to avail itself of the data that is actually available, you may want to consider these as starting points to represent the kinds of things that might be practically implementable.

>> Kristine Martin Anderson:

Thank you.

>> Carolyn Clancy:

This is Carolyn, Lee. I have two questions. One I think builds on Kristine's. In the slide labeled current environment where you have examples of specific documents --

>> Lee Jones:

Yes.

>> Carolyn Clancy:

What is in the patient-level quality data document?

>> Lee Jones:

I don't have that in front of me, but we have been given a quality use case, you know, and have built a number of constructs around it. The focus of our first year there was really to figure out what the atomic clinical elements are that would solve the use case as it was given. So I think I provided that document, or certainly it's available through HITSP's Website if you want to see the exact contents of it.

>> Carolyn Clancy:

Okay. I have a question. If Tammy is still on the phone?

>> Tammy Czarnecki:

I'm here.

>> Carolyn Clancy:

Great. So it's my understanding that the VA uses a similar document, really built more closely on the CCR, for assessing quality. Do I have that right?

>> Tammy Czarnecki:

We do use a similar document.

>> Carolyn Clancy:

So, can you -- are you able to assess quality in a longitudinal fashion?

>> Tammy Czarnecki:

Yes. I mean, I believe that we can. Doug, you're on the call as well, correct? I mean, I work in the quality department, so yeah, I believe that we can assess quality longitudinally.

>> Lee Jones:

And I wouldn't be surprised by that, if they are using something that's like the CCR or even something homegrown. One of my slides was trying to drive home that point: it's not that you can't use one of these other standards; it's just that if we're going to get toward interoperability we want to settle on one. And so there becomes sort of a translation activity that would have to happen so that we could all get on the same sheet of music.

>> Tammy Czarnecki:

And I think that something that was said, I think in the last presentation, about standardized definitions and stuff like that for quality, will really play a role in how we're able to do that. Because I think that that's probably one of the pieces that will affect our ability to be interoperable as far as sharing quality data.

>> Carolyn Clancy:

I think I'm starting to get it, although I'd have to confess my own brain still has quite a bit of fog in it in terms of trying to match our task to this particular capacity. I guess the two issues I'm struggling with are, first, how does the CCD relate to the work that we have launched with the National Quality Forum? There we took, again, this subset of quality measures endorsed by the AQA and the HQA -- and again, these were never developed with any eye toward electronic health records, or even thinking about a data source at all; that wasn't the job of the people who were thinking about quality measures at that point in time. And essentially you've created this megamap that maps out the data elements and tries to show where we've got good interoperability standards already -- lab, pharmacy, and so forth. And then of course we have lots of stuff to measure that doesn't quite fit. So I'm trying to figure out how these work together. Part of my overarching question is, is CCD a sort of transitional thing to a better world where technology is truly just as wonderful as we know it can be, routinely and reliably, and clinicians just kind of work with drop-down menus all the time?

>> Lee Jones:

Well, yeah, I'm a believer that the world continually gets better, so hopefully this is a part of that evolution. But I would say that when it comes to quality I think that there are two different kinds of things that I've seen that are trying to be represented in general.

One are these same set of things that CCD and things like it represent, which are these clinical statements that have to do with sort of raw data about what happened in patient care. You know, what was the diagnosis, what was the test, you know, et cetera.

But then there's a second kind of concept that seems to be represented, which has to do with counting and taking percentages -- how often did such and such get applied, at what rate, or interpreting what the lab result was, whether it was in normal range or not, and then figuring out how many times that happened. And those kinds of things -- what I'll call the actual metric aspects of it -- are not well represented in something like CCD. So if the point of CCD is to move the (inaudible) to someone who then would compute those kinds of metrics, I think that would be an appropriate use. If it was to carry the metric, or the calculated metric itself, CCD is not in its current form well suited to do that.
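
A minimal sketch of the division of labor Lee is drawing: documents like the CCD carry the raw clinical statements, and the "metric aspects" -- counting, percentages -- are computed downstream by whoever receives them. The field names, data, and the HbA1c-under-7 control threshold here are all illustrative assumptions.

    # Downstream metric computation over data elements extracted from documents.
    patients = [
        {"id": "a", "diabetes": True,  "hba1c": 6.8},
        {"id": "b", "diabetes": True,  "hba1c": 9.1},
        {"id": "c", "diabetes": False, "hba1c": None},
    ]

    denominator = [p for p in patients if p["diabetes"]]
    numerator = [p for p in denominator
                 if p["hba1c"] is not None and p["hba1c"] < 7.0]
    rate = len(numerator) / len(denominator)
    print(f"HbA1c in control: {rate:.0%}")    # the calculated metric itself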

>> Janet Corrigan:

This is Janet Corrigan; let me take a shot at explaining a little bit more how they might relate to each other, at least the NQF effort. The NQF effort, as Carolyn indicated, started with a sizable number of measures and traced them down to the data elements that needed to be in the EHR to be able to generate those measures, and I think it is helping us think through how we may need to develop measures a little differently than we currently do. So for example, a lot of our measures right now have very, very long lists of exclusionary criteria that apply to the denominator. And our committee did a Pareto analysis to take a look at whether a limited number of data elements in the EHR were really critically important -- the 80-20 rule: are there certain data elements that, if you could get them into the EHR well, would generate most of the measures? That's not the case given the way we currently develop measures, because there is such an extraordinarily long list of exclusionary criteria to try to capture absolutely every reason under the sun why somebody perhaps shouldn't be in that denominator.

If, on the other hand, we were to develop our measures so that we didn't attempt quite the same level of perfection in classifying patients, and we didn't have those very, very long lists of exclusionary criteria -- only had exclusionary criteria for instances where the event is likely to occur more frequently, not once in a blue moon -- then indeed the Pareto principle does come into play, and the number of data elements that you really need to capture well in an EHR is more limited. So there's one example of an implication of how I think we will develop measures differently as we better understand the interface between measure development specifications and the EHR.

Another issue that came out of the discussions was that since SNOMED is likely to be the terminology and the system used, we probably need to work more closely with the major measure developers on how they specify certain aspects of what goes into the numerator and the denominator and how they select their populations, to be more consistent with the translation that has to take place over to the SNOMED terminology. So there's definitely an interface between the two groups.

I think, last but not least, a good deal of what we need would come from a very standardized problem list with an appropriate date and time stamp on it -- a detailed problem list, standardized and developed to capture a lot of what goes into these exclusionary criteria lists that the measure developers have. That's sort of an essential building block, and if we could get to that point we could then generate most of our denominator population from the problem list, especially if it was date- and time-stamped, because many of the things that go on there change as the patient moves through the system. So when you look longitudinally, it's going to change depending upon the point in time at which you're trying to calculate the measure.
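
A small sketch of Janet's point about the date- and time-stamped problem list -- the data and condition names are made up -- showing that the denominator you derive depends on the point in time at which you calculate the measure:

    # Derive a point-in-time denominator from a timestamped problem list.
    from datetime import date

    problem_list = [
        {"problem": "diabetes",  "onset": date(2005, 3, 1),  "resolved": None},
        {"problem": "pregnancy", "onset": date(2006, 1, 10), "resolved": date(2006, 10, 2)},
    ]

    def active_on(problems, when):
        return [p for p in problems
                if p["onset"] <= when
                and (p["resolved"] is None or p["resolved"] > when)]

    # A pregnancy exclusion applies in mid-2006 but not a year later, so the
    # longitudinal denominator shifts with the measurement date.
    print(len(active_on(problem_list, date(2006, 6, 1))))   # 2 active problems
    print(len(active_on(problem_list, date(2007, 6, 1))))   # 1 active problem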

>> Lee Jones:

I think that's a good point: in trying to craft these measures and having these exclusionary criteria, you may look to CCD or something like it to determine whether the criteria you're using for exclusions are discernible in the data that's represented there. So it's not that the raw data elements can't be represented, but if you want to exclude people, can you actually tell that someone falls in one category or another from just what is represented in these kinds of standards?

>> Janet Corrigan:

Yes, and taking that one step further, let's say we find out that the answer to that question for a very important measure is no -- that it's not discernible from the CCR. What do we do then? What is the process? Because it may still be that something isn't being captured in the right way to allow us to calculate that performance measure, and that's an important thing, because the performance measures are coming off of the practice guidelines in the same way that the decision support ought to be coming off the practice guidelines. So if we're at a point where we've got what we think is a very important performance measure that analyzes and assesses a critical aspect of the care process -- an aspect that is important for patient outcomes -- and it isn't discernible from the CCR, how do we influence the future development of the CCR?

>> Lee Jones:

Well, I think that if we had gotten something from you in the form of a use case or something, and that was the situation -- after we did our analysis there was no current standard, CCD, CCR, or otherwise, that could accommodate what you're looking for -- then what we do at HITSP is identify it as a gap, go to the various standards development organizations, give them that gap, and ask them to fill it. In the lab arena we did something similar with HL7, and they produced an implementation guide for us to use, primarily based on the requirements that HITSP gave them. So I know the government is very active in all of the different SDOs directly and can bring it up that way, but also through HITSP: when there's a gap like that, we work with the SDOs to get them to collaborate on filling the need.

And then there's a larger question about, okay, even if there's a standard there, how do you ultimately influence clinical practice so that people fill in the data the right way, and all the vendors are actually using it in their products so it can be represented and captured, et cetera?

>> Janet Corrigan:

That's very helpful, and I think in some ways what we're really pointing to here is the need to more clearly delineate a process for moving forward, because we're going to be in this sort of continuous loop between measure development and the CCR or whatever, and HITSP's work. Essentially we've got to have a better, more well-defined line of communication and handoff: our group at NQF is called HITEP -- to make things even worse, HITEP -- and the HITEP group, which is chaired by Paul Tang, probably needs a more explicit handoff or communication over to HITSP, and then indeed a feedback loop as well.

>> Carolyn Clancy:

Right.

>> Lee Jones:

I know that --

>> Charlene Underwood:

I'm going to have to speak up on this one, because -- with all due respect, Janet -- this is one where I think most of you are aware of the work Floyd Eisenberg has done, and we've been trying from the vendor community to connect the dots here. This is just representative of the convolution of the current process in terms of trying to connect the dots. So I would not support building a connection between Janet's group and HITSP without looking at the broader workflow process definition, such that we can streamline this process. Because right now we're trying to hold end-to-end concepts together from the point that the use case is developed by AHIC; it's passed to ONC to develop the use case; it goes to HITSP; we respond manually in terms of what the issues are; we try to be at the table in Janet's group; and right now it is a waste of effort in the industry trying to get all this to hold hands. So I totally concur that we need a better process. I don't concur that we should define what that process is until we look at the bigger picture.

>> Carolyn Clancy:

Charlene, can I clarify? When you're talking about workflow, you're talking about the resource and other implications of needing to change records?

>> Charlene Underwood:

No, no. I'm talking about the current process now to identify what are the standards that need to be deployed to support quality management. That's what I'm talking about. And making sure that at the end of the day those standards can be built such that they're computable and the data that we capture is computable. Because we've been trying to define what that process is for the last year, working with all these groups, and responding to the work coming out of ONC in terms of trying to communicate that, and the right hand does not know yet what the left hand is doing so something is not working.

>> Carolyn Clancy:

Well, I think that's fair and to be honest I think it also circles back to the point that Rick Stephens makes to me all the time that we've got to be thinking about where's the consensus and buy-in. Lee --

>> Josie Williams:

Yeah, this is Josie, Carolyn. And I have to agree with that totally. We really do have some issues out there. And it's next to impossible, if you're out in sort of a large system, to even understand it, let alone put it together. So I have to concur with the last speaker.

>> Carolyn Clancy:

Josie, let me say how thrilled I think all of us are to hear your voice. I know you've had a series of adventures that we don't need to go into, but we've missed you.

>> Josie Williams:

Let's just say that I understand all too well what quality is and should be.

>> Carolyn Clancy:

Right. I expect to see you on Oprah any day touting your book.

[laughter]

>> Josie Williams:

Good to be back.

>> Carolyn Clancy:

Any other questions for Lee? I have just one very concrete question, Lee. Back to the current environment and these documents. The documents that you've listed there are documents that have been developed to support the use cases for the Nationwide Health Information Network, is that correct?

>> Lee Jones:

You mean the use cases that they will be addressing? Yes.

>> Carolyn Clancy:

So looking at that list, we would be overstretching to infer some strong sense of consensus in a much broader health care community about the content of these, am I correct on that? Which is not to say that the standards folks don't reach out, but that's different than, say, the American College of Physicians or the AMA saying, this is the patient-level quality data document.

>> Lee Jones:

Oh, yeah, I think you're right in that characterization.

>> Carolyn Clancy:

Okay, thanks, that's very, very helpful. And we'll be looking forward to sending you a quality report card as soon as we solve all of these problems here.

>>

Carolyn, we'll need to do a little bit more research to inform some of those questions. (inaudible)

>> Carolyn Clancy:

That would be great. And Tammy, since I put you on the spot, I will put Doug on the spot later on.

>> Tammy Czarnecki:

Okay.

>> Carolyn Clancy:

We're going to transition now to return to the requirements analysis that we've been working on. Just by way of refreshing your memory, we've put together -- or staff put together -- a very long analysis. The concept of a requirements analysis is one that is probably more resonant in the world of software development. But it actually began to take on the broader charge, which moves us from thinking about how an electronic health record can capture some current quality measures and make reporting easy for, say, hospitals or physicians' practices, to the idea of looking at longitudinal assessment of care, which I think is where most people, certainly as consumers of health care -- and I might have tested this concept on Josie -- are very interested. Most people aren't interested in knowing that the first ten minutes of the movie was really wonderful and the rest got pretty hairy; they'd like to see a fairly comprehensive picture. So we've had discussions about this analysis at past Workgroup meetings. We've had a specific subgroup of people, which I know has included Phyllis Torda and others, really kicking the tires on this analysis, so we have a lot of comments and suggested changes.

So on this slide you can see that some of those changes include reorganizing the document to maintain and emphasize the focus on longitudinal analysis, the capabilities of health IT, and future requirements. The additional background or context-setting information has been moved to appendices. The introduction has been restructured to clearly focus on the intersection between health IT and longitudinal analysis. A section on clinical decision support has been added to this document -- thank you, Jonathan and Jerry and many others. There's also been identification of requirements related to disease registries; for those of you who don't tune in to the quality world every ten minutes, I can tell you this is of huge interest, in particular among the surgeons. And there's the development of a next steps section, including the identification of recommendation topic areas based on the analysis. So we started to get into the topics at the last meeting, and it is our hope now to continue that process.

So on the next slide, just making this a little bit more concrete, a very specific deliverable for us is to put forth recommendations to the American Health Information Community in 2008. Now, I know that many of you are acutely aware that there will be a transition going on this year from, I guess, what might be called AHIC 1.0 to AHIC 2.0, but that really does not change the content and trajectory of what we're doing. So in order to get there, we really need to develop and reach consensus on some recommendation areas, and we've actually come up with some criteria to help us think through this. Then we need to figure out whether there are areas that we think are important but where we need some additional background work, whether that's additional testimony, additional analysis, and so forth. And then of course we ultimately need to get into draft language for a recommendation letter.

And we may also need to conduct some scenario analysis on some important topics that we've already discussed where the desired future state is unclear. I'll take you way back to our very early meetings, where we said that we were not going to prescribe solutions. We were not going to say data aggregation should be done this way, recognizing that our health care system, for better or for worse, is based on a market-based approach; we were going to focus on requirements. So for example, the locus of data aggregation is an area where I would have to say there's pretty clearly no consensus. Right now hospitals are reporting to a central repository maintained by the Iowa Foundation for Medical Care, otherwise known as the Iowa QIO. In other work going forward there's much more of a focus on a distributed kind of approach. It's unlikely that by the time we're bringing recommendations forward to the American Health Information Community there's going to be a clear consensus on those points, and I don't think it is our job to necessarily drive that. So we may need to actually flesh out some scenarios.

In addition to that, we think it's important to develop a roadmap of key activities or changes needed to reach the end state articulated in the Quality Workgroup vision and again taking you back a number of meetings, you'll recall that we were pretty good at saying what we wanted in the future. And, I mean actually quite imaginative and creative, I think, and we were really articulate and clear about what's wrong with this picture today. Making the distinction between midterm and longer term was certainly much more challenging for me and I think for many of us. So we're going to try to push you on this timing issue a little bit.

So the next slide is labeled notional draft. I don't own that label but I love it. And it's essentially a vision roadmap. So what you can see along the left-hand side are some issues that we think, and have been the subject of much of our discussion over the past couple of meetings, are areas and issues that we think need further development. Along the top is a timeline in years, and what we also see here are areas of -- we see some suggested timeframes -- my problem here and the reason I'm fumbling for language is I didn't bring reading glasses with me, because the print is pretty small. Oh, thank you. The spirit of team work around here, I now have some glasses. It’s actually helping a little bit.

So for example, what we have put together here is if you look at data stewardship, we've said there will be broad agreement on need in the latter part of 2007 going into 2008, and are suggesting that policies or procedures will be developed in 2009 and so forth. And -- oh, and this is even better. There’s one with large print here. I can't get over it. -- So what we're trying to do is figure out from this roadmap how far off is this timing? What are the dependencies between some of these activities? And where do we need consensus and buy-in and are those structures in place to actually create a forum for that kind of engagement? So if this isn't enough questions for you, I can't imagine what might be. So I'm going to pause here. After we take a look at this timeline, we're going to actually delve a bit more in-depth into specific topic areas. Let me ask if there are comments from members of the Workgroup, and in particular from Rick?

>> Rick Stephens:

Well, from my standpoint, Carolyn, I think you characterized it very well. We have lots of questions, we've tried to put together a framework and now we're looking for some feedback and dialogue about how all this ties together. I think the challenges again go back to how long does it take to get there, what are the relationships, and how do we align and get agreement about what each of these bars mean and who is going to go pull them off.

>> Jane Metzger:

Carolyn, this is Jane Metzger. Could I ask a question about measure set evolution?

>> Carolyn Clancy:

Absolutely.

>> Jane Metzger:

I'm not -- it's not clear to me whether that evolution bar includes the concept that we're moving toward one set of measures.

>> Carolyn Clancy:

I think that's a really good question and I’ll have to say that a number of us work closely enough with the National Quality Forum that that may be a presumption that is hardwired into our brains. I think there's been huge consensus among many in the health care industry that we need one set of measures. That's certainly been a driving factor in the quality alliances. And I think many people have come to the notion of having one set of measures through shared pain more than anything else, because any other strategy is kind of chaotic.

>> Jane Metzger:

Well, I guess I think we should be quite explicit about that.

>> Carolyn Clancy:

That's fine.

>> Janet Corrigan:

This is Janet Corrigan, the way that we think about it at NQF, and working with our partners on it, is that there does need to be agreement around one set of measures that will be used for public reporting and pay for performance. There will be lots of other quality measures in all likelihood that are wanted internally for internal quality improvement, but to the degree we're looking for comparative data and we want to lessen the burden on providers from external demands at least, we do need to reach agreement on those that would be used for external reporting purposes.

>> Jane Metzger:

I guess I would request that we be explicit about that, because it isn't necessarily obvious to people who haven't been involved in this process.

>> Carolyn Clancy:

I think that's incredibly helpful. Thank you for that. Now that I can actually read here, I wanted to clarify a couple of terms in the left-hand column. Some of these I think are pretty self-evident. The issues about the legal framework for data sharing we just heard a bit about from the NCVHS, whereas data stewardship we've discussed.

The issue of patient record matching, in a world where a unique patient identifier is unlikely to come about anytime soon, is one that we discussed at recent meetings.

And the same applies for provider record matching. The acuity of this was brought home for me when I heard Beth McGlynn from RAND describe her experience trying to do this in Massachusetts with four -- the number four -- health plans, and finding out that one health plan had a million unique provider IDs. That turns out to be an artifact of how they register referrals out of network. But it's that kind of challenge that is very much at the core of trying to get to longitudinal assessment.

We've had a number of discussions -- or at least discussion fragments might be a better way to say it -- about the locus of record de-identification, and, you know, Justine did a fabulous job clarifying the issues related to -- I'm trying myself now -- health data use that turn on when the data are de-identified and when they are not. And I think we've certainly heard some testimony in our meetings before which suggests there's not uniform, total clarity about this at all times.

Data exchange and aggregation, I think, is the unique challenge for us, because it is unlikely that someone is going to wave a wand and the entire health care system will be wired in an interoperable fashion in a very short period of time. So we're going to be living in a world where that diffusion and adoption is evolutionary. For some docs and other health care settings we're going to have to rely on claims, taking advantage of clinical data elements when we can, whereas others -- because they were among the early adopters and were in a better place to do it -- are going to be living in a world where they've got a very, very sophisticated electronic system. So the issues related to reconciling the aggregation of data from very different sources, and pulling data from multiple sources, which we started to get into with Lee Jones a little bit, I think are very real.

The concept of a quality dataset is a synonym for minimum dataset, and we are not using the word minimum because that has come to have a very specific meaning related to nursing homes. In fact, when you log on to Nursing Home Compare on the CMS site, all nursing homes in this country are required to report on a select set of performance measures, which have been endorsed by the National Quality Forum, but they were derived from this minimum dataset. And the question is do we need something like that for longitudinal measurements?

The two other areas that I think are of great interest, and that I know the subgroup has teed up for consideration, are expanded data element standardization -- sort of, I guess, the HITEP process on steroids, if you will; I'm making sure that Janet is sitting down so she doesn't hit the floor. Quite concretely, that might ultimately be translated as something we bring to the Community where we say we need more standards and we need them now. The current process doesn't actually support somebody saying let's get moving, so you as the Community need to collectively bring your power together and say we want this to happen.

And then the last issue relates to coding improvements and --

(music)

That is lovely --

>>

Sounds like a third grade etude.

>> Carolyn Clancy:

I love it. Name that tune, right?

The issue of coding improvements has been raised because one of the --

(audio disruption)

>>

I think we know whose phone this is.

>> Carolyn Clancy:

Secretary Leavitt?

[laughter]

Is someone on mute?

>>

Yes. On hold.

>> Carolyn Clancy:

Do we know who it is?

>>

Yes.

>> Carolyn Clancy:

Oh, thank you.

>>

From the AHA.

>> Carolyn Clancy:

I love it. That was great. This issue of coding improvements I see, or understand, as being closely linked to the issue of a problem list. Today, in real life, it is not actually so easy to find the denominator population of people with, for example, diabetes. In outpatient care that's usually some little algorithm about who had a blood test that looked like diabetes, plus has gotten other tests that sound and smell like diabetes, and maybe is on medications that only diabetics would take, and so forth. And that seems to me to burn a lot of resources and to undermine the credibility of the implementation of measures.
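
A rough sketch of the kind of "little algorithm" Carolyn means -- inferring a diabetes denominator from indirect evidence when there is no reliable coded problem list. The thresholds and drug names are illustrative assumptions, not a sanctioned case definition:

    # Heuristic denominator-finding in the absence of a standardized problem list.
    def looks_diabetic(patient):
        high_glucose = any(lab["test"] == "fasting_glucose" and lab["value"] >= 126
                           for lab in patient["labs"])
        high_a1c = any(lab["test"] == "hba1c" and lab["value"] >= 6.5
                       for lab in patient["labs"])
        diabetes_med = any(med in {"metformin", "insulin", "glipizide"}
                           for med in patient["meds"])
        # Require two independent signals -- exactly the resource-burning
        # guesswork that a coded, standardized problem list would replace.
        return sum([high_glucose, high_a1c, diabetes_med]) >= 2

    sample = {"labs": [{"test": "hba1c", "value": 7.2}], "meds": ["metformin"]}
    print(looks_diabetic(sample))             # True: two signals present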

So those are the big broad areas that have been -- and then there's clinical decision support underneath that. We took a crack at a timeline, and you'll see that the incentives piece across the top assumes a vigorous and coordinated effort of payment reforms that would of course provide rocket fuel to all of the work that we're contemplating, and those reforms may or may not take place. At the same time I think what we're trying to distill here are what critical steps are going to be needed under any scenario. So I'm going to stop here and ask if there are other questions or comments.

>> George Isham:

This is George Isham. This is great. I love this picture, Carolyn. It implies, though, that we'll be done just in the first quarter of 2013. And I'm reassured by that --

[laughter]

-- but I think the one aspect of this that isn't captured in the diagram -- and I don't know if it can be; maybe it needs to be in the explanation -- is the dynamic nature of quality improvement and measurement, in the sense that whatever the quality problems are today that we conceptualize and develop measures and quality improvement projects for, in 2013 we'll have a different set of issues that we'll be smarter about and will want to deploy another cycle of improvement around. So we want to be careful not to convey that the whole process is going to fix in place, through regulation and et cetera, our notion of quality problems and issues as of 2007; it's going to have to accommodate change and innovation over time as we go forward.

And then one specific comment on the data exchange and aggregation line. We have that line, I think, moving in a very appropriate direction from 2007 through mid-2012. The information in the parentheses is that we're going to move from highly claims-based data to highly clinical data. And I think what we're going to see is more, and more appropriate, use of clinical data combined with the financial information that comes from claims and other sources, to give us data that allows us to know what value is created in the health care system and, ultimately, to give the system incentive to address affordability. So I think the right-hand label of highly clinical data is a little misleading, with the inference possibly being that it's desirable to move away from claims data, when I think it's desirable to actually supplement it to a greater degree with direct clinical data.

>> Carolyn Clancy:

Right, because no matter how you cut it, we're always going to need -- no matter how fabulous the clinical electronic data are, we are going to need some key pieces of information that by and large are not going to come out of electronic health records. They're likely to come from other administrative streams.

>> George Isham:

I think one other thing, too, is you're never going to get a population view from just collecting all of the medical record data and adding it up. You may need other data sources to look at that -- survey data, survey sources, and so forth. So I think that's an obvious point, but I think the implication of the simple bar here on the diagram is a little misleading in that regard.

>> Carolyn Clancy:

That's very helpful. Thank you. Karen?

>> Karen Bell:

This is Karen Bell, and I'm actually sitting in for Kelly Cronin today. Picking up on your point, George -- because I actually was thinking from a different perspective -- if you listen to some of the discussions going on in other Workgroups, there's a tremendous amount of emphasis being put on patients' access to data and patients having the ability to pull data together in some sort of an instrument that we're currently calling a PHR, for lack of a better word. Ultimately we may be in a position, as we move five years out, to be able to assess through that other vehicle as well whether or not an individual patient received all of the care they needed. And I absolutely agree, this is a stunning way of putting together a very complex issue, and in fact I'm going to steal it and bring it to some of the other Workgroups. But in addition to what you mentioned about the claims, George, I think that we need to think about how we're going to bring in person-centered data as well.

>> Phyllis Torda:

This is Phyllis Torda. Can I jump in?

>> Carolyn Clancy:

Of course.

>> Phyllis Torda:

Thank you. In our little pre-work discussion about this item, I think our primary interest was in communicating that we should be very open-minded about the sources of data that might be aggregated in the future. I think George's and Karen's comments are right in line, but mostly we wanted to communicate that we should move away from the current notion of data aggregation, which is most commonly used to mean aggregating sets of claims data, and to communicate that we thought in the future, and particularly for the kind of longitudinal measurement you've been talking about, we were going to need to think about combining lots of different kinds of data sources.

>> Carolyn Clancy:

That's very helpful. Other comments here?

>> Jerry Osheroff:

Carolyn, this is Jerry Osheroff. Can I jump in?

>> Carolyn Clancy:

Of course.

>> Jerry Osheroff:

There are a lot of very complex items laid out here, and these things are all clearly necessary to getting us where we want to go. But I'm wondering -- they say the future is here, it's just unevenly distributed, and this committee and its work has had the chance to benefit from testimony from organizations like Geisinger and others who sort of represent a high concentration of the future right now, today. One of the things that struck me as I was reading through the excellent requirements document -- and that was a really terrific piece of work -- is that it sort of painted a picture of where we are with diabetes and where we want to go with diabetes, and we've heard testimony from organizations that are getting closer and closer to that vision. So I'm wondering if, as an adjunct to the sort of micro-components of what needs to be done, there might be a role for this committee to share with the outside world some of the things that we've benefited from -- namely, examples or models of organizations that today are implementing pieces of what we consider the future vision to be -- as inspiration and guidance, and as a way of making concrete some of these very technical and arcane and in some cases quite complex things that we're trying to make happen here.

>> Carolyn Clancy:

Let me see if I can restate this so I understand that, Jerry. You're making the case that we've heard testimony from a number of organizations and we could have a discussion about how many there are in the country that are pretty much on the leading edge. To some extent they're on a leading edge far enough ahead of the curve that they're probably making some best bets about where the requirements curve is going for them, in other words, what policymakers, payers, and so forth are going to want in the future. So on some level sharing this with them for a reality check, and also a sense of what do they see as necessary to -- I'm struggling for the right verb here -- reinforce their efforts. Or what would they stop doing if they didn't think that there was some sort of demand for some of this work?

>> Jerry Osheroff:

What you just described is an important part of what I was suggesting -- that is, checking these elements against the people who are doing this stuff already. But in the requirements document there's a future state, described in a conceptual sort of way, and there's a diagram that shows how information is going to flow in this world we're trying to create through the activities of this Workgroup. I guess what I'm suggesting goes beyond your expression of what I just said: if we could actually hold up the things that these people are doing, not just to validate this work or the information that's laid out on the slide, but as a model to help the world get a picture of what the picture on the front of the box looks like when we get all these pieces in place. Some of that stuff is in place right now, and if everybody has a chance to see that and understand it as being real and tangible -- things that are happening in health care today -- it will help all the folks who are working on crafting and placing these individual pieces so that they add up, and all the stakeholders involved will have a clear, concrete picture of the world that we're trying to bring about.

>> Carolyn Clancy:

I think that's incredibly helpful. Because otherwise it can start to sound really, really technical fast.

>> Jane Metzger:

Carolyn, this is Jane Metzger. I'd like to ask a question about the last bar.

>> Carolyn Clancy:

Uh-huh.

>> Jane Metzger:

It isn't clear to me from the way it's stated -- and I'm focusing on the part where certification starts becoming involved. There are lots of clinical decision support toolsets built into EHRs that organizations can use. So when we say functionality, are we talking about those toolsets? Or are we talking about -- let's say we have one set of measures -- certification also moving into looking for those toolsets set up to assist with delivering care consistent with that one set of measures? I think I'm wondering what the term functionality means for certification. Is it the tools, or is it the tools set up for specific quality measures?

>> Carolyn Clancy:

I think for the purposes of our Workgroup, it's the critical linkage to measures of quality. So the idea being that in our future state of quality assessment, what we're aiming for is not just decreased burden and being able to metaphorically hit F7 and upload quality measures, but also that it would be linked with decision support, which clearly gets into workflow issues in a big way, fast. So when a clinician is ordering the beta-blocker, you not only have the infrastructure in place to record whether that happened, but that person is also getting a message. So I'm not sure if that's a toolset --
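
As a minimal sketch of that linkage -- with hypothetical field names and a made-up rule, not any vendor's actual toolset or a Workgroup specification -- the same coded data element can drive both the point-of-care reminder and the later quality measure:

```python
# Hypothetical illustration: one coded element ("beta_blocker_ordered")
# feeds both a CDS reminder at order time and a quality measure later.

def beta_blocker_reminder(patient):
    # CDS rule: prompt when a post-MI patient has no beta-blocker order.
    if patient["post_mi"] and not patient["beta_blocker_ordered"]:
        return "Reminder: consider ordering a beta-blocker."
    return None

def beta_blocker_measure(patients):
    # Quality measure: post-MI patients with a beta-blocker ordered.
    eligible = [p for p in patients if p["post_mi"]]
    met = sum(1 for p in eligible if p["beta_blocker_ordered"])
    return met, len(eligible)

patients = [
    {"post_mi": True, "beta_blocker_ordered": True},
    {"post_mi": True, "beta_blocker_ordered": False},  # gets the reminder
    {"post_mi": False, "beta_blocker_ordered": False},
]
print(beta_blocker_reminder(patients[1]))  # message at order entry
print(beta_blocker_measure(patients))      # (1, 2): numerator, denominator
```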

>> Jane Metzger:

No, I think that’s a -- to me that's content.

>> Carolyn Clancy:

Okay.

>> Jane Metzger:

It's tools and clinical content.

>>

Content.

>> Kristine Martin Anderson:

Maybe I can jump in a little. This is Kristine. What we were trying to communicate here was that -- and it goes back to a lot of the work that's been done on the roadmap, et cetera, as people think about how CDS would evolve -- even though there's a standard toolset, there's dramatic variability in the implementation of those toolsets, such that each party has to invent, for a large part, what clinical decision support might actually help them get to where they need to be. And some element of that will always be there, because practice is local. But there would be some additional movement toward best practices, so we would know what works well related to these quality measures. And those could be disseminated and communicated, even through the Certification Commission. But with no more detail than that, so it's not like there's a list of things that need to be in certification, because we're all the way back to where not even the pilot studies have been implemented. It's undefined. There will be greater definition and understanding of how clinical decision support actually improves quality at the point of care around specific conditions.

>> Carolyn Clancy:

So I think this exchange has clarified the issue but also reinforced for me the importance of language and vocabulary, not terribly dissimilar from your first point, Jane, about being very, very clear about assumptions about one set of measures. Here I think that we're going to need some backup explanatory text to be very clear what we are talking about.

>> Jonathan Teich:

Yeah, Carolyn, this is Jonathan. We found only recently that we've been hearing the term best practice used in two different ways. We used it to mean the best ways, the best design and implementation of CDS, but we're also hearing it used to mean the actual best clinical practices -- you know, you should do this particular thing for your diabetics. So I'd be happy to work with whoever is doing this line to sort of clarify that, because we're getting better wording now.

>> Carolyn Clancy:

That's terrific, thanks. Other comments or questions here?

>> Charlene Underwood:

This is Charlene. On the chart, I like it a lot, and my question would be: on the top line you're reflecting the evolution of payment, hopefully or probably toward a more pay-for-quality-type model. And then the measures come next. On that line of the measures, moving toward that single set of measures is kind of the goal, rather than venue-specific ones. But at the highest level, it seemed like the papers, as well as the whole concept we're working toward, are about moving from a venue-specific, acute care scenario toward models of better managing chronic care, prioritizing the conditions that we're developing measures for along that spectrum -- the things we pay the most for. Would it make sense to show a line where we're moving from, if you will, goals that are set at the institutional or venue-specific level to goals that are set at the more national level, to set the context that we're not only moving the paradigm in terms of payment, but it's supported because there's a national strategy to improve the health care of the nation? You kind of see where I'm going? It just would frame it. Unless that is in the context of how this is presented, which it could be.

>> Carolyn Clancy:

If I could crystallize this and then I'll ask Janet to chime in because she probably thinks about this --

>>

Probably every day, yeah.

>> Carolyn Clancy:

Which is great. In addition to the fact that we're moving from very silo-specific snapshots, if you will, of care provided, it's been a fairly opportunistic approach in terms of measures. That does not in any way suggest any lack of good intentions and wanting the best for patient care, but there really has not been much of a process to say what the most important issues are here. Am I capturing the two major themes there?

>>

Yes.

>> Carolyn Clancy:

Great. And Janet, do you want to add to that?

>> Janet Corrigan:

I think it is good to tie the two together. We're pretty close, I think, to moving to a more specific set of national priorities and goals. The NQF National Priorities Partners group meets in January, and their goal is to have a short list of national priorities by May. That's their objective, and those will likely include three or four chronic conditions and then three or four cross-cutting areas. The chronic conditions will definitely include, I'm pretty sure, low back pain and myocardial infarction, and for both of those we're thinking longitudinally, with a long time window of 12 to 18 months surrounding them, because that builds off of a pilot project. Cross-cutting areas will probably be things like pain management, end-of-life care, and care coordination, as possible examples. So absolutely, that is intended to set the future agenda for where we need to develop measures if they don't currently exist, where measures need to be endorsed by NQF if we haven't got them in the pipeline, and in turn to indicate what the data requirements and the capabilities in the EHR are. So it should hopefully be a more orderly process, or one that at least has had a little more up-front thinking as to where we want to target our resources in terms of quality improvement efforts, as well as measure development and all the rest of it.

>> Charlene Underwood:

It would be valuable, I think, just to be explicit and add a top line to show that, because what you just said kind of frames the goal of what we're trying to do.

>> Janet Corrigan:

Yep.

>> Carolyn Clancy:

Yes, without question.

>> Jerry Osheroff:

Carolyn, this is Jerry. What Charlene was just saying triggers a connection to what I was saying also. She just described putting a top line on this diagram, and I think what I was saying almost amounts to adding a bottom row that talks about success models. Right now the models of success for the things that we're trying to make happen are kind of spotty and disconnected, and they're local, in individual health care organizations. So that's sort of where we are today, and we could actually have a library -- we could literally collect those slide sets that we have that show those local examples -- and then, in the timeline going out into the future, these things that are very local and isolated now can become much more global and interconnected over time, as the things we're trying to make happen become much more part of the fabric of how health care plays out.

>> Carolyn Clancy:

Thanks. I think that makes sense.

>>

Different issue?

>>

Okay.

>> Janet Corrigan:

This is Janet again. One question that I had -- and I'm sorry, I think Karen Bell just left, because she made this point earlier and so did George Isham -- when we touched on the issue of the PHR, the personal health record, and information from the patient: in some of the pilot work NQF has been doing in collaboration with lots of groups on two conditions, looking longitudinally, we started to develop what we think is a more generic framework for quality measurement that would hopefully apply to any chronic condition. A couple of things really came to the surface in the discussions. One was a really strong emphasis on wanting to get at patient outcomes, and patient outcomes over time -- not just when they leave the hospital or three months later or whatever, but looking much longer at health functioning, return to work, as well as things like mortality, all those other outcomes. In addition to that, the other thing we want as we look longitudinally at performance measurement for a chronic condition is a lot more information on health behaviors and health behavioral change. And the third area that we really want, that's critically needed in order to derive the measures of the future when we think about how these sets are going to evolve, is measures of patient engagement in decision-making, which is actually closely tied to making health care more affordable and to resource use, because patients make very different decisions when they're engaged than what we currently see when they're in a more passive role, just following the advice of clinicians.

So all three of those areas -- outcomes, health behaviors and health behavioral change, and third, patient engagement in decision-making -- say to me that we've really got to get out of the box, the traditional health care box, of just thinking about our models that are more clinically or medically oriented, and complement that with more patient-centered or patient-oriented data collection and tools. And the question is at what point, and whether we want to somehow incorporate it into this roadmap or at least allude to it somewhere in the text, will we really bring online that emphasis on the personal health record and that patient-derived data -- and by personal health record I mean more than just patients, or people in the community, having access to clinical information, but also their entering a lot of information themselves, capturing that critical information that we can really only get from those patients -- and how that will interface with this effort. Because it's really those data captured in the PHR from patients that I think will be really important for a lot of measures in the future.

And then one last point. The decision support, hopefully, will get much more oriented toward families and their caregivers, in addition to providing decision support to clinicians.

>> Carolyn Clancy:

I think that's incredibly helpful, and I'm actually just looking at the clock because I didn't realize so much time had passed -- an action-packed agenda. So what I'm going to suggest is this: we've heard terrific feedback, which we will incorporate into a second iteration. If you have additional comments, please send them to Michelle Murray.

So what we've heard specifically is, in some way, trying to capture the dynamic nature of quality assessment and quality improvement. We've heard some very good dimensions about where personal health records fit into this, because a very clear goal of our future state is that all of this measurement and improvement activity be much more patient-centric than it is right now -- although it's hard to get out of the frame we live in now, I'll just say that -- and trying to figure out where that fits. We've heard a number of comments about the need for a success model, and some clarity of language around clinical decision support, and also about being very, very clear about one set of measures and linking that to some sense of priorities that are going to cut across all. Even though, in my own personal view, I think that many communities may want to supplement a core set of cross-cutting issues with stuff that's extremely relevant to them but maybe not as important in other areas.

We have, as you can see, labeled pay for performance and value-based purchasing as an accelerant here. We've also labeled as an accelerant clarity about business practices as well as legal requirements for privacy and data sharing. If there are other areas that should be accelerants that we have not captured, we would love to get your comments. And if after staring at this for a bit you realize there are other areas we've left out, please let us know; that would be great.

So with that, I would like to move our attention to the next part of the meeting, which is talking about the recommendation ideas and action items. And what we're hoping to get here -- because this is not the first time we've had this opportunity -- you can move to the next slide -- this is not the first time we've actually talked about these areas. But we're trying to get a little bit more specificity, and the questions that seemed important to us are, first, is the particular area or action item truly part of a critical path to achieving our vision, recognizing all the uncertainties of the current context we live in. We are going to have a new President in January of 2009 -- I'm not making any predictions whatsoever, but there will be a different one than the one we have today; the Constitution specifies that. That has lots of implications for what we think the future political landscape might be, and so forth. But that is the world we live in. So when I say critical path, I mean a recommendation or area that's going to be important under almost any set of sociopolitical circumstances.

The next question is, should we move ahead, and/or do we need more information and analysis to know whether we will use this area as the basis of a particular recommendation? And we need to be thinking at all times about who the groups are whose buy-in and alignment are needed to make this happen. After all, in the great world of quality, making measures for all of the current problems that we have right now is so much easier than actually making change. So I think we've got to be anticipating that as part of the future state at all times.

So the sources of the recommendation ideas and action items specifically come from our prior discussions and input from others, and the list really focuses on topics where health IT can be leveraged to support longitudinal measurement and improvement, and again we're looking for the critical path. Next slide.

So again, I've just articulated these basic questions, and based on the output of our discussion the staff -- my highly regarded colleagues, I should say -- will identify ideas that should be further developed and draft language for specific recommendations to bring to the American Health Information Community.

So with that, I'm going to move to topic one. One specific recommendation relates to this idea of a quality dataset to support quality measurement and reporting. And this would really build on -- this is HITEP on steroids, if you will -- the idea that essentially we would turn to the Quality Alliance Steering Committee, which is really focused on moving up from the AQA and HQA, to extend the work of HITEP: to compare CCD standards to the priority areas and data elements reviewed by HITEP, to determine how the CCD applies to the minimum dataset or quality dataset needs of a quality enterprise, and so forth. So I guess one question there from me -- let me just continue here. This also includes some recommendations for CMS, and I don't believe we have anyone from CMS on the phone, so you better believe we'll be -- is someone here?

>> Michael Rapp:

Yes, I'm on the phone.

>> Carolyn Clancy:

Great, Mike, thank you. As always, you've been thoughtful and reserved, so I appreciate that. So this recommendation would further specify that CMS would be collaborating with measure developers, health IT vendors, AQA and HQA, and so forth to define quality datasets for hospital and longitudinal quality measurement. And that is not just a matter of what the data elements are, but also what the optimal source is. You can see we're packing an awful lot into this recommendation. Ultimately this would feed up to the Certification Commission. So if I haven't completely overloaded us at approximately 3:30, let me ask for comments on this specific topic.
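
To picture the comparison being proposed, here is a toy gap check, assuming made-up element names rather than the actual HITEP output or CCD specification: take the data elements a quality dataset might need and ask which ones a CCD-style summary document already carries.

```python
# Toy gap check: both sets below are hypothetical stand-ins, not the
# real HITEP data elements or CCD sections.
QUALITY_DATASET = {"medications", "lab_results", "vital_signs",
                   "problem_list", "smoking_status"}
CCD_SECTIONS = {"medications", "lab_results", "vital_signs",
                "problem_list"}

covered = QUALITY_DATASET & CCD_SECTIONS
gaps = QUALITY_DATASET - CCD_SECTIONS
print("carried by the summary document:", sorted(covered))
print("needs another source:", sorted(gaps))  # ['smoking_status']
```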

>> Jonathan Teich:

Carolyn, this is Jonathan. The first thing that comes to mind is, is it just the CCD that is part of this work? If we have a checklist of quality items that we want, or data items that are necessary for quality, is it possible to use more than just the CCD -- the DEEDS standards or anything else that's already within the HITSP or HL7 boundaries?

>> Carolyn Clancy:

Yes.

>> Jonathan Teich:

Okay, because it doesn't say that.

>> Carolyn Clancy:

Okay, we can clarify that.

>> Margaret VanAmringe:

This is Margaret VanAmringe. I think this is a critically important one. If we're going to have the kind of information we want to do longitudinal measurement, as well as to make sure that one of the biggest problems in health care -- the handoff and communication side of things -- is addressed, then I think this particular area could help significantly with that. And given how many variables there are in the presentation we just heard earlier this afternoon, in terms of the CCD, the CCR, and all that, coming up with a core set that ought to be at least used by and incorporated into the EHRs would to me be extremely valuable. Because I just think right now there's so much information being lost, number one, and in some cases people are gathering so much information that it's overpowering and a huge burden, and other organizations don't want to do that. So to have something in between, but valuable, would to me be a great thing to do.

>> Carolyn Clancy:

Margaret, I'm going to do some creative thinking off the top of my head, which is not always a good place to be. But Janet's mention of care coordination, and your mention of hand-offs and transitions, made me wonder if at the margins it wouldn't make sense to imagine that that quality dataset, particularly if it's initially implemented for ambulatory care and hospital care, includes some specific overlap items that relate to that kind of transition -- areas where we know we drop the ball. I'm saying this as if I am completely confident that we could identify a few high-leverage opportunities. There are a number of groups working on that right now.

>> Margaret VanAmringe:

Oh, I think we could, absolutely.

>> Michael Rapp:

Carolyn, this is Mike Rapp from CMS. And I want to just say that I support this completely in concept. Because I think we've addressed this frequently: the way measures get developed is you develop the measures and then figure out how you're going to get the data. The more we can move to a situation where we have a certain set of data and then develop measures from that, the better -- of course, the dataset would take into account the things that are most important. But as you pointed out, for nursing homes there is a minimum dataset, and that dataset is where the measures come from. We dealt with the same issue to a certain extent with Part D. There's a certain set of data that comes from that, and we worked on developing measures deriving directly from it. So in the hospital arena in particular, we don't have that, and I think if we could move toward it, it would be quite helpful. And I think you're also aware that as part of the Deficit Reduction Act, CMS was charged with developing a tool specifically for following patients across the continuum of care, starting at hospital discharge and in the post-acute settings after that, and that tool, the CARE instrument, is going to be implemented for the purposes of the demonstration starting in January of 2008. So that's an example of a basic dataset that's already been worked on to a large extent, and then one could tack other things on to that. But I just want to say that we would support this wholeheartedly.

>> George Isham:

Carolyn?

>> Carolyn Clancy:

Yes.

>> George Isham:

I hope this isn't an off-the-mark comment, but how are we going to accommodate variation in performance and variation in quality problems, a la the Wennberg-Fisher kind of studies, with our national standards and, Janet, with our national priority sets? You know, I sometimes worry that we lose sight of the fact that different regions of this country have different fundamental quality problems, based upon the fact that they're different regions, and that they probably ought to be driven by different sets of priorities and different sets of standards that are manageable for care systems in those regions. And I know maybe that's off the whole mark of this conversation, but I just wanted to make that comment.

>> Carolyn Clancy:

No, I don't think it's off the mark. I mean, I think it's pretty important, and I guess the overarching question is -- and I'm sure Elliott has done this, because it seems like he's thought of everything, referring to Elliott Fisher from Dartmouth -- but if you look at sources of variation, obviously there's variation that's completely appropriate because it reflects a higher prevalence of disease, as when you see more skin cancer in the South, for example. And then there's all kinds of variation that's probably attributable to variations in supply. Coming from Massachusetts, a State rich in medical resources, Massachusetts has some variation problems that simply don't exist in other States, as far away as Connecticut, that don't have the same supply of hospital beds and so forth. But what I'm really not clear about, George, is where the greatest source of variability lies, because we know that a lot of hospitals do very well in some areas and not so well in others.

So I guess my own take -- and I know Janet will want to jump in -- is that it feels reasonable to me to imagine a small set of national priorities, particularly very cross-cutting ones, that are probably an issue no matter where you live, recognizing that there are clinical areas where, no matter where you live, we're not doing very well. I'm thinking about the very high performance benchmarks that Minnesota has set for itself; because they're so strict, you know, the overall performance rate today is pretty awful, but there's a lot of alignment around moving that forward and accelerating rapidly. So you would have some core national set that's supplemented a lot locally. But that feels like a glib response, so I'm going to turn to Janet.

>> Janet Corrigan:

No, I think that's right on target, Carolyn. As we've thought about the national priority-setting process, we've tried to be very, very explicit that we're talking about a short list of national priorities. And it's essentially -- I mean, we're going to have measure reporting requirements imposed on health care providers whether we like it or not. So why not get those focused on a more limited set of very high-priority, high-leverage areas, and in no way preclude a local region from moving beyond that short list of priorities? And if a region is an exemplary performer on one of the national priorities, well, great -- share your best practices and then move on to other ones. It isn't intended to be a be-all and end-all. We're developing the national priorities in partnership with 28 other national organizations that are a major effector arm -- the accreditors, the board certifiers, the major purchasing alliances, and other groups. Virtually all of those set their own priorities for a variety of applications. We don't expect the national priorities to supersede those. What we're asking of each partner at the table is that they at least agree to place some emphasis -- a significant amount of emphasis -- on those national priorities, but once again it will be a short core list, recognizing they'll choose to go beyond it for many purposes.

>> Reed Tuckson:

This is Reed Tuckson. I just want to make sure I understand that last thought. If you take what you just said, Janet, and roll it forward, you still have an extraordinary number of data priorities and measure priorities that physicians will still have to meet, and that other organizations will have to meet, beyond the national focus set. So it sounds like we're saying, at one level, that we appreciate the need for focus and prioritization; on the other hand, everything is still in the mix. So I don't quite understand the envisioned future of how this system is going to respond to the national priorities and everybody else's coalition priorities.

>> George Isham:

Let me add to Reed's comment, because I think this is an important point. If you have that set of minimum priorities, as you've laid out, Janet, developed by national groups, and let's say you have regional differences in performance against that minimum set -- in much the same way that New England performed very well against the HEDIS standards when they were rolled out in the '90s -- and then you tie this to payment, to incentives, the issue is: are you going to be systematically advantaging one region of the country based upon just regional variations, as opposed to challenging the entire country against the priorities it has? Which I know is a slightly different point than Reed is making. But there are some complex issues that are driven by these regional variations and by this need for parsimony, in terms of a parsimonious national set of priorities and measurement standards.

>> Janet Corrigan:

Yeah, that latter issue, George, I think is one that the Priorities Partners committee needs to consider as one of the criteria in setting priorities -- the extent to which we probably don't want a set of priorities that would advantage or grossly disadvantage one region over another, unless the region that is advantaged truly is an exemplar in many, many different areas. So I guess that is one of the criteria for setting priorities. And it could be a downside if it's not handled properly.

Reed, I think that your issue is a somewhat different one. I mean, what do we have now, about 150 to 200 measures that are endorsed by NQF and widely in the mix, accepted and used by AQA and HQA, or as part of the Medicare nursing home set or home health set? I don't see those going away. I think, though, that as we develop the national priorities, two things will happen. Hopefully, the alliances will, over time, retire some of the measures that we're using right now that perhaps aren't really hitting at high-leverage areas, and move to ones that are very much aligned with the priorities. So maybe over time we will see some sort of reshuffling. But the other thing the priorities are going to do is expose gaps in our current portfolio of measures. Because as we set these priorities, and they're very much along this longitudinal model, what you immediately see is that we've got some really critical gaps in that portfolio. There's care coordination, there's patient engagement in decision-making, all of the outcomes measures -- we don't have much available in those areas, and of course we don't have much on resource use, as you well know. And then, as we begin to try to actually develop the composite measures and overall indicators of value, I think we're going to see quite a few new measures coming online, a sizable number, and that will probably be at a faster rate than we see existing, standardized measures going offline.

>> Reed Tuckson:

Janet, I find that argument persuasive, and I think I begin to get it. The only other comment I'll make, and then I'll leave it alone so you can move on, is that I am as excited as anyone, I think, about envisioning the future, and all of the discussions we've had for the last two hours have been trying to look at how all this is going to build toward an envisioned future. I hope that we are able to be very clear, in our reporting back to AHIC, about the relationship between an envisioned ideal future and the immediate, urgent, right now. I think we are all sophisticated enough to understand that this very nascent field of performance assessment, and the collection of data to be able to describe performance, is very, very much under attack right now as the reality of it sets in. And I think it is important to make sure that we are using the forum of the NQF, the AQA, and this AHIC Quality committee, in association with the overall AHIC data collection infrastructure opportunities, to be able to populate a statistically significant enough set of performance measures -- fundamental, basic ones -- that we can move forward with some degree of confidence, so that this field can get out of the rut and move forward, and then simultaneously worry about the more extravagant next steps. But I worry that if we do not focus in on a set of measures today that everybody can be explicit about -- this is the question that came up about an hour ago, as to what the explicit definition is of which measures we are talking about -- and then find a way to aggregate the data at statistical significance and move that forward, the beautiful future we envision is going to get pushed back because of challenges in the immediate right now.

>> Carolyn Clancy:

Reed, we hear you.

>> Reed Tuckson:

Thank you.

>> Michael Rapp:

Carolyn, this is Mike. I'd just like to follow up on some of this. I'm not sure if I'm hearing it right, but maybe we need to separate two different things: the physician world from the hospital world. And the reason I say that is that Reed's point is certainly clear with regard to the physician world -- basically we're dealing with claims data there, and aggregation of claims data and so forth. In the hospital world, though, it's a bit different in that, at least the way CMS does it, we collect data from the hospitals that they have to do chart abstraction for. And so although hospitals have electronic health information systems galore, it would seem that the problem would be helped a lot by having a dataset that's fairly clear. That would move that arena quite a bit, because the hospitals are quite burdened by that -- or at least they complain quite a bit about it. The physicians, on the other hand -- we're talking about claims data, so they're putting that in there. So I think the two issues are somewhat different, and I'm just trying to separate them a little bit.

>> Reed Tuckson:

Carolyn, this is Reed -- I will make this the last one. I completely agree, and I think that is a friendly amendment. The only thing it encourages me to comment on is that the AQA measures, as again you know better than I, are a combination of claims and office data, some of which should be and are amenable to tools like CPT II codes and so forth. One of the things that I don't think we are doing with enough clarity is, through this process, to urge in advance the use of those sorts of mechanisms -- things that we can do right now -- to capture and populate some of those other non-claims-based AQA measures. If we've been more explicit than I realize, I apologize. But if we haven't, I don't want to fail to put that forward to AHIC as an immediate issue. I'll get off this point now, thanks.

>> Janet Corrigan:

Reed, this is Janet. Could I just make one quick comment on that? I think in order to have a practical set of measures and data elements, we also have to change the way we're developing and specifying the measures. If we want to move the field expeditiously and very quickly, we probably have to work very closely with the measure developers, because right now the measures are being developed and specified with these very, very long lists of exclusions for the denominator, and it's going to be a long time before we have automated data that can support that kind of measure. So I think there's a two-way street here to getting a practical solution and being able to move more rapidly. One, we probably have to streamline and rethink a little bit and retool some of the measures. And then two, you're right, that will get us to a smaller dataset that's required to support our existing set of measures.

>> Reed Tuckson:

Thank you.

>> Carolyn Clancy:

Okay, so I'm going to take the chair's prerogative here, and in a moment we're going to move to topic three. And the reason I'm saying that is I think we need some additional clarification around issues that cut across both topics one and two, because I think we've got some sequencing issues. We're talking about a number of issues. One is fundamentally reengineering the whole enterprise of quality measurement to be anticipating data needs and so forth up front, as well as what's important. We've got some issues related to how the output of HITEP maps onto or overlaps with the CCD or not, and what the implications are there, which is a very short-term but incredibly important issue. Mike has reminded us very clearly that he's got one quality dataset in law, and it's not like he's looking at that as optional. But I think we need to bring back to you at our next meeting a couple of items. One is a report from the HITEP process so far, and we're going to need to get very concrete about the workflow follow-up there. And I think we're also going to need to come back with a sequence of activities that begins to lay out some chicken-or-egg sorts of issues. And Reed, I for one am taking personal responsibility for giving some serious thought to, and making sure that we don't lose sight of, your point about communicating in a clear way -- I said it here, remember that -- this tension between driving toward this envisioned future and the immediate, pretty important challenges we're facing right now. That's not to make light of it, or to declare that there are so many problems we can't move forward, but not to acknowledge those issues would, I think, be a huge mistake.

>> Josie Williams:

Carolyn, can I just add to that, too? Josie. The issue that I see as we're trying hard to implement, as you know, is not only the ones we've talked about and the resistance that's there, as Reed implied, but also one of structural necessities and resources that are not in place -- and not so much in the metropolitan areas as in the underserved areas, the rural areas, those types of areas where the resources and the structures are not in place to even play in the game.

>>

Right.

>> Carolyn Clancy:

Yeah, no, that is huge. All right, I'm going to move to topic three, which is about record matching issues. Now, this is labeled on your slide as within the context of a Nationwide Health Information Network, and in some communities where those prototype implementations are taking place this will indeed be real-time and very live right now. But just to make this extremely concrete, and bigger than where there happen to be NHIN prototype implementations, I will remind you -- or share with you, for folks who haven't been following this -- that very recently the hospitals started reporting on 30-day mortality. Right away you are talking about a record matching challenge, because you're trying to match a patient hospital record to a National Death Index or something along those lines, within a specified timeframe. So this is not a theoretical problem; it's not something we'll only have to deal with once we crash through all of the problems that we see around us today. This is something that's happening right now. And I think it's fair to say that it has not been considered acceptable in this country to think of a unique patient identifier as the solution. In some countries that works, and here it does not. So I'm just going to leave it at that.

That means there have been a variety of strategies developed to figure out how to match patients -- how to make sure you're dealing with the same patient when you're drawing on data from two or more sources for the purposes of quality assessment and improvement. So some of the ideas that have been teed up here include an environmental scan of some specific strategies that are going on right now. Another is specifying or clarifying the role of electronic health records and health information exchanges in record matching, and again trying to come up with a clear articulation of options and strategies, including the sensitivity and specificity of the various options. Remember that any algorithm is going to be less than perfect, because some days I go by C Clancy, some days I go by Carolyn M, some days by CM, and so on and so forth. That's the most concrete, basic problem that comes up in trying to match records. So one strategy is going to be extremely specific and never get it wrong -- never pull in the other people in Montgomery County, where I live, who share common initials and so forth -- but it will miss some cases when it was me. Other strategies are going to have the reverse sort of situation, and we've said before that no one in this group could identify having seen that very clearly laid out. Laying it out would at least be an important step moving us forward, because until we get to the perfect world, in the way that George described it better than our chart did, of enriched administrative data -- or a data strategy that takes advantage both of admin data and clinical electronic data -- to get us to the kind of easy quality measurement we want on what's important, we are going to be stuck with this record matching kind of issue. This similarly comes up with providers. And I'm not sure which I think has more importance, but it does seem to me they are pretty important steps on the path moving forward, irrespective of lots of external context issues. So let me just ask if there are comments or objections to that. Justine?
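
As a minimal sketch of the tradeoff being described -- the names, fields, and threshold below are hypothetical illustrations, not any group's actual matching algorithm -- a strict rule rarely links two different people but misses name variants, while a looser rule reverses that:

```python
from difflib import SequenceMatcher

def strict_match(a, b):
    # High specificity: link only on exact name plus date of birth.
    return a["name"] == b["name"] and a["dob"] == b["dob"]

def loose_match(a, b, threshold=0.6):
    # Higher sensitivity: same date of birth plus approximate name similarity.
    score = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
    return a["dob"] == b["dob"] and score >= threshold

hospital_record = {"name": "Jane A Smith", "dob": "1950-01-01"}
death_index_row = {"name": "J Smith", "dob": "1950-01-01"}

print(strict_match(hospital_record, death_index_row))  # False: a missed match
print(loose_match(hospital_record, death_index_row))   # True: found, at some risk
                                                       # of pulling in a different
                                                       # "J Smith" by mistake
```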

>> Justine Carr:

NCVHS actually held hearings on matching patients with their records, so there's some very good testimony and some very interesting models. And there was also a letter to the Secretary.

>> Carolyn Clancy:

Right. Other comments?

>> Margaret VanAmringe:

This is Margaret. I think this is a good one, because I think this is also going to be an essential infrastructure issue for data aggregation and longitudinal analyses.

>>

That's what I was alluding to, yeah.

>> Carolyn Clancy:

Great. Well, we will do some additional analysis and bring that back in the form of some options for recommendations moving forward. And we will take advantage of every syllable of the testimony that NCVHS has already heard.

Topic four is about considering specific and actionable options for a data aggregation strategy that meets the unique needs of the quality measurement and reporting enterprise. Now, in its broadest dimensions this one is huge and would sink all of us immediately. And as I mentioned before, we will be bringing the results of the RFI back to this Workgroup after it's presented to AQA, so that will be the second agenda item for the next meeting -- a briefing that includes not only a summary of the responses but also a sense of the discussion that goes on at the AQA, so that people have a sense of how one important group of stakeholders responded. Again, as precursors, I think, to specific action items: defining the various approaches to data aggregation and articulating both the challenges and the opportunities inherent in those different approaches.

So for example, I know many people who are connected with the hospital enterprise have a great deal of faith in the operational consistency of the central repository model, right? They have other issues -- Mike would be happy to share any of their issues with you at any moment in time -- but what they know is that there's literally one set of actors working on their issues. There's one set of troubleshooters that explains to any hospital in the country calling up, I don't know what to do with this exception, or whatever. So I've been left with the impression that that sense of operational consistency greatly outweighs concerns about timeliness and so forth, but the reality is that if it's feeding into a central repository, it's likely to be slower. So that doesn't help us with our goal of getting clinicians information in something like real time. On the other hand, if you have a strategy that's incredibly local, there are issues of consistency -- and I'm just hitting some very high-level issues. I think clarifying what some of these are might be very helpful, and certainly I think would be a useful product for AHIC 2.0 as they figure out how closely, and how, they intertwine with the quality enterprise.

There are also a lot of issues here related to aggregation, including data from registries, and what I'm going to call a kind of meso-architectural problem which the hospitals are going to be facing in the near future. Right now, as Mike pointed out, they are largely collecting data from chart abstractions. We believe -- and I'm a self-interested party here -- that there are likely to be measures endorsed coming out of the NQF that use admin data. Their strategy is about chart abstraction, creating files from the data that's collected, and sending them off to the Iowa Foundation for Medical Care. How to integrate these disparate strategies, or disparate data sources, is going to be a big issue, and we'll try to bring some clarity on that back to you. Let me ask if there are questions or comments on this point, or if anyone thinks I've completely left the planet.

>> Phyllis Torda:

Carolyn, this is Phyllis. I think another component of this is simply understanding what some of the more decentralized models might look like.

>> Carolyn Clancy:

Yeah, I think that's very helpful. And I must say, getting back to the issue of variability, it seemed to me we heard quite a bit of variability in our prior testimony. But distilling that in a way that's understandable and accessible to a broader audience, which doesn't necessarily have the time commitment for our Workgroup discussions, I think would be very helpful. Part of this might also relate to trying to learn from -- and this is another example of a decentralized model -- the project that Mark McClellan is leading, which involves using a distributed approach to collecting data that maximizes privacy: essentially he's developing the algorithms and software programs, and none of the data leaves its home, so to speak, its home health care organization. In terms of maximizing privacy that is fabulous; again, in terms of getting clinicians actionable information in real time, it is likely to be slower, at least for the foreseeable future. But that will also lead us back to the BQI pilots. Because clearly those need to --
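
A minimal sketch of that distributed model, with entirely hypothetical data and measure logic rather than the project's actual algorithms: each organization runs the same measure locally and shares only aggregate counts, so no patient-level data leaves its home.

```python
def local_measure_counts(patients):
    # Hypothetical measure: diabetics with HbA1c under 7.0.
    denominator = [p for p in patients if p["diabetic"]]
    numerator = [p for p in denominator if p["hba1c"] < 7.0]
    return {"num": len(numerator), "den": len(denominator)}

# Patient-level records stay inside each site; only counts travel.
site_a = [{"diabetic": True, "hba1c": 6.5}, {"diabetic": True, "hba1c": 8.2}]
site_b = [{"diabetic": True, "hba1c": 6.9}, {"diabetic": False, "hba1c": 5.4}]

reports = [local_measure_counts(site) for site in (site_a, site_b)]
num = sum(r["num"] for r in reports)
den = sum(r["den"] for r in reports)
print(f"aggregate rate: {num}/{den} = {num/den:.0%}")  # 2/3 = 67%
```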

And then our last one, topic five, I think of as the Isham topic, largely, George, because you said it to me, I think, on the right day, when I could hear it, and said it in a compelling way: that in many ways, many parts of the quality and health IT communities actually don't know each other. Now, I haven't figured out what a getting-to-know-you recommendation looks like for the AHIC, but some of the specific action items that have been teed up here include articulating ambulatory electronic health record functionality requirements that address the magnitude of patient data that accumulates over time. I might also include in there some aspects of what kind of information you need to have around all the time, and what you can effectively store off to the side. Now this gets into the distinction between what's a chronic problem and what's an acute problem, but it's also about what's a chronic active problem as opposed to one that is incredibly stable. So that might be one aspect of it. And it might ultimately require working through the AHIC priority-setting process to articulate the need for such requirements to be included in use cases and so forth in the future. Now, at this point I feel like I'm right at the edge of my understanding, although it feels real. But that could be because I'm at the edge of fatigue as well. Let me just ask if there are any comments on this last topic.

>> Rick Stephens:

Carolyn, this is Rick. One of the other questions is, is there actually an inventory of organizations who are all working in this space? I think that might at least help begin the process of everyone understanding what others are doing, so they can start this interaction.

>> Carolyn Clancy:

Right, so in other words, who would be invited to this party? I think that's a great idea. The only way I know how to get there would be an iterative sort of snowball process, but that's okay. The good news is I'm going to AMIA in November, so I can take good advantage of that. Any other last parting comments or words of wisdom? This is your last shot, Reed. I'm just kidding.

>> Reed Tuckson:

No, I'll be mercifully quiet.

[laughter]

I'm enthusiastic, by the way.

>>

Public comment.

>> Carolyn Clancy:

Yes, public comment. Thank you. I think we're now ready to turn to public comment.

>> Jennifer Macellaro:

This is Jennifer. You'll see a slide in just a second that has a phone number for people who have been listening over the Web. Anybody who’s already listening on the phone just needs to press star 1 to get into the queue, and there's an e-mail address there if anybody would like to write in comments after the meeting. I'll check back with you after we've let everyone have enough time.

>> Rick Stephens:

Hello?

>> Judy Sparrow:

Yep, Rick.

>> Rick Stephens:

I want to make sure we're still there.

>> Judy Sparrow:

You're here. We do have one public comment here in the room. So let me ask him -- Rick Stephens is on the phone.

>> Richard Singerman:

Can you hear me okay?

>> Rick Stephens:

Loud and clear, thanks.

>> Richard Singerman:

Great. Thanks. This is Richard Singerman, with the IBM healthcare practice; I actually used to be with ONC. And having worked on that CDS roadmap, one of the key things that I think was important, and that I think is also important for the quality area to address, is: what really is the secret in the sauce for transferability of best practices? There are definite reasons why there are these five to eight organizations across the country that have the best practices all the time, as opposed to the other 5,000 hospitals -- issues of culture or whatever. I think it's important that those external influences be highlighted in best practices, so that there really is an understanding of what is transferable. So that's one key point.

I'd say a second area -- because our informatics and translational medicine practice works at ground zero with academic medical centers and community hospitals trying to raise the bar -- is to echo some of the comments I heard before. That is, the executive level of these small organizations really needs a clear connection between the dots, and not a 100-page report -- more like a four-page executive summary of how this atomic-level data connects to their potential EHR investments, because not all hospitals have sophisticated systems; how that atomic-level data then feeds into the quality measures; and then, if these organizations are making the investment, they're really looking at pay for performance, because they need a payoff and they really need to articulate that payoff to their boards. So there also needs to be that clear connection to pay for performance.

And also, one thing I haven't heard as much about is how these organizations, whether they be the Kaisers of the world or the VAs, are using this quality information for actual executive-level decision-making. There's almost a tacit assumption that if the quality measure information is there, people will be able to make intelligent decisions. But for a real concrete example, we've heard from clients: well, in our organization, on the operations and business side we're humming along -- last year our doctors were told to spend eight minutes per patient and this year they're told to spend seven minutes per patient, and that's better because they're seeing more patients in an hour -- whereas the clinical side of the house says, we can't really make the same argument or compete with that argument today, but we would like to be able to say that if we spend ten minutes per patient we're going to have better diabetes outcomes or better chronic care management. That kind of tradeoff, even though it sounds very simplistic -- that sophistication doesn't exist for some organizations on the clinical side of the house as much as it does on the operational and financial side, where those electronic systems first emerged. So I think that's really an important point.

Finally, one of the things we're seeing on this notion of connecting quality and translational research to outcomes research is in the area of interoperability. That is, as academic medical centers reach out more and more to affiliated ambulatory environments, there's a question from the HIPAA point of view: is that still one covered entity if the large academic medical center happens to own the physician practice? That kind of begs the question of when the information on the clinical side of the house is shareable with the research side of the house. Does it have to be de-identified, do you have to have the patient's explicit permission, or is it covered under the TPO rubric via quality? Because quality slash quality research starts to really trickle into translational research and translational medicine, which NIH has been pushing so hard, and we're finding a lot of organizations are struggling with that, so guidance from NCVHS would be really appreciated on that.

And finally, from the quality point of view, I'd say that because practical health care organizations are dealing with very limited budgets at times, they want to know what the staged approach is. Even though we hear about the high-hit measures, are those the ones that are also easiest to implement, or are they the highest clinical value but the hardest to implement? What would a staged, three- or four-stage approach be -- this year we're going to go for these three measures that require these 20 data elements, so we can qualify for these six pay-for-performance payments, whether they're for Leapfrog or CMS or whatever flavor?

So I think these are some of the areas of clarity that the organizations we work with could really use, from a very practical, ground-zero point of view.

>> Carolyn Clancy:

Those are incredibly important and I really appreciate it. Thank you.

>> Jennifer Macellaro:

This is Jennifer; we don't have any comments on the phone.

>> Carolyn Clancy:

Great. Well, listen, you have all done yeoman's work today, and I want to thank you for a really terrific discussion. We've teed up at least two items for our next agenda, and because I think continuity of our discussions is going to be very important, we'll make sure you get the minutes relatively early before the next meeting. If you have any comments on that diagram, please forward them to Michelle Murray, who is michelle dot murray at hhs dot gov. And thank you again for all of your hard work. This has really been exciting, and I think we're almost on the verge of figuring out what our critical path is. Thanks a lot.