American Health Information Community

Quality Workgroup Meeting #8

Thursday, May 3, 2007

Disclaimer

The views expressed in written conference materials or publications and by speakers and moderators at HHS-sponsored conferences do not necessarily reflect the official policies of HHS; nor does mention of trade names, commercial practices, or organizations imply endorsement by the U.S. Government.

>> Carolyn Clancy:

This is Carolyn Clancy, and I’m really pleased to report that Rick Stephens is here with us, in person, today as well. So we have a full complement, and we have a really terrific agenda for this afternoon. Just by way of a couple of reflections, I want to remind those of you who may not have had this streaming across your homepage as a banner that this is National Patient Safety Week, and I mention this for two reasons: one, because it's very important in its own right, and two, because there are a few people who won't be able to join us, and I have to leave early, because there's a big National Patient Safety Foundation meeting here in town. We have a really wonderful agenda today, and this occurs at a time when we've made, I think, substantial progress on the specific charge given to us by the American Health Information Community, and we're now making a transition to addressing the broader charge.

Now, the broader charge gives us a huge array of opportunities. However, it also comes with a little bit of risk that we may get so mired and so overwhelmed by opportunities that we're not as strategic and as focused as we could be. Our challenge is actually going to be to try to be strategic and focused. Right now I'm looking directly at my co-chair, and at Helen Darling, and I'm going to ask that if you're getting impatient with us, you please let us know. But frankly, I would welcome feedback from everyone, either in the room here at the Humphrey Building or on the phone: if it feels like we're wandering a little bit, you should let us know. There's not a clear path that's set out with the ten commandments about what's the next logical step, and I think it's okay if we give that some thought. And I think today's presentations will actually give us a lot of context for what the next step is here. At the same time, I think we've got to be as focused as possible. So with that, I'm going to turn it to my co-chair.

>> Rick Stephens:

Carolyn, thanks very much. It is a great opportunity to be here today, to be able to see everyone face-to-face and participate in the meeting. I think you pretty well set the tone for today's discussion, Carolyn, and I, too, like you, am looking forward to the discussions. I have a few comments later on, maybe about a process for how we might be able to move forward in a more deliberate way. Thank you.

>> Kelly Cronin:

Is Matt on the phone? I think we somehow passed over the call-in procedures. It would be helpful to just introduce who is on the phone and go over briefly our call-in procedures.

>> Matt McCoy:

Yeah, I can do that, and I'll remind everybody of what Judy usually does, which is that this meeting is operating under the FACA guidelines, which means it’s being broadcast over the Web for members of the public to listen to and it's also being transcribed and recorded.

By way of call-in procedures for Workgroup members who are on the phone, just a couple of things. Please keep your phone muted when you're not saying anything so we can cut down on background noise, and when you do come in with a comment, please say your name first so we can get it on the transcript.

I will quickly run down the list of Workgroup members we have calling in for today. Jane Metzger, First Consulting Group. Pam French from Boeing. Theresa Cullen from the Indian Health Service. Tanya Woodward from OPM. Helen Burstin from the National Quality Forum. Michael Kaszynski from OPM. Jon Teich, Harvard Medical School. William Tierney from Indiana University School of Medicine. Anne Easton from OPM. And Mary Beth Bigley from the Office of the Surgeon General. Are there any Workgroup members on the phone whom I missed?

>> Judy Sparrow:

And maybe we could go around the room and introduce the members here. Helen, would you like to begin?

>> Helen Darling:

Helen Darling from the National Business Group on Health.

>> Janet Corrigan:

Janet Corrigan from the National Quality Forum.

>> Kelly Cronin:

Kelly Cronin, ONC.

>> Carolyn Clancy:

Carolyn Clancy, AHRQ.

>> Rick Stephens:

Rick Stephens, The Boeing Company.

>> Ron Paulus:

Ron Paulus, Geisinger Health System.

>> Mike Kramer:

Mike Kramer, Trinity Health.

>> Susan Postal:

Susan Postal, HCA.

>> Mike Rapp:

Michael Rapp from the Centers for Medicare and Medicaid Services.

>> Charlene Underwood:

Charlene Underwood, Siemens Medical Solutions and HIMSS Electronic Health Record Vendor Association.

>> Jerry Osheroff:

Jerry Osheroff, Thomson Healthcare.

>> Judy Sparrow:

Carolyn --

>> Carolyn Clancy:

Great. So now that we're back on to the official sort of way to go through the agenda, I would like to ask if anyone is willing to make a motion to accept the minutes from the last meeting.

>>

So moved.

>>

Second?

>>

Second.

>> Carolyn Clancy:

Okay, any disagreements? Or comments? Okay. The minutes are accepted.

I am now going to turn to Kelly Cronin, who is going to give us an update on an ad hoc subgroup on clinical decision support. Just to remind you, a key part of our specific charge, and of our thinking throughout, was not simply that IT can help us get smarter about collecting quality measures faster and faster, so that we can drive looking through the rearview mirror faster and faster, but that this needed to be very clearly linked with getting that information to clinicians at the point of care. That is not the only type of application or specific focus for clinical decision support. And to that end I'll turn to Kelly.

>> Jane Metzger:

This is Jane Metzger. I cannot get into the Website.

>> Kelly Cronin:

Matt?

>> Matt McCoy:

Yeah, Jane, I can e-mail you and we can -- I can work with you offline. If nobody else is having a problem with this, then we can fix it while the meeting goes forward. Does that work?

>> Jane Metzger:

Great.

>> Kelly Cronin:

Is anyone else having a problem with the Website?

>>

None.

>> Mike Kaszynski:

Mike Kaszynski from OPM. I'm having a problem.

>> Kelly Cronin:

Okay. Matt, if you can follow up with Mike as well, that’d be great.

>> Matt McCoy:

Sure. I got that.

>>

(inaudible).

>>

They have a problem getting to the webinars, also. I don't know what the technology problem is, but it's in -- it's their technology.

>> Kelly Cronin:

Okay.

>> Jonathan Teich:

This is Jonathan, I had a problem until I took off the extra period at the end of the Website.

>> Kelly Cronin:

Oh, okay, well, maybe that's one of the issues.

>> Jane Metzger:

I tried that and it still didn't go through.

>> Kelly Cronin:

Okay, we'll work it out hopefully shortly, Jane.

I just wanted to briefly give you all an update on how we're trying to coordinate clinical decision support activities, building off a lot of work that's already been done and sponsored by both AHRQ and ONC. As many of you know, over the last couple of years the American Medical Informatics Association and a lot of people in the health IT community got together to try to develop a road map for CDS, and that really gave us a conceptual framework to work from in planning a lot of activities and a lot of critical work that needs to take place to really advance this area. And everyone, I think, realizes this is the crème de la crème of where we all want to go: we want to see evidence delivered at the point of care so we really can start seeing better compliance with the evidence base, and so that the needed tools exist across the board, including for patients to really be empowered and be part of the decision-making process. Given that this road map is out there, a lot of the AHIC Workgroups have CDS aspects in their broad charges. Most clearly, a big part of our Workgroup's broad charge is to try to advance this. But the Personalized Healthcare Workgroup is also really focused on it; the Population Health and Clinical Care Connection Workgroup is really interested in bidirectional communication between public health and clinical care, and in making sure that when there are certain emergency situations or emerging outbreaks there's the ability to support decision making at the point of care related to public health emergencies or other situations; and the EHR Workgroup is also inherently interested, because CDS is a key part of widespread adoption of electronic health records.

Given that there's a lot of discussion and a lot of interest, and that there's a framework to think from, we wanted to organize our efforts across Workgroups and gather a few people who have worked a lot in this area, in some cases for many years, and get them to think through what our key focus areas should be. What should each Workgroup really try to drill down on in the next year and try to advance? Whether it's trying to figure out a public-private knowledge repository, or an analytic framework for how you use this data, or how we get from measure development to use of this information at the point of care more rapidly: a lot of different things have already been outlined at a high level in the road map, and many people, actually Jerry and Jonathan in particular, have really been working on implementing them. So I think not only is there a lot to think about across the federal agencies, but there's a lot that the Workgroups can do if we're a little better organized. So on Friday, tomorrow, we're going to meet to really start planning what our next steps are and how exactly all of us should be contributing, and in what way. And then we'll come back to you all with an update once we have a more definitive plan. It's likely that there will also be an opportunity to present to AHIC about how best we really focus in on what to do over the next year, and where the opportunities are for us to advance some pilots or really get some more experience to move this forward.

>> Carolyn Clancy:

Great. So we have a series of really great presentations today. And the first person to present is Ron Paulus from the Geisinger System. Just by way of introduction let me say that I had a chance to visit with Ron and a number of his colleagues at Geisinger about six weeks ago, and what came across in every presentation with every group I met was the idea of a strategic commitment to quality and health IT, and to using data to learn. So I've been really looking forward to this. Ron?

>> Ron Paulus:

Thank you. Can I have the first slide, please? And first thanks for inviting me and giving me a chance to share some of the things that my colleagues and I have been doing at Geisinger. Next slide, please.

So the main thing I wanted to try to bridge here was moving from the concept of data ultimately to where I think the end goal is as a byproduct of all of this, which is ultimately achieving clinical transformation. By way of brief background, for those that may not be as familiar with Geisinger: Geisinger is, I think, a quintessential integrated delivery system. It was started in 1915 by Abigail Geisinger. It now comprises nearly 700 multispecialty physicians and three hospitals with quaternary and tertiary care; one is a closed-staff hospital, two are community open-staff hospitals, so it's not just a completely hermetically sealed microcosm. We have about a 210,000-member health plan that provides about a third of the revenue to the health system, but two-thirds of our revenue still comes from other insurers. We have about 11,000 FTEs, and we serve about 2.5 million people in our primary service areas, but it's a large geographic swath, about 20,000 square miles of coverage, so we've created our own little electronic network within the central northern Pennsylvania region. And as Carolyn noted, we've made a strategic commitment to health information technology and clinical transformation, and we just completed a five-year vision for the second century. The title of that vision is Striving for Perfection, with Geisinger quality being the number one underpinning of the entire five-year vision. Next slide, please.

We made a decision long before I got there, so all of the insight really preceded my time at Geisinger, to invest in an electronic health record. And the decision was made for several reasons, one of which was to be an integrated delivery system. When you have multiple practice sites across 20,000 square miles, the question is how can you be a true group if you're not interconnected in an electronic manner? So we've invested now over 80 million dollars in hardware, software, et cetera, and we spend about 4.2 percent of our annual revenue as a run rate on IT and related services.

And we have a fully integrated electronic health record. That includes a patient portal that we call MyGeisinger; we have 75,000 active users on that portal now. Patients can look at health reminders, schedule appointments online, refill prescriptions, and pay bills, so there's a lot of electronic interaction. We also reach out to our community physicians through a GeisingerConnect EMR link that enables those physicians, through an electronic connection, to gain access to their own patients’ data. We have about three million patient records. Next slide.

This will be the last slide on the background, which just geographically shows the county coverage and each of those yellow dots is a non-Geisinger practice that is actively engaged in electronic review of their patients within our electronic health record. The three stars are our three hubs. And that should give you a sense of where we are in Pennsylvania. Next slide.

Okay. This slide really lays out the concepts behind the infrastructure, and there's a slide with a bit of a diagram coming up. Starting at the far left, data are generated largely as a byproduct of care and the activities that we engage in each day. We have the electronic health record, which is our transaction system for looking at lab results, making orders, reviewing data, et cetera. And we believe that increasingly, the data flowing into the electronic health record will either be patient self-reported or flow from devices, whether in the hospital or remotely in the home; the point is, data as a byproduct of care. Then the question is what do you do with that data? If you leave it within the structure of a transaction system, for the most part those data are hard to get. They're hard to access, and they're not necessarily standardized or normalized or transformed in the way one might need them to be to do population management, trending, et cetera. So aggregating and transforming that data is a project we're putting a lot of energy into; we've partnered with IBM on it, to build a clinical decision intelligence system. Interestingly, for many electronic health record vendors, this data aggregation and population analysis is an afterthought. It's not why they went into business, and yet it's a very important part.

Once you analyze that data, moving to the middle, you're really getting into performance measurement and analysis. This is where you're providing analytics, doing transformations, and following populations, and ultimately, in the next component, you're creating knowledge. That knowledge is important whether it takes the form of empirical norms or adjunctive support for evidence-based medicine and other standards. And the knowledge that's created through that measurement and those analytic transformations can then be put into an effector arm in different ways, whether that's order sets or prompts or reminders or reporting, what have you. Ultimately, the byproduct of all of this is not just to generate data, or to aggregate it, or to analyze it for analysis' sake, but to change fundamentally the way care is delivered. That's where clinical transformation hits, and I'm going to talk a bit about that as we go forward. Next slide, please.
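
A minimal sketch of the aggregate-and-transform step described above, in Python; the field names, codes, and unit conversions are illustrative assumptions, not the actual Geisinger/IBM CDIS schema:

```python
# Minimal sketch: normalizing raw EHR observations for an analytic store.
# Field names, codes, and unit conversions are illustrative assumptions,
# not the actual Geisinger/IBM CDIS schema.

RAW_OBSERVATIONS = [
    {"patient": "p1", "code": "glucose", "value": "110", "unit": "mg/dL"},
    {"patient": "p1", "code": "glucose", "value": "6.1", "unit": "mmol/L"},
    {"patient": "p2", "code": "hba1c", "value": "7.2", "unit": "%"},
]

# Map every source unit to a single canonical unit per observation type.
CANONICAL = {
    ("glucose", "mg/dL"): 1.0,
    ("glucose", "mmol/L"): 18.0,  # mmol/L -> mg/dL
    ("hba1c", "%"): 1.0,
}

def normalize(obs):
    """Convert one raw observation into a standardized record."""
    factor = CANONICAL[(obs["code"], obs["unit"])]
    return {
        "patient": obs["patient"],
        "code": obs["code"],
        "value": float(obs["value"]) * factor,
    }

if __name__ == "__main__":
    for rec in map(normalize, RAW_OBSERVATIONS):
        print(rec)
```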

So one of the reasons why we believe that aggregating data and performing transformation and analytics on that data is important is because of what we've referred to as the inferential gap. This is a paper that colleagues of mine and I wrote for Health Affairs recently. Basically, clinicians are faced every day with patients for whom no evidence exists; for much, or some would say most, of medical care, we don't have hard data about the things that we do. But even where there are good randomized controlled trials, there's a mismatch between the patients that met the inclusion/exclusion criteria for those trials and what shows up in your practice on a daily basis. Real patients have more co-morbidities and more constraints, and there are other counterbalancing factors, like economic considerations, formulary restrictions, and patient preferences, that aren't necessarily addressed within the RCT or EBM data. And so trying to bridge that gap is an important component of how we view data infrastructure. Next slide, please.

I think the next slide is the schematic; it's a bit hard to read on screen there. The electronic health record, represented at the far left, is our clinical transaction system, providing real-time information to clinicians. Data flow from there into our clinical decision intelligence system; that's where we're aggregating all of the data, normalizing it, transforming it, and standardizing it. We include claims data from our health plan and financial and operational data from all parts of the health system, so we have a truly integrated clinical, financial, and operational data system, if you will. And then we take those clinical inputs, the financial inputs, the operational inputs, and other inputs as well: evidence-based medicine guidelines, and patient preferences, for example. Many of the things which we refer to, quote, unquote, as noncompliance are not noncompliance; they're preferences that don't match up to the prescription modality that was provided for that patient. And then we are in the process of creating and leveraging a decision support engine that uses all of those different inputs, inputs from CDIS and real-time inputs from the EHR, transforms them and provides analytics, and then has a variety of effector arms. Effector arms might be alerts; they might be prompts or reminders; at times they're completely integrated order sets; sometimes they're standing orders; they could be patient messages that go to the Website, or automated letters that are generated and sent to patients' homes, and increasingly information therapy, whether that's Web-based or handout-based. But conceptually the idea is you're linking together a transaction system and an analytic system, you're leveraging all of the data that you have through a decision support mechanism, and you're using different effector arms to drive the clinical transformation that ultimately is going to get this to where we need to be. Next slide.
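
The engine-plus-effector-arms idea can be sketched as a small rule dispatcher; the rules, thresholds, and effector-arm names here are invented for illustration, not Geisinger's actual decision support engine:

```python
# Minimal sketch of a decision support engine with multiple "effector arms":
# each rule inspects a patient record and, when it fires, routes its output
# to an alert, a standing order, or a patient letter. Rule content and
# thresholds are illustrative assumptions.

def needs_eye_exam(pt):
    return pt["diabetic"] and pt["months_since_eye_exam"] > 12

RULES = [
    # (predicate, effector arm, message)
    (needs_eye_exam, "standing_order", "Schedule retinal eye exam"),
    (lambda pt: pt.get("ldl", 0) > 100, "alert", "LDL above goal"),
    (lambda pt: not pt["portal_user"], "letter", "Mail reminder letter"),
]

def run_engine(pt):
    """Evaluate every rule and dispatch fired actions to their effector arm."""
    for predicate, arm, message in RULES:
        if predicate(pt):
            yield arm, message

patient = {"diabetic": True, "months_since_eye_exam": 14,
           "ldl": 130, "portal_user": False}
for arm, message in run_engine(patient):
    print(f"{arm}: {message}")
```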

So with that: admittedly this might be a Don Quixote-ish goal, but we do truly want to try to reengineer the traditional decision hierarchy. The way we thought about that is that today most decisions are made by physicians, and that's the way the majority of our system is organized; as a physician myself, and as one who has the utmost respect for all physicians and clinicians in general, these transformations are not meant to denigrate that role. The idea here is to try to match together relative uncertainty around decisions and relative risk. Where uncertainty is low, meaning the answer is relatively known, and where the risk of choosing that option is relatively low, that decision should be pushed down as close to the patient as possible. And we believe that patients and their families, with the support of computerized aids and other information therapy, can make many of the decisions that today are made by clinicians, ultimately saving more and more of those highly uncertain, highly risky decisions for the physician. So that's where we're headed with Geisinger quality and striving for perfection. Next slide, please. And okay, we can skip past that one.
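
The routing idea, pushing low-uncertainty, low-risk decisions toward the patient, can be sketched as a simple function; the numeric thresholds are invented for illustration:

```python
# A sketch of the decision-routing idea: decisions with low uncertainty and
# low risk are pushed toward the patient; high-uncertainty, high-risk
# decisions stay with the physician. The thresholds are invented for
# illustration.

def route_decision(uncertainty, risk, low=0.3, high=0.7):
    """Return who should own a decision, given scores in [0, 1]."""
    if uncertainty < low and risk < low:
        return "patient/family (with computerized aids)"
    if uncertainty > high or risk > high:
        return "physician"
    return "non-physician clinician"

print(route_decision(uncertainty=0.1, risk=0.2))   # patient/family
print(route_decision(uncertainty=0.9, risk=0.8))   # physician
```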

I wanted to give a couple of brief examples before I finish up here, to try to make tangible those principles that I was just describing. So we set out, like I think many have, to try to improve our performance around some of the evidence-based guidelines for diabetes. In our clinical process transformation there were a number of steps, and I'm not going to enumerate every single one here, but suffice it to say clinical transformation is typically a multi-factorial process. It's not just giving somebody a prompt or alert, it's not just doing an order set, it's not telling somebody to be better, and it's not just giving them a report card. We tried to standardize clinical practices and alter the workflow fundamentally, so that when the nurse is rooming that patient, he or she is already reviewing the best practice alerts and the orders that need to be signed off on by the physician. We have an automated way of identifying, by mining the CDIS, which patients require which interventions. We have the ability to automatically create order sets on the fly and to make those into standing orders, so that you don't have to redo them every time the patient comes. And then we have a variety of ways to reach out to the patients, and I'll show you examples of those. Next slide, please.

For a couple of these screen captures that will be coming up, don't worry about the details; they're visual cues. Here was the bundle that we decided upon. You can see the components; none of these are probably new to anybody. I did highlight with some of the circles that a number of these are hard values, not just process steps. For smoking status, we didn't choose to measure whether smoking counseling was given; we chose to measure whether the patient actually stopped smoking. So in our all-or-none bundle measurement, if they're still smoking it's a zero, not a one. Next slide.
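
The all-or-none scoring described here, including smoking scored on the outcome rather than on counseling, reduces to a simple check; the component names are illustrative, not the exact Geisinger bundle:

```python
# Sketch of all-or-none bundle scoring as described: a patient scores 1 only
# if every component is satisfied, and smoking is scored on the outcome
# (actually stopped), not on counseling given. Component names are
# illustrative, not the exact Geisinger diabetes bundle.

BUNDLE = ["hba1c_at_goal", "ldl_at_goal", "bp_at_goal",
          "eye_exam_done", "not_smoking"]

def all_or_none(patient):
    """1 if every bundle component is met, else 0."""
    return int(all(patient[c] for c in BUNDLE))

pt = {"hba1c_at_goal": True, "ldl_at_goal": True, "bp_at_goal": True,
      "eye_exam_done": True, "not_smoking": False}  # still smoking -> 0
print(all_or_none(pt))  # 0
```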

Here are a couple of examples. This is the screen that the nurse would see whenever they’re rooming the patient. It lists all the best practice alerts that are relevant to that particular patient. And note that they're not just alerts saying hey, why don't you do this. They're alerts that are pre-populated to generate a specific order set for the particular intervention at the particular periodicity that’s appropriate for that intervention. So whether it’s every three months or every six months, these standing orders can be put in place. It doesn't require another alert to be popping up the next time that patient comes to be seen. Next slide, please.

This is a sort of summary health maintenance view, so here the clinician can see for each of the events when it was last satisfied or whether there's an open hole. Next slide, please.

This is the patient view. So this is the patient portal where they can see what they're overdue for, when they last met the criteria, and when it's scheduled for a repeat. So you can see they last had the mammogram and when the next mammogram is due, so they're getting their own feedback. Next slide, please.

Now we know that every patient is not online. And we know that even if they are online, like physicians, patients often require multiple forms of intervention. So this is a letter, a patient education letter that's automatically generated from the intrinsic EHR and CDIS data. It tells each patient what their goals are, and what they should be doing and how often measurements should be made of various clinical parameters. Next slide, please.

That is followed with an actual patient report card. So this is not just, this is not just a doctor report card. This is what goes to the patient and shows how they're trending with their data over time, and lets them know whether they're in compliance or out of compliance and provides a little bit of context for why that's important. Next slide, please.

This is the reporting and feedback loop, with a little bit of competitive incentivizing. There are three different features on this graph. We measure at the site level; we believe in team-based care, and we know that every patient is not seen by the same clinician at every intervention. The yellow bars show where this particular primary care practice site stands, so the whole site is represented, and there's a lot of peer pressure to keep performance up. The red squares represent the average across all primary care sites, and the blue represents the best practice within our own health system. Next slide.

This next slide is the one that I think is the most impactful, and it's the one that really makes me feel good, getting up every day, about what I do. It will take a little bit of getting used to, because you might not have seen a slide like this. Essentially this takes the all-or-none bundle concept and looks at the number of components, ranging from zero to nine, that each patient received, as a percent of all patients with diabetes. Let me point out a couple of things. One, this is 20,000 or more patients; this is not a trivial population in one office, it's a large group of patients. Each one of the graphs from left to right represents the state of care delivery across all practice sites at a point in time. And the simple interpretation is: as the graph gets higher and shifts to the right, more patients are getting a higher proportion of all of the known evidence-based standards for diabetes. What you see there is what I call the snake digesting its meal. That meal is moving its way through, getting improved quarter on quarter on quarter. Now, I don't personally believe there would be any way to transform the care of 20,000 patients in this kind of stepwise, systematic fashion unless all the different components, ranging from buy-in to governance to electronic transformation and patient engagement, had come together. But it's been a remarkable set of changes, even though we still have a long way to go. We don't hold ourselves out as being perfect by any means. We're just striving to be as perfect as we can be. Next slide.
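
The "snake" view is just a per-quarter distribution of how many bundle components each patient received; here is a sketch with simulated stand-in data (the real analysis would run over the actual 20,000-patient registry):

```python
# Sketch of the "snake" view: for each quarter, count what fraction of the
# diabetic population received 0..9 bundle components. The data here are
# random stand-ins for the real 20,000-patient population.

import random
from collections import Counter

random.seed(1)
N_COMPONENTS = 9

def quarter_distribution(components_per_patient):
    """Percent of patients receiving k of 9 components, k = 0..9."""
    counts = Counter(components_per_patient)
    n = len(components_per_patient)
    return [100.0 * counts.get(k, 0) / n for k in range(N_COMPONENTS + 1)]

# Simulated quarter: each patient gets a random number of components.
patients = [random.randint(3, 9) for _ in range(20000)]
for k, pct in enumerate(quarter_distribution(patients)):
    print(f"{k} components: {pct:5.1f}%")
```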

The next slide, I think, speaks specifically to one of the core issues you are dealing with here, and one that's fundamentally important to caregivers across this country. That is: if caregivers are trying to do the right thing and they want to measure these things, how can they do that in a way that doesn't have them spend more money and more FTEs on collecting data than on actually transforming the way care is delivered within their organizations? We wanted to be accredited by the NCQA/ADA diabetes recognition program. That program's methodology is to pick a date at random, look back at the last 200 consecutive patients across all your sites, and then score you on a variety of parameters. The take-home here is in the box: for that particular program we did not have to do a single manual chart review. We exported the data from our systems, it met the criteria, and we got a score of 85; you need 75 to qualify. Everybody is happy. Nobody is going around with a clipboard wasting their time gathering data for data gathering's sake. It's unfortunate, but the same thing can't be said for many of the other measurement processes for a myriad of health plans and so forth. And I think that's something that we, collectively, as a clinical group and a methodological group across this country, have to strive for.
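
The automated extract described here amounts to scoring the last 200 consecutive patients from stored data and comparing to the qualifying threshold; the per-patient scoring below is a placeholder, since the real NCQA/ADA methodology weights multiple clinical parameters:

```python
# Sketch of the automated recognition-program extract: take the last 200
# consecutive patients, score them from stored data, and compare to the
# qualifying threshold. The scoring function is a placeholder; the real
# NCQA/ADA methodology weights multiple parameters.

QUALIFYING_SCORE = 75

def score_patient(pt):
    """Placeholder per-patient score: percent of bundle components met."""
    met = sum(pt.values())
    return 100.0 * met / len(pt)

def program_score(registry, n=200):
    """Average score over the last n consecutive patients, no chart pulls."""
    sample = registry[-n:]
    return sum(score_patient(p) for p in sample) / len(sample)

registry = [{"hba1c": True, "ldl": True, "bp": i % 3 != 0, "eye": True}
            for i in range(1000)]
score = program_score(registry)
print(f"score={score:.0f}, qualifies={score >= QUALIFYING_SCORE}")
```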

The next slides, quickly: I wanted to point out how even something simple, like vaccination, is really not simple; it's complicated. We know that in this country many of the patients eligible for influenza and pneumococcal vaccinations just don't get them. We began hardwiring pneumococcal and influenza vaccines together, since we know that the fall influenza campaign is a great way to bring people in. We did many of the same things: we sent out electronic and letter reminders asking people to schedule, and if they didn't schedule, we did an outbound call. We set a standing order to administer both vaccines upon the next visit during the window period if the patient met the criteria, so the nurse had that standing order and didn't have to wait for somebody to remember. And then we set it up with the same rooming-tool order protocol, et cetera.

And the next two slides -- next slide, please -- graphically depict the impacts of the clinical transformation processes. Here, like the other slides, the higher the peak, the more patients are getting vaccinated. So this is essentially a three-fold increase in vaccinations across these four periods. And on the next slide, please.

You can see that we were able to drive the same kinds of benefits in pneumococcal vaccines once we linked them to the influenza process. We didn't originally link the two together, and that was a mistake, but we were able to leverage that workflow process to make that kind of change.

The last two slides -- if I could have the next slide, please. I wanted to turn to an inpatient example, because we are an integrated delivery system, not just a physician group practice. We set about trying to fundamentally reengineer the way coronary artery bypass graft surgery was delivered. And to do that, we also wanted to change the way it was reimbursed. So we bundled together two things. One, from a financial standpoint, we said, you know what, we're going to proactively develop a program where we go at risk for coronary bypass surgery. We're going to bundle together all professional, technical, and related services, pre- and post-op, for a single price, and go at risk for any and all readmissions, extra therapy, and any kinds of complications; there's no incremental payment whatsoever for any of those services. It includes cardiac rehab, the whole nine yards. We also said that we're going to measure our performance internally on an all-or-none bundle using the AHA and American College of Cardiology evidence-based standards. So we had a group of cardiovascular surgeons review the class 1 and class 2 evidence standards, and here's another gap: lots of standards don't translate themselves into specific work processes or clinical process steps. So those clinicians transformed those guidelines into 40 specific workflow steps that we then hardwired into the electronic health record, in addition to providing patient, family, physician, and other clinician communication. We developed a patient compact, which is essentially a contract where a patient agrees to at least try to do certain things: complete their rehab, stop smoking, watch their diet, take their medicines, inform their caregivers of side effects of medications, et cetera. And then we began to monitor our performance from T equals zero, which was February of '06. Initially, on an all-or-none measurement across 40 separate process steps, we hit all 40 process steps for the coronary bypass graft about 59 percent of the time. Within about six months we had gotten up to 100 percent. We had some recidivism, and we found some gaps. For those that have interest, the gap was not starting an IV infusion for a glucose greater than 110 in the OR. Interestingly enough, the glucoses where it was not started were only 113 and 114, but when it wasn't started they were up at 182 shortly thereafter. So even being just three or four over the 110 threshold turned out to be significant. The P value for before and after hitting all the process steps was less than .001. Next slide. Final.

These data were just presented at the American Surgical Association, so for me, as an internist by training and sort of a health services guy by genetic constitution, to have these data presented at the surgical society was pretty slick. The data are still trending for the most part; the N wasn't large enough to get to statistical significance. We did have one statistically significant variable other than adherence to the process steps, which was patients who were discharged home versus all other statuses: nursing home, long-term care, mortality, et cetera. That was 9.6 percent better, with P less than .033.

So in summary, hopefully I was able to describe not only how we think about infrastructure, but where we're going strategically in terms of reengineering the decision process, both for our own clinicians and, fundamentally, to push decision-making down to the patient where uncertainty is low and risk is low. And the idea is that marrying the electronic health record data with process measurement, population analytics, and transformation, together with decision support, lets us ultimately tie together in a single loop measurement, care-byproduct data collection, transformation, and patient improvement. So thanks for the time, and I really appreciate the opportunity to be here. My colleagues at Geisinger really deserve all the credit for this; these things were all put in place long before I got there two years ago. Thanks.

>> Carolyn Clancy:

I'm hearing a lot of wows around the room. And I think the people on the phone are mute or we'd be hearing from them as well. Questions for Doctor Paulus? Janet?

>> Janet Corrigan:

Great presentation, Ron. I heard part of this when I was at Geisinger, and it gets even better every time. This is terrific. A question for you, and I know this is the Quality Workgroup, so we focus on quality: what was the impact of this on productivity? I take a look at your primary care docs, and you are blessed with more primary care physicians than many parts of the country. Does this also increase productivity as well?

>> Ron Paulus:

I have to answer that sort of conceptually rather than with hard data, because I can tell you that the Geisinger group practice as a whole, on the (inaudible) basis, is at about the 74th to 80th percentile within the (inaudible), which is how we benchmark ourselves, and we know that we've been able to drive that productivity level up from much lower levels while we've put these kinds of processes in place. Our belief is that the kinds of processes I described not only improve productivity, but that they're functionally the only mechanism we have left to drive incremental productivity improvements. We don't believe we can get there without transforming the work process, and, you know, our rules of thumb are automate, eliminate, delegate. So if we can automate something, we'll automate it. If we can delegate it down to a non-physician, we'll delegate it, including to the patient. And if we can eliminate it, that's the best of all possible worlds. So we're applying a reengineering thesis that goes along those lines. That's really what innovation, which is my area of focus, is all about: leveraging technology and clinical transformation to make these kinds of changes. And I'm not happy unless I see efficiency gains, productivity gains, and quality gains.

>> Jonathan Teich:

Carolyn?

>> Carolyn Clancy:

Yes, Jonathan.

>> Jonathan Teich:

Hi, Jonathan Teich. Great presentation, excellent as expected, and I appreciate the modesty, but I know you had a significant role in a lot of this. The question we often ask, and we went through some of these things at Brigham and Women's over time, is: with all of these wonderful results and all of these wonderful innovations, how do you get this to the organizations that haven't spent as much time, resources, and man-hours? It's always a matter of transfer. How can you take what you've learned and get this to Joe's hospital system down the road? I wonder if you've thought about that at all, and thought about which aspects required the sheer manpower and dedication, and which can be given to those that may not have quite as much of an impetus.

>> Ron Paulus:

It's a great question, Jonathan. You know, I think the way Geisinger has approached it, within our region, is that we've largely tried to help out with that, if you will, by being as collaborative as possible and viewing our information technology as nonproprietary. So we try to provide access to the electronic information and supportive services, as much as possible, to surrounding practices and hospitals. We read pediatric echocardiograms electronically, for example, for a variety of hospitals in the region, and we do lots of other supportive services where we try to enable patients to stay within those care-giving institutions while we provide our specialty services electronically and in as supportive a manner as possible. I share your concern and recognize the difficulty. I think about the work that's been done around Stark relaxation for the provision of electronic health records to practices by organizations like Geisinger; we're anxiously awaiting the IRS ruling on how they view that, and we'd like some reassurance, frankly, that a new Congress doesn't mean a new day and a re-visitation of those kinds of principles. Because I personally believe that it's either going to be health delivery organizations or vendor organizations that will have to provide more turnkey services; I think it's unrealistic to believe that small practices are going to be able to provide all of these kinds of services themselves. That's one of the reasons why we've created the Electronic Health Record Safety Institute: not only are there issues with getting people up and running, but there are safety implications for testing, deploying, et cetera.

>> Carolyn Clancy:

Helen?

>> Helen Darling:

I think my question is a little bit along the same lines. You've got a great story. And I actually hope you will send all of us by e-mail what you just said about leveraging technology, because that is a branding motto that we would at least like to see everybody live by; it's such a good message. My question is a little like the last one. We hear all the time, those of us who are trying to push this kind of innovation, that everything costs too much. You have 80 million, which, as you appropriately note given the size of your organization, sounds like a lot and is a lot of money, but it's proportionate to what you have. Also, this kind of innovation takes too much time, including time away from clinical care. And everybody fears moving ahead, so they resist innovation because they're afraid there isn't enough agreement, interoperability, and all those other things; they sort of want everything settled up front so they can move ahead confidently. What would you say to those kinds of concerns? And what do you think we should be doing more of to get your message to the people who are more fearful and not willing to innovate because they're concerned about cost and other issues?

>> Ron Paulus:

Right. Yeah, that's a real dilemma. It's easy for me to say, but I think for large or medium-sized organizations, this is a no-brainer. This is fundamentally a requirement of being in business, no different from other inputs people would view as necessary, like medical supplies or having an operating room or a procedure room. Having electronic infrastructure that enables you to measure, monitor, and provide quality care is simply a fundamental, foundational component. Now, with that said, I do think for the very small organizations, and the small physician practices in particular, that's easy to say and harder to do. And that's why, again, I believe it's going to take either health delivery systems or private sector vendor organizations that can provide much more turnkey services, and that can provide financial arrangements where the capital doesn't have to be allocated up front, because most physician practices don't have a balance sheet and don't have capital to spend on an immediate basis. I think it will take some incentives and some prodding. And then finally, even though this is somewhat of a hackneyed metaphor, I think there's a generational wave that will ultimately take care of it. The question is, do people want to wait around for that? I think the answer is no. We need all those different steps to drive this; I don't think there's one magic bullet. But if we look at 80 million dollars while we're building a multi-hundred-million-dollar hospital for advanced medicine, how could we build that building without the electronic infrastructure that's going to make our operation of it as efficient and as high-quality as we need to be for our community?

>>

Thank you.

>> Carolyn Clancy:

I have a question. Can you just reflect within Geisinger what are the barriers and enablers that helped you?

>> Ron Paulus:

Yeah, good -- that's another good question. I think the enablers are probably very significant. One, I think over the years, and I can't assign this to any one individual, there have been leaders who have stepped forward and said this is something we need to do. Joe Bizzordi (ph), who is a clinician, was part of the original Epic cadre, had a lot of naysayers, you know, back in 1995, whenever this was being decided. But he persevered along with other leadership at the time. And that leadership mantle has been carried forward. Glenn Steele who is our CEO today, has made striving for perfection the number one goal for the next five years for the second century. And leadership that starts at the top and flows through the organization is absolutely essential.

I think the group practice model, to be truthful, was very helpful, although I'll point out that we do have two community practice hospitals that are open staff and not closed. But nonetheless that desire to be integrated as a group and to have that kind of leverage was definitely an enabler.

Difficulties: I think one of the difficulties was the geographic territory. There were issues associated with broadband connectivity, and with having to run literally dedicated lines between a lot of these practice sites, something that people in urban environments might take for granted. Even now we struggle with some of those connectivity problems. And some of the recent things that have come to the public's attention about where those rural dollars are really going are, I think, something people should pay attention to if they expect rural caregivers to deliver these services. I think that was a barrier.

I think there's been a barrier, and it's still partly there, around: geez, we're in a very rural environment, people are not going to deal with this technology, patients aren't going to use a portal. Well, they do use a portal, and over 40 percent of the patients using that portal are 55 or older. If patients 55 or older in the rural parts of Pennsylvania can do it, then they can do it pretty much anywhere.

And I think the last enabler, which speaks both to the community and to the organization, is that there's a lot of trust. There's trust in Geisinger from the community, and there's at least a willingness by physicians to suspend disbelief about whether something can be good or not. And that willingness to, you know, sacrifice for the greater good is something that you have to build. It's not magically associated with a group practice; it's built through governance and collegiality and respect and track record. So that would be my answer on the mix.

>> William Tierney:

Ron, this is Bill Tierney from Indiana. Since we're focusing on quality, let me ask a quality question. Your ability to track quality in specific conditions depends a lot on the type of data you're capturing and how completely you're capturing it. So, for example, if you want to track whether your patients with heart failure are getting good care, you need to identify them, and you need access not only to diagnoses, but also to the people getting misdiagnosed, so you need access to cardiac imaging in a coded way, and then you need access to full drug information: is my patient on an ACE inhibitor, a beta blocker, et cetera. And we all struggle with this. All of us: us in Indiana, Jonathan at Partners. We all struggle with trying to approach the asymptote of having perfect data on everybody, which we never will have. Can you tell us situations where your data are incomplete, where you can't do things you'd like to do? And what are you doing to try to fill in those gaps?

>> Ron Paulus:

Yeah, you're obviously in the trenches, to be so insightful and specific. I think our biggest area of deficit is in the history-and-physical, dictated kinds of text data. Clearly a lot of great information is in there, but it's sort of in a blob, and you can't do much of anything with it. We're toying with some text analytics just to see what, if anything, we can derive from it. We've used, in certain circumstances, like our obesity program, dictation templates that dictate in the exact same way every single time, so that text mining is much more amenable and valuable. And then we're about to embark, assuming I convince the other people around me, upon a complete review of all of the data that we would like to get that is currently lost in a blob form, or a PDF, or an image form somewhere, and basically say, let's rank-order these things and systematically identify how we can capture them with the least amount of effort possible, recognizing that it's not going to be easy. It is going to take some workflow changes. But I try to sell benefits ahead of the cost. So if we can find things that clinicians want to do, and they want to measure and track and use for research, the likelihood that they'll be willing to enter those data, particularly if we can automate the entry of a lot of those data, I think will be better. So no magic bullets, just hard work, rank-ordering, and then (inaudible) analysis.

>> Carolyn Clancy:

This is -- what?

>> Kelly Cronin:

When you mention text analytics and text mining, do you think that's sort of a near-term possibility until we get the structured terminologies? Should that be something we should be looking at to try to get more robust data elements that we need for quality measurement?

>> Ron Paulus:

I'm not a text expert, so you're better off asking somebody else. My personal opinion is that it's a stop-gap measure, and you have to gauge whether it's more work than it's worth. It does work well when you're doing structured dictation; that's sort of a half-step. If people are adherent enough to dictate using the same exact templates, then you can use it more easily. I know the people at Columbia and others have spent a lot of time working on this, so I don't want to presuppose or not give text analytics its due. But it still seems like it's a leap to really do what we want.

>> William Tierney:

This is Bill Tierney. I can shed a little light on that. I think for certain things there's a lot that's been done that's available now, things like natural language processing for radiology reports, et cetera. The best thing would be to have the radiologist actually enter codes, so you wouldn't have to decompose their message. So I agree with Ron that structured data entry is how you'll ultimately be able to do this. But in the interim, studies out of Columbia and here in Indiana show that there are natural language processing programs that are better than humans at extracting accurate diagnoses from diagnostic test reports and from notes.
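
As a minimal sketch of the stop-gap both speakers describe: when dictation follows a fixed template, even simple pattern matching can recover a discrete value from free text. The template and field are invented for illustration, and real NLP systems, such as the Columbia and Indiana work mentioned, go well beyond regular expressions:

```python
# Sketch of extracting a discrete value from templated dictation text.
# The report template and field are invented for illustration; production
# NLP systems use far richer methods than regular expressions.

import re

NOTE = """EXAM: Echocardiogram
IMPRESSION: No acute abnormality. Ejection fraction estimated at 35%."""

def extract_ef(text):
    """Pull an ejection fraction percentage out of templated report text."""
    m = re.search(r"[Ee]jection fraction\D*(\d{1,2})\s*%", text)
    return int(m.group(1)) if m else None

print(extract_ef(NOTE))  # 35
```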

>> Carolyn Clancy:

I think you're getting a sense here, Dr. Paulus, that we could use the entire afternoon asking you questions. Which means we're going to feel free to call upon you anytime but for today I think we need to move on. Dr. Kramer?

>> Mike Kramer:

Thank you. Good afternoon. Looks like we'll get the slides up here.

My name is Mike Kramer, I'm Chief Medical Information Officer from Trinity Health. And I think I’m aligning with the topic pretty well, since I'll echo some of the comments that Ron has said.

My topic is clinical decision support to assist the clinician's delivery of evidence-based care. We have arguably one of the nation's largest leading EMR implementations, and it's gone unpublicized, so hopefully I get a chance to describe it a little bit to you today. Next slide.

Trinity Health is a deeply mission-based, faith-based organization, and its core values include respect, social justice, compassion, care of the poor and underserved, and excellence. Our community benefits ministry is at the top of this ribbon to heaven, and serving the community is probably the foremost objective of the organization. As an example of that, this year we will fully deploy an ambulatory care electronic medical record in three underserved nonprofit clinics in the state of Michigan, on our Trinity infrastructure, in order to increase the continuity of care between those patients and our organization, as well as the ability to collect and manage data for the chronically ill patients in those communities. That's an example of one of the many projects we have worked on, in particular along the lines of electronic medical records. If I can get the next slide.

Trinity Health is the fourth largest Catholic healthcare system in the United States. We have 45,000 employees, and 7,000 physicians are credentialed at Trinity Health, only 700 of whom are actually employees of Trinity Health. A lot of other numbers here; briefly, we have 46 hospitals, and of those, 33 are or will be live on a single electronic medical record, providing data to a single repository and a single data center. There are 379 clinics, and all of these organizations, clinics, and hospitals are spread across 8 states. Next slide, please. I think we may have skipped a slide; sorry, I'm out of order myself in my head.

Let me just start off with some take-away points, and then after this slide I'll go into details that support them. Essentially there are four messages that I'll provide some supporting information on. First, as we talk about quality, much as we talk about interoperability and the standards needed behind interoperability, we should describe quality using nationally recognized vocabularies. SNOMED, LOINC, RxNorm, and other vocabularies are going to greatly help us understand and ask the questions that we need to about quality. Further, standardized vocabularies are going to help our vendors provide standardized terminology and functionality out of the box, so we don't have to do quite as much building and managing of the data downstream; there's a small sketch of this idea after these four points. Next.

Second, and this is a point that Ron has made: 40 percent of the data needed to assess the quality of inpatient care is generated by physicians who either don't have access to the electronic medical record, have provided us free text or dictated information, or may not have the time, incentive, or ability to provide discrete data for the assessment of quality.

Third point. There are substantial complexities in the quality definitions that require considerable manual abstraction and require data not known to our inpatient providers.

This leads to the fourth point, which is that much of the data may span multiple venues of care and multiple organizations, and might not be available, particularly in underserved communities, where assistance and associations with other organizations to collect, manage, and refine that data are not available. You can see I'm echoing some of our earlier questions. Next slide, please.
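
As a minimal sketch of the first take-away: mapping local, free-form labels to one standard concept lets quality queries be written once. The mappings below are illustrative placeholders, not verified SNOMED CT codes:

```python
# Sketch of the vocabulary point: local, free-form labels are mapped to one
# standard concept so quality queries can be written once. The concepts
# shown are illustrative placeholders, not verified standard codes.

LOCAL_TO_STANDARD = {
    "chf": "heart_failure",
    "congestive heart failure": "heart_failure",
    "cardiomyopathy with failure": "heart_failure",
    "mi": "myocardial_infarction",
}

def standardize(local_label):
    """Normalize a local diagnosis label to a standard concept, if known."""
    return LOCAL_TO_STANDARD.get(local_label.strip().lower())

for label in ["CHF", "Congestive Heart Failure", "sprained ankle"]:
    print(label, "->", standardize(label))
```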

Before going into a description of our experience with clinical decision support and the electronic medical record, I want to describe what the electronic medical record looks like for us. We have over 4 million patient medical records in the electronic medical record, with 8 million people registered in the system. Data are in a single location and include clinical, financial, and administrative information across all those facilities. We have two facilities where both outpatient and inpatient care are integrated in the same system. Of 33 hospitals scheduled for full implementation of the electronic medical record, 14 are live. A full implementation for us includes computerized physician order entry; our physician order entry rates are averaging about 60 percent, and our most recent implementation is as high as 84 percent. About 60 to 80 percent of the previous paper medical record has been eliminated, but about 20 to 40 percent of what's electronic is free text and not discrete, so that's a potential barrier. We have about 1,500 caregivers using the system simultaneously during our peak hours, and 180,000 chart openings per day. The last bullet point I'll speak to is that the composition of the current most advanced deployment includes nursing care documentation, integrated pharmacy and medication ordering, provider order entry, lab and radiology data, and transcribed data. Next slide.

I want to differentiate between two types of decision support, and I think that's a key point for this group: there are two large buckets of decision support, with limitations to each. Retrospective decision support happens after the care is delivered: going into the medical record and the database, trying to abstract the data, and identifying trends and opportunities for intervention. But if the intervention is missed, and the patient has left your care and gone on to the next provider, then we may very well have missed our opportunity to influence the care for that encounter.

Concurrent decision support comprises methodologies to inform the caregiver during care. Probably one of the biggest limitations in informing the physician, in giving them relevant information at the point of care, is not knowing exactly, from the medical record, what that patient is there for. Shortness of breath? Trauma? Not having any specific diagnosis prevents us from saying to the caregiver, wouldn't you like to give an ACE inhibitor, if the caregiver has not distinctly said heart failure. So in order to supply concurrent clinical decision support, the physician's diagnosis, or some proxy measure, has to be used. One of our limitations is that most conditions cannot be inferred without data provided by the physician in a discrete fashion, and much of it is not made electronic. Without those clinician contributions early in the care process, we have difficulty informing them of opportunities they may have missed. Next slide, please.
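
A sketch of the concurrent-support problem: if no discrete heart failure diagnosis has been entered, the system can fall back on a proxy drawn from other discrete data before suggesting an ACE inhibitor. The proxy rule and threshold here are illustrative assumptions:

```python
# Sketch of concurrent decision support with a proxy: if no discrete heart
# failure diagnosis exists, infer a candidate from other discrete data
# (e.g., a low ejection fraction result) before suggesting an ACE
# inhibitor. The proxy rule and threshold are illustrative assumptions.

def heart_failure_candidate(encounter):
    """True if a discrete diagnosis or a proxy signal suggests heart failure."""
    if "heart_failure" in encounter["diagnoses"]:
        return True
    ef = encounter["results"].get("ejection_fraction")
    return ef is not None and ef < 40  # proxy: reduced ejection fraction

def concurrent_advice(encounter):
    if heart_failure_candidate(encounter) and \
            "ace_inhibitor" not in encounter["orders"]:
        return "Consider an ACE inhibitor for suspected heart failure."
    return None

enc = {"diagnoses": [], "results": {"ejection_fraction": 30}, "orders": []}
print(concurrent_advice(enc))
```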

Before I go on into more detail, I'll give a little background on the architecture at Trinity Health and our vision; this view may just rearrange the boxes a bit. One thing I'll point out, across the left-hand side, that has been instrumental for us is taking all the different systems and standardizing them. There are various dimensions of data: administrative registration data, financial data, human resources data on how people are being used to care for the patient, the electronic medical record, and supplies that are used around the care of the patient. So at Trinity, the first five blocks are standardized systems across 33 of our organizations that feed into a common repository. The two bottom blocks represent partnerships with external laboratory systems and external providers to feed data into that warehouse as well; that might be patient satisfaction scores, or laboratory, radiology, or other external results. So this is the current Trinity Health data warehouse. Most of what you see in the middle of the slide is complete. As we move to the right-hand side of the slide, the outputs would be our quality reports, an enterprise dashboard, and external reporting; the latter two are not yet projects that we feel we can perform from our environment. Next slide, please.

At Trinity we have a metaphor we use frequently as we build our capabilities, and this is our house metaphor. There are probably about five different houses: we've got our imaging house, our EMR house, and this is our clinical decision support house. We are at the lowest level of this, the first two layers of the foundation. I wanted to describe this to you so you understand all of the capabilities of decision support that should be part of an architecture that grows over time. Currently Trinity has a financial data repository, a master person list, and a clinical data repository, and in order to feed that data repository we have to have tools and processes to capture relevant, useful data. Of course, I've already talked about the gaps in the data. Building those tools and refining them to capture missing data is really a substantial barrier to getting useful data out of the system.

Our next challenge will be that light green layer, which is to start to build what we call exception reports. About 80 percent of the time we can go into the medical record and identify easy wins: the patient got an aspirin, and yep, I can see it right there in the medication administration record. About 20 percent of the time, particularly around aspirin on arrival, we don't know whether the patient got aspirin, or we have to dig considerably deeper. Identifying those exceptions in an automated fashion is one of our next goals. Severity and risk adjustment is on the list. As we move up into the attic, being able to do predictive modeling, look at textual and non-textual data, and actually look at our provider workforce and understand how well they're doing in providing care are increasingly advanced capabilities we'll build on this architecture. Next slide, please.
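
A minimal sketch of that kind of automated exception report, assuming a simplified chart structure; the field names here are hypothetical, not Trinity's actual repository schema. It splits charts into ones where the measure is clearly met and ones where a human has to dig deeper.

def aspirin_exception_report(charts):
    # Easy wins: aspirin is visible in the medication administration record.
    # Exceptions: nothing found, so the chart goes to a manual review queue.
    easy_wins, exceptions = [], []
    for chart in charts:
        meds = chart.get("medication_administration_record", [])
        if any(m["drug"].lower() == "aspirin" for m in meds):
            easy_wins.append(chart["chart_id"])   # measure met, fully automatable
        else:
            exceptions.append(chart["chart_id"])  # the roughly 20 percent of manual effort
    return easy_wins, exceptions

charts = [
    {"chart_id": "A1", "medication_administration_record": [{"drug": "Aspirin"}]},
    {"chart_id": "A2", "medication_administration_record": []},
]
met, review = aspirin_exception_report(charts)
print(f"automated: {met}, manual review queue: {review}")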

So having shown you our vision with regard to decision support, I'll tell you a little bit about our current state. Largely, our quality data collection at Trinity Health is retrospective and manual. If you look at our core quality measures, we have approximately 30 measures that we report on across all of our ministries and put out as a dashboard. All of those are abstracted manually. It takes about 60 to 90 days to collect accurate data, either electronically or through chart review, and another 30 days to validate and analyze. So in many cases it can take about four months to clean up the dataset and report out across our ministries to influence change and improve quality. And that is an unfortunate place to be, because it's not very timely for our organization. Next slide, please.

So what are the barriers and difficulties that we've had in turning after-the-fact, retrospective quality data into real-time care and advising clinicians in real time? One of the things that Ron talked about was looking at the quality data and systematically going through it: what are the data elements that are necessary to inform care, what are the data elements that are not currently electronic that could be electronic, or not usable that should be usable? And how can we systematically go through those data and architect the systems and structures so that we can make this real-time data?

And I'll give you a bit of a story here on this slide. We looked at 26 quality indicators for acute myocardial infarction, heart failure, surgical care infection, and community-acquired pneumonia. Out of those we identified 157 data elements that were used to identify patients, identify exceptions, or identify that interventions occurred. Of those 157 elements -- and I've described to you what our medical record looks like, a fairly complete medical record -- 79 are contained in the Trinity Health medical record, or about 48 percent. 40 percent of the data needed to automate data collection depends upon the contribution of data from clinicians, and if it's dictated or handwritten, or it's in an outpatient or previous encounter, that's missing data for us. Even when the data is electronic, it's often not standardized. In other words, the language is not used in a way that helps us identify the patient: is it heart failure, congestive heart failure, is it cardiomyopathy; how have we decided to define our heart failure population? Given these challenges, out of the 26 indicators we can currently automate only 2.
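
A minimal sketch of that kind of data-element gap analysis: tally how each element behind a measure actually exists in the record. The elements and status categories below are illustrative stand-ins, not Trinity's actual 157-element inventory.

from collections import Counter

# Each element is classified by how it lives in the EMR today (illustrative).
data_elements = [
    {"name": "troponin_result", "status": "discrete"},       # electronic and structured
    {"name": "ejection_fraction", "status": "free_text"},    # electronic but not discrete
    {"name": "prehospital_aspirin", "status": "missing"},    # not captured electronically
    {"name": "discharge_instructions", "status": "discrete"},
]

tally = Counter(e["status"] for e in data_elements)
total = len(data_elements)
print(f"{tally['discrete']}/{total} elements discrete "
      f"({tally['discrete'] / total:.0%} automatable without workflow change)")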

So we have a lot to do. We have a lot of system changes, workflow changes, and process changes to make to get this remaining data electronic in a usable fashion. In reality, about 100 percent of our reporting is manual chart abstraction or requires some aspect of it. So that sounds a little bleak, but let me tell you in the next slide how we're overcoming that.

So we want to provide the best information we can to meet the needs of the particular patient at the time of care, and to do that we've got to identify the clinical condition. The working diagnosis is the biggest barrier, and we will be turning on functionality that encourages our clinicians, that gives them an incentive, to give us a diagnosis. If they give us a diagnosis, they'll be provided with medications, orderables, and other information that's relevant to that particular patient, so they don't have to search through the whole orders catalog. That will be a substantial advance and incentive for them. If we can't get a working diagnosis from the providers, we're looking at surrogates that can act as triggers or identifiers for a particular disease or service line for that patient. And what I want to tell you is a little bit of the story around acute MI, aspirin, and troponin. As you may be aware, troponin is a fairly sensitive and specific marker for myocardial injury, and very often when a patient presents with chest pain, a troponin is done. Very often when a patient presents having a heart attack but doesn't say chest pain, a troponin might be done as part of a laboratory panel anyway. So oftentimes we'll identify patients who have had a heart attack by that test alone. I want to point out in the last bullet point here that we've had a lot of difficulty finding markers like this for things like heart failure, community-acquired pneumonia, and so forth. Next slide, please.

So once we've found a surrogate, or we have a working diagnosis, we can start to build tools into the electronic medical record system that do a number of things. First, they embed information into the clinician's workflow. They encourage and guide the clinician to perform the right actions, and if the actions are not possible in the workflow, we've built functions that allow them to capture data that tells us, in discrete fashion, why they didn't give the patient an aspirin. I don't know if you can see the detail of this particular slide, but this is our order set for acute myocardial infarction, and in it we have the normal orderables -- aspirin, beta blockers, ACE inhibitors, and so forth -- but we also have what we call a contraindication orderable. If the patient is not able to take an aspirin, the clinician can select that orderable, and that leads into the process of documenting a valid exclusion from the normal quality intervention. If we can go to the next slide.

By the way, that's an evidence-based order set, where we've engineered the quality interventions as well as process flows that encourage the best care. This slide describes our rule. Basically, in pretty simple terms, it uses the troponin. Upon opening of the chart, the computer searches for an elevated troponin level. If the answer is yes, the computer checks to see if aspirin was ordered. It also checks to see if it was given prior to admission. For example, if the patient got an aspirin in the ambulance, that is not part of our information system, and we had to architect a process whereby the person to whom the patient first presents asks the question: did you get aspirin in the ambulance, did you take aspirin before you called 911? That was a gap in our data that we had to plug in order to avoid firing this rule unnecessarily.

So if it wasn't given prior to admission, we check to see if there are any reasons why the patient shouldn't get aspirin: allergies, bleeding complications, elevated creatinine levels, and so forth. If we have still not given aspirin and there is no valid reason not to, we alert the physician, or the next caregiver to open that medical record, to go ahead and order the aspirin or document contraindications. When we implemented this rule in December of 2006, we tracked and watched what happened. We went live simultaneously across 9 hospitals, and about 160 patients per month are getting aspirin who wouldn't otherwise have gotten it, or wouldn't have gotten it in such a timely fashion. If we can go to the next slide.

This is an example of the actual alert that fires when the clinician opens up the chart. They get a very specific, detailed message with the actions we request of the clinician. They can click evidence and go directly to that. They can go ahead and document the reason that the patient shouldn't get the intervention. Or they can go ahead and order the aspirin. Next slide, please.

We don't want to prescribe or force the physician to order the aspirin if it's not appropriate. So this is an example of providing the physician the opportunity to document the reason the patient didn't get the aspirin, in a discrete fashion that's very clear and automatable, and that allows the reason to become electronic. We can then automate this particular measure more readily by having that data in the electronic medical record and the warehouse. Next slide, please.
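
In code, the rule walked through on these slides might look something like this minimal sketch. The chart fields and the troponin cutoff are hypothetical illustrations; the production rule lives in the EMR's rule engine and is not reproduced here.

TROPONIN_THRESHOLD = 0.04  # ng/mL; illustrative cutoff, not Trinity's actual value

def on_chart_open(chart):
    # Fire only when troponin is elevated and nothing explains the missing aspirin.
    if max(chart.get("troponin_results", [0.0])) <= TROPONIN_THRESHOLD:
        return None                      # no evidence of myocardial injury
    if "aspirin" in chart.get("active_orders", []):
        return None                      # aspirin already ordered
    if chart.get("aspirin_prior_to_admission"):
        return None                      # given in the ambulance / before calling 911
    if chart.get("contraindications"):   # allergy, bleeding, elevated creatinine, ...
        return None                      # valid documented reason not to give it
    return ("Elevated troponin and no aspirin on record: "
            "order aspirin or document a contraindication.")

alert = on_chart_open({"troponin_results": [0.9], "active_orders": [],
                       "aspirin_prior_to_admission": False,
                       "contraindications": []})
print(alert)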

I'll come back to my take-away points, and I hope I've supported them. Standardized terminology to define quality: clear, recognizable definitions that we can make discrete in the electronic medical record. Finding ways to help us incentivize physicians to provide the data, particularly in organizations like ours where the physicians are not our employees and not part of a group practice, and where we may be taking some of their time to provide discrete documentation. Helping us with the algorithms and analytics, and reducing some of the complexity in the quality definitions, so we can abstract these out of the medical record in a reasonable fashion. And helping us find ways to bring data together from across venues of care, and helping underserved communities implement electronic medical record systems and share data across organizations. Hopefully I've demonstrated that today. Thank you. The last slide is just some credits; we'll leave that.

>> Carolyn Clancy:

We both take great notice that you each credited a terrific team of folks in both organizations, and I think we're getting a clear message about that. A question, I think, for both of you -- we just have a few minutes here. It does seem to me that you're having to do a lot of back-end work to adapt your systems to existing quality measures. So another way to think about this, a few years out in the future, is that as quality measures continue to evolve -- and I'm looking at Mike Rapp because I'm sure he's hoping they will continue to evolve, since he's working with CMS and recently passed legislation -- we might fundamentally reengineer that process, as well as the process of clinical practice guideline development. Do you want to comment on that?

>> Mike Kramer:

I think we're all very early in the journey of trying to make the technology and the definition of quality align. Standardizing that language, standardizing the algorithms, and building some of the logic that would allow us to infer from discrete data that quality care was received is something that we should share in -- not just organizations like ours but also the vendors. And a great deal of effort needs to go into making the tools collect the data in a manner that's very useful.

>> Ron Paulus:

Yeah, I would agree with that. And I think to a certain extent some of the quality measures may have presupposed that people don't have electronic health records. So there's a reality out there in terms of the degree of adoption and this bridging function. Along these same lines, I've had discussions at length with Beth McGlynn about her multi-hundred set of measures, about looking internally to see how many of those we could or couldn't produce and what it would take. So we're looking for funding for that project, if anyone is interested.

[laughter]

>> Nancy Foster:

Carolyn --

>> Carolyn Clancy:

Good business people. Yes?

>> Nancy Foster:

It's Nancy Foster, could I jump into the question?

>> Carolyn Clancy:

Of course.

>> Nancy Foster:

That was a great presentation -- both of them were -- and I'm wondering, in addition to the kinds of advances in measurement that Carolyn referred to, what if the individual measures change? Could you help me understand how you'd process it if, for instance, aspirin were no longer seen to be the ideal drug for the certain subset of AMI patients you were describing, Dr. Kramer?

>> Mike Kramer:

Well, I think the substantial benefit of being an organization that can turn on a rule across 9 hospitals overnight, or deploy an order set, is that it allows us a great deal more influence over our caregivers than waiting for the penetration of knowledge through traditional methods. So changing the rule is really a flip of the switch. I think the harder part is engineering the tools into the normal and routine workflow of our clinicians. If the evidence changed and required aspirin to be given not within the first 24 hours but at exactly 34 and a half hours into the hospital stay -- and that's a completely bizarre idea, but let's say it did -- there is no clinician sitting around in the workflow waiting to activate that order. So I think the harder part is actually engineering this stuff in the first place. Once we have the hooks and the triggers and the tools in place, we can configure them pretty readily.

>> Ron Paulus:

I would echo that. The core competence that most organizations really need to have is the ability to do care transformation. When we reengineered the CABG process, there were 40 process steps; it could be that not all 40 are actually required, or maybe 46 are, or that 5 of the 40 get changed out later. That's not the issue, to your point. The issue is: can you develop a process, including technology, that can take evidence and put it into practice? Because the real gap is not so much evidence creation as it is translation -- getting at least what's known to be right into real-world practice. That's where most of the barriers have been, whether it's the 7 to 17 years it takes for a well-known intervention to be adopted by 80 percent. That's the core issue.

>> Nancy Foster:

Thanks, I was hoping you were going to say that.

>> Carolyn Clancy:

Mike?

>> Mike Rapp:

Yes, thank you for your presentations, and as Carolyn indicated, I do have an interest in this; I work not only on the physician end of things but also the hospital end. It sounds like you've given a little guidance on how we might think about measures: first of all, to think of the data source and what it might be, as opposed to focusing purely on the clinical content. But then you talked about incentivizing physicians, in that they don't necessarily have an incentive and yet you depend upon them for the data. I'm interested in that, because at CMS we now have a program that incentivizes physicians to report on measures that frequently overlap with the hospital measures. But on the other hand, hospitals are now reporting this data, and you have the order set that you're talking about. So what is your thinking about the potential for collecting the data regarding the physician's role in these measures from the hospital? If one did that, and there were an incentive on both sides -- it wouldn't have to be through the hospital, but a separate incentive to the physicians for those measures -- what is your view on using that as a data collection vehicle for the physician's participation as well as the hospital's?

>> Mike Kramer:

I think I see one point here, which is that there is some alignment, because these physicians are practicing in both venues of care, and we now both have incentives, in ambulatory care and inpatient, to share data.

>> Mike Rapp:

Well, actually, in the physician quality reporting initiative, for example, we have a measure to give aspirin for heart attack. So it's the same exact measure; it's just that we tell the doctor, you report -- I'll go through this in my presentation -- we're going to ask the doctor to report the same information the hospital is reporting. For the surgical infection prevention measure, we're going to ask the doctor to report the same information as the hospital. Some people would think that's a somewhat inefficient collection of data.

>> Mike Kramer:

You know, I could see that argument. My interest, I think, is in seeing that if the ambulatory doctor is struggling to report that data, and we have that data as a hospital system or a healthcare system, our opportunity is to share it with them and provide the data structures that would allow them to do some of that reporting more easily. A number of our clinics actually see the opportunity to use the same platform, the same electronic medical record, and therefore be able to share in that data as an incentive to align with us. The fact that we're all struggling to collect the same data means we may end up saying: when you bring that patient in for the heart attack, could you possibly fill out this discrete form in the hospital system for us as the patient is being registered, and then we'll have it in the system and report back to you. Those are some of the structures we'll probably set up for the measures that are aligned between outpatient and inpatient.

For those measures that aren't aligned between outpatient and inpatient, there really isn't as much incentive for physicians to provide that data, and they're not worried about the hospital's problem of publicly reporting quality measures. The other aspect of this that I think is really difficult is that in community-based practices, getting physicians to use the electronic medical record at all, so we can influence them, is difficult. We want them to have full access to the information, but they may not be willing to start using the system because it's an encumbrance to them, or it's a big change, or it's not like the one in their office. So, increasingly, providing greater usability, greater functionality, greater value, and incentives will bring physicians in our community practices in line with the desire of the hospital to have them use the information system that's available in those hospitals.

>> Mike Rapp:

Thank you.

>> Carolyn Clancy:

One last question, Kelly and I'll come back to you.

>> Kelly Cronin:

I was wondering if either Geisinger or Trinity has ever seen a need for doing regional data aggregation outside your systems. If we take on an expanded set of measures over time that would include data elements captured outside your systems, have you thought through when that might occur, or whether that would be something you would consider?

>> Ron Paulus:

Well, at least for Geisinger, we are sort of spearheading one of the RHIO projects for the region. So we're definitely interested, both in pushing our data out and because there are lots of pieces of data that we'd also like to be able to get electronically -- you know, we've got a whole building of people who are getting faxes and scanning things, that kind of stuff. So we definitely have that interest. I don't know what specifics you're referring to, but as a generic statement, yes.

>> Mike Kramer:

We have a substantial number of markets across eight states, and so there are varying regional efforts; in Michiana -- northern Indiana and southern Michigan -- MHIN is a regional information sharing organization. We will be contributors in all of our marketplaces to quality databases and repositories. But managing the complexity of the business agreements to be involved in regional organizations across the country is extremely ambitious, and the complexity of that in each of our markets would probably be beyond our organization.

>> Kelly Cronin:

That's really helpful to know. So standardized data sharing arrangements that would support quality measurement would be helpful then?

>> Mike Kramer:

Indeed.

>> Carolyn Clancy:

Janet?

>> Janet Corrigan:

Mike, I really liked your slide number 10, which talks about the 26 Joint Commission and CMS quality indicators you traced down to data elements, and I have two questions for you. You found that 48 percent of the data elements necessary to calculate the measures were present in the electronic medical record. Do you think that's typical of all types of medical records -- and maybe Ron could even corroborate that from his end; have others gone through that exercise, and is about half probably a reasonable figure? And do you think the set of measures that you selected, the 26, is fairly representative? If you had a broader set of measures, would it be the same? Then a second question: I'm curious, as you move to ambulatory settings, do you think it will get harder or easier to derive the measures from the electronic health record? I can see it both ways, because in part the encounters become less complex, so maybe the issue of getting to a tentative diagnosis or reason for the visit is easier. On the other hand, I'm assuming that the proportion of data that comes from the physician goes up even higher in ambulatory settings. It's probably more narrative and less structured.

The first question, I'm sorry, is on the 48 percent.

>> Mike Kramer:

So I think that as an organization approaches a complete electronic medical record, it will find that probably about 20 to 40 percent of the medical record is non-structured data. In our particular case, nursing documentation, medication orders and medication administration, and the intake assessment process are all discrete, electronic data, and that gets us to between 50 and 70 percent of the medical record being electronic. So I think the answer is that it depends on what functionality you have turned on within your organization, and where you are relative to the fullest capability of an electronic medical record. Trinity, I would bet, is a little atypical in that we're probably further along with medication administration, physician order entry, and nursing documentation than many organizations. So I would expect that most organizations have an even greater gap.

>> Janet Corrigan:

Wow.

>> Mike Kramer:

As far as ambulatory care goes, there are a few things working in its favor. One is that we have providers who take more accountability for the maintenance of the longitudinal medical record, so past medical history, diagnoses, and problems are probably captured more reliably in the outpatient setting. That helps a great deal. With beneficiary notification, orders are associated with diagnoses. There is also end-of-visit diagnosis capture on a faster cycle time: it's a 15-minute visit, and you get a diagnosis for the bill. So you may be able to solve the working diagnosis problem a little more easily. On the other hand, many of our clinicians are working very rapidly, so using structured documentation tools that could slow them down will be more difficult in rapid-cycle venues of care like ambulatory care or the ED.

>> Janet Corrigan:

Thank you.

>> Carolyn Clancy:

So as we continue our whirlwind tour, if you will, getting different perspectives on data infrastructure, we're going to turn to the payers' perspective, and we're going to hear first from Shirley Lady. Let me also say that in my excitement about telling you all it was National Patient Safety Week, I inadvertently skipped over our opportunity to hear from our co-chair Rick Stephens, so what we've decided is that he will make his comments on taking a systems approach to some of this before the break. So is Shirley Lady on the phone or here? I see someone standing right up. Welcome.

>> Shirley Lady:

Thank you for asking me here. I'm certainly going to give you a different perspective than has been previously provided. Very different. But still, it is for us a very exciting time for Blue Health Intelligence, and the potential that this particular initiative has for the future. Next slide, please.

The vision of BHI came about sometime in 2004, and eventually a number of Blue plans came together and decided that it was necessary, to meet our constituents' demands, to build a data warehouse that would maintain claims, enrollment, provider, lab, and pharmacy data. That's the grand scheme. But it was really born out of a need, and a multifaceted perspective, from our constituencies. The primary focus initially -- and you'll see this through my presentation -- has been the employer groups. They drive our business, and they have been out there demanding information. Information is being demanded across the spectrum from all of our clients; they want informed decision-making. So in response to that, a number of the plans came together and launched Blue Health Intelligence. Next slide, please.

This slide shows you the brave plans that took a leap of faith and ponied up the financing for this particular initiative, which is in the multimillions. The scope is extremely large, and therefore we had to have a number of plans join together and share in the economic funding of this initiative, and it operates as a separately funded operation relative to the association. Not all member plans belong at this particular time; it is anticipated participation will continue to grow. We started out with 19 plans, and last year WellPoint joined. When WellPoint joined they brought a significant number of members to the table, about 29 million additional members, and we're looking at the vast majority of plans participating. It's really economically difficult for some of the smaller plans, those with a very limited number of members and resources, to participate in this initiative, just because of the sheer financials involved with building this warehouse. So we're hopeful that, as we get it started, we'll be able to add further plans into the mix as we move forward. Next slide, please.

What I'm going to tell you about today is what BHI is, and also a bit of what it's not. I'm going to talk about its size and what it's going to be able to deliver, both where we are today and in the future, because I believe your organization is going to be most interested in positioning for the future. Next slide, please.

Just on sheer breadth and depth, we're bringing to the table something that's not been done before: building a capacity of this size in the healthcare world. We believe it is, at this particular time, in fact the largest health data warehouse, which gives us a lot of opportunities, but the complexities around building it have been many, and I'll spend a little bit of time sharing that with you. We do think it's going to position us for the future and be able to meet our data needs. You need to note that this is all de-identified data; there's no individually identifiable record -- researchers cringe when I say that -- but we had to make it that way for HIPAA compliance and security reasons. When we announced BHI to the public there was a lot of backlash from individual privacy organizations that said, oh my gosh, what are you doing with the data? It was the big-brother approach they were concerned about: is my employer going to get the analytics behind what's going on with my healthcare? And so we made a very pointed decision to make it all de-identified data, which means we won't be able to go down to the individual level and say Mr. Smith needs an aspirin, or had inappropriate care at this particular place. What we are looking for is trending: what's going on in the industry. We will be able to identify drug utilization issues. Even with the smaller warehouse we had in the FEP, the Federal Employee Program, world, we were able to identify issues with a number of drugs before the FDA pulled them off the market. With the vast amount of data we anticipate having in BHI, we'll be able to do that trending much sooner and with much more validity. Next slide, please.

So, how we did this. A couple of aspects that were really integral and important to us in the development of this warehouse were being able to use a data model that would guarantee standardization. I know it sounds an easy thing to have common data definitions, but you've never worked with 19 different Blue plans: everyone had their own definition, and they were all right. So bringing consensus to common data definitions across the spectrum was not an easy thing, and it certainly was a significant challenge just within the first phase. You can't imagine the number of data definitions there are for just a maternity admission. When does it start, when does it end, does it include the child, what about when the child stays and the mother goes home? All those different aspects had to be considered, and that was no easy task. What we tried to do was adopt standard definitions that had commonality in the industry, that had previously been determined acceptable by most of the industry, so we were trying to set something that would not be inexplicable in the future when you were taking a look at what the data means.

The other piece that we had to pay a lot of attention to is data quality. When you're taking multiple disparate claims systems, some legacy systems, some new systems, and trying to bring all that data together, it becomes a significant challenge. Once you establish the data definitions, you have to make sure those data definitions are complied with. So we brought in Milliman, an actuarial firm that has some of its own healthcare definitions out there, and asked them to review the data as each submission comes in; there are four levels of certification for getting data into the warehouse. Now, that sounds fairly easy, but -- I won't name the plan -- one of our plans has been trying to get its data in for over a year and still hasn't made level 4 certification. What we have done is cleaned up a lot of the data out at the plan level, in the claims systems, because this process has revealed some issues that had previously been hidden. But the result is that, across the participating plans, we have common data definitions, we have common data elements, and we have confidence that the data coming out of this warehouse is accurate and reliable, which is critical for anything you're going to do with the data. Next slide, please.
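
A minimal sketch of that staged certification idea: a submission must clear each level in order before it can load. The four checks here are invented stand-ins; BHI's actual certification criteria are not described in this presentation.

def level1_parses(batch):
    return all("claim_id" in row for row in batch)          # file structure is readable

def level2_required_fields(batch):
    return all(row.get("member_key") and row.get("service_date") for row in batch)

def level3_valid_codes(batch):
    return all(row.get("dx_code", "").strip() for row in batch)  # coded per the common definitions

def level4_actuarial_review(batch):
    return len(batch) > 0   # e.g., totals within expected bounds for the plan's book

LEVELS = [level1_parses, level2_required_fields, level3_valid_codes,
          level4_actuarial_review]

def certify(batch):
    # Reject at the first failed level; only fully certified batches load.
    for i, check in enumerate(LEVELS, start=1):
        if not check(batch):
            return f"rejected at level {i}"
    return "certified: load into warehouse"

print(certify([{"claim_id": "c1", "member_key": "m1",
                "service_date": "2007-01-05", "dx_code": "410.91"}]))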

As I indicated earlier, our first customers demanding information out of BHI were the employers. Employers across the country wanted to make sure they were seeing appropriate healthcare costs and appropriate utilization for their employees, so we focused on their needs first and are building BHI forward from there. Those benchmarks were designed for national and local accounts, and as you can see here, the various extracts being provided to the plans cover inpatient, outpatient, and a number of different aggregated sets. We have the national level, the regional level, and down to the MSA level, and I'll show you a little bit more about that. All of these benchmarks are adjusted by severity and by SIC codes, which are the industry codes, because that way we can compare a bank in the northeast to a bank in the midwest -- where are the variances? -- since they have basically similar, female-dominated populations. Employers want to be able to compare accurately against their industry, their region, their locale, and their age groups, so you can adjust and formulate these benchmarks relative to the particular employer's needs. Next.

We believe that having the MSA information -- and this is down to the three-digit ZIP code level -- is significant, because we're able to get depth and breadth within the Blue population. We cover a lot of the territory of the United States, and thus we're able to ascertain benchmark information not only for urban areas, which seem to be what most can determine, but also for rural areas, because we have covered members who climb trees and dig ditches and do most everything across the country. So we bring that to the table, with over a thousand MSAs as well as some rural benchmark areas. As I indicated, at this particular time we do national, regional, and MSA benchmarks. When we have BHI fully populated, we're also going to look to have district information, district benchmarks, which will give an additional slice of perspective to the employers. Next, please.
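
A minimal sketch of that benchmark slicing, aggregating allowed amounts by MSA and SIC industry code so an employer can be compared against its own industry and locale. Severity adjustment is omitted, and all values are invented.

from collections import defaultdict

claims = [
    {"msa": "chicago", "sic": "banking", "allowed": 310.0},
    {"msa": "chicago", "sic": "banking", "allowed": 150.0},
    {"msa": "rural_area_3digit_zip", "sic": "forestry", "allowed": 95.0},
]

# Bucket allowed amounts by (locale, industry), the two benchmark dimensions.
buckets = defaultdict(list)
for c in claims:
    buckets[(c["msa"], c["sic"])].append(c["allowed"])

for (msa, sic), amounts in buckets.items():
    print(f"{sic} in {msa}: mean allowed = {sum(amounts) / len(amounts):.2f}")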

Now, more along the clinical lines and things that you'll be interested in. Currently we're in stage 2, Phase II -- I'll show you that in a little bit -- which is bringing in the pharmacy data. In the first phase we have the medical claims data, we have the enrollment data, and we have some provider data that can be ascertained off the claims submissions. We have future work to do in the other areas. But we did select a medical grouper; we're using MEG, which I believe CMS is also using in some of their work. It's a Medstat product. And we also use DxCG as a risk grouper, and each of our plans receives the results for their entire book of business, as well as their various employers, relative to those types of dimensions. Next, please.

We also, as I indicated earlier, have incorporated SIC codes. This differentiates BHI from a lot of warehouses, because many of them haven't put in those kinds of industry codes to compare industry to industry, and we believe that's a differentiator for BHI. This was not easily done either; we had a lot of the plans kicking and screaming, because there was a significant cost to establishing those SIC code categorizations and incorporating them into the warehouse. But still, it's providing additional value to our customers. Next, please.

This slide, I think, you're going to find the most interesting, because we're not trying to develop BHI in a vacuum. It's become apparent to us that we need to meet our customers' needs, and the demand here is multifaceted. We are in the process of establishing an employer advisory group; they haven't met yet, but we have the members identified. The Blues have a significant employer base, and we've reached out to those various employers and asked them to participate in an advisory group to help us shape BHI around their needs. More important for this group are the top two items, starting with the physician advisory group that has been formed. We have invited physician representatives from across the country to join with us to provide direction for our clinical quality agenda. We've just begun incorporating some quality indicators into the pharmacy phase that we're in right now; some of those are HEDIS-based. It's just the beginning, but we clearly understood that we needed some direction, and why not go to the customers we work with the most, the physician community, and get their input into how to build the quality agenda within BHI? When we made the public announcement of BHI, you can't imagine the number of research groups, pharmacy companies, and biotech companies that came out of the woodwork and asked how they could get direct access to the warehouse. Our board has been very cautious about how to move forward with this, because there is some risk to opening up your warehouse, and we have to make sure we do it appropriately. This is not an initiative that's there to make us money; we truly believe we're going to use BHI as a tool to provide information to our constituencies. We've been asked if we're going to sell our data. That has not been determined. We will certainly partner with public and private enterprises to make sure we maximize this. But clearly the public good and the public welfare are foremost in the thoughts of the Blue system; as many of you know, the plans were founded by physicians and hospitals, with reimbursement structures to make sure people have access to quality healthcare, and that mission is being carried out within BHI.

The third arm there is the clinical advisory group. We haven't started that one yet; biting off the two for this summer was quite enough for me, thank you. But we will be looking to that third one, because we know there will be research opportunities, and we need oversight to make sure those opportunities are appropriate and meet the standards and guidelines we will develop around this group's ability to access the warehouse. Next slide, please.

This is where we are relative to actually getting members in. When we say there are 79 million covered lives in the warehouse, that's the potential right now; we should have those all in by mid-2008. Actually the target is earlier than that, but nothing ever moves smoothly and on time in this. It's a vast undertaking. My background is not in technology at all, so I have learned about things like ETL tools and the composition of different hardware and software -- my key folks tell me I know more than I probably should have to know, but it comes with the territory. It's a vast undertaking, and we've had to do it in stages; we rolled this out in various stages and phases of getting the data into the warehouse. It is operational now; it really does work. Our first benchmarks were released in December, and our second set of benchmarks were released this month. Eventually we will get up to monthly benchmark releases for our plans, on a rolling 12-month basis with 30 months of history. Now, our researchers are telling us that's not enough -- you need to build to 42 -- but that just takes money. So eventually we may be able to expand the warehouse to include 42 months of data, so that we have more research capability within the data warehouse.

The second release moved our total covered lives in the warehouse itself up to 32 million, and we will continue to expand that as we add more and more plans. We have a second wave coming, which should be in by August, and then the WellPoint data, as I indicated earlier, will be in hopefully by the end of this year, but certainly in the first quarter of 2008. Next slide, please.

All right. As I indicated, we're building this in stages, and stage 1 is complete. We are currently in stage 2, Phase II, which is the design phase for pharmacy; bringing in pharmacy is also when we're adding the MEG and risk groupers. At this time that's being designed and built, as we enhance the BHI data warehouse with those particular components of data. The third phase is of course the pilots and testing, and then the fourth phase is implementation: you have to develop it, then implement it and roll it out. Stage 2, pharmacy, will be rolling out through the second quarter of 2008. We have multiple threads of activity going on within this warehouse. The third stage is the enhanced provider data; that's when we'll be incorporating credentialing information, demographic information, et cetera, so that we can build as complete a foundation for future analytical capabilities as possible. And lastly, we've targeted lab results.

I anticipate that this project will probably go on beyond my retirement. It's a long-term project, and I doubt it will ever end, because each year the plans come to the table and say we want this enhancement and that enhancement. We actually have to cut off the list of enhancements per stage, because we simply don't have enough funding to cover everything. The amount of money spent on BHI is phenomenal -- at least it would be in my personal bank account, and probably anybody else's here. It's funded by the plans; the annual budget varies by year based on what we're doing, but it's between 30 and 40 million. That's just the warehouse enterprise being built; it has nothing to do with all of the back-end support coming out of the plans, which is multimillions more. So it's an extensive resource drain, and not a single one of our plans anticipated how much it would take or how complex it would be. It's not something you drop in overnight. Some individuals say, well, just build a data warehouse and you've got all the data there. It's just not quite that easy, as these gentlemen can attest. Next slide, please.

I'm just very quickly going to go over a little bit of the architecture. I had no idea whether you'd have much interest in this or not, so I will skim it; if you have questions later, we can spend more time on it. Essentially, the data comes in, and there's an extract, transform, and load tool that is used to transfer it into our integrated data warehouse. Out of that, then, come all the normative benchmarks, and on the next slide -- can you get to the next slide, please.

It shows you how those benchmarks are delivered. The detailed data goes back to the plan itself for its own individual population. The plans also get the statistical benchmarks and the summary benchmarks, and they're allowed to run queries and do comparative extracts from the plan location. We've limited the number of extracts, at least initially, because we don't know about the volume that will come down the line. Whenever you have a warehouse of this size, you have to make sure there are traffic cops and the ability to get the data in and out and get everything done within an appropriate timeframe. So those are some of the limiting factors around the delivery of this product. Next slide, please.
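
A minimal sketch of the extract-transform-load flow just described, with a summary benchmark computed back out for a plan. The feed format and field names are hypothetical simplifications of the architecture slides.

def extract(plan_feed):
    # Parse a pipe-delimited plan submission (format is invented for illustration).
    return [line.split("|") for line in plan_feed]

def transform(rows):
    # Apply the common data definitions agreed across plans.
    return [{"member_key": r[0], "service": r[1], "allowed": float(r[2])}
            for r in rows]

def load(warehouse, rows):
    warehouse.extend(rows)

def plan_benchmark(warehouse):
    # One of the summary benchmarks delivered back to the plan.
    total = sum(r["allowed"] for r in warehouse)
    return {"n_claims": len(warehouse), "mean_allowed": total / len(warehouse)}

warehouse = []
load(warehouse, transform(extract(["m1|office_visit|82.50", "m2|rx|12.00"])))
print(plan_benchmark(warehouse))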

So essentially we're building this for the future. Initially we're getting benchmarking information out of it; it's anticipated we'll get profiling information, the ability to do quality analysis and research analysis, and also some predictive modeling. So when we take a look across the spectrum of BHI, we anticipate that this is just the tip of the iceberg. It has so much potential for improving healthcare, not only for our members but across the country. We believe it's our mission and goal, as a Blue organization, to work with public as well as private entities to meet that mission. Next, please.

So, challenges. I've touched on each of these. It's large, and therefore complex. The funding is large, and it keeps expanding: the original estimates said this could all be done for 20 million dollars, and we've multiplied that several times over at this point. The complexity of the Blue organization brings something to the table that other payers don't have to deal with, because we have independent companies around the country feeding into this. And then there's the aggressiveness of the timeframe: our clients and customers want the information now, if not yesterday, so we're trying to meet those needs on a very aggressive timeframe that we get complaints about all the time. Next slide, please.

So in conclusion: we believe that knowledge drives better decision-making, and that it's going to mean better, more affordable, quality healthcare for our members and our employer groups. We believe that information is necessary to meet the needs of these constituencies, and that BHI provides that information, and we believe we will get big returns in the public, private, and employer sectors with this initiative. It's a vast undertaking, but one we think necessary and important for positioning ourselves for the future and the future needs of healthcare. Those are my comments; I hope I touched on something you were interested in. When they asked us to speak as a payer, I thought we could address where we are on this. We're not at the endpoint, but we're certainly at the beginning.

>> Carolyn Clancy:

Thank you very, very much. I am going to give a little heads-up to our future presenters that we're going to need to pick up the pace a little bit here.

>> Shirley Lady:

I spoke as fast as I could.

>> Carolyn Clancy:

I really appreciate it and quite honestly our appetite and enthusiasm may have been a little bit too much for our timeframe but we'll work through this.

I have a question for you, though, and Ron and Mike I'd ask you to jump in if you want. You said the data in the warehouse are de-identified.

>> Shirley Lady:

Yes.

>> Carolyn Clancy:

Can you say what you mean by that? If they were totally de-identified, you wouldn't be able to tie encounters or utilization back to actual people.

>> Shirley Lady:

That's correct. We can tell encounters, we can do cost and utilization relative to populations, and we could pull out a subset of individuals, but not identify them as Sally Jones or --

>> Carolyn Clancy:

Right.

>> Shirley Lady:

We can't do that. The data that goes back to the plans, however -- the plans can translate it back from that de-identified --

>> Carolyn Clancy:

So you’ve got encrypted identifiers, or something --

>> Shirley Lady:

Yes. We had to make sure we were HIPAA-compliant; we went and got an independent third party to certify that no one can tell who these people are.

>>

Including dates?

>> Shirley Lady:

Pardon?

>>

Including dates.

>> Shirley Lady:

Yes -- well, no, not dates. We do have date information, but it's stripped according to the HIPAA compliance requirements for de-identification. And then I got a third party to say yes, they've done this. There's too much privacy concern for an insurance company to have this large a data warehouse with individuals identified.

>> Carolyn Clancy:

I'm sensitive to that, but I'm trying to balance that against the utility because if you don't have some way to match people back up, mothers and babies, for example.

>> Shirley Lady:

The plans do; the data that goes back to the plans does. And --

>> Carolyn Clancy:

Okay.

>> Shirley Lady:

So they can connect those together for their own utilization and cost and quality initiatives.

>> Carolyn Clancy:

So in essence you're almost like an analytic intermediary for them, or doing the first few steps for them and then handing it back to them so that they can do customized and specific runs.

>> Shirley Lady:

True, except that the beauty of BHI is that we can gather things on a national basis and tell national trends.

>> Carolyn Clancy:

That helps a lot. And if I were a researcher, just say, and I wanted to do something with the Blues, how would I go about that? Would I have to approach all the member plans?

>> Shirley Lady:

No. We have a BHI board, and that board pretty much directs the agenda. As I indicated, we are just beginning to take a look at the clinical advisory group that will be for research, but we do have a research arm within the association. And that works with a number of researchers across the country. They've already requested future access to the information. We're just not there yet. It just takes so much time and energy just to get the data in the warehouse. But once we have pharmacy, we'll begin to start some of that agenda. And that's before the end of --

>> Carolyn Clancy:

In the interest of full transparency, let me say that we do a lot of work with the Blue Cross tech center, and I just want to be clear about that. Do either of you want to comment? Since all of your data is in one system, issues of where personally identifiable data reside are less of an issue, I would presume.

>>

Well, it's always an issue. First, we have the data being captured in different systems, so we have our own MPI to organize it within our system. Then there's the issue of data de-identification, which is why I asked about dates. Dates are usually the killer: whether you have to index dates -- T equals zero, plus six months -- versus real dates. The real dates are important if you want to intervene. So those are the issues we've been dealing with. And then whether it's truly de-identified, statistically verified de-identification, and whether that gets you out of the IRB or whether you need to trigger one.
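
A minimal sketch of the two ideas in play here: a keyed pseudonym that keeps records linkable while only the plan holds the key, and index dates (T equals zero plus N days) in place of real dates. This illustrates the concepts only; it is not BHI's certified de-identification method, and all names are hypothetical.

import hashlib, hmac
from datetime import date

PLAN_SECRET = b"held-by-the-plan-only"   # the key never ships to the warehouse

def pseudonym(member_id: str) -> str:
    # Keyed hash: stable (so records link) but not reversible without the key.
    return hmac.new(PLAN_SECRET, member_id.encode(), hashlib.sha256).hexdigest()[:16]

def index_dates(encounters):
    # Replace real dates with days since the first encounter (T = 0).
    t0 = min(e["date"] for e in encounters)
    return [{"pseudo_id": pseudonym(e["member_id"]),
             "day": (e["date"] - t0).days}
            for e in encounters]

rows = index_dates([
    {"member_id": "12345", "date": date(2006, 12, 1)},   # e.g., statin started
    {"member_id": "12345", "date": date(2007, 2, 1)},    # e.g., liver function test
])
print(rows)   # same pseudo_id on both rows, so episodes stay linkable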

>>

At this point we're not doing a lot of research, so we're not having to deal as much with the IRB and identification issues, but there's certainly high risk if any of that data gets loose.

>> Carolyn Clancy:

That makes sense. I was starting to key into it, particularly for you Shirley, when you were talking about getting to the quality reporting in the future and thinking now wait a minute, there's a certain point where someone has a key here.

>> Shirley Lady:

That is an inhibitor. The de-identification. We just didn't feel like we had a choice.

>> William Tierney:

This is Bill Tierney. Could I ask another question related to the usability of the data? When it's de-identified, is it still linkable? For example, if I want to identify whether a patient on a statin has had a liver function test done, and if so, whether it was elevated -- are things linked, even though I can't identify who the patient is? Can you link across different episodes of care?

>> Shirley Lady:

Yes, you will be able to do that. You could actually identify the population, but it would not be out of the warehouse; it would have to go back to the plan.

>> William Tierney:

Okay. So the key thing -- and I think this needs to be emphasized, and you need to emphasize this -- is something that Carolyn mentioned. You have an identifier in your dataset that allows you to link data. It's just encrypted in a way that you can't trace it back to individual patients.

>> Shirley Lady:

That's correct.

>> William Tierney:

That's key, because without that you can't do anything. We have a dataset here in Indiana, maintained by hospitals, that has no identifier, and you can't link anything across episodes of care. Pretty worthless data.

>> Shirley Lady:

Yeah, no, we definitely will be able to --

>> William Tierney:

Good.

>> Shirley Lady:

-- link episodes of care, yes.

>> Carolyn Clancy:

Helen, and then Margaret.

>> Helen Darling:

Quick questions, I hope. Number one, you mentioned the 30 to 40 million for the warehouse. What would the total healthcare spend be for the covered lives in the warehouse? I mean, my guess is that the investment is actually pretty small for the number of people and expenses covered in all the plans.

>> Shirley Lady:

Ongoing maintenance?

>> Helen Darling:

No, the total spend on healthcare for the people in the data warehouse.

>> Shirley Lady:

Oh, my goodness.

>> Carolyn Clancy:

Like what percentage are you investing --

>> Shirley Lady:

Right.

>> Helen Darling:

My guess is it's small. Maybe that's not so much a question -- I don't mean to put you on the spot -- but it does help to know, because the investment is probably actually pretty small relative to the expenditure, which is probably huge.

>> Shirley Lady:

That is probably true.

>> Helen Darling:

So the second is: are you thinking about access by consumers -- not to the data warehouse itself, but to information that could then be translated into tools for consumers?

>> Shirley Lady:

Absolutely. It's part of our agenda for consumerism, is that we will take the data and make it available for consumers through different consumer tools. Absolutely, for transparency and quality initiatives.

>> Helen Darling:

Terrific. And my last one is, given the fact that it is national and covers a lot of areas, all of us who love geographic variations are immediately salivating at the prospect, but are you planning to build in geographic variations analysis?

>> Shirley Lady:

Yes, we actually have already begun that.

>> Helen Darling:

Great. Thank you.

>> Carolyn Clancy:

Margaret, and then we'll move on.

>> Margaret VanAmringe:

I was wondering about aggregation at the facility level -- the hospital level, the nursing home level, whatever level of care. Are you doing that in the warehouse, or is that only done back at the individual plan level? Do you do any aggregation at that level?

>> Shirley Lady:

We haven't to date, no.

>> Carolyn Clancy:

Okay. I'm going to turn now to Dr. Wilhoit. And if I just murdered your name, please speak up. She's from Blue Cross Blue Shield of Illinois.

>> Carol Wilhoit:

Thank you for the opportunity to be here. It's always interesting when different people are given the same set of questions, how you approach it very differently.

>> Carolyn Clancy:

Yes.

>> Carol Wilhoit:

And so my presentation, I think, is going to be somewhat different from the others, which focused on one project, one approach, one system. I took the questions and tried to figure out as many different ways to answer each one as possible, so hopefully this will be helpful; I tried to really cover the range of different things that we do. Why don't you go to the next slide?

Blue Cross Blue Shield of Illinois is a division of Health Care Service Corporation, which has Blue plans in Illinois, New Mexico, Oklahoma, and Texas. The mission is to promote accessible, cost-effective, quality healthcare for our customers. I work in the quality area, and so that's really the focus of the data that I'll be talking about. From a practical standpoint, we look at quality by the functional groups that we work with: one is our HMO medical groups and IPAs; a second category would be physicians in the PPO network; and the third is hospitals. You'll see those themes coming through as I talk about some of the different data issues. There are significant differences among those entities because of the different kinds of contracts we have, the different kinds of providers, and the different initiatives, and so there are substantial data differences. Next slide.

The first area I wanted to talk about is our claims data, since that is really the richest data that we have -- and I think that would probably be true for most health plans. We have our medical claims data, which we use to look at a variety of different types of indicators, which I've summarized here. We look at utilization, we look at complications, we look at episodes of care. We do HEDIS reporting, there are quality indicators that we look at, and we look at things like electronic claims submission rates by hospital and by physician. So we use the medical claims data to look at a variety of different kinds of quality indicators.

The second data source that we have is pharmacy data, and again, that's used in a variety of ways. We look at utilization, and we look at things like formulary compliance and generic drug compliance. Pharmacy data is pulled into HEDIS reporting, both to identify members for the denominator of certain measures and to identify members who belong in the numerator and are compliant for certain indicators. Similarly, we use pharmacy data to identify both denominator and numerator members for different quality indicators at the physician level or the group level. And we also use pharmacy data to identify members for inclusion in disease management programs. Next slide.
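
A minimal sketch of using pharmacy claims both ways, as described: fills place members in a measure's denominator and, for the indicated therapy, in its numerator. The drug lists are illustrative stand-ins, not actual HEDIS value sets or criteria.

ASTHMA_CONTROLLERS = {"fluticasone", "montelukast"}   # indicated therapy (illustrative)
ASTHMA_RELIEVERS = {"albuterol"}                      # evidence of the condition

def asthma_medication_measure(pharmacy_claims):
    denominator, numerator = set(), set()
    for claim in pharmacy_claims:
        drug = claim["drug"].lower()
        if drug in ASTHMA_RELIEVERS | ASTHMA_CONTROLLERS:
            denominator.add(claim["member_id"])   # pharmacy data identifies the population
        if drug in ASTHMA_CONTROLLERS:
            numerator.add(claim["member_id"])     # pharmacy data identifies compliance
    return numerator, denominator

num, den = asthma_medication_measure([
    {"member_id": "m1", "drug": "albuterol"},
    {"member_id": "m1", "drug": "fluticasone"},
    {"member_id": "m2", "drug": "albuterol"},
])
print(f"compliant: {len(num)}/{len(den)}")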

A new source of data for us, one that adds richness and yet new challenges as well, is IPA encounter data. Our HMO medical groups and IPAs are capitated, so they pay their own claims, and until recently we really have not had the data about exactly what claims were being paid. We now do have encounter data from our IPAs. We have a few challenges to work through before it's perfect, but we're getting close, and we're now able to use that data for a whole range of health plan functions. From the quality perspective, we're using it for HEDIS reporting, and we're using it to identify members for some of our other quality programs as well. Going forward, we can use that data to identify members who are in need of preventive services, for example. But that separate data source is, for the most part, distinct from what's in our claims system already, and yet it is a very different data source to manage.

We do use publicly available Medicare data; I'll talk a little more about that later, but we aggregate it with our own claims data for certain purposes. So that's another data source that we use. And then a rich source of data that we have, but one that is labor intensive, is clinical data submitted by our HMO medical groups and IPAs. For our various quality programs in the HMOs, we identify members with specific conditions or members in need of certain preventive services, and the IPAs review their own data and submit it to us. We recommend that they start with their own claims data, since that is more easily obtained, and then do medical record review if they can't find what they need in the claims data. Because of the structure of the medical groups and IPAs, they have the administrative ability to do the medical record review, and so we have a very rich source of data for our HMO members. Next slide.

A whole different type of data, different from what other people have talked about but worth bringing to the discussion, is survey data. We have a variety of member surveys that we use for different purposes, and that's a whole different type of data with different purposes. We also have provider-reported data, for example from our hospitals: we get information about physician board certification, participation in certain quality improvement initiatives, and so on.

We use publicly available data, such as Leapfrog data, Hospital Quality Alliance data, and information about accreditation status from the entities that do accreditation. That's another source of data that we pull into the mix. And then we have complaint data that we pull from our inquiry tracking database and use as well. So even just starting with data sources, there is a whole range of data sources that we use and pull into our quality improvement program. Next slide.

So dealing with the multiple data sources does become a significant challenge and effort. All of our claims data and all of our pharmacy data, across the enterprise, across all four states and all four Blue plans, are aggregated into what we call EDW, our electronic data warehouse. So there's one database that houses the claims and pharmacy data, and when we need information for some other purpose, we have an extract pulled from EDW and then load the file into whatever other application we're using.

So what is the range of things that we pull extracts for, how do we use them, and what kinds of databases do they go into? First, we'll pull files that are used by our QI staff for functions such as creating lists of members for outreach purposes, for childhood immunization or cervical cancer screening. We have files pulled from the warehouse for loading into vendor software that creates episodes of care. We have files loaded into vendor software that we use for HEDIS reporting, and that software will also accept our registry data and medical record data, so again, it aggregates data within that warehouse. To look at hospitals, one of the things we do is look at the patient safety indicators and the inpatient quality indicators. Currently the state of Illinois does not make all-payer data available. So while we have about four million Blue members in Illinois, quite a bit of data, it's much less rich than being able to look at all admissions; when you're looking at things like complications, where the numbers are low, having a more robust data source is helpful. So we have a vendor build us a warehouse that combines the publicly available Medicare MEDPAR data with our Blue claims, and that creates the most robust alternative we currently have for looking at complications within each hospital in the state. So again, it's another form of aggregation, another database that we use. And then, as we look at physician quality indicators, we have a file pulled from our warehouse that we send to an external vendor who actually runs the indicators. So there's a whole range of different functions for which we use data from the warehouse.
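As a rough illustration of the pooling just described, here is a minimal Python sketch of combining MEDPAR and Blue Cross admissions to compute per-hospital complication rates. The file layout, field names, and the low-volume threshold are hypothetical, not the plan's or its vendor's actual system.

```python
# Minimal sketch: pooling MEDPAR and Blue Cross admissions to compute
# per-hospital complication rates. Field and file names are hypothetical.
import csv
from collections import defaultdict

def load_admissions(path, source):
    """Read one admission per row; each row flags whether a complication occurred."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            yield {
                "hospital_id": row["hospital_id"],
                "complication": row["complication"] == "Y",
                "source": source,  # "medpar" or "blue"
            }

def complication_rates(admissions):
    counts = defaultdict(lambda: {"admissions": 0, "complications": 0})
    for adm in admissions:
        c = counts[adm["hospital_id"]]
        c["admissions"] += 1
        c["complications"] += adm["complication"]
    return {
        hospital: c["complications"] / c["admissions"]
        for hospital, c in counts.items()
        if c["admissions"] >= 30  # suppress low-volume hospitals; threshold is illustrative
    }

pooled = list(load_admissions("medpar.csv", "medpar")) + \
         list(load_admissions("blue_claims.csv", "blue"))
rates = complication_rates(pooled)
```

The point of the pooling is in the denominator: with roughly 70 percent of a state's admissions rather than one payer's share, low-frequency complications produce rates stable enough to compare across hospitals.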

I mentioned earlier that we get data from our HMO medical groups and IPAs. We create data submission forms, the groups abstract their data and enter it onto scannable forms, and we review their supporting documentation, verify the accuracy, and then scan it into a database. Going forward we will have a less paper-heavy process and be able to get that data electronically, but that's where we are at the moment. Because we review and validate that data, it is very clean and has been a good data source. The encounter data that the IPAs are submitting now, again, we have a vendor who cleanses it, processes it, and puts it into a uniform and usable format that we can combine with our own claims data. So just getting the data and putting it into the various usable formats has been a challenge. Next slide.

So the next topic is aggregation of data: once we have the data, how do we aggregate it, and for what purposes do we use it? For our HMO, where we're looking at physician groups, we use the data for three purposes: one is feedback to the groups about their performance, the second is pay for performance, and the third is public reporting. We report separately for each of the types of indicators we're looking at, and we really perform those three functions with that data, which involves a combination of our claims data along with what the groups submit.

The second area is PPO physicians or physician groups, and there we report back to physicians their performance in comparison to their peers on a range of quality indicators.

And the third area that we aggregate is hospital data. For hospitals, we look at a whole range of indicators using multiple data sources: indicators of structure, process, outcome, efficiency, and member satisfaction. We pull all those different types of data and all the different indicators into one profile. We assign weights to the different indicators so we can come up with an overall score, which we report back to the hospital along with comparison data for their peer hospitals. And then we also publicly report, so that's another function of what comes out of that data source. Next slide.

A brief summary of the kinds of functions supported by data aggregation: feedback to providers, quality improvement, transparency, and pay for performance. We also use the data for network creation; our high-performance network was developed based on quality and cost data. And we use the data for disease management and for HEDIS reporting.

One of the questions that had been asked was about database hosting and whether that's internal or external, and basically, however it might be done, we do it. We have situations where we've built the databases and used them; we have vendor databases that we host and use; and we have vendors who create databases, use them, and give us the results. So we have a variety of different models for different functions, and a range of ways in which the databases are hosted and used. Next slide.

In terms of hybrid reporting, we do use hybrid data for many of the things we look at. For this purpose I defined hybrid generally as any time we're taking data from various sources, or various kinds of data, and compiling them. For HEDIS indicators that involve hybrid reporting, and for looking at performance within our HMO medical groups and IPAs, we include both claims and medical record data. For our hospital profile, again, we combine many types of data into one data source.

One area where we will be able to report hybrid data very soon is physician data in our PPO, where we have probably 3 million members and 30,000 providers, so the challenges of reporting are quite great. Currently we've been using claims data only to look at quality among PPO physicians. We know that claims data are limited in what they contain, and yet they're very broad: you can look at everybody, you can look at every provider, and there's real value to that as well. But we've been working with a vendor, and we're very close to having available a Web-based application where we'll run the quality indicators using claims data and report a preliminary rate to physicians. They'll then have a period of time when they can go out on the Web, take a look at what's there, and supplement what we've gotten through claims with what's in the medical record. So, for example, if you're looking at cervical cancer screening, a hysterectomy done ten years ago might be an exclusion that would not be in our claims data but would be in the medical record, and they could supplement that information. For childhood immunizations, we have challenges in reporting when a child does not have immunization coverage; for example, the child may go to the health department for shots and we won't get a claim, and if the vaccine is provided under the VFC program, we won't have a claim. But the physicians will be able to supplement our claims data with that additional information, and so we'll have more accurate results. Next slide.
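To make the supplement step concrete, here is a minimal sketch of recomputing a measure rate after physician-entered additions; the data structures and names are hypothetical, not the vendor application itself. An exclusion (such as the prior hysterectomy) removes a member from the denominator, and an added event (such as a health-department immunization) adds a member to the numerator.

```python
# Minimal sketch of applying physician-entered medical record
# supplements to a claims-derived preliminary rate. Hypothetical.

def apply_supplements(denominator_ids, numerator_ids, supplements):
    """supplements: list of (member_id, kind) where kind is
    'exclusion' (e.g., prior hysterectomy) or 'numerator'
    (e.g., immunization given at the health department)."""
    denom = set(denominator_ids)
    numer = set(numerator_ids) & denom
    for member_id, kind in supplements:
        if kind == "exclusion":
            denom.discard(member_id)
            numer.discard(member_id)
        elif kind == "numerator" and member_id in denom:
            numer.add(member_id)
    return len(numer) / len(denom) if denom else None

# Preliminary rate from claims alone, then the rate after supplements:
prelim = apply_supplements(["a", "b", "c", "d"], ["a"], [])      # 0.25
final = apply_supplements(["a", "b", "c", "d"], ["a"],
                          [("b", "exclusion"), ("c", "numerator")])  # 2/3
```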

When data are available from more than one source, there's always the challenge of what to do if the sources disagree. In general, we have two approaches for dealing with that. One is that if there's a more authoritative source, we'll use that. The second is that we basically try to err in favor of the providers: once we find a numerator hit, we stop. If we find a Pap smear in one data source, we accept it and move on, and don't keep looking somewhere else.
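A minimal sketch of those two reconciliation rules, prefer the more authoritative source and stop at the first numerator hit; the source names and their order of precedence here are assumptions for illustration.

```python
# Sketch of source reconciliation: check sources in order of assumed
# authority and stop at the first numerator hit. Hypothetical ordering.

SOURCE_PRECEDENCE = ["medical_record", "registry", "claims"]  # most authoritative first

def found_numerator_hit(member_id, sources):
    """sources: dict mapping source name -> set of member ids with the service."""
    for source in SOURCE_PRECEDENCE:
        if member_id in sources.get(source, set()):
            return True  # first hit wins; no need to check anything else
    return False
```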

We currently do not have laboratory results available. We're working toward that, but that is a whole other challenge when working with a network as large, broad, and diverse as ours.

Pharmacy data is another very rich source of data that we use for a variety of purposes. However, many of our employer groups carve out pharmacy, and we don't always get that data back, so we don't have it for all of our members, and what we're able to do varies depending on whether the member has pharmacy data or not. Next slide.

We're currently looking at episodes of care, using MEGs as the methodology. At this point we're using that for reporting efficiency, or cost, at the physician level, but we're not using it for quality purposes. In building the episodes, we use both claims and pharmacy data. However, for reporting we don't use the pharmacy data, because we don't have pharmacy data for all members, and if you include pharmacy data for some and not others, the playing field is not level. So we have that limitation, and we don't include pharmacy in our reporting of cost. Next slide.

So, barriers. There certainly are a variety of challenges to what we do. Our data warehouse, EDW, is used for many purposes within the health plan, so it's large, it's complex, and it can't be easily changed for our needs; there is limited ability to add new data elements, for example. I mentioned encounter data in the HMO; there have been many barriers to getting that pulled together, but it now is going to be a useful data source going forward. We talked about claims data as opposed to medical record data: claims are wonderful because of their breadth, but they are limited, and so being able to supplement the one with the other has advantages. At the same time, once we open up data entry across the whole network, there will be challenges in getting consistent data as well. Another data issue that became apparent once we started looking at efficiency, or cost, for physicians is that for primary care physicians, the number of preventive care episodes is very high; preventive care is one of our largest MEGs. And yet it very quickly becomes apparent, when you look at the cost data alongside the quality data, that the physicians with high cost for preventive care are the ones who are actually providing the preventive care. For most cost measures you want the cost to be lower, but the physicians with low cost on preventive care MEGs probably aren't providing the care. So it's a whole different issue: you hate to throw out that huge number of episodes, and yet you hate to make decisions based on it, too. That is another issue that we've become aware of. Next slide.

So in terms of lessons learned, a few of the things we've learned are that it's important in the data reporting process to allow enough time to really look at the data thoughtfully once it's available, and to verify the accuracy both with standard queries and with simple reasonableness tests. A lot of times there may have been a factor we didn't consider the first time around that needs to be looked at. Provider feedback has been extremely helpful and has helped us improve the quality of our data and our reporting process. Public reporting is another area that's challenging. We do publicly report performance for HMO medical groups and IPAs and for hospitals. However, we've also learned that it is extremely helpful to allow time for several cycles of reporting and feedback, working the bugs out, before data becomes publicly available. I think that has really helped us reduce barriers to public reporting.

>> Carolyn Clancy:

Thank you, and let me express my particular appreciation for your thoughtful attention to the specific questions. You're right: the fact that we got very different perspectives is precisely what we wanted from today, so I very much appreciate that. We have time for a couple of quick questions. Helen? You're on.

>> Helen Darling:

This may not be a data question, so feel free to cut me off, but I'm intrigued by the comment that physicians providing recommended preventive services will tend to appear more expensive than physicians who fail to provide recommended services. I think that's probably correct. But isn't that what we want?

>> Carol Wilhoit:

We want physicians to provide preventive services, yes. But what we found, when we put the data into our episode grouper and had the reports spit out, and it really came to our attention from physician feedback, is this: a physician came back and wanted to know why she looked expensive. When we looked at the detail, thought it through, and talked it through with the physician, she had something like 400 episodes. She was a pediatrician, and about 300 of those episodes were preventive care, so preventive care was weighting her overall cost score tremendously. She had a very young population, lots of babies and one-year-olds, who get lots of shots, which are expensive. So her preventive care costs were very high, probably appropriately high, but they were high. It did make her look expensive, but probably not fairly so, because when we looked at the non-preventive care episodes, they were all very much in line. So it really made us rethink whether preventive care should even be kept in the evaluation of costs, and the conclusion I think we're close to making is that it should not. But it's part of the standard algorithm, and if you don't think about it, it's one of those things that's easy to miss. That was one of the areas where provider feedback was very helpful: you get the phone call, you start looking, and you think it through. But again, I think not publicly reporting the first time is really important, because there are some of those issues that you just don't always get right the first time.

>> Helen Darling:

So I would certainly applaud your tracking it, because I think we believe that it's very valuable, and we would like to reward those who are providing that kind of care. That's exactly where we want to be, and by tracking it, we can answer that question. It's not just sending people off for more inappropriate, unnecessary imaging; it's actually doing exactly what we would like to have happen. So I'm very glad to hear that.

>> Carol Wilhoit:

But that is one of those things where there can be unanticipated effects, and if you don't think it all the way through, you can impact things in a way that's not what had been intended or hoped for.

>> Carolyn Clancy:

Just as a footnote, let me note that some of my colleagues have been exploring the overuse of some preventive tests as well, although it's not clear to me that the MEGs, or the data streams, are equipped to deal with that. Women getting Pap smears every six months or every year, things like that. Pap smears, yes, but probably not that often.

>> Carol Wilhoit:

And clearly, high-cost preventive care could be inappropriate as well, or could reflect high charges; there are a variety of possibilities. But giving all the services is more expensive than not giving the services that should be given, too.

>> Carolyn Clancy:

Last question, then on to Mike Rapp. Margaret?

>> Margaret VanAmringe:

Two quick questions, I think. First, I was interested in the kind of data quality arrangements you had when physicians were asked to input additional information from medical records. And second, are there any instructive issues when you try to integrate the MEDPAR data with your data? How useful was it really, and is there anything there that we should know about in terms of difficulties, anything instructive?

>> Carol Wilhoit:

First of all, in terms of physician entry of data, we aren't there yet. The software is close to being ready, and we expect to use it this fall after some more testing. The data we get from our IPAs is really quite high quality. We have about 80 medical groups and IPAs, so it's a manageable number; we can work with them, we train them, we review what they submit. There's an infrastructure to the groups, and so that data is really very high quality. But we haven't gotten to the point of getting individual physician-submitted data yet.

In terms of the database that we have with the MEDPAR plus our Blue Cross data, it has really worked very well. We don't build the database, a vendor does that, so I don't know what challenges they've had in building it. But between Medicare and our members, it gives us, I think, about 70 percent of the admissions in the state. So for looking at complications that happen with relatively low frequency, we end up with valid complication rates for most hospitals and most complication measures. For us that has been very useful and has worked well, in a state where all-payer data is not available.

>> Carolyn Clancy:

Thank you very much. We're going to turn now to Mike Rapp. Just to keep everyone focused on a brisker tempo: by the schedule, we would have finished your presentation 15 minutes ago, and we're just coming back from our break, which gives you a sense of our poor estimates of timeframe. I know people are going to be very, very interested in this, but if you could, selectively go through some of this faster.

>> Mike Rapp:

Always nice to go last.

>> Carolyn Clancy:

That's right.

>> Mike Rapp:

Almost last. Okay, I will hop along as quickly as I can. The Physician Quality Reporting Initiative is, I guess it's fair to say, the first step in value-based purchasing for physicians, and I'll try to address primarily the data issues. It's based upon an innovative way of reporting quality data for physicians. Next slide. Next slide. See how fast I go, Carolyn? Next slide. I'm too fast. Anyway, keep going, one more. One more.

>> Matt McCoy:

There’s a slight delay, so if you just want to say slide numbers, do it that way.

>> Mike Rapp:

Okay. Slide 6.

>> Matt McCoy:

Okay.

>> Mike Rapp:

Which is the next one. Okay. So with regard to PQRI, the focus is, of course, on quality of care and evidence-based measures, but the main thing PQRI is doing, for the first time, is giving physicians an incentive to report quality data. We started out with a physician voluntary reporting program in 2006, but there was no financial incentive that encouraged them to report the data. This year, starting in 2007, there will be. Next slide.

This slide is just about quality improvement. The next slide.

This slide just describes, for physicians, the benefits of PQRI participation. First of all, they can expect confidential feedback on their reporting and performance rates; there's no public reporting that goes with this. They get the bonus incentive, as I mentioned, and they get to make an investment in the future of their practice, as presumably we move toward pay for performance. Next slide.

I'm not going to go into a lot of the details about the statute and so forth; I'll try to get to the data itself. But basically the statute is quite broad. Next slide.

It pertains to physicians, MDs, DOs, other practitioners identified in the Medicare statute as physicians, therapists, and other practitioners. It's very broad; it basically applies to any person who bills Part B directly. As we developed, or sought to have developed, the measures, we focused on the physicians, but when Congress passed the statute they made it broader: they wanted everybody to be able to participate. That is a little unusual, I think, compared to most other pay for performance or reporting programs, which have focused on ambulatory services, primary care, preventive care, and so forth. This program focuses on everyone, all specialties, and we're talking about outpatient and inpatient as well. Next slide.

So this is fine here, number 11. The final list of 74 quality measure statements can be reviewed at our Website. In 2006, when we had our physician voluntary reporting program, we included 16 measures, which covered 19 of the 39 Medicare-designated specialties. However, we worked through 2006 to get additional measures developed, to the point that right before this statute was passed we posted a set of 66 measures on our Website, which Congress incorporated by reference into the statute. In addition, we were authorized to add measures based upon an AQA consensus process in January of 2007, which got it up to 74 measures. The bottom line is that it covers over 30 of the 39 Medicare-designated specialties, specialties that represent over 90 percent of the Part B payments. That doesn't mean the measures are applicable to 90 percent of the services provided, but as you can see, it is nevertheless quite broad. Next slide.

The point on this slide is that it's a claims-based reporting system using CPT Category II codes. If you're not familiar with those: the CPT code describes the procedure the physician provides, like an office visit or an operation, any particular service, for the purpose of billing, and the ICD-9 code goes on the claim form as well. A CPT II code, by contrast, is designed to report quality data. It's not necessarily part of the claim, but it can be added. There's been discussion here about pharmacy data and lab data and that sort of thing being difficult to get; you won't get it through regular claims, you have to get additional data feeds and then aggregate them and so forth. But with the CPT II codes, it becomes the doctor's responsibility to put that information on the claim. So, for example, hemoglobin A1c greater than nine is one of the measures we have. Normally you would have to get that lab data from someplace, but here the physician is incentivized to report the CPT II code, which indicates whether or not the hemoglobin A1c was greater than nine. So it's a method of augmenting the claims stream in a way that depends upon the physicians to report, and that raises validation issues and so forth, but it is nevertheless quite flexible and functional in that sense.
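To illustrate the mechanism, here is a minimal sketch of a CPT II quality code riding along on a claim. The claim structure is hypothetical, and the specific code values are for illustration only, not an assertion of the program's actual code set.

```python
# Minimal sketch: a CPT Category II quality-data code added to a claim.
# Claim structure and code values are hypothetical/illustrative.

claim = {
    "provider_npi": "1234567890",
    "lines": [
        {"cpt": "99213", "icd9": "250.00"},   # office visit, diabetes (ordinary billing codes)
        {"cpt2": "3046F"},                    # quality-data code on the same claim (illustrative)
    ],
}

def quality_codes(claim):
    """Pull the CPT II quality-data codes off a claim's line items."""
    return [line["cpt2"] for line in claim["lines"] if "cpt2" in line]

print(quality_codes(claim))  # -> ['3046F']
```

The key design point, as described above, is that the quality fact (the lab result category) arrives inside the ordinary claims stream, so no separate lab or pharmacy feed is needed.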

One deficiency, of course, is that this is something we're implementing only as part of Medicare. But let's say these measures work out well, and we're certainly going to have quite a trial run this year: these are potentially measures that could be implemented in the private sector easily enough, because it's still basically claims data. Incentivize the physicians to report the data, and once that happens, it could be aggregated relatively easily.

A couple of other advantages, when one thinks about it: claims data always runs into the problem of attribution. First you have to attribute the claim to the physician, because sometimes the data isn't really that refined, and that alone is a challenge if you're doing it at the individual level. But even more important, are we talking about a doctor who should be responsible for the service at all? Who is responsible for the patient getting a mammogram, who is responsible for one thing or another, when patients go to numerous doctors? With this approach, the physicians themselves basically take responsibility for the patient proactively, in the sense of reporting the quality data. In fact, the way the statute works, there's a presumption built in that if the physician reports the measure, then the measure applies to that doctor.

Once that's done, then we look for it in all instances. The way the measures work, you have a denominator population that comes from the routine submission of claims. When a physician reports an office visit for a patient with diabetes, we don't automatically assume that they're the physician primarily managing the patient's diabetes. But once they report the measure on one patient, then we're going to look for it in every patient. So it has them come forward and say: this is a measure that applies to me, this is the kind of work I do. Then we can look at it across all the denominator cases, as opposed to saying: here the physician reported an ICD-9 code for diabetes plus an office visit, so we must assume they're the primary care doctor for that patient. They may not be at all; they may be doing something entirely different, because the billing codes are not highly specific when you're talking about office visits. Go to slide 16, please.

This slide just makes the point that the statute does require validation. There are no appeals under the statute, but it does require us to use sampling or other means to validate whether the measures are applicable to the services. The point is that the actual incentive under the statute is to report; it's not pay for performance, and the incentive is paid if they report 80 percent of the time. So, getting back to the example I gave before: once they report on one patient with diabetes, we're going to look for reporting 80 percent of the time they see a patient with diabetes in an office visit. They don't have to report every visit for every measure; there's a difference between what we might call per-visit measures, like every time a person has a heart attack and comes to a hospital they should get an aspirin, which would be reported on every visit, and measures like the hemoglobin A1c, which are just periodic. At any rate, we deal with that and the 80 percent rule. Let's see. Go down to slide 21, please.
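Putting the denominator-trigger idea and the 80 percent rule together, a minimal sketch of the reporting-rate check might look like the following. The code values are hypothetical, and for simplicity this ignores the exclusion modifiers discussed below, which also count as reporting.

```python
# Minimal sketch of the 80 percent reporting rule: the denominator comes
# from routine billing codes, and the physician qualifies on a measure by
# reporting a CPT II code on at least 80 percent of eligible claims.
# Code values and claim structure are hypothetical.

DENOMINATOR = {("99213", "250.00")}           # office visit + diabetes, illustrative
NUMERATOR_CPT2 = {"3046F", "3044F", "3045F"}  # quality-data codes, illustrative

def reporting_rate(claims):
    eligible = reported = 0
    for claim in claims:
        codes = {(l.get("cpt"), l.get("icd9")) for l in claim["lines"]}
        if codes & DENOMINATOR:               # claim falls in the denominator
            eligible += 1
            cpt2s = {l.get("cpt2") for l in claim["lines"]}
            if cpt2s & NUMERATOR_CPT2:
                reported += 1
    return reported / eligible if eligible else None

def qualifies(claims):
    rate = reporting_rate(claims)
    return rate is not None and rate >= 0.80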

This slide basically goes through how the data flows. It is a claims-based system. The top row may or may not be how the doctor actually works it out in their office, but at some point they need to get the codes on the claim form, or, if it's electronic, submitted electronically as part of their claim. Then it comes into the carrier and goes to the claims history file. Then a contractor analyzes it for two purposes: one, a confidential performance rate and reporting rate for the physician; and two, it goes back to the carrier to calculate the bonus payment, which is up to 1.5 percent of the Medicare-allowable charges for the second half of the year, from July 1 to December 31st.

I'll point out how this works with the CPT II codes and so forth: it's all claims, so they don't have to go outside that. They don't really have to abstract any medical record, because, kind of like the order sheet you were discussing, doctors typically have a check-off list for their billing, and this could simply be another code. And although there are 74 measures, it's unlikely that all 74 apply to any one doctor, so they'll find the ones that apply to their practice, tailor them, and be prepared to report them. To get their full 1.5 percent incentive, they only have to report a maximum of three measures; even if there are 20 that might apply to their practice, they can select the ones they want to focus on. One of the things that we, and I know Congress, were sensitive to was the burden aspect of it. Primary care and ambulatory measures are much further along in development; there are more of them. It could have been that primary care physicians would have had 25 measures to report and anesthesiologists only one, just because of the state of measure development, but Congress took care of that in how they structured the incentives. Go to slide 23, please.
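As a back-of-the-envelope illustration of the qualification-and-bonus arithmetic just described: up to three selected measures, each reported at least 80 percent of the time, yields a bonus of up to 1.5 percent of allowed charges for the reporting period. This sketch omits the statute's edge cases and is an assumption-laden simplification, not the statutory formula.

```python
# Illustrative qualification-and-bonus arithmetic. Not the statutory
# formula; edge cases are omitted on purpose.

BONUS_RATE = 0.015
REQUIRED_MEASURES = 3

def bonus(measure_rates, allowed_charges):
    """measure_rates: reporting rates for the measures the physician
    chose to report (fewer than three if fewer apply)."""
    needed = min(REQUIRED_MEASURES, len(measure_rates))
    qualified = sum(r >= 0.80 for r in measure_rates) >= needed
    return allowed_charges * BONUS_RATE if qualified else 0.0

print(bonus([0.92, 0.85, 0.81], 100_000))  # -> 1500.0
```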

This is just to go over, again, how the actual process works. You have a denominator, and there are no CPT II codes in the denominator, because those are discretionary codes. A regular billing code is not discretionary: you describe what the diagnosis is and what service you provided; I did an office visit for a patient with a diagnosis of diabetes. The hemoglobin A1c, though, is your numerator, and that comes from the Category II codes, which is what the incentive is for them to report. If you put a CPT II code, which is discretionary and not required for the bill, in the denominator to define the population, it wouldn't work, so we don't do that.

Exclusions. The exclusions work with the CPT II codes, which are AMA codes; we started off with G codes when we did this for the physician voluntary reporting program, but we moved to the AMA's codes, which have basically the same descriptors but go through the AMA's process. For exclusions there are modifiers, 1P, 2P, and 3P, for medical, patient, and system reasons for excluding the measure. There has been some criticism that these are not specific enough in terms of exclusions, but that's something we would expect to look at as we get experience with the measures. Say exclusions occur only one percent of the time; then the level of specificity really doesn't matter. If 40 percent of the time there's a patient reason the measure doesn't apply, that would be something that needed to be looked at. We'll learn from that. This is a pay for reporting program, so there is also a modifier that means: I didn't do it. To qualify for the incentive you have to report a CPT II code every time, whether you did it or not. So there's: I did it; I didn't do it for medical reasons; I didn't do it for system reasons; I didn't do it for patient reasons, like an allergy; or I just didn't do it. There's a full gamut of reporting, but it's required every time. Next slide.

This is just some of the details of the reporting, so I won't go into that too much. Let's go down to slide 26, please.

I just want to make the point again about claims-based validation, for this type of measure, versus what might be chart-abstracted or might come from an EHR, and how the data works. We're getting claims data in, and as I said, the validation would be essentially the same as claims-type validation. We're not wedded to this type of data collection system; for now it's quite a versatile and functional one, but in the long run we're interested in moving toward EHRs, of course. In that case you would get the actual data elements, for example the laboratory test result, and calculate the measure that way. And the validation, if it were chart abstraction, would be to go to the chart, or if it's an EHR, would be however validation is worked through there.

Let's see. I don't think I need to work through the slides in particular; I think I've made most of the points in terms of the data type and the potential for aggregation if the private sector were to adopt these measures, and I think we'll have a good sense of how well they work, or don't work, or how they need to be refined, after we get through this program next year. We are developing measures for 2008. Let me correct that: we're going to be selecting, or proposing, measures for 2008 through a rulemaking process. Congress incorporated by reference a list we had for 2007, but for 2008 it will be through the rulemaking process, which has to come out in proposed form by August 15th and be finalized by November 15th. So we will propose whatever measures we propose, and then the public will comment. The statute does place certain limitations on the measures that we can include in the program for 2008. Number one, they have to be consensus-endorsed or adopted, by NQF or AQA, as examples. They have to include measures submitted by specialties. They have to include structural measures, which we don't have this year; Congress gave EHRs and e-prescribing as examples of those. And they have to use a consensus-based process for development. So we'll be adhering to those requirements in proposing the measures.

I did make the point earlier about the overlap with the hospitals, and this can be looked at two ways. On one hand, it incentivizes the doctors to deal with the same process-of-care issues that the hospitals have to deal with in many instances. Aspirin for a heart attack at arrival in the emergency department, for example, is one of the 74 measures; we have antibiotic administration for surgical infection prevention for inpatient surgery; and now we have those for the physicians, the surgeons, as well, plus one on timing for the anesthesiologist. There's a lot of overlap there, and that's good because it aligns the incentives. It's somewhat bad in the sense that we're asking for two different data sources. Insofar as we could bring those together, and that was really the point I was making earlier, if the data on a physician in the emergency department giving aspirin for a heart attack at arrival could come from the hospital but be attributed to the physician, that would at least make the data collection burden somewhat less.

So I think that's basically it. The slides are primarily for outreach purposes, so they don't necessarily tie to some of the issues we're addressing; I hope I didn't confuse you too much. One other data point I should make, though, is on slide 36, about registries. Congress did make a specific point that for 2008 we are to address registries as a vehicle for quality data reporting, and what that seems to be interested in us dealing with is, again, the potential for duplication of data reporting. It gives the STS as an example: the Society of Thoracic Surgeons has a registry for coronary artery bypass graft surgeries, and so they collect the data. The measures that we're using in PQRI are a subset of those measures. For 2007, in order to get the 1.5 percent incentive payment, the thoracic surgeons have to report that data to us through their claims, using the CPT II codes, and for the most part they're already reporting it to the registry. Congress was interested in us addressing that, to avoid that duplication of data collection. So I think that's it. If you have any questions, I'll be happy to answer them.

>> Rick Stephens:

Mike, thanks very much. Certainly everyone will be looking with interest at the results when the data comes out. When will you start having data from this reporting activity?

>> Mike Rapp:

Well, the reporting period is July 1 through December 31st; that's the eligible time period. The data will be collected up till the end of February.

>> Rick Stephens:
Okay.

>> Mike Rapp:

Ordinarily, claims can be submitted much later than that in the normal process, but for the quality incentive Congress required cases to be submitted by the end of February. So the analysis will be after that, and the bonus payment will be after that.

>> Rick Stephens:

After that. Thank you. Any questions for Mike? Margaret?

>> Margaret VanAmringe:

A quick clarification. I just want to understand better: what's the methodology for making sure that 80 percent of the cases are reported? Is it looking at the billing codes, then, to compare with the CPT II codes, or is there another methodology involved?

>> Mike Rapp:

It will be looking at the denominator codes. Step one: Congress said the presumption is that if the doctor reports a measure, it applies to that physician. So, as I mentioned, once a physician reports a case of diabetes in the office, that combination of codes defines the denominator. Once they do that, for every patient for whom they submit that combination of codes, we look for the quality code. If they report it 80 percent of the time, they qualify; if they don't, they don't.

>> Margaret VanAmringe:
So CMS is going to do that reconciliation?

>> Mike Rapp:

Right.

>> Margaret VanAmringe:

Great.

>> Rick Stephens:

Other questions? Seeing no other questions, and hearing none on the phone, let me thank you very much, Mike, for walking through that; we appreciate the time you put into it. Let me talk a little about where we are schedule-wise right now. We have about 60 minutes left, and we do want to finish on time. We also want to provide an opportunity for public comment. But I'm also mindful that a few people might want a biological break, so I suggest a 5-minute break; at 25 before the hour we will reconvene. Then I'll make some comments about some of the systems work we're looking at, and we'll get into our other presentations; hopefully those will last about 15 minutes each, so the next presenters can be thinking about that. That will leave us some time for a summary and then public comment. So we'll take a 5-minute break and reconvene at 25 before the hour. Thank you.

[break]

>> Rick Stephens:

Are we ready to go? At the risk of being labeled a taskmaster, we need to get going, because it is 25 of the hour. There are some people who need to find airplanes and transportation on their way out, and during our short break I was asked if the next two presenters could go ahead of my comments. So next we'll hear from the Massachusetts eHealth Collaborative: Micky Tripathi, Steve Simon, and Al Harvey. I know Micky and Steve are on the phone. I'd like to ask that you keep your presentation and the opportunity for questions within 15 minutes, because that will allow us to give everyone the opportunity to be heard, but at 10 minutes of the hour I will have to say time is up so we can move on. Once we finish those presentations, I'll make my comments, then we'll have our recap of the testimony, and then we'll do public comments. So with that, the floor is yours.

>> Alan Harvey:

Great, Rick, thank you. I'm Al Harvey, vice-chair of the eHealth Collaborative, and let me quickly introduce the two people on the phone. Micky Tripathi is our chief executive officer, who will be presenting. Also on the phone is Dr. Steven Simon, MD, MPH, associate professor in the Department of Ambulatory Care and Prevention at Harvard Medical School and Harvard Pilgrim Health Care; Steve is one of our active researchers in our statewide eHealth Collaborative. Micky, are you on the phone? And Steve, are you on the phone?

>>

Might be muted?

>> Micky Tripathi:

Hello, can you hear me? This is Micky.

>> Alan Harvey:

Excellent, great.

>> Micky Tripathi:

Okay, terrific.

>> Alan Harvey:

Go right ahead.

>> Micky Tripathi:

Thank you. Thank you for the opportunity to share with you what we're doing. What I thought I would do is quickly go through some background on the eHealth Collaborative, because it's important to understand what the Collaborative is doing in order to understand how the quality data warehouse that we're building builds on that foundation. Next slide, please.

So the eHealth Collaborative has its roots in really two efforts. One is a 50 million dollar financial commitment from Blue Cross Blue Shield to create some type of health information infrastructure across the state; we have used that money to run three pilot projects in health IT, which I'll describe in a second. The second important thread in the eHealth Collaborative is the role played by the American College of Physicians. Steven Simon, who is on the phone, hopefully, played a strong role, with the other members of ACP, in introducing a plan for universal adoption of electronic health records, which forms the intellectual basis of the Collaborative's project going forward. The company was launched in September 2004; we're a nonprofit registered in Massachusetts, and we're backed by a wide array of healthcare stakeholders in the state. Next slide, please.

These are the organizations on our board. We certainly don't have to go through the details, but I wanted to give you a sense of the range of organizations behind this, spread across the healthcare delivery value chain: provider organizations, professional associations, as well as plans and consumer and patient organizations. In addition, one of the things I always like to point out is that all of the major competitors are represented on the board as well. You can look in the middle and see Blue Cross is there, but so are Harvard Pilgrim, Fallon, and Tufts; the idea being that everyone needed to come together to solve what we perceive to be a systems problem that requires a system solution. The last point is that this is not a Blue Cross project. Though our initial funding commitment is from Blue Cross Blue Shield, we are a separate company, and Blue Cross is represented here as one organization among the 33, with one vote on the board. Next slide, please.

With the 50 million dollars, we decided to do three pilot projects, where we would essentially wire three communities for healthcare delivery; I'll describe what that means in a second. We issued a request for applications, we got 35 responses from across the state, and we chose three communities, Brockton, Newburyport, and North Adams, to go forward with the pilot projects. Next slide, please.

The scope of the pilot projects: this slide is a little old, but it's about 500 physicians who are in the project, plus about another 100 mid-levels, so 600 clinicians if you want to think of it that way. Together they take care of about half a million patients, as far as we can tell, in over 200 office locations. Next slide, please.

What we're doing in the pilot projects falls into four categories. Starting from the bottom and going up, one is clinical IT implementation and support. For those 500 physicians and 200 office locations, we are outfitting every single one of them with an electronic medical record: the hardware and the software, and all of the training, pre-implementation and post-implementation, around workflow and so on, through June of 2008. So one big piece of what we're doing is just the IT penetration, sort of the bottom of the food chain.

The next piece, moving our way up, is connectivity. In each of those three communities there will be a stand-alone health information exchange, essentially a wide area network or local area network, that will connect all of those physicians and the hospital systems in those communities. Again, it's three stand-alone systems for real-time exchange of patient-identified information for treatment purposes, accessible at the point of care. We have an evaluation project, one piece of which is the quality data warehouse that I'll describe in the second part of the presentation. And finally, there's a steering committee running all of this in each of the communities. Next slide, please.

On overall timelines, I don't think we have to go through the details, but we are almost all of the way through our EMR implementation. Our first practice went live on its electronic health record in March of 2006, and by the end of June 2007 we'll have almost all of the physicians up and running on at least the first phase of their electronic medical record implementation. Certainly by the end of the third quarter of this year we expect to have all 500 physicians up and running on their electronic medical records. We also anticipate launching the full health information exchange in each of these three communities by the end of June 2007. Right now, two of the communities are already getting live communitywide delivery of lab results and radiology results. Next slide, please.

So those are the three pilot communities, and they are just about to launch the health information exchanges. They'll be unique in that they will be, I think, the first communities in the country where all physicians on the ambulatory side have an electronic health record and are connected in some meaningful way to each other and to the hospitals in those communities. Next slide, please.

The health information exchange has a couple of different pieces. One is patient recruitment: we're recruiting patients in these three communities because we have an opt-in policy, meaning we will not allow patient information to be exchanged over the network unless we have formal patient permission to do so. The health data exchange itself, that second column there, is essentially a repository in each of the three communities that will hold a subset of the clinical information held in each ambulatory office's separate electronic health record system. It will extract information on a daily basis from each of those electronic health records and merge it into a patient-centric clinical summary that is persistently held at the center and accessible at all times by any authorized user of the system. We don't have to talk about the other two pieces, because they're not really relevant to the quality data warehouse. Next slide, please.
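As a rough sketch of that daily extract-and-merge, each practice's EHR contributes structured items, and the exchange accumulates them into one summary per patient. The record layout and matching key here are hypothetical; a real exchange needs far more careful patient matching and consent checking than this shows.

```python
# Minimal sketch of merging daily EHR extracts into patient-centric
# summaries at the exchange. Record layout and keys are hypothetical.

from collections import defaultdict

def merge_daily_extracts(extracts):
    """extracts: iterable of records like
    {"patient_key": ..., "practice": ..., "category": "meds", "value": ...}"""
    summaries = defaultdict(lambda: defaultdict(set))
    for rec in extracts:
        # Accumulate structured items per patient, per clinical category.
        summaries[rec["patient_key"]][rec["category"]].add(rec["value"])
    return summaries

daily = [
    {"patient_key": "p1", "practice": "A", "category": "meds", "value": "lisinopril"},
    {"patient_key": "p1", "practice": "B", "category": "problems", "value": "diabetes"},
]
summary = merge_daily_extracts(daily)  # p1's summary now spans both practices
```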

What we are exchanging in the health information exchange, and this is just a slide from a set of presentations we did in North Adams with patients, distinguishes things that reside only in the individual physician office records from things that are shared. At the top, what you see is that we're not exchanging any text-based notes, quote unquote "text blob" information, or scanned reports, at least not on a full-time shared basis; we will allow point-to-point exchange of those for referrals, but none of that will be part of the shared, merged, patient-centric information. What we are putting into the center is what you see at the bottom, which is basically elements of structured data that, through a negotiation with the physicians in each community, were agreed to be clinically useful and that patients would feel comfortable with from a permission perspective. We tried to narrow it down to information that we thought was clinically useful and was structured data; no free-text information will be allowed to pass through the center. Next slide, please.

This is just a screen shot of what that looks like. Next slide, please. Next slide, please.

So now I'll describe the quality data warehouse so you understand how that's going to work. In these projects, we're going back to one of the early principles of Clem McDonald and the Regenstrief Institute: extract data once and use it for multiple purposes. We're building these health information exchanges, which take information out of all the deployed electronic medical record systems, and then we will draw from each health information exchange the information that's already been extracted for clinical purposes and use it to populate the quality data warehouse. When you start at the bottom you see the provider level, the electronic health records; we have six EHR vendors participating in the pilots, four that we're funding, which are the ones across the top: NextGen, Allscripts, GE, and eClinicalWorks. The other two, Physician Micro Systems, which is now McKesson, and eMDs, were legacy systems that were already there when the eHealth Collaborative started the projects. At the community level, you see how information from those deployed EHR systems goes into three self-standing health information exchanges. Then, at the eHealth Collaborative level, the quality data warehouse will extract information out of each of those health information exchanges to create a unified quality data warehouse, which we will use for a variety of purposes: for outcomes analysis for the eHealth Collaborative project itself, and for benchmarking data that we will provide back to the physicians so that they can monitor their own metrics over time. Next slide, please.

>> Rick Stephens:

So, Micky, two-minute warning.

>> Micky Tripathi:

Okay. On this slide, what I wanted to show is how we're dealing with patient identification. As I described, the health information exchange holds patient-identified data, and that's all consent-based, so data is passed to the health information exchanges. But then what we'll be doing is extracting limited data sets from each of those health information exchanges for the quality data warehouse, and we will put a unique but random identifier on each of those pieces of data so that we are able to re-identify it. Next slide, please.

I think there's an animation there that will come in. We will be able to re-identify that data on an individual basis, back through the health information exchanges. So the quality data warehouse will hold a limited data set, in the HIPAA sense of limited data set, so not truly de-identified; and re-identification, to the extent it's needed back at the practice level, will happen back through the health information exchanges. Next slide, please.
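A minimal sketch of that pseudonymization scheme as described: the exchange assigns each patient a unique but random identifier, keeps the mapping locally so data can be re-identified at the practice level, and ships only the pseudonymized limited data set to the warehouse. Everything below is illustrative, not the Collaborative's actual design.

```python
# Sketch: exchange-held pseudonym mapping for a limited data set.
# Illustrative only; real systems add key management, auditing, consent.

import secrets

class ExchangePseudonymizer:
    def __init__(self):
        self._forward = {}   # real patient key -> random id (kept at the HIE)
        self._reverse = {}   # random id -> real patient key

    def pseudonym(self, patient_key):
        if patient_key not in self._forward:
            rid = secrets.token_hex(16)       # unique, random, meaningless
            self._forward[patient_key] = rid
            self._reverse[rid] = patient_key
        return self._forward[patient_key]

    def reidentify(self, random_id):
        """Only the exchange can map back; the warehouse never holds this table."""
        return self._reverse[random_id]

    def to_warehouse(self, record):
        out = dict(record)
        out["patient_id"] = self.pseudonym(out.pop("patient_key"))
        return out
```

The design choice to keep the reverse table only at the exchange is what makes the warehouse's holdings a limited data set while still allowing practice-level re-identification when needed.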

For the benchmarking data, we're basically taking a clinical data superset, as we describe it, which is all of the categories there on the left. For the actual measures that we'll provide back to the physicians, we're working with Mass Health Quality Partners, whom you may be familiar with; we're starting with the AQA-recommended starter set as an initial set, and then we'll expand that as seems appropriate going forward during the project. Next slide, please.

And then I've just listed here some of the challenges we faced in building that. As I described, we expect to launch the health information exchanges in the early summer, and the quality data warehouse, because it's dependent upon the health information exchanges, we expect to launch in September of this year. I won't go through each one of these; I'm happy to answer questions in the question period.

>> Rick Stephens:

Great, thank you. Thanks very much, Micky. Do we have any questions from the floor or on phone? Yes, Kelly.

>> Kelly Cronin:

Hi, Micky.

>> Micky Tripathi:

Hi, Kelly.

>> Kelly Cronin:

It was a great presentation, thank you. I'm sorry we're so rushed, because this is really important stuff and sort of where the future is. But I'm wondering a couple of things. Are you using claims data at all in aggregating at the warehouse level, or do you feel you're getting a complete set of data elements coming from the EHRs and the HIEs?

>> Micky Tripathi:

We're hoping that it will be complete, but we'll do what Dr. Eric Plune (ph) from Partners called the "peek and shriek." Once the data starts coming in, I'm sure we'll see gaps that exist at the EHR level, which I think are mostly about how physicians are entering the data, or not entering it, rather than the technical capability of extracting it. We haven't yet built a program for integrating claims data into this, but that is part of the longer-term vision.

>> Kelly Cronin:

Okay, great. And if you happen to get to the point where you have specific data requirements down and you know what your common data elements are across all measures: NQF, under the Quality Alliance Steering Committee, is actually in the process of pulling together an expert panel to really do that work. So we're trying to build in what you and what Indiana have already figured out through your BQI pilots.

>> Micky Tripathi:

Yep.

>> Kelly Cronin:

So we may be in touch on that.

>> Micky Tripathi:

Okay, that would be great. We've done a lot of work on that already.

>> Kelly Cronin:

Great. We appreciate anything you could help us with. And I'm also wondering, have you been working with an outside organization to do your outcomes analysis and public reporting?

>> Micky Tripathi:

Well, any type of reporting that we do will probably happen through Mass Health Quality Partners, who are a subcontractor to us. On the outcomes analysis side, right now the researchers who have been identified are Steven Simon and David Bates and their team, and we'll probably identify others as we go forward as well.

>> Kelly Cronin:

Great, but you see them as always being separate from your organization?

>> Micky Tripathi:

Yes, absolutely.

>> Kelly Cronin:

Okay, thanks.

>> Rick Stephens:

Great. Are there other questions? Micky, thank you -- I'm sorry, Helen.

>> Helen Darling:

One quick question. I missed whether or not you've gotten any voluntary approvals yet, or is that ahead of you, and do you have any sense of what the percentage of agreement would be?

>> Micky Tripathi:

Voluntary meaning physicians participating?

>> Helen Darling:

No, I'm sorry, patients.

>>

Patients.

>> Micky Tripathi:

Oh, I'm sorry, yes. We've started the consent process in North Adams, and you know, the data that we have right now is at the 95, 96 percent opt-in level. So we're getting very good participation.

>>

That's terrific.

>> Kelly Cronin:

And Micky, is that after extensive outreach?

>> Micky Tripathi:

That's after some outreach, but all of it happening through the physician offices. We're not doing huge ad campaigns, but we do have patient materials, developed with help from a professional branding firm and focus groups, that are distributed in the physician offices, and the permission and consent process happens at the physician office level.

>>

Great.

>> Rick Stephens:

Good. That's encouraging. Great. Seeing no other questions, we'll move on, and thank you very much. Next we have the Wisconsin Collaborative for Healthcare Quality, and we have Betsy Clough. Did I say it right?

>> Betsy Clough:

Clough, like enough.

>> Rick Stephens:

Very good. Betsy Clough, the director of operations. So Betsy, the floor is yours.

>> Betsy Clough:

Thank you. It's a pleasure to be here this afternoon. I'll speak really fast because I know my time is limited. I'll start out with a brief background and history so you understand where the Collaborative came from and how we got to where we are today, especially in terms of our data and measurement model, and then I will focus on our data model. And if you could advance the slides to the last presentation, please.

This first slide just depicts our overall mission and vision. We are a voluntary consortium of health systems from around the state of Wisconsin, and our mission is to improve the health and healthcare quality of the people of Wisconsin through the development and reporting of measures of healthcare quality. Next slide, please.

This slide depicts our overall organizational structure. We have what we call an assembly. It is kind of the multi-stakeholder group of people who represent the Collaborative from providers to hospitals to business partners to consumers to the Department of Health and Family Services. This group actually meets once a month for three hours. It's sort of a town hall meeting. And we have about 70 people attend this meeting. And then we also obviously have a board of directors and an executive committee. The heart of where all the work occurs within the Collaborative is at the workgroup level. We have various measurement workgroups that focus on ambulatory care measures, access workgroups, and then an assortment of improvement workgroups as well. Next slide, please.

Currently we represent about 40 percent of all of the physicians in the state of Wisconsin, and a little over 50 percent of all of the primary care physicians in the state. We were founded in 2003 by nine of the major health systems from around the state. We started small because we didn't know if we could make this work, and after the first year of successful reporting, we expanded. Since then we really haven't done any active outreach; rather, people have come to us because they want to participate. Next slide, please.

The next slide depicts the business partners who participate with us. When we were founded, each of the founding members brought a business partner from their market, so we have active participation from these folks as well. They serve on our various workgroups, and the business partners also oversee our validation process to make sure that we're comparing apples to apples as we measure and report. Next slide, please.

This just depicts a high-level overview of our history. As I mentioned, we were founded in 2003, and we released our first paper report in the fall of 2003. Since then we have moved to a Web-based public reporting format and have continued to update measures quarterly on the hospital side and annually on the ambulatory side. Next slide, please.

Switching gears a little bit to focus more on our measurement model. When we released our first report in the fall of 2003, we had about six months to put it together. Our CEOs had been meeting, and then they said, we want to release a report; quality folks, go figure it out. So we had a short amount of time to do that, and what we did was look to data that we already had: the Joint Commission and CMS core measures, Leapfrog data, HEDIS and CAHPS results for those organizations that owned a health plan, and a measure we created modeled after what IHI does for (inaudible) next available. When it was released, everyone applauded us: nice job, way to go. But our physicians said, you know, that HEDIS stuff is not so useful for me. I can't use it; sometimes it's not right; it's not all my patients. But Mrs. Smith had a hemoglobin A1c two months ago, here it is. And so we said okay, we need to do something about that, because the data's not good enough. Next slide, please.

So we set out on a mission -- next slide, please -- to build a set of ambulatory measures that enabled medical groups to collect and report on quality of care regardless of whether or not they had an EHR. And we wanted it to be all patients, all payers, so that didn't matter. We convened a workgroup that we called the ambulatory care specifications workgroup, and they started meeting on a weekly basis, on the phone. That group was comprised of quality folks, data folks, folks from the medical society, and lots of clinical folks, probably about 40 people sitting on the call every week.

I don't think the slides are advancing, but we're looking at slide 9. Really where we start is with selecting measures: looking nationally to see which measures are out there and which ones we should begin to develop, and then moving forward to testing and developing, testing and developing, in our workgroup. What we started work on first was the set of diabetes measures, and the problem we encountered was that we were unable to clearly identify, in the physician group setting, what the population of patients was. If I'm a hospital, it's an easy thing to do: CHF diagnosis, admitted between this date and this date. If I'm a health plan, it's easy to say you have an eligible patient from this date to this date. But with a physician group it is a completely different story. And so, after we struggled for a while, we said okay, let's think about what happens in a physician group.

Number one, we have to make sure that you're eligible for whatever condition we're talking about; so if we're talking about diabetes, do you really have diabetes? That's the first question. The second question is, is this patient who has diabetes being managed by the given system or physician group? And the third question is, is this patient current within the physician group? Because I could move to Toledo, and I don't have to tell my doctor that I'm moving; I just show up somewhere else. So what we developed was this three-question model, from which we are able to derive the denominator, or population of patients, completely electronically from existing administrative data. We worked for about eight months, testing and retesting and going back and forth, and then doing manual chart review to make sure that what we ended up with in the denominator really was right. We're on a quest to pull out the false positives, and in the end what we've created is really a way to define population health in the physician group setting. You're left with an incredibly accurate and credible list, or registry even, of patients with diabetes or hypertension or whatever we might be looking at. So when I hand that list to a doctor, he says, yep, Mrs. Smith is mine, and I should have seen her last year and I didn't. It's incredibly accurate. Next slide, please.

Backing up a little bit, since I didn't have the slides here, this is just a more specific example of the model with diabetes. Asking, is this a patient with diabetes: has he or she had two visits with a diagnosis of diabetes within the last 24 months? We went back and forth for eight months: is it 24 months, is it 36, is it 12? And then following that through. And if you want to go to the next slide. Slide 14.
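To make the three-question model concrete, here is a minimal Python sketch of the denominator logic for the diabetes example. The diagnosis code prefix, the lookback windows, and the definition of "managed by the group" are illustrative assumptions, not the Collaborative's published specification.

    from datetime import date, timedelta

    DIABETES_PREFIXES = ("250",)              # assumed ICD-9 prefix for diabetes
    DIAGNOSIS_WINDOW = timedelta(days=730)    # two visits within 24 months
    CURRENCY_WINDOW = timedelta(days=730)     # assumed "still current" window

    def in_denominator(visits, today):
        """Apply the three-question model to one patient's visit history.
        `visits` is a list of (visit_date, diagnosis_code, in_group) tuples
        drawn from existing administrative data."""
        # Q1: does the patient really have diabetes? Two visits with a
        # diabetes diagnosis within the lookback window.
        dx = [v for v in visits
              if v[1].startswith(DIABETES_PREFIXES)
              and today - v[0] <= DIAGNOSIS_WINDOW]
        if len(dx) < 2:
            return False
        # Q2: is the patient being managed by this physician group? Here we
        # require the qualifying visits to be with in-group providers.
        if sum(1 for v in dx if v[2]) < 2:
            return False
        # Q3: is the patient still current with the group? Any visit of any
        # kind inside the currency window counts.
        return any(today - v[0] <= CURRENCY_WINDOW for v in visits)

    # Example: two in-group diabetes visits in the last year, plus a recent
    # visit of any kind, puts the patient in the denominator.
    visits = [(date(2006, 6, 1), "250.00", True),
              (date(2006, 9, 1), "250.02", True),
              (date(2007, 3, 1), "401.1", True)]
    assert in_denominator(visits, today=date(2007, 5, 3))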

This is a depiction of the infamous flowchart that accompanies each of our measure specifications. This is a denominator flowchart, and it is really the key to the denominator for each of our measures; following it through will get you your denominator. From there, as I mentioned, physician groups are able to harvest the numerator. They're encouraged to harvest it electronically when they can, and for the most part I would say that 80 percent of our groups are able to do so. And they're incented to do so, because then they've got that entire registry piece there and can use it internally for improvement and public reporting, et cetera. Next slide, please.

So groups go to the existing electronic data sources within their systems to capture the numerator. When they're unable to do that, there are two other methods available to complete the numerator: they can do a completely random sample, or they can do a hybrid measure, getting what they can electronically and then supplementing that with manual record review. But everybody always completes the denominator electronically, 100 percent. Next slide, please.
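A minimal sketch of that hybrid approach, continuing the Python example above: use the electronically captured result when one exists, and fall back to manual record review otherwise. The names and data shapes are illustrative guesses, not the Collaborative's specification.

    def complete_numerator(denominator_ids, electronic_results, review_chart):
        """`electronic_results` maps patient id -> bool for results found in
        lab or EHR feeds; `review_chart` stands in for the manual
        abstraction step used to supplement them."""
        results = {}
        for pid in denominator_ids:          # the denominator itself is
            if pid in electronic_results:    # always derived electronically
                results[pid] = electronic_results[pid]
            else:
                results[pid] = review_chart(pid)  # supplement manually
        return results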

So just in summary, related to the methodology, we've developed a system as well as a method for defining a patient population over which we can measure the quality of care that we're providing to patients. Next slide, please.

This slide depicts the measures that we're reporting. We're currently reporting at the physician group level; that was a decision we made early on. Because of our data model, everybody is reporting internally at the individual physician level; we just haven't chosen to go public with that yet. Obviously there are lots of scientific issues and political issues with that, and certainly as a BQI pilot we'll begin to address some of those. We update all of these measures annually, though internally folks are doing it on a monthly or quarterly basis, depending on the system. We have 11 measures right now, and we will begin to add more over the next few years; we've gotten our development process down and have a formal process in place, so that should get easier as we start to look at other measures. Next slide, please.

I just wanted to quickly outline how, once the data is harvested at the various physician groups, they enter it. We have a secure online data submission tool, and data submitters at each of the systems have access to it. They log in and submit data during the data submission period. They simply submit aggregated results at this time, denominator and numerator. We did that in the beginning because we were not very staff-rich and not capable of dealing with HIPAA issues at that time. As we move forward, and certainly with the validation process that we have in place, we will move to a more registry-based submission of every patient that hits the denominator for the given measures. So people are walked through a step-by-step process, they submit the aggregated numerator and denominator, and there are checks and balances built in: if you submit data that's 50 percent higher than last year, the tool asks, are you sure you really want to submit this, is this really right? Then validation occurs. If you recall the flowchart I showed a couple of slides ago, really what we start with is validation of the denominator, making sure that at every cut point the numbers people come up with make sense and that the right codes are included in each of them. And likewise the same thing occurs for the numerator. Next slide, please.
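In the spirit of the checks described above, here is a small sketch of submission-time sanity checks. The 50 percent threshold echoes the example in the talk; the other checks and all names are illustrative assumptions.

    def submission_checks(measure, numerator, denominator, prior=None):
        """Return warnings for an aggregated (numerator, denominator)
        submission; `prior` holds last year's pair for the same measure."""
        warnings = []
        if denominator <= 0:
            warnings.append(f"{measure}: denominator must be positive")
        elif numerator > denominator:
            warnings.append(f"{measure}: numerator exceeds denominator")
        elif prior and prior[1] > 0:
            rate = numerator / denominator
            prior_rate = prior[0] / prior[1]
            # Flag large year-over-year swings for confirmation.
            if prior_rate and abs(rate - prior_rate) / prior_rate > 0.5:
                warnings.append(f"{measure}: result differs from last year "
                                "by more than 50 percent; is this right?")
        return warnings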

I'll quickly skip over a couple of these slides, but I wanted to show you what the public reporting outcome looks like in the end. For example, for breast cancer screening, what we like to highlight is that, because of how we're measuring, we're able to measure the care for nearly 427,000 women who should have had a mammogram. You can see results at each of the organizational levels as well; for Advanced Healthcare, for example, 26,000 women who should have had a mammogram, and similarly for the other physician groups. And with that I will conclude and let you ask any questions.

>> Rick Stephens:

Great, thanks, Betsy.

>> Betsy Clough:

Thank you.

>> Rick Stephens:

Does anyone have any questions for Betsy? Yes, Kelly.

>> Kelly Cronin:

I was wondering when you were initially getting started whether you got a lot of pushback from physicians on the burden of reporting and using your Web-based tool.

>> Betsy Clough:

No, actually not. They actually pay us to participate in the measurement and reporting activities. So it's kind of a different model.

>> Kelly Cronin:

Yeah. And what percentage of clinicians do you have participating?

>> Betsy Clough:

About 50 percent of the primary care physicians within the state. We represent all of the large and most of the medium practices within the state. We're currently doing a pilot with 11 small practices -- when I say small practices, fewer than 20 providers -- to begin to understand what the resource and burden implications will be, because it's going to be different, and we need to understand how we can help them.

>> Kelly Cronin:

Have you thought about a migration plan when they start to adopt electronic health records or as you start to get electronic clinical data from other sources?

>> Betsy Clough:

A lot of that's already occurring within the given systems. As a result of what we're doing, a lot of the systems are going to the external labs or external radiology groups and saying, okay, I want this data electronically, and on a monthly basis. What's happened, for example, in the Milwaukee area is that four groups have banded together and gone to XYZ lab and said, we want this data. I think just by virtue of what we're doing, it's encouraged some of that already.

>> Rick Stephens:

Great. Other questions? Seeing none and hearing none, thank you very much, Betsy.

What I'd like to do now is transition to something I was going to talk about at the opening, but it probably fits in better here as part of the agenda. And that's to talk about some of the things we're looking at relative to the Workgroup going forward. One of the things we've observed from a leadership standpoint is that there are a lot of tremendously terrific things going on in terms of quality, and we've heard lots of great testimony presented to the Workgroup about some of those plans. It's really interesting to note that even with all the great work, we still have challenges about the language we all use -- what do we mean by what we say -- and even simple things like the different numbers of things that we track. It's clear that each of the stakeholder groups has different motivations, and we all talked about those today. It's also clear that we all have desired outcomes in mind, but it's not clear what our approach is to get to those outcomes. Again, we've heard some excellent examples today as we've looked at how each of the presenters and their organizations are pulling together collaboratives to get their quality data in a way that's of value and able to meet all the stakeholder requirements.

So one of the things we're looking at from a Quality Workgroup leadership standpoint is how we develop a systematic process for realizing the end state of the vision that we really want: the nation's health infrastructure in place and capable of gathering this data to meet the needs of each of the stakeholders. As we think about what that systematic process looks like, we've been reaching out and looking at such things as how you pull together all of the data in a way that makes sense: doing environmental scans, looking at requirements analysis, looking at what each of the Workgroup members has brought, looking at competencies and skill sets, and looking at the strategic options. Because in the end, we really are looking for a system that takes into account everyone's motivations, but a system that's going to deliver the output that we're all looking for in our respective areas.

As we've been thinking about this, we've been off talking to a number of organizations. I think some of you probably recall my input last time in the dialogue about whether we can bring to bear some of the systems engineering capabilities out there, where we do, in fact, bring disparate organizations together, and we're also working with Booz Allen Hamilton as we think about how we press forward. Now, the approach that we're looking at, in terms of being able to come up with the system, has five key steps.

Again, the first is to conduct that environmental scan and data infrastructure requirements analysis, to really understand and make sure we've got alignment about the environment and what the various stakeholder needs are.

The second is to look at the various concepts and drivers that are already out there, because this is not the first time someone has gone off and worked on a large-scale systems integration process. We want to be able to take advantage of that, but let's make sure we all understand the implications of what it means for this particular Workgroup.

The third is to develop the approach for collecting and analyzing the data. The fourth is to link all of our key requirements drivers together based upon priority needs. And the fifth is to craft that systematic process.

As we continue through this in the meetings here, we'll share where we are on that dialogue. Again, there is a lot of great work going on; we're trying to figure out how to package it so we can come to conclusions that make sense to all of us on the Workgroup and meet all of our needs. And I think that's an appropriate comment for this point in the meeting, because we're kind of reaching that question of what's the next step, and it will be critically important that we all share and agree on what that looks like.

We've heard some examples today of what everyone else has used: data generation, aggregation, transformation, information, measurement analysis, knowledge creation, clinical transformation. Ultimately this is about modifying the behavior of everyone in the system, so we're aligned on getting quality measures we can all use for our respective needs. The challenge is making sure we have a process we can all agree to. So we're going to be heading down that path.

In terms of time, we've got about 10 minutes left before we move to the public comment period. In getting ready for this meeting today, there were, I think, five or six critical questions that we were really trying to address. We asked each of the presenters to talk about the general data infrastructure, which included describing the process of how you gather the data and looking at the barriers. We looked at some specific questions on data aggregation: what motivates people to pull that together, and what the issues are. We talked about specific questions about workflow; we saw some flowcharts, I know Betsy had some and others had some, about how this all comes together. We talked about barriers, and there were a number of questions about episodes of care. So this may be an opportunity now for us as a group to spend a few minutes thinking about the key points that we're each walking away with. I would open it up to the floor for those comments, so we can get your input on what you learned from this discussion today that's most critical for us as a Quality Workgroup to focus on. Before I do that, Mike, do you have any other comments you'd like to add?

>> Mike Rapp:

No, I think you phrased it quite well, but just to elaborate a little bit. As we look to the future, we sort of have our end game in mind, and electronic health records play a key part in that. And I did hear some important points, especially in, I believe it was, Dr. Paulus's presentation, with the take-away points about using standard terminology and about looking to where the data comes from. It just seems like there are a lot of different parts to this enterprise of getting quality measures that use the data streams we have available. Right now we understand the claims system pretty well, and there's a big focus on claims data; we know what HEDIS can do with that, and we bring it together and aggregate it and try to use those measures. But when we deal with the electronic aspects of the measures, I think it's a little bit different, and perhaps there's not enough focus on the points that were made about what data is available electronically. If we don't develop measures with a focus on the data that's there, and instead just make the measures and say, okay, let's collect them, then we're working on an assumption, certainly at the hospital level, that we can just abstract it: we've got people down in the health information systems office, and they'll get that data for us; we don't care where it comes from, it's their job to get it. That's probably not a very organized way to approach it if we're trying to move to an electronic system.

The point about what's in the electronic health record: well, there may be things in the electronic health record, but there may be other things elsewhere in the electronic system. Let's try to bring things together. So perhaps Dr. Paulus and Dr. Kramer, whose presentations were certainly insightful, may have some specific steps: what are the critical steps we need to take, who needs to deal with them, and who do we need to bring together to get to where we're trying to go?

>>

Well, I think those are important points, and certainly giving greater mind share to what data elements are available in electronic health records, as you think about measure development and measure reporting, is an important thing. Because although those people down in HIM, oftentimes in the basements of the hospitals, can abstract that data, I'm not sure that's the best use of anyone's time. I'm not suggesting you are implying that, but when you think about the real goal of this being transforming care, all that abstraction doesn't yield any actual benefit; it's a necessary evil to get to an endpoint. I do think that, whether it's health delivery systems or the vendor association or others, significant thought should be given to the data that are collected in today's typical electronic health records and how those data elements could be used to drive reporting. I participate in the Certification Commission for Health Information Technology, and I have to give a disclaimer that I'm not speaking for anybody there, but reporting measures as an output functionality is something that's always gotten pushed out, in terms of whether it's relevant for certification in the near term. Linking up with CCHIT, raising the prioritization of reporting, and then blending together the measure development with the available data would provide a clean path to reporting electronically, which would in and of itself stimulate adoption, because it would solve the reporting problem. So I would view those data elements as a required input to the measure development process, rather than something off to the side.

>>

Good.

>> Kelly Cronin:

Just to let you know, we did actually address that issue in our first six months and just advanced some recommendations in that area. So the Quality use case, and what we mentioned earlier, the Quality Alliance Steering Committee expert panel that's being convened, will build off a lot of work the EHR vendor association has already started with CMS and NCQA and the AMA on two measures. Then we're going to take the core set of AQA and HQA measures, figure out all the common data elements, have HITSP drill down on the very detailed interoperability specifications for them, and then CCHIT will certainly have it on their road map. So hopefully that will start to create a process where we identify the common data elements, identify the interoperability specifications for them, and then have CCHIT develop certification criteria for an expanding set of measures over time. We'll have a sort of three-step process. But we recognize that it's going to be a while before that really causes a lot of change in the market.
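To picture that three-step chain, here is a minimal sketch. The measure name and every identifier in it are invented placeholders, not actual AQA or HQA measures, HITSP specifications, or CCHIT criteria.

    # Step 1: map each measure in the core set to its common data elements.
    MEASURE_ELEMENTS = {
        "hba1c_testing": ["diabetes_diagnosis", "hba1c_result", "encounter_date"],
    }

    # Step 2: map each common data element to an interoperability specification.
    ELEMENT_SPECS = {
        "diabetes_diagnosis": "interop-spec-dx",        # placeholder IDs
        "hba1c_result": "interop-spec-lab",
        "encounter_date": "interop-spec-encounter",
    }

    def certification_requirements(measure):
        """Step 3: the set of specifications an EHR would need to support
        to be certified for reporting this measure."""
        return {ELEMENT_SPECS[e] for e in MEASURE_ELEMENTS[measure]}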

>>

I commend the idea of developing the use case and the process design, and the concepts around how one manages, defines, and captures that data. And I'll expand on that a little bit. In a data dictionary that you might use to characterize that, you start off with the definition of the data; so, for example, admission source type is defined by CMS in the UB data set. That data is used in indicators MI16, PMI1, and PN3B, and the documenter is typically the registrar. We could go on and say, at the level of definition, as we define our quality measures, that this data would be found in the demographic section of a nationally recognized standard electronic health record. We could go to the extent that a small practice would be able to say, we know where to find that now, or we know what process we need to put into place to make sure that data is captured.
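One possible shape for such a data dictionary entry, sketched as a Python record: the values echo the speaker's example, the indicator IDs are carried over verbatim from the transcript, and the field names are invented for illustration.

    ADMISSION_SOURCE_TYPE = {
        "name": "admission_source_type",
        "definition": "Admission source type as defined by CMS in the UB data set",
        "used_by_indicators": ["MI16", "PMI1", "PN3B"],   # as cited in the talk
        "documenter": "registrar",
        "ehr_location": "demographic section of a standard electronic health record",
    }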

>> Rick Stephens:

Charlene, you had a comment?

>> Charlene Underwood:

I was going to support Kelly: as we move through the process, we certainly need to be able to drill down and understand what data is captured today and what is even mapped to the standards. And I think this whole process, by getting the vendor community to agree on what that data set is and what those standards are, will drive vendors to one place rather than multiple places, which will help accelerate everything. The other piece I think is really important is that, as we look at electronic health records, sometimes they conceptually don't include the demographics. A lot of work has been done to establish data strategies in the financial areas too, so if there's a way to think about it as a patient-centered health data strategy, independent of whether it's an EHR or whatever system captures it, I think that will help us as we move forward, because sometimes we can capture some of that data from a patient management system or from systems other than just an electronic health record. So the more we can think of it as a health data strategy that's going to drive the capture of measurements, the better, because there may be multiple sources; I think we're going to have hybrid systems out there for a while. We're going to derive some of that data from extracted data, and we're building smart tools out there to extract it. So there's going to be a blend of approaches to help report data for an interim period of time.

>> Rick Stephens:

Helen?

>> Helen Darling:

This may be something that you all know has been resolved in some other group or has been resolved already, but I still hear a lot about the areas where we need some national consensus that everybody agrees to, whatever it may be and however the process works. I know the Secretary often talks about bringing the key stakeholders together and deciding, but it seems to me we have at least two broad areas. One is where most stakeholders, not every person, would agree that we're better off as a nation to reach agreement, and then say, it's not going to make everybody happy, but this is it; on balance, we'll all be better off. And then there are all those others where there's a lot of opportunity for appropriate experimentation, much more knowledge, many more demonstrations, and things like that. The process, Rick, that you laid out sounds perfect for, among other things, bringing more clarity on that very early on, in step one or two. Even where plenty of work is being done, I hear enough confusion out in the country, talking to people, that they don't know if it's been done. So if we could just get the word out as soon as possible. I know it would affect, among others, the companies that are in the business, and some won't be happy about the results, but at some point we just have to bite the bullet and move ahead. That's where I think we need some communication outside, about both the process that you've got underway and what the results might look like. And then there will be a lot of discussion. It's very much like the rulemaking process, sort of an informal rulemaking process: this is the way it's going to be; let us know what you think; let us collectively sort it out; and then let's move ahead.

>> Rick Stephens:

Yes, Jerry?

>> Jerry Osheroff:

I think one of the themes from today is that the care delivery function is something of a black box from the perspective of the quality things we're talking about. Quality measures come into the black box, and there's some question about what they all are and about coming to agreement on what they should be. Coming out of the other end of the black box are the outcomes, and those outcomes are useful to feed back into the care delivery process and for a bunch of other purposes as well. What we're trying to do here is open up that black box. We heard from some of the provider organizations this morning about the things they're doing to connect the dots between the measures they're trying to hit and the outcomes they're reporting, and they shared some wonderful specific examples of ways they're getting knowledge into the workflow and getting data out of the systems that underpin the workflow. Kelly, the work you were mentioning with regard to rationalizing the data flows, where the data is coming from and going, is a critical piece of it. But another piece, which this Workgroup has discussed before and which has gone forward, is the notion of understanding the workflows in finer detail; we heard at least two specific examples today of where knowledge is getting into the system and how the data is getting out, and we've heard that some of the data needs to come from the registrars. So mapping out what those workflows look like. I know that was one of the recommendations that went up to the Secretary, and I was curious whether that's unfolding; I think there was a September deadline on that, something about laying out the workflows.

>> Kelly Cronin:

The same expert panel that's digging into the careful, detailed look at all the data elements across the numerators and denominators of a core set will also take into account the workflow issues and the care maps that we talked about. That's probably going to come after, because the rush right now is to get the Health IT Standards Panel the information they need to start their work on the interoperability specifications this summer. So the expert panel will probably convene over the next month to get as much as they can, building off what's going on in Boston and Indiana and the Indian Health Service and all the other federal systems that have tried to figure this out over the years, and once that's started, I think they're going to be really focused on workflow. Janet Corrigan left, but she's really been leading that. I know they've identified Paul Tang as the chair, and they're trying to build off everyone who has been doing the work over the last year.

>> Jerry Osheroff:

Great. Just building on the engineering metaphor that you were laying out: if we can sort of expose what's happening in the black box, use the examples that we heard today as pathways, and try to industrialize the whole thing, then everybody doesn't have to make this up on their own.

>> Rick Stephens:

Great. Well, thank you very much. I think we're out of time. Let me first mention our next meeting: it will be June 22nd, from 1 to 4 Eastern time. We will reconvene then and get the agenda out to everyone. Now it's time to ask for public comments. Judy, can you help us walk through that, please?

>> Judy Sparrow:

Yes. Matt, I see you put the number out.

>> Matt McCoy:

Yeah, members of the public who have been following along with the Webcast will see a slide up now with instructions for making a public comment. If anybody has called in to listen over the phone during the course of the meeting, you only need to press star 1 to alert the operator that you want to make a comment. I will wait about one minute for people to get through, and I'll also point out there's an e-mail address at the bottom of the slide if people would rather submit comments that way.

>> Judy Sparrow:

Is there anybody in the room that might like to make a comment?

>> Angela Jeansonne:

Can you hear me?

>> Rick Stephens:

Yes.

>> Angela Jeansonne:

Okay, great. Good afternoon, everybody. My name is Angela Jeansonne, and I'm with the American Osteopathic Association. I just wanted to say, in listening to the conversation, it's been great to hear all the presentations; there are a lot of interesting things going on. One thing I'm heartened to hear is that there's a real understanding of the folks in smaller groups and smaller practices, and of doing outreach to them. The issues of cost and burden and interoperability and all those things are very real issues for folks, and we really appreciate you all thinking about those issues and continuing with that dialogue.

>> Rick Stephens:

Great, thank you.

>> Judy Sparrow:

Matt, anybody on the line?

>> Matt McCoy:

No, not today.

>> Judy Sparrow:

Okay, well, thank you.

>> Rick Stephens:

Are there any additional comments from the audience here? Seeing none, we'll call the meeting adjourned. Thank you all very much.