[This Transcript is Unedited]

Department of Health and Human Services

National Committee on Vital and Health Statistics

Workgroup on Quality

June 3, 2005

Hubert H. Humphrey Building
Room 705A
200 Independence Avenue, S.W.
Washington, D.C. 20201

Proceedings by:
CASET Associates, Ltd.
10201 Lee Highway, Suite 180
Fairfax, Virginia 22030
(703) 352-0091


P R O C E E D I N G S [8:17 a.m.]

Agenda Item: Welcome and Introductions – Robert W. Hungate, Chair

MR. HUNGATE: We're talking shop; let's get going.

[Laughter.]

MR. HUNGATE: Well, our introductions will take less time this morning.

Welcome to the second day of the planning session of the Quality Workgroup of the National Committee on Vital and Health Statistics.

Since so far we have the same people that we did yesterday, we just need to, I think, introduce by name; no need to talk about any conflicts. I'm Bob Hungate, Chairman of the Workgroup, member of NCVHS and Principal of Physician Patient Partnerships for Health; Chair of the Group Insurance Commission in Massachusetts.

[Introductions.]

MR. HUNGATE: And we'll have others joining us as we go forward, so as they arrive, we'll have them introduce themselves and try to keep the record straight for our Internet audience as well.

I know we've had a lot of little discussions. The topic yesterday was marvelously enlarging of our understanding, broad in its content, and challenging, and I think we got a lot of food for thought to work from.

I think our matrix, initially what was called the "Carol matrix" -- [laughs] -- you know, sometimes you get immortalized that way. You want to rename it? What should we rename it, the matrix?

MS. McCALL: We'll move forward.

MR. HUNGATE: The matrix --

MS. McCALL: The Workgroup.

MR. HUNGATE: -- which, you know, puts down the individual health, population health, system health, as important aspects of what we're talking about. I think it's a good construct.

I'm hoping that Don can join us today because I think that maybe it's here that we should try to see what the Population Workgroup does on this kind of an issue and what we try to do.

I think this'll be the right arena to kind of work that out, and I personally think that there is a lot of potential synergy in those two activities if we think it through in a work plan way so that we know how those work plans dovetail.

That process involves both what we do here, the Executive Committee retreat, and then finally discussion I think at full Committee later this month. But I think after that, it'll be pretty well clear.

I think we're scheduled to have Carol lead off the next round of trying to pull us together on this thing, and I think that's where we should go.

The only thing I wonder about is whether we should try to finish up what we think our charge should be. Because our charge relates to what the work plan finally ends up being, we probably can't really do that until we have our work plan, so maybe that would be putting the cart before the horse in this case, so let's just go ahead.

Agenda Item: Summarize and Facilitate Discussions of June 2 and Focus for June 3 – Carol McCall

MS. McCALL: Okay. And we will be having some guests come?

MR. HUNGATE: Yes.

MS. McCALL: All right.

MR. HUNGATE: Welcome, Carolyn.

MS. CLANCY: Good morning.

MS. McCALL: Good timing.

MR. HUNGATE: You're set up right up there with the microphone and a label.

MS. McCALL: Good morning, Carolyn. Your timing's good. Actually, what we're getting set to do while you get yourself settled -- and maybe right before you get yourself all settled, do you want to have her introduce herself for the record?

MR. HUNGATE: Why don't you introduce yourself? We're on the Internet, and so we've gone through and told our names and functions.

MS. CLANCY: Great. Good morning. I'm Carolyn Clancy. Sorry to be a little bit late. The good news is no accidents for me, but people go very slowly; they're very cautious. And I'm the Director of the Agency for Healthcare Research and Quality --

MR. HUNGATE: Welcome. Thank you for joining us.

MS. CLANCY: -- and I work with Anna.

MR. HUNGATE: And we're just starting to kind of rehash what we went through yesterday and talk through that, so please interrupt and ask questions as you hear things that you wonder about.

MS. GREENBERG: Do you want me to?

MS. McCALL: Yes, that would be great. We're asking Marjorie to facilitate.

Normally what I would do is I would jump up to a flip chart.

MR. HUNGATE: Yes, me, too.

MS. McCALL: I don't have to do it as long as I can be miked. Don, is there a way for me to be miked?

MR. WASHINGTON: Yes.

MS. McCALL: Okay. All right, so I would love to do that, and I'm going to be right up here.

MS. GREENBERG: You want to do it?

MR. HUNGATE: Yes, right.

MS. McCALL: Yes.

MR. HUNGATE: That's what I figured. [Laughs.]

MS. McCALL: Perfect.

What I'm going to recommend while Don is going to get that mike and some markers and all that set up is that we do a couple of things.

The first thing that I'd like to do is just, for the folks that were here yesterday, try to capture not so much what you think the work plan should be, although it could be that, what the key conclusions were, what you thought some possible and important actions would be, so that we just get those down in a visible way that we can all see them, okay, because we will probably be working with that either in its existing form or in some modified form as we go through into the afternoon.

So I thought that might be a really good place to start. And with Carolyn here, it would be a great way for you to get the condensed version, although, you know, some of the meat and the color and all of that will be missing, but, you know, we obviously have some time here this morning for you to ask for clarification, for you to amplify, and all different types of things.

I think we're ready to go.

There will be times when, as I try to facilitate this, I may violate that role by actually having some things to add, but I will not confuse you.

MR. HUNGATE: Justine, go ahead.

DR. CARR: Carol, are you ready? I'll start.

MS. McCALL: Not quite.

MR. HUNGATE: We were quiet too long on the Internet; music came on, right? [Laughs.]

MS. McCALL: Okay.

MR. HUNGATE: "There's no activity in this conference."

MS. McCALL: I think we are ready, so I think --

DR. CARR: Okay, thanks. It was a wonderful day yesterday. I think we all had our horizons expanded.

I, at the end of the day, had two observations.

Number one, which was very stark and dramatic, is that John Halamka, who probably has the most robust electronic health record, has no road map for quality. And I think if he doesn't have it, no one has it.

So, in terms of our thought about identifying gaps, I think that one is huge, so I would put that out there as number one, the crosswalk between quality and the electronic health record.

And the second theme was the considerable discussion about what is quality and how do you measure it, and the subheadings under that would be, number one, are we talking about quality or value, value being defined as the interface of cost and quality?

The second, which was brought up by Don Detmer, was the interactive and perhaps multi-dimensional concept of quality, and this, I guess, was also seconded by Brent James: that one metric over time is not a satisfactory definition of quality, that there need to be more dimensions to it, but most importantly a kind of massaging of it, watching over time, checking and linking back to find out whether what we thought was a measure of quality is a measure of quality, and then perhaps an iterative process of refining the denominator or refining the definition, and in some cases deleting things that we once thought were measures of quality.

MS. McCALL: Okay. Just want to make sure that these capture what it was that you intended for them to. There's actually, I guess --

DR. CARR: Well, there's two things. One is the electronic health record has no chapter on quality today.

MS. McCALL: Okay.

DR. CARR: Number two, our definition of quality is not represented by a consensus. And I think that there are more dimensions, that the definition about whether we're talking about quality or value, value also being the part that would align incentives with the patient -- I think the point was made yesterday the patients aren't so much concerned about how many stars you got on health grades but they are concerned about the interface of their expenditures and their outcomes.

So I would say the definition of quality -- the two things are the crosswalk, the electronic health record; the second is the definition of quality.

That is simply elaborating on some of the dimensions that were raised by the speakers yesterday.

MS. McCALL: Got it. So it's kind of these two, one, two, with A, B and C.

DR. CARR: Right, yes.

MS. McCALL: Got it. Okay, great.

Now, hold on -- before you begin, I don't have any place to put these so the folks can see them.

MR. HUNGATE: Do we have tape?

MS. GREENBERG: Well, we'll get some tape.

MS. McCALL: Okay, that's fine. This is so that people can feel free to -- we can add on.

MR. HUNGATE: Right.

MS. GREENBERG: We had a memorable meeting -- I think it was the Quality Workgroup -- where we were writing on white boards in the conference room and at one point we started writing on the wall and didn't realize it.

MR. HUNGATE: She was nice. There was one guilty party.

[Laughter.]

MS. McCALL: Okay, great. Mr. Bill?

MR. SCANLON: Maybe it's partly a clarification, because I guess I think that within the electronic health record, in some respects there are fragments of quality, I mean that there have been steps that have been taken to try and influence quality.

I mean, we hear, you know, over and over again about some checking for drug interactions -- I mean, there's clearly steps in that direction -- but nothing that's systematic, more comprehensive.

And I think that's true as we move out and we think about taking data out of the electronic health records and moving it into some other context, because I think the one thing that struck me yesterday was the morning presentations about applications of information and how we're always focused on sort of a partial view of the quality and that in some instances we've got sort of problems in terms of (?).

Brent's presentation about -- we had a discussion of the use of Caesarian section rates and sort of how they can be misinterpreted or not interpreted correctly -- then he came back to an issue of risk adjustment, that we can get information about the provider, but if we haven't adequately risk adjusted, we don't know anything. It's not that C-section rates are necessarily a bad measure; rather, risk adjustment can be the ultimate problem.
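[To make the risk-adjustment point concrete: the observed-to-expected comparison Mr. Scanlon describes can be sketched as below. This is a minimal illustration in Python; the toy risk model, field names, and numbers are assumptions for demonstration only, not anything presented at the hearing.]

# Indirect standardization ("observed over expected") for comparing
# provider C-section rates. Toy risk model and data are illustrative.

def oe_ratio(observed_sections, patients, risk_model):
    """Observed/expected ratio: ~1 means the raw C-section rate is
    about what the provider's case mix predicts; >1 means more."""
    expected = sum(risk_model(p) for p in patients)
    return observed_sections / expected

def toy_risk(p):
    """Hypothetical patient-level risk: prior C-section and breech
    presentation raise the predicted probability."""
    base = 0.15
    if p["prior_cesarean"]:
        base += 0.45
    if p["breech"]:
        base += 0.30
    return min(base, 0.95)

# A provider with a 50% raw rate but a high-risk case mix.
panel = [{"prior_cesarean": True,  "breech": False},
         {"prior_cesarean": False, "breech": True},
         {"prior_cesarean": True,  "breech": True},
         {"prior_cesarean": False, "breech": False}]
print(oe_ratio(2, panel, toy_risk))  # ~0.95: no worse than expected

[So a raw rate that looks alarming can be unremarkable once expected risk is counted, which is the sense in which risk adjustment, not the measure itself, can be the ultimate problem.]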

MS. McCALL: So I want to make sure that I adequately captured that point about risk adjustment.

MR. SCANLON: I mean, in terms of when we move outside of the delivery context, I mean, we're trying to use information in different ways to influence quality, and we heard a lot from Brent yesterday about what CMS is doing.

The issue is they're trying to influence behavior by in some ways scoring providers, providing information to beneficiaries and consumers for choice. The issue is the appropriateness of the measures, the accuracy of the measures, and part of that relates to -- in addition to the issue of risk adjustment, there was another piece that he talked about which is the issue of translation and how do we translate this information to consumers? And we may be able to rate sort of delivery of services from a professional perspective, but how do we use this information in a market context to try and inform consumers?

MS. McCALL: Okay. So I want to make sure that I've got it right, that somehow the appropriateness and accuracy of the measures may not translate into how they're being used? Is that --

MR. SCANLON: They need to be appropriate and accurate before we think then about sort of how they can be then translated sort of for use.

And I think from what we heard yesterday morning that we're not there.

MS. McCALL: Right.

MR. SCANLON: We're not there in terms of comprehensiveness. We have a lot on the measurement side that we need to do, and this kind of, I think, goes to, in some respects, Justine's comment about the electronic health record not having sort of a chapter on quality. We don't have a chapter on the use of information for quality outside of the delivery system, either, and we need to get there sort of as well.

MS. McCALL: Okay.

MR. SCANLON: The other thing, from yesterday afternoon's discussion where Don was particularly talking about the interface between sort of quality and the IT, is the issue of how we deliver the information that we want to use outside of the delivery sector to influence quality.

You know, we've got a lot of applications sort of waiting in the wings in terms of information to consumers, pay for performance, et cetera, but we've got a real problem in terms of thinking about how are we going to have the information to actually implement these things?

MS. McCALL: Okay. So you said it's how we want to deliver it outside of that?

MR. SCANLON: How are we going to get the information delivered to the policy makers and to the program managers that we want to have use this information?

MR. HUNGATE: Let me interrupt the flow just a little bit and get our new additions introduced. So, Stan, Dan --

MR. ETTINGER: Stan Ettinger, AHRQ.

MR. RODE: Dan Rode.

DR. JENCKS: And Steve Jencks.

DR. STEINWACHS: And Don Steinwachs.

MR. HUNGATE: And Don. Finally made it! Welcome, all four of you. Marjorie?

MS. GREENBERG: Okay. I also found yesterday not only fascinating but also very clarifying in some ways for me. Maybe I got too much into that simple box rather than the complexity box that Don Detmer was talking about, because it is all very complex, but a few things that I took out of it -- and as I just said to Anna, the whole thing became very clear to me in the shower this morning, but I'm not sure that I've retained it.

Obviously what we're looking for is what unique value the Committee can contribute here, because there is a lot going on. There's a lot going on in the government, there's a lot going on outside of the government, and yet the bottom line doesn't seem to be that everything's under control and we don't need to do anything.

And what Don talked about was the Committee being the keeper of the vision, first of all, helping to lay out the vision, and then sort of the keeper and the expander and enhancer of the vision.

Sort of, I think, what he had in mind as a parallel was the way that the Committee has functioned with the NHII. And so when you mentioned about the synergies, Bob, with Populations, obviously I think the synergies with the NHII Workgroup also became even clearer yesterday.

And so he talked about the keepers of the vision, and that doesn't mean it's a National Committee vision really so much as the vision that the Committee is able to articulate through its usually effective process of hearing from all the stakeholders.

That relates to the matrix, because it became very clear that -- I mean, we knew this, but it was, I think, well articulated I believe by Joachim, or maybe it was Ernie Moy, I'm not sure which one, but that the different players here and the different stakeholders have very different perceptions of what they need.

That was very obvious from our meetings with the purchasers and the providers, but that goes broader, and I think both Brent James and then our other speakers, including Richard Klein from NCHS and Ernie talking about the quality and disparity reports et cetera, and of course CMS talking about the pay for performance.

It started to flesh out that not only are there many different needs for information on quality performance measurement and value but that in some cases those are oppositional. I mean, they're not all complementary, and focusing on one may create issues with the other.

But they go across this whole spectrum which you have on your matrix from the system, or as Don Detmer I thought very well put it, he said the real challenge to us is: What do we do until we get an American health care system?

I said to my husband this morning, "Gee, spending five years in England was really good for Don," but --

MS. McCALL: Can I put that on a slide?

[Laughter.]

MS. GREENBERG: You agree with that, Carol?

MS. McCALL: Yes.

MS. GREENBERG: And so, I mean, from the system or the non-system all the way down to the individual, as Bob pointed out, you know, what kind of information do I have if I'm faced with a health condition and the choice between a number of interventions and no interventions?

So I think that that idea of sort of filling out the matrix and as part of really articulating the vision -- and then he also mentioned the gaps, and we've talked about the gaps.

Now, my own personal epiphany was that we have been debating and discussing in this Committee, not just this Workgroup but in the Committee, for the last 20 years what started off as "describe the statistical aspects of physician payment systems" and now is more, you know: Do we do administrative data? Do we enhance administrative data? Do we put all our eggs in the EHR basket? We had hearings on that in the last year, et cetera, and I know I have been a protagonist in these discussions. But let's forget mode, in a sense; we can put mode to rest, because I think -- not that we could have 10 years ago, but I think now the administrative systems are poised to collect quality performance data if we want them to.

But we all realize that the real benefit will come from the electronic health record, but properly configured and designed and properly linked with decision support and population-based health and all of that.

So we've got that goal, and everybody realizes that's the goal. You know, people have different views on how long it's going to take to get to that, and so, you know, I continue to feel that some segments are still going to need to use what they have, which is an enhanced claim or some kind of combination of an enhanced claim and a claim attachment standard which may not go to the payer but, I mean, those sort of HIPAA standards may still be vehicles for quality, but what the vision needs to do is to kind of lay out what people need, what the minimum -- one place I do kind of disagree with Don is I'm still a believer in core data sets, not maximum data sets but articulating what you really at a minimum need.

For example, I mean, you just cannot look at disparities if you don't have information on race and ethnicity; there's no way to do it if you don't -- I mean, racial and ethnic disparities. You can't really look at socioeconomic if you don't either have something about the patient or can't link it with community variables and all that.

And I think like with the mortgage information, that just was so stunning to me when they were able to come out and show the disparities on being able to get mortgages, and low-cost mortgages, and it was because they collected the information. I mean, you can't just, you know, figure that out if you don't have the information, but I'll get off that soapbox.

But in any event, we can, I think, lay out functional status. You know, we know we need functional status, we know we need information about labs. But whether that be collected through this enhanced administrative route or through the electronic health record, I think we can in a sense just say, "Those are options. We think the best option is really pursuing aggressively the electronic record."

And the other thing that I heard and that I think is consistent with certainly what's coming out of Dr. Brailer's office et cetera but that maybe we haven't focused on enough is these regional community solutions. And I thought it was absolutely stunning -- well, two things were stunning that John said, and thank you, Justine, for bringing him; I mean, all together it was very useful having him here.

One was the fact that here he has this whole advanced system and he still is like -- I think he was being a little modest but he says he still really doesn't have a road map as it were for quality.

But the other was that he said return on investment of the HIPAA transactions -- incredible in Massachusetts, went from $5.00 a transaction to 10 cents a transaction because they had a community solution, and they don't have every payer with their own companion guide et cetera.

When we had a hearing two months ago from the Standards and Security Subcommittee, we couldn't really get information on ROI -- "Well, we're not there yet; right now it's costing us more than we're, you know, making back on it." And yet we have him come in and say, "I've got the data," you know, the $5.00 to 10 cents. So that's because of the community solution and the regional agreement and the trust that's been built there over a period of 25 years, of course starting with the Massachusetts Health Data Consortium.
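[For scale, the figures Ms. Greenberg quotes imply a roughly 98 percent unit-cost reduction. A back-of-the-envelope sketch, in which the annual transaction volume is an illustrative assumption -- only the two unit costs come from the testimony:]

# What the $5.00 -> $0.10 per-transaction drop implies at volume.
cost_before = 5.00   # dollars per HIPAA transaction, stand-alone
cost_after = 0.10    # dollars per transaction, community solution
volume = 1_000_000   # hypothetical annual transaction volume

savings = (cost_before - cost_after) * volume
reduction = 1 - cost_after / cost_before
print(f"${savings:,.0f}/year saved ({reduction:.0%} unit-cost cut)")
# -> $4,900,000/year saved (98% unit-cost cut)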

So I think there are ways that the incentives maybe are currently being aligned and could be aligned better. And my bottom line is that what was really effective, I think, was the NHII Workgroup and the 21st century vision for health statistics -- though unfortunately we haven't really picked up on the latter the way the former has taken off -- but we got a lot of testimony, developed visions, and then we went out and had hearings -- we actually had regional hearings -- to get feedback on those visions and on those kinds of conceptual models and principles and guiding principles, and then put out something that people could then run with.

And I could see a very similar thing being done here around this quality/value model.

DR. CARR: Can I just add? One of the other, I think, powerful points that Don made was just that we organize around the IOM Dimensions of Care --

MS. GREENBERG: Yes.

DR. CARR: -- and that we use that as an overview to have a balanced approach so that we don't overlook in particular equitable -- you know, in favor of safety.

I mean, and I think that, you know, to your point of what John said, by making an electronic health record, it's true, Bill said it, that, okay, writing is legible and drug/drug interactions are checked, and actually it's more efficient, more timely, if you do e-prescribing and so on.

So we're moving ahead on those areas, and I think, you know, in a way I like his idea because it resonates with mine. But, you know, when we made that grid, you could see all the things that we're doing on effectiveness, whether it be process measures or outcome measures or data registries; but as you go down and start trying to look at what we have on patient-centered, what we have on equitable -- not quite as much.

So I think that that is a good grounding as we, you know, focus on what is quality, what is value, that we not lose sight of any one of those dimensions.

MS. McCALL: Okay.

MR. HUNGATE: Carol, do you want to add yours?

MS. McCALL: Yes, I would like to do that. It's going to be a little odd because I have to write and talk.

MS. GREENBERG: Do you want me to write while you talk?

MS. McCALL: That'll be great. A couple of things. First, some observations from the day, and then I've got some specific things that I think, you know, we want to do and you can capture as much or as little as what you like.

I was struck by this continuum that Brent put out in the morning from this accountability to learning, and there were just so many things that I took away from that, but a couple of clear ones is that if we kind of design a system at the wrong end -- I think I said this yesterday -- we're going to kind of paint ourselves into a box and we're not going to be in a position where we can really get something to adapt.

And I think this links with, Justine, something that you were talking about just in terms of having specific metrics and kind of multiple dimension of metrics so they're very clear and very detailed.

So some things that I took away is that we do need a clear vision of what we want in about 10 years. Okay, so if we can articulate that, and then I think that we can try to build toward that.

And they have to take a lot of these things into account, everything that we have so far up there, everything from an IOM type of balance to the fact that we want to have some accountability. We do have pay for performance, but we also want to learn.

So we have to take all of that into account and try to literally create a picture in our minds and maybe on paper of what, if you had it all, what would it do, okay, and what would be different? Then I think we can begin, because -- some notes that I made around the metrics, I think there's a paradigm shift that we need.

I was talking a little bit about this, before we officially got started, with Anna and with Bob, that we tend to think, or I hear us say a lot, that metrics -- that it's about reporting, and that the paradigm is that somehow metrics is a very passive thing, that if we can just get the data out and get some metrics built that we can look at them and see what to do.

And there's a saying that captures it: to know and not to do is really not to know. So if we want metrics, if we think they're valuable, they have to be designed into the very fabric of what we do. And that means at that point of whatever it is -- point of decision, point of care; many points, it's not one point, it's many points, it's not just for breakfast any more -- they have to be about everything.

So it has to literally be embedded into a flow, in two ways. It has to alter that flow by informing it and it also has to learn from that flow.

So we're talking about an adaptive system, we're talking about one that learns, okay, which is both a cognitive process as well as it has technology implications.

There's another quote that came to mind, kind of to follow on that. It's one of my favorites, so I'm going to bring it up, which is that "We are what we repeatedly do. Excellence, then, is not an act but a habit." And that's Aristotle.

So if we seek excellence, again to the point of trying to embed it into what we do, then we have to do that. And so it gets back to -- okay, we have a hand signal from yesterday, Carolyn -- it's this, okay? And what this means is the IT/quality intersection that we've been talking about, because it's not just a system, all right?

It is in fact, first and foremost, a paradigm that is driven from a vision that has implications for mechanisms as well as systems.

So I think there are four take-aways that I came away with, with all that as kind of preamble.

It is this IT/quality intersection, all right?

It's also having these conceptual models that Brent talked about, and he talked about ultimate metrics and kind of intermediate metrics, and the more robust those become in the chain, the more that -- you know, it's a false choice to say I have to pick between an intermediate metric and an ultimate metric, that if the intermediate ones are robust, that those are the ones that you really want to work with. The volume is great, the time lag, the latency, is shortened, and you can really get a feedback loop that's fast and you can truly learn from.

So we need those, and I think that that's that kind of shared intellectual property, that if we cooperate on quality and compete on service that that's a paradigm that's valuable.

And so if we have a shared intellectual property around what those metrics are, what is scientifically sound, what it is that we know so that we have a mechanism to know what we know and to know what we don't know and to distinguish between those, so that's a to do.

I still also believe that there is a need to link quality to pay for performance. And that may be a two-step process because we still believe in some sort of P for P, I believe.

So the question is, even with these new world metrics and how it's going to work and it's all embedded into the flow and life is great, I still won't know how that links to actual payment, and I think that there's work to be done there. So in this intersection, in these mechanisms, we still have to say how that link would happen.

And then the other thing I think we have to do is we have to care very much about what the link is today in any sort of quality or value metrics to current pay for performance because maybe there's a bridge, and that's kind of, Marjorie, what I heard you talking about -- is there a bridge between today and administrative and tomorrow and the EHR? We have to care about that from the metrics standpoint; we also have to care about that from a P for P standpoint because, by God, if we do something wrong now, we're really going to mess it up for 10 years from now.

So we want to make sure that we are directionally correct; even though what we can do with administrative data, what we can do within P for P, is going to be just a small subset of what we think we can do in the future, it still needs to point and be aligned in the same direction.

And I think that if we are going to be stewards of a vision that part of that stewardship is making sure that we don't mess it up today. So that would be a fourth, any sort of bridge to the current P for P.

So those were my kind of specific take-aways.

MS. GREENBERG: I think I got it, the last two.

MS. McCALL: Okay. Thank you. All right, other things, other reactions, common thoughts from -- now I'm back to facilitator.

MR. HUNGATE: I had a couple I wanted to highlight.

I thought we really kind of clearly stated in the context of discussion that the electronic medical record was a necessary but insufficient way of addressing quality issues. And I think that's an important thing for us to state as a conclusion, because I think it is kind of where we ended up on that. And it relates to your, you know, thinking through that.

MS. GREENBERG: Well, it was Simon's first comment also, I think.

MR. HUNGATE: Right. So if that's a given, then it should help us think through.

And related to that was Don Detmer's making a distinction between infrastructure and knowledge management.

MS. GREENBERG: Right.

MR. HUNGATE: And I think that's a very important distinction and should also help inform our work, because the attention has largely been on the infrastructure and not on the knowledge management, and I think our vision has to deal with the knowledge management as it relates to performance measurement; that's kind of the core of what the vision is about.

The other piece that was specific is the graphic that Don Detmer used of life in the complexity zone. You know, I thought it was very descriptive, and if we're trying to work on a vision, having descriptors that help us describe it is a big piece of it.

Brent's continuum of accountability and learning was one of those, and I think this life in the complexity zone is another.

In the hand-out, there are more pieces in Don Detmer's material than were actually presented and I would encourage people to look at them because I think they're additive to that.

There also was a suggestion by Kelly Cronin that attention needed to be paid to the requirements of decision support systems, and I think that ties to the knowledge management and is an important piece of it, and it was raised as an issue.

MS. McCALL: Pay attention to the -- what was the specific word you used?

MR. HUNGATE: Decision support systems.

MS. McCALL: Yes, but what about them, just kind of the content of them or --

PARTICIPANT: The requirements.

MS. McCALL: The requirements, thank you.

MR. HUNGATE: The term "audit" came up periodically, not in any clear way, but I think that when you start talking about knowledge, the verifiability, the credibility, of knowledge and audit are interrelated pieces, so it's a concept that we probably need to have in there in some way.

DR. CARR: I think they used the word "integrity."

MR. HUNGATE: Integrity? Yes, integrity.

MS. McCALL: Okay, great. Other things, take-aways?

MR. SCANLON: I had one footnote to --

MS. McCALL: Sure.

MR. SCANLON: -- the comment about the electronic health record is a sufficient but it may be a necessary --

MR. HUNGATE: Necessary, but not sufficient.

MS. McCALL: Right.

MR. SCANLON: I think it's necessary for, in some respects, sort of achieving optimum quality or efficiently achieving quality, but at the same time I would be concerned that we don't think of it as the prerequisite; we stop doing everything else until we get --

One of the issues that we discussed yesterday was: How do we encourage the electronic health record? And one of the strategies that's been put forth is the idea that we just move forward on things like pay for performance. It's going to be a whole lot easier for people to comply if they have them. And then that's going to be one form of encouragement of the electronic health record, so that's --

MR. HUNGATE: I agree.

MR. SCANLON: -- a footnote to that.

MS. GREENBERG: And that relates to my sort of mode epiphany that --

MR. SCANLON: Right.

MS. GREENBERG: -- that if you want to try to do this some other way, good luck!

MR. SCANLON: Right.

MR. HUNGATE: Steve, you were discussing yesterday. Do you want to add anything to those lists?

DR. JENCKS: [Off microphone.]

MR. HUNGATE: Okay. You're going to lie in the weeds for a few minutes? All right.

MS. GREENBERG: Shall we see if Susan --

MR. HUNGATE: Susan?

MS. GREENBERG: -- or anybody else who was here --

MR. HUNGATE: Dan, you want to make any comments? You didn't say anything yesterday.

MR. RODE: [Off microphone.]

MS. McCALL: Okay. Now, we may add to these, but these are important, so we may change their locations so everybody can see them, but they're important in that we will use them.

MR. HUNGATE: Let's let Dan put his two cents in --

MS. McCALL: Oh, I'm sorry.

MR. HUNGATE: -- if he's got something. He's been a regular attendee at these --

MR. RODE: Dan Rode, American Health Information Management Association.

The issue of value came up yesterday, associated with quality, and for many years we've equated, I think to some degree, quality and value in that it has affected the way that we use our administrative data and unfortunately has gone through some of the channels Marjorie's talked about.

We need to, I think, as we look at the electronic health record, as we look at any dataset, make sure that we have consistency in these datasets so that in the future, as data's taken and then used to evaluate for quality and other things, there's no misnomer, because right now we change the basic data in our systems to meet payment mechanisms.

And that thought came to me partly because the next part that she talked about yesterday was the use of value and how it relates to policy decisions, how it relates to pay for performance.

But I think just that population piece, and what struck me about New England was to recognize that if we have a chicken-and-egg situation, if we can begin to collect some of this data and establish some metrics, then we might be able to answer the question we've not been able to answer before, and that is: What is the value of different care that we render on the individual themselves? What's the ROI for better data?

As we went through the ICD-10 discussion, the question always came back as: What's the ROI to have ICD-10 data instead of ICD-9 data? And we couldn't answer that; Rand couldn't answer that.

New England started to be able to put some value in those kinds of discussions, but it's kind of a chicken-and-egg thing; you almost have to have the data --

DR. CARR: Exactly.

MR. RODE: -- in order to begin to discuss it. And then look at Brent's continuum yesterday, and as has been mentioned so often, make sure we go at that continuum in the right direction.

And that was for me a great take-away because it really gets down to the discussion not only of can we go from $5.00 to 10 cents on the administrative side, which is something I've been working with for too many years, but can we take the data and actually equate it to better care for the individuals that are receiving the care which I think we all believe in turn will also lower the cost of health care.

You have an interesting role as a Committee, and you didn't talk about it yesterday, but you sure talked around it. One role is education. As I've gone through various meetings of various groups, outside of the IOM, I don't think anyone's talked about how we educate the population to these needs and the values that we've heard.

I can't clone New England; I don't know why but it's uncloneable, I'm told. But how do we translate those experiences to something that becomes of value to other communities, that they can begin to see that? And I think that's a role the Committee can serve.

The other thing that we didn't discuss yesterday that I'm beginning to view is we're going to have some challenges in the metrics, competing metrics, and at some point the Secretary is going to need some advice.

And whether it's in this Committee or one of the other committees associated with NCVHS, you're the advisory body, and I think you're going to have to keep a pulse on these metrics and maybe play the role that Brent's playing, or maybe Carolyn plays the role that Brent's playing, to say: Is it time to review this metric, and do we change it?

And if I can get to Marjorie's point on the data itself, we're very involved in putting together the standardized electronic health record.

But I think of the standardized record as a record of data and data elements; I don't think of it as a record trying to collect quality information. I think it's collecting the data that someone will turn into quality information, but it's key to have an independent body like this and the IOM talk about what should be in that record. That's an arm-wrestling match that's going on right now in HL7 and one that, you know, as we come out with these standards, is not going to be conclusive. We're going to add to this record over time, so that becomes an issue.

And of course I have to say any promotion of data like ICD-10, functional status and the rest certainly wouldn't hurt us, because I agree with Marjorie -- it's going to be a while before we see a standardized electronic health record in enough facilities around the country to give us that kind of information, and hopefully communities like New England will push their end. We're looking at a national database.

We need to do that, and I think a move to ICD-10 would start the structure, so that it's the experience that you can then apply to metrics, too. Right now, we're applying metrics to a very old and very limited database, and we'd need to expand that database; and if we started today, we still wouldn't even begin to collect that data till about the late part of 2008, so how long do you want to wait?

MR. HUNGATE: Very good -- thank you.

The footnote I might like to put on that is that the issue of competing metrics makes me think of chaos theory and complex adaptive systems, and the idea that, you know, that is kind of what's happening, it seems to me.

MS. GREENBERG: If I could just say --

MR. HUNGATE: Marjorie?

MS. GREENBERG: -- something on this classification. Obviously, one of my deep interests and responsibilities is -- we talked about outcomes, and the ultimate outcome obviously is mortality. We'd like to have more intermediate outcomes because there's no way ultimately to avoid the latter.

I once heard an expert -- Joanne, what's her name, from Georgetown -- say that our goal for ourselves and everyone we love is that no one will care what we died from because, you know, it'll just be the end of our life at a reasonable time.

But in any event, right now when our mortality data is based on a different classification than our morbidity data, we can't even do that continuum of outcomes in an effective way, which somehow doesn't seem to resonate with people as an argument but it certainly does with me coming from the National Center for Health Statistics, because before we die, we're usually sick.

But in any event -- but not always -- the other thing that was striking was that John said, "I've got lab data and I've got functional status data. I don't have vital signs unless we're, you know, actually for those people that we're actively monitoring."

That in a way kind of, I thought, complemented what we had heard about the difficulty of really trying to build measures around vital signs.

So when I probed him about the functional status data, he would seem to be collecting a lot of the right things. But then when I said, is it standardized, are you coding it, or is this all free text? He said, oh, yeah, we have a way that we're coding it, but it's completely, you know, our own thing that we've come up with, which is going to happen more and more, which is why I think we not only have to advance the collection of functional status data, because it's so critical to patients and as an outcomes measure short of death, but also we need to advance a standardized way to gather this information. Everybody's going to have it, but we're not going to be able to use it in performance measurement because everyone will have captured it differently.

MR. HUNGATE: Okay. Stan, did you have a quick one? We're going to move along.

MR. ETTINGER: I just wanted to mention maybe one of the other issues is what is performance? We're paying for it, but what is it? There are various measures of performance.

For example, maybe it was discussed before, but the so-called "certification program" in CMS, 266 as you pay, because I'm going to need certain standards. You have outcome measures, you have mortality measures, but how do they relate to each other and exactly what constitutes good performance?

I think in general, when we all know something is better, we pay more for it, or don't pay for it.

And actually one of my colleagues once brought this up. He was looking at the certification data, and he says to me, you know -- he was sort of surprised that it was very difficult to draw conclusions about how various facilities compare from the way the statements are written. There's a lot of regulatory language, but it doesn't really tell you what's wrong in a facility or what's right in a facility.

MR. HUNGATE: Okay. Very good.

Thank you, Carol. That looks like that'll challenge our minds for a little while this afternoon.

The next phase is to broaden our horizons once again. As I've looked at when NCVHS is effective in its work, the most effective work comes when the activities here at the Committee are aligned well with work going on within the Department someplace that makes what we do synergistic with what they're trying to get done and makes an amplification occur.

So that's what we'd like to get developed in this next section so that we understand better where we fit and what others are doing and how that goes.

So with that, Carolyn, it's your turn. Feel free to comment on all of this and --

Agenda Item: Perspectives on Prior Day's Subject Matter -- Interactive Discussion – Carolyn Clancy

MS. CLANCY: Yes. Well, first of all, thank you for inviting me to be here. Boy, you've surfaced a whole lot of really, really important issues.

I think the good news right now is that there's a lot of activity and certainly a strong interest in quality measurement at the moment.

And there's even a little bit of alignment, which seems almost breathtaking. You know the hospitals and a variety of folks have come together and agreed on some measures. A similar sort of activity is taking place in ambulatory care and so forth.

But the landscape is incredibly cluttered, to put it mildly, and there are days when it's a little bit difficult to see where this is all headed.

The question that came up for me yesterday was: Well, gee, should we just continue to add more and more of these disease measures or condition measures or population measures, a specific focus, or should we be moving more in the direction of Baldrige?

I don't know the answer to that question. I mean, my answer was something along the lines of: Baldrige to me seems to apply more to fairly tightly organized systems, which seems to me to be a relatively small subset of health care delivery.

But that begs a larger question about why we couldn't be thinking about how you apply Baldrige-like criteria to more than just single actors, because my guess is, in most communities, even if there are no financial or formal arrangements between actors and sectors, you could probably, with some kind of data analysis -- or maybe John Halamka could help out here, or one of the other community RHIOs -- draw some pretty clear lines in terms of pathways that people traverse for most of health care and so forth.

Just thinking about it, I'm really, really thrilled that you spent a lot of time on the intersection between health information technology and quality. I think there are a lot of myths out there right now, as in: we dump computers in everybody's offices and suddenly we get to the promised land. I think John clearly, from the notes that Anna took and shared with me, and from your discussions, kind of made it clear that we have a long way to go.

Very interestingly, Beth McGlynn recently tried to take the study they had published in the New England Journal a couple of years ago and apply it at the VA. VA's got an electronic record; they've achieved some breathtaking performance goals and so forth.

In order for her to apply her metrics, they had to print everything out. So, you know, the mental image here, the visual image, is carts with tons of paper being printed out, because their system is not configured to do that.

MS. McCALL: Right. Yes. One other thing on that is that the other data that Beth has been working with is Humana's. And I have responsibility for that.

So what's fascinating is a comparison that we've looked at between what she was able to do at the VA and what she was able to do with administrative data. And right now she's not able to do a lot more with the VA than she's able to do with administrative data.

So I understand, you know, it's like the red-headed stepchild. But there is something that it's going to be used for, you know, and so I think we need to pay very careful attention to what's being done, note what the limitations are and things like that.

So I think your comment about what we're able to do with the VA's data, it's not going to be magic; there's work to be done.

MS. CLANCY: I would agree.

I guess two other points I'd make on that front. From talking to a lot of physicians in organizations, I think one little secret that no one wants to talk about very much is that safety and quality are not primary drivers of a reason to invest in information technology. I mean, everyone says, oh, yeah, yeah, quality and safety, we're there now. But there's a whole array of reasons, some of which relate to up-coding and getting better value from one perspective, I guess, being able to get better reimbursement and so on and so forth, which I think trump everything about quality and safety because right now we don't have a market that really demands it.

So I think, Carol, I think it was your point about staying connected to pay for performance.

MS. McCALL: Yes.

MS. CLANCY: Now, that actually leads to a very interesting conundrum. You can do all kinds of things with payment systems. They can't all be efficient.

And the level of specificity that is being developed now in quality metrics and our current payment systems -- I'm trying not to look too hard at Steve --

[Laughter.]

MS. CLANCY: -- I mean, trying to imagine administering that in a fee for service way becomes a little bit daunting.

So, where does that lead us? I think that leads us to an opportunity and a great need to learn much more about predictive power.

You know, every year in open season I get phone calls about "I need a new doctor" or, you know, "My doctor isn't taking people," or "We moved," or, you know, "I've had some family change," and so on and so forth. What one or two things could I look for in a doctor? Should I ask if they have an electronic health record? Should I ask if they do this or that?

And the short answer to that question is: I have no idea. I mean, usually I say, "Oh, I'll call my colleagues at GW," and that kind of gets me off the hook for the moment. Or if I know someone that I trust, based not on any of the kind of empirical information that we're talking about in terms of quality measurement.

And I think we need to learn a great deal more about whether, if a system, practice, physician, or other health professional does well in areas A, B and C, that is a reasonable expectation that they're going to do well in other areas. Or to put it more in policy terms, MedPAC has recently recommended that CMS increase reimbursement for "quality enhancing activities."

Now, they came out with some examples. How to do that without breaking the bank, I have absolutely no idea.

In theory, a practice that's set up to be able to assess their patients as a group in some fashion, you would expect is learning something and you would also expect that they're doing something with what they learned.

But we don't have a very robust evidence base to demonstrate that at all, which I think is a big challenge.
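[Dr. Clancy's predictive-power question -- does doing well on measured areas A, B and C predict doing well elsewhere? -- is, empirically, a correlation question across providers. A minimal sketch follows; the providers and scores are fabricated for illustration:]

# Does a composite of measures A-C rank-correlate with a held-out
# measure D across providers? All names and scores are illustrative.
from statistics import mean

def spearman(xs, ys):
    """Spearman rank correlation (assumes no tied values, for brevity)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Per-provider scores on measured areas A-C and a held-out area D.
scores = {"P1": ([0.91, 0.80, 0.85], 0.88),
          "P2": ([0.70, 0.65, 0.72], 0.69),
          "P3": ([0.83, 0.78, 0.74], 0.81)}
composite = [mean(abc) for abc, _ in scores.values()]
held_out = [d for _, d in scores.values()]
print(spearman(composite, held_out))  # 1.0 on this toy data

[A strong correlation on real data would support inferring overall quality from a few measures; a weak one would argue against it. The evidence base Dr. Clancy asks for is this kind of study at scale.]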

A couple of other areas where I think we need clarity, and I think I want to underscore Dan's comment about the great opportunity for this Committee as an educator, because you haven't lived till you've actually talked to reporters or gone on talk shows. And we recently released some state reports on quality this year from our quality report which was quite educational for me.

What do we exactly mean by "benchmarking" and what are people doing with the information? Everybody's benchmarking -- NCHS has data for benchmarking, we have benchmarking. I don't even think we look at it very much.

I mean, we look at our data and we worry about that. What we don't look at is how people are using that, what does it mean to them. Is it just a reassurance value that they're not worse than the bottom, or what? I absolutely have no idea.

There have been some very clever tools developed, one being the achievable benchmarks of care, which I think the quality improvement organizations use a lot, and which give you some sense, in a local or a regional area, of what's best in class and where you are compared with that best in class locally -- so you're not telling people "you're down here; you ought to be up here -- good luck."

You know, you're essentially saying to people: Your performance is here and there are people in your area with practices that look like yours that are up here; that ought to be achievable. But I don't think most people grasp that or how to implement it in a very clear fashion at all.
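[The "achievable benchmarks of care" tool Dr. Clancy mentions has a published algorithm: the benchmark is the aggregate performance of the top-ranked providers who together care for at least 10 percent of all patients, with a small-denominator adjustment in the ranking. A sketch of that idea as I understand it, with illustrative data:]

def abc_benchmark(providers, coverage=0.10):
    """providers: list of (numerator, denominator) pairs per provider,
    e.g. (patients treated appropriately, eligible patients)."""
    total = sum(n for _, n in providers)
    # Rank by the adjusted performance fraction (x+1)/(n+2) so that
    # tiny denominators cannot dominate the top of the list.
    ranked = sorted(providers, key=lambda p: (p[0] + 1) / (p[1] + 2),
                    reverse=True)
    num = den = 0
    for x, n in ranked:
        num, den = num + x, den + n
        if den >= coverage * total:  # best performers cover >= 10%
            break
    return num / den

panel = [(48, 50), (9, 10), (300, 500), (120, 400), (20, 40)]
print(abc_benchmark(panel))  # ~0.64: demonstrated, locally achievable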

In safety in particular there's been a lot of discussion about learning from other industries.

And the one experience that I'm kind of struck that we're not learning from is education. The other day, a front page story in The Times claimed that because of systematic reports on school systems and so forth, now some communities are really being driven to do something about schools that enroll a disproportionate number of minorities.

Now, I don't actually know if this is true, but if it's true, it's incredibly important.

The same issues around the psychology of measurement I think are quite relevant to those providing health care and those teaching, and if you haven't spoken to a teacher lately, I can tell you they hate it as much as doctors. Somehow, they don't get to make as much noise, but they don't like it at all.

And the other area that I have no idea how to assess, but I know it's incredibly important to doctors, is: How do we incorporate when docs, or health care systems for that matter, do exceptionally well with incredibly unusual patients? Because at the end of the day, I think a lot of us want to know that.

I mean, Dan Streyer, who just passed away a couple of weeks ago and who had directed our Quality Center, was initially told by a physician at Hopkins that he was inoperable. It was probably a very rational decision. He went to a surgeon at Duke, and when I asked someone at Duke about it, he said, "This guy's great. You would never, ever want him working for you if you were running that health care system, but if you had an exceptionally unusual disease, you would want him to operate on you." And I thought, whew, good to hear it; this is just who he needs.

But doctors feel like they don't get credit when they take care of very difficult patients or solve problems that were not anticipated and so forth. I think that links back to the concept of being able to predict performance or predict how a system or practitioner will do when faced with exceptional challenges.

And we're not looking at any of that right now. And that's understandable.

MS. McCALL: The other thing that that brings to mind, Carolyn, is that if we don't allow for that, how can we honor innovation?

MS. CLANCY: Right.

MS. McCALL: Okay? And that our measurement systems could literally drive it out of the systems -- okay, people that want to break with whatever is, you know, accepted practice.

DR. CARR: And we have examples, at least in cardiac surgery, of people modifying the population on whom they operate based on publicly released outcomes data.

DR. STEINWACHS: Everyone has to give their spin on this, I can tell.

I remember once being an ROTC candidate and having a sergeant who I decided if I had to go into battle, I'd want to be with him, but I wouldn't want to live next door to him because he had to always pack a gun with him wherever he went.

[Laughter.]

And so it sort of, you know, made me think about, when you saying this, you know, sort of you have a military that's there in this country to take on extraordinary kinds of situations, as different from, you know, say, the police force and other things. And it just made me think a little bit about: Do you deal with sort of the average and extraordinary all the time by the same people in the same system?

And you sort of said it when you said "this is not the person you'd want to have as your employee" sort of statement; at the same time, it is the person, if you are a patient, in an extraordinary category.

Sort of interesting to think about how do you manage those kind of things within --

MS. CLANCY: No, I agree. And, I mean, I'm all too familiar with how docs often use this as a shield: "Well, you know, I have sicker patients." And we say, "No, no, no, we can risk adjust for that," and so on and so forth.

But, you know, the truth is that any system that promotes quality has to allow that flexibility.

But that brings us back to, I think, a very important political dilemma, which is we talk about assessing and improving quality of care from several perspectives all at once all the time. One is learning, another is a sort of business, market-oriented approach. A third is a sort of regulatory approach. And they're incredibly different.

And the implications for how we think about quality measurement I think are very important there. This point was really underscored for me recently. Pennsylvania has a new Patient Safety Authority, I think is the name, and they've got a place where you can anonymously report near misses.

So they came down to meet with us, and they have big needs for benchmarking because, you know, all this data are flowing in and they don't exactly know is this too much, too little, and so forth, and we were strategizing about that, and it's going to be very, very important to the agency if this patient safety legislation passes this year. It would be good.

But what really struck me was how clearly and how much energy they're putting into distinguishing themselves from the Health Department, which is perceived as regulatory. They're learning -- those guys regulate.

And I thought, well, you know, as you're trying to launch something, this probably makes a lot of sense. On the other hand, this can't be all that sustainable, I don't think -- but I don't know about that, either.

A couple of areas for opportunities I think you might want to consider.

One is there's an awful lot of large, organized systems, or it feels like it, anyway, that are now installing the EPIC system. I've been impressed visiting Geisinger that they have actually a couple of former Hopkins folks who work quite closely with the folks from EPIC to try to customize it for their needs, their population, and also for research.

So I think ultimately we have to move beyond an information infrastructure and talk about a knowledge infrastructure. We assume that they're equivalents, and I don't think they're equivalents at all.

Ultimately, with a set of metrics, you'd like to think that a system that sees a fairly select population is focusing on the issues of greatest interest and importance to that population. We're not anywhere near there. And at the same time, those same systems have the capacity, or at least in theory have the capacity, to be a very important source of new knowledge.

Now, the good news is that the HMO Research Network actually has one of the contracts under the NIH Roadmap for reengineering clinical research. So they're starting to focus on how much does an electronic health record overlap with an electronic research record and so forth. But I think there's a lot that can be learned right there.

I think there's a lot that we could learn outside the state of New York in terms of public reporting, although I have not seen very much in terms of what happens in cardiac reporting or anything else in Pennsylvania, Florida or a few other states that have gotten into that. That may just be because AHRQ didn't have a bigger budget.

But I think it would be really important to understand, both in a rigorous empirical sense and at the level of policy-makers -- do they think this is worth it? -- because the question that Dan posed, what's the ROI for more data, is a political question as well as an empirical one, and if we forget that, we're not going to make a whole lot of progress.

And then the last area I'll just mention, and Steve may want to add to this later.

You know, recently, there's been an alliance put together around ambulatory care measurement and there's a starter set of 26 measures, and I see this as very much getting a toehold, a little bit of traction, and it's been a very, very interesting process.

But they're mostly primary care-like measures, in part because most of the measures we have derive from the ability to compare at some sort of organizational level, and they don't necessarily get to any of the important stuff because the sample sizes are too small in some of the plans that are being accredited and so forth.

But the specialists are now chomping at the bit to be part of this. How do they get to be part of the alliance? How can they develop measures? And so forth. There's obviously a huge spectrum of capacity and capability, but again, underscoring the educational possibilities here, I think that that might be something you'd want to keep your eye on.

MR. HUNGATE: Is there a reliable set of definitions of what makes a good measure? You know, these measures keep getting developed -- is it just that there is agreement, or, you know, Brent's spectrum of accountability to learning? Is there some way of ranking things on the accountability side or the learning side? Just a question of information.

MS. CLANCY: There are certainly specifications around what makes a good measure.

For example, we have a quality measure clearinghouse and a technical expert panel which has been very, very helpful in developing criteria for that, and some have suggested that the Quality Forum should use those criteria for all those measures.

I think we would be foolish not to acknowledge that for most of the measures that are currently in use, feasibility probably outweighs everything -- is there a data source, can we get it? And so on and so forth.

That's practical. I understand all that.

MR. HUNGATE: Right.

MS. CLANCY: But I think that has very much suppressed the level of thinking that Brent is talking about. But I think what he is describing is incredibly important.

MR. HUNGATE: Thank you.

DR. CARR: Yes, I was impressed also with what he said, because we have criteria for measures -- what's available, you know, what's big enough, what's important -- but not for what we do with them and how we understand them. His example of the high C-section rate in the community of midwives was a very good one, because I think we get to that level: we have all this data, and then we have an outlier, and we don't have a robust process of understanding, or drilling down, or sharing what we learned and modifying.

Don Detmer brought up the NNE group and how they take their data and look at it and what seemed like an important thing sometimes gets thrown out, what seemed like an unimportant thing suddenly becomes a major metric. And that is the knowledge, the education and the ROI, because just having these dashboards with a little bit up or a little bit down, or even, as Carol said yesterday, you may be inside two standard deviations and still have a problem.

The other thing that came up, too, yesterday was you have the data elements, and what you package for a regulator versus a health care quality department versus HealthGrades.com are very different, and the audiences are different, the questions are different.

And, you know, we gave the example -- a true example -- of the difference between a 95 percent performance and a 100 percent performance: is it three stars versus two stars? So what is a patient thinking -- oh, I don't want the second best -- when in fact 95 percent probably is equivalent to 100 percent, you know, when you actually drill down.

So I think we haven't articulated what's the question this person is asking, and now are we answering it with information relevant to their question?
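(As an illustration of the star-rating point above: a minimal sketch, with entirely hypothetical cutoffs -- no actual report card's scheme is implied.)

```python
# Toy star-rating binning: cutoffs invented for illustration only.
def stars(rate: float) -> int:
    """Map a compliance rate to a star grade using invented cutoffs."""
    if rate >= 0.98:
        return 3
    if rate >= 0.90:
        return 2
    return 1

for hospital, rate in [("A", 1.00), ("B", 0.95)]:
    print(f"Hospital {hospital}: {rate:.0%} -> {stars(rate)} stars")
# Hospital A: 100% -> 3 stars
# Hospital B: 95% -> 2 stars, though the gap may be a handful of charts
```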

And the other thing that we heard, too, yesterday was about all the focus on looking at doctors' performance without acknowledging the fact that this system is a key driver in how those outcomes look.

And I'm intrigued by your idea, you know, the Baldrige idea. I think one comes away with a sense that quality has been truly vetted if you've gone through the Baldrige process. And I don't think one comes away with that if there's a high score, or a little bit higher score, on this or that. It's sometimes hard to trace back -- you know, are we good? Why are we good? How do we know we're good? What good outcome accrued because we had a higher score?

MS. CLANCY: Well, you know, there's a paper that will be published soon in the American Journal of Medical Quality which compares how hospitals did on JCAHO accreditation with the AHRQ patient safety indicators. You may not want to bring this up with Jerod; it's still sensitive with him. And by the way, I need to say we don't actually endorse any of the HealthGrades analyses using these indicators, even though they're very quick to --

DR. CARR: Right.

MS. CLANCY: -- always say these are ours.

[Laughter.]

MS. CLANCY: And there's very little correlation.

Now, I don't think that's a surprise. I'm not actually sure that finding is necessarily publishable, although some of the authors are colleagues, or former colleagues, of ours at AHRQ.

But I have to say, in thinking about the counter argument, I don't have a lot to rely on, right? In theory, an accreditor goes in and what they're looking for is predictive capacity, what do I see now that will give me some sense of how care will continue to be delivered?

Now, clearly I think the fact that JCAHO is moving to unannounced inspections is a really very, very important move. My brother asked my sister-in-law once, "So what does it mean when the Joint Commission is coming for inspection?" She paused and said, "They have to put the hand lotion away."

[Laughter.]

MS. CLANCY: Which is kind of, I think, how most people have thought about it.

MR. HUNGATE: Yes.

MS. CLANCY: But with unannounced inspections, I think that really ups the ante. But that's totally different than the kind of rear view mirror approach that you take with performance measurement which assumes that there's a correlation between what's going to happen next week, which is what we're really interested in, or next year, and what happened previously.

DR. STEINWACHS: Just to pick up on part of this. You know, a lot of measurements fit very nicely into a sort of production model that says "health care's an industry" -- and for someone trained in engineering, that's very nice -- but production processes also have predictability, which is part of what you're talking about here. That sometimes strikes me as part of the disconnect between those of us who are interested in building both the quality monitoring and, hopefully, the learning and feedback, and those who approach it from a clinical point of view. As we measure those intermediate and longer-term things, I don't think they come very close to how clinicians generally ask themselves, "Well, how do I feel about the quality of what I'm doing?"

And I just wondered if you had any comments: Are there ways in which we could be tapping into aspects that clinicians would feel come closer -- "Yes, this is the way I'm looking at what I'm doing" -- so that the feedback may mean something more personal to me than these things that are part of that production process, which is also important.

MS. CLANCY: I think it's a very important area to explore. Having said that, there's sort of a chicken-and-egg feature here. I think most clinicians do their best for each patient one at a time and they move on, and the vast majority have no capacity whatsoever to look at how they've done for the last 10 patients. There's not much of a cultural tradition of peer review.

I used to review for the QIO in Virginia before I came to Washington and it's kind of breathtaking in lots of ways. I thought this was an area where I could do research. I actually didn't understand a lot of issues around protections for providers in terms of making the data usable, but --

[Laughter.]

MS. CLANCY: -- I thought that I was always waffling, and it turned out I was like the toughest reviewer they had, which told me they were doing nothing at the time -- this is before Steve actually totally revamped the program.

[Laughter.]

MS. CLANCY: I mean, late '80s -- the early days of the PROs, before they became QIOs. That mindset, I think, is hugely important: the shift from asking not just "how did I do on this patient?" to "I need to take a new view of the population of patients I see." Because absent that, most people think that they're doing much, much better than they are -- we've seen that in prevention for 20-odd years. You know, docs think it's important that they get the right answers on the test. If you ask them which tests they should be getting for which patients, and whether they think they're doing it -- yes. And then you look at what they're actually doing and there's a huge gap.

Arnie Milstein calls this an optimism bias. I don't have a better term for it.

MS. CLANCY: But, you know, we were talking about this last night as well, and it sort of evolved from our discussion about whether you would collect vital signs. So if you collect a blood pressure and it's 139/89, you pass, and if it's 141/90, you fail. And I think that kind of uni-dimensional assessment is distasteful to physicians.

So as we're thinking about the electronic health record, you might be able to say: "Is your patient normotensive?" And the physician might have -- you know, using John's model -- their home blood pressure measurements and the multiple ones in the office and might be able to say, "Patient is normotensive." Or they might be able to say, "Patient is borderline hypertensive and I'm watching," you know, or "Patient is hypertensive; I haven't done anything."

But looking back then, it affords an opportunity to look at the gray zone. So just like NNE where half the people gave aspirin on the day of surgery and half the people didn't, and they could say, "I was doing my best," but someone could go back and say, "Okay, we've aggregated everybody's best and here's what we've learned: You should give the aspirin."

And, you know, I think just taking a number eclipses the whole judgment factor -- here's my patient who is at risk for all these side-effects of anti-hypertensives, and here's what I'm doing, so make me believe that these borderline patients belong. Not to perseverate, but I think the piece is engaging and aligning the physician around issues they are invested in. With the borderline patient, you know, they can retake the pressure and make it normal -- okay, fine, I passed -- but if we really engage around what you do with that borderline patient, and get an aggregate experience, it would feel more value-added to the physician and more engaging.
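(A minimal sketch of the contrast being drawn here between a single-reading pass/fail threshold and a categorical, judgment-bearing EHR field; the field names and categories are hypothetical.)

```python
# Uni-dimensional pass/fail on one blood pressure reading.
def passes_threshold(systolic: int, diastolic: int) -> bool:
    """139/89 passes; 141/90 fails, though the two are clinically near-identical."""
    return systolic < 140 and diastolic < 90

# A categorical assessment field lets the clinician record the gray zone.
ASSESSMENTS = (
    "normotensive",
    "borderline hypertensive, watching",
    "hypertensive, untreated",
)

print(passes_threshold(139, 89))  # True
print(passes_threshold(141, 90))  # False
```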

MS. McCALL: You had some fascinating comments about different models -- a learning model, kind of a market- or business-oriented model, and then a regulatory model -- and that each of those paradigms will bring with it, whether you know it or not, a whole different set of behaviors.

So can you talk a little bit more about that, some of those tradeoffs, which ones you think kind of work well together, which ones you think don't mix well, and how you think that we could approach it?

MS. CLANCY: I guess I'm having the brief thought that it's easier to introduce a topic than to --

[Laughter.]

MS. CLANCY: -- digress quite knowledgeably.

I think I was surfacing it as an issue that needs more exploration. My greatest fear with the quasi-regulatory model -- and by that I would mean both stuff that's required or publicly reported and stuff that is in some way linked to payment -- is that at the end of the day we'll get better coding.

And, you know, we actually have an industry that's prepared to do that -- and we'll miss a huge opportunity to bring along practitioners who are still living in a one-at-a-time world.

MS. McCALL: Yes.

DR. CARR: And that, I think, would be a big miss.

Now, having said that, to the best of my knowledge, one huge weak link of electronic health records currently is the registry function. I mean, in theory you ought to be able to, as Beth McGlynn often says, hit F7 and, you know, pull up the registry of folks with diabetes or whatever.

[Laughter.]

MS. CLANCY: And I think there's a huge opportunity to engage docs in focusing on what they consider the most important, which doesn't overlap very much with what would be publicly reported. You know, in public reporting right at the moment we tend to focus condition by condition. I think a lot of docs would be really engaged by saying, "Okay, where are we really screwing up for people with diabetes?" You know? It's the intersection of diabetes and hypertension control and a couple of other things.

That, I think, would be far more engaging because at the end of the day I think you can sell practitioners and persuade them that there's a lot of routinized stuff that's not happening and life would be much, much easier if we just had a system for it rather than always running around after a piece of paper or trying to remember whose job it is to do what.

But the judgment part and the part of how do we make sure that the people most in need of our services are routinely getting what they need, I think you actually need some of the best minds that you can get.

MS. McCALL: I guess when you laid this out, the thought that came to mind is: Are there couplings or pairings of these things that work well together?

And the reason I ask -- you made a comment that there's not a culture of peer review, which really struck me. And I thought, okay, suppose we're going to go down the learning paradigm, or maybe the learning paradigm coupled with a market model, and then somehow watch-dogged by the regulatory model just kind of out on the side.

The learning model requires a different culture. It is a shift. So if there's not one there today, do you think that it's going to be easy or hard to bring in, to usher into being? Do you think that it's not there because there's nothing to look at, and so they've learned not to ask for anything? Or do you think that even if we had F7 -- hit a button, let me show you -- they'd kind of go, "no, no, no, no, no"?

MS. CLANCY: Well, in my experience as an educator, I once tried to persuade some residents to peer review for each other.

Now, how this worked was, we provided care in a clinic where about two-thirds of the patients lived in the city of Richmond and the other third came from outlying areas where their capacity to show up for an appointment was critically dependent on getting a ride. So to say that there were gaps in continuity here would be an understatement.

So we always had to have one or two residents in any particular session being the, you know, urgent care doc for people who needed to be seen but they didn't have their appointment that day and all that kind of stuff.

I thought this provided a great opportunity for them to kind of give each other feedback. I clearly needed to know more about how to present this --

[Laughter.]

MS. CLANCY: -- with the primary care residents, you know, with whom we have a very close working relationship and so forth. And let me just say the consistent universal response was, "We're not going there."

And I thought, wow, this is really incredible, because for a couple of research projects where I had looked at a pile of charts over, say, a month, looking at variations in test ordering and things like that, I came away with some fairly stunning conclusions. You know, one was that we needed to educate the interns a lot better, and so on and so forth, and the other was that senior residents would ignore anything.

[Laughter.]

MS. CLANCY: (?) of two? Not a problem. Just keep moving.

But they absolutely would not go there.

Now, some of it is probably that that's not been part of their training, and I could imagine building it in at the undergraduate level, so that this is kind of a routine part of business. I think the potential points of alignment are there: slowly, I think, the CME enterprise is starting to realize that it's really important to link more directly with performance improvement, rather than just sort of stuffing people with knowledge and assuming that that translates into improved performance; and the medical specialty boards are now trying to do this through maintenance of certification.

And if you think about it, the idea that a physician is involved or engaged in some fashion in quality improvement, that that should count for other purposes, is a very powerful idea. The extent to which that can be exploited I think remains to be seen.

MS. McCALL: Right, yes. One thing that -- I don't know if it's something that we would have an opportunity to do as a group, but somebody to bring in to a meeting sometime is John Seely Brown, and talk about -- there's a book called The Social Life of Information, which is just an exquisite piece of work -- some of the paradigms of what it means, whether as communities of practice, to really make some of this information have a social life -- you know, kind of get out, throw a party a little bit, that --

[Laughter.]

DR. STEINWACHS: Does this go with the martinis, or not?

MS. CLANCY: It does -- always. But it really is about the CME enterprise; it's about undergraduate, you know, education; it's about, you know, some of the medical specialty boards -- to make it live, it needs to know where to go to live.

And so I think it's having a list of things -- and you said earlier, our opportunity is as an educator. Well, we may not educate, but we may go talk with people who are in that business, or could be in that business, to pull this through. That could be one way to leverage a learning approach and then maybe link it to a business- or market-driven approach.

MR. ETTINGER: I was just going to mention that actually when I was in New York, somebody had a very good educational intervention.

MS. McCALL: Well, you had better talk into the mike.

MR. ETTINGER: When I was in New York at one of the hospitals -- one of the other hospitals -- the lab director and the head of internal medicine had a very good intervention for the residents and interns ordering tests at all hours of the night. They changed the policy so that if a resident ordered anything that wasn't on the approved list, they had to call both of them at home, no matter what time of day, and ask permission to order it. Test ordering went way down.

DR. STEINWACHS: Let's hope it was good for the patients.

MR. ETTINGER: Usually it was.

DR. JENCKS: Let me just add one thing about the educational thing.

The enormous force which is coming into play in education is recertification, and it isn't at all clear what that's going to mean, but it has just changed the landscape of CME.

And it reveals very interesting things sometimes. For example, in 2006, the ABIM will require for recertification that its diplomates present evidence that they are using data from their practice to improve the care that they deliver.

MS. CLANCY: Yes, that's what I'm talking about -- they're calling it maintenance of certification. Now that's the direction they're moving in.

DR. JENCKS: Now, what's interesting here is that this is not a requirement for initial certification. And according to the people at ABIM -- and I can't think of anybody who'd be more likely to know -- the reason is that the teaching settings don't have the infrastructure to support it.

So if we want to understand why people are coming out of training with not quite the habits that we were hoping for --

DR. STEINWACHS: Let me come back to the peer review, because for a period of time I've tried to use that concept, at least in my own thinking, in a positive way. You know, we always characterize professions as using peer review as their fundamental mechanism for quality, but it reminds me that peer review generally is viewed as a process of sorting out: that which will be funded and that which won't, that which will be published and that which won't, that which will be punished and that which won't.

And so peer review, even though, you know, a lot of us like to think of it as positive, supportive learning, is really in many people's minds pretty punitive as a process. And so you made me think about it again. Is there another way to talk about this collegial evaluation and feedback that doesn't conjure up what peer review, I think, many times does within professions -- that it's the police force? It's our version of regulation, in a sense, or of the sorting out of the marketplace.

MS. CLANCY: Yes, I mean, when you said that, what came to mind was the vision of morbidity and mortality conferences, which I'm told at a lot of institutions are not being done anymore. Not only can you not share the information or discussion beyond the walls of your institution, but some are so paranoid about potentially discoverable issues that they simply don't have them anymore.

But I do remember that as being very confrontational. I mean, boy, did you learn. And sometimes we made great jokes about it -- but later, many years later.

[Laughter.]

MS. CLANCY: It strikes me that if this patient safety legislation passes that that's going to create a huge opportunity to kind of turn that around, because the irony is --

I think one of the reasons the peer reviewers stay on our study sections -- it's a boatload of work, and we certainly aren't paying people very much; we're not even paying them much for the day they spend there, much less all the hours they put into reviewing grants -- is that they do get a chance to learn, and it's a great time to hear new ideas and new ways of doing things.

I think there's a little bit of this going on in the U.K. and Ireland and possibly other countries. This is really in some way the Royal College's efforts to reduce the isolation of people practicing alone in pretty rural areas. It'll be interesting to figure out if they've learned how to move beyond that.

MR. HUNGATE: Very good. Other questions, comment?

I'd like to welcome Dr. Jerod Loeb --

DR. LOEB: Thank you, Bob.

MR. HUNGATE: -- to our discussion. Glad to have you. I think we'll charge right in to your --

MS. McCALL: Actually, we were scheduled for a break. Do you want to do that now, before --

MR. HUNGATE: Well, that's what I was debating. Everybody want to take a break now?

SEVERAL PARTICIPANTS: Yes.

MR. HUNGATE: All right, let's take a break now. Come back at 10. Good, okay.

[Break from 9:48 a.m. to 10:00 a.m.]

MR. HUNGATE: The deliberative process continues.

MS. GREENBERG: Oh, great! Somebody already typed these up? Wow.

MR. HUNGATE: So, Carol has worked with Susan to get a printed copy of what's on the wall so that you now -- and Carol, why don't you go ahead and describe it.

MS. McCALL: So, first, thank you very much, Susan, for typing up what's on the flip charts.

You should have in front of you -- this is essentially a stapled two-page document. These are the notes as written from the flip charts.

Just a little note about process, and then we'll kind of get to Mr. Jerod.

What we'll be doing with this sheet is a couple of things. After we've had an opportunity -- we've heard from Carolyn, we'll hear from Jerod -- we will be adding to this sheet any observations that you would have added, you know, after hearing everybody yesterday.

And at that point, and you can add them directly to the sheet, we will be going through a process whereby we essentially, using some different techniques, are going to identify and pick those things that we believe are the highest priorities. It's a technique called a "nominal group technique."

You'll be given a certain number of votes for those things that you believe are the most valuable.

There are some things in here, obviously, that won't be picked. That's the essence of NGT. And yet some that are not picked will in fact be related to those that were -- they may be subsets -- and we'll identify those that are natural hangers-on from other ones.

At that point, it becomes the basis for the beginning of what will be a work plan. It won't be the work plan, but it will be the ideas from which a work plan flows. And time permitting, we may then say for those big, kind of headline areas, we will try to give them some operational definitions, and then to the extent that, again, time permitting, we can begin to lay out what we believe the steps are to turn that into a more specific work plan.

So, keep that in mind as we go through, and it just kind of lays out how we'll be spending essentially our afternoon. So I wanted you to know that you have those sheets and how we would be using them so that you can begin to continue to think about that as we continue to hear from folks this morning.
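(A minimal sketch of the nominal group technique tally described above: each member spreads a fixed number of votes over candidate items, and the top vote-getters become the priority list. Member names, items, and vote counts are invented.)

```python
from collections import Counter

VOTES_PER_MEMBER = 3
ballots = {
    "member1": ["measure alignment", "data quality", "EHR registries"],
    "member2": ["data quality", "public reporting", "measure alignment"],
    "member3": ["data quality", "measure alignment", "knowledge infrastructure"],
}
assert all(len(picks) == VOTES_PER_MEMBER for picks in ballots.values())

# Tally every vote, then keep the top items as the headline priorities.
tally = Counter(item for picks in ballots.values() for item in picks)
priorities = [item for item, _ in tally.most_common(2)]
print(priorities)  # ['data quality', 'measure alignment']
```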

MS. GREENBERG: Now, it's just the members who will be voting or --

MS. McCALL: Yes.

MS. GREENBERG: You don't want the speakers to -- this is just the members of the Workgroup?

MS. McCALL: Those people. It is decreed: our work plan.

MS. GREENBERG: Yes.

MR. HUNGATE: Yes, I think we'd like other people to be able to comment, but it will speed the voting process if we narrow the number of actual voters --

MS. McCALL: Yes.

MS. GREENBERG: The five of you.

MR. HUNGATE: -- and that'll be fine.

MS. GREENBERG: Anyone missing? Good.

MR. HUNGATE: Trying to think -- are there any other little issues that surface in that? Okay.

All right, moving forward, I want to welcome Dr. Loeb to our discussion now. I have to precede that with two comments.

In my days in Washington, DC, working on health policy, I came to get to know Dennis O'Leary reasonably well and ended up speaking to the Board of Trustees at one time, which helped me understand the difference between what the Joint Commission does and what I was trying to do and things like that.

I also remember the book The Measurement Mandate put out --

DR. LOEB: Oh, yes. It was our best seller --

MR. HUNGATE: Is it really, still?

DR. LOEB: -- of all the books ever published by the Joint Commission. Yes, indeed. We still sell that book.

MR. HUNGATE: And then there was a sequel to it or I don't remember whether it preceded or followed -- Accountability in Health I think was the other one.

DR. LOEB: It came later.

MR. HUNGATE: It came later.

And the other piece that I'd like to say is that in reading all the testimony before the Quality Workgroup earlier, I was always impressed by how measurement appeared in your testimony before the Committee, so welcome, and we look forward to hearing your observations.

Agenda Item: Perspectives on Prior Day's Subject Matter -- Interactive Discussion – Dr. Jerod Loeb

DR. LOEB: Measurement is my life, apparently.

Let me begin with two apologies. Apology number one is my voice. I've been in about 25 different time zones in the last seven days. I was in Italy, I was in Chicago, I was on the West Coast, I was in Denver yesterday, so I caught a cold, too. I'm at the back end of it so I'm not contagious. Still, my voice does have a bit of a strange sound, at least to my ear.

And secondly, in talking with Anna and Bob, I've done something I traditionally don't do for these remarks today, and that is I have my computer with me but I have no slides. I'm doing this really as a sort of a set of ruminations about performance measurement with the Joint Commission as a vantage point. It's almost like, what's on our radar screen? And as I walked in and looked at what's on your flip chart, it looks like there's a fair amount of concordance in terms of what's on our radar screen and what's on yours.

So with that as a preamble, I've got about, oh, 10 or so items, 10 to 12 items, that I'd like to at least tempt you with, and then hopefully engage in some discussion as to what's on the radar screen. And, Steve, please jump in with this as well -- I know you will.

[Laughter.]

DR. LOEB: I guess number one --

MR. HUNGATE: Everybody does know you, don't they, Steve?

DR. LOEB: No, we know Steve very well.

[Laughter.]

DR. LOEB: He's not saying anything!

The issue of who sets measurement priorities is an enormously complicated issue, and how those priorities are set is an enormously complicated issue.

At this point in time, as best I can tell, there are at least about four or five, or perhaps more, players from a variety of vantage points, all of whom believe that they're in a position to set priorities.

The IOM has set measurement priorities.

The National Quality Forum has set measurement priorities.

The Joint Commission has set measurement priorities.

CMS has set measurement priorities.

Leapfrog has set them.

IHI in the context of its 100,000 Lives campaign, at least indirectly, has set measurement priorities.

And the payers are setting measurement priorities.

And the question is: How do we get all of those various priorities to align? And even if we can do so, how do we get congruence in terms of the metrics?

Now, among the things in my now 11 years at the Joint Commission in the measurement world that I'm most proud of -- and I have to walk arm in arm with Steve on this -- is the ability of the Joint Commission to align its measures with CMS where we have common measures, and I think that's in large part because Steve has driven it at CMS and I've to a great extent driven it at the Joint Commission, as has Dennis.

And from the field's perspective, the ability to collect the data once and meet multiple mandates -- regulatory on Steve's side, and now payment as well with the pay-for-performance piece, and accreditation on our side -- has been enormously helpful. And as I go out and about and talk about this, that is perhaps the singular area, within the context of what the Joint Commission has done in the measurement world in the last decade, for which we are getting enormously thanked.

So I think that issue is a very important issue.

Right now, the measurement priorities are often set by who yells loudest, by who lays money on the table -- and this is the NQF process. Because there's no de minimis funding stream for the National Quality Forum, NQF has to go where the money is.

And the question, of course, arises as to whether that's an appropriate way to set priorities. It's the only way the game is played at this point, but the question of its appropriateness is a good one.

And in the same context, as measurement moves outside the hospital arena, we're beginning to see analogs now of what has been formed in the private sector on the hospital side in the name of the Hospital Quality Alliance, organized originally by the American Hospital Association, the Federation of American Hospitals, and the Council of Teaching Hospitals of the AAMC, which is now posting data on the CMS website on Hospital Compare.

And we're seeing an analog of that on the ambulatory side called the Ambulatory Quality Alliance.

So now we have an HQA and an AQA. Now, that all sounds great -- as long as the measures are complementary; better yet, as long as the measures are in fact identical, where you're dealing with the same patient moving into and out of different settings of care. The problem is, at least so far, at least on the ambulatory side -- this is a fledgling effort; it's a little bit more substantiated on the hospital side -- I don't see a lot of congruence there.

And that worries me, because in the measurement world, clearly the devil is in the details. And unless we can get congruence -- identity, if you will -- down at the level of individual data elements and inclusionary characteristics and exclusionary characteristics and risk adjustment algorithms and so on, it drives folks crazy.

And particularly as we begin to look at the continuum of care, where I hope measurement will go ultimately as opposed to being in the various silos that exist, we have got to create identity. So I think that is a very key driver, from our perspective.

Next issue that really troubles me, and I spent a lot of sleepless nights worrying about this one -- it's the issue of maintaining measures and reconciling measures. As the evidence changes, it's become clear that there is a need to shepherd the measures through a process.

The problem is there is no standardized process by which to do this. That's a big problem.

We faced that problem in the last year and a half with respect to the measures that looked at ACE inhibitor use in the acute MI set, and frankly in the heart failure set as well, when the issues around the use of ACE inhibitors versus ARBs -- angiotensin receptor blocking drugs -- became paramount. From the hospital perspective, we saw a diminution in performance as it relates to ACE inhibitor use, which we became concerned about, but the hospitals in turn explained to us: Well, your measures didn't keep up with the science. Our cardiologists are practicing more contemporary medicine; we're seeing a drop-off in ACE inhibitor use, we're seeing an increase in ARB use, and you don't have any way to capture that. And that is a big issue.

Now, we at the Joint Commission and Steve and his folks at CMS recognized it very early on and were engaged in discussion about how to fix this. The problem that we faced, however, was that measures traditionally follow guidelines, and the American Heart Association, the American College of Cardiology were reticent to change the guidelines on the basis of what they perceived to be a limited amount of clinical data, even though practice was already changing.

So we then, together -- CMS and the Joint Commission -- spoke with the AHA, as in Heart Association, and the ACC about the need to accelerate the guideline process, because we didn't want to get out ahead of the guidelines. And that fell on deaf ears initially; it was only when the National Quality Forum, thanks to funding from Carolyn, got into the game and said "you guys have to get your act together" that ultimately we were able to accelerate the process of guideline change at the AHA and ACC.

And all of this resulted in a summit meeting here in DC I guess about 10 or so months ago now where CMS and the Joint Commission had actually pre-arranged an agreement by which we would change the measures so as to accommodate the changing clinical practice.

Now, that's one example of a situation in which this lag between guidelines and metrics is a problem. The reconciliation of these measures, both within measure sets and across measure sets, is an enormous issue, and it's an issue that I can tell you for a fact is on the National Quality Forum's radar screen. But frankly -- and I've had many discussions with Ken at NQF about this -- no one really knows how to handle it at this point. It's a black box, and no one's sure where the responsibility lies.

The National Quality Forum has this exploding toolbox. It's getting bigger and bigger every day. But the measures in essence, while they're endorsed by NQF, are not NQF's measures. They're measures that belong to CMS, to Joint Commission, and a variety of other entities.

And then there's the question of: How does NQF leverage these changes that are necessary? And once a change has been made to a measure, the question of whether it is a material change, which would require NQF to go through its Consensus Development Process all over again, is a big problem. No one has a definition of how much a measure can change before it requires re-endorsement.

I mean, you know, if you look, for example, at the measures in the NQF toolbox now that pertain to acute MI, heart failure and pneumonia -- the areas that we have in common with CMS -- the measures that have been endorsed are no longer the measures that are in use, because those measures have been changed over time in response to changing science.

Now, NQF has made a decision at this point that those changes weren't material, which may or may not be correct. But ultimately there needs to be a process whereby the various silos of measures in the NQF toolbox are reconciled.

I'll give you one more example that I think really drives the point home.

I was on the initial steering committee for the Hospital Measure Set, which resulted in 39 measures. In those 39 measures are two sets of cardiac measures, the acute MI and the heart failure.

NQF empaneled another group, I guess about a year and a half later, to come up with a set of CABG measures -- coronary artery bypass graft surgery measures -- and clearly the patient population for the CABG measures is close or identical to that of the acute MI set, certainly the acute MI, perhaps not as much the heart failure set, and the data elements are probably going to be the same. It turns out they're defined differently.

So you sort of sit back and scratch your head and say: Well, what does the hospital do when it's got a patient that potentially fits into the denominator population for the acute MI set and into the denominator population for the CABG set, but in fact they're defined differently?
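(A minimal sketch of the problem just posed: one patient, two measure sets whose denominators are defined differently. The inclusion rules below are invented for illustration, not the actual measure specifications.)

```python
patient = {"principal_dx": "acute MI", "procedure": "CABG", "age": 67}

def in_ami_denominator(p: dict) -> bool:
    # Hypothetical AMI-set rule: principal diagnosis of acute MI.
    return p["principal_dx"] == "acute MI"

def in_cabg_denominator(p: dict) -> bool:
    # Hypothetical CABG-set rule: CABG procedure, with an invented age restriction.
    return p["procedure"] == "CABG" and 18 <= p["age"] < 65

# Same patient, two sets, two answers -- the inconsistency a common
# data dictionary is meant to eliminate.
print(in_ami_denominator(patient), in_cabg_denominator(patient))  # True False
```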

That's a big problem, an enormous problem, and it's a problem that at least in my humble opinion at this point, I'm not sure that NQF knows how to fix.

Now, we -- the Joint Commission -- have proposed to NQF that there needs to be a series of reconciliation activities, again both across and within measure sets. They have to be done to assuage the political concerns at one level, but you need the data geeks like myself and others at another level, who really need to be in the trenches and understand the data elements and the way all of these things are defined, so that we can create a common dictionary and ultimately allow the data to be collected once.

The problem -- and certainly this is on your list; I think it's on everybody's list -- is that in health care today we've got IT infrastructure over here, for better or for worse, and we've got performance measurement over here, and we haven't connected the Venn diagram. And it's only when --

MS. McCALL: Could you do this?

[Laughter.]

MS. McCALL: Okay. We spent a lot of time --

DR. LOEB: I bet you did; everybody does.

It's only when data collection becomes a byproduct of the care delivery process that we really will have performance measurement at a point that it can be used in all the contexts.

Next issue -- again, these are airplane ruminations, I might add -- is the issue of labels. It's funny; back home in Chicago I told Dennis that one of the things that has troubled me in the last couple of years -- and the analogy I made was to the world of pharmaceuticals -- is off-label uses.

As you probably know, in cancer, particularly in pediatric cancer, something like 80-plus percent of the drug use is off-label. And that's okay. Physicians are licensed to practice medicine; they can prescribe the drugs any which way they desire.

And I sort of made the metaphor that we've got measures being used in an off-label way, and Dennis corrected me -- he's just a great guy, I think as Bob alluded to. He said, "You're wrong. It's not that they're off-label. They're unlabeled. We have never labeled the measures."

And if you think about it, all of this began in terms of performance measurement in the context of internal quality improvement and longitudinal analyses over time. We didn't have to label the measures; that's all they were used for.

But we've evolved from internal longitudinal analysis and traditional QI measures into public accountability, transparency, public reporting, if you will; we're using the same measures. We've morphed again into pay for performance -- we're using the same measures.

And we've morphed yet again -- and I heard this as I walked in the room, as Carolyn and Steve were talking -- in terms of maintenance of certification, from the perspective of the boards of medicine.

And we had a meeting just in the last two weeks with the American Board of Medical Specialties and several of its constituent boards about the possibility of drilling down hospital data and tagging it at the level of individual physicians.

Now, granted, there's a whole host of problems, which we can talk about later on if you wish, that relate to trying to do that drill-down, not the least of which is what I call the tyranny of small numbers. But there's the notion of using these measures in all of these contexts without a label that says "yes, this measure should be used in this context." What's the level of evidence required for use of a measure in the context of internal QA or QI, versus pay for performance, versus literally one's livelihood, when you get into the perspective of maintenance of certification?

And is it just the level of evidence that really makes a difference? I mean, is it an RCT -- a randomized controlled trial -- that should be used to judge, you know, what this level of evidence is? I don't know the right answer to that question.

So that's another issue.

Next issue, the creation of composite measures from process measures.

Now, that is something that is really beginning to be used in the context of health care today. We use it in our quality reports that are out on the website.

We actually take process measures, rate-based measures, and we aggregate them using what Harlan Krumholz from Yale calls an "opportunity model." That is, there's no weighting. The organization had the opportunity to do the right thing x number of times, and in determining the opportunity to do the right thing, you might add together aspirin at arrival, aspirin at discharge, beta blocker at arrival, ACE inhibitor use, et cetera; and across all of those opportunities, how many times did you fulfill an opportunity and how many times did you not? And you come up with a grand total.
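(To make the arithmetic concrete: a minimal sketch of the opportunity-model aggregation as described, pooling unweighted opportunities across component measures. The counts are invented; the measure names follow the examples just mentioned.)

```python
# (measure, eligible opportunities, opportunities fulfilled)
components = [
    ("aspirin at arrival",      120, 112),
    ("aspirin at discharge",    115, 101),
    ("beta blocker at arrival", 118,  99),
    ("ACE inhibitor use",        40,  28),
]

# No weighting: every opportunity counts the same, whatever the measure.
total_opportunities = sum(n for _, n, _ in components)
total_fulfilled = sum(k for _, _, k in components)
composite = total_fulfilled / total_opportunities
print(f"{composite:.1%}")  # 86.5% -- one aggregate rate for the organization
```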

Now, "Is that correct?" is a key question. We're beginning to see it used certainly in the context of public reporting. CMS hasn't done it yet, but I know from discussions it's being talked about internally as a potential addition to Hospital Compare downstream.

But the question of how do you do this is a very key question, and embroiled in this whole notion is the age-old question of process versus outcome.

And I can tell you, that argument has been around since Donabedian, and it's still raging today. We had a meeting of our Advisory Council on Performance Measurement about two months ago where I presented preliminary data on our two-year experience collecting standardized core measures in acute MI, heart failure and pneumonia -- data, by the way, that I'm pleased to say will be published in the New England Journal probably late summer. In the context of presenting those data, one individual on the Council, who happens to be from a purchaser -- a large purchaser organization, not the government -- said, "Aha, finally you have given me the ammunition I need to suggest that process measures are no good," because in fact we are showing a virtually static acute MI in-hospital mortality rate across two years, at roughly 8-1/2 percent, yet the process measures are showing improvement.

And he concluded from this: mortality didn't change, hospitals got better in terms of processes of care; therefore what really matters -- the outcome -- didn't change, and we ought to be looking more at outcomes. Throw those process measures away -- they're useless.

My response to him, of course, was several-fold.

Firstly, a lot of these metrics look at processes of care provided in the hospital that have huge impacts upon morbidity and mortality post-discharge. He had forgotten that. I mean, that's a huge problem.

And secondly, all we're looking at is a crunched-down hospital stay for acute MI -- you know, basically a two-day length of stay -- and fortunately not a whole lot of people die in that two-day length of stay. They die 30 days out, which is why CMS, the Joint Commission, the National Quality Forum and others are looking to develop a 30-day post-discharge mortality measure, risk adjusted appropriately.

But also embroiled in this process-versus-outcome controversy, I guess I'll call it, is the notion of bundles.

Now, that really arose out of the IHI work, in large part but not exclusively. And what we're beginning to see happening now is a focus on performance on the entire bundle -- and the bundle is just a group of measures that represent a like area, perhaps ventilator-associated pneumonia or surgical site infection, et cetera. Ultimately, you've got to do it all. You can't do really well on one or two and really lousy on three and four and expect a good outcome.

And in fact, one of the notions that's emerging is that maybe in the creation of composite measures, or metrics, what really ought to be looked at is not just an opportunity model; you ought to be looking at whether you are fulfilling all of the aspects of care required in the context of a given bundle.

Now, this is a public reporting challenge, to say the least -- not to mention a measurement challenge.
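(By contrast with the opportunity model sketched earlier, a minimal sketch of all-or-none bundle scoring: a patient counts as a success only if every element of the bundle was delivered. The bundle elements and patient data are illustrative.)

```python
BUNDLE = ("head of bed elevated", "sedation vacation",
          "PUD prophylaxis", "DVT prophylaxis")

patients = [
    {"head of bed elevated": True, "sedation vacation": True,
     "PUD prophylaxis": True, "DVT prophylaxis": True},
    {"head of bed elevated": True, "sedation vacation": False,
     "PUD prophylaxis": True, "DVT prophylaxis": True},
]

# A patient passes only if all four elements were delivered.
rate = sum(all(p[e] for e in BUNDLE) for p in patients) / len(patients)
print(f"{rate:.0%}")  # 50% -- the second patient misses one element and fails
```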

Next area, standardization -- I guess I touched on this a bit, but I want to hit home with it one more time -- and it's standardization across care settings.

And this really, from the context of NCVHS, I think is an important issue because I'm afraid that we're going to get double counting taking place in patient populations if we don't address the notion of creating metrics that in fact are derived from the same data elements as the patient traverses care settings.

This is clearly going to be more important downstream perhaps than it is today. Certainly, it'll be more important when we get an electronic -- if we ever get an electronic health record that folks can agree upon.

How should the data be reported? Whose report card is best?

When I do talks on this issue, I actually show -- the first report card I was ever able to find on the Web, not health care, mind you, comes from education; it comes from Columbus, Ohio, in 1881. And it graded kids on geography and arithmetic and spelling and so on.

Then I show my second grade report card from the New York City school system -- I hide my accent as best I can, but I grew up in New York -- and the grading system in second grade for me was a grading system that looks at Works and Plays Well with Others, is Generally Cheerful and Friendly, He Practices Good Health Habits. I mean -- I show pictures -- this is really what's on my report card.

And the grades were N, Needs Improvement; U, Unsatisfactory; S, which stood for Satisfactory, and SO, which I got, by the way, in science -- I was a science geek, I guess, back then, too -- which stood for supposedly Superior and Outstanding. I had only one line of SOs and it was in science, and all the rest of mine were Ss.

And then I looked at my daughter, my older daughter who's now ready to graduate from high school, but when she was in second grade, and I compare her report card in a suburban school district, Chicago school district, with mine.

The grading scheme has changed entirely now. From 1881, to 1956 when I was in second grade, to my daughter's second grade report card, the grading scheme has changed. In fact, in elementary school, each particular skill set -- and one of those skill sets, I might add, was called Measurement, interestingly enough -- was graded in one of three ways: whether it was considered to be Developing, Progressing or Consistently Demonstrated.

I've been on the medical school faculty at Northwestern now for 26 years, believe it or not, and I sort of fancy myself an educator. I've been involved in grading people for years, and I sort of fancy myself as knowing a thing or two about how you do that.

And I got this set of grades for my daughter, and Rebecca's grade in one area in particular, Engages in the Writing Process, was checked off as Not Consistently Demonstrating but Progressing.

And my wife and I were horrified as parents.

[Laughter.]

DR. LOEB: Oh, my God! I mean, this is a big problem.

We went in and talked to the teacher -- honest to God, this is true -- and said, you know, do we need remedial help? What can we do to help this child?

The teacher looked at us like we were out of our minds, because she also gave us the Iowa test grades that Rebecca had gotten. The Iowa test grades things as Low, Medium and High; it has Observed, it has Expected, and it has rankings within the context of the school, the school district, the state and the nation.

Now, for a data person like me, that was Nirvana because I was able to see exactly --

[Laughter.]

DR. LOEB: -- what her performance looks like longitudinally over time in the various cohorts in which she ultimately participates.

Now, fast forward to health care: What do we have in health care?

We've got the CMS website, we've got the Joint Commission website, and you've got probably another several dozen websites that you can go to that are proprietary and black-box and contain purportedly information bearing upon health care quality, oftentimes calculated in a way that is entirely unknown to those who read it -- a star system here, a check-and-minus system there, no checks or minuses somewhere else. How do we report this stuff ultimately for those who need to use the data in a variety of contexts -- be it payment, be it actionability, you know, in the choice of provider or provider organization? There's no agreement on whose report card is best.

So reporting, again, is a huge issue.

Currency of data -- how current are the data? And without, you know, pointing any fingers, if you look at mortality data, for example, that you can obtain, it ain't very current nowadays. And in the throes of trying to do comparative analyses, for example, how do you match clinical data being collected -- to the extent you can do so -- in real time or close to real time with mortality data that's two and three years old? It's really difficult to do that.

Right now, within the context of where the Joint Commission and CMS are in terms of joint data collection activities, I think we're about as good as we can get at the moment, and that is, we're looking at data roughly four months after the close of the quarter.

Now, can we get better than that? If we have an IT infrastructure that everybody can agree upon, sure, I think we can. But that's about the best guess, or the best it is, nowadays, and I think that's an issue that we all ought to be thinking about in terms of the future, in terms of trying to get this as close to real time as we can.

And lastly, it's the issue of data quality.

I think it's fair to say that the quality of the data being collected today is quite variable, and as a guy that occupies a sort of a bully pulpit, I guess, from the Chicago suburbs and looks at data from a whole variety of entities out there, the quality of the data are widely variable. And that's very worrisome.

And the question of an audit process being put into place, which of course requires someone to pay for it, is a very significant challenge. Clearly, and I'm sure you're all aware of this, or I know Steve is aware of it, the GAO is doing a study right now that is looking particularly in the context of the pay for performance world at the quality of the data being collected in the context of the Hospital Quality Alliance.

And obviously when the government coffers are going to be used ultimately to reward or perhaps even punish those who are not performing well, the notion of data quality I think becomes paramount.

We still at the Joint Commission -- even with all the attention we pay, the audits that we do, both of the vendor community, which is interposed between our accredited facilities and ourselves, and right out at the accredited facilities -- see data quality that is quite variable. We see missing data, we see data that are abstracted incorrectly, although we do have an AHRQ grant -- thank you, Carolyn; she's not here at the moment -- in which we're looking quite specifically at correlation coefficients when we go out and re-abstract data.

And I guess I probably should give you the good news story here, too, along with the bad news story.

If you look at it on the aggregate, we're seeing pretty strong correlation coefficients, roughly at about .9. It varies by data element. Some data elements are a heck of a lot easier to collect than other data elements.
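(A minimal sketch of the per-element re-abstraction check being described: compare the originally abstracted values for one data element with an auditor's independent re-abstraction. The binary data are invented; statistics.correlation requires Python 3.10+.)

```python
import statistics as st

original     = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]  # value as first abstracted
reabstracted = [1, 0, 1, 1, 0, 1, 0, 1, 0, 1]  # auditor's independent read

# Pearson correlation per data element, plus simple raw agreement.
r = st.correlation(original, reabstracted)
agreement = sum(a == b for a, b in zip(original, reabstracted)) / len(original)
print(f"r = {r:.2f}, agreement = {agreement:.0%}")  # r = 0.80, agreement = 90%
```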

But at the same time, when you aggregate the experience on high, it looks pretty good. As I jokingly said to Linda Cohen and the folks at the GAO -- we've done a series of interviews with them about this whole notion of data quality, and I'm sure there are more to come -- look at the data that we are getting from the nation's hospitals. We accredit about 80 percent of the hospitals, representing roughly 96 percent of the hospital beds.

The number of hospitals that have measurement reporting requirements from us is a bit smaller than the total number of hospitals that we accredit, because a lot of them at this point are psych hospitals or children's hospitals that are outside the requirement base -- we don't have measure sets yet that can accommodate them -- so we have a total of about 3,800 hospitals reporting to us at the moment in a standardized manner.

And when we look at the aggregate experience across all of the individual process measures and the few outcome measures that we have, on the process measure side in particular there's only one measure where 90 percent of the hospitals are performing at 90 percent or better, and that measure happens to be oxygenation assessment.

Steve knows why because we were on a panel together where Steve said, "Follow the money," because there is a reimbursement trail for oxygenation.

The second area that hospitals do very well at is aspirin at arrival, aspirin at discharge. They are very close to 90 percent.

Following soon thereafter is beta blocker use both at arrival and at discharge.

And then it falls off horribly. If you look at some of the public health metrics, like influenza immunization, pneumococcal immunization and so on, performance is woeful.

And I joked to Linda at the GAO, when the question arose, "How credible are the data, and are the hospitals cooking the books?" -- and I somewhat tongue-in-cheek said, "If they were cooking the books, this is not what their data would look like."

[Laughter.]

DR. LOEB: I mean, they'd be telling us they're doing 100 percent on everything. And clearly we're seeing a huge spread -- in essence, a normal curve is what we're seeing. And we're seeing significant outliers both on the positive side and on the negative side.

I'll stop there because from talking with Bob and Anna by email, I think you want to engage in some conversation to the extent we can.

Thanks.

MR. HUNGATE: Thank you.

DR. LOEB: Thanks for indulging my ruminations.

MR. HUNGATE: Well, they're very helpful.

Who would like to start the questions? Or are you as overwhelmed as I am?

MR. SCANLON: I'm overwhelmed, and I wouldn't disagree with any of your characterizations of the problem. I guess I'm thinking in particular about the issue of maintaining or reconciling the measures, which I think is very important, because one of the concerns we have as we move forward in this area is that if part of the problem is that the measures become obsolete, then we have a backlash that's generated by that.

MR. HUNGATE: Yes.

MR. SCANLON: That could be as bad as if we have a backlash generated by sort of inaccuracy or anything else.

The question would be, sort of: Do you have an idea of how you'd go about -- I mean, thinking about our making recommendations to the Department -- how can the Department play a role here other than creating an endowment for NQF and --

DR. LOEB: Well, I don't want to be a shill for NQF, but I think funding is clearly of great importance here. And finding a disinterested third party like NQF -- and we certainly believe very strongly in the notion of what NQF can do -- a third party like that who's willing to take this on, is great. But the issue that I brought up before, the one I worry a lot about, is: Does NQF really have the leverage to create the change?

I mean, if NQF has a measure in its toolbox that in its wisdom for whatever reason it says needs to be changed, it doesn't have the expertise necessarily to do it. It's the measure developer that has the expertise.

But then there's a whole series of dominoes that fall downstream, because once that measure gets changed, all of the infrastructure that's been set up is affected -- from the perspective of the hospital, at the data collection end of it, to the vendor intermediary who's collecting the data, whether it be, you know, a QIO data warehouse to which the hospital is transmitting or one of the Joint Commission's multitude of vendors with whom we have to track the relationships. There are infrastructure changes that are necessary at that end. There are reporting changes that are associated with this. And, you know, the developer certainly can say, "We don't have the resources to do this."

So how you leverage that change, I think, is critical. What are the trip wires that generate the need to change a measure?

I mean, NQF worries primarily, I think, about the notion of what constitutes a material change, but I think that's sort of only part of the issue. The other part of the issue is, you know, how do you grade the evidence to determine that it is time to change?

I mean, if we have measures that looked at open cholecystectomies and we had those measures for years and now most of the cholecystectomies are being done laparoscopically and we never changed our measure to deal with laparoscopic cholecystectomy, that would be a big problem.

Your question is a good one, what can you suggest to the Department to create that change? I'm not sure of a good answer beyond the one I've given you, and I'm not sure that's a good one.

DR. JENCKS: Well, if I could add something here. We've really been working with Jerod and his folks. I have at least sort of a notion of how it might work.

First of all, I think that if you submit a measure to the NQF, you own it. It's just like you send your child to school -- it's your child; they come back at the end of the day. And people who submit it should take accountability for keeping it up to date.

And that can be very helpful because it then gives NQF somebody to turn to. NQF needs an internal structure which does something fairly simple: it takes a change recommended by the owner -- or, if you like, by the custodian or foster parent, if the original generator loses that status -- and puts it in one of two buckets. It's either a non-significant change, one where you can just say, "Fine," or it's a significant change -- "This measure really isn't the measure we had before" -- and it's got to go through processing here.

There are a couple of reasons why you want to do something like this.

One is exactly the problem that Jerod described, which is you get guidelines people who have different -- well, sometimes they have a different sense of the clock, but sometimes they have different criteria. Guideline writers want to know what the best treatment is.

For many of these measures, what we're really asking is: What's a treatment that's good enough so that people would say, "Yes, that's okay," you know? And ARBs may still be not quite as good as ACEs. Maybe they are.

But from the point of view of the measures people, you know, we said, "Well, does anybody really think that somebody who gives an ARB instead of an ACE is practicing bad medicine?" The answer is, there are a couple of purists who do, but the vast majority of people don't.

It pushes the responsibility back down, and suddenly those guideline managers begin to look at the issue in a very different way.

So that's a way to do it. It's not necessarily the best way, but I wanted something to recommend to the Department.

DR. LOEB: Yes, and just to follow on what Steve has said, I think the American College of Cardiology and the American Heart Association have -- how should I phrase this? -- woken up to this notion, in that they are now trying to merge the guideline process with the measure development process.

Now, that's a good thing as long as -- and this is a really key point, I think -- you engage others outside of the American College of Cardiology and the American Heart Association.

And toward that end, to the extent they've been looking at this so far, they're looking primarily at physician-level measures and not necessarily organizational-level measures. So they're wreaking havoc with the world at large, because while they merge the guideline and measure development processes together, they haven't accommodated the fact that these measures need to get used somewhere and leveraged by some entity, be it regulatory or accrediting. Hopefully they're waking up to that.

Actually, three points. Point two is the notion of -- how do I phrase this? When I talk about this stuff publicly, I'd say the measures came from three places. They came from the measurement experts, who know nothing about clinical medicine. They came from the clinical experts, who know nothing about measurement. And they came from the third group, the entrepreneurs, who know nothing about either and care about something else, which is making money.

And somehow we have to get all three of those types of folks onto one track.

And the third point is the notion of the specialty societies.

I think it was Lyndon Johnson back a gazillion years ago who said it's better to have your enemies in the tent with you urinating out than outside urinating in.

[Laughter.]

DR. LOEB: Well, the notion here is you really need to do this as a consensus development process, and from a Department recommendation perspective, I think that's really important: to get all of the entities in the room together so that the process gets done once and you don't have dueling expert panels. That has been an enormous problem for all of us.

You know, the American College of Cardiology and American Heart Association pulled together a set of metrics that relate to X, Y, Z, and the Joint Commission and CMS do it, and then the thoracic surgeons do it and so on and so on, and you end up with a Tower of Babel that is absolutely impossible to deal with.

So to the extent that you can recommend creating, in these unique clinical areas, a singular mechanism by which all the appropriate stakeholders can sit around the table and build on what each other is saying, I think that would be an enormous boon.

Plus, from a cost perspective, it would be a heck of a lot more cost effective than having these redundant, iterative processes under way that all collide at the end.

MS. McCALL: Right. Yes, and not agree.

Some things that you've said that struck me as just particularly insightful and powerful -- one, you started out by talking about who sets measurement priorities --

DR. LOEB: Yes.

MS. McCALL: -- okay? And I would link that to what you've just talked about -- they're not the same, but it's kind of the who and the what of this maintaining and reconciling of measures. When we talk about the who, you listed off a number of different players, and how do we get them to align, and how do we get congruence?

We spent some time yesterday also talking about -- and I think you mentioned as well -- different constituencies, that the same measure isn't really appropriate as you think about the different constituencies and what's the level of evidence that's required? You know, how do you actually maintain them? How do you morph them?

Whatever the mechanism is would probably have to align not only who's doing it but what they're doing and what constituencies are involved.

The other thing that you said that struck me was this gap between guidelines and practice, and we chatted a little bit about that, which is to say there's value maybe in that gap, which has to do with learning, you know, that it becomes the Catch-22 that people had talked about before -- what's the ROI on more data?

DR. LOEB: Yes.

MS. McCALL: You know, or what's the ROI on looking at something in a different way?

It's tough to know until you actually have a chance to look at it in a different way. God forbid that we have to all reach consensus on something with no evidence before we can actually move off our current position.

So it's really kind of maintaining a creative tension in that gap without letting it get too far that seems to be of value. I'd like your thoughts on that.

DR. LOEB: Yes, I guess a couple of points.

And, you know, many years ago the AMA fought against guidelines, and they fought against guidelines because -- and these are words from their argument -- this is cookbook medicine, and, you know, physicians should not be forced into practicing cookbook medicine; every patient is different, et cetera.

You know, that's fine and well. I believe that. I'd like my physician to treat me as an individual. But at the same time, assuming I have an indication for Drug X and no contraindication for Drug X, I'd like to know that he knows about that and that he is going to prescribe that, and of course I'm going to take it.

So I'm not sure that that necessarily makes me feel very comfortable. In fact, I worry about that gap a lot when I look at the fact that just about all of the measures that we're using today are based on very strong, sound clinical practice guidelines; they're based on sound, randomized controlled trials. I mean, it's about as good as the evidence gets, and we're seeing compliance rates that are terrible.

And, you know, when hospitals call us, I guess I'm the guy that -- how should I say this? -- tells them what I think when they say, "What's the benchmark?" you know, for aspirin at arrival or for beta blocker.

And my answer to that is: 100 percent. And that's because the measures are constructed in such a way as to permit the organization and its component practitioners to exclude a patient for whom a given therapeutic approach is improper. So if you're aspirin allergic or you have a GI bleed, you should not get an aspirin when you come into that emergency department with an acute MI, and the hospital should exclude you from that population. That's good medicine.

On the other hand, if you didn't have a contraindication to getting an aspirin, you should get it 100 percent of the time. That's good medicine.

MS. McCALL: It's not that situation. It's more the ACE/ARB situation --

DR. LOEB: Right.

MS. McCALL: -- is what I'm talking about.

DR. LOEB: You're crystal clear. I agree.

MS. McCALL: Okay.

DR. LOEB: The other point I wanted to make -- and this, I think, comes back to the first question, relative to, you know, recommendations at the Department level as to who sets priorities -- is that we have been saying for about two years now, and it's really Dennis's concept, that there really needs to be a governmental entity, probably out of the Office of the Secretary, that is responsible for setting priorities.

You know, the NQF doesn't have the leverage to do this. The Joint Commission and CMS alone, or even together, I don't think, have the leverage to do this. The IOM probably doesn't have the leverage to do it.

We really need national priorities set, and probably set on a regular, ongoing basis, with a funding stream associated with setting those priorities to make them real. And right now, there is nowhere that happens.

I mean, we call our metrics "National Quality Improvement Goals." It's pretty bold of us to call them that because who deemed them national quality improvement goals?

MS. McCALL: Right.

DR. LOEB: I mean, we did, but so what? We're just an accrediting body.

We really need a high-level entity, and our thought is that it probably needs to be at the level of the Secretary to have the leverage that it ultimately could and should have.

MR. HUNGATE: Look across the table.

DR. LOEB: Yes, exactly.

I mean, I think that's a real key.

MS. McCALL: Okay.

MS. POKER: I wanted to address another issue that you talked about, and that was the bundles -- putting together entire sets of measures in a like area.

DR. LOEB: Yes.

MS. POKER: And you were talking about it more from the regulatory perspective.

DR. LOEB: Oh, no, no -- from the outcome perspective. It's clear --

MS. POKER: Okay, yes.

DR. LOEB: -- that if you're looking at ventilator-associated pneumonia, for example, there are a number of things you can do to help prevent ventilator-associated pneumonia. You can prevent stress ulcer disease, you can, you know, elevate the head of the bed 30 degrees -- you could do a bunch of things.

And, you know, if a hospital's doing one or two of them but not the third one, the likelihood of being good at preventing ventilator-associated pneumonia is pretty slim. Peter Pronovost at Hopkins has shown this, as have others, but Peter has done yeoman's work in the ventilator-associated pneumonia area and this whole notion of bundles. Berwick has been preaching it for years.

MS. POKER: Well, there's bundles of performance that you could put together.

DR. LOEB: Right.

MS. POKER: Or bundles that you could put together for any workflow area -- for example, for discharge. There are a number of things there that you want to put together, and that's almost -- what I heard is also kind of redesigning workflow.

And what I heard yesterday, when somebody talked about it, is to change the infrastructure to transfer evidence-based information. And my thinking, with my informatics cap on, is decision support systems to help with those bundles.

DR. LOEB: Yes.

MS. POKER: How do you see it currently? Because one of the problems is practitioners -- can they do the entire bundle? At what point do they drop the ball just because of their workflow?

So I guess the question is: How do we get there, because that, I think, would make a really big impact, if people would do the entire bundle, not portions of it?

DR. LOEB: No, I totally agree with you, but I think it won't happen absent an infrastructure that has the systems and processes in place -- and, you know, they don't necessarily need to be electronic; they can be other than electronic as well, although an electronic infrastructure is certainly going to help, with decision support aids and reminders and so on -- one that precludes someone from simply hitting the Enter key when, you know, the reminder comes up.

When I flew to Chicago this morning, it was pretty funny, actually, on United. I had a paper upgrade that I wanted to use, and the agent behind the counter, she flipped really fast through the various keys and there was a screen that came up -- she told me this -- and said "Collect paper upgrade." And she was just an automaton, just going ahead pushing the Enter key, Enter key, Enter key.

And then she said to me, "They told me to do something, but I don't know what it told me to do, so I'm going to have to cancel your check-in and start it all over again."

And she did. And in fact what it was -- of course, I didn't want to give it to her because I would much prefer to use it on another flight, but she had a decision support aid that said "collect that paper upgrade."

Now, you know, how much of that exists in the hospital infrastructure today? Well, close to zero. And if you look at the complexity of systems and processes in hospitals, it's not just, ultimately, the nurse dispensing that aspirin to the patient; there are about 95 processes that have to take place before that aspirin gets dispensed by the nurse to the patient. And all of those are subject to failure.

And if you look at, you know, failure mode and effects analysis and look at the possibility for the systems to fail -- as a limited aside, I mentioned this to Steve earlier -- I spent one of the most fantastic days I've had in years yesterday, along with four of my Joint Commission colleagues, out at the United Airlines Flight Center in Denver.

And we went out there specifically because as a chance occurrence on a flight recently, I ran into a pilot who saw my briefcase, my JCAHO tag, and he literally shook hands with me -- stuck his hand out as I got on the airplane. He said, "I want to thank you for what you're doing to make health care safer." And I said, "Whoa! You even knew who JCAHO was?"

And we got to talking. We talked in the cockpit before we took off and then we traded business cards. He's come out to the Joint Commission. He's one of four managing pilots in Chicago. And we talked about issues around patient safety/aviation safety.

To make a long story short, we got into a very lengthy discussion about what United calls its "FSAP," its Flight Safety Awareness Program, which is a reporting system different from the Aviation Safety Reporting System, the traditional one that's run by NASA.

This is a system that United has bought into with the Airline Pilots Association and the FAA and the company. They're getting 125 reports every week, and they invited us to Denver yesterday to sit through a five-hour meeting, which they have every week, where the FAA is at the table, the Airline Pilots Association and the company are at the table, and they go through these 125 reports one by one and figure out:

What was the system in process that should have been in place?

Where were the redundancies that could have prevented this situation from happening?

And what can we change in our systems and processes downstream? This is a protected environment. It's not subject to FOIA. And it's been, I guess, four and a half years now and the pilots have felt comfortable with the reporting that's going on and they feel comfortable because they're getting feedback and the processes are improving. It's making the airline safer.

Now, you know, fast forward that to health care -- my God! At the Joint Commission, we've examined 3,000 "crashes" over 10 years. Now, that's the tip of the iceberg. We think that's probably one-tenth of one percent of what's happening. One-tenth of one percent.

You can't prevent everything, but all of this could be significantly improved -- coming way back to your question about the systems and processes. That's the way to solve it. And by the way, they let us fly the 777 simulator yesterday.

MS. POKER: Oh, that must be good!

MR. HUNGATE: I've got Carol and Justine and Stan and --

MS. McCALL: Okay. I love that you were able to go out to Denver and spend that time. I thought, for the group, there are a couple of analogies or analogous situations. We've heard about airlines. We've also heard about education -- you brought that up; Carolyn brought that up.

I think there are some opportunities and actually maybe a call to action for us to look at some other analogous industries and to speak with them to kind of bust through some paradigms and get some new thinking that could be a lot of fun.

And, you know, education around measurement, I'm wondering if there isn't another one that we can search for in there.

And then the second was to ask you about these bundles, and I find them fascinating. And I'm not sure I understand them right, so first a question for you and then a comment. The bundle -- you talked about it in conjunction with process and outcome, and what I hear you saying is that this choice between process and outcome is really a false choice, and that a bundle is really a combination; it's a different kind of composite that actually says there's a bundle of metrics that includes not only how you do something but what it is that you try to achieve. Is that a fair representation, or do I have that wrong?

DR. LOEB: I'm not sure what the classic textbook definition is, indeed if there even is one, for a bundle, but my concept is that it's a group of like measures that complement each other and that address a single -- well, in some cases multiple -- clinical area.

There's a bundle that looks at medication reconciliation that's in the IHI 100,000 Lives campaign, ventilator-associated pneumonia, surgical site infection, and these are measures --

DR. CARR: Line infection.

DR. LOEB: I'm sorry?

DR. CARR: Line infection.

DR. LOEB: Yes, central line infection. There's a bunch of them.

MS. McCALL: Right.

DR. LOEB: And they basically combine together. Importantly, they don't create a composite metric; they don't come up with a single number.

MS. McCALL: Right.

DR. LOEB: But rather what they look at is: Do you perform flawlessly with each of the aspects of this bundle?

MS. McCALL: Or the spider? I think of it as a spider.

DR. LOEB: Yes, you can think of it as a spider, with the notion that the ultimate outcome -- whether it be measured, you know, as in-hospital mortality, 30-day mortality, five-year mortality, whatever it might be -- is going to be improved if you've paid attention to all of these things as opposed to one or two or three of these things.

MS. McCALL: Right, okay.

DR. LOEB: But it doesn't really rest on the notion of aggregating the metrics up to a single numeric.

MS. McCALL: Right. Okay, so that helps me understand a little bit.
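A minimal sketch of the all-or-none logic Dr. Loeb describes, with a hypothetical bundle and patients: each element is checked separately, and a patient counts as compliant only if every element was done -- nothing is rolled up into a single composite score.

```python
# Hypothetical ventilator-associated pneumonia bundle; element names are illustrative.
VAP_BUNDLE = ["head_of_bed_30_degrees", "stress_ulcer_prophylaxis", "dvt_prophylaxis"]

patients = [
    {"head_of_bed_30_degrees": True, "stress_ulcer_prophylaxis": True,  "dvt_prophylaxis": True},
    {"head_of_bed_30_degrees": True, "stress_ulcer_prophylaxis": False, "dvt_prophylaxis": True},
]

def bundle_compliant(patient: dict, bundle: list[str]) -> bool:
    """True only if every element of the bundle was performed for this patient."""
    return all(patient.get(element, False) for element in bundle)

rate = sum(bundle_compliant(p, VAP_BUNDLE) for p in patients) / len(patients)
print(f"all-or-none bundle compliance: {rate:.0%}")  # 50%: one of two patients passed every element
```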

And I think when we get back to the who sets them and how do we go about that, what I see is something that is -- it's turtles all the way down! I mean --

[Laughter.]

MS. McCALL: -- they're nested, they're subsumed. Some of these things are very context-specific and they're not generalizable to any other thing, and yet they must be what they must be. And then you kind of go up to higher levels and higher levels.

Trying to somehow prescribe all of that, or engineer all of that, is quite daunting, and so it may be the wrong paradigm; we've got to try to find a way to allow them to kind of emerge, and yet they can't be just completely unattended.

DR. LOEB: No, it's got to be consensus driven, I think it's very clear.

MS. McCALL: Yes.

DR. LOEB: You know, you need some consensus to set priorities, too, but priority setting I think in large part should be done on the basis of, you know, where are the problems, where do we know there's high-volume, high-risk, problem-prone areas for which, as Wennberg has been saying for 25 years, you know, "In health care, geography is destiny."

MS. McCALL: Right.

DR. LOEB: So we know we've got these problems; let's set the priorities around these problems and then let's gain consensus about what to measure.

MS. McCALL: You know what would be fascinating: you could imagine a system -- imagine what John talked about yesterday, okay -- which has some baseline data that, you know, is going to be common, and once consensus were reached -- so I'm assuming I have a can opener, okay? -- and there's some new set, what you're able to do is, just like we have software patches and we have some other things, you're able to actually load into your system, just like a data taxonomy, an actual metrics piece that comes with it -- denominators and numerators and, you know --

DR. LOEB: You're a dreamer.

MS. McCALL: Yes.

[Laughter.]

MS. McCALL: But the thing is to visualize what these bodies might create, you know, in addition to opinion, that could actually be executed to help people --

DR. LOEB: You're right.

MS. McCALL: -- in the moment do what they do.
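A minimal sketch of the loadable "metrics piece" Ms. McCall imagines: a measure arrives as a self-describing specification -- denominator, exclusions, numerator -- and the local system evaluates it against its own records without custom code. The spec format, field names, and records here are all hypothetical, not any real measure specification.

```python
from typing import Optional

# Hypothetical measure module, delivered like a software patch. The exclusion
# logic mirrors the aspirin example above: allergy or GI bleed takes a patient
# out of the population.
MEASURE_SPEC = {
    "id": "ami-aspirin-arrival",
    "version": "2005.1",
    "denominator": lambda r: r.get("diagnosis") == "AMI",
    "exclusions":  lambda r: r.get("aspirin_allergy", False) or r.get("gi_bleed", False),
    "numerator":   lambda r: r.get("aspirin_at_arrival", False),
}

def compute(spec: dict, records: list[dict]) -> Optional[float]:
    """Evaluate a loaded measure spec against local records."""
    eligible = [r for r in records if spec["denominator"](r) and not spec["exclusions"](r)]
    if not eligible:
        return None  # nothing to measure
    return sum(bool(spec["numerator"](r)) for r in eligible) / len(eligible)

records = [
    {"diagnosis": "AMI", "aspirin_at_arrival": True},
    {"diagnosis": "AMI", "aspirin_at_arrival": False},
    {"diagnosis": "AMI", "gi_bleed": True},  # excluded: contraindication
    {"diagnosis": "pneumonia"},              # not in the denominator
]
print(compute(MEASURE_SPEC, records))  # 0.5 -- one of two eligible patients got aspirin
```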

DR. CARR: Just maybe to say a similar kind of thing to what Carol said, and following up on what we heard yesterday about how you define quality. Actually, we even had discussion about what's quality and what's value; should we be looking at value or quality? Let's just stay with what we call quality now.

I go back to what Don Detmer said, that, you know, we have the IOM Dimensions of Care as one thing. A second thing is then how do you approach it? And the approach has a lot to do with how it will be received, how it can be defended.

So what you've done is really taken evidence-based -- no discussion about whether this is right --

DR. LOEB: Right.

DR. CARR: -- because we have the weight of evidence with us, and it's a little bit maybe like Brent James said, "The light is shining there." We know this is an area that can be fixed and we can see it and we can work on it.

But then there are the areas that are muddier. So you're on the effectiveness side, and really you're being very effective in major diseases -- cardiac, pneumonia -- but on the other side there are things like safety that are not disease-specific or condition-specific; they're going to be system-specific. And maybe these bundles are like that. We're safer if we prevent these complications.

And then, you know, there's timely, and so on.

So let me just say, two things had broader dimensions yesterday. One is, what's the definition? And then the second is, what's the process?

And I guess what I'm wondering is, as we're reciting all of the great work that's been done by these different agencies -- NQF is doing a lot actually on safety, system safety --

DR. LOEB: Right.

DR. CARR: -- and so on, you guys have done a lot on, you know, evidence-based implementation -- so one question is, or is this what you were saying, that there's a need to sort of set the priority, that, you know, in the next five years we will be safe so we're going to focus all on safety, or we will be evidence-based.

Then the second thing is -- and Brent James had brought this up, and others -- the iterative process. Just as you found with ARBs, you know, we thought ACE was the answer; now we know ARBs are the answer, whatever. Actually, with the O2 sats, maybe we don't need to be measuring that because everybody's at 100 percent; we have that incentive.

So how do we go back and get the biggest bang for our buck, take things out that either didn't work or are already done, and then re-prioritize? And it's sort of this: It's not enough to have the measures; it's going back and now using them and driving.

And then the final thing that is going to be great is to see -- well, I guess it'll be great; I'll have to see the article in the New England Journal -- but this whole thing about we did all these processes and how were we better?

DR. LOEB: Right.

DR. CARR: I think we really, really need to understand that so that we can be better at what we do.

DR. LOEB: Boy, I put a couple notes down; I'm not sure I'm going to remember all of this.

You know, you're absolutely right in that we have pulled off the low-hanging fruit at this point. I mean, those are the areas in which there is clear linkage between a given process of care and a given outcome of care. I mean, the evidence is absolutely, you know, sound and strong and so on. These are processes that are inextricably linked to good outcomes.

But that low-hanging fruit is gone now.

MS. McCALL: Yes.

DR. LOEB: There are some other measure sets that are in a variety of states of evolution which are somewhat low-hanging fruit, like children's asthma care, for example, and so on, but now we're getting into some of the areas that are really kind of thorny, like surgical infection prevention and in our case we have a set of metrics that relate to ICU care and we'll probably be looking at emergency department care, things like through-put, I mean, stuff that is by no means low-hanging fruit at all. And how you measure them is going to be an issue.

That's, I guess, point one.

Second point. You know, one of the problems in measuring in safety is that in safety you're looking for the most part at errors of commission. In quality, for the most part, what you're measuring is errors of omission. Not exclusively in either case, but that's pretty much what it comes down to. It's very hard to measure things that don't happen.

I mean, when the initial To Err is Human report came out -- and the Federal response, the QuIC report, and so on -- everybody said, you know, John Eisenberg, bless his heart: we have to have a 50 percent reduction in errors in five years. Fifty percent from what? And we don't even know where we are, five years out from the IOM report now.

So to measure things in safety is very difficult, to say the least.

I guess the other point I would make, and I'm glad you made it relative to the oxygenation measure, you're absolutely right. There are some things that we've been measuring for some time that probably no longer fit the mold of the juice being worth the squeeze.

You know, this is being done virtually 97, 98 percent of the time at this juncture, and the question is: Is it worth bothering to keep measuring it?

I would actually put that into this broad rubric I called "reconciliation and measure maintenance" because the measure sets have to change over time. They've got to get new measures in, old measures out.

But the problem is, when you have a responsibility and accountability, if you will, as we do at the Joint Commission as an accrediting body -- and I can tell you our Board has a Strategic Issues Workgroup, which I staff and my staff work on, that's really grappling with this right now -- the question is: can you write a blanket set of rules that say, in the acute MI set as an example, they're doing really well with the aspirin, they're doing pretty well with beta blocker, so should we substitute in perhaps some management measures, some other measures where we don't know how well people are doing?

The problem is we've got hospitals performing all over the map. And does there need to be a set of rules, if you will, established that say "if you're more than two standard deviations, three standard deviations from the mean, no, you can't stop doing this; you must continue to do this?"

I don't know what those rules should be. I don't know who should set those rules. I mean, we'll have to set them ourselves initially as an accrediting body to fulfill our responsibility as an accountable entity to the Federal government under the deemed status provisions of Medicare.

But, you know, as I sat through multiple discussions with our Board over this, and they keep coming back to us and asking us to model this using, you know, the data that we've been gathering for some time and try to understand what would happen if we do this or if we do that or if we do something else, your point is exceedingly well taken: We have got to figure out what the rules of the game are.

And, you know, the push-back that we are going to get, I see the handwriting on the wall here, is the hospitals are going to tell us we've already got these systems and processes in place that you made us spend money on -- we spent capital to put in a system and process to do X, Y and Z and now you're telling me you don't want to measure that anymore and now you want to measure A, B and C? I mean, that's what's going to happen.

So we have to figure out how to deal with that problem.
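A minimal sketch of the kind of retirement rule Dr. Loeb raises, with made-up rates: a "topped-out" measure may be retired for a hospital only if that hospital is not a low outlier -- here, not more than two standard deviations below the mean of all reporting hospitals. The threshold and data are illustrative assumptions, not Joint Commission policy.

```python
from statistics import mean, stdev

hospital_rates = [0.98, 0.97, 0.99, 0.96, 0.95, 0.97, 0.60, 0.98]  # one laggard

mu, sigma = mean(hospital_rates), stdev(hospital_rates)

def may_retire(rate: float, k: float = 2.0) -> bool:
    """True if this hospital's rate is within k standard deviations of the mean."""
    return rate >= mu - k * sigma

for rate in hospital_rates:
    status = "may stop reporting" if may_retire(rate) else "must keep measuring"
    print(f"{rate:.2f}: {status}")  # the 0.60 hospital must keep measuring
```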

DR. CARR: But I think, you know, that's a little bit like how we think about our 10-year plan for the electronic health record and think about flexible fields. I think one underlying assumption is that it will be evolving, and so how we build the quality dimension of electronic health records might need to be, I mean, even a binary yes/no --

DR. LOEB: Sure.

DR. CARR: -- because as you were saying, do you need to know 99 percent, 92 percent? No.

DR. LOEB: Yes.

DR. CARR: You need to know yes/no.

DR. LOEB: Well, you know, the gradations in these indicators, too, really are kind of fascinating because I don't really know that we truly have the ability to separate out in terms of fidelity a hospital that's performing at 94 percent versus one that's performing at 96 percent, but I can damn well tell you that I know the difference between a hospital that's at 94 percent and one that's at 44 percent.

DR. CARR: Sure. But P for P is separating out 99 percent --

DR. LOEB: It is.

DR. CARR: -- and 93 percent and --

DR. LOEB: Well, it is, but that's a problem.

DR. CARR: Yes.

DR. LOEB: What's happening is, you know, there's a finite pie that's being cut. You know, basing P for P on the notion of a zero-based budget is a real problem, because what you're going to do is squeeze out those entities that probably are in the most need of investment to try to make them better, and give the money to the entities that are already, you know, way at the head of the stream. And that's a problem.

DR. CARR: Interesting, yes.
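A minimal sketch of Dr. Loeb's fidelity point, with hypothetical denominators: at a few hundred cases, 95 percent confidence intervals for 94 percent and 96 percent overlap heavily, while 44 percent is unmistakably different. A plain normal-approximation interval is used.

```python
from math import sqrt

def ci_95(successes: int, n: int) -> tuple[float, float]:
    """Normal-approximation 95% confidence interval for a proportion."""
    p = successes / n
    half = 1.96 * sqrt(p * (1 - p) / n)
    return p - half, p + half

for successes, n in [(188, 200), (192, 200), (88, 200)]:  # 94%, 96%, 44%
    lo, hi = ci_95(successes, n)
    print(f"{successes}/{n}: {successes/n:.0%}  (95% CI {lo:.1%} to {hi:.1%})")
# The 94% and 96% intervals overlap heavily; 44% is far outside both.
```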

MR. HUNGATE: Stan, go ahead.

MR. ETTINGER: Having sat on all sides of the JCAHO, being inspected by you and when I was in CMS I used to do some of your reports to Congress and have to review them --

DR. LOEB: Yes.

MR. ETTINGER: -- one problem that has always come up is -- I guess it's part of the pay for performance argument, too -- how many metrics do you need to decide if somebody is equivalent in the case of the deemed status?

DR. LOEB: Yes.

MR. ETTINGER: In the case of paying for quality, how many elephants can you fit on the head of a pin? How many measurements tell you? You said, like, this bell-shaped curve -- well, you can take the high end and everybody's at 99 percent, so everybody's wonderful; you take the low end and everybody looks terrible. And how do you integrate the so-called survey data -- having looked at survey data God knows how many times in earlier lives -- how does that fit in with the performance measurement-type data? It's very different and probably very variable depending on who's done it.

DR. LOEB: Right.

MR. ETTINGER: I mean, I've been in there behind some of your people, state people, and they wouldn't believe that all three of us were in the same facility.

DR. LOEB: I hear you.

MR. ETTINGER: And just looking down the road, how would you look at the issue of what to do in that case? What issues? How can we help in addressing the issue of how to maybe standardize that, or will there be some metrics that would help analyze that kind of information in some sort of coherent pattern?

DR. LOEB: Oh, it's a tough question. I guess a couple of thoughts.

Number one, the notion is that a standards-based accreditation process, or certification process, is complementary to performance measurement. You know, you can't have one without the other. You need them both.

And so I sort of think it's the Flat Earth Society to say, well, you know, now we're doing measurement; we don't need to worry about those standards anymore.

We need to deal with both. That's, I guess, thought one.

Thought two is the ultimate answer to your question, and you'll, you know, chuckle, I'm sure, but the ultimate answer to the question is an electronic infrastructure. You know, when measurement becomes a byproduct of care delivery, it doesn't matter how many metrics you have, it doesn't matter whether you've got one measure or 100 measures -- you can calculate them by collecting the data once and presumably doing so in an electronic manner.

Clearly, we're not there yet. We don't even have systems in the same hospital that talk to each other. The billing system doesn't talk to the clinical system and the clinical system doesn't talk to the pharmacy system.

Moreover, when I testified before all of you, probably a year or so ago, one of the points that I made back then was that being able to transfer what today are clinically chart-abstracted data elements into an administrative dataset is going to make an enormous difference in terms of what the hospitals do with all of this.

And the challenge, of course, for all of us is to be smart enough to know what those data elements are. Which are the ones that we really need to pull out of the chart and make into electronically captured administrative data elements, so that we can ever so closely bring together the IT side of this and the measurement and care delivery side of this?

MR. ETTINGER: Well, in theory if you had electronic data, you could almost do live-time monitoring.

DR. LOEB: Absolutely.

MR. ETTINGER: I mean, talk to Big Brother at one level --

DR. LOEB: Absolutely right.

MR. ETTINGER: -- but it's also live-time; you don't have to worry about if you're going two years later, three years later or three weeks later.

DR. LOEB: You bet.

MR. ETTINGER: You'd have a live-time -- sort of composite monitoring you could actually do.

DR. LOEB: Totally agree with you.

MS. McCALL: Well, that's really the only way that you're really going to learn.

You know, there's a feedback loop in order to actually learn from something.

DR. LOEB: That's right.

MS. McCALL: It has to be fast enough to work. I mean, kids learn because they go, "Hey, Mommy, do it again. Again, again, again, again." This repetition. And so the ability to do live-time kind of feedback is really, I think, going to be foundational to getting the kind of change that you're talking about.

MR. ETTINGER: I think it's most useful educationally, even for regulation. I mean, going into a Joint Commission facility and looking at some records from three years ago doesn't really tell you if something's currently --

DR. LOEB: We don't do that anymore.

MR. ETTINGER: I know.

[Laughter.]

MR. HUNGATE: All right, the point that measurement is a byproduct of the care process is a critical observation that reinforces all that Brent said yesterday as well.

DR. LOEB: Yes, I looked at his hand-out here.

MR. HUNGATE: It fits.

MS. McCALL: Right.

MR. HUNGATE: You know, it needs no further repetition, I think, in terms of the absoluteness of -- you know, that enables you to have real-time measurement.

DR. LOEB: If you could make that happen --

MR. HUNGATE: It does so many things --

DR. LOEB: -- I know --

MR. HUNGATE: -- that, you know, it just changes the world.

DR. LOEB: It does.

MR. HUNGATE: And it's a piece of the vision.

MS. McCALL: We're going to work.

MR. HUNGATE: Now, we're going to move on quickly. I'm going to let Don have one more comment, then we're going to let Steve go.

DR. STEINWACHS: I too believe that measurement ought to be part of the care process, but I don't know whether you mentioned it, or maybe it was Carolyn or Steve -- you know, we don't have much of a research base that speaks to what ought to go into the medical record, or what needs to be documented in the care process, that would drive the sort of idea that, as a care provider, this is the kind of thing that you're doing.

Otherwise, it very quickly becomes measurement for some other purpose, and you're putting it on top of the care process.

And I guess, you know, one of the opportunities for NCVHS certainly is also to try and talk to what are research needs, since the Department is in a situation of being able to think about where there are investments.

And so I was interested just to get you to say something about, you know, how you try and move this -- the evidence part may be the process, but how do you try and move this process to where health care providers and clinicians would indeed sit down and say, "Yes, these are what I need to collect, because this is part of what I need to have in order to do the care process or to hand that care process off"?

DR. LOEB: Don, I'm not sure this answers totally the question, but when I talked about the notion of having an entity at the highest levels of the Department being involved in priority setting, it's not just priority setting for measurement.

DR. STEINWACHS: Okay.

DR. LOEB: An ancillary piece of that is clearly going to be the linkage to the research needed to meet the demands of whatever those priorities might be. If a priority emerges -- I don't know, say depressive disorders is identified as a priority area for measurement -- well, then there's a whole series of steps that need to fall into place underneath that, which can be categorized in large part, I think, as health services research, and which I hope would lay out, you know, a series of RFPs that will be needed downstream by AHRQ or by whomever, to be able to meet the rigors of what this priority might be.

I totally, you know, support that notion. And as a guy who -- well, one of the charges that I have in my job at the Joint Commission is to grow our research infrastructure, and we've got something like, I don't know, 15 or 16 grants right now in my area.

So I resonate with what you say. I'd love to have, you know, a mechanism by which to fulfill what those priorities might be.

So -- absolutely. Please write that in the --

[Laughter.]

DR. STEINWACHS: All right. We don't know that people always listen to us --

DR. LOEB: I'm starting!

MR. HUNGATE: Very helpful. Steve, your turn to add to the --

DR. JENCKS: -- recital of human knowledge and wastes.

MR. HUNGATE: Yes, right.

Agenda Item: Presentation: "Health Statistics of the Future and Quality of Care: A CMS Perspective" – Dr. Steven F. Jencks

DR. JENCKS: I figured that people were probably, after Carolyn and Jerod, in some measure of PowerPoint withdrawal and so I'm going to dose you --

DR. STEINWACHS: Oh, thank goodness, Steve.

DR. JENCKS: I do my best.

And this is going to be a little bit of this and a bit of that, but I think where I ought to start is giving you an idea of how we're looking at some of these issues at CMS. Trent gave you some of this yesterday, but I'm going to go for a slightly larger picture.

This is the vision that we have adopted as an organization, and I can only say that it's a sad testimony about the health care system that that's a radical vision. The ideas, the definitions, are very familiar to you; these are the same old IOM definitions.

We see this vision as requiring transformation. We've been measuring the rates of incremental change and --

DR. STEINWACHS: Are they going in the right direction?

DR. JENCKS: They are, but perhaps not for long. And the transformation, because of what we are, has to be of the whole system. I mean, by the time you've looked at Medicare and Medicaid, we're much too big to selectively purchase.

And transforming the system means transforming the infrastructure, and of course it means transforming CMS, which is its own challenge.

And I was thinking yesterday as Don was saying, you know, "And what do we do while we wait for a health care system?" I actually think that in many ways the health care reform business was very valuable because it convinced a whole lot of us that we couldn't wait for the legislated solution, which was typically called national health insurance but it could have had a dozen other names.

We've got, in the background, the Internet's contribution, and that's an extraordinary sort of change in the way people relate to data and systems. We've got emerging, I think, understandings of what patient-centered care would look like, which really were not part of the vocabulary very much 10 years ago.

And all of this brings me to the central, single thing -- to use Gandhi's remark that you should be the change you wish to see in the world -- which is that we've got to figure out where we want to be, and we've got to figure out how we're going to measure progress toward being there. Because if we believe that measurement determines behavior, and we're worried about distortion because, you know, the things we're measuring aren't important, the answer is: Let's get a strategy; get measures of the things that are important in the places where we want to be.

And I'll come back a little bit more to what that might look like, but let me just say this is what I'm going to assume, okay -- Carol assumed a can opener; I'm assuming quite a lot more than a can opener. [Laughing.] But what I'm saying is you have to have a sense of what this might look like in 10 years.

And I think this is a not unreasonable estimate. I'm not saying "every provider." I'm not saying that all the systems will have maximum capabilities. I'm not saying everybody's going to be connected to a RHIO or a grid or whatever you want to call it.

But I think we should assume that's going to be the dominant situation, and furthermore, that if it might not be, we should consider it our job to do whatever we can to make sure that it is.

I also think that the payment system will be tightly coupled to that electronic health record, to that whole grid system.

And what we'll be talking about is efficiency of care. That is, to what extent are we producing what we intended to produce as efficiently as we can?

Third, I think that for many purposes, because of the interconnectedness, we will have transitioned to patient-centered measures really moving toward replacing provider-centered measures. And that doesn't mean that every provider will have managed that transition, but you can't have everything.

And, finally, that there will be lots and lots and lots of measures, because as the richness of the information system grows, we will no longer be having to ask, can we afford to collect this measure?

Now, that's just where I think we're going.

I also think that this is a moment when we can hope to achieve extraordinary things. I've been in this racket for a while and it's that magic moment when the ship has been built and it starts to move down the ways into the water.

There are four major reasons that I identify.

The first is that I think we understand much better how large the chasm is and we understand some things about how to fix it. Not everything at all, but some.

Secondly, growing complexity in the system -- I've learned from Mark McClellan to take the positive view, so it makes the reward for systems approaches much greater.

[Laughter.]

DR. JENCKS: The way I would have put this were I not being tutored, and this got rewritten so I am being tutored [laughing], not on this slide but -- is that the growing complexity means we're headed for a complete disaster because human beings were never engineered to do what we're asking them to do in this system.

Thirdly, there's really an unprecedented readiness of people to work together, and I mentioned some alphabet soup of examples, and probably you know some of those examples and you don't know others, and from our point of view it's really interesting. We're being asked to lead, the Department's being asked to lead. And it's very interesting, you know, because this is not necessarily the first response of a whole lot of the people who really do not welcome government intervention in every forum, you know, as Brent clearly doesn't --

DR. STEINWACHS: Well, by "some," is this an act of desperation?

[Laughter.]

DR. JENCKS: I think that's part of it, and I think that part of it is that they are beginning to see the difference between leadership and control.

MR. HUNGATE: A significant point.

MS. McCALL: Its time has come, you know?

DR. JENCKS: Yes. And finally -- this is important for us, but I think it's important for others, too -- solving the quality problem is essential to the viability of Medicare, Medicaid, and SCHIP. It's not an option.

And I think it's not an option for the country in a different way. I mean, it's just sort of percolating through, and people see other reasons and all that, but the idea that General Motors is facing bankruptcy because of its health care costs is really earth-shaking in terms of people's sense, you know, that this isn't someone else's problem. [Laughs.]

So we see a business case which is fairly simple, as I said. We believe that we can only keep the programs solvent and viable by focusing on effective care and eliminating ineffective care. That's a survival strategy.

I had a slide which seems to have gone to slide heaven.

[Laughter.]

DR. STEINWACHS: Do you recognize the other place, too?

[Laughter.]

DR. JENCKS: Well, since one can't tell, one has to, I think, have the positive approach, Don.

[Laughter.]

DR. STEINWACHS: Thank you, Steve. Thank you for being my mentor.

DR. JENCKS: But I think actually there is very strong evidence that we can have major systems changes by addressing the quality issues. And I won't go into all the details on that because I think I'm preaching to the converted.

A couple of perspectives.

One is, CMS is a public health agency. And Mark is very straight and flat-out about this. As he's put it, that's not a choice. We have such an impact on the health of the public that our only choice is whether to be a public health agency thoughtfully or inadvertently. And we're trying to do it thoughtfully.

Second, pay for performance we see as absolutely critical, and it's not that it's critical because we really think that we can, you know, drive the bad ones out of business and make good ones rich; it's because at this point not to put a certain amount of resources on the table when we talk about the importance of measuring things just belies our claims.

On the other hand, it might be that we're not ready to do more than one percent of the total payments, based on these measures. And that has a lot to do with the concern, by the way, that I expressed earlier, that you'll run the places that are in trouble into even deeper trouble. This is enough money so people will pay attention to it. Four-tenths of one percent was enough money to persuade 98 percent of hospitals to submit data.

But it's not, I think, enough that anybody is going to legitimately claim that it's going to run them out of business.

And, finally, I really want to just echo -- first of all, Jerod is absolutely right: we've got more applause from the hospital industry, which is who is affected by this, for getting the measures to be the same measures than we have for almost anything else we've done, you know?

[Laughter.]

DR. JENCKS: I'd like to say, you know, that -- but it's really true.

We don't need to be encouraged, and we think that the consensus as to how to do it is beginning to emerge.

One of the things that happened: Last September, all of a sudden -- it wasn't on anybody's radar screen -- the American College of Cardiology and the American Heart Association produced a set of performance measures for cardiology and cardiologic aspects of primary care. And they got more hate mail than I think they'd ever imagined.

DR. LOEB: It was from me.

[Laughter.]

DR. JENCKS: No, sir! Yours was only a small piece of the pie.

And that was very instructive. I mean, the hospital industry told them that this was unacceptable; the J said it was unacceptable; we told them it was crazy.

And they very quickly sat down and said, okay, but we think there are some things that are better about what we've done.

And we said, fine, let us all sit down together and incorporate those things that are better. But don't produce duplicate measures with different specifications.

And by the way, out of that whole business, and out of the work with the J, I would emphasize that alignment of measures is a phrase that should not be used. We had aligned measures. They had the same names. They appeared to be measuring the same thing. But the specifications weren't the same, and it drove the providers nuts.

So I think what you might think of in this -- and I'm not sure whether it fits with your mission -- is the notion that the measures need to be the same. And somebody said, you know, it should be like a software patch or a module, you know? People can have many different systems, but they should all be able to just take the new version of the module and plug it in, you know?

I mean, one of the problems, as many of you, I suspect, know, is that people who write code in the face of something that isn't quite right have almost no self-control. They fix it. There can be a clear rule on the wall in letters this high that says, "All system changes must go through the change process." You know, it's uncontrollable.

So having the stuff published and having the software standardized is, I think, a place we really do need to go. And I think we're all on the same page on this.
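A minimal sketch of the published-module idea: measure specifications carry versions in a shared registry, and a local system refuses to report a measure whose current spec it has not loaded. Registry, identifiers, and version strings are all hypothetical.

```python
PUBLISHED_VERSIONS = {"ami-aspirin-arrival": "2005.1"}     # the consensus registry
LOADED_SPECS = {("ami-aspirin-arrival", "2005.1"): "..."}  # what this site has installed

def check_current(measure_id: str) -> None:
    """Fail loudly if the site's loaded spec is not the published version."""
    current = PUBLISHED_VERSIONS[measure_id]
    if (measure_id, current) not in LOADED_SPECS:
        raise RuntimeError(f"{measure_id}: spec {current} not loaded; update before reporting")

check_current("ami-aspirin-arrival")  # passes here; a stale site would get an error
```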

Now, we have five major strategies for transforming the health care system -- this is the way we're going about it -- and a whole bunch of things that aren't mentioned here. For example, an emphasis on prevention isn't mentioned here, obviously. The Part D benefit, which is really fairly important, is not mentioned.

But working through partnership, and that means really working in partnership with people, the traditional HCFA/now CMS partnership consisted of saying to people, "We're going to do the following. Would you like to be our partners and do it, too?"

[Laughter.]

DR. JENCKS: So this is genuine partnership.

Measurement -- and we've been talking so much about this, I won't reiterate it.

Paying for patient-centered care -- we're sort of trying to get a little past "performance." I mean, everybody performs; the question is what exactly it is we're trying to pay for.

The promotion of IT.

And an increasing process of creating and using effectiveness data. Some of that may look a little exotic. It's stuff like: When we cover something, cover it with further effectiveness data required as part of the coverage.

But it's also coming back and saying, all right; we now know a bit more about what this is good for and what it isn't good for. I think it's relatively unlikely that anything's going to have its coverage removed as a result of the data, but I sure do think the indications will change.

So the thing, though, that I would say, and I would really emphasize this -- I mean, Carolyn said IT is necessary but not sufficient -- is that none of these things is a magic bullet. And every year, or every couple of years, we have a new magic bullet, you know? Four years ago at CMS, it was: if we publish performance data on everybody, the world will be okay. Now it's: if we pay for performance, the world's going to be okay. And it takes nothing away from these things to say so.

Now, you start off with a patient, and this obviously is fictional, because the idea that a system was built around the patient is [laughing] -- you know, but nevertheless --

DR. STEINWACHS: We thought it was the provider, right? And then the system --

[Laughter.]

PARTICIPANT: And the doctor's name is Patti(?) and --

[Laughter.]

DR. JENCKS: Patti, V.M.

[Laughter.]

DR. JENCKS: But if you then say, okay, how does somebody, and I'm thinking of CMS as an example, how does somebody like CMS influence this system?

And there are a series of things we can do. We can lead, and that comes in a variety of packages. It is independent of whether we put money on the table. If we articulate that this is where we're going, it has a huge effect:

We can afford standard methods. That includes IT standards, it includes performance measures, it includes clinical guidelines.

We can promote partnerships. I've spoken about that a little bit. It's really central.

And we can provide technical assistance, which we do primarily through the Quality Improvement organization.

We can provide public information, as we do by putting stuff up on the Web and other ways.

We can structure the coverage and payment systems so that -- well, to take a very obvious sort of example, the decision to cover or not cover email and phone contact could be a --

And we can reward desired performance. That means, you know, pay for patient-centered care.

And, finally, which is what people always think of, you do establish and enforce requirements, you know. You tell nursing homes that they may not restrain a patient for the purposes of treatment.

So we don't see a single magic bullet in this, and I only emphasize this because I think it's useful to constantly be saying, okay, how might what we're talking about right now fit in the context of other ways to influence the system?

Oh, this is the sort of story about cost and quality and it's just -- you won't know this stuff; I'm not going to --

Now, let me talk about this Workgroup a little. I want to thank you for letting me sit here over the last day and a half. It has been really a pleasure.

You're talking about the right things. You're grappling with the right issues. And you've made me think about some things in slightly different ways and it's worth the trip, to use the Michelin standard for a three-star experience.

[Laughter.]

DR. JENCKS: I think, if I understood the statement yesterday morning, what we were being told was that the job was to provide actionable recommendations that take maximum advantage of the strengths of NCVHS and that have maximum impact on improving the system. Did I hear that about right?

MR. HUNGATE: I think so.

DR. JENCKS: Okay. The big question I had as I listened to that: Is this Workgroup focusing on the quality of the system or on making the system such that quality improvement will constantly go on in it? I think that's a profound difference.

I would recommend focusing on the second.

I think, you know, the health informatics infrastructure people and lots of other people are focusing on what makes a system a good system.

Now, we might talk about measures. I said before I think you want the measures that are based in the system you want to see. And I think this particularly means that we have to think about how to measure two things -- which we're not very good at measuring at the moment -- patient-centering and efficiency.

You know, I mean we have a few trifling problems with the price of process-of-care measures, but these two are the real biggies.

And these things change rapidly. And what's more, the ones we choose are more likely to change.

It took me a while to figure out why we were having all this happen when we had these wonderful measures which were based on rock-solid agreement. Imagine, though, that we were to graph our move toward greater certainty over time in some topic, let's say the use of drugs for heart failure, and so what we sort of like to think is -- it goes like this: Now, that thing might be a little more wiggly than that, but it's nevertheless envisioned as a monotonic, increasing function.

But the reality is not that. Reality is this: You may ask, what are you talking about? What do you mean we've become less certain?

Well, somebody introduces ACE inhibitors, and suddenly we've got -- I'm sorry, ARBs -- and suddenly we don't know as much as we knew a few months ago about ACE inhibitors.

Or, for a radically different problem, we have a list of drugs for prophylaxis for colon surgery, and then we have a national shortage of one of the drugs and some questions about some indications on another.

So it goes up and down.

Well, that wouldn't be all that bad if it weren't for this: Do we actually select measures when we're at this point? Uh-uh. That's when we say, we've got to unscramble this problem before we should ever measure it. Select that, you know, select here.

And it's very likely that we choose something on this opportunistic basis that we use for measures, and the opportunistic basis has two features. One, what's the data opportunity? And two, where is there really strong professional consensus?

And what I'm saying is: Professional consensus doesn't last as well as it might, and it becomes subject to all sorts of controversy, you know? Look at what's going on with obesity.

So that means we need systems -- I mean, this is just a way of triply underlining what was said yesterday about the need for flexibility and open, loosely coupled architectures.

We also need -- and I'm not sure that you folks would think this was within scope, but I think you've really got to think about it -- to figure out which things will self-organize. That is, out of the chaos you will get self-organizing behavior, and what are the things for which you need small or large degrees of structure in order to assure that they organize?

And best of all, could we possibly have a little bit of a rule or some guidelines as to which is which so we can tell up front instead of just learning in retrospect?

The final thing about infrastructure, I want to just put it a little differently, and I certainly agree that the data should come off of the care system and off of the care process. But the basic rule of data quality is lose it if you don't use it.

Unused data rapidly deteriorates. The level of atrophy gets very high.

And I'm just pointing that out because as you begin to use the data, and particularly as you begin to feed it back, the quality goes up.

And I think that that is our most powerful tool for data quality improvement. It's better than audit.

DR. CARR: Can we have copies of these slides?

MS. GREENBERG: Yes, I'm sure.

DR. JENCKS: No, they're mine. You can't have them!

[Laughter.]

MS. GREENBERG: They're in the public domain.

DR. JENCKS: Exactly. They are not copyrightable.

Let me urge you to think a little bit about the consumer role, and it's wonderful that Jerod comes back from visiting the United center, because I think there are two models. One is what I'd call the "FAA Model," and the other is the "Consumer Reports Model."

The Consumer Reports Model says that people are going to make enlightened choices. The FAA Model says the system's going to be safe and effective, you know -- you don't decide between United and Southwest based on whether you're going to have a court action. And furthermore, if people started to suggest that that would be a good idea, I think there'd be a good deal of national push-back from people who have considerable political influence, namely, passengers.

So then we have to say that as envisioned originally, the selection of provider model has not really worked, and that's what Bob was saying yesterday.

And indeed this notion that you could refocus on the decision to have the procedure rather than, you know, whether this was the best possible person to cut on you is a possibility, and it is certainly also important that we get advice which is past the denial-of-services model, and yet you look at the work that Mulley and Wennberg have done and it just hasn't taken over the world yet.

And we might think about, you know, whether it's within scope to consider how you could promote it over the world because clearly that joint decision-making model is what a lot of us believe is not only going to save money and produce more quality-adjusted life years; it's also sort of an ethical imperative.

And what I've come to think is that we face a very interesting dilemma, which is that the consumer information model is really valid for only a few consumers, and for them, that's fine.

I mean, there are people who will go and look it up on the Web and so on and, you know, we might think about the places where it is most likely to work, e.g., maybe when you're choosing a nursing home for a member of your family,

and least likely to work, namely, choosing a hospital to go to when you have a heart attack since, you know, the 911 people tend to make that decision for you, raising interesting questions about what we're publishing.

So then we have to face the reality, however, that this exercise has produced a lot of very valuable change in the system. And I'm not talking here about pushing the beta blocker rate from 45 percent to 75 percent. I'm talking about the readiness of people to look at measures as having some validity, to collect them, to report them, to begin to look at the systems that underlie the numbers they have that they're not as proud of as they hoped.

And we've got to work through, too, the problem that the measures, although imperfect, have a lot of utility. And we clearly cannot afford to wait for perfect measures. So what we have to do is be very committed to the question: How are we going to improve them over time?

Now, what I haven't heard anybody talking about here is the importance of winning the hearts and minds of the people who play the game, without whom this exercise is completely sterile. You know, it's a reality that physicians often just don't understand what this is about. They over-estimate how well they do things. They over-estimate their own rate of change if they're not doing as well as they think they are. And besides, they've got a lot of other irons in the fire.

It's also true that CEOs -- we just did a little survey on this, and a report on it is about to get out -- think that the biggest obstacle to quality improvement is the docs. The chief medical officers and the QI officers have a very different perspective -- they think the problem is a system which doesn't work very well.

And, finally, we have the problem of purchasers, and I'll start with CMS. You know, we're not very good purchasers. The fact that we're purchasing from a broken system contributes to our not being good purchasers. We're desperately trying to figure out how to be better purchasers without doing things that are politically unforgivable.

But the bottom line here is truly the bottom line, which is we have to figure out how we're going to make these changes pay off for the people who invest, and I don't just mean the people who invest money. Money is important, but for physicians, the primary investment is not money -- it's a money equivalent: Time.

I mean, who was telling us about what happened to --

DR. CARR: John.

DR. JENCKS: Was that John? Yes -- anyway -- and so there is a huge purchaser issue here because at the moment there is very little mechanism for gain sharing. I mean, we've got a coalition working on improving surgical care. We think we can lower the rate of surgical complications by 25 percent.

When I suggested to various large purchasers that there needs to be some gain sharing because some of this might cost some money and they need to be at least able to cover their costs, the immediate response is: Excuse me. We're already paying too much. Now you're telling me we're paying too much for bad care. And you want us to pay more?

It's not a really interesting conversation, but we do have to be thinking about it.

MR. HUNGATE: It is an important issue.

DR. JENCKS: So let me just conclude with a couple of comments on systems thinking.

Again, don't be timid. We've got to have the IT; it's not going to do it by itself, which means we've got to be fairly radical in our thinking about how it's going to be used.

We're going to have to look across the board. This notion that we're going to have a measure here and a measure there and another measure just isn't going to get us where we need to go.

I didn't believe 10 years ago that people could sub-optimize by getting 20 measures to improve without improving their system, but they can.

Greg Meyer came up with this phrase; I like it very much: "B people in A systems will always beat A people in B systems." We need to sort of teach them that.

DR. STEINWACHS: We don't believe that in health care, do we?

DR. JENCKS: Well, some of us do and some of us --

[Laughter.]

DR. JENCKS: That's the problem!

I think we have a major job to do, and I think, by the way, it's a job that you folks can really help with, in thinking about how to reconcile systems thinking with scorecards and ranking. Scorecards and ranking are not going away. And systems thinking is something we really want to have, and we've got to define -- and it's one of those things where I think language may prove to be a very important part of how we make it work.

Okay, and the final thing, which I think is, if you will, the law of unintended consequences.

We've been doing some analyses of all the things -- this goes back, Carol, to your comment yesterday -- that could really get screwed up by pay for performance. You know, and how do you counter-balance that, because the concerns are very legitimate?

We have to think about how we balance that stuff. That's the price of thinking radically.

Okay, and finally, now I'm just going to give some advice, fully conscious of the biographical sketch of Socrates by an eight-year-old, back in the days when eight-year-olds knew something about history, which was: "Socrates was a man in Greece. He gave advice. And they poisoned him."

MS. McCALL: Steve, can you talk into the mike?

We're supposed to be recording all of this.

DR. JENCKS: Okay. I would urge you to look at this new horizon, not at fixing gaps. And I'm saying this because that was a phrase, and I can't remember who used the phrase yesterday, but one of you did. That was you? Or are you waving to Jerod?

I can tell you my own experience, having worked in a lot of organizations trying to do the kind of thing you're trying to do, that if you let yourself get involved in fixing the gaps, you will constantly get diverted so completely from looking at the horizon that you'll wind up just looking at your feet.

MS. McCALL: Right.

DR. JENCKS: Secondly, I heard yesterday, and I absolutely believe it to be true -- that means, for purposes of those who are reading the notes -- that it is at the systems/quality interface that this group has an opportunity to make a big contribution, and I don't think there's anybody else about to do it.

I would argue --

DR. STEINWACHS: So, Steve, you don't have the symbol up there, though.

DR. JENCKS: Well, all right -- I'll get it.

Actually, there's a little problem with that symbol, which is that there are some political sensitivities. That's a lambda.

[Laughter.]

DR. STEINWACHS: It's a good thing you came to counsel us.

[Laughter.]

DR. JENCKS: So the third is the focus on patient-centering, because I think we have little understanding of how well we're going to be able to look at that as regional, integrated information becomes available. At the moment, the information problems in judging it are terrible.

The fourth bullet. If you want to promote the movement that we want to see, one of the ways of doing it is the message: We think the present system is in the process of going down for the third time. We're not going to invest in trying to make it better. And, you know, it wouldn't be here by the time we had reasonable things to --

That might not be true, but it might be a little self-fulfilling.

And, finally, that you really want to develop a vision in a way which is complementary to the other people. And I emphasize that because I haven't heard it as much as I thought I might, you know. "Well, we don't need to do this because this is NQF's job, and even if they aren't doing it well, the solution there would be to fix how they do it." "We don't need to do this because there is an Office of the National Coordinator of Health Information Technology and he's working on it and we can provide advice but we don't have to do it."

MS. McCALL: I think that's spot on.

MR. HUNGATE: Yes, let's talk about that for a couple of minutes because it seems to me that if you yourself are saying this is what we think our role is, then you have to say what you think others' roles are in order for that to have coherence. Does that align with what you're thinking?

DR. JENCKS: Yes, and I think you sort of have to have a rough sketch and you need to be fairly sure that your sketch of somebody else's role is acceptable to them.

I mean, if you said that somebody was doing something that they don't think is their job, it'll cause just unnecessary problems and won't help anyone.

MR. HUNGATE: Seems to me that that's part of what we're trying to start working on here in terms of saying: How many visions do we have to get an understanding of in order to get our own vision to fit, I think?

MS. McCALL: Yes. I also think it means --

MR. HUNGATE: There's an iterative process there.

MS. McCALL: I also see it needs to be done not for this Workgroup but for NCVHS, which just has an overarching purpose which has to be complementary to the other parts, you know, kind of in the mall map of life? And also has to go back to, you know, in your target practice where we had, you know, Dr. Patti at the Center?

[Laughter.]

MS. McCALL: But can you go back there for just a second? Okay. If that's the toolkit, we also -- that truly is a map. We have roles in some of those areas, and so this can be kind of a visual to say, okay, where do we play, you know, and what is our unique contribution to do things that won't self-organize? They will not happen naturally. You know, with a particular focus on those.

So, yes, I think how we fit into this room here is critical.

MR. HUNGATE: Okay, floor's open for full discussion.

MS. POKER: I just wanted to say really quick that another roadmap that we have, that this group had, that we haven't really talked about is the IOM Patient Safety Report which also set a very specific roadmap. And I'm not saying one's better than another, but we do have a few roadmaps to kind of select from. Just FYI.

DR. JENCKS: My comment on that is I wouldn't judge that one roadmap is a whole lot better than a roadmap from a different company or done in a different way. It's just how much more valuable it is than no roadmap.

And I think that if that is true in general, for an advisory group it is just extraordinarily true, because if you can lay out the people and, you know -- I mean we just took the IOM six aims not because we thought they were the best possible statement of aims that anybody could imagine but because we thought there was a good degree of acceptance. If we use these and make progress toward them, we've done good --

MS. McCALL: Right.

DR. JENCKS: -- and people won't be saying, "Well, why didn't you use the IOM's aims?"

MS. McCALL: Right. Well, and there's nothing that, you know, creates success like success. So start where you are, get it moving, and it'll get there.

MR. HUNGATE: Don?

DR. STEINWACHS: I was impressed, Steve, that you made sort of a center point here -- patient-centeredness -- even though we transformed that into Dr. Patti, E.N.T.

And, you know, I took from that sort of the tension we have and that, you know, we would like to think of a product the way GM would like to think of a car. We'd like to think a product is a person who is better off by virtue of having been into and through specific health care processes, yet we measure diseases, treatments for diseases, and outcomes that are generally disease specific.

Could you talk a little bit more about how you think some next steps might be that help us -- you know, we'll still stare at diseases and metrics around those, but get back to the patient, because the patient doesn't necessarily experience what we think of in narrow diseases?

DR. JENCKS: Yes. I think there are two kinds of issues that I would focus on, Don.

The first is that I'm not sure that, if we're trying to focus on the patient, we do best to think about the outcome exclusively. The process is really important to most patients.

I think of some work we're currently doing on oncology care in which the suggestion is that a huge amount of oncology care is directed at patients who have failed the protocol treatment and now somebody is busy trying to rescue them. Well, they may produce on the average a bit of life extension; it's pretty small. They produce enough side effects so that people's last months are not a whole lot more fun.

And I think that we need to remember, too, and I'm going to get a little philosophic here, but I think the philosophy is important, that the word "clinical" comes from the role of the priest at the bedside and that we are still, along with our many other roles, priests at the bedside and that you have to then see that interaction as being a very important part of what's going on.

So that's one piece. I would put a good deal of emphasis just on there being a healthy relationship between the patient and the health care system.

And I use the word "patient." Many people prefer, and I understand exactly why, the word "person." But "patient" emphasizes to me a responsibility to consider a person who is suffering, which is what the word "patient" means.

A second piece is that we have made, I think, a very interesting mistake in our efforts to collect information from -- and this originally was beneficiaries, not patients, because it started with managed care in the CAHPS surveys -- which is we have used focus groups to ask other people what they wanted to know.

In that process, we have often lost sight of the fact that the patient knows things about the quality of care which we can't get in any other way.

DR. STEINWACHS: And they put the pieces together.

DR. JENCKS: Yes. I mean, if you want to know whether somebody was given enough education so that they understood their choices and so they knew what they were supposed to do when they went home and whether they understood that they had an appointment when they left the hospital, you'd better ask the patient.

MS. McCALL: Right. Better ask them.

DR. JENCKS: The record is not the place to get this information.

And so I think a second part is to go back and ask the patient. It's a Francis Peabody thing. "Listen to the patient. They will tell you what is wrong."

[Laughter.]

DR. JENCKS: So those would be two pieces that I would really focus on if we wish to become more patient-centered.

But I think the other part is the IT. I think the IT actually is vital, because the notion that there is a patient record which is independent of the place where you're seeing the patient is really important.

I was in the Commissioned Corps for 33 years, and I got accustomed to the proposition that I carried my record around with me.

DR. STEINWACHS: Not anymore?

DR. JENCKS: What?

DR. STEINWACHS: Not anymore?

DR. JENCKS: I still do. It requires some training of the civilian physician. Anyway -- but that's another piece.

DR. STEINWACHS: Well said, Steve.

MR. HUNGATE: Bill, then Anna and --

MR. SCANLON: I was actually having the same question before Don asked it and you responded -- exactly what is the patient-centered focus going to involve in terms of a transformation? And your answer helps a lot in terms of thinking in those dimensions.

But it also raises the question of whether we're on dangerous ground if we think that we're going to make a complete substitution, because in some respects, the patient-centered focus is so much more difficult to achieve.

When I first saw your toolkit, it seemed to me the tools were a lot like the tools we've got today, in fact, if they're not identical. But you've given the re-definition of these tools that's helpful.

But it's going to be harder to get to trying to apply these with a patient-centered focus all the time, and we talked yesterday about not letting the perfect be the enemy of the good, so we need to think about this, in some respects, as a continuum -- move in the direction, but we're not going to get there.

DR. JENCKS: I think that's really well put. I mean, this is a journey. And not only that, but we spend a fair amount of time actually thrashing this back and forth because we were asking: What should CMS be doing to push in this direction? And there are some analogies to the problems of disparities, which is also a very patient-centered issue.

And what we concluded was that we wanted to make sure that we were looking at policy changes and asking what their impact would be on whether care was patient-centered. And for the moment, that would be a fairly powerful place to start, you know, so that, for example, if you're talking about pay for performance, are you including CAHPS-like data as a starting point in measuring performance?

And, you know, rather than starting off and saying, well, the way we're going to fix this is we're going to pay people to have happier patients, instead saying, well, if we're going to pay people for something about how it was done, we are going to include this, aren't we?

MR. SCANLON: And I think, you know, that bringing up CAHPS kind of raises a good sort of illustration of the difficulty we're facing here. I mean, we're talking about the electronic health record as being an incredible source of information that we're going to be able to use. Now when we switch to CAHPS, we suddenly don't have the electronic health record potentially.

And when we think about paying for performance using CAHPS, it's much different to get information on half a million physicians than it is on 4,000 hospitals. That's, I think, one of the challenges we have to think about.

DR. JENCKS: But in 10 years, what do we envision about the feasibility of putting a CAHPS instrument on somebody's computer screen when they do their email? I mean, I don't know the answer to that question and I don't know that it's necessarily the best way to do it, but I am saying we need to think that the world we're moving into is not the one we're in right now.

DR. STEINWACHS: We'll knock a dollar off your co-payment if you answer the questionnaire, right?

DR. JENCKS: How about that one?

MR. HUNGATE: Yes, Anna, go ahead.

MS. POKER: I just had a clarification question, and I wasn't too sure that I understood. When you say you don't want duplicate measures with different specifications --

DR. JENCKS: Yes.

MS. POKER: -- what are you saying exactly, Steve, with that?

DR. JENCKS: Well, let me give you an example from the work we did reconciling with the Joint Commission, which was when we went in, for example, there was a difference in the sequence in which you would look in the record for a particular piece of information. So their specification said, if I recollect, you look in the doctor's order sheet; then you look in the progress notes, and then you look in the nurse's notes.

And the other specification said, first you look at the progress notes, then you look at --

Well, you know, they looked at it and they said, "Give us a break!" But when you actually went and audited, if somebody had done it under this one and you audited under this one, they didn't look so good.

So that's the kind of thing I mean about specifications.

MS. POKER: Sequence, also.

DR. JENCKS: Sequence is an example, but --

MS. McCALL: There's another, which is how you literally -- what's in your numerator? What's in your denominator? How do you calculate them? How frequently do you do them? What is an outlier? What do you do with missing data? All of those things that say, look, I may measure the percent of people that come out of the hospital, and yet there's a lot of detail that goes into completely specifying that, and it can drive you nuts, and you will be comparing apples and oranges.

DR. JENCKS: If you can't find a piece of data, does that mean it wasn't done? Or does that mean you exclude the case from both the numerator and denominator?

MS. McCALL: What do you do?

DR. JENCKS: Both are highly defensible, but they're not the same.
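[A minimal illustrative sketch, in Python and with hypothetical record values, of the point just made: the two defensible missing-data rules are complete specifications that produce different rates from the same cases. Nothing here is from the discussion itself.]

    # Hypothetical data: True = documented as done, False = documented as
    # not done, None = no documentation found in the record.
    records = [True, True, None, False, True, None, True, False]

    # Rule 1: if the data can't be found, count the case as "not done."
    rate_missing_counts_as_not_done = (
        sum(1 for r in records if r is True) / len(records)
    )

    # Rule 2: if the data can't be found, exclude the case from both
    # the numerator and the denominator.
    known = [r for r in records if r is not None]
    rate_missing_excluded = sum(1 for r in known if r) / len(known)

    print(f"Missing counts as not done: {rate_missing_counts_as_not_done:.0%}")  # 50%
    print(f"Missing cases excluded:     {rate_missing_excluded:.0%}")            # 67%

[Same care, same records; the specification alone moves the measured rate.]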

MR. HUNGATE: I had a couple of other questions of content. In your graph on certainty, is that from the health care system view or the patient view?

DR. JENCKS: It's from the view of people writing measures. It's not even the system.

MR. HUNGATE: It's just the measure certainty?

DR. JENCKS: Yes. The people who say, "I think we have to re-evaluate this measure," which looked like a very good idea six months ago.

MR. HUNGATE: Good.

MS. McCALL: Back to that particular picture, I would say that what we have on that picture is -- whose point of view is it?

Sometimes we have what I'll call a keyhole view, okay, because I would say that that perspective is probably God's view. You know, that's the only one that can step back far enough to kind of see that actually that's where we are. You know, we're kind of going like this.

The other thing is that those time horizons, they're so long that I would bet you that sometimes what it's going to feel like is zooming in on kind of that curve right there, that sigmoid branch, and that what we will feel are the G-forces of that acceleration at certain times over a very compressed period of time. Our certainty is going to ramp up, all right, and it's going to be like [ahhhhh].

And then there are going to be times when it's going to feel like we're falling right through the floor.

DR. STEINWACHS: Sounds like my investments in the stock market --

MS. McCALL: Exactly.

DR. STEINWACHS: -- and retirement accounts.

[Laughter.]

MS. McCALL: So I would submit to you that that time horizon's actually very long and that what we need to design mechanisms for is to keep from falling into one of these stock market crashes and reacting to what is going to feel like very fast acceleration or deceleration or, you know, kind of climbing and falling, to get to the very point that you're talking about, which is where are the points that you set them? How do you react to what is going to feel like sweeping change in a short period of time?

MR. HUNGATE: Another question. When you present your business case, I have the feeling that maybe that's a value proposition. Would you care to comment on that? Is that structured in such a way that it starts to talk about this value question that has come up?

DR. JENCKS: Well -- now you're talking about this. I really went through that fast.

MR. HUNGATE: You had one that you called the business case per se which -- at least I think you did, yes. There it is. Focusing on effective care, eliminating ineffective care. That's kind of a value proposition, that the value of effective care is far higher than that of ineffective care.

DR. JENCKS: It certainly is a value proposition, but I think more value in the sense of value purchasing than in the sense of your values are different from mine.

I don't know very many people who think that delivering ineffective care is desirable.

MS. McCALL: Yes, it's true, but it's a truism, so it's not terribly useful, you know, to say that effective care is more valuable than ineffective care.

DR. JENCKS: I don't think this is a big debate.

MS. McCALL: No.

DR. JENCKS: The debate is how much money is in there?

MR. HUNGATE: Right.

DR. JENCKS: Is it enough to bring at least the growth rate back under control? You know, and there would be different opinions on that.

MR. HUNGATE: Right. I have one last question before I relinquish -- we may be enamored with the need for a trusted advisor for patients, that the complexity of the system we're going to have is going to mean that a patient is vulnerable and having a hard time grappling with it, even if it's patient-centered, and that we're going to have to pay attention to the trust factor in understanding how the conduit of information for the patient is going to work, that that's going to be a critical component of achieving a patient-centered system. Does that make any sense?

DR. JENCKS: In two ways. First of all, I think you're reinforcing what I was saying about the basic model of the priest at the bedside, which I think is absolutely central for most patients. And for those of us who have been sicker patients at some time, it's a very interesting experience how much it changes sort of how you approach the world. So, yes, in that way.

There was a time, of course, when we thought that was the job of the primary care physician. There is sometimes an effort to substitute a nurse practitioner as somebody to whom people can at least talk and who has time to listen as well as talk.

I think it's absolutely true. It's really one of the malignancies of the current payment system that we have so emphasized procedures. You know, it's like the lawyers talking about the tyranny of billable hours -- the tyranny of procedures.

MR. HUNGATE: I think it's also interrelated with the certainty of available information.

DR. JENCKS: Yes.

MR. HUNGATE: The information deficit impacts on the strength that can be developed.

DR. JENCKS: Absolutely. There's going to be a huge information deficit. This is again one of the reasons why the role of the priest is so important, you know.

In fact, our reliable information about what happens on the other side of that great divide is very, very small.

MS. POKER: Steve, having chaperoned us through these two days and having your vast experience -- well, I mean, if you look at it with Steve Jencks's experience but maybe from the perspective of a potential patient, or historically when you were a patient, what do you see? What would be the overarching agenda or issue that you would wish for this group to take on, for the short or long term? Like what sticks out for you?

I mean, you just kind of alluded to -- did I hear reimbursement? -- that we haven't really talked about, but, I mean, what would be your wish, just to kind of, so we get the --

DR. JENCKS: Yes. No, I actually think that what I've been hearing from the group is very close to what I would see as the right direction to go, that I think that this is something which desperately needs to be done, particularly if it can be done in what I would describe as both a highly scientific and a highly humane way, and that this group is probably uniquely positioned to do it and that there's nobody else really doing it.

And so I would really encourage you to see if you can find that as something, you know, over the next 18 months, let's say, to really develop a set of recommendations as to how we carry forward into this future in a way that makes it more likely to work.

MR. HUNGATE: Related question. It's a big task, in my perception, that's laid out there. Finite resources.

What resources do you think we'll have to tap in order to achieve that kind of a work product in a reasonable period of time? It's the doability question -- you know, what have we got to bring that sort of thing --

DR. JENCKS: Let me put this two ways.

On one level, the need is sufficient and the current efforts from others are sufficiently weak so that I don't think I would say, well, it's only worth doing if you can bring X, Y, Z into alignment.

The other side is that I think there are a lot of people who are really interested in this problem. They see a future coming. They think it has a lot of potential, and they think we're not planning very well for it.

That's a major contrast to most other aspects of the future where people think it's a disaster coming and we have to do something to avoid it. I think there's a sense of hope and opportunity so that I think you can get people to help you, that people will be very interested, and very varied people.

One of the things that's fascinated me is -- I've been working in the last couple of weeks trying to get some aspects of the Surgical Care Improvement Partnership ready for roll-out, and the interest of people in participating -- because they see that, you know, this is something with a five-year time horizon; we're not planning to do something tomorrow afternoon -- is really dramatic.

And I think if you said to people, you know, and the folks in this Workgroup represent a variety of people that you could go back and say this to: Look, we're starting to try to think about how you plan the informatics revolution and the quality revolution so that they support one another. We're not trying to plan the revolutions -- we think they're going to happen. You know, we might think it could go a little faster, but we think they're going to happen, but how do we make them fit together? My bet is you'd get a lot of people who wanted to help you in one way or another.

And then one of the limitations becomes, and I don't know exactly how you work this, you know -- how can you organize the people who would be willing to help? I mean, I know you're able to do a little bit of commissioning of papers, but I think it's pretty --

MS. GREENBERG: Yes. Even -- well, more than we used to be able to, although, I mean, again we have finite resources. But one thing I'm wondering is if the Workgroup should engage -- I mean, we have Susan working with us and sort of trying to pull this together for, you know, work planning purposes, et cetera. And, you know, you might want to continue to have a writer involved so that it -- in fact --

MR. HUNGATE: Absolutely. It's invaluable.

MS. GREENBERG: -- so it comes out, it's readable and all that.

But whether we would want to engage a real expert in this area to kind of work with the Workgroup, I mean that's one option.

And I'm not saying that we don't have experts around the table and that we're not all dedicated, but someone for whom, you know, that's a piece of work for them and they're committing 25 percent of their time or something. I mean, it depends who it is and whether we could afford it, but --

The other thing I'm thinking -- I mean, once you think this through is -- I don't know that Steve had this in mind and I know that this is not a time in which people are overflowing with resources, but it may be that some of the agencies in the Department who are very interested in this and have, you know, a lot invested in this might also be willing to put some resources. Certainly, I think they will be willing to put some staff resources. But if we needed more resources for, say, hearings or consultants or whatever, they might be willing to put some money into supporting that. Or, the Office of the Secretary, which already is the source of our funding for contractual work as opposed to the care and feeding of the Committee, might find this sufficiently worthwhile.

I mean, we're not talking about millions of dollars, obviously.

MR. HUNGATE: No.

MS. McCALL: I also think that part of the deliverable we should have, as part of a specific work plan, is the resources and the mechanisms that we think we need in order to execute on it.

MR. HUNGATE: Sure.

MS. McCALL: So, I think you're right. I think that there's a lot of enthusiasm. I think we can tap into that vein. And I don't think it's a lot of money. I think a lot of this is sweat equity on the part of people who are going to be involved.

I also think that part of the work plan -- there's nothing like creating your own future, and that when we think about some of the mechanisms that we want to create, or the kind of "who's going to do what" that Jerod was talking about, those entities will be impacted in some way, for better or for worse, I don't know how, but, by God, they'll want to have a stake in that future. So there's a natural, I would hope, desire to work.

MS. GREENBERG: Yes. I mean, if you look at, like, the e-prescribing effort, which was just one area but it was big -- and I mean it did require having a lot more meetings than the subcommittees normally have, almost every other month -- most of the people came in and testified on their own dime.

Strategically, people we need who need support, we support, but generally you find that people rally around this, and so as long as you're kind of identifying what needs to be done at a higher level and then not trying to actually do a lot of the work -- identifying who is doing it or could be doing it, et cetera -- I think it's within the scope of the Committee, too.

MR. HUNGATE: Good. I think this morning has, again, been very strong and very, very helpful in the range of what we need to grapple with. And we've reached lunchtime, and the cafeteria is open, and we should come back at 1:15.

[Lunch break from 12:33 p.m. to 1:22 p.m.]

MR. HUNGATE: Okay. We're about ready to get on to the next round of activity, and Steve's dressing for the occasion, packing up, having done his job admirably. And we are indebted to your help. Thank you very much.

MS. GREENBERG: Thank you very much.

MR. HUNGATE: How are we going to make it the rest of the way without some chaperoning?

MS. McCALL: Oh, now we can really have some fun!

[Laughter.]

MR. HUNGATE: Thanks, Steve.

DR. JENCKS: I'll remember that!

MS. GREENBERG: I told him he didn't have a vote, so he's leaving, I guess!

MR. HUNGATE: Okay. Well, we're doing very well, more good content to add to the list, right?

MS. McCALL: This is great.

MS. GREENBERG: Yes, that may be one thing you want to decide just before you do your ranking, I guess what you want to add from those last two people.

Agenda Item: Perspectives on Morning's Subject Matter --Interactive Discussion – Ms. McCall

MS. McCALL: That's exactly right. Just a little bit of a process check. First, do we have everybody? Just kind of odd to actually not have a discussion where we're all kind of facing the same direction and --

MR. HUNGATE: We can change the table framework.

[Workgroup members discuss, then realign the table for discussions.]

MS. McCALL: So a little bit of a process check. It's now, what, 25 after? And we want to be done by, ideally, when?

MR. HUNGATE: Well, we've got to be done by four.

If we are able to finish before, but fine, but we'll stop at four.

MS. McCALL: That's a pay for performance!

[Laughter.]

MR. HUNGATE: That's right. We'll dock their pay if they don't get done on time, right?

MS. McCALL: That's right. Nothin' from nothin' still leaves nothin'!

Okay. And I think to Marjorie's point, we have a couple of things. And what we've said before is we kind of went through Jerod's and Steve's and Carolyn's. We have had some additional input. And I think like we did this morning as we started, we want to see what we take away and what we would add, so --

And that's why you have these sheets here, okay?

So I'm going to make a recommendation in terms of process. You guys give me some feedback; we'll decide how to proceed.

One is to gather our thoughts from those speakers, much as we did today, and essentially add them to this list, all right? And it could be that we may find some redundancy with what is already here, that what we heard just kind of confirmed what we'd already observed before, which is fine. And another, there may be net new adds to the list.

So what you may want to do is just take a moment and kind of reacquaint yourself with what's here. It's in a slightly different format, and a lot of times memory is a very visual memory, so the way it looked up on the, you know, flip charts may not be the same that's here.

So I would recommend just take a couple of minutes. Read what is here on these pages. Reacquaint yourself with them. And then we'll go and we will add our commentary to these, all right?

[Workgroup members spend several minutes reading the material.]

MS. McCALL: Okay -- how are you doing? Are you guys ready?

What we are going to do now -- you may want to leave some room on your pages so that you can almost mimic in a way that you're comfortable with. I'm going to put them up here, also on a flip chart, record the comments. We're going to put them all on the back wall because that's actually where we're going to do this nominal group technique, and our voting.

But to actually enable that process, you're going to want to make little notes on your own sheet that kind of mimic what I'm going to do so that you can do some private voting so that when you come up to the wall, you know, you kind of know where you want to go.

Okay, so let's just continue doing what we had done before. I don't know what sheet we're on; I think maybe 10.

Think back to Carolyn and to Steve and to Jerod. What struck you as --

MR. SCANLON: One of the things from Carolyn was the idea of rewarding excellence, and the outliers. Truly the ultimate in quality, but it's not what we've been looking at. We've been looking at how you would treat the population.

MS. McCALL: So, the point being that we need to look at how to reward excellence?

MR. SCANLON: Right. And for Jerod, I think the key thing for me was the issue of the maintenance process -- how do you keep things current?

MS. GREENBERG: Maintaining and reconciling --

MR. SCANLON: Right.

MS. GREENBERG: -- he says, maintaining and reconciling. And also, who sets the priorities?

MS. McCALL: So, how to maintain and reconcile metrics as one? Another that I think you said, Marjorie, was some sort of forum for setting priorities.

MS. GREENBERG: Who sets the measurement priorities? Those were the two main things --

MS. McCALL: Right.

MS. GREENBERG: -- I took from him.

DR. STEINWACHS: Carolyn also raised --

MS. GREENBERG: And how are they set? I guess that's sort of "who," but --

DR. STEINWACHS: -- the distinction between what we do now, which is looking at disease indicators, versus a Baldrige kind of approach.

MS. McCALL: Can you speak up?

MS. GREENBERG: We can't hear you.

MS. McCALL: We can't hear you.

DR. STEINWACHS: You can't hear me? Oh -- it's the Baldrige approach to quality versus indicator by indicator, disease by disease.

And the other thing she was very heavy on was prediction. It's not explanation --

MS. GREENBERG: Carolyn.

DR. STEINWACHS: -- but what are the predictive variables? What is it that we are looking at that predicts performance, as different from looking back and sort of disentangling?

MS. McCALL: Okay, so say that one more time. I want to make sure that I catch it right.

Yes, she did talk about prediction, but the need is to --

DR. STEINWACHS: To have indicators that predict future performance.

MS. GREENBERG: She said, do we need to learn more about predictive power?

DR. STEINWACHS: That's right. I think that was her point, was that, you know, a lot of our measures may be better at sort of identifying deviations than predicting what we're looking for as what's supposed to come out of this system.

MS. GREENBERG: Do we have the educational role of NCVHS? Did that come up after we did this, or is that already on here?

MR. SCANLON: Yes, we have a role on education.

MS. GREENBERG: At the end there? Oh, yes. We have here already -- we have a role in education. Then it says "translating successful experiences." But also I think explaining the value propositions, the gaps, the needs, the vision -- you know, it could be quite a broad educational role, so I think it's too limited here just talking about translating successful experiences.

MS. McCALL: So, just --

MS. GREENBERG: The educational role. We already had on here we have a role in education, "translating successful experiences," but I added to that explaining the vision, the value proposition, the need.

MS. McCALL: So, there's a role in explaining and educating on vision --

MS. GREENBERG: Vision, value. As Susan said, this is something we actually talked about, and we even brought in some marketers at one point to talk with us about whether the Committee should undertake some kind of a campaign on the value of information for research and public health, et cetera, because of some of the disconnects between the need for the data and then some of the privacy concerns, but we didn't pursue it, but anyway --

It's not a new discussion, but I think it's taken on new relevance here.

MS. McCALL: Okay. What else? Yes?

DR. CARR: What struck me today and actually yesterday was that we are all aware of the inadequacies of the current system, and I forget who called for this -- maybe it was Jerod -- but if we cut to the chase of the next step, we can resonate with the inadequacies of the system and maybe think of a few, or integrate what people are saying.

But I'm wondering if our role isn't to call for a national coordinator of quality, and there is interest in sort of how that came about with NHII or David Brailer's thing. But was it Jerod who said it, that just to have a final arbiter of what are the priorities, how are they measured, what are the characteristics of measures, so that we don't have the growing Tower of Babel in some ways -- different words, different languages, different goals, different metrics?

And I think actually it was Steve that said the volume of factors is becoming unsustainable. And so, I mean, my take-away from this is that there needs to be an organizing oversight of all of these agencies. They're doing great work but -- and they need to be linked.

MS. GREENBERG: Can I -- or are we just brainstorming now and not --

MR. HUNGATE: We're just getting the list.

MS. GREENBERG: I'd like to comment on that unless -- okay. One is that this wouldn't be oversight -- like Brailer; you could make the analogy there, but he is really supposed to be a coordinator, not an overseer or an orchestrator.

DR. CARR: A coordinator is needed here, too, I think.

MS. GREENBERG: On the other hand, we have the Agency for Healthcare Research and Quality, and they specifically put "quality" into its name a few years ago, and they're supposed to have the lead on quality, where sort of nobody had the lead on HIT.

But he did say at the Office of the Secretary level; that was his recommendation.

So I guess it's a question of whether AHRQ should be further empowered or whether it should be a separate entity.

DR. CARR: I'm just saying the concept that there is a growing entropy and volume that is not sustainable and some entity --

MS. GREENBERG: On performance measures?

DR. CARR: Yes. And what we measure, why we measure it, how we measure it, you know, and how we react to it. This is all of what we've heard in these two days.

And there is no single person or entity. So I'm not saying it has to be a new person or David Brailer or anything like that, but somebody has to be accountable at the end of the day.

MS. McCALL: It's kind of like you have to decide how we're going to decide, right?

MR. HUNGATE: That's a thought, but it's a debatable subject.

DR. CARR: Yes.

MS. McCALL: Okay, great. All right. What else?

DR. STEINWACHS: Like a group like us, right?

[Laughter.]

MR. HUNGATE: I have a couple of things. She made the observation that the registry function is missing in EHRs, and I thought that was a very important observation.

MS. GREENBERG: Was that Steve who said that?

DR. CARR: That was Carolyn.

MR. HUNGATE: Carolyn.

MS. GREENBERG: Carolyn.

MR. HUNGATE: Steve also -- or Jerod -- made the comment, "measurement is a byproduct of the care process." I think that's an important thing which I think is aligned with Steve's observation that we should focus on quality improvement, not on quality per se. It was a clear recommendation that he made, and it's an important distinction. You know, they're interrelated.

MS. GREENBERG: I guess that relates to his statement that we should reconcile systems thinking with scorecards and ranking.

MS. POKER: You know, I'm just going to say something that -- Carolyn and Steve actually reacted to what we put on the boards for them. In other words, we gave them a list of what was presented to us on the first day and they reacted. I'm not sure if we would have asked them a different question -- we didn't -- what would they like? Well, Steve did get that question, of the wish list.

I think Carolyn was specifically -- I'm assuming -- addressing what we had addressed, you know, reacting to it. It'd be interesting to get her comments without that list, what her wish list would have been.

MR. HUNGATE: Well, she added: What should I look for in a doctor?

MS. POKER: Which was in addition to?

MR. HUNGATE: Clearly in addition to. So I felt that she did add the things that --

MS. McCALL: Okay. All right, we're going to spend just a little bit more time on this. I'm going to ask the question a slightly different way now.

Think about what we've heard from these three people today, and I want you to think back to what's on our list so far. And I want you to look for things that aren't already there. What is brand new that you heard or saw that we have not captured so far?

MS. POKER: I was fascinated by that bundles concept, and that falls under the education piece, because I think that one of the things that we're missing is how to provide -- how should I put this? -- practitioners with the right bundle, maybe the right evidence-based data, so they do the right things at the right time? It's kind of hard to articulate, but --

MS. McCALL: Yes. I'm going to put down a couple of things. I want to make sure I capture them right.

Within a bundles concept, it's kind of process and outcomes. There were some other things that were mentioned in there. It becomes a knowledge management issue?

MR. HUNGATE: Right.

MS. POKER: Something that Kelly alluded to.

MR. HUNGATE: Integrated, self-reinforcing, synergistic.

DR. CARR: I'd like to sort of call the question of -- I mean, this is all stuff that's going on very much in ICU care and the IHI and all that. I mean, people who are doing this kind of care know what bundles are, or the evidence, and so on.

And I just wonder -- all of the things we heard today were tremendous, with great value, but I don't think this Committee has a role in adjudicating which of these techniques or data elements -- I mean, again I go back to: just build the bridge with the electronic record and have someone in charge of quality.

MS. McCALL: Right -- okay.

MS. POKER: I was responding to her question which was what did we hear?

MS. McCALL: And both points are very valid. And actually, as soon as we're wrapping up what we heard and what we saw, I actually -- before we go vote or anything like that -- want to come back to that question specifically about what our role is so we can vote properly.

MR. HUNGATE: Marjorie?

MS. GREENBERG: Well, in relationship to that, I guess, you know, a question, and I guess what will be discussed, is whether in fact the Committee does feel it's its role, having gotten all this input, to do what Don Detmer suggested, and that is to be kind of the keeper of the vision, which might actually require kind of laying out the vision.

And so part of that vision could be, you know, not counting things so much as looking more at these care -- I mean, I could see the clustering thing being part of that vision maybe but -- or whether you just want to make some very broad statements.

I did hear, I think universally, and actually did at the 2004 hearings, too, though they were more specific, that there wasn't anybody who really said, "We don't really think this is an area that the Committee should be playing in."

And not only that -- I mean, they did say, look, there's a lot of people working in these areas, and you obviously don't want to be duplicative, or you need to partner with people, and Steve made that point at the end. But there did seem to be an overall appreciation for the fact that there is kind of a void in pulling this all together, particularly the systems/quality interface and the informatics/quality interface, et cetera, and that there wasn't a really major competitor to this Committee for trying to facilitate that, and that this Committee, you know, could very much be the appropriate one.

So exactly what the "it" is, is not entirely clear, but I think that's reinforcing, because I do think that's a major thing we want to get out of these two days -- is there really a role for the Committee, or is everybody kind of, you know, on top of things?

MS. McCALL: Okay. Anything else?

MR. HUNGATE: Yes. Steve mentioned metrics for the transformation.

MS. GREENBERG: Or what?

MR. HUNGATE: Metrics for the transformation. In other words --

MS. GREENBERG: Oh, right.

MR. HUNGATE: -- if we're visualizing that 10 years out it's going to be different, how are we going to know we're making progress? And I think that's an important observation in content.

MS. GREENBERG: You asked what's new that wasn't already on here --

MS. McCALL: Yes.

MS. GREENBERG: -- that we thought that we'd mentioned? Because I do think some of the things that Jerod said about how do we set measurement priorities and what's the process for maintaining and reconciling measures, I don't think we really talked about that much yesterday.

MS. McCALL: Okay. Yes, and I think we've got those captured.

MS. GREENBERG: I think those are definitely new ones.

MS. McCALL: Yes, they are new, absolutely. Anything else that's new?

MR. HUNGATE: In reading the list, I didn't see the software module idea, but it came back up again in conversation today.

MS. McCALL: Okay.

MS. GREENBERG: The what?

MR. HUNGATE: Software module. You know, once you learn how to manage a set of detail, does it get implemented in a set of code which could be put into various systems? It was a conceptual idea that --

MS. McCALL: Software, SW. So I'll write down the idea of metrics as a "software module." I've got it as a companion to EHRs.

The one thing, the comment that I would make as a participant here, is that this concept -- Steve had talked about owning a metric, okay, and owning a metric, I think, all the way down to the bedrock -- which is to say what it is, why it is, its complete specification -- can be done here, and you can also assign ownership: who owns that particular piece that's going to come in as some sort of patch or whatever it is, whatever module.
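[A minimal sketch, in Python, of the "metrics as a software module" idea being discussed: a measure whose complete specification -- what it is, why it is, numerator, denominator, missing-data rule, and owner -- travels with the code that computes it, so the same module could be dropped into different systems. All names and fields here are illustrative assumptions, not an actual specification from the meeting.]

    from dataclasses import dataclass
    from typing import Callable, Iterable, Optional

    @dataclass(frozen=True)
    class MeasureModule:
        """A quality measure packaged with its complete specification."""
        name: str                           # what it is
        rationale: str                      # why it is
        owner: str                          # who owns and reconciles it
        in_denominator: Callable[[dict], bool]
        in_numerator: Callable[[dict], bool]
        is_missing: Callable[[dict], bool]  # the missing-data test
        exclude_missing: bool               # the missing-data rule, made explicit

        def rate(self, cases: Iterable[dict]) -> Optional[float]:
            denom = [c for c in cases if self.in_denominator(c)]
            if self.exclude_missing:
                denom = [c for c in denom if not self.is_missing(c)]
            if not denom:
                return None
            return sum(1 for c in denom if self.in_numerator(c)) / len(denom)

    # Hypothetical use: the module is self-describing and portable.
    beta_blocker_at_discharge = MeasureModule(
        name="Beta blocker prescribed at discharge",
        rationale="Evidence-based therapy after acute myocardial infarction",
        owner="(whoever is assigned ownership of this metric)",
        in_denominator=lambda c: c.get("diagnosis") == "AMI",
        in_numerator=lambda c: c.get("beta_blocker") is True,
        is_missing=lambda c: "beta_blocker" not in c,
        exclude_missing=True,
    )

    cases = [
        {"diagnosis": "AMI", "beta_blocker": True},
        {"diagnosis": "AMI"},                          # documentation missing
        {"diagnosis": "AMI", "beta_blocker": False},
    ]
    print(beta_blocker_at_discharge.rate(cases))       # 0.5 -- missing case excluded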

MS. POKER: Well, there's something -- there was sort of an allusion to this topic, and actually I think it kind of came from Justine. It was about cloning the New England group. And it was kind of a joke.

But I thought: Is that something --

DR. CARR: That was cloning John Halamka, imitating the Northern New England group.

MS. POKER: That's it. Right, but taking something that's excellent and replicating it or linking it up with people who are more challenged.

And I guess I heard it because this has been through my head, but it was kind of referred to.

DR. CARR: I think it was Don, who's very high on that. And I think that group, you know, has created a model of collect data, analyze it, and then modify behavior, and then collect more data.

But I would say, you know, what's interesting in that group is that when you get the data back -- I think there's seven hospitals in it -- you only know who your hospital is. They don't tell you who everybody else is. That was sort of interesting. But they are a collaborative and they work together.

It certainly has been a model. They were innovative, and the concept, I think, is the one. It doesn't have to be necessarily NNE.

MS. McCALL: Right.

MR. HUNGATE: I want to be specific in the semantics here because I think the New England collaborative project is different from the Mass RHIO, right?

MS. POKER: Yes, yes, it is.

MS. McCALL: Oh, I'm sorry. I thought you were talking about Northern New England.

MS. POKER: That's what I'm talking about, yes.

MR. HUNGATE: I believe we are.

DR. CARR: Oh, the RHIO, that's the --

MS. McCALL: Northern New England Cardiovascular Study Group.

MS. POKER: I was talking about this one, yes.

Maybe one of the things we could do is take that model, and how can we replicate it? Or can we kind of write maybe a recommendation for short term, ways to replicate something that's working?

DR. CARR: It's interesting that it hasn't been followed so much, but the concept is that you refine, you measure something and you do something with the information, and you measure it again. I think that's really the take-home message.

And where we are today is we have lots of measurement; we have no idea what happens.

And the other thing with NNE is they link it to mortality. They have five factors looking at mortality.

MS. POKER: But there has to be more to it, Justine, because you know what? I think there's something else they're doing. They're building a good organizational culture. They're building partnerships. They're doing other things that work that obviously don't work in other places.

So maybe a recommendation is to analyze that -- not by us, of course -- and push it out there. I mean, just a thought. I don't know --

MS. McCALL: A couple of thoughts that come up. You mentioned the word "culture." So there was some discussion earlier about kind of the cultural readiness for measurement, particularly on the part of physicians. So I think that there's a need to look at what I'm going to call here "cultural readiness" among physicians.

MR. HUNGATE: Or even to state it as "understand the essentiality of cultural change."

MS. POKER: Yes, and it's not only limited to physicians, I would hazard a guess.

MS. McCALL: Okay. There was another thing Steve mentioned which I'd like to put up which is essentially "patient-centeredness" and all that that translates into.

So, look at patient-centeredness and the impact on metrics et cetera.

There was one more, and it was they were using a lot of analogies -- airline industry, education -- and I think there's an opportunity to look at outside ourselves for some wonderful analogies and bring those in.

DR. CARR: Well, it's interesting. It didn't come up today, but, I mean, team training is one of the major safety initiatives, and there's tremendous involvement and partnership with United Airlines as well as other members of the airline industry. And I think it's being put forward by a couple of national groups -- maybe the American College of Surgeons, I think, and --

MS. GREENBERG: What's it called?

DR. CARR: No, it's part of Don Berwick -- isn't that part of the IHI thing, team training?

MR. HUNGATE: Probably.

MS. GREENBERG: What is IHI?

DR. CARR: Institute for Healthcare Improvement. Finally, I know an abbreviation that Marjorie doesn't know!

[Laughter.]

MS. McCALL: Yes, put it on the list.

MR. HUNGATE: It's Don Berwick.

DR. CARR: Don Berwick, Institute for Healthcare Improvement.

MS. GREENBERG: Oh, I know him. Institute for Healthcare Improvement.

DR. CARR: Yes. There have been allusions to it. Maybe everybody didn't know this, but at his annual meeting there's always a very thoughtful kind of call to arms, and this year's call to arms was based on the IOM report of 98,000 preventable deaths.

So he just rounded it up to 100,000 preventable deaths that we're going to prevent this year, and people were asked to pledge their support of pursuing the five things that could reduce the unnecessary injury or mortality, and they included line infections, ventilator-associated pneumonia -- what's the other infection? -- surgical site infection, then, let's see, rescue teams.

You know, somebody who's getting worse, that you have just a S.W.A.T. team that comes in, all hands on deck. And maybe the other one --

MS. POKER: One more, I can't remember.

MS. GREENBERG: Is it all for hospitals?

DR. CARR: Yes, it's all hospital-based.

And Blue Cross in Massachusetts has given each hospital $35,000, and you put up $35,000 of your own matching funds to do this.

Interestingly, there is no metric -- or rather, the metric is: What was your mortality last year and what is it this year? And there's no consequence -- like if you're already doing four out of the five things, you're not going to get that impact.

MR. HUNGATE: My suspicion is that we're going to be thinking of other things later on as we go through --

MS. McCALL: We will.

DR. CARR: We will.

MR. HUNGATE: -- that we'll add to this and --

MS. McCALL: Okay. All right, so I think we'll round it out at the lucky number 13. To me, I like prime numbers; they're just great.

MR. HUNGATE: I agree with prime numbers but usually skip over 13.

MS. McCALL: Oh, do you!

[Laughter.]

MS. McCALL: Okay, before we actually go through and try to make some choices here, I think it would be important to go back to what Justine brought up, and I think what we've been talking about throughout is our role.

Agenda Item: Organize Issues for Work Plan, Narrow Choices – Carol McCall

And so I think it's important for us to think about not a vision statement, not a charge statement, but just in particular: What are the things that we can do, all right? What is it our role to do? What's our role?

And so let's just capture just fairly quickly some words that we think describe what our role either is or could be, okay?

DR. STEINWACHS: One is providing a vision.

MS. McCALL: Okay.

MS. POKER: Oh, I have a good one.

MS. McCALL: What's another?

MS. POKER: I love this one. It's "brokers for change." Or maybe not brokers, but a way for us to bring in change, you know?

MS. McCALL: Okay, so we are a catalyst?

MS. POKER: Okay. Catalyst for change.

MS. McCALL: What do we do?

MS. GREENBERG: Educate.

MS. McCALL: Okay.

MR. SCANLON: Maybe related -- my sense is we need to manage expectations.

MS. McCALL: Okay.

MS. POKER: What did you say? I didn't hear you.

MR. SCANLON: You can manage expectations. It's a key part of this education, because there are probably very unrealistic expectations on all parties' sides, and that will be a great detriment to making progress beyond the first steps.

MS. McCALL: What else do we do?

MS. POKER: I think we need to gather data. Before we educate, we have to kind of collect information, or gather it.

MS. GREENBERG: Sort of -- the word I'm thinking of -- reaching out to stakeholders, but, I mean, that's part of gathering the information, but sort of --

DR. STEINWACHS: Convening.

MS. GREENBERG: Convene -- definitely convene.

MS. POKER: Convene.

MS. GREENBERG: See where there's consensus. I'm not sure about adjudicating.

MS. POKER: No.

DR. STEINWACHS: Marjorie, really that's you!

MS. POKER: Would you say identify major issues or identify commonalities or --

MS. GREENBERG: Identify consensus and gaps.

MS. POKER: Yes.

MS. GREENBERG: Steve said don't just identify gaps. He said, "Don't be timid."

MS. POKER: Yes, but Don Detmer said --

MS. GREENBERG: Yes, so we're not being timid, but just starting with a vision.

MS. McCALL: Okay.

MS. POKER: -- be the custodian of vision and gaps.

MS. GREENBERG: That's right, he did.

MS. McCALL: Okay. What else? What else do we do? So let's say we do all this, like all the words on the page.

MS. GREENBERG: Well, advise, yes. Clearly, that's what we're supposed to be doing.

MS. McCALL: Advise.

MR. HUNGATE: Advise the Secretary.

MS. McCALL: Advise. Okay, who do we advise?

MR. HUNGATE: The Secretary.

MS. GREENBERG: Secretary. But always it's heard outside the -- I mean, there's a long history of that.

MR. HUNGATE: And the public.

MS. McCALL: Okay.

MS. GREENBERG: But as for actionable items, they really do need to be actionable by the Department -- I mean, some of them do -- because that's whom the Committee is advising.

MR. HUNGATE: Right.

MS. McCALL: So, we advise the Secretary for action. We advise the public for what? Education?

MR. HUNGATE: Yes, education.

MS. GREENBERG: Or, you know, action, too, because --

MR. HUNGATE: Catalyzing change. Most changes come from --

MS. GREENBERG: -- as goes the Department, often so goes the nation.

MR. HUNGATE: -- the ground.

MS. McCALL: Okay. What else? Anything else?

MS. GREENBERG: Well, maybe a vision and a roadmap.

MS. POKER: That's a good one.

DR. CARR: Yes. I just want to make sure that we're very concrete about the fact that there needs to be an intersection between quality and the building of the electronic health record.

MS. GREENBERG: That's the vision, I think, yes.

MS. McCALL: Yes.

MR. HUNGATE: How do you feel about altering that term to "quality improvement" and the electronic health record, as opposed to "quality" per se? You know, it was Steve's suggestion.

DR. CARR: Are we ready to go into that?

MS. McCALL: I think we are. So let's talk about why we did this. All right, next, each of us individually and then collectively is going to opine on all the things that are up on these sheets, and they're a little disparate. But we're going to put some votes, some tick marks, next to those things that we think are the most important.

Now, they can be incredibly important and yet it may not be within our power. We may be concerned, but it's not within our circle of influence. We have to think about -- again, we're developing our own work plan -- what is within our circle of influence, as opposed to our circle of concern, that we think that we can do, all right? So it being important is necessary but not sufficient.

You start and say: I think it's important, I know it's within my circle of influence; I haven't decided how yet -- we'll get there when we get to the specific work plan, you know? It could be important, and it may not be articulated on the page in a way that says we can influence it, but in fact we can, because we'll build a work plan around it that is about, you know, convening stakeholders and finding consensus and gaps. So there could be a work plan that is in fact within our line of sight.

But I think it's important for us to understand the things that we do so that we could sort of relate them to what needs to be done. Yes?

MS. GREENBERG: I'm not sure -- I'm just looking through my notes to see if there's anything we've forgotten -- about this thing that Carolyn mentioned, aligning the health record with the health research record, which was interesting. She didn't really expand on that. And we've talked about identifying gaps.

But I don't know if there's anything explicitly up here about identifying the need for research and advising on where research is needed, and that's partly what we know and what we don't know.

But I think, you know, that kind of ties in with something we've been talking about in the Executive Subcommittee, but --

MS. McCALL: Say it one more time, Marjorie.

MS. GREENBERG: Well, identifying areas that need research and advising on them. And it may even be suggesting priorities, because we don't know everything.

MS. McCALL: Okay, at the bottom of the last page that we did I am going to put just identifying areas that need research, okay, so that we have it, so --

MS. POKER: A research agenda is really, I think --

MS. KANAAN: But isn't that a role, though?

MS. GREENBERG: Well, it is a role to identify gaps, but I didn't think the research piece maybe got picked up in the --

MS. McCALL: Right. And I'm going to put parenthetically, guys, you can't see it but it's going to be an "ERR" -- she talked about an "electronic research record" as opposed to an EHR. Okay, and that may be able to help trigger what it is, that whole dialogue.

MS. POKER: Yes, but I think also what Susan is saying, which I think is kind of important, is that it also falls within our roles. Isn't one of the things that we have to do as a Subcommittee also to feed information into the research agenda?

MS. GREENBERG: We've talked about that. I mean, I don't know if there's a total buy-in, but --

MS. POKER: Yes, we've talked. I just want to put it on the list.

MS. McCALL: Okay. All right.

MS. GREENBERG: We kept hearing our experts saying, though, there are these things we don't know.

MS. McCALL: Okay, all right. So with our role in mind, and we can just leave this up here, and with the priorities that you have on your sheet already and the ones that we just did which, you know, if you've been kind of taking notes or if you want to look -- I think we started with the new ones at Sheet 10. They have been numbered. So, 10, 11, 12, 13 are the brand new ones.

We're actually now going to vote, yes, before we start. You had a question?

DR. CARR: Some of these are not votable items. Some of these are descriptive. Do you want to just see what's on the list and what's off the list?

MS. McCALL: Well, what I would do is I would just avoid them, okay? I know, in an ideal world we would have actually been set up to do something a little bit different, which is we would have gone through --

MS. GREENBERG: Grouped them.

MS. McCALL: -- and grouped them using just what's called an "affinity process," and then we would have done some nominal group techniques off of that.

And in talking with Bob and looking at the clock --

MR. HUNGATE: We've run out of time.

MS. McCALL: -- I think we probably would run out of time. So you're going to have to do some of that, take that into account as you go through and vote. Avoid the ones that you think are redundant.

We will have some after-the-fact work to try to pull other things together that are subsumed in larger points. So as you go through, try to vote for the --

MS. GREENBERG: The one that expresses it the best.

MS. McCALL: Yes.

MS. POKER: So is there a number of things --

MS. McCALL: How many votes do you get?

MR. HUNGATE: How many votes do we get?

MS. POKER: -- everything that would be helpful?

MS. McCALL: It wouldn't be helpful if you vote for everything. So the question is: How large a number of votes are you going to get? And I think you get 10.

MS. GREENBERG: This is like Chicago -- you get 10 votes?

[Laughter.]

MS. McCALL: You get 10 votes. Vote early, vote often.

You can put all 10 on one. You can spread them around. It is up to you. If you want to wait and hold back, and you see somebody voting for something that you don't think is right, you can kind of, you know, use yours to counterbalance that. It is all up to you.

MS. POKER: You know what I don't have, though? I don't have the update of the new additions.

MS. McCALL: Understood, and that's why you had the print-out, so if you wanted to write them down, you can do that. They are on Sheets 10, 11, 12 and 13, okay?

MS. POKER: Yikes.

MS. McCALL: They are numbered. Ten and 11 are over here on your left; 12 and 13 are on your right.

MS. GREENBERG: What, the new ones?

MS. McCALL: The new ones, correct.

MR. HUNGATE: And the time allowed is?

DR. STEINWACHS: Ten minutes -- 10 votes.

[Laughter.]

MS. McCALL: Well, actually why don't we take a little bit more than that because there's actually two distinct steps. So I'm going to say 20 minutes, okay?

So take about 10 to go through, make some decisions, and look at what's up here. And then what I'd like you to do when you actually do vote, if you can, is pick a marker that is red, so you may have to share, or maybe we can get some more. And I would like you to do it just with tick marks, all right, so that they're easy to see and easy to count.

MR. HUNGATE: All to the left of the label, right?

MS. McCALL: If at all possible.

MR. HUNGATE: If there's a standard to be invoked --

MS. McCALL: Exactly.

MR. HUNGATE: -- in this process.

MS. McCALL: Okay, any questions? Are people comfortable with what to do next? I'm going to assume yes, great.

[Workgroup members cast votes.]

MS. McCALL: Okay, I guess starting here, these are the votes that came in. What came up top? So what do you think? Does it agree? Do you like it?

MR. HUNGATE: Are there any of those that are not a piece of the vision per se? Is there any observation, anything, we've prioritized there that is different from vision?

DR. CARR: I think the quality IT intersection is a very concrete, tangible mandate. Yes, it's part of the vision, but it's really a call to action that something happen. I see it as, you know, an important separate entity. It needs to happen and no one's doing it. So I would call that --

MS. McCALL: Okay, all right.

DR. CARR: I mean, the vision -- you're saying, are these other things -- well, let's see; what are the things we think go --

MS. McCALL: No, there are other ones that are distinct like that.

DR. CARR: I'm just trying to scroll down on myself.

MR. HUNGATE: Yes, I was just trying to do a kind of grouping --

DR. CARR: Right.

MR. HUNGATE: -- in my own mind.

PARTICIPANT: It would be a help to the Internet people to hear what the list is.

MS. McCALL: Yes. Okay. Through this prioritization process, what we've done is we've now voted, and we have 12 of a much larger number that have surfaced as what we believe to be the highest priorities for our own work, given the need that we heard about.

First one, going from top to bottom, it is the discussion around quality and IT, the intersection between those two.

Second, kind of tied for second place, taking into consideration the continuum and designing for the continuum from accountability to learning that Brent talked about.

Another, also receiving five, the keeper of the vision.

Also receiving five, knowledge management as separate and distinct from that infrastructure.

Receiving four was the whole dialogue around patient-centeredness and how that is taken into account.

Also receiving four was the definition of value as opposed to perhaps the definition of quality as a definition that we would want to work with, define and then bring into the work.

Also receiving four was what we heard today about maintaining and reconciling metrics, all that Jerod talked about.

He also talked about -- and it also received four -- who sets the priorities and how is it done?

Receiving three votes, using IOM information to create a balanced approach.

Also receiving three votes, that the vision should include some sort of minimum. And again, these have become compressed, and there are even fuller words over on the flip charts behind me, and I'm struggling to kind of see them here.

Also receiving three was cultural readiness among physicians.

Finally, receiving three, the role in explaining and educating on vision, value and need.

So those are the 12 that received top votes.

There were others that received votes. They may have received two, they may have received one; all of that information again is captured and available. But these were those that received three or more.

So, summary done. Back to: What is a concrete deliverable? You highlighted this one, Justine, and we're starting to scan for other ones that are concrete, you know, deliverables, things for us.

DR. CARR: I would say verification of who sets the priorities and how.

DR. STEINWACHS: Which are more recommendations. I mean, that's a product from a vision and an implementation plan.

DR. CARR: Right.

MR. HUNGATE: I don't think you can separate that from a statement of a vision.

DR. CARR: Yes, you could. I think that the vision is that we have a systematic progression, as opposed to a chaotic entropy.

MR. HUNGATE: Well, I could argue that point, that I think the priorities will be set in a way from the ground where clinicians self-identify the need.

DR. CARR: I'm just saying our vision can be about quality, but a concrete deliverable is that there is a final arbiter, some place where the priorities are set.

MS. McCALL: Right, okay.

MR. HUNGATE: Well, yes, but I'm saying that's a decision that we haven't made as part of the vision yet, right?

MS. McCALL: No, understood. And what I hear you saying is that that is actually putting something in place so that there is a who and there is a how. That can be a concrete deliverable. It's not ours to build, but to make a recommendation around that.

DR. STEINWACHS: Around that piece of the vision.

MS. McCALL: So this is a concrete deliverable.

This has got some concreteness. This one's got some concreteness to it.

DR. STEINWACHS: Patient-centeredness gives it a focus, too, as it falls at the quality/IT intersection.

MS. McCALL: All right. Other things, other thoughts, as you look through this?

DR. CARR: The educational role I think might be a separate deliverable.

MS. McCALL: This one?

DR. CARR: I guess so. I don't remember it being quite like that, but that apart, I think that's a separate deliverable.

MS. McCALL: Okay, all right. Is there any one of these -- and we're not trying to winnow them one more time; we're just trying to wrap our minds around what they are -- are there any that you think you would eliminate because there's not a role that we have in there?

DR. CARR: Well, I think cultural readiness among docs is kind of very -- I'm not quite sure I understand. I think that's sort of micro. I think, in contrast, patient-centeredness, if in fact we've heard that and we believe that, I think, you know, that makes sense to put that in the vision.

To speak to cultural readiness among docs, I don't know that's ours to change. I don't know what strategy we would put forward, what expertise we'd have.

MS. McCALL: Okay. Yes, Don?

DR. STEINWACHS: Let me just talk on that because, you know, part of the argument, I think -- and I guess it was Steve who was making it -- was that there are two types of leadership: there's transactional leadership and there's transformational leadership. And a lot of the things we do incrementally are sometimes viewed as more of a transactional leader trying to get something done and out the end. And so, you know, part of, I think, that discussion about culture -- it's like the culture of safety -- is: Is that part of a transformational process within organizations, within health systems and society?

And so it depends on how you'd lay out a quality/IT intersection. If this is some sort of transformational idea, does it require certain things to come into place? And part of that may be, like a culture of safety for safety, that there may need to be a culture of readiness to adopt. But it talks to transformation in the way people think and function within health systems.

DR. CARR: Well, I would then say "among clinicians," because there are nurse practitioners, nurses --

MR. HUNGATE: Well, we might want to even generalize it to patients as well. You know, it could be the cultural readiness --

DR. CARR: Cultural readiness, yes.

MR. HUNGATE: -- of a system for transformational change --

DR. CARR: I think that might be even more --

MR. HUNGATE: -- might be the better way of articulating.

MS. GREENBERG: Do we relate it to the educational one?

MR. SCANLON: I think that's part of the education. I mean, the issue is, given our description of what our roles are, we're not going to solve any of these things, but we're going to find out sort of how much need there is for some of these things and then what might be the mechanisms to address it.

And so, I mean, part of this idea of an education strategy is in creating an environment where this is accepted.

MS. McCALL: It's how we get at this.

MR. SCANLON: We need to think about all the different stakeholders and how they're going to view this, how they currently view it that might need to change, and sort of think about strategies of how you can change that, so --

MS. McCALL: Okay.

MS. POKER: Not just eliminating, but what would be short term versus long term. And one of the things that comes to mind is this idea about who shall set the priorities and how, which kind of falls in line with another one that didn't get up there, about appointing a national coordinator or something like that.

But could that be a short-term recommendation? Like, do we want to make one of the short-term goals to think about suggesting to the Secretary who it's going to be -- we can expand the role of an agency, or we can have one national coordinator or convener or whatever we decide to call it. But that would be sort of short term. That wouldn't be the long term, whereas the quality/IT intersection, that's probably a longer-term one.

MS. McCALL: Yes. Okay. And I don't know. I think it's a great question. This is not a work plan yet. So, yes, and I think that that's something that we should definitely take into account.

What we're saying is that we will take this information and use it as a foundation to build a work plan, because this is what we want to focus on. These become the focus elements of a plan; they're not the plan yet. And that plan has to have short term and longer term.

And each one of these may have short-term and long-term components where we can make, you know, our own measured progress. We need to also learn to exhibit that which we expound.

DR. STEINWACHS: You're really raising the bar here.

MS. McCALL: I am. So I think it's important for us to also talk about, as a part of our own work plan, how we will gauge our own success, how we will measure that, so that we can talk about that.

DR. CARR: I think Marjorie made a good point in sort of saying, you know, we already have people who have accountability for quality and so on. So I think maybe the one second from the bottom, that says "who sets the priorities and how," would be more that there is accountability and closure. I use "accountability" -- well, I'm not quite sure what the right word is, but there is a final word -- that there is someone who sets the priority; not who, but that there is someone.

MS. GREENBERG: Right. Someone needs --

MR. HUNGATE: Well, related to that, I see missing from this -- well, maybe "maintaining and reconciling metrics" catches the content, because there's a very distinct interrelationship between that and the priority aspect, it seems to me.

MS. McCALL: The other thing, I find myself wishing that -- I know that we worked with what we had in terms of materials and time and all that -- I find myself wanting to go back and pick up all these other points and see which ones are actually subsumed in these, to kind of fill it out and flesh it out. And each one of these, I think, if we take it now as a tip of an iceberg, can be fleshed out not only with some of the points that are here, which won't be lost, but other things that actually make it richer, and I think that those all become to-dos for us, so --

MR. HUNGATE: Let me ask Anna: would it be fair to ask you to do that grouping as the next step? Because we probably can't do it all in this arena.

MS. POKER: I would take a stab at it.

MS. McCALL: Something for people to react to?

MR. HUNGATE: To react to, right, so that we have an iterative back-and-forth. Does that make sense?

MS. McCALL: Well, depending on how much time we have, yes.

DR. STEINWACHS: Well, I always like to volunteer my friends and colleagues. You know, it seems to me you've got a better sense than anyone here of how to maybe take what we've done and do some clustering -- or essentially what you're saying is take these --

MS. McCALL: Right, and cluster --

DR. STEINWACHS: -- massage some of the others, and maybe with Anna's support. But it seems to me it'd be helpful if you could sort of help guide that and Anna could work along with you --

MS. McCALL: Absolutely, okay.

DR. STEINWACHS: -- because I agree, it'd be nice to come out with some things that kind of lay the plan. And then I'll owe you one!

MS. McCALL: I always love that capital. Okay, so I think that brings us, agenda-wise, to -- I'm sorry?

MS. POKER: [Off microphone.]

MS. McCALL: Actually, we're a little ahead of schedule.

MR. HUNGATE: Yes, you're doing great.

DR. CARR: Like now do we have, like, Roman numerals? Because this -- I mean, Simon says we can have two, three, maybe four, that we put forward. So what are they? Because here we have how many, 10?

MS. McCALL: Actually, 12.

DR. CARR: Twelve. So we still have to get sort of the headline for four of them.

MR. HUNGATE: Well, one of them is vision.

MS. GREENBERG: And I think several fit under that.

MR. HUNGATE: And most of those will fit.

DR. CARR: Right, absolutely right; I think so, too. I think most of them fit under vision, and the question is what are the other ones that we cull out?

MS. McCALL: Right. And so on that I think it's important to reach maybe not complete consensus but I've heard vision as one, which is kind of like a Roman numeral.

MS. GREENBERG: Well, developing the vision.

MS. McCALL: Right.

MS. GREENBERG: Keeper of the vision I do think is a separate thing, though you can't keep it if you don't have it.

MS. McCALL: We actually said our role was in providing a vision.

MR. SCANLON: I guess I'm wondering if after you numbered the first one, Quality/IT intersection, if everything else isn't sort of a part of the vision eventually.

And there's really an issue which is: Shouldn't we be working on the first one and think of the rest as a menu that we will then make choices from, as opposed to trying to pick a couple today and say, you know, this is where we're going to go for the next year?

DR. CARR: The only thing I would add is just the education piece. I mean, if we believe that part of what we should do is education, in addition to developing the vision. I mean, it's not implicit that education is part of the vision.

MR. SCANLON: Well, I mean, I think that in terms of trying to implement the vision, there will be an education piece, and there are kind of various types of education. Because I think the discussion where you and Bob were disagreeing a little bit is partly a question of where you are. I mean, at 30,000 feet, I agree with you both -- when we bring it down to 15,000 feet, maybe then we end up with some conflicts, and so --

MS. McCALL: I'm going to make an argument for one more, for actually a third thing to surface as something distinct, and I think there's some related pieces.

It has to do with "knowledge management versus infrastructure," but I think this is related to what I'm going to call "maintaining and reconciling metrics," which I see as related to who sets the priorities, because to me these are all about a mechanism to perpetuate it forward. And I think that that mechanism is distinct from content --

DR. CARR: And interface.

MS. McCALL: Right. So we have to have a vision and a vehicle and a strategy -- the vehicle being IT, the strategy being who prioritizes, and a vision.

MR. HUNGATE: But the vision is of the knowledge management issue, is it not?

MS. GREENBERG: Or of the quality/IT intersection.

MS. McCALL: Not totally.

MR. HUNGATE: Help me understand what's different then. What's the separation?

MS. McCALL: I think the vision includes this --

DR. STEINWACHS: How quality and IT intersect.

MS. McCALL: -- what this is. I think the vision includes the definition of quality versus value. I think it includes concepts of patient-centeredness.

A vision is a picture that says "10 years from now, what's this thing going to look like?" And we're going to say, "There's an EHR everywhere." We're going to say that it embeds, you know, definitions of quality. And we're going to say it has a mechanism for keeping itself propelled and calibrated and all that kind of stuff.

So to say that everything comes in the vision is true but not terribly useful, you know?

So we have to write one of these and create a mechanism to get feedback. But these really are the two -- you know, you always have to have that. And so these become the two very specific things that we want to --

MR. HUNGATE: I'm comfortable with that.

DR. CARR: I was voting for three, to say that there's a designation of accountability. Right now, entropy is multiplying as we speak. The who is a separate and distinct kind of call for a mechanism, and I think we're calling for, you know, the electronic health record as a vehicle and an accountable party to prioritize what we're doing, because we have many, many things happening at once now and it's --

MS. GREENBERG: Well, it's not just prioritizing. I think it's to make sure that these things get standardized, as the devil's in the details. You can't have multiple similar measures for the same thing. You really need to have --

DR. CARR: Right. But, I mean, what we heard from Jerod today is no one is in charge --

MS. GREENBERG: Yes, nobody's doing that.

DR. CARR: -- and I think that was important enough to break out of the vision.

MS. McCALL: Let me ask a quick question. So in your mind, then, would you relate these three?

DR. CARR: Yes, could be.

MS. McCALL: Or would you just see the who as separate from these other two?

DR. CARR: I don't understand knowledge management versus infrastructure. I mean, what does that mean? Can you use it in a sentence?

[Laughter.]

DR. CARR: And then I'll spell it.

DR. STEINWACHS: It's the trains and the trucks, the quality/IT intersection.

MS. McCALL: I think it's what you said before, which is that there's a technology infrastructure and then there's the knowledge management, which includes the metrics and how you keep them maintained and --

DR. CARR: So I see that as part of the vision.

MS. McCALL: Okay.

DR. CARR: But an action item being absolutely that the knowledge management has to have an owner and the infrastructure has to get developed.

MS. McCALL: Okay.

MR. SCANLON: My one concern about taking on the decision-maker, or the person who's going to set priorities, or the entity that's going to set priorities, very early on is that it's going to be a very controversial area, because you know what we heard this morning from Steve was that part of this is where there is a concern about sustainability of Medicare and Medicaid from a cost perspective.

And, you know, we've got a history. I mean, look at the predecessor to (?) in terms of whether they could be trusted -- you look at effectiveness within the Department; the Department is a funder or payer. And Steve suggested that we need to get beyond that mentality, about worrying about whether people are making decisions based on the cost. It's going to be hard to get beyond that mentality.

And so this is something where I think we're going to need to get a lot of outside input in terms of --

MS. GREENBERG: Oh, yes.

MR. SCANLON: -- writing a report and making a recommendation.

DR. CARR: But, I mean, wouldn't that drive our hearings?

MR. SCANLON: Right, it will. But I'm thinking in terms of our shortest-term product, it's not going to be one that we're going to be able to say much about.

MS. GREENBERG: I would think at this point we could have findings on that topic, the findings being that there is no one in charge in a sense, that people with developed systems are saying, we don't know exactly, you know, where to put our emphasis now because there's too much diversity; that some measures are competing with each other.

You know, I think there are a lot of findings that we could identify. I don't think we're ready at all to identify a solution, but I think the hearings --

DR. CARR: But define the current state, yes, define the problem. Anna?

MS. POKER: I was going to say also that one of the things that comes out here is the value, and Carol rolls it up really nicely into -- I forget -- with quality and IT, I think you've rolled it into.

I'm not sure if we can't think of value, the definition of value, as being an important segue to what you were just saying, in the sense that the reason we need somebody as the keeper of the knowledge management information is so people don't have redundancies like Steve was talking about -- unknown to anybody, I forget which organization came up with a sort of metric, and that caused a whole lot of havoc.

So maybe if we could have a business case for why somebody should be overseeing that, that would be --

MS. GREENBERG: I have a feeling it has to be public/private, too, although --

SEVERAL PARTICIPANTS: Right.

DR. CARR: Exactly the problem, it was public/private.

MR. HUNGATE: Yes, if you can define a process, then you can assign ownership of that process somehow in the system. Whether that's a single point or a multiple point is a secondary decision, it seems to me, and that's the part that I wanted to make sure we didn't get into a corner on -- that it's a process decision that's the important thing, because as soon as you name a single point of control, you're into a heavy political battle on all fronts.

MS. McCALL: That may help -- this is a process, okay.

MR. HUNGATE: The other place where I'm worried about that is whenever we say "quality" and then we start to define it, we're saying that we're going to reconcile quality definitions. And we can't do it, because quality is going to be dependent on the perspective of the measuree.

DR. CARR: Are we saying that, that we're going to -- what are we saying?

MR. HUNGATE: Well, we said quality versus value --

MS. McCALL: We said quality versus value as --

MR. HUNGATE: We may have to be careful of that is all I'm trying to express.

MS. McCALL: I'm going to make a recommendation, because in terms of use of a definition, we get hamstrung there. As a guiding idea -- again, not to try to define quality, or even try to, you know, kind of put it on the page; that's a rat hole that we'll never get out of -- but if I augment that to say "quality versus value as a guiding idea" in how we go through that --

MR. HUNGATE: Yes, but let me pick up a little bit on some of our history. Peggy O'Kane from NCQA made the observation in our earlier hearings that they have found that it's better to talk about performance measurement than quality, that it works better in the dialogue.

Performance measurement ties better to quality improvement, and so I'd like to be as precise in our terms there as we can, because I think we've got to try to link to the continuum that Brent first introduced for us which puts the learning side, which is the quality improvement side, as the core element. So partly I'm trying to say how we articulate it, the medium is some of the message.

MS. McCALL: Okay.

MR. HUNGATE: Does that make any sense?

DR. CARR: Yes.

DR. STEINWACHS: Well, I guess all I would add, and I don't disagree, but it seems to me you do have to at least start out with a quality framework and then move to performance measurement.

MR. HUNGATE: Okay.

DR. STEINWACHS: And I would suggest the 1990 definition the IOM came out with for quality, which also brings in the patient's expected and desired outcomes. And so -- whew (whistling) -- too dangerous. She's ticking!

The 1990 IOM definition brings the patient into that, so it's not just professional judgment; it's also the patient's expected -- and so I think that that could provide sort of the big umbrella and then you could move to performance measurement.

MR. HUNGATE: I'm comfortable with that.

DR. STEINWACHS: I think in dealing with performance, sometimes you think of it as only a process --

MS. McCALL: Yes. Stan?

MR. ETTINGER: Or maybe can't you make it the role of quality measurement in determining value, or something like that -- so it's not "versus" or "is" --

MS. McCALL: I'm sorry -- say that one more time. "The role of --"

MR. ETTINGER: Quality measurement in determining value, or something along those lines; it's not one versus the other or defining both of them. It's sort of the role leading to the other.

MR. SCANLON: I think value was introduced yesterday with the idea that we are going to take into account cost.

SEVERAL PARTICIPANTS: Yes.

DR. STEINWACHS: The outcomes.

MR. ETTINGER: And very different than if we just focus on quality.

MS. McCALL: Right. So I think what I've done is added a couple of words here that just change the definition on the page, so that we know what we mean when we come back to it -- quality measurement in determining value as the definition -- and then a couple of other words here. Just that language is important as well, to kind of capture that idea.

DR. STEINWACHS: Maybe it's one of the factors.

MR. SCANLON: Can we go back to it for a second, go back to this issue --

MS. McCALL: A few more minutes here and then we're going to kind of go on to the next stage. Yes, go ahead.

MR. SCANLON: -- because it's a question of accepting value as the objective, I mean, and whether or not we're going to take costs into account. I mean, this is a controversial issue and it may be critical, and if, you know, General Motors is going bankrupt because of health care costs, as we mentioned this morning, that brings attention to cost as a legitimate dimension. But up to this point, it hasn't been recognized as a legitimate dimension in many contexts.

MR. ETTINGER: It is a CMS/AHRQ issue again.

MS. GREENBERG: I think it's -- well, it takes me back to the ICD-10 discussions. But if you're going to take cost into account, you have to also find ways to quantify benefits, and that has to do with the education part et cetera.

It is always easier to quantify the cost than the benefits, so that's why it's like the value proposition. The value is not just how much does this cost; it's trying to weigh that against benefits that are much harder to quantify. But there is scenario development and there are some examples, I think we heard a few -- you know, we know Blackford Middleton has done a little bit of that with IT or whatever -- so I think that's an area for research, frankly.

But I think when you're saying you're taking cost into account, because how can you not, but there are costs and benefits. And then I think it's sort of, you know, cost effectiveness.

But I actually wanted to say two other things. One is that I think when we bring all these things in, it will clarify a lot of the things that we maybe think are missing or won't be or where there's really lack of clarity -- that will become more obvious, also.

MS. McCALL: Something else that we need to spend a little bit -- and background.

MS. GREENBERG: And one thing that I'm not sure was even up there, but, of course, as Anna said, you hear what you want to hear, one thing that was really striking to me, particularly coming from the people that it came from yesterday and today, was the focus on a population perspective and a public health perspective in a very broad way.

Don Detmer, who didn't start off there, came out with that quite a bit yesterday, and Steve Jencks. I mean, this is profound in a way that they're saying CMS is a public health agency, because I was at a Data Council meeting a number of years ago when a previous CMS administrator who actually I would have expected to have this perspective said HIPAA has nothing to do with public health. If HIPAA has nothing to do with public health, you know, does quality have anything to do with it?

Clearly, we've gotten past that. But I think that we want to make sure the vision has that perspective in it, and I'm not sure there's anything up here that says that -- I mean, unless you don't agree with it. But I certainly heard it from people. It wasn't public health people or population health people who came forward and said that in the last two days; it was more people on the provider side, payers, provider side, so --

MR. HUNGATE: We're going to need to take a break.

MS. McCALL: Yes, okay. Before we do, and this will be an incentive for us to do so -- obviously there's a lot of work to clean up and bring together and all of that -- I would like for us to end this part by being able to come up with either two or three things that we believe are going to form the foundation of our work plan. And I'd like them to come from this list, and we have a couple of candidates here, all right?

One is the quality/IT intersection -- I've heard one. And if we were going to stick with two, I've also heard the second one to be kind of who does this, and how, as a second part of our work plan, to try to go figure that out and make recommendations on that.

So that's what I've heard so far.

MR. HUNGATE: I'm not sure you can separate the who from the what.

MS. McCALL: Okay.

MR. HUNGATE: The knowledge management --

MS. McCALL: Okay, so that some of these things get bundled together and that's why.

MR. HUNGATE: I think so, yes.

MR. ETTINGER: It might be politically and strategically wise not to make it look like we're picking people in the Department or somebody. So it may be nice to roll it in as combined with --

MR. HUNGATE: Yes, I think we've got to stay content-centered.

MS. McCALL: I think so, okay. So it's really kind of a who and how, with the understanding that it is about a knowledge management process? Okay.

MR. HUNGATE: Right. What are the components of a knowledge management process?

MS. McCALL: Yes.

MR. HUNGATE: You know, we didn't put decision support up there, but it's one of the pieces that ties into that, so --

MS. McCALL: It is.

MR. HUNGATE: -- when we blow that up a little bit, it's going to get pretty good size.

MS. McCALL: It is. All right, so here's two, and this one's really big -- it's got some sub-parts that include mechanisms and how. It's that intersection, and then it's basically how you get it defined and then how you keep it going, okay. And those would be our two big ones.

MR. HUNGATE: You know, in the quality/IT intersection, the observation that there was no provision for registries in the EMR is a pretty big hole.

MS. McCALL: Oh, yes.

MR. HUNGATE: So, you know, that raised a red flag for me that said, hey, we've got to get that on the agenda early, which links to the quality/IT.

DR. CARR: Remarkably, I don't agree.

MR. HUNGATE: Really?

DR. CARR: I mean, maybe it's my concept of registries, but I mean I think we're sort of at the point of how can IT have quality data and what kind of fields and flexibilities. But when you say "registry," I don't know --

MR. HUNGATE: Let's try to come to agreement on terms, because it seems to me that I can argue that the Northern New England cooperative group in effect has a registry which is called a database.

DR. CARR: Right, but they have people hired in each institution with very specific definitions and the same person who's in the cardiac surgery registry might be in the CABG or the -- I mean, the heart, you know -- and everybody might have a little bit of heart failure. I mean, this is huge.

I guess it's really what registries, what patients, toward what end.

MR. HUNGATE: Knowledge management.

MR. SCANLON: Could I offer my confusion, which is the idea that if the electronic health record has the data elements, isn't the registry a software module?

DR. CARR: Right, a query.

MR. SCANLON: It extracts the elements that you need. And I don't know how you design an electronic health record that has everything else but doesn't have a registry, a pass-key to it. What are you leaving out?

DR. CARR: Well, you have data elements that can then be queried, and so you create fields that are flexible fields, so that if a person's admitting diagnosis is this, these fields pop up. So maybe that's what you're saying. But a registry to me is a static, unwieldy thing that you think is a good thing and then you're stuck with, or you have to change.

But to have flexible fields that you can query, you know, in a real-time basis --
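[Editorial sketch: the view just voiced -- the registry as a software module that extracts the elements it needs from the EHR, with flexible fields queried in real time rather than a static, separately maintained structure. A minimal hypothetical illustration in Python; the record fields and criteria are invented for the example.]

    # Records already carry the data elements; the "registry" is just a
    # query run on demand, not a separately maintained structure.
    records = [
        {"id": 1, "admit_dx": "CABG", "ejection_fraction": 35},
        {"id": 2, "admit_dx": "pneumonia"},
        {"id": 3, "admit_dx": "CABG", "ejection_fraction": 55},
    ]

    def registry(records, **criteria):
        # Return the records matching every criterion -- a real-time cohort.
        return [r for r in records
                if all(r.get(k) == v for k, v in criteria.items())]

    cardiac_surgery = registry(records, admit_dx="CABG")
    # Flexible fields: cohort-specific elements (here, ejection_fraction)
    # appear only on the records where the admitting diagnosis calls for them.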

MS. McCALL: One comment. Okay, this has been really, really good.

MR. HUNGATE: Semantics are important.

MS. McCALL: Yes, but they are important, and I think we're going to get there. I don't think we're going to solve this here.

I would offer the following comment: they are so important that when you said the word "registry," what we got was somebody else going, "Oh, my God." And what you have is a mental model of all the pain that the historic frame of registries has created.

And I think one of our biggest opportunities, as well as challenges, lies in trying to break the frame of what we've walked around with for so long when we talk about what this means.

So I'm going to recommend that we not continue to try to define that right now but just recognize it as we're going to stumble over that at every single turn and we'll have to kind of work through those.

I want to make sure that basically, Bob, you have what you need when you go to this Executive meeting, which is at least a foundation for a work plan.

MR. HUNGATE: I think we're partly there.

MS. McCALL: We've got that.

MR. HUNGATE: We have a foundation for our work plan. I think we can express a set of goals from that.

But a work plan is, okay, well, what are you going to do?

MS. McCALL: Right. Now we are now at a point where we can take a break for, I guess, 10, 15 minutes?

MR. HUNGATE: I haven't gotten anybody down to 10 yet, so let's go for 15.

MS. McCALL: Okay.

MR. SCANLON: Oh, the same for five.

[Laughter.]

DR. STEINWACHS: It'll still be 15.

MS. McCALL: And then I think it's all about next steps.

(Break)

Agenda Item: Work Plan Development, including Next Steps – Mr. Hungate

MR. HUNGATE: Okay, we're on our way.

DR. STEINWACHS: Do you want me to say something about the population quickly --

MR. HUNGATE: Yes. I think we're on an excellent track that I think is going to play out to our combined satisfaction. I applaud everyone's contribution to this dialogue at this stage.

DR. STEINWACHS: Thank you to our leadership and our facilitator.

[Applause.]

MR. HUNGATE: And I think we've got buy-in from CMS and AHRQ in our endeavors, and I think that's going to help us make this come true in a very real way.

So Don and I have been talking back and forth about what we try to do here and what we try to do in Populations. I think it'd be worth it to take a few minutes now to talk about that because we're going to talk about it further at the retreat of the Executive Subcommittee, but, you know, we ought to talk about it a bit here first, so --

DR. STEINWACHS: Well, you know, in the great wisdom of the NCVHS -- and Marjorie, no doubt, it was her guiding hand -- there's a lot of overlap between the Populations Subcommittee and the Quality Workgroup. And so early on, Bob and I had discussed that if we were going to proceed, there ought to be a lot of complementarity in the agendas of the Quality Workgroup and Populations, with the kind of distinction, as you might expect, that you might think of the Quality Workgroup as putting more emphasis on interactions between individuals and medical care and the health system, and the Populations Subcommittee focusing more on population health or groups. And, you know, there's a strong interrelationship between those two, but both in terms of metrics and in terms of what you think of the EHR, the things supporting the issues in population health are different.

We are a little behind schedule at least from where I'd hoped to be at one point in the Populations Subcommittee addressing the agenda, so I see us as benefiting from this discussion going on here.

We are trying to put to bed, as you know, the report on racial and ethnic measurement and being able to have the capacity to look at disparities, and so I'm expecting that that will be going -- I'm hopeful, if it gets through the Executive Committee, we'll be going for a vote and presentation to the full Committee at the end of June.

My thought was that at the Populations Subcommittee at the end of June we would be talking about the Populations Subcommittee agenda, and so I'd love to see that build off of this discussion.

Also, Marjorie and I have been approached by an interagency working group that would like to see the Populations Subcommittee maybe take on functional status measurement around the area of disability, and so I thought that would be on the agenda for discussion.

But I would like to see some complementarity, because I don't think most of us as individuals will survive this process -- since we're working on the same things -- unless there is some strong complementarity; otherwise it'll probably slow down both processes. So I'd welcome other comments, but, you know, Bob thought, and I agree, that it'd be good just to try and get us to think a little bit about how the two things fit together.

MR. HUNGATE: Observations, comments?

MR. SCANLON: I'd feel very positive about that kind of an approach, because my sense is we're going to have a vision, and in some respects we've already laid out a bit of the vision in the matrix, and that, you know, we did identify population health as one of the dimensions.

And, frankly, once we're beyond sort of the vision at 30,000 feet, we are going to have an incredible amount of work in dealing with each one of the components. And the idea that our time on the Populations and Health Subcommittee -- not that we're doing sentences, but --

[Laughter.]

MR. HUNGATE: You are serving your time.

MR. SCANLON: We're serving our time!

[Laughter.]

MR. SCANLON: And our time could be spent in a complementary role. To try and work through some of the other details here I think is a very positive thing because otherwise, I think, one of our next steps is going to be a long list of areas that we could potentially investigate and think about what our role is. And we'll be able to say that we're sharing some of that list between the two groups.

DR. STEINWACHS: You know, I think a lot of people would be interested, if we were willing to take it on, about what our vision is for the intersection between EHR and population health. And it sort of talks to both the potential and the limitations, and then that helps talk to where our national health statistics, other things, fit in.

And so I could see, if we wanted to do it, sort of that complement so we were building two things along the way that paralleled each other but took a different sort of lens to this.

MS. POKER: Don, I was also going to emphasize -- you weren't here yesterday, but Ernie Moy from AHRQ actually made a request for that; not specifically population, but he talked about including disparities data in the EHR. And even though he said we'd probably be opening up a whole array of other problems, that's how important that is -- and also customizing information for different audiences.

MS. GREENBERG: Yes, he mentioned that disparities was part of the quality equation, which of course it is.

DR. STEINWACHS: Well, you know, I think we sort of talked briefly before that certainly the two reports that come out of AHRQ, the national quality report and the disparities report, are in essence the effort to do a population look at those issues that also can expect to be looked at within organizations and within health plans and so on.

And there are some real weaknesses in those reports because of issues around data and, you know, how thin the numbers are when you get down to the different subgroups and so on. So, again, this idea of whether there are ways in which the EHR might help support that is one way to go. The other certainly is to look at our national health statistics capture capacity and the fact that it is failing to be able to fit in lots of pieces of that quality and disparities equation as it addresses health status and utilization.

MR. HUNGATE: Okay. Any other comments, suggestions, there?

We've got a lot of "what" content in front of us and almost no "how" at this point. And in my experience, a work plan has a lot of hows.

It seems to me that our challenge now is to think about how we get the content addressed by other interested parties in a topically related series of information-gathering processes. And one of the first decisions is where do we start, and how do we do that?

We've got a good start here in terms of the spectrum of content of vision. One could say, okay, what are the parts of that vision which are haziest to us? Can we describe what they are and can we think about what people could help illuminate that part of it?

DR. CARR: Well, I'm thinking back to yesterday. I forget who it was who said, in terms of IT and quality data elements, that you'd get a group together that included David Bates, Brent James -- there are fewer than 10 of these sort of thought leaders -- and brainstorm with them and with John Halamka about how you take this information -- I mean, I'm hesitating because I guess we're still wondering -- well, I mean, perhaps just define the array of data elements that would be achievable in an electronic health record. I mean, that might even help as an organizing focus. You know, some things are probably going to be out of scope because they're too complicated, but some things are achievable, and in what format.

I don't know. I just think of that as a concrete step I heard yesterday, and I think it would be very informative, and it would begin to warrant thinking about how you structure that kind of area.

MS. GREENBERG: Can I just push that up one, to kind of define the questions that you might want electronic health records to answer or address, and then from that, I guess, the data elements? And the idea being, you know, much as I love data elements, when you start with the data elements, you sometimes lose sight of --

DR. CARR: Well -- or maybe even using Don Detmer's suggestion. Of course, including Don. You know, in terms of the IOM dimensions -- safe, effective, efficient -- using that, because I guess it is a slippery slope. I mean, we're sort of struck by the fact that there are a million definitions and permutations, and so if we wait to say "here's the one that's most important," it'll be a problem.

But if we think about what is the functionality, or might be the functionality, of electronic health records and what kinds of formats you can collect data in -- whether it's yes/no or quantitative, whatever -- we could at least begin to define an architecture which could then be backfilled with questions addressing, maybe, the dimensions of care or something.

MS. GREENBERG: I wonder if it would be good -- although they really were pretty much at the 50,000-foot level -- to include in that, or separately, hearing from the group at HL7 that was responsible for developing the functional model for the EHR.

MR. RODE: There's a supportive measure within the functional model -- I can show you the whole picture --

MR. HUNGATE: It looks pretty from here.

MS. POKER: Colorful.

MR. RODE: The pink stuff. But there is a group right now working on this. John (?) from our staff is involved with this project, and maybe it would be better to talk to them sooner rather than later and have that dialogue -- maybe after you talk to this other group that you've mentioned, or maybe you do the two together.

MS. POKER: Also, if I could just suggest, if you want airplane reading material, maybe the chapter that Brent James recommended -- Chapter 8 from the Patient Safety report; that's the only one I know, so that's what I'm pushing forward -- but he did happen to write it. And I think it also answers some of the questions, Justine, that you put forward. I mean, that's just also a thought.

DR. CARR: Well, from his perspective, too.

MS. POKER: Yes, it's very first person. You've read the whole thing? Oh, wow. You haven't -- I don't know if you've read it.

MS. GREENBERG: Do we have that Patient Safety Report?

DR. STEINWACHS: I guess one sort of question. And, you know, part of what Bob here is talking about is how we lay out the landscape. But when you think of a quality/IT interface, it speaks to everything from just the documentation function that a physician, or other health professional, performs -- what is in the care process -- to whether that drives into a decision-support kind of mechanism. So does what you put down there then give you feedback? Does that drive into a reporting function that goes into organizational or other report cards? Does that drive into some of the things in population health?

And so I guess it would help me a little bit if we talked about, you know, what's the scope of this. We're interested in the interface. One question is whether it's about producing information useful for measuring quality within organizations and within practices, at a minimum. Does it go further than that, I guess, is really what I'm asking. Does it get into essentially decision support for the physician? Does it get into --

MS. McCALL: I think it would be valuable to actually take that quality/IT intersection and talk about that interface and say, what are all the different masters that it would need to serve? Everything that you've laid out -- population, decision support --

DR. STEINWACHS: You could generate the bill off of it.

MS. McCALL: -- pay for performance -- and talk about that from a visionary perspective, that really it's a vision maybe around that interface. Then we can pull out the matrix again.

And so it's really getting that, getting a deep conceptual framework for that, so that it has some depth and specificity, and then I think we'll bring in various folks to talk with about that. We'll understand, you know, what can't be done today, everything. But that seems to be a very concrete, although broad, set of things that we could do.

MR. HUNGATE: A related question. The Northern New England cardiovascular group has been used as a good example of a learning system that is using information technology in some way to self-improve. Would it be worth using that group as an example -- asking them to come and talk about how they see what they're doing interfacing with these other domains?

DR. CARR: But actually, didn't Don mention that Jerry O'Connor would be the right person to come and represent that group and, you know, talk about their model?

MR. HUNGATE: I need tangible stuff to understand the abstraction, and it would help me a lot.

MS. McCALL: Yes. And they're actually up here on the flip chart, so it's kind of what I'll call a lather-rinse-repeat model: getting some testimony, creating a vision or a paper, and then, you know, getting feedback on what that is. And so we might be able to iterate through that process, with: This is our focus, all right? With the quality/IT interface as our focus, with very specific audiences, people, things that we want to hear about, people we want to hear from. And then we can write.

I would be very concerned if the first thing that we tried to do would be sit down and write a vision.

MS. GREENBERG: That'd be premature.

MR. HUNGATE: Too early, too early.

MS. McCALL: Very premature. So organizing these ideas and thoughts, blowing out more detail, fleshing out what we mean by that, and then literally creating some venues to gather some testimony on that specific focus might be the next step.

MS. POKER: Didn't he also say for us to do a field trip?

DR. CARR: I was just going to say that.

MS. POKER: Oh, yes.

MS. GREENBERG: A field trip -- haven't been to Boston for years.

MS. POKER: Yes. That would be the time.

DR. CARR: We could have dinner at my house.

MS. POKER: It's so different if we see it.

MS. GREENBERG: Wow -- there's the mother in you.

DR. CARR: It really would be good --

MS. GREENBERG: Yes.

DR. CARR: -- because it's been a learning experience for me to hear John describe it and see people look surprised, because it's the only system I've ever known, so I don't even recognize, you know, how novel it is compared to other places. So it would be delightful. You will like it; it's pretty cool. And you'll see the capabilities.

MS. GREENBERG: So there is that kind of a site visit but also maybe to meet with some of these people who are up there --

SEVERAL PARTICIPANTS: [Agree.]

MR. HUNGATE: We'd have to do the two together, right?

MS. GREENBERG: Yes, it would be like two days. One day, the site, and the other day --

DR. CARR: Yes, that's right. That would be terrific.

MR. HUNGATE: Now, how much advance, how much lead time do you need in order to do something like that?

MS. GREENBERG: Well, you know, I think the first thing would be to poll the members and key staff and see availability. But is it possible there's a meeting room at the hospital that we could meet in?

DR. CARR: Yes.

MS. GREENBERG: Otherwise, you know, if we have to find a hotel or whatever, then you need more lead time for that. It would be nice actually to have free meeting space, but it's not required. But if it's going to, you know, not be a big public hearing but more --

DR. CARR: I'm sure we can work something out, you know, for a conference room with a little lead time and have folks come over.

MS. GREENBERG: Yes.

DR. CARR: I mean, John could host this, in a way.

MS. GREENBERG: So then the main issue would be, you know, checking all your availability and the lead staff's, and then the ability of your organization to accommodate us.

So I don't think there's a huge amount of lead time that's needed.

MS. McCALL: So who all do we want to be there, I mean, outside of our Workgroup here? Dan, you had mentioned somebody.

DR. CARR: He wants to go.

MS. McCALL: You want to go?

MR. RODE: I'd like to go.

MS. McCALL: All right.

MS. GREENBERG: Well, I would have to say that -- I mean, this is like fact-finding.

DR. CARR: Yes.

MS. GREENBERG: So it's not like we have to sort of invite the world. But on the other hand, we don't want it to be secretive, either.

MS. McCALL: No -- I'm talking about people if we're going to have a couple of days; we're obviously going to be on a field trip. But there were some other people that we said we wanted to hear from.

MS. GREENBERG: Were you talking about Jerry O'Connor?

MS. McCALL: Yes, that's who -- well, I guess, all right, so you're structuring. Day one is field trip --

MS. GREENBERG: Yes.

MS. McCALL: -- and day two would be brainstorming with some folks up there.

I was just saying to Don, you know, including Lisa --

MS. GREENBERG: I was just thinking of Lisa. We talked about going up there and meeting with her.

MR. HUNGATE: Yes.

MS. McCALL: So, Lisa -- this would be a great time to have her.

MS. GREENBERG: Yes. And maybe even Dan Friedman from --

MS. McCALL: That's what I was thinking about.

MS. GREENBERG: from that kind of population side. I told you -- don't even think about serving your time, because there's no end to the service. Lisa --

DR. STEINWACHS: Well, Bill has come back once, you know, and we expect him to come back over and over again.

MR. SCANLON: I was on probation but I --

MS. McCALL: He's on reserve.

MS. GREENBERG: What about Barbara Stargill(?)? I think she lasted longer on the Committee, until Simon, than anybody.

MR. ETTINGER: Oh, Marjorie, you might also want to ask Simon if he's available. He might be interested.

MS. GREENBERG: Who?

MR. ETTINGER: Simon Cohen.

MS. McCALL: No way!

[Laughter.]

MS. GREENBERG: Oh, well, obviously other members of the Committee who are interested would be welcome.

MR. HUNGATE: Brent James was mentioned.

MS. GREENBERG: And the NHII group.

MS. McCALL: Well, I would ask that you take that to the meeting next week and talk about alignment, talk about intersections, talk about whatever. You know, reshuffling or juggling could take place, because there may be not only a desire but a real need to have some key intersections here.

MS. POKER: What she just said?

MS. GREENBERG: With the NHII.

MS. McCALL: With NHII in particular.

And if we develop the agenda, you know, on a trip to Boston, and there were folks that you wanted John to invite to speak to some of the issues that have come up -- even the ROI and the HIPAA issues -- you know, perhaps we could have a more generic -- I don't know.

MR. ETTINGER: Marjorie, did you also want to take a look at the VA here in DC? Edouardo probably could arrange something there. It might not be that big of a thing.

MS. GREENBERG: We talked about doing a site visit to them as well and that is certainly something we could do. It's good to see two systems.

DR. CARR: And, really, I think actually you're right. Understanding what went wrong -- you know, we heard today, I guess, that with all of the automation of the VA system, the data analysis had to take place on printed hard copy.

MS. McCALL: And, yes, that was specifically the work that Beth McGlynn was trying to do, in trying to go in and do that study -- and why it was that so much had to get printed out.

MR. HUNGATE: That's an interoperability infrastructure question that, you know, is getting worked on.

MS. McCALL: She's specifically trying to work in that intersection and so we may want to actually hear from Beth --

MS. GREENBERG: Oh, yes.

MS. McCALL: -- about, so what happened in there?

MS. GREENBERG: Maybe not when we go to Boston. Is she in California?

MS. McCALL: She's in California but she's all over the place.

DR. STEINWACHS: The reason I wasn't here yesterday -- and Kevin Vigilante is part of that problem -- is we put in a proposal to the VA to evaluate nationally the quality of care for the severely mentally ill in the VA, which had me diving into all those datasets. And essentially the VA runs an EHR that is free text, and there isn't much you can do with a free-text system. And that's what the EHR traditionally is -- free text, plus there may be some structured lab fields or some other things, you know.

And I think part of the transformation you're talking about here is: What structure do you impose? What elements have to be there? And how do they have to be recorded?

And so the VA has an elaborate system. They abstract 250,000 charts a year to support their quality reporting. So they have systems in place that do it, but they do it in what feels a little bit like the old way -- except that they can pull that record in across all sites, so the record's not site-specific.

MR. HUNGATE: I understand. But I think that issue is being addressed elsewhere in the structure of the NCVHS. You know, we don't need to illuminate that problem.

DR. CARR: No, but I think as we think about the myriad descriptions of quality that we heard today and at our hearings last year, it matters to know what kinds of constraints we might be facing -- you know, free text is always going to be a limiting factor.

MS. GREENBERG: How much of your system is free text and how much of it is proprietary coding? How much of it is standardized coding?

DR. CARR: A lot of it is free text.

MS. McCALL: Well, and the thing is, this is going to get into this idea of knowledge management, and the idea is one around a taxonomy. And we can sit and debate taxonomies -- yes, it is plural -- which ones are the right ones? Do we have to come to consensus? If you come to consensus, it becomes fairly slow-moving.

But I would like for us to investigate what the other opportunities are. For example, it may be that there is a taxonomy that we're going to use, with these definitions, and on top of that we allow people to play, to discover -- systems that allow for the creation of what's commonly being called "tagging," or "folksonomies," which is an emergent property: people create their own taxonomies, they can see what other people are creating, and they can begin to use common vocabularies and things like that.

And so that's if I want to do something a little bit different than, or more than, or more distinct, than what the formal and official taxonomy is allowing me to do because I'm trying --

MS. GREENBERG: What is it, folksonomy?

MS. McCALL: Folksonomy. F-o-l-k --

MS. GREENBERG: That's what I thought. Haven't heard that one before.

MS. McCALL: But there are just some new capabilities and ideas emerging, and if the infrastructure allows some of these things to self-organize and people understand how to use them, they will self-organize.

And the question is, can it work? Can they co-exist, so that you have the formal, top-down "this is the official taxonomy; it's used to drive metrics and a lot of other associated things," and there's also a way for people to try something new that the taxonomy doesn't quite fit?

And if it actually produces research that's valuable, then maybe it does become official.

MS. GREENBERG: Or it can feed into the formal taxonomy --

MS. McCALL: Yes. It's just an idea, but it's to say that there's value there. Just to kind of wave our hands over a free-text system and say somebody else is dealing with all that doesn't do justice to what we have as one of our key objectives as a Workgroup.
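
To make the tagging idea concrete, here is a minimal sketch in Python of how free-form user tags might coexist with an official taxonomy, with frequently used tags surfacing as candidates for formal adoption. Every name, code, and threshold below is an illustrative assumption, not drawn from any system discussed here.

    from collections import Counter

    # Sketch: each record carries an official taxonomy code plus
    # free-form user tags (a "folksonomy"). Tags that many users
    # converge on can be reviewed for promotion into the taxonomy.
    OFFICIAL_TAXONOMY = {"I10": "essential hypertension",
                         "E11": "type 2 diabetes"}

    records = []

    def tag_record(code, tags):
        """Store a record with its official code and user-chosen tags."""
        records.append({"code": code, "tags": set(tags)})

    def emergent_tags(min_uses=2):
        """Return informal tags used often enough to consider promoting."""
        counts = Counter(t for r in records for t in r["tags"])
        official_terms = set(OFFICIAL_TAXONOMY.values())
        return {t: n for t, n in counts.items()
                if n >= min_uses and t not in official_terms}

    tag_record("I10", ["resistant-htn", "med-adherence"])
    tag_record("I10", ["resistant-htn"])
    tag_record("E11", ["med-adherence"])

    print(emergent_tags())  # {'resistant-htn': 2, 'med-adherence': 2}

The point of the sketch is the coexistence described here: the official codes keep driving metrics, while the tag counts give a review process something concrete to evaluate.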

DR. CARR: And I think the other thing -- I mean, just picking up on the question Marjorie raised -- we have notes that are, you know, just regular notes, but we also have fields that are free text where you can search on a term. So, chief complaint to the emergency room -- you know, you can think of 10 different terms someone might use for a particular condition, and you can pull out those cases.

So some of it is just structuring and architecture, so that even where it's free text, it's a searchable field.
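
A minimal sketch in Python of the synonym search just described; the field names, terms, and sample visits are hypothetical, not taken from any particular EHR.

    # Search a free-text chief-complaint field with a synonym list.
    CHEST_PAIN_TERMS = {"chest pain", "chest pressure",
                        "chest tightness", "angina"}

    visits = [
        {"id": 1, "chief_complaint": "Crushing chest pressure x 2 hours"},
        {"id": 2, "chief_complaint": "Left ankle sprain"},
        {"id": 3, "chief_complaint": "Angina, worse on exertion"},
    ]

    def find_cases(visits, terms):
        """Return visits whose free-text complaint mentions any synonym."""
        return [v for v in visits
                if any(t in v["chief_complaint"].lower() for t in terms)]

    for v in find_cases(visits, CHEST_PAIN_TERMS):
        print(v["id"], v["chief_complaint"])  # matches visits 1 and 3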

MR. ETTINGER: Actually, David Base(?) and I talked about this because -- [off microphone].

I'm sorry. David Base and I talked about this on other issues, but there is a VA in Boston, and I do see some common (?) between them. And one of the problems is differences in the systems in transferring information -- not getting into their interoperability. But it might be a good opportunity to see the Boston VA and one of your hospitals and look at the two systems together.

DR. CARR: That would be interesting.

MR. ETTINGER: We could discuss -- there are some problems with e-prescribing, even. Patients that come to the VA, they get their prescription at your hospital and they have to pick it up at the VA. And there are certain problems that arise from those situations.

And it might be a nice thing to kill multiple birds with one stone.

MR. HUNGATE: In the sense that I'm kind of of the "just do it" school in terms of charging in and getting it going, how soon could we do something?

DR. CARR: You know John. It's guaranteed he'll [off microphone].

MR. HUNGATE: Yes, I understand. But I don't know how soon everyone's calendar is clear enough so that we can all be in the same place at the same time.

MS. GREENBERG: We're looking for two days, folks.

MR. HUNGATE: We've got to look at two days. We've probably got to start in July.

MS. POKER: Mid-June.

MS. GREENBERG: I can't imagine it'd be -- this is already June!

MS. McCALL: Well, if we didn't have that full Committee meeting.

MR. HUNGATE: Well, you could try and restructure the agenda and have all these people appear at the full Committee, have the full Committee meeting in Boston.

MS. McCALL: I think we start looking in July.

MR. HUNGATE: Let's look at the window between, say, arbitrarily July 15th and August 15th. Is that okay for staff polling to see if we can find two days that we could all make?

MS. McCALL: I'd actually look at the window before that --

MS. POKER: Yes.

MS. McCALL: -- for the end date at August 15th, and look for anything in that window.

MS. POKER: I would just look at the month of July. I think the month of July could be --

MS. GREENBERG: What's your availability?

MR. HUNGATE: Mine?

MS. GREENBERG: We usually start with the Chair and work from there. Let me look here.

DR. CARR: You're going to have to email. I couldn't tell you mine, so --

MR. HUNGATE: Yes, I think it's going to have to -- I can give you that, but --

MS. GREENBERG: But it's a question of where do we start? I thought maybe he was saying July 15th because -- I personally know I'm not available the week of July 15th. What we have right now in July is the NHII Workgroup is meeting the 27th and the 28th of July.

MR. HUNGATE: Here, right?

MS. GREENBERG: And the Standards and Security Subcommittee is meeting the 26th and the 27th -- which must mean, I don't think, that they're overlapping, because they have overlapping members -- so it's probably the 26th to the 28th that that Subcommittee and that Workgroup are meeting. Am I correct about that? I mean, there's too much overlap between NHII and Standards to think that they would be meeting concurrently. So probably one is meeting for a day and a half and then the other is meeting the next day and a half.

So I don't think we'd want to be at that time.

MS. McCALL: I think we're going to have to send out emails to people.

MS. GREENBERG: Okay.

DR. STEINWACHS: Carol and I don't handle our own calendars; I can see that already.

MS. McCALL: Actually, I do.

DR. STEINWACHS: Do you?

MS. McCALL: I have no administrative assistant -- and I mean none. What I lack right now is a computer. It's waiting for me at home.

MS. GREENBERG: Okay. We'll start with the week of -- let's see, there's July 4th, and then maybe start with July 6th, because you don't want people to have to travel on July 4th -- start with July 6th and poll up until the 15th.

MR. HUNGATE: There's a 6th and 7th. Try the 6th and 7th, which is Wednesday and Thursday in that week.

MS. McCALL: Just start polling that week, or at that time, and then we'll figure it out.

MS. GREENBERG: Okay.

MR. HUNGATE: I know I'm going to be wiped out a good bit of that, but --

MS. GREENBERG: Okay. So let's just poll for that.

MR. HUNGATE: Let's find out where we are. Poll through the end of August.

MS. GREENBERG: Through the end of August.

MR. HUNGATE: Through the end of August.

MS. McCALL: I'm also going to make a recommendation that we have a conference call as a group between now and before that.

MS. GREENBERG: Oh, we'll have to.

MR. HUNGATE: Yes, a couple of them.

MS. GREENBERG: Particularly if the second day is going to be hearing from other people or whatever, that's the biggest time constraint really, getting on their calendar once we've agreed on dates.

MS. McCALL: I think we need two, then. One is to wrap up what we've done here, and we need to be able to put a period at the end of the sentence.

And then I think the second one would be to plan for whatever this July/August-ish thing is going to look like, okay?

MS. GREENBERG: Am I correct that you might want to do the wrap-up before the June 28-29 meeting, so that you'd have something to report to it?

MS. McCALL: Oh, my!

MS. GREENBERG: Yes. I don't mean the conference call. I mean that you would like to --

What are the next steps here? You and Carol and Anna are going to work on integrating all this.

MS. POKER: Yes.

MR. HUNGATE: There's going to be all this, right?

MS. McCALL: Right. We're going to integrate all this together, and that includes the notes that you've seen on these pages; we now have pictures of them, courtesy of Justine, and typed versions, so they're going to be kind of organized a little bit differently and some subsumed. We'll bring all these together.

We'll also integrate that with what Susan is putting together for actual notes of the meeting itself so that she can kind of --

MS. POKER: Can they be separate? Because I think that would be too much reading.

MR. HUNGATE: Now, they're going to need the notes of the meeting.

MS. POKER: Right. But shouldn't we put it as two separate attachments?

MS. McCALL: I don't know exactly how it'll organize.

MS. POKER: Okay.

MR. HUNGATE: Yeah, it'll settle down.

MS. McCALL: It'll get organized in a way so that all of this information doesn't come out of context where somebody is trying to say, what happened back there?

MS. POKER: That's fine.

DR. CARR: Susan is doing that? Right.

MS. McCALL: Then I would like people to opine on that, especially with respect to how some of these things are going to get integrated back into these, because I think that then becomes the basis on which we add and expand. We go deeper and broader in some of the work that we're doing.

MR. HUNGATE: Is a week the right time frame for the pass that comes out of this collective process, the first document?

MS. McCALL: You mean for the electronic pass?

MR. HUNGATE: Yes.

MS. McCALL: Yes, I think that it should be out no later than a week from today, electronically.

MS. POKER: Wait -- that's next week.

MS. McCALL: You're not going to have any sort of --

MS. POKER: It couldn't be like till evening, because a conference is going on, so we have like evening stuff going on.

MS. McCALL: Okay.

MS. POKER: Can we just do it 10 days out? As soon as I get back, that's what I'll start doing.

MS. McCALL: I think I'll still take a straw man attempt and then you can react to it.

MS. POKER: All right.

MR. HUNGATE: So, we'll shoot for the 10th in whatever way we can.

MS. McCALL: You'll still have a chance to react.

MS. POKER: Fine.

MS. McCALL: Shoot for the 10th, and then I'd love to have a conference call.

MR. HUNGATE: Conference call in the window of the 13th to the 15th.

MS. McCALL: Anywhere in there.

MS. GREENBERG: Of what?

MS. McCALL: June.

MS. GREENBERG: Okay, so you'll poll for that? Okay, thank you.

MS. McCALL: Okay. Somewhere in that week. And then we need something to plan our Boston trip.

MS. POKER: Can we discuss it also on this phone call? I'm sorry, Stan. Between the 13th and 15th, should we also discuss it, or should we start polling for that in addition -- how do you want to do it? Because we could do two polls: one poll for the phone conference, and simultaneously a poll for the site visit.

But first of all, Justine, could you find out with John, and then based on that --

MS. McCALL: What I would do is handle the polling separately, only because they're contingent on different things.

To wrap this up, all we need is ourselves, you know? We don't have to wait for John to find dates and facilities. Because we need two discussions: one is right away; the other one is in prep for, you know, this Boston trip.

MS. POKER: Site visit.

MR. HUNGATE: And I think the second discussion probably has to occur about a week after the first one.

MS. McCALL: So we can poll for those at the same time?

MR. HUNGATE: Right. Those two phone calls can be polled for at the same time.

MS. McCALL: Yes. It's this other one that's contingent on some other planning and other people that --

MR. HUNGATE: Yes.

MS. GREENBERG: You don't want her to poll yet for this meeting in Boston because first we obviously have to find out. There's no point polling for dates when John is going to be on vacation.

MR. HUNGATE: Okay.

MS. POKER: Can I make a suggestion? You let me know when John gives you the okay, what dates, and then I'll take care of the polling with Jeanine, if that's fair?

MR. ETTINGER: One thing you might want to consider in the dates -- if you want to go see the VA, that's another set of dates; you might need more time.

And you might want to ask him, because when I did it with David, I just sat at his console as we went through it. It's much nicer to do it that way than to have, like, a slide show. So if there are too many people, it's difficult.

MS. GREENBERG: Which is very good.

MR. ETTINGER: Yes, I've seen him do it.

MR. HUNGATE: Okay, I think the second polling is maybe between the 21st and 24th?

MS. GREENBERG: What was that?

MS. POKER: That's for the site visit. This polling is for what?

MS. GREENBERG: Second conference call, the week of the 21st through the 24th.

MR. HUNGATE: Two days. One day, site visit; second day, panels. Right? That's what we talked about, isn't it?

MS. GREENBERG: Yes, bringing in people.

MS. POKER: Right. But this time you're talking about polling for it between those times, or actually making the site visit between the 22nd and 24th of June?

MR. HUNGATE: No, no, that's polling for the phone call to plan the site visit.

MS. POKER: Got it.

MS. GREENBERG: We're talking two phone calls.

MR. HUNGATE: Two phone calls. One to settle all this and second to deal with the next session.

MS. POKER: Got it.

MR. HUNGATE: All right, have we done enough work?

MS. GREENBERG: Since you said we have to end in five minutes, I hope so.

MR. HUNGATE: Anybody got any add-on thoughts before we let this sit?

It's been fun. Thank you all. We're done for today.

[Whereupon, the meeting was adjourned at 3:55 p.m.]