[This Transcript is Unedited]

NATIONAL COMMITTEE ON VITAL AND HEALTH STATISTICS

FULL COMMITTEE

June 28, 2001

Renaissance Hotel
999 9th Street, NW
Washington, DC

Proceedings By:
CASET Associates, Ltd.
10201 Lee Highway, 160
Fairfax, VA 22030
(703) 352-0091

TABLE OF CONTENTS

Patient Safety Task Force

National Quality Forum and IOM National Quality Report

Action Items

National Electronic Disease Surveillance System (NEDSS) and Public Health Conceptual Data Model

Reports from Subcommittees and Work Groups on 2001 Work Plans

Future Agendas for NCVHS Meetings


P R O C E E D I N G S (10:05 a.m.)

DR. LUMPKIN: Good morning. Welcome to the second day of the meeting of the full committee. Since we continue to be on the web, let's start with introductions.

(Whereupon, introductions were performed.)

DR. LUMPKIN: The first speaker will be on the patient safety task force.

Agenda Item: Patient Safety Task Force

DR. MEYER: Good morning. Thank you for inviting me here to tell you about what is I think a very exciting and quite nascent project within the Department of Health and Human Services.

Let me begin by pointing out to you that you do have a fact sheet in your agenda materials concerning the Patient Safety Task Force. I will not be referring to this explicitly during my comments, but you can use these for future reference, and they do capture much of what I am going to be saying this morning.

I also want to start by recognizing that I am here representing what is a collaboration among four DHHS agencies. The chair of the task force is Dr. John Eisenberg of the Agency for Healthcare Research and Quality, who could not be here with you this morning. We also have Steve Jenks from the Centers for Medicare and Medicaid Services. In addition, we have Janet Woodcock from the Food and Drug Administration, and we have Julie Solomon from the Centers for Disease Control. These are the principals who have been working this process through to date. With that said, there are a number of staff in all of the agencies involved who have been putting a great deal of time into the Patient Safety Task Force work.

Let me begin by telling you a little bit about what we are aiming to do. We are trying to make a very small but important contribution in terms of providing opportunities to learn from errors.

I wanted to begin with this slide. I borrowed this from Jim Battles, who is here in the room with us. Just to make sure that the committee recognizes that the entire spectrum of what we are talking about here goes well beyond those incidents or accidents where patients actually receive harm because of a hazard. In fact, there is much that we can learn from no-harm events, those times when the patient suffered some consequence, but they actually recovered quite well, and near-miss events, where there was no harm at all, the hazard was averted at the last minute, and we frankly just got lucky.

It is important to recognize that we can learn from our failures. I think Pareto said this quite well: Give me a fruitful error any time, full of seeds and bursting with its own corrections. You can keep your sterile truth for yourself.

We, the members of the task force, believe quite strongly that we can improve our processes when we study and learn from the incidents along that spectrum I just shared with you. No-harm or near-miss events may predict bad outcomes or disasters.

I would beg your indulgence; with a show of hands, how many people here have been involved in a fatal or near-fatal car wreck in the last six weeks? The answer to that is, thank goodness, it doesn't look like any of you. How many of you have been involved in a near-miss, where you almost were in a fatal or near-fatal car wreck? The truth of the matter is, none of you are admitting that you drive on the Washington Beltway, where that is in fact a daily event.

But think back over the last few months of your driving history, and you will recognize that there are a couple of things that have happened where you almost got in an accident. Hopefully you have learned some lessons from that. It may have been that you reached down to pick up your cell phone, changed your radio, or were talking to the kids in the back seat, or you were too tired because you stayed up late the night before. Whatever that may be, we can learn as much from those near misses as we can from those times when there actually is a wreck on the highway.

In terms of the controversial issue of reporting about patient safety events, one of the things that we had to some extent assumed during the process of generating the response back to the President on the IOM report from the Quality Interagency Coordination (QuIC) Task Force -- and that report was called Doing What Counts -- was that there was a public demand for some information regarding safety events.

That was actually confirmed last fall when we did a poll with the Kaiser Family Foundation, which found that 73 percent of a nationally representative sample of Americans felt that the government at some level should be requiring health care providers to report medical errors, and should make sure that this information is publicly available. There were no other qualifiers put on exactly what that information would be.

We also know that, in addition to there being some public demand for work on reporting, there are systems that work. I share this with you. This comes from the CDC's National Nosocomial Infections Surveillance (NNIS) system. If you look at the top line, those are patients who are at high risk for a hospital-acquired infection. You will see that the hospitals that participated in the CDC's NNIS program had a remarkable decline over time.

What this program does is couple reporting of nosocomial, or hospital-acquired, infections with a system to develop programs and interventions that reduce that risk. So we can see here that this is not just theory; in fact, by coupling a system of reporting with some improvement interventions, you can actually get a dramatic improvement, in this case in the safety of patients in terms of their risk of hospital-acquired infections.

With that said, we also recognize that there is a very controversial issue regarding reporting. That is the tradeoff between reporting for accountability and reporting for change or improvement. Somewhere between those two there is a balance between those competing needs.

David Marx has described that in his term, a just culture, a system where there is an inherent justice and a balance between those needs, where there is a feeling of trust on the one hand and there is motivation to report on the other.

This is a great example of that. This comes from the medical errors reporting system in transfusions, again another slide from Jim Battles. This shows a hospital's experience with transfusion related patient safety events.

What happened at the end of that year? Was there something really horrible that happened? Or did in fact the staff recognize that there was an opportunity to learn and they started reporting more? We need to understand that, because looking at this slide alone, one could come up with two very, very different conclusions about what is going on here, one of them being that there is something wonderfully powerful and good that is going on here, the other one being that they are suffering some terrible problem.

The Institute of Medicine report, and then the QuIC report that followed it outlining steps that the federal government would take, identified a number of areas where we would do some work on reporting systems. Much of that work is being done through contracts by AHRQ and CMS with the National Quality Forum. We are asking them to develop two types of measure sets.

The first one is a small list, a small parsimonious list of egregious, easily preventable errors that occur that in fact should never occur in hospitals. What we are talking about here are things like wrong site surgery, very rare events.

I have unfortunately gone on public record as saying I don't think that is going to do much to improve the safety of American patients, and I stick by that comment, because the truth of the matter is that these are by definition rare events. In fact, if we wiped out every incident of wrong site surgery in America tomorrow, if they never happened again, the safety of American health care would improve only a very, very small amount.

The second area where the National Quality Forum is doing some work is in getting together a list of patient safety practices. Here, this is following the systems-based approach that the Institute of Medicine recommended.

What we are asking the National Quality Forum to do is to look at an evidence-based practice report that AHRQ is commissioning, which will be released some time in the middle of next month. That report has examined a number of patient safety innovations and practices, and looked at the evidence base behind them showing that they can in fact improve patient safety: which of those -- thinking about the U.S. Preventive Services Task Force model -- are Grade A, with very strong evidence that they work; which are Grade B, where there is some evidence but it needs some bolstering; which are Grade C, where it looks promising but the evidence isn't very strong; and so forth.

What the Quality Forum is going to do with the results of that evidence based practice center report is, they are going to decide which of those have a strong enough evidence base such that that is information that may in the future be useful to the public. So that, in the future, you may know when you or your loved one goes to a hospital, whether or not that hospital has certain interventions available that in fact can improve your safety as a patient. We are not going to be asking hospitals to report on individual incidents. We are going to be providing a data set available in the public domain, a measure set where they can then go out and do some reporting on what systems they have in place.

We also have initiated a patient safety research portfolio at AHRQ. First and foremost in the pantheon of RFAs that we released this year, which we are right now in the midst of reviewing, is one that focuses on health system error reporting, analysis, and safety improvement demonstrations. This program is approximately one half of the $50 million research program budget. The reason for that is that we got very explicit direction from the Congress, noting that when we and others were asked what the ideal reporting system is, our answer frankly was, we don't know, and that we needed to make a significant investment in research in this area.

With that as background, I want to spend a few minutes just introducing the Patient Safety Task Force to you, and then open it up for some questions.

We began to look at our own systems, which eventually led to the formation of the Patient Safety Task Force, and very quickly discovered that there are a number of existing reporting systems out there, including those run by JCAHO. There are federal systems. There are some private sector systems that are run by hospital organizations.

The National Academy of State Health Policy did a report for us looking at existing state-based systems. What we found is that in addition to those that currently exist, there are a number under development. However, they by and large are very uncoordinated, even those within the federal government, in fact, even those within a single department, even those within a single agency.

In addition to that, many of them I think can be fairly but harshly characterized as what I call a data graveyard. They have paid exquisite attention to trying to develop mechanisms to try to get data in the door, information in the door, and have completely or in large part ignored the responsibility to analyze that data and to feed it back to those that are doing the reporting, so that they can actually improve their practices. They really by and large failed to close that quality improvement loop.

There are also a number of reporting systems where there is redundant effort, where the same information is asked for on multiple forms. As you can well imagine, that results in an overall breakdown of the patient safety effort related to reporting.

The truth of the matter is, when I am sitting in my office and I'm seeing a patient, and I have a waiting room full of patients waiting to see me, and that patient that I am seeing at that time tells me about a certain adverse event which may indeed be reportable to some federal collection mechanism, I sit there and I struggle with that question. Do I take the time it will take to do the reporting, or do I see my next patient, who will be out there waiting in the waiting room for another ten minutes?

Even more to the point, when I look that next patient in the eye, can I tell them, by the way, I'm sorry that I kept you waiting for ten minutes, but I learned something very important from the patient who came in previously, and I think that information in the long run is going to help me take better care of you and your family?

We can't do that right now. We can't honestly do that right now, by and large, because we are not adding value to that reporting function. And guess what? It doesn't happen. People don't do it.

Within the Department of Health and Human Services, we found that there were a number of existing systems, including the CDC's National Nosocomial Infections Surveillance system. The FDA has the MedWatch and MedSun programs, and the Centers for Medicare and Medicaid Services has the PRO programs. Among those, a single event could require multiple reports to different agencies and departments.

For example, a Medicare beneficiary comes in and gets a nosocomial infection that was due to a device failure affecting the antibiotics delivered on the floor. You can easily conjure up an event that would in fact require reporting to all of those systems that are listed there.

With that said, all of them ask for those reports on different forms, and they ask for much of the same information. It is a major disincentive to the reporting. Again, at the end of the day, what is the value that you have given the reporter in terms of providing the information that they can use for improvement?

The Patient Safety Task Force is our answer to trying to at least get our own internal house in shape to make some progress on this issue. We are looking at the existing reporting systems, many of which I just listed on the previous slide, and trying to say, what can we do on the front end? How can we make that interface for reporting more user friendly, such that we don't ask for redundant information, so that you can go to a single place, put in the story about what took place, and have it then be sent out to the appropriate federal agency, so that you don't have to go in and fill out multiple reports?

In addition to that, there is a back end piece of this. That is, how can we integrate this data so that we can take advantage of those learning opportunities, where you see the whole picture -- for example, of the event I just described, with the Medicare beneficiary in the middle of a device failure that affected antibiotic delivery and led to a nosocomial infection? How can we integrate the data so that we can look at those?

This is I think a fair schematic of what currently it looks like out there. Hospitals, clinics, doctors' offices and other facilities face a number of potential primary collectors of information. It could be their health care organization, it could be a health plan and others. At some point in time, that information is then sent over to the appropriate federal, in this case, DHHS, agency and sent out to the state and the accreditors.

The truth of the matter is, in terms of those tertiary collectors making that data available for research and making it available to policy makers, it just doesn't happen.

What we are trying to do in the Patient Safety Task Force is build a system that is easy to use and reduces reporter burden, that provides reliable and valid information to those that could benefit from it, that maintains confidentiality, and that reflects and supports public-private collaboration. What we are talking about is a very small piece of work, in terms of trying to get four agencies working together. But we recognize that there are many issues that we are confronting that are confronted by other reporting systems, and I'll go into them in a moment here.

The bottom line for all this work though is that we try to insure that our work on reporting systems can result in improvement. As I argued before, the added value often is not there in the reporting equation.

There is also another important piece here. This is well illustrated by a study done by Harold Wolfe, Galend and others, which looked at the reuse practices for dialysis membranes. Many of you will remember this study from about seven years ago. What that study found, in short, was that certain practices for disinfecting dialysis membranes were inadequate, exposed patients to risks of infection, and led to patient deaths.

As you well know, this resulted in a change in end-stage renal disease treatment practices around the country, and in relatively short order. Here is the scary part. The scary part was that in order to do this study, the researchers had to sit down and hand match data that was being collected by the CDC, that was being collected by HCFA, that was being collected by the FDA.

By doing that effort, they were able to come up with this incredibly salient, important finding, which improved the safety of a group of patients quite dramatically. What is scary about it is, how often are we going out there and hand matching data? What are the missed opportunities that are out there? This task force is not about going out and collecting new data; it is about looking at our existing data collection efforts and trying to integrate them in a way that, in the future, the data will be available for research without that tremendous effort of hand matching.

There is a quick vision schematic here. What we are hoping to do in the end is to produce a common user interface. This is some time away, but we are taking beginning steps toward developing this. There also will be a federal data pool, a pool of de-identified data that will be useful for researchers and others to mine, so that we can do the kind of studies that I just showed you, but do them on an integrated data set where there are many fewer barriers than there currently are to doing this type of research.

We held a reporting summit at the end of April. At that summit, we invited a number of stakeholders, ranging from about 20 states to provider groups, to health plans, to insurers, to vendors of IT systems, and told them about what we were trying to do, and to get some ideas from them about what are the barriers and what are the opportunities.

We learned many things. One of them is that there is clearly a call for some leadership here, and many welcomed the effort within the Department of Health and Human Services.

We also need to pay very, very careful attention to the burden issues -- Don Berwick's concept of all and only -- to insure that we collect all the data that we need, but we collect only the data that we need.

We also heard from numerous stakeholders across that entire range I just mentioned about the need for work on language and vocabulary. What we have developing right now in these new reporting systems, and among the existing reporting systems, is a Tower of Babel, such that different people are using different definitions. Here is a place where we will confront that beast: as we try to integrate our work within the Department of Health and Human Services, can we use that experience as a lever to produce a product that will be much more useful for states and others that will be developing new systems?

There is also a need to get public input into this process. We began that with that summit, and we are developing some plans about how to do that in the future, and keeping the focus on learning. This is not a system for accountability. I showed you before those scales there; we are talking about the right side of the scale here.

This is a system that is designed for improvement. This is not an accountability system. The existing systems that we are going to be working with here in the Department of Health and Human Services are ones which we feel will be valuable for improvement. They are not ones we feel will be especially valuable for public reporting in the future.

We are avoiding new reporting mandates. We are starting out with a relatively modest effort, compared with some things that are going on in the private sector. But again, we think there are some synergies and opportunities there, in terms of leading by examples.

We will be asking some partners, for example, states with reporting systems, to expand that synergy and to tell us about the data collection possibilities, how can we develop a platform that in fact will be useful to them.

We are also working with international collaborators. You can well recognize that the issues, especially in terms of vocabulary and data coding, do not respect any international boundaries. I spent some time in the last couple of days with officials from Australia, New Zealand, and the UK, and they all embraced the notion that some work should be done here, and that it should be an international piece of work that will be valuable to them as well as to our efforts.

Our next steps. We are going to be doing a feasibility or implementation planning study. We are going to make progress here. This is something we have decided and made a commitment to. We are going to be looking at how we can examine the existing systems -- what is similar about them, what is different about them -- and, as a first step, trying to move toward a uniform reporting interface to address the burden issue.

We are also considering doing a pilot project, perhaps focusing on end stage renal disease. We are in very early discussions on that. We have also discussed some potential work with the Institute of Medicine or others to move forward on the vocabulary and data standard issues, and this is in the very early phases. We are also working on some partnership building, and we will be responding to new federal legislation in the future. For example, there is legislation sponsored by Frist, Kennedy and Jeffords related to patient safety reporting that we will in fact have to be cognizant of as we move forward.

If you want to get updated on this activity, you can do so through the website at www.ahrq.gov, or feel free to contact me as well. My contact information is located at the bottom of the fact sheet that you have here.

Thank you very much.

DR. LUMPKIN: Thank you. You are going to be here for the panel also?

DR. MEYER: I am.

DR. LUMPKIN: Maybe we can take one or two questions and then move to the panel.

DR. STARFIELD: Thanks, Greg, that was very helpful. Could you help us with terminology? Your first slide used accidents and errors. This morning, I don't know whether you saw the front page of USA Today?

DR. MEYER: No.

DR. STARFIELD: It reported on the fact that one in 20 patients who undergo laser eye surgery is severely damaged. That is really not an accident and it is really not an error, but it is a lot of people. Is this something that you are looking at?

DR. MEYER: I think that this gets to vocabulary issues. After working with this in the Department of Defense and talking to a number of folks, both here and elsewhere, about how to best frame this issue, one rubric that works quite well is the term hazard.

What we want to do at the end of the day is reduce the hazards faced by patients in our care. Those hazards could in fact arise from an ill-conceived procedure, they could arise from poor execution of a procedure, they could arise from a lapse or a slip, if you want to go into the terminology that the safety folks use. But I think the term hazard is --

DR. STARFIELD: It's none of the above. Laser surgery is just none of the above. It is an adverse event.

DR. MEYER: Right, but I think if we focus on the issue of the hazard, of reducing the risk that patients face, it pulls in many of these concepts.

One of the things that we frankly struggle with -- and I am speaking now not from a Patient Safety Task Force perspective, but from an AHRQ perspective -- is where the boundary lines lie between the patient safety issue and the quality issue. In many senses, what you are talking about in terms of the laser eye surgery is an example where there is clearly a need to think about where that boundary lies.

At the end of the day, we need to do work on the issue. I think if you use generic terms like hazard, they can be very, very helpful, and they can get us away from getting caught up in language of error, mistake, which I think will actually slow us down.

DR. SHORT: I'd like to take a step back. The Patient Safety Task Force is a fascinating example of something that is happening that one could argue there is a need for much more of. This has incredible potential for cross-collaboration and coordination among the agencies.

You have described a coming together of CDC, FDA, AHRQ and CMS to address this particular issue in ways that are exciting and, as you have attested, may be long overdue. But what concerns me is that some of the needs of this kind of collaborative activity may in fact be based upon common views of clinical data in general, and missions that may go way beyond just patient safety, that relate to these agencies and others within HHS.

So I get a little concerned that this itself might be an example of something that is a little too narrowly defined. It doesn't look at the big picture with regards to the need for a shared big-picture vision across all the HHS agencies of how they need to be collaborating in general, when it comes to clinical data management, reporting, interface with electronic medical records systems that are out there in the vendor community. There are just so many issues that come spilling out.

Of course, maybe the only way to tackle it is with some narrow, well-defined problem, and work on that. But if that doesn't happen in the context of recognizing that it is only an example, you may close your doors on some very important long term issues with regards to that kind of collaboration.

I just wonder to what extent that is recognized and discussed.

DR. MEYER: What you have done is, you have just reflected on one of the key discussions within the Patient Safety Task Force. That is, do we keep this as a relatively narrow and small effort, recognizing though that it does have larger implications, and it needs to interface with many different things, federal activities, many private sector activities.

We have had a great deal of discussion about that. As we have gone through that process, we have come to the conclusion that in order to make progress, we need to keep this effort small. However, with that said, we have been doing a fair amount of outreach to make sure that number one, we let all those other stakeholders know about what we are up to, because frankly, it could seem quite threatening if you talk about four federal agencies working together on patient safety reporting. I think you could write a very interesting, ill-informed USA Today article about what that is about.

So we have really been trying to go out there and make sure the stakeholders know what we are up to. But in addition to that, it has been an opportunity for us to hear from them about how things fit together. In particular, the summit that we had at the end of April provided a great opportunity to do that. We came away from that recognizing that we need to continue doing that kind of outreach effort, to keep that big picture on our radar screens, recognizing that for the time being, we are focusing in a very, very insular fashion with something that we feel we can start to make progress, and with that, we hope to be able to move that and expand it outward.

I'll give you an example of some of the thinking about how these things could relate to each other practically. One of them is in terms of thinking about the way that we can integrate our reporting interfaces among the HHS agencies.

One of the starting points that we have come to is, we absolutely need to think about in IT terms this notion of this open architecture. So we recognize that yes, there is this opportunity there, we have been talking to states and others. Yet we are trying to focus inward for now to get some progress made, with the hope being that by continuing engaging stakeholders, we will be able to roll this out and bring future partners on board.

Our ultimate vision of this program is not just to develop a reporting interface integrating data from four federal agencies. It is much broader than that. But we think that by tackling that small, defined problem, we are going to provide some value in terms of the efforts to tackle the wider issues.

I think the classic one there is the vocabulary issue, where we have to tackle that for our effort. But I think that will be a more generalizable and useful product.

DR. SHORT: Just as followup, there are examples of technical issues that without great care might get solved differently by different agencies, but with some coordination could have a huge simplifying impact on the way in which vendors and developers of data systems for use in hospitals and practices will see the reporting function, for example.

A good example of this is communicable disease reporting to the CDC or to local public health agencies, as compared to adverse drug reaction reporting to the FDA. If those aren't seen as two examples of the same phenomenon, then they will end up having to be implemented in two different ways in medical record systems. Even if we end up with Internet-based reporting of these kinds of things, unless those two agencies basically say, we have got two examples of the same phenomenon and we have got to come up with a single way of doing it, the vendors will never end up with a single solution at their end.

It is a technical detail, but it will only get addressed properly if those agencies basically come up with a common view of what they get back from the periphery.

DR. MEYER: I couldn't be in better agreement.

DR. LUMPKIN: Thank you. We are going to have to move on to the panel.

Agenda Item: National Quality Forum and IOM National Quality Report

DR. KIZER: Thank you, John. I was asked to comment this morning about the National Quality Forum and some of the genesis and operation of the NQF and some of the projects that we are currently working on.

I intentionally am not going to use slides. I think the main point is going to be covered fairly quickly, and hopefully I'll leave a little time for some dialogue.

The National Quality Forum -- or, by its formal incorporated name, the National Forum for Health Care Quality Measurement and Reporting, which is a bit much to say without taking a breath, so by and large we refer to it as the National Quality Forum -- was conceived by the President's Advisory Commission on Consumer Protection and Quality in the Health Care Industry, which was initiated in 1996. They reported out in 1998. They made a number of recommendations, including the patient bill of rights, the creation of the National Quality Forum, and some other things.

Subsequently, a planning committee was convened that conceptualized a governance structure and an operational mode that led to incorporation in the District of Columbia as a 501(c)(3) public benefit corporation in May of 1999. I joined them as the president and CEO later that year, and we actually became operational in about February of 2000. So the NQF for all intents and purposes has been operating for just about a year and a half. We are just starting to have things come out of the pipeline that are gaining recognition.

I suspect that many of you, if you have heard about the Forum, don't know much about it, and I think that is the reason why John asked that we say a little bit about it.

The primary purpose of the Forum is quite simply to increase the provision of superior or high quality health care. I think the President's Advisory Commission recognized the need for an entity that could standardize the metrics for reporting of quality. They opined, consistent with our American culture, that that entity should be in the private sector, although it would be unique in that it really is a public-private venture. I'll say more about that in a moment.

The basic strategy of the Forum is really quite simple, although there has been a little mission creep over the last 18 months or so. But the fundamental strategy is simply to standardize the health care quality measures and how things are reported, to encourage the public disclosure of this information. Indeed, when organizations join the Forum, they sign a statement of principle acknowledging their commitment to the public disclosure of data. Then finally, the third prong of the strategy is simply to overtly encourage health care purchasing decisions to be made on the basis of quality of care data. And of course, if that data is going to be meaningful, it has to be standardized and uniform, so that one can make real apples-to-apples type comparisons across the country.

There are some unique attributes or aspects of the Forum that are worth noting. The first of these is the broad stakeholder representation, which includes both public and private sector entities on the board of directors. Our board now includes CMS as well as AHRQ and the Office of Personnel Management from the federal government. A couple of state agencies are represented. John, as the health officer for the state of Illinois, is a member of the board. General Motors, AARP, the March of Dimes and a number of other prominent entities from the private sector constitute the rest of the board. By design and as part of our bylaws, a majority of the board is composed of consumer and purchaser organizations.

One of the questions that inevitably arises is, how can federal agencies sit on the board of directors of a private sector organization? That is possible under the provisions of a law that is not particularly well known, called the National Technology Transfer and Advancement Act of 1995, which when it was enacted I know did not have health care in mind, but it says basically that the federal government must in many cases, and should in other cases, adopt private sector standards that are developed by a voluntary consensus process. Then it says some things about what voluntary consensus standards or a voluntary consensus standards body are. The Office of Management and Budget has further opined on what constitutes that, and the long and short of it is that our consensus process meets those requirements, and we qualify as a voluntary consensus standards body, so long as we follow the outlined consensus process.

The law also specifies not only that the federal government must or should, depending on the circumstances, adopt those standards, but also allows for the federal agencies to participate in the governance and operation of those organizations. That is why HCFA or CMS or other federal agencies can sit on the board.

The second characteristic of the Forum which is unique is what is known as a strategic framework board and a strategic framework board process. As part of the effort, a group of nine of the world's most well-known authorities on quality improvement, organizational change and related subjects was convened to lay out a national strategy for health care quality measurement and reporting that in essence would tie things together, so that there actually was a coherent and comprehensive strategy for how all the different data needs would be met and how different audiences would get the data that they need -- in many ways, as Greg was talking about before, how the all and only principle actually could be operationalized.

They were appointed in November of 1999, started work shortly thereafter. They have just produced their report, and that report is going through the formal consensus process now.

Another attribute of the Forum is that it really has an implementation strategy built into its governance and operations. By having the largest public purchaser of health care, CMS, as well as the Office of Personnel Management, which contracts for the federal employees health benefit program, and large private purchasers like General Motors on the board of directors, when the Forum endorses things, they have the potential to be immediately operationalized by both the public and private sectors. While that is at the moment an untested or untried strategy, it certainly has the potential to be a more potent strategy than has been utilized before.

I should say -- this is a sidebar -- one of the reasons why it is advantageous for federal agencies to participate in this process is, they can adopt these standards without going through rulemaking, which obviates a lot of the work that goes into setting or adopting standards for the federal agencies.

Then I think perhaps a fourth thing worth noting here is that the Forum is essentially an organization of organizations clearly biased to include the consumer and purchaser perspective, and to create equity among all of the various stakeholders.

For example, all organizations when they join participate in one of four member councils: the consumer council, the purchaser council, the provider and health plan council, or the research and quality improvement organization council. Each of those councils has one representative on the board of directors who can advocate the perspective of that council, but there is equity insofar as consumers, purchasers and providers each have one vote. It has created some interesting dynamics insofar as physicians and hospitals and health plans and non-physician providers and everyone else in that category of provider are all part of the same council.

And of course, the immediate reaction, as you might predict, was that at least one large component there felt that we had clearly made a mistake in conceptualizing this, and the physicians should be separated out from the rest of the providers, and came to the board requesting that a separate council be established. The board overtly declined that request, and the reaction was, clearly you didn't understand the issue. So they came back a second time, and the board again made it clear that the idea is that providers are going to have to learn to work together and perhaps focus on what are the common elements and what may be more important for the patient and for providing care, as opposed to some other things as they go through this exercise.

There are a number of things I could say about goals and specific enabling activities that are underway. In the interest of time, I would just say two further things. One is to make the point that we are not in the business of creating new measures. There are plenty of organizations out there -- whether they are government or accrediting bodies or research organizations or proprietary interests -- that are creating potential quality improvement metrics.

What we see as our niche is looking at those measures that exist, seeing what the evidentiary base behind them is, how sound the scientific and technical information and criteria behind them are, and whether they meet national priorities for improving health care. Indeed, I didn't mention it, but one of the projects underway is to try to establish consensus about what the national goals for health care should be, and then also to see how well those metrics meet the information needs of diverse audiences, and then on the basis of that analysis endorse certain measures that could then be operationalized by both the public and private sectors.

To give you a flavor for some of the work that is underway and some of the things in the wings, I'll just very quickly run through some of the projects that we are currently working on.

CMS has asked us to undertake a project to standardize performance measures for acute care hospitals, so that all Medicare participating hospitals would be reporting information about the same set of performance measures across the country. That project is underway. We hope to have at least the first of those measures out of the pipeline probably by early next year -- we had hoped for the end of this year, but it will probably be early next year before realistically they are done -- building upon what JCAHO and DEQUIP and others have already done in this regard.

I must just as a sidebar note that while they are not voting members of the board, we do have four liaison seats on the board, which are for JCAHO, NCQA, AMAP and the Institute of Medicine, and those liaison members actively participate in the board's discussions along with the voting members.

We also were asked by AHRQ and HCFA to compile a compendium of best practices for patient safety, in essence a guidebook for safe practices: what are all those practices that health care facilities should have in place to minimize the likelihood that the processes of care are actually going to cause harm or create hazards for patients.

To help with the analysis there, AHRQ contracted with the evidence-based practice center at Stanford-UCSF. They will be producing their report imminently, we hope. Then we will move forward with that, and we do hope to have that guidebook out by the end of this year or early next year.

We were also asked to develop a consensus around a set of measures or adverse events that might constitute the basis for a national medical error reporting system, a state based system. That steering committee has completed its work. They identified 27 things -- errors that are so egregious that they just shouldn't happen today -- that they feel would form the basis of such a system. They have produced a report that got some coverage in USA Today and some other newspapers a few weeks ago, and that report is going through the formal consensus process now, with the hope that by the fall we will have achieved consensus on that.

We were also asked to look at quality metrics for minority populations. Indeed, one of the reasons why I have to run out early is that our workshop in this regard starts at 12 o'clock today over in Arlington, and I need to open it. But we will be spending the next couple of days looking at the state of quality measurement in ethnic minority populations in this country, and whether there aren't some things that could or should be done to improve that situation.

Another project: we will be holding a summit on information management technology, probably in February of next year. That really is looking at whether the process that the Forum has for setting standards -- this generic consensus process -- may help break the logjam and achieve some of the things that Greg mentioned before, some of the political, if you will, roadblocks to moving forward in this area as far as open architecture, interoperability and standard nomenclature, as well as hopefully getting buy-in from some of the industry and other folks on things that they might take on to actually move this agenda forward over the next two or three or four years -- so that a few years down the road we actually would have a national health information infrastructure, and electronic medical records wouldn't be in the distant future, but would be something that was here and now.

We have a number of other projects that are at varying stages of negotiation, including a potentially very large project looking at the whole issue of quality of care in long term care. We are working with the National Cancer Institute on a potential project on standardizing quality measures for cancer care, and a number of other things that again, in the interest of time, I'll forego at the moment.

Let me stop there and pass the baton to whoever is next.

DR. MEYER: Do you think that given Dr. Kizer's schedule that we can take questions now and then move on with the panel? Some of these matters are a little bit different, because they are going to be concentrating on the quality report.

DR. LUMPKIN: Okay, we can take a couple of questions.

DR. BLAIR: Could you help me understand a little bit better how you are relating to the existing agencies and standard development organizations and professional associations that appear to be addressing the same issues that you are covering?

DR. KIZER: To help me answer that, maybe you could identify a couple of those entities that you have in mind.

DR. BLAIR: Well, part of them are the College of American Pathologists, which is working on a reference terminology, and HL7, the standards organization that is working on a reference information model. The mainstream standard development organizations.

DR. KIZER: I ask that, because I think that provides a good example. Part of the issue, for example in the IT area, is whether we are going to go with SNOMED or something else; someone has to reach agreement or consensus that that is what we are going to agree on and move forward with. The Forum provides that opportunity, through the consensus process that has been outlined, to bring all the parties together -- for example, CAP is a member of the Forum -- and basically look at it from the various perspectives: consumers, purchasers, providers, et cetera.

If the Forum board then endorses it, then the potential is for the government as well as the private sector entities to move forward, agreeing that that is where we are going to go. For example, in the case of the hospital performance measures, what is envisioned at this point is that when the Forum endorses those, HCFA will take them and operationalize them, i.e., they become conditions of participation. For General Motors and the Pacific Business Group on Health, the Washington Business Group on Health, et cetera, they would end up in contract language, and you would then start producing information that was comparable across the country among both public and private providers.

So I think that is different. Right now, for example, this is really a voluntary effort. With SNOMED there is no impetus necessarily to operationalize it.

Any other questions?

DR. SCANLON: One question to follow up with Ken. Ken, you alluded to this process of -- in the voluntary standards process, there are policies that allow participating groups to adopt those standards without going through a full regulatory process, unlike HIPAA, which deals with many of these same issues in the IT area and does require a full rulemaking process.

I am trying to see what the distinction may be. We tried this case with OMB, and it didn't seem to hold any water with them. In other words, for HIPAA standards, since we are really using the power of federal law and regulation to require that they be used industry wide, it was viewed as a rulemaking process. I am wondering what the difference is.

Is this purely voluntary? The members decide they will voluntarily implement the standard, or how do you reach the industry-wide scope here?

DR. KIZER: That's a good question. I think that some of it remains to be worked out. Part of -- in vetting the issues, there is broad industry buy-in, if you will, and participation in it. Certainly education is a first step in that as to what it is, and then ultimately buy-in and support for it.

There is the potential as well then to basically adopt them, which is what I think CMS is expecting and planning to do with measures that come out of the Forum. Greg can comment if he wants to from that perspective, but the potential for the government agencies is that the measures can immediately be operationalized, and they would be the same measures being operationalized in the private sector.

At the moment, since there are no products out there, I can't say that this is how it actually has worked. However, the array of folks who are involved and the level of engagement and participation certainly lead me to be encouraged that it may be more powerful and more effective than a new regulation, where usually the first response is, how do we fight it. Through the consensus process, there is buy-in as well.

DR. SCANLON: I guess the assumption is that if sufficiently powerful participants adopt that standard voluntarily then it creates in the market a force for others adopting it voluntarily. But you are not really forcing anyone to -- you don't have the power.

DR. KIZER: We don't have people in uniform that go out and police. It is really the power of persuasion and hopefully doing the right thing, and the buy-in into the process.

DR. FITZMAURICE: I just wanted to expand upon the exchange between Jim and Ken Kizer. To me, it seems like the standards being developed are like a code set. The relationship with the HIPAA transactions is that if you have a regulation that adopts a code set, then the organization that maintains that code set can make changes in the code set without having a new regulation. Whereas, if the government were to make the code set a part of the regulation, every time you change the code set you have to change the government regulation.

So what I believe Ken Kizer's organization is doing is developing a set of quality measures that could be adopted by a private program, and as they make changes in it through the voluntary consensus process, you wouldn't have to change a regulation if you had adopted this set of quality measures. Even though they have changed the components of it, you wouldn't have to change the regulation. You have still adopted that quality measurement set.

DR. KIZER: That is exactly right. I think certainly some of the federal agencies have seen the advantage to that.

Not to belabor the point, but I think all you have to do is look at the effect the Leapfrog Group has had on the market to see how powerful a voluntary effort can potentially be. When everyone is involved -- in this case the business coalition is pushing this -- frankly it is having a lot more effect than if the government were to write a regulation saying these are going to be the requirements.

So while you may not have the force of law, the power of persuasion may actually be more powerful.

DR. FITZMAURICE: It is market driven. To the extent that it is adopted to a fair extent in the market, it becomes a de facto standard anyway.

DR. LUMPKIN: Thank you very much. I think we really have to move on. Thank you for coming. Greg?

DR. MEYER: We are going to hear now from the two remaining panelists to talk a little bit about the National Health Care Quality Report. We'll start with Janet Corrigan from the Institute of Medicine.

DR. CORRIGAN: Thank you very much. I'm going to talk today about the report that IOM released just a few months ago on envisioning the National Health Care Quality Report. It is my understanding that you heard from my colleague, Margarita Hurtado, just before the report was released, and she probably gave you quite a bit of information on it. Margarita was the study director, and did a terrific job on this project. So what I'm going to do today is just very briefly go through the final recommendations, because I think you have heard about a lot of the rest of it. She couldn't reveal the recommendations to you at the time she was here, but I think she gave you a lot of background.

You also should have received a copy of the report, I hope, in the mail. We did send those out to you, and I think the executive summary is in your background packet.

The background behind this project: the President's Advisory Commission back in 1998 recommended that we have some sort of a National Health Care Quality Report, because the goal there of course was to try to really produce substantial improvement in quality over the coming five to ten years, and we wanted to be able to know whether or not we were making progress.

The Congress then passed the Health Care Research and Quality Act of 1999, which included a provision in it directing the Agency for Health Care Research and Quality to submit such a report to Congress on an annual basis, looking at national trends in quality of health care.

AHRQ then turned to the IOM and asked IOM to set up a committee of experts to provide some advice on what that report should look like, the conceptual framework, and some of the basic domains of quality that would be addressed. We then of course set up that committee, which was chaired by Bill Roper. The vice chair was Arnie Epstein. The committee released its report just a few months ago.

The purpose of the report once again is to serve as a barometer of quality, to systematically assess progress in meeting national aims and goals.

There are two audiences. The primary audience for the report is the policy and legislative community, Congress, the Administration and others. The secondary audience is the American public at large. There is the very strong sense that if we don't keep bringing the public along about quality of care, the message on quality and the need to improve, that we won't have the political leverage that is needed to take strong action to be able to keep improving over time.

The recommendations of the committee. There were ten in total, and they fall into four areas: the framework and categories, and I am going to spend most of my time on that; there are also a set of criteria and guidelines that the committee developed for the selection of measures and for the measurement set overall; there are some recommendations that pertain to the kinds of data sources, the characteristics of those data sources, data collection process, and last but not least, a recommendation about the reporting mechanism, how the information should be reported out.

The committee essentially adopted a two-dimensional conceptual framework. The first dimension is components of health care quality. They selected safety, effectiveness, patient centeredness and timeliness. These should start to look familiar. The Quality of Health Care in America Committee, which released the Crossing the Quality Chasm report, had six components of quality -- they called them domains -- and these are four of them.

The other two components of quality are equity and efficiency. This particular committee viewed equity as a cross-cutting issue in the report, and efficiency was outside the domain of the committee's purview.

So what we are trying to do is to take the work from a variety of IOM committees and other groups that are working in this area. Each time we start a new committee, we encourage them to the extent possible to adopt the same conceptual framework, adopt the same terminology, and define things the same way, because what we find is, there is just so much confusion out there with each group starting from scratch. So rather than start at home plate and only get to first base, start where the prior committee ended up, at first base, and see if you can get to second or third; that is the basic idea.

The other dimension of quality that the committee adopted is the consumer perspectives on health care needs. This is drawn directly from the work of the Foundation for Accountability -- staying healthy, getting better, living with illness or disability, and coping with the end of life.

I don't know that I need to go over the definitions. These are laid out in the book for safety, effectiveness, patient centeredness and timeliness, and once again, also for the consumer perspectives.

This is just a matrix that kind of shows you how the two dimensions can work together. If you then envision equity as a cross-cutting issue, you might look at all of these domains of quality or components of quality or perspectives on it, in the context of equity, by looking at different subpopulations or groups.

In addition to that, you can also envision this kind of a report or matrix being used for specific conditions. We call them priority conditions -- whether it is asthma or diabetes or nicotine addiction -- conditions that are very, very important, consume a great deal of health care resources, and account for a lot of health burden. This kind of a framework can also be used to look at those.

The committee adopted a variety of criteria, both for the selection of individual measures and for the measurement set overall. This committee wasn't charged with identifying the specific measures that would fall in each one of those cells in the matrix. They were asked however to come up with examples of the kinds of measures that would fall in each of those cells.

You will see in the report that for each of those components of quality, for example, effectiveness, there is a table in the report that has a long list of possible measures that might fall within that particular component, ones that AHRQ might choose from as they go forward.

I think one of the more important recommendations of the committee is that there should be an ongoing independent committee or advisory group that can help to assess and guide improvements in the report over time. The committee didn't recommend that this be a new group. It may well be able to be one that already exists, or some combination of groups that can serve this function. But they did feel it was particularly important that there be a broad-based committee that had both public and private sector representation in many different disciplines and areas of expertise, for several reasons, one to make sure that there is broad-based private sector support for this effort as it goes forward, and to make sure that there is appropriate input from experts in different areas, but then third, to also keep the pressure on for improvement in the tool over time, to make sure that we see quality improvement in the National Health Care Quality Report.

We are not going to get it right the first time. It is hard to do, and there are lots of limitations in data and available measures that have to be worked around. But with some sort of a vision -- and that is what they were trying to create here, is a vision of what this report should be five, seven, eight years down the road, and with continuous quality improvement, we should be able to achieve that over time.

What we have seen mainly in performance measurement in the past has been reliance on discrete measures. So for example, if you look at a tool like HEDIS, you will see a set of measures that target different areas. There are a few preventive service measures; for diabetes, there is whether you got your annual eye exam or foot exam.

What we want to drive towards over time though is to not only have individual measures of important characteristics of the medical care process and outcomes, but also to begin to have what will be a real comprehensive set of measures, that measure across all of the critical clinical areas, that covers all subpopulations, and that picks up the preventive care, getting better, living with illness, all of these various domains, a much more comprehensive set of measures.

There are some different ways to go about doing that. Our committee did hear a good deal about a tool that RAND has developed, called QA Tools, which relies right now on medical record abstraction, but it attempts to pull out information in those records for absolutely every aspect of care that should be pertinent to that patient. So if the patient had three different conditions, it pulls out the appropriate information to assess compliance with the evidence base on those conditions. It looks at all the preventive services that are relevant to that individual.

We may be able to think down the road towards more comprehensive approaches, where you could actually have some overall indicator of compliance with the evidence base as an overall indicator.

If you think about this concept of a comprehensive approach for an individual condition, too, we hope that over time we begin to move towards full sets of quality measures for, say, diabetes or the other priority conditions, that go way beyond just one or two indicators and look at the health care process for that condition and the complete management of that condition.

The committee felt that the primary reliance should be on medical care process and outcome measures. There is so much change going on in how health care is delivered and the structure of health care that they backed off a bit from structural measures, unless there was a very clear link between those structural measures and medical care process and outcome.

They also developed a set of criteria for the data sources that would be used to produce the National Health Care Quality Report. Needless to say, they should be credible and valid, and have a national scope with the ability to roll down to the state level -- one of the goals of the National Health Care Quality Report is to be able also to be produced at the state level. They should be available, offer consistency of data over time and timeliness, support population subgroup and condition-specific analyses, and hopefully provide public accessibility of the data.

It was recognized that there aren't many data sources that meet all of these criteria, but this is the overall goal.

I would also say too that there was extensive debate around the issue of using available data sources that are primarily from survey data or from administrative data versus how much do we really try to use this tool to drive investment in clinical information systems. That really is the key in the long run to being able to do what we want to do in this area.

The committee came down very strongly that whenever possible, when decisions are being made about the National Health Care Quality Report -- what will be included in it and what the strategy will be for data collection -- our long term goal here is the automation of clinical information and building that clinical information infrastructure, and to the extent possible, decisions should be made that reinforce that overall direction. That is a very important thing to do. So they would really encourage pushing towards automated clinical data sets, while recognizing that we are just not there yet, and we can't wait for that to happen before we produce the National Health Care Quality Report, which means that we will be drawing heavily on data sources such as MEPS and others that are currently available.

As for the release of the report, they recommended that the report actually have several versions, that it be tailored to different audiences, policy makers, consumers, purchasers, providers and researchers.

So I think some of the challenges in going forward will be populating the framework matrix with a sufficient number of measures, both by selecting existing measures, as well as beginning to define new ones, where we feel that the existing measures are pretty lean, and we need additional ones. Beginning to establish a comprehensive quality data set that can be accessible to the public and to researchers.

It is expected that the kinds of information in the National Health Care Quality Report will be at such a level that when we see differences or shortfalls in terms of the goals that we are trying to reach, what you really want to have is a rich data source available, so that individuals in the academic community, the health care delivery system, the public health community, state government and elsewhere can drill down more deeply to really understand what is driving some of those quality differences, and hopefully take action to improve.

So that is going to mean that we need not only a National Health Care Quality Report, but we need rich data sets that underlie that, that are available in the public domain for people to get to.

But this report also should be very focused. If it gets beyond -- if there are more than three or four or five key findings, you are going to lose the primary audience, which is the policy community and members of Congress and people in leadership positions in both the public and the private sector.

So let me stop there on the National Health Care Quality Report. I'd like to just quickly mention a couple of other initiatives that we have underway and what you can expect to see in the future as we keep moving forward in these areas.

IOM is planning to start an information technology and quality project. That will probably get off the ground before the end of the year. We think that there is a very real need to take a much closer look at what constitutes the national health information infrastructure, and in particular to look at issues not only of standards that Ken talked about earlier, what types of standards are needed and how best to produce them, but also to take a close look at the resource requirements, capital as well as operational resources that are needed to invest in hardware and software, and to retrain the health professional work force, to be able to function in an environment that delivers a lot of care in different ways than it has from the past, and to make the best use of decision support tools. That is going to require a good deal of retraining. There is also productivity loss that takes place as a lot of this transition happens over time.

So we will be focusing a lot more attention on IT and quality. We are also beginning to think about the next steps to follow up on the Crossing the Quality Chasm report. We will be holding a summit on the Chasm report in the late spring or early summer of next year, to bring together a lot of different groups to say what has happened since the release of both the Chasm report and the medical errors report, what have been the most successful efforts that have been taken, what are the major barriers that we still continue to see, and how can we begin to address some of those. I expect that we will organize efforts around some of those key environmental areas that were identified, whether it is the payment system, the regulatory system, whether it is the health professional education and training components. So we are going to try to really move on the Chasm piece.

The last thing I will mention is, we have an effort underway taking a close look at the Congressionally mandated study that has come to us from ASPE. That project is looking at the federal quality assurance and quality improvement efforts of Medicare, Medicaid, SCHIP, DoD and Veterans Affairs. That project will also be taking a look at the quality related research that is being done, whether it is in AHRQ or HCFA or elsewhere, and how we can try to modify some of those quality oversight programs and efforts for the kind of health care delivery system we see evolving over the next decade or two, how we can make them more efficient, how we can standardize some of the measurement activities, how we can make them work better with the private sector efforts, whether it is through the accrediting entities or elsewhere, issues of status, some of which are getting resolved as we speak. So that is another major project that will be releasing its report in September of next year.

Thank you.

DR. MEYER: We are now going to hear from Dr. Tom Reilly. Some of you had the opportunity to hear from me about a little bit of the background work and some directions that we were just starting with when we began the National Health Care Quality Report. One of the more important steps that has been taken to date is that we have had the opportunity to bring Dr. Tom Reilly over from HCFA, now CMS, to work with us and to champion this project and move it forward.

DR. REILLY: Thanks, Greg. I want to talk a little bit about what we are doing to move out on the recommendations that the IOM has made.

I would like to give you a little bit of an update on our current activities, what we are planning to do next, and then if we could hold some time at the end to have a little bit of discussion about how the committee might become involved or participate somewhat in our activities.

I want to mention first that this is a real Department-wide activity. We have representation from key players from throughout the Department. It is AHRQ led, but there is heavy collaboration from our colleagues from across the Department.

Let's just talk a little bit about our current activities. I want to go through just a couple of points. One is to have a couple of additional comments on the conceptual framework that the IOM has recommended. I want to talk a little bit about measures that populate that framework. I want to think a little bit about identifying data sources for the measures, and some of the research that we are doing around report design and formatting.

Janet just talked about the conceptual framework that the committee laid out for us. What is key for us are a couple of points. One is that -- one way to think about the report is as a mechanism to track our progress toward achieving the aims laid out in the Chasm report. That is reflected in the matrix, the components of health care quality. We have effectiveness, timeliness, patient centeredness and safety. We want to track the extent to which the nation is making progress across those aims. That is a key focus for us in this report.

The consumer perspectives on health care needs reminds us that we want to look at those dimensions across the breadth of health care need, not just focus on hospital care or nursing home care, but really to focus across a breadth of considerations, so really a broad undertaking for us.

The other important consideration to keep in mind is that equity is cross cutting. If you think about developing measurement sets within each of those cells and looking at subgroup differences across those measure sets within each of the cells, equity, although not an explicit column in our matrix, is really inherent in everything we do.

The drill we are in right now is thinking about developing and identifying measures to populate the matrix. We are working to identify a pool of community measures. The IOM as part of their process sent out a call to the private sector, asking what measures would you recommend for inclusion in the report. We also sent out a call to federal agencies asking the same thing; are there measures that you think would be appropriate for which we have data now. Again, this wasn't a conceptual exercise, this was, what measures do we have data for now that we could use in the National Health Care Quality Report.

Those two enterprises have generated a fair load of measures for us to consider. We have reviewed some existing literature as well, and looked at some reporting systems in several agencies. We have 600 or 700 measures that people have recommended as potential candidates for the report.

We are in the process now of populating the framework with those measures. I'll talk a little more about that in a second. As we play through our process, we will be evaluating the candidate measures to include in the report, because that is obviously way too many, using the criteria that the IOM laid out for us. We are hoping that this exercise will identify measures for the report, but it will also identify gaps to help us define our future research in quality measurement. That is one thing we especially want to talk with the committee about, is that we see that as one area of collaboration, that we could potentially work together in moving the agenda forward.

Let me back up just for a second. Let's just focus on the effectiveness dimension for just a second. Let's focus just on that column representing effectiveness.

As Janet mentioned, inherent in our process is also a priority setting process around conditions. I just chose a few as examples. We are in this process now; these are not necessarily ones that will be chosen. It is just an example to illustrate the kind of thing we might be doing.

Let me just mention that the Chasm report suggested to our Department that we identify 15 to 20 priority conditions. That process is playing out in the Department now, and will play out for a while. We hope that the quality report will align itself with those priorities.

The four that I have chosen here are just examples from Healthy People 2010. Four of the priority areas in Healthy People are infectious disease, heart disease and stroke, cancer and diabetes.

As an example of how our process would play out, if we chose those as priority conditions, we would think about trying to develop measures across health care needs in those areas. So for example, in infectious diseases, in terms of staying healthy, we think in terms of immunizations: influenza, pneumococcal immunizations, childhood immunizations, as examples. In terms of getting better, we want to think of treatment followup for pneumonia. We want to think about treatment for URI, not because URI is necessarily a heavy disease burden, but because it is an important example of overuse.

Heart disease and stroke; again, we can think about what are the important issues in staying healthy having to do with heart disease and stroke. We can think about screening for blood pressure, for cholesterol and risk factors for disease. In terms of getting better, we can think about treatment followup in AMI and stroke, we can think about measures around those kinds of considerations.

In terms of living with illness and disability, we can think about measurements of hypertension and management of heart failure. For cancer, we can think of a variety of measures around staying healthy. There are indicators around the screening for cancer that we want to try to capture. We can probably pick off a specific cancer, breast cancer, for example, and think about treatment and followup for breast cancer, and think about end of life care for cancer patients.

For diabetes, again, we do a similar thing, screening and management.

For each of those areas, we would want to identify relevant indicators for which we have some data now, and use those for the first quality report. As time goes on, we hope to develop those further, but for the first report, we are going to focus on what do we have now.

As an example, let's just take influenza immunization. We could have several indicators thinking about influenza immunizations. We could think about vaccinations of high risk persons aged 18 to 64, for example, who have COPD. We could think about vaccinations for the over-65 population. We could think about vaccinations for institutionalized adults. We can also think about, why didn't someone get a flu shot, what is the reason, and then we could try to identify some outcomes around those measures.

We have data that we can capitalize on for those measurements. Just looking at vaccinations for high risk populations, we have data in the national health interview survey that we could use at the national level, and we have data in the behavioral risk factor surveillance system that we could use for the state level estimates. We also have MEPS data and VA data that would be available to us.

For the population over 65, the same kind of thing. We have data from the NHIS for the national level estimates, and BRFSS for the states. HEDIS has a measure around that, the Medicare Current Beneficiary Survey does, the national CAHPS benchmarking database does. So we can compare estimates across databases as well. It is really a rich opportunity for analysis here.

The reasons why people don't get flu shots only appear in the Medicare Current Beneficiary Survey, so we can only do that analysis for the Medicare population. As an example of an outcome measure, looking at hospitalizations, we could use the national hospital discharge survey for national level estimates, perhaps HCUP for some state level estimates, and Medicare claims data can be used for the same kinds of things.

So we can think about for each of those areas identifying measures that would be appropriate to try to capture the kind of care being provided. That is the approach that we are adopting. We are in the process of thinking about what are the important priority conditions. That process will play out for awhile, but we hope to again align ourselves to other departmental activities.

The priority condition work applies pretty heavily towards the effectiveness dimension, and applies a little bit less towards things like patient centeredness and safety. We may look at more cross-cutting issues than those dimensions. But the idea is that the IOM has identified a conceptual framework for us that represents the aims of the health care system. The Chasm report laid out within that -- we will try to identify priority conditions and develop areas of measurements within those.

We are in the process of working through that now. We have an interagency work group, which is working on conceptualizing our approach within the cells as we are talking about here. It will also be responsible for reviewing measures within those areas to determine which ones we will actually choose. So that work will go on for quite a while. It will be going on through the summer and into the fall.

The kind of data sources we will be tapping; as Janet mentioned, for the first report we anticipate that it will be a real mosaic of data sources for us. There isn't a centralized source that can meet all of our needs, so we will be picking and choosing from a variety of different areas. We will be using population-based data collection efforts, again, the national health interview survey, the behavioral risk factor surveillance system, MEPS, the MCBS. There are a lot of sources we will be choosing from.

There are establishment and provider based data collection efforts, things like the national hospital discharge survey, HCUP and the national nursing home survey and so forth. There is administrative and regulatory data, things like Medicare claims data, things that are a byproduct of the administration of a program. HEDIS and CAHPS data fall in that same category. We will be using vital statistics and surveillance activities such as SEER and the cancer registries. So there is a mix of things that we will be trying to capitalize on.

One of the things that we will want to think about though is gaps in the data system for our needs. We will try to think about how can we adapt existing data collection efforts to try to fill those gaps.

Just a couple of comments on report design. It is very hard to -- in fact, I can't design a single report that is going to meet every need. There have to be multiple reporting products. So we are in the process of building on the recommendations of the IOM, of thinking about the kinds of reporting products we want to build on this effort. We need to identify the needs of potential audiences and design products to try to meet those needs.

Right now, we are anticipating there will be a written product, there will be a written report to Congress. There always has to be a report that actually goes to the Congress. But probably more important for a broader set of audiences would be a web-based reporting system of some kind, where we would have a warehouse of data tables built upon the measures that we talked about. We would have some kind of front-end software, where somebody could come into the system and say, I want to look at measures for MI, and then they could drill down to the level of detail that they want.

That system will take us a while to build. The first year's report will be a written report with some technical appendices and so forth, but ultimately we hope to build the web-based product. That is the thing that we are especially interested in doing.

We also anticipate that we probably will have to design some additional products for other audiences such as the public potentially and the press and so forth. But that is still under development.

We are in the process now of finishing up an inventory of existing reporting projects. We are not the first ones to try to do this, and we are trying to learn from what people have done in the past. We are also conducting some audience research. We are going out and trying to talk with potential audiences for the report, to ask them how can we design this to best meet your needs.

Thinking about the first report, the first report is due to Congress in September of 2003. That sounds like a long way off, but in government time, that is not a long way off. We have a lot to do between here and there.

The final framework. Again, the IOM recommended a framework to us. We have to obviously go through clearances; anybody in government knows we have to clear the thing, but we anticipate we will have that soon.

We are hoping to have our final measure sets selected in the fall, by the end of November. We will be going through that drill with our interagency committee, thinking about how do we want to populate the matrix, which measures do we want to select for the first report.

We have to have our final report design figured out by the turn of the year. Then we will go into the data process. As you might expect, there will be a very large data crunching kind of activity. Once we have selected a final measure set, we define a set of specifications we would use, and define a set of idealized table shells, and give that to the analysts and say, start generating these from the data sets that we need. That goes on for awhile. We have to have those results generated by September of next year. The final draft of the report we are hoping to have done in February, then it goes into clearance. That will be an extended clearance process. We are allowing the better part of seven or eight months for clearance. It will take awhile to get this out of the department.

In terms of longer term developmental activities, I mentioned that we do want to develop additional reporting products. That is the web-based reporting tool. It is additional products for other audiences in addition to the report for the Congress.

We need to develop a research agenda around quality measurement. We need to develop measures to begin filling the gaps in the conceptual framework. As you might expect given the breadth of the framework that the IOM laid out, we will have good measurement in some areas and not so good measurement in other areas, and we need to start building an agenda to start building and developing measures that fill in those gaps.

We also have to consider other things such as -- and this is just an example -- developing composite measures. A lot of the measures we currently have are at a fairly low level of clinical detail. If we are providing information to policy makers, perhaps rolling up some measures into composites might be a useful way to approach things. We need to look at the psychometric properties of doing that type of work in this setting.

We also need to think about expanding data sources and capitalizing on them, and we need to think about how we can expand that through time. We hope to enhance some existing data sources. My colleagues at NCHS for example are thinking about modifying some of the content in their surveys, the national home health and hospice survey and the national nursing home survey just as examples. They are hoping to expand the quality content in those surveys. We are expanding the quality content in our MEPS survey, and we are increasing the sample size and the geographic dispersion of the sample, increasing the precision of the estimates. We are trying to increase the number of states participating in the HCUP projects. So there are a variety of efforts going on throughout the department to try to enhance our existing data sources.

We also have to try to identify additional data sources. Beyond the first report, we expect to rely more on private sector data, to also be included in our reporting processes.

We need to think about what those are. We have a project funded right now that is going out to try to think about what private sector data might be applicable for the report. Also, we are trying to begin to work with the states as well, through NAHDO, the National Association of Health Data Organizations, to think about whether there is a way we can align, to the extent possible, data collection activities in the states with the kinds of things we are trying to do on the national level.

We also need to establish an external advisory body. Janet mentioned this. This is an important next step for us. We need to get in place a body that will be working with us on an ongoing basis as we work through the development of this project. Again, we are thinking about the National Health Care Quality Report as an evolutionary process. We'll do the best we can with our first report, but as time goes on, we hope to get better and better and better. That is why we are putting such an emphasis on thinking about how we develop this thing through time, and we see an external advisory body as a key component of that development.

The things we are looking for in such a body: we don't want to try to create a new FACA committee. That is an incredibly labor intensive process. We would like to try to capitalize on existing committees, if we can.

We also would like to have a centralized source through which we can get input from external stakeholders. We can run around to individual stakeholders, but it is better if we have a centralized place through which we can get feedback from a wide variety of groups.

We also have to have a small subgroup of key experts who can give us some technical advice as needed. So these are the kinds of things we are looking for from our external advisory body.

One of the reasons I was especially glad to be able to come here today is, we need to think about how we are going to move out on starting to fill the gaps in our work. We need to think about how we are going to fill the gaps in the measurement framework that we are going to be using, and we need to think about how we are going to fill the gaps in data sources.

We are going on the stump now and talking with groups who might be key players in that. We wanted to come here and pose to the committee to see if there are any opportunities for us to work with the committee on an ongoing basis. I would be happy to come back here and brief you at any time, and to work with you on a more formal basis perhaps, as we think through how we want to move forward in filling the gaps.

So I'll open that as a question for the committee, if there are opportunities or interests or thoughts around how we might work together on this.

DR. LUMPKIN: I just need to do a housekeeping thing before the questions, so I can understand where we are and what we have to do.

(Remarks off the record.)

DR. STARFIELD: Thanks a lot. I'm glad you mentioned what you mentioned at the end about the need to expand beyond the effectiveness column. I thought it was very striking. We have been doing those effectiveness measures for 50 years, and there is not very much new, and most of the measures aren't even new.

How can we expand to the other three columns in the IOM matrix, which is really where things are moving? Have you begun thinking about that, or are you depending on others to do the thinking?

DR. REILLY: The patient centeredness of care dimension, that is a fairly new undertaking. The CAHPS data captures some of that in the work that has been done by the Medicare program in collecting CAHPS data for the managed care population as well as for the fee for service population. There has also been a lot of CAHPS data collection in commercial populations, and somewhat in Medicaid as well; I think they have 20 or so states in Medicaid.

There is a thing called the national CAHPS benchmarking database, which is a collection of those CAHPS data collection efforts. So that is some of the work that we hope to capitalize on in the patient centeredness dimension.

DR. STARFIELD: Can I just follow up on that? There was a whole thing that came out -- I think it is in the IOM report -- about comorbidity. So it is not only aspects of care; it is also focusing on the patient, not only as a single disease, but as comorbidity as well.

DR. REILLY: One area where we are especially concerned that we have some developmental work in place is the patient safety dimension. We really want to move out, keying off the work that Greg was talking about in that area. So we are hoping to see some developmental work around that.

DR. MEYER: I'd just make a further comment on that. Many of you around the table here have been working on developing quality measures for some time. I don't think there will be anything that is going to be like the National Health Care Quality Report in terms of putting up that matrix that we have received from the IOM, in terms of making it very stark. It is a real reality check on our inability to measure quality in a lot of important areas.

When we had some of the early discussions about the audiences for the quality report, we talked about those the IOM committee came forward with: the Congress, because that is what the legislation tells us is our first audience, and the public. There is a third audience that we thought was very important, and that audience we defined as us, the us being for example the Department of Health and Human Services work group, because we are the ones that can over time hopefully effect some change, with the help of many of you gathered here today, in terms of filling in these gaps.

But I think the quality report is going to be very powerful about what it tells us, but I think it is going to be equally or even more powerful about what it can't tell us. That will define I think very clearly a very powerful research agenda that hopefully we can share not only among our colleagues in the Department of Health and Human Services for this activity, but with our research colleagues as well in the private sector.

DR. LUMPKIN: Let me see if I can get the sense of the committee. I think we are a FACA committee. I am just picking up on something you described.

DR. REILLY: Doesn't want to create a new one.

DR. LUMPKIN: And you don't want to create a new one. We as a committee have had some interest because obviously, health care quality overlaps with a number of areas that we are very interested in, not only in relation to HIPAA or the quality subcommittee, but also broader issues related to the public health information system.

So trying to understand exactly what it is that we are looking at doing might be a little difficult to do in the context of this meeting. Is it the sense of the committee that this is something that we should explore further, and perhaps we can come back with a more developed proposal on how we could help you out?

DR. REILLY: That would be great.

DR. LUMPKIN: Okay. Go ahead.

DR. SCANLON: I would ask the agency what sorts of ideas you had. The committee can hold public hearings, the committee can individually review and deliberate as a full committee, give you feedback, be a sounding board on some of these issues. Are those the kinds of things you are envisioning?

DR. REILLY: Those are among them. I think when we look at what the IOM committee was able to do for us to get us started, it got us some great momentum moving forward. But we do need to have an opportunity to hear from many of the groups that were represented in that committee. There could be some additional ones as this rolls on.

So I think what I would beg of this group is to please consider not only how the individuals that are part of the committee can play a role, but also how the committee can tap into other external resources and be a conduit for getting advice to us as a FACA committee. I know that these are complicated issues, but we really want to think creatively about how we can develop this capacity.

One other aspect of it that is important is that first among equals among our partners in this activity is NCHS. So in some senses, it makes a great deal of sense.

DR. COHN: I think the discussion we are having is great. I actually think the idea of us working with them is a very good idea. I would observe that we actually have a work group on quality, which is part of the Subcommittee on Populations. I think this is something that they should definitely discuss, and come back with a firm proposal for the full committee on how we should be working together. Kathy, were you going to suggest the same thing?

DR. COLTIN: Yes, but in a little different context. There are an awful lot of groups that are working in the quality area. We just heard this morning from Dr. Kizer that one of the main components of the mission of the National Quality Forum is to define measure sets.

So you identified two types of areas where you need advice. One is around filling out the framework with measures. The other is about the data that are needed to create those measures. It seems that there might be a natural kind of partnership that could evolve.

We have been talking about this already, in terms of how we might better relate to the work of the quality forum and the quality work group. It seems to me that most of what we have focused on up to now within the quality work group has been the data issue: where are the stumbling blocks and obstacles that those who are trying to develop measures are running up against, and what can we do about them, both in terms of improvements to the federal data collection systems and also at the state level and in the private sector -- what could we do through HIPAA, what could we do through other types of initiatives that might be promoted within the private sector.

It seems to me that that is a natural area for us to be involved with. So I think there may be a way to structure an advisory relationship where a primary role is around the information to construct the measures, but where we need to work closely with those who are engaged in that kind of a process to understand their needs and how we can best help.

DR. REILLY: Those would be the kind of creative solutions that we would be hoping you could generate for us.

DR. LUMPKIN: We will also spend a little bit of time discussing this at our executive committee meeting that we have scheduled in Chicago, so we may be in touch about that -- see if you can come out and visit us in the Windy City.

DR. SCANLON: In December.

DR. LUMPKIN: In August. We just had a new budget that was passed, and our agency got a significant increase. So we just need to have lots of people come in and pay sales tax, and make that budget work. Thank you for coming.

DR. MEYER: This is really an incredible, exciting piece of work. This is something that we have in our legislative authority that this will go on and move forward. This is an evolutionary process, but we hope that you will have the opportunity to share in the excitement that we have about it.

DR. LUMPKIN: Thank you. We have a few minutes before lunch. Perhaps, Simon, if you could pass out the letters so that people have a chance to see them, and then we will take them as the first item right after our presentation on --

DR. COHN: I guess the question is though, are we going to have a quorum?

DR. LUMPKIN: Yes.

(Remarks off the record.)

DR. COHN: I was going to actually say, at the risk of keeping people, deferring the next talk a little bit post lunch. Maybe we should do all the action items before lunch. Most of them have already been discussed pretty significantly.

Agenda Item: Action Items

DR. LUMPKIN: Okay. Let's go to the first item, which is the letter that we all seem to have no problem with. I think we just wanted to take a look at the attachments, which were the specific ones. This is the Thompson letter, and we are fine with it?

DR. COHN: Barbara also looked at the information.

DR. LUMPKIN: It has been moved by Simon, it has been seconded by Dan that we forward the letter concerning the updates of the standards from the DSMOs. Is there further discussion on this? All those in favor, signify by saying aye.

(Chorus of Ayes.)

Those opposed say nay? Any abstentions? So it is exactly the same letter we had yesterday.

DR. ZUBELDIA: I have a technical correction.

DR. LUMPKIN: But you helped write this.

DR. ZUBELDIA: But I hadn't seen this addition. Margaret Wycker is the chair and Maria Wood is the vice chair. They are not co-chairs.

DR. LUMPKIN: We will consider that to be a typographical correction. If there are any others along those lines, we can take those and make the corrections before the letter goes out.

Second item. This is a letter which we saw yesterday and has been revised by the committee. Simon, if you could perhaps move this letter?

DR. COHN: I'd be happy to move it.

DR. LUMPKIN: Seconded by Jeff, and then we can put it on the table and then point out the changes.

DR. COHN: The changes are primarily grammatical. We had to do some reordering of the information. We now have an introduction, two sentences, that Dr. Lumpkin assisted us with, talking about the importance of and the appropriate use of health information technology.

Beyond that, other than just some wordsmithing, there are no changes to the first page. Do you want me to read this?

DR. LUMPKIN: I think we read through it fairly much yesterday. The additional change on the first page is the first sentence, which reads: the National Committee on Vital and Health Statistics recognizes the importance of the appropriate application of health information technology as a means to improve the quality, efficiency and effectiveness of health care while lowering costs. Accordingly, implementation of the administrative simplification provisions of HIPAA is essential. Then the language that was there before.

DR. COHN: The recommendations are essentially unchanged. We did a fair amount of reformatting and renumbering. So the first recommendation which is providing early guidance is unchanged. The second recommendation which is allowing flexibility in enforcement is also unchanged, so it has been moved around.

The third main point, which is opposing delays, this is a compression of two bullets, one which had to do with opposing delays, and then a discussion around, if a delay is considered, what the Secretary might consider. To that area, we have actually added an additional bullet as a consideration, which is that all transactions -- this is at the very bottom of the page -- all transactions could be required for implementation by the compliance date, but statutory extensions in limited cases could be granted based on demonstration of significant progress and a plan for full implementation, which was suggested yesterday by John, which I thought was a good addition.

Item four, promulgating and implementing rapidly, is unchanged. Five, supporting changes, is unchanged. Six, exploring consistent standards for paper, is also unchanged.

DR. LUMPKIN: Is there any discussion on the adoption?

DR. FITZMAURICE: On the last sentence right after number three, opposing delays, it seems that it needs another word to avoid reading it in a way we didn't intend. It is imperative that we not undermine current implementation efforts and continue to promote the urgency of working on the implementation of standards. I would suggest we put the word do after and, so that the not does not govern both of the verbs, undermine and continue.

DR. GREENBERG: Or you could switch those two around: we continue to promote the urgency of working on the implementation of standards and not undermine current --

DR. FITZMAURICE: That would work better.

DR. LUMPKIN: So we'll switch the two clauses.

DR. BLAIR: We are talking about the implementation of HIPAA standards, right? Do we need to put the word HIPAA in before standards?

DR. LUMPKIN: No, because two sentences above it refers to HIPAA standards. Is there any further discussion?

DR. SCANLON: Page two, last bullet. This is more of a technical issue. The second line there, where you refer to statutory extensions, I don't think we mean extensions to the statute. It is just an extension that the Secretary --

DR. COHN: No, actually these were statutory extensions. This is if there is a delay. I think that was my understanding, John, why it is there as opposed to allowing flexibility in enforcement, which would be Secretarial discretion.

DR. GREENBERG: Either way it would be statutory, or there would be a statutory authorization to grant some extensions based on significant progress and a plan for full implementation.

DR. LUMPKIN: So based on that, if we put the word authorized, then it would indicate that a change in the law which authorizes the Secretary to grant extensions, as opposed to a statutory extension which would require Congressional action for each extension.

DR. GREENBERG: Yes, which we don't want.

DR. SCANLON: We don't want to call it a statutory extension.

DR. LUMPKIN: Right, statutorily authorized. Let me just read that. All transactions could be required for implementation by the compliance date, but statutorily authorized extensions in limited cases could be granted based on demonstration of significant progress and a plan for full implementation.

DR. STARFIELD: This just represents an expansion of what we took out yesterday, which is, HHS could support legislation. It was meant to expand on that, I guess. We took out a sentence in one of the bullets yesterday, the last sentence, which said that HHS could support legislation. I read that as just basically an amplification of that.

DR. LUMPKIN: Yes, we had some examples. We wanted to look at options. One was flexibility in enforcement, and the second one was to grant some sort of waiver or extension. That is really what this one speaks to. Further discussion? All those in favor, signify by saying aye.

(Chorus of Ayes.)

Those opposed, say nay? Any abstentions? We need a motion to permit the NHII work group to circulate the recommendations under Tab 10 to a limited group of external commenters. It has been moved and seconded. Is there further comment on that? All those in favor, signify by saying aye.

(Chorus of Ayes.)

Opposed, say nay? We have one other action item, or three others, but -- did you have an action item from Populations?

DR. MAYS: Yes, we do. Do I need to explain it first?

DR. LUMPKIN: You can make your motion and then you can explain it.

DR. MAYS: The motion is that the full committee would send a letter about the SCHIP regulations, commenting on the continued inclusion of race and ethnicity, and expressing the hope that they will further examine the issue of data collection on language.

DR. LUMPKIN: So the motion is that the National Committee send a letter to the Secretary, commenting on the SCHIP regulations, specifically indicating one, that the --

DR. MAYS: We appreciate their continued collection of data on race and ethnicity, and that we encourage them to give serious consideration to the collection of data on the language of the participant in the program.

The reason that we are asking for this at this point in time is that it has to be done by July 25.

DR. LUMPKIN: Is there a second? It has been moved and seconded. Now explain the motion.

DR. MAYS: The motion is that there is an open period right now for comments. The SCHIP -- I think most of us know what the SCHIP is, but I think the important issue around SCHIP is that they agree to continue to collect data on race and ethnicity, but did not come forth with a comment about the collection of data on language.

There are some states who currently do collect that information. It is not clear, because I don't think we have enough information at this point in time, whether we should just totally support the collection of data on language, because there are some burden issues. We also wouldn't want to push for collection of data that might be more stringent than what some states with already very diverse populations are already doing.

So I think that the wise thing at this point in time would be an encouragement to consider the issues, very much like they considered race and ethnicity, which resulted in them continuing the collection of that data. My sense is that if there is a careful analysis, they might end up collecting data on language. But we don't know. So that would be the wiser thing for us to do, with the shorter amount of time that we have to respond.

DR. LUMPKIN: The deadline for comments on the regulation are?

DR. MAYS: July 25.

DR. LUMPKIN: July 25, so the process would be that the letter implementing this motion would be prepared, it would be circulated to the executive committee for signoff that it is consistent with the motion, and then would be sent consistent with that motion. Any questions or comments on the motion?

DR. FITZMAURICE: Is the question the number of languages that the patient speaks, or the most common language, or whether the patient speaks English at all?

DR. MAYS: That is part of what is not clear. I don't know, for example, whether it would be to collect the language spoken by the greatest percentage, or whether, if any patient speaks any language and that is their primary language, we would need to collect it. So there was just a little more information that we needed. The time line is so short that I think it is best that we encourage them to further pursue looking at this.

DR. LUMPKIN: Other questions? All those in favor, signify by saying aye.

(Chorus of Ayes.)

Those opposed, say nay. We have two action items under Tab 3. The first is the revisions to the executive committee functions and process. This is that the chairs of the work groups and the subcommittees become members of the executive committee, which is different from the past, where members at large would primarily be the members of the executive committee. It doesn't prohibit members at large from serving, but it would place those individuals on the executive committee. Is there a motion for approval of the executive committee functions and processes? So moved, seconded. Further discussion?

DR. SHORT: Is there any difference between subcommittees and work groups? Are work groups more short-lived?

DR. LUMPKIN: Yes, technically, yes. At the end of the projects of the NHII work group and so forth, we have to decide whether there are things that need to go to subcommittees, or whether the work group should just go away. All those in favor, signify by saying aye.

(Chorus of Ayes.)

Those opposed, say nay. The last item is the charge, also under Tab 3, of the work group on health statistics in the 21st century. That would be approving that charge. Of course, we could delay it to the end of the year, and by then they will be done.

DR. STARFIELD: This is an update to the previous charge.

DR. LUMPKIN: Yes, it is an updated charge. It has been moved by Dan. Is there a second? Seconded. Is there further discussion? All those in favor, signify by saying aye.

(Chorus of Ayes.)

Those opposed, say nay. Carries. I think that completes the action items for which we need to take a vote. This will give us 50 minutes to return for our after-lunch presentations. We are adjourned.

(The meeting recessed for lunch at 12:10 p.m., to reconvene at 1:00 p.m.)


A F T E R N O O N S E S S I O N (1:00 p.m.)

DR. LUMPKIN: We are going to get started. Just a reality check, looking at the agenda and what we have left. Depending on how disciplined we are in our committee reports, we could get out of here by three. That is allowing for the full and appropriate attention to our report.

Just one item of business before we get started. Before you, you see a draft letter which contains the sum and total of the motion. It needs just a few editorial changes, such as the inclusion of what that refers to. The last sentence also needs to be reworked so it reads a little bit easier, but other than that, this will be what we will send, if that is agreeable to the committee. I see a majority of the committee nodding their heads, so this is it; it will not be sent to the executive committee, this will be what our motion was.

Now. We as a committee have noted that there is a significant development going on in the Centers for Disease Control in relationship to standards for information and collection of physician data.

Since we started, we have gone through a full gestational period as well as eight months afterwards, but despite all these efforts and all the obstacles in the way, we have finally gotten to the point of getting Denise to come and give a presentation to our committee. So we are really thrilled that you have the opportunity to come up here. We enjoyed sharing the photos with you. We still think it ought to have been called Ned, but we are really pleased, because we have been waiting with some anticipation for your important topic.

Agenda Item: National Electronic Disease Surveillance System (NEDSS) and Public Health Conceptual Data Model

DR. KOO: Thanks. I'm going to talk about NEDSS, the National Electronic Disease Surveillance System. I usually start with a couple of caveats, one of which is, the term NEDSS was given to us by OMB. So although it says NEDSS and it may imply a single monolithic system, it really is meant to imply an interconnected system of systems. But as I also told people, OMB gave it this name with a $20 million largesse and we said, sure, we'll call it whatever you want.

The other clarification that I like to explain is that though again, it says National Electronic Disease Surveillance System, we are very interested in data about conditions, about syndromes, not necessarily limited to diseases.

So with those two caveats, I'll go ahead. You heard from the National Quality Forum folks this morning, but just to remind people how public health differs from medicine and personal health care: we generally focus on the health of the population. Our patient is the population, not the single patient. We emphasize prevention. Our scope of activities for public health, although right now it is by and large focused on the health care system, can really be anywhere in the causal chain of disease -- in the environment, in peoples' behaviors, not just once they enter the health care system. And of course, we are, you might say, limited by operating in a governmental context in terms of jurisdiction over some of the activities; they are often at the local or state level. So again, just to remind people: it is often said that clinicians try to determine what kind of disease a given patient has, while we in public health try to determine what kind of person has a given disease, so that we can try to identify risk factors and prevent disease.

Public health surveillance. I have talked at NCVHS about this before, but I realize there are a lot of new members, and briefly again, just to reiterate, it is a systematic ongoing activity that involves the collection, analysis, interpretation, dissemination of data, and it has to be linked to public health practice. Surveillance is one of our diagnostic tools. The population doesn't exactly come into the health care system. We do public health surveillance on an ongoing basis to diagnose problems.

This is illustrated a little bit in this next slide, which shows the public health approach. Surveillance is the cornerstone. We do these ongoing activities to identify potential problems. Once we have identified potential problems, we of course have to do investigations to identify what is the cause, what are risk factors for this problem. Once we have identified the risk factors, we have to evaluate potential interventions, what might work to ameliorate this problem. And of course, once we have figured that out, we actually have to implement the programs. Once we implement programs, we then of course have to circle back down to surveillance to determine did we actually have an effect. So it is a very, very critical public health monitoring activity.

There are very many information system functions that are needed for public health, and surveillance is really only one of them. Preparedness for public health of course requires that all partners at the local, state and federal level are part of the systems that are developed. Surveillance is focused on data analysis, event detection and management. There are other information functions such as notification, communications, information issues and knowledge management.

This is a depiction of the current situation, the current mess that we are in with all of our surveillance information systems, the legacy systems. As you can see, there are individual systems that go from local to state to CDC. These have been developed largely because we are funded categorically by Congress to do tuberculosis, AIDS, STDs. These systems do not by and large talk to each other. In fact, they are a tremendous burden on our local and state partners because they use different interfaces.

They are DOS-based systems. Our favorite example is that in the AIDS system, you hit F10 to delete a record, and in the notifiable disease system, you hit F10 to save a record. So if you are a local health department person, you have to learn all these different systems, maybe put them on different computers. It is a tremendous burden.

You can also see, this is a public health-centric view. The data sources are way off in the right-hand corner: physicians' chart review, laboratory reports, et cetera. These are largely paper forms that come into the health department. So there are very many obvious limitations to this current situation, one of which is the multiplicity of categorical information systems.

The data are fairly incomplete and they are not timely. We are well aware that the burden on the health care sector is increasingly unacceptable as we ask for more and more information in different, not necessarily compatible forms, with duplicative information.

The volume of the data, these pieces of paper, overwhelms the health department, particularly for large-volume diseases like chlamydia, gonorrhea and syphilis, and our systems are not state of the art. They are largely DOS-based systems.

Some of the motivators for change for our information systems, as if that is not obvious enough, are of course the need for more information. We are interested in more and more problems, issues, diseases. You heard this morning about issues that we would like to monitor in public health in general. We can't continue to build new independent information systems. Obviously HIPAA, by changing health information policy, has provided another major motivator for us, as have opportunities to increase efficiency through use of evolving technology.

There is also the increasing use of electronic information systems. Our partners at state and local health departments have also been building their information systems. Understandably, not all of them wish to continue to use CDC-provided systems. Some of them would like to, but others would prefer to keep their data in their own systems and not have to re-enter it. I'm sure the same is true for the health care system: once they put their data into electronic information systems, we don't expect that they will want to re-enter it into a public health information system.

Then there are the opportunities to enhance security and confidentiality.

So the NEDSS long term objectives are the ongoing, automatic capture and analysis of data. We want to detect aberrations or problems more automatically, and we want to use data that are already electronic. Obviously, these sorts of concepts would sound familiar to you in terms of the NHII vision for health statistics. We don't want people to have to re-enter data.

Our systems will now be designed around relevant data sources, as opposed to individual diseases. So instead of our AIDS folks or our TB folks and our emerging infections folks all negotiating separately with a large laboratory or managed care plan, we would negotiate once about how to get data to public health and then parse it out to the appropriate programs or the appropriate health departments. All of this implies an integration of public health and the health care system, I think to our mutual benefit.
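The "negotiate once with the data source, then parse it out to the programs" idea can be sketched as a simple routing table. This is purely illustrative: the condition-to-program mapping and the record fields below are invented for the example, not part of any NEDSS specification.

```python
# Hypothetical sketch of one intake point serving all categorical
# programs: a single feed from a laboratory or managed care plan is
# routed to the right program by the reported condition.
ROUTING = {
    "tuberculosis": "TB program",
    "HIV": "HIV/AIDS program",
    "salmonellosis": "foodborne disease program",
}

def route_report(report):
    """Return the program that should receive this report."""
    return ROUTING.get(report["condition"], "general surveillance")
```

A TB result and a salmonella result could then arrive over the same negotiated interface and still reach their separate categorical programs.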

A couple of examples that I like to use for people who may or may not immediately agree with some of this, or see the uses. One is a classic notifiable disease issue. A patient might see a physician with respiratory symptoms. In this electronic medical record world, the differential diagnosis would pop up as they are entering the symptoms. It might include something that a clinician would not think about but public health is concerned about, something very rare such as anthrax. It might recommend tests. For pneumonia you might do a chest X-ray anyway, but for example a measles IgM might be recommended for rash and fever, because as measles becomes increasingly rare in this country, people may not think about it and may not order it. We need to know whether it is indigenous or imported measles.

The test results and the diagnosis when they come back would automatically be shared with public health, because we don't necessarily think that clinicians are going to remember to report to public health, and they are not very well trained to do that. So if the system does it, I think we will be in better shape.

Another example from this far-off future is, we might have automated tracking of drug resistance among all the laboratory isolates across the country. We might notice increasing resistance to an antibiotic. This is the computer detecting this across all these databases. We might also be able to search pharmaceutical databases and note an increase in the sales of that particular antibiotic, whether it be concurrent with or after or prior to. Either way, it is not going to be very effective for that particular bug, so we would then automatically send a notice to health care providers and embark on an educational campaign. It is just not very useful to use that particular antibiotic, particularly if your area is one where there is a lot of resistance to that antibiotic.
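The kind of automated check described here can be sketched in a few lines. Everything in this sketch is hypothetical: the record fields, the 10-percentage-point threshold, and the quarterly grouping are the editor's illustration of the idea, not an actual surveillance algorithm.

```python
# Sketch: scan pooled laboratory isolate records for rising resistance
# to one antibiotic. Field names and the flag threshold are invented.
from collections import defaultdict

def resistance_by_quarter(isolates, antibiotic):
    """Fraction of isolates resistant to `antibiotic`, by quarter."""
    tested = defaultdict(int)
    resistant = defaultdict(int)
    for rec in isolates:
        result = rec["susceptibility"].get(antibiotic)
        if result is None:
            continue  # isolate was not tested against this drug
        tested[rec["quarter"]] += 1
        if result == "R":
            resistant[rec["quarter"]] += 1
    return {q: resistant[q] / tested[q] for q in sorted(tested)}

def flag_rising_resistance(rates, jump=0.10):
    """Flag quarters whose rate rose more than `jump` over the prior one."""
    quarters = sorted(rates)
    return [q2 for q1, q2 in zip(quarters, quarters[1:])
            if rates[q2] - rates[q1] > jump]
```

A flagged quarter would be the trigger for the notice to providers and the educational campaign described above.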

This area is obviously fairly futuristic and far off, since we don't exactly have an electronic medical record just now. But we are working in the area of standards, standards, standards. We are not necessarily trying to develop new systems and give them to people.

We are working on pilots with the health care system. We have some projects with LabCorp and Quest, where they are using standard HL-7 files and providing them to several different states.

The architecture of NEDSS -- I'm not a techie, but I'll talk a little bit about it -- is built on integrated data repositories. So data received from the health care system are capable of going in a single format to a single receiving point. That is not the case right now. We do want the data that public health maintains to be capable of sitting in an integrated data repository. They may not, because in certain states the HIV data need to stay separate, but they need to be capable of it.

We are collaborating across our categorical programs, and obviously we are working with sophisticated security standards compatible with HIPAA, to maintain public health's track record of protecting sensitive data, since this is obviously a major concern.

I also think that we need to explain that we are not going to wait until all of this future arrives. We are working on trying to capture data electronically now, and learning how to use what does exist out there in the health care system, as well as gearing up toward the future and trying to influence how and what data are collected a priori, to maximize their utility for public health.

Some of the pilot projects that we are doing in public health, to see what happens and what issues we run into when we try to use data as they exist now, are for example some electronic laboratory reporting pilots. In Hawaii, without even using a standard format, just by getting the data electronically from laboratories, they got over twice as many cases of reportable diseases, with much more complete demographic information. It was also more timely.

We have a DEEDS project that is going on with the Oregon health division. It is getting reporting directly from the -- and this is not limited to a specific disease area; it is injuries, infectious diseases, et cetera -- to the health department. Again, this is a slightly different approach for us, going to the data source and using it for different disease areas.

We have some pilots that we have funded in various states for bioterrorism, where we are trying to see what kind of data can we capture in a quote real-time basis, conditions and symptoms which will enable us to detect a BT event earlier than waiting until people are dying of anthrax or smallpox.

Then there are some managed care projects going on, particularly in Massachusetts, where they have detected an increased number of cases of for example active TB by looking at HMO pharmacy data that already are available electronically.

One of the standards efforts that we have relied on is the public health conceptual data model, which is what I was asked to talk about. It is a definition of the categories and kinds of data needed for public health. We have started with a particular focus on surveillance, although we have tried to make it broad.

This is a diagram showing the relationships between these kinds of data. It is a conceptual data model, and the inputs we have used have been the Australian and Canadian models, the HL-7 reference information model and of course the CDC systems.

You are obviously not meant to read all of this, but this is a schematic of the data model as we published it a year ago, our initial version one. It is actually not up to date, because HL-7 modified the RIM last fall, and as we have been developing the NEDSS base system -- I'll explain that a little bit more -- we have made some modifications, and we hope to publish an updated version some time soon.

We have basically simplified this a lot in following the HL-7 model, but the key issues of course are that the four subject areas we initially started with were parties, location, material and health related activity.
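Purely to make the four subject areas concrete, here they are rendered as Python dataclasses. The model itself is a UML-style conceptual model aligned with the HL-7 RIM; these class names follow the four subject areas just named, but every field is the editor's paraphrase, not the published model.

```python
# Illustrative rendering of the four initial subject areas of the
# public health conceptual data model. Fields are invented examples.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Party:
    """A person, organization, or population grouping."""
    name: str
    kind: str  # e.g. "person", "organization", "population group"

@dataclass
class Location:
    """Where a party can be found, or may have been exposed."""
    description: str
    role: str  # e.g. "residence", "exposure site", "clinic"

@dataclass
class Material:
    """A substance involved in exposure or intervention."""
    name: str  # e.g. "measles vaccine", "fluoridated water supply"

@dataclass
class HealthRelatedActivity:
    """An act such as a test, a diagnosis, or a report to public health."""
    activity: str
    subject: Party
    location: Optional[Location] = None
    material: Optional[Material] = None
```

A case report to a health department then becomes one HealthRelatedActivity linking a Party to, optionally, a Location and a Material.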

We started with this data model to try to reduce our development efforts for computerized systems and enhance our data exchange capabilities, both with health care providers and among public health partners. It has been very useful for representing our public health data needs to the national standards development organizations, particularly HL-7, which is of course what we based the model on.

Some of the examples that I would use in terms of what we have argued with or articulated to HL-7 -- and they have been very receptive -- is that for example party should include population groupings, that location needed expansion, not just where is the clinic, but how do you find or locate a person or where they might have been exposed, and that materials for intervention should not just be focused on medications. Interventions include fluoridation of the water supply, chlorination of water, et cetera. Also, that a health related activity for us specifically was the act of reporting to the health department.

Now to get a little more technical, which is not necessarily my area, but to show you where the data model fits in the big picture of the system architecture: these are the eight elements that we have discussed and decided are key. These elements are modular; you don't have to use a particular system. This is meant for people to be able to rely on COTS-based products.

But a state health department would need web browser based data entry and management, and the ability to do electronic HL-7 message processing. At the center of this they will have an integrated data repository, and I have a picture of this in the next slide. Then there are data translation and exchange capabilities, transportable business logic, data analysis, reporting and visualization, a sharable directory for authorization, and security systems and policies.

Unfortunately, this analysis is going to be a little bit of a challenge to show, but at the center of this would be the integrated state and local data repository. Again, it might be set up slightly differently in a given state and not be just one data repository, it may be several. Then around this is of course obviously appropriate security with the sharable directory of public health personnel that authorizes who is allowed to access what data and when. There would be tools that we could drop into the system for analysis, GIS reporting, et cetera.

I guess I should have started over here with the web browser based data entry forms potentially for local health departments, if they wanted to enter the data directly. Perhaps the data may be maintained at the state health department or a clinical site that doesn't necessarily have their own freestanding system. Then out of the clinical world, the messages might come in HL-7 or eventually HL-7/XML, and we would exchange data presumably within public health using XML. There is a lot of work obviously that is going into all of this.
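For readers unfamiliar with the HL-7 messages mentioned here, the sketch below shows the general pipe-and-hat syntax of an HL-7 version 2 result segment (OBX = observation/result), of the kind used in electronic laboratory reporting. The sample segment is fabricated for illustration; a real NEDSS interface would use a full messaging library and the agreed message specifications, not this toy parser.

```python
# Toy illustration of HL-7 v2 syntax: fields are separated by '|',
# components within a field by '^'. OBX-3 is the observation
# identifier and OBX-5 the value. Sample data is invented.

def parse_obx(segment):
    """Split one OBX segment into the fields public health cares about."""
    fields = segment.split("|")
    if fields[0] != "OBX":
        raise ValueError("not an OBX segment")
    code, text, system = fields[3].split("^")
    return {"code": code, "test": text, "system": system,
            "value": fields[5]}

obx = "OBX|1|CE|625-4^Bacteria identified^LN||Salmonella species"
result = parse_obx(obx)
```

The point of the standard is exactly that every laboratory can emit this one format and every health department can consume it with one parser.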

What we are doing, because the states have requested that we not get completely out of the software development business: we have actually hired Computer Sciences Corporation to develop a base system for states to use if they so choose, as opposed to it being required. This base system will be a concrete implementation of some of the NEDSS standards. It will include a core demographics module, a national notifiable disease module, and a module that enables the electronic interchange of laboratory data. It will be a person-based integrated repository, as opposed to what we are used to, which is disease based, and it will be, as I said, an implementation of the NEDSS standards. This base system is meant to be a platform for other modules, because obviously by this summer we won't have it all completed.

There will be many other program area disease-specific modules we will need to add on to this. But it is intended to be a first step for the many states who have requested that we help them get started. But they will have the option to use pieces of this with their own system.

Then for those of you who are a little bit less technical, since I certainly was when we started all this, the role of this data model, to get back to the data model which is obviously the basis for the integrated data repository, is that there is in the center there a conceptual data model. The idea is that it can be mapped to either independent state systems or independent programmatic systems at CDC with their own normalized models, database design models, physical implementations, because of course we can't dictate to the states what particular database management system they use.

So we are really trying now to apply this data model. Some folks say it is all well and good to have a conceptual data model, but how do you use it? So we have disseminated it for use and feedback. We are developing a process model that will provide some of the context for the model. I'll talk a little bit more about some of the implementations.

We are using it though to coordinate and harmonize with national standards, particularly HL-7, and we hope to develop a change management process for how we are going to evolve this model in an organized fashion. It has been helpful in really promoting dialogue among the public health partners: did we get this right, is this going to be useful at your state and local level?

We have actually developed a logical model for the NEDSS based system for folks to use in developing their physical database designs. We have developed a prototype database design model that will actually be used for the base system. We are working on developing message specifications for data interchange, and obviously we need to work on the vocabulary issue. This is obviously being applied to the systems design, and we hope to disseminate the process for mapping systems to the model, so that the states who have requested this logical data model actually understand its derivation and use.
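[Editor's note: the message specifications for laboratory data interchange mentioned above were built on HL7's pipe-delimited version 2 format. A minimal sketch of assembling such a message follows; the sending systems, field values, and coded test are hypothetical, and real electronic laboratory reporting specifications define many more fields per segment.]

```python
# Build a minimal HL7 v2-style pipe-delimited lab report message.
# Segment grammar is greatly simplified from the real specifications.
def segment(seg_id, *fields):
    return "|".join([seg_id, *fields])

msg = "\r".join([
    # MSH: message header (sending/receiving applications, hypothetical)
    segment("MSH", "^~\\&", "LabSystem", "StateLab", "NEDSS", "HealthDept"),
    # PID: patient identification
    segment("PID", "1", "", "12345"),
    # OBR: the ordered test, with a LOINC-style code (illustrative)
    segment("OBR", "1", "", "", "1234-5^Stool culture^LN"),
    # OBX: the observation result
    segment("OBX", "1", "CE", "1234-5^Stool culture^LN", "", "Salmonella"),
])

assert msg.split("\r")[0] == "MSH|^~\\&|LabSystem|StateLab|NEDSS|HealthDept"
```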

There are a lot of issues surrounding NEDSS that we have to deal with. One of the key ones is of course privacy concerns. As was discussed a little bit in the privacy and confidentiality work group this morning, folks are not very aware -- providers are not very aware -- that the rule specifically permits disclosures of data to public health for public health activities. We really need to do a better job getting the word out, because they are already starting to refuse to share data, or they are just very concerned. We are very concerned that if they should stop sharing data, it will be very hard to turn that around. But we clearly need these data, as I hope you understand, to do our jobs, particularly at the state and local level.

There are data ownership and access issues. If we try to approach this in the most efficient fashion for public health, who does that, who owns it, who gets to see the data first in terms of local versus state versus federal. So the roles and responsibilities need to be worked out; how do you actually make these independent systems secure. We need to have very secure systems. We need to explicitly lay out the data needs in the standards. We have done it in this conceptual fashion; we haven't actually gotten down to the vocabulary issues.

Obviously, an EMR will help us. That is part of why we have chosen to work with standards development organizations, and the Internet is obviously a useful tool for us.

So our next steps are to prototype these specifications and standards in new system development at the states. We will be doing the integration testing in a couple of states this summer, and then hope to award up to 22 states to implement or use this base system later this fall. We are continuing to work with standards development organizations, and we hope that they are interested in including the population health perspective, I think again to our mutual benefit. We are working obviously with the public health data standards consortium.

This is a slide to point out that this is very related to EMR, and it is obviously very related to NHII. Dan, do you have an acronym for the vision for health statistics? You've got all these acronyms, NEDSS, NHII, NQR, what you heard about this morning. But we have these pilot projects that are going on as I mentioned, electronic laboratory reporting, emergency departments, managed care, pharmacy. We are learning a lot about technical standards and policy issues from these pilots in terms of the data being entered and used for a particular purpose. But obviously, the privacy protection issues we need to continue to address.

I think that was it for now. Thank you.

DR. LUMPKIN: Fascinating. Questions?

DR. MAYS: Can you discuss two issues that sometimes emerge in the collection of data that I have heard at more the community level? One is the tension between collecting the data and using it purely for public health purposes, and collecting the data and using it for research purposes.

Then the second one is, can you talk at all about some of the ethical issues that come up in terms of ownership of the data, some of the ethical issues that come up in terms of -- and I have seen them more occur within the context of research, where in order to do the research, sometimes there will be like tracking that can go on. In most instances, you wouldn't be able to do it, but because you are at the level that you are, you can actually access different systems. I am wondering if you can talk a little bit about what some of the ethical issues are that are being talked about beyond just privacy.

DR. KOO: You have asked two huge questions. I'll attempt to at least address them to some degree; I don't think I can answer the questions.

But the practice versus research issue is a big one, and there has been an enormous amount of discussion in the public health community about differentiating and understanding the difference between public health practice and public health research.

I think that very often in the community or in the public at large, there is an assumption that public health uses data just for research purposes. I do think that they do not understand that in order to do our job and to diagnose problems in the community or in our jurisdiction, whether it be at the local, state or federal level, we have to have some sort of data.

Those are day to day practice issues, the monitoring function of public health, to determine what are the problems in this particular jurisdiction, what is the distribution of it. It does need personally identifiable information, obviously, to identify potential cases and then figure out where they were exposed. You have to ask them some questions, you have to follow up with these people in person.

Differentiating that, and explaining to people that that is different from research, is very, very difficult. I think the general definition of research, NIH's definition, is anything that leads to potentially generalizable knowledge. In some senses, some of our activities, we generalize them to the community, and that is why we do them. So there is a difficulty here explaining -- and I'm sure I'm not going to do a very good job of this -- the issue of our intent and our need to do research.

But we very much -- obviously we do research, and we do public health practice, and we are very aware of the fact that there is a difference between those. Obviously, for research we need IRB clearance. So a lot of it depends on the intent of a particular data gathering activity.

The ethical issues of ownership, I'm not sure I totally understand the question. Obviously there are a lot of ethical issues involved with collecting this kind of information. This is personally identifiable data. This is data about people who have very serious diseases, collected by and large at the state and local level.

I would also emphasize, at the federal level, at the CDC level, we have no need for that information, because of the fact that it is at the local level that the response happens. So they are the ones who need to know who the person is or follow the problem up. But there are certainly confidentiality guidelines about how to handle these data, and only use them for these public health practice purposes, unless of course you've got an IRB clearance to use them for research purposes.

So that is an attempt to get at some of those issues.

DR. MAYS: The ownership issue really was people who often will have tests done in a public health setting. Sometimes they have unique identifiers; other times the test results don't come back to them, and they feel as if, I could have done something differently had I known. So in some instances, there are some tests that we perform where they don't get the results specific to them, and other times, even if there are results, they would like to get them so they don't have to pay to go someplace else and have them repeated. So sometimes in the community this gets to be a big issue.

DR. LUMPKIN: Let me perhaps ask a somewhat related question, because it gets to ownership. I'm not sure if anyone has looked at this, maybe you have.

Under what authority does the CDC collect data? My guess is that much of what -- well, I know in Illinois, I don't believe we collect anything that isn't authorized under state law somehow or some way. So it is interesting that the way I perceive it, and my conceptual approach to this question, is that the system is really dependent upon a structure of state laws that define who owns the data. Is that a correct assessment?

DR. KOO: That is exactly my understanding as well. CDC collects the data and gets the data voluntarily from the states. The reason I think states do send it is because it provides a service. We provide a service, because who else would take the responsibility of collecting the national data. But it is a voluntary system, and I think that is something that is a misunderstanding in the general public. They think that we, CDC, tell the states what to collect or what to do, whereas it is really the other way around; the states tell us what they are willing to collect and what they are willing to send to us.

Obviously there are other issues in terms of funding and grants, et cetera. Congress tells us what to do. That is another area. But the jurisdiction as it says in the U.S. Constitution, is at the state level.

DR. LUMPKIN: But I think the converse and the important thing is, if you only look at it at the federal level, there appears to be no authority or authorization for this data system that is in place, where the authority exists in the states that are collecting the data. That is a problem with perception when it gets to data collection and that dividing line between research and public health purposes. While we may generate new knowledge at the state or federal level, the collection of the data predominantly is directed or protected under state law. I think that is a major difference between public health generation of knowledge and academic generation of knowledge.

DR. SHORT: I have a question which I guess is an historic clarification. You described at the beginning of your talk a silo tendency for individual systems that have been developed over the years because of the way in which the authorization was to do a tuberculosis system or an STD system or what have you. So these things developed separately, they didn't talk to one another, and you ended up with problems that you are trying to rectify now.

The implication is that you have now an explicit authority to rectify this in some way, and you said that OMB told you to do this, is that right? So I'm just trying to understand exactly how now you are trying to fix this problem that had arisen over the years.

Then I have a followup question, but how did this come about?

DR. KOO: I blame it certainly in part on the categorical funding issues. What happened historically, in the 1980s, as there was increasing use of computers in the public health system in states and at CDC, there was an attempt actually to develop an integrated notifiable disease system. One of the little silos listed there is the National Electronic Telecommunications System for Surveillance, or NETSS, which adds to the confusion -- NETSS, with a T.

That was based on a standard record format, initially 40 bytes and then 60 bytes, just a case report. However, it was not well resourced, because it is cross-cutting; it didn't fall into one of these categories. Therefore, these other programs, which had a lot of resources and a lot of demands -- AIDS, STD, TB, et cetera -- said, we need to go further than that.
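[Editor's note: a standard fixed-length record format like the one NETSS used can be sketched as a byte-offset layout. The field names, widths, and code values below are purely illustrative, not the actual NETSS record layout.]

```python
# Parse a fixed-width case-report record, in the spirit of the NETSS
# standard record format. Offsets and fields are hypothetical.
LAYOUT = [
    ("state", 0, 2),        # state code
    ("year", 2, 4),         # report year
    ("case_id", 6, 6),      # case sequence number
    ("event_code", 12, 5),  # condition being reported
    ("count", 17, 3),       # case count
]

def parse_record(line):
    """Slice each field out of the fixed-width record by byte offset."""
    return {name: line[start:start + width].strip()
            for name, start, width in LAYOUT}

# A 20-byte sample record: state 17, year 2001, case 42, event 10490.
record = parse_record("17" + "2001" + "000042" + "10490" + "  1")
assert record["state"] == "17" and record["event_code"] == "10490"
```

Because every field lives at a fixed byte offset, any receiver that knows the layout can read the record with no parser negotiation, which is why the format traveled well in the 1980s; the cost is that adding a field means renegotiating the layout with every participant.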

So the systems continued to develop. But meanwhile, our partners in the state and local health departments have been increasingly vocal. They thought they were happy initially with the AIDS system and the TB system and the STD system, because at least then there were standardized data in that area from local to state to CDC. But as it became the XYZ system and we ended up with over a hundred systems, they said, this is not useful to us. That is a little bit of an exaggeration, but the fact is that we are organized at the state and local level often by categories as well. So there is AIDS and STD and TB. They were somewhat happy, because the AIDS people were getting the data to the AIDS people at CDC, and they just stayed in their one area. However, at the state and local level, where there are just a few people who have to run all of these, they said, this just doesn't work, and you can't continue to add to our burden.

So that led to several discussions around CDC. We initially in the '80s and then the '90s formed the surveillance coordination group and then the health information surveillance systems board, which now has been replaced by the information council. But it is still a very strong challenge at CDC to do this on a cross-cutting basis with no resources.

So it has really been a strong help to us for OMB to also support this idea that we should approach it in some more cross-cutting fashion and to give us a line item for resources.

DR. SHORT: That was basically an OMB decision?

DR. KOO: Essentially. What happened was, they started asking questions about the existence of all of these systems at CDC. We said, the problem is that we are funded here for TB and here for STD, and so unfortunately we have had to do this, because we are very accountable for these funds for this particular area. It is very hard to do these things in a cross-cutting way.

DR. SHORT: Had CDC been seeking cross-cutting funding so that you could try to rectify this? It had not been forthcoming?

DR. KOO: Congress I think was not particularly excited by infrastructure or cross-cutting integration initiatives.

DR. LUMPKIN: But at the same time, ASTHO, the Association of State and Territorial Health Officials and NACCHO, National Association of County and City Health Officials, had in the early '90s really focused in on this issue and done some work with the Public Health Foundation. There were a number of documents that were out there, and there was a lot of pressure being put on the Assistant Secretary of Health and the heads of the agencies.

The ones that were most responsive were HRSA and CDC, in that they began to allow up to five percent of each categorical grant to be expended for development of integrated systems. So all of that work, the work of the CDC and the states and the locals, created an environment where I think it began to get some attention.

DR. SHORT: That is very helpful. So my followup question now is trying to anticipate, going forward, how NEDSS will work, given the realities of your dependence upon linking to the state-based -- and in some cases multiple -- systems within a state that exist out there today.

On the one hand, you presumably have folks there who are looking for something that is helpful, having recognized that all these silo systems in time became non-helpful, because there were too many of them and they were non-standardized and not linked to one another. But they have all invested money and energy; they have got their own systems. They are not standardized, maybe even within a state, but certainly across states, I assume. I expect there is tremendous difference in the sophistication of the systems from one state to another.

Even with your conceptual data model as the mechanism by which they can presumably now all try to link in to some kind of national solution, it seems there are incredible challenges in getting the legacy systems in the individual states in any way to be adapted over time to be part of this, because they too will have to sell infrastructure arguments to their state legislatures to get the funding in their states in order to be able to do it.

I just wonder how that reality is being addressed as part of the overall NEDSS effort. You can come up with a wonderful design, but the practical challenges of getting it to actually coordinate across all the states sound a bit overwhelming from this perspective.

DR. KOO: Oh, it is overwhelming from this perspective as well. It is a huge challenge. You are exactly right; there are tremendous tensions between the folks who independently would just prefer to continue with their independent systems, both within CDC and I imagine also at the state and local level, where they are established. Hopefully though, there are several motivations for them, one of which is that the systems are old and they will want to upgrade. If the upgrade that is offered is the modern upgrade, then at least they have the motivation to do that.

DR. SHORT: So that is this Computer Sciences Corporation contract. Is this a pathway to upgrading their state systems?

DR. KOO: It is something we are discussing, in terms of how to work with the states, in terms of can they also work with the same particular company, in terms of direct assistance or funding, if they so choose.

A lot of them are not. The first round of funding went to assessment and planning, and so they have sometimes contracted with folks to figure out how should they start to perhaps in a stepwise fashion integrate our systems.

It is an enormous, enormous challenge. It is very complex, but I don't think we feel like there is any real choice in how we do this.

DR. SHORT: How do the states view it? I assume that the states are aware that this is going on at this stage. Can you speak on behalf of the states?

DR. LUMPKIN: Well, we do have a saying in the states that if you have seen one state, you've seen one state. So I think that there are all sorts of differing approaches to that. Many of the states are trying to develop their own solutions.

We don't have a choice to make changes. For instance, we were just notified by our CMS, which stands for Central Management Services, not at the federal level, that the system that operates our vital records system will no longer be supported by them in 12 months. So if we don't replace it, it essentially is going to degenerate or something.

So there is only so long that you can hang on to these legacy systems before they do have to be replaced. During that process, the conceptual data model is very important for us. We have done some internal analysis, and there is a fair bit of interoperability between similar systems. A surveillance system only works so many different ways, and has so many inputs. So when you build the model for a surveillance system, then it is just a matter of re-using the software to develop new surveillance systems. So we are looking at those kinds of things at least within our state. Dan may be able to comment on Massachusetts, into the microphone, if you would be so kind.
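[Editor's note: the point above about re-using software across surveillance systems can be sketched as a generic pipeline parameterized by condition, so that standing up a new system is configuration rather than new development. This is a hypothetical illustration, not any state's actual implementation.]

```python
# A generic surveillance system: the same receive/validate/store logic
# is reused for each condition instead of being rewritten per disease.
class SurveillanceSystem:
    def __init__(self, condition, required_fields):
        self.condition = condition
        self.required = required_fields
        self.cases = []

    def report(self, case):
        """Validate a case report against this condition's required inputs."""
        missing = [f for f in self.required if f not in case]
        if missing:
            raise ValueError(f"missing fields: {missing}")
        self.cases.append({**case, "condition": self.condition})

# Standing up a "new" system is just configuration of the shared model.
tb = SurveillanceSystem("TB", ["person_id", "onset_date"])
measles = SurveillanceSystem("Measles", ["person_id", "onset_date", "vaccinated"])

tb.report({"person_id": 1, "onset_date": "2001-06-01"})
assert tb.cases[0]["condition"] == "TB"
```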

DR. FRIEDMAN: More generally, I think there has really been a sea change in the last two to five years in recognition of and acceptance of the need for building state-based integrated systems, as well as CDC-based integrated systems. I think even two, three, four, five years ago, discussion of standards, discussions of the common interface, discussion of a common data repository would have been greeted with a fair amount of territorial protectiveness.

I think now, to a much greater extent, extending well beyond communicable disease, I think people recognize that it is absolutely essential, and that we are wasting too much money, we are wasting too much time, and we have too much of a burden on ourselves and on our data providers.

I think that the atmosphere, certainly at CDC but equally at the states, has changed in ways that at least I never would have anticipated several years ago. Having said that, as Denise said, there is a very long way to go. We have probably roughly one hundred distinct person based data systems in our state health department, all of which have different address fields, different definitions of race and ethnicity, et cetera, et cetera, et cetera.

DR. LENGERICH: Just as a followup to that, do you see, is there any move to change the notifiable diseases or the way they are interpreted, or to use that as a mechanism by which more data, different data can be collected in a different way?

DR. KOO: There is probably a several-fold answer to that question. To be very concrete about some of the informatics issues in notifiable diseases, our case definitions for infectious diseases under surveillance are -- there was a reasonable effort in trying to standardize our approach to them, but they do not -- they are not systematic enough to lend themselves to coding in a standardized fashion. So we really will need to re-think, and that will be a challenge, partly because some of our case definitions are influenced by other factors beyond the science of the case definition. The lobbyists for certain diseases are very concerned about our definitions and how they categorize or count cases of a given disease.

So there is that aspect of it. The other aspect which you may have been hinting at in terms of notifiable diseases, which again gets at the point that John raised about state law and jurisdiction, is the authority to collect information that is not just what we think is a suspected case of measles or anthrax or tuberculosis, but to really think about how public health -- under what authority do we collect information about respiratory syndromes, et cetera, that we are concerned might represent early cases of some sort of catastrophic terrorist event.

So those sorts of issues are very much an issue. I don't know what the solution will be, because it is very complicated, and it gets at all these privacy-confidentiality access issues.

DR. LUMPKIN: If I can just add to that, I have authority as the state health director to mandate notification of an infectious or communicable disease, and not do the same for a chronic illness that is not infectious in nature. So I might be able to stretch it and require peptic ulcer disease to be reported, and other diseases as they may have an infectious component. But as you go to different states, even though there may be a desire to develop those kind of reportable systems, they are subject to jurisdictional limitations that may exist at the state or local level.

DR. ZUBELDIA: Following up on what Dan was talking about, having a hundred systems in Massachusetts that the providers will have to report to, and thinking about the burden to the provider: even if you have a standard way of communicating, the provider may have to communicate with a dozen or more different systems, including some federal systems, and even with standards, that really doesn't simplify it too much.

Have you looked at any clearinghouse type of model where the provider communicates with one location, and that location is in charge of communicating with everybody else in whatever standards they need to communicate with, with whatever conversion of codes and data content? It is a model that has been looked at in HIPAA for an imported solution.

DR. KOO: I don't know -- clearinghouse, virtual data center, warehouse type models -- these concepts have certainly come up. I think there are a lot of discussions about the policy issues around them. That was my quick allusion to data ownership and access issues. I think we recognize that for efficiency's sake, it would be helpful, and I know that the providers, whether they be large clinical laboratories that cross jurisdictions or individual providers, would prefer to have a single place that they can send it to, and then it gets parsed out. But there are a lot of policy issues that we are going to have to work on.
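[Editor's note: the clearinghouse model raised here can be sketched as a single intake point that routes each report to every system that needs it. The conditions and destination names are hypothetical routing rules, not an actual design.]

```python
# Clearinghouse model: a provider sends one report to a single intake
# point, which routes it onward. Routing rules here are hypothetical.
ROUTES = {
    "TB": ["state_tb_registry", "cdc_tb"],
    "Salmonellosis": ["state_notifiable", "cdc_netss"],
}

def clearinghouse(report):
    """Return (destination, report) pairs for every system that needs it."""
    destinations = ROUTES.get(report["condition"], ["state_notifiable"])
    return [(dest, report) for dest in destinations]

# The provider makes one submission; the clearinghouse fans it out.
deliveries = clearinghouse({"condition": "TB", "person_id": 7})
assert [d for d, _ in deliveries] == ["state_tb_registry", "cdc_tb"]
```

The design choice this illustrates is exactly the policy tension in the discussion: whoever operates the intake point sees every report first, so the routing table encodes the data ownership and access decisions.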

DR. SCANLON: Just to go back to the concept of authority for data collection, actually as John and Denise said, for the notifiable diseases and the exercise of the state police power, those are vested in the state. That is where much of the public health activity is geared towards.

On the other hand, at the federal level, there are authorities -- they are often general, but sometimes quite specific -- and the famous Section 301 of the Public Health Service Act literally authorizes the leadership of HHS to engage in these sorts of activities -- prevention, research, a number of interventions -- in a very general way.

So there is an authority at the federal level for virtually every activity that any agency undertakes. It is not that there isn't authority.

Number two, historically again, this concept of integrating data actually goes back decades in many manifestations, depending on the technology available at the time, and the public health community has been trying to integrate or streamline with whatever the technology allowed at that time, without throwing the baby out with the bath water. The first rule was, do no harm.

The thing that makes the difference in the last few years is, number one, HIPAA, which I think really does change the formulation both in the health care sector and the public health sector -- the whole HIPAA activity in terms of standards, interoperability and so on, and a way of relating across the health care sector. Number two, the ubiquity of the Internet and the whole concept of using it within the past five years. I think this has probably moved us up to a position where we can actually realistically talk about integrating some of these activities in a realistic way.

Before that, there were a number of proposals John alluded to. HHS actually included language in all grants to allow -- clearly before that, you could only use the money in that grant for that particular purpose, which would literally mean you would develop an information system only for that, if you developed an information system at all.

Working with ASTHO, with John's group, we actually managed to broaden that to have a provision in other grants to allow more of an integrated approach. I think there is even a wider approach to that now.

Ultimately, I suppose people are looking for a grant that provides money for integrated information systems generally. That was an idea of ours, too, Ted, and while everyone liked it, Congress just didn't find the funding available to do that.

So I think a lot of this is a matter of timing and the technology available and the political climate at the time.

DR. RADA: My name is Roy Rada. I am still somewhat skeptical about it. I understood the intent of Ted's question to be how will this general model motivate the states to try to be compliant with this approach, and then the responses about sea changes and Internet and so on.

Referring to the HIPAA story, the HIPAA story shows that despite people agreeing it is a good thing to do, it wasn't enough to have good will. I wonder whether you want to do something more specific, like what they found they needed to do with the HIPAA transaction standards, which was a regulation requiring transactions to be in a certain standard.

So the comment was that I am skeptical about the good feelings. The question is, what can one do, beyond being hopeful that good feelings will win the day.

DR. LUMPKIN: You are with which group?

DR. RADA: I'm from the University of Maryland.

DR. SCANLON: Can I just respond? My question is, why doesn't the market -- we always turn to the market in the U.S. -- why doesn't the market respond? Why doesn't the market provide systems that meet these needs? Why is this such a specialized information technology service market that what has happened in all of the other industries hasn't happened?

DR. LUMPKIN: Because the number of customers is so limited.

DR. SCANLON: There is no money.

DR. SHORT: And the customers in general don't get it. It really is not a knowledgeable purchaser market regarding the details we are discussing. I think that in fact, that has been the problem with the clinical information systems industry; from day one there has been a real disconnect between the marketplace into which they are selling and their understanding of what these issues are.

So decisions are being made on money criteria -- it is a cynical perspective, but it really has to do with the culture of medicine with regard to information technology. It is changing, I agree, it is changing.

DR. LUMPKIN: I think the answer to -- or at least a partial answer to -- Roy's question is related to the Constitution and states' rights, and to what extent the federal government can in fact impose requirements on states without, A, funding them, and B, within those things that are under the jurisdiction of states, which is primarily collecting infectious disease and other kinds of public health information.

So it is a little bit more complicated. I personally would not have a problem with the federal government doing that. I can't speak for all my colleagues -- and then, as we learned very forcibly about four or five years ago in our Association of State and Territorial Health Officials, we fall under the aegis of the National Governors Association. Having all of us agree would not be the same as having the governors agree.

DR. SCANLON: That's for sure.

DR. LUMPKIN: One of their basic tenets is states' rights, and they are very hesitant to give up anything to the federal level. So the political environment would not be there. But I think that we have an environment whereby if there is a quid pro quo -- and there has been in a number of federal programs; the vital records system is a good example, where there is no requirement for states to turn in data to NCHS, but NCHS does give money to states in return for receiving the data, and some assistance. So that helps the partnership.

But I believe that there is a sense of good will and of benefit. There are many states who are now joining the national electronic -- for fingerprinting of organisms, I forget the name.

DR. KOO: Pulsenet.

DR. LUMPKIN: Pulsenet. There are a number of states who participate in Pulsenet who are not funded to participate, just because of the good. If we have an outbreak and we do DNA fingerprinting, we can compare to DNA fingerprinting of outbreaks in other states to see whether or not they are related.

So I think that despite the fact that there is some resistance, that resistance as Dan pointed out is significantly less today than it was five years ago.

DR. FRIEDMAN: On the one hand, I agree with the skepticism that good feelings aren't enough. At the same time, without the good feelings, nothing is going to happen. For the first time in the past two-three years, we are seeing individual states and individual state legislatures making what for state health departments are substantial investments in information systems. It is certainly not as much as I would like, it is certainly not as predictable as I would like, but at the same time, cash is being forked over, both on the data collection end as well as on the back end for building integrated systems. That is a real change.

DR. KOO: We have been talking with ASTHO staff about what sorts of strategies we should use to try to engage the National Conference of State Legislatures, NGA, the governors, and about how we market better, how we explain better what the benefits are to our public health partners. This is at all levels -- this is within CDC, but also state and local. So it is not that we think that this is just going to happen on its own.

DR. LUMPKIN: We're going to take one last comment.

DR. RADA: Is there any way we can stretch the interpretation of HIPAA when it talks about health claims or equivalent encounter information, so that your committee could recommend that one looks into the standardization of these transactions for the state health departments and the CDC?

DR. LUMPKIN: I suspect not, but I think that to the extent that the standards are developed at the Centers for Disease Control level, states have been very quick in adopting those standards. Just as with the conceptual data model, there are already states that are contacting CDC, saying, we are starting to develop a system; can we make it follow those models that you are working on? I think it is a matter of information more than coercion. I think states are interested.

DR. KOO: And certainly, our approach has been because of HIPAA to work with the standards development organizations. So it is not CDC per se, it is working in that external world outside of public health.

DR. SCANLON: The other benefit of working through the HIPAA framework is that you tend to look for commercial off-the-shelf kinds of applications that are universally standard, not creating unique, one-of-a-kind applications; the standard applications are the ones you really do get a marketplace from.

DR. LUMPKIN: Thank you very much for this presentation. I think we would like to have you back probably in a year or so for an update. Six months?

DR. SCANLON: I think we're going to make big progress.

DR. LUMPKIN: This is a very important piece for us as a committee to keep tabs on, since our perspective is not just health care, but the broader aspects of health. So thank you.

DR. KOO: Thank you.

Agenda Item: Reports from Subcommittees and Work Groups on 2001 Work Plans

DR. LUMPKIN: We have taken care of all of the action items on our agenda. We have just the reports left for the various -- the executive subcommittee we have taken care of. The executive subcommittee will have an all-day meeting on August 14 in Chicago. You have seen the information from the NHII, the draft recommendations, so I think that is about all we have to talk about.

Standards and Security? Simon?

DR. COHN: You have received the letters which are obviously a principal work product of the last month or two. The committee in addition was discussing upcoming hearings. There will be a hearing on August 20 and 21 to get into the issues of PMRI and next steps. Jeff, do you have any comment about that?

DR. BLAIR: No, I don't think so.

DR. COHN: No? Okay. Well, there will be a day and a half hearing where we will be talking about information models, I think some about data models, but certainly about information models and looking at the state of the PMRI standards at that point.

In addition to that, we have tentatively scheduled and are at least holding dates October 9 and 10 for hearings to be determined.

DR. GREENBERG: Ninth and 10th and 25th and 26th?

DR. COHN: No, not the 25th and 26th. I made an error. It is the 9th and 10th of October. We are holding those dates. We also have dates held on November 14, the day before the November NCVHS meeting, and December 13 and 14. It is likely that one of those sets of hearings will be canceled, but we decided to make that decision after the August hearings.

DR. GREENBERG: There is the possibility of the meeting November 14 in conjunction with WEDI?

DR. COHN: That's right, exactly. That is why we are holding things open and haven't made a commitment about which two of the three dates we are likely to go with.

Rather than going through our list of issues, I will only comment that we actually do have a list of issues whose implementation we are following. The HIPAA administrative transactions are issue number one, and PMRI is issue number two. The third issue that we see ourselves getting involved with, as this year ends and next year starts, is the whole issue of code sets. We will be starting from a more HIPAA view, in terms of gaps in HIPAA code sets: how do we fill legitimate needs of users, where do the more expansive terminologies versus specific code sets fit in, how are the maintenance and updating of the code sets going, various issues like that. I think that will be a major issue that we will be getting into as 2001 ends and 2002 begins -- we will get into the swing on that one.

So just by way of information and to let people know what is coming up.

DR. LUMPKIN: Great, thank you. Privacy and Confidentiality?

DR. ROTH: The Privacy and Confidentiality Subcommittee met this morning to go over our plans for our hearing in August on the HIPAA privacy regs. We have planned a hearing to take place on August 21, 22nd and 23rd. It will be the afternoon of the 21st, all day on the 22nd and half a day in the morning of the 23rd.

We met with the OCR representatives as well this morning to go over some of the specific issues that we should take up at our hearing. The four topics that we are going to be working on are consent, minimum necessary, research, and marketing. We will identify the groups and individuals who have known views on these issues, and we will invite them to express those views to us, noting of course that this is not a general session on every aspect of HIPAA, nor every aspect of consent or the other issues. We are going to send them letters outlining specific questions that we would like addressed, and also suggesting to them that this is a problem-solving kind of hearing, where we hope to be able to come out with a recommendation.

At the hearing we also have time for public testimony. That means uninvited testifiers who can either notify us in advance or sign up on the morning of the particular hearing.

Then the subcommittee will deliberate and try to develop some recommendations, for sharing with the full committee and for the necessary revisions and the like, and then send those on to the Secretary for the work of OCR in their rulemaking process.

DR. LUMPKIN: Thank you. Subcommittee on Populations?

DR. MAYS: Populations also met yesterday. It began with a continuation of the discussion with the guest that we had at the earlier panel.

Part of what we continued discussing with our guest was their presentation of some examples of the dilemmas that we are going to encounter in terms of the utilization of data collected on race and ethnicity when there are multiple race designations. I think it was very helpful to us to get a sense of both what is being thought about in terms of that and some of the issues that I think we are going to struggle with.

Second, I think what also came out of that discussion was some sense that, in terms of assessing disparities, we may want to look at variables other than just race. If we really want to understand the disparities in more vulnerable populations, we should be talking about things like education and socioeconomic status. Those are all things that I think are important.

So I think it is going to help to inform our next step, which really is going to be for the populations group to take on a closer examination of the implementation of the data collection on race and ethnicity and try and get a sense of what are some of the policy recommendations that we might make.

What we decided that we would do is hold a hearing. The next one would be with individuals from the federal agencies, so that we can get some update on how, for example, they are implementing the standard, and determine from there exactly what is going on. Then after that we would probably hold a hearing in which we would really bring the users into the picture. We will hear from them, once the data is released, what they are encountering in terms of using the data, and move from there.

So we do have some potential hearings. I think we are going to have to talk about dates, because we may try and do one as early as in conjunction with the executive committee meeting in August and maybe another particularly for the users in conjunction with the American Public Health Association meeting, since many of them do attend that particular meeting. But that is all still to be worked out.

DR. GREENBERG: You're talking about a hearing in Chicago?

DR. MAYS: Right.

DR. GREENBERG: I think if you want to hear from the agencies, I would not recommend -- if you are going to have a hearing or a meeting with the agencies to hear about how they are implementing, that should be in Washington, D.C., definitely.

DR. SCANLON: It could be one of our breakout sessions, actually.

DR. MAYS: Okay. So we will reconsider the time and place, but I think in terms of the strategy of where we are going, that will be the direction we will take.

I think in terms of other issues that arose for us, it had to do with how we are going to structure our upcoming work. So part of what we talked about is a strategy in which we will be taking on some long-term and short-term kinds of initiatives, with a long-term initiative being that which I just talked about.

I think other than that, that was about it.

DR. LUMPKIN: Thank you. Ann, anything to add?

DR. FRIEDMAN: In your meeting booklet, there is a one-page insert which is a schematic outline of the final report. I'm not going to go through the outline. There has been essentially a core group representing the different partners in the endeavor -- the different partners being the HHS Data Council, NCHS, CDC and NCVHS -- who have been communicating and meeting regularly to draft the final report and further develop that outline. That consists of Ed Hunter from NCHS and the Data Council, Rob Weinzimmer from NCHS, Skip Parrish from CDC, and myself.

We have gotten together two or three times in the past four months. We are getting together again at the beginning of July. We met -- we meaning the NCVHS work group -- met this morning. We went through the detailed outline in part and received some extremely helpful suggestions and comments. We are also planning on meeting for roughly half a day in Chicago following the executive subcommittee meeting to continue going through the iteratively developed detailed outline.

Simultaneously with that, we are also going to be working on essentially a structure for the recommendations -- not the recommendations necessarily themselves at this point, but what we are hoping to do is send out that structure sometime during the summer to various stakeholders and people who have provided testimony at the hearings, and invite their submission of recommendations within that structure. This is a way basically of trying to proceed on parallel paths, rather than waiting for the entire report to be ready before we start circulating it.

We are hoping to have a more complete draft available for review at the September meeting. Then once we have a more complete draft, we will also hopefully have somebody who will be able to do some professional editing on it.

So that is where we are.

Agenda Item: Future Agendas for NCVHS Meetings

DR. LUMPKIN: Thank you. The dates for the next meetings are listed in the handout. Do we have any items that we need to be aware of for the agenda for those meetings?

DR. GREENBERG: I just wondered, since we're back to the agenda: we obviously didn't have any report from Quality, but there is one item to report from them. I don't know if you want to do that or you want me to, for information.

As you know, the Work Group on Quality has been organizing sessions pretty much in conjunction with the full committee, because of the interest across the full committee in these topics, and also the fact that its members are largely the same as on Populations, et cetera, and people just haven't been able to come in for that many additional meetings.

But there have been a lot of very good presentations, including those we have heard in the last few days. Kathy Coltin is chair -- and we have talked about this over a period of maybe more than a year -- but she mentioned at our meeting of the Subcommittee on Populations the idea of having somebody pull together all of the information that has particularly focused on the barriers, the data barriers and issues related to being able to report on quality and do performance measurement, et cetera, which has been the focus of a lot of these sessions.

So we are going to be looking into that, getting somebody to do that, a consultant to work on that. That would be a product of the quality work group and then of the full committee.

DR. ROTH: I just wanted to note for the agenda for the September meeting, we will need to discuss the Privacy and Confidentiality Subcommittee recommendations.

DR. LUMPKIN: Right.

DR. GREENBERG: Action items actually would be helpful to know about. I know there are a number of --

DR. LUMPKIN: We can cover that in the executive.

DR. GREENBERG: We'll do that in the executive, right. Anyone who is not going to be at the executive subcommittee meeting and has suggestions for presentations, et cetera, at either the September or November meeting, please just e-mail those to me or to Debbie Jackson, along with any action items the chairs should be aware of. Also, I know there are a few items we have carried over already that we are planning on.

DR. LUMPKIN: I think we have accomplished our agenda in a very disciplined fashion. So since we are not scheduled to adjourn until four, would you all please sit in your place?

DR. SCANLON: I think I hear a motion.

DR. LUMPKIN: Having accomplished our agenda, this meeting now stands adjourned.

(Whereupon, the meeting was adjourned at 2:25 p.m.)