[THIS TRANSCRIPT IS UNEDITED]

NATIONAL COMMITTEE ON VITAL AND HEALTH STATISTICS

Work Group on Computer-Based Patient Records

May 17, 1999

AHCPR Conference Center
6010 Executive Boulevard
Rockville, Maryland

Proceedings By:
CASET Associates, Ltd.
10201 Lee Highway, Suite 160
Fairfax, Virginia 22030
(703)352-0091

TABLE OF CONTENTS

Call to Order and Introductions

Overview of Clinical Vocabularies and Issues

Overview of Terminologies and Issues

Statistical Classifications and Code Sets

Clinical Specific Code Sets


Participants:


P R O C E E D I N G S [9:05 a.m.]

Agenda Item: Call to Order and Introductions

MR. BLAIR: I think it is time for us to convene. Please take your seats.

This is the National Committee on Vital and Health Statistics, the Computer-Based Patient Record Work Group. The meeting is within the context of the mission of the work group, which is to study uniform data standards for patient medical record information and the electronic exchange of that information.

These hearings are part of a set of hearings going on throughout 1999, which will lead up to recommendations that this work group will make to the full NCVHS and on to Secretary Donna Shalala by August of the year 2000.

I am Jeff Blair. I am with the Medical Records Institute. I am chair of the CPR work group. I think the next thing I would like to do is have all of the committee members introduce themselves.

DR. COHN: I'm Simon Cohn, a member of the committee and the chair of the Subcommittee on Standards and Security, which is the parent subcommittee for this work group, and I welcome you to this hearing.

MR. MAYES: I'm Bob Mayes, Health Care Financing Administration, lead staff to the work group.

DR. FYFFE: Kathleen Fyffe, member of the committee. I work for the Health Insurance Association of America.

DR. FITZMAURICE: Michael Fitzmaurice, Agency for Health Care Policy and Research. I am liaison to the National Committee on Vital and Health Statistics and co-lead staff to the Computer-Based Patient Record Working Group.

DR. FERRANS: Richard Ferrans. I am a consultant to the VA and chief of informatics at LSU Medical Center. I am staff to the committee.

MR. GARVIE: Jim Garvie with the Indian Health Service, also staff to the committee.

DR. CHUTE: I'm Chris Chute, professor of medical informatics, Mayo Foundation.

DR. CIMINO: Jim Cimino from Columbia University.

(Whereupon, introductions were performed by members of the audience.)

DR. GREENBERG: I'm Marjorie Greenberg, National Center for Health Statistics, CDC, and executive secretary to the committee.

MR. BLAIR: Is that everyone? Just to put this in perspective for you, we indicated that this particular work group was focusing on uniform data standards for patient medical record information and the electronic exchange of that information.

We have pursued that activity by defining a set of focus areas. One focus area is message format standards. Another is medical terminologies. Another is the quality and accountability of data. Another is the requirement to address the proliferation of different standards in different states related to electronic health records, and some infrastructure issues and some cost benefit issues related to these subject areas.

Agenda Item: Overview of Clinical Vocabularies and Issues

These are the first two days that we are having hearings on medical terminology. We expect that we will probably have one or maybe two more days later on. You may notice that as you glance through the agenda, today and tomorrow are essentially developers of medical terminologies. As we begin to look a little bit more into medical terminology, we will be looking at vendors and users of medical terminology, so that we can try to understand these issues a little bit better.

We have attempted to craft today and tomorrow into some areas of focus. For those guests that are here, we have a diversity of folks on our committee. Some of them have more familiarity with medical terminology and some have less. So these first sessions here this morning are really intended to educate the committee as a whole with respect to the principles and structures and objectives of terminology and in particular, medical terminology.

For that reason, we have asked Dr. Jim Cimino from Columbia Presbyterian Medical Center and Dr. Chris Chute from Mayo Foundation to begin the education process for us. That will be followed by Mark Tuttle of Lexical Technologies and Keith Campbell from Kaiser.

Jim, could you begin for us with our basic education?

DR. CIMINO: Sure. Good morning. I'm not sure whether I am a developer or a user. I try to use whatever I can and develop the rest, so I kind of come from both sides.

Mike asked me to come down and talk a little bit about some educational things, but also to say whatever the heck I wanted. So I have the recency effect and I am mostly doing the second.

I actually gave testimony to the full committee, I think it was back in March two years ago. In there, I covered a number of topics that were pretty well described in the minutes, talking about the multiple needs for controlled vocabulary, who needs them, who the different groups were, different constituents. Talked about capturing clinical data for multiple purposes and re-using clinical data. I covered some of the inadequacy of current systems, and then talked about desired characteristics in a model for the future.

I'm not going to try to go through all of that today. That was about 45 minutes worth of stuff, and some of it is fortunately outdated. For instance, some of the current systems have started to evolve to the point where they are addressing some of the former inadequacies. I'm not so sure what the model for the future should be anymore, so I'll just cover a couple of these.

First of all, this is one of the slides from a couple of years ago on what I thought was still relevant and important for the committee to keep in mind.

In the center here, for Jeff's benefit, is a box that says, collect patient data. That feeds into a clinical repository. Then in health care institutions, what typically happens is that we have to then recode the patient data. Then that goes into a financial database. So there is this process of collecting data and then recoding it.

The recoding is actually done by people other than the people that collect it. So there are problems with accuracy of the recoding. Then the terminology that is used to recode is not the same terminology that is used to capture the data, so there are often translation problems that occur as the data are transferred in this process.

Meanwhile, in many institutions like Columbia, for instance, we have another box for you of collect patient data, and that goes into a research database. So we are actually collecting data twice on the same patient and putting it into two disparate databases, using two different processes, really three processes of coding patient data. All we need now is to have somebody come along and tell us there is yet another reason to start coding data. Then we'll have to start adding more of these boxes.

So at least to me, the obvious solution is to collect the patient data once, put it in the clinical repository, and based on the terminology that you have used for coding the data, then be able to generate your financial data and your research data from there, instead of having three or more separate processes for data collection.
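As a minimal sketch of that single-collection flow, with entirely hypothetical concept identifiers and mapping tables (none of this is drawn from any actual system), one clinical observation could be coded once at the point of care and then projected into a billing extract and a research extract by table lookups rather than by separate re-coding steps:

    # Hypothetical illustration: one clinical coding event, two derived views.
    clinical_repository = [
        {"patient": "P001", "concept": "C-PNEUMONIA-PNEUMOCOCCAL", "date": "1999-05-17"},
    ]

    # Curated mapping tables maintained alongside the terminology.
    to_billing_code = {"C-PNEUMONIA-PNEUMOCOCCAL": "481"}        # an ICD-9-CM-style code
    to_research_group = {"C-PNEUMONIA-PNEUMOCOCCAL": "bacterial pneumonia"}

    billing_extract = [
        {"patient": r["patient"], "code": to_billing_code[r["concept"]], "date": r["date"]}
        for r in clinical_repository
    ]
    research_extract = [
        {"patient": r["patient"], "group": to_research_group[r["concept"]], "date": r["date"]}
        for r in clinical_repository
    ]

    print(billing_extract)
    print(research_extract)

The point of the sketch is only that the financial and research views are derived by lookup from the single clinically coded record, not re-entered by different people with different terminologies.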

I wanted to cover a few of the basic, what I have come to call desiderata for controlled terminology. There are about 12 of them. I don't have time to go through all of those here, but I wanted to leave some impressions of what I think some of the most important ones are.

The single most important one, I have come to believe, is concept orientation, which is the notion that each term isn't just a name, that you say, this is the right name to use for this patient. It is a concept. There is a meaning behind that term, and that concept is the actual focus of what you are coding, not the name that you are coding.

The thing that goes along with that is that there is one meaning per code -- each code has only one meaning underneath it, one concept. That is to avoid ambiguity in your coding system.

There should be a clear semantics. That is, when you say gunshot in atrium, are you talking about a wound to a part of the heart or are you talking about an event in a part of a building? You have got to be clear in what your concepts mean and what context they can be used in.

Meaning and concept are interchangeable notions. That is, a concept is an object that embodies a particular meaning, so you can't ever change the meaning of a concept. That goes against the definition of what a concept is. But you can easily change the meaning of a code, and that happens all the time. ICD-9 does it every year. They will take the codes and they will rename them, and in the process of renaming, they will change the meaning. What happens then of course is that you have got a code that has two meanings over the course of time. If you are looking at the world as one-year snapshots, I guess that is okay. But when you are collecting data in a clinical database, in a longitudinal database, and trying to take care of patients over time, it is critical that you can retrieve information over time and look at trends and look at changes, to be able to say, when did this patient first have this problem, or find me all the patients with this particular problem. If the meaning of your code changes over time, that becomes impossible.

The last thing is concept permanence. That is, once you have said that a concept exists, you can't go back later and say it doesn't exist anymore.

I pick on ICD-9 CM all the time, because they do that every once in a while. At one point, they added lots of concepts about different kinds of smoking, and then they said we don't care about those anymore. They had lots of concepts about HIV and then they got rid of them.

These concepts are permanent. The data are in the database with those codes. You can say that these codes are no longer used, that they are retired, or Randy Miller coined the phrase emeritus terms.

We don't use the term non-A non-B hepatitis anymore, but we still have lots of data that are coded non-A non-B hepatitis. We can't simply pretend that those data don't exist anymore. They are still very useful, especially with the patients who have the data associated with them.
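A small sketch of these two ideas together -- names that can change while meanings cannot, and retirement rather than deletion -- using entirely invented identifiers and fields, not any actual terminology's structure:

    # Hypothetical concept-oriented entry.  The identifier never changes
    # meaning; names may be edited; retirement is recorded, never deleted.
    concept = {
        "id": "C0001",                      # permanent, meaningless identifier
        "definition": "inflammation of lung tissue caused by infection",
        "preferred_name": "pneumonia",
        "synonyms": ["pneumonitis, infectious"],
        "status": "active",                 # may become "retired", never removed
    }

    def rename(c, new_name):
        # Allowed: change the name, never the meaning behind the identifier.
        c["synonyms"].append(c["preferred_name"])
        c["preferred_name"] = new_name

    def retire(c, reason):
        # Allowed: mark as emeritus; the identifier stays valid for old data,
        # so records coded with it (like non-A non-B hepatitis) remain usable.
        c["status"] = "retired"
        c["retired_reason"] = reason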

I tried to boil down some of the do's and don'ts that I would like to recommend, and in the process try to explain as part of this educational mission that I have been given what these things mean.

It turns out when I looked at the structural aspects, they all turned into don'ts -- don't do this, don't do that. It's a lot easier to explain that way.

The first is, when you have a terminology, don't limit the depth or breadth of a hierarchy. Don't say you can only have 10 things at each level, or you can only have three levels in a hierarchy, because as soon as you do that, somebody will come up with some nuance that they want to express at the lowest level of a hierarchy, or they will come up with a new version or something at a particular level and you have run out of codes and you have to put it into Other or you have to put it somewhere else. If you limit the breadth and depth, you will run into trouble over and over again.

When I started at Columbia, Paul Clayton recruited me. He came from the HELP system. I was trying to convince him that we needed a vocabulary that didn't limit the breadth and depth of a hierarchy. He had the HELP terminology, and the HELP terminology seemed to have plenty of room, because they used 256 codes at any level, and I said that is not enough. When we looked, it turned out there were places in the HELP vocabulary where 256 simply wasn't enough things to have at a particular level. That is one don't.

Another don't is, don't limit to a strict hierarchy. Strict hierarchies are fine in some cases where you have mutually exclusive things. So for instance, the axes of a terminology, where you are saying, these are diseases and these are organisms, organisms are never diseases. Semantically, it is just not possible for an organism to be a disease. It can cause a disease. The concept of a disease may have the same name as an organism, but they are different concepts.

So you can have a strict hierarchy that separates these mutually exclusive classes. But there are many places where you can't settle for a strict hierarchy.

For example, you look in ICD-9 CM, which is a strict hierarchy, and you say, I want to find all the cancer terms, you can't find a class that embodies all the cancer terms, because they are scattered around in the lung diseases and the GI tract diseases and so on. You can't even say, get me all the lung diseases, because when you go to the lung category, some of the lung diseases are often in the infectious diseases category.

So if you want to start looking at terms in the classification, using classifications, multiple hierarchies are critical in medicine. You take the example of hepatorenal syndrome. Where does that go? Does it go under liver disease or under kidney disease? The answer is, it goes under both. If you put it under one, somebody who is saying get me all the patients with liver disease will miss the hepatorenal syndrome if they have been classified as kidney disease. Or if you say, get me -- well, that is sufficient for that one. I only have 20 minutes, so I don't want to use it all up on that.
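A sketch of why multiple parentage matters, using invented concepts and a purely illustrative structure: hepatorenal syndrome is placed under both liver disease and kidney disease, and a query that walks the hierarchy finds it from either direction:

    # Hypothetical polyhierarchy: each concept may have several parents (a DAG).
    parents = {
        "hepatorenal syndrome": ["liver disease", "kidney disease"],
        "liver disease": ["disease"],
        "kidney disease": ["disease"],
    }

    def descendants(root):
        # All concepts that are, directly or transitively, kinds of `root`.
        found = set()
        for child, child_parents in parents.items():
            if root in child_parents:
                found.add(child)
                found |= descendants(child)
        return found

    print(descendants("liver disease"))   # includes hepatorenal syndrome
    print(descendants("kidney disease"))  # also includes hepatorenal syndrome

In a strict hierarchy, the syndrome could live under only one of the two parents, and the query from the other direction would silently miss those patients.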

Don't put meanings into codes. There are different ways of expressing this: saying you have non-semantic codes, or meaningless identifiers. They all sound like the codes don't mean anything, because there is no meaning associated with them. That is not what is intended.

What is intended is, when you look at the code, it shouldn't be telling you the meaning of the concept. The reason is that you may change, for instance, where you want to put the term in a hierarchy. In ICD-9 CM, for instance, the location in the hierarchy is specified by the code itself.

So if you say, we are going to now put peptic ulcer in the infectious diseases category, you have got to give it a different code if you want to do that, because you can't keep the same code. So now you have discarded one code and produced another one.

Also, when you start putting meaning in the code, you force yourself to say, okay, I'm going to have four letters, or I am going to have 10 letters or whatever, and you run out of room. You can't come up with enough abbreviations for the different things and cram them into some little coding system.

Some people would say, why not use the name? Just use the name as a code, and then you don't run out of room. The problem with that is, what if you want to change the name of your term? You are allowed to change the name of a term if you want to clarify its meaning, or it is misspelled or what have you. You don't want to change the meaning when you change the name, but you can change the name. And if you have used the name as the unique identifier, now suddenly you have got two unique identifiers for the same concept.
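A sketch of the contrast with invented codes (neither the codes nor the chapter labels come from any real classification): if the identifier encodes its chapter, moving the concept forces a new code; if the identifier is opaque, the hierarchy can be reorganized and names corrected without touching stored data:

    # A code whose letters encode the chapter: reclassifying peptic ulcer under
    # another chapter would force a new code, discarding this one.
    semantic_entry = {"code": "GI-017", "name": "peptic ulcer"}

    # An opaque code: its placement lives in a separate hierarchy table, so the
    # classification can be reorganized, and the name corrected, without ever
    # creating a second identifier for the same concept.
    opaque_entry = {"code": "C73214", "name": "peptic ulcer"}
    hierarchy = {"C73214": ["digestive system disease"]}   # editable placement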

This is my favorite one to pick on, NEC, Not Elsewhere Classified. I urge at every opportunity to do away with Not Elsewhere Classified as a way of pigeonholing terms. It is not necessary any longer with computer systems now to have to force yourself to say, we've got A, B and C, and all the others are going into D, because we don't know what they are going to be, or we don't have enough codes for them, or we have some excuse that we are going to lump them all in there.

You lose information when you do that, number one. Number two, when you add another term to your terminology, the meaning of what other is changes. That is, if I've got three pneumonias this year plus another category that is all other pneumonias, and next year there is a new name for a pneumonia that previously was categorized under the other category, now it gets its own code, the meaning of other has changed, and I can't aggregate my terms. This violates the notion of concept permanence. So I urge getting rid of this.

In my opinion, the way to handle the problem of saying I don't have a code for this -- they said pneumonia but it's a different organism and we don't have a code for pneumonia by that organism -- the answer is, use the code for pneumonia or bacterial pneumonia, and then add the modifier as either free text or a code. Then you have in effect said, this is a pneumonia not elsewhere classified, because it is pneumonia. It is not just plain pneumonia, it is pneumonia with a modifier. Later, if you develop a term for the combination of pneumonia with that modifier, you can gradually go back and recode your data if you wanted to. But in any case you haven't lost information. You can go back and aggregate your data.

You could say, get me all the patients with pneumonia. You will get all those patients. You can say, get me all the patients with pneumonia not otherwise specified, the NOS, you can do that. You can say, get me all the patients that have pneumonia that is not otherwise specified and it is not a separate code, but it has a modifier. Again, you can do that with this and you haven't lost any information in the process.
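A sketch of that post-coordination idea, with invented records and concept names: rather than an NEC bucket, the record keeps the pneumonia code plus a modifier, so a query for all pneumonias still retrieves it and nothing is lost:

    # Hypothetical records: base concept plus optional modifier instead of "NEC".
    records = [
        {"patient": "P001", "concept": "pneumonia"},
        {"patient": "P002", "concept": "pneumococcal pneumonia"},
        {"patient": "P003", "concept": "pneumonia",
         "modifier": "organism X (no specific code yet)"},   # would have been NEC
    ]

    pneumonia_kinds = {"pneumonia", "pneumococcal pneumonia"}
    all_pneumonias = [r for r in records if r["concept"] in pneumonia_kinds]
    modified_only = [r for r in records
                     if r["concept"] == "pneumonia" and "modifier" in r]

    print(len(all_pneumonias))   # 3 -- nothing has fallen into an "other" bucket
    print(modified_only)         # the record that could later be recoded precisely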

Now some do's, on content. Strive for content that matches the granularity of the primary creators of the data. If you want to successfully realize that drawing that I showed in one of the early slides, where you collect data once and then use it for multiple purposes, the coding system has to satisfy the people that are capturing the data.

If you say to those people, look, we really are going to need ICD-9 CM eventually, so why don't you just capture your data in ICD-9 CM -- that will be inadequate for the purpose. It would maybe be adequate if all they had to do was represent financial data, and sometimes I think that is all my hospital wants me to do. But we have to capture it for patient care. That is the primary reason we are doing this.

An example of this would be the terminology used by the pharmacy knowledge base vendors, who are creating the actual codes that can go in a record and represent what data are being captured about the patient. That is of primary importance. You can then derive the other things, the aggregations that you need, from those.

If ICD-9 CM for instance wants to say, this year we are going to count these as things that we are interested in and all the other things will go in the other category, they can change that by changing the mapping from year to year. But the primary collection of the data has to be of the granularity of the people that need to capture the data and need to use it, primarily.

It is important then to have a feedback mechanism that includes both the creators of the terminology and the users of the terminology, so that you are not simply saying, these are the things we want to create, but the people out in the field may not know how to collect those data. Or they may want to be collecting something that the people doing the aggregation and the finances and the statistics are unaware of. So the feedback mechanism has to include everybody, to make sure that what gets added is going to be usable and useful in the primary setting of patient care.

Finally, there needs to be development of a timely concept oriented update process. That is a loaded statement. Both timely and concept oriented are important -- timely because clinical medicine is changing. We add new drugs to our formulary every day, we need new codes for them every day, new lab tests every month, new diseases. As we start to accelerate our understanding of the human genome, the terminology that is going to be associated with that is going to explode. That is going to have to happen in more or less real time.

In my institution, I update my terminology on at least a weekly basis and sometimes on a daily basis, to keep up with the new terms that are in there. If I don't do that, my systems will fail. So if we are going to have standards for this, they have to match that.

The concept oriented part refers to the notion that when you do an update, it is no longer adequate to say, this term is no longer used, this is the new term. You have to have more information about what is happening with these terms. Why is this term not used anymore? Was it incorrect? Has it been subsumed by another term? So the old terms now -- maybe there used to be two terms that mean the same thing, so don't use that one anymore, just use this one. When you find the old one, you know that you can map it to this new one.

If you change the name of something, say why you are changing the name of it, so we can say, you changed the name and you also changed the meaning. That is not allowed. We need to be able to critique these updates by looking at them from the concept oriented viewpoint.
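A sketch of what a concept-oriented update record might carry, with invented codes and field names: each change states what kind of change it is and why, so downstream systems can map old codes forward automatically and spot an illegal change such as a rename that quietly shifts meaning:

    # Hypothetical update log entries accompanying a terminology release.
    updates = [
        {"code": "C0482", "action": "retired", "reason": "duplicate concept",
         "use_instead": "C0117"},                           # two terms meant the same thing
        {"code": "C0993", "action": "renamed", "reason": "spelling corrected",
         "old_name": "hemorrage", "new_name": "hemorrhage"},  # meaning unchanged
        {"code": "C1204", "action": "added", "name": "newly approved drug",
         "parents": ["antibiotic"]},
    ]

    def forward_map(code):
        # Follow "use_instead" pointers so data coded with old terms can still
        # be aggregated with data coded against the current release.
        for u in updates:
            if u["code"] == code and "use_instead" in u:
                return forward_map(u["use_instead"])
        return code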

Some lessons to learn. Lessons from ICD. Everything that ICD does, don't do it. I use that as my example in my classes all the time. Maybe it will get HCFA mad at me, but what can they do, make me document more? It is a strict hierarchy, it is a number coding system, it is just -- and they change their codes every day, they don't tell us how they update them, why they do the updates. Everything they do messes up those of us who are trying to capture data and use it for the primary purposes of patient care.

SNOMED has learned a lot of lessons over the last three or four years, and has started to evolve into SNOMED-RT. I assume that -- Keith, are you going to talk about SNOMED-RT here? Good. So you will hear from them some of the changes that are being made in SNOMED-RT to, for instance, get rid of NEC, to adopt a multiple hierarchy and make it explicit in the terminology, that sort of thing. So lots of lessons to be learned from SNOMED.

Lessons to be learned from LOINC. Clem can give you those lessons better than I. The lessons from LOINC are that, in order to create this terminology, they did two things that I think were unique.

The first was, they got the people that actually created the terminologies and used the terminologies into the same room and said, let's talk about what we have in common, let's try to understand what it is that we all do, and let's come up with a terminology that covers everything that we do in our laboratory systems. If we do that, then we will have something that is going to be really usable. We will capture data at the point of care, and it will then be usable for other purposes from there.

The second thing that they did is, they created a knowledge model. I don't think they thought they were creating a knowledge model at the time. What they thought they were creating was a fully specified name. They said, we want to have really good names for these things.

So let's say, the first part is going to be the thing that the test measures, and the second part is going to be the thing that it measures it in, like serum or urine or something, and the third part is going to be, what period of time, is it a point measurement or a timed measurement, is it quantitative or qualitative. They felt that by creating these expanded names, they would be able to create unique names that everybody would recognize and understand.

That was a great idea from the standpoint of just saying, look, we don't have a standard, we just have a standard naming system, a nomenclature. If I give this to people, they will understand what it is. As they started to enumerate these things they came up with codes for them, maybe unique identifiers.

They said, everybody agrees that there is this mean cell hemoglobin concentration. Why don't we put a code on that? Then we can ship the code around. Meanwhile, if we don't have a code for something, we can still use the fully specified name.

What they were doing though is, they were creating a knowledge model for representing the laboratory terminology, which will allow them ultimately to do some very interesting things, like look for redundancy in the terminology, look for -- do multiple classifications of data, aggregation of data, and so on. They created their terminology by looking at what was the knowledge, what were the concepts they were actually trying to model.
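A sketch of the multi-part name idea Jim describes, with the axis values invented for illustration (the actual LOINC fully specified name is richer, covering roughly component, property, time aspect, system, scale and, where needed, method):

    # Hypothetical fully specified name built from the axes Jim mentions.
    test = {
        "component": "Mean corpuscular hemoglobin concentration",  # what is measured
        "system": "Red blood cells",                               # what it is measured in
        "time": "Point in time",                                   # point vs. timed
        "scale": "Quantitative",                                   # quantitative vs. qualitative
    }

    fully_specified_name = ":".join(
        test[axis] for axis in ("component", "system", "time", "scale"))
    print(fully_specified_name)
    # An opaque code can then be attached to this single meaning, while the long
    # name keeps that meaning human-readable and unambiguous across institutions.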

I think that's it. That's all I had.

MR. BLAIR: Jim, thank you. I'd like to bring up Dr. Chris Chute next. Then after Chris, please save whatever questions you might have had from Jim's session, and we should have about 10 or 15 minutes for questions to both Jim and Chris.

DR. CHUTE: Thank you, Jeff. I'm just going to try valiantly to switch little boxes here.

Simon Cohn, my colleague in many activities and on this committee, asked that I address in the context of these introductory remarks what the heck we have been doing in CPRI the past few years, since many of the activities that have surrounded terminology development have had a forum among others that has included the CPRI activities.

I want to begin with some reminders and motivations, and three or four slides of why anybody would care about this sort of thing. This is a quote which basically says, those with more detailed, reliable and comparable information for cost and outcome studies, so on and so forth, are going to win in the marketplace. There is a lot of drivel in between, but you get the idea.

The principle is, this is not a pedantic or academic activity. This is a significantly important activity to the business of health care. Any failure to recognize the crucial importance of this as a business characteristic in health care will be to the detriment of those that fail to recognize it.

This of course is my famous -- I think you have actually seen this one. This is an old slide, where we begin with the notion of patient information on one side of the circle, with a big sweep through clinical information, observational data, to the notion of medical knowledge, that medical knowledge continues to sweep around through guidelines and expert systems to continuously improve patient care. So this grand circle, where better patient care yields more knowledge, more knowledge yields better patient care.

Then of course, what holds this great circle together, what is the hub around which this rotates. Many of you know, because you have seen this slide 50,000 times before. But at the center of course is the notion of terminology. It is the substance which describes patient information. It is the substance which describes medical knowledge, and it is the glue which can link knowledge back to patient care, and from which knowledge can be derived. Not a new idea.

So to summarize this motivation very quickly, terminology is a crucial requirement without which health data are not comparable and health systems cannot meaningfully or efficiently or effectively interchange information. The secondary uses that we are all coming to recognize as more crucially important, of quality and the like, are not practical or possible, and decision support linkage is not efficient or effective absent a commonality to link knowledge and information.

The last slide in this little series. People thought of this before. My friend Bill Farr in 1839 said, look, unless we have a standard way of representing health information, we won't be able to engage in this as a practical science, because analogous to the physical sciences having weights and measures, the nomenclature is to health care a crucial and fundamental metric that is necessary.

This is just to review. There are three papers listed on this slide. They include the content coverage paper, done by the CPRI work group back in '96, the phase two evaluation published in '97, and the framework for comprehensive health terminology systems, published in '98.

The notion here is that the CPRI working group has been addressing these problems for some period of time, and borrowed heavily from the work of others. Half of Jim's ideas were in the framework. It is not as though we invented them. But nevertheless, it was a coalescence of something in the field.

Among the conclusions of the study published in '96, although the results were in final form back in '94, was that, surprise, most coding systems lose information, and some remarkably lose more than others.

The notion that ICD, even augmented with CPT, loses over half of the clinical detail was news to some parties. But the consequence of these findings is that misclassification of the information that one would seek to study, or bias in studies that are premised exclusively on administrative data, is highly likely, along with loss of information in the ways that Jim described.

The framework that was published last December was very analogous to some of Jim's desiderata, where the characteristics of terminology, structure of terminology, maintenance details of terminology and administrative desiderata were more or less outlined at a relatively high level. It reflects again the thinking of the large community, and was done jointly by the CPRI working group along with the ANSI HISB working group. Membership on that working group included many people sitting around the table here and many people in the audience. So it was indeed the work of many.

Back in November of 1996, CPRI conducted a national summit, not focused on terminology; it was on health care as such. But one of the conclusions from that summit was that a national conference should be convened, devoted to the topic of terminology, at which health care providers, CPR system vendors, payers, government and terminology developers could meet in a common forum to address the concerns that I think are becoming increasingly well recognized.

That meeting was one of a long series. There was a national conference in November of 1997 that emerged from that recognition. There was the joint conference on lexical solutions for the GCPR, cohosted by HL-7, which occurred about a year later, in August of '98. Then there was the second terminology conference establishing the consensus lessons from experience, which occurred several weeks ago in Tyson's Corner.

The materials that have emerged from this series of CPRI activities are now consolidated to a single website which points to everything you would ever want to know. I know this because I wrote the darn web page yesterday; it is now out there.

So every slide presentation that occurred in these fora is publicly available. Substantial summaries. I acknowledge Patricia Gibbons at Mayo for really doing a superb job to summarize and generate synopses. Whether 50 pages constitutes a synopsis or a treatise we could argue, but nevertheless, they are comprehensive summaries of these meetings and are publicly available. The references to the work that I have alluded to in this presentation are also on that particular web page with pointers to all the other activities.

For those of you that can't see the slide, it is www.cpri.org/terminology.

The first conference in '97 had as its goals industry-defined terminology requirements, agreement upon a common framework for progress, a prioritization of requirements and the intention to generate knowledgeable input for national debates.

In fact, I followed, as I tend to do, Jim at the NCVHS meeting several weeks later and reported on some of the findings of that particular conference.

Among the agreed-upon issues from that was a tentative definition of what constitutes a clinical terminology. Essentially, it talks about standardized terms and their synonyms with a clinical granularity, with the notion that these terms can map to higher level classification systems.

This dichotomy between clinically granular information and higher level summary classification systems was also made explicit, and I think many a hatchet was buried at that first meeting. Prior to that, there had been a lot of fuss and hullabaloo about which was better, detailed formalized systematic classification systems or highly granular, specific, acyclic, multiple hierarchical clinical representation systems.

The answer to the question is both. Both are necessary, both are good for you. They serve different purposes, and they are fundamentally incompatible. It is perfectly reasonable to have detailed clinical information mapped seamlessly to broader level classifications. Indeed, some of the desiderata that have been written, including those that emerged from our own framework, may have been somewhat too focused on a one size fits all mentality. The desiderata that pertain to a higher level classification may not be identical to the desiderata that pertain to a detailed nomenclature. One has to be careful about ensuring that one is targeting concerns appropriately, when in fact the functional characteristics of these systems are intrinsically different.

But the recognition that they exist along a continuum I think was widely accepted at the conference. Also, a notion that parallels many of the issues that Jim raised is this idea of an entry level terminology that can in turn be an entry point to an underlying reference terminology. Again, the properties of what a human being actually has to interact with versus the highly technical and computer science oriented properties of what constitutes a reference terminology may and probably should be fundamentally different.

Similarly, the ideas or concepts that emerge are stored within a reference terminology and can spill out into administrative terminologies du jour, which this week as best I can tell are ICD-9 CM and CPT and ICD-9 CM procedure codes. But clearly, as we evolve through ICD-10 CM or ICD-10 PCS or CPT-5 or whatever else may be emerging, the same reference terminology could contain an immutable, unchanging set of the clinical concepts we seek to preserve, with their mapping into the administrative systems du jour, as I have said.

This is Simon's favorite slide, although I have cleaned it up, I swear. The notion at the bottom, Jeff, is a big red block of nomenclature, and emerging from it are different colored pipes that change hue from this pale red into a color of their own substance. The idea is that if you have an underpinning clinical representation of patient events, one should be able to derive from that a higher level classification system that would characterize patient data in the way that the clinical classification system is intended.

For example, ICD would have a different characteristic and substance and content from a procedure code, or from an image transfer code or from messaging tables and content. The idea of mapping from one system to another then, instead of having awkward tables that try to provide incongruous mapping, which is almost impossible, one would go back from these detailed or highly evolved differentiated classification systems to their clinical root and remap at the level of the underlying clinical representations to find an analogous representation in a classification of a different color, simplistically.
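A sketch of that remapping idea, using invented codes and tables: rather than a direct crosswalk between two classifications, each classification keeps its own mapping back to the reference concepts, and a translation goes from classification A, to the reference concept, out to classification B:

    # Each classification keeps its own mapping to and from the reference concepts.
    icd_like_to_reference = {"481": ["C-PNEUMONIA-PNEUMOCOCCAL"]}
    other_classification_from_reference = {"C-PNEUMONIA-PNEUMOCOCCAL": "RESP-INF-02"}

    def translate(icd_like_code):
        # Go back to the clinical root, then out into the other classification.
        concepts = icd_like_to_reference.get(icd_like_code, [])
        return {other_classification_from_reference[c]
                for c in concepts if c in other_classification_from_reference}

    print(translate("481"))   # {'RESP-INF-02'}

The design point is that only the classification-to-reference mappings need to be maintained; pairwise crosswalk tables between every two classifications fall out of them.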

The final point that emerged from that first conference was dealing with an administrative issue having to do with cost: the notion that recognition and fair recovery of the development and ongoing maintenance costs are justifiable and necessary, absent a mechanism for public funding. This could and perhaps should be revisited if the underlying funding mechanisms for clinical system maintenance might change, but the idea that all terminologies should be free and available, while highly desirable, is just impractical, given no other reasonable funding mechanism to support the very involved and elaborate process of creating and maintaining them.

I want to skip to the second conference, which occurred just last month, a few weeks ago, where the emphasis was on case studies and practical examples of what has worked in the interim. While one could posit this ideal of interlocking terminologies and how we should all work together, a fanciful and desirable notion, there was precious little detail on exactly how one would do that.

So the conference focused on examination of a number of case studies where bilateral coordination between a classification and a coding system or nomenclature was already in existence, and where there was experience to be drawn upon and learned from. The hope was that we could generalize these experiences as examples to proceed toward the goal that I think is common to all of us.

Among the recent coordinations that have occurred since the first terminology conference is the NLM facilitation of bilateral mapping between and among CPT, ICD and SNOMED. While arguably not fully mature and not ready for prime time, these do illustrate examples of activities and a working model wherein these linkages can be formalized and developed.

Similarly, there is a different kind of coordination between LOINC and SNOMED. I keep using the phrase, LOINC is included by reference, and the SNOMED people haven't completely challenged me. It is a bit more elaborate than that, I recognize. But the general principle is that SNOMED would not presume to develop laboratory codes, but would essentially use the existing LOINC codes. That is evidently spelled out in excruciating detail on page one of some document -- is it the LOINC manual? -- if you want to know the nitty gritty.

Similarly, at the level of tool developers or fundamental resources for terminology, there was the merger of Lexical Technology and Ontext, announced as a letter of intent. I don't think it is a done deal, but it represents consolidation in the field, where similar strengths are recognized and put together in a synergistic way.

MR. BLAIR: Chris, some of the committee members may not quite realize that you are talking now about tools and enablers to develop these coding systems with Ontext and Lexical Technologies. Maybe you could clarify that.

DR. CHUTE: That's right. The under title here is terminology component vendors. These are organizations that produce materials and resources that facilitate the development of terminologies. Since they were doing really the same thing, but using highly complementary techniques and technologies, their merger made logical sense.

Then finally, and perhaps most spectacularly, was the recent announcement of the merger between SNOMED, owned by the College of American Pathologists, and the UK Clinical Terms project that was formerly known as the Read codes, to generate a common hybrid product that would derive essentially from the content of both, further unifying the major nomenclatures, at least in the Western world, that address clinical patient information.

The centers of gravity have become more focused and clear since the last conference as well. Specifically, the ISO working group on terminologies has been created and I think has emerged as a viable forum to address many of the meta standards around terminology development. There are newly emerged public and private efforts that I just alluded to. The HL-7 vocabulary working group has emerged as the predominant forum for practical consideration of terminology issues. Their intention to register terminology systems was raised at the last meeting, which I think is a positive step towards again bringing some sort of order and orchestration to the process. Finally, NCVHS has provided a valuable activity.

The last meeting provided detailed updates on many of the major systems, including CPT-5, ICD-10 PCS, SNOMED-RT, LOINC and ICD-10 CM. The major points that emerged from that work were four.

One is the increasing recognition of the business relevance of terminology in health care. Terminologies are now recognized as enablers to quality improvement and outcomes, and they can enhance clinical efficacy and provide reliable linkages for decision support. This is no longer questioned and no longer argued, but is a done deal.

Furthermore, the second point that emerged is the obvious demonstrated coordination and emerging spirit of cooperation as evidenced by the case examples that were presented at the conference, and a positive consolidation in the field, which appeared to be escalating.

The third major point was not simply a recognition of this continuum from nomenclatures to classifications, but evidence of its proactive adoption by many of the classification developers and many of the nomenclature developers in a synergistic, cooperative and collaborative way. This is not a war, this is now a party.

Finally, there was the recognition of public funding, and the announcement by Betsy Humphreys of the experiment to support LOINC in its initial development, where public dollars are used for the development and maintenance as -- and this is my quote -- an infrastructure for the public good, with an aim toward reducing the end user costs. This is both welcome and encouraged.

We are not done in this community. The remaining tasks in my mind are to fully engage the payers and the providers, since the vendors of systems and information are not going to implement these things unless there is demand for the vendors to do so, which is not fully mature.

Similarly, I think the next conference, if we have one, should focus on the notion of where a terminology model -- the structure and representation of information as characterized within a terminology -- meets the health information systems model or the reference information model, if you will, and how we reconcile distinctions between those complementary, but nevertheless at present different, views of the information.

Then finally, we have to complete the transition from what I still think remains in the hearts and minds of many -- a perception that terminology is an esoteric interest that only a small community worries about, with no bearing or direct application in patient care -- to a recognition that these are really a crucial infrastructure for the public good and for effective, quality health care delivered efficiently.

I think that's it.

MR. BLAIR: That's good. We have a good 10 or 15 minutes for questions. I would encourage not only our committee members, but members of the audience as well: if there are any basic concepts or premises or terminologies used in this first educational session that you don't feel comfortable with, this is your opportunity to get these points clarified. So I encourage you to ask questions.

Bob Mayes, sitting to my right, is going to assist me as you raise your hands.

MR. MC DONALD: I just wanted to make sure as I heard the two talks whether you were both saying the same thing. You are both nodding.

There was a statement about the continuum between classifications and terminology, and another slide that had entry systems and classifications on one side. Were entry systems and classifications supposed to be equivalent?

DR. CHUTE: No, I think entry systems are closer to the nomenclature axis. There is nomenclature down in the lower left corner.

MR. MC DONALD: The question really is, is this blended issue -- are you saying the same things, about the blending? It sounds -- I kind of had a sense there might be a couple of axioms that didn't match. I wonder if you could comment. If you both say you are saying the same thing, then you are saying the same thing.

DR. CHUTE: In my professional career, I have always agreed with Jim. But the only nuance that might have emerged here is whether the desiderata that he has characterized and that I have more or less borrowed apply equivalently to the extreme end of the spectrum and classifications to the other end of the spectrum of nomenclatures.

Fundamentally, I am posing the question, does one size fit all. I think for many of the desiderata, the answer is probably yes. But I think it could bear examination on a point by point basis.

DR. CIMINO: I think that is a good point. And actually when you were saying that, it occurred to me that the laws of physics apply differently in the macro world than the sub-atomic world. I think that there may be differences.

One example is the notion of the hierarchy. A strict hierarchy may work at a higher level classification scheme, but at a lower level it may not work, or certainly doesn't work. So I agree that it certainly does bear examination, but otherwise I think we are pretty much saying the same thing in terms of -- there are other things I didn't go into for lack of time, but the notion of different levels of granularity -- the high level classification schemes, that is the place where we need things for aggregation for statistics, for reimbursement, for decision support, even, and then down at the low level we need it for trying to capture what is actually going on.

MR. MC DONALD: One slight followup. The classifications have started. The world is different than it would be if you were inventing the world by mathematics. But wouldn't you say the classifications would be defined in terms of the elements -- a set defined by its elements? They wouldn't stand alone? How else will you define a set?

DR. CHUTE: In an ideal world, that was the notion I tried to display graphically with a big red brick of nomenclatures essentially providing the underpinning and the atomic elements of classifications.

It ain't that way right now, but in a grand perfect future, one would recast existing classifications, so that they were in fact premised on a combinatorial or an explicit combination of reference terminology elements.

MR. KOLODNER: This is an excellent summary. I look forward to having it out on the Web where people can hear your explanation, and we can have the slides out there, so we can bring people up to speed.

One of the things that continues to be a source of confusion to people stepping into this is all the different things we use, the terms we use, nomenclature and classification codes. It might be useful to incorporate the definition of that. I came in a little late, so I don't know if that was in your first slides, Jim.

We are immersed in it, but I think we need to provide some clarity for people when they first come into it, to differentiate those.

DR. CHUTE: The CPRI summaries have appended to them a fairly large glossary which addresses to some extent those things.

MR. KOLODNER: Also, I know that we will have made it, Chris, when the $100 million five-year investment of the government in GCPR is included as one of the citations that you talked about, in terms of the activities. But that is our current estimate of what we plan to put into that effort.

DR. COHN: First of all, I want to thank both of our presenters. I think you have done wonderful jobs.

I also want to remind the committee that the committee has been part of a planning activity related to both terminology one and terminology two conferences. So I think we have obviously been participating in the effort.

Now, Chris, you had commented about the fact that we need to complete the transition from the esoteric view of terminology and esoteric interests to a crucial information structure for the public good. Certainly, on one hand one would observe that the public and society as a whole recognize diagnosis and procedures as not esoteric. They are fundamental to most things that happen in the outside world.

I would ask the question, is it terminology in general? Is it the fact that we need additional elements that are really next steps that need to be considered as standardized terminologies? What are the next steps on that?

DR. CHUTE: If you want to think of it graphically -- I am in a graphic mood this morning -- if you think of the continuum from classifications to nomenclatures, I agree. I think the public grasps clearly the notion that diagnoses and procedures are the substance of what is done in health care. But it is sliding that slider closer to the nomenclature end, which at present is considered the esoteric realm, so that there is an endorsement and a recognition that detailed, specific, granular representation of these events and diagnoses and concepts is as important if not more important than the larger categories that they see on their hospital bills.

DR. CIMINO: If I could just add to that, I think the other aspect is the notion of how disciplined the approach to terminology is. Most people may acknowledge the fact that there is a need for terminology, but really don't understand -- they may think that it is already done. They say, don't you have this? You have been doing medicine for hundreds of years, don't you have the terminology now?

There is the notion of how disciplined this terminology is. So they may be thinking about what Chris was referring to as the entry terminology, the collection of things that we say, and boiling that down in some disciplined way into the collection of things that we mean. That is something that escapes most people.

DR. COHN: I think really what I am mulling about is, this work group is intending next year to make recommendations to the government having to do with issues that merit government attention or other actions to move things forward. And certainly, there are various ways of approaching the issues of health care terminology.

One is to focus on the issue of an overall framework and getting the framework right, and then letting the pieces evolve in. The other would be to take more of a piecemeal approach and say, gee, there are certain areas that need some help or need some standardization, or need something to help with comparable patient medical record information.

Obviously, we can do both, but the question is, which is the better approach and what are your views on that?

MR. BLAIR: We have three other folks that are in the question queue here. There is someone standing at the microphone from the audience. There is Richard Ferrans and Michael Fitzmaurice.

DR. COHN: Can I ask them to answer the question first?

MR. BLAIR: Oh, I'm sorry.

DR. CIMINO: I wouldn't want to pick one method over the other. I think both are crucial. If you just do it top down, you end up with something that looks like ICD-9 CM, which is saying, here are the kinds of things that we have got to be able to ultimately say at the end of the day. If you do it from bottom up, you end up with all these disparate little pieces that are going to overlap and are not coordinated.

That is the real danger, and that makes things hard to adopt. You say, I'm using this thing, and it doesn't quite map to the thing that somebody else is using, so I can't use what they have got.

I think that both are crucial. I think there are well defined domains like laboratory tests, where the terminology doesn't overlap with anything else. You are not going to confuse that with the domain of drugs or anything else, although the terminologies used within LOINC definitely come from other domains. So the things that are measured by tests come from different domains. There are chemicals and organisms and cells and things like that, physical properties and whatever they are.

I think there needs to be a high level organization to figure out where all those pieces are going to be. But they can develop together and meet in the middle somewhere.

MS. GIANNINI: This is Melinna Giannini from Alternative Link. I have a hopefully helpful suggestion on separating when to do the mapping to the code sets.

I think that a clear delineation of maybe the life of a claim on the administrative side, and then the use of the claim on the clinical side -- if that was very well delineated as to who is using the information for what purpose, then I think that your comparability would be more important to everybody than it would be if you just tried to merge the information all at once. So that is just a suggestion as to how to keep from doing a lot of crossover work that may or may not be necessary in the short term.

On the administrative side, I think that I keep not hearing what happens there and what the information is used for by which people. I think that we tend to overlook the life cycle of a claim for administrative purposes between providers and payers, and who has to view the information and what they have to do with it.

MR. BLAIR: Any responses to that by Chris or Jim?

DR. CIMINO: The only thing I would say is, I heard something that I wasn't quite sure what you meant there. You said something about how you would use claims data for clinical purposes. I wouldn't use claims data for clinical purposes.

I think the point is to collect the clinical data, and then if you need to generate claims data, generate it from the clinical data, but don't try to go back and say, this patient had pneumonia not otherwise classified last year, I had better give him this antibiotic this year based on it.

There is an inherent loss of information when you go across into a different realm like that. I think the key thing to remember is what we are trying to do is not pay for patient care, we are trying to do patient care. So we have got to collect data for patient care primarily, and everything else has to be seen as an outgrowth of that.

If we say, yes, it is important, we have to pay because if we don't pay then the hospitals go under -- yes, it is important that all these pieces work. But the primary business that we are in, although it feels sometimes like the primary business I am in is filling out paperwork to get reimbursed, the primary business I am in is to take care of the patients, generate the data, and those data then can be used to generate the paperwork and so on and take care of everything else.

So it is a matter of perspective that has to be kept. When I was at the meeting two years ago, the person who preceded me -- she was from Mayo Clinic, I don't remember her name -- gave a beautiful talk about all the uses of data and all these other things. The one thing she left out was patient care, because she was all caught up in the reimbursement and the statistics and all these other things that are also important. It is just a matter of perspective, though.

MS. GIANNINI: I agree with you totally on what information needs to be attached to the patient. But I still believe strongly that until you understand who is using the information for what purpose, you are going to work too hard to get attachments and crossovers and crosswalks made to information where it perhaps is meaningful and where it is not.

A case in point. The payers -- I have encoded alternative medicine -- the payers have to understand for alternative medicine, from a code level, what is legal and what is not legal. That doesn't need to be the whole patient record. That needs to be, can they diagnose, can they prescribe, are they able to dispense. Those kinds of issues are the things that they need to know for a legal issue, and it has nothing to do with the patient.

So I just caution everybody that there are reasons to compress the information in some instances. When you are looking at patient care and the patient record, then you need the expanded definition.

DR. CIMINO: I don't think anybody would argue.

DR. FERRANS: I want to thank you both for your presentations.

I wanted to focus on a three-part question on three different dimensions of the problem. The first one has to do with vocabularies and the trends, in particular, Chris, that you pointed to, about consolidation and about mappings between them. If you wrote a followup to your paper on the clinical coverage of classifications a year or two from now, at the time when the committee is supposed to be making recommendations, where do you think we are going to be? Given the various trends, can you extrapolate forward how close we are going to be to having an 80 or 90 percent solution, realizing that there is going to be a significant amount of time needed to clean up overlap?

In that question, does it really matter whether it is a single terminology, or whether we have good coverage when you combine multiples that are mapped together?

As part of that, Jim mentioned organization and how to put together -- the importance of having some organization. What do you all see as the role of the National Library of Medicine in doing this? Obviously in the terminology arena, we would certainly look to them. How do you see their role evolving?

DR. CHUTE: Let me answer the first question, if I might. I see continued consolidation and precision and uniformity at the levels of higher level classifications and reference terminologies.

Frankly, I think there will continue to be -- and perhaps this is good -- variability and market product differentiation at the level of entry terminologies, what interfaces the clinicians actually see, language processing modules and whatnot, that would generate these underlying reference terminologies. I think that is an area that is too highly variable and dynamic to speculate much about at this time.

DR. CIMINO: I think I don't have a good vision for how these things can evolve. I think that work groups like CPRI have a real potential to develop consensus, and I think that is going to be as important as anything that can be dictated from anywhere else.

I think the National Library of Medicine has a lot of experience in collecting, producing, disseminating this information. I don't know that they would see it as their role to take on the task of deciding what the top level should be for trying to organize that.

Don Lindberg said repeatedly that what they have is books, they don't have patient records, and they don't see themselves as experts in the area of representation of clinical data. But I think that there is certainly coordination with the NLM that will be crucial for dissemination. I would see the NLS for instance evolving into something that is the distribution mechanism for whatever we end up using.

DR. FERRANS: The last part of the three-part question was about the funding of the maintenance. I guess this is a particularly important topic as recommendations go forward. How do you see the model with LOINC and others -- how would you envision a role for funding, or for public funding, of maintenance of vocabularies? What do you think is appropriate?

DR. CIMINO: I think it depends on how the pieces are being developed. In the LOINC it is mostly volunteer, so as far as I know, the funding that is being used is mostly for paying the travel for the people to get together in the same room. But those people are not paid for doing the work. They are volunteers or they are contributed by their companies.

That model works in some cases. There may be other places, for instance, that have been working with the drug knowledge vendors to see if there is some consensus terminology that we can come up with that will help all of them conduct their business. They all have a vested interest in being able to see their terminology being transferred to other systems and vice versa. So there may be different models for different pieces of the puzzle.

I think that where you really have to roll up your sleeves and pay content developers to do that, then you look at things like the SNOMED model, where there are license fees to support that activity, and actually hire people who are experts in microbiology, who figure out what is missing and what needs to be added and so on.

DR. CHUTE: Personally, I think one of the roles of government is to fund an infrastructure that individuals or small organizations would not independently choose to pay for, because these are common infrastructures for the public good.

I see terminology development -- which turns out, if you look under the hood, to be a relatively expensive and difficult process if it is done well, and if it is done scalably and maintainably -- as an ideal target for appropriate government support. I think the experiment with LOINC is one that will be looked at very carefully, and hopefully one from which we can learn a great deal.

It is my hope that it will prove to be a successful and prudent experiment that might recommend extensions of similar public resources to other HIPAA approved terminologies.

MR. BLAIR: We have one last question from Michael Fitzmaurice.

DR. FITZMAURICE: I too want to thank the presenters for their very cogent presentations. I learned an awful lot, and learned more than I need to know.

(Words lost) an agency where we try to improve quality measures and develop new measures. We need to be concerned about the efficiency with which we can get information for those quality measures. If you need to describe clinical performance measures such as are found in HEDIS, to show business they are getting value for the dollar and to guide providers to a better quality of care, the question I want to ask you is, where is the starting point for new, useful measures of quality of care, to be able to pull them out efficiently from the documentation of medical care?

A couple of examples would be, did a pregnant woman see a physician in the first trimester? Is this a coding problem? Or did a young child with asthma receive the proper medication to avert a hospitalization? Is this a matter of classification, a matter of terminology? Is it a matter of coding systems with an access for clinical performance measures?

How would you approach this from the terminology and the nomenclature and the coding system?

DR. CHUTE: I actually have strong feelings on this one, Mike, so hold me back.

It gets very deeply into the issue of where an information model about the health record meets terminology. For example, I think coding which trimester one happens to be in into a terminology system is a very misguided notion. I would far rather have last menstrual period and date of visit recorded in the information model, so that one could compute whatever appropriate time interval was pertinent to the quality metric, and then have a reference terminology for coding fundamental clinical concepts.
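
A minimal sketch of the kind of computation Dr. Chute is describing -- deriving the trimester from two recorded data elements rather than coding it as a term -- might look like the following. The field names and week cutoffs here are illustrative assumptions, not anything specified in the testimony.

    from datetime import date

    def trimester(last_menstrual_period: date, visit_date: date) -> int:
        """Derive the trimester from two recorded facts instead of a coded term.

        The week boundaries (13 and 27 completed weeks) are illustrative
        assumptions; a real quality measure would cite its own definition.
        """
        weeks = (visit_date - last_menstrual_period).days // 7
        if weeks < 0:
            raise ValueError("visit date precedes last menstrual period")
        if weeks < 13:
            return 1
        if weeks < 27:
            return 2
        return 3

    # A visit ten weeks after the last menstrual period falls in the first
    # trimester, so the prenatal-care measure can be computed, not coded.
    print(trimester(date(1999, 1, 4), date(1999, 3, 15)))  # -> 1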

It requires, however, a harmonious synthesis of the information model for the patient record with the information structures of terminology. That is an activity which has yet to take place. If government were to encourage, fund or support conferences -- not that I would ever raise such a notion -- to address or improve these activities, I think that would be fertile ground.

DR. CIMINO: I think with your example of asthma -- is the child on appropriate asthma medication -- one way to record that in the record is to ask somebody, is the patient on appropriate asthma medication, and create a concept of appropriate medication. That could be a concept, right?

The problem with that of course is that what it means changes over time. This year it is one thing, the next year it is another. This points to why it is crucial to put in the record the level of terminology that the clinicians are dealing with. So they would record what medication the patient was on, and then in whatever year you decide to do your survey, you decide which standards to apply, and you create an ad hoc classification, saying, I am interested in these medications; was the patient on any of these during this period? You can always go back and do that from the primary record, because the data are still there.
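
A minimal sketch of that approach -- record the actual medication, then apply an ad hoc classification defined in the year of the survey -- with the record shapes, drug names and class contents all being hypothetical:

    from datetime import date

    # Hypothetical medication records as they might sit in a patient record:
    # each entry names the actual drug, not a judgment such as "appropriate".
    medication_records = [
        {"patient": "P001", "drug": "albuterol inhaler", "start": date(1999, 2, 1), "stop": date(1999, 6, 1)},
        {"patient": "P002", "drug": "amoxicillin", "start": date(1999, 3, 1), "stop": date(1999, 3, 10)},
    ]

    # An ad hoc classification defined at survey time; next year the set may
    # change, but the primary records above do not have to.
    APPROPRIATE_ASTHMA_MEDS_1999 = {"albuterol inhaler", "inhaled corticosteroid"}

    def on_any(patient, drug_class, period_start, period_end):
        """Was the patient on any drug in the class at some point in the period?"""
        return any(
            rec["patient"] == patient
            and rec["drug"] in drug_class
            and rec["start"] <= period_end
            and rec["stop"] >= period_start
            for rec in medication_records
        )

    print(on_any("P001", APPROPRIATE_ASTHMA_MEDS_1999, date(1999, 1, 1), date(1999, 12, 31)))  # True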

DR. FITZMAURICE: So what I hear you saying is that instead of creating codes for it that a physician would put in the record, use fundamental terminology to let you derive these quality measures.

DR. CIMINO: Yes.

DR. FITZMAURICE: Have the physicians record them in the record -- that is, encourage that at conferences and otherwise encourage it -- so that you can draw the information out of the record, rather than having an abstractor go back to the paper record or having a physician put in a special code; put the fundamental terminology into the record.

DR. CHUTE: But distinguish what belongs in the terminology and what belongs as other attributes of recorded information about the patient. I think the example of trimester of pregnancy is a classic one. That should not derive from the terminology, that should derive from information in the record, such as dates of last menstrual period and date of visit.

MR. BLAIR: We are about to hit a break here. We are running about five minutes late. I'm going to have us reconvene about five minutes late at 10:30, but we will convene promptly at 10:30. Thank you, everybody.

(Brief recess.)

MR. BLAIR: We have Dr. Keith Campbell with us for the next session of this portion of our testimony over these next two days, which is to help us get an educational foundation on medical terminologies. Keith, please proceed.

DR. CAMPBELL: Thank you. First, I just wanted to mention that my responsibility within Kaiser Permanente is to try and get Kaiser to have essentially nationally comparable data across the entire Kaiser enterprise. In order to do that of course, we have to have comparable terminologies across our enterprise. We are working on standardizing those terminologies.

We have been working on standardizing these for a number of years. I think we have some lessons that I would like to bring to the table today. So the aim of my talk is to bring to you an enterprise view of terminology that is more than just an end user view. It is the view of the large enterprise, which is a consumer of data but also a consumer of information systems in an organization that provides care.

But before I go into specific detail on some of the experiences that we have had over the years in working with terminology, I wanted to share with you Kaiser's vision, and then translate that vision into how it relates to information systems and how that then translates down into terminology.

Some of the people that we have worked with extensively in developing clinical information systems are our requirements people. They try to emphasize to us that having traceability of these requirements is very important.

If for example we say here is an esoteric thing regarding terminology and it is very important, unless we can trace this esoteric requirement back up to the business requirements, as to why we are providing health care in the first place, often we get significant challenges when we ask for budgets to implement esoteric things that haven't been traced to business requirements.

So here is actually a vision statement that came out last week. It was put out by the chief operating officer of the health plan as well as the CEO of the Permanente Federation, which said that KP has strategically committed to differentiate itself in the health care marketplace on the basis of its ability to deliver high quality care and service.

A second part of that statement was that the use of the electronic medical record and other clinical information system tools are critical enablers on the path to significant improvements in health care outcomes and quality of service.

To translate this vision statement for the enterprise into clinical information system goals, I put three of them up here. One is that we have to have nationally interoperable information systems that integrate applications from multiple vendors and from multiple application areas.

Kaiser Permanente is a national organization with different regions, and it has historically grown up in a federated model as opposed to a single corporate entity, so we have laboratory information systems from one vendor in some regions and laboratory information systems from different vendors in other regions. Similarly for pharmacy information systems, we have multiple vendors with multiple terminologies inside of them. We are trying to integrate these all together into a comprehensive health record.

In order to do that, we recognize that interoperable information systems must be founded on robust terminologies. We have even learned that it is actually more important to have robust terminologies than it is to have a single application implemented across the enterprise.

At one point, we had tried to implement common insurance systems or common business systems across Kaiser Permanente, and people thought that would solve the problem of national comparability of data -- that if we just implemented the same system in all our different regions, we would be able to pool our data.

It actually turns out that was not the case at all, because when people tried to use these systems in specific regions, they often had to make their own enhancements to the terminology to make the system operate in a way that met the local needs. When that happened, the terminology was not comparable, and we could not pool data from two different regions whose terminologies were different.

So a number of years ago, in the early 90s, Kaiser decided to make a strategic investment in terminologies. These terminologies must fit within our strategic CIS needs.

Let me just describe to you a summary of what those imperatives are. The first one was that the terminologies needed to be medical record vendor neutral. There are a couple of reasons for that. One is that we are in a period of innovation in the medical record marketplace, where people are trying to differentiate themselves based on unique value.

We want to allow that and take advantage of that, but we need to have common reference terminologies that will be acceptable to multiple vendors. A number of years ago within Kaiser, we actually had four significant information systems efforts. We had one going on here in the Mid-Atlantic states, one going on in the Northwest region with EPIC, one in Colorado with IBM, and another one in Southern California with Oceana, and there was a fifth one in California with Oasis. We had to get all of these vendors to try and agree that they would make their systems interoperate with the terminology, but yet we had these proprietary battles, saying, if we make ours work with this system, how do we know that our proprietary information isn't going to go from our work to our competitors' work through this terminology.

That actually was one of the reasons why we founded this notion of separating out the interface portion of the terminology from the reference terminology. It was just a practical matter. We had to do it in order to make our terminology efforts vendor neutral.

Another requirement is that we feel that the terminologies have to have scientific validity. So for example, for laboratory terminology, I think the effort LOINC has put forward is a good demonstration of trying to have naming be scientifically valid, where they explicitly state things like what is the analyte being named and what is the substance measured, and where the people participating in this are actually the users of the terminology -- the laboratory technicians as well as the pathologists in charge of the laboratory -- trying to make sure that what is being represented in the terminology actually describes what is going on.

Next, the terminologies have to be well maintained, because if the terminologies aren't well maintained, we get creep in the underlying terminology, because people have to make local enhancements, and that repeats our experience with the business systems. If they are not well maintained and available with daily, weekly or monthly updates, you have drift in the terminology and you no longer have the ability to have comparable data. Then you get idiosyncratic versions of the terminology that may be specific to a particular vendor, and you no longer have that vendor neutrality.

Another issue that we actually think is very important -- I know the licensure issue has been brought up a few times so far, and I'm sure it will be brought up again in the remainder of this conference -- is that we believe the terminology itself must be self sustaining by some mechanism.

I'm not trying to adjudicate what I think that mechanism should be, because I think different types of terminologies can have different mechanisms for self sustaining. But consider our position: we have actually seen EMR vendors come and go, and often we have to pick up the pieces when an EMR vendor goes. When we have developed a corporate dependency upon their software, we often have to internalize that software, develop it ourselves, and keep it up to date.

The data that is in our systems actually has a longer life than any one system, even if we stayed with the same vendor. They are going to be upgrading their product, from this version to the next version and so on. We need to make sure that our investment in data and resources is preserved as we implement new systems and as time goes on.

Finally, we have to have scalable infrastructure and process control to manage these terminologies. Again, to prevent the problem of having to make local enhancements in a way that violates the semantics of the overlying terminology to meet local needs, we need a responsive organization that we can say to, we need terminology in this area, and they are able to get it back quickly. They are able to do that on a large scale, handling requests coming in from different regions across the country, maintaining that terminology, getting it out quickly and synchronizing it. This is what we feel is strategically required to meet our enterprise needs for terminology.

Today's situation is that few terminology systems meet our strategic imperatives. I would argue that the only ones that do are the ones that we work with very closely to try and help them meet these strategic imperatives.

One of the things to note is that we have invested significant resources towards meeting these, and have tried to develop partnerships with terminology organizations to try and meet these requirements. The challenge that we are faced with right now is an education and collaboration challenge: we have a lot of experience in developing terminologies, and we have an enterprise need for these terminologies to be high quality, robust and well maintained. We are actively trying to work with terminology vendors and EMR vendors to meet our clinical information system imperatives.

But why are there so few terminologies that meet our needs today? I would say that the number one reason is failure to invest in robust infrastructure. The support for infrastructure must be part of a self sustaining revenue model for the terminology, and a scalable infrastructure requires significant resources.

You may be able to take a terminology and work to a certain point where you have a database on a personal computer that doesn't have distributed access and doesn't support distributed development, but it will not scale. You are going to reach a certain point where the abilities of one single individual to manage that terminology are going to be eclipsed by the demands that are being placed on that terminology as people try to put it into practice. So then you have to start developing distributed systems, dedicated database administrators and replication of the database across multiple sites, and the ability to handle large transaction loads and so on. Developing that infrastructure is non-trivial.

Another challenge is that it is difficult to form collaborations. Some of the issues are that collaborations are technically hard to manage, because you have different organizations with different priorities trying to work together on something that arguably is the same, but in many senses is different.

This for example is one of the challenges we had in trying to work with different EMR vendors to try and move them towards working with a common terminology. They would say, we need to differentiate ourselves -- all of the issues about wanting to make sure the proprietary aspects of their applications were not compromised as part of working within a collaboration. How do you make that happen?

There is also a problem of, not invented here. Many people say, I have looked at this system over here, and it may have a few concepts that are a little bit better than the system that you are working with, so I'd rather work with this system over here rather than this one.

The message I would have on this, which is one that I tried to give very strongly at the previous GCPR conference on lexicon solutions, is that the process and the scalability are more important than the starting point. You have to have groups that are willing to work together towards creating process and scalable solutions; that is the foundation upon which your terminology problems will be solved, not who has the better content today. In point of fact, those efforts could be quickly eclipsed by a group that had better resources, better infrastructure and better scalability, and could be sustained in a way that would meet our enterprise needs, as opposed to having today's terminology solution work for five years.

We invest several hundred million dollars in the enterprise developing information systems that make use of that terminology, and then it fails -- and it fails because the revenue model for the terminology was not based on the terminology itself, but was a loss leader for a particular vendor, saying, we will put this terminology out for free for people to use, and we are going to make our money on knowledge bases that make use of it. If their knowledge base system fails to make the revenue they need to sustain the other part of it, then we are really stuck with a $100 white elephant.

Those types of things are just untenable to us as a large enterprise.

Finally, the last challenge is getting sustained organizational commitment. I think one of the reasons for this is that the challenges, once you open up the hood and look at the intricacies of trying to support terminology development on a large scale, are not intuitively obvious. People get into the collaboration thinking this is going to be easy, and within six months we are going to have this problem licked, because we are working together.

Within six months, you might have had some initial agreements about process and scale. Then, after that six months has gone by and the naive initial view of what was going to occur within that six months has not been realized, trying to sustain organizational commitment beyond that is often a challenge.

One of the things that I would like to do is give two specific examples of systems that could benefit from some of the infrastructure that we have been talking about with regard to distributed, scalable terminology solutions.

One that is particularly cogent, because this is something that this committee has endorsed for claims attachments, is the NDC codes. I just wanted to first talk about it with regard to our clinical imperatives, or our imperatives with regards to terminology systems.

First of all, are the NDC codes EMR vendor neutral? Yes, they are. They are a government mandated system, where manufacturers of drug products are required to submit their codes to a process for managing them. It doesn't give proprietary advantage to any particular company who might make use of them.

Are they scientifically valid? I would argue, sort of. They are scientifically valid for the specific purpose for which they were originally intended, but many people take systems and try and make them work for more than what they were originally intended for.

There are many ways that the scientific validity of NDC codes could be improved. That is why I hedged a little bit and said they were sort of scientifically valid. We could improve the validity of them. But are they well maintained? This is one of our criteria. The answer is, no, they are not. There is not one organization that says we will take accountability for publishing on a daily basis a quality reviewed version of the NDC codes that we will make available to everybody, so that then it becomes a credible standard for use in our information systems.

Is it self sustaining? Here again, I feel that there are different models for self sustaining revenue. I think in this case, it is a self sustaining model, in that it is a regulatory requirement: those people that are manufacturing drugs, as part of their manufacturing process, have to properly name those drugs. So the revenue from that regulation of drug manufacturing is the revenue that drives the NDC process itself.

Then our next question is, do we have scalable infrastructure and process control? I think the answer there is clearly no. We have a specific example of that, in that the Food and Drug Administration, the Health Care Financing Administration and the Veterans Administration all maintain their own listings of the NDC codes -- in triplicate -- and these listings are different. If you take the VA version and HCFA's version and compare them, they are not going to be identical.

I think that there are solutions to that. There have been different solutions proposed. One was to develop something completely new. I believe developing a completely new solution is a high cost, high risk proposal for solving the problem related to NDC codes and pharmacy.

One of the issues is that there is new organizational accountability that has to be developed. Also, depending upon what solution is picked, it may provide an unfair proprietary advantage. There are different companies already out there that have developed a business model based on providing quality checked NDC codes to businesses. If you were to pick one or the other as the foundation for the solution, then you are creating a competitive advantage for one.

I would propose that a better solution would be to improve the existing NDC process. I think that this would be a moderate cost way of going forward with low risk. You work to refine existing organizational accountability as opposed to creating new organizational accountability, and incrementally add new functionality as part of this process to improve the scientific validity and solve some of the other imperatives we have regarding our terminology systems.

Here is just an example of how I view an improved NDC process. If you had a distributed development process with robust infrastructure, you could start out with an NDC version that had all of the codes to date. Then the different manufacturers and the different repackers would submit their version of the code electronically to a central site, and it would go quickly through a quality control process: if the manufacturer had made a mistake in how they submitted it, it would go back to that manufacturer within 24 hours for them to do a rework and then resubmit it; otherwise it goes through the quality control process, is approved, and is published as part of the next version of the NDC codes. If this process was done in a scalable way, so that it would meet the needs of all of the repackers and all of the drug manufacturers that produce pharmaceuticals within the United States, then you are starting to get to a credible, scalable solution that would eliminate, for example, the triplication of effort between the Food and Drug Administration, the VA and HCFA.
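
A minimal sketch of that submit-and-rework loop follows; the quality checks, data shapes and names are illustrative assumptions, not any actual FDA, VA or HCFA process.

    from dataclasses import dataclass, field

    @dataclass
    class Submission:
        manufacturer: str
        ndc_code: str
        description: str

    @dataclass
    class NdcRegistry:
        published: dict = field(default_factory=dict)     # current published version
        rework_queue: list = field(default_factory=list)  # returned to the submitter within 24 hours

        def quality_check(self, s: Submission) -> list:
            """Illustrative checks only; a real process would validate much more."""
            problems = []
            if len(s.ndc_code.replace("-", "")) not in (10, 11):
                problems.append("NDC code is not 10 or 11 digits")
            if s.ndc_code in self.published:
                problems.append("code already assigned")
            if not s.description.strip():
                problems.append("missing product description")
            return problems

        def submit(self, s: Submission) -> str:
            problems = self.quality_check(s)
            if problems:
                self.rework_queue.append((s, problems))  # goes back to the manufacturer
                return "returned for rework: " + "; ".join(problems)
            self.published[s.ndc_code] = s               # appears in the next published version
            return "accepted for next version"

    registry = NdcRegistry()
    print(registry.submit(Submission("Acme Pharma", "12345-6789-01", "acmecillin 250 mg tablet")))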

But in order to make that happen, there has to be an implementation of infrastructure that currently doesn't exist today.

Another system that I think is interesting to look at is the LOINC codes. I think that they are a very successful example of what can be done. I think that they are EMR vendor neutral; I think that is one of the things in their favor. They do have high scientific validity -- the process that Jim talked about, where you get the users of the terminology and the laboratory systems people in the room together to try and figure out what to do, is a very important thing.

Are they well maintained? I think sort of, today. People have been making a very good effort. It is a volunteer effort, but it does not have dedicated infrastructure behind it to be able to scale to the level that I think it needs to for a large enterprise like Kaiser to depend upon it nationwide for its information needs.

Is it self sustaining? Today it is not. There is a recognition that it does need some funding. It did get some government money, but that government money is not a promise of continuation funding, or is it? The self sustaining model for LOINC is something that needs to be worked out.

Does it have scalable infrastructure and process control? I think the answer again is no.

Now, similarly to how we would work to improve the NDC codes, we could have an improved LOINC process that allows distributed development with robust infrastructure. Maybe, for example, the diagnostics vendors, the people that would create new tests -- new viral load tests -- are the ones that need to propose a name, just like the drug manufacturers and drug repackers are the ones that need to propose NDC codes.

You have a central infrastructure where a diagnostics vendor can submit a new test name that goes through a quality control process, where perhaps it is reviewed by two different individuals for scientific validity. If they agree, it goes through the quality control process on the back end and gets published in the next version. If not, it gets sent back to the diagnostics vendor, who can then clarify their request and send it back to the quality control process until it is published.
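
A minimal sketch of that routing rule, assuming for illustration that publication requires both independent reviewers to agree on scientific validity:

    def route_proposal(proposed_name: str, reviewer_votes: list) -> str:
        """Route a proposed test name based on two independent validity reviews."""
        if len(reviewer_votes) != 2:
            raise ValueError("expected exactly two independent reviews")
        if all(reviewer_votes):
            return proposed_name + " -> back-end quality control -> next published version"
        return proposed_name + " -> returned to diagnostics vendor for clarification"

    print(route_proposal("HIV-1 RNA viral load, plasma, PCR", [True, True]))
    print(route_proposal("new test, unspecified", [True, False]))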

This type of infrastructure is something that we have been working on for a while. If I can just distill the problem down a little bit, I think the problem is really one of facilitating collaborative development.

Now, collaborative development is complex. Anybody that has ever written a piece of software can tell you that the easiest way to write a piece of software is for one person to write it, but the problem is that often, systems are so large that you have no choice but to have more than one person work on them.

Operating systems are great examples -- trying to write a mainframe operating system or a UNIX. Those were things that had pluralistic input from many different individuals with many different skills. They developed a whole complex process around configuration control of source code, ways of managing versions of source code, branches of source code and releases of source code, and QA processes around software releases.

We need to have the same robust infrastructure around terminology development and releases that we have today for software. When I started working on this problem in 1992, there was no process or environment to support collaboration. I was fortunate enough to be around when a critical mass of individuals got together to create something called the Convergent Medical Terminology project.

The initial goals of that project were to develop and evaluate distributed development methodologies that are organizationally scalable, and by organizationally scalable, I mean as more and more developers are brought into the process, the infrastructure does not collapse on itself and make it so complicated that you can't continue to get work done.

Also, this architecture had to be developed on a distributed and scalable computer architecture, so that as the demands on the system increased and the number of people participating in it increased, we had to be able to increase the computing infrastructure so that it could be distributed and meet the needs of the end users.

The CMT project's original participants were Kaiser Permanente, Mayo Clinic, the College of American Pathologists and Stanford University. It was mainly funded internally, with some support from the National Library of Medicine and the Agency for Health Care Policy and Research, and we had some software support. Tools were developed for us by IBM, Lexical Technologies and Ontix.

Here, we had this basic framework of evolution. In a sense, you would have a version of a terminology. People would work independently on their own branches of the terminology to make local enhancements that they felt met their needs, and submit those changes, which would go through a merge and quality assurance process to create the new version.

I would argue that this is the same process that can be used for LOINC. It is the same process that can be used for NDC codes, and that the scalable infrastructure that we have shown can work in large systems could be brought to bear to solve some of these problems with regard to the other systems.

Over the years of our project, we have developed a publicly described methodology for it. There are mechanisms to generate local changes, mechanisms to collect local changes and identify conflicts, processes for evaluating and resolving these conflicts, and mechanisms to disseminate these local changes and global updates.
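
A deliberately simplified sketch of that cycle -- per-site branches merged into a candidate new version, with conflicting edits flagged for editorial resolution -- using made-up concept identifiers and definitions:

    def merge_versions(base: dict, branches: list):
        """Merge per-site terminology edits into a candidate new version.

        base: {concept_id: definition} for the current released version.
        branches: one {concept_id: new_definition} dict per site.
        Returns (new_version, conflicts); a conflict is any concept that two
        sites changed in different ways, which goes to an editorial step
        before the merged version is disseminated back to all sites.
        """
        proposed = {}
        for branch in branches:
            for concept_id, definition in branch.items():
                proposed.setdefault(concept_id, set()).add(definition)

        new_version, conflicts = dict(base), {}
        for concept_id, definitions in proposed.items():
            if len(definitions) == 1:
                new_version[concept_id] = next(iter(definitions))
            else:
                conflicts[concept_id] = definitions  # needs human resolution
        return new_version, conflicts

    # Two sites enhance the same release; the concept they both edited is flagged.
    base = {"C100": "cellulitis"}
    site_a = {"C100": "infection of skin and subcutaneous tissue", "C200": "lymphangitis"}
    site_b = {"C100": "infection of subcutaneous tissue only"}
    version, conflicts = merge_versions(base, [site_a, site_b])
    print(conflicts)  # {'C100': {...two competing definitions...}}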

One of the points I am wanting to make here is not that the distributed development methodology is somehow unique or proprietary to the three companies that participated with us in this project, but in fact, it could be the basis for further standards development, and different vendors could create their own solutions, their own editing environments, their own configuration environments that are compatible with this environment.

I do have a document here that is probably the most comprehensive view. It is several hundred pages of some of the background that we have had, which I would like to leave with you for entering into the record. I can give you a PDF version of that, if you wish to put it on the website. I am certainly not going to try and recite the document today.

One of the results of going through this distributed development process is that we were able to improve the scientific validity of the content that we were working on. Again, the foundation of our original work was SNOMED.

Here is just a simple example of one of the things that comes up. SNOMED had a term called cellulitis of the skin with lymphangitis. We had primary care physicians working on the terminology. They looked at this for awhile and they said, we think cellulitis of the skin with lymphangitis should be defined as an infection of the skin and subcutaneous tissues with lymphangitis, and it affects the skin.

Other people had a more anatomic pathologist's view, shall we say, of cellulitis, and they defined it differently. They said it is an infection of the skin and subcutaneous tissues. The associated topography is the subcutaneous tissue and the lymphatic vessel.

The real sticking point that they had was that the primary care docs felt that cellulitis was a feature that could be diagnosed on inspection of the skin: they would palpate, recognize induration and inflammation, and make a clinical diagnosis of cellulitis. The pathologists said, all of the features of cellulitis are visible below the dermis. So here you have two different perspectives trying to be brought together, so that we can reconcile that terminology to meet the needs of all of the users, not just a particular set of users.
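
The two competing views might be rendered schematically as follows. The attribute names and values are illustrative stand-ins, not actual SNOMED content; the point is only that reconciliation means comparing such definitions attribute by attribute.

    # Schematic rendering of the two perspectives described above.
    primary_care_view = {
        "concept": "cellulitis of the skin with lymphangitis",
        "is_a": "infection of the skin and subcutaneous tissues with lymphangitis",
        "finding_site": {"skin"},
    }

    pathologist_view = {
        "concept": "cellulitis of the skin with lymphangitis",
        "is_a": "infection of the skin and subcutaneous tissues",
        "finding_site": {"subcutaneous tissue", "lymphatic vessel"},
    }

    # The reconciliation step in a distributed process is exactly this kind of
    # attribute-by-attribute comparison, resolved by the editorial group.
    for key in primary_care_view:
        if primary_care_view[key] != pathologist_view[key]:
            print("disagreement on", key, ":", primary_care_view[key], "vs", pathologist_view[key])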

So I think in this case, the distributed development methodology -- the ability to work at scale, where we can have multiple developers participating in the process -- may not be the most efficient way to develop a terminology, but I think it is the most appropriate way to develop a terminology that is general purpose.

I think that is one of the things that Jim commented on with regard to LOINC, that one of the reasons that it worked was that it got all the people in the room together that had a stake in there -- the end users, the laboratory representatives, to try and come up with the right ways of representing it and going forward.

One of the advantages or initial benefits that LOINC had was that the domain they were working on was relatively small, on the order of thousands of concepts, whereas here we are working in an environment that has in excess of 100,000 and probably soon 200,000 concepts, which does require significant infrastructure to manage.

MR. BLAIR: Keith, for the purpose of keeping in time, I don't want to cut short what you are saying. Just a few more minutes.

DR. CAMPBELL: Sure. One of the other results was that we have scalable infrastructure and process control. This is just a diagram of some of the systems that we have implemented. We have a master site at Kaiser Permanente in Oakland, and we have another master site at the College of American Pathologists in Chicago. We are able to synchronize our databases on an hourly basis. In addition to those, the master sites can have other sites that replicate off of them -- one in Portland, one in Colorado, another one at Oregon Health Science University, and hopefully one soon at the National Health Service in the U.K. Also, the result is that the system is well maintained.

Just to summarize, I would like to say that I hope that the CMT project has demonstrated that collaborative distributed development is a viable option for trying to solve some of our contemporary terminology problems. The methodology is publicly described and commercially available, and a validated distributed development methodology can overcome some of these problems in the other systems, such as NDC and LOINC.

The recommendations I would make are that the government should work to facilitate collaborations, and that the government should try to collaborate with itself and with industry. I think the example of the NDC codes is particularly prominent: we should have a shared infrastructure so that HCFA, the VA and the FDA can share work on the same database for NDC codes. But we need to invest in infrastructure, and other government terminology efforts should meet uniform standards of data representation, configuration management and reusable tools and processes.

Just to remind you, as we implement these recommendations, we have to make sure that we meet these strategic imperatives of being vendor neutral, scientifically valid, well maintained, self sustaining and have scalable infrastructure and process control.

That concludes my remarks. Thank you.

Agenda Item: Overview of Terminologies and Issues

MR. BLAIR: Thank you very much, Keith. The next presenter is Mark Tuttle, Lexical Technologies. Mark, are you ready? Then right after that we'll have questions.

MR. TUTTLE: Hello out there in Internet land. I always wonder if anybody is out there listening. I guess we'll find out later.

Given that the three previous speakers are the all-star hit parade on terminology, I have decided to shift the focus to what we are going to do about this, and give my recommendations to the committee on where we should go with terminology in terms of facilitating the electronic medical record.

I am completely serious about wanting you to think about the way that airmail influenced air travel as a way that terminology might influence the electronic medical record.

We have got a couple of problems here. In the '20s, air travel was primitive, disorganized and chaotic. If you remember the Jimmy Stewart movie or Lindbergh biography or whatever else you read about, Charles Lindbergh spent a year flying the mail from St. Louis to Chicago. If the weather was bad or he had engine failure or had to land, he got on the train.

But nevertheless, the government paid regardless, whether there was one letter, whether there were 50 letters, whether he had to go on the train or whatever. If they had too many engine failures, his company was going to lose the contract.

Looking at the coming decade, we are still at the beginning: the same primitive, disorganized, chaotic situation, this time with the electronic medical record. This is why this committee is meeting and why we are here today.

A couple of notable exceptions. There are what are called EFC's, electronic filing cabinet versions of electronic medical records. Two of these happen to run at Harvard Beth Israel Hospital and Kaiser Northwest. I'll get to defining what I mean by electronic filing cabinet.

So in both cases, there are a few standards. Limited economies of scale. The public is poorly served.

Solutions. The government paid for airmail on the margin. In other words, the government didn't pay for all airlines in the United States in the '20s; it gave the airlines mail contracts that typically allowed them to stay in business. I am proposing that the government pay for electronic medical record results, with the emphasis on results, again on the margin. The public we hope got air travel, and we hope it gets electronic medical records. So I want to basically now talk about how and when such a thing might happen.

Where are we in 1999? Most electronic medical records that have ever been built have failed. You figure that most airlines that have ever been started have failed; maybe that's okay, maybe this is a learning process and this is what we have to go through.

Some electronic filing cabinets flourish, so let's define this again. The electronic filing cabinet to me -- I'm not a database person, remember -- is something where you can retrieve the information typically by a single key, like a patient identifier. You can put the stuff in and you can get it out, but it is a blob of stuff retrievable by the patient identifier. Within that blob, the information is typically arranged by date, in chronological order, and maybe by category, like lab, X-rays, notes, problem list, whatever.
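
A minimal sketch of such an electronic filing cabinet -- retrieval by one key, returning a chronologically ordered blob grouped only by category, with nothing a computer can compare across patients -- using assumed field names:

    from collections import defaultdict

    class ElectronicFilingCabinet:
        """Retrieval by a single key: a patient identifier returns a blob of entries."""

        def __init__(self):
            self._records = defaultdict(list)

        def file(self, patient_id, entry_date, category, text):
            # The content is free text; nothing here is comparable across patients.
            self._records[patient_id].append({"date": entry_date, "category": category, "text": text})

        def retrieve(self, patient_id):
            # The only organization is chronological order plus a category label.
            return sorted(self._records[patient_id], key=lambda e: (e["date"], e["category"]))

    efc = ElectronicFilingCabinet()
    efc.file("P001", "1999-05-01", "lab", "Na 141, K 4.2")
    efc.file("P001", "1999-04-30", "note", "Patient doing well.")
    print(efc.retrieve("P001"))  # an ordered blob; no way to ask which patients are similar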

Where these things flourish, physicians like them because they are available 24 hours a day, seven days a week. They can be accessible from home or anywhere in a hospital or an office, and they are embraced by users. But you all know the problems, which is again why we are here today: the data in these systems is not comparable. We can't ask the question computationally, are there any similarities between this patient and that patient, or what happened to a given patient last year versus this year. It is just not an answerable question.

Furthermore, what comparability does exist is not sustainable in the ways that you heard from Jim Cimino and Chris Chute this morning, and it is not scalable in the ways you just heard from Keith Campbell. In fact, these things are highly idiosyncratic. As Keith pointed out, often to make things work locally, they have to be idiosyncratic, so we end up with something that is not going to be used by anyone else anywhere else.

So we have few opportunities for economies of scale. Everything is expensive and difficult. The existing opportunities for economies of scale are not exploited. So for instance, if every physician in the United States had access to an electronic filing cabinet, we would be farther ahead than we are now. We would be talking about the engineering problems of semantic normalization of these things. As it is now, most physicians don't have 24 hour a day, seven day a week access from anywhere to electronic medical records, even if a computer can't understand them.

So my point is that the public is poorly served. So again, let's go back to air travel in the 1920s. Local efforts were sustained through airmail contracts. The U.S. government in the '20s couldn't go to a single company and say, we want to have airmail in the United States, or we want to communicate with airmail service around the world. It didn't exist. There were only local mom and pop airlines of varying sizes around the country, and the post office gave various of them airmail contracts based on some criteria.

Note that the government did not pay for the planes, the airports, the pilot training -- unless you consider military service as pilot training the government paid for -- or the passengers. If these airlines wanted to take passengers, that was okay, that was the airlines' business. But the point was that the U.S. government contracted to carry the leather bag with the lock on it with the onionskin letters in it.

Competition, standards and regulation were incremental and reactive. Even in just the decade of the '20s, if you had too many engine failures, you would lose your contract. Or you would lose it if someone came along and flew more often or more reliably or whatever it was.

Anyway, good things happened. At the beginning the question was simply, could anybody fly on an everyday basis and get the mail through.

Clearly, this was a loss leader for the post office. No matter how much they charged for those letters, it wasn't going to pay for whatever they had to pay the airlines to get the mail through. But it was clearly the right thing for the post office to do this.

Finishing up with my air travel, the government paid for results on the margin, and it regulated things like what could be airmail. So again, early airmail was highly constrained. It preserved privacy, which is a very important attribute of the post office. The Founding Fathers recognized it. Contrary to mail in Europe, it was a bad thing if the government read your mail. The post office had a tradition with this. It is clearly one of the things that this committee has had to spend a lot of time on. It adapted rapidly. So by 1930, airmail was completely different than in 1920.

The government did not cancel the railroad contracts to carry the mail. This is Keith's point about risk management. Of the railroad contracts to carry mail, probably a few of them still exist today. The bulk of the mail, 99 percent of it, went on the railroad for a long time, and it would have been silly to cancel those things.

So let's look at electronic medical records in the '90s. It is going to look something like this. We are going to have an electronic medical record of some kind. Claims are going to be submitted to the government, and the usual required stuff is going to be in that claim, and you have already started filling out what these requirements are going to be.

I am proposing that we just have an optional add-on that is the comparable stuff, this is sort of going to be the airmail. So for the usual stuff, let's say the government pays X dollars in reimbursement. I am proposing that the government pay a very small fraction of X for comparable good stuff that would come with it. In other words, clinical descriptions that would fit all the criteria that you have heard Jim and Chris and Keith talk about.

So again, we are trying to finance this on the margin. People are already going to be sending stuff to the government in electronic form. We are just looking at an optional add-on. If the optional add-on meets certain criteria which we will get to in a moment, the government gives you a little incentive.

The usual stuff is the railroads, the optional stuff is the airlines. The railroads are predictable and reliable and not risky, and the airlines have the advantage of rapid evolvability. You can take more risks with the airlines if you know that the railroads are still going to be running. It is like when the weather was bad, Lindbergh got on the train.

How and when would we do this? The point again, on the railroad analogy, is to do the doable with the reimbursement claims attachment. You guys have got a tough job, you have my sympathy, but you are going to do whatever you can do, and that is why the NDC codes are already in there. That is the doable. It was very interesting reading some of the criticisms of that on the Web, which we researched before we put this talk together.

We want to pay on the margin for comparable results. Once we get these results, then the government is in a position to assess the value of comparability. Basically, it is a hypothesis that comparable clinical descriptions are going to be worth the public's while to pay for. We all believe that, we just don't know how much.

As you have heard in three different talks, comparability requires terminology that has certain properties. It has other things, too, for instance, some standards for lab values, the numbers that are in the lab tests. But anyway, there are things that are required for comparability, and for the point of this talk, terminology is the nut that we have to crack first.

Why the government should help. As you have just heard this morning, I think we know what to do intellectually, more or less. It is clear that we are stuck. As we sit in this room, the trajectory towards having any national degree of patient comparability is pretty flat -- the slope of that curve is small. It is like air travel at the beginning of the 1920s.

If we had marginal subsidies, it would level the playing field for everyone who is a resource, whether they are in academia, government, the private sector, whether they work for nothing, whether they are expensive, whatever. Again, the government didn't pay rich airlines more to carry the mail than poor airlines, they just paid whatever it was.

We claim more comparability would benefit the public. We don't know how much and we would like to know. The government should spread the marginal dollars among the players. So the terminology providers, as all three speakers have focused on, they need to get some of this. Otherwise, the terminologies are not going to be sustained.

The comparable description suppliers need to get a subsidy too, because there is no financial incentive in the short term right now to do this. And in order for the public to get its money's worth out of this, there need to be people who analyze these descriptions.

The point is that we are trying to competitively satisfy government criteria, whatever the government, whether this committee or any other committee, decides is the right thing.

So to go back to this previous diagram again: how would the terminology providers get subsidized? The answer is that some of that fraction -- again, if a normal reimbursement is X dollars that the government pays the enterprise for some care, then a very small fraction of X goes back to the enterprise for giving the government some comparable descriptions to go along with that -- some of that has to go to the terminology providers, whatever it is.

To qualify for these, the government should set up a number of hurdles. Initially in the airlines, it was, could you fly every day, assuming the weather was okay. And of course as the '20s went on, the hurdles got higher. Could you fly every day and get there in a certain amount of time, could you carry passengers, whatever.

So in order to qualify for the dollars, I am suggesting that the terminology providers have got to put up a terminology server that is optionally mirrored for the government, so that the government can't use the terminology for whatever it wants to, but it can certainly use it for what it is paying for, namely, to get these comparable descriptions.

This terminology server has got to be the authoritative archive of all the versions of the terminology since the start of the subsidy. Otherwise, the public isn't going to get its money's worth. It should be on the Web for all licensees. In other words, the terminology people are going to be in the business of selling terminology for whatever consideration. The government should say, you have got to be on the Web so that the entry cost is low for anybody who is trying to license your terminology to prevent the cream skimming.

Obvious things, like it has got to support class based queries like this group of diseases or that group of diseases or lab tests or whatever it is. It has to support aggregation.

The server has to supply changes to the terminology as, say, XML transactions, or in whatever standard makes sense in the future. Most importantly, the terminology server has got to support some notion of longitudinal query -- what Jim was complaining about this morning, that certain terminologies don't do today. So the server -- again, this is just a terminology server, it doesn't have any patient information in it, and this is not perfectly specified today, but it is pretty well specified -- has got to have some way of handling this: I want to do a query over three years, and I know the terminology changed in this time; I either want to do it from the point of view of three years ago or from today's point of view. This server has got to tell me how to do it, and it is up to the government to specify the hurdle, how high that hurdle is to get over.
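
A sketch of those hurdles as a hypothetical interface -- class-based queries, a change feed a client could serialize as XML, and longitudinal queries across archived versions -- with all data shapes being assumptions, not any real server's API:

    class TerminologyServer:
        """versions maps a version label to {code: {"term": ..., "parents": set_of_codes}}."""

        def __init__(self, versions: dict):
            self.versions = versions  # authoritative archive of every release since the subsidy began

        def members_of_class(self, version: str, class_code: str) -> set:
            """Class-based query: every code whose parents include class_code."""
            content = self.versions[version]
            return {c for c, entry in content.items() if class_code in entry["parents"]}

        def changes_between(self, old: str, new: str) -> dict:
            """Change feed a client could serialize as XML or a future standard."""
            before, after = self.versions[old], self.versions[new]
            return {"added": sorted(set(after) - set(before)),
                    "retired": sorted(set(before) - set(after))}

        def longitudinal_members(self, class_code: str, versions: list) -> set:
            """Longitudinal query: class membership pooled across several years' versions."""
            result = set()
            for v in versions:
                result |= self.members_of_class(v, class_code)
            return result

    server = TerminologyServer({
        "1998": {"D1": {"term": "asthma", "parents": {"RESP"}}},
        "1999": {"D1": {"term": "asthma", "parents": {"RESP"}},
                 "D2": {"term": "asthma, exercise-induced", "parents": {"RESP"}}},
    })
    print(server.longitudinal_members("RESP", ["1998", "1999"]))  # {'D1', 'D2'}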

So let's look at the comparable description providers, namely, the enterprises that are going to try to send in this optional comparable stuff to the government, along with their claims attachment. For them to qualify, they have to put up a description server. In other words, when they send a claim to the government, it has got to go into a data warehouse they keep. Obviously, this has to be highly secure, for the reasons that you know only too well. Otherwise, we lose the public's confidence and Congress is going to be upset, and it is never going to work.

Again, and this is a delicate political point -- I wish Braithwaite were here to talk about this -- if the government is going to pay for it, this server should be optionally mirrored at a site of the government's choosing, a delicate point. Somebody has got to pay for this, because it has got to be secure, it has got to be current and so on.

An incentive here is for the government to say, the server should cover the largest patient population for which the patient normalization problem is solved. In other words, do you have a master patient index or something, because apparently we are not going to have a national patient unique identifier for awhile.

It has got to support class based queries, just like the terminology server. It has got to support longitudinal queries, just like the terminology server. In fact, the server may choose to do it by sending a query off to the terminology server to get that job done. And it has got to be current, so that if the terminology world moves on and there is terminology in this warehouse, then either the queries have to be current or the terminology itself has to be current. And it has got to be sustained across all relevant terminology versions.
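
A companion sketch of that delegation -- the description server answers a longitudinal class query by asking a terminology service which codes belong to the class in each relevant version. The lookup function and record shapes are stand-ins, not a specification.

    class DescriptionServer:
        """Warehouse of coded patient descriptions; it stays current across
        terminology versions by delegating class membership to a terminology
        service rather than storing such judgments locally."""

        def __init__(self, class_lookup, descriptions: list):
            # class_lookup(class_code, versions) -> set of member codes, e.g. the
            # longitudinal_members method of the terminology server sketched earlier.
            self.class_lookup = class_lookup
            self.descriptions = descriptions  # [{"patient": ..., "code": ...}, ...]

        def patients_with_class(self, class_code: str, versions: list) -> set:
            codes = self.class_lookup(class_code, versions)
            return {d["patient"] for d in self.descriptions if d["code"] in codes}

    # Stand-in lookup so the sketch runs on its own; in practice this call would
    # go across the network to the terminology server.
    lookup = lambda class_code, versions: {"D1", "D2"} if class_code == "RESP" else set()
    warehouse = DescriptionServer(lookup, [
        {"patient": "P001", "code": "D1"},
        {"patient": "P002", "code": "D2"},
        {"patient": "P003", "code": "X9"},
    ])
    print(warehouse.patients_with_class("RESP", ["1998", "1999"]))  # {'P001', 'P002'}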

So we set up a list of hurdles again, so that if you are going to get paid on the margin for this optional comparable stuff, those are the criteria that you have to meet.

Finally, we are going to have description analyzers, either inside or outside the government. They are going to need to be able to get at all this stuff, so they are going to need to be able to build warehouses also that work at web scale, and again have to satisfy all these very strict requirements on confidentiality.

But basically, their job is to do the doable. If they are going to try to synthesize information from more than one site, because that is how they won their contract from the government, it has got to be optionally mirrored, and their job is to help the government focus on opportunities and needs for incremental improvements, whatever those are, and their real job is to try to quantify what is it worth to American health care to have comparable patient descriptions.

A plan. How would we ever do this? First we define what the early objectives are. We have to budget for the seed dollars remember, to level the playing field here. These amount to pre-pays. So basically we have three groups of talented people, the terminology maintainers, the description creators and the analyzers, all of whom are going to need seed money to get started, and that is essentially prepaid towards whatever incremental pay they are going to get for supplying the value that they do. We define a competition amongst them, and the winners get subsidized by the government, again on the margin.

If somebody put me in charge of this, the first thing I would do is put Bill Braithwaite in charge of this. As many of you know, he worked on trying to analyze data at UCSF, I won't even mention how long ago. That was his doctoral dissertation. I think he is probably one of the best qualified in our field to make something like this work.

We need to make the evaluation of all this ongoing, obviously, change it as fast as possible. The whole point is that through communications standards focused on pluralism, we try to bootstrap the thing along, and right now there is no bootstrapping, very little communication, few standards, no focus. We have so much pluralism that you can't really call it pluralism usefully.

In some sense, it is a giant clinical trial, which is going to be a rolling set of objectives.

Budget. What am I thinking about here? We need to see these industrial strength servers. By industrial strength server, I don't mean a PC in a closet. I mean something that has an uninterruptible power supply and redundant web connections; maybe it is mirrored in some web farm somewhere, but in any case, the whole point is that it doesn't have to run fast, but it should be up all the time and fail as little as possible.

MR. BLAIR: Mark, can I give you a five minute alert so we will still have some time for questions?

MR. TUTTLE: All right, just about done. I am thinking that a half a million dollars paid to each of these things would fund the creation of these things, would get them started. These are pre-pays -- just my estimate.

In some sense, I think the organizations that would do this would put far more value than this into it, because it would reflect their mission in any case, and it would be worth it for them to put more resources into it than this. Then the government would decide how many of these things to fund in any given year, or new each year, or whatever.

Obviously, they would be awarded on merit, sustained on the margin, adjusted on value, meaning if we find out that there is more value in this kind of description -- like when Keith and I were talking about the NDC stuff, a stereotype is that drugs are a major rathole for health care money right now. We'd like to prove that or not, decide if it is manageable or not, and go on.

There would be some sunset provision, so that Congress doesn't get upset about this; you could set some time limit that the government support should expire or be revised. Again, the hypothesis here is that the only reason to do this is because we think the benefit is many, many times the investment.

If you only remember one thing, again, I am proposing to marginally subsidize existing and emerging efforts to achieve economies of scale in the public's behalf. Help us get unstuck, and I predict if we did this, in 10 years the optional part would dominate the part that everyone is submitting now.

That's it.

MR. BLAIR: Could I invite questions both for Mark as well as for Keith? I think, Carol Bickford, you had indicated that you had a comment or question?

MS. BICKFORD: Carol Bickford from the American Nurses Association. When I first came into this session, I thought I understood what some of the language meant in relation to classifications and terminologies and nomenclatures. But as the morning has progressed, I have more confusion. So I would like the committee or the work group to begin establishing some ground rules for the common language that you use, to help those of us who are not living in this environment understand what you are talking about.

MR. BLAIR: Could I maybe direct that question especially to Chris Chute. What she is looking for is some kind of definition between classification systems, nomenclatures and terminologies, just to help clarify some of the things that she is hearing.

DR. CHUTE: You and everybody else. The analogy is the cobbler's children have no shoes. The terminology used among terminology developers is, how do we phrase this, idiosyncratic.

In recognition of that, ISO working group three is promoting a meta standard which would be a meta terminology about terminology. That is a work project headed up by Angelo Zimore in Italy, and I think it will come forward with what we hope to be the definitive meta terminology, to give some clarity and consistency to these issues that are bandied about, as you have correctly pointed out.

DR. CAMPBELL: If I could very quickly point out, the challenge is as much getting people to use the terminology that has been consistently agreed to as it is to agree to it, because there are standards addressing these differences that people don't adhere to.

MR. BLAIR: Simon, do you have another question?

DR. COHN: Yes, I actually had a question. I'll just make one final comment about the issue of what terminology is versus what codes and nomenclatures and classifications are.

There is actually a document that was produced a couple of years ago that Chris commented on, the framework document, that at least posits one man's consensus view of how some of this should be described. The view was that the terminology was the overall term that represents and includes nomenclatures and classifications, code sets, things like that.

So I think this will be further refined, but in my simplistic view, I tend to think of it that way.

I actually had a question for the speakers, and specifically for Mark Tuttle. First of all, I wanted to thank you both for what was really a wonderful set of presentations.

Mark, I didn't want to let you get out of here without asking you about something rather different from what you were proposing. This has to do with your experience with the UMLS and what I think have probably been some of the most aggressive attempts ever made to model terminologies one to another, which is what I see a lot of the UMLS as being.

Certainly, one of the discussions I think we all have at least as we move into this area is, we are talking about terminologies to meet the needs of health care. We talk about that in a plural sort of term. Usually, we also recognize that there are a lot of overlaps on the edges of any of these terminologies, both in terms of granularity and domains and all of that. We all feel that there are ways to map them all and to make them all comparable, so that you can move back and forth between them to do all the things that we have all been talking about this morning.

Now, I am asking you as someone who has been in this deep level for a couple of years, is that true? What are the requirements around that? You can answer now, and also give us some written information, if you would like to follow up.

MR. TUTTLE: I think you are asking me if the Metathesaurus is the answer. I think the only way that we are going to know is to do something on the scale that I proposed.

In other words, I can argue intellectually that the Metathesaurus creates relationships between terms, and that these are invaluable and shouldn't be re-invented. But having said that, the uses to which coding systems are applied are not universally reconcilable yet.

So on the one hand, we can take the names in coding systems and try to decide if they are synonyms or relatives of other terms. But the Metathesaurus does not yet explicitly represent the meaning of codes.

So a simple example, to borrow Jim's now infamous example: if a terminology uses some term, NOS, and somewhere else in the same terminology it uses the same term without the NOS, the Metathesaurus says they are the same concept. So from a term point of view, it is probably the appropriate thing to do. From the code point of view, obviously it is not.
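
To make the distinction concrete, here is a minimal sketch -- with invented codes and terms rather than actual UMLS content -- of how a purely term-level merge can treat two distinct codes as one concept:

```python
# Hypothetical illustration (not actual UMLS data or API): two distinct codes in one
# source terminology whose term strings differ only by "NOS". A term-level merge
# treats them as one concept, which is fine for term lookup but loses the
# code-level distinction described above.
source_terminology = {
    "1234": "Asthma",
    "1235": "Asthma, NOS",
}

def normalize(term: str) -> str:
    """Strip the ', NOS' suffix before comparing term strings."""
    return term.replace(", NOS", "").strip().lower()

concepts = {}
for code, term in source_terminology.items():
    concepts.setdefault(normalize(term), []).append(code)

print(concepts)
# {'asthma': ['1234', '1235']} -- one "concept" at the term level, but two codes
# whose intended meanings differ.
```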

So we are left with a kind of an emergence of complex tasks that come out when we try to take all these terminologies and put them together. Having said that, it would be silly to ignore the utility and work that has gone into the Metathesaurus, because one of the things that it can do is, especially for humans, help people navigate amongst all these different terminologies.

Now, one of the reasons that I proposed the plan that I proposed this morning is that the real test of all this is whether computer programs can do such things. Suppose we had people sending in comparable descriptions in a variety of terminologies, some of which may be overlapping; let's suppose that there is more than one standard. There is no way that HIPAA can guarantee that you can give a list of codes which are not redundant. This is something that you have written about extensively on your website. If that is true, is the Metathesaurus going to be sufficient to sort out the overlap? When one description comes in in one terminology and another description comes in in another one, and the Metathesaurus says they should or shouldn't be joined or should or shouldn't be related, we would never want to ignore that information, but the question is, is it sufficient.

I don't know the answer. I think we need to test that. Does that start to answer your question or not?

DR. COHN: It begins to. If you have other written information you want to submit, we would appreciate it.

MR. BLAIR: We have a question from Bob Mayes here.

MR. MAYES: Actually, I just want to make a real brief comment. I agree completely with Mark's idea of marginality. I would however point out that you have to be sure of what margin you are putting it in. The government is not monolithic. The government rarely if ever funds infrastructure purely for infrastructure's sake; it is always in support of some current project. Most government agencies are actually quite like most corporations, in that they have a very, very short strategic time frame that they work under.

So a lot more effort I think needs to be done in identifying appropriate types of government activities by specific agencies, and then going after those margins specifically. I get a lot of vendors who come in and talk to me for a couple of hours. I can remember once when somebody came and talked to me about terminology and how I might be able to use that within the Health Care Financing Administration.

Just to put a little point of reality here, it is useful to realize that the Congress has budgeted not a dime for HIPAA standards activities. That is all being picked up out of existing operational budgets in various agencies. So we just need to keep some reality here, as to how the government funds things.

MR. TUTTLE: Can you tell me what fraction -- is it 50 percent of the current American health care budget that is paid for by the government? Is that what it is?

MR. MAYES: HCFA pays 30 percent, HCFA alone, so you could add on top of that. I'm not arguing; there are lots of ways that we could be doing this. I continuously look at my own operational requirements to see how I can incorporate these. But it is difficult from the outside sometimes to pick what margin to put these things in.

MR. BLAIR: Let me just get in the last two questions. We have both Clem McDonald, and then someone who is standing at the microphone. So could we try to catch those two?

MR. MC DONALD: This is really off the focus that much of the discussion has been on, but you mentioned, Keith, the free volunteer effort versus the cost, and I don't dispute any of that.

You brought us something quite interesting in terms of both NDC and in terms of LOINC, that is, go to the original source and vendors of these things and get them to volunteer. There is a volunteerism in that, but it is self-interested volunteering. There are two categories of terminology vocabulary. One of them is those that are based completely on artifacts of humans, like instrument-based things; what comes out of them is what their makers say is going to come out of them, such as drugs that get manufactured.

I think I would like to see even more focus on that -- getting the artifact generators to make requests for formalism. The difference in the NDC right now is that they just do it all on their own, and you have to go back to this comma so you have a cross classification.

We could do an awful lot, and actually, artifacts are becoming increasingly the dominant source of information, if you project some of the DNA realities. We may not need as much of this talk stuff that is harder to classify and make terms of.

MR. BLAIR: I was just advised that apparently there is no place to eat in this facility, which means we go out. We tend to compete with the lunch crowd, so forgive me if we restrict it to one last comment, so we can scramble to lunch.

DR. CAMPBELL: My quick response is that I think the infrastructure that supports large groups of paid people contributing content is the same infrastructure that supports large groups of volunteer people. The predominant concern I was trying to press is that we need infrastructure, and that that infrastructure is reusable and therefore could be brought to the benefit of more than just one particular terminology effort.

MR. BLAIR: I think there was someone from the audience that had a question?

MR. CASSIDY: Patrick Cassidy from Micra Incorporated in New Jersey. None of the discussions, including the earlier ones this morning, explicitly addressed the question of whether there should be a specific national project to develop an abstract upper ontology, which would provide us with a defining vocabulary with which we could provide very specific, clear cut, logically well defined definitions for each of the terms that would be included in each of the terminologies.

It doesn't exist right now, and to develop such an ontology would be a major project that would probably cost over $100 million and take several years. But from the history of the last several years, it seems it is not feasible for a resource infrastructure to be developed by collaboration among private corporations. It doesn't seem that it is going to work out that way.

So the question is, do you feel that it would really be useful in order to provide the kind of interoperability and the ability to recognize the precise meanings and definitions of terms? If you think it would be useful, then the question would be whether the committee would think this would be an appropriate time and place to recommend that funding be provided for such a project?

MR. BLAIR: Could I invite Keith Campbell to address that? I know that he used ontological principles in the work that he did with SNOMED-RT?

DR. CAMPBELL: I'll try to just be very brief. I think that the notion of agreeing on high level standards for the meaning of concepts that are used in the medical record is of paramount use. I disagree a little bit that there is no effort going on to do that. In fact, I think that there are some strong collaborations: Kaiser Permanente works extensively with SNOMED, which is in the College of American Pathologists. They are now working with the National Health Service, and also participating actively with HL-7 in the development of the XML patient record architecture.

I think that there are these efforts underway, although perhaps they could be better funded. Whether that funding comes through licensure or government support or whatever, I think that we want to incrementally improve these existing collaborations, as opposed to creating de novo projects to develop something that is already being substantially addressed through existing efforts.

MR. CASSIDY: If I could clarify a point, I didn't suggest that people weren't attempting this. The kind of fundamental vocabulary I mean -- an ontology of thousands of terms, along with the semantic relations between them that would be needed to encode the meaning of those terms -- would not seem to be the end result of the kinds of collaborations that you are discussing.

DR. CAMPBELL: Well, I would argue that that is exactly the end result of SNOMED-RT.

MR. CASSIDY: Okay, perhaps there is something that I am not familiar with in that particular regard.

MR. BLAIR: Let me extend the time a little bit for our lunch. If you could please be back here at 1:15, then enjoy the lunch. Thank you to all of our presenters this morning.

(The meeting recessed for lunch at 11:45 a.m., to reconvene at 1:15 p.m.)


A F T E R N O O N S E S S I O N [1:17 p.m.]

Agenda Item: Statistical Classifications and Code Sets

MR. BLAIR: This next panel of testifiers is generally grouped to -- to some extent we tried to organize these things by themes. You will notice that for the next day and a half, depending on availability we weren't always able to do that perfectly. But in general, the representatives here that will be testifying usually wind up having developed data sets or code sets or terminologies or vocabularies that are focused on billing code systems, things that are generally used for reimbursement purposes.

Could we start from left to right? Please try to keep your comments within the time frames that have been allocated, so that we will have time for questions at the end.

MS. GIANNINI: My name is Melinna Giannini, and I am from Alternative Link. In 1996, we began developing a code set to describe the different types of alternative practitioners. This grew into describing nurses and also midwifery.

I'm just going to -- I do have a handout about my comments to the committee. But I thought it would be most useful if I would just go ahead and show you what the codes look like and what they did, because it is hard to talk about otherwise.

We are an all-alpha code set. The CEAAA is the code description. It fits into the HCFA 1500 form and the UB 92 in the procedure code space, and it is meant to be a procedure code, as are HCPCS codes.

The first code that you will see has a very strong, good CPT crosswalk to 59400 for full ob/gyn care. What is unique about our code set is that we describe on a state by state level which provider types can use that code. Of course, MDs can, but then you will see ND for naturopathic doctor. Here we are in the state of Colorado: what is the scope of practice for these practitioner types?

Nurse midwives and registered midwives are also able to supply that care. Nurse midwives fall under the normal scope of practice laws in Colorado, and that looks pretty similar across the country; there are variations on it. Registered midwives are either okay to practice usually, or they can't practice. Then you get into details for registered midwives; can they puncture the skin, can they do other things. So the code set of 4,054 codes describes on a code level which practitioner can use that code in which state. So that is how our code set is different.
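
As a rough illustration of the structure Ms. Giannini describes -- not actual Alternative Link content; the provider abbreviations and scope-of-practice entries here are invented -- a code-level, state-by-state lookup might look like this:

```python
# Illustrative sketch only: the entries are made up to show the shape of a
# code-level, state-by-state scope-of-practice table, not real code set content.
scope_of_practice = {
    ("CEAAA", "CO"): {            # (procedure code, state)
        "MD":  "allowed",                                  # medical doctor
        "ND":  "allowed",                                  # naturopathic doctor
        "CNM": "allowed",                                  # certified nurse midwife
        "RM":  "allowed, but may not puncture the skin",   # registered midwife
    },
}

def who_can_bill(code: str, state: str) -> dict:
    """Return the provider types permitted to report this code in this state."""
    return scope_of_practice.get((code, state), {})

print(who_can_bill("CEAAA", "CO"))
```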

Oriental medicine is what OM stands for. We are talking about dental anesthesia with acupuncture needles. This does not have a good CPT crosswalk, nor do the next three codes. There is no description in CPT that looks or smells or feels like this code. You will note that the CPT crosswalk to 99 codes would require that the provider write a report, so the code is more specific. In Colorado, Oriental medicine practitioners are the only ones we found that were trained to do these three forms of anesthesia.

We also attached on a code level decision support as it is used by payers when they are adjudicating claims. It saves everybody time and money to do this, you all know that.

I'll show you how that works. If we are talking about a midwifery code, we are actually putting on a code level in the database, retrievable in this format, the expanded definition that defines the parameters of services. We are not just naming what the service is, we are actually attaching the information to describe the service.

This is the same thing for the dental anesthesia. It is a lot more specific, because no one out there has information about what this means, so it delineates what it is in a big definition. It also says how long it should take, and it also says if you need more time, go to this next code.

That is just another example.

In addition, on the code level, we felt it would be very interesting if the payer or whoever was looking at this had the national training standard attached on the code level to refer to from a claims adjudication level. So if people were not familiar with what a registered midwife -- how they were trained, they now have a tie on each code level to the training standard. What we are moving to is, if the state says they can't do something, we will say why at that level that they can't by state law.

We looked at the insurance industry probably from a very practical point of view, which was, what are providers doing? They are creating superbill information that then turns into a HCFA 1500 form. That is then sent to the claims department at the insurance company, and they are doing those edits on the claims.

We are supporting the provider in yellow, and the claims adjudicator and the medical decision people in the blue, because we felt that that was the information that they would have to have before they could tolerate having alternative medicine added into their claims benefits.

Remember, we only took the providers and put the information into the code set for providers who had national training standards, who were licensed by at least one state, and who were able to get malpractice insurance. We won't see medicine men in here, because we couldn't find any ties to that information. Although if you got outside of the U.S. health care system and into their own health care system, it would be possible to code that, too.

It gets more complicated when you are starting to talk about electronic commerce. This is what is going on currently, with level three clearinghouses doing some of the edits, the routers sending to the payers and the payers doing these edits. And of course, what we are moving towards is that everybody will hopefully have Internet technology soon, and be able to make these decisions back and forth in real time, to support the coding person with the information that is tolerable or not tolerable to the paying agency, in real time edits, so that the knowledge level goes up. Instead of waiting two weeks to hear back from somebody, we will now know right after you do something why that is acceptable or why that is not acceptable.

That is all I have. These are the types of practitioners that we are currently supporting. We are about ready to -- I think July 15 is our deadline for having 50 states and the District of Columbia and all of the information for 13 practitioner types interfaced with the 4,054 codes.

So that's it. Thank you.

MR. BLAIR: Our next testifier, could you introduce yourself, please?

DR. POLLOCK: I'm Dan Pollock at Centers for Disease Control and Prevention in Atlanta.

Let me begin by thanking NCVHS for this invitation to participate in this forum. It is a pleasure to be here. I am going to follow fairly closely the printed testimony that I hope everyone has a copy of in front of them. I may deviate a little bit from time to time. It is three double-spaced pages, so I think we're going to come in under 10 minutes, even if I add a few impromptu comments along the way.

I am, as I mentioned, at the CDC in Atlanta. I am a medical epidemiologist in the injury prevention and control program. One of our major programmatic responsibilities and challenges is improving the accuracy, completeness, timeliness and accessibility of emergency department -- and I will refer to emergency department as ED -- data for public health surveillance of injuries. These data are needed to monitor the incidence, causes and effects of injuries and evaluate the effectiveness of countermeasures.

Here is my first deviation from the testimony. Injury prevention and control basically borrows from classic infectious disease epidemiology, prevention and control, looking at the interaction between agent, host and environment. So instead of E.coli, a fast food restaurant and a consumer, we have got motor vehicle collisions on highways involving drivers and pedestrians. Hence the need for data for surveillance, much as in infectious diseases, those data are needed for tracking and intervention purposes. So too in the area of injury prevention.

ED data have many other potential uses, including public health surveillance of infectious diseases, asthma, ischemic heart disease, drug and alcohol related emergencies and other acute medical problems. In the event of mass casualty incidents, such as may occur with a terrorist attack, EDs are a primary data source for a rapid needs assessment and mobilization of a coordinated community wide response.

However, variations in the way that data are entered into ED record systems, and even within individual record systems, impede collection, communication and reuse of ED data for these various secondary purposes. Recognizing this situation, and with great interest and support from the emergency medical and nursing professional associations, CDC's injury prevention and control program began in 1994 to coordinate a public-private partnership that has developed recommended specifications for many data elements in emergency department records. Data Elements for Emergency Department Systems, Release 1.0 (DEEDS), is the initial product of this broad based collaborative effort.

DEEDS was posted at a CDC website in August, 1997 and published in hard copy form two months later. This is the hard copy form. If anyone would like a copy, please send me an e-mail; we would be happy to send you a copy. It is fairly heavy. I was advised by my editors of the need to get the heaviest paper we possibly could; going through the Government Printing Office, they had had experience with paper on which you could see both sides at once. So we went ahead and did that, but it makes for a rather heavy volume.

Since the posting in August and the hard copy publication in October of '97, select specifications from DEEDS have been incorporated into the LOINC database, the Health Level 7 implementation guide for claims attachments and the forthcoming HCFA Notice of Proposed Rulemaking for HIPAA mandated claims attachment standards. Indeed, the ED attachment proposed by HCFA is essentially a subset of DEEDS data elements.

DEEDS also is serving as source material for statewide ED data standardization efforts in North Carolina and Massachusetts, and DEEDS specifications have been adopted for use in CDC funded injury surveillance projects in several states.

The vendor community's response to DEEDS has been uniformly positive. Although we have not yet measured the extent to which DEEDS specifications have been incorporated into commercial products, several vendors report DEEDS compliance, including at least one that reported this before our specifications were finalized and published.

A guiding principle in the DEEDS development effort is that the primary function of an ED records system is to store clinical data and facilitate their retrieval during direct patient care. Hence, DEEDS' scope of coverage focuses on data elements in current clinical use and the 154 data elements in DEEDS are organized in the approximate temporal sequence of data acquisition during an ED encounter.

A structured format is used to document each data element, including a concise definition, specification of data type and field length, a description of when data element repetition may occur, coding specifications for coded elements, and references to the data standards or guidelines that were used to define the data element and its field values.

To the fullest extent possible, specifications for DEEDS data elements incorporate national standards for health care data, particularly standards applicable to electronic patient records systems. Data types and other relevant specifications conform to HL-7 version 2.3, and an appendix maps DEEDS data elements to HL-7 fields and segments. That is a feature that has been particularly appealing to a number of the vendors.

Other standards used in DEEDS include the U.S. Bureau of the Census industry and occupation codes, Office of Management and Budget standards for classifying race and ethnicity, the X12 health care provider taxonomy, LOINC codes for laboratory result types, and ICD-9 CM external cause injury and condition codes.

In some instances, when a standard terminology or code set was unavailable for use in DEEDS, we developed a recommended set of terms and codes. For example, DEEDS includes its own code set for mode of transport to the ED (ground ambulance, helicopter ambulance, et cetera) and for patient acuity (requires immediate evaluation or treatment, requires prompt evaluation or treatment, et cetera).
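
A minimal sketch of that structured element format, using the mode-of-transport example just mentioned; the field names and values here are illustrative, not quoted from the published DEEDS specification:

```python
# Illustrative shape of one DEEDS-style data element record (field names and
# values are assumptions for the sketch, not the actual DEEDS entry).
deeds_element = {
    "name": "Mode of Transport to ED",
    "definition": "How the patient arrived at the emergency department.",
    "data_type": "coded element",        # data types conform to HL-7 version 2.3
    "field_length": 2,
    "repetition": "does not repeat",
    "coding_specification": {
        "1": "ground ambulance",
        "2": "helicopter ambulance",
        # ... remaining values of the DEEDS-defined code set
    },
    "references": ["HL-7 2.3 segment/field mapping (see DEEDS appendix)"],
}

def is_valid(value: str, element: dict) -> bool:
    """Check a submitted value against the element's permitted codes."""
    return value in element["coding_specification"]

print(is_valid("1", deeds_element))   # True
```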

In still other instances, additional research and development are needed to design terminology or coding specifications, or to select a set of terms and codes from available candidates. Work is needed on chief complaint, medication identifiers, patient outcomes and several other coded data elements.

Chief complaint is particularly important, because the patient's reason for seeking ED care is a major factor in triage decision making and hence in decision support, a key determinant of resource use and service intensity, and in the aggregate a crucial unit of analysis for evaluating episodes of care.

We believe that with adequate support, a field ready set of chief complaint terms and codes can be identified or developed in two to four years. Terminology could be selected from existing vocabularies, as long as they allow representation of undifferentiated complaints. That is, health problems for which etiologic attribution is premature. That is a very important issue in emergency medicine practice.

For example, chest pain may not be attributable to a specific cause at the outset of an ED encounter. Representing this lack of differentiation is crucial. Also important is ease of use and reproducibility of a chief complaint system. It must function effectively in a high volume clinical environment, with multiple users working at various levels of clinical experience.

The initial release of DEEDS is intended to serve as a starting point. Further work is needed to expand the scope of coverage so that it covers all types of data entered in ED records, including images, wave forms, medical device measurements and other data not in the initial specifications.

The public-private partnership used to develop DEEDS and our reliance on existing health data standards set a precedent for future revision and expansion. We plan to begin work on the next version later this year, and anticipate completion by late 2000 or early 2001.

Thank you.

MR. BLAIR: Thank you. This is David Berglund, is that correct?

MS. PROPHET: Actually it is Sue Prophet.

MR. BLAIR: I see.

MS. PROPHET: Just so you know, I am not Pat Brooks, but I am filling in for her today, as she was unable to be -- she had a conflict and was unable to be present. Her designee had a conflict as well. So she asked me to provide you with the information on ICD-10 PCS.

First of all, the goals of ICD-10 PCS were to replace Volume 3 of ICD-9 CM, which has been used in the U.S. for reporting of in-patient procedures and some outpatient procedures since 1979. So it is 20 years old.

In 1992, HCFA funded a project to produce a preliminary design for a replacement of Volume 3. Being 20 years old, it was getting more and more difficult to incorporate new procedures and new technology into the system, as many of the code numbers had been used.

After a review of the preliminary design, HCFA awarded 3M HIS a three-year contract in 1995 to complete the development of a replacement system.

Year one involved completion of the first draft of the system. The system had four main objectives. Completeness, in that there should be a unique code for all substantially different procedures. Expandability, so that as new procedures or new technology develop, they can easily be incorporated into the system. Multi-axial structure, in that each code character should have the same meaning within a particular section and across sections whenever possible. And standardized terminology; the system should not include multiple meanings for the same term, and each term should be assigned a specific meaning.

The completion of the first draft of the system involved a technical advisory panel made up of both health information experts as well as medical experts.

Year two was devoted to the development of both a training guide and informal testing of the system by the American Hospital Association and the American Health Information Management Association, in addition to some additional reviews by physicians.

The third year involved formal testing and review of the system by the independent contractor, as well as review from some of the physician specialty groups and production of the final version.

The entities that conducted the formal testing were the clinical data abstraction centers, FMAS and Dyne-Keyprobe. The clinical data abstraction centers or CDACs were initially created to be a key factor for the successful implementation of the health care quality improvement initiative program. The CDACs are under contract with HCFA, and they subcontract with the peer review organizations for data abstraction services.

In the first phase of the test, 5,000 medical records were coded using ICD-10 PCS, 2,500 by each CDAC. They used a broad range of cases. Generally, since the goal of this system was to replace the areas where Volume 3 of ICD-9 CM is currently being used, the records were surgical in nature. So they made sure at each CDAC that there was one case for every surgical DRG, and beyond that the records were randomly selected.

During this review of the 5,000 records, they identified a number of revisions that were needed and gave feedback on the issues they found to 3M. Following the completion of this review, they did a comparison test in which they coded 100 records using both ICD-9 CM and ICD-10 PCS. They coded half of them first with one system and then the other, and the other half in the reverse order.

They have in the meantime pretty much completed the process, which I will get to in a little bit, of performing some additional testing on ambulatory records, which was the key recommendation of one of the previous coordination and maintenance committee meetings.

One of the key issues driving modifications to the system as a result of the testing by the CDACs was that there was really not a not-otherwise-specified option within the structure of the coding system. This was deliberate, in that the 3M developers felt that it was very important to have the specific detail and not allow the NOS. However, many coding personnel were concerned that in paper records today, a lot of times the documentation is just not there to assign the most specific code, particularly in outpatient records, if this system would also be used on some outpatient records.

So even though the CDACs didn't find a significant need for an NOS feature, a compromise was reached in which there isn't an NOS feature per se, but there are a few areas where there is a default character. For example, in the root operation area, the default character would be repair. The body part default would be the main body part. So if you didn't know which lobe of the liver was involved in the procedure, you could default to a character that simply said liver. Then they reduced the number of approaches from 17 to 13, which also addressed some of the NOS concerns, because some of the approaches were so close together that it was impossible for people to determine from the documentation which approach was correct. So the approaches were reduced significantly, down to 13.

The dual numbering issue has not been completely addressed yet. There are some groups who feel that the codes should have an embedded meaning, and there are some groups that feel that there should be no embedded meaning. So right now the draft of ICD-10 PCS shows what the code numbers would be either way, with and without embedded meaning, while this is determined what the final outcome of that is going to be.

A number of issues came up regarding the training manuals, and these were revised as well.

There were problem areas addressed, such as no tabular entries for certain procedures, and several root operations were redefined. One of the biggest issues identified was that there was no root operation for biopsy. Biopsy was classified as an excision, and a number of reviewers felt it was clinically important to be able to distinguish a biopsy from a therapeutic excision. Therefore, instead of adding a root operation, there is a qualifier character definition that now says diagnostic, so a diagnostic excision in ICD-10 PCS is the way a biopsy would be described.

One of the findings of the testing process by the CDACs was pretty much obvious from the beginning: the system is much more complete than ICD-9 CM. There is far greater specificity and detail. As evidenced by some of the recommendations that the CDACs made, it is very, very easy to further expand the system and add in additional processes and procedures as they come along, and very easy to add in new technology and so forth.

There is a hierarchical structure, in that each character of the seven-character alphanumeric code has a very specific defined meaning. The system is very easy to learn and understand, and the standardized terminology was very well received by all of the people participating in the testing. It was difficult to learn in the beginning, in that some of the definitions were not what one would think of as being the definition, but once the definitions were learned, it was very easy from that point forward to code with the system, because the definitions were standardized.
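
As a rough sketch of that structure: the axis names below follow the Medical and Surgical section, and the sample code and its decoding are illustrative rather than drawn from the actual ICD-10 PCS tables.

```python
# Each of the seven alphanumeric characters carries a fixed meaning; the axis
# names are those of the Medical and Surgical section, and the sample code is
# illustrative only.
AXES = ["section", "body system", "root operation",
        "body part", "approach", "device", "qualifier"]

def split_pcs_code(code: str) -> dict:
    """Break a seven-character ICD-10-PCS code into its named character axes."""
    if len(code) != 7:
        raise ValueError("ICD-10-PCS codes are exactly seven characters")
    return dict(zip(AXES, code))

print(split_pcs_code("0DB68ZX"))
# A qualifier meaning "diagnostic" is how a biopsy is distinguished from a
# therapeutic excision, as described above.
```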

So the testing found that certainly it should lead to improved accuracy and efficiency of coding. While it took some additional training time in the beginning to learn the system, once people learned the system it was actually easier to use in a lot of instances than ICD-9 CM.

Training time will of course be a factor, since it is a new system. Having all terms defined makes it easier to teach. One interesting finding is this: those of you that are familiar with ICD-9 CM know that using the alphabetic index is a key component of ICD-9 CM and a mandatory step in the coding process that we were all taught in school. But in ICD-10 PCS, the tables in the tabular section of the system were so easy to follow and easy to use that the testers found they very rarely ever had to use the index. The only time they used the index was for terms such as eponyms, where they weren't sure where they might go within the structure of the system.

One thing I might point out from the testing: since it was focused on replacing Volume 3 of ICD-9 CM, some of the non-surgical sections have not been tested significantly. The testers from the CDACs did look at some of the X-ray codes and some of the others, and found that there definitely needed to be further explanations and examples, particularly in the imaging and radiation oncology sections, and that some of those perhaps could benefit from some additional testing.

So the current status is that the electronic version is currently available on HCFA's home page, and there is the address. There will be mapping from ICD-10 PCS to ICD-9 CM, and the training manual is also available.

The next steps, some of which have already occurred, are that the findings were presented at the May 13 ICD-9 CM coordination and maintenance committee meeting -- the findings that I have just mentioned, in addition to the findings from the testing of the ambulatory records that the CDACs did, in which they found that the system was quite usable on the outpatient side, but that some of the non-surgical areas needed some more training and some more explanation to be usable.

There was also an issue where some brand name pharmaceuticals that were used in the medical records were not adequately explained within the PCS system, which caused some difficulty in coding for the pharmaceuticals. So they found that to use this in more areas than those where Volume 3 is intended to be used would definitely require some more testing.

They also suggested at the coordination and maintenance committee meeting that non-Medicare records needed to be tested more with this system, since obviously the clinical data abstraction centers' vast majority of records were from the Medicare population. So obstetric cases have been tested very little with this system, as have pediatric.

Again, the proposals for whether or not and when ICD-10 PCS would become an administrative code set would take place within the structure of HIPAA.

MR. BLAIR: David Berglund.

DR. BERGLUND: I am David Berglund from National Center for Health Statistics. I am going to talk about the international classification of diseases as it relates to patient medical record information. I'll talk both about ICD-10 CM and ICD-9 CM. ICD-10 CM is based on ICD-10 from the World Health Organization, and it also draws from ICD-9 CM, and it will be completed later this year.

ICD-9 CM is currently the de facto standard for representing diagnoses in the United States.

Let me first say a little about ICD-10 itself. At last report, October 1998, WHO had 30 countries that had implemented ICD-10 for mortality and/or morbidity. Since that time, ICD-10 has been implemented for mortality reporting in the United States, starting at the beginning of this year.

ICD-10 is the broadest revision of the ICD to date, with changes including alpha numeric codes and addition of various new features and restructuring, and expansion of detail.

Let me say something about patient medical record information. It includes a lot of information, basically everything in a medical record. Some of the narrative things are the toughest to capture, such as physician notes or nurses' notes.

Chris Chute had mentioned this morning that ICD loses over half of the clinical detail in the medical record. I don't think that should be so much of a surprise, since the ICD is not intended to capture all the information in the medical record.

There are a lot of different uses for patient medical record information, and different levels of detail are necessary for the different uses. Naturally for clinical treatment, the entire medical record has to be available. Uses that look at multiple patients do need comparable information from different sources.

Some areas that may require comparable information and information on multiple patients do include public health, research and tracking. The patient medical record information has to be comparable for the results to be reliable, but how comparable depends on specific use.

So what is the role of the ICD? It is a classification designed for collecting aggregate data for both national and international comparisons. It does then provide a code for every condition, but not necessarily a unique code. I would say that for an aggregate classification, such concepts as not elsewhere classified can, I believe, be necessary.

One of the things that we had this morning was the issue of whether or not such a thing is necessary. I do believe that the desiderata that Jim Cimino talked about do have some variation in how they would apply at different ends of the spectrum, and for a classification such a concept is important.

Now, the ICD does have a hierarchical structure, a strict hierarchy, not a multiple hierarchy. However, use of excludes notes does serve the purpose of being able to find conditions from multiple hierarchies, although terms should be found generally using the indexes, as Sue Prophet noted.

Looking at current uses of ICD-9 CM, it is of course used in public health, for tracking, for research and for billing purposes, although the ICD wasn't created for billing purposes, and it is of course used to create DRGs.

The domain of the ICD is primarily diagnoses and reasons for encounters with the health care system. The scope does include secondary areas such as signs and symptoms as well as some medical test results and nursing diagnoses.

Settings where the ICD is used include both in-patient and outpatient health care settings as well as long term care, home care and clinical and epidemiological research studies and public health reporting.

Regarding expansion of the ICD, while ICD-9 CM is widely used, some users have requested additional detail. ICD-10 CM will be a major expansion with increased specificity and revisions in many areas throughout. We do not plan to make major changes to ICD-9 CM, although ICD-9 CM has had yearly updates in an open process.

Let me comment that we do add detail by making new and more specific codes. The aggregate codes such as not elsewhere classified codes then will no longer include those conditions. While we do not consider that to be a change of meaning, as Jim Cimino alluded to, this can be an important issue for data. So to enable longitudinal tracking, we do have a detailed conversion table which lists new codes that have been added, what year they were added and how the condition was previously coded. This does enable longitudinal tracking with the ICD codes.
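
A minimal sketch of how such a conversion table supports longitudinal tracking; the codes and years here are illustrative, not entries from the actual table.

```python
# Illustrative sketch: for any code added in a given year, recover how the
# condition was coded before that year, so data series remain comparable.
conversion_table = [
    # (new_code, year_added, previously_coded_as)
    ("250.41", 1995, "250.40"),
    ("493.81", 1998, "493.80"),
]

def code_in_effect(code: str, data_year: int) -> str:
    """Map a newer code back to how the condition was coded in an earlier data year."""
    for new_code, year_added, old_code in conversion_table:
        if new_code == code and data_year < year_added:
            return old_code
    return code

print(code_in_effect("493.81", 1996))   # '493.80' -- pre-1998 data used the older code
```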

Why did we need to make a clinical modification of ICD-10? There were certain codes we had to remove, those unique to mortality, and the procedure codes. We also were restoring specificity that was in ICD-9 CM for certain conditions, and modifying the classification for consistency with U.S. clinical practice.

I am going to go fairly quickly through some of these slides. Basically, all changes made needed to conform to WHO conventions. We did further evaluate ICD-10 categories to provide codes for ambulatory and managed care encounters, for clinical decision making and for outcomes research. We did expand certain aggregate codes based on previous recommendations that weren't able to be implemented in ICD-9 CM. We also consulted with physician groups and others. I do have a list of reviewers here which will be in the handout, and I'll let you all just go over those later. I won't go through those in detail right now.

Let me go into the major modifications and changes that were made to ICD-10 CM. We did add a sixth character for more detail, and created full code titles, added laterality and created certain combination codes.

We expanded in a number of areas, including the OB codes, the diabetes codes, the injury codes, the post operative complication codes.

I do have a number of detailed slides on these which show how the codes look, such as S32.001, a closed fracture of the lumbar vertebra with (word lost) injury. I'm not going to dwell on these examples right now. I'll let you look at them in more detail subsequently.

We did add trimesters to OB codes, for those who are interested in that issue.

I'll just skip ahead now, looking at the relationship of the ICD to the other terminologies. The ICD does look at aggregate data at a coarser level of granularity than many other terminologies and nomenclatures. There often are mappings from the other systems to the ICD, such as in the UMLS Metathesaurus at the National Library of Medicine.

In relation to message format standards, ICD-9 CM is referenced by both HL-7 and X-12. It is the de facto standard method of representing diagnoses in both HL-7 and X-12.

Looking at medical terminology issues which are deserving of government attention, I just noted a few things here. From the standpoint of the ICD, it is important to maintain ICD-9 CM until ICD-10 CM implementation can begin. However, I think it is important to move to ICD-10 CM for recording diagnoses as soon as that can reasonably be done. Note of course that the U.S. is now using ICD-10 for mortality, as I referred to earlier.

It is also important to facilitate coordination with the World Health Organization in relation to international terminology use and statistical reporting. Under treaty, the U.S. reports morbidity and mortality using the ICD. It is also important to facilitate standards for electronic interchange of detailed medical information, which is one of the issues that is being looked at closely right now here.

Let me add that it is also important to facilitate mappings between systems. I see that as being extremely important.

Patient medical record information does have many different uses. Some need more detail than others. Some of the concerns about privacy may favor use of systems with less detail as being more acceptable when the full detail of the medical record is not needed. That doesn't decrease the need for security. Security is still of course very important.

Another issue. Some payers do not follow official coding guidelines when using systems. For comparability of patient medical record information, under HIPAA it would be reasonable to mandate use of official coding guidelines for systems in use.

Coordination among terminology developers could be encouraged by government specification of standard ways to access and to express patient medical record information in electronic formats. This is important for mapping between systems. Certainly I believe it is something that the AHCPR has looked at and information from that can certainly be beneficial in a wider context, I would say.

Terminology developers and message standard developers do need to coordinate. Standard setting arenas are beneficial for this. Use of ICD codes is relatively straightforward in message standards as compared to compositional nomenclatures. However, this is still an important area.

The ICD and representation of patient medical record information. In research, inclusion criteria for a study have to be strictly defined. When ICD-9 CM codes are used for this purpose, many studies use multiple codes to capture all the cases of interest. The ICD has been designed to capture data that is comparable and that is of value for public health.

The ICD by itself will not meet all the needs of research or of public health. However, in relation to capturing diagnoses, ICD-9 CM itself does meet most needs at this point in time. ICD-10 CM will have increased specificity that will better meet the needs of users. For some conditions, levels of severity have been added in ICD-10 CM, and there are some areas where enhancements in ICD-10 CM will be very beneficial.

Now, ICD-10 CM has gone through an open comment period based on an earlier draft. Twelve hundred comments were made, many of which merited direct incorporation; some required further study. Others either did not need further action or were inconsistent with the structuring conventions of the ICD.

Next steps. At this point, we have completed review of those recommendations. We are finalizing tabular list revisions. We will be moving on toward revising the index, crosswalks and guidelines and developing new training materials, doing formal testing in a comparability study with ICD-9 CM.

For tools beyond a print version, a database version or a tagged version which could facilitate mapping to other systems is an additional important tool that we intend to develop.

I also have our webpage here, that has information both about ICD-9 CM and ICD-10 CM, as well as links to information about ICD-9 and ICD-10 for mortality purposes.

Thank you.

MR. BLAIR: Thank you, David. Tracy, are you next?

DR. GORDY: My name is Dr. Tracy Gordy. I am the interim chair of the AMA CPT editorial panel. It is my pleasure to appear today on behalf of the AMA before the work group on computer-based patient records. Thank you for the opportunity of testifying.

My remarks will focus on your questions on terminology and related standards for uniform data standards based patient records, including the federal role.

In response to your first question on patient medical record information, a patient's medical record should include sufficient information for physicians and other health care professionals to assess previous treatment, to insure continuity of care, to decide on future and further treatments and clinical activities, and to avoid unnecessary therapies or tests. The strictest patient privacy protections must be observed. Comparable and accurate PMRI will help us realize the clinical and research utility of such information.

At the same time, efforts to enhance this comparability must not detract from the fundamental purposes of such information, the needs of patients, their physicians and other health care professionals in the institutions and facilities in which they receive their care.

Let me move to your second question, the role of CPT. CPT is used primarily as a working clinical nomenclature to describe medical procedures and services. It is an organized listing of terms and codes for reporting the services of health professionals. The purpose of this terminology is to provide a uniform language to accurately describe medical, surgical and diagnostic services. Accordingly, CPT provides an effective means of communication among physicians, patient and third parties.

CPT addresses the full scope of medical services, as well as physical therapy, occupational therapy, speech, language and hearing, optometry, some chiropractic services, dietary, psychological services and other health professional services. CPT is unique in describing clinical practice in a manageable number of codes, with pre-coordination of terms to meet the needs of physicians, other health care professionals and payers.

Deeply clinical, it has been extensively adopted by payment and administrative systems and has wide practitioner acceptance. CPT is updated annually and made available through an extensive distribution network. Updates are also available in electronic formats, and are made available by licensing to end use third party vendors.

There are many indicators of acceptance for CPT, for example, physicians and many other health care professionals use CPT to report health care claims. Essentially all health care institutions use CPT. All third party payers processing medical claims use CPT. CPT is used internationally in South Africa, Mexico, England, New Zealand and Hong Kong. HCFA, the Department of Justice, Department of Defense, Indian Health Service and 42 state government agencies use CPT. Finally, of critical importance, CPT is used as the basis for the resource based relative value system for Medicare and other reimbursement systems.

As perhaps you know, the AMA is developing the next generation of CPT, CPT-5. CPT-5 is explicitly designed to meet the HIPAA criteria for a longer-term clinical procedure code set for administrative and financial transactions, and to be an integral element of the code sets for computer based patient records.

The CPT-5 project is incorporating the following actions: to reduce the ambiguity and enhance the consistency and accommodate non-physician needs; to insure that managed care plans can use CPT, making changes to accommodate practice guidelines and report cards and other quality measurement tools; to make additional changes such as increased granularity as we move toward computerized health information systems; to incorporate and enhance functionality and to address interest in a more hierarchical approach to CPT and associated specialized queries, using the current methods and technologies; and finally, to refine the editorial process to allow for open exchange of information through a greater participation and contributions from the national medical societies, the non-physician practitioners and other pertinent parties.

CPT-5 has been designed to meet the diverse needs of its users, including clinicians, payers, managed care plans and researchers, and we are capturing input from a variety of sources. CPT-5 currently has six work groups: managed care, research, non-physician practitioners, maintenance and education, sites of service, and structure and hierarchy. Members include public and private sector experts in coding and payment and health systems. In addition, representatives from the National Center for Health Statistics and Agency for Health Care Policy and Research and a liaison from the National Library of Medicine participate in the work groups.

CPT-5's core purpose, as expressed by the executive project advisory group, is in our written statement. The key elements are that CPT will include clear and comprehensive descriptions of clinically recognized and generally accepted services provided by health care professionals to patients and populations. It will be designed to communicate information about clinical services performed, and will address the needs of health care professionals, patients and payers for administrative, financial and analytical purposes, including outcome studies, public health initiatives, health services research and evidence based medicine.

The components of CPT-5 will be the codes with descriptors and instructions for uniform interpretation and correct application of coding conventions among health care professionals, payers and other users. There will be readily accessible mechanisms for periodic review and updating, with processes for reporting evolving technology and services in a timely manner. Finally, CPT-5 will be structured to support easy electronic interface and coordination with other computer interpretable health care terminology systems, electronic medical and health records, other fields on the administrative record, and analytic databases of varying levels of detail.

We anticipate CPT-5 will be released in 2002, although some aspects are already being implemented in CPT-4.

Turning to question three, I will indicate how CPT relates to the other medical terminologies. The AMA has taken initial steps with the National Library of Medicine and the College of American Pathologists to map CPT to SNOMED. In addition, CPT contains many concepts that are similar to those in ICD-9 CM, Volume 3. Formal relationships with other terminologies have been modeled in the NLM's Metathesaurus.

On question four, the relationship of CPT to health care message format standards. CPT is explicitly used in ANSI ASC X12 and HL-7 standards as a required data element for identifying professional medical services.

As I conclude, I will address question five and six, the federal government role in medical terminologies. The issue of federal government standards for computer-based patient record standards is addressed in Section 263 of HIPAA. Section 263 calls on NCVHS to study the adoption of uniform data standards for patient medical record information and electronic exchange of such information, and to report to the Secretary by August 2000 with recommendations and legislative proposals.

It is important to note that this issue is fundamentally different from the HIPAA administrative transactions. First, there is no federal legislation requiring implementation of patient medical record information standards or the code sets for such standards. Moreover, it is not clear if the HIPAA model for administrative standards, intended to address inter-enterprise communication, is fully applicable to patient medical record information standards that have a largely intra-enterprise focus.

We believe therefore that the federal government should limit its focus to broad recommendations for medical terminology frameworks, with code sets maintained by the private sector. The federal government could play a useful role however in helping identify gaps in standards and helping the private sector to facilitate and prioritize standards development. At the same time, the AMA does not believe that federally mandated PMRI standards are likely to be appropriate or productive.

There is no single solution to the vocabulary needs for electronic patient medical record information. Dr. Spackman of the CAP has previously identified the need for clinical interface terminologies, clinical reference terminologies and clinical administrative terminologies. Each of these types of code sets, which will overlap in their functionality, has an important role to play.

Clearly, SNOMED has a major and promising role as a clinical reference terminology for computerized patient records and other purposes. In addition however, CPT is a clinically based working terminology that reflects the preferred terms and communication needs of physicians and other health care professionals with carefully designed pre-coordination of terms. As such, it will be an important part of the overall clinical terminology solution.

Finally, coordination among code set developers and message standard developers follows a largely bilateral and limited multilateral model driven by the needs of each code set and developer. This model is exemplified by the coordination between NLM, the AMA and the CAP, as well as the work of the CAP and LOINC. Similarly, both X12 and HL-7 have readily incorporated such major external code sets as SNOMED, CDT and CPT, based on user needs.

Thank you for this opportunity to present the views of the American Medical Association. I would be pleased to respond to your questions.

MR. BLAIR: I'll just hold the questions until everybody finishes here. Thank you, Dr. Gordy. Robert Lapp, I think you are our last witness here.

DR. LAPP: Thank you. My name is Robert Lapp. I am the director of dental informatics for the American Dental Association. I am pleased to be here today.

As an engineer speaking for dentists, I will stick fairly close to the text I drafted, that was reviewed and approved by a number of dentists. That text pretty much follows the questions that were asked.

With respect to how you would define or describe a PMRI, at the American Dental Association we would prefer that it be called a health record. The American Dental Association believes that for optimal patient benefit, with the assurance of confidentiality safeguards, appropriate health information should be available at the time and place of care to practitioners authorized by the patient, through the development of the computer-based patient record.

The ADA's working group on the computer-based health record found that five fundamental criteria are essential for information to contribute to quality health care outcomes and efficient and economical delivery. Those five points are quality, utility, proximity, accessibility and confidentiality. These characteristics can be summarized in the working group's statement: "Complete and accurate information must be seamlessly available to all users authorized by the individual in a form optimally usable at the time and place required across the traditional boundaries of health care professions, specialty or care delivery environments."

With respect to why comparable PMRI is required and what functions it serves, I think we have heard considerable information about that so far today. But again, the American Dental Association believes that the availability of comparable health information at the point of decision enables the patient, the caregiver and the manager to efficiently use automation to enhance their best judgment in achieving long-sought optimal quality and economy in care delivery.

With respect to what are the implications of lack of comparability, I think that falls in Dr. Cimino's don't category. Essentially, a lack of comparability, precision and accuracy reduces the quality of care delivery to sub-optimal levels and increases its cost by the conversion, interpretation and verification that may be necessary.

Turning to the role of the American Dental Association in terminology: to achieve uniformity, consistency and specificity in accurately reporting dental treatment, the ADA developed the Code on Dental Procedures and Nomenclature, the dental code. It is published in the ADA's Current Dental Terminology and is intended to be used by dental professionals to record and report care provided to their patients. It is also used by third party payers for reimbursement.

The dental code is used by all practicing dentists in the United States to record and report dental treatment and procedures. This includes clinical, academic and administrative settings. The dental code has also been proposed as the HIPAA standard dental code.

During the preparation of the third edition, which will be released this summer, members of the ADA's Council on Dental Benefit Programs, representatives from the ADA-recognized dental specialties, representatives of the Health Care Financing Administration and representatives from nationally recognized payer organizations participated in the revision process. This revision process provides the opportunity to resolve any user reported gaps in the terminology.

In what areas are you planning to expand your terminologies? The ADA's advisory committee on dental electronic nomenclature, indexing and classification has developed the ADA Systematized Nomenclature of Dentistry, nicknamed SNODENT, which will provide dentistry with a comprehensive terminology to establish and define dental and oral disease classifications and comorbidities.

SNODENT was developed and is maintained by the American Dental Association. SNODENT will be an integral part of the computer-based patient record and will therefore be composed of diagnoses, signs, symptoms and complaints. This provides not only the means for diagnostic coding; when the coded data are collected, compiled and analyzed, reliable diagnostic and treatment outcomes data can be produced. It may also be used by third party payers to eliminate the need for narrative descriptions and other attachments which currently impede the use of electronic transactions in dentistry.

Dental practice management system vendors are expected to incorporate SNODENT in their systems to maintain a comprehensive patient health record.

With regard to how does your medical terminology relate to other medical terminologies, SNODENT is a microglossary of the Systematized Nomenclature of Medicine, SNOMED, and will be contained in SNOMED Version 3.6, which is maintained by the College of American Pathologists. The ADA has a licensing agreement with CAP for this purpose. The dental code, for procedures, is the established and accepted standard for dental procedures.

How does your medical terminology relate to health care message format standards? The ADA is a designated code source, and the dental code is used in the X-12 message format, and has been proposed as the required HIPAA standard dental procedure code. We hope SNODENT will be accepted in the X-12 standard and named as the preferred HIPAA standard dental terminology in the future.

Although both the ADA's dental procedure codes and SNODENT could be used with other message format standards such as HL-7 and LOINC, those formats are not currently used in dentistry. Dentistry is still largely a cottage industry, and doesn't have a lot of the communications equipment for which those message formats are used.

However, if the patient record design is adopted using these or other message formats, we would advocate acceptance of both SNODENT and the dental procedure codes as the required terminology for dentistry.

The ADA is also actively participating in the ISO TC-215 working group three on health concept representation to ensure that international terminology standards are adopted that provide comparable health record information.

With regard to question number five, are there issues related to medical terminologies that deserve government attention or action, we believe that the designation of the HIPAA standard terminologies is a major step that will promote convergence toward comparable nomenclature for health records.

The health record is essentially a repository of transactions, so an electronic transaction terminology standard will improve the consistency of the patient record over time. It must be recognized that terminologies are dynamic and change to reflect technological and methodological advances. Consequently, government adoption or endorsement of terminologies must not allow stagnation. Rather than standardize a specific terminology version, the ADA would urge the designation of terminology sources. In addition, it is important to encourage those with coding requirements to work with existing terminology sources to minimize duplicate efforts. This will ensure that consistent terminologies evolve to meet the changing needs of health care and provide comparable information in the health record for both immediate and future use.

Are there issues related to comparability of the PMRI? Continued participation of the government agencies in the national and international standards development process will promote the comparability of health record information. As previously stated, the HIPAA standard terminologies and code sets for transactions will encourage comparable nomenclature for health records.

Specifically, the efforts of the ISO TC 215 working group one, the modelling group that is developing an international model of the health record, should be supported. Increasingly, the health record format transcends national boundaries for patients, health professionals and vendors. Comparability of content, terminology and format is essential for general acceptance.

As we heard previously, coordination and communication as proposed by the HIPAA regulations for designated code sources and content committees will ensure that terminology standards are maintained. Formal coordination through the national uniform data committee can provide this.

We appreciate the opportunity to present our testimony, and would be pleased to answer any questions.

MR. BLAIR: Thank you to all of our witnesses. It was clear from the testimony that the usage of these statistical classification systems goes beyond just reimbursement purposes. We now have the opportunity to take questions. Bob, could you help me identify who the first questioners are?

DR. COHN: I agree. I want to thank our presenters. I think you have all done a wonderful job.

The committee is obviously very concerned about the issues around terminologies, but it is fundamentally most concerned about the issue of data quality and how it plays out in terms of all terminologies, as well as the use of terminologies.

You referenced as one of your issues that some payers do not follow official coding guidelines. I would observe that there are two parts of that problem. One is that some payers probably have other coding guidelines that are somehow not in sync with the official coding guidelines. Then there are others that say yes, that is fine, but we won't pay you for certain sets of codes.

Which is the bigger issue, and what are your thoughts on all of this?

DR. BERGLUND: The concern that I am raising here is that some payers may say, we would pay for this condition, but we want to define it in this way, and therefore we aren't going to pay for it even if it is coded properly under official guidelines; we are still not going to pay for it.

Now, basically the issue here is usually a matter of reimbursement, and the payers are in general trying to avoid reimbursement in as wide an area as they are able to.

I have heard some concerns expressed about this, both for ICD coding and in relation to CPT coding. I think it may be a broader issue than just ICD.

DR. COHN: So it is fundamentally a money issue?

DR. BERGLUND: I think that is probably true.

DR. COHN: Do others have comments about this?

MR. BLAIR: Marjorie, did you want to comment? I'm sorry.

DR. GREENBURG: I was just going to add, in response to your question, I think what is paid for is a contractual issue between the payers and providers, et cetera. But I think the problem is when people are told they can't code certain things in certain ways, because that then affects the quality of the data and the accuracy of what was actually done.

It is one thing to make a coverage decision; it is another thing to say you can't actually record certain codes, if they are valid codes.

DR. COHN: That is exactly the question I am trying to ask the panel, which is the bigger issue?

DR. BERGLUND: I think the issue from the payer point of view is that they are trying to avoid payment in some cases. But it would be more appropriate that they address that issue in different ways, I would say, and that they not request that people code things inappropriately or deny payment for something that is coded appropriately.

Again, that is a data quality issue on the other side, because if different payers are all making their own coding guidelines for how things should be coded, you have no comparability across the country.

MS. PROPHET: I would definitely concur one hundred percent with David's comment. There are certainly instances where the payer might say, we won't pay for this test for this diagnosis, and then people feel like they have to come up with another diagnosis.

But that is not really the bigger issue. The bigger issue is just as David said, where they say, we will pay you for the service, but we want you to describe it this other way which is not appropriate. If you describe it correctly, then we won't pay you for it.

DR. GORDY: There are some contractual arrangements also that need to be kept in mind. For example, Medicare will only pay for one service on a day except in special circumstances. So if a provider provides more than one service, they are likely not to submit it. They may submit it, but many think that it is fraud if they do; it is only fraud if they accept the payment.

But that is a problem and an issue in the provider network out there that needs to be considered in terms of capturing data. You are only going to get a code submitted if the provider is going to be reimbursed, for all practical purposes.

MR. BLAIR: I think Dr. Huff has a question.

DR. HUFF: Just a comment and a clarification. Both in the CPT presentation as well as the ICD-9 presentation, those codes were described as being used and approved for use in HL-7.

That is accurate, but I want to clarify that those uses were for billing diagnosis codes and for billing procedure codes, if you will. In both cases, those nomenclatures have been found insufficient for describing clinical detail. So when people send a different diagnosis that they call a clinical diagnosis, describing in detail what they need to know about a patient for a problem list or other purposes, they need codes other than ICD-9 codes to represent that. The same is true for clinical procedures, where the LOINC codes are approved for describing the detailed clinical observations that are made, and the CPT codes are used for the billing classification.

Just a clarification of those issues.
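
As a minimal sketch of the distinction just described (illustrative only; the code values and field names below are hypothetical placeholders, not official LOINC or CPT content), a single laboratory event can carry one code identifying the detailed clinical observation and a separate code classifying the billable service:

```python
# Illustrative sketch only: one clinical event carrying codes from two systems,
# each serving a different purpose. Codes and field names are hypothetical.

clinical_observation = {
    "coding_system": "LOINC",   # identifies the detailed clinical observation
    "code": "XXXX-X",           # placeholder observation code
    "value": 5.4,
    "units": "mmol/L",
}

billing_line = {
    "coding_system": "CPT",     # classifies the billable service on the claim
    "code": "8XXXX",            # placeholder procedure/service code
    "units_of_service": 1,
}

# The same encounter links both records; neither code set substitutes for the other.
encounter = {
    "patient_id": "example-patient",
    "observations": [clinical_observation],
    "claim_lines": [billing_line],
}

print(encounter["observations"][0]["coding_system"],
      encounter["claim_lines"][0]["coding_system"])
```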

DR. FITZMAURICE: In the past several years, I have noticed the American Medical Association's participation in more standard coordinating groups, and I have applauded that. I also applaud their use of CPT-5 to take into account many more uses than just payment for physician services, particularly the potential for incorporating quality measures in CPT-5.

I looked at page four, and noticed that you testified, Doctor, that currently no organization is focusing on the necessity for vendors to incorporate the specific needs of practicing physicians in the ambulatory care setting. I think probably you are right. I wonder, is there an appropriate place to identify the needs of these practicing physicians, or is it something that you think the AMA is going to carve out, looking at its physicians and responding to the community with what the needs are? I think it would be a very useful service.

DR. GORDY: I can only speak to the future. I think the answer is yes to your question. We don't have it developed at the present time, but I think that is the way CPT-5 is heading. In identifying various sites of service where CPT-5 can apply, we will try to develop codes for that purpose.

In some of the outpatient services now, as you may know, CPT-5 is required by the Medicare program. I'm sorry, CPT, not CPT-5, pardon me.

MR. BLAIR: I have a question. Could you all help me get a little bit of clarification as to the differences between the procedure codes covered with ICD-10 PCS and CPT?

DR. GORDY: I might make a comment. Currently, because ICD-9 was used to develop the DRGs, procedures done inside a facility such as a hospital, a surgical procedure, for example, are coded with an ICD-9 or PCS code.

In the outpatient setting and in the office setting, CPT is the procedure code used. There is a crosswalk to a certain extent between the ICD-9 and the CPT procedure codes for procedures done in the hospital, but perhaps Ms. Prophet can add to this a little bit. For reporting purposes, ICD-9 is the current coding system for in-facility procedures. Outpatient, as I said, is CPT.

MS. PROPHET: From the ICD-9 procedure point of view, the greatest amount of detail is in the surgical procedures; there are some other types of procedures, but certainly with minimal detail. As for ICD-10 PCS, while it was developed principally to replace those areas where ICD-9 CM procedure codes are currently being used, 3M did not feel that they could just develop a med-surg system, if you will. So it is a complete procedural system that covers all types of procedures, lab, imaging, rehab and so forth, all types of procedures with the exception of the evaluation and management type services that are found in CPT.

MR. BLAIR: Dr. Gordy, did you want to add?

DR. GORDY: No, I don't think so. I think the two comments are probably cogent.

MS. BICKFORD: Carol Bickford, American Nurses Association. I have a question from the perspective of a non-physician provider. It is not very clear that the systems that have been in place necessarily support those of us in nursing, not only in advanced practice but in other clinical settings. Similarly, other non-physician providers may be in the same predicament: we can't describe our clinical practice because we can't bill against it, and we still can't describe it because the code systems don't support that.

Am I correct in that assumption?

DR. GORDY: I think I'll try to answer that. I think that is probably correct. In CPT-5, we do have a work group for non-physician providers. That work group has been charged with developing a code set for the groups of non-physician providers which currently cannot use CPT codes. In some instances they already can; nurse anesthetists, for example, can use the anesthesia codes, so in that instance they can use the CPT codes.

In certain instances, the nurse practitioners can use the E&M codes. The psychologists can use the psychiatric codes. The social workers can use the psychiatric codes. There are places in CPT where certain groups of non-physician providers actually have codes at the present time, like physical therapy.

The hope is that we can extend this to the other groups that currently are providing services in the U.S.

MS. GIANNINI: I'd like to address the question, too. We found 600 codes that were billable procedures in the Nursing Interventions Classification system. We are doing the work, which will be finished next month, for the Omaha System and for the HHCC, the Home Health Care Classification system. I believe there is one other; I'm sorry, I'm forgetting it.

But what we have found is that all of those events are measurable. They have relative value units attached to them. We have asked the correct questions as we have gone over this for the last year, to define this in a billable way for payers and also to give them information on whether this falls within the scope of practice of each one of the providers.

MS. PROPHET: From the PCS point of view, there are probably some areas where some nursing services are covered, but I grant you, it probably isn't completely covered.

For example, there would be some in the areas of administration, measuring and monitoring, some in the rehab section, and some in the mental health section, but because it was not completely designed with nursing in mind, I am sure that there are some significant gaps.

DR. BERGLUND: From the standpoint of the ICD, ICD-9 CM does have some codes which would enable capturing of information along the lines of nursing diagnoses. However, in what circumstances such codes would be used and how they would be used in billing situations would be somewhat variable.

So it is not clear how useful that would be to enable reimbursement at this point in time.

DR. FERRANS: This is for Dr. Gordy. Thanks for your testimony. I just wanted to mention a couple of things from your testimony.

On page two, there was a reference to the AMA's position in the number four bullet: the focus on the CPR aspect of the human-computer interaction, and then in parentheses, the physician data input step, and working with software vendors on the design of interfaces.

I certainly applaud that as a goal. I think in our discussions of terminology -- and this is a conversation I have had with some of the other members of the panel -- we need to focus on this also. I know how difficult it is in our organization to get people to code completely and appropriately for a variety of reasons. That is just with CPT and ICD-9. If we then start expanding and using all sorts of other codes to capture more granular clinical data, we definitely need those interfaces. So I very much hope that the AMA continues on that.

I did want to make mention, however, of page nine, and perhaps you would like to address it. It mentions the AMA's position with regard to federal standards. It says, moreover, it is not clear at all if the HIPAA model for administrative transaction standards, which is intended to address inter-enterprise communication, is fully applicable to patient medical record information standards that involve a large intra-enterprise focus.

I think from the standpoint of the goal that we are pursuing here, there are a number of CPR systems that can meet intra-enterprise needs, but it is our hope and our goal that we can facilitate inter-enterprise communication, because of the portability of Americans and the necessity to aggregate information.

So I just wanted to point that out, and perhaps you wanted to comment on that. I think that is the reason why we are going after the clinical standards, or at least studying them.

DR. GORDY: We certainly respect that position. I think that that is the issue, to study the possibilities. At the present time, we just see it as very difficult in the inter-enterprise versus the intra-enterprise. But of course, we are more accustomed to working in the intra-enterprise section.

So we would applaud your effort to study it and cooperate in any way we could to aid you to do something in the inter-enterprise group.

DR. SABA: Virginia Saba, Georgetown University. I have been attending these meetings for several years now. I guess my first meeting went back to NIH in 1985, when we were revising ICD. They had a group of all the professionals there to try to see if ICD-10 could capture all the health care professionals' coded transactions.

Well, of course we were unsuccessful at that time from the nursing point of view. Here we are in 1999. We will be implementing HIPAA two years from now, and it seems that over these many years, the same tone comes forth: unless a code is reimbursed, it is not likely to be in any code set and is not likely to be in any computerized patient record system.

Among the questions that were presented to us as panelists was, what can the federal government do to enhance the HIPAA legislation or to close this gap? That is my question to the panelists and to the board.

DR. COHN: I'll try to reply. I think first of all, the health care terminologies have a number of purposes, one of which is reimbursement. But within an intra-enterprise environment, communication and decision support become very important focuses of terminology.

Having said that, terminologies within an enterprise may be a little different from terminologies between enterprises. I am struck that many providers in the health care environment and others, without incentives, do not willingly provide additional data.

As I say that, I'm not sure I have a strong argument that would persuade them that they should willingly provide other data for no reimbursement or other acknowledgement.

Clearly, I think we do reflect that there are different purposes for terminology.

MS. GIANNINI: I would like to say too that I think one of the hangups of some of the terminologies is that they weren't measured for the purpose of payment. I think that having a relative value unit next to a procedure is clearly the way that the industry wants to do business now. It is too hard to negotiate for services on a code by code basis. If you can go out with five conversion factors, however, and negotiate every procedure, then you are much more likely to have that procedure be part of the payment system. The insurance companies are a little bit afraid of handling procedures if they don't know at least a range of what the service might cost them in underwriting.

So I think that if people want to make sure that what they are doing is reimbursable, they have to measure it in terms of how long does it take, how difficult is it, what setting does it have to be provided in, and what is the training that went into getting the service, plus what the malpractice insurance costs for it.
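As a rough illustration of the arithmetic behind relative value units and conversion factors, the sketch below, which is a simplification and uses entirely hypothetical numbers rather than any actual fee schedule values, computes a payment as total RVUs times a negotiated dollar conversion factor:

```python
# Hypothetical illustration of relative-value-unit pricing; all values are made up.

def payment(work_rvu, practice_expense_rvu, malpractice_rvu, conversion_factor):
    """Total RVUs for a procedure times a dollar conversion factor."""
    total_rvu = work_rvu + practice_expense_rvu + malpractice_rvu
    return total_rvu * conversion_factor

# A payer and a provider can negotiate a handful of conversion factors
# instead of negotiating every procedure code individually.
print(payment(work_rvu=1.5, practice_expense_rvu=0.8, malpractice_rvu=0.1,
              conversion_factor=36.00))  # -> 86.4 (hypothetical dollars)
```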

MR. BLAIR: Any other comments in response to Dr. Saba's question?

DR. BERGLUND: I could state that certainly from the public health standpoint, we do at times like to see data reported just for the sake of having it. One of the only ways in which that may be driven is that there may be laws requiring that certain types of conditions be reported to public health. Even with that, at times the data comes in later than it is supposed to, and it can still be a problem.

But the types of conditions for which that could be required are quite limited also. So it is a difficult situation to get complete data. As Dr. Cohn was saying, it is very difficult without having some type of a financial driver, as it were.

DR. GORDY: I was just going to comment that what Dr. Cohn said a while ago is perhaps true in certain areas. Within the intra-enterprise areas, there are certain HEDIS requirements, for example, that might be fulfilled with code sets. But those generally are not sent out to the inter-enterprise setting.

On the other hand, you take a situation where an ophthalmologist does a surgical procedure, where that particular ophthalmologist administers the anesthesia, Medicare doesn't pay for that surgeon doing his or her own anesthesia. So it is unlikely you are ever going to capture that data.

MR. BLAIR: Are there any other replies and responses? Dr. McDonald.

MR. MC DONALD: Both a comment and a question to the panel, specifically to Dr. Gordy. The issue of intra-enterprise communication and the absence of things for the private practitioner, I think that is stated more absolutely than is accurate. There are certainly laboratory results that have been flowing into private offices, and many of the small office practice systems will take in laboratory results from outside, and there are a lot of other examples.

I think the AMA could be very helpful in focusing on areas that do have strength, and in building them up, promoting them and lobbying for them. I think there is an activity going on in the AMA in the next year to try to orchestrate some things, and fill in the places where it isn't happening, or agitate, et cetera.

So I think the AMA has a great role in all of this. I guess I would encourage this new activity to become more involved in the standards.

My question, though, is about the simple codes, being able to code chief complaints the way people would code them. I think we heard a lot about more detail, more detail, and that may be the clinician's ultimate nirvana, to have more detail.

But as one who actually does enter codes into a computer system, I can tell you, it is heinous. I would like to hear back from some of the panelists about the possibility that we could have very good clinical code systems that would let you say simple things.

The one I stumble over is depression. I say they are depressed. That is all I know how to say, but if I try to say it in ICD-9, there are 18 different codes reflecting all different levels of intermittent and on and off and started and stopped, and all these other things. I don't even know what they mean, let alone want to have to say them.

So I think I would like to have a response from the panelists as to how we can make sure that the codes are clinically fitting as well as the detail we want.

MS. GIANNINI: The University of Minnesota, in a study, has come up with over 700 reasons that people come to the clinic and say what is wrong with them. They have already started tokenizing that into other things. So it would be a good place to start; at least, this is patients saying what they think is wrong with them.

MR. MC DONALD: I don't think we are short of them. It is the policies that are (word lost) the stuff that gets us in trouble.

DR. POLLOCK: I think this relates as well to the previous question. There is an initiative in the emergency medicine and nursing community to develop a standard way of expressing a chief complaint, and to be able to teach that to triage nurses, to physicians, to medical students, so that there can be some consistency in the way that that absolutely pivotal piece of information gets entered into the record and gets used in an episode of care. It ties in with the decision support that might be triggered by a particular expression of a chief complaint, in conjunction with other information.

MR. MC DONALD: We heard for example last year at the NLM-NHLBI symposium on new information technology, in the heart attack alert program, how incorporating the chief complaint and vital signs and EKG can allow for very rapid decision support in determining the likelihood of benefiting from a thrombolytic agent.

So there are I think incentives that can be applied. I also think that while we are unlikely to see a completely comprehensive code set for all the chief complaints that patients can present to emergency departments, we certainly can achieve some consistency for the top 80 or 700 or whatever.

DR. BERGLUND: The issue of how to code depression without becoming depressed yourself. It is certainly one of those issues that you raised here on the level of detail that is available. While to many clinicians the depression is something they just want to say and leave it at that, of course the psychiatrist might be asking us for even more detail and nuances of how someone could be depressed. So trying to strike a balance between these in a system that is going to be used across the board can sometimes be difficult.

Certainly while we do have quite a bit of detail, we generally do like to have ways of coding things without further detail or not otherwise specified in some situations. So we do like to have some different ways that can capture the detail. But it is sometimes a difficult question.

MR. MC DONALD: I guess the plea would be, there is an ICD code, 288, for depression; I think that is the number. But the real issue is the policies that surround these, and some deliberateness in having the common, lumpy thing, like a chief complaint, just to be able to say honestly what you know. Maybe it is more a HCFA and procedures and billing issue that ends up making life very hard. You have to answer questions that you can't answer.

MR. BLAIR: Do we have other responses from our panelists? Other questions?

DR. POLLOCK: I am wondering if I should share the content of a slide I sometimes use in talks, which is from an emergency physician from Jacksonville, Florida, who noted several years ago that hospital administrators have rapid fire feedback about when to re-order lasagna noodles, but they would be hard pressed to provide to their emergency department directors the top 10 chief complaints in the previous year.

It is a matter of the way the investments are being made and the application of the information technology. Quite clearly, it is not being made in many instances where it could be of direct and immediate benefit.

MS. PROPHET: I would say that what you were describing is exactly the reason that the NOS option was added to PCS, even though the developers really wanted to have all that clinical detail, because everyone kept saying, sometimes all you know is that a procedure was done on the stomach. You don't really care or know which part of the stomach; all you know is that it is the stomach. So that is why that particular feature was added in for some of the characters.

MR. BLAIR: Do we have other questions? None? Let me thank our panelists very, very much for their excellent testimony. It is almost -- it is moving on to 3 o'clock, our break time. Can we reconvene at 3:15, please?

(Brief recess.)

Agenda Item: Clinical Specific Code Sets

MR. BLAIR: This will be our last panel of the day. Dr. Bidgood, can you start it off for us?

DR. BIDGOOD: This will be a change of pace. I shall try to do two things at the same time for you. Today we have talked almost exclusively about linguistic evidence, linguistic data, words, codes. That is important to us in biomedical imaging.

I represent the DICOM standards committee. We are a voluntary, non-proprietary message and document standards development committee, composed of professional society members, trade associations, industry, and domestic and international liaisons to trade associations and to government agencies.

My written comments answer the questions that were given us by the committee. These questions refer to the SNOMED DICOM Microglossary. This is the biomedical imaging component of the SNOMED international standard.

I am going to take another step beyond this to highlight a feature which you might miss otherwise, which is the importance of the standard open structured representation of non-linguistic evidence such as images and wave forms in patient medical records. This is an even larger factor in comparability of imaging data than the coded description per se.

In the DICOM organization, we are fortunate to have active participation from a number of specialty societies that you see in the transcript. There are active projects in a number of subject domains.

The challenge here is to maintain sufficient depth and breadth in our representation, insofar as codes go. That will be an ongoing challenge. In that case, imaging is no different from any other area.

Let's take a look at this deep problem of non-linguistic evidence and the interplay between codes and this type of evidence. Digital images and wave forms are non-linguistic evidence. They have great potential value to individuals and society. They have relatively high cost of production and documentation vis-a-vis other types of evidence.

Non-linguistic evidence is also relatively inaccessible. Interpretation requires special training, such that a diagnostic image is a black box. Much potential information is there. It is essential. Experts are very scarce, and we need to know what it says.

So we who interpret this type of evidence spend, in a nutshell, most of our working hours transforming non-linguistic evidence into linguistic evidence. Today we have been talking about representing as much as possible of that transformed linguistic output in some sort of coded standard form.

One might say that the effective value of non-linguistic evidence is proportional to the quality of our diagnostic descriptions. Let me give you an example. Here is a drawing of a carotid angiogram, showing some life threatening findings. Most of us would stare at this and walk away, not particularly edified, and the patient doesn't benefit much from our help. However, if we have a friend who is an expert standing behind us with a wax marker, or if we have in our database a way to link our descriptions, whether they be free text, numeric or coded, to the specific areas of abnormality, we immediately feel more comfortable. We get much more utility out of this non-linguistic evidence.

Here you see an arrow pointing to something described as critical stenosis of the distal left common carotid artery and orifice of left internal carotid artery. Unfortunately, we are not perfect. We live in a fallen world, and we have here evidence of that fall, the delta between the yellow arrow and the dotted white arrow which was the original opinion. We felt comfortable here, we are not comfortable here.

This is the crux of evidence-based medicine as far as non-linguistic evidence is concerned.

Here is another example. Suppose I give you such a piece of evidence, which is a line drawing of a chest X-ray. There are four separate classes of intentionally ambiguous findings here, all of which are life threatening either to the patient or to the attendants who are serving the patient.

I won't go into full detail. I'll simply show you -- suppose I give you a full text report in addition to the full non-linguistic data set, and ask you the question, do you agree with my interpretation? It is not possible to agree. This is the point. Unless you have unambiguous documentation which links our knowledge, the observer's knowledge, to findings at the feature level, you have more or less serious ambiguity in the documentation.

In a nutshell, the DICOM structured reporting model offers a standard interchange medium for attaching free text, numeric measurements as with LOINC coding, and codes from SNOMED, LOINC or other standard coding systems to features, and allows one to have an audit trail back to the root, the evidence which evoked the judgment of the observer.
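
As a rough sketch of the kind of linkage described here, and not the actual DICOM SR encoding, the fragment below shows a coded finding attached to a marked region of a referenced image, with the reference preserved as an audit trail back to the evidence; all identifiers, codes and coordinates are hypothetical placeholders:

```python
# Illustrative sketch of a structured-report-style content item; not DICOM SR syntax.
# All identifiers, codes and coordinates are hypothetical placeholders.

source_image = {"study_uid": "1.2.3", "series_uid": "1.2.3.4", "image_uid": "1.2.3.4.5"}

marked_region = {
    "graphic_type": "POLYLINE",                 # outline drawn by the observer
    "points": [(102, 240), (110, 244), (117, 251)],
    "referenced_image": source_image,           # audit trail back to the evidence
}

coded_finding = {
    "coding_scheme": "SNOMED",                  # or LOINC, or another standard scheme
    "code": "X-XXXXX",                          # placeholder code for the finding
    "meaning": "critical stenosis",             # human-readable text for display
    "numeric_measurement": {"value": 85, "units": "% diameter reduction"},
    "evidence": marked_region,                  # the finding is linked to the marked feature
}

# A less specialized reader (or a program) can retrieve the expert's finding
# together with the exact image feature that evoked it.
print(coded_finding["meaning"], coded_finding["evidence"]["referenced_image"]["image_uid"])
```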

So I began by saying that yes, we are developing specific and granular coding systems to suit highly specific contexts within various specialties of biomedical imaging. But that in itself would not be sufficient if we were not also working in DICOM on this structured representation of image content, and working with industry in such a way that it can become a reality rather than blue sky. It needs to be implemented in equipment in order for users to use it. The goal is open representation of image content, in which we store image features as objects.

We are able to document the specific evidence that leads to conclusions, where it is in the patient's interest and within the constraints of our business model to do so. All the abnormalities I have shown you today can easily be justified as worth marking, even in the busiest department.

I don't advocate, nor does DICOM advocate, that we go to the industry and to doctors and say, let's mark every finding; no, just the highly critical ones. After a while, the physicians learn to like that ability, such that we can attach this type of non-linguistic marking, which can double as an annotation or in fact a code, here presented as a text string but perhaps behind the scenes represented as a code, to that object for visual purposes.

In the database, the two are inextricably linked, so that a less trained person can do analysis on a large number of cases that were marked by highly trained persons. You can see the obvious efficiency here in large protocols and trials involving thousands of cases. To have the ability to link up structured knowledge, in other words, to share knowledge about non-linguistic data in commercially available equipment, is a radical departure from our current state.

The same applies to wave forms. I'll just show you this picture to show the possibility of linking to singles or multiples in a stream, as in a coronary angiogram.

So with the application of robust, granular, context specific terminology from SNOMED, from LOINC, from elsewhere, to this structured non-linguistic evidence model, with other powerful features as well, one achieves for the most difficult type of evidence closure of this loop: from practice guidelines to clinical data sets, now represented to some degree, high or low, in structured form, to performance measures upon explicitly marked data, and back to feedback into the practice guidelines.

Recommendations to the government. It would be very helpful to have some incentive to promote implementation of this type of model, perhaps procurement guidelines for military or other government agencies that would induce the use of compatible software.

We think that research and development is needed here, because some of the pointing, the non-linguistic denotation activity, will require some more R&D on the user interface.

Finally, we do believe that some sort of mandate, some sort of endorsement of robust clinical coding schemes is very helpful, since the plethora of non-standard schemes obviously increases the ambiguity of markings, even once we have pointed to what we want to talk about.

I have included some references for more detailed analysis. I will submit to the committee a full copy of the SR documentation model, which is the model behind DICOM SR. This is available for free use and reference for any purpose.

Thank you.

MR. BLAIR: Thank you, Dr. Bidgood. Dr. Spackman.

DR. SPACKMAN: Thank you. I am Kent Spackman, associate professor of pathology at Oregon Health Sciences University. I am here representing the College of American Pathologists in my role as chairman of the SNOMED editorial board. I thank you for the opportunity to appear here.

Since my appearance before the committee last year, CAP has embarked on several initiatives aimed at advancing the role of SNOMED as a national reference terminology for patient health information. There are four of these, and I will emphasize these as I go through.

First of all, the CAP continues to expand the functionality of SNOMED and has begun testing the next generation, SNOMED RT, or reference terminology. Second, the CAP is pushing ahead on its vision of a global terminology. To that end, we have entered into an agreement with the National Health Service of the United Kingdom to merge SNOMED and the Read Codes.

Third, the CAP has further opened up the SNOMED governance and the editorial processes to reflect broader stakeholder participation. Fourth, the CAP has begun the process for SNOMED to become recognized as an ANSI accredited standard.

We believe these actions taken together represent significant advancement of SNOMED as the reference terminology to support the patient medical record worldwide. I'll elaborate on each of those initiatives.

I am going to skip some of my answers to the questions about patient medical record information in the interest of time. But let me just concur with those who have spoken previously about the usefulness of secondary uses of detailed clinical information.

The patient medical record has tremendous potential to serve as an effective tool for case management, clinical decision support, evidence based medicine and research, and in order to achieve those results, there needs to be in place a uniform acceptable reference terminology that accurately represents patient medical record information, allowing for timely access to and retrieval of information.

Skipping down to an overview of SNOMED and its role in representing the patient medical record information, SNOMED is a comprehensive nomenclature created for indexing the entire medical record, including clinical findings, etiologies, interventions and many other essential components of clinical information.

As a reference terminology, SNOMED allows consistent gathering and transmitting of detailed clinical information, retrieval of information for disease management or research, and performance of outcome analysis for quality improvement. SNOMED is compositional, that is, you can create new concepts out of the basic concepts that are in SNOMED. Independent studies have demonstrated that SNOMED is unsurpassed in its comprehensiveness. The CAP has over a 30-year commitment to the development and maintenance of SNOMED. It is intended to facilitate access to clinical information regardless of where, when or who has a legitimate need to use the patient medical record information.
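
As a loose illustration of the compositionality just mentioned, and not SNOMED's actual syntax or content, the sketch below assembles a new composite concept out of pre-existing base concepts; the axis names and codes are hypothetical placeholders:

```python
# Hypothetical sketch of post-coordination: composing a new concept from existing ones.
# The axis names and code values are placeholders, not real SNOMED content.

base_concepts = {
    "D-0001": "inflammation",    # a finding/morphology concept
    "T-0002": "appendix",        # a topography (body site) concept
    "G-0003": "acute",           # a qualifier concept
}

composed_concept = {
    "finding": "D-0001",
    "site": "T-0002",
    "qualifier": "G-0003",
}

def describe(expr, dictionary):
    """Render a composed expression using the base concept terms."""
    parts = [dictionary[expr["qualifier"]], dictionary[expr["finding"]]]
    return " ".join(parts) + " of the " + dictionary[expr["site"]]

print(describe(composed_concept, base_concepts))  # -> "acute inflammation of the appendix"
```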

We believe SNOMED enables providers of various specialties, researchers, even patients themselves to have a common understanding of a patient's condition and the health care process across sites of care. So SNOMED is appropriate for all health care settings.

With respect to market acceptance, SNOMED is used both domestically and internationally for a variety of purposes. Its uses are numerous: mapping and standardization of medical terminologies, standardization of clinical reports, encoding of medical concepts, transmission of normalized data, infectious disease reporting, emergency room encounters, article search and retrieval, outcomes assessment, coding of adverse drug reactions in clinical trials, and the list goes on.

SNOMED licensees include information system developers, managed care organizations, community hospitals, multiple hospital systems, payer-provider systems, medical libraries, insurance companies, research entities and many government agencies, for example, the Veterans Health Administration and the Centers for Disease Control and Prevention. For example, SNOMED is used by all of Columbia/HCA's anatomic pathology laboratories as the core vocabulary for coding tissue samples, test results and cancer reports. As you heard earlier and you probably all know, Kaiser Permanente, the nation's largest nonprofit health maintenance organization, has chosen SNOMED as the core clinical reference terminology for its national computerized information system. Even in the U.K., the pathology record is encoded almost exclusively using SNOMED.

Each day brings inquiries of new uses of SNOMED. For instance, we are starting to see interest by radiologists in using SNOMED for encoding radiological images to facilitate retrieval. SNOMED has obtained widespread international recognition. It is currently used in nearly 40 countries.

Let me turn to those four issues that I mentioned at the start. SNOMED RT, the next generation of SNOMED, is currently undergoing rigorous beta testing by 30 institutions and vendors. These include the Mayo Clinic, 3M Health Information Systems, Oceana and many others.

SNOMED RT combines the granularity and comprehensiveness of SNOMED with improved clarity of meaning and analytic power. Its enhancements include over 110,000 concepts and more than 180,000 terms, each with unique computer readable codes, and in excess of 260,000 explicit relationships between terms and codes. It enables one to retrieve a case based on any variety of criteria.

During the testing phase, SNOMED RT will be applied to a wide variety of applications, including clinical documentation and decision support, HEDIS reporting, cancer research, data warehousing and disease management.

I made a note to myself about Dr. Campbell's comment earlier today about the five characteristics. One of them was vendor neutrality. It is clear, because of the influence of our working relationship with the Kaiser group, that SNOMED RT is going to be very vendor neutral. It does not give a particular proprietary advantage to any information system vendor. We have many different information system vendors who are testing SNOMED RT.

It has multiple hierarchies and explicit semantic definitions. We believe these will enable providers and researchers to retrieve and aggregate data more completely and consistently, and therefore benefit from more complete and accessible information.
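
One simple way to picture how explicit is-a relationships support retrieval and aggregation is the toy sketch below, which walks a small hypothetical hierarchy to collect every case coded with any descendant of a query concept; the concept identifiers are invented for illustration and are not actual SNOMED codes:

```python
# Toy illustration of retrieval across a hierarchy; concept IDs are hypothetical.

is_a = {
    "bacterial-pneumonia": "pneumonia",
    "viral-pneumonia": "pneumonia",
    "pneumonia": "lung-disease",
}

cases = [
    {"case_id": 1, "diagnosis": "bacterial-pneumonia"},
    {"case_id": 2, "diagnosis": "viral-pneumonia"},
    {"case_id": 3, "diagnosis": "asthma"},
]

def subsumed_by(concept, ancestor):
    """True if concept is the ancestor or lies below it in the is-a hierarchy."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = is_a.get(concept)
    return False

# Aggregate every case recorded at a finer granularity than the query concept.
print([c["case_id"] for c in cases if subsumed_by(c["diagnosis"], "pneumonia")])  # -> [1, 2]
```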

With respect to the CAP-NHS agreement, the CAP recently entered into a collaboration with the United Kingdom's Secretary for Health, on behalf of the National Health Service executive, to combine SNOMED RT and Version 3 of the Read Codes. This agreement will create a new international approach for computerizing the scientific terms that physicians, nurses and allied health professionals use for the effective management of patient records and medical communication. It will combine the strengths of the two existing terminologies: the strength of SNOMED RT in specialty medicine, including pathology, and the richness of the Read Codes Version 3 in primary care.

SNOMED RT and Clinical Terms Version 3 will continue to exist separately until the new work is well established. There will be forward compatibility from those systems to the new work. The CAP and the NHS will work closely with physicians, system suppliers, government agencies and end users to make a smooth transition from current vocabularies or terminologies to the new work, which is scheduled for availability by the end of the year 2001.

The merger of the two works will decrease duplication of effort between SNOMED and the NHS, while at the same time creating the most comprehensive language of health and providing an essential building block for a common computerized medical language for use around the world.

Point number three, greater stakeholder input. With the expansion of the SNOMED governance and the editorial processes, it is important to note that within the CAP, a separate division, SNOMED International, oversees the daily administration and maintenance of SNOMED. The intent is that SNOMED will reflect the ongoing needs of contemporary clinical practices across specialties.

The maintenance of SNOMED occurs through an open process, allowing for broad input from all stakeholders. The SNOMED editorial board has recently been expanded to include representatives from the NHS. At our request, the Veterans Health Administration and the Department of Health and Human Services have agreed to send liaisons to serve on the SNOMED editorial board, and the SNOMED authority, the governance body within the CAP for SNOMED, has also been expanded to include representatives with experience in enterprise-wide implementation of clinical systems, the needs of the global market and international standard setting activities. These individuals of course are not pathologists. That is, I think, a very remarkable change from the way SNOMED was governed in the past.

The expansion of the SNOMED governance attests to CAP's commitment to develop a clinical reference terminology useful for a multiplicity of users across various sites. Furthermore, SNOMED International will maintain its not for profit status, but will eventually become self sufficient. The CAP is committed to widespread availability and access to SNOMED with minimal cost and minimal restrictions.

ANSI accreditation. We are in the process of submitting SNOMED for ANSI accreditation by the canvass method. This application reinforces the CAP's commitment to the open, multidisciplinary approach to making enhancements to the reference terminology.

For example, as outlined in our application to ANSI, working groups of the SNOMED editorial board such as the convergent terminology group for nursing will be convened; actually, that group has already met. The purpose of the SNOMED convergent terminology groups is to advise the SNOMED editorial board regarding scope of coverage, creation of hierarchies and semantic terminology definitions, and the scientific accuracy of concepts and terms. So we believe that ANSI accreditation of SNOMED will further encourage the broader adoption of SNOMED worldwide.

We also believe that that addresses the second point that Dr. Campbell mentioned this morning, that of scientific validity. We feel very strongly that reproducibility, understandability and usefulness of the terms constitute the three criteria for a scientifically valid terminology, and we follow those very closely.

SNOMED relationships to other medical terminologies and health message format standards. Obviously, we have a vision of a terminology system that meets the needs of the users. That is really what is driving us here. So we need to cooperate and collaborate with others who have been developing terminology in a variety of ways, and other terminologies that have complementary or different purposes.

SNOMED is currently mapped to ICD-9 CM codes. We intend to enhance that mapping in collaboration with the NLM and the NCHS to include rules as well as code to code linkages. A mapping of SNOMED terms to CPT-4 has begun in collaboration with the AMA and the NLM. SNOMED also contains a set of concepts and codes that fully support compatibility with MedDRA, the Medical Dictionary for Drug Regulatory Affairs. There are many other relationships with terminologies, and you ought to refer to the CPRI Terminology II proceedings to see some of those other relationships.
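
To suggest what rules as well as code to code linkages might look like in practice, here is a hedged sketch, not the actual SNOMED-to-ICD map, in which a single source concept maps to different classification codes depending on context in the record; every code and condition shown is a hypothetical placeholder:

```python
# Hypothetical sketch of a rule-based terminology-to-classification map.
# Source codes, target codes and rule conditions are placeholders only.

map_entries = [
    # (source concept, condition on the record, target classification code)
    ("SRC-0001", lambda rec: rec.get("pregnant", False), "TGT-A"),
    ("SRC-0001", lambda rec: not rec.get("pregnant", False), "TGT-B"),
]

def map_concept(source_code, record):
    """Return the first target code whose rule is satisfied by the record."""
    for src, rule, target in map_entries:
        if src == source_code and rule(record):
            return target
    return None  # no applicable mapping

print(map_concept("SRC-0001", {"pregnant": True}))   # -> "TGT-A"
print(map_concept("SRC-0001", {"pregnant": False}))  # -> "TGT-B"
```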

We are a member of Health Level 7. We actively participate in the coordination of HL-7 messaging standards and SNOMED content. We also have significant interaction with DICOM, and I'll skip over that as well.

On to some recommendations about what you might do. Both in the U.S. and abroad, there is a growing recognition and support for standards related to medical terminologies. In the absence of such standards, terminologies having limited scope and limited direction have been developed to address emerging issues.

The government can play a critical role in testing and improving medical terminologies, as well as in fostering collaboration among developers.

First, the government should use its role as a provider of health care services to adopt and require standardization for clinical decision making and support systems in federally funded health care initiatives. An example of such an initiative is the government computer-based patient record framework.

Other examples of government health care initiatives include public health cancer and chronic disease reporting to the CDC or NIH, adverse event reporting to the FDA, Medicare risk adjustment data, and Medicare managed care reporting of HEDIS measures.

Second, the government can help to facilitate the coordination between the various interlocking medical terminologies and facilitate the interface between the end users, the developers of software systems and the developers of terminologies. Such was proposed at the recent CPRI Terminology II conference regarding the interface between SNOMED, the drug database vendors and HL-7 messaging. Collaboration among developers is essential in order to decrease redundancy across different systems and users in order to increase resource efficiency.

Finally, the government might consider providing financial incentives to providers for submitting government requested public health data in electronic form. This would help allay individual practitioners' concerns and help to align their interests with the interests of the nation's health.

Then issues related to the comparability of patient medical record information and government interface. In most information systems today, the data gathered for and entered into the patient medical record is not comparable. That minimizes the ability to share patient information, conduct appropriate patient monitoring and make appropriate quality assurance determinations.

Furthermore, because most information systems rely on paper based medical encounter data, retrieving patient information can be very inefficient and costly and time consuming. So the government could facilitate comparable patient medical record information by adopting standardized clinical reference terminologies for the computer-based patient record to allow common representation of patient medical record information. We believe that providing a recommendation along these lines to Secretary Shalala is well within the purview of the NCVHS.

Let me just mention the other three of Dr. Campbell's criteria. A terminology ought to be well maintained. We believe that SNOMED in its current situation is being very well maintained. For example, we have a microbiologist on staff at the CAP collaborating with the CDC experts on bacterial nomenclature to keep up to date the bacterial part of SNOMED, and the examples go on.

It is self sustaining, or at least on track to becoming self sustaining. We certainly would welcome suggestions from others in ways that we can accomplish that. We obviously have a particular model that we are pursuing at the present time, but that is not in any way fixed in stone.

We are following Dr. Campbell's scalable processes in configuration management. In fact, he is our main advisor on that. So we certainly feel we meet the five criteria that he set forward for a useful clinical terminology.

So in conclusion, the CAP would like to stress that in order to keep pace with current and future health care information needs, detailed clinical information in the patient medical record must be available whenever it is needed, wherever it is needed, by whomever has a legitimate right to that information. Central to the ability to access patient medical record information and use it to its fullest potential for patient care monitoring, quality assurance and so on is the need for standard reference terminologies.

SNOMED is the most complete reference terminology in use today that captures all aspects of the patient medical record in computer readable format. The government should consider the valuable contribution that it can make to the adoption of consistent reference terminologies by its own example. We believe that an electronic patient medical record will ultimately strengthen the future health information infrastructure and the health of our population.

Thanks.

MR. BLAIR: Thank you, Dr. Spackman. Dr. Goltra, I think that you are next.

MR. GOLTRA: Thank you, Mr. Chairman. Mr. Goltra, I'm afraid. I appreciate the accolade, however.

I am Peter Goltra, president of Medicomp Systems. Joining me today in this brief presentation will be David LaRoche, chief operations officer for Medicomp Systems.

I brought along a slide which I felt represented today the current state of the medical information superhighway. As you can see, there are some problems. One of the reasons we are all here today is to try and address those problems. I think (word lost) of the nomenclature that we publish may have a partial use in beginning to cut through the blockage that we see on this superhighway.

I would like to start with perhaps an overview of the entire situation of terminologies in general. I do that by making what might seem to be a rather silly analogy, but I think an important one for focusing in on this from a different perspective.

That is, would anyone seriously consider that all the people in this room should be wearing the same clothing? Obviously not. Would we have variations in style, but all the same size? Obviously not. Would we restrict that at a lower level, subcategorizing it: would all men wear the same size shoes? Of course not. Would all women wear the same dresses? No.

One can extend this considerably. Let's think about the terminologies. Does one or two or three terminology sets -- will that represent what needs to be captured and stored in medicine? Obviously not. Is there even a single set of characteristics that can describe all the terminologies? No. There have been some presented this morning, and those refer to clinical capture of medicine, but there are other aspects.

The primary aspect is billing. The CPT nomenclature is often criticized because of the ambiguity in some of its codes. In fact, I have found some that could be split out to 32 separate concepts, all contained within a single code. Is this a problem? If you are trying to use it in a clinical chart, yes, it is a problem. If you are using it for billing, it can be an asset, because you are grouping together like kinds of things for reimbursement.

So we are talking about the use of the terminology. Is any one terminology going to fit all the different uses? No. Is a single approach going to fit all the uses? No. So it is going to take different terminologies as the need arises, working together seamlessly so that the user doesn't get involved with them, to present the kind of system that is needed for a particular use, whether it be at the point of care or for research. They all need to work together. With that, perhaps we can make a difference on this medical information highway.

I would comment just briefly. Dr. McDonald, you brought up in the previous session the question about depression. I would submit that depression can be quite a different concept, depending on the point of view. The patient will come in perhaps complaining of depression, perhaps not, perhaps complaining of anxiety, other things. It is hard to say. Depression is a very elusive thing to tease out.

The psychiatrist will employ certain techniques during the session to draw out additional information and that will result in the physical finding of depression. Then combining with a history, whether there have been previous episodes, et cetera, will have a diagnosis of depression.

Are they all the same concept? No. They are all the same word, though. So it is not straightforward to determine how to bring all these terminologies together into one. We can't. The report for billing is going to be the conclusion. The clinical documentation is going to be something quite separate.

So with those few notes, I would add one additional thing before turning it over to David LaRoche. ICD-9 and ICD-9 CM, the clinical modification was made for very particular purposes. ICD-10, again quite a different terminology.

I had the opportunity to invite Dr. Butler and Dr. Louris down from Geneva to where we were staying in France last year for what turned out to be a very lengthy luncheon. In fact, I think we adjourned as they were starting to serve dinner. Part of the discussion was the American desires for ICD-10 versus the European. It is a very, very different situation.

With those thoughts, David?

MR. LAROCHE: Thank you, Peter. I am going to talk briefly, answer some of the committee's questions, and then show you in a little bit of detail, in the four or five minutes I've got left, what Medcin is and where it fits in.

When we look at patient medical record information, from our approach we are looking primarily at information to diagnose and manage a patient. There are different uses of it: reimbursement, research and patient care. As we are finding as we start to implement Medcin in our vendors' applications, the need for comparability between data elements varies by the use of those elements themselves. As I go through here, you will start to see what I am talking about.

Medcin as a component in that whole continuum of care really is based on point of care documentation, capturing and presenting information at the point of care in the way that the provider is approaching it, which is diagnosing and treating a patient. Other people may want to split that information down into much more atomic levels for research. They may want to group it for billing, but we are primarily trying to provide an engine to EMR vendors for documentation and presentation of the information they care about at the point of care.

In order to make the whole thing work though, we have to provide links to other terminologies, which is why, as I close my talk, you will hear me say that one of the government roles has to be to aggressively promote mapping between terminologies as one of the initiatives.

Medcin's primary setting is the clinical encounter. Medcin was the basis of our own proprietary EMR for most of our 20 years of development, until about three years ago, when we decided in 1997 to publish it and make it available at low cost to vendors. Since then, and particularly in the last six months with some of our software controls, eight different vendors who now have ambulatory sites with about 165,000 users have signed on to use Medcin as their basic clinical engine for presentation of information to the clinician at the point of care.

We are expanding Medcin in new areas as those providers and clients come to us and say, we need you to focus on this area.

The current situation out there for capturing information. If you look at a stack of charts, patient charts, and you start reading through them, in the ambulatory setting at least, about 80 percent of the information in any chart is history and physical data. It is not coded diagnoses, it is not coded procedures, it is the things that document the physician's decision to take those actions and make that diagnosis.

The dream of coding that information, capturing it and presenting it is that it be fast, easy to use, linked to other knowledge bases, easily customizable, standards based, and able to share data between that particular venue and the person in the back end who is researching it or needs it for billing.

The reality out there in the field as of this point has been that requiring the doctor to capture this kind of structured history and physical data is slow, confusing, and takes months of mapping. You have to build a large number of fixed templates. It is custom built, there are conflicting needs between the users, and when it is finished, if everybody has their own data elements, what kind of data do you have?

Our approach is a little bit different. In the current approach, which most people have thought about, they say, I see a lot of people with chest pain, I need a chest pain template. I see a lot of people with diabetes, I need a diabetes management template.

The problem with fixed templates is, it is not the way most clinicians work and think. Once you build them, you have to maintain them. There is a large number required to handle clinical differences. You are not going to treat chest pain the same in a 70 year old person with a 50 year smoking history as you are in a 15 year old female.

Clinical thinking can be reductive, which means that the more information a clinician acquires, the more closely they focus on a diagnosis and treatment. That is different from a template approach, which says: they have fever and chest pain and they are coughing up purulent sputum, it could be infectious, it could be heart, it could be pulmonary, I don't know what is going on. Most clinicians given that set of information will narrow it down pretty quickly, so the data elements provided at that point need to reflect clinical thinking, not the millions of possibilities without any decision process behind it.

The other problem is, fixed templates basically say to the doctor, we are turning you into a data entry person. Here is the data you have to enter for this encounter.

Peter has built, primarily with clinicians from the Harvard, Cornell and Hopkins systems over the last 20 years, the ability to use intelligent prompting, which is a form of dynamic templating that adjusts to the clinical situation during the encounter. It adjusts for age and gender, and at this point intelligent prompting uses an engine of about five and a half million links between Medcin findings and diagnoses, adjusted for age, gender, et cetera. It actually provides a dynamic presentation of findings for documentation, diagnosis and therapy.

On the next slide, please don't panic. I won't spend a lot of time on it. It really is the heart of our engine and why the vendors are turning to it.

When I say we have the ability to provide about five and a half million links between findings and diagnoses, what you are looking at on the screen is one of our 15,000 diagnoses, one of the ones that is fully mapped at its highest and lowest levels. There are about 3,200 of those. You are looking at a partial index extract from our knowledge base for combined congestive heart failure.

It has 409 items in it. As you go down the left-hand column, you will see that the first item says symptoms, 39 items. There are 39 symptoms expressed, which have some sort of a link to combined congestive heart failure. I picked about five out, and working down you see history, physical exam, tests and therapy.

The columns represent the links in the matrix. The first one, for example, under symptoms shows soft tissue swelling. In the second column, the presence of soft tissue swelling gets two points toward the diagnosis of combined congestive heart failure. However, just below that, soft tissue swelling in one extremity would tend to rule it out. We also have columns for the absence of an item, or the normal or negative; no soft tissue swelling, for example, tends to rule against combined congestive heart failure.

However, this is great, we have this fabulous diagnostic and decision engine which is represented by the three columns on the right. The problem is, if you want a physician to capture codes for the symptoms, history and physical exam of combined congestive heart failure, and you throw 409 items up on the screen, they are going to throw whatever device they are using back at you and say, I'm not using that.

So the intelligent prompting column, the IP, which is the first of the numbered columns, is where we allow the user to pick whether they want a long or a short list, with only the items that are absolutely hallmark for making that diagnosis. We do it by a value of one or two, which is different from the logarithmic values we use for the other columns.

The effect of this on the provider is that if you look at this slide, in the upper left quadrant, combined congestive heart failure, I have just taken a short list and extracted it from the 82 physical examination findings below that: heart sounds, S1, diminished heart sounds, S2, accentuated P2. When Dr. Bidgood was talking about presenting something in language appropriate to the person at the time of use, you can see that even though combined congestive heart failure, aortic stenosis and angina are all in areas of cardiology, there are some similarities between the items presented. For example, heart sounds S3 shows up in combined CHF and also in angina, but it doesn't show up in aortic stenosis.

If a physician is trying to document a case like this, or if they note that the JVP is elevated and there are heart sounds S3, we want to be able to provide a relevant list of items for documentation regardless of whether they are approaching it diagnostically or symptomatically. That is what this engine does.

Medcin in use in the field. It is used primarily as a user interface terminology to give vendors a way to provide clinically relevant information, given the clinical situation. They do it because it provides them rapid, clinically relevant documentation with a coded structure underneath it. It is currently licensed by about 10 vendors who, as I said, represent about 165,000 users.

We have links to other coding systems. As far as HL-7 and ASTM, we are members. We are participating in those working groups, and Medcin is considered optional content for clinical messaging.

As for the government role, I think the government should encourage vendor cooperation and mapping between terminologies, promote low cost of basic data elements, and encourage all vendors in any government sponsored project to participate in standards bodies.

I thank the committee for the opportunity to present, and turn it back to you.

MR. BLAIR: Thank you, David. Dr. Huff, are you next?

DR. HUFF: I'm next.

MR. BLAIR: I just want to thank all of the panelists that we have had so far today, not only because of the quality of the content of the presentation, but also its relevance and the fact that you have both done an excellent job of staying right on schedule as well. So thank you very, very much.

DR. HUFF: I'm glad to have an opportunity to present today. I'm Stan Huff. I work for Intermountain Health Care, also associated with the University of Utah, co-chair of LOINC along with Clem McDonald. I am chair-elect of HL-7 as well as co-chair of the vocabulary committee and advisor to the SNOMED editorial board.

I want to talk about LOINC. Because of the limited time, I want to get the facts out and then focus on one or two aspects: what LOINC does and why it is important, and how it relates to other vocabularies and to the message standards.

LOINC was initiated in 1995. It contains about 20,000 codes. It is free for use. The copyright is held by the Regenstrief Institute. It is available and free for download from the Web. Associated with those files on the Web is a matching tool, a matching application that you can have. I'll be glad to give you -- and we will incorporate these slides as part of the information that is available.

I think it is important to clarify what it is that we are trying to do. Sometimes we focus on the terminology and forget that. I think it can be used for lots of things. I want to focus specifically on communication of information between computer systems. I think that is an underlying thing that is implied by a lot of the other uses we have talked about, but we don't focus on it directly.

As we talked about, we can communicate intra-enterprise or inter-enterprise. Intra-enterprise, we are trying to accomplish the reduction of redundant data entry and we are trying to provide better clinical care. Inter-enterprise, it can be for public health reporting, studies such as outcome studies for clinical trials, and patient administration, from enrollment to claims payments and a whole bunch of other activities.

I think it is important to keep that in mind. The interface standards come into play because what we are trying to do is decrease the cost of communicating, and make it easier to communicate and have greater semantic meaning in the information that is transferred between the systems.

If you look at HL-7 messaging, the whole idea behind HL-7 messaging is that something happens in the real world and that becomes known to some system, and that system then transmits information from itself to another system, and that system may respond, either with an acknowledgment or with other information. The whole idea is that they are independent systems. We are not trying to standardize the systems. One could be object oriented and another could be a relational database. One could be a cancer registry and the other could be a clinical system.

What we are talking about is trying to get the information communicated from one to the other. In that context, HL-7 defines message structures. I think one of the points that I would make is that the record structure is as important as the terminology, because it is a combination of those two things that really leads to semantic understanding of what is being transmitted.

But in HL-7, you have some header information, you have patient identification, and then you have these two blocks that repeat, an OBR segment, which is a collection of one or more observations. It is a named entity and provides some hierarchical structure to the message. Then within that, you have individual observations. It is within the context of this kind of message paradigm that LOINC grew up. I want to explain that a little bit.

If you look at an OBX segment, it really is a named value pair, or some people talk about it as an entity attribute value combination of things. In doing that, you have one part that is the actual data. So in this case, the data is 38 and this is a temperature reading. The way that you know that it is a temperature reading is that there is a code there, in this case a LOINC code, that says this is a temperature reading.

There is another part of this message that says this code is a numeric item, and so you know that 38 is a number, instead of being some other arbitrary text string. There are other things in here that say, for instance, for this measurement what its units of measure are -- and again, that is a coding scheme to say what the units of measurement of this thing are.

There are other things, like the status of the field. There are also a bunch of other data fields that describe when this happened and who did it and what kind of method was used, and all of the attribution information that tells you how this thing happened and came to be.

What is the problem? The problem right now is that if you look at the HL-7 messages that are being used throughout the United States and internationally, people are not using consistent codes. They are not using consistent codes to name the attributes or slots that you are communicating about, and they are also using different codes to represent the values of those items.

What that means is, it takes a long time to map between any two implementations of an HL-7 interface. You have to match up vocabularies between the systems, and using today's technology, somewhere between 75 and 90 percent of the work of getting two systems to communicate involves terminology matching.

You and I might know that these things are similar, but the computers don't know until you physically map those codes. What we are trying to do is get that to be consistent. We are saying we want this to be the same code and we want these to be the same code. If we can achieve that regardless of what you are using internally in your own system, when you communicate it externally to other systems, you use a consistent vocabulary.

The philosophy behind that is exactly the philosophy of Esperanto. If we all learn one language, even though it might be different than any other language, we can all talk that language and from that language we can get to any other language. Whereas, if we do mapping between all the vocabularies, then we are into an N squared minus N number of mappings that have to occur in order for semantic communication to happen.

So LOINC's goal is to make a universal coding scheme for these observations within messages. So if we look at this as a real example, we would have a code in an OBX segment that means this is an ABO blood group that we are identifying. There is a code that says this is a LOINC code, so we can support multiple coding schemes in here, and you can identify the code and the coding scheme it came from. We have another code that now identifies that the actual observation made was blood group O. This is coming from the SNOMED international coding scheme in this example.

Other coding schemes can be used in this thing. We don't have time to talk about the strategy within HL-7, but that strategy would allow more than one coding scheme to be used while actually knowing the synonymy between those systems.

So when you look at it, you have got LOINC codes that are specifying the attribute or the characteristic, the thing that was observed or evaluated, and you have SNOMED codes that are being used as the actual values. Again, it doesn't have to be SNOMED, though SNOMED is certainly a good coding scheme.

The other thing that I wanted to impress upon and explain to the committee is that there is this inextricable association between vocabularies and the structure. So for instance, shown in interface A, you can represent something as simple as a blood type as two separate observations. One says the ABO group and a separate one says the Rh group.

Another system, in fact very commonly, can represent that as a single observation, where I have ABO and Rh together, and I say that it was O positive. So I have done a higher degree of aggregation in that kind of message.

So you have to coordinate them. It doesn't do you any good to just say we are going to use LOINC or SNOMED or Medcin, or even that we are going to use LOINC and SNOMED. You have to say, what is the record structure, how does that correlate with the use of the vocabulary, and how do I use that combined structure and terminology to communicate semantically from one system to the other, so that the other system knows exactly what I mean.

Final things. Some federal initiatives are already using LOINC. It is proposed for use in claims attachments as part of the HIPAA initiative. As Dan Pollock noted, it is part of the DEEDS database and the work for emergency department reporting. It has been approved for electronic reporting of laboratory data within the CDC. There is a proposal in place for use of LOINC in cancer registries and public reporting of cancer registry information, and it is already included as a part of the UMLS meta thesaurus.

What should the government do? Use HL-7. Support free-for-use medical terminologies with public money, be it through contracts with existing terminology providers. That could be a public private partnership, and we certainly want to use the good maintenance tools that have been explained. I think there is a purpose to that. Somebody in the government needs to manage those contracts and make the terminologies available. I think there is a role for the National Library of Medicine and the UMLS meta thesaurus and a lot of the currently in place infrastructure in the government for doing that.

Which terminologies? Make the best cost effective choices. Don't simply study it to death. I think enough is known actually about these terminologies, that we don't need to launch into a five year review cycle to understand which of these we ought to use.

You ought to use LOINC, I think without question. SNOMED and the Read codes play into the mix, and Medcin potentially, and others. I think NDC codes, with the improvements in clinical drug terms that have already been worked on within the UMLS meta thesaurus.

Then there are a bunch of classifications. If you look at it closely, this breaks down into reference terminologies versus user (word lost) terminologies versus classification systems. There is a role for all of those in the things that we are trying to accomplish. So ICD-9 CM, ICD-10 CM, perhaps other things such as procedure coding systems, et cetera.

The rest of my presentation goes into details, and I think they are self explanatory. My time is up so I'll stop. Thank you very much for your attention.

MR. BLAIR: Thank you, Dr. Huff. Karen Martin, are you next?

MS. MARTIN: Yes, thank you. I am Karen Martin. I am representing Omaha System today. You have two handouts. One is entitled ANA Omaha System, and it is an ANSE HIS updated handout that I will refer to in a couple of minutes, and the other is my testimony which again follows the questions that we were asked to respond to.

I have the pleasure of being the first of six nurses coming before this committee. We are extremely grateful to be here, and appreciate your invitation. My colleague, Virginia Saba, will follow in a minute. Judy Ozbolt, who is with us today, will be speaking tomorrow, as will three other developers of vocabularies. We are very appreciative of Carol Bickford and the ANA for the coordination, the visibility and the support they have given the vocabularies over time. It has been a very, very important piece for our profession.

As I begin my remarks today, I would like to respond from the perspective of the administrators and the clinicians who are employed in community-focused settings, i.e., home care, public health, school nursing centers, ambulatory clinics and case management programs. In other words, clinician teams that include multidisciplinary professionals and paraprofessionals, and that provide wellness and illness services in partnership with our clients, who are individuals, families and groups.

Most of these teams include nurses, physical therapists, registered dieticians, occupational therapists, social workers, speech language pathologists and home health aides. They use email, phones and faxes to contact each other while they travel throughout their communities. They also communicate with physicians, with dentists, with local community resources and others, many others, who are involved in providing client care.

The clinicians and administrators in community-focused settings need a comparable record to share details about the concerns and needs of the clients they serve, the services they provide and the outcomes of those services. Administrators need a comparable record to extract essential clinical, staffing, program and cost data, as they generate reports and manage their agencies, and as they seek, obtain and renew contracts with third party payers.

While I will be using the term client today as I speak about the recipients of care in community settings, some of my colleagues use the term patient. Most refer to the client or patient record rather than the medical record, and use the term health care terminologies rather than medical terminologies.

Now, how comparable, precise and accurate does PMRI need to be? For me personally -- and I think I represent my community-focused colleagues -- that structured record is an essential component of an automated, integrated, community-focused information system.

In the last 10 years, we have watched an unbelievable change in community systems and in their movement toward automated records. Administrators need precise and accurate detailed data before they can aggregate that data accurately to calculate trends and risks. Complete and accurate record information is a requisite for successful continuous quality improvement programs, for reimbursement, for accreditation visits and for audits, all part of the daily life of community focused health care.

What is the role of the Omaha system in representing the PMRI? It is to provide a simple yet comprehensive research based structure to describe and quantify the concerns of clients and the practice of nurses and other health care professionals. So I am truly coming from a clinical perspective.

The Omaha system was developed for use in community focused settings. However, as that development started in 1975 and has proceeded through four research projects ending in 1993 for the initial development, the number and type of users has expanded dramatically, as the emphasis on a seamless continuum of health care and a shift toward community focus for health care delivery increased.

Omaha system users now include nurse managed center staff, hospital based and managed care case managers, nursing educators and students, acute care setting staff, nurse practitioners and the international community.

Some further examples of market acceptance include recognition by the ANA, as I mentioned before, incorporation into the International Council of Nurses' international classification for nursing practice, inclusion in the JCAHO and NLN accreditation standards, an increase in commercially available software, publication in the health care literature and inclusion in major health care conferences. Because of the increasing community focus throughout health care delivery and the need for students to value standardized clinical data, the first college of nursing based its revised curriculum on the Omaha system beginning in 1997.

Now I would like to just very briefly refer you to the second handout, the ANA Omaha system summary that I mentioned a few minutes ago.

The purpose and some of the summary points of research are under 4A, description. Again, there are more references as we go through this in a minute, if you are interested.

The relevant characteristics on page two under K suggest that there are three parts to the Omaha system, the problem classification scheme; we use the term client problem more often, but nursing diagnosis is a synonym. Again, as some of my colleagues present today and tomorrow, you will hear these terms repeatedly.

The second part of the Omaha system is the intervention scheme. That again is set up in a taxonomic framework with categories, targets, client specific information.

The third part, the problem rating scale for outcomes, is designed as an evaluation tool to measure client progress from admission through points in time to dismissal from care. The three major points on that are knowledge, behavior and status. Again, my colleagues and I, as they present to you, will talk about our similarities and differences. The problem classification scheme is similar to NANDA's nursing diagnoses, which will be described tomorrow. The intervention scheme has some similarities with the NIC classification, the home health care classification that Virginia will discuss, and the patient care data set that Judy will discuss tomorrow. The problem rating scale for outcomes has similarities with the nursing outcomes classification that will be discussed tomorrow.

There are on the bottom of page two some of the references. The most comprehensive are two books, and then earlier this year we had the privilege of having the Winter 1999 issue of Online Journal of Nursing devoted to the Omaha system. There are five articles there. The website is noted. You do have to sign up and subscribe to that journal, but there is no charge for doing so.

In terms of publications, as I have noted on this handout, we are in the process of getting a new publication together. We have also -- just in this last week, I had a confirmation on our users group for next year, so we will be having a meeting in 2000. There are also some videotapes available, and I have commented on those.

Under VIA, there are some comments about automation and the variety of vendors that are beginning to use the Omaha system, and their similarities and differences. I have also commented on some of the international use. The first country to translate the Omaha system pocket guide into another language was Japan; that was done in 1997. When I visited Taiwan last fall, in 1998, the nurses in Taiwan reminded me quickly that they represent one-fifth of the world with their written Mandarin Chinese language, and they were interested in the Omaha system. I had an email just last week from another one.

The ideas or the relationships between the other code sets are outlined or commented on in some of the remaining items on this page. Specifically I will make a few more comments on those now, which is in relation to the number three question that you distributed to us for comment.

I have already referred to some of the similarities, some of the differences that you will hear as we each present. With the Omaha system and the others, they have to do with the development process, the timing of that development process, the terminology, the structure, the level of detail, the users and the research.

The Omaha system has come totally from the practice setting. The Visiting Nurse Association of Omaha began the research in '75 with the assistance of seven test sites throughout the country. That does not negate our colleagues in education, and I have referred to those; we have continued to have them in our loop. But truly, the effort came from the service setting. Therefore, some of the needs of service -- the absolute critical nature of keeping it very simple, keeping the number of items as limited as they could be, and yet allowing us to document comprehensive data -- have been absolutely essential. Staff in community-focused settings cannot spend any more time than needed documenting services. Their number one priority is providing care. It has been that way from the beginning.

My number four response, how does the Omaha system relate to health care message format standards. It is designed to describe multidisciplinary clinical practice. Software developers, who include the Omaha system in their products, control the selection of platforms and the message format standards that are preferred in the industry.

Are there issues related to terminology that deserve government attention? I think personally that agencies should help coordinate and disseminate terminology information as this particular committee is doing, and as the National Library of Medicine has been doing.

The ANA has developed a very good model that focuses on evidence-based practice in nursing and that addresses the standards and guidelines for nursing vocabularies. Governmental agencies should coordinate efforts between the clinical work of the professional organizations and the reimbursement issues of third party payers, review and update terminology standards and guidelines, and address security issues for the entire range of clients, health and illness services, and health care providers.

Are there issues related to the comparability of PMRI? Yes. The National Library of Medicine's meta thesaurus is one example of the benefits of government involvement, and certainly the ANA facilitated the crosswalk between the vocabularies.

No, it is not a hundred percent crosswalk at all. There are differences within the structure again, as I alluded to, that make that level of crosswalking impossible. But it certainly allows nurses and other health care providers to see many of the similarities and also the differences.

The other thing that I wanted to mention: nursing educators, masters and doctoral students, and practitioners are currently conducting federally funded Omaha system research that will certainly help improve the identification of problems, the selection of interventions and the measurement of outcomes, a very, very important part of the whole vocabulary development process. Certainly, we couldn't have done what we did initially without the four federally funded research projects. So moving and extending nursing science is certainly related to research funding.

In addition, there is a need to increase coordination among terminology developers, and between terminology developers and message standard developers. Discussions such as this meeting, the 1997 committee meeting and others illustrate the value of terminology discussions. A proactive, collegial attitude is needed to advance the work on these very, very complex issues.

I thank you very much for the opportunity to come today, and look forward to continued involvement. Thank you.

MR. BLAIR: Thank you, Karen. Dr. Saba, I think you are next.

DR. SABA: I am the last and not the least. This is the final presentation of this afternoon.

Thank you very much for letting me present. It is an honor that your committee is now including many of the developers of the non-reimbursement funded vocabularies.

But this committee did address one thing in their questions which has never been addressed before. That was what they called the medical record information, and how the medical record information can be incorporated into the computer-based patient record.

I am pleased you are broadcasting it on the Internet. We are having our first virtual graduation tomorrow at the university, and I am all for Internet education and Internet conferences, so I am pleased that this is being done, with the Veterans Administration, who is responsible for our distance learning educational program, which is a little plug for them. They have worked very hard in funding a lot of distance learning.

In any case, this patient medical record information is what I honed in on when I had to develop my testimony. You have my handout; I am going to focus on that issue and not on my written testimony.

The computer-based patient record: reflecting the information in the medical record is the key. Traditionally, the progress notes that are in the paper record reflect the care between the events that are coded, but that in-between part is not coded. They capture the care during and between home visits, and that is not captured in the coded requirements.

So what we have got out there is a paper record that has worked, but a computer-based record that lacks the patient care data that reflects the quality and measures the outcomes. That is the key reason why we need care data in our computer-based patient records, if we are going to implement HIPAA and actually reflect the patient record.

With that in mind, I am going to really focus on the home health care classification system, which attempts to get at the care that is provided during and between home visits. Traditionally, home health care, which has over 10,000 certified agencies, growing or shrinking as the reimbursement cycle impacts their services, uses a medical model. Yet the physicians are not in the home providing care. The health care providers or the patient care providers document their care in a paper record, and many times in a computer-based record, but the reimbursement is based on a calculation called a visit.

We have no knowledge of what goes on in the visit. No one is collecting that data. And if they are collecting that data, it is not being coded, it is not being reimbursed. Therefore, to measure outcomes of care at the end of an episode of care cannot be accomplished.

You have acknowledged in these meetings in the past that patient care data is the biggest gap in the care industry right now. So I propose that because of this care data gap, the federal government should look at how we can get at that. This committee should address how patient care, regardless of provider, can be coded and interrelated with the coded events that are now over-coded and overdone.

Everything that has been presented so far today, except for LOINC, gets at the procedure, gets at the diagnosis. So the home health care classification is a system. It was designed very much like ICD-9 or ICD-10, as the case may be, and it is coded like ICD-10. It follows the ICD-9 concept, where there is a volume for diseases or patient problems and a volume for procedures, and together they have the same structure or framework, which I call the 20 home health care components. These 20 home health care components represent the framework between the two vocabularies, so that you can code the terminologies and you can relate the two terminologies. Because it uses ICD-10 coding, it can be linked to the medical diagnosis, which is primarily used at this point in time for the entrance into the health care system for reimbursement, for Medicare funding, for third party payers.

So these 20 care components which are on the wall represent not only the psychological and the physiological, but functional and behavioral care components. This makes the vocabularies unique, in the sense that they not only get at the physiological and psychological conditions of care, but they get at the functionality and the health behaviors that care provides.

So with that in mind, you have then a system of two compatible terminologies. The chapters in the book are the 20 care components, and they represent the home health care classification system's two terminologies.

In my handout, you will notice that the home health care classification of nursing diagnoses consists of 145 categories. It is hierarchical in nature: it has 50 major and 95 minor categories. But we have expanded it, using the ICD-10 philosophy, with three modifiers. These three modifiers then expand the vocabulary to some 435 terms that apply to the diagnostic labels.

The same is done with the interventions. We have 160 nursing interventions, and we expand the intervention by using four types of action.

Now, the 160 interventions times the four types of action gives you 640 terms in that part of the book, or in that section of the vocabulary. That means that we can code care during and between home health visits. We can code the purpose of a visit, and here is a specific example.

Let me give you my best example. If you are providing wound care, you can assess wound care, you can teach wound care, you can manage wound care, and you can call the physician or refer wound care to somebody else. They all require different resources, they all have different costs, therefore the visit would cost differently, and they all have different implications for the quality and the outcome of the care.

The home health care classifications therefore can track care over time. This is critical, tracking care over time. So we have a beginning point, which is assessing the client or the patient on admission, tracking the care during and between visits, and evaluating care at the end or as an outcome. Then you have got a continuum. Then you can link that to the next setting and the next setting and the next setting.

The home health care classification is related to many other terminologies that exist. It is framed like ICD-10. It uses the HCFA 486, and it is complementary to OASIS. I have to point out that OASIS is the new set of quality indicators being required by the Health Care Financing Administration. But that is a data set; that is not a terminology, and that is not a vocabulary, and it does not track the care process. It gets at one point in time and does not follow through. So I see the home health care classification and OASIS as being complementary to each other.

It is approved by the American Nurses Association. It is in the unified medical language system. It is indexed in the Cumulative Index to Nursing and Allied Health Literature, which means you can use it as a search term in your literature searches. And of course, as Karen pointed out, it is incorporated into the international classification for nursing practice being developed by the International Council of Nurses. The nurses around the world, or the non-physician providers around the world, believe that they have a vocabulary, and they want to be visible by the way they document their care.

What am I doing to expand this? I now teach over at the Uniformed Services (word lost) at Georgetown University, which has given me a whole new arena. I can now test the system in the Army, Navy and Air Force -- Walter Reed, Navy, and Malcolm (word lost) -- which means they haven't got the restrictions that are in the civilian environment. It makes it a very nice place to test the tools. We are testing the tools in the ambulatory environment, and we are testing the tools in situations where documentation is required in the military forces.

We are also expanding the system to include what are called the primary care providers, because that is the term that is being used. They are called clinicians regardless of whether they are a physician, a nurse, an advanced practice person or a physician assistant; regardless, they are called clinicians, and they are all coding and documenting patient care in the same continuum of care. We have a lot of strategies that are different from the non-military environment. But they are interested in research, and they have research funds to test and develop a complementary coding set, separate from the military system that is being developed for the in-hospital environment.

What do I suggest you do from the federal point of view? I would like to see a study being done, where we compare the HCFA traditional codes that are being used for the reimbursement of the home visit, Oasis with the quality indicators, and the home health care classification system terminologies. I have a doctoral student who is working on the -- we are taking 10 conditions and we are coding them three different ways for each of the 10 to see which is the best way to cost out care, to measure outcomes, to evaluate the quality. Hopefully, with the support that I have now from the two universities, we can move this system forward.

I am pleased that your group is considering adding our terminologies to this in order to implement HIPAA in the next two years. We look forward to having our patient care data included in this process.

Thank you.

MR. BLAIR: Virginia, thank you very much. I think we are at a time now where questions can be asked to our panelists. Kathleen Fyffe.

DR. FYFFE: Dr. Saba and Karen Martin, do I understand correctly that the six ANA recognized vocabularies would be Omaha, NANDA, nursing interventions classification, home health care classification, patient care data set and nursing outcomes classification?

DR. SABA: Yes.

DR. FYFFE: Do we have documentation on all those?

MR. BLAIR: Yes.

DR. SABA: The home health care classification is in the public domain. It was funded by the federal government. Because it is in the public domain, it is on the web at Georgetown and can be downloaded free of charge. The people who are using it are out there; I do not have an audit of how many times it is downloaded, but it is widely used across the country and across the world.

DR. FYFFE: Okay, thank you.

MS. MARTIN: The other four presenters are tomorrow, so from what I understood, you already have materials in your packet. But maybe they will also be bringing more for handouts, I don't know.

DR. FYFFE: Okay.

MR. BLAIR: Profiles were also in the inventory for our committee. I'm sorry, did you have another question?

DR. FYFFE: No, Jeff.

MR. BLAIR: Then we have Simon, then we have Clem.

DR. COHN: Actually, I have two questions. Both are for Dr. Saba and Karen. One is, I saw both of your systems being referenced in discussions around Alternative Link. Yet in your testimony and written material there is no reference. What is the relationship between your systems and Alternative Link?

DR. SABA: In my case, they have been communicating with me. They want to take selected interventions and put them in their system. They don't want to take the whole system as a total package.

That is their right. They are sending me what they are using, but it is their interpretation of some of the terms that are coded in the system.

MS. MARTIN: My response is similar, in that I started communicating with that particular company in the fall of 1998, and I am not at a point to say what is or isn't or what I think of what may or may not be in it at this point in time. But they are moving along.

That is just one example of many groups who have added the Omaha system to software.

DR. COHN: Can I just ask one followup? Once again, I need to do a little more reading tonight about all the nursing systems. Are both of your systems focused on home care?

DR. SABA: Mine was developed by the -- was federally funded, and its purpose was to design and develop a system for documenting patient care.

DR. COHN: But it is in the home health environment?

DR. SABA: Surely, in the home health environment. It was designed to maybe be a prospective payment method. It was home health care, purely.

MS. MARTIN: The Omaha system's history goes back to 1975 with the federally funded research. Our goal from the beginning was to make sure we addressed the entire spectrum. So in other words, we have the nursing diagnoses or client problems as one part, the intervention scheme as the second part, and the problem rating scale for outcomes, or the outcome part, as the third. So we have the whole spectrum of the service and the documentation.

Both home care and other community focused services, i.e., public health, school nurses, clinic nurses -- and I mentioned as I spoke a few minutes ago the expansion into acute care setting, which is not where we were in 1975. But with the mergers and the seamless environment, that is where clinicians say they want to go.

DR. COHN: Okay, thanks for the clarification.

MS. MARTIN: Surely.

MR. MC DONALD: My question is addressed to Dr. Spackman. The term vendor neutral I don't think is a sufficiently worthy goal. I think Microsoft could be described as vendor neutral versus health care vendors, because they all use them. So I don't really think that is sufficient.

This gets into the idea that ultimately, we really have this yin and yang. I think many of us in the field would really like to see one standard for a purpose, not necessarily one code system, just because that makes the communication easier. If we can map them, that's fine, then we don't have to worry about it.

What is behind a lot of the discussion is that you would like to have -- and that has been a monopoly. How do you deal with a monopoly versus not having competition? I think for a federal organization to support it makes a tough thing.

The beauty of the monopoly is what we see in the Internet. We have monopolies in terms of, this is the standard used. It is a soft monopoly, but if you are going to use IP, you are going to use IP, and you are going to use it exactly as specified.

I think there are lessons coming out of there that I would like to have you look at. There has recently been -- the W3C committee came out with a specific statement about patenting, because it turns out that some of the people who worked on these standards are now patenting them, which could destroy this free, very rapid, economically advantageous flow. I think we will get the same thing in this field if we get the common standards, where there is a very rapid flow.

So they had some fairly strong words about what you are allowed to do, and they didn't preclude the possibility of patenting certain kinds of things at all. They didn't preclude patenting absolutely, but they said -- I can't remember the wording, but it has to do with a very, very low paying level in terms of the licensing arrangement.

So that is what -- it is a beautiful presentation. You are doing beautiful stuff, covering all kinds of things. I think that is along the lines -- something like that would make it easy to become a firestorm. I know everybody needs money to make these things work, but in a competitive environment it is all fine, it all works out. What we want to get is semi-monopolistic environments, just so we can have this commonness across everything.

So I just throw it out for you. You did mention low cost, but I have never heard that term in the numbers yet. Is that coming out yet? What may be low cost to one guy may not be low cost to the buyer.

DR. SPACKMAN: We are obviously learning. I think that in the HL-7 vocabulary committee we had the discussion about how do you get self perpetuating terminologies and on what principles should you rely.

A couple of the statements were that the terminology should be -- the cost of the terminology to somebody who implements it should be no greater than proportional to the value they are getting out of it, but also should not be greater than the cost of developing and maintaining terminologies. So there wouldn't be any usury going on.

But those are all general principles. I think those have to be brought down to actual implementation.

MR. MC DONALD: As I understand your fees, they are very low. I'm not sure I understand them, but I think in terms of just a vocabulary, I think $25 for your whole place, is that right?

MR. GOLTRA: Yes, the nomenclature is $25, with an unlimited number of users per site, and we have made it available for HL-7 messaging at zero cost.

MR. MC DONALD: Those are some nice ones to target. I understand all of the other yins and yangs, but it is really a matter -- I just think to get widely used -- Microsoft sells software very cheap, too. It is because it gets to be volume now, they turned out all right. Well, not very cheap, never mind. Strike all that stuff about Microsoft.

MR. BLAIR: Did anybody else have any other comments related to Clem's --

DR. SABA: I want to ask a question. How do you handle, then, the Read Codes, which is a code from another country that is owned by their government? Does that create a problem?

DR. SPACKMAN: No, it doesn't create a problem, because the government of the United Kingdom has provided the intellectual property rights to the College of American Pathologists for the work that will be created. CAP will be developing and distributing that work, and making it available to users in the UK and the U.S. and other parts of the world as well.

MS. BICKFORD: This is Carol Bickford from the American Nurses Association. I want to provide some more information to the work group. We have had a new data set recognized at the American Nurses Association, so we have now extended our recognition program to seven. The last one recognized in February was the peri-operative nursing data set, just for your information. There are now seven that have been recognized through our program.

We have very specific criteria that these products must meet. One of the key points is that each is research based. If you are interested in the criteria for recognition, I would be happy to provide them. Just let me know.

DR. FYFFE: Does your ANA Internet site tell you anything about that seventh one? How can we find out about it?

MS. BICKFORD: You can go to the AORN webpage and then go into their products. We don't have any links on our ANA page at this point. It is a new recognition, it is a new product. So we direct you directly back to that organization. I can provide that contact information.

DR. FYFFE: Okay, thank you.

DR. KUN: I have a point of clarification, Dr. Saba. I was wondering, now with the proliferation of daily home care in particular, for example, you talked about four different identifiers or modifiers for the interventions. If you are looking at the outcomes, the assessment, the caring, the teaching could be different. So are you planning to multiply now by two all the modifiers, since you can have a nurse that goes home physically or goes home remotely, and the outcomes could be different?

DR. SABA: We have tried to get some funding to conduct the selection of -- whether you go into the home physically or whether you go into the home through teleconferencing, by the type of procedure.

We developed a model -- Mitretek developed the model with my consultation -- and we are just sitting on it right now. But there is definitely going to be some way to determine whether a person should go in and get paid for what they do, or whether it can be done through two-way teleconferencing, with reimbursement for providing the equipment.

It is a great project. We have got it ready to go. It would be very nice to test. A model is needed. But I think you are going to need a decision support model or some kind of a decision criteria to decide who gets the home care person and who gets the home care teleconference.

MS. MARTIN: If I could just add, in addition to telehealth, community-focused services have never really been site specific. The home has been a major point of contact between health care providers and clients, but staff have provided services at clinics, schools and all sorts of other sites -- sometimes congregate meal sites at churches. So truly, the particular site is not necessarily an issue. Oftentimes the phone has been a point of care in addition to direct contact.

DR. KUN: Jeff, is it possible to get a question to the panel now?

MR. BLAIR: Absolutely.

DR. KUN: I would like your opinions on the following. Today's presentations were pretty much based on diagnosis and patient care, which depicts today's health care system. As genetic-related information creates new information that will eventually be incorporated into the computer-based patient record, the system will perhaps shift toward disease prevention. That type of system should allow individuals to take drugs or follow procedures which may delay the onset of a given disease by years, or maybe even avoid it, thereby improving the quality of their lives and perhaps decreasing the cost of health care.

How will we incorporate this information into the loop represented by Chris Chute this morning? For example, he talked about medical knowledge, the clinical guidelines, the expert systems and therefore the patient encounters themselves. How will they be affected? What will that do to the terminology and the classifications and the code sets, and in some cases, even reimbursement?

I am concerned that we might create a whole classification that, in a few years when the genome map is completed, will need to be shifted, or that we will not know how to incorporate that information.

MR. BLAIR: Is the question being directed to each of the panelists?

DR. KUN: No one in particular.

DR. SPACKMAN: I'd like to take a crack at that. First of all, it is not true that the current terminologies that we have -- if you look at SNOMED and the Read Codes -- exclude preventive procedures. In fact, there are preventive medicine procedures now. They may not be adequate for what we are looking at in the future, so the terminologies obviously will have to evolve over time.

That is exactly the reason why you have to have terminologies that are maintained on a very active and proactive basis, so that as new molecular diagnostic procedures come on line, and as new aspects of the genome come along, you track those very quickly and institute those changes in the terminology as they occur. As we recognize the need for better naming of the procedures that we do to prevent disease, we ought to be adding those concepts, and the mechanisms for gathering that data, into our terminologies.

So my bottom line on your question is that it is precisely for that reason that we need to have well maintained terminologies. This is not a building we build once and set aside, saying, okay, we are done with terminology now; it is a living, breathing thing that has to change over time, and in today's environment probably has to change rapidly.
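
A minimal sketch, with hypothetical concept identifiers and release structure rather than any developer's actual format, of the maintenance pattern described above: new procedure concepts are added under existing parents in the next release, and obsolete concepts are retired rather than deleted so older records stay interpretable.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Concept:
    concept_id: str
    preferred_term: str
    parents: List[str] = field(default_factory=list)  # IS-A links to existing concepts
    retired: bool = False

class TerminologyRelease:
    """A versioned snapshot of the terminology (illustrative only)."""
    def __init__(self, version: str):
        self.version = version
        self.concepts: Dict[str, Concept] = {}

    def add_concept(self, concept: Concept) -> None:
        self.concepts[concept.concept_id] = concept

    def retire_concept(self, concept_id: str) -> None:
        # Retire rather than delete, so data coded with the old concept stays readable.
        self.concepts[concept_id].retired = True

# A new molecular diagnostic procedure appears between releases and is slotted
# under an existing parent instead of forcing a rebuild of the whole scheme.
release = TerminologyRelease("1999-05")
release.add_concept(Concept("P0001", "Diagnostic procedure"))
release.add_concept(Concept("P0150", "Gene mutation analysis", parents=["P0001"]))
```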

DR. GOLTRA: I would like to add to what Kent just said, and expand on your question, if I might. It is not just therapy that is going to be affected. History obviously is going to be affected. People will have in their history the fact that they do have particular genetic proclivities.

In the diagnostic area, imagine the shift that is going to occur in what we today call a diagnosis, when there is available that kind of data. There is going to be massive reclassification. As a result, as Kent said, we have to be very on top of it and constantly modifying. As that data comes along we are going to have huge tasks in front of us.

MS. MARTIN: I would certainly agree that our work has not kept up, or certainly can't foresee exactly where the genetic piece is going to go. At the same time, however, the Omaha System problem classification scheme has had three modifiers from the beginning -- health promotion, potential, and actual -- which apply to each of the 40 problems or client concerns. So the health care provider specifies the modifier; that is an algorithm route, and it goes in that direction.

So the preventive piece has always been built in.
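
A minimal illustration, with hypothetical field names rather than the Omaha System's published format, of how a problem plus one of those three modifiers might be recorded, so that a preventive (health promotion) contact is captured with the same structure as an actual problem.

```python
from dataclasses import dataclass

# The three modifiers described above, applied to every problem in the scheme.
MODIFIERS = ("health promotion", "potential", "actual")

@dataclass
class ProblemEntry:
    problem: str    # one of the roughly 40 problems or client concerns
    modifier: str   # "health promotion", "potential", or "actual"

    def __post_init__(self):
        if self.modifier not in MODIFIERS:
            raise ValueError(f"unknown modifier: {self.modifier}")

# A preventive contact and an actual problem are coded with the same structure:
entries = [
    ProblemEntry("Nutrition", "health promotion"),
    ProblemEntry("Skin", "actual"),
]
```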

DR. SABA: You are really talking about the lifelong, computer-based longitudinal record that Dr. Gabrieli has been promoting for many years. The military has developed a new chip to collect the history of our servicemen, which will give a snapshot of their health. So it will all be a part of the history that they are going to need to find out why their parents died.

On the other hand, as Karen said, prevention is not new. Many of the studies done in Arizona have proven that preventive services, regardless of patients' status, have prevented admissions into hospitals and have made it possible to maintain chronically ill patients in their homes.

MR. BLAIR: Are there any other responses to Dr. Kun's question?

DR. BIDGOOD: I would just say that although your focus was on the marginal benefit of the new genetic information, imaging as a screening tool for disease will continue to be a mainstay. As we go to the structured representation, our sensitivity, and certainly our monitoring of our effectiveness, will increase in areas such as breast carcinoma screening, diabetic retinopathy screening and malignant melanoma screening, which are three of the first initiatives on which work is being done with DICOM Structured Reporting now.

MR. BLAIR: Any others? Then we have a question from Michael Fitzmaurice and from Dr. Richard Ferrans.

DR. FITZMAURICE: I have a question for Dean Bidgood. Dean, what you presented seemed to be fairly simplistic, not that it is simple to do. But what you in DICOM seem to be doing is linking digital images with text, coupled with well-placed arrows and other links, so that the radiologist's notes can be better understood in context.

You seem to also be developing a geographical representation or map of the body and body organs, so that you can point to representations of the body organ rather than the actual picture, so you can start combining things like images.

Is my understanding right? Maybe it is also very simplistic. Will this work for things like waves, too?

DR. BIDGOOD: Actually, it is disarmingly simple. We are fortunate that DICOM is so widely implemented in radiology and has been accepted in so many medical specialties internationally now. So we can expect to see wide implementation of just the simple transfer of the images.

The beauty of that simple fact is that each of those workstations that supports a DICOM image also fully supports the semantics of the coordinate system of the image, which gives us the ability to represent any region of a DICOM image unambiguously, in a very concise way.

So we don't have to transmit a full bitmap, which in the case of a mammogram can be several megabytes large, but instead a very concise string of characters. This enables us to do the kinds of things one can do with GPS, with global positioning systems. We take it a step further here, in that we are actually linking up Descartes and Aristotle -- geometry and taxonomy -- with freedom of expression, so that one can attach information at any point along the spectrum of expression, from free-form recorded speech, to free text, to categorical text, to numeric discrete codes or very highly constrained networks of codes. Then one has a very, very powerful combination of those two worlds.
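
A minimal sketch of the idea described here, with hypothetical structures and placeholder identifiers rather than the actual DICOM Structured Reporting encoding: a region of an image is identified by a few coordinates in the image's own coordinate system and linked to a coded concept, instead of shipping a multi-megabyte bitmap overlay.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class CodedConcept:
    scheme: str     # e.g. a reference terminology
    code: str
    meaning: str

@dataclass
class RegionAnnotation:
    image_uid: str                      # identifies the image being annotated
    polygon: List[Tuple[float, float]]  # vertices in the image's column/row space
    finding: CodedConcept               # what the region shows

# A suspicious region on a mammogram: a short string of numbers plus a code,
# rather than a several-megabyte bitmap. The UID and code are placeholders.
annotation = RegionAnnotation(
    image_uid="1.2.840.99999.1.1",
    polygon=[(1021.0, 540.5), (1080.0, 552.0), (1055.5, 610.0)],
    finding=CodedConcept("SNOMED", "XXXX", "suspicious mass"),
)
```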

DR. FITZMAURICE: So that will also work for say brain waves, images of brain waves that you could point and say, here is where I think something is showing a problem, just like an image?

DR. BIDGOOD: The principle is very similar for multidimensional time dependent data of any type, EKGs, EEGs, wave forms, yes.

DR. FITZMAURICE: Thank you.

DR. FERRANS: I just wanted to add to Michael's comments. What I find so attractive in that is that 80 percent of inpatients get X-rays done, and this is really the only way we are going to be able to leverage that information from the standpoint of data mining, or to have comparable data and start to look at findings in a very objective way.

I think that as it is implemented, and as we aggregate all this information, we will make a significant number of important and unanticipated discoveries about patterns of disease that we haven't been able to capture, code, and make sense of before.

My other question though was actually to Kent. It had to do with maintenance and the principles that were talked about earlier about a well maintained system.

Given the fact that we are now going to be combining with the Read Codes, I find it interesting that we are going to have an international system prior to having a national system. But maybe that is the route we need to take. In any case, given that, and given what I see anecdotally as an acceleration of discoveries and new drugs, and given that if we do have some sort of national vocabulary, or if we come up with different components that we are going to use in the government for whatever reporting purposes, I would think that the maintenance cost would actually go up in the future.

What are your feelings on that? Is that a threat to continuing the process that is well underway now? Especially when people really start to rely upon this for the day to day clinical operations, as the systems are deployed that have the mature vocabularies in them?

DR. SPACKMAN: We don't actually anticipate that maintenance costs will go up. In the merger between SNOMED and the Read Codes there will be a certain time period where we have extra work to do, as we identify which codes mean the same thing and which mean different things, and put it all together in one coherent structure.

I anticipate that will take two to three years. After that is over, I think the maintenance costs will actually go down, especially because we are distributing them across larger groups of people. The NHS will have an interest in maintaining the terminology from their perspective, and we will have an interest from our perspective, and so on. We will have these more distributed working groups up to speed.

So I think in general, my anticipation is that the costs will go down, not increase, especially because once the structure is in place and the processes are in place for distributed development, you then have the ability to take advantage of the contributions of large numbers of people, instead of having just one or two people in a very localized area trying to make the decisions about every single addition to the terminology.

So it is a combination of distribution of effort and the existence of a well defined structure and process that will accomplish that.
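
A minimal sketch, with made-up codes, of the merge step described above: decide which codes from the two systems mean the same thing, give each distinct meaning a single concept in the merged structure, and keep the legacy codes attached as synonyms.

```python
from typing import Dict, Set

def merge(system_a: Dict[str, str], system_b: Dict[str, str],
          equivalences: Dict[str, str]) -> Dict[str, dict]:
    """system_a/system_b map legacy code -> term; equivalences maps an a-code
    to the b-code judged to mean the same thing."""
    merged: Dict[str, dict] = {}
    next_id = 1
    matched_b: Set[str] = set(equivalences.values())

    for a_code, term in system_a.items():
        legacy = {a_code}
        if a_code in equivalences:
            legacy.add(equivalences[a_code])       # same meaning, one concept
        merged[f"C{next_id:05d}"] = {"term": term, "legacy_codes": legacy}
        next_id += 1

    for b_code, term in system_b.items():
        if b_code in matched_b:
            continue                               # already represented above
        merged[f"C{next_id:05d}"] = {"term": term, "legacy_codes": {b_code}}
        next_id += 1

    return merged

# Toy example: one pair of codes means the same thing, one code is unique.
merged = merge(
    {"A1": "myocardial infarction"},
    {"B7": "heart attack", "B9": "angina pectoris"},
    {"A1": "B7"},
)
```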

MR. BLAIR: For the most part today, we have had the opportunity to have our committee members and the audience ask questions of the panelists. But this last couple of hours has been rich with information, so I would like to invite our panelists to ask questions either of other folks that testified or of the audience at large.

DR. SABA: I have one. These questions that you have posed to us are very innovative, and on the one about what role we envision the federal government playing to help us, I guess we are probably all going to say we would like to have some funding. Is that in the books?

MR. BLAIR: The book hasn't been written yet. I think that is the answer.

DR. SABA: I was going to write a proposal and put it in my testimony.

MR. BLAIR: Right, right. I think that I could fairly indicate that the pattern of the questions might reflect our thinking to some extent. As long as the private sector and the private marketplace is making progress, then that is probably the best way to handle it. The only thing that we were looking for is whether there are dysfunctions in the marketplace that this panel or other panels advise us of -- where the marketplace is having difficulty creating a level playing field, or difficulty creating a code structure that needs to be created, or where coordination isn't going as smoothly or as quickly as it should. Those are the kinds of things that I think we are looking at first.

Simon may want to add to that.

DR. COHN: I was just going to say that that is one member's opinion. Nothing has been decided at this point.

DR. SABA: But do you think that you will come up with a uniform framework for all the vocabularies?

MR. BLAIR: We are continuing to gather information. We don't know yet. Is that what you think we should do?

DR. SABA: Definitely. I think we should have a framework that we can all link to at some point, one which could satisfy everybody's needs. The framework would be freestanding; the vocabularies could remain copyrighted or in the public domain and be unique for a certain area, but the framework would be a common federal requirement -- a structure, like ICD-10, to which everyone would have to code their data sets, and/or have a way to map.

DR. SPACKMAN: It is important to recognize what one means when one says framework, because it could mean lots of different things. My own ideas, and also my impression of the opinion of the College, would be that the government should, as Jeff has said, not be too heavy-handed in this area -- not impose standards or regulations in a way that would stifle development and collaboration. If there is a framework, it should be conceptual.

The idea that somehow, this committee would recommend to HHS that everyone has to code using a particular thing or code a particular way I think at least at this stage would stifle the natural development. We are making great progress. I think we are making fabulous progress, but we are not there yet. I can't point to a particular approach and say this is exactly what it should be, and if you don't do it that way, you are crazy. That just doesn't exist yet.

Obviously, I think SNOMED is wonderful, and everybody should adopt it, but I think there is more to the picture of the patient medical record information than just reference terminology. All those other pieces of the puzzle aren't put together completely well yet, even if you take as a given that you are going to use SNOMED. And some people don't even take that as a given, I assume, but they ought to.

DR. COHN: Actually, I am not going to make a comment here.

DR. GOLTRA: Just very quickly, if I might, I would like to echo Kent's comments. The marketplace is just at this point becoming aware. The interest at the various shows in the past years has picked up dramatically. People are actually starting to use systems or want to use systems. That is going to feed back with requirements for all of us to evolve, to add new capability in our terminologies, perhaps to change structures.

Kent, I think you would agree that this is something that we are all very sensitive to, and that we will be the very first, all of us, to respond to the needs for changes in the frameworks or in classifications because of genetics, whatever it turns out to be. The industry with its checkbook is amazingly persuasive when it comes to these issues.

MR. BLAIR: Could I get a feeling from Dr. Bidgood and Dr. Huff as to their perceptions?

DR. BIDGOOD: The coding standards that we have looked at, even today, have quite diverse structures. It would of course be quite difficult to push them into a common framework.

It would certainly disrupt the users out there who have implemented what they have. But I am always one to try to unify things. I like to think we would have a long time schedule on that. I have a real bias about what the framework would look like, and I suspect that some of the coding systems out there would probably choke on it.

MR. BLAIR: Any other observations or comments?

DR. FERRANS: Kent, actually I wanted to follow up given what you have just said. What is your feeling as to what the appropriate role of government is, which was one of the questions that we always get back to? We are very much seeking that guidance.

DR. SPACKMAN: I think in my testimony I said it, but I will rephrase it. What I would like to see functionally -- and this doesn't tell you exactly what you ought to do, but what I want to see functionally come out of it is a process whereby the vendors of clinical information systems become more, not less, eager to implement detailed clinical terminologies in their systems, basically that the marketplace will be fostered and expanded, and also that there will be less uncertainty about some of the decisions that people need to make regarding clinical terminologies.

On the other hand, you want to avoid mandating something that would be rigid. So functionally, that is where I would like to see the government take action.

So one of the ways that the government could do that is, for example, for the GCPR to say, here is our approach to standardizing clinical information; for HCFA to say, here is our approach to detailed clinical terminologies; and in gathering HEDIS data, here is an approach. For the government, basically, to lead by example and say, these are the approaches that we are going to take.

Obviously, that doesn't mandate what other people are going to do, but it certainly acts like the 500 pound gorilla in the marketplace, because if the marketplace has to respond to that, then there will be movement. So that is one way for the government to get its act together.

DR. COHN: So you are saying that this is not -- a permissive environment would be one for example that HCFA or another large insurer in the marketplace might say, we are requiring that you do X, Y and Z now, this way? Is that what you meant by that, or are you referring to interaction between the FDA and the health service and things like that?

(Simultaneous discussion.)

DR. COHN: When you are talking about standardizing, do you mean internal to the government or interactions between the government and the private sector?

DR. BIDGOOD: Right, interactions between government agencies internal to the government -- the VA, the DoD and the Indian Health Service -- in the way they are doing things. Then if there are lessons learned out of that which say it is much more cost effective to adopt a particular approach, they might begin, in a non-regulatory way, to encourage government-to-private-sector interactions to take place in that same way, and as a last resort mandate what things should be, once it is clear that it really does reduce administrative overhead, save costs and so on.

DR. COHN: Meanwhile, I was not talking about mandate, I was just talking about requiring for payment. That was really what I was curious about.

DR. BIDGOOD: I think the whole issue of payment -- I would like to see it kept as separate as possible from the issue of the patient medical record information details, but with a link between them; keep them independent but linked, is what I mean. The mandates for what needs to be transmitted should have a different set of requirements than the rest of the patient medical record information, which would be more free for development and for variation.

DR. SABA: So how are they going to implement HIPAA then? Are you saying that this transmission for reimbursement cannot be standardized, or have a common framework? They have to do one. They have to build something.

DR. BIDGOOD: No, you standardize the parts that are mandated in the law. When you say implement HIPAA, I assume that is what you mean. Then this second part that we are talking about today, I think there are still things that can be done to move industry and the medical records developers and physicians and providers and everyone else towards convergence. But it is too early to mandate it.

MR. BLAIR: Peter, do you have a comment?

DR. GOLTRA: Yes, I would like to pass along a thought that occurred to me about six months ago. It is one of those realizations that suddenly strikes in the shower.

If one looks at what HCFA did with the E&M codes, and in particular the physical exam with all those bullets and subcategories and so forth -- which may or may not go forward, who knows -- the concept of what happened there was that the government produced what we could consider a virtual nomenclature overnight, with a stroke of the pen, because that is really what those E&M guidelines for the physical exam are: a virtual nomenclature. By the time you have mapped that, you have a whole new structure.

If the government can coordinate and work together to think out all the possibilities and the potential implications of what they do, for those of us who are responding in the industry to these various initiatives that come forward, I think it would be very helpful.

MR. BLAIR: We are just about approaching 5:30. I think there is one more question. Kathleen?

DR. FYFFE: I don't know if you are going to like this one. I'm going to ask a -- this is a full day question, actually, but Kent, you said you would like to see the scheme be decoupled from the financing or the reimbursement scheme.

My question to everyone, and you will never be able to answer this question, is, how do we get the folks out there in the field to code and use these coding and classification systems, if they don't have the carrot of reimbursement? I am being very cynical when I ask that question.

Now, you all in the panel were all good Girl Scouts and Boy Scouts and so forth, and you don't need a short term incentive, but how do you get very, very busy people out in the field to do this, if you decouple it from reimbursement?

DR. SPACKMAN: I have encountered that question many times right in my home base of pathology. Pathologists say, SNOMED is great, but I don't think pathologists should code. And I say, I agree with you, pathologists shouldn't code. They should use structured data entry or auto-encoders or something that gets the meaning they are expressing in their pathology reports into the data form that can be used for all the secondary uses of data that everybody is talking about. It has to be made easy for the primary care physician and the pathologist and everybody else to get their data into that form for all those secondary uses.

That is where our biggest difficulty lies at the present time. If I take it as a given that we have combined the Read Codes and SNOMED, and we have this great concept representation system, we still have a gap between the user, who has to get that data in in some form, and that great concept representation that we can then manipulate, map to the administrative terminologies, and do all kinds of wonderful things with. Bridging that gap is the kind of thing that MEDCIN has been developed to do, that Oceania is working on for Kaiser, and that lots of other organizations are spending time on, and it is the piece that is least well developed at the present time. But I think we are making great strides.

The College of American Pathologists has created a set of checklists for cancer reporting. Within the pathology community over the last 10 years, there has been a lot of work on how to structure pathology reports so that the appropriate information about the cancer gets sent to the surgeon and the oncologist, and also so that the right information gets to the cancer registry.

It is not an extremely high tech solution, but it does involve having a reference terminology and some kind of set of well understood -- I wouldn't say standards per se. They are not standards, they are protocols or practice approaches for getting that information. Once our laboratory information system vendors catch on and get that information and begin to code it all the same way, we will begin to derive that benefit.

Now, the question is, how does the government foster that, and I think that is the question that you have to help us with. But that is my vision of how these things are going to work.
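
A minimal sketch of the structured-entry idea described above, with made-up checklist items and placeholder codes rather than the actual CAP checklists: each answer the pathologist selects already carries a reference-terminology code, so coded data falls out as a byproduct of completing the report.

```python
# Each checklist answer carries a terminology code (placeholders here), so the
# pathologist never codes by hand; selecting the answer is the coding step.
CHECKLIST = {
    "histologic_type": {
        "infiltrating ductal carcinoma": "CODE-0001",
        "lobular carcinoma": "CODE-0002",
    },
    "margin_status": {
        "margins negative": "CODE-0010",
        "margins positive": "CODE-0011",
    },
}

def code_report(answers: dict) -> dict:
    """Map each selected checklist answer to its terminology code."""
    return {item: CHECKLIST[item][answer] for item, answer in answers.items()}

# The narrative report and the coded form come from the same entry step:
coded = code_report({
    "histologic_type": "infiltrating ductal carcinoma",
    "margin_status": "margins negative",
})
```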

DR. GOLTRA: As they say, I'm glad you asked that question. As Dave showed with the intelligent prompting, we think that we have a bit of an answer to the resistance on the part of the physician to enter structured data, and thus have coded history and physical data in the chart.

If indeed that is the case, and these various vendors are using MEDCIN in that way to capture data at the point of care, then by MEDCIN working with SNOMED as we are, and working with the Health Language Center, which you will hear more about tomorrow, and with other groups -- as all of us together share information, as SNOMED deconstructs MEDCIN into its components, as the Health Language Center tags the intellectual content -- this data becomes available not through overcoming resistance, to answer your question, but rather as a byproduct of the physician doing what he ordinarily does.

MR. LAROCHE: One of the initiatives that we are working on right now regarding -- many of our vendors want to put these fairly advanced electronic medical record systems --

MR. BLAIR: Would you identify yourself?

MR. LAROCHE: Dave LaRoche of Medicomp. They want to put these fairly advanced systems out there and the physicians wouldn't document anything if they didn't have to for reimbursement. Basically, it has come down to that.

One of the approaches that we are experimenting with is tying our diagnostic guidelines to other peoples' therapy guidelines, guidelines for treatment. There was a study in one of the magazines, industry magazines, last month that showed the average cost in the health care system of obtaining a referral to a specialist or for a procedure is about 70 dollars, and a lot of that is buried at the payer level, some of it is buried at the level of the doctor's office, et cetera.

If we can for example synchronize our data with the guidelines of the carrier for approval and get that to happen electronically, it gives somebody an incentive to pay to have that electronically done to drive that 70 dollars of cost out of the system. We are currently working on a couple of pilot projects to prove that concept.

I think that that does a couple of things. It lowers costs, it may or may not improve care; it depends on how good the guidelines are. But it will make the care and the approval of care more immediate, which is a positive for the patient.

No one vendor or guideline developer or terminology developer has all the answers, but the linking of those things electronically is one of the reasons we said that all the terminology developers have to work together. We have drawn the line and said, we will provide a way to read the past data at the point of care in the way the doctor thinks about it. If we can link that to the way the payer thinks about the guidelines and what is appropriate for the procedure, based on what is in the record -- don't do the MRI until you have done a CT scan and it is inconclusive, et cetera -- then for that 70 dollars spent to approve a transaction the provider knew was the right one in the first place, there is some opportunity there. We think that is where the real opportunity lies, but you do have to incentivize the provider. So you do it by saying, do this and we will give you some incentive per transaction if it goes through electronically. We are testing that. In a year or so, we'll let you know how it works.
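
A minimal sketch of that kind of electronic guideline check, with an illustrative rule and hypothetical field names rather than any payer's actual criteria: the requested procedure is compared against what is already in the record before approval, so the manual approval step can be skipped when the prerequisite is already documented.

```python
from typing import Dict, Tuple

def check_guideline(requested: str, record: Dict[str, str]) -> Tuple[bool, str]:
    """Return (approved, reason) for a requested procedure, given prior results in the chart."""
    if requested == "MRI brain":
        ct_result = record.get("CT brain")          # prior CT result, if any
        if ct_result is None:
            return False, "guideline: obtain a CT scan before the MRI"
        if ct_result != "inconclusive":
            return False, f"guideline: CT was {ct_result}; MRI not indicated"
    return True, "meets guideline"

# Run at the point of care, so the roughly 70-dollar manual approval step can
# be skipped when the record already shows the prerequisite was met.
approved, reason = check_guideline("MRI brain", {"CT brain": "inconclusive"})
```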

MS. MARTIN: What I have watched over the years is that agencies whose staff nurses, PTs, administrators and so on believe in the power of data, who start using it and find value in it in various ways, get so much more turned on. The accuracy of the collection is so much better than when the only carrot is that you are doing it because you have to for reimbursement purposes. That oftentimes turns into a negative, and sometimes clinicians in particular resent it tremendously.

I have also watched faculty throughout the country, as I work with colleges of nursing, particularly now beginning to work with some social workers and some other students, and as students and faculty start valuing data in a very different way than it used to be valued, I have seen this interest, the desire to use it and the accurate use of it just change dramatically.

DR. SABA: She brought up an interesting point. At this point in time, medical education -- all of health care education -- is paper and pencil. Until students learn how to do a physical using a computer screen instead of paper and pencil, we can't expect our physicians, our nurses and our other health care professionals to use technology, when we don't even use technology in teaching them how to document their care or assess their patients.

MR. BLAIR: Are there any final responses to Kathleen's question? If not, then thank you, everyone, for a tremendous day, and we will reconvene tomorrow at 8:45.

(The meeting adjourned at 5:40 p.m., to be reconvened Tuesday, May 18 at 8:45 a.m.)

Transcript for Tuesday, May 18, 1999