[This Transcript is Unedited]

DEPARTMENT OF HEALTH AND HUMAN SERVICES

NATIONAL COMMITTEE ON VITAL AND HEALTH STATISTICS

SUBCOMMITTEE ON STANDARDS AND SECURITY

May 22, 2003

Crowne Plaza Hotel
1489 Jefferson Davis Highway
Arlington, VA 22202

Proceedings by:
CASET Associates, Ltd.
10201 Lee Highway
Fairfax, Virginia 22030
(703)352-0091

P R O C E E D I N G S

DR. COHN: I want to remind everyone that we are being broadcast over the internet, and I want to make sure that everyone speaks carefully and into the microphone. I've been as big an offender on that as many others over time, but we will certainly remind our presenters, and during our discussion after the presentations, I'll try to also be vigilant on that.

Agenda Item: Call to order, introductions, review agenda - Dr. Simon Cohn, Jeff Blair

This morning, we continue our health care terminology discussions and that will be all morning. Obviously, Jeff Blair will be leading those sessions. Thank you, Jeff, and Susie and Marietta and all the others and Steven, I'm sure I'm forgetting a number of others, and Michael, for helping to organize these. I think these have been tremendously helpful so far. I'm hoping that we can make some significant progress this morning.

Now, this afternoon, we will be having some discussions and I think an update about the federally-funded ICD-10 cost impact study being undertaken by Rand. As I've commented before, we will be adjourning at 3:00 today.

Hopefully, there will be some time in the midst of all this to at least have a brief discussion about the interim enforcement rule. This is the third time I've mentioned it, hopefully to get you all thinking about what sort of letter needs to be produced by the full committee and subcommittee and we will be discussing that at some point, maybe right after lunch for five or ten minutes.

With that, let's have introductions around the table and then around the room. For members of the subcommittee, I would ask that if there are any issues coming before us today for which you need to publicly recuse yourself, please so state.

Jeff, would you like to introduce yourself?

DR. BLAIR: Jeff Blair, Vice President of the Medical Records Institute, Vice Chair of the Subcommittee on Standards and Security.

There's nothing I can think of to recuse myself of, but I just remembered from yesterday that I left out one of my affiliations. I'm a member of AMIA, HIMSS, HL-7, and ASTM.

DR. BEBEE: Susie Bebee, NCHS, staff to the subcommittee.

DR. GREENBERG: Marjorie Greenberg, NCHS, CDC, and executive secretary to the committee.

DR. PICKETT: Donna Pickett, NCHS, CDC, staff to the subcommittee.

DR. STEINDEL: Steve Steindel, Centers for Disease Control and Prevention, liaison to the full committee and staff to the subcommittee.

PARTICIPANT: (Inaudible), Centers for Medicare and Medicaid Services, staff to the subcommittee.

DR. GRAHAM: Gail Graham, Department of Veterans Affairs, staff to the subcommittee.

DR. MEADOWS: I'm Beverly Meadows, from the National Cancer Institute.

DR. DOLIN: I'm Bob Dolin from Kaiser Permanente.

DR. BUTLER: Sam Butler from EPIC.

PARTICIPANT: Also from EPIC.

DR. PITTELKOW: Mark Pittelkow from Mayo Clinic and Mayo Foundation.

DR. FITZMAURICE: Michael Fitzmaurice, Agency for Healthcare Research and Quality, liaison to the committee and staff to the Subcommittee on Standards and Security.

DR. HUFF: Stan Huff with Intermountain Health Care and the University of Utah in Salt Lake City.

I will need to publicly recuse myself from any votes or discussion on LOINC. I'm also involved in HL-7 and looking forward to the testimony today.

DR. ZUBELDIA: Kepa Zubeldia with Claredi, member of the committee and subcommittee.

DR. SUJANSKY: Walter Sujansky, independent consultant and advisor to the Subcommittee on Terminology Standards for Patient Medical Record Information.

(Introductions of audience.)

DR. COHN: I think that's all present.

Before I hand this over to Jeff for some introductory comments, I do need to make two comments.

One is that, as you all know -- and this will be more of an issue this afternoon when we are talking about ICD-10 -- I am a member of the CPT editorial panel, and therefore I will be publicly recusing myself from any discussions related to CPT.

The other is I'm obviously happy to have Bob Dolin presenting this morning. Bob is a colleague from Kaiser Permanente. I think you all realize Kaiser is a very large provider of health care. We are the nation's largest non-profit health maintenance organization. We have obviously asked Bob to come and discuss some of Kaiser Permanente's experiences with medical terminologies.

I just want everybody to realize that he is obviously with the same organization as I am. Jeff?

DR. BLAIR: Thank you, Simon. Let me give a brief introduction for those of you who were not here yesterday, so that the activities of this morning are put in context. I'm going to be briefer than I was yesterday.

The Health Insurance Portability and Accountability Act directed the NCVHS to study issues related to the adoption of uniform data standards for patient medical record information and the electronic exchange of that information and to report to the Secretary by August of 2000.

The NCVHS did so. For most of us, there was no definition of what PMRI is; many folks consider it to be clinical data standards or electronic health record standards. You have the flexibility to interpret it as you will.

As of that August 2000 report, we set forth guiding principles for the selection of standards and a framework for how we would proceed.

We made our recommendations on PMRI message format standards in February of 2002. Those message format standards, along with laboratory LOINC, were announced by the Secretary on March 21st of this year as CHI standards -- that's the Consolidated Health Informatics initiative -- meaning that DOD, VA, and HHS have adopted, or are adopting, those standards.

In August of last year, we began the process of studying, selecting, and recommending PMRI terminology standards. That process has included getting guidance from the industry in a day-and-a-half session in August in terms of the scope, the approach, other considerations, and other guidance.

We wound up pulling that into a modified set of criteria for selection. Those criteria were then reflected in a questionnaire that was sent out to all of the PMRI terminology developers that we could identify and responses to those -- it was about a 14 or 16 page questionnaire -- were returned to us in the February time frame.

They were analyzed by our consultant, Walter Sujansky, and presented to the subcommittee March 25th of this year with a first draft of that analysis.

That first draft selected, out of more than 40 terminology developers, 10 terminologies that met the basic technical criteria of concept orientation, concept permanence, non-redundancy, and explicit version identifiers.

We have then, in the subsequent months, shared with the terminology developers the analysis information to see if our analysis was accurate and appropriate and complete.

A second version was prepared by Dr. Sujansky in April with those modifications, and the number of terminologies that qualified based on those criteria was 12. Based on that, we invited users of those terminologies to testify to us yesterday and today.

And today we have a panel of four testifiers. Since we have quite an agenda for yesterday and today, we have asked each testifier: if you have one terminology, please try to keep your comments within ten minutes.

For each additional terminology that you will be testifying on, you can add five minutes. So if there are two, that's 15 minutes, and if there are three, that would be 20 minutes. Those are the guidelines we would ask you to try to stay within.

I'd like to welcome all of you. We are delighted to have you here. Maybe Dr. Dolin, would you like to be first or is that out of sequence?

DR. DOLIN: Yes, I think Sam Butler is ready.

DR. BLAIR: Sam, I wanted to know if you wanted the first honors, second honors or third honors, which is an inside joke. I have to mention to you that Dr. Butler gave a presentation at the TEPR awards where EPIC became first honors for a large hospital category so congratulations.

Agenda item: Panel 5 -- Terminology users: Health Care Providers & Vendors -- Dr. Sam Butler, EPIC

DR. BUTLER: Thank you. Thank you for inviting us here.

You will notice in the handouts of our slides that I forgot to change the footer; it still says EMR medium/large hospitals at TEPR, and I apologize for that.

Thank you. My name is Sam Butler. I'm a physician on EPIC's clinical informatics team, and my background is in pulmonary and critical care practice and practice management. I joined EPIC approximately a year and a half ago.

The first slide that you will see up here is a typical winter morning in Madison, Wisconsin, where EPIC is based.

We are going to start off our testimony -- EPIC currently uses three of the terminologies that you asked us to comment on -- LOINC, Medcin, and, beginning now, SNOMED -- and we'll talk about each of those in a little more detail.

I'm going to start off our testimony by turning it over to Donald Chiolus(?), who is our expert at EPIC in interfaces and has the most experience using the LOINC terminology. He'll start with that, and I'll take over with Medcin and SNOMED.

DR. CHIOLUS: Thank you very much, Sam. You have already covered the slide on the three kinds of terminologies that we are going to use.

Use of LOINC at EPIC: we use LOINC in two different areas within our products, in the upcoming EPIC lab product and in interfaces. The key word on the upcoming lab product is that we plan to use LOINC. We are not using it yet.

We have a design in place, and the main purpose of this design is to assist the users of our lab product in mapping their codes so that their result reporting is based on the LOINC codes.

Now, we do currently use this terminology, actually quite extensively, in our interface work. We use it to represent the OBX-3 field of HL-7 messages, the field that has to do with observations. We use this terminology to map what comes from external systems to our internal databases, and in the same way, for messages that go out of EPIC, we use this terminology to represent the same kind of data.

We don't use this terminology exclusively. It's one of the options that our customers can use when sending data through HL-7 interfaces.
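
For illustration, a lab result crossing such an interface might carry a LOINC code in OBX-3 of an HL-7 v2 message. This is a minimal sketch; the segment layout is standard HL-7 v2, but the particular values shown are hypothetical:

    OBX|1|NM|2345-7^Glucose [Mass/volume] in Serum or Plasma^LN||95|mg/dL|70-105|N|||F

Here 2345-7 is the LOINC code, LN flags the coding system as LOINC, and the remaining fields carry the numeric value, units, reference range, abnormal flag, and result status.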

As I said before, the intent, and the place where we want to concentrate our efforts in the design of our lab product's use of LOINC, is to give our customers the ability to use this terminology without really disrupting their existing workflows or their existing knowledge of the tests that they are familiar with, the tests that they are using, or the tests that they are ordering.

The current design that we have in place is to map the specimen identifiers, the way they exist right now in our databases, and the methods, again the way they exist in our databases, to the specific values suggested by LOINC, so these two tables are going to be mapped first.

Then what we plan on doing is grouping the tests that the customers installing the product have into categories, where the set of tests belonging to each category will share a common component, kind of property, time aspect, and type of scale, as LOINC suggests.

So during the workflow, when somebody is trying to pick one of the tests that they are familiar with, they basically just need to pick the specimen and the method, and make them specific enough to be represented by a specific LOINC code.
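
As a minimal sketch of this design (with hypothetical table contents; 2345-7 is the LOINC code for glucose in serum or plasma, while the other entries are placeholders), grouping tests by their shared LOINC axes and resolving a specific code from the specimen and method might look like:

    # Tests in a category share the component, kind of property, time aspect,
    # and type of scale; the user's specimen and method choices pick the code.
    GLUCOSE_CATEGORY = {
        "shared_axes": {"component": "Glucose", "property": "MCnc",
                        "time": "Pt", "scale": "Qn"},
        # (specimen, method) -> LOINC code; contents are illustrative only
        "by_specimen_method": {
            ("Ser/Plas", ""): "2345-7",
            ("Urine", ""): "0000-0",   # placeholder, not a real LOINC code
        },
    }

    def resolve_loinc(category, specimen, method=""):
        """Return a specific LOINC code once specimen and method are specific enough."""
        return category["by_specimen_method"].get((specimen, method))

    print(resolve_loinc(GLUCOSE_CATEGORY, "Ser/Plas"))  # -> 2345-7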

Before I turn the microphone to Sam, a very quick summary of what we believe about the LOINC terminology.

We believe it's specific enough for the particular purpose for which we are going to use it -- laboratory result reporting. It doesn't have any redundancy as far as we can tell.

The mapping utility that comes with the terminology is (inaudible) from what we've seen and played with, and, of course, you cannot beat the price. It's a good thing to have with our customers.

Sam?

DR. BUTLER: The gist of our testimony is how we are currently using these systems; we are not here as experts on the terminologies themselves.

We wanted to talk about Medcin, which is the vocabulary from the list you asked us to testify on that our customers are using now.

It is the only one of the clinical vocabularies currently in use, although some of our newest and largest customers have additional vocabularies that we are now working to implement.

Our customers are only using the terms that come with Medcin, not the additional tools that come with Medcin that allow more robust mapping of such things as terminologies to diseases.

They use the terms to link to medical documentation, for the most part.

They use them in forms and lists, and so they are used in documenting progress notes, histories and physicals, and more detailed, specific forms which can be used to capture other detailed information.

This slide here is just an example of one of the text templates used in EPIC. At the bottom, I have highlighted the lung exam, and you can see a list where the choices are 'chest is normal appearance without tenderness,' etc.

Now, these lists can be used just to insert free text tied to no concepts, or the user can decide that each of these questions should be tied to specific concepts. Here is the screen behind the scenes where the user, when setting up the list, would actually choose a concept, and you see that the concepts in this list are all from Medcin, because, as I said before, that is the main concept vocabulary our customers are currently using.

So you can see they can choose from a lot of descriptive terminology. The reason this helps: when they document that a patient is wheezing, it can be tied to a concept and be more than free text. If they choose not to, of course, they can just file the free text.
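
A hypothetical sketch of what such a list definition amounts to (the concept IDs below are placeholders, not real Medcin codes): each entry carries a display string and an optional concept binding, and entries without a binding simply file as free text.

    LUNG_EXAM_CHOICES = [
        {"display": "Chest is normal appearance without tenderness",
         "concept": "MEDCIN:0000001"},   # placeholder concept ID
        {"display": "Expiratory wheezing", "concept": "MEDCIN:0000002"},
        {"display": "Other (describe)", "concept": None},  # free text only
    ]

    def file_choice(note, entry):
        # The narrative text is always filed; the concept is filed when present.
        note.append({"text": entry["display"], "concept": entry["concept"]})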

In our forms, for the eye exam for instance, you can see the detailed information that we can collect, and behind each of these forms can be medical concepts, currently Medcin.

The disadvantages that our customers relate to us regarding Medcin, and with which we agree, are the single hierarchy of Medcin and the redundancy; that is, there are a number of codes with NOS, not otherwise specified.

I learned today that the reason for this number of not-otherwise-specified codes in Medcin was a requirement from some of their customers to include all ICD-9 codes in the terminology, which required them to include a number of NOS terms. Nonetheless, it still adds to the size of the terminology and the redundancy of some of the terms present in Medcin.

As for the price: any price more than the price paid for LOINC is frequently considered too much by our customers.

At a recent meeting of CIOs from university hospitals, the question was asked, informally, how much it would be worth spending on a vocabulary. Even the most expensive vocabulary was, say, $50,000 for an entire, very large organization, and that's a fraction of their budget; yet most of our customers have not been willing to spend that kind of money on terminology.

The advantages of Medcin, clearly, besides its other advantages as a medical vocabulary, are that the terms are very descriptive.

Now, there are other vendors and organizations that are going to testify here about Medcin that have used it very well. It's just that in our experience, other than the advantages and disadvantages we listed, I shouldn't add any more about Medcin at this point.

SNOMED is a terminology that we at EPIC are very excited about, and our customers are also, I think for the first time, excited about the prospect of using SNOMED terminology in the EPIC application to document clinical information.

It's not currently used by any EPIC customers who are live on electronic medical record systems.

As I said, there has been a great increase in interest in this vocabulary in the last six months to a year.

We have begun to consider its use for diagnoses and problem lists, and also to speed up the process by which our customers implement an electronic medical record.

A lot of time is spent defining category lists. Those are drop-down lists for all types of things, including such things as marital status.

Each organization must go through and decide what the six marital statuses a patient could have are. SNOMED categorizes many of those things, making it easy for us to put those types of category lists, drawn from the SNOMED vocabulary, right into our customers' systems.

The biggest problem we see with adopting SNOMED is also one of its greatest advantages: its granularity. Physicians get very excited about being able to describe specific diagnoses. For the common example, there are 15 ICD-9 codes that describe type two diabetes and its complications, where there are 100 in SNOMED that would let you find the concept, so they are very excited about doing that.

There are roughly 15,000 ICD-9 diagnosis terms and approximately 88,000 in SNOMED, so the problem becomes how we present the information to physicians and other clinical users to allow quick and easy coding of the diagnosis at the specific granular level.

Right now, that is a challenge that we have not seen any answer to, and we are beginning to develop interfaces ourselves that will help physicians and clinicians do that. But that's a real challenge, and in past years there was not much development done on this, because there was not much interest in using SNOMED or any medical terminology; that's all changing now.

You are very well aware of the advantages of SNOMED. The domain coverage seems very good to us. The non-semantic identifiers, the permanence, and the multiple hierarchy are very important for our customers. We see a purpose: I can envision many, many ways to use this multiple hierarchy in helping users find the right diagnosis, for example, or reviewing a problem list and right-clicking on it to see all the related terms in the semantic network, to help me choose a related problem for the reason the patient is now presenting to me in the clinic.

And we hope that it soon becomes the national standard, because I think what our customers are waiting for is the statement: yes, you should use SNOMED. I think we are beginning to see that happen now.

Recommendations -- this is from your questionnaire. We recommend that this committee recognize and adopt one or more of these clinically specific terminologies that can serve as the core set of the national PMRI standards; that the government spend its time analyzing the clinical functions and identifying the gaps in existing terminologies for filling these functions; and that it support existing terminologies to fill these gaps.

Among the presented terminologies, SNOMED and LOINC seem to be the most promising ones, covering different and essentially disjoint aspects of the medical terminology domain, and we will all be watching SNOMED's evolution closely while working with our clients to help utilize its potential via our software.

Of course, we also recognize -- if we had a wish list -- that the drug terminology in SNOMED is not sufficient. The providers of pharmacy information are doing a great job providing this information, but there is no standard to compare those to, and we wish that would be addressed as well; we know that you are considering that.

I think that's the end of our testimony from EPIC. So if you have got any questions.

DR. COHN: Sam, thank you. I think, Bob, if you are number two, is your computer up and running?

Sam, thank you very much. We will be taking questions hopefully at the end of this session.

Agenda Item: Dr. Bob Dolin, Kaiser

DR. DOLIN: Good morning. It is a real pleasure to be here to testify at the NCVHS on terminology, one of my passions for the last couple of years.

What I wanted to present, actually, is how Kaiser Permanente is using terminology, talk about our approach to enterprise terminology; from there talk about the core standard terminologies we've brought in; and from there identify the gaps we see in the existing terminologies.

We call our enterprise-wide terminology the convergent medical terminology -- but first, I'm from Kaiser, Department of Internal Medicine. I work on our national clinical information systems, serve on the HL-7 board of directors, co-chair the structured documents committee, and serve on the SNOMED editorial board.

So, our convergent medical terminology: this is Kaiser's central terminology resource. It was originally developed to serve the needs of our evolving national electronic health record; it was actually put together in conjunction with the national electronic medical record that we were building.

Since then, as we started putting interfaces into this electronic health record, over time it actually became cheaper for us to continue to have a central terminology, because we had built so many interfaces to support our national electronic health record. It began to be perceived as a core Kaiser Permanente asset, and now it serves as the definitive source of concept definitions for all the codes used in the organization.

It provides consistent access to all the terminologies used by the organization.

The last bullet means that while we may have several terminologies available within Kaiser, they are all put into an integrated CMT environment so any researcher that needs to know something about a code has a consistent interface from which they can extract definitions or semantic information about the codes.

So this is my brand new slide, and I guess a little bit of a caveat about Kaiser's approach to CMT: it is shifting with the shift in our vendor selection for national electronic health records.

So what you are going to see here is informed by two years of work putting together CMT for our prior product, now going through a major revamp as we prepare to implement a new product. We have a lot of experience with the new product, which is EPIC -- we've been using it in Kaiser Northwest for a number of years -- and this is informed by a lot of discussions with EPIC as well.

This picture, in a nutshell, describes CMT. People ask what CMT is, so I had to draw a circle around it so people now know what the scope of CMT is.

CMT is really comprised of two things. It's comprised of a core: everything in the CMT core is a single integrated database -- a single classified database, in fact -- and the core is supported with the Apelon terminology development environment.

We also have something called extensions. Extensions are things that don't go into the core, for one of two reasons.

One reason is that the concepts just don't fit into the core; they are just too obscure, and if we try to put them into a classified database, it wreaks havoc with the database.

The other reason we use extensions is because we may get a big request to stick a batch of stuff into the application: we need these thousand concepts in the application by tomorrow. We don't have time to integrate them into the core, so we will put them in an extension so that they are available to the applications, and then over time we'll prioritize the extensions and harvest some of them into the core.

Everything within the sphere of CMT has a national Kaiser identifier. So Kaiser tracks all CMT concepts with its own globally unique concept identifier.

From there, we add a couple of special features. One is this notion of cross maps: from our central terminology environment, we have a single cross map mechanism where we can define cross map sets.

The reason we define cross map sets rather than just cross maps is because we like to have meta data associated with each of our cross maps. This meta data may include something like who the owner or who the responsible party is for this cross map, the person that we can go to if there was an error there or that can sign off on the reliability of the cross map.

The other reason we have cross map sets is because we have found that the relationship between two concepts in two terminologies can differ depending on the context of use, and because we have cross map sets, we can have two concepts in different terminologies related one way in one cross map set and another way in a different cross map set.

So because cross maps tend to be very use-case specific, we break them up into cross map sets.

The other piece is that we create something called context sets. What's interesting here is that our last application, our national electronic health record with IBM, actually plugged into CMT, right into the core. It was directly interfaced into the core, and that caused some interesting challenges, because the only way you could surface vocabulary to the application was to put it into this integrated, classified terminology.

What we now do is provide these things called context sets, which are just any subset of CMT; this is actually where EPIC plugs into our terminology, into a context set. These also have associated meta data for the context of use. So a context set might be the set of codes used to populate a drop-down list, or the set of codes used to populate a preference list -- anything where you want to aggregate a set of concepts. It might be the set of concepts that are appropriate for a particular HL-7 field. And so we define context sets.
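
A minimal sketch of these two mechanisms (not Kaiser's actual schema; the names and the sample mapping are illustrative): cross map sets carry their own meta data because the same two concepts may relate differently in different contexts of use, and context sets are simply named, application-facing subsets of CMT.

    from dataclasses import dataclass, field

    @dataclass
    class CrossMapSet:
        name: str
        owner: str        # responsible party who signs off on the maps
        use_case: str     # e.g., "billing" vs. "results reporting"
        maps: dict = field(default_factory=dict)  # source concept -> target concept

    @dataclass
    class ContextSet:
        name: str                  # e.g., "allergy picklist", a specific HL-7 field
        members: set = field(default_factory=set)          # subset of CMT concept IDs
        display_names: dict = field(default_factory=dict)  # per-context names

    # The same concept pair can be mapped differently per use case:
    billing = CrossMapSet("snomed-to-icd9", "coding team", "billing",
                          maps={"SCT:0000000": "ICD9:250.00"})  # placeholder codes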

So that's, in a nutshell, Kaiser's enterprise terminology. We balance many objectives with CMT -- user documentation, data analysis, and external reporting -- and whenever we show this slide, we always tip the scale towards user documentation.

In effect, we establish our organizational priority that user documentation takes precedence over data analysis and external reporting, the thought being that if you can't get users to document with the terminology, you are not going to be able to have much success with data analysis down the road so we make it very clear that that's where we tip the scales.

CMT is built upon industry standard terminologies. The core of our CMT is comprised of SNOMED CT, laboratory LOINC and First Data Bank drug terminology.

I didn't come here prepared to say we support these. I should just say it's a given that Kaiser Permanente is going to continue to use SNOMED CT and Laboratory LOINC. We are 100 percent committed to these terminologies.

First Data Bank drug terminology is what we are using for pharmacy right now and again, these core terminologies are integrated into a single knowledge base. They are classified together as one cohesive core and again, CMT is our lingua franca of interoperability.

I guess, to this group, talking about the need for standardized terminology, maybe I should just skip this slide. But it's interesting: the more we have CMT mapped and supporting interfaces, it actually has now become the cheapest solution to have a core terminology. We have 20 different ancillary applications out there that all have their local codes.

When we wanted to move from one application to another, the interfaces for those applications that had already been plugged into the electronic health record, with their local terminology mapped into CMT, turned out to be the cheapest interfaces to recreate for the new application.

Another point is that CMT is a knowledge base; it's not a data base. And this is a point I want to come back to because I think there's enormous potential in the terminology that is still going untapped.

Concepts have logic-based definitions imported from the source terminologies; LOINC has logic-based definitions, SNOMED has logic-based definitions. We import them and we classify them.

The classifier organizes the concepts into the polyhierarchy. The nice thing about the act of classifying is that it helps us identify synonymy among the concepts, and it helps us maintain a cohesive hierarchical structure in the terminology.

The other nice thing about these logic-based definitions is that they allow the computer to compare various representations and determine whether or not they mean the same thing. Actually, I gave you three handouts. The first handout is the PowerPoint presentation. The second handout is our written response to the questionnaire. The third handout contains our fall AMIA article from 2002 and our submitted fall AMIA article for 2003. The fall AMIA article from 2002 talks about selective retrieval of pre- and post-coordinated SNOMED concepts, and it goes into this notion in more detail: how the logic-based definitions allow you to compare various representations of pre-coordinated and post-coordinated concepts and aggregate them together to determine that they mean the same thing.

So, for instance, say you want to find all patients that have had a pituitary operation. One person entered the SNOMED code for hypophysectomy; one person entered two codes, one for brain excision and one for pituitary gland; one person picked the SNOMED code for partial excision of the pituitary gland by transfrontal approach; and one person picked brain incision and pituitary posterior probe. The fact is, because of the logic-based definitions, you can identify all four of these patients as satisfying the criterion that they have had pituitary operations.
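
A toy sketch of that retrieval idea (not SNOMED's actual classifier; the codes and definitions here are simplified stand-ins): both pre-coordinated codes and post-coordinated combinations reduce to the same logic-based features, so one query catches all of them.

    # Each entry, whether one pre-coordinated code or several codes glued
    # together, normalizes to a (procedure, body site) definition.
    DEFINITIONS = {
        ("hypophysectomy",): ("excision", "pituitary gland"),
        ("brain excision", "pituitary gland"): ("excision", "pituitary gland"),
        ("transfrontal partial hypophysectomy",): ("excision", "pituitary gland"),
    }

    def had_pituitary_operation(entry):
        procedure, site = DEFINITIONS[tuple(entry)]
        return site == "pituitary gland"   # the query only cares about the site

    print(had_pituitary_operation(["hypophysectomy"]))                     # True
    print(had_pituitary_operation(["brain excision", "pituitary gland"]))  # True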

So now I want to walk through some of the lessons we've learned, predominantly over the last three years as we've been developing this enterprise terminology.

Number one is that cross mapping is hard. It's probably the biggest cost of enterprise-wide terminology deployment. An interesting point, though, is that while there's a very large initial investment as the interfaces are built, after that it probably becomes a low-cost maintenance solution. This is actually something a lot of our architects had to come to understand: there are two pieces to building an interface.

There's the technical interface, and then there's the semantic interface, because whenever our electronic health record talks to an ancillary application -- to a lab or to radiology -- they all have their own local coding systems. As part of building an interface to that application, we have to map their terminology into our CMT, and again, these mappings are use-case dependent.

The type of mapping, and the scrutiny by which the mappings are QA'ed, have to vary by use. So, for instance, if we are mapping SNOMED to ICD-9, there's one type of scrutiny that has to go into that.

If our objective is to go directly from user documentation to ICD-9 to generate a bill, that demands its own level of scrutiny. One of the most critical mappings, though, has to do with our interfaces, especially for order entry and results reporting.

It's very interesting: I'm sitting here as a clinician ordering with a SNOMED code; it goes across the interface, where it gets translated to the code recognized by the lab system; the lab performs the test under their local code; they send back the local result; that gets translated into a LOINC code; and the clinician sees a LOINC lab result. If those mappings weren't correct, then the result that the clinician is seeing on the screen is misleading. So we think that these order entry and results reporting mappings require the highest degree of scrutiny, and in fact, we have at least two domain experts manually review and sign off on every one of these mappings.

For the other AMIA paper that's in your packet, we did a review of all the mapping scenarios we've encountered over the last two and a half years as we deployed CMT, and the article summarizes what we've learned from that.

This covers three terminologies so does that mean I get 20 minutes? I don't think I'll take that long, but thanks.

So, lessons learned. The next lesson learned is, again, to establish usability as an organizational priority. CMT has a defined set of objectives and guiding principles that clearly articulate the need to support usability while balancing it against the needs of data analysis and external reporting.

It's an interesting scale and it's an interesting set of discussions about how best to balance this.

Next is to differentiate interface from reference terminology and we struggled with this question for a long time. Do we really need to differentiate interface from reference terminology?

What I mean by the differentiation, right now, is a level of indirection where the application interacts with our context sets rather than directly with CMT; that gives us this differentiation from interface to reference terminology. In moving from the IBM application to the EPIC application, instead of having the application talk directly to the core, it now talks to these context sets, and that has opened up enormous flexibility for us and made it much easier to achieve usability without sacrificing data analysis and reporting.

So I've become a firm believer that there is a clear difference between interface and reference terminologies and that plugging these things into these context sets is the way to go.

This level of indirection allows us, number one, to identify the specific subsets of CMT that are relevant in a particular context, and one of the comments that Sam made is very relevant here: I don't necessarily want to give my users 88,000 disease concepts.

A context set allows me to give them just 15,000 if that's all they want. It lets the users take ownership of exactly which concepts will show up in a particular context. It turns the application into a pull mechanism rather than a push mechanism for terminology.

Within a context set, we can alter the display names and the synonyms depending on the context, so the same concept can have a different display name in every context set; that display name is unambiguous in its context, and it doesn't need to be the same display name in every context set.

Also, it insulates the users from changes to the source reference terminologies. You know, SNOMED is getting updated every six months. That doesn't necessarily mean that I want my application users to have to review changes to SNOMED every six months. Again, these context sets represent what the users want; as they want more terminology, we will add it to their context set, so now they are insulated from the changes we have to make to support the reference terminology.

The next lesson learned is that we have to make it easy for users to find the concepts they need. Otherwise they simply won't use the terminology; they will just resort to free text.

And so what we've done here is develop user-friendly names; we have style guidelines for concept display names and for abbreviations, and we work with our user groups on naming conventions and search strategies. This is becoming even more prominent within the organization: more and more people get to have their two cents about our approach to synonyms, our approach to display names, and our approach to search -- how we are enabling and facilitating that within the application.

The other thing is we want to maximize search precision and recall, so that when users search for a term, they find exactly the term they want and are not inundated with a bunch of extraneous terms. To do that, we will create search key words, and we'll use context sets to limit the number of concepts that appear in a given context. Rather than having you search against 88,000 disease concepts, if you are a specialist and your focus is on a particular domain, we may initially just have you searching within a limited number of concepts, and only if you don't find what you need will we let you go out and search a broader context.
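
A minimal sketch of that tiered strategy (the names and data structures are illustrative): search the user's context set first, consulting curated key words, and fall back to the broader terminology only when nothing matches.

    def search(query, context_set, keywords, full_terminology):
        q = query.lower()
        hits = [c for c in context_set
                if q in c["display"].lower() or q in keywords.get(c["id"], "")]
        if hits:
            return hits                      # precise results from the small set
        return [c for c in full_terminology  # broader fallback search
                if q in c["display"].lower()]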

Also, it's important to tailor the search strategy to the application serving up the terminology. We continue to find that the application that serves up the terminology has specific nuances, and if we take advantage of those nuances, we can be much more precise in tuning search precision and recall.

The next lesson learned is that we want to demonstrate immediate value of the coded data to the user.

We want the user, the minute they click a coded concept, to get immediate clinical value, so that they are actually rewarded for using coded terminology.

Where we've successfully triggered decision support off of coded data, for instance, we find the users more willing to use it. In fact, we routinely capture coded data for allergies, diagnoses, and orders, pretty much uniformly, and we commonly capture the reason for a visit, largely because these codes are used to trigger drug allergy alerts, data entry templates, and other forms of decision support.

So basically, if I have coded allergies on my allergy list and I now go to order a medication, I have the hooks there in my application so that I can be warned if I prescribe something that the patient is allergic to. The providers all know that, and as a result, they feel like they are wearing a seat belt when they use coded terminology. They feel that the computer is helping to watch over them.

For the last three slides, what I want to do is say: that was Kaiser's approach to terminology, and this is what we needed. We used SNOMED, we used LOINC, and we used First Data Bank drug terminology, and based on that, and on what our requirements are, the next three slides will go over each of these three terminologies and talk about what I call the action items.

I call them our action items because I feel this is the direction that Kaiser wants these terminologies to go in order to meet our requirements.

For SNOMED CT, I think we need further development of qualifiers as a way of constraining allowable post-coordination.

What that means is, if you let people join together any two SNOMED codes, or any three or four SNOMED codes, or what have you, you wind up with post-coordinated or concatenated strings that are just impossible to deal with.

You need to have some level of control over the allowable post-coordinations and we are looking at using SNOMED CT qualifiers as that constraint mechanism.

By using SNOMED CT qualifiers, we will hopefully guarantee that any concepts post-coordinated in the application can be compared to pre-coordinated concepts that live directly in the terminology, and again, this approach is explained in more detail in the fall AMIA paper from 2002 that you have there.
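
As a hypothetical sketch of constraint-based post-coordination (the concept and qualifier names are illustrative, not actual SNOMED CT content): each concept declares which qualifier attributes, and which values for each, may be attached, and anything else is rejected.

    ALLOWED_QUALIFIERS = {
        "pituitary excision": {
            "approach": {"transfrontal", "transsphenoidal"},
            "extent": {"partial", "total"},
        },
    }

    def valid_postcoordination(base_concept, qualifiers):
        allowed = ALLOWED_QUALIFIERS.get(base_concept, {})
        return all(attr in allowed and value in allowed[attr]
                   for attr, value in qualifiers.items())

    print(valid_postcoordination("pituitary excision",
                                 {"approach": "transfrontal"}))  # True
    print(valid_postcoordination("pituitary excision",
                                 {"color": "blue"}))             # False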

Next is that we want to standardize some of these SNOMED CT context sets, for instance for allergies and for HL-7 fields. Here's all of SNOMED -- 350,000 codes -- and let's hope that you and I, when we send information back and forth, send the same code. That's a bit of a challenge. I know that over time, as SNOMED gets cleaner, it will become less of a challenge, but for now we feel that these context sets -- actually standardized subsets of the terminology -- have a big value to us. As a for instance, we work now with the drug terminology vendors.

We've looked at First Data Bank and (inaudible) in comparison, and we looked at what Kaiser Northwest was doing with allergies, and it turns out that amongst all of these, we can define a set of about 14,000 codes that users like to use when documenting allergies. We've created a SNOMED subset.

We feel that this is a good subset to standardize upon, because otherwise the drug vendors have no notion, no idea, where we map to in SNOMED to trigger our decision support. Also, for HL-7 fields, if we can say this is the set of SNOMED codes appropriate for a particular HL-7 field, we feel that the development of these context sets, and standardizing them, will lead to greater interoperability.

Next is refinement of the SNOMED/HL-7 reference information model overlap.

The challenge here is that there's still the potential for unidentified redundancy. You can post-coordinate in multiple ways: I can post-coordinate by gluing two SNOMED codes together, or I can post-coordinate by sending one SNOMED code and sending a code in another field of the reference information model.

I think what we need here is guidelines and/or a formal mechanism for identifying the redundancy. And I think in a lot of cases, we need cross maps between SNOMED and HL-7 defined code sets.

Next, we need to foster a broad understanding, and demonstrations, of the power of the description logic underpinnings of SNOMED CT.

At Kaiser, I feel we are sitting on a kind of gold mine that our researchers haven't even realized yet. We have this enormous investment in CMT, and I think in a couple of years people are going to realize what an enormous investment it is and how much we can get out of it. But I think there's still a very significant lack of understanding of what description logic is.

The fact that SNOMED is based on description logic and that it's classified -- what does that mean to us? How does that allow us to more precisely analyze our data?

So I think an objective is to make sure that more people understand it, and to work with our researchers to operationalize this description logic and take advantage of it when we are querying our data repositories.

We still have a lot to learn about how best to deploy that and how best to use it as a search strategy.

And finally, I think from SNOMED's perspective, we need further enhancement of the term request process. It has to be easier for us to get our requests in to SNOMED and find out exactly what happened with our requests.

For laboratory LOINC, I should offer a caveat. We have worked extensively with SNOMED over the years, helping to develop that terminology, but I haven't been personally involved in working with laboratory LOINC, and I've seen demonstrations from Clem showing that he is already working on some of this. So my caveat is: I think this is important for LOINC, and I think it's already being looked at, but I don't know the extent to which it's being looked at.

For laboratory LOINC, I think we need additional scrutiny over the values used to populate the component axes, such as the set of measured components.

I think the LOINC codes themselves are very good, very clear, but the codes used to define the component axes, I feel, need to have unambiguous definitions, and we need to have them hierarchically structured as well.

The other thing I think we need in laboratory LOINC is a cross map between these component values and SNOMED CT, and this would actually facilitate a complete integration of those terminologies.

I make reference here to an article from JAMIA that we wrote with Stan Huff, Kent Spackman, Keith Campbell, and Roberto Rocha, where we describe a mechanism whereby you can use the computer to integrate LOINC directly into SNOMED by defining them both using a similar description logic and by cross mapping the component values of LOINC into SNOMED.

So basically, you have a LOINC code, and its components are defined based on SNOMED concepts, and that allows you to integrate these concepts directly together.
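
A sketch of that integration idea (the mapping shown is a placeholder, not a published cross map): once each LOINC axis value has a SNOMED-coded definition, the LOINC code itself becomes a definable concept inside the SNOMED description logic.

    COMPONENT_TO_SNOMED = {
        "Glucose": "SCT:0000000",   # placeholder for a SNOMED substance concept
    }

    def loinc_definition_in_snomed(loinc_axes):
        """Rewrite a LOINC code's axes so the component is a SNOMED concept."""
        return {
            "component": COMPONENT_TO_SNOMED[loinc_axes["component"]],
            "property": loinc_axes["property"],
            "system": loinc_axes["system"],
        }

    print(loinc_definition_in_snomed(
        {"component": "Glucose", "property": "MCnc", "system": "Ser/Plas"}))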

Finally, First Data Bank: this is the drug terminology that we use now. It isn't the standard; it's a vendor's drug terminology, and we use it because we needed it. Now that we are expanding the scope of our electronic health record, and different pharmacies are using different vendor terminologies, it's not working for us any more.

We would prefer to see a national standard drug terminology, such as RxNorm, that goes down to the clinically relevant level of granularity and is fully integrated with SNOMED CT. Vendors would ideally adopt or map to this national standard.

Kaiser Permanente supports RxNorm. We are in the process of evaluating it.

And that's it. Thank you.

Agenda Item: Beverly Meadows, NCI, NIH

DR. MEADOWS: Good morning. On behalf of my colleague Angela Ribble and myself, I thank you for the opportunity to provide comment on PMRI terminology issues. Angela's schedule didn't permit her to come back from Seattle to attend, but she is a project manager with the Southwest Oncology Group, known as SWOG.

SWOG is one of the clinical trials cooperative groups sponsored by the National Cancer Institute. Her duties include oversight of the SWOG database; development of case report forms, which is essential to the project I'm going to talk about; the clinical research associates' workbench web site; and intergroup interactions.

She also serves as the HIPAA privacy officer for SWOG.

My name is Beverly Meadows, and I'm the nurse consultant for clinical trials development informatics with the Cancer Therapy Evaluation Program, also known as CTEP, in the Division of Cancer Treatment and Diagnosis. We are in acronym soup here at the National Cancer Institute.

I'm the coordinator of the common data elements project for CTEP, which I'll talk about, and also the assistant project officer for the Cancer Trials Support Unit, also known as the CTSU, which will actually be the implementation of the common data elements.

We work with the PMRI terminology in the context of cancer clinical trial data collection and management.

Prior to proceeding, I'd like to give you an overview of CTEP and the cooperative group program as a bit of background.

The Cancer Therapy Evaluation Program's mission is to improve the lives of cancer patients by finding better ways to treat, control, and cure cancer -- quite a lofty mission. It's accomplished by funding an extensive national program of cancer research, which includes the cancer clinical trials cooperative group program.

In addition, CTEP attempts to forge broad collaboration within the research community and works extensively with the pharmaceutical and biotechnology industry to effectively develop new cancer treatments.

The cancer clinical trials cooperative group program was established in 1955 -- actually, it was conceptualized then -- and was originally funded under a research grant from the National Cancer Institute to test new anti-cancer agents provided by the NCI drug development program.

Over the intervening years, the focus of the program has evolved to address definitive studies to combine modality approaches for the treatment of cancer.

The efforts of individual investigators, numbering about 10,000, who participated in cooperative group protocols were instrumental in accruing over 27,000 patients to these group treatment studies in 2002.

The PMRI terminology being reviewed for this testimony is the NCI thesaurus which was developed by the National Cancer Institute to provide a centralized reference terminology based on the working vocabularies used by the NCI and collaborating groups.

One of the vocabulary endeavors recently created to address suggestions from an independent review committee for restructuring cancer clinical trials is the common data elements, also known as CDEs.

In 1997, a collaborative effort was initiated between the cooperative groups and the NCI to develop a dictionary of terms that would both address the needs of end users, as was indicated earlier, while providing a usable format for system designers to incorporate the terms into the groups' legacy databases.

And some of these groups have been around for 40 or 50 years so you can imagine the legacy of data that's in their data bases.

The vocabulary services at NCI were consulted to discover any similar efforts in current data bases that might contribute to the CDE effort and re-use what was already done.

Since this effort standardizes actual questions rather than terminology systems, it's a little different; getting back to the CDEs, no match existed in the NCI thesaurus for the unique focus of a common data element.

However, the goal of the CDE project is to standardize responses to questions, which often entails a list of valid values drawn from NCI thesaurus controlled terminology.

In some instances, several terminologies are needed to accommodate the requirements of reporting mandates, such as FDA's, or the needs of the clinical community. An example of the latter would be the use of both the World Health Organization and the French-American-British, or FAB, naming conventions for hematological diseases such as leukemia; we found from our clinical constituency that they need the variety, because we are looking at different sites throughout the country.

One category of data that must be collected is laboratory results, to establish the patient's eligibility for inclusion in the trial and to track the response of various body systems to the treatment program.

Prior to initiating the trial, the history and physical examination of the patient provide baseline information as to the status of the patient's disease and overall level of health, essential for entry onto the trial.

One or several modalities are specifically chosen for inclusion in the treatment regimen based on findings from prior research.

Modalities include the administration of agents, but also intervention using surgical or radiological techniques; the terminology really must be adaptable to those as well.

The NCI thesaurus is utilized as a controlled terminology to provide valid values for these various clinical content domains.

Alterations in health status may be precipitated by the treatment regimen, which may cause an adverse event of varying intensity. Adverse events are evaluated using what we have established as the common toxicity criteria, or CTC, which is mapped to terminology from the Medical Dictionary for Regulatory Activities, known as MedDRA, mandated for reporting to the FDA.

In addition to the correct term, the clinician must also grade the intensity of the adverse event and objectively assess the temporal relationship, or attribution, of the event to the intervention.

The NCI thesaurus provides the complete MedDRA terminology and mapping for us. Adverse event terms created by experts in the field, such as SWOG trialists, as represented in the CDEs and the NCI thesaurus, can then be mapped to MedDRA preferred terms.

In addition, the NCI thesaurus staff maintain the updates for these mappings to comply with the reporting mandates, which also require that we use the most recently available version of MedDRA.
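
A hypothetical sketch of the record this produces (the MedDRA code below is a placeholder): the clinician's CTC term, grade, and attribution travel together, and the MedDRA preferred term for FDA reporting comes from the maintained mapping.

    CTC_TO_MEDDRA = {"Nausea": "MedDRA:00000000"}  # placeholder preferred-term code

    adverse_event = {
        "ctc_term": "Nausea",
        "grade": 2,                          # intensity on the CTC grading scale
        "attribution": "possibly related",   # relation of the event to treatment
        "meddra_pt": CTC_TO_MEDDRA["Nausea"],
    }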

The CDE project was initiated to incorporate regulatory vocabularies and to facilitate electronic communication among diverse applications.

Working with the clinical trials groups, we found that there may be nine separate applications that we had to make sure would be able to communicate.

However, a more important aspect of the CDE project is to create data elements that are meaningful to end users such as clinical research associates, data managers, and research nurses, who are essential to initiating and maintaining data integrity, a process which begins with the initial extraction of clinical data from patients' source documents. If you can't get it right there, it won't get any more right up the chain of events.

There are several concerns when structuring a vocabulary to facilitate clinical research, as was addressed earlier. A primary concern is flexibility to respond to the dynamics of the science, essential for research. Since clinical trials must reflect current science, it is essential to offer timely inclusion of new terms into the vocabulary without creating obstacles for the research process.

Domain experts are consulted to provide basic categories or meta data, such as definitions and suggested valid values drawn from a variety of different terminologies, for a newly submitted vocabulary element.

The NCI thesaurus is utilized to assure the correct representation of these values, with the dual purpose of enriching not only the CDE dictionary but also updating the NCI thesaurus to reflect current practice in the clinical arena. We have a symbiotic relationship between the two systems.

Although terminology systems are often stored as remote repositories that nobody really sees, distant from the actual practice setting, the CDE project addresses the applied use of a controlled vocabulary in a companion NCI project, the Cancer Trials Support Unit, also known as the CTSU. As I noted, we are really acronym friendly at NCI.

One of the goals of the CTSU is to create a remote data capture system, or RDC, for the collection and management of clinical trial data for phase three treatment trials led by the cooperative groups, such as SWOG.

To accommodate the adoption of CDEs as the standard for data exchange, the laborious task -- and I'm sure you know it -- of mapping is necessary to link terms in the database reference libraries unique to each group to comparable CDEs. Once again, we found that some of the cooperative groups have 14,000 terms in their own libraries that have to be mapped and utilized.

The standardized vocabulary provided by the NCI thesaurus facilitates this mapping process to produce a common terminology for exchange of data across these group systems.

With the growth of the CDE dictionary, a robust infrastructure was needed to accommodate the needs of the scientific community and also to address the technological aspects of adopting terminology as a data exchange standard. Prior to building the infrastructure, a standard was needed to universally represent the meta data that defines each CDE.

A decision was made to use the ISO/IEC 11179 meta data standard to provide a more robust infrastructure, the cancer data standards repository, or caDSR. We had to get a new acronym.
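
As a minimal sketch of an ISO/IEC 11179-style data element (identifiers and values are illustrative, not actual caDSR content): the standard pairs a data element concept with a value domain whose permissible values can be drawn from NCI thesaurus terms.

    common_data_element = {
        "public_id": "CDE-0000",                   # placeholder registry identifier
        "question": "Patient marital status",
        "data_element_concept": "Marital Status",
        "value_domain": {
            "datatype": "enumerated",
            "permissible_values": [
                {"value": "Married", "meaning": "NCIt:C00000"},  # placeholder
                {"value": "Single",  "meaning": "NCIt:C00001"},  # placeholder
            ],
        },
    }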

Suggestions to the NCVHS: although the actual use of the remote data capture system is just beginning, CDE standards have been incorporated to meet the challenge of data transfer, not only to consider the development of new systems but also to accommodate the legacy systems where data from other studies are stored.

When terminologies are chosen as standards, the selection criteria should strongly consider the actual use potential to bridge the gap between vocabulary repository and the implementation of these vocabularies in clinical settings.

The actual utility of the standardized vocabulary could only be tested when implemented in an end user application.

Thank you.

DR. COHN: Our next speaker is Mark Pittelkow.

By the way, Beverly, thank you for reminding us. I think we ought to talk about abbreviations and standardization. You have brought up the point about acronyms and standardization.

Agenda Item: Dr. Mark Pittelkow, Mayo Foundation

DR. PITTLEKOW: Good morning, Mr. Chairman, co-chairman, and committee members. Thank you for allowing us to present today.

I'm sorry that Dr. Elkin isn't here today. He's on his way back cross the Atlantic but he asked me to step in for him this morning and as many of you may know Dr. Elkin, he has a particular interest and commitment to these kinds of projects and we are pleased to be able to present today.

This initiative is part of the Mayo Foundation, Department of Medicine, Laboratory of Bioinformatics and Dr. Elkin and I have hope fully been able to respond to some of the queries that were submitted in the questionnaire is we'll talk about mainly two terminologies although in practice, Mayo does use a number of these terminologies including LOINC, but what we'll be focus on this morning is basically SNOMED and NDF a little bit.

As many of you know, and we have already discussed this morning, SNOMED CT is becoming more widely adopted and utilized at the Mayo practice we actually have used it quite extensively for indexing clinical notes with automated data abstraction which I will show you as a little hopefully demonstration at the end.

My interest, as a practicing dermatologist and laboratory dermatologist as well, is in indexing images; as you know, dermatology is a visually based, image-rich atmosphere where we use this terminology. That's basically my background.

As I'll show in some examples, there has been some use in the usability laboratory of the capacity of SNOMED CT to index web pages, which, as you can imagine and I think we all readily acknowledge, are very rich, very wide ranging, and of varying levels of veracity. I'll give you some examples of its ability to do that type of indexing.

Certainly we also use it, and we are beginning to develop some pilot projects, to automate the assignment of E&M coding, which I'll show you, and certainly in fraud and abuse auditing, which we are beginning to implement at Mayo, as well as various quality assurance programs, both at Mayo and at the VA.

Just a little bit -- we will then cover NDF-RT, which we use basically, as you know, for automated medication list generation and some quality assurance issues, which we'll talk about, and which we feel complements SNOMED CT, as we've already heard about this morning.

So some data about the terminologies. SNOMED CT has in the neighborhood of approximately 380,000 concepts and 800,000 terms, with both pre-coordinated and partially post-coordinated terms; of course, as a result, not all the semantics are supported.

There also is some overlap between terms such as "using" and "method," where certainly there is some commonality but not complete overlap. Those are some of the shortcomings; I think as the terminology evolves in depth and unique differences are recognized, these things will be disambiguated.

There are, obviously, some duplicate terms in the hierarchy with various different concept IDs, and that's also one of the shortcomings, but as we recognize the terminology is evolving, those ambiguities are being resolved. We also believe there is generally poor coverage of some drug brand names, and that's where we believe NDF-RT or other terminologies may be able to be utilized.

So a little bit of data about the terminologies for SNOMED-CT.

In the project that Dr. Elkin carried out looking at health concepts in a random sample of 41 health records generated at the Mayo Clinic, there was approximately 2.7 percent coverage with the use of 14,792 -- approximately 15,000 -- concepts. Some of these commonly used terms, as we heard in Dr. Dolin's testimony, would begin to sort of shake out in common usage.

Interestingly, this dropped somewhat when you use various oncology drug indications, down to approximately 86 percent. Then, when we actually examined a little over 1,000 web pages off the web, which were indexed, we were able to achieve remarkably high, essentially 100 percent, sensitivity using concept-based IDs, with a very high specificity of 98 percent.

By contrast, using only keyword indexing, sensitivity dropped significantly to a little under one quarter, while still maintaining the high specificity of 98 percent.

When you look at that and apply statistical analysis, you can see very strong significance for both sensitivity and specificity, so that when you compare plain keyword indexing to SNOMED, SNOMED clearly has the ability to provide much better coverage.
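
For reference, sensitivity and specificity as used in this comparison reduce to simple ratios; a minimal sketch with hypothetical counts (not the actual study data) might be:

    def sensitivity(tp: int, fn: int) -> float:
        return tp / (tp + fn)   # fraction of relevant pages actually found

    def specificity(tn: int, fp: int) -> float:
        return tn / (tn + fp)   # fraction of irrelevant pages correctly excluded

    # Concept-based indexing: nearly everything relevant is found.
    print(sensitivity(tp=990, fn=10), specificity(tn=980, fp=20))

    # Keyword indexing: specificity stays high, sensitivity collapses.
    print(sensitivity(tp=240, fn=760), specificity(tn=980, fp=20))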

Just speaking briefly about NDF-RT: there are approximately 123,000 concepts, with approximately 230,000 terms and another 140 or so relations. Interestingly, it was able to cover about 71 percent of the drugs with various oncologic indications, and it does much better when you compare it to SNOMED CT, which had 58 percent coverage. So there certainly is room for improvement, as we've heard, for drugs, using some of the drug-specific terminologies.

So, where we are coming from, in brief. It's amazing that one century ago a rather remarkable set of innovations came to pass in the United States: Henry Ford rolled the first Model T off the line in Detroit; in Milwaukee, William Harley and Arthur Davidson built their first motorcycle; and that same year, one century ago, Orville and Wilbur Wright flew their flights at Kitty Hawk. So clearly, I would say, at the turn of the 20th century these innovations set the pace for the remainder of the century.

So what Dr. Elkin and I have sort of queried is: how will we be remembered in 2003? I think what this committee hopes is that there will be development of a knowledge-based health record; that there will be decision support that will enable, as we've heard already this morning, good evidence-based medicine; and that we will have the ability to provide excellent longitudinal care for our patients, respecting both their values and ours, while also maintaining the physicians' and health care providers' values and the standards that we hope will allow for improving health care.

So what we believe about PMRI terminologies is that they will, hopefully, support near-complete content coverage; that they will evolve in a rather graceful and not awkward manner -- perhaps better than those first few flights of the Wright brothers, which were successful but, as we look back 100 years later, left something to be desired; that they will allow open input from both the clinical and informatics communities; and that they will provide basically a formal knowledge representation language to express concepts, which, as we've also heard this morning, will likely provide the best decision support ultimately.

And ultimately we hope to be able to bring all the developments and advances in natural language processing to bear in support of the terminology.

What I hope to give you is just one example of where we are headed at Mayo in some of the practical aspects.

I will show you just a brief demonstration in the remaining couple of minutes here.

This is basically an HTML document that is XML based, and since I'm not connected here this morning, what you see on the right is a dummy record, basically containing these sections: the history of present illness, past medical history, social history, family history, medications, allergies, examination, diagnostic tests, and assessments. As you can see, each of these is given its code and the terminology it is linked with; the positives are in blue, with the positive or negative operators highlighted here.

The negative terms are in red. As we move down the record, you can see areas where there are indeterminates or uncertainties, which are highlighted with the appropriate codes.

Then on the left, this is broken down into structured E&M coding, where you basically have the review of systems, past medical history, family history, social history, examination, and then various data components -- diagnosis, risk, etc. As you cursor through those, each element links on the right to the coded record.
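
To make the idea concrete, here is a minimal sketch of parsing such a coded, XML-based record. The tag names, attribute names, and document structure are invented for illustration and are not Mayo's actual format; the two concept codes echo real SNOMED CT concepts (allergy to penicillin; cough), but everything else is hypothetical:

    import xml.etree.ElementTree as ET

    FRAGMENT = """
    <record>
      <section name="allergies">
        <concept code="91936005" system="SNOMED-CT" negated="false">
          allergy to penicillin
        </concept>
      </section>
      <section name="examination">
        <concept code="49727002" system="SNOMED-CT" negated="true">
          cough
        </concept>
      </section>
    </record>
    """

    root = ET.fromstring(FRAGMENT)
    for concept in root.iter("concept"):
        polarity = "negative" if concept.get("negated") == "true" else "positive"
        print(concept.get("code"), polarity, concept.text.strip())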

So what we believe, as has been discussed here this morning, is that this potential for abstracting a record, linking to that record so the physician can provide basically accurate E&M coding, and ultimately then decision support -- which in this case is based on SNOMED CT -- will provide significant advances for what this subcommittee is clearly charged with and hopefully will be able to provide for the government.

So, thank you for your time this morning. I'm happy as with the other testifiers to take your comments.

DR. COHN: I want to thank all the speakers. Questions from the -- Steve and then Walter.

DR. STEINDEL: Thank you for the excellent presentations. I have a couple of questions for Kaiser and EPIC and Mayo, and we'll start with Mayo first, which is my relatively simple standard question.

Are you using SNOMED routinely for recording information in patient charts?

DR. PITTLEKOW: In several clinical settings, we actually are: in the hospital and in general medicine. It also will be part of our image database as that is being rolled out towards the end of the year.

DR. STEINDEL: For about how long have you been doing this?

DR. PITTLEKOW: Approximately one and a half years for the clinical records in the hospital setting.

DR. STEINDEL: Thank you very much.

And Bob, I have the same question for you. It's my first one and I have a second one.

Are you using SNOMED to record information in Kaiser in-patient charts?

DR. DOLIN: We are. Kaiser is such a big place, but for our national electronic health record, SNOMED is the core terminology for patient documentation.

DR. STEINDEL: And how long have you been using it?

DR. DOLIN: It started in Colorado maybe six years ago and we deployed the IBM national medical health records a year and a half ago.

DR. STEINDEL: Thank you. That was the easy question. Now for the harder one.

There was one thing I picked up during your talk: you used the phrase "as SNOMED becomes cleaner." I'd like you to elaborate on what is dirty in SNOMED and how much work you think is needed to clean it up.

DR. DOLIN: Well, I mean, similar to the comments from Mayo about some of the defining attributes of SNOMED: we need further distinction between the defining attributes so that they are used reliably and consistently by the people doing the modelling work at SNOMED. We want to make sure that all of the defining attributes have crystal clear definitions so we can rely on them.

The other thing is once we know the model, once we know what these defining attributes are, we want to go through and uniformly apply them throughout SNOMED as well.

What was your question -- how long will this take, or how will we approach this? Lately what I'm thinking is we are going to attack focused areas where we know we have evidence-based medicine and where we know we have good decision support. So, rather than necessarily cleaning all of SNOMED, we may come up with some asthma guidelines, and based on our asthma guidelines we may come up with asthma decision support, and therefore we want to go into SNOMED and really clean up and make sure that every concept related to asthma is just perfect.

So that's our current approach.

DR. STEINDEL: Thank you. Dr. Butler, I have a question for you, picking up on your comment that customers are not willing to spend anything on terminology. I think that's a very strict criterion to present to this committee because, obviously, terminology is not free.

It costs somebody something, and in some cases it may cost the end user something, either through a vendor like you or directly. How strictly should we hold to that statement you made? Is, say, $50 a year a reasonable price? Is nothing a reasonable price? I think we can all agree that $100,000 a year is probably not a reasonable price.

DR. BUTLER: I think it's a fair question, because they certainly have to pay for EPIC. I mean, I think that they ought to be willing to pay for the vocabulary. My comment was made somewhat in jest but also in amazement that they are not willing to invest in this.

I think -- I was reminded by Bob's slide that we need a payoff for the users, and by your example of decision support working if they use the vocabulary. I think that is going to be key for us as a vendor: to educate users as to what they will get out of a medical vocabulary. I think they ought to be willing to pay a reasonable amount for the work that goes into creating it and maintaining it.

DR. STEINDEL: Thank you. Those were my questions.

DR. COHN: Walter and then Stan and then Jeff.

DR. SUJANSKY: Actually, I have a question, a different question for each of the testifiers. I'm going to try to do it as quickly as I can. How much time do we have?

DR. COHN: We have about 15 minutes.

DR. SUJANSKY: Then I'll get right to it.

Dr. Butler, at the time that SNOMED CT becomes more or less freely available, will EPIC continue to use MedCine in its applications, in its development work, and in its newly deployed applications? Why or why not?

DR. BUTLER: EPIC's guiding principle for development has been to give the customers what they want; that's pretty much across the board in our development. We have maintained a neutral stance towards vocabulary to date, but really there hasn't been a lot of choice, so it's been easy to maintain that neutral stance.

One of the things we hope is that if SNOMED became a national standard, I believe we could then tell our customers that we are going to develop our application for SNOMED use, and have that as a backdrop when we say: we want you to be able to follow the national standard, and therefore we are developing more with SNOMED than with other vocabularies.

But as long as there are other vocabularies, I think we would still like to remain neutral to meet the needs of our customers; if they want to use a vocabulary other than the national standard, we should make attempts to support that if possible.

DR. SUJANSKY: Dr. Dolin, I believe at one point you mentioned that the drug terminology should have a clinically relevant level of granularity.

How would you characterize what that is -- what elements are required in a drug terminology to provide a clinically relevant level of granularity? You know, at one extreme you just have the drug name; at the other extreme you are all the way down to an NDC code. What's required for a terminology that's a standard?

DR. DOLIN: You and Steve picked up on the fuzzy points in my slides pretty keenly.

The reason I put "clinically relevant" without being specific about it was that for different applications the level of clinical relevancy may change. Probably we need NDC-level codes, but we also need the level of granularity where I can, in my application, order something, and I can go to the pharmacy and the pharmacist can fill the order. So I think we need to go down to the manufacturer level and the NDC level, probably so.
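
As a rough sketch of the spectrum of granularity being debated here -- the level names follow common drug-terminology usage, and the example strings and NDC are invented for illustration:

    # From coarsest to finest; clinical relevance depends on the application.
    drug_levels = [
        ("ingredient",       "propranolol"),
        ("routed generic",   "propranolol, oral"),
        ("clinical drug",    "propranolol 40 mg oral tablet"),
        ("branded drug",     "Inderal 40 mg oral tablet"),
        ("packaged product", "NDC 0000-0000-00 (hypothetical)"),
    ]

    for level, example in drug_levels:
        print(f"{level:>16}: {example}")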

DR. SUJANSKY: That needs to be part of a national standard -- in some sense it's already national, but it needs to be explicitly a matter of the national program, linked to the other levels.

DR. DOLIN: I think so.

DR. SUJANSKY: Dr. Pittlekow, sorry if I'm mispronouncing your name.

One of your slides cited a study showing that 71 percent of oncology drugs, I believe it was, mapped to NDF. That implies that 29 percent did not, which is almost a third, so that seems kind of high.

In your opinion, why is that? Why are almost a third of those drugs not in NDF? Is it a formulary issue? Do you know?

DR. PITTLEKOW: Well, it's at several levels, and I think that reflects part of what the query was to Dr. Dolin. Yes, part of it is a formulary issue. I apologize -- I didn't run those studies, Dr. Elkin did, and I don't have the outliers to know what the scope of that limitation was. But it clearly says that we have to make some improvements if we are going to get the coverage that we would expect, or that we would hope to at least obtain. I'm sorry I don't have the specific shortcomings.

DR. SUJANSKY: Okay, and Miss Meadows, if I can ask one more question. I was actually a little uncertain from your presentation what the relationship between the common data elements and the NCI thesaurus is.

I gathered you are here to testify about the NCI thesaurus and its value and use and so forth. So could you explain that a little bit -- how the NCI thesaurus is useful in what you are doing, if I understood it correctly, in developing the common data elements?

DR. MEADOWS: Whenever we develop collections of elements, we consult the NCI thesaurus to see if there's any information in there from other efforts that have been done throughout the NCI, so that we are not creating new entities where there is already some information available. So we do consult the NCI thesaurus as our first foray into seeing if there's already information there, and we can also go through the NCI thesaurus to the NCI metathesaurus to see if there are any elements that would map to what we are doing or would have some similarity to the terms we are using. So we go through different levels of the different terminologies.

DR. SUJANSKY: For your specific purposes, would there be any reason you couldn't use something like SNOMED CT instead of the NCI thesaurus for what you just described?

DR. MEADOWS: Well, the common data elements themselves are like questions; they are very different. But when we have the valid values that are responses to a question, then we can use SNOMED or various other terminology systems for the response. For something like the diagnosis, we would not go to a CDE; the response would indeed be a standard term or code. We wouldn't make up something of our own.

DR. COHN: I think there's additional comments from NCI.

DR. HAVER: Yes, Walter, Margaret Haver from NCI. I'm working on development of NCI thesaurus and meta thesaurus. I thought I might just clarify.

There's actually communication between the data standards repository, which houses the common data elements, and the thesaurus system and the metathesaurus system, so that whenever a user queries the data standards repository for valid values in a particular domain -- when they are working in a template, say -- the system will call out to the NCI thesaurus for the controlled terminology for that domain. If it doesn't find that set of valid values, then it will hit the metathesaurus system. So there actually is a system connection.
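
A minimal sketch of that fallback chain, with hypothetical function names and toy data standing in for the real caDSR, thesaurus, and metathesaurus interfaces:

    from typing import List, Optional

    def cadsr_valid_values(domain: str) -> Optional[List[str]]:
        return None  # stand-in: no template-specific values registered here

    def thesaurus_valid_values(domain: str) -> Optional[List[str]]:
        return {"histologic grade": ["G1", "G2", "G3", "G4", "GX"]}.get(domain)

    def metathesaurus_valid_values(domain: str) -> Optional[List[str]]:
        return ["(values drawn from source vocabularies)"]

    def lookup(domain: str) -> List[str]:
        # Try each system in order; the first one that answers wins.
        for source in (cadsr_valid_values, thesaurus_valid_values,
                       metathesaurus_valid_values):
            values = source(domain)
            if values:
                return values
        return []

    print(lookup("histologic grade"))  # -> ['G1', 'G2', 'G3', 'G4', 'GX']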

Also, with regard to the question about overlap in things like the drug information space: we just did an analysis of drug overlap for the oncology drugs that we are using, for NCI and NDF-RT, for example, because we are working on a project for convergence between RxNorm, NDF-RT, and NCI thesaurus drugs. I don't know the percentages, but in our initial analysis of 2,700 single agents that we put into a development database, there were 500 matches with NDF-RT. So we do have a significant gap, and part of that is because NDF is a formulary, but part of it is because we represent both standard oncology drugs for treatment plus trial drugs.

DR. COHN: Walter, other questions?

DR. SUJANSKY: No, I'm okay.

DR. COHN: Stan.

DR. HUFF: First, I just wanted to congratulate Beverly and the NCI for doing the work to try and standardize data elements within that important facet.

And then my second thing -- and you are welcome to comment, but it's mostly just a point of information -- is that there is a collaboration that's been initiated between the common data element folks and the LOINC committee. Peter Kovitz, Jeff Abrams and Denice Mosell came out in the fall and presented the CDEs and their structure, and, at least on a preliminary look, the LOINC strategy is compatible with the 11179 strategy, and we are continuing that.

They are going to attend the upcoming LOINC meeting, and that looks like a very fruitful collaboration as well, and we are excited about that -- speaking again from the LOINC committee side.

DR. MEADOWS: We are trying to clean up -- we've been at the CDE project since 1997, so it was a steep learning curve about how to create a unique vocabulary. Everybody's got one, and what we realized is that we did have to interface with other terminologies that were standardized.

What we are hoping to do is finish cleaning up, converting the 3,000 terms to ISO compliance, and then we are going to share that with you and see where we have convergence. What we would like to do is make sure we reach out to other terminology systems; those are the valid values.

But the questions themselves didn't really match anything, and really that's what we have with a common data element like "Was our data amended?" I mean, the answer isn't going to be anything more than yes or no, but it is a question we want to standardize on a case report form so everybody is saying the same thing across all the different dimensions of cancer research where we are trying to gain information. Eventually -- not that particular question, but other questions -- will go into statistical databases for analysis, so you can see whether a treatment regimen has any efficacy and/or adverse events, and see which of those associate with the different regimens.

DR. BLAIR: In the written testimony submitted by many of the vendors, they indicated that one of their highest priorities when looking at SNOMED was, because of the hierarchy, its ability to support decision support and outcomes analysis. So, Sam, I was wondering whether that is one of EPIC's high priorities for looking at SNOMED as well, or whether you were looking at it for other reasons.

DR. BUTLER: That is a very high priority.

DR. BLAIR: And then, related to that same question: Bob and Mark, are you able to share with us the degree to which you have been able to implement some decision support applications using SNOMED, or do you have plans to do so?

DR. DOLIN: We haven't yet. When we start using EPIC, we will actually have data out of the application and we are actually going to be meeting with our research team to start building a -- using the hierarchies.

This is one of the areas where I think we will also do a targeted Q-A of the hierarchies. We do believe that the hierarchies within SNOMED store knowledge that can be taken advantage of for decision support.

DR. PITTLEKOW: And, yes, I would certainly agree. We've had some internal prototype efforts, one of which I displayed -- this Opticode tool for E&M coding -- which will then extend to some decision support; not actually functional yet, but conceptualized, in on-the-board development.

DR. COHN: I have one question and then I think we will finish. Mark, it's really a question for you. I was just curious.

The example you showed, with the use of the terminology, looked to me like a natural language processing example. As best I could tell -- and I've mentioned this -- one of the fundamental conundrums in natural language processing is that you need a lexicon underneath it to help standardize and reference the NLP.

Are you proposing that SNOMED and all of these other things together would be sufficient as the lexicon underneath all the key applications? Is that what you are testing?

DR. PITTLEKOW: We are approaching that. Obviously, what's the ideal is I think up for determination but yes, it was clearly based on some of the processes which we've already been through and as I say, I wish Dr. Elkin were here and some day I'm sure he would be happy to, when he has returned to supplement my comments for the committee on those specific issues.

But yes, I believe that the potential is approachable with SNOMED CT and some of the other lexicons that are employed in this one prototype product called Opticode.

DR. GRAHAM: One clarification. In the Kaiser implementation, will MedCine come to bear at all with SNOMED, or will you just use SNOMED products?

DR. BUTLER: That is up to Kaiser, and I think I can speak for them in saying that it would be SNOMED -- it will actually be SNOMED CT.

DR. COHN: I want to thank our panelists. That was a great set of testimony. We will be breaking for 15 minutes and then we will get back together to reflect on what we've heard. I think, obviously, Susie and Jeff will help us walk through that, along with Walter.

The questions are, A, what did we hear, and B -- the other question I want to bring up -- are we at a point where there's some sort of a letter at a high level? Knowing that the report isn't going to be ready in June, is there any letter or communication that we want to share with the full committee in June? So we will bring that one up also.

DR. GREENBERG: Are you also considering a letter related to testimony you heard on the first morning?

DR. COHN: That's already in the works.

DR. GREENBERG: Okay.

DR. COHN: And we'll find out whether there's a third letter that needs to occur regarding the enforcement ruling on development. Thank you. 15-minute break.

(Brief recess.)

DR. COHN: Jeff, I'll turn it over to you for the last session.

DR. BLAIR: Okay, we have about an hour and a half, and during this time there are three objectives that we'd like to be able to cover.

One is to get some feedback from all of our subcommittee members and staff as to what we've learned during this last day and a half and the impressions of the relevant issues and facts that we've arrived at.

The second is the idea of a quick letter to the Secretary and Simon will kind of be articulating what that piece is and the next piece is the next steps.

Now, to accelerate the first objective, which is a review of what we've learned: Susie Bebee and Walter have both pulled together a framework and a summary of what has gone on this last day and a half. If I may, Simon, can I turn that over to Susie and then Walter?

DR. COHN: Sure.

DR. BEBEE: I'll start. What I wanted to do was give a more high level framework.

So if you remember we actually had a questionnaire that had gone out to the testifiers.

It was broken into two parts. The first part was specific, with three questions; the second part was general, with six questions.

In the first part we looked at the actual use of the terminology. We also looked at the strengths and weaknesses and this was based on the essential criteria and then we looked at mapping.

In the second, more generalized section, we looked at the ease and difficulty of using the data for data analysis. We also asked about gaps and benefits, and finally we spoke to the issue of recommendations. We offered six and asked the testifiers to rank them, and then we asked for free form, where they could write in their ideas on recommendations.

First, the free form, from a highlighting perspective. What we heard, first of all, was that vocabulary convergence is one thing that they wanted. They wanted a core vocabulary or vocabularies, with the main vocabularies integrated or mapped so as to reduce overlap and ambiguity.

They wanted harmonization of the administrative side with the clinical terminology -- if you remember, this dealt with the CMS reimbursement issues that were brought up. They also felt the need to include all health professionals as users of the core terminology. They wanted free or low-cost terminology, and they wanted support for the developers of tools for the terminology.

And then, what I handed out prior to the meeting -- you should have a copy of the more specific, the six specific recommendations -- that was one of the things I wanted to go over. Actually, you can just set it aside because it's already changed; I'll show it up on the screen.

And so it starts with the six recommendations that are in the left column. I can read those. Recognize and/or adopt one or more clinically specific terminologies that can serve as the core set of national PMRI terminology standards.

This, in fact, was ranked number one by 13 of the 18 responses. Number two was: analyze clinical functions and identify the gaps in existing terminologies for fulfilling these functions. This ranked second, with 7 of the 18.

I'm going to skip down to the fourth one -- supporting existing terminology developers to fill the gaps. This actually ranked second with six, and, as you can see, it also ranked high as a third choice. So it was clear that the first one -- recognizing and adopting one or more clinically specific terminologies -- was first, and then analyzing the clinical functions and supporting existing terminology developers were close in the second area.

The third one, actually -- wait a minute, that was supporting existing terminologies, that's right -- the third one was developing PMRI terminology standards to fill these gaps, and that didn't rank very highly.

Do nothing in terminology development and let the private sector fill the gaps as it will -- that didn't rank very highly and finally develop a single master terminology from scratch. That didn't rank very high as well.

One last thing I will comment about this: everyone had an opinion about what should be number one, and again, that was to recognize and/or adopt one or more clinically specific terminologies.

As we went to the second priority, third, fourth, fifth, and sixth, you'll see down at the bottom that opinions dropped off: 15 people out of the 18 had an opinion about what should be second; 11 had an opinion about what should be third, and it went down from there. So basically the top three seemed to be where the testifiers' focus was.

That is the recommendation portion of it, and Walter has put together more specific information, so I'll turn it over to Walter.

Agenda Item: Discussion on previous terminology testimony - Walter Sujansky, Contractor

DR. SUJANSKY: Thank you. What I'm going to do today -- they hooked me up to the projector here -- is summarize a little of the testimony we've heard over the last couple of days and give you my take on some trends that I believe were expressed among the testifiers.

I'd like to ask the subcommittee, as I go through that, to give me feedback as to whether I've identified the right trends and whether there are some that I've missed, so that we can get a group consensus of sorts on the meaning of what we've heard. Then, in the last part, I'd like to talk about the next steps in the PMRI terminology selection process: where we go from now, what we need to do in the medium and long run, and a little bit about the schedule for all of that.

Let me just launch in as we are getting the AV stuff set up here.

Over the last couple of days, we've heard from 20 testifiers for those of you who weren't counting, and they really represented a wide variety of the terminology user community.

DR. BLAIR: While you are switching, we know that there were 20 testifiers, Susie, do you know how many different terminologies were assessed in total?

DR. BEBEE: Eleven of the twelve.

DR. BLAIR: And the testifiers in many cases did more than one. Do you know how many different individual terminologies were evaluated across the 20?

DR. BEBEE: I sure do. I don't have quite a summary number, but I can tell you -- if this is what you would want to hear, you can tell me -- that, for instance, Dianne Oliver testified to NDF-RT; Keith Larson from Intermountain Health Care looked at two, NDDF-plus and LOINC; Dr. Gay looked at both the dental ones --

DR. BLAIR: Actually I wasn't thinking of, you know, that's --

DR. BEBEE: That specific?

DR. BLAIR: Well, I was thinking of like for example how many folks did we have that assessed LOINC; how many did we have that assessed SNOMED, that type of thing.

DR. BEBEE: I've got that. We actually had two people that looked at NDF-RT. We had three people that looked at NDDF-plus, and we had Multum; we had 11 people that looked at SNOMED CT; we had three people look at MedCine, one SNOMED, one ISO designation; one HL-7 version 3, though that was more focused on the CDA. We had one look at the medical device nomenclature. We had two NCI, seven LOINC, and then we had, as incidentals, ICD and CPT.

DR. SUJANSKY: Okay, I'm just about ready here. Where was I? That's right, 20 testifiers, and again, as I was saying, they represented a range of types of users, from clinician users to representatives of IS vendors and other system developers -- integrated delivery networks; IHC can be categorized in that regard to some extent -- as well as academia and some middleware vendors. As Susie just mentioned, we actually heard about nine of the terminologies, I believe, in the oral testimony, but two more were represented in the written testimony, and we didn't receive anything, I believe, about the ISO 11073 terminology, which we are still considering.

What were some of the uses of controlled terminologies that testifiers told us about? How were they using these terminologies in their applications in a practical sense?

Some are benefiting from controlled terminologies for decision support; others mentioned interoperability, aggregation, and reporting of data; some are using them directly for structured data entry into an EMR, or for order entry more specifically.

We heard about indexing for information retrieval this morning and at least one user was using it just as a domain ontology for their application. Some of them needed a drug domain ontology specifically.

We heard a lot of different and interesting input from the testifiers. I think we would all agree it's a very worthwhile undertaking, and it's challenging to go through and try to summarize what was many hours of testimony, but I think there were certain trends we could extract from everything we heard. Interestingly, I prepared most of these slides last night, and after listening to the testimony this morning -- testimony from a range of types of users -- I didn't really have to change the trend summary slide at all, which validates to a certain extent that there was some consistency there. At least that's the theory.

So what were some of the trends we can identify and again, I encourage people to jump in and add to or modify anything that I'm about to present.

I think we heard from a number of people that SNOMED CT is an excellent basis for a standard clinical terminology.

I heard there's good domain coverage, rich multiple hierarchies to support reporting and decision support, a good terminology model -- a good formal model -- and maintenance practices that maintain the integrity of the model.

DR. STEINDEL: I think we heard that there are maintenance practices that could be improved. You spoke to the maintenance of the model, and I think that's a little different from the maintenance of the actual terminology.

DR. SUJANSKY: What I'm referring to is the model. The model remains --

DR. STEINDEL: I think with respect to the model what you are saying is quite correct.

DR. SUJANSKY: That's right. We did hear, though, on the other hand, that there could be better processes for submitting requests and for tracking those requests over time and so forth.

We also heard, while SNOMED CT is an excellent basis, that it's not sufficient. We heard this from a number of people. Specifically, the drug content is not sufficient to support all the uses of an electronic patient medical record system, and there needs to be some additional specific lab content. We heard about many uses of LOINC and about the specific, detailed, comprehensive content of LOINC.

DR. COHN: I was actually going to comment that this begins to beg the question of exactly what the limits of the domain we are talking about are, but, you know, certainly there are areas where we know SNOMED is not going to be sufficient. We also heard about device codes, so if indeed we are talking about those as part of the domain, that probably would be another point.

DR. SUJANSKY: Okay.

DR. COHN: I mean and others to be determined.

DR. SUJANSKY: We did hear at the same time that there's more and more nursing content being incorporated into SNOMED itself. That's something relatively new, but it's an area in which SNOMED is accommodating requests for additional types of content.

We also heard, I think quite consistently and in no uncertain terms, that SNOMED CT is currently too costly for widespread deployment in vendor systems. Vendors stated their customers simply are unwilling to pay the additional cost of licensure to include SNOMED in their applications. Although we heard great things about SNOMED and the many potential uses for it in commercial systems, we also heard very uniformly that the systems in which it has been incorporated have not yet been deployed and are not in use in the commercial marketplace among these vendors, and that is primarily because of the cost issue.

However, at the same time, I think we got a sense from the vendor community and others that the government license that would change that is certainly anticipated at this point. It doesn't sound like anyone is banking on it, but there is certainly anticipation and hope.

Additional trends. I'm inferring a little bit more here, but what I took away from what we heard is that the SNOMED CT structures per se are not used for data entry; they are not necessarily useful for data entry directly. We heard from a number of testifiers that they developed their own customized hierarchies or forms to create the actual user interface, or they are using the MedCine terminology.

We also heard from Kaiser this morning that they are using SNOMED but have used actual facilities within SNOMED, to a certain extent, to create contexts, hierarchies, and additional subsets in order to create navigation paths that are more appropriate for interface use on top of SNOMED.

But nevertheless, I don't think we heard from anyone who is using SNOMED right out of the box, or a SNOMED terminology server, as their direct user interface.

So this creates the need, on the part of those who are developing their own user interface or using a different terminology, to map to SNOMED in order to gain all the benefits they say SNOMED gives them. So there is this mismatch there.

We also heard from a number of users of the MedCine terminology who expressed that they found it to be a very effective method for supporting clinician data entry in structured data entry applications, providing intuitive and useful navigation paths.

DR. STEINDEL: Walter, I realize at this point you are speaking on specific terminologies, but later on are you going to talk about trends and comments regarding mapping?

DR. SUJANSKY: I am.

DR. STEINDEL: Okay, good, thank you.

DR. BLAIR: By the way, it seems like we are really talking about observations at this point rather than trends.

DR. SUJANSKY: Well, by trends I mean observations on the part of multiple testifiers rather than more -- yes, commonalities.

DR. BLAIR: Findings or commonalities. Whatever. Trend is a good word.

DR. SUJANSKY: So MedCine was described as an effective terminology for clinician data entry. However, a number of its users also mentioned or acknowledged that it was weaker for data analysis, especially when compared to SNOMED; for instance, there are no multiple hierarchies, and there's some redundancy and ambiguity in the terminology.

Nevertheless, although it's not on this slide, there was some evidence and some testimony that MedCine could be used for reporting and for data analysis -- it doesn't preclude them -- but perhaps that capability is not as powerful as with other terminologies.

In the area of drug terminology, we heard from a number of people -- perhaps excepting Bob Dolin's testimony this morning -- that at a routed generic level the abstraction is adequate for most decision support functions.

Perhaps consistent with Bob's testimony this morning, it's not adequate for order entry and perhaps other functions. This raises a question, which we'll get to later, about the scope and requirements of the terminology selection we are doing, and what the implications of that are for which terminologies we select.

We also heard that timely updates are important in drug terminologies, whether they're being used in a generic knowledge base or in actual clinical documentation applications.

As to LOINC, we heard that LOINC, as opposed to SNOMED CT, is widely deployed in vendor systems and other systems. It's very successful.

However, we also heard from a number of testifiers that LOINC requires further development in its classification hierarchy. This morning we heard that one vendor -- and this is true of a number of vendors -- is building its own classification and abstraction hierarchies to make up for LOINC's deficiencies in this regard.

There are various detailed concepts for certain lab tests where clinicians think in terms of an abstraction over them, and those abstractions don't exist in LOINC yet.

We also heard that LOINC would benefit from an explicit semantic model, which would help one map it to other terminologies as well as maintain it, and that although some laboratory panels exist in LOINC, more could be added, and that would be helpful.

Mapping. This might be the most common observation, actually: we heard from a number of testifiers that mapping of terminologies is difficult and costly, and that was putting it politely. The maintenance cost of a terminology set or terminology group increases logarithmically, as one person put it, as more terminologies are used, as multiple terminologies are used.

Also, we heard from a number of people that mappings are often context specific, which possibly implies that the mappings themselves -- mappings, for example, in our current world view, between the core terminology and other, related legacy terminologies -- maybe shouldn't be part of the terminology standard itself, but should be provided outside of it. We can discuss that.

We also heard that within a core group of terminologies there's a need for relatively tight cooperation and interoperability to make that core work -- to really make the situation different from where we are now, where organizations are already using their own groups of terminologies but having a lot of difficulty managing the overlaps among those terminologies and their disparate and uncoordinated maintenance.

In order to have a tightly integrated core, you need more cooperation and interoperability, and we heard there's a need for non-overlapping and consistently modelled content in that core terminology. Again, this is related to the mapping: consistent modelling is important in order to enable mapping between those terminologies.
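
One small sketch of why consistent modelling matters for mapping: if two terminologies define concepts against the same attribute model, candidate synonym pairs can be detected by comparing definitions. The concepts and attributes below are invented for illustration:

    def definition(concept: dict) -> frozenset:
        # Normalize a concept's defining attributes for comparison.
        return frozenset(concept["attributes"].items())

    term_a = {"id": "A:123", "attributes":
              {"finding_site": "lung", "morphology": "inflammation"}}
    term_b = {"id": "B:987", "attributes":
              {"finding_site": "lung", "morphology": "inflammation"}}

    if definition(term_a) == definition(term_b):
        print("candidate synonym pair:", term_a["id"], term_b["id"])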

DR. COHN: Can I ask a question about that one because you, it seems to me there's sort of two different issues. One appears to be interfacing and the other seems to be mapping.

Is there -- I guess I'm struggling with that one. Mapping to me is redundant, or can be sort of redundant -- you map back and forth -- but then there's the extension of concepts and the interfacing across domains. Which are you referencing? Are you making a distinction, and do you think it's valuable? Should we go back to the last overhead?

DR. SUJANSKY: When I think about both of those issues, having a common model is useful for both. Take, for example, the interfacing type, where you have different domains you want to relate: we heard that chemicals appear in multiple domains, and it helps you to relate concepts between terminologies if you have a common model, a common reference hierarchy, that shares those terms in their definitions.

DR. COHN: Okay, I guess maybe it is the last bullet you have there which addresses that issue.

DR. SUJANSKY: Right. But also in terms of identifying where you have synonymous concepts across different terminologies. That would benefit from a more consistent model as well.

DR. STEINDEL: Simon, just as a comment, if you recall the August testimony from the experts, mapping was also identified as a very key element and a difficult element.

DR. SUJANSKY: There are two, as we get to the implications of this, I think we'll need to discuss both of those issues. If we identify a group of terminologies, then what recommendation should the subcommittee make regarding mapping among those and also more generally, what recommendations should we make about mapping to other terminologies where mapping is relevant.

And then, I believe, this was the last observation, also made by a number of testifiers: clinical terminologies need to co-exist with and support business processes. Organizations will not sacrifice much in terms of their business processes and their bottom line in order to adopt highly structured clinical documentation practices and standard terminologies; these need to fit in with organizations' core needs going in.

Any other comments? Any trends? In the sense that I mean trends that I've missed or that I've misstated.

DR. GREENBERG: You didn't have very much about the drug terminologies that we heard about -- maybe because there weren't too many trends, I don't know. Although none of them was sufficient, it seems there was some actual use of a few of them that was more or less meeting the needs of the users.

DR. SUJANSKY: That's a good point. I think we could say that a number of people testified that they are using First Data Bank, NDDF and they are satisfied with that. It's meeting their needs.

DR. BLAIR: There's a lack of consensus on exactly what would be the best solution -- many different options. There was NDF-RT, RxNorm, First Data Bank, and there was a lack of consensus as to exactly how we go forward, although a number of people wound up recommending that we should have something that didn't have a cost, a la RxNorm and NDF-RT, something that would be public domain, and that that would be a high priority. So maybe in our further deliberations that will carry the day.

That, I think, is subject to further discussion. Is that a fair way of summarizing that, Marjorie? No?

DR. GREENBERG: Yes, although I think it was made clear there's always a cost. It's just who bears it.

DR. SUJANSKY: I think what the vendors are saying is someone else should bear it besides us and our customers. That is what I heard as what they meant by low cost.

DR. BLAIR: One other thing -- and actually, Walter, I think you caught it, but maybe not with the emphasis that at least I was hearing -- for those folks that were choosing SNOMED CT, so many of them seemed to mention that they were being driven by a business need for decision support and outcomes analysis; that was what was driving them to SNOMED CT. That appears to be a set of emerging applications that I think we should take note of.

DR. SUJANSKY: I don't think I picked up on that as strongly as you did.

DR. FITZMAURICE: I agree with everything that Walter said and added to it; I think he's done a great job on this.

One of the things that I heard was that, yes, we need a reference terminology that then spreads out to the applications, but one of the things we have to do is have something for those who enter the data: whether they are using their own natural language or a different set of terms that they are comfortable with, those have to be mapped to get the data into someplace like the data repository. So it's not just translating out into applications; it's also translating on the way in, inputting the data into the system.

DR. COHN: Yes, my sense, Michael, is many vendors will be more than happy to help you with that.

DR. FITZMAURICE: I didn't say they were solutions. It's just --

DR. COHN: Okay.

DR. SUJANSKY: The point you are making -- I think what was articulated by a number of the testifiers -- is that the user interface issues in applications are very, very important, and they shouldn't be overlooked as we envision an entire solution. If you want to have controlled terminologies for all the data analysis, you can't overlook that the data needs to be entered consistent with that in one way or another. But aside from the fact that SNOMED itself out of the box isn't a good way to do that, I don't think we got any consensus on what is a good way to do it.

DR. COHN: Yes, I think maybe what we heard is that the terminologies will not solve all the problems.

DR. FITZMAURICE: It's one good way.

DR. COHN: Walter, maybe I -- and I apologize, my short-term memory seems to be failing me on these sorts of visuals; look back at the first one, but I'm trying to remember -- what I heard very strongly was this sort of reaffirmation that the government has a major national leadership role in all of this. The industry really does want terminologies, but it wants government leadership and direction to help guide the way on this one, and I think --

DR. SUJANSKY: I was just going to say that. I hadn't seen the results in the first slide that Susie showed at the time I prepared these slides, and I was just going to mention, having seen them now, that one of the responses that was clearly rejected was that the government should do nothing and let the market solve the problem.

DR. BLAIR: Could I echo that a little bit? Maybe because I paid more attention to the written testimony, which Susie summarized, there's just overwhelming support for the point that you just made. It's documented: when they picked what the government should do, the numbers were overwhelmingly in support of that. Susie had that in her summary.

DR. GREENBERG: Just on this issue of user interfaces and user-friendly data entry versus reporting and data analysis: I'm no expert on SNOMED, so I'm not even sure I completely understand the uses of these terms, except that I repeatedly heard about the issue of pre-coordination versus post-coordination, and that enhanced post-coordination was definitely needed. That seemed to come out from so many different people.

DR. SUJANSKY: That's true. A number of people did mention it. I think they are specifically asking for a more constrained or better defined model for post-coordination and for relating post-coordinated concepts.

DR. GREENBERG: I think it's a standardized grammar.

DR. SUJANSKY: Good point. I believe that no one pointed to any particular terminology as having that already.
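
To illustrate what such a standardized grammar for post-coordination might look like, here is a sketch; the syntax is modelled loosely on the compositional grammar SNOMED CT later standardized, and the parsing is purely illustrative:

    # A hypothetical post-coordinated expression: a focus concept
    # refined by an attribute-value pair.
    expression = ("64572001 | disease | : "
                  "363698007 | finding site | = 39607008 | lung structure |")

    focus, _, refinement = expression.partition(":")
    attribute, _, value = refinement.partition("=")
    print("focus:    ", focus.strip())
    print("attribute:", attribute.strip())
    print("value:    ", value.strip())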

DR. GREENBERG: Right, although one of the people who talked about MedCine thought it did better as an entry mechanism. But even so, I thought what was really interesting was that Dr. Zinder from DOD, when he talked about the use of MedCine, shed some light for me in pointing out that he really didn't see DOD's use of MedCine as making MedCine and SNOMED competitors; at least in his case, he thought they worked well together.

They had different strengths and they worked well together, and, as you pointed out, if you weren't going to use MedCine you would have to do something like what Kaiser was talking about with the subsets, but there needed to be that kind of user interface.

DR. SUJANSKY: Right and that there needed to be a mapping from that to something that was well suited for -- something that was considered more optimal for the data analysis.

DR. GREENBERG: I think these are findings that are important, too, and relevant to the CHI process -- a number of people around this table were participants in CHI -- because, as Simon said, standards in and of themselves are not sufficient. We know that they need to be implemented. I certainly learned a lot in the last few days about what some of the issues are, and I want to congratulate those who put together the testimony as well as those who testified.

DR. KRAHM: It really piggybacks on what Marjorie said. I thought Lee Min Lau brought it to light in her presentation. It's not unique to LOINC, but in many situations the assumptions made as these are implemented are crucial, especially on the interoperability side. She pointed to LOINC being implemented in a large organization: if you don't have control -- and it gets back to implementation guides and the assumptions made -- then you don't have complete interoperability just by adopting the vocabulary, the terminology itself.

It speaks to this need for detailed implementation guides. I think that's been brought up before, but it should be noted.

DR. GREENBERG: This kind of goes back to the CHI discussion, but I think it's relevant as we look not so much at trends but at one of the things that was kind of missing.

I thought we heard a fair amount about decision support, actually, as a way to engage the clinicians in doing data entry as well as to really have value. I guess in the questionnaires -- which I will admit, having just returned late on Monday, I did not have a chance, as Jeff did, to read -- apparently you heard about outcomes, but I don't think we heard much in the testimony about outcomes.

And I was thinking back to what Carol Bickford said about the domains that were identified by CHI and what about the outcomes piece.

One area that is part of the outcomes piece is issues related to functional status. We did hear about adverse events, and that's one aspect of outcomes, but I felt that that was kind of a gap. Maybe it's because nobody has really implemented these to the extent that they could look at outcomes, because you obviously have to have a data repository and then you need to do analysis and all of that before you can actually look at outcomes, as opposed to an individual outcome where one person had a drug and had an adverse event.

But to me, I just think we should keep that in mind. I mean, I was talking a bit to -- let's see, names are the first thing to go -- not Carol, but you know who I mean, from Kansas City: Judy Warren. I even know her. Jet lag, that's my excuse.

And she said this is an area that they are going to be working on with SNOMED -- the nurses, the folks with the nursing terminology -- because functioning and other such aspects are certainly assessed by nurses. So I just think we should -- I don't know exactly what happened with the ICF, and, it being more of a classification, it probably didn't meet the criteria, but I don't think we heard much about functioning, those domains, and the coverage of the terminology there.

DR. SUJANSKY: So you are saying it was a trend in something that wasn't said by the testifiers.

DR. GREENBERG: It was trend in something that was not said, yes. At least to me it was important.

DR. BLAIR: I have one piece here and I don't know whether this is an issue or refinement or even beyond -- well, one thing wasn't clear to me.

It sounded as if there were many people, in discussing the relationship between LOINC and SNOMED, who referred to mapping; some people said integrating LOINC into SNOMED.

I think Jim Cimino wound up indicating that the integration would somehow reduce some redundancy, but I didn't hear redundancy -- the degree of overlap -- mentioned very often. I always tended to think of LOINC and SNOMED as not totally mutually exclusive but pretty much complementary. But anyway, is there an overlap issue between LOINC and SNOMED that needs to be addressed or resolved?

DR. HUFF: Yes. There was less overlap until CT; the incorporation of the Read Codes caused substantially more overlap between the two terminologies, and that has been ongoing and, you know, sort of unresolved. The addition of the Read Codes -- or Clinical Terms Version 3 from the UK -- caused both overlap in lab LOINC and especially overlap in clinical LOINC, and so those two things, you know --

DR. BLAIR: Is this a valid issue for us to include among the things we would --

DR. HUFF: Well, it did not come out in the testimony per se. There are probably a lot of things that could be said that I'm not sure we need to say.

DR. COHN: I was going to say, I think Walter actually handled this in his bullet very well when he talked about strengths of SNOMED versus -- I think what we heard is a lot of people are out there using lab LOINC; that's what they felt really should be used for the lab pieces.

DR. SUJANSKY: That is right. Of course, part of that might have been because of the cost issues they also expressed.

DR. BLAIR: Susie has alerted me that Dan Zinder would like to contribute a comment on this.

DR. SUJANSKY: We are going to go on. Just to kind of focus everything, we are wrapping up the part where we are talking about the trends and what we heard across the various testifiers and we are going to go on and talk about the next steps and implications of those trends perhaps for the next --

DR. ZINDER: I just wanted to comment on what you said about the weaknesses of MedCine being redundant and ambiguous -- you focused on one point from the EPIC discussion -- and a lack of multiple hierarchies.

The redundancy that was discussed was a specific example based on ICD-9, and it's important to understand that MedCine has incorporated ICD-9 as its diagnosis portion. The rest of it, however, we found in DOD to be extremely non-redundant and non-ambiguous throughout symptoms, history and physical exam, and it does incorporate multiple hierarchies, which is very valuable to us.

DR. HUFF: Well, okay, that's an update, especially on the multiple hierarchy part. Yesterday we heard that it --

DR. ZINDER: Well, on diagnosis -- that's what I'm trying to explain. But maybe I'm using multiple hierarchy in a different way; for instance, the example I use of the ear is a type of multiple hierarchy through symptoms.

DR. HUFF: That's not a multiple hierarchy. That's standard taxonomy subsumption.

DR. ZINDER: Fair enough, my error on that. However, the redundancy and ambiguity part is a point that I wanted to bring up because it emphasizes the complementary nature of the tools.

DR. SUJANSKY: That's fair; perhaps that was a little too anecdotal a piece of testimony to include in the summary. At the same time, we also did hear, I believe, that there is some degree of redundancy in SNOMED CT as well, to be fair.

DR. BICKFORD: I'm Dr. Carol Bickford, American Nurses Association. I know that you are going to be moving into your next session, but I wanted to provide some think-abouts as you move into that process.

As I was listening to the testimony, it became unclear who the users are of whatever it is you are going to decide to recommend. Is it all clinicians, or is it only physicians, and diagnoses and interventions related to billing and costing, although you talk about this being the PMRI?

So I would remind you that there are non-physician users who need to be considered in the decision making. I would also caution you to talk about decision support and assure that you have a common definition, because as you toss out comments that this product or this terminology supports decision support -- what does that mean?

And then there's the discussion about MedCine. I don't know that those of us in the nursing community have considered MedCine and tested it in our domain of practice, so I'm concerned that you may be making some decisions that don't think about the rest of the clinicians in the health care delivery system.

DR. COHN: Carol, thank you very much for your comments. Walter, would you like to move into --

DR. SUJANSKY: Yes, why don't we move on. First, to kind of recap where we stand as far as the subcommittee and this particular project and this particular activity: Jeff talked about this at the outset of the session a little bit when he was setting the context, so this is a little redundant unto itself, but I'll go through it anyway, just to frame the next part.

So the first thing we did, way back when, was articulate the scope and criteria. We did that in a number of ways, but there was some testimony from experts back in August that we incorporated into a discussion that ended up producing a document of scope and criteria, and coming out of the discussion, that document specified certain things.

One was that we would be striving towards a core terminology group, not a single core terminology, as a recommendation. There are also important related terminologies that we needed to at least consider in our recommendations and work to accommodate in some way, and we might want to think about the relationship of the terminology standards to the message standards that were previously recommended. But, as we talked a little bit about yesterday, trying to develop some kind of comprehensive information model standard was outside the scope of what we were doing in this project.

But then more recently, to help flesh out that scope and criteria, Stan Huff -- with comments now from a number of people on the subcommittee -- developed a more specific requirements document: what are these standards for PMRI, what are the requirements for the standards, how do we envision them being used, and so forth.

I think that's a very useful document as well, one that we need to consider as we move on and that we will touch on as we go to the next step.

Then we had the questionnaires, of course, for the relevant terminologies. We got back 40 responses. We did a formal analysis with respect to the technical, organizational and licensing criteria that we had specified earlier in December, and from that we got our 12 candidate terminologies that we then heard about in what was really more informal user and vendor testimony over the last two days.

And that testimony touched on a lot of very important practical considerations to complement the technical, organizational and licensing criteria that we had articulated.

So in some sense now, we have a good data set. We've defined the scope and criteria fairly well at this point. We've gotten some data about the terminologies that we compared to those criteria and we've also received some good input from the user community about the practical considerations of selecting a terminology standard.

Where do we go from here? I think the next step is really taking our 12 terminologies -- what I am calling our candidate set -- and refining this into a preliminary recommendation for a core terminology group. But before we can do that, or as part of doing that, I think we need to resolve some differentiating questions that remain.

So what we are going to strive for is a small group of terminologies that have the required domain coverage and no or minimal overlap. That's ideally where we want to go with this core terminology group, but we need to resolve some things first, and I'll get into that in the next slide.

Beyond this immediate step, just to look a little bit further out in the future, I think it's very important -- we heard about the problems with mappings and the challenges of that and the real desire on the part of the user community to have a tightly integrated and coordinated core terminology group. I think we need to give some consideration to how that's going to happen and make that part of our recommendations.

We heard a lot about that from Jim Cimino yesterday -- a proposal in some detail. We may want to take that up and think about that, and probably about other ways to achieve coordination and governance of such a resource, and indeed how much coordination and governance is desirable and possible. So this has to do with mapping and harmonization of the core terminologies, and with processes for maintenance and release and taking requests for enhancements and so forth.

And then also, as I mentioned, down the road we want to think about the relationship to the messaging standards and to these other important related terminologies, and what we want to say about that, how much we want to say about that.

But again, the first step, the most important step, is honing this candidate set into a preliminary recommendation for a core terminology group.

This is what I think we need to start having a discussion about: certain differentiating features. So the first one, which has come up many times today and throughout the testimony, is this notion of a reference terminology versus an interface terminology. What is this standard, this terminology standard that we are working towards, about?

Is part of our work to define the, quote, unquote, interface terminology -- a terminology that can be directly used to capture structured data entry from users -- or is it more oriented toward a reference terminology, one that allows users of that terminology to develop their own interface terminologies, structured as they will for their particular purpose, or based on their particular preferences, or using their particular favorite available interface terminology, and then map that to the reference terminology?

I'm going to read the first paragraph of Stan's document. I apologize -- I had to look this up quickly, and I believe this is the latest version of it, but if it's not, please correct me, Stan, especially if this part has changed.

The overall goal, as stated here: NCVHS shall select a set of standardized terminologies for recommendation to the Secretary that enhances data comparability and interoperability of health care systems and that enables the exchange, aggregation and interpretation of computable, patient-specific health care data.

Assuming that that's the current statement and that you as a subcommittee agree with it, what are the implications of that statement for this differentiating criterion, reference terminology versus interface terminology?

DR. HUFF: No, that hasn't changed. Some stuff before that has, but that hasn't changed, and I don't see these as competing necessarily, but as potentially a progression of things. That is, I think you need to start with the reference terminology, and as you establish that -- you know, the mechanisms of context groups and value sets within HL-7, etc. -- you start addressing the parts of the interface terminology, and that allows people to contribute their experience, where their experience is found, into the reference terminologies. So you progress from a reference terminology to, in fact, being more inclusive of the characteristics of the interface terminology over time. I see this as, sort of, the necessary foundation of a reference terminology and then greater and greater sharing of interface terminology based on that common core -- not an incompatibility, but in fact an evolution that would be very systematic and very beneficial.

DR. SUJANSKY: So just to probe on that a little and clarify for me: is an implication of that that the interface components that will evolve over time should, from the beginning, be mapped and linked to the reference terminology, as opposed to being developed independently and later mapped?

DR. HUFF: That's right. What I see happening is adoption of the reference terminology, and then people saying, oh, you know, the more user-friendly term or synonym for this is this, or here's the set that should be the most commonly used set in a problem list -- you know, making more and more useful subsets that support structured data entry, and common terms that are used for those things, etc. But you are exactly right.

All of those things are not very useful if they are created in a vacuum, or created de novo in some other environment so that you then have to map them. What should happen is to say, oh, I've got the reference terminology; these are the concepts I want to use.

That's defined in the reference terminology, and then you develop synonyms and groupings that are useful for structured data entry that people can share, because those definitions, that additional information, are based in the reference terminology.
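
As a rough sketch of the binding Dr. Huff describes -- the concept identifier and names below are invented for illustration, not actual SNOMED or LOINC content -- an interface term can carry its user-friendly display text plus a pointer to the reference concept that defines it, so that shared subsets stay grounded in the reference terminology:

    from dataclasses import dataclass

    @dataclass
    class ReferenceConcept:
        """A concept in the reference terminology (identifier scheme is invented)."""
        concept_id: str            # e.g. "RT:12345" -- illustrative only
        fully_specified_name: str

    @dataclass
    class InterfaceTerm:
        """A user-friendly term bound, from the start, to a reference concept."""
        display_text: str            # what the clinician sees during data entry
        concept: ReferenceConcept    # its definition lives in the reference terminology

    # A shareable subset ("value set") for structured data entry, e.g. a problem list.
    mi = ReferenceConcept("RT:12345", "Myocardial infarction (disorder)")
    problem_list_subset = [
        InterfaceTerm("heart attack", mi),  # synonym resolves to the same concept
        InterfaceTerm("MI", mi),
    ]

    # Because every entry is grounded in the reference terminology, captured data
    # stays comparable no matter which display text was chosen.
    for term in problem_list_subset:
        print(f"{term.display_text!r} -> {term.concept.concept_id}")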

DR. SUJANSKY: And is it also another implication of what you are saying that, for the immediate task -- either the very next step of a preliminary recommendation of a core terminology group, or even the initial recommendations that will be made by the NCVHS -- if the selected terminologies do not at this time support direct user entry, if they don't yet have these features that will evolve over time, then that's okay?

DR. HUFF: That is correct. I would say let's adopt a reference terminology and then start getting contributions that really make that easier to use as an interface terminology.

DR. STEINDEL: Thank you for asking Stan first, because you have just resolved everything, and I would just like to see what you have just said articulated in the report: that we are going after a reference terminology that must evolve and link to interface terminologies.

This is the experience they are having in the UK right now -- the problem with the interface terminology and how to link it now with the reference terminology, and actually how to put that burden on top of SNOMED.

DR. COHN: I guess I'm sort of agreeing with Stan. I listened to your definition, and I think that probably puts decision support in among the purposes here, but I guess I had always thought we were talking about a reference terminology. As for the interface terminology -- and maybe I've been around too long -- I sort of felt that that was the province of smart developers: how exactly they take that piece of the terminology and reason with the interface and all of this.

As I said, I've been very impressed with how MedCine sort of made that all fit together, but I thought that was something different, something that would vary from one vendor to another, potentially.

I'm not sure, at the end of the day, that it really is the province of the reference terminology developer to also be the maintainer of the interface terminology, but that's something to be determined.

There may be a number of choices.

DR. ZUBELDIA: On the term interface terminology, I think we need to make sure that it can never be confused with an interface engine, or with an interface to an external user of the reference terminology where there may be two systems talking to each other. I would rephrase it as human interface or user interface terminology, to make sure it is not confused with a machine-to-machine interface.

DR. SUJANSKY: That's a good point. Yes, we shouldn't assume everyone is steeped in this terminology as much as we are.

DR. BLAIR: I really sense there's a great deal of convergence among the members of the subcommittee on these ideas. It almost seems like we are immediately working towards refining a number of thoughts, so that's kind of what my question goes to.

We've talked about a core set of terminologies, and I think one of the criteria for being in the core is that the core consists of reference terminologies, but in this recent discussion it was as if it were singular, a la SNOMED.

DR. STEINDEL: No, no, I don't think so.

DR. BLAIR: Thank you. Okay, because previous to that, we had talked about the idea that, in addition, we needed a drug reference terminology of some type, and LOINC in addition, so we seem to be looking at three. Is that right?

DR. SUJANSKY: Why don't we go through some of these differentiated criteria and we may get to the same place.

DR. BLAIR: That's fine. I just want to make sure, from a terminology standpoint, that when we are talking about a reference terminology, that is a criterion or characteristic of the terminology that would make it qualify for the core. Is that correct, Walter?

DR. SUJANSKY: That's right. We are talking about the core now, exclusively, right now. So unless there are any more comments on that point, I'll go on to the next point I have.

Acceptable licensing cost. What do we want to say about that? Let's just get right down to brass tacks. We've heard from a number of testifiers that First Data Bank's NDDF works really well for them. They are already using it; it's already incorporated in their applications.

It's got all the features they need. It's got all the different levels of abstraction for just about every function. However -- at the same time, just to finish my thought here -- we know that the costs of the license are not trivial. What are the implications of that?

DR. HUFF: Well, you know, I have very strong feelings about this, and I think it actually came out pretty well with the other people.

I think we have the requirement that this terminology, this core set of terminologies, should be free for use for, you know, everybody in the US, and I think anything short of that is a problem. There are two models for doing that, or maybe there is just one model.

I mean, with LOINC there's a bunch of volunteer stuff going on, but essentially they are funded for the hard work through the National Library of Medicine, and we've got the contract coming with SNOMED, we hope -- anticipated, as you said. I think that's the right model, and the reason I feel that way is that there's no amount of goodness that I know of that would overcome the potential barrier to people developing knowledge bases, capabilities and tools around these terminologies if it's obscured in any way by cost and/or by intellectual property rights.

I just can't imagine any amount of goodness in the terminology that would overcome that, so I'll stop there, because I obviously have strong feelings about that.

DR. STEINDEL: Yes, I agree with what Stan is saying in this, and I think we have to take a look at some of the terminologies that were talked about, and also briefly mentioned here, that do incur considerable license costs to the institutions that use them, and realize that the institutions are paying these license costs -- like, for instance, for CPT -- for different reasons than the use we are putting them to.

First Data Bank is used primarily as a system within institutions to drive their pharmacy systems, which is a totally different need than what we are talking about -- the delivery of pharmaceuticals to patients and the maintaining of pharmacy inventory. In that mode, the institutions are willing to pay the cost of licensure for First Data Bank, and willing to get weekly updates, because that's the type of system they need to drive the pharmacy systems, and First Data Bank has competition in that area.

And not every institution uses it. We have to realize that we are making our decision on this system for a different use, and for that use I think we need to go along with what Stan articulated in terms of the cost to the users.

This is what we heard from EPIC, and I think we have heard it from many other people over the years: for the use that we are proposing at this point in time, the clinicians, the clinical support systems, are not willing to pay the money for licensure costs. It has to be absorbed in another way.

DR. HUFF: Just a brief clarification. What I didn't say, and what I should have said, is that my statement shouldn't be taken, for instance, as saying that we shouldn't do First Data Bank. If we decided to do First Data Bank, we should do it under the same model that we did SNOMED CT: there should be a government contract with First Data Bank to do that, if that's what we decided to do. And our choice, I think in that case, is: should we do a combination of RxNorm and NDF-RT, or is it more cost effective to contract with them?

I just think we have to be willing to make it free for use one way or the other, either because it's already being supported through a government agency, like RxNorm, or by contracting with somebody else to do it, like we've done with SNOMED.

DR. STEINDEL: What I was pointing out, Stan --

DR. HUFF: I'm willing to do a SNOMED, I guess.

DR. STEINDEL: I agree that even if we choose something like First Data Bank, yes, that would be the model that we would go to. But what I was pointing out was that we heard from a lot of people who are currently using First Data Bank, and probably the reason they are using it is that there are other areas of their institutions that have a business need to pay for it.

DR. SUJANSKY: But at the same time, Steve, just as a clarification, I also know a lot of organizations that are just PMRI vendors, developing clinical applications and not doing any pharmacy applications, that license drug data bank products like First Data Bank, because it's really the only source of a drug knowledge base that's integrated with decision support, for CPOE and everything else they want, and so forth.

DR. STEINDEL: Because they have found a need.

DR. SUJANSKY: They have found a need, but I think that's beyond being a detail. In fact, it also has implications for what we are doing, because it implies there will continue to be a need for all the decision support logic and knowledge and so forth in the clinical realm that exists now -- and I don't think any of us envision the development of all those knowledge bases being something that's part of the terminology standards.

That's where this interoperability and mapping and so forth, and the related legacy terminologies, come about. That's where it comes in.

That's why it has to fit. So, in my opinion, if a standard comes out that perhaps is not First Data Bank, perhaps there won't be a model such as the one for SNOMED.

DR. COHN: Why don't we let somebody from First Data Bank comment?

DR. SUJANSKY: Just to finish this up: if there isn't this model, we need to answer the question of, well, how is this going to work for me in my CPOE or my EMR system where I need to do decision support, now that this other thing is the standard?

We need to think that through and be able to answer that question for people.

DR. ZARRO: Tom Zarro with First Data Bank; I'm Vice President of Sales and Marketing. I'm also a pharmacist, and, of course, I have a very distinct view of the drug data bases and how they are used. They are certainly used in many cases in pharmacies and institutions, and they are also used in a lot of different areas in the development of an EMR, the electronic medical record.

I would also ask the group to keep in mind the intense work that goes into maintaining a drug data base, and the different concept levels of a drug product that are going to be necessary for this type of coding. That's not just a statement about us, but also about our competitors who do the same type of work.

But I think you are looking at a drug concept at a lot of different levels -- and we've heard them here, starting at a med name only, down to the NDC level -- so it's not going to be a single concept, I don't think, that you see as the drug concept that's necessary to do this.
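
As a rough sketch of those multiple concept levels -- all names and codes below are invented for illustration, not actual NDDF or NDC content -- a single medication can be represented at several layers of abstraction, each serving a different function:

    # Hypothetical levels of drug-concept abstraction, from medication name down
    # to a specific packaged product; every value here is a placeholder.
    drug_concept_levels = {
        "med_name": "atorvastatin",                          # name/ingredient level
        "clinical_drug": "atorvastatin 10 mg oral tablet",   # name + strength + form
        "branded_drug": "ExampleBrand 10 mg oral tablet",    # hypothetical brand
        "ndc": "00000-0000-00",                              # placeholder package code
    }

    # Different functions need different levels: decision support might reason at
    # the clinical-drug level, while dispensing and inventory need the NDC level.
    print(drug_concept_levels["clinical_drug"])
    print(drug_concept_levels["ndc"])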

And the cost involved in maintaining this, of course, as many of you know, is quite high. There are a lot of people involved in it.

We at First Data Bank, along with our competitors, certainly add value to information about a drug product with our clinical data bases and drug interactions that you are all aware of, and many other things in that area.

So as a comment from First Data Bank about this initiative to have a common code set or core terminology: I think it certainly is a great idea. I would like to see that happen going forward also.

I also think that interoperability through the private side, the private data base providers, is going to be a necessity that we have to deal with, both First Data Bank and our competitors.

So I have a different view here. I just want to make sure that the subcommittee looks at the huge task that it will be for someone to maintain a timely drug data base going forward.

DR. COHN: Amen. Let me make a comment here, and maybe this will help smooth over this discussion. A, I think we all agree. We obviously have gotten to the hard areas. One is here; another one is devices. The issue is how that all works. I think that these are sort of the critical elements of a reference terminology, but I don't think we have the answer on the basis of the testimony yesterday or today.

I think that these are both likely areas where we are going to have to say, gee, these are critical elements, and we are likely going to have to do more focused hearings. I mean, we heard that there were some things being developed in the public sector, but none of them are out or being used yet.

There's obviously a long history of things developed in the private sector that have use. Where is the line between the interface, the reference terminology, and all of this value-added stuff? I don't believe we are going to come to a firm conclusion, except that we were going to talk about acceptable licensing costs, and whatever it is that we are talking about at the end of the day, the cost to the end user -- at least for the purposes we are describing, which are different than pharmacy and different than devices as a whole -- is literally going to have to be zero.

It isn't so much because of the cost itself. I think it's because of the barriers, the confusion, that surround the whole issue of the license. I mean, as one who has done licensing for terminologies, just having the license discussion with a terminology vendor can cost hundreds of thousands of dollars by the time you are done.

And it may be that the cost of the terminology is not very large, but you get lawyers, you get whatever, and it's six or eight months. So, just a comment.

DR. STEINDEL: Yes, Simon, just picking up a little bit on that. We are talking about some of the actual nitty-gritty that's involved with the recommendations, etc., and one of the points that Walter made, and a lot of people made, is about non-overlap in the core terminologies.

And you know, I'd like to point out that we heard very good testimony on SNOMED. We've heard some mention that the government is anticipating doing the license for SNOMED. We've heard Stan mention that with the incorporation of the Read Codes, there's now considerable overlap between SNOMED and LOINC.

I'd also like to point out that SNOMED does have the drug area. It's not that well developed in the US, but it's very well developed in the UK, where it actually serves as the First Data Bank equivalent and is maintained on that basis. So I think we need to be careful in the recommendations we make: if we have a set of core terminologies, we want to make sure that where there is overlap between various terminologies, we designate which one will be the core terminology for that area.

And if there is overlap, the overlapping content from the other terminology should be excluded by some type of subset mechanism or something like that, so our users -- the US users -- are relatively clear on what our recommendations are for that.
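
One way to picture the subset mechanism just mentioned -- purely a sketch; the terminology names are real, but the domain assignments are illustrative, not any decision of the subcommittee -- is to designate one core terminology per domain and filter out overlapping content from the others:

    # Designate which terminology is authoritative ("core") for each domain;
    # the assignments below are examples only.
    core_for_domain = {
        "laboratory": "LOINC",
        "clinical_findings": "SNOMED CT",
    }

    def in_us_subset(domain: str, source_terminology: str) -> bool:
        """Keep content only if its source is the designated core for its domain,
        excluding overlapping content from other terminologies."""
        return core_for_domain.get(domain) == source_terminology

    print(in_us_subset("laboratory", "LOINC"))      # True  -- kept
    print(in_us_subset("laboratory", "SNOMED CT"))  # False -- excluded as overlap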

DR. HUFF: I would agree. Just one point of clarification: as I understand it, the anticipated SNOMED contract does not include the use of drugs, so that might help a little bit in that area.

DR. COHN: Walter, I think none of us could really comment on the nature of the contract.

DR. SUJANSKY: That's fine. I mean, obviously there are implications of that with respect to the recommendations. To some extent there are some timing issues, but I think we've got some work to do in this area of the drug terminologies.

I think there's some further analysis that needs to be done of RxNorm and NDF-RT. We heard some issues about some of the coverage gaps, perhaps.

You know, 29 percent of the oncology drugs, plus there's an even bigger gap in the other information that we heard about. So is that something that is well under way to being addressed? Is it just in that therapeutic class? What's going on?

DR. STEINDEL: I think that gets to the issue that Bob raised, and some other people have raised, concerning maintenance of the terminologies. I think when you actually take a look at any one of these terminologies, you are going to find gaps, and in some cases large gaps.

And as long as there's a mechanism that exists within the terminology vendor and the community to fill those gaps in an acceptable time frame, I don't think the existence of the gap in itself should preclude a terminology.

DR. FITZMAURICE: I would also suggest that we might want to analyze device terminology as well. That may be a larger gap, one that can't be filled immediately, but pointing up that gap may lead to its being filled.

DR. SUJANSKY: That's an area where licensing cost is an issue as well, because UMDNS also has non-trivial licensing costs, so that's an area we need to consider.

DR. BLAIR: Just briefly, one of the other considerations is that if the criterion we've discussed so far is that the core consists of reference terminologies that would be available at no cost, then according to that criterion, SNODENT probably would also, you know, be in there as well.

DR. COHN: That's part of SNOMED, isn't it?

DR. SUJANSKY: You know, it's actually not. I explored that issue with SNOMED, because I had been told the same thing -- that the license to SNODENT was subsumed in SNOMED, in particular in the license that SNOMED is negotiating now with NLM -- and I was told by SNOMED that that is not the case.

DR. STEINDEL: I asked Dr. Gay that question, because I've heard both sides of this story, and it's one of those things where you have to look at the history of what's going on. The agreement between the ADA and the College of American Pathologists to include SNODENT in SNOMED was made with the issuance of up to version three of SNOMED, SNOMED International, and that version of SNODENT is included in SNOMED. If you look at the current version of SNOMED CT, which, of course, moved the SNOMED version 3 terms forward, there are extensive dental terms in there, which I'm assuming came from that version of SNODENT. What I am not aware of is whether there are newer versions of SNODENT which are not in SNOMED, and I think that may be what the SNOMED people are referring to.

DR. BLAIR: I've kind of violated what I'm about to say here, but I think that maybe we should keep the conversation at the level that Walter is taking us, which is basically defining what the criteria would be for what is in the core, rather than getting down to specifics or putting up other issues.

We are close to 12:00, and I don't know how much further you want to go, Simon. What is your inclination? Do we want to work another 15 or 20 minutes?

DR. COHN: I guess I would ask the committee members. We have a couple of options. We really need to start on the ICD-10 discussions at 1:00, and I guess the question is whether we can -- I mean, we can either go forward for the next while and then take a very short break, or we can break now.

DR. SUJANSKY: I have just one more item here on this slide that I can address, and then I only have one more slide, which is going to talk about the rough schedule over the next couple of months.

DR. COHN: Okay, and then we'll talk. Why don't we go on, and then we'll break and take a short time for lunch.

DR. SUJANSKY: This next point shouldn't take too long, but you never know the way things are -- we'll still be able to have lunch in 10 or 15 minutes, at least.

So I think we've kind of -- we haven't defined exactly what an acceptable license cost is.

DR. COHN: Actually I thought we just did. I think we heard from all the members.

DR. SUJANSKY: Is it zero? Okay, zero to the end user. That means to anyone who was testifying: vendor, organization, end user, academia.

Now, what is the importance of current widespread use among the criteria that were previously articulated? For example, in the criteria that were used to guide some of the messaging standards, what I call market acceptance was very important, and to a certain extent, acceptance of whatever is recommended is, I think, at least down the road, highly desirable of course, and important.

DR. BLAIR: I think you just picked up on the relevant words. I think it's desirable, but in this case we are heading into an area where these things, these reference terminologies, are emerging and don't have widespread use.

DR. SUJANSKY: Right, and is there a difference -- I mean, I'm kind of positing that there is, but I want to get everyone's input on this -- between widespread use and market acceptance.

We heard a lot of very good things about SNOMED CT. That would tend to indicate there's a lot of acceptance of it -- that, as I mentioned, it is an excellent basis for a standard terminology -- yet it's not being widely used. It hasn't been widely used by the vendor community because of the licensing.

DR. BLAIR: A good point.

DR. SUJANSKY: To some extent, there is acceptance, but is there agreement on that?

DR. HUFF: Yes. I mean, if all things were equal and we had two terminologies, and one was in current widespread use and one wasn't, then that's a criterion. If you are in an area where nothing is in widespread use, then I think you go with what people tell you they want, you know.

DR. HUFF: I think people could argue the NCI Thesaurus might be in more widespread use than SNOMED CT.

DR. HUFF: You have got 100 percent of the market in a very tiny niche.

DR. BLAIR: I think we should use the words market acceptance, for two reasons. The first is the point that you made: we did have a lot of testifiers, especially the vendors, indicating that they were looking forward and starting to implement, and were kind of enthusiastic about certain terminologies.

But the second reason is that, you know, when we take a look at the drug knowledge bases, that may become a consideration as well, and if you say market acceptance rather than widespread use, I just think that's more appropriate guidance for us.

DR. SUJANSKY: Okay, we will just need to be clear in our communications about what we mean by market acceptance.

DR. BLAIR: And I think we would list it as a desirable rather than a mandatory.

DR. SUJANSKY: Then the last point, which I think is a little premature to get into, but I'm just going to float it out there and maybe we can move right past it in the interest of lunch: how important is agreement to participate in coordinated maintenance on the part of the developers of these terminologies that will be selected, and when and how do we want to factor that in?

Okay, now we'll move on.

DR. COHN: Steve, looks like you are ready.

DR. STEINDEL: Yes, I think it's essential also.

DR. SUJANSKY: So this means we can't make any recommendations until we are essentially confident that these terminology development organizations will agree, or have agreed, to participate.

DR. HUFF: I think we can make a recommendation, but I think we couldn't go forward on the recommendations without the agreement. We can make our recommendation without that. Basically it just means that if somebody we recommended declines to participate, it will be eliminated.

DR. SUJANSKY: I think it's an important enough criterion that one might --

DR. HUFF: I mean, who realistically is going to say no, don't choose me, we won't play. I just don't see this realistically happening.

DR. SUJANSKY: I can see it happening.

DR. STEINDEL: Or I can see someone saying, we are not going to play because we are going to get used anyway; we won't need to play.

DR. STEINDEL: Walter, I can see it happening in the first round. I don't see it continuing.

DR. COHN: Walter, I think the other piece is that we make recommendations contingent on this, and that would be how I would characterize it.

DR. SUJANSKY: I'm in agreement, by the way, that this is an important criterion, and that's why I think this should work. Moving it up in the evaluation process is something we can discuss down the road as well.

Certainly with regard to preliminary recommendations, I don't think this is something that needs to be resolved. In other words, we don't have to get agreement in order to make preliminary recommendations.

DR. BICKFORD: Dr. Carol Bickford from the American Nurses Association.

The question I have for you -- and I don't know how you would like to answer it, but I'll be curious to hear what the answer is -- is, if you select one or several of these terminologies to support clinical practice, and they could indeed then be used as the billing transaction terminologies, what happens to those that have already been selected?

DR. SUJANSKY: Are you referring to the ones that have been selected as part of the HIPAA transactions and code sets?

DR. BICKFORD: Correct. From the transactions and billing piece: when you are looking at your clinical terminologies, you would be capturing the information, so you could easily do your billing associated with that.

An example of that would be the CAM initiatives, which could be used for clinical descriptions but could also be used for the billing pieces.

I guess I'm asking: you are moving into the next step, where you are talking about the clinical pieces. Well, what impact does that have on the existing terminologies, which didn't meet the criteria?

DR. SUJANSKY: I think I can address that -- well, I'll give you my viewpoint. I don't think it has any impact, in fact, because really we are talking about terminology standards for the clinical domain here, the clinical area.

And so if there are requirements and standards in place for billing, then those need to be respected as the requirements and standards. And there's nothing in the recommendations of the subcommittee -- and correct me if I'm wrong, Simon and Jeff -- that will say you need to use these for billing.

DR. COHN: That's correct, yes.

DR. BLAIR: And the other piece is that our focus right now -- we have to take a first step -- is on the reference terminologies, the core. There are other very important terminologies that may cover clinical domains or other purposes that will need to be mapped and that need to be part of the total picture, but we just can't address everything all at once, so our initial step is to focus on the core.

DR. GREENBERG: I just wanted to say, I think Jared said two days ago that this is certainly something CHI is looking at -- the relationship between the clinical terminology and the terminologies that have already been adopted or might be adopted under HIPAA administrative simplification -- and I think the one thing I heard that would probably be included in the subcommittee recommendations would relate to the mapping issues.

And, I mean, this has been suggested. I think we had one of our presenters suggesting that instead of submitting ICD codes, you could submit SNOMED codes. I mean, it's not completely outside the possibilities of the future, I guess, but --

DR. COHN: Actually, what I heard was medical necessity also being specified in clinical terminologies -- that, I believe, was the testimony.

DR. HUFF: I would have said it pretty much like Walter. I mean, we are making the decision about clinical terminologies. If those ever become used as billing terminologies, that's a decision for another day, and I think for a different group, or at least for us on a different day, but it's not today's question.

DR. GREENBERG: But I mean, we really did hear, as you pointed out, that the mapping issues are non-trivial, and that if those can't be resolved, it is going to be an impediment to implementation.

DR. SUJANSKY: As far as implications of what we are doing for existing standards for billing -- no.

DR. ZUBELDIA: The only potential implication could be with the claims attachment standard, where you send back an attachment that actually incorporates this reference terminology.

DR. SUJANSKY: But then again, whether that's allowed or required is a decision outside the scope of what we are doing.

So anyway, just to move toward lunch, just some quick scheduling information. This is my current picture and take on what's coming up in the next month.

The next step really is a final draft of the analysis of the questionnaire responses; that final draft will include the terminology developer feedback and the incorporation of some data that we received but wish to include nevertheless for the sake of completeness, and that will be prepared by Mitch. In late June there's a scheduled deliverable for what I'm calling a composite report on all the activity to date, summarizing all the input and all the data and all the discussion and where we stand. At or around that time there's a meeting of the entire committee, and so the question mark here is that we may have a letter on the progress that reflects more or less what's in that composite report, I imagine.

However, that report is unlikely to have the preliminary recommendations in it. Those will be prepared by mid-July. They will require more information gathering, more analysis -- some of these issues that we talked about today, going back and exploring some of these cost issues, exploring the coverage of some of the existing terminologies and its adequacy and their processes for filling the gaps, etc. -- so we can really make a more informed decision, albeit a preliminary one, and that will be in mid-July.

DR. COHN: Let's talk about this for a minute, because I know everybody would like to have some lunch, but should we talk about the "if," and, if we have a letter for the NCVHS, what it might include?

DR. BLAIR: Yes.

DR. COHN: Is everybody willing to have that conversation for a minute? First of all, I thought Walter did a very good job of bringing out some criteria as well as background and what we saw. Now, I'm usually happy to wait for the entire process to play out, but given the testimony on the first day of our meeting -- which was that CHI is on a very fast track and wants to have X, Y and Z done by the end of the year --

The question is whether a letter to the Secretary, informing him of our progress, as well as making any preliminary recommendations if appropriate, may be appropriate for the June meeting.

DR. GREENBERG: You said a letter to the NCVHS on progress.

DR. COHN: I actually meant a letter for the NCVHS.

DR. GREENBERG: To forward to the secretary.

DR. COHN: Exactly.

DR. SUJANSKY: I did not understand that in our brief discussion of it this morning.

DR. GREENBERG: Because either way you look at it, you will be reporting to the NCVHS on the progress, but the question is whether you want to forward something on to the Department.

DR. COHN: Exactly, and I guess I would look at everyone -- I can imagine it. I might posit some things that we might say, to see whether you feel comfortable at this stage.

DR. ZUBELDIA: Are you thinking about a letter to the Secretary with recommendations for CHI, or the final recommendations?

DR. BLAIR: There is a disconnect here a little bit, in that Walter was just telling us that he would have preliminary recommendations for us in July. You would have an update of the analysis in June, but not the preliminary recommendations until July.

However, I think, Simon, if I may blend these two together: if we were to wind up making a progress report to the Secretary that indicated that the testimony we received over these last two days was strongly supportive of certain things -- you don't have to use the words preliminary recommendations, but we can indicate that we have strong support for certain things that we've heard -- is that --

DR. COHN: I guess that's what I was sort of thinking about. Let me just try a couple of things that we might say to the Secretary. For example, a reaffirmation of the need for national leadership in this area. Or a reaffirmation by the industry and others of the importance of a core set of terminologies that need to be well integrated and non-redundant.

We heard a lot of testimony, and I could sort of make a comment about what I think the end result in terms of some terminology recommendations might be. We may or may not want to go there. One could, for example, imagine that even in the report that comes out at the end of the summer, there are still going to be areas that we recognize as requiring additional work.

I think we've already recognized that the committee is going to need to take some additional looks at device terminology, which we consider to be essential but which needs further analysis, and I don't know that Walter's analysis is going to change that conclusion. I may be wrong.

The question is, do we need to say that now? Do we want to go -- I mean, we can also make comments, either on SNOMED or LOINC or whatever, if we so choose, or not, at this point.

Steve?

DR. STEINDEL: Looking at this from the CHI point of view, as we heard on Tuesday and as we are all aware, CHI is moving on a very aggressive schedule. CHI has also had an approved recommendation for the use of the laboratory portion of LOINC.

We are all aware, as has been mentioned in this committee multiple times, that the government has been negotiating the licensing and anticipates putting into effect the license for SNOMED CT. And I think everybody is in agreement with what we saw from what Walter presented in the early part: we could indicate to the Secretary that we saw nothing in the testimony we've heard over the last few days that would lead us to express any concerns about those moves, and I think that would be very useful to the Secretary.

DR. COHN: So you are suggesting a comment that we, for example, would reaffirm support for the government moving forward with licensure of SNOMED for use -- with further analysis to be done, and provided in our final report, to determine exactly which domains are appropriate for the use of SNOMED -- and the use of LOINC for, I forget the exact words, for the lab domains. Are others comfortable with that sort of a set of comments?

DR. ZUBELDIA: I think it would be very positive. I would even recommend to the Secretary that these should be mandated standards, that the government should lead by example. I think we need to catch up.

DR. COHN: Also let's lead by example. Let's do something. So Walter, are you okay so far?

DR. SUJANSKY: Yes.

DR. COHN: Okay, so we are okay with that. This is going to be a draft letter that we'll have a chance to review, but this is along the lines of things that we could give him. Michael -- I have another one, too, but go ahead.

DR. FITZMAURICE: I guess in the letter I'd want to emphasize the fact that we have a mandate and a process, that we are following this process, and that we expect it to result in something for the Secretary in September.

I agree that we also recognize that the Secretary relies upon CHI and has these two things coming down the road. We have not heard negative things about the actions that the Secretary may be contemplating in that area, but I would emphasize that the national committee has gone out and done things that CHI hasn't done, and we find that what we are doing is consistent with the results that CHI has so far.

Ours is a similar process, and whatever you want to say about our process, you can also say about CHI's.

DR. COHN: We are also advisors to CHI, so we can mention our role in that.

Now, another thing that I guess I would wonder about is whether we should also be recommending something around -- my handwriting has not gotten any better here -- that we also recommend that the NLM be responsible for distribution of these terminologies. Is that within the scope of this one, or should we --

DR. SUJANSKY: I would say that's premature at this point.

DR. COHN: That's premature. Okay, that's fine. I mean, these are things we might say and I've heard a number of bullets --

DR. HUFF: I would have been with Simon on this one. I would have thought that's not premature. Why do you say that?

DR. SUJANSKY: Because we haven't discussed it. Betsy hasn't said she wants to do that.

DR. HUFF: We discussed it last meeting.

DR. BLAIR: It's getting into preliminary recommendations which we are really not going to be able to get to until July.

DR. SUJANSKY: Also we haven't discussed this whole process and governance of integrating and coordinating terminologies and who is going to undertake that and if that should be the same organization that distributes it and so forth. I think there's a lot of issues there that we haven't discussed yet.

DR. HUFF: I'll just go on record as saying I think it should be the NLM anyway.

DR. STEINDEL: Stan, in the absence of Betsy, I agree with you.

DR. COHN: I think it may be something that begins to address this issue -- that the government is going to have to think about the need for distribution, that there is going to be an evident need for coordination, etc., etc. -- and maybe that's how we phrase this.

DR. STEINDEL: I suggest we might go a little bit further. One thing we did hear very clearly is that the selection of the terminology is just the tip of the iceberg, and that there's a tremendous amount more work that needs to be done before we can achieve the goal of interoperability; the selection of the terminology by itself does not achieve that goal.

DR. COHN: Those are great words and maybe that will be part of obviously the final report.

DR. GREENBERG: Steve really took the words out of my mouth and I think this would be helpful to David.

DR. COHN: I think the Secretary wants to have this solved by December, including interoperability across the country. I don't know that, despite the best efforts, we'll be quite there by December of this year. We obviously need to communicate that.

DR. HUFF: That's true. I guess we could get things handled a little quicker.

DR. COHN: Is there anything else in the letter that we should be saying to the Secretary?

DR. HUFF: Not in the letter, but I wanted to make the other point, which is, given our schedule, I agree with you that we actually probably need another day of hearings about drug terminologies and devices. Maybe we could do it all in one day. When can we do that in the schedule?

DR. COHN: I guess the question would be -- I mean, from my view, I'm just going to sort of say this: even if we had drugs and if we had devices, the core terminology would still not be complete. It's going to be a substantially evolving piece.

So the question is, do we need to have that hammered down before the September report, and if we feel we need to do that, then we may have to add an additional day to the August hearings.

I'm just warning you. Because we've got ICD-10, I think, coming up in the summer and we may have to do that.

If you are willing to have it be a follow-up to the September report, then obviously we have time in October and December that we can schedule this in. What is the interest of the committee?

DR. HUFF: Just again expressing my own bias, and this is coming from my experience at IHC: going back over the 30 years that we've been doing this, if we ask clinicians, kind of in order, what is the most important data to you, they would say lab data, medication data, and then probably text reports -- pathology, radiology reports, etc. Those are the things we've always done first.

I feel bad about another day, too, but at the same time I think medications is in the top three, and the faster we move on it, the more beneficial it will be.

DR. COHN: Okay, well, let me look around to the other committee members, just before we finish this.

DR. GREENBERG: What is this other date?

DR. COHN: It's in the middle of August.

DR. GREENBERG: You are not talking about hearings separate from the August hearings?

DR. COHN: Let me be clear. I'm not talking about yet another trip out. I'm talking about a day either before or after the next meeting, and that's obviously assuming that the ICD-10 hearings will be ready for a day and a half of hearings.

If it isn't, we might be able to consolidate, so that's -- I just want to warn everybody. Barbara, are you okay with this? Well, I guess that answers our question.

DR. STEINDEL: I knew we would have to do it.

DR. COHN: So I believe that at last we'll actually have a letter of sort of preliminary progress and some really pretty obvious recommendations for the Secretary that will go to the NCVHS at the June meeting, and we're obviously doing more work in July and August and September on the rest of this.

Is there anything else that we need to do for the moment about all of this? I want to thank Walter for some very good work. Thank you for doing a very quick job of putting things together.

Of course, thanks also to Susie and Jeff and Steve and Marietta and everyone else. This has obviously been, so far, a very useful couple of days of hearings, and a lot of things are finally coming to fruition.

So with that, I think we've got to adjourn and reconvene at 1:00. Thank you.

(Whereupon the meeting recessed at 12:18 p.m.)

DR. COHN: Okay, we are going to resume. The topic is the ICD-10 cost benefit impact study. I think Donna Pickett, you are going to sort of lead off with the introductions and introduce our presenter.

DR. PICKETT: Yes, I will. As Simon just said, we are going to have, actually, a status report on the impact analysis.

On the program it says I'm going to be doing the update, but actually we have the principal investigator, Martin Libicki, who will be doing the status report. Martin?

Agenda Item: ICD-10 Impact Study - Martin Libicki

DR. LIBICKI: Okay, thank you. Good afternoon, Mr. Chairman and members of the subcommittee. My name is Martin Libicki. I'm a senior policy researcher at the Rand Corporation, a non-profit institution that seeks to improve public policy through research and analysis. I'm accompanied by Dr. Helga Rippen, who directs Rand's Science and Technology Policy Institute, as well as by my colleague Irene Bromulculum(?).

My presentation today will cover the work the Science and Technology Policy Institute at Rand is undertaking for the subcommittee on the costs and benefits of switching from ICD-9 to ICD-10. What we hope to do is lay out the questions that we've been asked, how we are thinking of organizing our efforts to answer those questions, and whatever progress we have made to this point.

At this point, we are somewhat less than two months into this project. We expect to complete it in time for the next quarterly meeting of this committee, which we understand is in August.

Our testimony is meant both to inform the committee and to provide an opportunity to discuss the framework of this study -- which is to say, how we parse the problem and what kinds of steps appear reasonable in solving it.

We solicit input and examples bearing on this question not only from the subcommittee but from those in attendance and other interested parties. Examples in particular can be a touchstone for subsequent analysis.

Rand was asked three questions. First, what are the costs and the benefits of switching from ICD-9's diagnostic codes to ICD-10-CM; second, what are the costs and benefits of switching from ICD-9's procedure codes to ICD-10-PCS; and third, if it is advisable to switch to both ICD-10-CM and ICD-10-PCS, should the switches be done sequentially or simultaneously?

Ideally, after all the calculations are made and all the estimates are complete, one would like to arrive at some ratio of benefits to costs -- for instance, 1.47, which would indicate that a switch is worthwhile, or maybe .78, indicating that it's not.
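
Purely as a gloss on the arithmetic just described -- the 1.47 and .78 are the speaker's own illustrative figures, not study results -- the decision rule can be written as:

    \frac{B}{C} > 1 \;\Rightarrow\; \text{switch is worthwhile (e.g., } 1.47\text{)}, \qquad \frac{B}{C} < 1 \;\Rightarrow\; \text{switch is not (e.g., } 0.78\text{)},

where B is the total benefit and C is the total cost of the transition.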

But for reasons that will become clear through the presentation, we cannot pretend to obtain anywhere near that level of precision.

There are categories of costs and benefits that can be estimated with some degree of confidence, but there are also costs and benefits whose exact estimation is both difficult and not entirely meaningful.

What we hope to have three months hence is a reliable delineation of the major categories of costs and benefits, an understanding of whether they are positive or negative, and some sense of their size and order of magnitude.

It is also important to be clear about what we are not addressing. First, we are dealing with actual rather than notional encoding systems. Our job is not to go back and redesign ICD-10, or even study what features it should or should not have had. We have to deal with the choices that are presented to us.

That said, one of the arguments for not making a switch today is the possibility that it may be more cost effective to write a better standard and implement it a few years later than to implement ICD-10 now and thereby freeze the code for a longer number of years.

Furthermore, how the transition between ICD-9 and ICD-10 is handled -- that is, over what time period and with what degree of assistance -- may well influence the ultimate run of costs and benefits, and therefore some assumptions about that process may be in order.

Second, we are dealing only with the switch between ICD-9 and ICD-10, not between ICD-9 and some other encoding system such as CPT. In other words, we are not asking whether ICD-10 is the best encoding system to switch to, only whether it is worth switching at all.

Third, and conversely, we are not dealing with any transition between encoding standards other than ICD-9 to ICD-10.

We assume that if another encoding standard is in use today, such as CPT, it will be used afterwards, even if empirically this turns out not to be the case.

Outpatient procedures not included in ICD-9 will, we assume, not be encoded in ICD-10 after the switch. In economic terms, if I want to put it that way, the market share of ICD-9 vis-a-vis its competitors does not change. Only ICD-9 changes to ICD-10, but that still leaves us enough work to do.

Before setting out the main parameters of our study, I'd like to spend a few minutes going through some of the arcana of exactly what constitutes costs and benefits.

The assumption that we work under is that we are considering the issue from the point of view of the aggregate public at large and not one sector of the public as such.

Granted, shifts in costs and benefits may be of considerable interest to various constituencies, which is the stuff of politics in this town. However, this level of analysis is beyond the scope of our current study.

For instance, if, as a result of a coding change, a patient now pays $10 more for an office visit and this office visit costs no more to provide than it used to, this is not on the face of it either a net cost or a net benefit.

True, the patient pays $10 more. In that sense, medical care costs the patient more money but the doctor's office also gets $10 more and the two cancel each other out.

Similarly, if, as a result of a coding change somebody has to spend $100 on a piece of software and that software has already been written beforehand, then only a fraction of this $100 is in fact a legitimate cost. The rest is the transfer payment for work already done.

However, if, as a result of the shift in code, people have to write new software, then this is a cost, even if no copies of the software are actually sold.

Similarly, correcting, or alternatively exacerbating, misalignments between the cost of performing a service and the reimbursement for that service is, up to a point, and I have to emphasize up to a point, neither a cost nor a benefit, in the sense that somebody gains, somebody loses, and up to a point the two cancel each other out. But beyond some point, misalignment is a cost.

If the misalignments are large, if they are frequent, if they are systematic, then what economists refer to as deadweight loss, as well as the distortions that occur as people adjust their behavior to artificial prices, becomes a true cost, for instance the cost of a service not being provided.

In other words if something costs $10 to provide and someone is only charged $5 to provide it and somebody decides as a result not to provide it, that's a cost.

However, if somebody decides to provide it anyway, then it's a wash and that's just the nature of cost-benefit analysis.

The efforts that are made to game a system that is not optimal, in other words a system with systematic distortions between costs and benefits, also have to be considered a cost, and there are some indications that, at least in some DRGs, the variation between costs and reimbursements for certain procedures can exceed two to one in either direction.

Finally, there is a general moral consensus that fraud should not be considered a transfer payment that washes out of the calculations. Payers lose, the gains to fraudulent providers do not count, and the ethical tone of whatever profession is involved suffers.

Thus, if there are reductions in fraud that can be credited to a shift to new codes, we count them as a clear benefit, and vice versa.

Cost-benefit analysis also has to account somehow for secondary effects. For instance, the advent of more fine-grained coding rules may increase the use of certain decision support systems that hitherto were not worth using or not worth thinking about.

These decision support systems may offer new benefits, but this is not a direct benefit; it actually requires that somebody actively make the decision to invest in such a system. So changes produce options. Some options, when taken advantage of, create benefits and, of course, change can also eliminate options, so one has to take all this into account.

It is a little trickier to account for the opportunity to do unproductive things. This may sound like a paradox. Let me explain.

Some of the arguments against ICD-10 suggest it would entail complex and costly renegotiation between payers and providers over the cost of various services. It is not entirely clear that renegotiation is necessary. We'll get to that point below.

Yet, if the transition creates the opportunity for one or another side to bring up an issue, and if raising the issue involves the true cost of negotiation, then irrespective of the outcome of the negotiation, simply the fact that people have to engage in the negotiation is a cost. Whether that cost should be counted on the negative side of the cost-benefit ledger is a good question.

Finally, if investments are made in response to a change, the ancillary costs and benefits they offer should also be factored in.

For instance, many hospitals, and many people not in a hospital setting as well, concluded that they needed a new computer system to accommodate the Y2K rollover and/or to accommodate HIPAA. Having invested in new computer systems, they undoubtedly found that the advanced capabilities of these systems were useful for other matters.

Some might go further and indicate that anything that accelerates the transition to electronic medical records is ultimately a good thing, and if the transition to ICD-10 does this, in some people's opinion, so much the better.

Turning to the analysis itself, we start off with a very simplified view of the life cycle of coding information and I'm going to sort of try to describe this because unfortunately I didn't bring my figure with me. But essentially, what I want to try to describe is you start off with a patient record.

In other words, a doctor, a nurse, etc., writes things about a patient that are descriptive of the patient. This patient record, most of the time, is sent to a coder. The coder, particularly in an in-patient setting, looks at the patient information, which can be a very rich source of information, and reduces this information down to a handful of procedure and diagnostic codes.

The word reduce is important. I will get back to that later.

These codes are then used for several purposes. The primary function of the codes is as part of the billing system. They go off to the provider, excuse me, the payer or payers, who, in turn, process their claims on the basis of the codes, and they may do other things in terms of health management.

The codes may be used in health management systems directly within the health care provider. The codes are also used by hospitals and in other broader settings to go back and look over the provision of health care services, and, finally, these codes are also used by statisticians and analysts to get a better sense of the total health delivery system.

Payers, in turn, analyze the various codes and charges and determine whether the charges are to be paid, whether the charges are to be rejected in whole or in part, or whether more information is required to make a determination.

The codes can also be used for analysis. Health care providers can use the information to measure trends in the industry and assess the complex relationships between patient conditions, treatments, and outcomes. The results of this analysis might suggest alternative protocols or inform providers that they should specialize in certain cases and not others. In other words, some hospitals are better at dealing with cardiac cases, others with kidney cases, etc.

Health care managers and payers can do similar analyses to determine their payment policies and advise on standards of care, as well as negotiate costs and premiums. And the public health community can analyze the information to draw conclusions over the entire universe of health care interventions and for targeted disease management programs.

Third, codes can be used for operations, as a way of managing health care delivery for patients in the aggregate and by automated methods, and the cycle completes itself as the information gleaned from the codes is ultimately used to improve the quality of health care itself.

The purpose of laying out this life cycle of codes is to set up the questions whose answers help illuminate the potential costs and benefits of switching codes.

In other words, this life cycle, at a blow, is a way of separating out and parsing the problem into its constituent components. And to continue, when examining the patient record one might ask whether physicians and nurses, etc., will have to change the patient record to conform with the requirements of ICD-10.

If you are thinking about code entry, one at least asks the questions: what are the costs of training and retraining the coders, both experienced ones and new entrants? How much more or less productive will they be in assigning codes immediately after the change and over the long run? What will happen to the quality of their coding and their error rates, both the gross error rates and the subtle error rates?

When looking at decision support systems, one asks whether the change in codes might have any positive or negative effects on the rate at which medical errors or at least suboptimal decisions are made.

When looking at billing systems, one asks, will more manpower be required to process claims? Will a higher or lower percentage of claims be sent back for further processing and more information? Will payments change as a result, perhaps as a result of changes in DRGs?

When looking at analytical uses, one can ask what light would new encoding shed on the true prevalence of disease and of the outcomes of particular procedures and protocols.

How much better would our understanding, and hence delivery, of health care be as a result of the putatively better analysis that is made possible by improved codes, or vice versa?

Finally, across the entire system one would ask, how much will it cost to update entry, access, and analytical systems to accommodate a change in code sets?

To frame these questions it might help to take a brief look at the nature of codes and at the particular codes entailed in ICD-9 and ICD-10. As noted, coding is a process of translating from words into numbers or alphanumerics, capturing the essence but not retaining the details.

ICD-10, for diagnoses but especially for procedures, is designed to capture more information than ICD-9 does, and it does so by making finer distinctions. The mere doubling of diagnostic terms is equivalent to capturing one more bit of information because, to use scientific notation, you are going from roughly two to the thirteenth possible codes to something closer to two to the fourteenth, and in information terms that difference is one bit.

When we talk about procedural codes, we are talking about a 50-fold increase, and that's equivalent to capturing eight more bits of information.
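
A minimal sketch in Python of the bits-of-information arithmetic just described, using assumed code-set sizes (published counts vary by source); note that the base-two logarithm of a straight 50-fold increase works out to between five and six additional bits:

    import math

    # Assumed code-set sizes, for illustration only.
    diag_old, diag_new = 2 ** 13, 2 ** 14    # a doubling of diagnosis codes
    proc_old, proc_new = 4_000, 200_000      # roughly a 50-fold increase in procedure codes

    print(math.log2(diag_new / diag_old))    # 1.0 extra bit from doubling
    print(math.log2(proc_new / proc_old))    # ~5.64 extra bits from a 50-fold increase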

In both cases, ICD-10 represents a process that loses less information than ICD-9 does. In other words, more is captured from the patient record, but there is also the risk of a converse effect.

In some cases, filling out the code requires information be put down that would not be put down today and so compliance requires that more information be collected and recorded by doctors and nurses. If the information is useful and if one could say it should have been collected all along, then forcing it to be recorded represents a gain or at least no loss, but if the information is only useful to the extent needed to satisfy an imposed standard, then it does count as a cost, albeit a cost that is rather hard to estimate.

There has been some criticism in past testimony of the fact that ICD-10-PCS, with its 200,000 codes, exceeds what could easily be set down in a book. Somebody talked about a book four inches thick. I don't think you can get it in a book that thick unless you do very, very small writing.

DR. BLAIR: Can I just ask a question. I'm afraid I'm going to forget.

You wound up indicating that the alternative is that, if ICD-10 has so many more choices, is it just for the purpose of complying with the standard? But I sort of felt that the other option is whether it is of value or use to the clinician versus value or use to researchers, outcomes analysis folks, reimbursers, public health, so it may not just be that it's a standard.

It's that it has value to either other individuals or to a clinician if the patient returns for another visit, having that information recorded.

DR. LIBICKI: Yes, and I think that is a good way to take a look at it because that allows you to separate out costs and benefits. I don't want to get into ad hoc estimates of how big I think the numbers are going to be so let me just sort of continue on that. Right now I'm just sort of trying to parse the issue.

Okay. As I mentioned -- by the way, in case anybody has any questions, on the subcommittee, feel free to interrupt. Is that okay?

DR. COHN: Sure, we'll be happy to. I'm sure the information will wash over us and it will all get talked about, but certainly, members of the subcommittee, there will be time for public comment, too, so we'll probably hold back until you are done with your comments.

DR. LIBICKI: Okay. As I mentioned, ICD-10-PCS easily exceeds what can be set down in a book. Fortunately, and perhaps not so coincidentally, books are losing their pride of place in the coding process.

The automated coding process is taking over. Indeed, Canada is making the transition to ICD-10-CA without using any books at all.

Now, with the changes come some corresponding changes in what is and is not considered important in good coding. A typical process has the following steps. First, clinical terms to describe something are entered into a computer, such as the terms "neoplasm, pancreas." Second, the computer will often offer a menu of choices to refine the selection so that the coder has something to choose from, which in turn, by the way, can cascade to more choices depending on which code we are taking a look at.

And third, when the menus are completed, the computer selects the code and enters it into the record. I saw this actually taking place at one coder's workstation and it looked to be a very interesting way of doing coding.
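
As a minimal sketch, and not any vendor's actual product, the cascading-menu process just described can be thought of as walking a decision tree until a single code remains; the menu wording here is hypothetical, and the ICD-9-CM code values are shown only for flavor:

    # A toy decision tree: each menu choice cascades to the next menu,
    # and a leaf is the code that gets entered into the record.
    MENU = {
        "neoplasm": {
            "pancreas": {
                "malignant, head of pancreas": "157.0",
                "malignant, body of pancreas": "157.1",
            },
        },
    }

    def assign_code(*choices):
        node = MENU
        for choice in choices:
            node = node[choice]    # refine the selection one menu at a time
        return node                # a leaf is the final code

    print(assign_code("neoplasm", "pancreas", "malignant, head of pancreas"))  # 157.0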

I would add, by the way, that's not the way they do things in Canada, that in Canada they have an electronic I think PDF file which constitutes the core of the standard.

Whether there are software packages in Canada that do more than that is another issue, but that's how Canada is distributing its code: in the form of an electronic book.

Incidentally, similar software can be used to convert a complex procedure, such as a Whipple, automatically into a number of constituent procedure codes if that's what's desired. So the fact that ICD-10-PCS requires disaggregation into individual procedures in many cases may not be as important as one would initially think.

DR. COHN: Martin, I'm sorry. I wasn't going to interrupt, but you mentioned Canada and I was just going to ask as you are going through -- what is Canada implementing? They are obviously doing the Canadian modification for diagnoses. What are they doing for procedures?

DR. LIBICKI: They are going -- and I hope I have the acronyms right -- they are going to something called CCI.

DR. COHN: And how big is that?

DR. LIBICKI: It's going from 3,000 to 17,000.

DR. COHN: Okay, so that does work in a PDF format.

DR. LIBICKI: Yes.

DR. COHN: I just couldn't tell because you were sort of zipping off on PCS again and I wasn't sure if they were doing an early implementation of something on that level or not. Okay, thank you.

DR. STEINDEL: A clarification question. When you said Canada is distributing in the form of a PDF --

DR. LIBICKI: Something like a PDF file. An electronic book is what they said.

DR. STEINDEL: But a PDF file is not readily readable by computers. It's readily readable by humans.

DR. LIBICKI: That's right.

DR. STEINDEL: Are they always releasing it in a computer readable format?

DR. LIBICKI: Well, the way it was described to me, and this isn't to say that there aren't other tools that coders use in Canada, but the way it was described to me is you look in the electronic book, you find the code, you do a cut and paste.

This isn't to say that everybody says thank you very much and I'm going to turn to this other software and use that. But that's as it was reported to me.

DR. GREENBERG: They are developing a database. They have, because we've actually met with them on that.

DR. LIBICKI: Okay. In reviewing this particular process as I've described, it is clear that some facets of the coding architecture are more important and some of them are less important.

The most important facet of the coding architecture is the distinctions it makes as it goes through, as it were, the decision tree. How are classes of diagnoses or procedures differentiated into subclasses? What information is important enough to be designated a single code, encapsulating the most important information in a diagnosis or procedure?

In other words, it is the architecture of the coding which is of primary importance. Of middling importance are the words used to describe these differentiations. The good words are those which make it easier for coders to understand and apply the correct differentiations that characterize the code architecture.

However, there is no requirement that the words offered in the software that assists the coder be the same words that are written in the standard.

For instance, ICD-10-PCS has been criticized for using non-standard nomenclature. I think there was something in the testimony before the subcommittee about the use of the word detachment instead of the word amputation.

But if the software developer believes that the word amputation will lead to a more reliable assignment of codes, there is no reason the software person couldn't put in the word amputation and have the coders go through the software basically clicking the word amputation rather than the word detachment. That's something for the software folks to do.

Finally, of least importance are the codes themselves, except insofar as they may be harder or easier to memorize. I think memorization as a practice is not considered a good thing but it happens anyway.

The coder in this particular computer process does not see them. And the computer itself doesn't really care very much what the code is. But beyond this obvious statement is an important implication.

Today's codes reflect their architecture in the numbering scheme. That is to say, codes that begin with 1 are substantially different than codes that begin with 2 and 3, etc., and 1-1 codes are different than 1-2 codes and the numbering scheme is used as a proxy, in fact, as the hierarchy for the architectural scheme.

This is useful, if code entry is manual or if code entry is assisted by electronic books, but it's at best a modest convenience for the coder, once code entry is automated.

One of the arguments made against ICD-9's procedural codes is that we are running out of numbers. In fact, fewer than 40 percent of the numbers are in use.

It isn't that we are running out of numbers; it's that we are running out of numbers that are close to or can be grouped with existing numbers. Does that matter? It might, if a great deal of coding were purely a manual function, or if, where coding is computerized, people wrote queries by hand.

But people can also use look-up tables to convert what otherwise would be codes scattered among 10,000 different numbers into codes that are meaningfully grouped.

In other words, if you look at ICD-9 the way a computer does, it doesn't quite look the same as if you look at ICD-9 the way a person does, and the role of computers in this process, I think, is a very important criterion when we start looking at the costs and benefits.
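
A minimal sketch of the look-up-table point: to a computer it does not matter that related codes are numerically scattered, because a table can regroup them. The codes and group labels below are hypothetical:

    # Hypothetical codes scattered across the numbering space,
    # regrouped by a simple look-up table rather than by numeric proximity.
    GROUP = {
        "0066": "coronary interventions",
        "3606": "coronary interventions",
        "9955": "vaccinations",
    }

    def group_of(code):
        return GROUP.get(code, "ungrouped")

    for code in ["0066", "9955", "3606"]:
        print(code, "->", group_of(code))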

Now, I hope this doesn't seem like a side matter, but I want to talk about another standard which has nothing whatsoever to do with medicine, and that is the internet addressing scheme. The internet address is four bytes, which is to say it gives you a choice of roughly four billion addresses with which to give a unique address to everybody in the world.
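
For reference, the four-byte arithmetic is simply two raised to the thirty-second power:

    # Four bytes = 32 bits of address space.
    print(2 ** 32)   # 4,294,967,296, i.e., roughly four billion unique addresses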

This coding scheme was developed in 1969 when the internet was a plaything of the Department of Defense and no one could possibly imagine four billion people on the internet.

Well, in 1992, when the National Science Foundation started to drop its acceptable use policy and the internet started becoming commercialized, a number of the people who were engineering internet standards said, we've got a problem coming up.

It's not that we are going to run out of codes, but we are going to run out of places to assign codes because of the way we pass out what are called domains in the internet.

So they struggled for about five or six years to develop a new standard, which they did, called Internet Protocol Version 6, which, five or six years after that, has not really been adopted. One of the reasons is that, faced with the choice between adopting a completely new standard, which would require the rewiring of all the systems, and making incremental changes, which did not make the system very clean but nevertheless let it hobble from one year to the next, people decided to go for the incremental changes, and the result is they managed to postpone the day of reckoning on that particular item.

Now, I don't want to make an exact analogy but I think it's sometimes useful to look at the other fields and see how things have turned out.

Okay, let us now turn to taking a look at the codes themselves. One clear feature of the switch is the extent to which one code maps into another.

In particular, if there is a one to N expansion of the code set, this means that every new code maps into one and only one old code. Old codes, for their part, may, and probably will, map into one or more new codes, and a one to N expansion in many ways makes the transition easier.

Those who do not wish to exploit the extra information in the new codes can merely map them into the old codes and do their processing and analysis on the basis of their old codes.
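
A minimal sketch of that fallback, with hypothetical code values: in a one to N expansion, each new code collapses back to exactly one old code, so legacy analyses can run unchanged:

    # Hypothetical one-to-N expansion: several new codes share one old parent.
    NEW_TO_OLD = {
        "X12.1": "512.0",
        "X12.2": "512.0",   # two new codes, one old parent
        "X13.0": "513.0",
    }

    def collapse(new_codes):
        # Map new codes back to old ones for legacy processing.
        return [NEW_TO_OLD[code] for code in new_codes]

    print(collapse(["X12.1", "X13.0", "X12.2"]))   # ['512.0', '513.0', '512.0']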

This by the way, is not to say that a one to N expansion is always better than something that is not a one to N expansion. Advancing knowledge may reveal the old distinctions carried over to the new code to be irrelevant or even misleading and thus best amended.

In fact, if you looked at ICD-9 and ICD-10, there is a lot of migration from one sub-category to another because our state of knowledge since 1979 has improved considerably.

In coding architectures that undergo one to N expansion, one argument that is made against new codes can be put into perspective. It is claimed that by making more categories -- for the sake of argument, assume it is by adding an Nth digit -- more errors will result.

Yet if these errors result from miscoding only the Nth digit, then as long as one is aware of the possibility of error, one still gets more information from having an Nth digit that is usually correct than from having no Nth digit at all.
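
A toy simulation of that argument, with an assumed ten percent error rate on the extra digit (our assumption, purely for illustration): a usually-correct Nth digit still conveys far more than no Nth digit at all:

    import random
    random.seed(0)

    # Assume the extra digit is recorded correctly 90% of the time and
    # replaced by a random digit otherwise.
    true_digits = [random.randint(0, 9) for _ in range(100_000)]
    recorded = [d if random.random() < 0.9 else random.randint(0, 9)
                for d in true_digits]

    accuracy = sum(t == r for t, r in zip(true_digits, recorded)) / len(true_digits)
    print(accuracy)   # ~0.91, versus 0.10 from guessing with no extra digit at all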

Okay, finally I want to go on to categorizing and estimating the various categories of costs and benefits, as far as we've gotten with them.

The most obvious first place to measure is the cost of retraining coders and the effects that moving to a new standard may have on their productivity.

Fortunately, this has been a well-studied area and it suffices to extrapolate from earlier results to draw conclusions.

Start with ICD-10-PCS, an in-patient and thus, for the most part, hospital-related standard. There are roughly 50,000 hospital coders currently employed. The time to bring them to proficiency in the new codes has been estimated at somewhere between half a week and a week. Canada's experience, by the way, is about half a week; it's three days and a little more, actually.

If such retraining costs $2,000, split between expenses and lost work time, then the bill is roughly $100 million, give or take a factor of, say, two.
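
The arithmetic as stated, as a back-of-the-envelope check:

    coders = 50_000           # hospital coders currently employed
    cost_per_coder = 2_000    # dollars, split between expenses and lost work time
    print(coders * cost_per_coder)   # 100,000,000, i.e., roughly $100 million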

Calculations for ICD-10-CM are somewhat softer. The reason is there are several hundred thousand doctors' offices and thus a population of five to ten times as many coders. However, because ICD-10-CM is less of a departure from ICD-9, and because many folks just have their look-up sheet -- you know, they only do a certain number of diagnoses and those are the ones they code for -- the retraining time has been estimated at four to eight hours, which is about four or five times less.

Many people will simply retrain themselves on the job and there are potential efficiencies in training that may be available by using the web as an instructional vehicle.

Factoring these two together suggests roughly -- and I am underlining the word roughly -- about $100 million, with a similar or larger give-or-take factor. That sounds roughly right.

What about the initial training of new coders? On the one hand, ICD-10, notably ICD-10-PCS, is said to be a more logical construct. On the other hand, it does have more detail in it. Furthermore, the turnover rate among coders is relatively low. I think someone estimated it's below ten percent a year.

So here the costs and benefits probably cancel each other out.

What about coding efficiency? One study conducted in the mid-90's suggested that, at least initially, coders required an additional minute and a half, which is about a 50 percent increase, to do their job using ICD-10-PCS compared to ICD-9.

Unfortunately this gives us just a point on the learning curve but there is no good way of knowing from that study how quickly coders will move down the learning curve as they get more experience.

Nevertheless, because ICD-10-PCS, however more logical it may be, asks for more bits of information, there is reason to believe that even the process of working down the computer-based menus will take longer, even after people have become used to ICD-10-PCS.

To toss some numbers around, we can estimate that roughly 12 million in-patient procedure codes are written every month. This translates into roughly $10 million in extra work in the first month. If one assumes a six-month learning curve and a residual 30-second permanent increase in coding time, the break-in costs are roughly $20 million in total, and the additional long-term costs are about $3 million a month.
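
A sketch of the long-run piece of that arithmetic; the $30-an-hour loaded coder wage is our assumption for illustration, not part of the testimony:

    codes_per_month = 12_000_000   # in-patient procedure codes written monthly
    extra_seconds = 30             # residual permanent increase per code
    wage_per_hour = 30             # assumed loaded wage, in dollars

    extra_hours = codes_per_month * extra_seconds / 3600
    print(extra_hours * wage_per_hour)   # 3,000,000.0, about $3 million a month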

Because ICD-10-CM is less of a change from ICD-9, there are grounds for believing that its impact on efficiency will be less, and it is plausible -- I merely say plausible -- that in the long run there will be no impact on overall coding efficiency from moving from 9 to 10.

Incidentally, there is some preliminary evidence from the Canadian experience that a full week of training may not be required, that it only takes a few months to go down the learning curve -- I think they estimated six weeks to a few months -- and that there is no detectable long-run loss in efficiency.

Having said as much, the Canadian codes are only a four-fold increase in procedures rather than a 50-fold increase, and so that has to be taken into account.

Such data needs to be carefully analyzed and Canada's transition has unique features that should be taken into account.

Another frequently cited cost of transition is the cost of upgrading computer systems to accommodate new codes. Some of the costs associated with transition can be safely ignored, such as the cost of additional storage capacity. If anybody has priced the Sunday supplement ads from Best Buy, you know that stuff is pretty cheap.

Indeed, it is hard to believe that there are any hardware ramifications worth measuring, but software does have to be rewritten, and software is inherently expensive.

That noted, most of the packaged software, we believe -- we are going to have to check this out -- has already been rewritten in anticipation of ICD-10, even if there is likely to be some reworking as the efficacy of ICD-10 software is tested in actual use.

So the issue on packaged software may be one less of cost and one more of transfer payments. Furthermore, many software houses have suggested they would cover the cost of the transition within their annual maintenance fees.

We suspect, however, that non-trivial costs may be associated with two categories of users -- those who have developed and operate their own information systems and those who use ICD-9 categories for analytic support.

In the former case, that is, those who build their own systems, the transition to ICD-10 is likely to be the third wave of change, behind Y2K and HIPAA, which in previous cases persuaded enterprises to abandon their own systems and go to commercial products. So there may be fewer such systems than there used to be.

In the latter case, serious work to meld the new categories into benefit-determination algorithms will have to be undertaken. As suggested, were the new codes one to N expansions, the task would be easier, but oftentimes they are not.

The potential benefits of switching from ICD-9 to ICD-10 are at the same time both bigger and less certain -- bigger in the sense that they could be larger, not that they necessarily are.

Many depend on ICD-10 in fact being a better, more logical, or at least more detailed and more specific system. If not, these purported benefits may actually be negative.

Will the use of ICD-10 lead to fewer coding errors? Now, let me just say incidentally there are two types of coding errors. There are transcription errors which are errors made in the code itself and there are classification errors. The transcription errors in many ways get handled differently than the classification errors so I'm just going to talk about the classification errors.

Here, fine distinctions are less important than the overall quality of the fit between coded categories and actual conditions as described by the symptoms noted in the patient reports.

Many journal articles suggest that a significant percentage of diagnoses are coded in an erroneous or misleading manner, owing in large part to the differences between what a doctor chooses to note in the record and what a coder must infer from such notes. Errors can mislead analysts about health care outcomes, so we end up with the question: can ICD-10 help in that regard?

One interesting category of potential benefits may be to reduce medical errors, particularly those that result from a mismatch between diagnoses and procedures on the one hand and drug delivery on the other. Programs exist to flag these discrepancies. Most of them use ICD-9 as a basis for encoding the patient's condition.

ICD-10 may well improve the accuracy of such systems, even accelerating their use in the event that improved accuracy drives the use of such systems past the decision threshold, which is a fancy way of saying ICD-10 may make it worthwhile to buy the system whereas ICD-9 may not. This is a hypothesis.

DR. COHN: Let me just understand what you are saying. You are saying that decision support in hospitals used by clinicians runs off the ICD-9 now.

DR. LIBICKI: That's a hypothesis.

DR. COHN: Oh, a hypothesis, okay. I'm not sure I know of many cases where that's the case, but it's a reasonable thing to investigate further.

DR. LIBICKI: Yes, and we want to go down the road and find out specifically what's been in use and how it works and what the market share is and what the role of aggregation and disaggregation is and all that sort of stuff.

I'm going to offer an example. I do it with a certain amount of tentativeness because I don't understand the data source that I'm pulling this from as well as I hope to.

With that caveat, as a modest example of how codes may improve such effectiveness, take the condition aspergillosis. It is covered under one code in ICD-9, that is 117.3. It gets six codes in ICD-10-CM depending on the locus of the disease. Every locus calls for a different medication.

Thus, an automated backstop system can detect errors in medication when using ICD-10-CM in ways that are not possible when using ICD-9.
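
A hypothetical backstop check in the spirit of that example: B44.0 and B44.1 are locus-specific ICD-10-CM aspergillosis codes, but the drug pairings below are illustrative assumptions, not clinical guidance:

    # With a single ICD-9 code (117.3) no locus is recorded, so nothing can be
    # checked; with locus-specific codes, a mismatch can at least be flagged.
    EXPECTED_DRUG = {
        "B44.0": "drug A",   # invasive pulmonary aspergillosis (assumed pairing)
        "B44.1": "drug B",   # other pulmonary aspergillosis (assumed pairing)
    }

    def mismatch(dx_code, drug_ordered):
        expected = EXPECTED_DRUG.get(dx_code)
        return expected is not None and expected != drug_ordered

    print(mismatch("B44.0", "drug B"))   # True: flag the order for review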

That said -- and, again, I offer this judgment tentatively -- the transition does not always lead to more diagnostic discrimination. For instance, distinctions between controlled and uncontrolled diabetes mellitus that existed in ICD-9 appear to be lost in ICD-10-CM.

By way of further note, because ICD-10 encodes laterality, such systems may be able to catch the mistake where a diagnosis is on one side of the body but the procedure is scheduled for the other.

It would be useful to have a sense of how large these benefits are, as well as the relative contribution of fine-grained coding in the context of a potentially better quality algorithm.

DR. COHN: Martin, can I ask another question about this one because I'm struggling a little bit.

I actually am following what you are saying. It's just that you started earlier talking about the impact of ICD changes as being effectively on the coders, working on the assumption that physicians and other caregivers were not going to be doing the coding, and that was part of your earlier hypothesis.

Now you are obviously talking about it as in the actual process of care and obviously we've spent two days now talking about clinical terminologies and decision support and all that.

Are you now beginning to develop a use case where physicians are actually doing the final coding during the time they are seeing the patient, on rounds and things like this, where that decision support would be triggered? Or is it that everything gets handled on paper, it gets to the coder as the patient is being discharged, and you go: Ah! The patient is on the wrong medicine -- but they are already long gone; they have already been through the complete treatment.

DR. LIBICKI: That's a good question. I was wondering about that myself. I have to think that one through.

As noted, such assessments may be made in the public arena through peer-reviewed research, or in the private arena as hospitals, etc., determine what they are good at and how to get better.

So the question we ask is: are the finer distinctions of ICD-10 necessary? Are they distinctions in areas that make a difference but were hitherto understated or fuzzed over?

In some cases they appear to be. ICD-10-PCS, for instance, has a far better coverage of laparoscopy and interventional radiology. That gives us a statistical basis to answer questions about procedures such as do they work.

Do patients return to the hospital more frequently after one of these procedures? It is these new procedures, which are the least well represented in the current code set, which stand to gain the most from finer-grained coverage, and where we expect to profit most by systematic analysis.

I mean, it sort of makes sense. If something is new, we don't know about it. We want to know about it. We come to know about it by using statistics, but if we haven't coded for it, we can't use statistics, so the emphasis falls on that end.

Nevertheless, there should be limits on our expectations. Studies that infer conclusions from hospital discharge statistics, those that are labelled uncontrolled studies, sit at one end of a spectrum of analytical techniques, the other end of which contains controlled studies, where patient diagnoses, procedures, protocols, and outcomes are encoded specifically for the analysis and for which, therefore, coding standards may play only a modest role, if any.

Furthermore, while finer distinctions have their merits in doing analysis, the additional distinctions have to be ones that make a difference analytically if they are going to be of benefit to the analyst.

Even if they do, the gains from fine distinctions are often vitiated by the smaller population sizes within each category, thereby eroding statistical significance. In other words, the finer the distinctions we make, the fewer people there are in each population, and the fewer people there are in the population, the worse your statistics get, because you don't know whether you are seeing an artifact of basically taking a statistic or whether you are seeing something real. So there's a trade-off you are going to make in any case.

Lastly, long-range studies are going to require some reliable means of translating pre-transition diagnoses and procedures to make them compatible with post-transition diagnoses and procedures, so as to make comparisons over time.

The last major category of potential benefits arises from the tendency of claims, especially claims from hospitals, to be rejected by payers as a result of their not presenting enough information to judge their validity.

Thus an encoding standard that supplies more detail may lead to less initial rejection and thereby lower costs of adjudication, which are paid for either by the payer or the provider. Against this hypothesis is a greater likelihood that in the short run, as people try to get used to a new coding system, you'll see a higher percentage of claims being rejected while payer and provider try to sort out the eligibility, appropriateness, and reasonable-rate implications of the new codes.

To summarize, therefore, where Rand is at this stage, roughly one-third of the way into the study: we have made a first-order roundup of the categories where we expect to find costs and benefits.

On the cost side of the ledger are the requirements for retraining, new software, initial perturbations in the claims process, and the possible long-run efficiency loss from dealing with codes that embed more information in them.

Many of these costs can be estimated plus or minus a factor of two or three. On the potential benefits side of the ledger are more precise decision support systems, a better understanding of health care delivery outcomes, and a claims payment process that does not so frequently reject claims the first time around for more information.

Many of these benefits are potentially much larger, but they are also fiendishly difficult to estimate with the same degree of precision.

To give a hypothetical example -- and it really is hypothetical, because I'm not quite sure this applies, but I just want to make a point about small percentages and large numbers.

To illustrate the uncertainties and magnitudes involved, let's assume that through better coding one could reduce by one percent the odds that a new disease could emerge before being detected in the population at large.

One percent does not sound like a very large number, and it's awfully difficult to prove a hypothesis that deals with numbers as low as one percent.

On the other hand, if you are talking about stopping an epidemic, the cost of an epidemic is enormous. There are estimates that SARS will cost the Chinese economy $10 billion. So when you take a very small number like one percent and multiply it by a very large number like $10 billion, you come up with $100 million, which is the same number I talked about for the cost of the training.

But the uncertainty of that number is enormous. Do you know if it's one percent? Do you know if it's point one percent? Do you know if it's ten percent? The range of uncertainty is enormous, and so the uncertainty in any estimate that rests on the small percentage likelihood of a big event is enormous.
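
The small-probability, large-consequence arithmetic, made explicit; the three candidate probabilities are the ones just mentioned, and the expected benefit swings a hundred-fold across them:

    epidemic_cost = 10_000_000_000        # the cited $10 billion SARS estimate
    for p in (0.001, 0.01, 0.10):         # 0.1 percent? 1 percent? 10 percent?
        print(p, p * epidemic_cost)       # $10 million, $100 million, $1 billion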

DR. COHN: Martin, again, I guess I'm sort of struggling a little bit. I'm trying to think, just using your SARS example. Obviously, part of the problem with SARS is that it was a new disease, so it didn't have a code. You have viral disease NOS in any system, because it isn't there, so you get a lot of viral disease NOS coded a couple of days after the visit and sent in. That is going to prevent an epidemic?

DR. LIBICKI: You are exactly correct. And that's maybe not the best place -- let me give you another example.

Let's say as a result of a coding somebody says --

PARTICIPANT: There is one example you can use. It's HIV. HIV was associated with a lot of unusual diagnoses that were very rare -- Kaposi's sarcoma, some of the more exotic fungal diseases -- and so there was a change in incidence and prevalence, and the question is, if there is some kind of differential in being able to capture more granular or more specific detail, there may actually be some interesting things that would then provide food for thought to say, is there something new going on.

Another one may be something with regard to the questions of prions and long-term diagnoses for dementia and things like that. So there are cases where, if you are capturing more specific information, it may lead you to detecting things that you might not detect until later. But again, this is just one small piece. It may or may not be.

And the other thing is, I guess, the conversation assumes that we are just focusing on the hospital. That may or may not be the case.

For example, I do know that there are a lot of organizations that do disease management that actually rely on some of the coding that may have happened in the hospital, and then move patients forward to outpatient treatment and management.

So again, I think if we look at the continuum as opposed to only one setting, there are more options to think about in asking whether there is a benefit or not. And that's not saying that there is, but just throwing it out as a point.

DR. COHN: Very good point. Steve?

DR. STEINDEL: As many people in the room may be aware, and CDC has talked about this quite a bit recently, we have been engaged in several studies involving syndromic surveillance with the private sector, and one of our biggest projects right now involves Harvard Pilgrim. We are monitoring claims data. De-identified claims data is coming into a clearinghouse and being tested against algorithms to see if it shows anything unusual.

What we are presently monitoring is ICD-9 codes -- ICD-9-CM, of course, and I think we are monitoring some procedure codes. I'm not totally certain because I'm not daily involved in the project but I know we want to.

So I can see -- and one of the problems we are having with this syndromic surveillance system is the signal-to-noise ratio, because the codes are very non-specific -- I can see the potential where more specific codes could be useful, even in just receiving crude claims information like this.

DR. LIBICKI: Let me just sort of continue and take up that point. Let's say the result of a study is a billion-dollar improvement in health care. Now, let's say there's a ten percent chance that with the new codes you can improve on that.

It's one of these multiplying-zero-by-infinity things, and you come up with every number in the universe.

Anyway, in conclusion, I want to thank the members of the committee for allowing us to brief where we are in this study. We welcome questions and other inputs as we try to deepen our understanding and craft our estimates.

DR. COHN: Martin, thank you very much. I'm always fascinated listening to you try to describe the metrics you are using. I think this is going to be a fascinating study as we go forward.

DR. GRAHAM: I just have two comments. I'm Gail Graham from the Department of Veterans Affairs. In our current electronic system, which of course we developed, we do use ICD-9 for clinical reminders; that is to say, if I have a diabetic, I have a reminder for a diabetic foot exam or a diabetic eye exam.

In many cases, that's in lieu of a reference terminology so it's kind of a chicken-egg, considering the last two days of testimony so I just wanted to add that.

The other thing I wanted to caution you about is that there are many vendors of software, as you have probably discovered. Some of them use hierarchies and some of them are just electronic versions of the book, so it sounds like you looked at one that used the hierarchy, but I just wanted to caution you that there are many flavors.

DR. BLAIR: Is it possible -- one of our subcommittee members, Clem, is not able to be here this afternoon. I'm wondering if we can get a copy of this draft to him for him to comment on.

DR. GREENBERG: It will be in the transcript. We are transcribing the whole meeting.

DR. BLAIR: Well, I think it would be -- if the report comes to its conclusion and Clem shows up on that day, I would rather do everything we can to solicit his critiques before.

DR. GREENBERG: We could send him the transcript.

DR. BLAIR: A copy of the report is better.

DR. COHN: I think that we can handle this one offline. I don't think this is a policy decision that needs to be made by the subcommittee. Staff, I think, can handle it.

DR. BICKFORD: Dr. Carol Bickford, the American Nurse's Association.

As you are taking a look at doing the cost benefit analysis or return on investment or whatever this is called, are you going to be considering that there are non-physician providers who are a majority of the caregivers in the United States as part of that discussion?

DR. LIBICKI: I suppose. I can -- okay, short answer -- yes.

DR. COHN: I want to ask if there's others in the audience who want to make comments or critiques.

Do you just want to come up to the microphone and introduce yourself. This is obviously just meant to be an opportunity to sort of hear and understand the methodology and really suggest improvements or otherwise.

DR. LUSKO: Tom Lusko with the Health Insurance Association of America.

I just want to make sure I heard Martin correctly. In terms of systems upgrades, the cost to one sector of the players involved might be a wash in terms of benefits to another sector -- in terms of software developers versus those organizations that need to upgrade their systems.

I know, in a sense, that might be a wash, where an expense to one sector might be income to another, but it seems like that might be a huge number, and I was wondering if you would at least itemize this information in your final report.

DR. LIBICKI: We will certainly try to. We will certainly try to have that information.

I was just, I guess, making a cost-benefit statement here. In many ways, if the software has already been written, there are costs of distributing it and testing it and all that sort of stuff, which are not trivial costs, but we just have to make a distinction between prices and costs.

On the other hand, having said as much, given the kind of variances in estimates that we are going to be talking about, this may be more of a theoretical point than not.

DR. LUSKO: But you also mentioned the legacy systems that a lot of organizations might have, which might need replacement and not just upgrades. Is that priced in?

DR. LIBICKI: Yes, and that has to be factored in somehow.

DR. FITZMAURICE: I wanted to pick up on a point. If, because the change happens, a provider has to pay $10 more for something, and the producer of the software has to incur costs of $10, I'm not sure that's a wash, because if you look at society as a whole, beforehand they were getting along just fine -- and that's debatable -- but afterwards somebody has to both produce something for $10 and sell it for $10, and that's $10 worth of costs that would not otherwise have occurred. So the question is, is that increase of $10 worth of cost matched by an equal or greater benefit somewhere? It doesn't sound like a wash to me.

DR. LIBICKI: In the case that you said, that's exactly correct. In other words, if, as a result of a change, I've got to go out and write software and I price fairly, whatever fairly means in this context, then yes, it's a cost and it has to go on the cost side of the ledger.

If it's something that I long ago wrote and I have the opportunity to sell it and it doesn't cost me very much to sell it, then it's more of a wash.

DR. FITZMAURICE: Okay, but I think Tom was probably talking about the former rather than the latter, but --

DR. LIBICKI: Well, it will end up being a more or less academic point in the report, given the wide size of the estimates, if you know what I mean.

I would be very surprised if it drove the numbers in the report, mostly because, in my opinion, it's going to be the folks that have their own systems, who have to incur real costs, that are going to drive those cost numbers.

I may change my mind by August based on the facts of the matter. That's my going-in guess.

DR. COHN: I'm trying to think if that's true or not. Obviously, part of the issue here is that most of them have an interesting tangle of purchased and home-grown systems, so it's never say never and never say always. You may wind up having a new claims system which may be easily upgraded, but your data repository might be a homegrown system, so you have to work on it and make significant changes, and I guess those are the sorts of things you'll be thinking about.

DR. LIBICKI: That's correct.

DR. COHN: Let me just ask. I mean, obviously, I've always said I'm fascinated. I guess maybe I should have been an economist, though I made a pretty good doctor, and I was fascinated by all the various parameters and all this.

What sort of a report are we going to wind up with? Are we going to wind up with something that has 20 parameters, with ranges running from minus $100 billion to plus $100 billion for each of these parameters?

DR. LIBICKI: I hope not.

DR. COHN: I was just sort of asking because, I mean, a lot of these are going to be very difficult to try to figure out. What do you think? I don't mean to put you on the spot here.

DR. LIBICKI: That's a tough one. I mean, well, let me just sort of, broadly speaking, I want to produce a report that can be understood and read by the target audience.

DR. COHN: Thank you.

DR. LIBICKI: And usually I've been able to pretty much do that. But in that sense, it means I will be saying something like "roughly this" rather than "this, plus or minus T." In other words, to make it understandable, you often give up precision in expression, and sometimes precision in expression to describe something imprecise is just a waste of time.

DR. COHN: I guess one of the questions I'm just sitting here mulling about is this: I think you are, very appropriately, taking a view of societal costs, and I think at the end of the day we have to be most concerned about societal costs. But I'm also reminded of the cost-benefit studies that the federal government regularly has to do for every proposed rule and final rule I've ever seen, where they take a look at consumers and patients, and they look at various key segments of the industry.

My understanding is that you were not going to get down to that level -- or are you considering at this point that you might look at a couple of the key parts of the industry, to sort of see where things are, whether they are skewed one way or another in terms of benefits and costs?

DR. LIBICKI: You ask an interesting question, and let me just venture off the top of my head with the issues that have come up for me.

And that's the question about what happens when you unpack a code that lumps together a lot of different -- I'll say procedures, but it could be diagnoses -- which vary greatly in their complexity, and this has come up in conversations both with hospitals and with payers -- in other words, with providers and payers.

You've got a procedure which right now lumps a lot of different things together and then you disaggregate them and you find out statistically or after the fact that this procedure is a lot more complicated than all the other ones and therefore has to be in a different payment class for one reason or the other.

Now, the payers are worried that the result of this is that they will have to pay out more money. That is, they will pay out just as much money for the simpler stuff and more money for the complicated stuff.

The providers are worried exactly the opposite, that having taken out the complex stuff, the rest of the money will go down.

The problem is in economic terms, it's often a matter of who's got the bargaining power and that's really something beyond the scope of the study.

In other words, can the payers force the doctors to take less money? Can the doctors force the payers to pay more money? That's a fascinating question, but I can't answer it, and because I can't answer it, some of the biggest effects -- not the actual aggregate, but the shifts in the costs and benefits -- are going to be very difficult to predict.

Now, having said as much let me just add another item.

I suspect strongly that in the short run there will not be changes in DRGs based on the changes in codes, but that once the short run has ended, that is, once the initial transition period has ended, the rate of change in DRGs will, in fact, accelerate to accommodate the changes that we are talking about.

I don't know if I answered your question.

DR. COHN: Actually, you gave us a fair amount to think about. I mean, it obviously speaks to the varying clout of the players in the industry, and if you have major clout, you can dictate which of them are paid, and there are obviously some in the industry that can do that.

You can use this to ensure neutrality or whatever you want, and it just becomes a reallocation question.

But obviously, for others in the industry, it may actually be a different sum, because it's potentially not a zero-sum game for them.

DR. LIBICKI: There has been an argument made that more specific codes will make it much more difficult to commit fraud, because it will make it more difficult to write down something which is inaccurate and say, oh, well, that was just a classification judgment that I made and you are free to disagree with it.

That is, with more precise codes it will be difficult to misassign something without really lying about it. That's a claim that -- that's also a very hard claim to evaluate.

DR. COHN: I'll ask another question or two. Certainly, others should feel free to comment or question, but you obviously are using the Canadian example. It's close by, it's convenient and all this stuff, and probably for the diagnosis movement it's a pretty good model for moving from ICD-9 diagnoses to ICD-10 diagnoses.

I guess -- and I obviously don't in any way argue about that -- I guess in my own mind I'm sort of trying to figure out, when they are moving from a small procedure code set, whether a four-fold versus a fifty-fold increase is just sort of the same thing or whether this really is a whole different beast we are talking about.

And I don't know. I just ask you, I presume you are probably mulling about that yourself.

DR. LIBICKI: Yes, and there is another factor to consider when you start talking about the very large 200,000 number, which is that I'm virtually certain that there will be unused codes in that 200,000. The number of codes that are actually used may in fact be substantially less than 200,000, because a lot of the 200,000 were picked basically on a theoretical model and there just may not be instances, or, alternatively, there may be a lot of codes that are used once a year because nothing else fits in them, or things only fit in them because there's something unconventional in their categorization.

So in practice it's not the 200,000 you are choosing among; you are only choosing among the probable ones, given the kind of decision cycle you are going through. So, in other words, the Canadian change from 3,000 to 17,000 may be closer to the US change from 3,500 to 197,000 than it appears on paper, simply because of the way it turns out. But until it happens, it's almost impossible to figure out how these things are going to go. I mean, really, to test that would be expensive.

DR. COHN: Well, actually, is that something you could ask either the developers of the codes -- I mean, the sponsors -- or the actual developers themselves?

DR. LIBICKI: We could go back and examine that.

DR. COHN: That would actually be an interesting question.

DR. GREENBERG: To the extent that there's a crosswalk, you could look and see, because I think some X percentage of the procedures fall into X number of codes in ICD-9-CM, and how many codes crosswalk to those I don't know.
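
The crosswalk check described here can be sketched the same way. Assuming a hypothetical two-column file of ICD-9-to-ICD-10 code pairs (the file format is an assumption; no such file is specified in the testimony), the fan-out per ICD-9 code is a simple grouped count:

    import csv
    from collections import defaultdict

    def crosswalk_fanout(crosswalk_csv):
        # Map each ICD-9 code to the set of ICD-10 codes it crosswalks to.
        fanout = defaultdict(set)
        with open(crosswalk_csv, newline="") as f:
            for icd9, icd10 in csv.reader(f):
                fanout[icd9].add(icd10)
        # Fan-out per ICD-9 code: how many ICD-10 codes replace it.
        return {code: len(targets) for code, targets in fanout.items()}

Restricting that map to the heavily used ICD-9 codes would approximate the expansion coders actually face day to day, which is the comparison at issue.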

DR. LIBICKI: I wouldn't be surprised if there are codes that are not being used today because nothing fits the category.

DR. GREENBERG: Oh, probably. I think it's a much smaller number of codes that are typically used.

DR. LIBICKI: Right.

DR. GREENBERG: You know, it sort of raises the question, this issue of the number of codes and the extent to which that's a problem.

This has come up, just as an analogy, with ICD-10, which is used around the world for mortality data. The question was whether, for countries that don't have a history or a real strong infrastructure of using ICD for mortality, the way others have been using 9 or 8 or whatever, you should come up with a more simplified version of 10. I think the conclusion of a lot of people has still been that you certainly don't want to simplify the index at all.

It makes it harder if you don't have the level of detail. I mean, you are thinking, how can we make this easier for countries that haven't been using this, but it actually makes it harder if you don't have the level of detail that allows them to say, yes, this is it. If you have that level of detail and can, in fact, identify the right condition or the right procedure and just put the code on there, it doesn't really matter how many codes there are. Actually, having more made it easier. I know that has been sort of the thought process with ICD-10. I just offer that.

DR. COHN: And I don't really have an opinion on that. It's really an issue of the code set. I mean, you commented earlier that the question is what the caregivers are really saying versus what's being coded. As long as things are sort of okay on that, you are probably right about what you are describing. Obviously those are things that you are going to be looking at to try to estimate.

Yes, obviously I'm going to be very curious about what the results of the study are. Are you interviewing people in the private sector about this one?

DR. LIBICKI: Oh, yes. We have tons of phone calls including phone calls to people whose names are on the table.

DR. COHN: Okay. That's obviously the key piece to make sure you get enough industry input and reflection about all of this.

I don't have any other questions. Mike?

DR. FITZMAURICE: You know, as the methodology gets honed down finer and finer, you get some sense of where you are going to be spending more time and less time.

It might be useful for us to take a look at the methodology, take a look at the framework in front of us, so that we then have kind of a better shot at judging the final product. So you are looking at criteria before you actually make a judgment.

It's like seeing a framework, seeing the methodology, before you see the final result. Part of that was covered here today. We went through a long list of it, but as Jeff mentioned, we don't really have it in front of us yet.

DR. COHN: So you are suggesting that we have the chance to thoughtfully critique it and make our suggestions.

DR. FITZMAURICE: Not even thoughtfully critique it, just so that we become comfortable with the methodology, as opposed to a situation where it's: here are the results, and we have to explain them to somebody and say, well, I think it was this or that.

DR. BICKFORD: A question I have, in relation to the testimony we had earlier over the last two and a half days, is about taking a look at terminologies and redundancies.

Is that going to be a consideration, given that you are looking at ICD-10 as a more encompassing terminology that could replace some of our existing terminologies?

I'm thinking of it from a standpoint of the inclusion of more content. I'm just raising that as a question.

DR. COHN: Carol, when you comment, you need to make sure to get near a microphone. I think she was just sort of reinforcing her general comment.

I guess I should also comment that in some of our testimony we talked about clinical terminologies over the last couple of days, and there was one testifier who suggested that we at least recognize that there are going to be more robust clinical terminologies that hopefully will be implemented.

I'm not sure if that really has anything to do with this particular study. As you commented, it begins to add additional factors and becomes almost impossible to quantify.

DR. LIBICKI: Well, in the end the codes are the codes, and what the codes are called is a guide to coding, but the codes are the codes themselves. That's going to sound like Humpty Dumpty.

What I'm basically trying to say is there is the definition of the code, and then there are the words that are used to describe the code, and in the end it's the definition that's controlling. Hopefully they line up with the words.

DR. COHN: I think the person -- once again, I don't know whether it was John -- do you want to come up?

DR. FONNEN: John Fonnen. I work for McKesson in health informatics, and one of the things we talked about was that in the world where you are translating from free text to ICD-10 or ICD-9, you might have one set of costs and benefits.

In the world we are hoping to go to, where we have these rich terminologies that provide capacity for a lot more decision support and a lot more automation of care -- the kinds of things that you build drug contraindications on, real knowledge-base stuff -- when you put those into the equation, that might obviate some of the specific benefits you mentioned, or it may change the costs. It could change the picture a fair bit, but I think it also introduces a lot of complexity.

DR. LIBICKI: Yes. You raise a very interesting point, I think a very broad point about, in fact, what information is transferred and how it's transferred.

At the end of the day, however, you have got to take that information and translate it into pieces of information that will be consistently, reliably, and predictably processed by payers and providers to transfer money.

From an analytical point of view you have to be able to translate it into something that can fit into analytical categories that are meaningful and can be processed by computers or processed by analysts whether or not they use computers.

Whether you maintain the rich data set, and how you use the rich data set in terms of the more narrative descriptions of what's going on, is in many ways a different question.

But if I continue talking now, we will actually get into a very fascinating, but different, topic -- kind of an ICD-19 topic.

DR. COHN: Steven, then we should be able to wrap this up and we can talk about the enforcement rule.

DR. STEINDEL: Oh, why don't we just talk about the enforcement rule.

I was just going to comment that in the last stages of the discussions this morning, and in some of the CHI discussions on Tuesday, there was a touchpoint between the clinical terminologies and the administrative slash regulatory terminologies, and I think Carol's question touched on that interface. Right now we have to treat that as a question we are going to ask again in the future; it really doesn't apply very well to the discussion of the transition from the various ICD-9 terminologies to the ICD-10 terminologies, as opposed to what we are talking about with the introduction of clinical terminologies.

I do see the interface of those becoming blurred, probably when clinical terminologies are more widely used, and then we will most likely revisit the question from a regulatory point of view, but that's further down the line.

DR. COHN: I think the other piece, of course, is that Martin talked some about the whole issue of decision support with the terminology, positing potentially a benefit related to that, and the question, of course, is what happens if you have got these clinical terminologies that you are using for that.

Martin, in thinking about this one, we probably ought to stay away from it; this adds a whole other N to the complexity of this one, and I just don't know that you can --

DR. LIBICKI: It seems pretty complex as it is.

DR. COHN: Yes. Are there any other questions or comments?

Agenda Item: Enforcement Rule - Dr. Simon Cohn

I do want to spend a couple of minutes talking about the enforcement rules, and it looks like we have time. But certainly, if there are others who want to offer testimony or who have suggestions about the methodology, there is obviously still time to comment.

Okay. It sounds to me like I've heard a couple of people comment that it would be nice to actually see this in writing, so that if something jumped out at us as an additional piece to the analysis, we would obviously be able to recommend it.

I understand that this is a draft, in-process document and will not be available for general distribution at this point although that's Marjorie's call.

DR. GREENBERG: We can discuss it.

DR. COHN: Martin, thank you so much for coming.

DR. LIBICKI: My pleasure.

DR. COHN: Now, we will move to this last topic. I've mentioned the enforcement rule four times now. First of all, how many of you around the table have read the interim enforcement rule? So I'll be talking to myself. There's one other person to talk to.

I think, as you know, the department has issued an interim enforcement rule, and for those of you that haven't read it, let me try to explain what it is.

DR. BLAIR: Does interim mean the same as an NPRM?

DR. COHN: No. This is an interim rule that applies while the NPRM is determined and published and then the final rule is published. It's actually sort of interesting: it is primarily a procedural rule that was meant to be in effect for about a year. I don't have it in front of me, so I don't remember exactly when it expires, but I think it's sometime next year, as I remember. It doesn't dwell much on any of the substantive issues around enforcement.

The interesting piece is that comments can be submitted up until mid-June, even though the rule went into effect in mid-May. I guess that is what interim rules are about; I'm not an expert on interim rules, but I think the intent is that if there are any comments, they would be reflected in future versions, in the notice of proposed rulemaking, or in the final rule. And once again, remember, I'm not a lawyer, so this is my understanding from having read all of this.

Now, the question, of course, is whether the subcommittee or full committee ought to write a letter to the department related to the interim rule, recognizing that the request for comments ends on June 19th, which is before the next full committee meeting.

Now, let me make the following comment as just a suggestion; I don't know what process we would use. My own view, quite frankly, is that I don't know that it's critical we actually make a comment, given that it's an interim rule.

If we were to make a comment, the main gist, once again not being a lawyer, is that most of these are very procedural matters, which I'm not sure we have any particular comment about. I would think the gist of our comment would be that, A, we think it's a good idea that there be an interim rule while the final rule is being developed, and that we are likely to have a hearing to help shape the final rule and get industry and public input.

I would hope sometime this fall, and that would hopefully be a joint hearing with the Subcommittee on Privacy.

Now, having said all that, I don't think I have much more that I would add to that letter. It might be a very short letter.

I was going to ask others on the subcommittee whether you have any particular view: is there anything else you want added, and do you think we should even have a letter? Normally the responsibility to respond to things that are HIPAA related rests not just with the subcommittee but with the full committee. Marjorie, I see you are trying to say something.

DR. GREENBERG: I was trying to think what the value added would be if there's really nothing substantive to say. I have to admit that I found it on the internet, but I haven't read it. I mean, I have looked at it, but I haven't read it.

DR. DOLIN: How many thousand pages is it?

DR. GREENBERG: It's not that long.

DR. STEINDEL: Simon, I'm looking at it right now, and it says this interim final rule ceases to be in effect on September 16, 2003.

DR. COHN: No, that was an error. It's actually 2004. That's already been amended. There was an error in the Federal Register notice.

DR. STEINDEL: But I've read through it, I admit not in complete detail, and I agree with you. It was basically a series of procedures to let people know how the process will be handled until a real enforcement rule is in place, and I didn't really see anything significant to comment on, except if I were a lawyer.

DR. GREENBERG: How they can file complaints.

DR. STEINDEL: Right. How they file, where the complaints will be filed, and how the complaints will be handled administratively within HHS. You know, I just didn't see anything that was worth commenting on.

DR. COHN: Certainly it would be administratively easier for the whole committee, since otherwise we are going to have to have a conference call about all this. But I did want to bring it up.

I'm absolutely fine with not pursuing it. It's really not our responsibility alone. If Mark Rothstein from privacy feels there's a need for a letter, he is certainly within his rights, especially as a lawyer, to develop something for us to look at, but other than a bunch of legal procedures, I really didn't see that there was much useful for us to comment on.

DR. GREENBERG: This was an interim final rule, you said?

DR. COHN: Yes.

DR. GREENBERG: But there is going to be an NPRM on enforcement. As long as there is going to be an NPRM, I think that will provide ample opportunity for a hearing and then for comments.

DR. STEINDEL: The situation is, Marjorie, that the transactions will go into effect on October 16th.

DR. GREENBERG: Right. Well, privacy already went into effect.

DR. STEINDEL: And privacy is already in effect and there's no enforcement rule so CMS had to get out to the world what procedures they would follow during that period.

DR. GREENBERG: Yes, that's what I thought. Okay.

DR. COHN: Stan, did you want to write this letter?

DR. HUFF: No, a related but different question. I didn't read this, but how would I know that I was supposed to read this particular thing and comment on it? To my knowledge, I didn't get any notice from this committee that that was the --

DR. GREENBERG: It was mentioned at the -- when did it come up? It might have been on the executive subcommittee conference call, actually. So probably you are right. Now, since it isn't that long --

DR. HUFF: If I'd heard of it, maybe I'd have been in a different condition.

DR. COHN: We would be happy to send it to you. But I mean, that's certainly a good point.

DR. HUFF: It's not this particular one I'm worried about. I just wonder if there was something I was supposed to be doing systematically that I wasn't, that I didn't hear about this.

DR. COHN: Well, we think in general it's a good idea for you to read every single NPRM final rule related to HIPAA.

DR. HUFF: The problem I was trying to avoid was reading the whole Federal Register. I don't mind reading those parts that are pertinent if somebody would just point them out.

DR. GREENBERG: Go to the administrative simplification web site.

DR. HUFF: I'm looking for something that was push instead of pull.

DR. COHN: Stan, I actually do want to apologize, because I think it would have been most appropriate for me to have sent out copies to everybody just to alert you to its existence. Either I'll do it or I'll get somebody on the staff to send you a copy, and Jeff a copy also.

I guess it was more of a -- I guess what you are hearing is I didn't think there was much there. I felt a responsibility to at least bring it up and discuss it. I guess I'm getting sort of the response I expected which was: Huh?

On the other hand, we probably don't have a lot of time to write a letter on this. What I will do is send out copies to you and Jeff and Clem and Kepa within the next 24 hours. If, on review, you feel that a letter is appropriate after all, we will convene a call for the subcommittee, and if we still feel it's appropriate, we will draft something and have the full committee review it posthaste. Does that sound okay?

DR. HUFF: Again, I'm not worried about this particular thing, and I'm not complaining in any way. I just wanted to make sure I wasn't missing some channel of communication that I should be paying attention to.

DR. COHN: No, I think you are okay on that.

Okay, well, listen, unless we hear otherwise from subcommittee members, we will just remain silent on this interim rule.

Now, with that, are there other issues? We obviously have a meeting of the full committee on June 24th and 25th. At some point during that meeting we are going to have a breakout for the subcommittee, and I presume it's probably the afternoon of the 24th.

Are there any issues -- I mean, obviously, at this point we have a couple of letters that we are writing for the full committee, and we are going to have a letter, on the first morning, from the hearings about issues related to the transition to the administrative and financial transactions.

We are obviously going to be having a letter related to our PMRI testimony, with some early recommendations and communications with the Secretary. I don't think we'll have a draft of a report to review, which is probably okay.

Time and willingness to participate permitting, I think we will have at least a brief update on the claims attachment rule, what's going on with it, and changes to that standard in accordance with HL-7 and X-12. That will depend on the time we have; somebody will update us on that.

DR. GREENBERG: Who are you planning to have testimony from?

DR. COHN: To be determined.

DR. GREENBERG: Okay. What about the annual HIPAA report?

DR. COHN: Jim Scanlon and I are working on the annual HIPAA report. I guess that will get added to the agenda for the subcommittee to review, but it will need to be distributed to other subcommittees also, since it isn't just the Standards and Security Subcommittee's.

DR. GREENBERG: Is there anything we are supposed to be doing related to the ASKA(?)?

You want me to leave, don't you?

DR. COHN: Well, I think our ASKA responsibility will include that letter we are sending to the Secretary about the transition to the full administrative and financial transactions.

From our last meeting, we felt that there needed to be some work on the web site, connecting it to the other NCVHS web site and directing users to some best practices. So we obviously need to do that. Other than that, I don't think there was much more. Mike, do you remember?

DR. FITZMAURICE: There is an ASKA report. We make some inferences about best practices, but we don't address them directly, since nobody reported best practices.

That report has been sent up, but it hasn't gone around to the SSS yet. It should go around to the subcommittee, and the subcommittee should give comments to us, that is, Steve Steindel and myself; then it should go to the NCVHS. It's ready.

DR. COHN: Okay, so what we will do then is give the full subcommittee a week to review it. Could you do us a favor and resend it, per Stan's point? I know we've all received it, but it was a while ago. I don't think it's something to be voted on; this is more for information for the full committee.

DR. FITZMAURICE: Information and comment.

DR. COHN: Information for the full committee.

DR. FITZMAURICE: In the sense of whether it's ready to be sent to the NCVHS members before the meeting.

DR. COHN: So, are there other things for June? We'll just talk about the rest of the year very briefly. Are there other agenda items for the June meeting?

DR. BLAIR: Will we have most or all of our subcommittee members present at the NHII session on June 30th to July 2nd? Is everybody -- I'm assuming everybody knows about it. Stan, I'm assuming you know about it?

DR. HUFF: Yes, and I will be attending.

DR. COHN: I can't speak for those who aren't here. And for those on the internet and others, there's been a public announcement of an HHS sponsored summit on national health information infrastructure that is being convened starting on June 30th in Washington, D.C., continuing on through July 2nd. I believe there was information on --

DR. GREENBERG: There's a web site.

DR. COHN: Is there a web site specifically related to this?

DR. GREENBERG: Yes. I don't have the URL. I'm trying to think if there's a link to it.

DR. COHN: I guess I'm suggesting based on this comment that there ought to be some information on the NCVHS website which is www.NCVHS.HHS.gov, at least informing people of this meeting or pointing them to a website for the meeting.

DR. GREENBERG: We could do that.

DR. COHN: That would be great.

Now, future meetings of the subcommittee are currently scheduled for August 19th and 20th. I think we just added a day, and I don't know whether we will poll the committee today to decide whether the testimony will be on the 18th or the 21st.

I guess I should ask the members here: do you have an opinion as to which day you would rather have it, the beginning or the end? Why don't we assume that it will be on the 21st, which is the Thursday, as opposed to the Monday, unless there's something related to logistics and scheduling.

DR. GREENBERG: That is August 19th through the 21st. And that extra day was to take testimony related to --

DR. COHN: Drug terminology and devices. And certainly, if we don't need it, we will drop it. We will probably do, I presume, a final review of the recommendations related to PMRI at the September meeting.

DR. GREENBERG: And I will provide Steve with a T-shirt for that meeting.

DR. COHN: And then obviously we have other meetings of the subcommittee currently scheduled for October 21st and 22nd, which will hopefully remain just a two-day meeting, and then December 9th and 10th, with full meetings of the committee on September 23rd and 24th and November 5th and 6th.

Okay, great. Anything else for the subcommittee?

I want to thank everyone. It's been a great meeting. We really appreciate everyone's hard work. With that, we are adjourned.

(Whereupon, the meeting was adjourned at 2:47 p.m.)