[This Transcript is Unedited]

DEPARTMENT OF HEALTH AND HUMAN SERVICES

NATIONAL COMMITTEE ON VITAL AND HEALTH STATISTICS

SUBCOMMITTEE ON STANDARDS AND SECURITY

May 21, 2003

Crowne Plaza Hotel
1489 Jefferson Davis Highway
Arlington, VA 22202

Proceedings By:
CASET Associates, Ltd.
10201 Lee Highway, Suite 160
Fairfax, Virginia 22030
(703) 352-0091

P R O C E E D I N G S [9:13 a.m.]

Agenda Item: Call to Order, Introductions, Review Agenda - Dr. Cohn and Mr. Blair

DR. COHN: Well good morning everyone, I want to call this meeting to order. This is the second day of three days of hearings of the Subcommittee on Standards and Security of the National Committee on Vital and Health Statistics. As you all know the Committee is the main public advisory body to HHS on national health information policy. I am Simon Cohn, chairman of the Subcommittee, and national director for health information policy for Kaiser Permanente. We want to welcome Subcommittee members and HHS staff and others here in person. We obviously also want to welcome those listening in on the internet. We want to remind those here to make sure to speak clearly and into the microphone like I am doing so that people on the internet can hear us.

Today and tomorrow morning we will continue our work on health care terminologies, and I want to thank Jeff Blair and Walter Sujansky and Suzie Burke-Bebee for their very hard work, and Mike Fitzmaurice and Steve Steindel for really putting together the day and a half of agenda hearings on this, this has been the big issue, and I think we’ve been struggling mightily to get our arms around it. Jeff will be leading the session and after we’re done with introductions we’ll turn the microphone over to him for some introductory comments. Finally, tomorrow afternoon, Thursday afternoon, we will talk about the federally funded ICD-10 cost/impact study being undertaken by RAND, and that will be our final act of these three days.

Hopefully, and I would remind everyone, we will find some time over the next day and a half to talk at least briefly about the interim enforcement rule and determine whether or not the Committee needs to draft a letter related to that to the Secretary of HHS.

With that, let’s have introductions around the table and then around the room. For members on the Subcommittee I would ask if there are issues coming before us today for which you need to recuse yourself, would you so state. With that, Jeff would you like to introduce yourself?

MR. BLAIR: Jeff Blair, vice president of the Medical Records Institute, vice chair of the Subcommittee on Standards and Security of NCVHS. There’s nothing that I am aware of for which I think I need to recuse myself with respect to any of the terminology discussions for the next day and a half. I am, however, a member of HIMSS, HL7, and ASTM. And I’ll pass it on to Suzie.

MS. BEBEE: Suzie Bebee, NCHS, staff to the Subcommittee.

MS. GREENBERG: Marjorie Greenberg, NCHS, CDC, and executive secretary to the Committee.

MS. PICKETT: Donna Pickett, NCHS, CDC, staff to the Subcommittee.

DR. STEINDEL: Steve Steindel, Centers for Disease Control and Prevention, liaison to the full Committee and staff to the Subcommittee.

DR. FERRER: Jorge Ferrer, Centers for Medicare and Medicaid Services, staff to the Subcommittee.

MS. GRAHAM: Gail Graham, Department of Veterans Affairs, staff to the Subcommittee.

DR. GUAY: Dr. Al Guay from the American Dental Association.

DR. OLIVER: Diane Oliver, Stanford University.

DR. BROWN: Steve Brown from the Department of Veterans Affairs.

DR. FITZMAURICE: Michael Fitzmaurice, Agency for Healthcare Research and Quality, liaison to the National Committee and staff to the Subcommittee on Standards and Security.

DR. MCDONALD: Clem McDonald, Indiana University, Regenstrief Institute. I’m the chairman of the LOINC Committee and I have to recuse myself from any votes or discussions on that.

DR. HUFF: Stan Huff with Intermountain Health Care and the University of Utah. I’m a co-chair also of the LOINC Committee, and so I need to recuse myself from those discussions or votes, also a former chair of HL7, I don’t think we’re going to be voting on anything to do with HL7, but I’ll give you that background also.

DR. SUJANSKY: Walter Sujansky, independent consultant and advisor to the Subcommittee on terminology standards for patient medical record information.

DR. DASH: I’m Rog(?) Dash, I’m an assistant professor of pathology at Duke University Medical Center, and work in both clinical diagnostic work as well as in medical informatics, and am a member of the SNOMED international authority.

DR. MADDEN: I’m John Madden, I’m also an assistant professor of pathology at Duke, and I’m a member of HL7 and I’m a member of the SNOMED editorial board.

DR. LARSEN: Keith Larsen, representing Intermountain Health Care.

DR. ZINDER: Dan Zinder from the Office of the Assistant Secretary of Defense for Health Affairs.

MS. WILLIAMSON: Michelle Williamson, National Center for Health Statistics, CDC.

DR. WARREN: Judy Warren, University of Kansas School of Nursing, I’m also a consultant to the SNOMED editorial board and a co-chair of HL7.

DR. BICKFORD: Carol Bickford, American Nurses Association.

MS. BROUCH(?): Kathy Brouch, American Health Information Management Association.

DR. FAUGHNAN: John Faughnan representing McKesson.

DR. EMERY: Jack Emery with the American Medical Association.

DR. BIZARRO(?): Tom Bizarro from First Databank and co-chair of NCPDP, product identification workgroup.

MS. ECKERT: Karen Eckert with Medispan.

DR. RIDGE: John Ridge with the American Neurological Association.

DR. LEVY: Dr. Brian Levy with Health Language.

DR. CHUANG: Ian Chuang with Cerner Corporation.

DR. LAU: Lee Min Lau, 3M.

MS. STRAUSS: Kathleen Strauss, Department of Veterans Affairs.

DR. LOREAU(?): David Loreau, Medicom Systems.

DR. LAB(?): Robert Lab with American Dental Association.

PARTICIPANT: --, Cancer Information Products and Systems, NCI.

PARTICIPANT: Inaudible.

DR. COHN: Before I hand the microphone over to Jeff I did want to make my own public disclosure and recusal. Obviously, as we all know, I’m a member of the CPT editorial panel; I don’t think there are any CPT issues coming before the Committee today. But I’m also a member of the National Uniform Claims Committee. Kaiser Permanente, my employer, as you know is a large, in fact the nation’s largest, non-profit health maintenance organization. We are obviously a combination of a health plan plus a very large provider organization with about 100,000 physicians, nurses, and other health care professionals. As such we are obviously a very active user of clinical terminologies, and I actually will be testifying tomorrow on our experience with them.

With that Jeff would you like to lead off the introduction.

MR. BLAIR: Good morning everybody. We’ve been working for quite a while getting ready for today; let me make sure I’m close enough to the microphone here. Let me just put this in perspective for those of you who may not be familiar with the process that we’re going through. The original mandate to the NCVHS from Congress related to this activity came as part of the HIPAA legislation, which indicated that the NCVHS is to study issues related to the adoption of uniform data standards for patient medical record information and the electronic exchange of that information, and to report to the Secretary no later than four years after the passage of the legislation. In August of 2000 we made our report; the report essentially set forth a framework, guiding principles for the selection of specific PMRI standards, and other recommendations. The guiding principles have been adapted to the selection of message format standards, and those recommendations were set forth on March, excuse me, on February 27th, 2002. The Secretary of Health and Human Services, Tommy Thompson, basically took the nucleus of those recommendations, and they were set forth on March 21st of this year in Dearborn, Michigan, as the CHI standards. So we’ve seen a little bit of a shift here: they weren’t announced as HIPAA standards, they were announced as CHI standards, CHI being the Consolidated Healthcare Informatics Initiative. That’s part of the Administration’s eGov Initiative, and it means that they’re being adopted by HHS, DOD, and VA in a coordinated fashion.

The process has continued as we’ve begun to look at PMRI terminology standards, and I would just mention to you that PMRI, patient medical record information, is the wording in the legislation, so we’ve continued to use those words to be consistent with it. If you feel more comfortable thinking of them as clinical data standards or electronic health record standards, I think many of us tend to make those same connections.

Last August the Subcommittee on Standards and Security began the process of evaluating, selecting, and recommending PMRI terminologies. In August we wound up hearing testimony from a number of industry experts over a day and a half, giving us guidance as to what should be our direction, what should be our scope, and what they thought of the criteria for selection and how it should be modified to be able to select PMRI terminologies. We continued those deliberations in October; in November and December we began to draft a questionnaire, which was then sent out to as many PMRI terminology developers as we could identify. We looked under rocks, we found a few of them under rocks, and we found them hiding in other places; as many PMRI terminology developers as we could identify, we sent that questionnaire to on January 6th. And we actually got over 40 replies; well, most of them came in by February 14th, but others managed to come in a little bit later.

And Walter Sujansky, whom we have employed to assist us in this, Dr. Sujansky is a consultant to the Subcommittee, then went through the process of analyzing the results from the terminology developers from that questionnaire, a comprehensive 14-page questionnaire, and produced the first draft of that analysis and presented it to the Subcommittee on March 25th of this year. What we did at that point was to identify four technical criteria that we felt were absolutely essential for a set of core terminologies. They are concept orientation, concept permanence, non-redundancy, and explicit version identifiers, and there were at that point ten of the over 40 that met those criteria. When we were presenting on March 25th and 26th we indicated that we wanted some feedback from the terminology developers to make sure our analysis was correct and the information that we had was correct.

A second draft was produced in April and has been sent out to the terminology developers for additional comment. By the time we had the second draft there were some other changes: one dropped out and three more were added, so we now have 12 that met the technical criteria. Out of those 12 we then sought additional input from the users of those 12 terminologies in terms of the strengths and weaknesses, the gaps, the interfaces, and other issues. And that’s what we’re going to be hearing today, from the users: users being providers, health care information system vendors that might be using these terminologies, terminology servers, and other experts that might have experience using these terminologies and could help us understand the strengths and weaknesses, how they relate to each other, and other issues.

So, before we go directly to the panel, there’s one other comment that I do want to make, because Suzie Bebee has done an outstanding job in not only pulling together these panels, but as the people who are going to be testifying to us have been nice enough to send in their responses to the questions for today’s testimony, Suzie has pulled that material together into a spreadsheet, and that was sent out this last Friday to help the Subcommittee members. And I just wanted to verify, because I haven’t had a chance actually: Clem and Stan, have you had a chance to receive the spreadsheet and to see all the documentation beforehand?

DR. MCDONALD: Not the spreadsheet that I first saw last night.

MR. BLAIR: That was the second one we sent, but do you at least have the hard copies in front of you, that should help.

DR. MCDONALD: What I see I wouldn’t describe as a spreadsheet, it’s a bunch of separate, it doesn’t look like a spreadsheet.

MR. BLAIR: But Suzie sent out the first group of those on Friday night and then last Thursday we actually sent out the individual ones.

And for those of you who are here for the first time, I happen to be totally blind, so as you’re giving your testimony, if by any chance my eyes veer to where I’m hearing the sound instead of where you are physically located, do not feel that I’m not paying attention to you.

I think we’re ready to introduce our first provider panel, we have three testifiers, and maybe if you just briefly introduce yourselves and then we’ll go to your testimony. Diane, would you want to start first, and then Keith.

DR. OLIVER: I’m Diane Oliver, I’m from Stanford University working on a research project called PharmGKB under the direction of Dr. Russ Altman. In this project, it’s a pharmacogenetics --

MR. BLAIR: I’m sorry, I’m just asking for your introductions first and then we’ll get started.

DR. LARSEN: My name’s Keith Larsen, I’ve worked with Intermountain Health Care since 1976, my background is pharmacy. I’m currently the director of information systems planning at Intermountain Health Care.

DR. GUAY: I’m Dr. Al Guay from the American Dental Association; it’s the provider panel and not the developers panel, so I have a little explaining to do. You see my designation says private practitioner and American Dental Association; it’s sort of a chronological thing. I practiced clinically for 30 years, and when my son joined my practice I decided there were other interests I wanted to pursue. That was about the time of the Clinton Health Care Initiative, and so I found plenty of things to do; I’ve been working with the American Dental Association for ten years. Today I’m speaking as a user of coding systems.

MR. BLAIR: Thank you all for coming here. Now please be aware, if you look at our agenda, we have a very, very complete, tight agenda, so we are going to ask you to stay strictly to the time limits. If you have one terminology you’re testifying about, please keep your comments to ten minutes. If you have two you have an additional five minutes, and if you have three there would be an additional five for that. Suzie, do you have any other comments or suggestions for the testifiers before they begin?

MS. BEBEE: Just if we’re going to have questions at the end of the whole panel presentation.

MR. BLAIR: Yes. Please, if we could save our questions until all three of our testifiers have made their testimony. Diane, could you begin?

Agenda Item: Panel 1 - Terminology Users: Healthcare Providers - Dr. Oliver

DR. OLIVER: Thank you. The terminology I’ll be addressing this morning is NDF-RT, National Drug File Reference Terminology, and this is the drug terminology we’re using in PharmGKB. First I’ll give you a brief overview of PharmGKB since this is a research project, a little different from patient records. I’ll talk about what we did for our selection of our drug terminology. I’ll briefly describe a few things about the NDF-RT content and structure. I’ll show you some screen shots of the PharmGKB user interface so you see how we’re using it, and end with a brief conclusion.

MR. BLAIR: Diane? Will this give you time to be able to give the responses to the questionnaire?

DR. OLIVER: Yes. Ok, briefly about PharmGKB: it’s a pharmacogenetics knowledge base; the definition of pharmacogenetics is the science of how genes influence an individual’s response to drugs. Basically we’re looking for gene-drug associations, so that we can associate drugs with genes that affect an individual’s response to a drug, such as an adverse effect or efficacy. Briefly, an example of an adverse effect: there’s a drug, 6-mercaptopurine, which interacts with the gene TPMT, which can cause fatal bone marrow suppression. In terms of efficacy, the drug codeine interacts with the gene CYP2D6, and if you have a particular genotype the drug will have no effect on pain.

This is a screen shot of the PharmGKB homepage, I’ll describe it a little more in a few minutes, basically you just see that there’s headers and places for entering search terms and so on.

In our selection of a drug terminology, our selection criteria were as follows. Primarily we wanted something that was low cost, that is, low cost to us, because we’re on a research budget. We wanted few proprietary constraints because we’re building a web-based system that is accessible to anybody using the web. We primarily needed generic names because most of our work is done with generic names, but we also needed trade names so that people could search for drugs based on trade name. Importantly, we wanted a classification structure that made sense to us. When we started we weren’t sure really what this meant, so we had to kind of take a look at things to see what was available.

In general, our goals were, as I said, to link each drug and each gene that are pharmacogenetically related. Within this project we’re collecting experimental data from a number of research collaborators around the country who are submitting experimental data sets about pharmacogenetics, and we wanted to be able to index these data sets with, for instance, drug names. In addition, we store information about pharmacogenetic articles, and we need to index them with drugs.

We considered a number of terminologies; I’ve listed just some of them here: NDC, VA-NDF, Multum, Gold Standard Multimedia Clinical Pharmacology, Micromedex, FirstDataBank, Facts and Comparisons, NDF-RT; there were others. But a number of them offered more than we wanted; we just wanted the drug terminology, we didn’t need a lot of drug knowledge beyond that. Drawing a line between knowledge and terminology is always an issue, but we wanted just some basic things about terminology.

So NDF-RT, National Drug File-Reference Terminology, has a number of features that we did use and a number of features that we have not used at this point. The ones that we’ve used are generic names, trade names, ingredients, mechanism of action, physiologic effects, and conditions treated. These last few, mechanism of action, physiologic effects, conditions treated and so forth were features that we found useful and were part of what we were looking for in terms of classification structure. Because this is not a pharmacy management system it is not an electronic patient record, it is not a prescription writing system, we did not use dose forms, strength, manufacturer, package size, package type, and we have not yet used the UMLS Concept Unique Identifier and MeSH definition, although we may in the future.

A few words about NDF-RT content and structure. This slide shows the beginning of the XML file, the first version of NDF-RT that we used. As you can see it’s XML with XML tags, and basically it was delivered to us as a big text file in XML, 85 megabytes in the first version and 95 megabytes in the second version. And here at the top level you see six categories or kinds: drug, ingredient, function, pharmacokinetics, disease, and HL7 dose form. If you scroll down further in the file you’ll see the concept definitions for concepts such as a drug like furosemide, and all the information, the property values, for furosemide.
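The file layout described above can be illustrated with a small sketch. The tag names and attributes below are hypothetical stand-ins, not the actual NDF-RT schema; the point is simply how the six top-level kinds and a concept’s role values could be read out of such a file:

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment echoing the six top-level "kinds" described in the
# testimony; the real NDF-RT tag names and attributes may differ.
NDF_RT_SAMPLE = """
<terminology>
  <kind name="DRUG_KIND"/>
  <kind name="INGREDIENT_KIND"/>
  <kind name="FUNCTION_KIND"/>
  <kind name="PHARMACOKINETICS_KIND"/>
  <kind name="DISEASE_KIND"/>
  <kind name="DOSE_FORM_KIND"/>
  <concept name="FUROSEMIDE" kind="DRUG_KIND">
    <role type="mechanism_of_action" value="Diuresis"/>
  </concept>
</terminology>
"""

def list_kinds(xml_text):
    """Return the names of the top-level kinds in the file."""
    root = ET.fromstring(xml_text)
    return [k.get("name") for k in root.findall("kind")]

def roles_for(xml_text, concept_name):
    """Collect (role type, value) pairs for one named concept."""
    root = ET.fromstring(xml_text)
    for c in root.findall("concept"):
        if c.get("name") == concept_name:
            return [(r.get("type"), r.get("value")) for r in c.findall("role")]
    return []
```

On a real 85-95 megabyte file one would stream with `ET.iterparse` rather than loading the whole document, but the access pattern is the same.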

Because it came to us in this XML file and it’s organized as a frame-based system, frames being essentially concepts, roles, and properties in a hierarchical structure, we initially wanted a relational format, but this particular structure lends itself more to a frame-based knowledge system. So we loaded it into Protégé, which is a frame-based knowledge system that we have at Stanford. For instance, you can search by term and bring up the information that a term, azathioprine in this case, maps to; if you click here, say, you can drill down and find the classes, direct superclasses, subclasses and so forth, and the roles: has ingredient, mechanism of action, physiologic effect, and treats. In addition you can navigate your way through the hierarchy and click your way down to find the azathioprine preparation in that manner as well.
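The frame-based navigation described above, drilling from a concept to its direct superclasses and its role slots, can be sketched with a toy frame store. The concept names, links, and role values here are illustrative inventions, not actual NDF-RT content or the Protégé API:

```python
# A toy frame system in the spirit of the usage described: concepts with
# "isa" superclass links and role slots. All names are made up.
FRAMES = {
    "Drug": {"isa": [], "roles": {}},
    "Immunosuppressant": {"isa": ["Drug"], "roles": {}},
    "Azathioprine Preparation": {
        "isa": ["Immunosuppressant"],
        "roles": {"has_ingredient": "azathioprine",
                  "mechanism_of_action": "purine synthesis inhibition"},
    },
}

def ancestors(name):
    """Walk the isa links upward, like drilling to direct superclasses."""
    out = []
    stack = list(FRAMES[name]["isa"])
    while stack:
        parent = stack.pop()
        if parent not in out:
            out.append(parent)
            stack.extend(FRAMES[parent]["isa"])
    return out

def inherited_roles(name):
    """Roles on the concept itself plus anything inherited from ancestors."""
    merged = {}
    for frame in [name] + ancestors(name):
        for slot, value in FRAMES[frame]["roles"].items():
            merged.setdefault(slot, value)
    return merged
```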

In addition, we decided that we wanted additional trade names so we went to the FDA Orange Book, we mapped the generic terms from NDF-RT to those in FDA Orange Book and we added some additional trade names.
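That trade-name enrichment step amounts to a join on generic name between the terminology and the Orange Book list. Both tables below are made-up examples, not actual NDF-RT or Orange Book records:

```python
# Sketch of the enrichment: match generic names already in the terminology
# to (generic, trade) rows and append any trade names not yet present.
ndf_rt = {"furosemide": {"trade_names": {"Lasix"}}}          # invented entry
orange_book = [("furosemide", "Furoscix"), ("furosemide", "Lasix")]

def add_trade_names(terminology, orange_book_rows):
    """Merge new trade names into matching generic entries; count additions."""
    added = 0
    for generic, trade in orange_book_rows:
        entry = terminology.get(generic.lower())
        if entry is not None and trade not in entry["trade_names"]:
            entry["trade_names"].add(trade)
            added += 1
    return added
```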

Some slides from the PharmGKB user interface; this is the home page that I showed you before. As you can see there’s a simple search box where you can just enter anything to see what you’ll get back, and a description of the types of evidence that you can have for pharmacogenetic relationships: clinical outcomes, pharmacodynamics, pharmacokinetics, molecular and cellular functional assays, and so forth. There are also ways to submit data, so if you click up here to submit, you get a form where in this case you can submit a gene, a drug, and a Pub-Med article that provides the evidence for that drug/gene relationship. So here you’d enter the drug term; you can look it up here if you’d like to search for it, or when you enter it, it will be mapped directly to the underlying terminology.

Alternatively, you can search here; you can search for all drugs, and here we just have an alphabetized listing of all the drugs in NDF-RT, so you can search that way. You can also search based on drugs that contain primary data, by which we mean experimental data that people have sent to us. For instance, this list shows the drugs that have experimental data associated with them; if you click on etoposide, for instance, you’ll see the etoposide screen. And down here you see the information that comes from NDF-RT about the ingredients, mechanism of action, physiologic effects and so forth. If you click here you’ll get the phenotype data set, the experimental data that have been submitted, in this case by Mary Relling(?), PharmD, and information about the pharmacokinetic data that they have submitted. And then here’s a slide of all the data, just raw data in a big tabular display of the pharmacokinetics of this drug, and if you click on these you’ll find the genotypes associated with them.

I mentioned also this index to articles that are pharmacogenetics articles. Here’s prednasone(?) and ABCB-1, the gene, if you click down here you’ll get the Pub-Med article that references this drug gene relationship.

So finally, in conclusion, NDF-RT satisfies our requirements. We did add some additional trade names from FDA Orange Book. It’s organized as an inheritance hierarchy of concepts with roles and properties. Because of this we needed a tool such as Protégé to view it, although we’ve also converted the data into relational tables for our database back end. And we look forward to upgrading to new versions.
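The conversion to relational tables mentioned in the conclusion amounts to flattening the frame hierarchy into a concept table and an is-a edge table. A minimal sketch, with invented identifiers and names:

```python
# Hypothetical frame store: concept id -> name and parent concept ids.
concepts = {
    "C1": {"name": "Drug", "parents": []},
    "C2": {"name": "Diuretic", "parents": ["C1"]},
    "C3": {"name": "Furosemide", "parents": ["C2"]},
}

def to_tables(frames):
    """Flatten frames into two relational tables: concepts and isa edges."""
    concept_rows = [(cid, f["name"]) for cid, f in sorted(frames.items())]
    isa_rows = [(cid, parent)
                for cid, f in sorted(frames.items())
                for parent in f["parents"]]
    return concept_rows, isa_rows
```

The two row lists map directly onto `CREATE TABLE concept(id, name)` and `CREATE TABLE isa(child, parent)` in a database back end.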

Thank you very much.

MR. BLAIR: Thank you, Diane. Keith?

Agenda Item: Panel 1 - Terminology Users: Healthcare Providers - Dr. Larsen

DR. LARSEN: I’m going to talk about use of two vocabularies today, one is the NDDF Plus vocabulary from FirstDataBank and the second one will be the use of LOINC vocabulary. I wanted to give first just a quick background of what Intermountain Health Care does with vocabularies and then talk about the two vocabularies specified.

Intermountain Health Care, again, is an integrated enterprise centered around Utah and southern Idaho. We have 21 acute care hospitals and 25 free-standing clinics. We have a physician division consisting of 400 employed physicians who staff our clinics and our hospitals; we also have affiliated physicians, and then we have health plans. Underlying this, then, we have a central data repository where a patient has a single identifier, and all the data that comes in from any of those sources is stored under that patient identifier. This repository’s purpose is really for transactional processes; we also have an enterprise data warehouse that we use for population analysis, which extracts from the transactional database and allows us to do outcomes measures.

Underlying these two things, then, is our health care data dictionary, which was jointly developed with 3M Health Information Systems, and the characteristics of these vocabularies are really measured by the purposes that you’re trying to accomplish. Our first purpose, when we started our systems back in 1975, 1974, was the HELP system, and it was started on the premise that we could do decision support and that would aid clinicians in their practice. And so that mandated that all the data be encoded so that it could be processed by computer.

Our second purpose is really to have a robust data dictionary to support our clinical and our financial processes, and for that we need to have a very comprehensive data dictionary and it needs to be up to date.

Our third purpose is really interoperability, not our first purpose, and mainly that’s to communicate from our core systems out to vendor supplied department systems, like communicating with our lab system or from our regional lab system ARA(?), and then communication also to outside agencies. But it’s really the third purpose, it’s not the first purpose.

The health data dictionary that we have has the characteristics of a non-hierarchical concept identifier and multiple surface forms, I’m using these terms later on during the presentation, which allow us to attach synonyms or other ways of stating the same concept. Then we have multiple relationships that allow us to support hierarchies and domains.

Again, our first purpose is to have a very robust vocabulary, and so we create the concepts within our health data dictionary in really three ways. One is just manual creation of the concepts. Second is import of available vocabularies; in that case we are importing, so we’re assigning our own concept identifier, and we then maintain the external identifiers as surface forms so that we can update our dictionary with the updates from that source dictionary. And then sometimes we import vocabularies and then create relationships manually to those, or create relationships to our manually created concepts, as we do for instance with ICD-9 codes.
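The import pattern described here, local concept identifiers with the source vocabulary’s code retained as a surface form so that a later update can be matched back to the existing concept or recognized as new, can be sketched as follows. The class and field names are hypothetical, not the actual HDD schema:

```python
class HealthDataDictionary:
    """Toy dictionary: local ids, with (source, external code) surface forms."""

    def __init__(self):
        self._next_id = 1
        self._by_surface = {}   # (source, external_code) -> local concept id
        self.concepts = {}      # local id -> {"name": ..., "surface_forms": [...]}

    def import_concept(self, source, external_code, name):
        """Import one concept; return (local id, True if newly created)."""
        key = (source, external_code)
        if key in self._by_surface:            # update path: already known
            local_id = self._by_surface[key]
            self.concepts[local_id]["name"] = name
            return local_id, False
        local_id = self._next_id               # new concept path
        self._next_id += 1
        self.concepts[local_id] = {"name": name, "surface_forms": [key]}
        self._by_surface[key] = local_id
        return local_id, True
```

A monthly NDDF or LOINC refresh would then simply re-run `import_concept` over the new release: unchanged codes update in place, unseen codes surface as new concepts.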

The use of NDDF Plus, we’ve been using that since 1987, originally it was used to create concepts within our Health PTXT dictionary, that was our original dictionary file, and now it’s being used to create concepts in our health data dictionary.

We import a number of concepts from the NDDF, ingredients, drug routes, dosage forms, dispensable medications, medication products or NDC concepts, therapeutic relationships, and allergen and cross-sensitivity information. NDDF also contains some knowledge bases, usually represented as relational files. They have a number of them, drug-drug interactions, drug-food incompatibilities, IV incompatibilities, patient education monographs, and pricing, those are the ones that we use. Additionally, the ones that we have not used are dosage checking, side effects, disease contraindications, indications, precautions, duplicate therapy, and physician order entry module, which is common full orders.

Again, the processes that we’re trying to support with this vocabulary are allergy entry and checking; medication ordering with decision support, where we use the FirstDataBank drug-drug interactions, etc., as additional decision support to our own decision engine; medication dispensing; the pharmacy department system; respiratory therapy charting; microbiology sensitivity patterns, so we use the same codes to express the antibiotics in the microbiology sensitivity patterns; and then patient education and clinician education via a link through Micromedex.

The strengths that we’ve seen with NDDF Plus have really been the adoption of recognized good vocabulary practices: the use of numeric identifiers without meaning, the expression of domains and hierarchies as relationships, a differentiation between permanent storable concepts and transitory concepts (usually the permanent ones are the identifiers and the transitory ones are the associations), multiple therapeutic classifications, the use of primitives building up into composite concepts, which I’ll show in a diagram here in a minute, and then the knowledge bases. And the last point is really that they’re an experienced supplier of this information. Again, we’ve been using their services since 1987; they have expanded their market share and they have a good service record. We get a monthly update from FirstDataBank, others get a weekly one, and we’re trying to move to a weekly update of FirstDataBank.

This is very busy but it’s mainly looking for the patterns. This just shows the build-up or the definition of primitives, building up from drug names, routes, forms, and ingredients, then to composite primitives which would be lists of ingredients, and then clinical concepts and what they have are name concepts, clinical concepts building up from a routed generic drug, a routed form generic drug, and then a generic drug with an identifier called GCN Sequence. Also under name concepts there’s a routed drug or difference between the name and the clinical concepts, the name concepts also encompass trade names, common trade names, and are really used for different purposes as far as being a better user interface than the clinical concepts. And then building down to the NDC concepts and then also allergen groups and allergen cross sensitivities.

This also shows where there are some holes in the system, in particular in the primitives, the coded units. They do have a coding system for the units, but it’s text rather than a numeric identifier, which makes them a little more difficult to use. They do not have the concept of a strength associated directly with an ingredient, although you can make that association indirectly, and that compares with RX-NORM or the RDF project where you have components, so that’s missing. The orderable drug that’s here and the packaged drug are not really necessary for interoperability but for the process of ordering. And again, how these should be judged depends on what you’re trying to do. If interoperability is the issue, there are very few weaknesses with the NDDF system; the only ones I guess would be the units and the ingredient strengths. Concentration units are also bundled into composite units, so you have milligrams per 5 ml, although that can be broken out in the system.

The orderable medication concept: again, what we’re trying to do is support our processes within the hospital, and so the orderable medication concept and the packaged drug concept are both associated with those processes and not with interoperability. And those are harder things to do, because the vocabulary clearly creates a unique concept, but when you get into processes those don’t always follow the needs that you have. Orderable versus dispensable: again, the issues there are strength versus administered dose. In the vocabulary I can have an eye drop that has a strength of 0.3 percent, but that’s not how I administer it; I usually administer it as drops, not in terms of the strength. The setting also has an effect, whether it’s inpatient or outpatient, and if it’s a combination drug, usually the definition of that drug, which can be very precise, is not very useful for clinical practice.

Just an example, here using the identifiers from NDDF: if I’m talking about administered drugs in an inpatient setting that are normally dosed as a unit dose, the vocabulary and the concepts work very well. If I’m going into package dispensing, like I do with eye drops and creams, then you have problems with this administered dose versus the strength; you’re not dosing in terms of the strength. For prescriptions on an outpatient basis you have the same issues of package versus unit dose. Again, you’ve all seen that in RX NORM form; this is just using a slide from there that shows a combination drug. Again, it would commonly not be ordered that way, because the clinician has no ability to affect the terms of those strengths; usually this is dosed as a separate tablet, or a dosage form unit, rather than just a strength unit.

Our overall assessment is that the NDDF has been a very stable and consistent source of vocabulary and knowledge bases for us. They are an experienced supplier with a good service organization, and the missing concepts that we're talking about are on their agenda to be solved.

Let's talk a little bit about LOINC. As Stan Huff, who is on the Committee and is from Intermountain Health Care, indicated, we've been a collaborator since the inception of LOINC. We did not use it to build up concepts in our PTXT dictionary, because it really came along just after our PTXT dictionary, but we have used it to build our concepts in the HDD.

For laboratory LOINC, we use it to create concepts. Again, our goal is not to have to maintain all these concepts manually, so we use LOINC to generate concepts and then use the LOINC code and the LOINC definitional name as surface forms for those concepts, so that we have the ability to update them and to recognize when we have a new concept. In the case of clinical LOINC we've manually created the concepts and then mapped them to surface forms from clinical LOINC, so we don't use it to generate the concepts, but we use it to map.
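The update-and-recognize workflow described above can be sketched roughly as follows. This is a hypothetical illustration of attaching LOINC codes and definitional names as surface forms to locally generated concepts, not IHC's actual HDD implementation; the concept-id scheme is made up:

```python
# local concept id -> set of surface forms (LOINC code and names)
local_concepts = {}
# LOINC code -> local concept id, used to detect new codes on each release
loinc_index = {}

def load_loinc_release(release):
    """release: iterable of (loinc_code, definitional_name) pairs.
    Returns the codes that generated brand-new local concepts."""
    new_codes = []
    for code, name in release:
        if code in loinc_index:
            # Known code: just refresh the surface forms in place.
            local_concepts[loinc_index[code]].add(name)
        else:
            # Unrecognized code: auto-generate a new local concept.
            concept_id = f"C{len(local_concepts) + 1:05d}"
            local_concepts[concept_id] = {code, name}
            loinc_index[code] = concept_id
            new_codes.append(code)
    return new_codes

added = load_loinc_release([
    ("2345-7", "Glucose [Mass/volume] in Serum or Plasma"),
    ("718-7", "Hemoglobin [Mass/volume] in Blood"),
])
print(added)  # both codes are new on the first load
```

Running the same release a second time returns an empty list, which is what lets an update cycle flag only the genuinely new concepts for review.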

On the processes that we're trying to support: here we do have an interoperability issue, where we didn't with the pharmacy, in that we communicate with a regional lab system and are also trying to communicate with our current lab system. We use it for lab ordering and for clinical observations.

The defined uniqueness criteria in LOINC are very well managed; part of that is having a good, precise definition of what constitutes a unique, non-redundant concept. The name is definitional. The vocabulary tools provided with it have been helpful to us when we're submitting, to find and map our concepts to the concepts that are in LOINC. The coverage for the lab concepts has been very good. The short names start to address the issue of the definitional name in the lab. The turnaround time for a new concept is usually about two weeks. Adoption of lab LOINC concepts in the industry has helped us, again, to communicate with the outside. And the price is right; it's free to use.

The weaknesses are, again, that the names are definitional and not very useful for an ordering process; the documentation of LOINC really assumes that people will be mapping their concepts to the LOINC concepts and will provide their own better names for ordering. It is volunteer content, so it depends on how robust the volunteers are in contributing to it. There are no legal values for what are called the nominal fields, which was talked about yesterday with the other panel. Panels are expressed in there, which helps support ordering, but there's no computable link between LOINC panels and the results. And the hierarchy is somewhat flat, with classes.

The overall assessment is that IHC continues to be a significant contributor to LOINC. It does require further development in some areas, clinical LOINC in particular, but it's proven useful in the marketplace and it's actually helped us, again, with interoperability issues. We really endorse the use of both lab and clinical LOINC.

Our general recommendation is for a phased adoption, trying to adopt vocabularies that are well established and critical for communication and population analysis. So we're hoping from this Committee for definitions for patient problems, body locations, medications, lab tests, and then clinical findings.

Thank you.

MR. BLAIR: Thank you, Keith.

Agenda Item: Panel 1 - Terminology Users: Healthcare Providers - Dr. Guay

DR. GUAY: Thank you. Dr. Al Guay, American Dental Association, former clinician. I'm very pleased to be here to address the Committee; thank you for the opportunity. There are two subjects I'm going to discuss today: one, the tooth numbering system, the ISO tooth numbering system; and secondly, the SNODENT diagnostic coding system. I've chosen no audiovisuals. I know a picture is worth a thousand words; I've chosen to use thousands of words, primarily to keep Suzie and the other Committee staff people on their toes.

I'll talk about the coding system first, but before that let me say that as a clinician there are really two requirements that are important to clinicians. One, any kind of system that's used to record information must be able to clearly record the information that's essential. Clinicians hate thumbing through terms that might apply, or terms that more closely apply; clinicians like terms that apply directly to the condition they're trying to record. And secondly, it's critical that the patient record be the sole source of all administrative and clinical information, so that there aren't two or three kinds of records being kept for different purposes. It's critical that the patient record serve both clinical and administrative purposes; that makes the system efficient.

On the tooth numbering system: I've used these for about 30 years. I was hoping some of you would challenge me by saying, you look so young, how could you have been a clinician for 30 years, but I don't see any voices raised in that objection. There are two systems currently used in the United States, the ISO system and the National Universal System, and both are mappable to one another. The ISO system is a little clearer; it deals more clearly with supernumerary teeth, for example. Rather than saying supernumerary to such-and-such an area, the ISO system has a specific designation for that, which makes it much clearer. It's unambiguous; it reports areas of the mouth as well as individual teeth. The terms are not redundant, they're clear, and they clearly distinguish permanent teeth from primary teeth. Once the terminology is seen it is very clear what it's talking about; there's no problem differentiating left from right, upper from lower, permanent from deciduous. The practice management systems in clinicians' offices now have one or both of these systems, or one mapped to the other, so it's very easily used by clinicians. And the data included with this system can very easily populate the data elements required by the HIPAA standard.

I think that the tooth designation system is fairly simple and fairly straightforward, and probably doesn't require much more comment than that. As a clinician I can say it's easy to use, it's been used for a long time, and it pretty much fits the standards that would be acceptable to clinicians.
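The mappability between the two numbering systems described in this testimony can be sketched directly. The following covers the 32 permanent teeth (deciduous teeth, which both systems also designate, are omitted for brevity); it encodes the standard published correspondence between Universal numbers and ISO/FDI two-digit notation:

```python
def universal_to_fdi(n: int) -> int:
    """Convert a Universal number (1-32, permanent teeth) to ISO/FDI
    two-digit notation. Universal 1 is the upper-right third molar;
    FDI notation is quadrant (1-4) * 10 + position from midline (1-8)."""
    if not 1 <= n <= 32:
        raise ValueError("Universal permanent-tooth numbers run 1-32")
    if n <= 8:               # upper right: Universal 1..8  -> FDI 18..11
        return 10 + (9 - n)
    if n <= 16:              # upper left:  Universal 9..16 -> FDI 21..28
        return 20 + (n - 8)
    if n <= 24:              # lower left:  Universal 17..24 -> FDI 38..31
        return 30 + (25 - n)
    return 40 + (n - 24)     # lower right: Universal 25..32 -> FDI 41..48

# The map is one-to-one, so the reverse direction is just the inverse table.
fdi_to_universal = {universal_to_fdi(n): n for n in range(1, 33)}

print(universal_to_fdi(1))   # 18, upper-right third molar
print(universal_to_fdi(8))   # 11, upper-right central incisor
print(fdi_to_universal[48])  # 32
```

Because the mapping is bijective, a practice management system can store either designation internally and display the other, which is what makes the two systems interchangeable in practice.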

The SNODENT diagnostic coding system presents somewhat of a problem for us, because as we've looked around for users of the system, we have to say in all honesty there are no users of the system, and let me explain why we're in that particular dilemma. First of all, you may or may not know that in adjudicating dental claims there has never been a requirement for a diagnostic code; all that payers were looking for were procedure codes. The exception is that if a dental procedure is reported on a medical claim form for reimbursement under a medical insurance policy, then ICD-9 diagnostic codes are used, primarily for oral surgical procedures. As you may or may not know, about 85 percent of all dentists are general dentists, as opposed to the medical configuration where only about 20 percent are general practitioners, so for the general dentist there's been no requirement to do that. Necessity being the mother of invention, there was no necessity, so no diagnostic coding system was invented or came into common use.

So about ten years ago the profession decided it was time to develop a diagnostic coding system. A group of clinicians got together and looked around to see if there was some kind of system that we could either add to, embellish, or accept with modification as an adequate dental diagnostic coding system, and two systems came very prominently to the fore. One was the ICD-9 system, and it was very quickly determined that it was pretty inadequate as a dental diagnostic code, because it was very general and didn't have the specificity that clinicians required, and I'll give you some examples of that. It was also not developed in a hierarchical or logical type of structure; it was pretty much, here's a number and it goes to this particular diagnosis.

The second coding system we looked at was SNOMED, and that was a structure the profession liked, clinicians liked, because it had a logical arrangement that could be built upon, but it was very deficient in dental terminology. So then it was debated whether we should go with a new system of our own or try to modify one or the other, and the determination was made, because of the structure of SNOMED, its hierarchical nature and its ability for infinite expansion, that that was the code set the dental profession would employ in its development.

So a committee of clinicians was formed, and over a number of years it created a diagnostic coding system that we call SNODENT. SNODENT is structured exactly like SNOMED; it's an integral part of SNOMED, integrated into the system, yet it's a standalone system. The SNODENT part of SNOMED is there by license agreement; the profession has obligations for updating and that kind of thing, but it's been integrated into the SNOMED system. We have arrangements whereby we can extract just the dental coding part from SNOMED for those who want to use just the dental diagnostic codes and not be involved in using the entire SNOMED system.

We mapped the SNODENT diagnostic codes to ICD-9, and that was easily done. The reverse is not possible: the infinitely richer SNODENT system can't be mapped backwards from ICD-9, because ICD-9 is a much less rich, much less granular type of coding system. Let me just give you some examples. There's a code for dental caries in ICD-9, and dental caries is, as you know, a prominent disease in dentistry. There's one code, 521.0, in ICD-9, and there are some 40 different caries codes in the SNODENT system, because it goes into much more detail: arrested caries, cemental caries, chronic caries, acute enamel caries, incipient caries, caries associated with hypomineralization, salivary dysfunction caries. So it becomes very clear that the SNODENT system is rich enough that whatever the clinician needs to record in the patient record is there, and it doesn't need interpretation or extrapolation from some other existing code. As a matter of fact there are about 4,500 terms in SNODENT, a strictly diagnostic coding system.
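Why the mapping works in only one direction can be shown in a few lines. The SNODENT-style identifiers below are invented for illustration; 521.0 is the real ICD-9 dental caries code mentioned in the testimony:

```python
# Many fine-grained caries concepts all collapse to one ICD-9 code.
# The left-hand identifiers are hypothetical, not actual SNODENT codes.
snodent_to_icd9 = {
    "D-ARRESTED-CARIES":  "521.0",
    "D-CEMENTAL-CARIES":  "521.0",
    "D-CHRONIC-CARIES":   "521.0",
    "D-INCIPIENT-CARIES": "521.0",
}

def reverse_map(mapping):
    """Invert a many-to-one mapping; each coarse code yields a list
    of candidate fine-grained codes, not a single answer."""
    inverse = {}
    for fine, coarse in mapping.items():
        inverse.setdefault(coarse, []).append(fine)
    return inverse

inverse = reverse_map(snodent_to_icd9)
# Going backwards, one ICD-9 code yields many candidate SNODENT
# concepts, so the original clinical detail cannot be recovered.
print(len(inverse["521.0"]))  # 4
```

This is the general pattern for any mapping from a granular terminology to a coarser one: the forward map is a function, but the inverse is one-to-many, which is exactly the "can't be mapped backwards" situation described here.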

SNODENT shares the same hierarchical structure as SNOMED, and this allows pretty easy data entry and data manipulation. It's important that the coding system be usable for several things besides recording patient information: the administrative tasks that clinicians are required to comply with in their daily practice, such as reporting dental claims, without any kind of translation required. The data can come directly from the patient record to the claim, and that can be done pretty much electronically without having to start a whole new system for adjudication of claims or other areas. Research is important too. If there are going to be studies about caries, for example, just the term caries is insufficient; there needs to be a much more definitive description of what kind of caries we're talking about. Is this new caries in the enamel of a ten year old, or is this recurrent caries under an old filling in a 60 year old? Those are totally different situations, not able to be distinguished in other coding systems; SNODENT allows the distinction, with a very broad and rich terminology related to the specific diagnosis. Both have the same platform that can be used electronically, so it's very clear to clinicians that if they want a system that would enable them to do all the things required, they need a system much richer than the systems that were available prior to the development of SNODENT in SNOMED.

The cooperation between the SNOMED organization and the profession has been spectacular; that's the only word I can use. We have had a long, sometimes rocky relationship, primarily in trying to establish the limitations of the two systems. The SNOMED people said to us one time, well, you can use 8,000 SNOMED terms to develop your code, and I think we used 2,000. So the relationship has been fine over the years, and I expect that to continue.

And I’d be happy to answer any questions.

DR. COHN: I want to thank all the speakers. Do we have any questions? Actually, maybe I'll start with one for our last speaker. Thank you for coming and talking to us about the dental issues. I had a couple of questions for you; this is probably more clarification than anything. I notice that you're obviously with the American Dental Association, but you also list yourself as a private practitioner, though as you describe it, it's probably been a number of years since you were one. Question number one is, do your views represent your own personal views, or are they representative of the American Dental Association and dentistry as a whole? And number two, maybe you can comment and clarify for me: you talked a lot about the value of SNODENT, yet you started by indicating that it was not being used. Can you explain to me a little more why it isn't being used at all, if indeed it is as good as you describe? Start with the first question.

DR. GUAY: Ok, first question: are my views my own or the ADA's? Fortunately, they're both. Being employed at the ADA I'm very well aware of their views and helped determine some of them, but they happen to be my personal views also, so you can take that as being Al Guay's views and also the views of the ADA.

On the second part, why there are no users of SNODENT: I would say the question gets back to why there are no users of any diagnostic coding system, because there has not been an administrative need for a diagnostic coding system in the adjudication of claims. Clinicians are very practical people; they do only the things that are required administratively. Those who file medical claims use the ICD-9 system, but they're a very small minority of dental practitioners. Since there's no need on the dental claim form (prior to just a couple of years back there was not even a place on the claim form for a diagnostic code), there was no need to report a diagnosis, and so no coding system developed.

DR. STEINDEL: Simon, I have questions for all three speakers; I'll start with Dr. Guay. With SNODENT, is it included in SNOMED under the usual license, and is the latest version available in SNOMED?

DR. GUAY: SNODENT is a component of SNOMED; it's fully integrated. I'm not involved in the licensing arrangements of the SNOMED organization, so I don't know what their deal is, but my guess, and I believe I'm correct, is that if you license the SNOMED system you've already licensed the SNODENT system. Now, if you want to take the SNODENT part out on its own, the ADA does not have a licensing fee for the use of SNODENT. The only cost involved, speaking for clinicians, would be if you want to purchase the book that gives you the codes; then there's a fee for the publication. That hasn't been established yet, but taking our procedure coding system as an example, we have I think a $30 cost for purchasing the manual. For the procedure codes, which every dentist uses, there are about 135,000 dental offices in the United States, and I think our sales of the CDT manual are about 16,000, so it's not essential; you don't have to purchase anything to use the coding system.

DR. STEINDEL: Are tooth numbers included in SNODENT in the typical fashion?

DR. GUAY: Yes, they are.

DR. STEINDEL: Thank you, those are my two clarification questions. My clarification question for Dr. Larsen mostly concerns NDDF. You indicated in your responses to the Committee a three in the area of cost, and NDDF is not a free system. Can you indicate to us how much it might cost you, and how universally NDDF is used? Do you have any knowledge of that?

DR. LARSEN: On the cost of the system, we are somewhat of a secondary user. The way they work is that they normally sell to vendors of systems, so there's a cost that goes to the vendor and a secondary cost that goes to the users of that system. Because we jointly developed with 3M Health Information Systems, we've been a secondary user under the licensing; we're now changing that to be a primary user, and I don't have cost information. It's indicated as a three because there is a cost associated with it.

DR. STEINDEL: Thank you. My next questions are for Dr. Oliver; this also concerns drug terminology. You gave a list of drug terminologies you considered before you selected NDF-RT. Were there any specific reasons why you selected NDF-RT over the others? And then my corollary question, to you and to Dr. Brown: do you have any idea of the extent of use of NDF-RT, since it's a system in development?

DR. OLIVER: One of the primary reasons we chose NDF-RT was cost: some of the proprietary systems were expecting us to purchase a lot of content that we did not need, because we were looking specifically for a terminology, and certainly NDC was inadequate. NDF-RT is free, and we liked the classification structure, the low cost, and the lack of proprietary constraints.

DR. BROWN: Regarding use, it really is hard to say. NDF-RT is built upon a core of RxNorm; we've collaborated with NLM to do that, and RxNorm is put out there with the UMLS without much in the way of restrictions, so it's really hard to know. Clearly it's not a whole lot right now, especially for the NDF-RT-only components, the hierarchies and the like. We've given it to anyone who's asked for it, as long as they're willing to state up front that they understand it's a work in progress. We've had a couple of requests that we have filled at this time, but it is, as you say, a work in progress.

DR. STEINDEL: What is the extent of use in the VA?

DR. BROWN: Right now it's in development.

DR. COHN: Jeff, I think you have a follow-up --

DR. BROWN: We have something in place, NDF, which is our baseline material; that's in place at 128 implementations and thousands of sites, and NDF-RT has to be good and ready to go before we replace something that's working for us.

DR. COHN: Steve, do you mind if Jeff has a follow-on on the specifics?

MR. BLAIR: This is for Dr. Oliver and Dr. Brown. Help me understand a little bit. I do understand that NDDF is produced by First Databank and is proprietary, and I imagine there's some kind of cost associated with that, but I believe Multum has indicated its drug knowledge base would be available in the public domain. So Dr. Brown, you felt there was a need to develop NDF-RT despite the availability of Multum; could you help us understand the need you were addressing that Multum, for example, wouldn't address?

DR. BROWN: Well, I don't want to throw any rocks at Multum. What we were using was our NDF, which, from my understanding of what Multum is or at least was giving away, is largely equivalent. So what we were trying to do by formalizing NDF was to improve our own internal practices regarding vocabulary, and just like NDF, it's government-produced terminology and intellectual content, so we're happy to give it to whoever would like to have it. I really haven't spent a lot of time in all of the knowledge bases of Multum, though I have browsed some of what I was able to download a couple of years ago, and it seems largely the equivalent of our national drug file types of information, where there's a product name and some links to some NDCs, but I'd be speaking out of school to say that I know it in great detail.

DR. OLIVER: I believe that from our point of view a lot of it had to do with the classification structure. I'd have to review it more carefully to know where it stands now, but we --

MR. BLAIR: I think you indicated in your testimony that your needs were not as broad as --

DR. OLIVER: Part of the issue is that we haven't fully utilized all the features we would like to utilize. For search, we like the conditions-treated feature, for instance, where you can search on a disease name and get a list of drugs. We do feel that some of the features that have been added in NDF-RT, in terms of this information, are the sort of thing that can potentially help us provide search capabilities.

DR. BROWN: If I may add one more point: we started with NDF, and what we were interested in was exploring the utility of formalizing terminologies in general. Some of the folks in this room have done work that would lead us to believe that we could do a better job by using off-the-shelf tools, so I guess that's another reason why we wouldn't just take something; we were using it as an exploratory device to improve our own internal operations as well.

DR. OLIVER: And I'll second that. We liked the formality, the formal approach taken by the NDF-RT developers. And one more thing: we felt that if we were going to use the Multum version, we were going to have to go with Vantage Rx, their second, proprietary product, for additional content that we might want.

DR. COHN: Walter, Clem, and then we have some from the audience.

DR. SUJANSKY: All three of those were excellent presentations; they raised more questions than I have time to ask right now, so I hope you won't mind if I follow up with you afterwards with some additional questions. Right now, let me ask Dr. Larsen first: what level of abstraction is most useful to you for the decision support and interoperability goals you have? As we know, NDDF has many levels of abstraction, from simply the drug name down through the routed drug, all the way down to the NDC-coded drug product level. What is the best level of abstraction for the clinical decision support and interoperability you need from a controlled terminology?

DR. LARSEN: We really drive this by the ordering process, and so, as the one slide I showed indicated, it really depends on the ordering setting and what type of drug it is. There's the Med ID, which is an identifier for dispensable drugs, and if something is normally dispensed as a unit dose (it may not be packaged that way, but it's dispensed that way), that's the most useful concept for that activity. If it's dispensed as a package, then the Med ID works on an inpatient basis, because you usually do not want the clinician to specify how big a tube of ointment to dispense; that really is a pharmacist decision. On an outpatient basis you do want that specified.

DR. SUJANSKY: Actually, if we could ignore the ordering for the time being and just assume that somehow there's an ability to order using this terminology or a different terminology, but in the medical record you're using NDDF to support decision support and interoperability alone, could you comment on the level of abstraction that's best for that?

DR. LARSEN: The level of abstraction really doesn't matter for decision support, because of the relationships between the different identifiers. By using different identifiers to support the ordering process, but having all the relationships between them, it doesn't matter which level you use; you can get the information and support the decision support.

DR. SUJANSKY: I guess what I'm getting at is what the minimum detail is that you need, because we're assessing a number of different terminologies here, not just NDDF. So my question is about the appropriate level in general, not specifically for NDDF. Sorry to really drill down on this.

DR. LARSEN: No, that's fine. For most decision support the routed generic is sufficient, because I can differentiate between orally and parenterally administered drugs, and I know what the entity is. When you get into dosing decision support, even there you're usually looking at the ordered dose, not the strength, and so that seems to be the most useful level.
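Why the routed-generic level suffices for most interaction checking can be sketched in a few lines. The identifiers and the interaction table below are hypothetical toy data, not NDDF content:

```python
# Map from a dispensable-drug identifier up to its routed generic,
# i.e. (ingredient, route). Identifiers here are invented.
routed_generic = {
    "rx1001": ("warfarin", "oral"),
    "rx2002": ("aspirin", "oral"),
    "rx3003": ("tobramycin", "ophthalmic"),
}

# Toy interaction knowledge base keyed by ingredient pairs only:
# strength and package are irrelevant to the check.
interactions = {frozenset({"warfarin", "aspirin"})}

def check_interactions(active_orders):
    """Return the interacting ingredient pairs among the active orders."""
    ingredients = {routed_generic[order][0] for order in active_orders}
    return [pair for pair in interactions if pair <= ingredients]

alerts = check_interactions(["rx1001", "rx2002", "rx3003"])
print(len(alerts))  # 1: warfarin + aspirin
```

Because every dispensable or packaged drug rolls up to a routed generic, the check works no matter which level the order was written at, which is the point made above about the relationships between identifiers.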

DR. SUJANSKY: If I could just take up a little more time with one more question for both you and Dr. Oliver, and your answers will probably be different: what timeliness of updates do you require for your applications from the terminology developer? Maybe Dr. Oliver can go first.

DR. OLIVER: Our current needs are not very demanding in terms of timely updates. We're using the version that we started with, a year ago perhaps, and that's largely due to other demands on the PharmGKB project, which take higher precedence than this particular task.

DR. SUJANSKY: Down the road, as this is rolled out and perhaps ultimately integrated as a resource in clinical care, is there a time lag between the time a medication is introduced on the market and the time information is generated for the pharmacogenomics knowledge base, such that it's not that important to get --

DR. OLIVER: No, as soon as this gets into clinical practice things have to be up to date very quickly, just like any other task in clinical practice.

DR. SUJANSKY: So ultimately it will have to be available in the terminology as soon as it’s available on the market. Would you say the same is true in your environment?

DR. LARSEN: I would say the same is true. As I indicated we’re on a monthly update now, we really feel the need to go to a weekly update just for the practice.

DR. SUJANSKY: Thank you.

DR. COHN: Ok, Clem?

DR. MCDONALD: I also have a number of questions and comments. Let's first clarify: there are actually at least three products that are sort of entire knowledge bases, First Databank's, Medi-Span, and Multum, and I think most of them have a fairly elaborate set of data elements, which includes things like whether it's a generic, Medicare status, lots and lots of stuff you need to run your business. Regarding Multum being free, I'd like to have some clarification on that, because I'm not crystal clear that's still the case for the core. If it were I'd like --

DR. LAU: This is Lee Min Lau from 3M. Just yesterday I was talking to a company in Salt Lake that told me about Multum. As of this year or last year, the content is still free, but the knowledge, the relationships, are no longer free, and they're paying to the tune of $20,000 a year. So nothing is free nowadays.

DR. COHN: I think nothing’s free in this world, isn’t that the lesson?

DR. MCDONALD: I'd still like to clarify, because we're focused mostly on codes and just a little bit of core content; we're not trying to standardize the world in terms of the commercially interesting extra added value that can be added. So I think it would still be worth clarifying, maybe as the Committee works through it, what's actually free, because it's an added value when it's free, and if there's a whole lot of structure on top but the core is still free, that would still be worth noting, and I don't know what the case is.

DR. COHN: Well, Clem, I would even extend from that. In my view of the world very little is actually free; the question is what the business model for supporting it is. Some things are free to the consumer, some things are paid for by tax dollars; it comes down to a fundamental business model of how things get updated and maintained, which I think is really your question.

DR. MCDONALD: No, it isn't. My question is whether it's free to the user who wants it now, not that it doesn't cost something; life is hard, we all know that. But does the user have to pay a fee to use it? That's an important dimension in a standard, at least something that should be weighted. If the Multum core content is free to the user who wants to use it, I think that should be noted and given points, and I don't think we know for sure what the case is.

DR. COHN: Ok, and Clem, what I was really commenting on is that I agree with what you're saying. I think the other issue is obviously a sustainable business model, to make sure that it will continue to be maintained, updated, and usable; that's the other piece of it.

DR. MCDONALD: Let me just finish my questions; I just wanted to bring that up. This is related to the drug ordering question that Walter asked and you answered, and I think you answered it the way I think about it, and I'm not sure any of the coding systems really give it to you that way: the physician has to say roughly that it's oral ampicillin and that he wants to give a certain amount of it. He doesn't have to say the pill size to get it done; that's sort of the minimum necessary to say, and none of them quite have that, partly because it's a complicated beast. But I'd like to keep that alive. It would be nice if you could say, I want ampicillin, oral, and if you've got pills or caplets or tablets, I don't care that much; I don't know what's going to be in the pharmacy, 250, 500, I don't care. That's not quite supported, I think, in many of the coding systems, but that's because it's hard.

DR. LARSEN: Well, actually in the NDDF there is the concept of, again, the routed drug, so you can use that as an ordering concept and say ampicillin oral, and then just collect an ordered dose.

DR. MCDONALD: Well, I’d cheer that. I applaud that, I’m glad to hear that then, we should see more of that.

And then I have a question about the ADA. You said SNODENT is not used because it's not required in billing. Would you recommend that it be required in billing, that dentists always use it, so that it would be used?

DR. GUAY: I didn't get this old by being that dumb. Administrative requirements are pretty much the province of the administrators. If insurance payers think there's a need for a diagnostic code to fairly, efficiently, and quickly administer dental claims, then they'll ask for them, and if it's not required for those purposes then I don't think there would be a need for them, as has been shown in history. Fortunately, or unfortunately, depending on how you look at it, there are other uses for diagnostic codes besides the adjudication of claims, which are probably in the long term more important than claims adjudication, and I think that's where the main use of a diagnostic coding system will be.

DR. MCDONALD: I’m still just confused by the fact you said no one uses it but there’s a good use for it, so could you be more explicit?

DR. GUAY: I think that before the days of electronics it was cumbersome; it would be very difficult to keep track of what the outcomes were. As you know, with the increased costs of the health care system there was a need to look at outcomes on a massive, population basis rather than an individual-patient basis, and that's a relatively new phenomenon, so that's a potential use that I think has more value in the long term. As a clinician in the past, I would look at my cases and try to see what worked and what didn't. It was difficult to keep track; I tried doing it on cards, I had diagnoses in files and I put in the names, and it was very cumbersome, so I think I relied on my memory more than records to do that clinically. Now with electronics it's much easier to do this, so that's I think the real purpose.

DR. MCDONALD: The second question is again back to cost to the user. If a dentist wants to use SNODENT does he have to pay any money at all?

DR. GUAY: No.

DR. MCDONALD: And if a non-dentist wants to use it does he or she have to pay any money to use it?

DR. GUAY: There’s a fee for users that use the system to generate income; for example, a practice management software developer, I think, going by the procedure coding system, now pays something like $10 per user, and if you think about that, $10 on what is usually a one or two year deal, that’s a pretty small cost. For insurance carriers who use the data from the system to develop customary fee ranges, geographic variation, that kind of thing, there’s a fee also. But for the clinician working every day there’s no cost; the only cost would be if the clinician decides they want to get a handy dandy reference book, and there’d be a cost for the book itself.

DR. COHN: One more question and then we will take a break.

DR. SUJANSKY: This is for Dr. Guay. I wanted to make sure I understood what you said about the terminologies in dental practice management systems. Did I understand correctly that the ISO tooth designation system is integrated into all dental practice management systems? Or is it both ISO and SNODENT, or neither necessarily integrated?

DR. GUAY: I can’t answer the question because I don’t know the characteristics of each of the practice management systems, other than to say that they’re mappable to each other, so there would be no barrier to taking the National Universal System and mapping it to ISO or vice versa. Whether that’s done in every system, I can’t answer.

DR. SUJANSKY: I’m sorry, taking which system and mapping it?

DR. GUAY: Either.

DR. SUJANSKY: Taking one and mapping it to the other.

DR. GUAY: Yes, either.

DR. SUJANSKY: To your knowledge one or the other is in practice management systems or --

DR. GUAY: I can’t answer that, I don’t know the characteristics of the practice management systems. All I know is that were this to be named as the system, it can be used either by itself or easily mapped into the Universal National System; the differences are not very great.

DR. SUJANSKY: Ok, just to abuse Jeff’s generosity, I’m going to ask one last question, again, of Dr. Guay. Would you say that between SNODENT and SNOMED combined you have enough content there for a dental electronic medical record system?

DR. GUAY: Absolutely.

DR. SUJANSKY: Ok, thank you.

MR. BLAIR: Well, let me do this. I’m going to modify our schedule just slightly. We just finished our provider panel with three testifiers; our next panel is a vendor panel with four testifiers, so we’re breaking about six or seven minutes early. If you could keep your break to the regular 15 minutes and be back on time, that will give us a little extra time, and we’ll need it for the next panel because we’ll have four testifiers. Thank you.

[Brief break.]

MR. BLAIR: Ok, may I ask our second panel, the vendors, to testify. Maybe you could each just introduce yourself briefly first, and then we’ll go from left to right.

DR. FAUGHNAN: John Faughnan, and I’ll be speaking for McKesson Information Solutions.

DR. LEVY: I am Dr. Brian Levy with Health Language, I’m also formerly the medical director at Multum and in the evenings during my spare time I still try to fit in a little private practice of medicine.

DR. CHUANG: My name is Ian Chuang, I’m the physician executive from Cerner Corporation with responsibility for the knowledge framework and nomenclature.

DR. LAU: Hi, I’m Lee Min Lau from 3M, I work on the 3M Health Care Data Dictionary.

MR. BLAIR: Ok, normally we just go from left to right unless you have a preference for a different sequence amongst yourselves. And let me remind you that while we can hear you in the room without much difficulty, in order for the people on the internet to hear you, you have to get close to the mic.

Agenda Item: Panel 2 - Terminology Users: Healthcare Vendors - Dr. Levy

DR. LEVY: Once again my name is Dr. Brian Levy with Health Language, and I want to thank the Committee for the opportunity to speak here this morning. I’m going to provide some perspective on our use of terminology within a language engine or terminology service. We are a middleware provider of terminologies to other users, such as those who are also on this panel with me, and perhaps here and there I’ll also throw in some of my perspectives as a potential and hopeful end user of these terminologies at the point of care as well.

I’m going to talk a little bit about SNOMED CT and its suitability for use within a language engine. I also want to raise some of the issues that we are facing now, and will begin to face, with maintaining the currency of and updating these terminologies. I also want to discuss some of the misconceptions that are out there about mappings among the terminologies, and finally just share a few insights about the coexistence of multiple terminologies within this space.

First, just to talk a little bit about SNOMED CT within a language engine. What I mean by a language engine, briefly, is a software tool or product that allows the maintenance, storage, and updating of multiple terminologies within a single mechanism, so that we can facilitate access to more than one terminology using the same kinds of mechanisms. We find at Health Language, from our experience, that SNOMED has served as, in a sense, an ideal core for a language engine. SNOMED follows the basic principles of concept based terminologies; the concepts and hierarchies within SNOMED facilitate their use within a language engine, and in our experience they often serve as a core to which other terminologies, be they other standards or proprietary terminologies that the vendors create, will then map.
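The "language engine" idea described here can be sketched in a few lines of illustrative Python: one concept based core holds the terms, and other code systems are cross mapped into it. All class names, codes, and terms below are hypothetical placeholders, not an actual Health Language API.

```python
# Minimal sketch of a terminology "language engine": a concept-based core
# (SNOMED-style concept IDs) plus cross-maps from other code systems.
class LanguageEngine:
    def __init__(self):
        self.concepts = {}      # concept_id -> preferred term
        self.terms = {}         # synonym text (lowercased) -> concept_id
        self.cross_maps = {}    # (scheme, code) -> concept_id

    def add_concept(self, concept_id, preferred_term, synonyms=()):
        self.concepts[concept_id] = preferred_term
        self.terms[preferred_term.lower()] = concept_id
        for s in synonyms:
            self.terms[s.lower()] = concept_id

    def map_code(self, scheme, code, concept_id):
        self.cross_maps[(scheme, code)] = concept_id

    def lookup(self, text):
        """Find the concept behind any synonym, case-insensitively."""
        return self.terms.get(text.lower())

    def resolve(self, scheme, code):
        """Resolve a foreign code (e.g. an ICD code) to the core concept."""
        return self.cross_maps.get((scheme, code))

engine = LanguageEngine()
engine.add_concept("22298006", "Myocardial infarction", synonyms=["heart attack"])
engine.map_code("ICD-9-CM", "410.9", "22298006")
```

The point of the sketch is only the shape: synonyms and external codes all converge on a single concept identifier, which is what lets one mechanism serve many terminologies.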

A couple of points about SNOMED within the language engine. The concept base also provides the framework onto which additional terms can be added, so that what we find, and I’ll present some more slides on this shortly, is that SNOMED, and in fact any terminology used at the point of care, will require additional local modifications, and the conceptual model of SNOMED facilitates this, along with the hierarchies and relations as well. One point that we make a lot to our users of the language engine is that the SNOMED hierarchies are not designed to be used at the point of care; as a doctor I’m not going to be browsing down the entire SNOMED tree. The SNOMED hierarchies and relations, however, do facilitate the creation of alternate views, views that may be more appropriate to specific practitioners or specific users of components of the SNOMED terminology.

Certainly the object permanence, or concept permanence, within SNOMED makes it easier to store and maintain. We have found from our personal experience, in extensively analyzing the SNOMED CT updates, that they do a very nice job of following their own terminology rules, so that the integrity of the updates is quite excellent: they don’t delete concepts, they don’t change the text of various terms, and in general they follow their basic terminology principles.
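The update integrity rules just described (concepts are never deleted, term text never changes; concepts may only be added or retired) lend themselves to an automated check. A hypothetical sketch, with each release reduced to a dictionary for illustration:

```python
# Sketch of validating a terminology release against concept-permanence
# rules: no concept deleted, no term text changed between releases.
def validate_update(old_release, new_release):
    """Each release: dict of concept_id -> (term_text, active_flag).
    Returns a list of integrity violations (empty means the update is clean)."""
    errors = []
    for cid, (old_term, _active) in old_release.items():
        if cid not in new_release:
            errors.append(f"concept {cid} was deleted")
        elif new_release[cid][0] != old_term:
            errors.append(f"term text of {cid} changed")
    return errors

v1 = {"C1": ("Pneumonia", True)}
v2 = {"C1": ("Pneumonia", True), "C2": ("SARS", True)}   # addition: allowed
bad = {"C1": ("Pneumonitis", True)}                      # edited text: flagged
```

Adding concepts (as in `v2`) passes; rewording or dropping an existing concept is flagged, which is exactly the discipline the testimony credits SNOMED CT with keeping.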

Speaking of updates, though, what we find is that one of the complexities of using terminologies over time, and in fact what I believe to be the most complex aspect of using terminologies, is not going to be the initial load but the maintenance and upkeep of terminologies over time. This maintenance and upkeep will require not only software tools to process them but considerable planning. And as I mentioned before, we’re expecting, and we’re actually seeing this now, that users of SNOMED CT are just beginning to extend the model. Despite the broad coverage of this terminology, no terminology is ever going to have every single concept or every single term that every doctor wants to use, so we expect that users will be locally modifying them, that is, adding additional terms, and potentially adding concepts before the updates are actually released. A great current example is the SARS outbreak: we now want to be able to document SARS within our electronic medical records, but that concept may not be in the SNOMED update for another three to six months, and we need mechanisms in place to extend the model.

And yet, at the same time, as we begin to extend the model it is very important that users continue to update back to the standard, so that we don’t create versions of SNOMED that semantically drift away from the standard core. This process is going to require considerable what we call conflict resolution, that is, the ability to locally resolve what I have added to the terminology back to what the standard might then add three to six months later.
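The "conflict resolution" step can be sketched as follows: a locally minted concept (such as a SARS code created before the official release) is later reconciled against the new standard release so local data does not drift from the core. The matching-by-term approach, the local namespace, and the codes are all illustrative assumptions, not an actual reconciliation algorithm from the testimony.

```python
# Sketch of reconciling local extension concepts with a later standard
# release, so locally added codes can be retired in favor of standard ones.
LOCAL_NS = "LOCAL-"

def reconcile(local_concepts, standard_terms):
    """local_concepts: {local_id: term}.
    standard_terms: {lowercased term: standard concept_id} from the new release.
    Returns {local_id: standard_id} for local concepts the release now covers."""
    remap = {}
    for local_id, term in local_concepts.items():
        std_id = standard_terms.get(term.lower())
        if std_id is not None:
            # Retire the local code and point stored records at the standard one.
            remap[local_id] = std_id
    return remap

local = {LOCAL_NS + "0001": "Severe acute respiratory syndrome"}
release = {"severe acute respiratory syndrome": "398447004"}  # hypothetical mapping
```

In practice matching would need more than exact term text (synonyms, manual review), but the shape of the bookkeeping is the same: local IDs get a forwarding address into the standard core.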

If we take a step back and say, ok, now we’re using SNOMED, what about mapping to other terminologies, how does SNOMED play? One of the points that we have found to be quite true is that there is no such thing as one kind of mapping among terminologies. There’s often a misconception out there that I’ll simply map something like SNOMED to ICD or to CPT and then, fine, I can now interchange and move back and forth. Unfortunately, as many in this audience know, that doesn’t quite hold water, in part because these terminologies are basically apples and oranges; they’re all intended for different purposes. ICD is a classification; SNOMED is a description of medical truth. And so we find that there can be many kinds of mappings among these terminologies, and when we think about mappings it is important to define what we mean by the use case, in other words, what we are going to use this mapping for. Am I using a mapping between SNOMED and ICD for billing purposes, so do I want to get the billable ICD code? Am I using it for conceptual purposes, so do I want to get a roughly equivalent conceptual map? And likewise, I heard briefly earlier some of the notions of what we call reverse mapping: not only can there be mappings from, let’s say, SNOMED to ICD, but there are also mappings from ICD back to SNOMED that satisfy a different set of use cases, and in many cases the reverse map may not be the same as what we see in the forward map.
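The use-case-specific mapping point can be made concrete with a small sketch: the same source concept translates differently depending on the declared use case, and the reverse direction is a separate table rather than an automatic inversion. The codes below are illustrative placeholders.

```python
# Sketch of use-case-specific terminology maps: the map is keyed by both
# direction and use case, so "map SNOMED to ICD" is not one single thing.
maps = {
    # (direction, use_case) -> {source_code: target_code}
    ("snomed-to-icd", "billing"):    {"22298006": "410.91"},  # billable code
    ("snomed-to-icd", "conceptual"): {"22298006": "410"},     # rough equivalent
    ("icd-to-snomed", "conceptual"): {"410.91": "22298006"},  # maintained separately
}

def translate(direction, use_case, code):
    """Translate a code for a stated purpose; None if no map is defined."""
    return maps.get((direction, use_case), {}).get(code)
```

The design choice worth noticing is that the use case is an explicit key: a caller must say whether it wants a billable target or a conceptual one, which is exactly the discipline the testimony argues for.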

So although I’m throwing some complexity on this whole mapping issue, I do want to acknowledge that the SNOMED group itself has formed a clinical terminology working group, of which we are a part, to help address the notion of standardizing these mappings, in the sense of specifically defining what a given map can and should actually be used for, some of the rules that we can commonly use to create the maps, and some of the common ways that the industry might decide to validate the maps once they have been created.

And finally I just want to share some of my perspectives on the coexistence of multiple terminologies. I think we all acknowledge that there’s not going to be one terminology that satisfies all possible use cases, and I think we can all understand that the number of terminologies that users, and especially the medical record users here, are going to be facing is actually going to continue to increase over time. I find it useful to look at terminologies in this fashion: along the X axis, to chart terminologies according to their appropriateness for use at the point of care, so something like LOINC is not intended to be used at the point of care but more for what I call indirect care; and along the Y axis, to look at terminologies according to the broadness of their actual coverage, so a terminology like SNOMED has very broad coverage, versus a medical device terminology, which is very, very narrow in focus.

This slide to me points out a couple of major points. First, there is no terminology straight out of the box that’s going to be directly applicable to the point of care. After all, to some extent we already have the perfect terminology, the paper chart, which I get to write in every day; I can write what I want and hopefully other people can read my handwriting, but of course that’s not the goal we are after here. So for example, even if we look at SNOMED CT, a cardiologist at the point of care is never going to want to browse the entire SNOMED CT structure. As we see these terminologies being used it will be important to begin to subset them, to break them up into pieces, and that’s not to say that the standards bodies will need to actually do this. I see that the vendors themselves will begin to break up the pieces of SNOMED, for example, into different hierarchical structures and different content structures, and expose those pieces to the correct end user, whether it be a specific doctor or a specific hospital, etc.
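The subsetting idea, exposing only a relevant slice of the hierarchy to a given end user rather than the whole tree, amounts to a descendant walk from a chosen root. A hypothetical sketch with made-up concept names:

```python
# Sketch of deriving a point-of-care subset from hierarchy relations:
# take a root (e.g. a cardiology branch) and collect all its descendants.
from collections import deque

def subset(children, root):
    """children: {concept: [child, ...]}. Returns root plus all descendants."""
    seen, queue = {root}, deque([root])
    while queue:
        for child in children.get(queue.popleft(), []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

hierarchy = {
    "disorder": ["cardiac disorder", "renal disorder"],
    "cardiac disorder": ["myocardial infarction", "heart failure"],
}
```

A cardiology pick list would then be built from `subset(hierarchy, "cardiac disorder")`, leaving the renal branch out of the cardiologist's view entirely.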

Also, and finally, in order to bring SNOMED and the other terminologies to this point of use, it will require a considerable software and development effort.

Once again, I want to thank you for your time today.

Agenda Item: Panel 2 - Terminology Users: Healthcare Vendors - Dr. Chuang

DR. CHUANG: On behalf of Cerner Corporation I would also like to express a note of appreciation for being invited to share our opinions and thoughts regarding terminology. What I’d like to do in this brief ten minute period is identify and describe the issues that we have in providing terminologies to our client end users, and how we struggle to come up with strategies to make terminology work within electronic health record systems.

MR. BLAIR: Ian, let me give you a little bit more flexibility, because I think that you have either two or three terminologies you’re going to be describing, is that right?

DR. CHUANG: There are two that are actually on this list, yes.

MR. BLAIR: Ok, feel free to take 15 minutes, and each of the testifiers, if you have one terminology, ten minutes, two, 15, and three, 20. And we should have time for questions as well.

DR. CHUANG: Thank you. As a software vendor, content is very much a key aspect of our design and an issue regarding usability. Nomenclature content has a significant impact on how clinician end users perceive, utilize, and ultimately derive value from electronic health record systems like ours. So the terminologies that we support or provide are factored into our design requirements related to usability. Ultimately we have to make the terminology content work within an information system structure, so things like subsets, synonyms, preference settings, role based preferences, all have to be accommodated just around handling the preferences for terminology.

There are certain issues to consider regarding standard reference terminologies. Some of these are external issues that may really extend beyond informatics and the desire to codify and structure knowledge. It’s more than data interoperability, comparability, aggregation ability, and quality; all of these things get considered and have to be factored in. One challenge our clients face, and we have to help them address, is that there’s no single reference terminology that’s completely adequate on its own. Some are stronger in certain areas in terms of breadth and depth, and then there are issues regarding gaps and overlaps.

Among the external influences, one of the key ones is endorsement. Endorsements help because they set a momentum and a direction that influences our clients, and they help give us a sense of direction. The market acceptance and demand for certain terminologies is wide and varied, and there are many factors that impact that. A good example: the pathologists, they’d still really like the old versions of SNOMED, so we have to struggle and help them migrate forward to SNOMED CT.

Providers in general are willing to accept and adopt standards; however, they perceive uncertainty and a lack of clear standards presently, so they’re resistant or hesitant to move forward. And invariably there are these external factors that lead to the continued usage of code sets as a substitute for clinically based terminology. Some of this is just the desire of a provider to do the bare minimum necessary to complete his or her task and meet the minimum requirements around data collection and information capture. So there are issues regarding organizations, policies, and procedures; quality assurance and accreditation bodies make demands for certain data via certain codes; payers make demands for information codified using certain code sets. So clinicians focus on just doing their bread and butter patient care work and just want to adopt bare minimum data capture to meet data requirements.

Each additional terminology that gets adopted and accepted creates an algorithmic challenge in terms of maintenance. As Brian mentioned, invariably every client extends and localizes. From a vendor perspective, across the breadth of our client base, you can imagine the permutations and combinations of terminologies that need to be supported, and it’s a significant overhead that we have to factor in.

SNOMED CT is our core clinical reference terminology. It’s gaining wider acceptance, not just within the U.S. but globally, and we are a global company, so that helps us a great deal. We find it’s got excellent breadth and depth for certain core domains, around diseases, findings, body structures, and procedures especially; obviously it’s very broad in all the other areas as well, but a lot of our clients’ priority is to be able to capture and document these areas very well.

Now despite the breadth and depth there’s still the challenge that there are always things missing. If you’re trying to completely structure documentation for a section like the HPI, the History of Present Illness, you really need the whole English dictionary codified, and that’s not realistic. But if I want to really structure and codify the fact that this person had chest pain while shoveling a driveway after three hours, there are those kinds of concepts that are non-clinical, they’re just part of our English vocabulary. And obviously cost up to this point is still a barrier to adoption.

LOINC is excellent and is our terminology for laboratory result names. Once again, it’s gaining increasing acceptance and it has standards recognition. It is a very narrow scope of focus, and one of our challenges is making it usable in an electronic health record system spanning the enterprise, to have meaningful descriptions at the various venues of care the way the clinicians or providers would like, so there’s a lot of back end work that we have to do on top of the core LOINC content.

And intermediary expressions: LOINC does a very good job of getting the test result name with a high degree of specificity; however, at various points in the care process those kinds of concepts need to be declared or expressed in a less specified, more general state, and those are things that we’ve had to extend.
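The "intermediary expression" problem described here is essentially a roll-up: a fully specified LOINC result name sometimes needs to be presented as a broader local concept at other points of care. A hypothetical sketch; the roll-up table is an imagined local extension, not part of LOINC itself.

```python
# Sketch of rolling a fully specified LOINC result name up to a broader,
# locally defined display concept for use at other venues of care.
ROLLUP = {
    # LOINC code -> broader local concept (local names are illustrative)
    "2951-2": "serum sodium",     # Sodium [Moles/volume] in Serum or Plasma
    "2823-3": "serum potassium",  # Potassium [Moles/volume] in Serum or Plasma
}

def roll_up(loinc_code):
    """Return the broader local concept, or the code itself if no roll-up exists."""
    return ROLLUP.get(loinc_code, loinc_code)
```

The fallback (returning the code unchanged) reflects the testimony's point that this back end work sits on top of, and never replaces, the core LOINC content.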

Multum, although not on the list, I just have to provide the disclaimer that Multum is a subsidiary of Cerner Corporation. Up until now Multum has been packaged with our CPOE solutions, and although the data model supports alternative drug vocabularies, Multum is the one we tend to default to. And we need it because Multum can enable the expression of medications at the point of ordering or prescription, but also on the back end in terms of what a pharmacist needs to know and understand from a dispensing perspective. So for CPOE and decision support we have to connect that chain of handoffs and understand what’s being ordered and what’s being dispensed.

Now I’m going to spend a very brief moment on two administrative code sets, only to identify the challenges that we have, the realities that our clients are facing, and what they have done, and what we have helped them to do, to provide a clinical description based on administrative code sets. There are regulatory requirements that make clinicians deal with CPT-4 and ICD-9 as part of their administrative workflow, but that administrative workflow intersects with clinical workflow, so this is one area where we’ve got to reconcile and accept the fact that there are these two diverging needs, yet from a data capture perspective they have to somehow interrelate.

CPT does a good job for our diagnostic imaging concepts, if only because doctors tend not to order or document things that they don’t get paid for, so we’ve got a nice one to one correlation. Granted, the descriptions are not intuitive to clinicians, and that’s where our data and terminology model supports the ability to accommodate synonyms and localizations, so they can describe it the way they want yet use an administrative code set to support the documentation. Granted, it’s not perfect, there’s no smart hierarchical structure, yet it kills two birds with one stone: since they’ve got that code, their billing system is happy, so we’ve got interoperability, but we’ve compromised in terms of data validity and reliability. There’s a price associated with this, but when reimbursement is tied to the use of this terminology it ends up not being a point of contention.

Similarly, ICD-9 has been used as a proxy for diagnostic information, once again because of the need to send diagnostic information with billing submissions; they’ve just ended up using it as a proxy. Once again, the descriptions are not intuitive, they’re not clinically useful a lot of the time, and the clinicians recognize that, so they use synonyms and recognize that there are just issues related to what is really being represented from a knowledge perspective. But without standards the clinicians are going to revert back to bare minimum data collection.

Now regarding standard clinical vocabularies, what have they been able to help us do? If we’re looking at data interoperability, comparability, aggregation ability, and quality, definitely a terminology like SNOMED CT is excellent definitional knowledge. It has the complex semantic relationships that help us understand concepts along different axes. It has the structures and concept permanence that help us with clinical decision support. Granted, concepts that are automatically assigned through transactions within an information system are more reliable for decision support. What I mean by that is, for example, lab results: when a system is feeding an EMR and that information is very specific and codified, the use of that information is much more reliable than if I have to rely on a clinician to describe a condition, where even though the concept is codified I’ve got user issues and variability. Definitely for reporting, analysis, and categorization or aggregation of data, something like SNOMED does an excellent job for us.

Now there are issues that we still struggle with. Outside of interoperability, when it comes to comparability and data quality there are issues that terminologies themselves don’t solve. As Brian mentioned, we’ve got out of the box usability issues. SNOMED from a definitional perspective is very good, very robust, but the clinicians don’t view it, don’t use it, that way. They try to hone in quickly to identify a valid concept that expresses what they’re trying to describe, and boom, they grab it and that’s what they want to record in the document.

We’ve got issues with end user compliance and understanding of what that concept really means, and it is hard; you really have to look at the concept with all its relationships to fully understand it. A good example: in SNOMED there are two concepts sharing a synonym for myocardial infarction under the disease hierarchy; one is related to myocardial structure, a subtype of myocardial disease, and the other is under coronary artery atherosclerosis. So when a clinician is just trying to look for that synonym term and is not validating that he’s pulling the right concept to represent it, those are issues that we have to deal with when it comes time for analysis, when it comes time for decision support.

Aggregation logic helps us roll it up, and that is reliable, but the specificity of the concepts becomes an issue. Versioning of terminologies is also something we struggle with, and we have to factor that into decision support. Terminologies evolve over time; more concepts get added, some get replaced and retired. For a certain domain, when I document, let’s say, iron deficiency anemia due to inadequate dietary intake, I may have included iron deficiency anemia of pregnancy in that, because physiologically that’s the reason: intake didn’t keep up with the physiological demand. But then if two years later a new concept comes up, iron deficiency anemia of pregnancy, I’ve got differences in what that concept represents at different points in time. So once again, for usability, I’ve got to factor that into my decision support rules and things like that.
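The versioning problem described here is why decision-support rules keyed to a concept must be re-evaluated per terminology release: the same rule picks up different members once a new, more specific child concept is added. A hypothetical sketch with simplified release data:

```python
# Sketch of why a rule keyed to one concept yields different member sets
# across terminology releases as more specific children are added.
def rule_members(release_children, rule_root):
    """release_children: {concept: [child, ...]} for one release.
    Returns the rule root plus all descendants in that release."""
    members, stack = set(), [rule_root]
    while stack:
        concept = stack.pop()
        if concept not in members:
            members.add(concept)
            stack.extend(release_children.get(concept, []))
    return members

# Two hypothetical releases of the same anemia branch.
v2003 = {"iron deficiency anemia": ["dietary iron deficiency anemia"]}
v2005 = {"iron deficiency anemia": ["dietary iron deficiency anemia",
                                    "iron deficiency anemia of pregnancy"]}
```

Data coded against the 2003 release may have folded the pregnancy cases into the dietary concept, while 2005 data separates them, so aggregation logic has to know which release a record was coded under.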

And invariably, if clinicians don’t find what they want quickly they revert to free text or they just grab the most general concept, so even though there may be a valid concept for iron deficiency anemia of pregnancy they may just grab anemia and go with it.

So just a quick example of what that means when it’s time for decision support, when it’s time for analysis and reporting. It means I’ve got to leverage the other data available in an electronic health record to understand this concept better. If the clinician only tells me it’s anemia, is it because that’s all he knew about the patient at that point in time, or is it that that’s all he was willing to declare, when he could understand the patient better by looking at other data points? In this example, we’re looking at lab results: iron levels, ferritin, TIBC, folate, all those other parameters related to anemia that help define the type of anemia it is. So ultimately the clinician knows that this is anemia due to iron deficiency, but when I look at the problem list or the clinical diagnosis table, I may just see anemia, and that’s not a terminology issue, that’s end user variability. So we’ve got ambiguity of expression that influences the activity data, and convenience of expression that influences how a clinician declares and expresses things. When you have a disassociation between the activity data and the concepts and codes represented on the reference and definitional side, there’s always going to be a challenge from our perspective in terms of all the other things we want to do with concepts and terminology.
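The refinement step described in this example, using lab results to sharpen a generically coded problem, can be sketched as follows. The threshold values, field names, and logic are purely illustrative placeholders, not clinical rules.

```python
# Sketch of refining a generic problem-list entry ("anemia") using other
# structured EHR data (lab results), for decision-support purposes only.
def refine_anemia(problem, labs):
    """problem: coded problem-list term. labs: {test_name: value}.
    Returns a more specific concept when the labs support it."""
    if problem != "anemia":
        return problem  # only refine the generic entry
    # Illustrative (non-clinical) thresholds suggesting iron deficiency.
    if labs.get("ferritin_ng_ml", 999) < 15 and labs.get("iron_ug_dl", 999) < 50:
        return "iron deficiency anemia"
    return problem

labs = {"ferritin_ng_ml": 8, "iron_ug_dl": 30}
```

The key design point is that refinement is inferred, not recorded: the problem list still says "anemia", and the sharper concept lives only on the decision-support side, which is exactly the disassociation the testimony warns about.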

So this last slide: basically, we provide usability within a number of frameworks. Obviously if I can control and predefine the expression of those concepts, that is best, but that in itself creates challenges in terms of usability, and it also creates risks related to validity and reliability, because once again, out of convenience, clinicians may just say that’s close enough of a concept, and if I’ve grabbed a concept that’s too high level then I’ve lost the ability to capture more specific concepts.

The other is the post coordination approach, or compositional method. Once again, usability is a challenge; that means you’re grabbing granular concepts and composing them in real time, but you do have the flexibility of creating prose using granular concepts in a post coordinated manner.
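Post coordination, composing a finding at entry time from a focus concept plus granular refinements instead of hunting for one precoordinated code, can be sketched as a tiny expression builder. The rendering syntax loosely echoes SNOMED CT's compositional style, but the function and terms are illustrative assumptions.

```python
# Sketch of post-coordination: compose an expression from a focus concept
# and attribute refinements at documentation time.
def post_coordinate(focus, **refinements):
    """Render a compositional expression as focus:attr=value,... .
    Attributes are sorted so equivalent expressions render identically."""
    parts = ",".join(f"{attr}={val}" for attr, val in sorted(refinements.items()))
    return f"{focus}:{parts}" if parts else focus

expr = post_coordinate("chest pain", severity="severe", episodicity="new episode")
```

Sorting the refinements is the small but important choice: it gives one canonical rendering per composed meaning, so two clinicians composing the same finding in a different order still produce comparable data.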

Thank you for your attention.

MR. BLAIR: Does the Subcommittee need any guidance on finding the testimony that was submitted beforehand last week? If you do, Ian I think gave us five different assessments, on CPT, ICD-9, SNOMED, Multum, and LOINC. I think that’s correct, is that correct?

DR. CHUANG: Yes, it is.

Agenda Item: Panel 2 - Terminology Users: Healthcare Vendors - Dr. Faughnan

DR. FAUGHNAN: Thanks for the opportunity to speak today. I’m John Faughnan, and I’ll be speaking on PMRI terminologies from McKesson’s perspective. Briefly, McKesson is a fairly large multinational company, 24,000 employees. We have two businesses that span all of health care: Supply Solutions, where we use a lot of medication data for retail pharmacy, distribution of medications, and medical management; and McKesson Information Solutions, providing software and consulting services, where we’re using these data for everything from revenue cycle software to expert order entry, an expert system supporting order entry in the acute care setting. And half of the U.S. hospitals that are over 200 beds are using our products.

We use a variety of clinical terminologies. We’ve used the nursing terminologies for several years, NIC, NOC, NANDA, and PNDS, as well as proprietary concept collections we’ve reluctantly maintained. LOINC we’ve used for test results. Today, however, I’ll be speaking primarily about NDDF Plus and SNOMED CT; I think LOINC’s position is well established and accepted. When I speak of SNOMED CT, it also incorporates, through their convergent terminology effort, many of the nursing terminologies that we use.

Briefly, about NDDF Plus: we’ve been using it extensively for many years throughout the enterprise, for everything from dispensing meds at pharmacies to managing package information, and leveraging it as the basis for acute care medication order entry, which is quite different, as others have pointed out, from either dispensing or writing a prescription in the ambulatory setting. We have found FirstDataBank, who maintains and owns NDDF Plus, to be very responsive to our requests, and we’ve seen the terminology underlying this knowledge base really improve over the last three or four years and incorporate a lot of the principles of a well designed terminology. We expect to continue to use it for medication allergies, medication history, documentation, order entry, and really packaging and medication dispensing in acute and retail settings. We are, however, looking forward to using SNOMED CT instead of ICD-9 to pass diagnostic data, such as indications and contraindications, to the drug information framework. I have not been as happy with ICD supporting medical decision making as maybe Ian has been; I’ve actually been very dissatisfied with it.

Why SNOMED? There are three things I’ll try to touch on in this limited time. Why did we end up choosing it? It was a very difficult choice. A lot of it was architectural: feeling it was designed for growth and maintenance, that it was as close as anything, maybe not perfect, nothing is, but it could be improved with experience, and it was a nice framework to build on. A lot of times that’s what’s going to be most important. I’m a little bit worried, although I know you’ve got some real experts here, that if you partition your terminologies as much as we saw yesterday, through ten or 12 domains, it’s going to be very hard not to have overlapping concepts, because in medicine a concept like immunization is partly a procedure, partly a medication, and partly unique things related to an immunization. Adverse reactions, they all cross these terminologies, so if you’re just taking chunks and bits and pieces of terminologies and saying, hey, I like this for this purpose and this for this purpose, you’re going to have a lot of overlapping concepts. And they may not be mappable, they may not be equivalent, as Brian mentioned.

We liked the capacity of SNOMED to map to the administrative code sets. Yes, it’s really different, it’s an ontology, it’s truth, but it was crafted in part so that you could do those mappings, at least for ICD. We like the health care coverage, particularly for nursing and allied health care; they’re really stronger in nursing, they’ve really done a nice job incorporating the core concepts in the nursing terminologies, NANDA, NIC, PNDS, and NOC, which they’re working on. Deb Konicek at SNOMED has done a great job leading this effort, and it makes it possible for us to have a reference terminology, but in our applications to deploy the full richness of the nursing terminologies where they are needed, and yet not have this conflict and overlap; we can integrate them.

And then very important for us was the expectation that it would be affordable, and that’s been a big issue for our customers; in the past the price ticket was a hard sell. So key is this agreement, and we’re hoping to see UK adoption, and all the things that gives you: a marketplace, people building on it, customers being willing to take it.

Well, I won’t go much into nursing and allied health care because I’ve already mentioned this. It’s just to say that I think you’ll find in this Committee’s work that you want to encourage that approach, so that you have a reference terminology where concepts are well maintained, and special purpose terminologies with extensions for particular domains.

One thing I did want to mention, when you look at these administrative code sets and SNOMED, is in your cost/benefit analysis for ICD-10 implementation to say hey, not only what’s the cost/benefit today, what is it if you’ve got a PMRI terminology in place, because it may be really different. ICD-10 is a lot bigger than ICD-9 and the interoperability issues may be a lot harder. And then you have to say, well, what’s the benefit if you’ve got a PMRI terminology in place, what does ten give you over nine. So I think these are key issues that I hope will be incorporated in the cost/benefit analysis.

I’m going to focus for the remainder of my few minutes on the competition between reimbursement needs, regulatory mandates and clinical software. If there’s a few percent difference for a large health care provider in using ICD and CPT in terms of reimbursement, if they get an extra one percent a year by sticking to ICD and CPT, they will never adopt anything else. Reimbursement in the real world is so critical; so many of these facilities, sometimes they’re doing better, sometimes they’re looking at downsizing and getting rid of half their staff. They can’t hire enough nurses, they’re very stressed. So we need to eliminate that clinical penalty, and that really is something CMS can do, and NCVHS can advise CMS to do it. I think there are ways to do it practically speaking, and I’ll just talk about medical necessity here, and I’ve only got a couple of minutes so I’ll make it brief.

Typically medical necessity is written as rules saying for a given procedure we will accept certain ICD-9 codes. Usually the underlying clinical logic is actually pretty straightforward, but when you try to express that clinical logic in ICD you’re having to fetch terms from all over ICD, you’re having to deal with the versioning issues, you’ve got some things that are ranges. I’ll give you a good example in a minute. SNOMED CT actually could be a much easier way to represent the underlying clinical reasoning even if the end users continue to submit using ICD and CPT. Here’s one example. Ferritin levels, which is a measurement of serum iron saturation: if you’re going to order that test you need some supporting reasons for it. Well, it turns out diabetes is one of the supporting reasons, presumably because diabetes can arise from some of the iron overload disorders, so if a patient is diabetic you’re saying hey, do they have an iron overload disorder. Well, in ICD diabetes codes are frankly a mess, and that’s not the fault of the people working on ICD, it’s structural problems with ICD; they overload the terms, there’s really no unique way to differentiate between diabetes unspecified and diabetes type II, it’s very strange. The codes are scattered across ICD.

Well, in SNOMED, it’s one term. Diabetes, and underneath it many terms from multiple hierarchies, but with that single concept you can write the rule. A much simpler rule; I think instead of 150 codes in ICD you could do it with ten SNOMED codes. That doesn’t mean that all the world is going to be using SNOMED, maybe forever or quite a long time, but as long as, and I can go into more detail in questions, as long as there were understood mappings that were officially blessed by CMS, we could make it work in our systems.
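The contrast drawn above, enumerating scattered ICD-9 codes versus writing one rule against a single SNOMED ancestor concept, can be sketched roughly as follows. All codes and the tiny is-a hierarchy here are invented for illustration; a real implementation would query SNOMED CT's transitive is-a relationships rather than a hand-built dictionary.

```python
# ICD-style medical-necessity rule: every acceptable code must be listed.
ICD_ACCEPTED = {"250.00", "250.01", "250.02", "250.10"}  # ...and many more

def covered_icd(code):
    return code in ICD_ACCEPTED

# SNOMED-style rule: one ancestor concept plus a subsumption check.
# Toy is-a hierarchy (child concept -> parent concept), invented names.
IS_A = {
    "type2_diabetes": "diabetes_mellitus",
    "type1_diabetes": "diabetes_mellitus",
    "diabetes_mellitus": "endocrine_disorder",
}

def subsumed_by(concept, ancestor):
    """Walk up the is-a chain to see whether `ancestor` subsumes `concept`."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = IS_A.get(concept)
    return False

def covered_snomed(concept):
    # The whole rule is a single subsumption test, not a code list.
    return subsumed_by(concept, "diabetes_mellitus")
```

The point of the sketch is that new diabetes subtypes only need to be added to the hierarchy; the rule itself never changes.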

In summary, I want to thank everyone for the opportunity of presenting today. I did update my slides and I can send the new versions out. What we would ask is that the government decrease the penalty of implementing clinical systems by working to close the reimbursement and regulatory gaps, and I think there are some really simple approaches CMS can take that would be cost effective. Second, that the cost/benefit calculations should assume the integration of a PMRI terminology when deciding what the costs and benefits are of going to PCS and ICD-10. Third, and I left this out for lack of time, we don’t need RxNorm and SNOMED to have two different views of medications, and we really need to encourage collaboration and interoperability. And lastly, just to worry a little bit about the hidden costs, in terms of are you going to have something that’s maintainable if you mix and match too many terminologies in kind of a cherry picking, best of breed approach.

Thank you very much for the opportunity to speak today.

Agenda Item: Panel 2 - Terminology Users: Healthcare Vendors - Dr. Lau

DR. LAU: Hi, this is Lee Min again from 3M, and I’m going to be talking about using LOINC, the Logical Observation Identifiers, Names and Codes; actually, I’m going to specifically talk about lab LOINC. Now I want to first thank, not you guys, but Pam Benning, who actually did the written testimony that you should have a copy of, and it was important for me to have her do that because she’s been on the LOINC Committee from the beginning. She came to 3M from ARA, and she’s been working on LOINC, or using LOINC, within 3M for five years now together with a bunch of other HDD people at 3M. And I get to give the talk because I live closer to Washington, D.C.

Now this, I’m sure all of you guys know this, is just a quick overview of LOINC, but for the sake of our internet audience I’ll just go through very quickly. LOINC is from Regenstrief and it aims to provide the universal ID for OBX-3, which is the observation ID of an HL7 transaction. The fully specified name in LOINC has a six-axis model, meaning it has six attributes to describe that one single LOINC name: analyte, property, timing, system, scale, method. And I threw in a bunch of examples there to show the six definitional attributes of LOINC for each name.
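The six-axis model can be illustrated with a short sketch. LOINC really does delimit the axes of a fully specified name with colons; the exact rendering of the serum potassium name below (keyed to code 2823-3, which comes up later in the testimony) should be treated as illustrative rather than authoritative, and a methodless term simply leaves the sixth axis empty.

```python
# The six definitional axes of a LOINC fully specified name.
AXES = ["analyte", "property", "timing", "system", "scale", "method"]

def parse_fsn(fsn):
    """Split a colon-delimited fully specified name into its six axes."""
    parts = fsn.split(":")
    # A methodless term has no sixth component; pad so every axis is present.
    parts += [""] * (len(AXES) - len(parts))
    return dict(zip(AXES, parts))

# Illustrative rendering of a methodless serum/plasma potassium name.
name = parse_fsn("Potassium:SCnc:Pt:Ser/Plas:Qn")
```

Here the empty `method` axis is exactly the "methodless" case Dr. Lau discusses later as a mapping decision point.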

This is an example of an HL7 OBX message. As you can see in OBX number three, which you find if you count all those vertical bars, the first example would say something like NA and then that upside down thing which I never know what to call it, serum sodium, and then the next one says K, upside down thing, serum potassium, and so on and so forth. That’s the OBX-3, and the NA and the K and so on is the observation in OBX-3 which LOINC aims to provide an identifier for. So this is the ideal situation. I’m hungry so I’m talking fast; if you guys want me to slow down, say so.
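As a rough illustration of the segment being described: in HL7 v2, "|" separates fields (the vertical bars being counted) and the "upside down thing" is the caret, "^", the component separator, so OBX-3 can be pulled apart as below. The segment itself is invented, built around the 2823-3 potassium example used on the slides; "LN" is the conventional marker for a LOINC coding system.

```python
# Extract the observation identifier (OBX-3) from an HL7 v2 OBX segment.
def obx3(segment):
    fields = segment.split("|")                  # "|" separates fields
    ident, text, system = fields[3].split("^")   # "^" separates components
    return ident, text, system

# Invented example segment: a numeric serum potassium result coded in LOINC.
seg = "OBX|1|NM|2823-3^Serum Potassium^LN||4.1|mmol/L"
```

With a local code instead, field 3 might read `K^SERUM POTASSIUM^L`, which is exactly the interoperability gap the testimony goes on to describe.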

MR. BLAIR: You can slow down, you have plenty of time, take your time.

DR. LAU: The ideal that I’m sure LOINC and all the other standard vocabularies are hoping for is that people will get to use those standard codes in their messaging, in their storage, in the CDRs, in the EMRs, so that we then can have interoperability if we’re all using the same codes when it comes time to exchange data, compare data, everyday data and so on. So this, as you can see, would be the ideal: no matter where you send a message from, who you are, you’re going to use the same LOINC code for the same lab result that you want. And the LOINC example here is 2823-3.

This certainly was what we hoped for when we started the 3M HDD in 1995, and I know, well, I hope that we are kind of known for mapping nowadays, the 3M HDD, but we really came to do that kind of reluctantly and to our surprise, because we found that in reality our customers were going, what, are you kidding me, you expect me to send you what? So this is the situation we end up seeing: we see different codes in the OBX segments, and computers cannot recognize that these three are the same thing; as far as the computer is concerned they’re three different codes, they mean three different things. One says 20002, one says K, one says 6778. So as I said, reluctantly, we ended up doing a lot of vocabulary mapping.

Now that term has been referred to in previous presentations. As far as our work is concerned in the HDD, we use mapping to mean the establishment of equivalence between a practical data element from one vocabulary or coding system and a data element from another vocabulary or coding system, just so that we can then translate or move or interoperate from one set to another. Some of the situations we’ve heard about where things are not really equivalent, we completely agree with, and in those cases, even though conversationally we call it mapping, we actually use relationships to establish that kind of crosswalk. But because of the situation we see with customers sending us all kinds of different funny codes, we end up doing the mapping of those codes to LOINC.

So for us, we want to capture all our customers’ lab data into the 3M clinical data repository, and therefore we have to take all the codes that they are possibly going to send in an HL7 message and get them into our 3M health care data dictionary, so that those legacy codes can be used to retrieve the appropriate concept identifier to put into the CDR. Now right from the beginning, at the end of 1995, we decided that we were going to use LOINC lab result names as the standard for lab result names. So we’ve captured each LOINC lab result as a concept in the 3M HDD, HDD being Healthcare Data Dictionary, and we assign that LOINC concept a numerical concept identifier which we call an NCID, so the LOINC code and the LOINC name then become additional representations of the NCID of that concept in the HDD. And as I’ve said before, we make sure that our customers dump out their master files or data dictionary or whatever you want to call it and pass it to us, and we actually have to go through item by item and say, for this legacy code, which LOINC concept is this; for that legacy code, which LOINC is this. So you can imagine it’s kind of labor intensive, to put it politely, and we’ve had to learn how to ask for the appropriate information from the customers, and we actually came up with tools to do this very fast.
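The arrangement described, one internal concept identifier with the LOINC code and each customer's legacy code as alternate representations, can be sketched in miniature. All identifiers below are invented, reusing the 20002/K/6778 legacy codes from the earlier slide; a real HDD would of course be a database, not a dictionary literal.

```python
# One concept (invented NCID 90017) with several representations:
# its LOINC code plus each site's legacy code for serum potassium.
NCID_REPRESENTATIONS = {
    90017: {"loinc": "2823-3", "site_a": "20002", "site_b": "K", "site_c": "6778"},
}

# Invert once at load time into a lookup: (source, code) -> NCID.
LOOKUP = {
    (source, code): ncid
    for ncid, reps in NCID_REPRESENTATIONS.items()
    for source, code in reps.items()
}

def to_ncid(source, code):
    """Resolve an incoming legacy (or LOINC) code to the shared NCID."""
    return LOOKUP.get((source, code))
```

Whatever representation arrives in the HL7 message, the same NCID lands in the repository, which is the point Dr. Lau makes on the next slide.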

I can just remember with our very first customer we would say, could you tell us the property of your lab result names, and our customer, some of you may have heard this joke before, said: what do you mean, the property? It belongs to us.

Now this is where it gets fun. As you know, LOINC was built by obtaining starter sets from a bunch of labs, and it still relies on customers and other users to submit new terms for it to keep enriching its dictionary, the LOINC database. So that’s what we’ve been doing, too, because you can’t go tell a customer, oh guess what, out of your 5,000 lab results only 3,000 have LOINC codes right now, so please don’t use the other 2,000 until the next LOINC update. Even if LOINC is willing to do the update tomorrow, you know what customers are like; I’m sorry, but they’re not going to wait until tomorrow. So we are using the dictionary really as sort of an intermediary: we provide a numerical concept ID like the ones I mentioned, and that is the one that gets stored in the CDR, and in the meantime we will turn around and submit that new term to LOINC in the LOINC format, and when LOINC assigns a code then we just add the code back into the dictionary on the back end.

So from a customer’s point of view they really don’t want to know anything about it. All they want to know is it’s been mapped to LOINC, hurray, done. Submitting to LOINC, getting a code back, whatever updates and versions and all that, at least so far we haven’t found our customers to be willing to deal with it.

So basically that’s all we’re trying to achieve: we’re trying to get it such that no matter what representation is going to be sent in the HL7 message, the same NCID, or numerical concept ID, is stored in the clinical data repository. And that’s from our point of view, but if you look at it, even if you’re not talking about a CDR, the fact that all of these have been mapped to LOINC means that the customers, even without us, will be able to communicate with anyone else who has been mapped to LOINC.

Now, surprisingly perhaps, most of the rest of my talk is going to be on this one slide, because I got lazy after a while and I stopped writing slides. Basically, what we found over the last few years, and we’ve mapped a lot of systems to LOINC, it’s been ongoing since 1995, is that we’ve gotten to the point where we are building tools that can let us run a customer’s master files through the tool and, just like that, 10,000 rows, 15,000 rows are matched to LOINC, so we’ve got a lot of experience in this. And the basic understanding we have is that the standard of LOINC is just fine; even five years ago it was fine. I mean, as someone just said, it’s not perfect, but I don’t think perfection is really the issue here, it’s commitment to standards. As I used to joke with my husband, he’s lucky to have my commitment to him because he’s certainly not perfect, but I’m not out looking for another one. It may be funny but it’s true: if we keep looking for the next perfect terminology and try to switch and this and that, we’re never going to get there. It’s much better to just choose one and stick with it and work with it and find ways to use it.

And that’s really what I’m trying to stress again and again: we have been so focused on building that I think we really need to start paying some attention to using. And I’ll tell you why. As I’ve said, we’ve mapped a lot of systems, and we started with one or two mappers, and now we have seven or eight mappers. Now the person who maps, I will tell you, on a good day he or she hopefully will make the same decision again; on a bad day, probably not. Because if you look at it, LOINC and the standard vocabularies have to be specific in each of their codes so that they can describe something properly. Because of that, you have the very real possibility that the same person, even on a different day, or maybe at different times of the same day, may not choose the same code again even for the same situation. So imagine propagating this to multiple people over multiple places trying to choose the code, and you end up with people choosing different sets of codes. Your interoperability just went away, and that is really a real problem that we have seen; add to it the fact that usage right now is still left to be sort of a voluntary, if you can do it that’s really nice, thank you very much situation. I would like to see more attention paid to that, really, just so that you don’t, down the road, in a few years’ time, come to look at the data and find that, oops, now we have more work to do because we all used different codes even though they came from the same system.

Actually, I think I covered all these points. One suggestion would be, if LOINC wants to find more things to work on, things like guiding rules: for instance, if there is insufficient information, what should we pick; if there’s no method information, would you suggest picking methodless; or do you want to publish statistics, for this serum potassium methodless really is fine, and 90 percent of our people have picked this code, so you try to get as much of an overlap situation as you can. Because, for instance, I’ll tell you an anecdote: one of our current projects is to try to validate the self mapping to LOINC of one of our customers, which has mandated(?) all its sites to map, and to do the validation obviously we have to say, look, give us the same information you used to map your LOINC so that we can check your LOINC; if with the same information you and I map to different things, we’ll talk. And we couldn’t get the information, so I said, in that case how did you map to LOINC, and the answer was, oh, we guessed. That’s not going to do interoperability any good at all.

I’ll just end there with that slightly depressing story. Thank you very much.

MR. BLAIR: We have about 40 minutes for questions and I think with this panel we may need 40 minutes, so this is great. Simon can you?

DR. COHN: Sure, why don’t we start with Steve and I’ll follow up from him. Others? Walter.

DR. STEINDEL: My usual set of questions goes to virtually each member of the panel; I’m excluding Brian for the first question, but to each of the other three, I’d like some indication of how many systems are actually out there in customers’ hands using SNOMED CT, and some indication of how long it’s been used, and if it hasn’t been used very long, whether you have any indication of perhaps how many charts have actually had that information coded, etc.

MR. BLAIR: Steve, could I extend your question slightly?

DR. STEINDEL: Certainly Jeff.

MR. BLAIR: Many of you, in the written testimony you gave us, when you referenced SNOMED CT you referenced the different applications you used it for, so if you could answer Steve’s question accordingly, separate out whether it’s in terms of laboratory results or whether it’s for decision support, or problem lists, or the other applications that you indicated in your written testimony. Steve, is that acceptable to you?

DR. STEINDEL: That’s fine, I would like some sort of aggregate indication as well.

DR. FAUGHNAN: That’s easy for us. As I mentioned quickly in my third slide, I think, we’re really looking at it for our new product development. I had been looking at SNOMED in its various incarnations over the years, and it’s really with the crossing of the boundary to CT, and critically the U.S. government agreement, that it became feasible for us to consider actually using it. I think prior to that time the cost to our customers made it not acceptable. RT I probably could have used, would have gotten good mileage out of, but the critical change, in addition to the improvements of CT, is the license, and that’s fundamentally why we’re looking at it as part of our new development going forward. Does that answer your question about CT?

DR. STEINDEL: For CT, now LOINC?

DR. FAUGHNAN: For LOINC, we’re using it in lab results. We actually have been using it for several years in our ambulatory care product to internally aggregate different tests; you might order tests different ways and want to have a longitudinal view of the result, so we use LOINC and some independent aggregators to put those together. One of the things that I’m hoping we’ll see with SNOMED CT or its descendants, because LOINC is tied into CT, is a nice way to aggregate several different LOINC codes that are very similar in terms of the desire to display them in the record together, so we’re hoping to play with that, too. But LOINC is the one we’ve used. NDDF Plus we’ve used forever, extensively.

DR. CHUANG: For Cerner, specifically SNOMED CT, we have none as of right now. We have a backlog of clients at stages of implementation that are on hold, like John’s comment, waiting for the government and CAP agreement to go through; the cost was a significant barrier to adoption in that regard. Previous versions of SNOMED, absolutely, a majority of our clients have some variation of that.

DR. STEINDEL: By previous versions you mean before RT?

DR. CHUANG: And including RT.

DR. STEINDEL: And including RT, thank you.

DR. CHUANG: Correct. For LOINC, it is part of what we call our standard starter set, starting this year on the current platform. We are migrating existing clients to utilize that as a concept identifier within our system. Multum, like I said, is our default medication terminology, and clients on CPOE or our pharmacy solutions get that packaged in. The functionalities that they desire in terms of dose range checking, duplicate checking, and all those things are part of the Multum functionality.

DR. LAU: SNOMED CT, none. LOINC, our customers didn’t really have a choice, all of them are using LOINC; that would include, I forget, maybe 15 or 16 large health care enterprises like IHC, and also the Department of Defense, and currently we have a prototype project with the VA, but we cannot claim credit for that because the VA has already mandated the use of LOINC.

DR. COHN: Can I follow up on that one, just a little more clarification? I think I’m hearing that there’s widespread use of LOINC for lab; at least my understanding is there are a couple of other domains where there are LOINC codes. Are any of you using, I’ve heard it described as clinical LOINC, if that’s the right terminology, are any of you using the clinical LOINC aspect?

DR. FAUGHNAN: I guess I can say no, we don’t currently have plans to use clinical LOINC, just because it still has very narrow coverage. In that domain we’re planning to actually use CT for core components of many of the clinical observables, and then we expect that we’ll need to do some intermediate extensions which we hope will then be reincorporated into CT.

DR. LAU: We are using clinical LOINC, like I said I focused my talk on lab LOINC because I really wanted to illustrate my editorial comments, but we are using clinical LOINC just like lab LOINC, our customers don’t have much of a choice, it is standard for HDD.

MS. GRAHAM: For what components of the HDD?

DR. LAU: Like vitals, EKG, whatever we have clinical LOINC codes for. Because, I hope that came across clearly, the way we do it is we load a standard vocabulary in as concepts in the HDD, and then they will be assigned an NCID, and the LOINC codes will be one of the concept’s forms. So if a customer’s codes get mapped to that concept, LOINC is sort of mapped already.

DR. CHUANG: Like 3M, we are also using it for clinical, but we’re being very selective, so we use it for vitals as an example. And part of what we’ve been trying to do, participating at these venues as well as the public/private sector collaborations like the Markle Foundation, things like that, is trying to get a better sense of which one we should be turning to, because there’s some duplication and cross coverage.

DR. COHN: Ok, thanks, Steve, I think you had other questions.

DR. STEINDEL: Yes, I had some other questions, getting off the actual usage question, and Brian, I actually did have a question for you, so you didn’t get off scot-free. In your talk, and I found this somewhat interesting, you made the point that the structure of the terminology is use dependent, that people tend to vary the structure for their specific needs and specialties. As a corollary to that, because we’ve talked about hierarchies in the Subcommittee, etc., should we really worry about the structure of a terminology?

DR. LEVY: Well, that’s an excellent question. Especially during the training sessions, where we go out and we train people how to use SNOMED CT, one of the issues that I focus on quite a bit is that at the end of the day, for the initial documentation, the important thing is to begin using the SNOMED CT concepts; the hierarchies of SNOMED CT are not as important for initial structured data entry, so as I said earlier, we’re not going to see doctors browsing down the SNOMED hierarchies to find the concepts that they want. That’s not to say that the structures are not important; that’s to say that the structures are not important for the use case of structured data entry. The structures are clearly important for at least a couple of reasons; one of course is decision support and outcomes analysis, being able to aggregate patient data. The other thing that the structure helps us do, what we are finding is that our vendors are taking SNOMED concepts and, not changing the SNOMED structure, but using tools and processes, recreating the SNOMED structure to fit their specific needs, so they might create alternative views, alternate ways to aggregate SNOMED concepts. And those alternate views are not necessarily standardized; that is, the way McKesson wants to present a hierarchy of cardiology concepts may be very different than the way Cerner wants to present it, depending on their users, depending on the way their application is actually structured. So just to kind of summarize it, what I envision is that there will be multiple ways to break up and reorganize the hierarchies, and SNOMED currently provides the definitional way.

DR. STEINDEL: Thank you.

DR. FAUGHNAN: I think the structure is absolutely essential. I agree with what Brian has said; when you’re creating views for data entry you’re not necessarily at run time replicating the structure of the terminology, but we’re leveraging that structure and the relationships in the process of building those things, and it’s absolutely critical to maintenance. You can’t maintain these things without some kind of ontologic integrity reflected in the structure, and it’s very fundamental to decision support to know that there’s a class of things like a beta blocker, and what the things are that are members of it, and write your rule at the higher level. So there’s the structure of truth, which is definition, which is very important to maintenance, and then there’s a wide web of relationships which can be rendered, I guess, as structures, but are fundamentally relationships that are all about building knowledge rich applications, and you absolutely need those. One of the things that I look forward to seeing is a knowledge marketplace of vendors going out, taking these standard things, and then building knowledge structure and relationships on top of those, and selling them to Cerner, McKesson, and others, that we can bring to our customers.

DR. STEINDEL: I have a couple more questions, and since John has the microphone my next question is for him. You indicated in your responses that you use both SNOMED and LOINC for laboratory orders. Why?

DR. FAUGHNAN: We’d been using LOINC for aggregating lab results; we have been using it to name orderables, which is not a good use of LOINC. That’s not something I did, it was more a matter of convenience; it’s not an uncommon use, but it’s not what LOINC would say it was made for. As I said before, SNOMED is our development environment, our SNOMED strategy going forward, and I think when we are building orderables we will probably be drawing the orderables primarily from SNOMED. We know there’ll be areas that have to be extended, nursing orderables I think there’ll be a lot of growth in, but I don’t foresee continuing to use LOINC as an order entry terminology.

DR. STEINDEL: My last question is for Ian, you stressed in your talk the need for post coordination of terminologies. Do you see the need for development of a formal mechanism for post coordination that could be used across all terminologies?

DR. CHUANG: The post coordination that I was mentioning was in real time, at the point of documentation by the end user, and it’s a necessity because it’s a combinatorial nightmare if I try to precoordinate against a reference terminology what I think are all the additional concepts that may be necessary, and some of these concepts can be very complex. What I can do is understand it if I can concatenate and string multiple concepts together in my data model, which is what we’ve done.

In terms of formalizing it, you’re still relying on the definitional knowledge, which is, if I’ve got a concatenation of three concepts to fully describe some more complex concept, what I need is the fact that those three concepts are valid for what they stand for on their own, and hence are understood against my reference side, by which I can then write a rule and say, in the presence of these three in this context, I understand what this means. So because it involves the end user, the formalization is always a challenge.

DR. LAU: I’d like to comment on that. Are you referring to something like, instead of saying right arm pain, I want to put down pain as the diagnosis or complaint, where right is laterality and arm is location? Well, I see that as the role of something like the information model, and here’s a plug for HL7 Version 3 RIM.
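Dr. Lau's "right arm pain" example amounts to storing a base concept plus qualifier attributes in the information model, rather than requiring one precoordinated term for every combination. A minimal sketch (all names invented) of such a post-coordinated record:

```python
# Post-coordinated finding: a base concept with qualifier attributes,
# instead of a single precoordinated "right arm pain" term.
finding = {
    "concept": "pain",
    "qualifiers": {"laterality": "right", "body_site": "arm"},
}

def render(f):
    """Flatten the post-coordinated structure back into a readable phrase."""
    q = f["qualifiers"]
    return f'{q["laterality"]} {q["body_site"]} {f["concept"]}'
```

The appeal is that rules can still be written against the base concept (`pain`) while the qualifiers stay individually queryable, which is the definitional-validity point Dr. Chuang makes above.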

DR. LEVY: I also had a brief comment on that. SNOMED itself has taken what I would call a very early step towards helping to solve some of these post coordination issues. The new version of SNOMED CT includes what are now called qualifying relationships. In the past the SNOMED relationships have been only definitional, so that a myocardial infarction, for example, points to the heart structure itself; well, they’re now beginning to add qualifying relationships where, for example, a laboratory order test would be related to the set of appropriate things I could qualify it with, such as here’s a stat CBC versus a routine CBC. Again, it’s a very early step, but it’s a step in the direction of trying to include some of the post coordination issues in the definitional model of SNOMED itself.

DR. FAUGHNAN: I think it’s very exciting to look at what can be done to develop standard templates or rules for the information model, and Lee Min has really spoken about that for many years very well. I think there’s a knowledge marketplace that I’m hoping to see evolve; once you get sort of the foundations in place that you can build on, people will be able to use the information models and experiment with those, find ones that work, and provide things like expected values, allowed values, answers to questions, predicted values, all those sorts of things that are part of the information model. That’s going to really reduce the costs of bringing systems to customers.

DR. COHN: Steve’s done, good, so Walter you’re next, then I’m after you, then Clem, then Betsy, then Jeff, and maybe by that time it’ll be time for lunch. We’ll see.

DR. SUJANSKY: A question primarily addressed to Ian and John, but the others can jump in as well, of course. You both mentioned the importance of CPT and ICD, I believe, and the mappings of those to SNOMED I assume would be very important. How much would the value of SNOMED in the public domain be diminished, to a certain extent, if those mappings weren’t part of that? Or do you see that really as an add-on that you’d be happy to license separately, or maybe from a third party?

DR. CHUANG: Let me start. Definitely it would be an additional barrier to user acceptance, because of the intersection between clinical workflow and the administrative tasks that they need to do, so the fact that it’s available in SNOMED does become a good selling point. The challenge, from an HIS design perspective, is that I have to make that usable. In order to accomplish that, in order to jump from a clinical documentation activity and support the ability to quickly and hopefully easily select a matching billable code and billable procedure, if it takes six clicks, even two extra clicks, that becomes a barrier and a point of frustration for our users, and that diverts their efforts to just doing the bare minimum again.

DR. FAUGHNAN: Well, obviously, I could talk for days about ICD/CPT integration in clinical applications; it’s very, very difficult. To specifically address your question, Walter, about deployment: I would like to get everything for free, so would our customers, but I don’t think it’s a killer, for example, if the CPT mappings are not part of the governmental license, if that’s something we’re having to buy from the marketplace. Creating mappings from SNOMED to CPT is difficult, you certainly can’t create equivalence because they’re so different, but it’s not a huge task. I mean, I could do it and make money off it, so I don’t think that’s a huge deal.

DR. COHN: I was actually going to respond to that also a little bit, and I guess my concern, listening to our speakers, is that I don’t know whether it’s difficult or easy, though I do know it’s difficult to assure quality, but don’t any of your users worry about compliance issues? It’s one thing to buy a mapping, it’s a whole other thing to be sure that CMS will not take you to court for fraud and abuse because of the mapping. Yes, it may have been purchased, it may be supported by CAP or whomever, but CMS thinks that there’s systematic up-coding occurring because of the mapping.

DR. FAUGHNAN: Talking about this clinical penalty of putting together administrative code sets with clinical code sets: if there’s any disadvantage in terms of reimbursement or regulatory compliance, our customers won’t accept that, it has to really be equivalent if not better. So I think there’s action required from CMS to, at least in some domains, do some validated mappings. I don’t think it’s realistic for CMS to expect the transactions to be coded with SNOMED codes any time soon, like within six months, but I think there are ways to continue to have the transactions contain administrative code sets and yet, and I can talk about that in more detail from the presentation, say that in this limited domain, here are mappings, and as long as you’re following these mapping rules you won’t get in trouble. To come out and reassure our customers by saying hey, your vendors are using these mapping rules, your vendors are using SNOMED, you’re not going to get in trouble, we’ll be happy to work that way.

DR. LEVY: If I may also just add a few comments as well. As a member of the terminology working group that SNOMED has, which is trying to work to standardize mappings, and also with the disclaimer that Health Language is a provider of mappings that we create as well, I think there are a couple of relevant issues here. First, as I said in my presentation, we have to be careful to understand what a map is intended for, and one thing we are doing right now in the Clinical Terminology Working Group is to be very clear that the mappings provided right now between, let’s say, SNOMED and ICD-9 are not auto coding, that is, we can’t expect to go from a SNOMED code to the absolute right ICD code to submit to CMS, for example. And the reasons for that are several fold. One of the main reasons is that, as I said before, we’re mixing apples and oranges here, there are very rarely direct matches. Another very important reason is that if you look at how codes are applied in the real world, we sit and we chart in our medical records, then that medical record gets sent down to some basement somewhere, and a medical records analyst reviews the whole chart and gets to take into account patient context. In other words, what other diseases does the patient have, what other procedures were done. In addition to patient context there are a number of very complex rules involved, as many of us know, in trying to assign a correct ICD or a CPT code; a coder has to go through a number of various rules. So we have been very careful to say, at least in this Clinical Terminology Working Group, that we’re not auto coding here, and we can’t auto code yet, we have not reached that stage of terminology development yet.
To reach that phase of being able to say, well, I’m going to go directly from a SNOMED code to an ICD-9 code and send that without further review, there are going to need to be changes in the way ICD is actually structured, and there’s also going to need to be an attempt to computerize the rules that ICD actually has in place.
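The distinction Dr. Levy draws, a suggested, use-case-specific map versus auto coding, can be sketched in code. This is an editorial illustration in Python; the map entries, function, and field names are hypothetical and do not reflect the Working Group’s actual format.

```python
# Sketch of "suggested, not auto-coded" terminology mapping.
# Each map row carries the use case it was built for: a map built for
# reimbursement is not interchangeable with one built for, say,
# vital-statistics reporting. All entries here are illustrative only.
SUGGESTED_MAPS = {
    ("snomed:73211009", "reimbursement"): ["250.00", "250.02"],
    ("snomed:73211009", "vital-statistics"): ["250.0"],
}

def suggest_icd9(snomed_code: str, use_case: str) -> dict:
    """Return candidate ICD-9 codes for human review -- never a final code."""
    candidates = SUGGESTED_MAPS.get((snomed_code, use_case), [])
    return {
        "candidates": candidates,
        "final": None,               # a coder applying ICD rules and patient
        "needs_human_review": True,  # context must make the final selection
    }
```

The point of the `final: None` field is exactly the testimony above: the map narrows the choice, but the coding rules and patient context still require a human (or much richer logic) to pick the code that is actually submitted.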

DR. CHUANG: We used the mapping as a navigational convenience, not as an auto coding capability.

DR. LAU: And I’ll just add this, since 3M probably has a significant market share in all this coding, billing, reimbursement, compliance, and so on and so forth, for the other side of the house, not my side of the house: we definitely, in fact, made it very clear in the HDD that we call it a suggested billing code, we won’t even use the words billing code directly, and it has to go to a whole different set of products to generate the one that 3M will guarantee for billing and so on.

DR. COHN: Walter, I didn’t mean to --

DR. SUJANSKY: Brief follow-up question, I know there are a lot of other questions, so if you could just answer this briefly that’d be great. Not that you haven’t been brief. LOINC mapping: a number of you talked about mapping from your clients’ ancillary systems to LOINC. What is your perception of the adoption of LOINC as the native coding system, or of mappings to LOINC, by the lab system vendors? Is that happening? How much is it happening? How much is there left to go?

DR. LAU: My impression is that newer systems which have the luxury of doing that are trying to do that directly, places like ARA and all that, they are doing their own mapping, and I’ve seen a customer who has mandated his own systems to do mapping. The issue is continuity with old data. The easiest way really is to start from scratch and say that’s it, I’m going to have this system, I’m just going to use LOINC codes. If you can accept that, then you may lose compatibility with old codes unless you do a one time mapping. So it really is more a business decision, a logistics decision, than anything else, if you know what I’m trying to say.

DR. SUJANSKY: But specifically are vendors in their new products putting LOINC directly into lab systems?

DR. LAU: I can only answer for things that --

DR. SUJANSKY: In your experiences --

DR. MCDONALD: -- you have a pretty good idea, as a confessed LOINC developer, I mean --

DR. COHN: You’re the LOINC father.

DR. MCDONALD: -- but I think it’s a bad idea, and in fact if you look at pharmacy systems in hospitals, whatever knowledge base they use, they still have their own internal code system to handle Mary’s magic mouthwash and all the other oddball stuff. I think there’s probably a formalism where you could say that you want your own sort of unique identity code, not to screw things up but just because of the slight glitches you have to handle to make things fit. If you look at pharmacy systems, I don’t know if there are any in existence that don’t have that sort of local code system, which they then branch out from to the knowledge base with the other code system that gives them everything else they want.

DR. FAUGHNAN: I think the practice there is that if they can use the knowledge base they do use it, and they have to add these things as extensions. Wouldn’t that be maybe what you’re talking about the lab vendors doing? Use LOINC where it works, but where it doesn’t, add extensions, or at least have an intermediate layer between LOINC and the application.

DR. MCDONALD: Well that’s what I think I meant to say but you typically still see a unique pharmacy drug code within every hospital, that they don’t really use for much, but it’s sort of there.

DR. FAUGHNAN: It’s an indirection, like a shim layer.

DR. COHN: Walter, anything else?

DR. SUJANSKY: No.

DR. COHN: Ok, I just wanted to verify sort of what I was hearing. I want to thank all the speakers. I actually really appreciated the presentations, only because I felt like I was back in the real world where people have to get reimbursed for their services, a world where medicine is really a trillion dollar industry and one percent of a trillion dollars is a lot of money, and this is obviously the world you live in and I think the world I live in also. I guess what I was hearing was a couple of things, and I was reminded of the critical issues around government leadership here. That you’re willing to move forward on terminology but want to make sure that the government is leading the way in identifying what you need; it sounds like the people that you’re contracting with are also willing to go forward, but they don’t want to take undue risk. Is that sort of correct? So the government really has a major role here.

Now the other piece, I mean we were talking about this briefly, but it really is this issue of, at the end of the day, getting reimbursed for your services and doing all that right. And Ian, you came up with a very interesting idea about CMS and the federal government, I’m sorry, I’m looking at John and calling him Ian, it might have been John. On this issue, it seems to me there probably is also another major government role, and I’m not sure whether the government role is to actually do the mappings or certify mappings or whatever it is that allows there to be a low barrier for people to use clinical terminologies and be able to be appropriately reimbursed for them.

DR. FAUGHNAN: I think it’s absolutely critical and I think it’s quite --

MR. BLAIR: Which of his suggestions was critical? He said three things.

DR. FAUGHNAN: The one I sort of picked up on, that I probably mentally focused on, was to reduce the barriers, or what I call the clinical penalty: we really need to eliminate the loss of revenue that would accrue from using a PMRI terminology in an application. We’ve got to make that go away. I think for medical necessity not only could we make it go away completely, we could actually reduce the management burden for the people who handle medical necessity rules. I think for procedures we can lessen it; I mean, CPT is so different from a clinical terminology that all you can do, I think, is suggest possible CPT codes. But in the real world there has always been this step of practice management systems looking at the input from the clinical system and twiddling it to fit the administrative rules. The world of medicine is not unlimited, there’s science, there’s nature, there’s evolution. The world of reimbursement is limited only by human imagination, and so we can’t ever match that complexity; there’s always going to have to be this intermediate stage where someone takes the clinical data and says, how far can I take it without breaking the law and get reimbursed.

DR. CHUANG: I would agree that government leadership would help a lot, because the private sector and public sector often go together in health care. But to the point about setting some mapping standards: if we define the mapping at the point of documentation, then we’ve lost the freedom to clinically express the way that the clinician needs to at the point of care, and that constrains and creates risks around the validity of the data, in addition to all the reimbursement rules that have to be thought through. So it makes better sense to allow the clinician to use a clinical terminology to express at the level of specificity that makes sense based on either what he knows or what his preference is, and to leverage other data points so we can understand more concretely what the patient has or is dealing with; the reimbursement code should be just a short hop away, where all those other issues influence what you pick. If we have those influences at the point of clinical documentation, then that starts to cause drift in the validity of that information as well.

DR. LEVY: Just to add to this, in terms of what SNOMED is doing with the Clinical Terminology Working Group, one of the things we are wrestling with now is how do we validate mappings and who should validate mappings. Clearly there’s going to need to be internal QA and there’s also going to need to be external validation, and that may depend on the actual use case. So if we’re talking about trying to send an ICD code directly to CMS after the choosing of a SNOMED code, then CMS itself may need to play a role in validating. The standards bodies, the World Health Organization and SNOMED, as creators of terminologies, may also need to play a role in validating. In the current use case of these mappings, where we are providing suggested mappings, the end user, the doctor at the point of care who chooses a SNOMED code and then sees a list of suggested CPT codes, may need to play a role in validating that they’re seeing the right things, or the things that they expect.

DR. MCDONALD: I’ve got a number of quick ones, and these are matters of fact, not matters of pressing concern. The first thing, on CMS and ICD-9: one of the best gifts CMS could give us is to make us not have to code down to the bottom, so we get out of some of those crazy problems we’ve got, where the mapping is finally done to our problem list and each visit we have to change it because the patient might be slightly diabetic or more diabetic. Let us send test results with the measure of the severity, like hemoglobin A1C, instead of having to code that, because that we could do automatically.

In terms of clinical LOINC, I think, and unfortunately this is Stan’s area, it got a bad name; it really should have been called everything-else LOINC, because if one takes the specifics within it, like EKG LOINC, and OB ultrasound LOINC which has 800 terms, it covers those areas very well. I think it’s unfortunate that we had such lumping, because then it gets blamed for not being complete, and nothing is going to be complete when dealing with everything else.

The lab manual actually specifically disagrees with the statement that LOINC should not be used for orders, so the LOINC Committee does not find it a bad thing to use it for orders. And in fact, if you look at the 25,000 lab terms, there are probably about 2,000 that are single test things, they can be ordered as single test things, so why would you do it twice? That’s really the main reason why I bring that one up.

And the last thing, this doesn’t have to do with LOINC but with HL7: I think the worst idea on earth is to define a concatenation of “stat” and “CBC”, because there are well defined fields, used in all lab systems and all systems already, for those things, and why make another model on top of that model and make us less standard? That’s one thing that’s fairly standard in terms of everybody’s lab system.

DR. COHN: Clem, thank you. Any questions or no? Well, I don’t think those are questions but we appreciate the clarification. Betsy.

MS. HUMPHREYS: This may be minor, by way of clarification or comment. Steve knows this, but in terms of his question about the structure of the terminology: obviously the structure of the terminology is just as important for the vocabulary producer, to produce an acceptable and correct terminology. Even if no user ever used it, the developer of the vocabulary needs it.

The other thing is that I think Clem is exactly right about local IDs. When we talk to anyone who’s building their own vocabulary server approach, they need their own ID system, because they have to know things about their use of that term, and it’s easier for them to connect it to their own ID. It helps with updates, it helps when they have to create local terms, it helps when they’re trying to figure out not when the term was added to SNOMED but when it was added to their system, which may be two different dates. And it’s very difficult to manage the use of multiple terminologies, which we’re all dealing with, because even if we had one clinical terminology we’ve got the other ones we’ve just been talking about. So if you’re integrating these into a single system you need your own ID, otherwise you will quickly lose track of what’s going on, and if somebody, because for a very good reason they made a mistake in their vocabulary, deletes something that you used to have, if you don’t have your own IDs there’s no way to keep track of what’s going on in your own system.
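The local-ID indirection described here can be sketched as follows. This is an editorial illustration in Python; the field names, local ID, and record layout are invented for the example and are not any particular system’s schema.

```python
# Minimal sketch of a local concept table that survives source-terminology
# changes: the local ID is ours, external codes are attached to it, and we
# record when *we* added the term, not when SNOMED did.
from datetime import date

local_concepts = {
    "LOCAL:000123": {
        "display": "Type 2 diabetes mellitus",
        "external_codes": {"SNOMED": "44054006"},
        "added_locally": date(2003, 5, 21),
    },
}

def retire_external_code(local_id: str, source: str) -> None:
    """A source terminology deleted this concept; keep our record intact."""
    concept = local_concepts[local_id]
    concept["external_codes"].pop(source, None)
    # The local ID, its display text, and all local uses of it survive,
    # so nothing in the patient record loses its meaning.
```

Because every stored datum points at `LOCAL:000123` rather than at the SNOMED code directly, a deletion upstream removes only the cross-reference, not the local history.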

And the other thing I’m interested in, too, is, it’s just a question, maybe nobody knows the answer. I agree with you entirely that if you’re going to map you have to know what the map is for, and I would even wonder whether the mapping you do to report for billing is identical to what the hospital might do to map from clinical data to report vital statistics to the local public health department, and I would imagine that it probably isn’t.

DR. LEVY: Yes, in fact to specifically address that, we’re right now trying to define some of the use cases, for example, for SNOMED to ICD mapping, and you’re exactly right, it may be different. I think right now in the working group of SNOMED we’re trying to wrestle with that. Actually one of the hardest things is not necessarily doing the mapping but deciding why you’re going to do the mapping and how you’re going to actually do it. That’s why I think you brought the point home that we will see several kinds of mappings between these terminologies.

MS. HUMPHREYS: We have mapped ICD to certain terminologies, well, equivalence is not the issue; we hope, if we can determine it means the same thing, then yes, we want it equivalent across all the terminologies. But we did some mapping early on in the UMLS project between the medical subject headings and ICD, and the notion was truly: if I’m in a patient record system and the only identification of what this patient has is an ICD code, and I might be interested in doing some sort of search of MedLine about recent evidence or something, then what should I do? How could I automatically go from this ICD code to something that would retrieve something in MedLine with the medical subject headings? And that’s a totally different use. Obviously you have to understand how indexing is done and how coding is done, and then you can do something that might work pretty well, but if you don’t know those two things, just doing a straight-off mapping would not work at all.

DR. LEVY: Yes, exactly. In fact we at Health Language have understood that the mapping from ICD to SNOMED, which we have created, is a very different thing; it’s a completely different use case from going from SNOMED to ICD, trying to perform, in a similar fashion, outcomes and analysis research on data using the SNOMED hierarchies rather than the clunky ICD classification.

DR. COHN: John, did you have a quick comment, and then we need to close off?

DR. FAUGHNAN: Yes, a real concrete example of that dilemma. Many times at the leaf end of ICD-9, all the way down, there is a semantic equivalent, the unspecified code, and it’s legal, it’s a leaf code. The problem is payers don’t like it; a lot of times they won’t pay, they’ll want an “other” code, or a specific code. Well, the common practice in the world of medicine is to fudge, to use the “other” code because you get paid for it, even when, if you look at the clinical record, it should have been the unspecified code. That’s one of those nasty compromises we like to leave up to the customers, and not to us, because of the legal implications; those are the calls the customer makes, and that’s an example of a specific mapping.

DR. COHN: I think, Stan, did you want to make a comment? I hope you’re making a comment.

DR. HUFF: A comment. Just to reiterate a couple of points: one is, all mappings are task specific, I think everybody said that. Hierarchies are all task specific too, and so, speaking now of our experience at Intermountain Health Care, the hierarchies that exist either in LOINC or in SNOMED are excellent starting points for what you want to do, but all of those are just starting points for what you actually have to do in terms of making these things work in decision support or in your order entry system or anyplace else.

The last thing is just an editorial thing. We’ve had substantial experience trying to use composition in real time in the user interface, and our experience has been that there’s no amount of goodness in composition that overcomes the time that it takes to do it. Basically we were using it in problem lists: if you put in fracture in combination with all of the bones, people would use that and choose it. If you put in fracture and they have to choose a body part that was fractured, you’ll end up with fracture as the only thing that they’ll actually put in. And so our experience has been that, again, there’s really no amount of goodness in terms of the flexibility you gain from composition that people will actually tolerate in the run time environment when they’re maintaining problem lists and other things. I’m being more absolute than it really is; they’ll use it .05 percent of the time, and that’s about it.

DR. COHN: What we have is, Jeff has given up his time, and Tony Martinez from Alternative Link has requested, I think, one minute. I don’t know if it’s on this point, but I suspect, is it related to our discussions, or --

MR. MARTINEZ: Just a brief, just quick update on the progress.

DR. COHN: Ok, you’ve got 60 seconds.

MR. MARTINEZ: My name is Tony Martinez, from the firm of Martinez, Bass & Associates; I’m here on behalf of Alternative Link and the Foundation for Integrative Health Care. We just wanted to briefly provide the Committee an update on the demonstration program regarding ABC codes for nursing and complementary and alternative medicine. Right now over 10,000 organizations and practitioners across the health care continuum have already registered. In opinion polls that we’ve conducted, the vast majority noted the value they believe ABC codes would offer the nation by advancing the health of individuals and improving industry efficiencies. This registration process ends on May 29th.

In the market directly affected by HIPAA, current code set users and registrants already include Medicare Plus Choice, Medicaid, military and leading private health plans; federal, state and local government agencies and institutes; administrative and claims related businesses; leading software application developers and information technology companies; conventional, complementary and alternative medicine practitioners; integrated delivery networks, hospital systems, sub-acute care facilities and medical groups; health plan buyers; and top ranked academic institutions. We’re actually very pleased and impressed by the breadth of the interest in participating in this program.

We’re going to have a more detailed update statement available as a handout. In closing: because we believe that ABC codes are developed objectively, on the basis of the official terminology of the health care professions, rather than on the medical technology assessment and prevalence criteria used by other code development authorities, a lot of the people and entities involved believe that these codes offer the potential to vastly improve health care research, management and commerce for all approaches to care, not just those approaches that reflect conventional physician practices.

Alternative Link and the Foundation will continue to keep the Committee apprised of the progress and development of this program. Thank you very much. My assistant Maury Silverman will have handouts at the staff table.

Thank you.

MR. BLAIR: Thank you. Could you be back at 1:30 and we will continue.

[Whereupon, at 12:40 p.m., the meeting was recessed, to reconvene at 1:40 p.m., the same afternoon, May 21, 2003.]


A F T E R N O O N S E S S I O N [1:40 p.m.]

MR. BLAIR: We have four testifiers now. Could each of you briefly introduce yourselves and speak closely to the microphone so the folks on the internet can hear you, and then Dan Zinder, I think you’d be first after each one introduces himself.

DR. PETERSON: My name is Pete Peterson, I’m from the University of Colorado Health Sciences Center in Denver, Colorado.

DR. ZINDER: My name is Dan Zinder from the Department of Defense, I’m an otolaryngologist and I’ll be talking about Medcin today.

DR. WARREN: I’m Judy Warren, I’m from the University of Kansas, School of Nursing.

DR. MADDEN: I’m John Madden, I’m a pathologist, I’m from Duke University Medical Center, and also a co-presenter is Rog Dash who’s sitting over there, he’s also a pathologist at Duke University Medical Center.

MR. BLAIR: Very good, and Simon was alerting me that it’s going to be more convenient if you go in the order in which you show up on the agenda, is that correct? So Dan, I think you’re first off.

Agenda Item: Panel 3 - Terminology Users: Healthcare Providers - Dr. Zinder

DR. ZINDER: Thank you for having me. I don’t have any slides for you today; I thought you’d be poisoned by PowerPoint by now, and I thought I’d give you a break. I really appreciate the opportunity to come here. I tell you, I look around the room, though, and I don’t know if intimidating is the right word, but it’s pretty impressive; I feel a little bit like the guy whose house got flooded and I’m trying to explain to Noah what it’s like here. But I’ll do my best for you.

MR. BLAIR: One piece of instruction that I should give you: we’ve been pretty good at staying on time, we’ve got a lot of testifiers, but the guideline is that if you’re talking about one terminology please stay within ten minutes, if you are discussing two, 15 minutes, if you’re discussing three, 20 minutes. Sorry for the interruption.

DR. ZINDER: I’m talking about my experience with Medcin, and importantly, why I’m here: the Department of Defense has incorporated Medcin into the computerized medical record that we’re just now rolling out, so we’ve gone through a formal testing period and now limited deployment. I’ve had quite a bit of experience with it, and I’ve been the point of contact interfacing with Medicomp Systems, the owner of Medcin. We’ve run about 15 or 16 focus groups within different specialties looking at the content and the completeness of it; in fact, there’s one on oral surgery going on right now, where I left them yesterday on its second day. We’ve done training with hundreds of users, and I’ve sat side by side with, if not a hundred users, at least close to it, trying to use the system and getting various feedback and input from it. So my breadth of experience with Medcin is probably at least more than most people in DOD, so I was asked to come here. And I’m not here representing Medcin; Medicomp didn’t invite me, I was asked by U.S. CHI because of my background.

What’s important when we’re thinking about these is the framework in which I’m looking at this as a nomenclature. We’ve talked about a lot of nomenclatures this morning, a lot of different pieces of it. I’m going to reference a paper from Dr. McDonald, who’s unfortunately not here right now, in JAMIA, 1997, where he has one great quote, and he says the one remaining problem is the efficient capture of physician information in a coded form. And that’s really the reference I’m taking this from: the patient/provider interaction. That is the beginning of all medicine and the start of all of the things that we need to do, and everything else we’ve talked about so far, other than a few people and a few places, has been the ancillaries, the other things that we attach on to encounters. And I really want to focus on that patient/provider interaction, and that’s where I think Medcin has its greatest strength.

Now the challenge, because that patient/provider interaction is so complicated as we discussed, is to keep a terminology that follows the common desiderata that we’re all familiar with from Dr. Cimino’s papers, but also to allow the provider at the point of care to cross that human-computer interface, and that is the ultimate challenge. So when we get there we end up with a knowledge system, and we have to have a way that will follow the rules of knowledge systems, and there’s a pretty good literature on this. The way I look at it is from a classic description of knowledge systems, and that’s that the system has to be expressive, usable, and tractable. A lot of times those things will be mutually exclusive, and it’s very, very difficult to get a system that will take all of those and balance them nicely. Now Medcin does a pretty good job of all of that.

So when I say expressive I mean it has to say what the provider wants to say, it has to say what you need in the clinical realm. But to be usable, as we discussed, pre-coordinated, post-coordinated, however you want to go about it, it’s got to be the path of least resistance to get the documentation into the system. And then tractable, that’s all the other desiderata: permanence, single concept, non-redundant, non-ambiguous, etc., etc. And again, I think Medcin does a nice job of that.

Now Medcin in its most basic form is a pre-coordinated structured clinical terminology. There’s a lot more to it than that, but really those things are all, in my opinion, secondary to getting that clinical documentation down so that you can do all the triggers, all the knowledge management, or artificial intelligence if you want to call it that, in the background. The pre-coordinated aspect of it is critical for that path of least resistance; the pre-coordinated terms are set up in the terms that providers use in their clinical notes, so it’s a point and click interface that you can click on and it starts writing a note for you in terms that providers use. And they’re very complex concepts, they’re complete concepts. To expand on that a little bit, and maybe belabor it: the term “right” has no real meaning in Medcin, it’s always related to something else; for instance, “pain of the right index finger” is a single unique concept in Medcin, it’s not concatenated from other things, it’s just that term, and that’s uniquely identified and searchable.
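The pre-coordinated versus post-coordinated contrast can be illustrated with a small sketch. This is an editorial example in Python; the concept ID and structures below are invented for illustration and are not actual Medcin or SNOMED content.

```python
# Pre-coordinated: one selectable, uniquely identified, searchable concept.
precoordinated = {"id": "MEDCIN:10457", "text": "pain of the right index finger"}

# Post-coordinated: the same meaning assembled from atomic concepts, which
# pushes the composition work onto the provider at the point of care.
postcoordinated = {
    "finding": "pain",
    "site": "index finger",
    "laterality": "right",
}

def render(concept: dict) -> str:
    """Produce the display text for either representation."""
    if "text" in concept:  # pre-coordinated: one click, one code
        return concept["text"]
    return f'{concept["finding"]} of the {concept["laterality"]} {concept["site"]}'
```

Both render to the same phrase; the difference is who does the assembling, a terminology author ahead of time or a clinician during the encounter, which is exactly the usability trade-off under discussion.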

Now the power of that is that we can do symptom surveillance, which we in DOD are very, very interested in, from things like Desert Storm Syndrome and other things, but certainly in the world today with bioterrorism, other symptom surveillance needs, epidemics, etc., we want to be able to survey for those things, so we need the providers to be able to put that information in as they think about it.

Now some people think of Medcin as a competitor of SNOMED, and I want to really emphasize that we don’t see it that way at all. In fact, we’re using Medcin in the clinical documentation portion of our computerized medical record, but DOD just kicked in a big chunk to the National Library of Medicine for the SNOMED license, because we think that’s very important. We see them as very complementary because of their different roles: Medcin is very, very useful at the point of care for capturing that clinical documentation in the way the providers think, but SNOMED is a more granular, more atomic and post-coordinated tool that is perhaps better at the database layer, without as good an interface. The National Library of Medicine, if I can bring this up, has expressed some interest in mapping the two, and we’re very interested in getting the two mapped, because we see the value of having Medcin mapped to SNOMED not only for a higher level of granularity at the database layer but also for mapping and translating between some of the legacy systems and other systems that are out there. So we see great value in both of these systems, specifically with Medcin being at the point of care and easy to use at the point of care.

Now, thinking about Medcin and finally answering the questions from the questionnaire after that framework discussion, it’s very, very important to distinguish between three very distinct pieces: tools, content, and processes. The Medcin tools themselves, as I was explaining, meet the criteria of a knowledge system. They’re very expressive, it’s a great interface for the point of care, it’s point and click; we’ve got hundreds and hundreds of users, we’re doing a thousand encounters a day right now in our limited roll out, and we’re having great success with it in capturing data. I didn’t bring a laptop with me, but I can go back and do a search and say, of the people with essential hypertension, what’s the most common symptom they have, and I pull it up, and I know that the number one symptom being asked about is, do you have chest pain. Now, I could have assumed that was the case, but I can confirm it right now because we’re using Medcin. That’s pretty exciting, that is very exciting, that’s true symptom surveillance, and so it’s very, very expressive.

It’s usable, like I said, point and click, and you also have other tools associated with the suite that’s purchased with it. It allows templates so you can build things for your common practice patterns, a form tool to make entry easier, etc., etc., so ease of use and usability is very, very good. And it’s very tractable: it follows the criterion of permanence, and it has a very, very low level of redundancy, extremely low. The searchability, as I was saying, is outstanding. It’s also very scalable; it’s hierarchical, it can scale up, it can add content, etc.

Now the content, and that’s why I distinguish these, because the tools are fantastic, but the content, like all structured tools and all clinical vocabularies, is limited in one way or another, and everybody will find that it doesn’t say exactly what they want it to say. Some of the areas are outstanding, some are ok, and some are non-existent; those need help, and we give them a lot of help. Like I said, I’ve run 15 or 16 groups now, I don’t remember how many it is, and we’ve given them thousands of terms to put in. So that’s all fixable as long as processes are in place, and as Dr. Cimino said in the previous paper, if I can quote, we need formal, explicit, reproducible methods for recognizing and filling the gaps in content, and that gets to that process part.

Now the processes are where I think Medicomp could use some help from this Committee. They’re a relatively small company, but the product is just so good that we continue using it and we see its value; the processes, though, need some assistance, in that they don’t have a well formalized content submission process. We’ve worked out different ways with them to give them content, but it’s not really well formalized and there’s not a great feedback mechanism for us.

And then there is just standard continuous improvement for the company to do; we’ve been doing it on our own and handing it to them, but the company doesn’t do that itself. It’s more of an opportunistic thing: if people call in and ask, they’ll try to put something in. From my perspective, to be a standard, that should be something done on an ongoing basis for something like clinical terminology, where language changes and grows and develops over time.

And then just general quality assurance things. This is very rare, but occasionally something will go in where it literally just gets typed in wrong, and it comes out in a way that doesn’t make sense, and so our users simply don’t use that term until it gets fixed. Those are the kinds of things where, like I said, the company could use some help.

So in summary, I think that Medcin from our experience has just been tremendous, and it has such great potential as a national standard for the point of care; we’ve seen nothing else like it that can be used at the point of care. We’ve heard a lot about SNOMED, but I haven’t seen any interface that can be used at the point of care for SNOMED; certainly nobody wants to go through the hierarchy, as was discussed before. And that’s not to slam SNOMED by any means; like I said, we’re very interested in it, but we want something at the point of care for all the things we’re discussing.

These tools are extremely strong, and the content is pretty good and can be improved over time as long as those processes are put into place. So we see that basically the system is great; however, the company is a little bit immature as a company and could benefit from the assistance of NCVHS in helping to set up the structure of the company a little more, so that more than one person is controlling all the content, as well as the quality improvement and content improvement pathways.

Thank you.

Agenda Item: Panel 3 - Terminology Users: Healthcare Providers - Dr. Peterson

DR. PETERSON: Hello, I’m Dr. Peterson, as I mentioned before, and I’ve been called to give some information from the trenches; I’m actually a real doctor who sees real patients in real time. I work at the Student Health Center on the Auraria Campus in downtown Denver, which is administered by the Metropolitan State College of Denver. We have about 30,000 college students from the three different academic institutions on one campus. It’s a largely non-traditional campus; the average age is 27, with a few more women than men. We have a total of 13 clinicians with 9.3 full time equivalents covering six specialties, we have mid-level providers, and we see, depending on the busyness of the day, around 100 patients a day. I’d say we’re a small to medium size primary care clinic with some subspecialty expertise in house.

We’ve been using Medcin as well and have used a paperless system now for nearly a year; it’s used by all providers to enter their notes, we do prescriptions, billing and insurance are driven through the system, and we’ve done some passive surveillance and quality improvement activities along with the implementation of this paperless system.

I think the strengths of the Medcin system, and just to let you know, the vendor is Medical Manager; we’ve been using their system, a product integrated with Medcin, as well as a front and back office program called Entergy(?). Our experience with Medcin is that it’s easily used at the point of care by our physicians. We can document the entire clinical encounter from the front desk to the check-out, and multiple people can use this: not only the docs and the mid-levels, but even our MA’s use Medcin forms to enter vital signs and do initial encounters with the patient. It’s a fairly robust structured database, the system for the most part thinks like a clinician thinks, and, as Dan said, it gives you a point and click option to generate text that looks like it’s actually been written by a physician. The semantics and terminology are well organized, logical, and pretty easy to follow.

There are some weaknesses, and I want to point out that this may not be totally a problem of Medcin, because there’s always a middle person, the vendor, before the end user gets to use this. Though Medcin does frequent updates, sometimes, in my experience, these aren’t carried out as regularly by the vendors, so we don’t have access to those updates. Medcin is a small company and direct training is in general not an option, although our center has had access to that service. There’s a relative slowness to change in the terminologies, in adding new things as they come up, and in identifying shortcomings of the database, again, much as Dr. Zinder had mentioned.

We’ve been doing some data analysis, recording all of our patient entries since early August 2002, and the huge advantage of Medcin is that it allows real time, effortless gathering of clinically relevant data without extra time invested by the health care provider. It doesn’t slow down your care, because simply by documenting what you see and what you’ve done, the diagnoses you’ve made, the lab orders you’ve requested, those structured data are automatically recorded. Then you can look at them at a later time, and as I’ll show you, we’ll probably be able to look at them in real time very soon.

It helps you characterize patient populations, imbed triggers, reminders, best evidence data, references if you need them, and you can easily do outcome studies based on diagnosis. We’ve always been interested in quality improvement projects and do them quarterly, and we’ve just developed with the help of Medical Manager some passive electronic surveillance.

A disadvantage of the product that we have, and it’s not necessarily a unique Medcin problem, is that as users right now we’re unable to separately bundle data by problem in patients with multiple problems. In internal medicine, and I’m an internist in primary care, it’s not unusual to have a person present with five or six problems. The way Medcin is structured in our current electronic data program, it spills everything out in one glob, and that makes it fairly difficult. It’s more a programming problem than a Medcin problem.

What we’ve done in terms of passive surveillance comes from the fact that we’re very interested in our clinic in identifying trends. If there’s a new symptom complex that may suggest a new disease or recrudescence of an old one, we’d like to know about it early so we can get our resources focused and allocated in the right way. We just presented our passive surveillance program at Tepper(?) last week, and I want to take a minute to show you this; hang on, I’ll go pretty quickly.

We generated these structured data through provider engineered, disease specific clinical outcome documentation forms. They’re used on every patient by every provider, and as I said they have many advantages. This is a little snippet of our upper respiratory tract infection form: point and click, and you can simply get readable text generated from a point and click form. It’s Medcin based; it’s a clinical note writer that gives you a nice running text. The forms draw on a field of over 200,000 concepts, so it’s again a fairly robust form.

We looked at patients through the last quarter of 2002, de-identified them so we could examine the data while protecting their privacy, placed that data in an analysis repository, and then used a series of Structured Query Language (SQL) queries to evaluate various symptom complexes that we thought might be common in our population. We saw almost 5,000 patients in that quarter, got 150,000 structured observations, and were able to record symptoms, physical findings, diagnostic codes, and therapy terms. This is just an example of the output in men: we were able to find that in that quarter, no surprise, upper respiratory infections led the pack, with 45 seen that fall, followed by a host of other diseases. So it’s easy to find out what’s going on in your clinic.

We also then asked our database to tell us what the various complexes were in terms of symptomatology; we looked at fever, headache, and neck stiffness. Then, most interestingly, since we’d lost a couple of kids to meningitis in the college population in the Denver metro area, we wanted to know if we were missing meningitis. We looked for fever, headache and neck stiffness, and lo and behold we found 12 patients who presented with that complex. As I reviewed those charts carefully I found that two of the patients did have a viral or aseptic meningitis, but no bacterial meningitis. So this provides me as medical director an excellent opportunity to go through and look for clusters of illnesses.
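The symptom-complex query described above can be sketched in a few lines of SQL. This is only a minimal illustration; the table layout, column names, and symptom terms are assumptions for the example, not the clinic's actual repository schema.

```python
import sqlite3

# Build a toy "structured observations" table: one row per (patient, term).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE observations (patient_id INTEGER, term TEXT)")
conn.executemany(
    "INSERT INTO observations VALUES (?, ?)",
    [(1, "fever"), (1, "headache"), (1, "neck stiffness"),
     (2, "fever"), (2, "cough"),
     (3, "fever"), (3, "headache"), (3, "neck stiffness")],
)

# Find patients presenting with the full fever/headache/neck-stiffness
# complex: every listed term must be present for the same patient.
rows = conn.execute("""
    SELECT patient_id
    FROM observations
    WHERE term IN ('fever', 'headache', 'neck stiffness')
    GROUP BY patient_id
    HAVING COUNT(DISTINCT term) = 3
""").fetchall()
print(sorted(r[0] for r in rows))  # patients 1 and 3 match
```

The `HAVING COUNT(DISTINCT term) = 3` clause is what turns a list of individual symptoms into a co-occurrence query over a whole chart.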

We’ve now developed a program where we can do time based monitoring. This data looks at the symptoms of fever and cough, not only for the last quarter of 2002 but also the first quarter of 2003, and as you can see, fever and cough were fairly common throughout the fall, and then, as people returned from holiday in the first part of January of the winter semester, we had a sudden rise in fever and cough, and we started seeing a lot of influenza. So, very nice, this tracked perfectly with the outbreak of influenza on our campus.
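Time-based monitoring like this amounts to bucketing symptom-complex encounters by period and watching the counts. A minimal sketch, with wholly invented encounter data and an assumed (date, symptom-set) record shape:

```python
from collections import Counter
from datetime import date

# Hypothetical encounter records: (visit date, set of recorded symptoms).
encounters = [
    (date(2002, 10, 3),  {"fever", "cough"}),
    (date(2002, 11, 14), {"headache"}),
    (date(2003, 1, 8),   {"fever", "cough"}),
    (date(2003, 1, 21),  {"fever", "cough"}),
]

# Count fever-and-cough presentations per month; a sudden jump in one
# bucket is the kind of seasonal spike described in the testimony.
by_month = Counter(
    d.strftime("%Y-%m") for d, symptoms in encounters
    if {"fever", "cough"} <= symptoms
)
print(sorted(by_month.items()))
```

Plotting those monthly counts over two quarters is what reveals the January rise described above.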

Conclusions regarding Medcin: I think the vocabulary does have a lot of usefulness at the point of care, and it does provide a structured clinical documentation tool. The reporting possibilities, I think, are only limited by your imagination. The key to successful acquisition of structured data, and I want to emphasize this from a clinician’s point of view, is that if you’re going to get this kind of data at the bedside you’ll have to create documentation tools that are flexible, multi-purposed, and adaptable to use by the entire health care team: the physicians, the mid level providers, the support staff, and administration.

My thoughts are that currently there’s probably no single structured nomenclature that appears to fulfill the needs of those who want to, and I think we should, create a standardized national database. Similarly, there’s nothing out there that totally meets the needs of the clinician on the front line who will generate these data. Therefore, it seems to me we need to identify the strengths of current terminologies and identify and correct existing gaps, and my guess is it might be useful to have some sort of hybrid system that will serve the needs of all parties.

Thank you very much.

Agenda Item: Panel 3 - Terminology Users - Healthcare Providers - Dr. Warren

DR. WARREN: I’m going to talk to you about a slightly different system than what we’ve heard about today. I work at the University of Kansas School of Nursing, and we’ve started a new project called SEEDS, which stands for Simulated E-hEalth Delivery System. What we were trying to do, and since obviously I can’t do that I’m going to use a button, was to make use of an opportunity in Kansas City around one of the IOM recommendations that came out when they published their report on the Quality Chasm. One of those recommendations was that in order for us to really get electronic records within our system, we need to start teaching students with those records. So as our students learn their workflow habits, they learn them as students; they don’t learn one workflow as a student and then come into practice and have to learn a different workflow. And by workflow I’m talking about how they manage data. At most universities, especially in schools of nursing, most of the workflow with our students is still paper and pencil.

So what we were trying to do was to make use of this, to take a full, live patient information system and turn it into an educational tool. For that we had to have a corporate partner, and my partner has already presented this morning, because they also are in Kansas City: that’s Cerner Corporation. Basically this was one of those partnerships that occurred as most innovative ones do, between the CEO of Cerner and the dean of our school wanting to meet this IOM challenge. At that point it became my problem to try to make this happen.

Now everyone already knows what the uses of a terminology are, but what I’m going to start talking about is why they’re also useful in working with our students. We all know we want to document clinical detail over time: problems, interventions, and outcomes, and I will also add to that findings. Now, in one of my past experiences I served on the ANA Committee for Information Infrastructure, and we encouraged SNOMED CT to come in and apply as a recognized terminology, because we had no nursing terminologies that captured findings. SNOMED does that in spades, and so with that, SNOMED RT and then, just a few weeks, I guess about a month ago, SNOMED CT received recognition.

So one of the things I wanted to do in a nursing system was to really highlight how we used nursing terminology, but also highlight how we teach the students not only to use their own terminology from nursing practice but also how to communicate with the other disciplines. So we also wanted to go in with structured data entry, with flexibility. One of the things we’re finding is that in structured data entry there is a difference between the novice and the expert. Remember, novices don’t know anything, and so a lot of the terminology consists of brand new words for them. As was mentioned this morning, sometimes even our experts don’t always use words in the same way, so we’re finding this is a time to start teaching the students the terminology.

Retrieval of coded data using multiple attributes at different levels of specificity: one of the things we’re beginning to find is that as faculty start incorporating the electronic record in the teaching of the students, we can actually query it and start pulling together reports that tell us what concepts the students have interacted with. Now, the only way to get that data before was to look through faculty notes or actually sit in on our lectures, because I know of very few faculty, and I am like this myself, who actually give the lecture strictly from their notes; you think of a story to illustrate a point, you see students not getting what you want and so you come up with different examples, and things like that, so you’re not always sure what you’ve totally covered within classes. And then if you have several faculty teaching, they may omit one particular content area. Well, now we have a way to check that.

We wanted to teach the students, too, to be useful consumers of electronic systems, and so the need for decision support at various levels was a key advantage in what we’re trying to build. And then there is a shared understanding across various disciplines of what the terms mean, so that our students begin to see how the same data, especially findings, are used by nurses, physicians, dieticians, etc., and that’s one of the things that we try to do.

We want to take a look at identifying and monitoring health outcomes. Now, I will say that at this point SNOMED is in a developmental phase there; probably one of the most difficult structures to represent within an electronic record is the whole notion of goals and outcomes, because not only do they have their own terminologic model but they also have a component of timing, such as a goal being something you’re going to measure in the future, and we need to have our systems recognize that. So some of the work that I’ve been doing in consulting with SNOMED, and in my work at HL7, is this whole question, already talked about today, of where the dividing lines are between things that should be in information models and things that should be in terminology models. I think I’m still at the point where it’s a very gray, fuzzy area, and depending on the day and what I’m doing, that will determine where I throw some of those characteristics.

We’re also beginning to find that we can actually check the quality of our education and provide benchmarking by watching what the students do within our system, and that for me is a very interesting idea. We’re in the process of working with Cerner so that this application may actually be available for other schools, and when that happens the schools can start benchmarking against each other, and we’ll be on the same basis that all the clinicians are on currently: when you have two information systems, how do you query them and get the same data, because you know that they’re going to diverge because of some local adaptation.

Supporting research activities: we have a lot of faculty, and one of the visions that I have for the system is that we are at the point where they can start putting their research findings directly into this database for students to interact with, trying to come up with a way, or a proposal, for getting research data into clinical systems on a much faster track than what we currently have. And then there is enabling externally specified statistics, in this case looking not only at what happens with the virtual patients we have on our system, but at how the students interact with those virtual patients and what data they’re pulling together.

And then identifying individuals in need of proactive intervention, in this case it’s not the patient, it’s the student. What student behaviors do we see that tell us that they are not getting how to go about documenting clinical care?

Now, SNOMED CT was chosen for a variety of reasons, one being my own interaction with SNOMED, but what it does is put together a suite of languages and terminologies that nurses need, so we need SNOMED for the findings. If you’re in acute care, you’re probably using NANDA, NIC, and NOC, and I think McKesson has already talked about that. The maps to NANDA and NIC are currently in SNOMED; NOC will be placed in there this fall. We also have the Perioperative Nursing Data Set, which has been mapped into SNOMED, though only on the diagnoses and the interventions. The goals are going to wait until this fall, because we’re still trying to figure out the best approach and strategy for mapping goals and outcome statements into a terminology and between each other. And then we have Omaha and the Home Care Classification System.

What is also an advantage to us, as we teach students to operate in different venues, is that Omaha, for instance, is a terminology optimized for home health care, and what we can show the students is how you can cross back and forth between these terminologies because of the maps that SNOMED has created.

Now, this is one screen. What I will tell you at this point is that I can’t use SNOMED CT in my application; Ian has already talked about that, there are lots of challenges in getting this into a piece of software, and right now the only software I have to work with is the software that Cerner, our partner, provides. But what you can see on this screen, although it’s very small, and let me see if I can use my arrow here, is a nursing diagnosis of ineffective coping. What you can tell here, and this is what we use with the students to start teaching about vocabularies, is that they have the ability to reveal what nomenclature or terminology the problem came from. For this one, what’s currently loaded in our database is SNOMED International, or version 3.5, and in that one there’s a direct one-for-one map between NANDA and SNOMED, because they took NANDA and just pasted it in at that point. What is there now is very different, because the time they did this was about ten years ago, before NANDA was copyrighted.

And then you have the NANDA 2002 terminology term with the codes in there and you can see that they’re the same.

Now, this is what SNOMED CT can provide me: I can now also tell the student that I just need to enter it once, and I get the SNOMED code, I get the NANDA mapped code, the Perioperative code, and also the Home Health Care Classification code. I also get a list of synonyms that all mean the same thing. If I’m designing the interface I can use one of those synonyms and still have it refer back to the SNOMED CT code. I also know a little more about my diagnosis now, in knowing that it’s a finding and that it’s related to coping, so I’m getting more information that I can use with the students.
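The enter-once, map-everywhere idea described above can be sketched as a small data structure. The codes below are placeholders, not actual SNOMED CT, NANDA, PNDS, or HHCC identifiers, and the record shape is an assumption for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class ConceptEntry:
    """One terminology concept with its synonyms and cross-maps."""
    snomed_id: str
    preferred_term: str
    synonyms: list = field(default_factory=list)
    maps: dict = field(default_factory=dict)  # terminology name -> mapped code

# A single entry for the "ineffective coping" diagnosis; entering it once
# makes every mapped code available without extra data entry.
ineffective_coping = ConceptEntry(
    snomed_id="SNOMED-PLACEHOLDER",
    preferred_term="Ineffective coping",
    synonyms=["Coping, ineffective"],
    maps={
        "NANDA": "NANDA-PLACEHOLDER",
        "PNDS": "PNDS-PLACEHOLDER",
        "HHCC": "HHCC-PLACEHOLDER",
    },
)

# Any synonym in the interface can still resolve to the same SNOMED record.
for terminology, code in sorted(ineffective_coping.maps.items()):
    print(terminology, code)
```

An interface built this way lets the designer display whichever synonym fits the screen while the stored code stays constant.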

Here is another example showing how SNOMED goes back and forth, and I think everybody has talked about this purpose, but this is a little bit more colorful: it allows us to go with several different codes, and as you can see here we have the billing codes, the different nursing code maps, and then the SNOMED maps.

Now, this is the need that we have for terminology and what we’re working with students on. It’s very important for them in order to document this material, and as with practitioners, or expert practitioners, they want something that’s very fast; they don’t want to sit there and document it. What we’re trying to tell them is to remember that you need to give the next clinician all the data they need to make decisions about the patient. In order to do that, you need a structured format so that you don’t forget things; you need structured data entry. And you need to be sure that what you enter is accurate, which is something they do have control over.

What we’re hoping from the testing of this is that it will reveal the impact that nursing care has in clinical care and also on nursing staffing and administrative values. So much of what nurses do is preventative that you don’t see what a nurse does until she’s not there, and then you see all kinds of problems. What we’re trying to do now with the structured language is to come up with ways to show what happens while the nurse is there, so we can start seeing the trends.

We also wanted very clinically, patient focused assessments that were in terms the students see in the textbooks and hear about in lectures. We wanted to capture the surveillance notion. And then, fundamentally, what we’re hoping to do is make students aware that when they come into a clinical discipline they are also learning that discipline’s language, just as if you went to France you would learn French; so you learn your language of nursing, but you also need to learn the language of medicine, pharmacology, occupational therapy, and physical therapy, because you are a team brought together to give patient care. What we see in selecting SNOMED CT is that it gives us a foundation into which we can bring all the concepts that are used by the different disciplines.

Thank you.

Agenda Item: Panel 3 - Terminology Users: Healthcare Providers - Dr. Madden and Dr. Dash

DR. MADDEN: I’ll be talking today about the pathologist’s perspective on the role of standard terminology in the electronic health record. As a pathologist, the standard terminology for anatomic pathology for many years has been SNOMED, and so I’m primarily going to talk about SNOMED. Incidentally, I should make a disclosure that I’m on the SNOMED Editorial Board, and Dr. Dash, my co-presenter, is on the SNOMED Authority. But actually, I’d like to make some points about things that SNOMED doesn’t have that I think are really important for the electronic health record, things that SNOMED needs some improvement in, in order to support an electronic health record, and these are some things that people have already mentioned.

I think that a standard vocabulary is most effectively leveraged when it’s used in conjunction with an expressive syntax for rendering statements about cases. Dr. Peterson talked about how difficult it is to code a report with multiple diagnoses without a way to, instead of listing codes at the end of the report, actually make statements about codes and which codes apply to which elements of the report. I think that’s very important, and there are some other terminologies from which SNOMED could certainly take ideas and extensions.

And the second thing is that there is an enormous amount of medical information embedded in the context of medical statements, and in order to access that contextual information I think we need a standard document architecture model that permits us to make references to medical statements in the context in which they occur. I’m going to talk a little bit about the HL7 Clinical Document Architecture, because I think this is an important ancillary technology to SNOMED. So I’ll talk first about some terminology challenges and solutions, and then about some pathology systems we developed at Duke using SNOMED CT as the backbone.

When we look at the pathology electronic medical record, there are certain recurring issues that come up all the time. One is that pathology diagnoses are usually formulated with an enormous degree of localization to suit different practice environments, terminology preferences, clinical contexts, and so forth, so that any electronic medical record terminology solution has to have strong support for comprehensiveness and for synonymy.

Secondly, pathologists are being required ever more to include additional prognostically relevant detail in their reports, so reports have gotten incredibly complicated, and any terminology needs to be interoperable with templates and subsets that assist with user input.

Clear support for complex searches is necessary, and this requires a terminology that has strong hierarchical pre-coordination. And finally, pathology reports go not just to clinicians but to many, many data consumers, including tumor registries, state health departments, and medical researchers, and so we need a terminology that preserves the maximum amount of semantic payload and a way to conserve context as much as possible in medical reporting.

The reason that SNOMED CT is enormously well suited to pathology reporting is its comprehensiveness and its support for synonymy: it has over 325,000 concepts, nearly a million hierarchical relationships, and support for post-coordination in the sense of attributes and roles in these categories. Also, SNOMED CT is enriched by the fact that it has cross mappings to a number of other standard terminologies, including ICD and LOINC, and Dr. Dash is going to talk briefly at the end of the presentation about the cross mapping to LOINC.

When we look at requirements for our terminologies, this rich built-in relationship model, or the sort of world view of the terminology is definitely very important, and SNOMED CT definitely thinks like a doctor thinks. The pre-coordinated relationships and the hierarchies are the kind of relationships that a physician would think about, so its pre-coordination is very strong.

I mentioned that we need conceptual linkage mechanisms, or ways of actually expressing medical assertions. I think we need to get away from a model where codes are simply appended at the end of the report as a group; we need to get to a model where codes can be expressed using a syntax in relation to each other. Not just lung carcinoma, left upper lobe at the end of the report, but actually a standard way of understanding that the lung carcinoma is in the left upper lobe, that this metastasis in the patient’s mediastinum comes from this lung carcinoma in the patient’s upper lobe.

So one type of post-coordination, the simple kind, is post-coordination via modifiers. We need a further extension to post-coordination, and SNOMED has incipient support for this, where we can post-coordinate concepts to other concepts and basically constitute assertions, or sentences, about particular medical instances in a report.

And then finally, in order to preserve medical context, the highest level of post-coordination is a way to communicate about assertions themselves, to attach concepts to assertions: to say that this observation of the lung cancer in this patient was made in such and such a document on such and such a date. Dr. Warren mentioned timing, the ability to coordinate information about timing with reports, and this is one aspect of this way of post-coordinating concepts to assertions.
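The levels of coordination described above can be sketched as nested data. The concept labels, attribute names, and document identifier below are invented for illustration; they are not real SNOMED CT codes or roles.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Assertion:
    """Concept-to-concept post-coordination: subject - attribute - value."""
    subject: str    # e.g. a finding concept
    attribute: str  # e.g. a SNOMED-style role such as a site relationship
    value: str      # e.g. a body-structure concept

# Post-coordinating concepts to each other: the carcinoma and its site
# form a statement, not two disconnected codes at the end of a report.
finding = Assertion("lung carcinoma", "finding site", "left upper lobe")

# Attaching context to the assertion itself: which document made the
# observation, and when. This is the highest level described above.
context = {
    "assertion": finding,
    "document": "pathology-report-001",
    "date": "2003-05-21",
}
print(context["assertion"].subject, "->", context["assertion"].value)
```

Structuring the record this way keeps the statement ("the carcinoma is in the left upper lobe") and its provenance (document, date) queryable rather than buried in narrative text.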

The support for templates I already mentioned, and it’s quite important; I’ll show you an example of that. And then finally, there’s this concept of the structured document model. SNOMED CT is not a document structure, it’s a terminology, but I think that we’re going to need, increasingly in the future, structured document models like the HL7 CDA that allow you to reproducibly address medical statements in the context in which they occur, and this will be very important for constructing the web of statements that constitutes the electronic health record.

I’d like to show you two examples that use SNOMED CT that were developed at Duke in the pathology domain: the Medical Assistant on the World Wide Web and the Cancer Reporting Protocols system. The Medical Assistant on the World Wide Web was developed by Dr. Dash; I’ll refer to it as MAW3. It’s a prototype for a web-based, multi-institutional medical database. It has support for data entry in the form of templates and also in the form of a natural language processor that encodes free text at high granularity. It has support for database storage, which is done using the SNOMED CT model, and for data reporting in the form of dynamic HTML output. It is very flexible and supports multiple data paradigms. It’s functioning as the clinical autopsy information system at Duke, but in addition it’s been modified to serve as a tissue banking database that’s linked to clinical information. It’s even used at Duke as a smart database for tracking consultation cases. SNOMED CT is the underlying terminology that encodes all data elements.

This is an example of a data entry tableau in MAW3. This is actually an autopsy report, and on the left is a panel that encapsulates the structure of the document, the templates; but it’s a hierarchical template, including sections within sections and template fragments inside at the lowest level of the sections. On the right is a dynamically generated preview of what the final report is going to look like; the gray background text is generated automatically by template choices. The template choices, and this is a blow up of the top of the screen, appropriate to the individual section being worked on at the time, appear at the top; they’re implemented as drop down menus, fill in the blank boxes, sort of standard GUI controls.

Dynamically, as the user makes his choices, the text is generated in here, and this text, if the choice is a standard choice this text is pre-encoded in SNOMED CT. In cases in which the user includes free text we have a sophisticated natural language processor. This is the natural language processor operating on this fragment of text up here, the text being encoded is in black. The system maintains at all times a link from the source text to the encoded representation, so in this way we maintain the context, so that the location of the text that generated these codes is actually maintained in the database.
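A minimal sketch of the linkage described above, keeping each generated code tied to the span of source text that produced it so the context survives into the database. The structure and all identifiers are invented placeholders, not the actual MAW3 design or real SNOMED CT codes.

```python
from dataclasses import dataclass

@dataclass
class EncodedSpan:
    start: int           # character offset of the source text in the report
    end: int
    source_text: str     # the fragment the NLP encoder operated on
    concept_ids: list    # placeholder concept identifiers (not real SNOMED CT)

report = "Mild steatosis of the liver. Bile ducts unremarkable."

# In the real system an NLP encoder would assign these; hand-made here.
spans = [
    EncodedSpan(0, 27, "Mild steatosis of the liver", ["111111111"]),
    EncodedSpan(29, 52, "Bile ducts unremarkable", ["222222222"]),
]

def codes_with_context(spans):
    """Return each code together with the location and text that produced it."""
    return [(cid, s.start, s.end, s.source_text)
            for s in spans for cid in s.concept_ids]

for cid, start, end, text in codes_with_context(spans):
    print(cid, (start, end), text)
```

Because every code carries its source offsets, a later reader (or query) can always recover where in the report a given encoding came from.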

The natural language processor results tableau is shown over here, and here we have a browser which allows you to browse related SNOMED CT terms and make manual choices if the automatically generated encoding is not appropriate. This browser allows you to drop down identifiers for the SNOMED CT meaning, browse the hierarchy, and view the canonical forms of the SNOMED output, so this is all done on a SNOMED CT backbone.

Finally, the other system that we’ve implemented at Duke is a Cancer Reporting Protocols system. The College of American Pathologists has specified in its standards required data elements in oncology diagnostic reports, and the American College of Surgeons is going to require compliance with these reporting standards beginning in 2004. These protocols are over 30 site specific protocols with very complex cascading requirements that pathologists in trials have found difficult to implement manually. SNOMED CT has been very cooperative with the CAP Cancer Committee in prospectively developing codings for all the data elements in these oncology protocols.

This is an example of a fairly simple oncology protocol in a written form and you can see that it has a natural, it has a sort of structured document type style to it, but there’s a lot of different choices and some of them are mutually dependent choices, so it’s hard to implement without assistance.

The Cancer Reporting System at Duke operates on a web enabled backbone so that the entry assistant can pop up over whatever LIS the pathologist is using. It operates off of a relational database back end that includes all the elements, the SNOMED CT codings for these elements, and all the item logic that encapsulates all the interdependency of the items. And there’s a visual template construction environment for creating new templates.

This is an example, quickly, of the screen that pathologists would see; these items dynamically change depending on the choices that the pathologist has made, there’s hide and reveal, and there are computed elements. There’s also range checking and type checking.

This is the template construction environment. In the upper left are a series of response constraints, reusable response constraints or template items that can be dragged and dropped to create new templates. This panel shows the items that are attached to the template fragments with their SNOMED CT identifiers, and there’s a browser screen here to look at the SNOMED CT synonymy and to make changes to the concept ID’s as necessary.

One of the nice things about this system is that it offers one input template but has many output representations. The system was originally designed to produce text output, either unformatted or formatted as RTF, which is pushed to the clipboard so the pathologist can paste it into their own native system, but we also have support for XML output, including output in an HL7 CDA level one wrapper, and this is a proprietary XML encoding. CDA level two is going to provide more support for detail in the body, and we’re in the process of migrating some of our proprietary mark-up to CDA level two mark-up. And also the content can be customized in terms of the order and the selection of items so that you can generate a different diagnostic appearance if it’s going to a clinician versus a tumor registry.
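The "one input template, many output representations" idea can be sketched as follows. This is a hypothetical illustration, not the Duke system; the element names and the code value are made up, and the XML is only loosely in the spirit of a CDA-like wrapper.

```python
from xml.sax.saxutils import escape

# One structured input: items with labels, values, and (optional) codes.
items = [
    {"label": "Histologic type", "value": "Adenocarcinoma", "code": "999001"},
    {"label": "Tumor size",      "value": "2.3 cm",         "code": None},
]

def to_text(items):
    """Plain-text rendering, e.g. for pasting into a native LIS."""
    return "\n".join(f"{i['label']}: {i['value']}" for i in items)

def to_xml(items):
    """A simple XML rendering of the same items (invented element names)."""
    rows = "".join(
        f'<item code="{i["code"] or ""}">'
        f'<label>{escape(i["label"])}</label>'
        f'<value>{escape(i["value"])}</value></item>'
        for i in items)
    return f"<report>{rows}</report>"

print(to_text(items))
print(to_xml(items))
```

Adding a third renderer (say, RTF, or a differently ordered view for a tumor registry) only means adding another function over the same structured items.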

I’d like to close by imagining a future medical record, and I hope that this is going to work, it looks like it’s not going to work on your screen, but I think what’s very important is that there’s going to be linkage, the ability in a future medical record to link items across multiple documents, to link related items within each document, and to relate all these to the timing and to the patient, and I think that SNOMED CT’s richness supports number one and number two, and that all terminologies need to look for additional assistance from HL7, from the clinical document architecture, and other components to do number three.

So in summary, SNOMED CT has many advantages for coding the electronic medical record including pathology, broad domain coverage and rich hierarchical relationships, but we’ll need richer support of post-coordination in the future, and to optimally preserve context we need all three: vocabulary standards, syntactical standards, and document standards.

DR. DASH: I’m just going to talk very briefly about LOINC and mapping to LOINC, which is really my only exposure to LOINC. The only reason that I got involved with LOINC was because we have a huge clinical repository of laboratory data that’s stored in multiple different formats and I wanted to bring them together. I thought SNOMED CT was the way to do that, and indeed I can do it in SNOMED CT, however, LOINC provides one very valuable feature, and that is true post-coordination. SNOMED CT does not provide a way of taking concepts and saying this is the order in which you put them, this is the way that you express them so that every system can read the output. LOINC is very explicit, very specific, about how the codes should be generated, and I fully support Dr. Zinder’s statement about these not being mutually exclusive. The animations haven’t been working too well here, but in fact this one is working. This is basically showing the hierarchical relationship of SNOMED to the LOINC codes at the bottom for a rickettsial antibody serologic study, and one of the criticisms of LOINC was that it was flat, although it had these categories for organizing and aggregating it was a relatively flat database. SNOMED provides a hierarchy on top of LOINC, which allows you to aggregate data, so in fact they’re very complementary, LOINC providing a nice specific manner in which to express laboratory data in a common universal manner, providing the post-coordination, and SNOMED providing aggregability.
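The complementarity Dr. Dash describes, specific LOINC-style leaf codes with a SNOMED-style hierarchy above them for aggregation, can be sketched like this. All codes and parent links below are invented placeholders, not real LOINC or SNOMED CT content.

```python
# Hand-made hierarchy: leaf codes roll up under grouping concepts.
parents = {
    "LOINC-A": "serology-panel",       # placeholder leaf codes
    "LOINC-B": "serology-panel",
    "serology-panel": "microbiology-study",
}

def ancestors(code):
    """Walk up the hierarchy, collecting every grouping concept above a code."""
    out = []
    while code in parents:
        code = parents[code]
        out.append(code)
    return out

def aggregate(results, group):
    """Keep only results whose code rolls up under the given grouping concept."""
    return [r for r in results if group in ancestors(r["code"])]

results = [{"code": "LOINC-A", "value": "positive"},
           {"code": "OTHER-Z", "value": "negative"}]
print(aggregate(results, "microbiology-study"))
```

The leaf codes express each test precisely; the hierarchy above them is what makes a broad query like "everything under microbiology-study" possible.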

The last point that I would make relates to SNOMED CT, LOINC, and Medcin in general, and that’s that this Committee has to pick, or recommend, a standard, and choosing a standard implies selection of one foundation upon which to build and tie together all of the others, at least in my mind, and in this respect I think SNOMED CT currently provides the most developed starting point.

SNOMED CT needs to be mapped to these other terminologies, like LOINC and Medcin, in an effort to make the massive amount of content that’s in SNOMED CT usable in real clinical world settings and in the different applications. Further, as was touched on earlier today, SNOMED CT needs to be mapped to administrative terminologies in order to promote adoption and eliminate duplicate coding efforts, because as a physician, I’m only going to do this once. So my recommendations to the NCVHS would be to promote SNOMED CT as a base standard, promote mappings to domain specific standards such as LOINC and Medcin, and to promote development of the specific post-coordination that’s required in SNOMED CT to make it usable, using the standard for messaging and communication, HL7.

Thank you.

MR. BLAIR: Thank you. I think we have about 25 minutes for questions. Simon, maybe you can --

DR. COHN: Steve, are you raising your hand?

DR. STEINDEL: No, actually I’m not.

DR. COHN: Ok, Walter you’re raising your hand.

DR. SUJANSKY: I’ll start this time. Well, it has been great that we’ve heard about Medcin and SNOMED in use in the same panel, because in my opinion they represent the yin and yang of clinical terminologies to some extent, with Medcin being primarily designed for structured data entry with some support for reporting and SNOMED perhaps being designed for reporting with some support for data entry, just as my personal opinion, but I’d like to probe that opinion with the panel members now. So first I’d like to ask Dr. Zinder and Dr. Peterson some specific questions about the use of Medcin for reporting.

My understanding is that Medcin does not have a multi-hierarchy per se, every concept is only organized in one hierarchy, so for example viral meningitis, you could not search for it using both the abstraction infectious disease and the abstraction disease of the CNS if you wanted to run a query on patients who had either one of those. Is that the case in your experience? And secondly, is it a practical problem that that capability doesn’t exist from the point of view of reporting?

DR. ZINDER: We’re just really starting to get into the reporting aspects of it. What we have, as was discussed earlier, is a transactional database, we’re using the 3M product in our back end and we’re storing the Medcin terms and the concepts in one NCID, etc., I can give you the details, but now we’re transferring that over into a reporting tool, a big data warehouse for extraction. It’s flat in the respect that you describe and it hasn’t been a problem, but like I said we haven’t explored it tremendously. However, more importantly, it is hierarchical with parent/child relationships for symptoms and things that we’re very interested in extracting out of it. For instance if I said toe pain it may say toe pain referred to the knee or something. I could take a parent and all the children and bring those up in a reporting tool, which is a little bit better. It’s different than what you’re asking, but it’s a different kind of hierarchy.

DR. SUJANSKY: You’re talking about reporting still?

DR. ZINDER: Right, exactly, so what I’m saying is, for instance pain in the ear, let me just say that, or ear anything, the ear since I’m an otolaryngologist, it’s easier for me to talk about. So if I want anything about what went wrong with the tympanic membrane, there may be 20, 30 different concepts related to the tympanic membrane, whether there’s a hole, whether there’s fluid behind it, whether it’s red, whether it’s bulging, etc. Well, I don’t want to go search for every one of those. Because it’s hierarchical and those are all children of tympanic membrane, I can search for tympanic membrane and all these children and pull all those things out automatically. So it’s hierarchical in that respect, but the diagnoses, and I may be speaking beyond my limit of knowledge here, but that’s never stopped me before, the diagnoses are not hierarchical, I couldn’t say viral illness and pull up all the viral illnesses that are diagnoses. But it hasn’t been a problem and I don’t expect it would be, because we’d be able to pull specific ones that we choose.
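The parent/child search described here, asking for a concept and pulling all of its children automatically, amounts to a descendant query over a hierarchy. A minimal sketch, with made-up concept names standing in for the real terminology content:

```python
# Hand-made child links; in a real terminology these come from the hierarchy.
children = {
    "tympanic membrane finding": [
        "perforation", "fluid behind membrane",
        "erythema of membrane", "bulging membrane",
    ],
}

def descendants(concept):
    """Depth-first collection of a concept and everything beneath it."""
    found = [concept]
    for child in children.get(concept, []):
        found.extend(descendants(child))
    return found

# One query over the parent covers every child concept automatically.
print(descendants("tympanic membrane finding"))
```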

DR. SUJANSKY: Dr. Peterson.

DR. PETERSON: I would only add what Douglas MacArthur once said when he was being asked some very complicated questions by a bunch of reporters: I don’t understand it but I’m against it. But to answer your question, I think really there is that problem, you can’t kind of search in reverse, ask a very general question and get a specific answer. Or in the opposite way, ask a very specific question about is there bacterial meningitis and then try to focus more and more clearly on that. And that is a relative weakness. The beauty of SNOMED is probably that it is so granular, but in terms of a clinician it becomes so granular, ground so fine it becomes dust and you can’t grab hold of it, as a clinician trying to document patient care. And so yes, it has some shortcomings, and I think you identified one, but in a clinical sense it’s not a practical problem because I can identify symptom complexes which really speak the language of a physician about a particular disease or disease possibilities.

DR. SUJANSKY: To follow up, the flip side of that question for Drs. Warren and Madden is using SNOMED CT as your basis of your applications for structured data entry, have you encountered problems in usability? For example, I understand Dr. Warren the primary users of your system are students right now who are in some sense instructed to use the system, are there others using the system on a voluntary basis to do structured data entry versus their usual form of documentation? And similarly Dr. Madden, are there pathologists who are voluntarily using the system rather than dictating, dictation and transcription or is that not an issue, not a problem?

DR. WARREN: Let me go first because I think mine’s easier because I have novice users, and novice in every way. What we are trying to teach them is the value of structured data entry and the structured data, say that they have a patient who has a respiratory illness and they’re doing a respiratory assessment, so we have a form, and it just dawned on me, I should have brought a screen shot of one of our forms, although I think it’s pretty typical of a lot of the other vendors, is that one of the items might be respiratory effort, and so you have a list of the terminology for respiratory effort. What I’ve done is just pulled those out behind them, I know what the SNOMED codes are, but the students themselves don’t see that. They’re more interested in this is terminology I should probably know, and I don’t have a data dictionary, a medical dictionary with me so I provide that, too, in the background because I’ve got the ability to right click on terms and can show them what a definition is. Again, using a functionality that you would not see in any clinical area. If I wanted to and once we have the ability to actually put the codes up there, I could also put the code numbers if they’re interested in learning code, although I don’t think our students would since they’re first semester and second semester students. But so far I’ve not found a limitation, I’ve been able to find the codes to underlie the concepts that I want to present on these forms. The students are interacting with them well but then they don’t know any difference yet. They start learning differences as they rotate out into different clinical areas.

DR. SUJANSKY: So in a sense, just to follow-up briefly on that, you’ve defined a form that’s really specific to the function that doesn’t necessarily follow the structure of SNOMED --

DR. WARREN: Right, they’re not having to look up any terms. The only look up of terminology they have to do is on the problem list, and Cerner provides a really good, easy look-up facility for that that’s quicker than anything I’ve ever used before.

DR. MADDEN: I would say that you’ve put your finger on the reason why I think enhanced post-coordination is going to be necessary to support a medical record. When you’re inputting a particular patient’s problem, there’s always a context that conditions the meaning of the terms, so there’s never going to be a concept that’s 100 percent applicable. SNOMED has been extremely helpful in coming up with new terms to match each entry in these, let’s say, cancer reporting protocols; some of the concepts that the committee came up with didn’t have SNOMED concepts, and so SNOMED has gone and enumerated concepts to fit them. But at a certain point that model breaks down, you can’t create a new concept for every single report. And that’s why I think in the future we need to get into a situation where we can actually go from tagging a medical diagnosis with a single code to a model where we go to taking concepts from a standard ontology and stringing them together in statements about what’s going on. And I wouldn’t say that this is a weakness of SNOMED, I don’t really think that there’s any medical terminology out there that’s comprehensive, that supports that kind of compositional coding at the moment.
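The compositional coding Dr. Madden argues for, combining concepts from an ontology into statements instead of minting a pre-coordinated code for every case, can be sketched as a focus concept plus attribute/value refinements. All names below are illustrative, not real SNOMED CT post-coordination syntax.

```python
def post_coordinate(focus, **refinements):
    """Build a statement as a focus concept refined by attribute-value pairs."""
    return {"focus": focus, "refinements": refinements}

# One composed statement instead of one pre-coordinated code per possibility.
stmt = post_coordinate(
    "fracture",                    # focus concept
    finding_site="left femur",     # attribute : value refinements
    severity="severe",
    context="family history",      # the context problem discussed above
)
print(stmt)
```

The same focus concept can be reused under any combination of refinements, which is exactly what a new pre-coordinated code per report cannot do.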

DR. WARREN: I’d like to just follow-up on that. One of the experiences we’ve had as SNOMED made a commitment to map the nursing terminologies, one of the things you’ll see is in the January 2003 release there are 1,000 new nursing terms, so there’s been a huge effort on SNOMED’s part to really work with the developers of the nursing classifications to bring their concepts into SNOMED. One of the things that I’ve been working on with the SNOMED Editorial Board, though, is all these terms we call context dependent, and I noticed that on some of the examples John had up there, like no atherosclerosis, I mean that’s probably one of the biggest problems we have, do you have a whole separate vocabulary or terminology for things that aren’t there, because you have atherosclerosis, do you have it in the patient, do you have it as a history of the patient, do you have it as a history of the family, do you not have it, so this notion of how to represent context and not make the vocabulary so huge that we can’t deal with it. Those are all things that are being addressed by SNOMED, and I have to say it’s one of the strengths I see of SNOMED CT, their ability to bring together working teams and the right level of knowledge to handle some of these and come up with a good solution, and the ability to be multidisciplinary about it, I think that’s a major strength.

DR. MADDEN: The negation issue is a great example, and actually I can tell you that the SNOMED Editorial Board is having active discussions about negation and this issue of implementing negation consistently as a post-coordination rather than as a pre-coordination.

DR. DASH: I just want to make a quick comment about the end users; users of the system, I’ve found, don’t want to deal with the terminology at all. I don’t show them codes ever unless they’re a researcher and they want to drill down into data, so when a resident is putting in an autopsy, he wants to do it and get out of there as quickly as possible, he’ll do the pull down list for a normal liver microscopic description, and it will put in codes for the bile ducts, for the appearance of the liver, for ten, maybe 15 different SNOMED codes that he never sees, but he sees the text that’s generated from it and knows that inherently those codes are part of the database.

DR. SUJANSKY: So there’s a mapping you’re saying under --

DR. DASH: There’s a mapping, which took a lot of work to do, by the way, and is something that, at the point of care, something like Medcin could provide.

DR. COHN: Well we’re going to move on to other, you’re done, Steve, Gail, things are getting right with the universe, we have Steve back on now, and I’ll go after Gail.

DR. STEINDEL: No, this is more or less a clarification question for Dr. Madden. From what I gathered from your talk at Duke you’re using SNOMED CT primarily, exclusively for anatomical pathology, is that correct?

DR. MADDEN: The applications that I use is, yes, that’s --

DR. STEINDEL: Are there other uses at Duke for SNOMED CT beyond anatomical pathology?

DR. MADDEN: SNOMED CT is used in these experimental systems. In our laboratory information system, which is Cerner, there is no support for CT, and so we’re using an older version of SNOMED.

DR. STEINDEL: And on your comments that you gave us, the written comments, does that apply just in the realm of anatomical pathology or in all realms?

DR. MADDEN: I’m principally addressing, I’m coming here today to talk as a pathologist clinical user.

DR. STEINDEL: Thank you for the clarifications.

MS. GRAHAM: Dr. Zinder, I just want to ask about your use of SNOMED, or your use of Medcin in your administrative areas. This is done in your front end clinical system, it was my understanding you’re sending the code set on to your administrative areas, do those go through a screening process then before they’re used for billing or how has that been working?

DR. ZINDER: Real quick, just to touch on previously, with Medcin we do do negation as well as family history and all that stuff already, which is important for symptom surveillance when we’re looking at bioterrorism and stuff. We take it out, for your question on the administrative side, we take all the codes and then send them on to our normal routes, so we use this as level one coding and then that goes into the next level echelons that are already in place for the systems that we have. So we have legacy systems doing all that already and we’ve just replaced the level one with this. Does that answer?

MS. GRAHAM: When I think about yours, is it that 80 percent of it can go right on a UB92 or HCFA 1500 or --

DR. ZINDER: I don’t have specific numbers right now because, like I said, we just did our formal testing, we’re just getting limited deployment and we’re rolling those out, so the numbers compared to our entire system are just being evaluated right now. But what we do know is the accuracy of coding is much, much better. In DOD we don’t do the HCFA’s and the UB’s, all that stuff, right now, we’re probably moving to that, but our people aren’t as fastidious with the coding as they probably should be sometimes, and the billing is different because of that, but that’s changing and this is going to enhance the change and the ability to change that.

DR. COHN: I actually have a question and then Mike’s on next. First of all I wanted to thank Dan and Vern for the comments about Medcin, I actually saw the system, it’s probably been a year ago now and I was, this was a system where I sort of said geez, it almost doesn’t matter what terminology you stick into it, it’s a really great tool, and came away feeling the tool was sort of overwhelming and it speaks of how a knowledge base can really help with physician documentation, and I think Dan you were sort of reinforcing that view with your own experience. Now, having said that, Judy I wanted to talk about nursing terminology and ask you questions, I’m always afraid, this is an area where it’s easy to say the wrong thing so, I want you to help me --

DR. WARREN: So you’re going to make me say the wrong thing?

DR. COHN: Well, no, I guess, you had mentioned some issues really, or you had used a terminology in your description about nursing codes and SNOMED, and I will apologize, I’m not exactly up to date on exactly how things are put together, but you had talked about NANDA being, an older version being integrated into SNOMED, but I guess not the newer versions, and then you had talked about other nursing terminologies being mapped into SNOMED, and maybe you could explain to me, I’m pretty good on terminology but this one I’m not, I don’t know what this all means.

DR. WARREN: One of the things that the, and I’m hesitating because I don’t know who made the first move, whether it was SNOMED or the ANA, but basically there was an agreement that there needed to be some collaboration back and forth between SNOMED and some of the nursing terminologies because SNOMED’s nursing terminology content was fairly minimal when we first started working with them. They had a very old version of NANDA when it was in the public domain, and I think the version was like a 1984, 1986 kind of version. They also had nursing interventions that were taken from a textbook that David Rothwell happened to find and those were just kind of literally pasted into the SNOMED version 3.5.

Fortunately, when I took a look at them, they were pasted in in all the right places so they had been classified in the right axis but I don’t know how knowledgeable those placements were. But they were placed in using the classification structures of NANDA for the diagnoses and of this textbook for the interventions, without really looking at whether or not that was internally consistent to SNOMED.

As SNOMED started on their effort to create SNOMED RT they then approached, I actually, Sue Bacca(?) and I approached SNOMED and asked them to come into ANA to be recognized as a terminology to underlie findings, because there were no nursing terminologies that had findings terms in them. And we knew right away that for us to really show what nurses do, especially when you get into the realm of surveillance, you have to be able to document what findings they’re seeing and documenting on the patients, because until there’s an actual problem there’s no need to write a diagnosis or a problem or whatever. And so that happened.

We then brought together all of the ANA recognized developers and SNOMED to begin some dialogue about the functionality of a reference terminology and the functionality of an interface and/or administrative terminology. Our belief was most of the nursing classifications were both interface, or what you see on the screen, and also administrative, how you’re going to aggregate it to make reports, and we saw that SNOMED could be the reference terminology that would be underneath, that no one would take a look at. Because one of the phenomena that we’ve had in nursing is we’ve had many different developers develop diagnoses, but they’ve taken very different approaches to developing what those look like, and they also are optimized for a particular practice area, so something used in home health didn’t work very well at hospitals, didn’t work very well in other venues or clinics.

So that began a relationship that SNOMED developed with NANDA, NIC, NOC, the Omaha System, the Home Health Care Classification, and the Perioperative Nursing Data Set. There are contracts with all of those to take the content that those classification systems represent and to map the content of those concepts into SNOMED. In the interim, SNOMED went over and became SNOMED CT, and it was determined that we would still continue those projects because they would enrich the SNOMED content area. So all of the nursing diagnoses and nursing interventions from the Perioperative Set, NANDA, NIC, Omaha and the Home Health Care Classification have been mapped in, so that content is in SNOMED, there are unique SNOMED concept codes for those, but there’s also a map saying if you are looking at a Home Health Care Classification code or this nursing diagnosis, this is the SNOMED code that maps to that Home Health Care code.

We are currently working on trying to figure out how to map goals and outcomes into SNOMED, because those really do have some unique properties. If you look at the NOC codes, they really are rating scales, so they’re very much like the SF-36 short form on maybe well-being or activities of daily living or some of these; all of those are scales that have internal reliability and validity and then have some sort of score that goes with the scales, and so we’re trying to figure out does all of that go into a terminology, does part of it go into a terminology, do some of the scales really belong in LOINC as the place to handle those, or do they belong in SNOMED. We’re not really sure about those.

The issue about goals is, what we’re beginning to learn about goals is that all they are are findings, except we specify today what finding we expect to see in the future, provided that certain interventions are done, so how do you represent that, it’s kind of the context thing all over again, it’s like a history of, etc. So that’s where SNOMED is in working with the nursing terminologies.

One of the things that we think is going to happen is that in smaller nursing centers that are not able to afford, say, a license for SNOMED, although that may not be a problem anymore once this contract gets in place, what a hospital could do is send them some sort of referral or something with the Omaha codes in it, or with a Home Health Care code; the hospital’s nurses never see those because they don’t practice that way, they’re using NANDA, NIC, and NOC, and so it would go through the reference terminology and come out the other end in this other classification system, so that’s how we’re seeing kind of marrying all these together. Does that help?
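The round trip Dr. Warren describes, a local code going through the reference terminology and coming out in another classification, is essentially a pivot mapping. A minimal sketch, with invented codes standing in for NANDA, Omaha, and SNOMED CT identifiers:

```python
# Hand-made maps: each local (scheme, code) pair points at one SNOMED concept.
to_snomed = {
    ("NANDA", "N-001"): "SCT-12345",   # all identifiers are placeholders
    ("Omaha", "O-77"):  "SCT-12345",
}

# Invert the map so we can come back out in any other classification.
from_snomed = {}
for (scheme, code), sct in to_snomed.items():
    from_snomed.setdefault(sct, {})[scheme] = code

def translate(scheme_in, code_in, scheme_out):
    """Local code -> SNOMED concept -> local code in another scheme."""
    sct = to_snomed[(scheme_in, code_in)]
    return from_snomed[sct][scheme_out]

print(translate("NANDA", "N-001", "Omaha"))
```

Neither end ever sees the other's codes or the SNOMED concept in the middle, which is the point of a reference terminology "that no one would take a look at."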

DR. FITZMAURICE: Judy may have answered, I have two questions, Judy may have answered one of them, I’ll ask Judy’s first and then one to Dr. Zinder. Dr. Warren, is there a place for nursing terminology used in conjunction with a terminology used by other practitioners, so that you can link the nursing notes and the physician’s notes, and you can get a continuum of diagnosis, procedure assessment, nursing care assessment, patient outcome discharge, and then perhaps even follow-up patient outcome? I’m looking for something that describes the continuum of care so that they can see the effect of different decisions along the process of care and the impact on patient outcomes.

DR. WARREN: That would be my dream and why I keep doing this stuff. I really, I don’t know, do you mean do I know of any place that does that?

DR. FITZMAURICE: Any place that’s doing that, even if it’s a research project as opposed to an operational project.

DR. WARREN: No, in fact one of the things that I was struck by, in what’s happening in Colorado, is this thing of thinking like clinicians. I’m still struck by dealing with an application where I cannot connect the orders to the diagnosis that I’m ordering for, and I want to be able to do that. And I cannot connect a goal or an outcome to the orders or to the diagnosis, unless I do something like a critical path where everything is a bundled package, and then I can do that, but I can’t do it if I’m a clinician looking at a particular patient. And one of the things that I think is the most difficult in teaching students and also working with clinicians is physicians look at one particular thing and that’s their domain of practice, nurses look at some different things. We look at what’s happening to that person and how they’re coping with all the things that the physicians are diagnosing and treating, and so you may not have the same kinds of problems occurring that you’re dealing with. But I would love to see us be able to put it all together, such as within an episode of care or an encounter or something.

DR. FITZMAURICE: So would I. Dr. Zinder, you gave a glowing presentation of how you use Medcin and what it can do. Is this being used in the CHCS version two, the medical record, and is it being used across the three services?

DR. ZINDER: Yes, that’s where we’re using it is in CHCS-2, that’s our computerized medical record where we’ve just gone to limited deployment and it is across three services, it’s in hospitals from each of the services, in specified clinics in both primary care and specialty surgical and medical fields.

DR. PETERSON: I would just like to comment a bit on Judy’s point. In our clinic we are able, for simple single problems, to bundle the whole thing, and that’s a real advantage, you can see it all in one package. What the challenge then becomes, and I don’t think it’s a Medcin problem as much as the vendors need to figure out how to utilize this, is to take care of the person who has high blood pressure, diabetes, hypercholesterolemia, a touch of the gout, and they’re having a fight with their husband and they’re depressed. That’s what happens in the real world, and the challenge is to be able to think about those problems one at a time, and deal with them in a way that a charge, a lab, can be segregated to that particular problem.

DR. ZINDER: That’s interesting, it must be your implementation, because with CHCS-2 we do that very thing, all the problems are individual and then we link any ancillaries that go with that problem to that so then it comes out in the note that way. So I think, it’s not nomenclature specific --

DR. PETERSON: No, it’s not a Medcin problem, it’s vendor specific.

MR. BLAIR: Could I pause it at this point because on our next panel we have some of our testifiers that are going to catch some flights so I need to keep on time here. Can we take a 15 minute break and be back promptly since we need to start on time.

[Brief break.]

MR. BLAIR: Can the next panel introduce themselves please?

DR. BRANDT: I’m Sam Brandt, I’m the vice president of clinical informatics for Siemens Health Services.

DR. MAYS: I’m Eric Mays with Apelon.

DR. WOOD: I’m Les Wood, I’m with the Joint Medical Logistics Development Center at Ft. Detrick, Maryland, under DoD Health Affairs, developing a logistics system using medical terminology for devices.

MR. BLAIR: Ok, and if you didn’t hear the guidelines that I was giving to our testifiers earlier, if you have one terminology please keep your comments to ten minutes, if you’re discussing two, 15, three, 20.

Agenda Item: Panel 4 - Terminology Users: Healthcare Vendors - Dr. Mays

DR. MAYS: Thanks, Jeff. I would have taken the 20 minutes but I don’t have that much prepared, hate to lose that opportunity though, let’s see what we can do. Anyhow, just a point of background, I’m an engineer by training, not a clinician, and one of the things I would hope to get across today is one of the principal engineering guidelines, and that is that simple things are good things.

So with that, just to provide some perspective as to where Apelon fits in this picture, we are a terminology middleware vendor as it were, it’s a big space, and we provide terminology software and services. We also provide content management and creation services for a variety of customers. We have two principal products, a terminology development environment which we call the TDE, that’s a description logic based terminology authoring environment that allows people such as SNOMED and Kaiser and NCI to create some of the terminologies that you’ve been discussing here today. We also have a server software product which takes content and provides it out into the enterprise in a scalable run time environment, and it has a localization component as well, which is a semi-authoring kind of capability to add local concepts and local terms and do mappings to local coding systems. We have a fairly broad customer base, from a number of government customers to non-profits, as well as commercial entities.

I think the desirable features of a terminology are well established. The ones that we find particularly useful in our software and content delivery are rich hierarchies and compositional definitions, and by compositionality we would mean definitions that make use of reference taxonomies, that is, reference taxonomies that are in turn hierarchically structured themselves. Those definitions have a formal semantics underlying them, so that large scale terminologies such as SNOMED can achieve a level of conceptual scalability, so that they’re internally consistent. And another very important feature, which we hear about from our customers and we feel this pain quite a bit ourselves, is the problem around change management: how you deal with releases of new terminologies, updates to terminologies, how change is managed and how it affects the installation and customization in the local institution, which is at the end of the terminology food chain as it trickles down from the original content provider. It’s a very, very big and very important problem, especially as you get to terminologies the size of SNOMED as opposed to administrative terminologies such as CPT and ICD.

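[Editor’s note: the hierarchical, formally defined structure described above can be sketched in code. This is a minimal illustration, not Apelon’s actual software; all concept names are invented.]

```python
# A minimal terminology store with is-a hierarchies, showing how
# hierarchical structure supports subsumption queries.

class Terminology:
    def __init__(self):
        self.parents = {}  # concept -> set of direct is-a parents

    def add(self, concept, parents=()):
        self.parents[concept] = set(parents)

    def ancestors(self, concept):
        """All concepts that subsume `concept` (transitive closure of is-a)."""
        seen = set()
        stack = list(self.parents.get(concept, ()))
        while stack:
            c = stack.pop()
            if c not in seen:
                seen.add(c)
                stack.extend(self.parents.get(c, ()))
        return seen

    def subsumes(self, general, specific):
        return general == specific or general in self.ancestors(specific)

t = Terminology()
t.add("disorder")
t.add("kidney_disorder", ["disorder"])
t.add("glomerulonephritis", ["kidney_disorder"])
t.add("post_strep_glomerulonephritis", ["glomerulonephritis"])

print(t.subsumes("kidney_disorder", "post_strep_glomerulonephritis"))  # True
```

Internal consistency at scale comes from defining concepts against such shared hierarchies rather than maintaining every relationship by hand.
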
And finally, a very important point is to distinguish the interface characteristics of the terminology from its reference characteristics; we need the ability for the clinician to navigate the terminology, but we don’t want to get that confused with the actual definitions and meanings of the concepts in the terminology.

Briefly, we commented on three terminologies that we’re particularly familiar with, and I’m going to make a remark about two others. The three that we’re most familiar with are the NCI Thesaurus, NDF-RT, and SNOMED CT. All of these in our opinion have the necessary features or are in the process of adopting those features, and it’s an important distinction to make this morning, talking about drug terminology. NDF-RT in particular is extremely interesting, but nonetheless unproven, and we believe that all three of these need more practical experience, that is, a certain level of maturity.

A comment on the NCI Thesaurus, it’s not really a clinical terminology in the same sense that the others are, it’s more or less oriented towards research or the pre-clinical level, and I think that’s important to consider when doing this evaluation.

LOINC and UMDNS need improvement based on the criteria that we established on the previous slide; that’s not to say that, for example, LOINC isn’t a great lab terminology, it is a great lab terminology, but there’s certainly room for improvement there, and I’ll be talking about that a little bit later.

As a vendor we’re seeing that the marketplace is stalled at this point based on licensing issues, and that needs to get resolved, it’s definitely stalling adoption of terminology enabled applications in the marketplace.

Finally, the inclusive process that NDF-RT has with respect to including various stakeholders such as the FDA, HL7, the Cancer Institute, for example, is a really nice process and deserves to be looked at and replicated as opposed to a more authoritative sealed off kind of process which people don’t have input to.

One thing that hasn’t come up which is very important is the creation of subsets, and I just wanted to remark that in this particular case the practical considerations tend to dominate. You need to be able to provide reasonable subsets to primary care providers, for example; they’re not going to deal with all of SNOMED. It’s interesting to note, though, that the interoperability issues are not compromised by the creation of subsets, and this is a place where we think the marketplace will address the creation and maintenance of these subsets, provided that barriers to innovation are removed, primarily around IP considerations, so that people in the marketplace are indeed able to create subsets of the clinical terminologies and make those available.

One comment on mapping, and this will get to the specific recommendations that we have: mapping is a great approach for dealing with the impedance mismatch, as it were, between clinical terminologies and administrative terminologies, that is the ICD’s, CPT’s, DRG’s, and APC’s. There’s an incredibly entrenched infrastructure around these administrative terminologies, so it’s important to provide mapping from the clinical terminologies to enable all those administrative terminologies to continue to function. On the other hand, mapping would not appear to be a very good solution among clinical terminologies. There isn’t the infrastructure in place that would necessitate that particular approach, and we believe that the creation of common reference taxonomies provides a much better approach to dealing with the interoperability issues in the clinical space, as opposed to mappings between the clinical and administrative spaces.

So two examples that are pretty easy to see: in the area of chemicals, it’s very reasonable to think of having distinct terminologies with different maintenance structures and so on between laboratory and drugs, but there’s no reason why those two terminologies shouldn’t share a common reference taxonomy for chemicals. Similarly for organisms amongst labs, drugs, and diseases; it’s a great place to decide on common reference taxonomies.

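[Editor’s note: the two approaches Dr. Mays contrasts, mapping into administrative codes versus interoperating through a shared reference taxonomy, can be sketched as follows. All codes and names here are invented, not real SNOMED/ICD/LOINC content.]

```python
# 1) Mapping: clinical terms map onto entrenched administrative codes
#    so billing infrastructure keeps working.
clinical_to_icd = {
    "snomed:acute_appendicitis": "icd9:540.9",
    "snomed:type2_diabetes": "icd9:250.00",
}

def bill_code(clinical_concept):
    return clinical_to_icd.get(clinical_concept)

# 2) Shared reference taxonomy: lab and drug terminologies both point
#    into one chemicals taxonomy instead of being mapped to each other.
chemicals = {"glucose": "carbohydrate", "carbohydrate": "chemical"}

lab_tests = {"lab:serum_glucose": {"analyte": "glucose"}}
drugs = {"drug:dextrose_5pct": {"ingredient": "glucose"}}

def labs_measuring_ingredient_of(drug_id):
    """Interoperate via the shared taxonomy: find labs whose analyte is
    the drug's ingredient, with no lab<->drug mapping table at all."""
    ingredient = drugs[drug_id]["ingredient"]
    assert ingredient in chemicals  # both sides reference the shared taxonomy
    return [t for t, info in lab_tests.items() if info["analyte"] == ingredient]

print(bill_code("snomed:acute_appendicitis"))              # icd9:540.9
print(labs_measuring_ingredient_of("drug:dextrose_5pct"))  # ['lab:serum_glucose']
```

The second style scales as a shared vocabulary grows, because each clinical terminology needs only one link into the common taxonomy rather than pairwise mappings to every other terminology.
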
So the actions that we would like to see taken are that there’s a recognition of some core clinical terminologies, that those core terminologies be non-overlapping, and that it be a small number. There are some small terminologies and it would appear that the small terminologies can be incorporated into some of the larger core clinical terminologies, keep it small, keep it simple, don’t confuse people, don’t confuse the marketplace.

It would be good for the government to promote the harmonization of the core models so that if we have five core clinical terminology models that those models be harmonized amongst each other so that there can be interoperability and again, with the promotion of the common reference taxonomies we can do two things, we can increase the interoperability amongst this handful of core clinical terminologies and hopefully lower the maintenance costs to them.

So finally, one of the points that you raised in the questionnaire was the notion of identifying gaps and enhancing core sets, and once those core terminologies are chosen a gap analysis would seem appropriate and then fill in the gaps as need be.

Thanks.

DR. COHN: Do you have copies of the presentation that we could have? Or is that something that Suzie has?

DR. MAYS: I’ll email them to you.

DR. COHN: Great, that’d be super, thank you.

Agenda Item: Panel 4 - Terminology Users - Healthcare Vendors - Dr. Brandt

DR. BRANDT: Again, I’m Sam Brandt, I’m here representing Siemens, along with Rosemary Kennedy who also joins me from Siemens, and purportedly I’m going to speak about SNOMED CT but actually in a more general sense about our needs for nomenclature and the implementation of nomenclature, so maybe I can weasel into some of the extra few minutes after ten there.

I’m trying to be compliant, though, and follow the questionnaire, which asked about data analysis, and what I would say is that an ontology such as SNOMED’s is very useful for us in that we can have a term such as post-streptococcal glomerulonephritis and yet be able to model system intelligence and system knowledge at the level of glomerulonephritis and have that then drive alerts, drive reminders, drive the construction of order sets and documentation templates. And that may seem trivial, but the issue really is that physicians and nurses really have a need to be able to specify clinical knowledge at the level of granularity that is most pertinent to themselves and what they’re doing, and yet as a vendor it’s impractical to be able to model medical knowledge at every level of granularity, so we have to be able to say, just like any textbook, here are the subjects, if you will, at which we are going to represent knowledge, and we need to be able to traverse from more specific to more general in order to be able to invoke those subjects.

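[Editor’s note: the specific-to-general traversal Dr. Brandt describes can be sketched in a few lines. This is an illustrative sketch only; the concepts and rule names are invented, not Siemens’ actual implementation.]

```python
# Knowledge is authored at a general concept; more specific terms
# inherit it by walking up the is-a hierarchy.

parents = {
    "post_strep_glomerulonephritis": "glomerulonephritis",
    "glomerulonephritis": "kidney_disorder",
    "kidney_disorder": "disorder",
}

# Alerts and order sets are only modeled at the levels chosen by the vendor.
knowledge = {
    "glomerulonephritis": ["renal-function alert", "nephritis order set"],
}

def applicable_knowledge(concept):
    """Traverse from specific to general until modeled knowledge is found."""
    while concept is not None:
        if concept in knowledge:
            return knowledge[concept]
        concept = parents.get(concept)
    return []

print(applicable_knowledge("post_strep_glomerulonephritis"))
# ['renal-function alert', 'nephritis order set']
```

The specific diagnosis carries the clinician’s granularity, while the system fires the alerts modeled at the more general subject.
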
In terms of the ease of creating user interfaces, one of the problems with user interfaces, whether you think of predefined lists or templates, is that they need to be designed in advance for some purpose that some clinician will ultimately use them for when exposed to a patient. And as we heard before, if patients were considerate enough to just have diabetes or just have congestive heart failure or just have chest pain, that would work perfectly well. But regrettably they’re not, and so you have diabetics with congestive heart failure who come in with chest pain and have a litany of other problems. And the clinician is now faced with the problem that I need to be able to create a unique constellation of both documentation and orders that are going to fit this patient, and the problem with precognition is that it becomes almost impossible for somebody building a template or an order set to construct some usable interface which would be easy for the clinician to use and that is going to fit the patient. So that’s kind of the problem that we’re really addressing: it has to be easy for the physician and nurse so it’s usable, and yet it has to fit the patient, and we just find that that is very hard to do ahead of time.

There really have been two traditional approaches to the problem. The first is to say that we will have a congestive heart failure template or a chest pain order set, in which case when a patient comes in with multiple problems there’s a real tendency to make the foot fit the shoe, to be able to say I will document those things that are in the template and order from those things that are elaborated for me, but it’s very easy to not add the additional detail which actually might be very salient and rich because they’re not presented to me in the template.

The other option is one that many systems employ, which is that information is organized in hierarchical lists, so in fact I can go to the ear and to the tympanic membrane and I can go to findings regarding colors and then I select color, and I can go to findings regarding effusions and I can select effusions, but the experience unfortunately in the industry is that that is such an incredibly tedious exercise in order to be able to document the breadth and volume of information that clinicians need to document, that it really doesn’t work.

So our approach has been sort of a more pragmatic intermediate approach. There really is information which we all can benefit by capturing in a structured form. And sometimes that’s predictable. If that information is going to be used to measure outcomes, it’s important then that it be comparable and normative. If it’s going to be used to drive system behavior in terms of the surfacing of alerts or the creation of order sets or feedback, then it’s important that it be structured so that it can be acted on programmatically. If it’s going to be used for reimbursement, such as whether a physician is contemplating a gram negative organism as a source of pneumonia because that drives DRG-79989, it’s pretty important to capture that in a structured fashion.

But on the other hand there’s a lot of information where that’s a whole lot less relevant. If I want to say that a heart murmur is musical crescendo, decrescendo early systolic, it’s probably less important than the fact that I am documenting the presence of a murmur and ultimately the valvular problem to which that murmur is attributed. So we believe that the combination of templated structured frameworks, with free text within the frameworks really makes for the best approach, the best practical approach to documentation.

We have examined natural language processing as a way to extract structure from that unstructured information, and quite frankly we think that still has a ways to go before it’s really going to be practical and able to deliver what we really need. One of the interesting things is we finally embraced nomenclature, and finally aspire to fully structured notes, because technologies have really come online this year which make the entry of free text for the average clinician quite practical, and I’m really speaking of continuous speech recognition, being able to dictate into the computer and have it transcribed, but transcribed in free text, that’s actually really quite usable for anyone who’s willing to put in the effort, and also the touch pad technology which can even read my handwriting, which is really quite remarkable.

So now there’s actually the ability to give physicians and nurses tablets that they can carry around the facility and handwrite into blanks in templates and have that be transcribed automatically, and I think it’s actually a practical solution. So as we look at wanting to deploy structured documentation against a structured nomenclature, we really need to think about what the pragmatic approaches for that are.

Again, I think these really have been the things that have driven the creation of the terminologies that we’re discussing, the ability to capture documentation, the ability to capture reimbursement, but I guess from our perspective those are really not the most pressing, urgent, nor valuable uses of structured terminology in the industry. I think point of care decision support is really much, much more important; as we look at the IOM report, as we look at Leapfrog, as we look at errors within health care delivery, there’s a tremendous opportunity to be able to bring, using system intelligence, the right choices to the clinician at the point of care, to essentially make it easiest to do the right thing. And that really demands a nomenclature underlying how you describe the patient and what the things that you’re going to do are, and that I think would drive the kind of quality and efficiency interventions in the health care system that really all of this is ultimately for.

Currently I would assert that there is no complete standardized terminology for the representation of drugs, labs, radiology, nursing procedures, and other ancillary procedures so that they can be expressed as actionable items in physicians’ orders. We’ve done a good job of thinking about how a lab transaction is going to be transmitted to a laboratory system, but we haven’t done a good job of saying how can I ask for blood cultures to be drawn from two sites, 30 minutes apart, after the patient receives Rocephin antibiotics; there really is no standardized way of expressing that.

One of the things that we really need is a standardized nomenclature because as a vendor we are really forced to say we want to model system knowledge, we want to model dynamic order set creation, dynamic documentation creation system behavior based upon structured terms, but we have to go to multiple terminologies, and essentially subset terms from those multiple terminologies into what effectively becomes our reference terminology against which we are modeling behaviors that our system implements. And we would love to have help for there to be a singular normative nomenclature against which we can model behavior.

On top of that, the ability to create things like order sets and rules and guidelines and care plans really require us to create either additional pre-coordinated terms or to be able to post-coordinate terms using some proprietary combinatorial grammar that we invent, which is therefore not easily translated elsewhere, and we think that that is an excellent opportunity for this Committee and for the U.S. government to help us.

Again, the most effective way to influence clinician behavior is to make it easiest to do the right thing, not to tell me what I just did wrong, not to tell me what could go wrong that I already know about, and certainly not to tell me what you think I did wrong last week because my antibiotic bills are higher than the average for my department. What that does is it provides me with a system that gives me the best choices but allows me freely the autonomy that goes along with my license to make clinical decisions for this particular patient, so you’re not wresting that autonomy.

And what we need to be able to do that is, if I were on call last night and I were coming off call and I were checking out a patient to another physician, then I would say Simon, I admitted an elderly frail diabetic lady who fell last night fracturing her hip and she’s hemodynamically stable and orthopedics is going to see her. I mean I would try to bullet out in some logical way the salient attributes of that patient that I believe another physician would understand and interpret in a like way, and would therefore be armed to interpret the data that’s coming back from that patient and hopefully make consistent conclusions. So part of what we need is a language for being able to describe those attributes in a consistent way.

SNOMED certainly has all the terms, we don’t need all of SNOMED’s terms to do that, but we need to be able to subset them and say here are these things which we call patient conditions or problems against which we ask everybody in the industry to model knowledge. And then those decisions that are related to those particular attributes can be modeled, and they can be represented as orders or as elements within a documentation template that can be constructed dynamically for the patient with chest pain, congestive heart failure, and renal insufficiency, so that they’re all there together.

I think SNOMED is unique based on its comprehensiveness and its breadth, but I think again it is not sufficient alone. And we are forced to aggregate multiple terminologies, which forces us to face the issues of redundancies and ambiguities where things are modeled at a slightly different level and don’t exactly map to each other. For decision support we need to be able to model other things than what SNOMED has traditionally thought of as its charge. SNOMED would not model the relationship between peritonitis and appendicitis, because appendicitis doesn’t always cause peritonitis, and not everybody with peritonitis has appendicitis. And yet if I’m seeing somebody with abdominal pain who has symptoms that are associated with peritonitis, I would like to be able to model that appendicitis is one of the potential causes of that peritonitis. So we need to be able to think of symptoms that can be modeled, the causes of symptoms and conditions, the complications that my orders and documentation would reflect my desire to anticipate and manage, the differential diagnoses, and again, the associated actions or orders with that.

So I think the gaps are that we need support for the expression of clinical orders as intended actions or outcomes in the kind of language that we’re used to writing in the clinical space. I think we need a combinatorial grammar; I would support John’s comments earlier that I think the real issue here is that we need to essentially be able to put these terms together in a way that provides unambiguous meaning. I, however, don’t share his thought that the HL7, CDA, and template committees will provide the answer for that, and the reason why I say that is this. If we look for the meaning of a term, take the example of swelling of the proximal IP joint of my right third finger: the reason why we know that the swelling I describe is on the PIP of my right third finger is because it’s in a template that started with extremity and hand and finger and joints and IP joint, and now I describe that swelling. As a vendor, then, in order for me to preserve that meaning, that the swelling is really associated with that particular part of the body, I either need to save that entire template structure, that entire hierarchy which gives it meaning, and then would only be able to really re-express it in the same hierarchy.

Or I need some way of being able to say really you’re describing this joint effusion of that joint of this finger, let me create a phrase that means that, associate the phrase with the finding, and now if I’m going to render that phrase in some other document or template I can test to see whether the hierarchy that that template provides matches the meaning that was actually captured when that finding was captured. And so I think it’s very important for us to say how do these words combine, in the same way that you’re understanding what I’m saying now, and not saying that we’re going to create ten million unique combinations which we could never programmatically manage, or to say that we’re going to have a word that can only have particular permutations and that’s the only other modifiers that can be associated with it.

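[Editor’s note: the alternative Dr. Brandt proposes, capturing a finding’s meaning as an explicit post-coordinated expression rather than relying on the template hierarchy that surrounded it, can be sketched as follows. The attribute names and values are invented for illustration.]

```python
# Capture the finding's meaning with it, independent of any template.
finding = {
    "observation": "swelling",
    "site": "proximal_interphalangeal_joint",
    "digit": "third_finger",
    "laterality": "right",
}

def matches_template_context(finding, template_path):
    """Can this finding be re-rendered inside another template? Only if the
    template's hierarchy agrees with the meaning captured with the finding."""
    return all(value in finding.values() for value in template_path)

hand_template = ["right", "third_finger", "proximal_interphalangeal_joint"]
knee_template = ["left", "knee_joint"]

print(matches_template_context(finding, hand_template))  # True
print(matches_template_context(finding, knee_template))  # False
```

Because the qualifiers travel with the finding, a system can test whether any new template’s hierarchy matches the originally captured meaning, instead of being locked to the template in which the finding was first documented.
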
Lastly, Siemens is certainly an international corporation, so one of the problems that we face is that we would not like to create clinical system solutions for the U.S. and then a different solution for England, and then a different solution for Australia and every other country. Ideally we would want to be able to leverage the economy of scale across the globe, and be able to create a clinical system that is dependent on a reference terminology that is global. And then we can have interface terms that obviously are important for each of the countries in which we’re deploying. I think most of the other vendors in our space are also looking at the international market, so as we think about adopting a terminology we also need to think about what Europe is doing. We also need to be thinking about what the Far East is doing, and I would just encourage in the strongest way that we really look to the establishment of global standards, because that allows us then to deploy whatever resources we apply to the problem more effectively; we don’t have to split them up into what we’re doing for Europe and what we’re doing for the U.S. and what we’re doing for the Far East.

Barriers: conflicting sources of terminology overlap and have ambiguity, and the license issue is a real problem. Every terminology perceives that maybe five percent of the total revenue would be the appropriate cost for that terminology, but there are 50 of them, and you can do the math. And again, we need to think about, I would allege, that point of care decision support improving the effectiveness and the quality of the healthcare system is really our number one priority, and that the ability to do outcomes analysis to learn the things that could further be improved is really secondary, because we have a whole lot of low hanging fruit that we all know we don’t do well, and we need the systems to help us do that.

So I think the federal government can establish the scope of each effort, can make implementation practical and affordable for us by implementing national standards and dealing with the licensure issue, can determine what the real priorities for modeling terms are, and I would advocate point of care decision support, and then can create incentives so that the suppliers actually successfully fill those gaps.

Thanks.

Agenda Item: Panel 4 - Terminology Users: Healthcare Vendors - Mr. Wood

MR. WOOD: I’m Les Wood, again I am with the Defense Medical Logistics Standard Support System, so if the engineer thought he was a fish out of water, I’m a clinical engineer that works in logistics. A little bit of background: what is DMLSS? It is a medical logistics system that’s being developed and is being deployed to hospitals within the Department of Defense, both peace time and war time.

The background is that in 1991 when we went to Desert Storm we had three different services over there using three different systems that couldn’t talk to each other. The Army was supposed to support the Navy and Air Force for medical supplies and equipment. They couldn’t order from each other, their systems couldn’t talk to each other. Today we have one system.

Ms. Coats has asked me to come here to show how we use their nomenclature system within our system. The first thing we wanted when we started developing the equipment modules of this system was to be able to know, number one, what is it, and number two, who made it. We adopted ECRI’s systems, both their manufacturer codes and their nomenclature system, in order to accomplish that within the system.

You should have a brief handout and I’ll show you how we tie it together. We start when we get the device nomenclatures and device codes from ECRI; we take those devices and nomenclatures and split them out into consumables versus equipment, and only the equipment side of the system uses their nomenclature system. We then add management data, which is distributed to all our activities, on how this item is managed based on the tri-service risk assessment, whether it has maintenance requirements, whether it’s accountable, and guidance on how to manage that at a local level. The next release of this screen will have their definitions added as well: the code, the nomenclature, and, as we say, we’re adding the definitions.

From a structure standpoint, we wanted to be able to pull back more than one specific nomenclature; we saw from past systems that the nomenclatures are very specific, and that was a problem. For example, the screen shot on the third page here shows about eight different nomenclatures for a defibrillator. We created what we call a device class, and we proliferate both the class and the nomenclature throughout the system. The class basically is the root word that the classification system uses.

We also are now able to create centrally managed maintenance plans, tie those to the centrally managed device codes, and send those out to satisfy the JCAHO requirements in our peace time hospitals on how they should be maintained, whether they’re in use in a hospital or recommended for deployment, which is mobility, and some of the services have stuff that’s stored floating around in oceans that doesn’t get looked at very often, and the Navy guy is smiling.

When the users in our services want a piece of equipment, they have to basically go through a process that’s called review and approval to say yes, this piece of equipment is needed in this facility. And we start with the nomenclature at that point in time with an equipment request, put that nomenclature on that equipment request, and that request is tracked through the review process to the funding process to the approval process through the ordering, back until the day that the user has the piece of equipment in hand and it is issued to them. Once the ordered piece of equipment is received, a catalog record has to be established. As I said, the catalog record is core for our logistics system.

The supply or consumable side of the house elected not to use the classification system, so on the first screen here they can pull in whatever they want for a nomenclature. However, if it’s classified as a piece of equipment we start with the request and then build the catalog record using that nomenclature system. And then the centralized management data and the guidance are defaulted from the record that is embedded in the database.

Once the equipment comes in, all equipment ordered against that catalog record is populated with that management data and that standardized nomenclature and is managed throughout its life with that nomenclature. Users, of course, add the model number, manufacturer, serial number, and that type of data to that. The point being that this equipment record is not only used by the logisticians; within our system it’s what I call a local enterprise system, in that throughout the medical facility the operators can look at this equipment record, and the maintenance people look at it. We have responsibility for control of that equipment as government property, and if any of you are associated with the government you know you have to have custodians that manage and control that equipment; they signed for it, and if they lose it, they lose their paycheck at the same time. So this record is now visible to the operators and users throughout the system, as well as the maintenance people, as well as the people who manage the budget for the entire life cycle of that equipment.

The maintenance data is also defaulted, again based upon that nomenclature. The tri-service maintenance representatives come out with standard maintenance schedules; those are built within the system, so all the users have to do is select a nomenclature.

The questionnaire asked about whether you had to use codes. There are only a few places on the basic screens where we ever show a code; the nomenclatures are all basically drop-down nomenclatures with type ahead capability for both the device and the class. You type in, as the example in the sheet shows, DEF, you get defibrillator, select that, you get them all. Nowhere does the user have to remember codes anymore. In our old legacy systems from 20 years ago you had to look up the codes and write them down, and you had to produce all of this in paper documents to give people guidance; this guidance is now automated based upon having a standard nomenclature.

The computer uses the codes and the users use nomenclature.

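[Editor’s note: the type-ahead behavior Mr. Wood describes, where users see nomenclatures while the computer keeps the codes, can be sketched as follows. The device codes and names below are invented for illustration, not actual ECRI/UMDNS content.]

```python
# Nomenclature-to-code catalog; users never see or type the codes.
catalog = {
    "defibrillator": "11-129",
    "defibrillator, external, automated": "17-116",
    "defibrillator, external, manual": "17-117",
    "infusion pump": "16-495",
}

def type_ahead(prefix):
    """Return matching nomenclatures for display as the user types."""
    p = prefix.lower()
    return sorted(name for name in catalog if name.startswith(p))

def code_for(nomenclature):
    """The computer resolves the selected nomenclature to its device code."""
    return catalog[nomenclature]

print(type_ahead("def"))
print(code_for("defibrillator"))  # 11-129
```

Typing "def" surfaces every defibrillator nomenclature; selecting one binds the record to the standard code behind the scenes, which is what lets maintenance plans, recalls, and interfaces all key off the same identifier.
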
The last sheet in here shows the quality assurance screen. We have a quality assurance system where we pull down FDA recall records and things of that nature and display them so the users know about them. When they come in they don’t have a standard nomenclature; unless we get them from ECRI we don’t have a standard nomenclature from manufacturers, FDA, or anywhere else for quality assurance data. We select it here so that we can search our equipment records and find that.

As I wanted to point out, this is a logistics system but it has spread throughout the system, throughout our MTF hospitals at this point. Clinicians, I mean the users, can view the equipment records, the users see these quality assurance records, and having a standard nomenclature has been able to tie it together. In the future we are building more interfaces, and this is why I think that a standard nomenclature for devices is relevant. As he said, first off you’ve got to take care of administration, you’ve got to take care of finance. We’ve interfaced with finance systems and those types of things; we’re also interfacing with point-of-use cabinets for dispensing the drugs and supplies on a ward. One of our next interfaces is to an OR management system, where they want to pull in the schedule and make sure that they have the equipment and the supplies on hand in order to do the procedure and schedule all of that. Having a standard nomenclature has enabled us to tie the whole system together.

Thank you.

MR. BLAIR: We have a little bit of logistics here, let me discuss this with you. Our next speaker needs to begin at 4:15, which is only 15 minutes away, and I understand that at least one of this panel also can’t stay that long, but I don’t know who it is. What I’m postulating is, can you all stay until 5:00 or 5:15? Then why don’t we have the Subcommittee direct questions especially to Dr. Brandt for the next 15 minutes, and then, Eric and Leslie, if you could stay, if we don’t have all of the questions answered in the next 15 minutes, then after our next speaker speaks at 4:15 we could come back to you. Would that be ok?

DR. COHN: Why don’t we leap into the questions here and I’m sure we’ll have it handled within the timeframe we need. Steve, you have some questions, Walter also, Stan and I have a question or two.

DR. STEINDEL: Yes, I have my standard question, which I’ve been asking of all the vendors, how many people do you have currently using SNOMED CT?

DR. BRANDT: Currently it’s in our development for the next release, next version of our product.

DR. STEINDEL: That makes the next question non-existent, thank you. My next question for you, and this may be a little bit of what Walter might have gotten to, I don’t know, but you were very strong on saying we need a terminology, whereas Eric was talking about how we could have multiple terminologies related through a reference taxonomy or something like that. Would that also serve your needs?

DR. BRANDT: I think that from the perspective of deployment we need to be able to model knowledge against a reference terminology. Now whether the task of mapping that reference terminology using multiple other terminologies falls to us or whether there is a terminology that we can use, I mean I think that’s the question on the table, but I think that when we deploy knowledge that knowledge really has to be modeled against specific reference terms. And then those reference terms within our terminology can have mappings to other vocabularies. So it would be very helpful actually if there were a normative terminology.

DR. MAYS: If I could just hop in there. I think two would be a good number, for example, say drugs and everything else, but I just don’t think it’s practical, I don’t think it’s achievable, that we would have fewer than four or five.

DR. COHN: So Eric, you’re saying that you’d love two but you think you can survive with four. Though I presume you’re talking about separate domains, not overlapping but well integrated. Sam, does that change your view, as long as they’re not overlapping?

DR. BRANDT: Well, if they were all, for instance, incorporated into the UMLS, and you ended up, I mean, I’m not saying that there cannot be multiple sources. If you talk about multiple sources that are non-overlapping and that are incorporated into some coherent whole, I think we’re saying the same thing.

DR. SUJANSKY: My questions were somewhat along those lines, for Dr. Brandt initially. Hypothesizing that a terminology, or a small set of terminologies, will be identified as a standard, given what you know of the existing terminologies, perhaps many of the ones you’re already using and cobbling together, if you will, for your proprietary terminology, do you think it will be possible to really interoperate using that terminology? In other words, how much customization and enhancement do you foresee at your organization, and perhaps at other vendors like yourselves, based on existing terminology content, in order to really support your application needs? And given that there may be some or a significant amount of such enhancement and customization, how will that affect interoperability among products from different vendors? Do you see that as an issue?

DR. BRANDT: And I think that really addresses the issue of a combinatorial grammar. Our experience has been that most of the root terms that we need are within SNOMED, but that we need to be able to combine those terms into a new term that perhaps infers specific meaning. For instance, right lower quadrant abdominal pain exists within SNOMED; right upper quadrant abdominal pain doesn’t. So if you have appendicitis you’re in good shape; if you have cholecystitis you’re in trouble. But from the perspective of working up somebody with abdominal pain, we need the ability to express that. Now certainly right and upper and quadrant and abdomen and pain are all concepts that are there. If there were an absolutely standard way of combining those, and of specifying which term is modifying which term, then I think it’s entirely possible for each vendor to say, well, conceptually in order to represent this knowledge we need to be able to express this, but we’re able to express it with standardized terms using a standardized grammar that is then easily programmatically mappable to another expression that means the same thing. Short of that you have a problem: then there is an explosion of we need a new term for this, and they need a new term for that, and those terms are somewhat redundant but not exactly the same, and all of a sudden you have a mess.
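The combinatorial grammar Dr. Brandt describes can be illustrated with a small sketch. The composition rule and concept names below are hypothetical stand-ins for SNOMED post-coordination, not its actual syntax; the point is only that a canonical composition rule makes two independently built expressions programmatically comparable.

```python
# Hypothetical post-coordination: compose a finding from standard root
# concepts plus modifiers, in a canonical (sorted) form, so that two
# systems composing the same meaning produce the identical expression.
def compose(finding, site, modifiers):
    """Build a canonical compositional expression for a clinical finding."""
    mods = "+".join(sorted(modifiers))  # canonical order -> order-independent
    return f"{finding}@{site}[{mods}]"

# "Right upper quadrant abdominal pain" need not be precoordinated;
# it can be assembled from concepts that already exist.
a = compose("pain", "abdomen", ["right", "upper", "quadrant"])
b = compose("pain", "abdomen", ["upper", "quadrant", "right"])
assert a == b  # same meaning, same expression, hence mappable
```

Without the canonical rule, each vendor would mint its own near-redundant precoordinated terms, which is exactly the "explosion" the testimony warns about.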

DR. SUJANSKY: My other question for you: you mentioned explicitly that there was a gap in the area of terminology for orders. So again, could you expand on that a little bit? Across, say, SNOMED and your drug terminology and LOINC, where is the gap? If you combine those, what’s missing?

DR. BRANDT: I’d actually love to, and it reminds me of a point I forgot to make, that there’s an even bigger gap for expressing actions. As you may know, we are moving into the workflow domain, where you begin to think about health care processes as involving multiple health care providers, each of whom might contribute some task towards a larger process, like stroke management in the emergency room or something like that. There is not a good language for expressing obtaining a blood culture sample. So I can express blood culture and I can speak of it as though it’s a laboratory term, and I can say it has been ordered but it has not been processed, or it’s been processed and not confirmed, but what do I call the bottle, which is now sterile, and the process of doing venipuncture on the patient and putting blood in the bottle, and then the thing that I have that I need to bring down to the laboratory? As we get into thinking about processes and work steps, we need consistent words for actions that allow us to define processes. Most of the time when physicians write orders they’re actually saying here is the outcome I would like you to achieve, gentamicin 120 mg IV stat, 80 mg every eight hours; they really mean could you do whatever it takes to ensure that my patient receives that, and that involves pharmacy and the couriers and nursing and the IV pump and documentation and the medication administration record and lots of steps in the process. Whether there are actually terms that mean the patient should get gentamicin, as opposed to the drug is gentamicin, as opposed to the drug level is gentamicin, as opposed to the NDC preparation is gentamicin, it’s not clear that those are distinct.

But when you start specifying work steps it becomes important to be able to make those distinctions, because you’re actually talking about some step in the process. So I think simply approaching the language from the perspective of I am going to work up, diagnose, and manage a patient, I’m going to make decisions, I’m going to request orders, I’m going to fulfill orders with particular work steps, surfaces the fact that the nomenclatures we have were not originally intended to fulfill those needs, and therefore the words don’t necessarily apply.

DR. COHN: I was going to comment, that’s not a simple grammar anymore, we’re getting into sort of complex grammar I think.

DR. BRANDT: Well, here we’re talking about different words for actions. I don’t think this part is more complex; I think it’s just that there are different things that need to be specified. So I think we will find that there are many new terms that need to be added, those terms need to be in the context of actions, and we need to have a single source terminology into which those terms can be requested and implemented.

DR. SUJANSKY: Just as a quick follow-up, would you say that for what most of us might conventionally think of as orders, the orders that clinicians place into, for example, an electronic order entry system, the existing terminologies are adequate, the ones I mentioned for example?

DR. BRANDT: I would actually say that they’re not quite; I would say we could probably kluge them. We have a tradition of taking a physician’s order, translating it to HL7, communicating it to an ancillary system, and thinking that the expression of the order was the same as the output of the HL7 message. But it’s really not, because the HL7 message is really an input into a lab system, and what the physician is typically talking about is a process for acquiring, measuring, and reporting, and that is actually the term that’s being expressed. So you could use the word for a CBC and mean, what I really want you to do is obtain the sample and get me the report, and you might use the same word which the lab uses to represent a report or to represent a Coulter counter test, but in fact they are really not truly equivalent. The real issue is when you start running programmatic logic against those things. It becomes important to understand: am I speaking about your intention, am I speaking about an action in progress, am I speaking about a result, or am I speaking about the test itself, and how do I determine that from the language? That, I think, will become increasingly important as we move into the process space.
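One way to read the distinction Dr. Brandt draws is that a bare test name needs an explicit context before programmatic logic can act on it. The sketch below is a hypothetical illustration; the context labels are invented for the example and are not drawn from HL7 or any real terminology.

```python
from enum import Enum

# The same word ("CBC") can mean the physician's intent, a work step in
# progress, the finished report, or the test as a catalog entry. Making
# the context explicit lets downstream logic tell these apart.
class OrderContext(Enum):
    INTENT = "ordered"         # physician wants the whole process done
    IN_PROGRESS = "collected"  # sample obtained, not yet run
    RESULT = "reported"        # the report is back
    DEFINITION = "test"        # the test as a catalog definition

def expression(term, context):
    """Pair a terminology term with its explicit usage context."""
    return (term, context.value)

# The term alone is ambiguous; term plus context is not.
intent = expression("CBC", OrderContext.INTENT)
result = expression("CBC", OrderContext.RESULT)
```

The design choice is simply that ambiguity is resolved in the data model rather than inferred from surrounding prose.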

DR. HUFF: Eric, you noted that LOINC and UMDNS needed more work. Could you say more about that and what you’d like to see happen to make those more maintainable, supportable, integrable?

DR. MAYS: Primarily we’re referring to the structure, the hierarchies that could be imposed on LOINC and UMDNS, and the explicit, or implicit, semantic model as it were. And I think those are important for a number of reasons. Decision support is important, and so is supporting back office operations like the creation of flow sheets and templates, where there’s a lot of search, navigation, etc., to populate that information and make it more accessible to the provider at the point of care. It’s primarily around richer hierarchies and semantic models for LOINC and UMDNS.

MR. BLAIR: Dr. Brandt, you mentioned, if I heard you correctly, that for Siemens clinical decision support and outcomes analysis is right now your number one priority, and when I was reviewing over the weekend the written testimony from several of the other vendors, the same phrase or reference to clinical decision support and outcomes analysis was in theirs also. So I want to make sure that I understand, and the Subcommittee understands: help us understand the nature of this priority and the breadth of it.

DR. BRANDT: I would correct that: what I intended to say is that I think point of care clinical decision support is the priority. I think that in the last decade we have addressed nomenclature thinking more about outcomes, and I don’t mean to disparage that in any way, but I actually think that nomenclature can support a closer, higher impact, more urgent need in supporting point of care decision support, which means using system intelligence to make it easiest for the physician or nurse or other clinician who’s interacting with the patient to do the most appropriate thing.

I think if you look at the folks who are looking to buy clinical systems now, they are fundamentally doing that because of the issues raised by the Institute of Medicine, because of the issues raised by the Leapfrog Group, because of the focus on accountability, because of a number of consumer metrics which are becoming available for provider organizations, and I think what they’re really looking for are successful interventions, as opposed to a better ability to measure their failures. And I would even say, although perhaps I shouldn’t, that obviously any organization that has a new system for measuring its inadequacies is always skeptical about how eager it is to implement that system, because obviously once you begin to measure you begin to find out that you have some deficiencies.

I think it is most important for the purchasers of health care systems to have the tools with which they can say, you know what? When we know what we should do, and we know when we should do it, we would like to harness technology to help make sure it happens reliably. And so that really has more to do with the decisions being made at the point of care, and then the reliability and efficiency with which the organization executes upon those decisions. It’s a slightly nuanced use of nomenclature but it’s actually relatively profound, which becomes apparent when you start digging into the actual details of what you mean when you’re asking for something, when you do something, or when you’re modeling why you should do something.

I guess I would also add that the other side of that is we see a world in which the medical knowledge vendors, the drug knowledge vendors of the world, the medical publishers of the world, are able to actually publish their knowledge using nomenclature in a way that becomes executable within systems. That’s actually what we’re working on currently. That’s obviously a quite different use; nobody really thought about how such and such a publisher would publish their medical knowledge so it’s executable using the terminology, but I think you can see how powerful that would be for implementing consistency and quality of care. So that’s the kind of focus we would really encourage.

DR. COHN: I want to thank everyone, and Sam, thank you for your thoughts on that one. I think what you’re saying is very important. As I reflect on the work that we need to do, obviously for this first phase nomenclature is the focus, but certainly that whole issue of how you structure knowledge so it can be readily implemented into decision support and guidelines is obviously a key issue, so thank you. Sam, Eric, thank you very much; Les, a pleasure, I’m sure we’ll be following up with you with more questions about ECRI and the UMDNS. We’re going on to our next session, which is the last. And Jim, I think you’re under some time pressure as I understand?

Agenda Item: Organizational Activities Relative to Clinical Terminologies - Dr. Cimino

DR. CIMINO: I apologize for throwing the most important government committee there is today into disarray.

MR. BLAIR: Jim, we’re usually in disarray so don’t worry about it.

DR. CIMINO: Well, I’m here; my name’s Jim Cimino, from Columbia University, but I’m actually here on behalf of the Markle Foundation and Connecting for Health to present some consensus statements that have come out of a recent meeting of that group. The Markle Foundation and Connecting for Health have as a goal an interoperable health care system, and part of realizing that goal requires the widespread adoption, and I’m going to emphasize the issues around adoption, of a comprehensive set of terminologies; that is not only desirable but critical. To that end, Connecting for Health convened a meeting of experts about seven weeks ago to look at the key issues related to terminology development and adoption.

So the objectives of the meeting were to explore the need for development and distribution of a comprehensive terminology set for health care, to understand how current terminologies relate to the UMLS, both those that are included in the UMLS and those that are not, and to discuss the methods that would be required for mapping additional terminologies into the UMLS.

Then we sought to look at what kind of work would be needed to take the UMLS, or subsets from the UMLS, and turn it into something usable in clinical systems today. And, recognizing that there are gaps in current terminologies, we sought to figure out how to prioritize those gaps so we could direct resources to addressing the most important ones. And finally, to generate and look at some options for the long term maintenance and distribution of terminologies, and to identify strategies that would encourage providers to adopt these common terminologies.

So, not surprisingly, in a one day meeting we did not achieve all of those objectives; instead what we found was that there were differences of opinion, and that was part of the reason for bringing that meeting together and attempting to develop a set of consensus statements. So what I’m going to present here today is a set of 16 preliminary consensus statements from that group. Those consensus statements are still under discussion; we’re collecting additional ones from the group, we’re trying to be as inclusive as possible in including all the stakeholders, and as I said, these are draft statements. You’ll see some overlap in them, as they’re still being refined. I thought about trying to refine them for the purposes of this presentation, which was given to me last night, but I thought if I did that it might no longer reflect the consensus but rather my own spin on things, so I’ve left them more or less unchanged. Although I did move the summary slide to the end of the presentation; I’m not sure what it was doing in third place.

Ok, so the consensus statements are divided into two groups: the first is terminology requirements and the second is process requirements. First, there are five terminology requirements consensus statements. The first is that an interoperable health care system requires a terminology set that has, as you might imagine, a lot of familiar characteristics. It’s cross-domain. It’s open, both in terms of its availability and also the ability to participate in the development of its content. It’s inclusive, not only with respect to domains but also with respect to the constituents that it serves. It promotes movement and interpretation of health care data. It utilizes appropriate domain expertise in its design, construction, and maintenance. And it allows accurate exchange, aggregation, and interpretation of health care data. Finally, we felt that this interoperable terminology should have a uniform structure across all of its domains.

The second requirement was that the health care terminology should be integrated into a single reference terminology set, so set is an important word there. We’re not trying to create a single terminology but a terminology set. Users would require an integrated single reference terminology for their systems.

Third, a single integrated model of health care information is required as a foundation for the integration of current and future health care terminologies. So it’s not just a terminology but an information model as well that’s needed, and that model would have a couple of characteristics: it would cross all the terminology domains and would rely on contributions from specific domains for its construction. It would also support various levels of abstraction, so that those who create the data could have the granularity they need, for instance for documentation purposes, while those who use the information would not be subjected to that fine-grained terminology but could use a higher level of abstraction.

Also, continuing within that third consensus statement, the integrated health care information and terminology model would deliver value to the following users: system developers, clinicians, knowledge managers, pharmaceutical manufacturers, terminology developers, lest we leave them out, and international bodies. And I’m sure that patients are probably supposed to be on this list somewhere as well.

Consensus statement four is that health care terminology should support a single systematized way of describing every aspect of the health status and health care of individuals and populations of individuals.

And fifth, the integrated terminology set should be, and here’s another laundry list of things that we’re working into a coherent consensus statement: transparent to source, which doesn’t mean that we don’t know where the things came from, but that we can use them in the same way regardless of which source they came from. The set would be common to all users; there would not be duplication of the sets. There would be a single repository, or there might be multiple repositories, but each one could serve as a single repository, so you could get all your terminology with one-stop shopping. Support would be provided with clearly defined responsibilities for maintenance, and there would be provision for appropriate credit to the contributors of the terminology.

So those are the terminology requirements themselves, and then with regard to the process of how we get there, we have 11 consensus statements. The first is that the process for developing integrated health care information and terminology models should have the following characteristics: openness, again, and it needs to meet the needs of all users. The second is that the same terminologies would be used by all stakeholders, so there wouldn’t be, for instance, different laboratory terminologies for different uses, say for order entry, laboratory systems, decision support, and so on.

The development, integration, maintenance, and adoption of standard terminologies requires a comprehensive set of functions to be specified, with clear roles and responsibilities for each of these functions. We have a diagram that we put together trying to show what we see as the important aspects of this process, from design to adoption and maintenance of the terminology. At the top there would be some sort of oversight group, and I’ll say more about that in some of the other consensus statements, that would include the evaluation process and would oversee the entire rest of the process. There would be a single point at which that oversight group would influence the terminology, and that would be in the modeling phase. The modelers, however, would have to work with the integration folks in order to have consistent internal models for each of the terminologies in the terminology set, and also consistent integration across the terminologies within that set. Those two functions then would impact the development, endorsement, and maintenance of the terminology, and all of those in turn would then go downstream to distribution, implementation, and, last, adoption. We put adoption last because we felt that adoption could not reasonably be expected to occur until implementation had been demonstrated as feasible.

And then finally, and I’ve added another arrow to this diagram from the handout, we have adoption with feedback to both the development and maintenance processes, to inform those who are building and maintaining the terminology about what’s going on where the rubber meets the road.

So given that diagram, there are a number of consensus statements about the different elements in it. The integration function would have the following characteristics: it would serve the public interest; allow participation by all interested parties; include a coordination and oversight function; be empowered to get through the “brick walls,” the obstructions that prevent people from cooperating in integration efforts; be independent of financial interests in products; have endorsement by standards development organizations; determine the domains where specific terminology needs must be addressed; and be impartial, utility driven, accountable, responsive, and have long term funding.

The fifth consensus statement is that the terminology integration functions encompass the following responsibilities: oversight, process management, and repository maintenance. Different bodies could carry out each of these roles, though obviously they’d have to work very closely together, or the roles could be combined within a single organization.

Sixth, the terminology integration oversight role would be carried out by an entity with the following characteristics, and this is that first box at the top of the diagram. It would have public and private representation, it would be independent, it would have stable funding, and it would also have some teeth: it would have the authority and mandate, the ability to break down barriers. And finally, it would encourage a coalition of sufficient influence, prestige, clout, and support to act as a tipping point, that is, to get a critical mass towards adoption and implementation.

Consensus statement seven: the goal of terminology integration is harmonization. We define harmonization as organizations with overlapping terminologies creating a single integrated terminology set, the word set I think should be in there, that is supported by all of the stakeholder organizations.

Consensus statement eight: the terminology domain boundaries should be clearly defined, and within each domain redundancy should be eliminated. So in this terminology set we would expect clear delineations between the domains, so there wouldn’t be redundancy across the domains.

Ninth, terminology integration should result in a single terminology within each terminology domain. So where there are multiple terminologies contributing to a particular domain, the goal would not be mapping of those terminologies but merging them into a single terminology that covers that domain completely and non-redundantly.

Number ten, the integration process needs to encompass linkages between domains. So it’s not just the information within each domain; we have to pay attention to how those domains relate to each other, and I heard a number of folks talking about that today, about how the different terminologies would relate to each other, for example, to provide definitional information about concepts. Cross domain linkages would have the following characteristics: they would be scalable, they would explicate the significance of the linkage, and they would represent the medical knowledge associated with those linkages.
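The cross-domain linkages described here can be sketched as concepts in one domain carrying typed links into another. The domain prefixes, concept names, and link types below are purely illustrative, not drawn from any real terminology.

```python
# Hypothetical cross-domain linkage: a diagnosis-domain concept links to a
# lab-domain concept that helps define it. The link itself is typed, so its
# significance is explicit rather than implied.
links = [
    ("dx:anemia", "has-definitional-test", "lab:hemoglobin"),
    ("dx:anemia", "has-finding-site", "anat:blood"),
]

def linked(concept, link_type, links):
    """Return concepts reachable from `concept` via a given typed link."""
    return [target for (source, lt, target) in links
            if source == concept and lt == link_type]

# Following a typed link answers "which lab concept defines this diagnosis?"
tests = linked("dx:anemia", "has-definitional-test", links)
```

Typing the link is what makes the model scalable: new link types can be added per the consensus statement without re-interpreting existing ones.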

And finally, the eleventh: demonstration projects will be needed to determine how to optimize the development and implementation of integrated health care terminology. The demonstrations should test and evaluate the deployment of standard terminologies over time, and there are a number of projects underway that could be considered as demonstration projects, including the VA, DOD, and Kaiser.

So we conclude now with one of the early slides that I moved to the end, a summary. The summary points are that there should be some oversight entity: this should not be left simply to terminology developers to produce what they think is needed from whatever feedback we give them, but there needs to be some oversight entity with responsibility for this project as a whole. There needs to be some focal point for that oversight entity to effect the process management. And there needs to be a single repository for all the terminologies in the terminology set.

As next steps, we’re going to conduct a series of conference calls with the workgroup to refine and augment the consensus statements, and then we will turn them over to the steering group for Connecting for Health to get their endorsement, and then we’ll start to disseminate these consensus statements to a broader community, which I guess I’ve actually already started today; but when we have final approved ones we will disseminate those, get feedback, and enhance them as well.

MR. BLAIR: Jim, how much time do you have for comments before you need to leave?

DR. CIMINO: I have ten, 15 minutes if we need that.

DR. COHN: So we don’t want to have HIMSS --

MR. BLAIR: They’ll come up after, this way, at least we can get questions to Jim and then HIMSS.

DR. COHN: Questions for the first presenter, Walter?

DR. SUJANSKY: First of all, Jim, thanks a lot, this is very helpful, a lot of ideas; clearly a lot of thought went into this. I do have some specific questions about the proposals. One is the internationalization that’s mentioned as a requirement, to put it that way. First of all, is that considered to be a firm requirement of such an endeavor for its success in the U.S. market, or for the value that it might provide in the U.S., and what is the rationale behind that as a requirement? Is that a sine qua non of this?

DR. CIMINO: Well, it comes from a number of different corners. In fact the group did not have international representation on it per se; however, a number of the stakeholders have international interests. We’ve heard that vendors, of course, have an international interest, and so their cooperation and participation will be helped by having an international bent on this. Also, a number of the standards development organizations, HL7 for example, bring a lot to the table, but from an international standpoint. If HL7 were to say ok, we’re going to work on this terminology and it’s going to be a U.S. based terminology, then they would have problems marshaling the resources of the rest of the organization, because they have this international aspect. So in terms of the cooperation of the stakeholders, I think many of the stakeholders have a need to pay attention to the international community. And then otherwise, thinking about where we go from here: if we’re the leaders in world health care, we want to be able to provide something that’s going to help not just in this country but across the world.

DR. COHN: Other questions?

DR. SUJANSKY: You mentioned in describing the oversight body that it would need to have authority and mandate, can you talk about that a little bit more? Has there been any more discussion of what kind of authority and mandate --

DR. CIMINO: It would have to have authority from the federal government to oversee this process, to resolve disputes, to break down barriers, to get cooperation. Nobody is using the words mandated or required in terms of terminology use; I think some people would like to, and we kind of danced around that. We hope that that won’t be required, we hope that the benefits of these terminologies will be sufficient that they are used, that the advantages of their use will be obvious. But I think in general we believe that this is a group that has to be supported at the highest level of the government in order to accomplish this, because there are too many conflicting opinions, as you know, and conflicting priorities, so there’s going to have to be some arbitration to resolve that.

DR. STEINDEL: Jim, I’d like a little clarification on that last set of statements. There’s a little bit of difference between control at the federal government level and support at the federal government level, and I think what we’ve seen with the HIPAA process is that control at the federal government level can be somewhat problematic. For support at the federal government level, the federal government is one of the largest users of this type of information, and support is very important. So I’d like you to comment on that.

DR. CIMINO: We’re still batting around some of the different ways this would look. For instance, one proposal would be that the National Library of Medicine should be in charge of this. That’s an easy out; I don’t know how the National Library of Medicine would feel about it, but that would be one model, whether it’s good or not. Another model would be an independent organization that would have government funding, and the government would give it a stamp of approval, but its decisions would be unaffected by direct government contact. The problem is that may not have any teeth. Now you can look at something like the Joint Commission on Accreditation of Healthcare Organizations. That’s not a government body, it’s an independent body, but it’s got lots of teeth. So I think there are a lot of different models, and of course we want to find one where the people making the decisions know what’s going on and have the best goals at heart, but they need to have some big stick to swing as well.

DR. COHN: Well Jeff and then Walter.

MR. BLAIR: Jim, my compliments to Connecting for Health for coming up with, and I understand it’s preliminary and I understand it is a draft, a lot of very well thought out ideas for where we might want to go in the future. A lot of us here on the Subcommittee today and tomorrow are struggling more with what we can do during the next few months or year. The piece that struck me is that you seem to be articulating really good long term goals. Was there any discussion or consideration about interim steps or major milestones? What I’m thinking of here is that you made the comment that you’d like to go beyond mapping to integrating all of the sets of terminologies that could become a core. I would think that if we can wind up selecting a set of terminologies, get some degree of consensus that they’re the right ones, and be able to map those, that might be an appropriate intermediate step; it may take us through many years before we could get to a point where we’re really integrating them. Did any discussion of that nature go on?

DR. CIMINO: Well, I think actually the intent of the meeting was to try to accomplish some of that and lay out some short term goals and tasks. But what happened was we found, when we brought everybody together, that there was a lot of distance between people’s perceptions of what needed to be done, and so we sought to address the long term goals first and then figure out what the stepping stones were to get there, and that’s part of the ongoing process that’s going to continue. So that’s part of it.

With respect to your specific issue about selecting some terminologies and mapping them versus merging them: if the terminologies don’t overlap with respect to domain, for instance if you were to say, well, we’re going to use LOINC for laboratory and we will use all of SNOMED except the lab part of SNOMED for everything else, just for example, then there’s no overlap, and so you don’t have the mapping/merging issue at all, because now you’re talking about different terminologies within the terminology set. But the merging/mapping issue was, if you were for instance to say, well, let’s take all of SNOMED and let’s take all of LOINC and let’s put all of that in the mix, then you’d have to figure out, well, what about the overlap and how are we going to do that. Our recommendation was not to do mapping, because mapping requires a lot of ongoing maintenance. As each terminology changes, the mapping changes over time, and so it becomes much more difficult to do in the long term, although of course it’s easier in the short term to just do mapping, and we thought that merging was going to be more beneficial down the road. That was one of the few areas where we actually got down into some of the nitty gritty, but mostly we were staying well above that level.

DR. COHN: Walter, I think you have --

DR. SUJANSKY: That addressed my question.

DR. COHN: Other questions on this? Jim, thank you very much, very interesting, I guess we’ll see more updated views after this has gone through the steering committee and, are you going to be having future meetings?

DR. CIMINO: Right now we’re having phone conferences, and so we will have future meetings. I think there were some key stakeholders that were underrepresented at that meeting, and we need to go back and decide whether we’re going to bring them in and pursue the discussions. But in terms of plans we don’t have any specifics on the calendar just yet. Thank you for the opportunity to talk to you.

DR. COHN: Thank you for joining us.

MR. BLAIR: Ok, our next testifier should be from HIMSS, Ed Larsen.

Agenda Item: Organizational Activities Relative to Clinical Terminologies - Mr. Larsen

MR. LARSEN: Thank you. My name is Ed Larsen, I’m an independent business strategy consultant, and I guess longevity is my credential; I’ve been doing this for 25 or so years. I’m here representing the Healthcare Information and Management Systems Society, HIMSS, and basically presenting an analysis that we did of the summary analysis of the terminology questionnaires. But within the broader context of NCVHS, one of the problems with being around for 25 or more years is that this isn’t the first time we’ve addressed standard codes or the EHR, and we would certainly like to contribute to success this time.

Anyway, we begin by noting that the report you put together, though still in draft form, is very solid; it obviously goes to great lengths to look at what is, as we’ve seen, a thorny and long standing issue in health care informatics.

MR. BLAIR: You’re talking about the analysis report that our consultant Walter Sujansky, is that what you’re referring to?

MR. LARSEN: Yes. But we come to this with two thoughts. First, for what purpose are we here? And then secondly, a few comments on the analysis and how it may have an impact on a go-forward basis. I don’t mean to start with the question of why are we here, or even to question the interpretation and why NCVHS has the charter that it does. Let’s say, for example, though, that the core terminology group is correct in all respects, that it’s technically accurate, that it meets all the needs for a national standard medical terminology. The question is, does it move the ball any closer to achieving a universal health record? Allow me to use the EHR terminology as the most global, even though I know that PMR has been used by the Committee for legislation purposes. Quite frankly, I just went through what was for me an exhausting examination of what’s been going on in the EHR world for the last ten or 15 years, and on terminology, if we think clinical terminology is bad, just naming what it is we’re about is very difficult.

So the question, really, as we examine it from an industry perspective: we don’t believe that the primary and secondary users of the EHR, society in general, patients, all of us, have agreed on what an EHR is. Within that context any code set will do, if you don’t know what you’re trying to accomplish. The Committee issued its report in ’99, I believe, on the National Health Care Infrastructure, with a three-dimensional model of the things that an EHR really could support, and that’s really the crux of the problem, because we all have different views of what we would like it to do, and that really drives the requirements. We would submit that until we get that definition down, and in fact decide what’s feasible, phase it, whatever we have to do, it’s going to be very difficult to move forward.

You’ve got a dilemma. If you make a recommendation and in fact the industry, because they’re the ones that are going to have to implement it in the product first to make it available, is really waiting for it, if that’s been the hold-up, then for sure whatever you do freezes technology and other approaches. That’s on the one hand. If on the other hand the industry really isn’t waiting for codification of this-is-the-way-to-do-terminology, then the question is why do it now. Why not let some of the other initiatives, the web ontologies, the structured documents, play out; let’s see how that plays out if it isn’t needed now.

I’m not here to divert too much, but I think that we have to understand that in selecting a standard code and terminology set we are having a significant impact on other things. We’ve talked about some of the international issues, but fundamentally, as you all know, the electronic health record is more than the semantics contained in codes and terminology. It certainly involves structure, and it certainly, as the CPR study of ’91 pointed out, involves the system that it is embedded in. To the extent that you say these are the codes and terms that we’re going to use, you’ve said something about what the structure that is imposed has to be or can be, and something about the system, and I’ll speak to that later. The issue is that it can be a trade-off between how much semantic meaning comes from structure and how much comes from code. The point on international, and specifically on HL7, is that the decision has been made that each realm, a national realm or combination of countries, can select its own codes if necessary. Well, that really drives the structure of documents and the standard to be relatively agnostic to the power of the coding system. To the extent that we embed a powerful coding system into the EHR, we’re saying something about the structure and the system that has to surround it. That may be a good thing; it may be a good thing for the U.S. to go forward as a realm and decide what it wants to do. That’s not the point. All I’m trying to do is make the point that it does have ramifications; it’s not simply saying these are the codes and terminologies that are preferred.

And I think that goes to the second point, which is really that without explicitly defining the purpose of the EHR we really can’t judge the value of the code set. We can go from one extreme where all we want to do is electronically move a patient’s health record from one primary care provider to another, or the mythical case of the middle-aged businessman who has a heart attack in Los Angeles and they want his medical record to come from somewhere back east. That’s one use case, which is far different from trying to identify a bioterrorism incident in real time, or conducting clinical studies. The codes are different, and then we get into the discussion you’ve had about pre- and post-coordination: how much of the concept do you want to capture in code form at the time the concept is in the originating source’s mind? Again, we would suggest that without knowing the answer to what we are trying to do, we can’t answer the question of what’s the optimal terminology.

A couple of questions that we would raise on the analysis from an overall perspective. First of all, I, and I’m not representing HIMSS in this, am not a linguist or informaticist; some of those here are much, much more knowledgeable. But we do think there is a valid trade-off, particularly if technology isn’t the sole problem that is preventing us from having an EHR: a trade-off between business realities, market acceptance, and so forth, and the purity of the code, particularly when you define what the purpose of the EHR is. So we would certainly encourage the Subcommittee, as it goes forward, to examine some of the issues that were not resolved in the draft report, because they can overwhelm, as they have for 15 years or so, the other issues; business overwhelms the technology. I could almost guarantee, and I think most of the vendor representatives would agree, that if you could tell them what the business case is and the problem they were trying to solve, they’d probably get it to you in a 12-month cycle.

So in conclusion, I would first of all note that you were very, very adept in your recommendations on messaging standards, in balancing those recommendations and encouraging emerging standards. I would note that we have never examined the electronic health record; we’ve looked at messages and we’ve looked at codes, but the structure of a record has not been examined. And then I think we would recognize that there are trade-offs that are necessary.

It may not be the right time to come up with hard recommendations on terminologies to be adopted. It may or it may not but we would certainly encourage further analysis.

The last couple of points on the industry. I listened to Dr. Brandt’s discussion; I think he represents a fairly predominant view of the industry of what is being developed at this time. The issue, though, is that a standardized code set is much more of a problem when you communicate between enterprises than it is in developing an intra-enterprise clinical information system and decision support system. If you can control your environment, as one organization can, even if it’s multiple hospitals, you have a far different, and simpler, problem to solve than when data has to transparently move between different organizations. And in the United States we have no structure for that kind of movement, at least not electronically.

HIMSS sponsors, along with RSNA, the Radiological Society of North America, Integrating the Healthcare Enterprise. This group of primarily vendors is a standards implementation initiative; in other words, they take existing standards and try to use them to come up with a definitive implementation of an enterprise system. They have moved in the last year from the radiology domain into the broader question of the health care enterprise. So far standard codes have not been a problem, partly because they’re new at it, and partly because the problem within an organization isn’t quite the same as outside the organization. So in terms of how badly we need the standardization, I think we would all welcome it if it allowed a certain amount of flexibility in how to incorporate it into a system, if it was backed by some kind of an EHR mandate, and if it could be implemented internationally as part of an international standard, for many of the reasons that Jim Cimino articulated.

Thank you. I would say in closing that HIMSS is very interested in this specific project and the overall projects, so our comments aren’t to be construed as negative or anti any of this. We do, though, think that there needs to be some more work done.

Thank you very much.

PARTICIPANT: First a comment and then a couple of questions. I agree with your overall assessment that there’s danger in HL7 of things splintering around realms, but it’s more constrained than I think you represented. And it’s more constrained because of this: the organization as a whole determines the slots that the codes have to go into, and so the whole organization has to agree whether we’re going to represent a diagnosis by a single code or by a combination of codes, and there are ongoing discussions about how much you can compose within a given coded field. So I agree there’s danger there, but there are also fences and other things constraining it so that it’s not just a free-for-all; people recognize that to even call it the same standard, it can’t be something entirely different in the UK and in Canada than it is in the U.S.

The purpose of realms within HL7 is to say that if billing codes are mandated in the U.S. to be ICD-9-CM codes and are mandated to be some other set of codes in Australia, the slot where the billing code goes would be the same in all of those realms; it would have the same semantic purpose, but they could choose, actually they’re usually driven by legislation or the fact that they want to get paid, a different code scheme. So I’m not disagreeing with you, but I think it’s less of a free-for-all than maybe you indicated.

So the two questions. One is, you talked about timing, whether this is the right time, and said that there should be more work. I’m trying to think how we would know what work we would do to determine if the time was right, or what the parameters are that would indicate the timing is right to standardize these codes. Then I’ll ask my second question after.

MR. LARSEN: Well, first, I’m not going to dispute what the mandates given to the Committee and the Subcommittee are in terms of what they should produce. But I guess if I were solving the problem, and if I looked back at 15 years or so of progress or lack thereof toward the EHR, I would start at a different place, and perhaps some of the initiatives on defining the EHR that the federal government is involved in might be a better place, because if you know what you’re trying to do, and what timetable you’re on in trying to do it, that might be a very important criterion as to when to select specific codes. The other, flippant answer is any time is ok; I mean, if you don’t really know why, any time will work.

PARTICIPANT: Again, your comment was that we’ve looked at code use in messaging and some other areas, and I would just expound on that particular aspect. We’ve looked at use cases and said, for instance, that standard codes would be tremendously useful for public health surveillance, infectious disease, detection of bioterrorism, many other things. I think your assertion was that, again, we should look at the structure of the EHR and understand how these things would work, and I guess that hasn’t happened as publicly. At the same time I would argue that we’re receiving testimony, and in fact many on the Committee are very familiar with the structure of the EHR, and today we’ve had testimony from Siemens, from McKesson, from Cerner, from 3M, all of which have great depth of knowledge about the structure of EHRs and those databases. So I’m wondering what you would see as the necessary preliminary work, from your point of view, that needs to be done in that regard.

MR. LARSEN: Well, I think as I tried to convey, industry is moving forward on creating an EMR at the core of an enterprise clinical information system, and each one of those vendors is deciding how best to solve problems of interoperability and support clinical decision support and so forth. And the market will buy it; large health care enterprises see great value for the reasons that Sam indicated, Leapfrog, compliance, improving patient safety outcomes. Enterprises will buy it. But that doesn’t get you interoperability between the enterprises, and it doesn’t meet the needs of many of the secondary users of the data unless there is something explicitly put in place, because --

PARTICIPANT: When you start talking across enterprises now you’re back into the messaging space which I think we’ve done a lot of analysis on.

MR. LARSEN: I think we have looked at messaging. The question is, is that what we mean by electronic health record, that it’s something in an enterprise and we send messages about it back and forth to someone else, as opposed to some other view in which we move a record or an abstract or something that has document nature as opposed to message nature?

PARTICIPANT: Well, even if you send documents they’re messages, in the sense that this is a thing of communication, which is what a message is. Whether you’re moving folders or you’re moving documents or you’re moving something else, those are all messages; those are exactly the considerations within HL7 for those parts. And so the EHR, one here or one there, or if there’s one major one and you send the things, but if you’re talking about interoperability --

MR. LARSEN: It’s when you open the package, the message got there fine, you open the package, do I understand it, which gets to document structure or whatever we want to call the package, of one of which element is the codes, the common code set.

DR. COHN: Other questions Walter?

DR. SUJANSKY: Yes, I have one question to preface my question. This is kind of obvious but I just want to make sure I understand this clearly. You’re here representing the official viewpoints of HIMSS, is that correct?

MR. LARSEN: I don’t think anyone would say there’s an official viewpoint and certainly I can’t represent it, what I can represent is that they asked me to do the review, perform the analysis which was reviewed internally, and they said ok, go ahead.

DR. SUJANSKY: Ok, that sounds like you’re representing their official viewpoint.

MR. LARSEN: I would not call it an official viewpoint, they have certainly seen and approved of what I have put forward as my analysis of the situation.

DR. SUJANSKY: And they sponsored you to come here to present this to the Subcommittee?

MR. LARSEN: Yes, they paid part of my travel.

DR. SUJANSKY: Thank you. I just wanted to clarify that. Another point of clarification, and this follows up a little bit on your discussion of the last point that Stan was making: the work of the Subcommittee right now, per the scope and criteria document that the Subcommittee put forth in December, doesn’t necessarily move towards the specification of a standard for the electronic health record. It may be terminology support, as you were just discussing, simply for the exchange of patient medical record information between systems that internally store that information in different ways. And Jeff and Simon, you can jump in here if I’ve got that wrong: the December document expressed that ideally there would be a complete information model underlying the terminology model and message standard and so forth, and that was a very good long term goal, but in the short run it was more important to focus on terminology standards, and possibly the integration of those standards with messaging standards, to achieve something tangible, to move the ball forward in terms of interoperability of clinical data across vendors, across enterprises, between providers and the government and regulatory agencies and agencies doing biosurveillance and so forth. And I want to make sure, again, that I understood your comments: that, well, if we’re trying to standardize the electronic health record then we shouldn’t be starting with terminology. Is that a fair paraphrase of your comments?

MR. LARSEN: I would say this. My concern would be that codifying things that move between enterprises implies that there’s got to be some kind of standardization in the way that data was originally captured, authenticated, and coded, and that you don’t just, for one silo purpose or another, put it into a standard nomenclature in a standard message and send it to somebody else. I mean, messaging could certainly be the focus and I have no quarrel with that, but how valid and authentic and secure has the data been before it got to the message? And if it’s not part of the workflow of the system, and is just added on later, I’m not sure what the advantage is --

DR. SUJANSKY: Could not the same thing be said of the HL7 message standard which has been used successfully for a very long time? It’s up to the sender and receiver of a message to make sure that that data that’s put in that message reflects the data as it was captured --

MR. LARSEN: That’s correct, and it’s done within an organization where there’s a lot more control than when you have independent enterprises.

DR. COHN: Ok, Jeff and then Steve.

MR. BLAIR: Ed, thank you for your comments. I think I shared the idea, when we all began this process some years back, that maybe the first thing that would be appropriate for standardization would be the content and structure of the record, or an information model that would wind up defining the record, and maybe that still is, in a perfect world, the most logical way to proceed. As you can tell, the Subcommittee hasn’t proceeded in that way, and my comments now are not really trying to say we’ve done things right, but just to articulate the reasons why we’ve gone down the path that we have. You have very thoughtful insights on this, so I would like to continue to have your comments and critiques now and at other times, but let me explain why we headed down the path we have.

First of all, our directive from HIPAA was to study uniform data standards for patient medical record information and the electronic exchange of that information. And when we were to study these uniform data standards for adoption, we had to look at what was available for adoption, and with the brief experience that we had with the earlier HIPAA standards it really did seem very compelling to us that we needed to pick standards that had demonstrated some degree of success in implementation, and message format standards had demonstrated some degree of success. So that appeared to be an area where we could go forward, and there would be some degree of industry consensus, not total, not perfect, but some degree of industry consensus, and that at least could be a start, so it was a pragmatic choice.

And then we began to consider what would be the next logical area and I’m afraid we listened to the industry and the industry was really telling us that they needed to have code sets and terminologies so that they could go forward with the next steps and one of the things that we heard today was the high priority for decision support and outcomes analysis as well as terminologies needed to support messages, as well as terminologies needed to support other activities.

And so that does appear to be something we’re hearing from the industry. But there’s another source that we’re responding to, another, you might say from a marketing standpoint, customer of our Subcommittee, and that is the needs of the federal government. While we’ve waited many years for the federal government to put standardization of patient records on a high priority, that has finally happened. And now that it has, the Consolidated Health Informatics initiative is a vehicle that is trying to drive that forward among DOD, VA, and HHS, and as they’re trying to drive that forward they’re also communicating to us some of their priorities and information requirements, and our Subcommittee is listening and trying to see what we can do to be responsive to their needs as well as the private sector’s.

So I’m only explaining this not to say that we’re necessarily making the right choices, but just so you understand the reasons why we’ve made some of the choices that we’ve made so far. And now that I’ve said that, I would repeat that I respect and admire not only your experience with respect to standards, which has been very rich, but also feel that HIMSS, which is a very large representative organization, represents many constituencies within the health care community, and we really want to make sure that we are also listening to the membership represented by HIMSS.

MR. LARSEN: Well, Jeff, I certainly would not imply in any way that I was criticizing the Subcommittee or any of the work that it’s done, or questioning the HIPAA mandates and so forth, or the general need. I was expressing, based on quite a bit of experience, why we don’t have an electronic health record and what the problems as I see them are. And again, I would suggest that they probably don’t start at technology; but technology and codes and terminology, that is something that has to be solved, there’s no doubt about it, and I think the work the Subcommittee has done is excellent. So that’s not the point of my comments.

DR. STEINDEL: As Walter noted I think in one of the previous sets of questions, Jeff has encompassed my comments.

DR. COHN: Are there any other questions, comments? Ed, thank you very much for your testimony. We look forward to talking to you further, especially since at some point we’ll obviously be reflecting, as we do frequently, on things that will help accelerate the implementation of electronic health records, which I think go way beyond the technical issues, into financial business cases and all of that. So we’ll be talking.

MR. BLAIR: One last comment before we conclude: Suzie reminded me, thank you Suzie for keeping me on track, that tomorrow morning we are scheduled to convene at 8:30.

DR. COHN: I was going to remind everybody, I think Jeff slipped that one by me.

MR. BLAIR: Suzie, did we forget anything else, any other suggestions or comments?

MS. GREENBERG: I think that on the previous panel, you said you would defer if there were any questions for the last speaker.

DR. COHN: Actually I hadn’t, but if there are any additional questions --

MS. GREENBERG: I don’t know if anyone had any questions but because we needed to move on to --

MR. BLAIR: As a matter of fact, while he’s getting settled, why don’t I just point this out also for the Subcommittee members and staff. We’ve all been running at 90 miles an hour, and that includes Suzie Bebee and Steve Steindel and myself and Michael Fitzmaurice, and Marietta Squire, thank you, and there was just a plethora of information to try to get to you before these several days. I don’t know if you’ve had a chance to look at several different documents that were sent last Thursday and last Friday and then again on Monday, but let me tell you what they are so you know what to look for. The first was electronic versions of the responses of the users to the questionnaire in preparation for their testimony today. We refer to that as written testimony, which may in many cases be somewhat different from their oral presentation that was presented today, and you’ll find it in a uniform format.

In addition to that, Suzie Bebee took the time to pull together a spreadsheet so you could begin to compare the information which I think is extremely valuable and you will find that, the latest, you sent two versions of that, what was the latest version? Last night at 9:00, so I don’t know if this will be homework for tonight.

Now the third thing that we sent you, and I struggled with this because we had a little bit of concern which affected the timing of when we sent it: we had responses to our original questionnaire from the terminology developers. That is a large document; it’s in a ZIP file. We were hesitant to distribute it before the terminology developers had a chance to make comments on the analysis and feed them back to us, but I felt as if you really needed to have a chance to see them if you wished prior to our testimony today, so I sent you that in a large ZIP file last Friday. You may want to look at that, although it’s a big file, and if you haven’t downloaded it already I think you may have difficulty downloading it here in the hotel.

Any other documents? Ok, then Leslie, Les, one of the things I was interested in with the UMDNS was, could you give us some idea of how widely it is being used and where it’s being used?

MR. WOOD: I think Ms. Coats could answer your question overall better than I could. I have, understandably, been in the Air Force for 30 years and used it for the past 20-something years in the Air Force, and since I retired from the Air Force I have been involved in developing this new system for DOD. If you’re referring to where we’re using it, we’re at approximately 40 hospitals right now and each service is deploying approximately two new hospitals a month. That includes a couple that are receiving patients from Iraq.

MR. BLAIR: Is it pretty much stand-alone, or has it been integrated by any of the traditional health care information system vendors into their systems?

MR. WOOD: Are you talking about the defense medical logistics system or are you talking about --

MR. BLAIR: The code set.

MR. WOOD: The code set, as I said, I’m not familiar with the code set overall. Ms. Coats, who is from ECRI, is here; I was asked to come and show how we used it as a user, you understand what I’m saying. I’m not necessarily here as a knowledgeable user of code sets overall.

DR. COHN: Jeff, I think we have an expert --

PARTICIPANT: Vivian Coats, who is with ECRI, which developed it, is here, and I think she would know the answers to your questions.

MS. COATS: To answer your question about where and how widely it’s used: it’s being used in hundreds of hospitals in the U.S. Within a given hospital it’s embedded in systems and software for equipment management and procurement. It’s not something a physician is using when treating a patient; it’s used in the background for storage and retrieval of information about the equipment and devices that are certainly used in patient care, and it’s extremely important in patient safety, for example, for tracking device-related adverse events. It’s used internationally and has been translated into many languages; it’s used by the Australian Patient Safety Foundation to encode device-related concepts in their adverse event database, for example. It’s been incorporated into the UMLS for what, ten years, Betsy, 11 years? As part of that it’s been mapped to other terminologies like SNOMED, like ICD, and so on. It’s been translated into Spanish and, through PAHO, the Pan American Health Organization, distributed free of charge to all hospitals in Latin America. It’s been translated into German, Russian, Turkish, and many other languages.

DR. COHN: Fairly significant. I have a question, and I see Steve has a question --

DR. STEINDEL: Just a clarification or expansion, does the FDA use it? Or are they looking at it?

MS. COATS: The FDA is currently using its own system, which it is going to be replacing. UMDNS was part of an effort, through the European standards organization CEN(?), to create a global medical device nomenclature system. We participated in that effort, and in fact UMDNS was the primary source vocabulary for that multi-year project. At the end of the day that nomenclature has problems; there’s no maintenance infrastructure in place, and so on, so the FDA is sort of straddling the fence, analyzing and deciding where they’re going to go. They will abandon what they currently have, though.

DR. COHN: Ok, let me ask a question. Obviously Les talked to some extent about the actual equipment and identifying it, and I think that’s very important. But certainly, from many of our views around patient safety, as you commented, the issue is whether we know what hip was put in a patient and whether there are indeed patient safety issues related to that hip prosthesis or set of prostheses. I just wanted to clarify that it’s these sorts of things that ECRI and the UMDNS also capture, in addition to defibrillators and chairs around the hospital and all of that. Is that correct?

MS. COATS: Yes, that’s correct. The scope of the terminology comprises equipment such as radiologic equipment, implantable devices, reagents and in vitro diagnostics and associated test kits. Right now we have a project underway, a collaboration with the National Library of Medicine, to extend the terminology to encompass emerging technologies that are not yet on the market, as well as devices that are important in counterterrorism and biodefense, emergency preparedness equipment, and many different kinds of environmental monitors.

DR. COHN: Ok, let me ask another question, because I’m not as familiar with this as I am with the other terminologies. Les, for example, stuck a device class into it, which I understand was sort of an add-on that you created; is that correct?

MR. WOOD: The real reason for that was to be able to do searches that pull back all the specific nomenclatures. The nomenclatures are pretty specific, as you saw from the brief example; there are about eight for defibrillators, and for aspirators I think I counted 15 the other day. If you want to know all aspirators, you don’t care whether they’re surgical, low volume, or high volume; you want to classify those, but you want to be able to pull back all of those equipment records.

DR. COHN: And I agree, that’s obviously something that isn’t in the nomenclature now. Are there plans for any sort of structure like that?

MS. COATS: Yes, as a matter of fact. What Les is talking about, if I may paraphrase, is that in most cases he is using the base concepts of the terms in the nomenclature as a kind of higher-level category. A comment was also made earlier, I think by Eric Mays of Apelon, that UMDNS needs improvement with respect to hierarchy, and that is something we’re working on now, also in conjunction with the work we’re doing with the National Library of Medicine. We are creating and defining a hierarchical structure that sits above the current data set of controlled terminology.

MR. WOOD: I’ll add to that. In most cases, as with the defibrillators or aspirators, a numerical code already existed that would give you that, but I wanted to be able to put a code in a record and still pull back all of the related records. For aspirators and defibrillators we used their codes, which already existed as a generic class. It was only in cases where there was just one nomenclature entry for a type of device, with no class above it, that we created our own class on top of that one specific entry. I can’t pull an example right now, but the creation of the class was to make it easier on the users.
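The class-on-top-of-specific-codes arrangement Mr. Wood describes can be sketched as a simple rollup: specific nomenclature entries are grouped under a generic class so a single query returns every matching equipment record. All codes, names, and record fields below are invented for illustration; they are not actual UMDNS codes.

```python
# Device-class rollup: a class code maps to the specific nomenclature
# codes it covers; searching by class returns all matching records.
# All identifiers here are illustrative, not real UMDNS codes.

DEVICE_CLASSES = {
    "ASPIRATOR": ["11-001", "11-002", "11-003"],   # surgical, low-volume, ...
    "DEFIBRILLATOR": ["12-001", "12-002"],
}

EQUIPMENT_RECORDS = [
    {"id": "A100", "nomenclature": "11-001"},
    {"id": "A101", "nomenclature": "11-003"},
    {"id": "D200", "nomenclature": "12-002"},
]

def records_in_class(class_code: str) -> list:
    """Return all equipment records whose specific code falls in the class."""
    codes = set(DEVICE_CLASSES.get(class_code, []))
    return [r for r in EQUIPMENT_RECORDS if r["nomenclature"] in codes]

print([r["id"] for r in records_in_class("ASPIRATOR")])  # ['A100', 'A101']
```

The point of the extra layer is exactly what the testimony states: users tag each record with the most specific code available but can still retrieve everything in a category with one lookup.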

DR. COHN: Other questions?

DR. FITZMAURICE: I’m not sure if I’m confusing the nomenclature with the application. For example, when somebody smashed the back end of my car and I took it in, they entered my VIN and out popped a diagram of my car; they could focus right on the rear end and all the different parts that had to be replaced, though you still had to make a judgment. When I went to get insurance I gave them the VIN, and they knew just what kind of car it was and what it had, and they could quote me a rate if I told them who was going to drive it. Take it to a garage, same thing; they key off the VIN. Does the application let you put in the model number for a particular device and then fill in all of these tables? Is that content part of the ECRI application, or part of the nomenclature and the concepts, and do you have to fill in all of these blanks yourself?

MR. WOOD: Well, I don’t think the medical industry is quite as sophisticated as the automobile industry, because you can’t enter a serial number into any system, including FDA’s, and come out with all that information. The concept we have here is essentially that we adopt a nomenclature, we put in management guidance, and we do a risk assessment for all the DOD hospitals: we say this is a high-risk item, we don’t care what it costs, you’re going to carry this on your records, you’re going to keep an inventory of it, and you’re going to do maintenance on it, and here’s your maintenance. Based on the nomenclature we put that in what is known as a null(?) set, it is distributed to each site, and when they build an equipment record they inherit that management guidance. The manufacturer, the model number, and the serial number are specific to each piece of equipment, and that’s where we’re not up with the automotive industry: for a car you have to record with the federal government what the vehicle identification number refers to, and we don’t do that for medical equipment.
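The guidance-inheritance step Mr. Wood describes can be sketched as a centrally maintained table, keyed by nomenclature code, that each site’s new equipment records draw from. The field names, codes, and values below are assumptions for illustration only, not the actual DOD data model.

```python
# Centrally distributed management guidance, keyed by nomenclature code.
# When a site builds an equipment record, the record inherits whatever
# guidance exists for its code. All fields/values are illustrative.

MANAGEMENT_GUIDANCE = {
    "12-002": {  # e.g., a high-risk defibrillator type
        "risk": "high",
        "inventory_required": True,
        "maintenance_interval_days": 180,
    },
}

def build_equipment_record(record_id: str, nomenclature: str) -> dict:
    """Create a site-level equipment record, inheriting central guidance."""
    record = {"id": record_id, "nomenclature": nomenclature}
    record.update(MANAGEMENT_GUIDANCE.get(nomenclature, {}))
    return record

rec = build_equipment_record("D200", "12-002")
print(rec["risk"])  # high
```

Site-specific details (manufacturer, model number, serial number) would still be entered per device, which is the gap with the automotive VIN system that the testimony notes.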

DR. FITZMAURICE: I see the problem, I see the part of the solution that you have adopted.

MS. COATS: If I may also comment: as I said before, we have many users of UMDNS, and how they implement it may differ depending on the context. They may add to it, create extensions, or use it for different purposes. I asked Les to come and tell you how he has implemented it on behalf of DOD because I thought it was a particularly interesting example, but there are many other ways in which it’s being used.

DR. COHN: Great. Other questions? I’d like to wrap up since it’s getting to be 5:30. Jeff, do you have a final question that you want to ask?

MR. BLAIR: Eric, are you here? Jim Cimino presented some thoughtful ideas about the ultimate direction we should go, and it made me think a little more about the mappings issue. Apelon, as I understand it, has done a lot of the mapping under contract as part of the UMLS, and we also had testimony from the Health Language Center, which has also done mapping. I was wondering if you could share any guidance you may have: if we can’t get to a unified integrated set, we’re still at the mapping stage, and we wind up recommending two, three, or four different terminologies as part of a core set of PMRI terminologies, what mapping implications should we be considering with a recommendation like that?

DR. MAYS: I think there are two distinct phases of mapping: mapping from the clinical space into the administrative space, and then, potentially, mappings among clinical terminologies. The mappings from the clinical space to the administrative space will presumably be driven by economic factors; if people are documenting in a clinical terminology and a business case can be made to facilitate the capture of administrative code sets from that clinical coding, then presumably vendors will jump into that space and provide those mappings, or a business case will be made within the government, within CMS, to provide them. I should say that we’ve had a lot of experience with mappings, both in the preliminary work we do for the National Library of Medicine and in the creation of the UMLS, and a lot of those mappings among terminologies are done at the Library of Medicine; Betsy can comment on how much work is involved. I will say that while it’s reasonable to do these mappings from the clinical terminologies to the administrative terminologies, it is rather unrewarding work for the individuals tasked with doing it, and we do whatever we can to mitigate the burnout, as it were, that comes from sitting people down to do it. Just to give you an example: CPT is a relatively small code set, yet the mapping of CPT to SNOMED is a non-trivial task. It requires a lot of care and a lot of clinical knowledge and experience, and it’s not easy.

Now, in distinction, among the clinical terminologies I think that even if there are non-overlapping terminologies, there would still be a need to provide correlations among them as they relate to the reference taxonomies. For example, even though drugs and labs, say, are non-overlapping terminologies, they nonetheless both make reference to chemicals and organisms, and all the clinical processes that have been discussed today could greatly benefit from common reference taxonomies, so that we could do all that really cool stuff people want to do at the point of care and for later outcomes analysis. So just because you would choose a set of non-overlapping terminologies doesn’t mean you wouldn’t have to do any mapping; as I tried to say, my preference would not be to do the mapping but instead to have common reference taxonomies and harmonize among those non-overlapping terminologies.
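Dr. Mays’s point about common reference taxonomies can be sketched concretely: two non-overlapping terminologies (drugs and labs, say) are never mapped to each other directly, but each links its terms to a shared reference concept (here, a chemical), and correlations fall out of the shared link. All codes and concept identifiers below are invented for illustration.

```python
# Correlating non-overlapping terminologies through a shared reference
# taxonomy: each terminology maps its own terms to common chemical
# concepts; no direct drug-to-lab mapping is maintained.
# All identifiers are illustrative assumptions.

DRUG_TO_CHEMICAL = {"RX-42": "CHEM-glucose"}      # drug terminology -> reference
LAB_TO_CHEMICAL = {
    "LAB-2345": "CHEM-glucose",                   # lab terminology -> reference
    "LAB-9999": "CHEM-sodium",
}

def labs_related_to_drug(drug_code: str) -> list:
    """Find lab terms referencing the same chemical concept as a drug term."""
    chem = DRUG_TO_CHEMICAL.get(drug_code)
    return [lab for lab, c in LAB_TO_CHEMICAL.items() if c is not None and c == chem]

print(labs_related_to_drug("RX-42"))  # ['LAB-2345']
```

The design choice mirrors the testimony: each terminology maintains one link to the reference taxonomy instead of pairwise mappings to every other terminology, which is what makes harmonization tractable as the number of terminologies grows.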

DR. COHN: With that, it’s about 5:30 and I think it’s time to stop. I just want to remind everyone that we start at 8:30 tomorrow morning, and I do want to tell everybody that we will be done at 3:00 tomorrow, so you can bank on it. Anyway, with that the meeting is adjourned.

[Whereupon, the meeting was recessed at 5:30 p.m., to reconvene the following day, May 22, 2003, at 8:30 a.m.]