[THIS TRANSCRIPT IS UNEDITED]

NATIONAL COMMITTEE ON VITAL AND HEALTH STATISTICS

Work Group on Computer-Based Patient Records

May 18, 1999

AHCPR Conference Center
6010 Executive Boulevard
Rockville, Maryland

Proceedings By:
CASET Associates, Ltd.
10201 Lee Highway, Suite 160
Fairfax, Virginia 22030
(703)352-0091

TABLE OF CONTENTS

Call to Order and Introductions

Medical Code Sets

Nursing Code Sets

Drug and Device Code Sets

Discussion: Next Steps


PARTICIPANTS


P R O C E E D I N G S [8:51 AM]

Agenda Item: Call to Order

MR. BLAIR: My name is Jeff Blair. Let me welcome you to the second day of our CPR work group hearings on medical terminology. For those of you who were not here yesterday, let me put this a little bit in perspective. The CPR work group within the NCVHS has the role of studying uniform data standards for patient medical record information and the electronic exchange of that information and providing recommendations to the Secretary of the DHHS by August 2000.

We have developed a work plan during 1998, which we are essentially following. The work plan was reviewed in our December work group meetings where we got feedback from a number of folks and modified it and updated it, and basically it has about six or seven major focus areas.

Those focus areas include message format standards and in March, I guess it was March 29, we had hearings on message format standards. On March 30, we had at least a couple of folks testify to us on data quality.

Yesterday and today we are receiving testimony on medical terminologies from terminology developers. Later this year we expect to receive testimony from vendors and users of terminology to be able to get some additional perspectives. We, also, have focus areas on how different state laws may affect our ability to create uniform data standards for patient medical record information and a few other areas, but in short that kind of gives you a little bit of perspective.

I think most of you are aware of the fact that I cannot see you. How many folks do we have at our panel right now?

PARTICIPANT: All but Harold Pincus.

MR. BLAIR: All but Harold Pincus. Okay, then why don't we do introductions first. Would the panelists please introduce themselves? Actually, let me have the Committee members first.

DR. COHN: I am Simon Cohn, a member of the National Committee and Chair of the Subcommittee on Standards and Security from Kaiser Permanente.

MR. MAYES: Bob Mayes, Health Care Financing Administration, staff to Committee.

MS. FYFFE: Kathleen Fyffe, member of the Committee. I work for the Health Insurance Association of America.

DR. FITZMAURICE: Michael Fitzmaurice, Agency for Health Care Policy and research, liaison to the Committee and with Bob Mayes co-lead staff to the Computer-Based Patient Record Working Group.

MR. GARVIE: Jim Garvie. I am with the Indian Health Service, staff to the Committee.

MR. JORDAN: I am Ron Jordan. I am immediate past president of the American Pharmaceutical Association and President of H. Caliber(?) Consulting Corporation.

MR. GABRIELLI: My name is Elmer Gabrielli, and I am interested in electronic databases.

MR. KENNELLY: I am Bob Kennelly. I am the Executive Director of the Medical Device Communications Industry Group.

MR. BLAIR: All right, the audience as well?

MR. ROSEN: I am Michael Rosen, Vice President of Consumer Affairs, Wellmet(?).

MS. MORLEY: I am Sue Morley from the University of Iowa.

DR. MC CLOSKEY: Joanne McCloskey, University of Iowa.

MS. NOSTRAL: Judy Nostral, Vanderbilt University.

MS. PROPHET: Sue Prophet, American Health Information Management Association.

MR. FREEHOLD: Bob Freehold with the Health Language Center.

DR. ROTHWELL: Dave Rothwell, with the Health Language Center.

DR. BERGLUND: David Berglund, National Center for Health Statistics.

MS. MEREDITH: Terri Meredith, Multum Information Services.

MR. HOOLEY(?): Jack Hooley, Life Cycle(?) Technologies.

MR. BEAVEY(?): Michael Beavey, American Medical Association.

MR. WAN(?): Elliott Wan, IBM Health Care.

MR. LAMB: Bob Lamb, from the American Dental Association.

MS. BECKFORD(?): Carol Beckford, American Nurses' Association.

MR. BLAIR: Okay, that is everyone. Bob Kennelly, could I ask you to begin and when Howard Prophet comes in, then we will fit him in when he comes in.

Agenda Item: Medical Code Sets

MR. KENNELLY: Okay, I am all set up. Good morning. My name is Bob Kennelly. I am the Executive Director of the Medical Device Communications Industry Group which is a group I will explain to you a little about who they are. It is a group that was formed about 3 months ago.

I am happy to be here this morning to talk to you about medical device terminology. The Medical Device Communications Industry Group is an industry program of a new portion of IEEE. IEEE is the Institute of Electrical and Electronics Engineers which is a 320,000 member professional society throughout the world.

IEEE is, also, a charter member of ANSI who is one of the oldest standards developing organizations in the United States, has been developing standards in power generation and power transmission for almost 100 years and has been doing many of the computer and network standards. All the Ethernet standards are IEEE standards and fire wire and printer cables and all of those things.

Recognizing some of the problems with an entirely volunteer standards process and the problems being mostly how do you get work done in between meetings, IEEE has formed as of January 1, 1999, an industry standards and technology organization. The goal of that organization is to allow members of an industry to work together in a more collaborative fashion, in a more intensive fashion with some funding to get some work done on a much more rapid basis than the traditional consensus standards process offers, and they are going out to many different industry offering this.

So, as an example, Sun has come to the industry standards and technology organization and asked about perhaps having Java run out of that organization. It is an attempt to have the staff and organization of the consensus- based standards be able to bring to bear on what is going on and what is really important to industry so that the FTC regulations are followed and things are open, but it doesn't necessary have to have a full consensus standard.

So, people can do specifications within this organization. The medical device communications industry group is the first organization formed under the IEEE. So, that was formed in February, announced in February at HIMS(?). There were press releases for that. The industry group members which I listed a little later, the original ones are ALARIS Medical Systems and Hewlett-Packard, followed by GE Marquette, Siemens and now Abbott Laboratories. So, there are five companies that are in that industry group right now.

The goal of that industry group is really to foster implementations and accelerate the standards development process, so to both get standards done quicker by getting some technical editing and then to get people out like me into the community telling people what the standard is and to get people within companies to adopt the standard so that implementations are done.

The standard that the medical device communications industry group supports is IEEE 1073 which has been under development since 1984. When I started on this in 1992, people told me 8 years might not have been too long to develop a standard. Well, maybe 15 is.

The 1073 standard is specifically designed for patient connected devices and has been done so on a volunteer basis by the members of that community. Patient connected devices are typically deemed to be things like infusion pumps, ventilators, patient monitors, pulse oximeters, things that are actively attached to patients that are either controlling things like drug infusion and ventilation or reading physiological parameters, typically in an acute care setting or critical care setting, more typically now, also, in home settings and other remote settings. So, it is not set up for laboratories. It is not set up for images and things like that. It is optimized for that setting where you have devices that are very small, tend to be, are using a lot of their processing power already for medical algorithm calculations, don't have a lot of processing power left over for the silly communications stuff, and that is how the standard is optimized.

That is a little overview of where that community is coming from. From the point of view of medical device manufacturers in looking at the questions for this Committee, the PMRI data types really have two different types. There is the historical summary data which is typically gathered for later review and would include primary care physician and summaries of major events, and the other is comprehensive data on the present condition, and this is especially important in critical care settings where you have the ability to have rapid changes in patient condition at highly unpredictable times. Patients are funny that way. They don't have events in predictable ways, and this is, also, very key in remote care settings.

Some of the uses for remote care right now are where people are going out on oil rigs and indian reservations and places like that where you can have infusions going on and blood pressure measurements going on that you have qualified health care physicians or clinicians in a remote place wanting to control devices, get real time information from those devices to do new applications that you cannot really support with the technology that exists now.

PMRI data types, it is important from our point of view to recognize that the patient medical record information has many different sources and they are all vitally important.

I would never disparage nursing notes. There is nothing my community from the device side is going to do about nursing notes. Okay, I would never disparage lab tests and any of those things, but I would, also, say that the infusion rate and the volume that was infused in the ventilator settings and the ECG readings and the strips from that and what the blood pressure was second by second for an event is vitally important information, that the PMRI is captured by many different methods. It is not sensible to have nurses' notes captured automatically by some process or machine that reads the mind of a nurse or something. A sensible technology is likely to be dictation for nursing notes or other clinician notes.

It is, also, sensible to understand that a device like an infusion pump that has a microprocessor in it and knows its settings in the subsecond range, many times per second and has alarm settings that have to go off in less than 2 seconds back and forth it is not really sensible to think that the data capture method for that is a clinician who reads that by his eyes every 8 hours and writes it down.

You have different sources of information that are all important. You have to recognize that there are different methods and different uses for that information. All of these types of information are needed to give clinicians what I have called a systems view of a patient. What is going on with this abstract system of a biological, physical-chemical system that for some reason is in some kind of failure and we want to understand? What is going on with this patient over time? When I first started in this we had someone who I was working with who had an MI and wound up in the hospital. He told us that at 2 in the morning he woke up one day. He felt funny. He called the nurses, and by the time they got there he didn't feel funny anymore, and he said, "I don't know what happened," and nobody knew and the data was gone.

I can guarantee that some important piece of information was lost. Now, he is fine, and maybe that didn't matter but I don't know. I mean something was going on. He knew something was going on, and that data was gone.

All of this data is really needed for a total analysis of this patient, and if you really think of the messages of the Human Genome Project and all that has gone on with marketing, retail marketing where retail marketing has done a lot of information to find out how our genes influence our purchasing decisions, our genes, also, influence what is the best treatment path for pools of us, and the medical view right now tends to be two extremes, that the patient is absolutely unique in the world and that the patient is common with everybody else, and it is both at the same time, and patients as we are with the Human Genome Project really are falling into very predictable subpopulations that react in very predictable ways. We just don't have enough data to be able to see all of that yet.

The tools exist in all other industries because that is how all the retail people do that. The portion of IEEE 1073 which is the terminology is called the medical device data language, MDDL. It is an object oriented representation of device parameters and their attributes in a very high level, object-oriented method, and it has been under development for several years. This terminology was developed by the medical device community in the United States and Europe. So, it presently has something like 3000 terms that are all based around concepts for medical devices. They include physiological concepts like pressures and readings and settings and ECG wave forms and wave sample arrays and things like that, and they, also, include device terminology like battery settings and alarms and things like that.

There are episodic scanners available. There are periodic scanners available to allow this to be as rich as possible but, also, to be highly constrained and optimized, and we call them canned messages for people that have extremely simple devices like pulse oximeters or infusion pumps that don't want the entire information system associating with them each time to get information from them.

They are want to just send simple reports and say, "Whoever wants it, be my guest. Here is what I have." The domain for this MDDL is any device setting, and those critical device settings, again are especially critical care and remote care settings where the devices are.

Now, just to explain again the IEEE 1073 standard, similar to DICOM(?) is the seven-layer communications standard which specifies all of the connectors and the cables and the data rates and the network and transport and all of those things all the way up to the data language for the representations and formats of the data.

This is an object-oriented language which has an ISO registered identification scheme so that any other object scheme can just adopt this entire language. So, it can just look and say, "Use the IEEE registrar," and then say, "Look in that table."

So, any application can essentially punt on what a term means. It says, "Here is the IEEE ISO object identifier. Go look in that table. I don't know what it means" and pass it on to whatever application wants it.

In terms of market acceptance of the standard, I didn't sit here proudly saying that it has been 15 years since people were working on this. There have been some fits and starts in the market need for device data. There have been some higher priorities within the health care provider community.

It is starting to come around, and the device vendors are seeing this that hospitals and providers in general are seeing that they need data from devices for applications that they want to do now. People want to get data from devices. People want to automate charting, okay, and that gets into medical records.

People want to be able to do real time inventories. People want to be able to do pay per use, minute by minute, and one very effective way to do paper use minute by minute is have data communications and know how many minutes. So, there are many applications that are in the process of being introduced into the market that require robust reliable data communications with medical devices.

One of the key signs of market acceptance is the formation in February of this medical device communications industry group. Each of these members has paid $25,000 annual fees to advance this IEEE standard, to get things moving forward, to get some technical editors, to get some representation more like this, and those are Hewlett- Packard, ALARIS, GE Marquette, Siemens and Abbott who have all paid $25,000.

There are approximately 10 other companies that are in the process of making those decisions at high levels. It is not a tremendously large amount of money, but it is not a market manager signature level either. So, it tends to take some time for people to look at this and see what benefit they get.

Another sign of market acceptance is that the nomenclature itself has already been adopted in Europe as the CEN TC251 prestandard. Now, the IEEE version adds some terms because the CEN work completed about a year and one- half or two years ago, but they are essentially 95 percent entirely compatible. That was worked on by some of the top European experts with assistance from people in the IEEE committee from the United States. Angela Rosomori(?) from Italy who was one of the gurus of terminology from Europe had a major impact on what the CEN IEEE standard is and the formatting of that and how the data is represented.

Additionally based on the CEN IEEE agreement on that prestandard and the CEN IEEE agreement in principle on the additional work being added into the IEEE standard the terminology portion of the IEEE standard has been proposed as an ISO work item and may go out in a couple of weeks as a fast track work item with an attached draft international standard.

There is about 2 weeks for that to be decided, but it will go out as a work item. The only difference between having a draft attached or not, it is about a 6-month difference in the ISO process, but in either case the terminology will be adopted with broad acceptance in Europe and the United States.

The United States Technical Advisory Group has already voted its support for this terminology document. There is, I hate to say, "Guaranteed," but noted support from the UK, from Germany, from Japan, from Spain, from many other countries in Europe that are already supporting this. So, this will be in the near future and ISO standard.

There was a question about terminology expansion. This terminology to be frank was not one of the things that my committee wanted to do, and we tried before I joined that committee in 1992, we tried to have many other people do it. "Please do this for us. We need these terms. We don't want to develop them," and everyone whom we went to and it included the National Library of Medicine and other SDOs kind of looked and said, "We don't want to deal with medical devices, and you guys should do that because you know the terms, and you know what is important for you."

So, we did. The expansion of the terminology from the IEEE standard will never go into any other area than what is important for the devices. There is no move here to move into other areas. There is no interest from this community to move into other areas. It is purely for devices, and the terminology expansion will come from two sources, that there will be new devices that come out that have terms that need to be standardized, and there will be new uses of present devices that come up with new parameter names and we will have to define those.

My favorite as I tell people, someone will come up with the banana test, and we will need banana test parameters. I don't know what they will be, but someone will come to our committee and say, "This is what we need," and there is a process within IEEE and within the support of the medical device communications industry group to rapidly accept new terms and make sure that they fit into the format and are adopted rapidly so that we can do market implementations.

The relation to other terminologies mostly is through mapping. There is a mapping in progress of IEEE 1073 terms into LOINC being done by a student at the University of Utah. That is Stan Huff that we have got working on this who is taking the IEEE 3000 terms, seeing how many match similar terms in LOINC. The first guess is that there are three, four or five hundred of those terms that are similar.

That mapping tends to be because the devices come from a very specific point of view and have many terms that are important to them that other applications don't need or require and other applications have points of view that don't match the device community's.

It is relatively uninteresting to most device manufacturers to represent their data in an HL7 based clinical data repository. Okay, now that is not the point of view of many other people within the industry. So, you cannot, from my community drive an entire terminology standard based on one application. There are applications for real-time data that the device community is extremely interested in and that they wanted to make sure they had terms for that and to go to some other community like LOINC and say, "Please develop these terms for our use?" they say, "We have enough to do."

So, that is where that goes, and that is really never going to be any different. That has to be. That was some of the messages we saw yesterday. Let the experts in a clinical area develop a terminology and then let us make sure we can map between them.

The mapping does need to be bilateral to allow new device uses. There are for example, infusion pump manufacturers who are extremely interested in having patient mass to be able to do drug dosing calculations on the pump. There are patient monitor manufacturers who are extremely interested in having patient name and ID and mass and things like that that they can get from an ADT system so they can display it on a patient monitor and not have to have it keyboarded in.

There is need for that data and that mapping of data needs to be bilateral because an infusion pump or a patient monitor does not want to know the extensive 50-year patient history or possibly all the allergic reactions or who the primary care physicians or all of those things that would be in a demographics package. So, there needs to be a mapping both ways and as I said earlier the entire terminology can be used by any other application through the ISO object identifier. So, when the object is identified you can just say, "Look at this table." So, you can do that fairly straightforwardly.

The government role that we see in this is not in development of the standard but certainly in the maintenance. Maintenance of any terminology is a long-term problem. There is commitment from the device industry to support this in the near term. In the long term having maintenance of that be done by the government would be beneficial especially since the terminologies will tend to converge and you can have that done in a coordinated fashion.

Another key government role we see is encourage the use of standards through policy setting, and those two sets of policy for devices are through purchasing rules, through government-owned hospitals, DOD, VA and also, through FDA rules.

It is sensible at this time with the maturity of that standard to imagine a scenario where the FDA would say that if you state that you have data communications on your device, you should use an industry standard, and they are not done now. Most device manufacturers are doing proprietary things. So, that is in the process of changing.

The government, also, recognizing that different sources of PMRI have different business models would be helpful to that overriding coordination, that there isn't a single method that this can be done because there are different points of view that you want to make sure that they are comparable as was evident in the testimony questions.

In conclusion devices are a rich source of data on patient care, on the status of the patient and on the near- term history of the patient.

Data from devices is needed for many new applications that people are in the process of implementing and device data is a critical component of improving patient care.

I thank you for your time this morning.

MR. BLAIR: Thank you, Bob. As we did yesterday we are going to save the questions until everyone on the panel has had an opportunity to testify.

Our next witness will be Dr. Gabrielli.

DR. GABRIELLI: As far as my credentials are concerned, I am happy to report to you that with my team we have successfully climbed the Mount Everest.

After 18 years of intensive research we have now completed and made operational the first medically intelligent computer system which can analyze and digitize free narrative medical text. It is not easy to summarize the hard work of 18 years within 20 minutes. Therefore I will limit my presentation to the major problems we had to solve in order to reach our goal.

Medical text analysis is inherently a multidisciplinary task. We had to recruit linguists to teach us grammatical analysis and semantics, philosophers specialized in science of classification, computer scientists to write the programs and physicians to keep us focused on the road and remain clinically correct.

We needed a person that could compose the text, identify the individual sentences one by one and characterize each sentence and its clauses.

Then we had to learn the structure of the different types of clauses, the characteristics of the various phrases and the grammatical relationship of the various parts of the speech or partially decompose the sentence, reveal the syntax, the structure but not the meaning. We then recognized that we need a special lexicon which could provide intelligence to the various parts of the sentence after it is parsed.

The most important initial criteria were one that the lexicon must be comprehensive, to list not only all medical terms but, also, all non-medical words and phrases in order to handle the narrative text. It took more than a decade to build such a lexicon which is now just a little bit short of a million entries.

Our medical lexicon must be hierarchically structured to facilitate retrieval and it has to link each term to its supra terms and sub terms as well as to its synonyms. The granularity of this lexicon must be defined as creating the most accurate comprehensive coding system and able to expand as the medical vocabulary continues to grow.

In addition the lexicon must store all grammatical characterization of each word linking noun singular and plural forms, verb tenses, adjective and adverbial forms.

As our lexicon was designed in detail and was under construction we met the next problem. If we taught our computer all the grammar, all semantics and linguistics, then what? What is the next step? We agreed that we must search for clinical information buried in the text.

We had to find those structures in the text which carry clinical information. This required a brand new line of thinking. We discovered that the information is scattered in the narrative text in the form of groups of words with unique semantic cohesiveness which carries the unit of clinical information. We called them clinical facts.

We, also, began to recognize that if we could detect all clinical facts we would have the gist of the narrative text which is our ultimate goal, the occurrence of information in the individual clinical facts such as pulsating headache or blood pressure of 170/100. Then we had to characterize these clinical facts and establish the rules about them.

We, also, found that the narrative part of the medical record consists of two components, the clinical facts covering about 25 to 30 percent and the rest of the narrative which is free of medical clinical facts but is still carrying relevant information such as the reason for office visit or death in the patient's family.

The next major block was the characteristics of the English language itself, the ambiguity of medical words and phrases. There are about 43,000 English words, 43,000, excuse me, English nouns, and there are 54,300 different meanings. This means that every noun has 1.24 meanings.

Also, there are about 9000 words in our clinical narratives with 19,800 different meanings. This means that an average word has about 2.2 meanings. In the past this ambiguity was of interest only to the academic linguist. Now, it is very much our problem. This ambiguity was a daunting bit of news. In the past many research teams gave up the task of ambiguity handling. It was too large and too complex to deal with.

Let us look at a simple word like left, l-e-f-t. The word may be an objective, meaning a side look to the left of the midline, but left may, also, be a very, the past tense of the verb leave; that means to go away, but left may, also, indicate political liberalism, and left may, also, be the verb in some idioms such as to be left behind or let us look at the word train, to train a man to fight, to train a plant to grow up on the wall, to train the mind, a line of wagons drawn by a locomotive or to train horses for racing. These conditions change the meaning of the word "train," and the lexicon must know all this.

Ambiguity turned out to be a word all by itself. First we had to handle homonyms that are words with identical autography and pronunciation but unrelated meanings, such as fast, meaning moving quickly or fast, abstaining from food.

Another category is grammatical ambiguity. A word can belong to more than one part of the speech. For example, the word "meeting" is both a noun and a verb. The lexicon must list all these ambiguities and direct the processing. It becomes apparent that the lexicon to support automated processing must list all the medical terms and phrases as well as all the words and phrases of medical conversation or English and resolve all ambiguities. This is a big statement.

Existing word listings can be readily categorized by their chosen goals and targeted users, first, the reference dictionaries like Dorland's, the ICD for Public Health Service, CPT or current procedural terminology, a price list, the National Library of Medicine unified medical language for effective search of scientific literature and finally, SNOMED with 100,000 terms and British red codes, both seem still in development. Red codes grew over 200,000 entries and SNOMED is over 100,000.

Our own medical lexicon at this moment contains over 400,000 medical terms, and we believe that the total comprehensive listing should be around 450,000. Thus, even red codes are barely at halfway for text processing.

Even more importantly a lexicon to support automated text analysis must also include terms, phrases of non-medical character and the two lexicons and a large family of smaller lexicons like idioms, abbreviations, legal terms and so on are currently just short of a million terms.

We believe, ultimately, that a single umbrella lexicon will be required which will cover SNOMED, red codes and also, the excellent codes of psychiatry. This umbrella code should cross reference ICD, CPT, LOINC, FDA's COSTAR(?) and all other terminologies in use.

If the patient record is fully coded, one can perform automated reimbursement coding. Alone this should save for the country about $45 billion annually. As a result the current armada of billing clerks could be terminated. The proposed consideration of current term listings may prove to be a complex political problem since several term listings are proprietary and developers may view it as a governmental intrusion, but on the other hand, if we accept the visionary recommendation of the Institute of Medicine we need a formal dictionary to support medical text throughout the country. For this a large comprehensive lexicon seems to be essential. Perhaps AHCPR and/or HCFA could play a central role in the orchestration of such a large-scale implementation.

For HCFA this may eliminate the need for PROs, the professional organizations since when you have all the patient records on line at a central place an operator could apply algorithms for monitoring all medical cases with some clinical condition.

For example, they could match the diabetes management guideline of the American Diabetes Association against all the actual diabetes in the country. This would be an enormous step in the improvement of medicine.

Such a plan would have some new problems. One is keeping the content of the lexicon current. There are some 4000 to 6000 new medical terms annually. Another area in need of urgent development is creation of brief computer oriented factual guidelines which are designed to be processed by computers. These guidelines should be confirmed by clinical statistical inferences, drawn from millions of patient records, combined with cost/benefit studies to move toward a fact-based optimization of clinical medicine.

In order to achieve all this another area in need of accelerated development is the anonymous clinical database for millions of clinical case histories. After removing all identifiers the risk of confidentiality will be minimized. It could be the main source of authorized guidelines in our future.

Bertrand Russell, the great British philosopher predicted that medicine would soon be developed from an art to science. Medicare care will be optimized with one solution for one problem. This will, of course, not develop easily since our patients are not uniform, but optimization of clinical care could very well be the goal of tomorrow.

Electronic patient records can provide the necessary data. I, also, feel that speech recognition should be developed in an accelerated fashion. Automated medical text processing research has developed a sophisticated new method of disambiguous phrases. This could be readily transferred to the phonetic ambiguities.

Last, but not least the idea of Larry Reed of delivering the knowledge to the care side should be supported. Introduction of electronic medical records should find the currently best clinical decisions but, also, encourage research and improvement by the stimulus from the areas where our clinical capabilities are still limited such as Alzheimer's disease or cancer just mention just two.

In closing, the history of Western civilization was profoundly impacted by the single technical improvement and that was the printing press of Gutenberg. As a consequence knowledge was disseminated rapidly, Renaissance adventurous trip of Christopher Columbus and ensuring enlightenment changed the landscape of our civilization.

Mendeleev with his periodic table of the chemical elements changed chemistry from alchemy to science. Darwin authored biology and Lindley systematized the world of plants.

I would like to respectfully submit that electronic medical records have a similar enormous potential in clinical medicine somewhat analogous to Gutenberg's improved printing and the work of the three heroes, Mendeleev, Darwin and Lindley.

The change is now within reach because the problems of text processing are now fully resolved. Explosive progress is now a realistic option if we recognize the historic moment for the US health care.

Thank you for your attention.

MR. BLAIR: Thank you very much, Dr. Gabrielli.

Our next witness is Ron Jordan.

MR. JORDAN: Thank you. You have a copy of my text. So, I won't read all of it exactly, and I have one addition that just occurred to me as I listened to Dr. Gabrielli that we should probably put on the table and we will supply some further information in writing to the Committee about after this meeting.

Good morning. Thank you for the opportunity to present to this panel the views of the profession of pharmacy and the American Pharmaceutical Association. I am Ron Jordan, President of H Caliber Consulting Corporation and immediate past president of the American Pharmaceutical Association.

I am, also, a past chairman of the standardization committee of the National Council for Prescription Drug Programs, the ANSI accredited organization tasked with developing standards for the pharmacy sector of the health care industry.

APHA represents the third largest health profession composed of more than 210,000 pharmacy practitioners, scientists and pharmacy students. Since its founding in 1852, APHA has been a leader in the professional and scientific advancement of pharmacy and in safeguarding the well-being of the individual patient.

My testimony focuses on the questions provided by the Subcommittee. Patient medical record information should describe the past and current experiences, status and results of an individual patient's medical therapy encounters.

We believe that all relevant information that can be assembled from the patient's numerous medical care providers should be included where relevance is determined from the perspective of each individual provider at the time of the encounter and therefore the collective group of patient's medical providers.

Much like each provider determines appropriate treatment at any given time, the appropriate level of documentation in the medical record for a given encounter should be weighed individually by the provider considering the current and future consequences of such decisions. An obvious essential component of patient medical record information is information about medications, both prescription and over the counter drugs that are prescribed, dispensed, actually used and otherwise encountered by the patient.

From the pharmacist's perspective which many other health providers will, also, find essential, beyond tracking the medication dispensed relative information pharmacists gather about patient's progress on certain therapies, potential drug-related problems uncovered and other information that pharmacists use to help patients manage their drug therapy such as glycolated hemoglobin levels, lipid profiles and peak flow volumes that pharmacists are now monitoring are all integral to an accurate and complete PMRI.

To explain further, information about medication use is vital to understanding health care. Medications are one of the most common, if not the most common method of intervention to treat disease and control symptoms.

Patient medical record information must include more than just the information from the prescriber about the initiation of therapy. It must, also, contain information from pharmacists about whether that medication the patient actually acquired and the interaction of that medication with other therapies that the pharmacist may be aware of.

It should, also, include pharmacists' collected assessments of the patient's compliance and achievement of expected outcomes.

Pharmacists can provide information about whether medication is actually secured and may, also, find out from the patient that the therapy was discontinued without physician's knowledge.

Pharmacists uncover similar potential drug related problems daily. When they are communicated to physicians and other health care providers they often help improve the outcomes of therapy.

Today, physicians and pharmacists attempt to share this information through informal mechanisms. Including pharmacy specific information as a component of the PMRI would help formalize and support this interchange.

The National Council for Prescription Drug Program Standards provides clinical support tools within the standard using the professional pharmacy services codes. These codes support the conduct of on-line, real-time drug utilization review, ORDUR. They provide information to the dispensing pharmacist about medications the patient has secured, even from other pharmacies. They identify potential interactions, dosage problems and permit the pharmacist to assess the relevance of these warnings.

These standards support coordination among pharmacists and could be used similarly with other health care providers further enhancing the patient medical record. Accuracy in confirming the medication provided, the conflicts identified through on-line, real-time drug utilization review and the services provided are extremely important.

Information gaps or misinformation about medication therapy can have serious consequences. Medications are safe when used appropriately and any information gap that affects appropriate use can cause problems.

Next, I will discuss the intended purposes of medical terminology I just talked about, what it is currently being used for and its market acceptance. I will, also, mention work to improve these code sets and another code set actually that occurred to me that ought to be on the table in this testimony, also.

The medical terminology I first want to discuss is a component of he NCPDP telecommunications standard, the professional pharmacy service codes. The purpose of this code set is to facilitate the precise and efficient documentation of electronic transmission of information related to professional services performed by pharmacists and to support the efficient on-line, real-time drug utilization review activities administered by third-party claims processors and within pharmacists' own practice management systems.

The objective of the code set is to improve quality and continuity of care delivered to patients, standardization of the electronic documentation and billing infrastructure that supports efficient compensation mechanisms for the delivery of professional services by pharmacists to their patients and uniformity across the pharmacy provider industry and electronic prescription claims processing industry regarding their transmission of drug utilization review conflict messages and responses in an on-line, real-time environment.

The codes create an environment which aids in the identification and prevention of inappropriate prescription drug therapy with the lowest possible impact on the operational costs and efficiency of pharmacy practice.

The code set classifies medication therapy related problems that constitute threats to patient health and safety and pharmacists' care activities that may be performed to correct or resolve these problems.

The three primary clinical topics that are addressed by the code set respond to the three stages of clinical problem solving in pharmacy, identification of the drug therapy problem, selection of an implementation or corrective course of action and three, the results of outcome of the intervention activity.

As part of NCPDP's telecommunications standard, version 3, release 2, and higher versions the PPS codes are implemented by virtually every vendor that is currently active in prescription drug benefit industry and within all pharmacy computer vendors.

The vast majority, well over 95 percent of the more than 61,000 community pharmacies that are active in the US utilize NCPDP's telecommunications standards. There are, also, a number of foreign countries that utilize the same standards, including PPS codes in their communications with electronic claim processors and within the drug distribution industry.

Of the state Medicaid programs, 47 currently use or anticipate using on-line claims adjudication via NCPDP's telecommunications standard. I must, also, mention that there have been some concerns about the value of PPS codes in the drug distribution industry. These codes, particularly their use in conducting on-line, real-time drug utilization review have been challenged by some users as providing too much information.

As these systems which send ORDUR messages are screening systems, some tolerable level of false positives is inevitable. Unfortunately, in some user circumstances t he messages are provided so often by the administrators, the third party or PBN industry and are commonly identifying problems that pharmacists have already addressed that pharmacists frequently override or ignore the messages. This noise created in the system may tend to desensitize the pharmacist and other practitioners if they are using these systems to truly important messages.

To address this problem APHA is now working with the United States Pharmacopeia and other organizations to identify the source of these problems and recommend changes to the industry.

These solutions then will be provided to NCPDP and the user industry to improve the terminology and its use. Such ongoing quality improvement activities identifying problems at the user level and developing solutions are important to APHA and NCPDP and have supported the broad acceptance of the telecommunications standard and the PPS codes.

We are confident that these efforts to identify and resolve the problems will improve this tool further and coordination of the information in the patient medical record will further enhance patient care.

Now, as an aside on the other issue, APHA and seven other pharmacy practice organizations has, also, developed a pharmacy practice activity classification system, PPAC being incorporated into NCPDP standards at this time and also being improved through a consensus process within the profession.

It has, also, been accepted by the National Library of Medicine as a valid professional activity vocabulary and classification system.

This system should, also, be incorporated or through the process of being in NCPDP standards would, also, be useful to the patient medical record information.

As the need for drug therapy management services expands, so will the need for PPS and PPAC codes. Adverse drug reactions and the problems resulting from medication use are extensive, estimated at yielding more than 100 billion in health care costs annually.

The need for drug therapy management services is high, as we all know, and the role of on-line, real-time drug utilization review, PPS and the PPAC in supporting the identification and resolution of such problems is essential.

Pharmacists have a unique perspective on how electronic standards can facilitate and sometimes hinder progress down a path toward improved patient care and a truly integrated health care system. Pharmacists are the most frequently encountered health care professional, and they, also, rely more consistently on their computers in providing care than many other medical professionals.

In the United States pharmacists in inpatient and outpatient practice settings use computers to perform their job 100 percent of the time day in and day out.

The majority of standards and terminology used by pharmacists are currently used only by pharmacists. Documenting and tracking information about medication dispensing and use is unique to the profession with the exception of some limited dispensing going on by physicians at this time.

As mentioned previously, a patient medical record is incomplete without information about medication use. Integration of the information from pharmacy systems with other health care providers is a goal and required to facilitate interdisciplinary provider-to-provider electronic health care knowledge exchange.

Multiple syntaxes and standards can interoperate in the health care system if appropriate business models, transaction data element compatibility and clearing requirements are specified.

Volatile market structures in the rapidly integrating health care environment will ultimately determine the best standards and standard setting organizations in business models.

It is clear, however, that some areas of health care information exchange standardization will never be solved without stronger, higher power intervention.

Coordination of standards development organizations, data dictionaries with assigned secretariats for various health sector domains should go a long way to ensuring the success in coordinated and useful patient medical record information. Your recommendations as a committee must ensure pharmacy's compatibility with the clinical information systems of other health care disciplines and organizations.

In conclusion, patient medical record information requires coordination of information from many sectors. Information about medical use and drug therapy management is an important component, and the professional pharmacy service codes as well as the professional practice activity classification codes are vital to communicating such information.

Ultimately better coordination and exchange of information between health care professionals will lead to systems we all desire for patient care and optimal health outcomes for our systems.

Thank you.

MR. BLAIR: Thank you. We have time for questions now, and are there questions from our Committee members?

DR. COHN: I will start. Ron, actually I had a couple of questions for you. I am trying to think of where to start. First I want to thank the panel for some very interesting presentations and issues that you are all bringing up.

Ron, first of all for your information there is actually on Thursday the Privacy and Confidentiality Committee meeting with the pharmacy benefit managers, and some of the issues that you are bringing up are probably issues that you ought to be talking to them about, also.

Now, as I look at what is going on in the pharmacy, and as you know I am a physician and not a pharmacist, I am sitting here looking at the professional pharmacy service code in the order system which I think you commented on, and I am trying to figure out sort of what information is going back and forth there and how it relates to the information, to the rest of patient medical record information, and I am convinced from your discussion that there needs to be a connection, but for the life of me I am not sure if there is or can be the way things are currently structured. Maybe you can help me with this.

Certainly I agree with you that a critical part of patient medical record information is to know not only whether a prescription was ordered but whether it was actually picked up and dispensed and whether there was any sort of interaction.

Now, you are talking about a set of codes that relate to that that you describe as professional pharmacy service codes.

Now, can you tell me a little more about what those are? I mean do they relate to anything else that we have talked about over the last day and one-half? I mean are they mapped efficiently to other code sets? Help me with this one?

MR. JORDAN: I think they are unique, Dr. Cohn to pharmacy at this point. We developed them because when we started doing on-line, real-time processing in the prescription drug area it was clear that one new party in the equation of the distribution of drug products, the prescription benefit managers had a picture of drug therapy that might be different potentially than anyone else in the system if the patient was seeing multiple pharmacies and moving around the country.

Their insurer, their prescription benefit manager might know more about what the total therapy looked like than anyone else, and so they began doing this on-line, real-time drug utilization review. As part of the process to improve that system and to add more value to it, it was clear that pharmacists had an assessment of those drug interaction screening warnings that could be developed by just comparing the types of therapy that a patient was on, looking for duplicate therapy or inappropriate dosages or a whole number of other standard drug-drug interactions that might come along or drug-disease interactions that could be interpreted from the patient's drug therapy, and the industry realized that we needed to capture those assessments that the pharmacist was making in order to really optimize the use of the screening system.

With feedback from pharmacists that was obtained from their interaction with physicians about potential drug- related problems the system might be continually optimized in that way or continuously improved, and so a structure was devised in this two-way, real-time communication that goes on between a payer and a pharmacist at this point that permits not only the exchange in both directions of these three areas of coding, the reason for service or the conflict that exists which could be either spotted by the pharmacist when the patient walked in before it went into the computer system or by the PBM, if they see a larger perspective than the pharmacist may have, that can be communicated in two directions at the moment as well as what action a pharmacist took about it. Did they call a physician? Did they make a judgment on their own based on their own knowledge or conversation with the patient that this code was not relevant or did they determine the relevance was so important by talking to a physician that some interaction or further intervention was necessary? So, that is the second part of it, the professional service intervention codes, and these are basically structures that were developed within NCPDP's consensus process that reduce all of the 20 or 30 for each one of these categories, activities that might be communicated down to a two-letter code set basically so that it can be communicated very efficiently, and the last area is the result of service or the outcome that was observed, and that helps put information into the PBM's records and into the pharmacy's computer system about whether the therapy was actually discontinued, whether the dosage was changed, whether the patient instructions were changed in some way, etc., to handle that problem through the intervention taken by the pharmacist.

So, they are kind of unique at the moment, but we believe that they would be useful, also, to other medical professionals that are looking at an automated patient medical record. Hence, we are suggesting that they be included in your evaluation, in your Committee's decision making.

That process is actually going on to some extent now where NCPDP, also, has a standard called script that is actually used for a physician to prescribe and send an automated electronic prescription to a pharmacy.

There are a number of namely integrated health systems that are using that right now, but a number of the physician practice management systems that are out support script transactions, and there is a system set up between the payer industry, pharmacy chains and these physician practice management systems that have electronic communications occurring between all three of these parties, and at the moment there are physicians that are seeing these same interaction warning codes that pharmacists see, and in the future I think there will be a lot more of that as electronic prescribing increases in order to improve interaction and efficiency.

DR. COHN: May I ask another question?

MR. JORDAN: Sure.

DR. COHN: I am sort of understanding, but now my presumption is that in the pharmacy environment that information, a la NDC codes are being sent out to the pharmacy benefit managers for capturing the system. Is that correct on the first level?

MR. JORDAN: NDC codes, yes, that is one piece of the information.

DR. COHN: That is the thing that identifies the drug?

MR. JORDAN: Yes.

DR. COHN: Is there additional information?

MR. JORDAN: There are other code sets that identify drugs that are code sets like the generic GPI is in a couple of them that therapeutically classify drugs, and those are often used in these same communications. If you don't want to select a particular manufacturer's drug product, you might use a GPI-based system that says, "Send this patient ampicillin," and then an NDC is selected after that fact by the pharmacist.

DR. COHN: Right, okay, and the pharmacy organizations have created their own sets of adverse drug reaction codes that come back from pharmacy benefit managers or sometimes, also, within your system.

MR. JORDAN: Yes, also, within our system. I am not sure that those adverse reaction codes have been appropriately and completely mapped as they should probably be to other adverse reaction codes that exist in medical terminology today. I don't think the opportunity for that type of mapping and work has gone on, but if that is what you are getting at, Dr. Cohn, I think --

DR. COHN: I was just trying to --

MR. JORDAN: -- that would be appropriate.

DR. COHN: Yes, I was trying to figure out whether they were sort of out there as orphans or whether they were more integrated into other things that were going on.

Could I ask one final question about this?

MR. JORDAN: Sure.

DR. COHN: My understanding is that there is another terminology called MEDRA that talks a lot about adverse drug effects.

MS. FYFFE: How do you spell that?

DR. COHN: M-E-D-R-A. I was going to ask, does that have anything to do with --

MR. JORDAN: I am not familiar with it, personally. There may be other people within NCPDP that are. It probably, if it is something that the FDA is using that is trying to describe this area it ought to be certainly input to the process going on at NCPDP, and that is another area.

DR. COHN: May I ask one final question, and I am going to stop on this line of thinking, but I am obviously a little concerned that there is, I mean I am convinced from your testimony that there is a lot of important information that pharmacists have that needs to be part of a cohesive view of the patient, and yet you are, also, sitting here telling me that there is a bunch of things out there that somehow we need to bring them in and normalize them and be able to understand them in other contexts.

From your view what sort of recommendations would the pharmacist make in terms of things that might help us move towards that sort of a vision?

MR. JORDAN: We have been trying to move the standards setting process sitting with the APHA hat on and not the standards junkie had on. We have been trying to move the standards organizations to coordinate their activities better among themselves. We think it is important that pharmacists be electronically connected and not orphans from the system, and so we have been a force within NCPDP to be sure that they are coordinating with HL7 and with X12 and other organizations that have been going on, and I think there is a significant amount of cross discussion and interaction occurring between the groups right now.

I think as your Committee comes out with the framework for and suggestions for a patient medical record system and recommendations on how that can be improved and enhanced and hopefully speeded up as Bob talks about that there will be more interaction between NCPDP as our kind of industry consensus group standards experts and the other organizations that have an interest in the patient medical record.

I think the model of IEEE is an intriguing one to me. I had never heard it before. I have kind of been in the APHA professional hat and not in the standards arena for a year, and I think the model that Bob talks about to accelerate the activity is an interesting one that we might apply somehow on our side, too, in order to get pharmacists working closer with the medical community.

MR. MAYES: I just actually wanted to make a comment, a follow on, Simon to what, it is actually sort of turning what you were saying a little bit around.

I think that there is a tremendous opportunity within the pharmacy world, not so much to integrate them into what the broader community is doing but for the broader community to actually look forward because certainly in the area as we discovered when we had our hearings and made recommendations in the claims processing world, pharmacy, retail pharmacy is far ahead of any other part of the health sector in terms of fully automating their business processes.

The work that was discussed today by Ron shows that they have taken that lead and pushed it further into actual practice management or patient management practices here.

When you think about it pharmacists as was mentioned are the most computerized group. They are actually the group that exposes the patient which never seems to come up in our conversations much around all this; we are always very provider centric here, but it is the one area in which patients or all of us as patients are currently exposed to electronic medical records or to the electronic management of our health care system when you go into CVS or WalMart or any of these stores, I mean outside of what we always focus on in traditional health care setting. So, I would strongly encourage this, not only encourage the pharmacy world to look to see and get more in step with what is going on, but we really should use this as an opportunity for testing out some of the ways that we want to take the rest of health care and health care providers into this.

The world is moving away from health care provided in historical health care settings. More and more those are becoming specialty niche areas and health care is now being provided in a whole variety, schools, offices, supermarkets and other places and really the pharmacists have pointed the way at how we might be able to leverage this type of PMRI activity to meet today's business needs in health care.

MR. JORDAN: If I could make one comment just on top of that, and I appreciate the comment. I have certainly always believed that being on the pharmacy standards side --

MR. MAYES: And I am not a pharmacist.

MR. JORDAN: -- that we are leading the way. I love to hear it come from across the table. The other classification system which I apologize I didn't get worked into this testimony as I should have, the pharmacy practice activity classification system which was a consensus development effort on the professional association side, not a real standards organization effort, but an effort by the profession to describe, define and to codify pharmacists' activities is, also, a very important piece of work, and I think it has been recognized by the National Library of Medicine already, and we hope -- NCPDP has accepted it as a data element within their standards. It is now going to be possible to transmit it, but it is a piece of work that I think you should, also, have in front of this Committee because it is pharmacy's attempt to keep up with some of the practice classification going on in other areas. We used a nursing-based model in our development of it, and we think it is useful and APHA will be sure to get that information to you, also.

DR. COHN: Can you submit that to us?

MR. JORDAN: Yes.

DR. COHN: Great.

MR. FITZMAURICE: I have a question for Dr. Gabrielli. We have heard from several different standards developers, terminology developers, code developers and it seems to me that one analogy might be that terminology is like a watermelon, that you have a certain amount of hunger and so you take a slice of the watermelon, and you parse out the seeds, and you say, "All right, this standard will do for my particular transaction, my particular target," whereas Elmer, it seems has swallowed the whole watermelon and then spit out all the seeds and says, "All right, here are the answers to all of the problems."

Elmer, do you believe that your solution is more cohesive than the solutions where you slice off a part of the watermelon and develop a terminology or a code set for a specific application?

DR. GABRIELLI: Thanks for this question. As I am listening to the pharmacy world I recognize a huge gap between the clinical record, the patient's record and the pharmacist. We discussed this over and over again how this could be improved, and the big problems are No. 1, confidentiality, No. 2, the pharmacist's understanding of the clinical terminology, and No. 3 to put some kind of a control over what the pharmacist would say to the patient.

After all this is intruding into the doctor- patient relationship. Now, all this can be worked out very carefully with the electronic record where you are assigning access and information transfer so that the adverse drug reactions, the need for changing the health sensor log would reach the pharmacist. To answer your question this is 18 years of work, a big melon, and we have an operating system. We produce hundreds of medical records a day. So, we sowed as we can see our way to do it. I am sure there will be other ways, for instance, CORP(?) examination. CORPUS(?) is a huge dictionary of standard text, newspaper or whatever where certain words are selected and all the phrases following it and preceding it are lifted out statistically and an enormous amount of assistance can be extracted from that, long phrases, that, t-h-a-t, what the words before them, not in 2000 uses, and then move that, and this is still ahead of us. It is another enormous undertaking.

The solution is, I think, is that people who are in this field should be more exposed to these issues. Yesterday we heard a lot of talk about terminology. Nobody talked about what happens at the other end of the patient record, and I believe the issue is urgent because some commercial intrusion in the field with so-called "electronic records" has been practical and probably financially productive but it misled medicine that that is all that the technology can give.

So, we have to improve the image of this whole project, and we have to prove the power. We computerized an HMO with 1 million visits a year and we pulled out just blood pressure. That was just typing in blood pressure and background statistics. It was interesting to note that 48 percent of the hypertensives were undertreated, 48 percent, all the way up to 200/110. So, in one stroke a major clinical problem was precipitated.

Medicine does not look at this. That HMO has a physician who is in charge of quality. I sent this to him. He never answered, and he didn't answer my telephone call because it is embarrassing that somebody on the outside knows about it. These are all very operational problems.

We believe that there will be, if there is a lot of leadership here that there will be an explosive growth within the next year and one-half, and we have to be very careful that we don't cause harm.

MR. FITZMAURICE: May I follow that up with a question that is puzzling me, and that is I hear Ron saying that the pharmacy has information that the doctor doesn't have? I hear Elmer saying that he has information that the health plan doesn't have. I hear Bob saying that somebody felt funny, and there was information that nobody had but that a device could have picked it up.

I would like to have a short answer from each of you, and I will start with Bob. Where should the repository of information about me be? I always thought that it was at the hospital if I had an acute care or the physician's office if I saw a physician. Now, I don't know. Where should it be? I know it is all over the place, and sometimes it doesn't exist. Where should it be?

MR. KENNELLY: For me the answer is easy. It should be wherever it is needed for your care, and that has a lot to do with that being accessible, and that gets into long-term web based, card based, whatever you want. Those are more long-term solutions. It needs to be wherever you need it.

DR. GABRIELLI: That is a very good answer but not specific enough. My personal feeling is that there will be at least two large partitions of clinical information. One will be the identified clinical record which has to be carefully protected and access should be heavily controlled, and the patient controls his or her own record. The other one is the disambiguated(?) electronic database which is the most powerful impact of what we are here for today because if we were to have 10 million patient records longitudinally, the whole textbook of medicine would be rewritten because the prejudices of experts would be replaced by facts. It is a huge change.

MR. JORDAN: My answer would be similar to Bob's that it needs to be where it is needed, and I think that that probably means at least for the foreseeable future as far as I am concerned a lot of disaggregated information in various professional settings where medical care is being delivered.

I would hope though that we can develop a central system of indexing. I forget what the terminology was for the master patient index. It seems to me a master patient index of encounters with medical care providers where automated medical information exists could easily be developed that would at least allow the pathway, the portal to that information exchange if there were appropriate rules for people to have that exchange occur, that handled confidentiality, that handled the issue of control, and so, you know, through a master patient index of encounter dates and where those things occurred for each patient, I think we could establish that portal, that Internet portal that would allow various providers to exchange what is needed when they need it, and that is what I would envision or hope that you are headed toward.

MR. BLAIR: Thank you, Dr. Ferrans?

Dr. FERRANS: Thanks, Jeff. I think this is just really an excellent panel, and I just wanted to make a couple of comments and ask a couple of questions. With regard to the medical device standards, it is interesting we have a group talking about nursing classification afterwards, it has always been my opinion in clinical practice that a lot of the unnecessary busy work that nurses do is documenting by hand information from devices that occur at the bedside where that time could be much better spent actually taking care of the patient. So, I think that there is a lot of hidden clinical benefit in the ability to automate the capture of that and even to provide decision support on that to nurses and to physicians.

I wanted to ask Ron Jordan about it seems to me, I was listening to you, and I get this vision of this outpatient pharmacist in this network community, everyone with decision support which causes me a tremendous amount of envy.

I am sure internists would love to have access to all that information.

MR. JORDAN: We would love to give it to you.

DR. FERRANS: And I think that is one of the points. It is astounding to me the disparate nature between the two and actually how closely we are supposed to work together.

My question was what is the penetration of these systems in the inpatient setting and what would you say about the rate of decision support tools that are available to inpatient pharmacists, and how does this all differ, and then I will just follow up, and I have a question for Dr. Gabrielli.

MR. JORDAN: Inpatient pharmacy has, also, been automated for a long period of time, but interestingly enough when it comes to the decision support outside the walls of the institution, the larger view of what is going on with the patient there really isn't as much as there is in the community at this particular time, and although they do internally within hospital records, within HL7 type systems have information stored and some decision support for clinical activities of pharmacists, it is much less, I think.

DR. FERRANS: Just because I have been looking into the situation of adverse drug reactions in the inpatient setting I would just like to put a couple of statistics on the record. There was a recent meta analysis published in the Journal of the American Medical Association last year with an overall incidence of 6.7 percent for serious adverse drug reactions in hospitals. The costs that were estimated, drug related morbidity and mortality $136 billion a year in the United States, and that comes from a study in the Archives of Internal Medicine, and there have been a number of very good studies published in the past couple of years on decision support systems that are really showing anywhere from over a 66 percent reduction in morbidity and mortality as a result of use of the system. So, I certainly hope that really those inpatient clinical decision support systems for inpatient pharmacists and for physicians that would be integral to part of the medical record, I think that is, and I think it has been identified in our previous panels as where a lot of the financial return on investment is, but more importantly a lot of the morbidity and mortality related return on investment and I just cannot stress that enough.

Dr. Gabrielli, I very much enjoyed your presentation and my concern about lexicons is always that I cannot see a nation of physicians doing pick lists and dragging and dropping their way into the future. One very practical concern that I have about sort of systems where you sort of have to construct sentences from these different paradigms or picking things from lexicons is the idea of the homogenization of patients, and that is if we are all busy people we will probably document the minimal possible to get by, and therefore 100 patients that I would see with pneumonia would probably all have a 3-day history of fever, cough, followed by shortness of breath whereas if I were to take a more detailed history as one does when they are writing furiously or if they could do voice dictation, there is the opportunity to have all the rich historical information that provides the context, a much more accurate context that allows people to really get a much better snapshot of that patient.

So, I think sometimes in some of these systems there is a tendency for all the patients to sort of look alike. So, I love the idea of things going into a lexicon basically on the back end. That would, personally, be my vision of the future. I certainly have not seen the system that you have, but I very much like the paradigm.

I would like to ask a question though. It seems like we have a number of these different lexicons that are very well established and verified with UMLS and have done a number of mapping activities. Are you planning on looking at any mapping activities with other vocabularies or with the UMLS? I would be very curious about that, and it almost seems like since you have such a different paradigm, how do you feel; how would those mappings sort of work out? It almost seems like it is a -- I don't know how to phrase the question, but it is a different paradigm the way that you are coding information.

DR. GABRIELLI: Thanks very much for all these seven questions you had. Our internal rule is that we serve the physician. We don't force the physician to do anything. Actually taking the data from the dictation back to the physician should be so that physicians don't even know what happened. It is not his concern, like you put out your shoes in the hotel, and they get cleaned.

The second one is that homogenizing the patients, time didn't permit to speak about longitudinal records at this meeting. I think that will be one of the major revolutionary changes in medicine where the patient can be a reference source of his own what was the blood pressure 6 months, 4 months ago and today, and your term "snapshot" is what we always used. In medicine we practice snapshots instead of longitudinal thinking, and that is not only wasteful, but bad medicine, but we just don't have the culture for that.

There are big problems with longitudinal records. If any cardiologist is in the room you may remember the name Paul Dudley White who was the god of cardiology 10 or 15 years ago. He had a longitudinal patient record all his life starting off with a row on the left and the right in every visit and he continued. This is the future. When you give a drug you have to know how much impact it has on the clinical manifestation. You could quantify medicine, and we never had this, and for instance, in diabetes we have to establish the stage that the patient is in numerically, not only with cancer maybe where we already have it but diabetes and rheumatoid arthritis. Every one of those chronic diseases should be classified so that progression can be measured and drug therapy can indicate whether we did any good or not.

You, also, mentioned how we feel about other terminologies. We feel that they are about at the point we were about 12 years ago when we had to worry about the lack of absence of any nosology, that is the nomenclature of medicine. There was SNDO before SNOMED and those were profession oriented. There is no system in medicine. If you tried to do it by etiology or clinical course it is a mess. We developed one. I don't think you have the time for it. I will be glad to send it to you which is an information content of a term, and its complexity creates a layering.

We, also, feel that the lexicon should be very source sensitive. Who said it? If the patient said it, it is a symptom. If the doctor said it, it is a sign. If the chest x-ray or the electrocardiogram said it, it is a machine, and the code should reflect that because when you calculate your judgment you include the credibility of this.

The future of medicine will be databanks where there is a tendency of that in statistics, and the other important thing is longitudinal health records should be enormously important.

MR. BLAIR: I hate to close. We are running about 10 minutes behind. So, that is going to have to be our last question.

Judy, I know that you are standing there. Maybe you could catch one of the Committee members or rather the panel members at the break.

Please return at ten-forty-five. That is 15 minutes.

(Brief recess.)

Agenda Item: Nursing Code Sets

MR. BLAIR: Okay, we have our next panel which is primarily focused on nursing coding systems, and our first presenter will be Dorothy Jones, and Dorothy, could you introduce yourself and go ahead?

MS. JONES: Thank you. As you said, my name is Dorothy Jones. I am president currently of the North American Nursing Diagnosis Association, commonly known as NANDA. NANDA is a membership driven organization that has been in place since 1973. Recently we celebrated our 25th anniversary.

I thought we would have some overheads and the powerpoint, but it doesn't seem to be working well. So, I have a handout that I gave which is a compilation of all of the overheads and slides that I was going to use, and if the machinery works in the interim I will return to that, but I will speak from my notes and move forward.

The North American Nursing Diagnosis Association for the past 25 years has been focusing on development of nursing language, and they use the terminology nursing diagnoses to define a clinical judgment about an individual, family or community response to an actual or potential health problem or life process.

The nursing diagnosis provides a basis for determining both patient phenomena of concern and helps to direct the nursing and direct the nursing interventions and directs outcomes as well.

There are three major components of the nursing diagnosis, the label itself which is a concise term of phrase for the name of the diagnosis, a definition, that is a description that delineates meaning and helps differentiate one diagnosis from another and risk factors, that environmental or physiologic or genetic chemical elements that increase the vulnerability of an individual, family or community to an unhealthful event.

As a membership-driven organization NANDA has as its commitment and mission to increase the visibility of nursing contribution to patient care by continuing to develop, refine and classify phenomena of concern to nursing and these so labeled nursing diagnoses.

As a membership driven organization NANDA is responsible for the continued implementation of a mission such as I described. The use of standardized language within the context of NANDA's work focuses heavily on the availability of a centralized patient, family, community database to describe health and wellness, to use a national data set that will provide information for continued research to refine nursing's language and disciplinary focus and to help organize behaviors and perceptions about phenomena of concern across populations and settings and to this end NANDA began its work in 1973, to work towards development of a classification which was operationalized as a systematic arrangement of categories according to established criteria an arrangement of phenomena into groups based on their relationship.

A taxonomy of classifications was created and defined as a theoretical study of systematic classifications including those facts, principles and procedures as well as rules necessary in order to facilitate the development of language.

Inherent in this is the development of a hierarchy which would then organize information from the most abstract to the most concrete. It is critical that nursing language be included in any standardized database or patient medical record because it is an integral component of the patient's health care. The important information about the patient's experience as described by nursing helps to not only articulate a disciplinary focus of care but a perspective within the interdisciplinary team.

In addition the American Nurses' Association social policy statement along with the American Nurses' Association standard of practice mandates that the professional nurse assess, diagnose, treat and evaluate given responses to health and illness.

This is an essential component of practice and has social, as well as legal implications delineating that care must be documented in the patient's record so as to reflect nursing judgment, action and evaluation.

The current status of nursing diagnosis is organized under what has been referred to as taxonomy 1. This taxonomy was developed in 1984, contains nine patterns of human responses, later called human response patterns.

Within this context there are four levels of abstraction, the first level the most abstract and sometimes diagnosis referred to as alterations, to level four which is the more concrete, highly clinically relevant observable and measurable diagnostic statement.

There are currently within this framework nine pattern areas which you can see outlined in the handout which range from exchanging, communicating, relating, valuing, choosing, moving, knowing, perceiving and feeling.

These categories of human response patterns form the structure for taxonomy one and were generated by a group of nurse theorists who worked within the confines of NANDA to generate this structure.

Currently there are approximately 140 nursing diagnoses in use by nursing. Most recently 21 new nursing diagnoses and 37 revised diagnoses were added to the current list. Within NANDA this work is accomplished through a series of committee structures and the diagnosis review committee is one committee that receives submissions from a variety of sources, including members of NANDA as well as practicing nurses, nurses within specialty organizations, as well as research teams such as NDEC(?) out of University of Iowa.

When a new diagnosis is brought forth or an old diagnosis revised, it becomes part of this committee's work to review the documentation supporting the diagnosis. It is, also, reviewed by external teams, as well as an international committee.

This is then presented to the board and the membership for voting, approval, etc. The NANDA board makes a final decision on a diagnosis that is fully accepted for clinical testing that becomes part of the NANDA list, and in the material that I am submitting a book entitled Nursing Diagnosis, Definitions and Classifications provides a summative compilation of all of the diagnoses to date, including labels, definitions, etc.

This book has been translated into at least 10 languages, including Japanese, Dutch, French and Spanish and currently versions are being translated by Brazil and Mexico. So, it has an international appeal.

Some of the problems related to that have to do with the translatability of some of this language across settings. Unlike medicine some of the diagnostic labels referred to in nursing may not have the same meaning, context or structure, syntax when you start moving the language across cultural areas.

The taxonomy committee has been responsible for the staging and classification of nursing diagnoses and the structure was generated to identify those things that nurses locate in patients, attempting to reach consensus about a consistent nomenclature to describe the domain of nursing and identifying classes and subclasses so that relationships can emerge.

The goal is to make terminology computer accessible on top of that. We do have some visual cues here.

Current rules for classification really follow no actual inherent order because the patterns that were mentioned before have equal levels of accountability within the structure. Levels of abstractions are determined and placement within the taxonomy occurs in relation to that. Placement in the taxonomy is, also, influenced by expert opinion as well as the literature and increasingly evidence based on research.

Definitions of the pattern and definitions of the diagnosis must be consistent in order to consider placement and also, the theoretical views of nursing at a given point in time need to be considered when the language is being created.

The current rules, also, account for the numbering system created to facilitate computerization of the taxonomy. Each of the nine categories represents a first code area. This one happens to be the fifth pattern area of code, and also the level of the diagnosis and in particular this one documented ineffective individual coping and has definition defining characteristics and related factors.

The taxonomy is responsible for the staging of the diagnosis and the stages are generated with a 1.0 level where the diagnosis is received with no specific development.

Oftentimes these come from specialty groups or subgroups who just have an idea for a concept and move through stages where the label is developed, where definitions are created and defining characteristics identified.

Most of the diagnoses are still at the 2.0 level, some below, accepted for clinical development where you have labels, definitions, and some having review of the literature. None of the diagnoses really are at the level of 3.0 where there is much validation or testing or 4.0 where there is final revision and acceptance.

This overhead just presents one of the pattern areas, seven, perceiving and gives you an understanding or a sense of some of the ways the diagnoses have been staged under that category, the same for the pattern area of knowing and feeling and just represents some of the constructs within that.

It is critical that these structures be standardized and efforts are continuously being made to put forth work that will have clinical relevance and be scientifically based, research based, and this work continues throughout the efforts of NANDA.

The past taxonomy has been in existence since 1984, and multiple textbooks utilize this resource. It is, also, found in some computerized databases. It is included in the medical library, the UML and also in other organizational structures such as the American Operating Room Nurses including that in their standards of care.

The evidence based on the work from taxonomy one has moved to continued refinement of the diagnostic classification into a taxonomy two. The proposed taxonomy two is there for consideration by the membership.

This is an effort to bring to bear more consistency within the diagnostic statements, more of a conceptual focus, better articulation of the knowledge within the discipline and more clarity in terms of developing a language that can be more clinically useful to the practitioner.

The proposed taxonomy contains 12 domains and six axes. Domains as identified range from health perception, health management through cognition, perception, sexuality, values, beliefs, comfort. These are currently still under revision but within these broader areas of domains, for example, health perception and health management there will be classes identified as subareas within the domain itself.

For example, the domain health perception and health management refers to the awareness of well-being or normality of function and the strategies used to maintain control of an enhancement of well-being, normality of function and energy and then a class under that might be something called health awareness. Another class might be health management behavior. Another class under that might be health promotion behavior.

In addition, the proposed taxonomy contains six axes. These axes include the diagnostic concept, level of acuity, unit of care, developmental stage, potentiality and this thing called descriptor, and as was indicated before one of the concerns is that a nurse taking an assessment of the patient spends a lot of time getting the person's story and then trying to synthesize that information into this coding structure.

To the degree that all of these axes then can be responded to, it may take some time. As now the diagnosis stands, for example, ineffective family coping, when it comes up in this new form it would be presented as a noun phrase and then these axes will provide selection of a diagnostic statement.

So, for an example, multi-axial approach to the diagnostic concept of parenting may look something like the descriptor axis may be called altered. The unit of care would be the individual. Developmental phase may be adolescent. Potentiality may be risk, and the diagnostic statement would read risk for altered parenting by individual adolescent, and so, this is the goal to try to move toward this creation of the statement which would allow for greater interbase with other systems but, also, provide probably somewhat more of a unique representation of what the patient experience is like and allow for more dimensions of care to be represented.

The other side of that is the time frame which somebody mentioned earlier which the practicing nurse may have to consider. On the other hand, the amount of time that it takes to actually write out all of this information takes you to another level, but I think when we are thinking about knowledge development in the discipline and looking at the work that nursing is doing in terms of the practice arena, some of it lends itself to those kinds of activities that are in concert or complement physician care and other things that might have more of an independent focus within the practice itself and the diagnostic labels that are being generated may move us to that.

The important piece is that the knowledge and the information to make the diagnosis comes from assessment that would provide cues to support nursing diagnosis within the assessment framework and be available to compliment the domains.

The ultimate present patient medical record information contains data that captures the patient health/illness experience within the existing framework of medical practice.

It reflects the concepts and constructs relevant to the domain of medical and describes diagnosis and related signs and symptoms, treatments and outcomes to a variety of patient illnesses.

It is essential that the information contained within the patient medical record also be precise enough to capture the total experiences being manifested by the patient, accurate enough to describe the patient experience and nursing diagnosis is another classification structure that allows for this to happen.

If information in the record is not accurate, comprehensive and up to date it could reflect compromised patient care, inaccuracy in diagnosis and include misleading information about patient care and, also, have a cascading impact as we heard earlier on billing and recovery trajectories.

Current information contained in the medical record reflects language that has both national and international clarity within medical domains at least.

A diagnosis made in the United States for example around diabetes is labeled and defined the same way no matter where the person is diagnosed, whether the United States or Japan, while choices in treatments and interventions may reflect personal preference. The PMRI does incorporate options that give the physician an opportunity to document relevant information as needed.

For nurses practicing in a collaborative role such as the nurse practitioner the current patient medical record information system may contain vocabulary that is relevant and useful.

However, in its present form the PMRI does not contain vocabulary that allows the nurse to document data relevant to nursing practice within the domain of the discipline.

Research published in multiple journals including the NANDA Journal and conference proceedings give growing evidence of how nursing is using this nursing diagnosis in clinical practice research, education and administration.

Its value is helping to focus on the work of nursing as a thinking discipline, not just a doing discipline, makes nursing judgments visible and helps to articulate judgments and concepts based on knowledge, provides evidence for practice outcomes. It more clearly articulates nursing's contributions to care, and the taxonomy provides a way to document and organize care delivery.

Currently the nursing diagnosis is included in the unified medical language system, the National Library of Medicine, and its incorporation with the American Nurses' Association to develop a unified language system also works in concert with NIC and NOC that you will be hearing from in just a moment.

The NANDA taxonomy provides a way to describe nursing phenomena of concern, creates a structure to document care, develops a database that will identify high incidence nursing problems associated with specific illnesses, helps to define complexity, describes care needed within the hospital and outside of the hospital.

Also, it helps to identify an appropriate case mix on units, documents the impact of nursing interventions and outcomes on cost, quality and helps promote an understanding of the human experience across populations and settings.

The current patient medical record needs to reflect a multidisciplinary approach to documenting care. Patient perspectives captured by nursing need to be incorporated into any documentation system so that the nurse can be more comprehensively representing the patient experience.

Nursing diagnosis has created a language that offers a unique perspective about the patient's response to health and illness. The work of NANDA will continue to interbase with other language developers such as the National Intervention Classification System a nursing outcomes classification system to foster linkages among language developments and to enhance the usefulness of all languages that document nursing perspectives to the total patient experience.

Thank you.

MR. BLAIR: Thank you. You all have agreed on your own sequence. So, why don't I just let you introduce yourselves in order?

PROF. MC CLOSKEY: Thank you. My name is Joanne McCloskey. I am a professor at the University of Iowa. I am, also, the principal investigator of a research team that developed the nursing interventions classification or NIC, and I, also, am director of the Center for Nursing Classification at the College of Nursing, the University of Iowa which is responsible for the continued upkeep and use of NIC and NOC, the nursing outcomes classification.

I am here today to explain NIC, and my colleague, Sue Moorehead will explain NOC to you. There is a handout that goes with this presentation. I know everybody at the table has it. It is the one that starts with the white top cover with the Center for Nursing Classification card and there are many others in the back for the audience.

I don't know where they are but someplace in the back of the room. I would like to begin by saying that there are approximately 2.6 million registered nurses in the United States and approximately half again that number of nursing assistants. Nurses are the largest group of health care providers, and we spend the most time with patients. Yet the nature and the impact of nursing services are virtually unknown and invisible.

Nursing data must be included in the patient's health care record in order to be able to study the costs and the effectiveness of nursing care, as well as to determine the relationship of specific nursing interventions to the interventions of other health providers.

The Nursing Interventions Classification, NIC, is a comprehensive standardized classification of interventions that nurses perform. It is useful for clinical documentation, for communication of care across settings, for integration of data across systems and settings, for effectiveness research, productivity measurement, competency evaluation, reimbursement and curricular design.

The classification includes all interventions that nurses do on behalf of patients. An intervention is defined as any treatment based upon clinical judgment and knowledge that a nurse performs to enhance patient outcome.

While an individual nurse will have expertise in only a limited number of the interventions in the classification, reflecting her or his specialty, the entire classification captures the expertise of all nurses.

NIC can be used in all settings, from acute care intensive care units to home care, to hospice, to primary care, and it can be used by nurses in all specialties from critical care to ambulatory care, to long-term care.

While the entire classification describes the domain of nursing, some of the interventions in the classification are, also, done by other providers and NIC can be used and is being used by other providers to describe their treatments.

Now, if you have the handout, if you look at the yellow page, bold yellow, it says, "N=486," this is an alphabetical listing of the 486 interventions in the forthcoming third edition of the classification. The ones in the bold are the new interventions in the third edition. The second edition which is in print now has 433 interventions.

The white page following that which starts with mechanical ventilation is one example of a full intervention in NIC. A full intervention has a label, mechanical ventilation, then a standardized definition. Those two components are the standardized language and cannot be changed unless they come through the review process at Iowa and are printed in the next edition.

In addition, however, there is a set of activities for each intervention listed, if you will in logical order, logical order as to what a nurse might do first to last. These activities may be modified. However, there are guidelines for that to fit the particular patient and the particular setting. So, car can be individualized modifying the activities. Each intervention, also, has a short list of background readings that were used that were found to be helpful in developing the intervention.

Now, if you look through just the alphabetical list of labels you can see that these interventions include both the physiological, for example, acid base management; they, also, include psychosocial interventions such as anxiety reduction. Interventions are included for illness treatment, for example, hyperglycemia management, for illness prevention, for example, fall management and for health promotion such as exercise promotion.

Most of the interventions are for use with individuals, but there is a whole domain for use with families, for example, family integrity promotion, and there is a new domain in the forthcoming third edition for use with entire communities at the aggregate level, for example, environmental management community.

Indirect care interventions, those done away from the patient on behalf of the patient, very important to carrying out the direct care of nurses and the direct care of other providers are, also, included, for example, supply management.

The classification was first published in 1992, the second edition in 1996, the third edition as I have said will be published or I guess I haven't said, but it is forthcoming this fall to be published with a copyright of 2000.

New editions of the classifications are planned for every 4 years. The third edition will contain 486 interventions grouped into seven domains and 30 classes and in the handout the second page, blue, are the seven domains and the 30 classes. Each of the interventions then is located below these within the classes at the third level of the taxonomy.

The seven domains are physiological basic, physiological complex, behavioral, safety, family, health system and community. Each intervention has a unique code number. Those are next to the interventions on the gold sheet.

NIC interventions have been linked with NANDA nursing diagnosis, with Omaha system problems and with NOC nurse sensitive outcomes.

The classification is continually updated with an ongoing process for feedback and review. Work that is done between editions and other relevant publications that enhance the use of the classification are available from the Center for Nursing Classification at the College of Nursing, the University of Iowa, and the last page of the handout has brief information about the Center.

The research to develop NIC began in 1987 and is ongoing. It is conducted by a large research team at Iowa and has received 7 years of funding from the National Institutes of Health. Multiple research methods have been used in the development of NIC.

An inductive approach was first used to build the classification based on existing practice. Content analysis, focus group review and questionnaires to experts in specialty areas of practice were used to augment the clinical practice expertise of team members.

Methods to construct the taxonomy included similarity analysis, hierarchical clustering and multidimensional scaling. Through clinical field testing steps for implementation were developed and tested, and the need for linkages between NIC and other nursing languages was identified.

Over time more than 1000 nurses have completed questionnaires. Approximately 50 professional nursing associations have provided input into the classification and hundreds of nurses in numerous organizations that have implemented the classification have provided feedback.

NIC is recognized by the American Nurses Association, is included as one data set that will meet the uniform guidelines for information system vendors in ANA's nursing information and data set evaluation center.

It is included in the National Library of Medicine's Metathesaurus for a Unified Medical Language. Both the Cumulative Index to Nursing Literature, CINAHL and Silver Platter have added NIC to their nursing indexes.

NIC is included in the Joint Commission on Accreditation for Health Care Organizations Manual as one nursing classification system that can be used to meet the standard on uniform data.

The National League for Nursing has made a 40- minute video about NIC to facilitate the teaching of NIC to nursing students and to practicing nurses and alternative link has included NIC in its ABC codes for reimbursement for alternative providers.

The Center for Nursing Classification maintains a list of users of NIC in practice in education by state in US and by other countries.

It is estimated that over 300 clinical agencies located in 46 states and 20 countries are now using NIC for communicating the care provided by nurses. In addition, over 150 schools of nursing across the US are using NIC in their curriculums.

Interest in NIC has been demonstrated in several other countries, notably Brazil, Canada, Denmark, England, France, Iceland, Japan, Korea, Spain, Switzerland and the Netherlands.

NIC has been translated into Dutch and Korean and several other translations are in progress including French, Japanese, Chinese, German and Icelandic.

It is my pleasure to state without any exaggeration that there is continuing widespread interest in NIC. Users report that it is a clinically meaningful language that clearly communicates the work of nurses and simplifies the documentation of nursing care.

It has been my pleasure to be invited here. Thank you very much.

MR. BLAIR: Thank you, Joanne.

MS. MOOREHEAD: I am Sue Moorehead. I am, also, from the University of Iowa as Joanne indicated, and I am here to speak to you on behalf of the nursing outcomes classification team at the University of Iowa.

Mary Ann Johnson, the lead PI on this grant was unable to come. Marydene Moss and I are both co-PIs on a research team numbering about 30, and so I would like to present an overview of this classification which compared to NANDA and NIC is in the toddler stage of development, at least in terms of years.

Health care providers, not matter their specialty are struggling to maintain quality patient care while holding down or decreasing the costs of health care today.

This environment has made outcomes of care a key concern and a focus for individuals and organizations that provide health care to individuals, families and communities, something that may be very unique to nursing's focus.

Until recently no reliable comprehensive system of measuring outcomes has been developed that captures the variety of nursing specialties and allows information about patients to cross different health care settings. To measure outcomes effectively a classification system that is useful across these settings is needed so that outcomes can be followed over time regardless of where the patient might be at that time.

The nursing outcome classification which we affectionately call NOC is a comprehensive standardized classification of patient outcomes developed to evaluate the effects of nursing interventions or treatment. Standardized outcomes are necessary for documenting in electronic records, for use in clinical information systems and for the development of nursing knowledge and the education of professional nurses.

In this classification an outcome is defined as a variable concept representing an individual, family or community condition that is measurable along a continuum and responsive to nursing interventions.

Prior to this nursing used to focus on goal statements, and if I could use the watermelon analogy we were plagued by thousands of seeds.

In my opinion NOC clusters those seeds and creates watermelons. So, in an Iowa analogy NOC is a field of watermelons, each having a label name that makes nursing able to communicate goals beyond the seed level in a way that is more communicable with patients and with other providers.

The outcomes are developed to use in all settings and with all patient populations. Since the outcomes describe patient status, other disciplines have found them useful for the evaluation of their interventions.

In the handout that is provided you will see an example toward the back of an outcome. Like NIC it has a definition that is standardized, a label name that is standardized and indicators or perhaps the seeds of the watermelon that have been grouped under that outcome.

The one I brought forward was anxiety control as an example that many disciplines might be interested in. NOC was first published in 1997, just 2 years ago and contained 190 outcomes listed again in alphabetical order. The list in your handout is the list that will be in the second edition, and those with an asterisk are those that have been added.

Each outcome has a definition, this list of indicators and can be used to evaluate patient status in relation to the outcome with a measurement scale that ranges from one to five. Five is always the highest level that the patient could achieve on that outcome.

We have a variety of scales. The two that I would like to point out are a compromise scale and a demonstration scale, but each outcome has a standardized scale that allows the nurse to measure progress based on intervention toward meeting the ultimate status possible for that patient or family on the scale.

A second edition of NOC to come out in the fall alongside NIC has 260 outcomes listed including individual, family and community outcomes which are new editions to the classification.

The taxonomy is in your handout. It contains seven domains and will be published for the first time in the second edition.

The seven domains are functional health, physiologic health, psychosocial health, health knowledge and behavior, perceived health, family health and community health. It is the belief of the NOC team and others working in this area that down the road we may be able to develop a taxonomy that blends with NANDA since both of these classifications focus on patient behavior.

This classification is continually updated to include new outcomes and to revise older outcomes through feedback from users. Our initial work began in 1991 with some pilot work on patient outcome development. We were funded and began clinical work to develop the outcomes and have done that primarily from 1993 to the present.

Right now we are in the second phase of our research testing the measurement scales. We are currently using nine test sites in five states in the Midwest. Funding for Phase I was received from Sigma Theta Chi International and funding for Phases II through V from the National Institutes of Health through the NINR.

I have to say on behalf of the research team this is a small bucket in the resources used to develop this classification. Thousands of hours of volunteer time from nurses who believe in this project have contributed to the classification you see here today.

We have used multiple methods as described by Joanne and have essentially built NOC using the methods that were refined through the NIC investigation.

Currently a number of methods are being used to evaluate the reliability, validity and sensitivity of the outcome measures in clinical sites, the focus of our second grant. Clinical sites used in the setting include tertiary care hospitals, UIHC in males, community hospitals, community agencies including parish nurses in Chicago and long-term care.

Several tools are available to assist in the implementation of NOC and outcomes have been linked to NANDA, to the Omaha system presented yesterday by Karen Martin, to RAPs, resident admission protocols used in nursing homes and finally NIC.

The last page of your handout is an example of the linkage work currently under way by both the NIC and NOC teams to link the interventions and outcomes to NANDA diagnoses that nurses use in care.

NOC is one of the standardized languages recognized by the American Nurses' Association. As a recognized language it meets the guidelines, standards set up by ANA's information and data set evaluations that are known as NIDSEC(?) and for nursing information system vendors. It is, also, included in the National Library of Medicine's unified medical language and CINAHL's index to nursing literature.

The use of NOC in practice, nursing education and research is the most accurate indicator of NOC's usefulness. NOC is being adopted in a number of clinical sites for the evaluation of nursing practice and is being used in educational settings to structure curriculum and teach students clinical evaluation and judgment.

To me this is quite an endeavor given it has only been out to the lay nurse for 2 years. Interest in NOC has been demonstrated in other countries and quite a lot of our efforts this year have been working with those groups. NOC is being translated into Dutch, French, Japanese and Spanish with many other translations pending.

In conclusion since publication of NOC just 2 years ago interest in measuring the effectiveness of nursing interventions on specific outcomes has become a key concern for nurses in all settings. Nurses are excited about measuring accurately the outcomes of care they provide and NOC is the only classification system to date that captures the outcome of nursing regardless of specialty and setting and offers nurses a unique tool for making their contribution to health care visible.

It is important that the contribution of nursing is accurately capture in the development of a computerized patient record for the future.

I thank you very much for having this opportunity to present our work.

MR. BLAIR: Thank you, Susan.

Judy?

DR. OZBOLT: Thank you. I am going to address the issue first of required information for the patient record, speaking particularly about nursing but also about other non-physician providers, often known as allied health.

I am Judy Ozbolt. I am a professor of nursing and biomedical informatics at Vanderbilt University, and I am, also, a member of the Executive Committee and the Board of Directors of the American Medical Informatics Association, a member of the Executive Committee of the American Nurses' Association, Council on Nursing Systems and Informatics and a member of the American Nurses' Association Committee on Nursing Practice Information Infrastructure.

What I would like to talk about today first are your questions, definitions and requirements for patient record information, issues for government action, issues of comparability, then about the patient care data set which is one approach to dealing with record information and finally about the nursing vocabulary summit conference coming up at Vanderbilt June 10 through 13.

The patient medical record information includes all the data recorded during care by providers of all disciplines. Usually this includes assessment, diagnoses, orders and documentation of care. Rarely does it include goals and clinical outcomes whether they be physiological cognitive, affective, behavioral or functional outcomes.

To nurses and to physicians, also, I believe, outcomes are not just length of stay and total charges. Outcomes are what happens to the patient, and those data are often difficult to find and retrieve. The purpose of the record information is first of all as a reminder to providers so each provider can keep a clear picture of what is going on with the patient and what services are being provided, also, for communication among providers. Nurses particularly are aware of this as we change shifts several times a day and come and go and yet have to provide continuity of care to patients.

In addition the record information has increasingly become a source of data for payers, managers and researchers.

This becomes problematic when clinicians record data in idiosyncratic ways. That means that the data become difficult to retrieve; how do you find the information that you are looking for, difficult to interpret; what was meant by this particular phraseology and extremely difficult, therefore, to aggregate and analyze in valid and reliable ways.

There is, also, a problem because clinicians and other users may actually require different concept representations to get at the semantic meaning of what they need to do their work, and yet as has been discussed very much in these hearings busy clinicians care more about patients than about recording, and they will record what they must record, and it will be a more or less accurate representation of the clinical event, and it is the less accurate that concerns us.

Now, we have heard a lot about the difficulties of using data sets that are required for administrative purposes as proxies for recording clinical events. In nursing as you have heard a number of people have made magnificent efforts to devise terminologies for recording nursing information but none of these has emerged as a de facto standard, and as a result we have no retrievable data for the services of nurses and allied health personnel in most settings, and yet these services consume one-third, typically one-third of a hospital's annual operating budget. It is a black box. We don't know what we are paying for. We cannot study the effectiveness and the cost effectiveness of these services.

Nursing has essentially been treated as part of the furniture. Nursing gets rolled into the room rate and yet what those nurses do may or may not impact length of stay when patients are getting bounced out of the hospital regardless but it will impact how well the patients are and what level of services they continue to require and whether they bounce back to the hospital sooner than anticipated.

Likewise in home care and nursing home care, nursing is the reason for being of these services, as well, indeed, as the reason for being of inpatient hospital care and yet we lack the means to study the effectiveness and cost effectiveness of those services.

Now, we are not without resources. The ANA has recognized seven terminology sets as being useful for recording clinical nursing data, and these data sets, also, are recognized by ANA's MIDSEC(?) group as being appropriate for inclusion in commercial information systems, and they have been or are being incorporated into the unified medical language system, but our colleague, Suzanne Henry and her associates have written a series of very carefully researched critiques of these nursing vocabularies pointing out in what respects they lack comprehensiveness. They don't cover all of the clinical events that nurses deal with. They vary in granularity and they may not have the level of granularity necessary to distinguish clinical events, to know what works and what doesn't work and how well.

In general they have not until quite recently included atomic level elements and a combinatorial grammar so that they can be coded at a very finite level and recombined in a variety of ways to represent what actually happens in the clinical event in a way that is understandable to the computer, sufficiently flexible for clinical practice and able to be aggregated and analyzed.

These nursing vocabularies have been diverse in the purposes for which they were created, in their scope of coverage, in the form that they take, in the content and in the modes of development with the result that we not only don't have a de facto standard but because of their diversity we don't have -- they don't even all make up a real unified language for nursing, and we will talk in a bit about what we are going to try to do about that because we are not just wringing our hands.

Who should care whether nursing has a useful vocabulary or not? What would be the return on investment of such a thing? If we had these standardized atomic level elements with combinatorial grammars, we could use those as building blocks for care plans and pathways and one of the ways that we could use that, one of the ways that we are proposing to use that at Vanderbilt is in fact, to take the pathways for upcoming shifts for the patients in a unit and project the care requirements and the staffing needs from the clinical plan of care so that we don't need redundant data collection from an expensive nursing classification or acuity system. We can simply use our plans for clinical practice to project what it is going to take to provide that care, and incidentally we can track the variances and begin to see what we need to change.

We can provide decision support via hyperlinks to these atomic level elements and they ways that they are combined. We can create databases for quality and research, and we can have what Sue was referring to, sensitive measures of quality and effectiveness. Dead or alive is not good enough for quality of care. Length of stay and total charges are not good enough for quality of care. Even patient satisfaction is not good enough because patients don't know that their IV didn't infiltrate because the nurse did the right things.

We need sensitive measures to let us know what we are doing right and how patient outcomes change with different configurations of resource allocation.

So, what can government do about this? We had some discussions about what government should and shouldn't mandate, but I think it is reasonable that with all the money government is spending on nursing care and every day of inpatient stay and every visit of home care is nursing care, then you need to have reports on the quality and the effectiveness of that care and the cost of that care based on clinical nursing data.

We have an opportunity to avoid some of the compromise decisions that have had to be made in other aspects of medical care using less good representations of clinical phenomena. Let us get the clinical nursing data that we need for these reports.

You can require that the reports be based on clinically validated terminologies at appropriate levels of granularity. You can mandate that terminologies meet emerging criteria for clinical terminologies, for example, ASTM has a draft standard in the works that very likely will go to ballot that suggests requirements and desiderata for clinical terminologies, and government can support research conferences, research and conferences to develop and test terminologies in nursing and allied health.

In terms of the comparability of the record information currently we don't have comparable data in general in most places, not across institutions, not even within institutions, and yet comparable data are critical to identify best value services, those that provide the best balance of cost and outcomes. You know what often gets called best practice really just means cheapest, and cheapest may not be giving you cost effective outcomes.

We need comparable data so that we can reimburse study and improve patient care services which is what we all care about anyway.

So, what has been my contribution to this picture? In 1998, the American Nurses' Association recognized the patient care data set. At that time that consisted of codes and pre-coordinated terms for patient problems. We had 363 terms for problems, goals. We had 311 terms for goals and orders, 1357 terms.

What is a little bit different here is that outcome was defined as goal evaluation status, and this is consistent with the HL7 standard for patient care data. It is useful, I think, to, if you want to study the effectiveness of care, you need to know whether the condition that the patient arrived at, at the termination of care was the same condition that you were trying to reach with your therapeutic goal. I think that goals are best used in combination with the kind of sensitive measures that NOC is developing for example, so that we know precisely what the patient was doing and we know how that compares to what we were trying to achieve with the patient.

These terms of the patient care data set were developed and tested at the University of Virginia in a collaborative project with the University Health Systems Consortium with funding from UHC.

We really made a catalog of terms in common usage, and these were long phrases, precoordinated terms for problems, orders and expected outcomes or goals, and we validated those in a national sample of six hospitals and found them to be comprehensive with most of the terms used in acute care.

Now, I have taken to heart Sue Henry's critiques and have been revising these precoordinated phrases of the patient care data set, parsing them into atomic level elements and combinatorial grammar, also, specifying values of the elements taken from those terms that we collected from practice and what is the most fun in a way is as we do this discovering the clinical knowledge that defines the links among the values. If a particular value occurs for a patient what does that suggest about other problems that need to be considered or other actions that need to be taken, for example?

If you look in the Vanderbilt packet that you have one of the handouts is a profile of the patient care data set. I don't know that enough copies were made of that for the audience, and I apologize, but it will be available on the web site, I understand or I will certainly provide things to people, but that handout, that profile shows you that there are 22 care components in the patient care data set. These were derived from, modified from those that Virginia Saba developed in the home health care classification. They are very close to those. We split immunology from metabolism. We added things having to do with procedures, and we made slight modifications in some of the others.

I am just going to flip through these. You can see that they cover physiological, psychological, social, mental health, behavioral and so on.

Now, in the patient care data set we are saying that we have three axes, problems, goals and orders. On the problems axis we have these elements, the subject who is the recipient of care, the object which is what NANDA calls the focus of care, the likelihood that the problem exists; is it confirmed; is it rule out or suspected or is it potential?

The status, sometimes called the modifier or judgment, is it impaired, disrupted, whatever, the degree, one-plus, two-plus, three-plus, duration; when did it start; is it acute or chronic; the value, for example, the score on the Glasgow(?) coma scale and frequency, how often does a problem such as migraine headaches occur, the body side and laterality, and in the handout there are tables. There is a table showing the possible values for each element in the component activity, and those can be readily combined into a statement like this. Patient has confirmed chronic, moderately impaired range of motion, 60 percent of norma of the left shoulder.

At Vanderbilt our plan is to use such tables in building our pathway statements. So, it isn't that the clinician on the fly every time is having to go through a pick list and choose all these things, but as we build our pathways of care we are using these elements which means then that the parts of the pathway will, also, be comprehensible to the computer and enable us to do our databasing and decision support work.

On the goals axis, the subject and object elements recur, and we have performance. What will the subject do, the level of performance, at what level, by means of what equipment? Manifestations become a lot like the NOC outcomes. What precisely will the patient be doing; how will we know that the patient has done that, and the goal evaluation status is a Lickert(?) scale ranging from achieved to abandoned.

An example gold statement is the patient will achieve range of motion within an acceptable range for that subject, 80 to 90 percent of normal by use of appropriate equipment.

Orders are interesting because they are rather complex. We have the same subject and object elements. We have an action; what is the health professional to do? The indicators if what we are doing is assessing or monitoring, what are we assessing and monitoring for? The methods are the modes of care to be implemented. The risk factors are things that we need to, also, be managing or monitoring for in dealing with this, and we find that to develop values tables for the orders we have to develop a different table for each object. It is very specific in that way, and we don't come up with a single order. We come up with order sets, and these are a few of the orders that would be relevant activity restrictions, assess patient's patterns and levels of activity and so on, and you will probably notice that these are rather similar to the activities in the NIC classification in terms of the level of granularity.

So, comparing the patient care data set to the home health care classification in Omaha all came from patient records. Home health care in Omaha referred to home care. The PCDS came from acute care, and the components of the patient care data set were modified from those of the home health care classification.

Compared to NANDA and the international classification of nursing practice, there are a number of elements that are either synonymous or, indeed, overlapping between the two or among the three.

In comparison to NIC the patient care data set orders comparing granularity to the NIC activities and the processes of development have been rather different. We compiled ours from patient care documents, and the NIC had a consensus process that has been validated by a variety of research methods as Joanne pointed out.

In comparison to NOC, NOC is providing valid and reliable measures of conditions or behaviors as outcomes, and that is terrific and much needed. What the patient care data set does is define the outcomes as goal evaluation status which we think is useful for effectiveness studies and consistent with HL7.

Now, you have heard from six of us about these various nursing vocabularies, and they are all different, and what are we to do about that? So much work has gone on by so many people and so much effort has gone into the various vocabularies that there is a lot of knowledge out there.

So, what we are going to do in June is bring together all of the vocabulary developers whose work has been recognized by the ANA and even some whose work is under consideration by the ANA.

We are, also, bringing in language and standards experts, many of whom you have heard from in these hearings, Jim Cimino and Stan Huff and a number of others, including Sue Henry. We will have representatives of federal agencies and professional organizations and health care agencies and the health informatics industry. We are all going to put our heads together and try to come up with guidelines and desiderata for nursing terminologies to represent these events, some recommendations for further development and we will produce some papers for publication and some presentations

We will have a briefing book of background reading so we can all start with a common base of knowledge. We will spend a day learning about language and standards and what is happening in the mainstream and setting goals for the nursing vocabularies. We will spend a day doing small group work toward our goals and a half day reporting and disseminating.

This work is supported by a grant from the National Library of Medicine, by a contract from the Division of Nursing at HRSA, by a contribution from the AMEA(?) Nursing Informatics Working Group, by a gift from the American Medical Association and by contributions from 3M Health Care Systems, from Care Centric Solutions, from the Cerner(?) Corporation, from IDX, from Kaiser Permanente, from Lexical Technology, from McKesson(?) HBOC, Oceana, SMS and SNOMED International, and if anybody else would like to participate, please see me afterwards.

What I would like to offer the Committee is a form report of the conference, whatever our work product is by the end of the summer.

We will have the conference in mid-June, and it will take us some time to write up the report, pass that through the editorial board of that conference because we want to make sure that we are accurately representing the ideas and then would be happy to provide that to the Committee.

Finally, thank you very much, and if you can take the time to look at the full text of the testimony that will give you details that I only hit the high points of, likewise the profile of the patient care data set, also, includes figures that show how these elements relate to one another, and I would be happy to provide further information.

Thank you very much.

MR. BLAIR: Thank you. I want to thank all of the witnesses that testified, excellent testimony, and we have about 15 minutes for questions.

What is our first question here?

Simon?

DR. COHN: I actually, also, want to thank our panelists for a very interesting set of presentations, and I appreciate it.

I just had a question that isn't specifically related to your code sets but is sort of perhaps peripheral. A couple of you mentioned nursing acuity systems, and I was sort of curious. What code sets are used in these systems? Are any of your code sets used or nursing code sets used or is it something entirely different?

DR. OZBOLT: I think probably Joanne can, also, address this, but by and large these have been commercial products that have tried to predict resource consumption for patient care. There are prospective systems that look at what you anticipate you are going to have to do for patients in upcoming shifts and predict staffing that will be necessary for that.

A fault with those is that you never know whether the patient actually received the care or not, and there are retrospective systems that look at what care was actually delivered and try to determine whether the staffing was adequate to deliver that care, but in fact, you don't know whether what was delivered was what was needed or more or less or what.

Phyllis Giavenetti did a comprehensive review of these systems for the Division of Nursing, I believe in 1978, and concluded among other things that not only were they not, we didn't have good evidence of validity and reliability, but there was nothing really to show that these systems were better than just asking the charge nurse how many staff you need to cover the patients for the next shift.

That information was reviewed a decade later, and came to similar conclusions, and yet, because of the magic of numbers and computers I think as much as anything hospital managers have invested a lot of money in systems to try to predict or determine how much staffing is really needed to care for patients, and they usually require a, in fact, they generally require a redundant data collection in addition to the assessment and care planning that nurses are doing, and furthermore nurses have quickly learned how to game the system so that you can record the data in such a way to get the care that you know you need anyway.

So, it has been not a very satisfactory situation. I am probably one of the most pessimistic and cynical voices you will hear on that side.

Joanne, you know a lot about this.

PROF. MC CLOSKEY: I just wanted to add that first off to agree that these systems have usually been individualized for a particular institution. Even if they were standardized such as GRASP(?) or MEDICUS(?) they were so individualized you couldn't recognize them anymore.

When we developed NIC and pilot tested this with field sites in multiple settings we, also, developed with these field sites a patient acuity classification system. Five categories prototype, just describes patients in five categories, and that has been used by some of our sites with NIC so that you can get a standardized measure that, also, travels across settings.

Now, that is just a category, one to five. It does work across settings, but it compares, you know, with the one to five, one to seven and so on of these other categories.

There is some beginning discussion and some people talking about comparing the interventions to any existing patient acuity patient classification system. It is possible in the future to perhaps think about an index of the interventions to have some sort of, that be a proxy if you will for the acuity level of the patients. That is research that needs to be done. I don't know that that is going to be the only way to do this, but certainly type of interventions and amount of interventions and so on delivered could possibly be a proxy.

Right now we advise taking some acuity system. We have a prototype of one, five categories used across settings that has not been well tested at all.

DR. OXBOLT: What we are proposing to do at Vanderbilt as we roll out our networked pathway system is to develop a way of projecting from that the care that is required in an upcoming shift on a unit, have that coded to the time and complexity of each of the activities and then present to thee care manager a bar graph that shows by blocks of time, hour by hour or otherwise the minutes of care at different levels of complexity that are required to take care of those patients, and then leave it to the manage to decide how to staff the unit with that evidence, whether it means staffing at budget, above budget or below budget, and see over time how the staffing level and its adequacy in relation to the projections affects patient outcomes among other things.

MS. JONES: One comment, part of the dilemma, I think, also, is what is acuity, and how is it being operationalized because if the defining determinant around that is only the phenomenon for which the person is being brought into the hospital and doesn't get at the complexity or articulate the complexity of the issues involved in the care, and if the language is not sensitive to draw that out, it still isn't adequate in describing the full complement of care that goes on with the patient.

So, it begins to question even how you are operationalizing that word "acuity" because there is some concern about the dimension of that and within nursing if you are looking at maybe the big picture you are trying to get an account of the total response which may go beyond some of the more formative things that are associated with the health problem and even go beyond those issues, and once you go outside of the acute environment into other settings it has an even different interpretation.

MR. BLAIR: Do we have other questions?

Michael, you got us just before we ran to lunch.

DR. FITZMAURICE: That makes me Mr. Popular, doesn't it? I am thinking about care in a hospital, and much has been made about the patient comes in, and there is a diagnosis and a care path developed for the patient. The patient goes through the care path and is discharged and has certain functions. So, the critical point is the diagnosis of the patient, the treatment decision and then the patient goes down a slide and out the door. That is not a good way to describe it, but what I am wondering is with nursing interventions, outcome studies, patient care data sets it would seem to me that an appropriate use of this would be to better measure what the patient is when the patient comes in the door to look at the care path that is being followed, to examine variances from the care path and see if that leads to better or worse patient outcomes and then continuously improve, and I hate that term, continuously refine care path and relate it back to these variances in the patient coming through the door.

It is not just a patient with an ICD-9 diagnosis. It is a patient with a lot of different functions. That is where I see a lot of the contribution of the nursing code sets and classifications are. It can lead to useful research that improves the quality of care. Is this research going on?

DR. OZBOLT: To the extent that the data are available it is going on and we are rolling out a pathway system at Vanderbilt now and have proposals in to enhance that, and I know that a lot of research has been going on with NANDA, NIC and NOC along those lines.

So, let me let my colleagues address that.

PROF. MC CLOSKEY: There are multiple questions and multiple answers to what you raised. Just one piece of it, the research is beginning to go on, but it cannot go on until we get people using this and using this with information systems.

One of our colleagues, Roy Simpson of McKesson HBOC has stated and Judy probably knows the figures more accurately than I do that there are over 100 vendors operating in clinical information, clinical nursing information systems. No one of them owns more than 4 percent of market share.

Now, that is an incredible community, and the group of people, the group of users who has been the most reluctant to come in here are the vendor communities. We have McKesson HBOC who is about to sign a contract for NANDA, NIC and NOC, but they are the first large vendor.

There are three other small new vendors, but then there are still 90 plus others, and what happens here is that you have hospital and other systems that have bought into these vendors for non-nursing clinical data and this nursing clinical piece is missing, and the vendors aren't quite sure which way to jump here, and it is very expensive then to incorporate all this and to have their users who might be 400, 500 people out there to make changes.

They are beginning to come around now. The emphasis that ANA has put on standards with nursing data sets is helpful, but if there is anything that your Committee can do to encourage vendors to use standardized language, we would then have the data to do effectiveness research and yes, to improve our clinical pathways.

I would, also, like to say that the clinical pathways or critical care, whatever they are called are for homogenous populations, and you know a lot of people in hospitals and even other places are not homogenous. They are not predictable. So, I mean we view, if we just take the three languages of NANDA, NIC and NOC and any one of these can work with others, but I would say working with the patient what are the nursing diagnoses; what outcomes am I trying to achieve, are we trying to achieve working with the patient, measure the outcomes that I picked using the NOC scales now upon entrance and then say, "Now, in this institution or for this length of stay or for this encounter where would I try to go on these particular outcomes, link on a scale maybe from a two, maybe just to a three, maintain only to a two, try to go to a five, whatever, and to do that what intervention," and then to take a look at the research database with aggregated patients and just say, "Look, what worked for the achievement of what outcomes in what kinds of patients, and what does for this kind of population the standardized critical path say?" and only recently have we begun to see critical paths incorporating nursing standardized data.

For a long time they came from practices of only physicians. They differed as the physicians here know very much by the individual physician. If you could agree on the IV solution, that was great, but beyond that there was not a lot of agreement.

Now they are incorporating nursing languages, but they are, again, based upon judgments and past practices rather than a database of research.

DR. OZBOLT: At Vanderbilt about 80 percent of our patients are on clinical pathways at any given time. Part of what we want to achieve with standardized language built into these pathways and coded is the ability to customize pathways on the fly, to combine two different pathways. The patient is not only multiple trauma but pregnant, for example, and to just put in the individual things in relation to the problems that the patient is experiencing, the individualized customized actions that you need to take and the appropriate goals for that person, and standardized language should make it easier for us to put tools into the hands of clinicians to do these things without driving themselves crazy or neglecting their patients while they attend to the computer.

DR. FITZMAURICE: I just want to say that I applaud the effect of the nursing summit to bring together these code sets, and I hope that the goals of the summit are to improve just this kind of research and to foster measuring the outcomes of nursing care.

DR. OZBOLT: That is where we are all trying to go.

MS. JONES: I think that, too, there is a lot of work that has been done and unfortunately because we are all coming at it from very different perspectives the work looks like it is in isolation, and so the collective, you know, cannot be really spoken to, and throughout the country I think there are isolated cases where people have made significant changes using NANDA, NIC and NOC in particular to care pathways to make them more patient specific.

The problem though I think has been the lack of uniformity and so what gets communicated seems like an isolated case rather than the aggregate.

MR. BLAIR: Are there any other questions?

Okay, thank you, again, for excellent testimony, and let me see how we are doing on time here. It is 12:08 p.m. We are running only about 8 minutes late. So, I am going to ask us all to return here at one-thirty, please, and we will be back on time.

(Thereupon, at 12:08 p.m., a recess was taken until 1:33 p.m., the same day.)


AFTERNOON SESSION [1:33 PM]

Agenda Item: Drug and Device Code Sets

MR. BLAIR: Okay, let me convene our next panel here. It is predominantly a drug-related terminology panel. However, we have one person who will testify who would like to go first and that is David Rothwell, and I think his topic is a little bit broader than just drug knowledge bases.

David?

DR. ROTHWELL: My name is David Rothwell. I am an editor with the Health Language Center, and today I would like to talk to you about the center and the structured health mark-up language, a product of the efforts from a group of people who are involved or participants in that health language center.

A small group of individuals came together about 2 years ago to explore new ways to capture, characterize and to access information in the patient medical record with the understanding that data capture is a critical issue but, also, the characterization is, also, critical, and with those two pieces in place, access then becomes reasonable or we think can be achieved.

It was decided amongst a group of individuals, and I can talk to you about who they were that the problem was more than just a terminology vocabulary issue, and as many of you know my background is as a previous editor, developer over 20-some years of SNOMED, and as of some 3 years ago we essentially left that effort behind and have gone forward with this other and in some ways parallel effort.

It was decided then that it wasn't just terminology; it was vocabulary, that it was really a language issue and hence the word "language" in structured health mark-up language and it was the ability of language or inability to process language which is either the or a major issue.

Other new technologies need to be utilized in order to address the problem, most notably SGML or XML, its derivative, and a new way to characterize information which we call the structured health mark-up language or SHML, and you will see reference to that throughout this talk.

The Health Language Center is a group with expertise in each of these areas that are needed to address this problem, namely, natural language processing, and we are working directly with the unit from NYU, Naomi Seger and her crew who have some 20-odd years dealing with natural language, individuals from the XML community who have expertise both at the ISO level and the HL-7 level and individuals who are experts or who have an involvement with dictionary building, and I count myself among that group.

So, it is really the coalescence of the language processing, the utilization of the XML formalism, actually an extension of the XML formalism which I will be talking to you about and the issue of dictionary or lexicon building.

This is a work in progress and in a sense this is a status report. I presented a roughly 1-hour talk at the TEPER(?) meeting 2 weeks ago in Orlando and obviously today this is going to be just a summary of that information that was delivered at that time.

I am going to give a quick overview of the work and the slides you have provided to you in written form, and a disk has, also, been provided.

The major issues, and I will quickly read through here, and I will go through some of the slides very quickly and if you have questions as you review the printouts I would be more than happy to answer at the end of the session.

The major issues affecting patient medical record information we have heard from a number of speakers and we all recognized the continued barriers to data entry are a significant problem, the increasing role of patient choice and the need for involvement of patients, the shift from acute inpatient disease to prevention management, the impact of genomics which was referred to yesterday and the rise of Internet standards have become increasingly important in the work that needs to be done.

The Health Language Center's approach is then to take the maturity of the two major technologies, the Internet standards, namely, XML and natural language processing and then to try to use those to keep pace with the rapid changes that are occurring.

Now, if we look at the patient record, the provider patient document, provider of any description, the utility has been listed. We have seen care process, research aggregation, epidemiology. Adherence to guideline and eligibility are the two areas that we are particularly focusing on in the work that we are doing currently.

We think that if we can get it to work, that is we can extract the information necessary to understand or to know whether or not adherence has occurred whatever set of guidelines is proposed, or whether the eligibility for protocol study, that if we can do that, that is do that real time, or do it improved to the Committee, to the industry that in fact the processes that we are proposing and beginning to have in place will, in fact work in these two areas, and if we then show to everybody's satisfaction that that is the case, then we can go on to the areas of care, direct care and the other areas, secondary uses which have been outlined by previous speakers.

So, our patient record, basic medical record information, whatever we are going to call that, whether it be in text or structured data entry it would appear that both forms are prevalent. Both forms will be used. The structured data entry whether it is a set of pick lists or whether it is a set of guidelines or protocols that have been outlined represent a similar problem.

Our job is to extract from that either text or structured data, and our attempt is to integrate the information derived from both of those sources. To get at the clinically important information units, minus the verbiage, what we call and derive a clinically relevant content unit or CRCU. It is a finding. Those are findings, but within our own working group in order for us to distinguish and to understand one another as we speak this is a temporary placeholder for this notion of finding.

In order to accomplish that and to achieve the goals outlined here we think we need these three technologies that are currently being developed, and I think we are making reasonably good progress, namely, natural language processing, the use of XML formalism, an extended one and a description of a set of tags using the XML formalism to characterize the information in the record in a somewhat different way than it has been done previously.

We are going to start and take an overview, 10,000, 30,000 feet of each of these language processing XML and a look at some of the tag sets that we have currently developed and are under development.

Natural language processing is a reasonably mature, I think not perhaps at least not in our view the group that I am working with at NYU don't think that problem is solved.

We heard this morning that that was a done deal, and if it is we are not aware of how some of the problems have been addressed.

In any case it is a system which turns natural language in the clinical documents into structured data for a variety of applications. It makes explicit the information structure of texts using linguistic methods in the most advanced information technology that is available to us.

Language and language processing, automated language processing, natural language processing presents information linearly and strings phrases, sentences, paragraphs, sections in discourse. The information is carried by the semantic types of words occurring in combinations, and that is key.

Co-occurrence of terms that are identified by their linguistic classes is absolutely essential to an understanding or processing of actual language, and that skill, that discipline is in the linguistic community, and we are, in effect, piggybacking onto that.

What we expect, and what we hope to do, and I will show you examples, here is a dictated encounter note that was actually the second visit, 28 year old, etc., medications, examinations, other issues and so forth, and we will give you some examples of how the system we have in place is able to automatically process sentences that have been extracted from that discourse or that dictated encounter note into a set of information units, formats in a natural language world, information units CRCUs. They are all equivalent terms.

They include demographic data, deal with verbs, deal with patient state, data related to diagnosis, sign and symptom, status data, that is to say patient, what anatomic parts are involved, treatments, tests, time, uncertainty, negation, response and changes, hugely important within an information unit, and examples of this will be shown in which the time at which things occur, the uncertainty or in the language world the modality, the presence or absence of negation and a set of terms that suggests negation within natural language are important to incorporate into this unit of information, this CRCU.

Processes are in place to do that. This is an information format or a patient state. As you can see it is fairly complex. It is on your handouts, but what it does is take the different elements as defined by the language processor, each of which have their subparts and bring together in a sentence those ones that fit together.

Does it work? Does the language processor work? A HEDIS study was performed using this tool at NYU in which beta blocker treatment after heart attack and the hospital discharge records were actually prepared; these are real- live records done off line obviously were processing analyzed, formatted for retrieval of information on the following queries: Did the patient have an acute MI? Was the patient give a beta blocker? Were there contraindications?

The beta blockers were given. The contraindications were given, and the results from that query of 91 studies from a very major hospital, by the way in Massachusetts indicated that the beta blockers were given when one or more of the contraindications in 42 of the 91 instances and in two instances it should have been given but was not given, and that information was derived from analysis using the natural language processor and strictly doing string searches of those terms.

Could that be improved upon? We think it will, and in order to do that we are going to now look, go back up to 30,000 feet and look at the ostensible mark-up language which is highly detailed. We have experts in our group, as I say, working at both the ISO and HL-7 level to help us work our way through that.

It is a streamlined subset of the standard generalized mark-up language. It is a language, and we are in a sense piggybacking on that language with our structured health mark-up language, that is a set of tags to define conditions that occur in health that are of importance.

The document type definitions define the use that is really the syntax, if you will. It is the schema by which this information is collected, and I will show you examples of that as we go along, designed for data exchange. XML's primary motivation, SGML much too difficult to move. HL-7 it looks like it is going to adopt in their version 3.0. XML is an exchange format or a syntax, I should say.

Data processing oriented rather than publishing which is what the SGML was originally designed to do, and one is able to create your own tags, what you need to know, annotate the document with meaning, and up to this time XML has been used principally to define the architecture of a document not the content within the document, not the blobs, the textual statements that have been just referred to as blobs of information in the current year up to now versions and what in effect we are doing is going into those textual elements, textual blocks, and we are annotating on a word- by-word, term-by-term with the help of the natural language processor to assign not only the medical classes of the terms, I am sorry, the language classes of the terms but the medical classes of the terms in a way that is quite meaningful.

There is more mark-up language. The data and document are combined. In a sense you can take the text, and you assign your tags, and the text is there, and it is annotated, and so, it is retrievable and analyzable at that level.

The tag text, in effect, transforms it into data. There are granular descriptive views of text, and we will show you lots of examples. They are meta data. They are information about the data. It is not explicit in the text which puts meaning and interpretation on top of text. It has the ability, with a properly developed set of tags to catalog all the information in a record or document and importantly it preserves the fundamental structure of the document and perhaps even more important the tagged data can fit into any data model. It has the capacity to do that. So, in a sense, I suppose another way of saying that, it is vendor, that is database neutral.

It is designed for an object-oriented approach. That is the one that we are pursuing. I don't want to get into the technical details of that, but that offers a great deal of facility.

What does it look like. There is element type in brackets, content words, flow element type. So, the information you want to say about this term or this phrase or this architectural structure is contained within tags or formerly spoken, the element type.

Here is pneumonia. It is a diagnosis, open, closed diagnosis that tells you then yo can retrieve that term either by asking what were the diagnoses, in which case pneumonia would be retrieved or by retrieving pneumonia itself.

So, it offers a great deal of facility over and beyond what is currently available to us. Pneumonia, right lower lobe, open diagnosis, open location, right lower lobe, close location, close diagnosis.

Pneumonia right lower lobe, superior due to Klebsiella, open diagnosis, open location, close location, open position which is superior, close position, links, organism, open Klebsiella, close organism and finally close diagnosis.

That could be presented in any one of a variety of formats and an extension of XML is the extensible style language which allows documents that are formatted in this way, that are tagged and conform to the XML structure and formalisms to represent, to render the information that is tagged in any kind of a set or any kind of format that one wishes.

For example, one could say, "Diagnosis," one could say, "Location," one could say, "Organism," and it makes a very concise way of reviewing information.

This is a statement actually from a paper written by Kaye about 6 months ago in which he took a sentence, demographic, node positive, carcinoma left breast, treated, chemo, radio. Here are the treatment schedules. Using XSL in the marking we are able to take that text and represent it for review and instead of having to see this and wondering what was going on and having to read through all this, to isolate what the diagnostic tag, to spread to previous treatment and the treatment is outlined in very concise, very precise, very readable and very transportable to databases, whether you use relational or object. Again, we are not going to get into that portion of it, but it is there.

Now, structured health mark-up language, the third component, third limb, the third leg utilizes the linguistic model. We are not in the coding business. We think that the tags that we assign to these terms and to these phrases are the descriptive tags, and we can assign as many as we like as long as we stay within the formalism and as long as we define the syntax and the understanding of that in RTDT, a document type definition to schema, and one of the beauties, one of the strikes of XML is that it has a great deal of flexibility in presenting data in any way that one likes. It utilizes XML, designed to integrate structured data entry and text, and again, this is an important point.

I am going to focus on the natural language part of this, but we have another portion of our team, actually a vendor who is in fact using the same technology, using the same set of tags that we are developing and assigning them to structured data.

So, we will now, we hope, or our purpose or our design is in fact to extract the information from both structure and from text and put it into the same kind of data structure and have it available for review and analysis.

So, the SHML and medical language processor, and medical language is a bit easier to process than natural language. We heard a bit this morning about all the issues surrounding natural language. Many of those carry over, but medical language is really a subset of natural language, and the ambiguity although present is present to a much lesser degree, and if we stay and decide that we can just deal with the medical language at this point in our evolution, I think we can make progress.

So, the document then is standardized, sentence identifiers applied. The medical language processor isolates the thing, the statements into the CRCUs, and examples will be shown. The assignment of both the medical SHML content, tags and the assignment of processing tags, medical language processing tags, that is linguistic and medical into the tag and done in parallel and the use of the DDT that we define, the style language which we are free to define as long as we stay within the rules, and this is a new development, the SQL which is being adopted or adapted to work with documents that are tagged in XML That work is a work in progress now.

The standards committee, W3C group think that by this year's end that in fact they will have arrived at, there are tests or there are proposal materials out. We have looked and actually worked with some of them. We are quite happy with that. Retrieval from these kinds of structures works very nicely.

So, the mapping, the terms we have in our dictionary, each of which is defined, the parts of speech, the language class assignment and you needn't worry about what these are in terms of detail. You have them, and again if you are interested, we can go through them.

The mark-up language, this is a finding, a neurologic finding, altered awareness. Alternate is a time repetition. Alternate in between is a preposition. It plays a role as a linkage term in language. That is really all prepositions do, and that needs to be understood. Altogether is an amount term. Aluminosis is a toxicologic term. Augmentation, change term. August is an exact time, etc., as we go on down the line.

The dictionary that has been processed up to date is the entire, characterized in this way, both with language class and with medical knowledge class includes the entire dictionary that is used at the NYU site which has been processing documents, and it includes with Peter Goltra's permission the entire medicine dictionary that he has in place at the present time and information from a variety of other sources.

The tag set that is applied, the traditional stuff, demographic, anatomic, medication, organism, chemicals, devices, occupation, procedures, diagnosis, that is really ho hum stuff, and obviously what we are doing is building on the expertise that exists in those communities in working and by communicating and agreeing that we can use for the purposes outlined the different sets of terminology which they are expert in.

Now, in addition to the traditional set, however, characterized the verbs, prepositions, the foods; we didn't know that, 1700 dietary elements that are available to us from the American Dietetic Association. Issue of time is a hugely important problem, and it turns out that the language processing, in fact, have time exact terms, begin terms, end terms, frequency terms, location terms, time location. There is a whole family of terms that belong in each one of these classes, and these are defined and help us to process this language.

Change certainty, much of what we do in medicine has to do with certainty or uncertainty, and we need to be very explicit about that. This office has the opportunity to be very explicit and to do precisely that, and the other things that we see here are shown.

Findings, vital signs, signs and symptoms by organ, laboratory, image data, behaviors characterized as such, living injury exposure, travel, activities of daily living, as we go on down the list, allergies, immunization, functional status, disability compliance, alternative care, an increasingly important issue that we are confronted with today. I believe a year ago it was said that over 50 percent of the visits to health care providers were to alternative care providers in the United States.

So, here is again a little closer look at things. Here is our tag, the content words, close tag and each word in the vocabulary table or lexicon which is defined. So, it is a dictionary, the terms that provide taxonomic knowledge, hierarchical knowledge, synonyms, equivalent terms, linguistic knowledge are defined.

Terms that are ambiguous, and I will show you an example of those and the tag knowledge is found here, and so when text is analyzed one goes to this dictionary and finds and assigns the correct tag, and similarly in a clinical table here is a chest pain, its location. There is an attribute for which variety of values, onset, variety of values brought on by and so forth; each of those is characterized with this similar system.

The vocabulary table, digestive tract, you have got upper which contains a set of structures. You have got lower which contains a set of structures, nothing new. That is all ho-hum stuff.

Anatomic structure is labeled. The dictionary is laid out in a three place table. That is useful. In fact, there are more than three places, three columns. So, we can say that anatomic structure has as a member integumentary system, breast elements, musculoskeletal elements, digestive system, digestive tract; digestive system is made up of a tract and digestive organ, and the digestive tract itself has an upper and a lower. The upper has its structures. The synonyms are noted, and these carry the tags that are shown here.

Similarly for findings, and these go across the board, there is a whole set of findings, constitutional tissue, skin, musculoskeletal, respiratory, neurologic. Under neurologic we have as a member dyslexia, aphasia and phobia. Under phobia we have as a member the two shown here, agro and claustrophobia, each labeled as a finding.

So, with those there pieces let me show you this example from the text that was shown to you at a low-power view. On a 16 haselvayer(?) electrocardiogram showed probable atrial fibrillation, a ventricular rate of 100, with premature ventricular contractions, possible old inferior anteroseptal MI.

The text processor will divide that into its information units or CRCUs and on 16 hospital day an electrocardiogram showed probable atrial fibrillation at a ventricular rate of 100, a single unit of information.

Premature ventricular contraction, a second unit linked with the and, with and, possible old MI, possible anteroseptal MI. So, they are segregated, separated and each then goes through this tagging process, and this is all done electronically. This is tough stuff to look at, but in fact, the processing speed, the page that I showed you it takes with the non-optimized processor that is currently in place, it is something under a minute per page. We think if we can demonstrate that this is going to be a viable approach that our processing speeds can be enhanced significantly.

The assignment of these tags to indicate that this is electrocardiogram showed on the 16th hospital day fibrillation, probable atrial rate 100, ventricular I should say, atrial rate of 100.

The second statement, premature ventricular isolated and tagged with both the linguistic tag and the medical tag, medical sense of those tags, possible old inferior and anteroseptal infarct, old possible inferior infarct, old possible anteroseptal, each one of those separated, segregated, isolated, put into a data structure which is analyzable and retrievable either by tag or by the strings, word strings, the synonyms of which or the equivalents of which are defined in the dictionary table.

So, here we then have a structure in the kind of notes which we hope to process, and what we would like to see or one of the ways or one of the outputs of this kind of process, then we could take that whole page of text, and we could have that data presented to us with the age, gender, time, symptoms, history findings, medications, allergies and what the physical findings were or presented to us in this way for review, isolated into the individual independent information units for review and for access which is quite powerful.

The ability to resolve ambiguity and to deal with multiple hierarchies, hierarchies are obviously an issue we are dealing with. We heard Clem McDonald say yesterday that depression was an issue that he was confronted with, and we learned that there were multiple variations, but if we can counter the word "depression," in fact it has a psychological sense, in fact, if you say depression of the surface, I have a fracture or a skull fraction, and there is depression, it really refers to shape. So, it is a finding in tissue or if I have a depression of a white count or a platelet count it is an amount term. So how do we know, or a change term; how do we know how to assign those terms? If we have SG segment depression quite arbitrarily editorial board has said that that is an idiom. That is a phrasal term. So, we can deal with depression when it is used in this sense, in this context, but when it is presented in text, how do we separate it?

EKG reveals sinus bradycardia, perfectly sensible. We are tagging, isolating, identifying the word class and the medical knowledge within individual tags. Cardiogram is a procedure, cardiovascular electrocardiogram. Reveal is a show term in our world, in the linguistic world. Sinus, in fact, we can talk about the AB, SA nodes. We can talk about the sinuses in the respiratory tract. There are sinuses within bone, and in fact rectal sinuses or fistulas, tissue findings.

So, how do we know given encountering that word? In fact, what we do is look and say, "Look, sinus bradycardia is associated with a cardiovascular finding or a cardiovascular term." So, we know then to select through context, and our language processors allow us to do that.

Structure data entry I don't want to say. I have really gone over that with you. We are integrating, and we think that is equally important and the structured SHML or the health language center will be developing the vocabulary or dictionary tables but, also, a set of structured data entry formalisms that we think will be useful.

So, in summary, then adoptive rules and notations replace for SGML, XML for the medical record, create an architecture for data type structure, and the EMR utilizes the same, the notation, extensible mark-up language. It works with language. It doesn't reinvent it, XML. Provides structure and contextual meaning to a document and it is a self-describing data structure. Subcomponent TDT is defined and rendering can be given in any way one needs. Tags assigned to terms and phrases which are specific for health and specific for language processing and together we think they make a powerful process.

Structured syntax of tags, rules of well- formedness(?) and the style language allows us to render as we will. The element types that are defined so far are what we feel are needed, one having to do with this entire parts of speech for language processing, shortness of breath defined as a noun. The co-occurrences, another set of again language specific issues, shortness of breath is an indic(?) in their world which basically says that it is a sign or a symptom, something is wrong, indication of abnormality.

For us shortness of breath is a respiratory finding. So, the combination of those two processes allow us to capture the information and label it and make it available for review.

This is the DDT for the CRCUs for one of the five that I have outlined. This is this, showing the treatments, showing lab tests, and the detail there is shown to you in your slide.

So, in summary then an echo performed on the CCU, global dysfunction, ejection rate, da da da da, isolated, analyzed and style presented echo CCU and what the findings were is isolated defined entries with their time, with their negation, with their certainty in place.

So, the mission then on the last slide here of the language center and the development of this structured health mark-up language is to define a granular representation of terms and phrases and in a given domain on ambiguously defined clinical concepts, provide adequate representation of the terms and concepts in an easily understood architecture, provide for discrete mapping to other nomenclatures and code sets, and we don't think we are in competition with any of the other applications. Utilize easy, available, inexpensive and widely supported tools for authoring, maintenance and use but inexpensive and widely supported, we are doing the bulk of our work in Oracle but using Excel and Access spreadsheets and to provide this as a non-proprietary standard under the auspices of a private non-profit entity which is the Health Language Center.

I thank you for the opportunity to present.

MR. BLAIR: Thank you, Dr. Rothwell. I think our next witness is Andrea Neal.

MS. NEAL: In my diskette here I will let you know what materials I have provided. I have provided written testimony which basically follows the questions that were posed or the proposed questions, and along with that is a history of the development of MeDRA and then I have a two- page document, copy of both sides with a Federal Register notice of the advanced notice of proposed rule making which I will address.

My name is Andrea Neal, and I am a program manager at the FDA in the Center of Drugs and my responsibility there is to implement the new MedRA terminology into the works of the center.

Not knowing how much of a presentation you had had before on MedRA I decided to provide a brief overview and then I will provide an update. The other things that don't get covered in this powerpoint presentation I have covered in the written testimony, and if I have time at the end I will go over those point by point.

So, MedRA is an ICH harmonized medical terminology that was developed for regulatory purposes, and the ICH is the International Conference on Harmonization. The ICH has the larger goal of harmonizing regulatory requirements and documentation for medical products in the three ICH regions which are the US, Japan and the EU.

It is an international medical terminology. At one point it was actually called that, although now the preferred name is MedRA, and I will go over the acronym. It is the medical dictionary for regulatory activities, and it is a multi-axial terminology built upon a five-level hierarchical structure.

I do have a copy of these slides, and I will, also, give the floppy to the Chair. As you know I probably don't have to spend much time here. I was here yesterday and heard many of these things discussed repeatedly, that there are very good reasons to have standards in place.

I think the situation at the FDA is a little bit unique in that we are a regulatory agency, and so we have input and output, input from the industry which in turn gets its input from the CROs, from research institutions and from practitioners and then our evaluators, whether they be on the post-market side or on the pre-market side have to evaluate the information that is submitted and make regulatory decisions based upon that, and so, the need for a standard terminology I think is great.

There are some benefits to improved regulatory communications, to facilitate consistent coding so that information isn't lost in the translation and to, also, decrease time spent by the reviewers on normalizing data and trying to make comparisons.

One of the other benefits that we see in having a standard terminology is to decrease the submission time for new drug applications and investigational new drug applications, to improve product labeling which is what the provider and the patient see and to facilitate the shift to electronic submissions, both on the pre-market side and the post-market side, and in the end it should, also, improve the ability to perform epidemiological studies on a global scale.

So, when the ICH set to build a terminology that could be used for regulatory purposes their goal was to build from an existing terminology, if at all possible, to focus on international community needs and to ensure that they would be used throughout the regions and perhaps even further by promoting collaboration and participation in the development of the terminology.

It was seen that there would have to be some structures in place to enable translation into other languages and long-term maintenance of the terminology was seen as very important.

So, the terminologies that are included in MedRA, version 2.1 is out on the street, and I will get to that, are the MCAs, medical terminology, COSTART version 5, WHO- ART. It has probably been updated since this, J-ART which is the Japanese adverse reaction terminology, ICD-9 and the clinical modification, as well as HARTS.

Okay. So, in the progression from version 1.0 which is what the expert working group started with to 2.0, there were a lot of changes made although the same 5-level hierarchical structure was maintained. Thousands of additional terms were added and several system organ classes which is the highest level were entirely written all the way down to the bottom.

This is an example of what one thread of the hierarchy looks like. So, if you look at the box that has the numerous terms in it in small print, they are the lowest level terms here and then you proceed upward through the preferred terms. In this case there are five of them. This isn't hanging off of here for any particular reason. It is just my inability to move it with the computer into the higher level terms, the higher level group terms and then the system organ class.

So, again, each of these would have multiple branches off of them as well. So, as far as scope, MedRA covers diseases, diagnoses, signs, symptoms, therapeutic indications, investigation names and qualitative results, medical and surgical procedures and medical, social and family history.

There were some things that were not included in MedRA early from the outset and on purpose because it was too much to bite off in one chunk. However, these items here are not eliminated from future development. It is very possible that in the future these would be incorporated into MedRA and that includes drug and product terminology, equipment, device and diagnostic products, study design, patient demographic terms, device failure terms, population qualifiers and descriptors of severity.

It was designed for use throughout the regulatory cycle. That would include clinical studies, adverse event reporting, regulatory submissions of all types and regulated product information such as advertising and labels.

Now, why are we using it at the FDA? We have a very rapidly changing environment in the Center for Drugs. We are receiving more and more reports on an annual basis. Now, we are at over 200,000 reports per year that we receive from industry and from the lay public through the MEDWATCH program. The review cycle, of course, is faster due to recent regulatory changes which promote a faster review cycle, and, also, there are more complex pharmaceuticals out there and with the population in general aging it is not surprising that we are seeing a quickly growing number of adverse event reports per year, and frankly, if we continued on analyzing our reports using paper we would be drowning in paper.

So, we began using MedRA at the FDA in the Center, for reports both from the Center for Drugs and for the Center for Biologics, their therapeutic line in November 1997, and MedRA is used in the new adverse event reporting system which we launched in January 1998, and that was right when I came on board.

So, currently and since that time, all of the adverse event reports that we receive are being entered, that are entered into the database are encoded in MedRA.

Now, I have provided as a handout a copy of the advanced notice of proposed rule making and this went out in November 1998, and it basically announces the FDA's thoughts that they are planning to regulate the -- or to require the use of certain electronic standards that were developed in the ICH process and MedRA is one of those. So, if you are interested in reading through this and you see M1 mentioned, M1 is the miscellaneous category in the ICH process and that refers to MedRA.

We do at the FDA have a goal of achieving a paperless environment by the year 2002, and I think this is actually a government-wide issue. In this advanced notice we specifically requested comments regarding whether exemptions should be granted to the rule and if so what would the basis of those exemptions be, how industry or other users would see cost/benefits versus the cost burden and what industry and others think about proposed time frames for implementing the requirement to move to MedRA and the other electronic standards from the ICH, and the closing period for this has now closed and at this time comments are being considered and evaluated, but I don't have any other information for you at this point.

I think that you can watch in the near future although I cannot say when. I can never predict anything when things go to regulatory for a notice of proposed rule making to come out that would have responses to the comments that were received during the comment period.

So, as far as next steps, at least with FDA we plan to continue the implementation of MedRA on the postmarket side. The agency needs to prepare the notice of proposed rule making for electronic ADR reporting and have a comment period for that as well before a final rule could be published, and we plan to lend the experience that we have had in implementing MedRA for both the Center for Drugs and Center for Biologics on the postmarket side to work being done on the pre-market side.

This being the nineties, I have listed a number of web sites for people who want to get more information. The first one is the general FDA entry site, then the CDER site. The third site has information about the adverse event reporting system and includes some information about a pilot program we have going on where information is already or postmarket reporting is already being submitted electronically.

The fourth site is the MEDWATCH site and there you can download if you want the 3500 or the 3500A form and they are setting up a provision for reporting to be conducted over the line although that isn't quite there yet, but you can at least get the form on line and then I think fax it in, and the fifth entry there is for the maintenance organization for MedRA, and actually Kathy Huntley is in the audience today with the MedRA maintenance organization, and if any questions come up about maintenance in particular she can answer those.

Do I have a few more minutes? Okay. There were some questions that I didn't address in my presentation. You wanted some information about how the terminology relates to the message format standards, and since we have been slinging around all kinds of acronyms I thought I would let you know that the format standards are an SGML DTD with a UN edifact(?) envelope wrapper around them, and I think everything else I pretty much covered in the presentation.

Thank you.

MR. BLAIR: Thank you, Andrea.

Bill Hess, are you next?

MR. HESS: Yes. Thank you for inviting me here today. My name is Bill Hess, and I work in the Food and Drug Administration Center for Drug Evaluation and Research, the same center that Andrea works in.

I would like to speak to you a little bit about the national drug code. There is a lot of mystery behind the national drug code, and I would like to hopefully explain some of this. I gave this presentation back in March at the ANSI conference downtown.

As a historical background the national drug code was originally established as an essential part of out-of- hospital drug reimbursement program under Medicare in the late 1960s, and at that time it was a voluntary effort.

As the voluntary effort came along it was quickly realized that many firms did not want to voluntarily do this, and a mandatory requirement was then written into the Food, Drug and Cosmetic Act of 1972.

There are three configurations for the national drug code, and I am not sure why these three different configurations were ever instituted, but basically they are 442, 532 and 541. I will explain a little bit about that later on.

The first set of numbers, either the four or five digits refer to the registration of a firm. On or before December 31, of each year every person who owns or operates an establishment in any state engaged in the manufacture or preparation, propagation, compounding or processing of a drug or device or devices shall register with the Secretary, name, place of business, all establishments.

That is the first portion, and the Secretary assigns a registration number. The Secretary, also, signs a listing number which is the last two sections of that code to identify each drug or class of drugs listed under subsection J of the FD&C Act, Section 510.

Here they are applied to certain groups like pharmacies, licensed practitioners and so forth. So, as you see, the list is this part here, and that refers to the, this portion here refers to, as I will show later refers to the drug itself, and this portion here refers to device. The entire three digits, whatever the configuration is is the NDC.

The first segment is the labeler code, and that is assigned by the Food and Drug Administration when a person who owns or operates any establishment registers their establishment. The labeler is any firm that attaches labels, manufacturers or repacks a drug product. A distributor is a labeler that does not actually register but is required to obtain a labeler code. It is a fine distinction in the regs, and labeler codes have either four or five digits as I mentioned before, and these are computer generated when a party is entered into the Food and Drug Administration's developers and distributors system which is a comprehensive database of all names and addresses and telephone numbers of any person or firm that has ever had any correspondence with our center. So, really there is no control over this beyond the computer generation.

The second and third segments of the national drug code are totally uncontrolled by the Food and Drug Administration. They are totally controlled by the firm itself.

The second segment is the product code. That is assigned by the person who owns or operates any establishment. The product code identifies a specific strength, dosage form and formulation for a particular product.

Product codes have either four or five digits depending upon the NDC configuration selected by the firm. They will have a different number of digits that they are capable of assigning if they chose the four versus the five digits as the labeler code.

The third segment is the package code, and this is, also, assigned by the person who owns or operates the establishment, and these identify package sizes, and they either have one or two digits, depending again upon what NDC configuration is selected.

Here is an example in the 1995 NDC directory, and it is color coded to show the numeric and value associations. So, you can see here you have Nizoral shampoo and that is the actual proprietary name and the generic name is ketoconazole and the strength is 2 percent, weight per weight and it is a topical shampoo, and that refers to the middle portion here. Janson Pharmaceuticals is the firm and that is the five-digit configuration. Since they chose five digits here, they could only use either three or four digits in this particular case, and then also, you can see here you have 120 ml bottle the firm selected with a code of 01, and there is probably a difference, a 120 ml bottle that maybe looks a little bit different, may be packaged as something that is given out as a sample to a physician versus one that is available with labeling for the actual consumer when it is dispensed or for the pharmacist and then these are very small units, possibly to be used in a hospital setting where you don't want to have a whole bottle up on the hospital ward.

Okay, and the current edition of the national drug code directory is limited to prescription drugs and a few selected over-the-counter products.

This is a very important point. The inclusion of a firm or its products in the national drug code directory does not in any way denote approval by the Food and Drug Administration and it is very important to remember because a number of people have stated in the past that they feel that just because it is in the publication that it is something that FDA sanctioned.

There are, even before a product is approved it has to get a national drug code if it is to be marketed. So, there are in many cases right around the time of approval products that will have a national drug code assigned to it so that it can hit the market right away after it is approved, and these files are available on the Internet. They are basically flat files which you have to download. They are not anything that is searchable or retrievable in the general sense that you would normally be used to on the Internet, and they are, also, available on a two-diskette format through the FOI office from our agency.

Who is our audience? Our current audience is third-party providers worldwide basically. We don't plan any enhancements for the national drug code, at least at the present time. There is a possibility that we may have to go with extra integers and that would totally change the three different configurations for the national drug code but that is down the road. That is not something that is being planned at the current point in time, but we are running out or we will run out in a few years for numbers. We just have a limited number. What has occurred in recent years is that when a product goes off the market or if a firm goes out of business we will recycle parts of the NDC.

So, conceivably what could happen, there usually is a 5-year period, so conceivably what could happen is 10 years from now you could have the same NDC assigned to a totally different firm and a totally different product that was being used 10 years prior.

So that is a concern, I understand in industry, but that is something which started around 1972, because we were thinking that this might be a problem down the road. It is indirectly mapped to the UMLS through Micromedex, and there is some overlap with some other files that are out there such as the drug product reference file which is an FDA file. That is the file that produces the publication known as approved drug products with therapeutic equivalence evaluations otherwise known as the orange book, and that is it.

Thank you.

MR. BLAIR: Thank you.

Vivian Coates. By the way, we are running late. So, please stay within your time frames.

MS. COATES: My name is Vivian Coates. I will not be talking to you about drugs today. I am Vice President for Information Services and Technology Assessment at ECRI, and I would like to thank you for the opportunity of presenting these remarks today on behalf of ECRI.

ECRI is a non-profit, non-governmental health services research agency and a collaborating center of the World Health Organization for information transfer on medical devices, and for more than 25 years at ECRI we have developed and maintained and continuously updated a standard controlled terminology and coding system for medical devices.

MR. BLAIR: Vivian, could you just define ECRI for us, please?

MS. COATES: Originally it stood for Emergency Care Research Institute and that was about 30 years ago, and that was too narrow a definition, and so we just use the acronym.

I am going to respond to some of the questions from the perspective of a medical device terminology developer and from that perspective we believe that the PMRI must be comprehensive, comparable, accurate, confidential, accessible and retrievable to support analysis from a broad range of perspectives.

It must contain high-quality data to support clinical decision making for the individual patient as well as outcomes research and epidemiologic studies of populations, and it must capture essential data at a level of detail granular enough to prevent loss of critical elements, and as the developer of a highly specialized terminology ECRI would like to emphasize the importance of this need for appropriate granularity.

With respect to the functions that comparable patient medical record information serves, to support data retrieval for health services research, outcome studies, technology assessment, post-market surveillance or any evaluative purpose where it is necessary to look at administrative data concerning the use of similar interventions, comparable patient medical record information must be available. If the information is inaccurate or imprecise or if the data resides in multiple incompatible systems in dissimilar or anomalous formats analysis is at best compromised and at worst impossible.

To illustrate the consequences of imprecise or inaccurate capture of medical device information consider broadly used coding systems such as CPT and HCPCS that do no in most cases explicitly identify devices used in procedures. A procedure may be heavily device dependent but this may not be apparent from the CPT or HCPCS code.

This has long been a problem, and it is a growing problem as the number of devices continues to increase. The problem is compounded because many of the institutional controls that exist for inpatient care do not exist in outpatient settings where increasing amounts of care are delivered. Hospital admissions in a cost containment and managed care environment are often intended to provide patients with access to a specific technology or medical device, and without device-type specific coding it is difficult to track the diffusion of devices throughout the health care system and ensure payment for their appropriate use.

The role that our medical terminology plays is as follows: Our terminology is called the universal medical device nomenclature system or UMDNS, and it is intended for classifying medical devices for the purposes of indexing, storing and retrieving device-related information for a number of different applications. The terms and the corresponding five-digit codes are widely incorporated into publications, databases, information systems used by government agencies, health care systems and facilities worldwide.

The scope of the terminology is that it covers all medical devices, equipment, supplies, disposables, clinical laboratory instrumentation, reagents, test kits, dental instruments and equipment, selected hospital furniture, test equipment and thus defined it includes almost any non-drug item that is used in patient care.

At this time the terminology includes 6500 preferred terms and codes and thousands of additional entry terms. The market acceptance, as I said before, the terminology is in use internationally in thousands of institutions including regulatory agencies, other agencies within national administrative health, hospitals and health systems. It has been translated into 10 languages including Spanish, French, German, Polish, Russian, etc.

It has been used to index and structure a wide range of clinical and technical information and the applications utilizing it range from regulatory databases on medical device adverse incidence to software for inventory control in hospitals, to bibliographic databases such as National Library of Medicine's HealthSTAR database, and recently under a memorandum of understanding with CEN, the official standards body of the European Union the terminology has been adopted as the interim standard for electronic communication and medical product registration among EU member nations.

The nomenclature that ECRI has developed will, also, serve, is, also, serving as the primary source vocabulary for the final European standard to be known as the Global Medical Device Nomenclature or GMDN. It will become the basis, as well for the international standard currently in development under ISO TC 210.

Under a partnership agreement between ECRI and the US Food and Drug Administration Center for Devices ECRI is currently assisting FDA to harmonize its present nomenclature system with ECRI's UMDNS and ultimately with the final global standard to support use by the global community including health authorities, medical device manufacturers and health care payors and providers.

Plans for expansion. As part of our contribution to the global medical device nomenclature initiative sponsored by the European Commission and endorsed by ISO, we are expanding and enhancing the terminology to incorporate terms and codes for European products not currently marketed in the US, for assistive devices and aids for the disabled and for additional in vitro diagnostic test kits and reagents.

How the terminology relates to others, the concepts have been incorporated into the unified medical language system of NLM for the past 9 years and NLM has contracted with ECRI on a sole-source basis to map and link medical device concepts to the other controlled medical nomenclatures in UMLS. Currently our terms and concepts for medical devices are already linked to MESH, ICD, CPT, SNOMED and many other of the source vocabularies, and our terminology is, also, incorporated by reference in SNOMED and now as David Rothwell mentioned in the structured health mark-up language and lastly ECRI has, also, mapped its device terminology to the FDA's product categories and to the device nomenclature developed by the Japanese Ministry of Health.

Appropriate government agencies, we believe, should encourage the adoption of standard terminologies for the patient medical record information without which there will be no comparability. Government funding should be made available to terminology developers to help them allocate the necessary resources to enhance and maintain their terminologies to meet the highest quality standards.

Maintaining and disseminating these vocabularies is expensive, time consuming and labor intensive, especially if the user base is international and multilingual. We feel that there is always a need for better coordination of efforts in this field, particularly when different standards committees may have overlapping scopes of work and for example, we have already seen this with various ISO technical committees, and they may end up taking very different approaches.

Perhaps the National Library of Medicine in cooperation with other appropriate federal agencies could sponsor a state-of-the-art conference on standards, data elements and policy issues for the patient medical record and publish the proceedings.

Thank you for the opportunity of presenting these remarks.

MR. BLAIR: Thank you. I think our last witness is from Multum.

MS. MEREDITH: Yes, my name is Terri Meredith. I am the Director of Clinical Vocabularies at Multum Information Services, a subsidiary of the Cerner Corporation. My responsibilities for Multum and Cerner include codifying terms for drug vocabularies for clinical drug information systems.

I represent both Cerner and Multum here today. We appreciate this opportunity to testify and to provide our views as the National Committee on Vital and Health Statistics prepares its report on the development of uniform data standards for patient medical record information and its electronic transmission.

The Cerner Corporation is a leading supplier of clinical and management information and knowledge systems for health care organizations in the United States and abroad.

We service more than 1000 clients in the United States and around the world. These clients include integrated health delivery systems, community hospitals, ambulatory clinics, physician practices, health management organizations, blood banks, laboratories and home health agencies.

Multum Information Services focuses on the creation, maintenance and distribution of drug information. Multum's clients include the Cerner Corporation and many other private and public organizations.

Over 2000 organizations, including electronic medical record system providers, physician offices, hospitals, insurance companies and government organizations, license drug and drug product information from us. Multum's drug terminology is a reference vocabulary in standards produced by the National Council on Prescription Drug Programs, NCPDP, and is being incorporated into the Unified Medical Language System of the National Library of Medicine.

Multum provides drug product information at multiple levels of granularity to address the business needs of our clients. As you know, in the United States even the most general terms used to describe a drug, its generic name, has not been codified in a way that promotes interoperability. In a health care setting use of the generic name can provide information to clinicians about the pharmacological properties of a drug.

This information includes interactions with other drugs, detection of allergies, recognition of therapeutic duplications and facts about the drug's pharmacology, dosing, side effects and warnings.

Such general terms are not appropriate, however, for describing a drug that is to be ordered, prescribed or included in an electronic patient medical record. The generic drug name must be paired with the strength, the dose form and the route to provide enough information to match the drug prescribed with a drug product that can be administered.

We, like many organizations, in our industry have faced significant business challenges in the area of medical vocabularies. The problem is not necessarily the absence of standards but more often the proliferation of numerous disparate and incomplete pseudo-standards that lack either the clinical depth or the industry-wide acceptance needed to facilitate interoperability.

Current drug information available from non- commercial sources is based only on national drug codes, a system of drug product nomenclature that has serious deficiencies.

Information based on national drug codes is so unreliable and inconsistent from one NDC source to another that standard message formats like NCPDP generally require an indication of the source of the NDC used in the message. Other shortcomings of the National Drug Code system include the absence of a central code assigning authority; the absence of a common meaning for each NDC; the absence of a standard format for an NDC; the absence of a public open process for code assignment; the absence of regular updates to public NDC lists; the absence of NDCs for a large number of products that clearly are drugs such as vitamins and the assignment of NDCs to a large number of products that are clearly not drugs such as medical devices and, also, the NDCs are blended with other 11-digit codes like HRI or UPC in most lists of drugs.

While NDCs may be an appropriate system of nomenclature for certain types of uses like the manufacture, sale and regulation of bulk and packaged drug products, NDCs are not an appropriate nomenclature for clinical practice. For example, when a physician prescribes a drug for a patient and that information is placed either into a printed or electronic medical record, the information concerning the prescribed drug is conveyed at a general level.

An example of this type of information would be penicillin VK 250 milligram oral tablet. This level of drug description is very similar to the description of a drug product used by the Health Care Financing Administration to publish federal upper limit processing for drugs. This general information about a drug that is prescribed currently is not interoperable with any other terminology and can only be sent as a coded message to systems that use the same drug vocabulary provider.

Any other type of messaging must be conveyed as text. Since the dispensing pharmacist ultimately determines the actual product that will be used to fill the prescription order an accurate representation of the NDC based drug terminology can be produced only at that point.

We believe that the health care informatics community needs a reference terminology for drugs, a drug RD in the words of one of our industry counterparts. A drug RT should contain the generic name information, as well as modifiers for strength, dose form and route of administration. Most drug information providers and standards organizations generally agree that this set of terms provides a basis for interoperability. The principal impediment toward development of a drug reference terminology is not what should be done but how and by whom this terminology should be developed.

The establishment of a federally supported set of drug reference terminology would provide a level playing field where all standards organizations and drug vocabulary providers could guarantee interoperability. We believe that the establishment of a drug RT is achievable in the short term over 1 to 4 years. Maintenance and update of the drug reference terminology is not a labor intensive undertaking.

As an example, a survey of the new drug approvals by the FDA in all of 1998 showed that only 30 new generic entities and three new dose forms would require an addition to an existing drug reference terminology. If the need exists, development of additional terminology could be developed over the long term.

The Federal Government can approach the development of a standardized drug reference terminology in several ways. One, the Federal Government could encourage the medical informatics community to create a drug reference terminology. After years of work, this has not come to fruition. If this course of action is pursued, the Federal Government should provide a leadership role.

Two, the Federal Government could create and maintain a terminology. By creating a drug reference terminology and requiring its use in messages sent to government agencies a standard for representing drug nomenclature would be created.

Or three, the Federal Government can create a partnership with the drug informatics community to create the drug reference terminology using collaborative efforts. This method may be preferable because it provides for input from many sources and may produce a well-rounded terminology that takes into account the needs and views of many users and providers of drug-related terminology.

We believe that the establishment of a standard drug terminology is essential to the successful deployment of electronic patient medical records. We believe there is no code set problem posed by HIPAA that would benefit more from strong governmental leadership. We urge this Committee and all federal agencies involved in the implementation of HIPAA to work toward the selection or creation of a standard reference technology for drugs for the benefit of all drug- consuming Americans.

Thank you.

MR. BLAIR: Thank you, Terri.

The testimony was excellent from all of the panelists, and I am assuming we will probably have questions from the Committee.

DR. COHN: Terri, I am going to start with you. This is actually a question more about Multum which I understand is, also, a drug terminology as best I understand.

MS. MEREDITH: Yes.

DR. COHN: Can you explain to me we have heard about NDC and we will look at them in just a second, but can you explain to me exactly what is in the Multum terminology and how that differs from the proposed drug reference terminology or the NDC that you have just been talking about?

MS. MEREDITH; Actually Multum's terminology now as it exits is very similar. We have different levels of granularity, but we do present a product description much like what I describe in here and have codified that and of course, we are all for-profit companies and you know, there should be a term that we can all meet with commonality so that any person who referenced a term, no matter who the vendor of their information was would be able to find a meaning for that term.

DR. COHN: Okay, let me just follow this along? I am presuming that you are not offering to get out of the business and turn over all that you have, everything over to non-profits.

MR. MAYES: HCFA licenses for public use for free Multum's drug list. The name differences just as a user I see because we looked at NDC and we had big problems with the way it was structured because it really was made for codifying manufactured packaged drugs, and it is not, it was never meant to be a clinical vocabulary for drugs and if you are trying to use it in that sense it really doesn't work very well, but just to Multum's credit as far as their basic vocabulary it is available on the web site which is updated daily, I believe for the benefit of anybody who wants it.

DR. COHN: Okay, let me finish my question then? What I was asking was what you perceive as if this were produced the value added that would allow, I mean presuming that you would see your company staying in business and producing some add ons to this base vocabulary; is that correct?

MS. MEREDITH: The value to, right now if someone adopts the terminology where they can use a term just like the NDC system I cannot tell you from the FDA's set of the NDCs which NDCs actually represent a penicillin 250 milligram tablet. It is impossible for me to do. I have to have this common term to find it, and of course, like I said, all the vocabulary vendors do produce this term, but they represent the term in a different way, you know, and there is no way to jump from if you are using one person's terminology, there is no way to jump to the next one, and what I am proposing as standard reference terminology is to provide a bridge so you could go to any terminology and find the same meaning.

DR. COHN: I guess I understand that. I was just wondering whether your company was going to -- I presume you concern that you have proprietary add ons to whatever base terminology you have that you consider to be a proprietary value that you would continue to market.

MS. MEREDITH: Yes, the basic drug NDC and drug names are free of charge. We offer lots of advanced clinical services that detect allergies, interactions, warnings, side effects.

DR. COHN: And those you would continue to sell?

MS. MEREDITH: Yes.

DR. COHN: Okay. Now, can I ask a question of Bill Hess who has obviously been here representing NDC? I think Terri and I think others yesterday have indicated that they viewed that there are significant problems with NDC. You have obviously indicated you have no plans for any enhancements whatsoever.

MR. HESS: As far as NDC, NDC is written into the Food, Drug and Cosmetic Act. So, NDC is mandated, and we are basically coding NDCs to carry out that mandate, and there are a number of relational databases at our center that are maintained that not only are used to basically present the NDC and are behind the NDC but we use those to produce other publications like the orange book.

Every drug that is currently on the market and even ones that are off the market that are discontinued, ones that are in development, they are all in FDA databases, and they all are coded with standardized nomenclature for the drug's proprietary name, for the drug's generic name, for the dosage form, for the routes of administration. All of that is already in the database. It is just a matter of what you ask for when you go to retrieve this. If you are just asking for NDC, you could have some problems because NDC, the numbers change. They are recycled sometimes although very seldom, and there are different configurations, and that is confusing, and the numbers themselves don't mean anything. I mean it is totally arbitrarily assigned either when a firm has generated a label or code or the latter two portions of it when they are assigned by the firm for the actual product code and for the package code.

So, you can get that information of the NDC number but it was never designed to get that information out. It was only designed to identify a product, period.

DR. COHN: I just want to follow up which has to do with the issue of regular updates. Can you describe the updating process?

MR. HESS: The actual file that maintains the national drug code is updated on a daily basis. The national drug code directory I think the last time it was published was back in 1995. Prior to that I think it had been a 10-year period from the time it was published before that.

The original plans were to be published in hard copy format every 5 years from the 1972 point. That didn't come to be because of resources. Since it is available on the Internet in downloadable files, basically there is no need for a hard copy, and I believe that they are going to have a retrievable format on the Internet and a searchable format in the next few months from what I read.

DR. COHN: So, you would disagree that there is an absence of regular updates, and you receive these things on a daily basis from the manufacturers?

MR. HESS: What is actually on the Internet, I am not sure how often what is on the Internet is updated but the actual files are updated on a daily basis and if you need a current copy of those files you should ask for a current copy and the FOI office can provide that to anybody who asks. So, it is available basically on a very up-to-date basis.

MR. MAYES: Their internal systems are updated continuously. The problem with FDA's system is that they are not made for ongoing external use. You have to request a copy from their information office to get it. There is no mechanism at the moment and NDC is only one of their databases and you have to know exactly what database you want and what questions you ask. There are a public agency. They will answer your questions as long as it is not proprietary information, but they are not in the business of supplying information in a commercially usable form.

HCFA talked with them, and we were looking around, and we just even as another federal agency, it wasn't that they didn't want to give us the information. They just don't have it packaged in the way that you would think of in terms of broader use. Would that be fair?

MR. HESS: That is absolutely fair, yes. We are not, like you said, Bob, we are not in the business of publishing this information unless it is mandated by law, and if it is not mandated by law there is no reason to publish it.

MR. BLAIR: Louis Kun, do you have a question?

MR. KUN: Yes. I have a question for Dr. Rothwell. I was thinking in terms of listing globally from a standards point of view. You were comparing, for example, structure, health market language versus NLP and you talked about syntactic and semantic versus based on medical knowledge. I was wondering what happens when you are having a study that was made perhaps in a foreign language and you use perhaps automatic machine translation and when the terms are put in the wrong order between the tags, if you can make perhaps a wrong diagnosis or if you could get wrong information because of the ordering of that information?

DR. ROTHWELL: Foreign language represents a significant issue. I don't think there is any doubt about that in anybody's mind, and it is simply not a one-to-one, as you are suggesting term-for-term mapping. We have people in the European Community in the XML community who are working with us and who are -- I am not going to address. They are going to address that problem, but it is a known, and I am not going to say that it is understood; it is recognized and it will be addressed because they are as anxious as we are to develop this technology, and we are working as partners with them.

MR. BLAIR: Dr. Kolodner?

DR. KOLODNER: I have a question, also, a follow up on that as far as the SHML. How adaptable and flexible is it and for example, can you represent acupuncture information or other kinds of things in it or do you set the structure and then you have to adapt?

DR. ROTHWELL; Those terms are placed in our dictionaries, defined. They are assigned their tags, both linguistic and the medical sense or senses under which those terms should be used from which you would like to retrieve those terms. So, yes, it is inclusive. Our attempt is to do that. How far along are we in this process? As I indicated at the beginning, it is a work in progress. We think we are going to be ready in 6 to 12 months we will have something that is ready to look at, to use or to have evaluated by the community, but we are working very hard at the present time revising our tag set, that is the way in which data terms, phrases are characterized, and I have only a working set. I am sharing it with our working group. We are a little bit reluctant to -- I will give you the broad categories, certainly, but those are as I said a work in progress.

MR. BLAIR: Dr. Cohn?

DR. COHN: Dr. Rothwell, we heard testimony at our last set of hearings from HL7 when they were talking about I think it was recent record architecture activities. Can you explain to me the relationship of what they are doing to the best of your knowledge anyway to the work that you are doing and are they --

DR. ROTHWELL: It is perfectly complementary. The work that they are doing, Bob Dolan, obviously from your organization and Rachel Sokolosky who is working with us by the way, the issue they are addressing is the architecture of the record, which pieces of the record go together, family history sections and other components.

The textual information that is contained in one of those segments is the issue that we are addressing. We are going down into the, as I indicated the binary large objects, the so-called "blobs" that are indicated. That is really where the heart and sole of the information in the record is.

So, the current HL7 initiatives are addressing architectural issues. We are addressing the semantic, the content issues contained within those sections that are outlined by their architecture. So, it is perfectly complementary. It just simply takes their work and provided depth to it, and obviously a principle with lots and lots of advantages we think would be that it has got a built-in messaging structure. It in fact allows the electronic record, the web-based record to exist. We don't think you know, we looked as we say at these new technologies, and the web is clearly going to be the heart and sole of how we do business in medicine we think in the future with collaboration, cooperation, participation of both the providers as we know them traditionally in whatever setting, nurse, ultimate care of a patient and this provides the mechanism. This provides the tool. This provides the platform under which those kinds of information can be passed among us patients, us who have need.

MR. MAYES: Commander Neal, just a quick question, is there a formal information model behind MedRA, is there a structure, a formal model?

MS. NEAL: If you are talking about the hierarchical structure, yes, there is one that exists.

MR. MAYES: And it is available from one of the sites?

MS. NEAL: They have been adding to their site on a practically daily basis. So, yes, if you hit the MedRA, MS and SODOTCOM(?).

MR. BLAIR: Michael Fitzmaurice and then me.

DR. FITZMAURICE: I, also, wanted to ask Andrea a question. I am picturing a physician working for a health plan and an adverse drug event comes up, and so, the large health plan says, "We want you to code this is SNOMED," and FDA says, "We want you to code this in MedRA," and the physician says, "I have got to learn both these code systems." Is there an easy mapping from one to the other without having to go through UMLS to do it; is it pretty easy to do it in both?

MS. NEAL: At this point in time to my knowledge there is no mapping between SNOMED and MedRA although we are having some discussions with NLM to MedRA into the Metathesaurus, but I would imagine that other efforts could be made to facilitate that type of activity.

DR. FITZMAURICE: When physicians do report adverse drug events do they do the coding?

MS. NEAL: Basically what is happening right now is if a physician reports directly to the FDA they report on a MEDWATCH form and our coders who are on a contract with us code the narrative into the MEDRA(?) codes which go into the database. If on the other hand that provider chooses to report directly to the company, the manufacturer of the product, then actually what is happening right now is that they are sending us the form and we are coding it, but what is envisioned to happen in the very near future which is partly why we are published proposed rules, etc., is for the industry to actually submit the report electronically precoded so that it basically enters our database directly and we basically would only have to do QA on a sampling of those reports.

DR. FITZMAURICE: By industry do you mean the drug company or do you mean the physician that discovers it?

MS. NEAL: The drug company.

MR. BLAIR: Okay then maybe this is getting very close to clarifying my question. So, you are really looking for reports from the drug companies of drug-to-drug interactions or adverse drug reactions. You are not really looking for MedRA codes to be implemented with an electronic patient record that a physician or clinician would directly use. Is that correct?

MS. NEAL: That really wasn't the purpose for MedRA being developed although we are intending for MedRA to be used on the pre-market side meaning in clinical trials. So, however the conduct of the conduct of the clinical trials fits into primary care practice that could be the case.

MS. BICKFORD: Carolyn Bickford, American Nurses' Association. I have two questions in relations to the drugs and pharmaceuticals. It seems to me that as we are moving into alternative therapy, complementary medicine that we are missing a whole product line that is not captured in the manufactured pharmaceuticals and similarly we are, also, missing the environmental chemicals, toxins, exposures that may not be manufactured and may not fit in this product line. So, I am inviting the Committee to consider these alternative compounds, entities for consideration as part of the health record. I don't know how we are going to be able to accomplish that. There are two groups of products that aren't necessarily encountered.

MR. BLAIR: Simon, were you able to capture that comment? Carol, I am sorry, could you restate that? I think some of us lost that.

MS. BICKFORD; Okay. I am commenting on the fact that we have been talking about the manufactured pharmaceuticals but are not addressing the alternative natural compounds or uncontrolled FDA products from that perspective, and there may be some other mechanical devices that aren't fitting in that arena but also, we have not talked about the environmental toxins that don't fit in those categories, exposures from the occupational health perspective that may need to be categorized as we are looking at the longitudinal health record and occupational exposures. A perfect example is the Gulf War syndrome and the folks exposed to Agent Orange. It wouldn't necessarily be accommodated in the structures reported here. So, I invite consideration of these two groups of entities that we haven't necessarily classified and may not be classified in any structure.

MR. MAYES: Both excellent areas of interest. I cannot address the first which is non-regulated drug products. However in terms of environmental work actually there is quite a bit of work being done by the Environmental Protection Agency.

There is very interesting work on the global, GMAT is the acronym, global environmental something or other thesaurus. So, there really is a fair amount of work being done in the broader context of environmental compounds or what have you, and there is certainly a lot of activity that I think we will be seeing as the result of some recent legislative requirements to begin to look at bioterrorism and that sort of thing. CDC is quite interested in that. So, although it wasn't discussed here there is actually quite a bit of work, very formal work in the lexical sense and in the meta data sense being done on the environmental side.

MR. BLAIR: I have got Robert Kolodner and then Simon.

DR. KOLODNER: A question for Commander Neal. I guess in view of the presentation that Chris Chute did yesterday and I don't know if you were able to see it, he talked about starting with a core set of highly granular data and then rolling it up for special purposes.

What I am hearing you say is that MedRA really is a roll-up of that data for special purposes rather than a new primary vocabulary that you are wanting the on-line clinicians to use. Am I understanding that correctly?

MS. NEAL: It certainly was developed for regulatory uses, yes, but the terminology covers a vast expanse of procedures and symptoms and signs and diseases and diagnoses.

It isn't just an adverse reaction terminology.

DR. KOLODNER: Obviously a lot of things are occurring in parallel, but with the recent announcement of the merger of the SNOMED RT and the NHS REED(?) codes which has a lot, I would expect has a lot of the detail that you are talking about is there any plan rather than moving forward with MedRA and announcing a new terminology or new way of reporting of looking to see whether there might be something that could fit into what the primary care --

MS. NEAL: I can say that at this point in the ICH there isn't any plan to not move forward with MedRA. The recent mergers are very new information I think, and certainly the REED and the SNOMED weren't to the point where they could be usable at the time that MedRA was, people began development on MedRA.

MR. BLAIR: We are running about 20 minutes late. We have the last two questions which will be Simon and then Michael Fitzmaurice.

DR. COHN: Actually I have a question, also, for Ms. Neal. Since MedRA is being planned for pre-market use and you had mentioned drug trials which of course immediately puts it into patient medical record information and will probably put it relatively widely into either large hospitals, physician offices, potentially and elsewhere, can you explain to me is this a not-for-profit offering I mean in terms of the pricing and all of this? Will this be, I mean who is going to, how much is it going to cost people to use this thing in the pre-market environment?

MS. NEAL: The terminology was developed, of course, in the ICH process through folks who were engaged, you know, employed elsewhere other than, you know, specifically MedRA development but because the need for maintenance of the terminology was seen to be so great the mechanisms were put in place early on to provide for the maintenance of MedRA and if you will look at the last page of the questions that I responded to in the testimony that has a time line of MedRA's development, and you will see in there notice of a call for tenders for maintenance and support services organization, and so that entity has been selected. It is TRW working as the lead of a five-member consortium of companies that is now set up to manage the maintenance, the distribution of MedRA, and that is a very new thing; just in March of this year MedRA became available for general use.

DR.COHN: Is it planned that the charges, I presume this will not be given away to people?

MS. NEAL: Right. Cathy can probably address charges. Do you want me to go ahead? If I recall the base there is a base subscription level of 2600 and the core subscriptions levels are basically set up for industry. It is a sliding scale based upon gross revenue, anywhere from I think 5000 up to 40,000 depending upon the size of the company, the structure, the subsidiaries, etc., and I think all that information should now be up on the web.

So, that is how MedRA is set up. They wanted, the management board specifically wanted to have a provision in place so that there would be an affordable version or an affordable way to get access to MedRA so that institutions, hospitals, etc., could get a copy of it and use it without having all of the sort of provisions that are available to core subscribers which includes the ability to request changes to the terminology, addition of terms, having help desk support available on a 24-hour basis, etc., things that are very necessary if you are a company designing or testing a new drug and you need a quick turnaround time, but things that probably aren't really necessary for libraries and institutions.

I hope that helps.

MR. BLAIR: Michael?

DR. FITZMAURICE: I guess I get the honor of the last question of the panel. The question is this one. What codes should a computer information system vendor use or have physicians use when ordering drugs or devices electronically? I am thinking that there is a linked set of codes for reporting laboratory results and that passes information very nicely. Is there a single leading candidate for drugs, a single leading candidate for devices that physicians should be able to code that on an order slip or type that into the computer or should be behind the text that would transmit the information very efficiently?

I have heard problems with existing drug sets. We heard from one device. I think we probably didn't have room to have the FDA device codes be accurately described here but in your opinion what should a vendor do? Should they use the NDC codes for drugs? Should they use another set for drugs and what set should be used for a device, just kind of a fairly brief answer but in your opinion?

Bill?

MR. HESS: If I could answer that very briefly, right now every pharmacy and every pharmacy vendor across the United States uses the national drug code in their computerized databases for ordering and for filling prescriptions.

DR. FITZMAURICE: But that is when they know what the package is. The physician sitting here at his desk says, "You ought to take this," and writes a couple of things out.

MR. HESS: It would be very difficult for a physician to determine that because basically that information is on the actual package itself and in the databases. If you somehow got the databases into a physician's office which very well could be a possibility and in hospitals I guess it is, they could go through and they could select it but just based ont the code itself that doesn't tell them anything. They would have to have a code that is associated with more descriptors. It would have to be cross referenced somehow in what they look at.

MS. MEREDITH; Just one more thing I would like to add is when you are choosing an NDC code, using an NDC code to choose a drug you are choosing from a very large pond. As an example, the penicillin tablet example that I gave earlier, there are 76 different products, 76 different national drug codes that represent that same exact product. So, the physician, of course, is tasked with the extra work of digging to find the exact or something to represent the concept of penicillin.

DR. FITZMAURICE: What about from the device side?

MS. COATES: From the device side the ECRI terminology was actually originally developed for that purpose for organizing information about medical devices to support procurement in hospitals, but you have to understand that the level at which the manufacturer's device is characterized is at a lower level. It is at the individual product level. So, while there may be 6000 generic categories of products which is the level that our nomenclature encompasses, the individual product descriptions are at the level of, at a lower level comprising maybe 1 million different products out there.

So, it is a systems issue that the classification and the coding and the descriptors generically widely utilize what we have developed, but then the manufacturer will have a product catalog, and we have an effort under way to map from trade names and product catalog descriptors to the next higher level which is our terminology level.

DR. FITZMAURICE: Is this a case for a uniform product number to be assigned to all of these devices?

MS. COATES: Yes, the UPN number would be at the level of the million different products that are out there and then we would take that and --

DR. FITZMAURICE: Run a classification system?

MS. COATES: Yes, exactly. So, you really need both.

MR. MAYES: Did you mean prescribe or purchase when you said, "Order"?

DR. FITZMAURICE: I am sorry, I meant prescribe.

MR. MAYES: Prescribe. That is different. I think they are all thinking purchase when you say, "Order."

DR. FITZMAURICE: It can be both because if you have identified what you are buying then you have, also, identified what you are prescribing, but if you the decision to prescribe is made at one place and the packaging is done at another place, then you have a problem with mapping.

MR. MAYES: But generally prescription is different than ordering, ordering meaning purchase.

DR. FITZMAURICE: It depends on whom is unaware in the health delivery system when you are talking about the order, whether it is a pharmacy ordering it from the manufacturer or whether it is a physician ordering it from the pharmacy.

MR. MAYES: Prescribing or treatment.

DR. FITZMAURICE: Exactly.

MR. BLAIR: Folks, we are almost 30 minutes behind schedule, and the reason that we are is because this last panel was so informative that we really allowed the questions to run because there was just so much important information to be gotten.

So, could we consolidate our break to 10 minutes before we reconvene for our Committee to go through our piece?

(Brief recess.)

Agenda Item: Discussion: Next Steps

MR. BLAIR: We have a hard deadline at 5 o'clock, and we are going to make sure that we get to two major items. One is to collect our impressions, the issues, concerns, observations of what we have learned during these last 2 days, and the second item is to quickly go over our first draft of the agenda for our June meeting.

Simon, are you here with us?

PARTICIPANT: Simon is not here yet. Jackie is going to go get him.

MR. BLAIR: Okay, as we were saying, there are two major items that we have to complete before we get out of here at 5 o'clock. The first one is to capture all of the concerns, observations, ideas, what we learned during these last 2 days and the second is to go over the first draft of our outline for the June meeting.

So, both Michael and Simon have been capturing a lot of thoughts on the system. They are maybe better prepared than, well, certainly I am to review these ideas.So, if Michael goes through the ideas first that he has captured, Simon will then go through he has that maybe Michael missed, and the rest of us, if they have missed anything, then the rest of us have our chance to add to the list.

So, Michael, why don't you take us through first?

DR. FITZMAURICE: I have about eight pages of notes, and since I cannot see a whole page at once on my laptop, let me run through parts of each presentation and glean some points from it.

I am looking first at Jim Cimino's presentation, and that was outstanding I thought in terms of what quality should a classification or terminology have and what qualities should it not have. I won't go through them all, but it is things like don't limit the breadth or depth of hierarchies. I think those are principles that we can distill and will help guide us.

The next presentation, Chris Chute, what is needed for success and the marketplace and what is the central role of terminology? What I learned from Chris is a lot of collaboration is beginning to take place, and I talked with some other people in the audience, and they said that they went to the Terminology II conference, and they sensed as well that there was a greater synergy among people willing to share and work together than they had felt in the past. So, that is the main take-home point from Chris' presentation that I got, but there was an awful lot there.

Let me see, where are we going next? Mark Tuttle, Keith Campbell. Keith was good, also, like Jim Cimino with lessons learned. What do you do and what do you not do? What does a terminology system have to be? It has to be scientifically valid, well maintained, self-sustaining and have a scalable infrastructure and process control. So, I think there are good lessons to be learned from that.

Keith is used to working within a large organization that has to solve problems and has to deal with principles like these, and since we are looking for information that is useful for the country in this report that, also, carries over very well to us.

MR. BLAIR: Just so that we are consistent when we refer to both of these, Jim Cimino refers to his list as a desiderata and I thought that Keith Campbell referred to his as basic principles. Is that right?

DR. FITZMAURICE: I think he referred to them as strategic something, strategic incentives. I don't see the term right here in front of me. It is probably further down on the screen.

Next, Mark Tuttle, different than the rest. He gave us a scenario whereby would we have an airline industry today if the government hadn't marginally subsidized e-mail, and so what he got me thinking about was should the government identify something that is useful at the margin and pay for it providing funds for development of information systems and the uses of codes and terminologies in doing this, and so, one of my take-home lessons from that is go back and think of things like if we are sending claims in do we pay extra for quality measures being added onto the claims information or clinical information from which you can draw quality measures; do we pay them 20 percent more or 25 percent more to supply that information?

You want to start with a pilot. You want to see if that is useful, but it may be a way of getting information we need for quality improvement and at the same time generating the development of information continually.

Next we went to the statistical classification and code sets. I am going to try to go a little faster. This could take all day.

All right, from a lot of these presentations on the clinically specific code sets I learned that there are some new and exciting things coming down the road. I liked the very simplistic, the DICOM(?) and the geographic mapping and the arrows pointing to here is where you really have the problem. It seems so simple, but technically it must be so hard to impose that on a digital image, and then to keep them separate if you need it for medical evidence, but they are doing it, and I think that is a great breakthrough, and Dean says that it is, also, useful for wave forms as well as for radiologic images.

Let me skip. Some smaller messages, industry wants to have RBRBS next to the procedure code, doesn't want to have to negotiate on a code-by-code basis for payment. We are always looking for efficiency in the uses of codes and terminology, always looking for the business purpose for something to stay out on the market and be supported in the private sector.

Some of the codes we don't get reported in the record because a physician doesn't get paid extra for doing something. It is all bundled together and so whether a physician gives a subcomponent or not for a particular medical service we might never know. It might never be in the medical record if it is not a basis for payment and not deemed clinically important at that time.

Question authority. The people who testified were good at giving us what should government actions be. Often it was support our own code systems, our own terminologies, support and make available freely some of these codes and terminologies, make cost effective choices, don't study things to death, make a choice and then go with it.

Okay, moving down, the market is becoming more aware and interest is picking up at encoding. Peter Goltra told that to us in his presentation from his trips to conventions, and that is where I think you see it. You see people kicking the tires at conventions, but we don't always know what people are buying. That happens after a convention.

Where is the funding for standardization of patient medical record information was another question that came up a couple of times.

People must see the value of data for care not just for payment requirements. Yet it is the payment requirements and the cost effectiveness that is driving a lot of the medical information being distributed today. Is there a way to bring it together? Devices can record and disseminate a lot of information. It is nice to know that standards are being developed for doing that. How much useful information doesn't find its way to the medical record from devices, and since you cannot record everything that a device can pick up how do you triage that information? How do we make the information that does find its way to the medical record more valuable?

Banana test parameters. I don't exactly remember the reference to that, but I wrote it down.

MR. BLAIR: That was IEEE.

DR. FITZMAURICE: Yes. Elmer Gabrielli talked about the watermelon. I paraphrased his watermelon, but he talked about taking all the textual material and parsing it down, getting the basic information out of it which is a different way than most all the other organizations have attacked it because most places don't have an unlimited amount of funding and cannot do the whole thing nor do they have the expertise to do the whole thing, but his presentation gave us a new model for looking at patient medical record information.

Do we look at the whole thing and figure out how to parse down to what is most important or do we continue to go along specialty by specialty, procedure by procedure and build the slices of a watermelon?

Nursing codes and diagnoses, what I learned from the nursing codes and diagnoses is that there are lots of different nursing coding systems, but I think they could fit together well, and more is needed for doing research into how they fit and what difference they can make in the care of the patient. If you have care paths how do you look at variances from those care paths and look at intermediate outcomes, not the patient had something and went into the hospital and then 6 months later the patient is still alive but in the course of the hospital, say, in the course of home care and care at other sites what makes the difference? We have to measure. We have to assess the patient and then take measurements along the way as the patient moves from onset of something to getting back to a normal life or as close as possible.

The nurses seem to be doing a better job of coding those facets of patients and patient recovery than anyplace else.

David Rothwell is doing what I guess if I had thought about it I would have liked to have seen really happen and now may happen, merging XML with the structured language in a way that you can look at the representation of it and see what it is about very quickly. That may be an ordered way of looking at patient information.

I did want to ask him can a machine do it like Elmer Gabrielli's machine or is this done by humans coding things for the first time and then when you find repetitions of it they get coded, as human coded at the first time. I didn't get a chance to ask that question. I wanted to know from David how he resolves ambiguity in free text. The reader does it or the human does it?

MR. KUN: My belief is that when he was comparing NLP with SHML that SHML was, he was showing, and I think it is in one of his charts that he was breaking that ambiguity. So, actually I think it is technology not man that has to do it, rather by using that particular language. We are avoiding it; therefore the system is doing it.

DR. FITZMAURICE: In looking at the drug codes and the device codes though we didn't have all the device codes here, but there is a big puzzle. The puzzle is how do you get the information about what a doctor prescribes, what a pharmacist gave a patient and how the patient took the drug to the emergency room doctor who has to deal with an adverse drug event. If what the doctor prescribed, he doesn't know what the pharmacist ordered and the pharmacist doesn't know how the patient took the drug, all of that can be important information when you are in an emergency room, and it is in somebody knowledge base. It is in the physician's knowledge base, the pharmacist's knowledge base, the patient's knowledge base. Maybe there needs to be, patients need to consider keeping a medical record at home so that this information about compliance with drugs can be better thought out. Maybe we should pay patients, a sample of patients to keep this information to find out if it does have value.

What codes should be used for moving this information around for drugs and devices? Still in a quandary, I think maybe we don't have them yet.

Those are my highlights based upon reading a screen. Simon, you may have the same difficulty that I did. As you go through a screen something catches your eye, but you are not sure that it is the most important point because you cannot see the whole page or the whole testimony.

DR. COHN: Yes, I guess I am going to suggest a process, Mike, also, that, and I have like you about eight pages of notes. What we will do is to give you copies. I will give you copies of mine.

DR. FITZMAURICE: I will be happy to give you copies of mine, too.

DR. COHN: Anybody else around the table or elsewhere who had any notes about lessons learned, issues identified, I think we would ask you to give copies to Mike so that he can begin to synthesize them.

Probably my focus since I didn't have to really take notes was a little different. I was looking primarily at what I thought might be major issues. Actually I was looking for a number of things. What I was looking for was actionable items or pieces that we needed to somehow reference or think about as we moved on, recognizing that at some point we are going to have to make recommendations to the Secretary and the more actionable those recommendations are the better we will be, and I was, also, looking at pieces that referenced various focus areas, and so that was really more my slant rather than trying to do a comprehensive view of what happened over the last couple of days.

I observed from Dr. Chute's talk that there was at least some information related to business case and the value of terminologies that we probably will want to reference at some point in comments to the Secretary and certainly since Kathleen Fyffe isn't' here, but I know she is very concerned about business issues that we make sure that we are actively collecting the business cases around why it is we are talking about that.

Dr. Chute made a comment in terms of his report from his most recent national terminology conference that there was a view of fair recovery of costs which is probably an important issue in the development of terminology, and I had noted a whole number of different business models in place for fair recovery of costs.

Some of them have to do with the actual selling of terminologies. Others have to actually do with uses where terminology may be free, but we are looking to the government to subsidize. So, in reality there is nothing free. It is really more a question of are you looking to the marketplace to fund the development, enhancement, maintenance of the terminology or are you looking for federal grants to do it, but it all is the same sort of thing in the sense that there is a fundamental philosophy t that there needs to be recovery that people aren't over the long term going to do this completely as a volunteer effort and completely for free and that there is no money ever needed for any of this.

Dr. Chute mentioned that there were a number of tasks involved or at least reported from the national terminology conference. One of them had to do with fully engaged payors and providers and terminology issues and had an issue creating demand. Terminologies are getting better but somehow there still isn't the market quite for them.

There was, also, a need to begin to think about this issue of terminology and terminology structure and its relationship to the information model, and I think we saw there was mention there, and Dr. Huff further emphasized that issue where he was talking about the ability of HL7 to capture data but that because of its permissive nature you could put the data in in any number of ways, and I think he had an example talking about RH and blood types as an example of that.

I think in the discussion we talked about the issue of how one was going to go about all of this, and I think it was observed by at least one of the speakers that there needed to be sort of a combination of a top down framework approach as well as a bottom up approach to make all of these things happen.

There were a number of people who mentioned the need for conferences to support more consensus development and obviously the issue of funding for maintenance.

I actually thought that Dr. Campbell's, I wasn't sure whether the principles and maybe they need to be referenced in our principles discussion around terminology, this issue about scientific validity well maintained. I, also, captured that that it needed to be vendor neutral.

DR. FITZMAURICE: Strategic imperatives.

DR. COHN: Strategic imperatives, and without delving into it, it probably needs to be referenced when you talk about principles of how we are going to proceed forward and principles we might recommend to the Secretary.

Now, in terms of ways that things are going to move forward I think many people felt that the government had a role, and one of the roles that the government had to do was to obviously adopt and start using standards that it thought were worthwhile and as well as that there needed to be collaboration.

Some felt that there needed to be some last minute infrastructure and that there might be a need for the government to come up with strategies to help encourage use and certainly Mark Tuttle came up with a very innovative proposal related to encouraging use, but there were many other people who came up with other views of how the government might encourage adoption of standards.

In presentations about ICD-9, ICD-10 and CPT I certainly came away feeling that there was, that they really fell into the areas that we were describing, that neither of them is solely a statistical coding system but that they both have clinical usages and so we need to sort of consider them here.

DR. FITZMAURICE: Say that again, Simon?

DR. COHN: I heard that representatives from ICD and CPT that they are not just solely statistical database systems but they actually have significant clinical uses, and I think therefore merit discussions. We talk about patient medical record information.

Since we are really talking about the issue of comparability I did hear in the area of data quality that a significant impediment to data quality had to do with the issue that various payors did not necessarily follow official coding guidelines and while I tried to push them to see whether the issues related to reimbursibility, I still heard that there were fundamental issues having to do with official coding guidelines, and that may be an issue that we may want to get into as a work group, recognizing that it is still a payor issue about whether or not various codes get reimbursed, but that it is certainly a national issue if the terms themselves are being perverted based on different ways of interpreting what the codes mean by payors.

DR. FERRANS: May I add something to that? I heard them say that they do have it for clinical, that they do use it for clinical purposes, and I heard them say that the level of what it is designed to be used for clinical purposes, I heard them say that it was fairly high level and sort of say that given Dr. Chute's research on the limitations of its use for clinical purposes, sort of I heard saying, "Well, that is true, but that is not what it was designed for," and so that shouldn't be surprising, and I think what you are talking about is the piece about it is not so much that the government is using them, it is the additional use for purposes it is not designed for perhaps as a way of phrasing it as a quality metric in the absence of anything else.

DR. COHN: I am not sure that I am, I don't think we need to debate whether ICD and CPT fall within the spectrum of concern by this work group today. I think it is something that we will have to talk about.

I guess my own sense after listening to testimony was that I concluded that it did fall into the areas that we want to --

DR. FERRANS: Oh, it does. That was really my main comment on that.

DR. COHN: Let me see what else as I am moving along here pretty quickly? I heard in relationship to nursing terminologies that they are actually having a summit in June, and actually I think I speak for the work group where I say that I think we are going to be looking forward to hearing the report from that summit and hopefully that there will be some recommendations there that may be things that we want to consider seriously.

Let me keep going here. I think the good news is that we saw a lot of redundancy in terms of recommendations. Certainly Dr. Gabrielli and Dr. Rothwell I was going to comment that I thought that they were doing much the same sort of thing. They both highlight the importance of when we are talking about patient medical record information that there are many ways to get information into a medical record. I mean one of them is using a structured entry, but there is, also, sort of the back-end processing that has to do with taking free text and turning it into something that is comparable and useful, and I thought they both had very innovative proposals and approaches to doing that piece and of course, as we saw, Dr. Rothwell was, also, approaching the other piece, taking the information and moving it, also, into some structured entry, but certainly they both highlight the importance of free text as an issue.

I heard from Ron Jordan about the importance of information from the pharmacy and I think we need to investigate that area a little more to understand best how to put it into the overall framework, and as I go down to Page 8, I think I come away still with the question about where NDC and Multum and other pharmacy terminologies, not pharmacy, drug terminologies all exist and where they should exist.

Certainly I heard some concern expressed about NBC, and I think we are going to have to talk to other users.

MR. BLAIR: An issue that didn't even come up out of the testimony is that first Databank and Multum and a third drug knowledge base have all been struggling to wind up seeing if they could converge within the HL7 vocab and what really wasn't expressed was that Multum virtually was ready to offer, has offered it for free and --

MR. MAYES: There were allusions made to that, Jeff. If you knew what was going on --

MR. BLAIR: If you knew what was going on, you knew what they were saying, but they avoided referencing some of the things. So, part of it which is part of what I am trying to express for those folks who haven't been sitting through some of the HL7 sessions is Multum, this is a case where there is a developer that is frustrated with the coordination process and is saying, you know, that they are hoping that the government would be a focus or catalyst and assist.

Most of the other cases are ones where they are saying, like the nursing code folks, they are saying that they are doing the coordination just fine, but with the drug knowledge base vendors, there they are saying that they need help.

MR. MAYES: After Simon is finished I want to actually expand on that because it is an area that there is something further.

DR. COHN: Okay, I guess I would finish on that note basically. I mean my observation is more that there is an issue as opposed to a solution at this point and recognizing that there may be a lot of possible solutions. I, also, heard, for example, and maybe I am overstating this but that the FDA given the appropriate direction might, also, play a role in a piece of the solution.

MR. MAYES: If I might, Jeff, I agree with both the synopsis that Mike and Simon put forward. What sort of struck me as interesting was some of the different perceptions. Obviously coming from the federal sector I was sensitive to the recommendations involving the government and what it might do.

On one hand I found that a number of the people making suggestions or suggesting that the government could be involved showed a certain naivete might be the wrong word but a certain lack of understanding of actually how the government does business.

There was certainly in my mind a definite leaning toward the academic view of the world, and that had to do with this issue of grants. You will notice that in all the government presentations everybody that is actively maintaining and paying for maintenance of anything does it through contracts. Grants are used to develop things, one- time projects. They are not used for continual funding. That is not how you get our budgets for continual funding of long-term ongoing programs.

What I, also, found very interesting is those organizations that were commercially successful or successful within their own venue such as the drug people really didn't care whether the government got involved in it. They were doing quite fine, thank you and being fairly successful within their commercial business world.

There were, also, several organizations that had already had a brush with the government regulation and government running of things and at least one of those, the American Medical Association I thought was quite clear that they didn't feel the government really ought to step into this, that they should be very careful.

So, I actually was intrigued by Mark Tuttle. He seemed actually in my estimation to have really the best grasp of how likely the government from a funding standpoint is really to get involved in this.

So, I think that while there are certainly some real roles, this idea of us being able to present the recommendations within the context of the business case, whether it be a commercial business case such as Kathleen is very, very concerned about or even the federal business case because frankly we have to present business cases as well. I mean there are no monies given just because it is the right thing to do.

We like to pretend there are, but for anybody who has to actually run something operationally in the federal side you have to make the business case as well, but I guess that is my take away that whatever recommendations we put forward really need to be actionable, as you said, Simon. In fact, the more specific, the more actionable, the more tied if on the federal sector to agencies, whatever if it is FDA, HCFA or whatever, the better that those recommendations are tied to underlying mission of that agency and budget of that agency the more likely that it will actually occur, and that in fact we do need to put them within the context of a business case, both commercially or in the private sector and the public sector.

MR. BLAIR: The three of you I think have covered probably just about everything except for one point which was sort of the major thing, and I actually had three individuals come up to me at different breaks and point out that gee, everyone that is here giving us observations is a developer which of course they are and that we are missing the viewpoint of vendors and other users, and that included IBM and Judy Ozbolt and Harold Rothwell and I think there was somebody else, and all of them were winding up adding the types of perspectives that we were missing from these last 2 days. Actually I don't have to go into what was missing. We had already wound up indicating that we wanted to have a day to be able to solicit the viewpoint from vendors and the SDO's in dealing with terminologies, and end users, yes, end users and vendors and the folks that are going to use the terminology, what problems they have either implementing them or picking them or paying for them or getting licenses, you know all of those issues from their viewpoint. So, that was echoed. Other than that, I think that Michael and Simon just really captured things very well.

DR. COHN: I think there are other people who want to add their ideas and all that. I am looking at Louis.

MR. KUN: Jeff, I am going to tell you a few things that I perceived, but not being a frequent participant of the Committee perhaps my views are a little different, but basically I saw like two presentations. everything that was said and everything that was missing, and the missing is perhaps because of the direction that you orchestrated the questions, perhaps, I am not sure, but let me give you a few samples.

Yesterday when Virginia Saba made a presentation she was talking about home care, about the increase of their activities on the nursing side. She mentioned, for example, that there were huge gaps on how to collect information between visits and this might tie a little bit with what Mike was saying in terms of who is going to write that information down.

So, I believe that there is a lot that is to be desired that needs to be incorporated into that CPR or medical record that could include things such as activities of daily living, but there is much more. There is this whole area that will be indirectly connected to what I was talking yesterday, for example, with disease prevention. What is going to happen with wellness?

MR. BLAIR: And Carol Bickford's comments, also, echoed all of the things that are missing.

MR. KUH: So, my concern is this. On the one hand you have and area that is extremely difficult to tackle and is going very slow. On the other hand, you have if you look at Chute's feedback loop where you have knowledge, medical knowledge that is changing fairly fast compared to these standards that are being created at a certain rate, and I am afraid that the end product that people might be seeking in these committees might not fulfill the requirement of that new medicine or the new health care system that we might have in a few years because of genetics, because of all this newcoming. So, on one hand you have agencies represented by Bob where they are addressing today's issues, but --

MR. BLAIR: Michael, I would hope that maybe if you could capture just a topic of all the things that are missing because they came in bits and pieces, but Louis is almost summarizing that from the genetic information to the environmental information, to the alternative medicine pieces to the pieces of home health care, to what happens between visits --

MR. KUH: Alternative medicine.

MR. BLAIR: Yes and that is maybe a real good viewpoint.

MR. KUH: And if these things become more cost effective and people start saying, "We should go that route," and we are going to start doing some of the things from acupuncture to whatever, to having massages and having exercise classes and diet and so on and so forth and getting a lot of this information available from the web and allowing people to incorporate that information and having not only that, not only that, not people incorporating the information but having intelligent agents that provide information to your medical chart, now we are getting into a very different profile where, also, terminology, vocabulary and everything that we have been touching becomes very important because how are you going to get that information to the right people?

So, I think that there is a whole area right here that is like the tip of the iceberg that I suggest strongly that that needs to be pursued because otherwise you are going to get recommendations that will become obsolete by the time that you present them perhaps.

DR. COHN: Could I make a comment, also? I actually do appreciate what you are saying. I guess that is why I think some people feel that the issues of processes and infrastructure become so important because we know that ICD-10 is not going to be what we need in 15 years without changes, and similarly we know that no matter what we do it is going to need to be changed, and the question is how are we going to deal with that and certainly we don't know what the future is going to look like. We need to somehow in all these systems put enough flexibility in.

MR. BLAIR: Did we capture that one, too, Simon's comment?

DR. FERRANS: I was going to say that Keith talked about these processes, no matter what get the process right. He talked about that.

DR. KOLODNER: The process and what we alluded to earlier on, the principles. We are really growing the complexity, and we want it to be flexible and adaptable and we cannot just impose some sort of a rigid solution, but we need to start simple, start small and by getting the principles right and the processes right then it can grow and adapt, and the question is how do we recommend that that be done.

MR. MAYES: In some ways it is a shame that we don't have a bit more experience with the other HIPAA transaction standards that are going to be out had the time frames been different because I, personally, have grave concerns as to whether or not we are going to develop the processes within the transaction standards to actually have them encourage ongoing modification and changes rather than become ossified and drags upon that, and certainly I would agree. That is my major concern is that we don't just pick something and then forget about the process because the something we pick is likely to be as we saw in the code sets that we picked for the transaction standards somewhat behind the times anyway even by the time we picked it, and if we don't focus on the process of how we are going to make that an actual agent for change, we could be doing more harm than good.

MR. BLAIR: Simon just noted to me that Rob and Rich Ferrans and Dr. Garvie all have some points. Let us make sure we capture those.

Rob?

DR. KOLODNER: The point that I just made about the growing of the complexity is probably the most important one because I think how we go about making a recommendation or what kind of recommendation we make is going to be vital. We keep thinking in terms of models or analogies or metaphors or other things, but in thinking about Chris, well, I guess, Simon, it was your slide that he claims he cleaned up by taking all the words off, and if I recall the things and the diagrams he took off were all the different pieces. We know we are not going to have a single set that is going to meet all the needs of the ones that we heard about; the one this afternoon, ECRI seemed to be discrete enough that it looked as if that kind of fit within without colliding a great deal with some of the other terminologies and vocabularies that we have been looking at, and so, maybe the idea of using the analogy that is sometimes used in these of we are creating an ecosystem of a forest and we have all these different things that need to interact and it is by their interaction that we get the richness and sustainability of the set of terminologies and in fact there is an evolution as some parts wither and die, others come take their place. It is almost as if we are doing that.

The problem with terminologies is it isn't as if you have a tree over here and a bush over there and a blade of grass over here. They collide at least in terms of the terminologies, and have fungi maybe. We may need a different analogy, but somehow this evolution in thinking over time and viewing it over time to figure out how is it going to meet our needs now in the best possible way 5 years from now and how could we conceive of it evolving over the next years and changing as medical knowledge changes, as the health care system changes.

MR. BLAIR: Richard, could we get your thoughts?

DR. FERRANS: I was just thinking that you know first as far as recommendations we have to, someone said earlier today as an aside not to do any harm, and I was thinking that Rob's fauna shouldn't choke on Mike's watermelon. In any case I was struck by the variance of recommendations. I came in hoping that somehow I would be enlightened in a way where some patterns would emerge and I would get a clear picture of what we ought to do. Instead I have been enlightened so much that it is rather confusing which is probably progress and I do mean that.

I think that it gets back to the principle of what Paul Tang talked about earlier about alignment of incentives, and I think it is important that we capture that because that needs to be a guiding principle, and I think a lot of different people in their own particular domain have tried to verbalize how we can align the incentives, and I think the approaches vary widely, some of them from philosophy or experience but others merely from the practical nature of where they are, and I think the different vocabularies and code sets and standards are all at different places in time, and it is definitely not a one size fits all and so somehow that leads me down the road that somehow our recommendations are not going to be -- they are either going to be very, very high level and broad and principle based or they are going to be somewhat complex responding to the different components and the different segments.

DR. COHN: Or maybe a combination of both.

DR. FERRANS: Exactly, and I was, also, struck by the dichotomy in the terminology area between the groups who have done the work on terminologies, most notably SNOMED and all the terminologies that have been gone through, a tremendous amount of validation, but I don't hear them talking much about the user interface, and I think that that is an important issue that I don't hear enough about and then on the other side I hear about the new and emerging technologies in natural language processing which have the best user interface because it works on the back end, and somehow I wish that, and I guess it is just my wish that the two could somehow converge magically, and other than that I think we got a lot of very good education, and I am borrowing from I don't know who said it, whether it was Jim or Bob, but with respect to the nursing, the panel regarding nursing classifications I really hope that this work group takes their convergence meeting and looks at the process that they are embarking upon because they seem to be doing a lot of things right, that we look at the process that they come up with in their consensus conference as to how they are going to proceed forward to stitch together the various constituent nursing vocabularies and look to that to see how we can use that as lessons learned.

MR. BLAIR: Richard, any other thoughts?

Then let me go on to Jim.

MR. GARVIE: Just a few thoughts. I think what struck me more during the past couple of days than in the previous sessions is that we are certainly not lacking for breadth in terms of the subject area, and in places we are not lacking for depth either, and I wonder if we might need to step back and kind of regroup or reconsider if you will how best we can assimilate information from these kinds of settings, these kinds of sessions, especially relative to the work that has already been done and the inventorying of systems, inventorying of terminologies and classifications and coding systems and so on and whether we can keep in mind I guess whether we know when we are getting new information, you know, when we are essentially getting information again that we have already captured and in short I guess how best to assimilate the information we are collecting.

MR. BLAIR: I think that we need to wind up pulling together a summary, running it by you seeing if that captures either the market segments or the different approaches or different processes and seeing if we -- and we may have to go through a couple of iterations to do that, and the other piece is that we still have to get the vendor perspective in here, you know, and make sure we have a vendor-user perspective, but Mel Greberman, you had a comment?

DR. GREBERMAN: Yes, I have a few comments, and I agree with the one size doesn't necessarily fit all categorization you said before, but I think in addition to recognizing the problem I think one thing that I would hope that we could develop as a future course of action would be to figure out ways of addressing those issues and looking at some of them maybe using some specific examples.

This afternoon, for example, obviously we heard a lot about the NDC versus some other drug code schemas, and I think we are going to have to recognize and we do recognize that there are various uses, various parties who will want to have different types of information for different purposes and I think one very important contribution we can make is to help develop an approach to how those things might be resolved. So, the various users will be able to have access to the information they need and want and yet not just toss everything out all at once but find ways of working either through mapping or some other approach but at least work on some options for dealing with those issues.

The other point I wanted to make is, Rob, I am glad you were able to perceive about the medical device terminology, but I think we had one presenter in that subject, and we have with the FDA particularly the Center for Devices and Radiological Health has been working with ECRI, but I think we are looking at other issues, too, in terms of the adverse event reporting and one of the things internally is we are trying to see how we might maybe go towards more of a collaboration with the MedRA effort.

So, I would like to review what ECRI said with the person in our center on devices who has been most closely involved with that and see if it makes sense maybe to have some additional discussion of this issue at our next session in June.

DR. KOLODNER: I agree that we heard one presentation. That doesn't mean that is the one, but a vocabulary, a terminology about devices, maybe something that is referenced by others without having to be part of the continuum with patient signs and symptoms.

MR. BLAIR: Any other observations about what we learned?

If not, then let us go on to our next topic, and our next topic is our agenda for the June meeting, and Simon, can I ask you to take the lead?

DR. COHN: Sure. I think in a sense the discussion we had actually set us up perfectly for a discussion in June because Jeff and I have talked, and I think there are some things that we need to catch up about and there are certainly some people that we had intended to have testify to us, but I think that most of the day really should be devoted towards a little more reflection of where we are, regrouping and identifying how we can be most productive in the remaining hearings as well as what sort of focuses.

I notice that I think Jim Garvie sort of commented about that, that we have been just sort of getting tremendous amounts of information. The question is now what do we do with it, and what more information do we need, and certainly if every hearing we have between now and the end of the year is a similar sort of avalanche of information we are probably not going to be much smarter at the end of the year than we are now.

We need to figure out ways to be a little more selective and a little smarter in getting information and we certainly need to be beginning to hear people who have different views of the information that we have begun to receive, and I think Jeff has mentioned quite appropriately vendors, users, and Mel Greberman was mentioning others in his shop who have, I think probably some different views on the terminologies, for example.

Having said that and recognizing we just have 1 day, I think we at least have drafted out an agenda, and you may have some additional thoughts or ideas about it. As our first discussion of the morning I think we had intended to have Claudia Teshiet who was unable to come to our last standards hearing come and testify if she is available and, also, John Madison to talk a little more about XML and the business cases around that.

DR. FITZMAURICE: If I could interrupt, I talked to both Claudia and John, and they are very happy to be invited and they will be here.

DR. COHN: Good, super. We have, I think, also, discussed with the GCPR and I am looking at both Dr. Kolodner and Dr. Garvie about I think the need to now that we are 8 months further along in all of our processes, both the work group and the GCPR process take a look at focus areas, have those critiqued by those of you doing the work of the GCPR, as well as a frank discussion about where the collaboration and where the opportunities and synergy may exist now that we are moving a lot further along in the process, and we see that as potentially a partial presentation as well as hopefully a lot of interactive discussion and hopefully with some time for us all to be thoughtful about that because we have certainly heard that the government can play an important role in all of this. So, let us talk about that.

Not on the agenda but I think I will just reference it, and I don't know whether Betsy Humphries will be ready by that time. Betsy, unfortunately, was unable to be here. She is in Chicago at another meeting, and I think she had offered, and once again I don't know if the minutes will be ready by that time for her to critique, but we are hoping at the beginning sort of that the evaluative and introspective views that we are going to ask for her thoughts and opinions on the testimonies that we have heard, and I think that she may have some other thoughts, and it may, also, be at the same time that Mel Greberman's staff has some other views of things that we have heard here, and we would certainly invite a critique if it is appropriate from his department also.

Having said that, we are really hoping that most of the afternoon could be really spent with us thinking about what it is that we have learned, identifying where we are in our work plan, what needs to be changed on the work plan, what we need to do over the next 6 months so that towards the end of the year we can feel that we have made some progress and have digested and conceptualized what it is that we need to conceptualize and feel that we are coming forward with the appropriate recommendations, and I think a view of that in the afternoon and a discussion was really where we thought that the afternoon needed to be.

Certainly I have suggested to Jeff and I think everybody is in agreement that we need to come up with a letter to the Secretary in September, at least beginning to talk to her about our focuses and the type of work that we are engaged in and so, it will, also, be a time for us to begin to talk about what may be elements of that.

Mike, you had raised your hand. Did you have something?

DR. FITZMAURICE: Yes, you had mentioned about FDA and I wondered if we should have an FDA testifier on device codes join the first panel on our June 22, meeting as a way to round that out?

DR. COHN: Great. I don't know whether Betsy would be available or even ready by June.

DR. FITZMAURICE: I will check with Betsy to see if she has something she wants to say, and if a summary is out, then good. She may have something to say anyway. I don't propose that she would want to listen to 8 or 10 hours of testimony on a tape recorder. I wouldn't want to do that, and I wouldn't impose that on her either.

DR. FERRANS: I talked to her late last week. She was planning on it.

DR. GREBERMAN: I guess one other area there was a lot of discussion about other terminologies, alternative medicine, etc., and do we have a strategy for including those in some fashion in our hearings?

MR. BLAIR: Giannini testified yesterday.

DR. COHN: But as a more global question I think we have the question of all the other terminologies we didn't get to which is, also, referenced in the issue of where are the borders of patient medical record information. So, what is out of scope with this effort which I think is something we are, also, going to have to come, I think to some understanding of because if there is no border to our scope sort of by definition we never do anything.

DR. FERRANS: Should we try to capture that prior to the next meeting or have a conference call or an e-mail going or something?

DR. COHN: On what, scope?

DR. FERRANS: In terms of the other list.

MR. BLAIR: We do have the work plan that we executed and laid out for the year, and the thought was when Simon was referring to the afternoon in our June 22, meeting it was to wind up saying that according to that work plan we were to cover data quality, message format standards, medical terminologies, the state, you know, issues and I forget the other two, and we have opportunities for testimony in September and October and December. So, we have like three more hits of maybe 2 days a piece for each of those.

So, we really need to be careful with what our scope is and where our priorities are because we have a limited amount of time to capture information, and we want to make sure we are capturing the information that we need.

Am I answering your question? Is that addressing it? I cannot see if you are nodding your head yes or no?

DR. FERRANS: I guess I was just wondering whether we should do it sort of off line in terms of rather than focusing on that in the next afternoon meeting since as you said we have limited time whether one should try to draft a list of others.

DR. COHN: Rich, I think rather than dealing with the quote, unquote, other, I think that the next effort really needs to focus on coming up with a definition of patient medical record information to help us identify what is in and what is out until we figure out what other might be.

My understanding is that you have already volunteered to help with that effort, and I know Mike Fitzmaurice is sort of leading the effort to try to put that together. So, we want to thank you for that, and, Rob, I think you were trying to say something.

MR. BLAIR: Could I just give Richard a response? I will tell you I have no problem with beginning that discussion by e-mail, but a number of other Committee members have felt that that is an awkward way to do it, and they struggled with that and the conference calls, and that was the reason that we felt that responding to many of the Committee members' comments that we needed to all be sitting around the table and discuss it at the same time.

So, I don't want to preclude us doing it, some of that over e-mail, but I think many Committee members want to be able to sit around the table and discuss it.

MR. MAYES: I have something that I think we need to keep in mind which is if the Secretary is increasingly preoccupied with the issue of confidentiality and privacy you will remember the Secretary will have to being the publication of regulations on this area in August if Congress does not pass laws. There are several bills in mark-up now, and I can tell you this is a big topic of conversation within the department.

So, one if we want to look at state laws we should probably do it between now and then and two, whatever letter we write to the Secretary, if we are going to give her a letter in September or October, we need to be sure that we link it to the discussions that the privacy subgroup is having.

I know we gave them the responsibility, but the Secretary will definitely read whatever is sent to her in the context of this discussion because we are down on the Hill testifying this week in fact on some of those issues, and this is not going to go away between now and then.

DR. FERRANS: Bob, with regard to state issues I know that the Southern Governors' Association has a medical technology task force because I am a member of it, and it was modeled after the Western Governors' Association which has been very active, and I would suggest that those, if you wanted to get testimony from a large number of states you could get it out of two individuals, you know, one representing each one of those organizations, and I can tell you that they are particularly concerned about federal ceilings and would love to have a say in any discussion.

MR. BLAIR: Great, and maybe you can give those names to Michael so we can get them on the --

DR. FERRANS: Lou Compair.

MR. BLAIR: Other comments?

DR. COHN: Henry, I see you raising your hand. Do you want to go to the microphone?

MR. HEFFERNAN: An observation is that had you had vendors present and users for this discussion on terminology I think your conclusions would have been different, and I am pretty sure that your conclusions would, I mean you heard enough from the variations among these terminology people to realize that even though SNOMED people say that there is convergence, nevertheless these other people are not -- and that is why SNOMED has very wisely moved into the concept of a reference terminology, but the implications of a reference terminology, the structural implications have not been considered and architecturally there are very significant implications of that, but their model of the universe is you have a user terminology that can be anything, and then it will be presumably SNOMED's role to define mappings to those terminologies, but that organizationally and from a policy perspective has very significant questions associated with it.

MR. BLAIR: So, you are echoing again that we need to hear from the vendors and the users, is that correct?

MR. HEFFERNAN: The point is you are either going to hear from them early or late, and one of the things I haven't heard anyone suggest is that there be sort of a straw man summary of issues that have surfaced here that then can be circulated out for comments, so you can get some written comments.

I mean we are very familiar with the normal regulatory process, but quite apart from the regulatory process to have sort of a summary put together that especially with question in it, you know, is this convergence taking place; is it going to be satisfactory, and I think that then the issue is how can you handle potentially the volume of complaints.

MR. BLAIR: I very much agree with you, and I have heard this same thought from a number of other folks and like Richard was saying, you know, there is so much stuff here, how do we wind up pulling it together, and I am going to make a first draft of an attempt at trying to pull together synergies. There will be multiple ones. It is going to be imperfect, like you said. It is going to be a straw man that folks can pick apart, but at least it will be something that maybe can begin to help us coalesce some of the ideas, and I will have that before our June 22, meeting so that we can begin to poke at and start to see if those are the areas of idea coalescence.

MR. HEFFERNAN: To give one example, Bob has this USHIK activity which is a converging concept, that is it organizes the convergence from the point of view of being able to identify differences, and thus far he has gotten, I think very few comments on it in large part because people haven't seen enough of it, but they have seen the Australian model, but they haven't seen enough of this to realize how it could relate to their activities, but there you have an initiative that has significant potential for relating across terminologies if they are all referenced in his registration.

What I am saying is those types of issues haven't been out on the table, haven't been discussed. Perhaps some papers that, in other words if it would be possible and here I am suggesting actions from Bob but I have never stepped back from that in the past, have I? But if the Committee could receive something which would be something along the lines of, that is in the same model of what Mark Tuttle was presenting as a model and then have these competing, and Mark is going to develop a written version of that that could go out as something for comment, to have another model that Bob would develop as to how the USHIK could be used as a convergence because my reading of what we have been hearing even from the developers is this whole business of getting some conversions and therefore being able to use terminology, computers being able to use terminology rather than human beings.

I mean the whole assumption underlying all these terminologies is that human beings are going to be at both ends of the communication and you are not going to get administrative simplification if you always got to have a very skilled interpreting human being at the other end of these electronic transfers.

So, that is the type of issue that I think needs to be surfaced, and I see Bob looking with those sort of glazed eyes at the very idea of more work. So, I will stop.

MR. BLAIR: Thank you, Henry.

Any other comments, questions or concerns?

DR. FITZMAURICE: I would like to extend something that I talked to you and Simon about and that is to get initiated sets of working papers, working drafts of things like, the first one would be the definition of patient medical record information compiled from the testimony that we have heard and maybe taking a definition out of the computer-based patient record book of the IOM. It doesn't need to be large, maybe three pages with a one-page summary of here is what we would put into, propose to put into a report.

I would follow that up with a second issue paper on what are the functions of patient medical record information to kind of guide us into what we are looking for. I would make that a second paper. I would then take additional direction from the working group and say, "Would you like working papers on each of the focus areas that have been defined?" and again, we are talking three pages, maybe five at the most but some meat to start chewing on to say, "Let us expand it here. No, this isn't what I meant, Can we get some references for this?" Because I am seeing that the train is coming down the track. We do have quite a bit of time left, but we need to have in mind where we are going, what we want the project to look like and not only an outline but what is some content to it and to find out where the Committee is on the content.

MR. BLAIR: On the first issue paper I believe isn't that what Richard has volunteered to --

DR. FITZMAURICE: Richard is not in the room, but Richard volunteered to work on it, and since he was the first volunteer I will talk with him to see if he will head up that effort. I will be willing to be on any or every working group to provide guidance and give comments and help with the leadership if it is not there, but we need to draw in more people. We need to get something so that the working group can say, "Yes, this is what we should be focusing on. No, this isn't what we meant to focus on," and do it without having a large investment in pages.

DR. COHN: I was just reminded that I think it was Dr. Yasnoff at our last meeting indicated some interest and I am not sure willingness to lead but some interest in the issues around business case based on the literature around some of these areas that we are talking about. I am not sure if I am expressing it precisely, but --

DR. FITZMAURICE: You could be right. Since Louis works closely with Bill Yasnoff, in fact, is his representative here at the meeting maybe Louis would want to take that request back to Bill and see if he would want to head up a working group on the business issue focus area since that has jumped in importance.

DR. COHN: I believe that the main focus had more to do with the research literature supporting some of these pieces.

DR. FITZMAURICE: I do remember that he said that there was a body of literature that he was familiar with that showed scientific evidence of what worked and how it worked, things like decision support systems for example. So, maybe he has --

MR. KUN: Is the context that you are putting it like three or four pages?

DR. FITZMAURICE: Yes. So, we are talking about two things. We are talking about the business case, and we are talking about evidence that patient medical record information has been found to be useful in such things as decision support systems or are we talking just about one?

MR. BLAIR: First of all, could you finish the sentence when you say, "Business case"? Is it the business case for standards or is it the business case for medical terminology? Is it the business case for patient --

DR. FITZMAURICE: You are asking the wrong person.

DR. COHN: Let me tell you what I think it is, and we will try from that one. I don't know that there is a business case at the high level for medical terminology as a completely separate being. There are I think business cases for drug information and the impact that it may have on populations, lives saved, money saved, etc.

There are probably other cases like that, and I think that we need to use some of that to help guide us in terms of priority setting.

MR. BLAIR: So, you are saying the business case for drug information?

DR. COHN: I am saying the business cases for pieces I think as related. I am just using that for an example. I am saying that it is a level below high-level terminology. I am saying that it is a level where certain types of terminologies, if we look at the literature and systems which have terminologies in them that can actually save money and save lives but --

MR. BLAIR: Let me paraphrase and make sure that I am getting with you? So, you are saying the business case for electronic patient medical record information, and that may be many components.

DR. COHN: Yes, I guess.

MR. BLAIR: Does that give you the outline?

MR. KUN: I am not sure.

DR. FERRANS: If we have a business case that is built upon sort of some sort of natural flow of the business case for structured data or here are the benefits of medical records most notably and decision support and outcomes assessment and that the ability to do this is predicated upon standards and here are the sort of categories that we have put forth and that individually there are some business cases for some of those on their own, but I think it has to sort of flow from the top down so that we have the big picture.

MR. KUN: The problem is that I think when I am starting to come to the conclusion is that you can have a system that has structured data. You can have a system that has drug information and yet it is not a patient medical, the clinical medical record. So, I am trying to coincide a little bit with what Jeff is saying. Do we want a business case for the actual medical record or for subsets of information? I am not sure.

DR. FITZMAURICE: May I add something, Simon, that might clarify this? As I remember what Bill was saying it was that there is a body of literature out there that shows that patient medical record information is useful for patient care, for improving the quality of care and at times it reduces length of stay and reduces adverse drug events all of which would be part of an important business case.

Suppose we focused this, so let us call it a second issue paper on what are the functions of patient medical record information.

MR. BLAIR: In electronic form, I am assuming?

DR. FITZMAURICE: In electronic form.

MR. BLAIR: Louis, is that all right?

MR. KUN: Yes.

DR. FITZMAURICE: That way I think it gets more to what Bill showed expertise in, and I will be happy to interact with you and maybe Simon and Jeff will, also.

MR. BLAIR: I understand we have 3 minutes to five. Any other final observations or comments?

MR. GARVIE: Jeff, this is Jim Garvie. Mike, I will get with you and help with the first paper, the definitions.

I won't volunteer at this point to work on the other papers. I think we can do that when --

MR. BLAIR: I want to indicate that it is my observation that these 2 days have been just absolutely fantastic with the richness of the information and the relevance of the information we have gotten.

Thank you, everybody who is here for your wise questions and your insights and the information we have drawn out of it and the contributions you have made to the success of these 2 days

Simon, do you have anything to add?

DR. COHN: No, I share your comments and thank everyone very much.

(Thereupon, at 5 p.m., the meeting was adjourned.)