[This Transcript is Unedited]

Department of Health and Human Services

National Committee on Vital and Health Statistics

Subcommittee on Standards and Security

September 21, 2005

National Center for Health Statistics
Auditorium A & B
3311 Toledo Road
Hyattsville, MD 20782

Proceedings by:
CASET Associates, Ltd.
10201 Lee Highway, suite 180
Fairfax, Virginia 22030
(703) 352-0091

P R O C E E D I N G S [8:55 a.m.]

Agenda Item: Call to Order - Welcome and Introductions - Mr. Reynolds and Mr. Blair

MR. REYNOLDS: Good morning, my name is Harry Reynolds, I am with Blue Cross and Blue Shield of North Carolina and co-chair of the Subcommittee on Standards and Security of the National Committee on Vital and Health Statistics. The NCVHS is a federal advisory committee consisting of private citizens that makes recommendations to the Secretary of HHS on health information policy. On behalf of the subcommittee and staff I want to welcome you to today's hearing, which has a plethora of topics that we're going to deal with over the next two days.

We are being broadcast live over the internet and I want to welcome our internet listeners as well. As is our custom we will begin with introductions of members of the subcommittee, staff, and guests. I would invite subcommittee members to disclose any conflicts of interest, staff, witnesses and guests need not disclose conflicts. I will begin by noting I have no conflicts of interest. Jeffrey?

MR. BLAIR: I'm Jeff Blair, vice president of the Medical Records Institute, co-chair of the Subcommittee on Standards and Security and to the best of my knowledge I have no conflicts of interest.

DR. FITZMAURICE: Michael Fitzmaurice, Agency for Healthcare Research and Quality, I'm liaison to the full committee and staff to the Subcommittee on Standards and Security.

DR. FERRER: Jorge Ferrer, staff to the subcommittee from the VA.

MS. AULD: Vivian Auld, National Library of Medicine, staff to the subcommittee.

MS. PICKETT: Donna Pickett, National Center for Health Statistics, Centers for Disease Control and Prevention, and staff to the subcommittee.

MS. GOVAN-JENKINS: Hi, Wanda Govan-Jenkins, NCHS, CDC, and staff to the subcommittee.

DR. WARREN: Judy Warren, University of Kansas School of Nursing, member of the subcommittee, and I have no conflicts.

MS. FRIEDMAN: Maria Friedman, Centers for Medicare and Medicaid Services, lead staff to the subcommittee.

MS. WILLIE(?): Shelly Willie, RxHub.

MR. GINGRICH(?): Mark Gingrich, RxHub.

MS. BYRNE: Teri Byrne, RxHub.

MR. ROTHERMICH(?): Phil Rothermich, Express Scripts.

MS. FERNANDES: Lorraine Fernandes, Initiate Systems.

DR. SCHUMACHER: Scott Schumacher, Initiate Systems.

MR. SHEATH(?): Tony Sheath, Point of Care Partners.

MS. BOYD: Lynn Boyd, College of American Pathologists.

MS. INSLEY: Marcia Insley, VA.

MS. WILLIAMSON: Michelle Williamson, NCHS.

MR. REYNOLDS: Okay, before we get going on the actual first speaker, who is John Halamka, and I don't think, is John on yet? He hasn't come on yet? First thing I want to do is pass around to the subcommittee and staff a chart that we've been working on of what our outstanding items are as far as our agenda and so on. This is updated as of our last session, so if you'll take a look at that, and I'll work with Jeff on it to make sure that we had talked about what we wanted to do for September, obviously, we have those covered, December, and we started talking about February. And then we want to spend a little time, maybe five minutes, probably tomorrow, just getting any updates you have or any new subjects that you think we ought to cover so that we can keep this updated, and Maria has got some updates for us tomorrow also. So if you'll please take a look at that between now and then.

I'm also trying to get everybody a copy of the minutes so that we can approve those. Is that the way we do it?

PARTICIPANT: [Off microphone.]

MR. REYNOLDS: Oh, okay, good, so we have none so Jeff and I will sign the minutes.

Maria?

MS. FRIEDMAN: I'd just like to add that originally we were going to have a briefing on our claims attachment reg which is scheduled to be published on Friday and should be on display tomorrow which makes it legal for us to talk about it. But because I couldn't be sure that it was going to be on display tomorrow, we had to pull the briefing and so we would like to offer a teleconference briefing to the subcommittee for that.

MR. REYNOLDS: Because after we hear that briefing we need to consider whether or not we want to submit any comments, is that correct?

MS. FRIEDMAN: Yes.

MR. REYNOLDS: So we'll need to make sure we get in the pipeline as to any comments that we're going to want to have on that attachments legislation so that's the reason we've done it this way.

MS. FRIEDMAN: And I think it will be interesting for the subcommittee to see how things came out since we did a lot of work on claims attachment a while back so this is kind of the fruits of some of those efforts coming to the fore.

MR. REYNOLDS: Judy, while we're waiting for Dr. Halamka why don't you, Stan, I'm sorry --

DR. HALAMKA: Hi, this is John Halamka --

MR. REYNOLDS: Judy, why don't you introduce the topic on matching patients to records and then we'll turn it over to you, Dr. Halamka.

DR. WARREN: One of the things that has been before the committee in the past is the whole notion of a unique patient identifier, and over the years since that came up, the consensus has been that that is probably not the way we want to go in working with patient records. So we've been looking for alternative ways to ensure that we get the right information about the right patient to the clinicians who need to make decisions about the patient, and to other decision-making bodies. With that, we've started pulling together a series of hearings, starting today, to help us understand the approaches to matching patients to their records, so that we can then make some recommendations based on that. So this is the first round of those presentations, and with that I'd like to turn it over to Dr. Halamka since he's on a limited timeframe. So with that, John, we'll turn it over to you.

Agenda Item: Matching Patients to Their Records - Dr. Halamka

DR. HALAMKA: Great, and how much time do I have for the discussion?

MR. REYNOLDS: About half an hour or so and then that will leave us about 15 minutes for questions, that'd be great.

DR. HALAMKA: That's just perfect. So you framed this very, very well. We recognize that certainly in a world of nirvana, if we could just start from scratch and everyone were given a 128-bit cryptographic identifier at birth that was private, secure and immutable and followed them for a lifetime of health care, boy, that would be great. But in fact, can we get there in the next decade? What about the privacy implications of having a universal identifier that might be linked to your employment records or to your bank records? Privacy advocates are quite concerned, rightfully so, about privacy spills that might result. All you have to do is look at AmeriTrade and MasterCard and recognize what would happen if you had a single universal identifier linked to every one of your records.

And of course there are the logistics of issuing new identifiers and how we deal with non-citizens who seek health care, and of course the issue that the Social Security number is really quite a rotten identifier, in that transposition of digits of a Social Security number occurs in hospital information systems about ten percent of the time. I trained in county hospitals as an emergency physician and I can tell you we had the remarkable circumstance of 90 year old women being reincarnated as 18 year old men with the same Social Security number, so certainly it is not a panacea either.

So given that, in the interest of actually getting something done in the next few years, we are unlikely to use an existing identifier or issue a new identifier, and of course overwhelming privacy concerns might prevent that even if we did.

What do we do instead? Now do you have the slide stack that my assistant sent you?

MR. REYNOLDS: Yes, we do.

DR. HALAMKA: Very good. Well, our proposal is, and we've actually got this running in Massachusetts and I've been running this kind of thing for several years in the context of integrated delivery networks, associated hospitals and doctors' offices, that in the absence of a universal identifier you can create a probabilistic, statistical match of an individual based on demographics and use that in a virtual way to link all the places of care where an individual has been. So let me just talk through that concept. We're on just the agenda slide; I will talk about how one creates this index, how it can be used in a federated, decentralized and secure fashion to exchange records, show you some of the working prototypes that we have, and then describe the algorithm in some detail along with some of the caveats and privacy concerns about it.

So on to slide number three, high level functional architecture. The way that such a system, with an index built on probabilistic matching, will function is as follows. We recognize that in this country heterogeneity of our hospital information systems is the rule; it is unlikely that we're going to be able to mandate that absolutely every hospital run the same system or that every doctor's office run a common electronic patient record.

However, all these various vendor systems and home-built systems typically do have a standard transaction at the point of registration or admission, discharge or transfer: the standard HL7 segment which describes who the patient is. This PID segment, as it's called, and whether that's HL7 2.3, 2.4, 2.5 or 3.0, all flavors of HL7 do have the basic elements: first name, last name, date of birth, gender, zip code, and of course there are many other elements, which include such things as your Social Security number, email address, etc. But the core data elements of name, gender, date of birth and zip are certainly stored in basically every system in this country. And so hence, at the point of registration in a doctor's office or in a hospital, that registration information goes via an HL7 transaction today to many other downstream systems: a pharmacy system, a radiology system, a laboratory system. So from a technology standpoint it's not a leap to say that in addition to going to all these other systems it will also go to a community-maintained master patient index.
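[Illustrative sketch, not part of the testimony: a minimal Python example of the core demographics carried in an HL7 v2.x PID segment of the kind Dr. Halamka describes. The sample message, facility name and medical record number are invented; field positions follow the standard PID layout (PID-3 identifiers, PID-5 name, PID-7 birth date, PID-8 sex, PID-11 address).]

```python
# Hand-built HL7 v2.x PID fragment with the core matching fields
# (name, gender, date of birth, zip). All values are made up.
SAMPLE_PID = (
    "PID|1||7004012^^^BIDMC^MR||CLARK^JOHN^Q||19270401|M|||"
    "123 MAIN ST^^BOSTON^MA^02215"
)

def core_demographics(pid_segment: str) -> dict:
    """Pull the fields a community master patient index would need."""
    f = pid_segment.split("|")            # f[0] == "PID", f[n] == PID-n
    family, given = f[5].split("^")[:2]   # PID-5: family^given^middle...
    zip_code = f[11].split("^")[4]        # PID-11: street^other^city^state^zip
    return {
        "mrn": f[3].split("^")[0],        # PID-3: local medical record number
        "last_name": family,
        "first_name": given,
        "birth_date": f[7],               # PID-7: YYYYMMDD
        "sex": f[8],                      # PID-8: administrative sex
        "zip": zip_code,
    }

if __name__ == "__main__":
    print(core_demographics(SAMPLE_PID))
    # {'mrn': '7004012', 'last_name': 'CLARK', 'first_name': 'JOHN',
    #  'birth_date': '19270401', 'sex': 'M', 'zip': '02215'}
```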

And obviously, from a security and privacy standpoint, there are appropriate business associate agreements which would be signed between the entity that hosts the master patient index for the community and each participant, because clearly the notion of storing your name, gender, date of birth, and the fact that you've visited an institution could be disclosing. For example, if I visit Beth Israel Deaconess that's not particularly disclosing, but if I visit the Betty Ford Clinic or the Gay Men's Crisis Center, clearly the notion that my medical record number exists at such an institution could in itself be disclosing.

So we have business associate agreements, and that transaction is forwarded off when a patient is registered for care. Now also recognize, because any technology we discuss has to have a complementary policy, that it would certainly be up to the policy of the individual institution, in its disclosure and privacy policy and its consenting policy, to have an opt-in or an opt-out to the index. And I'll give you an example. Beth Israel Deaconess thinks it's a very good idea, in the interest of promoting quality and safety, to exchange medical records in the community, so it will present to the patient: as part of your care we will simply share these five elements of demographic information with the community index; however, if you wish to opt out of that you certainly may.

On the other hand, McLean Hospital, which is our local psychiatric facility, is likely to say, you know, we actually are going to opt out by default; you have to opt in if you want the notion of your McLean medical record number forwarded to the community index. So again, recognize we will absolutely ensure good patient control, appropriate enforcement of HIPAA and state laws which preempt HIPAA, and really engage the community in the decision making on how to run the thing. But the end result is that records end up in an index, and that index itself doesn't contain clinical information; it just contains name, gender, date of birth, other demographic indicators that the community may decide are important from a matching standpoint, the medical record number, and the institution that was visited.

Once you have that information, that can be a record locator system, a pointer system to where clinical data actually lives. In the case of Care Group, for example, I oversee about nine million patient records, two and a half million active patients, six hospitals, and I have a master patient index with all the entities I interact with. That master patient index allows me to go out and say, oh, I see, you were at Mt. Auburn Hospital; you can now go and do a query against the medical information systems at Mt. Auburn Hospital because you know the medical record number there and you know that the medical record number belongs to a patient with certain demographic identifiers.

So if we look at slide three, really what it's suggesting is that a system for medical record exchange that doesn't require a universal health identifier has a feature by which, at each registration, that demographic information, with appropriate consent, is forwarded to the record locator service; the record locator service becomes the community-wide index, which then can be a pointer system for medical record exchange.
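[Illustrative sketch, not part of the testimony: a toy data model for a record locator service entry and look-up, showing that the index holds only demographics plus pointers, the institution and its local medical record number, and no clinical data. The field names and values are invented, not the Massachusetts implementation.]

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RlsEntry:
    last_name: str
    first_name: str
    birth_date: str   # YYYYMMDD
    sex: str
    zip: str
    institution: str  # where the chart actually lives
    mrn: str          # that institution's local medical record number

# A tiny in-memory "index" standing in for the community-maintained MPI.
INDEX = [
    RlsEntry("CLARK", "JOHN", "19270401", "M", "02215", "BIDMC", "7004012"),
    RlsEntry("CLARK", "JOHN", "19270401", "M", "02215", "MT_AUBURN", "MA-99120"),
]

def locate(last, first, dob, sex, zip_code):
    """Return pointers only; the clinical data stays at each institution."""
    return [
        (e.institution, e.mrn)
        for e in INDEX
        if (e.last_name, e.first_name, e.birth_date, e.sex, e.zip)
        == (last.upper(), first.upper(), dob, sex, zip_code)
    ]

print(locate("Clark", "John", "19270401", "M", "02215"))
# [('BIDMC', '7004012'), ('MT_AUBURN', 'MA-99120')]
```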

Let me just move on to the next slide and describe an actual use case. What we see on slide four is that a patient goes to a clinician, that clinician offers selected treatment, there are medical records about that particular patient, and as part of that treatment a registration transaction is generated and goes off to the record locator service. The patient then goes to seek care elsewhere, another physician's office, a hospital, an emergency department, and similarly name, gender, date of birth and other demographic identifiers are forwarded off to the record locator service. The clinician offering treatment at the second site says, well, patient, I think it's really important that I retrieve your medical records from other places in the community where you may have sought care; do you consent for me to do that? And of course, after the patient gives consent for that look-up, the doctor initiates the search, the search finds that yes, this patient has visited other sites, and information is retrieved from those other sites.

In an architectural way, let me just go on to the next slide, this, and I really hate to use this analogy but it fits, in effect you're building Google for health care, that is searching of an index, and Napster for health care, peer to peer exchange of information when you've in fact located that somebody's information is at a particular site. Now obviously Google and Napster are not secure and medical grade and all the rest but architecturally really that's what we're doing, creating a registry and then creating a mechanism by which data will be exchanged.

All of this is standards-based, so that implies HL7, standard flavors of HL7; for the case of medications, NCPDP 4.2, a very standard way of looking at medication history; and in the case where we need to interact with payers, for example eligibility information as an aid to figuring out who a patient might be. Or in the case of Katrina it may very well be that the only way to recover clinical data is to go to pharmacy benefit management databases which hang off payer organizations. And payers typically speak ANSI X12, the HIPAA transactions, to identify patients.

So as we architect these various components of both index and exchange, we recognize that the underlying standards exist, are well described, and are well implemented throughout the country already. So in our state, when we went to do this, we actually worked not only throughout Massachusetts with payers and providers but also through the Markle Foundation, Connecting for Health, with the folks at Regenstrief and the Indiana Health Information Exchange, and with the folks at Open HRE, a group in Mendocino County which is doing an open source version of medical record exchange. And we came to a common implementation guide for this record locator service and clinical data exchange that utilized HL7 2.4 or HL7 3.0, NCPDP 4.2 and ANSI X12 4010A.

So with those standards and those implementation guides in effect what you have is a railroad track that runs from New York to Los Angeles all with the same gauge. And as long as individuals agree to those standards it provides the basis for a network of national interoperability, so let me talk a little bit about that.

Now these next couple of slides just give you a sense that, when I really talk about the software components that we've created, it's a bit more complex than just saying, oh, here's this wonderful index and here's this clinical exchange software. We recognize that if we're going to build medical-grade software we need comprehensive auditing systems and comprehensive security and encryption, authentication of those individuals who are going to go and query the index, and assurance that messages go from place to place with integrity, that they aren't modified, and that they go only to rightful locations which have been certified as participants in the network. And so we've done all of this by creating a modular architecture; it happens to be all SOAP transactions, XML transactions, wrapping around the standards-based messages that I've described.

And we've completed this whole software stack that you see on the slide, called service oriented architecture, and in January, through the Markle Foundation, we'll be giving all that software away for free, and all the implementation guides for free, and the hope is that the running example of code will be analogous to the way the internet itself developed: come up with an idea, develop a standard, put out some running code, and then let the world run with it.
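[Illustrative sketch, not part of the testimony: the general idea of wrapping a standards-based HL7 payload in a SOAP/XML envelope so a service-oriented layer can route and audit it. The header elements, sender and receiver names are placeholders, not the actual MA-SHARE or Markle message format.]

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def wrap_hl7(hl7_message: str, sender: str, receiver: str) -> str:
    """Wrap an HL7 payload in a minimal SOAP 1.1 envelope (placeholder header)."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    header = ET.SubElement(env, f"{{{SOAP_NS}}}Header")
    ET.SubElement(header, "Sender").text = sender        # placeholder routing metadata
    ET.SubElement(header, "Receiver").text = receiver
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    ET.SubElement(body, "HL7Payload").text = hl7_message  # the standards-based content
    return ET.tostring(env, encoding="unicode")

print(wrap_hl7("PID|1||7004012^^^BIDMC^MR||CLARK^JOHN^Q||19270401|M",
               sender="BIDMC", receiver="MA-RLS"))
```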

The next slide, and this is really a recapitulation of everything that I've said, shows that these software components are in a sense like fax machines: you don't need one giant fax machine in Washington; in fact, everybody can have a fax machine, and as long as they speak a standard way of interchanging data you can have thousands of these things and they can interact, publishing information to indexes. There may be regional indexes, there may be city-wide indexes or state-wide indexes. Queries to those indexes can be made and medical information exchange can be done using a set of standards.

So let me just go to the communication between record locator services concept, talk through a use case here. So imagine that Massachusetts has an index of all of these patients, so I've got in the case of my records nine million historical records and there are only six million people in the state so some of my records are for deceased people. But let's imagine that Massachusetts ends up with this fairly large index of say nine to 15 million individuals that we have comprising the Beth Israel Deaconess, the Brigham and Women's, the Mass General, and our 5,000 doctors. Well Indiana, through its medical information exchange, has also put up a record locator service, how is it that a doctor in Massachusetts can speak to the Indiana information exchange and do so in a way that's guaranteed to be secure?

Well, to build an architecture that requires every single participant in Massachusetts and every single participant in Indiana to have business associate agreements and trust relationships with each other, where the folks in Indiana would need to know the granular details of how our institutions function in Massachusetts because they don't know whether Beth Israel Deaconess is a good or a reasonable hospital, is crazy. So what we've said is, let's recognize that we will have regional record locator services and that we will build a network of trust across those record locator services, something in this fashion: in the state of Massachusetts we have, let's say, 50 acute care facilities, and those 50 acute care facilities, we say, are all trusted members of our record locator service.

And those trusted facilities have the delegated authority to offer their credentialed doctors access to the index; they take responsibility for the fact that the doctor is on staff and credentialed, and if the doctor uses the index inappropriately they would be terminated. And so therefore, if you are part of the Massachusetts record locator service system and Massachusetts and Indiana agree to trust each other, any doctor who's appropriately trusted by Massachusetts could go query the Indiana system using what we are calling an IRP, or inter-RHIO proxy.

Now I recognize that that's jargonish; the idea basically is that a record locator service in one region can communicate with a record locator service in another region so that you don't have to have the nastiness of trying to authorize every single doctor in the whole country to talk to every single one of these record locator services. You authorize a doctor regionally, they use one regionally, and the regional one can query other record locator services.

And architecturally, will there be 50 of these, will there be 100 of these? How many RHIOs will there be? Questions that are a bit unknown but let's just suggest there will be a relatively finite number of record locator service indexes so the technology problem of having them trust each other and afford communication with each other isn't that bad, it certainly would scale.

So just the next slide, which describes overlapping trust relationships, is really the gist of what I described: if you have a series of indexes of individuals spread across the country, maintained by individual RHIOs or communities, and there is this transitive trust concept where a federation of record locator services each trust each other, it suddenly makes the problem of authenticating the doctor and ensuring that trusted participants participate in the network a much more straightforward issue. We don't need a nationwide certification program for the users or a nationwide certificate authority; we can defer that, in a decentralized but federated way, to the communities who actually know who these doctors are, and once the community trusts them and communities trust each other, communication can occur.

The slide which describes distributed authentication just notes that the way, technology-wise, that record locator services identify each other is a standard cryptographic certificate method, an X.509 method, which means I only need a certificate for each record locator service participating in the network. A doctor would use a strong user name and password to authenticate into the system, and once in the record locator service, record locator services can then identify themselves to each other using certificates at the record locator service level.
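[Illustrative sketch, not part of the testimony: the distributed-authentication idea expressed with Python's standard ssl module, where each record locator service holds one X.509 certificate and peers verify each other's certificates before exchanging queries. File names and host names are placeholders, not the production setup.]

```python
import socket, ssl

def connect_to_peer_rls(host: str, port: int) -> ssl.SSLSocket:
    """Open a mutually authenticated TLS connection to another RLS."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_verify_locations("trusted_rls_authorities.pem")   # CAs we federate with
    ctx.load_cert_chain("our_rls_cert.pem", "our_rls_key.pem")  # our own X.509 identity
    ctx.verify_mode = ssl.CERT_REQUIRED    # reject peers whose certificate we cannot verify
    raw = socket.create_connection((host, port))
    tls = ctx.wrap_socket(raw, server_hostname=host)
    print("Peer RLS certificate subject:", tls.getpeercert()["subject"])
    return tls

# Usage (placeholder endpoint):
# conn = connect_to_peer_rls("rls.example-rhio.org", 8443)
```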

And so all of these things that I've described, building an index, populating the index, ensuring security, doing clinical data exchange, do not require any new standards whatsoever, just implementation guides on existing standards. So that's certainly important as we think about the standards harmonization effort.

The other thing is this is not vendor specific in any way. One of the things we have really tried to do is to say, as long as you're going to be sending HL7 messages from place to place, whether you use Microsoft technologies or Sun technologies or Python or Perl, we just don't care. So the slide that's called prototype components, layers and platforms just illustrates that such an architecture is truly vendor neutral, and any vendor who wished to participate in such a network of indexes could do so without requiring any proprietary technology.

To show you how this actually looks, and then I'm going to talk through a little bit of the actual matching algorithms in the minutes that I have left, we go to the record locator service, and as you see we have a user name and password, and we also have a security disclaimer. And underlying all of this, remember, are those business associate agreements, so that from a HIPAA perspective and a state perspective we ensure that trust relationships and appropriate use of this data are enforced. In the case of Beth Israel Deaconess and my associated facilities, if a clinician violates a business associate agreement or this trust they are terminated, and I do have three or four terminations a year; typically these are clinicians who look up their neighbors or look up the spouses they're divorcing or do other things that are obviously unethical and inappropriate. We really rarely have people who are just simply out there fishing for data.

So we log in, and then you're asked to enter the name, gender, date of birth, and zip code. Once that is entered, and I've just given you the example of John Clark, 4/1/1927, with a zip code, the record locator service begins a search of that index, and recognize that there are many strategies that could be used to search the index. The simplest of course is exact match: a John Clark, a birth date, a gender and a zip code. But wait a minute, what if John is J-o-n? Hmm, is that an exact match or not? Well, because it looks at spelling and case and spacing and dates exactly, exact match is a pretty blunt algorithm. If you have a nickname or a misspelling, Johnny as opposed to John, a match would not occur.

So a more sophisticated way to do the match would be to say we will use an algorithm, and there are many such algorithms: the Advanced Linking Technology algorithm, Initiate Systems has an algorithm, the folks at Regenstrief have an algorithm. Typically these work by saying, okay, I will actually allow a bit of fuzziness. If a date is typed in but it's a perfect transposition of the month and day, so instead of 01/05/1920 it's 05/01/1920, ah ha, I do allow that because that is an understandable typo. There are nicknames that are valid for an individual, and there are Soundex or NYSIIS transformations of names, such that my name, Halamka, I tell you, people misspell this like you can't believe, so maybe as long as the name sounds identical, even if it is spelled slightly differently, then that's okay.

Well, such an algorithm obviously has, as you can see from this slide labeled screen flow, probabilistic match and score, a whole range of possible false positives and possible false negatives. In our case what we have said is the algorithm needs to be tuned such that false positives are very, very bad and false negatives are acceptable. So that means if I type in John Halamka and in fact get Jane Halamka's records that show I've had twins, that's bad. If on the other hand somebody registered me with completely the wrong birth date and called me Jim instead of John and you missed that one, well, we're willing to tolerate a miss in the interest of delivering to the doctor information which is accurate. So allow some slop, but don't allow those false positives, and tolerate some degree of false negatives.
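[Illustrative sketch, not part of the testimony: a toy scoring function, not the Initiate or Regenstrief algorithm, that makes the tuning argument concrete. Each demographic field contributes a weight, a few understandable errors (a known nickname, a month/day transposition, a close spelling) earn partial credit, and the acceptance threshold is set high so false positives are suppressed at the cost of some false negatives. All weights and the threshold are invented for illustration.]

```python
from difflib import SequenceMatcher

NICKNAMES = {"johnny": "john", "jon": "john", "jim": "james", "bill": "william"}

def name_score(query: str, candidate: str) -> float:
    q, c = query.lower(), candidate.lower()
    q, c = NICKNAMES.get(q, q), NICKNAMES.get(c, c)   # fold known nicknames
    if q == c:
        return 1.0
    ratio = SequenceMatcher(None, q, c).ratio()       # stand-in for Soundex/NYSIIS
    return ratio if ratio >= 0.85 else 0.0            # only very close spellings get credit

def dob_score(query: str, candidate: str) -> float:
    if query == candidate:                            # dates as YYYYMMDD
        return 1.0
    y, m, d = query[:4], query[4:6], query[6:8]
    return 0.8 if candidate == y + d + m else 0.0     # allow a clean month/day swap

def match_score(q: dict, c: dict) -> float:
    score = 0.0
    score += 0.30 * name_score(q["first"], c["first"])
    score += 0.30 * name_score(q["last"], c["last"])
    score += 0.25 * dob_score(q["dob"], c["dob"])
    score += 0.05 * (q["sex"] == c["sex"])
    score += 0.10 * (q["zip"] == c["zip"])
    return score

THRESHOLD = 0.90   # tuned high: reject anything ambiguous rather than risk a false positive

query = {"first": "Jon", "last": "Clark", "dob": "19270401", "sex": "M", "zip": "02215"}
candidate = {"first": "John", "last": "Clark", "dob": "19270104", "sex": "M", "zip": "02215"}
s = match_score(query, candidate)
print(round(s, 2), "match" if s >= THRESHOLD else "no match")
```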

This particular algorithm is in use today by RxHub with 155 million covered lives, and to give you a sense of how that algorithm performs, and again everything is tunable, what they say is that currently, with no editing of the data quality, and we know data quality coming in from most hospitals is awful, and without an army of people doing any manual clean-up, they are getting a 95 percent hit rate just using name, gender, date of birth and zip, and to date have tuned it so their false positives are vanishingly small, so basically it just doesn't happen.

You have flexibility as a country, as a community, as a RHIO, in figuring out what the right threshold is, and probably as part of a policy that we adjudicate as a country we set a threshold that allows us to find the medical information of individuals appropriately through the index but discourages random fishing. So if I type in Ted Kennedy, no birth date, it's not going to list for me, oh yes, there's a four year old in Springfield and there's this guy in Hyannis Port; no, that's not enough information to do a valid search and I'm not going to return any valuable information to you. In our case we'll set the threshold for the community in a way that balances the doctor's need to know with privacy and minimized incidental disclosure.

So once you have used this record locator service, this index, to find the patient, you can then go, with the medical record numbers that have been retrieved, and fetch clinical information, and purely as a test bed I show you examples of how we're pulling out clinical encounters from various participants in our network.

We have populated our record locator service in the state of Massachusetts with 500,000 patients, and in the interest of using this for testing, to make sure all the software works and the algorithms work, we have, although it's the same names, genders and dates of birth, randomly shuffled the columns so that Ted Kennedy is now named Phil and he's a four year old living in the western part of the state. All the names, genders and dates of birth are appropriate, but they are distributed in a way that is truly non-disclosing for testing.

And the administrative aspects of this are that we can set the threshold and we can set the kind of search engine that we are using to match as an exact match or probabilistic match, and that we do have features that allow us to publish records or edit records that are in the record locator service if that's something that's necessary to do and that's just shown in the next two slides.

So just a couple of comments on this matching algorithm. One of the things we have to be very careful of is that we're very transparent about how the matching algorithm works, so I'll give you an example. Early testing that we did of distributed use of the record locator service said that a doctor could go type in name, gender, date of birth, and then we would go out and search, we would do this probabilistic match, and we would return data.

Well, it's pretty important, if you have actually returned a patient's information with a birth date that's different, or with a gender that's different, or with a name that sounds the same but is spelled differently, that you tell the doctor you did that, because it may very well be rare if you tune the algorithm appropriately, but it may be that you have in fact returned that false positive, and so you say to the doctor: you typed in John Smith, I returned John Smyth, so do note that this record is spelled differently and it's up to you whether you think that's appropriate or not.

I will tell you, because we have tuned our algorithm at this point to avoid those false positives, if it's a Smyth as opposed to a Smith, it's spelled differently, we aren't returning it. But nonetheless, disclosure of any information that's substantially different from what the doctor queried is important in a probabilistic match.

So just describing those thresholds, you can see that if we put in a record, John Q. Public with a given Social Security number, date of birth, etc., and we search on that individual, we could get a pretty high score, a pretty high likelihood of a probabilistic match. But as you change elements of the demographics, John, J-o-h-n, becomes J-o-n, well, that matches a little less well; J., the first initial, that matches a little less well; change the last name, and that's even much worse. And you can see how such an algorithm, based on looking at names, nicknames, distributions of zip codes, etc., will adjudicate how close or how far a match is, and you set the threshold appropriately.

So in conclusion, I have been doing this kind of thing live in production since 1999, with nine million patients, and it's the way that the whole Care Group system works today to create a distributed index for retrieving patient information. Going beyond the walls of Care Group and going to the whole community is the effort we have gone through over the last six months as part of our RHIO activities, and at the end of the month I have a board meeting where we're making final decisions on moving into production on a couple of possible use cases, for example the Brigham and Women's and Beth Israel Deaconess sharing records using this common record locator service concept, and using this as a way to tie together our community health centers, who all have disparate medical records. So over the next year there will be some very good publicly available information about how this worked in production, and a huge educational effort to ensure patients understand what we're doing and how we're protecting their privacy is obviously part and parcel of all of this.

So let me now open it up to your questions and thoughts.

MR. REYNOLDS: Okay, John, thank you very much, that was excellent. I'd heard a lot about what you were doing in Massachusetts but that's pretty interesting. Jeff has a question.

MR. BLAIR: Hello John Halamka, this is Jeff. How are you?

DR. HALAMKA: Doing very well, thanks.

MR. BLAIR: Good. Thank you for the presentation and thanks for the achievement that you've been able to pull together in Massachusetts, as well as the inter RHIO protocols that you've also initiated. Since you've been running this system now since 1999 you've probably had one or more occasions when you've been able to hear some legal opinion about what areas might be vulnerable to challenge and which ones you feel reasonably safe on. Could you give us some feedback on that?

DR. HALAMKA: Well, absolutely. One of the great challenges is that the Massachusetts regulatory environment is much more severe than HIPAA, and of course when you look at all the privacy rules throughout the country you're going to have a dizzying array of diversity, where for example in California, I'm told, a consent on an electronic screen is not considered sufficient; in fact you need a handwritten consent from the patient before you could even look up their records. So typically what I find is that the issues all legally revolve around how to do consent: opt in, opt out, what is the form of consent, and can one organization serve as the proxy, getting a consent for another organization. If information is inappropriately disclosed, should the person who did the consent be liable, or should the releasing organization be liable for that inappropriate release of information?

So I'll tell you that in our communities we've seen two attitudes, and I just find this very interesting. There are doctors that are part of our $50 million Mass E-Health Collaborative who are just getting electronic medical records for the first time, and they've said, you know, I buy into this idea of sharing medical records across the community, but I really am most comfortable with an opt-in strategy: I will, as the patient registers for care for the first time in my new electronic medical records system, ask for their consent to share their demographic data with a record locator service. And I don't have any data anyway right now, so there's nothing for me to pre-populate, so that opt-in, as-you-go approach makes sense.

Larger organizations like Partners and Care Group have said, boy, we have millions of records and we think there's huge value to the community, for quality and cost reduction, in sharing data in a way that is opt out and gets consent at the point of care; but boy, for us to hit every one of the nine million historical patients we have and ask for their specific permission to opt in to data sharing, that would take such a long time that the system would never be used by doctors because they'd get so few hits.

So those organizations have the staff to do the consenting process and are willing to take on the burden of doing a proxy, that is, I'm going to look up Partners' information, I'm going to go ask the patient, do you consent for me to go look at your Mass General records, and I'm willing, as Beth Israel Deaconess and Care Group, to take on that responsibility and effort.

The lawyers, as you've said, sort of argue both ways, saying, boy, this opt-in consent to a record locator service is your most risk-averse, conservative approach, no one is going to be able to challenge that; whereas this idea that you can do it at the point of care, well, no one has ever sued over that, but if they did the law is a little murky on that.

So those are the basic issues. I should also just say that one of the strategies that we have adjudicated in our state is no centralization of clinical data. You'll see this record locator service is an index which has nothing more than demographics and medical record numbers, but we don't put all the labs or all the meds in any central database; it's all very much peer to peer, and the data lives at Beth Israel Deaconess and the Brigham and other places. And we call this the Karate Kid defense, because in that movie, a long time ago, they said the best block is not to be there. So if you don't have a database of medications that can be hacked at a global community or RHIO level, that's a good protection, and as we think about the architecture and selling it to the public, the idea that we're only aggregating pointers and not the data itself does in fact make the public feel more comfortable.

MR. REYNOLDS: Do you have another question?

MR. BLAIR: What kind of reactions have you had from patients when they've been asked whether to opt in?

DR. HALAMKA: Well, two reactions. When you think about the fact that we've been using something called CareWeb since 1999 for all this data sharing across Care Group, to my knowledge in the course of the last five or six years there have only been one or two patients who've said, gee, I actually don't want my data shared across the doctors that are caring for me; I mean, it is extremely rare. When folks understand how this will be used by credentialed doctors at the point of care for serving them, they feel fairly positive about it. I think the worry is that as you create a RHIO, well, will my employer have access to the RHIO? Will my insurance company have access to the RHIO? That's I think where the doubt is; it's not clinician to clinician, it's the fear that governments, insurers, and employers will suddenly learn things about you that they shouldn't.

MR. BLAIR: Let me ask, you had mentioned, as you were describing some of the scenarios, that you might have a patient who is going to a psychiatrist, and it is possible that the psychiatrist may have prescribed antidepressants or other medications. If they then go to someone else, maybe because they have sleep deprivation problems or other problems, say a general practitioner, an internist, or a family medicine physician, and that person winds up asking for permission, will the records from the psychiatrist then show up in terms of a drug-to-drug interaction problem, and if not, what about the patient safety implications of that?

DR. HALAMKA: Right, a very good question, and I have two answers. At the record locator service level, if in discussing the sharing of data with the psychiatrist the patient had opted out, well, the patient's psychiatric pointers would not be included in the record locator service. But let's assume that the pointers are in the record locator service. Doctor to doctor exchange for treatment, payment and operations with patient consent is okay in our state, but what's funny is that there is actually a regulatory restriction on getting data from payer-based claims systems such as PBMs that have mental health, substance abuse, or HIV implications. So this is a true statement: I'm an emergency physician, and if a patient comes to me and says, hey John, you have my permission to go search Express Scripts and RxHub for all of my medications including psychiatric, HIV and substance abuse medications, I can't do it; we actually have to maintain, in front of RxHub, a restricted drug list that strips out all those medications that might have a psychiatric implication. And in fact we're working at Beacon Hill right now to get that law rescinded, but while doctor to doctor sharing is fine, in the interest of protecting privacy, laws were put on the books years ago saying that claims data from pharmacy benefit management companies cannot be released unless the patient specifically consents to the payer, not the doc.

MR. BLAIR: This is really significant then, because effectively what you're saying is that despite the system that's been created in large part to protect patient safety, if that patient has substance abuse medications, sexually transmitted diseases, or behavioral health medications, those are filtered out of the system --

DR. HALAMKA: Right.

MR. BLAIR: Any of those could have a drug-to-drug interaction, and so the physician is left in a situation where he probably has to sit down and say, you need to tell me what medications you're taking for any of these other areas. Does that happen, where they wind up saying, you're coming into an emergency room, I need to treat you, I need to know these medications before I can proceed? Or how does the physician get protected from providing a medication that will have a drug-to-drug interaction and a severe adverse reaction; how do you deal with that?

DR. HALAMKA: Right, and so you're exactly correct. The problem that Dr. Brailer has today is that we've got HIPAA, which is a wonderful federal law, but we've got state preemption, and so until we harmonize privacy and security laws and practices across the country we're going to run into regional gotchas like this one. Now, ultimately, because doctor to doctor exchange of data is okay, once we connect every EMR in the state to our network things will be fine, but for the moment, because I do want community-wide medication information exchange, the best way I can get it is by going through RxHub, because the PBMs do have a list of all the medications that were reimbursed and therefore it's a good way for me to at least get a proxy for the active medication list, and you can see that it does have problems. And so what we obviously do when we have doctors using that query is we say, note, this medication list may be incomplete and should be validated or verified with the patient. But hey, I'll tell you, it's better than nothing; instead of their 12 meds you get ten of them, and that actually does help quite a bit.

MR. REYNOLDS: Stan, you had a question?

DR. HUFF: Hi, John, Stan Huff. Thanks a lot for testifying. My question takes a little different tack. I think you presented a convincing case that this is what we need to do for the next five or ten years, just because it's what we can do in that timeframe. I wonder what your thoughts are on whether we should start doing things now so that ten or 15 years from now we're doing something in a better way, or whether you think that what we can do now is in fact what we should do basically forever.

DR. HALAMKA: Fabulous question.

DR. HUFF: Let me just throw out, posit, some does-the-emperor-have-any-clothes kinds of questions. At a grand Gestalt level you could say, now how is my privacy improved by the fact that the very information I'm trying to protect is in fact what passes between these entities to identify me, in terms of my name, birth date, address, etc.? Secondly you could ask, even though again this is the best we can do, whether, if we started planning now, there are ways this could be done more efficiently, so that people have cards and other things, and what cost reductions could occur in this process if we planned a process to have this work as efficiently and effectively as it possibly could. So those are just some of the issues that I would raise in that context: should we start planning now for a better, more efficient way, could such a thing exist, and what would it look like if it did?

DR. HALAMKA: Sure, and so the beauty of the record locator service design is that a national health identifier, if ever implemented in the future, just becomes another piece of data, another demographic indicator in the record locator service. So if you have one, great, we'll query by it; if you don't have one, okay, we'll use probabilistic matching. So there is absolutely no reason why you couldn't do this now, over the next five years, the way we do business, and then, hey, if a decade from now we do have a national health identifier but it's going to take five more years to roll out, that's fine, it becomes a gradual addition to the record locator service.

What Norway does, which I think is simple and instructive, is that they simply give every baby at birth a national identifier which is the day they were born and their birth order, so okay, you're the first baby born today, so you are 01/01/1962 number 1, and then they just crank up the number for the babies that are born that day, and this is a very straightforward way to deal with the identifier. The problem they have had of course is, what do you do with immigrants? What do you do with non-citizens? And so it's quite funny, and I've advised Norway on this: much of the time when folks immigrate they don't have a birth certificate, they don't know their birth date, so they just give them 01/01 of the year they think they were born. The problem that Norway is having is that they're running out of digits because so many people are born on 01/01.

But the bottom line is I concur with a parallel strategy, just recognizing that the issuance of a national identifier is a big, big project that will take many, many years and be very, very controversial, and the short term strategy I've outlined supports the eventual use of a national identifier should that ever come to pass.

MR. REYNOLDS: Michael?

DR. FITZMAURICE: John, Mike Fitzmaurice, I have a question, and you've probably answered a good part of it. I see you're using a subset of variables about me to identify me that could just as easily invade my privacy as a Social Security number, and probably the same data are on all my credit cards, or in the databases behind my credit cards. So the real protection you're offering is the authenticated access to the index and to the medical record data itself, and so to my mind it doesn't matter whether a Social Security number is used or a national patient identifier is used; it's not the number that should be controversial, it's how well the data it links to are protected. Your comments?

DR. HALAMKA: Right. I think if you ask individuals how they feel about specific demographic indicators, and this is just from focus groups we run, people believe, rightly or wrongly, that given name, gender, and date of birth, identity theft is unlikely. If you're giving out your Social Security number, oh, that's the bad one, they can steal my identity with that. And so at least from the public's perspective, the idea that we are doing a somewhat inexact match based on commonly available information about you, as opposed to something that is secret, like your Social Security number, is more acceptable to them. It's perception.

MR. REYNOLDS: Simon? This will be the last question.

DR. COHN: John, good morning. Listen, I had a question. Obviously, if one is supportive of this particular approach, as you move from a localized environment such as you have to the sort of national environment that you were describing, it seems that one would want to have standards around the data elements that one is using for all this, and I'm asking that question knowing that obviously one can tune the probabilistic matching algorithms up or down. Now is that a reasonable assumption here? And if indeed that's the case, how much variation is there in terms of the data elements that are used from one environment to another?

DR. HALAMKA: So a couple of examples there. In Massachusetts we believe that name, gender, date of birth and zip are good matching criteria; in Indiana they really, really want to use the Social Security number. And that's fine. What we've said is the PID segment in the HL7 2.X standard affords you a lot of possible demographic identifiers; we would simply mandate those that are required versus optional, and an individual community could use some of those optional identifiers as a mechanism for refining the match, ensuring better accuracy and fewer false positives. But you do at least have to start with the basics, and of course gender, well, I trained in San Francisco and our triage sheet in the emergency department had five possibilities for the gender box: male, female, unknown, other and in transition. So you'd better decide, as a vocabulary, if you're going to use name, gender and date of birth, what the possible genders in that vocabulary are, and that's where the implementation guide comes in. So common standards, yes, but then common implementation guides for those standards that specify issues like vocabulary and mandatory versus optional fields.
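[Illustrative sketch, not part of the testimony: one way an implementation guide pins down constraints on top of the base HL7 standard, that is, which fields are mandatory for the record locator service and what the controlled gender vocabulary is. The specific field list and codes below are invented for illustration, not the actual guide Dr. Halamka describes.]

```python
REQUIRED_FIELDS = {"last_name", "first_name", "birth_date", "sex", "zip"}
OPTIONAL_FIELDS = {"ssn", "middle_name", "phone"}   # a community may add optional matchers
ALLOWED_SEX_CODES = {"M", "F", "U", "O"}            # e.g. HL7 table-0001 style codes

def validate_registration(record: dict) -> list[str]:
    """Return a list of implementation-guide violations (empty list means valid)."""
    errors = [f"missing required field: {f}" for f in REQUIRED_FIELDS if not record.get(f)]
    unknown = set(record) - REQUIRED_FIELDS - OPTIONAL_FIELDS
    errors += [f"field not in implementation guide: {f}" for f in sorted(unknown)]
    if record.get("sex") not in ALLOWED_SEX_CODES:
        errors.append(f"sex code {record.get('sex')!r} not in controlled vocabulary")
    return errors

print(validate_registration(
    {"last_name": "Clark", "first_name": "John", "birth_date": "19270401",
     "sex": "X", "zip": "02215"}))
# ["sex code 'X' not in controlled vocabulary"]
```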

And one of the things that I'm happy to provide the committee is that we do have an implementation guide that's quite detailed covering the HL7 2.4 and 3.0 standards and all vocabularies, it's finished and certainly available for your records.

MR. REYNOLDS: I'm going to let one more question come in, John, from Judy, since she talked you into talking to us.

DR. WARREN: Hi, John, this is Judy Warren. I just had one question that your answer to Simon stimulated. If we have these standards for the data, what relationship do the standards have to the algorithm that's used? Can we have different algorithms depending on the company, or do we need to have some sort of standards applied to these algorithms?

DR. HALAMKA: And that is a really great question, because what we've said in our prototype designs is that we are allowing different search engines: exact match, the Initiate algorithm, the Regenstrief algorithm. But the challenge to me is, if these algorithms have very different statistical performance, you may very well go to Indiana, do a query, and be allowed to get an incidental disclosure about somebody's home address that wouldn't happen if you were in Massachusetts. So at least at the beginning we are mandating that certain fields be required and certain thresholds be set, and we attempt to minimize fishing. But you're right, in the long term, if we're going to achieve the same performance for every participant, then there may need to be some adjudication of the form of the search algorithm. It could be done.

MR. REYNOLDS: John, this is Harry Reynolds, on behalf of the committee I'd like to thank you for your testimony and we'll all anxiously keep an eye on what's going on in Massachusetts and thanks for fitting us into your schedule today.

DR. HALAMKA: Well, absolutely. Sorry I couldn't be down there today; I have a delegation of Japanese visitors that flew in, and alas I couldn't be with both them and you, but I live by email, so if you guys need anything from me please email me.

MR. REYNOLDS: Okay, thank you, John.

Our next presentation on the same subject will be from Lorraine Fernandes, and I think Scott Schumacher is here today too; I don't know whether he's going to participate or not.

Agenda Item: Matching Patients to Their Records - Ms. Fernandes

MS. FERNANDES: Good morning everyone. I'm Lorraine Fernandes, senior vice president of Initiate Systems' health care practice. I am a health information management professional by training, and I've spent my career of about 25 years doing one of two things: either running large medical record and health information departments in provider organizations, or, in the last ten years or so, dealing with the challenges and issues and providing products and services around patient identification. So I thank Judy for inviting us here today to share with you some detailed information and some common industry practices and experiences around patient identification and patient matching.

I'll let Scott introduce himself as I turn the program over to him about midway here, but just briefly Scott is our chief scientist and responsible for algorithm development and the integration of the Initiate algorithm that you've heard John Halamka talk about. You also heard John talk about the Advanced Linkage Technology algorithm which is an earlier algorithm we had out in the industry. So we'll give you a tour of the industry and what the health care industry as well as a few others are doing around patient matching and patient identification.

Here is an overview of what we're going to discuss for the next half hour or so. We will talk about patient identification technology and how it is widely used in the industry, in the U.S., Canada, and other countries in the world, and we'll give you some brief examples, through some of our clients and other components of the industry, of how this technology has actually been used as a foundation for patient identification and the development of an electronic health record.

So as you well know, managing patient identities across a health care ecosystem is complex; there are a lot of moving parts to the health care delivery system. It's not just hospitals, it's not just physician groups, it's almost an endless stream of places we may interact with for getting care and delivering care on a day to day basis. And it's not only electronic records that are out there today; you'll hear a lot in the industry about how many manual records are still out there, and it's very true, a lot of statistics say that only 25 percent of physicians are really practicing medicine using electronic health records. But the technology can deal with the fact that you've got varying standards, you've got varying messaging structures, and you have a mixture of electronic as well as manual records out there.

We'll talk briefly about a national health care identifier and why we do not believe it's the silver bullet, or the magic bullet, for actually doing the linking and matching of the patient records that are out there today. Scott will specifically discuss with you the algorithm that you've heard John talk about, particularly the false positive and false negative issue; it does generate a significant amount of interest out there: how do you manage that, how do you set patient expectations, how do you set clinician and provider expectations around the data that's going to be presented using a probabilistic algorithm.

We'll talk a little bit about Canada and our friends a little bit north of the border here. In some ways they're a little bit ahead of us in this discussion, and we'll fill you in on things they've done over the last few years and how they're actually deploying this technology today in Canada to facilitate electronic health records and the exchange of data within the provinces and across the provinces.

And last, throughout our discussion we'll talk about the federated architecture that John has already given you an overview of, and how that's available today and used widely in health care as well as outside of health care. So that's the tour we'll give you through patient identification and data matching.

Person identification technology is widely used in the health care industry today. As you can see here, we cite some basic data points: we have analyzed over two billion records from the health care industry, many of them several times, unfortunately, as organizations evaluate, deploy, clean up, standardize, whatever, the patient identification data out there. There are about 1,400 health care facilities in fact using this technology today to facilitate accurate and consistent patient identification, and implementations that use this type of technology scale greatly, deployments of maybe 500,000 records to 500 million records. So the technology is there, it's very robust, it's very scalable, and you don't have to sacrifice accuracy for scalability, privacy, or security.

On the right hand side here you see some examples of industries and specific adopters of this. At the very top you see Partners Health Care in Boston, which uses this technology to facilitate accurate patient identification at the point of care. You'll see RxHub is the second one there, and Teri is following me today so I won't go into great detail about that. But as John said they use this technology in their PBM deployment to manage identification of their patients and customers.

The third bullet here is probably what you've heard most about from John, this is the CSC prototype that John described for you, which uses the Initiate algorithm as one of the algorithms in the development of the prototype to facilitate patient identification.

And the last one I would ask you to focus on for just a moment, I think it's quite interesting, and that's the PAML one, a large reference lab in Spokane, Washington, that uses it for two very different reasons. They use it to facilitate developing a single bill for a patient, so that only one bill goes to, say, Lorraine Fernandes, even though I've had three different types of lab tests, maybe from three different providers, maybe one was Blue Cross, maybe one is my auto insurance, maybe one is an unemployment or a worker's comp claim, so they bring all of that together with this type of technology to send a single bill to a patient. They also use the technology to facilitate bringing all those results together in order to present a single view of the lab data to the authorized clinician who wants to see historically what's happened with the lab data.

And last but certainly not least the Social Security Administration is using this technology in a proof of concept today to facilitate physician matching within the Social Security information and yesterday we got word that in fact the Veterans Administration is licensing this technology to facilitate patient identification across the various points of service in the VA. So it is technology that's widely used, scalable, and has been adopted for many years in the health care industry as well as banking, finance, telecommunications, and other types of industries.

In the U.S. as well as the rest of the world I think we have great expectations for what's going to come with the gradual development and deployment of electronic health records. We expect that we're going to have improved patient care, we're going to have a reduction in the redundancy of testing today. You read many studies that say anywhere between 15 and 25 percent of the tests that are done in the health care system today are redundant because the health care providers either can't find the paper record or don't in fact have access to the electronic record because it's at another point of care that they're not authorized to get today.

What we believe you're going to see is a gradual deployment and evolution toward what is really a virtual electronic health record, because you're going to have clinical data that resides primarily in the sources where it was created, where it's maintained, where it's monitored for privacy and security by the health care provider that in fact created it and has the relationship with the patient. So this virtual health record or medical record, as it evolves over time, is going to bring great benefit to the health care delivery system, and ultimately we should have, perhaps along a parallel pathway, a personal health record that you and I could each develop with whatever vendor or process or product we choose, so that we have control of our records as well as the provider and the various places where the data may have been created.

I alluded earlier to a complex health care ecosystem and this is obviously a very busy and a very complex slide to really illustrate the point that the evolution or the building of the electronic health record is going to be gradual. We're not going to have a big bang in the United States like you might have in other countries; you're going to have many components to the building of the electronic health record, the RHIOs that John has talked about, the National Health Information Network in whatever form that actually takes place.

As this is evolving you're going to have a mixture of messaging standards, and related to that, timeframes for when people can actually share data, because there is a lot of manual data around yet today. You're going to have skill set issues perhaps in the various components of the health care delivery system. Large IDNs are going to have large IT shops, so they're going to have sophistication, they're going to have the depth of talent there to deploy an electronic health record and an electronic medical record on a timely basis.

Judy asked me if there were challenges we should address in this presentation and there probably is one that's noteworthy. In the busyness of this slide you see the different messaging, and you also have a hint of the different standards that exist out there, so as a vendor you have the challenge of deciding, do I develop my product to support HL7 2.3, 2.4, 2.5, 3.0, DICOM, ANSI, and the list goes on and on. While patient identification technology can address the plethora of standards we have out there, it certainly is a challenge as things evolve in the health care system.

And from a development of this ecosystem to sharing data obviously the multitude of standards you have out there does make it challenging when you start to talk about how you're going to communicate RHIO to RHIO, what type of standards might each RHIO have in place, and how are you going to build that community network. As I said the patient identification technology can deal with the multitude of standards, it just makes it a little bit more challenging.

Let's spend just a moment talking about the national unique health care identifier, which we obviously do not have in the United States today. Should we ever develop one, it would just be another piece of data that a probabilistic algorithm like the one Scott is going to talk about would utilize in facilitating patient identification. A unique health care identifier wouldn't be the silver bullet; you're still going to need some type of algorithm to facilitate patient identification in addition to having a unique health care identifier because, let's face it, human beings make human mistakes. You make typographical errors, grandma brings the child in instead of mom or dad, you don't have your card with you, and the list goes on and on of why even with a national health care identifier you're going to have challenges with patient identification.

Deploying a national health care identifier in the U.S. would be a long and expensive process, it would probably take many years even after we had consensus of what we needed to do and how you would do it. You would have significant challenges with retrofitting the legacy systems that are out there today. Many of them probably wouldn't be able to have a field to accommodate a national health care identifier and therefore would have to probably replace systems at some point in the future.

Lastly, you'll note Connecting for Health and specifically the white paper cited on the back page there that talks about accurately linking health information. Connecting for Health discussed this in a fairly extensive way in 2004 when the workgroup met and decided that while we might have one someday, and might is a loose term there, it would just be another data element to facilitate patient identification; it's not a silver bullet.

So let's take a look quickly at what it might look like to have a federated approach to patient identification, where the clinical data resides at the point it was created and yet you have, as John gave us an orientation to, a community type index, a RHIO type index that facilitates patient identification. You're going to have many data sources that might contain that demographic information, that basic information about a patient. There would be queries to some type of identity hub that each RHIO would have; that query or search would be done using some basic demographic information, name, address, zip code perhaps, telephone number, perhaps Social Security number if some of the RHIOs might use that. We would commonly see probably somewhere between four and eight data elements used to facilitate that patient identification. Yes, there are people out there who use probabilistic algorithms today that will perhaps use ten or even 12 data elements to facilitate patient identification, but it's about a handful that you see commonly used out there, and it is the customer's choice as to which data elements are used.

So we've done a query for Robert Johnson using these basic data elements of name and date of birth. What you might get back from that type of query could vary significantly from RHIO to RHIO as the networks evolve. This particular example shows that you're going to get back the facilities where Robert Johnson in fact has data, the local medical record number for that facility, the last date of service, the matching demographic information from the institution that in fact has data for Robert Johnson as well as Robert's date of birth.

Now whether you show the last date of service is obviously a business decision that each local provider and each RHIO would make based upon whatever their business model and their community model has been developed in that organization. You might also have some basic clinical data that would be represented there, perhaps you're going to show the last ER visit, perhaps you're going to show allergies or medication reactions, things like that. So the level of detail that would be presented back to a query as I said would vary from RHIO to RHIO based upon their model of business and what the community has supported in that particular area.
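To make that concrete, here is a minimal sketch, written in Python, of a federated identity-hub query and the kind of per-facility response described above. The class names, field names, and sample values are assumptions made for illustration only; they are not any particular RHIO's or vendor's actual interface.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class IdentityQuery:
    # Basic demographics sent to the RHIO identity hub; no clinical data travels with the query.
    last_name: str
    first_name: str
    date_of_birth: str                     # e.g. "1962-03-15"
    zip_code: Optional[str] = None

@dataclass
class FacilityMatch:
    # One row per facility that holds records for the matched person.
    facility: str
    local_mrn: str                         # the facility's own medical record number
    last_date_of_service: Optional[str]    # returned only if the RHIO's business rules allow it
    matched_name: str
    matched_dob: str

def query_identity_hub(q: IdentityQuery) -> List[FacilityMatch]:
    # Hypothetical stand-in: a real hub would run a probabilistic match against its
    # index and return only the facilities that score above threshold.
    return [
        FacilityMatch("Community Hospital", "MRN-004512", "2005-08-02",
                      "JOHNSON, ROBERT", "1962-03-15"),
        FacilityMatch("Westside Clinic", "C-88321", None,
                      "JOHNSON, ROBERT A", "1962-03-15"),
    ]

if __name__ == "__main__":
    for m in query_identity_hub(IdentityQuery("Johnson", "Robert", "1962-03-15")):
        print(m)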

You have managed privacy and security, you've created a balance with this type of federated approach, you've only maintained the clinical data at the local level there, it's authorized with appropriate queries after the patient identification is made using a probabilistic algorithm. By doing this you have safeguarded and kept just a very limited amount of data at that centralized level, the few data elements that are in fact needed to facilitate patient identification. You still have the ability to audit who had access to that data, what data elements were presented to that particular person doing the access, so it is important in the context of privacy and security to have auditing mechanisms in place even at the basic level of patient identification and the technology can accommodate that.

You can do this on a very large scale, as I said there are deployments with this technology that approach 500 million records. You can do this quickly, this is not a year long or years long process, deployments can be done in a matter of six months depending upon the resources that a particular client can bring to it, and some can do it even quicker than that. So you can do it accurately, consistently, in a minimal amount of time, and yet not disrupt the work environment for the health care delivery system as it is today.

I'm going to turn things over to Scott who can talk to you specifically about the technology.

Agenda Item: Matching Patients to Their Records - Dr. Schumacher

DR. SCHUMACHER: Good morning, thank you for inviting me. I'm actually a probabilist by training and so it's always exciting to me when there's widespread or at least some interest in the things that I find I spend most of my time with.

A couple points before we start going down into the algorithm. I want to walk through the algorithm, what are the issues with it, the data that go with it, how you develop it, what are the points to it, how well does it perform. But really in terms of doing this there are two components to putting together an identity hub or an identity matching system, and one is a database component, a software component. It's very key that you have a system which can scale to large databases and high search rates, and that's as much algorithm as it is software development. But the portion I'm going to talk about right now is the algorithm component.

There are really three things that go into doing matching of patient records or any other type of decision process here, and the three are listed on this one. The first one is theory; there's some mathematical theory that tells you the best way to use the data that you have for making the decision you have to make. This is a likelihood ratio comparison, which is what anybody does for a standard hypothesis test. So there's theory within there, and theory is important as it tells you how to extrapolate this from one million records to ten million records.

The other is empirical knowledge: how do I look at data, what are the types of errors that people make. The empirical knowledge that comes from looking at two billion records, getting feedback from the people who are going out to the field and evaluating your match criteria, that's also a key component. We reflect that really in how we compare attributes together, so we may have attribute comparison techniques which are very specific, the name one is very specific, and ones that are general that you would apply to an arbitrary identifier. So the empirical knowledge, which is the types of errors you see in recorded data, is also a key component to the algorithm development.

And lastly is the data analysis component itself, so the theory, how you're going to do the comparison, and then the data analysis component that tells you how to link it all together numerically, so to use your data in the optimal way.

Within this you always have two problems, and John introduced the topic quite well this morning: whenever you're making a decision you can do two things wrong. You can link people that you shouldn't link, or you cannot find people that you should be finding. Those are the false positives and false negatives, type one and type two in the standard statistical sense, but in this realm we talk about a false negative, something I missed, I should have declared him and I didn't, or a false positive, I said these guys were the same and they weren't. And you're always fighting those two things, and each of these three components goes to address those problems individually.

So if we take a first look at false negatives, why doesn't exact match work? Well, exact match doesn't work for a lot of reasons; primarily it's how people record data. I use nicknames, it's manually done, there are transpositions of first name and middle name, transpositions of first and last, hyphenated last names. Variation in recording data is the key reason that you have to do this type of thing. Again, the empirical knowledge is what are the types of mistakes that people make and how do you address them.

The other thing that typically causes you to miss things is missing data, so you have a name but no date of birth, or you have a name and a date of birth but no zip code. When can you, when is that enough to be confident in linking and when is it not enough to be confident in linking? Deterministic rules usually say I have to have a name and this and this and they're very tight along that way. Other times you can get enough certainty just using the attributes themselves to be able to link even when there's missing data.

On the first component of that, the variation, let me use name for a minute, because within any patient record most of the information, in an information theoretic sense, is in the name. And so it's to your advantage to work hard to understand the recording variations. So for example, three years ago when we were working with a data set with a large Asian population we had noticed that first name/last name reversals, the number of name tokens you got in a particular record, or their order, was causing us to miss a lot of records, and that is because in a lot of cultures the notion of first, middle, last doesn't make any sense but they're recording it in an English system. So we revamped our name comparison technique to in fact ignore blocking and just take the tokens themselves and look for the best possible alignment of the tokens within two names. This allows us to catch these places where people are using first name/last name reversals, etc., in the Asian populations.

And within that you have to worry about not only do the tokens match, do the name token pieces match, but how do they match: do they match exactly, are they nickname equivalents, are they phonetic equivalents, name to initials, are they just typing errors. All of those things go into the comparison function and return a number that says this name is close to this one by a certain amount.
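As an illustration of the token-based name comparison described here, the following is a minimal Python sketch that aligns name tokens regardless of order and scores each pair of tokens. The scoring values and the simple typo test are invented for the sketch; the real comparison functions, including nickname and phonetic tables, are more elaborate and are not public.

from itertools import permutations

def token_score(a: str, b: str) -> float:
    # Crude per-token comparison: exact, initial-to-name, or near-typo.
    # A production system would also consult nickname and phonetic tables.
    a, b = a.lower(), b.lower()
    if a == b:
        return 1.0
    if len(a) == 1 or len(b) == 1:                      # initial versus full name
        return 0.6 if a[0] == b[0] else 0.0
    # crude near-match check for a single-character typo
    if abs(len(a) - len(b)) <= 1 and sum(x != y for x, y in zip(a, b)) <= 1:
        return 0.8
    return 0.0

def name_score(name1: str, name2: str) -> float:
    # Best alignment of tokens regardless of order, so "WEI CHEN" versus
    # "CHEN WEI" still scores highly even though the fields are reversed.
    t1, t2 = name1.split(), name2.split()
    if len(t1) > len(t2):
        t1, t2 = t2, t1
    best = 0.0
    for perm in permutations(t2, len(t1)):              # token counts are small, so this is cheap
        best = max(best, sum(token_score(a, b) for a, b in zip(t1, perm)))
    return best / len(t2)                               # unmatched tokens dilute the score

print(name_score("WEI CHEN", "CHEN WEI"))               # high: tokens align after reordering
print(name_score("ROBERT A JOHNSON", "JOHNSON ROBERT"))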

For other attributes it's not that hard. Social Security number for example, the typical errors you see are typographical errors, transpositions, digits wrong, etc. For those the comparison functions are very straightforward.
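A minimal sketch of that kind of identifier comparison, assuming we only want to recognize a single wrong digit or an adjacent transposition, might look like this in Python; the categories returned are illustrative, not the product's actual outputs.

def ssn_agreement(a: str, b: str) -> str:
    # Compare two 9-digit identifiers, tolerating the typical clerical errors:
    # one wrong digit or one adjacent transposition. Illustrative only.
    a, b = a.replace("-", ""), b.replace("-", "")
    if a == b:
        return "exact"
    if len(a) == len(b):
        diffs = [i for i in range(len(a)) if a[i] != b[i]]
        if len(diffs) == 1:
            return "one digit off"
        if (len(diffs) == 2 and diffs[1] == diffs[0] + 1
                and a[diffs[0]] == b[diffs[1]] and a[diffs[1]] == b[diffs[0]]):
            return "adjacent transposition"
    return "disagree"

print(ssn_agreement("123-45-6789", "123-45-6798"))   # adjacent transposition
print(ssn_agreement("123-45-6789", "123-45-6780"))   # one digit off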

In terms of addressing thin data, the only way around making up for lack of data is to use the data you have the best way you know how, and that is to weight it according to the frequency and the types of errors that you see. So within all of these it's the theory, it's the empirical knowledge in the names, and it's in using the data that you fight this particular problem.

And if you go to false positives, that is linking things that you shouldn't link, what's the typical cause of that? The typical cause is agreement on frequently occurring attribute values. Linda Smith is one of the most common names, so Linda Smith and a date of birth is not enough to find somebody in the United States, there are too many of them. But if you run across Albert Einstein and a date of birth, is that infrequent enough that you would be happy to declare a match on a particular dataset? So it's using the values of the attributes as well as the type of attribute in the matching that gives you the best use of the data going forward.
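To show how agreement on a rare value carries more evidence than agreement on a common one, here is a small Python sketch using invented frequencies; the log-likelihood-ratio form is standard for probabilistic record linkage, but the numbers are purely illustrative.

import math

# Hypothetical surname frequencies (fraction of a ten-million-person index).
SURNAME_FREQ = {"SMITH": 0.009, "EINSTEIN": 0.000002}
RANDOM_DOB_AGREEMENT = 1.0 / 36525      # rough chance two random people share a date of birth

def agreement_weight(prob_if_same: float, prob_if_random: float) -> float:
    # Log-likelihood-ratio style weight: how much evidence an agreement on this
    # value gives that two records belong to the same person.
    return math.log2(prob_if_same / prob_if_random)

# Agreement on a common surname versus a rare one, assuming the name is recorded
# consistently 95 percent of the time when it really is the same person.
for name, freq in SURNAME_FREQ.items():
    print(name, round(agreement_weight(0.95, freq), 1), "bits")

# A matching date of birth adds roughly this much more evidence on top:
print("DOB", round(agreement_weight(0.95, RANDOM_DOB_AGREEMENT), 1), "bits")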

The other component to it is that you can in fact have local area variations, and this is where the algorithm serendipitously matches up with how RHIOs are built. The frequency of names and the types of errors that you might see in Minnesota looking for Schumacher are different than you will find in Los Angeles looking for Schumacher, so a local instance of the weighting algorithm can take advantage of that particular piece of information, whereas a national or overall set of weights naturally couldn't; it has to use one set of components for everyone.

The other thing that bothers you a lot with matching is family members, and this is where you go beyond what I said at the beginning, that we use a binary detector, same/not same; it's really a trinary detector, a Bayesian classifier if you like: these are different people, this is the same person, or these are family related. And that's really just a re-weighting of the attributes you have and how you use them. So comparing two members where you think they might be in the same family, the weight for a last name match is significantly less than it is overall.

The one thing with this that I think is key is that you're using theory in deciding how you're doing matching statistics. Before I did patient identification I did remote sensing applications, algorithms for remote sensing applications for the Department of Defense, and it's always the case that you're going to study a small area and then you're going to try to extrapolate performance over the entire world. Theory is a good thing that allows you to take that information and fold it forward, so that you know how your matching system will react as you change the size.

Basically all there really is to the probabilistic matching is looking at ratios of probabilities, how many Smiths are out there, how many times do I see a typo on a Social Security number, how many people are in this zip code. So it's all determined by ratios of probabilities and the weights that come out of this realm just tell you in a likelihood sense how to combine the results of different attributes.

So this weighting scheme is designed to take not only a fixed set of attributes, it also tells you how to combine the information contribution from other attributes that you may have. So if in a particular environment you're getting birth place, and in the VA that's a key attribute by the way, place of birth, then knowing how many people live in that birth place and the errors in recording it allows you to fold that attribute in naturally, so it all falls in using likelihood units and gets put together in a theoretically consistent and scalable way. So that's the data analysis component that goes with it.
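A minimal sketch of that combination step, with invented weights and thresholds, might look like the following; the point is only that each attribute comparison contributes a term in likelihood units, and an extra attribute such as birth place simply adds another term.

# Illustrative per-attribute weights in likelihood units (log-likelihood ratios).
# Positive supports a match, negative argues against it. These numbers are
# invented for the sketch, not any vendor's actual weights.
WEIGHTS = {
    ("name", "close"): 9.0,
    ("name", "disagree"): -6.0,
    ("dob", "exact"): 12.0,
    ("dob", "disagree"): -8.0,
    ("zip", "exact"): 5.0,
    ("zip", "missing"): 0.0,           # missing data contributes nothing either way
    ("birthplace", "exact"): 7.0,      # an extra attribute just adds another term
}

UPPER, LOWER = 20.0, 8.0               # autolink / clerical-review thresholds (illustrative)

def classify(comparisons: dict) -> str:
    score = sum(WEIGHTS[(attr, outcome)] for attr, outcome in comparisons.items())
    if score >= UPPER:
        return f"link (score {score})"
    if score >= LOWER:
        return f"possible match, send to review (score {score})"
    return f"no link (score {score})"

# Name and DOB agree, zip missing: strong enough to link.
print(classify({"name": "close", "dob": "exact", "zip": "missing"}))
# Name close but DOB disagrees: not linked.
print(classify({"name": "close", "dob": "disagree", "zip": "exact"}))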

One of the things that John had mentioned which is a key point is data quality and its impact on performance. So the question is, how well does this thing really work, does it work on a large scale, and if so how well? We've been doing this for a while, we have both theory and a lot of data and analysis behind it, and when we did RxHub three, four years ago their matching problem was really what we thought was "on the edge" in terms of whether they had the data to support the types of matching that they wanted to do.

So back at that time we built a simulation that would allow us to predict performance based upon data quality and the data attributes that are available, fed through our matching system or really any matching system that follows the principles that I put up earlier, which would be theoretically based, understanding data variation, and based upon data analysis and true frequencies, any process out there that meets that. So we put together a simulation that we have used over time, and I thought I'd just bring some results here primarily to illustrate the relative contributions of data quality, or data availability, and matching accuracy.

So for this simulation what I've done is I've set it up for ten million members, figuring that's a typical RHIO size case. And I've set it for an extremely low false positive rate; as you can hear from the conversations, from Halamka and later from RxHub, we all think an extremely low false positive rate is the way to go with this, so I've done it that way. I've taken four attributes and their normal variations, the normal errors you would see; within that I took some client data and did these statistics. And then I take the simulation and I vary how often I get each attribute, how often these data are available, and look at the result, what's my false negative rate. Fix the false positive rate and now let's see what we are going to miss as we change attribute quality.

So the first line is you have name, date of birth, and zip 100 percent of the time and no Social Security number, and it's predicting a false negative rate of six percent, a small number, so you're getting about 94 percent of the results with data like this, quite a good system you would say at that level, very effective. Just dropping date of birth and zip down by ten percent has a significant impact upon the performance here; you see the false negative rate within this has gone up to 22 percent, still a usable system but clearly not what we would like to see. You want to be above 90 percent, that would be a good place to be within these numbers.

If you add in Social Security number, the full Social Security number, and you say you only get it about three quarters of the time, 70 percent or so, which is a typical number you see in hospital data, you're back where you want to be, you're back to getting 90-plus percent of the matches out there.

And interestingly enough, if you go to four digits, so your system is just going to work on the last four digits of the Social Security number, not the full nine, so presumably you are protecting privacy by only storing four, you see within that realm the false negative rate is still below ten percent, so it's still quite a usable system taking into account data quality.
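As a rough illustration of this kind of prediction, here is a small Monte Carlo sketch in Python that varies how often each attribute is available and reports the resulting false negative rate. The weights, agreement probabilities, and threshold are invented for the sketch, so the percentages it prints will not reproduce the figures quoted in the testimony; a real analysis would also simulate non-matching pairs to set the threshold for the target false positive rate.

import random

random.seed(0)

# Invented weights (log-likelihood units) and agreement probabilities; not the real model.
W     = {"name": 9.0, "dob": 12.0, "zip": 5.0, "ssn4": 10.0}
AGREE = {"name": 0.95, "dob": 0.97, "zip": 0.92, "ssn4": 0.98}   # chance the recorded values still agree
THRESHOLD = 20.0   # would really be set by also simulating non-matching pairs
                   # so that the false positive rate stays extremely low

def false_negative_rate(availability, trials=200_000):
    # Fraction of true-match pairs scoring below the link threshold when each
    # attribute is present with the given probability (disagreements are
    # simplified to contribute nothing rather than negative evidence).
    misses = 0
    for _ in range(trials):
        score = 0.0
        for attr, w in W.items():
            if random.random() < availability[attr] and random.random() < AGREE[attr]:
                score += w
        if score < THRESHOLD:
            misses += 1
    return misses / trials

scenarios = {
    "name/dob/zip always, no SSN":  {"name": 1.0, "dob": 1.0, "zip": 1.0, "ssn4": 0.0},
    "dob and zip only 90% present": {"name": 1.0, "dob": 0.9, "zip": 0.9, "ssn4": 0.0},
    "add last-4 SSN, 70% present":  {"name": 1.0, "dob": 0.9, "zip": 0.9, "ssn4": 0.7},
}
for label, avail in scenarios.items():
    print(f"{label}: false negatives ~{false_negative_rate(avail):.0%}")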

Now in a lot of environments you'd like to say you have the data 100 percent of the time but you don't, and specifically as you start integrating in information from health care facilities, typically 90 percent is a good number, that's an average over three or four places that we have looked at. So within that realm, to come up with a reasonable system, additional information beyond the last four digits of the Social Security number, birth place, and other attributes will need to be folded into this algorithm.

I really just brought this though as an illustration that we can do predictions on this, this is not a fly by night science, we have some theory behind it, and we can do predictions on how these things will operate in the real world.

And with that I think I'd like to turn it back to Lorraine to talk about some applications.

MS. FERNANDES: I said briefly at the beginning that Canada has addressed this issue over the last few years and in fact they made some health care and business decisions across the provinces based upon blueprints that were designed for how they're going to deploy patient identification technology in Canada. There are in fact six provinces that have already selected their technology for patient identification and are in various stages of deploying that to facilitate identification at a local, health authority, and provincial level, and ultimately across all of Canada.

What this might look like at a national level is a varying degree of adoption that takes advantage of the deployments that are already in the United States today, as I said at the beginning a lot of large integrated delivery networks already have this type of technology and use it to facilitate patient identification, so it's perhaps rolling those IDNs up that are using the technology to provide the data at a RHIO level, communication on a RHIO to RHIO level, so that you have a flexible deployment based upon how health care is delivered, what the community wants for their structure, their environment, their architecture at that regional or perhaps at a state level.

And just a pictorial of what it might look like across the United States, as John said perhaps we have 50 regional or state hubs out there supporting patient identification, perhaps it's 100, perhaps it's 200, we don't know at this point in time. But the technology is going to enable a gradual building of that virtual electronic health record and a national health information network, so you can invest and realize the benefits to this system gradually, we don't have to wait for a big bang of three years or five years or ten years from now, you can adapt to the varying standards, the diversity of data elements, and the fact that you do have data quality issues and you are going to have thinness of data out there in the health care system today.

In summary I would just say that patient matching or identification technology is proven, it's accurate, it's scalable, and it addresses a lot of the challenges that we've talked about with standards and messaging in the health care ecosystem today. It does support the federated approach you want, so that you can have the clinical data residing at that local level, and at that local level you can manage the privacy, security, and the opt in and opt out parameters that have to be put forward to you and me as health care consumers.

Thank you.

MR. REYNOLDS: Okay, thank you. I've got a question, and then Steve and Jeff. Thank you very much. I think what's been interesting to me, one of the exciting things about being on this committee is that you're on multiple subcommittees, and one of the tough parts is that you're on multiple subcommittees. Coming into this I was anxious to see the approach, and as you hear RHIOs, and I'm all for RHIOs, but as you hear RHIOs and you hear the fact that everybody is getting away from identity theft by getting off of Social Security numbers, now we're talking about using what I would consider more accessible data to find people. RHIOs are not mentioned in any legislation anywhere as covered anythings, they're funded by other than covered entities who put money into them and want return from them, and if you think of a RHIO talking to a RHIO, you've got lots of players in there that may not be covered on this thing.

So I'd just love a comment from you. It's thrown me back into our privacy discussions that we've had, and as to standards, going away from Social Security numbers philosophically was a bit of a standard, and going now to this other data was a little different view than I thought, which is exciting, I mean that's why we're having this, to understand what's going on. So any comment you can make --

MS. FERNANDES: I'd offer first a comment that for a lot of the RHIOs, at whatever stage they are in development, the two primary issues are probably financial sustainability and the privacy issue of how do we manage and secure the data, how do we ensure our community that we have taken care of privacy and security. So what I commonly hear is that very early on there are agreements, and in fact Connecting for Health is developing a model agreement for the RHIOs to use, and my understanding is that's going to be ready by the end of the year or very early next year, so that there's a model that these RHIOs, in whatever stage of deployment they are, can use as the baseline, and that this model is in fact kind of the standard of practice. So that while I may live in the San Francisco area, if a query is done for my data in South Dakota where I happened to grow up, I can be assured that in each of these communities, so to speak, they are adhering to the same privacy and security standards, so that I know my data is maintained, and there's some responsibility and some liability also around that. So that's what I commonly hear out there.

DR. STEINDEL: Yes, thank you, actually my question is going to be addressed to Scott, what I really appreciated about this particular talk and putting on my gear head hat was getting down into the meat of how these probabilistic matching systems work and what's going on with them. I think we've had a lot of discussion on the policy side of this earlier and also in many other previous hearings. What interested me on the table that you put up about your misses on the probabilistic match was what you didn't change in that was the name. And I would imagine that a lot of errors will occur in the taking of the name and the way the name is transferred between sites.

The other question with regard to that as well is something that Harry just alluded to and that's the movement away from Social Security number to other forms of identification numbers that may vary between regions. And can we just look at the Social Security number as a number and sort of transpose in our minds the type of success we would have with any number, or is that unique to the Social Security number?

DR. SCHUMACHER: Okay, to the first one, I had name there 100 percent of the time because usually the name is filled out in a record 100 percent of the time, but the simulation takes into account the types of errors that you're talking about in terms of doing the recording, so there's a distribution in matching, and that's why, unless you're very lucky, you never really get that number to 100 percent in terms of what you're catching out there, because names are recorded in error; the simulation takes that into account. And it did so for all of the other attributes, it takes into account the normal distribution of errors in matching, but here what I do is I just say is that population there or not, so there's a lot underneath the hood that comes from looking at the data. And you're correct, name is by far the biggest mess in terms of doing matching, numbers are straightforward.

In terms of other identification numbers, say a regional number, yes, you could feed those in and you would get a very similar response. Now where it wouldn't do you any good would be going from one region to another, but within a region, identifying and linking records within a region, a local number would work quite well. And in fact one of the things that I think is important to keep around is that if you look at the number of medical records there are billions of them, and you cannot link those at a national level. You can link those records together at a local level and then produce identities, single individuals, out to a national identifier, and that will work because you're only talking about 300 million or so. So that number would have value at the local level, even though it does not give you anything --

DR. STEINDEL: Thank you for clarifying the way this table was derived, I assume that 100 percent meant the name was 100 percent accurate in what you fed in, that was a very good clarification, thank you.

MR. BLAIR: One question. There are three concepts that I hear referenced: master person index is one, record locator services are the second, and matching or linking patients to their records is the third. Could you just clarify for me what differences, if any, you consider between those three concepts?

MS. FERNANDES: I think all three of those concepts have to have the patient identification, the patient matching technology as a component because the correct identification of that patient across the disparate systems is the foundational element to actually determining where the record is and then the third leg of the journey so to speak of actually getting that record. So in the record locator service the first thing they do is patient identification using either a probabilistic algorithm, like John said it is our algorithm that's being used in the prototype for RLS, they're also using the Regenstrief algorithm, they've done some exact match work in the record locator service. So every approach is going to have some type of patient identification.

MR. BLAIR: What is the difference? Could you clarify why somebody would use one term rather than another?

MS. FERNANDES: I think it's just a matter of geography, of perhaps what we might talk in California where I live versus Massachusetts, perhaps it's the various people that are making up the components of the workgroup. The record locator service itself does have those three components but because you can swap in and swap out the various types of algorithm, I think that's why their core function or purpose is to be able to locate a record and why they've used that terminology.

MR. BLAIR: So they're synonyms.

MS. FERNANDES: Right.

DR. SCHUMACHER: Usually you see an EMPI applied to an enterprise, a hospital or IDN, large issues, whereas the record locator service, a term that's been coined recently, is the same concept but applied across enterprises. And each one of those has a matching or identification component as part of what it does; they do other things, auditing, other data to go with it, but they have that component to them.

MR. REYNOLDS: Lorraine and Scott, thank you, appreciate your information. We'll get back together at 10:55, take a break.

[Brief break.]

MR. REYNOLDS: Our next speaker needs no introduction to this committee, it's nice to see you're versatile enough to talk about multiple subjects, that's an exciting thing for us --

MS. BYRNE: That's what Jeff said, what don't you know Teri, well nothing, everything --

MR. REYNOLDS: The legend continues. So Teri, you have the floor.

Agenda Item: Matching Patients to Their Records - Ms. Byrne

MS. BYRNE: Hi, I'm Teri Byrne, I'm the vice president of standards and product management at RxHub, and I want to thank the committee for inviting us back to talk about our experience with patient matching and that's exactly what we're going to talk about, we're not going to talk about the theories and the probabilistic stuff, and that's great that Scott is here to do that because I don't like talking about that stuff. But we're going to talk about our application of the Initiate MPI and some of the other things we do around the data.

One thing I do want you to note is the spelling of my name, it's Teri Byrne, it's one of the perfect names to use as an example when you're talking about patient matching and also I'd like you to note my nametag, which says Theresa Marie, which is my real name, so we're going to talk a little bit about nicknames and that type of thing. And I'm one of those terrible people that sometimes I register myself as Theresa and sometimes I register myself as Teri, and stuff like that.

We are going to talk about, first give you an overview of how we use, what we do with our MPI, or master patient index, we're going to talk about the data and patient matching and what we do, what parts of the algorithm we use and what parts we don't. And how we apply it in our clinic and hospital settings. A little bit about the development that we did and the steps we went through and the pilot we did, industry utilization, and then some of the conclusions that we've come to.

So what is the master patient index at RxHub? Well basically the MPI is the cornerstone of all of our transaction processing. We call the eligibility transaction our lynchpin transaction, but basically the original query that we take in from a technology vendor or a hospital identifies where the patient has coverage, and that's a key thing to understand, because we don't ask for card holder information, the physicians or the people who register the patients don't have to ask for card holder information, because we find the patient and we find the coverage for the patient. So in essence we're locating the record of insurance for the patient.

We currently have 140 million active member records from three PBMs. We use limited demographics to match the patient, you've heard about these already: last name, first name, middle name, suffix, date of birth, zip code and gender, and that's the information that we receive from the requester, and we're going to talk about the difference between the information that we store and the information that we receive on a request. We have a robust matching algorithm, we do use Initiate Systems, which we implemented back in 2001. It's statistically sound, it's tuned for performance; as a matter of fact an eligibility request which originally took about three seconds is now taking under two seconds because we reduced a lot of the time it took for us to locate the patient, which is actually under a half a second now. Is that correct, Mark? The actual match is under a quarter second, sorry, I'm referring to Mark Gingrich who's also here with me today from RxHub.

We've tuned it for minimal false positives, as Scott alluded to, and we'll talk a little bit about that. The chance of a false positive is extremely remote; we have processed over 28 million requests with no report of a false positive to date, so that's pretty significant.

Again what we're doing is unprecedented in the health care industry, we do have nationwide access to patient information and we have received transactions from all 50 states to date.

So let's talk a little bit about the data, first of all we're going to talk about the data that we actually receive from the source, or the health plan or PBM. We do receive the full name, typically first name, last name, middle initial, suffix, sometimes we get prefix. We most of the time receive date of birth, we take in the full address, however we only use the zip code in matching but we do take in the full address because it helps in problem resolution, etc. We receive gender, and then we also receive what we term a payer unique ID, and that is an ID that the source of the information ties to that specific coverage for a patient. And it's important to understand that we may have multiple instances of a patient in our MPI because they have multiple coverage, and each coverage is identified separately. So the payer links a particular ID to a coverage for a patient so if one payer has Teri Byrne covered under my own plan and also under my husband's plan I would have two records in the MPI and they would have two different IDs linked to them.

RxHub does not create a patient ID that identifies Teri Byrne as a single entity in our MPI. When we do a query we dynamically link those records, and Scott really talked about how that's done with the algorithm, with a single pass through our MPI. So again, we're doing that in under a half a second, a quarter second, very fast with 140 million records.
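To make the storage model concrete, here is a minimal Python sketch of an index that keeps one row per coverage, keyed only by the payer's own identifier, and groups matching rows at query time. The field names and the crude deterministic match test are simplifications assumed for the sketch; the real lookup is probabilistic.

from dataclasses import dataclass
from typing import List

@dataclass
class CoverageRecord:
    # One row per coverage, not per person; the only identifier is the payer's own.
    payer: str
    payer_unique_id: str
    first_name: str
    last_name: str
    dob: str
    zip5: str

INDEX: List[CoverageRecord] = [
    CoverageRecord("PBM-A", "A-1001", "TERI", "BYRNE", "1965-04-02", "55101"),  # own plan
    CoverageRecord("PBM-A", "A-2002", "TERI", "BYRNE", "1965-04-02", "55101"),  # spouse's plan
    CoverageRecord("PBM-B", "B-9999", "ROBERT", "JONES", "1950-01-01", "30301"),
]

def find_coverages(first: str, last: str, dob: str, zip5: str) -> List[CoverageRecord]:
    # Dynamic linking: no stored cross-coverage patient ID; matching rows are grouped
    # at query time. A real hub scores probabilistically; this sketch just uses a
    # crude deterministic test on last name, date of birth, and zip.
    return [r for r in INDEX
            if r.last_name == last.upper() and r.dob == dob and r.zip5 == zip5]

for r in find_coverages("Teri", "Byrne", "1965-04-02", "55101"):
    print(r.payer, r.payer_unique_id)   # each coverage comes back with the payer's own ID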

And I want to talk a bit about the data load process, and try to give you more information about how we've applied this. What was really important early on was that we analyzed the data that we had; we went through an analysis process with Initiate, and we also go through a process when we load data from each PBM to help them understand how many occurrences of dates of birth are missing, how good or bad their dates of birth are, how many of their records have a missing zip code or missing address, things like that. And one of the things I wanted to do is address your question, Judy, that you asked John earlier, where you said would it be important someday to have standards around the algorithm.

And I think it's really important to understand that it depends on the data what components of the algorithm you want to use, and it also I think depends on whether you're coming back with a patient or you're coming back with a list of patients, etc. RxHub does a patient match: we find Teri Byrne and all occurrences of Teri Byrne are me; we don't come back with a list of here are the Teri Byrnes that you might have. So I'm not sure if you can say you should have a standard for the algorithm, or if it depends on how you're applying it and what the data you're using looks like.

So when we do what we call our bulk loads we initially load all of the PBM's data into our database and we go through it and we look for bad data. And if we find bad data, which we typically do, then we ask the PBMs to help clean it up and they will go back to the health plans and ask them to kind of clean up their data because the cleaner data they have the better occurrence of matching they're going to get.

We also allow them to do periodic refreshes of the data; let's say if a PBM decides to go back and renumber everybody in a certain health plan or across their entire business, then they can reload the data. And as soon as they reload the data we can apply that information; there's no saying oh, I have to change Teri Byrne's number everywhere else, because we find the number on the fly, we do dynamic linking, and as soon as we bring that new data in we can find that new number.

We do nightly updates, nightly refreshes, or nightly updates of demographic data changes only from our PBMs, the data that we have is not eligibility data, it's not medication history data, it's just demographic information for the purpose of finding a patient.

We also recommend audits to the PBMs, we typically do an audit after 90 days of initially loading the data so that we can determine that we're in sync, our data and their data so that when we find a patient and we send them a request they find the patient as well and it's the same information. So we go through a comparison process, if we find issues we determine and resolve the issues and then we reload the data if necessary, otherwise the PBM could just do updates on the data that's bad.

So basically we've been looking over the past four years to continuously improve the quality of this data, which is really important because it raises our chance for finding a patient.

So as I talked about, clean data is better data. The PBMs and the payers want to find a match because that's their way of being able to reduce their administrative costs and increase patient safety. So the more often we find a patient the better.

So that's basically how we store and load the data, I'm going to talk a little bit about now which parts of the algorithm we use in matching and I'm really not going to talk much about this slide which just talks more about false negatives and false positives which Scott already explained, but to let you know that RxHub does err on the side of false negatives and we'd rather not send back a record if we weren't sure that this is the correct patient.

So the parts of the logic that we use, and I want to explain some of this a little bit by giving some examples using my name. The first one is the name: when we're matching on the name we do use phonetic comparisons, however we don't use nicknames. And all of this was determined based on the analysis of the data that we had early on. So for example T e r i, if somebody submitted T e r r y, which is actually how I used to spell my name before I changed it to T e r i, it's my nickname so I just spell it any way I want, we would find a match because it sounds like Teri. Byrne, most people spell my name B u r n and it's spelled B y r n e; we would match if somebody typed in Burn. However, a lot of people type my name B r y n e, sounds like brine, and that probably wouldn't match. Just to kind of give you an idea. And also if somebody typed in Theresa we would not find Teri Byrne because we don't use nickname matches. Some people do, we don't, we just found good reason not to use it.

We also use name component matches, or Scott referred to them as tokens, so we take Teri, we take Byrne, we take the suffix and we match all of those components together so if somebody puts my last name in the first name or whatever, transposes those, we can help find those issues. And then as Scott also alluded to more common names are weighted differently.

Date of birth, first of all if we don't get a date of birth on the incoming request or we don't have a date of birth on file we won't match a patient. Date of birth is required on both sides to match.

We use what we term the two changes function, that actually might be an Initiate term, I'm not sure, but where if you change a number, add two or subtract two, in any one of the components of the date of birth we may still match, just to allow for transposed or mistyped information. And then we have a birth year weight table that we use, so common dates of birth, 1/1/1990 or the first of the year, are weighted lower than a less common date of birth.
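A minimal sketch of a tolerance in that spirit, assuming dates in YYYY-MM-DD form, might look like this; the exact rule used in the product is not public, so this is only an interpretation of the "two changes" idea.

def dob_close(dob1: str, dob2: str) -> bool:
    # Treat two dates of birth as a possible match if they are equal, or if a single
    # component (year, month, or day) differs by no more than 2, a crude stand-in
    # for the "two changes" tolerance described in the testimony. In addition, a
    # weight table would down-weight very common values like January 1.
    if dob1 == dob2:
        return True
    p1 = [int(x) for x in dob1.split("-")]   # expects YYYY-MM-DD
    p2 = [int(x) for x in dob2.split("-")]
    diffs = [abs(a - b) for a, b in zip(p1, p2)]
    changed = [d for d in diffs if d != 0]
    return len(changed) == 1 and changed[0] <= 2

print(dob_close("1965-04-02", "1965-04-04"))   # True: day off by two, likely a typo
print(dob_close("1965-04-02", "1967-06-02"))   # False: two components differ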

For zip code we first try to match on the five digits, and we do have a five digit weight table that we use, and then if we don't match on the five digits we'll use the first three digits, and we have a weight table that applies to that too.
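Here is a small sketch of that fallback, with made-up frequency tables standing in for the five-digit and three-digit weight tables described above.

import math

# Hypothetical population shares; real systems use observed frequency tables.
ZIP5_FREQ = {"55101": 0.0004, "90011": 0.0011}
ZIP3_FREQ = {"551": 0.004, "900": 0.012}

def zip_weight(zip_a: str, zip_b: str) -> float:
    # Try the full five digits first; if those disagree, fall back to the
    # three-digit prefix, each level carrying its own frequency-based weight.
    if zip_a == zip_b and zip_a in ZIP5_FREQ:
        return math.log2(1.0 / ZIP5_FREQ[zip_a])
    if zip_a[:3] == zip_b[:3] and zip_a[:3] in ZIP3_FREQ:
        return math.log2(1.0 / ZIP3_FREQ[zip_a[:3]])
    return 0.0

print(round(zip_weight("55101", "55101"), 1))   # full five-digit agreement: stronger evidence
print(round(zip_weight("55101", "55118"), 1))   # only the prefix agrees: weaker evidence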

So those are basically the main components that we use in matching that are really important, and again, we do a positive ID, we don't pass back a list of patients.

So how did we apply this in our model? We are a patient information locator; again we don't store medication history, we don't store clinical data, we find where the data is located. So we route the request to the appropriate source that we find in our MPI, and it may include more than one instance of a patient, but again that's in the case where I may have dual coverage or the patient has dual coverage. So there's one record, one response record per benefit coverage, and again we don't have a patient identifier key that we use, but we will pass back the identifier that the source of the information uses. So two occurrences of Teri Byrne will have two occurrences of an identifier, one for each source.

And today we actually have about a ten percent dual coverage rate, it's been up to 15 percent. In the pilot that I'll talk about later you'll see it was a lot less. But right now we're running about ten percent dual coverage.

Okay, we've actually presented this slide before but I wanted to talk to you again about the model, and there are two different models that we use, and kind of the standards that we use and how we do this. The first one is the technology vendor model where we do an eligibility and then a medication history. So what happens is on the left side the point of care application sends in a request, a 270 request in this case, with the patient demographics that we talked about, name, date of birth, gender, zip code, that they've extracted out of their system or somebody has typed in when they're registering. And we do a match of that patient in our MPI just based on the demographics, and then we send that 270 request out to the source of the data, the payer. The payer responds back, and we also add the payer's identifier so that they know what patient we're talking about; that identifier goes back to the technology vendor, and they use that identifier on future requests.
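A minimal sketch of that sequence, with the MPI lookup and the payer exchange stubbed out, might look like this in Python; the function names and message contents are assumptions for illustration, not RxHub's actual interfaces.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Coverage:
    payer: str
    payer_unique_id: str

def mpi_match(demographics: Dict[str, str]) -> List[Coverage]:
    # Stand-in for the probabilistic MPI lookup; returns every coverage found.
    return [Coverage("PBM-A", "A-1001"), Coverage("PBM-A", "A-2002")]

def send_270_to_payer(coverage: Coverage, demographics: Dict[str, str]) -> Dict:
    # Stand-in for the real X12 270 eligibility exchange with the payer or PBM.
    return {"payer": coverage.payer, "eligible": True}

def handle_eligibility_request(demographics: Dict[str, str]) -> List[Dict]:
    # Sequence described in the testimony: match the patient in the MPI, route an
    # eligibility request to each coverage found, and tag each response with the
    # payer's own identifier so the vendor can reuse it on later requests.
    responses = []
    for cov in mpi_match(demographics):
        reply = send_270_to_payer(cov, demographics)
        reply["payer_unique_id"] = cov.payer_unique_id
        responses.append(reply)
    return responses

print(handle_eligibility_request({"first": "Teri", "last": "Byrne",
                                  "dob": "1965-04-02", "zip5": "55101"}))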

And also in this model is where we return the formulary information or links to formulary and things like that as well but I don't know if that's as important today.

It's important that you know that when we are implementing technology vendors or hospitals we help them understand that it's very important to verify the data that you get back. We've moved towards encouraging the PBMs to respond back with the information that they have on file for the patient, so that if somebody sends in Teri Burn, B u r n, and they have B y r n e on file, we want them to return back B y r n e so that somebody can look at that data and say okay, well I see I misspelled the name, is this really the Teri Byrne I was looking for.

And we also ask that they put disclaimers on the data for medication history to say please validate that this is the right patient, this is the right information for the patient. Because there's no way to eliminate a false positive, although we err on the side of false negative there's no way to eliminate the chance for a false positive. And again, this is also a precursor to the medication history and formulary information so consent issues apply, etc.

So then in a hospital model we use the MPI a little bit differently and we use different standards for that request. From a hospital system we actually receive an admit request, same information on the request, same demographic information, just a different standard, and we look up the patient in our MPI, we find coverages for that patient, and instead of sending an eligibility request we actually send a medication history request. And the medication history standard, I might add, is the new NCPDP ANSI accredited standard as of last week, we're very excited about that, sorry, I had to get that in. They send back the response and then we aggregate the data into an HL7 response back to the hospital system. So again, we pass back the information and we encourage the hospital system to look at the demographic information, to look at the medication history information, make sure that we found the correct patient, and look at the drugs that are for that patient and validate that list. We ask them to sign a consent on these transactions, and we ask them to display disclaimers as well in the hospital systems.

I was trying to figure out where was the best way to bring this in and I think this is the place but I wanted to bring up an issue that we've found and have had to address based on the work that's going on to get medication history out to the Katrina evacuees. And the question was asked earlier and I don't remember who asked it, Harry, maybe it was you, about the different state regulations around, Jeff, it might have been you, I'm sorry, but around what drugs can be displayed, etc.

And it really doesn't have anything to do with locating a patient or the MPI, but it is a huge issue we found out over the past few weeks as we've been trying to get medication history information out to the victims of Katrina. There's been, first of all, a lot of confusion around what the state regs apply to, do they apply to where the data resides, do they apply to where you're looking at the data, do they apply to where you extracted the data, it's just really confusing. And basically the bottom line is it delayed the RxHub implementation of getting the data out through Gold Standard to the shelters by four days, which, I mean four days in a normal implementation of 90 days is no big deal, but when you're trying to get information out to people who can't get their drugs, and it is filtered now, that was huge, four days was a long time.

And the thing is we didn't know did all the data have to be filtered, what states does it have to be filtered in, I think we knew that the state regs in Louisiana and Alabama and Mississippi were okay, we could have not filtered the data but what about when a physician in Boston tries to use this application or a physician in California. And so ultimately we ended up filtering all of the data which, and then you bring up those other issues that were talked about, now we have patients who are taking medications for really important things and they don't know what the medication was and there's no way for the physician to understand what that medication was because we've just filtered out sensitive drugs.

So I think, I wanted to bring that up because that is an important issue, and whether you're locating a patient or storing all the data locally it's a really important issue. And I'm hoping that in our review of lessons learned for this Katrina effort that will come up and we'll come up with an action plan around it but I did want to bring it up.

Okay, so I want to talk a little bit about the development timeline, how we developed our MPI and our pilot. First of all we started this effort in July of 2001, which is when the company was formed; this is really the first thing that we knew we needed to solve to be successful, especially with PBM data. Patients don't carry around PBM cards, they don't even know what a PBM is; I didn't know what a PBM was before I got into health care.

The first thing we did is we looked at all the candidates who had MPI algorithms and processes, etc. We looked at matching accuracy performance and software integration, and in November was when we eliminated the others and chose Initiate, and we started actually the design and the implementation of that system in January of 2002. And again what we looked at really was the population of data in the PBM data, the characteristics of the data, the frequency of the data. We looked at 50 million records initially and then developed our matching strategies and our value and weight assignments. So again, we chose to do some of the things that we do in our algorithm based on the data that we had, if the data was different, if the occurrences of some of the data was different we may choose to do things differently, so that's really an important factor I think.

And then we started our pilot in June of 2002 and that's actually when we piloted the eligibility transaction as well, and we looked at matching accuracy, we looked at tech vendor data statistics, we processed 104,000 plus transactions, we had 379 participating physicians, three participating PBMs and three participating tech vendors.

And what we concluded as a result of this pilot was, as far as MPI functionality, there was actually a higher number of unique members found than we expected. We found no occurrences of false positives, which is what we were shooting for, and in this case all of the patients returned were analyzed to make sure that they were the correct patient. There was a higher rate of dual coverage than expected, at that time it was 5.2 percent and that surprised us, and like I said now we're actually up to ten percent of finding dual coverage.

We validated the key fields, name, date of birth, and zip code were critical in finding a match. We'll still find a match without zip code but it's not extremely common, it just depends on the commonality of the name as Scott talked about.

And I think this is really important, we found that Social Security number was not helpful, not only was it not helpful, it was kind of detrimental in the match. And the reason for that was the Social Security numbers were just wrong, they didn't match, the Social Security number that came in didn't match the Social Security number that we had on file. And I think there are a lot of reasons for that especially in the health care business because a lot of times you'll use your own Soc for your children, I don't know my children's Social Security numbers so if they ask me for their Social Security number when I walked into a clinic I would either give them mine thinking okay they want mine because they're trying to find my card ID which is typically my Social Security number so if I give them that they'll probably find my coverage, that's logical to me. Or you use your husband's Social Security number, etc., so we found by removing that and not using Social Security number as part of our matching algorithm we had a higher hit rate.

And I think the other thing to understand around that is that you could add in address, you could add in health plan, you could add in health plan ID, you could add in a lot of information to try to use in the match, but more data is not necessarily better because it can reduce the effectiveness of the algorithm and reduce your chance of finding a match. So I think that's also a good thing to understand, and we did look at other data too; we had health plans and health plan IDs and such on file, but a patient only knows certain information almost all the time. I typically know my name, my date of birth and where I live; other than that, if I don't memorize numbers I may not even know my Soc. I do, but a lot of people don't.
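[Illustrative note: The point that more data can hurt the match can be seen in the same weighting scheme sketched earlier. If a field such as the reported Social Security number disagrees for a large share of true matches, its disagreement penalty drags otherwise strong candidates below the match threshold. The figures below are invented for illustration only.]

    import math

    def field_weight(m, u, agree):
        return math.log(m / u) if agree else math.log((1 - m) / (1 - u))

    # Invented figures: suppose the submitted SSN disagrees for 30 percent of
    # true matches (a spouse's or parent's number, or a typo).
    ssn_penalty_when_wrong = field_weight(m=0.70, u=0.001, agree=False)
    dob_reward_when_right  = field_weight(m=0.97, u=0.003, agree=True)
    print(round(ssn_penalty_when_wrong, 2))  # about -1.20, applied to many true matches
    print(round(dob_reward_when_right, 2))   # about  5.78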

So we also validated our MPI load functions and our updates and made sure we had the correct data, etc.; we validated our switch functionality; we validated the eligibility transaction format and whether the information that was coming back was useful, etc., so we evaluated that model. We validated our certification process, which is just as important, how do you certify the people who are feeding you the sources of the data, is it good data and are they extracting the right data from their system. And then we identified operational reports around patient matching and statistics around transactions, etc. So we learned a lot from that pilot about what we needed to do.

So just a little bit about industry utilization, and we've talked before about how many participants we have; we probably have a lot more participants than we did the last time we testified. We currently have 42 different participants in production, seven of those are hospitals, we have three PBMs, we also have more PBMs signed and some others in the works. We have six health plans with formulary only in production, we have 23 technology vendors in production, one pharmacy network and two mail service pharmacies. So between all of those entities we're doing patient matching.

The other thing that we wanted to show you is that e-prescribing adoption is happening; this is our volume of medication history, we're at 5,222,000 transactions to date, up from 59 in the first quarter of '03, or fourth quarter of '03, as well as eligibility. And one of the things that we wanted to show you from an eligibility perspective is that we have really been working on reducing the time it takes to process a request; we're processing more transactions in a lot less time, so the MPI is scalable and that's an important thing to understand, with 140 million records we can still process a transaction in a quarter of a second.
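[Illustrative note: Searching 140 million records in a fraction of a second generally implies that the MPI does not score every record; a common technique, assumed here purely for illustration, is "blocking," where records are indexed under coarse keys and only candidates sharing a key with the query are scored. The keys and structure below are invented, not RxHub's actual design.]

    from collections import defaultdict

    def blocking_keys(rec):
        """Coarse keys used only to shortlist candidates, not to decide matches."""
        last = (rec.get("last_name") or "").lower()
        dob = rec.get("date_of_birth") or ""
        zip5 = (rec.get("zip_code") or "")[:5]
        return {("name_dob", last[:4], dob), ("name_zip", last[:1], dob, zip5)}

    class ToyMPI:
        def __init__(self):
            self.index = defaultdict(list)

        def load(self, rec):
            for key in blocking_keys(rec):
                self.index[key].append(rec)

        def candidates(self, query):
            """Only these shortlisted records go on to the full probabilistic scorer."""
            found = []
            for key in blocking_keys(query):
                found.extend(self.index.get(key, []))
            return found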

So in conclusion we believe that a national patient identifier is not needed, and again, would we use it if it was provided? Probably. I guess it depends on the accuracy of the data that we have in our MPI from the sources of data and the accuracy of the data received: are patients going to remember that number and is it going to get typed accurately? So again, it could act just like a Soc, it could be a detriment to the match.

Real time matching in clinical data query is proven, we do it, we don't store clinical data, we do it every day, we retrieve medication history information based on finding a patient, and it's the most accurate and up to date information because it's coming directly from the source. And again, an MPI can be tuned for excellent performance, we've proven it over the last four years.

So thank you very much.

MR. REYNOLDS: Thank you. You're the first person I've met that's created their own witness protection program with the way you use your name, it's kind of neat --

MS. BYRNE: By the way, I've been married twice too Harry.

MR. REYNOLDS: You've given everybody a lot to think about. Simon and then Jeff and then Mike.

DR. COHN: Teri, first of all thank you very much for your presentation. I actually had two questions and one was clarification to make sure I understood. You obviously have been talking about a lot of data that you use for your MPI matching and I heard a lot about name, I heard a lot about date of birth, I heard a lot about zip code. Now the part that I was a little confused about was the role of the payer unique ID, which I understand is the number that the payer gives you for your plan, not the unique identifier of the payer --

MS. BYRNE: Right, it's for the patient.

DR. COHN: -- in terms of assuring all this stuff, I mean do you use it? Does that contribute to the improvement that you're describing and could you survive without it? So that's question number one.

MS. BYRNE: Right. Yes, we do use it and how we use it is we store it along with that patient record in our MPI, we don't use it on the match, what we do is we supply that back to the payer so that they understand that we've matched the record that they've given us, that's identified by this unique ID, and as well we give it to the requestor of the information so that they can use it in future queries like medication history queries. So on an eligibility response we give them, we found this coverage at Express Scripts and here's their ID for Teri Byrne, use that when you're querying medication history. So it's used by them to talk back to the payer again saying here's the key to the clinical information that I'm looking for, so it's kind of a record locator ID I guess. Does that make sense?

DR. COHN: Okay, so basically you have it but you really don't use it for your algorithm --

MS. BYRNE: Not in matching.

DR. COHN: Okay, great --

MR. GINGRICH: Also, consider that key a temporary one because again, I think we get updates nightly, that key could change for the patient; we expect that every time a technology vendor makes a query or has a patient visit they will once again do an eligibility check to make sure they have that primary key.

MS. BYRNE: That's a good point, and by the way that was Mark Gingrich from RxHub. It's a good point because, and that's what I was talking about earlier, we have had PBMs actually re-enumerate everybody, give us new identifiers and reload everything, and that's why we do real time queries to find the patient and find that identifier, because if they try to use an identifier that they found last month it may not be the same.
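[Illustrative note: The exchange above describes the payer-assigned patient ID acting as a temporary record locator: it is returned on the eligibility response, echoed back on follow-up queries such as medication history, and refreshed with a new eligibility check at each visit because the payer may re-enumerate its members. The sketch below illustrates that flow only; the types and field names are invented, not NCPDP or RxHub structures.]

    from dataclasses import dataclass

    @dataclass
    class EligibilityResponse:
        payer: str              # the PBM or plan that recognized the patient
        payer_patient_id: str   # the payer's own key for this member (may change)

    @dataclass
    class MedHistoryRequest:
        payer: str
        payer_patient_id: str   # echoed back so the payer can locate the record

    def follow_up_query(fresh: EligibilityResponse) -> MedHistoryRequest:
        """Reuse the payer-assigned ID from a fresh eligibility check as a record locator."""
        return MedHistoryRequest(payer=fresh.payer,
                                 payer_patient_id=fresh.payer_patient_id)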

DR. COHN: That's helpful. Second question, and this is just a question since you were talking about the hurricane and your response and all of this, it's really a question of clarification and somewhat of surprise, and I don't mean to embarrass you, but I guess I was surprised: was this the first time that you've had to do cross state transfer of information, or is this just the first time that this issue really came forward around all of that?

MS. BYRNE: I should clarify that. We implemented with a vendor that we hadn't yet implemented with, and all of the vendors that we're currently implemented with understand that they're liable for what information they display. So if they're displaying information in the state of Massachusetts they need to understand those laws, but this was with a vendor who hadn't come across that issue yet, for whatever reason, so we had to address it with them. And it's something we do address with every vendor, and it isn't an easy thing to address, but they have had time to go through every state and understand the laws whereas we didn't in this emergency.

DR. COHN: Okay, well thank you for that clarification because I had been presuming that you had been dealing with this sort of all along. Thank you.

MR. BLAIR: Your response to Simon's question kind of got close to the area where I'm looking for clarification, you said every vendor, are you talking about vendors of the algorithms that other people are using? Are there other vendors of the algorithms, because all three testifiers today have all used Initiate, is that what you're saying when you say other vendors?

MS. BYRNE: Usually when I say vendor I'm talking about a technology vendor or a point of care vendor, in what reference?

MR. BLAIR: Like electronic health record system vendors?

MS. BYRNE: Right, right, e-prescribing vendors or EHR vendors.

MR. BLAIR: Okay, good, that clarifies one part of it. The other part is that all three of our testifiers apparently have selected Initiate as the vendor that has provided the algorithms, and I'm assuming the software that uses those algorithms, and are there other providers of these algorithms? Is Initiate the only one? If there are other providers should we be getting testimony from them as well? What guidance can you give us?

MS. BYRNE: Well, I know there are others because we analyzed others, I don't know, I guess I don't know if I have an opinion about whether you need to hear from them as well, I don't know how different their algorithms are or whatever, I don't know if Initiate has an opinion on that, I'm not really sure, Jeff.

MR. BLAIR: Okay, so you've partially answered my question, there are other vendors of these systems.

MS. BYRNE: There are other vendors, I don't even know that I could name them right off the top of my head.

MR. BLAIR: That's fine, that's fine. Thank you.

DR. FITZMAURICE: Thank you for very interesting testimony, this has been a good morning. I want to understand kind of how things work, so it's my understanding that RxHub acts as a switch, somebody makes a query for purposes of determining eligibility of a patient for the drug, RxHub links it, searches all of the health plans and PBMs and says there's a match here and here, we're going to send it out there to find out what they're eligible for. And then do they send the information back through you to the requester or do they send it directly to the requester?

MS. BYRNE: They send it back through us.

DR. FITZMAURICE: The second question would be do you retain any of that data? I think you retain the patient, the patient ID that's assigned by the payer, right?

MS. BYRNE: Right, that's actually in our MPI as one of the keys in our MPI.

DR. FITZMAURICE: But you don't store any of the drug information or --

MS. BYRNE: Nope, nope, we have an audit trail of the transaction; we don't store the clinical data, we store personal health information only for a period of time based on the HIPAA regs, etc., etc. --

DR. FITZMAURICE: And it works for eligibility, it also works for medication history --

MS. BYRNE: Right.

DR. FITZMAURICE: It's going to begin to work for formulary, and would it also work for prior authorization guidelines when that standard becomes available, so that someone can come to you and say I'm ordering this drug, if the patient is eligible what conditions do you have on my prescribing of this drug?

MS. BYRNE: Good question. I anticipate it will, and I think it depends on the model, because we pass back that information on the eligibility transaction, and actually it is working for formulary today as well, Michael; it was the RxHub proprietary formulary which is now becoming a standard. But we pass back information on that eligibility response that links to formulary information, benefit coverage information, prior authorization information, etc. So once we've identified the patient we give back a lot of information about that patient to the requester of the information so they know where to link the data, and that's the key thing.

DR. FITZMAURICE: I had one more question but I forgot what it was.

MS. GOVAN-JENKINS: What are your thoughts on turning the date of service or the date of encounter into an ID in order to match to the patient record? And I ask that because when I was practicing as a telephone triage nurse we had patients that called in who had two records, and the only way I was able to find their file was through that date of encounter, because either their name was incorrect on one of the files or the Social Security number was incorrect on another. When I wanted to follow up on a particular situation the only way for me to find that particular record was to get the date of service, or an approximate date of service. So what are your thoughts on turning that into an ID?

MS. BYRNE: We really don't have any experience with that, we didn't analyze dates of service when we looked at doing this, we don't even use effective dates for eligibility in our search. The PBM will use effective dates to send us the information and they'll use effective dates to determine that a patient is still effective but we haven't really analyzed that. I don't know if Initiate has, Scott, have you looked at that?

DR. SCHUMACHER: Not within health care but in law enforcement, the date of the event is something that you would use in the algorithm. That's the first time I've heard that, it would be an interesting thing to try in certain areas. So yes, you can use it in the same process that I described earlier, you would add that into the algorithm.

MR. REYNOLDS: Michael has come to again.

DR. FITZMAURICE: I remember my question. Earlier today we heard from John Halamka and others about the quality of the data is very important and if you're doing the clinician to clinician you want to get a match, but it seems to me that what you have is something unique, you can push the enforcement of the quality of that data back to the requesters and the PBMs because they don't get paid if the data they supply you is inaccurate so all you have to do is say this data isn't good, fix it, and they've got a strong incentive to fix it.

MS. BYRNE: Right, they do, it's a huge incentive, and I think that's an important point that I was trying to make, Michael, because the match is only as good as the data, right, and we've used it as a way to go back to the PBMs, to go back to the health plans and say look, you'll reduce your administrative costs, you'll increase safety for these patients if you clean up your data, so it's a good point.

DR. FITZMAURICE: I'd like to ask if I may a corollary question and that is as we get into electronic prescribing I go into my physician and I say doc, you want to prescribe this for me, what are the choices I have, what are the prices for that drug depending upon the pharmacy I go, so then do you anticipate maybe linking down to the physician to say all right, link up with the PBM, get the formulary information, get the prior authorization information, and then linking up to the pharmacies to get the price data? I mean I'm putting a lot of burden on the physician and maybe it's not going to work that way but could it work that way?

MS. BYRNE: It doesn't today and typically you're not going to probably see the PBM/pharmacy data pricing relationship being disclosed to the physician. So what we've settled on is we'll present co-pay tiering information, flat dollar co-pay information, percent co-pay information, we even recommend to the technology vendors that they don't use AWP because that's not going to be what the patient is going to pay at the pharmacy, it's whatever has been negotiated. And it will probably be a long time before you see that --

DR. FITZMAURICE: That would be great because what I really want to know is what do I pay out of my pocket and so you anticipate that that information might flow to the physician if I give them a choice of three pharmacies, that's something we could talk about?

MS. BYRNE: I think when the industry is ready, once we get beyond what we're doing, we may go to real time, what they call pre-adjudication. However that's not ever going to be completely accurate because if I receive a prescription from my physician today and I have a deductible or an out of pocket plan, and then I don't get that prescription filled for two weeks but in the meantime I refill some of my other meds, that price may no longer apply. So that's kind of one of the reasons we haven't moved to a real time pre-adjudication yet, it's difficult to tell the physician and the patient exactly what they're going to pay until the pharmacy dispenses the med. They may dispense a generic.

DR. FITZMAURICE: Thank you very much.

MR. REYNOLDS: I have one other question and then I'll turn the program over to Judy again. HIPAA doesn't like data to be changed, the HIPAA transactions don't like that. If I look at your chart you receive one set of information and change it based on the match, the name might be spelled wrong, other things might be spelled differently, you change it based on the match, give it to a payer, and then it comes back. Is that what I heard?

MS. BYRNE: Actually we don't change it, we send to the payer exactly what we got from the requester, and believe me, we have had lengthy conversations with the X12 organization around what needs to be done and what should be done. You should never change the data that you've used in a match; the guide clearly states that.

MR. REYNOLDS: But I thought you said that you asked whoever you send it to to send it back different than it was received --

MS. BYRNE: Right, the payer --

MR. REYNOLDS: So that is changing the data.

MS. BYRNE: That's exactly the direction that we got from the X12 organization as defined in their HIPAA guide: the payer should send back the information they have on file so that the requester can compare the request and the response to determine if there was a difference.

MR. REYNOLDS: So you're putting that under the correction capability, you are allowed to change data if it's incorrect.

MS. BYRNE: We actually ask the payers to check the box indicating that they have provided different information than was in the request.
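[Illustrative note: As described above, the switch forwards the requester's demographics unchanged, the payer responds with what it has on file plus an indicator that the information differs, and the requester compares the two. The sketch below shows that comparison step only; the field names and flag are invented for illustration and are not the X12 270/271 element names.]

    def diff_demographics(request: dict, response: dict) -> dict:
        """Return the fields where the payer's on-file data differs from the request."""
        fields = ("last_name", "first_name", "date_of_birth", "zip_code")
        return {f: (request.get(f), response.get(f))
                for f in fields
                if request.get(f) != response.get(f)}

    request  = {"last_name": "SMYTH", "first_name": "PAT", "date_of_birth": "19700101"}
    response = {"last_name": "SMITH", "first_name": "PAT", "date_of_birth": "19700101",
                "info_differs": True}
    if response.get("info_differs"):
        print(diff_demographics(request, response))  # {'last_name': ('SMYTH', 'SMITH')}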

MR. REYNOLDS: Okay, good. Thank you, Teri very much with an i and a y and every other thing, I appreciate it. And Judy, turn it back over to you to lead the discussion please.

Agenda Item: Discussion and Commentary on Matching Patients to Their Records - Dr. Warren

DR. WARREN: What I had planned at this point is really for us to have an open discussion including members of the audience, not only about the testimony that was heard today but ideas for testimony that we need to hear about in the future so that we can come to some conclusion and possible recommendations about the whole idea of matching patient records or patients to their records. So with that I'd like to open it up to anyone from the audience, if they have comments that they want to make or questions or suggestions for the future. Okay, I can't believe the audience is quiet, but that's okay. Stan?

DR. HUFF: Just a few thoughts. One question, I think, as pointed out by the testifiers, is that the quality of the match is related to the quality of the data that's used in the match. And I wonder, given our purview, whether it would be useful to have some national standard for the data elements that are used, to say we think we want name, birth date, suffix, sex, and so on, to encourage people to uniformly collect those pieces of data so that matching can happen more accurately and efficiently. So that's a question, not necessarily an assertion, I don't know if that's good or bad, but should we try and seek some testimony that would say what are the most effective things to use in a match and is there value in some standard that we could state, or a guideline, maybe a standard is too strong a word, but are there guidelines that we could use that would make it more efficient to match.
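[Illustrative note: The uniform set of demographic data elements being suggested here is an open question, not an adopted standard; the sketch below simply shows what such a minimum matching record might look like, with the named fields taken from the suggestion above and the remaining fields marked as assumptions.]

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class MatchingDemographics:
        """Candidate minimum data set for cross-organization patient matching (illustrative)."""
        last_name: str
        first_name: str
        date_of_birth: str            # e.g. CCYYMMDD
        sex: Optional[str] = None     # administrative sex code
        suffix: Optional[str] = None  # Jr., Sr., III, etc.
        zip_code: Optional[str] = None  # assumption: earlier testimony names zip as a key field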

The second thing is, I'm starting to summarize some of the things that we heard. Again, I'm convinced that in the short term we need to do the kind of matching that everybody has described, and there are multiple vendors and we could talk about the other vendors that are available. But I don't want that to get in the way of us looking at other things that I don't think it can address, and by that I mean probabilistic matching doesn't help you in cases of fraud detection: if there's a person who's intentionally trying to defraud the system they just give a different name so the records don't match up, and we've seen that within our own organization, people who are drug abusers who go facility to facility and intentionally give different IDs and different names. So probabilistic matching doesn't help you there because you're trying to find out who the real person is, and if they're intentionally providing false information then there's nothing the algorithm can do to help you.

The other things that come into play are just the efficiency of the whole process, if in fact I'm a good citizen and you give me a card with my number, I can change and improve the visit environment, I can come, I can swipe my card, and they say yes, please proceed to clinic A versus saying what's your name, what's your ID, what's your phone, asking me all of those questions when I register for a visit. So I think there are efficiencies in the process in fact that could be augmented again.

That's not saying that we wouldn't use the probabilistic matching but in fact that there are efficiencies that I think can be obtained by the use of an ID, that we ought to ask people what those are and how those could be used. So I think the fact that it's a given that we're going to use these kind of matching algorithms shouldn't prevent us from thinking about what the value of an identifier would be and whether in fact at some point whether that might be justified and the benefits that you get from that might actually pay for the cost of it in the long run.

I guess the final thing that was interesting, and this is just one other point of data: within our database, and we do this exact kind of probabilistic matching within our organization as well, within Intermountain Health Care, the single most discriminating piece of data is the Social Security number, and so it would be interesting to figure out why that's true in our case and so different from Teri's experience. It probably has to do with confusion about who provides the number for paying purposes, whether it's the payer or the beneficiary, versus in our case it's the patient showing up and we ask them directly. And again I think a national identifier, properly done, might also serve that purpose; taken as a part of other characteristics it could in fact be a very discriminating and very helpful piece in the probabilistic matching process.

Just some comments.

DR. WARREN: Steve then Simon then Jeff.

DR. STEINDEL: Actually Stan took a lot of my thunder and I thank him especially for the last two comments, one I was going to bring up the issue that we should be looking at a unique patient identifier and what are the pros and cons. I believe it was mentioned in one talk that even if we did have a unique patient ID we would still have to provide supporting information for it, so I think we need to investigate that a bit further.

And picking up on Stan's other point, we need to hear from other people who are doing probabilistic matching as to their success rates and what they're using. I think Stan just brought up a very good point, that he finds great success in using the Social Security number in his system whereas we heard from other systems where it's not so, so I think we need to find out a little bit more about what the attributes of these probabilistic matching systems are and how they operate in different environments, and therefore we need to hear from other people who are using it, be they vendors or just users; we heard Regenstrief, they're probably not a vendor of it but they're probably a user and can have some experience. I just gave you a couple of names of some people that I passed on.

The other item that I think we need to investigate, and this struck me very much in John Halamka's very nice talk, is that his whole talk about the system seemed to describe a very physician centric system with regard to authentication and trust relationships. And one of the keys to identifying a patient to their records, especially outside of their immediate facility, is this trust relationship that he brought up, and whether we can trust the exchange of information between record locator services. And I think we need to investigate that further; the only way we heard about it today really is in John Halamka's case where he was strictly talking about physicians and he used that term repeatedly. And I think we're all aware that that trust relationship has to extend down to other providers and other users of the system.

And the other place that we heard about it was in the RxHub system, which is basically a closed system where they have established their trust relationships and we heard about that during our e-prescribing hearings. So I think we need to investigate it in a little bit more general sense as well.

DR. WARREN: Simon?

DR. COHN: After both of them talking there's very little more that I have to say, but I don't want to embarrass myself by saying nothing. I guess I'd make the following comments: I basically agree with the questions, though I may differ on some of the answers from what Stan was describing, but I do think generally what we're trying to do obviously is to help transform the health care system, make it more efficient as well as provide better care to patients and the citizens. And I think we've heard some now about matching and how that contributes to it; I think maybe we need to get reacquainted with some of the arguments that some people have brought forward about why a unique identifier may be of value, and it's just been so long since we've talked about it that it might be useful to hear about that a little bit.

Now I'll offer maybe just a slightly different take on what I was hearing about the matching algorithms, which I thought were very interesting, and I think we do certainly need to hear more about them. But one of the things that comes to mind is does one need actual standards for data elements, or do there need to be performance standards, where you say you can use whatever data elements within your environment you want but it has to get to a certain level of confidence, and would that work in a national environment. And as I say that I don't really know of any environment where one RHIO or one organization is actually transferring data to another, so most of this is still theoretical, but it's a question of how it might play out in that context. And I don't have the answer to that one, I just bring up the question.

DR. WARREN: Simon, when you're talking about performance standards are you talking about them in the same way we are looking at the CCHI? Do I have the letters wrong? The certification of health records?

DR. STEINDEL: Judy, can I comment and see if Simon will agree? Because actually I forgot to mention, that was one thing that I wanted to bring up, and basically we heard from everybody that said they were doing because they're basically using all the same system, that they were getting a match at 95 percent. And I think Simon, am I correct, is 95 percent good enough?

DR. HUFF: It's certainly not what we want.

DR. COHN: And there's false positives and false negatives, so it's on both sides.

DR. WARREN: Is that what you were trying to get at?

DR. STEINDEL: Yes.

DR. WARREN: Okay, Jeff first and then Stan.

MR. BLAIR: As we've had discussions in New Mexico with respect to the development of the RHIO that we have there and we begin to look at master person indexes we received a little bit of education on an aspect of this that just wasn't on my radar screen before and when he first started to talk about it my first statement was well this isn't a big thing, and then as I listened some more it became bigger and bigger and bigger.

The department of health in New Mexico, for example, is trying to make sure that they can respond to infectious diseases from a variety of sources, and that also extends to responding to bioterrorism. And one of the things that I had just never thought about was the fact that, and I'd better not mention the number because I may not be accurate, there are many millions of undocumented and migrant workers. And they flow through one state and then another state and then another, and they move and they move and they move, and so we figured well, that's outside of the system, they don't have a PBM number or a health plan number, they may not have a Social Security number and all this stuff. But then they began to explain to me that as we look at infectious diseases they go across borders, country borders, state borders, and as we begin to try to respond to those we need to have some way of identifying those folks, and there are programs and plans with identifiers for migrant workers, which was new to me.

So I'm mentioning that from the standpoint that on the one hand the testimony I heard today made me feel very comfortable; I found the rationale for the probabilistic statistical approach compelling because there seemed to be convergence, all of our testifiers seemed to have focused in on this as the way they've approached things. But with Stan's comments about the fact that we also need to consider the Social Security number and why it works in his area, reactions to fraudulent situations, and the point I'm adding about undocumented workers with infectious diseases flowing through the country in increasingly large numbers, I think we need to hear from other entities that either have different approaches or, even if they still use probabilistic and statistical approaches, maybe use them in a different way. So I think we have to expand the breadth of the testimony we receive for us to get a complete picture.

DR. WARREN: Stan then Mike then Harry.

DR. HUFF: Again, a slightly different aspect to this. A lot of the arguments started off from the fear that a national patient identifier would lead to a national database, which would make all of my data available to somebody and increase my risk of privacy or security invasion.

Now what we actually hear in the testimony is that because these things work so well I can in fact link all of my data everywhere; even though it's not in one database I'm linking all of my data everywhere, so in effect we've created, using this capability, the environment that people were trying to avoid by not having a national identifier. And the justification for the national identifier, or rather for doing the probabilistic matching, is that it works and we need it today, which I'm fully convinced of, but the underlying concern is exactly the same. Once people understand this they're going to feel the same threat from this technology that they felt from a national patient identifier, if they truly understand it, because they're at the same risk.

And that comes back to what Michael said before: my real security is sort of at the database level and has to do with strong authentication of the people who are accessing data, and tying that strong authentication to their permissions and to the consent of the patient to access that information. That's where the real security comes in, and simply talking about the fact that we're going to do probabilistic matching instead of a national patient identifier, well, you've created the same concern that you were trying to alleviate by not having the national patient identifier.

And so it comes back to those old issues of privacy and security that we talked about on other committee days; I think those areas are ultimately the things that are going to provide real security and privacy, and it's actually sort of a red herring that the national patient identifier is not done because of privacy concerns, when in fact by applying this technology so effectively we've created the same concern that people had initially.

DR. WARREN: Mike?

DR. FITZMAURICE: That was so well said that if I ever go into a court of statistics I think I want to hire Stan as my lawyer for that.

I wanted to raise the issue of public perception and Stan covered it very well. I think part of our job is to sort out the issues that are important; public perception is important, and we can help the public by saying here's what the fears are, here's what the protections are, and you don't have to be afraid of the Social Security number any more than of a unique health identifier, or than you are of probabilistic matching, if the laws protecting the data against the potential harms to people are strong enough. I would be less inclined to regulate, or maybe certify, linking and algorithms; if an algorithm is efficient somebody is going to have a better product and get out there in the market and sell it better than somebody else.

But I would suggest a focus on what is the cost of efficient linking, getting the data, having a good algorithm, versus the benefits of linking. I think the benefits of linking are substantial; I can remember Clem McDonald going years back saying the one thing we ought to do is accurately identify the patient, and I'd be happy with adding three or four more digits to my Social Security number, making one or two of them a check digit, and using that as long as there were protections against using my other data in ways that I didn't want. Right now people can steal my identity, go out and use a credit card and make all kinds of charges; I may not have such a financial penalty to pay, but over the course of years that may come back to haunt me again and again when I try to get credit, and that's not wiped out.
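[Illustrative note: The check digit idea recalled here is a generic one; the sketch below shows a standard mod-10 (Luhn) check digit purely to illustrate how a single appended digit catches most keying errors, such as a mistyped digit or a transposition. It is not a proposed national identifier scheme.]

    def luhn_check_digit(base_digits: str) -> int:
        """Compute a mod-10 check digit for a string of decimal digits."""
        total = 0
        for i, ch in enumerate(reversed(base_digits)):
            d = int(ch)
            if i % 2 == 0:      # double every other digit, starting from the right
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return (10 - total % 10) % 10

    base = "123456789"                       # any identifier body (illustrative)
    full_id = base + str(luhn_check_digit(base))
    # A registration system can reject an entered ID whose check digit does not verify.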

So cost of efficient linking and the benefits derived from efficient linking that I would suggest as well as what Stan said.

MR. REYNOLDS: Listening to the testimony today I initially felt everybody started off with no national ID, but yet, and I'm going to try to attribute these to the right people, Scott mentioned that maybe there could be a local identifier, John Halamka suggested maybe the last four digits of the Social Security number, and we've talked about a standard dataset, which I think is a good idea. We've talked about RHIOs, which are still developing, and I think Stan hit it right on the head with this whole idea of trying to come up with a trusted environment, because as you listened to Teri give hers, and somebody mentioned it in one of their questions, it's a closed system. And this whole idea of the NHIN, this whole idea of everything that we're talking about doing as an industry and all of us are sitting in hearing after hearing talking about, is not a closed system, it's a very open, pervasive environment that passes things around.

So I'm troubled that I heard local identifier and last four digits of the Social Security because if it worked like we heard I'm not sure why they came up and I'm not sure why those tags were discussed. Now it may have been my misunderstanding, it may be that I don't understand the subject that much, but if you have to add those things then what's missing? And if you have to add them then if we go, if we say the words, I know in our industry if we say the word Social Security number right now people become aghast that you would dare use that number and so it started bouncing around a little bit and so I think we need to hear more to really drill down.

But I really want to zero in on this whole idea, we're bringing this new entity in, this is RHIOs, and if it has to do with matching patient data and you're going to turn it over to a RHIO then I just think there's lots of issues or questions that we could at least review. And I hope that if we have a question on a unique identifier that we call it a unique identifier, not a national unique identifier quite yet, under matching patient records because that is a term of art that has very far reaching issues related to it.

So as a co-chair I would recommend that we discuss this as a subject of matching patients to data, one opportunity for that is a unique identifier, whether it will be local, whether it will be some tag that has nothing to do with who you are or what you are, or something else, but I would recommend that we stay away from the term. At least for now until we understand the subject better and then would be in a position that if there were to be any consideration of our direction that we would then make it such a subject. So unless I have any negatives from the committee I hope that we can consider that.

DR. WARREN: I still have Wanda, Jeff, and Mike, so Wanda?

MS. GOVAN-JENKINS: This is just a comment. In looking at the unique identifier I started thinking about the correctional system, and this is coming from a slightly different perspective. When a person commits a crime at any age they are assigned a federal ID number, and that federal ID number follows them throughout their criminal career, even 20 years later. I used to work in a correctional facility, in a prison, and we used that number to identify that particular person, and we could see the history of all their criminal activity as well as their health care when they were in the prison. So I want us maybe to investigate what challenges they have had with using that same number throughout the years on that individual, as far as their criminal record and their health care background, I mean what challenges are there. And I guess that would kind of help us understand what kind of challenges there would be if we were using a local identifier or a unique identifier in health care now.

DR. WARREN: Jeff.

MR. BLAIR: One of the things that pulls us together, I'm going to give us the things that pull us together and the things that may make it more difficult for us to reach that last mile of consensus. The thing that tends to pull us together is when we look at the record locator solutions that have come up in the same environment, a la where we have patients that have been identified through insurance companies or PBMs, or other payers. In that case the solutions for matching patients tends to gravitate towards the same type of solution.

Similarly when we have health plans around the country or providers in a local area or a state we could get convergence, and it makes sense, we wind up doing that because it's the least expensive way for us to get to an effective solution. And we're able to either ignore the issue of fraud, or we're able to ignore the issue of people coming over the borders for right or wrong reasons, whatever it may be, and we're also able to ignore the fact that very often we don't have identification for migrant workers that go from state to state to state.

And so my thinking is that as we begin to explore this I think we should keep a couple of things in mind, it is very desirable for us to come up with a single solution nationwide. And I think in our efforts to do so we just need to keep in mind that we may get to the stage where we can no longer ignore the two, three, four, five parts of our health care delivery system that are not "in our regular system anymore", and the reason we may not be able to ignore them, and I'm mentioning again the infectious diseases piece, is because that does flow across states and because that does become major pieces.

And I know that different departments of health in different states are beginning to work together completely independently to come up with a way to track these people so that they can not only provide health care to the folks that are infected but also be able to track to protect those folks that are not infected yet and come up with ways to respond, so it becomes both a public health problem as well as a patient care issue.

So in summary I think that it is still a good idea for us to pursue some type of a procedure for matching patients to the records that would be one size fits all but I don't think we should do it to the exclusion of recognition that large portions of our populations may not fit into certain assumptions we make, that have some degree of consensus at the moment.

DR. WARREN: We're beginning to run short of time so we're going to limit it to Mike and then Simon. Mike?

DR. FITZMAURICE: Two points that I'd like to suggest. One of them is that even if we had a unique health identifier, national or not, you would still want probabilistic matching because there can still be errors in putting that number down and you still want to check it against other information that you know about the patient.
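[Illustrative note: The point that an identifier would still be checked against other demographics can be expressed as a simple guard: even when the keyed-in number matches a record, require the demographic score to corroborate it before trusting the link. The function below is a sketch under invented names; score_fn stands for whatever probabilistic scorer is in use.]

    def accept_identifier(entered_id, record_on_file, demographics, score_fn, threshold):
        """Treat a keyed-in identifier as one more field, not as proof by itself."""
        if entered_id != record_on_file.get("unique_id"):
            return False  # mistyped number or wrong card
        # Even when the ID matches, require the demographics to corroborate it.
        return score_fn(demographics, record_on_file) >= threshold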

Secondly, if you had a unique health identifier and you built a firewall around it so you could use it only for health reasons, if it were efficient it would be used by others. The federal government said the Social Security number was only for Social Security, but then you had the Medicaid program and they passed a law saying you can use the Social Security number to avert fraud and abuse, maybe saving two billion dollars a year back when it was introduced. Now you can use the Social Security number for virtually everything without federal penalty.

So I would suggest that the protections afforded by a unique national identifier, putting a firewall around it, aren't really the protections, the protections are how you guard the data and the authentication of people who get access to the data.

DR. WARREN: Simon? Okay, go ahead Steve and then we're done.

DR. STEINDEL: Actually I just want to go on record from a CDC point of view because Jeff brought up twice infectious disease with migrant farm workers and public health and this is not a new issue with respect to public health and that CDC with our public health partners have been investigating ways to track, treat, control, not just infectious disease issues but other health issues with regard to migrant farm workers.

DR. WARREN: Okay, and with that we'll close. I would encourage anybody, though, if there's any more information that you think of or people that you think we need to hear from, please contact me or Maria because we will obviously be having more testimony on this subject. And I want to thank our speakers this morning and also the committee and staff for a really good beginning to this set of hearings on the topic, thank you.

MR. REYNOLDS: Due to the fact that everybody kind of has to go across the street to get lunch we'll resume at 1:15. Thank you.

[Whereupon at 12:15 p.m. the meeting was recessed, to reconvene at 1:15 p.m., the same afternoon, September 21, 2005.]


A F T E R N O O N S E S S I O N [1:15 p.m.]

MR. REYNOLDS: Okay, after an exciting morning it looks like we've got an exciting afternoon of about four different subjects that we're going to be dealing with, so the first one is under the auspices of update on key issues is the CORE project and I'd like to welcome Robin Thomashauer from CAQH.

Agenda Item: Update on Key Issues: CORE Project - Ms. Thomashauer

MS. THOMASHAUER: Thank you for inviting us today. We have been working on this project now for about three years including the research, and we've been working on it actively in terms of implementation since January of this year, so we're excited to be able to share with you the progress we're making.

CAQH, and I've been here before but for those of you who were not involved at that time, is an alliance of health plans and networks, and we're working together across the industry to see where we can standardize business processes, particularly to make it easier for physicians to work with the health plans. So that's really our focus, administrative simplification opportunities in the interaction between the physicians and the health plans.

We had been working over four or five years on a number of initiatives, including a credentialing standardization initiative, and we worked on some e-prescribing work, which is what I came to speak to you about about a year and a half ago. And as we looked down the road at where we were headed we stepped back to take an overall perspective on all of the administrative activities in the physician's office and all of the places where the health plans interact with those offices. And what dropped out of that were about 11 opportunities for us to look at and to see how we could simplify life in the provider's office. And the thing that kept rising to the top in those discussions was eligibility and benefits and access to eligibility and benefits information.

Certainly HIPAA helps that, it's a significant start, but it doesn't go far enough for the physicians, it doesn't guarantee them enough information to make decisions and to understand how to work with a patient. And so what we looked at was how we could facilitate that interaction to enhance the amount and the quality of the information that the plans were making available to the physician particularly, or really to any provider at the point of health care delivery.

What the physicians wanted was electronic access to accurate and timely data, and they want it in a very timely manner. Obviously what they're looking for are significant reductions in their costs and quicker access to the information they need to make decisions. And in looking at how we could help that we looked at a number of industries to see how they facilitated transactions, and as so often happens when you look at transactions we looked at the banking industry and financial transactions to see how they facilitated the connectivity between the different parties, and learned that one of the important things that they've developed to facilitate that were operating rules.

So what are operating rules? As we look at operating rules they are business rules for utilizing and processing transactions and they are the rules that tell people how quickly information is going to move, what the information is going to be, what we mean by the information that goes back and forth, the exceptions that are allowed, error resolution, security, really sets all the parameters for the specific transaction. They are the rules that enable ATM transactions, they enable direct deposit, credit card transactions, all of those are driven by operating rules and as we looked at those transactions a similarity with the eligibility and benefit transaction was really pretty evident and that's why we moved down the road to look at operating rules and to see if that was something that would enable better information transfer.

So after all of this research we looked at how we could go about this and developed the Committee on Operating Rules for Information Exchange, or what we refer to as CORE. It's an industry wide stakeholder collaboration that was launched in January of this year and really what we're getting at is the answer to the question why can't verifying patient eligibility and benefits in provider's offices be as easy as withdrawing cash from the bank. It's a very straightforward question with as you might imagine a very complicated answer.

Our vision is to make available to providers, no matter what front end system they're using and no matter what back end systems they're talking to: what health plan covers the patient, whether this service is a covered benefit including co-pays and deductibles, what amount the patient owes for this service, and what amount the health plan will pay for the authorized services. And we wanted that to be available no matter where a provider is asking for that information and no matter what service they're using to get that information. So that's the vision of CORE.

The mission of CORE is broader than just eligibility and benefits though; as we got together the stakeholders we talked about a lot of other transactions we could look at down the road if this is successful, and so the broader mission for CORE, as you can see, is to build consensus among health care industry stakeholders on a set of operating rules to facilitate administrative interoperability. And as we look at this particular interaction people have a lot of ideas and are ready to move on to the next transactions, and what we're trying to do is say okay, let's slow down, let's get this one done first, and then we'll have a model to move on to further transactions.

At this point we have over 70 organizations participating in CORE, we have health plans, we have different provider organizations, we have medical specialty organizations, we have five government entities including CMS, standard setting bodies, vendors, and then we have consultants and others who are interested in this particular arena. And every month we are adding more organizations as the word gets out, more and more groups are coming in to participate with us.

The whole effort is governed by a steering committee, and I've included the steering committee because I wanted you to see the range of organizations that are taking leadership roles in this effort. WellPoint is chairing it, HCA is vice chair; Humana, PNC, Siemens, Aetna, Blue Cross Michigan, Trizetto, Montefiore and HIMSS are all organizations that are chairing particular workgroups and subgroups to get this done, so you can see the breadth of the organization types that are involved with us on this, and very active I might add.

The role that CAQH is playing is really facilitator. As for the structure of the initiative, there's a governing structure and there are bylaws, and today the steering committee is appointed by CAQH, but the bylaws say that after the first year, or when the first set of rules is completed, we will add a nominating committee, at which point that nominating committee can offer new chairs and new steering committee members and also change the bylaws. So this initiative will become much more independent after the first set of rules is completed, but we wanted to assure that this set of rules moved as quickly as possible and that's why we structured the initial stage as we did.

In getting together early on it was clear that we really needed to agree on a set of guiding principles so that we were all moving in the same direction and really were coming from looking at it within the same context. The first thing is that no one segment of this industry could do this alone, what we learned from the banking industry was that the broader the representation the more success you're going to have in implementing the rules and that's why we really sought to include as broad a range of participants as possible.

Another principle that we take very seriously is that just because an organization is participating doesn't mean that they're committing to implement the rules. We couldn't get people to come to the table early on if we were asking them to commit before they really knew what they were committing to, and so participation does not mean commitment and you'll see a little bit later how we're going to get at commitment.

The rules are being built on HIPAA, so many organizations have put so much energy and resource into HIPAA that certainly it made no sense to go down a different track and so we are building on the 270/271 to develop these rules.

We do support the movement towards real time data exchange but understand that it's going to take a little while for organizations to get there and so our first phase does support both real time and batch data exchange. And the most important thing I need to point out is that we are not building a switch or a database. This has been tried before and really we feel strongly that the market needs to develop the front ends and the back ends that are appropriate for each organization, they need to be making those selections themselves. And so what we're creating is something that will be if you will vendor agnostic, any vendor, any software will be able to implement these rules.

We're going about this through a phased approach. This year we are developing phase one of the rules focused on eligibility and benefits online; phase two, which we'll be doing next year, starting in 2006 for implementation in 2007, will look at extending the eligibility and benefits rules to include accumulators and estimated plan payment. Phase three is really unknown at this point and will depend on what the group wants to move forward with based on how we've accomplished phases one and two. So I'd say we're taking it step by step, but in fact we're trying to move this along as quickly as we can.

Which leads us to our timeline, we started as I said in January, in August we distributed a set of draft operating rules for feedback from all the participating organizations and we're in the process of receiving that feedback now. Our goal is to take that feedback, send it back to the workgroups, have them adapt the rules to the feedback that they receive, and then go into testing and hopefully approval by the end of this year for implementation in 2006.

So what are we talking about? The work is being done by three workgroups, a policy workgroup, a technical workgroup and a rules workgroup, and each of those is focused on its specific areas of responsibility. What they're working toward is the development of a commitment process for commitment to the rules, response times, testing, communication standards, service type codes, patient financial responsibilities, acknowledgements and companion guides; those are the content areas that the workgroups are focused on and where their draft and recommended rules are going to focus.

As examples, the pledge: we really believe, or I should say all the workgroups believe, that there needs to be a binding pledge that states the commitment of the organizations to implement the rules. And so we would be asking them to sign a pledge, which has been developed and is in draft form, to implement and comply with the eligibility and benefits rules. Really what we're hoping to do is establish a commitment and a trust that parties will be participating in the implementation.

The process is also going to include a certification process; we have an RFP out to companies who are currently doing HIPAA certification, and the certification is what it sounds like. We're looking to award seals for certified health plans, vendors and clearinghouses, and for organizations that endorse the rules but whose businesses don't require them to make the transactions, we're asking them to be certified as a CORE endorser, signifying their commitment to the concept.

One of the rules is around response times, and as you can see we have developed proposed response times both for real time and for batch: for real time we're looking at 20 seconds, and for batch we're looking at responses returned between 9:00 p.m. and 7:00 a.m. the next morning. And compliance will mean that an organization meets those times for 90 percent of transactions within the calendar month, so that's the guideline for meeting certification.
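[Illustrative note: The 20-second real-time limit and the 90-percent-per-calendar-month measure come from the proposed rule described above; the function and sample figures below are invented simply to show how such compliance could be computed.]

    def real_time_compliance(response_times_seconds, limit=20.0, required=0.90):
        """True if at least `required` of this month's responses came back within `limit` seconds."""
        if not response_times_seconds:
            return True  # nothing to measure this month
        within = sum(1 for t in response_times_seconds if t <= limit)
        return within / len(response_times_seconds) >= required

    print(real_time_compliance([3.1, 7.4, 19.9, 22.5, 4.0]))  # 4 of 5 = 80 percent -> False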

Another area that we're looking at are patient identifiers, I know that you were talking about that this morning, we are looking at patient search and match, submission of a HIPAA 270 eligibility only one time and with a minimum amount of ID data to find an individual.

Right now we're looking at encouraging all four HIPAA implementation guide minimum search options: the patient ID, the first name, the last name, and the date of birth. And right now we are looking at ways that the plans can implement this within their own organizations, and we're looking at standards that have been set and are being used by the Minnesota HIPAA Collaborative; those algorithms have been very successful, and rather than reinventing the wheel we're looking at those for our plans.
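[Illustrative note: The four minimum search fields named above (patient ID, first name, last name, date of birth) come from the HIPAA 270 implementation guide options being discussed; the normalization below is an invented illustration, not the Minnesota collaborative's actual algorithms.]

    def minimum_search_options(patient_id, first_name, last_name, date_of_birth):
        """Assemble a normalized minimum search request (illustrative layout only)."""
        norm = lambda s: " ".join(s.split()).upper() if s else None
        return {
            "patient_id": norm(patient_id),
            "first_name": norm(first_name),
            "last_name": norm(last_name),
            "date_of_birth": date_of_birth,  # e.g. CCYYMMDD as carried in the 270
        }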

The status right now, as I said, is that there are draft rules, a mission, vision, pledge, and certification process; we are getting feedback and we are hoping for participation for implementation in early 2006.

We are very excited to be moving so quickly on this and to have as much support and commitment as we have from the 70 organizations that are around our table and look forward to finishing this first set of rules and really seeing what it's going to take to roll them out and to get even broader adoption in the industry.

And those are my prepared remarks, I'd be happy to answer any questions.

MR. REYNOLDS: Okay, thank you. I've got one from Jeff and then Stan.

MR. BLAIR: Robin, welcome back. Actually what I'm about to say is not as profound as what you couldn't hear, was the establishment of CORE in any way a response to the proliferation of companion guides that --

MS. THOMASHAUER: I'm sorry, the proliferation of?

MR. BLAIR: The companion guides that health plans were developing because we realized there was another level of clarification that was needed. I'm just wondering if that was part of what was driving this, or maybe you could just put it in context of how all of these folks came together. You've got a large number of health plans and providers that are participating in this, there's broad recognition that this is an issue, and when you started to discuss this it took me a while to understand what was behind it, and the only reason that I understood some of it was because of other testimony that we'd received here at NCVHS. So I'm sort of feeling as if there have been some difficulties that the health plans and the providers have had that enabled you to get this broad participation, and could you help me understand the driving forces behind this.

MS. THOMASHAUER: Sure. Just to back up for a second on companion guides, several years ago we actually tried to standardize the companion guide format, we worked with WEDI to do that and put a standard template up on the web and tried to get the word out and didn't have a lot of success quite frankly as evidenced by the hundreds of guides that are out there. That was not a direct driver though, the driver was really what we were hearing from the physician's offices and practice managers around how difficult it was and how much time their staff was spending trying to get information about eligibility and benefits before they see a patient, the amount of time they spend on the phones, the fact that a lot of plans are making this information available through their own websites but for an office manager to go in and out of ten websites in a day is just not an efficient use of their time. So it was really the lack of information available to the practices at the time of delivery that drove our interest in this area.

MR. BLAIR: And these were practices that already had implemented the 270/271?

MS. THOMASHAUER: I think not a lot at that point to be perfectly honest but I think that their perspective was the amount of information they were going to be getting from the 270/271 was not going to be sufficient and they were still going to have to make those phone calls and go to the websites. So it was really the interest in making more information available to the provider to eliminate a lot of their rework, or the time on the phones.

I was actually around the same time in a physician's office and they didn't have the authorization ahead of time, and I sat there while they sat on the phone for quite some time with the insurer, who will remain nameless, trying to get authorization, and in the end they couldn't. So I think a lot of us felt it personally but we did a lot of quantitative and qualitative research to get to that point.

MR. REYNOLDS: Jeff, I'd like to comment on that also. A lot of people implemented it initially in a way where it was okay to implement the 270/271 with the answer just being yes or no, which was not a complete answer, so a lot of what was out there was that you could be HIPAA compliant and answer yes or no, and this takes it to the next level of actually filling out information rather than just saying yes or no.

MS. THOMASHAUER: And actually the information that we're providing was considered in the HIPAA legislation and they are optional elements, so they have been identified but they're optional and were not available in most cases, so we are building on that.

MR. BLAIR: Thank you.

MS. THOMASHAUER: Did that answer your question?

MR. BLAIR: That helped me quite a bit, thank you.

MS. THOMASHAUER: And one of the things we are doing through this is again encouraging a standardization of the companion guides, so that is part of this discussion is to try and get there as well.

DR. HUFF: So a question of basically how you relate to other organizations. Number one, one could argue for instance that what you create should be input back to X12 so that they could basically reinforce and disambiguate whatever problems there might be with the current standard. And the second question is what is the difference in scope of what you're doing versus, for instance, the HIMSS Integrating the Healthcare Enterprise activities, and do you have any relationship or a different scope or feel from that activity? So that's two questions, sorry.

MS. THOMASHAUER: In terms of X12 we recognize that we need to be closely aligned and working with X12 and so X12 is participating with us in our workgroups and on the steering committee so that they know what we're doing and we get their input as well, so there is a lot of discussion and conversation with X12 on that.

In terms of the IHE I have to be honest and say we have not had a lot of discussion with them, Steve Lieber just joined our steering committee very recently and we don't have a relationship, a deep relationship with HIMSS at this point.

MR. REYNOLDS: Okay, Judy?

DR. WARREN: I just wanted to follow up, one of your slides dealt with patient identifiers and we've heard this morning about those, and I notice that on that one you're also proposing a match strategy of using certain data elements, but you have patient ID in there. Could you talk more about what that ID was, or what that ID is, and why you chose patient ID as one of the match elements?

MS. THOMASHAUER: The patient ID, health plans use patient IDs, they create patient IDs, and they are interested in retaining that as a, today anyway, as one of their criteria because they have assigned that and it means something to them internally into their systems.

DR. WARREN: So this is the health plan ID you're talking about, okay.

MR. REYNOLDS: Is that it? Maria?

MS. FRIEDMAN: I'm interested in how this all fits together with e-prescribing because one of the promises of e-prescribing of course is to eliminate the rework and all the time spent on the phone and you'll be able to do it in real time with some of these e-prescribing systems. And so I'm wondering how that fits into this in general.

MS. THOMASHAUER: Well in the short term eligibility, pharmacy eligibility is a different type of eligibility, so for example RxHub has access to all of that eligibility through their PBMs, so today what we're looking at is medical services eligibility. We would very much and we continue to talk to RxHub to see if we can engage them with us and at some point work together on it but right now the pharmacy eligibility is a separate set of data and comes from a different place within the plans.

MS. FRIEDMAN: Right, but I think as things evolve and e-prescribing becomes part of a larger context of electronic health record systems and electronic medical systems in general where all of these functionalities and all these different things are linked, it just seems like you have a piece of it, the e-prescribing people have a piece of it, and I'm wondering if there is ever going to be convergence.

MS. THOMASHAUER: Well, we would like the operating rules model really to have broad, we see it having broad applicability around each transaction, this is really the first, it's a pilot if you will, it's a first transaction to apply operating rules and see how they work. As I said our phase two and phase three, we think this is a good model down the road for interoperability, it would just be required to create those operating rules around each type of transaction. So it's really a model to look at.

MS. FRIEDMAN: Just one last question, do you think those operating rules that you've established for this will hold for the other transactions or do you think you're going to have to wait and see and judge it transaction by transaction?

MS. THOMASHAUER: Well these rules that we're creating are really just for this transaction and so we're creating a model and a structure really for creating those operating rules and then hope that this process is successful and that as more and more participants engage in this process we'll take on more and more transactions. So I think it has the potential to really support a lot of different types of transaction.

MS. FRIEDMAN: Because I look at those rules and I do think about their applicability across other areas --

MS. THOMASHAUER: And if you look in the financial industry they started with one set of transactions and then built and built and built and today we have a lot of expectations that those things happen invisibly for us and they do because of that, but they also had to build it one transaction at a time.

MS. FRIEDMAN: Thank you.

MR. REYNOLDS: I have a couple questions, Robin. You said after the first year that CORE would become somewhat independent from CAQH but I think one of the reasons that the industry grouped up is because of CAQH's focus and leadership and do you worry about sustainability, especially as the transactions change and especially as people want more and more information, if it can remain sustainable outside your auspices?

MS. THOMASHAUER: I didn't mean to say it would become more independent, but it will be more self directed I think is a better way to say it. We believe that to get the kind of support we need as broadly as we need it, it can't be just the health plans driving this. And yet the structure, even with a nominating committee, in the end the rules the way it's currently configured will go through the CAQH board. And the CAQH board has a veto power, they do not have an approval power, so it's 20 days and if there's no veto they are the rules. And I think that, I didn't really mean to say it would be independent but it will be more self directed. What we heard a lot in the early days had to do with the health plans running this, and there's healthy skepticism around that from the provider community, and we really want this to be a collaborative effort and want to have broad participation, and so we felt that it needed to be more self directed as it got on its feet and had more confidence in its ability to make the decisions. But that is why we kept control the first year, because we wanted to keep it moving, so there's going to be a balance there, Harry, but independence is probably too strong a word.

MR. REYNOLDS: Another question, any time anybody talks about batch and real time, real time to most people sitting at a desktop is two seconds. Is the 20 seconds, back to your point a minute ago, a health plan standard or is that something that the doctors and others have agreed would be an appropriate real time? Because one of the things that this committee continues to look at is adoptability, whether or not standards or anything else that are out there are going to be held to, and so 20 seconds was an interesting choice.

MS. THOMASHAUER: And it's a compromise, but I'd like to introduce Gwen Lowes(?), who's our project director, who is involved in all of those discussions and really can answer them in some greater depth than I could.

MS. LOWES: It's a compromise really, but it's really a round trip. As Robin has talked about there are a number of stakeholders at the table, everyone from the providers to the vendors to the clearinghouses to the health plans, and we wanted to make it a round trip versus hops in a system; we're at the point right now where we're looking at the hops, who's responsible, and what time they have in that 20 seconds for their hop. We also have a lot of the Blues plans around the table and we want to make sure we're working closely with Blue Exchange because that's working so well right now, it's a great model for us to look at, we've got some great lessons learned from that --

MS. FRIEDMAN: For the uninitiated what is that?

MS. LOWES: I'm sorry, the Blues have a capability to exchange eligibility data across their national plans and so we've been taking some lessons learned from Blue Exchange and some of the compromises we've learned from that model too. So I guess it's 20 seconds full of hops. And do you mind if I comment on the companion guide question you had asked, I think Jeff you had asked it earlier? It wasn't, as Robin talked about, absolutely the driving force, it was really the administrative simplification piece, but a lot of the vendors are at the table because there are so many companion guides out there and they're hoping to reduce those companion guides by having a set of operating rules. So indirectly it was a driving force to get the vendors to the table.

MR. BLAIR: Thank you for clarifying, and could you clarify one other piece in your response here, you were saying it's a round trip, I have a feeling that round trip may mean something a little more complex than what I thought of as an inquiry and a response, because it is 20 seconds. Could you elaborate on what you mean by round trip?

MS. LOWES: From the point where the inquiry is sent from the provider's office, when they actually hit the button on the inquiry, to when it comes back to their office. So the eligibility inquiry leaves the provider's office through the vendor system of their choice, maybe it goes directly to the health plan, maybe it goes through two clearinghouses or one, depending upon how the vendor works, that's completely up to the vendor, we're not getting involved in any trading partner arrangements in phase one. And then the information comes back to the provider's office so they actually can view it --
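
Since the 20 seconds covers the whole round trip, the open question is how to divide it among the hops. The allocation below is purely hypothetical, a short Python sketch only meant to show that the per-hop budgets have to sum to the round-trip limit; CORE had not settled per-hop figures at the time of this testimony.

```python
ROUND_TRIP_BUDGET = 20.0  # seconds, provider office out and back

# Hypothetical per-hop allocation; not a CORE figure.
hop_budget = {
    "provider_system_out": 2.0,   # practice management / vendor front end
    "clearinghouse_out":   3.0,
    "health_plan":         10.0,  # receiving the 270 and returning the 271
    "clearinghouse_back":  3.0,
    "provider_system_in":  2.0,
}

assert sum(hop_budget.values()) == ROUND_TRIP_BUDGET
```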

MR. BLAIR: Does the fact that it goes through clearinghouses, is that what tends to increase the response time?

MS. LOWES: I think everyone has a role in improving the response time, it's not one stakeholder. Some of the vendor systems will work a certain way, the provider has a role in that they actually are going to have to follow certain rules in order for the eligibility inquiry to work better, and the health plans have a role, so unless everyone is on the same page you can't reach a better response time because we're all dependent upon one another.

I had one other comment. Maria, I think you asked the question about e-prescribing and if there was applicability to other rules; there are a few things that the group is looking at with regard to telecommunications, like an HTPTS(?) standard in phase one for CORE, that may directly be applicable to other transactions as we move in that direction. Those that are more specific to eligibility, like the patient identifiers, aren't as clear, but there are areas that some of the RHIOs have looked into and we've looked at lessons learned, like the companion guide rule that Robin had talked about, whose flow and format is involved in phase one of CORE, and then also an HTPTS standard. There are a few other ones in that phase one package that may work well for other transactions.

MS. FRIEDMAN: Just a follow-up on Gwen's comment, I think that's important because one of the things in the subcommittee's charge is looking at authentication and non-repudiation, and of course as we go to more open systems having a more secure way of communicating information over the internet is a good thing and more and more people are doing that and so --

MS. LOWES: And the group has spent a lot of time with the draft HTPTS standard, which we can share with you if you don't have it yet. They've spent a lot of time in phase one, as you were talking about, on what's implementable and what's feasible, do we set a minimum standard around the format for authentication or is it just the data elements in phase one that have to be included, with the format up to everyone else, and that's the direction that the group is in right now: what's feasible for phase one is the minimum standard, that authentication needs to be included and certain data elements, but how they're formatted is up to whoever implements it.

MS. FRIEDMAN: Because again, that's a continuing issue not only in terms of e-prescribing but especially as we go to e-signature or other things for controlled and non-controlled substances, some of these other transactions.

MS. LOWES: You had asked about HIMSS, I think it was. One of the things we've been careful to do, as Robin had talked about, is getting X12 and HIMSS at the steering committee level so we don't create any rules that are not compatible with other rules that are being created, and you obviously can't have interoperability if the rules don't work together, so we've been looking at and trying to apply the lessons learned at the RHIO level, HL7, X12, anything that might be good for us to know so that we don't go down a certain path where clinical and administrative can't speak.

MR. BLAIR: In the answers that you have been giving us I've started to have a different image of the environment that CORE is mapping itself against because originally when you started to talk to us I was basically thinking of providers and payers communicating back and forth directly and it sounds like you're really thinking that this communication is going to be going through a health information exchange or some other hub or multiple clearinghouses. Is my understanding correct now of the background that you're working within?

MS. LOWES: Yes, and I guess the response time is a perfect example in that we've had all the stakeholders at the table to figure out what is feasible, and that 20 seconds is feasible but it's a push for the industry. And if you look at another rule, we have service level codes that are currently optional under HIPAA but they're not being used in the industry, and it would really help the providers to have them; the payers could offer those service level codes, but if the vendors don't change their systems to have that detail incorporated into their front end it's not going to do any good for the payers to change their systems, and then the providers aren't educated on how to use the data.

MR. BLAIR: Now the one thing that I don't understand, I mean I think what you're doing is great and I think it's the right thing to do, but since you're beginning to have performance rules for the system as a whole how do you relate that back to accountability for individual partners or players within the system? Because I can see finger pointing going on --

MS. LOWES: As Robin had talked about, there is a certification component in phase one, and the group said they want to do something modeled around what happens with HIPAA, that before any type of enforcement happens, or hard core finger pointing, every effort is made to remedy the situation. So there is a process in place in the draft CORE phase one rules for certification enforcement: if a complaint is filed against someone, the filer has to be involved in the transaction, so if a provider's office feels that for instance the response time isn't being met, they can file a complaint, and then CORE has the responsibility to identify who in the hop, specific to response time, may have the issue, and every effort is made to resolve it and provide the opportunity to improve. And in phase one the worst type, the highest level of enforcement, is that you lose your CORE seal if you repeatedly fall out of compliance after you've been certified.
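
The enforcement path described above can be pictured as a small state machine: a complaint puts a certified party under review, CORE identifies the responsible hop and works toward a remedy, and only repeated non-compliance costs the seal. This Python sketch is an illustration of that description, not text from the draft rules.

```python
from enum import Enum, auto

class SealStatus(Enum):
    CERTIFIED = auto()
    UNDER_REVIEW = auto()     # complaint filed, responsible hop being identified
    IN_REMEDIATION = auto()   # every effort made to resolve the issue
    SEAL_REVOKED = auto()     # repeated non-compliance after certification

def handle_complaint(status, resolved, repeated_noncompliance):
    """Walk one complaint through the phase-one enforcement path."""
    if status is SealStatus.CERTIFIED:
        status = SealStatus.UNDER_REVIEW
    if status is SealStatus.UNDER_REVIEW:
        status = SealStatus.IN_REMEDIATION
    if resolved:
        return SealStatus.CERTIFIED
    if repeated_noncompliance:
        return SealStatus.SEAL_REVOKED
    return status             # still working toward a remedy

print(handle_complaint(SealStatus.CERTIFIED, resolved=True, repeated_noncompliance=False))
```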

MR. BLAIR: So that's really neat, so you're setting up a process for accountability and for reconciliation.

MS. LOWES: Absolutely.

MR. BLAIR: That is really neat.

MS. LOWES: But I want to emphasize about the process we're setting up, there were a number of people that wanted it to be much more stringent for the accountability as well as the enforcement, and those that wanted it to be less stringent, and we're taking an approach that the group feels is doable for phase one, focused, as Robin had talked about with the pledge, on demonstrating that the industry is trying to work together to solve the problems versus penalizing people so they won't come to the table.

MR. REYNOLDS: One last comment, when the word industry is being used I hope that it includes PBMs, I hope that it includes practice management systems, because, I'm just asking, if it comes across as just the membership of CAQH or CORE, then back to Jeff's earlier point, if two members of CORE get it to the front door of the doctor's office and the practice management system can't handle it, then 20 seconds to the person who's actually sitting at the desktop may be a misnomer of significance, so can you comment on that.

MS. THOMASHAUER: We couldn't agree with you more. When we say the industry we do mean the broad industry, we're not referring to the payer industry.

Before we started this we took this idea to a broad range of organizations including practice management vendors, because without their willingness to adopt the rules there's no way to get the data from one end to the next, so we feel strongly about the participation, and the materials that I've left behind include the participation lists and you can see the range of vendors that are even today on there, and certainly we hope to engage a much wider range of vendors but we think it's a pretty good start if you look at that list.

If you are working with organizations who are not involved with this yet please encourage them to do that and to give us a call because we can't do this if every piece of this pie is not playing at the table. And one of the things that we learned from the financial industry was that without the right government agencies engaged it wouldn't happen also and that's why we're so delighted that CMS is involved with us because they are the biggest payer out there.

MR. REYNOLDS: But as we look at HIPAA ROI, something like a CORE seal of approval might be a way to get some of this other ROI we've been looking for and might be a way to get people to buy in.

MS. LOWES: One of the things that the long term vision subgroup, which sits under the policy group, has been doing is not only looking at what could happen in phase two and later phases, they've also developed some measures of success that participants of CORE are going to be asked to track, and it will be everyone in the hop, so the plan, the clearinghouse, the front end vendor, and the providers. They've set up a process where you measure ROI by tracking what it cost you to implement the CORE rules in phase one, and for some people, if they adopted more of the optional pieces of HIPAA early on, it will cost less, and for those that didn't it will cost more; some vendors don't have an eligibility product, or they don't have it to the level they'll need, so they'll need to make changes.

So the cost will vary by participant. For the ROI that group has picked out about two to four measures to track, either reduction in FTEs spent on checking eligibility or increase in electronic transactions, and assuming those two to four metrics are done by stakeholder, assumptions are made up front by the group about an average cost of an FTE and an average cost of an electronic transaction, so it's going to be something that is national versus regional. The group will be tracking the ROI by stakeholder and publishing that after a six month period of using the rules. And for CAQH, one of our key roles is going to be getting the volunteers to submit their data in standardized formats and agree to the average cost for some of these items, which is a challenge as I'm sure you all know.
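
As a rough illustration of the measurement approach described, the Python sketch below computes a six-month return from an implementation cost, an FTE reduction, and a shift from manual to electronic inquiries. Every average cost here is invented; the actual metrics and national averages were still being agreed by the CORE participants.

```python
# Assumed national averages for illustration only -- not CORE figures.
AVG_FTE_ANNUAL_COST = 45_000.0       # fully loaded cost of one eligibility FTE
AVG_PHONE_INQUIRY_COST = 3.50        # cost of a manual phone/website check
AVG_ELECTRONIC_INQUIRY_COST = 0.25   # cost of a 270/271 transaction

def six_month_roi(implementation_cost, fte_reduction,
                  manual_inquiries_avoided, electronic_inquiries_added):
    """Return (savings, roi) over the six-month measurement period."""
    savings = (fte_reduction * AVG_FTE_ANNUAL_COST / 2          # half a year of FTE savings
               + manual_inquiries_avoided * AVG_PHONE_INQUIRY_COST
               - electronic_inquiries_added * AVG_ELECTRONIC_INQUIRY_COST)
    roi = (savings - implementation_cost) / implementation_cost
    return savings, roi

# A stakeholder that spent $50,000 on phase one, freed up one FTE, and
# replaced 8,000 manual checks with electronic ones.
print(six_month_roi(50_000, 1.0, 8_000, 8_000))
```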

MR. BLAIR: That's outstanding.

MR. REYNOLDS: There are never challenges with volunteers.

MS. THOMASHAUER: Just to get back to one of the questions, and then we don't need to take any more of your time: we don't know what the process is going to look like down the road. Today we know that we need to include the payers, the providers, the clearinghouses, the practice management systems, but whether or not in the long run we need clearinghouses, right now they're very important, but our rules can facilitate the transaction between any two parties so it may evolve differently, we just don't know, but we want to create these rules so that they can support the transaction no matter how many parties are involved in it.

MR. REYNOLDS: Thank you, very helpful information, appreciate it. We wish you success.

MS. THOMASHAUER: And if anybody is interested in seeing the draft rules we'd be happy to make copies available.

MR. BLAIR: Yes, outstanding.

MR. REYNOLDS: We're right on schedule, that's a good thing. Next, while she's getting set up, is Ms. Halley, who's from the FHA and is going to talk to us about the CHI standards.

Could everyone on the phone please identify themselves?

DR. DESI: This is Dr. Desi, Social Security Administration.

MR. KROTMAN: This is Alan Krotman and Brent Han, we're in the FHA PMO.

MS. ORVIS: This is Nancy Orvis, Military Health System, DOD.

MR. REYNOLDS: Is that all your friends, Beth?

MS. HALLEY: There might be a few more coming.

MR. REYNOLDS: Okay, we'll go ahead and get started.

Agenda Item: Update on Key Issues: CHI - Ms. Halley

MS. HALLEY: My name is Beth Halley and I would like to thank Mr. Reynolds and Mr. Blair for allowing the CHI Workgroup to come forward and present an update to you all today, so thank you very much.

I work with the MITRE Corporation which is a federally funded research and development corporation and we are in a position supporting the Federal Health Architecture with HHS. I'm also the facilitator for the Consolidated Health Informatics Project and have been nominated to come and present to you today. I would like to mention that Marcia Insley who is one of our co-leads for our allergy workgroup is here in the room with us, Dr. Desi who is one of our leads and I'll mention that again in the presentation is on the line, and Dr. Hedrix(?) and Dr. Swasha(?) from NIH, also one of our co-leads, may be joining us.

The purpose today is to present you all with an update of CHI, I am not a subject matter expert in all of the areas but I do facilitate all of the workgroups. So I will be giving you an overview of where we are with CHI today.

One of the main changes probably since the last time you heard about CHI is that it has been reorganized to be under the Federal Health Architecture program under the Office of the National Coordinator, and we'll talk a little bit about that. We're going to give you a brief update on each of our workgroups as I mentioned. I want to talk a little bit about some of the collaboration activities, we've been working with USHIK, the caDSR, as well as NIST, and then we'll complete the presentation today with some of the next steps as we move forward with CHI.

As I mentioned I think the last time we spoke to you the CHI Initiative was one of the OMB initiatives in the eGov presentation, I believe the last presentation you may have received was back in May of 2004 where approximately 20 of the standards were brought forth for recommendation. Since that time the CHI Initiative has moved under the Federal Health Architecture, Captain Forbes at HHS is the program manager for the FHA project, and that is under the Office of the National Coordinator as I mentioned.

The national coordinator is in the process of establishing four different departments and these departments are identified here and the FHA, the Federal Health Architecture, of which CHI is now part of, will be considered part of the Interoperability and Standards Department, so that's where we will fit now as CHI moves forward.

One of the other changes that you may or may not be aware of is when CHI moved under FHA a new website was created with all of the standards and it basically migrated the OMB website over to HHS. Listed here is that URL for those of you who may not have had the opportunity to see it, and you will find it, it is under the Office of National Coordinator, Federal Health Architecture. All of the reports that you all have adopted are part of that site as well.

And as you are well aware, the Office of the National Coordinator is responsible for the nationwide look at health information technology, so CHI fits nicely within that organizational structure. The co-leads for the CHI workgroup are Gail Graham from the VA and Captain Mary Forbes from HHS, and that is at the workgroup level.

As we move down to the sub-domains, or if you like the actual terminology domains where we are working on adopting and recommending standards to you all, we have three workgroups currently active. A new workgroup that you're probably not familiar with, because it was created after the original phase of CHI, is allergies, and I'm going to talk a little bit about what we're doing with allergies; as I mentioned Marcia Insley from the VA is our co-lead for that and Marcia is here with us today.

Disability and multimedia, these are both domains that you perhaps remember, they were part of the phase one, very complex areas, and they are two of the areas that we have reestablished and are taking somewhat new approaches on how to move forward with adopting standards for those areas, I'll talk a little bit about that as well.

The allergy workgroup, similar to all of the CHI efforts, is interdepartmental; we do try to look across all of the federal agencies and bring in subject matter experts that have experience and interest in this area. The CHI Allergy Workgroup right now includes CMS, DOD, the EPA, the FDA, the National Library of Medicine, and the VA. And basically the approach that this workgroup has taken is to get a framework for what allergy related terminologies we need to look at in order to come up with vocabulary standards in that area. And the framework that we looked at was the HL7 information segment for both allergy and adverse reactions, and within those information segments we've come up with the different vocabulary needs for addressing allergies.

Also in that process it has been brought forth to our workgroup that HL7 is working on the version three information model, which really does take a little bit of a different look at how allergies and adverse reactions and intolerance and observations are considered and the concepts that are related to them. However we really feel that the vocabulary work that is being looked at under the HL7 2.0 versions will be able to be implemented in version 3.0 as well. So I just want to give you the framework for how we determined what vocabulary needs we were looking at.

Within the HL7 environment we were looking at things like allergen type (is this a food allergy, a drug allergy, an environmental allergy), allergy severity (mild, moderate, severe), allergy reaction (is this a rash, is it an anaphylactic reaction), and the allergen name and the allergen group. This is where this field gets very complex, particularly as we get into the terminology for drugs, drug names, and drug classifications, and we have a lot of experts; FDA, VA, and DOD have all been very instrumental in helping to work through this very complex area.

As you may know we do have some CHI standards that have been adopted in this area, the NDFRT is one area for drug classifications but it looks really at only certain aspects of drug classifications so we were trying to look at how to address drug classifications as well as drug names. We also are looking at combination of both proprietary and non-proprietary standards in this field.

And some of the strong candidates, just to give you a sense of who seems to be the strongest candidates in this field: SNOMED; the FDA SRS, which is going to be including a whole combination of drug terminologies; the UNII codes, which are the unique ingredient identifiers; the structured product labeling, which will include things like the drug names which we'll be pulling from RxNorm; and the VA-DOD non-drug allergens list, which is a collaborative effort that's been worked on through their health data repository effort. We're looking at having that mapped into SNOMED and hence into the FDA SRS and potentially the EPA SRS. There is also the NDFRT, which I mentioned for drug classifications, and the Food Allergen Labeling and Consumer Protection Act to look at food classifications.
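
To show how the HL7-derived fields and the candidate vocabularies fit together, here is a small Python sketch of an allergy entry. The field names and the pairing of each field with a code system are illustrative guesses based on the candidates just listed, not the workgroup's final recommendation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AllergyEntry:
    """Illustrative allergy record following the fields described above."""
    allergen_type: str                         # food / drug / environmental
    severity: str                              # mild / moderate / severe
    reaction: str                              # e.g. rash, anaphylaxis (SNOMED candidate)
    allergen_name: str                         # drug names drawn from RxNorm via SPL
    allergen_name_code: Optional[str] = None   # e.g. an FDA UNII code
    allergen_group: Optional[str] = None       # drug classification, NDFRT candidate

penicillin_allergy = AllergyEntry(
    allergen_type="drug",
    severity="severe",
    reaction="anaphylaxis",
    allergen_name="penicillin G",
    allergen_group="penicillins",
)
print(penicillin_allergy)
```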

Where we are in the actual process with allergies, this is a new terminology workgroup that got developed or initiated this summer, we are in the process now of developing a draft report which we would like to have the opportunity perhaps at your next meeting to bring that report forward with our recommendation. And as I mentioned some collaborative efforts with some of the other allergy vocabulary initiatives going on like HL7.

The disability workgroup, and I know Dr. Desi is on the phone as well, is again a very collaborative effort. Dr. Desi is with the SSA and Jenny Harvel(?) is with ASPE, they are our two co-leads, and AHRQ, CDC, CMS, DOD, the Department of Labor's Workman's Comp, which has just recently gotten involved with our efforts, the NLM, the VA, and the VBA, the Veterans Benefit Administration, also recently involved, are participating. It's interesting, within the disability domain, depending on the area, there are very different uses for disability terminology, and if you remember from phase one, from the report that was brought forth to you, part of the reason that a standard did not get adopted is because the terms are used in such different ways. For instance CMS looks at it from a very clinical perspective, working with the MDS, the minimum data set within the nursing home environment, where SSA is looking at it from an eligibility and benefits standpoint, and the way they classify and determine is really based on a Congressional mandate. So we have two very different needs within this domain.

So the approach as we moved forward into phase two, knowing that no one terminology can really at this point support the many different uses, was for the different organizations to come up with use cases for this terminology. So we are in the process right now, CMS is working on a clinical use case, the SSA is working on a disability eligibility and benefit use case, and what we'll be doing as we look forward is to try to look across the federal government, ensuring that we're looking at all the different uses of the terminology, and then trying to figure out a way of harmonizing the different terminologies, like ICF and SNOMED and LOINC and the different ones that address different areas, and see if there is a way to harmonize amongst them. And if not there may be a need to adopt different standards for different use cases.

Where we are right now: two of our use cases have been defined, we're looking at different vocabularies to match those use cases, and as I said the end point is trying to determine whether or not we'll be able to identify one particular standard or multiple standards, and if there are multiple standards, how we are going to continue to harmonize across both the use cases and the federal government.

May I ask Dr. Desi at this point if he has any comments or should we wait until the end, Mr. Reynolds?

MR. REYNOLDS: If it fits in best, you've got the floor for the next period of time so however you want to orchestrate it.

MS. HALLEY: Dr. Desi, do you have any comments at this point?

DR. DESI: The connection is not real good but let me say, I think you've done a good job of summarizing what we're looking at here, which is that there are different uses of disability vocabulary across the different federal agencies, and in one sense we have to focus not on the concept of disability, because that's a legally defined term, as opposed to the medical terms, which may be more uniform throughout the agencies. In other words disability to Social Security means one thing, it means something else to VA, it means something to DOD and it means something else to CMS. But if I say this individual can only lift up to ten pounds, that's straightforward, everybody understands that.

What we're looking at doing, and I hope I'm not being repetitive, like I said it wasn't the best connection, is to define a core vocabulary with a hierarchical structure that supports it, and then each agency can map its particular definitions to what's there. In other words for Social Security when we say severe we may mean something that comes out to a certain code, say at a level two, where CMS may say well that same type of thing for us is a level four. And we've also taken the additional step to recognize that at least for Social Security we may have to make certain regulatory changes to fit in to whatever the adopted standard is; in other words we may not get 100 percent harmonization of our needs with the vocabulary that's finally selected, so we may have to make some tweaks to our regulations to harmonize with it and adopt it.
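
Dr. Desi's "core vocabulary plus agency mappings" idea can be sketched very simply: a shared, hierarchical core concept everyone reads the same way, with each agency mapping its own regulatory term, at its own level, onto it. The codes and levels in this Python sketch are invented for illustration.

```python
# Shared core concepts with a hypothetical hierarchical code.
CORE_CONCEPTS = {
    "LIFT-010": "able to lift no more than ten pounds",
}

# Each agency maps its regulatory term to a core concept and its own level.
AGENCY_MAPPINGS = {
    "SSA": {"severe limitation": ("LIFT-010", 2)},   # SSA's 'severe' comes out at level two
    "CMS": {"severe limitation": ("LIFT-010", 4)},   # the same wording maps to level four for CMS
}

def translate(agency, local_term):
    """Resolve an agency-specific term to the shared core concept and level."""
    code, level = AGENCY_MAPPINGS[agency][local_term]
    return CORE_CONCEPTS[code], level

print(translate("SSA", "severe limitation"))   # ('able to lift no more than ten pounds', 2)
print(translate("CMS", "severe limitation"))   # ('able to lift no more than ten pounds', 4)
```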

And I think that pretty much covers what we're doing and we're going to try to move this along fairly quickly because in a sense we're at least a year behind most of the other workgroups in terms of we haven't actually recommended a standard much less gone to the point of how we're going to implement that standard. That's about all I have.

MS. HALLEY: Thank you, Dr. Desi.

The next workgroup that CHI has been working with is the multimedia workgroup, again and also a workgroup that came to you a year ago in May, presented a recommendation, but really wasn't able at that point to come up with consensus. So the group has been working pretty diligently since then in trying to come up with a consensus report that they could bring forward to you.

One of the steps that they have taken is also to break out six different types of scenarios that may not fit into what the recommendation would be; for instance DICOM is the strong candidate for multimedia but there are many situations where DICOM, particularly from a transitional standpoint, may not be the right solution. So the report has broken out six scenarios. To give you an example: if you're a DICOM system and you need to send to a non-DICOM system, that non-DICOM system might be HL7 compliant, how would that be addressed. So there are six different scenarios that the report addresses at this point.

Where we are with this, we are in the final stages of developing consensus, there is still some vetting to be done with the report, and as I mentioned with allergies we hope that at your next meeting we'll be bringing forward a report for recommendation.

MR. REYNOLDS: When you say next meeting did you mean December or February or --

MS. HALLEY: I was thinking it was December, keep our fingers crossed on that.

Some of the collaboration activities that have been going on since the last time CHI presented to you, one of them is with USHIK, the United States Health Information Knowledgebase. This is an initiative under CMS, and what it is is they've developed a registry that stores all of the different CHI standards. And when I say that it may store them, it may also be links to them, so if they are like SNOMED, a link to the NLM system, etc., so this is a portal, you can go there now, it is being developed, we have presented it to the CHI Workgroup, and one of the suggestions or one of the approaches that has been presented to us is to link that through the next initiative noted here, the caDSR, which is the National Cancer Institute's registry --

MR. BLAIR: C a D --

MS. HALLEY: ca for cancer, small ca for cancer, capital DSR and that's the data standards repository.

MR. BLAIR: Thank you.

MS. HALLEY: So the caDSR. And the links are noted here and the NCI folks came and presented to the CHI group and we were going to work on some potential collaboration there between USHIK and caDSR where caDSR may be the back end processing a lot of these standards and links and things like that and USHIK would be the front end portal to the population.

Another system that has come to CHI and presented is the NIST Healthcare Standards Landscape, you also see it called HCSL, and what NIST is doing is creating a system with all types of health care standards, not just CHI, but a place where you can go and look at standards that are available, which may or may not be CHI adopted standards, so this will be a very robust type of system with links and information about all kinds of health care standards. And one of the things we're going to be discussing with NIST is the possibility of how we might use that for not just linking to CHI but actually putting data in there from the workgroups and the agencies and things like that, and we're just starting that process, NIST came and presented to us at our last meeting so we're going to be following up with that as well.

In terms of our next steps, as I mentioned our three workgroups, allergy, multimedia, and disability will be continuing their efforts and we would in a hopeful situation look forward to coming and presenting reports to you in December. One of the other functions that CHI has been involved with, particularly under OMB's guidance, is the need to not just identify standards but how do you implement these standards, so the idea of developing implementation guidelines. By the end of the year with the assistance of DOD and VA we hope to have a DICOM implementation guideline and an HL7 CDA implementation guideline available. They are both currently in draft forms at this point.

Continue the collaboration efforts, which I mentioned, and also, probably very significantly, coordinate with the new office under the Office of the National Coordinator and their efforts to develop standards harmonization through the new RFP that will, I guess, shortly be awarded, and the other activities within the Interoperability and Standards Department.

Thank you.

MR. REYNOLDS: Okay, thank you very much. I've got a question from Jeff and then Simon and then Stan and then me.

MR. BLAIR: All right, Beth thank you for bringing us up to date on what's been happening with CHI and the Federal Health Architecture, I'll have to wind up saying I'm kind of excited because this is an area that I think is critical to the development of the infrastructure upon which so much of our health care applications will be riding.

One aspect that I've run into in the part of the world that I'm living in part of the time, Arizona, Utah, Colorado and New Mexico, the Four Corners Tele-health Consortium, is that we're trying to see how we could forge harmonization of the emerging tele-health networks, which as you probably are aware tend to provide imaging, video and audio and stills, in many cases to rural and under served populations. How do we start to link that to the emerging health information exchanges, and how do the standards for both begin to enable us to do that, how do we harmonize those? Has CHI or the Federal Health Architecture begun to look at that in terms of RHIOs?

MS. HALLEY: Mr. Blair, I'd have to be honest with you, I am not familiar with efforts to harmonize with the RHIOs and the tele-health efforts. The effort particularly in the multimedia group has been really to look at as I mentioned those six scenarios, and particularly in a rural situation we may be in a situation where you have a large hospital trying to communicate with a rural situation that may not be DICOM --

MR. BLAIR: Because I think of DICOM for the most part as radiological images.

MS. HALLEY: I believe it also has the audio, the wave form data, and some of the other wave form audio and video, and I believe they've actually looked at each of those areas to potentially adopt DICOM. And I apologize, I don't really have on the line to my knowledge any of our subject matter experts on DICOM. But I do know within the multimedia they are pieces that they are considering, all of the different types of multimedia. I'm sorry, Nancy, I didn't realize you were on there.

MS. ORVIS: If I could clarify the question, I think what the gentleman is asking about in terms of tele-health issues may or may not apply; CHI has been mostly dealing with the content of imaging and audio that needs to be contained within electronic health records, so I'm not clear exactly on all the other issues he's dealing with with the RHIOs, but our focus has been primarily on how you capture that and put that within a record, not how you're doing the tele-video or tele-health consultation itself.

MR. BLAIR: Let me clarify then, okay, we have emerging health information exchanges which use all of the standards that CHI has elaborated on so far but in parallel there's a growing connectivity across the country now with tele-medicine and tele-health which tends to deal with stills, videos, and audio reaching out primarily to rural and under served populations. Well, I would think that it would be within the domain of the Federal Health Architecture to say how do we combine tele-health and tele-medicine with health information exchanges and what are the standards issues there where they might be incompatible where we have to harmonize those. Because that's what we're starting to look at right now in the Southwest --

MS. ORVIS: -- Captain Forbes as the chair, managing partner, what I can say is that CHI under the scope of dealing with terminologies and content for the record may not be the group that will deal with all those issues.

MR. KROTMAN: And Nancy, this is Alan, and I think also over the next coming weeks some of the RFPs that deal with standards harmonization and the NIN(?), which deals with the RHIO aspects, will be coming out and some of that will help focus I think on some answers that may answer this question.

MR. BLAIR: Well then maybe this question might be timely because if it hasn't come out yet and if the scope that they're setting forth is the traditional one and they haven't broadened it to say how do we include tele-health then maybe I could put that out as a question to say that we may be missing an opportunity here to address a harmonization issue that's getting larger.

MR. KROTMAN: Understood and I agree with what Nancy said, we'll take this back to Captain Forbes and express it to her and see where we can go.

DR. STEINDEL: Alan, Nancy, this is Steve Steindel, and I'm taking off my NCVHS hat and putting on my CHI hat to respond also. And I think Beth used the proper words when she was addressing the group about the status and we're in the final stages of developing consensus on the multimedia report and I think one of the main things that's driving the final consensus that will come out is the depth of where the DICOM standards are proposed as actual standards versus conditional standards. And one of the reasons that we're looking at that is to a large extent around what Jeff has just raised about the penetration of DICOM outside the traditional imaging area in large institutions and our ability to push DICOM out beyond the federal work force while CHI itself is concerned with federal to federal exchange, we've always had the hidden agenda of trying to use it as a tipping point and the big question now is how far can we push DICOM with respect to that tipping point and I think that's what's going to drive the final consensus.

MR. REYNOLDS: Simon?

DR. COHN: I actually was going to ask, I wanted to find out a little more about the disability activity, I guess I'm reflecting that the last time we heard about that it was unclear even what the questions were much less what the answers were and I think that that's sort of how we left it about a year, I guess it's been a year and a half ago now. And I'm hearing that you've gone back and started asking potentially more basic questions like well what are we talking about and beginning to develop use cases. You had referenced use cases but you never, at least when I was listening I didn't hear exactly what the use cases you were talking about were, so maybe you can reflect on that a little bit, just to help ground us for when we see something coming out in December.

MS. HALLEY: I will try and Dr. Desi, if you'd like to chime in at any point feel free. I know with the CMS use case what they are doing is they're taking a look at the concepts within the MDS and saying what are the concepts that we have to capture within the MDS and then what vocabulary is supporting them, and then within them where are the gaps.

Dr. Desi's group is doing a similar thing with the RCF form, which they use to capture some of the disability and eligibility and benefit information, so they've actually gone to the actual source documents and are identifying the different vocabularies, and then it may end up that there is no harmonization between these two; as Dr. Desi was pointing out, the way that they are Congressionally mandated to categorize and classify certain disabilities may never be in line with the way the clinical forms are designed.

So they're looking at two different approaches. One is trying to figure out, since ICF, the International Classification of Functioning, is the standard that SSA and some of the other organizations use, and SNOMED has the concepts and the terminology that support the clinical disability terms, whether we can move forward and get ICF and SNOMED more closely aligned, and whether the federal government, through its contracts with SNOMED, can have some influence on getting that done. Or are they really so different that, as Dr. Desi mentioned, for them to change how they capture and how they classify disability they would need to go to Congress and say we need to change this format.

And that may not happen, so the thought in going through this exercise is that they may come forward to you and say we're going to recommend two separate standards, one that is fitted for the eligibility and benefit needs and one that's on the clinical MDS side, unless there are ways to harmonize those two needs. And then what would happen, say we can get ICF and SNOMED aligned; there have been attempts to work with SNOMED and say is there a way to prioritize this, this is used across the federal government, we have this need, and they recognize the need is out there, but whether or not it's prioritized into building new concepts into SNOMED, or for ICF to build in the SNOMED concepts, one or the other, if that can't happen then they may come forward and say these needs are so different that we're going to need to look at two different standards.

DR. COHN: I think that helps but I guess I'd have just a question of clarification, I actually hadn't realized that ICF had been implemented in the United States anywhere, so has Social Security implemented ICF at this point? Dr. Desi, can you respond?

DR. DESI: No, we have not implemented ICF, we have not implemented any particular standard. We did find that ICF does a better job of meeting our needs than does SNOMED, and as has been pointed out ICF is a classification scheme and it has a very good hierarchical structure, but SNOMED has the granularity for describing clinical terminologies and clinical situations. Not that ICF doesn't do that either, but SNOMED does seem to have some advantage in that area.

The other thing that is of concern is that SNOMED is already set up, I believe, so that it gets revisions or terms added to it about every six months as a regular process; that's not the case with ICF. It's not to say that it's static but there hasn't been a lot of movement with that so far. Now there are some, I think it's Denmark, I'm not sure, that are looking to map the SNOMED terms to the ICF, essentially as one possible solution. I don't know what the status of that process is at this time, just that I've heard that they're attempting to do that.

I think you can see that the difficulty is that we use these terms in different ways, and I think everyone would prefer that we came up with one vocabulary, so that if we're communicating information to CMS and you use the terms clinically, or vice versa and we use them for benefits determination, we don't have to have something to translate these things in between, they should be the same. But there are these hurdles to jump over, especially with regard to ICF, as to whether or not the United States could do a clinical modification the same way it's done with the ICD-9 and the ICD-10; obviously that's out of our hands in terms of actually accomplishing it, but it could be a recommendation, you'd have to find out whether or not that would be possible.

My understanding, just to add a little bit more to this, is that looking way down the road at ICD-11, I believe the World Health Organization is looking to incorporate the ICF into the ICD so that in addition to the diagnostic data you also get disability data along with it, so it all gets coded together. That could also be a good argument for using an ICF structure, but again that's way down the road, though it's something to keep in mind.

DR. COHN: Thank you very much and certainly mapping or the various other approaches you've described are certainly all reasonable. I guess the other question I really had was when you were talking about, now that I understand what your two use cases are the question is are there any other major use cases that you should be looking at, not to prevent you from ever making a conclusion but obviously disability is not a concept that's just limited to Social Security and CMS and I was just wondering whether the Department of Labor or the VA or the DOD or other such groups may have at least occasionally come across this as an issue.

MS. HALLEY: Would you like to join our group because this is exactly what we've been talking about.

DR. COHN: It's very important but it's obviously a complex topic and trying to figure out exactly what you mean by it is not the easiest thing in the world.

MS. HALLEY: We actually have made an effort in the last month or so to reach out both to DOL, we now have a representative from the Department of Labor, and also the Veterans Benefit Administration has just joined us as well in the last month and we hope to do exactly that, make sure that the use cases that we're identifying really look across the federal landscape and make sure that we're identifying all the needs. So yes, thank you, and Marjorie Greenberg who is also a member of your committee is a very active member of our committee and unfortunately she's not here today, but she has brought up similar comments.

MR. REYNOLDS: Stan.

DR. HUFF: Thank you very much for coming to present and I just wanted to applaud the progress that we've seen from earlier presentations and in particular I wanted to compliment the committee on the evolution of this process, basically of seeing use cases, then understanding transactions, specific transactions that would be exchanged, and then adopting vocabulary in the context of those transactions because I think that's the only way you can solve the problem.

If you just talk in general terms what's good for allergies you can't solve the problem, you need to say if we said we were going to send this allergy message and it had this structure and these fields in it then you can say what terminology should go in that field in that message and I think that's the only way that you can solve a lot of the issues. And some of the early work didn't quite have that detail of content and background in it yet and so just, it's very clear for instance for the allergy work now that they've focused in a way that they can really come to a good resolution.

And it's the same sort of situation with the multimedia things. It sounds like we're a little earlier for the disabilities part, you're in the use case definition part; the natural progression of that would be to define the exact messages that you would want, or even if you don't get to the message stage, if you say this is the kind of information exchange we want to support and these are the fields of data that would be communicated in that transaction, then you can resolve issues specifically, and if you don't get to that level of detail then you're not going to have a tight binding to the terminologies in a way that makes it operational and especially interoperable when you start involving DOD and VA and Social Security and everybody in that same transaction.

So I just wanted to applaud the progress that we've seen and compliment you on the work that's been done, I think that's wonderful.

The only other thing would just be a comment that these are the same issues sort of everybody is struggling with everywhere, within HL7 and DICOM and everywhere they're saying oh we've got these great message structures, now how do we bind specific sets of codes to those so that they become interoperable, and that's exemplified by what's called the termental(?) work within HL7 that's looking at how, if we use SNOMED in HL7 version three messages exactly how do we do that, exactly what parts of SNOMED fit into what slots. A lot of that work is being supported and funded by the National Library of Medicine also.

And so I see it as all very complementary and again I would just compliment you on the progress that we've seen here and in the detail and usefulness of what's coming out.
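By way of illustration, the following is a minimal sketch, in Python, of the kind of field-level terminology binding Dr. Huff describes: a coded allergy field in an HL7 version 2.x-style segment carrying a SNOMED CT concept rather than free text. The segment layout follows the common AL1 pattern, and the concept code shown is illustrative filler rather than an assertion about any particular value set or implementation guide.

    # Minimal sketch (an assumption-laden illustration, not any committee's actual
    # specification): build an HL7 v2.x-style AL1 allergy segment whose allergen
    # field is a coded entry (code ^ text ^ coding system) drawn from SNOMED CT
    # ("SCT") instead of free text, showing a "tight binding" of terminology to a
    # specific message field.

    def build_al1_segment(set_id: int, allergen_code: str, allergen_text: str) -> str:
        """Return a pipe-delimited AL1 segment; AL1-2 'DA' marks a drug allergy."""
        coded_allergen = f"{allergen_code}^{allergen_text}^SCT"
        return "|".join(["AL1", str(set_id), "DA", coded_allergen])

    if __name__ == "__main__":
        # The concept code below is a placeholder used only for illustration.
        print(build_al1_segment(1, "91936005", "Allergy to penicillin"))
        # Prints: AL1|1|DA|91936005^Allergy to penicillin^SCT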

MR. REYNOLDS: I have a question, and I'm not as learned as my colleagues on the whole terminology issue, but as you look at CHI, each time they come here their choices tend to end up being somewhat de facto standards, not just in the government but beyond it, because CMS is such a large entity. So I'm going to ask it to those of you who represent CHI and then maybe to Stan also.

So as CHI continues its work, and then you have HL7, and we've heard the word harmonization, and then, Stan, using you as an example of a health group that has electronic health records and does those things, how in the end does it really blend? With EHRs being such a big issue right now, where more people are going to be jumping into it, both individual doctor practices and smaller providers that have maybe been in it before and so on, how does it work?

DR. HUFF: I can address that. The first thing to say is that your concern is a valid one, and what I mean by that is that CHI by its definition is a consortium of government entities, while HL7 and DICOM and the other standards groups are open consensus bodies that include the government as well as providers, vendors, and everybody else. So there's the potential that HL7, following its open consensus process, could reach one conclusion and CHI could reach a different conclusion.

Recognizing that that's possible, in actual fact that's not happening, and the reason it's not happening is that the CHI people have been very good at actually participating in the standards activity. We need to encourage that, with all possible encouragement we can give, because it can be very complementary: HL7 and DICOM and X12 and the others are very good at developing standards, but they're doing it mostly, if not exclusively, with volunteer effort. The CHI activity, on the other hand, may not be everybody's full day job, but it can actually bring real paid staff resources to work on issues that HL7 might not be able to address. And so, I'm talking about how I want it to be, not necessarily how it is --

-- [Laughter.] --

DR. HUFF: I think when we talk about the government taking a lead in standards, that's what we're hoping for: that some professional resources can be brought to bear in ways that we probably wouldn't be able to accomplish, at least not at the same pace or the same quality, as might be accomplished by people who are doing it as their real job. But what it means is that there is a real danger there, and if the CHI people don't come to HL7, don't come to DICOM, don't go to X12, there could be a schism and you could see CHI standards that are not completely aligned with the other standards bodies. So we hope that that participation continues and increases.

And then as a private group, wearing my IHC hat now, what we're looking for is this: we're going to do what we're going to do internally, but when we start talking to public health institutions, when we talk to Social Security, when we talk to CMS, and we have specific transactions that we're going to exchange, then we're going to adhere to what CHI and HL7 and the others do. We may make our internal decisions about adopting terminologies and standards based on what we see happening in the community, but ultimately what we're committing to is that when we communicate information outside of our enterprise we're going to adhere to the standards that are specified by HL7 and by CHI and the other open standards organizations.

MR. REYNOLDS: Because one of the key things that we've always talked about is adoption. For large institutions like yours that's one thing, but as we talk about EHRs all the way down to the small doctor offices, their ability to buy is one thing and their ability to communicate with many people is limited in different ways, so that's the kind of thing I'm getting at; since I don't understand a lot of what you're talking about, I'll take that stance and be the expert on that part. But I think that's the kind of thing. Jeff?

MR. BLAIR: Harry, you referred to the CHI standards as de facto, and I know that from an NCVHS standpoint I really felt that they would be. At least in my thinking, I felt that when Tommy Thompson made the press release, I guess it was either April or May of 2004, and announced all of the CHI standards, it was my hope that the industry would say the federal government is setting an example and setting the direction.

I'm very concerned because I don't think that the industry has reacted that way. I've been in a couple of vendor conferences, and in those conferences, which were specifically designed to speak to vendors of electronic health record systems, less than ten percent of the vendors knew what CHI was. I was very disappointed, and those who did know what CHI was felt that it was only directed to federal government agencies, not to the industry as a whole.

And I'm mentioning this because I am concerned and disappointed, and if this is the case, then to achieve the intent that we wanted with CHI as part of the FHA I think we may have to do something else that is bold and dramatic to get the attention of the private sector, all these vendors: these CHI standards may be identified as the standards that the different federal government agencies have agreed to adopt, but they're intended to set the direction for the industry as a whole. And that last message apparently has not gotten through to the industry --

MS. ORVIS: Sir, could I make a suggestion?

MR. BLAIR: Yeah.

MS. ORVIS: This is Nancy Orvis, I'm with the CHI membership and have worked on these standards for a couple of years with DOD Military Health Systems. I think it will take a couple of years to percolate through, because the main issue for commercial electronic health vendors is that terminology is often embedded in the application; when you buy an application you buy reference terminologies with it from the application vendor. Now DOD, as a health care provider, has put those reference terminologies into contract language, so new things that the DOD Military Health System buys will have references to them, but it takes a while.

Now I think one should wait and see what happens in this next fiscal year, because requirements have to be written, they have to be put into RFPs and put on the street, and I think a lot of the vendors are waiting to see how to incorporate this. In some cases terminology is kind of like rocket science for some of these vendors; it gets into a very deep set of concepts where they're not always comfortable, I think. But I do believe that there will be some small test cases and things coming up at the next annual conferences this fall and winter and spring, where some vendors and some of us in the government have been doing some testing to see how these work.

I think the benefit that you're looking for is that as reference terminologies get embedded in or linked to these vendors' applications, you will be able to see something like a pharmacy vendor or an ancillary vendor or an EHR vendor ask, how do I work with a decision support application? What will be very important is when you put fleets of products together and you see that, because the reference terminology is common, say, a pharmacy system that lists patient medications can interact with a decision support application that advises the physician on the best protocols; they will communicate through a shared understanding of the reference terminology.

So that may be kind of a lengthy explanation, but I have been pleased to see how this is moving. We in DOD have a blanket policy that we want to use external standards as much as possible, and we use that in our acquisitions, but it does take more than a year to percolate through because you have to build that into the process. I think we now have, in various agencies, requirements out there to look at products that can utilize these standards, and I think we're going to see some more maturation of this in the next year or so.

MR. BLAIR: Nancy, what you say is true and I do understand that, and not only is DOD doing that but CMS has also identified that in the DOQ-IT program SNOMED, LOINC and RxNorm will be the required standards. But the observation I was making was that if that's the only way we're going to get the industry to transform itself, the length of time it will take is, I think, longer than many of us feel we can afford. I think we need to get the attention of the vendors, that this is something they have to begin to learn about and design into their next product cycle, because I don't think we can wait another year before we start to introduce them to the fact that it's going to be required by DOQ-IT or by DOD, or that the e-prescribing pilot tests will be done and RxNorm might be part of that. All of that I think is appropriate; I guess I'm just expressing my feeling that we've got to raise the visibility of the role that CHI standards are going to play so that the private sector vendors can begin the work now to get ready. It takes them quite a while to develop systems that implement these standards.

MR. REYNOLDS: Steve and then Simon.

DR. STEINDEL: I'm putting on my CHI hat and I have a couple of responses and comments. One was to Stan's comments: I really appreciate Stan's observation about the number of CHI people who are involved in the standards development organizations and trying to harmonize from that point of view, and I can assure Stan that that is not accidental, it's a deliberate plan.

But I would also like to point out something that Beth alluded to in her presentation: CHI also envisions a very important role for NCVHS in bringing private sector influence to bear on the evolution of CHI standards. The policy that we're going to adhere to in phase two, which was used in phase one, is to have preliminary discussions about where we're going and what types of ideas we're pursuing, and then final discussions. And having been involved on both sides of many NCVHS/CHI discussions in phase one, I can assure NCVHS that those discussions did influence the standards, and I am sure they will influence the standards in the future.

So I don't want this group to be unaware of its critical role in moving these standards forward in a fashion that can be utilized by the private sector even though they are meant for the federal sector.

And in the other respect, with regard to what Jeff was saying about the visibility of CHI: while I probably do agree with him about the number of EHR vendors that might not be aware of or implementing CHI standards, in terms of their installations and how many are actually being used, I would say that a large proportion of them are influenced by CHI and are building the standards into their product systems.

However, as Jeff also pointed out, the cycle time for that, and the cycle time for terminology use in general, since we basically do not use standardized terminologies today in most systems, is going to be long. I don't think we're going to see it in the next year or two; it's going to be five to ten years before we actually see it in place. And I think an important role that the Office of the National Coordinator has seen in this area is to build the CHI standards into their RFP process and also roll those CHI standards into NIST, making them federal standards, so that not just DOD will point to CHI standards when it goes out and tries to procure something, but throughout the federal procurement process we will have NIST standards that we can point to and say, okay, if you want to meet our requirements you've got to adhere to these standards.

So we are making a lot of efforts to make this more visible, I don't think we're going to achieve Jeff's rapid timeframe but I think we're going to be moving as expeditiously as possible.

DR. DESI: Could I ask a question? Has there been any move to include the private sector as part of this process? The reason I ask is that when we started up with CHI II in the disability workgroup we were essentially told that we were not to include the private sector, to find out what they were doing or whether they had any approach to this. And I was wondering, if we're interested in this getting adopted by the private sector but we're not including them in the process, is that a problem?

DR. STEINDEL: Dr. Desi, this is Steve Steindel again. We brought that up in CHI Phase I; it was looked at, and the Office of General Counsel told us that because of the construct of how we were working we could not work directly with the private sector, and the only way we could expose the CHI process to the private sector was through a FACA group like NCVHS. We actually did reopen that question, asking for a more definitive opinion from the Office of General Counsel with respect to, not CHI, but the FHA Public Health Surveillance Workgroup, because the private sector is key to public health surveillance; most of it is done by our state and local partners, who are not part of the CHI process. The Office of General Counsel searched for about a month to try to figure out if we could in any way, shape, or form bring them in, and it turns out that we don't have legislation broad enough to allow them to be part of the internal process, and that's an unfortunate thing that we are all very upset about.

DR. DESI: I can understand that. Is there any legislative affairs liaison to maybe entice Congress to address that issue since obviously the President is very interested in moving this along as rapidly as possible?

DR. STEINDEL: I think that would be something you'd have to bring up with the Office of the National Coordinator.

MR. BLAIR: And that was essentially the message I had too, because I agree with every single one of the points that Steve made, and it's very hard to get this complex, multifaceted industry and the vendors in it to move quickly, for a lot of good reasons. Maybe what I was expressing was that when NCVHS was looking at these standards we had the private sector testifying, and we felt that we were reflecting the consensus of the private sector as well as the federal government.

And then when Tommy Thompson announced it publicly we felt that we had the visibility. So all I was really expressing was surprise that the vendors who are not directly involved in HL7 and SNOMED, all of those vendors out there, were not aware of a lot of these standards efforts or of Tommy Thompson's announcement on CHI. I thought it was important that you be aware of that; it came as a surprise to me, and I think that maybe David Brailer needs to be aware of the fact that even though NCVHS and CHI and HL7 and SNOMED and all are doing this good work, a large part of the industry isn't aware of what we're doing and the direction we're going.

MR. REYNOLDS: Simon? And then we'll take a break.

DR. COHN: Maybe I should try to make a comment about all of this. First of all I want to compliment the work of the CHI, and really what I was going to comment on was to thank you for beginning to work on implementation guides. By way of commenting also on what Jeff was saying, I don't want to overstate or even fully agree with some of Jeff's comments, because I would observe that many of the standards identified by CHI, HL7 and many of the others, are obviously already generally implemented by the industry.

Now what is not implemented at this point is connecting terminologies to the standards, which I think is really what Jeff was referencing. I will tell you that even as we looked at them last year we realized, and I think this is where Stan was coming from also, that it's great to say use SNOMED or use another terminology, but until you have implementation guides you don't get very far. I actually was reflecting on John Halamka's comment this morning, where he was saying, not only do we have these concepts but we actually have implementation guides that we're going to make available to people so they'll really understand how to implement these things down to the specifics.

And until we get to the point where there actually are implementation guides we can point to, ones that show that the answer is not simply SNOMED but rather use these parts of SNOMED in these fields in HL7 version 2.x or version 3.x for these specific purposes, we don't have interoperability and we don't have truly implementable standards.

And so I think the point we saw a year and a half ago was that clearly there were open issues, things like disability and all of that, but we were all really looking forward to the next step of CHI being implementation guides, so that the industry would really know what to do with it. So I think we just need to realize the industry is making progress. Dr. Desi, we are obviously not the same as having someone on your workgroup, but if you have issues we're happy to have a session where we bring them in and more fully discuss issues of disability or otherwise; it sounds like that will be happening in December. We're sorry that we didn't know you needed it before, or else we would have been happy to step forward to be of some assistance to you and your workgroup. But obviously at the end of the day we still are going to need those implementation guides so that people really know what to do with things.

Anyway, that's my comment and thank you again for starting that work on the implementation guides.

MR. REYNOLDS: And to play off of that, as you come forward, if there are others that you think should be part of that same discussion, in other words whether it be HL7 or anybody else, we would like to hear from them as well. We could also use your help: as you want to get more visible, and as we want to do this differently, knowing who else would be good to hear from at the same time would be helpful, at least in building the plan for what we consider.

We thank you very much, nice job, thanks for all of you being on the phone, and we'll take a 15 minute break.

[Brief break.]

MR. REYNOLDS: Okay, we're back and started again, and next Maria is going to talk to us about the e-prescribing pilots --

Agenda Item: Update on Key Issues: E-Prescribing Pilot - Ms. Friedman

MS. FRIEDMAN: Before I start I'd like to introduce Drew Morgan. Drew is new to CMS, he's been in our office and on my team for six weeks now, and he comes to us from NAMSI; we're very glad to have him. One of the reasons I'm especially glad to have him is that they made me the team leader for not only e-prescribing but HIPAA, so I needed help, and now we have Drew, so I just wanted to introduce him to the group.

What I'm here to talk about today is the e-prescribing pilot RFA that just came out --

MR. BLAIR: By the way I think we need to congratulate you for that achievement.

MS. FRIEDMAN: Well, thank you; we'd hoped it would have been out sooner is all I can say. Having said that, I'm going to go through just a little bit of background, which I don't think I need to do too much of seeing the group out here.

But anyway, the MMA, and I went ahead of my slides here, requires that we conduct an e-prescribing pilot during calendar year 2006 for standards for which there is not adequate industry experience. Those of you who are regulars here at the subcommittee know that we've spent a lot of time looking at e-prescribing and talking about foundation standards and other standards that might be tested, and in February CMS issued an NPRM laying out what we might propose as foundation standards, based to a large extent on what came out of this subcommittee; the hearings and all were very, very helpful to us. We're in the process of putting out a final rule, and I'm saying that now because, given the timing on this, we could not say in the pilot RFA what the foundation standards were going to be, just because of the legality of the fact that the final rule was not out, so I wanted to make sure everybody understands that.

The other thing that MMA requires is that the pilot projects be evaluated, and the timing is tight: the pilots run calendar 2006 and there's a report to Congress due April 2007. In order for that report to Congress to be written, the pilots need to be evaluated, and the only way we could see doing that is to run the evaluation concurrent with the pilots. Normally you do it stepwise, but we don't have the luxury this time of doing that. The contractor for the evaluation has not been named yet.

As many of you have seen, we have an RFA out on the street; that's just a term of art, it's a request for applications, because the funding mechanism is going to be cooperative agreements. We're going to make the awards competitively, which means that people are going to have to submit proposals, and we're going to talk in a little bit about that process, what's going to be required, and how it's all going to work together.

Interestingly, we're partnering with our friends at the Agency for Healthcare Research and Quality, AHRQ, to get these applications in and get the awards made; in fact the RFA was put out by AHRQ in collaboration with us. And they will be convening the review panel for the applications that we get as well.

The RFA was just announced on the 15th, last Thursday, and I've included the website; I'm going to read it in case anybody is listening and doesn't have it, I apologize. If you click on this link it will take you right into the application itself, and that's grants.nih.gov/grants/guide/rfa-files/rfa-hs-06-001.html. As I said, the proposals will be evaluated by a peer review group. We have a total of $6 million available, and we envision a maximum of $2 million per award, so you can do the math. A lot of how the money is going to be divvied up and who gets what depends on the number of applications we get, who applies, and how the results are configured.

We envision that we will get applications from some of the big implementations that are already in progress; they may have to make some tweaks to be able to test everything that we're going to ask them to test. But we're also interested in applications including, if not exclusively, from entities of interest like long term care facilities, long term care pharmacies, and folks like that, and that's listed in the RFA as well.

There are key dates. We're going to have a bidders' conference on the 29th, and you have to apply to get on the list to receive the information for the bidders' conference; I'm going to get to that in a moment, but I'd like to compliment Tony Sheath(?), who's here, he was absolutely the first person who signed up, so he gets the award. Letters of intent are due October 7th, applications are due October 25th, and we hope to get the peer review done in November, although the RFA says December, and the awards will be made in December, because according to the statute the pilot projects have to begin January 1st.

Bidders' conference. One of the things we really are interested in is getting questions from potential bidders up front so we can answer as many questions as possible and make the conference more productive and efficient. It's open to any individuals or organizations intending to apply, and AHRQ has set up a special email address for you to sign up, and that is eprescribingrfa@ahrq.gov; you need to register by the 28th since the bidders' conference is on the 29th. So again, submit your questions in advance and that will be very helpful to us. The call will be at 1:00 Eastern and we expect it will run for a couple of hours or more.

The RFA is very specific about what needs to be tested, and one of the things that's important to know is that everything on the list has to be tested together; there's no mix and match on this. Again, it's been very difficult to talk about this since we don't have the reg out and I can't tell you what foundation standards are final and what aren't.

So here's the list as it is in the RFA. Test the formulary and benefit information; NCPDP has actually ratified the formulary and benefit protocol that RxHub donated to them. Medication history, same thing. NCPDP SCRIPT fill status notification; we're looking at the business value and clinical utility of this function, it's out there but nobody uses it much. NCPDP SCRIPT cancellation and change functions, same thing. Structured and codified sig; this is something new that the industry has actually busted its hump to pull together, and it should be ready for prime time at least on a pilot basis.

And then there's the clinical drug terminology; the pilot should determine whether the RxNorm terminology translates to NDC and also works well with some of the other reference terminology vendors who are out there. Prior authorization messages are on the list, and the NCPDP SCRIPT standard version five, the telecommunication standard version five, and the 270/271 version 4010 round out the list. Again, if you've been a regular at the hearings we held over the past year and kept up with our letters, this is all material that was discussed at length and was in our recommendations.
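As a rough illustration of the RxNorm-to-NDC translation question the pilots are meant to answer, here is a minimal sketch in Python. The crosswalk table, the RXCUI values, and the NDC values in it are hypothetical placeholders, not data from RxNorm or from any pilot; a real test would draw the mappings from the RxNorm release files or a terminology service.

    # Hypothetical sketch of an RxNorm-to-NDC crosswalk check. The mapping data
    # below is made-up filler; in practice it would come from the RxNorm release
    # files or a terminology server, and the pilot question is whether every
    # prescribed clinical drug concept (RXCUI) resolves to dispensable NDCs.

    from typing import Dict, List

    # RXCUI -> list of NDCs (all values are placeholders for illustration only)
    RXCUI_TO_NDC: Dict[str, List[str]] = {
        "197361": ["00000-0000-01", "00000-0000-02"],
        "310965": ["00000-0000-03"],
    }

    def translate_rxcui(rxcui: str) -> List[str]:
        """Return the NDCs mapped to an RxNorm concept, or an empty list if none."""
        return RXCUI_TO_NDC.get(rxcui, [])

    if __name__ == "__main__":
        for rxcui in ["197361", "999999"]:
            ndcs = translate_rxcui(rxcui)
            status = "maps to " + ", ".join(ndcs) if ndcs else "has no NDC mapping"
            print(f"RXCUI {rxcui} {status}")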

Basically, one of the reasons we've put everything together in this and want it all tested is that we want to make sure these items all work together with all of the standards that have been proposed, as well as with some of the new ones like the codified sig.

Again, this is in the RFA: one of the things we want to test is whether the right data are being sent, whether the data are usable and accurate, whether they are well understood at all points in the transaction, whether they work, whether the right thing is being sent to the right place the right way, and whether they all work together. Data accuracy is very important, and because some of these things are new it's important to know whether these new functions actually work, and whether they work with what's being used in the industry today as well as with the other functions that aren't used very much at all.

The other things we want to know are: if things didn't work well together, were there workarounds, what were they, how did they work, how can things be improved to address workarounds, how long does it take to turn the transactions around, and does it work for what you need it to work for, can you order what you need to order.

Again, more on the project characteristics and the methods of testing, and this is a point I'm going to make at the end but I want to make it now: these project characteristics are very important, and some of the other things in the RFA are very important, because the applications are going to undergo the regular scientific review process and you want to make sure that your proposal, if you're submitting one, addresses all of these points. There are evaluation criteria included in the RFA as well, but again we're going to be looking at whether you addressed everything we asked you to address, and these project characteristics are important. Another thing that's important is the budget, whether the budget is reasonable and all of that, but I'll get to that.

I'm just going to run down the project characteristics, the methods of testing, and why they were chosen; normally those are handled in applications with some kind of narrative that tees it up and then specifics on what you're going to do and how you're going to do it. The nature of the prescriber pool, including specialty, size of practice, and percent of participation. Number of patients, demographic characteristics, uptake, enrollment, dis-enrollment, that kind of thing; these are the kinds of things that we've discussed as issues and that are important for determining which standards coming out of the pilots will be proposed for adoption in 2008. And of course there are metrics to consider: how robust these things are, how well they work, again uptake, and the use for specific patient populations; that kind of thing is going to be very important and very telling.

Some bread and butter things: where are you going to test it, how many sites, who's participating, what does it look like, baseline numbers of prescriptions per month; you have to have something to measure against, so it's important to know up front, or at least guesstimate up front, what the baseline for prescriptions is going to be. The same thing with callbacks to the pharmacy, because one of the measures of success will be to show how the pilot reduced callback time and basically improved the ROI. And any additional metrics people can provide as to how you might measure success or ROI are welcome; these are not the only ones, but these must be addressed.

Outcomes reported: again, this goes to how you define success and how you're going to look at things coming out of the pilots. Just a few, and I'm not going to run down the whole list: the number of medication errors and the number of adverse drug events that were found or reduced. Rates of hospitalizations and emergency visits associated with adverse drug events; that's another way of looking at costs and benefits, especially with elderly populations. Workflow changes are important, renewal rates; all of these are definitely measures, and if anybody has any other ideas they want to throw in, we'd like to hear them in the applications as well.

We're hoping to get multiple sites and geographic diversity, and we hope to get applications from public/private partnerships. Again, we are interested in Medicare, in pilots that have a fairly large Medicare base; I think it's at 25 percent. We're interested in a range of provider types. I know it's going to be difficult because of the quick turnaround time for getting these applications together, so these are things to consider.

We would especially appreciate applications that would test or use the structured product label and employ that too, so applications that include that will be looked upon favorably as well.

I know we're going to have our own evaluation contractor, but because of the timeframes it would be very helpful for the applicants to tell us how they think their particular pilot should be evaluated. Obviously the pilot sites and the evaluator are going to have to work hand in hand, so it's nice to know up front what capabilities you have and what you think is important, because we're going to have to figure that out very quickly. And of course everybody has to comply with the HIPAA privacy and security requirements.

Again, attention to detail in the RFA is very important. You need to explain who you're partnering with, what the relationships are, and who's bringing what to the table, and that needs to be reflected not only in the narrative but in the budget as well. There's a wide range of entities who can apply, a lot of eligible institutions: for-profits, non-profits, public or private institutions such as universities, state and local governments, federal agencies, and faith- or community-based organizations. And of course the MMA was very specific about who we should consult with and also enter into cooperative agreements with: physicians, physician groups, pharmacies, hospitals, PDP sponsors, MA organizations, and other appropriate entities. So there's a wide range of people who can and should be involved in these pilot projects. Again, we're looking for long term care facilities and rural health clinics as well if we can get them.

And this is my take-away for today; I've said it before and I'll say it again: attention to detail is important. Pay attention to all of the characteristics set forth in the RFA and make sure that you address what we've asked you to address. The budget is a very critical piece, and it needs to be done well and it needs to be reasonable. You know what the pot of money is, so don't come through with an application that asks for $7 million; it ain't going to happen. The deadlines must be met.

Get help with your proposal. I know that a lot of folks who are going to apply, or who are considering applying, have never really done this before. This is kind of standard operating procedure in the academic world, where people apply for grants all the time, and there are people who know how to do this. I'm not suggesting you need to partner with academic institutions, but there are people out there who are professional grant writers and who've been very successful in getting their grants through the application process, so to the extent that you think you need help, do seek it out. Even if you have the best idea in the world and it's well thought out, if your application doesn't pass muster it's not going to get funded, and I can't say that strongly enough.

So thus concludes my rant on the application process and again all of this is on the website, it's in the RFA, and we'll be going over questions at the bidder's conference on the 29th.

MR. REYNOLDS: Thanks, Maria. Jeff, you had a question?

MR. BLAIR: About halfway through your presentation I was hitting my co-chair on the shoulder saying to put me in the queue, but by the time you finished your presentation you had answered every question that I had anticipated, so let me just simply say well done, Maria.

MR. REYNOLDS: I have a comment. When we wrote the letter of recommendation from the committee, we recommended that after the pilots the results be reviewed before they become a rule; I don't remember the exact words we used, but I didn't see anything about that. That's the process and what you want them to do, but how do you see it working after that?

MS. FRIEDMAN: Well, two things are going to happen: one is we're going to have the report to Congress, and then we will have a subsequent rulemaking round to adopt the standards that make it through the pilot process, the ones that work and are interoperable with other things. So these pilots are very important, because the results really feed into the next round and into what will be adopted in 2008.

MR. REYNOLDS: Because in your slides you paraphrased a lot about cost benefit, but the actual words never said it.

MS. FRIEDMAN: That's true, that's a true statement, but to the extent that people can quantify things, and we tried to lead you down the path, quantify as much as you can, both in terms of costs and benefits; some of this stuff has been very difficult to measure. We've discussed that here, I mean how do you measure an adverse drug event, or how do you measure one that's averted, so some thought needs to be put into that. We tried to give you some ideas, for example, but still it's very hard work and I realize time is short.

MR. REYNOLDS: Okay, any other questions or comments? Stan?

DR. HUFF: Just a statement of the obvious, which is that this is a very tight timeframe. If you award those in December, then I would assume that there would be some implementation that has to go on, the actual programming to support this, which means data collection, the actual exchange of data, might not start until June or something, and then you're expecting to do the evaluation and report out by --

MS. FRIEDMAN: That's why it's got to be concurrent, we have no luxury of time, I mean to be very honest we had hoped to get this RFA out a whole lot earlier. It's tight.

DR. HUFF: I think it will be difficult for people, even with their best intentions, to be able to do all of the programming and install and implement this in the timeframe that's being asked for.

MR. REYNOLDS: Okay, anything else? Okay, Maria, thank you very much.

MS. FRIEDMAN: Thank you and we look forward to the applications.

MR. REYNOLDS: All right, next on the agenda is a Katrina update by Steve Steindel. Maybe he's working for FEMA now --

Agenda Item: Katrina Update - Dr. Steindel

DR. STEINDEL: Thank you, Harry. I am going to do a very brief update on what has been going on, primarily within HHS and primarily coordinated by the Office of the National Coordinator, on responses for getting electronic medical information to providers and patients who were dispersed by Hurricane Katrina. I'm giving this report based on information that I've gotten second or third hand, primarily because everybody who's involved with this process is very involved with it, and there are a few people in this room who have been peripherally involved with it who probably know more than I do, and if they want to contribute I would love it.

But the whole intent is to make NCVHS aware of what's been going on, because it fits in very well with a lot of what we have discussed here through the years. And the intent of making you aware of it is to perhaps look at a quarter day or so, sometime after the first of the year, after the dust has settled, to hear the details from the people who actually did the work. So if I misspeak for any of these people, and if any of them hear about it, I'd like the transcript to reflect my apologies; I am trying to present it as best as I know how, and I've gotten some good information, but it's a very rapidly evolving area.

As background, soon after Katrina hit New Orleans there were federal workers who started to go into the area, maybe not on a massive basis but on some basis, evaluating what was going on and looking at the response. Several of those were from HHS, primarily from CDC, which of course has public health responsibility and the lead in this area. I do know that on, I believe, the Friday after Katrina hit there was a high level group, including Secretary Leavitt and Dr. Gerberding, who went into the area and visited several sites to see what was going on and what was needed. Even before that, and definitely after that, there was a movement of CDC workers into the area; according to an email I got a minute or two ago we are now down to 161 people based in the Gulf Coast region, and I say down to because it peaked at about 200 --

MR. BLAIR: We being CDC?

DR. STEINDEL: CDC, these are CDC workers. They are supported right now by a dedicated support staff of 352 workers within CDC in Atlanta. So there is a massive federal public health response to this.

One of the things that CDC did observe during its early visits to the Gulf Coast area was that there was a tremendous need to meet the ongoing medical needs of the evacuees, and also that a lot of this information actually did exist in electronic format, not necessarily as electronic health records but in particular as medication histories and prescription information. It was dispersed, it was in multiple people's databases, it was not readily accessible, but it was there, and there was a need to get this information into people's hands as quickly as possible, both the physicians' and the patients', so they could renew their medications, particularly those who have chronic diseases, who need their various medications refilled, or who may have left without enough medication.

So this was identified very, very quickly. I don't know how many other groups within the federal government or outside communicated this to Dr. Brailer, but I do know that John Loonsk from CDC was instructed by Dr. Gerberding on Saturday to call Dr. Brailer and suggest that he start looking at some way to coordinate this information.

That's what I know as background on this. I do know what happened after that.

On Sunday, August 28th, David Brailer formed an internal team within HHS and the federal government, involving DOD, VA, etc., and then reaching out into the private sector. I don't know what the peak was, but there is a tremendous number of federal and private sector people involved with this, with the charge of coordinating prescription information within the areas affected by Katrina and getting this information to the evacuees and the physicians who were seeing them at that time.

MR. BLAIR: Steve, you may want to point out where you referenced on August 28th, that is two days before the Hurricane struck New Orleans.

DR. STEINDEL: Sunday the 28th, August 28th --

MR. BLAIR: August 28th, that would be a day before it struck.

DR. STEINDEL: I believe this was the Sunday after that the meeting took place.

But this started very quickly, and the net effect was that soon after, and we heard this a little bit in Teri Byrne's presentation earlier, the ability to exchange medication information from the private sector groups started to come online. In particular, and RxHub was involved with this, five chain pharmacies, Albertsons, CVS, Rite Aid, Walgreen's, and Wal-Mart, were exchanging data by Tuesday the 13th of September, and probably before that; others were coming online soon after. The VA facilities were going to be coming online; I'm not sure they're online yet. My understanding is they are still working with the Medicaid information, but they expect that to be online also.

And if there's anyone who can actually correct what I've just said it would be appreciated but that's the basic idea. Teri?

MS. BYRNE: Actually, Steve, I did get involved in this on Monday, Labor Day, and have been pretty much living and breathing it since then. The Mississippi Medicaid data was live originally with Gold Standard, because they already had the data, and then they moved the pharmacy data in, from those chains you mentioned, and that became live, as you said, last Tuesday. The VA data is not live yet, but my understanding is the Louisiana Medicaid data went live today. RxHub went live yesterday. What we did was pull the pharmacy data in and marry it with the Medicaid data in the Gold Standard database, and then we created a real-time connection to RxHub from Gold Standard, because we weren't yet implemented with them, to feed them the PBM data. So we did not take the PBM data and load it into their database, which has been reported many times and is not true, so I wanted to clarify that; we actually have a live connection with them. We are feeding them real-time data, and like I said, Louisiana Medicaid went live yesterday.

DR. STEINDEL: Thank you, Teri; that is very helpful, because all I wanted to do with today's very quick briefing was point out what is happening. I think we can recognize that a lot of what is happening is the kind of thing we would like to happen in a much more seamless and coordinated fashion; we were able to do a lot of it through much, much struggle, as Teri pointed out. I can assure you she is not the only person who has been living and breathing this for the last several weeks; everybody I've spoken to who has been involved with this process has been living with one or two cell phones glued to their ear for this period of time while reading a Blackberry.

So it's been a very intense period of time. I think they have made tremendous progress, and I think it is a wonderful example of how electronic health information can help, especially in the future when it comes together in a seamless fashion. And I would like NCVHS to explore what has happened with this, in time and in detail, with the experts.

As another matter associated with this, and a little more aligned with CDC's particular interests, a similar effort has been going on with immunization records. As you probably know, most of the states in the region do run immunization registries, and one of the problems that CDC has been having with immunization registries is that there have been state barriers to exchanging state immunization information with other states. I can assure you these barriers went down, not totally, but on a temporary basis, is my understanding, between some of these states, because we now had evacuees from Louisiana in Texas who needed immunization records that were in the Louisiana immunization registries, and so the exchanges started to take place using standard HL7 messaging.
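For readers unfamiliar with what standard HL7 messaging looks like in the immunization context, here is a minimal sketch in Python that assembles the skeleton of an HL7 v2-style VXU (unsolicited vaccination record update) message, the general message type used for registry exchange. All identifiers, facility names, dates, and the vaccine code shown are hypothetical placeholders, and a production message would carry many more required segments and fields.

    # Hypothetical sketch only: skeleton of an HL7 v2-style VXU immunization message.
    # Field values (IDs, names, dates, the CVX vaccine code) are made-up placeholders;
    # a real registry exchange would include many more segments and required fields.

    def build_vxu_skeleton(patient_id: str, last: str, first: str,
                           vaccine_cvx: str, vaccine_name: str, admin_date: str) -> str:
        """Return a minimal MSH/PID/RXA message as one string with CR segment breaks."""
        msh = "MSH|^~\\&|SENDING_APP|SENDING_REGISTRY|RECEIVING_APP|RECEIVING_REGISTRY|" \
              "20050921||VXU^V04|MSG0001|P|2.3.1"
        pid = f"PID|1||{patient_id}^^^REGISTRY||{last}^{first}"
        rxa = f"RXA|0|1|{admin_date}||{vaccine_cvx}^{vaccine_name}^CVX|999"
        return "\r".join([msh, pid, rxa])

    if __name__ == "__main__":
        msg = build_vxu_skeleton("12345", "DOE", "JANE", "03", "MMR", "20050115")
        print(msg.replace("\r", "\n"))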

So this has been, from a health IT and information exchange point of view, another positive example of what happened during the Katrina response, and I think it would be very useful for us to find out in depth what happened.

MR. REYNOLDS: We have added that to our charge; call it special areas of interest: Katrina aftermath.

DR. STEINDEL: And that is my report, as shaky and as third or fourth hand as it is, Harry.

MS. BYRNE: One other thing of interest, I think, Steve, is that Dr. Brailer has asked the Markle Foundation to organize a lessons learned meeting within the next several weeks, so I think there will be really important information coming out of that meeting.

MR. REYNOLDS: Any other questions or comments? Any other agenda items for today? I'd like to thank Janine and Marietta and Maria for setting the meeting up today and getting everybody out here, and especially the presenters; we thank them for coming out here. And Judy, thanks for handling matching records; you set a high bar for Stan tomorrow, let's hope he does as well with his portion of the program.

And we plan to start tomorrow at 8:30 and adjourn at 11:15. Everybody have a nice evening.

[Whereupon at 3:55 p.m. the meeting was adjourned.]