Blood Safety Transcripts
DEPARTMENT OF HEALTH AND HUMAN SERVICES
ADVISORY COMMITTEE ON BLOOD SAFETY AND AVAILABILITY
Eleventh Meeting
Volume I
8:11 a.m.
Tuesday, April 25, 2000
Hyatt Regency Capitol Hill Hotel
400 New Jersey Avenue
Washington, D.C. 20001
P A R T I C I P A N T S
Arthur Caplan, Ph.D., Chairman
Stephen D. Nightingale, M.D., Executive Secretary
Larry Allen
James P. AuBuchon, M.D.
Michael P. Busch, M.D., Ph.D.
Richard J. Davey, M.D.
Ronald Gilcher, M.D.
Edward D. Gomperts, M.D.
Fernando Guerra, M.D.
Paul F. Haas, Ph.D.
William Hoots, M.D.
Dana Kuhn, Ph.D.
Karen Shoos Lipton, J.D.
Gargi Pahuja
John Penner, M.D.
Jane A. Piliavin, Ph.D.
Marian Gray Secundy, Ph.D.
John Walsh
Jerry Winkelstein, M.D.
Mary E. Chamberland, M.D.
Jay Epstein, M.D.
Paul R. McCurdy, M.D.
Eric Goosby, M.D.
David Snyder, R.Ph., D.D.S.
C O N T E N T S
AGENDA ITEM
Welcome, Roll Call, Conflict of Interest Announcement
Comments by the Assistant Secretary for Health and Surgeon General, David Satcher, M.D., Ph.D., Department of Health and Human Services
The Duty to Inform under Canadian Law - Honorable Mr. Justice Horace Krever (Retired), Toronto, Ontario, Canada
The Scientific Foundation of Modern Error Management - Ronald M. Westrum, Ph.D., Professor of Sociology, Eastern Michigan University
Challenges to the Development of the Aviation Safety Reporting System (ASRS) - National Aeronautics and Space Administration, and Challenges to the Development of American Airlines' Aviation Safety System - Captain K. Scott Griffith, Chief Pilot, American Airlines
The Application of Error Management Principles to the Operating Room - Robert Helmreich, Ph.D., Professor of Psychology, University of Texas
Extension of Error Management Principles throughout Clinical Medicine - Stephen D. Small, M.D., Associate Professor of Anesthesiology, Harvard Medical School
Lunch
What Is Needed to Support Error Management Systems in Transfusion Medicine?
  Jeanne Linden, M.D., M.P.H., New York State Department of Health
  James Battles, Ph.D., University of Texas Southwestern Medical Center
  Harold Kaplan, M.D., Columbia University
  Robert Francis, The Farragut Group
Public Comment
Committee Discussion
Break
Committee Resolutions
Adjournment
P R O C E E D I N G S
DR. NIGHTINGALE: Good morning. I am Dr. Stephen
Nightingale, the Executive Secretary of the Advisory
Committee on Blood Safety and Availability, and it gives me
great pleasure to welcome you on this, the second day of
National Secretaries' Week, to the Advisory Committee on
Blood Safety and Availability's Eleventh Meeting.
Let me begin by calling the roll. Mr. Allen?
MR. ALLEN: Here.
DR. NIGHTINGALE: Dr. AuBuchon?
DR. AUBUCHON: Here.
DR. NIGHTINGALE: Dr. Busch?
DR. BUSCH: Here.
DR. NIGHTINGALE: Dr. Caplan?
DR. CAPLAN: Here.
DR. NIGHTINGALE: Dr. Davey?
DR. DAVEY: Here.
DR. NIGHTINGALE: Dr. Gilcher?
DR. GILCHER: Here.
DR. NIGHTINGALE: Dr. Gomperts?
DR. GOMPERTS: Here.
DR. NIGHTINGALE: Dr. Guerra?
DR. GUERRA: Here.
DR. NIGHTINGALE: Dr. Haas?
DR. HAAS: Here.
DR. NIGHTINGALE: Dr. Hoots?
DR. HOOTS: Here.
DR. NIGHTINGALE: Dr. Kuhn?
DR. KUHN: Here.
DR. NIGHTINGALE: Ms. Lipton?
MS. LIPTON: Here.
DR. NIGHTINGALE: Ms. Pahuja?
MS. PAHUJA: Here.
DR. NIGHTINGALE: Dr. Penner?
DR. PENNER: Here.
DR. NIGHTINGALE: Dr. Piliavin?
DR. PILIAVIN: Here.
DR. NIGHTINGALE: Dr. Secundy?
DR. SECUNDY: Here.
DR. NIGHTINGALE: Mr. Walsh?
MR. WALSH: Here.
DR. NIGHTINGALE: Dr. Winkelstein is in transit, I
believe. Dr. Chamberland?
DR. CHAMBERLAND: Here.
DR. NIGHTINGALE: Dr. Epstein? Dr. Epstein is in
transit. Captain Fitzpatrick?
[No response.]
DR. NIGHTINGALE: Dr. Goosby?
DR. GOOSBY: Here.
DR. NIGHTINGALE: Dr. McCurdy?
DR. McCURDY: Here.
DR. NIGHTINGALE: Dr. Snyder? I believe he's in
transit.
The following announcement is made as part of the
public record to preclude even the appearance of a conflict
of interest at this meeting. Waivers for matters of general
applicability have been approved for all committee members.
This means that unless
a particular matter is brought before this committee that
deals with a specific product or firm, it has been
determined that all interests reported by the committee
members present no potential conflict of interest when
evaluated against the agenda. In particular, as specified
in Title 18, U.S. Code, Section 208(b)(2), a special government
employee, which all committee members are, may participate
in a matter of general applicability, for example, advising
the government about its policies relating to the hepatitis
C epidemic, even if they are presently employed or have the
prospect of being employed by an entity, including
themselves if they are self-employed, that might be affected
by a decision of the committee provided that the matter will
not have a special or distinct effect on the employer or the
employee other than as part of the class.
In the event that discussions involve a specific
product or a specific firm in which a member has a
financial interest, that member should exclude him- or
herself from the discussion, and that exclusion will be
noted for the public record.
With respect to the other meeting participants, we
ask in the interest of fairness that they disclose any
current or previous financial arrangements with any specific
product or specific firm upon which they plan to comment.
One announcement of a change in the agenda. It is
a small one. Ms. Connell is unable to attend the meeting
today.
For those in the audience who wish to address the
committee, we should have ample time to hear your views in
full. If you would identify yourselves to me in the break
immediately before the time when your public comment is
scheduled, I would appreciate it.
Dr. Caplan does have a conflict. I don't believe
it is an ethical one, but during the course of the meeting,
he will have to go over to the Senate, and during that time Mr.
Allen has agreed to chair the meeting.
Dr. Caplan?
DR. CAPLAN: You're not going to read the thing
about the chemists?
DR. NIGHTINGALE: Next time.
[Laughter.]
DR. CAPLAN: Well, in the interest of time, let me
say that our deliberations today are aimed at trying to,
first, come back to the problem of managing error and the
reduction of accidents, and we hope that in many ways not
only can we do what needs to be done with respect to blood,
but perhaps we can set out a paradigm as the nation attempts
to struggle with this problem.
We have a visit from the distinguished Dr. David
Satcher, who is going to begin with some opening remarks,
and then the Honorable Justice Horace Krever is going to
talk a little bit about the Canadian situation, how they've
been dealing with the problem of error, informed consent,
and misadventure in their blood world. Then we'll have a
break. That's when I'll ask Larry to take over.
So let me turn to Dr. Satcher. I know his
schedule is a busy one, and I will turn the floor over to
him.
DR. SATCHER: Thank you very much, Dr. Caplan.
Let me begin by thanking you for--I'm sorry. Let me start
over.
Thank you very much, Dr. Caplan. I want to begin
by thanking you for really the exceptional effort that you
made the last time you met to overcome the weather and to
attend the Advisory Committee meeting. We all know that
public service can be very challenging, but last January you
met those challenges in really a great way.
You know, in fact, it was a day when the
government was officially closed because of the weather, and
yet you were here, and I want to commend you for that.
It's interesting because I think you remember that
at the same time we were in the process of launching Healthy
People 2010, the nation's health plan for the next 10 years,
and, fortunately, that also went well.
We made a very important announcement yesterday in
that regard. One of the two goals of Healthy People 2010 is
the elimination of disparities in health on the basis of
race and ethnicity, and yesterday we announced the
partnership between our Department and the American Public
Health Association to work together over the next 10 years.
The 55,000 members of APHA voted to make the elimination of
disparities in health its number one priority for the next
10 years, so this will result in a national steering
committee being formed and an implementation plan.
I think more importantly it means that this is not
a Federal program. It's a program that's national in scope,
but it's public and private, it's Federal, state, and local.
It's not dependent upon any administration or any given
Surgeon General or Assistant Secretary for Health, for that
matter. And I think that is encouraging as we move forward.
Let me say to you that the Department has
carefully considered the recommendations you made at the
last meeting, and I also want to say that I really agree
with Dr. Caplan. I think what's happening here is really
significant. It is certainly the major effort within our
Department to deal with issues related to managing our
errors and accidents and responsibilities for reporting, so
this could indeed be the model for all of our efforts in
this area. So we appreciate the hard work that you're
doing.
Let me say we agree that all blood establishments
should have a quality assurance program. As you know, the
Food and Drug Administration already considers such programs
to be essential components of good manufacturing practice. At
the same time, FDA is re-examining its current guidance to
determine if additional recommendations are needed to
address investigation and reporting of errors and accidents
associated with blood and plasma products.
We also agree that error and accident reporting
requirements should be extended to all blood establishments.
FDA will publish a final rule on this subject in the near
future.
We concur as well that quality assurance programs
in all blood establishments should capture, analyze, and
respond to all errors and accidents, even if the affected
blood units were not distributed or even if their use caused
no adverse effects.
FDA is considering recommending the Medical Event
Reporting System for Transfusion Medicine, which Dr. Harold
Kaplan described to you at your last meeting, for the
management of transfusion medicine error and accident
investigations and reports.
You recommended that a confidential, non-punitive
system for the collection, analysis, and dissemination of
data on errors and accidents be established for those errors
and accidents not subject to regulatory review. However,
regulatory review of transfusion medicine is, as you know,
very broad, and it includes errors and accidents that do not
cause actual harm as well as those that do. So it is
somewhat unclear that an error management system that
excluded these events could achieve its stated purpose. And for
this reason, we are asking that you return to the issue of
error management and advise us how to accommodate the right
of patients to know the consequences of any treatment
received, provide regulatory agencies with the information
necessary for them to fulfill their statutory oversight
responsibilities, and support initiatives that identify and
correct latent flaws in complex and critical systems such as
blood establishments before these latent flaws cause actual
harm or even death.
We're not alone in this search, as you know. Even
as we speak, the National Transportation Safety Board is
convening at another Hyatt Hotel near here, and this is a
symposium on transportation safety and law. And they, too,
are trying to balance the rights and needs of consumers,
providers, regulators, and the public at large to
information on the one hand and the goal we all share of
reducing morbidity and mortality of errors and accidents
that we should and, in fact, we must prevent. And I
understand that Captain Griffith will be participating in
that meeting as well as this one, so we ought to thank him
for doing double duty today.
We will follow the deliberations of our colleagues
at the National Transportation Safety Board with a lot of
interest, and we suspect that interest will be reciprocated.
This morning it is truly an honor to have Justice
Horace Krever, recently retired from the Supreme Court of
Canada. Justice Krever will open our meeting by discussing
the duty to inform patients of medical mistakes under
Canadian law. This will provide an exceptional opportunity
for us to re-examine and perhaps clarify our own recognition
of this duty, and we do thank Justice Krever for joining us
here today.
The second part of this morning's program will
provide us with additional opportunities to understand the
rationale for modern error management systems, the results
they have achieved so far, and the remaining challenges they
face, and we appreciate those who will be participating in
that discussion.
The third part of this morning's program will be a
discussion of the application of error management principles
to medicine, and we're delighted to have here today Dr.
Helmreich, who was not able to join us before, as well as
Dr. Stephen Small from Harvard.
I understand this afternoon's discussion you will
welcome back Drs. Battles, Francis, Kaplan, and Linden, and
they will continue their participation in these
deliberations.
Last August, the Advisory Committee conducted a
vigorous discussion of the potential impact on blood safety
and availability of the proposed Medicare outpatient
prospective payment system. The concerns raised at that
meeting have been very carefully considered by the
Department, and they are reflected in the final rule that
was published earlier this month; I believe April 7th was
the date of the publication in the Federal Register. And
tomorrow morning, we invite comments from interested parties
on this final rule, and on other matters related to
reimbursement for blood and plasma products. I think it is
really critical that you give us vital input at this point
in time.
Last fall, the Department approved an expansion of
the Food and Drug Administration's Blood Action Plan for a
set of initiatives, including some recommended by this
committee, to assure the adequacy of the United States blood
supply. And one of these initiatives was to address the
economic concerns of the blood industry. So tomorrow's
first agenda item is in a sense a continuation of our
commitment to this process. But also on the agenda tomorrow
are status reports on efforts initiated by this committee to
monitor the availability of blood products and plasma
derivatives.
Finally, the committee will hear tomorrow a
presentation by Dr. Jean Emmanuel, the Director of Blood
Safety and Clinical Technology Section at the World Health
Organization, on WHO's proposed global collaboration for
blood safety. So it is a pleasure for us to welcome Dr.
Emmanuel to Washington, and I look forward to discussions
with him and his colleagues. In addition to this meeting,
we'll be meeting later, because we really want to view our
role from a global perspective.
Thank you very much. I'd be happy to respond to
any comments or questions.
DR. CAPLAN: Thank you, Dr. Satcher.
DR. SATCHER: Okay.
DR. CAPLAN: I think you know--and I think Steve
has put this letter from Secretary Shalala out in front of you,
and you'll see that she echoes some of what Dr. Satcher said
to us about what she hopes to achieve in terms of receiving
advice from the Advisory Committee and the balance between
the right of the patient to know and the need of regulatory
agencies to get accurate information on error and to have
oversight and society's interest in having a system that can
find flaws. And we thought that one way to get us into or
revisit this question of balancing values was to have
someone address us who has struggled with this problem, and
we have asked Justice Horace Krever, retired from the
Canadian Supreme Court, if he would get us underway in terms
of trying to wrestle with these problems. Canada has tried
to wrestle with them, sometimes I think on the mat a little
more than we have. But I think we can look to this
distinguished jurist for some insight as to how to make the
balance and compromise that's going to be necessary here.
So let me ask you to step forward, please.
JUSTICE KREVER: Thank you, Mr. Chairman. I think
the expectations are too high for what I'm about to do. But
I want to thank you for the invitation--the honor and the
invitation to be here and for giving me the opportunity to
revisit Washington. The last time I was here was some 20
years ago, I hate to tell you, to testify before a
congressional subcommittee deliberating over a proposed
bill, the Federal Privacy of Medical Information Act, which
was introduced in the House in November of 1979. I've
forgotten the name of the Congressman who sponsored it.
Senator Kennedy was the sponsor in the Senate. The bill
was, however, unfortunately, never enacted, as far as I
know.
I was then conducting a public inquiry into the
confidentiality of health information in respect of which I
submitted my three-volume report in September of 1980. I
will have occasion in a few minutes to say something about
that subject.
Just as a matter of housekeeping, I want to make a
correction. I am not a former member of the Supreme Court
of Canada. I never attained such heights. I am a former
trial judge of the Ontario Superior Court, and after
that--that was 10 years. After that, for the next 14 years,
I was a member of the Court of Appeal for Ontario, which is
the highest court in Ontario, but not in Canada. And when I
speak of the law, I will not be speaking of the law of
Canada. As in the United States, there are provincial
interests and Federal interests. My inquiry into the blood
system was, in fact, Federal, but when I refer to the legal
decision, as I shall in a moment, that was a decision that's
binding in Ontario. However, I should add, the text writers
have accepted it as an accurate statement of the law in the
common-law provinces. As you perhaps know, all of our
provinces except Quebec are common-law provinces. Quebec
has, as in the case of Louisiana, a civil code. So I don't
want to pretend to be talking about something that's
formally beyond my judicial competence.
Now, I'm not sure that what I have to say is
directly relevant to the agenda for this meeting, but I am
confident that it is not entirely irrelevant and may be of
interest if only by analogy. My subject is the health care
professions and, by extension, health care facilities. I
start from the premise that one of the important
characteristics of a profession, a true profession, that
distinguishes it from, say, a trade or other occupations is
the adoption of and adherence to a code of ethics embodying
high ethical standards which in most cases exceed legal
standards.
Unfortunately, as I shall show, legal concerns
sometimes tend to act as impediments in the way of observing
one's ethical obligations. But on the whole, there is today
a congruence of ethical and legal standards in professional
life.
In North America, that is to say, in the United
States and Canada, the concept of informed consent is the
norm. I should point out that, strangely enough, it is not
necessarily the same--the law is not necessarily the same in
England. High judicial authority in England has referred to
the idea of informed consent as a transatlantic concept.
The idea in England of medical prerogative has not yet
entirely disappeared.
From the material, I think, that has been
distributed to the members of the committee, you will see in
a judgment I had occasion to deliver some years ago, in
1985, to be precise, I held that in law the failure of a
physician to inform his patient of an error he had committed
in the course of treatment was a breach of duty, though in
the circumstances of that case, it did not--the breach of
duty did not amount to negligence.
Just a brief description of the case. What
happened was a physician was conducting a lung biopsy on a
patient, and after the biopsy was performed, the physician
told the patient that he had not got a specimen, a lung
specimen. Problems occurred with the patient later on, and
he was readmitted to the hospital. It turned out that his
spleen had been pierced during the biopsy, and eventually a
splenectomy had to be performed.
So in the litigation that followed, the question
was whether or not liability should follow from the failure
to--that was one of the issues, the failure to inform the
patient. There was no causal connection between the failure
and the loss of the spleen, so that couldn't be negligence.
But I held that it was a duty. Nevertheless, I found
liability on another basis, on the way in which the
operation had been--the procedure had been performed.
But that statement has, I think, been accepted--I
was then only a trial judge and not an appellate court
judge, and you know enough about our systems of law to know
that the higher the court, the more authoritative the
pronouncement. But as I say, the concept, I think, is
accepted in Canada that there is such a duty.
Now, this standard is not a traditional one. In a
new biography of William Osler, who was one of the founders,
one of the charter members of Johns Hopkins Hospital and
Medical School, and whose name I think is probably familiar
to people who know the history of medicine, this is a new
biography just published a few months ago by Michael Bliss
entitled, "William Osler: A Life in Medicine." One finds
this illuminating paragraph. The reference is to the time
when Osler was a member of the faculty of medicine at McGill
University in Montreal. And it says, "The young professor
had effectively become pathologist for the whole city and
medical community. As such, he was sometimes consulted by
colleagues"--by the way, you're going to hear a terrible
grammatical error here, but forgive me, it's not mine.
"As such, he was sometimes consulted by
colleagues. A close friend remembered an occasion when a
distinguished older surgeon asked Osler to examine a young
adult's hand that he had amputated above the wrist for a
supposedly malignant cancer. Osler realized the diagnosis
was wrong. Rather than show up the old man, he submitted no
report, forgoing the fee. When Osler told his friend about
the case years later, he said, `No one but you and me ever
knew of the unfortunate circumstances, and we have both
forgotten it.'"
Now, that's a little over a hundred years ago, and
today I think that probably is a bit shocking, especially
coming from someone who had the reputation as being one of
the greats.
I refer to the intrusion of legal concerns into
the observance of what I consider to be ethical obligations
resulting in conflicts of interest. If I am right in my
view that the fiduciary relationship between physician and
patient--by the way, it is now clear in Canada, not just
Ontario, because the Supreme Court of Canada has spoken on
this, that the relationship is a fiduciary one, not simply a
contractual one. But if I am right in that, that
relationship obliges the physician to be completely frank
with and confide fully in the patient, the potential for
conflict is real. Because of the understandable, in today's
climate, indeed, reasonable fear of legal liability, the
physician contracts with an insurer for protection against
liability to a third party, the patient, to whom he or she
has an obligation to be frank.
The contract of insurance, however, on pain of
loss of protection prohibits the physician from saying
anything that would be tantamount to an admission of
liability.
This is one of the several reasons why in my
report on the blood system in Canada I recommend a no-fault
system of compensation.
At the risk of repetition, let me say that the
ethical standards at the heart of a true profession are
necessarily, I believe, higher than those arising out of a
mere contractual relationship, and they inevitably doom the
paternalism that has traditionally characterized professions.
Please indulge me for a minute or two as I quote
myself from my confidentiality report of 1980. During the
inquiry, the most emotional issue was whether patients
should be entitled to access to their own health
information. This was a very extensive inquiry lasting a
long period of time and arising out of allegations of
terrible abuses of patient information. And it turned out
that everybody had access to patient information except
patients themselves.
So I say in my report, "Perhaps in no other field
dealt with in this inquiry have such apparent anomalies
arisen as in that of patients' access to information about
themselves. Certainly no other issue has aroused such
intense emotional reactions on the part of persons
accustomed by their own admission to the paternalism of
protecting patients from precise information."
Then I say: Historically, and even today in
Ontario, this issue has been determined by the paternalism
of those in possession of the records or other information,
but that information was obtained from the patient or, as a
result of an examination of the patient, with his or her
consent. Only as much information as the professional
person felt ought to be imparted to the patient was
imparted. Why, many ask, should a patient who wants to know
everything that has been recorded about him or her not be
entitled to see what has been recorded? The acquirer of
that information, after all, did not come upon it as part of
a general education or as a result of academic research, but
on the contrary, obtained it because of his or her
physician-patient relationship with the individual to whom
the data relate.
Knowledge is power. Knowledge about another
person, knowledge, that is, that the other person does not
have, is surely power over that person. Does the
therapeutic relationship truly require that a physician have
power over his or her patient? Should a physician's
judgment that I may be harmed by the information or that I
will be unable to handle the information be decisive?
If I, informed by my physician of the risks of
reading my chart because of, for example, my lack of
familiarity with the medical terminology, choose to run that
risk, should I be denied the opportunity? Indeed, is it
really true in this era of specialization and
subspecialization that every member of the medical
profession who sees a patient has such an intimate knowledge
and genuine understanding of that patient psychologically
and intellectually, no matter how brief the contact, that he
or she can have the necessary knowledge to be able to judge
whether the patient can cope with the information? Does
that accurately describe, as I was told it does, the
relationship to the patient of the consultant, the surgeon,
the anesthetist, the physician in the emergency department
of a large urban general hospital, and the radiologist? And
how can depriving the patient of the opportunity of
reviewing his or her record be reconciled with respect for
the dignity of every adult and mentally competent member of
the community that our society wants to be respected--wants
to see respected, rather.
The question arises whether preferring insistence
on the performance of the ethical obligation over the less
onerous legal obligation would not result in a removal of a
meaningful sanction, the potential damage award, for
compliance with the standard. In other words, if there is a
duty the breach of which may act in law--I'm sorry, may not
in law amount to negligence, where is the sanction to be
found to compel accountability? My answer to that is that
it's to be found in the power of the licensing or regulatory
body of the profession to discipline members for
professional misconduct. And I regard the possibility of
losing one's license as a pretty serious sanction.
Now, I seek a further indulgence, and I'm almost
finished. A word about the field of risk management. It
relates to my no-fault compensation recommendation. I
discovered, not really having any knowledge of the field of
risk management, which had become, I guess everywhere, even
an academic field, I discovered that risk management
measures, including third-party liability, in a context in
which there can be no zero risk--there's always some risk.
Where there is some risk, it's very certain that someone is
going to be affected by its occurrence. And third-party
liability insurance is thought to be one way of protecting
those persons. I don't think it does, but the measures that
protect the risk creators but, in my view, not sufficiently
the persons to whom the risk accrues.
We all know, as I said, that zero risk is not
attainable. I am going to quote myself again, if I may, and
take the liberty now of referring to my more recent report
on the blood system in Canada. And in a chapter on
financial assistance for blood-associated injury, I say
this: "The compassion of a society can be judged by the
measures it takes to reduce the impact of tragedy on its
members. Although the risks to the users of blood
components and blood products today may be low, serious
disease and some deaths will continue to occur as a result
of the therapeutic use of blood. There is, moreover,
always the likelihood that a new and mysterious bloodborne
pathogen may strike. As I pointed out in my interim report,
it is of little consolation or even relevance to those
unfortunate members of our society who suffer from infection
caused by blood transfusions or blood products that the
blood supply now is adjudged relatively safe. A system that
knows that these consequences will occur and what brings
them about has, at the very least, a moral obligation to
give some thought to the question of appropriate relief for
those affected by the inevitable events." And I don't think
the tort system in my country and in your country, with
respect, is adequate.
I had occasion--again, I'm going to quote myself.
I had occasion many years ago to preside as a trial judge
over a very sad malpractice action. An elderly man who had
been in perfect health was advised to have an angiogram.
He'd never been in the hospital before, never had any kind
of a procedure before. He had the angiogram and became a
quadriplegic within an hour, and there was, understandably,
litigation. It was a long and difficult trial, and I was
required by what I thought was intellectual honesty to find
that there was, in fact, no negligence in the performance of
the procedure or in the aftercare. And I, therefore, had to
dismiss the action.
This was an elderly man whose wife was even older,
she was in her 70s, and they couldn't afford any aid. He
needed 24-hour care, and she had to get up twice a night to
turn him over. They couldn't afford to enlarge the bathroom
to allow a wheelchair to go in. It was a very tragic case.
And, unfortunately, the requirement is that negligence be
shown--that's important--shown, not that it in truth exists;
we depend on fallible human beings to give evidence in our
systems. In any event, I dismissed the
action.
In my report on the blood system in Canada, I say
this: "The view I expressed in 1983"--and I name the case,
Ferguson v. Hamilton Civic Hospitals--"is pertinent here.
That was a medical malpractice action involving a previously
healthy man who had become quadriplegic immediately after
undergoing an angiogram, a diagnostic test. His action was
dismissed because he was unable to prove negligence in the
performance of the test or in his aftercare. The need for
compensation was palpable, but could not be awarded."
I concluded in my reasons for judgment with the
following statement: "I confess to a feeling of discomfort
over a state of affairs in an enlightened and compassionate
society in which a patient who undergoes a necessary
procedure and who cannot afford to bear the entire loss
through no fault of his and reposing full confidence in our
system of medical care suffers catastrophic disability but
is not entitled to be compensated because of the absence of
fault on the part of those involved in his care. While it
may be that there's no remedy for this unfortunate and brave
plaintiff and that this shortcoming should not be corrected
judicially, there is in my view an urgent need for
correction."
My judgment was appealed to the Ontario Court of
Appeal, and that appeal was dismissed, but the court also
felt obliged to say something about the sadness of the
situation and ended their judgment by saying, "We are in
complete sympathy and agreement with the penultimate
paragraph of the learned trial judge's reasons. The Ontario
Health Insurance Plan is the product of a socially conscious
society, but we agree that in situations such as the instant
one an enlightened and compassionate society, to use the
words of the learned trial judge, should do more."
I think we've got a long way to go in our concept
of proper measures for risk management.
May I just say a word--I may be trespassing on a
subject that isn't mine, but I found it interesting, the
discussion, the consideration of a mandatory requirement to
report near misses, which I think probably are more
accurately called near hits. I found that difficult. I had
occasion in my earlier report to deal with mandatory
reporting because, of course, mandatory reporting is an
invasion of patient confidentiality. So I was interested to
see this discussion.
I wondered, though, how one would define near
misses that must be disclosed in the medical context, that
is to say, where no harm results to the patient. I can see
it as being more feasible in the hospital or institutional
context. And air traffic seems to me to be possibly
different.
How confidential can that information be made, the
reporting of the near miss? It would, I think, have to be
made inadmissible in evidence in a court of law. Yet in
legal proceedings, that evidence may not be irrelevant. If
I were acting for a plaintiff suing for damages and alleging
negligence, I think I might be interested in subpoenaing
records to see if they contained reports of near misses to
show that the defendant's act was not an isolated one but
part of a pattern.
I don't think I should add anything more to that.
I certainly will be interested in following your
deliberations.
I think there may be some time for some questions,
if there are any, but I think that's all I have to say.
DR. CAPLAN: Thank you very, very much.
Let's open the floor for questions, comments.
DR. HOOTS: Justice Krever, thank you very much,
and I think, at least in my own kind of cogitations about
this, those issues that you raise are exactly the ones that
are most troubling. I'm intrigued by where things stand in
terms of the no-fault compensation system in Canada. Is
there some near-term prospect for development of such a
system, something that we could perhaps learn or emulate
here in the U.S.?
JUSTICE KREVER: I think the short answer is no.
There has been complete silence. It's perhaps
understandable. I see no discussion among the legal
profession in legal journals or meetings of lawyers. It's
not something the legal profession discusses; for, I guess,
obvious reasons, they have an absolute conviction that our
system is right. The fact that there are material interests
involved is not something I think they recognize. I think they're saying--I
think they're being objective when they say that, that
they're worried about the rights of the injured plaintiff
who, under the litigation system, will almost inevitably
recover a huger--I shouldn't say huger--a much larger award
than a no-fault system would provide. But not all
plaintiffs' actions are successful. I gave you an example
of the kind that I think causes a lot of problems.
Some of the persons who came to grief over the
infected blood during that terrible period have received as
an act of grace from the government compensation. But my
recommendation was for compensation not only to them, it was
a recommendation that they be compensated, but that the
legislation in the provinces should be amended to provide
for another mechanism for giving financial assistance than
the tort system.
DR. CAPLAN: Jim?
DR. AUBUCHON: If we could return to the last
situation you mentioned, Justice Krever, about disclosure
and the conflicts between what may be most appropriate or in
the best interest of a particular party versus the best
interest of the system, if I could frame this in the
aviation industry mode that we are going to be discussing
later today, there has been some discussion lately in this
country about the design of the tail of the Boeing 737 and
whether or not it is as safe as it should be.
If the designers--and I know no particulars about
that aircraft, but if the designers of that aircraft or
someone working on it thought that the tail was potentially
flawed and if the manufacturer could report that or
individuals using the aircraft could report it without fear
that they ultimately would be sued because they had some
information and didn't act on it, it is that not more likely
to yield the situation that we are ultimately hoping for,
and that's a safer aircraft, as opposed to the individual
plaintiff's ability to get hold of the information that
someone knew something at some point and, therefore, they
can be held liable.
It seems that our society's desire to find someone
who's at fault and pillory that person and gain some
financial benefit from that at times prevents us from really
doing what we would like to do, and that is, not have the
accident in the first place.
JUSTICE KREVER: Well, I think the use of the word
"pillory" is unfortunate. I don't think the purpose of
litigation is to pillory anyone. It's to spread the loss,
usually by the insurance principle. But I don't disagree.
I mentioned that in my report on confidentiality I had to
deal with a number of occasions in which there is an
obligation on persons, very often physicians but not only
physicians, to report. In Ontario, there's an obligation on
a physician to report child abuse, for example, if it's
suspected, the condition of a person who has a driver's
license that suggests it should be somehow modified or
shouldn't exist at all, or of a pilot, or sexually
transmitted diseases and other diseases.
Those are all breaches of confidentiality, but
there's a transcending interest in society that that
information be made available to authorities. And so I
don't--as I said a few minutes ago, I can see it in aviation
safety, but I don't know--one of the most frequent errors in
medicine, I am told, in hospital medicine, are medication
errors. People are given medication that is intended for
other people rather than for them. The dosage may be wrong.
I think probably it's the most common medical error in
hospitals, leaving aside surgical errors. And very often
nothing happens to the patient, no harm comes to the
patient.
If the obligation exists to report that that error
occurred, what's the sanction that compels someone who
realizes no harm has occurred--"I just messed up"--to
report, and how effective will it be? And if it is reported, I can
understand protecting it from people in litigation. But
what about the patient himself or herself? If I'm right in
saying that I as a patient should have the right to have
access to anything in my record, do I not have the right to
see that? Or is that put in a different record?
As I understand hospital practice, I am sure in
North America generally, not just in Canada, if something
occurs to the patient such as a fall or something, there's
an incident report that has to be made. I'm not sure how
well known it is that there are such reports or whether
people seek them in litigation. But, you know, one of the
things that I think is wrong is that we should be so
concerned with litigation that our moral obligations take a
second place. I don't think they should.
I don't know that anything can be done about it.
We live in a litigious society. That's not an answer to
your question, but the short answer is I don't disagree that
it would be a good thing and that in some situations the
interests of the larger public and society as a whole must
transcend those of the individual. The question is have we
got the right cases for that principle.
DR. CAPLAN: I have a question for you, Mr.
Justice. It goes right to this issue of collecting
information, reporting incidents and near hits. It is true
that you would be giving up the right to litigate if we
created a policy that kept this information off limits from
the court. But it's also true, isn't it, that there are
other means to obtain the very same information? In other
words, if we created a reporting system and said you can't
use that, but if you want to go to court and find out, you
can still interview people, you could still attempt to
investigate and find out what was going on. Is there a way
to balance the drive to secure compensation for harm through
litigation, in the absence of a no-fault system, still take
the reporting system off limits, if you follow me, but allow
people to still investigate anyway, just not go to that particular pool of
information to bring their case?
JUSTICE KREVER: I see what you mean, but I'm not
sure that the information would be available in any other
way if you made it secure against disclosure. I don't think
anybody would think of asking were there other
incidents--well, I shouldn't say that. Perhaps that
question would be asked, but there's no sanction to compel
honesty in the answer.
DR. CAPLAN: That may be a truth we would let you
escape from this panel with--oh, Larry, go ahead.
MR. ALLEN: Justice, I had a quick question. You
mentioned the possibility of somewhere down the road there
might be an issue in the blood supply, again, as far as
another pathogen. Has there been any discussion in your
country regarding that possibility and what to do about
treatment or compensation for those affected?
JUSTICE KREVER: No. As I say, nothing has
happened and no discussion is taking place on the question
of compensating people who may be affected by a future
disaster. No. The answer is no.
DR. CAPLAN: Well, I want to thank you for your
presentation and actually for sharing that experience with
us. And I hope you'll let us call upon you in future as we
stumble on in our own deliberations here.
Let me take that 10-minute coffee break that I
promised at this point in time. I'm going to ask Mr. Allen
to come up and take the chair when we return, and let's do
the break and be back in 10 minutes.
[Recess.]
DR. NIGHTINGALE: Could the meeting please come to
order?
Mr. Allen?
MR. ALLEN: [Presiding.] Good morning, again,
everyone. The next speaker is Dr. Ron Westrum. He's going
to talk about the scientific foundation of modern error
management.
DR. WESTRUM: Thank you very much. It is a great
honor to be invited back to speak to this committee.
Could we have the first slide, please?
Dr. Nightingale has asked me to talk to you about
the scientific foundations of error management, and in
particular, about the Reason model of accident causation.
Since even Professor Reason himself took two books to
expound the subject, I'm sure you can appreciate it's not an
easy task to do this in a few minutes. However, the central
core of Reason's argument I think will be sufficient for my
task.
Next slide, please?
Now, in describing the foundations of modern error
management, it will be useful to see accidents as coming
about through two sources of failures: those at the
operating or sharp end of socio-technical systems, and those
at the blunt end, away from the scene of operations, but
still part of the organizational context.
Since the control of operating errors will be very
ably handled, I'm sure, by Bob Helmreich later this morning,
I'm going to concentrate on the second set of factors, those
pertaining to the organizational context.
Next slide, please?
Beginning in about the late 1970s, there were two
parallel developments that created our modern understanding
of systems failures. The first of these was a series of
major technological accidents.
Next slide, please?
These accidents included aviation disasters such
as Mount Erebus, industrial explosions such as Bhopal,
railway collisions such as the one at Clapham Junction in
England, marine collisions and sinkings such as that of the
ferry Herald of Free Enterprise, nuclear accidents like
Chernobyl, and, of course, space accidents such as the
Challenger tragedy.
These accidents were often investigated by the
judicial branch of the government and yielded an in-depth
understanding of how these man-made disasters took place. The
thorough investigations of these accidents showed that far
in advance of the accident, events had taken place in the
organization itself to compromise safety. These latent
pathogens, as Jim Reason would call them, often had
slumbered silently until the accident revealed the
weaknesses that they represented.
For instance, in the Mount Erebus crash in New
Zealand, an airliner flew into the side of the mountain.
But while the pilots who had made the operating errors were
initially held at fault, the very profound inquiry later
showed that the most serious problem was that the flight
management system of the airliner in question had been
reprogrammed without the knowledge of the crew. The pilots
flew the aircraft into a mountain without knowing that one
of their key systems had been tinkered with without their
knowledge.
The report further commented that the flight had,
in fact, been set up through a series of management failures
and that such failures were typical of the way that the
airline operated. And it is the general finding from this
and other investigations of systems failures that when
operating errors are committed by those on the front line,
they often combine with such hidden weaknesses to produce
the accidents. The investigations of accidents like Mount
Erebus provided the major fuel for the second line of
developments.
Next slide, please?
The second set of developments was the
sophisticated theoretical examination of these major
accidents by academic experts who expressed their ideas
through books on the nature of systems accidents.
Next slide?
For purposes of exposition, I will choose the
three most important. The first of these was a book by
Barry Turner, "Man-Made Disasters," which was published in
1978. Unhappily, this book went largely unnoticed and had
almost no impact on the accident investigation community,
although it presented a highly sophisticated analysis of
many, particularly British, industrial accidents.
The second book, however, was Charles Perrow's
"Normal Accidents" in 1984, and it landed on the scene at
exactly the right time. In the wake of Bhopal, this book
was read with exceptional interest. Perrow's book
emphasized the complex dynamics and tight coupling of high
technologies and suggested that some, like nuclear power,
might even be too complex for humans to manage safely.
Perrow's book showed how social science analysis might
contribute to the analysis of accidents. It also brought
the involvement of many social scientists, including myself,
to the field of safety research.
The third book--could we have the next slide?--was
James Reason's book, "Human Error." In this book, Professor
James Reason of the University of Manchester proposed a
model that has since become the international benchmark for
understanding the nature of safety in large systems. In
"Human Error," Reason set forth the idea that we could
understand systems accidents by considering the dynamics and
co-existence of the hidden faults that he called "latent
pathogens." A latent pathogen is an act or an omission that
causes a weakness in the organization's defenses. While
errors at the sharp end are easy to spot, the latent
pathogen, said Reason, lurks unseen until events reveal its
existence.
Every organization, then, has a larger or smaller
population of these hidden weaknesses. Reason argued that
the specific pathway leading to an accident was difficult to
predict, but that the general probability of an accident was
a function of this population of pathogens.
Next slide, please?
Now, this is the Reason model, as it is ordinarily
seen at aviation safety and other conferences. Reason
suggested that latent pathogens were rather like the holes
in slices of Swiss cheese, with the slices forming the
protective layers around the organization: you could think
of the organization as being defended by a series of layers
like this, but with faults. He also suggested that the
slices might move around, so that what might be a latent
pathogen at one point might not be one at another. But the
key, said Reason, was that when you get these holes lining
up, the dangers, the external forces affecting the system,
can get at the socio-technical core and destroy it.
Next slide, please?
Only when there is a complete line of open holes
between the outside world and the technical core of the
system can the accident take place. Now, as you see in this
diagram, basically systems are protected by a series of
defenses, and it is the compromise of these defenses which
the latent pathogens represent. And latent pathogens can
come about as a result of local workplace factors or
organizational factors. But often the unsafe act by the
operators is simply the last link in the chain that allows
the accident to be completed.
Obviously, what we need to do, said Reason, is to
drain the swamp that produces these latent pathogens.
Reason saw many organizational forces contributing to the
integrity or deficiency of the system's defenses.
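A minimal simulation sketch (an illustration under stated assumptions, not part of Dr. Westrum's talk) makes the geometry of the model concrete: each defensive layer is assumed independently to present a hole (a latent pathogen) in a hazard's path with some probability, and an accident requires an open hole in every layer at once. The layer count, hole probabilities, and trial count below are illustrative assumptions.

    import random

    def accident_probability(layers, hole_prob, trials=1_000_000):
        # Estimate the chance that holes line up across every defensive
        # layer at once, letting a hazard reach the socio-technical core.
        # layers    -- number of defensive layers (slices of cheese), assumed
        # hole_prob -- assumed chance that a given layer has a hole in the
        #              hazard's path at a given moment (a latent pathogen)
        accidents = 0
        for _ in range(trials):
            if all(random.random() < hole_prob for _ in range(layers)):
                accidents += 1
        return accidents / trials

    # A larger population of latent pathogens (higher hole_prob) raises
    # the risk sharply; each added intact layer lowers it sharply.
    for p in (0.05, 0.10, 0.20):
        print(f"hole_prob={p:.2f}: {accident_probability(4, p):.1e}")

With four layers, doubling the per-layer hole probability from 0.05 to 0.10 raises the alignment probability sixteen-fold (0.05^4 = 6.25e-6 versus 0.10^4 = 1.0e-4), which is the quantitative sense of Reason's claim that the general probability of an accident is a function of the population of pathogens.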
Now, the aviation community immediately rallied
around this model and made it their own. Unlike Turner and
Perrow, Reason liked to travel and he liked to speak at
conferences, and over the ensuing years he spoke at many
meetings on aviation safety. He also consulted widely in
the petroleum, aviation, marine, and railway communities,
and in each of these communities, by the way, he developed
systems for reporting latent pathogens.
I'd like to emphasize that one of his greatest
accomplishments was the practical implementation of the
ideas that he suggested through a series of systems for the
railways, for the aviation industry, for the petroleum
industry and so forth. Now, some of these, because they
were public organizations, have been published in detail.
Others, because they were proprietary, have not.
Reason's model has become the common language
through which complex accidents can be understood. I
remember being at one conference where six speakers in a row
got up and showed Swiss cheese diagrams as a kind of
academic overkill. The popularity of this model obviously
comes from its wide application. It's generally felt, as I
said, this provides a common ground for discussing system
safety.
Next slide, please?
But I think it's important to see what the real
implications of this model are because they're very
different than the ordinary way that we respond to
accidents. Reason's approach stressed the build-up of
accidents along unforeseen pathways. This meant that the
typical blame and train approach will not work to prevent
future accidents because it always emphasizes avoiding a
repetition of the last accident rather than providing a
strategy for avoiding the next one. Since the exact pathway
that the next accident will take is not known, it is the
general identification and removal of the latent pathogens
that is key.
Let me put this in a different way. In the past,
each accident has been investigated on its own. The
aviation system was able to learn, for instance, from each
accident and then fix the specific problems that had allowed
the accident to take place. Sometimes this meant a
mechanical fix or a new device such as a ground proximity
warning system, and sometimes the passage of a new rule or a
standard. This item-by-item approach accounts for a good
deal of progress in aviation safety. But Reason's approach
is not just to learn from accidents since accidents are
infrequent, but also to monitor the accident-prone
situations in the system itself. And I'd like to quote here
from his book, "Managing the Risks of Organizational
Accidents."
He says: "Are companies doomed to fighting the
last fire or trying to prevent the last crash? The answer
must be yes if complex, hazardous organizations continue to
rely principally on outcome measures in order to navigate
the safety space. But there is a workable alternative: the
regular assessment of the organizational procedures that are
common to both quality and safety. Latent
accident-producing conditions are present now. It's not
necessary to wait for bad events to find out what they are.
But we cannot expect to remedy them all at once. Systems
need principled ways of identifying their most urgent
process problems in order to deploy their limited remedial
resources in the most efficient and timely manner. Making
and acting upon proactive assessments of the system's vital
signs together with the intelligent application of near miss
reporting will not guarantee freedom from accidents. What
it will do is take an organization closer to the only
reasonably achievable safety goal: acquiring the maximum
amount of intrinsic resistance to hazards and sustaining
it."
Now, this is what Reason means by draining the
swamp rather than swatting each individual mosquito. The
organization's culture is a breeding ground for latent
pathogens, and only by drying up the processes that produce
these latent pathogens can progress effectively be made.
Yet even further on this point, there is no magic bullet by
which we can kill the latent pathogens. What we are talking
about here is a fundamental change in the organization
itself, a change which moves the organization toward what
I've called the culture of conscious inquiry. Such a
culture makes visible the state of the organization's vital
signs.
Next slide, please?
Now, a second implication of Reason's model is to
cast a spotlight on the role of management. The judges who
directed the investigations into the previously mentioned
accidents did not satisfy themselves with simple
explanations, but conducted profound inquiries into the
circumstances of the accidents. They often found that the
errors made by the operators at the sharp end, as I said,
were simply the last links in the chain that had been
building up for some time.
But why weren't the errors detected? Why weren't
they corrected? If poorly trained people committed errors,
who was responsible for the lack of training? The reports
often showed that it was previous actions or inactions by
the firm's management that had let hazardous situations
develop unchecked.
In the sinking of the Herald of Free Enterprise,
for instance, the immediate cause of the accident was a
bosun who failed to shut the ferry's bow doors because he
had fallen asleep. But why was there no back-up? And why
was there no sensor that would tell the ship's bridge that
the bow doors were open? Thus, also in the Reason model,
human error in the ordinary sense is much less critical than
the observation that management sets the scene for safe or
hazardous operations. Management does this in several ways.
First, it picks the actors in the drama. If it
picks faulty actors, we should not be surprised when they do
dangerous things.
Second, management sets the goals and creates the
pressure to reach these goals. Many unsafe situations are
the evident result of management pressure to achieve
production goals by compromising safety.
Third, management sets the physical scene. If the
equipment, the spaces, and the budgets are not adequate to
get the job done right, who has the control that could
provide the right stuff? Usually it's management.
Finally, management sets up the active dynamics of
the situation through its creation of organizational
structures, rules, and standards. Management empowers or
disempowers. It either gives the operators the powers to do
the job or it doesn't. And most of all, by the standards it
sets, management lets the workforce know what is expected,
what will be monitored and what will be rewarded or
punished. The Reason model then directs attention to the
dynamics of the system as a whole, not just at the operating
personnel.
Next slide?
The Reason model is, thus, universal. It is not
just an aviation model but a universal model of all
accidents that applies to basically all industries. And, of
course, Jim Reason would be the first to admit that the
model is not perfect and is capable of much improvement.
All this suggests that the model applies equally
well to medical situations. I think this is manifestly true
of the situation in regard to the blood supply, and it is
very well illustrated by the outstanding report of which Mr.
Justice Krever is the author on the Canadian blood system
failures. I would like to take a moment to recall briefly
what some of these failures were.
Next slide, please?
In essence, beginning in 1982 and continuing for
many years afterwards, the organizations in Canada
responsible for the safety of the blood supply allowed it to
become contaminated. They allowed distribution of this
contaminated blood, and they consistently underestimated and
caused others to underestimate the dangers the blood supply
presented. The result, as we know, was the infection and
death of hundreds of Canadian hemophiliacs as well as others
infected through blood transfusions.
Similar contaminations occurred in other developed
countries and still occur in even worse forms today in the
underdeveloped world.
The immediate impression we form reading of these
events is one of dishonesty, including self-deception,
appalling carelessness, and indifference to the lives and
well-being of the patients dependent on the blood system.
And yet the decisions made by the individuals responsible
for the contamination were not simply the result of personal
moral defects but, rather, of systematic forces.
And we know this must be so, because in very
different countries with very different blood systems,
similar, although not identical, mistakes were made. These
organizations did not have a culture of conscious inquiry.
They often acted on the basis of preconception. They did
not keep track of the internal states of the system, and
most surprisingly, they continually tried to prevent change
in a system that was failing them and would end up harming
many of those who depended on a safe blood supply.
While a complete analysis of these failures must
wait for another day, the striking thing to me is that these
events were shaped by the very way that the organizations
had been set up, by the terms of reference they had been
given and by the institutional constraints under which they
operated. This at least is my first reading of the report.
So, last slide, please. Let me sum up the
argument that I have made here.
The organization's approach to preventing and
coping with error is shaped by the will to find out what is
going on within the organization. The organization needs to
know where errors are being made, because while the accident is a rare event, error is common. It also needs to know the situational forces that encourage errors, whether those forces come from human factors, management pressures, or the even more insidious dynamics set up by the structure of the organization itself.
The removal of latent pathogens should be the
common goal of all organizations committed to safety. The
best way this can be done is to empower the workforce to
take action and to provide for them the sophisticated tools
to do the job. I personally believe that near-miss
reporting is one of these critical tools. Thank you.
MR. ALLEN: Thank you very much, Ron.
Are there any questions from the panel?
[No response.]
MR. ALLEN: The next speaker is Scott Griffith.
CAPT. GRIFFITH: Thank you, Dr. Westrum.
I was telling Bob Francis, when you were putting
up the slide of the Swiss cheese, it seems that at every conference I attend, six or seven slides of the Swiss cheese model go up every single time, and I see these visions of airplanes flying through Swiss cheese.
[Laughter.]
CAPT. GRIFFITH: You know, ultimately I think, all
analogies break down. Otherwise, they wouldn't be
analogies.
But what I'd like to do today is tell a story, and
I would like to do so without the aid of slides or a Power
Point presentation, because I think that the story is
important, and the story is about ideas. These ideas are
not unique to the aviation industry, but I think that it's
up to you men and women to determine whether or not this
story that we'll tell about aviation has applications in
your field.
It's unfortunate that Linda Connell couldn't speak
to you today, because I think she would set the tone for
what our program provides. Her program--let me say just a few words about it. The NASA Aviation Safety Reporting System, ASRS, was established in 1976 under the auspices of the Federal Aviation Administration to encourage the voluntary reporting of critical flight safety information, and the program has been in operation since that time. It's been very successful, but it is limited in its scope and its design.
The program was designed to offer two types of
immunity, and I'm going to speak a lot today about what that term means to us. Make no mistake about it, where you need
to go eventually does not involve what has classically been
described as immunity. But the Aviation Safety Reporting
System at NASA set up two types of limited immunities from
FAA enforcement or legal investigations. And the first
immunity that they offered was called use immunity, which basically says that should the FAA discover the report that a pilot or a mechanic or a dispatcher turns in, they would be prohibited by regulation from using that report against the individual. It did not prevent the FAA from
taking legal action or pursuing an investigation, but at the
end of that investigation, if it was determined that the
pilot, dispatcher or mechanic had actually deviated from the
regulation, then they would waive the penalty or the
sanction associated with that finding.
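[The use-immunity rule just described can be read as a small decision procedure. The following Python sketch is illustrative only; the names and structure are assumptions, not the FAA's or NASA's actual process.]

```python
# Illustrative sketch of ASRS "use immunity" as described above: the
# FAA's legal investigation proceeds and may still find a deviation,
# but the penalty or sanction is waived at the end if the reporter can
# show an ASRS filing. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Finding:
    violated_regulation: bool  # result of the FAA's legal investigation
    filed_asrs_report: bool    # reporter can show an ASRS report receipt

def outcome(f: Finding) -> str:
    if not f.violated_regulation:
        return "no violation found; case closed"
    # Use immunity does not stop the investigation or the finding;
    # it only waives the penalty once a deviation is established.
    if f.filed_asrs_report:
        return "deviation found; penalty or sanction waived"
    return "deviation found; penalty or sanction imposed"

print(outcome(Finding(violated_regulation=True, filed_asrs_report=True)))
```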
Now, since 1976--I don't have the exact
numbers--but there have been hundreds of thousands of
confidential reports submitted to the ASRS, and it's been
phenomenally successful in its design to collect and gather
information. When we talk about near-misses or near-collisions, these are the types of reports that have to
be gathered if you're going to get an accurate assessment of
the operating environment. In the aviation world, what you
don't know truly will eventually hurt you. And what we
found to be true is that the incidents or the near-misses are a better indicator of, and precursor to, accidents than accident investigation alone. Obviously, when a major catastrophe
occurs, it is imperative that we go and investigate and
learn what happened, but oftentimes we do so without the
benefit of context. We don't know the environment that
existed prior to that event taking place, so we're faced
with putting the pieces of the puzzle back together
retroactively.
And I think that's the situation that you may find
yourself in today. When a tragic event occurs, you can
isolate it to the circumstances surrounding that event
itself, but you may not have the benefit of all the
background and nuances that may have led up to the
situation, and it could have turned out completely
differently.
And what we see is that, in the aviation world,
events happen so rarely, tragic events, that risk-taking
behavior or poor performance behavior is reinforced over and
over again until something bad happens. So if you're truly
going to minimize the accident rate or the potential for
accidents, you have to address the incidents and accidents
that may lead up to that, and those are the kinds of pieces
of information that lie below the surface.
The ASRS has been, as I mentioned, very successful
in gaining access to that information, but the NASA ASRS was
not set up to do what we call corrective action. By its
very nature it was set up as a research organization to look
at the trend and the aggregate of the data. What we decided
to do, about seven years ago, was take the best of what we
thought were the programs that were available at the time.
At American Airlines we looked around and we saw
what systems were available. We looked at the NASA ASRS.
We looked at a program that US Airways had started in 1992
called the Altitude Awareness Program. I won't bore you with the details, but specifically, that program targeted a certain kind of deviation that was occurring at that particular airline, and they got together--the pilots' association, the airline, and the FAA--and said, "Let's put
our best people together and try to analyze why these events
are occurring." So they collected data for about 18 months,
and the FAA agreed during that time to not take legal
enforcement action in exchange for gathering that
information.
What we did at American Airlines was we looked at
that concept, and tried to adapt it to a much broader
application. So we took elements from the ASRS, elements
from the US Air program, and elements from a program called
the Air Carrier Voluntary Disclosure Program. And that
program, in essence, was established around 1992 as well,
which said that if an airline deviates from a rule or
regulation, that if the airline discovers it, reports it to
the FAA within a timely period, and provides a comprehensive
fix or a corrective action, then the FAA will not take legal
action against the airline in the form of a sanction or a
penalty or a fine, but they will close it out with what is
known as administrative procedures.
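[That disclosure rule, too, can be read as a simple conditional. A minimal Python sketch follows; the function and argument names are assumptions for illustration, not the program's actual terms.]

```python
# Illustrative sketch of the Air Carrier Voluntary Disclosure logic just
# described: a deviation the airline itself discovers, reports to the
# FAA in a timely way, and fixes comprehensively is closed with
# administrative procedures rather than a sanction, penalty, or fine.
def disclosure_outcome(self_discovered: bool, reported_timely: bool,
                       comprehensive_fix: bool) -> str:
    if self_discovered and reported_timely and comprehensive_fix:
        return "closed with administrative procedures"
    # Otherwise the ordinary enforcement path still applies.
    return "subject to legal enforcement action"

print(disclosure_outcome(True, True, True))
```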
So with that small piece of background, we decided
to try to combine elements from all three of those efforts
into one program that looked at pilot errors, mechanic
errors and dispatch errors in a way that was non-punitive.
Now, don't misunderstand. When I say the program is
non-punitive, it truly is non-punitive within the confines
of the program itself, but it does not prevent the Federal
Aviation Administration from taking an active role in the
process of error mitigation. It does not offer immunity.
It offers what we call corrective action, and that's a very
important concept to understand, because the ASRS program
was designed to gather information. And they did so by
offering a limited type of immunity.
The ASAP program, the program I'm describing at
American Airlines, the Aviation Safety Action Partnership
Program, was not designed to provide immunity; it was
designed to provide a legal alternative to the FAA's method
of enforcement of the regulation, and when you go back and
look at aviation history, there is a sound basis for
establishing a program like ASAP. If you go back and look
at Public Law 103, Title 49 of US Code, it describes the
administrator's responsibility to enforce the regulation in
the manner that best tends to reduce the possibility of
recurrence of accidents. And that's an important concept,
because there are those in the industry that believe that
the only way to reduce accidents is to provide a negative
incentive or a disincentive for that type of action. And
I'm not trying to discount the effectiveness of it. If you
look at aviation statistics over the years, you see that the
accident rate has plateaued. And as I was sharing with Bob Francis--and that's the other slide that we typically see--there's a slide that's produced that shows the accident rate over time. What we saw prior to the early 1960s was a very high accident rate, followed by the advent of jet engines and higher-reliability engines, and airplanes became more reliable. So the rate flattened out, but what we've seen over the past 30 or so years is a plateau. And why that's a disservice to us is that as the demand for aviation increases and the number of flights goes up, with a flat rate, the total number of fatal accidents has been increasing year over year, and that's a very disturbing fact.
So what we're looking at doing is trying to uncover why we're at that plateau, and why we can't drive the rate down by uncovering new and innovative ways to improve safety. So I'm not discounting the traditional
methods that have worked in the past, such as FAA
surveillance, government oversight, NTSB investigative
authority. Those kinds of things are very important and
vital to a safe aviation industry. However, they have not proved successful in lowering the existing rate, and we
believe there are ample opportunities to drive that rate
even lower by using the best of both worlds.
If you take the traditional FAA enforcement and
you say that has been somewhat successful, now we've
provided a program, which I'm going to describe to you,
which encourages the voluntary reporting of aviation-related
incidents and accidents.
The program was established in 1994 at American
Airlines. Since that time we have received approximately
22,000 reports. Not all of those reports indicate a failure of a person; some are observations that could lead to an incident or accident. And less than 1 percent of the 22,000 reports that we have received at American Airlines would have been known to the FAA outside of our program--less than 1 percent. Actually, it's less than half a percent.
So what that tells us is that the ASAP program,
the first benefit to us is that it allows us to see a
clearer picture of the aviation industry. Now, NASA will
give you the same information because they have even greater
access, nationwide and worldwide, to events that take place,
but NASA doesn't go in and achieve what we think is vitally
important, and that is corrective action. NASA is set up to take information in the aggregate and uncover trends and
statistics, and they do publish bulletins to the industry at
large and to the FAA which talks about the trends that they
see, but they're not tasked with going in and achieving
corrective action with individuals, and our program does
that. So I ask that when you consider reporting programs,
whether they be mandatory or voluntary, look beyond just
gathering the information, because you've got to be able to
go in with the individuals and assure yourselves that you
have corrected it, not only with the system or the
procedures, but with the individuals. And then you take the
next step and try to educate the other professionals to let
them learn from the experiences of others. So there are two
steps that need to be taken, not just gathering the
information, but actually taking corrective action with
those individuals.
The program is designed to encourage the reporting
of any information that could prevent an accident. We have
certain criteria for acceptance. The reporter must report
the event within 24 hours of the time that he or she is
involved in the event, or the time that he or she becomes
aware that the deviation may have occurred. Now, we have
accepted late reports because the individuals truly didn't
know that they had made an error, but from the time that
they become aware that they've potentially made that error,
they have 24 hours to report it. NASA allows 10 days. And again, the reason for that is that NASA doesn't go in and take immediate corrective action.
The report has to be an inadvertent or an
unintentional act. We do not accept deliberate disregard
for safety or security. We do not accept criminal behavior.
If we get a report that indicates drugs or alcohol may be
involved or other criminal activity, then we will turn that
information over to the proper authorities. So we do not
offer immunity from criminal action.
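[The acceptance criteria just described amount to a short decision procedure. A minimal Python sketch follows; the field names, and the idea of treating the rules as one function, are assumptions made for illustration.]

```python
# Illustrative sketch of the ASAP acceptance criteria as described:
# a report must arrive within 24 hours of the event or of the reporter
# becoming aware of a possible error, it must describe an inadvertent
# act, and reports indicating criminal activity, drugs, or alcohol are
# referred to the proper authorities.
from datetime import datetime, timedelta

WINDOW = timedelta(hours=24)  # the NASA ASRS, by contrast, allows 10 days

def accept(filed_at: datetime, aware_at: datetime,
           inadvertent: bool, criminal_or_substance: bool) -> str:
    if criminal_or_substance:
        return "rejected: referred to the proper authorities"
    if not inadvertent:
        return "rejected: deliberate disregard for safety or security"
    # The clock runs from awareness, so a "late" report is still
    # accepted if the reporter only just learned of the possible error.
    if filed_at - aware_at > WINDOW:
        return "rejected: outside the 24-hour window"
    return "accepted for review"

aware = datetime(2000, 4, 25, 8, 0)
print(accept(aware + timedelta(hours=20), aware, True, False))
```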
Furthermore, we do not offer immunity from civil
action, because we take every report, and because it's set
up jointly between the Federal Aviation Administration and
between the airline and between the Pilot Association, we
must reach what we call a unanimous consensus, which is a somewhat redundant term, but it forces an agreement unanimously amongst all parties. And it's very important
because it has set up a balanced approach to error
management. We collectively go in as groups that have
traditionally held widely divergent positions and even
adversarial relationships, and work together in a
harmonious way to achieve what we think is the greater good.
If you look at the rights of the individual versus the
rights of society at large, you can balance the two. It is
an important concept to understand. Historically we haven't
done a good job of that, but we can balance the public
safety interest with those of the individual.
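[The unanimous-consensus rule can be sketched as a simple gate. The following Python sketch is illustrative; the party labels come from the talk, but the voting mechanics are an assumption.]

```python
# Illustrative sketch of the "unanimous consensus" rule: the airline,
# the pilots' association, and the FAA must all agree before the group
# acts on a report.
PARTIES = ("airline", "pilots_association", "faa")

def decision(votes):
    # votes maps each party name to True (agree) or False (dissent)
    if all(votes.get(p, False) for p in PARTIES):
        return "proceed with the agreed corrective action"
    # A single dissent blocks action, which is what forces parties with
    # traditionally adversarial positions to work out a shared view.
    return "no action: unanimous consensus not reached"

print(decision({"airline": True, "pilots_association": True, "faa": True}))
```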
And what we've done is we've provided an
environment where the individual comes forward, explains
what happened in some detail. We will meet with that
individual if necessary, call them in for additional
training. We achieve corrective action in many forms.
Sometimes the forms of corrective action involve additional
simulator experience for the pilot or a line training event.
Sometimes it involves a change in a procedure or a change in
a systemic concern that we might have with air traffic
control. We work together collectively to achieve that
balance and that corrective action. And once that
information has been gathered, the event has been corrected,
we take that information, publicize it to all of our 12,000
pilots, and try to achieve corrective action in the larger
sense by sharing that information with others.
As I mentioned, we received over 22,000 reports,
and less than 1 percent of those would have been known to
the FAA. A somewhat greater percentage would have been known to the airline, but even so, it would have been far less than
what we've learned today under the program.
In terms of the ability to ensure the appropriate
use of the information gathered from the program, again,
you've got to balance the needs of the society at large with
those of the program. We have unfortunately been involved
in two accidents at American Airlines that involved
litigation. In 1995 there was a subpoena placed on
information arising from our program by the Federal District
Court in Miami, and it was, in our mind, a watershed event
because it allowed us the opportunity to state the case
for the assurance of appropriate use for this information,
and in fact, the judge awarded us a qualified privilege
based on common law. However, we had made the argument that
this information should be exempt from discovery on the
basis of what's known as self-critical analysis. And
forgive me, I'm not an attorney nor a physician, so when I
speak in terms of other professions, I may get some of the
terms wrong. But we made the argument on the basis of
self-critical analysis, which basically says to us that if
an airline or a corporation has an event or an accident, and
there are legal proceedings resulting from that, the airline
has an obligation to go in and try to learn what happened,
to try to uncover why this event occurred. And in doing so,
you often explore areas that may not have been explored
otherwise. You may chase down a particular trail that might
have in the end no bearing on what actually happened, but in
doing so you might learn more about your airline and take
corrective action.
But the thinking on the self-critical analysis
piece is that you should not then be penalized for going
forward and doing what I call the right thing, but when you
made the argument--when we made the argument in court, the judge rejected that argument and said, "No. I'm more
concerned with the effect that discovery would have on the
participants in the program based on common law, similar to
the relationship that exists between a patient and a
physician." And so he said that it would have--and this is
an over-used term--a chilling effect on aviation safety
because the reporters, the pilots, the dispatchers and the
mechanics, would no longer feel free to report these events
if it was used in a manner that they didn't expect or
outside of the intended purpose.
And you see this over and over again in aviation,
where you have issues related to the recording devices on
board aircraft, the cockpit voice recorder, the digital
flight data recorders. There's discussion now on Capitol
Hill about putting video cameras in the cockpit. And all of
these issues surround, in my mind, our inability so far to
balance the issues, whether it's a privacy issue with the
individuals or an issue with litigation in the discovery
process, or the Freedom of Information Act if it's a
government body. All of these issues have to be balanced,
and in doing so, we have to keep focused on the greater good
to society, public safety. And we believe--and it may take
some time--but we believe that we can show that programs
like ASAP and like the ASRS and a program called FOQA, which
is described as Flight Operations Quality Assurance, we
believe that these programs ultimately will prove to be the
way that we reduce the accident rate, the way that we drive
that rate down even lower by exploring these human factor
related issues.
And I suspect that in your field, many of the
issues that you're struggling with, with computer technology
and newer devices to assist you in the operating rooms and
in the health care, are similar in nature to the issues that
we faced in aviation, because we've seen an onslaught of
technology in the cockpit designed to make our jobs better,
whether it's through reduced workload or increased
reliability, but what we've seen is that every time a new
piece of technology comes onboard our airplanes, there's
another human element to it that we discover after the fact.
And although I'm not qualified to speak on it as Dr.
Helmreich is, it is an issue that is a basic human factor
issue, and it applies to more than just aviation I believe.
But we must balance the issues that I'm sure
you're facing related to discovery. We have the discovery
process through, not only civil litigation, but through the
Freedom of Information Act, and most recently, the Congress
has directed the FAA to write regulations which would exempt
certain pieces of voluntarily supplied information from
discovery through FOIA. It reminds me in many ways of what
the US military has already done, and I wonder sometimes if
we're failing to learn from our experiences of the past, but
within the US military, when an accident, an aviation
accident occurs, there are two investigations that take
place. One is the safety investigation, which is
privileged, and there's case law that supports that data
being exempt from discovery under the executive privilege,
because it is deemed to be in the interest of national
security. And then there's the legal investigation which
takes place, at the same time, and using some of the same
information, but not all of the information. And so I think
that there's a model there that has existed since about
1948, which says the US Government has the right to go in
and learn the truth, or as Paul Harvey says, "The rest of
the story" about what really happened, and then take action
to prevent its recurrence.
And at the same time there are the rights of the
plaintiffs and the attorneys and the families of the victims
that have a right to know basically what happened, and yet,
those two are not mutually exclusive, but often it's
construed or it's perceived to be something that's mutually
exclusive.
So I think the challenges that you're facing today
are much the same as we've been facing in aviation for some
time. I don't want to give you the impression that we have
succeeded completely in the aviation world. We have a long
way to go before we can achieve what we feel is our ultimate
goal. However, we think that we are on the right track. We
think that the only way to reduce our number of accidents is
to pull the veil back, look at what's happening out there,
try to take corrective action on each and every event, and
then share those experiences with the other operators and
the government authorities and work together in a harmonious
way to achieve benefit for the greater good. And we do
think that public safety improvements are not mutually
exclusive with the designs and desires of other parties.
I think that we need to tell the story, and I
appreciate the opportunity to come here today to talk about
these things. We need to tell it in a way, in a responsible
way to the media. Many people in the aviation industry are
very skeptical and nervous about telling the story when it
comes to these type of programs, but I believe ultimately
we're going to have to do that. We're going to have to
allow the news media access to the process of these
programs.
The first reaction that you get when you talk
about trying to exempt information from discovery or to
protect certain pieces of data, it tends to make people
think you're trying to hide something. That's the
perception. I'm sure many of you have dealt with that in
many ways. But if you take the long-range approach and if
you look at the greater good for society, we have to do
both. We have to preserve our ability to give the public
information about the processes that we use and the way we approach safety, and you have to be able to ensure,
as a society, as an organization, or as a government
oversight agency, your ability to collect that information,
and then go take the next step to corrective action. So
there's a delicate balancing act that needs to be
accomplished, but it can be done.
I guess at this time I should open the floor for
questions, doctor. We'll just go around the room, starting
on the right here, and move to the left, I guess.
MR. ALLEN: Dr. Piliavin.
DR. PILIAVIN: Hi. I'm a little confused about
all of these different kinds of programs, and particularly
you said that there's only a small percentage of the things
that your program at American Airlines gathers as reports
that would go into the FAA. And I quite understand that the
focus is different, is that you have this corrective focus,
and theirs is a data gathering, but just in terms of what
kinds of reports you get and what kinds of reports they get,
I don't understand the difference. Why are some of the
things or most of the things you gather not the kinds of
things that go to the FAA?
CAPT. GRIFFITH: Okay. Let me clarify one thing.
The NASA program, when I spoke about the ASRS, that
voluntary program, that is sanctioned by the FAA, but it
doesn't go--those reports, you understand, don't go to the
FAA.
DR. PILIAVIN: Right, okay.
CAPT. GRIFFITH: The FAA has regulations that
require legal investigations. If I as a pilot deviate from
a regulation, the FAA is required by law to investigate that
and take corrective action. But there is no--in the sense
of a mandatory reporting requirement, there is--and it
wouldn't be enforceable even if it happened--but there is no
mandatory requirement for a pilot to come forward and say,
"I deviated from the regulations." Historically what's
happened is the Federal Aviation Administration has a vast
team of inspectors that they use to do surveillance, and
they do that surveillance by going and observing you in the
cockpit, watching you fly, and getting reports from air
traffic control. But that limited insight to what's
actually happening is what I think has left the FAA unable
to see the entire picture, because they don't have--even if
they had--they'd have to have enough inspectors to track
every single pilot every day to see the clear picture of
what's going on.
DR. PILIAVIN: This NASA program, what's that
based on?
CAPT. GRIFFITH: The NASA program is based on the ability to turn in information on a report voluntarily,
and it's done confidentially so the FAA does not have the
identity of the pilot who reported it.
DR. PILIAVIN: Okay. What's the difference
between that and the ASAP program?
CAPT. GRIFFITH: The ASAP program--actually, every
report that comes into us at ASAP goes to NASA. We send it
there to contribute to that database, and get that
information into their system. The difference is that if a pilot turns in an ASAP report, he
stops the legal investigation with the FAA, and the FAA has
agreed that if the pilot achieves corrective action, they
will close that event out with administrative procedures
rather than legal findings.
DR. PILIAVIN: What happens with the voluntary
reports that go into the NASA program?
CAPT. GRIFFITH: When a report goes into NASA, it
stays on file. They make it confidential. They track it
with a number. If the pilot should be investigated
independently by the FAA, and they take him through the
legal process and they find him guilty of the violation,
then the sanction or the penalty is waived if the pilot can
show evidence that he participated in ASRS. So the legal
investigation, the finding of guilt or innocence, and up to
the point of imposition of sanction, the legal process is
still in place with the ASRS.
DR. PILIAVIN: I'm still confused. I thought you
said that a very small proportion of the things that you
collect is available to the ASRS. And now you say you send
everything to the ASRS.
CAPT. GRIFFITH: No. Available to the FAA. Less
than 1 percent of our reports would have been known to the
FAA.
DR. PILIAVIN: Okay, all right. Now I have it
straight I think. Thank you.
CAPT. GRIFFITH: It is confusing. The NASA
program--the FAA--if I can be candid, the FAA has been less
than full in their support of the ASRS, because the FAA only sees that information if they make a request under the Freedom of Information Act, and they only see it in the aggregate.
They don't see the individuals involved. They don't know if
the event that they're investigating was exactly what
happened in a report. They can't track those reports
one-to-one. So they are prohibited from using that report in their investigation. It's only used, in the FAA's eyes,
to offer limited immunity.
The ASAP program is different in that the FAA sees
the report, talks to the pilot, uses that information
directly to close out their investigation with us, but they
do so using administrative procedures rather than legal
findings.
In a pilot's world, the obstacles to reporting in
any system are discovery or punitive use through
enforcement. In other words, a pilot's not going to come
forward and tell you what happened if he thinks that you're going to give that information to the FAA and it will be used against him--it's self-incriminating. But imagine what it
would be like--and Bob Francis raised this question to
me--if a pilot was required to tell what happened, admit his
errors to the passengers, as you do in your world. I mean I
can imagine trying to make a PA announcement--"Well, I deviated from an altitude"--and explaining it to 200 passengers in
the back. And I thought, well, if you're going to do that,
why don't you just take the next step and say not only tell
what happened but tell why it happened, and then tell what
you're going to do to prevent it from happening again.
Yes, ma'am?
MS. LIPTON: Captain Griffith, I just have a
question about--it really relates to corrective action and
two quick questions. The first being, do you think that
public acceptance of really what I call a non-disclosure
rule, is--do you think that corrective action or telling
people that corrective action is required is an important
part of that public acceptance? You know, we've been
talking here, but we really haven't been talking about
corrective action. We've been talking about collecting data
about things, but not requiring anything in return for that.
And it sounds as though you think that's very important.
CAPT. GRIFFITH: I think it is the single most
important element of our program. I mean, it's a critical
first step to identify where the concerns are, and NASA--I'm
a wonderful supporter of the NASA program--it is a means of
identification of what's going on, but as a private citizen,
I would say I want to know that my government or my airline
or the pilot associations are working to correct those
issues, not just identify them. So, yes, I think that that
is the most vital element of our program, is achieving
corrective action.
MS. LIPTON: The next question is, once you take that corrective action, do you ever go back and see that it actually works?
CAPT. GRIFFITH: Yes. And because this approach
that we've taken ties in with the work that's been done in
human factors research, Dr. Reason and others, we're seeing
that these are human issues more often than they are
technological. If we could go in and put a widget on an
airplane to stop it from crashing, we'd go put it in. And
in fact, we've done just that in recent years. But each
technological improvement that we put on brings a complexity
with the man-machine interface, so what we've done is we've
tried to drill down below the probable cause, if you will.
And Bob Francis and I were talking on the break. We've
historically gone and tried to blame someone or some
organization for a failure, and when you--and the NTSB to
some extent has done that by going in and trying to reach a
probable cause determination, which they're tasked to do.
But in determining a probable cause, it's often a simplistic
approach. You can say that if a baggage cart runs into an
airplane on the ground, the probable cause is the failure to
maintain adequate separation between the baggage cart and
the airplane. Well, that doesn't get you very far in
preventing the next one. You've got to be able to go in and
say, "What are the contributing factors? Why do we think it
happened? Where do we think we are at risk in the future?"
So you've got to be able to go down to the
underlying level, the underlying causes, and what we see are
basically human issues. Pilots make mistakes when they're
tired. They make mistakes when they're under conflicting
interests or distractions. When you're performing several
tasks at one time, which I'm sure is the same in some of the
operating environments, when you're performing several tasks
at one time, sometimes you tend to focus on something that's
more immediate, but it might not be the most important task
you're performing. So prioritizing those duties and
responsibilities and minimizing distractions, being
cognizant of the external and internal pressures that are
upon you, and forming a hierarchy of what tasks should be
accomplished first and with which order of priority, is the
kind of thing that you can get down into the lower levels.
And we see the same things that cause altitude deviations
cause navigational errors, which cause other types of
concerns. But only by getting that information can you go down and track it.
And I think the answer to your question, the
long-term measurement, or the measurement of the success of
these programs is going to have to be long term. You're not
going to say that year over year, this year versus next
year, you're going to see a dramatic drop. If it were that
easy, we would have done it years ago. You're going to see
something scientifically I think which is sound, in that
you're taking the right approach, but in dealing with a
non-linear dynamic environment with somewhat chaotic
behavior, you're going to have to measure those trends and
statistics over the long term. I think it's going to take
between 10 to 20 years as an industry to accurately measure
the rate reduction that we're going to achieve. But we're
looking for something that is long term; we're not looking
for a quick solution unless it's a lasting solution. We're
looking for something that long term is going to be
effective in reducing these rates down lower. And I think
if you take the short-sighted approach, you're going to be
subject to variations which may skew your results.
MS. LIPTON: Thank you.
CAPT. GRIFFITH: Yes, sir?
DR. GUERRA: I'm interested in what you just said,
and putting it into a context of--I guess as one increases
the awareness of, you know, the potential near-misses or
near-hits and the breakdown in whatever the techniques might
be that are recognized by somebody as contributing to that
pathway. And you used the terms of say 22,000 reports that
have gone into this system, and that it's obviously going to
take a long time to see if there are in fact any changes
that occur that would indicate that, yes, things are getting
much better.
What is your observation though in terms of those
reports that get into the system, that are then
investigated, and are either substantiated or not
substantiated? Because I think that there is such a wide
scatter, you know, some judgment call sometimes or based on
experience or lack of experience or whatever, however one
perceives something to be potentially dangerous--what is that observation?
And then the second is, at what cost does all of
this take place in terms of the investigation, obviously,
the anxiety, the apprehension, all of those things that add
some complexity to it?
CAPT. GRIFFITH: Okay. Let me answer the second
question first, at what cost. I think it's a cost that
is--it's a hidden cost, because if you don't take this type
of approach--as Bob Baker, our vice chairman of the airline, has said, what you have without the ASAP program is the immunity of silence. And I say what you don't know will hurt you. The costs involved in the program are the costs of manning it and supporting it, but it's
more your ability to create or foster an environment of
trust and cooperation. And because we have the regulators
and the pilot associations and the other employee groups,
and the airline management, working together as a team, it's
a great leap of faith for an individual to say, "Hey, I made
a mistake, and I'm going to come forward and tell you
exactly what happened." Because we're all human. And when
I'm flying an airplane and I make a small mistake, the first
thing that goes through my mind is, who knows about this?
And it takes a leap of faith to say, "I'm going to do the
right thing, and I'm going to come forward and I'm going to
tell the story about what happened."
Now, what we've seen over a period of time--and
again, the American Airlines program is kind of like an
experiment. It was a microcosm, if you will, because it was
10,000 pilots. Now we've grown to 12,000. But it's been a
contained system. We have seen a linear increase in
reporting volume month over month since the program started.
Now, we've got to establish a baseline that says, "Well,
this is where we think the threshold is or the baseline will
be established", but even though we've seen the volume of
reports steadily increase--which is to us a good
thing--we're getting the confidence of the employees, and
that's what's required to make this successful--we have seen, within certain categories, certain events go down, even though the total reporting volume has increased.
So we've been able to measure some success.
But you've got to develop the cooperation of the reporters, the front-line employees. And in your world I'm
sure it takes the form of several different occupations
within the medical field, but you've got to develop the
ability to reach out to them and have them come forward and
tell you what they're seeing, because as managers or
supervisors, you're not going to see the day-to-day activity
that they see, and you're not going to be able to take
corrective action until you know what's happening. So
you've got to develop that relationship.
In the aviation industry it's a little bit
different than others in that--well, actually, it's a lot
different. We come under large employee groups that tend to
have collective bargaining agreements, so there tend to be ways in which you can apply a program or a rule or a
regulation that will affect a large group of people at one
time without the various subgroups in there to lobby against
that type of action. So in the aviation world there
are--and in the US maybe--I'm guessing here--50,000, 60,000
airline pilots, most of which come under large collective
bargaining agreements. So we've been successful in working
agreements with those associations and unions to say, "We're
going to handle these programs in this manner." So their
union goes back to the employee and says, "We think this is
a good thing and we have signed it."
In fact, I guess I failed to mention, the ASAP
program, one of the reasons that we have succeeded is that
we put the details of our program in writing as a binding
agreement between the airline, the FAA and the employee
group. We set it in writing. Now, that's not to say that
it didn't change and evolve over time, but we started with a
5-page document, and then we've expanded it now to about 35
pages. But there are some basic elements in that program
that I think would apply to any industry. You've got to
have unanimous agreement by the group that's charged with
evaluating these reports, and you've got to have the ability
to take corrective action that you all agree on unanimously.
Otherwise, it's too easy to go in and take an institutional
position.
I'm not sure I answered your question. Would you
like to follow up?
DR. GUERRA: Well, the point that you didn't
answer though was of those that are reported to you and
investigated, how many are substantiated in a way that you
know either you need to take some actions, some disciplinary
action, or that it is going to affect some behavioral
changes?
And I raise the question, because I think we
oftentimes get into the dilemma, which I think we've
observed many times over on the medical side of things, and
as an example--and the Justice referred to this previously
in his comments about prevention of child abuse, for
example--as the awareness has increased about, you know, the
conditions that predispose to, or that ultimately are recognized as, the abuse or neglect of a child. The number of reported
cases has just steadily been increasing, but at a tremendous
societal cost and disservice to the--quite often the care
takers of the children, and sometimes to the supposed victim
of that abuse or neglect, because of the stigmatization and
because of all of the other things that fall out for cases
that are not very well substantiated. And I think that we
can get into a very similar dilemma with this kind of a
system.
CAPT. GRIFFITH: I think what you're asking is how
do you know when you take corrective action and how far do
you go with that. And in our mind, there's a threshold,
depending on the type of event, that if the severity of the
event or the implications--the potential for future occurrence in other areas--is there, then there is a
threshold. And you say, "Well, these events we see a lot of
them, and yet the impact is minimal." Then that's sort of
noise below a threshold. But because we're working as a
team, that threshold has moved beyond where it would be with
just the airline itself if we were looking at the events
ourselves, because we work closely with the unions and we
work closely with the FAA. So collectively we set the bar,
and if an event rises up, either in severity or in frequency
of occurrence, then we go in and we say, we're going to go
look at this one a lot closer.
And because we have to have unanimous agreement on
that, there's a checks-and-balances approach taken, in that it's not just one party saying, "Well, we don't think that's
an important event, or it's not worthy of continuance" as a
group. And it makes me feel good as a private citizen that
the FAA is a full partner in this, because now the FAA, who
is charged with government oversight, has the legal
responsibility to investigate these events; even though
they're investigating far more than they would have in the
past, they have to satisfy their legal requirement.
So you set your threshold based on the type of
event or category that you're looking at, and if you were
involved in issues such as you've described with child
abuse, with a potential for widespread abuse and
applications, I think that you would have to look at placing
the threshold low enough to say, "We've got to take action
with these events, and then educate people on the
consequences in a way that maybe hasn't been done." And if
you can organize the program in a way that has that check
and balance, where you've got perhaps a government agency,
hospital administration staff, and some peer group representation, now you're bringing in the resources of
three separate organizations to achieve that, and I think
you have to be able to go in and adjust that bar based on
collectively what you think is a problem.
Other events that might have a lesser impact on your
system, you might set the bar in a different place and say
that we're going to trend these data out and see what shakes
out of it. But on something like child abuse, if you wait
to trend that information, you're hurting a lot of people.
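[The threshold he describes can be pictured as a simple triage rule. A minimal Python sketch, with invented numeric scales, follows.]

```python
# Illustrative sketch of the threshold idea above: an event category is
# escalated for close investigation when its severity or its frequency
# of occurrence rises above a bar the parties set collectively;
# everything below the bar is trended in the aggregate. The numeric
# scales and default bars are invented for illustration.
def triage(severity: int, monthly_count: int,
           severity_bar: int = 7, frequency_bar: int = 10) -> str:
    if severity >= severity_bar or monthly_count >= frequency_bar:
        return "escalate: look at this one a lot closer"
    # Low-severity, low-frequency events are "noise below a threshold."
    return "trend in the aggregate and see what shakes out"

print(triage(severity=3, monthly_count=25))
```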
MR. ALLEN: Dana?
DR. KUHN: Yeah. You had talked about the
successes of a voluntary disclosure, non-punitive, but
corrective program. The two questions I have are the other
side of the coin, perhaps from the passengers' perspective.
Are air crash accident victims--or should I say their
beneficiaries--compensated, or is the only mechanism of
compensation for these accident victims received through the
purchase of the flight insurance, through those little boxes
at the airport? And that's the first question I have.
And the second one is if an air crash occurs--and
maybe it's due to the turning screw in the tail flap of a
737, or maybe it's because a pilot has logged more hours
than allowed, or maybe he indulged in a little alcoholic
beverage--how many victims' families bring lawsuits against
these airlines, or do they have to establish a cause of
action or negligence before they can even bring a lawsuit
against the airlines?
And then I guess part of that question also is, is
there a protective mechanism which the airlines have to
dissuade lawsuits, such as--I know with blood products,
there's a Blood Shield Law.
CAPT. GRIFFITH: Well, you're asking questions
that I'm afraid I'm not qualified to answer. I will give
you--like most pilots--my opinion.
We see at American Airlines that invariably when
someone gets hurt, someone's going to sue. Now, it's the
fact of life that we all live in, and perhaps you more so
than us. But we do see class action suits brought against
the airline. What tends to happen is that the airline tends
to deal with these on a case-by-case basis. And of course,
we work with our outside attorneys and insurance brokers to
try to minimize the impact on that, but every airline does
it a little bit differently, and we do see a high activity
rate in litigation stemming from events, and I don't think
that's going to diminish.
In terms of compensation, you know, there are
international treaties which describe ceilings and
limitations on the means of compensation or the total
amount, and those issues tend to be anachronistic in the
sense that they were based on laws that were put in place back in the 1940s.
But what we see today is an awareness on the side
of the plaintiffs' attorneys or those that represent the
victims, an awareness that the aviation industry is
collecting more information, and so they, understandably,
have targeted programs like ASAP and FOQA and ASRS and
Voluntary Disclosure, and internal audit processes. They
have targeted that information for discovery. I think it's
a fact of life. And if you're a plaintiff's attorney and
you feel that you're doing the right thing for your client,
and you're serving his needs or her needs, then you tend to
be zealous in your defense or your attack.
What I think, though, we as a society may be missing is the accomplishment of a
greater good for society by educating the public before an
accident occurs as to the process of investigation, and that
we can work with the regulators--it's extremely important,
from my perspective, to say that the agencies that are
involved in oversight of aviation, the FAA and the NTSB, are
actively involved in the management of these programs and
oversight, because it increases my government's ability to
do that oversight and prevent accidents. And the goals of
the NTSB and the FAA are actually identical, which is to
promote aviation safety and prevent accidents. The means by which they accomplish that is
sometimes vastly different, but the goals are the same. So
it's important as a citizen, as an employee, and as a
manager, for me to know that the government is actively
involved in that.
The civil litigation side of it, that I'm sure
you're dealing with in different ways than we are, is not
going to go away. And I'm not sure that we can depend on
individual decisions in district courts to put in these
fail-safe methods or assurances, but we need to raise a
dialog. We need to talk about it in Congress. We need to
talk about it in the press. We need to let the media know
why we are attempting to do these programs. And I'm
convinced that we can tell a story which will make sense,
because if we don't, then there's going to be this veil of
uncertainty, and the public is going to say, "What are you
trying to hide?" And I think we just need to raise a dialog
to the level of education in this country.
MR. ALLEN: Dr. Epstein?
DR. EPSTEIN: Captain Griffith, could you just
clarify for me, just so that I can be sure I understood.
You said that the group acts through unanimous consensus.
But the component elements of the group are the airline, the
employee group, such as the pilots' association, and you
never actually stated the third. Is it directly the FAA?
CAPT. GRIFFITH: Yes, sir, it is.
DR. EPSTEIN: It is, okay. Given that there are a
hundred-fold more incident reports than would otherwise be
reported to the FAA, does the FAA actually review each of
these 22,000 reports, or do you somehow review them subject
to a pre-agreed standard or threshold? Because in essence,
if the FAA now has to look at 22,000 reports, there's a
tremendous implication to that agency of this system. So I
want to ask a very precise mechanistic question. Given that
there are a 100 times more reports, given that the FAA is a
partner in deciding action through consensus, are they
actually engaged in reviewing all the reports?
CAPT. GRIFFITH: Yes. They receive every report
we get, unedited, unfiltered, and they review it at the same
time we review it.
DR. EPSTEIN: But they review it without the
identifiers?
CAPT. GRIFFITH: They review it--it's a
confidential system. It's not anonymous. But what we do is
we redact the individual's name from it, but his employee
number is there, and we have the means as a group or a body
to go and retrieve that information.
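[The confidential-but-not-anonymous handling he describes can be sketched as a simple redaction step. The Python below is illustrative; the field names are hypothetical.]

```python
# Illustrative sketch of the handling just described: the name is
# redacted from the copy the FAA reviews, but the employee number stays
# attached so the group can retrieve the identity when corrective
# action requires it.
def redact_for_faa(report: dict) -> dict:
    faa_copy = dict(report)
    faa_copy["name"] = "[REDACTED]"  # removed before FAA review
    # employee_number is retained: confidential, but not anonymous
    return faa_copy

print(redact_for_faa({"name": "J. Pilot", "employee_number": "12345",
                      "narrative": "altitude deviation on descent"}))
```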
DR. EPSTEIN: Okay. And can you also state
explicitly, is that report then not subject to FOIA?
CAPT. GRIFFITH: That's correct. It's not subject
to FOIA today because the FAA reviews the report, and while
it's an active open investigation, they have been successful
in fending off, if you will, the FOIA request, because it's
an active investigation.
DR. EPSTEIN: How about when the investigation is
complete; is it then open to FOIA?
CAPT. GRIFFITH: Well, at that point the FAA
returns to the airline all copies, all hard copies of the report--and they don't keep electronic files--all they keep
record on are the administrative closures, that if an event
occurred, they document how it was closed out, and the
airline retains all information related to recommendations
and findings and solutions that were achieved, but those are
held by the airline on site at the airline.
DR. EPSTEIN: But if an FAA employee, for whatever
reason, happened not to, you know, purge his e-mail, and the
case was, as you say, closed, it would then be discoverable
under FOIA, would it not?
CAPT. GRIFFITH: What would be discoverable would
be the records that he maintains, which are the closures to
the event, the administrative actions. He would not
have--we don't transmit the report via e-mail, so he
wouldn't have a record of the event itself, other than the
closure in his files. He doesn't keep the event
description.
DR. EPSTEIN: Thank you.
MR. ALLEN: Dr. Hoots?
DR. HOOTS: Just to follow that up a bit, the
proposed rule changes--we have a lot of briefings in terms
of the responses to the proposed rule change, which I think
are very insightful, about the broader issues of individuals
within the society versus the society at large. And it's
probably the most analogous situation other than error
itself to what we have in medicine and transfusion, and
that's the competing interest side of things. As the FAA is
proposing these rules, what you just described is what's in
place now, correct, at the present time?
CAPT. GRIFFITH: Yes, sir.
DR. HOOTS: And the proposed rule would
specifically increase the FAA's access to specific
information; is that not true?
CAPT. GRIFFITH: Yes. The NPRM, Notice of
Proposed Rule Making, describes the method by which the FAA
would exempt certain pieces of voluntarily provided
information from discovery, and I noticed that it was included in the briefing packet, yeah.
DR. HOOTS: Right. So in terms of these broad
responses that we see both from unions, the industry, from
plaintiffs' attorneys and all, you are optimistic that
somehow this--that some consensus--
CAPT. GRIFFITH: Yes.
DR. HOOTS: --among these competing interests
could be achieved.
CAPT. GRIFFITH: Yes.
DR. HOOTS: And you believe that education will do
that. Do you think since the intent of Congress was
actually to take what you've had as a very good idea, "you"
being the industry, like the ASAP program, and make it
more--and it seems to me, correct me if I'm wrong, a little
more compulsory, and a little bit more broadly reaching in
this way--and maybe I'm not inferring the right thing, but
that's what I thought--then that actually raises the angst
of the newspapers because of freedom of information, et
cetera, over all of this, whereas up to now, when you've
done it voluntarily in-house and then provided aggregate
information to the FAA, they pretty much have been closed
out of the deal anyway, but once the FAA gets closer in,
then they're afraid they're going to be closed out of things
that perhaps they would never have access to anyway, but now
they would have access to.
So how do we--how do you resolve that in your mind
in terms of these competing interests? Is education still
going to solve that issue, or is this finally going to go to Congress just deciding that the rules met their intent and
we're just going to do it if they accept what the FAA has
proposed?
CAPT. GRIFFITH: Well, I think that it ultimately
will go to Congress. I don't think education is the only
approach that has to be taken, but, yes, I think Congress is
going to have to act. But what I do think is going to
happen though is that the FAA will not make these programs
mandatory. I don't foresee that as happening in the near
term, the ASAP program. I think that there are
opportunities there to do digital monitoring, video cameras
in the cockpit, and other programs that have gotten a lot of
attention. But I think they have--after some time--I mean
we could talk for hours on how we have convinced the FAA
over a period of 6 years to finally allow other carriers to
operate our program, because for 6 years they hadn't been
able to. American Airlines has been the only airline in the
US that has had this program because the FAA didn't
understand, and the Justice Department didn't understand,
how the NASA immunity was different than the ASAP corrective
action.
But I don't see these programs as becoming--at least this is my view--I don't see them becoming mandatory
in the near term. I think because the collective bar is
going to be raised, airlines and pilot associations will do
these programs because it's the right thing to do and
because there will be some peer pressure. If you don't do
these programs--and the case can be made that an airline
would be negligent for not doing something like this--for
not doing all that they could do to promote aviation safety.
In fact, if you read Public Law 103 in Title 49, it talks
about the administrator's obligation to promote aviation
safety and hold the carrier to the highest standards. I
don't think it will become mandatory, but I think
collectively, as airlines see that they can do these
programs successfully, you'll see intense peer pressure to
do the right thing. But I just don't see it being something
that's going to come in and be made mandatory.
Now, FOQA is a different story because it's a
technological capability that has been enhanced over the
years. The public knows about it. Congress has been
interested in that. And so it's easier to say you will go
put this piece of technology on board and then, of course,
you have to look at the cost and the long-term benefits.
But how do you mandate a pilot or a physician or a health
care professional to come in and tell the rest of the story,
particularly when we have the Fourth Amendment, and the Fifth Amendment which prohibits self-incrimination? I mean, how can you mandate someone to
come in--now, in the military--and I'm a civilian pilot, so
those of you that are in the military might correct me. But
in the military world, you are ordered to tell the truth.
You are under penalty of court-martial for not obeying an order. But they take that information, and they
make it privileged. The safety investigation is held
separately from the legal investigation.
So there's a lot we can learn from that. I don't
think you're going to see an ASAP program become mandatory,
but I do think that there will--so, in that sense, what the
FAA is doing is saying if it's a voluntary program, then we
have an expectation to treat it a certain way. And I think
that that will be successful in the long run.
MR. ALLEN: Dr. Davey?
DR. DAVEY: Captain Griffith, of the 22,000
reports that you have gotten, are they all reports of one's
own mistakes, or are some of them observations of an error
committed by someone else? For instance, you observe your
copilot making a mistake, and that person may not elect to
report the error. How is that handled under your system?
CAPT. GRIFFITH: That's an excellent question.
It's a self-reporting program. The intent of it is to tell
your version of what happened. And Linda Connell speaks very
eloquently about the bias that's contained in a pilot's
report. Make no mistake, when a pilot turns in a report,
he's seeing it from a certain perspective. But it's a very
important bias. In fact, it is so important that we require
each cockpit crew member on board an aircraft--if they want
to be in the program--to turn in their own report. In our
world, historically, the captain of the aircraft has spoken
for the entire crew. So he would turn in a report, and
everybody else would just sign on the bottom line.
But if a pilot turns in a report that says my
cockpit crew member didn't perform up to the task, then we
will investigate because it's important to us that if we
identify something that could potentially cause an accident,
we are obligated to go investigate that, whether that
individual turns in a report or not.
Now, we are prohibited from using information from
one pilot against another pilot in any disciplinary fashion.
But we're not prohibited from contacting him and making
recommendations to prevent a recurrence, whether or not he
or she turns in a report.
What tends to happen most of the time is when we
contact the other individual that didn't report, then they
will come forward and report as well, because even though we
guarantee that we're not going to take that information and
use it against them, there is the risk that should that
information be discovered outside the program from another
source, that action would be taken against them.
So there are two motivations for a person to turn in
a report. One is a negative motivation: if he doesn't turn
himself in, somebody else will, and he's then going to be
faced with some kind of action. And there's the positive
motivation, which is that most of the time somebody will
say, well, I didn't think that was that big of a deal or
that it was that important.
But we are required to investigate every report
that comes in, even if it's against another employee, but we
don't use that information in a disciplinary fashion. We
will, however, make a recommendation, if it's serious
enough, that that individual be removed from duty with pay
pending the results of our investigation. We just guarantee
that we don't use that for disciplinary purposes.
Let me finish--follow up on that. What we see a
lot of times and the most interesting reports that we get
are those that involve not only pilots but dispatchers and
mechanics. And I think what you would see in your area is
that it's not just one occupational area when an event
occurs. It's generally the involvement of multiple
employees or multiple health care professionals.
We see events where a pilot looked at something and
thought it was okay to go, the mechanic maybe signed it off
improperly, and the dispatcher didn't catch their
oversights. So those are the most valuable reports in many
ways because now we can take three independent perspectives
from three different job descriptions, each of them looking
at the event a little bit differently, and put it together
and find out what happened.
A lot of times we'll get a report from a pilot and
a dispatcher and the mechanic didn't report or vice versa.
But we go investigate that with the employees, and more
often than not, those individuals turn in an ASAP report as
well. They come into the program, and now we can achieve
corrective action with all three groups.
MR. ALLEN: Dr. Goosby and then Dr. Chamberland.
DR. GOOSBY: Yes, in looking at what you've
presented: by collecting and reporting data around misses,
near misses, and hits, you create a surveillance capability.
In the American Airlines model, as opposed to the NASA
model, that capability feeds into a system that takes
corrective action. And in the American Airlines model you
had a 24-hour reporting requirement, where the NASA model
was 10 days, if I remember it right.
CAPT. GRIFFITH: That's correct.
DR. GOOSBY: Can you give the committee a feel for
the feedback that goes into the identification of a
consensus problem and then implementing the resolution to
that problem that you've identified? What kind of a time
loop in the feedback component of that are we looking at?
CAPT. GRIFFITH: On the ASAP program in American
Airlines?
DR. GOOSBY: Yes.
CAPT. GRIFFITH: A pilot will turn in a report,
and every day my staff reviews the reports that come in,
along with the FAA and the pilots' association. If
an event takes place that is time-critical or the severity
of the activity is high, then we'll take action as soon as
we receive the report. And a lot of times we'll get
notified of the event before they have time to turn in the
report. Obviously, if there's an incident or an accident or
a near miss, I'll get a page and we'll know about the event.
So we oftentimes speak to the pilot--not
oftentimes. Sometimes we speak to the pilot before they
actually get a chance to turn in a report. On the
time-critical reports, we'll be on that event posthaste.
The follow-up on that is generally we initially
would remove the pilot from service with pay pending the
investigation. If a pilot--let me give you an example. If
a pilot comes in and strikes the tail on landing, the nose
of the aircraft comes up too high and he strikes the tail,
and then the airplane lands, as a passenger you might hear a
scrape in the back, you might not. The pilot might not even
know that it occurred. But if it's reported to us, by
whatever means, we'll take that pilot and we'll go in and
say, okay, you're going to be removed with pay until we can
have a chance to go in and look at it. He turns in a
report. He comes in, he tells us from his perspective what
happened. Now, we take the flight data recorder
information, and we look at this event. These aren't even
accidents, as the NTSB would classify them. They're just
incidents, so to speak. But it's a precursor to what could
have been a much worse scenario.
So we'll take action. If we've removed him from
service, it might be a week before he comes in, we meet with
him, we take action, and then he's back on the line. But if
it's something that's extremely serious that could be an
accident, we might investigate it that day or the next day
and then put out an advisory message to all the other pilots
because it could be something that could affect another
flight immediately.
So every little event is different in that
respect. The average time it takes to resolve a corrective
action is generally about three weeks. But those are events
where we have no concern about the pilot being out flying in
the meantime. We make a judgment: is the mistake that this
pilot may have made serious enough to warrant his removal
from service? That's the
first question we ask. If the answer is yes, we'll take him
off and we'll investigate, but knowing that he's not going
to be out there flying. If it's something that other pilots
would be affected by, then we'll take action that day, put
out an advisory and take whatever steps are necessary.
Most of the events that we see are air traffic
control type events, where it may take us three weeks to
secure an audio copy of the air traffic control tape--the
conversations on the radio between the pilots and the
controllers are recorded--and then we get information from
the controllers about those. So we'll request the tape,
they'll send it to us in the mail, we'll listen to it, bring
the pilot in, review it with him, and then take corrective
action.
Most of the time, these events don't require
removal from service. So we go in and make a value judgment
every day that we look at a report. But the average length
of time to resolve an event is about three weeks, as opposed
to the FAA's legal investigation process, which can take up
to two years. And it might be something that is very
serious, something that the pilot failed to do properly that
requires corrective action with him, but because they're
constrained by the legal process outside of ASAP, it may
take months and, in some cases, up to two years to resolve.
And, by the same token, the NTSB's investigative
protocol is such that they come in and investigate an
accident, and it might be a year or more before they can
come up with a determination of probable cause and
recommendations and findings. As an operator at an airline
or the FAA, we can't wait two years or a year or even two
months, in some cases, if it's something that we feel could
happen tomorrow.
So one of the advantages of the program is that if
you have an active event review team, you can go in and take
action in a very short time, or if it's such that it doesn't
require immediate action, it can be resolved in a matter of
days or weeks. But it's certainly an improvement in our
industry over what the FAA historically has been able to
accomplish.
DR. CHAMBERLAND: You've alluded to this a couple
of times, and I wanted to see if you could amplify a little
bit more to make sure I had a good understanding of this.
My questions concern the analysis and the dissemination of
the data that you collect. It seems that there is an awful
lot of intensive one-on-one, three-on-one, whatever,
feedback that occurs immediately with the individual. But
I'm interested in--I'm presuming that your reports are some
combination of narrative as well as standardized data
fields, you know, time of accident, type of aircraft,
whatever.
How do you analyze your data, your aggregated
data, and how does it get disseminated? For example, I'm
sort of thinking of analogies in what we do in public health
or in medicine. But do you publish reports, and to whom are
they circulated, that describe the last three months we had
X thousands of reports and you somehow stratify them in ways
that would make sense to you, X occurred at night, X
occurred at day, by type of aircraft, by experience?
So can you give us a feel for how you analyze the
data and feed it back and who are the recipients of this
analysis?
CAPT. GRIFFITH: That's another good question. We
take our reports, and we produce the narratives,
de-identified, for the pilots. Now, initially, when we
started the program, we published every single report to our
pilots. We found that was really too much information to
digest. More is not always better.
What we did then, and still do: every six weeks we
publish a newsletter which summarizes the types of events
we're seeing and some trends that we've identified, and we
provide statistics by category of event and by aircraft
type, and then we provide a sampling of the narratives. We
highlight the ones that we feel are most indicative of the
groups we're seeing. Or we might have one that stands alone
that is worth publishing, and we publish that to all
12,000-plus of our pilots every six weeks.
In addition to that, if it's something that's
immediate, I might send out a message to all of our pilots
electronically on our computer system that they would have
access to before they sign in for work on a given day. So
we have means of alerting them to immediate items,
immediate-attention items.
There is some risk in doing that, because once
you've disseminated it outside of the core group, there's a
risk that one individual out of 12,000-plus pilots might go
to the news media with a report.
DR. CHAMBERLAND: Well, that's my next question,
which is--I don't recall reading any New York Times articles
saying American Airlines has a system in place and receives
X thousands of reports, and there were X number of
deviations in, whatever, altitude, tail-scraping events, et
cetera. So I was curious about your thoughts, because this
I think is relevant to some of the discussions that will
take place in our group today, about the wider dissemination
of information.
I'm assuming it is not currently done, but if
you're disseminating it to 12,000 pilots, it's easy to see
how it could potentially go beyond just the group of pilots
to others. Has that happened? Have you been
approached by not just media but by passenger organizations
or something like that for this kind of information? And
what has your response been? Do you think there's a role
for dissemination of this kind of information to the
consumers?
CAPT. GRIFFITH: Absolutely, I think that there is
a role. But I think that that role is different than what
the news media would like to see. That's an understatement,
I guess.
But the challenge is to turn data into information:
to take all the reams of data that we get and condense them
into corrective actions or trends or highlights that can be
shared. We think the information is most valuable at the
source, at the airline level. Sharing that information with
another carrier is extremely important because we can learn
from them and they can learn from us. That's an
improvement. So we think that there are means to do that,
to share information.
We think it's also important to share this
information with the NTSB and the FAA outside of the offices
that work directly with the program. When the NTSB goes in
to investigate a catastrophic accident anywhere in the
world, they shouldn't go in looking at just that event in a
vacuum. They should have the context worldwide of what may
have led up to that, and they should look at it with
independent eyes.
The airlines have looked at it, the FAA has looked
at it, and we've come up with a recommendation. There
should be a way for us to get that information to the
government and to the other carriers. And then, lastly, the
public at large needs to be--I think the first step in
educating the public is to let them see the process, let
them understand the role of government in these programs,
but giving them the raw statistics, the raw numbers that
could be used to rank or rate carriers against each other.
The potential for misuse is extremely high. When
you look at the ways that airlines collect information
today, it's not standardized. Our airline has a reservation
system--we invested in mainframe technology back in the
early 1970s--and that system has served us well. We happen
to use that same computer system to get our reports from our
pilots. So we are constrained by older technology, whereas
the newer off-the-shelf technology--PCs and Web-based
systems--can allow you to do greater analysis. Our people
are working with Dr. Helmreich and others at the University
of Texas to develop a standardized electronic reporting
format which would allow you to do greater or better
analysis in a standardized way. And we've also worked with
NASA to make sure that we're comparing apples to apples with
them.
The best example I have of not doing that: at our
airline, in one particular year some time ago, we had over
1,000 reports of pilots who had approached an altitude and
not leveled off at the appropriate altitude. An altitude
deviation, we call it. I guess in Great Britain they call it
a height bust.
We compared our statistics to another major
carrier in the U.S., and they published a magazine, and in
the back of that magazine, it said that they had five
altitude deviations that same year. And by all other
measures, we are a comparable airline in any other way you
choose to measure us. But yet the method by which we looked
at these events was vastly different.
So if that information had gone to the New York
Times, it would have said American Airlines has 1,000
altitude deviations and Brand X has five. And we would have
felt that that was unfair treatment of those statistics.
But what needs to happen is that the New York
Times needs to understand the different methods of
collection and the goals of these programs and that Brand X
may approach safety from this perspective and we may
approach it from another, but ultimately we're going to
arrive at the same point, the same destination.
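To make the captain's apples-to-apples point concrete, here
is a minimal sketch in Python. Every number is invented,
chosen only so the output echoes the 1,000-versus-5
anecdote; nothing here comes from either airline's actual
data.

```python
# Two hypothetical carriers with the SAME underlying rate of altitude
# deviations but very different detection and reporting methods. Raw
# report counts then differ by orders of magnitude, so ranking carriers
# on raw counts punishes the better surveillance program.

flights_per_year = 700_000       # assumed annual flights, same for both
true_deviation_rate = 0.0015     # assumed true deviations per flight

carriers = {
    "Carrier A": 0.95,   # digital monitoring plus self-reports
    "Carrier B": 0.005,  # paper reports of only conspicuous events
}

true_events = flights_per_year * true_deviation_rate
for name, detection_probability in carriers.items():
    reported = true_events * detection_probability
    print(f"{name}: ~{true_events:.0f} actual deviations, "
          f"~{reported:.0f} reported")
```

Run as written, Carrier A reports roughly 1,000 events and
Carrier B roughly 5, even though their underlying safety is
identical.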
DR. GOMPERTS: To follow up on this train of
thought, the data that you generate internally with your own
evaluation team obviously goes out internally to the pilots,
as you've indicated to us. But what is the relationship to
the FAA and through the ASRS system? Does that data go into
the ASRS system? Do they collate it, evaluate it, and send
out the information? If you could just try and tie those
two programs together.
CAPT. GRIFFITH: Excellent point. We made a
conscious decision to send every report that comes in to us
to ASRS, and they're dealing with a much larger volume, you
understand. They collect it, they de-identify it, and look
at it in the aggregate.
We give information to the FAA through other means as well.
We will advise them when we see something from our airlines'
perspective that we notice to be a trend. We'll send it up
to the FAA administrator.
What I think could happen and should happen is
that as the other airlines start developing these programs,
you would have the ability for the airlines to come together
periodically and see if they are detecting the same trends
from different perspectives. NASA could then serve the role
as facilitator. If each carrier turns in reports to NASA
like we do, they would look at it with an independent set of
eyes and have a greater ability to say, well, at American
Airlines they're seeing altitude deviations represent 30
percent of their reports, at Brand X they're seeing it's 40
percent or 50 percent. You know, there's some disparity
there.
I mean, we could come together with NASA as
facilitator to say this is what they're seeing, this is what
we're seeing individually, and then share information
between the two organizations.
It's something that, quite honestly, hasn't
happened yet because up until now, we've been the only
carrier that has this program. And the program is not
perfect. I mean, we've got a long way to go. But what we
really should be doing, and are doing, in fact, is
strategizing on how to ensure that each airline has the same
ability to collect information and then compare apples to
apples. And then I think the NTSB and the FAA and NASA
should come together as the government, collectively look at
this information, and take certain pieces of it back and
review it independently for their different purposes; but
together, as a government, they can form a council that
looks at and oversees the way the programs are running at
the airline level.
It's a concept that I have proposed a number of
times, and I think that we're a few years away from
achieving it, but I think that ultimately we have to share
this information with the government agencies, and they have
to assure us of the appropriate use.
There's a proposal called GAIN, the Global Aviation
Information Network, from Chris Hart, who is, I believe, the
Assistant Administrator for System Safety at the FAA. It's
an idea, a concept, that information sharing is good. Well,
of course it's good. But it's only good if you preserve your
ability to collect the information in the future and if you
can take corrective action and show real safety improvement.
But there's a move afoot to share information.
We're trying to make that movement go in the right
direction, so that the information isn't used
inappropriately in ways that would inhibit our ability to
collect it in the future.
MR. ALLEN: Thank you, Captain Griffith.
CAPT. GRIFFITH: Thank you very much. I apologize
for the length of time.
MR. ALLEN: Not a problem. Thank you.
We're going to take a 10-minute break to give the
next speaker time to set up.
[Recess.]
MR. ALLEN: The next speaker is Dr. Robert
Helmreich. He'll be here discussing the application of
error management principles to the operating room.
DR. HELMREICH: Thank you very much. Of course,
being a typical American and never terribly compliant with
rules and regulations, I sort of changed it around a little
bit. But I will sort of do what you asked, but I'm going to
broaden it a bit.
I want to start with a little bit of our
background in aviation. I'm honored to be able to follow
Scott Griffith, who is truly one of my heroes in aviation.
I think I really want to make the point that I think the
aviation experience is important for medicine. I think
they're a great match. We have some factors that are really
shared. Safety is a superordinate goal for both of us. On
the other hand, we're limited by cost. Both involve teams
and technology and varying levels of risk and lots of
sources of threat and error. And in both groups, there is an
enormous amount of second-guessing when things go wrong,
both in the public's perception and in terms of litigation.
I think the fact that medicine is recognizing the
parallels is really illustrated by the British Medical
Journal. I think this is the first time I have seen a broken
airplane on the cover of a medical journal, in quite an
interesting issue.
Just to give you a quick look at what our group at
the University of Texas is doing in aviation right now:
we've worked on a methodology to see what crews really do on
the line, which we call the Line Operation Safety Audit, or
LOSA, where we look, under total non-jeopardy conditions, at
how crews actually manage their flights and how they deal
with threat and error. And we were very pleased that the
United Nations, through the International Civil Aviation
Organization, has just endorsed this as an optimal method of
assessing system safety.
The second part I don't need to expand on. Our
group has worked on a database front end for ASAP, which
asks crews to indicate systematically the factors that
either hindered or helped the resolution of the situation
and also asks for some recommendations for avoiding it in
the future. And, of course, Scott identified one of the
major airlines. The other is Continental. And I invite you
to test drive this form. It's on our website, which we'll
make available.
We've also collected survey data on the effects of
what we call the three cultures on performance: national,
organizational, and professional. We have data so far from
26 countries, so we have a fairly broad model.
We've also developed a model of threat and error
management which is a little different from Reason's model
in the sense that it doesn't stop with the commission of an
error or a disaster, but focuses on how teams manage the
threat and the error.
And, finally, we have worked on formal training,
usually called crew resource management and error
management, and worked on methodologies for that.
In terms of the medical program, which is much
newer, there are many parallel factors. We've been looking
at the effects particularly of organizational and
professional culture on medical personnel. It's a starting
point baseline to look for improvement, and we're working
with a number of interesting organizations--Kaiser,
University of Utah, a group at Harvard, the University of
Texas, and the University of Washington.
We've also been doing some work supported by the
Daimler-Benz Foundation to adapt this model of threat and
error management to medicine, and I'll give you an example
of that. We're also working on the development of a
methodology to observe team behavior in the operating room,
and I'll show a little bit of those data.
We're also planning to adapt the incident
reporting model to medicine with the same kind of format, to
make that available, and, finally, to see if we can adapt
the training in aviation in teamwork and communication to
the medical setting. And I feel considerable promise in
that.
So the data I think are very parallel; I won't go
through them. We are actually working on very much the same
kind of data, and it is all driven by a kind of recursive
model of team performance. That model is important in the
sense that you have to know how many different input factors
people bring to the table, or to the cockpit or the
operating room, that influence performance, running from
their physical condition and attitudes and the nature of the
team to the three cultures that I mentioned; how all of
these get played out in the processes that the teams follow;
and, finally, how they lead to outcomes that drive
subsequent behaviors.
The fact that the model is recursive is why humans
are difficult creatures to study in the real world. It's a
lot easier to grab a bunch of freshmen and see what they
want to--see how they react in the laboratory doing
something totally boring. But I want to show you some of
the data from observing crews in action under non-jeopardy
in the real world, in vivo studies, because we have learned
a great deal from that.
First off, I better define what I mean by error,
and there's two definitions there. I really like the first
one. It's the downside of having a brain. It's kind of
inevitable. But a little more precisely, I think it's
important to note that it's either action or inaction that
leads to a deviation from what either you intend or expect
or the organization intends or expects. So you can err by
doing nothing or you can err by doing the wrong thing.
Most people studying error have focused on active
errors, mistakes, et cetera, but error is much broader.
Safety audit I mentioned. This is non-jeopardy
data collection. The non-jeopardy part is critical, and
we've done it both in the U.S. and foreign airlines. And
what the expert observers whom we have trained code is the
threats to safety in the environment, the number of errors
committed and their management, and the behaviors--the error
counter-measures, both positive and negative--that have been
implicated in accidents and incidents.
So they're proactive data--even more proactive than
incident reporting, because this is looking at the natural
habitat and seeing what goes on. And so far we've looked at
more than 3,800
flights in six organizations, and I mentioned that ICAO has
just endorsed this.
Threat results are kind of interesting.
Seventy-two percent of the flights we looked at had one or
more threats, and that's not a totally fair number because
we sort of tilted it towards looking at more demanding
environments. But the average is about two per flight, with
a huge variability, of course. And the most threats we saw
on a particular flight was 11.
The error results might be more interesting.
Scott and Bob mentioned the idea that maybe we would make
the captain do an announcement every time there is an error
in the cockpit. Well, that means you'd be listening to a lot
of announcements on the PA. We saw a range of from zero to
14 errors per flight, with an average of two. And I think
the next point is very important. The most frequent source
of error was interaction with automation, and that's
important because automation was supposed to be the savior
of aviation; we were going to automate the airplane and get
rid of error. Instead, we changed the nature of error.
We also saw enormous differences in error
frequency and their severity which are associated with the
organizational cultures and sub-cultures.
The next thing we did, based on our data--hard data
from what crews do--was to come up with a typology of error
that I think is important. It classifies five kinds of
errors. There might be some debate about the first kind,
which is violations. Is a violation an error? Yes, it is, in
our opinion, and it's intentional non-compliance.
For example, most airlines have bottom lines that
say if you are not fully stabilized, et cetera, at a certain
altitude, you must abort the landing and go around. So if
you land anyway at an airline that has such regulations,
that's a clear violation.
The second kind of error is the kind most people
think about, procedural error, where you tried to follow the
procedure but you did it wrong. And setting the wrong
altitude into the computer running the airplane is a perfect
example of that.
The third kind of error that's extremely important
is communication, where information is missing or incorrect
or simply misinterpreted. And several examples show up very
tragically in accidents where the copilot tries to alert the
captain to a very serious situation but does it in a very
indirect way where it doesn't register.
The fourth kind of error, which is extremely rare
in aviation, attributed to the strong tradition of training,
is proficiency error, simply not knowing how to perform a
required act such as programming the computer.
And, finally, the last kind is decision errors,
which we define very precisely: a situation not covered by
formal procedures where the crew makes a decision that
unnecessarily increases risk. And an
example that we saw more frequently than we'd like is flying
unnecessarily through adverse weather. The crew sees sort
of bright red on the radar, which, you know, when you see it
on your TV at home, this is not good. And in an airplane,
it's not good. You choose to fly through it because we've
got to make our schedule. That would be a decision error in
our model.
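For readers who want the five-way typology in one place,
here is a small Python sketch. The labels and one-line
glosses are a paraphrase of the talk, not an official
taxonomy, and the tagged examples are the illustrations Dr.
Helmreich mentions.

```python
from enum import Enum

class ErrorType(Enum):
    """Dr. Helmreich's five-way error typology (glosses paraphrased)."""
    VIOLATION = "intentional non-compliance with a rule or procedure"
    PROCEDURAL = "tried to follow the procedure but executed it wrong"
    COMMUNICATION = "information missing, incorrect, or misinterpreted"
    PROFICIENCY = "did not know how to perform the required act"
    DECISION = "discretionary choice that unnecessarily increased risk"

# The illustrations used in the talk, tagged with a type:
examples = [
    ("landed despite an unstabilized approach and a go-around rule",
     ErrorType.VIOLATION),
    ("set the wrong altitude into the flight computer",
     ErrorType.PROCEDURAL),
    ("copilot's warning too indirect to register with the captain",
     ErrorType.COMMUNICATION),
    ("unable to program the flight computer",
     ErrorType.PROFICIENCY),
    ("flew through adverse weather to keep the schedule",
     ErrorType.DECISION),
]
for description, kind in examples:
    print(f"{kind.name:13s} {description}")
```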
Let me show you the frequencies, and I think this
has been kind of a shock to airline management. The largest
source of error that we observed was violations. Fifty-four
percent of all the errors we saw were violations. The next
highest percentage, about 29, were procedural errors. And
the lowest percentages were communications, proficiency, and
decision errors. But the second line I think is very
important. This is the percentage of errors that we defined
as consequential, done in collaboration with the airline.
Consequential errors we defined as either errors
that lead to another error, which I didn't put on that
definition, or that place the aircraft in what we call an
undesired state, you're in the wrong place at the wrong
speed, going the wrong direction--clearly, situations that
increase risk.
Now, what's interesting, of course, is the lack of
parallelism. Notice that violations were the most frequent
kind of error, but only 2 percent of the violations led to
undesired states or were defined as consequential, meaning
that crews get away with a lot of violations without
negative consequences, sort of like His Airworthiness Bob
Francis picking me up at the airport. Bob drives a new
sports car, not always necessarily at the speed limit. But
he seems to get away with it, and, of course, no human being
could resist that temptation.
Not surprisingly, the type of error that gets you
in the most trouble is proficiency, not knowing how to do
it, and also decision errors. So it's very interesting.
But lest you say, well, violations are kind of a non-issue,
maybe they've got too many procedures and too many silly
regulations, which you hear a lot from pilots, let me show
you some other data.
We did a kind of interesting analysis. We took
the database and split it into the crews that committed at
least one violation and the crews that didn't, and
contrasted the two groups.
what we found is kind of sobering. The crews that had a
violation, even though the violation itself was
non-consequential, were 1.5 times more likely to commit
another kind of error, and the other errors they committed
were 1.8 times more likely to be consequential. So the fact
of the violation is a very good precursor of a crew that's
at risk or a flight that's at risk, even though the
violations themselves are rather trivial.
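The split-sample analysis just described is essentially a
relative-risk calculation. Here is a minimal Python sketch;
the observation counts are invented and were chosen only so
the ratios come out at the reported 1.5x and 1.8x figures.

```python
def relative_risk(events_a, n_a, events_b, n_b):
    """Ratio of the event rate in group A to the event rate in group B."""
    return (events_a / n_a) / (events_b / n_b)

# Hypothetical counts (not the study's actual data):
violation_crews = {"crews": 400, "other_errors": 360, "consequential": 90}
clean_crews     = {"crews": 600, "other_errors": 360, "consequential": 50}

# Other errors per crew: 0.9 vs. 0.6 -> 1.5x
rr_errors = relative_risk(
    violation_crews["other_errors"], violation_crews["crews"],
    clean_crews["other_errors"], clean_crews["crews"])

# Share of those errors that were consequential: 0.25 vs. ~0.14 -> 1.8x
rr_conseq = relative_risk(
    violation_crews["consequential"], violation_crews["other_errors"],
    clean_crews["consequential"], clean_crews["other_errors"])

print(f"other errors: {rr_errors:.1f}x; "
      f"consequential share: {rr_conseq:.1f}x")
```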
I might say something about violations. Aviation,
as we know, is heavily regulated and heavily proceduralized.
It's what many people call a tombstone profession, meaning
that every accident tends to result in more regulations. And
I think that's why you see so many violations. But my
experience looking at the medical profession is that it sits
at the other extreme: medicine is very much
under-proceduralized. There are many kinds of events and
activities in medicine that would very much benefit from
being proceduralized and are not. So somewhere in there is
probably a happy medium.
Are these error data useful? Yes, I think they
are. And I won't go through this in detail, but each of the
different types of errors suggests different solutions for
organizations. Violations suggest that the procedures may
be bad. They suggest that captains may be weak, because it
doesn't take a rocket scientist to figure out that copilots
don't violate if big daddy doesn't tolerate it. So
violations may be a symptom of weak leadership. Or they may
define an organizational culture that's simply kind of
cowboy or non-compliant.
Procedural errors may indicate poor workload
management or, again, point to poor procedures.
Communications errors may reflect just inadequate teamwork
or complacency. And proficiency errors may suggest need for
more training.
At one of our LOSAs, we saw a large bump in
proficiency errors, and we went back to the airline. This
airline was buying a lot of new airplanes, and they realized
that they were running pilots through the funnel a little
too fast. So they slowed down the qualification process,
which I think is a very good example of using data
proactively. Decision errors may suggest a need for more
formal training in expert decisionmaking or in risk
assessment.
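The diagnostic mapping just sketched--each error type
pointing to a different organizational remedy--condenses to
a small lookup table. The wording of the remedies below is
paraphrased from the talk, not quoted from it.

```python
# Paraphrased from the talk: what each error type may indicate about
# the organization, and hence where to look for a fix.
remediation_hints = {
    "VIOLATION":     "bad procedures, weak leadership, or a "
                     "non-compliant, cowboy culture",
    "PROCEDURAL":    "poor workload management or, again, poor procedures",
    "COMMUNICATION": "inadequate teamwork or complacency",
    "PROFICIENCY":   "a need for more training or a slower "
                     "qualification pipeline",
    "DECISION":      "a need for formal training in expert "
                     "decisionmaking and risk assessment",
}
for kind, hint in remediation_hints.items():
    print(f"{kind:13s} -> {hint}")
```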
So that's enough for aviation. Let me talk about
how we've seen some of these factors play out specifically
in the operating room.
We actually have survey data from both operating
rooms and intensive care units from a number of teaching
hospitals in a number of countries, kind of keeping with a
cross-cultural focus, looking more particularly at surgeons,
anesthesiologists, residents, and nurses. And one of the
things that is a little sobering from the data is that 25
percent of the medical personnel we've surveyed report that
they are not encouraged to report safety concerns. So that
suggests that there is not this atmosphere of trust that
Scott was talking about. And, secondarily, only one out of
three respondents across the sample felt that errors were
handled appropriately at their hospital. Again, I would
point to Scott's data: were errors dealt with correctively?
Clearly not, in many cases.
I think some of the things that go on are a little
shocking to me, because I have yet to see major conflict in
the cockpit. This happened at one of the Harvard teaching
hospitals, which I found a little bit scary. The image of
the anesthesiologist and surgeon duking it out on the floor
while the elderly patient is on the table is interesting.
I would just point out one other thing about it,
and that is the absolutely draconian punishment that was
levied on these guys. And I'm not talking about the $10,000
fine. I'm talking about the fact that they were sentenced
to joint psychotherapy.
[Laughter.]
DR. HELMREICH: Now, that is a punishment. But
this is, you know, Massachusetts. They're kind of wimpy up
there. What if we get to a more passionate culture? In the
Latin world, I think we see this playing out a little more
severely.
I gave a talk in Florida about three weeks ago,
and I was kind of shocked. The CEO of the medical system
there took me up to his office and said, "I want to show you
a new regulation we just put into effect. I'm really proud
of it." "What is it?" "Well, it's a new regulation on how
we deal with doctors who throw scalpels at other doctors or
nurses in the operating room." And I thought, "You got to
be kidding." He said, "No, we're not. It happens a couple
of times a month in this hospital."
So last week, I was speaking to the med schools
from UT and the head of risk management got up and said, "If
you think he's kidding," he said, "I have had a scalpel
thrown at me in a hospital here in Austin, Texas." So,
again, it's a little different environment from aviation.
You're safer in the cockpit, is my message.
[Laughter.]
DR. HELMREICH: I want to say a little bit about
the professional culture of medicine. Well, let me go back a
bit. This is not a very good slide to leave it on, but I'm
going to say a little bit about professional culture.
The professional cultures of medicine and aviation
both have very positive sides to them. Medical people,
doctors and nurses, tell you that they are proud to be in
their profession, that they want to do good; it's a very
positive image: pride, a desire to do good. But the negative
side showed up in our data in both aviation and medicine,
and it is very parallel: a kind of denial of personal
vulnerability, a sort of Superman syndrome that I thought
was limited to the astronauts but turns out to be present in
the cockpit and in the medical environment as well.
This is some data that we collected most recently
in ICUs in Texas, and I think they are very interesting,
because these are three statements, endorsed by the majority
of respondents, that are patently false:
"Even when fatigued, I perform efficiently during
critical phases." No. Fatigue is clearly a detriment to
performance. There's huge data on it.
"A true professional can leave personal problems
behind." Absolutely not.
And, finally, "My decisionmaking ability is as
good in emergencies as in routine situations." We know from
very good data that in emergencies you tend to tunnel
vision. You're unable to deal with complex situations,
which is what you're in. So that one is probably--that's
the one that has the highest endorsement, incorrectly, that
is the most obviously incorrect.
But it isn't just doctors. This is kind of an
interesting chart, because it contrasts the attitudes of
doctors and pilots--this is the global pilot sample. And you
see they're very close, with one
exception, which is fatigue, and I think that's interesting
because NASA has spent a lot of research effort on studies
of fatigue in aviation and has developed programs that many
airlines are using. So pilots have been trained on the
effects of fatigue, and I have data from one airline over a
six-year period that shows their attitude is getting better
and better, with more recognition of fatigue and the
importance of counter-measures. And, in fact, if I showed
you another slide, going back to about 1990, the blue line
of the pilots would be right up there with the doctors at
about 60 percent. So the good news is there's hope for
change.
These are some data, again, from ICUs on safety
perceptions. You can see that in this particular
organization they're deeply concerned about understaffing
and problems associated with shift changes and other things.
So that points at what I would call, and Ron would call, a
latent factor in this organization.
The next one is also kind of disturbing. This is
the perception--and there were no differences between the
perceptions of doctors and nurses--that almost half feel
that the organization would compromise patient safety for
economic or other reasons. And that's sort of played out in
the fact that about 20 percent of the respondents--again, no
difference between doctors and nurses--report that they
would not feel comfortable being treated at this hospital.
So what are the major issues in the operating room
and ICU? Well, one thing I want to point out is that the
operating room is a more complex environment than the
cockpit because you have multiple groups and specialties and
hierarchies that have to cooperate. And there are very
unclear lines of authority. There is no question about the
nature of authority in the cockpit. The captain is the
captain.
I gave a talk at NIH a few years ago, and my
audience was primarily surgeons and anesthesiologists, and I
asked this kind of silly question. I said, "Who's in charge
in the OR?" And now I understood why there are these fights
in the OR because I started a riot. I mean, the surgeons
all said, "That's the dumbest question I ever heard. I'm in
charge." The anesthesiologists all said, "I'm responsible
for the maintenance of the patient; therefore, I am in
charge."
Somebody needs to be in charge, and probably the
best resolution of that debate is not with fisticuffs or a
.357 Magnum.
The other issue is conflict. Conflict is much
broader in medicine. In observational data from one teaching
hospital, we saw serious conflict in about 10 percent of the
operations we observed; we have seen it in less than 1
percent of the 3,800 flights we have observed
systematically. And then, of course, there is the issue of
communication across disciplines and, of course, the denial
of vulnerability I talked about.
So how have we diagnosed the OR? Well, again, it's
parallel, and I'm just trying to show you that. We adapted
the questionnaires we've used in surveys with pilots to the
medical environment, and we're working on that now. And,
again, we're working on adapting the recording system--the
methodology for observations--from the cockpit to the
operating room and emergency room. And we've done our survey
data. What we do, of course, is query the organizational and
professional culture and also the acceptance of the
importance of teamwork and communication.
Finally, we ask open-ended questions about what's
the biggest problem, and what they tell us overwhelmingly,
both doctors and nurses, is communication. They basically
tell us that communication is poor across the disciplinary
border.
From our observations--well, not surprisingly--we
saw major problems in communications: not informing people
on the other side, a surgeon not telling an anesthesiologist
about using a drug before blood pressure is affected, not
discussing alternative procedures if something goes amiss.
Problems in leadership I've already talked about: not
establishing clear lines of how we will deal with situations
and contingencies.
I talked about conflict. We observed a patient
deteriorating significantly while the surgeon and
anesthesiologist yelled at each other about whether or not
to knock off the surgery and send the patient to the ICU.
Again, the issue of preparation, planning, and
vigilance, which is critical in aviation, we also saw as
problematic in many operations, not planning for
contingencies and not monitoring the situation, people
getting distracted. We saw one case where the monitor blew
a fuse or blew a circuit breaker. The anesthesiologist was
distracted. The patient was going rapidly downhill until
somebody finally noticed.
And, of course, the issue of status is, if
possible, even stronger in medicine than it is in the
cockpit. The anesthesia barrier is a huge communication
barrier, so it isn't just status between attendings and
residents, but status between different subdisciplines and a
kind of unwillingness to question those of other
disciplines--and, setting aside the disciplinary
differences, juniors not speaking up when errors are
observed, or using indirect speech that's not understood.
There was a case in New York where there was very indirect
communication from a resident who saw that a senior
neurosurgeon was about to operate on the wrong side of the
brain.
Now, it's interesting that this doctor just
resurfaced this year. This happened in 1995, and he just did
it again at another hospital in New York, which is kind of
interesting.
The parallel in aviation was an accident whose
investigation I was involved in: a copilot who would
literally rather die than speak up to the captain that the
aircraft was running out of fuel. This happened on a flight
into New York. The copilot never mentioned the fuel state,
although, when the steward came to the cockpit and said,
"How is the flight doing?" they made this symbol. So they
knew they were out of gas, but they never told the captain.
They ended up crashing in John McEnroe's back yard.
Now, these should look kind of familiar. From
what we've done so far, we have found that the same
typology, the same five types of errors, seems to cover
everything we've come across in medicine, but with the
caveat I mentioned earlier: the frequencies of the different
types vary dramatically because of the difference in the
number of procedures. We see procedural errors--a mistake in
setting up an IV--and miscommunication with other team
members.
I was shocked. I've been in three different
hospitals where they introduced new equipment without either
briefing the staff or providing any training, new anesthesia
machines, new monitors. They're just magically there. So
naturally, you get proficiency errors.
And, finally, decision errors, perhaps you could
argue the unnecessary exposure of patients to very high-risk
procedures.
So let me show you in closing the model of threat
and error management. This is not Swiss cheese. I don't
know what--maybe this is Gorgonzola. But it's a little bit
different.
This came from our studies of aviation, and what
we think the model is important for is three things: It
will help in formal analysis of adverse events; it will help
define training needs for personnel since it focuses on how
threat and error are managed; and it may help organizations
develop strategies to manage error.
So I have to give you some definitions. Threats
are factors that increase the likelihood of error, and these
can come from a variety of sources. They can be
environmental, such as poor lighting; physician-related,
such as fatigue; staff-related, such as communications
problems; or patient-related, which is one that you don't
find in aviation. You may have a difficult intubation, for
example.
Latent threats really follow Jim Reason's
definition. They're the aspects of the hospital or medical
organization that you can't easily identify but that
predispose to the commission of errors or the emergence of
threats. These could be scheduling--fatigue, for example,
being a much more serious problem due to medical
scheduling--or health policy.
So on the error side, what we look at is the error
itself and the behaviors involved in detecting and
responding to the error, and then we look at the resulting
state, which can either be inconsequential or be an adverse
patient state. What you then have to manage is the state the
patient gets into as a result of the error. That's the
sequential step. And, finally, when you manage the state,
the outcome can be either adverse or inconsequential.
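That sequential logic can be sketched as a tiny state walk.
The Python below is our reading of the model as described in
the talk, with illustrative names only; it is not an
implementation from the research group.

```python
def error_chain(error_managed: bool,
                induces_adverse_state: bool,
                state_managed: bool) -> str:
    """Walk one error through the threat-and-error model's sequence."""
    # Step 1: an error is committed; team behaviors detect and respond.
    if error_managed or not induces_adverse_state:
        return "inconsequential"   # trapped, or the error changed nothing
    # Step 2: the error has induced an adverse patient state, and that
    # state is now what the team must manage.
    if state_managed:
        return "inconsequential"   # state recovered, outcome benign
    # Step 3: an unmanaged adverse state becomes an adverse outcome.
    return "adverse outcome"

# An error caught and corrected immediately:
print(error_chain(True, False, False))    # inconsequential
# An unnoticed error whose resulting state also goes unmanaged:
print(error_chain(False, True, False))    # adverse outcome
```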
The full model shows the various threats and the
fact that the threats can show up at any point. The latent
threats include national culture. At a hospital where I've
been teaching in Texas, I didn't think of national culture
as being particularly critical, but I looked at the roster
of the faculty in this medical school: they had 61
nationalities functioning together, and almost as many in
the nursing staff. And when we probed beneath the surface,
national culture was very much at the root of many of the
problems they have. Organizational culture, et cetera.
Immediate threats are obvious: environmental,
organizational, individual, et cetera.
And then, of course, you have the threat
management strategies and counter-measures which are very
similar to the same strategies that you use to manage error.
So the model seems to work pretty well.
Let me just describe a sentinel event that I was
asked to apply the model to. At first I thought, well, this
is kind of a silly event because this is an example of such
profoundly awful behavior on the part of a doctor that it's
probably not appropriate. But I realized that maybe it
wasn't inappropriate because, despite this really shocking
behavior by an anesthesiologist, it pointed to a lot of
latent factors that were really critical.
Let me give you a brief rundown of the event. It
was elective surgery on a perfectly healthy 8-year-old boy
who had ear problems. He was anesthetized and the
endotracheal tube was inserted, but the anesthesiologist did
not check for breathing sounds. And here's an example of
the organizational culture: they had changed brands of
temperature probe the day before, and the one that they
provided the anesthesiologist was not compatible with the
monitor. He asked for another one but, because there was
pressure to perform, he didn't wait for it or connect it. He
didn't connect the internal
stethoscope. After a few minutes, he stopped filling out
the chart. He didn't enter CO2 and pulse on the chart.
And then shortly after that, the nurse observed
him sitting in the chair, head bobbing, a reasonable
presumption of sleep. However, the nurse reported that she
did not speak to the anesthesiologist because, quote, she
was afraid of a confrontation with the guy who was noted for
being difficult.
Somewhat later the surgeon noted that the airway
tube was disconnected, and he sort of spoke sharply to the
anesthesiologist, who reconnected the tube, but didn't
verify its function.
A little bit later the surgeon notified the
anesthesiologist that the patient had a breathing rate of 60
per minute, which was so rapid that he couldn't continue the
operation. The anesthesiologist did not respond to that
alert.
A little bit later the monitor showed irregular
heartbeats and a code was called. When they removed the
endotracheal tube they found it was more than half
obstructed with a mucous plug.
They put in a new ET tube and ventilated the patient,
but they also noted that the breathing circuit's tubing had
melted down, and they turned off the airway heater. The
patient died despite the efforts of the code team, with a
temperature of 108 degrees.
The anesthesiologist claimed the cause of death
was malignant hyperthermia, but the investigation did not
support that conclusion.
When we went to apply the model, we
identified a minimum of 9 sequential errors. First, a
decision error, initiating anesthesia without a temperature
monitor. There was no formal requirement, but it clearly
increased risk. There was a procedural error in the failure
to verify the ET insertion. There was a decision error in
not connecting the internal stethoscope. The nurse made a
decision error in not wakening the anesthesiologist. There
was a violation in terms of the anesthesiologist's failure
to maintain the record, but that particular violation was
non-consequential, which is interesting. There was a
procedural error in failing to continue to monitor the
patient and indeed to notice that the endotracheal tube had
disconnected. There was a procedural error in the failure
to confirm the tube placement after he reconnected it. And
there was a decision error in the failure to act on the
elevated respiratory rate. And, finally, the surgeon gets
nailed with an error also: he did not respond to the
inadequate response from the anesthesiologist. He said, "I
can't operate because of the rapid breathing." But after
saying that, with the anesthesiologist doing nothing, he
motored on and continued the operation until the code was
called.
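Tallying the nine errors against the five-way typology from
the aviation section gives a quick profile of the event.
This Python sketch takes the list from the narration above;
the type for the surgeon's final error is not named in the
talk, so DECISION there is our guess.

```python
from collections import Counter

sentinel_event_errors = [
    ("initiated anesthesia without a temperature monitor",  "DECISION"),
    ("failed to verify the endotracheal tube insertion",    "PROCEDURAL"),
    ("did not connect the internal stethoscope",            "DECISION"),
    ("nurse did not waken the anesthesiologist",            "DECISION"),
    ("failed to maintain the anesthesia record",            "VIOLATION"),   # non-consequential
    ("failed to monitor the patient / notice disconnect",   "PROCEDURAL"),
    ("did not confirm tube placement after reconnecting",   "PROCEDURAL"),
    ("did not act on the elevated respiratory rate",        "DECISION"),
    ("surgeon continued operating despite no response",     "DECISION"),    # type is our guess
]

print(Counter(kind for _, kind in sentinel_event_errors))
# Counter({'DECISION': 5, 'PROCEDURAL': 3, 'VIOLATION': 1})
```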
So if we look at the threats and errors, they were
interesting. There were some clear overt threats.
Environmentally we had a temperature probe that didn't work,
but there was a staff factor too. This anesthesiologist had
been in trouble before. In fact, he had been in such
serious trouble that--like the guys that got in the
fight--he had been sent to a shrink and diagnosed as having
a severe personality disorder. He was clearly fatigued. He
was undergoing a difficult divorce and had a couple of
difficult kids, and apparently was operating with high
levels of fatigue. And the patient had a small airway; it
was a young child.
So if we look at one of the errors, the decision
error: the nurse decided not to awaken the anesthesiologist,
which contributed to the arrest. It was clearly a decision
error, though she had her reasons. Then, if we go through
the identification of the latent factors, it gets much more
complex than a simple bad actor.
First off, the FDA had certified an airway heater
that would continue to function without a temperature probe.
In fact, what the airway heater did was assume that the
temperature of the patient was the ambient temperature of
the room, which is why it melted down. That is clearly a
latent threat that had been present for a long time. It's
since been fixed.
Organizationally we can identify a number of
latent threats. They changed the temperature probe brand
without notifying the staff that they needed a new
connector. The organization failed to act on previous
reports about the anesthesiologist's behavior and
performance. They had sent him to a shrink, but they had
not reacted on the fact that there were 43 surgeons in this
hospital who said they would not operate with this
anesthesiologist because they felt he was dangerous.
Another latent threat was the lack of any formal requirement
for patient monitoring; it simply wasn't present. There was
no policy for cross-checking other team members, which is
why the nurse felt no obligation to wake the
anesthesiologist. And, again, what aviation has done, of
course, is concentrate on team training and communication;
there was nothing like that in this organization.
If you look at the combination of organizational
and professional culture, there is clearly pressure to
perform when fatigued. If this anesthesiologist had turned
down the operation because he was tired, that might have had
very severe consequences for his future. And clearly there
is, at the organizational and professional level, a lack of
discipline in the case of a known bad actor. And at the
professional level, I would point to the clear willingness
to tolerate peer misbehavior--"Well, I won't use this guy,
but other people can"--and, of course, the denial of fatigue
effects that we've seen in our surveys.
And, finally, the nurse/physician interaction
issues. I felt very sorry for the nurse who said, "If I had
wakened him, he would have screamed at me, and I don't have
to do that. That's not part of my job description", which
is extremely reminiscent of a very famous accident in Canada
that I was part of the investigation of, an accident at
Dryden, Ontario, where several pilots who were passengers
told the flight attendant that there was ice on the wings
and that the plane shouldn't take off without de-icing. But
the flight attendants were trained not to bother the pilot,
so they didn't tell him, and the result was a tragic
accident, and I think it's a very parallel kind of issue.
So these latent threats--again, just building
on Ron Westrum--are hard to defend against because they're
not immediately visible. They're usually recognized after an
accident or an incident. But proactive measures, such as
surveys or observations, can assist in identifying latent
threats, and if you can cut them off--and I think Ron's
slide was a very good example--you can drain the swamp.
That's what we need to do.
So let me sum up by saying, "Can we change things?
Can we build a safety culture?" I think the answer is yes.
Sort of distilling our experience in aviation and our little
bit of experience in medicine, I think there are six steps
you can follow. The good news is AA has 12 steps, and we
can maybe do this in only six.
And it's very much like what you do with a
patient. You start with a history. You have to know what
the issues are in a particular organization, and you've got
to do a diagnosis, and a diagnosis means you need multiple
sources of data, and that includes incident reporting system
surveys, find out what the perceptions are of the people at
the sharp end, and an analysis of near and adverse events,
actually applying threat and error models and observing what
really happens. Then you've got to change the culture. It
requires clear standards and an evidence-based approach. I
think medicine cries out for more procedures, not as many as
aviation, but more. And I think, again, paralleling what
Scott said, acceptance of error in a non-punitive blame-free
way, but not acceptance of violations. And that's why we've
got a problem in medicine, because in the absence of
procedures, it's kind of a murky forest out there, whereas
where there are procedures, we know the difference and
distinction between violations and error. And finally,
sharing data and approaches, and I think not
formally--again, echoing Scott's talk--while there isn't a
formal mechanism to share data, there is a huge amount of
informal interchange in aviation. There are constant
meetings where people share their approaches and their
experiences, and I think the same thing is quite achievable
in medicine.
I think training will work. There are a number of
specific behaviors that act very effectively as
counter-measures to threat and to error, and I think the
training programs that were originally called Cockpit
Resource Management, and then Crew Resource Management, and which most airlines have renamed Error Management or Threat and Error Management, have much to offer.
And finally, feedback and reinforcement. People
need feedback on their performance, especially in the
interpersonal, non-technical areas, including how they
manage error. I'm always shocked that there is so little
feedback. In the teaching hospital where I taught for a
couple of years, about the only thing I never saw in the OR
was teaching, and I thought that was kind of tragic.
And finally, the sixth point is, none of this is a
one-shot affair; it's an ongoing thing. The organization
has to continue to collect data through programs like ASAP
at American, through observations, through all kinds of
things, and they have to continue to train people. Training
is an ongoing thing in the interpersonal as well as the
technical end.
Just to give you one example, this is actually
Kaiser Permanente's approach. They had a patient safety
forum a couple of weeks ago, enthusiastic participation of
people from all of their regions, building a national
steering committee that encourages local initiatives to
attack error and threat in creative ways, to recognize
customer concerns and cooperation. General Motors is one of
their big customers. General Motors looked at the data, and
they concluded that in all of General Motors, they lost one
employee in 1999 due to a workplace accident. They figured out that they lose about 1.3 patients a day to medical error. So
their message to the health care industry is, "We're willing
to pay more for health care if you clean up your act." And
I think that's a very important message. So they decided
that they need better data collection and better measurement
of outcomes, and they recognize the need to address both
their organizational culture, but also, very specifically,
the professional culture of their personnel, and that they
will do this through training. So I found that very
encouraging, a non-defensive reaction to IOM.
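[A rough check of that comparison, taking the figures as the speaker states them:
\( 1.3 \text{ patients/day} \times 365 \text{ days} \approx 475 \) patient deaths per year to medical error, versus one employee lost to a workplace accident in 1999.]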
So what are the training issues? People need to
understand the human limitations as sources of error. I
assumed that even though pilots didn't know much about human limitations and error, doctors would. I was wrong.
They need to understand the nature of error and how it's
managed. There's a lot of good data on how experts make
decisions, and I think this has great relevance for medical
training. While it's not a big issue in aviation, I decided
that conflict resolution probably should be a very primary
topic of training, at least for operating room personnel.
And that includes--it also includes specific behaviors and
procedures as counter-measures. And finally, analysis of
positive events. Instead of doing everything based on the
negative, more effective training involves how do really
good teams manage events? What are examples of brilliant and effective teamwork? There are a number in
aviation. We need to use those in training people to the
positive. And finally, mechanisms to reinforce people for
good threat recognition and error management.
So I'll sum up by saying there really aren't any
magic bullets. Ron said the same thing, and I couldn't
agree more. Since effective teamwork, which is where a lot
of the errors happen, is determined by a lot of factors,
there's no single action that's going to fix an
organization. One of the early mistakes, and a major
mistake in aviation, was assuming that this training and
teamwork would fix the problems of human error. It didn't.
It can't, because error happens in the context of an organizational culture, which has to be healthy and supportive. But the final news, and the good news, is that you can change organizational and professional cultures if the leadership and the top management are there and they want it to change.
So I think there's much to learn, and I am
optimistic. Although the reactions to the IOM report have
been mixed, I think on the whole the genie is out of the
bottle and it's not going to go back in, and I think the
long-term effect is going to be positive. Thanks very much.
MR. ALLEN: Thank you. In the interest of time,
we're going to move on to the next speaker, and then we'll
get the questions for both speakers after Dr. Small
presents.
DR. SMALL: It's an honor to address the committee
today. I appreciate your work, and I appreciate the
opportunity to address you today.
I found myself in the enviable position about 2
hours ago of having a lot of extra time, and now I don't, so
I've actually just been busy putting away all my Swiss cheese slides, so I won't be showing any of those, although I am a card-carrying Swiss cheese slide person and I have
some at home.
Just a little bit about myself. I have extensive
experience in internal medicine, emergency medicine and
anesthesiology. Currently I'm a practicing anesthesiologist
at Massachusetts General Hospital in transition to the
University of Chicago.
It's perhaps relevant that my father was a family
physician, and I accompanied him on house calls. This was
back in the late '50s, early '60s. And my memories of my
father during that time when he was practicing, are that he
saw his role as a family physician as protecting patients
from the system. And this is something that I have
emblazoned in my memory as a small child, of my father
describing, after the untoward death of one of his patients,
an old lady who lived across the street, that he really did
his best to protect her once she got into the hospital, but
once she got sucked into the system, all was lost. And
these were recurrent stories through my childhood.
And so for me, becoming an internist and a
physician was kind of like--it was like a hero role, if you
will, like a seeing-eye dog, where my job was--I was the
lifeguard. I was there to protect these people. And I
really visualized myself as doing that over the years. And
in my transition into system safety about 8, 9 years ago, which
has been gradual but accelerating lately, I've had a sense
of loss actually, over giving up that role. And I think
that one of the barriers that we will see in the adoption of
a lot of these new systems methods to improve patients'
safety, is changing the way that people see themselves in
the system in the long-standing role that physicians and
other providers actually see themselves operating in. For
me it's been important to try to help people in the
aggregate, as opposed to just one at a time.
I wish I hadn't put the word "error" in
that slide. I thought about it, but its currency is so
powerful, I thought I would just put it in there. As I
speak, I would like you to consider substituting for the
word "error", the phrase "failure at expertise." I was one
of two physicians at Mass General that interviewed
physicians during the Harvard Adverse Drug Event Study, and
I've spent many, many hours in systems analysis meetings,
analyzing hundreds of adverse events. And if you do that,
and you look at the literature on inter-rater reliability
for what is an adverse event and what is preventable and
what is not, you will quickly get lost in that experience
and in that literature. It's very difficult to decide, if
you really spend the time, on what is an error, what is
failure of expertise, what is preventable and what is not.
I don't want to go there during this talk, but I would like
you to consider the context, that while we should not lose
the currency that the word "error" holds, I'm really talking
more about failures of expertise here, and the larger issue
of learning.
So I have three overall objectives today. I'd
like to share with you some perspectives from
anesthesiology. Steve Nightingale has asked me to start
there. Certainly there's a lot that can be learned from
what anesthesiology has done in the last 10, 15 years, but I
don't think that the full story has been told. I'll address
briefly the current opportunity which the patient safety
movement offers us, and I just have a few closing remarks
about blood safety. I'm not an expert in blood safety. I
don't hold myself out to be such, but in the last 10
years--I've been in practice 20 years, and in the last 10
years, I can tell you that I have--well, I've been covered
in blood more times than I would like to say. I've
transfused many, many, many, many units of blood.
Anesthesiologists transfuse, I believe, half the blood
that's ordered in the country. And so I have an intimate
familiarity with blood administration.
Briefly, the IOM report singled out
anesthesiology as a profession that has done a lot to
improve patient safety, and this improvement's been
measurable. The malpractice crisis provided the clear focus
for this to happen. Premiums for anesthesiologists were escalating to $30,000 to $35,000 annually for each physician,
and so they had to do something. And of course there was a
public outcry as well from some of the celebrated cases in
the early 1980s and mid 1980s. This was managed through
having a unified leadership in the person of Ellison C. Pierce, who took it upon himself to personally lead this
change. It's a very important ingredient. And then
multiple sustained approaches were used to win measurable
safety gains. When I say standardization, I mean if I
walked into an operating room here in Washington, D.C.
today, I could probably very quickly use one of their
anesthesia machines without a manual, without asking anybody
any questions. They probably use one of three or four
machines that I've used before.
Reduction of variability in procedures, there are
now standards that have been adopted throughout the country
on monitoring and what has to be done for each individual
patient. We also have new technology, and we also have new
drugs, that even in the short time I've been practicing, the
18 or 19 years I've been practicing, in the last 3, 4 years
we've seen some astonishing new drugs that have, in many
cases, removed the need for experts to give anesthesia,
because these drugs are so safe and so error-forgiving.
In addition, in the mid 1980s, the closed claims
project was begun, a systematic analysis of malpractice
experience. I think there is now 3 or 4,000 closed
malpractice cases in the data bank. half the country's
insurers participate. And from this retrospective,
admittedly limited, type of analysis, there's been 15 to 20
papers published and widely disseminated for
anesthesiologists on where our liabilities lie and how we can
target those.
Now, that said, I'm reluctant to generalize the
anesthesia experience to the rest of medicine. I can say
that as someone who has managed and run busy large emergency
rooms, practiced in rural settings, academic settings, made
a lot of house calls, run intensive care units. If you
could imagine, I can--if I'm doing your gall bladder
operation, I've got three board-certified people in the room
with me all the time. There's myself, a surgeon, and
possibly another surgeon. There are at least two other nurses, one scrubbed and one not scrubbed. So that's 5 or
6 people, and we're doing a rather simple operation that we
do every day. We might do 5 of them in the morning. If the
phone rings, I can't answer it. Do I sometimes? Sure, if
the phone's close enough and I'm on automatic pilot and I
get an important page, I might do that, but technically, I
don't answer the phone, I don't talk on the phone, I don't
leave the room to go to the bathroom. If I do leave the
room to go to the bathroom, I have to get another person in
who's equivalent to me to replace me. That's one of the
reasons anesthesia is so safe. I also can't work after 24
hours.
When I was doing emergency medicine, I might have
30 patients I was personally responsible for simultaneously.
I might have to leave a resuscitation to answer the
telephone, to talk to somebody who might have critical
information about that patient that's communicated to me,
perhaps another physician. I also might have two
simultaneous resuscitations in progress while I had 15
children with runny noses in the waiting room, one of whom
might have an incipient meningitis. I've been there. I've
done that. I might have 100 patients in 12 hours that I would see, 15 to 20 of whom I would admit to the hospital and write orders for.
Now, I'm not trying to downplay the difficulty
of being an anesthesiologist. I think that some of my most
stressful moments have been one-on-one with a patient who
I've--I had one of these last night--you shake someone's hand, you put them to sleep, and they have a cardiac arrest on the table, and it's you and them for the next 4 hours, bleeding to death. So I'm not trying to engage in a sort of one-upmanship, but I am sort of saying that if you want to really think carefully about the generalizability of what
we've done in anesthesia, you have to think about the
constraints of what other specialties have to deal with.
Surgeons have patients waiting in the office. They have
consults in the emergency room, and they have to deal with
that information while they're operating. They get voice
messages into the operating room, and they have to carry on
three or four conversations simultaneously.
I did not see anything in the IOM report about
the change in the anesthesia work force in the 1980s. And
I'm not sure whether this is a PC subject or not; I haven't seen it
written up anywhere except in demographics of training
programs, but if you wanted to go into anesthesiology in
1975, you had a much easier time of it than you did in 1990.
It was extremely competitive to get into anesthesiology in
the late 1980s, and the work force changed completely. That
was due to a number of factors: the advent of Swan-Ganz
catheterization, the advent of critical care as a specialty.
Exciting new procedures and new drugs and new knowledge
attracted a lot of people into anesthesia. I would also
say, having gone through it, that the advent of DRGs and
other issues, and management constraints on physician
practices, pushed a lot of people out of internal medicine
and other fields into anesthesia in the '80s. And so there
was a tremendous--I believe that there was an incredible
increase, a logarithmic increase in the motivation and
quality of people that went into anesthesiology during that
time.
There has never been a single study that has shown
that pulse oximetry or end-tidal capnography has reduced
morbidity and mortality; in fact, the opposite. The largest
study was done in Denmark of 22,000 consecutive patients,
and it showed the opposite: between anesthesiologists who were doing operations without the new technology and anesthesiologists who were doing major operations with the new technology, there was absolutely no demonstrable difference in any measurable parameter, except there may have been slightly more myocardial ischemia in the group whose anesthesiologists did not have access to the
technology. In other words, in some patients they were able
to look at a number, and if the oxygen saturation dropped
below a very high value, they would be instantly alerted.
The other group did not know. They had to use old signs and
symptoms: the patient looked pink, the blood looked red,
other types of gross physical signs and symptoms.
The basic message from that is that it's very hard
to show the impact of new technology sometimes, and in my
own estimation, I think that the new technology has actually
in some ways reduced patient safety--and I'll just put this
out there as playing devil's advocate--because I believe
that physicians have become so obsessed with some of these
new monitors, that as a teaching physician, having watched
physicians move through the system, it's obvious to me that
they have lost the physical diagnosis and cognitive skills
that the older physicians of another generation had.
Lastly, I would just address this issue about
declining mortality in anesthesia. I'm convinced that there
has been a decline in the mortality in anesthesia, although
you could have a very vigorous discussion about that. It's
clear that we're doing sicker patients. We did not redo
heart operations on 85-year-olds 10, 15, 20 years ago; we do them routinely today. So we've introduced new modes
of complications. But if you look at the kinds of
conversations I have in the hallway when I go to
meetings--I'm identified as a safety nut, and people come
and tell me about their cases, I get consultations, I do
site visits--I personally do not believe that the controlled
studies that have been published, adequately reflect the
situation that we see in anesthesia. We've had great
advances, there's no question about that, but we still have
a tremendous amount of work to do.
As an investigator on the Harvard Adverse Drug
Event Study, I can tell you that we really did focus on
adverse drug events. We did not look at failures of
teamwork and decision making unless they were related to an
adverse drug event. I was not part of the malpractice study
published in the early '90s on the 1984 New York chart
review that led to the series in the New England Journal,
but the Adverse Drug Event Study was an outgrowth of that
study.
I personally believe that our argument should not
be about the 98,000 versus 44,000 deaths, because I
personally believe it's much, much, much higher than 98,000.
I believe that the response to the IOM report is what we
should be focused on. I was pleased to see the Federal
Inter-Agency Task Force response, the GAO report about the
FDA and their reporting systems. And I think, as Bob said
quite articulately, the genie is out of the bottle. I think
we will never really know the denominator, but in the
absence of the kinds of reporting systems we're discussing
today, and the fact that all reporting is essentially
voluntary, we do not have a grasp on the number, and it is
much higher than 98,000, and we can discuss that later.
Before I move on to some of the specific
initiatives that I've been engaged in in the last 5 to 10
years, I would ask people to reflect for just a minute on
the difference between safety and quality. The Institute of Medicine report's definition of quality has been a good thing. People are using it; it provides a common
language. For those of you who sign on to the patient
safety lists that the National Patient Safety Foundation
manages, there was a very vigorous discussion a few weeks
ago about what is safety and what is quality, and a few
folks came down on the side of, "Let's just use the IOM
report's definition; this seems to be the right thing."
But in the rush to put safety under the quality
umbrella, I think we should stop and just consider a few
things. In my mind a focus on safety is like the separation
of church and state or the executive and the judicial
powers. Ultimately, safety analysis has nothing to do with
cost at all. Quality has a lot to do with cost. Quality is
defined in terms of value and cost. Safety tells you what
you can get, what is achievable, and then you have to make a
decision as to whether you can afford it or whether you want
to do it, and that's an ethical decision.
I also believe that safety has an entirely
different lens or perspective than quality. The safety lens
is an ecological lens. It's an ecological model. If I'm
going to analyze a situation from the safety perspective,
I'm going to think about technology issues, automation as a
member of the team. I'm going to think about this person's
interpersonal relations at home as well as at work. I'm
going to think about the design of the operating room, the
organizational culture. Safety has introduced a whole new
language and a whole new vocabulary of terms and ideas and
methods and tools that quality has never considered and will
never consider, and I don't think that it can easily adopt
them. I think that safety and quality can interdigitate,
but I would be cautious about assuming that safety is one more
notch on the belt of quality.
And last, I think that safety is a code word for
respect. How much are we willing to respect the value of an
individual life? That's something to keep in mind.
I'll discuss a few projects that I've been engaged
in, just to give you some idea, outside of the operating
room, what we've been trying to do. And I'll conclude at
the end of my remarks by tying it in to blood safety.
I designed a Trojan horse study about 6, 7 years
ago, as I began to get involved in simulation training, and
by simulation training I mean high-fidelity simulation
training, the kind of thing in which Bob Helmreich has been a leader in aviation. I became involved with the first commercial simulation tool that came off the assembly line, which we used at Harvard in '94, and I quickly became
interested in studying the impact of this tool, because we
were spending a lot of time and money on it. And so I
designed a Trojan horse study.
And what I mean by that is people were very
interested in the new technology, but people were not
interested in talking about adverse events and incident
reporting. So by framing the study as a study of the impact
of simulation training on real-world behaviors, I was able
to capture lots of things that were happening, as a leitmotif of studying the impact of simulation training. So if
you had been through the simulator experience, had been
videotaped and had been intensively debriefed by your peers
and instructors, 6 months later we'd check back in with you
and see how you're doing.
And maybe you had a bad experience a week before
or the night before, the day after your simulation
experience. "Tell us what happened about it, in depth."
And we would debrief that person intensively along the lines
of a simulation debriefing methodology, which is a
non-confrontational, confidential, learning methodology.
So we began accumulating data on events that we
did not know if they had been reported to quality assurance,
to the hospital. We didn't ask those questions and we
didn't go there. And so this was the nature of the critical
incident simulation impact follow-up study.
I have 4 or 5 slides that I'll run through quickly
just to give you a flavor of what I mean by simulation.
1969, the first simulated patient mannequin at UCLA. Two
Hughes Aircraft engineers developed this. It was way ahead
of its time, and it did not lead to anything substantive.
20, 25 years later we now have a
highly-sophisticated computerized mannequin that blinks,
talks to you; its pupils constrict; it exhales carbon
dioxide; it can project vital signs to all routine monitors;
you can put catheters in it. Bob has been working in
Switzerland with another device that you can actually
operate on. That device has not been widely replicated for
lots of reasons. This device has been widely replicated and
commercialized. There are over 150 out there worldwide,
although I would suggest probably only 20 or 30 are being
used actively. The device costs about $200,000. In order
to put it in a room like that and mock it up like an
operating room, and build a control room, costs are widely
variable, depending on your available space in your
institution and what your goals are, but it could cost you
anywhere from $50,000 to $300,000 to create the environment. So
for half a million dollars, you can get a full-fledged
high-fidelity operating room environment in which people can
observe the action, they can participate in the action. The
person in the dark looking at the screen can control the
vital signs, and either automatically or manually direct the
action. People can move in and out of the action with
hidden headsets, et cetera.
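[A rough tally of the figures just cited, taking the upper end of the build-out range:
\( \$200{,}000 \text{ (mannequin)} + \$300{,}000 \text{ (build-out)} = \$500{,}000 \), the "half a million dollars" for the full environment.]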
This being the usual way that we debrief in our
culture, and people bring this to the simulator, I should
emphasize that the most important or one of the most
important enabling experiences of the simulation training is
this kind of debriefing that occurs after, to unpack the
action. This is actually a high-level debriefing I
organized about 3 years ago, 4 years ago, with members of
the FAA and the military, and the other people in the room
are the National Anesthesia Peer Review Committee and the
president of the Anesthesiology Society. They actually went
through the simulator and got to debrief themselves. It was
an astonishing experience for all of us.
The positive news is that it happened. The other
news is that the task force that resulted from this disbanded after a year. I was really hoping that that event would lead to the kinds of things that Bob and Scott and others have been doing in aviation, and yet we're now adrift on a coral reef of all sorts of issues, trying to accelerate this change. We know how to do this stuff. We know it works. People are crazy about it. The leadership's on board, but, as I'll get to later in answering Steve's charge to me today, what are the barriers to getting this out there?
One of the things we do in the simulator is that
we experientially get people to understand that everybody
has a different mental model of reality, so that if the
patient can be substituted for the airplane or take whatever
metaphor you want, everybody has a different idea as to what
the problem is, because they bring different skills, they
have different knowledge, and they have different goals and
constraints. And when you have 10 or 15 of those triangles
floating around, everybody has a slightly different idea of
the problem. In the simulator we get to unpack those ideas
and we try to get people on the same page, and they see how
difficult that is in a real situation.
Without a lot of intellectualizing, we also try to
get people to understand how complex their work really is
and how tiny things could have enormous impact and set a cascade of events flowing. Most physicians and
most clinicians really understand the coagulation cascade.
They get that drilled into them in medical school, and they
memorize it, and they have to use those numbers when they're
on rounds. And they begin to--and we use words like that.
We say, you know, "Just think of the event as a coagulation
cascade, that once things get going, you may be out of
control. You don't really understand. There are
interactions happening that you're not aware of. Things
speed up and it's out of your hands. And so little tiny
things that you do to manufacture safety in your environment
could have an absolutely critical importance." And they see
that in the simulator when they forget to tell someone one
thing, or if they turn their back and something happens at
that moment that they missed, they can see on the videotape
that they missed that piece of information forever, and they
were the only one that could have picked it up, because they
sent two people out of the room to do something else that
was inconsequential. And they see the tiny little seed, how
critical it was.
So in summation, the simulator impact study had
ambitious goals. We were trying to study the impact of
simulation training in a pilot study. We were trying to
expand the use of debriefing, not only in simulation, but
also merging it with incident reporting.
And I don't have time to go into the last bullet,
but this may have been the most interesting and the most
powerful: I was trying to achieve a culture change
at my hospital. In order to get this study done, I had to
go through the IRB for almost 2 years to get this study
done, the Institutional Review Board, because what I was
proposing to do was to ask people in depth on a tape
recorder, after a complex consent process, about things that
they probably hadn't told anybody else, and that study was
going to be approved by the IRB. And if I publish this
data, then how would the regulators deal with it?
So we had to go through a wrenching institutional
process that went fairly high in the organization to
negotiate the study, because it was unethical not to do the
study since we knew these things were happening. Two years
before, with Dave Cullen and Lucian Leape and others, I
published a study with Dave on the actual numbers of adverse
drug events at Mass General that were not reported during
the Adverse Drug Event Study through quality assurance. You
can find this in the JCAHO journal in 1995 or '96, I believe.
There were 54 serious adverse drug events in 6 months in 10
percent of the hospital beds. That's a thousand a year. Of
those serious adverse drug events, 94 percent were
unreported to anyone. And this is in the face of pharmacy
hot-lines. It's in the face of well-known institutional
incident reporting policies.
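[The "thousand a year" follows from a simple extrapolation, assuming the observed rate holds over a full year and across all beds:
\( 54 \text{ events} \times \tfrac{12 \text{ months}}{6 \text{ months}} \times \tfrac{100\%}{10\%} = 1{,}080 \approx 1{,}000 \) serious adverse drug events per year.]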
So given that data, we were now faced with the
prospect of expanding that study and looking in depth at not
just adverse drug events, but anything that happened. So we
were faced with a conundrum. How does the organization deal
with events that it doesn't know about but that are being
discussed, and what are the implications for that for all of
the stakeholders?
So the model that we came up with was a very--I
actually tried to model it somewhat on what Scott was doing
and at ASRS, where I tried to envision who were the
squeakiest clean people at the hospital? Who's the NASA of
the hospital? Well, it's the IRB. The Institutional Review
Board holds the ethics baton for the whole hospital. They
have public members on it. They have a report to the Board
of Trustees. So if you could imagine the Institutional
Review Board and give them a new function, that they were
the third party that was kind of internally dealing with
this research, that we might get double protection. We
could get double protection under the IRB federal
confidentiality laws for ongoing research, and we could also
get quality assurance protection. But then we had to put
two different hats on the QA people. They had to face the
IRB and the research team on one side, and then face the
corporate compliance and the regulators on the other,
because once the quality assurance people become aware of
this data, they then are obligated to pass it up to the
Board of Trustees, who are then obligated by law in
Massachusetts to pass it up to the Department of Public
Health. So it becomes a continuous chain, that once you
start pulling on it, it leads you into the Board of
Registration in Medicine.
We did the study. We published some of the
results. And I'm not sure if it would have been possible to
continue. I will be trying a different model in the
institution that I will be moving to, but I believe that
this whole study was an attempt to try to integrate a lot
of these different improvements, because I believe
individually it would be very hard for them to have an
impact alone. And I think one of the biggest impacts it had
was getting the institution to talk about things it hadn't
talked about before.
I met Bob in 1995 in Denmark. We were lecturing
in the same venue, and Bob said, "Steve, you guys aren't
doing team training?" And I said, "Yeah, I am. We're in
the OR, we're a team." He said, "No, you're not." And it
took him a while to get it through to me, but we were using
anesthesia people in the simulated OR, but we weren't using
real surgeons and real nurses because we didn't have a
high-fidelity surgical simulator for them to work with. And
I came home from Denmark and tried to infect my team with
the idea that we weren't really doing team training and
that's where the money was. And I hit a wall, so I went
to the emergency medicine folks, because it was clear that
in emergency medicine there's no need for new technology to
really electrify them with the simulation concept.
Emergency medicine is a socially driven model of care. You
don't need laparoscopic simulators and surgical simulators
and holographic simulators and animal models to operate on.
And so, with Bob, in a collaboration with the MedTeams Project,
which is a congressionally funded multi-institutional
project that has been translating lessons learned from
rotary-wing aviation in the Army to emergency medicine, we
developed the first model for a team simulator in emergency
medicine, looking at the team as the unit of analysis.
This is a picture of the emergency medicine
simulator. We chose to have three simultaneous patients,
because that's the way the boards in emergency medicine are
structured, and we wanted this to move into certification
quickly. Quickly would probably be slow in terms of years,
but we wanted to develop the model so that it would line up.
The patient on the left and the patient on the
right, the two far gurneys, are both computerized
mannequins. There's a real patient in the middle,
a standardized patient, a technique which has been used for many years in
medical schools, and will now be mandated for all
certifications for all physicians in the United States by
2002. Standardized patients are real people, that is, live actors who present their histories and physicals,
medical students will have to examine and talk to them in
order to pass their boards and get their MD degree by the
year 2002. And that is very low-tech simulation. So we're
mixing standardized patients--this is in the operating room
you saw before. We pulled out the lights. We pulled out
the anesthesia machine. We put in the emergency room
equipment and just set it up that way. And this is a true team. There is no simulator crew playing stooges
here in setting people up. This is a team of emergency
medicine nurses, physicians and techs that are taking care
of those three patients in an evolving one-hour scenario
that they will then debrief.
This is a mock code. The patient is the same
computerized mannequin, but it's in a hospital, and the code
bell went off, and people ran, and all the computers are in
the bathroom, and they don't know that this is happening.
It was announced a week before, but most of those people in
that room do not know it's a mannequin. The ones in the
periphery often come out wondering how the patient did. And
you can do a very interesting high-tech reenactment and have
videotapes as well. This is from Dan Raemer's [ph] work at
the Brigham.
I'm going to segue into another aspect of our
work. Perhaps we're biting off a bit much, but it's been my
goal from the beginning to integrate these as opposed to
taking the course of biting off a little tiny piece and
studying simulation and the impact of simulation training,
and biting off a little piece, as I've been advised to do,
which is the traditional academic route. I've taken the more
entrepreneurial route of trying to integrate all these and
find an organization that's willing to take the risk of
being a demonstration site and become a high-reliability
organization.
My fellow, Paul Barach, and I just published
in the British Medical Journal, "The Review of Non-medical
Near-miss Reporting Systems", which is very germane to our
discussions today, and I'd like to make only two or three
points in the interest of time.
One is that I got a very interesting perspective,
and that is that these systems have evolved. ASRS happened
in 1975, and if you talk to Charlie Billings [ph], they
wanted to do it 10 years before that, and there were lots of
barriers. There's a long, long, long story behind all of
this. But over the last 30 years, the ASAP system is
possible because of all the ASRS stuff and the other things
that went on before in the '70s and the '80s and the lessons
that were learned. Systems are moving from anonymous to
confidential. The ASAP system--I don't know, is Scott still
here? The ASAP system is not anonymous; the ASAP system is
confidential. That's a big sea change. They're in your
face. If you report to ASAP, they know who you are. So the
co-pilot that reports to ASAP knows that the pilot, if he's
involved in that event, is going to get talked to if he
doesn't report. So you're really exposing your flanks.
Let me take a step back for a second. I really
appreciated the comments of the committee when they were
asking Scott all those questions about ASAP. And there was
a little bit of a disconnect. It's taken me years to
understand the intricacies of these systems. And I may talk
a little bit about ASAP now as I understand it, because it's
a wonderful system, and there's a sea change difference
between ASAP and the other systems.
You can report yourself out of a job with the
American Airlines system. You put your report in the hopper,
and then it goes in, and at the bottom of that funnel, at
the end of that labyrinth, you can be out of a job. So
that's an interesting contradiction in terms. But you
should ask Scott how many people have actually reported
themselves out of jobs in the six years the system has existed. I don't believe any have. The funnel gets tighter and tighter, and the corrective measures are there, and you have
to agree to these corrective measures, but it's structured
in such a collaborative way and such an intelligent way,
that people agree these are not humiliating, and it's a
positive thing, and people cooperate. And it may not be a
corrective measure for you; it may be a corrective measure
in the system.
The most important thing that Scott said--and I'm
going to reiterate this--I think the most important remark
that Scott made--and he made a lot of excellent remarks--was
that they have put in a legal alternative to the FAA
enforcement of regulation. They have instituted a legal
alternative to enforcement of regulation, and that legal
alternative is corrective action. That's a critical point
for what you're considering doing today, and for what Steve
has charged me with: to try to help with the consideration of how you balance the needs of conflicting stakeholder groups.
If you report to ASRS, 6 months later you can
still be sweating in a hotel room in London with your bags
on a transatlantic flight waiting for that call, to see if
you're going to have to go to court or undergo some sort of
regulatory proceeding. I misunderstood that when I first read about it. NASA
Document 1114, which I can give the committee, is the most
exhaustive discussion of ASRS that's out there. It was
written in the late '70s, but it wasn't published until much later. And if you read NASA 1114, it's quite exhaustive.
But I didn't understand after reading that what I understand
today, and that is that you can be sitting in London in your
hotel bed wondering if you're going to have to go--no--that
little tiny piece of paper that you tear off from the NASA form--you send in your report, they send it back to you in 10 days. I thought that when they told you that
they were coming to talk to you about this, you just showed
your NASA little strip, and that was the end of it. No.
That's like waving a flag in front of a bull. You hold
that, and at the end of the proceeding, when all is said and
done a year or two later, is when you produce that strip,
and that is what allows you to not have a sanction against
you.
Whereas the ASAP program is a much quicker
turnaround on corrective action, and once you're in the
corral, in the ASAP corral, you don't have to go through
that other whole process. It's a very different system.
Systems have evolved toward considering near misses
and not just accidents and adverse events. This has not
been an instantaneous thing in aviation. This has taken
many, many years to evolve to the near-miss consideration,
and that model is spreading. It took me personally a long
time. I really didn't care about near misses at all till
only about a year or two ago. I was too busy to think about
them, but I've become infected with the power of that
proactive approach, and even for someone like me who's
immersed in this stuff, it took me a long time to become a
zealot. I think that the general appreciation of this--why
do it, it takes so much time, it's costly, these are trivial
incidents--in order to really understand the power of the
near-miss approach, you have to understand something else
first, and that is, for the want of a nail, the shoe was
lost; for the want of a shoe, the horse was lost; for the
want of a horse, the rider was lost; for the want of a
rider, the battle was lost. That is true.
And if we had time we could talk about complex
adaptive systems and complexity theory, but we have to find
a way to get people to visualize that and to understand
that. It's a tough thing for people to grasp. Tiny things
make a really big difference, and that's how systems--I
think that is one very powerful way of understanding and
managing systems.
And of course, I think "root cause" is a dangerous
term because it gets us into this head game of visualizing
that there is a single root cause or that there are these
discrete root causes, these two causes that caused this to
happen. The sophisticated reporting systems understand that
these are non-linear models, that very tiny things can have
huge impacts down the line, and there are multiple
interactions that need to be considered.
There's an interesting book out about the nuclear
power industry called Hostages of Each Other, and I threw
that in here because the evolution of these reporting
systems indeed is, again, a function of this dynamic between
these conflicting stakeholders that are dancing on this
see-saw, and that each group is a hostage of the other group
in this sense, and these systems have evolved with that
dynamic in place, as they each strive for the same outcomes.
Can I have the lights down just a tad perhaps,
darken the room just a tiny bit?
So this conceptualizes what--the three things that
I've discussed around the simulation, around the incident
reporting, around team training, in that what I was trying
to--a little bit too dark--what I was trying to
conceptualize is that you can go--and this is important for
blood safety, which will be my last few slides in just a few
minutes--is that you should be able to take an event from
the medical center, put it into the lock box, put it through
the filters, treat it a certain way, send that data to the
simulator. We can reproduce those cases in the simulator.
I have done that repeatedly. And when I say hundreds of
times, I'm not exaggerating, in the last 8 years. I have
taken cases through the Freedom of Information Act, blood
transfusion FDA reports, and done them in the simulator and
gotten the same results we have in the report, because it's
easy to do. You create the environment. You create the
context. You put good people in, and they make a variety of
errors which you can map on a board. If you put three
people in, you'll see three different things. If you put
100 people in, they fall into 10 categories, and you can
just predict that this is what they're going to do because
human behavior is not infinite.
So we can then take the cases in the simulator and
feed them back in the hospital and train people, or we can
also try out new things in the simulator and say this is the
counter-measure for this type of thing that we're seeing.
Why don't we do it in the simulator, in a patient-free,
risk-free environment, and we can let the errors run to their natural conclusion, whether that kills the patient or
not--the, quote, patient--and we can have this with
different filters in place because I think training needs to
be confidential. That raises a whole other box of issues.
But with this type of movement in place, I think
that we can accomplish a lot, and I'd love to talk to you
more about that this afternoon if time allows.
So what are we going to do in order to sustain the
IOM report, the interagency task force report? What have I
learned in extension of threat management throughout health
care? It's that our socio-technical system is so complex
that it warrants an equally complex response. This is an
important point. If you want to control complex systems,
your response has to be equally diverse. I think we should
build on the quality movement. There are many--I believe
there are 11 or 12 evidence-based quality research centers
around the country funded by AHRQ. There are the CERT
centers as well. There's a lot of knowledge in the quality
movement, and I think that safety needs to be evidence-based
and merged with the quality movement.
And I think we need to integrate the lessons from
high-hazard industry and not necessarily one at a time.
I can also tell you that there is no
infrastructure out there now for safety, and when I say no
infrastructure, I need to qualify that. But I could
probably count on the fingers of two hands or one hand the trained, experienced clinicians that are out there with
fellowship programs in this area. If a fellow came to me
and said where do I go to learn about this stuff, I'm a doc,
I want to get into this area, I'm a nurse manager, I want to
major in patient safety, I know how to do that if they want
to get into infection control. But if you're talking about
organizational safety, we don't have a Scott Griffith at my
institution. There is no safety officer.
So who are the mentors? Who is leading? There
are precious few. Paul Batalden at Dartmouth has created
a grant through the VA to take 40 young people from the VA
over the years and train them to be safety leaders. This is
a model I think we should look at very carefully. Who's
going to carry the baton?
We also need the tested tools. They need to be
validated and they need to be--we need reliable tools. And
there needs to be a network of centers that work with each
other and share results.
I was also moved very much by Paul O'Neill's
keynote lecture at the Press Club last July at the
Leadership Forum for Patient Safety, and he has been talking
to the Kennedy School executive session. Don Berwick has been around with Paul O'Neill. Paul is the chairman of Alcoa, until recently the CEO, and also on the board of trustees of the RAND Corporation.
Paul O'Neill's vision of safety that transformed
Alcoa is something that I've deeply considered, and I think
one of the most powerful messages that he has is that there
are very positive economic implications of aligning an
organization around safety. Safety is good for the bottom
line.
He challenged the health care industry that day to
develop demonstration models, and in view of what you're
considering recommending today, a standardized, acceptable
pilot reporting system that can be widely used to learn, I
think that what I would stress from Paul's remarks is that
we need a demonstration, we need to get off the dime. And I
think that it will rapidly, like the ASAP program, show its
benefits.
So in relationship to blood safety, in a
sense--again, I don't hold myself out to be an expert in
blood safety and transfusion medicine, but it's really a
microcosm of the whole system. Everybody knows about blood.
Everybody uses blood and we give it in teams. It's given in
the emergency room. It's given in the operating room, the
ICU. It's given on the floor. It's given to outpatients.
It's a biologic that crosses all boundaries, just like the
patients do. It involves technology. It involves people.
I also think that the blood safety community and
the transfusion community are very receptive to advanced
systems thinking, and it's far ahead of many other areas of
the health care services sector and they're ready for this
kind of thinking about simulation, confidential near-miss
reporting.
It's also curious that blood administration in my
hospital is probably one of the safest things that we do.
But it's like nuclear power. In a sense, the dread of one
patient in my hospital getting HIV unnecessarily or the
dread of having one major ABO trans--you know, is
driving--is really driving our system. Whereas, a team
failure in the operating room never gets reported, nobody
talks about it, and it doesn't drive anything, yet it
happens every minute. I mean, you can see it everywhere.
It's endemic.
So I think there's tremendous leverage here--we can use that leverage in developing models in blood
transfusion and taking a leadership role and affecting the
rest of the system with that.
The experience that I told you about where we
actually simulated an emergency room and actually created a
transfusion reaction situation, most recently I did one at the IHI National Forum, the Institute for Healthcare Improvement, in New Orleans, and we'll be doing another
one at a major simulation center in Pennsylvania in about a
month. This is a very powerful tool.
This is a picture of the old Boston Garden, and I
was struggling for a visual image as to what to take away
with here. And this is when they tore the Garden down last
year, and they just tore it down and built another one next
to it. And we can't do that with patient safety. Patient
safety is going to really be more akin to the Big Dig--and most of you have probably heard about the Big Dig, seen it in the news lately because of the cost overruns--where we're actually having to eviscerate the entire network of the city around us as we're living through it.
And as I went around--that whole thing in front of
you there is the Big Dig, I mean, that whole scene right there, and we're just living right in the middle of it. And as I went around town taking a few slides of the Big Dig, it occurred to me it was sort of like the Grand Canyon, in that
I couldn't represent it for you on a single slide. There
were so many stories, it's so enormous. It caused me to
take a step back and think about this patient safety
movement and my remarks to you today: this is an
enormous undertaking. And I think it's going to be a longer
time than we would like to think before we see some results
from this. Patience is required. I sense there's a lot of
blood in the water, people demanding mandatory reporting and
the rest of it. But I think we should be cautious here.
As you go around Boston, you see signs everywhere:
Glory Hole 66, Glory Hole 200. Glory holes are these
muck-and-water-filled, 80-foot-deep holes these guys go into
with their lunch pails every day to make things happen. And
in my last--this is my last slide. In thinking and talking
about how safety is manufactured, just think for a minute if
this committee's job was to control the food supply for
Washington, D.C., today and you had to make sure--you had to
plan out in a centralized way all the kiosks and
frankfurters and all the luncheons at every hotel and every
little home around town and all the restaurants and all the
institutions, you had to decide where the food was going to
come from, how much was required, the spoilage, the cost,
and you organized it and then you made it happen.
Well, you know that would never happen. You could
try, like the USSR did, and we know what that experiment
did. You know, centralization of food supply, it basically
creates very little variation and a shortage very quickly.
And so what I'm getting at here is that when you have an
extremely, extremely complex adaptive system that has to
respond in an event-driven way to contingencies constantly
to production pressure and to ambiguity, that in a sense the
job of the manager is to let the right system emerge from
its starting conditions, so people go down in the glory
holes and they create safety, and you allow them at the
point of contact to do the right thing.
Of course, before you decentralize, as Karl Weick has eloquently said, you have to centralize. You have
to centralize your decision premises. You have to have some
rules and constraints. But one of the powerful things that
a confidential, non-punitive reporting system does is that it's an active intervention, and it will
ripple through the system. It will create safety for you in
ways that you cannot design in. You essentially create the
starting conditions and then you will see the culture emerge
from those types of interventions.
That's the end of my remarks. I appreciate your
patience. Everyone is probably hungry.
MR. ALLEN: Thank you very much, sir.
Any questions? Yes?
DR. PILIAVIN: This is for both you and Bob. As
you said, this is an extremely complicated system, the whole
medical system, which has got all of these separate parts
and so on. And trying to think about blood as part of the
system makes me very concerned about how you could try to
improve one piece of the system when it's so intricately
intertwined with all of the other parts of the system.
You have also, of course, pointed out that of all
of the aspects of medicine, quite probably blood is the
safest at the moment, or at least one of the safest,
possibly anesthesiology, because you've worked on it so
much.
But all this committee has any power to do is to
make recommendations about this blood section. So what do
we do?
DR. SMALL: Ron? Can I ask Ron to comment? Did I
see your hand up or no, you were just stretching?
I believe that tiny things can have a huge impact,
okay? I believe that if we had a non-punitive, confidential
reporting system in my hospital around blood transfusion,
that it would rapidly infect the rest of the hospital in
every area. And I can say this without compunction.
I made a slide of Ron Westrum's work about two
years ago. Pathologic organizations, bureaucratic
organizations, and generative or learning
organizations--this is a true story. I took that overhead
and I put it up at a meeting that I organized with
high-level hospital leaders, medical students, residents in
the trenches, people who had never met with each other in
the same room before, people from medicine, anesthesia,
different disciplines. It was a patient safety journal
club.
I put up Ron's thing, and I said, okay, let's see
a show of hands, or if you're too embarrassed to do that, we
can vote on little slips of paper. What kind of culture do
we work in? Unanimous. Everyone around the room,
pathologic.
People are so hungry for this. The managers are
hungry. The hospital counsel are hungry. The medical
students are hungry. This is a way--people want--they don't
have the tools. They need the systems to be able to do this
with. They're ready and willing.
Now, if we were to say let's just start a
reporting system--I suggested this three, four years ago, that we should just do this, and the response from people was, you know, the jaw on the chest. Why don't we just
pilot a confidential reporting system? Well, that was
biting off way too much. But in an area in which there are
already systems thinkers, in an area in which pilots have
already been demonstrated to work, such as MERS-TM, in the
area in which there are already publications and their
effectiveness and their benefits, if you were--like Paul
O'Neal says, let's just demonstrate, a little demonstration,
tiny things will have a wave of effect as people come and
say, I get more requests than I can answer to educate people
wanting to try to do these things. But what they're lacking
is an approved, standardized model with the right regulatory
and stakeholder pieces in place.
DR. PILIAVIN: We were told at the very beginning
of the meeting that it was going to be impossible because we
already had everything required in the way of mandatory reporting
that, therefore, we should not recommend what we recommended
in our last meeting, which is what you're recommending.
[Laughter.]
DR. SMALL: What I am recommending is that we
think out of the box. I think it was Margaret Mead who said
this: don't think that a small group of individuals can't make a difference, because that's the only thing that ever has, if you do an ethnography of change.
Our current system is not broken, but it clearly
is not going to get us where we want to go. So we can't get
there from here unless we do something new. We just can't
do it.
I didn't talk about high-reliability
organizations. I thought Ron was going to address some of
that. But one of the models that's been driving my thinking
about this has been this issue of what do nuclear-powered
aircraft carriers do? What do air traffic control systems
do? What do organizations that, as Jim Reason says, do not
seem to have their fair share of problems do to get there?
They do four things: they have leadership that's
totally committed to safety as a core value, that drives the
organization; they have redundant systems because, as Aaron Wildavsky has said, richer is safer, and they do systems engineering continually; they create a culture of safety, a
decentralized culture of safety so that the person at the
point of contact and the CEO are operating on the same
decision premises; and the last thing that they do is that
they are constantly, every minute of every day, in a
learning mode.
A nuclear-powered aircraft carrier, as I
understand it, is a floating school. People are shifted
from job to job quickly enough that they can't get
complacent. Senior people are often forced to ask junior
people questions because they don't know enough about the
job they suddenly find themselves in. And they do this
on purpose. You can't run an aircraft carrier from a policy
and a procedure manual during a non-routine event. It's a
beehive.
And if you look at the business school literature,
you'll find the same thing emerging, this fourth bullet
about high-reliability organizations. Most organizations
don't last more than one or two generations. Even if they
become billion-dollar cash flow organizations, they're gone
in 50 years or less. Organizations and businesses that last
a long time and the organizations that will make it in the
21st century and flourish will be learning organizations.
So if we're going to get there and improve blood
safety the next notch, and if we're going to take health
care safety to the next notch, I believe we have to resolve
this dilemma about the control system on the one side and
the learning system on the other. They are mutually
incompatible.
But life is lived in a paradox. We know that. We
know that. Scott Griffith said to me this morning on the
way in here, we were talking about this issue, and he said,
"You know, one proverb is that `He who hesitates is lost.'
But the other proverb is, `Look before you leap.'"
You know, there's always a right answer for
everything. And life is lived in the crack between those
two things. That's where we take care of patients. And
it's the same with information theory and control--I asked
an evolutionary biologist--I was giving a talk in Seattle
last month at a conference on complexity, and I was trying
to get them to help me understand adverse events in health
care from a complexity perspective. And the biologists that
are studying the human genome and how the brain is put
together are struggling with the same issue of the control
systems and the learning systems in the body, and what they
have come--and I can give you research on this. What they
have come to understand is that the command and control
center is not in the brain and is not in the genes. There's
no way that you could program all the information we need to
respond to our environments in our genes. It's impossible.
It unfolds. It unfolds from the genes.
So the adaptive nature at the cellular level as an
analogy for what we're doing in organizations, manufacturing
safety every day at the point of contact, that's a whole new
concept that's very hard to grasp. It's tough enough for
people to grasp systems thinking because we're very
object-oriented. But what we have before us, again, as
Scott alluded to, is we have to raise the education level to
a much, much higher level. It's schizophrenic in a sense
because we're object-oriented.
The kinds of things we're talking about are
happening at a systems level and we're living life on an
object-oriented level. I don't know how to get the public
to understand that, but there are great strides in other
industries that have been made that I think we can learn
from in that way. Otherwise, I don't think that we can get
to where we want to go.
Sir?
DR. GUERRA: That was an excellent presentation
and one that certainly is very provocative. But do you
think that you could have gotten to where you are in terms
of the insight that you have to deal with these very complex
issues if you had not incorporated into your own career the
training, the discipline, the experience, the diversity of
being a general internist, an emergency medicine physician,
and an anesthesiologist that covers so many different--
DR. SMALL: I think the key thing was the
simulator. The key thing was the simulator. And I'm glad
you brought that up because it allows--I made some notes to
myself. It's really a great point.
I don't know if Judge Krever is still here. I'm
sorry? I wanted to ask him when that incident occurred with
the man who became quadriplegic after his catheterization.
I listen to people differently since I became a
simulator instructor, but when he said that they had to
make--they didn't have the room to put his bed in and he
told the story of what happened to those two people, it
became instantly apparent to me, after having debriefed a
lot of adverse event investigations, that that was an
adverse event for Judge Krever. I bet it happened 20 years
ago, 25 years ago. He remembers it like it was yesterday,
and if you talk to him, I bet you he could talk to you for
an hour and a half about it and tell you what clothes the
guy wore to court.
That was a critical incident for Judge Krever, and
from immersion in the simulator environment, I can tell you
hundreds of anecdotes where people come out after one
exposure and start thinking, Why do I drive that way? Why
did I fight with my wife last night about X? That was
pretty stupid.
The idea of going through your daily routine where
you live on videotape and then debriefing it right after
that in the non-punitive environment is so powerful that
it's caused me to redirect my entire career into doing that
full-time, because I think that behavior change is what
we're after, isn't it? We're after behavior change of
individuals, and we're after behavior change of
organizations.
So one of the things I've done is to try to focus
on doing high-level simulation sessions for Chairs of
departments and CEOs so that they can be led through
simulation and see themselves on tape. And I think we need
to do this for everybody, and I think it needs to be
institutionalized. It's the most powerful thing. And, of
course, the immersion and all the rest of it, but
intellectualizing it only gets you so far.
This is how we learn. This is how organizations
learn. It's how this committee learns. You go along and
you don't learn very much, and then you fall off the edge.
You pick yourself up and go, gee, what did I learn? Then
you fall off the edge again, and hopefully you fall off the
edge a lot but you don't fall very far. You don't have a
Challenger disaster to have to fall off the edge and learn.
But people learn in step functions like this.
Learning is not continuous. That's a myth. We all run
around every day going about our jobs, doing our cases, and
the residents--the residents have commented in the survey I
just published that a half an hour in the simulator is worth
a month in the operating room, because they run around the
operating room for a whole month waiting for something to
happen, and it never does, or it's trivial, or someone else
fixes it for them because they have to.
And we have done simulations--there are
simulations for business managers and executives, and we're
constructing health care simulations around complex problem
solving. There's a lot of literature on this that we're
trying to bring together in this Nesbitt model of how you
would give a package to an organization and say here, here's
an organizational behavioral change tool.
DR. GUERRA: But I guess that, you know, the
simulation models are not so generally and readily available
to accomplish what you suggest. Is there some substitute
for that with interactive computer technology and
programming that one could do? Because I think it's the
broad cross-sectional diversity of experiences in a
cumulative way that probably helped to get one a little
closer to that.
DR. SMALL: I actually would disagree. I think
that you have to go through it. And it's been
institutionalized in aviation for everybody. If I told you
that your pilot today for your flight back to New Mexico had
never been through a simulator because it was kind of
expensive and he did a computer screen-based thing instead,
and that was public disclosure, would you get on the plane?
If I had a choice, I wouldn't.
Every pilot every six months goes through a
simulator. And every year they go through a full-team
high-fidelity simulator session. There are probably 200
medical simulators out there now worldwide, over 100 in the
United States, and I think five of them are being used
effectively. And even those are being used one day a week
or two days a week.
The IOM report said that anesthesiology has
become safer because of initiatives like simulation. There
are 35,000 anesthesiologists in this country, and less than
2,000 have ever seen a simulator or been through one.
It's not true. Simulators are out there. We know
how to use them. The tools are there, and Bob Helmreich and
others are willing and ready to consult with us and help us
build effective change programs. What we need is the
infrastructure and the resources to make it happen.
I think HCFA is on the right track. I think every
single health care organization should have executive
leadership responsibility for patient safety. There should
be a designated safety officer with resources. They all
should have simulation training programs integrated with
blood banking and critical care, et cetera. That's how a
culture of safety makes it down into the glory holes.
And I think that the next big piece is the
economic model. I think safety saves money, huge amounts of
money. We need to get the economics people in here and
start developing those models. There are billions of
dollars being spent every year on completely ineffective
continuing medical education. It doesn't work. That is
well-known. And I think a lot of those funds should be
translated into what we know does work--video feedback,
small group, interactive, debriefing sessions. And that's
actually happening, but slowly.
MR. ALLEN: Thank you, Dr. Small.
We're going to break for lunch and be back at
2:20.
DR. SMALL: Thank you.
MR. ALLEN: Thank you.
[Whereupon, at 1:24 p.m., a luncheon recess was
taken to reconvene at 2:20 p.m., this same day.]
A F T E R N O O N S E S S I O N
[2:28 p.m.]
DR. CAPLAN: [Presiding.] Let me ask anyone who
wishes to speak to come forward, and we will identify them
for the tape and the record.
DR. YOMTOVIAN: Do you want to have the public
comment point?
DR. CAPLAN: Yes, yes.
DR. YOMTOVIAN: Okay. I am Rosslyn Yomtovian. I
am from University Hospitals of Cleveland, but I am here
today not in that capacity, but as a representative of the
American Society of Clinical Pathologists' Patient Safety
Initiative Transfusion Medicine Work Group, which I chair,
and it is in that capacity that I would like to make some
remarks, and I will do that for you quickly.
The American Society of Clinical Pathologists is a
non-profit medical specialty society organized for
educational and scientific purposes. Its 75,000 members
include board-certified pathologists, other physicians,
clinical scientists, and certified medical technologists and
technicians making the ASCP the largest medical laboratory
organization in the world.
The purpose of the ASCP is to improve public
health by advancing the science and practice of pathology
and laboratory medicine.
Patient safety is an important part of this
principle. To continue its leadership role in advancing
patient safety, the ASCP has developed the patient safety
initiative for pathology and the laboratory including
transfusion medicine.
Transfusion medicine laboratory professionals, as
you know, have experience in error prevention and detection
by following standard operating procedures and by conducting
audits. Mandatory reporting of errors to the FDA is already
required at some blood banking sites and may soon be
mandatory at all blood banking and transfusion medicine
sites. However, systematic error prevention improvements
are still needed in transfusion medicine.
Not all health care providers who are entrusted
with ordering and transfusing blood and blood components are
familiar with the need for rigorous patient identification
when procuring samples for and transfusing blood, the proper
storage requirements of blood and blood components, or the
type and quantity of blood needed for patients.
The evolution of change within the transfusion
medicine community is rapid. New assays and techniques are
continually being introduced to improve the safety of the
blood supply. Technological advances to make blood even
safer, such as nucleic acid testing, are used within
transfusion medicine because they improve the quality of the
blood supply and because the science is available in the
commercial sector.
Thus, while blood products are very safe and
becoming even safer, they can still be incorrectly
administered or transfused to the wrong patient.
The increase in transfusion product safety should
be matched by an increase in transfusion administration
safety.
While the FDA has been rigorous in its oversight
of blood banks, mandating the application of good
manufacturing practices and process controls within the
confines of the blood bank proper, once blood exits the
blood bank there is little systematic quality oversight.
This lack of oversight and process controls, coupled with
decreasing budgets, creates the opportunity for accidents
and errors.
In addition, mistakes may happen due to lack of
personnel training. For example, drawing blood from
patients is not always performed by qualified individuals.
Phlebotomists and other individuals drawing blood should be
trained, educated, and certified in proper phlebotomy
techniques. Equivalent training and certification should be
applicable to other health care providers, whoever they
are--nurses, physicians, et cetera--who occasionally perform
the phlebotomy process.
On behalf of the American Society of Clinical
Pathologists, we look forward to working with you and the
rest of our colleagues in the health care community in
continuing to strive for the safest transfusion medicine
system possible.
Thank you very much.
DR. CAPLAN: Questions?
I am just curious. Do you use mannikin simulation
in the training and continuing ed process?
DR. YOMTOVIAN: No, not where we are. Not in the
transfusion medicine arena, which is really all that I can
speak to.
DR. NIGHTINGALE: However, I do understand that
Dr. Battles, who is around the room if not in it--in fact,
he just walked in--is developing a simulated patient, a
simulated blood donor, that will be introduced into
clinical training at UT-Southwestern sometime later this
spring.
DR. CAPLAN: Comments?
[No response.]
DR. CAPLAN: Thank you.
DR. YOMTOVIAN: Thank you.
DR. CAPLAN: Any other individual wishing to offer
a public comment?
MR. MacPHERSON: I am Jim MacPherson from
America's Blood Centers.
We already submitted our testimony in the January
meeting and would stand by what we said at that point in
support of the MERS error reporting system.
I learned a lot this morning. If I could just ask
your indulgence for a few off-the-cuff comments.
I think, in response to Dr. Piliavin's question
about what the committee can do: the MERS system is
complicated, and I think it is expensive, but as we have
seen, it took a long time within the airline industry to get
such a system implemented, and this committee, I think, can
serve a role by just continuing to push and continuing to
see how the resources can be found and systems like this can
be put in place to support it.
I also think the committee should not give up on
some of the issues that Dr. Yomtovian mentioned:
the errors involving blood that are killing
more people than any transfusion-transmitted disease
continue to occur outside the environment of the blood bank.
Although I recognize that there is some concern
and frustration on the part of the Federal agencies with
regard to their ability to regulate the practice of
medicine, there are many more people who are killed from
getting the wrong medicine than are killed from blood
transfusions. So, if the Office of Drugs and the Office of
Biologics work together with the medical community, I think
that we can solve this problem, or at least to a large
extent. We have seen that the technology exists, and there are ways to
address the problem.
Finally, I think that Justice Krever raised the
issue this morning about no fault, and it is interesting
that his report recommended a no-fault compensation system
for transfusion injuries that echoed a similar
recommendation from the Institute of Medicine in its
report in 1994 and 1995 about addressing the
problems arising out of the AIDS crisis.
Yet, as Justice Krever said this morning, no one
is addressing the issue in Canada and no one is addressing
the issue in the United States.
In fact, Justice Krever and I talked briefly at
the break as to why that is. We think partly it is because
most of the lawsuits have disappeared and partly because
most of the consumers who were most affected at that point
have received some forms of compensation and are no longer
actively advocating for that system. But this is probably,
as Justice Krever said, the perfect time to take a look at
this system again, and I would urge the committee to put it on
its agenda at some point in the future.
As he said, if there are injuries that you know
will happen, it becomes a moral imperative to consider
how you are going to take care of those injuries, and tort
is not the way they should be addressed.
Thank you.
DR. CAPLAN: Anyone else?
Yes.
MR. VOGEL: Hi. My name is Rich Vogel. I am
president of the Hemophilia Federation of America.
First, I would like to read some testimony from
Jan Hamilton, our executive director, who could not be here
today. She tore some ligaments in her leg. Then I would
like to read some personal comments of mine.
This is Jan Hamilton's testimony. Some of the
devices that have been contrived in order to conquer errors
and accidents in transfusion medicine are fascinating. Of
course, it is sad that we must resort to such complex
solutions to overcome human error, often caused by
understaffing.
It has been interesting to watch the evolution of
administration of blood and blood products over the last
couple of decades. With the advent of dozens of blood
components and administration possibilities, there has
naturally been an increase in the number of errors and
accidents that occur. Hospitals are busier and more
crowded, frequently have less adequate staff on board, and
some of those staff persons have worked far too many hours
at a stretch.
Sometimes staff members are loaned to other
departments in a crunch and are not familiar with
transfusion routines. However, it becomes evident that we
must come up with a solution and prevent errors and
accidents in transfusion medicine as much as humanly
possible.
How do we accomplish this? Where do we draw the
line on degree of reporting? Hospitals are already buried
in a sea of paperwork. There are already so many forms to
sign, even to be admitted as a patient. In a day where
patient rights are coming more and more to the forefront, we
must establish a protocol for preserving these rights.
The subject of patient rights in the hemophilia
community is almost tantamount to an untreated open wound.
A large portion of the hemophilia community feels their
rights as a patient have been violated many times over.
Some patients were not told of their HIV status
for months and even years and in the meantime infected their
spouse. This may be seen as a drastic side effect of the
equation, but it is part of the equation.
If we are going to set a standard, draw a line in
the sand, prioritize degrees of errors and accidents, then
so be it. Obviously, if someone in the hospital gets a
glass of orange juice instead of the tomato juice they
ordered, it may be a bad error if the patient is allergic to
tomato juice, but it is another story altogether if the
patient gets an incompatible type of blood or the wrong
blood-clotting product.
An error receiving orange juice instead of tomato
juice is easy to detect, unless the patient is blind. The
wrong type of blood is not as easy to detect, and in the
case of a seriously ill patient could go unnoticed until
dangerous results have occurred.
Some of the safeguard methods that were presented to
the advisory committee in January seem to be very workable
and manageable. Clearly, someone somewhere is going to have
to decide what the parameters are, where the line will be
drawn, what will be acceptable and not acceptable, and what
must be reported to the patient. Guidelines must be set.
Someone must have the responsibility to monitor these areas
and report any errors and/or accidents to the patient and to
whatever hospital staff, person or committee is responsible
for addressing this problem.
It must not be swept under the rug. Steps need to
be taken to prevent any problems from occurring in the first
place.
Members of the board of directors of the
Hemophilia Federation of America and the hemophilia
community in general are thankful that the Advisory
Committee on Blood Safety and Availability is addressing
this issue head on, and they urge you to follow it through to a
final and acceptable safety tool for the health care
industry.
We are pleased that you are taking a proactive
stance in this matter.
Thank you.
I just have a couple of comments that I jotted
down. Like I said, my name is Rich Vogel, and I am a
44-year-old hemophiliac who is infected with HIV and the
various hepatitis viruses.
As I sat here this morning listening to testimony
about the airlines and their voluntary disclosure programs,
which are non-punitive, I thought this was very
reassuring, but what does this have to do with blood safety
and reducing errors and accidents in medicine?
Then it dawned on me. We cannot be thinking of
doing the same thing for blood safety. The hemophilia
community has a history with the self-policing policies
of the past, and they did not work. We have seen many recalls
in the past few years and are thankful for some degree of
self-policing, but a voluntary disclosure
program that is non-punitive is not a very good idea.
I understand the concept of acceptance of error,
but not violation of procedure. But there seem to be no
clear-cut procedures to differentiate between error and
negligence.
We see overwhelming reporting of
non-consequential errors as compared to reporting
of consequential errors in aviation. Like a child, human
nature's first response when something goes wrong is to say
I did not do it. To have a review without punitive damages
after working on the patient simulator is a great source of
information, and I applaud that learning technique, but when
something goes wrong with a human life, that is a different
story.
The hemophilia community has been the patient
simulator for 40 years, from whole blood to plasma to
cryoprecipitate to clotting factor to even genetic engineering.
We have seen what has happened to that community even with the
threat of punitive damages: known HIV and hepatitis A, B, and C
infection. Could you imagine if there were a non-punitive
reporting system?
In conclusion, what we need is a mechanism to
protect the blood establishments and patients, as well as
limiting liability for the blood establishments and providing
reasonable compensation to patients should that system fail.
Thank you.
DR. CAPLAN: Questions?
[No response.]
DR. CAPLAN: Thanks.
Any other public testimony?
[No response.]
DR. CAPLAN: All right. At this point, I think
what we will do is move to a discussion of where we want to
head with respect to the request from the Secretary. You
have got that letter in front of you.
Steve and I have talked, and Steve has something
that he would like to present to us to consider. I am going
to give him a preamble, though.
It seems to me, since I have to go back over on my
little mission, I just want to use the opportunity to say
three things. It does seem to me that informed consent in
the American context is not unlike what the justice told us
about in terms of the presumption that people will get and
have information and it is their decision as to whether they
consider it important and how they want to use it.
Secondly, the legal system in this country presumes
that the way to seek redress for error is through
tort, and the third comment I would make is that everything
we heard this morning undercuts one and two: that tort
stinks and that informed consent does not work. If
information does not come out, people do not know things.
So it seems to me that as we try to move into this
area, what we need to do is assure the American
people that if we are going to weaken their right to
information, they are going to get something for that,
something that will enhance their interest, and if we are going
to move to some limits or suggestion of legal reform, then we
have to make it clear that they are going to benefit from
giving some relief from the threat that the legal system poses.
My personal view is that is the way we should go.
We can do better. Some have said, "Why blood?" It seems to me
the paradigm is there and the move toward safety and the
culture is already there. People know it, believe it. Ever
since I have been on this panel, I have been impressed that
people take safety seriously. So this is an area of
medicine that could move forward in ways that would be harder
for other areas of health care, but I think this
is one that could.
So I hope that we can in fact answer the
Secretary's request to come up with something, and even some
of the requests we heard in the public testimony and from
our witnesses, to move even further ahead in terms of trying
to come up with an oversight system that handles
misadventure and error and near-miss and near-hit in a way
that really does lead to reform in training, reform in
practice, and advancing the standards of safety that are
already pretty high.
That is the preamble.
DR. NIGHTINGALE: I would thank Dr. Caplan for
that preamble and will try to complete my remarks in an
equal amount of time.
The remark I want to make, though, is based on the
April 14th memorandum that I sent out to you.
In the second paragraph of the last page, I said
that we hope that this program will result in relatively
explicit recommendations, and for that reason, I am going to
do something that I as executive secretary have not done at
these meetings before, which is to suggest where the
discussion is headed and where it might head for the rest of
the afternoon.
It seems to me that in the snowstorm of the last
meeting, the debate focussed on the information, the patient
information itself, rather than the right to that
information, and that when we focussed on the information
itself, we focussed on what information would have to be
reported mandatorily and what might be reported voluntarily.
We had difficulties with that concept for several reasons,
one of which I believe Ms. Gregory pointed out most acutely:
if it is not mandatory, as in the Baconne [ph] study case,
it may not be reported at all.
The other consequence of the framing of the
arguments in the discussion is difficulty in identifying
what information might be intrinsically protectable and what
information would not be intrinsically protectable, and that
was why I proposed in one of the memoranda that I sent to
the advisory committee a couple of hypotheticals, the
hypothetical of the A-positive patient who receives an
O-positive transfusion and the hypothetical of an A-positive
patient who almost receives an O-positive transfusion.
I put them out for many reasons, but one of which
was because of the inherent difficulty in achieving a
consensus on which of these would be potentially protected
from disclosure and what would not. It seems to me to be
very difficult to draw a line, arbitrary or otherwise,
between what information could be intrinsically protectable
and what would not.
So one of the things I would like to see in the
discussion this afternoon would be attention paid to the
ownership, if you will, or the right of individuals to that
information.
I think one of the things that many of the
discussants, Judge Krever, of course, but many others in the
last two meetings, have pointed out is that patients have
substantial rights, perhaps slightly different in Canada
than here, maybe based on different premises in Canada from
here, but rights to information that pertains to their
health.
Another point that has been emphasized in the last
meeting, and I think to the point where it was not necessary
in this one, is that the government agencies that have statutory
responsibilities to protect the public health have through
those statutes the right to information necessary to fulfill
those statutory responsibilities. So it seems to me that
one place the discussion will have to go will be to identify
under what circumstances both patients and the public at
large, through its regulators, might be willing to waive
public disclosure or even access to information about an
individual patient's health or the health of a community.
What I would suggest to the committee, as I have
suggested to individual members over the last week or so to
see if there was at least initial agreement with these
premises, would be this: if one gives up a right, assuming that
one has the right, one wants to get something in return.
I think the first thing, obviously, that somebody
would want to get in return for giving up the right to
immediate access to information would be a betterment of the
public health; one would want a system that would work.
One can never guarantee that a system would work.
One would probably want some other guarantees, and the first
thing I think that you would want would be a guarantee that
any right that was waived would be waived for a finite
period of time.
The second is that, while it was waived, there would
be a built-in evaluation of the use of the information that
was protected, if any was; and finally, while that
confidentiality protection was afforded, there would be some
sort of oversight by those who had waived those rights, both
the government and the patient community or their
representatives.
I am not representing this as where I feel the
committee may necessarily end its deliberations, but I think
that these are points of departure on which there might at
least be second-order agreement: you might not agree with
each of the principles, but you might see them as a starting
point.
One other issue, the second of the two issues that
I would like to raise, is one that has been raised in both
meetings, and that is that these things cost money.
It would be easy for the committee to recommend
that an agency or a Congress devote an arbitrary amount of
money to these things, but I would advise the committee that
in my own experience and I believe Dr. Epstein's as well,
the fact that some money is authorized for an agency's use
on October 1st by no means guarantees that that money is
available to the end regulators for use on January the 15th.
So any recommendation that money be afforded for
this purpose should come with relatively strong underlining
that, A, it should not be diverted for other purposes and,
B, other resources to that agency should not be
short-changed because of increasing money for this one.
Finally, I believe we have already had and I hope
to continue an open discussion of these issues. We do wish
to incorporate the views of all the stakeholders,
particularly those as they are represented by individuals in
this room, and that is why it is a pleasure for me to
welcome back to the advisory committee Dr. Linden, Dr.
Kaplan, Dr. Battles, Mr. Francis, and Ms. O'Callahan, and I
would mention Mr. Masiello as well.
I would give the initials of the particular branch
that Ms. O'Callahan and Mr. Masiello represent, but they
twist my tongue. They are from the Food and Drug
Administration, and all of you are welcome and we look
forward to hearing all of your comments this afternoon.
Thanks.
DR. CAPLAN: All right. Let me then open the
floor for some comment, discussion, about where we think we
ought to be with respect to trying to enhance and respond to
error reporting.
Dr. Kaplan, did you have something?
DR. KAPLAN: I have just a couple of terms for
definition.
I think all of us were impressed with Scott
Griffith's presentation about the system at American
Airlines. I think there is an importance to defining the
term "correction."
From our experience, when you put an event
reporting system in place, if you do it effectively you get
a lot of input. The worst thing to do is to keep changing
the system in response to every piece of input you get. So
"correction" has to be defined. It may be that monitoring
for a problem's recurrence, rather than making a change in
the system and tampering with it so that it is never in a
steady state, is one of the things that has to be addressed.
Hazard analysis--potential severity of harm,
probability of recurrence, and detectability; the tomato
juice/orange juice analogy is a good one--would help in that
regard. So I think being driven to do a complete
investigation and potentially to make a change in the system
may be counter-productive, because you try to eliminate a
target risk and you end up with a countervailing risk that is
worse than the target risk that you tried to correct.
The idea of corrections is very valuable, but I
think it has to be defined and not necessarily construed as
a real big system change.
The change that we are proposing or considering
today is in the context of broad changes, as you know, that
are going to occur within the hospitals. In that context,
although the perception of risk is very high for a blood
transfusion, the real numbers are very different compared to
some of the things we have heard discussed.
So the opportunity to move ahead and let
transfusion be a model is a very good moment not to miss
because, if one looks at the numbers, then the problem is
always that nothing recedes like success. If it is
perceived as a low risk, it is going to get a much lower
priority. So the timing is right now.
I just wanted to mention that event reporting can
be used in three different ways: for modeling new and
unique events, for monitoring events, and, most importantly,
for proactively engaging the staff in mindfulness and
awareness of the system--particularly if they feel they
own the system, with feedback and a non-punitive environment,
then you change the safety culture. That is: modeling,
monitoring, and mindfulness.
The three reasons you want data are for
accountability, research, and process improvement, and those
do not map exactly, certainly not at that last level of
mindfulness, unless you consider a kind of negative
incentive for being aware of the safety environment.
So I think that accountability, clearly, is a
necessary issue. Process improvement can take place without
a lot of central reporting. So I think that is another
theme: process improvement and all of this event
reporting start at the local level, and it has to be
valuable and useful at the local level for process
improvement.
Also, that usability at the local level leads to
adoption rather than just compliance, and that will
reflect directly on the quality of the information that is
recorded outside the institution.
Thank you.
DR. CAPLAN: Dr. Linden?
DR. LINDEN: Thank you.
I would like to comment on just a few issues, one
of which is to follow up on what Hal was saying.
I think it is really critical that a very positive
outcome from here could be that individual facilities,
hospitals, improve their own safety culture, have a safety
officer or whatever, adopt their own strategies, and
identify their own individual problems through their own
reporting systems, taking appropriate corrective actions and
so forth.
I think the role of outside independent oversight, be it
government or strictly independent, would
be to look at, say, rare events that individual
facilities would not be able to trend, and to
pick up common themes; but I think the individual facilities
can very effectively look at their own latent pathogens.
The other point that has been made by myself and
others is that most of these errors are occurring outside of
the blood bank. They are really medical practice issues,
the nursing staff, anesthesia staff, giving blood to the
wrong patient, and I think any sort of event reporting and
tracking system in that regard could very effectively be
part of an overall medical event reporting system; that is,
medication errors, these other types of things that are very
similar and analogous as opposed to reinventing the wheel
and saying we are going to have a separate system just for
transfusion. It makes sense that the hospital practice
issues be incorporated with other medical events.
I think the product side already has oversight to
it, and again, I do not think we need to reinvent the wheel
there, maybe enhance the systems that we already have in
place.
One other comment in that regard, I thought that
Captain Griffith's presentation this morning was
fascinating, and we can learn a lot from the aviation
system.
Unfortunately, it is a very different type of
system from what we have in transfusion medicine, and I
think one of our challenges is that the majority of the
staff who are involved in these problems, nurses,
phlebotomists, house staff, anesthesiologists, they are not
heavily invested in the transfusion system, unlike the
pilots who are extremely heavily invested and do that
basically 100 percent of their time.
You may have nurses who give one transfusion a
week.
DR. CAPLAN: We have to have everybody in the
hospital transfuse themselves once a month or something.
DR. LINDEN: There therefore needs to be a
way of getting information from such people, but in my
opinion, it needs to be focussed through basically
subject-matter experts, which would usually be the blood
bank director, perhaps with the supervisor's input as well.
I think the idea of the individual pilot
submitting a report and so forth is not going to be directly
applicable here, but we can certainly learn from that
experience.
Certainly, as was mentioned, some mechanism of
providing feedback, I think, is very important. There was
also some discussion this morning about some of the legal
issues.
In my opinion, having any such reports be exempt
from Freedom of Information laws is very important, and what
we have done in New York is we actually had specific
legislation to accomplish that. So it is not necessarily
easily achieved.
I also agree with what other speakers and Dr. Nightingale
mentioned in terms of adverse outcomes. Whether there is an
adverse outcome or not, whether the blood is actually
administered to the wrong patient or not, we have found in
our experience that the underlying problems are really the
same. So it does not make a lot of sense to
distinguish whether there was an adverse outcome or not. I
think you need to look at the entire system.
The only other comment was that the error
reporting can be integrated into an overall process that
looks at quality, both within the hospital, to examine its
own procedures--seeing if the nurses are checking the wrist
bands before administering the blood--and from a government
or accreditation oversight standpoint.
I know in our case we include the error reporting
piece as part of the overall regulatory oversight, and when
we are in facilities doing surveys, that is something we
specifically look at. While we do not take punitive action
solely because of incident reports, although we may get
other information that would cause us to take enforcement
action, we would cite facilities for not reporting. So it
is basically part of the oversight process.
I think be it government or an independent
accreditation agency, the error reporting and tracking can
be part of an overall oversight program and quality
improvement program. It does not have to be completely
separate.
Thank you.
DR. CAPLAN: I am going to take advantage of those
last remarks and maybe take a little issue with Dr. Linden,
but maybe not.
I wanted to put a couple of ideas just into play
to see if they resonated with the group.
I happen to live with someone who is a hospital
administrator, and a person who visits her pretty frequently
is the radiation safety officer who is always buzzing around
the COO's office at the hospital, reporting things, worrying
about things.
I just wonder if the committee might want to make
a recommendation--and I would preface this by saying it is
just one road into general safety concerns in the hospital
culture--that someone should be appointed with
responsibility to look out for safety at every institution
that deals with blood, to take on the education, the training,
the simulation, the missions we heard about. In other words,
we could just recommend that someone at every hospital be
designated as the blood safety officer, not because blood is
worse or in more demand of that attention, but just as a
model, again, to sort of move it forward.
It is within our purview not to lay out
the standards on that--how they do education or
training or simulation or review, or how they want to handle
it--but it is one route to take it down to the local
level, although, again, I would preface it by saying we
understand that what we are trying to do is pilot into an
area that already has a safety sense.
Another thing that occurred to me we might want to
consider is simply suggesting that relief be sought from the
legal system's tort approach. While it is probably
unlikely between now and August that the current Secretary
of HHS and the administration are going to put no
fault in place, at least we could go on record saying that reform is
overdue here, and that limiting liability and trying to come up
with a compensation scheme are things we think ought to be
pursued.
The political realities are that this
administration is probably not going to pursue them, but the
next one could. There are a lot of cookies. We can meet
for eternity waiting for this to happen, but we may still
want to say something about that and argue that it would be
helpful in terms of trying to move safety even further
along.
Another suggestion is that we simply call for a
standardized reporting system to be developed and imposed
wherever blood is handled. We would not get into the
details of what that looks like, but would call for the creation
of a group of appropriate persons with the right expertise to draw it
up. Some of those are in this room. Some are not.
If it has the features that Steve talked about of
being finite, evaluated, and having some oversight by
patient groups and providers and citizens, whatever, in
terms of accountability, that might set the stage for the
release from Freedom of Information and some other things
that would normally be expected when you are collecting that
kind of data.
So my comments are meant to get us thinking a
little bit about structural change, where we would not
have to micromanage--one of my pet bugaboos is that we not
do that--but just give general direction to the Secretary
saying, "Look, we need standardized reporting that is not
going to be punitive." We are willing to try that out, and
think you should be too, if we can keep it finite and
evaluated, and have sufficient oversight to keep everybody on
board that it is being done honestly and that the results are
worth having; that we try to get safety going at the local
level; and maybe that we try to encourage some sort of reform
on the legal side within the culture.
I say all this again prefacing it by the
acknowledgement that we have been pushing hard here. The
field has been pushing hard since the experience of the '80s
to put safety at the forefront.
We get it. That is why we think we can do it
here. It should be done here. We want to take that ball
and roll it further.
I do not like it when we talk about changing the
culture of safety. What we want to do is advance the
culture of safety further. It is not a change. Let's move
on.
MS. LIPTON: There are so many ideas that it is
hard to know what to respond to.
There are two things I came away with from listening to
all the people this morning. I would hate to see us abandon
the concept of some type of voluntary reporting system that
is coordinated with, but separate from, the regulatory
structure, because I do think we have an opportunity to
learn so much more from that system.
I think the other thing when you were talking
about a blood safety officer, I would not like us to sit
there and dictate the format, and let me just give you a
specific example why.
I know Dr. Small said he distinguishes quality and
safety, but I really do not. If you look at quality and
talk about a deviation from an expected outcome,
that is an error or accident. Quality tries to ensure that
you do not have those deviations.
A deviation that has a tremendously adverse
outcome really is a safety problem, but it is a safety
problem that is related to quality in my mind.
In our standards at the AABB, we require
transfusion services and hospitals to recognize that quality is a
management responsibility, and we do not tell you the precise
structure to put in place, but we do tell you that there has
to be someone appointed by executive management who is
responsible for ensuring quality within your
transfusion service--that there is a quality system that is
implemented and taken care of, and that those things
get reported up to management.
If we start talking about a safety officer, how
does a blood safety officer coordinate with the quality
assurance program? I think what is important is we need to
give enough infrastructure that it can coordinate with other
things that are already going on in hospitals so that we
give them what I would call the goals and outcomes we want,
not the exact structure that they have to follow to achieve
it.
I think we are just so diverse and we are down the
road in so many different ways that I would hate to have us
pull back and say now tomorrow we are all going to have a
blood safety officer, which to my mind does not really
change anything that is going on in a hospital.
DR. PENNER: I think just about all hospitals have
transfusion committees already in place, with a quality
control officer from the hospital sitting in on it. All of
the incidents are at least run through that committee, and
corrective measures are ordered by that committee. They
have some power to withdraw services from the physicians who
do not respond.
So there is a mechanism in place. Perhaps as I
have seen in my transfusion committee, we do not necessarily
get down to the level of some of the incidents that we are
talking about, the so-called near-misses. That might be an
area that could be amplified for selecting a listing of
things that might be brought up to that committee and the
information controlled and prepared and sent forward from
there.
So I think there are a lot of things that can be
done that are already in place, but just need to be expanded
upon.
DR. EPSTEIN: I think that in structuring our
thoughts about recommendations, we need to distinguish a few
different objectives.
The largest distinction, I think, is that we have
errors and accidents affecting product manufacturing that
are a very different thing than administration errors or
medication errors which largely occur outside of the blood
bank setting and have to do with medical practices.
I think it is important to recognize, just as we
have said a lot about the different environment of aviation
versus medical care, that within medical care we are also
dealing with subset environments which are not the same.
So, for example, in the product manufacturing
area, because we have had FDA regulation for many decades,
we are dealing with well-standardized systems which are
highly regulated and highly monitored. I would submit that
although there is continued and legitimate public concern
about optimizing blood product safety, that on the whole we
have heard stated many, many times, and correctly so, that
blood products are one of the safest medications. I do not
think that is an accident; it is somewhat attributable to the
control system of oversight that is in place.
I think that we should be very cautious about
advocating any kind of reform of a system which apparently
has been working.
In contrast, when you look at the practice side,
it is much less standardized. It is only very indirectly
regulated, and it is not systematically monitored.
So I would suggest that much of the attention is
really on that medical side where we are in essence trying
to close a gap related to patient safety through transfusion
practice.
So I guess my first point is that we need to think
about where is our goal to close a gap, and my answer is
that the biggest gap we are trying to close is on the
practice side.
Secondly, I think that we do have goals to improve
existing systems, and I would submit that that goal does
apply on both sides of the equation, the product
manufacturing and the product use side.
On the side of product manufacturing, which, of
course, goes all the way from the donor environment to, say,
release of a unit after a cross-match, we need to improve
the data-gathering, the data analysis, the feedback, and
also the mind-set of that process. I think that that is
where the debate over the addition of voluntary systems
starts to come into play, on the notion that we might learn
more if we had some kind of confidential voluntary
reporting.
There, though, again, it is important to consider
what is the gap; in other words, what are you going to add
to what is already reportable. I think it is important to
recognize that the dividing line is not the near-miss; that
the current regulatory requirements for reporting errors and
accidents applicable to released units do not make a
distinction whether it was a near-miss event or one that had
consequence.
In other words, we do not distinguish between
near-misses, benign events, and adverse outcome events. It
is only a question of was the unit released, was there an
error or accident. So we are dealing with two completely
different classification systems, and we get ourselves
confused if we think that the thing that we are adding is
near-miss reporting. It is only near-miss reporting in the
sense that it would be applicable to units that did not
happen to get released.
I think we should look very carefully at the
question of how much added information is there looking at
the universe of units that were not released as opposed to
errors and accidents related to units that were released.
FDA has taken the point of view that the
underlying causes of those two things are not different, and
that there is not a lot of added value in studying the
non-released unit. However, if the system is to be
extended, that is where the extension is occurring, and it
is not really occurring along the lines of near-miss except
for that fact. They are near-misses in the sense that they
were not released, but plenty of the things that were
released are also near-miss events under the definition of
near-miss.
Then, on the practice side, I think it is clear
that the improvements to existing systems are really far
more sweeping. We are talking about creating an
infrastructure. It has been pointed out to us by Dr. Small
that there is not currently a safety infrastructure in the
hospital environment. So we are talking about creating the
infrastructure. We are talking about instilling or
promoting the culture of safety, and we are also talking
about creating the tools, the standards, the oversight, the
monitoring, the feedback, the training, et cetera.
Again, I would just suggest that it would be very
important to keep the two environments distinct in our
thinking, so we do not get confused into thinking that the
things we may wish to recommend in the one domain
automatically apply to the other, because, A, they may not
apply and, B, the remedies may not work in the same way. So
that is my main message.
I have to say that the confidentiality argument
confuses me a bit because we are already operating in an
environment of mandatory reporting which is discloseable.
So what remains to give up or waive, I am just not sure,
because you have not started with something that is confidential.
You have started with something that is mandatory to report, and
it is not confidential. So that issue does confuse me for that reason.
Also, we have not clearly distinguished what is a
patient confidentiality issue from what is a practitioner
confidentiality issue, and I think we have to keep those two
domains separate in our thinking.
So my main message is that we need to make a
couple of critical distinctions as we think through
recommendations, and that our main goals, it would seem to
me, are, on the one hand, closing gaps where we think they
exist and, on the other hand, improving existing systems
which in some cases we may conclude are working just fine.
DR. CAPLAN: Jim?
DR. AuBUCHON: May I offer my comments in a visual
format?
DR. CAPLAN: Sure, as long as you supplement them
with audio.
[Laughter.]
DR. AuBUCHON: I was very impressed this morning
with what the airline industry had to share with us, and I
am concerned that the direction that the Federal Government
is going with respect to attempting to reduce errors in
transfusion medicine may not entirely be the correct one.
I understand Dr. Epstein's comments, and I think
we are both on the same page, but I think the agency does
feel constrained as to what it can be involved with.
So I would like to pose the question, working off
the airline industry, is it time for us to actually take off
and move into a different arena.
I would like to propose that we should focus
primarily on the patient. That is why we are in this game.
The three key factors in providing a system that will give
us the information to actually improve patient care are:
the internal quality assurance system in each hospital;
reporting to the FDA as required--and I understand
they do need data in order to discharge their obligations;
and reporting to some type of national database so that we
can as a profession move ahead and make transfusion safer.
To begin, we are dealing with the patient, an
incident, and some initial review. I think the definition of
the incident and the initial review should include the
entire process of transfusion, not just the product
manufacturing that was described by Dr. Epstein a minute ago.
I think everyone realizes that we are dealing with
something that starts at the vein of the patient as well as
at the vein of the donor and ends up at the vein of the
patient, and we need to make sure that that entire system is
reviewed.
Clearly, if there is a fatality, that needs to be
reported to the FDA according to the current CFR, and I do not
think any of us have any problem with that; also, if
this initial review indicates there is some patient safety
issue, the patient needs to be taken care of immediately.
We do not need a system to deal with that or to report that,
per se. But if this is not something that directly relates
to patient safety and it is not a fatality, where do we go
next?
I certainly agree with the approach that has been
suggested by Drs. Kaplan and Battles, that we need some type
of investigatory system and corrective action system in each
hospital. This is an important step: a QA investigation
that looks at everyone involved in the entire process of
transfusion.
If there is some potential for harm to the patient
in the future, then sharing that with the patient makes some
sense ethically. I am not going to get into the ethics of
it. That is really not the point of this.
The question is what does the hospital transfusion
service do with the information about this event and how can
the regulatory bodies make use of it to make sure that the
appropriate thing has been done and how can we all learn
from this one hospital's incident.
So I would propose that we consider two different
approaches to reporting. Those situations in which there
are violations, as was discussed in the airline industry,
where an established procedure was violated need to be
reported to the FDA, and clearly, if the investigation
within the hospital indicates that the incident involves
issues beyond that one institution, that should also be
reported to the FDA. If it is a problem with the blood
bank, for example, that needs to go through the FDA to the
blood bank manufacturer or to whomever is involved.
The FDA, in my opinion as a hospital transfusion
specialist, can provide the most assistance by reviewing
what we have done in the hospital for completeness and
appropriateness of the corrective action, and if we have
submitted what appears to be a complete report and our
corrective action is appropriate, then the FDA does not need
to do anything more. This would be the equivalent of the
FAA administratively closing out a report to it from the
ASAP program.
If there is any follow-up that needs to be done by
the agency, it would focus on further improvements that need
to be made if the agency is not satisfied that the corrective
action is sufficient.
I think that there should be protection from
disclosure of self-reported incidents in order to encourage
the system to generate as many of these reports as possible
for completeness' sake.
On the other hand, I think that all events should
be reported to some sort of national database of
transfusion incidents similar to the NASA adverse events
system.
Here, the identifiers could be deleted from the
report upon verification that the report was complete, and
the data would be reported in aggregate form only so
that we would all be able to learn without anyone feeling
that they were unduly at risk for some type of punitive
action or legal action in the future.
Those are my comments, and that is easily
reducible to an overhead in two sentences of a
recommendation, but that is the approach that I would take.
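To make the two-track routing Dr. AuBuchon sketches above
concrete, here is a minimal illustration in Python. It is only
a sketch of the proposal as stated; every name in it (Incident,
route_incident, the destination strings) is hypothetical rather
than part of any actual FDA, AABB, or hospital system.

    from dataclasses import dataclass

    FDA_REPORT = "report to FDA"
    NATIONAL_DB = "de-identified report to national database"

    @dataclass
    class Incident:
        fatality: bool            # current CFR already requires an FDA report
        procedure_violated: bool  # an established procedure was violated
        beyond_institution: bool  # implicates parties outside this hospital
        report_verified: bool     # verified complete; identifiers may be deleted

    def route_incident(incident: Incident) -> list[str]:
        """Return every destination this incident should reach."""
        destinations = []
        # Track 1: mandatory FDA reporting for fatalities, violations of an
        # established procedure, and problems reaching beyond one institution.
        if (incident.fatality or incident.procedure_violated
                or incident.beyond_institution):
            destinations.append(FDA_REPORT)
        # Track 2: every event also flows to the national database, stripped
        # of identifiers once verified complete, for aggregate-only release.
        if incident.report_verified:
            destinations.append(NATIONAL_DB)
        return destinations

Under these assumptions, a verified near miss with no violation
reaches only the national database, while a procedural violation
reaches both destinations.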
MR. ALLEN: [Presiding.] Keith?
DR. HOOTS: Jim, implicit in that, it seems to me,
is that someone internally has recognized that there is a
deviation. On the all-events side, are you including in that
a confidential self-reporting mechanism?
In other words, the case where the only person who
knows there was a deviation is the one who made it is perhaps
not as likely to occur in this setting, although it well
could, especially at the point where the patient is being
transfused; the nurse or some other person could
self-report. It could go into the same database, but it
needs a separate system, just like the one the pilots and
the ground people have for the airlines.
DR. AuBUCHON: I agree. I like the concept of
how American Airlines and the FAA structured it: if an
individual and the airline self-report the deviation, the
incident, they are then protected from punitive action by
an outside regulator.
I can only share with you the experience in our
own transfusion service laboratory when we changed our
approach to reporting incidents or deviations. I do not
think we had any draconian disciplinary actions in the past,
but it was known by the staff that if a deviation was
reported, they were potentially liable for some type of
disciplinary action.
Despite our best efforts to encourage everyone to
report any and all deviations, no matter how large or how
small, we got relatively few of them. Most of them were
trivial in nature.
However, about a year ago, we finally stepped
forward and said we are not going to use these in any
disciplinary action. We are only going to use them to
correct the problem, and the problem is usually a system
problem rather than a personnel problem.
We saw about a tripling of the number of reports
that we got, and I have been very gratified by that. I
think we need to have that kind of non-punitive approach or
we are never going to get all the reports we need.
DR. HOOTS: I think so, too, and I think Dr.
Caplan made that point very strongly, and I think Jay did,
too, that the point where the error is likely to occur has
now moved out of the blood bank towards the actual
transfusion of the end product.
Secondly, if this is going to be a paradigm on
which we could build other error management in medicine,
then it seems to me it is incumbent on us to build the
beginning-to-end process and also to build in this part
of it.
So, if we can show demonstrable success with this
process, with the example you just gave of increased
reporting, and then corrective actions that are brought to
bear by confidential reporting, then we can go to our
colleagues in other branches of medicine and say this was a
successful endeavor and it had a positive impact through
implementation of corrective actions. You might want to
look at this, like the OR and the other things we heard
about this morning.
I think we should come back to one thing that Art
said. By having that piece in, if there is some sort of
tradeoff that could be made to even theoretically--and
obviously it is theoretical now--to justify something like a
no-fault compensation program, it has to be, it seems to me,
an inclusive and closed-loop sort of process so that safety
can be demonstrated or else there is no hope that society is
going to buy into a no-fault proposition.
MR. ALLEN: Ms. Lipton?
MS. LIPTON: Jim, could you put up your last piece
there?
I want to pursue something that Dr. Hoots was
talking about. I just want to clarify, when we start talking
about who is turning in these incident reports, it seems to
me that what goes into your QA system here,
into your internal QA investigation, may in fact be those
individual reports from the individuals involved.
When you get down to violation issues beyond the
institution, what is getting reported to the FDA or to the
national database, at that point it becomes an institutional
report, does it not? So the type of protections that you
are talking about there, when you say protection from
disclosure of self-reported incidents, that layer of
protection from disclosure is vis-a-vis the institution?
DR. AuBUCHON: I would think so, but I would also
like to see that same kind of protection within the
institution's own quality assurance system so that the
individual reporting the event to the system within the
hospital is protected.
MS. LIPTON: Right. So the individual protection
comes at the internal reporting level.
I understand that we consider that there is a
tradeoff in saying that if you report this, for example, to
the FDA or the national database, we will not disclose it
and it cannot be used in litigation. I am just wondering how
practical it really is to say that we will not disclose
any of this.
Maybe what has to go along with that is some
understanding of what things will get out that will be
published and some agreement as to what does go back to a
patient at that point. If you cannot go to no fault, it
seems to me you do need to build in some sort of protection
so that the patient does not think there is something hidden
in there. Say I have got that O-positive unit; right now we
think maybe that is all right, but what if in the future we
find out that it is not such a great thing for someone who is
A-positive to have gotten an O-positive unit? Isn't there
some way we should be talking about getting that information
back to a patient somehow?
DR. NIGHTINGALE: Karen, if I could try to
rephrase the question I think you are asking, are you asking
whether it is possible to get to a truly confidential system
without concurrently addressing no fault?
MS. LIPTON: Yes. I see no fault as a great
answer to this, but I fear it is not going to happen, and I
hate to see this held up.
DR. NIGHTINGALE: On the contrary, I think if this
is the sort of thinking that is the sense of the committee,
I would very much appreciate the committee going in this
direction because this is our best and possibly last chance
to address the issue.
As we have seen in the past, when there is strong
endorsement by a substantial majority and, even better, a
unanimity of the committee that something ought to happen,
in the past it has happened.
MS. LIPTON: But to get to the point of no fault,
you can pass all the no-fault legislation you want, but if
everything is going into a secret vault, how are you going
to know?
MR. ALLEN: Mary?
DR. CHAMBERLAND: Karen, I just want to follow up
on your question or comment.
Certainly, at the institutional level, there are
precedents for this. For example, CDC has a longstanding
program of surveillance called NNIS, the National Nosocomial
Infection Surveillance program, and this is a group that
started off kind of small, 20-some-odd hospitals. Now it is
well over 200 hospitals in 42 States.
They report institution-specific data to CDC using
standardized definitions and protocols and all of that. It
is voluntary. It is confidential, and they report the
numbers of hospital-acquired pneumonia, urinary tract
infections, surgical wound infections, et cetera. The data
are protected. Again, this is a special situation. The
data are protected by an assurance of confidentiality. So
the agency cannot release the names of the hospitals that
are even in the system, and certainly can never release
institution-specific data, but it is very valuable because,
first of all, you have a national benchmark.
A couple of months ago, there was an MMWR that
talked about the history of the system, and the editorial
note very specifically was written for the context of this
whole arena of errors and accidents. I believe the system
has also been publicly mentioned by John Eisenberg of AHCPR,
now something else, as a type of model that could be
employed, but you learn a tremendous amount. You can
stratify data by types of hospitals, all sorts of things,
and it allows you to aggregate across institutions so you
get these national trends, and it allows you to build in, I
think in this parlance we say corrective actions, what we
would call preventive measures to reduce the likelihood that
you would have these infections.
So there is some precedent. And on this airline
thing, just as kind of a sidebar, there clearly had to
be some very creative negotiations with the regulatory
agency, the FAA, that allowed them to participate in this.
So I think there may be some ways out there that currently
exist or that could be thought of creatively.
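As a sketch of the kind of confidentiality-preserving
aggregation Dr. Chamberland describes, the following assumes a
hypothetical record layout; it does not reproduce NNIS's actual
definitions or protocols, and all of the names in it are
illustrative.

    from collections import defaultdict

    def national_benchmark(reports: list[dict]) -> dict:
        """Aggregate hospital-specific infection counts into national
        figures stratified by hospital type; hospital identities are
        used only for grouping and never appear in the output."""
        totals = defaultdict(lambda: defaultdict(int))
        for r in reports:
            stratum = r["hospital_type"]      # e.g. "teaching", "community"
            for infection, count in r["counts"].items():
                totals[stratum][infection] += count
        return {s: dict(c) for s, c in totals.items()}

    # Illustrative input only: two hospitals' monthly counts.
    reports = [
        {"hospital": "A", "hospital_type": "teaching",
         "counts": {"pneumonia": 4, "urinary tract infection": 9}},
        {"hospital": "B", "hospital_type": "community",
         "counts": {"pneumonia": 1, "surgical wound infection": 2}},
    ]
    print(national_benchmark(reports))

The point is simply that institution-specific data go in but only
stratified aggregates come out, which is what makes a national
benchmark compatible with an assurance of confidentiality.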
MS. LIPTON: Did you have statutory protection
against disclosure?
DR. CHAMBERLAND: It is called an assurance of
confidentiality. Dick Reisberg from the Department's
General Counsel was here earlier, but it allows not just
CDC, but other agencies to protect the data and it is not
releasable through FOIA.
MR. ALLEN: Dr. Gomperts, did you still want to
say something?
DR. GOMPERTS: Yes.
Jim, if you could put up your overhead? It might
be a good idea to leave it on.
I have one question. As far as the internal QA
investigation and internal QA system follow-up, that has the
potential for JCAHO review and oversight, if you would
comment on that.
DR. AuBUCHON: I guess what I would like to say is
that I like the approach that the AABB assessors use when
they come to do an inspection or assessment of an
institution. Their focus now is not on whether you have an
alarm system that rings at a particular temperature when
your refrigerator does not work, or when you last tested it,
but more on whether you have a system to deal with problems
with those kinds of systems. If something went wrong, they
want to know that you have a means of investigating what
happened, taking appropriate corrective action, and then
verifying that the corrective action really was corrective.
That is the main issue, not whether the alarm rings at 5.9
versus 6.0 degrees.
So I know that JCAHO has a sentinel event reporting
system. I know many hospitals have had difficulties
accepting that for fear that the information they provide to
JCAHO will be disclosed to other parties, and I would like to
see that barrier removed in the system, particularly for the
national database, so we can be sure of capturing all the
appropriate information.
I understand that the FDA is going to have to have
enough information to identify the hospital in order to go
back to the hospital to say you did not do this correctly or
you did not give us enough information, but if we do not
have some distance, some trust, I guess, between the
regulators and the people performing the internal quality
assurance system, we will not be able to pull this off
properly.
I am afraid that if there is too close an oversight
from the regulators of this, and a lack of trust in the
system, which needs to be gained institution by institution,
we will not get all of the cases reported, and we will not
have the institutions' own investigations.
DR. GOMPERTS: Potentially, JCAHO could help from
the internal quality assurance point of view for those
institutions, if we are moving away from the blood bank to
the actual delivery, and, as we mentioned earlier this
morning, safety and quality are not necessarily the same.
So, if there is a move in focus towards the safety
issue by the hospital oversight organization, that could be
a positive move.
DR. AuBUCHON: Yes.
MR. ALLEN: First of all, just so everyone is
aware, there may be a fire drill for the hotel employees
within the next 10 or 15 minutes. So we are to stay put. I
do not want to alarm anyone.
Dr. Guerra?
DR. GUERRA: Thank you.
It seems to me that a lot of what we are
discussing really is accepting almost the given of an ideal
set of circumstances in terms of expertise and capacity and
systems in place.
I am not sure that we currently have enough data
to really come out with some universal kind of
recommendations for how to deal with this. If we really do
not know what the capacity is around the country, I think it
is a very uneven playing field. I think there are some very
small community hospitals that are nowhere close to having
the kind of capacity to do a lot of these things, especially
if we are going to come down on the side of administration
beyond the blood banking part of it, where I suspect that
many of those systems are set up on a regional basis and
have certainly developed capacity to serve a much
larger region.
But on the administration side, we are talking
about sometimes a very small component of lab personnel and
nurses and physicians, in many instances very few that have
the kind of expertise to do the really close monitoring and
surveillance of this.
Then I do not know that we know that much about
cost, of what it would cost us to implement these kinds of
systems around the country to really have in place something
that would give us the assurance of quality and safety.
DR. SNYDER: I am a little confused because I
thought that any hospital that is accredited by JCAHO has to
have an organized medical staff, number one, and, number
two, a quality assurance program. Somebody correct me if I
am wrong about that, number one.
Number two, the States currently--every State and
Department of Defense and the VA have confidentiality of
medical records, peer-review protection so there is
nondiscoverability of those records, to my knowledge, in
every State and territory and also DOD and VA.
HHS, to my knowledge, does not have that. So
things are discoverable under FOIA. They may be redacted,
but they are discoverable.
A model that potentially could be looked at,
although it is a model that I do not love, is what Congress
did with the national practitioner databank and the health
care fraud and abuse databank. That information is
available, but there are defined categories of who has
access to it.
When you are working at the local hospital level
within their peer-review efforts, my understanding is that
all of that is covered by statute in the various States. It
is when you move to a national reporting system that you run
into a problem, and I fail to see how the reports of
near-misses could go by or just be passed through a
hospital's quality assurance committee without at least some
comment, at least some investigation of some sort.
I cannot see it getting to a national system
without passing through that level. So I think that raises
a question, but I cannot see designing a system that sits
outside of the hospital's quality assurance. I think that
is redundant, and it does add extra cost to whatever we are
talking about.
I know within the Department, whenever we use the
word "quality," the acronym they use is SAFE. It is a safe
system, it is accessible, it is affordable, et cetera. So,
to me, safety is part of quality assurance, part of quality
improvement, and I think that is pretty much how the
Department looks at it when you consider a definition of
"quality assurance" or of "quality," "medical quality." You
are looking at safe and effective, affordable, accessible
treatments.
MR. ALLEN: Keith?
DR. HOOTS: I am concerned, though, about the
concept of a system analogous to the national practitioner
databank.
DR. SNYDER: So am I.
DR. HOOTS: I do not think you are going to get
voluntary disclosure of near hits, near misses with that
kind of system because it does not matter who does not have
access to that system. What is important is who does, and
it is very widely disseminated.
Every hospital has access, and if you give that
kind of information that broad an availability, then I do
not think it will work as a confidential system. The
systems that the airlines have described to us are nowhere
near as available as the national practitioner system.
DR. SNYDER: I agree with you.
What I was looking at was from the point of view
that Congress mandated through legislation who had access,
if anybody. That is what I am getting at.
I am not getting at the national practitioner
databank, the fraud and abuse. The fraud and abuse is even
open to the local police officer and the Federal law
enforcement agency, and nothing is redacted out of that.
So, yes, like I said, it is an example that I do
not favor, but what I favor is the concept of legislation
which clearly defines who has access to the information in
such a databank, number one, and, number two, in what form,
whether it be redacted or in aggregate form; that is
critical. But if you just form one of these with no
congressionally mandated non-discoverability, you are wide
open. I do not think that is where we want this to wind up.
You want it protected from discovery. You want it
protected from what follows from discovery and what
potentially can follow from FOIA, either against an
institution or an individual.
MR. ALLEN: Let's go to Dana, then Dr. Lipton.
DR. KUHN: I want to thank you, Jim, for your
schematic. A picture is worth a thousand words, and I think
it comes close to capturing what we are looking for.
I think we are not going to be able, as we
formulate a recommendation, to try to address all of these
issues and solve all of these issues that we are talking
about. I think we need to name these elements.
As I have been trying to listen carefully this
morning, some of these elements I think we can put into a
generalized recommendation. What I think are the important
elements of a standardized reporting system for blood or
transfusion services would be, first of all, voluntary
disclosure, second, a non-punitive confidential element of
it.
Data collection and surveillance, with all due
respect to the CDC, which would need to have that
information; they have done an excellent job with that.
Corrective measures would be another element, as well as
quality assurance and safety.
I think the legal needs to be mentioned here as
an element, too: limited liability for those
institutions, the blood institutions, and then protection
from discovery. I think we could name these elements. I do
not know if we exactly have to solve these problems, but we
can name these elements.
Then, accountability. With all due respect to the
FDA, I do not believe we need to usurp the many, many years
of the FDA doing such an excellent job, although I know
there are some people in Congress that want to do away with
the FDA.
I think then the other element that we need to
bring into this, in order to help patients feel more
comfortable and buy into this whole reporting system,
would be the balance with the patient's right to know what
happened and to know when, and then the balance with the
patient's right to some kind of compensation to avoid the
tort system, which we know has become a nightmare in many
situations.
I still think, no matter what anybody says, we are
still going to be addressing this down the road. I
think we have only seen some of it now, and I think with
class-action settlements, some of it has been resolved, but
some of it is going to continue as individual cases go on.
I think it goes right along with the patient's
legal rights, but what are the definitions of what we
report, adverse reactions or near misses or whatever, to
patients? Should the qualifier be when the patient incurs
harm, and what is going to be the definition of "harm"?
But I think we need to kind of focus on
generalizations, not trying to solve the specifics here, and
then give those directions to the Secretary and let her try
to figure out how to incorporate this. If she needs to go
to the legislative branch, let them do it.
MS. LIPTON: I agree with Dana.
I was going to say I think in some ways we are
trying to create something that is too complicated, and we
do need to focus on the elements.
Just as an example, you can deal with the issue of
confidentiality by restricting access. Otherwise, you can
leave it to the present court system where they really say,
well, essentially all of this activity and reporting it to a
national database is looking at peer-reviewed or audited
materials, which in most cases are not discoverable or
admissible. You cannot use them in any kind of a case, but
it is case by case. It is just very difficult, and there is
not really one Federal rule that applies everywhere. So you
can get at it in many different ways, but I think if we talk
about the elements, that it should be left to other people
to design it.
To Dana's point, I think that is really what I was
trying to get to earlier. I think we should push for a
no-fault compensation. I believe in that, absolutely, but I
think, and maybe I am wrong, that patients also want
something else. They want to know clearly then what things
will be told to them and when they can have a right to
expect that that information will be divulged to them. I do
not know if it is harm to the patient or something that even
happens to the patient or there has to be some sort of peer
review in a hospital to determine if that incident is
something that should be reported back to the patient, but I
think it is something more than no-fault compensation
because you do not want to just create a black box; you want
something where the patients feel that, yes, if it is really
important, they will tell me and I will know about it.
MR. ALLEN: Jim?
DR. AuBUCHON: I was just going to comment in
terms of protecting the information.
I was impressed that the FAA was willing to
essentially remove from their files all of the information
reported through the ASAP program, leaving only the
administrative action, or I guess the corrective action, as
a memorial to what had happened.
They could then obviously use that to inspect
American Airlines to see whether that had really been
accomplished; if they were going to change a procedure, they
could come in two months later and ask whether the procedure
was really changed.
That, I think, demonstrates the FAA's commitment
to a non-punitive approach. They want to make sure that the
corrective action has been taken and they have enough
information to make sure that it has been taken. Beyond
that, any other information that they might maintain in
their files could only be used in a punitive sense, which I
think is an important thing to avoid in this kind of
system.
MR. ALLEN: Jay?
DR. EPSTEIN: I guess I would like to try to focus
for a moment on the whole issue of what is punitive because
there is sort of an undercurrent here of wanting to get away
from what FDA now does and do something more akin to what
was described under the ASAP program.
I have to say there is a big disconnect because
the FDA sees itself largely as already doing the things that
you are calling for us to do in the sense that when we
review a report of an error or accident, our focus is on
determining whether there was adequate investigation and
corrective action, and if we decide that there was, by and
large we do nothing.
Additionally, when we take action, for the most
part, we see the actions that we take as being corrective.
We, of course, also take a compliance approach to make sure
that the correction is implemented, but in our own minds, we
make a very sharp distinction between mandates for
corrective action and punitive action.
Punitive action is really very different. It is
when we would go after individuals to prosecute them or when
we would try to levy civil money penalties, and that is in
fact quite rare.
So I think that the question then is why are
FDA-mandated corrective actions perceived as punitive when
in fact they are not. They are certainly not punitive in
the ordinary sense. They are not actions against
individuals. They are not seeking money penalties or
criminal prosecutions. They simply are not.
I think the two things that I have observed that
would distinguish how the FAA is now interacting through
ASAP and the way the FDA interacts with blood establishments
are these. First of all, there is a sense that the mandated
corrective actions are punitive precisely because they are
externally mandated, and the implication is that the blood
center does not feel as if it has ownership of the larger
system that is being imposed.
The problem, then, is not really whether FDA is
correctly performing its functions as a national system in a
non-punitive manner, but the fact that the entities
regulated do not feel they have buy-in. Probably, there is
a sense of lack of buy-in at two levels: one, that there may
not be concurrence with all of the standards, in other
words, that we may be enforcing outmoded or inappropriate
standards, though we do not believe that; and, secondly,
that there is not a consensus approach to actions to be
taken. I was very intrigued by the model of having a
three-way dialogue with the employee representatives,
usually unions, the airline which is the industry
establishment, and the regulator which is the FAA, and the
whole concept that there was agreement to a consensus
approach, although we did not, I do not think, hear enough
said about whether the regulator kind of had veto power or
trump authority, in other words, could they decide to
investigate anyway and go further and litigate, et cetera.
I suspect that as regulators, they probably do
have that. It is just that they are more often than not
operating in the consensus mode. I do not know if there is
somebody still here who can answer that question about the
airline industry.
I would just like to focus on the fact that we
seem to want to change things because we are not happy with
the punitive aspects of what we perceive as now going on,
and yet the agency does not see itself as taking punitive
action, rather imposing corrective action.
DR. SMALL: To just answer your question, my
understanding of the ASAP program is that on Fridays, the
event review team meets: the FAA regional administrator,
the representative for the airline and the representative
for the pilots meet, and they have to agree on every
incident report as far as what happened and why and what the
corrective action is.
If they do not agree, then the whole program is
folded. The whole ASAP program is gone on that one incident
report.
DR. EPSTEIN: Meaning what? Meaning it defaults
to an FAA decision?
DR. SMALL: That they close ASAP and that they do
not meet again and that the program is history.
So it is Russian roulette, if you will, and I do
not know how much is internal documentation and how much I
can say from my study of the system, but there has been
disagreement, and they have come back to the table and had
to resolve it. Otherwise, there is no more program. So the
weight on that meeting of those people is intense,
especially given the exposure and the success of the
program.
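As a sketch of the consensus rule Dr. Small describes, the
following assumes the three-party review and the
fold-the-program consequence as reported here; the function and
party names are illustrative, not drawn from any actual FAA or
American Airlines documentation.

    PARTIES = ("FAA representative", "airline representative",
               "pilots' representative")

    def review_incident(agrees: dict[str, bool]) -> str:
        """Event review team rule: every party must agree on what
        happened, why, and the corrective action, on every report."""
        if all(agrees[party] for party in PARTIES):
            return "incident closed by consensus"
        # One unresolved disagreement does not just reject one report;
        # it ends the whole program, which is what gives the weekly
        # meeting its weight.
        return "ASAP program folded"

    print(review_incident({p: True for p in PARTIES}))  # closed by consensus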
MR. ALLEN: Yes, sir.
DR. WINKELSTEIN: Before I am going to be able to
vote on a resolution, and I guess I am more naive than the
other committee members, I want to get some clarification on
what we mean by no fault. I am a little nervous about this
concept of no fault, or confused at least.
It seems to me there are two kinds of errors. There
is the error for which there really has been no fault, and
then there is the medical error for which there really has
been someone or some procedure that has been truly at
fault.
So the issue I am asking is when we are using the
word "no fault" in this discussion, are we using it to
include both of those situations? Are we saying that there
is a parallel system for surveillance that will have nothing
to do with litigation for an individual patient? I need
some clarification. I am a little uncomfortable with the
concept globally of no fault.
MR. MacPHERSON: I probably know more about no
fault than most anybody in this room, having lived through a
lot of looks at this for both blood and other issues.
I think in general the committee could probably
make a recommendation that this should be examined and
looked at, but there are lots of different models, and there
are lots of different ways to look at this.
The classic no fault that probably blood would
want to look at is maybe something akin to the vaccine
injury program, the Federal vaccine injury program. That
addresses all of the issues you are talking about here in
terms of confidentiality and disclosure to patients. They
have already gone through this issue of the tradeoffs and
that sort of thing, but, again, it is hard for the committee
to examine this without really knowing what the options are.
But I think a general recommendation might be in order.
DR. WINKELSTEIN: I do not think I have made
myself clear. In the vaccine compensation program, the
concept of no fault is very clear to me: there is a
measurable but small, finite risk, which no one can predict,
that there will be some adverse event from, let's say, live
polio vaccine. That, I understand.
Are we talking about no fault in the context of
this afternoon, where someone has given the wrong patient
the wrong unit of blood? That is what I need to know.
MR. MacPHERSON: You are correct. That is a
separate issue from a product injury issue.
DR. WINKELSTEIN: Okay, because I have learned to
live with no fault as a concept when there has been no
fault.
MR. MacPHERSON: That is correct, and when you
start talking about no fault in terms of any kind of
reportable medical injury, that is a very different level
and that is much larger.
DR. WINKELSTEIN: That is why I am confused.
MR. MacPHERSON: You are correct. You are
correct.
DR. WINKELSTEIN: So am I going to get
clarification?
MR. ALLEN: We are working on that now. Karen,
and then Mr. Battles.
MS. LIPTON: We can define "no fault" any way we
want. We can say that what we are recommending, even for
medical errors for which there was negligence, is that there
is a system, and you need to tie it in. When we say no
fault, you have to tie it to a compensation piece that says
for this type of injury, this is what you get.
As far as the inquiry goes in terms of the
compensation piece, if you get a wrong unit, an
ABO-incompatible unit, and you suffered this amount of
injury, this is what you get out of it, and really what
you are doing is a very short factual inquiry into what the
injury was.
So I do think it is important when we say no
fault, what we are saying for any transfusion injury,
whether it is related to product or process--that means
administration--is that we are recommending that we go to a
system that does not have a trial system where a small
number of people get a lot of money, but where everybody
says there are certain injuries, some of which will be
avoidable, some of which will not be avoidable. But what we
are saying is there is a dollar value attached.
In the vaccine and in some of the other no-fault
compensation systems, sometimes you can opt out of that
system and sue the party for negligence, too. I do not know
how involved we want to get in that here in recommending
this, what course you take.
DR. WINKELSTEIN: I understand.
MS. LIPTON: It sounds like we are not just
concerned about product issues. We are concerned about
administrative issues.
DR. WINKELSTEIN: Practice issues.
You have clarified it. I would suggest that the
term "no fault" is giving a false impression then of this
process. "No fault" means no fault, and that holds true for
many of the vaccine-related problems, but I am not sure it
is going to hold true for some of the issues that we are
discussing today. So I am still uncomfortable with using
the term "no fault" because of both personal and private
reasons that might exist within the medical community--not
private, but issues relative to the community--and, in a
larger sense, issues related to society as a whole.
I would be uncomfortable saying this is a no-fault
situation in every instance. So I am dealing with a
semantic problem that is substantial to me.
DR. KAPLAN: To carry the semantic problem a
little further, and maybe Karen can speak to this, my
understanding is that there is a fellow named David Marx who
has talked about a just system. James Reason discusses his
work. The issue is a continuum of compensation up to and
including negligence. The term "no fault" has a lot of
unnecessary baggage.
Then an error may occur in the kinds of settings we
heard about today, but from reckless behavior on, where
someone is reckless or they know what they did could cause
harm or they intended to cause harm, you cross that
line into punitive damages. I think that is part of what you
are talking about.
Karen may talk to that because it is well
established within the law, and I think that helps. If your
car skids and you hit someone who is stopped at a light,
there is compensation here, but no punitive action. I do
not know how that translates in the no-fault context, but I
think it is an important differentiator.
MR. MacPHERSON: Actually, I think I can address
your question.
We are talking here about fault versus no
fault-based compensation systems, not whether there was no
fault or fault. You are just talking about whether the
compensation system is fault-based or no fault-based.
DR. WINKELSTEIN: That helps. I am just saying
from the public's perspective.
MR. MacPHERSON: Right.
DR. WINKELSTEIN: If I am confused, there is a
fair chance some people other than me will be confused as
well.
MR. MacPHERSON: In fact, people do not even talk
about no fault anymore.
DR. WINKELSTEIN: I think we have to be very
careful about the wording. That is the last I will say on
it.
MR. ALLEN: Go ahead, Dr. Snyder.
DR. SNYDER: I am confused, too, because if we
talk about no fault and we talk about the vaccine injury
compensation program, we are talking about a federally
supported program. There are no insurance companies. It
was designed because the vaccine manufacturers could not get
insurance or it cost them a fortune.
So are we saying we are going to do this at the
Federal level where it is going to come out of the public
coffers versus from the insurance companies and usurp the
State's powers with respect to torts? Because most of these
cases are local cases in a State and they take place there,
and they come under their tort system, the State's tort
system.
Are we talking about no fault or fault, however
you want to say it, no fault within a State, or are we
talking about at the Federal level where, again, the
distinction is it is coming out of the public coffers versus
out of the insurance companies?
That is a distinction in my mind somebody needs to
clear up for me.
MS. LIPTON: I think that is open to question. I
think how you fund it does make a difference, and the
Federal vaccine injury compensation program is funded by
some sort of tax on the vaccine itself, but it does take
precedence over the State laws, which generally do control
most negligence or tort questions. That is absolutely true.
Again, I do not think this committee is going to
be able to write the law so much as say these are what we
think are the important elements: that people who are
injured get some kind of compensation. As to how that is
funded, I do not think we can possibly address that here.
MR. ALLEN: Keith?
DR. HOOTS: I want to clarify one thing Jay was
talking about.
What I was thinking when the word "punitive" was
bandied about was really more in the context of the part, as
you started out by saying, that you do not routinely
surveil, that is, what happens once the product leaves the
transfusion service.
The part that would be most analogous, at least in
my mind, to what happens with the airlines is where a nurse
walks in, is about to hang the blood, and then realizes,
oops, I forgot to check, and the name tag is different from
what is on the bag.
That is a near miss, and that kind of reporting, I
would hope, would not be subjected to any punitive or even
potential for punitive action, because that is how we are
going to get the data and change this system through
corrective action.
I think what the FDA does is good, and I think the
way they do it is well done. I think the task is figuring
out how we can go the next step to add one more layer of
safety all the way to the product going into the patient.
I just want to make that comment.
Clearly, I think what Karen said is right. We
cannot rewrite the laws here. What we can do is
philosophically say that we think there are parts of this
that may be analogous to a vaccine situation or some other
situation where society has a potential benefit from
relooking at the whole issue outside the conventional tort
system.
MR. ALLEN: Dr. Davey?
DR. DAVEY: Yes, just a couple of comments.
As far as I understand some of the discussion so
far, it does seem to me that we have made a pretty clear
distinction between product manufacturing, where we are in
pretty good shape; I think the safety record is good, and I
think several people have mentioned that. We are really
focusing on the transfusion side.
I think Dr. Penner mentioned this earlier. There
are institutions in place already in the hospitals. The
transfusion committee, which is a committee required by
JCAHO, I believe, already has a lot of responsibility,
primarily for utilization, but we could perhaps look at a
committee like that or the hospital QA committee to expand
their reach a bit. So there should be a plan in
place, perhaps monitored by those existing committees,
looking more at safety, and the plan should involve
monitoring. It should involve corrective action. It should
involve some kind of way that the patient will be fed back
information when it is appropriate.
Really, it is not up to us to get to the details
of that plan. There might be some reporting that is
required perhaps, but maybe even some voluntary reporting if
it is built on a system of trust that can expand the
information base.
But we just need to require that a plan be in
place and use existing structures to build that plan.
DR. BATTLES: We have some problems because we use
"no fault" in some other kinds of things in different ways,
but in terms of event reporting, one of the things that is
referred to as no fault is that you do not shoot the
messenger because, if any harm comes to the person who
reports the information, that information dries up because
you would not have known about it otherwise. So we want to
protect in some form the person who reports, and I think
that comes through particularly if we are trying to get at
more of the information, that is, the near miss.
I might comment about a definition. The way I look
at it, if an event is reportable to the FDA under the
regulations for that organization, it is no longer a near
miss. That is a hit. Therefore, whether there was harm to a
patient is immaterial. That is a significant event as far
as the organization's meeting the requirements is concerned.
So anything reportable to the FDA deserves serious
consideration within the organization.
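A minimal sketch of the distinction Dr. Battles draws, with the
caveat that the function name and labels are illustrative and
that what counts as FDA-reportable is whatever the applicable
regulations actually define:

    def classify_event(reportable_to_fda: bool, patient_harmed: bool) -> str:
        """Under this definition, FDA reportability, not patient harm,
        is what separates a hit from a near miss."""
        if reportable_to_fda:
            # Harm is immaterial here; a reportable event is already a hit.
            return "hit"
        return "near miss (trapped before becoming reportable)"

    # A reportable event is a hit even if the patient was fine; a
    # mislabeled sample trapped early in the process is a near miss.
    print(classify_event(reportable_to_fda=True, patient_harmed=False))
    print(classify_event(reportable_to_fda=False, patient_harmed=False))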
DR. EPSTEIN: I would like to ask Drs. Battles and
Kaplan what your actual findings were. When you were
studying MERS-TM in the transfusion medicine environment,
real time, actual data, you pointed out to us that the
report rates or the capture rates rose quite markedly, and
we have heard that in other settings as well.
What I would like to know is what percent of those
reports were things that were required to be reported to the
regulator. In other words, how did you accomplish the feat
of getting additional reporting if you were still subject to
the regulatory report requirement, and how did you deal with
that interface?
DR. KAPLAN: I cannot recall the percent, but I
think an important point is there are lots of things that
are not reportable that are precursor events.
Let's take a mislabeled sample that is trapped in
our system. At what level do you want to keep testing your
system? Because eventually one of those mislabeled samples
is going to result in a bad outcome.
The premise to the whole idea of near miss is
there are many more of these events, and many of these are
not reportable because they are trapped inside the system
early enough, but you do not want to keep testing that Swiss
cheese model on that particular day when either the computer
is down or somebody misses a cue.
So I think the idea here is to move the errors
farther upstream and hopefully prevent the errors and
address the issues that go on at the point of data
collection or patient card-swapping, so those do not get
far enough downstream. That is one thing.
Another point, which we have preliminary data on and
which deserves a further look, is that if you go far enough
down, the events really do not have potential for harm, and
they do not map the same way as things that look as though
they could have potential for harm and that get through the
system.
DR. EPSTEIN: If I might pursue this a little bit.
I think that the added value or lack of added value of a
parallel system is sort of the crux of the matter, and that
what happened at the January advisory committee meeting is
that there was a recommendation in favor of a voluntary,
non-punitive, confidential, nationally aggregated reporting
system for events not captured under FDA mandatory
reporting.
What I keep trying to figure out is what are they
and what is the value of learning about them. So it is
important to understand whether they are of different
character than the ones that are reportable or simply of
different number.
I can understand that there may be antecedent
events that get trapped, but they may not be of any
different character than the ones that did not get trapped.
DR. KAPLAN: No. I think from your view, since
you get the aggregate data--and I have seen Sharon's
office--the numbers are large, but at any site, in terms of
local management, every event occurs within a context, and
that is a very valuable database, 10 to 100 times
greater, that requires assessment and evaluation at the
local level.
Some of these are events that might not ever rise to
the level of reportability, but if enough of them occur,
we may want to address them perhaps even in a national
database, because there may be an awful lot of extra work
being done and misfocused resources--but I think the issue is
you miss the opportunity for the local improvement of the
system.
Frankly, when we talk about error reporting,
internal error reporting and internal analysis ought to be
stressed a lot more than the external reporting.
DR. AuBUCHON: If I could comment on Dr. Epstein's
last question and then Dr. Battles' comments.
Indeed, reporting an event to the FDA is a
significant event for those individuals who are doing the
reporting. It is something one remembers.
I think it is important that the system be
structured so that the event reporters do not feel that
their neck is headed for a noose because they have reported
that event.
Our technologists in reporting an incident through
the system should feel comfortable that they will not have
their job jeopardized or will not be castigated for having
made an error or for having discovered one. Whether or not
the management has to report this to the FDA is of no
concern to them, and it should not be.
By the same token, if I as a transfusion service
director have to report something to the FDA, I would hope
that the FDA would look on this report in a professional
manner, which I believe they do, and that they will not
seek in some way to take any punitive action.
I do like the idea from the FAA approach that if
the corrective action we report we have taken internally
does not meet what they feel is appropriate, we engage
in a professional dialogue and do not just receive a letter
saying that was an inadequate response and here is what you
have to do.
MR. ALLEN: Jim, you mentioned earlier the
reporting that was done. I think you said something about
it being more systemic than personnel in one of
your statements earlier. Do you remember that? The
reporting increased three-fold?
DR. AuBUCHON: Yes.
MR. ALLEN: In your opinion, what would be the
reason if the problems were systemic versus personnel? Why
would there be the increase in the reporting?
DR. AuBUCHON: There was concern that reporting
might focus undue scrutiny on an individual person, although
staff always believed that any error is not their fault--I
should not say that--they often feel it is not their fault
and that it is a system problem. Occasionally, it is a
training problem or a lack of attention to detail on their
part, but then there are usually mitigating circumstances in
that situation: they had too many things that they were
being asked to do simultaneously.
In any case, the individual staff do not like to
have to take time out from what they are doing, working on
patient samples, in order to report something. However,
when they came to understand that we were serious about this
and wanted to improve systems wherever possible and that we
needed them to tell us whatever was not working correctly,
they took it to heart.
We are seeing about 30, 35 incidents reported per
month. Not all of them would be FDA-reportable under the
proposed system from a transfusion service, but probably
about half of them would be. That is not the important
fact, and indeed because of our internal QA system,
reporting to the FDA is going to mean probably just filling
out one more form and sending it off. What we would hope is
that by sending that form off, that information will be
useful to someone else and not just occupy a box in a
warehouse.
DR. BATTLES: Most of it is in theory, but it
seems to be supported by a number of industries that if your
voluntary reporting goes up, like Jim is reporting,
eventually we should see drops in reportables to the FDA.
We have some evidence, from what some of our blood
centers have reported, that if they drive up the number of
deviation reports, the total aggregate, they drive down the
total number of reports over time that go to the FDA under
current regulations.
Since the regulations for hospital transfusion
services are not yet in place, we can say we think so, but
because the number of reports that Sharon has is so small,
we do not know whether that is the case. It shows up in the
petrochemical industry.
It shows up in the airline industry.
If you drive up your information locally through
these near-miss benign events, you increase the awareness of
staff, and eventually that which is reportable to whatever
the regulatory agency is should go down. We have evidence
from British Midland and Exxon and some other industries
that would indicate that is what likely would happen.
So we would anticipate that the more vigilance
there is on the non-reportables, it will have a positive
impact on driving down over time the number of things that
get reported to the FDA.
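One way to watch for the effect Dr. Battles predicts would be
to track, month by month, the FDA-reportable subset against
total voluntary reports. The sketch below assumes a
hypothetical record layout, and the numbers are invented; it
illustrates only the predicted direction of the trend, not any
actual data.

    def reportable_fraction(months: list[dict]) -> list[float]:
        """For each month, the fraction of all reported events that were
        FDA-reportable. The prediction: as voluntary near-miss reporting
        rises, this fraction (and eventually the absolute count) falls."""
        return [m["fda_reportable"] / m["total_reports"] for m in months]

    # Invented numbers, loosely echoing the tripling of reports
    # described earlier by Dr. AuBuchon.
    history = [
        {"total_reports": 12, "fda_reportable": 7},
        {"total_reports": 24, "fda_reportable": 8},
        {"total_reports": 35, "fda_reportable": 9},
    ]
    print(reportable_fraction(history))  # falls as voluntary reporting rises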
DR. PENNER: Then perhaps if we just provided
guidelines to the floors so that incident reports would pick
up some of the near misses, those would then be forwarded to
the transfusion committee. One could then apply some
correction at that point, and then information from them
could go on to a national center, in which case that would
not require a complete new system to be employed. It would
go through a local system, which would be less of a concern
with respect to at least challenging the poor person who is
submitting the error report, and it could all be handled
internally, but the data would still be collected. Is that a
reasonable approach?
DR. BATTLES: Yes.
MS. LIPTON: But wouldn't you want on the
transfusion committee some other disciplines besides--I
mean, I would hope that you would somehow tie it into the QA
system in the hospital, too, that those people would be part
of the transfusion committee because otherwise you are
getting just one kind of discipline's viewpoint.
DR. PENNER: No. You have a QA, and you have all
of the disciplines. So it is not just a blood banker. It
is the surgeon and the OB-GYN and the QA person.
MS. LIPTON: Yes, I am working sort of on the QA
people.
DR. PENNER: Oh, yes. They are part and parcel.
They have to be, for JCAHO.
MS. LIPTON: Good.
DR. KAPLAN: That is going to be handled in quite
a level of detail before it gets to the transfusion
committee. The transfusion committee is not
domain-specific in terms of the area in which the occurrence
took place, as the blood bank is; it is the broader
representative committee, with QA already
aboard on this. So it would not go zipping out of the blood
bank into a transfusion report and out the door.
DR. PENNER: But what is missing at this point are
the guidelines as to what extent one should report
the so-called near misses and investigate them. Is that
right?
DR. BATTLES: Yes, I think so.
The other thing is that, in a sort of idealized
way, we really have, from the operations level, a single
kind of reporting and standards. Reports go to different
places depending on where you draw the lines, such as that
which goes to the FDA by regulation, but it is basically the
same kind of report and information. So it is a seamless
system, with information below whatever line is drawn by the
FDA for their regulation and oversight. The other
information is shareable, but it is a seamless system.
Ideally, transfusion would lead the way for the
rest of the hospital, and so, with slight variations, we
show the hospital--and it is true at my own
institution--what has been done with the MERS system.
MERS is their model for the entire hospital, and
that is what they are planning to do.
MR. ALLEN: I just had a couple of things I wanted
to ask, and pardon my ignorance here. Art mentioned earlier
a safety officer. I am curious. Once blood is
received by the hospital, who is essentially responsible for
the safe storage of that blood? Would that be the blood
bank director and his staff? So they essentially are the
safety officers as well for that product?
MS. LIPTON: I think he was using the term "safety
officer" a little bit differently, sort of like this
supernumerary in the hospital, sort of like a czar, a blood
czar. So I think that is a little bit different, but when
it comes into the hospital, it goes into the blood bank, and
the blood bank is responsible for receiving it, holding it
there, issuing it to the right patient, and storage.
Where it sometimes breaks down is when it gets
outside the blood bank onto the floor in the surgical suite,
and it is less clear who is always in charge of it then.
MR. ALLEN: Jay, I actually had a question. If
HHS goes ahead with any form of mandatory reporting, what
kind of assistance, if any, do you think institutions could
anticipate from HHS?
DR. EPSTEIN: That is really not a question for
me. Perhaps Steve would like to comment.
MR. ALLEN: Okay. Steve?
DR. NIGHTINGALE: Steve is trying to preempt the
need to answer that question. That is why we are meeting
here today.
[Laughter.]
DR. NIGHTINGALE: Since I got a laugh rather than
a shudder, I believe that some of the many proposals that
are on the congressional table, on the various
administrative tables, and in non-governmental agencies,
such as those at the Institute of Medicine, are highly
likely to be enacted in this election year.
I think the Grassley proposal, for example, that
was introduced about 20 days ago has a number of powerful
people attached to it.
The real purpose of the meeting here is to
identify whether or not there are specific concerns of the
community that is concerned with blood safety that that
community would like to issue a preemptive statement on.
It is not necessary that we do so, but if we do
not, it is quite likely that others will. I sense a feeling
that this is all going to work out okay, not one that I
share, but it could.
I think that what we are faced with is really the
question, can we now separate the question of
confidentiality, which I think is at the core of the
discussions here, from the question of liability, and Judge
Krever raised that question. It has hung over the rest of
these deliberations.
I say "hung over" because when we looked, we said,
oh, my God, we were going to deal with that one in January
of 2001, and here it is staring us in the face.
We may need additional time to deal with the issue
of confidentiality if we conclude that it is inextricably
tied with compensation. If that were to be the endpoint of
our deliberations today, I would say we have gone a long
way.
I do not know if that is the endpoint of the
deliberations or not.
DR. GUERRA: I may be a naive pediatrician from
South Texas and one who does public health, but I hear about
issues related to blood banks and all sorts of capacity and
what have you.
I guess my concern is there are a lot of places
around the country, some that I have visited within my own
State for one reason or another where there are not blood
banks, where the blood is delivered to take care of a crisis
situation quite often without the opportunity to go through
all the different layers of assurance and where the
committees are almost nonexistent, where there is overlap
from one committee to another just because there are not
that many people that can serve on these.
Many of the hospitals in rural communities
probably have never heard of transfusion committees, for one
thing, and even if they have committees in place, the
participation and attendance is suboptimal just in terms of
input and participation, decision-making. That quite often
is just the chairperson of the committee and maybe a staff
person, secretarial staff that are conducting the business.
I think as we look at these very complex issues,
we need to really develop some recommendations that take
into account what is truly an uneven playing field in terms
of how all of these things are implemented.
MR. ALLEN: Dana?
DR. KUHN: I am not sure where we are in this
whole process, if we are even at the point of a
recommendation, but I would be willing to throw one out
there to at least get going in the direction that we might
want to go in. It would not encompass everything, but it at
least would be a start. I would be willing to do that if it
is acceptable.
I would like to start. Please forgive me if I do
not have all of my words right. We can wordsmith it later.
I was just thinking that the recommendation we could offer
would be that a standardized reporting system--
[Pause.]
MR. ALLEN: Technology is catching up with us.
CAPTAIN McMURTRY: Let me say that this is only
going to improve my penmanship, not my spelling.
DR. KUHN: Mine is not any better.
[Pause.]
DR. KUHN: The recommendation would be that a
standardized reporting system--
CAPTAIN McMURTRY: I told you, this is only
penmanship, not spelling.
MS. LIPTON: Are you feeling under a lot of
pressure?
[Laughter.]
DR. KUHN: --be established and implemented for
blood establishments/institutions which would incorporate
the elements of voluntary disclosure--and you can put this
one in quotes because I think it is a legal term--"assurance
of confidentiality," data collection, surveillance, quality
assurance, protection from discovery, reportable corrective
measures, and accountability to the regulatory agency.
I think that is a good generalization of where you
could start.
Keith asked me if I would put something in about
compensation. Yes. That is another element. I did not get
to that part. I think that a compensation mechanism should
be in there because that is the tradeoff for the patient.
Still wordsmithing. Maybe, Keith, you can help me with
that. Then, what would constitute a patient being
informed--I really believe the definition probably should be
"harm"--and how is "harm" quantified?
DR. DAVEY: Dana, by "standardized," do you mean
uniform? Do you mean kind of a uniform system across the
board, a very standardized, cookie-cutter system, hospital
to hospital?
DR. KUHN: Something that all can encompass or
embrace.
DR. SECUNDY: Established by whom?
DR. KUHN: The Secretary.
DR. SECUNDY: In the Department of HHS?
DR. KUHN: The Department of HHS.
DR. SECUNDY: In FDA? I always suggest we get
very specific. Do you want it in FDA or someplace else in
HHS, a separate office or what?
DR. KUHN: I think a generalized recommendation to
the Secretary of HHS to figure out how to establish this and
implement it.
[Pause.]
MS. LIPTON: Yes. What about "voluntary
reporting" instead of "voluntary disclosure"?
DR. NIGHTINGALE: Mac, "voluntary reporting"
instead of "voluntary disclosure."
CAPTAIN McMURTRY: Okay.
DR. NIGHTINGALE: Karen, do you need the
"assurances of confidentiality" in quote?
MS. LIPTON: Is Dick here?
DR. NIGHTINGALE: Dick Reisberg? No, he had to
leave.
MS. LIPTON: I am just concerned that that is a
research protocol term and not a -- as far as I am aware, it
is a term that is strictly used by Federal agencies, CDC,
the National Institutes of Health. It is a mechanism that
they have the ability to utilize. So why don't we just say
protections from disclosure. Then we don't have to call it
assurances of confidentiality. Protection from disclosure of
information.
One way to phrase it that might make this easier
would be: a standardized reporting system be established
and implemented for blood establishments and institutions
that incorporates the following items, colon. Then, if you
start listing, you can describe them a little better.
DR. WINKELSTEIN: But it does not say what we are
reporting.
MS. LIPTON: We could say "voluntary reporting
of."
DR. WINKELSTEIN: No, no. You still need the
area. Is this miscarriages?
DR. NIGHTINGALE: We will be getting there, going
a line at a time here. In fact, this, I believe, though I
am right in the middle of the process right now, will prove
to be very helpful to the administration in trying to
forward--I hate to use the hackneyed phrase, but the will of
the people because that is what we are trying to figure out
right now.
Let's take a second until the word processor has
caught up with the conversation, and I think it will move
faster in the long run.
CAPTAIN McMURTRY: This is spelling by the
committee back there.
DR. NIGHTINGALE: "Surveillance" gets an "e."
MS. LIPTON: Don't you have spell-check on that
thing?
CAPTAIN McMURTRY: Yes, but I cannot see it. I'm
sorry.
DR. NIGHTINGALE: Dr. McCurdy has his hand up, and
after that Dr. Chamberland.
DR. McCURDY: I just want to make one or two
comments about the certificate of confidentiality. That
refers to a legislation passed by Congress a number of years
ago. It is for research projects only. It can be used for
such things as you were talking because there is a research
aspect to it.
It has never been tested in court as to whether it
is available or not, but it is available to any investigator
in the country whether they are federally supported or not.
DR. NIGHTINGALE: In that case, Paul, would
assurances of confidentiality then be a subset of protection
from discovery and, therefore, superfluous?
DR. McCURDY: Yes, I think it probably would. A
certificate of confidentiality probably would not pertain
here at least over the long term. Over the short term,
perhaps.
DR. NIGHTINGALE: Captain Snyder?
DR. SNYDER: I was just going to say that that is
a whole different ball of wax, and what you really want is
protection of peer review activities, nondiscoverability of
medical quality assurance records. It is
nondiscoverability.
DR. NIGHTINGALE: Then perhaps you would like to
add "protection of discovery."
DR. SNYDER: It is "protection of discovery"--
DR. NIGHTINGALE: "Of the peer-review process"?
DR. SNYDER: "Of medical quality assurance
activities."
DR. NIGHTINGALE: "Protection of discovery of
medical quality assurance processes"?
DR. SNYDER: "Activities."
DR. NIGHTINGALE: "Protection of discovery"--
DR. SNYDER: "Protection from discovery."
DR. NIGHTINGALE: "For medical quality assurance
activities." You might just take out the "medical." Just
"quality assurance activities."
DR. SNYDER: The "medical" is the context it is
used in.
DR. NIGHTINGALE: Okay, I'm sorry.
MS. LIPTON: But that does not make sense right
there, "protection from discovery for medical quality
assurance."
DR. NIGHTINGALE: It is "protection from
discovery."
MS. LIPTON: You want to protect it from
disclosure to any unauthorized person and disclosure
pursuant to a legal action, right?
DR. SNYDER: That is discovery, right?
DR. NIGHTINGALE: You are the lawyer, Karen.
Could you direct us on how to write this right?
MS. LIPTON: I just want to make sure I understand
what you want to capture.
DR. SNYDER: What I am saying is that the issue,
my understanding of it, in a tort claim is the
discoverability of documents that are from peer-review
activities.
MS. LIPTON: Why don't we say "confidentiality
protections, including protection from discovery and legal
proceedings." Does that do it?
DR. SNYDER: That works for me.
DR. NIGHTINGALE: That works for him.
Would you type that up and then we are going to
move to this side of the table.
MS. LIPTON: Now you want me to say that again,
don't you?
CAPTAIN McMURTRY: Yes, I do.
MS. LIPTON: "Confidentiality protection." I
think you want to delete "assurances of confidentiality."
"Confidentiality protection, including protection
from discovery in legal proceedings." Now the question is
how far do you want that legal discovery protection to--I
mean, I think this is a question for the group. Is this
tort liability we are talking about? Is it criminal
liability? Would you not want this disclosed in a criminal
case?
DR. NIGHTINGALE: Your microphone is on, Ms.
Lipton.
MS. LIPTON: I think we need the consensus of the
committee. I think if it is a criminal proceeding, I do not
know that you would want this to be non-discloseable.
DR. SNYDER: I would look at it in the context of
what every State and territory has, and that is medical
quality assurance records are non-discoverable as long as
there is no evidence of a fraudulent use of the records.
Does that make sense?
DR. NIGHTINGALE: Can you give us the words?
MS. LIPTON: Yes. Fraudulent behavior is one form
of criminal activity. I guess my question is when we get
into some things that are--what if it is intentional? What
if you have an intentional error? Someone went out and did
something intentionally. Would you really want to protect
these from disclosure in that type of criminal proceeding?
We can figure out how to wordsmith the language.
DR. NIGHTINGALE: Karen, maybe the word would be
"inadvertent," "protection of inadvertent," or not close
enough? Okay.
MS. LIPTON: You can say "civil proceedings" as
opposed to "criminal proceedings," and "legal civil
proceedings" or "civil legal proceedings."
DR. SNYDER: I think what we are trying to do is
not protect the individual--
DR. NIGHTINGALE: We are trying to do what FAA has
already done.
DR. SNYDER: We are not trying to protect them
from the act they committed, but it is from the quality
assurance activities around that. I do not know legally how
you make that distinction.
MS. LIPTON: The question is if it comes into your
system and this is the only way it is known. What we heard
from the airlines is that those type of events get bumped
out of their system immediately. They do not allow them
into their system.
Am I wrong about this?
DR. KAPLAN: No, you are right.
DR. SNYDER: Wouldn't that automatically be
reportable to the FDA which would take it out of the system?
We have not built that in.
MS. LIPTON: I don't know. Is that true, Jay or
Sharon?
MS. O'CALLAGHAN: No, not necessarily because if
you are talking about transfusion practice errors, then
those are not reportable to FDA to begin with.
So, if there was an intentional deviation from
procedure that caused the transfusion of an incorrect unit
to a patient by a nursing staff, that would not be
reportable to FDA. So that would not be kicked back to us.
MS. LIPTON: I do not think you would want
protection for that type of activity.
MS. O'CALLAGHAN: And you would not want
protection from that, I do not think, but if you are going
to kick it out of one system, you have to have another one
to put it into because it would not be us.
DR. SNYDER: That is the problem. When we start
talking about discovery--and you are correct--if we give a
blanket nondiscoverability, then nobody is ever going to
know unless inadvertently somebody in that hospital mentions
it, and then it is not going to take them anywhere because
they cannot get the records to know what happened, not even
from the local facility because they are covered by
nondiscoverability on the local level. So I do not know how
you handle that.
DR. KAPLAN: Both the Australian system, which is
now nationwide, and the aviation systems specifically
exclude those things which are considered to be potentially
criminal.
It is not so much that you violated a rule
because, in the context of what you did, you may have had
good reason to think that was the thing to do. It is whether
you intended harm, you were reckless.
We do not have to parse that if we just make the
point "civil legal proceedings."
MS. LIPTON: Or you could just say "legal
proceedings," and then you could put "excluding reckless,
intentional," or something else.
DR. NIGHTINGALE: Yes.
MS. LIPTON: That is an easier way to do it
because you do not care what context it comes up in. You
just do not want this database or this system to protect
that type of activity from the light of day.
DR. NIGHTINGALE: Or "reckless or intentional
acts."
MS. LIPTON: There you go.
DR. NIGHTINGALE: Here we go. We are heading for
the first semicolon.
"Intentionally harmful"?
Do you have something to type, Mac?
DR. PILIAVIN: Can't we leave the legal stuff
until later, until somebody who knows legal stuff?
DR. NIGHTINGALE: Actually, she is sitting shortly
to your right.
MS. LIPTON: I am trying to get a sense of what
people want before I put words up. I can put all the words
in the world up, but if you do not agree with what I am
saying, then you do not know what it means. That is what I
am trying to ask the committee.
DR. NIGHTINGALE: Yes. This is the hardest part
of the committee. It is certainly the hardest part of
trying to manage as opposed to direct a committee.
What we really need is what consensus exists among
this group, and that consensus is forming even as we speak
at 4:30 in the afternoon, and I am sorry it is 4:30 in the
afternoon, but you have to inform the committee before you
can get an informed response, and that is where we are right
now.
Dr. Chamberland has been patiently waiting for a
while.
DR. CHAMBERLAND: Actually, I think it is just an
organizational comment, and I think Jay is going to amplify
this.
I think we need to go back to about an hour and a
half ago, the comments that Jay and Jim made to sort of get
us into this discussion. I think their comments were
complementary.
I think Jay really made a very important point
that perhaps in our previous set of recommendations, we did
not differentiate or the need to differentiate between
systems that are applicable to blood collection
establishments and then transfusion services because there
already is a system in place.
There may be differences of opinion as to whether
it is adequate, can be tweaked. There should be other
things that are reported to it that are not currently or
whatever.
I thought Jim's diagram and presentation kind of
went to fill what Jay calls "the gap" that currently is not
really being addressed.
Besides wordsmithing, I think the first thing we
have to do is indicate what this system that we are talking
about is--where this should be directed, and I think that is
where Jay wants to take us.
MR. EPSTEIN: Thanks, Steve.
Looking back on the first question of the
committee in January, I think we already talked about the
blood establishment, and that what we really need to do
today is to talk about medical error in the hospital
environment.
So I would like to propose a wording change, but
let me first read what we already recommended in January or
that the voting members recommended in January.
"The experience of aviation and other industries
supports the use in all blood establishments (i.e., both
blood and plasma collection centers and facilities that
provide transfusion services) of a confidential non-punitive
system for the management of errors and accidents not
subject to regulatory requirements."
So I think we have already made a statement that
captures the world of blood banking, collection, processing,
storage, and release.
I think that to capture the scope of the hospital
environment, we need to start this recommendation with a
statement such as to address medical errors affecting
transfusion safety, hospitals should implement and maintain
error investigation and reporting systems which would
incorporate the elements, et cetera. So I am suggesting a
change in scope.
DR. SNYDER: A quick question, Jay. If you say
hospitals, then you are going to keep it at the local level,
or are you going to have something that has a national
database?
MR. EPSTEIN: I see that as the surveillance and
reporting part because data collection--it is buried in
there to whom, and I think we need to wordsmith the later
words.
We can fix it by saying report to a national
entity.
MS. LIPTON: But I think your standardized
reporting system, you have to capture, and if you just say
"hospital should establish," we are going to get 3,300
different systems. I thought we were talking about
standardized reporting.
DR. SECUNDY: Are hospitals the only place that
transfusions occur, Jay? Are you going to include any other
places? Are any places being left out by referring to "only
hospitals"?
MR. EPSTEIN: I think that is a good point. We do
not want to leave them out, but I think the distinction I am
trying to create here is that at the last meeting, we
already decided what was needed for the collection center
and transfusion service. What we are really trying to
address here is the medical environment where transfusions
occur.
MR. ALLEN: Jim?
DR. AuBUCHON: If I could provide a brief
interlude while Mac is typing, I support the concept of
no-fault protection for transfusion recipients, but I am not
sure it necessarily needs to be tied into what we are doing
today. It would be nice if we could kill two birds with one
stone, but I am not sure we need to because for those
incidents of error that result in harm to the patient, the
patient will know that they have been harmed, obviously.
I would contend that any significant harm to the
patient will result in a subsequent medical event which
someone, or someone's lawyer, will point out could be due to
some incorrect activity.
DR. SNYDER: I do not think so.
MS. LIPTON: I thought what we were talking about
was creating something where we would somehow have
guidelines or decision-making about when you tell a patient
and not wait for the lawyer to come and tell you.
DR. AuBUCHON: If I can just finish what I am
suggesting here. Apparently, I do not have unanimous
opinion around the table.
If a patient, for example, receives an incorrect
unit and suffers morbidity from that, they will not need to
have access to data in this system in order to determine
that that happened or to investigate why it happened. That
will be handled by a separate discovery process.
I like the distinction that has been made about
confidentiality protection for quality assurance activities,
and I think that is adequate and appropriate.
DR. SNYDER: My point is that there is a big
distinction. Those medical records are discoverable. There
are no two ways about it. So I think, Jim, you and I are on
the same wavelength there.
It is the quality assurance activities which this
would fall under. My question to you and to Jay is, yes,
the FDA has a reporting system, but would it be of value to
the blood banking industry to have the data that the FDA has
go into here and be aggregated so that you see trends within
your own industry? Would that be of any value?
DR. AuBUCHON: Oh, absolutely. My only complaint
about the current system or the likely extension of the
current system to transfusion services is that the trending
and data analysis are not as complete as I would like to
see, and in the past, the FDA has ceased its purview at the
door to the laboratory, and I think this process needs to
have a complete overview of the entire transfusion process.
DR. SNYDER: The reason I am asking that, with all
due respect, Jay, that would go against what you just said
because, if we took data from everybody, which includes the
blood industry, then you would have a complete database that
covers every phase. That information that is something that
is going on in his blood bank, there could be a sentinel
system. Does that make sense?
DR. AuBUCHON: Yes.
DR. SNYDER: Is that of value?
DR. AuBUCHON: Absolutely, because the near misses
that we might have missed might have been detected somewhere
else, and knowledge of problems in other institutions will
alert us.
DR. SNYDER: So that argues in my mind to having a
universal system within the blood industry, period.
DR. AuBUCHON: I concur.
MR. ALLEN: Dr. Penner?
DR. PENNER: I have just a question. If I am
recalling correctly, a litigant has to prove that damage has
been done, and therefore, near misses really would not be
something that one would be able to litigate. So,
therefore, if the information appears, it will be of no
consequence if we are talking about tort into the legal
aspect.
MS. LIPTON: Sometimes they use it, though, to
prove sort of that the institution was negligent because it
knew certain things existed. So it is not that you are
usually litigating over the near miss. You are using
evidence of a different near miss to support your claim that
the hospital was negligent in a case where there was harm.
DR. SMALL: I just wanted to make one point. We
reviewed 30 nonmedical reporting systems, and it is
interesting that we were unable to find a single instance
where the databank was breached in support of a claim such
as that.
We expected to find more instances, but there are
facts and there are perceptions, and I think among
clinicians and providers, there is the perception that
nothing is safe, but the reality is it is actually much
safer, for what that is worth.
MS. LIPTON: My only experience has been, for
example, at the AABB, every time there is a case, we have an
accreditation program and people do subpoena our records and
we are constantly involved in trying to bring them under the
peer review. So far, we have won every one, but you really
have to be diligent about--
DR. SMALL: This is my point. So far, you have
won every one which is interesting. The ASAP program has
had two challenges.
There was a challenge in New York State recently
where a practitioner's privileges were revoked, and then he
had the peer-review material opened up to say, "I was
unfairly treated. There are other people in the Department
who practice just as I do." But indeed, that was not a
fishing expedition. No one benefitted from that. No
plaintiffs' attorneys were able to get any data. It was
used to answer that one specific question.
The Carr case in Massachusetts that was settled
last year has become a very celebrated case, in that the
judge demanded to see the peer-review data in camera or in
his chambers to determine if it was acceptable to be
reviewed. The hospital refused to release it to him. He
held them in contempt.
He went to the Supreme Court in Massachusetts
where they found in favor of the hospital.
MS. LIPTON: But I think putting the language in
is not bad because all it says is that somebody ought to
review it and make sure that adequate protections exist.
DR. SMALL: I agree.
MS. LIPTON: We would not assume, but if this is a
directive to the Secretary of HHS, then presumably that
legal department will take it upon themselves to look at
these issues.
Again, I do not think we are all expert here. I
think what we are trying to do is to say these are very
important issues, but that is also why I thought this should
be directed more to setting up a system rather than just
saying a hospital should have one.
To your point, Jay, I understand the distinction
between product and process, but if you sent the original
statement over, HHS could decide we have the product side
covered, so now let's focus on the other side.
I think to the extent that we are sending this off
to the Secretary saying we want you to fix this, we are not
telling you exactly how.
DR. NIGHTINGALE: That is okay. I think if you
substituted "health care organizations" for "hospitals," you
might address the problem.
Jay, would that be acceptable to you? Is that an
identification?
MR. EPSTEIN: I think it addresses the earlier
point.
DR. SECUNDY: It is institutions where every
transfusion occurs, right?
MR. EPSTEIN: But it does not really address
Karen's point.
MS. LIPTON: I mean, my issue is I thought we were
doing two things. I thought we were trying to get to a
standardized reporting system which is now absent from
there. Am I wrong? I do not see it in there.
The other thing is I think what we are doing is
saying the instruction is to the Secretary of HHS to make
sure the system is in place, not just directing the
hospitals to do this.
The Secretary should be responsible for making
sure whatever system that is there works.
DR. NIGHTINGALE: Then, what I would ask, if it is
a substitution of "health care organizations" for
"hospitals" as appropriate, then would you take the floor
and wordsmith it as you wish? We thought we would ask
everyone around the table, even if we were starting in
reverse, alphabetical order, to have one last crack at the
wordsmithing, then ask for a second and call for a vote. Is
everybody in agreement that that is an appropriate practice?
Hearing no objections--
MR. ALLEN: Would it help to put "maintain a
standardized error investigation and reporting system,"
something along that line"?
MS. LIPTON: Yes, or there should be a
standardized--to address there should be a standardized--I
cannot remember the other language we had, a standardized
reporting system.
MR. EPSTEIN: Could I suggest that the place to
fix that is standardized national data collection? I just
thought that was in the data collection piece. If we want
the word "standardized" back and we want "national" back, I
think the place it belongs is under "data collection." It
would be "standardized national data collection."
MS. LIPTON: But you could have a separate system.
You could have "data collection" separate.
DR. KAPLAN: I think it needs "standard error
investigation reporting system" at the origin.
MS. LIPTON: Yes.
DR. KAPLAN: Otherwise, you are going to have
different systems reporting into the center.
MR. EPSTEIN: Then just add the word
"standardized," "standardized error investigation reporting
systems." At the top, "to address medical errors affecting
transfusion safety."
CAPTAIN McMURTRY: We are trying to get this to
the end.
MS. LIPTON: If you use a colon and "1," "2," "3,"
we do not run into these problems of modifying things we do
not want to modify. I learned that in English.
DR. CHAMBERLAND: I guess I just want a
clarification. Is this what the committee is asking for, "a
national system of surveillance and reporting that contains
certain elements"?
DR. NIGHTINGALE: Yes.
DR. CHAMBERLAND: My biased background, that is
the word that comes out of my mouth, but a better word would
be fine.
DR. DAVEY: If I could make a comment on that. I
thought that we earlier were really focussing quite strongly
on local autonomy and hospitals being able to develop their
own plan based on their own particular set of issues. This
to me seems a little bit draconian in terms of a national
standardized data recording system.
I believe, Jeanne, if I remember correctly, you
were an advocate for hospitals doing their own thing on
this, and I think others were, too. So I have a little
concern about the strength of this recommendation
personally.
MS. LIPTON: I am just saying that it is a
standardized reporting mechanism; that we all have the same
terminology. I do not care if you send it to your QA
committee, your peer review transmission committee. I am
just trying to say that we all need to understand what data
we are collecting and what events we are reporting. If you
leave everybody to decide that on their own, we just might
as well go home now because it is not going to happen.
Where I would like to see standardization is what
are we trying to collect, and then there will be
standardization through FDA reporting because that will be
standardized, but I would like to see the FDA system tie
into all the other systems so we are not doing five
different exercises. We should all agree on one thing we
are using.
DR. HAAS: Larry, I guess I am getting really
nervous in the sense of drafting by committee is always
incredibly dangerous. This is a very important document.
We are working to try to get a sense of what the committee
wants, but I would be really reluctant in the next 15, 20,
30 minutes to vote on this thing for exactly the type of
dialogue that is going on now.
I think the idea of having a standardized
reporting system is essential, but I also agree we do not
want to just clamp it down on the system and say you do it
this way. I think there is a lot of discussion on how you
get that standardized process.
I think once we can get to the sense of some sort
of agreement among the group, then someone should play with
this tonight and we come back and look at it tomorrow.
I will not vote. I will abstain if we have to
vote today.
DR. PENNER: Larry, maybe we need to just break it
down into pieces that people can agree are covering a topic.
There are several that can be broken out instead of all in
one piece.
DR. SNYDER: Can I make a suggestion? Could
somebody take this tonight? Instead of having it run on
like this, start with a semicolon, it looks like someplace
in there, and then have "1," "2," "3," "4," and "5," and let
us see it in the morning when everybody is fresh?
DR. NIGHTINGALE: I would be delighted to do that.
The one thing that I would ask the committee would be to put
in the additional effort so that everybody feels that their
contributions have been put in tonight.
I think we will have a much more productive
morning than we would otherwise have had. The agenda for
tomorrow is not overly long. I do want to give Dr. Emmanuel
adequate time. I want to give the public adequate time to
comment on the HCFA rule and other rules, but I think that
the comment on the HCFA rule could be completed in an hour.
I think Dr. Emmanuel's could be completed within an hour. I
think Dr. McCurdy's report could probably be within 10
minutes, and the report of plasma could be within 10
minutes. My report on the committee's past
accomplishments--well, that will take an hour, of course,
but I think we can get you all on your plane tomorrow
afternoon and do right by this.
I very much appreciate the efforts that are being
put in, but do not stop quite yet. I think you are close,
but you are not quite there yet. If I have got to smith
this baby tonight, I want to make sure I have got all your
thoughts here.
DR. SNYDER: What I am saying is I agree with you.
We should not vote today.
DR. NIGHTINGALE: I am delighted to accept the
will of the committee on that. Just give me your best shot
this afternoon so I can give you my best shot tonight.
MR. ALLEN: Okay. Why don't we start with Jim and
just go around the table. Any additions or deletions you
would like to make?
DR. AuBUCHON: Well, I have a couple of different
issues. One, I agree with Dr. Davey entirely that the heart
of the success of a system like this is going to be within
the hospital, not the national reporting system. The
national reporting system is very important, but each
hospital needs to have its own quality assurance system to
investigate and come up with appropriate corrective actions
for these incidents. I do not know where to put that in the
wording, Steve, but you can work on that and I will try
tonight as well.
The other is the last phrase, "accountability to
the regulatory agency." I am thinking that possibly the
verbiage, to my mind, might go better if it is
"accountability for adherence to regulatory requirements,"
because my focus in the diagram I showed noted that the
reporting to the FDA was primarily for violations, that is,
non-adherence to regulatory requirements, and for issues
beyond the institution, in other words, things that the
institution could not handle by itself, either to be
reported to someone else through the FDA.
CAPTAIN McMURTRY: To regulatory?
DR. AuBUCHON: For adherence to regulatory
requirements, but if the group does not want to accept that
yet, that is fine because I, too, have to give some
additional thought to this.
DR. NIGHTINGALE: Are there any objections to that
change? I would be surprised if there were, and I do not
see any.
MR. EPSTEIN: Just one comment. Jim, what was
disturbing me earlier on your diagram, you actually were
implying a change in what is reportable to the regulator
because what is reportable now are any product deviations
affecting safety, purity, and potency. It is not strictly
linked to whether they are violations. They do not have to
be violations because our mandate does go to product. The
way you crafted it would create a difference.
So, for example, an information report affecting
product safety is not actually a violation. We consider
those accidents. They are not violative, but they affect or
potentially affect product safety, purity, and potency.
Therefore, they are actionable. So I am not sure we have
quite stratified it so that we are closing a gap. We are
also changing the system.
DR. AuBUCHON: I understand that. I do not mean
any disrespect to the FDA, but the example that you brought
up is exactly the kind of situation that I really do not
think institutions need to spend their time reporting or the
FDA spend their time dealing with.
When a donor calls up to report 3 days after
donation that they now have a cold and the blood center
decides that they need to withdraw that unit from the
hospital, it really is not worth someone's effort here in
Washington to even log that in.
MR. EPSTEIN: It is a doubly bad example because
that would not be recordable.
DR. AuBUCHON: There are similar circumstances
where almost trivial things get reported.
MR. EPSTEIN: Obviously, this is a gray area, but
what is supposed to go on, under our 1993 memo, there is
supposed to be a judgment made whether, had the information been known
at time of donation, it would have precluded donation and/or
would otherwise affect safety, potency of the product.
It was never FDA's intention that any and all
incidents are reportable, only the significant ones, and
obviously there is a gray area because there are issues of
judgment.
DR. AuBUCHON: I understand, although I know that
it is after 5 o'clock, but my all-time favorite is the unit
that we received a withdrawal notice on because the donor
reported at a later donation that she had stuck her finger
during a quilting bee, had a needle-stick injury during a
quilting bee.
MR. ALLEN: Dana?
DR. KUHN: I think the intent of the
recommendation on that phrase was just to try to keep the
FDA in the loop of this recommendation, if we are
developing this system of reporting. That is where I look
to Jay to see whether this still allows the FDA to be able
to exercise their authority and the mandates given to them
by HHS or the Federal Government. That is what I am trying to keep
in here, and I am wondering if this is still doing that.
MR. EPSTEIN: I do not have a problem with the
current language. I just wanted to point out that it might
be meaning different things to different people based on
earlier discussion.
DR. NIGHTINGALE: Have your concerns been allayed
by the discussion?
MR. EPSTEIN: Well, no, but I am okay with this
language.
[Laughter.]
DR. NIGHTINGALE: Okay, perhaps later.
MR. ALLEN: Paul?
DR. HAAS: I think on a lot of this same tone, we
had a lot of discussion about the word "standardized," what
Jay is talking about. I think as we get this thing set out
in a bullet-type form, then maybe what we need is a glossary
of definition and what we mean by the terms, and even that
is going to create a lot of problems, but if we just leave
language up there which we ourselves around this room are
saying, "Well, I see it this way, you see it that way,"
well, of course, when it gets out of this committee, it is
going to be seen in many more ways.
So I think that there are some very important
words up there that are going to need further clarification,
but I do not think in the body of the recommendation it has got
to be separated from it.
MR. ALLEN: Dr. Penner?
DR. PENNER: Maybe for the benefit of Steve, I
would break this down into four parts. One is error
reporting system, standardized forms, and voluntary. That
is number one.
DR. NIGHTINGALE: Again.
DR. PENNER: Okay. A voluntary error reporting
system using standardized forms.
Number two, protected for confidentiality and
excluding reckless acts.
Number three, it contains corrective measures.
DR. NIGHTINGALE: Okay.
DR. PENNER: And number four, accessible for
national data collection, but accountable for regulatory
requirements.
DR. NIGHTINGALE: That helps me greatly because
certainly the words that had stuck in my craw and by
extension the Secretary's craw in the letter she signed
yesterday was exclusion of a large body of potentially
useful data from the process of investigation, and this
current resolution addresses that concern to my
satisfaction. So your clarification was helpful to me, and
I believe that clarification--
DR. PENNER: And you have to work on each part, I
think, and wordsmith it as opposed to the run on--
DR. NIGHTINGALE: Actually, both will be helpful.
MS. LIPTON: My only question is with the
standardized reporting, what we have done is set for the
voluntary system, and do we want to encourage that to also
be used in the nomenclature, wouldn't it be nice to
standardize? I think the FDA is trying to look at this
system so that the nomenclature and what we report and how
we analyze would be the same, go through one process that at
the end spits out reportable to FDA, voluntary--that is what
I was trying to get at.
DR. NIGHTINGALE: Yes. And I would like to try to
address the issues that Dr. Davey has brought up which is
clearly that we recognize the need for any such program to
be grounded in local action taken in response to local
issues.
The concern of society is that experience gained
at the local level be made available nationally in a way
that would parallel the efforts of the Federal aviation
industry to disseminate parallel information across its
industry. That is why I have a concern about the concern
that you raised.
MR. ALLEN: Dr. McCurdy?
DR. McCURDY: I agree that it needs to be grounded
locally, but let's not go to the point where we consider
every institution to be different because every institution
should not be different.
There are size differences. There are a number of
differences, but within those broad categories, they should
have many, many similarities, and a national database will
help you detect the problems and help get innovative fixes
where you may not be able to do that at the local level.
DR. NIGHTINGALE: Dr. McCurdy expressed what I
wished to say far better than I had.
MR. ALLEN: Jim has spoken. Dr. Busch?
DR. BUSCH: I think all of this is wonderful.
As someone who actually trained in residency in a
hospital setting when transfusion committees came up in the
early '80s and it was a big deal and I remember as a
resident being responsible for compiling all of this data
and the committee was very proactive at that time, then in
blood centers my career involved the impact of QA and
massive expansion in program.
My understanding now is that most transfusion
services even in big institutions are not anything like as
active as they were when I was in training. They are lucky
if they can get two or three people to attend these
transfusion service meetings.
So my concern about all this is I do not think
people have any balance here with respect to the resources
that it takes. In a blood center of 100,000 unit
collection, which is a modest-sized blood center, there are
five or six full-time quality assurance-type employees to
keep this system, to dissect these nuances of reporting, to
investigate and try to be quite proactive, which I think in
blood center environments it is a very proactive, corrective
action-oriented program, but it takes an enormous amount of
resources.
I think this is a very long-term objective that
needs to be in the long run integrated with an overall
program in hospitals in a much broader context. I think
this is wonderful, but I think we have to recognize that an
expectation that even large hospitals or hospital systems
can implement a program like this without substantial
resources is fallacious.
MR. ALLEN: Mary?
DR. CHAMBERLAND: Just to quickly pick up on what
Mike said and then just a couple comments here, I guess in
my mind--and I do not know if this is the expectation of the
committee at all, but I am sort of thinking of the ultimate
translation or implementation of these recommendations would
be done in a way that historically we do a lot of these
things of this nature from a national perspective, which is
you develop, devise standardized protocols, definitions, and
you pilot them.
We do not implement it at all X-thousand
hospitals, but we provide resources to a small pilot group
of hospitals to test this out, to see how it works, make
modifications, et cetera.
So, at least from a practicality of
implementation, that is what my expectation is, and I do not
know if that is what the committee is expecting because I do
not think there will be resources or really even the ability
to do this on a grand scale.
I mean, the NNIS system started more than 20 years
ago. It started with 20 hospitals, and it has taken that
time to get up to 200, admittedly a system done with very
little, if any, resources.
My specific comments about where we are going with
the recommendations is, one, I see the first recommendation
as being one that sort of addresses hospitals or transfusion
services. I guess I would just urge there would be some
maybe up-front modification of that to I think get to what
people want which is something along the lines of
consideration should be given to the establishment of a
national voluntary confidential hospital-based reporting
system to monitor transfusion-related errors and to guide
the implementation of corrective actions; critical elements
of this system should include or can include.
So I would put the national system more up front
because I agree, when you read it now, it sounds like
hospitals kind of do this independently and not under sort
of a more national umbrella.
I think there might be a need for a separate
second recommendation to address some of the comments that
Jim and others have made about what happens in the hospital,
that there is a national system, but there needs to be some
specific language about what the committee feels should be
happening in terms of hospitals' review of data, quality
assurance, whatever. I think it gets too cumbersome to try
to do it in all one big thing.
Then, thirdly, if people think that there is a
need, should there be a recommendation or commentary that
addresses what currently is going on vis-a-vis blood
collection establishments under the current FDA regulatory
system, is that adequate, are you happy with that, should it
be amplified, should there be changes made, et cetera. So
that is kind of an outline of at least how I am seeing
recommendations perhaps flow.
MR. ALLEN: Dr. Davey?
DR. DAVEY: I do not have a lot to add. I
certainly acknowledge the comments of Mike, Jim, and Mary,
which I think were very cogent and to the point.
I guess my only little bit of sniping would be to
maybe focus a little bit on my thinking of local input here,
perhaps to say something that hospitals should develop,
implement, and maintain a plan that includes standardized
error investigation or reporting systems, elements of
voluntary reporting, blah, blah, blah.
So it focuses on hospitals developing a plan, and
that they do it, and that part of that plan will be
standardized reporting. Part of it will be elements of
voluntary reporting and the other items that are there. So
it takes a little bit away from the national and puts it to
a hospital plan, even though it includes elements of
standardized reporting.
MR. ALLEN: Jay?
MR. EPSTEIN: I like what Mary Chamberland
proposed. I would actually like to see it written out.
DR. NIGHTINGALE: So would I.
MR. EPSTEIN: Generally, though, I think we ought
to substitute something along the lines of "health
care-providing organization" for "hospital," and I would
apply that comment both to yours, Mary, and also to yours,
Rick.
I also like what Dr. Penner suggested adding as a
point which is regulatory access to the aggregate of safety
data or however it was that it was stated. I thought that
was something that we had discussed earlier in the day, but
omitted. It is part of the system of--I forget the right
acronym, the NASA--ASRS system. The data properly modified
are then immediately accessible to FDA, and I think that is
important.
One additional point which is perhaps a fine
point, but the distinction between data collection and
surveillance troubles me. I think what we are reaching for
is active surveillance and data collection; that those
really are not distinct bullets. They are part of the same
bullet. Those are my only comments at this point.
DR. NIGHTINGALE: Excuse me, but you paraphrase
that, Jay, to say surveillance is a continuum and you do not
break it up? Is that the intent of what you just said?
MR. EPSTEIN: The revised language that I am
suggesting is "active surveillance and data collection."
MR. ALLEN: Dr. Gilcher?
DR. GILCHER: I prefer the term, Jay, "transfusing
facility," although "health care provider" would be
inclusive of--and I think we should include dialysis
clinics, home health care agencies, all of the transfusing
facilities.
DR. NIGHTINGALE: Is there agreement to that? It
would appear there is agreement to that.
DR. GILCHER: The second point that I think should
be included in the elements is to be sure that we include
the issue of what I am going to call a near-miss potential
error as well as the actual errors included in the
reporting. I did not see that included here, but we did not
put in the reason why we are doing this. The purpose of all
this is to in fact prevent future errors. That is the
purpose of all of this, and that needs to be included
somewhere in here, right at the top.
DR. NIGHTINGALE: At the top.
DR. GILCHER: There is another piece that I think
needs to be included in here because it does not take into
consideration the patient, and that is the informed consent
issue.
When we look at it from the FDA's perspective, Jay, a
recall on a blood product, depending on whether that is a
Class 1, 2, or 3, the patient ultimately is informed. That
is a requirement, but that is not included here. So I feel,
and I wrote here, there should be included "a requirement of
the institution's informed consent to inform the patient of
actual errors." I think that that needs to be a part of
this proposal.
Those are my comments.
DR. NIGHTINGALE: Does that include near misses?
DR. GILCHER: No. No. Actual, not near misses.
DR. SNYDER: Can I ask you one question? I do not
see how you are going to get any health care facility or
organization to admit liability. I just do not see it, and
I do not know if that belongs here because you are talking
about a national system versus they are under local
jurisdiction. Am I making sense with that?
DR. GILCHER: Yes, but what I am saying is that it
is already included by FDA regulations in a blood product.
As we have heard said, the administration of blood is where
the line has been drawn, and I do not think that it is right
not to include it.
DR. SNYDER: But my point to you is that the
administration of the blood--you are now into the medical
practice realm versus the FDA's realm. There is no attorney
for any hospital or insurance company that is going to
insure a hospital or facility that admits liability.
DR. GILCHER: But the informed consent
specifically could do that. The informed consent. I would
be interested in the opinion of others here. The informed
consent which the institution uses could tell the patient
that they will be informed of actual errors.
We told them what the potential errors are in the
informed consent, but we never tell the patient--
DR. SNYDER: I just cannot in my mind see an
insurance company or hospital or health care facility's
attorney allowing their client to expose themselves to that
liability, but that is just a statement on my personal--
DR. GILCHER: We do it on the blood products side.
It is a requirement by FDA.
MS. LIPTON: I do not think that is an issue. I
think good attorneys recognize that bad events do not get
better with age. It just starts to stink.
If you have something that is an adverse event
that happened, your best defense is getting out there pretty
quickly and telling the patient what happened because if
they find out and you knew, you got an uphill battle all the
way. So I think that you can get people to buy into that.
I do not know that it is an informed consent
issue, Ron. That is what is troubling me. You used the
informed consent.
Informed consent is I will consent to this
treatment as you have told me all the risks and benefits,
and the funny area about saying "and I will agree to tell
you something else," I mean, it really is not a contract,
and you are talking about almost a contractual issue with
someone.
DR. GILCHER: I am willing to modify that, but
what I want is to be sure that the patient is informed
because I think that is what we are missing.
MS. LIPTON: Right, and I agree with you on that.
DR. NIGHTINGALE: We can work on that this
evening.
DR. GOMPERTS: I think the key issue as far as the
right to know is whether the accident or error has the
potential for immediate or longer-term health consequences
to the individual, and in that situation, the individual
does have the right to know.
Returning to the point of data collection, both at
the local institution as well as national, it is important
that our recommendation incorporate verbiage that this
information will be analyzed, and in the event of issues
arising, corrections instituted or recommended.
Finally, my third point is that in order for
health institutions to do this properly, in addition for the
national surveillance system to carry out its actions
correctly, there would need to be resources.
DR. GUERRA: I think I have pretty much heard
expressed here in the last bit some of the concerns that I
had.
I think as long as we can include in this
statement the diversity of those institutions, facilities,
and services that are in place, I think recognizing issues
related to capacity and resources--and then I think I would
expand Jay's notion about active surveillance to also
include passive surveillance which I think is a matter of
public--it is usually something that occasionally will pick
up incidents that will not have been picked up through
active surveillance.
DR. HAAS: I, too, am not going to be able to go
beyond what I have already heard, but I guess I will take the
opportunity to emphasize this is a long-term project, not a
short-term project. The airlines have told us that. They
are still not where they want to be.
I am very concerned about the local buy-in, but I
think our role is to be clear on the goals that we want
because, if we have very different sets of data, then we do
not get what we need.
I guess from my perspective, I also have to say
something about resources. If the resources are not there,
it does not get done.
MR. ALLEN: Keith?
DR. HOOTS: I have nothing new to add.
DR. KUHN: I just want to concur with Ron, I think
there has to be the element in there of the patient, the
right of the patient to know the potential consequences or
harm because we are protecting the hospital in some way,
shape, or form, also.
Also, that is the directive that was given to us
by the Secretary in a specific letter that she has here.
DR. NIGHTINGALE: Would you like to amend that
phrase to say "the right to know potential and actual
consequences"? I am asking. I am not putting words in your
mouth.
DR. KUHN: Right, yes.
DR. NIGHTINGALE: But is that what you wish to
say, the potential--
DR. KUHN: Say that again.
DR. NIGHTINGALE: Did you posit "the right of the
patient to know the potential and actual consequences of the
therapy received"?
DR. KUHN: Correct.
DR. NIGHTINGALE: I thought so.
DR. KUHN: That would be acceptable.
DR. NIGHTINGALE: I thought that was what you
meant.
DR. AuBUCHON: Could I ask for a clarification?
Do you mean adverse consequences?
DR. NIGHTINGALE: Well, if the consequences are
beneficial, are they self-evident? I am asking, but this is
what the judge was talking about this morning.
Do you have trouble with those words? Because
those words essentially capture the position that Krever
articulated this morning, and that may or may not be yours
or ours.
DR. AuBUCHON: I am going back to the "O red cells
being transfused to the A patient" example. With 50 years
of experience, we are not aware of any adverse consequences
of O red cells being given to an A individual, and that is
done commonly, anyway.
DR. NIGHTINGALE: Understood.
DR. AuBUCHON: Therefore, were that to happen, I
would not be one to think the patient would need to be told
of that event.
DR. NIGHTINGALE: It would appear that I got the
answer to the question that I was fishing for.
Does anybody object to that answer?
MR. EPSTEIN: Can you read the way you are maybe
phrasing this?
DR. NIGHTINGALE: Yes. "The right of the patient
to know the potential and actual consequences."
MR. EPSTEIN: Of the actual errors?
DR. NIGHTINGALE: Yes.
MR. EPSTEIN: Yes. I think the concept here is to
inform the patient of actual errors--
DR. NIGHTINGALE: Yes.
MR. EPSTEIN: --with potential for harmful
consequences.
DR. NIGHTINGALE: Then we would say--give me a
hand. I am not wedded to the words. What I am wedded to
is extracting the sense of the committee, and the question
is, is the sense of the committee that the recipient of a
transfusion has a right to know both the potential
consequences and the actual consequences?
MR. EPSTEIN: I do not think that is where the
distinction is needed. The first point we need to
distinguish is the near-miss issue.
DR. NIGHTINGALE: Yes.
MR. EPSTEIN: And what we are saying is you only
have to tell the patient about an actual error, in other
words, something that did occur.
DR. NIGHTINGALE: Yes.
MR. EPSTEIN: The second is that what you need to
tell the patient is about any actual or potential harmful
consequence.
DR. NIGHTINGALE: Actual or potential or harmful
consequence.
MR. EPSTEIN: Yes. Either you did get harmed or
there might be some consequence in the future.
DR. NIGHTINGALE: I think actually "actual or
potential harmful consequences" expresses the duty to
inform, as we heard earlier this morning, much clearer than
the words that I used, "actual or potential consequence."
MR. EPSTEIN: "Harm."
DR. HOOTS: "Harm" is the key term. Whether you
put any adjectives there or not, it is really harm.
MS. LIPTON: My only comment is somehow capturing
that resources issue, and maybe we do it here, or we are
going to have an opportunity, aren't we, Steve, tomorrow to
look at some reimbursement issues?
DR. NIGHTINGALE: Oh, absolutely. Just real
quick, what you are going to hear tomorrow is that what
happened with the HCFA Final Rule on Outpatient is that they
separated blood and plasma from comprehensive--from an
umbrella. You have a passthrough there for outpatient, and
this is a sea change for the Health Care Financing
Administration's philosophy.
That experiment for outpatient, the separation of
blood and plasma products from outpatient prospective
payment, is an experiment that will be watched very, very
carefully by all parties. If it proves to be successful in
increasing access without prohibitive or predatory pricing,
an argument might be made in the future for extension of
that concept to other areas of medicine such as inpatient
and such as other components that are now lumped under
prospective payments.
MS. LIPTON: Or we can move that along faster.
MR. ALLEN: Dr. McCurdy?
DR. McCURDY: I would like to make just one
comment, and that is when you talk about resources, you
probably ought to include the regulatory agency or whoever
is going to do the analysis and the dissemination of the
results of these reports, so that the corrective action
will be reasonably uniform around the country.
MS. PAHUJA: I do not really have anything new to
add. My question to the committee is whether in a separate
recommendation we are going to address the issue of
compensation.
MR. ALLEN: Yes.
MS. PAHUJA: We are? Okay.
MR. ALLEN: Yes.
Dr. Penner?
DR. PENNER: I agree with Dr. Busch. I sit on a
transfusion committee, but when the JCAHO comes through for
their examination, it is no doubt that we get everybody in
for that meeting. I think it is whether the hospital really
wants to support it and make sure those individuals on the
committee show up, and if they do so, I do not think there
will be a problem. Otherwise, it tends to be lax.
DR. PILIAVIN: I just have a couple things to say.
First of all, obviously you are going to
reorganize this before you present it tomorrow. I just
think the issues having to do with feedback to the patient
should be in a separate section.
What you want in this part is to talk about the
data collection and the confidentiality and so on and then
have some other sections about how it should be analyzed and
how it should be paid for the analysis and how this
information once gathered within the local hospitals should
be fed back to patients.
You just think about the organization and not
getting a whole lot of different goals mixed into one
statement.
The other thing I want to suggest is I would think
it would be very sensible and would help the people on the
receiving end of this document if you could attach what Jim
has done as one example of such a system, not saying this is
what we think you should do, copy it exactly, but here is
one potential model for how this could be done, just
examples help almost everybody.
DR. NIGHTINGALE: I appreciate and will act on
your first suggestion.
I am a little unsure at this point of how to
incorporate a visual into a recommendation, but since we
have been creative in the past, I am sure we will rise to
the challenge once again with Dr. AuBuchon's help.
DR. AuBUCHON: I will be happy to e-mail it to you
as a Word document this evening, Steve.
DR. PILIAVIN: It can be an attachment.
DR. NIGHTINGALE: To my home, to Steven D.
Nightingale at WORLDNET.ATT.NET. Those of you who have
tried to e-mail me stuff at my Government address know the
difficulty of that.
DR. SNYDER: I got lost there for a second. I
think it is important that we also distinguish or make clear
that this system will not replace the FDA system. I think
that is critical because you need to stay in place, and it
is a question of however we think about this is does the
hospital report or the facility report to both--for things
that should be in both banks or under purview of both, does
it pass from the hospital to--or the facility to the FDA
then to the nation, or does it go to both simultaneously?
That is the only question I have on that point.
I do agree with Jane. The patient notification,
to put that not in the same area as this. I am not sure why
we are addressing that. I think that is really at the local
level, and that is State's right to me, a facility's right.
They come under State.
If they do not have informed consent, they do not
have a problem with the Feds. They have a problem with that
patient's attorney, bottom line.
DR. NIGHTINGALE: I think implicit in what you are
saying is that informed consent is based on 21 CFR 50.
Those are national regs.
DR. SNYDER: I am an oral maxillofacial surgeon.
If I do not give informed consent to a patient, all I have
done is open myself up to the lawsuit.
DR. NIGHTINGALE: That is right, but the informed
consent that you give the patient is--the elements of
"informed consent" are defined in 21 CFR.
DR. SNYDER: No, not at the local level.
DR. NIGHTINGALE: I'm sorry. I retract what I
just said. That is for research.
DR. SNYDER: That is for IRB which has nothing to
do with what we are talking about.
DR. NIGHTINGALE: I stand corrected. I'm sorry.
DR. SNYDER: When we talk about informing a
patient, what I was getting at is, yes, you tell them
something happened, but you never admit liability. I guess
that is what I am saying.
MR. ALLEN: John?
MR. WALSH: I look at it as a distinct difference
between informed consent and patient's right to know, and I
strongly feel--I embrace Drs. Gilcher and Kuhn's placement
of the patient's right to know in the text. I do not really
care whether it is in this particular recommendation or an
additional recommendation.
I also think resources need to be in a second
recommendation, support for additional resources.
DR. NIGHTINGALE: Let me offer a clarification.
Since these do go to the Secretary and the Department in a
package, whether they are 1(a), 1(b), or 1 and 2 does not
matter once they reach the Government.
MR. WALSH: Understood.
DR. WINKELSTEIN: And finally, I would endorse Dr.
Chamberland's preamble because it expressed what I could not
express as well. I would say that, in contrast to the
Secretary's recommendation, the patient's right to know,
although very important and critical, is beyond our purview.
So I would not put in a statement regarding the patient's
right to know.
I think it is very important. I just do not think
it is in the purview of our committee.
DR. NIGHTINGALE: Is there any further discussion
of that point?
DR. SNYDER: One last thing. Earlier, we talked
about money being authorized. Having dealt with a couple of
programs, I know that money being authorized does not mean
there was an appropriation, and that may leave an unfunded
mandate. So I think it is important that everybody goes
into this up front with the idea that, number one, an
authorization may wind up as an unfunded mandate on FDA,
and, number two, if you are going to press for something
like this, hopefully you can come up with the
appropriation.
Jay is smiling because he knows--
DR. EPSTEIN: Because I have already decided that
it is you or CDC.
[Laughter.]
DR. SNYDER: We have both been through it, or all
three of us.
DR. EPSTEIN: I mean, there is a gray area in that
we do not currently require reporting to FDA of adverse
events related to transfusion which are non-fatalities.
However, our current thinking is to change that.
If we begin to get medical adverse event reports,
then we are broadening the horizon from merely errors and
accidents to incidents potentially related to adverse events
of transfusion.
So there is going to be a gray area, but still the
underlying point is that we do not regulate medical
practice, and practice errors will never really be captured
in the FDA system as we now have it.
DR. SNYDER: Steve, if I may add one thing to what
Jay is saying, you do not want it with my agency, because
that is where the database and the health care fraud and
abuse databank, two different databanks, exist.
It would be much better served with either CDC or
another agency, if it is Federal at all. Please trust me.
You do not want it in ours.
DR. NIGHTINGALE: I assume there will be further
internal discussion within the Department on this point.
As a nod to parliamentary procedure, we have,
as I understand it, had an informal discussion. There has
been no motion put on the floor, but there is the
expectation of a motion in the morning, and I believe there
is another motion that a member wishes to bring forward.
DR. HOOTS: Actually, in light of the hour, let me
just say I have a proposal that could become a motion.
Perhaps if it is okay with the other members, I will read
it, and if there is no consensus to proceed, then we can
just bag it, but if we want to put it on the computer for
tomorrow so people can really look at it, then we will
proceed.
This relates to a potential motion about
compensation, and I started out with the word "consensus"
because I think if we are going to talk about any sort of
national compensation, if there is not a consensus of this
committee, there is probably not much point in proceeding.
So I said it is the consensus of the advisory
committee that a national system of compensation for
morbidities resulting from transfusion and/or blood product
administration be developed and implemented.
This compensation system would be accessible to
all recipients of transfusion or blood product
administration services and would be administered outside
the traditional tort system.
Awards or compensation from this fund should be
made without ascribing direct practitioner or systemic
blame.
In the event that the causative transfusion event
resulted from action reflective of egregious violations,
recklessness, or intention to harm, it would be expected
that recourse to tort action would be preserved.
DR. AuBUCHON: What law school did you graduate
from?
[Laughter.]
DR. NIGHTINGALE: A good one.
MS. LIPTON: I mean, I just have a bunch of
questions which maybe are better dealt with offline just in
terms of what it means.
DR. NIGHTINGALE: I would very much appreciate an
offline discussion. I do not think that would violate the
Federal Open Meeting Act. So I would state for the record
that it is the intent of this advisory committee that all of
its deliberations be public, but perhaps the public and the
committee are both best served if some of the technical
discussions are held offline, and I see no objection from
the committee to that course of action.
MR. ALLEN: Can we get a motion to adjourn?
MR. WALSH: So moved.
DR. HOOTS: Second.
MR. ALLEN: The meeting is adjourned. See you all
in the morning.
DR. NIGHTINGALE: Thank you all very, very much.
[Whereupon, at 5:51 p.m., the meeting was
adjourned, to reconvene Wednesday, April 26, 2000.]