DEPARTMENT OF HEALTH AND HUMAN SERVICES
FOOD AND DRUG ADMINISTRATION
SCIENCE BOARD ADVISORY COMMITTEE MEETING
Wednesday, April 9, 2003
8:10 a.m.
Advisors and Consultants Staff
Conference Room
5630 Fishers Lane
Rockville, Maryland
PARTICIPANTS
Michael P. Doyle, Ph.D., Chair
Susan F. Bond, M.S., Executive Secretary
MEMBERS
Robert M. Nerem, Ph.D.
Harold Davis, D.V.M., Ph.D.
Martin Rosenberg, Ph.D.
Cecil B. Pickett, Ph.D.
Josephine Grima, Ph.D. (Consumer Representative)
Jim E. Riviere, D.V.M.
Cato T. Laurencin, M.D., Ph.D.
Katherine M.J. Swanson, Ph.D.
Kenneth I. Shine, M.D.
John A. Thomas, Ph.D.
FDA
Norris E. Alderson, Ph.D.
Robert Buchanan, Ph.D.
Kathryn Carbone, M.D.
Daniel A. Casciano, Ph.D.
Lester W. Crawford, D.V.M., Ph.D.
David Feigal, Jr., M.D., M.P.H.
Jesse Goodman, M.D.
John Marzilli
Mark B. McClellan, M.D., Ph.D.
Stephen Sundlof, D.V.M., Ph.D.
Janet Woodcock, M.D.
C O N T E N T S
Call to Order: Michael Doyle, Ph.D.
Waiver Statements: Susan Bond
Introductory Remarks: Norris E. Alderson, Ph.D.
Welcome and Overview of FDA's Initiative to Improve the Development and Availability of Innovative Medical Products: Mark B. McClellan, M.D., Ph.D.
Quality Systems Approach to Medical Product Review: Janet Woodcock, M.D.
Quality Teams to Improve Regulatory Processes: David Feigal, Jr., M.D., M.P.H.
Quality Systems for Clinical Pharmacology and Biopharmacology Review: Larry Lesko, Ph.D.
Quality Systems for CMC Review: Yuan-yuan Chiu, Ph.D.
Questions and Discussion with Board/Presenters
Update on Pharmaceutical Manufacturing Initiative: Ajaz Hussain, Ph.D.
Update on Patient Safety Initiative: Kelly Cronin
Questions and Discussion with Board/Presenters
Open Public Comment
Fostering Technology Development--Pharmacogenomics: Janet Woodcock, M.D.
Pharmacogenomics--Preclinical Studies: Frank Sistare, Ph.D.
Pharmacogenomics--Drug Metabolism/Dosage: Larry Lesko, Ph.D.
Industry Use of Pharmacogenomics and Regulatory Issues: Brian Speak, Ph.D.
Fostering Technology Development--Pharmacogenomics (Continued): Janet Woodcock, M.D.
Ethical Issues with Regulatory Review of Pharmacogenomic Data: Benjamin Wilfond, M.D.
Questions and Discussion with the Board/Presenters
Closing Remarks/Future Directions: Michael P. Doyle, Ph.D.
P R O C E E D I N G S
Call to Order
DR. DOYLE: Good morning. I am Mike Doyle. I am the incoming Chair of the Science Board and I want to
welcome you all to this spring meeting of the FDA Science Board.
We might
begin by introducing the Board to you.
I am going to have each Board member introduce him or herself, briefly
state what you do and where you are from.
Then we will move on with the meeting.
I am Mike
Doyle. I am the Director of the Center
for Food Safety at the University of Georgia.
I am a food microbiologist by training.
DR. THOMAS: I am John Thomas, Vice
President Retired, Professor Emeritus, Pharmacology and Toxicology at the
University of Texas Medical Center at San Antonio.
DR. RIVIERE: Hi. I am Jim Riviere. I am a
pharmacologist/toxicologist at North Carolina State University. I direct the Center of Chemical Toxicology.
DR. GRIMA: I am Josephine Grima. I am from the National Marfan Foundation. I am the Director of Research and Legislative Affairs and I am the consumer representative.
DR. LAURENCIN: I am Cato Laurencin. I am Professor and Chair of Orthopedic
Surgery and Professor of Chemical Engineering and Biomedical Engineering at the
University of Virginia.
DR. SHINE: I am Ken Shine. I am Senior Policy Fellow and Head of the Center for Domestic and International Health Security at the RAND Corporation and I am still going to keep an eye on what kind of food you eat today because I am a cardiologist.
DR. SWANSON: I am Katie Swanson. I am a food microbiologist. I am currently Director of Quality and Regulatory Operations for Yoplait, Colombo and soon to be the Director of Global Product Safety for General Mills.
DR. PICKETT: I am Cecil Pickett. I am President of Research and Development
at the Schering-Plough Research Institute which is the R&D arm of the
Schering-Plough Corporation.
DR. ROSENBERG: I am Marty Rosenberg. I am a microbiologist. I have recently retired from GlaxoSmithKline, where I was Head of Infectious Diseases.
DR. DAVIS: I am Harold Davis. I am a toxicologist and pathologist. I am Vice President of Preclinical Safety
for Amgen, Inc.
DR. NEREM: I am Bob Nerem. I am from Georgia Tech where I am Director of the Institute for
Bioengineering and Bioscience and also direct the Tissue Engineering Center. I am basically a biomedical engineer.
DR. DOYLE: Thank you. As incoming Chair of the Science Board, it
is difficult for me to start the meeting with what I consider to be very sad
news, and that is, we have had with us for three years Ms. Susan Bond who has
been what I call the glue that has held the Science Board together.
Susan has
been our Senior Science Policy Analyst for the Science Board and she is moving
up. She is now going to be the Special
Assistant to Deputy Commissioner Crawford.
We are going to miss Susan, but we do want to, as a Board, give her some mementos of our appreciation.
So,
Susan, if you would join me up here, I will share with you some of our
mementos. Susan is a big fan of
bulldogs. We, at the University of
Georgia, are also big fans of bulldogs.
So we wanted to give to her some bulldogs for all occasions.
This one
here--you can cuddle up at night with that one. This one--you can keep warm in the wintertime with this one. This one will keep you covered up in the
summertime.
Susan, we
really appreciate all that you have done for the Board and do wish you all the
best with Dr. Crawford.
[Applause.]
MS. BOND: Thank you. I appreciate it.
DR. DOYLE: Next, Susan is going to talk about the waiver disclosures.
Waiver Statements
MS. BOND: If you could just bear with me. I have to read these for the public record.
The
following announcement addresses the issue of conflict of interest with respect
to this meeting and is made part of the public record to preclude even the
appearance of such at the meeting.
The Food
and Drug Administration has prepared general matter waivers for Drs. Nerem,
Davis, Grima, Riviere, Rosenberg,
Doyle, Laurencin, Shine, Swanson, Pickett and Thomas. A copy of the waiver statements may be obtained by submitting a
written request to our Freedom of Information Office. The waivers permit them to participate in the committee's
discussion of the FDA's Initiative to improve the development and availability
of innovative medical products and to discuss the agency's initiatives in
pharmaceutical manufacturing on patient safety.
The topics
of today's meeting are of broad applicability and, unlike issues before a
committee in which a particular product is discussed, issues of broader
applicability involve many industrial sponsors and academic institutions.
The
participating committee members have been screened for their financial
interests as they may apply to these general topics at hand. Because general topics impact so many
institutions, it is not prudent to recite all potential conflicts of interest
as they apply to each participant.
The FDA
acknowledges there may be potential conflicts of interest but, because of the
general nature of the discussion before the committee, these potential
conflicts are mitigated.
With all that said, I will just give you some housekeeping notes. We have open public comment scheduled for 1 o'clock. I would just remind everybody
to turn your microphones on when you speak so that the transcriber can pick
everything up.
That's it.
DR. DOYLE: Next, we have some introductory comments from Norris Alderson.
Introductory Comments
DR. ALDERSON: Thanks, Mike.
In addition to the members who introduced themselves, I do want to add some comments about each of them to give you an idea of the expertise and experience these four individuals bring to the Board. Dr. Laurencin is the Lillian T. Pratt Distinguished Professor of
Orthopaedic Surgery and Professor of Chemical and Biomedical Engineering at the
University of Virginia. He just moved
there and he is telling me he really likes Charlottesville.
He and I
had an experience last night. I wound
up with his bags and he, by chance, just had my cell-phone number before I got
away. So we have a little something to
share now. But his experience is in
chemical and biochemical engineering and orthopaedic surgery.
Dr. Katherine Swanson is the Director of Quality and Regulatory Operations for Yoplait, Colombo at General Mills and former Director of Microbiology and Food
Safety at Pillsbury. Her expertise is
in food microbiology and food science and its impact on public health.
John
Thomas is Professor Emeritus of Pharmacology and Toxicology at the University
of Texas Health Science Center. His
expertise is in toxicology and pharmacology.
Last, Dr.
Ken Shine is a Founding Director of the RAND Center for Domestic and
International Health Security and former President of the Institute of Medicine
and former President of the American Heart Association. His clinical experience is in internal
medicine and cardiology. He has
extensive experience working with global issues and emerging infectious
diseases, bioethics and access to care.
Certainly,
we welcome these four new members to the Science Board.
Mike
advised everyone of Sue's departure. I
want to add to that. I have been in
this position now almost two years. Sue
is one of those people I have totally enjoyed during that period. She and I have a lot of things we
share. She will tell you I am her chief
advisor now and I keep telling her, now that she has moved to Dr. Crawford,
that that service is no longer free, that I am starting to charge next week.
But she
still works with me until the end of the month. Her other big responsibility, and a huge one, is this annual FDA
Science Forum. I do want to remind
everybody of that. It is April 24 and
25 at the new Washington Convention
Center. We think we have an outstanding
program this year with both Dr. Crawford and Dr. McClellan being major
participants.
We also
have participants, members of CDC and NIH, and, hopefully, the Department. We hope to know that next week. But there is a large poster session of FDA
research. The three tracks of the
presentation are Risk Management, Counterterrorism and Novel FDA Science. We have included a copy of the program with
your notebooks, so if you have questions about that and you are interested in
attending, please see Sue or me.
We are
starting to recruit immediately to backfill Sue's position. In the interim, until we get somebody named,
if you have questions or issues you need to address for the Science Board, you
can easily call me and we will take care of it for you.
Thanks,
Mike.
DR. DOYLE: Thank you, Norris.
Next, it
is a real pleasure for me to introduce our next presenter, Dr. Mark McClellan
who is the Commissioner of the Food and Drug Administration. We, at the Board, had the opportunity to
have dinner with Mark last night. For
many of us, it was the first time to interact with Mark. But I will tell you, I think the FDA is very
fortunate to have someone with his capabilities in charge.
So, speaking for myself personally, Mark, I appreciate you being the leader of FDA.
Dr. McClellan is going to give us an overview of FDA's initiative to improve the development and availability of innovative medical products. So, the stage is yours.
Welcome and Overview of FDA's Initiative to Improve
the Development and Availability
of Innovative Medical Products
DR. McCLELLAN: I want to thank all of you
for coming this morning, especially Mike for his leadership of our critically
important Science Board. This is a very
integral part of FDA's policy-development process, as many of you know, and
that is why I am very pleased to see so many representatives from the public
here today. We are looking forward to
hearing from you later.
We need
this kind of scientific input now more than ever. I talked about this a little bit last night with our Science
Board highlighting how the challenges that we face today are, in many ways,
more complex than the challenges that we faced ever before. But opportunities for addressing those
challenges are also better.
We have
had some significant achievements on the legislative front in the past year
with new authorities and new resources for both priorities like addressing
terrorism threats and our challenges on approving safe and effective medical
products more efficiently. I am going
to talk more about that this morning.
But
figuring out how to use these new authorities as well as the changes that are
taking place more rapidly than ever in the fields of science is an important
and ongoing challenge for the agency.
That is why this kind of interaction is a critical part of our efforts
to do our job of protecting and promoting the public health as efficiently as
possible.
All of you on the Science Board have a major role to play in this. We very much appreciate your stepping up and
being willing to take on that responsibility.
We look forward to the discussions here today and I am going to look
forward to some more extensive interactions with you in the months ahead as we
move forward on these critical initiatives for the public health.
I, too,
want to thank Norris and Susan Bond for their terrific work in putting together
this effort. I hope you all are going
to find the day's presentation by FDA staff informative. I know that they have put a tremendous
amount of new thinking into the initiatives and ideas that they are going to be
summarizing for you here today and they, like I, are very much looking forward
to your critical feedback and appraisals of how we can fulfill these
initiatives as effectively as possible.
The FDA
team is one that I have come to enjoy working with tremendously in my four
months at the agency. Les Crawford, in
particular, has done a great job under difficult circumstances, working as
Acting Commissioner for the better part of a year while I was getting on board
and those types of things were happening. I will talk about this more in a
minute.
During
that time, the FDA did manage to make some major strides forward on important
issues. Janet Woodcock has done a
tremendous amount to organize the presentations for today. Yesterday, you all got to hear about some of
the activities going on in more detail at our Drug Center. I hope you, like I, have found that work to
be a very impressive effort to make the most of the limited resources that we
have available to fulfill our mission.
Our
center directors and senior leadership represented here today will be
participating in the meeting, too.
Again, I want to thank them for their ongoing efforts to help us fulfill
our mission.
I hope
this meeting offers something for our Science Board in return. We are expecting a lot from you in terms of
input and ideas about how we can use the best available science to fulfill our
mission effectively but I hope this is also an opportunity for you to hear back
from us about some of the front-line challenges that we are facing in trying to
think creatively about the practical implications of many of the new scientific
developments and new public-health threats facing us today.
As I
mentioned, the challenges of promoting and protecting the public health today
are greater than ever. We are facing
new threats of terrorism. We are facing
new opportunities to bring more complex but potentially much more effective
medical products to the market and to the public in the months and years
ahead. For the sake of public health,
we need to find the most effective ways possible to address these new challenges.
We don't
have unlimited resources at the FDA. We
don't have unlimited staff. So we have
to prioritize. We have to identify the
best opportunities for improving the public health and the best way to fulfill
those opportunities. In many ways, this
is requiring a reexamination and updating of the way that FDA does its
job. This is true in each and every one of our centers, our Veterinary Center, our Food
Center. All of our centers are making
some fundamental changes in the way that they approach fulfilling our mission
given the greater challenges that we face today and some of the new resources
and opportunities that I already outlined.
What we
are going to focus on today is illustrating some of the key ideas that are part
of FDA's new Strategic Action Plan in the context of working to lower the cost
of developing new pharmaceuticals, new devices, and other new medical
products. They must still continue to
meet our standards of safety and effectiveness but we are trying to make sure
we are using the best science, the best biomedical science, the best regulatory
science, even the best economic science, in fulfilling this mission as
efficiently as possible.
This is a
particularly important challenge today because of some of the changes that are
occurring in the biomedical industry.
This is why I think it is a very good opportunity to illustrate to you
some of the key ideas that we are
trying to implement as part of our goal of efficient risk management throughout
our activities here at the agency.
Why is
this a special challenge today? As many
of you know, the approval of new drugs and biologics is now at its lowest level
since the Prescription Drug User Fee Act was implemented over a decade
ago. The number of truly new drugs, the
so-called new molecular entities, that have reached the agency is down to around half of what it was as recently as 1996.
The number of new biologics is down significantly as well.
These
challenges are not confined to drugs but are common to all medical
technologies. At our Devices Center,
for example, the average review time for a major new device, a so-called PMA, is now almost fourteen months, probably not surprising given that the staff level of the Devices Center has dropped by 8 percent since 1995 and we are facing more
complex and difficult device review decisions than ever before.
So, one
possibility for the future is that this kind of trend continues, that the cost
of developing new medical technologies keeps rising. It is easy to see how this might happen. The kinds of blockbuster drugs that were a
hallmark of medical product development in the 1980s and 1990s are fewer and
farther between today. Many of the
receptor sites that those drugs have effectively exploited, based on a good
understanding of molecular biology in humans, have been exploited.
We expect
to see more of these drugs coming along in the upcoming years but not so many
as before. In contrast, while there has
been a tremendous increase in biomedical research, a doubling of the NIH budget
is being completed this year, over $27 billion. Many in the public do not appreciate the fact that there has also
been a substantial increase in private-sector research and development
contributions. For example, the research
activities of the pharmaceutical companies have doubled since 1995.
We are seeing
a tremendous increase in research and development spending but, so far, it
hasn't translated into new products being approved by the agency. The decline in new-product approvals that I
just talked about is a direct reflection of fewer new-product applications
coming in to the agency.
A lot of
people think this has multiple causes.
One important cause that has been cited is that the kinds of products
being developed now are going to be a bit
different. Much of the new
investment has gone into new, but more basic, types of biomedical research than
has occurred in the past. A new
understanding of the genome and genomics, a new understanding of how proteins
function at the individual protein level, proteomics.
There are
many other "omics" fields as well that are still at relatively young
ages. Remember, it was just a few years
ago that the genome was actually sequenced.
So, translating all this new information into medical products is posing
a challenge for product developers and it may also be a challenge for our
agency, as I will talk more about in a while.
So one
possible course for the future is, as we get all these additional sources of
information, we simply add to the cost and complexity of developing new
breakthrough medical products. Again,
it is easy to see how this could happen.
If we don't get a solid and efficient understanding of what all this
pharmacogenomic and all this proteomic information and data mean, it could mean
additional preclinical tests or additional tests in conjunction with our usual
review process.
Because
these enzyme-activity changes that are associated with potentially thousands of
different testing sites don't have clearly understood implications for the
human body and human systems, we may also end up adding on more testing at the
clinical stage to just understand whether a particular abnormality or
particular deviation in microarray testing has any clinical consequences for a
patient.
If we
take that approach, we are going to be adding further to the cost and time for
product development and that is already going up rapidly. According to that Tufts University study
group on this issue, the cost of developing a truly new drug is now over $800
million in present value terms. While
people may quarrel with what the exact number for this cost ought to be and how
it should be calculated, nobody can quarrel with the fact that that number has
gone up a lot. According to the Tufts
University estimates of a decade or so ago, it was less than half as much.
A lot of
the added costs are coming in at the clinical-testing phase but more costs are
coming in in other parts of the development as well. The time to get a product approved at the agency is going up,
too. So, all of these factors are
contributing to increasing costs of product development. If these trends continue, I don't think it
would be a huge surprise if we didn't see a significant increase in the
number of new products approved, especially if you keep in mind that these new
products are not blockbuster drugs that are intended for a market of 5 or 10 or
50 million people but rather more individualized treatments that are likely
to be highly effective but that would be targeted to more particular segments
of the patient population.
It might
involve not just a drug but a drug paired with a diagnostic test or a novel
drug-delivery system in order to make the treatment particularly effective for
a subgroup of patients. If that costs
$800 million or a billion plus, we are not going to see a whole lot of those
breakthroughs.
I don't
think that the future needs to be that way but it does mean that we need
creative solutions in all kinds of dimensions.
First, policy makers need to find new approaches to the problems of
health-policy reforms that we are facing today. As healthcare costs keep going up, healthcare affordability is a
front-and-center issue and is going to remain so. So we need to find better ways in policy making to make
healthcare more affordable for all Americans while still encouraging innovation.
This is a
very complex debate. It involves a lot
of money. There is a lot at stake. My only concern is that, if we don't find
good, creative solutions here, we may, in fact, implement reforms that do not
encourage this kind of innovation, all this potential for breakthrough and more
effective treatments in the 21st Century.
So we
need to adopt, in the policy setting, ways to realize more value in what we do
in healthcare in order to promote the availability of even safer and more
effective treatments today and in the years ahead.
Clinical
researchers need to change the way that they are undertaking their activities,
too. I have had a number of very
interesting discussions with Elias Zerhouni and other leaders from the National
Institutes of Health in recent months.
They are talking, over there, about nothing short of taking steps to
re-engineer the clinical-research enterprise to help get around some of the
issues that I talked about before.
We need
clinical research that is integrated more than ever with some of these basic
science biomedical breakthroughs in fields like genomics so that we don't end
up having a longer and more costly and more time-consuming process for getting
effective products from the stage of basic research ideas to the stage of proof
of concept.
But,
beyond there, FDA has a critical role to play as well, so we need creative
thinking in the area of regulatory initiatives as well. Going from a proof of concept of a new
medical product to a product that we can actually have confidence can be used
safely and effectively by the public and can be produced using techniques that
reliably get the treatment that is desired to the public is not a simple
matter. It requires solving countless
more or less difficult and more or less challenging problems in the process of
product development, effectiveness testing and manufacturing as well as more
effective monitoring, I think, in the postmarket setting, too.
So we
have a major role to play in trying to facilitate this development. FDA can't cause medical products to come to
the public quickly and effectively by itself.
That is primarily the responsibility of the product developers. But we do have a responsibility to make sure
that our regulatory processes are keeping up with the changes that are
occurring in medical technology and are responding to the very critical
health-policy problems that we are facing today.
Again, at
the top of that list is healthcare affordability, making these treatments
available, making sure they are safe and effective but making them available at
no greater expense to the public than is necessary.
So we are
undertaking a lot of activities to try to fulfill our responsibilities in this
area. Many of these were announced as
part of our medical technology initiative back earlier this year. I know you all have some materials on that
and have been hearing updates from us by e-mail and through other means as this has been going along.
I just
want to review some of the main features of that initiative right now. It had three prongs. The first prong involved conducting a
root-cause analysis of the so-called multiple-cycle product approvals that have
occurred recently. These are products
that took more than one round of review before they could be approved.
An additional
round of review means at least an additional ten months or so of time before a
product can be approved as well as millions of dollars of additional costs in
product development. In undertaking
this effort, we found, in our preliminary looks, that, in many cases, these
situations could not have been avoided but, in some cases, perhaps they could
have been through earlier communication and feedback, for example, with the
companies that are developing products.
As part
of our Prescription Drug User Fee Reauthorization, we are in the process of
implementing some pilot programs to test whether frequent early consultation
with product developers and whether reviewing applications a piece at a time,
rather than waiting for the whole application to come in, can help us avoid
some of these multiple-cycle delays by getting the information the companies
need to understand their way through the regulatory pathway to, then, more
quickly and more effectively, as they are designing their studies and the
process, to get to a determination of safety and effectiveness.
It is
going to require some more resources up front through these earlier
consultations. We want to investigate
whether that has an offsetting impact in terms of reducing our total amount of
time and effort in the product-development process by requiring fewer repeat
cycles and we want to see what the impact is on the time and the cost of
developing products as well.
Second,
we are developing a quality-systems approach for our review procedures. The idea here is to apply the
best-management practices internally as effectively and widely as possible as
we undertake reviews. You are going to
hear much more about this today. David
Feigal is going to talk about a quality-teams approach that is being implemented
at our Devices Center.
Larry Lesko and Yuan-yuan Chiu will be talking about initiatives to implement
quality systems in clinical pharmacology and in our new drug-chemistry
activities as well. I want to emphasize
that this is a fairly fundamental change that is ongoing in the agency and it
is ongoing in all of our product-development centers. So we will value your feedback on the initial steps that we are
taking in this direction.
A third
part of this initiative is that we are working to publish new guidance
documents in areas where we think regulatory pathways could be improved or
better defined. This includes some
areas that we have identified as priorities for new product development where
we don't think our regulatory standards have been defined or have been
communicated as clearly to the outside world as they could be for diseases
where the opportunities for reducing the burden on the public health are
actually quite substantial.
These
include new guidances, for example, in such areas as obesity and diabetes
treatment as well as many areas of cancer care. We are also developing new guidances that will address emerging
areas of product development. So, just
as a couple "for examples" here, these guidances may consider the use
of bioimaging tools to help us more quickly and accurately map drug
distribution in particular areas of the body where the drugs are intended to
reach.
To the
extent that we can identify valid markers or valid biomarkers that are clearly
related to important later health outcomes of interest to clinicians, that can
help us in getting through the review process efficiently. Validated biomarkers may be able to
streamline clinical trials by allowing for shorter follow-up times and more
confidence that a product is going to have the desired clinical effect and it
may allow us to enroll patients who are more likely to respond based on a
molecular signature from these kinds of biomarkers as well as to make sure
that, when these treatments are actually approved, we can use them in
conjunction with diagnostic tests and biomarkers to help make sure that
patients are getting a maximal benefit from the treatments that they actually
use.
As part
of this effort, we are going to be running a number of joint workshops with
outside experts including clinical groups like the American Society of Clinical Oncology and the National Cancer Institute to help in the
development of these biomarkers.
For other
technologies such as cell and gene-based therapies and pharmacogenomics, we
also intend to establish partnerships with outside experts including some with
the National Institutes of Health to help guide research programs and
activities that can lead to approval.
I want to
say a little bit more about pharmacogenomics in this context because this is a
major focus of our presentations to you today.
This is one of the areas where we think guidance development can help
and where we think there are significant opportunities for improving the regulatory
process if we can use the emerging pharmacogenomics information effectively.
Many
people have argued that a stream of new medical breakthroughs could well be the
result of integrating genomics information into medicine and making medicine
even more of an information science than an art as it is today. So you are going to be hearing a lot about
these efforts in the agency to work with drug developers and other product
developers to use these tools and this information to accommodate the
transformation here.
Right
now, much of the kind of data that could be used from molecular genetics
information technology and related areas is lost to the FDA. We are not able to take advantage of it
because either the product developers don't do certain studies that we think
would be helpful or they don't submit the results to us out of concern that this
is an early science and it is unclear how the information fits into the
regulatory process other than to potentially raise some red flags.
It is
true that most scientists, including our reviewers, still don't know what many
of the pharmacogenomic patterns that we are starting to see in all these
microarray studies really mean for their impact or their predictive value for
the response or potential safety problems with the treatment.
This is a
problem. We cannot improve our
regulatory process unless we can use this new information efficiently. Now, our hope is that this science will
progress and that, one day, these kinds of genomic markers will be used to
accurately predict a patient's propensity to suffer drug toxicity based on
their genomic profile, allowing us to approve and label treatments so that
they can be targeted effectively to the patients
who are most likely to benefit, again increasing the value of medical services
by avoiding the use of treatments in cases where they are unlikely to have a
benefit.
So
pharmacogenomics, if it actually works, is a tremendous tool for providing
doctors and our reviewers with a science-based approach to risk management and
risk assessment related to new products.
But, in order for us to be in a position to evaluate this science and to
help support its productive development, we have to come up the learning
curve on this new science along with product developers and researchers.
We are
starting to do this already with work that we are undertaking. The National Center for Toxicological
Research has a major library development program ongoing for genomic and
proteomic information. But we also need
help from outside experts. We need
product developers and researchers to have efficient ways of sharing their
results with us so that we can incorporate it into what we are doing in our
review processes.
We need
to discuss this new science collaboratively, especially at this experimental-research
phase of its development. So, during
this meeting, we are going to be seeking your advice about ways that we can
make that happen. Janet, Larry Lesko
and industry and NIH representatives here today are going to all be making
presentations on this issue.
Another
area where we think we can increase the value of medical products involves our
application of risk-management principles to overhaul pharmaceutical
good-manufacturing practices. These are
the regulations that govern how medical products can be produced in a way that
is recognized by FDA as leading to safe, effective and reliable treatments.
Manufacturing
processes often don't get as much attention in the areas of biomedical science
as they deserve. But we are working to
change that thanks to Les Crawford's leadership on this issue when it was
announced last August. And Janet
Woodcock is chairing--how many work groups do we have in this, like fourteen or
something like that? It is really not a
small effort.
But that
is appropriate here. Our GMP
regulations have not been fundamentally updated in a
quarter of a century. Meanwhile, during
this time, best-manufacturing practices in other industries that have virtually
zero tolerance for impurities or errors have changed fundamentally.
Think
about the semiconductor industry, where it was in the mid-1970s versus where it
is today. Those companies were
struggling along then and were wondering if there was even going to be a U.S.
industry in this area. Instead, they
adopted some fundamental changes in their manufacturing practices such as
Six Sigma manufacturing and total quality-control systems with a goal of zero
defects in production.
As a
result, they have been able to achieve significant improvements in
manufacturing productivity, big improvements in output and reductions in cost
that have translated into better value
for consumers without sacrificing quality.
In fact,
they have improved quality. Their error
rates, their precision problems, are lower than ever. Those kinds of techniques have not been applied to nearly the
same kind of degree in the area of pharmaceutical and biologics production as
they have been in other industries. We
think there are lots of opportunities whereby, through regulatory reforms, we
can facilitate these much needed changes to improve productivity and reduce
costs of medical products.
Ajaz
Hussain will be talking to you about an update on the implementation of this
program today and I am very proud to say that, even though this program has
only been in existence for seven months or so, we are already implementing
changes in our regulatory processes to take advantage of these new insights.
Even if
you develop drugs more efficiently, even if you produce them more efficiently,
that is not the end of the story. Too
often, the drugs and devices and other products that we regulate are involved
in preventable adverse events. As many
as 20 percent of Americans have experienced some kind of medical error and,
according to the latest surveys I have seen, more than a third of Americans and
an even higher percentage of doctors have family members that have been
affected by significant medical errors.
So we
need to do more about that, about understanding how we can help our products to
be used effectively not just under the idealized conditions of clinical trials
but in the real-world conditions where they are actually going to be used.
I am
particularly concerned there about issues related to people who are using our
products in conjunction with a number of comorbid conditions or other
medications that are very difficult to test in a preclinical setting, minority
populations, other special populations, where particular issues of safety or effectiveness
may arise. We need to do more in the
postmarket setting to uncover these adverse events and understand better how to
prevent them. So we are working on developing a range of
information-technology tools in addition to some regulatory changes that we
have announced just in the past month to do a better job here.
I would
like to highlight, as well, that we have got new authorities in this area
thanks to the Prescription Drug User Fee Act Reauthorization and the new
Medical Device User Fee Program that we intend to take advantage of. So, for example, we have announced
partnerships with hospitals such as Columbia Presbyterian under our so-called
Marconi Program.
We are
expanding the MedSun Program in our Center for Medical Devices to, hopefully, a
couple of hundred hospitals and other healthcare institutions in the coming
year to collect more data automatically, not just relying on the so-called
spontaneous reports from product developers and health professionals but more
automatic data collections based on modern IT systems to understand quickly
when and why there is a problem with some of the products that we have approved
and to give us a two-way street for providing more quick and effective feedback
to the health professionals and to the general population that is involved in
using these treatments.
You will
hear an update on this program from Kelly Cronin today. This is, I think, a very important way to
increase the value of the products that we regulate.
Another
agency priority involves providing better information to consumers and to
patients. We are not going to have much
time to talk about this today given everything else that is going on, but,
obviously, it is extremely important that people be able to get accurate
information about risks and benefits of a product to help use it effectively
and we need effective labeling for physicians as well so that they can get the
information they need on how a product has been proven to be safe and effective
in guiding their own treatment decisions.
So we are
undertaking a fundamental look at our product labeling requirements for
physicians and we are also going to have some activities in the coming year on
improving information available to patients.
We are working on some new guidance about risk/benefit information in
the brief summary of direct-to-consumer advertising as well, lots of work in
these areas.
Generic
drugs are another area where we can increase the value of products available to
the American public. We need to promote
the availability of more low-cost safe and effective options for consumers in
this area. There are several hundred
major pharmaceuticals that are coming off patent in the next few years that
provide opportunities here. Generic
drug manufacturers provide medications that are just as safe and effective as
brand-name counterparts.
So here
is an area where we will be expanding.
We propose some major budget expansions in this area and we are going to
be implementing some fundamental reforms.
Many people don't know this but the actual time to approve a generic
drug is significantly longer than the time to approve a new drug even though it
is a less-complex process. The reason
for that is that multiple cycles of review are a way of life in the
generic-drug approval process.
Only 7
percent of generic-drug applications are approved the first time around. We are going to change that through some
fundamental reforms in our generic review process. We are also in the process of implementing some regulatory
reforms in the Hatch-Waxman law that governs generic competition to make that
work more efficiently as well.
So these
are just some examples of what we are trying to do here at the agency, to apply
the principles of efficient risk management throughout our activities in the
context of making safe and effective, better, medical products available at a
lower cost. That involves our premarket
review processes. It involves our
manufacturing regulatory processes. And
it involves our postmarket activities to address medical errors and adverse
events as well as getting better information to consumers.
This kind
of comprehensive approach is something that the expert staff at FDA have been
doing a tremendous amount to implement.
That is why I think the schedule here today is so packed and we really
are just focusing on one major, but only one, area where we are trying to
apply these principles of efficient risk management throughout the agency.
I am
going to stop my remarks there. I
wanted to give you an overview of what you are going to be hearing about today
and, again, give you a major plea for critical and useful input as we work to
meet these new challenges facing the agency and these more critical challenges
for promoting and protecting the public health that we face today, challenges
that are more difficult and complex than ever before.
Thank you
very much for listening to me and thank you again for your contributions to the
tremendously important work of this agency.
We very much appreciate it.
DR.
DOYLE: Thank you for that excellent
overview.
Next we
are going to hear from Dr. Janet Woodcock who is the Director of the Center for
Drug Evaluation and Research. Dr.
Woodcock is going to tell us about the quality-systems approach to medical
product review.
Quality Systems Approach to Medical Product Review
DR.
WOODCOCK: Thank you and good morning.
[Slide.]
As Dr.
McClellan said, one of the parts of our new initiative on innovation--one of
the major efforts in our Improving Innovation Initiative that was announced earlier
this year is the idea of, "instituting a continuous quality improvement or
quality-systems approach throughout the premarket review." I am quoting from the announcement of the
initiative.
So,
first, I would like to run through what we actually mean by that because I
think that probably has confused some folks.
David Feigal--I was interested to see, David, in your talk you just
handed out that the Science Board review recommended this to you all in Devices
as part of the scientific review; is that right? So, apparently, this is not a new idea and it was already
reinforced by the Science Board in their previous review.
[Slide.]
What do
we mean? Well, a review is a scientific
assessment of submitted documents and data where we draw conclusions about
conformance to scientific, technical and regulatory standards. So the review that we are doing within the
agency on the medical-product side is really a scientific activity of checking
conformance to standards.
In the
review documents that we produce, we have to document this. These activities are often controversial and
we have to write our assessment down;
our review documents are that written documentation.
[Slide.]
However,
the issue with the review process is it is not just this gentlemanly process of
doing reviews. It is a production-scale
activity. For new drugs, this is just a
ballpark estimate that I pulled off our tracking systems of the annual number
of scientific reviews just on the new drug review side: about
21,000 scientific reviews are produced.
We issue about 2,500 letters out of the new drug side which usually have
multiple scientific issues within them.
The generics program estimates they produce about 5,000 scientific
reviews a year.
[Slide.]
Although
this is mass production, each of these reviews must be scientifically correct
and apply appropriately the regulatory and scientific standards that have been
established. Many of them are subject
to intense stakeholder, scientific and legal scrutiny for various
reasons because they are implementing standards.
We need
to incorporate any new scientific findings or new regulatory guidance as we go
along in these reviews, so we need to make sure each of these 21,000 reviews,
for example, has brought in any new principles or any new guidance or anything
that we have developed over the recent past.
[Slide.]
How do we
assure quality of these scientific activities?
That is really the question I am addressing here. Obviously, execution is very important in this
area, in the regulatory area. If we
have scientific advances, we have to make sure they are incorporated in a
uniform and high-quality manner into our regulatory process.
What has
been done traditionally, and I don't know whether this is because this is a
medical setting or for other reasons, but, traditionally, there has been sort
of a craft or guild approach where there is a hierarchical system of
control. So we have successive or
serial checking by different levels of expertise or management over each one of
these reviews to assure its quality.
As a
model of quality, that is 19th Century or earlier, I would say, as far as how
to ensure quality. This craft model is most
successful for one-of-a-kind types of products.
[Slide.]
As was
already alluded to, there has been a revolution in how to ensure consistency in
mass production. That started out with
standardization issues back with Henry Ford and others and has moved through
quality control, then the concepts of quality assurance and, finally, to
quality systems and quality-management approaches which are systemic,
systemswide, approaches to assuring quality.
[Slide.]
I know
this is the Science Board, but I would just like to take you through. Some of you are familiar with this, but some
of you may not be, and I am not going to include any jargon here. I am just going to say what the principles
are sort of in a general English-language way.
I like
this set of statements about it. The
quality-systems approach is: say what you do, do what you say, prove it and
then improve it. That applies to the
system as a whole. What does that mean?
[Slide.]
This
sounds simple, but it is actually much more powerful and I think we may
illustrate with some of our presentations how this can be applied. To say what you do, you need to identify
what your vision and the purpose of your organization is and who the customers
of your organization are, and even subparts of the organization.
You need
to define quality. We are struggling
with that right now in the GMP world, how do you define the quality of a
manufactured product, a pharmaceutical, another medical product.
Then you
define the attributes that you are going to measure the quality by and you
define the processes by which you are going to produce this quality. That is saying what you do.
[Slide.]
Doing
what you say, you measure and produce things that have quality attributes and
you also have another important step which I think the FDA has pretty much
accomplished in many areas, which is process management; that is, you define
the process you are going to follow.
You standardize the process. You
track things within the process and you control the process.
[Slide.]
To prove
it, you make sure that you are producing whatever your customers are
expecting. You do trend analysis
and have other metrics of whether you are achieving the quality attributes that
you set out to achieve. And you do
audits and evaluation.
[Slide.]
Then
improve it has been the most challenging and difficult, I think, for anyone
involved in the quality area because that is where you need to do corrective
and preventive actions. You need to do
feedback and training. If it is about
science, you need to incorporate that science in and improve--the organization
needs to learn and change as you are moving forward.
So that
is sort of a short description of quality systems free of any particular
jargon.
[Slide.]
But you
might say to me, and that is what the reviewers say to us, "Review is an
intellectual activity. The review
product is a scientific document."
[Slide.]
How can
it be reconciled? How can we apply
to this a quality-system framework that really did originate in mass
manufacturing? How can we bring these
things together? That is what we are
here to ask you because we don't know totally, precisely, either.
[Slide.]
But, in
fact, the systems approach works very well for many processes. Sure, it originated in mass manufacturing
but it is now being applied many places.
I read this morning in the newspaper that a healthcare system won the
Malcolm Baldrige award this year for applying quality-management principles to
healthcare, demonstrably improving the quality of the care within
their system.
There are
a variety of tools and methods that can be selected for any specific
application so we don't have to pretend we are making widgets and then apply
quality principles of widgets to our process.
We are exploring examples of how these approaches can be used in
scientific activities; in other words, getting people in to talk to us about
how they are applying these principles in science.
[Slide.]
I think
David is going to talk a little about CDRH and the situation there. In the Center for Drugs, we have completed
considerable work on process-management aspects such as procedures, tracking
and training. We have also done
considerable work on some work-product standards. I already told you our work product is a review. So we have gone through and we have
established templates for the review documents, what they should look like, and
some directions on good review practices; in other words, what the review
activity should be, what items should be covered and so forth.
We have
done some work on quality assurance mechanisms and feedback. We are instituting audits and so forth to
see how well these standards are followed.
We have done very little on systems aspects such as peer discussion and
assessment, CAPA, organizational learning and also the metrics, the overall
metrics. So some of the presentations
we are going give you this morning are to show where we are in a couple of
different areas, not every discipline, and then get some of your feedback.
[Slide.]
The next
presenters are going to give examples of some progress and also some proposals
for moving forward in this area. Our
question to you is really what steps you think are most important, as we
move forward in this, for integrating evolving science into the review process?
There are
many things we need to do, obviously.
Which ones do you think will be most important for us to implement first
with a focus on integrating the new science, making sure that our reviews
contain up-to-date scientific information and apply that to the review
process?
So our
next speaker will be David Feigal. He
is going to talk about what the Device Center is doing in this area.
Quality Teams to Improve
Regulatory Processes
DR.
FEIGAL: Thanks.
[Slide]
I would
like to actually tie some themes together and this is a bit of a progress
report. Fifteen months ago we had a
presentation to this Board of the science review that was led by Robert Nerem. It was a process that the Center spent about
a year preparing for, and we took the recommendations of that group very
seriously and actually promised to perhaps even come back with a more
comprehensive follow-up report and actually pull the committee back together.
But there
were two of the recommendations of the science review to the Center and one of
them was about quality systems. CDRH
should develop and implement a quality evaluation and improvement program, and
this should include metrics in addition to timeliness of reports. We have always had metrics for
timeliness. We produced about a 40-page
annual report that slices timeliness about a hundred different ways. But it has been more difficult for us to
grapple with quality, and this is one of the things that the Science Board
recommended to us.
[Slide]
It is
also a theme government-wide, the whole issue of how
do we measure performance, how do we measure value added--this is just a clip
from one of the federal government trade presses about the government-wide
effort to be able to actually quantify the value that we add. Historically that has been done in terms of
review times but as the Center began the process of having user fees to speed
review times, the Center became even more interested in showing that we can
improve the quality at the same time we improve the speed.
[Slide]
This fits
in with a number of themes. It fits in
with Dr. McClellan's initiatives that he spoke of this morning. It fits in also with the strategic plan that
the Center developed, and as part of that strategic plan we have begun a
process of developing a scorecard where we will actually be accountable, and
describe in real time the areas that we think are key to our performance and
key to some of the changes we need to accomplish.
[Slide]
I just
wanted to show you the areas that we think are areas for key results and how we
will indicate these. Some of these
relate to our central mission of public health protection. We want to look at the effectiveness with
which we identify hazards and resolve them.
Some of them are on promotion in terms of timely availability of novel
products. But in terms of operational
accountability and quality, which is where we have had most of our metrics,
there we want to actually look at our application activities, all of the
conformity assessment activities including our field programs and compliance
activities and even business accountability and look at the quality of those
systems.
[Slide]
One of
the things that we did as a strategy was to begin working with
an author in this area, Dr. Richard Cheng, and one of the things that we
contracted with him to do is to train our staff in a method of
continuous process improvement. We have identified 18 people and they have
worked half time in training for the last year to actually learn how to be
leaders and coaches in one of the methods.
Many of the quality systems have similar elements and this is one that
Dr. Cheng uses for teaching materials, but to actually have an intellectual
framework for this and to have a team that actually can move out into the
Center. So, working with one contractor
we actually leverage that off 18 of our staff to begin this process.
[Slide]
What I
just wanted to show you is one of the processes. I am just clipping three slides from the middle of the 20-slide
presentation of one of the teams. What
the teams do, they pick a continuous process improvement area and they take the
review process and break it down into the components. They take many of the elements that Janet talks about, not just
looking at the process but also looking at the quality.
For
example, they took a look at PMA filing decisions. These are simply the facts: we didn't file 11 percent of them
in the 18-month study period. But what was done is a little bit
different. They did not just look at
the ones that weren't filed--an activity all the centers do because
those get appealed and are under scrutiny--they also did peer-review
analysis of the applications that were filed, because there is often the sense
that there is tremendous pressure to file, and the question is whether
the filing clearly met our guidelines and standards or whether there were some
issues with it.
[Slide]
The
primary issues, and this will actually help us work with industry to improve the
quality of the submissions--the primary issues, even in the applications that we
filed, were concerns about inadequate clinical studies in the applications and
disorganized applications; then, less frequently, lack of information about the
product. I only bring this up as an
example of a process. This is one of 18
projects that are implemented in the Center by this team.
[Slide]
One of
the ways that they take this issue apart, and this is again just focusing in on
the filing issue, is to look at what are the opportunities to really make
improvements; what are the root causes of the problems. Some of them deal with our staff; some of
them deal with relationships between different units in our group; and some of
them deal with industry.
[Slide]
These are
just examples of the way that this team approached this problem, looking at
various options that could be taken both to staff and to the outside
stakeholders. Specifying minimum filing
criteria, we have this; the question
is whether it is up to date. Providing
feedback, we actually don't do that very much about the quality of
applications, except in an informal way.
Education.
[Slide]
For turbo
filing we actually have successful applications that are computerized and that
have been nicknamed turbo, so that is what turbo refers to. Then, the issue of actually having
operational accountability for this, a lot of times there is a feeling,
particularly when time frames are tight, that standards have fallen or things
have changed. If we are actually
measuring this and monitoring this in an ongoing fashion we can actually talk
about this. Of course, we are very
interested in the stakeholder collaboration.
[Slide]
So, I
show that just as an example to highlight the strategy that we took, which is
to develop some internal competence in this area. As busy as we are, and all of these people had day jobs and
useful things that they did, one of the things that struck us was that we
needed to develop some internal resources.
One of the questions to you is whether there are opportunities to learn
from business or from university systems that do
similar things.
The other
thing I wanted to give you some quick feedback on is a second science review
recommendation which I think also relates to quality and peer review in a
perhaps more indirect way. That was a
recommendation about the use of expertise, particularly external
expertise. Dr. McClellan alluded to
this a little bit as well.
[Slide]
Since the
science review we have developed a device fellowship program. The types of people that have participated
in this range from the very senior, including a sabbatical that is planned this
fall with a chairman in surgery from a medical school, to more mid-career
faculty, to fellows, residents, medical students, engineering disciplines and
other health science disciplines. This
is a program through which we already have formal and informal relationships,
with fellows joining us from all of these different universities and, actually, more
contacts to come.
[Slide]
We have
had cardiology fellows from Brigham and Women's come and spend six months with
us. We have begun co-op programs with
engineering schools and working with faculty.
So, this is something that we are actually quite excited about and take
quite seriously.
[Slide]
One of
the things that was very helpful for this board--this is just a list of the
kinds of expertise that we are looking for.
One of the ways that this board was very helpful and the science review
was very helpful is that when it came time to actually negotiate the user fee
bill, the science board really very clearly laid out the need for resources for
external expertise. In the language of
the letters that went back and forth about what was expected from the user fees
it was actually specified that we would actually try and develop enough
external expertise that would consume about 20 FTEs of staff. To put that into perspective, the Center has
300 special government employees on its advisory panels. There are 17 panels. They each meet about twice per year and that
is one of our major sources of external expertise. All of that time from those 17 panels each year only adds up to 3
FTEs. With bringing in the fellows
program, we have already added in part-time people the number of hours equal to
all of the consultation from those panel meetings. But that is still only about one-sixth or one-seventh of where we
intend to go with this and what we intend to develop.
So we
appreciate your support. I think that
the strong recommendation to do this and the strong interest in external
expertise from industry combine to actually make this part of the user fee bill
so that it wasn't just focused on review time with us.
[Slide]
Implementation--we
always feel in the hot seat, and we appreciate your support. Thanks very much.
DR.
DOYLE: Thank you, Dr. Feigal. We are going to have a break and then two
more presentations and then we will get into a discussion for about 30 minutes. So, at this point we can take a 15-minute
break and reconvene at 9:30.
[Brief
recess]
DR.
DOYLE: Next we are going to hear from
Dr. Larry Lesko, who is with the Office of Clinical Pharmacology and
Biopharmaceutics with CDER. He is going
to tell us about quality systems for clinical pharmacology and biopharmaceutics
review.
Quality Systems for Clinical Pharmacology and
Biopharmacology Review
DR.
LESKO: Thank you, Dr. Doyle and good
morning, everyone.
[Slide]
Thank you
for having me here to talk about quality systems for clinical pharmacology and
biopharmaceutics. I look forward to
hearing what your comments are after the combined presentations that you are
going to hear this morning.
[Slide]
What I am
going to do in the time I have is cover five topics. I will talk about the organization and responsibilities of our
Office. I will talk about what we have
instituted as quality systems approaches to reviews; the benefits to customers,
both internal and external; the metrics of improvement, although these won't
necessarily be hard numbers; and some of the future goals that we have in the
area of quality systems.
[Slide]
Let me
begin by introducing the Office to you.
You can see in the organizational chart, down to our divisions and
teams, we are structured very much like any other office with an upper
management level responsible for the strategic management and implementation of
the program. We have three divisions,
each in three separate locations throughout the Rockville area. Then, we have 15 therapeutic teams, each led
by a team leader with anywhere from three to nine reviewers per team depending
on the business of that therapeutic area.
[Slide]
This will
give you a sense of the demographics of our scientific staff, at least with
respect to their educational background.
All of our reviewers have advanced degrees. Many of them have post doctoral experience of anywhere from one
to three years. Several have dual
degrees. Most of those dual degrees are
M.D., M.P.H.D. degrees. We have a
fairly active recruitment program because it is necessary. There is a relative shortage of clinical
pharmacologists within the profession and we are in deep competition with the
pharmaceutical industry to get talented people. We recruit actively at colleges of pharmacy, from clinical pharmacology
training programs, schools of medicine, and while we don't recruit from the
pharmaceutical industry, we average about three scientists per year coming to
the Office from the pharmaceutical industry with experience.
Our
current staffing is about 95, and 70 of those represent the scientific review
staff. On the right-hand side you can
see the breakdown of our review staff in terms of their degree. Most of them are Ph.D. but we have a
sprinkling throughout our three divisions of both Pharm.D. and M.D. as terminal
degrees.
[Slide]
Let me
define for you briefly what I mean by clinical pharmacology. It is a science dealing with human
properties of the drug substance--and I have underlined "drug
substance" to differentiate it from the drug product--which occur after
the release of the drug from the dosage form.
So, these are pretty much systemic properties of a drug substance, as
well as nonclinical characteristics of the drug substance that relate to these human
properties.
To give
you an example, clinical pharmacology would encompass the pharmacokinetics of
the drug substance, the absorption, distribution, metabolism and excretion
properties either in healthy volunteers or, for example, patients with renal
disease. Another example of clinical pharmacology
data that would be nonclinical would be the solubility and permeability
characteristics of the drug substance that are important prerequisites to
getting the drug absorbed.
[Slide]
Compare and contrast that with biopharmaceutics: this is a body of science that
deals with the in vivo performance of the drug product once the drug is
formulated into a drug delivery system and the in vitro properties that
relate to the drug product.
I don't
mean to separate the two sciences. They
are integrated and they are overlapping but they do have distinguishing
features. Examples would be the bioavailability of a tablet, that is, the percent absorbed, or perhaps the bioequivalence of two capsules. Another example might be the rate of
dissolution of the drug product.
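The bioequivalence comparison mentioned here is conventionally judged by the standard average-bioequivalence criterion, which the speaker does not spell out: the 90% confidence interval of the test/reference geometric mean ratio for AUC and Cmax must fall entirely within 80-125%. A minimal sketch of that acceptance check, with the confidence interval assumed to be computed elsewhere:

```python
def bioequivalent(ci_lower, ci_upper):
    """Average bioequivalence: the 90% confidence interval of the
    test/reference geometric mean ratio (for AUC and Cmax) must lie
    entirely within the 0.80-1.25 acceptance range."""
    return ci_lower >= 0.80 and ci_upper <= 1.25

# A CI of (0.91, 1.08) passes; (0.78, 1.10) fails on the lower bound.
```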
[Slide]
This is
an important aspect of our quality system because it represents the flow of
information throughout the review process and it is categorized in terms of the
scientific and information content of what we receive in the Office. It begins at the top with data that comes in
as part of the NDA. I consider data
simply facts. It is raw measurements
and some of the examples of that are illustrated on the slide. The review process, however, takes that data
and begins to make connections between it, associations, cause and effect. I call this information and this is what is
important in terms of understanding what is going on with the drug product.
As a
review proceeds, we move down into the third hierarchy of knowledge, and this
is very important because it really represents the context for applying the
information that comes out of the review process. This is what we know to be relevant to the therapeutic situation,
and this is also a prerequisite to an effective review in terms of the risk
management aspects that we have heard about today.
All of
this informational content, all of these hierarchies are in the past because
they rely upon the review of the NDA.
We try to go a step further in our review process and look at the
knowledge component or the fourth category which is more futuristic. We attempt to understand things a little bit
better, to synthesize new information that may be relevant to future
reviews. In this category of knowledge
we will frequently apply modeling and simulation, predictive tools to try to
look at "what if" situations that may be relevant to the current
review or may be relevant to some future review. So, as we proceed through the review these are the stages that we
hope the review process takes on.
[Slide]
Where the
information comes from is, of course, the drug development process. I have illustrated it here in a very
simplistic linear fashion. But, as you
can see, the clinical pharmacology and biopharmaceutic information comes from
all of the clinical phases of drug development as well as the nonclinical
portion of drug development, and under each of those boxes of clinical
pharmacology and biopharmaceutics I have highlighted just a sample of the types
of information that comes out of the drug development process that is subject
to review.
[Slide]
The
regulatory review process then is intended to move us through the data to the
knowledge hierarchy and convert that information to customer-related
knowledge. I am referring here to the
internal customer. I will talk about
external a little bit later, but this is the internal customer who I see as the
other disciplines that we integrate our review with.
We expect
that the reviewer becomes an interpreter of data. While they may assess all data in an application, we ask them to
prioritize and selectively analyze only that data that has the most impact on
the risk/benefit assessment. We ask
them to look at the factors that most likely impact efficacy and safety and, just
as important, we ask them to look for data that is missing, the gaps of
information that we will have to deal with in the product label or in a Phase
IV commitment.
We
emphasize mechanistic understanding at all levels from the cell to the patient,
and we view our goals as translating science into therapeutics. We view clinical pharmacology as a means to
an end, and the end is an effective risk/benefit assessment when coupled with
the other disciplines.
Our
primary scientific focus in the Office currently, as it has been, is on adverse
events and managing the risk of adverse events and understanding the likelihood
of those adverse events. For that
reason, we focus in particular in our review on exposure-response
relationships, drug-drug interactions which, we understand, is a major cause of
preventable adverse events; and on the integration of new technology and
evolving science into the review process.
[Slide]
Let's get
to the quality systems approach to review and speak in terms of the
customer. This is the internal
customer. Our primary customers are the
medical officers who rely on our assessments, along with their own, to make
some judgments about risk/benefit. Our
systems approach involves placing as much emphasis on reviewing and describing the
connections between data and studies as on reviewing the data in studies
themselves. In other words, we
emphasize an integrative type of review.
The
figure on the left-hand side shows our Office as a matrix. It covers the five ODEs in the Office of New
Drugs. On the right-hand side, under
good review practices, are the ideal goals of our review. It really reflects our vision and mission
within the Office. We want to generate
knowledge as part of our good review practice.
We want our reviewers to be decision makers. We want our reviewers to understand patient context for what they
are doing and to recognize the medical need for knowledge that medical officers
have and, most importantly, to communicate the science in a very clear and
useful way, and not in the jargon of the individual scientists.
[Slide]
Our GRP
is based on something we call the question-based review. It occurred to us some years ago that when
NDAs come into the agency they don't come in with studies in any particular
order. They may come in the order that
they were conducted in for drug development, or they may come in the order that
the sponsor would want us to look at.
Instead of reviewing studies one by one in sequence as they appear in an
NDA, we felt it was more important to put those aside and focus on the pertinent
questions, not the studies or the sequence of studies.
So, we
developed a question-based review that is intended to integrate knowledge
across different nonclinical and clinical studies in a way that addresses what
the reviewer perceives as the key safety and efficacy issues. In addition, the reviewer looks at the
information in a way that links it to the claims being made by the sponsor in
the label of the drug product.
As I
said, there are often gaps in information so it is important for the review
process to know the risks associated with uncertainty or gaps of information
that are in an NDA and use appropriate label language, or perhaps Phase IV
commitments, to manage those risks.
[Slide]
Our good
review practice timeline is not that old; it is still evolving. But, as you can see, we began using a
question-based review in July of 1999 and went through a series of educational
steps. We had an Office retreat to talk
about it within our Office. We had a
voluntary implementation of it and a formal requirement of it in October of
2001. We are currently at the point
where we are going to stand back and assess our GRP and question-based review
for compliance and for effectiveness and determine where else it might be
applied within the review process.
[Slide]
These are
the five quality subsystems of our systems approach, and I have numbered them
in a clockwise direction to give you a sense of the quality subsystems. It begins at the top with a template. We call it the clin. pharm./biopharm. review
template. It is a very important
template because it brings uniformity and consistency to the review process
across three divisions and 70 reviewers.
What we like about it, and what I hope its users like about it, is that it standardizes
the order and placement of the subject matter.
It doesn't dictate the content.
That is still an individual scientific endeavor. But when you pick up one of our briefing
packages, over time and in different therapeutic areas, it should look the same
in terms of the format and consistency.
We build
our quality system not only around the regulations but the domestic and
international guidances that relate to our area. Coupled with the regulations, these form our review standards. We have guidances for reviewers we call
MAPPs or SOPs. These are both scientific and process documents that create the construct for the review
process. In those MAPPs are frequent
decision trees that aid the reviewer in analyzing their data and making
decisions. We stay on top of new issues
and new science through our team leader meetings, from which oftentimes new
scientific policies emanate.
Then,
number four, we get into a very important part of our quality systems, and that
is our NDA clin. pharm./biopharm. briefing package. This package, which represents the product of the review, is
posted on the Internet. It is a very
open, transparent process. It contains
something for everyone. It contains an
executive summary, a listing of key issues and important unresolved questions
and recommendations related to the approvability of this section of the NDA.
Finally,
the quality systems approach ends with the NDA clinical pharmacology/biopharm.
briefing. This is the highlight of our
quality systems. This is a formal
presentation by the primary reviewer of the NDA. They make a presentation to an interdisciplinary audience. The meeting itself is characterized by a
very good interactive dialogue and it represents in many ways the vehicle that
we use to assess the quality of the review and a reality check in terms of its
relevance to the needs of other disciplines. Following that briefing, the
review package is reconsidered as necessary and then finalized for the records.
[Slide]
As I
said, we are talking about internal customers and what have been sort of the
expectations and the benefits to the reviewers with this approach. The quality systems approach has been
excellent. I think what it has done for
us is that it has standardized our method of delivering a comprehensive
review. This has led, I believe, to a
higher quality of review. Our reviews
usually meet and usually exceed and frequently anticipate our internal customer
expectations, and I believe this has built over the years a significant amount
of trust with our other disciplines.
I think
our review process collectively has a greater ability to derive clinical
inferences from the data. This has
helped communication and I believe it has led to a more effective review. I think the process has enhanced the
critical thinking of our reviewers through the question-based review and this
has led to some efficiencies in our review process.
Finally,
I think there is an ability to recognize efficiencies more in our clin.
pharm./biopharm. data set than we had in the past. I think this has instilled in our reviewers a large amount of
confidence. I think it brings a large
amount of job satisfaction and high morale to our staff as they go about their
business of intellectual reviews.
[Slide]
As I
mentioned, the briefing is really a critical component of our quality
system. Because it is a formal
presentation, I believe it compels reviewers to think very deeply about their
review. There is individual
accountability as you stand up in front of a group of 20 or 30
individuals. There is a pride of ownership. The feedback and the seeding of new ideas
from our briefing is also critical. The
attendees, as I said, are medical officers, toxicologists, chemists and this is
critical to our Office in terms of continual learning and knowledge
sharing. There aren't many
opportunities to do this within a review process.
For the
attendees, I think in this setting there is a greater consideration of the
comments before speaking because it is a showcase of people's comments and I
think, therefore, the quality of what is being discussed is high. We also invite other reviewers. They may be new; they may be
inexperienced. We invite them to the briefing. It is a good teaching tool and it is a form
of professional growth and the presenter, the reviewer can lead by example in
this presentation.
[Slide]
I have
talked pretty much about internal customers.
Let me turn to external customers and what we think about in a quality
systems approach here. We think about
their needs and expectations, and I am only going to use one example, drug-drug
interactions, because it is such a predominant part of the adverse drug
reactions scene. With health
professionals, we know and they know that drug interactions represent a high
percentage of preventable ADRs and are an important contributor to ER visits
and hospital admissions. We know that
from the literature; we know that from hearing from them. Patients, what do they worry about? Their top two concerns are getting the wrong
drug and getting two drugs or more that interact. So, the problem is very real.
We feel
the work product of our review is the delivery of prescribing information in
the label that addresses the needs of health professionals and patients. In the past the drug interaction section was
very much a fact-based presentation of pharmacokinetics, area under the curve,
Cmax. What does that do for the
physician and the patient?
There are
many potentially harmful drug combinations.
This information was often poorly utilized by consumers. Number one, they had a hard time finding it
in the label. Number two, when they
found it and they looked at two labels the presentation may have been
inconsistent and unclear. Finally, what
is the significance of increases in area under the curve, and what is the
significance of this increase of 20 percent versus that increase of 40
percent? So, it was a really difficult
interpretive problem.
[Slide]
What we
have tried to do over the last year in particular is to bring more added value
to the work product and improve the labels, using drug interactions as an
example. We still, by law, by
regulation, have to provide fact-based changes in pharmacokinetics in the
label. However, along with the agency's
label initiative, that information is now being moved to a more prominent part
of the label so it isn't lost in the shuffle of clinical pharmacology information; it is moved to the top of the label.
We have
begun to establish a common language to describe drug-drug interactions. We now classify drug interactions as being
strong, moderate or mild. That gives
the physician and the patient a sense of risk when they use two drugs together.
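The strong/moderate/mild classification described here could be sketched as below. The talk gives no numeric cut-offs, so the AUC fold-change thresholds are illustrative assumptions, following the conventional 5-fold / 2-fold / 1.25-fold cuts used for inhibition interactions:

```python
def classify_interaction(auc_fold_increase):
    """Classify a drug-drug interaction by the fold-increase in the
    victim drug's AUC when co-dosed with the interacting drug.
    Thresholds are illustrative; they are not taken from the talk."""
    if auc_fold_increase >= 5:
        return "strong"
    if auc_fold_increase >= 2:
        return "moderate"
    if auc_fold_increase >= 1.25:
        return "mild"
    return "negligible"
```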
Along
with that, we have also a parallel process of standardizing the method of
interpreting drug interactions so that we can say something about what to do in
the label, and this usually involves an analysis of pharmacokinetic changes
using what is known about the exposure-response relationship for the drug.
We are
also moving towards expressing results in terms that are meaningful to clinicians
such as probabilities or odds ratio so that somebody can make an informed
decision. We don't know the success of
this. It is in progress. We hope to survey customers at some point to
assess the value.
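Expressing a result as a probability rather than an odds ratio requires a baseline risk; the talk names the goal but not the method, so this conversion is a hypothetical sketch of one standard way to do it:

```python
def risk_from_odds_ratio(odds_ratio, baseline_risk):
    """Convert an odds ratio into the absolute risk for the exposed
    group, given the baseline risk in the unexposed group, so the
    result can be communicated to a clinician as a probability."""
    baseline_odds = baseline_risk / (1.0 - baseline_risk)
    exposed_odds = odds_ratio * baseline_odds
    return exposed_odds / (1.0 + exposed_odds)
```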
[Slide]
As far as
metrics go of quality systems, our metrics of process improvements since we
have introduced our question-based review, good review practices are maybe soft
metrics but valuable nevertheless. We
feel the consistent format of reviews that we have brought to the process has
made our review packages very readable, which is a very important aspect in a
busy scientific environment and, as I said, various levels of detail. It is a predictable package.
High
quality reviews that I feel we produce address issues of relevance to
therapeutics, and I think this has resulted in more informative questions of
sponsor data that comes in an NDA. I
have noticed over the last five years a significant increase in attendance at
our briefings. People only attend
meetings if they are valuable and I think this is indicative of a metric. We have had greater staff participation at
advisory committee meetings.
I think
also through different means--CDER awards and other things like that--we have
achieved greater recognition of the expertise and leadership in our
Office. What I like best to say is we
have frequent requests from the Office of New Drugs for more reviewers, so I hope we are filling a need--additional invitations to speak at advisory
committees and an obviously increasing role at industry meetings.
[Slide]
Some
other metrics--efficiency and informativeness of drug development. I think what we have done over the last five
years is actually identified studies that may be more germane to regulatory
decision-making and public health. As
an example, through our efforts and our interactions with industry we see an
increasing number of high quality drug-drug interaction studies, which is a major issue
in public health, but along the way we have also been able to identify and
convey to industry the idea that some studies aren't necessary for regulatory
decision-making. I don't know if the
companies did them for reasons that they had of their own or they thought we
wanted them but, for example, the number of bioequivalence studies in
applications has decreased significantly since we indicated clearly, through
guidances and one-on-one meetings with companies, where they are necessary and
where they are not. I think this allows
us to shift emphasis.
Our most
recent initiative is to emphasize exposure response studies. While they are good, they could be much
better and much more informative in regulatory decision-making and we are
currently communicating that to industry through guidances and public meetings.
[Slide]
Let me
just say a few words about this. Dose
response and PK/PD represent exposure response studies and we feel they are
really the core or the hub question in our good review practice as it applies
to the submissions from sponsors. It is
a quantitative approach to assessment of efficacy and safety. The reason that it is important is that it
allows us, in the review, to assess the safety and risk of dose selection in
quantitative ways. It allows us to
evaluate the risk of exposure changes in special populations because of drug
interactions or disease and come up with a rational basis for dose
adjustment. As formulations evolve
during drug development, it allows us to assess the clinical significance of
differences in formulation.
In order
to implement this in the review process we have had to make use of extensive
pharmacometric tools such as modeling and simulation and other statistical
approaches. It has been an intellectual
challenge for us. In the near future we
will have a guidance for industry that describes a lot of the principles that I
am referring to.
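As one concrete example of the pharmacometric modeling tools mentioned here, a simple Emax model, a standard exposure-response form though the talk does not name a specific model, relates exposure to effect and supports the "what if" questions described earlier, such as what happens to effect when a drug interaction doubles exposure:

```python
def emax_model(exposure, e0, emax, ec50):
    """Simple Emax exposure-response model: effect rises from the
    baseline e0 and saturates at e0 + emax; ec50 is the exposure
    producing half-maximal effect."""
    return e0 + emax * exposure / (ec50 + exposure)

# "What if" an interaction doubles exposure from 5 to 10 units?
baseline_effect = emax_model(5.0, e0=10.0, emax=50.0, ec50=5.0)
doubled_effect = emax_model(10.0, e0=10.0, emax=50.0, ec50=5.0)
```

Because the curve saturates, doubling exposure near the EC50 produces much less than double the effect, which is the kind of quantitative insight that informs dose adjustment.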
[Slide]
This has
had a significant impact on quality of our review process and I think on drug
development because, if I look at three categories of impact of our involvement
with exposure response, I find that we have case studies that have demonstrated
improved study design that assess benefit and risk during the end of Phase II
review. In the long term, this can
increase efficiency in drug development through greater interactions at this
point in drug development. We know that
some of our case studies support approval of doses that are deemed effective
and safe during the NDA review, particularly when the doses are different than
those that the sponsor proposed, or sometimes without additional clinical
efficacy and safety studies which is a powerful impact on efficiency of drug
development.
Thirdly,
our examples of impact have identified absent exposure response information
that, had it been available, could have supported efficacy and safety and
avoided delays in approval that resulted from conducting additional clinical
trials.
[Slide]
Let me
conclude with saying that we have future goals. We are currently in the process of a QC/QA check of our NDA
reviews. We want to check two things. One is the compliance with our good review
practice MAPP. We want to get reviewer
feedback on the experience of the last 18 months and make revisions or tweaking
of our process as necessary, and also focus on more added value areas, both
internally and externally.
We want
to look at the application of good review practice to INDs and
supplements. We have only focused on
NDAs, for the most part. We want to
establish a standardized method of assessing exposure response, particularly
for dose adjustment when intrinsic and extrinsic factors change exposure.
Finally,
we think it would be valuable, and we have said this publicly in other
settings, to interact with the industry at a much earlier stage of the drug
development process to talk not only about specific studies but about drug
development plans and to jointly maybe think about what is necessary and what
is not necessary for the decisions we have to make when that NDA eventually
comes through the door.
With
that, I will conclude and say thank you for your attention and turn it back
over to Dr. Doyle.
DR.
DOYLE: Thank you very much, Dr.
Lesko. Next we are going to hear from
Dr. Yuan-Yuan Chiu, who is the Director of the Office of New Drug Chemistry at
CDER. She is going to address quality
systems for CMC review.
Quality Systems for CMC Review
DR.
CHIU: Good morning.
[Slide]
It is my
pleasure to be here to present to you our CMC product quality review. CMC stands for chemistry, manufacturing, and controls. Usually this is what people
call our reviews. I am going to give
you a brief summary of the current status of the CMC review and our
responsibility, and then I am going to explain to you what we plan to do and
what we are hoping we can achieve in the future.
[Slide]
As Dr.
Lesko said, his office is structured based on the therapeutic class of drugs
and, in the same way, we have 19 chemistry teams and each team is
responsible for drugs of a specific therapeutic class.
[Slide]
We also
have three divisions. Our review staff
are called chemists because they all have basic training in chemistry. However, many of them actually specialize in very different fields and have academic training, some in
chemical engineering, biochemistry, molecular biology. This is because we review a diverse variety
of product types so we need people with different expertise. We hire people not just right out of
university. They all have working
experience. Some of them actually were
professors and some have research experience, and a small number of them come
from industry. We are not very competitive
in salary and when they come, at minimum, they have to have their pay cut by 30
percent. That is why we are never
successful in recruiting people from industry.
[Slide]
The
purposes of the CMC review are multiple.
The first one is to assure the quality of the investigational new
drugs. We want to make sure that
clinical trial material is safe for all phases of the clinical studies. We also want to make sure that clinical
material is properly characterized, with adequate consistency, and that the data generated from expanded studies are reliable. We don't want products that are improperly monitored or of inadequate quality, such that you cannot be sure whether the batches, the material taken by the patients, actually are what they are supposed to be. We also review the NDAs and we also monitor some of the possible
changes after a product is marketed.
Therefore, we really monitor the quality of the products throughout
their life cycle.
[Slide]
We also
see ourselves as the facilitators of product development and approval. So, we have written guidances and provided them
to industry for various phases of drug development and for NDA
submissions. We participate in
international activities to standardize the NDA submissions. We encourage dialogue with sponsors to
discuss their drug development programs.
[Slide]
I want to
give you a flavor of the diversity of the product types we evaluate. In terms of drug substances, we have very
simple compounds sometimes. However, we
also have very complex compounds. In
fact, our Center is the one that approved the very first biotech products. So far we have approved more than four NDAs
in the biotech area. We also have
products which are complex mixtures of many, many substances derived from
natural sources.
[Slide]
The
diversity of dosage forms is also very important. We have simple oral tablets for immediate release but we also
have very complex dosage forms which are targeted for an action site or which
are designed to reduce toxicity, such as liposomal products, inhalation
products. Perhaps surprisingly to you,
we also evaluate some devices. We
evaluate IUDs. We have a number of NDAs
for IUDs.
[Slide]
So our
scientific evaluation of the drug substance includes a lot of different
studies, data and information. I have
several slides to explain that. I am
not going to go into detail in the interest of time. However, you can see that the knowledge required to evaluate each product varies greatly because the product types are so different. Even within a product, the data and
information will cross multiple disciplines.
I also
want to point out that even though in a submission we have the manufacturing process and control description, the majority of that data and information actually is the responsibility of our field offices. They assess the process validation,
demonstration of product batch analysis, and those data are evaluated outside.
[Slide]
With all
the data submitted and the studies submitted to the NDA, and with analysis such
as the impurity profile and analytical methodology validation, and study and clinical batch results, the NDA review establishes specifications for a drug substance at release time and also through shelf life.
I would also like to point out that most of the time the NDA
specification eventually becomes the published standard at a later time.
[Slide]
We
evaluate the stability, the design of the study and the results so that we can
establish the storage conditions and expiry or retest date for a drug
substance.
[Slide]
The
second major part of our work is to evaluate the drug product
information. There are also multiple
elements, which are listed in the slide, but I will not go over them piece by piece. However, it is important to point out that recently, because of the
implementation of ICH, we are able to get some knowledge from the data for
pharmaceutical development. Therefore,
now we can understand the formulation design, the dosage form characteristics
and the critical components and attributes.
So, this will help us to have a critical evaluation of the process, of
the specification we are going to set.
Just like a drug substance, the manufacturing and the process control
information are mostly a field evaluation function.
[Slide]
In the
same way, the NDA specifications for a drug product eventually also become
public standards.
[Slide]
Some of
the container closure systems actually are used as a delivery device and in
those cases we will evaluate whether the container closure system is appropriate for that use.
[Slide]
We
establish the shelf life for the dosage form and, because of the study, we will
know whether you need special handling instructions, whether it needs to be protected from light, things like that. We also review all the container labels and
certain sections of the package insert.
So, we want to make sure we have clear and accurate information for
patients and the healthcare providers.
[Slide]
I think
this gives you a flavor for the diversity of the data and the information we
have to evaluate, and the challenges we have been facing over the years. The first one is, as I explained, that some
of the information is actually reviewed by the field office; therefore, the reviewers actually lack full manufacturing information to make a final decision. The CMC review is separate
from inspection, with sometimes limited or no communication between the
reviewers and the investigators. I will
explain later how we rectify those things.
The reviewers have no access to inspection results. The inspection observations of deficiencies we don't get to see. Turbo 483, we are not on line, and we don't see establishment inspection reports, the final report.
In the
past few years, two or three years, we also experienced change. Because of the compressed drug development
time, because the companies would like to launch the product as quickly as
possible, there is not enough time to really do thorough manufacturing process development, and optimization becomes a post-approval activity. Therefore, in our submissions when we do the
review, often we do not know the manufacturing capability.
[Slide]
As I
mentioned, the data and information cover multiple scientific areas and the
emerging new science and technology in formulation design, manufacturing process
and novel dosage forms require the reviewers to be on the cutting edge of
science and technologies. Reviewers are
expected to have broad knowledge to address all scientific and technical issues
for all types of products.
[Slide]
We also
observe an increased workload of generic products. There is a great deal of industry interest in launching generic products as soon as possible, so there will be an increasing need for our Office
to coordinate with the generic chemistry review. There are changes in patient involvement in medicine and patient
expectations which we have to fulfill.
We have to be able to meet their needs.
The last remark is the consolidation of CDER and CBER
products. We need to have the same
review standards.
[Slide]
Facing
those challenges, what have we done? In
our Office we have participated in ICH activities, and because of the change in
product process development and the shortened time or the compressed
development time, we don't have enough data so what we have done is to
implement interim specification at approval time.
We use the limited clinical data and statistical approaches to set an interim specification. The final specification would be set post-approval when we have additional data. We also implement skip-lot testing and sunset testing because, once we know more about the process or are confident about the data, a certain test may not need to be repeated every time and certain tests may be dropped.
We also
implement limited involvement in the pre-approval inspections so our reviewers
will be able to resolve some of the issues on site with the firms and we will
be able to evaluate some of the data on site.
We also
have expert review teams for special products.
Because of the uniqueness of biotech products or metered-dose inhalation
products and botanical products, we have special people for consultation review
or for primary review.
In terms
of sharing information, the lessons learned, we have symposia. We also have peer review. We select certain INDs and NDAs and present
them to the entire staff at the office level or at the division level or at the
team levels so we have multiple levels of peer review.
[Slide]
Like any other office, we have good review practices (GRP), which we implemented more than a year ago. I am proud to say the Center
has evaluated the implementation of GRP across the offices and our Office
received the highest score. In addition,
we also established some unique programs so we can continue to train our
reviewers in terms of manufacturing processes, in terms of learning the cutting
edge of science and technology.
We have
coordinating committees to address common issues across generic drugs and new
drugs. We participated in the 21st
century GMP initiative, and we are very hopeful about integration of quality
review and inspection to bring us additional knowledge so we can make more
intelligent decisions.
[Slide]
The GMP initiative mapped out the desired state of product quality, product review and inspection. Dr. Hussain
will give you a detailed explanation of the desired state so I am not going to
go over that. However, in order to
reach the desired state we believe we need a better quality system for our
review.
[Slide]
The
benefits of a quality system are multiple.
I am not going to go over them one by one but I think a couple of them
are very important. First and foremost
is that we will have improved scientific underpinning in reviews. We will have question- and risk-based review
and the risk assessment will be based on science and the decisions will be
based on critical analysis and thinking.
[Slide]
In addition, I think a quality system will make us dynamic; make our processes dynamic; make our review dynamic.
Therefore, it will be easy to adopt changes and have agility for the
future.
[Slide]
So, what are our initiatives? To have a quality system, we have had multiple internal conceptual discussions. We have formed a steering committee. The committee members consist of the management of the Office of Pharmaceutical Science and our Office. We also formed a working group consisting of a strong team, the best and most dynamic people in our Office. We have high hopes that this team--they are actually in the audience--will be able to bring us to the future.
[Slide]
We also
actually already hired a quality systems specialist as a consultant and he has
started to interview the members. We
would like to do certain things. A few
of them are listed here. We would like
to identify our customers. We know that patients and their care providers are our customers, and we would like to identify all of them. Then, we would like to survey
customers to determine their needs and to establish metrics for accomplishment
and to establish milestones, short-term and long-term goals. With that, we would like to determine an
implementation process and, hopefully, at the end we will have a continuous
improvement program.
[Slide]
This
concludes my presentation and I thank you for your attention. I am looking forward to having feedback from
all of you.
Questions and Discussion with Board/Presenters
DR.
DOYLE: Thank you very much, Dr.
Chiu. This now brings us to the time
where we can ask of the presenters any questions or present any comments that
we may have. We have had a couple of
questions presented to us that we should try to address as well. Specifically, what steps are most important
for integrating evolving science into the review process? And, are there opportunities to learn from
business or universities in improving the review process? With that, do we have any thoughts or
comments? Yes, Jim?
DR.
RIVIERE: I have one question for Dr. Lesko. I am just curious, on the template that you have provided for the structure of the review, have you made that available to the sponsors who are applying, so they know what the criteria are for the information in that document?
DR.
LESKO: That is actually a very good
question. I think we have made it
available informally through presentations at professional meetings. We have certainly made it available through
our meetings with PhRMA. Whether it is
publicly available on an Internet site, I don't believe it is.
DR.
RIVIERE: Very similar to an NIH grant review, you have very specific areas and you know a study section is looking
for that information. So, you know, the
whole point of this is to improve the approval process and improve the quality
of the data. If that is now formalized
to the point where your reviewers are looking for this information in that
format, I would think it would be very useful for PhRMA to look at that so they
know exactly that the information they provide is consistent with your goal of
early communication.
DR.
LESKO: Right. I think it is a great idea.
It has been somewhat of an experimental phase for us using this
approach, and it looks like we are going to move to make it more firmly
established in our process. That might
be the time to do exactly what you are saying, to make it available on an
Internet site as opposed to an internal site and continue to advance it within
our public contact with the industry.
DR.
DOYLE: Dr. Shine?
DR.
SHINE: First let me congratulate you on
the efforts to bring continuous quality improvement to the process. I think this is a very important development
and it is one that should be strongly supported.
I would
just make a couple of observations.
First of all, Janet, when we started to worry about quality in the
healthcare system we were told it couldn't be measured. Scientists will tell you that there are all
kinds of ways in which people resist the notion of addressing these
issues. I think the key in many regards
is data, and I want to come back to that in a moment.
I guess
my concern is that for a long time the efforts in improvement in quality in the
healthcare system were very much focused on process. JCAHO kept looking at the processes of hospitals and things of
this sort. One of my concerns in these
presentations is that there has been so much emphasis on process and not as
much on outcomes, as I always like to see.
Outcomes can be measured in a whole variety of ways and I am not talking
about simply the time frame in each area, but outcomes can be measured when you
want to talk, for example, about the confidence of your customers or how can
one set up a way to measure that.
We heard
about different ways in which new drug applications come in. Are the outcomes different depending on how, in fact, the new drug application comes in, whether it comes in an orderly way or in bits and pieces? Depending on what the data show, what
approaches can one take in order to improve the quality of the submissions and
what are the outcomes you want to measure in that regard?
But I
want to emphasize that there are many, many ways to define the outcomes in each
segment of the activity and I would urge you to look much more closely at
that. Again, we heard metrics used but
in some cases the metrics were about the quality of the meeting. Again, that is soft. Can you come up with any better outcome
measure in terms of helping people to tell whether things are getting better?
The
second observation I would make is that we have certainly learned in the
healthcare environment that it is systems that count. As you know, ever since we brought out the patient safety report
at the IOM, our principal message has been "it's the system, stupid"
in terms of drug safety and not blaming an individual.
Again, we
saw some approaches to subsystems. Dr.
Lesko presented a subsystem. But I
would submit to you that each of those items, one through whatever it is, five
or six, could be described internally as a system, that is, the very system
which allows one to move within each of those areas. And, I would like to see some real attention given to the
microsystems, if you will, that underlie each of those because those
microsystems are, in fact, where you have the greatest leverage in terms of
improving quality, and if you have a relationship between some outcomes and how that microsystem works, then you expose, if you will, the way in which the interstices work to a much better extent than I think the broader conceptual
activities.
Finally,
I think one of the things we have learned, at least from some industries, is
the notion that one can encourage the worker, in this case the staff, to
innovate and to do things in different ways provided you have an outcome
measure. In other words, in the context
of some kinds of outcome measures you can give people the opportunity--and it
has to be open and transparent and explicit, but you can give them a chance to
do things somewhat differently and then demonstrate that they can produce a
better, or if it doesn't work a worse, outcome but at least you have a context
in which you create the opportunity for innovation.
One of the things we learned, as you may or may not know, from the cardiac advisory committee that has been reporting on the outcomes of cardiac surgery and angioplasty, pediatric care and so forth in New York State, is that what was remarkable was the pride that people took, once they got the data, in improving the
system. It wasn't about the doctors
choosing better hospitals or the patients choosing better hospitals. It was when one group saw that they weren't
doing as well as another group that they tried to figure out ways to
improve. So, I am suggesting that
thinking about a way to do that could be constructive.
DR.
DOYLE: Excellent points. Thank you, Dr. Shine. Dr. Pickett?
DR.
PICKETT: I also want to congratulate the group for thinking about quality systems. The industry also has given, as you know, a lot of thought to quality systems. One thing I did not hear today, because in my own experience it has been difficult to implement quality systems without increasing head count, is what the plans are to increase head count to meet the demands of implementing quality systems across your organization.
DR.
MCCLELLAN: I just want to take a moment
to respond to Dr. Pickett on head count.
We are increasing head count.
The agency is larger than it ever has been and this is a direct result
of legislation passed last year. On our
food side, we have added over 700 new people, primarily in the field to help
improve our inspection activities, and the like, but that creates some
opportunities for taking a fresh look at how we are undertaking our food
security and food safety programs, which we are doing.
Here, on
the medical products side, we had legislation last year that implemented some
new programs and provided some new resources as part of the Prescription Drug
User Fee Act and the Medical Device User Fee Act. The new PDUFA authorization, with increased fees, is going to provide some new personnel, including personnel in key areas like IT and in post-market monitoring, that will help us implement these activities. The Medical Device User Fee Act, which is
off to a little bit of a rocky start because of some issues about the right
level of appropriated funds to go along with it, is also providing some new
resources and we are fully committed to implementing that program effectively
too.
We are
actively pushing right now for new legislation on animal drug user fees and
there is considerable bipartisan support for that. I think it is mainly a matter of just getting the focus in
Congress to get it on through, given all the other very urgent priorities that
Congress has to deal with. These
programs together provide some new resources as well as some new authorities to
undertake these activities.
But I
don't want to kid you, the financial situation facing the government is
tight. It looks like it is going to be
tight for a while to come. So, I think
we are going to have to try to make the best progress we can without huge new
infusions of new resources and that is difficult given the increasing
complexity and the increasing number of products that we are facing as a result
of all the increases in research funding that I was talking about earlier, and
all the information that is still at a stage of being transformed in
understanding, as you heard in the presentations this morning. So, that is an area of challenge. It is one that we are trying to respond to
as best we can given the constraints that we face.
DR.
DOYLE: Yes, Dr. Shine?
DR.
SHINE: Yes, I just would emphasize the
world is filled with quality experts. I
know that Cecil is not implying this, but you have to have some support people
in terms of facilitation but this kind of quality improvement in this kind of
an organization has to be generated internally by the people who are doing the
work, who know how it has to be done, who can interact with each other and do
it well. So, I think the notion of
additional help is important in terms of facilitating that but they are not
going to do the planning; they are not going to do the quality improvement
themselves in a professional organization like this. I think people have to buy into it and they have to be committed
to it in terms of their own part of the organization.
DR.
PICKETT: Could I respond to this for a
second? I don't disagree with Ken. The quality obviously starts at the bench
and individuals at the bench really are critical for quality. But there are certain other parts of the
organizational structure--quality control; quality assurance--that do need
expertise and will require additional head count if you want to fulfill that
aspect of the overall quality system. I
think that has certainly been my own personal experience in industry and I
think those types of individuals are necessary.
DR.
SHINE: I would agree with that, not
disagree.
DR.
DOYLE: Dr. Thomas?
DR.
THOMAS: Mike, I would like to comment
on one of your opening remarks with regard to how to integrate the new
sciences, and perhaps a question directed to any of the previous speakers. How much effort is actually devoted to what
I will call test validation, replication, particularly in the area of
biopharmaceutics or agro. biotech. products?
Without this validation and replication of these assays that you are
going to be looking at in the future, and you probably are already looking at
some, I think you need to have some internal expertise within given units to
assess these things. I don't know who
to direct the question to specifically, but I would like to know what sort of
effort or energies are devoted to this type of activity.
DR.
LESKO: I think that is a good
point. In particular in our clinical
pharmacology area when we think about things like pharmacogenomics or things
like modeling and simulation, it would be unrealistic to think that everyone in
the Office is going to have those skills.
What we
have tried to do over time is identify those skills that we think are relevant
in proportion to sort of the integration of that science of drug development
and tried to stay ahead of that a little bit and, through the recruitment
process or through some internal training, try to get that expertise built up
and sprinkled throughout the Office and the respective divisions in a way that
gives us the coverage appropriate for the therapeutic areas.
The
reality is that all therapeutic areas don't need all of the new science and it
always happens that certain therapeutic areas sort of advance faster than
others in terms of integrating some new science such as pharmacogenomics or
such as modeling and simulation. So, we
try to do it that way and begin to utilize on site--by on site, meaning in a
division--experts to begin to mentor other people in that division on the
skills that we are talking about and really come to the reality that what we
have is sort of a puzzle where each piece doesn't necessarily equal each other
but together they provide us the coverage that we need in terms of assessing
the science coming in.
DR.
THOMAS: Thank you.
DR.
DOYLE: Thank you, Dr. Lesko. Kathy?
DR.
CARBONE: Hi. I am Kathy Carbone, sitting in for Jesse Goodman, who is unavailable this morning. At CBER
we have a fairly extensive program where we actually do some in-house
testing. This is assisted by Sherry Lard
and Deborah Jensen, who are focused on quality issues and specifically
laboratory quality issues. We are
working with all the laboratories which do testing to make sure that the
systems are up to quality assurance and quality control laboratory testing
standards.
We also
have an extensive interaction with manufacturers where experts on board
actually assist in development of tests that have good utility, from our point
of view, and consistent products. That
is driven, of course, by the complicated nature of the products and the way in
which biologic products are produced that require specific interaction. So, we have a fairly extensive program
in-house.
DR.
DOYLE: Thank you. Dr. Crawford?
DR.
CRAWFORD: I believe, Dr. Thomas, you
also asked about ag. biotech. That is
done, as you know, in our Center for Food Safety and Applied Nutrition, and was
put in place in terms of reviews of these products as they were being developed
in 1992 when we formed the Food Advisory Committee which I was on. Since that time, they have refined those
procedures I think very well under Jim Maryanski from the Center for Food
Safety and Applied Nutrition. I would
characterize it as a very reproducible system that is working well. For those that they have cleared, they haven't
had any recalls or any product cancellations as a result.
But they
are continuing to look at it because the next generation of those kinds of
products will be those that are specifically doing something to change the
food. In the past it has just been
processes that cause product to grow faster, and so forth.
DR.
THOMAS: Not substantial equivalence?
DR.
CRAWFORD: Yes, substantially equivalent
to the products already on the market.
As the science gets more refined there are going to be products that
will have more vitamin C or whatever and, as you know, those products will have
to be labeled. So, the enterprise will
have to be ratcheted up a number of steps and quality control is going to be a
critical part of that.
DR.
DOYLE: Thank you, Dr. Crawford. We have time for one more.
DR.
CHIU: I would also like to answer the
question about new methods, new assays.
Whenever there is a new technology or new methodology developed by the
pharmaceutical company for new products, our laboratory will repeat the tests and verify the rigor of the tests.
DR.
DOYLE: Thank you, Dr. Chiu. Dr. Nerem asked for the next question so we
are going to respect that.
DR.
NEREM: Thanks. I didn't want David Feigal to get off
without any questions, but actually my questions could be submitted to any of
the centers.
David,
through the CDRH science review it was very clear that one of the challenges is
to bring the right science to bear, and there were a couple of things that were
talked about and I would appreciate your providing some kind of an update. One was that as you look ahead over the next
five years and you have a number of people, for whatever reason, leaving that
you actually developed some kind of a plan as to what you want your science to
look like five years from now as opposed to simply falling into the trap of
when someone leaves replacing that person by someone that looks sort of like
that person. So, I am wondering where
you are in terms of having such a plan.
The second thing was in terms of using outside people. You indicated the 20 FTEs. Part of the suggestion from the group was that, in critical areas where you don't have the science expertise, at least not to the extent you would like, you have people who are sort of consultants on call, because when you need to talk to someone you don't want to wait weeks for paperwork to go through the bureaucracy. One of your staff needs to be able to pick up the phone and simply talk to the person then. So, where are we on these kinds of issues?
DR.
FEIGAL: A couple of great
questions. On the issue of how do we
plan for the future, let me describe what we did with planning for the new
hires with the user fee positions. We
didn't simply look at the organizational structure and see which groups were
busy, where there was need and proportionally distribute the resources
according to the organizational chart.
What we did instead was that we recognized that we had divided the
product areas into six different working groups. For example, one of our busy groups is the cardiovascular group.
What we
did, we asked each of the division directors responsible for a product area to
actually pull together a team from across the whole Center, not just from new
product review but also from our laboratory programs, from our post-marketing
programs or compliance programs, and for them to look at the kinds of things
that need to get accomplished by the user fee goals, as well as the critical
needs of the Center from our initiatives, from the initiatives from Dr.
McClellan and from the Department and some of the President's objectives. What we asked each of those groups to do,
Bob, was to actually give us a prioritized list of the hiring they needed
without respect to organizational structure.
So, the cardiovascular group could, in fact, say, well, they need to
have some expertise in biomaterials; they need to have some expertise in
electrophysiology; they need to have an engineer who does computer software
without respect to where organizationally those people sit, and prioritize
them. We essentially asked them to take
responsibility for our overall responsibilities for the whole product area.
Those
requests are ranked and voted on, just like a study section, by the office
directors. That is the way we do our prioritized
hiring. So, it isn't on the basis that, well, if you have lost someone you can back-fill that position. In fact, you have to actually request it through this process, demonstrating that there is a critical need to back-fill that position. That gives you time to stop and think about whether you want to change and reshape the organization, and it also keeps groups out of the bind of, well, I lost a person but the only kind of people in my unit are statisticians, or the only kind of people in my unit are engineers and clinicians. We
invited people to actually think about the whole product area. I think that is actually going to allow the
Center to identify needs for the future.
DR.
NEREM: This will be a continuing
process?
DR.
FEIGAL: A continuing process. As the resources become available through
the collection of fees, which is just starting now, we will release a certain
number of slots for hiring in batches because you never know exactly who you
are going to find and in what order, and have authorization to hire sort of from
a pool of identified needs. But it is a
way of actually getting involvement of the whole Center in thinking about where
does the Center need to go rather than simply asking each organizational unit
to do its best. Historically, that
hasn't worked so badly but this actually allows you to not tie the needs to the
organizational structure.
Your
second question, our biggest pool of outside experts that are ready-made are
actually advisory committee and advisory panel members like yourselves who are
already qualified as special government employees who are available, if used singly, for consultation, which we nickname "homework" in the Center. We will often actually send out parts of a
review or parts of a protocol, sometimes the whole thing, to an advisory committee
member and ask them to provide expertise.
It is not unusual to invite a single advisory committee member to
participate in a telephone conference call with a company over an issue. So, that gives us a pool of about 300 people
who have been selected for their expertise on the advisory panel system to
start with.
Then,
above and beyond that, we have actually looked at one of the things that we
have in the works which I think will be out in the next month or two, and that
is a process for actually contracting with universities, with individuals who
will be available to us on a part-time basis as essentially part-time staff,
except located in universities. We
think this might work particularly well with more junior faculty that don't
have as complex relationships with industry as more senior faculty often
do. We have started in our very busy
cardiovascular area to see if we can develop a contract proposal for
universities to bid on where, in fact, a young cardiology faculty member could
review applications, could review protocols as a special government
employee. We would probably hire them
through an IPA, or there are many, many different mechanisms.
But those
are some of the things we are pursuing and moving ahead on. The thing we had to do to make progress was
to hire someone whose job it was full-time to make this happen. You know, as long as we talked about an
interest in this nobody had time to get through all the contracting details and
punch these things through. Dr. Susan
Homire has been very effective in putting these programs in place over the last
year.
DR.
DOYLE: Thank you, Dr. Feigal. Dr. Buchanan?
DR.
BUCHANAN: Thank you, Mike. I would like to also follow up a little bit
on Bob's questions and reflect on our experiences in the last few years in
terms of the evolution of our scientific staff and our needs. Strategic planning for your scientific hires
is a critical component but the lesson that it has also taught us in the last
few years, as we have had tremendous changes take place in our program
orientation, is that we also have to try to hire some of the brightest people
we can and have a learning organization.
So, there needs to be an equivalent commitment, for example, to an FDA university
where we can quickly retrain people to fill in the needs that we have
immediately to get started on critical attributes. We found that you can't do one or the other. You can't rely on future hires. You also have to be able to respond quickly
with your own internal learning organization.
DR. DOYLE: Dr. Rosenberg?
DR.
ROSENBERG: I just have a comment and
maybe a question. Dr. Lesko mentioned
the initiative by the agency to get involved earlier in the process of
interaction with some of the people that are starting to do that science and to
participate much earlier with industry in the process of going through clinical
development.
I applaud that initiative, but it strikes me as moving to a dynamic role of review rather than just a static role of review. I was wondering if the agency has considered
some of the problems that they may run into.
It sounds good on paper and I think it would be applauded. The problem, of course, is that these processes
tend to be fairly long processes. They
can go on for years and, therefore, the agency is going to have to maintain
very careful consistency in how it works with industry through that process.
I think, as Janet mentioned, the more you think of this as a guild or an art in science, then, as people turn over in these groups and time goes on, you end up with different scientific opinions along that process.
Therefore, it will be very important that when you start a conversation
or begin that process with industry you have a way to maintain the consistency
you are going to need so that what they are hearing at the end of the process
is consistent with what they heard at the beginning of the process. It is something that I think people in
industry have been concerned about in the past in terms of maintaining
consistency in conversations with the agency.
DR.
LESKO: I agree with what you have
said. I think we have the context with
industry now and interface in meetings where decisions and discussions are made
about the Phase III program where that is equally important. But I think you are right, there are things
that have to be thought through for these earlier interactions. I believe we should be dynamic in the sense
of interacting with companies earlier in drug development as opposed to living
with what we get at the end of the day with the NDA, where we have unanswered
questions or perhaps even some wasted dollars in terms of generating
information that perhaps the sponsor thought we needed and we didn't need.
In
particular in the area of dosing strategies for drug development programs, I
believe dosing strategies are set very early in the drug development program,
even preclinical, and that is a critical area in terms of regulatory assessment
later on in terms of efficacy, safety and dosing adjustments.
So, I
think we should do both. I mean, I
think we should be both dynamic and facilitate efficient and informative drug
development, but also be in our traditional role of review. This has implications and I would expect not
all companies would welcome this sort of early interaction and, like any
meeting, I think it would be up to the sponsor to determine whether this has
value to them or not.
It also
requires resources on our part. This is
not a small issue. But I think it is
worth exploring. I think it is worth
exploring in selected cases or with very clear roles in mind to pilot something
like this to see what the value of that would be, both for us internally as
well as for companies that engage in this activity.
DR.
DOYLE: Thank you, Dr. Lesko. Dr. Sundlof?
DR.
SUNDLOF: Thank you, Mike. I think Dr. Rosenberg raises a very critical
question. The Center for Veterinary
Medicine has had what we call a phased review process for about ten years now
in which sponsors meet with us in the very beginning, even before they have
decided whether or not they are going to go forward with the development. All of the problems that you have just
mentioned we have run into but we have been able to work our way through all of
those problems. Some of them we are
still working on, just to be honest.
Some of the things that we have done have been to chop the review process up into various technical sections. When we get through with each one of those sections we send the company a letter that says this technical section is complete and you can move on; you don't have to worry about that. One of the disciplines that we have had to maintain with our reviews, especially as we have turnover and new things come up, is that once we make a commitment to a company, once we have reviewed their protocol and come to agreement, we will stand by it. We are not going to change unless it is something that would put the public health at risk. So, we do have some caveats to that but primarily, once we have made that commitment to the company, then we honor that commitment and that is very important to us.
It does
have some additional problems just because the submission is broken up into so
many different pieces and it is going in so many different directions
throughout the Center, it is hard to maintain centralized control over where
all the pieces are. So, that has been
our biggest challenge in this phased review process.
DR.
DOYLE: All right, thank you, Dr.
Sundlof. I think we need to move on if
we are going to have lunch. Next we are
going to hear from Dr. Ajaz Hussain, who is the Deputy Director of the Office
of Pharmaceutical Science in CDER. He
is going to provide to us an update on the pharmaceutical manufacturing initiative.
Update on Pharmaceutical Manufacturing Initiative
DR.
HUSSAIN: Good morning.
[Slide]
I am very
pleased to be here to share with you a progress report on our manufacturing
initiative. There are actually two
initiatives but I think my goal is to show you just one initiative as we move
forward from here.
[Slide]
What I
would like to do today is to share with you a progress report on the process
analytical technology initiative that we first discussed with you in November
of 2001 and subsequently in April of 2002.
Then came the cGMP initiative for the 21st century. And, how these two sort of come together
now, and to move forward I would like to sort of spend my time discussing a
desired state.
We have a
number of activities, a number of programs, a number of initiatives ongoing and
unless we define very clearly what the desired state is, I think alignment and
the challenges associated would be greater.
So, I would like to get some input from you on how we have articulated our
desired state for the future and, hopefully, you will provide us some ways to
improve that desired state.
[Slide]
The PAT
initiative, the process analytical technology initiative, was, as I said
earlier, first proposed to you in November of 2001. We presented this as an emerging science issue in pharmaceutical
manufacturing. We presented several
different perspectives. We had an
academic and an economic perspective by G.K. Raju from MIT and Doug Dean from
PricewaterhouseCoopers, who essentially outlined the opportunities for
improvement, looking at the current efficiency levels in manufacturing; some of
the challenges; some of the reasons, root causes for why things are as they
are.
We heard
from Norm Winskill and Steve Hammond from Pfizer about their perspective on new
technology and innovation in manufacturing.
Essentially, they outlined for us that industry has adopted a
"don't use" or a "don't tell" approach to new technology
and innovation. Under the "don't
use" scenario, essentially because of the regulatory uncertainty, industry
says let's not use any new technology.
Or, if they desperately need new technology they will do it, but then
they will provide the routine type of information to the FDA and be in the
"don't tell" scenario. We were
extremely concerned by this because this is not in the interest of public
health and this is not in the interest of the country. So, we really needed to address this.
In April
we had a presentation by Ray Scherzer, who is senior vice president of manufacturing
at GlaxoSmithKline. Essentially, he
came and sort of challenged the industry itself, and his presentation was
titled "Challenge to PhRMA Industry--Quality by Design." The perspective he shared was that
manufacturing, although being treated as a stepchild in some cases, is a
significant part of the industry; a critical part of the industry, as well as
that the technology that exists for manufacturing and improving the efficiency
of manufacturing exists already outside and not within the pharmaceutical
manufacturing arena.
Therefore,
I think we actually posed a question to you, is it appropriate for FDA to take
the lead on this initiative and actually facilitate introduction of innovation
and new technology in manufacturing?
Essentially you sort of endorsed that proposal and we moved forward from
there.
[Slide]
Following
our proposal to you, we set up an advisory committee under the Advisory
Committee for Pharmaceutical Science, a PAT subcommittee, and we held three
meetings. These meetings and deliberations
were focused on definitions of what process analytical technology really is in
the pharmaceutical context, benefits and scope. Essentially, the discussion led to the scientific underpinning of
process understanding. PAT is
essentially the science of manufacturing and understanding the process itself,
not just having a sensor on-line, and so forth.
We
identified perceived and real regulatory hurdles. I think the point I want to make here is that we have perceived
hurdles that have a way of becoming real hurdles. I think that we saw that in the internal hurdles a company
has. I was actually a bit pleased but I
sort of held myself back at the end of the third meeting, when the industry
representatives on the PAT subcommittee said that FDA is not the hurdle;
industry itself is the hurdle for us. I
would like to believe that but I think we are all part of the same system; we
are all part of the problem; so we all have to be a part of the solution, and we all have to work together to remove these hurdles.
We also
identified the need for cross-discipline communications. Pharmaceutical manufacturing in particular
essentially works from the art of pharmacy compounding and to some degree I
think our thinking is somewhat similar in that context. To give an example, I think we make tablets
the same way we made tablets a hundred years ago.
The scientific underpinning of that in terms of predictability and
generalization has not really occurred.
So, we really need to bring pharmacy, chemistry and engineering together
as a discipline of pharmaceutical engineering.
We also
identified approaches for removing these hurdles. I will share some of those with you. We also had excellent case studies presented by company
representatives such as Bristol-Myers Squibb, Pfizer, Glaxo and others. We also discussed a general approach to
process validation. But I think most
importantly, we actually developed a training and certification curriculum for
our staff. We are proceeding with that.
[Slide]
But before
I sort of outline some of this to you, let me just share with you the people
involved in this initiative. This is
the Office of Regulatory Affairs, Center for Drugs and Center for Veterinary
Medicine initiative right now. We have
a steering committee, a policy development team, training coordination, as well
as a team of reviewers, inspectors and compliance officers that are being
trained and being certified in some of these new technologies. The certification program is based on the
curriculum we developed through our discussion and deliberations of the PAT
subcommittee.
Three
universities--the University of Washington, Seattle; the University of Tennessee, Chemical Engineering; and Purdue University, Pharmacy--have been brought
together to provide this training. This
training has both didactic and hands-on experience at these sites.
[Slide]
In
summary, what the consensus of these deliberations was is that process
analytical technology provides an opportunity to move from the current
"testing to document quality" paradigm to a "continuous quality
assurance" paradigm that can improve our ability to ensure quality was
built in or was by design. We felt that
this was actually the ultimate realization of the true spirit of cGMP.
This
provides greater insight and understanding of the process itself--at-line, on-line, or in-line measurement of performance attributes.
I want to sort of distinguish that from measurement of, say, pH,
temperature and so forth. We are not
talking about those type of measurements.
We are talking about measurements that can predict product
performance. Real-time or rapid
feedback controls focus on prevention; potential for significant reduction in
production as well as development cycle time; and minimized risks of poor
process quality, thereby reducing risk itself and also reducing regulatory
concerns.
[Slide]
What we
were able to do was to create a conceptual framework for PAT, not
piecemeal. This is quite a complex
slide and I am not going to walk you through this slide. The slide is just to illustrate that we have
addressed every part of the manufacturing process, including development
optimization, continuous improvement and how do you bring a multivariate
systems approach to risk classification and mitigation strategy to be part of the
PAT system. This is the framework that
we are using to develop our draft guidance.
I will not, as I said, walk you through all of the elements of this.
But I
would like to sort of emphasize that, as we move forward, product and process
quality has to be based on knowledge.
I have distinguished knowledge from data. I would sort of submit to you that I think many of our decisions are data-driven decisions where we cannot generalize, we cannot predict and,
therefore, the level of sophistication is not what it can be.
So, if
the question to the FDA review staff as well as inspection staff is to assess
whether quality was by design, what is the information that they use to make
that assessment? I think that is the
key. The information that is submitted
to us in the review or when it is held at the site tends to be more data
driven. What I mean by that is that you
will have bits and pieces of data arranged to form information and we say, all
right, this drug is stable, therefore, it is all right. But when there is a change in any manufacturing process you have to repeat and retest every package to show the change did not have an impact, whereas in a knowledge-based system you will have
understood the critical formulation variables, and so forth, and be able to say
this is not a critical change and, therefore, you can make a decision in a
different way.
Today I
would submit to you we are at the bottom of this knowledge pyramid where our
decisions are driven by data that are generally derived by experiments and as
we move up this knowledge pyramid, going towards mechanistic understanding and
first principles of these systems, I think that is the direction we want to
move in. The PAT process actually
brings us in the middle of this pyramid where actually we are moving towards
establishment of causal links that are a predictor of performance.
[Slide]
So, the
regulatory framework under which this initiative will sort of conclude in terms
of a draft guidance is that the PAT tools are not a requirement. The current manufacturing processes do provide quality product, but they are not as efficient, and the technology is really new, so it cannot be a requirement.
Industry and companies will need to decide whether it makes sense for
them, whether they have the know-how to move forward. So, that is an important question that we have posed to you and
you sort of have endorsed that.
Research
exemption I think is an important part of this because one of the biggest
hurdles we have is if you put new technology on an existing manufacturing line
you will see problems, or you will see trends which might be considered as
problems, whereas, if you had not done so, there would not have been an issue
at all. So, continuous improvement without the fear of being considered non-compliant is a major hurdle, and a research exemption framework can address it. Again,
Dr. Woodcock presented a case study on how we intend to use sound statistical
principles for addressing some of these issues that will be part of this draft
guidance.
Regulatory
support and flexibility during development and implementation--this is the PAT
review and inspection team. This is to
eliminate the fear of delayed approval and actually dispute avoidance as well
as resolution for some of these issues.
Science-
and risk-based approach--low risk categorization based on a higher level of
process understanding. So, if you link
regulatory scrutiny to the level of process understanding you are able to
provide better incentives.
[Slide]
The
strategy for moving forward has been to conduct workshops, and we have done
several workshops now, both in the U.S. and Europe. We have participated in some workshops in Japan also. I forgot to mention that. The scientific discussion and debate that
occurred following our PAT subcommittee was very important and it brought in
debate across disciplines--pharmacy, chemistry, chemical engineering. And, some of this debate was emotional
because I think you are dealing with different disciplines and some of the common
issues and how different disciplines address that.
We had
debate across organizational units between development and manufacturing. For example, manufacturing folks say you
develop a product and throw it over the wall and we have to deal with it, that
type of debate and, then, the regulatory departments and what sort of
regulatory policy and process there should be.
So, the
general guidance that we will issue in a few months as a draft will sort of
encompass all these aspects. It will
not be a technical guidance; it will be more of an information guidance on
process and so forth. We plan to bring
other guidances out on the scientific and technical aspects. But when we release this guidance we will
bring together the engineers, the chemists, the pharmacists together as part of
the workshop for training.
[Slide]
I think
the most important point that I am very proud to share with you is that
although FDA initiated this initiative, we cannot champion it. We need champions to take over from us, and
the champions have been created. Champions
to drive this initiative towards a shared desired state are coming from
industry. Some examples of companies
which have come forward are listed there but, more importantly, academia. We started with the MIT presentation; Purdue, Washington, Tennessee, Michigan--you can look at the list of schools in
the U.S. where we have established contact and they have already introduced PAT
in their curriculum as well as research programs. But also London, Bradford, Basel have already started working
towards their PAT-based curriculum and research program. We plan to work with several universities in
Japan this summer. But more
importantly, I think you have pharmaceutical engineering programs as part of
chemical engineering departments now, and PAT has been introduced in these
programs at Purdue, Michigan, Rutgers and so forth.
One
challenge that we are still working on is that the instrument manufacturers,
the sensors and so forth, do not have a common voice. We want to bring them together as an association to deal with the
common issues.
[Slide]
Moving
forward, we have hired several experts.
We have an intramural research program focused on our needs. We are learning from other industries,
especially linking to ASTM and their organizations where other industries have
already done this. We are in the
process of finalizing a collaborative development agreement with Pfizer on issues
with respect to chemical imaging and other on-line technologies. We are in the process of finalizing an
inter-agency agreement with the National Science Foundation to be part of the
Center for Pharmaceutical Processing Research.
[Slide]
I think
the main point I want to make here is that I think this started as a small
initiative but it could not have proceeded to its ultimate goal without the
cGMP initiative. So, this is going to
be part and parcel of the entire initiative that was introduced, and this will
be an example of how a science- and risk-based system approach could be brought
to bear.
[Slide]
The cGMP
initiative was introduced on 21st of August of 2002. I would like to call that initiative a drug quality system for
the 21st century initiative because it is broader than just GMPs. It also includes review. It also includes all aspects.
The
objective here was to sort of take time to step back and evaluate the currency
of our current programs on product quality, such that the most up-to-date concepts of risk management and quality systems approaches are incorporated while
continuing to assure product quality.
The latest scientific advances in pharmaceutical manufacturing and
technology are encouraged. The
management of the program encourages innovation in the pharmaceutical
manufacturing sector.
[Slide]
The
submission review program and inspection program operate in a coordinated and
synergistic manner. You heard from Dr.
Yuan-Yuan Chiu about some gaps that we need to fill. Regulations and manufacturing standards are applied consistently and
FDA resources are used most effectively and efficiently to address the most significant
health risks.
[Slide]
The scope
and timeline of this initiative--this is veterinary drugs and human drugs,
including human biological drug products.
Organizations involved are ORA, Office of the Commissioner, CBER, CDER,
CVM with involvement of CFSAN and CDRH depending on the issues to be discussed,
for example, electronic records. We
have identified 14 task groups. We have
13 active. Dr. Woodcock is chairing
that. We have a two-year timeline for
this. The immediate goals were to
complete certain tasks, which we announced in February of 2003. We have intermediate and long-term projects.
[Slide]
In your
handout packet you have a summary progress report so I will not go through,
line by line, all the activities but I want to share with you the major
highlights. We have issued a draft
guidance on 21 CFR Part 11. We have
issued a draft guidance on comparability protocols for CMC information and
manufacturing changes in a more proactive way than what we do today.
[Slide]
We are
working on different aspects: Center
involvement in the cGMP warning letters, a process for the Center to review
that. A technical dispute resolution
process for cGMP disputes is being considered; emphasizing a risk basis appropriate to the work planning process; including product specialists on the
inspection teams. I think there are
some best practices that CBER has that I think we can learn from and bring best
practices to work so we want to sort of evolve a program which integrates the
review and inspection in a synergistic manner; improving the operations of team
biologics; enhancing expertise in pharmaceutical technologies; the pharmaceutical inspectorate; quality management systems; international collaboration; and
holding a scientific workshop with stakeholders. As I said, you have a progress report in your handout. I will not go through each one of those but
if you have any questions I will be glad to try to answer those questions.
[Slide]
I would
like to sort of have your feedback on the desired state. It is I think very important that we all
define the desired state for pharmaceutical manufacturing for the 21st century,
and this desired state has to become a shared vision for public health not only
from an FDA perspective, but from an industry perspective as well as from
academia. I think it will not only help
us move in a synergistic manner towards this desired state, but also identify
scientific and engineering gaps that need to be filled and how to fill those.
[Slide]
In our
announcement on February 20th we articulated the desired state, and the principles on which this articulation was based were as follows: We recognize that pharmaceutical
manufacturing is evolving from an art form to one that is now science and engineering
based.
Effectively
using this knowledge--and I underscore knowledge--in regulatory decisions in
establishing specifications and for evaluating manufacturing processes can
substantially improve the efficiency of manufacturing as well as our regulatory
processes.
The
initiative that we have started is designed to do just that through an
integrated systems approach to product quality regulation founded on sound
science and engineering principles for assessing and mitigating risks of poor
product and process quality in the context of the intended use of
pharmaceutical products.
[Slide]
So, that
is our framework for sort of defining the desired state, which is as
follows: Product quality and
performance is achieved and assured by design of effective and efficient
manufacturing processes. Product
specifications are based on mechanistic understanding of how formulation and
process factors impact product performance.
I want to
sort of underscore that. If the
information available in submissions is limited, for example, if you have done
seven or eight pilot batches and that is what you have used for clinical
development and these are your specifications for those batches, if you don't
have a mechanistic understanding you say, well, this was your lowest dissolving
tablet so this is your specification.
So, you are setting specifications based on the capability of your pilot
batches and then you set up a system later on because you cannot achieve that
capability during routine manufacturing.
So, if you move away from that modality of specification setting to more
mechanistic understanding, it really will improve the process.
[Slide]
Continuous
real time assurance of quality, that would be sort of the manufacturing
perspective. From an FDA perspective,
how can we facilitate that? I think we
can do that as follows, if our regulatory policies are tailored to recognize
the level of scientific knowledge supporting product applications, process
validation and process capability.
Today I think we have done an excellent job of harmonizing and coming up
with minimum standards, but in doing so we do not have good means for saying
this company has done better science than that. So, we treat every company the same without recognizing the
level of scientific understanding underpinning their application in this
area. So, we could sort of open that up
and actually understand and reward or provide incentives for good science.
So,
risk-based regulatory scrutiny would lead to a level of scientific
understanding of how formulation and manufacturing process factors affect
quality and performance, and the capability of process control strategies to
prevent or mitigate risk of producing a poor quality product. So, you can see the regulatory incentives for removing the hurdles to good science sort of come through in some of these
statements.
[Slide]
I will
stop here and would really welcome any suggestions that you may have on how we
may improve our articulation of the desired state; what considerations we
should have when we sort of discuss this with our stakeholders; and how we
should align our processes better.
Thank you.
DR.
DOYLE: Thank you, Dr. Hussain. It is heartening to see the progress that
you are making in this area, and thank you for that update. Next we are going to hear from Kelly Cronin,
who is Senior Advisor for the Office of Policy and Planning at FDA. She will provide an update on the patient
safety initiative.
Update on Patient Safety Initiative
MS.
CRONIN: Thank you and good morning.
[Slide]
I am
going to spend the next 15 minutes trying to give you an overview of all that
we have been working on in recent years regarding patient safety, but with a
particular focus on our recent efforts with strategic planning.
[Slide]
I am
going to start by just giving you an overview of this strategic planning
effort, and while I go through the various areas of our focus I will give you
an update on ongoing initiatives, many of which you might have read about
recently in our announcement last month.
I also
want to try to articulate our vision for the future and get your feedback on
that, as well as articulating some of the challenges that we are going to have
in trying to make that a reality.
[Slide]
I think
as mentioned briefly this morning, there are five key areas to the strategic
plan: strong FDA; risk management that has a focus on really pre-market
activities and programs; better informed consumers and there are many different
initiatives trying to get more information out to consumers; and patient
safety, and most recently we have expanded this to consumer safety with the
intent that we would like to be broader than medical products and also
including foods, dietary supplements and animal health products. Then, the last major area of this strategic
plan is counter-terrorism. But this
will give you a sort of overall idea of how this fits into the larger effort.
[Slide]
Our
primary goal with patient safety is obviously to improve both patient and
consumer safety by reducing risks of regulated products, and by regulated
products as I just mentioned, we mean medical products, devices, drugs,
biologics, vaccines, blood, as well as products for animal health. We have three primary objectives within this
effort. We want to enhance our ability
to more rapidly identify risks associated with these products. We want to increase our capacity to analyze
these risks. We also want to be taking
appropriate actions to communicate risks and actually correct problems that are
identified and known.
[Slide]
In order
to think through all these issues and come up with a thorough strategic plan
and action plan that will be implemented in the next two years, we have put
together four different agency-wide working groups. There are 55 people involved across five centers at this point.
The first
one is focusing on how we can improve our current reporting systems. So, the adverse event reporting systems that
that are across CFSAN, CDER, CBER and CVM are all our focus.
We are
also trying to take a more careful evaluation of all the external data sources
that exist to better identify risks. We
currently have access to many in-house healthcare databases from payers, from
the CDC, from AHRQ that we do actively use, but we are trying to do a more
comprehensive inventory and address how else we could be accessing data that is
outside the agency.
We would
like to improve our risk communication efforts both to consumers and healthcare
professionals. Historically, we have
been very reliant on certain types of dissemination vehicles and we are trying
to think more broadly now about how we can partner with people and implement
some new ways of communicating directly to stakeholders.
We are
also trying to develop other approaches to control risks and reduce risks. A good example most recently is the bar
coding rule.
[Slide]
Relevant
to the adverse event reporting systems, we recently announced the proposed
suspected adverse drug reaction rule which has been many years in the
making. It was largely developed in
conjunction with industry and many different foreign regulatory bodies. In essence, it proposes new standards to
improve both the quality and the usefulness of the data that we are collecting,
but it is also going to reduce burden in that it does allow for one common set
of definitions and procedures for reporting across all of these countries.
There are
many different features to this rule.
We will be getting expedited critical safety information. We will be getting more volume of
information on medication errors. But
it is a 500-page rule and we don't have time to get into it in too much detail
today.
[Slide]
Again,
with adverse event reporting systems we have been continuing to implement the
MedSun program. We are currently across
80 facilities in the U.S. and we hope to expand to an additional 100 in the next
year. For those of you who are not
familiar with this, it is an Internet-based reporting system that is allowing
more direct interaction with healthcare professionals to get device-related
adverse event reports in.
We are
also trying to identify potential partners that are already active in the
patient safety arena, that are collecting various types of reports related to
our products. One company in particular
we have recently met with has 50,000 reports already collected across 100
different hospitals in the country, and we would like to be able to develop the
capacity to be able to get this information in on a regular basis since it is
really above and beyond anything we would normally have access to.
[Slide]
As I
mentioned before, we are also trying to look carefully at the databases we
currently do have in-house to identify and analyze risks, but we are thinking
beyond that now and trying to look across 24 different databases, both public
and private, to ascertain how we can better measure exposure and risks
associated with all of the products of interest. Many of them we have identified and tried to target to address
the concerns that we will likely have under products specifically approved
under PDUFA III since there will be some funding available to do some
surveillance of those products. We
would also like to plan forward, not just with 03 but plan to tie in to the
risk management programs that will be proposed under PDUFA III but to look at
future years and see how we can meet these needs.
[Slide]
Another
important project we have going on related to external data sources is the
Marconi Project. This has been recently
mentioned in announcements. It is
coming out of a private-public collaboration with the Marconi Foundation and an
initiative called Connecting for Health that is involved with setting clinical
standards to enable exchange of healthcare information. Essentially, what we are doing is partnering
with healthcare facilities that will be sending in periodic reports, but it
will be automatic. So, if a woman on
thalidomide tests positive on a pregnancy test, it will automatically
facilitate a report that will go into the agency.
We
actually plan to get data starting in June and this really will be our proof of
concept study. We hope to expand this
effort with the healthcare providers that are participating now and look more towards signal detection and
trying to identify unexpected events, as well as try to perhaps tie this into
some active surveillance of the products that will be marketed or approved
under PDUFA III.
[Slide]
We are
also going to be developing a strategy specifically to facilitate this ability
and we have the interest of many of the providers that are already engaged in
this effort. There have also been other activities ongoing in the Department of
Defense in this area. So we hope to try
to learn from everyone who has already been involved to try to use the tools
and come up with a better way for real-time assessment of product risks.
Of
course, there are some concerns with HIPAA given that we will likely have
access to electronic medical records.
We have attorneys with expertise in this area who are very much involved
with the working groups and are thinking through these issues carefully.
[Slide]
In risk
communication, we have recently completed an inventory of all the types of risk
communication efforts that we have going on across the agency to try to get a
better handle on our baseline. We think
there are some known weaknesses but we do really need a more careful evaluation
of some of our key ways of communicating to better understand how we can
improve them and also build on them.
For example, we have known historically that the package insert has a
lot of very valuable information but it is perhaps not as user-friendly as it should be for most physicians.
We have
also started exploring new ideas as to how we could perhaps get more
information out on a regular basis to providers, in particular, who are
prescribing products with some known risks.
While we do have many different mechanisms for doing that now, we would
like to try to develop better ways of doing that, perhaps emailing certain
groups of physicians who are prescribing products, and doing that based upon
the idea that we would be facilitating the collection of additional information
so we could better ascertain the true risks.
[Slide]
We also
have an ongoing initiative, called the DailyMed. This is in conjunction with the National Library of
Medicine. This is also related to two
different rules, both the physician labeling rule as well as the electronic
labeling rule. What we hope to
accomplish with this initiative over the next couple of years is providing
healthcare information systems with product labeling that will be updated on a
real-time basis so that that can be fed into various types of tools like
decision support technologies that could be used at the point of care.
We also
have a variety of proposed initiatives.
Right now we would like to get better information out over the Internet
and we don't have a very well coordinated place for people to go that would
give you all the information that would be relevant and important for all of
our products. We would also like to
expand on the electronic communication tools we have already developed, like
the patient safety news, which should be relatively low cost and easy to do.
[Slide]
I think I
mentioned before that we have recently proposed the bar coding rule, several
weeks ago. This will require bar codes
on all prescription drugs, over-the-counter drugs packaged for hospital use,
vaccines, blood and blood components.
This will allow for unique identification of these products. They will be able to scan them in and
identify them through NDC codes. This
is expected to facilitate the uptake of scanners across the country at
hospitals and, in essence, prevent a lot of medication administration errors
which are very prevalent. So, things
like the wrong dose, the wrong drug or the wrong time to administer a product
could be avoided.
I mentioned
before that our last group has focused on risk prevention and control. We are identifying many different actions
that we could take under our current statutes to try to correct or prevent
problems. Most recently we have been
talking about how to take a more risk-based approach to recalls. While these ideas are early in our thinking,
we believe this could perhaps improve in a more systematic way how we
handle a lot of our actions.
We also
have recently released concept papers on risk management and pharmacovigilance
and there is actually a series of public meetings ongoing this week downtown to
discuss these concepts. We hope that in
conjunction with these efforts we will be able to better prevent risk given
that, at the time of approval, it is more likely that the industry will be
engaging with us to figure out how we can work together to prevent risks.
[Slide]
We have
various different partnerships that are under way or at least being
explored. With CDC we have had several
discussions about how we could improve our data collection by adding on
specific modules for drugs and devices and potentially foods through their
existing surveillance systems. There
seems to be a lot of interest and support of this idea, and it really could
drastically improve our ability to collect information related to
device-related infections and adverse events that occur in the emergency room.
With AHRQ
we have been exploring different ways that we can facilitate data collection
specifically with adverse events. There
has been a portal that has been under discussion for quite some time now that
could perhaps facilitate the collection of data from hospitals.
We would
also like to share the expertise that we have in this area. Legislation is pending on patient
safety right now that could create a national patient safety database. We would like to work with them and be
involved in that effort in thinking through how we might design and access data
that would be coming into that database.
With the
VA we have been under discussions as to how to sort of continue our
collaboration with them. Historically,
we have used their pharmacy benefit management database which has been very
useful for specific projects. But we
would like to expand our efforts with them and use their data sources on a more
regular basis. They are actually quite
ahead of the curve in terms of electronic medical records. They have a variety of health services researchers
and pharmacoepidemiologists who are very interested in collaborating with us.
[Slide]
We have
also been trying to think about how we can work with different outside
organizations to improve communication.
Most recently we have been discussing this with the Joint Commission. We have several projects that we would like
to pilot with them. In fact, we have a
meeting this afternoon with them. They,
in particular, have a lot of interest in root cause analysis associated with
medical errors and would like to be a part of getting risk messages out to
various organizations.
[Slide]
In the
future, we would like to obviously build up our spontaneous reporting systems
but, given the voluntary nature and the fact that it is passive surveillance,
we would really like to move to a system that would be much more automated and
less reliant on healthcare professionals to take time out of their busy days to
report to manufacturers and to us. So,
we really would like to be using more automated tools to collect data, to
analyze data and the Marconi Project is really the most relevant example of an
effort there.
We also
need to make sure that we are going to have a flow of information across
information systems. Right now there is
a lot of effort under way to develop standards to facilitate this but we feel
we really need to be part of this effort to make this type of thing work. Again, we want to be continuing to use
healthcare databases and other outside sources so we can improve our ability to
monitor risk on a more real time basis.
[Slide]
We feel
that the better we become at rapid detection of adverse events and medical
errors, the better able we will be to take steps to actually prevent them. We really do want to improve our risk
communication with the intent that informed healthcare professionals and
consumers are going to be able to make better decisions about their role in the
healthcare system, and also be able to prevent obviously the many adverse
events, morbidity and mortality and associated healthcare costs.
[Slide]
As I
mentioned before, we have many different challenges. Of course, we have budget constraints. We have our two-year action plan that is focused on being
resource neutral, meaning that we are not anticipating any additional funds. So, we are trying to establish what is
feasible given the FTEs we have across the centers and the expertise that we
have across the centers, but we also realize that we are going to have to rely
on partnerships with healthcare providers, with other government agencies, with
accreditation bodies to really carry a lot of this through.
Our new
initiatives are also very IT intensive.
We realize that we have to be thinking about a lot of new and innovative
ideas in the context of consolidation which is being directed by the
Department. We really do at this point
have less control over how we are spending our IT resources when we really need
to be thinking creatively about how we are going to be making electronic
labeling a reality and getting that directly to the provider. Also with some limited resources, we have
some issues with trying to retain and recruit talent.
We also
have legacy IT systems that we are going to need to transition out of but, yet,
we can't interrupt the functioning of our programs. So, these are all considerations that we need to keep in mind as
we are moving forward. I think that we
are trying to deal with our barriers and our obstacles as best we can, and we
are being very creative in our use of partnering and what-not, but these are
challenges that are not going to go away.
Thank you.
Questions and Discussion
DR.
DOYLE: All right, we are getting a
little bit close to lunch time so, with Dr. Woodcock's permission, we will save
your presentation until after lunch and initiate some discussion of our last
two presentations. So, if anyone has
any thoughts or comments, questions?
Dr. Hussain did give us questions that he would like for us to comment
on relative to how FDA might improve their articulation of the desired state,
considerations for communicating the desired state to stakeholders, and any
additional considerations for aligning FDA's activities to ensure efficient
progress. So, with that, any
thoughts? Dr. Shine?
DR.
SHINE: A question for Dr. Hussain and
also for Kelly. Dr. Hussain, there is a
little paragraph about Team Biologics in your report but the overall flavor of
the report is about drugs and drug manufacturing, and the expertise that you
described was pharmacy, chemical, chemical engineering. Clearly, one of the areas that has been most
troubling has been manufacturing processes for vaccines and the biological
soups that have to be created to do that, and also the concerns about what has
happened with vaccine manufacturers not being in a position to continue
production. Could you give us some
sense as to the extent to which that is a focus for this initiative and what we
are doing with regard to addressing that particular element? Then I have a question for Kelly.
DR.
HUSSAIN: My presentation started with
the PAT initiative which is focused more on traditional pharmaceutical
manufacturing, and within that context I said, you know, chemistry, pharmacy,
engineering. So, that was the focus of
the PAT initiative and that continues to be the focus.
I think
in many ways the opportunities for improving efficiency are tremendous, but
vaccines and so forth are far more complex systems with respect to the level of
scientific understanding we have, so
the PAT initiative started with that focus in mind. The GMP initiative actually is trying to address all pharmaceutical
human drug manufacturing including vaccines.
So, within that context that is being addressed, but with respect to the
PAT, we have kept that limited at this time to the traditional pharmaceutics.
DR.
CARBONE: We do have representation in
that group, and I think the encouraging component is, as was mentioned by Dr.
Hussain, that CBER's plans include product specialists in inspections, and this is a
plan that is under consideration for adaptation and adoption. So, I think that the biologic-specific
issues, although not prominent, certainly will be part of the consideration and
can be increasingly so as the need arises because you are absolutely right
about the complexity and specific issues in manufacture of biologics, and
vaccines are an example.
DR.
SHINE: Given the challenge in this
area, Mr. Chairman, I would hope that sometime in a future meeting we might
address the question of the status of vaccines and vaccine manufacture. I think with the crises that have occurred
with regard to childhood vaccines, vaccines for terrorism and so forth, it is a
really pressing issue that I think the agency has a great deal to contribute
to.
I just
wanted to ask Kelly a couple of things.
First of all, congratulations on getting bar coding. As you well know better than any of us, the
VA has been doing that for a long time.
There are two issues. One,
Marconi is a hospital-based experiment?
MS.
CRONIN: Right now it is, although there
are some providers that are involved that have vertically integrated healthcare
systems where we could potentially be getting data across settings of care.
DR.
SHINE: That would be terrific, and the
VA is another example of being able to do that because we really don't know
very much about what is going on in the ambulatory arena, and beginning to get
some meaningful data there would be extremely useful.
The other
question I have is, as you are probably aware, when you try to get healthcare
providers to pay attention to the future the hospital administrators are
interested in a microsecond but the docs are not. The docs are resistant.
Recent polls have demonstrated that only about a quarter to a third of
docs really acknowledge the importance of errors, and so forth. Part of the reason is they believe it is
something that takes place somewhere else.
The most successful programs have been ones in which a hospital has
looked within its own organization and identified the error rate in their
institution and been able to say to the medical staff we have just as much of
a problem here as they do at the hospital across the street where you think
everything is happening. Is there any
mechanism, as you develop this information--I am not talking about public
reporting or whatever; I am talking about the question of any kind of feedback
mechanisms so that we can get information to people that will get them to take
seriously the notion that there is a problem in their environment as opposed to
somewhere else?
MS.
CRONIN: We are thinking about various
ways we can facilitate communication and feedback once risks are
identified. We haven't really worked
through all those ideas yet. We hope to
discuss that at an upcoming brain-storming session with many of the providers
that are involved in Marconi. But we
realize that quality improvement and prevention of risk is really an integral
part of this effort and in order to really get people to buy into this effort,
since it is really above and beyond any regulatory effort and is really just
relying on their interest and their participation, we will have to be
communicating what we learn back to them in an effective way so that they can
then turn to prevent risks and improve quality in their systems.
DR.
DOYLE: Dr. Woodcock?
DR.
WOODCOCK: Yes, I have a comment about
this. One of the more promising
discussions I feel we have had has been with some of the payers because for the
data you are referring to, you have to talk to the practitioner about their
practice pattern. Some of the payers
are actually willing to collaborate with us in feedback mechanisms around
prescribing patterns and consequences, and so forth. That type of feedback is very personal, of course, but it also is
very effective in changing behavior compared to exhortations or general
communications, and so forth. So, there
could be some interest in actually communicating to patients who are on
specific medicines about the risk that would pertain to that medicine.
DR.
DOYLE: Dr. Thomas?
DR.
THOMAS: I have a question for Dr.
Hussain. Where do you think you are
going to get the next generation of PAT experts or professionals? For example, in schools of pharmacy it is
very difficult to find a so-called physical pharmacist. Schools of engineering, while some may have
very strong chemical engineering may not be interested in this sort of
manufacturing process. Also departments
of chemistry, I mean, the medicinal chemist obviously isn't going to fulfill
the criteria that you need for a PAT.
Has anyone done any manpower surveys?
Has anyone looked at curricula within the U.S. academic institutions
with regard to this type of training?
It used to be more prevalent but, for whatever reason, it is not
something that young people are going into.
DR.
HUSSAIN: Just to sort of reflect on
that fact, before I came to the agency I was on the faculty of a school of
pharmacy for nine years and I saw the erosion of the physical sciences from
those curricula. I think the sensitive
issue is that schools of pharmacy have a very important function
from a patient care perspective and that is where the focus has been and will
continue to be. So, the gap that
remains is industrial pharmacy, physical pharmacy and where this information
will be utilized to train individuals.
The trend has been that schools of engineering, especially chemical
engineering, have picked that up. For
example, the Rutgers School of Pharmacy essentially has moved away from its
industrial pharmacy program but the School of Chemical Engineering now has a
very mature pharmaceutical engineering program. So does the University of Michigan. Purdue has still maintained its strong focus on industrial
pharmacy with the PAT-based applications.
So, there
are now at least three focused programs that at least we have direct contact
with which have already moved in this direction. But what I would like to say is that from a chemistry
perspective, process analytical chemistry actually has evolved over the last 30
years as a mature science. So, you do
have opportunity to tap into that pool of talent and then sort of combine that
with the pharmaceutical know-how. I
think it has to be a team approach.
So, in
the short term the solution is to bring a team concept to this with engineers
from other sectors which have the know-how for some of these. Eventually what I see is that some of these
programs will mature and will provide the know-how or the talent base for the
U.S. But that is a major concern. When
we have spoken to a number of companies about where the new PAT-based plants
are going to be, many have said it is going to be Germany, not the U.S., and that is
precisely the reason why. The German
education system has maintained a strong base and is providing that
know-how. But I think the concern I
have always had is that we need to have that base in the U.S. also. That was the reason for sort of partnering
with the National Science Foundation and creating some opportunities to have
this talent pool.
In short,
I think we will have the talent pool as a team concept but eventually this will
take off. The University of Michigan
actually will have a distance learning program based on PAT systems and so will
Purdue University. So, those
curricula will be established very soon.
DR.
THOMAS: Thank you.
DR.
DAVIS: You mentioned, Dr. Hussain, the
use of case reports to help work up the desired state. I would suggest, sort of thinking out of the
box, because you don't have a lot of training programs, that you might
try case reports as a teaching tool to provide information to those schools
that aren't so strong. You know, I am
not a business or MBA person but I often look at the Harvard Business
Review from a learning perspective.
It serves as a great way to get information out using case reports to
study. So, I would suggest scientists
need to consider doing the same thing where we have training programs that are
deficient in some specific area, so instructors might take this information
and at least use it as a basis for discussion.
The
question I have for you though, I notice you had ORA as a part of the
team. How are you anticipating these
new technologies under 21 CFR Part 11 from the start? I was very pleased to see you had ORA. I am just wondering what the input is there in applying 21 CFR Part 11
to these new technologies getting started at the ground level?
DR.
HUSSAIN: Let me go to the first
question. I think the point is well
taken in the sense that I think case studies, case reports would really be
good examples. What has been quite
gratifying is that companies have stepped forward with some examples and have
shared those publicly. For example, I
think Bristol-Myers Squibb came up with a complete case study presentation at
our PAT subcommittee, and this is how it can be done; this is how they are
doing it. So have Pfizer and
others. So, those really have helped us
sort of give concrete form to the concept itself.
At the same time, I think the curriculum that we have developed for PAT
staff training and certification actually has become a model for academia to
adopt. So, in this instance we are
actually providing a curriculum to academia.
So I think that was a positive step of our deliberations on that
advisory committee.
On the
issue of Part 11, I think we did realize that the interpretation of Part 11
could have created some of the hurdles for new technology and innovation. Therefore, I think we should
draft a guidance as part of the cGMP initiative. As we move forward, I think the integrated team concept is the
only way forward and I think the centers are working hand-in-hand to get there
not only with respect to computer validation, software validation and Part 11
issues but every aspect of science, because we will be moving to more
sophisticated statistics-based analysis and I think that is a learning curve
that needs to be sort of understood as well as applied appropriately.
MR.
MARZILLI: This is John Marzilli, from
ORA. I wanted to echo that sentiment
and just to add that Dr. Hussain, with Dr. Woodcock's leadership, has been
involved with our senior management staff at our past two senior staff meetings
where we have presented this to our regional directors and district
directors. On the GMP initiative since
its inception, Dr. Woodcock and company have worked closely with our Division
of Field Investigations and our field staff from across the country to our drug
field committee. Our Chair, the
District Director, Doug Ellsworth from the New Jersey District, as well as our
Regional Director, Susan Suderberg from the Central Region and members of that
committee have worked closely with these work groups and we have members on
each of these work groups.
Because
we are dispersed across the country, we try to have our folks come in as much
as possible to participate and generally we have been active participants from
day one in these initiatives. I want to
thank Dr. Woodcock for bringing us on board, because it is important to have
the field organization at the table from day one. So, we have been there.
Thank you.
DR.
DOYLE: Did that cover your points, Dr.
Davis? Dr. Shine?
DR.
SHINE: What kind of metrics are you
going to use to evaluate this program in the future?
DR.
HUSSAIN: Well, I think we have been
giving some thought to that. One clear
metric is actual real-life applications and moving towards adoption of some of
these technologies, in that regard, bringing technologies that are already
existing in the "don't tell" mode and actually having some regulatory
utility and benefit from that.
In one
sense, I think it would be prudent on industry's part to make a move in this
direction, and that essentially would indicate that the hurdles that are
perceived or real are sort of being removed.
The metric for that would be simply the applications that will come
through. That will happen over a period
of time. But most of all, I think sort of
a metric in terms of what has already occurred is the almost emotional debate
of the current state of pharmaceutical manufacturing in terms of what it was
and what it should be. The workshop
that we have held I think brought that debate out, moving towards a consensus
to the desired state. I think we have
established that.
But in
terms of collaboration between the centers and the ORA, I think with respect to
technical issues and disputes that might come about, whether we resolve those
quickly or we actually prevent those issues from happening, those are all a
potential source of future metrics that we could come up with. So.
DR.
SHINE: For example, will there be any
attention paid to cost issues, to interruptions in production, some of the
things which are sort of the end measures of what the outcome is--quality of
product? I just want to get some sense
at the end of the day of what does this do for the industry.
DR.
WOODCOCK: Yes, or for the public. We are going to look at that. We have an evaluation group that is trying
to devise how we would measure whether we are achieving the desired outcome of this
initiative. So, your suggestion is a
good one and we agree. I mean, we would
hope, like Ajaz said, for some worked examples from industry because the preliminary
indications we have, which were presented previously at the Science Board, are
that there could be a major reduction in cost after the initial
investment. So, we look forward to
those. Those are real-life applications
that would demonstrate some of the benefits.
But we also need to further develop how we measure whether it is
successful or not.
DR.
DOYLE: Thank you, Dr. Woodcock. Dr. Laurencin?
DR.
LAURENCIN: This is for Kelly
Cronin. How do the adverse event
reporting strategies that you have compare to other countries, say, like
Canada?
MS.
CRONIN: We actually haven't considered
comparing our systems to other countries.
I think with the SADR rule there are probably going to be more
similarities than differences, once that gets finalized. But we do hope to improve our systems in
terms of trying to have better outreach and increased awareness of the
importance of reporting. We also hope
to come up with better ways of signal detection through the adverse event
reporting databases that we have in-house.
So, we have a variety of different initiatives that we are thinking
through now. Whether they will
differentiate us more from other countries, you know, I am not certain.
DR.
WOODCOCK: We have actually been talking
to Canada. They want to get onto our
database and add their reports and be able to analyze them because they don't have
the power with a much smaller population to be detecting rare adverse
events. Their system is somewhat
similar to ours. European countries are
very different for pharmaceuticals at least because they have a national
healthcare system so much of their reporting is because they are kind of
administering the payment of drugs through one system. In Japan for pharmaceuticals, I don't know
for devices, it is a totally different healthcare system where the practitioner
actually dispenses the drugs. They have
put in place some adverse event reporting systems that are focused on the
practitioner when a drug is newly launched and their early experience with
dispensing that drug to patients. So,
we are talking about a lot of different kinds of systems. It is very interesting but it is hard to
make direct comparisons because of the differences in the underlying healthcare
systems.
MS.
CRONIN: Yes, I think it is also fair to
say that they are all voluntary as well so it is passive reporting. Several years ago I did look at data across
various different countries that reported to the WHO collaborating center in
Sweden. The data, at least for this
particular class of products, was sort of spotty across many different
countries. So, I don't think that any
other system is going to have a better way of identifying the true safety
profile given that they are all passive systems.
DR.
DOYLE: Dr. Feigal?
DR.
FEIGAL: It is interesting to look at
the system in the U.K. for medical devices.
In the U.S. 90 percent of our reports come from manufacturers. In the U.K. 90 percent of their reports come
from health professionals. So, they
have a very different system and it is in part related to the fact that the
same public health system that runs the hospital systems and the primary care
systems also is responsible for the MDA, which is the regulatory authority for
devices.
They have
the ability to make some linkages that we don't. For example, they look at recalling devices as something where
they have responsibility for the whole system.
So they have recall coordinators within hospitals that are accountable
to the same body, the device authority.
So there are some interesting different systems.
There is
another layer that exists probably for all products, which is communication
between regulators of the problems they are analyzing. It is one thing to have robust systems for
signal identification and triaging and data mining and refining systems, but
one of the benefits of the harmonization efforts of pharmaceuticals and devices
has been more opportunities to share between regulators.
For
example, a few years back problems began developing with a new heart valve and
we had reports that we heard from the U.K. and from Canada. It was a product that had an earlier launch
in Europe than it had in the U.S. that alerted us to the problem while there
were still clinical trials ongoing and early marketing in the United States
where we could look at this system together.
So, I think it is going to be necessary to take a whole systems approach
to the situation and recognize that it is really a global marketplace, that
these same products are appearing throughout the world in somewhat different
orders, and if we want early signals we are not going to be able to get them
just by turning up the detection in our own systems.
DR.
DOYLE: Any other questions or
comments? I think we are getting
hungry. So, I think we are going to
take a break now and reconvene at one o'clock.
[Whereupon,
at 11:55 a.m., the proceedings were recessed to reconvene at 1:00 p.m.]
- - -
A F T E R N O O N  P R O C E E D I N G S
Open Public Hearing
DR.
DOYLE: I guess we are ready to
reconvene. Next on the agenda is public
comment. We don't have anyone
officially registered for public comment but is there anyone in the public
audience here who would have a comment?
I see no hands so I guess no public comments.
DR.
BOND: I do have one written comment
that I am just going to submit to the transcriber for the record.
[Written
comment submitted for the record:
"It is clear that FDA is not protecting the public from cancer or
other illness risks. It is clear that
FDA is allowing industry to sell any old product to the public so that the
seller makes money, and the public gets sick.
The FDA is there primarily to protect the public and all standards
should do so. If it takes 20 years of
testing to prove a product safe, that should be the standard, not anything less. These are my comments for inclusion in the
public record since I cannot attend Washington conference on scientific
standards. B. Sachau, 15 Elm St.,
Florham Park, NJ 07932."]
DR.
DOYLE: Very good. We will submit that for the record and move
on. Next we are going to hear from Dr.
Woodcock again. We are going to make
her earn her keep today. She is going
to talk about fostering technology development in relation to pharmacogenomics.
Fostering Technology Development--Pharmacogenomics
DR.
WOODCOCK: Thank you.
[Slide]
Good
afternoon, folks. I hope we will have a
lively discussion this afternoon and keep everybody awake after lunch. This continues the theme of facilitating the
introduction of new technologies through regulation. This afternoon we are going to talk about pharmacogenomics,
pharmacogenetics. I want to say that
many of the centers are involved in different aspects of this.
As was said
earlier, NCTR obviously is extensively involved in
certain scientific assays and so forth.
The Center for Biologics has a very extensive genomics/proteomics
program. But today what I wanted to
talk about is the relationship of genomics and, by extrapolation, really the
new proteomic techniques and similar techniques. I am going to talk about the genomic techniques but really these
issues apply to all these techniques in the development and regulation of
drugs. So, that is the
"pharmaco" in the genomics.
[Slide]
As I
said, this is really about translating innovative science to bedside
medicine. We know this is often a very
rocky road and there are a lot of bumps along the road, and part of the
initiative that Dr. McClellan spoke about this morning is trying to smooth that
pathway and help these new innovations translate into benefit to patients. As Dr. Shine would say, that is the outcome
measure we want to see here. We want to
see the actual benefit to patients.
[Slide]
What is
the issue we are talking about here?
This new science of pharmacogenomics, and I will define this later, and
proteomics and other similar technologies are actually being applied
extensively by the pharmaceutical industry in drug development.
They have
the potential to revolutionize a number of the processes. Most of the data that is being generated is
not being seen by regulatory agencies, partly out of concern for how the data
will be used. All right? We need an approach to this new data that is
going to be generated, and is being generated right now, that will enable the
free exchange of information between regulators, companies and, hopefully, to
the extent possible the scientific community at large. It will help advance the science and
technology and, very importantly, aid in the timely development of appropriate
regulatory policies because we cannot have this revolution upon us before we
start formulating our policy development.
There are many scientific and technical issues that are going to have to
be solved and some of the later speakers are going to address some of these.
[Slide]
Basically
what am I talking about here? I am
going to give you a little bit of background, if you will, on why this is
important. There is tremendous
variability in how people respond to medicines, and that is such a commonly
understood fact that most people don't even think about it but that is really a
major barrier to having effective therapeutics. It is also a major barrier to developing drugs and biologics
because we can't predict who is going to respond.
[Slide]
This is
true for effectiveness. For many drugs,
if you leave aside antibiotics, antivirals and so forth that are directed at
some other organism, other than a person, for many drugs the size of treatment
effect when we do randomized trials is less than 10 percent of some outcome
measure. It is a very small
effect. Many people conclude from this
that the effectiveness of drugs is very small.
Even some of our own staff will tell me this drug doesn't work; it is
such a small effect that we observe in the population.
[Slide]
You
expose a population; you randomize them and you see something like this. You see that the mean response of the drug
is enough over the placebo that, in fact, you can demonstrate a statistically
significant difference in a population but it is not anything to really write
home about clinically. This is very
common and one of the problems in drug development because you have to have
very large trials to show effectiveness of drugs.
[Slide]
If you
did trials in a different way and you sorted out the responders, what often you may
find is that most of the subjects on the drug arm look just like the placebo
subjects. In other words, they don't
respond or, you know, some of them get better, some of them get worse, just
like the placebo. All right? But there is a little group of people that
respond very much to the drug. This is
a common observation, except we don't usually conduct trials using this type of
hypothesis and we don't usually report them in medical literature according to
the magnitude of the response; we look at the mean difference between
populations. One of the goals and one
of the issues here is to predict who these people are, the small group of
people who actually do respond to a specific drug in a very positive way.
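The arithmetic behind this point can be sketched with a small simulation. The numbers here are entirely invented, not data from any actual trial: a drug arm in which only 10 percent of subjects truly respond shows a diluted population-mean effect, while the responder subgroup shows the full effect.

```python
import random
import statistics

random.seed(0)

# Hypothetical illustration: 10% of the drug arm are strong responders
# (true effect 3.0 units); the other 90% behave just like placebo.
n = 1000
placebo = [random.gauss(0.0, 1.0) for _ in range(n)]
drug = [random.gauss(3.0 if i < n // 10 else 0.0, 1.0) for i in range(n)]

# Population-mean effect (what the usual trial analysis reports)
mean_effect = statistics.mean(drug) - statistics.mean(placebo)
# Effect in the responder subgroup alone
responder_effect = statistics.mean(drug[: n // 10]) - statistics.mean(placebo)

print(mean_effect, responder_effect)
```

The population mean comes out near 0.3 units even though responders gain about 3.0, which is why very large trials are needed to show statistical significance for a mean difference of this size.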
[Slide]
What
about drug toxicity? Again, this
observation is so common we don't even think about it. Often not everybody gets the drug
toxicity. If you study drug versus
placebo you see each drug has a consistent pattern of side effects, and that is
repeated over and over again in different trials when you compare it to
placebo. That is observed with the
common as well as the rare events. Some
of these effects are attributable to known pharmacologic effects of the drug
and others, in medicine we have always called them idiosyncratic. Idiosyncratic is not defined this way in
Webster's but what we mean is we don't know what this means at all; you know,
it just happened. Actually, from a
science perspective you have to say there was a reason; there was a reason for
this. There is a reason that some
people suffer more from the pharmacologic effects of the drug than others. It can't be just random but that is how we
have thought about it. Random is
another word for we don't understand the causes; we don't understand the
underlying cause.
So, the
current physician medical approach is kind of at the level of organ
function. We say, well, people have
compromised liver function and you have to be careful here, or whatever, or we
just observe the patients and we wait for something to happen that we try to
prevent from getting worse. We can't
really often predict who are the people who are going to experience these
adverse events and why; what is the underlying cause that they get them and
some other people don't get them?
[Slide]
It is
commonly agreed, and we will probably convince you in our presentations today,
that there is an inherited or a genetic component to variability in drug
response. Here are your
definitions. Pharmacogenomics we are
defining, just for the purposes of this meeting since these are controversial
definitions but this is what we are going to talk about today,
pharmacogenomics: application of genome-wide RNA or DNA analysis to study
differences in drug actions or drug effects.
The
earlier term, pharmacogenetics, is by many people considered to be study of
genetic basis for individual differences in pharmacokinetics. The reason for that distinction is
pharmacogenetics was discovered fairly early.
We have known for a long time that different people metabolize drugs
differently, the same drug differently.
There was even early study of this and some things have been known for a
long time, using probes and so forth to find out people's phenotypes. So we were able to learn about
pharmacogenetics a long time ago for some drugs but we are learning a lot more
now because we have better methods of studying it. Today I am going to talk about the broad issue of
pharmacogenomics and some subsequent speakers will focus more or less on the
drug metabolism issues.
[Slide]
What
genetic background issues could affect efficacy and what do we know about
this? Well, we know there can be
genetic diversity of disease pathogenesis.
So people have studied, for example, hyperlipidemia and there are many
different genetic differences in people and how they handle lipids--lipid
transport, lipid receptors, all sorts of things. In some cases there is a suggestion that perhaps that may, in
fact, alter their responsiveness to certain drug interventions, and that makes
sense depending on where the variability is and how they handle the lipids.
Variable
drug metabolism is very well understood as a cause of lack of efficacy. There are some people called
hypermetabolizers. Larry Lesko probably
talked about this. You give them the
usual dose of drug and their body chews it right up so they don't really have
any drug around and so they can't respond; they need a much higher dose. So that is a cause of lack of efficacy and,
depending on how big the population of hypermetabolizers is for that particular
group, you have a certain group of people who are genetically fated not to
respond to that drug at that dose, at the usual dose.
As I
showed you earlier, we often determine mean population effects so we determine
a mean dose for the population and that is what everybody gets. Well, that doesn't work when the drug is one
that is variably metabolized in a human population.
Then,
there are genetically based pharmacodynamic effects. What does that mean?
Well, that means that the beta-adrenergic receptor may be an example of
this. People have genetic differences
in their beta-adrenergic receptor. That
is not believed to cause them to experience asthma at a higher rate but certain
people whose adrenergic receptor is different may not respond the same way to
beta-adrenergic agents. In other words,
their drug responsiveness is modified by the fact of their different genetic
background.
Most of
these things I have been mentioning here are somewhat controversial. There are studies that support this;
studies that don't support this. But
basically the story is that there are probably three different ways where
efficacy can be impacted by genetic differences.
[Slide]
What
about toxicity? Again, there are
underlying genetic contributions to the variability in the toxic response. For example, if you have certain inherited
syndromes that affect cardiac repolarization and then you are also given a drug
that happens to also affect cardiac repolarization you are going to be much
worse off than a person who didn't have that genetic background. This, along with a long QT syndrome is
something you can identify phenotypically.
In other words, you can measure someone's electrocardiogram and find out
about this but there are probably many underlying genetic problems where we
can't easily measure them with our current measures and so we give them a drug
and there is a bad interaction.
Again,
differences in drug metabolism also go the other way. We have people who are outliers in the population who metabolize
drugs very slowly. A good example is
thiopurine methyltransferase. Drugs
like azathioprine or 6MP are metabolized through the pathway and the population
has normal metabolizers--it has various metabolizers but there are people who
are very slow who lack the ordinary metabolic path. Their AUC will be about ten times higher on administration of
these drugs than the normal person--well, these are all normal variants--than
the most common person. This is very
important because 6MP for example is a cancer chemotherapy agent, and so
forth. It is used in the treatment of
inflammatory bowel disease, chronic use.
So, it is a very important thing that may result in various drug
toxicities. There is a lot of
information about this in the literature and similar cases.
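The roughly ten-fold exposure difference described here follows directly from basic linear pharmacokinetics, where AUC = Dose / Clearance. A minimal sketch, with clearance values invented purely to illustrate the ratio:

```python
# Linear ("dose-proportional") pharmacokinetics: AUC = Dose / Clearance.
# The clearance values below are invented purely to illustrate the ~10x
# exposure seen in TPMT-deficient slow metabolizers at the usual dose.

def auc(dose_mg, clearance_l_per_h):
    """Area under the concentration-time curve (mg*h/L) for linear PK."""
    return dose_mg / clearance_l_per_h

dose = 50.0                    # same dose given to everyone (mg)
auc_common = auc(dose, 10.0)   # most common metabolizer phenotype
auc_slow = auc(dose, 1.0)      # slow metabolizer: ~1/10 the clearance

print(auc_slow / auc_common)   # -> 10.0
```

The point of the arithmetic: a mean population dose is exactly wrong for the outlier, because exposure scales inversely with an individual's clearance.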
Another
example of a different kind of genetic contribution might be what I call a
toxicodynamic interaction. That is not
where you have a genetically based abnormality or you don't have a metabolism
difference; you have some normal existing state that somehow interacts with the
drug that can cause a severe reaction.
People have postulated that a hypersensitivity reaction to abacavir
might be caused by association with a certain MHC type, for example. Again, these are still being studied. We are not really sure of the association
but this is the kind of prediction that could be made.
[Slide]
How
important are these differences? You
could say, well, they exist but they are not that important. How much of the variability could we predict
if we knew the genetic substrate? That
is an important question.
[Slide]
At the
level of an individual--and that is what people always ask us, they say what
drug is right for me? What dose is
right for me? They want individualized
therapy. Obviously, at the level of an
individual a genetic difference may actually determine the drug response. I am using a very obvious one here, enzyme
deficiency. Obviously, if you pick the
wrong replacement enzyme for somebody then they are not going to respond and
that may not be completely clear phenotypically.
Also, it
may highly influence drug response.
That is what I have already talked about, the polymorphic drug
metabolizing enzymes. In some cases
these are not that important because there are multiple pathways; because the
therapeutic index of the drug is very broad; for a variety of reasons. In other cases these are extremely important
and will predict to some extent who gets toxicity; who fails to get efficacy if
you don't control the dose correctly.
[Slide]
Unfortunately,
we have to recognize that many responses of people to drugs, just like
everything else in life, are going to be an emergent property of multiple gene
products interacting with each other and with other environmental factors so
you get a system where you have multiple gene products. They all can have genetic differences. They interact with each other and it can be
very difficult to predict how that will affect their drug response.
So, many
individual genetic differences or even the patterns of differences, that Frank
Sistare is going to talk about, even transcriptional differences may not be
that highly predictive of drug response.
So, we are going to have to sort this out over time.
[Slide]
This is
kind of a picture that was just in Science about the body and actually
how it is organized. The point is that
there is a level--we are talking at the genetic level, the RNA level and the
protein level, expression level and so forth--the body though has larger systems
imposed on top of that to become a very large-scale complex interacting
system. So, we are probably naive, if
we look at a few things at the bottom, here, that we are going to tell how the
system is going to read this out. All
right? So, genetics is not
determinative, except in some cases.
[Slide]
Let's go
back to drug development, the background of how the genetics could affect drug
development. Currently, in drug
development we have a satisfactory way to determine efficacy. As I said, it has to be done on a population
basis. We randomize a whole bunch of
people; we look at what happens and then we decide whether a drug works or
not. It is the same with a
determination of drug toxicity.
It is
observational. First we expose a bunch
of animals--and I apologize to the toxicologists here, I am sure it is much
more scientific than that, but basically you expose a bunch of animals and then
we expose a lot of people and, again, it is observational. We expose a lot of people to the drug and we
write down what happens. If we are
lucky we compare it to a placebo or something so we can tell which ones are
real. In fact, even for approved drugs
the carcinogenic potential and reproductive toxicity potential is based on in vitro
and animal studies because obviously we are not going to find that out in people.
[Slide]
So, that
is drug development. It does well; gets
stuff on the market. But how can
we--we, meaning the whole enterprise--use pharmacogenetics in drug development? Number one, and that is being done now and
this doesn't really involve the FDA, we can improve candidate drug
selection. Obviously that is in drug
discovery.
We could
develop new sets of biomarkers, and that is being worked on too and I think
that is what NCTR is working on as well as others for toxic response in animals
and humans which will eventually perhaps decrease or minimize animal studies,
or at least make them much more predictive and be able to understand the
underlying mechanisms.
Perhaps
we could predict who will respond to a drug at some level; predict who will have
serious side effects and rationalize drug dosing, which would be a very helpful
thing to do.
[Slide]
In fact,
Ajaz and I were talking about this, this has a lot of parallels to the PAT GMP
initiative we are doing. The
pharmacogenomics has the potential to move us from a current empirical
process--and that is really what drug development is right now; it is
empirical, not mechanistic--to a mechanism-based process that is hypothesis
driven. Really that is pretty far in
the future, I admit, but we ought to be trying to go there because that is the
science we are seeking.
This
could in the future result in a lower cost, a faster drug development process
that would, nevertheless, result in more effective, less toxic drugs but for
smaller a population, the people to whom it was targeted.
[Slide]
How is
pharmacogenomics being used now by the pharmaceutical industry? We are going to have a later presentation by
a member of the pharmaceutical industry so I will skip over this pretty
quickly, but I want to just give you a little bit more of the background before
we get to the next speakers.
As I
said, it is being used in discovery and identification of lead candidates,
identifying targets, evaluating cellular or animal responses to different
candidates. But it is hard to make a
lot of sense right now, I think, out of a lot of this data and this is not part
of regulatory submissions, not required.
This is in the discovery and nonclinical part.
[Slide]
Within
the nonclinical it is in the path toward a regulatory submission. There are a lot of exploratory studies going
on. There are a lot of directed studies
of genes of interest. Sometimes we see
what I call an explanatory study. Well,
what is that? How is it different from
exploratory? Explanatory studies--well,
you find a toxic effect in animals. You
don't think it is related to people or other animals and you can look at the
responses elicited during that toxic response and you can perhaps show they are
different than in other animals or maybe even in human cells, or whatever, and
that explains that this can't be generalized.
Then, people are working on predictive toxicity patterns.
[Slide]
What
about human studies? Well, the real
hope is that we can sort disease syndromes into subgroups based on genetic
differences and pathogenesis and these subgroups may have different responses
to treatment. This is undoubtedly true
because many of our diagnoses of disease are, again, observational based on
syndromes and these syndromes may contain many different disease pathogenesis
groups. Almost by definition, these
groups are going to respond differently to different drugs. Also in humans,
people are looking at genetic or phenotypic tests for metabolizer status to predict dosing,
rational dosing, and to see how metabolism interacts with dosing.
[Slide]
Then you
can go the other way and you can look for genetic differences in responders
versus non-responders. Some of that is
being done. People respond to a drug. Are they somehow different and you find out
in the genetics versus non-responders?
Can you look at RNA expression and look for pharmacodynamic differences,
different pharmacodynamic responses?
There is
a hope that soon--and I know nothing about this and whether this is a
reasonable hope or not that some day soon we could find some genetic
explanation for the severe or catastrophic side effects that people experience
nowadays from drugs. This is really a
bad problem because we have drugs out there that really help people, help large
numbers of people and, yet, every once in a while, and completely unpredictably
from what we know now, they cause catastrophic side effects that are fatal or
terrible. We don't have a way to
predict who is at risk right now, and that is a huge problem I think for
medicine.
This again
is a typo and I am sorry. This is
supposed to be SNP and gene expression screening. Other folks are going to talk about this more but these are the
kind of things that are being done in people.
What I am
going to do now is pause and I am going to have the other speakers come
up. We are going to hear from two FDA
speakers next about what we are seeing and what we are doing from our
side. Then we are going to hear from industry
about what they are doing and what their concerns are about the sharing of
these data. Then I am going to come
back and present our proposal to deal with this and then we are going to hear
from a medical ethicist from NIH, who has kindly consented to talk about this
problem, and then we can have a general discussion of the proposal. Thank you.
DR.
DOYLE: Thank you, Dr. Woodcock. Next we are going to hear from Dr. Frank
Sistare, who is the Acting Director in the Office of Testing and Research at
CDER. Dr. Sistare is going to share
with us a presentation on pharmacogenetics in preclinical studies.
Pharmacogenomics--Preclinical Studies
DR.
SISTARE: This is a good time slot
because we saved an hour for the open public discussion so that means I have an
hour to talk--
[Laughter]
I think I
have 40 slides here. I am going to
breeze through a bunch of them but it is going to be a challenge to get through
in 30 minutes but I am willing to take on that challenge. Let the record reflect it is 1:27!
[Slide]
In fact,
I am going to alter my original charge.
My original charge was going to focus just on preclinical but we realize
that there would be a big gap missing if I focused only on the preclinical and
we decided that a fair divvying up of the action was for me to focus on RNA and
Larry is going to focus on DNA.
[Slide]
What I am
going to do is give you an overview first of the technology and get everybody
up on the same page, and then talk about some of the medically related
applications of pharmacogenomics relevant to CDER's responsibilities.
I am
going to highlight some of the concerns and issues that we have already heard
or that we have already experienced ourselves, and I categorize those in three
major areas, technical, biological and procedural, that have been raised at
that triad interface between patient care, drug development and regulatory
oversight. Finally, I am going to talk
about what CDER is doing to address those issues presently.
[Slide]
Again, to
get everybody on the same page, this is a very simplistic cartoon that shows
the double helix DNA molecule. From the
DNA, RNA is made and then from RNA protein is made. They are not all made in the same place obviously. There is a little trafficking that needs to
go on there. But what I am going to
talk to you about is the RNA component of this process.
[Slide]
RNA is
measured in a number of different ways.
RNA can be converted to cDNA and cDNA or DNA can be amplified through a
process of PCR. This has been a well
tried and well tested and highly depended upon method of measuring RNA in a
quantitative way.
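As one illustration of how quantitative PCR data are commonly reduced to a result, the widely used 2^-ddCt calculation compares the cycle thresholds (Ct) of a target gene against a reference gene in treated versus control samples. The Ct values below are invented:

```python
# Relative quantification by the 2^-ddCt method: each additional PCR cycle
# needed to reach threshold means roughly half as much starting template,
# assuming ~100% amplification efficiency. Ct values below are invented.

def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Target gene crosses threshold 2 cycles earlier after treatment while the
# reference gene is unchanged -> about a 4-fold induction.
print(fold_change(22.0, 18.0, 24.0, 18.0))  # -> 4.0
```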
[Slide]
But in
1996 what really stimulated the focus on measuring RNA molecules as a function
of disease and drug intervention was the application of the DNA
microarray. Again, this cartoon is very
simplistic but what it shows is a two-dimensional surface to which DNA
sequences are attached. These serve as
probes. A sample can be prepared. RNA is isolated from those samples of interest
and they are labeled in some way, copies of those things are labeled in some
way. Then, the labeled copies of the
samples, together with the two-dimensional capture matrix, are hybridized and
you can measure a signal ultimately.
There are a lot of technical aspects to being able to get the data out
of this process but this is essentially the process.
[Slide]
There are
a number of different platforms, a number of different specific technologies
that have been used to measure RNA or gene expression. There is the dual color approach in which a
control and a treated sample or a normal and a diseased sample can be hybridized
to the exact same platform. That
platform exists in several different ways.
You can have a long 500 to 2000 base pair length probe sequence attached
to that surface, called the cDNA microarray.
You can have small oligonucleotides attached to that capture surface, on
the order of 30-60 nucleotides in length.
Or, it can have in situ synthesized oligo microarrays. These all exist out there in the
marketplace.
[Slide]
Then
there is another whole species of microarrays.
There is the GeneChip from Affymetrix which is a high density probe
array and 500,000 specific 25mer oligonucleotides are synthesized, and this is
a single color approach. So, control
and treated, or diseased and normal, samples are applied to two different probe arrays.
I said
that just as background to get into some of the other issues I will be talking
about later about technical aspects of the technology that need to be
established.
[Slide]
What I am
doing here is just giving you a feel for the fact that these things are
probably not a flash in the pan. We are
not talking about cold fusion here I think.
This is a technology that is on the rise and it is probably here to
stay, and it is going to continue to evolve.
There are probably a lot of different ways of making this point. We can look at publications; we can look at
commercial sales. This is just showing
GeneChip sales and the sales of the machinery to analyze these have quadrupled
in the last four years.
[Slide]
Why are
people really so interested in measuring RNA?
Essentially, the RNA is serving as a protein function signal. Proteins are what is making alterations in
activity of the cell. So, something is
perturbing the cell, working through proteins, to alter changes in RNA
concentration and we have the capability of measuring tens of thousands of
these things simultaneously--an incredible capability.
Well,
what are you going to do with tens of thousands of measurements at one time, in
one experiment, at one dose, at one time in one animal? You have to measure collections of these
things. You really need computers
obviously. You need a database. You need to be able to really efficiently
analyze the experiments, individual sets of data across patients, across
animals, across times. You need a
relational database to help you with that, to identify changes and the function
of what the proteins are doing.
[Slide]
I am
going to spend some time, as I mentioned, and not just talk about the
nonclinical but I am going to talk about some clinical. I think this really kind of excites the
imagination and I think it tells you that the future is now. Okay?
We are being challenged presently with some very exciting data that is
out there in the peer-reviewed public literature.
So, very
quickly I am going to go over the next six or eight slides just to impress upon
you the reproducibility of this information.
This is a publication, a collaboration of two groups, The Netherlands
Cancer Institute and researchers at Rosetta Inpharmatics, who have recently
been bought out by Merck. The focus, as
you can see from the title of the publication in The New England Journal
is on gene expression signature as a predictor of survival in breast
cancer. There was also a paper in Nature
and I am going to actually present data from both of those.
[Slide]
So, the
question that these investigators are asking is can gene expression profiling
be used to improve prediction of clinical outcome as it relates to breast
cancer? The specific aims were to
identify patients at risk to develop distant metastases and to accurately
select for adjuvant therapy--who to treat; can we determine better who to
treat, who not to treat or who to avoid over-treating?
[Slide]
Proof of
concept in a sense here is this first unsupervised cluster analysis of breast
cancer expression profiles. What we
mean by "unsupervised" is we are looking at correlation between and
among gene transcript changes as a function of the samples they derived from. You can see across the top, there, we are
looking at 5000 genes. Which genes behaved
similarly; which ones were not together; which ones went down together; which
ones correlated, in a sense, as a function of the tumor samples. Of course, on the left side you can see
which tumors are behaving similarly as a function of the gene expression
profile that they exhibit using a microarray.
Red means
up and green means down. The
transcripts are up or down and you can see that there is a pattern there. It isn't just random noise; there is a
pattern there. In fact, there are 98
tumor samples there. The 20 at the
bottom are derived from women with germ line mutation in BRCA1; the 78 above
are not. You can clearly see that
pattern in a totally unsupervised approach.
[Slide]
So, you
dig down deeper and you look at those 78 tumors from women who did not have
the BRCA1 mutations. You can whittle
those 5000 genes down to 70 significant prognosis genes using a standard
T-statistic based approach. This is a
supervised approach, so you look at phenotypic outcome. These are tumors that were derived from
women over a period of 10 years, starting 20 years ago. So, you had outcome on these women in terms
of whether they developed distant metastases and also whether some of them
died. I will show you some of the plots
of that data. But you can clearly see
patients with what we call good signature, with that red section of genes up in
the right-hand corner, there, as opposed to women with a bad signature with a
poor outcome.
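The supervised, t-statistic-based whittling described here can be sketched on synthetic data: score every gene by a two-sample t statistic between good-outcome and poor-outcome tumors, then keep the top scorers. The gene counts, group sizes and effect sizes below are invented for illustration.

```python
import random
import statistics

random.seed(1)

def t_stat(a, b):
    """Two-sample (pooled-variance) t statistic for one gene."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * statistics.variance(a)
              + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / (pooled * (1 / na + 1 / nb)) ** 0.5

# Toy expression data: 200 genes x 30 tumors per outcome group;
# only genes 0-9 are truly prognostic (mean shift of 2.0 units).
n_genes, n_tumors = 200, 30
good = [[random.gauss(2.0 if g < 10 else 0.0, 1.0) for _ in range(n_tumors)]
        for g in range(n_genes)]
poor = [[random.gauss(0.0, 1.0) for _ in range(n_tumors)] for g in range(n_genes)]

# Rank genes by |t| and keep the top 10 "prognosis genes".
scores = sorted(((abs(t_stat(good[g], poor[g])), g) for g in range(n_genes)),
                reverse=True)
top = sorted(g for _, g in scores[:10])
print(top)
```

The truly prognostic genes dominate the top of the ranking; the rest of the genes, like the 4,930 discarded in the study, contribute mostly noise.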
[Slide]
These are
patients that are lymph node negative, the same 70 prognosis genes lymph node
negative. I am not a clinician. I don't know a lot about the practice of
medicine but I have a sister-in-law and she was diagnosed with breast cancer
and she was lymph node negative and I breathed a sigh of relief. I think a lot of people in the family
breathed a sigh of relief. What this is
saying, however, is that even amongst those women that are lymph node negative,
14 of 40 of those women went on to develop metastases and they had a poor
signature. Of the ones with the good
signature, only one of 33 went on to develop metastases. So, it is looking like we have something
even better than current clinical practice.
[Slide]
Patients
lymph node positive, sort of the flip side of the coin--women with poor
signature, yes, the majority of the ones that had metastases had poor signature
but there are some that had a good signature there even though they were lymph
node positive, and 42 out of those 44 had good outcome.
[Slide]
Again,
just putting it all together and looking at the outcome, good signature; bad
signature--I don't want to spend a lot of time on the study; I don't have a lot
of time to spend on this study but you can see that same red/green segregation
of what is going on.
[Slide]
Here are
the Kaplan-Meier plots, again showing that for prognosis the good signature
versus poor signature outperforms current medicine. That is just an example.
That is an example I spent a lot of time on.
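A Kaplan-Meier curve of the kind shown on the slide takes only a few lines to compute; the follow-up times and event flags below are invented to mimic a good-signature group (one event) and a poor-signature group (several events), not the study's actual data.

```python
# Minimal Kaplan-Meier estimator: survival drops multiplicatively at each
# event time. Times (years) and event flags (1 = distant metastasis,
# 0 = censored) are invented, not the study's data.

def kaplan_meier(times, events):
    # Ties: process events before censored observations at the same time.
    order = sorted(zip(times, events), key=lambda p: (p[0], -p[1]))
    at_risk, surv, curve = len(order), 1.0, []
    for t, event in order:
        if event:
            surv *= (at_risk - 1) / at_risk
            curve.append((t, surv))
        at_risk -= 1
    return curve

good = kaplan_meier([2, 4, 5, 7, 8, 9, 10, 10], [0, 0, 1, 0, 0, 0, 0, 0])
poor = kaplan_meier([1, 2, 3, 3, 5, 6, 8, 10], [1, 1, 1, 0, 1, 1, 0, 0])
print(good[-1], poor[-1])  # final (time, survival) point for each group
```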
[Slide]
There is
prostate cancer; there is brain cancer.
[Slide]
Here we have
GI cancer of some sort.
[Slide]
We have
kidney cancer. There are a lot of
different cancers these things are being applied to. I believe Brian is going to show an example not of cancer but
psoriasis where, again, you can use this approach.
[Slide]
So, what
is the approach? Thematically, this is
the approach. Based on phenotype, you
can visually or by some objective accepted method determine two different ends
of the spectrum, green and blue, metastases, no metastases within a defined
framework, and then look for differences.
Can we define differences using expression profile?
[Slide]
What you
find is that when you do those expression profiles, yes, you can define
differences. Not only do you see these
global differences in just those two broad phenotypes, but you find subclasses
within those observable phenotypes as well.
This is a process that goes on in an iterative way getting better, and
better, and better at distinguishing what look like the same disease or the
same people using the technology that we have now. So, this is saying we have a very powerful technology and we can
do things better.
[Slide]
This is
just a visual graphic that says everybody might look the same but we can
clearly classify them in different categories using some of the new
technologies, genomics, proteomics.
[Slide]
Let's
think about what we just did. We took
5000 genes that we applied in the clinical scenario and we reduced that down to 70
genes, which really did all the work for us.
That was looking at outcome over a ten-year period.
What if
we could do that same thing in toxicology?
We don't have to wait ten years.
We can treat animals with or without drug and look at differences that
occur in a much faster time with a lot faster throughput, and we can go through
that same process of turning 5000 genes down to what are the critical genes
that actually serve as the business end to help us make these kinds of
decisions or classifiers. I hate to use
the term prediction at this point in time.
Drug developers have to predict; clinicians have to predict but
toxicology reviewers I think classify and integrate data.
[Slide]
This is
shown on a slide where we are looking at signature projection scores. It allows accurate clustering of compounds
and allows identification of subparts of a compound's effect. One more point: 5000 genes whittled down to
70 leaves 4930 genes that didn't contribute a whole lot there. I think that is where a lot of angst comes
when we start talking about toxicology.
There are a lot of genes that are changing and a lot of signals there,
and we have to do a lot of good science to figure out where the signals are
that we need to pay attention to and what we shouldn't get into an uproar
about.
But we
have the capability of doing that to a certain extent already, and I think this
slide from our collaborators at Iconix makes this point very clearly. This class of compounds, statins--you can
see all the statins along the bottom line.
Each one of those squares does not represent one gene but represents a
signature of 6-20 genes and they have an overlapping redundancy, some
overlapping genes which go into that signature and some which that are not
totally overlapping. So it is like that
70 gene score, only now we have different sets and we are talking about
6-20. But you can see the consistency
with the statins.
Here we
are looking at pharmacology. The
statins are doing something desirable.
If you put another compound in there and it performs like a statin you
should see that signature come up and that tells you something about the
pharmacology. So, we have proof of
concept established here. With the
NSAIDs you can see their signatures performing there. Clearly, the dose defines the toxin. You can push these things.
That is the purpose of a toxicology experiment, you have to push the
toxicology. We won't be happy in the
FDA unless you tell us what the toxicity is.
With the statins, if you push them high enough, you also see a
hepatotoxicity signature.
You can
see an estrogen signal over there, and then with the DNA cross-linkers,
carboplatin and cisplatin, you can see the cross-linker signature is significant
there. So, the signatures are relating
expression changes to pharmacology, toxicology, pathology, clinical chemistry,
the chemical structure of the compound and the biology.
How can
we do this? This is somewhat simplistic
but it makes the point how can we come up with a signature of a statin? Here we are showing the biochemical pathway
for how a statin works. Statin works
pharmacologically because it inhibits HMG-CoA reductase. We learned in biochemistry that there are
feedback loops, substrates, downstream factors that will regulate a biochemical
pathway. If you inhibit HMG-CoA
reductase, all those enzymes downstream don't see the same substrate
concentration. They are trying to drive
and up-regulate expression of genes to compensate for that. So, we are seeing an attempt on the part of
the animal, a compensatory response, to compensate for the inhibition of
HMG-CoA reductase.
So, you
see a pattern. All those red spots
indicate that with simvastatin all those genes went up, and it makes
sense. It makes biochemical sense that
is telling you there is some pharmacology going on. And, it wasn't just one transcript. There was a combination of transcripts which gave you the
confidence in what you are seeing here.
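To make the signature idea concrete, here is a minimal sketch of a signature score computed as the mean log2 expression change across a signature's genes. The gene names, values, and scoring rule are hypothetical illustrations, not the actual Iconix method.

```python
# Hypothetical sketch: a "signature projection score" as the mean log2
# expression change over the genes in a signature. All gene names and
# values below are invented for illustration.

def signature_score(expression, signature_genes):
    """Average log2 change over the signature genes present in the data."""
    values = [expression[g] for g in signature_genes if g in expression]
    return sum(values) / len(values) if values else 0.0

# Invented example: compensatory up-regulation of cholesterol-synthesis
# genes after HMG-CoA reductase inhibition (the statin response).
statin_signature = ["Hmgcr", "Ldlr", "Sqle", "Fdft1"]
treated = {"Hmgcr": 2.1, "Ldlr": 1.8, "Sqle": 2.4, "Fdft1": 1.5, "Alb": 0.1}

score = signature_score(treated, statin_signature)
print(round(score, 2))  # a high positive score suggests statin-like pharmacology
```

A compound with no statin-like biology would score near zero on this signature even if other transcripts moved.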
[Slide]
Here,
making the point, are a lot of different platforms and there are a lot of
different steps you have to go through to get knowledge out of that whole process,
that data generation process. You have
to worry about the RNA. Sampling
integrity is critical. There are a
whole lot of enzymatic processing steps--amplification, labeling, array
hybridization, washing, array scanning, a lot of steps in the process there.
You turn
that data into information through statistical processes, data
transformation. There is normalization,
scaling and, again, statistical processes you apply to select the key
transcripts.
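The data-to-information steps described here can be roughly sketched as log2 transformation, a simple median normalization, and a fold-change filter to select key transcripts. Real microarray pipelines use more sophisticated statistics; all genes and numbers below are invented.

```python
# Minimal sketch of turning raw intensities into selected transcripts:
# log2 ratios, median normalization, fold-change filter. Invented data.
import math
from statistics import median

raw = {  # gene: (control intensity, treated intensity) -- invented values
    "geneA": (100, 800),
    "geneB": (200, 210),
    "geneC": (50, 52),
    "geneD": (300, 310),
}

# transform to log2 treated/control ratios
ratios = {g: math.log2(t) - math.log2(c) for g, (c, t) in raw.items()}

# normalize by subtracting the median ratio (corrects a global scaling
# bias, assuming most genes are unchanged)
med = median(ratios.values())
normalized = {g: r - med for g, r in ratios.items()}

# select transcripts with at least a 2-fold change after normalization
selected = sorted(g for g, r in normalized.items() if abs(r) >= 1.0)
print(selected)
```

Only geneA survives the filter here; the point is that most of the array is discarded before interpretation begins.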
Finally,
there is the new and exciting, burgeoning and very much in need of field of
bioinformatics. If someone can put
bioinformatics behind their name I think they are on a road to success. People are hungry for these people to
explain all this information, to tell you about all these cellular processes,
biochemical pathway analyses referencing previous experiences. Database comparisons enter into that and
there is no substitute for just downright good homework, getting into the
literature and trying to figure out what is going on. Knowledge of biological truth is what we all want here, and this
is where the decision-making level is in terms of the regulator. This is where it has to occur.
[Slide]
So, what
concerns and initiatives have been raised about that triad, drug development,
regulatory oversight, patient care? At
the technical level, I have alluded to this already, with so many variables and
options in data capture--I didn't get into the details about the probes; probes
can be at different ends of the genes on these different platforms but it is an
issue. RNA sample processing,
hybridization processing, data analysis--what is the true signal? Are we measuring true biology reproducibly
and accurately? Or, is there too much
systematic experimental error to reasonably contend with? Are the data misleading us? Can we get the same true answer from data
sets on different platforms and different laboratories? Can universal reference standards help us?
These are
all concerns that were raised in the May workshop that we had. It was an open public workshop. What is a reasonably detailed and
practically useful relational data set to constitute a regulatory submission
that defeats healthy skepticism concerning the integrity of the data? We need to define that. Will these data be appropriate, ever be
appropriate for an FDA database to draw from and develop institutional
knowledge against?
[Slide]
What
about the category of biological concerns and issues? How would the agency react if an oncogene was activated? I am finally going to answer that question
for you. I am going to have fun with
this presentation; I am really enjoying this.
Would the
sponsor have to notify the FDA, clinical investigators and IRBs? This is a question that was posed to us in
May. To be fair, that question got some
answers and all those answers weren't all that pleasant for drug sponsors to
hear at this workshop in May, and it raised a lot of concerns and we have to
hit these concerns head-on and that is what we are going to try to do here, or
at least get started with that today.
Will they
be--and I put this in quotes--"more sensitive" because I am not
really convinced yet that they are more sensitive; they are earlier clearly but
are they more sensitive in terms of dose response? Will "more sensitive" gene expression changes drive
drastically lower clinical trial starting doses and prolong Phase I clinical
trials?
Which
expression alterations--and this is a difficult one and, again, this is a
critical biological question--are reliable and biologically relevant classifiers
or biomarkers of--and I have to separate these out--which are desirable
effects, which are undesirable but clearly tolerable, which are healthy and
fully compensatory responses to the exposure that we can adapt to and not be
concerned about, and which are the special category of intolerable drug actions
that are going to lead to irreversible outcomes? How are they all biologically relevant?
[Slide]
I think
Janet has a very good plan and she is going to get this out to you today to get
some feedback from you in terms of some concerns and issues raised that we can
maybe deal with procedurally. Would all
these tasks have to be done under GLP conditions? Would they all be interpreted by FDA as relevant to human
safety? Is a research information package
approach feasible? What data are
appropriate for that mechanism, what data are inappropriate? These are extremely important questions that
we need to resolve.
How will
the FDA prepare itself to work with these huge data sets in a timely
manner? That is an internal process
that we have to deal with and we are dealing with that, talking about
that. How are we going to ensure that
individual reviewers do not "prematurely" interpret, generate their own
hypotheses, or over-interpret those expression alterations whose biological
significance has not been scientifically established?
How will
the FDA communicate which expression alterations have reached a scientifically
mature level of understanding and can rationally be considered relevant safety
biomarkers?
[Slide]
Here is
the sort of traditional drug development process. You think you understand something about disease, the biology of
disease and you feel you have a validated target. You put a compound there and you see if you can alter that target,
if your drug is doing what you wanted to target. Then, you develop a lead compound to inhibit that process and you
come up with a better and better lead compound.
Well,
there is a line sort of between that discovery process which Janet alluded to
and then the process which FDA really focuses in on. But in my own discussions with individuals in drug development
and industry, I think it is not always clear now that some of the stuff they do
in those early phases of development does not need to be shared with the
FDA. That needs to be clarified. There are some grey areas in there. Is there no line; is it a dotted line; or is
it a solid line here? I think that is
something that we need to work out and be very transparent and clear to the
industry about because it doesn't quite seem to be a totally level playing
field.
[Slide]
This kind
of captures it in a sense: if that dotted line appears between that solid green
box--if there isn't a solid line here--then why not allow a lot of
imprecision? Why not allow a lot of
hypothesis generating? Maybe the
platform doesn't need to be very validated.
And, let the scientific researcher generate the hypothesis, look at the
consistency and keep things wide open.
As opposed to, as you progress the concept here being this is what we
envision, that there is going to be more and more fine tuning and more of a
demand eventually on the precision and maturity of the signals and
interpretability of the biology as you move closer and closer to the clinic.
[Slide]
What is
CDER doing to address the issues?
First, building the internal capacity infrastructure. One thing we did early on is establish a
core expertise within the CDER laboratory.
I like to think of that as an effective process. Having that core expertise within the CDER
laboratory, we have contaminated the rest of the Center and we are bringing our
knowledge, our experience and our skills and getting more people excited about
establishing committees. You know, we
have a neat toy. We had some money and
with that money we were able to buy a cDNA array scanner. We got some cDNA arrays and just went to
town, and through a beautiful collaboration with International Life Sciences
Institute we were able to get up to speed very quickly and to be able to
compare and learn a lot through that process.
What did
this do? This enabled us in a sense to
sort of establish credibility. We were
able to leverage the Affymetrix GeneChip system and we have a wonderful
research agreement with them. We are
working very closely with them on a more profound biological issue, and also
working with Rosetta on a research agreement so we can look across platforms
with the same samples and try to understand the biology.
We were
also very fortunate, and I want to thank the Office of the Commissioner, we
have an Office of Science grant from the Office of the Commissioner and we are
using that to develop consensus toward RNA standards development
initiative. This involves all the
medical centers within the FDA in a collaborative mode and involves the NCTR as
well, to look at can you establish a benchmark data output across platforms
using, for example, a mixed tissue standard.
We are focusing on the rat as the fundamental toxicology model. We had a tremendous workshop, working
through NIST, to bring this out and to get more collaboration. NIH is going to join us as well.
Expanding
core expertise for CDER review enhancement--we have established a nonclinical
pharmacogenomics subcommittee. The
focus there is on regulatory decision-making practices, procedures and policies
as nonclinical expression data come to us.
The database I referred to through Iconix, the DrugMatrix database--we
leveraged that. Again, a concept the
previous commissioner brought to us and pointed to this as a way we are going
to be operating in the future. But if
you think of it, you know, how are we going to get up to speed very quickly?
This
database all of a sudden gave us access to 550 pharmaceuticals and how they
perform. These are things in the PDR so
we can turn back the clock in a sense and say what if we had this capability
10, 15, 20 years ago and 550 pharmaceuticals which are now in the PDR? What would we have done if we had seen that
microarray data? So, we have something
to compare it to. This is stuff we have
experience with that is being used, and we feel comfortable with these
things. We can ask about oncogenes.
[Slide]
Very
briefly, the point I want to make here is that the same sample was run at two
different sites on the same platform, Affymetrix, FDA--blue is Affy. They are showing that you don't always get
the exact same genes identified as statistically significant. But what this is saying is, okay, if you
don't get the exact same ones sort of surfacing above a p value, what happened
on the other platform when it was significant on one platform? Well, this shows you there is tremendous
correlation. It behaved pretty much the
same way, p less than 0.001; 8000 transcripts; 0.001 times 8000 is maybe
8-10 genes by chance. Over here, you see in
quadrant 2 and 4 there are about 8 or 10 spots--amazing statistics that kind of
play out there.
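The back-of-the-envelope arithmetic behind that observation: assuming no true changes, the expected number of chance "hits" is roughly the p-value cutoff times the number of transcripts tested.

```python
# Expected false positives at a given p-value cutoff, assuming nothing is
# truly changing: cutoff * number of transcripts tested.
transcripts = 8000

expected_by_chance_strict = 0.001 * transcripts  # p < 0.001 -> ~8 genes
expected_by_chance_loose = 0.01 * transcripts    # p < 0.01  -> ~80 genes

print(expected_by_chance_strict, expected_by_chance_loose)
```

This is why the ~8-10 spots in quadrants 2 and 4 at p < 0.001 are consistent with chance, and why relaxing the cutoff trades more candidate genes for more false positives.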
[Slide]
If you
relax the p value cutoff to 0.01, you have a lot more genes to work with if you are in
the hypothesis-generating mode. Look at
all the genes in quadrant one and three.
You have more in two and four "false positives" maybe to
contend with but if you are in the hypothesis-generating mode why not go down
to this level?
[Slide]
You don't
have this in your slides, sorry. This
is saying another story, and that is if you use a slightly different
statistical approach to identify the genes of interest--Affy has a unique
thing, increased/decreased calls where you look at exact probes across results
from controls to treated animals and I don't want to go into details of that
but look at this correlation. It is
amazing; a much better correlation. So,
the statistics that you use to identify those transcripts are equally important
in terms of how you define the signal, and this is something we need to work together
on. Each platform sort of has their own
approach to identifying transcripts of interest and this is something we need
to work toward together.
[Slide]
Here is
the answer to this question. What if a
reviewer sees an increased expression of oncogenes in a product submission package? We used the Iconix database to ask this very
simple question. You have 72 genes on
the left-hand side. They all have the
term "oncogene" in them. We
are really talking about proto-oncogenes here; these are not mutated
genes. They have oncogenic potential if
they are mutated. The list of oncogenes
is growing every day. Nature
came out with an issue a month ago with, you know, this list of what could be
oncogenic if it is mutated. It is going
on and on.
But here
you can see diphenhydramine clustering.
We have 9 known human and rodent genotoxic carcinogens; 5 human and
rodent non-genotoxic carcinogens. There
are 14 carcinogens proven to be carcinogenic in man and we have 14
non-carcinogens, at least as tested in the rodent 2-year bioassay. Do we see oncogenes going up with these
rodent non-carcinogens, and do they cluster themselves all the way away from all the
carcinogens? There is no pattern
here. There is no clustering of the
non-carcinogens away from the carcinogens when you just look at oncogenes.
You can
see mannitol and up-regulation of oncogenes there. You have aspirin. You
have diphenhydramine. Oncogenes are
going up. This is a message more to our
reviewers than anything else. You see
an oncogene go up--no single transcript is going to derail a submission or identify a
safety issue.
[Slide]
I
mentioned that efforts have been initiated by all the medically related product
centers, along with NCTR, along with NIEHS and several microarray stakeholders,
Affymetrix, Rosetta using an Agilent-based platform, and Iconix using the Amersham
platform, to develop standards useful for evaluating platform features and, as
well, experimental performance--if you want, the proficiency of the end user. Ultimately, bottom line, biologic conclusions
that are independent of platform and represent biological truth are what we want
these standards to help us reach. That
project is ongoing.
[Slide]
I
mentioned we have a PTCC nonclinical pharmacogenomic subcommittee and you can
see here what the various goals are of that group. We want to effect an appropriate infrastructure for
pharmacogenomic data review and integration.
That is our initial and primary goal right now.
A
question came up during one of the discussions, what are we going to do to
explain to sponsors how we want the data submitted to us? That was a question that was posed to
Larry. This is our first goal, to get
that out and open to them if they choose to submit microarray data right now,
at this point in time, if they choose to generate it. If they choose to generate it and want to submit it to us, how
should they submit that data? And, this
is what we want to be transparent with them about.
[Slide]
What else
are we doing? Establishing a network to
assimilate reasonable consensus; develop mechanisms to communicate and deliver
our needs. I mentioned working with
NIST to develop RNA standards. A
workshop has been held; an initiative has begun.
We have a
number of collaborations with NCTR, a tremendous resource to the agency down
there. FDA inter-center communications
group, white paper. There is a National
Research Council committee, a government liaison group, again, to explain to
them what our needs are. This is really
something that NIEHS has thrust and tried to develop to get input from a lot of
different regulatory centers in terms of how they can help us as well, and we
are involved with that. MIAME-tox,
minimal information about a microarray experiment as related to tox,
specifically working with MAHS on that and also with ILSI. There have been FDA-PhRMA workshops.
In sort
of a non-collaborative mode, engaging external expertise. On June 10 we are going to convene that
group for the first time that we are going to ask to help us with what should
the data submission be that we receive from sponsors. What is workable? What is
reasonable? What is compatible with
current practice within the drug development industry that doesn't add a lot of
burden and can work with the current systems that are currently being
used? Then, what is reasonable for
experts in the field in terms of how broad does the data have to be? Do we have to see images? Can we have scaled normalized data? So, these are all things we need to work
out.
We are
going to practice a mock data submission.
We need to develop our prowess at this.
We can ask--and maybe we should be careful what we ask for--but we are going to
see how this whole process works and try to work out some of the kinks.
[Slide]
I am
doing pretty well. I am only three or
four minutes over and I am at the end.
CDER shares the vision that applications of pharmacogenomic technologies
will improve recognition, understanding and assessment of pharmaceuticals. By
recognition, we mean easier, quicker, more accurate identification of
efficacy potentials and toxicity potentials. By understanding, we mean modes or
mechanisms of drug actions; by assessment, the significance and relevance of
the findings to humans.
[Slide]
CDER has
a responsibility to enable--not force nor impede--evolution toward the vision, desired
state, preferred future, whatever the best term is there. So, if we think about the drug development
process and where it interfaces with FDA, lead compounds going into analytical
trials and perhaps animal trials can be reduced and shortened if we work
together and we get to the crux of the biological value of this approach. Perhaps our clinical trials will have a lot
more options to pursue and we will have potentially shorter, more focused kind
of trials with clearer indication of efficacy going into them, and potentially
more product approvals.
Just one
last quote struck me and I am going to change it a little bit. This is a quote from a very brief piece that
appeared in Nature Genetics. The
individual is Chris van Ingen: As the
impact of new technology on the quality of human life increases, so too do the
responsibilities of appropriate and reasonable regulation. I think we have a good plan to work with
people on that and I hope you give Janet a lot of good feedback. Thanks.
DR.
DOYLE: Thank you very much, Dr.
Sistare. Next we are going to bring Dr.
Larry Lesko back and he is going to address Pharmacogenomics from the
perspective of drug metabolism and dosage.
Pharmacogenomics--Drug Metabolism/Dosage
DR.
LESKO: Thank you and hello again.
[Slide]
As you
see in the binder that you have there, I have more slides than Frank had but I
promise not to show them all. I also
promise not to show any microarray data.
What I
will talk about, however, is pharmacogenetics, which many people think is a
subset of pharmacogenomics. I will be
speaking about it from a fairly narrow perspective, that of drug metabolism and
dosage.
As you
have heard already today, there has been a paradox in the modern drug
development process. Clinical trials
provide evidence of efficacy and safety at usual doses in populations. On the other hand, physicians treat
individual patients who can vary widely in their response to drug therapy at
the usual dose.
Well, in
clinical pharmacology we try to take care of this paradox in part by looking at
subgroups of patients defined by demographics, disease or other types of
factors. While that is not quite
dealing with the individual patient, it does deal with individual subgroups of
patients with respect to drug dosing and drug therapy.
[Slide]
You will
see in any discussion of pharmacogenomics many definitions. This is the practical definition that I will
provide for our discussion: systematic genomic analysis in populations of treated
subjects to identify variants that predict drug response including the
occurrence of adverse events.
[Slide]
Leading
from that definition are three broad areas of application. You will hear more about this, but drug
discovery is obviously an area where genomics can be used to better understand
the cause of disease and, therefore, identify new drug targets. On the far right is drug selection. Drug selection results from using genomics
to differentially diagnose something that we call hypertension or diabetes and
then align a specific drug in a class with that subtype of the disease.
I am not
speaking about either one of those. I
am speaking about the middle channel, talking about drug therapy and how drug
therapy and the knowledge of genomics can be used to tailor the dosing to an
individual or to a population.
[Slide]
This is a
cartoon that illustrates the pharmacogenomic strategy as applied to practice of
medicine. It is a bit of a simplistic
presentation because it takes all patients with the same diagnosis. Based on a single nucleotide polymorphism, or
SNP, the idea is to remove non-responders or toxic responders in advance and
then, based on other SNPs identify responders or patients not predisposed to
toxicity whom we can treat. It is
idealistic in that it treats non-responders and responders as two different
groups when, in practice, it has actually turned out that it is more
probabilistic than that and there is probably an overlap in terms of response
between the responders and the non-responders.
[Slide]
In any
case, in terms of pharmacogenetics we know already that there is a
pharmacokinetic basis for differences in drug response that define those two
groups I just showed you. A way of
categorizing those differences is based on extrinsic factors such as the environment,
smoking, diet, alcohol, and drug interactions (Rx, OTC and herbal products taken
together). There are also intrinsic factors, which can be demographics,
disease states or, the subject of our discussion today, pharmacogenetics.
An
analogy: just as we define subgroups of patients based on renal
disease or hepatic disease, which is phenotype, the thought is that we can also
define subgroups of patients based on their genotype by looking at
polymorphisms in the genes that encode metabolic enzymes.
[Slide]
So, I am
going to change that cartoon a little bit and talk about the pharmacogenomic
strategy applied to drug metabolism and dosing. Notice, on this slide I don't have responders and non-responders. Rather, I have a group, group A, that has a
genetic profile for toxicity with the usual dose of the drug. It isn't indicated that this group not
receive the drug, but it is indicated that this group receive the drug but
perhaps at a lower dose. That is the
implication of what I am referring to in pharmacogenetics and drug metabolism.
As you
can see, groups 2 and 3 continue with the theme of responders but link the
responders more to the drug selection process than to identifying a drug
target. So, I am really operating under
conditions of number 1 in the presentation that I am making today.
[Slide]
This
illustrates what I am referring to when we talk about pharmacogenetics and drug
metabolism. This is the case where I
have the usual dose given to a population of patients, but those patients
between them have different plasma concentrations. They may have those different plasma concentrations for many
reasons, as I indicated, environmental or intrinsic factors. But I show on this slide patient A with a
certain wild type sequence in their DNA that encodes for the activity of the
enzyme. Patient B has a polymorphism
that is, a mutation that creates a variant enzyme.
The net
result, if you go to the far right, is a plasma concentration time profile in
the wild type patient that is right within the boundaries of that therapeutic
range. When you go below that, because
of the slower metabolism imparted by this polymorphism, the exposure goes above
the therapeutic range and that patient, or patient subgroup, is predisposed to
toxicity.
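The reason slower metabolism pushes exposure above the therapeutic range can be sketched with the standard relationship AUC = dose / clearance: at the same dose, a patient with reduced clearance has proportionally higher exposure. The clearance values below are invented for illustration.

```python
# Sketch of why slower metabolism raises exposure: steady-state exposure
# (AUC) is dose divided by clearance, so reduced clearance means higher
# exposure at the same dose. Clearance values are hypothetical.

def auc(dose_mg, clearance_l_per_h):
    """Area under the concentration-time curve (mg*h/L)."""
    return dose_mg / clearance_l_per_h

dose = 100.0
wild_type_cl = 20.0  # L/h, hypothetical wild-type (extensive) metabolizer
poor_met_cl = 5.0    # L/h, hypothetical reduced-activity variant

print(auc(dose, wild_type_cl))  # 5.0 mg*h/L
print(auc(dose, poor_met_cl))   # 20.0 mg*h/L -- 4x the exposure, same dose
```

With a narrow therapeutic range, that fourfold difference is enough to move the variant patient from the therapeutic window into the toxic region.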
[Slide]
We know
from history, probably twenty years of history, that many of the cytochrome
enzymes are, in fact, polymorphic. I
could pick several of them but let me use as an example one of the most
popular, and that is cytochrome 2D6. On
the left I show sort of the portfolio of cytochrome enzymes. I could pick for this example 2C9, 2C19 or
many others that are polymorphic, but I picked on 2D6 to illustrate the example
because this is an important enzyme in terms of prescription drugs. It is responsible for metabolism of about 40
percent of prescription drugs. That
means over 300 million prescriptions for drugs with polymorphism in the 2D6 per
year.
The
example on the bottom represents the terminology in the cytochrome enzyme
literature. I have a family called
CYP2. I have a subfamily called 2D6,
and then I have a gene variant defined by, in this case, a deletion, 2D6*3. If you happen to have that 2D6*3 you are
going to be characterized as a phenotype called poor metabolizer.
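A hypothetical sketch of that genotype-to-phenotype call: map a pair of CYP2D6 alleles to a metabolizer category via simplified activity scores. Real allele-function assignments come from curated pharmacogenetic references; the scores and thresholds here are illustrative only.

```python
# Hypothetical mapping of a CYP2D6 diplotype to a metabolizer phenotype.
# Activity scores are simplified illustrations, not curated assignments.
ALLELE_ACTIVITY = {
    "*1": 1.0,  # normal function
    "*2": 1.0,  # normal function
    "*3": 0.0,  # no function (deletion-caused frameshift)
    "*4": 0.0,  # no function
}

def phenotype(allele1, allele2):
    """Classify a two-allele genotype by total enzyme activity."""
    score = ALLELE_ACTIVITY[allele1] + ALLELE_ACTIVITY[allele2]
    if score == 0:
        return "poor metabolizer"
    if score < 2:
        return "intermediate metabolizer"
    return "extensive metabolizer"

print(phenotype("*3", "*3"))  # poor metabolizer
print(phenotype("*1", "*2"))  # extensive metabolizer
```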
[Slide]
What are
the clinical implications of that?
Well, this shows a theoretical but somewhat real dose-response curve for
nortriptyline, and on the left-hand side I have indicated the therapeutic
window which is somewhat well defined.
You can see that there is an overlap between the exposure response curve
for therapeutic effect. That would be
the therapeutic response curve. There
is also an overlapping curve for toxicity and that window isn't very wide. It is about three-fold in practice and, in
fact, nortriptyline levels are monitored in therapeutic drug monitoring labs
quite often.
If you
think about nortriptyline and the implications of having a wide range of
exposure from the usual dose, you may be moving into that efficacy world of
reduction in anxiety and symptoms of depression, or you may be moving into that
safety world of tachycardia, arrhythmias and drowsiness. That is all possible from the usual dose
without taking into account metabolic patterns of a patient.
[Slide]
This
shows a little bit more. On the left
are illustrated nortriptyline plasma levels as a function of frequency. It shows the bell-shaped curve. The majority of patients are called
intermediate metabolizers. Their plasma
levels are somewhere around 60. The
poor metabolizers may be one of those 2D6*3 genotypes on the right-hand side, and you can
see their exposure is significantly higher.
Then you have people with extra activity, on the extensive
metabolizer side, who have lower blood levels and, in fact, may require a higher
dose to achieve a therapeutic response.
Translated into functional dosing, on the right, the dose in the PDR for
nortriptyline is 25-300 mg. You can see
the wide range and that wide range is a result of the wide range of
requirements that patients have based on their metabolism.
But you
could ask what is the equivalent dose if I were to achieve the same exposure in
all patients or the patients who are a mix of poor metabolizers, intermediate
and extensive? That is what the graph
on the right shows. In order to achieve
equivalent exposure I would have to give, for example, to the extensive
metabolizer 120 mg. To get the same
systemic exposure the dose would have to be reduced in the poor metabolizer to
25 mg. That is the implication that we
have for therapeutics, the wide dose requirements defined by the genotype of
the patient.
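That equivalent-dose calculation follows directly from AUC = dose / clearance: to match exposure, dose scales in proportion to clearance. Using the 120 mg extensive-metabolizer dose from the talk and an invented clearance ratio, the sketch reproduces the roughly 25 mg poor-metabolizer dose.

```python
# Sketch of the equivalent-dose idea: to give the same systemic exposure,
# dose scales in proportion to clearance (since AUC = dose / clearance).
# Clearance values are invented; only their ratio matters here.

def equivalent_dose(reference_dose, reference_cl, patient_cl):
    """Dose giving the patient the same AUC as the reference patient."""
    return reference_dose * patient_cl / reference_cl

em_dose, em_cl = 120.0, 24.0  # extensive metabolizer (hypothetical CL, L/h)
pm_cl = 5.0                   # poor metabolizer (hypothetical CL, L/h)

print(equivalent_dose(em_dose, em_cl, pm_cl))  # 25.0 mg
```

This is the arithmetic behind the wide 25-300 mg labeled range: a single target exposure maps to very different doses across metabolizer genotypes.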
[Slide]
Well, if
this is a wonderful part of science, one might ask the question what has been
accomplished over the last twenty years with the integration of this at the
bedside. I did a search of the
electronic 2003 version of the PDR which had 2000 entries and identified 51
labels that contain pharmacogenomic information. That doesn't seem like a lot of advancement over the course of
time in looking at the field. In most
cases, in fact, of the 51 labels the information about genomics in the label
was not easily translated into clinical practice.
[Slide]
Here is
an example of one label that came up in the search. It was thioridazine, Mellaril.
In that label I thought the information was fairly informative in terms
of communicating the risk. In the
contraindication section of the label thioridazine was contraindicated in a
subgroup of patients which are seven percent of the Caucasian population who
are known to have a genetic defect leading to reduced levels of CYP 2D6. In the warning section, to further provide
information to the prescriber, there are certain circumstances that may
increase the risk of torsade because of long QT in patients with reduced
activity of 2D6.
So, here
is a factual piece of information, no different than other factual pieces of
information in the label. It does not
go on to say what to do or how to identify those patients in terms of dose
reduction or in terms of laboratory tests.
[Slide]
A recent
example that illustrates the inclusion of genomic information in a label is
that of atomoxetine or Strattera, which was approved in January of 2003. This goes a step further in terms of
the information provided in the label.
For example, it includes in the human PK a statement that a fraction of
the population are poor metabolizers resulting in...and the label goes on to
describe that.
There is
good news here. The inhibitors of 2D6
in extensive metabolizers increase exposure, however, if you are a poor
metabolizer you are not prone to that interaction. In the adverse reaction section the following ADRs were either
twice as frequent or statistically significantly more frequent in PMs compared
to EMs. Finally, in the label the
laboratory tests are available to identify CYP 2D6 poor metabolizers and, in
fact, this is a relatively new development in the field in terms of the
widespread availability of laboratory tests to identify poor metabolizers. So, this is probably the most informative
label that we have at the moment.
[Slide]
Brian is
going to talk a little bit more about pharmacogenomics during drug
development. Talking to Brian and other
people in the industry, we have been told that 80 percent of new clinical
trials include collection of samples for DNA analyses. We haven't been told what those analyses are
or how they are going to be applied.
This
graph kind of illustrates the number of clinical trials under way. So, if you multiply those clinical trials by
the number of samples collected you can see why we have so many gene banks
springing up.
[Slide]
But what
is going on from our experience at FDA, we undertook a survey, a very informal
survey that doesn't claim to be 100 percent accurate, of the appearance of
pharmacogenomics in INDs and NDAs. We
started this very recently, and you can see the time course of this
survey. Over that short period of time
the number of INDs and NDAs that contain genomic information has gone up very
dramatically. It has gone from 5 to 70. I expect this is an underestimation of the
actual utilization of genomics in the drug development process as perceived
from the IND/NDA standpoint, but it gives one a sense of what is going on.
[Slide]
What do
these 70 applications represent? They
represent predominantly genotyping of the cytochrome enzymes that are
responsible for drug metabolism. You
can see the breakdown in the pie chart.
By and large, most of the genotyping is related to the enzyme I picked
as the example, CYP 2D6.
On the
bottom of that slide are some bullets that represent other things that are
being genotyped that are much more exploratory than the confirmatory data on
the cytochrome enzymes. For example, we
are beginning to see genomics as applied to receptors in an attempt to
understand differences in drug response.
This is very preliminary. It is
very exciting to see it but it is not in the confirmatory world like the
metabolite enzymes are in. We just
don't understand that as much. So,
there is a lot of activity going on, as evidenced by the trend in the data that
we are seeing.
[Slide]
On a lot
of people's minds is the issue of drug safety and the fact that adverse events
are a major public health issue, as well as a major pharmacoeconomic
issue. So, the question is logical, how
does this relate to pharmacogenetics?
We have asked the same question, and our interest was piqued by an article that appeared recently in the literature. It linked several databases and provided us with some interesting information: of the top 27 drugs frequently cited in adverse drug reaction reports, including nortriptyline, 59 percent are metabolized by at least one enzyme having a poor metabolizer genotype, and 38 percent of those top 27 drugs are metabolized by 2D6, mainly cardiovascular drugs and so on. So, this is
inconclusive but it is circumstantial evidence that polymorphism in drug
metabolism matters when it comes to drug safety.
[Slide]
It raised
an issue for us about how pharmacogenetics can improve existing therapy. The emphasis, by and large, when people talk
about genomics is the future. How is
this going to help us develop better drugs, more targeted drugs and reduce
adverse effects?
According
to Dr. Collins, at NIH, mainstream genomics is still eight years, ten years
away. But I think we are on the verge
of possibly doing something sooner that can benefit public health. Thus, I ask this question, how can
pharmacogenetics improve existing therapies, which I will define as all
medicines that have been approved by the FDA for prevention or treatment of any
disease in humans under patent or not?
[Slide]
As an
example of one case study we can look at thiopurines and more specifically
6MP. This is a drug that has been approved for acute lymphocytic leukemia, a mainstay of therapy for over 50 years, and clinical studies over the last 20 years have focused on various phenotypic as well as genotypic methods of refining the optimal dose.
To look
over the history of 6MP to the present time, there have been incremental
advances in using pharmacogenetics.
Efficacy is fairly high for an oncolytic application. Adverse effects are also associated with
that. In the case of 6MP, the risk of
myelosuppression is the primary limitation to drug dosing. Blood counts have been traditionally used to
monitor therapy, however, there is evidence to suggest that pharmacogenetic
testing can significantly reduce, although certainly not eliminate, this risk.
[Slide]
In
addition to its approved use, there are many off-label uses of 6MP,
inflammatory bowel disease, various autoimmune disorders. If one were to look at the prescription use
of this drug, it wouldn't be surprising if the use off-label exceeds the use
on-label. Nevertheless, it is a
mainstay of therapeutics in today's practice of medicine.
[Slide]
What we
know about the genetic basis of 6MP is fairly well established. This is not exploratory but, rather,
confirmatory that a certain enzyme TPMT, thiopurine methyl transferase,
catalyzes the inactivation of the compound.
If you happen to be a deficient patient you accumulate excess of
thioguanine nucleotides in your hematopoietic tissues and this accumulation in
RBCs leads to an increased risk in all patients of severe and possibly fatal
myelosuppression.
[Slide]
Let's
look at integration of genomics via this flowchart. What this flowchart shows is the prevalence of genotypes for TPMT
in the population. I start at the top
with all newly diagnosed ALL patients per year. That is 30,000 and 3000 of those are pediatric; the rest are
adult. By and large, that population
can be subdivided on their genotype into three broad categories. On the far right is the wild type. Most patients are of the high activity
type. In the middle is the heterozygote
deficient, intermediate activity, and on the left, homozygote deficient, the
one that we probably would be most concerned about. That individual would have two mutant alleles.
It is
further well known that three major SNPs, or single nucleotide polymorphisms, define
these mutant alleles. It is quite
common to find in the population *3A, *3C and *2. I have shown in parentheses the prevalence of these alleles. There is a rare *3B, 120,000. There is also one more that I don't have on
there, *3D but it usually travels in association with *3A so one doesn't have
to measure it specifically. If you are
in that at-risk category on the homozygotes, low activity, the prevalence is
1/300; for the heterozygotes it is about 11 percent of the population.
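For readers who want to check the arithmetic, these prevalence figures hang together under simple Hardy-Weinberg assumptions. A minimal sketch, where the combined mutant allele frequency of 5.7 percent is an illustrative assumption chosen to match the quoted 1/300 and 11 percent, not a figure from the talk:

```python
# Hardy-Weinberg sketch behind the TPMT prevalence figures quoted above.
# The combined mutant allele frequency q is an illustrative assumption,
# not a number from these remarks.

q = 0.057                      # assumed combined *2/*3A/*3C mutant allele frequency
p = 1 - q                      # wild-type allele frequency

homozygote_deficient = q * q   # two mutant alleles: low activity
heterozygote = 2 * p * q       # one mutant allele: intermediate activity
wild_type = p * p              # high activity

print(f"homozygote deficient: about 1/{round(1 / homozygote_deficient)}")
print(f"heterozygote:         {heterozygote:.1%}")
print(f"wild type:            {wild_type:.1%}")
```

With q around 5.7 percent, the homozygote deficient prevalence comes out near 1/300 and the heterozygote prevalence near 11 percent, matching the figures on the flowchart.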
[Slide]
In order
to think about a genetic test to guide therapy and guide dosing, one has to
look at clinical utility. We have tried
to do this by looking in the literature at the incidence of toxicity. What this graph shows on the left-hand side
is the cumulative incidence of dose reduction due to toxicity in a patient
population receiving the usual doses of 6MP.
Notice in
the homozygote deficient, the v/v patients, 100 percent of the patients need a
dose reduction because of toxicity because they can't metabolize the drug. The middle category, heterozygotes need a
dose reduction in 35 percent of the cases because of toxicity. As one would expect, the inherent toxicity
of the drug in the high metabolizers is around 7 percent.
What is
very interesting though, if you move to the right-hand side, is you can see the
different dose requirements that these patients require. These are the average final weekly 6MP
doses. If you are a wild type allele,
on the far right, the dose is pretty standard.
However, if you are on the far left, homozygote deficient, you can see
the dramatic reduction in dose necessary to avoid toxicity.
The problem in dealing with the dosing of 6MP is that you have to interrupt therapy
based on the toxicity, and interruption of therapy lessens the intensity of
treatment which is very important in the ALL patients. Furthermore, if one has multiple drugs in
the regimen, as is typical for an ALL patient, reduction of the 6MP dose allows
for full dosages of the other chemotherapeutic agents. In fact, many of the drugs in a regimen for an ALL patient have overlapping toxicities, so a test can assist a physician in figuring out which of the drugs is causing the toxicity.
[Slide]
In order
to construct the framework of how to think about the integration of a genomic
test into existing therapies, I have borrowed some of the guidelines from the
Secretary's Advisory Committee on Genetic Testing, from their report. I think these are very reasonable questions
to ask. Is the phenotype relatively
common? Is the impact of the phenotype
serious? Does early detection of a
genotype alter therapy, in this case drug dose? Are there accurate, reliable tests available and, once those
tests are performed, is there counseling available for the physician or the
patient as necessary?
In going
through these questions one by one as they apply to 6MP or as they apply to a
drug metabolized by 2D6, I would hope we could come up with a rational answer
as to how to move forward in improving existing therapies using
pharmacogenomics.
[Slide]
You heard
about many working groups within FDA, well, here is yet another one. We are going to have a summit one of these
days to talk to each other about working groups, but this is the one I have
been chairing since June of 2001. This
group has, as you can see, a cross-section of individuals from all of our
centers. It also has a cross-section of
disciplines represented on the group.
[Slide]
Our goals
are listed on this slide. We came
together originally in 2001 to organize a public workshop to at least get on
the table a discussion with the industry and others about the issues of
pharmacogenomics. That occurred in May,
2002 and the workshop report was released this month in The Journal of
Clinical Pharmacology, in the April, 2003 edition. It makes for some interesting reading, if
you haven't seen it.
We also
used the working group to make presentations at public meetings, primarily
professional associations, to begin to provide some regulatory perspective,
which audiences generally want, and to begin to engage audiences from academia
and industry in discussions of issues.
We are
developing a draft guidance for industry on pharmacogenomics that will
emphasize primarily the clinical side of pharmacogenomics, the efficacy and
safety trials. It will discuss
statistical issues. It will discuss
labeling recommendations and it will discuss in part some of the issues
revolving around diagnostic tests and nonclinical pharmacogenomics.
What it
will eventually discuss at the end of the day depends in part upon coordinating
the activities of this working group with the others, and we are still some
time away from that guidance.
We have
also used this working group as a forum to discuss the scientific details of a
submission requirement for PG data that Dr. Woodcock will talk about in the
next set of remarks that she is going to make.
We have tried to develop a number of case studies that would represent
data that would be typically in a standard review stream, versus that data that
would perhaps be outside that review stream in terms of submission to the FDA.
With
that, I will conclude and turn it back over to the Chairman, Dr. Doyle. Thank you.
DR.
DOYLE: Thank you very much, Dr.
Lesko. We will move on. Next we are going to hear from Dr. Brian
Spear who is Director of Pharmacogenomics, Global Pharmaceutical Research and
Development, at Abbott Laboratories.
Dr. Spear is going to share with us a perspective of the industry's use
of pharmacogenomics and regulatory issues.
Industry Use of Pharmacogenomics and
Regulatory Issues
DR.
SPEAR: Thank you very much.
[Slide]
I
appreciate the opportunity to participate today and to put forward an industry
perspective on the use of pharmacogenomics in drug development. Now, what is this industry perspective that
I have? Good question. Part of it comes from my experience at
Abbott Laboratories where I am responsible for pharmacogenomics, including
pharmacogenetics and cellular molecular toxicology. It also comes from active interaction with a number of people in
the industry, primarily through either formal or informal groups such as the
industry pharmacogenetics working group which has been working closely with
FDA, for instance on last May's workshop, through PhRMA, through EFPIA, which is
the European equivalent, and other groups that involve industry, regulators and
academics who are focusing on scientific and, to some degree, regulatory issues
related to pharmacogenetics.
But the
views that I am going to put forward are my own. I think that they correspond with those, and I have had feedback
saying they correspond with those of other people in the industry but they are
still my own and I will take full responsibility for them.
I should
say that within the industry the participation among the major pharmaceutical
companies is 100 percent. All of the
companies are involved in pharmacogenomics.
This is not something that a company here and a company there have taken
a chance on. This is now a standard
part of the drug discovery and development process in every one of the drug
discovery and research companies.
[Slide]
What I
would like to do is two things. One is
to just go over, in very brief detail and by example, some of what the major
activities are within the drug industry in pharmacogenomics to give you a sense
of where we are putting our resources and what we think is important. Then I would like to raise some issues that
are important to us in terms of how our work is done, how the results are going
to be used, and what the regulatory consequences are to try to lay a bit of
groundwork of where we would like to see the interaction between the industry
and the FDA.
[Slide]
The
primary uses of pharmacogenomics within industry relate to clinical trial
results, to data quality, to study design, to biomarkers. Some of the results here will not be
reported; some of the results will.
Some of them will eventually go on to drug labeling. Some of them may lead to specific labeling
to direct a genetically defined group for therapy. However, targeting drugs at genetically defined groups, that is a
drug strategy where you are going to make a special drug for a special genetic
group, is not a primary focus in industry.
You hear this from time to time.
You seldom hear it from industry.
Rather, we are still engaged in conventional drug development but now we
have a new set of tools that we can use to direct the drug development or
interpret the results in new ways.
What I
would like to do is to describe three areas which seem to involve the greatest
degree of effort within the pharmaceutical industry: clinical genotyping,
pharmacogenetics, preclinical gene expression and clinical gene expression.
[Slide]
First of
all, clinical pharmacogenetics--I am not going to go into a lot of detail, but
there are areas where understanding patients' phenotypes can help considerably
in interpreting or designing Phase I studies.
First of all, there are outliers that appear in Phase I studies and when
you are looking at 6, 8, 12 patients one outlier can have a considerable
effect. In some instances,
understanding patient genetics can help you understand the pharmacokinetics
much better, and I will show an example in just a moment.
Secondly,
you can use this to exclude certain patients or you can use it to include
certain patients, depending on what you are trying to tease out in a particular
Phase I study on healthy volunteers.
You can
use it to normalize genotypes. What do
I mean by that? Well, as Dr. Lesko
pointed out, poor metabolizers of 2D6 appear in somewhere between six and eight percent of the American population. In
a small trial you would hope that you will have genotype frequencies which
represent that so that you are not, because of small sample size, missing the
boat completely or overloading on the wrong patients.
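The "missing the boat" worry can be made concrete with a little binomial arithmetic. A minimal sketch, assuming an illustrative 7 percent poor metabolizer frequency from within the six-to-eight percent range quoted above:

```python
from math import comb

# Chance of seeing k CYP2D6 poor metabolizers in a small Phase I
# cohort. The 7 percent frequency is an illustrative assumption drawn
# from the six-to-eight percent range quoted above.

def pm_count_probability(n, k, freq=0.07):
    """Binomial probability of exactly k poor metabolizers among n subjects."""
    return comb(n, k) * freq ** k * (1 - freq) ** (n - k)

for n in (6, 12, 36):
    print(f"n={n:2d}: P(no poor metabolizers) = {pm_count_probability(n, 0):.2f}")
```

With a dozen subjects there is a roughly 40 percent chance the cohort contains no poor metabolizer at all, which is exactly the small-sample scenario described above.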
Finally, there is bridging to other populations. We have
had no discussion so far about ethnic or racial differences in allele
frequencies. But some racial groups or
ethnic groups have high frequency of poor metabolizers for enzymes, low
frequency of poor metabolizers, high frequency of hypermetabolizers. Because of that, if you have not allowed for
these frequencies in your trials, you may find it difficult to translate data,
let's say, from the United States to Japan or China, or vice versa. By normalizing the allele frequencies to
account for these differences in different national populations, you may make
it possible to bridge much more easily.
[Slide]
Let me
just show you an example of a Phase I trial.
This is a study with an investigational drug. We are looking at the effect of the investigational drug on the
pharmacokinetics of desipramine. This
is a fairly straightforward trial that one would run. But in this particular case, because desipramine is metabolized
almost entirely by 2D6, as is nortriptyline, patients who are poor metabolizers
do not metabolize at a rate which will appear normal in your Phase I trial.
What we
did in this case is we genotyped people ahead of time. We removed those who were poor
metabolizers. I think there were 3 in a
population of 36. Then we carried out
the interaction study only on those who had active genes to metabolize
desipramine.
There are
two major reasons to do that. First of
all, patients who are poor metabolizers do not typically tolerate
desipramine. So, they would be at risk
for injury, especially at the dose levels in the trial protocol, and they would
probably drop out. If we did get PK data from those patients, since they don't metabolize the drug, it is going to be confounded: you can't tell whether you are looking at the genotype effect or the drug-drug interaction effect. Therefore, we
excluded those patients. It is a safer
trial and it is a trial that is going to have a cleaner outcome.
But when
you look at the results you can see that there is still an outlier. If you look at the rate of elimination you
have one with a long half-time. It is
being metabolized slowly. Why is
that? Here genotyping helped us
again. This individual turned out to
have the unusual genotype *6, which is one of those nonfunctional null
alleles, and then *9, which is a partially functional mutation. This is an extremely rare genotype. Nevertheless, we just happened to get one.
So, you
can see that most of the patients cluster with a consistent, readily
interpretable PK, and then we have one outlier. We have an interpretation of that outlier: based on studies we have done in our ethnic diversity panel, and on published figures, this *6/*9 genotype occurs in about 1/250 of the population. We can account for it. We have the mechanism. Therefore, we don't have to concern
ourselves significantly with that anymore.
[Slide]
What about pharmacogenetics in Phase II and Phase III studies? First of all, I am going to use
pharmacogenetics here not quite in Dr. Woodcock's way but we are using DNA
sequence variants as they relate to drug response. There are genetically identifiable groups whose disease can be
differentiated by genotype. These may
be more rapid progressors. They may
have more pronounced disease. These are
very good populations that can be used to carry out targeted small clinical
trials, as has been done with celecoxib with patients with colon cancer.
Secondly,
we can include or we can exclude patients from these trials, include if we
would like to carry out a proof of concept study where we would like to be able
to demonstrate that it really does what it is supposed to do in a likely
population; exclude if we are concerned about at-risk populations and we want
to prove the concept before we go and expose those patients. We can stratify studies according to
genotype, either by clinical response or risk of adverse events. In some cases we may wish to proceed with
drug development for a genetically defined group.
[Slide]
Also, in
Phase II and III studies it is possible, when you have a large enough number of
patients, to use those patient samples to discover new genetic markers related
to disease outcomes. This is an example
I have taken from Alan Roses at GlaxoSmithKline. These are patients, in this case, with Alzheimer's disease who
have been scanned using SNP markers in the genome to determine where in the
genome there is association of a genetic variant with the condition. This is able to demonstrate the effect of
APOE alleles on Alzheimer's disease.
The same approach, using large numbers of patients and large numbers of
genotypes, can be used to discover genes associated with drug effect or with
drug side effects.
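At its core, a scan of this kind tests each SNP marker, one at a time, for association with case/control status. A minimal sketch of a single-marker test on a 2x2 allele-count table, where the counts are invented for illustration:

```python
# A single-marker association test of the kind applied, marker by
# marker, in the genome scans described above. The allele counts are
# invented for illustration; a real scan repeats this for thousands
# of SNPs and must correct for the multiple tests.

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 allele-count table:
         cases:    a (variant allele), b (other allele)
         controls: c (variant allele), d (other allele)"""
    n = a + b + c + d
    # Shortcut form of the Pearson statistic for a 2x2 table.
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: the variant allele is enriched in cases.
stat = chi_square_2x2(a=60, b=140, c=30, d=170)
print(f"chi-square = {stat:.2f}")  # compare to 3.84 for p < 0.05 at 1 df
```

A marker whose statistic clears the (multiple-test-corrected) threshold flags a region of the genome associated with the condition, or with drug response when response is used as the outcome.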
[Slide]
Going on
to preclinical gene expression, there are two ways primarily that people are
interested right now in preclinical gene expression. One is toxicogenomics, which I will come back to. Briefly, toxicogenomics is attempting to
predict toxicity of candidate compounds or identify mechanisms of
toxicity. The second is to identify
potential biomarkers that reflect drug toxicity or drug efficacy, and these are
biomarkers that one can use later in clinical studies.
[Slide]
I would
like to go just a little bit more into toxicogenomics. This is the "everything on one
slide" result. This is a very
broad toxicogenomic study that was carried out between Abbott and Rosetta. This is the same sort of heat map or
clustering analysis that Dr. Sistare showed, where across the top you see 3500
different genes. The lines there in the
dendrogram indicate how similar those genes are in terms of the response of rat liver. The longer the
line the more disparate they are; the shorter the line the more similarly they
respond. Down the right side are 52
different hepatotoxic chemicals or pharmaceuticals in multiple doses. On the left you see another dendrogram which
indicates how similarly they respond to each other. At every intersection of a vertical and horizontal line there is
a result, and that shows whether gene expression was increased, or decreased,
or remained the same when a rat was exposed to that compound and then was
analyzed for that gene. You can see a
wealth of information on this slide.
You can especially see it if you are not red/green color blind, as I am.
[Laughter]
But even
with that, it is apparent that there are some genes that respond very similarly
to each other. There are some drugs
that respond similarly to each other.
From this, you can determine patterns that are indicative of
hepatotoxicity or particular types of hepatotoxicity.
[Slide]
This is
now taking results from one drug on that graph on one page of 28 pages of
results. This is just put up here to
give you an idea of the wealth of information that comes out of one of these
studies. You can see in that magnified
section that there are some genes--and I don't even know what these genes are;
with 3500 of them I am not even going to try to--but you can see that there are some whose variation is at a significant level and some where it is not. This is a treasure-trove of data
mining. There are any number of post
hoc experiments that one could do with this data. When we talk about the concern we have about interpretation of
this data, it is this exact post hoc data mining which first comes to mind.
[Slide]
Let me
briefly describe some work going on in clinical gene expression now taking
samples from patients in clinical trials and looking at gene expression
responses. The primary effort here is
to find biomarkers that we can use to indicate drug response or to indicate
early signs of drug toxicity. It is
also very useful, if you have been doing the sort of rat or mouse studies that
I showed in toxicogenomics, to be able to compare human responses to the animal
model responses to see if we are getting similar effects or different effects.
We can
use this to identify genes that seem to be key for particular responses, which
may then become genetic markers if we find that there are polymorphisms in
there. Since each one of these genes
that is expressed indicates a protein which is being expressed, this now may be
an indicator that there is an important protein which has an effect in the drug
response or as part of the pathogenesis of the disease.
[Slide]
Here are
some results from Andy Dorner, at Wyeth, looking at gene expression in patients
with psoriasis who were treated with cyclosporin or IL11. They looked at 7000 different genes--much
the same sort of analysis that we looked at in rat liver. They have access to a human sample, which is
skin. They found that there are 159
genes whose expression is related to the incidence of psoriasis. Of those, they found 142 which respond to treatment. You can see
in the two graphs here, starting at 1, which is the incident level, that with these
particular genes the expression level decreased when they were exposed to the
drugs, and they responded well.
Whereas, for patients who are not responders, whose psoriasis did not get better, as you can see in the right panel, there is no improvement in the gene
expression pattern. So, we have a very
clear indicator from these genes that there is a correlation of these genes
with drug response.
[Slide]
Using
these high density approaches, human clinical gene expression raises some
challenges. The greatest, as I already
mentioned, is we generate huge amounts of data, more than can be readily
interpreted. As I said, it is open to
many different types of interpretation.
The statistical methods that we are using now are themselves experimental. We are still
developing methods that can generate true p values, that can show true
association of patterns with outcomes.
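One route to an honest p value when the statistic itself is exploratory is label permutation: ask how often a random relabeling of responders and non-responders produces as strong a pattern as the observed one. A minimal sketch for a single gene, with invented expression values:

```python
import random

# A label-permutation sketch of the kind of honest p value discussed
# above: how often does a random relabeling of responders and
# non-responders produce a mean-expression difference at least as
# large as the observed one? All numbers are invented for illustration.

def permutation_p(values, labels, n_perm=10000, seed=0):
    rng = random.Random(seed)

    def mean_diff(lab):
        resp = [v for v, g in zip(values, lab) if g == 1]
        non = [v for v, g in zip(values, lab) if g == 0]
        return abs(sum(resp) / len(resp) - sum(non) / len(non))

    observed = mean_diff(labels)
    shuffled = list(labels)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        if mean_diff(shuffled) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one correction keeps p away from zero

# One gene measured in 8 patients; 1 = responder, 0 = non-responder.
values = [2.1, 1.9, 2.3, 2.0, 0.9, 1.1, 1.0, 0.8]
labels = [1, 1, 1, 1, 0, 0, 0, 0]
print(f"permutation p = {permutation_p(values, labels):.3f}")
```

Because the null distribution comes from the data themselves, this approach makes no distributional assumption, though with thousands of genes the multiple-testing problem described above remains.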
Once we
reach a conclusion, we have yet to work out good methods to confirm those
conclusions. We can do the same thing
again and use the same interpretation but that doesn't necessarily confirm it. So, these are still in process. As I said, there are multiple
interpretations. Two people can take
the same data, use different interpretation algorithms and reach different
conclusions as to which genes are important.
But at the bottom of all of this is, yes, we have the human genome sequenced. We have a sense of how many
genes there are. We have names for many
of them. But the correlation between
our knowledge of the genome and our knowledge of clinical outcomes is still
rudimentary and we are not at a point yet where we can clearly say if this gene
does this, the clinical outcome will be that, and that is at the bottom of much
of the work that is going on and much of the uneasiness we have with people who
would want to draw facile conclusions from the data.
[Slide]
There are
proposals being considered for ways in which this data could be submitted to
the FDA so that the FDA can learn from the data, can develop methods and test
methods on the data which would, at the same time, not expose us to this post hoc analysis concern. So, what I
would like to do is just reflect on some of the comments that have come from
people in industry to the suggested proposals and what we are looking forward
to in proposals for how this could happen.
Because
of the concern about post hoc analysis, and you have heard already from Dr.
Sistare, there is a reluctance by pharmaceutical companies to submit data on
investigational drugs to the FDA. In
many cases that is because we are reluctant even to carry out the experiments
on investigational compounds because we recognize that in IND updates it would
be appropriate that you would ask for that information. This is hampering our research in the same
way it is hampering the transfer of information.
Why is
that? Well, the same issues--we are
using analytical methods that haven't been validated. There are multiple interpretations. The reviewers may not be trained in any way related to this data
but may, nonetheless, recognize the names of some key genes. It is possible the results could be
over-interpreted; they could be misinterpreted. Even without those problems, the sheer analysis of it could lead to lengthening of timelines while people work through what it all means.
The
perception on our side then is that submitting this data could jeopardize a
drug development program and, as a consequence, we don't want to do that. The risk may be low but the consequences may
be high.
[Slide]
In
considering a process by which these types of results might be exempted from
normal review, and we will hear from Dr. Woodcock shortly what that proposal
is, there are certain things we would be looking for. One of them is to lower our risk. We would like to be able to do the work without the risk that we
are going to jeopardize our programs.
That is the bottom line.
We would
want data to be evaluated by experts.
We would like an evaluation that is consistent from drug to drug, and we
would like to find out what those results are.
We would hope that the review would be independent of the drug review
timeline. That is, the drug review is
not going to be held up while people are trying to figure out what to do with
the genomic data. We would hope to work
closely with the FDA in determining what is the best process to represent the
science and get the best regulatory approach, continuing the type of work that
we have been doing with the FDA so far.
[Slide]
There are
some things that seem to be up in the air, and I think they can be worked
out. Just what would fall under this
research exemption? How would it be
defined? There will be a definition but
as yet we are not sure what it is, and we hope that there is some input to
that.
There
would certainly be situations, and it was mentioned at some point this morning,
where there may be a public health issue that would overrule this
exemption. We would like to know what
that public health issue is. Certainly, if it is a real public health issue we would be in agreement, but this needs to be made clear.
It is also not clear what sort of feedback there would be--the process by which we would
get information back from the FDA once they have looked at data from one
company and perhaps compared it to data from others.
There are
also things that we would find unfavorable in a process like this. One that comes up repeatedly is, well, what
if we have this program for three years and then we decide to rescind it? And, everyone is concerned about that. But I think perhaps a more real concern is
will this process then lead naturally to a request for more and more data? If there is a process by which
pharmacogenomic data may be submitted, does that lead to the assumption that
pharmacogenetic data should be submitted?
That is a decision that we believe should be made based on the specifics
of the drug and the drug responses, not simply because there is a process by
which it can be carried out.
[Slide]
This
matter of submitting high density data is important. It is one of many issues that are facing industry right now with
respect to pharmacogenomics. I would
just like to bring up some of the other issues that are important to us in this
field. Of course, the biggest thing is
not industry-FDA, it is nature, which is not easy. Biological systems are very complex and genetics is only part of it, and even when it is the answer, it is not a clear yes/no answer. This is something we are always dealing
with.
From a
drug development standpoint, in carrying out a pharmacogenomic study generally
you have no idea what the result is going to be. You don't know the result and you don't know its value until you
have finished. If these studies get to be expensive and you are trying to convince people that you are going to carry out
a study but you don't know what the result will mean, this is a natural
impediment to conducting these sorts of studies.
There is
also no clear regulatory pathway for what to do with the data. If you look at drug labels where genetics is found, it appears in different sections from one drug to another. So, it is not clear to people who are
designing drug development programs that there is a role for genetics in all of
this. It seems to be an ad hoc thing.
In industry, where we have significant financial constraints, a program which requires up-front investment with an uncertain outcome, uncertain even as to whether the results will help the drug or hurt it, is not going to be looked on favorably when competing for resources. So, these are built-in impediments.
[Slide]
Let me
just bring up some of the issues which have been surfaced by people within industry, questions that we would like the industry and FDA to be able to discuss more fully.
First of
all, what is the reasonable expectation of the role of genetics in drug
responses? Dr. Lesko was saying, well,
it may be large; it may be small. If
the expectation is that most drugs have a genetic component, then that tells us
one thing that we need to do in terms of conducting our studies. If the assumption is that few drugs have a
genetic component, that tells us something else. It is not that I know the answer to this, although we have been
trying to address it, but we need some clarification of what those expectations
are, and those expectations have to be consistent with the best scientific
information.
In a
practical sense of conducting clinical trials, in the hypothetical case where a
drug in its pivotal studies shows a benefit and has a reasonable safety
profile, such that that drug might reasonably be expected to be approved, if we
had carried out a genetic study and found that maybe 30 percent of the people
don't respond and, yet, the remaining 70 percent do, are we now going to have
to have a restrictive label that says don't treat the 30 percent
non-responders? That is, are we going
to be penalized in our label because we did the extra work? This is a question that comes up all the
time from drug development people.
This next
question I have here sounds almost trivial but it is also very important to a
lot of people. If we collect DNA, is
the FDA going to expect us to do something with the DNA? Because if the answer is, since you have the
DNA, then why don't you come in and genotype something, the response is very
simple, fine, we don't collect DNA.
[Slide]
If we
look at different genotypes in the population--let's say we have three different genetically identifiable groups, homozygous wild type, homozygous mutant, and heterozygous--and we want to evaluate the drug, does that mean we now
have to redesign our trial where we power it to the smallest group? That can make for a very, very large
clinical trial and that also is an impediment to carrying out that sort of
study.
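The arithmetic behind that concern can be sketched quickly. The following is a minimal back-of-the-envelope calculation, not from the presentation; the 30 versus 50 percent response rates and the 5 percent genotype frequency are hypothetical numbers chosen for illustration:

```python
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for detecting a
    difference between two response proportions (two-sided test)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_a + z_b) ** 2 * variance / (p1 - p2) ** 2

# Comparing a 30% vs. 50% response rate needs roughly 90 subjects per arm.
n = n_per_arm(0.30, 0.50)

# If the smallest genotype group (say, homozygous mutant) is only 5% of
# the population, enrolling 2n such subjects means enrolling roughly
# 2n / 0.05 subjects overall.
smallest_group_freq = 0.05
total_enrollment = 2 * n / smallest_group_freq
print(round(n), round(total_enrollment))
```

Under these assumptions, a comparison that needs about 90 subjects per arm balloons to several thousand enrollees once it must be adequately powered within a 5 percent subgroup, which is the impediment being described.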
We would
like to know what the regulatory requirements are for a test that might
accompany a drug. Let's say we have a
drug for which we want to exclude a particular genetic group, what are the
regulatory requirements for that test?
Does it need to be an FDA approved in vitro diagnostic? Can it be a home brew? What about the tests that are used just for
conducting clinical trials? What are
the regulatory requirements for those?
In order to do this we need some clarity.
Finally,
if we only test a drug in a certain genetically defined group of people, what
is that going to do to the label?
Again, there are many answers.
We just need clarity on that.
[Slide]
Just to
end this, three points. First of all,
pharmacogenomics is now an integral part of drug development. It is an emerging part but it is already
taking place within all the drug companies.
Secondly, we have had excellent cooperation with the FDA in trying to
develop the scientific basis for using these procedures and for regulating
drugs. Thirdly, the greater the clarity
in FDA's expectations of pharmacogenomics, the greater the progress will be in the use of these technologies. Thank you
very much.
DR.
DOYLE: Let's take a ten-minute break.
[Brief
recess]
DR.
DOYLE: Next we are going to hear from
Dr. Woodcock, and she is going to share with us the FDA proposal on
pharmacogenomics. So, Dr. Woodcock?
Fostering Technology Development--Pharmacogenomics
(continued)
DR.
WOODCOCK: Thank you. I think you have gotten a very good overview
of some of the issues that we are facing and why we are here, before you, today
to discuss how to move forward on introducing the technology into the
regulatory arena and actually making this come to pass.
[Slide]
Drug
regulators, according to the framework that has been set up, have basically two
kinds of obligations. We have to determine whether drugs are safe and effective for marketing, and whether they remain safe and effective. We also have to
protect human subjects who are enrolled in trials. That is the regulatory framework that has been set up for us and
we have to fit this new information into this framework.
[Slide]
I am not
going to talk about the PHS Act. It is
slightly different for the biologicals, but the Food, Drug and Cosmetic Act defines safety for a new drug application this way: the FDA will evaluate reports of adequate tests by all methods reasonably applicable to show whether or not such drug is safe under the conditions in the proposed labeling.
That is the requirement for safety.
Effectiveness
is even more vague in a way. We need
adequate and well-controlled trials to show that the drug will have the effect
it purports to have under the conditions of use.
[Slide]
These
obligations and framework have been translated into regulations. For an IND the sponsors have to submit
pharmacology and toxicology information, and here we are talking about the
animal data, on the basis of which the sponsor has concluded that it is
reasonably safe to conduct the proposed clinical investigations.
[Slide]
For the
new drug application for nonclinical studies, it says submit the studies that
are pertinent to possible adverse events.
Those that are pertinent. For
the clinical information for a new drug application, submit data or information
relevant to an evaluation of the safety and effectiveness of the drug product. That is the legal and regulatory framework.
[Slide]
The
question that is before us and that we want to have a discussion on is when and
how to use this developing pharmacogenomic information in regulatory
decisions. That is kind of the crux: when is the information reasonably applicable to safety? That is a statutory standard. Under what circumstances is this information
required, either by regulation or statute, to be submitted to the FDA? These are the questions we would like you to
talk about.
[Slide]
But we
have a proposal. We are not just asking
you to come up with something. This
proposal is that FDA will establish policies on categories of pharmacogenomic
studies, broad categories. This will be
hard. Let me give you some examples.
We will
have a submission requirement, an explicit submission requirement as part of
the policy and the submission requirement will be of several types. A submission might not be required for some
types of data that are being generated.
I think the various presenters presented some kind of data that wouldn't
be required to be submitted to the FDA--early drug development data and so
forth.
A
submission may be required under the current statute and regulations right now
for some other types. We would sort
that out in this policy. But some of
the submissions would not have regulatory decision impact. This is the proposal.
[Slide]
What do I
mean by that? We would say in our
policy related to the categories what the regulatory impact of the study type
would be. Results from some study
categories would not have regulatory impact.
In other words, they wouldn't influence the decision making. We wouldn't say to Abbott you have to do ten
more clinical trials or five more animal studies, or whatever.
Other
study results though will have to be utilized as part of the safety and effectiveness
evaluation. That is clear right now and
I think that is clear going into the future.
The question is about the difference between these two types of
categories.
[Slide]
A
possible threshold determination or a proposed one that we could use: does the
genomic information represent a valid biomarker with known predictive
characteristics? That is an important
threshold that I want you to think about.
I know Dr. Spear said, well, maybe if this information were to be
submitted and it was analyzed by the FDA, it doesn't mean necessarily we would
require that a drug would have to be used in a subset, or something, to respond
to your hypothetical, where you did a trial in everyone and some subset seemed
to respond better but you would have to submit that information. That is the proposal. But that wouldn't mean that you could only
market that drug for that subset. So,
those things aren't linked necessarily.
The FDA
is proposing that we would develop this threshold and this set of policies using a public and transparent process with advisory committee oversight, and we are
starting this process now.
[Slide]
Possible
procedures--procedures are important in the transparency. We would establish an interdisciplinary
pharmacogenomic review group probably across the centers. There is expertise in different places. We would also categorize these studies and
develop internal procedures and publish them in a guidance so they would be publicly
available. Our guidance procedures call
for us to publish a draft, get public comment and then go to a final
guidance. So, we would have a lot of
opportunity for public discussion.
We could
propose that the results would be submitted to the IND or the NDA as a research
information package for review by this group.
These are the ones who are not going to have regulatory impact. This would allow FDA to get this information
where we have determined it is not a valid biomarker, but it is useful
information that we need to learn about, and that Frank and his group need to
learn about, and Larry's people need to learn about. We could get this; we could review it centrally and develop our
knowledge base without negatively impacting on the clinical development
programs or the review process.
We would
have to have periodic public re-evaluation of this decision tree that we would
construct to allow all this to happen, to make sure you are still on the same
track, which gets to the issue of would we renege on this at some point. We would have to have a public discussion.
[Slide]
Examples that I think are fairly clear, that do have some impact in the sense that you would need to submit them--which doesn't mean, as I said, that there would necessarily be impact on the label in some way: trial enrollment by genotype. If you are going to stratify people in your trial by genotype, if you want to enrich for responders, or you want to avoid bad outcomes in a trial so you are going to eliminate certain people from a trial, that is something that needs to be submitted as part of your clinical development plan in this proposal.
Selection
of dosing based on metabolizer genotype, I think we would all encourage these
type of activities to be done during drug development where there is a
hypothesis, say, that a metabolizer genotype may make a big difference.
[Slide]
Where
there is a safety rationale based on animal genomic data, say, explaining why a
toxic finding in animals is unique to that species and is not expected in
humans, other species, and so on, in general, we think that results intended to
influence the course of clinical development process should be included in the
safety and efficacy evaluation. Does
everybody follow that? So, if you are
randomizing patients based on these data, if you are modifying their dose, if
you are doing those sorts of things, this should be included. This has gone into mainstream drug
development as part of the safety and efficacy evaluation.
[Slide]
Potential studies that could be done and not have regulatory impact--and I am not saying
whether or not these would have to be submitted; that is going to depend on the
interpretation of the regulations that I showed you, but what if you found a
new transporter gene and your drug is metabolized by it and you want to look at
the diversity of this gene versus the response, the clinical response in
clinical subjects? We really wouldn't
know anything about what this gene means or is, and so forth. So, that type of study would be one that wouldn't
have regulatory impact, decision-making impact. Something that is being done frequently now is what I call sort
of fishing expeditions where you are looking at markers in clinical trial
subjects during the trial. That is
hypothesis generating. That would have
no regulatory impact. The same with
microarray screening in trial subjects where you are simply trying to find out
correlations. The same with gene
expression microarray screening in animal toxicology studies.
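Why such fishing expeditions are hypothesis generating rather than decision supporting can be illustrated with a small simulation. This is an editorial sketch, not part of the FDA proposal; all counts and frequencies are hypothetical. Screening many null markers against responder status yields a crop of nominal "hits" that vanish under a multiplicity correction:

```python
import random
from statistics import NormalDist

def two_prop_p(x1, n1, x2, n2):
    """Rough two-sided p-value comparing two proportions
    (pooled z-test; adequate for an exploratory screen)."""
    p_pool = (x1 + x2) / (n1 + n2)
    se = (p_pool * (1 - p_pool) * (1 / n1 + 1 / n2)) ** 0.5
    if se == 0:
        return 1.0
    z = abs(x1 / n1 - x2 / n2) / se
    return 2 * (1 - NormalDist().cdf(z))

random.seed(0)
n_subjects, n_markers = 200, 1000

# Responder status and marker carriage are independent by construction,
# so every "association" found below is a false positive.
response = [random.random() < 0.5 for _ in range(n_subjects)]
p_values = []
for _ in range(n_markers):
    carrier = [random.random() < 0.3 for _ in range(n_subjects)]
    n1 = sum(carrier)
    x1 = sum(1 for r, c in zip(response, carrier) if r and c)
    p_values.append(two_prop_p(x1, n1, sum(response) - x1, n_subjects - n1))

nominal_hits = sum(p < 0.05 for p in p_values)
corrected_hits = sum(p < 0.05 / n_markers for p in p_values)  # Bonferroni
print(nominal_hits, corrected_hits)
```

With 1,000 null markers, one expects on the order of 50 nominal hits at p < 0.05 and essentially none after correction, which is why screen results of this kind generate hypotheses rather than regulatory decisions.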
Again, I
am not saying whether or not these would have to be submitted. I am saying until these have been established
as far as the validity of information from these studies, they would not be
figured into regulatory decision making.
We already have good tools, animal toxicology, traditional nonclinical
development. We have good tools to
ensure the safety of clinical subjects, and so on.
[Slide]
We are
going to have one more presentation but then these are the questions that I
would like maybe the discussion, if possible, to focus on. Is this approach reasonable? We have to move forward. You have heard the horns of the dilemma that
we are on. These studies are being
widely done and they may have tremendous promise. There is extreme concern about how they are going to be used, and
that is legitimate because they are not really ready for prime time, many of
them, for regulatory decision making.
We need to find a way to get the information in, develop our policies,
develop a regulatory framework for this information, and help to move this
field along. So, is what we have
proposed reasonable and feasible, or should we go back to the drawing board?
Will it
achieve its objectives? Obviously, we
need to achieve the objectives via policy.
We need a free exchange of data, and this is how I started the afternoon
although that was a long time ago and you probably don't remember. We need to have free exchange of data. We need to have the ability of the FDA
scientists to begin developing the framework for the new findings so they can be
integrated into regulation. We need to
advance the use of the new science, not impede the use of a new science in drug
development.
With
that, I will turn it over to the last speaker and then will come back.
DR.
DOYLE: Thank you, Dr. Woodcock. For our last presentation we have Dr.
Benjamin Wilfond, who is with the Medical Genetics Branch of the National Human
Genome Research Institute at the National Institutes of Health. Dr. Wilfond is going to address ethical
issues associated with regulatory review of pharmacogenomic data.
Ethical Issues with Regulatory Review
of Pharmacogenomic Data
DR.
WILFOND: Thank you.
[Slide]
It is a
pleasure to be here. I certainly
enjoyed hearing the prior presentations.
There are a lot of questions which I wish I had time to answer but I
won't address. What I will do is try to
focus on Dr. Woodcock's specific proposal and make some comments about some of
the ethical issues related with that.
For those
of you who aren't familiar with ethicists, basically all we do is ask more
questions--
[Laughter]
--but on
a more serious note, we try to refine the questions to give the questions
greater clarity. So, that is what I
hope to do today so it will be a way of beginning the discussion in a more
earnest way.
[Slide]
We have
heard from all the speakers that the ultimate progression of pharmacogenomic research is to improved clinical applications.
But it is a challenge to get to that place. The question that is being asked from a regulatory perspective is
when is it appropriate to use the data, either for the drug approval process or
for clinical decision making.
[Slide]
The FDA's
proposal is to divide this into two current approaches. The first is the regulatory approach which,
as Dr. Woodcock described, would be based upon when a sponsor intends to use
the data either to demonstrate safety or efficacy from clinical trials, or if
they would recommend specific clinical applications of the data. However, it would then put aside other
categories as research uses which might include unique genes or gene expression
profiles that would not be part of the regulatory process.
[Slide]
One of
her slides had this quote which I wanted to discuss because I wasn't quite sure
exactly what it meant, which is develop thresholds and policies using public
and transparent process with advisory committee oversight. I understand all the words. I think the challenge is to figure out what
exactly does that mean, or what exactly would that do.
I don't
mean to say this facetiously. I think
it is actually a very difficult question.
What was mentioned is that we need to have some threshold for these
issues of biologic validity and clinical relevance, but the challenge is what
process will we use to decide in a particular instance when this threshold has
been reached. I think that is going to
be the hard question.
[Slide]
One of
the ways we can think about this is to think about two different types of
decisions. One I will call policy
decisions, which would be thinking about categories of studies, types of data,
and whether we would use or not use those types of data. We can also think about thresholds for
inclusion from a conceptual point of view in terms of biological relevance or
clinical significance. But again, as I
pointed out, the challenge will be the application specific decisions. Those are the ones that I think will be the
most challenging, and those will be the ones that either will be used for
making further requests for more data or to be used for evaluations for
decisions about regulatory approval.
Both of
these categories would benefit from review of ongoing research data. So, the general idea of trying to ask
industry to provide data to the FDA could be useful in either of those two
processes.
[Slide]
From my
quick understanding of what is being proposed for the pharmacogenomic review
group, it is going to be primarily focused on policy. So, the question is will this group, or should this group also
consider specific drug application decisions itself or might it, as part of its
activities, say that a decision needs to be made and a second group ought to do
it? I think that is one question that I
think is important to get some feedback on about what the role of this
pharmacogenomic group should be. Dr.
Woodcock may have a particular answer for that, I don't know, but that is a
question I would like to know about.
The
second issue, and this is the second main point that I want to make, as we
think about this group or another group, is the procedural options we have for
how this group should practice. I
wanted to identify three specific categories of options that we would need to
consider. One is going to be the issue
of the confidentiality of the group.
Would this be a public group or a private group? The second is independence: is this a group that is
internal to the FDA or external to the FDA?
The third is authority. Is this
an advisory group or is this a determinative group?
[Slide]
I think
these are very important questions and there is not one clear answer to them
but it might depend upon what you wanted this group to do. To illustrate this point, I have this slide
that shows IRBs, data monitoring committees and FDA advisory committees along
these three points. You can see that
each of these has different features among these characteristics. All of these are appropriate for what they
do, but the question will be, as we think about this group, where might it fit
in to these categories?
So, with
regards to confidentiality, clearly a public group will have greater
accountability and transparency. With a
private group there will be a greater willingness of industry to share what
otherwise would be thought of as confidential data. An internal group will have greater familiarity and expertise
with all of the issues related to the drug approval process, but an external group
might have greater objectivity and not feel so constrained. With regards to authority, groups that
provide recommendations are often much more acceptable, except that it still
would require the designation of who ultimately would be making the decision.
[Slide]
I think I
would like to end by saying that a choice between these procedural options
includes ethical considerations of these three categories and the tradeoffs
between them, and that this information could be used to decide whether or not
this review group would decide about specific applications or refer to a
specific group and, secondly, to decide about these issues of confidentiality,
independence and authority.
What is
missing from the slide is perhaps the hardest question, which is what criteria
ought this group, whether it is the review group or second group, use for
making decisions in specific cases because, as much as we would like to say
that this will only be for research, we have to sort of think down the line and
realize that at some point we will cross that threshold. Thank you.
Questions and Discussion with the Board/Presenters
DR.
DOYLE: Thank you very much. Now it is in the panel's bailiwick. It is now time for us to respond. The proposal is in front of us. Anyone have any comments or
suggestions? Yes, Dr. Laurencin?
DR.
LAURENCIN: First I want to thank the
speakers today. This was really a great
session and great information.
My
comments are that before we consider the world of pharmacogenomics we should
consider a group that we can identify right now that are suffering healthcare
disparities possibly due to drug effects, and a group that could possibly
benefit from the monitoring or higher level monitoring in terms of drug
effects, and that is the African American population in the United States.
If we
think about pharmacogenomics, it is interesting that pharmacogenomics as a
discipline probably has, as part of its roots, the observation that certain
antimalarials when given to African Americans cause hemolysis, which led people
to uncover problems with G6PD deficiency and, again, led to the realization
that different drugs have different effects on people. We know now in the United States that there
are a number of drugs that are given to the African American community for
problems that have high prevalence in that community, yet, the numbers of
patients that are tested in terms of clinical trials, those numbers are low,
leading to questions about the effects.
Now, with
more and more acceptance of data from European trials, one would expect that the numbers of African Americans who suffer from some of these diseases and are enrolled in clinical trials will decrease.
I know
the Commissioner is sensitive to this.
I have spoken to him about it and he is very sensitive to this issue,
but I think that before we explore the world of pharmacogenomics and get
committees and councils set up to start to explore this or subgroups, I think
that we need to really take a step back and also look at what is going on in
populations in America right now whose healthcare disparities may be affected
by the manner in which drugs are regulated right now in the United States.
DR.
DOYLE: Dr. Shine?
DR.
SHINE: Can I ask Janet some questions?
DR.
DOYLE: Sure, absolutely.
DR.
SHINE: First of all, just for
clarification, Janet, when you made the statement "free exchange of
data" I presume that means between the FDA and industry, and that it
doesn't mean that it goes on the web.
DR.
WOODCOCK: In principle, it is most
desirable that this information all be published, but I believe that a fear of regulatory action is inhibiting publication, as, obviously, is the proprietary nature of some of the data and its use in drug development. To answer your
question, no, I meant free exchange between the regulators and the regulated.
DR.
SHINE: Thank you. Again, I understand that you are proposing a
plan but it just would be helpful to be a little bit more explicit. Under the proposal, there would be a
threshold determination of genomic information which represented valid
biomarkers with known predictive characteristics. First, does that imply then that those biomarkers would be then
obligatory in terms of all the testing that is done, or are we talking about
biomarkers that sponsors happen to collect as a consequence of it? I am trying to understand how prescriptive
this would be.
Secondly,
some sense of what you mean by valid biomarkers. I can easily see a requirement, for example, for looking at
genetic evidence with regard to metabolism.
I could see that even ultimately being a requirement in terms of how one
does it. On the other hand--and this
goes to Cato's comments--will there be an obligatory requirement that studies
be done in such a way as to determine biomarkers for particular
populations? So, some help with regard
to that?
DR.
WOODCOCK: What we are talking about
right now is studies that companies are doing for their own purposes, and
whether or not that data needs to be submitted, number one, and if it is
submitted whether it would be used in regulatory decision making. We are not talking at all about requiring
companies to do any kind of genetic testing during drug development.
As Larry
Lesko showed, we are seeing a lot of the drug metabolism polymorphism data
right now, and that is something that clearly is a valid marker, a predictive
marker for many systems, many enzyme systems of what is going to happen to
metabolism of that drug. So, that is
kind of one category. But we are not
requiring that people be genotyped. We
have other ways of getting that information through pharmacokinetic data,
phenotypic data, all sorts of ways of determining how people metabolize drugs.
So, no,
we are not talking about establishing a system whereby we would develop new
requirements. We are talking about all
this information being generated out there and what should be submitted to the
FDA, and what of that universe that is submitted to the FDA, which part of that
universe should be used for regulatory decision making. The threshold that is being proposed is the
universe where we understand what the information means.
DR.
SHINE: For example, if they have
information about a genotype related to metabolism, that might be required
whereas something else might not be required.
DR.
WOODCOCK: Required? You mean in a submission?
DR.
SHINE: Yes, if they had the
information.
DR.
WOODCOCK: Yes, that is correct.
DR.
SHINE: Finally, I am curious, you talk
about the research information package for review by IPGRG. Now, information package--I would imagine,
from what I understand about the discovery process and so forth, there is a
huge range of research that may go on with relation in a company, a laboratory
or whatever, some of which bears directly on that study; some of which is
interpreted or inferred for the purposes of that study. It seems to me there is a huge potential
spectrum there. What kinds of criteria
would you use for defining what this package would look like?
DR.
WOODCOCK: Well, I think Frank Sistare
was trying to get to some of that in his talk.
We have to work with industry in that particular instance to define the
parameters of what would be submitted because, obviously, these results are
platform dependent. So, we would work
with industry on the specifics of a submission.
DR.
SHINE: But what would be the principles
that underlie that discussion? I mean,
what boundaries would one want to place around that in terms of what we would
be talking about? I am trying to
understand what the underlying thesis would be in terms of the scope of the
activity. I understand you would
negotiate the contents but there needs to be at least some definition of the
area.
DR.
WOODCOCK: Well, the area would be at
some point during drug development, when you have a drug that is going into
people and you are submitting that information in the IND, say, to the FDA or
you are going through clinical development with a drug and you are going to do
genetic testing on animals or people, that type of testing might be submitted
to the FDA.
DR.
SHINE: But it would be specific for
that drug?
DR.
WOODCOCK: Absolutely. Yes, everything we do is more or less
product specific, yes. So, it would be
specific tests in animal or human or maybe cellular systems in some cases on
that drug that is actually in an IND, is actually in drug development. Other submissions might be voluntary. People were telling me at the break that
their folks are doing natural history studies--you know, there is a lot of
other information. Hopefully, that will
be published and we would also like to see it as well.
DR.
SHINE: The last question goes to the
discussion by our ethicist. As I
understood your presentation, you in fact do separate the role of the IPGRG
from the regulatory process. As I
understand it, they would have nothing to do with the regulatory process per
se?
DR.
WOODCOCK: They probably would talk to
the folks making the regulatory decisions but they would be separate. It would be a separate review. We do that in other cases, such as animal
carcinogenicity studies. These are
centrally reviewed at CDER by a committee of expert toxicologists and
statisticians. They give a
recommendation back to the division about what that study means. In this case probably very little would go
back to the reviewing division, although they might talk to them.
DR.
SHINE: But you used two words,
"talk to them" or "recommendations." Which would it be? Is it a discussion or does this group make any kind of
recommendations?
DR.
WOODCOCK: The current proposal is that
we would establish the categories up front.
That is very important. Once the
categories are established up front, that submission comes in and if it is not
a regulatory impact submission it goes to the multi-disciplinary review group
to look at. They don't make
recommendations about regulatory action.
They do not.
DR.
SHINE: Thank you. Thank you, Mr. Chairman.
DR.
DOYLE: Dr. Thomas?
DR.
THOMAS: I would just like to ask the
experts if they feel comfortable with platform uniformity and standardization,
harmonization, as well as the assimilation in terms of a repository or library
for this technology based on the state of the science before we get down to
that next step.
DR.
DOYLE: Any response? Frank?
DR.
SISTARE: I am not an expert so that is
why I don't feel comfortable answering that.
I don't know who is an expert on that topic, but I can speak from some
level of experience.
One of
the points I tried to make in the slides is that platform comparisons, even the
same platform, the same sample, in the hands of different investigators, if you
are looking at a Venn diagram, is not going to perfectly overlap. But the question is what does overlap or
even what doesn't overlap, does that represent biological truth? Is it accurate information? That is the key. So, what we are actually trying to assemble is what actually is
out there in the literature to try to address that question. When people for the most part have tried to
confirm the results of transcript alterations using alternative methods, say
RT-PCR, there is very good agreement.
So, that suggests that when you do see changes, they are real. Very few publications have looked and asked
when you don't see changes, are those non-changes real. In the few that have been published, again,
there is very good agreement.
Now, let
me answer this question from another perspective. If I look at one platform and look at another platform, again, do
I get perfect overlap in those transcripts?
The answer is no. A Venn diagram
won't be perfect. If you do the
correlation coefficient, like I did, quadrant 1, 2, 3 and 4, you are going to
see some things in quadrants 3 and 4 where one platform says they went up and
another platform says they went down.
Is one wrong? Maybe not because
there may be splice variants and you may see real changes there. They may both be giving you accurate
information.
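The quadrant bookkeeping just described can be sketched in a few lines. The fold-change numbers below are hypothetical illustrations, not the Iconix or collaboration data:

```python
# Hypothetical log2 fold-changes for the same eight transcripts
# measured on two different microarray platforms.
platform_a = [2.1, -1.8, 0.9, -0.4, 1.5, -2.2, 0.3, -1.1]
platform_b = [1.8, -1.5, 1.2, 0.6, 1.1, -1.9, -0.5, -0.8]

def quadrant(a, b):
    """Quadrant 1: up on both platforms; 2: down on both;
    3 and 4: the platforms disagree on direction."""
    if a > 0 and b > 0:
        return 1
    if a < 0 and b < 0:
        return 2
    return 3 if a > 0 else 4

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

quadrants = [quadrant(a, b) for a, b in zip(platform_a, platform_b)]
concordant = sum(q in (1, 2) for q in quadrants)
r = pearson(platform_a, platform_b)
print(concordant, len(quadrants), round(r, 2))
```

Here six of eight transcripts fall in the concordant quadrants and the overall correlation is high, yet two discordant calls remain: the quadrant-3 and quadrant-4 cases that, as noted, may both be accurate, for example because the platforms' probes interrogate different splice variants.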
We are
actually involved in a very nice collaboration right now with the same samples
being looked at on two different platforms.
The bottom line right now that we are seeing is that you can come to the same biological conclusion, and that is the key question: can you get to the same knowledge level when you look across platforms? A lot has been said,
a lot of nay-sayers have said confusion, havoc, a lot of craziness by looking
across platforms, but the bottom line is, by careful investigators following
established protocols, making sure they are doing everything right--new tools
in the hands of fools, you are going to get garbage out--but if you have
careful investigators looking very carefully, really testing what they are
doing and validating it, you are going to get similar biological conclusions at
the end of the day.
But it
still is a burning question and we need to address that. We are addressing that exactly. We have this Iconix database that we have
access to, 550 compounds. We are
generating data on different platforms and asking the question if I take this
different platform data and I relate it to this database or for the Amersham
Motorola platform can I get the same biological answer? So, those experiments are being done and
they do need to be done. There is a lot
that needs to be done to calibrate platforms across different
laboratories. We are hoping a universal
RNA reference standard will help that process.
We are not at the point where we can rely on this for regulatory
decision making, but if we envision the day when we can use this information
and obviate the need for some of the very lengthy experiments we do right now
using very traditional toxicity testing, we have to go through this process of
making sure we get accurate answers.
DR.
THOMAS: Thank you.
DR.
DOYLE: Go ahead, Dan.
DR.
CASCIANO: Another response to the
question, I don't think the mystery is any different than with any other assays
that we have developed over the last 25 years.
Even so-called standardized assays like the Ames assay provide us data
that are not relevant to the so-called gold standard until you go deeper into the
analysis and understand that what may be true for a mouse may not be true for a
rat. So, it depends upon what you mean
by standardizing and what is the gold standard. So, I don't think we are that much different.
DR. LAURENCIN: I am not sure I would use the Ames assay as
an example.
DR.
CASCIANO: I only used it because it is
a standardized assay.
DR.
LAURENCIN: Yes, okay.
DR.
DOYLE: We have a few other questions
here. Dr. Rosenberg?
DR.
ROSENBERG: First of all, Frank, I agree
with you about gene expression data.
That is why I want to concentrate maybe on something that is a little
more relevant in real-time today, which might be the genotyping data, and talk
a little bit about threshold determination and criteria that the agency may
want to consider.
It would
seem to me that in trying to relate genotype to phenotype, what is going to be very important
for you to consider is a term that geneticists use called penetrance. The
threshold of penetrance is probably going to have to be thought about
differently for toxic phenotypes versus efficacious phenotypes, and you will
probably have to break them down differently, but normally what one would
consider, of course, is that if a genotype is highly penetrant with its
phenotype, then it meets the kind of criteria that the agency would want to
consider as being meaningful because if genotype and phenotype correlate, then
you can get decent data.
The
problem, of course, is that for very few loci is that the case, and when you
are dealing with the human population and you have things that have lower
penetrance where many people, or a number of people are carrying genotypes but
express different phenotypes from that, you have to be very careful about making
regulatory decisions about who should or who shouldn't be taking drugs because
their phenotypes have to be correlated with the background of perhaps many
other genes that are being expressed against a background of the genotype you
are measuring.
Therefore,
genetic penetrance I think is a very good criterion and threshold concept that
you should be looking at in order to kind of start to make or think about
decision making in terms of which of these genotypic changes are relevant to
the agency starting to make real decisions about.
DR.
DOYLE: Dr. Woodcock?
DR.
WOODCOCK: Thanks. I think that is a very good point. I think we have one other step that we have
to go. It can't be just phenotypic
expression. It actually needs to
influence drug response or we have to think it is predictive in some way. So, there are kind of two thresholds that
genetic information has to make. I
mean, many of the drug metabolism polymorphisms, sure, affect drug metabolism
but, because metabolism has multiple routes, they aren't that predictive of drug
level, for example, so, you know, we don't make a big deal out of that. So, I think your point is extremely well
taken.
DR.
DOYLE: Yes, Dr. Pickett?
DR.
PICKETT: Yes, I also would like to just
compliment the group on the presentations today and to also hear about some of
the thinking here, at the FDA, on pharmacogenomic approaches.
I think
the procedures Janet has outlined, to me, make a lot of sense on the path
forward. What concerns me are examples
that implied regulatory impact versus those that do not, and how one can really
clearly define that. To me, it is
obvious in terms of metabolizer data with regard to the 2D6 polymorphism, etc., and
that should be an important component.
For example, if you are developing a drug against HIV directed against the
CCR5 receptor, it would be wise to genotype patients that actually express that
receptor before you initiate therapy.
What is
unclear to me is transporter gene diversity versus response in clinical
subjects. You said that that would be
pharmacogenomic data without regulatory impact. To me, it would depend upon the data that obviously comes out
from that type of study before one can make that determination. So, going into the process I think it will
be a challenge to clearly define pharmacogenomic data that will be subject to
regulatory impact versus that which is not subject to regulatory impact.
DR.
WOODCOCK: Could I make a comment about
that maybe to stimulate some more discussion?
I think we agree with you. There
is a difference though. Say, a firm is
studying a transporter gene and wishes to use it, that is a completely
different situation than being compelled to use that information that you
collect. In any case, if a firm wishes
to use genetic data as a biomarker to shape the design of a clinical program,
then that is completely fine and we welcome that. That is a challenge obviously, but that is not the subject of
this. It is where this kind of data is
being collected for research purposes to see perhaps, if you wish to do that,
but you are not doing it. Does that
make sense?
DR.
PICKETT: Just one other comment. Because pharmacogenomics DNA arrays are only
part of the overall biological picture, it is not clear to me how other
technologies, such as proteomics--for example, I mean, if I was focusing on
oncology and trying to understand signaling pathways, more than likely I would
look at phosphorylation of certain proteins in a signaling pathway to give me
more information versus a DNA array experiment. So, it is not clear to me how the agency is thinking about
utilizing that data to generate really a total picture of biological response.
DR.
WOODCOCK: Is it fair, I will ask the
assembled people here from the FDA, all the different people in the back
there? I think it might be fair to say
we are looking at all of this in a similar way. This is a spectrum of markers and folks will be collecting these
data, and so forth, so it is very similar.
I mean, proteomic data is, in my mind, more empirical than the genetic data
because it is more of an observational empirical pattern recognition type of
thing. But conceptually, as far as this
scheme we are proposing, it is very similar.
DR.
SISTARE: Yes, I would add that I agree
100 percent. Again, one of the points I
was trying to make is that the RNA is a response to what the proteins are
really doing there. It is an indirect
measure because we can measure it so tremendously right now. It is much more difficult--we don't have the
protein arrays to the point where we can get fidelity of those kinds of
measurements right now. It is
happening; it is developing; and definitely we need to be on board with
that. There is some extremely provocative
data coming out of some of the CBER labs right now. For example, the HER2/neu story, and that is only part of the
story. Part of the downstream of that
is from HER2/neu receptor signaling.
Looking at phosphorylation of proteins and signaling pathways is adding
another level of information in terms of who to treat and how to measure response. So, I agree with you 100 percent. We need to be prepared for these kinds of
input data as well.
DR.
WOODCOCK: Can I just say one more thing?
DR.
DOYLE: Sure.
DR.
WOODCOCK: But once the developer makes
the decision to segregate patients based on one of these markers, then we are
in a different game. That is part of
the development program and we all have to look at that and consider it.
DR.
DOYLE: Thank you, Dr. Sistare and Dr.
Woodcock. Dr. Riviere?
DR.
RIVIERE: As time is running out here, I
just want to make one comment. I think
you are going along well. You just
really need to be very careful as to what you actually require. I think you have brought up all the
cases. The point is that that data is
existing and it is going to be generated, and really you have to start
developing a framework upon which you can even accept the data into the
system. Right now you can't handle any
of this data. As you work on this just
keep it very, very flexible because, again, I would think that the proteomic
data might actually show more information in some cases than the genomic data.
I also
agree with you that if the genomics data is actually being used to design a
clinical trial, then obviously that data somehow has to be presented. If it is not, and if it is being extrapolated
to, say, an animal toxicology study or to rule out a response in another
species, you need to at least have some mechanism of bringing that data into
the picture because right now you don't.
I think it is a process. In a
year or two a lot of the outstanding questions right now will not be questions
or we are going to have a bunch of new questions. I think you are doing fine; just keep going.
DR.
DOYLE: Dr. Davis?
DR.
DAVIS: As someone who has watched this
and been to most of the meetings that Frank and PhRMA have had together, I
would like to applaud the effort. You
know, we started out with a great deal of fear that the agency was going to try
to force something down industry's throat that we weren't ready to present and
how it was going to be interpreted.
Clearly,
Janet has presented, to me, a well thought out proposal. I think there is still a long way to go but,
clearly, there is a sense of recognition that there are these two camps of
data, one being the research grade data that we were very much concerned about
that, all of a sudden, lead optimization data was going to need to be dumped
into a regulatory package. I think most
of us are aware, I believe, that if this is data that safety decisions are
being made around, then you have to share that data and we will have to debate
what it means. But if we are making
decisions with it, we are drawing some conclusions from it so it would be crazy
not to think that we would have to share it with our FDA colleagues.
But the
question will be what are the grey areas, and what to submit with no impact
versus submit with regulatory impact and I think we just have to work that
out. But I think the groups are in
place to work on that, the ILSI, PhRMA groups, some of the workshops that are
going on. So, I am quite pleased with
the proposal. Thanks.
DR.
DOYLE: Any other comments regarding
this proposal?
DR.
SHINE: I thought the representative
from Abbott was going to say something.
DR.
DOYLE: Brian?
DR.
SPEAR: I will take the
opportunity. I was intending to respond
to the question about having a standardized platform. It is very important that we have sufficient standardization that
we can take data from one experiment to another and compare them. The concern I would have would be if
standardization involved something that would inhibit technology
development. That is, saying that one
device, procedure, or method was the acceptable device, procedure, or method could
significantly stop development of new devices, procedures, methods, and so on, and
that would be very antithetical to what we would like to see in research.
DR.
DOYLE: Dr. Sistare?
DR.
SISTARE: We totally agree with
that. Maybe the best way to
characterize what we would like to do is to provide universal calibrators as
opposed to standardizing a procedure, but offer universal calibrators so people
could see that we are on the same page.
I wanted
to also, if I can, come back to something that Dr. Rosenberg started with when
he started talking about penetrance, focusing on DNA. I want to bring that discussion to RNA. One thing I think I failed to make clear--you know I don't think
I heard one word of Dr. Lesko's presentation because I was going through my
presentation and what I didn't say and what I did say but, anyway, I have heard
it before and I have the notes--but one of the things I didn't make clear and I
kick myself for not making it clear, is the data that you saw at The
Netherlands Cancer Institute today, and from this day forward, that information
from those arrays is being used to make clinical treatment decisions. Okay?
They have made the decision the data are powerful enough now to decide
who does and who doesn't get treatment based on expression array.
I would
ask our ethicist, in a sense, to ask the question: posed with data like that, is
it ethical to not use that information to treat a lymph node negative patient
with chemotherapy in a situation like that, when you have data that look that
compelling? So, when I said the future
is now, it is now. This is happening;
they are using expression patterns to decide who does and doesn't get
treatment.
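The kind of expression-pattern treatment call described above--scoring a patient's profile against a gene signature and stratifying by a cutoff--can be outlined as follows. The gene names, weights, cutoff, and patient values here are all hypothetical; validated signatures such as the one used at The Netherlands Cancer Institute derive their weights from clinical outcome data.

```python
# Outline of a signature-based treatment decision: score an expression
# profile against a weighted gene signature and stratify by a cutoff.
# Weights, cutoff, and profiles below are hypothetical illustrations.

signature_weights = {"geneA": 0.8, "geneB": -0.5, "geneC": 0.3}

def risk_score(profile):
    """Weighted sum of expression values over the signature genes."""
    return sum(signature_weights[g] * profile.get(g, 0.0) for g in signature_weights)

def classify(profile, cutoff=1.0):
    return "high risk -- treat" if risk_score(profile) >= cutoff else "low risk"

patient = {"geneA": 2.0, "geneB": 0.5, "geneC": 0.4}
print(risk_score(patient))  # 0.8*2.0 - 0.5*0.5 + 0.3*0.4 = 1.47
print(classify(patient))    # high risk -- treat
```

The difficulty Dr. Rosenberg raises sits in the cutoff: patients near it do not separate cleanly, so the score alone cannot justify denying treatment in the grey zone.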
DR.
ROSENBERG: Frank, help me then because
in the data you showed clearly, again, there wasn't 100 percent separation in
those groups. Therefore, if there are
people who are in the not treated group but they still are at risk, and you
showed they are at risk, to deny them treatment because they fall in some RNA
expression group I would say is just as damning. You can't deny such a person treatment because you have shown, in
fact, that they have a statistically significant probability of having the same
problems as people in the high risk group and, therefore, I think it is very
hard to use that data to make good clinical decisions.
DR.
SISTARE: There is no doubt about it, it
is hard to use the data. There are two
ends of the spectrum where I think some decisions are easier to make. There are grey areas where it is very
difficult to make decisions. And, even
at those ends where you will see that people don't conform, it means that there
are other variables. If you want to
talk about penetrance, there is not complete penetrance, if you will, for RNA
expression patterns which means we have to do more work to define what is
different about those individuals. We
are not perfect yet, but it is better than what we are doing now in terms
of--again, I am not a clinician. The
question you are asking, I agree with you 100 percent and I don't know what
they are doing in The Netherlands in terms of deciding not to treat
patients. That is a much more difficult
question I think than to treat someone that otherwise looks clean.
DR.
DOYLE: Thank you Dr. Rosenberg and Dr.
Sistare. Dr. Shine?
DR.
SHINE: I would just comment that
ultimately it is going to be a decision made by a patient with a doctor, given
the relative risk, the options and whether they want to be treated and what the
complications are. So, I agree entirely
with Marty that we have to be very careful from this point of view.
I do
think that the pharmacokinetic data becomes critical in terms of what you are
talking about. So, I am comfortable
that if you do good pharmacokinetic data, that will provide a lot of the
information with regard to the genotype.
Mr.
Chairman, do you want a sense of the committee on this matter?
DR.
DOYLE: Yes, I think that would be a
good thing to do.
DR.
SHINE: Well, I would just comment that
the devil is in the details and seeing how, in fact, this is articulated and
elaborated is the critical thing. But I
think the sense of the committee, which they can express in any ways they want,
is that the approach that is being proposed is sensible. It is well thought out. It offers some promise both to improve the
regulatory process and the educational process for the agencies, and I would
support moving forward with this proposal.
Closing Remarks/Future Directions
DR.
DOYLE: I sense that is exactly what the
committee is thinking right now. So,
there is general agreement so thank you for articulating that, Dr. Shine. Does the committee have any further
comments? If not, let me move ahead and
try to summarize these conversations.
First of
all, relative to the quality systems presentations, I think the Board applauds
the FDA's efforts in the quality systems area.
The Board encourages FDA to emphasize measurable outcome metrics when
approaching what one can do to improve the systems.
With
respect to FDA personnel, the Science Board encourages the worker--I can't read
my writing--that the worker provide these outcome measures and needs to
consider not only total buy-in from the internal work force but also consider
increasing head count and/or outside expertise. We encourage the FDA to continue strategic succession planning
not only for attrition of personnel over time, but planning for what type of
science you will have in the future and what type of expertise you will have on
hand.
The Board
applauds FDA's efforts in getting involved with sponsors early in clinical
development but asks the agency to emphasize careful consistency.
Relative
to pharmaceutical manufacturing, the Board is interested in hearing about
similar efforts with regards to vaccine manufacture. The Board encourages the use of case reports as a
learning/training tool. Thirdly, the
Board encourages measuring outcomes and benefits of PAT.
Relative
to the patient safety initiative, the Board encourages FDA to facilitate
feedback and communication. Are there
any other comments or follow-up that I have missed in that regard?
DR.
SHINE: Pursuing the ambulatory data.
DR.
DOYLE: Yes, pursuing the ambulatory
data. Thank you.
Relative
to pharmacogenomics, the Board applauds the FDA's efforts in this area. It encourages the FDA to step back and look
at specific populations, that is minority populations, as we look at
pharmacogenetic populations.
FDA needs
to explore and consider what needs to be done about the standardization of
different microarray platforms. They
need to consider penetrance and phenotyping and genotyping. Marty, would you like to further clarify
that?
DR.
ROSENBERG: No, I think Janet made
exactly the correct point, that it is good criteria to use and that you have to
pick the right phenotype because it has to be the medically relevant phenotype
to be measuring.
DR. DOYLE: Thank you, Dr. Rosenberg. We need to clearly define which examples
determine a regulatory impact and which do not. We need approaches that are sensible and offer promise to improve
the regulatory process and encourage FDA to move forward on this issue.
Does
anyone have any disagreements with that or any additions?
DR.
NEREM: There was the statement that Ken
made, and I think we should emphasize that the devil is in the details.
DR.
DOYLE: All right, good point. Dr. Shine?
DR.
SHINE: Before we adjourn, to come back
to the Commissioner's initial talk this morning when he talked about the
substantial decline in the number of new drugs coming through, the new drug
applications and, at the same time, we have been talking about pharmacogenetics
and pharmacogenomics. I am just curious
given that there are people here from very diverse backgrounds, including
industry, as to how much of this reflects the fact that although we may know
lots about the genome and lots of genes, we don't know what the targets are and
we don't know how to attack those targets in a predictable way given the
multiple genes that regulate a variety of processes. We have the problem of polygenic involvement in many of the
common illnesses, and so forth, and I am wondering--you know, it reminds me a
little bit of the computer revolution.
For 20, 25 years in the '60s and '70s we were going to see this huge
revolution in computing and it wasn't until the mid to late '80s and the early
'90s that it just exploded and you began to see real differences in the way
companies operated, middle management being laid off, and so forth. I am wondering whether there isn't going to
have to be a very prolonged period of development in this area, and then an
inflection point when some of the fundamental questions about these targets are
addressed. I am worrying about whether
the movement from chemistry to genetics may, in fact, account for some of this
lag. I am just curious, since we have
some very knowledgeable people here, whether they would have any comments about
the Commissioner's observations which I found very interesting.
DR.
DOYLE: Any thoughts?
DR.
DAVIS: Well, it is too bad Cecil left
given the position that Cecil holds with Schering. I think you are absolutely right. I think everything you elucidated is part of the issue or the
perspective. Genomics has given us a
lot of targets that we don't know what they do, and also even after you have a
target, being able to put a drug on that target and have it have an effect is
not always an easy thing. Some of those
targets are not as easy to get to.
So, I
think we are in a place where there is a plethora of information but we aren't
sure yet what to do with some of that.
On top of that, as the Commissioner mentioned, there is the increasing
review time. So, I think it is a
combination of a whole lot of stuff that has really got us in the position that
we are in. I am optimistic, being in
industry, that in a couple of years there is going to be this deluge. We sure hope there is, anyway, and I hope we
don't have to lay off a lot of people, like the computer people did.
DR.
DOYLE: Just a few last comments. First of all, I want to thank Commissioner
McClellan. He just did a super job in
sharing with us his vision for the FDA and I think, by all standards, at least
I feel very comfortable that the FDA is in good hands.
Secondly,
I want to thank all the presenters for the excellent presentations that they
have given and the forward thinking that was provided. I want to also in particular thank Dr.
Woodcock and her team for the very informative overview that we received of
their research program yesterday. That
was very well done. Finally, I want to
thank Susan Bond for assisting this group for the last three years, and wish
her all the best.
With
that, happy trails and we look forward to seeing you in the fall. Excuse me, one last thing, I want to let
either Dr. McClellan or Dr. Crawford have the last word here. So, do you have anything you would like to
add?
DR.
MCCLELLAN: I don't have anything to
add. This has been a terrific
discussion of a whole set of complex issues.
Thank you for your comments about where we seem to be headed. To the extent that we have coordinated
vision and are actually making progress in getting there, it has nothing to do
with me and everything to do with the very talented and dedicated staff in this
agency, and I am pleased to see that the Science Board is able to have some
impact on the direction in which we are heading. We are going to need that help more than ever so thank you very
much for what you are doing.
[Whereupon,
at 4:17 p.m., the proceedings were adjourned.]
- - -