DEPARTMENT OF HEALTH AND HUMAN SERVICES
FOOD AND DRUG ADMINISTRATION
CENTER FOR DRUG EVALUATION AND RESEARCH
ADVISORY COMMITTEE FOR PHARMACEUTICAL
SCIENCE
MANUFACTURING SUBCOMMITTEE
Wednesday, September 17, 2003
8:30 a.m.
5630 Fishers Lane
Rockville, Maryland
PARTICIPANTS
Judy P. Boehlert, Ph.D., Chair
Hilda F. Scharen, M.S.
MEMBERS:
Patrick P. DeLuca, Ph.D.
Robert Gary Hollenbeck, Ph.D.
AD HOC MEMBERS:
Daniel H. Gold, Ph.D.
Thomas P. Layloff, Jr., Ph.D.
Garnet Peck, Ph.D.
G.K. Raju, Ph.D.
Nozer Singpurwalla, Ph.D.
GUESTS AND GUEST SPEAKERS (Non-Voting):
Efraim Shek, Ph.D., Acting Industry Representative
Colin Gardner
Edmund Fry
Greg Guyer, Ph.D.
Tobias Massa, Ph.D.
Gerry Migliaccio
Kenneth M. Morris, Ph.D.
FDA STAFF:
Diana Koliatis
Joseph Famulare
Ajaz Hussain, Ph.D.
Norman Schmuff, Ph.D.
Janet Woodcock, M.D.
C O N T E N T S
PAGE
Call to Order and Introductions, Judy P. Boehlert,
Ph.D., Chair,
Manufacturing Subcommittee 5
Conflict of Interest Statement,
Hilda F. Scharen, M.S.,
Executive Secretary 7
Introduction, Ajaz Hussain, Ph.D.,
Deputy Director, Office of Pharmaceutical
Science, CDER, FDA 9
PQRI/FDA Workshop Report Summary, Tobias Massa,
Ph.D., Executive Director, Manufacturing and
Controls, Eli Lilly & Co.
Defining Quality, Janet Woodcock, M.D.,
Director, CDER, FDA 20
Considerations for Quality by Design, G.K. Raju,
Ph.D. Department of
Chemical Engineering,
Massachusetts Institute
of Technology 49
Current Regulatory Challenges in Assessing
Quality by Design, Norman
Schmuff, Ph.D., Office
of Pharmaceutical
Science, CDER, FDA 70
Proposals for Regulatory Assessment of Quality
by Design:
Industry, PhRMA, Gerry Migliaccio, Pfizer, Inc. 91
Industry, GPhA, Edmund Fry, IVAX Corp. 103
Academic, Kenneth Morris, Ph.D.,
Regulatory, GMP Perspective, Joe Famulare,
Director, Division of
Manufacturing and Product
Quality, Office of
Compliance, CDER, FDA 132
Regulatory, CMC Perspective, Ajaz Hussain, Ph.D.
Deputy Director, Office
of Pharmaceutical
Science, CDER, FDA 143
Open Public Hearing:
Robert C. Menson, Ph.D.,
Menson Associates 166
Committee Discussion and Recommendations 185
Quality by Design and Risk-Based Regulatory Scrutiny:
CMC:
Specifications and Post-Approval
Changes, Colin R.
Gardner, Transform
Pharmaceuticals, Inc. 228
GMP, Greg Guyer, Ph.D., Merck & Co., Inc. 257
P R O C E E D I N G S
Call to Order and Introductions
DR. BOEHLERT: I would ask everybody to start taking their seats so we can get started properly.
Good
morning, everybody. I am Judy Boehlert. I would like to welcome you to the second
meeting of the Manufacturing Subcommittee.
This meeting will perhaps be a little bit different from the first one
we had because today we are looking for definite input from the committee; the
first one was more introductory in nature.
So, today we are going to be asked to address a number of questions
around the topic of quality by design and the concept of risk, and how the two
fit together.
To
get the meeting started, I would like us to introduce ourselves. I will start off by saying I am Judy
Boehlert. I have my own consulting
business to the pharmaceutical industry.
We will start at the end of the table with Efraim.
DR.
SHEK: Efraim Shek, from Abbott
Laboratories.
DR.
GOLD: I am Dan Gold and I also have my
own consulting business.
DR.
LAYLOFF: I am Tom Layloff. I work for Management Sciences for Health,
which is a health sector NGO, working primarily in Africa.
DR.
SINGPURWALLA: I am Nozer
Singpurwalla. I am a professor.
DR.
HOLLENBECK: I am Gary Hollenbeck,
professor from the University of Maryland.
DR.
DELUCA: Pat DeLuca, professor at the
University of Kentucky.
MS.
SCHAREN: Hilda Scharen, Executive
Secretary for the Advisory Committee for Pharmaceutical Science.
DR.
RAJU: G.K. Raju, Executive Director of
the MIT Pharmaceutical Manufacturing Initiative.
DR.
PECK: Garnet Peck, professor, Purdue
University.
DR.
WOODCOCK: I am Janet Woodcock. I am the head of the Center for Drugs. I am also the Chair of the Product Quality
Steering Committee Initiative for the FDA.
MS.
KOLIATIS: Diana Koliatis, Regional
Director, Northeast Region, Office of Regulatory Affairs.
DR.
HUSSAIN: Ajaz Hussain, Office of
Pharmaceutical Science, CDER.
DR.
BOEHLERT: I would like to ask Hilda
Scharen to read the conflict of interest statement.
Conflict of Interest Statement
MS.
SCHAREN: The following announcement
addresses the issue of conflict of interest with respect to this meeting, and
is made a part of the record to preclude even the appearance of such at this
meeting.
The
topics of this meeting are issues of broad applicability. Unlike issues before the committee in which a
particular product is discussed, issues of broader applicability involve many
industrial sponsors and academic institutions.
All special government employees have been screened for their financial
interests as they may apply to the general topics at hand.
Because
they have reported interests in pharmaceutical companies, the Food and Drug
Administration has granted general matters waivers to the following SGEs, which
permit them to participate in these discussions: Dr. Judy Boehlert, Dr. Patrick DeLuca, Dr.
Daniel Gold, Dr. Gary Hollenbeck, Dr. Thomas Layloff, Dr. Garnet Peck, Dr. G.K.
Raju.
A
copy of the waiver statements may be obtained by submitting a written request
to the agency's Freedom of Information Office, Room 12A-30 of the Parklawn
Building.
In
addition, Dr. Nozer Singpurwalla does not require a general matters waiver
because he does not have any personal or imputed financial interest in any
pharmaceutical firms.
Because
general topics impact so many institutions it is not prudent to recite all
potential conflicts of interest as they apply to each member and
consultant. FDA acknowledges that there
may be potential conflicts of interest but, because of the general nature of
the discussion before the committee, these potential conflicts are mitigated.
In
addition, we would like to disclose that Dr. Efraim Shek is participating in
this meeting as an acting industry representative, acting on behalf of
regulated industry. Dr. Shek is employed
with Abbott Labs.
In
the event that the discussions involve any other products or firms, not already
on the agenda, for which FDA participants have a financial interest, the
participant's involvement and their exclusion will be noted for the
record. With respect to all other
participants, we ask in the interest of fairness that they address any current
or previous financial involvement with any firm whose product they may wish to
comment upon. Thank you.
DR.
BOEHLERT: Thank you, Hilda. Just by way of further introduction, the
meeting today will be structured with a number of presentations, followed by
committee discussion, followed by another group of presentations, followed by
committee discussion. We have two topics
that we have been asked to address. One
is quality by design and the other is relationship between quality by design
and risk-based regulatory scrutiny. So,
with that introduction, I will ask Ajaz to get us started.
Introduction
DR.
HUSSAIN: Good morning. Madam Chairperson, we would like to sort of
welcome everyone here, the subcommittee members and invited guests, to
Rockville and, hopefully, Isabel is not on your mind today.
As
you have already mentioned, this is the second meeting of this subcommittee of
the Advisory Committee for Pharmaceutical Science and we would like to, as was
said earlier, at the first meeting, move away from "blue sky" to some
"blue collar" work here. To do
this we have posed several questions to you in the memo I sent out to the
committee. In particular with respect to
quality by design, we seek your comments and recommendations on how we
define quality by design, how one achieves quality by design, and then
how one should assess quality by design in a regulatory setting such that we do
not interfere with the development programs of a company. That is one set of questions that we posed to
you.
To
support the discussion and facilitate the discussion we have invited several
speakers with several different perspectives, and I would really hope that the
speakers invited would focus on providing some proposals to you and different
perspectives. This will be followed by
committee discussions and please feel free to ask the invited guests the
questions that you have, as well as provide us with your recommendations on the
questions that we have posed to you.
The
second part of the discussion focuses on linking quality by design to
risk. Now, if you go through the
presentations you will see that the risks we are talking about are focused on
the CMC review process. So, I think
there is a general feeling that there are opportunities to reduce the burden
that we have in terms of managing post-approval changes. For that aspect we have invited Dr. Colin
Gardner back. He had introduced to you
the concept of "make your own SUPAC," calling it custom SUPAC. Based on the development knowledge, can we
find better ways of developing a regulatory framework that recognizes that
level of science and allows companies to benefit from the high level of science
that has already been done for a product. So, how does one link to that? We have invited Greg Guyer also to focus on
aspects of that.
We
also have a couple of presentations in the open public session. I think one particular presentation focuses
on risk. I think that will be very
valuable. With that, I would sort of let
you know that Helen is on a well-deserved vacation somewhere on the West Coast,
away from Isabel, and I again welcome all of you and look forward to discussing
these important topics with you today.
DR.
BOEHLERT: Thank you, Ajaz. Our first speaker this morning will be Dr.
Tobias Massa, who will be updating us on the PQRI/FDA workshop.
PQRI/FDA Workshop Report Summary
DR.
MASSA: Good morning. Last April PQRI co-sponsored a rather large
workshop with FDA on many of the topics within the context of quality for the
21st Century. This meeting had
approximately 500 attendees from a broad swath of industry and included
approximately 70 representatives from FDA.
We had people from the innovator as well as generic industry, small
molecules as well as biotech companies represented. The human as well as veterinary segments of
the industry were there as well. We had
international participation as well. We
had industry representation from EFPIA and JPMA, our European and Japanese
counterparts of PhRMA, as well as some EU regulators.
The
main focus of this meeting was the discussion groups. The topics that were covered are listed on
this slide: manufacturing changes, and how we can change the regulation to make
manufacturing changes easier to achieve; manufacturing science, to try to define
the body of information upon which we make decisions; how to define risk and how
risk ties into the issues of manufacturing science and changes; and then also
trying to integrate CMC review, and that includes development through the review
and inspection process.
With
regard to risk, I don't think it was any surprise that everybody, or most
people--we had a consensus opinion that risk and science-based approaches to
GMPs and regulations is the desired state.
Tiered regulatory oversight was considered appropriate. The lower the risk, the lower the regulatory
oversight. The more information you know
about your product, the better you know your product, the lower the level of
regulatory oversight.
There
was also consensus around the concept that risk is dynamic and changes over the
product lifetime. The more commercial
experience you gain from your manufacturing, the better your body of
manufacturing science is and, therefore, the lower your risk is.
Now,
we were not able to get consensus on some items: a clear definition of risk,
risk assessment or risk management just was not there. When you have that many people involved it is
a little difficult to reach consensus.
Also, how risk is related to fitness for use and how you tie that to
manufacturing, what happens on the manufacturing floor, was not something that
could be agreed on although there were certainly a lot of opinions there.
Within
the manufacturing science discussion group, this definition was agreed to. You will hear more about that from Gerry
Migliaccio later. But within the body of
manufacturing science it was felt that there should be identification of risk
at various points in the manufacturing control process and how that risk is
mitigated. Again, the concept that risk
is dynamic and manufacturing science is dynamic was discussed in this group as
well.
What
started to emerge were concerns about what should be shared with regard to
manufacturing science and how it should be shared. We have not achieved, I think, that culture
of trust that we need between industry and the agency in terms of how we are
going to handle this. There is a lot of
concern from the industry that this might result in more regulation instead of
less regulation. So, I think both within
the agency and within industry we have a lot of cultural barriers to overcome
with some of these concepts and, again, the concept that manufacturing science
would be inversely proportional to the level of risk and manufacturing
oversight.
The
issues that were discussed here were exactly what I just mentioned--what data
should be shared and how will that data be used. The overall goal, obviously, is assurance of
product quality by design rather than by testing. This group came up with some very specific
recommendations and these are enumerated here.
Basically, what we are talking about is having more discussion between
industry and the agency on the topics of what is the body of data to be shared;
how do we collect that; how do we format that in a way that makes sense for the
agency; how do we identify a risk classification system based on that body of
data; how do we use technology to mitigate risk and also providing guidance on
broader interpretation of current regulation.
Note that I am saying "interpretation" of the regulation, not
changing the regulation, as they pertain to filing supplements and inspections.
With
regard to integration of the process, the review and inspection process, a lot
of the same comments were made. It was
felt that if you had the appropriate body of data you could have tiered
regulation or tiered regulatory oversight.
With
regard to inspections, there was a general consensus that PAI should be
conducted where warranted, i.e., in higher risk situations, and higher risk
could mean a new technology that has not been approved before or a new plant
that may not have been previously inspected.
There should be a risk-based focus on the most critical issues during
any inspection.
The GMP inspections, or the biennial inspections, should be focused, it was felt, on
quality systems as opposed to being product specific.
Probably the one item that will be very difficult to attain, based on past
history, is mutual recognition of inspections and industry has raised that at
ICH as well; not only mutual recognition of inspections but, maybe a little too
"blue sky," mutual recognition of the review of CMC sections of
applications.
People
also saw a lot of value in the proposed pharmaceutical inspectorate that FDA
has in their plans. Industry would like
to participate in trying to put together a training program so that we can draw
on some of the expertise that industry has to help put that program together.
Again,
the concerns are what data and how much data should be shared; how will it be
reviewed and by whom; concerns about
more, rather than less, regulatory oversight; a lot of concern about what is
the impact on the review timeline.
People do not want to have their reviews held up by submitting
additional data.
This
is kind of further out on the fringe, but there are also concerns about FDA dictating
pharmaceutical development. We are going
to be submitting a lot more data and what we don't want to have happen is for
the agency to say, "well, I like company A's development paradigm better
than I like company B's. Therefore,
company A's ought to be the one that's used."
There
is also concern about the role of reviewers, technical experts and
inspectors. If we are talking about an
integrated approach here, how will this work?
I think industry wants to hear more specifics about who will be responsible
for what parts of this process.
With
regard to manufacturing changes, a lot of the comments that were made on
development reports, and risk, and manufacturing science also showed up
here. The two comments that I will make here are that the comparability
protocol, as proposed in the draft for small molecules--at that point in time we
had not seen the large molecule or protein comparability protocol
guidance--was too narrow. It did not allow enough breadth
of scope to allow for a manufacturer to make changes in an expedited
manner. We think the scope of that can
be enhanced.
Also,
we need to have global, not US-centric, change regulations based on risk and
science. We manufacture--many of us manufacture for a global customer base and
we can't keep operating, as we have been, based on regional regulation or
interpretation of that regulation. We have to get to a harmonized set of
regulations or interpretation of those regulations so that we are not trying to
satisfy three different regions with the same body of data.
In
terms of next steps, what can be summarized from this is that we need to have
further discussions on what is the definition of risk, risk assessment and risk
management. What is the appropriate body
of manufacturing science and how should it be shared with the regulators? How do we marry the concepts of risk and
manufacturing science to come up with tiered regulatory oversight? How can we achieve global standards and
mutual recognition for inspections, as well as manufacturing changes? And, how can we define the roles of and
training for reviewers, experts and inspectors in the process and in the review
of manufacturing science data? Thank
you.
DR.
BOEHLERT: Thank you, Dr. Massa. Are there questions from the committee
members? Yes, Nozer?
DR.
SINGPURWALLA: One point of information about the
Product Quality Research Institute: is its focus strictly for drugs or is it
across the board, including all kinds of manufacturing?
DR.
MASSA: We are only concerned with
pharmaceutical manufacturing.
DR.
SINGPURWALLA: That helps explain the
next question, namely the last slide that you put up, definition of risk, risk
assessment and risk management. Now, all
this is pretty standard outside this particular community. Why is that particular knowledge not
absorbed? Why start defining things that
have already been defined?
DR.
MASSA: I think based on some of the
discussions we have had, particularly based on the last meeting of this group,
we are exactly trying to do what you are implying, and that is learning from
other segments of industry in terms of how they apply identification,
assessment, and management of risk, and that is I think what we are trying to
achieve. I think your point is well
taken.
DR.
BOEHLERT: Are there other questions or
comments from committee members? If not,
thank you, Dr. Massa. Our second speaker
this morning is Dr. Janet Woodcock, who will speak to us on defining quality.
Defining Quality
DR.
WOODCOCK: Thank you. Good morning.
This talk that I am going to give bears directly on the point that was
just raised by the committee, which is, of course, there is a framework for
definition of risk, and a framework for risk management and how to do risk
assessment, and so forth. The question
that we have really been dealing with in PQRI and in our whole initiative is
how does that specifically apply to the manufacture of pharmaceuticals?
I
am reporting on deliberations that have been going on with the FDA
Pharmaceutical Product Quality Initiative Steering
Committee. What we determined is that we
really have to have a common definition of what is quality for a pharmaceutical
product and then we can start talking about what is a risk to quality.
So,
my talk is going to take you through some of this reasoning. It may seem peripheral at first to your
deliberations, but I think by the time I get to the end of my talk you will see
how this links to classic definition of risk, risk assessment and so forth.
Now,
when we looked into this, when we looked into the issue of quality and how it
is applied in other sectors there is really a very common understanding in the
world of what quality is. From a quality
person's standpoint it is a product or service that meets or exceeds customer
needs. So, over the years, over the last
fifty years, whatever, people have recognized that if you are in the business
of a product, a service, or whatever, your obligation really is to determine what
your customers' needs are and meet those needs or exceed those needs, and
quality is really a customer-centric definition. So, that is the outside world.
Now,
in the regulatory context of pharmaceutical quality, I think it has been agreed
that the customer or the market cannot easily or rapidly evaluate the
attributes of performance that are critical to them, which are the safety and
efficacy of the drug. That is due to the
nature of medicines; it is not easy to tell, obviously, whether or not they work. That is why we have these extensive clinical
trials on side effects. It is not easy
to link whether you have had a side effect due to a quality problem or not in
many cases, although not always. Economists call this a "market failure." The market isn't able to sort out these characteristics.
But
I think our society has decided that, regardless of this, much is at stake with
medicines--your life maybe; your health.
So, don't just let the market sort it out. That was the impetus for the statutes that
were established in the last hundred years requiring pharmaceuticals to be safe
and effective before their marketing. Under the Food, Drug and Cosmetic Act,
our governing statute, FDA actually stands in for the customer. By establishing
and enforcing quality standards that will ensure the clinical performance of
the product, as I am defining it here, we are defining quality for those
attributes.
I
am defining that tentatively--more discussion will be had about this--as
clinical performance, which is delivery of the effectiveness and the safety as
described in the label, which is derived from the data and information in the
clinical trials of that product. That is sort of the contract that is made and
enshrined in the label: this product has been tested in people and it will
deliver this effectiveness; it will deliver this kind of safety profile.
We rely upon the manufacturing controls and standards to ensure that
time and time again, lot after lot, year after year the same clinical profile
will be delivered because the product will be the same in its quality in this
narrow sense of the word.
So,
I am defining quality almost as clinical performance of the product, that it
will deliver the clinical performance, but that is not aesthetics of the
product; it is not the price of the product; not other kind of consumer-defined
attributes. So, there are other aspects
to quality that consumers may have that FDA does not regulate and will leave
out of this discussion because we are not concerned too much about the risks
there.
That leaves the open question, then, of who the customers are that FDA is
standing in for. Who are we standing in for? We agree, and the
Center for Drugs has agreed for a long time, customers are people who take
medicine. They are our customers because
we have a pact with them that we will make sure that they get that medicine
that they need. Also, of course, their
parents when we are talking about children or caregivers, relatives, etc. are
all people who take medicine, which ends up being most of the public.
Now,
obviously there is the public health stake in this in pharmaceutical
quality. So, in that sense, the whole
public has a stake. Also, a very strong
customer for this are the health professionals.
They prescribe and dispense these medicines. They are relying on this system, quality
ensuring system to make sure the medicines they prescribe and dispense deliver
the quality that they expect. Then there
are many, many other customers, including Congress and the administration. The pharmaceutical industry has a certain
kind of relationship to the FDA, and so forth.
But I think when we think about quality and risk to quality we have to
think of the primary customers as people consuming that medicine and we have to
think of the statute and what we are guaranteeing in there, that the drug will
continue to be safe and effective and perform as described in the label.
We
can debate all this when I am done, but let me get through the argument. So, this is how we proact as regulators. We want bioavailability studies to make sure
that new twists or tweaks of formulation continue to deliver the drug delivery
in the same way. If there are major
changes in the drug, we might ask for clinical or additional safety
studies. We want to make sure that
clinical performance continues the same or, if it is different, that the
changes are reflected in the label.
A
surrogate for this has been proposed and that is fitness for use. In the next part of my talk I am going to
discuss using a surrogate like fitness for use and its relationship to clinical
performance which actually, in my mind--and I am going to be talking from a
clinical perspective here, this is a somewhat tenuous relationship,
unfortunately.
I
don't think we are disagreeing that fitness for use is a surrogate that is used
for quality. We define that via the
standards that are established in regulation, in guidance, internationally, and
so on, as well as the attributes we regulate which are basically the
specifications of a product, in-process controls, and so forth. So, there is a body of items or quality
attributes that, if a product passes those, conforms to those, then we consider
the product "fit for use" and it is released.
The
question is if you are talking about risk and you are talking about risk to
quality, which is clinical performance, how do these two things relate to one
another, these specifications and everything on one hand and performance on
another?
We
define, as you know and you know this better than I, a product "fit for
use" if it meets its established quality attribute standards, including
all these and often many more. There are
in-process standards; there are all kinds of things. I don't have sterility on here. There are a lot of things, stability, all
sorts of things that a product has to meet to be "fit for use."
This
includes attributes of the label and packaging that might influence the
performance of the product. It also
includes aspects of physical performance.
For example, a metered dose inhaler that doesn't deliver its plume
properly would not be "fit for use," and so forth, and there are a
lot of physical performance aspects.
Adherence of a patch, for example, is very important. If a patch falls off it is not going to
deliver the drug to the person. So,
those also are attributes that we look at.
But
another regulatory quality attribute, one that is going to be discussed here
this afternoon, is made in compliance with current Good Manufacturing
Practices. That is a surrogate in its
own right for processing problems. There is a quality
system surrounding this product so that the probability that processing
problems have influenced the quality negatively is low, and that is made in
compliance with cGMPs. I am going to
talk about this a little bit as a surrogate.
I
also have to remind everyone, and this is something I don't think everyone at
FDA was always focused on although clinically we have always been aware of
it, that an important quality metric from the point of view of the customers--if
you talk about people who take medicines, people who prescribe or dispense
medicine--is availability. It is a key
attribute. If a product is not available
people can't use it and we act as if this is a very critical variable. We go to extreme lengths often for medically
necessary products at the FDA. We go to
extreme lengths to make them available to the customers if things have happened
so they are not available. So, by our
actions we have clearly signaled that, obviously, availability is a very
important point. Actually, the mission
statement for the Center for Drugs says that we assure that safe and effective
drugs are available to the public, and that has been our mission statement for
a long time.
Obviously,
availability is important and, as with all quality measures and efforts, you
have to factor in its importance compared to some of the other attributes
and risks to the other attributes. Risk
to availability is a risk to quality.
The
issue I want to raise here, and in the rest of the talk, is how does this
surrogate that we all use, and that you are probably going to be discussing,
the fitness for use surrogate with the associated specifications and so forth,
complaints, good manufacturing, how does this relate to the real quality metric
of clinical performance and what do we know about that? And, you all may debate me about this but
this is what I think. This is the view
of a clinician about this.
The
relationship has really several dimensions like any surrogate. There is a qualitative dimension. There is a quantitative dimension and then,
particularly in this case, there is a probabilistic dimension of the
relationship.
First of all--and I think we really have to focus on this; you can see where I
am going with this, to quality by design at the end of this talk--you have to
think about what you select for a given drug. What attributes do you select as
critical to performance, and on what basis do you select them?
I
would propose right now, you know, we generally select them on the basis of
tradition but some of our traditions are really good. For example, we feel that content uniformity
is an important attribute. I agree with
that. It is critical to performance
probably--it is critical to performance but I don't think we always go through
a conscious process of deciding what attributes are critical to performance and
how do we decide this, and that really determines whether or not your ultimate
fitness for use surrogate and your risk analysis is going to be useful or
not. So, that is point one. You have to get the right attributes. I am sure you all agree with this.
But
once you select an attribute there is going to be a relationship between the
value that you get for the attribute and clinical performance, safety and
effectiveness. Maybe. Maybe there is going to be a quantitative
relationship and maybe there isn't. But
whatever the relationship is, it is usually nonlinear and my observation of
this is that we usually treat it as linear.
Let
me give you some examples. For example,
with content uniformity, we have all agreed this is an important
attribute. Right? So, you get increasing content uniformity and
at some level you are going to start getting diminishing returns as far as
better safety and effectiveness. I think
we all agree with that. Right? But the problem is because the clinical
readout of this is so coarse, much coarser than the assays you do of content,
you really don't have a very good idea of where the minimal acceptable level
is. I think what we end up doing usually
is that we look at what was achieved in the clinical trials that the process
can achieve and we say, "well, that looks pretty good. We'll tighten it a little bit beyond
that," make it a little tighter so you fail five percent of the time or
whatever, and that is the spec. Now, I
may be wrong. I mean, you guys can tell
me I am wrong but that is my idea of what happens.
Here
is an example, we kind of set arbitrarily to some extent, of the minimal
acceptable level based on what the USP has traditionally set, or whatever, but
you have to agree with me it can't be the same for all drugs. That makes absolutely no sense. Right?
But it is to a large extent. That
is my understanding, anyway.
Then,
this is theoretical, but you can see that with increasing rigor of a
particular attribute you get a big gain, and then you can have much more
rigor after that but, don't forget, this is a quantitative relationship between
the attribute and the ultimate performance in the person. You can have no improvement in performance.
Why
am I going over this? Well, this is very
important if you are going to construct a risk model because on the right side
of this graph you don't change risk. You
aren't having any influence on ultimate risk.
But, depending on how you set up your attribute, you may think you are
having an impact on risk because you haven't looked at the relationship of the
surrogate to the ultimate safety and effectiveness.
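The diminishing-returns shape described here, where tightening an attribute past some point no longer changes clinical risk, can be sketched with a saturating curve. The functional form and the numbers below are purely illustrative assumptions, not anything from the talk:

```python
import math

def clinical_benefit(rigor, k=1.0):
    """Hypothetical saturating relationship between the rigor applied to
    an attribute and clinical benefit: rapid early gains, then a plateau
    where extra rigor buys essentially nothing."""
    return 1.0 - math.exp(-k * rigor)

# Tightening from 0 to 2 buys a lot; tightening from 4 to 6, almost nothing.
print(clinical_benefit(2.0) - clinical_benefit(0.0))  # large gain
print(clinical_benefit(6.0) - clinical_benefit(4.0))  # marginal gain
```

On the flat right-hand side of such a curve, further tightening of the surrogate changes the measured attribute but not the risk, which is the point being made about risk models.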
A
lot of times what we do is we set an arbitrary limit, and this is fine. Again, this is due to the coarse readout in
the clinic, in the animals or whatever.
We just don't have a lot of information to bring to this. We decide, okay, we have qualified some level
of impurity by a toxicology study and maybe in clinical trials, a lower level,
and anything below this is fine and anything above it is unknown and,
therefore, not acceptable. So, there is
a dichotomous relationship.
You
know, sometimes we see that a product may have an inactive contaminant in it
that is a metabolite. Of course, much of
the drug is converted to the metabolite inside a person's body but you want to
control how much is going in. Is that an
inactive metabolite? So, we develop an
arbitrary limit, and that is it.
That
is fine, and that is a very pragmatic and reasonable way to proceed but I am
trying to point out that it has no relationship to risk that I can tell, at
least no quantitative relationship to risk which is what I am talking about right
now. It is arbitrary. All right?
Then,
there is an example here where you might have used the wrong color ink--it is
still readable and everything; it is not what you said you were going to do but
it is still readable. There is no
relationship at all to clinical performance but it is a manufacturing defect of
some sort or other.
So,
there is a whole spectrum here and my point is that in very few of these, and
it doesn't have anything to do with people who manufacture drugs but has to do
with the nature of pharmacology and our inability to distinguish the impact of
small changes in the clinic--we have very little understanding of the
relationships of these attributes to what we have already decided, if you agree
with the opening premise, is the ultimate quality measure for these products.
Finally,
and I am glad Dr. Layloff is here; he can correct me on this if I am wrong,
there is a probabilistic relationship between the measurements we take on the
surrogate because we don't just have an absolute value of the surrogate; we get
our surrogate by doing measurements and between that and the medical
performance.
I
want to go through two examples. One is
the testing surrogate, a measurement, and then GMP compliance. For testing, of course, we ordinarily
evaluate whatever attribute it is for each unit that is released. We take a sample, a very small sample usually
and then we extrapolate to the whole batch, or whatever. We are then doing a probabilistic
exercise. We are saying if this sample
passed, well, how probable is it that the whole batch would pass if you tested
it. If the sample failed, you say, well,
there is a probability that this batch is different than other batches but
certainly by no means are either of these 100 percent probability.
So,
we have a surrogate marker and we are one step back from the surrogate marker
because we are taking a sample and we are doing a probabilistic evaluation
based on our testing of that sample.
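The probabilistic step described here, where a passing sample only implies a probability about the whole batch, can be sketched with a hypergeometric acceptance-sampling model. The batch size, defect rate, and sample size below are illustrative assumptions:

```python
from math import comb

def prob_accept(batch_size, defects_in_batch, sample_size, accept_max=0):
    """Probability that a random sample contains at most accept_max
    defective units, given the true number of defectives in the batch
    (hypergeometric acceptance sampling)."""
    total = comb(batch_size, sample_size)
    p = 0.0
    for k in range(accept_max + 1):
        if defects_in_batch >= k and batch_size - defects_in_batch >= sample_size - k:
            p += (comb(defects_in_batch, k)
                  * comb(batch_size - defects_in_batch, sample_size - k)) / total
    return p

# Illustrative: a 100,000-unit batch with 1% defective units, sampled
# 30 at a time with zero defects allowed in the sample.
print(prob_accept(100_000, 1_000, 30))  # such a batch still passes ~74% of the time
```

A passing sample is therefore far from a guarantee about the batch, which is exactly the "one step back from the surrogate" being described.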
Now,
the same thing, in my mind, is true of inspections. It is analogous to testing. You do an inspection and you get a set of
observations, which is a sample about the quality practices of the
organization. I haven't gone through
that in terms of the graphs and what the quantitative relationship might be
because I have even less idea. I think
the world has even less idea about how those practices quantitatively might
relate to the probability of making a quality product in terms of performance
in the clinic.
Then,
when you get this set of observations you ask how does this set of facts that
you have observed about GMP compliance or lack of compliance relate to the
probability that you are going to either produce a high quality product that
performs well in the clinic or you are not?
That is the task that we have when we evaluate inspection reports, as I
hear from folks who are engaged in that.
They take a holistic look at this set of observations and say does this
set of observations lead to the conclusion that the control of manufacturing
process is either out of control or in control and, therefore, likely to have a
certain outcome?
So,
the relationship of the proposed surrogate, the point we have been talking
about all day, which is fitness for use--the relationship of that clinical
performance is what I have been discussing.
I think we generally lack information about that except at the
extremes. So, we know if you have really
bad potency or content uniformity that is going to have an impact on
performance. If you have an extremely
high level of a contaminant, it may; it certainly doesn't seem good. But in the middle, where we are talking about
much of this risk analysis, and so forth, I don't think we have information.
So,
fitness for use in the medical world is not a complete surrogate because of
this lack of information. So, should we
just give up and not have any more discussions about this? No, I don't think so. First of all, I believe that we can use fitness
for use. We just have to keep in the
back of our mind these issues. All
right? We shouldn't be paralyzed. We realize we don't have a complete link all
the way to the person in the clinic and we have to live with that because we
have lack of information. We have to
move forward.
Second
of all, I think that quality by design makes a lot of sense. This would be in prospectively designing a
product but I think you are also going to talk about changes, process changes
and everything during the day and I think it also makes a lot of sense
there. If you can prospectively design
or designate the critical quality parameters during your development for the
product and the process, and evaluate and refine those, then you are testing in
the clinic something that is controlled on these parameters. You still will never get this clinical link
because it is just not really doable yet.
But
you can create a robust link that is hypothesis driven between the process
parameter, the specs that come out of that and the clinical performance of the
drug and we can all have, and I think that is what we have been talking about a
lot, more confidence about changes, and so forth, if you have gone through this
quality by design exercise. But I think
none of us should mislead ourselves as we talk about this. To a great extent this is still empirical at the clinical end because of the limitations
of the medical science that feeds the information back about product quality.
To
close and to get back to the question you asked earlier, we are thinking now
about how you would apply risk models to this definition of quality; how do you
think about risk to quality. Because
when we think about risk to quality we have to think about what is the risk
that a patient will suffer from failure of medical performance of a drug. That is the real bottom line here, if that is
our definition of quality. But we can't
think about that because it is too hard because we don't have the data linking
it, except in the extreme cases.
So,
we can use, I think, the fitness for use surrogate and we can move pretty far
using traditional techniques of risk assessment, risk management and so
forth. We can move pretty far along in
this area. At the same time, this is why
we think we need to develop the quality by design part of this exercise because
that really has the potential to make the link much stronger from the beginning
of the manufacture and development of the product. So, thank you very much.
DR.
BOEHLERT: Thank you, Dr. Woodcock. Are there questions from the committee
members? Tom?
DR.
LAYLOFF: First of all, I would like to
thank Janet for an excellent presentation of the subject. I think the tradition of quality in the FDA
stretches back to 1906 before safety and efficacy so it was adulteration,
misbranding and labeling. I think that
that tradition has carried forward into drugs so when we buy a bottle of pills
and it says it contains 100, we expect it to be between 90 and 100 regardless
of what the clinical aspect is. So, I
think that tradition has rolled on into content uniformity and all of our
concepts. There is a tradition of
commodity sales rather than the quality issue of clinical performance. But I agree that we should bring that
clinical performance into the risk issue rather than the commodity issue. It is time to walk away from that one.
DR.
WOODCOCK: The customer still expects the
commodity properties to be there, and we do regulate many of those, as you
pointed out, based on our tradition.
And, I think we still should but that is not the be-all and end-all
anymore. I mean, those should really be
pretty much no-brainers. You should have
the number of tablets in there, and so forth.
I agree with you and that is a good comment.
DR.
BOEHLERT: Are there other
questions? Yes?
DR.
SINGPURWALLA: I have two comments and a
question. The first comment is on your
graph A and graph B. I presume these are
just illustrative because how do you measure content uniformity and how do you
measure increased rigor? Those are not
measurable things. I presume you are just
showing them for illustration.
Perhaps
the more germane comment is about what is the probability that X test result
will predict Y outcome? I just want to
alert you and alert everyone that probability is subjective and adversarial,
particularly in an industry and government situation like the one you
have. My probability is not your
probability and there is a potential adversarial scenario evolving.
The
second thing is that many times probabilities are calculated based on prior
beliefs. So, you start with a prior
probability and you collect information and you come up with a posterior
probability. Again, there is a potential
conflict because of the adversarial scenarios that you have. So, I think you want to be aware of those
obstacles that you may face.
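Dr. Singpurwalla's point about priors can be made concrete with a beta-binomial update: two observers see the same batch data but start from different prior beliefs and reach different posteriors. The prior parameters below are hypothetical, chosen only to illustrate the potential conflict:

```python
def posterior_mean(alpha, beta, defects, n):
    """Posterior mean of a batch defect rate under a Beta(alpha, beta)
    prior after observing `defects` failures in `n` sampled units,
    i.e. the mean of Beta(alpha + defects, beta + n - defects)."""
    return (alpha + defects) / (alpha + beta + n)

defects, n = 2, 50  # shared data: 2 defective units found in 50 sampled

# A manufacturer's optimistic prior versus a regulator's skeptical one
# (both hypothetical): prior means of 1% and 10% respectively.
mfr = posterior_mean(1, 99, defects, n)
reg = posterior_mean(5, 45, defects, n)
print(f"manufacturer: {mfr:.3f}, regulator: {reg:.3f}")
```

The same 2-in-50 observation yields posterior defect-rate estimates of 2% and 7% here; unless the parties agree on a common model, the data alone will not reconcile them.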
The
question that I have is can you, in one or two sentences, try to enlighten me
as to what is the focus in which you want us to think vis-a-vis your
presentation? There is a lot of
information there and, as Dr. Layloff said, it is pretty good but I need a
sharper focus for me to be able to focus on it.
DR.
WOODCOCK: With respect to your comment
about the probabilities and the adversary relationship, nobody disagrees with
that. That is why we would like to
develop a model that is commonly understood of what is the probabilistic relationship,
or at least define some greater level of specificity than what we have right
now. And, it is not a regulator's model,
an academic model and an industry model but a single model that we can discuss
and agree upon and we can use data. And,
we are trying to do that. We are trying
to construct models, mathematical models and see how the data look in those
models. So, we agree but things are only
adversarial and only value-driven if you don't use concrete models; if you use
mental models. That is what we are
trying to get away from.
Your
question is where am I going with this and why did I give you this information
when you are supposed to be talking about quality by design in the GMP
process? I think the reason is that it
relates to what you raised earlier.
Okay? If you are only talking
within a self-referential system where you are saying quality is defined as
whatever we say the specs are, that is not really right and we have to remember
that. That is my basic message, that we
don't know the relationship of the process controls and the specs totally to
the clinical outcome, what their quantitative relationship is, or even whether
they should be a measure at all in the case of a particular drug.
As
you go through your discussion I think you have to remember this, otherwise, as
I said, you get into a circular self-referential system where you say, well,
risk is risk to the specs, and that is actually what has been proposed
already: fitness for use is defined as
meeting specifications and whatever process parameters and GMP parameters. You can easily get into a situation where you
can't get back to the underlying scientific principles, I think, if you just
stick to that definition, and that is why I presented this.
On
the other hand, I am also saying that you can't use the clinical readout as
your measure because it is too coarse and we don't understand these
relationships well enough. But you have
to keep in your mind that the ultimate measure of quality is how it performs
for the patient and that these surrogates are not that good a fit, in my
mind. You may disagree with me
though. Partly I raised all this to get
some disagreement maybe.
DR.
BOEHLERT: Dr. Gold?
DR.
GOLD: Dr. Woodcock, I am a little
confused now. Are you challenging
fitness for use or are you really challenging the setting of specifications
when we establish the quality parameters for the product, the method of
establishing the specifications?
DR.
WOODCOCK: Well, I think that is what you
are going to talk about in quality by design.
Right? Isn't that part of it? How do you set those specifications? How do you go about the process of
determining what this product should look like?
Is it an empirical process that is sort of post hoc, or is it built into
the development?
So,
am I challenging fitness for use?
No. I am saying that is as good
as we have right now but we have to be mindful as we build those attributes
that go into fitness for use. They can't
just be what we have had for 100 years because I know that is not right. Every single drug can't have the same
requirements. So, we need to move
forward with what we can do and what we have, but we have to remember the
bigger picture. That is what I was
trying to say.
DR.
PECK: Concerning fitness for use we are
not dealing, unfortunately, with a single response. We do have patients who do not act the same
as another group of patients and we have these side effects which cause us to
have some sort of limits and a deviation, if you will. So, if we look at the clinical side of it, we
already have some sort of deviation from what we would call the norm and we are
trying now to match this, I am assuming, with quality attributes and at the
moment we are still dealing with some band of attributes.
DR.
WOODCOCK: Well, I agree. The hypothesis, going into this, is that most
of the side effects that are experienced by humans with today's drug supply,
which is very high quality, are not related to quality attributes of the
drug. It is related to the pharmacologic
attributes of the drug and genetic and other variability in the people--drug
metabolism, all sorts of things. So,
that is why the clinical readout isn't very useful for us in determining many
of the quality attributes because they don't lead to the side effects that we
see. In many cases in the clinic in
doubling the dose we can't distinguish.
We can't distinguish a double dose in the clinic. Well, a double dose off the line would be a
horrible thing if you didn't intend to do that.
So, that is what I am saying, that those readouts are very coarse and it
is hard to know which matters; sometimes it would probably matter a lot. That is quality by design, thinking all the
way from the functional use of the product and thinking backwards, I think, to
what do you need for this product to perform that way.
DR.
BOEHLERT: Any other questions or
comments? First we will start with Gary
and then Tom.
DR.
HOLLENBECK: I think your remarks are
right on. I really appreciate your last
slide because it says three things to me.
I think, first of all, we are not going to find a probe that will
measure clinical performance. As we are
looking at PAT and in-process measurements, you are not suggesting that we blow
up the system that we have. It is the
best that we have.
DR.
WOODCOCK: Right.
DR.
HOLLENBECK: The second thing that you
said that really impressed me is that we have for years developed this
portfolio of information that includes many useless tests. So, we are going to look in this process
critically at what tests have the best information available for us to make
decisions.
Then
the third thing, with reference to PAT, is that we are going to look for strong
links between in-process measurements to those specific critical
parameters. I think those three points
are very important.
DR.
WOODCOCK: Thank you. You said it better than I did.
DR.
BOEHLERT: Tom?
DR.
LAYLOFF: Yes, I think we have a couple
of other traditions, and one tradition is that we typically push fitness for
use as to what is technically feasible so that as our technologies improve we
change the definition of fitness for use by what technologies are available.
There
are a couple of other risks that are involved also, another side bar. That is, a useless test is useless to
whom? Because if you are the one
performing that test it is a risk to your job.
So, risk is in the eyes of the beholder and useless is in the eyes of
the beholder also. But we tend to move
specifications and fitness for use by what technologies are available, and that
certainly was the case in digoxin as we moved on, that and others. It is the available technologies which drove
the whole thing, starting first with RIA and then fluorescence as we shifted to
try and deal with it better, but it was clearly that the changes in technology
drove the standards.
DR.
BOEHLERT: Other questions or
comments? Yes, Nozer?
DR.
SINGPURWALLA: At some point in time you
mentioned fitness for use as a key factor.
Now, I agree with you, if I understood you correctly, that fitness for
use should be defined in terms of how effective the particular drug is to a
patient or to a taker.
However,
the issue here is that this is a manufacturing subcommittee that we are talking
about, and if I was a manufacturer, my job is to produce the product to the
specifications, whereas what you seem to be saying is question the
specification itself because it is the specification that determines whether a
headache is going to be cured or not. Even if
the drug is manufactured to specification, I may still not be cured.
So,
there is a potential conflict in my mind vis-a-vis the charge of this
committee, namely manufacturing. So,
from a manufacturer's point of view you simply say, "I did what you asked
me to do; it's within standards."
Whereas, you are questioning the standard at a much higher level, and
perhaps correctly so. How do we resolve
this conflict? Am I clear?
DR.
WOODCOCK: Yes, you are very clear. I think what I am saying is the fitness for
use--I am proposing we should define as meeting applicable specs. That is how the regulators behave. Right?
That is how the manufacturers behave.
But we can't lose sight of the fact that we have accepted a surrogate
for clinical performance because we don't have anything much better. I am not saying that this committee has to
find something better and define that link.
That is going to take probably ten or twenty years of biomedical
science, for us to make that link better.
But what I am saying is although we say fitness for use is a set of
specs we ought to look very critically at the specs, and there ought to be
reasons that those specs are there, not just commercial commodity reasons, as
Tom was saying, or tradition, or whatever.
They ought to be something we believe in because that is what we are
making a product to.
DR.
BOEHLERT: I would like to cut off
comments, if I could, so we can stay on schedule. I think we will have plenty of time for
discussion later today. Thank you, Dr.
Woodcock. Our next speaker is Dr. G.K.
Raju who is going to be talking to us on quality by design.
Considerations for Quality by Design
DR.
RAJU: Considerations for quality by
design, and I will attempt to do that in the next half an hour or so. It is not possible to do a complete job in
half an hour and I will try to give a very high level set of components for us
to discuss in the afternoon. I hope I
will be able to do that.
To
me, quality by design and the extent of quality by design are very much
about the extent of manufacturing science. In terms of their descriptions
and how they go along together, I don't see them as being that different. As you listen to my talk, you will find that
I say that multiple times.
From
what I understand, let me set up in some ways why and how we come to talk
about quality by design, and to do that let's look at our manufacturing
system. If you define a manufacturing
system to be a set of processes and systems bound by common material and
information flow, this is what a manufacturing system looks like today. We have a set of steps at the beginning and
the end and little or no in-process testing.
As Gary suggests, the question then becomes is that the place to be
testing and how are these correlated with the in-process testing that we are
doing or could be doing?
But
if that is the way our manufacturing system is today, what are the consequences
of that manufacturing system's performance given that the products are
predominantly, by far and almost exclusively, safe and efficacious?
Given
the interests of time, let's just take a look at time and ask what are the
consequences of our manufacturing system today in terms of our motivation for
quality by design. The testing that we
do at the end of our process is exactly the same set of tests that Janet
Woodcock put up on her set of CFR 210, 211 kind of tests. These tests are done and they ensure safety
and efficacy.
But
what are the consequences of doing these tests at that point in time in this
way, using this technology? A
consequence, and measured in time, is that this testing demonstrating safety
and efficacy, given a set of presumed specifications for a drug product in this
case seems to take at least as much time as making the product. Clearly, we are not building quality in this
by design and that is why we are all here.
We seem to be testing, though we are not sure whether we are testing in quality. We may not be. We may have built in the quality and we
certainly are putting a lot of effort into testing the quality at the back end.
Now,
if we tested at the back end, and put so much effort into it, and our products
and processes were not variable, then there would be inefficiency at the
end. But despite that, we would have to
also bring to the table the fact that while we take a lot of time
testing we continue to have issues around how do we bring technology to be able
to address the reasons for that testing.
Taking that long test at the back makes it difficult to understand
exceptions.
If
you look at multiple companies over multiple years doing these operations, all
are safe and efficacious. The
consequence of it is that it takes half a year to do this safe and efficacious
product for the world, and the question then becomes what can we do as a
manufacturing subcommittee to enable and maybe continue to enhance the safety
and efficacy but ask questions about how we get to that safety and
efficacy.
If
you argue the motivation is not about safety and efficacy but how we get to
safety and efficacy, let's look at manufacturing science in terms of this
definition as being a body of laws, knowledge, principles involved in the
transformation of materials and information into goods for the satisfaction of
human needs. That is, we want to ensure
safety and efficacy but what is the body of knowledge, laws and principles with
which we do it today and with which we can do it tomorrow? That, we would argue, is the extent of
manufacturing science and, in many ways, the extent of manufacturing science is
nothing but the extent of quality by design from a philosophical point of view
and in many of the measurements point of view and the kind of knowledge that
you capture at different points as you go forward.
I
would argue that we are very much talking about how we get to that safety and
efficacy in terms of a manufacturing system, and how we can work together to
enable us as a society to move from knowledge that is descriptive, correlative,
sometimes causal but rarely, in my opinion, mechanistic and, hence, rarely
predictive. And, if we cannot be
predictive, we would not have designed in quality.
Yes,
we would like to go to ultimately predicting everything, but if we can predict
the qualitative trends I think that would be a huge achievement for us. So, in many ways I see this opportunity for
us as we ensure the safety and efficacy, how do we go from a set of bodies,
laws and principles that are mostly descriptive and correlative to those that
are mechanistic and may be beginning to be qualitatively predictive. That is, how are we going to work together
whether this Y axis is the extent of manufacturing science or the extent of
quality by design to, depending on your business choice, do a lot of the
mechanistic knowledge development even before you make a submission.
If
you do that, you have now enabled yourself to be quite independent and very
much able to make changes during the regulatory period of your manufacturing,
and maybe you have bought yourself the ability to be quite independent of the
regulator. The alternative is to have a
minimum level when you are at commercial manufacturing and work within your company
and within the regulator, interactions you share with the regulator, to enable
you to make this transition.
The
reality, in my opinion, is while this is the desired set of profiles, whether
we are going to build in quality towards a quality by design state during
development or during routine commercial manufacturing. There is, I believe, a state of today, which
is very much the correlative and causal knowledge, and a state of tomorrow,
which is the mechanistic and first principles knowledge, and between these two
states of today and tomorrow is the cost, quality, time, opportunity for the
social structure.
If
those are the dimensions of manufacturing science and quality by design, I
think I begin to lay the foundation of two general classes of leverages to go
from here to there. While each is a
powerful leverage and not mutually exclusive, it seems clear that the strategic
level or the leverage that has the biggest impact is the one during development
because that is when you decide what your specifications are. That is when you decide what your information
sources are, what your experiments are at a small scale in your collaborations
in the laboratory. That is your ability
to make yourself independent of the regulator to a large extent. However, there are costs and organizational
consequences of that.
A
second, tactical leverage is to do that quality by design development around
learning from each lot, particularly the lots that are the exception lots.
So,
the strategic leverage to get to quality by design in terms of learning by
doing is a significantly enhanced level of product and process understanding
before commercial manufacturing. Doing
so enables potentially a mechanistic basis for setting product and process quality
specifications that allow us to get out of this discussion that we had earlier
today.
It
has an impact over the whole life cycle.
You do the development much before the manufacturing, which makes it easier
at this stage to make basic process design changes between wet granulation, dry
granulation and blending, for example, and few, if any, regulatory barriers at
that point in time.
The
tactical leverage is an enhanced level of product and process understanding
during commercial manufacturing. The
good news is that there is a potential to use large amounts of production data,
much more data than you do for many of your experiments, but much of that data
is large in quantity but low in quality because in commercial
manufacturing most of your runs are about trying to meet specifications rather
than trying to do experiments to gain information about the process.
Investigations
and exceptions are the ones here that provide opportunities to learn. It is difficult, however, to make significant
product and process changes because you now are making a product that has been
approved given a set of bioequivalency, given a set of submissions that you
have made to the FDA, and it is rarely an environment to develop a mechanistic
understanding. I would argue that this
is a very difficult place to do mechanistic understanding despite the fact that
you have huge quantities of data and you have opportunities where the data has
some information content.
Depending
on which of these two leverages you use or what combination of leverages you
use, we have an opportunity together as we go from correlative and causal
knowledge kind of process, which is represented here, which is the diagram I
gave you as an example, to one that has a much more simple, much more
automatically controlled, and much less quality by testing focused technology,
manufacturing system, process flow diagram.
The
question then becomes how are we going to go from here to there. The strategic leverage is to go from here to
there during development, and the tactical leverage is to go from here to as
far as you want to go or have to go during manufacturing.
If
you look carefully at this diagram, one point to make is that you sometimes
have to measure more than we measure today to figure out what your critical to
quality process variables are for process understanding. To a large extent, the critical to quality
variables for safety and efficacy are very much in place. The critical to quality measurements about
process understanding are not necessarily in place and so we have to measure to
figure out what we have to measure, and this is a significant investment of
time and resources both for organizations and development and in
manufacturing. So, we bring in the
question of what is the cost-benefit tradeoff to make this transition together. It also brings up the point that this has to
be a choice of the companies rather than a requirement.
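The idea of having to "measure to figure out what we have to measure" — screening many candidate process variables against a quality attribute to find the critical-to-quality ones — can be sketched with a simple correlation ranking over production data. The variable names and the simulated data below are assumptions for illustration only:

```python
import random

random.seed(0)

# Simulated production records: three candidate process variables and a
# quality attribute that (by construction here) is driven by blend_time.
n = 200
blend_time = [random.gauss(10, 1) for _ in range(n)]
temperature = [random.gauss(25, 2) for _ in range(n)]
humidity = [random.gauss(40, 5) for _ in range(n)]
quality = [0.8 * b + random.gauss(0, 0.5) for b in blend_time]

def corr(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

candidates = {"blend_time": blend_time, "temperature": temperature,
              "humidity": humidity}
ranked = sorted(candidates,
                key=lambda k: abs(corr(candidates[k], quality)),
                reverse=True)
print(ranked)  # blend_time should rank first
```

A correlative screen like this only identifies candidates; as the talk argues, moving from correlative to mechanistic understanding of why a variable matters is the harder step.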
So,
those are the two big leverages and that is what the picture could look like in
the future. What do I see as being the
components of getting to quality by design to be able to get to the top of that
pyramid in terms of first principles?
I
believe that going to the top of the pyramid is a learning, is an improvement,
is a change goal or a change exercise, and I believe that all learning
opportunities/problems have five components to them.
One
is the thing you are learning about, called the application domain. It can be the process and its relationship to
the product; that relationship is somewhere out in the world and
you are trying to learn that relationship and how much of that relationship you
have learned is measured by the extent of your manufacturing science or your
quality by design.
This
then would be where you are at any point in time. But where you are at any point in time depends
on three pieces. One, where you started,
which is your prior knowledge. You may
have started here; you may have started there and that very much determines
where you are going to be given the time that you have.
Two,
what your relationship is between you as a learner and the application and the
process you are trying to learn about.
That is, are you really trying to learn, or are you simply trying to
conform? Are you simply trying to
comply? If your relationship as an
organization, as a site, as a plant is about compliance only you will learn
very little after you have complied. You
will not challenge your specifications.
You will not see the need to go to a mechanistic basis because you
complied. The cost, and quality, and
human life consequences of doing so are significant. So, we must ask ourselves what is our
relationship between the process and ourselves as an organization. Is this an opportunity to learn or is this a
demonstration of compliance?
Given
these two determinants of this place in the learning curve, we can measure our
place on that learning curve given a set of performance measures. I will tell you a couple of examples of each
of these components.
Given
that in this case our organizations are fixed in terms of their names at least
for now, and the focus is on the processes, those two parts of our learning
structure are fixed. Let's then talk
about the other three parts that help us define where we are relative to where
we started and where we can be.
The
first step is a priori knowledge. As we
interact with the FDA and track within our companies and we want to communicate
to each other and the FDA what our level of quality by design is, I actually
think it would be quite difficult to make that communication in terms of a set
of numbers at this point in time.
If
your level of process understanding is at the correlative and causal level, you
need to also share knowledge about your prior knowledge, where that comes from
about your materials, your excipients, your APIs, how much was known before you
started, how much was known in your development programs, how much was known
across the industry, and what does this a priori knowledge look like in terms
of the extent of it being first principle, mechanistic, causal, correlative and
descriptive knowledge.
The
second piece, what was the basis on which the experiments, the data, the runs
were done for you to say what your performance is? If I did the same thing again and again by
having all my variability outside my process system, I could have a very high
process capability number for a while, but it really wasn't that capable. For example, what was the extent, how much data and what kind of data do you have? That is the question: how much of this target space have you really explored, not whether I was able to do three batches, which by itself is not necessarily a good thing.
Then
the question then becomes, in terms of relationship between the organization
and its process, what were your experiments?
How much of this space did you explore?
And, what are the basic failure modes around the edges? That is the next piece of information that I
believe should be communicated as long as we are not yet at a mechanistic level
of understanding.
The
important point to bring up in the case of the role of information exchange
between the process and the organization is that the measurement system that is
in place very much determines your measures of variability and what your
critical to quality performance is, and it is very difficult to do that because
often the process, the cause of your performance and the actual test in our
current testing paradigm has a lot of built-in variability. That is a big factor in determining the role
of information exchange between the process and the organization and how fast they
can go down that learning curve to head towards quality by design. So, that positions beautifully the role of
bringing in the different measurement systems into the information exchange
between the process and the organization.
The
third piece of getting to quality by design, the third component is how far
have you got and how well do you perform in terms of the extent of quality by
design. Here I would like to suggest
four, but really three new or maybe additional variables as potential things
for us to discuss today as performance measures of extent of quality by design
or extent of manufacturing science.
Safety
and efficacy in terms of what it is in the outside world will always be a
measure of performance, of quality by design.
That is the ultimate performance.
When you have a recall or a complaint about safety and efficacy, that
will always be a measure and it is always going to be in our current system.
In
addition, I want to put on the table three metrics, each of which I will also
have significant complaints with as I put them on the table. First, process capability associated with
critical to quality attributes. Two,
variability of critical to quality attributes and, three, predictive ability of
performance.
With
that suggestion to put them on the table, now let me criticize why these have
to be discussed in great detail and a significant amount of thought has to be
given to keeping them on the table for very long.
First,
I like variability. It is one divided by
process understanding. But given our
performance measures of today beyond safety and efficacy, it is not clear that
we know what critical to quality variables are and so putting it as a measure
without having an understanding that this is not yet in place could be a source
of great friction if we don't lay the groundwork in place for research
exemption, or safe harbor, or the reason why we are doing the whole thing being
process understanding and not necessarily safety and efficacy.
The
second variable up here is called process capability, which is the variability
of the process relative to the customer specifications. Again, you have the question of critical to
quality variable in it but you also have a presumption of a specification in it. As you put it up there and, yes, you are safe
and efficacious, just because you have a low process capability doesn't mean
your process is that bad. It may
actually be good. The question is really
all about your specifications and it comes back to what Janet said earlier
today, we are in this exercise of challenging our specifications, and that is
the mechanistic piece, and that is the first principle piece. I know it is a lot of work to get there but
as we develop these pieces we are going to make sure we have all those pieces
in place as we talk between regulator and regulated.
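The process capability index mentioned here is conventionally computed by comparing the specification width to the process spread. As a minimal sketch, assuming the standard Cp/Cpk definitions and purely hypothetical assay data (the speaker gives no formula or numbers):

```python
import statistics

def capability_indices(samples, lsl, usl):
    """Conventional process capability indices against lower/upper
    specification limits. Cp compares spec width to the 6-sigma
    process spread; Cpk additionally penalizes an off-center mean."""
    mean = statistics.mean(samples)
    sigma = statistics.stdev(samples)  # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical assay results (% label claim) against 95.0-105.0 specs
batches = [99.8, 100.2, 99.5, 100.4, 99.9, 100.1, 99.7, 100.3]
cp, cpk = capability_indices(batches, lsl=95.0, usl=105.0)
```

This also illustrates the speaker's caution: a high Cpk computed from a few well-behaved batches says little unless the critical-to-quality attributes and the specifications themselves are well founded.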
However,
in many cases these are about mathematics.
Mathematics is in the end trying to describe physics and chemistry. In the end, however, the physics have to be
represented if you want to go beyond correlative to causal to mechanistic
understanding. To do that, this would be
the ultimate test of performance, and this will really be the indicator because
you cannot necessarily define your a priori knowledge if you have this piece
already. You don't necessarily have to
define your relationship with the process if you have this piece already
because that piece is embodied in your mechanistic model. If you don't have it, you will always have to
add those other two components of your learning paradigm, which is what was the
data you generated; what did you know before; what does all that other
knowledge look like? So, it is going to
be very difficult to have one or two variables, these two being the only two
variables where a lot more context has to be given to them. In the end we will come to this highest state
but we would have to go through quite a transition to get there.
So,
my complaints with these three suggestions that I put out is, one, we don't
necessarily know the specifications here.
We don't necessarily know the critical to quality pieces here. In many cases we are far away from here. So, this is not necessarily immediately
useful either, although it is a desired state and it is in place for some cases
in my opinion.
So,
given those three performance measures, on top of the safety and efficacy
implications and presumptions that I think we very much do very successfully
on, I believe that is the opportunity to decide where, as an organization, we
are going to go between two, to three, to four.
To
end my presentation today, ask yourself if those three components make
sense. Ask yourself if those performance
measures make sense. If they do, and
even if they don't and we find better ones, which is the whole point, in many
ways quality by design is simply the extent to which we do things right first
time. That is, if we are going to do
quality by design by bringing in these changes here in development, yes, we
have reduced the burden on the testing on the end of the plant but we have
reduced the burden of the testing outside of a plant in society, and with the
building quality, philosophically we have laid down a social structure for us
to go to be designers and developers rather than testers, and for the
regulators, instead of being evaluators and investigators, to maybe be
facilitators and accelerators. That is
the quality by design consequence to the society at large.
But, of course, how can quality by design and manufacturing science not be about lowered risk, process understanding, lower variability, which is one of the major ones and, of course, lower costs? We
want to go to a physical understanding and a chemical understanding that goes
beyond "I can correlate; I see a relationship; but I can't extrapolate
because I don't know if this is the cause." I have some causal knowledge. I can extrapolate a little bit now but I
don't really know if there is a linear relationship or a nonlinear
relationship. I know the basic forms of
the relationship so I can extrapolate and I know the basic bounds. I don't necessarily know the individual
parameters to the dream land of "right first time" and in many ways
the extent to which we do things right first time is in many cases the extent
to which it is quality by design.
As
we begin to understand our mechanistic knowledge, I think this committee,
probably not even this committee but industry and the FDA together can lay down
a foundation for a classification, a separation of social tasks as to when the
FDA no longer needs to be involved in the process at all. If you go back to the cGMPs of 1978, maybe
there is an opportunity, as we measure better, as we look at more product and
the product is connected to a mechanistic understanding and the manufacturing
system has more presumed mechanistic understanding, maybe we don't need to go
into the process at all one day in the distant future. That is my last slide.
DR.
BOEHLERT: Thank you, Dr. Raju. Are there any questions? We will have an opportunity for further
discussions, since Dr. Raju is a member of the committee, later this
morning. Janet?
DR.
WOODCOCK: Yes, I would just like to make
one comment because I feel that perhaps my presentation or focus might be
confusing as far as how it is related to this, but it shouldn't be. I think there are several pieces of quality
we are talking about here. When I was
talking about definition of quality for the regulator, as I said, that is what
the patient ultimately deals with.
I
think G.K. is talking about the process quality, quality of the process. That is different. It should lead into the quality but there is
another step there, which is the step I was trying to talk about, which is how
you set the specifications that the manufacturing process is aimed at some
goal, and that goal would be the safety and efficacy.
So,
I think my point was you need to understand the goal as well as understand your
process, and understanding the goal might be even earlier. The earlier you do that, the better off you
probably will be, although I know it is very hard. At least, the earlier you develop a
hypothesis about what your objectives are, then you can design the process and
a formulation that is intended to achieve those. So, there are two different kinds of quality
and you are talking about different kinds of risks when you are talking about
each of these. So, I just didn't want to
confuse people. We will be developing
better models as we go ahead so you don't have the feeling you have to solve
all these issues today.
DR.
BOEHLERT: Thank you. Now we can all relax.
[Laughter]
Our
next speaker before we take a break is Dr. Norman Schmuff, from the Office of
Pharmaceutical Science.
Current Regulatory Challenges in
Assessing
Quality by Design
DR.
SCHMUFF: Well, I can't relax because, I
mean, I am up here.
From
the high level perspective that G.K. presented, I would like to give you some
thoughts, if not from the trenches, at least from one of the commanders of the
troops in the trenches. As a team
leader, I see all secondary review from the CMC in our Division and, as well, I
have been involved in CTD-Q and in those ICH negotiations where that was
drafted, and in the drafting of the P.2 section in our Drug Product Guidance
which we are now revising into the final Drug Product Guidance. I can tell you, we have plenty of comments on
the P.2 section.
So,
this is my outline here. I would say the
current model, if we divide it up in sort of the typical way: in the IND, we
heard from industry in the past that they don't like to hear a lot of comments
about their subsequent development. So,
generally we should stick to issues related to safety. That is the way our Phase I guidance and our
Phase II/III CMC guidance have been drafted.
The emphasis is on safety.
It
is somewhat peripheral that product consistency and quality are also aspects
that should be addressed, and during the IND process the CMC amendments that we
see are usually pretty brief.
Now,
the NDA '87 model for NDA submission is that, as I see it, we only had a couple
of places where this development data could have snuck in. That is, there are investigational
formulations which typically is just a table saying here are the components and
composition of a product that we used in our earlier clinical trial. There is, however, a section for in-process
control.
In
supplements these two, as with IND amendments, are generally not very
substantial documents and don't contain really much development
information. Annual reports, which are
becoming more important in post-approval changes, many post-approval changes,
we still don't see there much development data.
So,
the conclusion, on currently available information, is that we don't see a lot of
development data. Traditionally, much of
this data was not shared. There have
been some cases where firms have shared with us the European pharmaceutical
development report and I think people generally have found that to be helpful,
but there are regulatory concerns and concerns about increasing resources and
increasing sizes of the submission.
But
I guess it does provide an opportunity to down-regulate post-approval changes
if we can feel more confident about the quality that you built into your
product; if we can feel confident that your development program has identified
critical issues, and you can make changes, and we would then know what was
critical and what was not critical.
The
existing development reports, really the P.2 section of the CTD owes its
history to the European development pharmaceutics report, and there is this
guidance that is still on the web for the pre-CTD development
pharmaceutics. They subsequently have
issued a post-CTD development pharmaceutics report which really is not much
more than what is in the CTD. There also
is a development chemistry section that you will find if you look at this notice
to applicants.
FDA--there
is a thing mentioned in an ORA guidance called a product development report, which is not obligatory, but items are mentioned that, if not included in a development report, should somehow be available for inspections.
Of
course, we have the P.2 pharmaceutical development section which was in the
CTD. Now we are trying to draft out some
drug product guidance about what that would be and there are, of course, the
ICH initiatives in that area.
Here
are the broad-brush headings and subheadings of the pharmaceutical development
section. P. is the product section. So, you see that even in the product section
there is some drug substance information.
Then there is drug product information.
Perhaps
the section where there is the most opportunity to educate us about your
process knowledge is this 2.3 section, manufacturing process development.
This
is verbatim what the CTD-Q says.
Essentially, I have summarized it in the next slide. It is compatibility of the drug substance
with excipients; the physicochemical properties that can influence the
performance; and the compatibility of the substances with each other if you
have more than one drug substance in a dosage form.
There
are opportunities to put this information in different sections of the
application, and we have typically seen it in different sections of the
application. So, the drug substance and
product group have struggled with the "what goes where" question and
how these sections differ from similar sections. Here I have just listed out, for example,
where polymorphism is mentioned. It is
mentioned in the pharmaceutical development section but it is also mentioned in
these two drug substance sections.
So,
one proposal would be that testing on a drug substance still be in the
substance section, and the drug product testing would be in the pharmaceutical
development section. Then, data in the
P.2 section can be used to justify drug substance specifications. So, it seems a little bit the reverse, that
is, you have a section in product that points to justification of drug
substance specifications.
Here
is the Q6A drug substance particle size decision tree. I just thought I would point out that in
answering these questions in this box, many of these would probably be in the
pharmaceutical development section, in this P.2 section. So, that is how P.2 would relate to the Q6A
decision tree.
If
we look at the polymorph decision tree, conduct a screen--you know, there is
some question of how to conduct a screen.
Can polymorphs be formed? And characterizing the polymorphs--I guess the current thinking is that this
actually would be in the drug substance section.
Now,
if you go further in this tree you will find these items. Is the product safety or performance
enhanced? In that case, we see that it
would go in the drug substance part of the pharmaceutical development
report. But the justification for no
further testing and the justification and the setting of the specification
would go in the drug substance section.
The
remainder of the polymorph decision tree--we see that more product testing,
that is, does the product performance provide adequate control if polymorph
ratio changes, that would be in this physicochemical-biological property
section of pharmaceutical development.
An
alternative proposal would be that any kind of one-time testing should be in
P.2 and that all stress testing, for example, be in P.2. So, this was discussed during the ICH
negotiations and it is not explicitly written into CTD-Q but it was one proposal.
The
excipients--what kind of data should we expect since, to some extent, this is
new to us, and this is kind of your pre-formulation studies? Should we always expect to see this kind of
compatibility testing? Test all of them
at once? And, drug product stress
testing, should that be performed or would that be covered if you did adequate
pre-formulation development?
So,
CTD-Q indicates that you should essentially justify, based on function, why you
used the excipients that you used, and we have sort of added in the draft of
our product guidance that ranges should be justified; that functional excipient
performance be mentioned; that there be additional information on novel
excipients; that if you use an excipient with some biological activity, inherent
biological activity, that you tell us something about that, that you
rationalize that; and that you give us the tracer information in that
particular section.
There
are other excipient sections that are listed here that deal with control of the
excipients. Really it is control in your
product. So, the one-time testing is in
P.2 and the control is in this particular section of the CTD.
The
novel excipients appendix really was designed to provide a place for providing
extensive information should you have an excipient that has never been used in
an FDA-approved product.
So,
now we get more into the development-related issues. The CTD says that the development history
should be included in this formulations development section, including route of
administration and use. Here is where
you should lay out what were the differences in the clinical versus
to-be-marketed product. So, you would
give us the information about the composition that was maybe used in a Phase I
study or maybe in one of the Phase II studies, and you would lay out what the
difference in manufacture was and, if it is appropriate, you would give us
bioequivalence, at least a summary of bioequivalence data there. Generally most of the bioequivalence stuff is
in the clinical part of the CTD.
In
the drug product guidance we added a few other things about scored tablet;
about overfill. We actually didn't put
in anything about drug product studies and the polymorph decision tree because
I think, frankly, we are still thinking about it. And, diluent selection, that is, why did you
select the diluent that you selected.
Compatibility is in a subsequent part of the P.2 section.
So,
this is the entire manufacturing process development statement description in
CTD-Q. I am not going to read it but it
is relatively brief and certainly open to a lot of interpretation in terms of
what went in, what would go in and how much would go into that particular
section.
So,
we sort of laid out some additional information, although really not a lot beyond
what is in the CTD, that says you should describe the manufacturing in-process
controls. You should at least mention
that thing that you mentioned in the previous section about changes in
the clinical trials. You should explain
selection and optimization of the manufacturing process and define critical
aspects of the manufacturing process.
This
criticality issue comes up a lot. It has
been mentioned several times today and also it is a CTD-Q heading in control of
critical steps and intermediates. That
is a heading in both the drug substance and the drug product section. Actually, we did get some comments on the
draft guidance that maybe we should define what we mean by critical.
One
of the ICH guidances in which critical is defined is the Q7A GMP API guidance,
in which it indicates that what is critical is any step that, should it lack
control, would affect the specification of the drug substance. So, that is one sort of way to get at
criticality but, it seems to me, it is perhaps a bit incomplete.
The
development data--these are some general thoughts on the development data. That is, you should identify the critical
steps and variables. I can tell you that
we had a discussion in my Division about drug substance. We have a CTD-Q application and the applicant
has finally decided that there are no critical steps in the drug substance
manufacture and I guess we are kind of struggling with that concept. That is, could that be, or would it maybe
even generally be true that you don't have critical steps in drug substance
manufacture? You can argue that, after
all, with drug substance probably a lot of the quality attributes can be
tested, end-product testing probably does tell you quite a lot about the quality
attributes at least for drug substance.
I
think that science-based specifications, that is, specifications based on what
you know about the manufacturing process, what you know about any clinical
data, should allow us to focus on the high risk steps and the controls on these
high risk or critical steps.
I
guess lack of adequate development data would suggest that there may be
critical things that you didn't uncover and suggest that maybe there is a
higher risk in any post-approval changes, so maybe the reporting category
should be higher. But when best
practices are employed, I think most people would agree it would minimize the
risk of poor product quality, and it would allow us to down-regulate any
post-approval changes.
That
is kind of where we are now, where we have been in '87. We are still trying to work out where exactly
we are going with the P.2 pharmaceutical development. There is another concept paper that will be
presented in Osaka to the steering committee, or actually perhaps before Osaka,
who will then have the opportunity to adopt P.2 as an ICH topic. So, there may be some substantive P.2
discussions at Osaka.
So,
we are trying to refine this in the drug product guidance and we would be
anxious, since we are currently rewriting that, that is, we are taking the
draft and writing the final guidance, we would still be interested in hearing
your comments. I can tell you we got
maybe 200 pages printed out of comments from not a large number of people, but
the people who did comment had many comments.
Closer
cooperation between ORA and the Center review chemists--I can tell you that I
have been here for 15 years and it is still not completely clear to me, even
after having taken some GMP training recently, what exactly it is that the ORA
folks look for. I mean, if you ask me to
write out, for example, the elements for a validation protocol for a wet
granulation, I have some sort of general idea about what that would be but I
think I really lack some specific knowledge in that area. I think that generally reflects this sort of
division between the field and the Center in that we don't really understand
what the field folks do and I think the field folks are not completely clear on
what we do. So, you can see that these
current initiatives that we have are going to bring us closer together on that.
Now,
P.2, when there were initial discussions, was initially seen as a one-time only
report that would go into the initial submission. When discussions turned to well, what about
post-approval changes, there was general agreement that for CTD-Q we should
focus on the original NDA submission, and we shouldn't focus on any subsequent
submissions. So, I think the thinking
was there that P.2 would maybe be submitted once but now I think the idea certainly
comes about that once you have established your manufacturing process you learn
a lot in that first year on drug product manufacture. So, maybe it is appropriate in some sort of
subsequent filing to update that and to tell us what you have learned
subsequently, maybe in a first supplement or something like that.
Could
portions of the GMP product development report be included in P.2? I say that because, you know, there are
issues of resources, resources devoted to putting together this P.2 report that
you didn't have to put together previously.
But I think now, if you think about it, we have the opportunity for data
reuse so you can imagine reusing some or all of this GMP product development
report in the P.2 section, thereby minimizing the amount of resources for this
seemingly new section.
I
think really the XML-based document management that probably will be
necessitated by the XML-based eCTD will promote this kind of information reuse,
this reuse of various modules. So, for example, one thing that occurred to me is the information from the annual product review, which typically we don't see. In the Center we don't
see the number of batches that you made during the year, you know, what the
specifications were like and what the acceptance criteria were. We don't see control charts that I understand
are typically in the annual product review.
So, maybe there would be an opportunity, with little additional
resources, to provide that to us, telling us the number of batches manufactured
and the observed trend.
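The control charts typically found in an annual product review are Shewhart-style charts. As a minimal sketch, assuming an individuals chart with sigma estimated from the average moving range (the conventional d2 = 1.128 constant for subgroups of two) and purely hypothetical batch data:

```python
def individuals_chart_limits(values):
    """Shewhart individuals (I-MR) chart limits: estimate sigma from
    the average moving range divided by d2 = 1.128 (subgroup size 2),
    then set control limits at the mean plus/minus 3 sigma."""
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    sigma_hat = (sum(moving_ranges) / len(moving_ranges)) / 1.128
    return mean - 3 * sigma_hat, mean, mean + 3 * sigma_hat

# Hypothetical batch assay trend over a year of manufacture
trend = [99.9, 100.1, 99.8, 100.2, 100.0, 99.7, 100.3, 99.9]
lcl, center, ucl = individuals_chart_limits(trend)
```

A trend report of this kind, with the number of batches and points outside the limits, is the sort of low-cost summary the speaker suggests firms could share.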
I
guess I have to point out that FDA actually was sort of in the forefront in
using this kind of scheme and that ten years ago we had the Morris project
which had a CTD for chemistry, which we worked on with several other regulatory
agencies, and I think that was a thought that we had at that time, that if you
used this kind of XML model it really would promote the reuse of information
and minimize the resources and redrafting what essentially was the same
thing. That is all the comments that I
have.
DR.
BOEHLERT: Any questions or
comments? First Dr. Gold and then Dr.
Layloff.
DR.
GOLD: Dr. Schmuff, you mentioned that
you had difficulty with the Q7A definition of critical. You did give the definition and, to my
memory, it is correct. How would you
modify that? The definition of critical
is a very important aspect of what we are talking about today.
DR.
SCHMUFF: Well, I guess in my reading of
it, I mean, critical says that--I will put it this way, it says that if it doesn't affect the drug substance specification it is not critical. At least the model that I still have in mind
is that product quality is built on specifications and GMPs and what happens
along the way and specifications don't cover all of the aspects of drug product
quality. So, in that same way there
should be critical elements that are not covered by specifications.
DR.
GOLD: So, you are saying that
"fitness of use" involves more than the specifications that we have
currently in our files?
DR.
SCHMUFF: Well, I would say it involves
some aspects of the current model, which are GMPs and product
specifications. I mean, that is the
model for drug product quality. If drug
substance specifications were the only story, then GMPs would not be important
for drug substance and I think most people agree they would be important. So, there must be something related to
criticality that relates also to these GMP aspects or attributes that simply
aren't tested.
DR.
GOLD: Aren't the GMPs a surrogate or an
examination for lack of quality? That
is, lack of adherence to GMP implies the product may be adulterated and the
GMPs say it is adulterated if you don't adhere to GMP. But if you don't adhere to GMP, you can still
make a perfectly good product under various circumstances. So, I am not clear why you are involving GMPs
in this issue.
DR.
SCHMUFF: Yes, I mean, this is just my
personal view on this but if you can make a product that meets the
specification, and the current model is if you have a non-satisfactory GMP
inspection, then we don't approve the product without a GMP inspection. So, it seems to me that there must be
something about criticality, there must be some critical aspects built into the
GMP inspection.
DR.
GOLD: Well, if there are I would like to
hear about them. But let me end this
conversation. I think I now understand
what your viewpoint is.
DR.
LAYLOFF: I think I share some of Dan's
hang-ups. On a drug substance, is there
a dimension other than the characterization of the drug substance itself? In other words, are there process steps that
are critical to the drug substance? If
you take an ICH model and you say that you have to identify or qualify every
impurity over a tenth percent you define the product quality around those
analytical parameters rather than critical steps in the process of obtaining
it. The question is are there critical
steps in the process apart from those that you find out analytically?
DR.
SCHMUFF: Well, I would say that, yes,
there probably are some process control issues related to yield. I still believe there are some GMP aspects
that are important in determining the quality of the drug substance. I did say that I thought that end-product
testing took care of most of the drug substance quality issues, but I don't
think it takes care of all of them. I
think otherwise we wouldn't have had this big effort aimed at developing a Q7A
and having an MRA related to API inspections.
DR.
BOEHLERT: I think Ajaz wants to make a
comment.
DR.
HUSSAIN: Yes, I think there are several
aspects to this discussion. One is the argument over whether specifications tell the whole story, and the argument against is that specifications often do not tell the whole story because of a number of other elements that go beyond them. For example, one is the probabilistic
aspect that Janet talked about. I think
to be a representative sample for your decision-making you really have to
approach it from a control perspective, understanding the process and bringing
that to the forefront from that aspect because the fundamental basis of GMP is
that quality cannot be tested into a product; it has to be built in. Just reliance on a set of specifications
often is insufficient from that argument.
Also,
I think the other argument that I would like to sort of present is that
specifications are a test method, an attribute of interest and then acceptance criteria. The test method is related to a given process and you often cannot look at it in isolation. So, I think you have to approach it from that angle.
DR.
BOEHLERT: Dan?
DR.
GOLD: Well, I still have a problem,
Ajaz. The GMPs are inferential in
determining product quality. Now, I
agree with you that if the batch is not uniform you may take a portion of the
batch for your sample and measure quality that is good and, yet, there are
parts of the batch that are not good.
But that is a matter of the processing methodology which is what you
presumably control when you approve the application. Now, it may well be that we are not using all
of our knowledge in approving applications and making certain that the
procedures we use for manufacturing give us quality through the batch. Isn't that why we started the validation
activities, to show that the process was robust and the batch was uniform, and
the sampling that we do is truly representative of the entire batch?
DR.
HUSSAIN: Correct, and I think that goes
to the heart of what I think Norman was getting at. If we have uncertainty about what the critical quality attributes are, what are you validating against? So, that is the discussion.
DR.
BOEHLERT: One more brief comment and
then we will take a break. I don't want
to put any pressure on you, Tom!
[Laughter]
DR.
LAYLOFF: There is an interesting
example, sugar, sucrose. We have beet
sugar and we have cane sugar that are different processes, completely different
processes. Yet, if you look at the NF or the Food Chemicals Codex, whatever the process, we look at sucrose as a chemical entity. I think process is very critical for ill-defined or non-homogeneous materials. If it is a unique homogeneous material, then the end-product test actually does define it, I think. It is the critical aspect. Certainly, in the case of sugar that is true.
If
you want to go further than that, then you can talk about a consumer view and
then we would say sweetness, and then we would say a high fructose corn syrup
is a sweetener also and we have then a different behavioral problem.
DR.
BOEHLERT: Ajaz, very brief.
DR.
HUSSAIN: Tom, under USP tests they may be the same, but when you come to processing, with different physical attributes they won't process the same way, and that is the point I wanted to make.
DR.
SCHMUFF: If I could just make one point
about the GMPs for APIs, if a firm used a reactor that was previously used for
a toxic pesticide, and we have seen this case and, of course, the pesticide
residue is not going to be tested into the drug substance, then that is
something that clearly cannot be picked up on the review side and can only be
picked up by GMPs, but I would defer to my field colleagues to further define
the importance of GMPs for APIs.
DR.
BOEHLERT: Thank you. This was a very interesting discussion. It sounds like we could go on for quite some
time. We will take a 15-minute break and
reconvene at 10:50. Thank you.
[Brief
recess]
DR.
BOEHLERT: I would like to get started
again. Our next speaker is Gerry
Migliaccio.
Proposals for Regulatory Assessment of
Quality by Design
Industry, PhRMA
MR.
MIGLIACCIO: Thank you. What I would like to try to do is advance
this discussion of quality by design and try to dig into a bit more detail and
really talk about using manufacturing science and risk management principles to
achieve quality by design.
I
am not going to talk about specifications.
The specification is what is developed during the NDA process for us, and it is what we need to achieve in whatever we design. So, acknowledging the limitations in setting
those specs, as Janet pointed out, we certainly support a science-based
specification development process and, hopefully, we will achieve that in the
future but the specification is what we need to design to at the present
time. So, I won't be talking about the
design specifications.
What
we are talking about is designing quality into the pharmaceutical manufacturing process and, at the same time, encouraging innovation and encouraging
flexibility in the associated regulatory processes. So, those are the overall objectives, as we
see it, of quality by design.
A
couple of key definitions, and this is a classical definition of risk just
applied to manufacturing processes, which is that it is the probability of a
manufacturing event occurring and having an impact on fitness for use, safety
and efficacy, factored by the potential severity of that impact.
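The classical risk definition quoted here, the probability of a manufacturing event factored by the severity of its impact, can be sketched numerically. The 1-to-5 scales, function name, and example values below are hypothetical illustrations, not from the meeting.

```python
# Minimal sketch of the classical risk definition described above:
# risk = probability of a manufacturing event x severity of its impact.
# The 1-5 scales and the example events are hypothetical.

def risk_score(probability: int, severity: int) -> int:
    """Return a simple risk priority number on 1-5 scales."""
    assert 1 <= probability <= 5, "probability must be on a 1-5 scale"
    assert 1 <= severity <= 5, "severity must be on a 1-5 scale"
    return probability * severity

# Example: a rare (1) but potentially fatal (5) cross-contamination event
# scores the same as a frequent (5) purely cosmetic defect (1).
print(risk_score(1, 5))  # 5
print(risk_score(5, 1))  # 5
```

On this simple multiplicative model, mitigation can work on either factor: dedicated facilities drive the probability term down even when the severity term stays high.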
My
definition of manufacturing science is slightly different than G.K.'s. It starts with a body of knowledge but then
it gets into some specifics about that body of knowledge that, hopefully, will
lead to the definition that G.K. uses.
But it is getting into the critical to qualities and the capabilities of
the process, technologies used and, importantly, and this pertains to the last
discussion before the break, the quality systems infrastructure. Because the specification doesn't define everything, the quality systems infrastructure is critical to ensure that those things which cannot be measured, like cross-contamination in many cases, are being addressed properly.
This
is slightly modified from the last meeting when I presented to you but it is
the correlation or the conceptual correlation.
That is, as manufacturing science, as that body of knowledge, increases, the risk associated with the product or process, the risk of that event occurring, decreases. Then, what we are
advocating is a tiered regulatory approach, certainly in the post-approval
change management arena, a tiered regulatory approach that as manufacturing
science, as that body of knowledge goes up the ability to make changes is more
streamlined.
Now,
the key issue is how do we get a product or process on that manufacturing
science curve? How do we quantitate
it? Where we think we need to go, from a PhRMA perspective, is to move into developing a quantitative measure, developing
a method to place a specific product on that curve.
Now,
this is going to be somewhat repetitive of what Norman just talked about, but
that body of knowledge--getting into some more specifics, what are we talking
about? What are we talking about having,
developing, sharing with the FDA? On the
API, certainly the critical attributes, both physical and chemical, and
compatibility with excipients and, obviously in a combination product,
compatibility between APIs. Excipients,
the critical attributes associated with excipients.
Drug
product formulation, what is the rationale for the dosage form that we are
using? Why did we decide on a tablet,
capsule, liquid, whatever? The
formulation development. And, the key
physicochemical attributes and the relationship of those attributes to the
finished product quality or the surrogate, as Janet is talking about, for
quality and, of course, performance testing.
In
a drug product manufacturing process, what are the critical to quality
steps? What are the manufacturing
technologies used for those critical to quality steps? What are the critical to quality process
parameters? Importantly, what is the
relationship of those parameters to product quality? What process control technologies are used
for the critical to quality parameters?
And, where sterilization is involved, aseptic manufacturing, terminal
sterilization, etc., what method are you using to achieve that?
Then,
the manufacturing facility, what is the quality systems infrastructure? That is generally measured by inspectional performance.
So,
that is a little bit more specific about what we are talking about, this body
of knowledge that should be shared and should be used to determine what level
of manufacturing science a given product is at.
Now,
what we are recommending is that we take that body of knowledge, that
manufacturing science, and we turn it into some metrics. These are potential metrics. Others could come up with a different set but
let's use these for an example. Three
potential metrics, first, process complexity.
Complexity can be determined by the number, the nature of critical to
quality attributes in a process or critical to quality parameters in a process,
but also the inter-relationship of those critical to qualities. So, that can be a measure of complexity.
The
robustness, process robustness: how much tolerance do those critical to qualities have; how much variability can you have in those critical to quality parameters without impacting safety or efficacy? Finally, process capability, a well-established statistical analysis.
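Of the three metrics, process capability is the one with a well-established quantitative convention, indices such as Cpk. As a sketch of what that measurement looks like, the snippet below assumes a roughly normally distributed attribute; the assay data and specification limits are hypothetical.

```python
# Sketch of the conventional process-capability index Cpk for a single
# critical-to-quality attribute. The batch data and the 95.0-105.0
# specification limits are hypothetical illustrations.
import statistics

def cpk(values, lsl, usl):
    """Cpk = min(USL - mean, mean - LSL) / (3 * sigma)."""
    mean = statistics.mean(values)
    sigma = statistics.stdev(values)  # sample standard deviation
    return min(usl - mean, mean - lsl) / (3 * sigma)

# Hypothetical assay results (% of label claim) from one batch.
batch = [99.8, 100.1, 100.4, 99.6, 100.2, 99.9, 100.3, 100.0]
print(round(cpk(batch, 95.0, 105.0), 2))  # 6.2
```

A Cpk well above the commonly cited 1.33 benchmark, as here, indicates a process whose variability sits far inside its specification limits, i.e., lower risk on this metric.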
So,
from that body of knowledge we could convert that into three metrics, or more,
where we have a measure of complexity, of robustness, of capability of a given
process.
What
do we do with those metrics? Well, first
of all, there are some intuitive correlations.
You know, lower complexity should mean lower risk. Higher robustness should mean lower risk and
higher capability should mean lower risk.
Then
we can mitigate risk. Once we know where
we are on the curve and whether we have a higher or lower risk product we can
take steps to mitigate risk. For higher
risk products and processes we have talked about advanced technologies over the
last couple of years, and process control technologies to mitigate risk. But for inherently low risk products it
really is important to point out, and I think Janet showed that in one of her
curves, that more technology, more control doesn't necessarily lead to any
lowering of risk or any real benefit.
Examples
of risk mitigation--process automation, eliminate or at least reduce the
potential of human error. Isolators and
closed systems for aseptic manufacturing.
Dedicated equipment and closed systems for highly potent compounds. I mean, a perfect example of the risk
equation is the potential severity of penicillin cross-contamination is very
high. I mean, it could be fatal. So, the severity is very high. So, we use dedicated facilities to reduce the
probability to zero that you will have penicillin cross-contamination. Okay?
So, that is a risk mitigation strategy.
Process
analytical technology, learning more about the process, monitoring the process,
real time, real time feedback--that is risk mitigation. And, vision systems on packaging lines,
ensuring that the cavity has a tablet in it; that the label is there, the lot
number and expiration date are all there.
Those are examples of risk mitigation.
So,
getting back to the quantitative method, we believe that an algorithm can be
developed to assign some manufacturing science factor to get us on that curve,
and it is a relationship of the complexity, the robustness, the capability and
the risk mitigation strategies.
Okay? Again, this needs to be
scoped further. Maybe there are other
metrics that you would use besides these four, but this is the direction we
believe we need to move in.
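One way to picture the algorithm being proposed is a single score combining the four factors just named. The transcript leaves the actual algorithm to the proposed working groups; the equal weights, 0-to-1 scales, and function name below are invented purely for illustration.

```python
# Hypothetical sketch of a "manufacturing science factor" combining the
# four factors named above. The equal weights and normalized 0-1 inputs
# are invented; the real algorithm is left to the proposed working groups.

def manufacturing_science_factor(complexity, robustness,
                                 capability, mitigation):
    """Each input is normalized to [0, 1]. Higher output means more
    manufacturing science and lower residual risk; since lower
    complexity is better, that term is inverted."""
    for x in (complexity, robustness, capability, mitigation):
        assert 0.0 <= x <= 1.0, "inputs must be normalized to [0, 1]"
    return (0.25 * (1.0 - complexity) + 0.25 * robustness
            + 0.25 * capability + 0.25 * mitigation)

# A simple, robust, capable process with strong mitigation scores high.
print(round(manufacturing_science_factor(0.2, 0.9, 0.8, 0.7), 2))  # 0.8
```

The interdependence Dr. Singpurwalla raises later (e.g., robustness and capability being highly correlated) is exactly what a simple weighted sum like this ignores, which is why the weighting would need real scoping.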
That
algorithm will get us onto this curve and, hopefully, achieve the tiered
regulatory approach that we are discussing.
When
you look at this again there are three elements here. There is the manufacturing science
element. There is the risk management
element and then there is the regulatory process element. So, what we are really recommending to
operationalize this is we believe, through the successes that we have seen
through PQRI recently, that we should have three working groups where academia,
FDA and the industry come to the table with a focus on manufacturing science to
develop these metrics with a focus on risk management, on how to model using
this information to truly classify the risk associated with a product, and then
a regulatory process focus which is really related to changes, process change,
and also is related to the inspectional process.
For
the sake of time since there are several speakers after me, I am not going to
go through what I believe the focal areas are for each of these three. Clearly, we need to scope this out further before we send these working groups off. If we can do this, the benefits of quality by
design I think are manifold. G.K. talked
about some of them; Janet talked about some of them.
Certainly,
enhanced quality assurance will encourage the sharing of knowledge between the
industry and FDA. It certainly will
promote this mechanistic view, more process understanding and a mechanistic
view of our products. It is going to
promote the effective use, and this is one of the underlying drivers here--the
effective use of both the FDA and industry resources on what is important. Certainly, it is going to facilitate
innovation and continuous improvement.
So,
our drive here is really to encourage FDA and the industry to support the
establishment of PQRI working groups to operationalize quality by design and,
again, to bring it to an operational stage we believe we need to start to
quantitate what we are talking about, bring it from the conceptual to the
quantitative. Thank you.
DR.
BOEHLERT: Thank you, Gerry. Are there any questions? Yes, Nozer?
DR.
SINGPURWALLA: I have a few
comments. On your first slide you had
objectives, the very first slide. I am
sure you agree with me that these objectives are conflicting.
MR.
MIGLIACCIO: I don't.
DR.
SINGPURWALLA: You can't get all three.
MR.
MIGLIACCIO: Why?
DR.
SINGPURWALLA: Well, we can have a long
discussion on that, but you can't have your cake and eat it. That is why.
All right?
MR.
MIGLIACCIO: I believe in the genius of the "and."
DR.
SINGPURWALLA: I don't agree with
you. Because you have these conflicting
objectives, you are going to strike a compromise and I don't know where. If you agree--of course, you don't--
[Laughter]
--but
if you agree that these objectives are conflicting there is going to be some
form of a compromise and I don't know where it is going to be.
Let
me go to something else. I said I am
just going to make some comments. The
second comment I want to make pertains to this algorithm that you would like to
develop, page eight, first slide. That
sounds like a good idea, except I don't know how to do it. Part of my difficulty is that those four
factors that you have put up perhaps are interdependent. Therefore, doing one is tantamount to
eliminating the other. For example,
robustness and capability may be very highly correlated.
MR.
MIGLIACCIO: They are.
DR.
SINGPURWALLA: All right. So, how are you going to incorporate the
interdependency? And, my major concern
is how do you define complexity? A lot
of people have struggled with the definition of complexity and there does not
seem to be a satisfactory definition, other than when we talk socially--
MR.
MIGLIACCIO: Yes.
DR.
SINGPURWALLA: --about what we mean by
complex.
MR.
MIGLIACCIO: If you want an algorithm, it
is going to be very subjective and that is why we believe we need to have the
right people, the right scientists in the room to discuss and define complexity.
DR.
SINGPURWALLA: Do you think this can be
done, not complexity but do you think this algorithm can be done?
MR.
MIGLIACCIO: Yes, I do.
DR.
SINGPURWALLA: You do?
MR.
MIGLIACCIO: Yes.
DR.
SINGPURWALLA: Well, thank you. Good luck!
DR.
BOEHLERT: Thank you. Our next speaker is Edmund Fry.
Industry, GPhA
MR.
FRY: Thanks. Good morning. It is a pleasure to be able to speak here and to meet with the subcommittee today. I am speaking on behalf of GPhA. My comments represent my personal
understanding of the general views and concerns of the generic pharmaceutical
industry and don't necessarily represent the views of all member companies.
What
I am going to try to do today is add some practical aspects to the
discussion and raise some issues and suggestions. GPhA members exist to make affordable drug
therapies available to all. Although our
companies are generally smaller than the brand-name companies, we believe it is
completely appropriate that the same regulatory requirements apply to all
companies. Recognizing the range of
companies that will be affected, we have confidence that FDA will provide
needed flexibility in its requirements and guidance arising from this
initiative. The bottom line is that we
fully support the FDA initiative.
I
have been involved with the implementation of GMPs for a long time, both inside
the agency and in my subsequent career, and the slogan "you can't test
quality in" has been the justification for good manufacturing practice for
as long as I can remember. What is new
about the current initiative is that it seems to recognize that quality by
design goes beyond the traditional manufacturing and quality control unit
organizational silos. It works if it
becomes the company's culture. It is a
way of focusing on factors that are important to the customer in assuring that
products and processes address these factors.
To me, it is a much more rational approach than the traditional and
sometimes arbitrary approach to good manufacturing practice.
Compared
to the modern and widely known quality approaches, there are some limitations
on the pharmaceutical industry. For
example, the methods of Taguchi and others encourage continuous improvement in
the product. As has been stated, design
changes during manufacture can result in the last product produced being
different from the first product. When
it comes to pharmaceuticals you have to be very careful of improvements. You can improve stability, purity, tightness
of specifications, etc., but not necessarily the way the product works in the
patient. Of all the variables that
physicians face when treating patients, one variable they probably don't want
is batch-to-batch differences in the performance of the drug products.
In
the generic industry there are additional constraints. The innovator reference drug is a fixed
target which the generic manufacturer must not improve on. The biological performance of the reference
drug is a target to be duplicated even if not optimal, and the generic
manufacturer has no access to information about how the reference product was
developed. In the case of certain dosage
forms even the inactive ingredients must be the same as the reference product.
The
successful implementation of quality by design is going to require the
regulatory environment to change.
Quality by design requires adequate resources, both in number and
quality, and currently there is little guidance in that area.
In
quality by design self-assessments play a key role. However, they have little regulatory
significance. They are suspected of being self-serving and, therefore, not worthy of much attention.
Continuous
analysis and improvement is another key.
Although the product itself is amenable to improvement only in some
areas, continuous analysis is important to understanding the process. Currently the focus is on annual product
reviews instead of continuous analysis.
Hand-in-hand
with continuous analysis are good change practices. As we all know, there are formidable barriers
to change. We are a conservative
industry, having learned well that we are pervasively regulated by a
conservative agency. We would like to
see the barriers loosen, including a reorientation of the emphasis on
enforcement.
The
last bullet sums up the concept. Focus
on what is important. Some things are
more important than others. Therefore,
we should acknowledge that there may be some things that are simply not
important and we can let go of them. An
example, the GMP section that requires recording the lot number of every single
bottle of expired or near-expired product that is returned by customers to our
distribution centers for credit. People
have to be hired to do that work which is of no discernible benefit.
Some
suggested actions--in an effort to be constructive, I have listed some actions
that I think would be welcomed by many in the industry. Most, if not all, of them are already under
way: Give credit for good performance;
continue to reduce unnecessary supplements; develop the pharmaceutical
inspectorate; reward process innovation; eliminate unnecessary testing; and
address some issues with oversight of API manufacturers.
I
would like to expand on these. We
welcome the efforts to reduce the inspection burden for companies that have a
proven record of good compliance. I
believe this concept can be expanded and refined as time goes on. The factors should become transparent so that
companies understand their goals. There
is no point in keeping it secret and generic companies should be rated on the
same basis as larger companies. A system
that would assign greater risk to generic companies than brand-name companies
isn't rational and isn't fair, and it would be inconsistent with FDA's clear
and long-held view that generic drugs meet the same standards as brand-name
drugs.
We
know the agency is focusing a great deal of attention on reducing unnecessary
supplements. I just want to add my
encouragement to a couple of areas, which is new manufacturing plants and
post-approval changes for sterile products.
Launching a new plant, as I have learned, is extremely complicated, with
certain pre-approval requirements appearing to be redundant to work done in the
past and done in the field. As you might imagine, delays in commissioning a new plant can be extremely costly, even though new plants almost always result in improvements in both the process and potentially the product.
In
the case of sterile products, post-approval guidances never materialized, resulting in quite a few pre-approval supplements that would appear to be
unnecessary.
FDA
is doing an excellent thing with the pharmaceutical inspectorate. We encourage further integration of field and
review activities, with more delegation of decisions to the field force.
PAT
is a large area of promise but is not the only area of innovation. Similar encouragement should be given to
other promising areas. One example is
advanced aseptic processing. Until PAT
came along we sometimes felt that FDA was overly skeptical of new
technology. We do not think that new
technologies should necessarily be made mandatory but they do deserve
encouragement.
The other side of the "you can't test quality in" coin is the elimination of arbitrary and unnecessary testing requirements. Here are just a couple of examples: sterility
testing where experts have for years pointed out its lack of usefulness, and
blend uniformity testing in which we look forward to the FDA guidance that we
understand is coming, guidance based on input from PQRI.
Our
industry, large and small, is dependent on overseas API suppliers. We are concerned that regulatory scrutiny may
not be on a par with that of domestic companies for a variety of reasons. Variation in the physical quality of APIs is a
practical concern in the industry and is an area where FDA could help.
In
the software field there is an industry program, operated by PDA, that pools qualified audit data for use by all pharmaceutical companies. FDA support for such a program for APIs would
help increase the level of information available to all, agency and industry,
and could potentially increase the quality of products from API suppliers.
The
generic industry is very interested in participating in such vital activities
as ICH, although we are concerned that we have not been able to participate to
such a degree as the brand-name companies.
We simply don't have the same level of resources. We do, however, need to be at the table. As ICH moves from drug development
initiatives into GMP and other areas that affect all companies equally, it is
very important that the generic industry be a full partner.
Of
all the adverse factors that a patient faces from drug therapy, manufacturing
deficiencies are fortunately the least of their worries. He or she may face lack of efficacy, adverse
reaction, misdiagnosis or dispensing error but it is very rare for a
manufacturing deficiency to cause a discernible effect. The member companies of the associations
represented here, along with the agency, can be proud of that. However, we understand that you can't rest,
ever rest, when it comes to quality.
Quality by design makes excellent sense.
We support it fully and GPhA welcomes the opportunity to work with
FDA. Thank you very much.
DR.
BOEHLERT: Questions or comments?
DR.
SHEK: Can I just comment on both of these, this presentation as well as Gerry's presentation? The comment on the objectives: as a matter of fact, those three objectives that Gerry presented, I believe, are synergistic. They are not contradictory to each other, because without having the flexibility and without having innovation, I think it would be very difficult, you know, to design quality into a product because there are restrictions there.
Another
point which was very interesting, talking about designing quality into the
product, it looks like there are two phases. One phase is when, let's say, the innovator
comes out the first time with a product and you want to make sure that it is
efficacious and it goes into clinical testing.
Once you have that, then you have the other part of the quality to
ensure consistent manufacturing day in and day out.
So,
I think as we go into the discussion later on, we have to somehow separate it
otherwise I believe we are going to confuse ourselves because there are really
two parts of quality, as I see it here, and I think your presentation to me
exemplifies it because you have the original product and now you are going to a
situation where the same product is going to be manufactured by somebody
else. So, what kind of design in quality
do you do at that stage relative to the first stage?
MR.
FRY: It is a somewhat different
challenge. Thank you for the comment,
Dr. Shek.
DR.
BOEHLERT: Other questions or
comments? No? Thank you, Edmund. We have now heard from two industry
associations, PhRMA and GPhA. Our next
presenters represent academia. Ken
Morris?
Academic
DR.
MORRIS: Thanks, Judy, and thanks for the
invitation. I am here as the blue collar
representative I think because what I am going to talk about is going to be
more detailed with respect to an overview of tools that are currently
available, and sort of keeping in concert with the idea that, although the specific regulations may kick in at different points during the development process, GMPs start from day zero or minus one.
To
that end, I am going to focus more on the two first primary goals in the cGMPs
for the 21st century document, which are the risk-based orientation and the
science-based policies. I know that is
largely the focus of the committee as well.
Obviously, the rest of them are important but a little beyond the scope
of what I will discuss.
The
first question I think you have to ask yourself, as we all have been doing I
think, is what is new. The idea that you
use good science to develop compounds is not new. We are all presumably doing that--let's be as
generous as possible, we are all presumably doing that now to the extent that
it is available and to the extent that it is reasonable. But there are some technologies, some
techniques and models that are at least advanced, if not new.
This
certainly includes computers and the advent of the really high speed
computation that allows the implementation of things that may have been known
for a hundred years but have never been really fully implemented. I think G.K. Raju's work on bench-marking the
pharmaceutical industry certainly shows this.
Sensors
have been developing at a frightening rate, which is to our advantage. Chemometrics, which was once the sort of
domain of a few obscure Scandinavians, has now become the sort of mainstay of our curriculum as well as a tool.
Phenomenological and fundamental models still have to rule the day. The physics has to dominate if it is
physical; chemistry if it is chemical; and biology if it is other.
The
last thing that is new, and I think this is probably the most important thing
and is the reason why we are all here, is that the mutual FDA, industry and
academic recognition of the technical way forward in the application of the
state of the science is unique in my experience, and even in the experience of those whose experience is somewhat longer than my own.
So,
let's sort of use Janet's analogy and look backwards. Actually, these are data that Ajaz presented
at Arden House but I think some of these are actually from the University of
Maryland.
DR.
HUSSAIN: No.
DR.
MORRIS: No? No, these are not. In any case, what we have here is sort of the
example or an example of how formulation and development variables can impact
on dosage form performance, which is what we are after. This now is trying to bridge that gap, if you will, that Janet described between the development process and actual therapeutic activity, via drug plasma concentration.
So,
if we look at the sort of traditional development timeline, starting with early development, which bridges to discovery, through pre-formulation in the product development and drug substance synthesis, trying to come up with a commercial
pathway, and formulation, design and development, you see that fairly quickly
you can't really separate that from what comes next, nor should you, in this
sort of historical disconnect that is in part I think disappearing, but this
historical disconnect between analytical and formulation, between API and drug
product is now I think something we can no longer tolerate as a community.
The
same is true upstream from that because what happens is that these minor
changes that you think are occurring early on, that you are not really sure are
important, turn out to make batches fail in the final analysis.
With
that, what I would like to do is sort of go through each of those three stages
and point out a couple of important properties and the theory and method or the
variables and methods that are used to address them, with the underlying theme,
as I said at the outset, that the tools for many of the things that we want to
do already exist, in an effort to sort of take the sting out of what looks like an entirely new paradigm but really is, in many respects, just the proper application and the more modern implementation of existing knowledge.
So,
in the initial drug substance characterization phase issues of purity,
solubility/dissolution, partitioning, stability, solid state shape and form and
hygroscopicity are all, of course, important.
Each of these has its related theory and a method for detection that is based on that theory. In this case,
purity, of course, the paper chemistry, if you will, coupled with the HPLC was
in itself an innovation twenty years ago providing a very robust way of looking
at purity.
Solubility
and dissolution still, no matter what changes, is based on the thermodynamics,
the kinetics and that, irrespective of regulation, won't change.
Similarly,
with partitioning you are stuck with thermodynamics and there are various ways
of monitoring that are all excellent, or many that are excellent, I should say.
Stability,
this is now solid state stability as well as solution stability, relies both on chemistry and its associated HPLC but also on solid state methods, which are more
on the edge of what we understand very well but in keeping with the adage that
being able to detect changes may be the first step in being able to predict
changes. We have nice methods for
detecting them; predicting is a little further down the road.
Solid
state form, crystallography, solid state physics are the sort of bulwarks that
underlie that issue, and screening, prediction and control are the tools that
we use.
Hygroscopicity
itself, although almost a non-defined word--the extent to which something will
interact with water, if you will, can be described in terms of classical
surface energetics, and measured by automated systems.
I
will try to show you quick examples of a couple of these. I want to refer quickly to Peter York's paper
from '94 where it was pretty clear to Peter, even long before this, that raw
materials--here he has "new solutions to old problems"--are sort of
the fundamental non-controllable variable for many formulations and
processes. The statement I make in class
is that formulations and processes are only as robust as their ability to accommodate
changes in the raw materials. If you
have not taken into account what is going to happen over the breadth of
possible change in the raw materials, you can't possibly formulate around what
you don't know. The techniques that are
trying to eliminate the differences through mechanical or solution-based
approaches have an effect but they are certainly not complete.
Here
is just an example of screening and controlling forms. There are many ways to do this. They are certainly accessible to all folks
these days where you can relate the frequency of a particular form. We chose colored polymorphs because you
didn't have to analyze them very carefully; you could just look at them. Plus, Steve likes them. But, in any case, what we have here is an
example of the frequency distribution of forms as a function of the supersaturation ratio at
which certain compounds will come out of solution. This is fairly well described by traditional
nucleation theory and is, therefore, certainly something that can be placed
under the column of being able to be handled.
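As a rough illustration of the traditional nucleation theory invoked here, the rate expression can be sketched as below; the constants A and B are illustrative stand-ins (B lumps interfacial energy, molar volume and temperature terms), not values from any real system or from the talk.

```python
import math

def nucleation_rate(S, A=1e10, B=50.0):
    """Classical nucleation theory rate: J = A * exp(-B / ln(S)**2).

    S is the supersaturation ratio; A and B are illustrative lumped
    constants, not values for any particular compound.
    """
    if S <= 1.0:
        return 0.0  # no thermodynamic driving force below saturation
    return A * math.exp(-B / math.log(S) ** 2)

# The rate climbs extremely steeply with supersaturation, which is why
# the form that crystallizes can shift with the ratio at which the
# compound comes out of solution.
for S in (1.5, 2.0, 3.0, 5.0):
    print(f"S = {S}: J = {nucleation_rate(S):.3e}")
```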
Advances
in hygroscopicity--even though, as we have said, you can describe the
energetics of moisture absorption, the advent now of new instruments--this one
is a 10- or 11-point simultaneous multi-sample instrument that will measure the
same material over a broad range. So, if
you really want to know what the variation is in your material, instead of
taking a sample and couching everything in terms of the results from one sample,
you put a dozen samples on from the same lot and look at your innate variation
in what it is that you are going to do in the future with modeling or with
formulating.
A
simple method for looking at crystal shape is a handy tool to be had. You can certainly do this microscopically and
there are other ways to do it. If you do
it through looking at powder diffraction and x-ray diffraction in combination
you can also get crystallographic information.
So, we have made, as a community, fairly large advances in terms of at
least understanding what the morphologies of these crystals are.
The
advantage to having a technique that gives you this sort of information is that
you not only get the information about the shape itself but you get
representations of the moieties, the chemical moieties that are going to be
exposed during processing. In addition,
the crystal structures give you, as they have for many years, a lot of
information with respect to how they will respond to mechanical stress. How something responds to mechanical stress
is the problem that you don't think about in the preformulation stage. You don't think about it until your tablets
start capping but you really have to start thinking about it earlier. I think, sort of in line with what Gerry was
talking about with algorithms, I will say something at the end.
For
formulation design we have dosage form selection, of course, which we know is a
combination of medical and processibility issues, excipient selections and
stability to processing, as well as mechanical properties. A lot of these elements, as Norman said, can
be placed in different sections of the development timeline. I have sort of placed them where I think they
have the most impact, and finally, initial processing during this period.
I
think here we have sort of a mixed bag.
We have pretty decent ways for looking at mechanical properties. Certainly, Hancock and Houston initially, of
course, looked at those and found very nice correlations between what goes on
in small samples in the laboratory and what happens ultimately during
processing.
Initial
processing includes some process models that we will talk about in a moment,
and process analytical technologies in order to monitor these processes, much
like we discussed earlier, in terms of implementing things that have been known
for a long time but not usable due to the lack of technology.
With
excipients, on the other hand, it is not so hard maybe to choose an excipient
for its functionality but it may be very difficult to get anything that really
expresses or manifests real-time excipient interaction potential
liability. That is still an open
question.
Powder
flow has really made a lot of advances in recent years. Here are a few flavors of powder flow
instrumentation. The important part here
is that each of these has an underlying theory that allows you to extract the
information that is specific to your use.
So, that is the bottom line here, that in a sense it matters less what
instrument you use than that you know what the data that it is giving you say.
For
compaction and mechanical properties--I won't go through the derivation of the
Heckel analysis. I think a lot of the
people here know it as well. But,
certainly, there is a lot of information to be had in terms of compactibility
of material from Heckel analysis. This
is a routine measurement. You can get
these data off any instrument these days.
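A sketch of how a Heckel analysis runs on compaction data: fit ln(1/(1-D)) against pressure P, and the reciprocal of the slope gives the mean yield pressure. The pressure/density numbers below are invented purely for illustration.

```python
import numpy as np

# Illustrative compaction data: applied pressure (MPa) and the
# relative density D of the compact at each pressure.
pressure = np.array([25.0, 50.0, 75.0, 100.0, 150.0, 200.0])
rel_density = np.array([0.70, 0.78, 0.83, 0.87, 0.91, 0.94])

# Heckel analysis: ln(1/(1-D)) = K*P + A; the slope K gives the mean
# yield pressure Py = 1/K, a routine measure of compactibility.
y = np.log(1.0 / (1.0 - rel_density))
K, A = np.polyfit(pressure, y, 1)
Py = 1.0 / K

print(f"slope K = {K:.4f} 1/MPa, intercept A = {A:.3f}")
print(f"mean yield pressure Py = {Py:.1f} MPa")
```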
Shape
and flow, we talked earlier about determining shape and it sounds more or less
like an academic exercise when you do it early on, but when you look at the real
impact of shape on everything from capping, edge erosion and flow, you see that
there can be quite significant differences.
Here is an example that we have from Bristol Myers Squibb that shows the
impact of the shape on the mass flow rate where the shape differences were
detected by x-ray. But that is less
important than the fact that when you are above this threshold of the responses
you have absolutely no flow. This is faithfully
preserved when you go on to mixtures and causes essentially complete failure.
Then
we come to the processing stage and process analytical technology. Here, ironically, a lot of the advances that
we have been using to elucidate very fundamental questions were championed by
the processing and technical operations in manufacturing sectors, which is part
of the reason that this committee exists.
But if you look at this, and I won't read the list, if you go through
essentially every unit operation, there is some modeling aspect to be applied
that will at least give you the beginnings of understanding of the process and
in many cases will give you the ability to control the process given the proper
eyeball.
If
you start down the road, particle size reduction models have been around for
years that relate the particle size reduction to the energy put into a system.
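One familiar instance of such an energy model is Bond's law; the talk does not name a specific model, and the parameter values here are purely illustrative.

```python
def bond_energy(work_index, feed_80, product_80):
    """Bond's law, one classical model relating size reduction to the
    energy put into the system: E = 10 * Wi * (1/sqrt(P80) - 1/sqrt(F80)).

    work_index (Wi) in kWh/t; feed_80 and product_80 are the 80%-passing
    sizes in micrometres. All values here are illustrative.
    """
    return 10.0 * work_index * (product_80 ** -0.5 - feed_80 ** -0.5)

# Milling a feed from 500 um down to 100 um, then to 50 um: the energy
# required per tonne grows sharply as the target size falls.
print(f"{bond_energy(12.0, 500.0, 100.0):.2f} kWh/t to reach 100 um")
print(f"{bond_energy(12.0, 500.0, 50.0):.2f} kWh/t to reach 50 um")
```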
Powder
charging, which is perhaps one of the most elusive characteristics to be
tracked, you can see that although quantitatively there are going to be issues
for a long time, there are mechanisms to measure this and anticipate charging
problems a priori, and here a priori means "a blender." I don't know exactly what the term in Latin
is for "before you go into the blender," but before you go into the blender.
Blending
itself, I think we have seen a fair advance in modeling of blending. This particular example is phenomenological
modeling that we developed but there are really dozens of models, reaching from
discrete element methods through very applied models but, clearly, with the
proper modeling and being able to monitor real time, you can scale from
relatively small to normal batch sizes using these models by the use, in this
case, of scale-up coefficients and in other models other variables that are
important.
Granulation,
here is a quick slide of one of the students that Garnet Peck and I advise,
showing the impact of the force of breaking of a ribbon out of a roller
compactor versus the roll speed, and showing the response for the NIR versus
the same force showing that you can now, real time, monitor the ribbon and then
predict the force on breaking, which is mildly interesting. What is more interesting is that you can also
predict the post-milling particle size distribution. So, there is no real disconnect between the
measurement and what you are ultimately going to have. Now, compatibility is down the road.
Fluid
bed granulation is well modeled and monitored using NIR, showing that you
can simultaneously predict the size increase as well as the moisture content so
that you can stop when you get to an optimal condition. This is again a real-time process.
Wet
granulation at high shear is one of the most problematic in terms of
determining endpoint and there are a lot of people working on this. The point here is that by understanding the
basic phenomenon of wetting and over-wetting and the characteristics of the
water molecules themselves you can at least spectroscopically make an attempt
at following, and in the first stages now--this is a little bit blinded because
it is not yet published, but it shows that during the wet massing stage, using
signals treated from the NIR, you can control to a particle size now post
drying, a particle size mean and, in fact, particle size distribution.
Drying
was the lowest-hanging fruit for PAT but it is certainly one of the most
ubiquitous processes we deal with, very well handled by the same sorts of
technologies that we have already been discussing, which will include the heat
and mass transfer engineering essentially that underlies the process, and can
be modeled a priori, "a dryer" before you start and come up with the
drying cycle that is appropriate for your system.
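A minimal sketch of designing a drying cycle from a model rather than a fixed recipe, using a simple first-order falling-rate expression; all parameter values are illustrative, and real dryers would need the fuller heat- and mass-transfer treatment mentioned above.

```python
import math

# First-order falling-rate drying: M(t) = Me + (M0 - Me) * exp(-k * t).
# M0 = initial moisture, ME = equilibrium moisture (mass fractions),
# K = drying rate constant in 1/min. All parameter values illustrative.
M0, ME, K = 0.30, 0.02, 0.05

def moisture(t):
    """Moisture fraction remaining after t minutes of drying."""
    return ME + (M0 - ME) * math.exp(-K * t)

# Invert the model to choose the cycle time for a target endpoint
# moisture, i.e. come up with the drying cycle before you start.
target = 0.05
t_end = -math.log((target - ME) / (M0 - ME)) / K
print(f"time to reach {target:.0%} moisture: {t_end:.1f} min")
```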
You
can take advantage of that to decrease the cycle time and increase
throughput, while following through the evaporative cooling stages
to protect your product.
Just
an example of monitoring and seeing excursions in drying, you can see that this
will now, in this particular case, correlate to excursions or deviations in
dissolution rate, so a drying monitoring process that results in the ability to
essentially eliminate dissolution testing.
The
last example I have is one that is perhaps more germane--I can't remember who
was talking about this, whether it was Janet, but talking about the idea of the
statistics, the probabilistics I guess, that is, if you take ten tablets out of
a million no statistician can keep a straight face if you tell them that that
is how you are doing content uniformity.
The idea that we can improve our statistics I think is today closer than
it ever was because you can do real-time monitoring in a realistic way. A lot of people are working on this. We have developed just the statistical
justification for the process of real-time monitoring of portions of tablets,
but the point is that the limitations imposed on us by the small statistics
with which we typically deal are in part relievable.
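The small-sample point can be made concrete with a one-line binomial sketch; the defect rate and sample sizes are chosen purely for illustration, not taken from the talk.

```python
def prob_all_pass(n, p):
    """Probability that a random sample of n tablets contains zero
    defectives when the true lot defect rate is p (binomial, k = 0)."""
    return (1.0 - p) ** n

# Ten tablets out of a million: a 1% defect rate goes undetected about
# 90% of the time; larger real-time sample sizes shrink that quickly.
for n in (10, 100, 500):
    print(f"n = {n}: P(no defective seen | p = 1%) = {prob_all_pass(n, 0.01):.3f}")
```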
I
have one more slide, just showing that you can also monitor coating. When I say monitor, in every case I mean
monitor in model because the reason we can monitor is, in part, because we know
what to look for and what eyeball to use.
The second part of that becomes the mathematics that describe the
process. Together this makes a very
powerful set.
If
you look at it individually, these theories and techniques that look independent--it
looks like you are looking at one unit of operation here and one unit of
operation here and one aspect of the API, but together they show a really
concerted effort to describe, I would say, contributions to the overall process
of drug development. What this is saying
is that you really have now, by going at it piecemeal, the tools you need to
link these together into some sort of an algorithmic approach. These are applicable to batch as well as
continuous. Ultimately, the univariate
approach will be replaced by multivariate linkage through chemometrics;
the univariate approach is typically used first to make sure that you
have the right variables.
So,
what does a multivariate approach look like?
Well, there is the multi-block PLS approach that Paul Gimperline is
developing based on MacGregor's work.
What that says is that as you go through the stages, as you go through
pre-formulation into early development, you can link these processes
chemometrically by identifying principal components and doing partial least squares
on top of this to link. If you do this
at each stage, much as Gerry is saying, then when you get to the next stage you
add that to the model and eventually, in "the blue sky" sense, which
is not my forte--in "the blue sky" you eventually link it to the
clinical data. But at the very least
there are active projects to link it at least through the development process.
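A minimal sketch of the principal-component step on synthetic data; the batch/variable setup is hypothetical, and this is ordinary PCA via SVD, not the multi-block PLS approach described, which would then regress these scores against responses from the next stage.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical development data: rows are batches, columns are process
# measurements collected across stages (purely illustrative numbers,
# generated from two underlying factors plus noise).
n_batches, n_vars = 30, 5
latent = rng.normal(size=(n_batches, 2))
loadings = rng.normal(size=(2, n_vars))
X = latent @ loadings + 0.05 * rng.normal(size=(n_batches, n_vars))

# Principal components via SVD of the mean-centred matrix: a few
# components capture the dominant, correlated variation, which is the
# starting point before a PLS step links stages to quality responses.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("variance explained per component:", np.round(explained, 3))
```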
Ultimately,
this gives you the ability to understand how the development variables interact
to influence the final product and the product quality, of course, which is the
goal.
The
business case is that essentially the earlier you start collecting information,
the more you will know and the more comfortable everyone will be, as Norman
said. Given that level of knowledge and
assuming, I would say, facile communications with the agency, you have to be at
the lowest risk as we propose it.
Obviously, if you don't have the data there is nothing to
communicate. There is no value in the
risk; there is no lowering of risk.
On
the other hand, if you do show variability the sooner you know the better. I don't believe, and certainly in my years in
industry it was never communicated to me that if you don't know, it won't hurt
you. That is just not a viable stance;
never has been and, hopefully, never will be.
I
guess the bottom line here is that the companies really right now have most of
the tools in their possession. This
should improve with research but almost all of the companies that I interact
with, and I have to thank CAMP and NSF for letting me present a lot of these
data and Purdue, Michigan, but most of these companies are way ahead in terms
of having the science. It is a question
of implementing it and it is a question of having the internal regulatory
environment conducive to communicating that to the agency. That is all I have.
DR.
BOEHLERT: Thank you, Ken, a lot of good
information presented this morning. Any
questions or comments?
DR.
MORRIS: And they said I had too many
slides!
DR.
BOEHLERT: We are running a little short
on time, I don't know why.
DR.
MORRIS: I have no idea.
DR.
SHEK: Ken, if I may? I think, as you have indicated with regard to
the API, and you showed the examples, the synthetic chemists as
well as the formulation scientists and the process engineers realize that some
attributes of the API--the shape, you know, not just the size--are
important.
DR.
MORRIS: Sure.
DR.
SHEK: As a matter of fact, what you see
is happening is you design your API to fit into your process, and you see it
more and more. My question to you, and I
think you mentioned it, is what do we do--because I am concerned that the next
wall will be basically the other ingredient, the excipients that we have less
control over. In your opinion, what can
be done? You know, you can check so many
variables, so many suppliers of excipients but things are changing with time
and that might affect your process and the quality that you designed in. I don't have a good solution.
DR.
MORRIS: I don't have a good
solution. I think the sort of medium
term approach to that can well be designing the processes using some of the
more multivariate models so that you can build in the variation that you expect
to see. So, if you can work early on to
get the kind of products that are representative of the variation that you
might expect to see, then if you can build that in so that you can formulate
against these models, knowing that this variation may occur, then you have a
shot. It is not too different than, you
know, when you are trying to qualify a vendor and you always ask for multiple
lots. But the problem with that is that
multiple lots to a vendor may often be just subsets of the same batch. So, it is not trivial to do but, to the
extent that you can, if you can get this variation in material, build that in
to a multivariate model after having established the univariate dependence,
then I think that is the best medium term solution. Obviously, granulation and the other
techniques that are used to try to sort of blank out the differences are also
viable, as we have known as a community for years.
DR.
BOEHLERT: Thank you.
DR.
MORRIS: Thank you.
DR.
BOEHLERT: Our next two speakers are
going to talk to the regulatory assessment of quality by design, starting with
Joe Famulare from the GMP perspective.
Regulatory
GMP Perspective
MR.
FAMULARE: Thank you. Good morning.
I am going to try to briefly give a GMP perspective and how that plays
into quality by design. I know we have
had some preceding discussion this morning so, hopefully, we can elaborate on
that somewhat.
I
will start out by looking at the quality system as a whole. Actually, this is a definition we wrote in
the program for our investigators conducting GMP inspections. It is from our compliance program. We basically say that the quality system
assures the overall compliance with GMPs, internal specs; encompasses not only
the quality control unit and its approval duties but all aspects of drug
product defect evaluations, and the various sub-parts of the
GMP. So, it is very broad as we put it
there. Actually, we put that system, as
you will see later, as the center of our inspection program because it is the
basis for many of the things that we are talking about for the quality system.
We
have several presentations internally now going on about quality systems, as
was noted in our cGMP for the 21st century most recent announcement. I think Dr. Woodcock was inspired by the
first speaker we had from the Malcolm Baldrige program in terms of looking at
quality from various aspects, as we announced just several weeks ago.
But
summing it up: say what you do; do what you say; prove it; and improve
it. Of course, one of the earlier
speakers, Ed Fry, even said, well, improving it sometimes gives you somewhat of
a challenge in the pharmaceutical element.
What are all the regulatory inhibitions?
What are all the things about the pharmaceutical product that may affect
how it works on the patient? So, these
are the challenges that we are facing in terms of overall quality systems.
So,
what does building quality in mean? Here
are some suggested proposals, at least from a cGMP perspective and it should
overlap with other perspectives:
Developing a product that meets the patient's needs; identifying and
developing appropriate specifications; then developing a process that can
reliably reproduce a product meeting these specifications and a mechanism for
translating process knowledge to maintain and improve that state of control.
So,
these are the challenges that work into quality by design that we have been
talking about. What is the state of
control? At least one theory has been
proposed by Russ Madsen in his article, "Real Compliance" in the PDA
Journal.
Some
of these issues are important as we talk about not only existing systems but in
moving forward--processes that are well characterized and understood; process
checks that are essentially confirmatory rather than controlling, again, based
on understanding the process; feedback loops; feed forward indicators and
failure alarms; instructions and procedures; verification of critical
operations and documentation and, again, "critical" being important;
and an immune system which has root cause determination, corrections, etc., and
consistency. So, some of these themes we
are seeing now. I guess he was
evolutionary or, you know, preceding many of these goals in the GMPs for the
21st century.
When
we look at the overall issue in terms of pharmaceuticals, we very much look at
product design and process design from the company perspective and from the
regulator's perspective. In product
design, of course, there is the desire to have product specifications that
reflect the formulation and the desired safety and efficacy. From the process paradigm, we want operating
parameters that evolve from process development knowledge, action limits that
reflect the process capability knowledge, and suitable equipment and
measurement tools.
We
have been talking about these through various committees, through PAT subcommittee,
not only doing these but trying to integrate these processes, which is very
important.
From
the GMP perspective, these things are consistent with cGMP requirements and
even those elements that maybe go beyond that somewhat in terms of the design
of the facility, the design of equipment, to have a facility or equipment that
won't bring in unknown elements that cGMP, in terms of the quality system, is
there to cover. For example, a pesticide
that maybe was processed in another part of the plant in a contract
manufacturing type of facility, situations we have run into in reality. Equipment that won't affect the process in a
negative way, but is also designed to accomplish the process.
The
design of production and control procedures--very often cited situation in
terms of validation, and where we go beyond what may traditionally have been
thought about in terms of process validation, and to convert that into process
knowledge.
And,
the requirement in GMP that you have a development of laboratory controls that
come from scientifically sound specifications.
That can only come from engineering the process in such a way that those
will be scientifically sound.
So,
these elements are there in the GMP. How
do we activate them now with modern technology and our modern ways of thinking
that we have been talking about? Going
back to what I said earlier, in terms of how we looked at conducting cGMP
inspections and looking at the various aspects that will be important on an
inspection, the quality system is the underlying base that makes all these
things happen, and that is why we made that central to the program. Without that underlying quality system for
consistent procedures, processes and controls--looking at critical design
mechanisms for facilities and equipment, production, the laboratory, those
things won't be effective without the underlying quality system.
In
looking at a CGMP quality system the focus has to be on patient safety, product
quality through sound science and technology.
An important element of the quality system approach, as we have seen in
modern quality system paradigms, is the ultimate management responsibility and
that connection with management to make that happen, and that being applied to
the design, execution, review and inspection of the product and the
processes. Just to reflect a comment
that Ken made, you know, it has to start not only at time zero but even maybe
at negative points in terms of where that quality system begins.
The
challenge, of course, that we have been discussing over time, not only in our
GMP initiative but even in the PAT initiative as a whole, is to have a
regulatory process that encourages new technology to improve product quality
and process control. We are trying to
meet that challenge through the GMP initiative, through the guidances that we
are issuing to not only have clear expectations but also to provide flexibility
where it is needed and to emphasize critical areas. We hope that Part 11, for example, might be
one way where we are exercising that flexibility.
Again
as I said earlier, the ultimate goal is to have an integrated quality system,
to have a systems approach across design, execution, review and
inspection. The ability to control
quality within your system, and that was that element that Ed Fry was talking
about earlier--how do you make that variation towards quality improvement, and
it is a challenge in terms of how does that in the end affect the patient. Focusing on critical process parameters,
measurements and product performance, again, documentation that focuses on
critical product and process parameters; and science-based inspections
resulting in increased consistency.
These are the goals that we are, hopefully, on the road to achieving at
least from the regulatory side and, again, integrating these things on the
industry side.
So,
in terms of cGMPs for the 21st century, again these themes come through in
terms of being science based, risk based, encouraging the use of modern
technology, quality management techniques, involvement of management in
making that happen, clear guidance, and at least in terms of the beginning
stages of harmonization discussions where we already have two groups formulated
for the next ICH meeting in Osaka, which will bring these elements in the risk-based
group in a way and the common technical document group that is expanding to
take on quality by design.
It
is important to bear in mind that in the future pharmaceutical manufacturing,
and this comes from the cGMP for the 21st century announcement, will need to
employ the best principles of quality management to respond to the challenges
of new discoveries and ways of doing business such as individualized therapies
and genetically tailored treatments. So,
these are the challenges that we face, and I think we are in the midst of
meeting those and we realize there is a long way to go in terms of the GMP
perspective. Thank you.
DR.
BOEHLERT: Thank you, Joe. Questions or comments? Nozer?
DR.
SINGPURWALLA: Thank you. Somewhere on your slides, one of which was
quite intriguing, it said "say what you do and do what you say." Below that you have another bullet that says
"improve it." Subsequent to saying
that, you made the comment, ah, but that is a little difficult because of the
regulatory process--I don't know exactly what verbiage you used. So, the question I have is the following,
does regulation impinge on continuous improvement?
MR.
FAMULARE: In terms of regulation, we
have seen over time that it has posed a challenge, at least as we have been
told by industry and in looking at what are the issues that face pharmaceutical
quality in the 21st century, and this has been very much an operating theme in
the cGMP for the 21st century initiative.
We are examining ourselves as regulators. Is regulation posing a challenge in
that? Does GMP provide enough
flexibility, for example, to allow you to make those changes? We would hope so but is our implementation of
GMP allowing for that as well in terms of how do we interpret things on
inspections, etc.? I think when Ajaz
talks about the CMC process he can also talk about that in terms of the review
process, how the regulator is affecting that.
So, our challenge and our goal, as we put forth this initiative, is to
try and make sure that we are at least not the stopper of innovation.
DR.
SINGPURWALLA: No, thank you. You have clarified my question but you have
also reasserted one of my previous comments, that some of the objectives are
rather conflicting. Thank you.
MR.
FAMULARE: Okay.
DR.
BOEHLERT: Tom?
DR.
LAYLOFF: I think there is no problem
with continuous improvement but continuous change is a threat. Any change which is not documented to be an
improvement could be a negative thing. I
mean, when you say it is continuous, when you say continuous you mean
change. But does change result in
improvement? And, that is a
documentation issue, a demonstration issue.
DR.
SINGPURWALLA: I think any time there is
supervision from one group to another there is the sense of inhibiting the
supervised group from being completely innovative, or open, or whatever
have-you. But, at the same time, if you
don't do that things could go amok too, and that is basically what my comment
was, that some of these objectives tend to be conflicting and when you strike a
compromise the optimum of everything is going to change, and where is the
biggest change going to come? That is
all I was saying.
DR.
LAYLOFF: I am not sure if they are
conflicting, but they may be restraining.
DR.
SINGPURWALLA: Okay.
DR.
BOEHLERT: Thank you, Joe. Our next speaker is Ajaz, who is going to
talk about the CMC perspectives.
DR.
HUSSAIN: Madam Chairperson, just sort of
for clarification, we have two speakers in the open session so I can give a
briefer presentation--
DR.
BOEHLERT: Yes, I have been watching the
clock, we have from now till 12:45 for three presentations.
DR.
HUSSAIN: Yes, so I will probably be
briefer than I was planning to be to allow the two open speakers to have their
time.
DR.
BOEHLERT: Thank you.
CMC Perspective
DR.
HUSSAIN: Well, I think I do want to
focus the discussion on change and innovation for the afternoon session and how
quality by design can improve that.
I
believe the regulatory process is intended to meet the patient's need, that is,
to have a safe and efficacious product available all the time. In the absence of knowledge, in the absence of good
methodologies, change, which is often necessary to keep the product on
the market, is difficult to implement.
The unintended consequence, often, from the regulatory system is that
we do inhibit good change. I think that
is true. I do want to sort of build on
that.
My
talk was designed to sort of pose some questions in addition to the broader
things that I outlined in my memo to you, but I don't expect you to focus on
every question that I have posed here but to sort of focus on the broad things
of helping us define what is quality by design and help us take the next steps.
So,
the outline for my presentation is quality by design, QbD and I hope you like
the small "b" there. What is
quality by design from a pharmaceutical science perspective? Here what I would like to sort of share with
you is that we do achieve quality by design and one of the biggest challenges
we have I think is reflected in Norman Schmuff's presentation this
morning. Before I joined the agency I
used to consult with many companies, and so forth, they have a lot of
information which is the basis of their development, and so forth.
When
I came to the agency, one of the contrasts that I saw was looking at the
submissions and looking at the scientists in the companies, I said those folks
would not have done this. So, what we
see in the submission, there is a big disconnect with what it takes to develop
that product. I think that is the issue
that we are dealing with, the opportunity exists today, without doing any new
technology, to do a better job on our part to ask the right question. If we ask the right question industry has to
have the right answer. So, if we are not
able to ask the right question, then it builds in inefficient systems and
processes which are not adding value from the public perspective. So, I think that is the theme of my talk.
So,
what is quality by design from a pharmaceutical science perspective? How is or should this be achieved? When should this be achieved? How should the level of quality by design be
evaluated and measured? How should
quality by design be communicated, especially to the agency? What is the relationship between quality by
design and risk? What are or should be
the regulatory benefits of quality by design?
And, what steps should FDA take to realize the benefits of quality by
design?
Now,
with respect to the second to last question, regulatory benefits, for the
afternoon presentations we have invited Colin Gardner, and so forth, to focus
on change management, what is the most efficient change management? At the previous meeting we had discussed that
because we spent ten years working with the University of Maryland developing
our SUPAC guidances and we do have that knowledge base of experience. So, quality by design overlaid on that will
probably provide an easier task of moving forward in that direction.
In
some ways, what I would like to sort of present to you is that what we are
talking about is not new. It is new from
the perspective of regulatory decision-making to a large degree. Now, if we take a look at traditional dosage
forms, tablets are a hundred years old now.
We have often forgotten to think about how we make these products in
terms of design. Say, for example, when
a decision is made to make an immediate-release dosage form of a tablet, that
is a design decision. Then, how you make
that is a process decision, process design decision. So, design features of these conventional
products and processes have essentially been defined over the last several
decades and today we often do not consider these as design issues. In many ways, because of lack of thinking of that
as a design issue we often jump in and just do things by tradition, and I think
that is the challenge that we face.
Thinking
or rethinking in terms of quality by design offers significant
opportunities. So, this is I think one
thing of my presentation. I am not going
to go through each of the slides but just to make my point, here are certain
book chapters from the Encyclopedia of Pharmaceutical Technology, and the title
is "Dosage Form Designs." So,
we have always considered that from that perspective, at least in academia and
in the industrial setting but not in a regulatory setting per se although, as
Joe pointed out, our regulations do emphasize that but the questions we ask and
the decisions we make do not fully bring this into consideration.
Here
is a definition: A rational approach to dosage form design requires a complete
understanding of the physicochemical and biopharmaceutical properties of the
drug substance. This happens to be from
the University of Kentucky. Then, from
the University of Maryland, and if two academic centers agree then we have a
consensus--
[Laughter]
Again,
just the same thing, tablet dosage forms have to satisfy a unique design
compromise. You know, dissolution versus
hardness, and so forth, but the same emphasis is pre-formulation characterizing
and learning about the aspects for moving forward.
Just
to sort of emphasize for design features, optimal drug dissolution and, hence,
availability from the dosage form for absorption consistent with the intended
use, and these are actual quotes; I should have put quotation marks. I have cut and pasted this from this book
chapter. Accuracy and uniformity of drug
content. Stability, patient acceptability and manufacturability. So, all the discussions we are having are
actually simply bringing in the disciplines of industrial pharmacy and
pharmaceutics to bear on some of the decisions we make.
Those
are two academic perspectives. Here is a
perspective from Chris Sinko, who came to us in September of 2001 at our CMC
workshop, internal workshop, and talked to us about achieving quality by
design. The design aspects are
integrity, uniformity, weight control, chemical purity and stability over the
entire shelf life. How do you achieve that? You focus on your ingredients; you focus on
your manufacturing process. You actually
then design these things through a pharmaceutics profile, selecting the right
salt, deciding what the right particle size is, making sure compatibility
exists, understanding the degradation pathway of the molecule, doing process
simulations, and using material property characterizations, and so forth. That is one way of getting there.
So,
what we are talking about essentially exists today. I did have a plan of sort of explaining some
of the challenges with an example. Let
me see if I can at least touch upon this.
If we take one attribute, bioavailability, you can make a beautiful
tablet but if the tablet does not disintegrate or dissolve it comes out the
next day and it is in your toilet. So,
that is not a bioavailable formulation.
So, that is one aspect which is important, essential.
So,
if the design objective is to maximize bioavailability and make it
reproducible, you approach it from first understanding what are the absorption
mechanisms. If you don't, then a lot of
your formulation strategies take you to where you don't get a return on your
investment. Then you focus on what are
the physicochemical attributes related to the release of the drug from the
product and its absorption; designing a formulation, making sure you have the
right disintegrating agent and, if you need one, a solubilizer or a wetting agent, and so
forth. Then, designing a process whether
wet or dry. Just keep in mind that wet
granulation and dry compression of the same formulation will not give you the
same bioavailability. So, you have to
bring that in. Then you design your
specifications and controls that will make sure the process remains
reproducible.
Now,
here is one example. This again happens
to be from Chris Sinko's presentation.
How does one sort of arrive at a critical variable? In this case, if dissolution is rate limiting in the
absorption process, it is likely that the particle size of the drug material
will be important. So, there are elaborate procedures in place
today, which are not shared with the agency, which sort of go in a step-wise
structured way of arriving at a decision of what the particle size should be.
Here,
again, I don't want to take time to get to the decision tree criteria in this
case but it essentially goes to early studies, including animal studies,
looking at information that sort of signals whether particle size is important;
doing simulation work; and then arriving at a decision--if particle size is
important, can it be achieved to the level needed, or what needs to be done.
Also,
I think one important element that is in this decision tree is a decision on
particle size with respect to dissolution also impacts on uniformity, content
uniformity. So, you have to sort of
decide on all aspects together.
Just
as an example, the aspect that I would like to sort of focus on is that I think
today there is a lot of inefficiency built in.
Now, formulation and process design generally starts at a small scale
but, I will sort of share with you, continues on pilot scale and then continues
with clinical materials too. In the
pre-approval world you then have to face the bridging studies so you have to
qualify changes during development for bridging studies.
If
I take the example of bioavailability, we often use in vitro dissolution
tests as a tool to screen and evaluate various design prototypes. Now, often when an in vitro
dissolution test is deemed not sufficiently reliable many companies might do in
vivo studies either to provide some relevance to that in vitro test
or just qualify those changes based on in vivo dissolution.
My
personal observation before I came to the agency, and I think it just
reinforces this, is that I have seen development programs that have developed
50, 60 prototypes and have used an in vitro dissolution test to screen
but never asked the question was that screen meaningful or not, and then start
the cycle again. So, often the
dissolution test is used to screen and evaluate experimental formulations without
sufficient consideration or verification of its in vivo predictability
or relevance.
The
experience I gained a lot from looking at all the submissions that I could lay
my hands on where we had problems, and this was when I had to lead the
development of this guidance on waiver of in vivo bioavailability and
bioequivalence studies for immediate-release dosage forms, and what we have
tried to do with this guidance, that we published in 2000, was to bring the
physiology, the physical chemistry, the chemistry together along with the test
method to see when is this test method reliable and when can we rely on that.
In
this case, I think what we have done is we have connected pre-formulation
information to all the decisions that occur later on. The work I did in this case led to a very
interesting sort of observation. On the
new drug side, where we have data for failed studies--on the generic side we
don't have failed studies so this is biased in some regard--when you have to
make a decision to say how good is this in vitro dissolution test for
immediate-release dosage forms often this gives you false positives or false
negatives. Very rarely does it give you
the right answer. So, it is on either
side.
From
a regulatory perspective we have been very happy, saying dissolution generally
is over-discriminating so you can see big differences that do not translate to
differences in vivo. At least
from our perspective, we have said we are comfortable saying if there is a
difference we won't allow that change to happen but that becomes too restrictive. But there are situations, and we estimated
about 30 percent of the time, where dissolution actually gives you the wrong
answer.
Now,
why is that? Here is just an example
from a published work where you can get false positives and false negatives. If you look at formulation "C"
compared to the reference formulation, which is the top line, if you look at
the dissolution at 45 minutes, it is 92 percent. It meets the specification. But if you look at the maximum concentration,
it is 55 compared to 100 of the reference.
But if you look at formulation "F" the dissolution is only 53
percent. So, this often can be an
unreliable test if you don't qualify it and if you don't approach it from a
scientific perspective.
An
example of over-discriminating test--here are our research examples from the
University of Maryland plus all the ANDAs and the innovator product for a drug
called metoprolol. All these products
are bioequivalent. They meet the
criteria. But one doesn't meet the
specification. So, this is an example of
what we call over-discriminating.
At
the same time, here is an example of product "B" which we actually
withdrew from the market. Product
"A" is the innovator product.
It meets the specification. This
is a pre-'62 drug so the only criteria for marketing was dissolution meeting
USP specifications. So, it met the
specification. Product "A"
being the innovator, the innovator company did a bio study and submitted a
petition saying that product "B" is not bioequivalent and shouldn't be on
there and so we actually withdrew product "B" from the market. In this case, it is inappropriate acceptance
criteria. If you just look at one point
of the curve, it gives you the wrong signal.
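This single-point problem is one reason whole dissolution profiles are compared rather than one time point, commonly with the f2 similarity factor used in FDA dissolution guidance (f2 of 50 or above is conventionally read as similar; identical profiles give 100). A minimal sketch, with invented profile values:

```python
import math

def f2_similarity(reference, test):
    """f2 similarity factor for two dissolution profiles given as
    percent dissolved at matched time points. Identical profiles
    give 100; f2 >= 50 is conventionally read as similar."""
    assert len(reference) == len(test)
    # Mean squared difference between the profiles.
    msd = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    return 50 * math.log10(100 / math.sqrt(1 + msd))

ref = [35, 60, 82, 92]            # % dissolved at 10/15/30/45 min (illustrative)
same = f2_similarity(ref, ref)    # identical profiles -> 100.0
shifted = f2_similarity(ref, [25, 50, 72, 82])  # uniformly 10% lower -> below 50
```

A profile that matches the reference at the 45-minute specification point but diverges earlier, like formulation "C" above, would still score poorly here because every time point contributes to the mean squared difference.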
Here
is an example. I was surprised that Ken
would show this, but more and more, if you don't select your dissolution test
in accordance to the physicochemical properties of the drug substance you get
misleading information. Here is an
example where a drug is a weak acid and if you do the dissolution in slightly
alkaline conditions you do not get the right answer. The company actually used that as a basis for
development and ended up with a tablet 2 formulation which was supposed to be
the marketed formulation and was not ready.
I
do not want to go through this but I think it goes to the same point. Just to sort of emphasize, the point I am
trying to make here is this, as we think about design, if you change the mind
set to a design mind set you first start off evaluating what is
appropriate. If the dissolution test is
a method by which you screen your formulations, then you have to think about
whether it is appropriate first or not.
There are many reasons why this may not give you the right answers.
I
will skip this but I just want to hone in on one point. Changes are reality. Changes happen every day. On average, in a new drug application we
estimate that there are three to six bioequivalence studies, clinical studies
done just to qualify that.
Here
is an example of what a major company does, on average seven bioequivalence
studies for each product. But here is an
example. This is an actual case
study. Each star that you see is a
bioequivalence study done during development to qualify those changes. Phase I was dealing with a capsule. They went to granulation. They qualified with a bioequivalence test,
and each change was qualified using a bioequivalence study. At the very last minute, ready for approval,
the test failed. So, what do you
do? So, here is an example where not
thinking through the process actually delayed the approval process.
So,
the aspect I think of what I would like to say is, in a sense, that as we think
about design you have to approach it in a holistic way, looking at the
reliability of the methodologies that give you your answers. One aspect which is important, and one we have
quite a bit of experience with, working with the University of Maryland,
is what we did as a model for development.
The University of Maryland collaborated with us on such a model,
which we used to support our SUPAC program.
We start with pre-formulation, focusing on the right pre-formulation
attributes and physicochemical characteristics, identifying the critical variables
through a design of experiment. Now, I
know the design of experiment concept to some companies is fairly new because
they still do the trial and error type.
Again, if I recall, Prof. Shangraw published a paper in '93 on a
survey he did, and only six to eight percent of the companies surveyed actually
used a formal design of experiments in their development. That was in '93; I don't know what the
situation is. But in a multifactorial
world you have to design your experiments to identify your critical variables
and do it step-wise and that can be easily achieved. I know many companies which do that. So, for many companies this is a low-hanging
fruit.
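To make the design-of-experiments point concrete, here is a minimal sketch of a two-level full factorial design; the factor names and the toy response are invented for illustration, and the factor with the largest main effect is flagged as the critical variable:

```python
from itertools import product

# Hypothetical two-level factors, coded -1 / +1. The names are
# illustrative, not from the talk.
factors = ["disintegrant", "lubricant", "compression"]
design = [dict(zip(factors, levels)) for levels in product([-1, 1], repeat=3)]

def main_effect(design, response, factor):
    """Average response at the high level minus at the low level."""
    hi = [response(run) for run in design if run[factor] == 1]
    lo = [response(run) for run in design if run[factor] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

# Toy response: dissolution at 30 min, dominated by disintegrant level.
toy = lambda run: 80 + 10 * run["disintegrant"] - 1 * run["lubricant"]

effects = {f: main_effect(design, toy, f) for f in factors}
# The factor with the largest absolute main effect is the candidate
# critical variable.
critical = max(effects, key=lambda f: abs(effects[f]))
```

Because every factor combination is run, the main effects separate cleanly; a trial-and-error series of one-factor-at-a-time batches cannot give this multifactorial picture.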
Let
me just skip to this slide. This is
again from the University of Maryland.
For example, the general tendency has been that this is the dissolution
test. We will test it and screen it on
the formulation. But if you don't pay
attention to what that information is telling you, then you are missing half
the point.
Here
is an example of the experimental formulations that we manufactured at the
University of Maryland. If you analyze
this at different points on the curve, you know where a formulation or
process factor impacted the dissolution profile. For example, if magnesium stearate is a
critical attribute the dissolution profile picks up the changes in magnesium
stearate and has a negative impact when you look at time, about 10 minutes to
about 15 minutes. But if you make
decisions only on time 30 minutes--that is what the specification focuses
on--it does not pick up the differences in the variability in magnesium
stearate. So, the point I am trying to
make here is if we have to identify critical variables, we need to know what
the test is telling us and not simply blindly follow--this is the target
specification. I am going to do this; I
don't want to know what it is telling me.
This
is our own research study on the products we made at the University of
Iowa. This happens to be furosemide, a
Class IV drug. Any minor change in
composition today will require a prior approval supplement for this; will
require three batches of stability studies; will require a bio study. All right?
In
this particular case, for example, one of the ingredients our SUPAC guidance
identifies as critical is magnesium stearate.
For this particular formulation the changes in the level of magnesium
stearate have no impact because the product was designed to be robust to
changes in magnesium stearate. The only
aspect which was critical here is the disintegrant level in the
formulation. Even the processing
conditions were not important or critical.
The reason is that the right level of the right disintegrant takes care of
all other things. It makes all other
variables less critical. But our
guidelines do not recognize that today.
So, even if a company understood that, they face quite a significant
challenge for getting any change approved today.
Wrapping
up quickly, what is quality by design? I
think that is the key question. I think
if design decisions are based on thorough formulation and process understanding
as these relate to the intended use, I think that is what you are trying to
achieve by quality by design.
So,
what are my thoughts on what should be the relationship between quality by
design and risk? I want to emphasize
this, and this is how we have emphasized this in our draft PAT guidance. Within a given quality system and for a given
product, there should be an inverse relationship between the level of
quality by design and risk. I think that
is the framework. So, we have to sort of
think about that and mature that part further as we go along.
So,
how should quality by design be achieved?
I think in a structured manner, guided by scientific information and
knowledge gathered during pre-formulation, development, scale-up and
production.
When
is or should quality by design be achieved?
Ideally, before you get into your pivotal clinical trials. If not, you pose the risk of confounding your
safety and efficacy studies with quality problems. However, you have to recognize this is a
continuum. So, for all critical material
attributes and other aspects this should be done before we get to pivotal
trials, but then fine-tune this over the life cycle of a product.
How
is or should the level of quality by design be evaluated and measured? This is a very sensitive sort of topic, and I
think this is where we seek your help.
The sensitivity is that industry is hesitant, largely rightly so, to share
this information in the review process.
The reason is if we are not asking the right question having this
information will create a nightmare for the companies, resulting in delayed
approval. So, we need to learn to ask
the right question, and we have to be ready for that.
One
aspect is--this is sort of my thinking, if, for example, we don't want to
interfere with the development activities, how does one evaluate the value of
the knowledge content of the information that we have? I think one possible way is if we have
established relationships, especially mathematical quantitative relationships
with product and process variables and the quality attributes, then the
predictability of those relationships could be one way of saying, yes, you have
sufficient coverage, sufficient data density and your predictability is
acceptable so we can actually make decisions without having to sort of
scrutinize every step of the way. That
is one possible scenario. Hopefully, you
will consider that.
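One way to read that "predictability of established relationships" idea is as an out-of-sample check: fit a quantitative relationship between a process variable and a quality attribute on development batches, then ask whether it predicts later batches within a pre-set limit. A minimal sketch, with all numbers invented:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def acceptable_predictability(model, xs, ys, limit):
    """True if every held-out observation is predicted within `limit`."""
    a, b = model
    return all(abs((a + b * x) - y) <= limit for x, y in zip(xs, ys))

dev_x = [1.0, 2.0, 3.0, 4.0]      # e.g. lubricant level (%), development batches
dev_y = [88.0, 84.0, 80.0, 76.0]  # e.g. dissolution at 30 min (%)
model = fit_line(dev_x, dev_y)

# Later batches: does the established relationship still predict them?
ok = acceptable_predictability(model, [2.5, 3.5], [82.1, 77.9], limit=2.0)
```

Demonstrated coverage of the variable space plus acceptable held-out prediction error is one concrete way a reviewer could judge knowledge content without scrutinizing every development step.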
How
should this be communicated? Preferably
as part of the original submission.
Norman provided the sections where this can come in so there is no need
to create new sections. I think the
sections are there; they need to be filled with the right information.
But
I think what I have proposed, and that is the reason why I invited the speakers
for this afternoon's session, we need to probably think about this in the
post-approval stage first. There are two
reasons. One, I think the agency needs
time to be ready to sort of ask the right questions, learn how to ask the right
questions to a large degree. Second, I
think we have taken this to the ICH process and I think that will continue in
that mode. So, I think the post-approval
world offers a good path forward. The
information could be shared in the form of a supplement or a comparability
protocol.
What
should be the regulatory benefits? I
think in my mind the most important benefit is more rational, science-based,
mechanistic-driven specifications. For
that, it has to come in the NDA submissions but that is the ultimate goal.
At
the same time, I think risk-based regulatory approaches that recognize the
level of scientific understanding and the capability of process control
strategies; that is our desired state statement.
I think we can move that from a post-approval situation of thinking
about customized SUPAC or SUPAC-C, whatever you want to call that.
So,
what steps should FDA be taking to realize the benefits of quality by
design? What we are trying to do is
start to build elements of pharmaceutical development in all relevant guidance
documents. One such guidance document
was included in your background packet.
Some of the comments we have received are interesting.
Support
development of ICH guideline on pharmaceutical development. This process has already begun. Train FDA staff on how to evaluate the
knowledge content of pharmaceutical development reports. We already have a set of activities planned,
and we invited Ken to come and brainstorm with us in a number of sessions to
help the leadership in the Office of New Drug Chemistry and Office of Generic
Drugs and Office of Biotechnology Products.
We want to sort of start thinking about this in terms of how we approach
this.
I
think while the ICH process on pharmaceutical development is ongoing, and this
will start in Osaka next month or month after that, what I think we should do,
and this is open for your comments and suggestions and I think we need some
feedback, is focus on the SUPAC-C concept, customized SUPAC concept. One option is to work with or within the
draft comparability protocol guidance.
But we have heard already that this will probably be too restrictive.
So,
in addition to the comparability protocol concept, one thought could be to
develop additional guidance on SUPAC-C.
This could be not an elaborate guidance but be part of an appendix to
the comparability protocol guidance or planned revision of the SUPAC guidances
that we have already started because 314.70 is to be reissued and I think we
have to revise all of our SUPAC guidances anyway, or it could be an independent
SUPAC-C guidance.
I
want to sort of share some thoughts on level of quality by design metrics. Again, to measure this you have to sort of
begin with an end in mind. Achievement
of predetermined product and process performance characteristics that are adequate
for the intended use on every batch and in an established cycle time. So, that is my way of thinking about it.
So,
performance characteristics are selected or developed through scientific
studies to identify the target characteristics and all relevant sources of
variability in the target characteristics, and to evaluate the effectiveness of
test and control strategies to mitigate the risks. So, that becomes part of quality by design.
Metrics--I
think this is the key. We really need to
have the right metrics because we do what we measure. So, if we measure the right stuff we will be
doing the right thing. If we don't ask
the right question this will not get there.
One
proposal is right first time. The percentage of
batches manufactured right the first time could be a metric. Process time over cycle time, the ratio of
process time over cycle time and its improvement. And, ability to reliably predict impact of
changes. That gives you an ability to
say, all right, this is a low risk. If
we require a prior approval supplement for this, we probably don't have to. It may not mean that you don't have to do all
the qualifying tests. The qualifying
tests could be done and kept on site, and the integrated approach that we have
talked about with CMC review and inspection can address that.
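The two numeric metrics proposed above can be sketched directly. The batch records here are invented for illustration:

```python
# Hedged sketch of the proposed quality-by-design metrics: the
# right-first-time rate and the ratio of process time to cycle time.
batches = [
    {"right_first_time": True,  "process_hours": 6.0, "cycle_hours": 30.0},
    {"right_first_time": True,  "process_hours": 6.5, "cycle_hours": 28.0},
    {"right_first_time": False, "process_hours": 6.0, "cycle_hours": 45.0},
    {"right_first_time": True,  "process_hours": 6.2, "cycle_hours": 31.0},
]

# Fraction of batches manufactured right the first time.
rft_rate = sum(b["right_first_time"] for b in batches) / len(batches)

# Ratio of value-adding process time to total cycle time, per batch;
# tracking its improvement over time would itself be a metric.
ratios = [b["process_hours"] / b["cycle_hours"] for b in batches]
mean_ratio = sum(ratios) / len(ratios)
```

A ratio well below one, as in the failed batch above where cycle time ballooned, is exactly the kind of signal the speaker suggests measuring.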
So,
I will stop with these questions again for you to think about. Thank you.
DR.
BOEHLERT: Thank you, Ajaz. In the interest of time I will defer
questions and comments on Ajaz' presentation because he will be here for the
discussion. We are going to go to the
open public hearing section, and we had two speakers scheduled and only one
will present and that will be Rob Menson.
Open Public Hearing
DR.
MENSON: I need to thank Fred for giving
me some of his time here, and I hope that I can prove to him that he did the
right thing. I also know that for all of
you who are sitting out there, like I was, the seats are getting hard and you
are getting hungry. So, I will try and
make this as quick and dry as possible.
My
name is Robert Menson. I am a
consultant. I have my own company,
Menson Associates and I work with QRC Associates, also a consultant in the
pharmaceutical industry.
We
are going to briefly talk about a risk model today. We heard a lot of discussion about risk, risk
models, risk management in the pharmaceutical industry. Today I am going to present a model that we
have been using for a couple of years now.
It is described, to a certain extent, in a different iteration in the In
Vitro Diagnostic magazine, March 2003.
It is a model in which we are going today to talk about application to
the perfect product and a perfect process to make sure we mitigate any
potential event disturbing that situation.
By
changing the decision trees and the rules, the model can be applied to such
questions as: where is the best place to put our resources in designing a process to
make a product? How can we balance
changes to our product or process by looking at what the impacts are?
We
all heard that, of course, the FDA's mandate is risk to the safety of patients,
users or, potentially, handlers. Now, the
handlers is more in the medical device industry. We also have business and regulatory risks
and we have product liability risks.
I
borrowed the discussion of intended use or intended purpose from ISO 14971,
which is the application of risk management techniques to medical devices. It fits in pretty closely with the earlier
ones presented by Dr. Woodcock on the surrogate fitness for use. It is a pretty general statement. Intended use is use of a product, process or
service in accordance with the specifications, instructions and information
provided by the manufacturer.
That
can be a fairly general application as we go forward. I bring this forward because when we begin to
look at the failure analysis in the process, the model currently ties all
failures to implicit or explicit fitness for intended use. Implicit fitness for intended use would be
something where the customer doesn't know they are not supposed to expect
particulate matter maybe in an injectable; explicit, the customer would be
expected probably to know that he is expecting a sterile product that is not
going to give him any problem. So, when
we look at the failures we need to consider both of those. Also, the model helps us identify critical
quality parameters by tying them to process steps which impact the fitness for
use.
The
elements of the risk management process, these again come from ISO 14971. We all talked about risk analysis, risk
evaluation. The addition of control and
post-production feedback into your risk assessment model is the
important aspect, because we do the best we can when we do risk assessment, but
we need to have the feedback because we all know we have recalls. We all know we have discrepancies, as shown
by earlier slides. So, when we get those, what did we miss,
and how do we look at the control of our process and integrate those
aspects going forward?
There
are various risk assessment tools out there in the industry, as was brought out
again earlier. These are standard tools
used in a lot of different industries.
They are just beginning to be new things to the pharmaceutical industry
and we are going to present shortly a modification in which we are combining
two of these tools in a model that we feel is beneficial to the industry.
I
will briefly talk about some of them.
Fault tree analysis--most of you have heard about it. Our National Transportation Safety Board uses
that after the fact, unfortunately, for crashes to figure out what went wrong. So, typically a fault tree analysis is done
after the fact, though you can also use it to determine reliability of a
product by mathematical formulas and you can also use it to see whether you
have conflicting design criteria.
The
standard one most of us have heard about is FMEA or FME(C)A. We typically use these terms interchangeably
today. I am going to briefly talk about
the FMEA model to lay the groundwork for the model that we want to talk about
in more detail.
HAZOP
was developed for the chemical industry, particularly to make sure that any of
the things they did didn't blow people up or create other hazards. But it also has its applicability to certain
aspects of the pharmaceutical industry, particularly maybe in manufacturing and
API or during formulation to look at the impact of what might happen if an
operation didn't conform to its specifications during the manufacture of a
product.
HACCP,
which was started in the food industry--actually, it started earlier in
applying the technique to developing food for astronauts. It was promulgated in the seafood industry,
and we will go through briefly what the HACCP process is.
The
FMEA model typically looks at the device or the function. If we are looking at the design of a product
we start out with what is the component and what is its function. If we are looking at a process FMEA, we look
at the functional step in the process and try and understand what the process
is supposed to do. At each step we
identify the potential failure modes. In
this particular case I used an example, because it is fairly easy to work through,
of a visible treatment field indicator on an x-ray machine where there can be
several different failure modes. We look
at those and what happens if a product does fail.
Once
we do that, we assign it a severity level.
Behind all this is a lot of work defining severity, occurrence and
detection tables which we don't have time to go into today. Once we look at what is the severity, we say,
well, what could cause this and how often would it happen. That is the occurrence column. What do we have in current controls that we
can do to mitigate this and are we able to detect it? In detection here, we are looking at
detecting before the event occurs.
Obviously, we can detect it after the event occurs most of the time.
In
this particular case we looked at, well, what if it happens? Really, in this case it
is increasing the setup time, and probably 99 out of 100 times that has no
severe impact upon a patient. But if
that one time is when that patient needs that x-ray because something severe is
going to happen to them in an emergency room, we may have to change our
severity level.
So,
this is really the FMEA model and, as I said, I don't have enough time to go
through it in detail today but look at the overall graphic and we will come
back to this when we look at the model I am going to present in a few minutes.
The
HAZOP model starts with a design statement.
We have an activity; we have the material; we have a destination; and we are
going to transfer a powder to a hopper.
We
then come down and apply a standard set of guide words that we use in the HAZOP
criteria. These are just some examples
and the examples are "no material," "more than," "not
greater than," "less than."
We apply it to each one of these statements in our design criteria. As we apply it to each one of the statements
in design criteria, if you look down here and it says "no transfer"
what could have caused no transfer? A
valve closed, line blocked, pump broken--no material. This then allows us to systematically walk
through our process, understand what would happen if various standard HAZOP
words are applied to each one of the design criteria. Then we take this forward and say, well, if
we had no transfer of the material because the valve was closed, was there a
risk? If there is a risk, then we put
together a plan to mitigate the risk. We
may want to check on the valve that is open or that is functional before we do
something.
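The guide-word walkthrough above can be sketched mechanically; the design statement, guide words and causes are the illustrative ones from the talk:

```python
# Minimal sketch of the HAZOP step: apply standard guide words to a
# design statement to enumerate deviations, then record candidate
# causes for each deviation.
design_statement = "transfer of powder to hopper"
guide_words = ["no", "more than", "not greater than", "less than"]

# Each guide word applied to the design statement yields a deviation
# the team must walk through systematically.
deviations = [f"{word} {design_statement}" for word in guide_words]

# For each deviation, brainstorm what could cause it.
causes = {
    "no transfer of powder to hopper": [
        "valve closed", "line blocked", "pump broken", "no material",
    ],
    # ...one entry per remaining deviation in a real study
}

# Any deviation with at least one plausible cause triggers a risk
# review and, if needed, a mitigation plan (e.g. verify the valve is
# open and functional before starting the transfer).
needs_review = [d for d in deviations if causes.get(d)]
```

The value of the method is exactly this systematic enumeration: every guide word is applied to every design criterion, so deviations are not left to ad hoc brainstorming.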
HACCP
started with the criteria, again, that we had something that was particular to
food. We had a natural product and what
we wanted to do was make sure that we didn't add any additional hazards to the
product. So, at each step of the way in
the process we looked to see whether we added a biological hazard, a chemical
hazard or a physical hazard. Again, you
can take this to some aspects of pharmaceutical manufacturing where you can
look at could I possibly add particles to a tablet during tableting
operations? The answer is possibly yes
because we could have particles coming off the punch press. So, we need to say, well, how would we
control that?
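The step-by-step HACCP question just posed (could this step add a biological, chemical, or physical hazard?) can be sketched as a simple checklist loop. The process steps and the single finding below are hypothetical, not drawn from any real process:

```python
# Sketch of the HACCP-style question at each process step: could this step
# introduce a biological, chemical, or physical hazard? Steps and findings
# are hypothetical examples.

HAZARD_TYPES = ("biological", "chemical", "physical")

# step -> hazards the review team believes the step could introduce
PROCESS_STEPS = {
    "blending":  {},
    "tableting": {"physical": "particles shed from the punch press"},
}

def hazards_added(steps):
    """Return one (step, hazard type, description) finding per identified
    hazard; a step with an empty dict adds no hazard."""
    findings = []
    for step, hazards in steps.items():
        for htype in HAZARD_TYPES:
            if htype in hazards:
                findings.append((step, htype, hazards[htype]))
    return findings

for step, htype, desc in hazards_added(PROCESS_STEPS):
    print(f"{step}: {htype} hazard - {desc}")
```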
The
HACCP process requires that you have a prerequisite quality system program
because all it is doing is providing you a method to analyze things, and it
says that I have a process that I am going to go forward to maintain this. Traditionally it is something like a GMP.
I
have to apologize for the printouts because one of the things that happens if
you don't print this in pure black and white--you guys have the same product
that I have; all the red boxes are black on yours. So, we will go through this and correct it as
we go forward.
The
most important part of the risk assessment process we are going to talk about
today is to make sure you map the process.
You really need to use the map to walk through the process as we go
through it. Now, as I said, this process
that we are presenting today is a combination of a FMEA analysis and a HACCP
decision type tree. This is what adds to
the model beyond the FMEA and we are combining the two of them.
In
a classic FMEA, if you can remember back three slides, you have severity,
occurrence and detection and different models multiply those and get what is
called a risk priority number and they generally set a cut-off and work
forward. What this model does by using
the HACCP technique is it emphasizes severity of the problem before it goes any
further.
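The classic FMEA arithmetic just described can be sketched as follows; the failure modes, rankings, and cut-off value are hypothetical illustrations:

```python
def rpn(severity, occurrence, detection):
    """Classic FMEA risk priority number: the product of the three 1-10
    rankings (a high detection ranking means the failure is hard to detect)."""
    for ranking in (severity, occurrence, detection):
        if not 1 <= ranking <= 10:
            raise ValueError("rankings must be on the 1-10 scale")
    return severity * occurrence * detection

# hypothetical failure modes with (severity, occurrence, detection) rankings
FAILURE_MODES = {
    "valve fails closed":     (8, 3, 2),
    "ink color slightly off": (2, 4, 3),
}

CUTOFF = 40  # illustrative cut-off; real programs choose their own
for mode, (s, o, d) in FAILURE_MODES.items():
    number = rpn(s, o, d)
    print(mode, number, "act" if number >= CUTOFF else "monitor")
```

The model the speaker presents differs from this plain multiply-and-cut-off scheme precisely because, via the HACCP technique, severity is screened first rather than folded into a single product.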
So,
once we map the process, then what we are going to do is use the FMEA type
model to do a risk assessment. We use
our decision tree to decide whether we have an ECP, in this particular case an
essential control point; some people call it a critical control point, there
are various names for the same type of thing.
We move those things forward to a review matrix and then an action plan.
Now,
what this model does is it integrates all the things that a manufacturer has
done in the pharmaceutical industry from test method validation or process
validation and puts it together in one analysis.
So,
the first thing we have to do is create what we call the SOD tables.
In the model we typically link the severity to the end product
functional failure. I talked to you
earlier about both implicit and explicit.
We get the medical department of the company involved to classify the
severity of the functional failures.
Obviously, as somebody brought up earlier, those will vary depending on
the risk of the product. If you have a
short fill for a product where somebody is trying to take 1 cc out of 5 cc,
that doesn't necessarily have the same risk as if you are trying to give
somebody 1 cc out of a 1 cc bottle and you need to have it right away.
When
we look at this, because the model as I am representing it today is looking
retroactively at your current manufacturing process, we typically use
historical data or data from similar processes and products to identify the
possibility of the event to occur. If
you have a product that you make very few times a year but you apply that same
process to multiple products, what we do is assimilate the knowledge across the
processes.
For
detection we can look at our method validation studies, in other words, can we
detect it or can we not, assuming that it has been presented to us, which means
sampling is an important part of detection capability. Again, we can look at historical data.
So,
the concept of the process is we assign an essential control point to steps in
the process for a process that is in control--and by our definition, it does
not produce a significant defect and, again, we can spend a lot of time on what
do you mean by significant defects--but it is difficult to verify by
testing. An example of a process here
would be sterility. We can't possibly
test sterility in. The corollary is a
process that may have a higher level of defects than you want but we can always
detect them.
So,
if severity is greater than 5, and when we set the model up anything greater
than 5 we deal with in our table--the model that I am currently working with
does not allow anybody to give a 5 because when you do these analyses most
people want to stay in the middle ground.
So, we force them to make a decision.
Anything above 5 we deal with; anything below 5 we have considered of a
less impactful nature.
Basically,
as I said, we have a risk assessment tree.
I am going to talk through some of these but basically if the severity
is less than 5 we have judged that this is a low risk and we are not too
concerned with that step of the process.
Now, again, when you talk about low risk, as was brought up earlier, it
could be that the color of the ink is slightly wrong. It is a low risk potential. Now, there is a quality issue. It may not be our standard so by definition
we may be out of GMP compliance but it is a low risk issue.
If,
however, the severity is greater than 5, then we go through the analysis. If the severity is greater than 5, and if you
go down and say the probability is greater than 5, then we go down and say can
we detect it? By our definition,
detection less than 5, because the scale is reversed, says we can. Typically we assign detection of less than 5
the fact that we have a high chance of catching both random and non-random
events.
Then
what we do, we call the detection capability at that step the essential control
point, and we want to make sure we spend our resources and effort on making
sure we can detect it at that step and that our process and our testing method
is robust and rigorous.
In
this particular model that I have up here the probability is less than 5 so we
add can we also detect it? If it is less
than 5, which means it doesn't happen very often and we can always detect it,
we say we have a robust step because one or the other can potentially go out of
control and we can still have fairly good assurance that we have mitigated the
risk.
On
the particular model that I have up here--I changed this at the last moment, I
apologize--if the probability is not greater than 5 and I can't detect it if it
happens, then I want to make sure I spend my time on the validation
process. I want to make sure that I have
a good process capability and it doesn't happen very often.
If
the model that I just talked about says that it happens but I can always detect
it, then I make sure that my control point becomes my detection point and I
want to make sure that I don't do anything to disturb the detection.
Now,
occasionally what happens is this, it happens more often than I want; I can
detect it less than satisfactorily. That
means that I need to do something about that process step. I either need to reduce the probability or
increase my capability to detect.
Now,
one way you can increase the capability to detect is to add additional test
methods, samples or use a different test method. Then, once I do that I assign the control
point to the reduced parameter.
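The decision logic of the last several paragraphs can be summarized in a short sketch. The thresholds follow the speaker's description of the 1-10 scales (with 5 itself disallowed to force a choice); the outcome wording is paraphrased, not his exact language:

```python
def ecp_decision(severity, probability, detection):
    """Sketch of the combined FMEA/HACCP decision tree: severity is screened
    first; a detection ranking below 5 means we can reliably catch the
    failure, because the detection scale is reversed."""
    if severity < 5:
        return "low risk: step not treated as an essential control point"
    if probability > 5:
        if detection < 5:
            return "ECP at the detection step: keep the test method robust"
        return "act: reduce the probability or improve detection"
    if detection < 5:
        return "robust step: either control can slip and risk stays mitigated"
    return "ECP at the process step: rely on validation and process capability"

print(ecp_decision(3, 8, 8))   # low-severity branch
print(ecp_decision(10, 7, 2))  # frequent but detectable
print(ecp_decision(10, 2, 9))  # rare but hard to detect
```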
This
then comes into convening participants and beginning to fill out a form, as I
showed you referring back to the FMEA form earlier. We go through each process step. We look at what would be the failure mode at
that process step and I told you earlier we took those all to the implicit and
explicit intended uses, fitness for use, whatever term we want to use. We look at, if stability failed, what could
be the potential hazard. For some products subpotency can mean just delayed medical treatment; for other products subpotency can be fairly severe.
We
look at potential causes at a fairly high level. What are the controls? In other words, what methods do we have in
place, what process step do we have in place, etc. How can we potentially find this failure if
it happens? Then we go through our
decision tree.
Under
the severity column, we have already decided with medical, based on our
severity table and they listed the ratings of severity for the functional
failures so that is kind of a given. So,
if stability is the failure, in this particular case the medical department
said subpotency will cause delayed medical treatment but we don't call that a
high severity. Okay?
Probability,
we again base that on historical knowledge.
Again, most pharmaceutical companies, because of the number of lots they
make, they don't have the same amount of information we have on reliability
before it happens that you might have on a chip from Intel. Detection, again as I said, is related to
what we--the confidence level we can determine something.
We
have gone through this. We have assigned
these numbers to each one of these and then we go back to our ECP decision
tree. In this case, because severity is
less than 5 we say that we do not need to call this a critical control point.
As
you go down further, the next one says severity is 10, which means we
automatically have to look at it. In
this model we have said, gee, it happens more often than we think it should but
we can detect it pretty easily and, therefore, what happens is the ECP becomes
the detection point.
Once
we have done this whole assessment, we then bring the information down from
that assessment to a work plan in which we now say, okay, we have brought the
step down; we have brought down the failure mode for continuity; we have
brought the potential cause down and now we begin to look at our existing
controls and say what is the actual procedure step? What is the quality attribute we are going to
be looking at? What is the test method
we use? What equipment do we need to use
for that test method? What documents do
we currently have in place to support that?
And, any related issue, and in this particular case we put down sampling
because sampling in a water system is an important issue. Then we said this is owned by the quality
control department.
This
then provides us with a method for compiling the information, because what you will do when you go through these, you will find that multiple control points
are in the same place and what we want to do, rather than treat them
individually--you notice we brought a bunch of ECPs down, 4.1, 4.2, 4.3 in this
model, and they are all related to looking at the same procedure. They are all related to looking at the same
task. If we had any prerequisites we
would put that in there. For instance,
if we are going to look at the prerequisite for test method validation,
qualified equipment, we would put that in there. We assign responsibility,
completion date and then any particular links.
So,
what we have done is taken the process, taken each step, looked at the risk and
linked it to all the information there that the organization has in process and
other capability. This then becomes our
remedial action plan if, of course, we have ECPs. I haven't done this with any manufacturer yet
that hasn't had some ECPs to look at.
Thank
you for your time. I thank you for ten
minutes of your lunch hour. I will be
glad to answer questions either now or later.
DR.
BOEHLERT: Are there any brief questions
right now?
DR.
SINGPURWALLA: I have a brief comment.
DR.
BOEHLERT: Okay.
DR.
SINGPURWALLA: Probabilities bigger than
one are not allowed. Probability is
always between zero and one. You show
probabilities with four, five, seven, eight or nine.
DR.
MENSON: No, in the FMEA model you rank
the probability from one to ten and then you assign a probability to each one
of those numbers so that you can carry through the mathematics.
DR.
SINGPURWALLA: Those are rankings?
DR.
MENSON: Those are rankings, yes. Thank you.
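Dr. Menson's point can be illustrated with a hypothetical occurrence table: each 1-10 ranking is an ordinal label for a probability band, and it is the ranking, not the probability itself, that enters the RPN arithmetic. The bands below are invented for the example, not taken from any real FMEA standard:

```python
# Illustrative FMEA occurrence table: each 1-10 ranking labels a probability
# band. The band boundaries below are hypothetical.

OCCURRENCE_BANDS = [  # (upper probability bound, ranking)
    (1e-6, 1),   # remote
    (1e-4, 4),   # occasional
    (1e-2, 7),   # frequent
    (1.0, 10),   # very high
]

def occurrence_ranking(p):
    """Map an estimated event probability (between zero and one) to the
    ranking of the smallest band that covers it."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("probability must lie between zero and one")
    for bound, rank in OCCURRENCE_BANDS:
        if p <= bound:
            return rank
    return 10

print(occurrence_ranking(1e-7))  # -> 1
print(occurrence_ranking(0.05))  # -> 10
```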
DR.
BOEHLERT: Others? If not, it is time to break for lunch. We will try to reconvene as scheduled, at
1:45, for committee discussion.
[Whereupon,
at 12:56 p.m., the proceedings were recessed for lunch, to resume at 1:45 p.m.]
A F T E R N O O N  P R O C E E D I N G S
Committee Discussion and Recommendations
DR.
BOEHLERT: It is time to get
started. FDA has asked us to answer
several questions today. They are in
your handout that has the agenda. There
is no page number, but topic number one that we are to focus on is quality by
design.
There
are three bullet points that we have been asked to address: Articulate a clear description of the term
quality by design. Identify the type of
information and knowledge most useful to assess quality by design. Regulatory approach for assessment of
pharmaceutical development knowledge to maximize its value without impacting
drug development.
We
have an hour for this discussion. Our
goal is to come up with some concrete proposals, not just to have a
free-wheeling discussion but actually come to some proposals that we can leave
with the agency. Would anybody like to
get us started? I don't know why I am
looking at this end of the table--Pat?
[Laughter]
DR.
DELUCA: I wanted to make some comments;
what I heard today was very informative and to a great extent I think people
were saying the same thing in a different way.
I think Ed Fry, you know, indicated that for decades we have been
talking about building quality into the product. I think it goes back to a conference that I
was at in the 1970s and it evolved into an FDA handbook. I will have to send the reference to
Ajaz. It was published in 1973.
I
guess what Ed had said about it being a culture, the new thing of building quality into the product is a matter of culture. Now it involves most of the elements in the pharmaceutical company, you know, from research to development to quality control, manufacturing and even the regulatory component, and now having a tie-in with the regulatory agency, the FDA.
I
guess what I see here is quality design, as we have talked about, coming up
with a description, is a dynamic process.
It entails both learning before doing as well as learning by doing. I think there is a balance there and I think
you have to, at some point, get on and learn by doing, by experience.
So,
I see as the definition of quality--I know it was brought out too in discussion
where the patient, you know, was involved here, and satisfying the
patient. I think that, one, that is the
pharmacological aspect of it, which is not easily clearly defined or
measured. I think once we know that the
product has a pharmacological effect and gives a therapeutic benefit, then I
think when we talk about quality we are talking about the product and the
process and specifications that go with that.
I
think also with regards, you know, to learning before doing and learning by
doing we get into the clinical trials often with a product. Certainly, that is the same thing today, a
company wants to get into the clinic as quickly as possible, and we get there
by probably not development--I mean, development is still ongoing while
clinical testing is going on. The INDs,
as was brought out, certainly lack all of the detail. There are things that aren't in there that
should be in there. I know, from
experience from conversations and going into the clinic and preparing INDs,
that discussions such as we don't know this or we should know this but let's
get to the clinic and we will do these other things later. But oftentimes that "later" never
comes and there is reluctance to do anything later.
I
guess what I think that I would like to bring out here is that somehow that
this design of quality has to be clearly expressed as a dynamic effort that
continues. It continues in the
development stages while the clinical testing is going on, and it must continue
post-approval to a product. You know, as
we talk about process improvement, however you define the process complexity or
what-not, certainly I think the incentives to improve a process are either
lower risk or lower cost so that a company is going to either reduce the cost
or it is going to reduce the risk that is involved in that product. I think they should be encouraged to do
this. I hope when we talk about lower
cost that that lower cost doesn't mean just a savings to the company but it is
passed on to the patient as well. So, I
think there are some ethical and humanitarian issues that are involved here
with regards to costs.
There
was one thing I would like to bring out which Janet had said, that FDA, in
looking out for the safety of the patient, one of the things they weren't
really concerned with is pricing. In
some respect I think we can't say that entirely because oftentimes if the price
gets too high there are a lot of people who can't afford the medications. So, I think price is something that should
not be excluded from some of the considerations.
But
I think when we talk also about how much risk to accept and how fast to get a
product to the market or to make changes, I guess one can't avoid--and I think
it was brought out by the last speaker--legal issues. There are legal components that can play a
role here. We know very well that
products that are on the market, if they have a side effect, like Vioxx,
there are going to be legal issues and we will have the lawyers advertising to
seek out patients to try to get them in class action suits.
So,
I think these are also things that probably come into this. But I guess one of the things that I wanted
to stress here is that quality design is a dynamic process and that it should
continue post-approval of a drug, and I think this should be brought out in any
kind of description that we give of quality by design.
DR.
BOEHLERT: Thank you, Pat. Others?
Tom?
DR.
LAYLOFF: I want to play too! First of all, I want to talk about quality
and I am going to use it interchangeably with fitness for use. We will walk away from the safety and
efficacy side because I don't play in that box.
So, quality by design is establishing a formulation and manufacturing
knowledge base which is sufficiently robust to allow manufacturing of product
which consistently meets requirements, as sort of an over-arch. And, the type of information and knowledge
most useful to assess quality by design is the identification of stressor elements
in the critical control points and robustness of those critical control points.
The
regulatory approach for assessment would be an output orientation as a number
of OOS or failures in the control systems.
So, I see it more as establishing the dimensions for keeping the system
in control and building it by identifying stressor elements at the control
points so that you can see what the control dimensions are for incoming
materials and manufacturing. Then, lastly,
the way of assessing it would be how well is it staying in control. Those are my two cents. Nozer is not going to like that.
DR.
GOLD: Let me just ask Tom a
question. I think you have a lot of very
good elements but you talked about meeting requirements and that smacks of
specifications, and I don't think specifications are really the same as fitness
for use. Fitness for use includes
specifications but it goes beyond specifications, from my perspective.
The
comment that was made earlier today, I think by Norman, was that, say, a product
becomes contaminated, you can't test for it; it meets all the specifications
but it is not fit for use. So, I would
suggest perhaps we change the wording when you said meets requirements to meets
customers' needs or meets fitness for use.
DR.
LAYLOFF: I have a problem with untoward
contaminants because I end up in a universe and I don't know how to deal with
it. For example, I had a heated
discussion with somebody from Food Chemicals Codex because they changed the
limit on lead in sucrose to a tenth of a part per million. I asked them if they did that on the basis of
a health risk and he said, no. And, I
said why did you do it? He said because
it was technically feasible and nobody objected. I said, well, what is your cadmium
limit? He said we don't have one. I said, well, cadmium is toxic also. You really should have one and I doubt if
anyone would object. How about bismuth? How about some other elements that are
probably more toxic than lead? Plutonium
for example? You might get a limit for
plutonium. He thought I was being
facetious but, of course, I thought he was nuts.
[Laughter]
DR.
BOEHLERT: Nozer I think was next, and
then Efraim.
DR.
SHEK: Yes, I just want to propose a
definition, I think that is the first assignment, and basically using one of
the slides that Ajaz has and modifying it.
It will be a higher level than, Tom, what you are proposing. For example, it can read as follows: What is quality by design? Design based on pharmacokinetics, formulation
and process understanding as it relates to the intended use of the drug
product. We can then go into each one of
those areas and ask what information do we need to know the
pharmacokinetics. For example, do we
want to know the clearance? Okay, if we
know the clearance we know how to take the next step. Formulation, physicochemical properties of
the drug substance. Then process, you
know, we talk about the manufacturability once we decide on the dosage
form. I feel that will be comprehensive,
talking about both the quality with regard to clinical performance as well as
translating it to a manufacturing environment.
DR.
BOEHLERT: Questions or comments?
DR.
LAYLOFF: I want to walk away from the
clinical because it is so noisy.
DR.
SHEK: On the other hand, the product
that we make is supposed to work and somehow, if I go and design a tablet and I
don't get the blood level that I need, it doesn't matter how well I make the
tablets. So, at some point in the design
there has to be something to do with, you know, the efficacy. I don't know whether pharmacokinetics is the
best but that is something that I can understand.
DR.
LAYLOFF: Okay, but then you play with a
30 percent window or more.
DR.
SHEK: Thirty percent window?
DR.
LAYLOFF: If you give solutions to people,
how many people compare if you do multiple tests on different people?
DR.
SHEK: But I think that really will
depend on the pharmacokinetic profile.
There is a lot of variability there and you have to take it into
account. If it is pretty robust and you
don't see a lot of changes--it depends on the biology of the compound. You will have to play around with it. In some cases you will try to have tighter--I
won't say specification but tighter requirements.
DR.
BOEHLERT: Now, Nozer, did you have a
comment?
DR.
SINGPURWALLA: I am going to speak what
comes through my mind. The question that
is asked is articulate a clear description of the term quality by design. That is what we are asked to articulate. We are also asked to articulate clearly.
So,
what goes through my mind are the following things. I took a course on quality control long, long
ago. I also happen to have worked with
W. Edwards Deming and have written a paper with him. So, I have some idea of the history of what
is going on. When I took a course on
quality control and reliability the particular subject matter was an
understanding and study of variability.
How do you understand variability; how do you study it; how do you
control it to whatever extent you can and, based on the variability that you
observe, what kind of actions and decisions you make. That was the state of affairs for a long,
long time.
Then
comes along Taguchi who essentially said the following, he introduced the slogan "quality by design" and that became a slogan and what was he
basically saying? He was basically
saying two things. Now, whether this was
his original statement or whether it trickled down from the likes of Deming and others or George Box, I am not sure. But basically his claim was the following, that quality control should be proactive, not reactive. The way the
old books were written was that the designers designed; the manufacturers made;
and then the quality control people came in at the end and watched everything
and reported what they saw.
So,
Taguchi introduced this notion that quality should be more active and he said one way you can do quality control more actively is to use design of statistical experiments which were used in agriculture. That is why we have the slogan "quality by design."
We
now have this new verbiage, quality by design.
The thought that goes through my mind is that whatever you do to produce
good quality, you should think about it way in advance, use all possible
methodologies that are available to you, which includes experimental design,
pharmacokinetic experiments and so on and so forth. So, to me, the term quality by design simply
means think about quality right from the word go.
Now,
I would like to suggest that this committee, if possible and others if possible,
watch an excellent program on public television. It is called "Building an Airplane for
the 21st Century." It is a story of
how they build and designed the 777. It
is a seven-part program on PBS.
Basically, they are essentially doing what I think you are trying to
do. They start with a concept. They bring the designers, the manufacturers
and the customer--this happened to be United Airlines--and essentially they
designed this airplane, which ran for the first time successfully and, thank
God, nothing has gone wrong with it as yet.
But
I think that gives me a signal of what one means by quality by design. I still cannot articulate it very carefully,
other than the fact that it simply means think about everything. Otherwise it is just a slogan. Perhaps it is still a slogan.
DR.
BOEHLERT: Other questions? Comments?
I think we have heard some variations on the same theme here, except
perhaps for Nozer's last comment which was a little bit different. I guess Tom's comment, you know, meet
requirements which includes specifications and other things that were brought
out. I think Efraim brought in the
concept of pharmacokinetics. We heard
comments about involving nowadays most areas of companies. We talked about a dynamic process, you think
about things early. You know, it is a
concept that gets started--you know, learn before doing kind of concept and we
need to satisfy patient needs.
So,
I don't think we are saying completely different things here. I think we are all going around the same
issue and I don't know if that is helpful to you, Ajaz.
DR.
HUSSAIN: No, I think I like the previous
comments quite well. I think that is
something we have thought about in a similar way, Taguchi's approach and so
forth, and you will see a lot of those thoughts in our draft PAT guidance which
are already sort of captured in that thought process.
At
the same time, I think I would like you to keep in mind--at least my personal
perspective is this, quality by design and all the effort that goes into
designing a product, especially formulation and manufacturing, a lot of it
already exists. One of the challenges we
face is that we have to make decisions in absence of some of that information
and our decisions, therefore, have to be extremely conservative. We essentially create this as an art. You may have just achieved a formulation and
manufacturing process just by chance and then when you repeat it you don't
understand anything more. So, somebody
who does that is in the same box as somebody who has really put all their
effort in designing experiments, doing an optimization, and so forth.
So,
one of the objectives of quality by design is to differentiate between those
groups, so as to give advantage or give incentives for people who do the right
things.
DR.
BOEHLERT: Tom?
DR.
LAYLOFF: I think I was very interested
in Nozer's discussion because the customer could be identified and in the case
of pharmaceutical products is the surrogate customer. In the case of 777 also all of the
operational definitions and variability issues could be previously identified
and programmed; it was in control. So,
the design was conducted in control with a known consumer. Now, the FDA is a known consumer, our
surrogate consumer. But the knowledge
base for it is available but not visible so you don't have the design under
control as far as the consumer is concerned.
DR.
GOLD: Ajaz, what you need to elaborate
on is if you follow the path that you just discussed, and that is the path that
we have been talking about for a while now, how does one proceed unless
guidance is also given as to what the requirements really must be to satisfy
the objective that you discussed, that robust development has been performed,
multivariate experiments have been run, independent effects, confounded effects
have all been determined. How can you
establish this without providing the requirements in clear form so that
companies can follow them? I did not
think our pathway was moving toward that type of regulatory guidance but it may
be necessary so I would like you to elaborate, if you could.
DR.
HUSSAIN: Well, I think my thought
processes were more focused on a change situation, and I think we can actually
start defining things in that mode. For
example, change in zip code can be a post-approval change and can be a
prior-approval supplement if it is a modified-release dosage form. So, you are keeping everything the same. You are moving your factory to a different
location. If the product is such, you
may need a prior-approval supplement.
You may need a bio study. You may
need a stability study of three batches, and so forth. So, essentially the requirements are set.
So,
one could take that and say what are the concerns here. One approach could be that the dissolution
test that you have, even if the product meets that criteria, is not giving the
regulators the comfort they need to say, all right, the product has not
changed. So, you have identified a
limitation or perceived limitation in a test method that probably is holding a
decision back and they would like to see additional testing done to make a
decision.
Similarly,
I think with respect to stability testing you need three batches of stability
data. Keep in mind that when you get a
prior-approval supplement what you receive is maybe three months of accelerated
data and whatever real-time data you have.
But the review process, and so forth, often is such that by the time we
approve it you actually have more real-time data on that and often there is not
enough shelf life left so you throw away the batch.
I
am going back a few years, in talking to the review chemists one of the
concerns in terms of stability testing, one of the biggest concerns that comes
up is that the accelerated stability studies are not fully indicative of, for example,
the shelf life, especially when the shelf life is associated with physical
attributes. The reason for that is the
basis of the accelerated stability studies truly may not be valid for predicting
physical changes. So, I think you sort
of start taking layers and layers of concerns out and then you can sort of
structure the discussion.
So,
one aspect that I could think about is if we understood what are the critical
variables and how they are impacted, how they control to a higher degree, then
we could say, all right, this change is really not of major concern because the
process is well understood. You may
still do some of the qualifying and additional testing that may be necessary
but that could be handled within a quality system within the GMP aspect, and
not have to wait for a supplement and wait for the process, and so forth.
So,
it is easier for us to actually think in that mode because you at least have
well defined endpoints that you can discuss.
In the development it is a bigger question and that is the reason I
proposed that we should probably start thinking about this in the post-approval
world because I think there we can hone down to the key questions.
DR.
BOEHLERT: Nozer?
DR.
SINGPURWALLA: I would like to make a
proposal to move forward regarding this first bullet, articulate a clear
description of the term quality by design.
Certain times in the sciences, particularly the mathematical sciences,
you take certain things as axiomatic; you don't question them. One of these is
the Declaration of Independence by Jefferson: we take it to be true that
everybody is equal, or something.
Basically it is an axiom. You
basically don't question the axiom.
So
my proposal is the following, take the term quality by design as an axiom. Don't try to articulate on it and don't try
to explain it but go to the next step and try to see what kind of information
and knowledge is the most useful to assess quality by design. Take it as an axiom and then start looking at
its attributes. This way you will make
progress, otherwise we can spend the next so many years trying to define
something which is, to some extent, vague.
You know, it is a catch-all expression and one could think of it as
experimental design; one could think of it as specifying requirements; one
could think of it in so many possible dimensions. So, my proposal is to just take it for what
it is and then go further down the line and then see if we can come back and
revise it.
DR.
BOEHLERT: Okay. Gary?
DR.
HOLLENBECK: I like that. It moves us to bullets two and three.
[Laughter]
I
think that is where we have been spending most of our time talking about, those
two aspects. I am probably going to
ramble a little bit here too but I think to proceed we need to think in the
current context, the current system that we have and make progress within the
things we understand. Some of the things
we have been trying to debate here today we have debated even before the '70s
and we will debate long after we leave here.
As
I flash back, the term developing meaningful specifications has always rung
really true to me, and I think that is a clear part of what we are talking
about here. Your goal is to create
incentives for a broader development context so that companies do it and
communicate it to you. I think that is
an essential part of this.
The
other essential part of that process is the identification of those things that
really matter, whether you call them critical process parameters, critical
variables, critical components. Within
the experimental designs that you are doing the identification of those things
that we really need to monitor and follow is the second step, it seems to me.
Then,
I can't help but link that second step to the PAT initiative, and that is to
find better ways to efficiently determine whether or not you are meeting those
specifications and not be redundant. So,
I think those three things are really part of bullets two and three, you know,
how we can accomplish quality by design, which is clear to everybody now.
[Laughter]
DR.
BOEHLERT: Diana?
MS.
KOLIATIS: I like your comment about
moving off to the second part of the question, but ultimately you have to
decide what you are building here. I
think you are either building a sports car or you are building a sedan for the
family. Once you have made that
decision, what you are building, then the content and format of your
development data takes on a certain path, and that, I think, is what we
are looking at, what is the content and what is the format that needs to be
presented to the regulatory customer, as Tom said. That is the other customer that you want to
keep in mind. What needs to be presented
so that that regulatory customer can evaluate your thought process on how you
came up with your process to manufacture either the sports car or the
sedan. That choice is yours. We are not telling you what to manufacture,
but once you come up with that decision, then what was the thought process to
make the best sports car or the best sedan.
So,
I don't know if we can get away from defining the term quality by design. I think Janet tried to put out some concepts
about what are we trying to make here. I
think we should ultimately get back to defining it but I think what you are
talking about is the content and the format to allow FDA to come in and make
that assessment of the thought process, not to tell you what that thought
process should be and not to tell you what the specs should be. That, I think, is what the company needs to
do and then we need to sit down and look at that thought process together and
say yes or no.
DR.
BOEHLERT: Ajaz and then Garnet.
DR.
HUSSAIN: I think I agree with Diana's
comments. I think the discussion on
quality by design, I think I would like to sort of point you to Gerry
Migliaccio's slide. I think he presented
that in the context of manufacturing and I think that has some relevance here.
But
I do want to go back to sort of the issue of quality by design, different
levels of that. For example, if I have a
choice between an immediate-release tablet versus a controlled-release tablet,
if I know that the features of an immediate-release tablet will lead to certain
adverse effects because of the high peak concentration, and so forth, then
there is an advantage for doing it in the controlled-release product or a
transdermal product. Then, that is a
design feature. I think that clearly is
sort of one element of the design.
I
am not sure if we are sort of talking at that level yet. I think our discussion has been that this is
the product feature that we have selected so there is a clinical link to that,
and I think clearly that is a very important discussion but for this committee
I think we have made the decision; we are making this particular product; it is
going to be an immediate-release dosage form.
Now let's design the formulation and the manufacturing process to
produce that in a consistent, reliable way.
So, that is the part of the discussion that I think we need to focus on.
DR.
BOEHLERT: Garnet?
DR.
PECK: We have heard a number of comments
about understanding the process. We have
heard a few comments made on what I am going to say now. I think we have delved into manufacturing
science. I am still concerned about
material science. I think the elements
certainly for obtaining quality by design are achievable but we have to also
remember that the material we are starting with has to be well defined, both
the API and the excipient.
The
excipients are very dear to me because many of them are commodity items and we
make some judgment based on small sampling of very large amounts of
material. I was reminded of this in a
presentation I made in June to Food and Drug inspectors and reviewing
chemists. They reminded me about looking
at just a small sample of an excipient that is available in extremely large
quantities.
I
am still concerned about the material science of everything that we are trying
to put into a particular dosage form. We
know by processing that we can modify these various substances, but do we know
everything that we can know about either the active or the excipient? I think we have a long way to go yet in
pharmaceutical material science and we need more effort in this particular area
to stabilize what we are going to propose from the processing concerns that we
have. I think understanding our
materials is going to contribute to quality by design.
DR.
BOEHLERT: Other questions or
comments? Ajaz?
DR.
HUSSAIN: I think I would like to sort of
respond to Garnet on that. I think that
is a very important point and I think the task ahead in terms of how you want
to do material characterization is a significant task.
My
concern, as I think Garnet also pointed out, is that for many of the materials,
the commodity items, and so forth, the resources needed to characterize the
relevant functionality and develop measures and test methods for measuring
functionality I think--USP wanted to sort of pursue that. But in my mind, that will take twenty years
for us to really get there. That is a
bigger societal issue and we don't even have the infrastructure, academic
infrastructure to even start tackling that problem. So, that is one piece.
The
API, on the other hand, is quite well characterized. We have that information. So, I think my way of thinking over the last
couple of years has moved to saying we know this is a highly variable
material. We take that as an axiom and
say this is highly variable. This is a
commodity material and we mix commodity material with our API which is so well
characterized. Therefore, the current paradigm
that we have that we manufacture to a fixed time, I don't know whether that
really is fully supported. It is a
dichotomous situation.
So,
that is the reason the PAT guidance emphasizes that, for instance, if we learn
how to understand the variability and then manage that variability, this
variability will remain. So, if you move
to a process approach or process design that gets to endpoints that are more
meaningful, instead of time as an endpoint, then that is one way forward. So, I think I wanted to add that.
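A minimal sketch of what such an endpoint-based control could look like, with invented readings and thresholds rather than anything taken from the PAT guidance itself: blending stops when a moving-window relative standard deviation (RSD) of in-line content readings falls below a limit, instead of at a fixed time.

```python
# Illustrative endpoint-based control: stop blending when the %RSD of a
# moving window of in-line readings falls below a threshold, rather than
# blending for a fixed time. Window size and limit are invented.

def blend_endpoint(readings, window=5, rsd_limit=2.0):
    """Return the index at which the moving-window %RSD first drops
    below rsd_limit, or None if it never does."""
    for i in range(window, len(readings) + 1):
        w = readings[i - window:i]
        mean = sum(w) / window
        var = sum((r - mean) ** 2 for r in w) / (window - 1)
        rsd = 100.0 * (var ** 0.5) / mean
        if rsd < rsd_limit:
            return i  # endpoint reached after i readings
    return None

# Simulated readings converging toward homogeneity (illustrative values):
readings = [90, 110, 97, 104, 99, 101, 100, 100.5, 99.8, 100.2, 100.1]
print(blend_endpoint(readings))
```

The design point is that the endpoint follows the material, so variability in incoming excipients shifts when the process stops rather than what quality comes out.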
DR.
BOEHLERT: Tom?
DR.
LAYLOFF: Yes, I agree with Garnet that
material science is really critical to the whole manufacturing process, and I
agree also with Ajaz that these critical processes have--the endpoint
identification has to be sufficiently robust to deal with that spectrum. So, part of the process design is looking at
the robustness of the process after the assault with different materials.
DR.
BOEHLERT: I think that is going to be
one of the challenges that you are going to face, do we have good tools to
assess--you know, a multi-component mixture that may be changing, a number
of the components changing at the same time--to reach a defined endpoint? And, are those tools available today? We may need to develop some new tools.
DR.
HUSSAIN: But I also want to sort of
point out that I think there is a whole spectrum of options and tools
available. One of those options is a
well-tested option. I actually have
written on that also myself. It is based
on the experience from the University of Maryland. Just to give you an example, magnesium
stearate is present in 97 percent of all products and it is a big culprit in
the problems it creates. So, the way we
control that is most of us buy it from one source. If the source changes we really would run
into some difficulty.
The
monograph approach to that in the USP does not even get to the key
functionality even from the purity perspective.
So, just because somebody qualifies magnesium stearate just on the basis
of USP, I think that is a high risk situation.
So, how does one address that? I
think there are many formulation strategies to address that, one of those being
use of a wetting agent within the formulation to make it robust and less
dependent on the effect of magnesium stearate on dissolution.
So,
there are formulation design strategies that can overcome some of the
variability. So, I do want to sort of
point out to you that there are many options.
I think if we know there is a source of variability, one way would be
new technology to sort of manage that.
The other way would be to try proven approaches. But we don't have a means to recognize that
as a robust formulation.
To
go back to Gerry's presentation, one of the criteria could be that the process
or formulation is robust to these sources of variability. If we can generalize about how we get a
robust formulation, that becomes one additional option that becomes
available. So.
DR.
BOEHLERT: Other questions or comments
dealing with the type of information and knowledge most useful to assess, or
any of the other bullets?
DR.
SINGPURWALLA: I am not sure if we have
moved forward from the first bullet--
DR.
BOEHLERT: I am not sure either.
DR.
SINGPURWALLA: Assuming that we have not,
I am going to take a second crack at an attempt to move away from it. So, I am going to propose a definition. Quality by design is the process of achieving
acceptable quality by a methodical and systematic scrutiny of all elements that
go into characterizing quality from inception to end use. That is sufficiently general; sufficiently
nebulous; sufficiently meaningless.
[Laughter]
DR.
DELUCA: I would accept Nozer's
definition as one alternative. I always
like to have a clear description of something but as I heard the discussion I
really think we ought to get on with item two because we can always come back,
and after we know what information we need we can always come up with some sort
of a definition. You know, the type of
information is going to vary by the product.
DR.
HUSSAIN: I would agree with that and I
think that is where it is more fruitful and more useful for us because I think
the type of information then gets associated with the intended use, the risk,
and everything and I think fine-tuning and definition can come later on.
DR.
BOEHLERT: Gary?
DR.
HOLLENBECK: I am going to phrase things
slightly differently. Here are things I
don't want to do, I don't want to endorse a process where we are trying to find
out everything. I think Garnet said you
would like to know everything about the materials and we all know that is
impossible and we really can't afford to do that. I want to know more about the things that
really matter. I think that is what we
are focusing on.
The
second part of that is I also don't want to wait for FDA to provide the kind of
detailed guideline that I think Dan was asking for. I know you hate to hear these words but I
think it is case by case pretty much, especially at this stage in the process. If you are building an SUV it may be different
than if you are building a compact. I
think you have to engage the industry on a case by case basis as you look at
these development portfolios.
DR.
HUSSAIN: If I may, I think I agree with
that and that is the reason I think my thought process sort of focused on
post-approval changes because that provides a flexible means to sort of engage
in that sort of a discussion and create some aspect of what Colin will be
talking about later on. So, that was
sort of my motivation of getting to that, not directly but indirectly. Then, once we learn a bit more, then bring it
up in a later forum. So.
DR.
GOLD: Gary, I don't know how you could
go out with a statement saying you want development information without
defining the type of information because the industry is going to ask you
this. I just cannot understand how we
could not be prepared or say that this is going to be needed. I do agree with you it is a case by case, but
the industry is certainly going to ask for guidance on what are the parameters
that we should be looking at.
DR.
HOLLENBECK: But I think we know
that. Maybe we could have our second
axiom: we should do good experimental designs.
DR.
GOLD: And we should celebrate
motherhood!
DR.
SINGPURWALLA: That should be a theorem!
[Laughter]
DR.
HUSSAIN: I think the point is well
taken. I think that is the challenge
that we will have. I actually put on the
table the FDA University of Maryland research model, which sort of starts on a
small scale doing screening experiments and then do response surface analysis
to look at the response, and the impact of different variables on that
response, what the impact is. So, it is
a more structured approach to that. The
draft PAT guidance is saying, all right, from a knowledge perspective what are
we looking for?
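The screening step described here could look, in rough outline, like a two-level factorial design with a main effect computed per factor. The factor names and response values below are invented for the example and are not taken from the FDA/University of Maryland work.

```python
# Hypothetical screening design: a 2^3 full factorial on three coded
# process variables, with main effects computed from -1/+1 settings.
from itertools import product

def main_effects(runs, responses):
    """runs: list of tuples of coded settings (-1 or +1);
    responses: measured response per run. Returns one effect per factor,
    i.e. mean response at the high level minus mean at the low level."""
    n_factors = len(runs[0])
    effects = []
    for f in range(n_factors):
        high = [y for run, y in zip(runs, responses) if run[f] == 1]
        low = [y for run, y in zip(runs, responses) if run[f] == -1]
        effects.append(sum(high) / len(high) - sum(low) / len(low))
    return effects

# Coded factors, e.g. granulation water, mixing time, lubricant level:
runs = list(product([-1, 1], repeat=3))
# Invented dissolution responses for the eight runs:
responses = [72, 75, 80, 84, 70, 73, 79, 82]
print(main_effects(runs, responses))
```

In this invented data the second factor dominates, which is the kind of ranking a screening round is meant to surface before any response surface work.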
Now,
the challenge that comes in any product development, whether it is
pharmaceutical or any development, is that the developer or the formulator
brings past knowledge to bear on this.
Okay? So, that is one critical
element that I think is very valuable because, in absence of that, if we
suggest you have to do design of experiments, the number of variables that we
have to deal with, the complexity of the designs would be out of reach. So, that is not what we are talking
about. We are talking about bringing
past knowledge to bear on decisions, which then become more rational and
structured, to define a program that leads to a satisfactory outcome of what
the intended use was. There is a structure
to the information that then becomes knowledge.
I think that is what we are looking for.
To
add to that because Dan did ask in a sense, my preference here is not to give a
detailed guidance because if we do, the unintended consequence of that is that
we will encroach on the development programs, and we don't want to do
that. I think if we simply define the
objectives that we seek, to understand the value of the controls that you have,
the ability to mitigate risk, what is the relevance of the specification, when
there is a process change meeting the same specification, what does that
mean? Does it give us the satisfaction
that the performance will remain the same, or was the acceptance criteria or
the test method not sufficient to handle the changes that you may have?
Just
to give you an example, we established correlation between dissolution and
bioavailability. All right? So, there is an established correlation. The way we accept that correlation, it is a
type "A" correlation, point to point, that brings it closer to being
causal but is not causal yet. So, if you
change your manufacturing process significantly and you still qualify your
change based on that correlation, that correlation may have been formulation
specific and with certain changes in formulation the correlation will not hold. So, I think that is how we can sort of
approach that.
DR.
BOEHLERT: Tom?
DR.
LAYLOFF: One more comment on the
development issue, I like the idea of submitting post-approval because it gets
around the risk of stalling at approval, which could be a big issue. The firm, of course, has to invest in the
development to get a viable product for the approval. Then, going on beyond it into the design for
any post-approval would I think be a more palatable option. I like that one better.
DR.
BOEHLERT: I have been thinking about
that as well and I think that has barriers as well because post-approval
assumes that you have very good knowledge of your current process. If, in fact, you don't and it isn't up to
today's standards, perhaps then it is far more work and far more involvement to
try to decide what it is you are going to give to the agency because, in fact,
you put your prior process at risk.
We
have about ten minutes or a little less in our discussion period. Joe, did you have some comments?
MR.
FAMULARE: No, I agree with your concern
on the post-approval changes. If it is
not framed properly, post-approval changes could be fixes for a product that
is not well developed in the first place.
So, we have to be careful how we frame post-approval. If it is an accretion of process knowledge
that relates back to the original development work, that could be a logical
progression. But if it is to try and fix
what wasn't done properly in the first place, well, isn't that kind of where we
are at?
DR.
HUSSAIN: But I will add to that. In fact, I won't say most but a large
proportion of development occurs after approval.
DR.
BOEHLERT: Tom?
DR.
LAYLOFF: I think that the manufacturing
experience at the time of approval is very, very limited and that the knowledge
base is increasing all the time. I
actually sort of like the idea of interim specifications to allow that
evolution to occur under a regulatory blanket.
I put this under the same thing, after the approval occurs and the
manufacturing is under way and you get more experience with different
excipients, different material issues, you actually are redesigning to deal
with the variable material science and process science.
DR.
DELUCA: I have a little bit of a problem
with conveying the idea, you know, about the post-approval that you are trying
to fix something. To me, okay, that may
be true but so what? Let's fix it. I mean, whether it is improving the process
or fixing it, let's do it.
[Laughter]
MR.
FAMULARE: But fixing sometimes is not really
fixing; it is just mitigating something and just going on till the next time
something comes up as opposed to going back and finding a root cause, or really
finding the problem. That is the sense
that I meant it in. You should fix it
and we should have a lifecycle approach to dealing with a product in terms of
what you learn over time. But, you know,
many of the paths of action that are taken, as we talked this morning about the
regulatory framework, well, let me fix it so much that it stays within my
approved specifications and filings, and so forth, because that may pose a
bigger risk than really getting to the root cause of the problem.
DR.
DELUCA: I was only being critical
because it may create the impression on the part of the manufacturer that I
don't want to go in with this change because they will feel we are fixing
something because we did something wrong in the first place, when that may not
be the case. So, I think it is best to
improve the product for the benefit of the patient and to lower cost.
DR.
FAMULARE: And I think the overriding
question is how much of that latitude could be in the hands of the firm in
terms of the regulatory filings. I think
that is an important element in terms of what I was talking about this morning,
improvement. How much can you improve
and keep improving and keep on that paradigm without the regulatory scrutiny so
that you can truly improve, and how much do we need to come back into that to
make sure that the product does act the way we felt it acted when we approved
the product in the first place.
DR.
BOEHLERT: G.K.?
DR.
RAJU: I agree that we need to have
experimental design as an axiom, but even before we get there, in terms of the
process development knowledge one of the first things I think we should have,
whether it is in the record or not is to be debated, is the boundaries and the
basic failure modes of this process in terms of its basic safety and efficacy
issues and predictability issues. I
think those come even before an experimental design. I don't know if that was talked about but
they actually are the basis for the ranges of the variables and the
specifications, and they don't have to be quantitative but I think the
qualitative ranges are really priceless information and the investigations
around them, even if they are under development, I think are very valuable
information because the failures tell you the best relationships between the Xs
and the Ys. Then you can do the
optimization later but the big stuff is the failures. The successes and the better successes can
happen later I think.
MS.
KOLIATIS: Just to follow-up on what Joe
mentioned in terms of improvements to the process, and I want to get away from
the terminology "fixing" but improving a process--in many cases the
individuals who are improving or tasked with improving the process are a little
bit removed from those who actually developed the process and the R&D that went
into that process. So, they are trying
to fix a process without all of the underlying scientific information, and the
danger that we might see is that we are going to move away from that desired
product which had a relationship to the product that was studied under a
clinical trial and gave you that desired clinical performance.
One
of the goals that, hopefully, we can see through this discussion and through
the process is to integrate production with R&D folks on a greater scale,
and have them get together in the development phase so that there is less need
to improve or "fix" after the product is in post-approval and is out on
the market.
That
is one of the things that we see in the field when we go out and do our
post-approval inspections. A lot of what
we see are problems and things companies have to deal with because of perhaps a
lack of communication early on in R&D with production. So, one group is now trying to fix it without
all that underlying information. So, I
am hopeful that some part of this process will allow for that increased
communication of these two groups.
DR.
SHEK: That is an interesting point
because I believe industry, exactly because of this point, thinks of
changing. If you follow up on the way
companies are structured, and interaction between R&D and manufacturing,
there is a big change, I would say, in most companies because of some of the
reasons you brought up.
But
we have to remember, you know, all of us were conditioned to some kind of rules
and regulations, and that is what I think many of us are reacting to. So, now we are talking basically about a new
approach and I think here we have to use the same and it should be basically
encouragement.
I
would assume a company will go and say let me try to fix the process in the
frame of specifications because it will be, you know, not today under
regulation. It will be faster to go, you
know, and fix the problem I have. But if
there is another pathway where you can add this information, do the right thing
which might take you longer in the lab but faster than to bring it to
completion, I believe we, in industry, will be conditioned to do it
differently.
The
information is there. Manufacturing and
R&D are working much closer than in the past and this information is being
shared. If that can also be encouraged
by some kind of, you know, let's say friendly regulatory approach, I think it
will, again, be a win-win situation. The
environment is right; we just have to create it. The same thing has happened, you know, with
the PAT. There is no question, if you
read the guidance--we just talked about it outside--it is unlike any
other guidance that was ever published and that is refreshing. I think we can continue this approach also in
talking, you know, about quality by design.
DR.
BOEHLERT: Ajaz?
DR.
HUSSAIN: Yes, I just want to sort of
summarize what I heard and sort of help you sort of close this part of the
discussion. Clearly, I think the phrase
quality by design is a term that we sort of all have a grasp of what it
is. It is difficult to define in words
but I think I like the idea of defining what gets you to quality by design and
the discussion was very helpful for us to sort of frame that. And, I think I was very pleased. Some of that was very consistent with what we had
articulated in the draft PAT guidance. I
think that was very, very helpful for us.
The
aspect that I think we also heard was to move it at least in an interim step in
the post-approval world because it makes more opportunities to collaborate and
to work together to really hone in on how best to do this, and creating
flexibility to achieve this in different ways.
I think there are many different development approaches that can get you
to the same end goal. So, we don't want
to be directing which is the best development approach, and so forth.
With
that, I think that was very helpful and I think after you have listened to
Colin and Greg, sort of give some more thought in your discussion in the
post-approval world on how we can approach the next steps for quality by
design, if you can consider that in the second part of the discussion that
would really help us because what we plan to do is take this discussion and
sort of structure some of the activities of our manufacturing science working
group within the GMP initiative to sort of focus on how we move in this
direction, keeping in mind that we already have two draft guidance on
comparability protocol and keeping in mind we have the SUPAC revision thought
process and how we can integrate some of this into those activities.
DR.
BOEHLERT: Tom, did you want to have the
last word?
DR.
LAYLOFF: Penultimate.
DR.
BOEHLERT: Penultimate word?
DR.
LAYLOFF: I would say the most
conscientious manufacturer is going to try and get availability, the safe and
efficacious product out the door and on the market as soon as possible. The way the materials are manufactured, since
the pharmaceutical industry doesn't sway the manufacturers of excipients,
excipients are going to be a variable and manufacturing processes really should
change to reduce cost and become more robust in time. That means post-approval. So, I think availability, having safe and
efficacious drugs out there as quickly as possible is not going to allow you to
explore all the critical dimensions of incoming material science because it
will slow availability down and increase costs with no net gain.
DR.
BOEHLERT: Gary?
DR.
HOLLENBECK: This is not the last
word. I like your summary, Ajaz, except
I don't see the need to restrict to post-approval changes.
DR.
HUSSAIN: No, we are not restricting it
but I think putting our efforts in that because I think that will yield results
more quickly. I think the regulatory
process actually is quite flexible enough if someone wants to do this right
now, and some companies are already doing it.
In fact we already have some
proposals, and so forth. So, our system
is flexible but getting to a formal guidance and other approaches, that is
where our efforts could be placed. That
is what I was saying.
DR.
BOEHLERT: Any other final comments? We are scheduled for a break right now. I would suggest we come back at three
o'clock. We might take a very brief
break and get started again promptly at three o'clock.
[Brief
recess]
DR.
BOEHLERT: We are ready to get
started. We have two presentations
scheduled. The first will be Colin
Gardner.
Quality by Design and Risk-Based
Regulatory
Scrutiny CMC: Specifications and
Post-Approval Changes
MR.
GARDNER: Well, I have to thank you,
Ajaz, for inviting me again. I thought I
had come and done my bit in May but you insisted that I come back again today
so I had to dream up some new slides to present. I also want to acknowledge Scott Reynolds,
who is executive director of pharmaceutical development at Merck, who came back
in the early '90s from a manufacturing division into pharmaceutical
R&D as an engineer and brought a lot more engineering principles into the
development of processes. I have
continued to chat with Scott even in the time since I left Merck, so part of what
I present today includes ideas from Scott.
The
CEO of our company always tells us you have to tell people at the beginning
what you are going to tell them and then you come back and tell them at the end
to make sure that they understood what you are going to tell them. So, here is what I hope to get across today,
the continuum of process development activities really starts with the NCE
selection and continues all the way through development and manufacturing
process and post-approval.
Fundamental
new chemical entity characterization and process development really lead to
meaningful control points. I agree with
what Garnet said, you know, material science is absolutely critical. I have a colleague who is a professor at MIT
for material science and he has dealt with the electronics industry all his
life, and only recently became involved with the pharmaceutical industry and he
is absolutely shocked at how poorly we define these multi-billion dollar
products. That is his view, anyway.
Success
of the scale up exercise and also process changes and site transfer is really
driven by rational comparison of meaningful process and product parameters that
we have to define during development.
Ultimately
we have to have a fingerprint of parameters that are identified to be able to
monitor process robustness, and these are not regulatory specifications but
monitoring the robustness of the process, and drift in those parameters can be
used to flag issues before you lose control of the process.
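One hypothetical way to flag drift in a monitored fingerprint parameter before control is lost is an exponentially weighted moving average (EWMA) chart; the target, sigma, smoothing constant, and data below are all invented for the sketch, not drawn from the presentation.

```python
# Illustrative EWMA drift monitor for a process fingerprint parameter:
# flag readings once the smoothed statistic exits its control band.

def ewma_drift_flags(values, target, sigma, lam=0.2, L=3.0):
    """Return indices where the EWMA statistic exits its control band.
    lam is the smoothing weight; L sets the band width in sigmas."""
    ewma = target
    flags = []
    for i, x in enumerate(values):
        ewma = lam * x + (1 - lam) * ewma
        # Steady-state control limit for the EWMA statistic.
        limit = L * sigma * (lam / (2 - lam)) ** 0.5
        if abs(ewma - target) > limit:
            flags.append(i)
    return flags

# A parameter drifting slowly up from its target of 100 (invented data):
values = [100.1, 99.8, 100.3, 100.6, 101.0, 101.4, 101.9, 102.3, 102.8]
print(ewma_drift_flags(values, target=100.0, sigma=0.5))
```

Because the EWMA accumulates small shifts, it raises a flag while individual readings may still sit inside ordinary specification limits, which is the "flag issues before you lose control" behavior described above.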
So,
that is what I would sort of like to get across today. Let me start and be a little bit on the
social science rather than the hard sciences here because I think there are
some aspects of that involved here as well.
So, issues within the industry themselves--this is data from PRTM, a
consulting company in Boston, basically saying that it takes anywhere from 6.5
years to 13.5 years to develop a product, and it can cost up to 800 million
dollars according to the latest figures from Tufts and Bob Ruffalo from Wyeth
who quoted 2.4 billion last week so it is an ever-growing number.
But
the challenge really, and someone already said this, is to send safe and
effective drugs on to market. These are
the two things that the pharmaceutical industry is targeted towards and they
haven't been terribly successful at doing that recently so they are trying to
improve that.
But
what really are the products of a pharmaceutical company? Well, there are three products. There is the API itself. There is the marketed dosage form or dosage
forms, and there is the approved label claim that is used to position the
product in the market for the physician and the patient to use the product.
But
if you think about which one of those is most important, it is not the API; it
is not the dosage form. What really
rings the cash register is the approved label claim that is used to position
the product on the market. That is what
is important to the CEO.
So,
the consequences within the industry are that R&D tends to focus on potency
and selectivity and safety and clinical response. These are the things they monitor. They don't uniformly recognize the importance
of any investment in process chemistry and formulation development. Those are things that they figure will get
done on the way to developing the product.
They
also tend to have inexperienced clinical staff who, you know, have come from an
academic environment, they come into a company and they are put in charge of
running the clinical program, and they set timelines and targets that are
totally independent of the product development capabilities.
The
goals and the rewards of the various divisions in discovery, development and
manufacturing are not aligned.
Discovery people get rewarded when they get a compound
into development. Development people get
rewarded when it gets transferred to manufacturing, and manufacturing people
are left to suffer the consequences of those rewards.
[Laughter]
And
the CEOs really haven't regarded manufacturing excellence as a competitive
advantage. So, the industry is not the
only one to blame here. I think the
regulatory agencies have their share of the blame too.
Most
of the people who are in the reviewing divisions of the FDA--correct me if I am
wrong--tend to have analytical chemistry backgrounds and a lot of what we are
talking about here is process engineering and if you don't have a background in
process engineering how are you going to understand the information in a
development report? That is partly the
reason why companies don't send those development reports in, because the
regulators in the companies are afraid of how it will be interpreted by the
agency.
Secondly,
the timeframe to review and understand the regulatory filing is really
limited. I am sure the reviewers of the
agency are constantly working on different programs and very often it is right
down to the wire before they get around to reviewing the product and they only
have a few weeks to do that. Again, that
doesn't mean that they have a fundamental understanding of what is happening.
Then,
I think the training of compliance inspectors, particularly in the early days
of PAIs, was very, very poor. I hear that it is improving, but let me give you some examples that I encountered in
my area. These are two very, very simple
processes. I didn't go for complicated
controlled release processes; these are really simple ones.
Here
is a case where we developed a biobatch, and it was a simple mixing of excipients
and drug in reasonably viscous but not terribly viscous environment. So, it was at a 10 liter scale and, if my
memory serves me correctly, took about 15 minutes to achieve homogeneity by
sampling that process. As we scaled that
up to the commercial batch it was 100 liters and it was 45 minutes to get to
homogeneity.
When
we had a pre-approval inspection, the FDA inspector said the processes are
different because in one case you used 15 minutes; in the other case it is 45
minutes. It is not the same
process. This indicates, you know, the
fundamental lack of understanding of the process engineering.
Here
is another one which was even more dramatic.
In this case we have a 4,000 liter tank in which the drug is, again,
being suspended. It is an oral
suspension. It goes to a filling tank
which feeds the filling line. In this
pump the tubing is flexible, is capable of adsorbing things. Of course, since it is an oral suspension it
contains a preservative.
So,
in development we asked ourselves if this line were shut down for one reason or
another, how long would it have to sit there before we started to adsorb the
preservative and it would drop below specification? So, we did those experiments in the lab. Then we went to the manufacturing division
and we ran an engineering run, not a validation run, an engineering run to
figure out where we were going to go. We
ran that for six hours and we showed we got a certain amount of adsorption in
those six hours and we had time points all along. So, we established a control point, and if it
was shut down for more than 15 minutes we would empty out this line and then
continue the filling process.
At
the pre-approval inspection, the inspector's conclusion was that not only did
we have to throw this out, but we had to throw this out as well. Since this was days before the approval of
the drug there was no way we could argue.
So, that is what the process now requires: they have to throw this out if there is a 15-minute shutdown. Again, it does not reflect good process engineering.
So,
what can we do about this situation?
Manufacturing processes really have to start with the choice of the NCE,
its form and its formulation. They have
to link discovery, early development, process scale-up and manufacturing.
Let
me skip through this slide because I don't really have time to go through all
of it, but the key part of this is really if we are going to do this we have to
be able to demonstrate reduced regulatory risk to the agencies. As a result of that, we have to be able to
get regulatory relief for companies that have done good process development and
then demonstrate the value of that to the company management. That, to me, seems the fundamentals of what
we are trying to do.
How
do we do that? First of all, we have to
pick better development candidates. We
have to build in developability. The
processes that are used in discovery these days are targeted towards finding selectivity
at various receptors and enzymes, and that results in very much more
hydrophobic compounds that are much more difficult to formulate and to make bioavailable.
So,
we have to start somehow affecting that process. This is today's process using genomics and
libraries of chemicals and high throughput selection processes to identify
hits, moving through synthetic chemistry, looking at selectivity, metabolism,
some animal models, in vitro tox and some small in vivo tox
studies, and only after selecting the candidate for development do they bring the process chemists and the formulation people in to bear on the problem.
With
all of the constraints that are being put on the pharmaceutical development
people in terms of the number of compounds that are coming forward, the time
constraints and the results constraints, this really constrains these
people. So, the best way to address this
is to bring the form and formulation back into this process, to help build into
the molecule that you are developing the physical properties that make it a
better candidate.
So,
if you think of ways to do that, instead of just looking at potency and
selectivity and metabolism and iterating through this until you get a lead
candidate, you can actually build in ways to look at the physical properties
much earlier which then, eventually, gives you a target which is a developable
compound rather than one that just has an interesting chemical entity.
How
about form and formulation selection?
This is a very, very busy slide and I don't intend to walk you through
it, but the point I really want to make here is that, unlike developing an
airplane, you are not designing an airplane and then testing it, coming back
and testing it and flying the plane, and eventually having the final
product. Here the final product is
really defined at Phase IIB because by that point you have clinical data and
you have the dose response. Anything
after that that affects the performance of the product is not permitted. So, you are really investing in engineering
to be able to do your process development and scale-up to meet the criteria that
were established on the product in Phase I and Phase II. That is a very, very different kind of
challenge.
What
it does mean is that you have to put more effort up front in terms of
understanding your product and understanding your formulation and understanding
your excipients so that you actually have something in Phase IIB that is the
basis of a good Phase III development.
Traditionally,
this has been somewhat of a black box and you can use this black box to
represent anything. I am just talking
about solid forms here. If you only do a
limited number of experiments you may only find a couple of solid forms in this
box but, in fact, if you now move to using some of the high throughput
technologies that are available, if you cast a flood light on this, you can
find all of the forms that are in here whether these are polymorphs or salts or
hydrates, and you know you have much, much more information, and you can gain
this very much earlier in the process with very much smaller amounts of
material.
Let
me show you an example that I used at Merck to say that we really had to have
pharmaceutical people work with the discovery people to pick a candidate for
development. Here is a compound that
came forward. It was an antibiotic. It had great solubility but it was sort of
weakly crystalline, and since it was going to be injected we needed 10 mg/mL
solubility. Once it came into
development and the process chemists got their hands on it, it converted to
this beautiful crystalline trihydrate but the solubility was now less than a
mg/mL so that project was dead.
So,
I think this demonstrated to management that it really was important for people
with pharmaceutical and process capabilities to be working with the discovery
people to pick the best candidates to come forward.
At
the other end of the spectrum, ritonavir is the one that is represented here
and lots of companies have had this problem.
They just haven't had it with an AIDS drug after it was on the
market. But this compound, of course,
threw up a new polymorph after it had been on the market for a year and a half,
with the result that it had to be withdrawn from the market for a short period of time and be reformulated.
Again,
using modern techniques you can actually do this kind of screening for all the
different forms. In fact, we have done
some work along these lines. In fact,
there are five different forms of ritonavir.
This can be done with very, very small amounts of material and in a
very, very short time. So, this is the
kind of activity you can do to build this information into the development
process.
Here
is another example. This is the marketed
product here. You can actually make a
salt form of this drug, which had never been made before. It is a lot more soluble and when you put
that into animals you can see what happens, you get a much, much faster onset
compared to the green line, which is the marketed product. So, if you are looking at, say, pain then
onset is much more important. So, the
choice of that salt would be a much better development candidate than the
original choice of the compound that is on the market.
On
the other hand, as Ajaz pointed out, you might get side effects from this
peak. So, you might actually have to
develop a controlled-release form. The
form you take into the controlled release might not be the same form you would
use for immediate release. But knowing
all of this information allows you to do a much better job of selecting the
candidate and the formulation.
Another
example, here is the product that is on the market. As you increase the dose you increase the
area under the curve but you increase it in a nonlinear way. If you change the form or the formulation of
that product you can get it to perform in a totally linear way, with a
bioavailability that is 2.5 times greater than the marketed product.
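The dose-proportionality comparison described here can be sketched numerically. This is only an illustration: the time points and concentration values below are invented, not data from the talk.

```python
# Hypothetical illustration: checking dose proportionality by comparing
# dose-normalized AUC values (trapezoidal rule). All numbers are invented.

def auc_trapezoid(times, concs):
    """Area under the concentration-time curve by the trapezoidal rule."""
    return sum((t2 - t1) * (c1 + c2) / 2
               for (t1, c1), (t2, c2) in zip(zip(times, concs),
                                             zip(times[1:], concs[1:])))

times = [0, 1, 2, 4, 8, 12]               # hours
profiles = {                              # dose (mg) -> plasma concentrations
    100: [0, 4.0, 6.0, 5.0, 2.5, 1.0],
    200: [0, 7.0, 10.5, 8.5, 4.0, 1.5],   # less than 2x the 100 mg curve
}

for dose, concs in profiles.items():
    auc = auc_trapezoid(times, concs)
    print(dose, round(auc, 2), round(auc / dose, 3))

# If AUC/dose stays constant across doses, exposure is linear; a falling
# ratio at higher doses signals the nonlinearity described in the talk.
```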
So,
doing that kind of search at an early stage results in much better
products. Let me use the same analogy again
of the black box. The current norm is to
poke into this black box a little bit and figure out what is happening in the
process. Well, a much better way is to
really shed the flood light on this box and understand the process in depth.
So,
the objectives of the pharmaceutical process development really are to provide
a continuous link from these early phase characterizations of the materials to
the final manufacturing process; to define the process based on unit operations
approach; to have a road map for tracking success so that as you scale up and
you have transfer and site transfer you really know what you are doing; and
enable effective process monitoring and improvements after you are on the
market.
So,
an initial design is really important to identify the parts of the process
which are most susceptible to failure upon scale-up. If you identify those and work on those, then
you are going to have a much better process.
The way you can do that is to conceptualize the scale down of the final
manufacturing process to the pilot plant and to the lab and to carry out
experiments there that will then direct you as to what the critical parameters
should be to monitor at full scale.
In
terms of process understanding, you really need to determine the fundamental
process constraints and, where appropriate, you can utilize unit operations
which are the most forgiving. So, if you
have a choice of two different processes and one is much higher risk than the
other in terms of its ability to be controlled, you are going to go for the one
that is most forgiving. And, if you can
show that to the agency, then you can demonstrate that there is a much lower
risk with that particular product.
Identify
the underlying principles which control the process. In other words, avoid this black box analysis
and really understand what is in the black box so that you can make much better
decisions. Then, identify appropriate
process parameters to monitor and to control.
That is where the value of the process analytics comes in, which can be
done on-line and in real time. That will
then provide confidence about the process robustness and, again, make the
argument to the agency that you know what you are doing.
In
terms of process optimization, it is really important to find the regions of
the process parameters where the process is most stable, and then to design the
process to operate within those regions. If
I show this schematically, what I am doing here is reducing a multi-component
system to two dimensions, and saying that within this space here, this is the
region where the process is unstable and these are the targets we are going to
shoot for, and these will be the basis for our specifications for the product.
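The idea of reducing the system to a few parameters and locating the stable region can be sketched as a simple grid scan over a response surface. The response model, parameter ranges, and specification limit below are all invented for illustration.

```python
# Hypothetical sketch: scanning two process parameters on a grid to find
# the region where a quality response stays within specification.
# The response model and the limit are invented for illustration.

def response(temp, mix_time):
    """Toy quality attribute (e.g., blend uniformity %RSD)."""
    return 2.0 + 0.05 * (temp - 50) ** 2 / 25 + 0.04 * (mix_time - 30) ** 2 / 9

SPEC_LIMIT = 2.5   # stay below this to be inside the robust region

robust = [(t, m)
          for t in range(40, 61, 5)        # temperature grid
          for m in range(15, 46, 5)        # mixing-time grid
          if response(t, m) < SPEC_LIMIT]

print(len(robust), "grid points inside the robust region")
```

In practice the surface would come from designed experiments rather than a formula, but the principle is the same: characterize the whole space, then set targets well inside the stable region.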
But
in order to demonstrate process robustness you have to stress the range of the
variables. As Ajaz said, you have to
find out where your plateaux are. Again,
what Garnet said, you have to include the range of materials because the material
properties of the excipients are going to play a very important role. Also, the environmental conditions and
process parameters, and if we think back to the famous old days, Ajaz, of
working on site stability, the issue with site stability turned out to be the environmental conditions. It had
nothing to do with the site itself; it was poor control of the environmental
conditions. It took us a long time to
convince the agency of that.
So,
once you have done the process robustness, now you can find the region where
the process is, in fact, robust. This is
where your target is but the process is robust in this region. I don't mean to imply it is the same
parameters we are looking at here, but you have a set of parameters to define
the robustness of the process.
Then,
by going through your process design you can have measurable quantitative
endpoints, again using PAT; eliminate any dependence upon qualitative
endpoints; evaluate how the process can respond to variations in process
equipment performance and raw material characterization; and then provide a
continuous fingerprint of process performance.
Again, this should not be a regulatory requirement. These should be parameters that the company
tracks to monitor whether or not the process is still in control.
Also
provide hooks for future process development.
So, plan into your development program the collection of these
fingerprints that you can use for future comparisons when you change site, or
when you modify the process, or you change the excipients. Design a validation protocol to collect
similar fingerprints. So, your
validation protocol should be designed based on the process that you are
validating. I once met someone from
validation who said that his job was to read all the reports of recent FDA
inspections and to be able to answer every one of those questions when they
came in to check the validation. Our
development person said we are validating a particular process. What FDA did last week at Wyeth or Pfizer has
nothing to do with it; we have to define the validation protocol that is
relevant for this particular process.
Then
use these parameters in manufacturing to continually monitor the process,
monitor its operation and its status.
When you do that you have a subset of these parameters that you can
monitor, and these become the fingerprint region, so that you can see whether the
process is robust and prospectively identify drifts before your specifications
start to go out of control.
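The drift-flagging idea can be sketched with a one-sided CUSUM monitor, one common way to detect a slow drift before a specification is breached. The target, slack, and threshold values below are invented, not anything from the talk.

```python
# Hypothetical sketch of flagging process drift before a specification is
# breached, using a one-sided CUSUM on a monitored fingerprint parameter.
# Target, slack (k) and decision threshold (h) are invented values.

def cusum_alarm(readings, target, k=0.5, h=4.0):
    """Return the index of the first high-side CUSUM alarm, or None."""
    s = 0.0
    for i, x in enumerate(readings):
        s = max(0.0, s + (x - target) - k)   # accumulate upward deviations
        if s > h:
            return i
    return None

# A slow upward drift: each batch reads a little higher than the last.
batches = [10.0, 10.1, 10.3, 10.6, 11.0, 11.5, 12.1, 12.8, 13.6]
print(cusum_alarm(batches, target=10.0))
```

The point of the accumulated statistic is exactly the one made above: individual batches may still be within specification while the trend already signals that the process is drifting.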
I
am sorry I had to race through all of that.
I hope I have managed to get at least some of it across. So, I will come back to my summary slide
again. It really is a continuum of
process development all the way from the definition of a new chemical entity,
all the way through manufacturing.
We
need to fundamentally characterize our new chemical entities and our excipients, and do process development as a consequence, and that will lead to meaningful control points.
The
success of any scale up, or tech transfer, or process change should be judged
by rational comparison of meaningful process parameters that we have defined
during the development stage. And there is this idea of having a fingerprint of parameters, not regulatory specifications, that can be used to monitor process robustness and then to flag issues before the process goes out of control. But that will only work if, in fact, FDA does
not regard those as regulatory specifications.
In my experience, it has not been the FDA that has been the problem, it
has been the regulators in the company because they are afraid to have those
specifications around because they think the FDA is going to come in and
immediately assume that they are regulatory specifications. So, we have to change culture and mind set
both within the agency and within the company.
I
wrote a few notes down here as I was listening to all the other presentations
that were being made. I think the
implication for the FDA is that we don't have a box-checking mentality, as it
were, and we are talking about trying to define--I think you said you couldn't
see how the FDA would not have some guidelines for the product. But I think the guidelines have to be in the
mind set of the regulators, not saying exactly how processes should be
developed but a mind set of how to look at the development reports that the
pharmaceutical companies send in so they can understand what those reports are
about and then interpret them, not just a box to check off.
I
think Ajaz' suggestion of starting with post-approval changes probably makes
sense but I certainly would hope we would not end up there. I would hope that companies, in fact, are
doing this all along but it might be easier to start there at least to get the
message across that the cultures are changing and, in fact, this is a viable
way to proceed. So, thank you very much
for your attention.
DR.
BOEHLERT: Thank you, Colin. Questions?
Comments?
DR.
GOLD: Colin, I don't want to beat a dead
horse but I don't know of any initiative that FDA has taken where the industry
has not said, well, please explain what you need. These are your requirements, please elaborate
on what these requirements are. For
example, if we expect analysis of variance to be done, statistical design, or
whatever, certainly we need to look for interactions as well as main effects,
do we not? Doesn't this have to get
across to the practitioners?
DR.
GARDNER: I saw people use these kind of
approaches when I went into industry at first and, you know, I have never been
convinced that they are used correctly in the industry. I think people, you know, build these models,
very many of them are linear, and they put in a bunch of parameters but often
they don't put in the critical parameters.
In general people have not used a process engineering approach to look
at the process and understand the fundamentals of the process, and then you can
define the process. I don't think the
FDA should be defining the process for us.
The pharmaceutical companies should be defining that and telling the
agency what to expect of this process and what the parameters are that they
will control. I think what the FDA has
to say is that this has to be a mind set, that this is the kind of approach we
are going to expect from you but we are not going to tell you what to do. It is your product; it is your process.
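Dr. Gold's earlier point about looking for interactions as well as main effects can be illustrated with a minimal 2x2 factorial design; the factor names and response values below are invented.

```python
# Hypothetical 2x2 factorial illustration: main effects alone can miss the
# interaction between two factors. Response values are invented.

# Coded levels: (-1, +1) for factor A (e.g., binder %) and B (e.g., speed).
runs = {(-1, -1): 80.0, (+1, -1): 90.0, (-1, +1): 85.0, (+1, +1): 75.0}

main_A = sum(a * y for (a, b), y in runs.items()) / 2
main_B = sum(b * y for (a, b), y in runs.items()) / 2
interaction_AB = sum(a * b * y for (a, b), y in runs.items()) / 2

print(main_A, main_B, interaction_AB)

# Factor A looks neutral on average, yet its effect reverses sign with B:
# exactly the kind of interaction a main-effects-only screen would miss.
```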
DR.
GOLD: Yes, I am not trying to imply that
the agency should define the variables that are going to be applicable to any
particular dosage formulation, but I am thinking that they will need to provide
general guidance for how to develop these experimental programs. I may be wrong.
DR.
GARDNER: I think that is destroying
innovation. I think the innovations come
from the companies and they should be bringing forward concepts of how they
develop their processes.
DR.
GOLD: In a perfect world, Colin, I think
you are right.
DR.
GARDNER: Well, we disagree then.
DR.
BOEHLERT: Nozer?
DR.
SINGPURWALLA: Well, I was hoping not to
ask any questions--
[Laughter]
--but
this brings me in as an outsider who knows something about design of
experiments and analysis of variance.
Did I hear you say that the analysis of variance and the design of
experiments that are done by industry don't take into account the true
variables, and just take canned variables into account?
DR.
GARDNER: I think many people have done
that in the past.
DR.
SINGPURWALLA: Then the industry is
lagging behind in terms of Bayesian ideas, because the Bayesian ideas would
essentially allow you to do it.
DR.
GARDNER: I am not disagreeing with
you. I mean, I think it is changing but
it certainly was like that 15 years ago.
DR.
SINGPURWALLA: Well, it is not that; I
think the point is this, it is a different philosophy and a different paradigm
of doing experimental design. The kind
of old paradigm does exactly what you are saying. The kind of paradigm that you would like to
see is now allowed by certain new methodologies, and what you are saying is
that industry has not adapted to new methodologies.
DR.
GARDNER: I think that is about what I am
saying.
DR.
SINGPURWALLA: Then it is the function of
a committee like this to draw attention to that.
DR.
GARDNER: Right.
DR.
SHEK: I am a little concerned about
generalization. I don't think it is
generally correct--you know, your experience, my experience is different;
things are changing. Many companies
have, you know, process engineers. My personal
thought is you cannot separate the formulation from the process. Both things have to happen at the same
time. You want to get people involved,
process engineers, as you select your formulation because otherwise you put
yourself in a box, and you try to get a formulation that you cannot process, or vice versa.
DR.
GARDNER: Absolutely.
DR.
SHEK: But it is true, like in any other
business, some people are doing better in experimental design and some are not
as good, but the concept of experimental design and training people--it is
happening in industry. At least that is
my experience. I want to make sure that
we don't have a generalization.
DR.
GARDNER: I probably have more process
engineers than any other company. You
know, our whole organization was chemists, process engineers and material
scientists so I think we started that trend.
So, I hope, you know, we understood what we were doing there. I still think that originally there was not
that focus on trying to really understand the fundamentals but, rather,
modeling around very, very standard parameters.
DR.
BOEHLERT: Ajaz?
DR.
HUSSAIN: Colin, you and even Diana were
discussing the fingerprint concept or a signature concept, and that being used
as a means of comparing and evaluating changes, and so forth. I think that is a very intriguing thing. I think that is a very viable option, and
that not being a regulatory aspect, we agree with that. What challenges do you think there are in that mode?
DR.
GARDNER: Well, I think the challenges
would be to identify what are the parameters that you are going to select to do
that, and that involves--I mean, the way I would see that is starting off in
the development phase, conducting a lot of collection of data as you go through
the elements of formulation design. I
agree with you, formulation design and process development are
indistinguishable but you are starting off usually with a few grams of material
when you are starting to define the formulation and then you go into tens of
kilos, hundreds of kilos. But you should
be collecting that information as you go along and basically be building a
database of parameters that you can measure.
Then, as you scale the process up and you go into your Phase III
studies, you will probably select a subset of those that you could continue to
monitor. Some of those parameters will
be selected eventually as your end specifications whether they be on the end
product or specifications for intermediate steps. But you will still maintain a significant
number of parameters that you are measuring.
You will use a subset of those to do your tech transfer into the
manufacturing and then you will use a subset of those perhaps to be these
fingerprints that you continue to monitor, and they are the ones that you have
shown are most significant in terms of monitoring when the process is going out
of control.
That
gets back to the question I think you asked me last time, you know, the fact
that you have a lot more variability in the excipients than you have in your API
and, therefore, how can you control for that?
I think you control for that by building that into the process.
DR.
SHEK: Yes, I think I agree that is very
correct. For example, if you take a
granulation process, today we have an endpoint which companies are using which
is like power consumption, which really doesn't tell you anything about what
you have inside. You know, you see an
effect. Now, looking with various other techniques, you can maybe have some measurements which will tell you about the
particle size, tell you how much water stays there. Then you can build some kind of a signature
which, hopefully, will stop the process--
DR.
GARDNER: What you just said about power
consumption though was developed because up until that point, as Ajaz said
earlier, it was time, and time was fixed and that was part of the NDA
specification, and if you changed the time there was a difference in the
process. As you change your excipients,
your API, you have to change the time if you are going to make the same
product. So, I agree with you, power
consumption was one step along the way.
As new technologies come forward we should definitely encourage people
to try and use those. I mean, where PQRI
can come in is by helping to define what the value of those measurements is,
not for any one particular product but just in general.
DR.
HUSSAIN: In that concept is sort of the
learning aspect in a sense, and actually collecting more information that is in
your batch record. I think that is a
major concern because people don't want to do that but, at the same time, I
think the dilemma we have is when you have a specification you often don't get
to the root cause because you are not measuring the right things that will get
you to the root cause. So, that becomes
a part of the continuous improvement.
DR.
GARDNER: And another thing--I know there
are differences in different companies but during the development part of the
process you make a lot of batches for clinical supplies. Those absolutely should be part of your
development program. I think to have a
separate group that makes clinical supplies from the group that is doing
development is actually a very, very big mistake because the amount of
experience you get in making clinical supplies and building all of that into your
database is just a huge advantage. If
you think about how much time you might spend just developing the product and
then maybe a hundred batches or so made for clinical supplies, if you don't
capture that information you are losing an immense amount of knowledge.
DR.
BOEHLERT: Thank you. Our next speaker is Greg Guyer.
DR.
GUYER: Well, the good news is I am the
last speaker of the day. The bad news is
I guess I am the last speaker of the day!
I guess I would start off by saying what I am going to show you is
obviously not a baked cake by any stretch of the imagination, but a lot of the
things that you are challenging yourself on is exactly what we, in the
industry, have been challenging ourselves on, how to actually get to a
quantitative model that could be used conceptually in a way that would show the
bridge between the body of evidence in your manufacturing science, and then
somehow equate that to a level of risk.
That has probably been one of the most significant challenges that we
have had.
We
haven't had any challenges with a lot of the conceptual things, and I think the
whole concept of quality by design--again, no one disagrees with the concept of
quality by design. It just makes good
sense, business sense, regulatory sense.
So, we haven't really challenged ourselves. Actually, we don't even have a definition for
quality by design. It is just what we
call all of the things that we have been working on.
What
I want to try to do is to maybe start to give you some ideas about how we might
start to equate this body of evidence in terms of manufacturing science to
risk. What I want to do is kind of pick
some pieces out of different presentations that you have seen because you have
seen a lot of different information and, again, I don't see any real
differences in the objectives of what people want to accomplish. But there are different ways to get to
that. So, I am going to pick pieces of
what Gerry presented. I am also going to
pick pieces of what Rob presented from a risk management standpoint and try to
start to integrate well-validated risk models.
Also, understanding better manufacturing science and using some of the
core parameters that we talked about, and Gerry talked about earlier, as well
as G.K. did, to kind of start to pull those together and see if there is some
way in which at the end of this we can at least get some common systematic
framework whereby this information could be collected, could be presented and
could be agreed upon.
Our
goal is not to train either the reviewers or the investigators or even the
industry on 40 different models of how to do this. It would be nice if we could use a common
model. It doesn't mean we have the same
collection of data, to Colin's point, but at least collected in such a
framework that would be consistently applied.
So,
I want to talk about one way or one suggestion we might use that is a validated
model and, again, this is not a baked cake but it is more of a conceptual
presentation.
So,
let's start with a definition of risk management. This came from Australia. I apologize, I have no references here but
this has been an evolving presentation and evolving thought because I am not a
risk management expert, but I understand that we all make risk-based
decisions every day; we are just not aware of them. A lot of the things I am going to tell you
are, you know, motherhood and apple pie, things that we all know. What I am trying to do is put those things
that we use as attributes of risk into a quantitative model.
So,
we look at risk management. It is really
a process consisting of well-defined steps which, when taken in sequence,
support better decision-making by contributing to a greater insight into the
risks and their impacts. That is a lot
of jargon but basically what it is saying is that by using a very well-defined
common process, if it is done in the right way, you can actually come to a set
of decision elements which are predicated not only on science but also on the
elements of risk.
So,
let me talk to you about what that might look like. If I get back to the famous model that Gerry
presented, again, we all understand there is an inverse relationship of
manufacturing science to risk. I don't
think that is a debate. But the question
really is how do we start to equate these two concepts together.
So,
when we start to think of developing relationship between these two, what does
that algorithm look like? We talked
about it earlier. Can you solve, can you
create an algorithm? You know, I would
argue that you have to. It is not a
question of if, it is a question of how do you because it is so critical to
what this whole initiative is about, in my mind, that we have to figure out a
way to extract the volume of information that a pharmaceutical company will
develop, extract what is important so that when it is received by the agency
they can understand that information; they can evaluate that information and
they can understand the decision-making that was made in a very consistent and
robust way. The bottom one is how can it
be solved consistently and systematically using validated models?
If
we look at those primary attributes that we talked about earlier in terms of
manufacturing science, and I am not going to argue that these are the only five
but there are five so we will start there and I think some of the concepts we
will talk about are equally applicable if we want to broaden this, constrict
it, whatever.
But
we talked about process knowledge. We
talked about process capability. We have
talked about manufacturing technology.
We have talked about process control technology. This is where PAT comes in. We have talked about quality systems
infrastructure.
What I want to do is to think about if we use those five main attributes, just for argument's sake, say those are the five main attributes defining your body of manufacturing science, or where you are in terms of your value of manufacturing science, and go through each individually and talk about how we
might look at those a little bit more quantitatively than we might have
historically. Then, at the end I don't
have an algorithm for you. I don't have
an equation that fits in but I think that is what a group of people should
do. I think there is enough in the
outline here that you might be able to think about how we move from a very
conceptual state to a very quantitative state.
So,
let's talk about process knowledge. I
apologize that it is kind of hard to read, but since I am not a risk management
expert I am going to have to use some terminology. It was great that Rob went before me because
he really explained to you what failure mode effect analysis is and he tried to
do it in a short way. Some of you, I
know, are more aware of it than others.
We at Merck have had quite a bit of experience with it, not necessarily
in this realm but in a lot of other areas.
But I think that the concepts there which are clearly identifying
failure modes, to G.K.'s point earlier, basically use a systematic way of
examining all the ways in which a failure can occur within a process.
When
you are thinking about this, this is higher level than probably what is in your
mind right now. It really is looking at
the process steps and trying to look at each process step and the failure modes
beneath them, and then identify all the potential root causes of each
failure. So, anything that can go wrong
with that process step, identify that and then what are the controls that you
have in place and what would be the root causes for those failures?
For
each failure you can estimate what the effect would be on the whole
system. In the case we are talking about
here, it is the effect on product quality.
My definition of product quality is probably a little different from
Janet's this morning because I start from a basic understanding that in process
development and in the clinical programs we derive a set of experience with a
range of parameters, excipients, different manufacturers of excipients. We have different parameters that we understand. There is a whole body of information that is
going on while that clinical program is developed.
The
output of that is a synthesis of the development program, from a pharmaceutical
research and development standpoint, to look at the parameters and to define
what are the critical aspects that could impact the quality of your
product. In doing that you do have a
link to the clinical program, and that is the basis under which I make the
supposition that the specifications that we have today are more than what we
need. I don't know of specifications
that could be challenged to say that we don't have sufficient specifications to
say products are safe and efficacious. I
would argue that they are. So, I would
say that this argument is about constriction rather than growing it and that we
are measuring the wrong things.
In
some cases I agree a lot with Colin in that we have jumped to some endpoint
testing because that is what FDA expects.
So, although we understand from a fingerprint standpoint what are the
elements within the process that are important and, as you see, we go back to
those over and over again when we make process changes and we make validations
at new sites, we go back to those. But
we all stand by the set of, you know, 12 tests that everyone has to do because
that is just what FDA wants. So, there
is a change that has to happen in the mentality for us to start to go to
something. You know, those fingerprint
items that we look at, those are really critical parameters and these tests, we
know they will never change as long as these fingerprint items don't change.
That
is really the concept that Ajaz and G.K. and others have been talking
about. A lot of that information is
already there. It is now starting to try
to leverage it in a little bit different way.
So, once you have done this, this would include how often that failure
could occur in the specific step; the severity of the failure, the impact of
that failure; and the ability to detect it.
So, if you get that failure, are you able to detect it readily so you
can mitigate it?
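The occurrence, severity, and detectability ratings described here are the inputs to a standard FMEA risk priority number (RPN). A minimal sketch follows; the 1-10 scales, the granulation step, and the failure modes and scores are illustrative assumptions, not from the presentation:

```python
# Illustrative FMEA scoring: each failure mode is rated 1-10 for
# occurrence (O), severity (S), and detectability (D, 10 = hard to detect).
# Risk Priority Number (RPN) = O * S * D; the highest RPN gets focus first.

def rpn(occurrence, severity, detectability):
    for score in (occurrence, severity, detectability):
        if not 1 <= score <= 10:
            raise ValueError("FMEA scores are rated on a 1-10 scale")
    return occurrence * severity * detectability

# Hypothetical failure modes for a single granulation process step.
failure_modes = [
    ("over-wetting",       3, 7, 4),
    ("blend segregation",  5, 8, 6),
    ("endpoint mis-timed", 2, 6, 2),
]

# Rank the failure modes so mitigation effort goes where risk is highest.
ranked = sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True)
for name, o, s, d in ranked:
    print(f"{name}: RPN = {rpn(o, s, d)}")
```

Ranking by RPN is what lets the analysis "weed through the morass": only the top-scoring modes become candidate critical parameters.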
This
is really the start in a way of defining critical quality attributes and
parameters. If you think about it, if
you look at your failure modes you start to understand in your process what is
critical to defining the quality. Now,
that doesn't necessarily mean all of those parameters or attributes are
critical, and that is something I would like to discuss a little bit with you
just conceptually.
I
can say that internally at Merck we have spent a lot of time wrestling with
this concept of critical quality attributes and critical process
parameters. But I would say FMEA gives
us the first step in understanding what are the critical process steps. But deciding on whether something is a
critical quality attribute or deciding whether it is a critical quality
parameter will depend on some other variables.
So,
let's go through these manufacturing science attributes and at the end maybe
you can see how all these attributes are interlinked; they aren't independent
variables and they aren't independent assessments. They actually are very much linked. But I am going to talk about how that might
be done in a way that might give you the right solution.
To
me, once you have used FMEA you can start to define potential critical quality
attributes and parameters. But then you
define the process capability to meet those accepted ranges. In other words, FMEA would say these are the
ranges under which you can run your process and you won't have impact on
quality. So, that is step one.
Step
two would be what is your process' ability to continue to meet those
ranges? So, that is step two. Then, if process capability is well within
acceptable ranges, then additional risk mitigation may not be necessary. That might not be a place, even though it is
a critical step--let me give you an example because we have this all the
time. It is easier to do it in the API
world.
If
you think about it, you can almost drive any parameter to failure. I mean, think about pH when you are
developing a chemical. Even though the
reaction happens at a pH of 3, if you tried to do that reaction at a pH of 9
you are not going to get the chemical moiety you want. Well, that makes sense. That doesn't necessarily mean that is a
critical step. If you can control your
pH between 2.95 and 3.05 and you can show process capability to say whether it
is 1.5 to 4.0 you get the same result, to me, that is not a critical process
parameter because you can drive a truck through it and not screw up your
process. So, to me, that doesn't tell my
manufacturing people that you need to focus on that. You absolutely need to make sure it is always
the ranges of 2.95 to 3.05. But that
wouldn't drive, in my mind, necessarily to make it a critical process
parameter.
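The pH example can be put in quantitative terms with a standard process capability index (Cpk). The sketch below uses the speaker's ranges (quality unaffected anywhere in 1.5-4.0, process running 2.95-3.05); the process mean and standard deviation are assumed for illustration:

```python
# Process capability index against acceptable limits:
# Cpk = min(USL - mean, mean - LSL) / (3 * sigma).
# Cpk >= 1.33 is a common threshold for a capable process.

def cpk(mean, sigma, lsl, usl):
    return min(usl - mean, mean - lsl) / (3 * sigma)

# Assumed: process centered at pH 3.0 with sigma ~0.017, so +/-3 sigma
# spans roughly the observed 2.95-3.05 operating range, judged against
# the 1.5-4.0 range where quality is unaffected.
value = cpk(mean=3.0, sigma=0.017, lsl=1.5, usl=4.0)
print(f"Cpk = {value:.1f}")  # far above 1.33: not a critical parameter
```

A Cpk this large is the quantitative version of "you can drive a truck through it": the process sits so far inside the acceptable range that additional risk mitigation adds little.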
This
is where the definition in Q7A has frustrated us. It states something along the lines of any
parameter that could impact the product quality. Well, almost any parameter can impact product
quality at some range. The question is
what is its relationship to your ability to continue to meet that.
I
think the other thing is that when you think about the manufacturing technology
you have to have the right technology to be able to control your process in a
way to demonstrate you can control within that acceptable range reproducibly.
So,
one size doesn't fit all. It doesn't say
you have to go to barrier technology.
However, in some conditions you may have to go there because of the
control necessary. So, that is why it is
not a one size fits all but, again, the FMEA process sends you through a
thought process that will make you ask those questions of yourself and start to
define what is really critical.
So,
if the process capability cannot ensure process reliability within those
acceptable ranges, I go back to my example and say your technology can only
control between 2.5 and 3.5 and you know that at 2.4 and at 3.4 you start to
get some changes in terms of whether it is the polymorphic form or some other
impurity. If you start to understand
that, then I think that is where you need to employ risk mitigation strategies
and either look at a new technique, learn how to better control it, there is a
whole host of things you can do. But
what FMEA does is it drives you on a path to make decisions and understand what
is important about your process, and then that is where you focus.
So,
that is kind of the way we have used it, again, in different areas but it
really helps you get through the morass and start to focus really on what is
critical.
The
next concept is process control technology.
I would say that is a very important one if you are in that bottom
bucket I just talked about. That is
where your process capability cannot reliably keep you within those acceptable
ranges. Then, I see PAT as a potential
risk mitigation strategy which is considered when the critical attributes and
parameters cannot be reliably ensured in the process to meet those acceptable
ranges.
That
is one way. Again, when I think about
process analytical technology, it is where you value real-time data. Obviously, you want to focus it where there is a risk of not determining the quality of your product, but you want to know
absolutely that you are maintaining it within a range that is acceptable. That is where you have to deal with your
process capability.
This
may mean that in some cases for new products you have to look at the technology
you chose. You might even have to change
the way you go about it if you can do these studies early enough. You might even change the technology you
would use to make sure you can reliably stay within those ranges.
Lastly,
and very important, especially to people like Diana, is the quality systems
infrastructure. It is a different
attribute mainly than what I have talked about, but I look at it as the ability
for a plant operation to reliably make any process you give them. If you think about process and product
development, there is a series of studies which deliver a process to
manufacturing that, hopefully, can reliably meet all the predetermined
specifications and ranges and all of those fingerprint aspects that Colin also
talked about.
But
that all has to go into a facility that has a quality systems infrastructure
that can reliably make a product. In
other words, you are trying to dampen the operator error input into the
equation as you are raising your process capability. So, in other words, you are trying to control
those variables better. To me, what
quality systems infrastructure has done, and has really done this especially
for us at Merck, is really demonstrate the ability for whatever process comes
to be able to take out of the equation, to a large extent, the
interdependencies on material controls, on product release, on manufacturing
systems. If you have a good fundamental
quality system it will set you up significantly to reduce the type of
deviations and atypicals and things that you have that sometimes are deemed to
be process problems when, in actuality, they don't have anything to do with the
process. They are the way in which the
facility actually operates your process.
So,
to me, those first four are very important together. The last one is a risk determination which
really can be made by FDA. I mean, their
inspections today are totally, or in most cases, quality systems related and
really give us a good assessment about how well we integrate quality systems in
the decision that we make. For the most
part, I think FDA has a pretty good idea about the quality systems on a plant
basis when they go in. I think this is
something we have to work at to try to quantitatively let the agency decide on
how that fits into the algorithm but my point here is that it is a critical
part of the algorithm because that is a critical part of risk. The ability for us to control our operators,
and our chemists, and everyone in a way that can allow us to manufacture
processes that are reliable and robust is a critical ingredient to this risk
equation at the far end.
I
think that is it--no, the last point I wanted to make is if you think about it,
risk should equal some aggregate evaluation.
It is not additive, but some aggregate evaluation of the elements above
as determined by the manufacturer, except for the last piece which is something
I think we do collaboratively with FDA.
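One way to read "aggregate, not additive" is a multiplicative roll-up over the five attributes, where one weak attribute keeps overall risk high. The sketch below is purely hypothetical in its scales and weights; it only illustrates the shape of such an aggregation:

```python
# Hypothetical aggregate risk roll-up over the five manufacturing-science
# attributes. Each attribute is scored 0-1 (1 = strongest evidence), and
# overall risk falls multiplicatively as evidence accumulates.

import math

ATTRIBUTES = (
    "process_knowledge",
    "process_capability",
    "manufacturing_technology",
    "process_control_technology",  # where PAT comes in
    "quality_systems",             # assessed collaboratively with FDA
)

def aggregate_risk(scores, weights=None):
    weights = weights or {a: 1.0 for a in ATTRIBUTES}
    # Weighted geometric aggregation: risk is multiplied, not summed,
    # so a single weak attribute cannot be averaged away.
    log_sum = sum(weights[a] * math.log(1.0 - 0.9 * scores[a])
                  for a in ATTRIBUTES)
    return math.exp(log_sum / sum(weights.values()))

strong = aggregate_risk({a: 0.9 for a in ATTRIBUTES})
weak = aggregate_risk({a: 0.3 for a in ATTRIBUTES})
print(f"strong evidence -> risk {strong:.2f}; weak evidence -> risk {weak:.2f}")
```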
But
what I have tried to do is just give you some idea about how we might put all
that data together in a very constructive way to start to weed through the
stuff that is not as important. Again,
FMEA is a very validated methodology.
I
can tell you, although I hadn't planned to, we have used it in a process that
is not a manufacturing process but we have used it in a quality system process
that had some defects and we were not happy with it. It is a very cross-country process. You might guess what that might be. But in applying FMEA we went from a defect
level that was in our minds unacceptable to better than six-sigma. It was a methodology that wasn't
over-tedious. It took us a couple of
months to actually do this analysis. But
you actually go through the critical steps and then what it tells you to do is
where you focus your energy. How do you
make sure that you don't have those defects get on the market? And we did that, and we don't see those
defects anymore. So, it is a very robust
methodology. I know Rob went through
some detail of it, but it can be used at a very high level to start to weed
through some of this.
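For scale, the "better than six-sigma" benchmark corresponds to a specific defect rate. A quick sketch using the conventional industry definition, which includes a 1.5-sigma long-term shift:

```python
# Six-sigma quality, in the common industry convention with a 1.5-sigma
# long-term shift, corresponds to roughly 3.4 defects per million
# opportunities (DPMO).

from statistics import NormalDist

def dpmo(sigma_level, shift=1.5):
    # One-sided normal tail probability beyond (sigma_level - shift) sigmas,
    # expressed per million opportunities.
    return NormalDist().cdf(-(sigma_level - shift)) * 1_000_000

print(round(dpmo(6.0), 1))  # -> 3.4
```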
So,
it is one approach we might want to think about when we start trying to collect
this information in a way that is understandable. The other thing that I am concerned about, my
ten years at FDA told me I don't want companies submitting 10,000 pages of some
development activities they have been doing for the last 20 years. I would like to see it in some way where I
can trust the methodology that was used to get me to the parts that are
important for me. How do I know that you
have done all the right steps, and how did you come to the conclusions that
these are the critical quality attributes?
FMEA does that for me. It gets
you through a very methodical process so that you can get to what is important
about your process.
Obviously,
there are a lot of studies and a lot of infrastructure that has to be developed
for you to use it effectively, and I think that that has evolved quite a
bit. Even in my ten years at Merck I
have seen that evolve quite a bit and I think it is time to start putting those
kind of concepts together. Then I think
that will create a nice algorithm, for lack of a better term, for FDA to start
to assess.
I
think it might be better to start on the post-approval area because I think
FMEA originally was set up after the fact.
It works very well that way. So,
to Ajaz's point, I think that is a great place to start but the concepts are
very applicable in development as well; it is just not quite as robust
yet. So, that is my presentation.
DR.
BOEHLERT: Thank you. Questions or comments?
DR.
GOLD: May I make one comment? Greg, one of the advantages of having a
definition of the critical parameters, critical variables, however we want to
express it, is that perhaps that leads you to the consideration of redundant
instrumentation in the type of example you gave because, should you have a
calibration failure of that instrument, your process is going to go off.
DR.
GUYER: Correct.
DR.
GOLD: So, there are some advantages to
doing this and they are not to be undervalued.
DR.
GUYER: Dan, if I take that example one
step further, it was not a regulatory process.
So, what it allowed us to do is to stop doing ten things and start doing
three things, and those three things were the most critical pieces that we
could control and now we don't have the defects. We were doing a shotgun approach; we were
doing ten different things and everyone thought they were accountable and no one was
accountable. We were doing all this
documentation, but the value at the end of the day was lost because people
weren't focused on the right thing.
So,
I think your point is very well taken but it is an example of where you can
move to that state very easily because it wasn't a regulatory process. It was one that we owned. Although the output of it is a regulated
process, the design of it was not.
DR.
GOLD: Good presentation, Greg.
DR.
GUYER: Thanks, Dan.
DR.
BOEHLERT: Other questions? If not, thank you very much.
DR.
GUYER: Thank you.
DR.
BOEHLERT: We are getting towards the end
of the day but we have one last topic to address. Ajaz, did you want to say something about it?
DR.
HUSSAIN: Sure. I think the thought process was, in a sense,
quality by design and process understanding I think. In many ways you achieve quality by design
through understanding, at least to a significant level, a fundamental level,
the attributes that sort of lead to your quality, and so forth. So, process understanding is the key framework.
Post-approval
change is a risk scenario because clearly, I think, we recognize that there are
certain situations when change will improve a product. But change brings risk. And, there are examples, clinical examples of
a minor change leading to significant safety issues, and so forth. So, change is a risk scenario.
I
think the two concepts come together quite nicely and in our statute, Food and
Drug Modernization Act, there are three risk categories that sort of came up,
you know, the level of scrutiny that we apply to a changed scenario. For example, any change that requires a
change in specification, the statutes require that to be a prior-approval
supplement, and so forth. Any change
that necessitates a clinical study or a bio study automatically is a
prior-approval supplement type of a change.
So,
the concept of risk and the concept of process understanding essentially come
together quite nicely in the post-approval world. What I presented to you, and I think that is
how we defined it in the draft PAT guidance also is that within a quality
system and for a given process or for a given product, the risk associated
should be inversely proportionate to the level of process understanding. The process understanding of relevance will
come on the basis of what type of changes you are likely to make and why you
are making those changes, more so in what type of changes are necessary.
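The inverse relationship Dr. Hussain cites from the draft PAT guidance can be sketched as a toy function; the 1-10 understanding scale and the simple reciprocal form are assumptions for illustration only, not anything stated in the guidance:

```python
# Toy model of the PAT-guidance idea that, for a given product within a
# quality system, the risk associated with a change is inversely
# proportional to the level of process understanding (scored here on an
# arbitrary 1-10 scale).

def change_risk(process_understanding, base_risk=1.0):
    if not 1 <= process_understanding <= 10:
        raise ValueError("understanding scored on a 1-10 scale")
    return base_risk / process_understanding

# The same change carries a fraction of the risk on a well-understood
# process compared with a poorly understood one.
print(change_risk(8))  # well understood
print(change_risk(2))  # poorly understood
```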
In
the post-approval world, and I think as part of continuous improvement,
fine-tuning of a manufacturing process is often necessary and new technology
has to come in, as well as changes in equipment, changes in site of
manufacture. These are all necessary
changes that need to occur. A product
that is experiencing a lot of difficulties in manufacturing has to be changed
too to improve that process.
So,
from that perspective, the two concepts come together and, therefore, I was
hoping you would give us some feedback on the proposal I had in place, at least
to move forward in the post-approval world to bring some more concrete steps
that we can take to achieve some of these objectives.
DR.
BOEHLERT: I have two announcements I
want to make before we get into the discussion, just so I don't forget
them. First, the next meeting of this
committee will be January 13th and 14th.
So, if your calendars aren't marked, please do so.
The
other has to do with an announcement about another committee meeting. The Drug Safety and Risk Management Advisory
Committee meeting scheduled for Thursday and Friday, September 18th and 19th
has been postponed. So, for anybody in
the room who might be interested in that meeting, it has been postponed. This is risk management I think at the
highest level. I just wanted to get
those off the table so I wouldn't forget them at the end of the day.
Ajaz,
I think you are looking for us to give you some feedback, relationship between
quality by design and risk management.
DR.
HUSSAIN: Right. I think in this context,
using the word process understanding sort of as a means for quality by design
would be a way of sort of describing that.
And process understanding sort of comes from different levels too. In the PAT guidance we define high level of
process understanding is when you can actually predict the impact of a
change. I think Greg sort of was getting
to some of that in his example of PAT.
If you have understood that and how well you have controlled that, then
that leads to a risk assessment. So, any
change associated that is necessary can be judged in that light.
There
are two things that occur. One is the
type of filing that will be necessary, whether it is a change that can be
managed within the company's quality system and reported in an annual
report. So, that is one aspect. The other aspect also is what sort of test is
necessary to qualify that. For example,
Colin brought up the issue of site specific stability. That was a very controversial and heated
discussion between us and industry. I
think what we were expressing there is the elements of uncertainty that come
because of the materials not being characterized and the physical aspects. So, I think that debate sort of occurred that
way. How do we sort of move forward with
a better level of process understanding to provide the least burdensome change
management processes?
DR.
BOEHLERT: Would somebody like to
initiate the discussion? Please, Gary.
DR.
HOLLENBECK: Ajaz, I have all sorts of
risk things going through my head. What
you really would like to do first I think is find a way to place things at
those three levels. Is that the focus
initially?
DR.
HUSSAIN: Well, I am thinking more in
terms of a custom approach in a sense.
If you look at the scale-up and post-approval changes guideline, I think
clearly that was a step forward but, yet, I think the criticism there is that
it is so conservative. I think what I
have argued is based on the information that we gather through our research,
there is a limit to generalization.
Flexibility can come when a company can provide a level of process
understanding and quality by design knowledge to sort of justify other
changes. So, this could be as part of
several options, as part of the comparability protocol. Although I have heard criticism that it is
too narrow and too restricted, but I think the comparability protocol is
flexible enough to allow that to happen.
I think that was one of the intentions that we had, that people could
use the comparability protocol to share this knowledge to justify change or
justify a number of expected changes that could occur.
For
example, I won't be very specific but we actually have a couple of good
examples in the small molecule also where the product is fairly unique. We don't have a change guidance for that in
terms of SUPAC, and the company said we will need to make these changes as we
develop, scale up and then produce this.
So, this is our knowledge. These
are the variables that we have assessed.
Based on this information, we think this is high risk and we would like
to report this in this way, and this is how we will qualify that, and so
forth. So, it was a very novel
proposal. The unfortunate thing is it
came to us two years ago and we were not ready for it. We want to be ready for it next time.
To
help the committee, I would like to suggest this, from the perspective of
reducing uncertainty, the fear, we will be working diligently in sort of trying
to identify approaches to assess that information. As I mentioned to you, we have invited Ken
Morris to come in and work with our chemistry leadership to sort of brainstorm
and sort of identify a strategy for asking the right questions. That process is already starting.
The
second approach is in a sense ICH P.2 activities will get started in
November. There are two aspects that we
have requested and that I think we have agreed on. In the P.2 concept paper the activities will
incorporate two elements, one element being quality by design. So, that group is going to address some of
those challenges. The second element is
risk. The risk aspect will be run as a
parallel group to the P.2 pharmaceutical development expert working
group. Greg will chair that and John
Barrett is going to chair the P.2 group.
Diana and others will be part of the expert group working with
that. So, that activity is already
starting in November.
What
my proposal is, and I would like feedback from you, to move in parallel
here. We are initiating the training
aspects that will help us ask the right question. Now, there are proposals that we can take
some of this in the PQRI world and actually start developing very focused
activities. For example, one aspect
could be definition of critical elements, and so forth. So, that is one element.
The
third aspect, which I really need your help on, is from a regulatory
perspective, the comparability protocol, what are the challenges possibly with
that? That is one element. Should we consider a separate guidance, it
could be custom SUPAC or make your own SUPAC.
It would not be a very extensive guidance. It would be more of a framework which sort of
either becomes an appendix to a comparability protocol to expand its scope, or
it becomes part of the other SUPAC guidances that we have to update
anyway. So, there are many options. What would be the most useful from your
perspective?
DR.
LAYLOFF: Do you envision like a
template, product type template for identifying elements and the scope of the
elements?
DR.
HUSSAIN: No. The PAT guidance is the framework and you
will start seeing more general guidances rather than prescriptive
guidances. The key element is consistency
and in the guidances for the last ten years we have addressed the consistency
issue. I think we can approach
consistency issues from a training perspective and sort of creating procedures
for assessment. That is the approach.
In
many of these aspects, when your goal is not to interfere and sort of have
unintended consequences--for example, with the PAT guidance we tried hard not
to even have the word NIR in that. We
did a few examples here and there because if we elaborate on that everybody
will jump to that whether it works for the system or not, and we don't want
that to happen. So.
DR.
LAYLOFF: I was thinking of leaving it
more open, like you identify the critical parts and the fingerprint sensing, or
something like that. You don't even want
to go that far?
DR.
HUSSAIN: No.
DR.
LAYLOFF: Then I have difficulty, you are
going to have a hodge-podge stream coming and, from a regulatory point of view,
what do you look at, what elements do you look at?
DR.
HUSSAIN: Again, the elements that we
need to look at--one aspect is predictability, if you have understood and if
you have the ability to predict and describe that change. For example, one approach could be what Colin
provided, a fingerprint approach and areas of maps of the system which says
this is a critical region and this is under various controls. So, the flexibility has to come and the
suggestions have to come from industry.
So.
DR.
SHEK: I believe we are talking about
topic number two, relationship between quality by design and risk based. Is that the theme of this?
DR.
HUSSAIN: Yes.
DR.
SHEK: Okay, the question is what is the
relationship between the two. Well, to
me, at least to my understanding, good quality decreases risk. And good quality is a responsibility of the industry. Assessing risk, or scrutiny, is a regulatory
function. So, the relationship of one
feeds to the other. However, the former
is the responsibility of the industry and the latter is the responsibility of
the regulator. So, that is the
relationship. If that is not the case,
then why even have a relationship?
DR.
HUSSAIN: No, that is the case. I think what we want to find is better ways
to use the knowledge that drives quality to say how do we take steps, or what
questions do we ask that actually lead to risk reduction and not lead to burden
or constraints that lead to, say, lack of innovation, lack of improvement, and
so forth. That is the basic theme.
DR.
SHEK: So, as a corollary to this, if the
industry came and said that we have excellent quality and you were satisfied
with it, then there is no need for you to do risk because good quality
minimizes risk, and if you are satisfied with good quality then the question of
evaluating risks is moot. But even if the quality is excellent, certain inherent
risks cannot be removed. For example,
open heart surgery is an example. You
could have an excellent surgeon but there is only so much the surgeon can do. There is a risk obviously of something going
wrong. So, that is the answer.
DR.
HUSSAIN: True but, no, that is not the
answer because even if you take the example of surgical procedures, unless the
surgical procedures and techniques improve or the training is adequate in
different hospital centers, you see different rates, and so forth. So, I think from a public health perspective you really have to keep an eye on whether there is an acceptable risk. Everything has risk. Therefore, for example, on the inspection side, whether the quality system is adequately managed to make sure the risks are minimal is one aspect. But that is not what we are talking about here.
If you look at the regulatory part of managing changes, if there is a change needed to improve a process, it is not done today. And the fear is that change brings risk, but innovation is change and improvement is change.
So, we have to reconcile some of the dichotomous and opposing forces
that lead to that and try to find a better way to arrive at a least burdensome
pathway that is shown by the level of process understanding. So.
MR. FAMULARE: There are two concepts in
terms of the regulator trying to evaluate that risk. Ajaz touched upon it in the hospital setting
where they advertise their success rates.
You know, is that the way to tell whoever regulates hospitals how to deal
with them in terms of the level of scrutiny?
If you go back to the successful example that Greg used in his talk, it was something that is not subject to regulatory scrutiny, and Greg seemed to attribute some of their success to using that in making the change and challenging it.
So, going back to the regulatory quality paradigm, it still leaves for us the open question, in a non-prescriptive way as Ajaz has been saying: how could we assess that quality or level of risk in such a way that allows for changes, such as the one that Greg described?
DR. HOLLENBECK: I am getting closer
here. I think if we flash back to SUPAC, for instance, there was a risk approach the agency was willing to take based on the therapeutic index, solubility and permeability of a drug. So what you are talking about now is a different paradigm.
You are talking about a risk assessment strategy based on process
control and the kinds of attributes that are listed on Greg's last slide. Is that right?
DR. HUSSAIN: Actually, this builds on the
previous paradigm. That is the reason
why I sort of brought the biopharm classification system into my discussion,
which I did not fully expound on. The
decision that we made in the SUPAC--there are two aspects that we primarily
focused on in SUPAC, unchanged shelf life and unchanged bioavailability in the
event of a change. I mean, those are the
two most prominent driving forces there.
Now, in the case of unchanged bioavailability we are using a surrogate of an in
vitro dissolution test. In the
immediate-release world we don't often have an in vitro in vivo
correlation because often dissolution is not rate limiting, and the dissolution
test has built-in flaws that sometimes give you false positives and false
negatives. So, the way we approached it there was to say, identify what the risks are.
The risk question here is what is the risk of bioinequivalence when a
regulatory decision is made on the basis of similar in vitro
dissolution? That was the risk question. Essentially, what we found was that, because of the inherent limitations of the dissolution method, as well as the lack of connection between formulation and dissolution, there are risks associated. So, the biopharm classification mitigates the risk by saying biowaivers are feasible under these conditions: one, the drug is of high solubility; two, the drug is of high permeability; and three, the product has rapid dissolution under three different conditions. So, that is how we sort of structured that.
So, that becomes sort of quite a nice model for making decisions and saying if you
met those criteria, then there is no need to do a bio study. But now I think the same concept comes with
respect to process understanding. If you
have understood the process so well--say, you are changing encapsulation equipment, which raises a concern, but you have understood the process, you have understood the change that you will be making and its impact, and you have said the change is not likely to change the performance--then that becomes low risk.
We don't have that in the SUPAC in a way that allows process understanding to come
in. So, you have to sort of think of
this as an extension of the current SUPAC.
DR. HOLLENBECK: Then would the agency be
willing to use those attributes listed on the last slide in Greg's presentation
to make these judgments? Why not have
the same kind of an aggregate conclusion to determine what level you are
at? This would still be on a product-by-product basis. You are not talking about
classifying Merck as a number one company for every product. You are still talking about individual
products.
DR. HUSSAIN: No. No, I think the way we are thinking about it
is there are two pathways that we plan to take.
One is in the absence of process understanding, because this information is
not available to us. We will use the
concept that Chiu proposed sometime back and that will be part of the
discussion later on, and that would be a very conservative approach to saying
that we don't have information. These
are the critical elements. Anything
beyond that is a prior-approval supplement.
The second layer comes in if you have understood the process and are able to
predict the impact of a change on the key attributes of shelf life and
bioavailability. Then, the level of
scrutiny could be reporting in an annual report, managing the change under the
inspection program rather than having all the paperwork sent here, but that
does not mean that you will not do any additional testing. The testing would possibly be done and managed under the GMP change system.
That is how you make it less burdensome, more manageable change but,
yet, you have the level of scrutiny that ensures the safety and efficacy.
DR. GOLD: Ajaz, how do you define the
difference between a comparability protocol and the concept you were talking
about, SUPAC-C?
DR. HUSSAIN: Well, in my mind, the way we
started out, I think the comparability protocol is broad enough to accommodate
"make your own SUPAC" concept.
But I think I am hearing, in the comments that we are receiving, a concern that it probably will not. I
think we will have that discussion elsewhere.
But I think within the comparability protocol "make your own SUPAC" could be done. We anticipate these types of changes to occur over the next several years for this product. The difference will be that
these changes, say, the site change or change in equipment or change in scale
or change in the type of quality control measures that you want to have, are
not likely to impact on the critical attributes that we are concerned about,
shelf life and so forth, and so forth.
And we have arrived at that decision on the basis of this information
that we have collected during our development, and so forth.
So, that becomes a proposal to the agency as a protocol, saying that because of
this information and knowledge that we have, these are low risk and this is how
we plan to qualify the changes that will need to occur. The changes may not have occurred. So, the protocol gets submitted to the agency
and the agency reviews the protocol and agrees or disagrees with that protocol
and says, all right, when this change needs to be made this could be reported
as an annual report and managed in the way the protocol outlines.
DR. GOLD: So, the comparability protocol
gives you a prior approval approach to agreement by the agency as to how you
can make the change. Now, if you are
going to request development documentation at the time a submission is made for
making a change in the process based on robust knowledge of the process, that
is going to require time to review and determine whether that information is
sufficiently appropriate, is it not? So,
how does it then differ from a prior-approval approach?
DR. HUSSAIN: The difference is it is one
time so that is a big difference because, for example, this is a case that we
ran into and luckily it was not misunderstood.
I will give you a very recent example.
The first PAT protocol came in.
It is a prior-approval supplement.
It is for a new PAT-based approach.
It applies to over 150 NDAs. All
right? So, the change is managed through
one protocol for all those applications and it is a one-time use and all
subsequent changes will be reported. So,
that is one way of looking at it. The
bundle supplement also gets there. So,
it is a very similar concept.
But the concept here is you are agreeing on a less burdensome change management
system based on the information provided, development information provided, as
well as the testing protocols that are necessary to qualify changes. So, it is a one-time supplement.
DR. GOLD: Oh, I understand. What I am trying to fathom is why not extend
the umbrella of the comparability protocol to cover SUPAC-C.
DR. HUSSAIN: Maybe you misunderstood
that. That is one of the aspects. We have the flexibility of doing this under
the comparability protocol or creating a separate document of SUPAC-C. Which is a better option? I am not sure. That is one of the questions I posed to
you. So.
DR. GOLD: Well, personally, I don't see any
advantage to creating a separate protocol if you simply enlarge the concept of
the comparability protocol.
DR. SHEK: The way the comparability protocol
is today, it gives you one level of jump.
Right? You go from one level,
whether it is, you know, from reporting and what you are talking about is
basically completely--to me, it sounds like a new concept, a different concept.
DR. HUSSAIN: No, the SUPAC-C is much
broader. It is probably less restrictive
than the way we have defined the current comparability protocol.
DR. BOEHLERT: I think you are running out of
new ideas. We have beat around the bush.
DR. HUSSAIN: I think so. If I could just summarize, I think the aspect
that we tried to bring to get some feedback today was I think some elements of
quality by design. I think the key
aspect is that we will focus on the knowledge necessary to achieve the type of
risk assessment that needs to occur. I
think many of the things we have heard we have already incorporated in the PAT
draft guidance so that was sort of reconfirmation that I think we are on the
right track, and that was very helpful.
We also heard from our invited guests and others that, clearly, the post-approval change scenario offers a way forward to bring in pharmaceutical development information and to learn how to better use that information. That will not only give us information coming in that will help us train ourselves, but will also, I think, start building a culture of sharing this information.
I think that is clearly an important aspect. Clearly, I think well-defined projects within
PQRI can get us to that state quite rapidly.
At the same time, since I think we already have certain aspects in ICH, the process
will run in parallel but, at the same time, I do not want to give the
impression that pharmaceutical development reports are only for post-approval
change. I think there are many issues that we want to welcome, and we want to sort of open up the process in an NDA and alleviate the fears of delayed approval. I think this exercise will help us get there.
I think since we have a number of opportunities for meeting during the NDA
process, the fear should not really be there.
I think as we move towards a quality system for the CMC review process
itself, that I think will address many of the concerns that I think were raised
today. So, what I got I think from all
the discussions is that, to a large degree, the thought processes that we have
expressed in some of our draft guidances are probably on the right track
already. We will continue with that
process and we will focus on training.
We will focus on creating some additional frameworks that will bring
development knowledge. At the same time,
these activities will support our delegates to the ICH process which will be
working on the P.2 section. I will
invite Joe Famulare and Diana to say a few things.
MR. FAMULARE: In summary today, I think the
presentations were very good and enlightening in terms of the types of paths
that we are looking to follow now in terms of the ICH groups, etc. So, as Ajaz says, I will say just briefly I
think it just reconfirms that some of the thinking that we have is on track
and, as I say, the presentations today I think were helpful to us.
MS. KOLIATIS: We heard a lot of information
from different folks and I think we have a very good basis to continue our
discussions on the ICH front, and to be able to communicate all the concepts
that we heard today.
DR. HUSSAIN: We didn't hear about Isabel so
have a safe trip back and thank you.
DR. BOEHLERT: Thank you.
[Whereupon, at 4:26 p.m., the proceedings were adjourned.]
- - -