DEPARTMENT OF HEALTH AND HUMAN SERVICES

FOOD AND DRUG ADMINISTRATION

CENTER FOR DRUG EVALUATION AND RESEARCH

PROCESS ANALYTICAL TECHNOLOGIES (PAT) SUBCOMMITTEE
OF THE ADVISORY COMMITTEE FOR PHARMACEUTICAL SCIENCE

VOLUME II

Thursday, June 13, 2002

8:01 a.m.

Hilton/Gaithersburg
620 Perry Parkway
Gaithersburg, Maryland

P A R T I C I P A N T S

FDA Staff:

Kathleen Reedy, RDH, MS, Executive Secretary (acting)

Ajaz Hussain, Ph.D.

Committee Members:

Thomas Layloff, Ph.D., Acting Chair

Gloria L. Anderson, Ph.D.

Judy P. Boehlert, Ph.D.

Arthur H. Kibbe, Ph.D.

SGE Consultants:

Melvin V. Koch, Ph.D.

Robert A. Lodder, Ph.D.

G.K. Raju, Ph.D.

Guest Speakers/Participants:

Eva M. Sevick-Muraca, Ph.D.

Leon Lachman, Ph.D.

Emil Walter Ciurczak, Ph.D.

Kenneth R. Morris, Ph.D.

Howard Mark, Ph.D.

Thomas Hale

Industry Guests/Participants:

Efraim Shek, Ph.D.

Ronald W. Miller, Ph.D.

David Richard Rudd, Ph.D.

Rick E. Cooley

Colin Walters

Doug Dean, Ph.D.

John G. Shabushnig, Ph.D.

Jerome Workman, Jr., M.A., Ph.D., FAIC, CChem, FRSC

Jozef H. M. T. Timmermans, Ph.D.

Robert S. Chisholm

John C. James, Ph.D.

Jeffrey Blumenstein, Ph.D.

Dhiren N. Shah, Ph.D.

Henry Avallone, B.Sc.

Open Public Hearing Speakers:

Justin O. Neway, Ph.D.

Li Peckan

Allan Wilson

Dan Klevisha

Tom Tague

John Goode

C O N T E N T S

AGENDA ITEM

Call to Order - Thomas Layloff, Ph.D., Acting Chair

Meeting Statement - Kathleen Reedy

Charge to Working Groups - Dr. Layloff

Working Group Discussions

Break

Working Group Discussions (Continued)

Lunch

Working Group Summaries:

Process and Analytical Validation - Arthur H. Kibbe, Ph.D., Chair

Subcommittee Questions and Answers

Product and Process Development - Judy P. Boehlert, Ph.D., Chair

Subcommittee Questions and Answers

Training Working Group - Kenneth R. Morris, Ph.D.

Subcommittee Questions and Answers

Summary - Ajaz Hussain, Ph.D., Deputy Director, Office of Pharmaceutical Science

Closing Remarks - Dr. Kibbe

Adjourn


P R O C E E D I N G S

DR. LAYLOFF: I would like to welcome you back to the second day of the Process Analytical Technologies Subcommittee of the Advisory Committee for Pharmaceutical Science.

I would like to have our meeting statement from Kathleen.

MS. REEDY: This meeting statement is an acknowledgment related to general matters waivers for the Process Analytical Technologies Subcommittee of the Advisory Committee for Pharmaceutical Science.

The following announcement addresses the issue of conflict of interest with respect to this meeting and is made a part of the record to preclude even the appearance of such at this meeting.

The Food and Drug Administration has prepared general matters waivers for the following special Government employees, which permit them to participate in today's discussions: Dr. Boehlert, Dr. Koch, and Dr. Raju.

A copy of the waiver statements may be obtained by submitting a written request to the agency's Freedom of Information Office, Room 12A-30 of the Parklawn Building.

The topics of today's meeting are issues of broad applicability. Unlike issues before a committee in which a particular product is discussed, issues of broader applicability involve many industrial sponsors and academic institutions.

The committee members have been screened for their financial interests as they may apply to the general topics at hand. Because general topics impact so many institutions, it is not prudent to recite all potential conflicts of interest as they apply to each member.

FDA acknowledges that there may be potential conflicts of interest, but because of the general nature of the discussion before the committee, these potential conflicts are mitigated.

We would also like to note for the record that Dr. Efraim Shek, of Abbott Laboratories, is participating in this meeting as an industry representative, acting on behalf of regulated industry. As such, he has not been screened for any conflicts of interest.

With respect to FDA's invited guests, there are reported interests that we believe should be made public to allow the participants to objectively evaluate their comments.

Dr. Leon Lachman is president of Lachman Consultants Services, Incorporated, a firm which provides consulting services to pharmaceutical and allied industries.

Dr. Howard Mark serves as a consultant for Purdue Pharma Incorporated.

Dr. Kenneth Morris serves as a consultant, speaker, researcher, and has contracts and grants from multiple pharmaceutical companies.

In the event that the discussions involve any other products or firms not already on the agenda for which FDA participants have a financial interest, the participants' involvement and their exclusion will be noted for the record.

With respect to all other participants, we ask in the interest of fairness that they address any current or previous financial involvement with any firm whose product they may wish to comment upon.

This is for June 13, 2002.

DR. LAYLOFF: Okay. Now we'll go around the table and introduce ourselves and our affiliations starting with John James.

DR. JAMES: Good morning. My name is John James. I'm the Executive Director of Operations Services for Teva Pharmaceuticals.

DR. SHABUSHNIG: Good morning. I'm John Shabushnig and I'm the Director for the Center for Advanced Sterile Technology for Pharmacia Corporation.

MR. COOLEY: Rick Cooley from Eli Lilly.

MR. CHISHOLM: Bob Chisholm, AstraZeneca.

DR. TIMMERMANS: Jozef Timmermans, Merck and Company.

DR. WORKMAN: Jerry Workman, Kimberly-Clark.

MS. SEKULIC: Sonja Sekulic, Pfizer.

DR. SHEK: Efraim Shek, Abbott Labs.

DR. G. ANDERSON: Gloria Anderson, Morris Brown College.

DR. KIBBE: Art Kibbe, Wilkes University.

MS. REEDY: Kathleen Reedy, Food and Drug Administration.

DR. LAYLOFF: Tom Layloff, SGE with the FDA and with Management Sciences for Health.

DR. BOEHLERT: Judy Boehlert. I have my own consulting business.

DR. KOCH: Mel Koch, Center for Process Analytical Chemistry at the University of Washington.

DR. LODDER: Rob Lodder, University of Kentucky.

DR. SEVICK-MURACA: Eva Sevick, Texas A&M University.

MR. HALE: Tom Hale, Hale Technologies.

DR. MARK: Howard Mark, Mark Electronics, also an independent consultant.

DR. MORRIS: Ken Morris, Purdue University.

DR. CIURCZAK: Emil Ciurczak, Consultant.

MR. ELLSWORTH: Doug Ellsworth, Office of Regulatory Affairs, FDA.

DR. HUSSAIN: Ajaz Hussain, CDER, FDA.

DR. LAYLOFF: Thank you very much. We had a very productive day. We gained some time on our schedule. I think our working groups made good progress, and we will reconvene those this morning and continue those discussions for the morning.

I think, Ajaz, did you have anything that you wanted to particularly emphasize to them?

DR. HUSSAIN: There are three things in my mind. One, starting with education and the training program working group: if you go through the outline, what I would hope is that you would define the learning objectives more so than the details of the curriculum itself. I think that would really help us to frame the broad requirements and focus on how we arrive at the right questions. If you could summarize that today, that would be wonderful.

And with respect to the process and analytical validation working group, I think this will probably be one of the most important aspects of the guidance development process--what type of information to include, keeping in mind this is a general guidance without much technical detail. One of the frameworks under which we could define validation is validation for intended use; I think Moheb had some suggestions, and I think he'll bring those to the committee this morning. And there is the rational approach to validation. My personal belief is that the GMPs are so critical that we really need good GMPs to ensure quality, because endproduct testing is so limited. The challenge for our inspectors has been that their workload and their responsibilities are so huge. If we can bring rational science, using PATs, to manufacturing, that would be wonderful, because without GMPs I don't think we have a quality system, so validation and qualification are all extremely critical elements of the whole quality assurance system.

So, I'm looking for an approach for how we would validate PATs in a rational sense and what sort of information should be brought to bear on evaluating these technologies. If that is the broad focus, and if you go through some of the questions we posed and provided to you, I think it will be very helpful for us to have a summary of your thoughts, so that the general guidance might include a paragraph or two on general principles for validation of PAT.

In terms of process and product development, I think the concerns that have been raised have been with respect to delay in NDA approval because of a new technology coming in. There is certainly some basis for those concerns, but they should prove ill-founded, because FDA is willing to work with you throughout the process. In fact, the offer on the table is that we could set up special meetings at the end of Phase II and so forth simply to discuss some of the new technologies, so that the fear of delaying NDA approval is removed.

But at the same time, the aspect that I'd really like to bring in is that I don't think the supplement process is an ideal process for bringing innovation in, because if you had a prior approval supplement for everything, it holds things back. The concept we're trying to develop is a team approach--a review and inspection team--so my hope is that a lot of these implementations could be in an annual report type of format, rather than a supplement. If a company is willing to invest and apply new technology in the new drug development itself, one could imagine that we could essentially establish interim specifications for the approval process, because you still have the traditional testing for validation and so forth. So, essentially, for PAT you have interim specifications that we agree to, and at some point, when the supporting submission data are collected, those become the rule.

So let's think differently--out of the box--in terms of how to facilitate new drug development using PAT, as well as in terms of validation.

So it's a big task and the challenge is the general guidance will have to have language which sort of reflects the positive win/win aspect of this and not be perceived as cumbersome, bureaucratic, and so forth. So that's what we're looking for.

DR. LAYLOFF: I'd like to reinforce a couple of those comments. For the training, the way this probably should start out is with the required competencies these people should have--those are the training objectives. The target should be to have the competencies required to satisfactorily perform their assigned duties, which would be review and inspection of these techniques, and the target should be a certification--a nice, consistent function--so that people are credentialed as having achieved a certain level.

The other caution is that when you start moving to new technologies, everyone starts to move to the realm of the possible, rather than the realm of the probable. And if you start moving to the possible, you become paralyzed. Certainly, the disaster of September 11th does not mean that we should start designing all of our buildings to be hit by planes loaded with fuel. That's possible but not probable, and if we look at the regulatory history that the FDA has had with our industry, the probability of having significant fraud is very minimal. The people are very conscientious; our industry is very conscientious. So when we look at validation issues and integrity issues, we should look at probabilities rather than possibilities.

The other thing that would reinforce what Ajaz pointed out: if you develop an NDA and throw it over the wall at FDA at the end, it is going to be delayed. On the other hand, if you take him up on his offer and work with his skilled staff and trained people, so that everybody understands what you are trying to achieve and how you're trying to achieve it, it will facilitate the whole process.

So I would ask that you keep focused on what is probable and not what is possible so we can keep moving forward.

We'll adjourn now, back to our committee meetings, our groups. So if you could go back to the groups--in the same rooms that we were in yesterday--we will have a break at 10:15, and you will reconvene with your groups until you complete your efforts this morning. Thank you. Oh--

DR. KIBBE: When do you want to regroup here, because I think we will finish a bit early so we can wrap this meeting up maybe by 3:00 or so.

DR. LAYLOFF: Would you like to come in--would you like to start to convene at 11:15 for sessions here?

DR. KIBBE: What I was hoping is we could reconvene here immediately after lunch--

DR. LAYLOFF: Okay.

DR. KIBBE: --so that each group has time to make the summaries and so forth.

DR. LAYLOFF: Okay, that's good. One o'clock would be fine.

DR. KIBBE: Okay.

DR. LAYLOFF: So we will go through our group discussions and reconvene here at 1:00 o'clock for wrap-up. I will not be able to be with you this afternoon. I ended up terribly conflicted in my schedule, and Dr. Kibbe has agreed to take the helm and take you to conclusion.

[Recess.]

DR. KIBBE: [Presiding] I thought we did really well yesterday, but maybe I'm delusional. Or, perhaps, we needed to put a process assessment tool in place to see how well we're doing. Each 15 minutes we decide if we said anything worthwhile. I still like assessment rather than analytical because I think it gets us away from remembering how to do titrations.

Yesterday when we broke, we had some people who had agreed to begin our thinking towards a document that could be used by the agency to formulate its guideline on validation. I think we've come to some good conclusions. I don't think anybody would disagree with the fact that we're not going to come up with 42 different validation documents for 42 possible technologies but, rather, a guideline that a company with a technology it has faith in would use to go forward and make a case to the agency. We have, I think, discussed the fact that you can't validate a process very well if you don't even know what process you're trying to validate, and we have a colleague who has some introductory paragraphs or sentences ready. He's hiding down there.

MR. LEIPER: Not quite hiding, Mr. Chairman. I like your use of words. I don't think that we agreed to do anything; I thought we were directed to do something, so we've actually met that aspiration of yours--well, I've tried to do that.

The other point that you've brought up just now I would certainly subscribe to: the terminology of process assessment technology might actually be an awful lot closer than analysis. If we go back to where we started yesterday, I think the reason we went a bit off track to start with is that we started thinking about chemical analysis. And that is not what it's about.

So I'll try and summarize. I've got some bullet points and we can see how this works out, and I'll get them over to Rob as we go through and we can put them on the screen.

The first issue that we tried to address, I think, was the background, you know, where we are now, because if we don't actually have a datum point of where we are now, of where we think we are now, we won't know whether we've improved or not.

And from that the first bullet point I've got is that, whether we like it or not, existing validated measurements invariably correlate poorly with process performance. So there are two issues: one, the measurements that we make don't correlate; and, two, they're validated. And so if we're going to use that type of validation as our background, we might just be disappointed. So that's where I think I started yesterday.

I also made the comment that univariate measurements are used to infer compliance of dynamic multivariate systems. And that's what we do: we measure what we can measure, not what needs to be measured. That measurement hasn't been seen as process-related; there's actually been a divide between the process and the measurement. Measurement is product-related rather than process-related.

That measurement needs to respond to process needs over the product life cycle, so it's not a one-off operation. If we want continuous quality improvement, it's got to be dynamic.

And to do that, we need to understand the process, and the last point in this slide would be that we've also got to recognize that the conventional approach to validation might be limiting or, indeed, inappropriate.

So, do these bullet points sort of ring bells with you? Does that sum up where we started yesterday?

DR. KIBBE: Anybody? What we're going to try to do, when we have electronically validated our system, is put those bullet points up there so people can read them and say, ah, that one's--no, I'd like this worded differently and that differently, okay?

MR. LEIPER: Yes.

DR. KIBBE: I think there's a lot of what we agreed to in what he said, and I want to give you an opportunity to say, well, I didn't quite agree to that statement, but it's close to what I agreed to and we'll wordsmith it.

This would constitute our attempt at helping the Agency write a preamble to why we're even going in this direction.

MR. LEIPER: Precisely.

DR. KIBBE: And what have you. While he's still arguing with the equipment, Jerry had--

MR. LEIPER: Okay, I've got the next one that we went on to, Art, and then Jerry's would fit in after that, I think, if I may. Excuse me.

DR. KIBBE: Excuse me, go ahead, my fault.

MR. LEIPER: Okay, then we went on to talking about understanding processes, and if we want to understand processes, we've got to break them down into their unit operations and begin to understand them individually and, indeed, collectively, if appropriate.

So we break it down into unit operations; we assess the risk potential from each unit, individually and collectively where units might impinge on each other or two might link together, using techniques such as experimental design.
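[Editor's note: a minimal sketch, purely illustrative, of the kind of experimental design Mr. Leiper mentions--a two-level full factorial over three hypothetical unit-operation factors that could be used to screen which factors drive risk. The factor names and levels are assumptions, not from the meeting.]

    from itertools import product

    # Hypothetical two-level factors for a single unit operation
    # (e.g., wet granulation); names and levels are illustrative only.
    factors = {
        "spray_rate": ("low", "high"),
        "impeller_speed": ("low", "high"),
        "binder_amount": ("low", "high"),
    }

    # Full 2^3 factorial: every combination of factor levels is one run,
    # so main effects on a response such as blend uniformity can be ranked.
    runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
    for i, run in enumerate(runs, 1):
        print(f"Run {i}: {run}")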

DR. LODDER: May I break in for a second?

MR. LEIPER: Yes.

DR. LODDER: I think it would be a lot easier if everybody who has written text could move over to that microphone so I could look off of it while you were reading. I thought we'd just keep things going faster.

MR. LEIPER: Okay.

DR. KIBBE: Or if you could give him your first set and he could type in--

DR. LODDER: Okay, well, whatever.

DR. KIBBE: A couple of you had--you have it electronically. Okay, so--Tom, you had something electronically? Good. All right.

MR. LEIPER: So, you know, that's what we would do; we would address the risk potential. We would then--we'd be looking to design systems to manage the risk, and that could be univariate measurements, it could be multivariate systems. It could be anything, but it would be certainly directed at what the need was.

We would then develop systems. The next step would be to establish proof of concept. And then to challenge, which would be conventional validation. But this is all related to the design of the system. It's not--you know, we just can't pick it out of the air.

And the objective is to confirm process and measurement validity in real time across the life cycle of that process.

And then I thought that's where Jerry's list of bits might fit in, but that was where I got to.

DR. KIBBE: Anybody have a comment about what...

[No response.]

DR. KIBBE: I have one little aside. Listening to you, it sounded like you were describing changing from what we have to a completely assessed process from beginning to end, and I think what we're going to see is segments of the process being assessed with, you know, technology being applied--and then that growing across lines of production.

MR. LEIPER: I agree entirely with your view of it. I see it--I don't--this is what our overall objectives would be and it would be the journey to get there and I think that's where--

DR. KIBBE: All right. We're starting to see some of your words up on the--

DR. NASR: Art, I want to make a couple of comments.

DR. KIBBE: Sure.

DR. NASR: These are intended to be general comments, but they may address some of the validation issues we are dealing with. I spent time reading the transcripts of the meeting we had in February, and I decided to stay completely silent yesterday, because about half the comments were ones I made myself about validation when we met in February. Sometimes when you listen, you get a bigger picture and better understanding of what's going on.

I think two comments, good comments, were made yesterday. One was that validation of the process needs to be done after we understand the process. The data and the information gathered during process development are useful to develop the process, and the process needs to be validated only after a complete understanding of the process is in place. I think that was an excellent comment.

Another comment that was made by Rick Cooley, and Rick and I discussed it substantially afterwards, and that is the focus of validation needs to be on the intended purpose to make sure that the measurements that we are making are suitable for the intended purpose only. And we do not and we should not focus on validating the technology itself or the device, whether it's analysis or an instrument, because if we do that, we will not be able to achieve what we are being asked to achieve.

So because of that, my suggestion would be, for the purpose of a general guidance, that we have three paragraphs: one paragraph simply to state that validation needs to be tied to the intended purpose, to ensure suitability for the intended purpose.

The second would be to outline major validation criteria that must be achieved no matter what application or measurement we are dealing with. We are talking about robustness, we are talking about suitability, and all the things that most of the people in this room are familiar with.

And the third paragraph would simply list available documents and guidances, such as the ICH documents and the agency guidances on analytical and process validation, that we can lean on and from which we can abstract and gather information that we can use.

Again, in summary, I suggest that we make our validation input into the guidance simple and general, without going into too much detail, because if we go into details and try to provide validation criteria for all possible measuring devices, I don't think we'll achieve that.

Thank you.

DR. KIBBE: Are those your words?

MR. LEIPER: Yes.

DR. KIBBE: Good. We're starting to get to where that is. Does someone else have--he's got yours, too, right, Jerry? Then we're going to start putting them in order. Yes, sir?

DR. WOLD: Just a short comment to Ken. It seems that Ken is very much focused on validating the process. I think we should perhaps discuss the two. We have the validation of the process, which is necessary when we use the process in manufacturing, but we also want to validate that PAT measurements give information about the quality of the product. Those are two different things. And, as Ken says, the quality measurements made for the products do not necessarily correlate well with the measurements for the process, but they're still needed. So we have two sets of objectives.

DR. KIBBE: We could certainly divide it again and say that we can validate a process, but we also have to validate the instrument we're using to measure the process, and then we have to validate whether those things are all resulting in a product that's what we wanted. And we could even go as far as to say, how do they help us understand the endproduct quality?

MR. LEIPER: I think it's not coming off. The objective is to confirm process and measurement validity in real time across the process life cycle. I mean, that's what we are trying to do. If you remember, the very first statement I said is that we do use validated measurements today, but they don't correlate with process performance. So as you go through these two slides, that's the transition. And I agree with Sonja all the way that we've never seen measurement validity and process validity actually looked at in the same context.

DR. KIBBE: Thank you.

MR. LEIPER: And I think that the point you make is actually a good one, and what I was trying to do in terms of the unit operations, et cetera, is that we heard a lot about risk-based assessment, but when we were talking about risk-based assessment, the quotation yesterday was about safety and efficacy, it wasn't about processes. Processes are what deliver safety and efficacy. So I think that we've got to take risk-based assessment and FDA's got this in their HACCP procedure. It's actually sitting there. It's just that we don't choose to use it. But that's a very good way, a very good methodology for beginning to understand what the variability is, as Sonja would prefer to see it called, or risk. Because that's what we're trying to do in processes: we're trying to manage that potential variability out.

DR. KIBBE: Do you want to comment on what's being miraculously presented to us here?

MR. CHISHOLM: I think the first point is that this is a general guidance, so we can't be too specific. So I'll try to capture a few statements from yesterday, a few things that I said--I said I'd do that last night.

The first one says the validation protocols will be different depending on whether you're dealing with a new product or an existing product. It's a very, very different thing that you have to do. Because a new product has probably, hopefully, been designed with manufacturability and all these principles in mind; whereas existing products haven't. Okay.

So when you apply PAT retrospectively, I think you probably will have different validation protocols than you have for new products where you've been sinking it into the process all along.

The second point is that your validation plan really needs to reflect the holistic nature of the system that you're in. If you have actually got a system where you've got real-time quality control and real-time quality assurance for the product coming off at the end, all statistically based, that's a very different situation from someone who's sampling occasionally off-line, even using these techniques.

And so, you've got to remember that if you have what I've just described, RTQC, RTQA, then what you do is, every time you manufacture a batch, you essentially revalidate your process because you're monitoring through both the QA and QC. So that's a very different situation from the one where you're occasionally sampling. And we don't use the word "statistically" often enough, I don't think.

I think the second one's a very important point that we haven't touched on, and it's going to be very, very important for the FDA, as well as the industry. There has to be some measure of yes or no, even though it's always going to be maybe.

You've got to be able to see why you passed something and why you failed something. So I think that your validation rationale has to find some way of establishing that, so that when you are running a manufacturing process you can justify to yourself and to the authorities that you are actually in compliance and why you took that decision. And I think that's quite a gray area, and I think it has to be addressed in some way.

Okay. Those were the three things you asked me to do yesterday.

DR. TIMMERMANS: I just wanted to make one or two comments. I fully agree with what Ken and Bob have been saying so far. But I think we should take a look at what reality is. I suspect from experience that we will be implementing process analytical technologies sparsely at first, and then later on we may design our processes around them.

I think what we need to do is realize that and really provide guidance in the area of how to implement--maybe, you know, we would start with one unit operation. If I look at some of the processes where we've used process analytical tools, we haven't used them in each unit operation; rather, we've picked those that we felt needed the technologies and implemented them there.

So I think the overall approach is correct and is a lofty goal, but the reality is that we will be implementing these tools in bits and pieces. So the guideline needs to address how to implement them in such cases, not only for new products. Even with new products, where we're designing our processes to accept these process analytical tools and marrying the two, there's still the need--and certainly, I imagine, a significant number of applications will be applied to in-line products, because we know we have problems with in-line products. So that's something the guidance needs to address: not only the overall holistic approach, where you have a new product and every opportunity to implement it, but also a case-by-case basis, or a case-by-unit-operation basis, if you will.

MR. LEIPER: I agree entirely with you. I think one of the problems that we've got when we talk about validation just now is that we've got a statement about validation that the process will be fit for what it's intended, or something like that. What we're trying to do here is to get behind the method to the basic systematic approach. I've been in a similar situation, and I think what we do is say, we've got a problem; we then ask why we've got a problem; we identify the risks, et cetera; and we go through it in a pretty logical manner.

Now, what I'm concerned about is that I don't think that we look at a lot of validation from that logical perspective. And I think that if we give people a systematic approach to validation, they can plan their scientific response against that systematic approach; whereas, as it stands just now, there's no such thing as a systematic approach. Different companies have different--you know, they look at it in different ways and come up with similar types of solutions, but it's a systematic approach that could be agreed between industry and the regulators for how one addresses these problems that would probably help to take us forward.

DR. TIMMERMANS: I fully agree.

DR. KIBBE: Anybody else?

MR. CHIBWE: I see Ken's comments, as well as Bob's comments, as very valuable for laying the foundation for process and analytical validation; and we could use those principles to tie in with what Tom and Jerry pointed out yesterday--and I believe they're going to present some of that today--where we could differentiate between batch processes and continuous production processes. And then we also have to use the intended-use validation approach, not necessarily always going back to the traditional validation approach, which is going to tie us down.

So I think if we use those as the basis and foundation, we'll end up with very good guidelines at the end of the day.

DR. KIBBE: Anybody else?

DR. MILLER: I think we'd probably all agree that what Ken said is the goal, and the question is how to get there; and part of how to get there is where we're going to start from and how we're going to approach it. That's why yesterday I asked, is it reasonable to start from the current validation paradigm? My thought then, and it's still my thought now, is that in terms of actually implementing this in practice, the people involved, from the top level all the way down to the field inspectors, would probably be more comfortable if we had a sort of evolutionary approach outlined, rather than all of a sudden changing the whole paradigm, so they'd start from somewhere they're familiar with and there would be a greater comfort level and, therefore, a greater acceptance level for the new paradigms.

And I think one of the things we should try to think about during our session this morning is the path to get to where we want to be.

DR. KIBBE: Does somebody have a path?

MS. SEKULIC: Not necessarily. I do have a comment, though. I concur fully with Nasr. I did a lot of talking during the last session, and I think we covered a lot of good territory. I'm not convinced that we're not overcomplicating the situation. Okay? I'm going to try and challenge a few thought concepts here. Separating validation into two approaches, one for pre-, one for post-, may not necessarily be the right thing to do. Whether you validate before or after the NDA, we're still concerned about the appropriateness for intended use. Therefore, the same logic applies: the same sequence of actions--method development, identification of sources of variability, identification of critical parameters and control points--followed by validation of those and, thus, the documentation of that, and we're done. Don't we already have the pieces and the framework in place? Are we trying to complicate things too much by raising PATs to a new level of scrutiny which may not necessarily be warranted?

DR. KIBBE: And what do people think about that? We're very quiet this morning. I think we need to make you run around the table. Yesterday we were so fired up. Did you have a long night or something? Go ahead.

MR. MADSEN: Yeah, I totally agree. In a perfect world, which we don't have, we should have been validating methods and processes this same way all along, and I realize that back several years ago maybe we weren't; but, certainly, the goal is to validate the method and to validate the process in a logical, sequential way. And I don't see where PAT would be really any different. There may be some differences in multivariate versus univariate analysis that we have to worry about, and maybe some of the methods are different in themselves, and because of the method differences there might be some little quirks we have to deal with, but basically validation is validation.

DR. KIBBE: Is this a good time for Jerry's list of things that are important in a validation process that apply to any validation process that we could just put in here and reiterate and say, guess what, you've been doing this and these are what we really still want?

DR. WORKMAN: Well, as Professor Lodder has magically projected on the screen, this is just basically a laundry list of things that have to be rationalized or addressed in the validation process, potentially, at least.

Going through it: sensor validation means the box itself and the sampling system. You have to know that the integrity of that is maintained. Then the software validation, including any multivariate algorithms: you just have to say what you're doing and verify that what you say you are doing is what you, in fact, are doing.

Sensor calibration and calibration transfer validation. Once your software, your algorithms, and your hardware are validated in terms of operation, then you have to take a look at what you're doing with that, which is generating models. Those calibrations have to be evaluated in relationship to what you're measuring, to make sure the integrity is maintained and that you are, in fact, reporting what you think you're reporting.

Also calibration transfer, it's not just important from one instrument to another, but that instrument will inevitably fail and you'll need to put that calibration back on the instrument after repair. So you need to demonstrate that there's a lot of integrity in what you're doing there. And then the process-monitoring protocol, batch versus continuous, is basically that as you're monitoring the process, you need to demonstrate that, in fact, you are measuring what you think you're measuring, where you think you're measuring it, and rationalize that whole issue.

And process modeling, in order to study the process, you have your basic thermodynamic models from the textbooks and engineering training. You need to take a look at that and see how true that is because oftentimes we know that when we look at real information it's much more complex than what we thought.

And then the process control protocols, when you're getting this good information from your system, what exactly are you doing with it to control the process and make sure that the end product is what you think it is.

And then the data management and storage protocol, how are you going to maintain that data and be able to display and demonstrate what you're doing at a future time.

Next slide please.

And then if we're just trying, again, to make a list: if we're looking at types of methods, you have a primary method, where you're actually analyzing the analyte directly and you don't need any secondary or backup methods to verify the method, so it has specificity and selectivity that are appropriate.

And then a secondary method requires a primary method to validate it so, in that case, both methods would have to be validated. And then, in terms of analyte complexity, you have direct measures, which might be an active ingredient; indirect measure is something like dissolution, which is a property based on composition or physical properties which can be measured directly, or some virtual measure which is, you know, cost of production or customer satisfaction or quality index or something. So those have different considerations involved with each one of those.

Then we also were talking about dimensionality in terms of univariate/multivariate which are quite different. And then there was another list--next slide, please.

Just on the implementation side, I believe a thorough document would have rationalization on a scientific basis for the following points, and maybe more: What information is needed, and why is it needed? Where is it going to be taken in the process--what are the sampling points? When and how often are the measurements needed, and what is the rationalization for that? And how is the information received from this whole validated measurement system going to be used, and what is the rationalization behind that?

And then, who's going to interpret that? What group or training is required to interpret that information and how is that used? So the whole rationalization behind that.
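[Editor's note: a minimal sketch of one item on Dr. Workman's list--calibration transfer--using direct standardization, assuming synthetic spectra and an illustrative master-instrument regression vector; none of the data or names come from the meeting.]

    import numpy as np

    rng = np.random.default_rng(0)

    # Ten transfer samples measured at 50 wavelengths on the "master"
    # instrument, and the same samples on a "slave" instrument whose
    # response differs by an illustrative gain and offset.
    S_master = rng.random((10, 50))
    S_slave = 0.9 * S_master + 0.02

    # Direct standardization: find a transform F so that S_slave @ F
    # approximates S_master (minimum-norm least-squares solution).
    F, *_ = np.linalg.lstsq(S_slave, S_master, rcond=None)

    # Illustrative regression vector of a calibration model built on the
    # master instrument.
    b = rng.random(50)

    # A new spectrum from the slave instrument is mapped into master space
    # before the master calibration model is applied.
    x_slave = 0.9 * rng.random(50) + 0.02
    y_pred = (x_slave @ F) @ b
    print(y_pred)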

DR. KIBBE: Thoughts, anyone?

MR. LEIPER: I think there is a point of contention here and, again, it's a view of what is a primary method. People say, well, this is a definitive method, but often you find it's just the first method that you thought of, and it's actually a question of knowing whether our primary method is capable of doing the job. It's one of these things that got mixed up over a period of time. And if the primary method that you've got doesn't actually correlate with what you need to measure, then we've got ourselves a problem.

And I think that brings us onto the complexity, and I wouldn't see this being--I don't see it being overcomplex or anything like that, but if you think of blend uniformity, we would probably tend to go to an endpoint. You know, so we wouldn't necessarily need a primary method or--

DR. WORKMAN: Of course, when you flesh these things out, you get a better definition. I think primary method indicates that you don't require any other method to validate or verify that. So, in that case, that would be a primary method.

MR. LEIPER: No, I agree with that, but that's a mindset away from what we're used to today, I would suggest. Is that a mindset away from FDA thinking or--

DR. NASR: I think so.

MR. LEIPER: And I think it's about capturing that because that's the way we'll get simplicity, to get away from the current mindset, I think.

DR. TIMMERMANS: I think what Jerry has just shown, in my mind, validates what Sonja said before. In my mind, the approach as laid out here is not very different, if different at all, from what I would expect we do for any analytical method or any measurement we make right now.

DR. KIBBE: No, I couldn't agree more. I think one of the things we're talking about is that, because it's a new approach, everybody's got these little ooh-ooh kinds of feelings; and as we get closer and closer to understanding it, it isn't anything new. It's a new way of doing a better job of what we're good at, and we use the same logic and same science to validate what we do.

I think, if you look at his list, and an example of primary is the active ingredient. And when we talk about blend uniformity, we used to talk about the active ingredient. And now we don't want to talk about just the active ingredient; we want to talk about all of them.

Well, this is a step forward in our understanding of what we're doing and controlling what we're doing. And if that happens to be our new measure, and we have a way of doing it that allows us to comfortably come to an understanding of blend uniformity in terms of all of the ingredients--near-IR or something else--then all of the ingredients are the primary measure, or the blend mix is the primary measure, and we go on. And so I agree with you: we can agonize over this, and one of the reasons we need a guideline which lays this out is that our colleagues, in the absence of coming to these meetings, are going to wonder what we mean and how complex we want it to be. And if they see the same thing they've always been doing, they might have more comfort in moving forward.

MR. MADSEN: Having said all that about blend uniformity, you can have a perfect blend uniformity--I've seen situations where the blend is uniform, but during the transfer process into the press or because of certain press considerations, the finished product may not be uniform or may not have the desired content uniformity. So we have to make sure we build this in.

MR. HALE: I agree with the statements that we are building on a foundation of validation, and I like the comment that Ken made yesterday and if I could restate it or in my own words, perhaps, that there are layers of validation that we go through. We start out with IQs and OQs in process, and in my mind the foundation parts of validation are really no different. Maybe they're a little more complicated or complex, but the thought process is the same, that equipment works, that sensors work, and that we have some way of justifying that we feel comfortable that equipment works and sensors work.

Where this does, I think, get us into a different realm, perhaps, is at the very top layer, when we start thinking about how our product is being released. And I think that there are potentially different ways to release product with additional technical capabilities and additional mindsets, and I think there are three of them that are up on the board.

The first one is pretty much what we do now, where we have a fixed set of parameters to manufacture by, and release is subsequent to manufacturing--and this can be thought of not only as release of the final product but as release of product from one unit operation to the next, so it encompasses both, I think. The release subsequent to manufacture is by some external physical/chemical testing: we run a unit operation or a manufacturing process and then we test it, and based on that data, we release the product from where it is.

The second condition--you can read it as well as I--is that the product is manufactured according to certain process conditions that have been shown during development and manufacture to infer product performance. We believe we understand our product and process well enough that, by measuring the process itself, we infer product quality, and there are relationships that are developed and confirmed with external physical and chemical testing to verify that.

The third one is that we're actually measuring product quality itself, and by measuring product quality itself, the process can be optimized--back to what Bob was talking about--you can actually learn about your process and change it as it goes along, as long as you understand your product quality.

And I believe these are different ways of releasing. At this level--not at the equipment level or sensor level, but at the product level--the meaning of validation potentially changes: instead of having three lots at a static condition and calling the rest of the manufacturing life cycle good based on limited testing, as you increase your sophistication of understanding of the product and the process, in some ways the product validation goes away in the ultimate realm of this. It at least changes dramatically in its concept at the product level. And, perhaps, this could form a basis for deciding what validation means and for differentiating between what is currently being done and the potential of the future as we add on these PAT technologies.

DR. KIBBE: Anybody else?

[No response.]

DR. KIBBE: I think getting back to your point, there's always been concern about measurements made during a continuous process being the right place to make the measurement and the right place to determine whether or not you should go forward. And I think what you're saying is that even though at some point we think we have a uniform product and we're ready to go, that doesn't mean that we have to stop watching that. And I don't think we've said that. I think what we're saying is that if we have a new method of looking at blend uniformity as we blend, then that's a good thing to use to know that at least at that point in our overall process, that particular process is well under control.

And then if another problem comes up--and I think that brings us back to the fact that we are not prepared, I think, to throw out end-stage testing on any of our products until we have the whole process under control. But, as we understood a little earlier, people aren't going to be able to put the whole process under control by turning a switch. And so we're going to do it bit by bit until we've finally gotten there. When we get there, end-stage testing might or might not go away. And I really don't think it'll ever go away, because behind it there's stability testing, and that's really hard to do with PAT because it's a different kind of process; it relies on our looking at and analyzing the product itself.

So, how many of you think that the ultimate reference for validating a process could very well be the endproduct analysis?

How many of you think that the ultimate way of validating an interim process or a process technology is the endproduct analysis? If I have a method that guarantees or looks at some stage in the process and I can do things to it to make it show that that is out of control and I test my product and the product is no good, and I do it and it shows that it is under control and my product is good, is that an ultimate--can we ultimately rely on that to validate our process?

DR. NASR: I don't think so.

DR. KIBBE: Okay.

DR. NASR: And the reason is, when you do endproduct analysis, you do not analyze every capsule or tablet you are manufacturing. So it is a sampling issue.

MR. LEIPER: There's only one instance where we actually do that, and we're not very good at it, and that's USP calibrator tablets for dissolution. We do 100 percent testing on them, and we occasionally get billets-doux from USP to tell us that they've had a problem with that batch of tablets.

DR. NASR: Right, how often--and that happens very often.

MR. LEIPER: And that happens very often. So if one wants a living proof of the problems that we've got, that is certainly one of the markers.

But I think the other thing that's interesting is that--and it's been brought out this morning--that we haven't changed our post-validation very much. What we've changed is our appreciation of what the need of the measurement is. That's what's changed and everything else has got to match with that some way or another. And it will happen by attrition. It will be units that we put in and it helps us with problems. There's absolutely no doubt about it. But I think from a lot of what we said yesterday, and it's been captured, you can certainly pull that out of what we've captured, I think.

DR. CIURCZAK: There was one comment, I think, that Arthur had made, that we're going to be doing the same type of thing: we're going to get numbers. Going back, Ken made an interesting comment to me yesterday about blend uniformity--people are used to seeing HPLC data, 97.8, 99.5, all this. And I had this same problem back at my last place of employment, where one of the people doing the work in development wanted to see numbers. And, as Ken said, well, principal components are numbers; things like Mahalanobis distances are numbers. But it doesn't require--and that was, I guess, Pfizer's first approach that came up years ago--anything more than looking at the variation until it reaches a minimum standard deviation. You don't require the thing we've all agreed is crummy, which is actually putting a thief in and pulling samples out.

If you want numbers, quantitative assays, so that you feel comfortable that that's what you always got before, then we're going to be taking a very elegant, nonintrusive way of measuring and having to use an intrusive method to back it up. So we have to be careful--we have to do education, because you're not going to see this. You're going to see numbers, but you're not going to see the same numbers.

So it's a feel-good thing. I think the biggest problem we had with a validation on a tablet is that we had to make it look like an HPLC. Before Gary and a number of people here worked on the NIRVWOG committee to come up with the new USP proposal, the first thought that Gary and I were playing with was, can we use the same terms to make it sound like HPLC--because the FDA didn't need this; our own RQA needed it before we could ever get it approved. And I think we spent six months getting it bounced back because something that was in tabular form they wanted to see in prose, and then something else they wanted to see as a footnote. Finally, I sat down with the director and said, is there anything in here that's violating our SOPs or a CGMP or any FDA guideline that you can point out? Or is it just something that you haven't seen before? And three days later we got the approved package back. She was honest enough to sit down and say, yes, it's just because it's something I haven't seen before; I can't find anything wrong, technically.

So we're going to need to do that, because if we try to be feel-good on a blend uniformity, going back to that again, and we have to start probing and doing HPLC to validate our NIR, it's very much like using a sledgehammer to validate microsurgery: the error is orders of magnitude greater and we're not going to prove anything.
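[Editor's note: a minimal sketch of the blend-endpoint idea Dr. Ciurczak alludes to--watching the spectral variation until it reaches a minimum standard deviation--using a moving-block standard deviation on simulated NIR scans; the data, block size, and threshold are illustrative assumptions only.]

    import numpy as np

    rng = np.random.default_rng(1)

    # Simulate one NIR spectrum per blender revolution: scan-to-scan
    # variation shrinks as the blend homogenizes.
    n_scans, n_wavelengths = 60, 100
    target = rng.random(n_wavelengths)
    spectra = np.array([target + rng.normal(0.0, 1.0 / (1 + t), n_wavelengths)
                        for t in range(n_scans)])

    def blend_endpoint(spectra, block=5, threshold=0.05):
        """Index of the first scan at which the mean moving-block standard
        deviation across wavelengths drops below the threshold, else None."""
        for i in range(block, len(spectra) + 1):
            block_sd = spectra[i - block:i].std(axis=0).mean()
            if block_sd < threshold:
                return i - 1
        return None

    print(blend_endpoint(spectra))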

DR. TIMMERMANS: Gary, while you walk to the microphone--Emil, I think what you're saying just comes back to what, I think, we emphasized yesterday that everything is based on scientific rationale, not necessarily numbers but scientific rationale.

MR. RITCHIE: Art had gotten onto something, and, Ken, I wanted to pick up off of--regarding a specific example of an endpoint measurement that we currently make versus what we're doing when we're looking at the process. I already have quite a dossier of documentation for a process development where they've purposely changed certain components to determine if, in fact, my dissolution profile is going to be the same at the end of the development.

They change, let's say, one constituent. The product development people know exactly what that constituent is. I come along and say, hey, rather than doing the dissolution at the end, I can actually tell you during your development stage, using principal components--okay--what those certain components are going to be so that you can go and physically change them and I can correlate them now to a new measurement, i.e., principal component.

Well, you put that in the package and you submit it. The question now becomes, how am I going to convince my regulatory people, and how are they going to convince the FDA, that what we've looked at with this new measurement, in changing those constituents, is equivalent to the dissolution measurement at the endpoint? I think that's what I'm seeing going on here: it's a problem to reconcile this endpoint measurement that we're currently doing in development with what I'm showing them how to do in real time during development.

I'm saying they mean the same thing. How? How are we doing that? That's what I think we need to focus on.

MS. SEKULIC: But that goes back to the education question, okay? There is no doubt that there is a lot of education that we're all going to have to go through, both industry and the regulatory authorities around the globe. But, again, if we can't make the science stand up on the basis of good science, if it's not defensible science, then we probably shouldn't be doing it.

I think what we're all saying is that this is defensible, validatable science that is going to be telling us a lot more about our processes, and that's what we need to focus on. Yes, there will always be people who won't get it, who won't want to get it. But should that be the stopping point? No, I don't think so.

DR. KIBBE: Let's get back to our task, which is to help the agency come up with a guideline for validating these kinds of things. And the more I hear, the more I say to myself, well, we don't need a new one; they've got guidelines for validation, let them use the old ones.

I have a feeling that that's not going to be a good answer for the industry, because the industry isn't going to be that comfortable with it, and they'll want something from us that is both encouraging and empowering, that gives them a place to start and a place to go. Those who sit around here and have listened to the discussion have that, and those who haven't been here don't. So I think we're going to end up with a new guideline or a new guidance document regardless. And the question I have for you is, the information that we've already put together, that we've seen up on the board--is that enough information? There is one other suggestion: that we put in references to things like ICH and other places to go. Perhaps we ask the agency to cross-reference the current validation documents for different kinds of processes, so companies could look at things that would be similar to what they're trying to do. Is there anything else that we need to include that would be helpful?

DR. MARK: Well, there are certain places you can point to where we know that the current guidelines would fall down, and one example that comes to my mind is the question of range. The standard requirements from ICH and so forth say, under various conditions, 85 percent to 115 percent of target value and so on. And if you have a product with a high concentration of the analyte, say 95 percent or so, you simply can't get 115 percent of target, because you would require more than 100 percent--more than the pure material. That's a situation the guidelines simply don't deal with and that would be physically impossible to meet. And there are probably a couple of other things that I'm not aware of that could fall into the same category.

So, certainly the guidelines need to be updated to cover these kinds of cases and probably some others, too.
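[Editor's note: a worked instance of Dr. Mark's range point, using his own figures: for a formulation that is 95 percent analyte by weight, the upper end of the range is 1.15 x 95 = 109.25 percent w/w, which exceeds the 100 percent w/w of pure material, so the high-range validation samples physically cannot be prepared.]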

DR. TIMMERMANS: I was going to make the exact same point that Howard did, and Gary and the NIRVWOG group have gone through the exact same exercise when we were trying to update USP 1119 for NIR methods. I think there should be some type of disclaimer that allows use of scientific rationale for not necessarily addressing all analytical process--I'm sorry, analytical method validation parameters for a process analytical technology.

Exactly, Howard gave one example, I think it also applies to some of the parameters that are currently being addressed in analytical method validation, and I think that that should be realized.

MR. LEIPER: I think that that point is very well made, but I don't know if you've seen Janet Woodcock's presentation where she actually speaks about CGMP being empirically based just now and she would prefer to see it scientifically based. She also makes a very astute comment on ICH standards, which are--she says that they're consensus-based standards, i.e., they're not scientifically based.

So, you know I think that there's no doubt that people in the agency have got some measure, I think, Joe, of some of the problems that exist in these areas. But we've got to recognize that the industry had the responsibility for putting them there.

MR. FAMULARE: You know, in terms of the basis for GMPs or things that are in ICH, we recognize that the GMPs themselves say that specifications need to be scientifically sound, et cetera. So you have to take the references there in context: much of the GMPs are also written about basic common-sense procedural issues, and I think what Janet is saying is that we focus on those issues a lot--whether we have the second signature on the batch record, or other procedures in place which may or may not have an impact--and sometimes we may miss the basic science. So our application very often is empirical, but to truly follow GMP, you should have science behind it.

MR. HALE: I'm kind of confused. I think the statements that we need to be science-based are right on, but you also hear a lot of complaint about what validation means right now: that we do three lots and call it done; that there have been years and years of our going over how to test blend samples and all of that. So I don't think validation is perfect as it stands, and this is an opportunity to address the ways we approach validation. Some of the comments that have been made--that we can take a more statistically viable approach to looking at our processes--don't fit into the way we do validation now, where we do a bunch of work, run it three times, and then hang out for a while and collect data or don't collect data.

So I'm not sure that the practice of validation shouldn't change, and this is an opportunity to assess some of those things and to provide a framework that allows companies to change current practice--everywhere from the unit operation side, where instead of taking samples you can look at the flow of powders, to how we do manufacturing lot release and validation--and allows us to learn. So I think the confidence in our current validation approach is not necessarily appropriate.

MR. FAMULARE: You know, I think as the science now is moving on, you know what I'm saying--what does this mean to be science-based--as the science moves on, the C in CGMP changes and that's why GMPs are written in such a broad, flexible way so that--I mean, the hope was when the GMPs were put in place that they wouldn't be constraining on future development. In actual practice, that may not always be the case because there's comfort in knowing that you have this program, this has been acceptable to the agency, this three-lot system. And there's fear in the change.

We talked about that a lot in the prior subcommittee meetings, so I don't think we need to go down that road. But just from seeing what's in these slides this morning, there is certainly room for improvement in the concept of validation, change in the concept of validation. As Ajaz said often in the prior subcommittee: instead of saying blend for 20 minutes, because that's what we validated at, blend to a certain endpoint that your sensor is telling you. We have to make those practical changes, if that's what the science is telling us.

MR. RITCHIE: Joe, that's a good point. Even further, what I imagine we're trying to do--the difference between the endpoint measurement that we currently do for release and a development measurement--is to say: when I have a failure in my endpoint measurement and I have a problem with that batch, more often than not I still can't determine where that failure came from just because the dissolution failed. But during development, I knew that I made process changes to purposely make my dissolution fail. Now I come along and say, well, during development I also made process measurements when I made those changes to the process, and now there's some understanding of why the dissolution failed.

Is that what we want PAT to do for us?

Because right now we can't say what cause and effect is. Do we expect PAT to be a panacea for both industry and the FDA--to say, well, we can minimize the number of failures and minimize the number of recalls, because now we have an expectation that we've seen the process from the beginning to the end?

MR. LEIPER: I think we are expecting--we understand it's processes that deliver consistent quality product. You know, and the pharmaceutical industry is not unique. And that's the way that we probably ought to move forward. And I think that validation is a case in point, but from my experience the problem that we get with the use of new technologies--and I think that you've been--Sonja will bear me out on this--is that we always get the difficult problems to solve. We never actually solve the easy ones.

I guess that what FDA are now looking for is to establish models on the way that we go forward, and I think that the point was made yesterday by Dave Rudd about using suspensions or, indeed, just using liquids and establishing principles for the way that processes will begin to look, because it's these kinds of systems--and then we can fit all the rest in around it.

MS. SEKULIC: I think it's imperative that we just start looking at our processes. And I keep going back to the method development component of this activity. You know, we've got to start looking at our processes, gathering data in order to translate the data into information and knowledge, to then take that knowledge and accomplish what we're all trying to accomplish, which is better utilization of that knowledge and our processes to eventually--or continue, hopefully, providing the customers with the appropriate quality product. That's really all that it's about. But we've got to start looking. I think that's my point.

MR. CHIBWE: I think one of the things I expect we can do today is to begin to define validation in terms of unit operations, because if I go back to my job tomorrow and my boss asks me: we're going to implement ABC, how are we going to do it? You were on the subcommittee working with those guys. How are we going to do the validation?

And really, where I want to end up at the end of this day is to confidently say: look, this is where we're going to start working now. If we're going to implement PAT in the batch mode, or blending--I think blending is pretty simple; the science is already there at MIT, Purdue, and elsewhere, and there are a lot of scientists already going into that--so I don't think we should get hung up on small problems.

What I think we should move on to is the bigger picture in terms of sample size, specificity, unit operations, and whether we're going to work within the batch mode or do the whole unit operation. I could give you examples.

For instance, you could have rejection for content uniformity on-line. You could have LIF telling you that if the potency is below 95 percent, you reject the tablet. So you're going to have to validate that sort of monitoring and control. So I think that's what we should really go into, building on the principles that we've already discussed in the first meeting back in February.

So I think today let's lay out a path which is going to give us, in general terms, some guidance on what we're going to do if we decide to implement just monitoring, or monitoring and controlling. So I think those are the things we should go into.
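
[A minimal sketch of the on-line reject logic Mr. Chibwe describes. The 95 percent potency threshold is his figure; the function and the simulated sensor readings are hypothetical illustrations.]

REJECT_BELOW = 95.0  # percent of label claim, per the LIF example above

def disposition(predicted_potency: float) -> str:
    # Accept or reject a single tablet based on the on-line potency prediction.
    return "accept" if predicted_potency >= REJECT_BELOW else "reject"

for reading in (99.1, 94.7, 101.3):  # simulated per-tablet LIF predictions
    print(reading, disposition(reading))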

DR. TIMMERMANS: If I understand correctly, what our discussions have led us to this morning is that that wouldn't be any different than what you're doing now; you know, whether you use a PAT method or whether you use an off-line analytical method, your principles of validation do not change.

MR. CHIBWE: But, you know, what you have to realize is that you're always going to have struggles, especially within the QA departments within the different companies. As long as something looks strange to them, they will tell you they won't accept ABC because ABC is not HPLC anymore. And to them HPLC is primary when it's not.

So what I'm asking for is we should put it down; even if it looks common sense to us, it's not common sense to everybody. So what I'm saying is let's have something that we could work on, and that's actually going to take us forward in terms of--I mean, we don't want to--if we say what we have now is fine, then maybe we don't need to have the meeting to discuss validation.

DR. C. ANDERSON: I'd like to come back to--oh, I'm sorry. I was actually going to come back to Moheb's point precisely. If we include in this document that existing validation guidelines are adequate for process analytical technologies, we've answered your question. You have something on a document that says the way we validate things now is adequate, QA can see that, that makes your argument for you that it should be acceptable.

MR. CHIBWE: There are always going to be exceptions. We can't use everything that we currently know about validation for the new technologies; some of what we currently use for validation is not applicable to them. Those are the things I want us to get into--especially, for instance, if I come to the statistical approach and using rejection: if you're going to be controlling the system, you're going to reject. On what basis are you going to do that rejection?

DR. C. ANDERSON: I agree with you that there are exceptions, but I don't think it's this group's charge to list or prescribe action based on those exceptions. I think it's this group's charge--and I'm speaking for myself--to come up with general guidances and leave it to the scientists to make correct choices within those general guidances, is my perspective.

DR. NASR: I totally agree with Carl. I think the focus of this group and the assignment we have before us today is to come up with a general guidance, not to go to the specifics for every application and exception and limitation. That should be left to the scientists based on the particular application, and if it is science-based, it will be accepted by the agency.

MR. CHIBWE: What I'm asking for really is not specifics per se. What I'm asking for is principles in which you're going to operate. If you're going to do a unit--for instance, you're going to do a unit validation, how are you going to do the unit validation? Those are some of the principles I think we can get into, without necessarily being specific. But at least you could say this is what you're going to do, you're going to do at least--if the batch size is so large or whatever, but at least have some science-based principle that we should be using.

I don't know. I hope I'm not confusing everybody.

MR. LEIPER: Just a clarification here. When you say "unit," you know, when you're using "unit," what do you--

MR. CHIBWE: Unit operation.

MR. LEIPER: A unit operation, a unit process operation.

MR. CHIBWE: Part of the--yeah.

MR. LEIPER: Okay. I think that, you know, as I said earlier, the thing that's changed, the only thing that's changed from the discussions that we've heard is that we understand the need that we were addressing and have been addressing for the past ten years is not the real need. No, that's the significant change. The way that you would go about it is actually very, you know, quite similar. But we don't break things down into unit operations normally, and we don't do risk assessment or variability assessment. So that would be a change, but that's purely a structural--you know, that's an application of a system if it was seen as being appropriate to go forward. But we need systems to actually allow us to do that.

But that's not going to help you with your guys if they want to do HPLC because they don't understand the need. You know, they're going to solve--they're going to try and solve your company's problem in terms of the technology that they know and love irrespective of how inappropriate that might be. That's something that they go and see a shrink about. That's not a scientist.

[Laughter.]

MR. CHIBWE: Some of it actually goes to their education. The education--

DR. KIBBE: I think, though, that the point we're talking about right now is how do we transfer what we think we have figured out to people who haven't heard the discussion and haven't bought into the process. And it might be useful--I don't know whether we want to do it here--for the agency to pick an example of a technology that is used in this way and say: for that technology, this might be an appropriate way of validating it in this position. The reason I say that is because the data we're collecting is so different, the data set is so large, and we're not making point determinations but continuous determinations and looking at fingerprints of output. That example, although not the guidance or guideline itself, gives people food for thought and a way to understand the general principles of validation, which apply regardless of what data you're collecting or what endpoints or measurements you're using to keep track of your process.

Anybody? Go ahead.

MS. SEKULIC: Yes, I tend to agree. In keeping with the three-point strategy that Moheb referred to earlier: the first element he cited was validation being tied to a suitable intended-purpose statement; the second was a sort of list of validation principles, where my opinion is that it should state something like, current cGMP validation principles should be utilized when and if applicable for the intended use; and the third component he had was the citations, pointing to other sources of information, which is where I think guidance or documents providing examples of possible or likely scenarios might be included.

I'm just going to add that the biggest concessions I think I've seen in all the discussions fall into two categories for me. One is the sort of encouragement or comment that could be included in the guidance--and we discussed this last time--regarding encouraging industry to have a "technology in development" category, a sort of special category which will alleviate the phobia of actually trying something on your processes without necessarily having to make a release decision on it. I think that's a big concession that industry will see, and I'd really encourage some commentary to that effect to be included in the overarching guidance.

The other big concession that I recall from our discussions last time was the discussion regarding the increased level of scrutiny that some of these technologies may impart on our processes and how to handle that, and we had discussed it at length, the out-of-trend sort of investigation and learning from that as opposed to automatically branding a deviant result as an out-of-specification result, which carries with it its own burdens and paperwork and investigations and so on and so forth.

So, for me, looking at it, you know, at a higher level generally, not specific to any technique, not specific to any unit operation, those are the big things that I think will encourage industry to sort of, you know, start going down this path and, if possible, to incorporate some general statements on those two points in the guidance I think would be really helpful.

DR. KIBBE: So what you're suggesting is that the agency still sticks with its out-of-specification requirement for investigation, but if there's an out-of-trend, that's something internal and the agency shouldn't get involved with it? Is that--

MS. SEKULIC: Yes.

DR. KIBBE: Okay. Does everybody--okay? You see the subtle difference there? As long as the product is still in specification, but there's a trend that's been picked up by a new methodology, that's not subject to the same kind of regulatory oversight as an out-of-spec would be. I think we talked about that yesterday in generalities--out-of-trend versus out-of-specification.

Would you want that in a validation guidance document?

MS. SEKULIC: I think it's going to allay some fears in the industry and move us in the right direction. I don't know. I'm open to other people's opinions, but I think it would encourage folks to actually start using this technology.

MR. MADSEN: Let me just make a comment. I think we can't lose sight of the whole concept of control, a state of control, in terms of a process. For example, suppose we had a validated analytical method for the active ingredient content of finished tablets coming off a press where we could, on the fly, analyze every one of them and reject with perfect accuracy the ones that were out of specification. Let's say normally when we ran this process we found we were rejecting 1 percent of the tablets, either super-potent or sub-potent, and this was typical. Then one day we run this and we find we're rejecting 30 percent of the tablets. All of the tablets in the good bucket are still good tablets, but we've suddenly rejected 30 percent of the tablets, which is different from the normal 1 percent.

Now, if I were a regulator, I would be concerned, even though the product that we're releasing is still good product. And I think somehow we have to make sure that we don't lose sight of this concept of state of control of the process.
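
[A sketch of the state-of-control check Mr. Madsen describes, assuming a simple binomial three-sigma limit around the historical reject rate. The 1 percent and 30 percent figures are his; the inspection count is an assumption.]

import math

p0 = 0.01        # historical reject fraction (from the example)
n = 10_000       # tablets inspected in the run (assumed)
sigma = math.sqrt(p0 * (1 - p0) / n)
upper_limit = p0 + 3 * sigma   # crude upper control limit on the reject rate

observed = 0.30  # today's reject fraction (from the example)
if observed > upper_limit:
    print(f"Reject rate {observed:.1%} exceeds control limit {upper_limit:.2%}: investigate.")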

MS. SEKULIC: Yes, but, interestingly, you used the words "out of specification." And if it does go out of specification, I think we would all investigate. What we're talking about is: if all the tablets I'm looking at coming out of a tableting run are 98 to 102, but my spec is 85 to 115, there's a lot of room there that I haven't seen with my sensor capability. And so the increased level of scrutiny that I now have will tell me when I'm going outside my normal variability range of 98 to 102. What happens between 85 and 98, and between 102 and 115, is a learning exercise that I'm venturing to guess the FDA may not necessarily want to be notified about, but it is important for me to understand my process, to improve my process efficiency.

MR. FAMULARE: I think that's exactly the way the FDA looks at it now. If you look at the current draft guidance that's out there on handling out-of-specification lab results, I think right in the beginning of it there's one sentence that states that if you have out-of-trend results, if you want to use this guidance internally in the company to examine those, feel free to use it.

But certainly, even under a different level of regulatory scrutiny, it's useful information to the company, to maybe mitigate or prevent something that may happen in the future.
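
[A sketch of the out-of-trend versus out-of-specification distinction discussed above, using Ms. Sekulic's figures: registered specification 85 to 115, observed normal range 98 to 102. The classification labels are shorthand, not regulatory terms.]

SPEC = (85.0, 115.0)    # registered specification
TREND = (98.0, 102.0)   # normal variability seen by the sensor

def classify(result: float) -> str:
    if not SPEC[0] <= result <= SPEC[1]:
        return "OOS: formal investigation required"
    if not TREND[0] <= result <= TREND[1]:
        return "OOT: internal learning, not a regulatory event"
    return "in trend"

for x in (100.4, 96.0, 83.2):  # simulated assay results
    print(x, "->", classify(x))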

DR. KIBBE: Anybody else? Go ahead.

DR. MILLER: Just a quick comment. Certainly if all of a sudden the process was rejecting 30 percent of the tablets, it seems to me the company certainly would want to know about that and take corrective action immediately.

[Pause.]

DR. KIBBE: Have we reached a lull? Do you think maybe we all need a coffee break? It certainly looks like we need an infusion of my drug of choice, so why don't we--we're scheduled for a 15-minute break at 10 o'clock. We'll take it now. We'll come back, and maybe during the coffee you'll start to chit-chat and get courage and want to go back and redo this whole thing.

[Recess.]

DR. KIBBE: It would be very useful for all of us to listen to AstraZeneca and how they went about validating a PAT system for one of their products. I think it might be useful for those of us who are worried about how we're going to get started back at the shop to see that it actually works somewhere and can be done, and to ask some questions about that.

After that, what I would like to do is refer back to Ajaz's presentation on the very first day and the list of questions on the back of that presentation to make sure that we've addressed all the things that we need to address. After that, any other comments or questions or what have you from any of you would be well placed, and then I think we'll probably let you break, and it probably will happen earlier than our time frame. And I will sit with the stuff that we've put together and come up with a handful of slides for this afternoon's presentation to the full group.

Now that everybody has gotten a chance to kind of relax and get back in the mood for serious thoughts about PAT, we have AstraZeneca up from the floor, with overheads, no doubt. Overheads, Bob? Thanks. Overheads. Outstanding. Can we--wonderful. Technology is wonderful, isn't it, folks?

This is the application of older technology to the understanding of future technology. And remember, folks, that the technology that's most important is the technology you carry around inside your head, and that's been with us for millions of years.

MR. CHISHOLM: I'll keep this down to certainly less than ten minutes, but please ask any questions. I'm sure--I think Ali has come in, has he? Ali will be in, and Ken, also.

I like to put this up because I've been seeing it for the past two days now, and when it comes to what we're talking about, it's an essentially very important thing. "Statistical thinking will one day be as necessary for efficient citizenship as the ability to read and write," and that was H.G. Wells in 1925. And that's essentially what we're talking about here to a large extent.

What I wanted to talk about is a plant that we sanctioned and built in Germany; it's an important tablet facility. It's a very straightforward plant, solid dosage; therefore, you're talking a dispensary, and you've got two routes. You can go either dry granulation or wet granulation. If you go wet granulation, you go through a Collette granulator and a fluid bed dryer. If you go direct compression, you don't go through the Collette granulator and fluid bed dryer; you go straight to blending, and then from blending into the tablet press.

I've put up the network diagram not to alarm you but to try and broaden the discussion, because what the discussion this morning seems to have taken is very much a view of an isolated system, like a sensor, and these systems aren't isolated. If you're going to actually do this as a total solution, you've got to look at it holistically. And, really, you're talking about such things happening from cradle to grave throughout your plant.

If you look here, you'll see--I'm going to have to walk across, so I'll shout in my Scottish voice. Can everybody hear me?

Spectrometers here--

VOICES: Can't hear you.

[Pause.]

MR. CHISHOLM: Can everybody hear me now? Okay.

You see there are four spectrometers here for the solid dosage plant. The first one is basically monitoring everything that goes into the dispensaries, and it's multiplexed so it's also controlling the fluid bed dryer. The second one is a specially developed one which mounts on an IBC on the blender, and then we have them also at the exit of the tablet presses.

So everything coming in is checked. The blend is actually controlled to a blend endpoint, which will be a variable time depending on the formulation. And that's quality control, if you like, of what we're doing. It's actually statistical process monitoring, if the truth be told.

Once you get to the tablet press, we're statistically monitoring tablets coming off, and that's your quality assurance. So you've got to think of the two as being different. To actually make it operate, we have a final PC. This is all 21 C.F.R. 11, so this is password-controlled. It talks to a server, which is up here. The server calls in the analyzer. The operator then bar-codes the product he's going to look at, fits in the probe and gets the reading back.

That's just simple, and that's the sort of thing we do just now. But as you can see, for an application like this we've actually ethernetted the whole thing, and the NIR server controls everything, because we've taken a completely holistic view of the plant.

You could actually talk to the system from anywhere in AstraZeneca if you knew the right way to get into it, because it's on the ethernet up here, and it's also connected up to the company network. Okay?

So that in itself brings in a lot of validation worries because you have what's essentially an open system, and 21 C.F.R. 11 doesn't like open systems. So there are issues there that we have to get concerned about.

So you can see how that works. So throughout the batch, we're actually monitoring everything going through the dispensaries, controlling the dryer, controlling the blend to endpoint, and then statistically monitoring the tablet presses for things like active content, et cetera. Okay?

And that's not that much different, really, I don't suppose, from what we do just now, except I keep using the words "statistically monitoring," because you do it throughout the batch, and we do it for every critical variable that we see there.
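
[A sketch of blend-to-endpoint control as described above: rather than a fixed blend time, stop when successive readings stabilize. A moving-block relative standard deviation test is one common approach, but the window, limit, and signal trace here are assumptions, not AstraZeneca's method.]

import statistics

def blend_endpoint_reached(readings, window=5, rsd_limit=0.5):
    # True once the relative SD of the last `window` readings falls below rsd_limit (%).
    if len(readings) < window:
        return False
    block = readings[-window:]
    rsd = 100 * statistics.stdev(block) / statistics.mean(block)
    return rsd < rsd_limit

trace = [92.0, 96.5, 98.8, 99.6, 99.9, 100.1, 100.0, 99.9, 100.1]  # simulated blend signal
print(blend_endpoint_reached(trace))  # True once the signal has plateaued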

The thing you have to really start to worry about is how to handle the data sets you're going to get, because these data sets are very, very big. If you think of a product life of, let's say, 20 years, and you may have to keep that data for regulatory purposes or whatever for 20 years--that's not been defined, and I think perhaps the guideline needs to start thinking about defining things like that--then you've got a big job on your hands and you're into archiving.
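
[A rough order-of-magnitude sketch of the archiving concern. Every figure below--spectrum size, spectra per batch, batches per year--is an assumption for illustration; only the 20-year horizon comes from the talk.]

bytes_per_spectrum = 16_000   # ~2,000 wavelengths x 8-byte floats (assumed)
spectra_per_batch = 5_000     # across dispensary, blending, compression (assumed)
batches_per_year = 500        # assumed
years = 20                    # retention horizon from the talk

total_bytes = bytes_per_spectrum * spectra_per_batch * batches_per_year * years
print(f"~{total_bytes / 1e12:.1f} TB of raw spectra over {years} years")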

If we look at the sort of things you need: the diagram I've just shown you is something like that there and that there, because that's the operational part in the plant. And that's the NIR server, which is the brains of the system, and down here you've got a number of analyzers with their associated controls, et cetera.

So let's try and think how this works. People have been talking about having to go back and implement something like this. Well, let's say that we take a tablet and we want to do the active content. The first thing you've got to realize in any of these things is that this system's dumb; it's silly. You've got to create a model because it doesn't know what it's doing. So you take a tablet through an analyzer; the analyzer will analyze it, send a spectrum up, and it will be stored here. So you've got to have spectral data and model version storage. These are modules--these are functionalities; they're not necessarily separate computers. You've got to have some way, in the long term, of storing all the spectra.

So you've done that with your tablet. You've still got it, because the nice thing about these techniques is they're non-destructive. So you go across, you stick it in your HPLC, it tells you the active content, and then that goes into the analytical data storage module.

Now, in validation terms there is a very critical issue here. If this says Batch A, Tablet 17, then that's got to say Batch A, Tablet 17, and these aren't simple issues, because one day a regulator is going to come across and say, tell me what happened to Batch A, Tablet 17. So all that data has to be stored, and basically it's got to be traceable. This will then go into some sort of chemometric modeling module--and Sonja and Ali know an awful lot more about this than I do--back down here, and gradually you would create your algorithm, which is your model.

So now you've done your modeling, and I would say to you, from a validation viewpoint, you need to continue to store all that modeling data, because one day someone from the agency will come along and say: how did you create the algorithm?
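
[A minimal sketch of the model-building loop described above: spectra paired with HPLC reference assays, fed to a chemometric model. Partial least squares is a common choice, but it is our assumption here; the transcript does not name the algorithm, and the data below are simulated.]

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 200))  # 40 tablet spectra x 200 wavelengths (simulated)
y = 100 + X[:, :5].sum(axis=1) + rng.normal(scale=0.2, size=40)  # HPLC assay, % of label (simulated)

model = PLSRegression(n_components=5)
model.fit(X, y)  # in practice, X, y, and the fitted model would all be stored for traceability
print("calibration R^2:", round(model.score(X, y), 3))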

So there are a lot of problems in information storage and retrieval here, and we haven't really addressed any of these in what we've been saying. Whether or not it should appear in the guidance in any way, I don't know. It's up to you. But it's a lot more complicated than people think it is.

You've got your model there, nicely stored up here. So you've then got to validate your model. Notice I'm using the words "validate the model." Now, how do you do that? Well, you carry on and do the same thing as before. The tablets that are here, don't let them be destroyed; stick them through there. And what's happening this time is spectra are coming up, the system is predicting, it's telling you what the active content is. You take the tablet, HPLC it, that comes out here, and it tells you what the active content actually is.

That's a way of validating, isn't it? Because you're now relating your spectra and your model to actual data on the plant through the registered test methods, the way we would have done it before. And in the initial stages of all these things, I cannot see any way to move away from the accepted tests. That's why I said yesterday you've got to learn to walk before you can run.

We will have to base it on our old methodologies just to model and then to validate the model.
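
[A sketch of the validation step just described: predict independent tablets with the model, assay the same non-destroyed tablets by HPLC, and compare against a pre-set acceptance criterion. The 2 percent criterion and the numbers are assumptions for illustration.]

import numpy as np

nir_pred = np.array([99.8, 101.2, 98.5, 100.4])   # model predictions, % of label (simulated)
hplc_ref = np.array([100.1, 100.9, 98.9, 100.0])  # HPLC reference assays (simulated)

rmsep = np.sqrt(np.mean((nir_pred - hplc_ref) ** 2))  # root mean squared error of prediction
print(f"RMSEP = {rmsep:.2f}%, acceptable: {rmsep <= 2.0}")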

So you've now validated your model, and you're going to normal production. All that's happening is the tablets are coming through, statistically through the batch, not every tablet, because there's far too many, and you need lots of analyzers if you're going to do every tablet, and there's no need.

It comes up here. It says predict and tells you the result, and you release a tablet based on that result because you've got a validated model and a validated process. Okay? Is everybody happy with that?

MR. HALE: Bob, when you say you release a tablet, do you actually release a tablet or do you release a batch?

MR. CHISHOLM: That's a question to throw open to everybody. Clearly, you would take the results across a batch. You give me an immediate problem there, because if you find a tablet is not what you'd like it to be, you have to be able to identify that tablet from the data coming off the tablet press. This plant is just in the process, as we speak, of being validated, so we haven't actually released anything yet. So you've given me food for thought, which is what these occasions are all about. Yes, we've got to take these decisions.

Okay. So you've got your spectral data storage. You've got your servers and your analyzers. You've got your modeling module here, analytical data storage. You've got traceability for the inspector who comes in a few years later: you can show how you built your model, how you made the algorithm, how you validated it. So you've got to have something here that actually stores all these reports, because you're going to have to have a validation report for that stage, and you're going to have to have batch reports, so a reporting functionality is required down here--again, long-term storage.

But there's something else I think you need, and the lady yesterday asked about control. What I've put down here is a module for historical trending and manufacturing execution. To get the best out of these systems and improve your knowledge, what you're actually doing as you go through the batch is statistical process monitoring, just to make sure the trends aren't beginning to take you out of compliance. And you'll have alarm levels or, call them what you will, warning levels. And you'll watch that in the normal batch.

But over a period of time you will have built up a history of a large number of batches, and you want to store that data because you want to data mine it; by data mining, when you see that your process changed slightly, you begin to understand why it changed.

And I've heard one or two questions this morning about whether, for instance, just doing end testing would be sufficient. Well, for me the answer is no, because I think you need to take a total approach to control. I would say that; I'm a control engineer.

One thing I've learned throughout my career at AZI, et cetera, is that things always change. Manufacturing processes always change. Materials always change. That's just a basic given. So you've really got to take that into account, and that's why we're trying to take a total approach to this.

Okay. Any questions? Does that help anybody? Is that you, Ali? I can't see that far back.

MR. AFNAN: On the question that was asked--do you release the batch based on that tablet--I think the answer is, yes, we would release a batch based on a statistically representative number of tablets which have been analyzed. Now, if you have a batch of 200,000, the question I have--and I don't have the answer--is: what is a statistically representative sample?

Now, let's say you said it's 1 percent; out of 200,000 that's 2,000 tablets. Now, of those 2,000 tablets, considering that our processes are based on the way we've been manufacturing until now, I have no idea, but I would be surprised if all 2,000 were within spec, whatever that spec is.

So then what do you do with the numbers that fall out of spec? I think that was answered yesterday: you would see things which are outside your window of operation, your window of acceptability. And that's a completely different new ball game. But there will be those that come, because if you go from 6 to 2,000, you're going to see things you've never seen before.

So the answer is we probably would release the batch, but you would have to see what that change was. At the same time, we're no longer going to come up with an answer which says the tablet is good or the tablet is bad. You might actually say, well, you find that the dissolution was wrong but all the other aspects of it were right, because, again, we're not just looking at one property of one component of your product. We're looking at the full process. So it doesn't matter if one part of it is--well, "doesn't matter" is the wrong terminology. But you're looking at a complete picture rather than just one tiny part of it.
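
[A sketch of Mr. Afnan's arithmetic: with 2,000 tablets examined instead of 6, even a very capable process is unlikely to show zero excursions. The 0.2 percent per-tablet excursion rate is an assumption for illustration.]

p_out = 0.002  # assumed per-tablet probability of falling outside spec
for n in (6, 2_000):
    p_all_in = (1 - p_out) ** n
    print(f"n={n}: P(all within spec) = {p_all_in:.1%}")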

DR. KIBBE: Tom?

MR. HALE: I think it comes--in the context of validation, I think as information is gathered and experience is gained, one thing that will come up is the definition of a batch, because a compressing machine can be looked at as a continuous process. And as described here, it's a whole bunch of tablets coming off in a row, and it really is a continuous process.

As this advances and the opportunities increase and knowledge is gained and people learn, I think what will be challenged is this idea of batch size, of what that really means. We describe it somewhat artificially, but I think that, especially from a guidance point of view, as these things evolve, we need the opportunity to address that issue--in terms of sample size, how we deal with samples statistically, and how we deal with them from a batch size and validation point of view. And the whole concept, in the context of what Bob was saying about a holistic approach, needs to be written into this guidance, I believe.

DR. KIBBE: Does it need to be in the validation guidance, or do we need to understand it in other ways? The possibility is that they will have process measurements or assessments that apply to every tablet as it comes off the line. Now, that might be down the road, but it's a possibility. And then your question--do you release that tablet or do you release the batch?--really comes down to the fact that we release every tablet that fits and we throw out every tablet that doesn't. And when we start throwing out a lot of tablets, then we start relooking at our whole process. In that case, batch becomes meaningless, and process control is everything. And that changes a lot of the way the end user--the physician and the patient--looks at things.

And so there's a lot of--do you want to respond? I saw your hand come up. You have to talk into the mike, though.

MR. AFNAN: Okay. There is another side to this. We have a way of looking at the way we have been operating until now, which is you go in in the morning and you do nothing until the afternoon. In the afternoon, you look at the quality of your tablet.

Now, if you've actually been controlling--and I use the word there "controlling." I know a lot of people have difficulty with the word "control," but controlling your processes, then when you come to look at your tablets, all you're doing, you're assuring the quality. You're not controlling the quality. Because once it's a tablet, it's too late. If it's a bad product, it's a bad product. If it's good product, it's a good product.

What you should be doing--and I think that's what PAT is--make sure you make a good tablet. So then the whole concept becomes different by saying, well, let's not just look at the tablet. You have to look at the whole process. If you've looked at your process and you have been in control of your individual steps, then it's only really a final check. You know, when you make coffee, you pour coffee into the cup or into the jar. Well, in Europe we pour it into the cup, and you pour hot water on it. You don't stick it in your mouth to see whether it burns or not. You know it's hot. It will burn.

So it's the whole concept that you should look at the full process rather than, let's say, well, how many tablets do we release or how many do we reject? I don't think we're capable of doing the whole number of tablets which are being manufactured. That will not fly. And I don't think that would actually--you know, at the rate of 200,000 an hour coming out, there's too many tablets coming out in a given minute for us to control every one of those and say, well, we reject this one, we reject the other one. The whole concept is you shouldn't have any bad tablets rather than let's see which is bad and which is good. You shouldn't have any bad tablets. We're just confirming that we don't have any bad tablets.

DR. KIBBE: Anybody else?

[No response.]

MR. CHISHOLM: I'll finish off with this quotation, maybe to show you how difficult it is. It's called "The Impact of Innovation." "There is nothing more difficult to plan, more doubtful of success, nor more dangerous to manage than the creation of a new system. For the initiator has the enmity of all who would profit by the preservation of the old institutions, and merely lukewarm defenders in those who should gain by the new ones." That was Machiavelli in 1527, and I guess it applies to what we're doing today, because it's very difficult to get these things accepted inside your own companies.

Okay. No more questions?

MR. CHIBWE: I just had one question for you, Bob. Is the system optimized? And did you validate it?

MR. CHISHOLM: The system is being validated at the moment. The system is running. But the plant has only just started up. It's a new plant.

MR. CHIBWE: Did you have some sort of guideline to follow your validation, your--

MR. CHISHOLM: No, we--would you like me to talk a little bit about that? We had to invent our own.

MR. AFNAN: Logic.

[Laughter.]

MR. CHISHOLM: I'll talk a little bit about it. This is an existing product, which is a good one to start with. We have five years' worth of production experience and, therefore, five years' worth of retained samples. So we have been creating a model using these retained samples to get us going. That's where the model is coming from.

Having done that, now we're starting the plant up, and we'll have this whole system running, and we'll be able to expand the model with the additional data. And the model will change, because with any new plant, things change; that's something you have to recognize. So you have to expand your model and make it more relevant. That's the stage we're at just now.

We're also making designer tablets, for want of a better word, because this is a very well controlled product and we want to broaden the model across the specification range, which is another difficult thing. You'll find that if you have a very well controlled process, that's actually harder; it's far better if your process is a bit of a mess, because you get more data quicker.

So those are the stages we're going through. The actual validation of what we intend to do is along the lines I've described. Because it's an existing product, we would run the traditional registered methods for this product and also run the NIR, and compile parallel dossiers to demonstrate equivalence between the two methods for a period of time we'd have to discuss with the agency. These are all new areas, and they're also difficult, I think, at this point in time to put in a guidance, because I don't think we necessarily know the answers. But I think the answer to that is, that's something you've got to discuss, and you've got to try and make it statistically relevant, so we've got a statistician involved in the experimental design who will give us advice on these things.

MR. CHIBWE: Are there any lessons learned that you could probably share with us? I mean, you don't have to share any proprietary information, but just some lessons. I mean, as you go through a process, of course, you're going to go through certain things. I'm just wondering if there's things here in the U.S. that we could probably learn from you in terms of putting up the validation principles.

MR. CHISHOLM: I think maybe the lesson learned that I don't think we've been as good at as we should have been is you have to have a cross-functional team approach to this. It's not just Ali Afnan and Bob Chisholm. It's got to be the people in plants. It's got to involve pharmacists as well. It's got to involve QA people. We've now got a full-time QA person, and that's who's going to compile the dossier.

It's all about teamwork at the end of the day. The original concepts were Ali's, (?), and mine. We did the original strategy. We actually sat down ourselves with Jim Drennen and brainstormed how we could do this, and we developed micromodel 1, micromodel 2, moving into the micromodel in the plant, with validation at each stage.

All of this is becoming accepted and part of the normal vocabulary, but this is so new that you're doing it for the first time, and there's just nothing in the literature about it. So teamwork is very important, or you won't succeed.

MR. CHIBWE: Just one last question. Are you doing cross-validation for all the critical pieces or just certain selected parts of the process?

MR. CHISHOLM: No, this is a new plant, and the plant has been totally validated. What I'm describing is just the validation of the associated process analytical technology and how we've achieved it. The plant itself has gone through all the normal validation you would expect--equipment validation, performance qualification, et cetera--using existing methodologies and existing registered tests, because it's an existing product. It's a new facility but an existing product. That's why I tried to let people see there is a distinction to be drawn.

MR. CHIBWE: Thanks.

MR. CHISHOLM: Okay. Everybody happy?

DR. KIBBE: You have a question?

MR. RITCHIE: When you go live, will there be--I mean, I see an opportunity here for this to be a textbook model, if you will, for how the rest of the industry should proceed. When do you see that happening, or the information becoming available, in terms of a book or something?

MR. CHISHOLM: I've got no problem with that, to be honest, but there are others who would have a problem with it. It's my belief that FDA, MCA want the industry to move forward as an industry, and we'll get there quicker if we all move forward together. So I have no problem in information sharing.

I certainly would not be doing that sort of thing until we actually had made a submission, I don't think. That would seem reasonable because that's a very important part of it. And there may be a lot to learn from that. But it would be then up to my regulators and the others to decide whether or not we published everything or what was intellectual property. That would not just be my decision in isolation. But I totally agree with what you're asking.

DR. KIBBE: Okay? Well, thank you very much. From Machiavelli to H.G. Wells to 2002 and process and you.

One of the things that we've been asked to do is take a look at the method validation issues that were listed on the back of Ajaz's handout that went with his first presentation earlier on. For those of you who have them, I think we can go through them in a reasonably expeditious fashion. Our support people here have been gracious enough to also put them on slides, so we can read them if you don't have them in front of you.

MR. HALE: Could I jump in before you start, to follow up on Bob's talk? One thing we might want to think of in terms of our guiding principles for validation, or whatever that list was that we came up with, is that there is a need and a desire that, if PAT leads to the introduction of new approaches for process control, there will be a mechanism to work with the FDA to institute those new methodologies. I think it's critical to keep that door open--that as these technologies allow changes that are more fundamental than just sensors, there is a mechanism and a desire to work with the industry to make that happen, as in the case of AstraZeneca. And it has to be a guiding principle, I think.

DR. KIBBE: Okay. Anybody else?

[No response.]

DR. KIBBE: All right. Tom? It's our last presentation slide, I think basically.

[Pause.]

DR. KIBBE: Okay, while they're typing, I hope everyone has got a copy of Ajaz's presentation. We could start with the first statement, which will also be put up there when they get caught up with us. It says that a validated laboratory method exists for a regulatory parameter across the NDA range. How do we replace this with a PAT method? Is there anyone who wants to comment on that?

DR. TIMMERMANS: Art, before we get into that, let me just put in a little bit--not necessarily a disclaimer, but what we're looking at right now is a number of discussion points that we went over when Ajaz came to Merck fairly recently. It's certainly not an all-encompassing list of what we see as the issues; it's just a couple of highlights that were plucked out, and the answers that are written up here reflect some of the outcome of the discussion. But, again, that was done among a very small group of people, with Ajaz and Chris Cole from the FDA guiding us. So just so people are aware and can put this in the right context.

DR. KIBBE: Okay. We now have context. This is questions and responses that came from a discussion between FDA staff and members of one of the larger pharmaceutical firms--in beautiful downtown southeast Pennsylvania and at West Point?

DR. TIMMERMANS: Central New Jersey.

DR. KIBBE: Oh, interlopers. Okay. So regardless of where the item came from, what do we think?

We don't think? We do think? Jerry?

DR. WORKMAN: Is there any relevance between this discussion and the slides? I don't think so right now, right? The slides have nothing to do with this; is that correct?

DR. KIBBE: It should.

DR. WORKMAN: Oh, there we go. Sorry.

DR. KIBBE: These slides are these statements, I hope. Okay?

DR. WORKMAN: Sorry. Thank you.

DR. KIBBE: What I read I think is their number two. I was just using this paper as a--you know. I don't care. We can go anywhere.

This is a regulatory parameter across an NDA range, and that's the first item under the PAT method validation issues on the handout. Right? It's listed as number two up there, but don't let that confuse you too much.

So the question is: Do we have any thoughts on these items? And we'll put them up one at a time, and if there are thoughts, then we'll try to see if that is needed to be reflected in what we've already produced. Have I got everybody completely and thoroughly confused? It's my role as an instructor to confuse the students so that when they take the exam, they don't do well. Because, otherwise, how can I flunk them out?

DR. WORKMAN: Excuse me. Does this involve correlating the new method to the old method? It's a question for the group.

DR. C. ANDERSON: I would take it as a given that it does. Further, in the answer to that example, we need to include some sort of statement that specifies that the PAT may or may not span the range of the original validated method, and that's acceptable.

DR. KIBBE: It also can go the other way, too. The PAT might actually have information that goes further than the validated method.

DR. WORKMAN: It may be implicit in this, but do you want to make it explicit that when you validate the PAT method that it does correlate with the original validated method?

DR. KIBBE: So we want to add to the second paragraph here that the methods are correlated and they don't necessarily cover the same range of information? And that's still acceptable?

How's my man doing over there?

[Inaudible comments off microphone.]

DR. KIBBE: Sure. That works. Italics, yes. There's a program called Edit that, when you--I always push "edit" on my word processor, and then when I start changing things, you have to accept or reject the edits. I don't have to worry about changing fonts or crossing-outs and things. It just does it. Horrible to be slaves to all of this equipment. Bring back the quill.

DR. CIURCZAK: There's one thing on this. We want to be careful about correlating it, because you may be doing a process method for which there is no method right now--thickness of coating, on-line, for example. I just mean that you have to be careful about correlating it to a method that doesn't exist.

DR. KIBBE: The statement says, assumes that there is one.

DR. CIURCZAK: Assumes, but, I mean, you may be doing more tests. You don't want to rule out the idea of having more tests, different tests.

DR. KIBBE: We have lots more questions, so this one said--okay. We've got one, we've got a new one, what do we do? Well, we do a correlation.

DR. WORKMAN: Excuse me. There was also a statement about the ranges may not be identical.

DR. KIBBE: Right.

DR. C. ANDERSON: There is actually another example coming up that will address that. I got ahead of the game.

DR. KIBBE: Good man. Okay. So we're happy--yes, sir? We're not happy. You have to push your little button or we can't hear you.

DR. WOLD: The term "correlate" is to me fairly diffuse. If you have a correlation of 0.1, it correlates, but it's not a very good correlation. And I think one needs some statement that it should correlate within the measurement error of the traditional method, or something like that, over the range of interest; otherwise, you are in trouble.

DR. TIMMERMANS: The question is whether it should correlate to the same accuracy as the existing method or should it correlate to the accuracy required by the process or the information that you need?

DR. C. ANDERSON: I think the answer to that is very clearly it has to be suitable for intended use, and the existing method may or may not be more precise than is necessary. So I think this is another suitable use argument.

DR. TIMMERMANS: Agreed.
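
[A sketch of Dr. Wold's criterion: the new method should agree with the traditional method to within the traditional method's own measurement error over the range of interest. All figures below are simulated illustrations.]

import statistics

ref = [85.2, 92.1, 99.8, 107.5, 114.6]   # traditional method results (simulated)
new = [85.6, 91.8, 100.1, 107.0, 114.9]  # PAT method on the same samples (simulated)

ref_sd = 0.8  # known repeatability of the traditional method (assumed)
diffs = [a - b for a, b in zip(new, ref)]
agreement_sd = statistics.stdev(diffs)
print(f"SD of differences {agreement_sd:.2f} vs reference method SD {ref_sd:.2f}:",
      "acceptable" if agreement_sd <= ref_sd else "investigate")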

DR. KIBBE: How's that?

MR. LEIPER: I think there is an assumption there that the validated method exists for a regulatory parameter. But does it actually meet the real need? You know, we haven't actually--there's nothing there that says it meets the real need. A real need.

MS. SEKULIC: Maybe we can provide the assumption that if an original method already exists, that a need has been identified. Maybe.

MR. LEIPER: Well, I think that that's the--

DR. KIBBE: That's the hope.

MR. LEIPER: That's the starting point. You know, does it actually meet the real need?

DR. MILLER: It seems to me that if you have a new method, it would probably need to be validated essentially to the same extent that the original method was. Now, the values from the old method could be used for those validation parameters where that's appropriate, such as accuracy, perhaps. But the other things, such as precision, which don't necessarily depend on the values obtained from the other method, would probably have to be validated as though it were a completely new method anyway.

DR. NASR: I think we have to distinguish between using information or data from the old method to validate the new method, and using the same validation criteria for the new method; we have to make that distinction. The method should be suitable for the intended use. We can use the old method to generate data that we can utilize in validating the new method.

MR. COOLEY: I think that's a very important point to make. We utilize on-line HPLC to monitor and automatically cut purification columns, and the on-line assay has a large bias compared to the laboratory assay. But the bottom line is we can set criteria such that we can use information from that instrument to do process control, and I can produce mainstream cuts that meet the forward-processing criteria every time, even though there's a large bias between that assay and the lab assay. So it meets its intended use every time.
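
[A sketch of Mr. Cooley's point: an on-line assay with a known, stable bias against the lab assay can still drive correct cut decisions once the bias is characterized. The bias value and cut criterion are assumptions, not his plant's figures.]

known_bias = -1.8     # on-line assay reads low vs. lab by ~1.8 units (assumed, characterized in validation)
cut_criterion = 95.0  # forward-processing criterion on the lab scale (assumed)

def forward_process(online_reading: float) -> bool:
    # Apply the lab-equivalent criterion after correcting the characterized bias.
    return (online_reading - known_bias) >= cut_criterion

print(forward_process(93.5))  # 93.5 + 1.8 = 95.3 -> True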

DR. KIBBE: So have we got that in a simple correction, or do we need more words? We're good? Let's try another one.

DR. TIMMERMANS: Well, the only thing that was missing here, we talked about the range--or are we--

DR. KIBBE: Different question. We're going to be home on the range soon. How to handle the validation method for a non-regulatory parameter.

We don't want to do that, right? We just don't want to--if it's not regulated, we don't want to know about it?

DR. WOLD: We get into a problem here. We have said that if we want to use measurements--measure during the process instead of making an end analysis--then we may decrease the end analysis a lot or maybe even get rid of it. If we just use methods corresponding to what we do today, but substitute PAT everywhere and use it for end analysis and so forth, then we will not be able to move things earlier in the process, and we're in the same position as before. So we have to have some mechanism to incorporate measuring at new places, earlier, with new methods, and those will automatically be new, because by definition they don't exist in the registered method now. And they have to be validated; otherwise, if AstraZeneca or somebody comes and wants to apply for a new drug and they say, we do this now with new methodology and whatever, then we have to have validation demands on those.

DR. KIBBE: So the statement is correct the way it is; we don't have to change it? If you're going to put in a method--a process assessment technique, you have to validate it no matter who wants you to put it in. If you want to put it in for yourself or the agency comes and insists or someone--it doesn't matter. You really have to validate what you're doing. Generally accepted? Yes? No?

DR. WOLD: If you are going to use it for on-line quality control, of course, then you have to validate. But we have also said that for research use and for process investigation and so forth, you are allowed to put in methods just for, say, process-studying purposes. And there we can't have the same demands on validation--or you don't need any validation at all--because part of the purpose may be to investigate whether the measurement works. And you have to be allowed to do that.

DR. KIBBE: It says "appropriate validation," right?

DR. C. ANDERSON: Can we address your comment by changing the question a little bit, by making the question to say validation of PAT methods for release criteria or for real production? That's where I hear you driving.

DR. WOLD: They are going to be used for release.

DR. C. ANDERSON: For release.

DR. WOLD: Yes. So after the question mark, put in "which will be used for release purposes."

MS. SEKULIC: Can I just suggest that we change the word "release purposes"? That has a different connotation. It means end-product release in a lot of cases. Maybe we want to change it to "decisionmaking"?

DR. C. ANDERSON: In-process criterion?

MS. SEKULIC: Yes.

DR. C. ANDERSON: What is the word that wants to be used there?

DR. KIBBE: Is "decisionmaking" okay? Because it's pretty general. Yes, let's go...

DR. WORKMAN: Might we add to the second italicized point "are allowed for research purposes," something...something that reflects that they don't need to be validated, they're allowed for research purposes?

MR. COOLEY: Could you explain the example you guys were discussing there when you're talking about a non-regulatory parameter? Because I'm having difficulty understanding what that might be.

DR. TIMMERMANS: I was trying to remember whether we did actually discuss a specific example. But take, for example, crystallization onset--a process parameter. We might want to measure the concentrations of various components in the solution or the concentrations of the various crystal forms as they're being formed.

Now, that's not a regulatory parameter. It's something that we use to make a decision as to whether we go forward with that crystallization process, but it's not filed with the FDA. So that would be an example of a non-regulatory process analytical technology that we would use and would want to implement.

MR. COOLEY: Wouldn't that still be considered GMP, though?

DR. TIMMERMANS: It would be considered GMP, correct.

MR. COOLEY: But your definition of GMP is not necessarily that it's a regulatory parameter?

DR. TIMMERMANS: When I talk about a regulatory parameter, it's something that is filed.

MR. COOLEY: In the NDA.

DR. TIMMERMANS: In the NDA.

MR. COOLEY: Okay.

MR. CHIBWE: So is that just for information only? I mean, just collecting the information just for information only?

DR. TIMMERMANS: No. We may make a decision off of the measurement.

MR. FAMULARE: In my mind, I wouldn't call that a non-regulatory parameter. Maybe a non-filed parameter. But I don't see that--to me, a non-regulatory parameter may be some function of running the machine--or the equipment to use the least amount of electricity or something of that nature that you may want to monitor through some means.

MR. ELLSWORTH: Process optimization parameters, not necessarily regulatory. That's what I see from that.

MR. COOLEY: I can give you an example of that. Biotech processes may have ultra-filtration filters or a centrifuge, and we monitor the waste stream in both of those with optical density measurements to keep from losing product. So it's a business decision, but it has nothing to do with product quality. But that's a good example. We still validate that in the same way as we do our GMP sensors.

DR. TIMMERMANS: I agree with Joe that in this case the term "non-regulatory" was probably a poor choice of words.

DR. KIBBE: Go ahead.

DR. WOLD: I think we have to specify more the decisionmaking about what, because anything we use for some kind of decision, it should be a decision about the product or the process or something like that.

MS. SEKULIC: But both of those fall into the same regulatory scrutiny bucket, so I guess I don't see the distinction. But I agree, it covers both cases. Because as soon as--as we've just discussed, as soon as you start taking action based upon, you know, a method, a data point, a piece of information, then it's decisionmaking.

DR. WOLD: Yes, but we make other decisions. We say, oh, I like this, and I want it--in the research we make decisions, too. We say this works.

MS. SEKULIC: Yes, I see this is covering the validation component, and the only suggestion I was going to make was to make a distinction between the method development or the learning phase. I'm assuming that this takes off from when we actually have established what it is that we want to monitor and how we want to monitor it. Therefore, I have a method; I'm now looking at validating that method.

DR. KIBBE: Are we ready to move?

MS. SEKULIC: The "non-regulatory," do we want to fix that before we go ahead? Maybe "non-filed"?

DR. KIBBE: Is that better? Remember that we're not writing regulation here. We're talking about issues that eventually will go into a guideline. We need to do as good a job as we can, but not beat the horse to death here.

MR. FAMULARE: The only suggestion I could make--I don't know if "non-filed" does it for me, either. You may want to still validate a method because it's necessary for GMP, so that I think we're--I don't know what--I'm not quite sure of the purpose in this example, but maybe you're trying to look at something that's not that critical versus something that is more critical to validate. And I think the degree of validation should hinge off how critical that particular process or parameter is.

DR. C. ANDERSON: Isn't that what the answer is saying there, Joe, that even though this may be a non-filed--or however one wishes to say it--a less than critical parameter in the manufacturing? As for other analytical methods, use scientific judgment to develop appropriate validation? So what we're saying is use validation appropriate--

MR. FAMULARE: Right. In the further statements. I don't know what the distinction is in that example. You could have a critical thing that isn't filed.

DR. NASR: What if we use "non-critical"? How to handle validation of method for non-critical parameters?

MR. FAMULARE: It's not critical, but you use it to make a decision.

[Inaudible comment off microphone.]

MR. FAMULARE: Well, that may be the answer, too. Maybe--well, not to measure it, but--I don't know, I just--I don't know what it does for you, that first example. But--

DR. WORKMAN: Could we make that statement broader? The parameter that will be used for learning or decisionmaking? Because even if you've established the process, there may be other things that you can learn for optimization, especially economic-related. So...

MR. CHIBWE: I don't know if you really need to do formal validation for some process parameter that's not going to be filed. I'm just wondering if that's necessary. A good example is X-ray diffraction during research phases. You don't necessarily validate that. I mean, you're going to have your polymorphs, maybe two, three different polymorphs you could distinguish. But we usually don't go to the extent of doing any validation for the method. So I don't even know if validation here is going to apply, other than making sure that your parameter measurement is robust enough, just for information only within the company.

MR. ELLSWORTH: I have a question and maybe a comment. I'm not absolutely sure why we are even dealing with, or trying to deal with, something that may not be a regulatory requirement in an FDA guidance. We usually don't speak to things that deal with process optimization. So if it doesn't have a regulatory purpose, then really why are we dealing with it in this guidance? I guess that's my question.

DR. C. ANDERSON: As a user, I would like to see some acknowledgment that these technologies may be used for purposes beyond direct regulation. I think it goes to the safe harbor idea, to formalize some of those ideas a little bit, that we are committing as companies to do validation and do it properly, but at the same time looking for sort of the exemption to be able to use this as an information-only-type article.

MR. FAMULARE: That may be okay. I'd have to think about that. But, still, the distinction of filed or non-filed does nothing for me. At least, you know, when FDA sits down to write the guidance, that--I'd probably remove that term right off the bat.

MR. CHIBWE: I really don't think that it's appropriate to do validation for information only. It's information only--if it's during the safe harbor, you really don't need to do any formal validation until you reach a point where you say you're going to implement that, your system is optimized, and the FDA is definitely going to look at that. That's when you're going to go to the formal validation. So I really don't think this is an appropriate question to address at this point for this purpose.

MS. SEKULIC: I'm just wondering in reading the questions--and I certainly don't want to put words in Mark's mouth, but was it possible that the distinction between the two questions is that one scenario, the first question on the hard copy, was where you already had a method in existence that you could correlate to, whereas the second part was where you don't necessarily have an analytical laboratory method in place, and so you're monitoring, you learn something, and you're in that situation, how do you validate and go forward? I'm just trying to understand the questions, because I think I tend to agree, we're going to be held to the same level of scrutiny no matter, you know, whether it's a GMP question or a regulatory filed method. And as scientists we're probably going to validate the thing, anyway, just to get confidence that, you know, the sensors and the methodology is giving us the appropriate information anyway. So, I mean, I think that drops that sort of whole question unless the intent was to probe, if I see something on my process sensor but I don't have a direct laboratory method, what do I do then? I don't know. You might want to comment.

DR. TIMMERMANS: Yes, again, you know, this is a synopsis of a discussion that we've had for a whole day, and I truly did not expect Ajaz to bring this up here and start this as a discussion point for, you know, what should be included into the guidance.

In this specific case, I think, as I mentioned before, we were talking about parameters which were not necessarily in our filings. They might or might not fall under GMP scrutiny, and they could be used for multiple purposes--for process learning, for increased understanding of our processes, to provide us a pathway to gain process understanding--and that's really the context that this was discussed in. I agree with Carl's point that the guidance should preferably provide some type of information or position on how these methods should be used, but I agree also with Doug that for non-regulated, non-GMP, non-filed methods, how can you provide guidance? You can't.

DR. KIBBE: Is Merck prepared to claim proprietary information and have us move this because it's secret and we shouldn't talk about it?

[Laughter.]

MR. LEIPER: I think that Merck would be glad that we're confusing ourselves with it.

[Laughter.]

MR. RITCHIE: Can I add, with respect to what Carl said, I think I'm having a problem with giving the industry the right to reserve the use of data for investigational use or development purposes with never the intention of having that show up in a filing.

I also need to be able to defend the use of that measurement for someone who inside, you know, looks at it and says, What are you doing with this? Why haven't I seen it?

So for instructional purposes, I think you need to straighten out the usage, because both--the investigator needs to know the difference between a reported value that's used for development or investigational use to learn about the process versus the final one that's going to be reported. I don't know if that straightens it out, but that's what I think is going on.

DR. KIBBE: Are we comfortable with what we've done here? Do we have enough confusion added to the pot to go on to the next one and try confusing that one?

DR. NASR: Did we decide to drop the question or what?

DR. KIBBE: We haven't thrown anything out. We were looking at this to see if we could enlighten Ajaz, because he already has this list. And if we can't make it a more enlightened statement, we're going to let him live with what he's got. How's that?

I see someone with a finger on the button. Go.

DR. HUSSAIN: Well, I think you talked about why I brought that list here. In a sense, I think it was prudent of me since we had that discussion in sort of a closed session, and I didn't want that to sort of remain in a closed session, and so that was the reason to bring those questions here. It's your choice whether you want to drop that question or not. So that's fine with me.

DR. C. ANDERSON: I think we've substantially modified the question by taking out the whole non-filed, non--all the "non" stuff out of there. The "non" sense, as it were.

[Laughter.]

DR. C. ANDERSON: I think the question as it stands now bears looking at and deciding whether, as it's written now, if it makes sense.

DR. WORKMAN: To me it makes sense, for instructional purposes.

MR. SILVANS: Can we use it not only for process monitoring but also for process setup? Because sometimes we use, for example, NIR to see the flowability and particle size, and from these physical properties we set up the filling machine, as a practical use.

DR. KIBBE: What word would you add?

MR. SILVANS: Say method for process monitoring or process setup.

DR. KIBBE: What was the word?

MR. SILVANS: Instead to say for process monitoring, that's okay, but we can use for process setup.

DR. KIBBE: Setup.

MR. SILVANS: Yes. Before starting your operations in the morning, you set up the machines on the basis of the results you have.

DR. KIBBE: Okay? All right. I've got 11 o'clock, and we've got several of these, and we're having so much fun with them. We'll move on to the next one.

Number 3, when and how do you validate. I think how is up to the process--we've had lengthy discussions about letting people use a reasonable scientific approach to validating, based on the instrument and the process or system you're trying to validate. I think more important is when, and being naive and being an academic, I always go with this: you validate when you want to have faith in the answers you're getting; you don't validate when you don't care.

MR. COOLEY: Art, I think you make a valid point. There are two drivers for validation. One is compliance and regulatory, and the other is business reasons. And it doesn't make a lot of sense to put a sensor into a process and not do some type of validation to ensure that the data you're getting out of it means something. But obviously there are many, many levels of validation that you would be dealing with there.

DR. KIBBE: I'm glad we're talking about making valid points in a validation discussion.

MR. COOLEY: I have a question on the first point. Are you inferring there that you would not validate at all? It says calibrate PAT method for use in pilot plant--or are these sequential steps that you're talking about, that you would go through?

DR. TIMMERMANS: Correct.

MR. COOLEY: Okay.

DR. TIMMERMANS: You know, in order for you to be able to validate the method, you first need to calibrate it. So that would be your first step in the process.

MR. COOLEY: Okay. I didn't know if those were multiple-choice questions as to which you would do or whether they were sequential.

DR. KIBBE: We're okay? We're going to go to the next one. No one's going to jump in here and object? All right. Go.

[Pause.]

DR. KIBBE: I think this kind of implies a concern that people have. If you put one sensor on a blender and it starts to screw up, does that mean you have to kill the whole blend because your sensor is screwing up? Or is there a way to nest our process technology so that if one monitoring system is going bad on you, it doesn't mean that you have to kill the whole run, or whatever? I think that's where we're--I'm not putting words in Merck's mouth, but I think that's where they're going with that. How do we want to handle that? Go ahead.

DR. WOLD: Well, again, I'm not speaking for Merck here, but I think that remembering the business interest, I mean, nobody should put just one sensor on to measure just one thing. You always need redundancy, and that comes from the process people. If you have good process people, they will ensure that, and you don't need to regulate that because the business interest is to not let this happen.

DR. TIMMERMANS: I think Ajaz discussed this in part yesterday in his presentation as well when he was talking about, you know, overlapping systems and several layers of redundancy being built into the process. So I think that that in part addresses this question or this issue.

DR. KIBBE: I wonder whether the concept of robustness of our testing method or in-process control method or technology ought to enter into this. If you have a very robust system, then there's less need to do lots of redundancies. If you have one that fails on you every two weeks, you should be doing something different. That is truly a business decision.

DR. MARK: The question here kind of reminds me of something we started talking about yesterday a couple of times and never really got all the way through it. The question came up yesterday, if you have a continuous process, it's running along okay, and then all of a sudden something happens to it, it goes bad, then what do you do? And we never really followed through because the second part to that question, which probably also--you know, that needs a discussion in itself. The second part of the question is now you've fixed the problem--maybe, let's say, it's an hour later. You've fixed the problem, and then what do you do? Is it still--if your sensor and process are in control again, the sensor's been fixed, whatever the problem is has been fixed, and now the process can run along and be measured and be kept in control, can you then go ahead and continue taking the product and eventually releasing it?

These are two related but separate questions which we never really followed through the discussion yesterday. This question seemed to be addressing it also.

MR. MADSEN: And, again, I think it makes a big difference whether this is a sensor that's used to control the process or just to monitor the process.

DR. KIBBE: My own personal temptation is redundant systems, so that if I have a monitor that goes down, then I'm not left wondering where the thing is going. But, you know, I don't spend the money.

DR. C. ANDERSON: In general with this sort of question--

DR. MARK: I was going to say, that's okay if it's the sensor went bad. What if the process went bad and the sensor did its job and caught it? You know, it doesn't remove it entirely, I think.

DR. C. ANDERSON: My comment actually goes very nicely to what Howard was just saying, I think. It's the company's responsibility to have procedures in place that address these, and I think from the level of the guidance, the guidance needs to specify that procedures need to be in place. I don't think it's our job to prescribe those procedures. I think it's the individual company's job to come up with reasonable procedures to address this type of contingency.

MR. LEIPER: I think that the other thing that's important is that we're actually reinventing the wheel to some extent here, because many industries actually run continuous processes and they do have contingency plans for these particular issues, to such an extent that their processes are so hazardous that if they did go out of control, they'd be blown up or something like that.

So I think rather than debate it all here, the answer is to go out to some of these industries, find out how they handle it, and see how much of it can be imported into our strategies, because we don't have this experience. None of us around the table have actually got this experience.

DR. CIURCZAK: Well, in a way, the same concept applies: if you look at a small enough area, the Earth is flat. If you're running tablets from a single granulation and it takes three days to make the batch, then for those three days it's a continuous process. And you've got your first million and a half tablets, then 10,000 go bad, you fix whatever it is, and then the rest are good. Is it legal to throw away that little piece in the middle and sell the rest of the batch? That's basically what Howard's saying. How do you judge that?
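
[Technical note: Dr. Ciurczak's question can be put in terms of simple bookkeeping--with continuous monitoring, each stretch of the run carries its own in-control status, so only the bad stretch need be segregated; whether releasing the rest is acceptable is the regulatory question. A minimal sketch in Python; the intervals are hypothetical.]

    # Sketch: segregating only the out-of-control stretch of a continuous run.
    run = [("00:00-48:00", True), ("48:00-49:00", False), ("49:00-72:00", True)]

    good = [interval for interval, in_control in run if in_control]
    quarantined = [interval for interval, in_control in run if not in_control]

    print(good)         # ['00:00-48:00', '49:00-72:00']
    print(quarantined)  # ['48:00-49:00']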

DR. KIBBE: Anybody else? I think rather than putting up there the statement that we need a robust sensor, what we really need is that the company needs to develop a contingency plan for failures in the process. And they have contingency plans now for failures in the process. It's just we now have a different method of monitoring the process, and so the contingency plan has to take that into account.

DR. WORKMAN: Might I add that it is implicit in here, but some of these other industries that Ken was talking about are monitoring the monitor all the time, so they know whether it's the monitor or the process. That's what you--that's part of the plan.

MR. CHIBWE: I don't know if we should use the word "non-regulatory" or probably just say "for information only parameters." Number 1 there. Because the whole environment is a regulatory environment, so I don't know if we could specify non-regulatory parameter. Maybe you could just use the word "for information only parameter."

DR. KIBBE: I'd be real tempted to make that one statement and get rid of regulatory, get rid of non-regulatory, get rid of--I mean, we have a parameter--if we're looking at a parameter, we must think it's important. If we're looking at things just for ha-ha's, then we're spending money for no reason at all. And so if we're looking at a parameter, then we need to have a way of making sure that the parameter is measuring something we want to measure and that we can depend on the outcome.

MR. COOLEY: Could we not do what you just mentioned earlier, Art, and just strike both of those and just say that there will be a compliance plan--I mean a contingency plan in place that--it's up to the company to determine what the appropriate contingency plan is.

DR. KIBBE: I'm with that.

DR. C. ANDERSON: I agree.

MR. LEIPER: Totally agree.

[Pause.]

MR. COOLEY: Art, Ken brought up a good point.

DR. KIBBE: He always does.

MR. COOLEY: Is this considered a GMP document? If so, do we just need to strike it out once and then initial it that we've changed it?

[Laughter.]

DR. KIBBE: We're doing it electronically, so we will have to initiate a method for electronic initialations. Okay? And so we're going to have to validate that method, and then we're going to have to monitor the initialator.

Are we ready for in-vessel?

DR. C. ANDERSON: My first suggestion is that this isn't restricted to in-vessel. There are examples I can think of that are out-of-vessel that are just normal processing things, that the only time we can gather data is while the process is running. So I guess rather than in-vessel, perhaps in-process might be a little bit more specific.

DR. KIBBE: Let me see if I've got this. PAT methods are--I don't know--in-process methods, right? So we're going to make this in-process? In-process?

DR. C. ANDERSON: It looks very reasonable. He just changed it to "a PAT method."

DR. KIBBE: I like that.

DR. C. ANDERSON: Which seems quite reasonable.

DR. KIBBE: Are we okay with this one? You've got something? Go.

DR. WOLD: We are tying our hands here, or the process people. If we start to operate outside this optimal range, then we are actually getting data where we can compare the PAT method with the laboratory method, so you can use it for updating. So we shouldn't say that we always do this. It becomes very static.

DR. C. ANDERSON: Not necessarily. What this says to me is that if I wish to use it outside of the initial operating range, I have to revalidate to demonstrate that the extended range is appropriate.

DR. WOLD: But, I mean, we are getting data. We are saying we can collect data only from the run in process, and suddenly we start to run the process somewhere else. Now we have data, so we can compare the process at this point, or in this little range, to the laboratory method. So then I agree, we should update the model or whatever we are doing. But the way it's written here, when it's operating outside this range--this little initial range--then we forever must use the laboratory method.

DR. C. ANDERSON: As a point of clarification, I agree with you, yes.

DR. KIBBE: Good. That's good. I'm glad you think so, too. We're ready to move on, right? Six.

That's generally the same statement. Okay. I don't think we have to do anything with it unless you want to just delete it.

Let's go to the next one, which is, I think, the last one, which is always nice.

All right. Jack, no one has anything? Okay. Well, we've done that little job.

I'm one of those people who don't like to work any more than I absolutely have to. Is there anything else that we need to discuss?

MR. COOLEY: One thing, Art. It's a point that I don't think we've discussed in the last day and a half, which is surprising. It has to do with measurement uncertainty and how that ties into process limits, and I guess it gets back to the suitability of the sensor to be used for controlling a process that is within those limits. And I don't know if that's something that should be included in this guidance document. It is something that's starting to be observed by some of the field inspectors, and I don't know if it's a good thing to capture for other companies that haven't gone through that process yet.

DR. KIBBE: You're not just talking about the Heisenberg uncertainty principle, right?

MR. COOLEY: No. No, I'm talking about determining what the uncertainty of the method is, the total uncertainty, and defining that in the method. And then there's kind of a consensus standard that you will have a 4:1 ratio of process limit to measurement uncertainty, and that you'll operate within that range. We haven't captured anything to that level of detail, and I don't know whether we should or not. It kind of comes down to this: obviously you don't want to have a measurement uncertainty that equals your process limits, or even comes close to that.
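
[Technical note: the 4:1 consensus ratio Mr. Cooley describes--often called a test accuracy ratio--can be illustrated with a minimal sketch in Python. The limits and uncertainty below are hypothetical, and conventions differ on whether the full span or the half-width of the process limits is used; the sketch assumes the full span.]

    # Sketch: check that the process-limit span is at least four times the
    # total measurement uncertainty. Numbers and convention are assumed.
    def meets_accuracy_ratio(lower_limit, upper_limit, uncertainty, ratio=4.0):
        """True if the limit span is at least `ratio` times the uncertainty."""
        return (upper_limit - lower_limit) >= ratio * uncertainty

    # Limits of 90-110% with a total uncertainty of 4% gives a 5:1 ratio.
    print(meets_accuracy_ratio(90.0, 110.0, 4.0))  # True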

DR. C. ANDERSON: I agree with you completely, but I think we are getting beyond--below, if you would, the scope of this guidance.

MR. COOLEY: Okay.

MS. SEKULIC: I'd say that probably gets covered under the appropriate for intended use consideration perhaps.

DR. MARK: There's a phrase in a couple of these questions which brings up a point which I haven't heard addressed here either, and the phrase used is "long-term maintenance." We all know that a lot of these methods--you want to have some sort of quality control on the method, that, you know, at some intervals you compare it again with your laboratory or your prior analytical method if you've calibrated it against a prior method to make sure that it's still maintaining its accuracy and so forth. And I think something should be in the guidance about how often and to what extent the ongoing quality control procedures should be applied. Probably it does not need to be as thorough as the initial validation of the method, but depending on how frequently it is, you possibly may want to have a guidance that says you'll do something minimal at weekly intervals, and something a little more extensive at monthly intervals, and something like that. But I think there probably should be something mentioned about the question of this long-term maintenance procedure.

MS. SEKULIC: I guess I'm going to disagree. We have instrument guidelines in place that tell us how to calibrate, how to performance verify, how to do this, how to do that. If we're talking specifically about monitoring a process unit operation with a sensor that is product-dependent, it's going to be really difficult to provide a useful guidance that isn't so general that it becomes redundant. Because we have, what, 50 processes, 50 products being manufactured at any given time, and each one of those will require different cycle times and a different number of batches being manufactured per campaign. So depending on how you set up your sensor activity and your process monitoring activities, and on the complexity of those, they may require different verification/sensor monitoring activity to be implemented. And that, I would also venture to say, would probably go into the method development documentation, shall we call it.

DR. MARK: That could be. Maybe we need something as simple as to say that there shall be an ongoing long-term maintenance procedure put in place.

DR. C. ANDERSON: That was on there.

MS. SEKULIC: Yes, I thought we captured that in one of the questions.

DR. MARK: These questions just sort of assume that it's there, but it doesn't say that it should be there.

DR. NASR: I think it is a given in existing GMP environment that you have to have--maintain your equipment and you have to have all calibration and all that. I don't see anything new here.

DR. TIMMERMANS: Well, I think, Moheb, the only thing different here, and speaking from experience: if you take a specific example where you replace a KF measurement by an NIR measurement, you know your KF measurement is not going to drift, but it's very possible that your spectrometer drifts, or your materials drift, or your calibration drifts. So the question then is how often--and I think that's what Howard was coming to--how often do I need to verify that my calibration is still appropriate? And what do I need to do to verify that it's appropriate?

But I agree with Sonja that, you know, we're talking in very general terms here, and we cannot provide specific guidance. I think the only thing, as we said before, is that we have to have a long-term maintenance program in place, and the appropriateness needs to be determined, you know, at method validation.
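
[Technical note: the periodic verification Dr. Timmermans describes--re-checking an on-line NIR prediction against the reference Karl Fischer (KF) result at agreed intervals--can be illustrated with a minimal sketch in Python. The acceptance limit and the paired values are hypothetical; in practice the limit would be set at method validation.]

    # Sketch: flag calibration drift by comparing paired NIR and KF results.
    VERIFICATION_LIMIT = 0.2  # % w/w moisture: max allowed |NIR - KF| (assumed)

    def calibration_still_valid(nir_predictions, kf_references, limit=VERIFICATION_LIMIT):
        """True if every periodic paired check is within the agreed limit."""
        return all(abs(n - k) <= limit for n, k in zip(nir_predictions, kf_references))

    # Three periodic checks; the third pair has drifted beyond the limit.
    print(calibration_still_valid([1.1, 1.3, 1.6], [1.0, 1.2, 1.3]))  # False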

MR. COOLEY: I think there are guidances available. The NCSL, the National Conference of Standards Laboratories, has procedures or consensus standards that deal with PM frequency analysis and that sort of thing. You could use those.

DR. KIBBE: I want to thank everybody for all of their energy and effort. What I intend to do, if we break, is go look through the slides we developed earlier that we all seem reasonably comfortable with, and they're going to form the basis for our team presentation after lunch. If anybody is interested and wants to go through them again with me, we'll stand around the young man with the computer and make sure that they're appropriate. All of this material is being captured in electronic format, so the agency will have all of it. None of what we've done is the letter of the guidance or guidelines or the law that's going to go into effect. We know that FDA staffers will get a chance to go through it again and, you know, fluff it up or tone it down or whatever.

But I think what we have attempted to do is give them some really good direction for that ultimate guidance, guidelines, and I think you've all served your companies' interests well and the interest of the public, and you've been open and honest with us, and we really do appreciate that. As a reward, you get to go to lunch early.

[Laughter.]

DR. KIBBE: And we will see you at 1 o'clock. It is our understanding that at 1 o'clock we'll have reports from the standing--or the sub-groups, and then we'll be out of here. I think Ajaz and I have estimated that you will probably be on the road at 3 o'clock if you've already checked out, or in the bar at 3 o'clock if you haven't, whichever direction you want to take your life, although I do recommend to you that you hold to the normal process limit for the consumption of alcohol. It's one drink an hour.

[Whereupon, at 11:27 a.m., the Process and Analytical Validation Work Group was adjourned.]

AFTERNOON SESSION

[1:05 p.m.]

DR. KIBBE: In light of the wonderfully sunny, pleasant weather outside, I thought we could go ahead and get started. The presenter is always praying for rain during his presentation and not after. And so what we're hoping to do is that this rain will blow over in a couple of hours while you're stuck in here with us communing about the wonderfulness of PAT, and then you'll be able to get out in a cooler environment than you arrived in, with pleasant sunlight and a nice view of the freshly washed Gaithersburg, for those of you who have traveled here from afar.

We're going to try to summarize the efforts of the individual working groups that worked yesterday late in the day and early this morning. And I think using the power of the Chair, I'm going to get mine over with first. That will give you an idea of how much time we've left you for the other people so that we can keep things on the move. Just remember that Ajaz wants to summarize at the end, and I know Ajaz, and that's an hour and a half. So that leaves us--

[Laughter.]

DR. KIBBE: That leaves us a little time.

I was chit-chatting hoping my colleagues up here are ready. How are we?

[Laughter.]

DR. KIBBE: So, Judy, we're loading yours, and then I'll do mine, and we'll do yours, and then we have to do an equipment exchange for the training people because the training people didn't bring equipment to allow them to transfer their information. Training, non-transference of information, that sounds good. That sounds wonderful.

While he's loading, let me tell you that, first, I enjoy these meetings immensely, which only goes to prove that I have a very limited life.

[Laughter.]

DR. KIBBE: But on a more serious note, there were a number of people who worked with me yesterday and today who are both experts in their field and have courage and determination to try to move forward on something that will ultimately be a great benefit to both the industry and the general public in years to come.

I understand that some of them have some fears and trepidations about a regulatory body that has been in the past inconsistent at times, and even punitive when necessary. But I really do appreciate their willingness to look at this in the environment that we find ourselves in now, with a regulatory body willing to go the extra mile to make the improvements in their regulated industry. This is a wonderful opportunity for all of us.

Now hopefully there is a slide behind me that says something that I can keep going from. Being a university professor, I always do things in 50-minute blocks.

The first move is, of course, to find the button to push the slide, right? Which one of these--you sure you like this one? That worked really well. Left. Left-right arrows? You're sure? Outstanding.

Well, since I've tried up-down, left-right does work. This is called validating the process. When you have four possible outcomes, you check them all and see which one actually changes the--

[Laughter.]

DR. KIBBE: We have a working definition of process analytical technologies. I keep hoping that we will somehow change analytical to assessment technologies, because I think analytical ties us in our own minds to the history of HPLC and, for those of you who are old enough to remember, real titrations and gravimetric measurements.

This is a working definition that will allow us to move forward. We hope that the validation guidelines will include some of the kinds of information that we include on this first slide of definitions. This is a system for the analysis and control of manufacturing processes. What is the validation that we need to go into? You know, three lots and done. Ha, ha.

When we had our discussion, we recognized that this is a new way of looking at what we're doing. It's not an analysis of a snapshot. It's the continual monitoring of a process. In order to do that effectively, we have to know what the process is. If we don't know what we're monitoring, how can we expect that the results of our monitoring can be useful?

We had the discussion about validation and some background information. We have a belief that a lot of what we do doesn't correlate well with the process we're trying to monitor. We know that we have in the past used univariate measures, but we're looking at PAT and we're recognizing quite easily that it is a multivariate analysis, and so we have to look at these things slightly differently.
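
[Technical note: the univariate-versus-multivariate point can be illustrated with a minimal sketch in Python, using Hotelling's T-squared statistic as one common multivariate monitoring approach; nothing in the discussion specifies this particular statistic, and the data below are simulated.]

    # Sketch: instead of checking each variable against its own limits, score an
    # observation against the joint behavior of historical in-control data.
    import numpy as np

    rng = np.random.default_rng(0)
    historical = rng.normal(size=(100, 3))  # 100 in-control observations, 3 variables

    mean = historical.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(historical, rowvar=False))

    def t_squared(x):
        """Hotelling's T-squared distance of observation x from the historical mean."""
        d = x - mean
        return float(d @ cov_inv @ d)

    # A point can sit inside every univariate range yet be jointly unusual.
    print(t_squared(np.array([1.5, -1.5, 1.5])))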

We sometimes measure what we can measure, even though it is of no value to us, and not what we really need to measure. And I think we need to be more rigorous in our attempt to measure what is essential to our processes.

Measurement has not been seen as process-related in the past, and we need to change that. And we need to have--some people call it a paradigm shift. I don't think it's nearly as dramatic as a paradigm shift. But we need to think differently about how we go about maintaining quality in our products. We have to recognize that our approach is to control the process which ultimately gives us a quality outcome.

We have to understand the process, break it down into unit operations, assess the risk potential for each unit operation, and design systems to manage the risk, remembering that univariate measurements are not appropriate for multivariate systems. We have to develop our systems. We have to establish proof of concept. And then we have to challenge the validation.

Our objective, of course, would be to confirm the process and measurement validity in real time across the life cycle of the process.

Some postulates that we think should be included in the guidance would help the industry understand how to proceed. A couple of things that came up in our discussion are also worth noting. A lot of us think that we understand how to validate an individual activity or a process, or an individual way of monitoring an outcome or a product. And we think that some of those understandings, especially if they're backed up with solid science, can be applied to understanding a PAT or a process assessment technology. But at the same time, we have to recognize that they are different, and so we're on the horns of a dilemma or a paradox, as we have over here on the structure in the upper right-hand corner. And that is that we think we know how to do validation, but we know how to do it in a certain area or aspect. Can we apply all of those same principles to our new area or aspect or our new way of doing things? And if so, how successful can we be? I think part of it is keeping your mind open to what you're dealing with, which is a process, not a static measurement, and realizing that we don't need to go into excruciating detail to reinvent the wheel, but we need to know that the wheel we've selected fits the car we're driving.

We have a checklist for sensor and chemometric validation which we think ought to be included in the validation guideline to give industry some sense of what we're looking at--to remind them, more than instruct them or teach them, of the things that they look for when they do a validation. And if they did it right in the past, then they can probably use these same reminders to go ahead and do it again in the next stage. So sensor validation, software validation--and remember, when we look at PAT, and all of you have been looking at it over the last few days, if not long before that, we recognize that these systems are going to generate a tremendous amount of data. And how we manage the data is going to be equally important. How we get real information out of a sea of data is also going to be important, and how validation uses that information as well as the data that it's presented with.

Targets for validation and method types. We have primary methods and secondary methods, and, again, this should be included in the validation guideline as a way of reminding you of the kinds of things that you think about when you go through validation now and perhaps how that can be applied to these types of systems. Analytical types, direct measurement, in the past we've looked at only active ingredient. Now, of course, we want to look at active ingredient and all the excipients simultaneously. Our general thinking should be approximately the same.

Now, interventionality--and we can't say this often enough--we're looking at multivariate, we're looking at fingerprinting a process, and hoping that the fingerprint is very instructive as to how well controlled the process is, and validating on that fingerprint. So we have multivariate systems.

Implementation questions. What information is needed and why? Where are the appropriate measurement points? When and how often are the measurements needed? How is the PAT-provided information to be used, and who will interpret it? All of those things have to be addressed as you begin to add these types of technology into your processes.

There are three distinct ways of analyzing unit operations and releasing products that are being developed and manufactured. Condition one, generally the current operating scenario: the product is manufactured according to a fixed process condition set. One of the best examples, of course, which we've talked about over and over again, is that we set up a blend in a specific piece of equipment to run for a specific length of time.

When we look at in-process or PAT applied to blending, we agree that perhaps there will be an endpoint, and that 15 minutes isn't the endpoint but, rather, the endpoint is the point at which the sensors say they have a uniform mix. And so that is some of the shift in the way we think about things.

Release is conducted by physical and chemical tests subsequent to manufacture. One of the concerns we talked about is when PAT can replace some of these end-stage release measurements, and I think we generally agree that early on, probably not, for a number of reasons. First, we think all of our QC people would go crazy if they thought they lost their job, and they would insist on doing the study anyhow. And if they thought they were losing their job, they would stop any attempt at putting PAT in place, because they wouldn't want to lose their ability to assay all these little tablets that they get. But also because there will be some uncertainty at various levels within our companies, and there will be some assurances needed that what we're doing is really going to do what we want to do. And I think we had a wonderful slide: Machiavelli told us that if we want to change something, we'll be opposed quite dramatically by people who like the way we do things already, and supported only lukewarmly by those who think they might get something out of it. So we're going to have that issue in front of us.

Product is manufactured according to a process condition that had been shown during development and manufacture to infer product performance and is confirmed during the initial process and product validation. This is the direction I think we're going in, and this is where we want to see our processes in the future. Relationships are developed and confirmed with physical and chemical tests subsequent to the manufacturing runs, and release is conducted by review of process conditions during each batch manufacture.

Some of you are happy to share with us some of the successes you've had moving in this direction. Others of you are excited about making a submission to the agency to get at least part of your system under a PAT system or a PAT method of controlling the process. Some of you are sitting there going, Oh, my God, what am I going to do next?

Well, that probably will continue for the next few years, but I remind you all that technology has increased at an exponential rate since well before the Industrial Revolution. If you follow the ascent of man and technology, every so often there has been a breakthrough and a change. Those breakthroughs have come closer and closer together as we've moved through the last century. If you drag your feet when this technology takes off, in the hope of letting it all shake out over the next 10 or 12 years, 12 years from now you'll find yourself all alone and your company significantly disadvantaged.

Product is manufactured according to process conditions that respond to direct measurements of in-process product quality where unit dosage forms are being manufactured. Relationships are developed between process and product performance that are optimized and bounded by the data obtained in the development and manufacturing runs. Release is conducted by data collection from in-process product or each dosage form during manufacture.

Release specifications and validation criteria can be defined for each condition based on the nature of the release, and I think that's where we're headed.

Questions that we think need to be addressed in the guidance as we move forward: Should there be a difference in expectations between the developmental product releases for P1, 2, and 3 and the routine manufacturing lots? And we discussed differences, when they happen and when they don't happen.

We kept coming back to the same theme, a theme that I think should be near and dear to everyone's heart in here: if there's good science behind it, and we can explain our decisionmaking based on data that we've acquired and understand, and if we can understand our process, then we should move forward. And if we can't, then we probably aren't doing the right thing.

Could and should there be an official designation for products and processes that are inherently capable of being appropriately measured and controlled, which would allow for predicting product release characteristics? And I think this is an evolutionary question. As people get more and more understanding of how PAT works, we'll get more and more understanding of how well we can control certain processes and how well they do in terms of predicting the outcome, better than we do now.

Content recommendations for the guidance document, suitable for the intended purpose. In other words, the process that you have and the validation you apply should be suitable for the outcome you want to achieve. The general validation criteria, we expect that the agency's guidelines will be in general and not specific. They won't be guidelines that will come out that will tell you how to use a near-infrared to measure content uniformity in a blend, but, rather, that will give you some guidelines in terms of how to proceed.

There will be references to existing guidance documents to help you apply the appropriate document to the appropriate situation. If you have a sensor, you have to validate the sensor. If you have another technique, you have to validate it and so on.

We expect that the agency will allow you to get into the research mode, find out about these sensors before they're applied to the system, without interfering with your attempts to understand PAT in your own hands and your own system. And, of course, there is always the safe harbor, which boils down to OOT versus OOS. In other words, if you have something that you see because you have a really good way of looking at it, and it's a little bit out of the trend that had occurred in the past, that's okay. If it goes out of specs which were previously established, that's not okay. And no matter how you measure something, if you're out of specs, you're out of specs. All right?

So if your old method would have called you out of specs and the new method calls you out of specs, guess what? You're still out of specs.

If the old method wouldn't have noticed that you're a little off trend and the new method does, you're not out of specs. Your trend has to be watched, and you have to decide as a company how important that trend is. We can go to exquisite examples, but if you have a 90 to 110 percent active ingredient spec on your tablet, and you're measuring, and you have a system now that tells you that in every other run you've had you've been between 98 and 102, and in this run you're between 98 and 103, maybe there's a trend here, but it's certainly not out of specs. You're going to release your product. You're going to continue to march. And perhaps you're going to think about it in terms of internal controls.
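
[Technical note: Dr. Kibbe's OOT-versus-OOS distinction can be illustrated with a minimal sketch in Python, using the limits from his example--a registered spec of 90-110 percent and an internal trend band of 98-102 percent. The classification labels are illustrative only.]

    # Sketch: out-of-spec (OOS) vs. out-of-trend (OOT) classification.
    SPEC = (90.0, 110.0)   # registered specification (from the example)
    TREND = (98.0, 102.0)  # historical trend band, company-internal (from the example)

    def classify(result):
        if not (SPEC[0] <= result <= SPEC[1]):
            return "OOS: not releasable, regardless of which method found it"
        if not (TREND[0] <= result <= TREND[1]):
            return "OOT: releasable, but the trend should be evaluated internally"
        return "within trend"

    print(classify(103.0))  # OOT: releasable, but the trend should be evaluated internally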

Encourage the use of PAT. FDA should encourage it. We see it as a tool to improve the industry's productivity and the quality of the products the industry produces. And so, therefore, the agency as a responsible agency of the United States Government, interested in the welfare of the public, will be involved in encouraging you to use these things to make things better in the long run.

Now, we also looked at a group of questions that were proposed as a result of a discussion between Ajaz and members of the industry off-line, and we responded to those. And I've chosen not to share them with you one after the other because they essentially reiterate some of the points that we've talked about, and they will be used by Ajaz and the other members of the agency to try to put together this overall guidance document for validation.

So, with that being said, I'm going to stop, and I'm going to turn it over to people in my group who have anything to add. So we have some major contributors to the information we've put forward today, some of them actually hiding in the audience now. And if they have anything they'd like to add or anything they think needs to be clarified, please, do that.

I can't believe that I was that good at summarizing that they don't need clarification. Go ahead.

Don't forget, we need a mike so we can record your clarification.

DR. C. ANDERSON: A very brief clarification on the general validation criteria. One of the themes that came up in the group over and over again is that the accepted validation criteria for method validation are generally applicable to PAT-type applications, so that line is in there specifically to denote that, that the generally accepted practice for method validation should be continued for PAT applications.

MS. SEKULIC: Just to throw out one additional comment that came out in the discussion over lunch, I guess for the record, if it could possibly be stated so, we keep thinking that we're going to write this guidance and this is it, it's going to be carved in stone. And I just want to throw out there, you know, as technology evolves so does the guidance. And so I just kind of wanted that be recorded, I guess, for posterity.

DR. KIBBE: Like any FDA guidance, they're subject to review and change and update. The FDA has not been carved in stone, even in 1938 when they started actually deciding that drugs might need to be safe to be sold in the United States. So I think that's a really good point.

Anybody else? Does the FDA want to comment?

DR. HUSSAIN: Just sort of a question or a comment on the point you made with respect to the jobs of analytical chemists. I thought with this actually you're going to increase--you have increased the number of lab-based analytical chemists to do all the calibration work and so forth. So actually they shouldn't worry about losing their job. They should worry about getting an extra burden of more work to do, because I think how--where will the calibrations come from? You have to balance the--so analytical chemists, I think their numbers are going to increase.

DR. KIBBE: Good to know job security is there, too.

DR. SHEK: Just a general question--just a point of clarification. There are two aspects of validation. For us it's validation of PAT as an analytical tool, okay? Then validation of the process itself. And I tried to follow up on the slides and whether you are referring--if we are going to use PAT and will basically--let me step back and say validation, the way I understand it today, there are some rules. We are saying three batches being tested according to a predetermined protocol and with preset, you know, specifications. And if it passes, we are saying the process has been validated.

Now, if we are going to use PAT, we'll generate continuously, possibly, more data than we do today, not selectively. Does this concept of process validation still exist, or is the scheme a little bit different now, because maybe we are validating every time we make a batch? And I don't know whether that was captured there or not.

MR. FAMULARE: That actually was one of the bullet points in the slide that I thought really hit the nail on the head. The ability exists now with this technology to validate each batch, and that was--the number two bullet point on one of the previous slides.

DR. HUSSAIN: When I saw this, follow the "c", I said it's continuous GMP now.

DR. KIBBE: If you can get the technology set up so that you can continuously follow the process from before the material shows up at your door until the finished product leaves your door, then that's exactly what you have: a constantly revalidating manufacturing process under complete control. That's like the golden fleece, this process.

Now, to think that we're going to have that next week is a little, you know, Pollyanna, but to think that it's not an unreasonable goal, and to have the guidance or the guidelines allow that process to evolve, I think is what we're hoping for.

MR. HALE: I think there are layers of validation and the terminology is used somewhat loosely. I think that parts of validation will remain similar or not changed at all. The equipment still has to be validated and methods still have to be validated and sensors, too. Probably the biggest change in all of this is this issue of the process and that there was a lot of talk, and I think one of the greatest opportunities in this is to take the larger holistic view of the process and product in mind, and that part of validation will potentially change the most if we can implement some of these technologies.

So I think validation means different things to different people, but the opportunity is in the process and product arena.

DR. KIBBE: Anybody else?

[No response.]

DR. KIBBE: Seeing no one leaping to the microphone, Judy?

DR. BOEHLERT: While I'm waiting for our slides to be mounted, I'd just like to thank all of the participants in our sessions. We had very interactive sessions from the committee members as well as from a number of the audience members. So my thanks. We were still going strong at 12 o'clock today, so that's a testament to the discussions we had.

Okay. We did take a look at Ajaz's questions and went down them in order because it helped us to sort out our comments. And the first item that we looked at was the R&D focus and what should be documented to justify suitability. The important thing to consider here is that the focus in R&D is different than it is in manufacturing. R&D is looking at the boundaries of processes. They're trying to understand the process. They're not trying to control the process. Manufacturing is more along the lines of controlling the process and uses PATs for that purpose.

So during our R&D, the PATs are used to gain understanding. During manufacturing they're used to monitor and control.

Not all PATs will make it to manufacturing, and I think that's an important concept. During R&D you may look at a number of different parameters, and the whole point here is to decide what's important and what's not important. So it's very common that you'll see that PATs are studied during R&D that don't make their way to the final manufacturing process.

Demonstrate suitability of the PAT measurement for intended use. This is a basic principle that I think we need to look at. Very often they're used for predicting end-product quality attributes. We looked at three different kinds of PATs that you might use. There are ones that replace existing technology: if you're doing an assay, you can do it on-line using NIR, perhaps, instead of off-line using HPLC. That's a replacement, and you can look at equivalency.

There are other PATs--for example, using acoustic technology to get a prediction of what particle size might look like in a granulation. That's a different concept. You might also look at, for example, measuring something like mag stearate as a predictor of dissolution. So each of those is a different kind of PAT that you might look at.

You need to demonstrate that it's validatable--for example, the sensor suitability, location, and number of sensors, as well as traditional measurement attributes that you might use. And I've got a thing across my screen here. PAT performance requirements--that's interesting. Is there a way for me to move that thing up, the writing here? I have to find the mouse on this one. It's the little button in the middle, right? Unless you expect me to remember what word we had under there. Oh, rigorous. I knew that was--I was trying to think of that word.

But what we're saying here is that PAT requirements are more rigorous if intended use of PATs either individually or as an aggregate combined is to replace end-product testing. There is a difference. If you're using a PAT just to monitor one process or one step in a process, that's different than using a PAT to replace end-product testing. And, therefore, the requirements there would be more rigorous.

Then we looked at--bear with me.

[Laughter.]

DR. BOEHLERT: That's not funny. There are only so many clicks you can do here before it jumps.

DR. KIBBE: This is a process of too many process variables not being under good control, right?

DR. BOEHLERT: Yes, this is not under good control. I have to validate--

DR. KIBBE: I think FDA will close you down.

DR. BOEHLERT: I didn't expect this, but, anyhow, the next thing that we looked at was the suitability of PATs as used in manufacturing. And what we're saying is that the points we stated earlier applying to R&D still apply, but there are some additional things here that you need to consider. And the most important, of course, is your ability to transfer the use of those PATs from an R&D environment to a manufacturing environment. You have equipment design issues, scale-up issues, interface changes, ongoing calibration and maintenance, equipment calibration, and consideration of the safety of the operator or the final user of that product due to contamination. All of these things need to be taken into consideration because you can't always just transfer that technology from an R&D process on a small scale to a manufacturing process on a large scale.

You may need to look at refining the models that you use. We talked more about a process signature rather than a fingerprint, and we saw fingerprints as part of that signature. A fingerprint might be something like an IR spectrum, but what we're looking at really are process signatures. And what you need to do in the guidance is define some of these terms, so we're all looking at things the same way. Because in R&D you develop information based on very limited studies, these things are likely to change as you move into manufacturing and produce more lots.

The concept of PAT can be submitted as a protocol in an original NDA or as a prior approval supplement. And then implementation of PAT could be done through less burdensome filing mechanisms, for example, CBE or annual reports. So what we're saying is, you know, file your protocol for how you're going to bring PAT into the process. That gets approved, you implement your protocol, and then implementation is through CBE or annual report.

Routine manufacturing using PATs: what should be the regulatory standard for accepting an on-line measurement to replace end-product testing, and the level of built-in redundancy? We're saying the body of PAT information should have equivalent or better informing power than the corresponding conventional approved end-product test. Notice we're not saying the tests are equivalent; it's the decision that you make based on PAT that has to be equivalent to or better than the kinds of decisions you can make now.

We recommend that the guidance include a table--and apparently the CPMP guidance has such a table for tablets--that shows the comparability of different procedures, PAT and conventional techniques. That would be very helpful to the reader of this guidance.

Parallel PAT testing and conventional testing is going to happen. For in-process and/or release tests, both of them could be subject to PAT changes. Parallel testing should be performed for a significant number of batches. What we said was probably a minimum of three, because nobody does only one, two is probably not enough, and three is sort of a minimum in the absence of historical manufacturing data. If you've got a lot of data you've collected on other products, then that may reduce the burden if you make the same change on this new product.

The level of redundancy you build in here is often a business decision. How much risk do you want to take? How much redundancy do you want to build into your systems? So that comes down to each company making that decision.

Identify steps for resolving OOS observations. Under what conditions can end-product testing be used to resolve OOS observations? The advantage of PATs is it may allow selective rejection or partial batch release, and when you use it for that purpose, you may indeed reduce the number of OOS observations you have. So that's good. Within-batch trend information with PAT also facilitates any investigation of an OOS observation.

Until PATs are approved for regulatory purposes, the approved conventional test should supersede PAT results because those are the approved tests. If an OOS result, however, is traced to instrument failure--you know, you've got PAT approved, you have an instrument failure, and you get an OOS result, and you trace it to the fact that the sensor failed--then the traditional approved analytical method can be utilized for batch release.

But once you get PAT approved, that is the standard against which you measure your product. But there may be an exception here. Your sensors all failed, do you, you know, throw out the batch? What we're saying is you can use conventional testing.

I have a blank page here. This question actually addressed method validation, so we deferred any discussion and comment on this issue to the other group, and they've handled that very well.

What criteria should be used to ensure that relevant critical formulation process variables have been identified and appropriate PAT tools selected? Well, the criteria should be based on product performance, adequate process control, and your ability to assure product quality. And what you have to look at are PATs either individually or in aggregate, because very often it's a combination of PATs that gets you to that final product quality control.

What information should be collected to justify use of indirect measurements, e.g., signature correlations that relate to product quality? Product and process signatures are a sum of multiple measurements, and this is why we don't like the term "fingerprint"--because it's all of these multiple measurements you make. You then need to demonstrate a link between the PAT parameter and end-product characteristics. If you're using surrogate kinds of PAT tests, then you need to make sure those are scientifically based. An acceptable variation in the population should be established. So these are all things you're going to need to collect information on.
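
A minimal sketch of the kind of evidence that could support such a link, assuming a multivariate process signature per batch and a measured end-product attribute: fit and cross-validate a latent-variable model. The shapes, variable names, and simulated data are illustrative assumptions.

    # Cross-validated link between a process signature (e.g., spectra)
    # and an end-product attribute such as dissolution. Data simulated.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 200))              # 40 batches x 200 signature channels
    y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=40)  # attribute

    pls = PLSRegression(n_components=3)
    r2 = cross_val_score(pls, X, y, cv=5, scoring="r2")
    print(f"cross-validated R^2: {r2.mean():.2f}")  # strength of the PAT-attribute link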

Finally, where and to what extent should FDA involvement facilitate PAT? Well, definitely we should issue a guidance, define terms, provide a glossary. We've heard that today and yesterday, and we're all looking at these terms in different ways, including things like in-line, on-line, at-line. All of these terms may mean different things to different people so we need to define them. To develop training programs, both internal, which you're already working on, and external, for others in industry and elsewhere that might be interested. To develop workshops and include in those workshops mock submissions, case studies, things that will be helpful to the attendees.

As you already indicated, provide the opportunity for meetings between the agency and applicants that should facilitate these kinds of submissions.

And, finally, to look at global harmonization and ICH guidance as a way to go in the future.

So I would likewise ask if the committee members have anything further to add, but that concludes my remarks.

Not hearing any, thank you.

DR. KIBBE: Thank you, Judy. We have to have an equipment change now. The training team has their own equipment, and they felt--

DR. MORRIS: This will prepare you for the flights home today where you'll probably have equipment changes, too.

DR. HUSSAIN: A question regarding the redundancy, the question you were asking. The answer from the working group was that it's often a business decision. But in a sense, if you're looking at the totality of an application and so forth, then should not the level of redundancy be part of that decision, not generally a business decision?

DR. BOEHLERT: Would you repeat that?

DR. HUSSAIN: I think the recommendation from the group was that the built-in redundancy should be a business decision--

DR. BOEHLERT: May often.

DR. HUSSAIN: May often be, okay.

DR. BOEHLERT: May often be, yes.

DR. HUSSAIN: My thoughts were, in a sense, that I think we really need to pay attention to the redundancy if we have to rely on a total systems-based approach for assessing and so forth. And so I was not sure whether it's truly a business decision. It's a science decision. It's an approval decision in some cases, too.

DR. BOEHLERT: It may very well be. We just didn't get into it in that depth where we said there may be some instances where, you know, it is justified. But, in general, you wouldn't put into place redundant systems unless it provided, you know, some payback to you. You might be willing to lose a batch rather than put in redundant systems.

DR. MORRIS: This will represent some of the products of the training sub-group--working group--and, as was alluded to by Ajaz earlier, this is really a key component in getting PAT up and running in the real sense, because it is, after all, the reviewers and investigators who are responsible for making sure that the methods are faithfully communicated to the agency, as well as for making sure they understand the basics of them.

So we started with course objectives as we laid out this morning. We actually did the course objectives in retrospect because we had a good bit of the syllabus in hand, but then went and modified it as well, and the group was very anarchistic. Essentially the committee itself expanded to include the whole audience. There were several reviewers and investigators present as well, which helped us a good deal.

So on completion of this program, the certification program, the participants should be able to evaluate the adequacy and performance of current and emerging PATs. This certification will require a demonstrated understanding of the fundamentals, importance, and impact of PATs, and we have five outcomes, expected outcomes, including the distinguishing characteristics of the PAT. The participant should be able to demonstrate understanding of the distinguishing characteristics of the PAT, the ID and use of PCCPs, because as Enrico Fermi said, nothing looks as much like a new phenomenon as a mistake. Suitability and validity of statistics, chemometrics, and instrumental approaches to PAT. Typical PAT applications and the associated capabilities and limitations of the methodology, with the understanding that you can't possibly cover all possible implementations. Data handling, analytical control and engineering tools, and vocabulary relevant to PAT.

So these are the outcomes, and I'll go briefly through this, the top line syllabus elements, and then go through a little bit of the course structure, and then, as you like, we can open this to discussion.

We came to the consensus that a background section was necessary. The duration of each of these sections will be the subject of the logistical or strategic meetings that will follow. But the background is to include an overview of PAT concepts and examples and a review of pharmaceutical unit operations. This is in recognition of the fact that, in general, reviewers will typically be Ph.D. scientists who are well developed in an area, whereas investigators will have very broad knowledge, maybe broader than the reviewers', even, but not as in-depth in some areas. So we are trying to consolidate this team--which I should have mentioned is a real key element; having the reviewers and the investigators together is really the heart of this concept, not by our doing but by Ajaz's, I suspect. It's really forming a team that is capable of both recognizing the importance of specific PAT issues and understanding the implications of their actions when they are reviewing or investigating.

So going on to, again--and this came up in Judy's section. The ones that have stars by them are the ones that were identified by the reviewers and investigators as being elements that should be emphasized. So the PCCP definitions and identification strategies and their impact on sensor selection, this would include a fair amount of discussion of the elements of the unit operations that may or may not lend themselves for monitoring and being able to determine when something is monitored, but not correlated to the final performance evaluations that you are employing.

Measurement systems--and, again, I won't go through all of these, but obviously the data handling measurement systems and the associated statistics form a large fraction of what needs to be covered to be able to make sure that everybody is familiar with the concepts at the very least.

Measurement systems, which include everything from the description of typical sensors to variations on the techniques that are impacted by the unique features in pharmaceutical materials, then sampling systems and issues, the representativeness, efficacy, timeliness, and the distinction between on-, at-, and in-line measurement.

Data handling--this is Mel's term, which served to collect a lot of the activities that fall within a conceptually cohesive element but come from relatively diverse areas. So it has basic statistics and dimensionality--that is the sort of description of it--and then moves through chemometrics and, as we heard from Art, pattern recognition, process signatures, and fingerprints. Sonja just left, but Eva wanted to make sure that we put this in: informatics is not an orphan here but is encompassed in the database design and mining aspects of the course.
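
A minimal sketch of the process-signature idea collected under data handling, assuming simulated batch data: learn principal components from good batches, then score how far a new batch sits from that signature space. All names and numbers here are illustrative, not part of the proposed syllabus.

    # Hotelling-style distance of a new batch from a "good batch" signature.
    import numpy as np

    rng = np.random.default_rng(1)
    good = rng.normal(size=(30, 50))             # 30 good batches x 50 features
    mean, sd = good.mean(0), good.std(0, ddof=1)
    z = (good - mean) / sd
    _, s, vt = np.linalg.svd(z, full_matrices=False)
    pcs = vt[:2]                                  # 2-component signature space

    def score(batch):
        """Distance of a new batch from the good-batch signature."""
        t = pcs @ ((batch - mean) / sd)
        lam = s[:2] ** 2 / (len(good) - 1)        # PC variances
        return np.sqrt((t ** 2 / lam).sum())

    print(score(rng.normal(size=50)))             # typical batch: small value
    print(score(rng.normal(loc=3, size=50)))      # shifted batch: large value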

Process control, this was a point of a lot of discussion because there are levels of process control, many of which we don't employ now, but if we're considering the audience that would be in the course and the background they would have to this point, obviously the next leap is that you could do process control so it needs to be introduced. Yet in terms of what will be on their plate most immediately, the areas of batch automation and control implementation were identified as key. So there is a whole range of topics here.

Each of these elements is not going to be equally weighted with respect to time, and the ones that are starred will get more.

Documentation, DQ, IQ, OQ, PQ, and what should be included in each section, and this includes a lot of the details that you saw in Art's summary, which includes through calibration, transfer and maintenance, and data security and audit trails. So these are all topics that were identified as--I'm sorry?

[Inaudible comment off microphone.]

DR. MORRIS: Audit trails, yes. Mike, you'll have to--I was just the secretary at that point. That's what you want, right? Yes. Yell at him. Not tails.

And then wrap-up and recap. Wrap-up and recap is not just a nice send-off to lunch. It's really a fairly intensive review of all of the topics, a little more cohesive in the sense of a summary, so that we tie typical sensors to typical processes--typical, as we say here--basic capabilities, analysis and control concepts, and then case studies to bring this home.

In terms of the logistics, this is just a short list, but it's pretty inclusive; we still have to fill in a lot of gaps. There would be a pre-course preparation using materials supplied to the members of the training session, and some materials that they would get on their own, but it would be reviewed prior to the onset so that you didn't spend a lot of time, because the didactic part of this course would run somewhere between one and two weeks. That would still be titrated. So with the limited amount of time and given the levels of education and experience of most of the reviewers as well as the investigators, it's not necessary to spoon-feed them material they've already had. They know most of it, some of it better than we do, of course.

The second point--and this is not in chronological order, of course--the evaluation would consist of reviewing published or generated PAT examples. So, in other words, at the end of the sessions as well as in the homework activities, there would be examples of--excuse me, let me just kill this. There would be examples of individual processes and maybe whole lines where PAT was employed. And the idea would be to interpret these in a way that would be evaluated by the instructors.

The course structure would be a little different. This is sort of a hybrid structure from some Washington, Purdue, and Tennessee ideas: a didactic portion from, for instance, 8:30 to 3:00 p.m., followed by a team-based case study review. So for the last two hours of the day, instead of lecturing to people who have been blunted and bludgeoned by eight hours of continual speaking, you would go as a group--this would include instructors and students--through the case studies together and pull out points and have teams. The initial number of participants would limit the number of teams, of course, but eventually.

Then homework would be included, which would essentially be application of the day or the combined days' instruction to sort of build up to the evaluation or the assessment that would terminate the course.

The practical training, again, would occur before the final assessment. A lot of this is open for reorganization, but it would flow something like two to three days at Washington, Tennessee, and Purdue, with the individual schools using their facilities and their strengths to broaden the training to the point that people have hands-on experience doing some monitoring and hands-on experience doing data handling and looking at more than one sensor, so that by the time the participants finish, they've hopefully been exposed to it at least to the extent of appreciating the problems. And, again, one of the reviewers in the audience--I don't see him here--you know, he's been looking at applications that had NIR in them, some of them 20 years old. So it's not like this is brand new. But to get hands-on, I think, would be a great benefit.

That's the state at this point, and I'll be glad to try to address comments, and the rest of the team is here as well, if there are any additions.

DR. RUDD: I have a couple of observations. First of all, just to say it looks really good. Where do I sign up?

DR. MORRIS: You'll probably be signed up but as an instructor.

[Laughter.]

DR. MORRIS: Hold that thought.

DR. RUDD: Really, a couple of observations about things that maybe aren't included and, you know, this is in the interest of being constructive.

DR. MORRIS: Actually, if you'll hold that thought for just one second, I'll pull up our "what's missing" list. You can talk.

DR. RUDD: All I was going to say is under the list of process analytical technologies, I don't know whether you've included it with some of the headings you've used, but I'd like to say something about acoustic monitoring, obviously. You've got a phrase in there of chemical imaging, and I wonder if we ought to extend that to include spectral imaging as well.

DR. MORRIS: Yes, I think that's sort of what we had in mind. It was supposed to be inclusive of that, but maybe we should say it specifically.

DR. RUDD: The other term is--I don't know how common this is, but process tomography. I think there's a whole area there, 3-D imaging of the process.

DR. MORRIS: Yes, there's a fair amount of--

DR. RUDD: You may have included it, so I'm just really--

DR. MORRIS: No, not really, but--

DR. RUDD: Just as a safety net.

The bit that I think is really noticeable by its absence, though, is any reference to the processing equipment itself, so I'm moving away from the analytical. And I'm just thinking, Is there value in an appreciation and an understanding of how the analytical technology needs to interface with the processing equipment?

DR. MORRIS: Yes, I sort of envisioned that as being encompassed in part--and I don't know, Mel, you'll have to correct me if that's what you're thinking, in the list of going through the unit operations--

DR. RUDD: Okay.

DR. MORRIS: --you would be describing the equipment. Is that--

DR. KOCH: Well, I'm not sure if you're referring to the sample interfaces or just the feedback?

DR. RUDD: Well, I guess what I'm thinking about is, you know, heaven forbid, you could envisage a situation where a perfectly applicable PAT is being used, but maybe the way it's been interfaced with the blender, the granulator, whatever it might be, or even the granulator or blender itself that's being used could be inappropriate. And I think--I would hope that a reviewer would have just some kind of basic understanding of the rights and wrongs of how to do--

DR. KOCH: I think we had one point in there that had to do with applicability--

DR. MORRIS: Is this the one, sensor sample placement and maintenance?

DR. KOCH: No.

DR. RUDD: But I think it's interfacing at the first level, but then it's about not just have you hooked the PAT and the processing equipment together correctly. It is, is that combination appropriate?

DR. MORRIS: Ah, yes.

DR. RUDD: I'm not sure if I'm making that clear.

DR. HUSSAIN: I think you have--David, for example, a classical example of that is you are doing blend uniformity for a blender and you have a probe in one location, that's an inappropriate--it's not going to catch that spot and so forth.

DR. RUDD: Yes.

DR. HUSSAIN: But it's a tumbling blender, one--so that--

DR. RUDD: It's exactly that sort of thing, just a basic appreciation of the strengths and weaknesses of different processing equipment and how they can be interfaced with what might be perfectly good PATs but used wrongly.
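
A minimal sketch of the probe-placement point above: a moving-block relative standard deviation from a single probe can suggest a blend endpoint while still missing a dead spot elsewhere in the blender. The signal model, block size, and one-percent cutoff are illustrative assumptions.

    # Blend-uniformity endpoint from one probe's trace; probe placement
    # determines whether a dead spot is ever seen at all.
    import numpy as np

    rng = np.random.default_rng(2)
    revs = np.arange(200)
    signal = 100 + 8 * np.exp(-revs / 40) * rng.normal(size=200)  # mixing decay

    block = 10
    rsd = np.array([signal[i:i + block].std(ddof=1)
                    / signal[i:i + block].mean() * 100
                    for i in range(len(signal) - block)])
    endpoint = np.argmax(rsd < 1.0)   # first block with RSD under 1%
    print(f"moving-block RSD endpoint at revolution {endpoint}")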

DR. CHIU: Another point is I think for the benefit of the FDA reviewer and investigator, it would be very useful to have hands-on experience in a pharmaceutical manufacturing setting, if some companies can offer us.

DR. MORRIS: We've talked about that, and Kelsey Cook from Tennessee has talked about that in terms of trying to get into some specific companies with whom they have relationships, and Mel has done the same.

At Purdue, we have a pilot lab set up which would probably suffice, at least for that, but in terms of seeing an operation, there's--in terms of getting in to see an operation, there are certainly potentials that we can view. In terms of hands-on using it, I think that would be restricted. Most of the companies aren't going to want people coming in and actually performing batch production. But, yes, that's certainly on the list.

MR. LEIPER: One of the things that's actually quite interesting: I think the content is superb, but I think the context might be the bit that's missing. We've been talking an awful lot about holistic approaches, et cetera, and now we're delving into specific areas, and we could quite easily get into these areas, which could be quite irrelevant without some methodology to put them in place. And the thing that I see as maybe missing here is looking at risk assessment--a formal approach to risk assessment to actually select how you're going to manage your risk, which is what the effective use of PAT is actually about.

Now, FDA happens to have this exceptionally good system, but the industry doesn't know about it. And the other thing that's interesting--and Ajaz made the comment--is that, you know, the risk assessment was for safety and efficacy. But the risk assessment goes back to the design of the process, et cetera. And I feel that if that kind of thing is missing, we could be in danger of doing what we've done in the past, which is to say that for any problem we get, the answer is HPLC. The answer is the most appropriate solution that manages the variability and the noise in the system, and the way that you do that, I believe, is through good risk assessment and management systems to ensure that the risk that's been identified is properly managed.

DR. MORRIS: Yes. I'm not sure exactly how to capture that, but we'll--

MR. LEIPER: I'm staying for a day.

DR. MORRIS: Okay. We'll put it in as a formal approach to risk assessment, and maybe we can talk with Mel a little bit afterwards as well.

Rick?

MR. COOLEY: A couple other unit operations that appear to be missing, one was process chromatography. It was--

DR. MORRIS: I thought we had that in there. Did we not, Mel?

DR. KOCH: We don't have it in as a unit operation.

DR. MORRIS: Not as a unit op. We have it in--

DR. KOCH: Analytical technique but not as a unit op. We still have some additions to fill in under measurement systems.

DR. MORRIS: Yes, but we do have in-process sensor--this is where we have it.

MR. COOLEY: Right. But up under your process operations, there wasn't any mention, under separation techniques of process chromatography operations as a manufacturing step.

DR. MORRIS: As a manufacturing step. Yes, I think we were sort of lumping everything, including distillization--

DR. KOCH: That's a good point.

DR. MORRIS: Crystallization.

DR. KOCH: You could add chromatography under--in addition to separation, or in addition to extraction.

MR. COOLEY: Also, I don't know if you would like to have filling operations on that list of unit operations.

DR. RUDD: I think actually there are quite a few missing, you know, things like compression and suspension preparation, that kind of thing. The list is not comprehensive.

DR. MORRIS: Right, right.

Let's see. Who's not here? Eva. Send all of your suggestions to Eva.

[Laughter.]

MR. COOLEY: Was there a mention in there on validation, like software validation and the analyzer validation?

DR. MORRIS: Yes. Well, there's a couple of places. In the DQ, IQ, OQ, PQ, there's--

MR. COOLEY: Okay, analyzer--

DR. MORRIS: --analyzer validation.

MR. COOLEY: I don't know if you need to spell out software validation since that's going to be an important part of it.

DR. MORRIS: Yes, I think that's--that was somewhere. I don't know what happened to it. Was it specific somewhere? I can't remember.

DR. KOCH: We thought the vendors mentioned yesterday that they had that taken care of.

MR. COOLEY: Okay. Could I get his name?

[Laughter.]

MR. COOLEY: Then one last thing. It's kind of like David was talking about, ensuring that what the analyzer is seeing is correct, and that could be as simple as how do you know that a window isn't blinded or a sensor's window isn't blinded during operation. Have you taken that into account to assure that that doesn't occur? And if it does, how do you detect that? And extending that further into an on-line analyzer versus an in-line analyzer, if you're extracting a sample from the process, you know, review with the person to make sure they have something in place to ensure that they're getting the valid sample to that analyzer.

DR. MORRIS: Yes, I think we have a separate sampling section. I can't find it right now, but it's in here somewhere. Here we go. So in here you're saying--

MR. COOLEY: Maybe cover it by just mentioning representative. That may take care of it.

DR. MORRIS: Right. I mean, these will have to be fleshed out a good bit for the actual didactic part. And, hopefully, I mean, if you come and watch a line where you're doing a wet granulation on-line, you'll have to become sensitive to a window fouling and things like that as your data flat-lines.

[Inaudible comment off microphone.]

DR. MORRIS: Yes, right. You can get a--you can really come to an endpoint quickly.
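
A minimal sketch of the blinded-window check discussed in this exchange, assuming a univariate sensor trace: flag the points where the rolling standard deviation collapses, i.e., where the data flat-lines. The window length and threshold are illustrative assumptions.

    # Flag a sensor whose rolling variability collapses (blinded window,
    # frozen signal). Window and threshold are illustrative only.
    import numpy as np

    def flatline_alarm(trace, window=20, min_std=0.05):
        """Return indices where the rolling std drops below min_std."""
        trace = np.asarray(trace, dtype=float)
        stds = np.array([trace[i:i + window].std()
                         for i in range(len(trace) - window)])
        return np.flatnonzero(stds < min_std)

    rng = np.random.default_rng(3)
    healthy = rng.normal(100, 1.0, 300)
    healthy[200:] = 100.0                  # window blinds: signal freezes
    print(flatline_alarm(healthy)[:3])     # first alarms shortly after sample 200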

MR. HALE: Ken, did I see this was a one-day course?

DR. MORRIS: Oh, no, no.

[Laughter.]

DR. MORRIS: Half-day, half-day. Just 8:30 to 3:00, that's it.

No, no. It's somewhere between a one-week and a two-week didactic. Then the two- to three-day stints at the universities or companies would follow that. I don't know if they would follow right on top of it. It would depend.

DR. HUSSAIN: And which school will give the master's of science in PAT on this?

[Laughter.]

DR. MORRIS: I don't know. Maybe Wilkes.

Anything else?

DR. RAJU: I thought it was a really nice course formulation. I can't believe you did this in three hours.

DR. MORRIS: Well, actually a lot of this came--was done--Ajaz had given us--if you remember, Kelsey, Steve, and Mel all submitted some, so we had a good backbone to start with.

DR. RAJU: It was interesting to see that you had performance evaluation at the end to figure out if the people you were teaching were taught well and learned well. And I notice that you used a case study format to do that evaluation.

First, why did you choose that? Second, why did you choose not to include more of a theoretical understanding as another measure of testing? And, third, how do you make that case as real as possible to the industry situation they will ultimately review?

DR. MORRIS: Let me just preface it--wait one second, Mel, let me just preface it by saying the homework is actually an ongoing evaluation process.

Go ahead, Mel.

DR. KOCH: The purpose of putting the case studies in there is that we were going to try to make sure that we reflected back on the case studies as ways to have demonstrated some of the theoretical things.

DR. RAJU: You would connect them back--

DR. MORRIS: Yes, we would definitely link them back to the theoretical--the physics and the engineering essentially, but in a context that they would typically find themselves working in. But the homework would be the ongoing evaluation.

DR. WORKMAN: I keep looking at that and I see chemometrics, and yet many of those topics are chemometrics. So I was wondering how you are distinguishing that item from, say, correlation, pattern recognition, other things that are normally grouped in that category?

DR. MORRIS: I'll have to defer to the University of Washington for this.

DR. KOCH: We still have to refine that, but it started out as a list of all those things leading up to chemometrics, and actually we stuck in the basic statistics as a way to get the ball rolling. And certainly we can refine it, because you get into regression and some of the other things, and, yes, they could be subsets. This is still awfully early in terms of finalizing it. We weren't sure there was a chemometrician left in the crowd.

DR. MORRIS: Is there something that looks like it ought to be altered?

DR. WORKMAN: Well, I would suggest you take out chemometrics and put, you know, other items specifically that you will cover that do fall within chemometrics, or put everything under chemometrics that refers to chemometrics. Either way.

DR. MORRIS: I think there will be, as Mel said, there will be a list under chemometrics by the time the participants have to weather this.

DR. RUDD: There was a point coming out of our group which Judy included in the summary that I'd really like just to bring to the fore, and that is that we see a program like this as being applicable to R&D people from industry as well. This is not just about educating the reviewers.

And I think, you know, speaking personally, I would say the creation and existence of this program really is an important step and a strong message to, I guess, address the issue that Ajaz talked about in the first session yesterday, which is that one of the barriers or one area of resistance, passive it may be, is actually within R&D in the industry, and we need things like this, an accumulation of things like this, to really bring that message out and to create the incentives that R&D needs to do all of the exotic but additional stuff that we've been talking about in the last two days. It's important that it's good. It's important that it exists.

DR. KOCH: To add on to that, I think that's definitely a situation that needed to be addressed with regard to R&D. But I think there's another group that's intermediary between these, and that's the regulatory affairs and quality assurance groups within industry that are going to be reluctant to move things through unless they understand some of the basic terminology. So there may be a remedial course of some kind.

DR. MORRIS: But I think there's also--there's a clear intent that the course transition to a broader audience, is my understanding.

DR. KIBBE: Has anybody discussed the possibility of either putting this on-line or taping it and then getting a bigger distribution?

DR. KOCH: We're trying to at least get it on paper here first.

DR. MORRIS: But it's a good idea, particularly for people who can't make it.

Anything else?

[No response.]

DR. KIBBE: Thank you, Ken.

We're moving along at a breakneck pace. This is the kind of efficiencies you get when you put PAT in your process. You get to end several hours early and brave the weather.

I believe on my schedule, this is where Ajaz gets to do his two-and-a-half-hour presentation in 20 minutes.

DR. HUSSAIN: Well, I think this second meeting is coming to an end. In many ways, I think my emotional highs and lows sort of reflect the first meeting again. I was going down, down, down the first day in terms of, you know, what to expect from this meeting, and then it sort of comes back again and gives me much, much more hope to move on. And I think this meeting again did that, in the sense that the types of recommendations and information that you are providing are very, very useful to us, and it keeps us going and making sure that we're on the right track.

So I have some sort of closing remarks and sort of next steps here, and I thought I'd start with a reminder. One thing that sort of started pulling me down the first day was the discussion on flaws, flaws, flaws. And I think a reminder to myself and to everybody is that we--I personally believe the quality of products available to U.S. patients is good. In fact, I think when we go to India every other year on a long trip, we take all of our medicines from here. And my wife is a physician. She won't buy anything from there. So you can see how much faith and trust we have.

So just personally speaking, as a consumer, and also from an FDA perspective, I think the PAT initiative did not raise that as a concern. And I just want to remind us that we are not questioning the quality of products available to the U.S. patient. It is good.

Why is it good? And I think the current quality assurance system, which is setting the specifications, cGMPs, and the testing, is able to prevent the release of low-quality products. I can just look at the number of Class I recalls. They're very, very few. You can count on one hand the number of Class I recalls.

There are a number of Class III recalls, which, to my thinking, reflect some of the efficiency issues that we are trying to talk about. But from a safety and efficacy perspective, I don't think we have that concern.

So what we are talking about is that currently the level of process understanding is low and, therefore, requires a very high level of scrutiny and the need to reject product of unacceptable quality.

I believe the reason for that is our process understanding has been limited because we deal with complex systems. These are not simple systems; although a tablet looks simple, in terms of physics and chemistry it's quite a complex system. It's multivariate, and traditionally we have approached formulation development--I used the term "odd" (?), and I'll use it again--with the perspective of saying that, I mean, that's how we emerged in terms of developing formulations and so forth. And the tradition has been that we treat these systems as univariate systems, and we do one-factor-at-a-time experiments and somewhat trial-and-error experiments. So it really doesn't give us the level of information that I think is now needed. It was okay 30 years ago, but now I think we are dealing with far more potent drugs, far more complex drugs in terms of their physical and chemical behavior.
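
A minimal sketch of the contrast with one-factor-at-a-time experimentation: a two-level full factorial can recover an interaction that univariate runs would miss entirely. The factor names and the hidden response model are hypothetical.

    # Two-level full factorial recovering an interaction effect that
    # one-factor-at-a-time runs cannot see. Model is hypothetical.
    from itertools import product

    factors = ["granulation water", "blend time", "compression force"]
    runs = list(product([-1, +1], repeat=len(factors)))   # 2^3 = 8 runs

    def response(x):                                      # hidden true model
        w, t, f = x
        return 50 + 3 * w - 2 * t + 4 * w * t + 1 * f     # w x t interaction

    # Interaction effect estimate: contrast using the product of coded levels.
    wt_effect = sum(response(r) * r[0] * r[1] for r in runs) / (len(runs) / 2)
    print(f"estimated water x time interaction: {wt_effect:.1f}")  # ~8 = 2*4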

I think we have reached the limit of what our empirical approaches have been able to provide for us in the past. And when I talk, or when Janet talks, about empirical-based GMP, it's not so much a criticism of the GMPs themselves as a criticism of the data on which the GMPs are based. The data itself is empirical trial and error, so what do we expect?

The other aspect: I strongly believe that our raw materials, especially excipients, are not well characterized. I don't see functionality testing as a solution to address that issue. It will help, but it's not a true solution. PAT, I think, brings the issue more directly onto the mixture that we're interested in.

Our equipment selections have been by tradition, and the process factors that we deal with, we generally have limited information. And the question, at least from the FDA perspective, always seems to be: Are they truly optimal or not?

We have a development crunch, and clearly, post-approval changes that require a prior approval supplement are a hindrance in the process. So combine all this together, and I think we have a system which can really be improved. And efficiency, although not directly linked to quality--I think there is a link. Because if you have low efficiency, you actually have a risk of poor quality. I'm not saying we have a risk of poor quality; if you have enough resources and so forth, the quality is maintained. But our resources are getting tighter and tighter. So we are working harder and harder, and there comes a point when the system starts breaking down. And before that happens, I think we need to change. So we have an opportunity to change and improve before we run into a crisis.

So, again, limited but sufficient-for-approval process understanding--because that's the current situation--can lead to low process capability; scrap, rework, and recalls; protracted production cycle times and low capacity utilization; slow and difficult resolution of process-related problems; and a high cost of compliance.

But from a public health perspective, it leads to risk of drug shortages, and we deal with that on a daily basis. Release of poor-quality product and recalls--here I would put the Class III recalls. Delayed approval of new drugs--again, at least since I joined the agency, the last three or four years, this is when we are seeing quality problems holding back your blockbuster drugs.

Quality problems, we have also seen, can confound your very expensive safety and efficacy database itself. And keep in mind, quality is the foundation that allows you to make the safety and efficacy decisions that you make. It doesn't work the other way around: if you say it's safe and efficacious, you can't change the quality standard. So I think that has to be understood.

So the next step: what are the approaches available to us? Option 1, increase the level of FDA scrutiny. However, FDA resources are limited. While the numbers of products and manufacturing establishments are increasing, the number of folks available for inspection is the same or going down. And our ability to inspect and our ability to manage the review and assessment process are being challenged in terms of the resources that are available to do that.

So we felt Option 2 was a better option: increase the level of process understanding, which allows us to prevent rather than scrutinize much more. And PAT is being used as a model system; it's not the only technology. There are other approaches to this, but PAT is a way for us to move forward and hopefully bring other technologies and other approaches along with it.

So the current system, in a sense, is predicated on very strict adherence to SOPs and all other documented procedures. This is a critical step in the quality assurance. Without the cGMP part, the testing literally will not have any value. The two combined make sense for the quality system: the GMP part and the testing part are both parts of the same system, and each is an extremely important step.

We have pre-specified time points and testing, and we use that to document conformance. We have univariate assessment, not a systems approach, for quality decisions. Learning essentially stops after validation, there is an inability to connect the dots, and the system is not conducive to continuous improvement.

We are hoping that the PAT system will address some of these things. Why? We hope to have more performance-based assessment, and we can use this to document conformance throughout the process and prevent manufacture of product of unacceptable end-product quality. A systems approach for quality decisions--why do I say systems approach? I think when you start looking at the process and you're supposed to make decisions about releasing a product on the basis of process data, you have no choice but to take a systems approach. You have to look at every part of the system and connect every part of the system to make those decisions correctly.

Learning and validation are continuous. Some of the dots that we are missing get connected, and that continues. I hope this will be a process which is conducive to continuous improvement. It will be a challenge, but I think we have to make sure our first guidance is moving us in that direction.

Clearly, we'll still have strict adherence to SOPs and all of the documented procedures. But how we arrive at these SOPs and how we arrive at the documented requirement will now be different because of the higher level of scientific understanding and so forth. So you're turning things upside down in one sense. Hopefully that will be the right approach, and I'm hoping that with your help we can make sure it's the right approach.

So there are seven emerging PAT guiding principles. Too many spelling mistakes--I didn't check my--anyway, let's look at an NDA or an ANDA situation. The guiding principle here is that whatever we do, we should not prolong the review times due to introduction of PAT. How do we do that? Early meetings with PAT reviewers, industry meetings with PAT reviewers. Expert technical support available to these reviewers: we are creating a group of four or five individuals with expertise in PAT available to serve as consultants to our reviewers and inspectors.

At these early meetings, we will identify GMP issues and discuss them with the PAT inspector, and possibly have reviewers participate in the pre-approval inspection with the PAT inspector, so you have a team concept. And also consider interim specifications for PATs. Clearly, we know that you will need far more data. The three-batches-for-validation concept may not be suitable for PAT, but it doesn't mean that you hold back your approval. You'll still go through the same procedure, but you would finalize your specification on PAT later on as part of a Phase 4 commitment.

In the post-approval world, at least in my mind, the scenario is a company will go out and collect data to establish PAT proof of concept or suitability. We may or we may not be involved with this process. This could be a totally independent process that a company does on its own. But I think if a company wishes to talk to us, at this point we could consider making ourselves available to see whether you would agree with the processes that are already started. But that's an option.

Then once a company has collected information to establish proof of concept and suitability, we could have a PAT meeting. It would be sort of a special meeting to come and talk about how a company wishes to bring this on line. And actually we're going through one--we actually went through one such meeting in May with the first company that has come through with a PAT submission.

So a PAT meeting with the PAT team. The goals and objectives of this meeting would be to develop consensus on how to introduce PAT on an existing line and questions to be addressed or data to be collected for validation. Discuss the safe harbor concept. What would that mean to that particular product? And then work out a submission and inspection strategy--when, how, what should be done?

Continuing on that, I think FDA will focus on a high level of training, communications, and a systems approach to review and inspection, and here is the CDER/ORA team approach. My hope is that we'll have minimal reliance on the prior approval supplement process. We haven't worked this out, but we will keep this in mind as we move forward: find ways to have minimal prior-approval-type requirements for PAT, because you already have an approved system, so we can actually think of moving towards annual reports and other types of mechanisms to do this. That probably decreases uncertainty much more.

Increased emphasis on underlying science and mechanism, and on assessing the risk of poor quality. In our discussions and our meetings with the companies, these would be more emphasized than what we do today. I don't say that we don't do these things today, but I think this becomes a much, much more emphasized aspect.

Now, the question is: Is industry willing to move on this? I can't speak for the whole industry, but at least one or two companies have already indicated they're moving in this direction; one has met with us, and the other company we hope will come and meet with us soon. So, clearly, FDA is not the hurdle. So three years from now, if this doesn't happen, don't come to FDA and say we were the hurdle. I think this is over. You don't have this excuse anymore.

FDA is working with industry to minimize the risk side of the equation. Industry has to determine the benefit side of the equation by itself. I don't think we can help--although there was one suggestion that FDA should define the benefits. I don't think that's our role.

Success of this initiative depends on one or two companies who will take the lead. So far, I think we're very fortunate we have found those companies. Hopefully this process works out with those two.

Can we afford to fail or not move forward? I think you have to make that decision.

Sort of wrapping up: one thing which sort of pulled me down, and I was feeling a bit down about for this meeting, was--I said we didn't plan this meeting well. We had time left. We could have done more. But, anyway, I think Meeting 3 has very different objectives in mind. Discussions on the general principles of validating computer systems and models, especially Part 11 issues--whatever needs to be discussed there, we will discuss there.

We'll have a dry-run exercise on a mock PAT application and review and inspection decisions. We need case studies, and we set up two mechanisms to get case studies. The docket that was talked about--you have the information in your packet--was essentially created to get these case studies. And what I would like to do is have members of this committee contact different industry members and see how we can get examples and create these case studies, and we can structure the meeting or a working group session at the next meeting so that, since we have already identified the reviewers and inspectors for PAT, we can have them go through the submission. They would not have gone through the training yet, but at least we can see whether we can do a mock run. And that would be, I think, an important aspect of the next meeting.

We also wish to discuss issues related to rapid microbial testing. What information should be incorporated in the general guidance to address rapid microbial testing? One of the major concerns expressed by microbiologists was that the chemistry part cannot handle the microbiological part. There are significant differences. But the general guidance is not specific to any technology and so forth. The general concept and principles should essentially be sufficient here, too. But we would invite some of the microbiology experts to come and talk to us next time, and we will go through this discussion and make sure the general guidance can have one or two paragraphs to address these issues also.

What I plan to do is have this group essentially run in parallel. When we have the microbial discussion happening in one room, this group could actually focus more on the dry-run exercise. So we can have those two happen in parallel so that we can do a more efficient job of completing the program in one day.

NIST has expressed an interest in holding a workshop at the time of the third meeting, so there will be an optional workshop at NIST. I don't have the program defined or anything, but if there is interest, we would work towards a workshop where NIST would share with the group the development of reference standards, the development of calibration standards, even computer validation aspects--what they have been doing. So there is a possibility--I can't promise whether this will happen, but we're working towards an optional workshop for people to attend the day after or the day before, whenever this meeting is.

So that's the next step right now. I'll stop, and if you have any questions, I'll be glad to answer them.

DR. KIBBE: Anybody? Anybody determined to have the last word? Yes, sir?

DR. RUDD: I'll go for it, Art. I'm sure it won't be the last word, but I'll go for the second to the last word, maybe.

Just a point of protocol. How quickly can we get copies of those summary slides? I'm thinking for internal purposes they would be extremely useful.

MS. REEDY: These will be on the Web probably Tuesday.

DR. RUDD: Okay. That's good. Thanks.

And really just a question, Ajaz, about the rapid micro. I just wonder if we could gain any prior experience from the food industry, for example. I'm assuming they must have addressed that issue before us.

DR. HUSSAIN: I think since I have not been involved, I'm going to have the micro folks handle that part of the discussion. So I don't have that expertise.

DR. KIBBE: Anybody else have any questions or comments? There's someone behind you, Ajaz.

MR. RITCHIE: Yes, a question in terms of the availability of making your PAT--not a road show, but if I needed to do more than I'm doing for my company, would it be possible to hear from you live at my site?

DR. HUSSAIN: Well, I've been on the road show for a long time now. Definitely I think we would love to come and talk. In fact, on Monday I'm driving up early morning to Teva Pharmaceuticals. So I'll be spending a day with Teva Pharmaceuticals in Pennsylvania. So, Gary, send me an invitation. We'll have either me or somebody else come and talk to you.

DR. KIBBE: Okay. We're coming to the end of our two days of discussion. I want to thank all of you for your contributions, your patience with some of my poor humor, and I'm sure that what we've done will have a lasting effect on the industry and the regulatory body and the public that we serve.

Again, thank you. Have a pleasant trip home, and we'll see you at the next meeting.

[Whereupon, at 2:38 p.m., the meeting was adjourned.]

- - -