FDA Guidance on Clinical Trial Data Monitoring Committees (DMCs)

FOOD AND DRUG ADMINISTRATION
CENTER FOR BIOLOGICS EVALUATION AND RESEARCH

OPEN PUBLIC MEETING FDA GUIDANCE ON
CLINICAL TRIAL DATA MONITORING COMMITTEES (DMCs)

TUESDAY, NOVEMBER 27, 2001

Hyatt Regency Bethesda
Rockville, Maryland


Part 2

We believe and advise strongly that the sponsor determine the minimal amount of information required. If what you really want to know is whether the conditional probability of success, based on, say, your alternate hypothesis, is 60 percent, you don't need to see all the data from the trial; you just need to know whether the conditional probability of success is over 60 percent or under 60 percent.
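To make that concrete, here is a minimal sketch, not from the talk, of how a DMC statistician might answer exactly that yes/no question, using the standard B-value (Brownian motion) approximation for conditional power. The function name and parameter values are illustrative assumptions, and only the Python standard library is used.

```python
from statistics import NormalDist

def conditional_power(z_interim, info_frac, theta, alpha=0.025):
    """Probability the final one-sided test succeeds, given the interim
    z-statistic at information fraction info_frac, under an assumed drift
    theta (the expected final z under the alternate hypothesis).
    Standard B-value (Brownian motion) approximation."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha)               # final critical value
    b = z_interim * info_frac ** 0.5             # B-value at the interim look
    drift = theta * (1 - info_frac)              # expected remaining drift
    sd = (1 - info_frac) ** 0.5                  # SD of the remaining increment
    return 1 - nd.cdf((z_crit - b - drift) / sd)

# The DMC reports only the yes/no answer the sponsor actually needs:
cp = conditional_power(z_interim=1.1, info_frac=0.5, theta=2.8)
print("over 60 percent" if cp > 0.60 else "under 60 percent")
```

The point of the sketch is that the full interim data never leave the DMC; only the binary answer does.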

Having determined the minimal amount of data, we'd recommend that the sponsor formulate written questions so that they get exactly what they want, and so that there is a written record of exactly what was requested and what was given in terms of information, and that those preferably be yes/no questions. "Is this number over 10 percent or under 10 percent?" Not "What is the number?"

We also recommend that they receive only written communications from the DMC where possible, and not meet with the DMC. We know that, of course, a lot more can be communicated in person and that can certainly have its advantages, but it also raises substantial concerns about the implications for the trial; it is a very dangerous situation when such meetings occur.

There should be standard operating procedures that identify who needs to know and have access to the information and that ensure that others do not have access to it. And the individuals with access should avoid any further role in trial management and should avoid taking actions that might allow others to infer what the results are.

The use of efficacy data from an on-going trial is discussed in Section 6.6. It's very uncommonly done. It's not uncommon to have safety reports that contribute to a labeling if they're an important part of the safety database and the trial has a long way to go to completion. Using efficacy data in that way would be very uncommon, and it's generally ill advised because it might endanger the trial. However, exceptional circumstances may arise, and have arisen on rare occasions, and we advise that before accessing and using such data in a regulatory submission, sponsors should talk to the FDA, as well as the data monitoring committee, to consider the implications of using those data, and also to consider approaches: what data should be looked at, who should look at them, can they go straight from the monitoring committee to the FDA without going through the sponsor? That's been done in some cases to help preserve the integrity of the trial, and so forth. Those issues merit discussion before decisions are made.

I'm going to conclude this talk with a few brief case examples that exemplify some of the problems that have arisen, some of the issues that this guidance is trying to alert people to. I have four examples; three of them specifically have to do with involvement on the monitoring committee and access to interim data. Of the three, one is from the NIH and two are industry examples. Two involve data coordinating centers and two involve sponsor statisticians, so we have some good food for that discussion and debate.

I'm sure a number of you are familiar with the studies about 10 years ago of HA-1A, an antibody to lipopolysaccharide for treatment of patients with sepsis. At a particular point in time, two-thirds of the data had been reviewed at an interim analysis. Of note for this discussion, the sponsoring company's vice president for research and development attended the closed session of the monitoring committee and viewed the interim data. In addition, the statistical coordinating center, a private organization contracted by the company, prepared the data monitoring committee report, and the president of this statistical coordinating center also chaired the data monitoring committee.

Subsequent to this interim analysis, the sponsor submitted a revised analytic plan to the Food and Drug Administration. They told us that they had not seen any of the data at the time. The plan modified the primary analysis, changing from a 28-day to a 14-day analysis; modified the subgroups--there were gram negative infection and sepsis groups and gram negative bacteremia groups, and the plan modified which groups were important to the analysis; changed to a rank analysis from a point-in-time, landmark analysis of survival; and made many other clarifications, because the original analytic plan was rather vague on a number of issues--a lot of useful clarifications, but also some significant changes.

These changes were reviewed by people who had seen all the analyses, both those defined by the original protocol and those defined by the new plan. They weren't fully made by those people, in fact, but they were reviewed by them. The new plan had been signed off by this vice president and by the statistical center, both of whom had seen unblinded data but assured us that they didn't allow that to bias or influence their decisions on the acceptability of the changes.

The outcome of this situation was that these changes, once we learned the conditions under which they were made, raised in our minds and ultimately in the public mind considerable questions about the validity of the data. We attempted to revert to the original analytic plan, although it was somewhat ambiguous in a number of areas. Other issues arose from the fact that the sponsor had misrepresented the situation, and that led to some significant implications that I won't digress into.

There may be some misunderstanding here. The product was not approved, but it was not these issues that were largely responsible. It was not approved because the trial was not a successful trial, although it had been published in the New England Journal as having a mortality P value of 0.012. By our assessment of the best prospective analysis, the P value was 0.6. We requested a confirmatory trial; that was done, and it was stopped by the safety stopping rule with a trend toward excess deaths on treatment.

Actually I'll come back to that trial in example number 4 if time permits.

The second example comes from the development of tPA, tissue plasminogen activator, alteplase, whatever you want to call it. The trial was sponsored by the Neurologic Institute, a phase II placebo-controlled trial. The primary end point of this trial was neurologic function as assessed at 24 hours. The secondary end point was the functional status of the patient--their level of residual disability--at 90 days. It's the secondary end point that the FDA recognizes as an appropriate type of end point for approval of a drug; the primary end point is potentially a useful end point for drug development. That's, of course, up to the sponsor to choose.

Now an interim analysis had been conducted with about three-quarters of the data in, and at some point subsequent to that the steering committee of the trial, which was largely blinded to this interim analysis, proposed switching the end points and increasing the sample size. They felt it could be very difficult to do a confirmatory trial in this setting: if the secondary end point was successful, then since the drug was already on the market for treatment of patients with myocardial infarction, physicians could just use it, and if they could just use it, they might not be willing to enroll patients in a confirmatory trial. So they proposed to make this trial more definitive by making the clinical end point the primary one and increasing the sample size to power it.

The problem with that proposal, which was a logical one on the face of it, was that the statistician, who was also the study coordinator and worked at the study coordinating center, was unblinded, and this statistician had joined the steering committee when the proposal was formulated. So the statistician met together with the committee, did not share the unblinded information, but was part of the discussions that led to this proposal. Then the statistician came to the FDA, together with some other members of the steering committee, and presented this proposal to switch the end points and to change the size of the trial.

In this particular case the agency felt that there was just no way to know the amount of bias that could have been introduced by the fact that that study coordinator knew both what was going on with the primary end point and the secondary end point, and knew whether this was a very good idea or a very bad idea in terms of the institute's ultimate desire to prove the drug effective or not. Despite the best intents of the institute and the study coordinator, that could introduce uncorrectable bias and shouldn't be done.

We said they should simply complete this trial and start another trial with the end points switched. They did that. They worded it and published it as part A and part B of the same trial, but the parts were separately analyzed, as we proposed and suggested. And in fact, it turned out that both trials gave essentially identical results, which was a very strong positive finding on both sets of end points. It turned out that the interim data that had been viewed by the study coordinator showed an even more powerful finding on the secondary end point of functional status at 90 days than on neurological function at 24 hours, suggesting that the switch would have been one that was good for success and wouldn't even have required the extra people for powering.

And again, the fact that the study coordinator knew that information and participated in those discussions, we felt, essentially rendered it impossible to make those changes without the potential of endangering the trial.

It's probably a good thing in that particular case that there were, in essence, two trials, because thrombolytics can cause intracranial hemorrhage. There were other studies done previously and subsequently, at different doses, with different drugs, or in different patient populations, perhaps not as rapidly treated, which haven't achieved the same level of success, and I think there's still a significant question in the field as to exactly when and in whom this treatment is more useful than harmful. But the fact that there were two successful studies was, I think, a very important part of the development of that treatment.

My third example of this sort of modification of a trial, which I'll try to go through quickly, was one in which interim data from most of a phase III trial--I don't have the exact numbers with me--had been prepared by the sponsor's statistician for review by the data monitoring committee.

Subsequently, the sponsor decided the trial had been underpowered. Basically they said, well, we always knew that our estimated treatment effect was too high, but it was based on how much money management made available to do the trial; now they've given us more money and we want to power a larger trial.

Well, this happens, and you know, larger trials tend to be better than smaller trials. Of course, the problem is if you've looked at the data at the end of a trial and you say, well, our P value just missed so we're going to extend the trial a little longer to turn it into a success, that would have some rather problematic effects on type 1 error. And we didn't know, of course, the extent to which that may have happened since, at the very least, the statistician who was part of the sponsor's organization planning the trial was, in fact, aware of the interim data. As this slide notes, the sponsor's statistician sat on the trial planning team and attended internal meetings to discuss and decide upon the extension.
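As a rough illustration of why that is problematic, here is a small simulation sketch, not from the talk, of the practice just described: under the null hypothesis, a trial whose first-stage result "just misses" is extended with more data and retested at the usual critical value. The extension fraction and the "just missed" window are illustrative assumptions.

```python
import random

def type1_with_data_driven_extension(n_sims=200_000, z_crit=1.96,
                                     extend_frac=0.5, near_miss=1.0):
    """Simulate one-sided type 1 error when a trial whose first-stage z
    falls in [near_miss, z_crit) is extended by extend_frac of the original
    information, and the pooled z is tested at the same z_crit."""
    w1 = (1 / (1 + extend_frac)) ** 0.5             # weight on original data
    w2 = (extend_frac / (1 + extend_frac)) ** 0.5   # weight on new data
    hits = 0
    for _ in range(n_sims):
        z1 = random.gauss(0, 1)                     # stage-1 z under the null
        if z1 >= z_crit:
            hits += 1
        elif z1 >= near_miss:                       # "just missed": extend
            z2 = random.gauss(0, 1)                 # independent new data
            if w1 * z1 + w2 * z2 >= z_crit:
                hits += 1
    return hits / n_sims

print(type1_with_data_driven_extension())  # noticeably above the nominal 0.025
```

The extra rejections come entirely from trials that would otherwise have been negative, which is exactly the inflation being described.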

In this particular case the company went to the lengths of getting sworn affidavits that no, the statistician never talked to anybody. The affidavits didn't mention whether he smiled at somebody or nodded when they proposed these changes. It clearly was millions of additional dollars being invested in a drug that was going to mean hundreds of millions or billions of dollars to the company, so at the least the concerns certainly were there that somebody might have wanted to know what the statistician knew, and that the statistician knew information that may have influenced his participation and role in the trial.

We did allow the increase in the size of the trial, since we thought that it would provide useful information. However, in this particular case we expressed our reservations in terms of how we would interpret the data under certain circumstances.

That's the end of my talk, but I'm going to take just a minute to present one more example that really fits in better with the next session, about interactions with the FDA, which is being presented by Bob Temple, but he suggested that it would probably be better for flow if I mention it here. This one is really about the FDA itself knowing interim information about trials.

The CHESS trial is the trial that was done to confirm whether HA-1A really worked in sepsis. It was initially named for "confirming" HA-1A efficacy in septic shock, but when it failed they changed the C from "confirming" to the name of the company, actually, which I won't mention here, or something like that. I thought that was kind of cute. They thought it was unethical to do the trial because they were convinced that it had to work.

In any case, the interim analysis showed a strong trend toward harm. It was .07, one-tailed, I think, toward harm. That met a stopping rule; it also met a futility stopping rule, and the trial was terminated the next day, on the 17th. This was in '93.

At the same time there was a trial in a related but different condition, meningococcemia, a type of sepsis but with a different pathophysiology and affecting very young children, and because of the excess deaths in the CHESS trial they suspended enrollment. Then the next day, on Monday, they came to the FDA--we had already read the news--and said, all of this has gone on and we'd like you to look at the data from the meningococcemia trial to determine whether we can restart that trial. There were concerns that the drug might be harmful; on the other hand, it might be very different in that trial, and helpful, and the company wasn't sure of the best way to proceed.

The FDA in this case, as we do in a number of cases, looked at who was on the DMC and how well constituted it was, because we have an important obligation to protect the safety of patients in such a trial as well. On the other hand, we have a desire not to unblind ourselves where possible, because of our potential role in considering changes to a trial and the way in which that can be biased by knowledge of the data.

In this case we had an excellent data monitoring committee, with a lot of experts in the field. I remember Janet Wittes was on this particular committee, among others. We felt that this data monitoring committee, if they saw the data from both the CHESS trial and the interim data from the meningococcemia trial, was well constituted to determine the appropriate fate of this trial without unblinding the FDA, and we suggested to the sponsor that they have the committee meet immediately with that information.

The monitoring committee recommended continuation, and interestingly, about two years later the sponsor did propose some significant changes to that trial, and we were pleased to still be blinded to the data outcome as we considered that proposal.

And with that, I'll thank you for your attention.

DR. LEPAY: Jay, thank you very much. I'd like to invite the members of the second panel to join us here, and Mary, as well, and perhaps I can also get some assistance from the audiovisual people, since we won't be needing the slides until after the break.

I'd like to go down the line of our distinguished panelists for the second panel. Dr. Thomas Fleming, who's chairman of the Department of Biostatistics and professor of statistics at the University of Washington, Seattle. Norman Fost, with the Department of Pediatrics and the program in medical ethics at the University of Wisconsin in Madison. Larry Friedman, special assistant to the director of the National Heart, Lung and Blood Institute at the NIH. Ira Shoulson, professor of neurology, medicine and pharmacology and Louis Lasagna Professor of Experimental Therapeutics at the University of Rochester. And Steven Snapinn, senior director of scientific staff at Merck Research Laboratories.

I'd like to follow the format that we tried this morning and ask if each of the panelists could perhaps deliver a few remarks in response to their own experiences and what they've heard today and hopefully this will help us, as well, develop comments that will be useful in our review of this particular guidance document.

So with that I'll start with Dr. Fleming.

DR. FLEMING: Certainly this topic of data monitoring committees is rich, complex and controversial. And while a 20- to 25-page guidance document can't be comprehensive, I've been very impressed that this has been extraordinarily well done in really capturing in many areas the essence of many of the key issues.

The sections that we're considering here include Section 6 on independence. A quick comment: I'm very pleased that the document brings out that the conflicts of interest we need to be aware of and take account of are not only financial but also professional or scientific.

I'll be focusing most of the few comments that I can make on Section 4 and, as it relates to this, on Section 6, on issues of confidentiality, and let me just quickly touch on what I see as some key issues, maybe to expand a bit on what's in the guidance document.

First, in Section 6.4, as Jay Siegel called to our attention, there's discussion about multiple roles of statisticians, and you might characterize those, in an oversimplification, in two key domains: one being the role of the protocol or steering committee statistician, involved in the overall design of the trial, and the other the role of the statistician whom I might call the liaison between the data monitoring committee and the database.

And very quickly, I think there is a lot of wisdom in what's been discussed in considering the advantages of having those be different statisticians, in that the liaison certainly has to be unblinded to the data, whereas the statistician who's interacting with the protocol team needs to have those interactions not only during the design of the trial but during its conduct. Jay raised some examples: maybe there's more money available that would allow the study to be made much larger in size, or maybe external data come to light that might lead to the need to change end points or to change key aspects of the analysis. The statistician needs to be integrated into those discussions and, as a result, would need to be blinded. So I think it is something to consider as an advantage, having different people serve in those two roles.

Another issue is brought to light in Section 4.3, something that I know has been on the minds of many of us who've been on monitoring committees. I did an informal survey of a number of statistical colleagues who'd been on monitoring committees and I asked them, what's your most frustrating or controversial issue? And it was surprising to me how often people mentioned as their first frustration proposals that the monitoring committee itself be blinded.

I think the fundamental issue that's concerned us is that our first and foremost role in monitoring trials is safeguarding the interests of study participants and to do so in a way that the data monitoring committee is uniquely positioned to do, it's critically important for that committee to have full insight. And I was pleased that in Section 4.3 the document says the DMC should generally have access to actual treatment assignments for every study group.

Another issue, which Jay and Mary Foulkes got into in Sections 4 and 6, relates to sponsor access to interim data for planning purposes; it's in Section 6.5. I guess I would in general argue that one should be extremely cautious about what one provides.

Now a related point comes up in Section 4.3, where there's discussion about the content of the open report, and I would argue that much of what is there is certainly on target. The open report should be presenting aggregate data that give good insight about how the study is progressing and about study conduct--issues that relate to overall recruitment, overall retention, overall adherence.

What's controversial, though, is whether aggregate data on efficacy outcomes or safety outcomes should be presented. I would argue that can lead to great concerns. You may have an advanced cancer trial where you anticipate a 15 percent natural-history survival at two years. If aggregate data show 25 percent or 10 percent, that could give clues about whether the intervention is working or not working, respectively. A worked version of this arithmetic appears after the next two examples.

Or you may have a behavioral intervention looking at reducing transmission risk of HIV. If you look at the secondary data in the aggregate on behavioral effects and you see major behavioral effects, that may be interpreted as clear indication of efficacy or maybe even the need to change the primary end point. These are issues that I think have to be very carefully dealt with when one is considering what information should be presented in aggregate.

On the other hand, you may have an IL2 trial where you're looking at preventing HIV transmission and it's well known that IL2 is going to change CD4, so showing aggregate data on CD4 in that setting is simply getting at whether there's proper adherence. So it's an issue that needs to be thought through on a case by case basis.
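Coming back to the advanced cancer example: the arithmetic by which an aggregate number can unblind a reader is simple. The sketch below is illustrative, not from the talk, and assumes 1:1 randomization; the function name is hypothetical.

```python
def implied_treatment_rate(pooled_rate, control_rate, alloc_treat=0.5):
    """Solve pooled = a*treat + (1-a)*control for the treatment arm's rate,
    where a is the fraction of patients allocated to treatment."""
    return (pooled_rate - (1 - alloc_treat) * control_rate) / alloc_treat

# Anticipated natural-history (control) survival of 15% at two years:
print(implied_treatment_rate(0.25, 0.15))  # 0.35 -> hints the drug is working
print(implied_treatment_rate(0.10, 0.15))  # 0.05 -> hints the drug is harmful
```

This is why a blinded pooled rate is only as uninformative as the uncertainty in the anticipated control rate.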

Information in the open report is what I would consider public information that could be widely disseminated. There is a need in some cases for information on a more limited basis. A medical monitor may need to present information on a regular basis to regulatory authorities about emerging problems. That person must have access to the emerging safety concerns--the SAEs--in an aggregate sense, to carry out that responsibility.

Or you may need to adjust sample sizes based on event rates. That information could be provided, but I argue it should be provided on a need-to-know basis--only to those people who need access to that data to carry out those responsibilities.
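As one hedged illustration of that kind of planning use: in an event-driven trial, the required number of events depends only on the design, and the pooled, still-blinded event rate is enough to re-estimate enrollment without anyone seeing data by treatment arm. The sketch below uses the standard Schoenfeld approximation for a 1:1 log-rank test; the function names and numbers are illustrative.

```python
from math import log, ceil
from statistics import NormalDist

def schoenfeld_events(hr_alt, alpha=0.05, power=0.90):
    """Events needed to detect hazard ratio hr_alt with a two-sided,
    1:1-randomized log-rank test (Schoenfeld approximation)."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2) + nd.inv_cdf(power)
    return ceil(4 * z ** 2 / log(hr_alt) ** 2)

def enrollment_for_blinded_rate(events_needed, pooled_event_rate):
    """Re-estimate total enrollment from the pooled (blinded) event rate,
    without unblinding anyone to treatment assignment."""
    return ceil(events_needed / pooled_event_rate)

d = schoenfeld_events(0.75)                  # about 508 events for HR 0.75
print(d)
print(enrollment_for_blinded_rate(d, 0.30))  # ~1694 patients if 30% have events
```

Only the pooled event rate is consumed here, which is what keeps such a re-estimation consistent with a need-to-know policy.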

Maybe just a couple of other really quick points. Mary talked about the chair this morning and I think one of the concepts that comes to mind there is the concept of consensus development versus voting. She had mentioned that one of the characteristics of the chair is that it should be a person who's a consensus-builder. I think that's an extremely important point.

I've often heard it said that we have to have an odd number of people on the DMC so that when we vote it won't come out tied. I generally object strongly to votes on DMCs. I believe that the DMC's responsibility should include discussing issues at a length and in a depth sufficient to arrive at consensus about what ought to be done. And I agree with Mary that, as a result, the chair needs to be somebody particularly skilled at developing consensus.

Finally, as has been stated, there needs to be minutes of open and closed sessions. The sponsor's responsibility should be to ensure that those minutes are obtained. The FDA, in turn, I believe, should routinely request those minutes after the study has been completed.

DR. LEPAY: Thank you.

Dr. Fost?

DR. FOST: Thank you. I just can't resist commenting that Tom's comment about close votes reminds me of the patient who got a telegram, "Union Local 221 wishes you a speedy recovery by a vote of 15 to 14."

I want to make four points. First, I was very pleased that the draft document takes very strong and clear positions on the nondata analysis functions of the so-called data monitoring committee. That is, it says in a couple of places that these committees should review the consent form, that they should review the design of the study, and that they should take account of external information that may arise in the course of the study, all of which I agree with. None of those are data monitoring functions, and that's important; it leads to two things.

First, it's important that it be in this guidance because in at least three DMCs that I've been part of, rather acrimonious fights erupted at the beginning when I raised these kinds of issues, with charges being made that this is a data monitoring committee; those are IRB functions or steering committee functions; it's not for the DMC to do.

If it's important, as obviously the writers think it is, I think it would be helpful to put the reasons in there. It's just sort of stated, and a justification is not provided. The justifications are the independence of this group--it's supposed to form some independent assessment of the propriety of the study--and the personal integrity of the DMC members. I, or a statistician, can't be participating in data monitoring for a study that we think is not protecting subjects because the consent is flawed, or because the design is flawed, or because there's outside information.

One more conclusion follows from that, and that's the name of these groups. With all respect to Susan's very good slide about the thousand different ways you could name these things, I think it doesn't make sense to call it a data monitoring committee. In fact, that undermines these nondata aspects. So I would much prefer that they be called independent monitoring committees, or just monitoring committees, so it's quite clear that the function of the group is something other than, or in addition to, just data monitoring.

Point number two, with regard to the consent process: as an IRB chair I can report that consent forms these days almost never tell the subjects about these data monitoring committees, and particularly the part that the subject might be interested in knowing about--that the study may lose its equipoise well into the study, while recruitment is still going on and while subjects are still in it. That is, there may be in the course of the study good evidence that A is better than B, but the study is going to continue because maybe A is more toxic than B. A recent anti-platelet trial showed efficacy early on, but it looked like there was a lot of bleeding going on early on, and how these things balanced out required some more time and some more data.

Right now there are very few patients who know about this, and maybe fewer who care about it, but litigation is rising rapidly in this field--it's been relatively uncommon--and somebody is surely going to bring a suit, or some critic is going to say this trial continued when it was no longer in equipoise; there should have been an agreement or a contract with the patient to do that. I think it's a boilerplate kind of paragraph that can be constructed. We're well on our way to 30-page consent forms, but I don't know any way around it if we're going to include meaningful information.

So I would suggest that the existence of data monitoring committees and what they do in terms that would be meaningful to a patient should be in the consent form.

Third, having said that these nondata functions are important, I want to say something against these activities, or at least about one of the problems with them that one needs to look out for.

First, with regard to design: I don't know how you can not review the design when you join one of these committees. If you think it's very faulty, obviously you can't ethically participate. But I've been on at least three data monitoring committees in which the investigator became enraged when the data monitoring committee started making comments about changes in design. You know, this had been under discussion for years--serious, intense meetings for the better part of a year--and now for somebody else to come in with a different view, maybe a legitimate view, but to say "Do it our way, not your way," was quite outrageous.

So the timing of when the committee gets involved in all this is very problematic. It can't be part of the planning of the study, but if it comes in too late, after the study has started, and thinks the design is so faulty that its members can't ethically participate, that can lead to very acrimonious discussions.

I don't know what the solution to that is but I think it's a hazard of getting involved in design. I think the answer is that the committee has to have a high threshold for going to war over it. That is, they should not demand some change in design unless it's something that's really very fundamentally wrong, not just "I think it would be better if you did it this way or the other way."

Second, the same kinds of cautions arise with regard to the consent process. The risk here is that the data monitoring committee takes over the position of the IRB or, more commonly, competes with the IRB; that is, it sees the consent form at the outset of the trial and says, oh, this is faulty in some fundamental way and needs to be changed. The steering committee is then obliged to send a note to all the IRBs in a multi-center trial requiring them to change the consent form, but a local IRB may not agree with the change, so the investigator is caught in the middle.

And as an investigator myself and an IRB chair and a member of DMCs, I can say it's very frustrating for investigators, IRBs and DMC members to get buffeted about in this sort of endless loop of who has the final say over the consent form.

So again, the answer to this I think has to be that the threshold has to be pretty high. Having said that, I've been part of a DSMB where, halfway through a study involving 10,000 people, when new data came in from the outside involving risk of the study drug, we insisted that a revised consent form--that is, reconsent--go out to almost 10,000 patients. This was not appealing to the study directors, but we thought it was sufficiently important because it was a major risk and we thought people should know about it before continuing to participate.

On the other hand, I've been part of a DSMB in which a consumer advocate who had had no prior IRB experience insisted on minute changes in the style and wording of the consent form and I think it was important for the DMC, while being sympathetic to a colleague, not to participate in that sort of micromanagement of the consent form because of this endless loop and the very long time that it can take.

With regard to these hazards of DMCs competing with IRBs, I mentioned to Susan during the break that John Crowley, a statistician and former colleague at the Fred Hutchinson Center, has written on this: problems with DMCs replacing IRBs, oversight committees, and steering committees, particularly in studies with cooperative oncology groups and so on, where there's been quite a lot of vetting and good statistical consultation ahead of time. To have the DMC come in and start micromanaging can be quite problematic. So there is a contrary view out there.

Last, a minor point, just to repeat what Dave DeMets said in the discussion this morning: something needs to be said in this document about local studies that can't afford full DMCs, as to what a reasonable substitute would be. I think we've heard from several people, and I concur heartily, that an IRB can't be a monitoring committee; it's just way beyond its capacity. But something needs to fill in there, and maybe it's just saying something like hiring an independent statistician or a clinician, or the two of them, and having them review the data on an interim basis. So, something less than the full detailed elements of the guidance, but something that would be better than nothing. Thank you.

DR. LEPAY: Thank you.

Dr. Friedman?

DR. FRIEDMAN: Thank you. Obviously I'm going to be speaking from an NIH perspective, so take that into account. I thought the document as a whole was outstanding and brought up a number of issues which people have talked about for a long time; it's nice to see them in a document that is going to be widely distributed. Having said that, I have a couple of points I'd like to make.

First, I think we have to remember why we do clinical trials and what our objective is in doing those studies. It's clearly to gain important medical knowledge, and certainly from the NIH perspective it's knowledge of public health importance. Simply conducting a clinical trial is just part of the overall way we go about getting that important knowledge.

Taking it one step further, a data monitoring committee is one tool to be used in making sure that we have high quality clinical trials. Obviously it's a very important tool, but it's just one aspect of study design, participant safety, and indeed monitoring, because I would hope that others are doing monitoring on an on-going basis as well. Clearly a data monitoring committee only meets occasionally and only sees the data in tabular form, while other things will be going on in real time and people have to be able to react.

So that brings me to the point of independence. Yes, independence is important, and I have argued for many years that a data monitoring committee has to be independent in the sense of not having a vested interest in the outcome. But to concentrate on independence and forget about why we're doing the trial in the first place is a mistake, and I think we have to recognize that independence is not our goal. Independence, to the extent it's important, is another tool in making sure that all data monitoring is conducted appropriately.

To the extent that--and Joe Constantino brought this up this morning--to the extent that we concentrate so much on independence and forget the other aspects, which may be more important in given circumstances, I think we're doing a disservice to the study and, most importantly, to the participants in that study.

This comes up in whether or not we want a truly independent statistician to present the data--someone who may not understand the protocol as well as someone who lives with it on a day-to-day basis, who may not know all the nuances of what's going on, and who may not have gotten all of the reports on a day-to-day basis.

So these are trade-offs that I think need to be considered. I'm not necessarily arguing against independence, but it's something that needs to be considered; it's not necessarily an either-or.

Similarly, and again speaking from the NIH perspective: attendance by sponsors at meetings. I'm not talking about being members, but attendance. Obviously it's important for NIH to know what's going on, to hear what's going on, because we have a broad mandate from the public to produce high quality research for public health purposes. And yes, of course, we want the best possible advice from "independent committees," but if that best possible advice is not communicated in a way that is optimal for our broad purposes, that is not ideal. I think we strongly need to think about why and when it's appropriate for sponsors--in my case the government, but potentially others--to be present and to hear the kinds of discussions that are going on, so that the real objective, conducting the best quality study, is accomplished.

I did hear the comments by Susan and others about how these are suggestions, guidelines--that it's not an attempt to make sure everything is the same--but I think there's a tone here that conveys a certain way of doing things, and I think the document would be better if it were perhaps more open to some alternative approaches. Thank you.

DR. LEPAY: Thank you.

Dr. Shoulson?

DR. SHOULSON: I'll try to make my comments brief because it looks like you're running out of time.

Just a few things. I wanted to congratulate the agency for developing this document, but I am also mindful of the fact that the document was really developed on the basis of collective experience in the past few decades, largely anecdotal shared experience, not so much a database that we can go to. And I think one thing to keep in mind is that moving forward we need to develop a database that we could tap into to really look at the experience of DMCs, and hopefully this will be a more prospective and more systematic type of database. That's just a general comment.

The other general comment about the document is that obviously its audience is sponsors--sponsor companies, sponsors' steering committees, or CROs--and that's appropriate, but I'd just point out that there's an important group here, namely the investigators in the trial and the IRBs to which they are accountable--and obviously in the long run they're accountable to the research participants and their patients--that needs to be addressed. I won't repeat many of the remarks made by Dr. Fost--I guess as investigators we share a lot of these issues--but I think it's important, either in this document or in a subsequent version that's perhaps broader, to clarify the roles of the IRBs and the DMCs in regard to the monitoring of trials.

Obviously one difference is the IRBs are responsible for the up-front judgments in terms of benefits and risks, although they do have an on-going responsibility, and the DMCs, of course, have to look at accumulating data in the course of a trial.

I think one important part of a DMC is its constitution: at least in my experience, the members should at least appreciate or share the equipoise that has been developed by the investigators and sponsors in the trial. If they cannot share or appreciate that genuine uncertainty about the merits of the relative treatment arms, then that would be a good time to decide not to participate.

There is, I think, an important point for sponsors, and particularly companies: they sometimes delegate, or relegate, to DMCs too many things that they themselves are perhaps responsible for. For example, the stopping guidance--stopping rules, as some would speak of them--I think the first draft of this really should come from the sponsor to the DMC, and then perhaps comments come back on it until it's fully developed. So I think that's an important responsibility of the sponsor.

Just a few other points. Training, I think, is a critical issue. I think we underestimate how insufficient the expertise is among clinical investigators, biostatisticians, and bioethicists; people really need training. I think we need to approach this in a more systematic fashion and to think, perhaps outside of this particular box, about curriculum standards, credentialing, and the type of database needed to train people for DMCs. Just reading this document and hearing the discussion has been enlightening for me in terms of our own commitment to training individuals involved in experimental therapeutics.

One point: I only counted once in the document that the term "medical monitor" was raised, and this is an important person from the point of view of investigators and sponsors. I think that position should be delineated a little bit further--the quasi-independent type of role in the study in which the medical monitor sits.

Finally, I just want to mention the importance of dissemination of information to the public. It was mentioned by Dr. Fost about IRBs. In our multi-center trials we have several IRBs who will not even review a trial unless the composition of the DMC and the stopping guidelines of the DMC for that trial are submitted to them. And oftentimes, of course, these are not developed at the same time as the initial model consent form. I think IRBs are doing this, one, because of their commitment to ensure the safety and welfare of the research subjects, but also because they want to clarify what their role is and what the DMC's is.

So I think this blurring of roles and delineation of roles is a very important issue that really needs to be addressed.

And the final thing I'll say about dissemination of information is that we need to educate the public in general--not just the public participating in clinical trials, but the public in general--about the monitoring of accumulating data and possibly of performance in a trial. I think it's a very challenging thing to do, but I think it behooves us, and I think at the end of the day the public will be more confident about the value of clinical trials as a result. Thanks.

DR. LEPAY: Thank you.

Dr. Snapinn?

DR. SNAPINN: First, by way of background: as a statistician in the pharmaceutical industry I've had the opportunity to play the role of an unblinded statistician reporting to DSMBs on a few occasions. I also cowrote the SOPs that my company uses for forming and interacting with DMCs in general.

In reading the draft guidance I was very happy to see that with one or two notable exceptions the guidance is extremely consistent with our own SOPs but one of the exceptions, as you might have guessed, has to do with whether or not an industry statistician should be unblinded in reporting the results to the independent DMC.

Now the distinction between the two documents is not all that great. First, I think we all agree that the unblinded statistician within the sponsor should not participate in any discussions regarding the protocol or protocol modifications; those would be totally out of bounds. This person should be isolated to the extent possible from the project in general, doing only the interim analyses--in a sense, an independent person working for the DMC for the purpose of that one study.

Now I suspect that we're going to have a serious discussion about this issue over the next half hour or so, but let me just start it off with maybe a less serious comment. It's possible that one of the reasons for the disagreement, and one of the reasons why I and maybe some others in industry prefer to keep the role within the industry, is that it's so much fun to do these analyses. Maybe fun is not the exact right word, but it's extremely exciting and rewarding to be working on these trials, to watch the results emerge as the trial's progressing, and usually it's important and exciting medical research that you're involved with, and you get to interact with the DMC, which, of course, is comprised of some of the world experts in the field. So if this role is taken away from the industry, the life of a pharmaceutical statistician becomes a lot less interesting.

Just a couple of other brief comments. First, I'm actually not very comfortable with some of the things in the document about the nondata functions of the DMC. Let me bring up one example which maybe crystallizes my concern. This is an experience I had earlier this year: an on-going placebo-controlled trial in patients with type 2 diabetes. While our trial was on-going, other results were published--other placebo-controlled trials with drugs in a similar class, with very positive results. So there was a question as to whether it was ethically acceptable for our placebo-controlled trial to continue on the basis of this external information.

In the case of this study our fully blinded steering committee ultimately decided the trial had to stop; it was not ethical to continue it, which I was very happy with. My greatest concern was that the DMC would make a similar recommendation because if they had, I have no idea what the impact on type 1 error would have been. Would we be required to compare the observed P value with the interim monitoring P value, which, of course, is quite small--in fact, I think it was .001 at the time the trial would have stopped--or would it have been appropriate to ignore the interim monitoring guidelines and use the final adjusted P value of .045, say, to determine statistical significance in that trial?

If you would agree that .045 were acceptable then isn't there the opportunity for the DMC to consciously or subconsciously say well, the trial is leaning in the right direction, .02, .03, therefore I think we can appeal to the ethics of the situation and stop early? I mean isn't there the opportunity for that kind of a problem in this case of external data and maybe in some other cases of nondata functions of the DMC? So that has me somewhat concerned.
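For what it's worth, the trade-off behind those two P values can be checked directly by simulation. The sketch below, which is illustrative and not from the talk, estimates the overall two-sided type 1 error of a two-look design with an interim boundary around p = .001 and a final boundary around p = .045, using the usual Brownian-motion relationship between the interim and final z-statistics; the information fraction at the interim look is an assumption.

```python
import random

def overall_alpha(z_interim_crit=3.29, z_final_crit=2.00,
                  info_frac=0.5, n_sims=500_000):
    """Monte Carlo estimate of total two-sided type 1 error for a two-look
    design: stop if |Z1| >= z_interim_crit at the interim, otherwise reject
    at the end if |Zfinal| >= z_final_crit."""
    t = info_frac
    hits = 0
    for _ in range(n_sims):
        z1 = random.gauss(0, 1)
        if abs(z1) >= z_interim_crit:            # crossed the interim boundary
            hits += 1
            continue
        # the final z shares sqrt(t) of its variation with the interim z
        z_final = t ** 0.5 * z1 + (1 - t) ** 0.5 * random.gauss(0, 1)
        if abs(z_final) >= z_final_crit:
            hits += 1
    return hits / n_sims

# Spending roughly .001 at the interim leaves roughly .045 for the final
# look, keeping the overall error near (and below) the nominal .05:
print(overall_alpha())
```

The simulation illustrates why the final boundary drops to about .045 once an interim look has been taken, which is precisely the accounting question raised above.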

And just two other quick issues that I'll mention without giving an opinion on. One, I think we'd agree that DMCs should have access to the database when questions arise during the course of the trial, that they should be able to request additional analyses. And I think we would agree that anything within reason is acceptable. But are there any boundaries? That's the question I think we could have some discussion on. Does the DSMB have carte blanche to request any amount of resources from the sponsor or from the coordinating center or is there some kind of a limit there?

And another question, I think the document mentions that the DMC's responsibility is to protect patient safety, patients in the trial and patients yet to be randomized. Question: does that extend to future patients and does the DMC have any responsibility to protect potential future patients, not necessarily just those who would be part of the clinical trial?

DR. LEPAY: Thank you. At this time I think I'd like to open the discussion up to the audience and we can continue to pursue some of these topics with the panel in the course of this discussion. Again if people could step up to the microphone, we're recording this so please identify yourself.


OPEN PUBLIC DISCUSSION

MS. EMBLAD: I'm Ann Emblad from the Emis Corporation. I wanted to make a remark about the definition of the independence of a DMC. With respect to the provision that says a sponsor should not have access to event data by treatment, I think that applies pretty well to efficacy data, but I'm not sure it should always extend to safety data.

There are plenty of examples where these two things are intertwined. There are also examples where they aren't. One dear to my heart is eye disease, where a primary outcome would be vision and a safety outcome may be mortality, and I would contend that the sponsor has the ultimate responsibility for the patient's safety. Even if they delegate this to a CRO or to a DMC, if something goes wrong, the buck is going to stop with that sponsor.

So because these are guidelines, they will be quoted, and people will point to this definition of independence as the gold standard. I think there needs to be some softening of the language to allow, in cases where appropriate, that a sponsor may need and should have access to safety outcomes by treatment, not just in aggregate. Thank you.

DR. LEPAY: Any comment from the panel?

DR. FLEMING: Certainly in monitoring trials the sponsor, the regulatory authorities, the investigators, caregivers, and patients are all very concerned about the best interests of patients, both those in the trial and future patients, and those concerns are more globally reflected by what I would call benefit-to-risk, which is made up of both the relative efficacy profile and the relative safety profile.

There have been extensive discussions within this briefing document draft, as well as elsewhere, that broad access to such emerging data on benefit-to-risk can be very detrimental to overall integrity and credibility of the trial and providing access to one domain of that, i.e., the risk component, is certainly providing important insights about overall benefit-to-risk.

You also mentioned mortality. Well, mortality could be an integral part of the efficacy end point, as well. So when you have access to relative safety data there are certainly major concerns about whether that could lead to all of the issues of concern that have been articulated in the briefing document draft.

DR. SHOULSON: Just one brief comment. I actually think the ultimate responsibility for the welfare of research participants is the investigator's. The contract is actually made at that level, and that is where the enduring responsibility lies. The buck may start and stop with the sponsor, but--and, as I said, this document is focused on the sponsor--I think we really have to be mindful of the agreement made between the investigator and the research participant under the oversight of the IRB.

MR. BLUMENSTEIN: I'd like to raise two issues.

DR. LEPAY: Please identify yourself.

MR. BLUMENSTEIN: I'm Brent Blumenstein. I'm a group statistician for the American College of Surgeons Oncology Group.

I'd like to raise two somewhat related issues. The first has to do with the confidentiality agreement that the data and safety monitoring committee has with the sponsor, in light of the potential for the sponsor to act in opposition to the recommendations of the data and safety monitoring committee. And the second relates to when the role of the data monitoring committee ends. Those two things are related because there are issues about the representation of results that could extend beyond the time when the results of the trial become known and are published in public forums or in the peer-reviewed literature.

The ultimate judge of the data in an industry-sponsored trial, of course, is the FDA, and the FDA gets a chance to look at and scrutinize the data, but in the meanwhile a lot of things can be done to represent the results of the data in ways that could be contrary to what the data monitoring committee is recommending.

I'd like to see some discussion of the possibility of a recommendation in these guidelines to give the data and safety monitoring committee a kind of safety valve. In this case my suggestion is that if they're in strong disagreement with the sponsor, they be able to bring the disagreement to the FDA, and that this would become part of a charter for data monitoring committees.

DR. LEPAY: Thank you. Any comments from the panel?

DR. SHOULSON: One thing is that the confidentiality agreement between the DMC members and the sponsor should not extend beyond the point that the data are analyzed--oftentimes these confidentiality agreements may extend 10 or 20 years beyond that. When the data become available, either the DMC as a whole or individual members of the DMC should be free to talk about them. And, of course, they should have the minutes available to document their proceedings.

DR. SIEGEL: I wanted to comment regarding the remark about DMCs being able to bring disagreements to the FDA. The guidance does state that if a data monitoring committee makes a recommendation for a trial change based on safety concerns, then even if the sponsor does not act on those concerns--and it uses the wording from our regulations here--the fact that the recommendation raises safety concerns of a nature that would normally require reporting means that the sponsor must, by regulation, tell us within 15 days of that recommendation and its basis, and presumably their reason for not following it.

So that may help address some of those issues. We steered clear of any guidance suggesting any type of direct communication between data monitoring committees and the FDA. However, we have in certain rare instances been contacted by monitoring committees, and in other instances contacted monitoring committees. Those are rare. When it's happened it has largely, I think, been useful, but it's not something that we've specifically addressed or recommended, and I don't think we have enough experience to draw general rules.

DR. LEPAY: Dr. Fleming?

DR. FLEMING: I think, Jay, if I'm interpreting Brent's comments, essentially he's stating concerns about confidentiality agreements that DMC members may have, and provisions in DMC charters, that would preclude even the option a DMC might have, in the case of particularly serious ethical concerns, of conveying those concerns directly to the FDA.

My sense is it would be very rare when that would occur but I think if I'm interpreting his comment, he's concerned about that not even being allowed in those rare cases.

MR. DIXON: Dennis Dixon from the National Institute of Allergy and Infectious Diseases.

I want to raise a question about something that Mary introduced in her presentation and then we heard about later, and that is the production of detailed minutes of the DMC meetings. In the guidance, the proposed guidance, there's even discussion that there should be sort of open and closed portions of those minutes.

For the DSMBs--DMCs--that our institute has worked with, and that some of today's speakers are fairly familiar with, we have never kept such minutes. We produce written recommendations, a summary of the DMC recommendations, which are then conveyed to the steering committees and in some cases to the local IRBs. But there's been no production of written detailed records of the nature described in the guidance that would be held confidentially until sometime afterwards. When it's come up in the discussions, it seems as if it's obvious to the speaker, or in the document, why these are needed, and I wonder if those reasons could be shared.

I know that it is a substantial amount of work even to get consensus agreement on the written form of the actual recommendations, which for any one study is less than one page. And the notion that we would produce detailed minutes that would then have to be circulated and get agreement by the members of the committees is daunting, especially if very few people are even in the closed sessions so that somebody on the committee would actually have to be taking these notes and producing these minutes.

DR. LEPAY: Mary?

DR. FOULKES: I'd like to address two words that you mentioned, Dennis--detailed and daunting. We don't intend to recommend something excessively detailed and certainly not excessively daunting but I know you and I have both seen minutes that are exceedingly terse. One of our panelists at one point in his life suggested that those terse reports out of the data monitoring committees should say "We met, we saw, we continue," and that's it. I hope I'm quoting him accurately. Am I?

I think that's probably a little too minimalist but there has to be something in between.

Okay, why? We've heard that at the end of a trial a lot of information is made available, both to the sponsor and to the FDA; we've also heard discussions of the need for training, and so forth. In all of those contexts the entire process needs to be more visible than it has been during the closed and blinded period. There has to be some understanding and appreciation, particularly when a new drug or biologic or device is being evaluated, of how we got there.

So basically that's--and there has to be something in between nothing and excessively detailed.

DR. FOST: Dennis, I would just say it's not uncommon that there are very contentious discussions about very important issues that don't lead, at that time, to a conclusion to bring to the attention of the steering committee. But if X happens or Y happens, or depending on the response to an inquiry, we might change our view. Or we might decide that at the next meeting we want to look at this very carefully again, and come the next meeting, we've all got our memories, and everyone might disagree as to what it was we said we were going to do. It seems to me there needs to be some internal record of these very complicated discussions that nobody can remember six months later.

DR. FRIEDMAN: If I can make a plea for something that is not done often enough--Dave DeMets has done it a fair amount, and a few others--that is, after a study's over there ought to be a report, a publication of the interesting issues, so we can all learn from what went on in these studies. I don't mean airing dirty laundry, but saying how certain kinds of decisions, difficult decisions, were made. I think that will get at some of the educational aspects. Unfortunately there are very few such publications.

DR. FLEMING: Just very briefly, I think, Dennis, clearly what you've referred to--the recommendations--is a very important element of the minutes, and there's no controversy about that.

I've been very impressed, in interacting in a wide range of industry-sponsored settings, that in those settings sponsors have been very consistent in ensuring that a process is in place to have documentation for open and closed sessions. It's not extensive, as Mary says, but it's the essence of what happened, a few pages. Someone is designated with that responsibility. It's very helpful to the committee, and it is very helpful to the sponsors when the study is over to be able to have access to what actually happened. And I believe the FDA should have access to that thinking as well.

DR. LEPAY: Thank you. In the back?

MR. BRYANT: My name is John Bryant. I'm the group statistician at NSABP and probably my remarks should be interpreted in that light in that I feel that I have some understanding of the cooperative group process and perhaps less so of industry-sponsored trials.

Nevertheless, I think this guidance, however it turns out, will have profound implications for the U.S. cooperative cancer groups. Most of the studies, as I'm sure you all know, that we conduct do have registration implications, at least potentially, so we're clearly interested in this guidance.

I heard it said earlier today that statisticians are a self-effacing lot and perhaps that's one of our big problems and I guess I'll attempt to dispel that notion a little bit here.

The first point that I'd like to, I guess, take some exception to is that the guidance is pretty clear that it's not intended to be prescriptive but rather is supposed to describe generally acceptable models. And I would argue that at least in some respects the document is extremely prescriptive, and I'd like to read maybe two sentences. "The integrity of the trial is best protected when the statistician preparing unblinded data for the DMC is external to the sponsor. And in any case, the statistician should have no responsibility for the management of the trial and should have minimal contact with those who have such involvement."

Now one can, I think, reasonably agree or disagree with those statements, but it's fairly clear, at least to me, that they're highly prescriptive statements. And I believe that if it's the intent of the drafters of this document to actually describe generally acceptable models and not to be prescriptive, then perhaps some change in tone and perhaps in substance should be contemplated.

It's probably fairly clear that I do personally have considerable concern with the notion that a cooperative group data coordinating center, in essence, be blinded not only to efficacy data but also, at least in some degree, to safety data. And I'd like to reinforce what I at least think I've heard said by my friend Joe Constantino and Larry Friedman and Tom Lewis.

Some good arguments have been made here for blinding the statistician or the coordinating center to efficacy aspects of the trial and having results presented to the data monitoring committee through an independent statistician. Ultimately, though, I think there are some real downsides to that, which have been articulated by others, and I think that this document, in order to do what it's supposed to do--i.e., describe generally acceptable models--needs to pay some attention to the real downsides of having data presented to a DMC by someone who ultimately is not very familiar with those data.

I have some experience in these matters. I've presented data for the NSABP for years to our data monitoring committees. I've sat on data monitoring committees both as, shall we say, nonparticipating statistician and I've also participated on data monitoring committees where, in fact, I have been the statistician who actually did the interim analysis. So I have some familiarity with these matters.

I have the highest respect for everybody I've served on data monitoring committees with. They're clearly a very highly functioning group. But I guess the bottom line is that the people who really know the trial best are within the cooperative groups who run those trials. If it is not our mission to objectively compare treatments in the U.S. cooperative groups, then I simply don't know what our mission is.

Now it may be that more attention does need to be paid to the degree to which the interim analysis statistician and the trial management statistician have to be separated in some sense. That's a good point that needs to be thought about. But I think the idea of trying to divorce the day-to-day monitoring of a clinical trial, at least in cancer, from the data coordinating center is extremely dangerous. I think it will lead to diminished safety of participants, and this is something the guidance has to address. It doesn't address any of the downsides of divorcing the data coordinating center from the day-to-day conduct of the trial, and I think it needs to do that.

DR. LEPAY: Thank you.

DR. SIEGEL: Those comments are certainly appreciated. I would perhaps clarify a point or two. Nowhere does the document endorse the notion that the statistician who presents the data to the committee should be someone who is not familiar with the data, not receiving the adverse event reports on a day-to-day basis, not very familiar with the trial and its protocol issues, as was implied or stated by a couple of comments, including earlier comments. It simply states that that person ought not to be in the employ of the sponsor or, if in the employ of the sponsor, ought to be completely separated from any role in trial management, and then points out how difficult such a separation can be and, in some cases, perhaps not feasible.

The only other comment I would make--because the issue of objectivity and of the coordinating centers being objective was raised, and because Dr. Friedman's comments raised the issue of NIH approaches and some differences between government- and industry-sponsored trials--is that a significant part of our concern here, as exemplified by the examples I gave, one of which involved the NIH, is not an issue of objectivity; it's an issue of how knowledge of the data can bias your ability to manage a trial.

I pointed out in my fourth example the rather considerable efforts the FDA makes in many of these cases to keep ourselves blinded to the trial. We consider ourselves quite objective but feel that once we know the interim data of the trial, when a sponsor comes to us and wants to make protocol changes and needs our approval to make them, we're going to be in a very compromised position.

So it's not because we're not objective but simply because we have that knowledge. So it's important to recognize that we're not impugning anybody's objectivity in any situation here, just trying to make people cognizant of concerns.

One final quick comment about that. It has to do with the issue of prescriptiveness and whether this guidance is prescriptive or not.

In regulatory parlance, which I'm sure many of you are not familiar with, if we say something should be done, we consider that nonprescriptive, though it may not always be read that way. So the quote that was read said the statistician should have no responsibility for the management of the trial. That is a nonprescriptive statement.

If we write a regulation, we don't use that word. We say the statistician must have no responsibility. In that case, if you do it, you can get in trouble, even if you have the world's best reasons. If we say they should have no responsibility, what we're saying is that here are all the reasons why they shouldn't, and we think in general they shouldn't; but, in fact, there may be in specific cases even more compelling reasons why they should, and that can be quite acceptable. And if you're willing to bear the risks to the trial that this talks about, and to take those approaches and try to minimize those concerns, those are considerations.

That's why this is a guidance. Perhaps we can make that a little bit clearer. It's not intended to be prescriptive in the sense we think of being prescriptive, which is to say you don't do it this way and you're automatically in trouble. It simply says this is a way that we believe is consistent with our regulations and a good way to do it. However, there are other ways. If you choose to do it other ways, you ought to have a good reason for showing why and how those are consistent with regulatory requirements.

DR. LEPAY: Dr. Fleming?

DR. FLEMING: Just briefly, certainly it's extremely complex and controversial as to how you optimize these goods. One good is knowledgeable oversight and the other good is independence, to achieve maximal integrity and credibility. And no one, I believe, is advocating that we give up knowledge for independence. What we're talking about is ensuring that the individuals who are monitoring trials are knowledgeable.

I'm director of a stat center so I have the hat on frequently of turning our studies over for monitoring by an independent committee. I don't believe that because I'm the lead statistician on a trial that I'm the only one who can be highly knowledgeable about issues that are extremely important in the monitoring of that trial.

Clearly the people we have on monitoring committees and the liaison statisticians must be chosen to be very knowledgeable people but we also augment that insight that they have by open sessions, as are advocated here in the guidance document. Open sessions allow for further sharing of insights by those individuals who have unique insights who aren't also members of the data monitoring committee.

So the entire structure is intended to achieve this balance between knowledgeable oversight and independent oversight.

DR. LEPAY: This is an important issue. Dr. Fost?

DR. FOST: Jay, with all respect, we're in the middle of a six- or seven-year period in which OHRP has been issuing guidance documents of incredible detail--not regulations, arguably not even required by the regulations--about which there's terrible disagreement. And, as you know, major institutions have been shut down for months at a time, not for deaths, not for adverse events, but because of failure to comply with guidance documents. Which is not to say that--

DR. SIEGEL: Not by the FDA.

DR. FOST: Not to say that the FDA would ever do such a thing.

DR. SIEGEL: We wouldn't.

DR. FOST: Well, with all respect again, there have been instances from the FDA. Stanford some years ago was almost threatened with a shutdown because of things its IRB was doing. I mean, it got very stern letters from the FDA that, as I was saying--

DR. SIEGEL: Oh, we'll shut down trials, sure, but not for noncompliance with guidance documents. Noncompliance with regulations.

DR. FOST: As an IRB member and as any dean of a research center, to not comply with guidance from a federal agency these days is to risk having your entire university shut down for months.

MR. CANNER: Joel Canner, statistician with the FDA practice group at Hogan & Hartson in Washington.

I applaud the FDA for the very detailed and comprehensive description of the form and function of DMCs, but I'm trying to figure out how to apply this to the companies that I work with, which are by and large small device manufacturers. These companies typically do small studies that may or may not be controlled, may or may not be randomized, concurrently controlled, and so forth; often it's not even possible to single-blind them, let alone double- or triple-blind. There are often cost constraints, and companies typically manage their own trials without the help of an outside CRO or other agency.

All that having been said, many companies of their own volition decide that they need a DMC or perhaps the FDA insists on it and the question is in establishing a DMC do these companies in these situations need to buy into all the many detailed aspects of this guidance or is there a sort of DMC lite for these trials that don't fit the large multi-center long-term heavy duty trials that the pharmaceutical industry engages in?

DR. LEPAY: Excellent.

DR. CAMPBELL: I'm Greg Campbell from CDRH.

I think you raise a very important question and one of the things I did not mention this morning which perhaps I should have are questions about when a DMC may not be mandated or may not be recommended and there are certainly lots of examples that you and I can come up with where the trials are small, where the length of time is short. I mean if you can go down the list of all the questions that I posed this morning there are lots of situations where it's not clear that a data monitoring committee, in and of itself, adds a lot of value to the trial.

Having said all that, there are still some advantages that companies might see in having a data monitoring committee, especially having to do with being able to look at the data on an interim basis and perhaps stop early for reasons having to do with effectiveness or perhaps even safety.

Having said all that, I think that there are probably other models than the ones that are set forth in this document and this is guidance, it's only guidance and we don't want to discourage people or companies from coming to us with other ways of thinking about things.

DR. LEPAY: Thank you. We have about five more minutes and three people standing. I'd like to see if we can address those comments. There's another open discussion session at the end of the next panel.

MR. CONSTANTINO: Joe Constantino from the University of Pittsburgh and the NSABP and I'll just be very quick since I did speak this morning. I'm hearing from the panel things that I'm glad that I did come to hear because they're saying things which are not reflected in the document.

Dr. Fleming, I just heard you say there is a give and take between the drive for independence of a statistician and the safety. That really doesn't come across in the document. That might be the intent but it comes across very loud and clear that everything is for independence, that it's all one way.

Dr. Siegel, you said that you're not driving to say that the statistician has to be independent of the sponsor, has to be isolated. Your document doesn't say that. Your document says very specifically it is best that the statistician preparing the data be external to the sponsor. But I don't see how a cooperative group statistician--someone who has to be involved with the data day to day and who then transmits it to the data monitoring committee--could not be considered part of that sponsor, by the definition of what you're calling a sponsor.

So to me there's a conflict there. You have to be paid by somebody to be there day to day to see the data, and that's going to be the cooperative group, no matter how you look at it. You can say this guy has an office all by himself, maybe in a separate building, but that doesn't come through clearly. You say he has to be external to the sponsor, and I think some wording in the document to make it clear that there is a give and take, and that there are alternatives, is what's needed.

And just one last point, to reiterate how we are focusing on independence rather than on what we're really trying to do. Dr. Siegel, you gave three very good examples of things that should not happen in clinical trials. They had nothing to do with whether or not the statistician knew the treatment codes of the unblinded data. They were poor science and poor clinical trial design.

The first one was that there was no well-defined, up-front data analysis plan, and someone tried to change it in the middle of the trial. You don't do that. That's poor statistics. You don't do that.

The second one was dealing with changing end points in the middle of a trial. You can't have a primary hypothesis planned a priori before randomization if you change it in the middle of the trial. You don't change the end points. It's that simple. You can't do it. It's poor statistics. It has nothing to do with whether or not you know the unblinded data.

The last one was changing the sample size to increase the power. Again you can't change the primary hypothesis. It's based on some set power. You can't change it after the fact. You can increase sample size to maintain the power because perhaps your hazard rate wasn't what you thought it was going to be but you can't change the sample size to improve your power. Poor statistical design.

If you have an up-front, well designed and specified analytical plan, if you have an interim monitoring plan that's well specified up front, all those kinds of problems that you gave as examples go away.
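To make the arithmetic behind this distinction concrete, here is a minimal sketch in Python with purely hypothetical numbers; the Schoenfeld-style event formula, the hazard ratio, and the event rates are illustrative assumptions, not figures from any trial discussed here. Re-estimating enrollment from a blinded, pooled event rate keeps the prespecified power fixed; it never raises the power target after the fact.

```python
# Hypothetical illustration: adjusting enrollment to maintain a prespecified
# power when the blinded, pooled event rate is lower than assumed. Uses the
# standard Schoenfeld approximation for events in a two-arm survival trial.
import math
from scipy.stats import norm

def events_needed(hazard_ratio, alpha=0.05, power=0.90):
    """Events required to detect hazard_ratio with two-sided alpha."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 4 * (z_alpha + z_beta) ** 2 / math.log(hazard_ratio) ** 2

events = events_needed(0.75)   # about 508 events for 90% power
# If only 20% of patients have events by analysis time instead of the
# assumed 30%, more patients are needed to reach the same event count;
# the power target itself never changes.
print(round(events / 0.30), "patients under the original assumption")
print(round(events / 0.20), "patients after blinded re-estimation")
```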

DR. SIEGEL: I would just quickly say that in all of those examples sure, things might have been planned better but nonetheless, in those examples and in many examples we see, it simply is not true or correct to state that end points shouldn't be changed, sample sizes shouldn't be changed, trials shouldn't be changed.

Trials take a few years to conduct. Over the course of those few years other trials get completed with the same drug, you learn about the appropriate dosing of the drug, you learn new information about adverse events, you learn about competing drugs that need to be incorporated into the trial. There is an imperative, to protect patients and to do good science, to be able to change trials in mid-stream. It is part of good trial design and it is best, indeed it is only accomplished without bias if it's done by people who are not biased by knowledge of internal information.

Secondly, on the question you raised of balance, we need to look at the balance of the language in this document. I think the point is perhaps very well taken--it's certainly been taken by many people--that there isn't as much discussion about the need for the statisticians and others to be knowledgeable of the trial and its design. I would suggest that the reason it's not there is that we've seen several trials suffer regulatory failure because of these sorts of lapses in independence, and that's an important message to get out.

We can try to improve the balance, but I do want this audience to know--and I certainly appreciate the comment that we can say something's not binding and it often gets interpreted as being binding, but it is not binding--that right after the sentence you quoted, "The integrity of the trial is best protected when the statistician is external to the sponsor," there is a statement: "In any case, the statistician should have no responsibility for the management of the trial." That certainly acknowledges that they may be part of the sponsor but should not be responsible for management of the trial. The statement that they should not doesn't mean that they cannot; it means that they can, but if they do, as it says right at the beginning of the document, "The intent of this document is not to dictate the use of any particular approach but rather, to ensure wide awareness of the potential concerns that may arise in specific situations."

So there's not much more we can do to say that the document's purpose is to raise concerns and alert you to problems, and that it's not binding, than to write that in several places in the document. We can try to write it in a few more places; maybe that needs to be done. But that is, in fact, the intent, and that is, in fact, the way the document will be used.

No IRB will be shut down and no company will be shut down because the sponsor's statistician or the data center statistician was part of the monitoring committee. However, if that statistician was involved in proposals to change the trial, those proposals may not be looked favorably upon or the trial, if changed with knowledge of interim data, may be viewed as invalid. That's a reality; that's what this document is trying to alert you to.

DR. LEPAY: Dr. Fleming very quickly?

DR. FLEMING: I'll try to be real quick. Not all studies are confirmatory, but for those studies that are confirmatory, I'd like to be able to interpret them in that manner. It means, as the speaker was saying, I'd like to have a prespecified hypothesis that I then confirm.

At the same time, there can well be during the course of a long trial external information that could enlighten us as to what the hypothesis really ought to be. I actually don't have a problem if I'm certain that it's external data that leads to that refinement and this is the essence of where this independence and separation enables or empowers the sponsor to have that flexibility.

The other aspect is judgment is inevitably always going to be necessary. It's not unique to us here in monitoring committees that we want our judges to be independent, unbiased. That's true of any judge in any setting. So the concept of having an independent group of individuals who have sole access is simply our attempt to implement concepts that are widely recognized in many other areas.

DR. LEPAY: Thank you. Again I'd like to thank our panel and those participants from the audience. A round of applause.

[Applause.]

DR. LEPAY: And we have a 15-minute break scheduled. We'd like to convene promptly at 3:30 and we'll proceed to Bob Temple's talk.

[Recess.]


DR. LEPAY: Thank you very much. We'd like to move on to our last series of discussions, the final two sections of the guidance document and our third panel for the afternoon.

So to initiate the discussion I'd like to introduce Bob Temple, who's director of the Office of Drug Evaluation, one, and associate director for medical policy in the Center for Drugs. He's going to be providing us with information on Sections 5 and 7 of the guidance document.


DMCs AND OTHER REGULATORY REQUIREMENTS

DR. TEMPLE: Thanks, David. These are relatively short, not very detailed or very directive sections, so this will be fairly short and we'll have lots of time for questions.

Section 5 talks about data monitoring committees and regulatory reporting requirements. That'll be short because data monitoring committees mostly don't have regulatory reporting requirements. And sponsor interactions with FDA regarding DMCs. Then I'm going to add on a little extra topic, which you'll see when I get to it.

There are really two sections of part 5, one about safety reporting, one about expedited development. Under the heading of safety monitoring it's important to distinguish two kinds of adverse events or potential adverse events. One is the obvious thing--a patient dies of acute hepatic necrosis or has agranulocytosis or aplastic anemia, something like that. You don't need a data monitoring committee to interpret those events. They speak for themselves. In fact, if those were not known to be problems, the sponsor has to report such events within seven or 15 days. And in almost all cases the sponsor chooses to take responsibility for that on its own.

These are relatively obvious, easily recognizable, not part of the normal history of the disease. There should be very little confusion. If that's not true then that's another question.

They can be submitted to FDA blinded or unblinded, and some people like to keep them blinded, but I frankly have never understood that, so maybe that's something we can talk about. I don't see how an unblinded case of agranulocytosis interferes with the study. And, as I said, it's usually submitted by the sponsor.

The sponsor's responsibility to do that is so urgent that unless the data monitoring committee meets very often, they would violate the reporting rules if they put it through the data monitoring committee, so they usually do not.

It's worth noting, and the document notes this, that such serious, unexpected adverse events--that is, things not in the investigator's brochure--are reported to FDA and to all investigators, who then, under various other sections of the rules--not guidance, rules--have to report them to IRBs.

There are cases in which direct reporting to IRBs by the data monitoring committee or the sponsor have been arranged. For example, if there's a central IRB that's not a bad idea, but that's not required.

A second whole category of adverse events, and one much more appropriate for consideration by data monitoring committees, is events that are part of the disease process or relatively common in the study population. Heart attacks in a lipid-lowering trial, even if heart attacks aren't the end point, will be common in the population. It would be hard to look at a single event and know whether it meant anything or should be reported. The same goes for death in a cancer trial and other things that are either common or expected.

In this case it's very difficult to assess an individual event, and the data monitoring committee's role is crucial because you need to look at the rates and make some determination about whether the rates are worrisome or not. Those assessments therefore need to be made by a party that is neutral, that doesn't have a bias, because judgment's involved and we want our judgments to be unbiased.

This almost always would include events that are the study end point--that's sort of obvious--but also other serious events that are relatively common in the population. And sometimes what you have is a greater than expected rate of a recognized adverse consequence of the drug--for example, bleeding with a IIb/IIIa inhibitor. The rate might be higher than you expected, even though you knew that there would be some.
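As a rough illustration of the rate-based judgment described above, here is a minimal sketch with hypothetical numbers; the 5 percent expected bleeding rate, the counts, and the choice of an exact binomial test are all assumptions for the example, not anything drawn from the guidance.

```python
# Hypothetical numbers: is bleeding occurring more often than the rate
# assumed in the protocol? An exact one-sided binomial test is one simple
# way a reviewer might frame the question.
from scipy.stats import binomtest

expected_rate = 0.05     # bleeding rate anticipated for this population
observed_events = 19     # bleeds seen so far on the active arm
patients_at_risk = 200

result = binomtest(observed_events, patients_at_risk, expected_rate,
                   alternative="greater")
print(f"observed {observed_events / patients_at_risk:.1%} vs "
      f"expected {expected_rate:.0%}, p = {result.pvalue:.4f}")
# A small p-value flags a rate worth the committee's attention; a single
# case report, viewed alone, could never show this.
```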

The document notes that this is sort of an opinion about a regulation but it's only guidance.

A data monitoring committee request for a safety-related change in a protocol, such as lowering the dose to avoid toxicity or change in the consent form to warn of an emerging safety concern would be interpreted by us as a serious unexpected event and therefore reportable to the FDA by the sponsor or by the data monitoring committee if they've made that arrangement. So these are obviously important; it's a relatively unusual thing.

The second reporting topic that's described is expedited development. This, as anyone who reads it will note, is a somewhat vague section, because this doesn't happen very often, we're not too sure what the track record tells you, and in general FDA interaction with DMCs is not a thing we try to promote: they're supposed to be independent, and for various reasons it's potentially a problem for us.

However, we do note that where we're really interested in a serious and bad disease we may be more than usually involved with the progress of trials. Therefore if any interaction with the data monitoring committee is anticipated it's very important to try to dope those out ahead of time.

Again we expect that FDA access to unblinded data is going to be a very unusual thing. First of all, as has been touched on, knowing interim results would keep us from advising independently on changes in the protocol, just as a sponsor would be unable to do that if the sponsor knew the data, and I would say just as a DMC would be unable to do that if the DMC knew the data.

The other reason we're careful about learning early results is you can get a sort of public health tension in either direction. You know, we're the government; maybe we should stop this awful thing. We believe we know of at least one example of where a study was stopped probably prematurely because we got nervous and we'd rather not be exposed to that. That's why they pay the data monitoring committee members all that money.

There's also a potential for a very damaging premature judgment. That is, if we tell a company oh yeah, you've got to stop now, and then we look at the data more closely and half of the cases turn out not to be really heart attacks or something, we're in a very difficult position when it comes to reviewing the data.

So for all those reasons we generally don't like to do it but there have been cases where we did. We were reviewing a drug for adjuvant breast cancer chemotherapy and it showed clearly superior response rate and time to progression. We wanted to know before we approved it that at least the mortality wasn't worse. The mortality results weren't mature yet; they were still under development. And we were able to work with the chair of the data monitoring committee and receive assurance that it at least wasn't going the wrong way. That may seem small but it was a big step to us. We worried about it a lot.

This is a very odd, recent case. A sponsor wanted to consult us on whether to make the primary analysis the whole group under study or a subset of the group that was started somewhat later with an additional treatment. They'd actually been advised by their DMC that they should look at the latter. We thought the DMC had full knowledge of all the study results, both for the subgroup and the total; but today's been a learning experience, and they, in fact, did not at the time they gave the advice. But in seeking the advice--and this isn't the company's fault; it's because we asked for it--we eventually obtained the data that had been presented to the data monitoring committee, which showed the results using the whole study group or the subset, and the company's now coming in to ask us which they should do.

Well, of course, we couldn't tell them. We were contaminated. So obviously they hadn't thought about it, and for sure we hadn't thought about it, but it does turn out the DMC had thought about it, even though at the time I wrote the slide I didn't know that.

So there are major disadvantages and care needs to be given when we see interim results. It really restricts us.

But, of course, just to add to that, and I forget whether this is on a later slide or not, we will--oh, yeah, this comes up again.

Now a somewhat overlapping question is sponsor interactions with the FDA regarding how to set up a DMC. It would probably be very useful to discuss data monitoring committees with us but I have to say that it's not common to have those discussions with one exception, and the exception really isn't about the data monitoring committee; it's about stopping rules, which, strictly speaking, is about the protocol, not the data monitoring committee.

But what we could consult on is planning the data monitoring committee: what its role is going to be, who's going to be responsible for what kinds of adverse reactions. We might comment on the members, although we don't like to identify particular individuals--that makes us nervous--but we might talk about widening the membership to include someone from South America, or whatever seemed necessary, or bona fide, well-trained, properly constituted ethicists.

So those are things we do think about and it would be worth discussing those matters. Probably in some cases we'd tell people that we didn't think they needed one, which might save people trouble, too.

We are very interested, as has been discussed repeatedly now, with how the group performing the interim analysis would be protected from other parts of the sponsor. I won't go into that further but obviously it's a point of great interest, however it gets resolved. And we'd certainly be interested in participation of the sponsor at meetings. Again as has been discussed at length, we didn't try to set a rule but we did note that certain things are potential problems.

And, of course, there's been some discussion of this. I guess I think interim analysis plans or stopping rules are something that should be developed by the sponsor and presented to the data monitoring committee, who can then respond with "This is stupid," or something like that, but it's basically part of the protocol. At least that's what I think.

Any intent by the sponsor to access interim data is a major step and should certainly be discussed with FDA in advance. The one case where this will be expected, of course, is in association with a recommendation by a data monitoring committee to terminate a study. At that point the reasons have to be given and the sponsor will see the data.

A recommendation to terminate a study for success puts the sponsor in a difficult place. First of all, they like the idea and hope that we will, too, but sometimes you pay a price for these things and we would certainly want to at least think about the adequacy of the safety data, whether the study has been stopped so quickly that we don't really know what we needed to know about the duration of benefit, whether we're uninformed about critical subgroups or whether there are funny things in there that are a problem. And, of course, you often don't know much about secondary end points.

The trouble is it's hard to do all that with a proposal to terminate the study in hand and all of those things should have been considered earlier, if possible. We often, for example, recommend that studies not be stopped except for survival or some other major event kind of benefit because you end up with a tremendous loss of data and a less convincing protocol. So those are all good things to discuss before the committee launches a recommendation at you.

Of course, if there's a recommendation to terminate a study for safety, that would always require an FDA submission. There would obviously be implications regarding on-going studies and we'd certainly hear about all that.

There are lots of things a data monitoring committee could recommend in the way of protocol changes and some of those would have little implication with respect to approval but some of them would. Changes in end points could lead to an end point that was no longer considered reasonable. Changes in permitted concomitant medications or in dose or schedule could cause problems in interpretation. I don't have examples of those but they could.

But most important--and I don't think it's emphasized enough in the draft--the unblinded data monitoring committee really can't credibly change end points, sample size, subset plans or anything else, any more than an unblinded sponsor could, without at a minimum affecting alpha or introducing bias that we don't know how to correct. That probably needs some discussion.
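A small simulation can show the alpha problem being named here. Under purely hypothetical assumptions--two endpoints, no true effect on either, and independence for simplicity--letting an unblinded party switch the primary endpoint to whichever looks better roughly doubles the false positive rate.

```python
# Simulate two null endpoints per trial and compare a prespecified analysis
# with an after-the-fact switch to the better-looking endpoint.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100_000
# Under the null hypothesis each endpoint's z-statistic is standard normal.
z = rng.standard_normal((n_trials, 2))        # columns: endpoint A, endpoint B

prespecified = (z[:, 0] > 1.645).mean()       # endpoint fixed in advance
switched = (z.max(axis=1) > 1.645).mean()     # pick whichever looks better
print(f"prespecified: {prespecified:.3f}, switched: {switched:.3f}")
# Roughly 0.05 versus 0.10: the nominal one-sided error rate doubles.
```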

Okay, now for something completely different. Sections 4.15 and 4.42 refer very briefly to a possible different kind of data monitoring committee, and some of the discussion today has gone in this direction. Even though these things have existed for a long time, the first time I heard anybody talk about it at length was at a meeting at Duke that Rob Califf had set up, and someone from Lilly said oh, we set up data monitoring committees to look at our whole program. We get wise heads together, people from outside not so invested in a particular approach, and we find that very useful.

So this sort of thing, which one might call DMC type 2, isn't developed to monitor a single large trial but rather, to observe an entire developing database, obviously looking at safety across the whole database but also thinking about how to design the new studies, whether special monitoring ought to be introduced to worry about something, whether there ought to be special tests, and even to look at potential advantages or disadvantages that might be explored in studies.

This differs in a lot of ways from the more usual type. First of all, I think the principal expertise is in many cases clinical here and that's different because despite their modesty, we know that biostatisticians are incredibly crucial to the data monitoring committees of the other kind.

I believe you could say that complete independence from the sponsor is not as critical here. We're talking about descriptive things. It's perfectly reasonable for them to argue with each other. You don't really have to be blind to think about what the next study ought to do or whether you should design it differently. But it does seem particularly useful to have a strong external element, first of all, to obtain additional expertise if you need it but also some needed freedom from past obligations and assumptions, a little independence of judgment.

As I said, this focus is on the whole database, not on single trials. It's especially helpful in a high-risk population where looking at a bunch of trials may start to reveal things that are not obvious from a single trial. Our past model for this might be FIAU but there are many cases where things sneak up on you that aren't obvious.

Such a group could pay attention to developing effects and subsets so that instead of being dismissed at the time of approval they'd actually be studied and there'd be real data on them because somebody planned a test for them. So there are a lot of opportunities.

It is worth noting that this whole idea would work best in a situation of what might be called rational drug development, where one study informs and modifies later studies. That is the way people sort of used to do it but it's uncommon now to see that sort of leisurely pace of drug development. What you see much more commonly now is a couple of phase II studies to make you think there's a drug and then phase III all at once.

So the burden there, since you don't get to learn from the results of one study in planning another, is to try to build all the variety into phase III that you can, and I would not say that's commonly done. But an outside advisory committee, thinking broadly about this along with the company, could think about studying a wide range of severities, could be sure that they're looking at the appropriate dose and dose interval, looking at appropriate combinations with other drugs, making sure that an adequate duration of trials has gone on, thinking about randomized withdrawal studies. The whole idea is that not just the company alone but the company with some help would be thinking about the whole development program.

Section 4.42, about early studies, proposes something not so different from that but for a special case: the case where there are high-risk drugs and where the investigator has a potential conflict of interest. In that case a data monitoring committee, or even a data monitoring person, as I think someone said, may enhance the credibility of these efforts, especially when there are important ethical dilemmas involved.

It's just worth making one last point. There's a tendency to try to get perceived problems in an environment addressed by the groups that seem to be functioning well, so there's a certain tendency to want data monitoring committees--and also, to some extent, FDA, I have to say--to solve all the problems, because they seem to be able to do their jobs pretty well.

Well, that doesn't work. You won't learn about an important adverse effect unless the investigator reports it. It won't go to an IRB, it won't go to a data monitoring committee, it won't go to FDA unless someone recognizes that coughing for a week isn't an intercurrent illness but is a response to an inhaled drug. So a canny investigator, a well trained investigator, can't be substituted for by a data monitoring committee. Having said that though, an external person could help an alert investigator interpret what he or she saw and might be useful.

So that's the end of my advert.

DR. LEPAY: Thank you very much. I'm going to invite our last set of panelists to come up if they would and our AV people again to help terminate the slide presentation here.

I'd like to introduce the members of our panel. Michaele Christian, who's associate director of the Cancer Therapy Evaluation Program at the National Cancer Institute of the NIH. Dr. Robert Califf, who's associate vice chancellor for clinical research and director of the Duke Clinical Research Institute, professor of cardiology in the Department of Medicine at Duke University. Dr. David DeMets, professor and chair, Department of Biostatistics and Medical Informatics at the University of Wisconsin. Dr. Bob Levine, professor in the Department of Medicine and lecturer in pharmacology at Yale University School of Medicine, and author of the book "Ethics and Regulation of Clinical Research." And Dr. David Stump, senior vice president for drug development at Human Genome Sciences, Incorporated.

And again I'd like to use the same format we've had throughout the day and ask if Dr. Christian would like to begin by making a few remarks.

DR. CHRISTIAN: I have to confess that I arrived late because I had some competition so I wasn't familiar with the format but I do have a few remarks.

I wanted to point out some areas that I think probably merit some additional discussion, and I want to put this in context: the Cancer Institute sponsors over 150 phase III trials at any given time, so we have a large number of trials on-going, and our collaborating sponsors, if you will--the multi-site, large cooperative groups that do these studies--may have 20 phase III trials on-going at any one time.

So the model that we've used for data safety monitoring boards for all of our phase III trials for many years is that each group has a data safety monitoring board which oversees all of these trials. So it's a little bit different from the flavor that I got from the guidance, which was that it dealt primarily with DSMBs for large single trials, and I think that's probably something that one might want to comment on in thinking about this.

So that has some practical implications and while clearly our DSMBs follow most of the principles outlined here there are some significant differences. And I think that we need to think a little bit about not creating excessive burdens for DSMB members that are already covered by other reviewing bodies. For example, there are suggestions that protocols and consents and analytic plans and other aspects of protocols be reviewed before studies are initiated by DSMBs and I think that actually bears some discussion.

At any rate, another issue that I think is important here is that there was, for us, some confusion about the role of the DSMB versus the IRB, the institutional review board. And again I think part of that related to this issue of initial review of the consent, the protocol, et cetera. So there's some confusion, I think, about the relative responsibilities of those two bodies, both of which have patient protection as a primary focus.

Another area that I think could stand some clarification is the role of the FDA for non-IND phase III studies. We sponsor quite a few important phase III studies that are monitored by DSMBs but are not done under INDs, so the role of the FDA and the advice and guidance for some of those, I think, is important.

You're laughing, Bob. There are some appropriately done that way, I think.

Finally, I think an area that probably also bears some additional discussion is the responsibility for toxicity evaluation. This is pretty complicated: DSMBs, of course, usually meet every six months or so, so the responsibility for on-going toxicity monitoring rests with the study team, and the team's potential need to see comparative toxicity data in order to exercise that responsibility carefully is something that bears further discussion.

And similarly, the sponsor's ability to put comparative toxicities in the context of a larger toxicity experience and database is an important issue. I think sponsors are well positioned to monitor safety in an on-going way. So those are the major points I wanted to bring out.

DR. LEPAY: Thank you.

Dr. Califf?

DR. CALIFF: I guess I'll play my usual role and just take a few potshots at everybody here to see if it raises discussion.

First of all, I will say I think this document is a major step forward, interpreted in the right light, which is that it is a set of recommendations with which anyone could logically disagree on individual points and come up with better ways of doing things. So unless it's written down and generates discussion, we're not making progress, and I'm really glad to see this being done.

I'll just start with our federal friends. In general I would characterize the current environment as federal chaos and widespread panic. The federal chaos is that we don't get the same guidance from the FDA, the OHRP, the NIH and the IRBs in their interpretations. And as Ira Shoulson said, at the most fundamental level a human experiment is a contract between a patient and either a doctor or someone else who's providing medical care. The widespread panic is coming from our IRBs, which are responding to the federal threat of institutions being shut down by going to the most onerous common denominator.

So the agency that has the most onerous demands is going to win out in terms of what gets done and it's dramatically increasing the cost of clinical research and slowing it down in the U.S., which I would argue is not good for patients.

So the good news about the emphasis on protection of human subjects, the interaction with the FDA and others, is that more money is being spent on protecting human subjects. The bad news is that probably most of it is being spent on the wrong things, and I know a lot of people on the panel agree with that assessment. What to do about it is a different issue.

Secondly, we have a real international problem which I don't think has been addressed here, which is that FDA and the European regulators and the Japanese regulators don't agree, particularly on issues of adverse events and how to deal with them. And for those of us who do large international trials, there are really major problems that arise because you can reach a great agreement with the FDA, for example, on a more streamlined approach to a clinical trial, and then it becomes the most onerous country that rules the day. So if Germany says you've got to have every adverse event reported in real time no matter what it costs, then that's what companies have to do and the associated investigators.

So despite all the efforts at harmonization, this is an area that needs considerable work in terms of the interaction.

Third, I'll just take on the company regulatory groups and pharmacovigilance groups, which everyone is scared to death of, because a word from them inside a company and it's a major problem. I think there is a need for a better--I don't know how to do this--but a better dialogue between the good intentions at the FDA in particular and the regulatory groups. It seems to me that it's hard for that to happen because of the interactions that can lead to negative repercussions at times.

So this relates to data monitoring committees because there is a sort of semi-independent activity that's been referred to of adding up and calculating adverse events. Let's face it; at least in large clinical outcome trials if you've added up the adverse events you often have the answer to the trial in real time and I don't know of any way to get around this except devising rules which have the adverse events go through an independent organization. And yet, as was pointed out by a questioner already today, if the ultimate responsibility lies with the company, we have some guidance here which may be in a bit of conflict.

Then finally, I'll get on the NIH for not investing enough money in studying how clinical trials should be done. Despite the fact that we do them all the time, we're still left mostly with people's opinions based on anecdotal experience, when there's enough empirical evidence now about a lot of what should work and what shouldn't. If just a little bit of funding, relative to what goes into other things at the NIH, went into studying how to do it better, I think we would do better.

Now as relates to this complex interaction, just an observation I'd have is that there seem to be three views of what clinical trials are. The one that we're most afraid of, I think those who do it professionally and have studied it, is the so-called engineering approach, which seems to be rampant mostly in company executives and sometimes in people at the NIH who want a public health answer to come out a particular way.

What I mean by engineering is the goal is to get a result in the trial and the purpose of monitoring is to steer the trial to get the result that you need. Although people may deny this happens, my experience is it frequently happens and part of what we're trying to do is protect against that.

The second would be to regard the trial as an inanimate, immutable object, and that was brought up by someone already today--that you're stuck with what you started with. That actually would take care of almost all the problems we've discussed today if you did it that way, but I would agree with Jay that it just brings up a whole new set of problems: you can't ignore external evidence and things that change. So I would advocate that a trial is a living organism that has to be nurtured and fed and requires a lot of judgment. It can be changed, but it has to have a set of rules that everyone agrees to, and I think this document is a good start in that direction.

So I've taken a few potshots. Hopefully Dave, as usual, can straighten out the things I've said.

DR. LEPAY: Thank you.

Dr. DeMets?

DR. DeMETS: I've been trying to straighten out Dr. Califf for years but I haven't succeeded.

I think that this document is a step forward, as Rob said. I think the Greenberg Committee would be very proud of where we are but they might wonder why it took us 35 years to get here. Nevertheless, I think it's a major step and it will be a living document which will change over time.

Over the course of today I wrote down a few things that struck me as issues that I just wanted to comment on. When I look at a data monitoring committee I think it has several priorities. One is to the patient, two is to the investigator. At some distance--there's a gap--the next would be the sponsor and lastly would be the FDA.

If you're looking at a trial whose outcome is not mortality or some other major irreversible event--say, hospitalization rather than death--and at the halfway point you see a 5 standard error result, you've met the contract that you have with the patient. What concern, if any, should the monitoring committee have about the regulatory implications of terminating that trial early? I don't know, but I think it's a tension that arises in many trials, and it seems that the answer lies somewhere in what the informed consent says about that kind of situation. So I think we need some guidance about those cases, because they do happen.
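For a sense of scale, here is a brief sketch of why a 5 standard error result at half the information is so decisive: even the conservative O'Brien-Fleming boundary demands far less at that point. The two-look constant below is a standard tabulated value, used here only for illustration.

```python
# O'Brien-Fleming-type boundaries shrink as information accrues: the
# critical value at information fraction t is c / sqrt(t).
import math

c = 1.977   # tabulated O'Brien-Fleming constant, 2 looks, two-sided 5%
for t in (0.5, 1.0):
    print(f"information {t:.0%}: |z| must exceed {c / math.sqrt(t):.2f}")
# At 50% information the boundary is about 2.80 and the final-look
# boundary is about 1.98, so an observed z of 5 at the halfway point
# crosses the stopping boundary overwhelmingly.
```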

Second, the quote about "we met, we saw, we continue" was not about the minutes of the meeting but about what we should tell the IRB and the sponsor. I think we do need to have minutes that are at least summaries. I don't think we should have transcripts or detailed minutes; that almost inhibits free discussion.

Next--not finally, but some additional points--what I would call myths. One is that DMCs are expensive. I think that's ridiculous. They're a small percentage of the cost of a total trial. If you assume you're going to be monitoring data at all, somebody's got to do the monitoring and prepare the reports. The added cost of a data monitoring committee is quite small in the context of the trial, and you get a lot of benefit from doing it, as we've heard. So I don't think we should burden the data monitoring committee issue with the claim that it's expensive. There's some expense, but it's relatively small in my experience.

Another myth is that the FDA demands that a monitoring committee be blinded. I hear that a lot and, as you've heard today, that's not true. It doesn't say that anywhere. In fact, committees are encouraged not to be blinded. But it's something that is said over and over again by sponsors, and it certainly adds complications to the monitoring committee's way of doing business.

Another myth is that you should minimize the number of interim analyses--do as few as you can get away with. That seems to me to be moving in the wrong direction; your job is to protect the patients and the investigators, as I said. But it's something that is often quoted.

Another myth is that you must follow a rigid schedule, no deviations, no change of analysis plans. Obviously a monitoring committee must respond to the situation it sees, so that it cannot follow exactly always a rigid schedule or the analysis that was laid out in some set of tables at the beginning.

Finally, the issue of the benefits of an independent or external statistician. There is the issue of the firewall, which we've talked about, but another issue, which I think is almost more compelling, is that when studies are done and completed, it's amazing to me how quickly, for negative or neutral studies, sponsor staff are reassigned to new projects. The investigator and the investigative team are then left without any access to the data, and if they're in an academic environment they want to publish the results. Even in the best of companies, resources are limited and staff get reassigned.

So one added benefit of having that external statistician and statistical center is that while the sponsor may reassign its staff to more promising projects, the academic community can still have access to the data and publish it.

My final comment is this process is not new. We've been practicing it for 30 years. We're getting better at it. Maybe we'll get it right. But as it evolves I think it has a very good track record and yes, there are variations but overall I think it's served us very well in the past 30 years and I think we should strive to always improve it, but I think it has a great track record.

DR. LEPAY: Thank you.

Dr. Levine?

DR. LEVINE: Thank you very much. I've also taken some notes in the course of the day and have picked out a few favorite comments to make.

I would like to begin by saying that the guidance document that we were asked to respond to is an outstanding document and those who know me well will have trouble recalling the last time I said that about a federal document.

I particularly appreciate Susan Ellenberg's starting us off with a list of definitions. I want to recommend two more candidates for definition. One is the word "equipoise." I have heard the word "equipoise" misused at many, many meetings, including this one. Those who want to use this word should look up its definition.

And the second most commonly misused word is "dilemma." We very rarely encounter bona fide dilemmas in data monitoring, though sometimes we do; yet we've heard dilemmas discussed as if they were part of the routine business of a data monitoring committee.

I think the document does a good job in recognizing the different styles of data monitoring that are necessary in different contexts. Thinking about that has caused me to reflect on the assignments I've received as a member of a data monitoring committee from various agencies, both federal and in the private sector.

I think almost invariably the data monitoring committee is asked to monitor for patient safety, sometimes to the exclusion of anything else. That's a very important role for the data monitoring committee and it gives us many important trade-offs in the overall system for human subjects protection. I'll mention one of those in a minute.

Or, second most commonly, the data monitoring committee is asked to monitor the actual collection of data. Are the case report forms being returned completely and in a timely way? Is one center doing a little bit better than another in getting in its paperwork? This is not a rewarding function. I think basically you could do that function very well by hiring the people who are about to become unemployed as the airport security staff are replaced by federal agents.

I think it's very important that somebody keep track of whether the cases are being reported properly and in a timely way and I think it would be good to take the summary of their findings and turn that over to the data monitoring committee, which should have the expertise to tell whether or not some deficits in the monitoring process or in the reporting process could be detrimental to the conduct of the trial.

I think the thing that the data monitoring committees are called upon least to monitor is that which they're best at, and that's efficacy. The reason we're concerned with a lot of this blinding and so on has to do with the implications of efficacy monitoring and particularly taking interim looks at efficacy data and I would like to see that made the largest role for the typical committee and have that role emphasized in whatever guidance documents might be issued.

Now a second point I want to make has to do with the interplay between the various agents and agencies in the human subjects protection system. I was very sympathetic with Dr. Califf's point about how IRBs are responding to things that university administrators are heaping on them, based upon their reading in the newspapers of the requirements of federal agencies, usually shortly after a major institution has been closed.

One of the most onerous and least productive things they've been asked to do is to conduct periodic approval or reapproval of protocols at convened meetings. To show you how senseless this is: shortly after there was a survey of all of the reports from the then-OPRR on closing various research institutes or research establishments in universities, somebody enumerated what was mentioned most frequently and found that one of the two most frequently mentioned things was failure to conduct annual reapproval at a convened meeting. At a meeting not too long after that I told what I thought was a joke, that my university had responded by buying the IRB two shopping carts to transport all of the protocols to the convened meeting, and when I said that, smiling, two other people from other universities said they had had exactly the same experience.

I think that reviewing the adverse events that are reported worldwide to every IRB that's involved in reviewing research connected to what might be called a test article is probably the least fruitful, the lowest yield activity that the IRBs get involved in. They are certainly nowhere near as well equipped at doing this as the data monitoring committee. And I think the data monitoring committee has the special advantage of when they're looking at all of these adverse events they also have denominator data, which the IRB never has.

I think part of the trade-off here should be that the IRB should only be asked to look promptly at reports of adverse events that occur within their own institution and then only those that are both serious and unanticipated. I'm often asked why should they even look at those and the main reason they should look at those is because some people in their institution don't understand what the requirements are for passing this information over to, for example, the Food and Drug Administration and the sponsor. So that's part of the purpose of having them review these. Also, sometimes they will find something peculiar in the local environment that could account for an adverse event, which may not have been apparent to the investigator.

There are many, many understandings of how best to use an IRB. We've had frequent government reports saying that the IRBs are overburdened and overworked and that this threatens their effectiveness, but every time we see such a report the recommended remedy for the problem usually entails increasing the burden on the IRB. Enough of that. We're not here to discuss the IRBs' problems.

I think if I had to make one major editorial correction in the guidance document it is that at several points reference is made to the conflicts between science and ethics and I hope we can agree that there is no conflict between science and ethics. In fact, in the international documents that give a rank ordering to the ethical rules that have to be followed, the first mentioned is always that the science, the design of the science must be adequate for its purposes. The CIOMS document states as its first requirement or in part of the discussion of that first requirement that unsound science is, and I quote, "ipso facto unethical."

And my final comment, speaking of the CIOMS document, is that when Susan Ellenberg presented her very interesting review of the history of data monitoring committees she omitted the point that the first mention of a requirement for a data monitoring committee in international guidelines is in the 1993 version of the CIOMS International Ethical Guidelines. Thank you very much.

DR. LEPAY: Thank you. Dr. Stump?

DR. STUMP: Thank you. I'll try to keep my comments brief.

First I'd like to thank the agency and Dr. Ellenberg in particular for taking the leadership role in pushing this forward. It's a long-awaited document. It's an important document. Some of us had the benefit of having small group discussions on many of these topics off and on over recent years and we know what the issues are but I think it's incredibly important that the field at large develops an awareness of these because I think it can only lead to higher quality work and getting new drugs to patients sooner.

I agree on many things but I would like to separate my thoughts into two discrete buckets. One is how we handle DMCs in later so-called pivotal trials versus how we would handle data monitoring in earlier trials. I think it's quite clear that DMCs are useful if not required for the later trials.

I have bought into the independence concept. As a sponsor, which by the way is the perspective I largely bring to this field, having worked with DMCs across a variety of products and therapeutic areas in biotechnology for coming up on 15 years, I believe that my flexibility as a sponsor is greatly enhanced by remaining blinded to data. It gives me total flexibility to manage the trial based on the changing dynamic occurring external to that trial, and I really need that flexibility if I'm going to do my job.

I've had many spirited discussions, and I'll say this, with my biostatistics colleagues, some of whom are in the room, who have taken issue with me and my view on this, and I think we heard earlier some comments about how important it is to the biostatistician's job quality to be involved in what is arguably one of the most stimulating parts of what they do. However, I have countered that that individual is incredibly valuable to me as a joint participant in clinical development planning and in clinical strategy, and I can't possibly see them as being of maximal value in that role when I know that they're unblinded to data. And I have walked that tightrope with colleagues in the past and it's not easy. I prefer, if there is an equally effective alternative solution, that we pursue that and maintain the full participation of my biostatistician.

I would comment that, as we've discussed briefly, lay membership on these committees is kind of an emerging concept. I have found that to be an okay thing. I think they bring a perspective that has been at least reported to me to be quite valuable, and I've not seen problems with confidentiality being compromised in that setting. In fact, I have been involved with some programs where the program itself has had greater vitality because of the general awareness in the field that there was lay representation on the monitoring committee, so that I do support.

The concerns I have, and I raised one of them this morning, would be whether the guidance would be perceived as extending to require monitoring of much earlier trials. This is becoming more of a problem. Maybe some of you in the audience are as aware of that as I am.

I think there must be alternative ways to handle this. I have actually been on DMCs for phase I trials. I've constituted DMCs for phase I trials. I haven't had a really good experience with that yet. I think there has to be a way to develop credibility for the approach we take with good medical monitoring, oversight within the sponsor of that medical monitoring function, close adherence to regulatory communications, and discussions with our reviewers there as to how we're doing in that job and what data we're seeing.

As for the flexibility that you need at that early stage of development, those trials are seldom blinded and you really need maximal information at that point. I would be concerned if, unintended, the message in the guidance were perceived by some audiences to be that you need DMCs for these very early trials. We are getting requests more and more from IRBs to field DMCs at an early stage.

We have tried to come up with a solution that I think should be helpful, and that is to formally constitute an internal DRB within the sponsor. This is something that Allen Hopkins and I worked out at Genentech in our years there; it worked very well for us. It had some real advantages. It gave us a very flexible means of overseeing these early trials. It provided a group of clinical and biostatistics people, regulatory if need be, legal if need be, and external medical consultants to join us, to actually protect the project team itself from the bias of being too near the work when objectively assessing certain adverse outcomes.

It also provided a means for receiving reports to the sponsor from external committees, particularly for later trials. It was a way that we could discuss with the committee, and if need be with the FDA, who would see what and when and under what conditions and at what risk. I think Drs. Siegel and Temple stated the risk eloquently. Having been part of one of your case studies, Jay, it turned out okay; we did what you told us.

This internal committee is a great tool. I recommend it to any sponsor who's thinking of a vehicle for managing what is becoming a more complex infrastructure for data monitoring.

It's also an excellent tool for training sponsor-internal medical monitors in how to interact with external committees. We try to help them learn on us and work out some of the inefficiencies due to inexperience before we toss them out into the field at large. We know you have a very hard job when you are actually called to be on one of our DMCs, so this has been a definite plus for us.

But overall, I think if you pick excellent people, write a very clear charter up front, get everyone's buy-in--the committee, the agency--and then move forward, that has worked well. If we can make sure we don't undercut our efficiency at the very early stage of drug development, I think this is going to go a very long way toward clarifying things for the field.

DR. LEPAY: Thank you. I'm going to invite people to come up to the microphones for comments but I believe Dr. Califf has a comment as people are moving toward the microphones.

DR. CALIFF: I left out one important group to chastise, those of us at academic medical centers, and it relates back to what I think is a common problem we share with David Stump that's really growing.

If you look at outright fraud and shading and misrepresentation of data, the place where I think the issue of human subject protection is most difficult is actually in phase I trials, because very often you're not talking about any therapeutic experiment. You're really talking about doing an experiment on a human being that may be quite harmful to them to learn some things that are in your interest, either as an investigator or as a company.

But how to deal with this in an efficient way when it's not big enough to have a committee with a large amount of quantitative data, I think, is very difficult. I think all of us, including the FDA, dealing with investigator INDs and the academic community really need to work on this particular issue quite a bit more.

DR. TEMPLE: Just a couple of things provoked by the comments. I don't think there's anything in the document that suggests you can't have a multi-armed data monitoring committee to look at all the trials for a cooperative group. You might have to modify a little bit what they do. It sounds like they get very busy but there's certainly nothing in the document that suggests that's not reasonable.

I'm very sympathetic to the idea that one doesn't want to give the data monitoring committee a whole bunch of things that the IRB does and I don't think the document does. I think it says obviously they're going to be somewhat interested in the study they're supposed to be monitoring and if they just hate it, they may be in a difficult position to do it, but they're not supposed to redo what the IRB does, I don't think. And I'm skeptical about asking them to review the consent form and all that stuff. I really think that's been done already and I don't believe the document says that they need to, although if they have something to say nobody's going to tell them to go away.

Rob mentioned that sometimes company regulatory affairs groups want to know every adverse reaction, including every death, so that they can report properly to us. Just for what it's worth, that's their problem; that isn't ours. The rules make it very clear that reporting arrangements can be modified, described, and made to suit the study, so if reporting every death in an outcome trial would unblind the study, they don't have to do it. They just have to say who's responsible for watching it and that there's a data monitoring committee doing it. That's completely all right.

As you know, the reporting requirements can be modified considerably from what is usual and as long as everybody agrees on them, that's okay. There's a specific rule that allows that. It's not a guidance; it's a rule. So we're allowed to do that.

Dave raised the question, if I understood you, about what you do with trials of symptomatic treatments where they've obviously shown what they set out to show, and I don't think there's been a whole lot of discussion of that, but I also don't think there's any need to stop the trial. I mean, we replicate those trials. We do dose response studies in them. We do placebo-controlled trials in the first place, even though there's existing therapy. It's very hard for me to think that there's an obligation to stop those trials.

That said, it wouldn't be a bad idea if trials always said what the circumstances of monitoring and stopping a trial would be. It seems to me that would be important. It's a subject for another day, I imagine, but as I said, we often tell people to stop a trial early only for survival. That may mean that the other, combined end point might be relatively statistically extreme. The benefit to everybody is you get to look intelligently, but carefully, of course, at subsets. You get to look at a longer duration of treatment, which you're worried about; you know it doesn't reverse. There are a lot of advantages, but I do think you're obliged to tell people what you're doing.

The British way of doing that is to say they don't stop a trial until it would be convincing to everybody, so they get P values out as long as your arm but I don't think there's a standard practice of actually telling people what's going on.

I just want to talk briefly about what Dr. Stump said. I think the idea that there's either an internal or internal with a little external help group watching over the way things go is a very good idea. Whether that solves the problem of a conflicted investigator in phase I is not clear to me. CBER is certainly working on that because of some difficult experiences that they've had. But it's a thorny problem and as I wanted to say before, the problem is that you have to recognize the event as an event worth noting, which means there's no substitute for the investigator. That's the only person who can recognize the event really, as a practical matter. So whether that's a matter of training or having somebody there holding hands, I don't know, but some kind of monitoring situation in that setting seems reasonable.

DR. LEPAY: Thank you. I'd like to open this up now for discussion, if people could come to the mikes.


OPEN PUBLIC DISCUSSION

DR. FLEMING: Fleming, University of Washington.

Rob, you introduced your comments by talking about taking potshots at a number of different areas where there were concerns. I'm surprised maybe you didn't go a little bit further. Let me be specific.

We've talked a lot during this meeting and in the guidance document about the important responsibility that monitoring committees have in safeguarding the interests of participants during the course of a trial. Let's suppose now the trial has reached its completion, either through an early termination or having run its course.

How are we doing in ensuring that there is timely reporting of the results from that trial to the public, both to serve the participants in the trial and those external to it? Are we, in fact, doing fine? Is there, in fact, a responsibility, ethical and scientific, that may or may not be consistently addressed here? What is the role of the DMC in that responsibility?

DR. CALIFF: Well, I think the role could obviously be debated, but I like the word you used, an independent judge. At least my understanding from my NIH training in human experimentation is that the basis of informed consent when I enroll a patient in a clinical trial is that we will be creating generalizable knowledge. If I were doing it to help that individual person, then it would be unethical to do the experiment, because I would be helping them by doing what I thought was right, not asking them to participate in a randomized trial.

Therefore, if the result is not made public, I don't know how you can call it generalizable knowledge. So the question comes up: if you have stopped a trial for ethical reasons, do you bear a responsibility to see to it that the data are not buried? And you don't have to be a genius to see that if the trial's positive it gets out in a hurry. If the trial's negative it could be months to years to never before it ever sees the light of day.

I think this is a major problem and I don't see it diminishing. I actually see it growing right now. In our own institution we're seeing increasingly onerous confidentiality contracts, even for members of data safety monitoring committees, that would forbid you by contract from talking about the results for up to 10 years, which I think is a violation of the basis of informed consent.

Now I could have gotten this wrong but at least that's my view of it.

You've been on a lot of committees. Now you can't get away without--do you agree or not?

DR. LEPAY: Are there any other comments from the panelists?

DR. LEVINE: I think it's certainly true that industrial sponsors commonly ask data monitoring committee members to sign these pledges of confidentiality and when the trial comes out showing a satisfactory result, usually there's considerable haste at making the information public.

I don't know exactly what the rules are about a negative result but I do want to mention very briefly two experiences. I was on one committee which recommended a stop in a trial on the basis of futility and on that occasion the corporate executives called an emergency meeting of the board of directors because they had to make an announcement to the Securities and Exchange Commission. And they had the emergency meeting at 11:30 p.m. on the day of the data monitoring committee meeting and the statement to the SEC was made right before the market opened. Then the market opened and the price of the stock dropped 33 percent in the first hour. So I was pretty impressed that that was a very rapid contribution to generalizable knowledge.

I was also on another committee where we found that a trial should be stopped on grounds of futility and although we had signed contracts, the chair of our data monitoring committee insisted that we send a letter to the corporate offices of the sponsor saying that if they didn't do the right thing by way of reporting this event to the FDA that the members of the committee would have to consider doing that independently. We were not tested in that regard, I'm very happy to say, but that's yet another experience.

DR. TEMPLE: It does strike me, for the reasons that Bob just gave, that bad news about products in development or about attempts to extend a product line does get out. You know, the failure of ReoPro in the acute coronary syndromes was all over the papers. Everybody knew about it. A great disappointment, obviously. People would have had reasons for not wanting it to be known, but there it was, known. And for all the reasons that you have to tell your stockholders about things, I do think these things get out. Now, you must know of some things that are contrary to that.

I guess the other observation I'd want to make is that at least for academic institutions these people have organizations that set ethical standards and I don't understand why a confidentiality agreement of the kind you described is still considered ethical and I would think that there's something you could do about it.

DR. CALIFF: I have to respond to that. I want to point out one thing. I think Dr. DeMets--no offense--has probably been involved in more trials that were controversial for not reporting the results than anyone I know.

There's a big difference between a press release that says a trial was stopped and actually showing the data so that people can understand how it may relate to the patients they're currently treating or patients that they have in other trials of related compounds. There are legal reasons why companies frequently make press releases, often with long periods of latency before anything is done.

DR. TEMPLE: So it isn't the result that's hidden; it's the details.

DR. CALIFF: It's anything that would be helpful. But again this is not the majority. I think the majority are just like you said; people are responsible and they do the right thing. But some of the examples that aren't in the majority are important.

DR. STUMP: I wouldn't say that reporting by a sponsor to be in compliance with SEC requirements is a simple task. I would say that more often than not--and I've been in the situation a lot--I have been conflicted more by having my attorney say, I want you to put more information into the public domain, rather than less. And I've had investigators who really wanted the sanctity of that information preserved for publication in peer-reviewed journals and did not want that undercut, rather than vice versa.

Maybe you've had other experiences but you've got multiple stakeholders here and this whole process can't succeed if everybody's needs aren't at least felt to be met. More often than not I'm pulled the other way, to not put lots of specific data into the press release by the investigators, rather than doing so at the request of my own lawyers.

DR. DeMETS: I think the issue is that some very large trials which have important clinical significance don't get published. Remember, I said that one of the benefits is you have access to the data, and one way that doesn't happen is that resources get reallocated, so the database doesn't get cleaned up and ready for publication.

There's a famous case in the AIDS arena where a trial was stopped early; the database did not get cleaned up. The investigators, I think, complained, eventually published what they had. It's now in the courts or at least it was a legal situation.

There are other trials I've been involved with which are still not published. We know what they are. One's called PROFILE. And these things do happen.

As Rob said, it's not that the news doesn't get out. It's the details which, in fact, could be very helpful for future trials.

DR. LEPAY: We have about 10 more minutes left so I want to make sure we at least get a chance for the people who are currently standing here to address their comments or questions.

DR. SHOULSON: Ira Shoulson. I was just going to comment on this publication issue. It's very dear to my heart as an academic investigator. In doing trials ourselves we insist on not only a free and unrestricted right to timely publication, but those types of assurances from a sponsor are really hollow assurances without having the data.

So it's really access to the data, and that's why we get back to data monitoring committees; at least the point that David DeMets made is important. Having been a friend of the FDA for many decades and having served there, I can say, though, that at this point the FDA has not been a friend in terms of supporting this issue of a free and unrestricted right to publication, because as far as the FDA's concerned, as long as we see the data, we don't care if it's published in this journal or that journal. That's okay; just so we get to analyze the data and take a look at it, and that's certainly consistent with their mandate and the regulations that they have.

But I think, at least in the context of data monitoring committees, if some kind of statement could be made to ensure that there is a publication, a free and unrestricted peer review-type of publication of the data, and perhaps link it to the data monitoring committee, that certainly would be of great benefit to the public in terms of generalizability of findings.

DR. WITTES: Janet Wittes.

I think one thing one could do that would make a big difference and would be pretty easy is to think about adding to the charters of the DSMBs something about their responsibility after the trial is over. I mean one of the things that happens is the trial is over or you have your last meeting and the trial isn't really completely over, the report isn't done, and that's the end of the responsibility. I think a little bit of addition to the charter might go a long way.

ATTENDEE: Does the data monitoring committee have any responsibility if there is a publication that results from a flagrant misanalysis of the data, in which, say, a P value is reported at below .001 when a proper analysis leads to a P value of, say, .6?

DR. LEPAY: Does anyone want to take that?

DR. CALIFF: I think there is a responsibility. I think once you sign on to be a data monitoring committee member or a data monitoring person in a small phase I study that if you see something that's not--you're the watchdog. You're the independent judge and I really think that should be part of the charter.

Just quickly, I need to comment on Ira's comment about "free and unrestricted." Those words are very tricky. Just on behalf of the industry side of things: about three months ago I made an offhand comment in the middle of a negotiation with industry about this right to publish. What, do you think a chemistry professor's going to demand the data and come and take it from the database and try to publish it? They said, it's funny you should mention that; that just happened about six months ago to our company, because the university had given any faculty member a free and unrestricted right to publish the data.

So I actually don't think it should be free and unrestricted. I think it should be planned and organized and multilateral.

DR. LEPAY: Other comments among the panelists?

DR. FLEMING: If we're going to change the subject, maybe just a quick follow-up comment to my original question.

Basically my sense is that the issue of timely reporting of results after termination of a trial is not a common problem. My own sense is that in most cases, people are given a reasonable period of time to make sure that they understand and can present a clear message, and within that period of time results are reported.

However, when you monitor a lot of trials you run into counterexamples to this. All of the problems that we have heard about do, in fact, occur: a study hits its completion point, either through early termination or running its full course, and there is an extended period of time without results getting out; or, when they're published in the literature, as a DMC member you're very uncomfortable that the publication represents a truly objective representation of the data.

The question I don't believe we have really adequately considered is: what are our responsibilities to patients to ensure that there is appropriate, timely, accurate dissemination of data once the study is completed? And there are at least two elements to this. One of those elements is: what is the data monitoring committee's role if, in fact, you become aware of something, which won't happen very commonly but on occasion does happen, where you have ethical concerns and scientific concerns?

And secondly, is it proper for monitoring committees to be signing confidentiality agreements, which are not standard but are common, indicating that we won't release information to anyone outside of those involved in data monitoring committee discussions? Do we, in fact, need to ensure that such agreements aren't part of consulting contracts? Do we need to go further, as Janet says, and ensure that charters actually indicate that in these uncommon settings monitoring committees have ethical and scientific responsibilities that could, in fact, extend beyond the point at which the study is terminated? And should monitoring committees then, in these unusual circumstances, actively carry out that ethical responsibility, to ensure that if in their perception there is a problem, they are able to address it either with the FDA or with the scientific community?

DR. LEPAY: Any comments?

DR. TEMPLE: That all seems desirable but the mechanism for making that so is not obvious. A data monitoring committee is arranged through a contract with a sponsor. Under what law can we or somebody else say you can't have such an agreement?

I really do think it seems an obvious thing for academic societies to at least discuss and make rules about. As Rob said, "free and unrestricted" might be trouble, but something that says it's their job to report the truth as they see it, and that you won't accept agreements that bar that.

DR. STUMP: Tough question. At least my understanding of what these confidentiality agreements are, from a sponsor's perspective, is that they're really an assurance that during the in-life monitoring part of the study there will be no breach of confidentiality. I don't believe they're intended to be a muzzle, if you will, for eternity.

I think that once data is in the public domain, that's substrate for any qualified scientific opinion to be expressed and I don't see why--

DR. FLEMING: In my experience there's tremendous diversity, Dave, in this and some of them are very explicit, stating that there wouldn't be any communication with the FDA, regulatory authorities or anyone outside of those involved.

DR. STUMP: I think the FDA communication is perhaps a more difficult issue, given the reporting relationship that exists. I think the way a study is meant to work and as I've heard from the agency, they really don't want DMCs reporting to them directly. They'd prefer that be through a sponsor. We certainly set up vehicles to accommodate that reporting and would certainly entertain any discussion from any DMC member--I would--of hey, I don't like how you're handling this and we would be open in describing how we see it.

I think that the data itself certainly has to be at some point owned by the investigator. Certainly a DMC has only seen data during the in-life portion of a trial and that may or may not be representative of what the data really are at the end of the trial and I think the investigators are empowered to interpret that data, to publish it in their peer review systems in the medical literature that are supposed to oversee that so I don't know why the DMC would have to be an added portion of peer review to that process. But I hear the question; I just don't have the easy answer.

DR. TEMPLE: One of the difficulties one hears about--you guys would know better than I--is that any given investigator in a multi-center study has a lot of difficulty getting a hold of the total data. Someone has to make it available to that person. The data monitoring committee, of course, has been given the data at least at some point, even if not the final, so they're somewhat more in a position to see the whole database.

Just from our point of view, if anybody found something presented publicly as grossly distorted we'd be interested.

DR. STUMP: I think any sponsor knows that they will ultimately be standing before the agency and have to defend their policy, so we will undergo your peer review eventually.

DR. TEMPLE: But we miss things and we'd like help.

DR. STUMP: Surely not.

MR. CANNER: Joe Canner with Hogan & Hartson.

Before I change the subject, I think there are some interesting situations, particularly though not uniquely in device trials, with new, unique, novel products where the company has a pretty good reason to want to suppress negative results, especially if the product is not going to be approved. At least within the United States, there's no reason why a physician should have any information about a product that has not yet been approved. But that's not my area, so I understand there are other issues, and I'll move on to my other question.

To follow up on my question from before about unique aspects of device trials, I have a particular question about stopping criteria, something that's been mentioned throughout the day. I just need clarification on it.

Device trials are typically not planned to be stopped early for efficacy, for a variety of reasons, but it may be appropriate to stop them early for safety. Oftentimes, though, the safety issues are not terribly obvious up front for a number of reasons: because of unexpected issues, because of the difficulty of establishing the relationship between an event and a device, because of a lack of prior data, and also because of the need to evaluate events in a risk-benefit context, where sometimes the device is being compared to something totally different, which has a totally different risk-benefit profile.

So it's very difficult up front for a sponsor to establish stopping rules, but sometimes the FDA asks the company to establish stopping rules for safety in the protocol and then dictate them to the DMC. I'm just wondering if there's any clarification on that and whether it wouldn't be appropriate in some instances to allow the DMC the freedom to kind of make it up as they go along, to see events as they occur and to see the evidence accumulate before setting any specific criteria for stopping.

DR. CALIFF: I've got to respond to your first comment because I think it's critical for people to really think about this and for at least some thought to go into a final document.

I think there are two reasons why, for a device that doesn't get on the market because a study was stopped early, the results need to be known. The first is that the investigator has signed a contract with the patient to do a human experiment, the basis of which is that it's being done to create generalizable knowledge. To not make the results public is a violation of the fundamental concept of informed consent, at least as I've been taught in my IRB training.

Secondly, there are many devices that don't get to the market that are similar to devices that are on the market. In particular circumstances where a device has failed in its testing and there's a generalizable concept, withholding that knowledge, even though releasing it may disadvantage the company that did the study, puts patients who are not in the trial at risk, when the knowledge would have allowed people to be treated in a more humane fashion. I think there's an ethical construct here that truly overrides the profit motive of the device company.

Obviously I feel strongly about this but I think these issues really need to be considered and people monitoring trials need to have some responsibility for making sure that the basic fundamental construct of a human experiment is adhered to.

MR. CANNER: I would agree, and I'd just respond that I think you could concoct a situation where it really would be in the best interest of both the patients and the industry, in the interest of trying to develop enhancements to a product, especially a unique product that isn't already captured in the market, not to cast a pall on all further studies of that device by saying that the first go-around was negative, but instead to allow the company to improve the product and come up with something that might actually work, without the bias of previous studies.

DR. CALIFF: I think there needs to be reasonable time. There are always exceptions. I agree.

DR. DeMETS: In response to your second question, I think monitoring committees themselves need to be reminded of the fact that the data are spontaneous and random, and if you have no plan in place you can deceive yourself into reacting to something that is just a chance event.

Of course, in the safety business one never knows what to expect, so we're always sort of making some rules up as we go, as we see new events. But to have nothing to start with, I think, is kind of dangerous. I think you need to have some plan, at least to give you some navigational aids as to how to assess things and to remind yourself as a committee that there are these chance events. To specify nothing, I think, opens the door too wide.
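[Editor's illustration: a minimal sketch, in Python, of the kind of pre-specified "navigational aid" described above. The reference serious-adverse-event rate p0, the alert threshold, and the interim counts are hypothetical, chosen only for illustration; this is one simple flagging rule among many, not a rule taken from the guidance, and any real plan would be set with the trial's statisticians. Even so simple a written plan gives a committee a benchmark for judging whether an apparent excess is more than one of the chance events warned about here.]

    from math import comb

    def binom_tail(k, n, p):
        # P[X >= k] for X ~ Binomial(n, p): how surprising k or more events
        # would be if the true event rate were the reference rate p.
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    def safety_flag(events, n_treated, p0=0.05, alert=0.01):
        # Pre-specified guideline: flag for DMC discussion (not automatic
        # stopping) when this many events or more would be a less-than-1%
        # chance occurrence under the hypothetical reference rate p0.
        return binom_tail(events, n_treated, p0) < alert

    # Hypothetical cumulative interim looks: (serious events, patients treated).
    for events, n in [(2, 20), (5, 40), (9, 60)]:
        print(f"n={n}, events={events}, flag={safety_flag(events, n)}")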

DR. LEPAY: We're just about at our closing time here so we'll let Jay respond.

DR. SIEGEL: On that point, the document, to the best of my recollection, does not specifically address the issue of stopping rules for safety, and correct me if I'm wrong. For efficacy they're addressed because of the need for prospective rules to ensure appropriate protection of type 1 error. That said, the word "rules" here is not used the way the FDA usually uses it: they may be called stopping rules, but we understand that a good DMC may, for good cause, choose to disregard those rules. Nonetheless, that should be rare, and the rules ought to be in place and probably agreed to by the DMC, if not, as some have suggested, written by them.

I think in safety it's a different issue. It's not addressed in the document, so we don't have guidance in that area. Experience would suggest that sometimes safety stopping rules are used if it's the same parameter--in a mortality trial, for mortality going the wrong direction--but experience has shown that usually there are futility rules that kick in before the safety stopping rules do anyhow. By the time you've reached a point where you seem to have proven harm, you have usually already reached the point where the likelihood of proving success is so small that trials often get stopped for that reason.

The only other thing I would note, because it is germane to a lot of discussions we've had earlier, when safety is an issue that relates to outcomes other than the primary end point, often there's not only the issue that the safety event may be unanticipated so hard to preplan for, but it's also often critical to integrate that safety outcome in the context of the likelihood that the drug may be benefitting. And even when we've gotten unblinded data from a trial and learned unexpectedly that a drug may be or seems to be increasing the risk of a serious adverse event that wasn't anticipated, more commonly than making a decision that the trial needs to be stopped or even altered, we'll often kick that back to the monitoring committee to look at that finding in the context of the efficacy data because you might have serious bleeding in the context of a trial that's suggesting an important new benefit on mortality and it's very hard to plan in advance for how much serious bleeding should stop a trial that may be saving lives.
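[Editor's illustration: the point above, that futility thresholds are usually crossed before harm is formally proven, can be made concrete with a standard conditional power calculation under the Brownian-motion (B-value) model of a group sequential trial. The interim values below are hypothetical.]

    from math import sqrt
    from statistics import NormalDist

    N01 = NormalDist()

    def conditional_power(z_interim, t, z_crit=1.96, drift=0.0):
        # P(final Z > z_crit | interim Z at information fraction t), assuming
        # the remaining data accrue with drift `drift` (0 = null from here on).
        b_t = z_interim * sqrt(t)  # interim B-value
        return 1 - N01.cdf((z_crit - b_t - drift * (1 - t)) / sqrt(1 - t))

    # Hypothetical trial trending toward harm (Z = -2.0) at half the information:
    print(f"conditional power = {conditional_power(-2.0, 0.5):.1e}")  # ~1e-06

[With success this nearly ruled out, a futility recommendation would ordinarily come well before any formal proof of harm.]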

MR. O'NEIL: Bob O'Neil, FDA.

I was wondering if the panel had any thoughts on an issue related to the complement of where Greg Campbell started and the comment of the gentleman previously about data monitoring committee lite.

A lot of effort was put into the document to think about which data monitoring committees would be independent and which trials might be eligible for that. Once you make that decision, it leaves a body of trials that don't have to have this independent data monitoring committee structure, the bureaucracy of it, but the spirit of it sort of lives on, particularly if you want to do industry-sponsored trials where the industry is going to monitor the trial to some extent. There's a lot of literature and methodology these days on flexible study designs, which allow you to adapt prospectively in the learn-confirm environment, given that, as Bob Temple had indicated, a lot of folks are not necessarily going through a sequence of trials. They're doing some early phase trials and they're getting into a phase III trial real fast, trying to get it all done, but most of these phase III trials are often learning trials in their own right.

So the flexible designs can drop an arm, they can drop a dose, they can up-size the trial, and they can do all of this in a legitimate way, and this gets hard real fast. I'm concerned that this is much beyond the monitoring job that a data monitoring committee needs to do. And I guess what I'm asking is: do you see that the document leaves room for how to implement flexible designs, in a firewall sense, where someone needs access to unblinded data and where interim decisions have to be made to get on to the next step, while preserving the validity and credibility of the trial?

There's an answer to that for the independent data monitoring committee model, and there's probably another answer for the trial that would use a flexible design but wouldn't rise to the level of an independent data monitoring committee model. I was wondering if you had any ideas on that, because this document doesn't address it right now.

DR. DeMETS: Well, I'd only comment on one specific. The document does discourage using unblinded data to adjust sample size--I think at one point it talks about that--yet we know there's research going on which says that, in fact, you can do what seems to be statistical heresy: you can change the sample size based on the interim delta and do it in such a way that you don't screw up the alpha level, at least not in any way we care about.

But we're not there yet; this hasn't been fully tested, examined, and challenged, so these developments are probably too new. The current document is somewhat at odds with them if you take it literally, the way it's written right now. So it doesn't leave much room for some of that, and I guess this is also a living document. When we get there maybe you'll change it, but right now it's keeping the door pretty tight on that and things like that.
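[Editor's illustration: the line of research alluded to here includes weighted combination tests, as in Cui, Hung, and Wang (1999), where the stage weights are fixed in advance so that re-sizing the second stage on the basis of the interim delta cannot inflate the type 1 error. The simulation below is a minimal sketch under simplifying assumptions, one-sided alpha of 0.025, unit-variance normal outcomes, a single interim look, and a deliberately aggressive, hypothetical re-sizing rule; it is not a method stated in the guidance.]

    import random
    from math import sqrt
    from statistics import NormalDist

    ALPHA = 0.025  # one-sided
    Z_CRIT = NormalDist().inv_cdf(1 - ALPHA)

    def stage_z(n, rng, delta=0.0):
        # Z-statistic for the mean of n unit-variance observations.
        return sum(rng.gauss(delta, 1.0) for _ in range(n)) / sqrt(n)

    def adaptive_trial(rng, n1=50, n2_planned=50, delta=0.0):
        z1 = stage_z(n1, rng, delta)
        # Data-driven re-estimation: quadruple stage 2 when the interim
        # delta looks weak (a rule that would bias a naive pooled test).
        n2 = 4 * n2_planned if z1 < 1.0 else n2_planned
        z2 = stage_z(n2, rng, delta)
        # Weights fixed by the planned split, regardless of the actual n2;
        # since w1**2 + w2**2 = 1, the combined statistic stays standard
        # normal under the null no matter how n2 depends on z1.
        w1 = sqrt(n1 / (n1 + n2_planned))
        w2 = sqrt(n2_planned / (n1 + n2_planned))
        return w1 * z1 + w2 * z2 > Z_CRIT

    rng = random.Random(42)
    trials = 20000
    rejections = sum(adaptive_trial(rng) for _ in range(trials))
    print(f"empirical type 1 error: {rejections / trials:.4f}")
    # Stays close to 0.025; a naive pooled Z over all observations in the
    # same simulation would drift above the nominal level.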

DR. LEPAY: Any other comments from the panelists?


CLOSING REMARKS

DR. LEPAY: Well, I want to thank everyone very much for their participation today. This has been very valuable for FDA. I'd like to thank our panelists of this last session.

We've certainly appreciated the comments, and they will certainly be taken into account as we move forward with this document.

For those of you who may not have seen this document, we encourage its circulation. Again, it's open for public comment until the 19th of February. Please participate in our process here. We thank you very much again for your attendance.

[Whereupon, at 5:05 p.m., the meeting was adjourned.]
