FDA Consumer magazine

March-April 2007

The Advancement of Controlled Clinical Trials

By Linda Bren

From an antacid to a new cancer treatment, every drug must be proven safe and effective in controlled clinical trials before the Food and Drug Administration allows it to be sold in the United States.

When federal law first required controlled clinical trials in 1962, most people didn't know how to do them and the trials were rife with problems, says Robert Temple, M.D., director of the FDA's Office of Medical Policy. Over the years, FDA experts evaluated the many shortcomings of the clinical trials they saw and worked to fix them. To this end, the FDA established regulations and guidance to ensure that sponsors of clinical trials, medical researchers (investigators), and ethics committees (institutional review boards) understand how to carry out their responsibilities effectively and comply with federal law.

"As FDA expertise evolved, so did controlled clinical trials, and the quality of their design, conduct, analysis, and interpretation improved," says Robert T. O'Neill, Ph.D., director of the FDA's Office of Biostatistics, who has seen much of the evolution of controlled trials since he joined the FDA in 1971. "FDA has been the major force behind the development of good principles for the design and interpretation of controlled trials. We've promoted, fed, and cared for controlled clinical trials as a critical force in drug development, and we continue to do so today."

What Is a Controlled Clinical Trial?

A clinical trial uses an investigational product in human volunteers to examine its effects. An investigational product may be an experimental drug, medical device, or biologic, such as a vaccine, blood product, or gene therapy.

"If the course of illness were always predictable, you would simply observe the treated participants, see how they fared after treatment, and decide whether the treatment helped and what its bad effects were," says Temple.

But most diseases have a variable course, so to find out the effect of a medical product, you must compare a group that got the experimental product (test group) with a group as similar as possible to the test group, but that received an inactive substance (placebo) or a treatment known to be effective. This group is called the control group, thereby making the clinical trial "controlled." Any differences in results between the test group and the control group can then be attributed to the experimental product.

A shortcoming of many early clinical trials was failure to decrease the possibility of bias. Bias refers to any factor that distorts the true outcome of a study, leading to overestimating or underestimating the effect of the product under investigation.

Bias may be introduced when investigators interpret results in one group more favorably than results in the other group, even if they are the same. Or bias may occur when investigators use their knowledge of a participant's condition to assign that person to a specific group; for example, people who are less ill may be assigned to the investigational treatment group.

Randomization to treatment groups and double-blinding are two methods used to minimize bias in clinical trials. Randomization, essentially a coin toss by a computer, means participants are assigned to a group by chance, so that neither group will be healthier or more likely to improve than the other.
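
For illustration only, the "coin toss by a computer" can be sketched in a few lines of code; the function name and participant IDs below are invented for the example and do not represent any actual FDA or trial software.

    import random

    def randomize(participant_ids, seed=None):
        """Assign each participant to the 'test' or 'control' group purely by chance."""
        rng = random.Random(seed)
        return {pid: ("test" if rng.random() < 0.5 else "control")
                for pid in participant_ids}

    # Hypothetical participant IDs, for illustration only.
    print(randomize(["P001", "P002", "P003", "P004"], seed=42))

Real trials typically use more elaborate schemes, such as block randomization, to keep the groups close in size; the sketch shows only the principle of assignment by chance.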

Blinding is used in conjunction with randomization. Single-blinding means the participant doesn't know whether he or she is receiving the experimental product, an established treatment for that disease, or a placebo—but the research team does know what the participant is receiving. Clinical trials are usually double-blinded, meaning that neither the participant nor the research team knows during the trial which participants receive the experimental product.

Blinding is done to make sure that such factors as investigators' preferences or expectations, or participants' desire to please investigators or hopes of improvement, cannot influence and distort results.

When a company wants to market a drug, it must submit an application to the FDA that includes data from controlled clinical trials. FDA experts review the application to decide whether the clinical trials are well-controlled, whether they show the effectiveness of the product, and whether all available data show that the product is safe enough to allow it on the market.

Proof of Safety

Before 1938, manufacturers could market a drug without submitting any information to the FDA or any other agency. The Federal Food, Drug, and Cosmetic Act (FD&C Act) of 1938 was passed after more than 100 people, many of them children, died from taking a sulfa drug preparation that had never been tested for safety. This law required companies intending to market a new drug to submit an application to the FDA that included data to demonstrate that the drug was safe for its intended use.

The FDA now interprets safety to mean that the benefits of a product outweigh its risks. "As drugs always have side effects and are never completely free of risk, it is hard to interpret safety any other way," says Temple, "but from 1938 to 1962, the FDA had little good evidence of benefit to weigh against risk."

The FD&C Act gave the FDA the authority to refuse to approve a drug application on specific grounds that are still valid. "The legal requirements for safety set forth in 1938 were very broadly written and are absolutely unchanged today," says Temple, "although we have learned a great deal about how to fulfill and interpret them."

"We weren't very good at safety evaluation in the early days," adds Temple, who has reviewed data from numerous clinical trials since he joined the FDA in 1972. "There were no standards, no controlled trials, and no post-marketing surveillance. But we got better over time. And in the age of effectiveness, we also got better at safety."

Evidence of Effectiveness

"Crises usually create the opportunities for changes," says O'Neill.

It took an international crisis to lead to an important change in drug law. In 46 countries, more than 10,000 deformed babies were born to mothers who had taken a drug called thalidomide to help them sleep at night and to quell their morning sickness. The United States averted widespread tragedy because, although the manufacturer submitted an application to market the drug in this country, the FDA would not approve the drug based on the sketchy safety data provided.

The thalidomide disaster led to a law directed not only at drug safety, but also at drug effectiveness. The landmark drug law, the 1962 Kefauver–Harris Amendments to the FD&C Act of 1938, required new drugs to be shown effective as well as safe. It was based on several years of testimony before Senator Estes Kefauver's subcommittee about how poorly the effectiveness of drugs was evaluated, and on strong urging that drugs be studied in a way that would provide meaningful evidence of effectiveness.

But how was effectiveness to be determined? The law said there had to be "substantial evidence" that a drug would have the effect it was claimed to have under its proposed labeling.

Before this law, the country operated on opinion-based medicine, says O'Neill. "Prior to 1962, a doctor would be pictured with a drug on the back of Look magazine, saying 'this is the best thing since sliced bread.' The 1962 law said, 'opinion is not the only thing that matters. Now you have to show evidence.'"

The law also specifically stated that the only source of substantial evidence would be evidence derived from adequate and well-controlled studies. "It was really not the effectiveness requirement that was radical," says Temple. "But the requirement for adequate and well-controlled studies changed all of medical science."

The 1962 amendments also incorporated important ethical features regarding drug experimentation on humans, including a requirement for informed consent from participants.

Poor Studies, Ineffective Products

The 1962 law made it clear that companies had to prove their drugs were both safe and effective if they wanted to market them. But what about the more than 3,000 prescription drugs already on the market that had been approved based on safety alone?

The law said that the FDA must conduct an extensive review of all the drugs that the agency had approved between 1938 and 1962 to ensure that they also were effective. If they were not, the FDA was to get them off the market under what was known as the Drug Efficacy Study Implementation (DESI) program.

With the help of the National Research Council of the National Academy of Sciences, the FDA reviewed the evidence of effectiveness for thousands of drugs, finding that many of them were not effective for any of their claimed uses. Over 1,100 drug products that had been FDA-approved were withdrawn from the market as part of the DESI program.

The more than 20-year DESI effort was "an instruction in every way a study could be badly designed, analyzed, and reported," says Temple, who reviewed many of the DESI drugs. In 1970, for the first time, the FDA described in regulations the characteristics of an adequate and well-controlled study.

The DESI effort had another significant effect, says O'Neill. "Because of that experience, and reflecting on what went right and wrong in clinical trials, we were able to write guidances for industry beginning in the mid-1980s on how to report clinical trials, which influenced the design, conduct, and interpretation of clinical trials."

Unlike a regulation, a guidance is not enforceable, but companies have a strong incentive to follow it because it represents a pathway the agency has said is acceptable. If clinical trial investigators don't follow FDA guidance, they must show that their alternative approach meets the requirements, says O'Neill. "The guidance does a lot of the hard work for them."

As the following examples show, when new problems have arisen in clinical trials, FDA experts have worked to solve them and to translate the lessons learned into new guidances and regulations.

No Data Fishing Allowed

In one early clinical trial involving a drug intended to treat pain, investigators tested the drug in a group of people with pain without finding any effects, says Temple. So they broke the participants into groups—those with moderate or severe pain, and those with mild pain—and found an effect in the moderate-to-severe group. In the next study, they repeated the process, finding an effect only in the participants with mild pain.

"They'd go sifting through the data, breaking patients into lots of groups and picking the one that showed the drug worked!" says Temple. "If you go fishing through data like that, you'll always find something positive."

A basic but crucial concept, called pre-specification, arose from this type of distortion, says O'Neill. "A clear statement of the study's objectives is required, and study investigators must specify in advance how they will judge a trial's success or failure." Study objectives must be included in a study plan, called a protocol, which describes what study procedures will be done, when, and by whom.
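
As an illustration of the idea only, and not of any actual FDA or sponsor format, a protocol's pre-specified elements can be pictured as a record written down before any data are examined; every entry below is hypothetical.

    # Illustrative only: the endpoint and success criterion are fixed in advance,
    # not chosen after sifting through subgroups for a favorable result.
    PROTOCOL = {
        "title": "Hypothetical analgesic trial",
        "primary_endpoint": "change in pain score at week 6",
        "analysis_population": "all randomized participants",
        "success_criterion": "test arm superior to placebo, two-sided p < 0.05",
        "prespecified_subgroups": ["moderate or severe pain at baseline"],
    }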

Counting All People

The need to account for all participants in a trial had not been an issue for the FDA until the Anturane Reinfarction Trial (ART) of 1980, says Temple. The drug Anturane (sulfinpyrazone), according to the drug company, helped to prevent sudden death during the first six months after a heart attack (myocardial infarction).

"ART seemed a model effort," says Temple. It was double-blinded, randomized, and placebo-controlled, and it involved about 1,600 participants.

But there were problems. Although published reports did not indicate it, nine participants who had died—eight of them on Anturane, and one on a placebo—were excluded from the trial results.

"The drug company decided that three patients had not been 'compliant'—they found all the drug in their rooms after they died—and that they weren't eligible for the trial after all," says Temple. "When you included these patients, there was no evidence of an effect from the drug."

The FDA discovered another fault with the ART. The investigators attempted to distinguish and classify deaths using various definitions they developed. "They had concluded that their drug prevented sudden death," says Temple. "But to get to that conclusion, the cause of death assignment had to be distorted. In many cases, when a death occurred on their drug, they called it something other than sudden, for example, a heart attack. But if a death that looked the same occurred on placebo, it was often called sudden death."

As a result of the ART experience, FDA regulations require any new drug application to account for all participants, including those who dropped out of the trial or died. In addition, determination of cause of death must be done by investigators blind to the investigational product.

Open and Transparent

Researchers, health professionals, and the public can gain insight into the FDA's views on specific products and clinical trials through the public advisory committee process. And access to FDA reviews of products that have been approved is available under the Freedom of Information Act (FOIA). Many of these reviews are on the FDA's Web site.

"That is priceless information for someone seeking to understand the logic behind our decisions, what the data show, and the pros and cons of a treatment," says O'Neill.

"No other country has that yet," adds Temple, "although there is some movement toward greater availability of information."

Advisory committees were created under the Federal Advisory Committee Act of 1972. They are groups of independent, expert scientists called upon when the FDA seeks outside advice regarding product approval or labeling, or more general advice on clinical trial design or the kind of evidence the FDA should be seeking for particular products. The FDA has 18 advisory committees for drug products, as well as numerous other committees responsible for advising the agency in other regulated product areas.

At an advisory committee meeting, much of which is open to the public, the company presents findings from its clinical trials and other data in its application, and FDA staff present their assessments. The committee carefully considers the information and votes on questions posed by the FDA intended to guide the agency in its decision to approve or not approve the product for marketing or for a new claim.

"The FDA's advisory committee system is a critically important component of the FDA review process," says FDA Commissioner Andrew C. von Eschenbach, M.D. "Our willingness to expose our medical and scientific judgments to open public scrutiny, which often involves intense intellectual debate, ultimately helps to strengthen our decisions and to bolster public confidence in them."

Dose-Response Principle

FDA scientists discovered that some drugs were initially marketed at what were later recognized as excessive doses. Until the late 1970s, participants in clinical trials were usually started out on a small dose of medication. The dose was then gradually increased (titrated) to a level where it could not be tolerated or to some clinical endpoint, such as worsening of the disease.

"The thinking was that at higher levels, the experimental drug was sure to show itself better than a placebo," says Temple. Because of that assumption, how well the participants responded to the drug at lower doses was not always assessed. And when it was examined in a titration study, the results were confusing because participants were not randomized to different doses.

"In fact, often, people who were moved up to the higher doses were the poorer responders, so that it looked like higher doses gave poorer responses," says Temple. "The result was that these studies did not allow a well-characterized look at the beneficial and adverse effects of different doses."

The remedy was to randomize people to different doses and keep them on those doses. In the early 1980s, the FDA began urging this type of clinical trial design, known as the randomized, fixed-dose, dose-response study. In 1985, the agency revised its regulation on adequate and well-controlled studies to include this design. The design helps find the lowest drug dose with a useful effect and a dose beyond which there is no further benefit.
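
A rough, purely hypothetical sketch of this design in code: participants are assigned by chance to one of several fixed dose arms, stay on that dose, and their responses are then averaged arm by arm. The dose levels, function names, and data structures are invented for the example.

    import random
    from statistics import mean

    DOSE_ARMS_MG = [0, 10, 20, 40]  # 0 mg serves as the placebo arm

    def assign_fixed_doses(participant_ids, seed=None):
        """Randomly assign each participant to one fixed dose arm for the whole trial."""
        rng = random.Random(seed)
        return {pid: rng.choice(DOSE_ARMS_MG) for pid in participant_ids}

    def mean_response_by_arm(assignments, responses):
        """Average the measured response within each dose arm."""
        by_arm = {}
        for pid, dose in assignments.items():
            by_arm.setdefault(dose, []).append(responses[pid])
        return {dose: mean(values) for dose, values in sorted(by_arm.items())}

Comparing the averages across arms helps show the lowest dose with a useful effect and the dose beyond which there is no further benefit.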

"We converted the field," says Temple. "Before we started urging people to use this design, almost every trial was a titration trial where they gradually inched the dose up."

The benefit of the FDA's extensive knowledge about well-controlled clinical trials extends beyond the health of the American public. The dose-response principle, for example, became the world norm when it was adopted in 1993 by the International Conference on Harmonization (ICH). The ICH, a joint initiative of drug regulators and the research-based pharmaceutical industry in the European Union, Japan, and the United States, publishes agreed-upon testing procedures and scientific guidance to ensure the safety, quality, and effectiveness of medicines and to prevent unnecessary duplication of clinical trials.

Surrogates, Expanded Access

For serious or life-threatening conditions, the faster a product can get to a patient, the greater the likelihood that the product, if it is effective, will help that individual. Relying on surrogate endpoints as a basis for drug approval can speed access to drugs.

A surrogate endpoint is a laboratory measurement or physical sign used as a substitute for a measurement of how a person actually feels, functions, or survives (clinically meaningful endpoint).

The agency has relied on persuasive surrogates before. "We've approved drugs because they lower LDL cholesterol without specific evidence that they decreased the rate of heart disease," says Temple, although this evidence came later. "We've also approved drugs because they lower blood pressure, an effect shown often over the years to decrease rates of stroke and heart attack."

But surrogate endpoints took on new importance in clinical trials when AIDS was first identified in the United States in 1981. Caused by HIV, a virus that attacks the immune system, the disease took the lives of thousands of Americans, mystified a medical community that didn't know how to fight it, and activated an advocacy community that demanded new treatments for people who were dying too quickly to wait for the completion of years-long clinical trials.

"The state of the science moved very, very rapidly and we had to become very adept at adapting trial designs very quickly as well," says Rachel Behrman, M.D., an FDA infectious disease specialist and reviewer of some of the early AIDS drug applications.

In 1991, the HIV epidemic was already well into its first decade with only one approved therapy, AZT (zidovudine). The FDA had approved AZT based on a dramatic reduction in death rate, but some people did not respond to it, others lost their responsiveness, and some couldn't tolerate the drug's side effects. There was an urgent need for new therapies.

The FDA approved the second anti-HIV drug, didanosine, on the basis of a surrogate endpoint—an increase in CD4 cell counts from ongoing clinical trials. HIV infects and kills CD4 cells, the infection-fighting white blood cells that coordinate the body's immune response. There was good evidence that an increase in CD4 cell count predicted an actual clinical benefit.

"The big change came in the mid-1990s when there were several therapies available and it was clear that AIDS patients could not be maintained on two- to three-year trials," says Behrman. On the basis of data analyzed across many clinical trials, the FDA was able to accept the surrogate endpoint of a large decrease in the amount of virus (viral load), measured as HIV ribonucleic acid levels in a person's plasma, after the person had been on an investigative product for 26 weeks. This acceptance of a surrogate endpoint greatly shortened the duration of the clinical trials before drug approval, although studies continued to 52 weeks after approval to evaluate whether or not the benefit to patients continued.

"There was intensive, pro-active FDA involvement every step of the way," says Behrman, "because with a public health crisis, it wasn't business as usual and we had to be a more active participant in clinical trial design."

Since the 1970s, the FDA had allowed the use of drugs that weren't approved (investigational new drugs, or INDs) in very ill patients who had no good alternatives. In some cases, these uses involved tens of thousands of patients. In 1987, the FDA formally changed its regulations on INDs to allow this use—the so-called treatment IND—although there was always concern that this availability could interfere with the controlled trials needed to establish effectiveness and to support marketing.

"The HIV drug development experience showed that you could have drug access programs exist in parallel with the controlled trial programs and successfully complete the controlled trials," says Behrman.

The AIDS drug experience also set the stage for the FDA's 1992 regulations on accelerated approval of new drugs for serious or life-threatening illnesses. The regulations allow the FDA to approve new treatments for these illnesses if the treatments appear to offer meaningful benefits compared with existing treatments on the basis of their effect on a surrogate endpoint "reasonably likely" to predict an actual clinical benefit. The approval is conditional—controlled clinical trials to verify the benefits to participants must be completed after approval. If the trials fail to show a benefit, or benefits appear not to outweigh risks, the FDA can withdraw the drug from the market.

Diverse Patients

There has been an impression that older people, women, and blacks were excluded from clinical trials in the past, says Temple. "That isn't true, but what is true is that nobody ever looked to see whether results in men and women, old and young, black and white were different. We led the way in making sure that we looked for possible differences in these subsets of the overall population."

In a series of guidance documents issued beginning in 1989 and continuing through the 1990s, the FDA encouraged researchers to perform analyses of differences among these groups in their response to an experimental drug in clinical trials. In 1998, new regulations required drug sponsors to submit these analyses in their drug applications.

"Collecting better information on the responsiveness of both genders and across all races and ethnic and age groups is a crucial step toward developing better treatments for everybody and reducing health disparities," says FDA Senior Advisor for Clinical Science David Lepay, M.D., Ph.D., who is also the director of the FDA's Good Clinical Practice Program. "It is through efforts such as properly designed and properly conducted clinical trials that we have enormous potential to benefit both individuals and society."

Also in 1998, the FDA promulgated its Pediatric Rule, a regulation that requires manufacturers of certain medical products to conduct clinical trials to assess their safety and effectiveness in children.

And in 2000, a public database of information on enrollment in available federally and privately supported clinical trials was launched online at www.ClinicalTrials.gov, thereby expanding patient access to studies of promising therapies.

The Future of Clinical Trials

The FDA continues to work toward making controlled clinical trials better and more informative.

"There has been growing recognition that individuals can have significant genetic and metabolic differences," says Temple, which makes them differ in their risk for a disease and in their response to a treatment. Scientists will look at genetic and other markers to identify the group of people most likely to respond to a treatment, and future clinical trials will be designed to examine effects in that group of people. Treatments can then be developed for those most likely to benefit, and fewer participants who don't respond well to a treatment will be exposed to its risks.

FDA Deputy Commissioner and Chief Medical Officer Janet Woodcock, M.D., foresees improving the quality of clinical trials by standardizing and automating trial procedures, conduct, and data processing. "We see the entire process becoming automated," she says, "including having little hand-held devices for patients in trials to record their condition so they can report daily on how they're doing or what their symptoms are."

Some clinical trials of the future may have different endpoints than the endpoints used now, says Woodcock. "One of the shifts we're going to see is toward patient-reported outcomes. For symptomatic diseases, what really matters is how you feel, not how the doctor feels about you." So trials may shift from measuring laboratory tests alone, such as viral load, to also measuring how well a person is feeling and functioning.


Development of Controlled Clinical Trials

The Federal Food, Drug, and Cosmetic Act of 1938 required that new drugs be shown to be safe, and provided the foundation for a new system of drug regulation based upon scientific data. Here are some milestones in the development of controlled clinical trials:

1962
Kefauver–Harris Drug Amendments are passed, requiring drug manufacturers to prove to the FDA the effectiveness of their products in controlled clinical trials before marketing them. The amendments also require informed consent from trial participants.

1966
The FDA contracts with the National Research Council of the National Academy of Sciences to evaluate the effectiveness of 4,000 drugs approved on the basis of safety alone between 1938 and 1962.

1968
The FDA forms the Drug Efficacy Study Implementation (DESI) program to act on recommendations of the National Academy of Sciences' investigation of the effectiveness of drugs first marketed between 1938 and 1962.

1970
FDA regulations define what constitutes an adequate and well-controlled clinical trial.

1972
Federal Advisory Committee Act lays the groundwork for groups of independent, expert scientists to advise the FDA on clinical trial study design, types of acceptable evidence to show product safety and effectiveness, and product approval and labeling issues.

1976
The FDA introduces the Bioresearch Monitoring Program—on-site inspections of investigators, sponsors, and institutional review boards (IRBs).

1980
Deficiencies of the Anturane Reinfarction Trial lead to FDA guidance calling for full patient accounting and cause of death determination by investigators blind to the treatment.

1981
Regulations require IRB review and approval of clinical trials regulated by the FDA.

1985
Regulations are revised to include randomized, fixed-dose, dose-response study design as a type of well-controlled study. The design helps find the lowest dose of drug with a useful effect and a dose beyond which there is no further benefit.

Mid-1980s
The FDA begins issuing guidance for industry on how to report clinical trials, influencing design, conduct, and interpretation of studies.

1987
Regulations are revised to expand access to investigational new drugs (INDs) under a treatment IND for patients with serious diseases and no alternative therapies.

1992
Accelerated approval regulation allows the FDA to rely on surrogate markers, with conditions, to approve new treatments for life-threatening diseases.

1993
International Conference on Harmonization adopts the FDA's dose-response principle as the world norm.
The FDA issues guidelines calling for improved assessments of medication responses as a function of gender. Companies are encouraged to include subjects of both sexes in their drug investigations.

1997
The International Conference on Harmonization guidance ICH E6, Good Clinical Practice: Consolidated Guideline, establishes unified quality standards among the United States, the European Union, and Japan for the conduct of human clinical trials.

1998
The FDA promulgates the Pediatric Rule, a regulation that requires manufacturers of selected medical products to conduct clinical trials to assess their safety and effectiveness in children.

An FDA regulation requires drug sponsors to include in their drug applications analyses of any differences in response to experimental drugs among ethnic groups, age groups, and genders in clinical trials.

2000
The National Institutes of Health launches public database on enrollment in available federally and privately supported clinical trials at www.ClinicalTrials.gov, expanding public access to studies.

2006
The FDA issues final guidance on use of data monitoring committees, adding to the protection of subjects in clinical trials.


For More Information

Clinical Trials
www.ClinicalTrials.gov
www.fda.gov/oashi/clinicaltrials

Good Clinical Practice in FDA-Regulated Clinical Trials

Inside Clinical Trials: Testing Medical Products in People

The FDA's Drug Review Process: Ensuring Drugs Are Safe and Effective

FDA Reviews of Approved Drug Products Under Freedom of Information
