
FDA Centennial at CDER

The History of Drug Regulation in the United States


The U. S. Food and Drug Administration, the oldest federal agency dedicated to consumer protection, is a scientific, regulatory, and public health agency that oversees items accounting for 25 cents of every dollar spent by consumers. Its jurisdiction encompasses most food products (other than meat and poultry), human and animal drugs, therapeutic agents of biological origin, medical devices, radiation-emitting products for consumer and professional use, cosmetics, and animal feed. The agency originated as a single chemist appointed to the U. S. Department of Agriculture in 1862; FDA's modern era as a consumer protection agency began with the passage of the Pure Food and Drugs Act in 1906. In 2006 FDA employs more than 10,000 chemists, pharmacologists, physicians, microbiologists, pharmacists, veterinarians, lawyers, and others, with a budget of $1.83 billion.

The challenge of providing the American public with safe and effective medicines has grown in concert with the expansion of the drug armamentarium over the 20th century--the "chemotherapeutic revolution." While this revolution has unquestionably enhanced the public health, it has not been a story of unimpeded progress. Indeed, changes in the way this country regulates drugs typically have been born of adversity--of events that killed and injured thousands.

This historical overview will discuss the evolution of the current drug regulatory system, recognized globally as the gold standard for drug safety and efficacy. The regulation of drugs in America is anchored in landmark legislation during the Progressive Era, the New Deal, and the New Frontier, though other noteworthy developments in this area have emerged in the past 150 years. In addition to the key laws, their enforcement by the FDA will be emphasized. Also, this story will examine how the legislative and judicial branches of government, regulated interests, consumers and their representatives, and the media all played a role in the evolution of this system.

1848: Imported Drugs

With a fledgling domestic industry, the drug supply in 19th century America depended largely on imports. But as the health sciences, professions, institutions, and legal framework in the U. S. lagged noticeably behind European nations, America became a dumping ground for adulterated drugs. British statesman and prominent pharmacist Jacob Bell noted that manufacturers understood that drugs reduced by decay or ingenuity were still "good enough for America." A concern for drug quality led to the establishment of the first pharmacy schools in this country and the publication of the United States Pharmacopoeia, all in the 1820s.

Two developments in the 1840s facilitated a legislative response to the problem. First, Lewis Caleb Beck's Adulteration of Various Substances Used in Medicine and the Arts (1846) provided ample documentation of the problems in the American drug market. Second, the Mexican-American War of 1846-1848 provided a political impetus for a new law. Attributing the high mortality among American soldiers to the administration of weak, adulterated drugs, Congress whipped up support for a law. In truth, the drugs available would have done little against yellow fever, cholera, dysentery, and the other diseases actually responsible; the blame belonged to unsanitary camp conditions and poor nutrition.

The Drug Importation Act, signed by President James K. Polk on June 26, 1848, prohibited the importation of unsafe or adulterated drugs, enforced by a cadre of inspectors stationed at key ports of entry. While the law worked well at first, inspector appointments soon were made on the basis of political spoils rather than qualifications. In addition, the law did not address the proliferating problem of domestic patent medicines. According to eminent physician and pharmacist Edward R. Squibb, the Drug Importation Act was a dead letter by the beginning of the Civil War.


1902: Biological Therapeutics

In 1890 Emil von Behring and Shibasaburo Kitasato in Berlin drew on work about the nature of immunity and the specific character of diphtheria when they discovered an effective antitoxin for diphtheria from blood serum of animals injected with diphtheria toxin. That effort, the identification of nearly two dozen pathogens responsible for specific diseases from 1880 to 1900, and the subsequent discovery of various strains of pathogens, launched a wave of interest in controlling infectious diseases through so-called serum therapy. With varying degrees of success, researchers used serums against tetanus, typhoid, rabies, pneumonia, meningitis, and other diseases.

Americans, led by public health laboratories, quickly adopted the techniques for producing cures for diphtheria and other scourges. But on October 26, 1901, a five-year-old St. Louis girl died from tetanus; two days earlier she had been admitted to the hospital with diphtheria and given the antitoxin. Eventually 13 children in St. Louis died of tetanus, and the cause was traced to a supply of diphtheria antitoxin prepared by the St. Louis Board of Health from a tetanus-infected horse. Although the St. Louis disaster was the worst, it wasn't the only such incident in the United States and Europe. Camden, New Jersey, was the site of almost a hundred cases of post-vaccination tetanus, including the deaths of nine children, in the fall of 1901. The likely source was a commercial concern.

These events spurred action in Congress, and the Biologics Control Act of July 1, 1902, was passed quickly and without any notable opposition. The act mandated annual licensing of establishments to manufacture and sell vaccines, sera, antitoxins, and similar products in interstate commerce. Biologics had to be labeled with the name and license number of the manufacturer, and the production had to be supervised by a qualified scientist. The Hygienic Laboratory, forerunner of the National Institutes of Health, was authorized to conduct regular inspections of the establishments and to sample products on the open market for purity and potency testing. Jurisdiction over biological therapeutics was transferred to FDA in 1972.


1906: Labeling Drugs

The food and drug marketplace was so corrupt that some states began to hire their own chemists to certify the quality and purity of foods and drugs sold within their states, and to defend against the sale of adulterated and inferior foods and drugs from outside. A few states passed sweeping laws, but there was little agreement on standards for foods and drugs. In addition, these laws typically punished retailers, but manufacturers were the greater problem.

Companies wielded substantial influence, especially those in the patent medicine industry. There was little to stop patent medicine makers from claiming anything and putting anything in their products. In fact, by the 1890s patent medicine manufacturers used so-called "red clauses" in their advertising contracts with newspapers and magazines. These muzzle clauses voided the contract if a state law regulating nostrums were passed. Thus not only were many publications silent on the need for such laws, some actively campaigned against them. But the nostrum makers weren't able to stifle the entire fourth estate. A few muckraking journalists helped expose the red clauses, the false testimonials, the nostrums laden with arsenic and other harmful ingredients, and the unfounded cures for cancer, tuberculosis, syphilis, narcotic addiction, and a host of other serious as well as self-limited diseases. The most influential work in this genre was Samuel Hopkins Adams's series "The Great American Fraud," which began in Collier's on October 7, 1905. Analogously, Upton Sinclair's novel The Jungle exposed egregious offenses in the food industry.

More than anyone else, Harvey Wiley, head of the Bureau of Chemistry of the U. S. Department of Agriculture, led the way toward consumer protection. He worked tirelessly for decades to amalgamate the efforts of state food and drug officials, the General Federation of Women's Clubs, and the national associations of physicians and pharmacists toward a comprehensive federal law. That law, the 1906 Pure Food and Drugs Act, prohibited interstate commerce in adulterated or misbranded drugs; it required that the presence and amount of selected dangerous or addictive substances, such as alcohol, morphine, heroin, and cocaine, be declared on the label; and it identified the United States Pharmacopoeia and the National Formulary as official standards for drugs.


1938: Drug Safety

Important as it was, the 1906 act was rife with shortcomings, such as its failure to regulate medical devices or cosmetics, the lack of explicit authority to conduct factory inspections, the difficulty in prosecuting false therapeutic claims following a 1911 Supreme Court ruling, and the inability to control what drugs could be marketed. The S. E. Massengill Company bore out the truth of the last shortcoming when it introduced Elixir Sulfanilamide in September 1937. This attempt to offer a flavorful oral dosage form of the new anti-infective wonder drug was a disaster. The firm used an untested solvent, diethylene glycol, which is chemically related to antifreeze. By the time FDA became aware of the problem and removed the product from pharmacy shelves and medicine cabinets around the country, the preparation had caused 107 deaths, many of them children. The firm, whose president maintained that the deaths were due to idiosyncratic reactions to the sulfa drug, could be prosecuted only for distributing a misbranded drug: an "elixir" had to contain alcohol as a solvent.

The Elixir Sulfanilamide disaster reinvigorated a bill to replace the 1906 act that had been languishing in Congress since 1933. After further refinement, it became the Food, Drug, and Cosmetic Act, which President Roosevelt signed into law on June 25, 1938. Among many other provisions, the 1938 act required firms to prove to FDA that any new drug was safe before it could be marketed: the birth of the new drug application. The new law covered cosmetics and medical devices, authorized factory inspections, and outlawed bogus therapeutic claims for drugs. A separate law brought drug advertising under the Federal Trade Commission's jurisdiction. The law recognized the problem of squaring consumers' desire for self-medication with the introduction of potent and effective new drugs, such as the sulfonamides. Thus drugs had to bear adequate directions for safe use, including warnings whenever necessary.


1951: Prescriptions

With some potent drugs, the thin margin between therapeutic and adverse effects, as well as the variability of dosage or duration of treatment vis-à-vis the individual patient and the disease treated, required informed decisions in drug therapy. With this in mind, FDA ruled within two months of the 1938 act that some drugs simply could not be labeled for safe use by the layperson. Rather, they required medical supervision for individualized directions for use, and had to be labeled accordingly. By 1941 FDA had identified over 20 drugs or drug groups--such as sulfas, barbiturates, amphetamines, and thyroid--that had to be appropriately labeled and sold only on the prescription of a physician or dentist.

From the 1940s to the mid-1960s, illegal sales of dangerous drugs--the vast majority of cases involving barbiturates and amphetamines--occupied more drug regulatory time at FDA than all other drug problems combined. Early on, unlawful over-the-counter sales and unauthorized prescription refills in pharmacies were the principal sources for illegal direct-to-consumer sales. Pharmacies were indeed responsible for most of the illicitly acquired amphetamines and barbiturates, but they were not the only source. Sloppy prescribing habits shared some of the blame. Also, from the early 1950s on, sales through nontraditional channels--truck stops, bars, cafes, individual peddlers, and other venues--increasingly contributed to illegal distribution.

The 1938 act was vague on issues such as what a prescription was and who would be responsible for identifying prescription versus non-prescription drugs. This lack of statutory direction created many battles between FDA, regulated industry, and professional pharmacy, and within some of these groups as well. The 1951 Durham-Humphrey Amendment to the 1938 act helped clarify some of these disputed issues. It identified fairly clear parameters for what constitutes a prescription drug, who would be responsible for identifying such drugs, and the conditions under which a prescription could be refilled.


1962: Drug Efficacy

A push for revision of the drug statutes emerged from hearings into the practices of the pharmaceutical industry by Senator Estes Kefauver of Tennessee, which began in 1959. Though the Kefauver hearings started out as an investigation of the cost of medicines in America, other issues soon came under scrutiny, such as advertising abuses, questionable efficacy of drugs, and the lack of regulation in these areas. But as was the case with the 1902 and 1938 laws, a therapeutic disaster laid bare the need for new legislation. On September 12, 1960, an American licensee, the William S. Merrell Company of Cincinnati, submitted to FDA a new drug application for Kevadon, the brand name of a sedative that had been marketed in Europe since 1956: thalidomide. The FDA medical officer in charge of the review, Frances Kelsey, believed the data were insufficient to establish the drug's safety.

The firm continued to pressure Kelsey and the agency to approve the application--until November 1961, when the drug was pulled off the German market because of its association with grave congenital abnormalities. Several thousand newborns in Europe and elsewhere suffered the teratogenic effects of thalidomide. Though the drug was never approved in this country, the firm distributed Kevadon to over 1,000 physicians under the guise of investigational use. Over 20,000 Americans received thalidomide in this "study," including 624 pregnant patients, and 17 newborns are known to have suffered the effects of the drug.

The thalidomide tragedy resurrected Kefauver's bill to enhance drug regulation, which had stalled in Congress, and the Kefauver-Harris Amendments became law on October 10, 1962. Manufacturers henceforth had to prove to FDA that their drugs were effective as well as safe before they could go on the market. Control over clinical investigations--including a requirement for informed consent--was placed on a firm statutory basis. FDA received authority to regulate advertising of prescription drugs, establish good manufacturing practices as a means to promote quality assurance, and access certain company control and production records to verify production procedures. Finally, the law required that all drugs introduced between 1938 and 1962 be effective. A collaborative study by FDA and the National Academy of Sciences showed that nearly 40 percent of these products were not effective. A similarly comprehensive study of over-the-counter products began ten years later.


1983: Rare Diseases

The early 1980s witnessed the confluence of several different interests on a public health issue that theretofore had received little attention: orphan diseases--serious and debilitating rare diseases affecting fewer than 200,000 people--which typically received little funding toward their prevention or treatment. About 20 million Americans suffer from at least one of the more than 5,000 known rare diseases. Representative Henry Waxman of California initiated hearings into the lack of drugs for orphan diseases. Also, health care providers, researchers, and patient advocacy groups--especially the National Organization for Rare Disorders--promoted the development of orphan drugs as a public issue. A 1981 episode of the television series "Quincy" about Tourette's syndrome helped galvanize public support as well. The Orphan Drug Act finally became law in 1983.

Since pharmaceutical companies had been reluctant to develop treatments for these diseases because there was little chance of recovering their research investments, the law was crafted to induce industry's interest. In the case of an unprofitable or unpatentable drug targeted at a patient population of fewer than 200,000, the manufacturer would receive development grants, assistance from FDA in planning its animal and clinical protocols, 50 percent tax credits for clinical investigation expenses, and a seven-year monopoly to market the drug. Over 700 drugs have received orphan designation so far.
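
As a purely hypothetical illustration of how these incentives add up, the short sketch below encodes the 50 percent clinical-expense tax credit, the seven-year exclusivity, and the 200,000-patient threshold from the paragraph above; the example expense figure and the function name are invented for demonstration.

```python
# Hypothetical illustration of the Orphan Drug Act incentives described
# above. The 50% clinical-expense tax credit, 7-year exclusivity, and
# 200,000-patient threshold come from the text; the $10 million expense
# figure is invented for demonstration.

ORPHAN_POPULATION_CAP = 200_000  # eligibility threshold (patients)
CLINICAL_TAX_CREDIT = 0.50       # credit on clinical investigation expenses
EXCLUSIVITY_YEARS = 7            # marketing monopoly after approval

def orphan_tax_credit(clinical_expenses: float) -> float:
    """Return the tax credit earned on clinical investigation expenses."""
    return clinical_expenses * CLINICAL_TAX_CREDIT

# Example: a sponsor spending a hypothetical $10 million on clinical
# investigation would receive a $5 million credit.
print(f"Tax credit: ${orphan_tax_credit(10_000_000):,.0f}")
print(f"Exclusivity: {EXCLUSIVITY_YEARS} years, for diseases affecting "
      f"fewer than {ORPHAN_POPULATION_CAP:,} patients")
```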


Drug Label Evolution I

In the history of non-narcotic drug control in this country, legislation and regulations have addressed pharmaceuticals on many different fronts to protect the consumer, from testing and manufacture to advertising and distribution. One area of regulatory concern has focused on the principal source of drug information for the physician, pharmacist, and consumer, the drug label. Drug labeling today conveys a vast array of information about indications, dosage, untoward reactions, and other elements, but there was a time when the consumer had no idea what was in a pharmaceutical product, much less what it might do.

Early drug containers were valued for decorative rather than informative elements. There might be no label at all, or at most a symbolic allusion to an entity that may or may not have been related to health. When the name of a drug appeared on a drug container, it was typically abbreviated in Latin, the professional vernacular. Sometimes paper tags were attached to indicate content. Though pharmacopoeias have existed for centuries, early drug jars did not necessarily follow a standard terminology.

In the U. S., there was no federal check on what drugs could contain or claim before 1906. But the 1906 act, principally a labeling law, mandated a variety of information on the drug package. The provision for labeling 11 dangerous ingredients led many patent medicine makers to reformulate their products to avoid honest (and possibly embarrassing) labeling; some simply went out of business. While the 1906 act identified the United States Pharmacopoeia and the National Formulary as the official compendia for drug standards, the law permitted the marketing of nonstandard drugs as long as the label stated the specific variation from the standard.

Manufacturers or wholesalers could label their drugs or foods with a guarantee that the article complied with the law, exempting the retailer from prosecution under the 1906 act. The original version of the guarantee stated, "Guaranteed under the Food and Drugs Act, June 30, 1906." However, some manufacturers advertised the guarantee as a government endorsement. A revised guarantee in December 1908 read, "Guaranteed by [name of guarantor] under the Food and Drugs Act, June 30, 1906." Convinced that the public continued to be misled, the government abandoned the labeled guarantee in 1918.


Drug Label Evolution II

The 1938 law improved on many of the labeling shortcomings of the 1906 act. For example, it mandated complete listing of ingredients, and it required that drugs be labeled with adequate directions for safe use. The adequate directions mandate affected both the consumer and the prescriber. By 1940 FDA had developed over two dozen warning statements for different drugs intended for sale directly to the public. The agency's decision in 1938 that some drugs simply were too dangerous for labeled directions to the consumer led to the requirement that such drugs be labeled for prescription distribution only, a requirement that the 1951 Durham-Humphrey Amendment codified as the prescription drug legend. That a drug was transferred to prescription use did not invalidate the need for adequate directions for use, which FDA required the manufacturer to make available to the prescriber, often in the form of a package insert. The law exempted investigational new drugs from these labeling provisions.

Most of the basics of a drug label--such as the listing of ingredients and directions for the patient and prescriber--were established in the first half of the century, but that is not to say the latter half of the century has been without important developments. Consumer information on both over-the-counter and prescription drugs has increased substantially over the past few decades. In 1970 FDA issued regulations requiring oral contraceptives to provide patients with information about their use--the beginning of patient package inserts for prescription drugs. In 1994 the agency introduced prototypes for over-the-counter drug labels in an easy-to-read format, and in 1995 FDA announced a program to provide patients with more information about prescription drugs with significant risks through standardized literature, or "medguides," provided by pharmacists.


Drug Approval in the 1990s

The standard drug testing and approval process today involves several stages. First, pre-clinical research and development--including animal testing--can take from one to three years. Next, once a firm files an investigational new drug application with FDA, phase 1 clinical studies proceed to examine the drug's toxicity and pharmacology in 20 to 100 volunteers, and this stage requires several months. Then the drug is tested in larger groups of patients who have the disease the drug is intended to treat. Phase 2 clinical studies, with as many as several hundred patients, last from several months to two years. Phase 3 investigates the drug in several hundred to several thousand patients for one to four years. Then the results, in the form of a new drug application, are reviewed by FDA over an average of about two years. Advisory committees of scientists, health care professionals, and consumer representatives outside the agency consult on drug reviews, but the final decision rests with FDA. If the agency approves the drug, post-marketing surveillance will continue after the medicine is on the market. Of every 100 drugs that begin the investigational process, about 20 will be approved by the agency.
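
To make the cumulative arithmetic of these stages easier to follow, here is a minimal sketch in Python that simply adds up the duration ranges quoted above. The stage names and figures come from this paragraph; treating "several months" as roughly half a year is an assumption, and the code itself is purely illustrative, not an FDA tool.

```python
# Illustrative tally of the 1990s-era drug testing and approval
# timeline, using the stage durations quoted in the text above.
# "Several months" is approximated as 0.5 years -- an assumption.

stages = [
    # (stage, minimum years, maximum years)
    ("Pre-clinical R&D (incl. animal testing)",     1.0, 3.0),
    ("Phase 1 (20-100 volunteers)",                 0.5, 1.0),  # "several months"
    ("Phase 2 (up to several hundred patients)",    0.5, 2.0),  # months to 2 years
    ("Phase 3 (hundreds to thousands of patients)", 1.0, 4.0),
    ("FDA review of new drug application",          2.0, 2.0),  # ~2-year average
]

low = sum(lo for _, lo, _ in stages)
high = sum(hi for _, _, hi in stages)
print(f"Total development and review: roughly {low:.1f} to {high:.1f} years")

# Of every 100 drugs entering the investigational process, about 20
# are ultimately approved.
approval_rate = 0.20
print(f"Expected approvals per 100 investigational drugs: {int(100 * approval_rate)}")
```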

In the last two decades there have been several significant changes to the drug approval process. For example, treatment INDs (1987) expand patient access to experimental drugs for serious diseases with no alternative therapies. Accelerated approval mechanisms (1988-1992), including study designs developed by the sponsor with FDA, speed approval of drugs for life-threatening diseases based on findings that predict therapeutic benefit, though the drug sponsor must continue studies on actual clinical benefits. Parallel track investigations (1992) make experimental drugs more widely available to HIV patients while controlled trials of the drugs continue. As much as any interest group, the community of patients, families, and others affected by HIV and AIDS has actively engaged the agency in drug approval policies.

The Prescription Drug User Fee Act of 1992 provides for sponsors to fund drug review and ancillary expenses in support of speedier evaluation of new drug and biologic applications and elimination of the backlog of pending applications. The combination of adding drug review staff through user fees and streamlining review procedures independently of the 1992 law has reduced approval times significantly. Drugs for serious and life-threatening diseases such as cancer and AIDS are now typically approved in less than six months, and sometimes in a few weeks. The agency's drug review improvements were recognized in October 1997 by Harvard University's John F. Kennedy School of Government with the Innovations in American Government Award.


Drug Approval in the 21st Century

FDA began work in the 1990s to develop standards for the exchange of electronic information critical to the agency's mission. This recognized both the inefficiency of paper for transferring mass quantities of data and the need to develop a harmonized format that would be usable by FDA as well as its counterparts in the European Union and Japan. Consequently, firms are now able to submit paperless product applications and related material to world regulatory agencies more efficiently, while each review authority maintains its own high standards for product evaluation.

Because all drugs have some risk, a 1999 FDA task force advised the agency to make more systematic use of the principles of risk management in the way FDA oversees drug development and marketing. Following this recommendation, FDA implemented a model for risk management that identifies the risks at stake when using a drug, finds ways to minimize them, conveys information to those affected, and oversees how effectively risks are being contained. While there is no such thing as risk-free drug therapy, this approach to drug regulation will help reduce the potential harm to the patient.

Despite various reforms to FDA's processes, developing new medicines has become increasingly expensive and time-consuming. The Critical Path Initiative is FDA's effort to address the need for up-to-date scientific means of evaluating the safety, efficacy, and quality of medicines. The objective of this initiative is to reduce the time, cost, and uncertainty of product development.


Created: May 26, 2006