2000 FDA SCIENCE FORUM

Abstracts for the Plenary and Break-out Sessions

(not all abstracts are available)

Breakout Session A (02/14): Food Safety Initiative - The Risk Perspective

The Promise and Progress of the Food Safety Initiative
Susan Alpert, M.D., Ph.D., Food and Drug Administration, 200 C St, SW, Washington, DC, 20204

The focus of the multi-Departmental, multi-Agency and, within FDA, multi-Center Food Safety Initiative is the risks and hazards presented by the food supply. Although the US food supply is arguably the safest in the world, a significant number of illnesses and deaths are still attributable to foods. Not all of the microbiological, chemical or pesticide risks are yet clearly understood, and new pathogens continue to emerge with food as a vehicle for transmission. Together, these factors pose a number of challenges for this effort.

A major goal of the Food Safety Initiative is to have an effective system to protect the public health from food-borne illness and hazard. The focus is on the entire system – "From Farm to Table." Such a system can only be effective if it is based on good science that is evaluated and leads to definitive actions that can be implemented with impact. A key step in this continuum is the evaluation of the science.

One objective of the initiative is to strengthen the scientific basis for food safety policies and regulatory decisions through rigorous assessments of risk. Once a risk is identified, its quantification and relevance to the public health must be determined. To do this, we need to bring the best scientists, reliable reproducible data and appropriate methods for assessment together in a meaningful way. Only then can we analyze how this risk compares with others and use that information to frame meaningful public policy.


Breakout Session A (02/14): Food Safety Initiative - The Risk Perspective

Building Better Outbreak Investigations: From Detection to Source
Craig W. Hedberg, Division of Environmental and Occupational Health, University of Minnesota School of Public Health, Minneapolis, MN 55455

Outbreak investigations are critical to identify new food safety hazards and provide feedback on the effectiveness of food safety systems. Outbreaks are detected in one of two ways: 1) illnesses associated with events or establishments and 2) case clusters identified through pathogen-specific surveillance. The first requires prompt, thorough investigation to identify the agent and the source. This may be the only way to identify "new" foodborne pathogens, such as diarrheogenic E. coli or Cyclospora. The second requires epidemiologic investigation, aided by molecular subtyping, to identify the source. Both require close collaboration between epidemiologists and public health laboratories. Both also require close collaboration with environmental health specialists and field investigators to identify and trace the source of potential vehicles. Although product tracebacks are frequently started only after a food item has been implicated, detailed product information is actually critical to epidemiologic analysis. Thus, collecting product source information should begin as early in the investigation as possible. This will help build the better investigations that are needed to improve the safety of our food supply.


Breakout Session A (02/14): Food Safety Initiative - The Risk Perspective

Research Opportunities Under the Food Safety Initiative
Arthur J. Miller, Center for Food Safety and Applied Nutrition, Food and Drug Administration, Washington, DC 20204

The Food Safety Initiative (FSI) offers novel research funding opportunities within FDA. While laboratory-based research on microbial pathogens serves as the keystone of the program, mixing laboratory and field studies to better address regulatory research and risk assessment needs is encouraged. The current intramural portfolio includes 22 projects supporting the 1997 National Food Safety Initiative and the Produce and Imported Food Safety Initiative. This is strengthened by collaborations with the National Center for Food Safety & Technology and the Joint Institute for Food Safety & Applied Nutrition. A purchasing program has funded over $1 million in new equipment to date, and will continue. In FY2000, an intramural competitive postdoctoral program will be initiated. Collaboration with ORA is fostered by funding 4 new ORA research positions, and by continuing a research exchange program, to promote technology transfer. Extramural activities include a research and risk assessment competitive grant program, and HACCP implementation research on unpasteurized apple cider and orange juice, and seed sprouts. Together these programs contribute to the science-based development and implementation of regulatory activities, and to rapid responses in emergency situations.


Breakout Session A (02/14): Food Safety Initiative - The Risk Perspective

FDA Perspectives on Methylmercury in Fish
P. Michael Bolger, Ph.D., D.A.B.T., Chief, Contaminants Branch, Center for Food Safety and Applied Nutrition, US Food and Drug Administration, Washington, DC

Methylmercury (MeHg) is an environmental contaminant which ultimately arises from both natural (e.g., volcanic) and human-derived sources (e.g., coal-derived power generation). It is enriched in the food chain, specifically the aquatic food chain, with the highest levels found in certain larger, long-lived fish. Information from several poisoning episodes in Japan and laboratory animal experiments prompted the FDA to establish an action level of 1 ppm in 1979. MeHg is almost completely absorbed from the gastrointestinal tract and the primary target organ is the nervous system. Humans may be particularly susceptible to the toxic effects of MeHg because of the potential susceptibility of the developing nervous system. Effects on the fetus may occur in a dose-response range lower than for effects in adults (e.g., paraesthesia). Since the earlier poisoning episodes, newer information has been published from well-designed prospective studies which specifically analyzed the effect of low-level MeHg exposure via fish consumption on neurological development. The potential differential sensitivity between the adult and the fetus is a critical issue in the context of safety/risk assessment. Depending on the dataset, the use of the traditional safety/negligible risk assessment paradigm can result in an answer which identifies certain levels of exposure to be unsafe. This process has always been useful in providing a yes or no answer, but it does not describe the risk in an absolute sense, nor does it provide a means of gauging the level of effort appropriate for reducing the risk above the safe level of exposure. The determination of appropriate public health measures required to mitigate MeHg exposures requires quantitative estimates of the risks associated with degrees of exposure to MeHg.
The risks of MeHg exposure must be weighed against the benefits of fish consumption, and the extent to which this is accomplished will directly influence the degree to which public health decisions are well informed.


Breakout Session A (02/14): Food Safety Initiative - The Risk Perspective

The Power of Prevention Strategies
Robert E. Brackett, Center for Food Safety and Quality Enhancement, University of Georgia, Griffin, GA 30223

The best strategy for reducing the incidence of foodborne illness is to prevent hazards before they have the opportunity to cause illness. Such a strategy will involve all aspects of food production, processing, and distribution. The employment of the Hazard Analysis Critical Control Point (HACCP) system is an initial and foundational component in assuring the safety of foods. The HACCP system employs identification of hazards, means to control the hazards, and monitoring of the controls. The use of quantitative risk assessment is emerging as one of the most important tools in identifying foods and foodborne pathogens of highest priority for regulatory concern and research focus. New pathogen reduction strategies involve modern food processing techniques to reduce or eliminate pathogens. New and innovative techniques such as high pressure processing, pulsed-light and ohmic heating help to maintain the quality of foods while eliminating foodborne pathogens. Innovative environmental and microbial monitoring methods such as rapid detection techniques and biosensors can assure that proper sanitation and pathogen reduction technologies maintain food safety. Finally, education is an important component of any prevention strategy to reduce foodborne illness. Agricultural and food processing workers need to be informed of their role in assuring food safety and trained in proper food handling techniques. Efforts should also be made to develop innovative and understandable means to educate consumers regarding their role in minimizing foodborne illness.


Breakout Session B (02/14): Postmarket Surveillance - Beyond Passive Surveillance


Susan N. Gardner, Office of Surveillance and Biometrics, Center for Devices and Radiological Health, Food and Drug Administration, Rockville, MD 20850

The Safe Medical Devices Act of 1990 and its 1992 amendments mandated universal reporting of device-related adverse events by user facilities. The FDA Modernization Act of 1997 has legislated that the system be changed to reporting by a representative subset of users. FDA has proposed a newly designed device surveillance network that not only meets the legislative criteria, but incorporates lessons learned from a pilot study that looked at barriers to reporting, and advice from experts in the fields of surveillance and medical error reporting. Goals of the proposed system include collecting high quality data about adverse device-related events, analysis of data to quickly identify emerging problems and changes in device use, and timely dissemination of information to health care professionals and the public. The knowledge gained from this reporting system will be applied to the device approval process and shared with industry to aid in improving device performance.


Breakout Session B (02/14): Postmarket Surveillance - Beyond Passive Surveillance

Pharmacovigilance in France: Role of the Regional Centers
Nicholas Moore, M.D., Ph.D., Department of Pharmacology, University of Bordeaux 2, Bordeaux, France

Spontaneous reporting of adverse drug reactions (e.g., MedWatch) is the mainstay of regulatory pharmacovigilance. Typical spontaneous reporting, however, may be hindered by a number of problems such as lack of knowledge of the existence of a reporting system, misunderstanding of the need to report, uncertainty as to what to report, poor quality of reports, triviality of reports, or lack of interaction with a distant administration. In such a context, the establishment of local or regional centers may be beneficial. In France, a network of 31 regional centers was progressively established from 1972 to 1995. These centers are in clinical pharmacology departments of university hospitals, and are staffed by medical doctors and pharmacists. They receive spontaneous reports from health professionals. They also act as drug information centers, answering health professionals’ questions about the use and risks of drugs, and can participate in the clinical management of adverse drug reactions, at the physicians’ request. This has a twofold consequence: physicians perceive the regional center as a useful resource for information and patient management rather than as a distant administration, and the centers retrieve high-quality data on clinically significant and often novel adverse drug reactions. By early interaction and participation in case management, more information on cases can be generated as needed, such as relevant lab tests. Some centers have consultation services, to which physicians can refer their patients. Being within reference hospitals, center personnel can interact directly with the departments receiving the more serious cases, such as dermatology, intensive care, hematology or hepatology, for systematic retrieval of incident cases.
Additionally, since center personnel participate in medical and pharmaceutical curricula, they can teach future physicians the management of ADRs, and further the awareness of the existence of and need for such structures and reporting. They are also involved in research on ADRs. This research ranges from the fundamental, e.g., pharmacokinetics or pharmacogenetics, to epidemiological studies, using reporting data or ad hoc studies. The centers receive 18,000 reports yearly, up from about 9,000 ten years ago, for a population of 55 million inhabitants. 30 to 40% qualify as serious reports. This represents about half the reports generated yearly in the country, the other half being reported to industry. Every report transmitted to a regional center is reviewed for completeness and causality using a common method, before being input locally into a common database accessible by all centers, and by the central drug agency. Centers provide the National Medicines Agency with a pool of experts for commissions and boards, and to review alerts. Nearly 200 peer-reviewed papers are published each year by the centers. Regionalisation of pharmacovigilance represents an effective use of local resources, involving clinical pharmacologists in routine regulatory activity and research. It results in the creation of a population of clinical pharmacologists trained for drug safety assessment.


Breakout Session B (02/14): Postmarket Surveillance - Beyond Passive Surveillance

Bayesian Data Mining in Large Frequency Tables with an Application to the FDA Spontaneous Reporting System
William DuMouchel, Ph.D., AT&T Labs - Research, Florham Park, NJ

A common data mining task is the search for associations in large databases. Here we consider the search for "interestingly large" counts in a large frequency table, having millions of cells, most of which have an observed frequency of 0 or 1. We first construct a baseline or null hypothesis expected frequency for each cell, and then suggest and compare screening criteria for ranking the cell deviations of observed from expected count. A criterion based on the results of fitting an empirical Bayes model to the cell counts is recommended. An example compares these criteria for searching the FDA Spontaneous Reporting System database maintained by the Division of Pharmacovigilance and Epidemiology. In the example, each cell count is the number of reports combining one of 2,524 drugs with one of 941 adverse events (total of cell counts = 4.4 million), and the problem is to screen the drug-event combinations for possible further investigation. Key Words: Adverse drug reactions, association, gamma-Poisson model, mixture model, shrinkage estimate.
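The screening idea above can be illustrated with a toy observed-versus-expected computation. This is only a sketch: the drug and event names and counts below are hypothetical, and the crude additive shrinkage stands in for the gamma-Poisson empirical Bayes model the abstract actually recommends.

```python
from collections import defaultdict

# Hypothetical report counts: (drug, event) -> number of reports.
reports = {
    ("drugA", "rash"): 20, ("drugA", "nausea"): 5,
    ("drugB", "rash"): 2,  ("drugB", "nausea"): 40,
    ("drugC", "rash"): 1,  ("drugC", "nausea"): 3,
}

total = sum(reports.values())
drug_totals, event_totals = defaultdict(int), defaultdict(int)
for (drug, event), n in reports.items():
    drug_totals[drug] += n
    event_totals[event] += n

# Baseline expected count for each cell under independence of drug and
# event, then a crude shrunk ratio (observed + a) / (expected + a) that
# pulls small-count cells toward 1 -- mimicking, very roughly, the effect
# of empirical Bayes shrinkage (the real method fits a gamma-Poisson
# mixture across all cells).
a = 0.5
scores = {}
for (drug, event), n in reports.items():
    expected = drug_totals[drug] * event_totals[event] / total
    scores[(drug, event)] = (n + a) / (expected + a)

# Rank drug-event combinations for possible further investigation.
ranked = sorted(scores, key=scores.get, reverse=True)
```

In this toy table the drugA-rash cell is reported far more often than independence would predict, so it ranks first for follow-up.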


Breakout Session B (02/14): Postmarket Surveillance - Beyond Passive Surveillance

Post Market Surveillance in a HACCP Environment
John E. Kvenberg, Food and Drug Administration, Center for Food Safety and Applied Nutrition, Office of Field Programs, Washington, DC 20204

The Hazard Analysis Critical Control Point (HACCP) system is a preventive system for assuring the safe production of food products. HACCP principles have long been in place in the FDA-regulated low-acid canned food (LACF) industry. Currently, within the USA, the FDA has mandated HACCP for seafood processing and is proposing to mandate HACCP for facilities producing fruit and vegetable juices. FDA has incorporated HACCP into its Food Code, a document that gives guidance to and serves as model legislation for state and territorial agencies that license and inspect food service establishments, retail food stores, and food vending operations in the United States. In addition to the regulations, the agency is conducting pilot HACCP programs with volunteer food companies. The pilot program is intended to provide information that food science professionals can use in determining whether HACCP should be expanded beyond seafood as a food safety regulatory program. FDA advocates HACCP as the food safety control system of choice.


Breakout Session B (02/14): Postmarket Surveillance - Beyond Passive Surveillance

The Value and Limitations of Pregnancy Drug Registries
Janet D. Cragan, Centers for Disease Control and Prevention, Food and Drug Administration, Atlanta, GA 30341

Interest in establishing pregnancy registries to improve post-marketing surveillance for prescription drugs used by women of reproductive age has increased among pharmaceutical manufacturers. Established registries include those for acyclovir/valacyclovir, antiretroviral drugs, antiepileptic drugs, sumatriptan/naratriptan, and bupropion. These registries prospectively record voluntary reports by health-care practitioners of patients exposed prenatally to the drug of interest. The presence of major defects is then obtained from the same practitioners after delivery, and defect rates are compared with those reported for the general population. The Acyclovir Pregnancy Registry is the largest, begun by Glaxo Wellcome in 1984. As of July 31, 1998, the proportion of births with defects in the registry was 28/1045 (2.7%, 95% CI: 1.8-3.9%). No increased risk or unique pattern of defects was identified. Strengths of the registry approach include its prospective nature, which minimizes selection bias, and its statistical power to detect a defect resulting from a specific drug exposure. Weaknesses include the lack of power due to small sample sizes, lack of detail about exposures and outcomes, and lack of a defined comparison group. Additional issues include loss to follow-up and data confidentiality.
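The interval quoted above (28 defects among 1,045 births) can be approximated with a Wilson score interval, sketched below. The registry itself likely used an exact binomial method, so these endpoints differ slightly from the published 1.8-3.9%.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# 28 major defects reported among 1,045 registry births.
lo, hi = wilson_interval(28, 1045)
```

The Wilson interval is preferred over the simple normal approximation here because the proportion (about 2.7%) is small relative to the sample size.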


Breakout Session C (02/14): Contemporary Issues in Risk Assessment

Confidence Associated with Multiple Safety Factors: Probabilistic Values for Acceptable Daily Intakes
Ralph L. Kodell, National Center for Toxicological Research, Food and Drug Administration, Jefferson, AR 72079

Acceptable daily intakes (ADIs) of potentially toxic substances are often derived by reducing experimental no-observed-adverse-effect levels (NOAELs), lowest-observed-adverse-effect levels (LOAELs), or benchmark doses (BMDs) by a product of uncertainty factors. These factors are presumed to ensure safety by accounting for uncertainty in dose extrapolation, uncertainty in duration extrapolation, differential sensitivity between humans and animals, and differential sensitivity among humans. The common default value for each uncertainty factor is 10, but the degree of safety provided by factors of 10 has not been quantified satisfactorily. In order to provide a stronger scientific basis for selecting uncertainty factors, researchers have compiled databases reflecting chemical-to-chemical variability with respect to each of these sources of uncertainty. These databases indicate that such variability may be characterized by distributions of lognormal random variables. This presentation describes a statistical procedure for using estimates of means and standard deviations of these individual distributions to estimate percentiles of the distribution of the product of uncertainty factors. An upper percentile of the distribution of this product can be chosen to ensure a high degree of safety (say, 95% or 99%) for ADIs with respect to these combined sources of uncertainty. Based on the databases examined here, a simple "rule of 3's" is suggested as a short-cut procedure for choosing a combined uncertainty factor that exceeds the estimated 95th percentile of the distribution of the product of uncertainty factors. With this rule, if only a single uncertainty factor is required, it should be 33. For any two sources of uncertainty, a factor of 3 × 33 ≈ 100 should be used. Any three sources of uncertainty would require a combined factor of 3 × 100 = 300, and any four sources a combined factor of 3 × 300 = 900. For 99% assurance of safety, an additional factor of 3 should be used.
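The arithmetic of the rule of 3's can be tabulated directly. This is only a restatement of the rule as described in the abstract, not an implementation of the underlying lognormal model:

```python
def combined_uncertainty_factor(n_sources, assurance=0.95):
    """Combined uncertainty factor under the 'rule of 3's':
    33 for one source of uncertainty, then roughly a further factor
    of 3 per additional source (3 x 33 ~ 100, then 300, then 900),
    with one more factor of 3 for 99% rather than 95% assurance.
    The rule as stated covers one to four sources."""
    factors = {1: 33, 2: 100, 3: 300, 4: 900}
    if n_sources not in factors:
        raise ValueError("rule of 3's is defined for 1-4 sources")
    uf = factors[n_sources]
    if assurance >= 0.99:
        uf *= 3
    return uf
```

For example, an ADI derived from a NOAEL with all four default sources of uncertainty would divide the NOAEL by 900 under this rule, rather than by the conventional 10^4.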


Breakout Session C (02/14): Contemporary Issues in Risk Assessment

Microbial Food Safety Risk Assessment at the FDA Center for Food Safety and Applied Nutrition: From Concept to Reality
Robert L. Buchanan, Ph.D., Senior Science Advisor, Food and Drug Administration, Center for Food Safety and Applied Nutrition, Washington, DC 20204

The FDA Center for Food Safety and Applied Nutrition has a long history of using risk assessment techniques as a tool to help ensure that its decisions related to the chemical safety of foods are based on the best scientific information available. Historically, equivalent techniques were not available for food safety concerns related to infectious microorganisms. However, as a result of recent scientific advances and the support of the National Food Safety Initiative, microbial food safety risk assessments have gone from a concept to an increasingly important regulatory tool in less than 5 years. The CFSAN is learning rapidly how this new tool can be used most effectively, and how the results of the assessments should be interpreted and effectively communicated to interested parties.


Breakout Session C (02/14): Contemporary Issues in Risk Assessment

Risk Assessment Methodology for Medical Devices and Radiation-Emitting Products
Ronald P. Brown and W. Howard Cyr, Ph.D., Center for Devices and Radiological Health, Food and Drug Administration, Rockville, MD 20857

FDA's Center for Devices and Radiological Health (CDRH) is responsible for ensuring the safety and effectiveness of medical devices and eliminating unnecessary human exposure to man-made radiation from medical, occupational and consumer products. This broad mandate requires microbial, chemical and radiation risk assessments to be performed to support regulatory decision making in these areas. For example, microbial risk assessments have been performed to support the establishment of Sterility Assurance Levels (SAL) for devices. Chemical risk assessment activities in CDRH focus on three areas: 1) the development and validation of new risk assessment methodologies, 2) bench-top research to provide information for the hazard identification and dose-response assessment stages of the risk assessment process, and 3) the application of risk assessment approaches to assist with regulatory decision making. Risk assessment methodologies under development in CDRH include PBPK models to describe the pharmacokinetic behavior of compounds released from devices and a risk assessment-based approach for the biological evaluation of medical devices. Risk assessments have been used by CDRH to assist in reaching decisions on various issues that have received considerable attention in the media, including the safety of phthalate esters released from PVC devices, dioxin released from tampons, and 2,4-toluenediamine released from polyurethane-foam covered breast implants. CDRH actively participates in the development of consensus standards for risk assessment through various organizations, including the International Organization for Standardization (ISO).

Radiation risk assessments have been conducted for exposure to electronic product radiation in a wide variety of situations, e.g., malfunctioning diagnostic or therapeutic x-ray machines; UV-emitting sunlamps, tungsten halogen lamps, and fluorescent lamps; microwave and extremely low frequency radiation from cell phones and police radar; and ultrasound diagnostic imaging systems. These radiation risk assessments often use assumptions, models and procedures that do not necessarily fit into the methodology that has been developed for chemical risk assessments. Furthermore, with radiation, there is often a high natural background level of exposure. Sometimes this level can be very high, as in the case of ultraviolet radiation. In such cases, a fraction of the background level may be used to establish a regulatory limit.


Breakout Session C (02/14): Contemporary Issues in Risk Assessment

Recent Advances in Computational Toxicology and Regulatory Applications
Joseph F. Contrera, Ph.D. and Edwin J. Matthews, Ph.D., Center for Drug Evaluation and Research, Office of Testing and Research, Food and Drug Administration, Rockville, MD 20857

There is an urgent and growing need to develop and apply better, more efficient and scientifically defensible methods to meet regulatory mandates, prioritize research and reduce unnecessary testing and animal use. CDER holds a unique resource of scientific information from both clinical and animal studies that has applicability beyond the area of pharmaceuticals. Toxicology and clinical adverse event databases have been created at CDER, and the challenge now is developing effective ways to convert this information into useful knowledge to advance the science of risk and hazard assessment. Computational toxicology offers a means of rapidly analyzing large databases and identifying relationships and patterns which are especially useful to support regulatory and product discovery decisions. Computational toxicology software modules to evaluate carcinogenic potential were developed and validated at the CDER Office of Testing and Research under a Cooperative Research and Development Agreement, and others are in development. Computational toxicology modules operate as "virtual experts," assessing the potential toxicological properties of compounds by evaluating their degree of similarity to compounds with known chemical, physical and biological properties in the CDER database. As a result of collaboration with CFSAN, computational toxicology software was developed and is being used to support regulatory decisions related to new FDAMA requirements for food contact substances. At CDER, computational toxicology is being used to support regulatory decisions on the necessity, nature and extent of testing for excipients, contaminants or degradants that are identified late in the NDA process. The knowledge derived from the extensive resource of scientific information from both clinical and animal studies at CDER is being applied to improve the scientific basis of regulatory decisions and to facilitate the development of safe products.


Breakout Session C (02/14): Contemporary Issues in Risk Assessment

Imaging Technology: The Emerging Revolution in Toxicology and Risk Assessment
David S. Lester, Food and Drug Administration, Center for Drug Evaluation and Research, Division of Applied Pharmacology Research, Laurel, MD 20708

Imaging technologies, including CT scanning, Magnetic Resonance Imaging (MRI), and ultrasound, have had a tremendous impact on the clinical sciences, in particular, the diagnosis of disease. These techniques allow rapid, noninvasive, high-resolution analysis of the state of disease progression of many pathologies. In addition, imaging can be used to monitor the progress of treatments. Examples will be given. Imaging approaches can be used to monitor structural and/or functional changes of numerous tissues. Structures or activities can be monitored using intrinsic or extrinsic markers or probes. In general, MRI is most useful for detecting and locating structural changes, which may be a strong indicator of potential toxicity. Functional changes are more complex to interpret, as it is difficult to distinguish pharmacological from toxicological effects. While the majority of imaging technologies have been designed for clinical procedures, there is increasing interest in the application of these approaches to animal models. This provides an opportunity to monitor the same animal over an extended treatment period. It may supply some insight into biological endpoints that could be monitored in clinical trials. The use of imaging for identification of potential surrogate markers will be discussed. It is clear that imaging has not been directed towards toxicology and risk assessment; however, the potential is there and should be encouraged by industry, academics, and government regulatory agencies.


Breakout Session D (02/14): Women's Health and the Science of Gender Differences

Investigation of Sexual Dimorphism in the Inflammatory Response to Biomaterials
K. Barry Delclos, Ph.D., National Center for Toxicological Research, Food and Drug Administration, Jefferson, AR 72079

The extent and resolution of the host inflammatory response induced by implantation of a foreign material play a major role in determining the long-term success of an implanted medical device. Sexual dimorphism in the immune response is well documented in both animals and humans, and pilot projects were initiated to evaluate possible sex differences in the response to polydimethylsiloxanes (PDMS). Premenopausal women, postmenopausal women on hormone replacement therapy, and males age-matched to the females were recruited for the study. Premenopausal women were sampled in the luteal and follicular phases of the menstrual cycle, and the other groups were sampled twice at a similar interval. The chemiluminescent response to the materials was measured to determine the ability of the materials to activate cells in the absence of known cell stimulators. Subsequently, human serum-opsonized zymosan was added to measure the extent to which the phagocytic activity of the cells is modified by the materials. Whole blood and monocytes from females gave a significantly stronger oxidative burst to the materials than did cells from males. The inflammatory response in male and female Balb/c mice to subcutaneous PDMS implants was evaluated by histology and by immunohistochemical staining for the inflammatory mediators tumor necrosis factor-α and interleukin-1β (IL-1β). The reaction at the implant site was qualitatively similar in both males and females. The intensity of staining for the inflammatory mediators, particularly IL-1β, was stronger in the females at early time points. The response of females in these studies was modestly greater than that of males, but whether this difference is of sufficient magnitude to have long-term impact on the stability or performance of implanted devices remains to be determined.


Breakout Session D (02/14): Women's Health and the Science of Gender Differences

Reproductive Toxicity: New Perspectives on Preclinical Studies for Vaccines
Marion F. Gruber, Ph.D., Center for Biologics Evaluation and Research, Food and Drug Administration

The Office of Vaccines Research and Review of the Center for Biologics Evaluation and Research of the US Food and Drug Administration has responsibility for regulating a broad spectrum of vaccine products for the prevention and therapy of infectious disease indications. The target population for these products frequently includes women of childbearing potential and, in certain instances, pregnant individuals. Under current FDA regulations (21 CFR 201.57(f)(6)), the label for a licensed product must include narrative information about the product's teratogenic effects and effects on reproduction and pregnancy, as well as on postnatal development. However, for most biological products, including vaccines, this subsection of the label contains a paucity of data from human clinical trials and minimal or no preclinical reproductive toxicity data. A number of FDA-wide initiatives to address these concerns are ongoing, with the ultimate goal of providing additional relevant preclinical and clinical information for the estimation of risks imposed on women and/or the developing offspring. The Maternal Immunization Working Group within CBER has been charged with addressing requirements for the performance and design of preclinical reproductive toxicity studies for vaccines and with developing comprehensive policy in this area. The recommendations set forth by the working group are summarized in a guidance document for evaluation of the reproductive toxicity potential of vaccines indicated for maternal immunization as well as for women of childbearing potential. In addition, the guidance authored by the International Conference on Harmonisation (ICH) entitled "Detection of Toxicity to Reproduction for Medicinal Products," addressing the design of animal reproductive/developmental toxicity studies, has been used by CBER, where appropriate, to assess the reproductive/developmental toxicity potential of biological products including vaccines.
The CBER approach to reproductive toxicity testing for vaccine products will be discussed.


Breakout Session D (02/14): Women's Health and the Science of Gender Differences

Gender Differences: New Perspectives in Pharmacokinetics and Pharmacodynamics
Raymond L. Woosley, M.D., Ph.D., Department of Pharmacology, Georgetown University Medical Center, Washington, DC 20007

Conventional medical education teaches that there are hormonally responsive tissues with specific hormone receptors. For sex hormones, these tissues are usually considered to be those responsible for sexual differentiation and/or reproduction. Cardiac muscle, like many tissues, has generally been considered unresponsive to sex hormones. However, we now know that cardiac tissue contains both estrogen and androgen receptors, although their function was unknown until recently. Although it has been known for over 70 years that the QT interval on the electrocardiogram is longer in adult females than in males, only recently has it been observed that this difference develops at puberty. Another recent observation is that sex hormones, perhaps acting through their receptors (although this is not clear), can alter myocardial function. This research demonstrated that sex hormones can alter the expression of drug targets (ion channels) in cardiac tissue and their response to drugs. These results explain prior clinical and experimental observations. For example, after the local anesthetic bupivacaine, previously considered safe, began to be associated with cardiorespiratory arrest in pregnant patients, experiments in animals demonstrated that pretreatment with progesterone markedly increased the sensitivity of cardiac ion channels to block by bupivacaine. Furthermore, the far greater incidence of life-threatening ventricular arrhythmias in women than in men treated with drugs such as quinidine, sotalol, terfenadine, erythromycin, and halofantrine (all potassium channel blockers) can be at least partially understood in light of recent data indicating that sex hormones and gender influence the expression and drug sensitivity of potassium channels in the heart. Males show a smaller change in the cardiac QT interval at the same plasma concentration of quinidine.
Physicians caring for women and scientists developing new drugs should be aware of the potential physiologic and pharmacologic effects of sex hormones on cardiac and other tissues that could alter response to the actions of drugs.


Breakout Session D (02/14): Women's Health and the Science of Gender Differences

Risk Communication – Measuring Confidence in Hazard Claims: Gender Differences in the Perception of Risk
John D. Graham, Harvard Center for Risk Analysis, Harvard School of Public Health, Boston, MA 02115

This study examines the degree of confidence lay people and scientists have in hazard claims publicized by the mass media. Confidence levels for different hazards were compared and related to individual respondent characteristics. A survey of 1,019 residents of the United States was conducted by random-digit-dial telephone, along with a mail survey of 264 scientists. Mean hazard confidence scores were elicited on a scale from zero to 10, where zero indicates complete confidence that a hazard does not exist and 10 indicates complete confidence that a hazard does exist. A gender effect was observed among scientists as well as lay people, although female scientists tended to express less confidence in hazard claims than lay females. Hazards studied included claims made about pesticides on food, electric and magnetic fields, global warming, ozone depletion, environmental tobacco smoke, radon, dust and particles in city air, violence in daily life, radiation from medical X-rays, and cigarette smoking. Interesting differences were evident in responses to an exploratory question about explanations for the reported rise in breast cancer. The roles of delayed first delivery, use of estrogen therapy, and breast feeding were also examined for the confidence levels they evoked.


Plenary Session (02/15)

Science Based Decisions and Risk Assessment: A European Perspective
Arpad Somogyi, D.V.M., Ph.D., Directorate General Health and Consumer Protection, European Commission, Brussels, Belgium

The central significance of science in regulatory decisions is universally recognized, and therefore the quality of scientific advice is of paramount interest in decision making. Many institutions have recently undergone substantial organizational changes with the aim of enhancing the level and integrity of the scientific contributions to regulation. At the same time, it is widely understood that science is only one factor in regulatory decision making. New concepts and methods in risk analysis, and in the interpretative application of the precautionary principle, have given impetus to novel developments in several areas of safety evaluation. Current international efforts to harmonize the scientific principles of regulatory systems could lead to enhanced health protection of consumers as well as free and fair trading practices in food, drugs, and the large variety of other commercial products all over the world.


Breakout Session A (02/15): The Art and Science of Risk Communication

Ethics and Risk Communication
John H. Fielder, Ph.D., Villanova University, Villanova, PA

While the primary ethical issue in risk communication is the patient’s right to be given the appropriate information needed to make informed health care decisions, in actual circumstances there are many other factors that complicate notification. This is especially true when the notification is done by a federal agency, such as FDA. Notification of patients and health care professionals about problems with medical devices must take into consideration the interests of a number of different stakeholders, the significance of Type I and Type II errors, lack of information and its change over time, the wide range of devices and things that can go wrong with them, boundary issues, amount of evidence needed, public vs. expert perceptions of risk, and defining levels of harm, risk, and evidence. More fundamentally, notification can proceed from a scientific outlook or a clinical one, which will radically shift the approach to the issues listed above.


Breakout Session A (02/15): The Art and Science of Risk Communication

Perspectives on Risk Perception
Paul Slovic, Ph.D., Decision Research, Eugene, OR 97401

Perceived risk can be characterized as a battleground marked by strong and conflicting views. The paradox for those who study risk perception is that, as people in many industrialized nations have become healthier and safer on average, they have become more - rather than less - concerned about risk, and they feel increasingly vulnerable to the risks of modern life. The stakes are high! These perceptions and the opposition to technology that accompanies them have puzzled and frustrated industrialists and regulators and have led numerous observers to argue that the American public’s apparent pursuit of a "zero-risk society" threatens the nation’s political and economic stability. Studies of risk perception attempt to understand why people’s concerns are increasing and why perceptions are so often at variance with what the experts say people should be concerned about. In medicine, perceptions of drug risks are likely to influence patients’ treatment choices, their compliance with treatment regimens, their views on the acceptability of adverse reactions and the drugs that cause them, and their attitudes toward government regulation of drugs. Understanding perceptions is a prerequisite for designing better communication materials for patients and the public.


Breakout Session A (02/15): The Art and Science of Risk Communication

FDA’s Role in Risk Management and Communication
Janet Woodcock, M.D., Director, Center for Drug Evaluation and Research, Food and Drug Administration

FDA is charged with assuring the safety of the food, medical, and veterinary products that the Agency regulates. Given that none of these products is 100% safe under all circumstances, safety may be operationally defined as "having risks that are reasonable, given the expected benefits." In balancing benefits and risks, FDA must acknowledge that safety is not an innate property of any product; rather, it depends on appropriate handling by users. FDA risk assessments must include an evaluation of the capacity of users to act in ways that minimize risk; the willingness of recipients to assume risk and the weight they put on the risks; and the ability of "the system" to effectively communicate both risk information and risk management steps to the public. Understanding these issues requires extensive communication with product users. Interaction with the public, including consumer and patient involvement in decision-making, public educational campaigns, traditional risk communication, and provision of public information, is an increasing component of FDA risk management strategies.


Breakout Session A (02/15): The Art and Science of Risk Communication

The Journalist's Challenge in Deciphering and Communicating Scientific Risk
Lauran Neergaard, The Associated Press, Washington, DC

How the press covers scientific risk varies widely, often depending on the experience and expertise of the individual reporter but also according to day-to-day deadline constraints, the availability and reliability of scientific data, dissension among the "experts," and the public's interest. Topics to be explored include how journalists decide what is news; how we investigate risk on deadline; scientific consensus vs. the obligation to report all sides; how scientists and journalists communicate; and some examples of risk coverage.


Breakout Session B (02/15): Risk Assessment in Action

Relative vs. Additive Risk for Predicting Human Cancer Risk from Animal Data
David W. Gaylor, National Center for Toxicological Research, FDA, Jefferson, AR 72079

Cancer risk assessments commonly assume equal excess (absolute) risk per unit dose across species with appropriate interspecies dose scaling. It is equally appropriate to assume equal relative risk across species. Then, the human cancer risk is estimated to be the excess site-specific animal risk multiplied by the ratio of the background site-specific human cancer risk to the background site-specific animal cancer risk. In the absence of information regarding potential tissue sites for human cancers, a maximum value of 2% for the lifetime incidence is used for common human tumors, other than skin cancer or lung cancer associated with smoking. Hence, for cancer risk estimates based on common animal tumors, e.g., liver tumors in male mice with a greater than 2% incidence in control animals, human cancer risk estimates would be reduced. For rare animal tumors with a historical background rate in controls of less than 2%, estimates of potential human cancer incidence would be increased.
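The relative-risk scaling described above can be sketched as a small calculation. The function and all numeric values below are hypothetical illustrations of the stated formula, not the NCTR model itself:

```python
# Illustrative sketch (hypothetical values): relative-risk extrapolation
# scales the excess site-specific animal risk by the ratio of background
# human risk to background animal risk.

def human_excess_risk_relative(animal_excess_risk,
                               animal_background_risk,
                               human_background_risk=0.02):
    """Estimate human excess cancer risk under the relative-risk assumption.

    human_background_risk defaults to the 2% maximum lifetime incidence
    the abstract assumes for common human tumors when the target tissue
    site is unknown.
    """
    return animal_excess_risk * (human_background_risk / animal_background_risk)

# Common animal tumor (background incidence 10%, i.e. above 2%):
# the relative-risk estimate is LOWER than the excess animal risk.
common = human_excess_risk_relative(animal_excess_risk=0.05,
                                    animal_background_risk=0.10)

# Rare animal tumor (background incidence 0.5%, i.e. below 2%):
# the relative-risk estimate is HIGHER than the excess animal risk.
rare = human_excess_risk_relative(animal_excess_risk=0.05,
                                  animal_background_risk=0.005)

print(f"common tumor estimate: {common:.3f}")  # below the 0.05 animal excess risk
print(f"rare tumor estimate:   {rare:.3f}")    # above the 0.05 animal excess risk
```

The two calls illustrate the abstract's conclusion: for common animal tumors the human estimate shrinks, and for rare ones it grows, relative to the equal-excess-risk assumption.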


Breakout Session B (02/15): Risk Assessment in Action

Listeria Risk Assessment
Richard C. Whiting, Food and Drug Administration, Center for Food Safety and Applied Nutrition, Washington, D.C. 20204

Listeria monocytogenes is a bacterium widely found in soils, water, and food processing plants. Consequently, it is frequently found in our foods, usually in low numbers. For most people this bacterium is harmless, but for individuals whose immune systems are impaired by disease, medical treatment, or old age, it can cause systemic infections and meningitis (listeriosis). Pregnant women and neonates are also highly susceptible to listeriosis. The rate of infection from Listeria is relatively low compared to other foodborne pathogens; however, death results in 20% of cases. Epidemiological investigations are hampered by the sporadic occurrence of listeriosis and the two to three weeks or longer between consumption of the food and onset of illness. Based upon a qualitative evaluation of this information, governments have implemented different degrees of stringency in the tolerance of this microorganism in foods, particularly ready-to-eat foods that are eaten without cooking immediately prior to consumption. This risk assessment collected all of the relevant scientific information and constructed a quantitative description of the sources of Listeria in our diet and the health risk of consuming particular numbers of Listeria. In addition to describing what is known, the risk assessment evaluated the quality of this information and highlighted where crucial information is lacking. This approach provides regulatory agencies with a more comprehensive and integrated description of this public health hazard than the more qualitative and subjective interpretation done previously, thereby leading to more informed decisions.


Breakout Session B (02/15): Risk Assessment in Action

Creutzfeldt-Jakob Disease and Blood Safety: Addressing the Risk, an Industry Perspective
Robert W. Kozak, Ph.D., Bayer Corporation, Berkeley, CA 94701

In 1995, in response to increasing uncertainties about the theoretical risk of CJD to life-saving plasma-derived products and the patients who depend on them, Bayer Corporation proactively developed a TSE/CJD research program. A scientific task force established an approach, based on internationally accepted principles, to control and minimize any risk from potential adventitious agents. In addition to careful selection of raw materials, a sensitive Western blot assay was optimized to detect the disease marker (termed the prion) and to determine the prion clearance capabilities of manufacturing processes. The Western blot assay provides clearance results similar to those of the rodent bioassay, but in less than two days rather than the 8-14 months the bioassay requires. The increased throughput of the Western blot yields sensitive, reproducible data that increase the statistical confidence of the clearance results beyond what is attainable with the bioassay alone. The Bayer TSE/CJD program was reviewed with international experts in the field, including the FDA, to assure that the approach and the information generated were scientifically sound and would provide missing scientific knowledge and additional safety to patients. Updates on the progress of the program were provided to FDA, world health authorities, and consumer groups, and the recommendations obtained were incorporated into the program. A brief description of the program and its results will be given, along with the partnership established with the FDA.


Breakout Session B (02/15): Risk Assessment in Action

Creutzfeldt-Jakob Disease and Blood Safety: Addressing the Risk, a New Paradigm
Corey S. Dubin, Committee of Ten Thousand, Advocates for Persons with HIV/AIDS, Washington, DC 20003

The HIV/AIDS blood epidemic of the 1980s changed the landscape of blood and blood products in this nation. The unimaginable occurred: a deadly pathogen entered the nation's blood supply. While the regulatory and oversight system wrestled to respond, thousands of Americans were infected with this new and unknown killer. By the time the regulatory structure began to react seriously, it was too late for thousands of Americans. The questions remain today: what are the lessons of the AIDS/blood epidemic, and have we used those lessons to alter our regulatory system? The obvious goal is to create a regulatory structure that can respond to emerging and unknown threats to the blood supply. The issue is not whether a new threat will emerge, but when it will emerge and whether we are prepared to respond in the most effective manner possible. Can we contain the next killer that enters the blood supply, or will history repeat itself with another epidemic that ravages the users of blood and blood products? Over the last three years we have also seen emerging revelations regarding the magnitude of the transmission of hepatitis C during the 1980s and 1990s. This was a threat identified in the mid-1970s, yet we now learn that up to one million Americans were exposed to this dangerous pathogen over a twenty-four-year time frame. In this instance the question is not how we responded to an unknown; it represents the failure of the regulatory system to respond effectively to a known dangerous pathogen in the blood supply. How do we interpret these events, and what is their impact on how we manage this nation's blood supply? Has this knowledge motivated changes in the system that will prevent future blood-borne disasters such as HIV and hepatitis C? Over the past ten years, Creutzfeldt-Jakob Disease (CJD) has been at the center of the debate regarding unknown or so-called theoretical threats to the blood supply.
The unanswered question remains: is CJD or nvCJD transmissible through blood and blood products? We have yet to gain a definitive answer to this important question. Yet given their deadly effects on those afflicted, CJD and nvCJD have produced a high degree of anxiety among the users of blood and blood products, especially those, such as persons with hemophilia, who are totally dependent on plasma-derivative products. The perception is that the regulatory structure has addressed the threat of CJD within the old perspective or paradigm: the same paradigm that was operative in the 1980s regarding HIV and hepatitis C. America's delay in deferring British donors is one example of this perceived problem in perspective. This delay has been compared to the delays in action during the first three to four years of the AIDS/blood epidemic, when a more proactive response surely would have saved lives. When it is stated that the risk from CJD and nvCJD remains theoretical, consumers respond with a reminder that they were told in 1982 and 1983 that the risk posed by HIV was still an unknown and that government and industry would not act until "the data confirms the risk." While significant improvements in the safety of blood and blood products have been attained, the question remains one of perspective on emerging threats, the approach to those threats, and the operative paradigm within which emergent and unknown threats such as CJD exist.


Breakout Session B (02/15): Risk Assessment in Action

Spontaneous Combustion of Latex Examination Gloves
Scott G. McNamee, Center for Devices and Radiological Health, Food and Drug Administration, Office of Science and Technology, Rockville, MD 20852

During 1994 and 1995, four warehouse fires were reported as having been caused by the spontaneous combustion of latex examination gloves stored therein. FDA was alerted to the problem and an investigation was initiated. Two risk assessment questions were raised in addressing this issue. The first was, "What is the risk of latex spontaneously combusting?" Latex does not ordinarily give rise to spontaneous combustion, so the suspect gloves were tested for excess heat generation. The suspect gloves, which were powder-free latex examination gloves labeled as having been made in China, and control powder-free latex examination gloves were taken through identical heating profiles while the internal temperature of the glove mass was monitored. Clear overheating by the suspect gloves indicated a potential for spontaneous combustion. The second risk assessment question was, "What is the risk of this occurring again?" Infrared studies of both the glove surfaces (FTIR-ATR) and extracts from the gloves (FTIR-transmission) showed a difference in chemical makeup between the suspect gloves and control gloves. It was concluded that the latex gloves had been improperly manufactured using inappropriate additives, resulting in an unsafe product. A public health advisory was issued concerning the storage of suspect gloves.


Breakout Session B (02/15): Risk Assessment in Action

Risk Analysis Model of the Human Health Impact of Fluoroquinolone Use in Rearing Broilers
David J. Vose, David Vose Consultancy, Ltd., "La Coutancie", Dordogne, France

The use of antimicrobials in rearing animals intended for human consumption carries a potential human health impact. As part of a program to evaluate this impact, the FDA-CVM commissioned a risk assessment to determine the incremental human health impact of campylobacteriosis resulting from the use of fluoroquinolones in broilers. The assessment takes the form of a mathematical model that estimates the number of annual cases of campylobacteriosis from consumption of domestically reared poultry that were caused by fluoroquinolone-resistant Campylobacter and for which a fluoroquinolone would have been prescribed by a medical practitioner. The prevalence of poultry contaminated with fluoroquinolone-resistant Campylobacter was also estimated. This allows the model to estimate the future human health burden for a given prevalence of fluoroquinolone-resistant Campylobacter in poultry, which means that FDA can monitor that prevalence to ensure that a predetermined threshold for acceptable human health impact is not exceeded. The risk assessment determined that some 5,000 people would have had an increased duration of campylobacteriosis as a result of the use of fluoroquinolones in poultry in 1998.
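The monitoring logic described above, working backward from an acceptable human health impact to a monitorable contamination prevalence, can be illustrated with a toy sketch. The functions and every parameter value below are invented for illustration under a simple linear assumption; they are not taken from the CVM assessment:

```python
# Toy sketch (all values hypothetical): annual attributable cases are
# assumed proportional to the prevalence of fluoroquinolone-resistant
# Campylobacter in domestically reared poultry.

CASES_PER_UNIT_PREVALENCE = 50_000  # hypothetical slope of the linear model

def estimated_annual_cases(resistant_prevalence,
                           slope=CASES_PER_UNIT_PREVALENCE):
    """Forward model: contamination prevalence (0..1) -> estimated
    annual human cases attributable to resistant Campylobacter."""
    return resistant_prevalence * slope

def prevalence_threshold(max_acceptable_cases,
                         slope=CASES_PER_UNIT_PREVALENCE):
    """Inverse model: acceptable annual cases -> the poultry prevalence
    that must not be exceeded (the monitorable regulatory threshold)."""
    return max_acceptable_cases / slope

# If regulators set 5,000 cases per year as the acceptable impact, the
# corresponding prevalence threshold under this toy slope is 0.10.
threshold = prevalence_threshold(5_000)
print(threshold)  # 0.1
```

The point of the inverse function is the regulatory use described in the abstract: surveillance measures prevalence, which is observable, and the model translates the policy ceiling on human cases into a ceiling on that observable quantity.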


Last updated on 2000-FEB-18 by frf