Clin Trials. Author manuscript; available in PMC 2006 July 4.
Published in final edited form as:
doi: 10.1191/1740774506cn136oa.
PMCID: PMC1484572
NIHMSID: NIHMS10658
Data and safety monitoring in social behavioral intervention trials: the REACH II experience
Sara J Czaja,a Richard Schulz,b Steven H Belle,c Louis D Burgio,d Nell Armstrong,e Laura N Gitlin,f David W Coon,g Jennifer Martindale-Adams,h Julie Klinger,b and Sidney M Stahli
a Department of Psychiatry and Behavioral Sciences and Center on Aging, University of Miami Miller School of Medicine, Miami, FL, USA,
b University Center for Social and Urban Research, University of Pittsburgh, Pittsburgh, PA, USA,
c Graduate School of Public Health, University of Pittsburgh, Pittsburgh, PA, USA,
d Center for Mental Health & Aging, The University of Alabama, Tuscaloosa, AL, USA,
e National Institute of Nursing Research/National Institutes of Health, Bethesda, MD, USA,
f Center for Applied Research on Aging and Health, Thomas Jefferson University, Philadelphia, PA, USA,
g Department of Social & Behavioral Sciences, Arizona State University, Glendale, AZ, USA,
h University of Tennessee-Memphis Health Science Center, VA Medical Center, Memphis, TN, USA,
i Behavioral and Social Research Program, National Institute on Aging/National Institutes of Health, Bethesda, MD, USA
Author for correspondence: Sara J Czaja, Professor of Psychiatry and Behavioral Sciences, University of Miami Miller School of Medicine, 1695 N.W. 9th Ave., Suite 3204, Miami, Florida 33136, USA. E-mail: sczaja@med.miami.edu
Abstract
Background
Psychosocial and behavioral intervention trials targeting a broad range of complex social and behavioral problems such as smoking, obesity and family caregiving have proliferated in the past 30 years. At the same time, the use of Data and Safety Monitoring Boards (DSMBs) to monitor the progress and quality of intervention trials and the safety of study participants has increased substantially. Most of the existing literature and guidelines for safety monitoring and reporting of adverse events focus on medical interventions. Consequently, there is little guidance for investigators conducting social and behavioral trials.
Purpose
This paper summarizes how issues associated with safety monitoring and adverse event reporting were handled in the Resources for Enhancing Alzheimer’s Caregiver Health (REACH II) program, a multisite randomized clinical trial, funded by the National Institute on Aging (NIA) and the National Institute of Nursing Research (NINR), that tested the efficacy of a multicomponent social/behavioral intervention for caregivers of persons with Alzheimer’s disease.
Methods
A task force was formed to define adverse events for the trial and to develop protocols for reporting and resolving events that occurred. The task force reviewed existing policies and protocols for data and safety monitoring and adverse event reporting and identified potential risks particular to the study population. An informal survey of investigators conducting psychosocial intervention trials was also conducted regarding their data and safety monitoring procedures.
Results
Two categories of events were defined for both caregivers and patients: adverse events and safety alerts. A distinction was also made between events detected at the baseline assessment and those detected post-randomization. Standardized protocols were developed for the reporting and resolution of events and for the training of study personnel. Results from the informal survey indicated wide variability in data and safety monitoring practices across psychosocial intervention trials.
Conclusions
Overall, the REACH II experience demonstrates that existing guidelines regarding safety monitoring and adverse event reporting pose unique challenges for social/behavioral intervention trials. Challenges encountered in the REACH II program included defining and classifying adverse events, defining “resolution” of adverse events and attributing causes for events that occurred. These challenges are highlighted and recommendations for addressing them in future studies are discussed.
Introduction
During the past 30 years, psychosocial and behavioral interventions designed to maintain and improve health and quality-of-life have proliferated. Researchers have targeted a broad range of complex social and behavioral problems such as smoking, obesity, medical compliance and family caregiving. Recently, there has also been a growing demand for evidence-based practice. Clinicians, social agencies and policy makers increasingly require evidence about real-world effects of treatments when making decisions about investing in intervention programs. In response, the randomized clinical trial (RCT) design, recognized as the gold standard for evaluating medical interventions, is commonly used to evaluate the effectiveness of behavioral intervention approaches.
To ensure that RCTs meet the highest scientific standards, many aspects of the design and conduct of a clinical trial such as participant recruitment, treatment adherence, intervention outcomes and participant safety must be carefully monitored. Toward this end, the use of independent Data and Safety Monitoring Boards (DSMBs) to monitor the progress and quality of a trial and participant safety has increased substantially [1,2]. In fact, in an effort to improve the quality of clinical research and ensure the protection of human subjects, the National Institutes of Health (NIH) has issued guidelines and regulations to increase the use of data and safety monitoring within clinical trials. It is now the policy of NIH that each Institute and Center should have a system for the appropriate oversight and monitoring of the conduct of clinical trials to ensure the safety of the participants and the validity and integrity of the data. A DSMB is required for all Phase III multisite clinical trials involving potential risks to participants and may be required for Phase I or II trials, and even smaller intervention studies, if the study population is vulnerable or other study characteristics support the need for an external board [3]. The Food and Drug Administration (FDA) has also recently issued draft guidelines on the formation and responsibilities of DSMBs for trials subject to FDA oversight. These guidelines are fairly consistent with procedures followed for NIH-funded trials [4].
The primary role of a DSMB is to ensure the safety of trial participants through review of adverse events. A key secondary role is to preserve the quality and credibility of the trial in order to provide reliable results to clinical and policy communities. Although there is general agreement about the basic roles of DSMBs, how they are used and how they function vary widely across trials and sponsoring agencies [5]. Many issues such as determination of when DSMBs are needed, methods for conducting interim data analyses and confidentiality of interim results remain controversial [6,7]. For example, questions often arise about the policies used to guide decisions about the safety and efficacy of trials. Although a number of statistical approaches are available for assessing interim data, these procedures in and of themselves are seldom sufficient for making recommendations about trial termination or continuation. There are several cases reported in the literature where strict adherence to the stopping rules established for a trial would have led to less than optimal conclusions about the potential benefits or harm of a treatment [6,8,9]. A related controversy is whether access to interim outcome data should be restricted to DSMB members. The rationale for masking is preservation of trial integrity and credibility and protection from bias. Arguments for unmasking are based on the premise that restricting access to interim outcomes may result in erroneous conclusions about treatment effects, because DSMB members may not have access to key information they need to interpret the results of the interim analysis. Clearly, safety monitoring can mean different things to different people, depending on their relationship to a particular study.
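As one illustration of the kind of statistical guideline such discussions often reference, the sketch below encodes a conservative Haybittle-Peto-style interim check in Python. This is not a REACH II procedure (the trial specified no formal stopping rules), and the function name and thresholds are conventional defaults assumed for illustration only.

```python
# Minimal sketch of a Haybittle-Peto-style interim monitoring rule.
# Illustrative only: REACH II specified no formal stopping rules, and the
# thresholds below are conventional defaults, not values from the trial.
from scipy import stats

def crosses_monitoring_boundary(z_statistic: float,
                                is_final_analysis: bool,
                                interim_z_threshold: float = 3.0,
                                final_alpha: float = 0.05) -> bool:
    """Flag a result for DSMB discussion if it crosses the boundary.

    At interim looks only an extreme result (|z| > 3 by convention) is flagged;
    at the final analysis the usual two-sided significance level applies.
    """
    if is_final_analysis:
        critical = stats.norm.ppf(1 - final_alpha / 2)  # about 1.96 for alpha = 0.05
        return abs(z_statistic) >= critical
    return abs(z_statistic) >= interim_z_threshold

# An interim z of 2.4 would not, by itself, trigger a flag under this rule.
print(crosses_monitoring_boundary(2.4, is_final_analysis=False))  # False
```

Even when a result crosses such a boundary, the point made above stands: the decision to stop or continue rests with the DSMB's broader judgement, not with the rule alone.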
Controversies also exist regarding the definition and reporting of adverse events (AEs). AEs are generally defined as any unfavorable or unintended symptom, sign or disease associated with a medical treatment or procedure that may or may not be related to the treatment or procedure [10]. Investigators are typically required to report all AEs and assess their severity whether or not they are related to study treatments. In principle, the term adverse event should be non-judgemental with regard to the relationship between treatment and the event. An AE can be associated with the treatment, the disorder or behavior being targeted, a concurrent disorder or treatment, or it may be entirely unrelated.
Existing guidelines for the definition and reporting of AEs are somewhat broad and vague. The current FDA guideline requires reporting of AEs that are “serious and unexpected” whereas the NIH requires reporting of “unanticipated problems” posing risks to study participants [11]. Clearly, there can be considerable variability in the interpretation of terms such as “serious”, “unexpected” and “unanticipated”. Furthermore, existing policies offer little guidance regarding required documentation and protocols for reporting. Support for this view can be found in recent reviews of the AE literature, which have demonstrated fairly wide variance in the terms used to describe adverse events (eg, adverse events versus side effects versus complications) as well as variation, even within trials, in AE reporting, especially with respect to judgements of severity or relatedness [12,13]. Consistency in AE documentation, characterization and evaluation is important since lack of consistency can ultimately affect decisions about treatment adoption. Judgements of causality must also take into account the complex, dynamic interplay between the inherent risks of the intervention and contextual factors, such as comorbidities related to a disease, that can influence the type and frequency of AEs that occur within a trial. Lack of understanding of these factors and their potential relationship to study treatments can lead to bias in data reporting and interpretation as well as poor decisions about when to stop a trial.
Issues surrounding safety monitoring and AE reporting are even more complex for social and behavioral intervention trials. Because most of the existing literature regarding AEs is based on medical interventions, there is little guidance for investigators conducting social and behavioral trials. To help fill this gap, this paper reports how issues associated with safety monitoring and reporting of adverse events were handled in the Resources for Enhancing Alzheimer’s Caregiver Health (REACH II) program. Data are also reported from an informal survey of investigators of other currently active psychosocial intervention trials regarding their data and safety monitoring procedures. The overall goal of the paper is to identify the challenges of applying existing guidelines for safety monitoring of clinical trials to social/behavioral intervention trials and to suggest how these challenges might be addressed in future studies.
Overview of the REACH II program
REACH II was a multisite randomized clinical trial, funded by the National Institute on Aging (NIA) and the National Institute of Nursing Research (NINR), that tested the efficacy of a multicomponent social/behavioral intervention for caregivers of persons with Alzheimer’s disease. The randomized cohort consisted of 212 Hispanic/Latino, 219 white/Caucasian, and 211 black/African-American caregivers recruited from five sites in the US: Birmingham, AL; Miami, FL; Memphis, TN; Palo Alto, CA; and Philadelphia, PA. The study also included a coordinating center at the University of Pittsburgh.
Eligibility
Eligibility criteria for caregivers included being Hispanic/Latino, white/Caucasian, or black/African-American; being over the age of 21; living with or sharing cooking facilities with the patient; providing care for a relative with Alzheimer’s Disease and Related Disorders (ADRD) for a minimum of four hours per day for at least the past six months; caring for a patient with memory or behavior problems, and feeling overwhelmed, or angry, or having crying spells, or feeling cut off from family or friends because of caregiving demands. Caregivers were excluded if they were involved in another caregiver intervention study, participated in the earlier REACH I trial, or had an illness that would prevent them from participating for at least six months. Other requirements were logistical and included having a telephone, planning to remain in the geographic area for at least six months, and competency in either English or Spanish.
In order to be eligible for the study, caregivers had to confirm that their relative had a diagnosis of Alzheimer’s disease or related dementia. Patients who scored above 23 on the Mini-Mental State Exam [14] were required to have a physician’s diagnosis of Alzheimer’s disease or a related disorder.
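To make the screening logic above concrete, the following is a minimal Python sketch of how the caregiver and care recipient criteria could be encoded. The field names and dictionary-based records are hypothetical; this is not the REACH II screening instrument.

```python
# Simplified sketch of the eligibility logic described above. Field names and the
# dictionary-based records are hypothetical; this is not the REACH II screener.
MIN_CARE_HOURS_PER_DAY = 4
MIN_CAREGIVING_MONTHS = 6
MMSE_CUTOFF = 23  # scores above 23 required a physician's diagnosis of ADRD

def caregiver_eligible(cg: dict, patient: dict) -> bool:
    meets_inclusion = (
        cg["ethnicity"] in {"Hispanic/Latino", "white/Caucasian", "black/African-American"}
        and cg["age"] > 21
        and cg["lives_with_or_shares_cooking_with_patient"]
        and cg["care_hours_per_day"] >= MIN_CARE_HOURS_PER_DAY
        and cg["months_caregiving"] >= MIN_CAREGIVING_MONTHS
        and cg["patient_has_memory_or_behavior_problems"]
        and cg["reports_caregiving_distress"]  # overwhelmed, angry, crying spells, or isolated
        and cg["has_telephone"]
        and cg["staying_in_area_six_months"]
        and cg["language"] in {"English", "Spanish"}
    )
    meets_exclusion = (
        cg["in_other_caregiver_intervention_study"]
        or cg["participated_in_reach_i"]
        or cg["illness_prevents_six_month_participation"]
    )
    # Caregiver-confirmed ADRD diagnosis; a physician's diagnosis is required if MMSE > 23.
    dementia_confirmed = cg["confirms_adrd_diagnosis"] and (
        patient["mmse_score"] <= MMSE_CUTOFF or patient["physician_diagnosis_of_adrd"]
    )
    return meets_inclusion and not meets_exclusion and dementia_confirmed
```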
Protocol
Participants were screened for eligibility, given a baseline assessment, and subsequently randomized to treatment or control condition within each of the three ethnic groups. Caregivers were assessed a second time six months later after the intervention was completed. The intervention was designed to improve the quality of life of caregivers in multiple domains. Therefore, the primary outcome was a multivariate quality of life indicator that assessed caregiver burden, depressive symptoms, self-care, social support, and patient problem behaviors. In addition, caregiver clinical depression and patient institutional placement were assessed.
Study design
The design of the intervention was guided by the existing literature and by findings from the multisite REACH I trial [15,16]. The evidence from both sources indicated that caregiving presents multiple challenges and that there is no single, consistently effective method for achieving clinically significant effects among caregivers. As a result, the intervention was based on a risk appraisal approach, and five areas linked to caregiver stress and health processes (burden, depression, self-care, social support and care recipient problem behaviors [17]) were matched to five corresponding intervention components. Because there is considerable variability in the needs of caregivers and care recipients, a structured risk appraisal was administered at baseline and dosing was adjusted to the level of risk present within each area. For example, a person who had minimal problems with depression would receive only a small dose of the intervention component designed to improve emotional well-being. To deliver the intervention in a cost-effective manner, it was administered in 12 sessions over six months using a combination of in-home visits augmented by telephone/computer technology. In addition, five telephone-administered cross-site support group sessions were available to intervention arm participants. Caregivers were also provided with a Caregiver Notebook that contained basic educational materials as well as other instructional materials provided by the interventionist during the home sessions.
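As a rough sketch of the risk-appraisal-to-dose idea described above, the tailoring logic might look like the following. The five risk areas come from the text, but the scoring scale, thresholds, and dose labels are invented for illustration.

```python
# Minimal sketch of mapping risk appraisal scores to intervention dose.
# The five risk areas come from the text; the scoring scale (hypothetically 0-10),
# thresholds, and dose labels are illustrative assumptions.
RISK_AREAS = ["burden", "depression", "self_care", "social_support", "problem_behaviors"]

def tailor_intervention(risk_scores: dict) -> dict:
    """Map each risk area's appraisal score to an illustrative dose level."""
    plan = {}
    for area in RISK_AREAS:
        score = risk_scores.get(area, 0)
        if score >= 7:
            plan[area] = "full component"
        elif score >= 3:
            plan[area] = "reduced component"
        else:
            plan[area] = "minimal/educational materials only"
    return plan

# A caregiver with minimal depression risk receives only a small dose of that component.
print(tailor_intervention({"burden": 8, "depression": 1, "self_care": 5,
                           "social_support": 4, "problem_behaviors": 9}))
```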
In contrast, caregivers in the control arm received a packet of basic educational materials and two brief (<15 minute) telephone “check-in calls” at three and five months post randomization. They were also invited to participate in a workshop on dementia and caregiving following the six-month assessment. All materials were available in English and Spanish.
Overview of safety monitoring in REACH II
Because REACH II was a multisite intervention trial and involved a vulnerable population (caregivers and dementia patients), an independent DSMB was required by the sponsoring agencies. The members of the DSMB were identified by the sponsoring agencies, with recommendations from the Trial Steering Committee, prior to the start of the study. The five members included experts in intervention research, caregiving, biostatistics, and ethics. The primary responsibilities of the DSMB included monitoring of participant recruitment and safety, protocol compliance, and data quality. The DSMB met twice yearly: once in person and once by conference call. One month prior to each meeting, members received data reports that contained information about participant recruitment, retention, participant characteristics and adverse events. In addition to the DSMB members, meeting attendees included the program officers from the NIA and the NINR, the study statistician, the Principal Investigator of the Coordinating Center and the Chair of the REACH II Steering Committee.
At the initial meeting, the DSMB reviewed the study protocol (eg, informed consent forms, intervention protocols, data collection instruments), and agreed upon data reporting requirements (frequency, type of data and reporting format). They also reviewed the definitions of adverse events and protocols for resolution of those events adopted for the trial (Table 1). At subsequent meetings the DSMB reviewed the progress of the study (eg, recruitment by race/ethnicity at each site, protocol deviations, intervention adherence, adverse events, site visit summaries, data quality, attrition, effectiveness of randomization procedures) and made recommendations concerning its continuation. The decisions of the DSMB were considered advisory to the NIH. Formal stopping rules were not specified for the trial.
Table 1
Definitions of adverse events and safety alerts and protocols for resolution used in the REACH II program
The DSMB worked with the Coordinating Center and the program officers to choose a monitoring approach that best suited the study. An interim data analysis was also performed by the trial statistician. To avoid potential bias, all site investigators and the PI of the Coordinating Center were masked to the results of the interim analyses. The Coordinating Center generated minutes for each of the DSMB meetings, which were then submitted to the NIA, NINR and the sites, which in turn distributed the minutes to the local Institutional Review Boards (IRBs).
Challenges encountered in the REACH II program
Defining and classifying adverse events (AEs)
One of the initial challenges faced by the REACH II investigators was defining and classifying AEs for the trial. To facilitate this process, a task force with representatives from each of the five sites, the coordinating center, and the sponsoring agencies was formed. Given that this was a multisite trial, it was important to ensure that the definition of AEs was standardized across the five intervention sites. In addition, as the focus of the trial was the caregiver/care recipient dyad, AEs needed to be defined for both the caregiver and the care recipient.
In addition, the intervention was based on a risk appraisal approach, and a baseline assessment that included measures of depression, quality of care, and care recipient problem behaviors was administered prior to randomization. Thus, AEs and potential risks to the participants could be detected prior to the start of the intervention. For example, the risk appraisal questionnaire asked caregivers if the care recipient had threatened to harm him or herself or others, had access to a gun or was still driving. Although events detected at baseline could not be attributed to the intervention, ethical and IRB requirements meant that they needed to be reported and addressed. Consideration also needed to be given to the characteristics of the participant population and contextual factors surrounding the caregiving situation. For example, dementia patients are likely to be elderly and to have medical or behavioral comorbidities such as wandering or aggression. Likewise, it is not uncommon for caregivers to suffer from depressive symptoms. While these types of events do not fall under the standard definition of serious AEs, they still pose a potential risk to the individual. Finally, as noted, the population was ethnically diverse, and events such as institutionalization of the patient tend to be more common among some caregiver populations (eg, non-Hispanic whites) than others (eg, Hispanics/Latinos) [18].
Based on these considerations, we distinguished between two categories of events: adverse events and safety alerts. The definition of “adverse event” was consistent with traditional definitions of AEs and included events such as death, hospitalization and emergency room visits. “Safety alerts” were events that were relevant to the study population and posed safety risks to study participants. Examples of safety alerts included caregivers having symptoms of depression or the care recipient driving (Table 1). A distinction was also made between events that were detected at baseline (baseline adverse events and baseline safety alerts) and those that occurred between randomization and the six-month follow-up assessment (adverse events and safety alerts).
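A brief sketch of this two-by-two classification (adverse event versus safety alert, crossed with baseline versus post-randomization detection) is shown below. The event lists are only a few examples drawn from the text, not the full Table 1 definitions.

```python
# Sketch of the REACH II event classification: adverse event vs. safety alert,
# crossed with baseline vs. post-randomization detection. The event lists below
# are illustrative examples only, not the full definitions in Table 1.
from dataclasses import dataclass
from datetime import date
from typing import Optional

ADVERSE_EVENT_TYPES = {"death", "hospitalization", "emergency_room_visit"}
SAFETY_ALERT_TYPES = {"caregiver_depressive_symptoms", "care_recipient_driving",
                      "access_to_gun", "threat_of_harm"}

@dataclass
class StudyEvent:
    event_type: str
    detected_on: date
    randomization_date: Optional[date]  # None if the participant has not been randomized

    @property
    def category(self) -> str:
        kind = "adverse event" if self.event_type in ADVERSE_EVENT_TYPES else "safety alert"
        detected_at_baseline = (self.randomization_date is None
                                or self.detected_on <= self.randomization_date)
        return f"baseline {kind}" if detected_at_baseline else kind

# A care recipient still driving, detected before randomization -> "baseline safety alert".
print(StudyEvent("care_recipient_driving", date(2003, 1, 10), None).category)
```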
Defining “resolution”
A second task involved defining what constituted “resolution” of a safety alert or an adverse event. This task was challenging as events such as institutionalization are common among dementia patients and often permanent. For example, a question arose regarding resolution of patient institutionalization. Should resolution be defined as the return of the patient to the home setting or simply knowledge that placement occurred and the reason for the placement decision? Obviously, placement of patients who were permanently placed would never be “resolved” if the definition of resolution of this event was the patient returning to home. The final definitions of resolution for AEs and safety alerts are presented in Table 1.
The definitions of adverse events and safety alerts and protocols for event resolution were submitted to the DSMB for review and approval. An important aspect of this process was educating the DSMB about the nature of the intervention and the characteristics of the target population. Although all of the members of the DSMB had experience with clinical trials and expertise in intervention research some of the members had limited expertise with caregiving and dementia patients. Following approval by the DSMB, study personnel (assessors and interventionists) at the five intervention sites and the Coordinating Center were trained in protocols for identification, reporting and resolution of AEs and safety alerts. These protocols were also included in the manual of operations.
As shown in Table 2, the most common event among caregivers was evidence of high levels of depressive symptoms. Among care recipients, the most common events were hospitalization, comments related to death, institutionalization and death. As indicated, there was also some variation in event frequency according to the ethnicity of the dyad. Institutionalization, access to a gun and continued driving were more common among the white/Caucasian care recipients than among the Hispanic/Latino and black/African-American care recipients.
Table 2
Adverse events and safety alerts in REACH II by ethnicity
Reporting requirements and attribution
Developing a standardized reporting system was also complicated given differences in requirements among the site IRBs. Some sites were required to report all events to the local IRB irrespective of event severity whereas other sites were only required to report AEs and not safety alerts. The DSMB required reporting of all events.
To help ensure consistency in reporting across the sites, adverse events and safety alerts were tracked using standardized forms that recorded the date of the event, the type of event, the attribution of the event (eg, whether it was intervention related), whether the event was resolved or controlled, and the resolution date. These forms were completed by the site PI or designee (eg, clinical supervisor, project coordinator) and faxed to the Coordinating Center within 24 hours of learning of the event. Sites were also required to complete an Adverse Event Resolution Note, which detailed the specifics of how the event was addressed.
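A minimal sketch of the standardized tracking record described above, including the 24-hour reporting window, follows. The field names mirror the form contents mentioned in the text but are otherwise hypothetical.

```python
# Sketch of a standardized event-tracking record. Field names mirror the form
# contents mentioned in the text (date, type, attribution, resolution) but the
# structure itself is hypothetical, not the actual REACH II form.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

REPORTING_WINDOW = timedelta(hours=24)  # forms were due to the Coordinating Center within 24 hours

@dataclass
class EventReport:
    event_date: datetime
    event_type: str
    intervention_related: Optional[bool]   # attribution, if it can be judged
    learned_of_event: datetime
    submitted_to_coordinating_center: Optional[datetime] = None
    resolved: bool = False
    resolution_date: Optional[datetime] = None
    resolution_note: str = ""              # corresponds to the Adverse Event Resolution Note

    def submitted_on_time(self) -> bool:
        """Check whether the report reached the Coordinating Center within 24 hours."""
        if self.submitted_to_coordinating_center is None:
            return False
        return (self.submitted_to_coordinating_center - self.learned_of_event) <= REPORTING_WINDOW
```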
The issue of attribution proved to be somewhat of a challenge for the REACH II program. Events such as hospitalization are common among dementia patients given the nature of the illness and the fact that dementia patients tend to be elderly and have other comorbid conditions. In fact, in the REACH II trial hospitalizations were the most common AE among the care recipients (Table 2). Though unlikely to be related to the intervention, the relatively high frequency of hospitalizations generated concern among the members of the DSMB, particularly since more were reported for care recipients in the intervention condition than in the control arm (Table 3). Because of this concern, the DSMB required further analyses of these events. It was determined that the higher frequency of hospitalizations among care recipients in the intervention condition was likely due to greater contact between the interventionists and the caregivers who received the intervention, which created more opportunities to learn of and report such events, rather than to the intervention itself. Similar issues arose for care recipient emergency room visits. The problems associated with determining attribution experienced in the REACH II program highlight the difficulties of applying existing definitions of AEs, developed for medical interventions, to social/behavioral intervention trials. In addition, this issue underscores the importance of ensuring that members of the DSMB understand the nature of the intervention, the disease or behavioral problem of interest and the characteristics of the target population. A lack of understanding of these factors among DSMB members can potentially lead to erroneous decisions about the safety and impact of the intervention.
Table 3
Post-randomization adverse events and safety alerts by intervention condition
Results of the informal survey
As noted, the authors were interested in ascertaining to what extent the adverse event issues encountered in the REACH II project were shared by other researchers conducting psychosocial clinical trials. To address this question, we conducted an informal survey of Principal Investigators of other trials via questionnaire. The questionnaire consisted of 22 items, including yes/no, checklist, and open-ended questions regarding challenges/difficulties encountered related to data and safety monitoring and the reporting of adverse events.
Sample
Trials (N = 84) were identified from the NIH.gov and ClinicalTrials.gov websites. The search was limited to behavioral and social intervention trials, or trials that combined behavioral and medical interventions, that were currently active. The survey instrument was mailed to the Principal Investigator of each identified trial. The response rate was 49% (N = 41).
The interventions being evaluated in the responding studies included cognitive and psychosocial interventions (47%), education (18%), skills training (14%), exercise (10%), medically related interventions (8%) and mind/body interventions (2%). The study populations included patients with chronic diseases (39%), mental health problems (29%), family caregivers (15%), and older persons with physical frailty (7%) or cognitive impairment (2%). Eighty per cent (N = 33) of the trials included a control condition such as standard care (42%) or placebo, information-only or no-treatment control (38%). The remaining studies did not include a control group and were not randomized trials. Seventy-one per cent of the trials (N = 29) were multisite.
Data safety monitoring and AEs
Results of the survey indicated considerable variability among the trials in protocols for data and safety monitoring. Fifty-four per cent of the trials had a formal DSMB, 37% had a monitoring plan but no external board, and 10% reported using neither. As expected, formal DSMBs were more common among multisite trials (83%) than among single-site trials (41%). Only 31% of the studies with DSMBs had formal stopping rules for trial termination. The type of data reported to the DSMB varied. Most trials reported data related to the occurrence of serious AEs (83%) or other types of AEs (71%), participant recruitment (71%) and retention (76%). However, reporting of data related to participant baseline characteristics (51%), data quality (46%) and data timeliness (37%) was less common. Several trials did not report AEs or serious AEs.
Only 78% of the trials had established protocols for defining adverse events; 60% had established protocols for reporting the attribution/causality of serious adverse events and 45% had such protocols for other-than-serious events. AEs were identified through a variety of sources including participant self-report (77%), interventionists’ interactions with the participant (77%) or standardized questionnaires at scheduled assessments (54%). In most studies attribution was determined by the Principal Investigator (85%) or the IRB (67%). Similarly, resolution was typically determined by the Principal Investigator (64%) or the IRB (64%).
Key challenges
Investigators were also asked to describe any challenges or problems that arose during the trial related to safety monitoring or reporting of adverse events. Commonly reported problems included defining what constituted an adverse event (especially for trials that included a vulnerable population), determining attribution, and lack of consistency in reporting of AEs by study staff. Overall, the list of problems was similar to the challenges faced by the REACH II investigators.
Discussion
In an effort to improve the quality of clinical research and ensure the safety of research participants, safety monitoring is becoming an integral component of clinical research projects. Potential benefits associated with safety monitoring include early identification of treatments that pose risk to individuals or that are likely to be ineffective, information on the extent to which recruited participants reflect the anticipated profile, and overall improvements in data quality. Summary data on adverse events can also provide useful insights into the needs of study populations and aid in the design of future intervention approaches. Currently, however, guidelines for data and safety monitoring are somewhat broad and vary across sponsoring agencies, including agencies within the federal government. As a result, there is wide variability in policies and protocols for conducting data and safety monitoring and much debate surrounding issues related to the use of DSMBs, stopping rules, and the definition and reporting of adverse events. Questions regarding safety monitoring are especially complex for social/behavioral intervention trials.
This paper describes the protocols adopted for safety monitoring within the REACH II project, a multisite randomized clinical trial that evaluated the efficacy of a multicomponent psychosocial intervention for caregivers of dementia patients. Unique characteristics of the REACH II program included a risk-appraisal based intervention approach, a focus on the dyad, and inclusion of an ethnically diverse and vulnerable study population. Challenges encountered in developing a plan for safety monitoring in the REACH II program included defining and classifying adverse events, defining “resolution” of adverse events and attributing causes for events that occurred. Results of an informal survey suggest that these problems are not unique to REACH II and are common in other behavioral trials.
On the basis of the REACH II experience, the following is a summary of recommendations for implementing the existing guidelines for safety monitoring in social/behavioral trials. Our intent is to provide suggestions rather than a prescription, as it is recognized that models for data and safety monitoring vary according to the needs and characteristics of a particular trial.
Data and safety monitoring boards (DSMBs)
One issue that needs to be addressed is whether a DSMB is needed and, if so, its composition and role. Prior to the start of the trial, the role of the DSMB needs to be clearly defined, as do protocols for data reporting and interim analyses. In some cases, having DSMB members serve only in a scientific advisory capacity, as opposed to a formal board that makes decisions about trial termination, may be sufficient. In other cases, a formal DSMB may not be needed and a safety monitoring plan may be adequate. The degree of monitoring should be based on the study’s risk profile. Factors to be considered in the risk assessment include the characteristics of the population being studied, risks associated with the intervention identified in prior studies, and potential risks to the study population in the absence of the intervention [19]. For example, for vulnerable populations such as dementia patients, a higher risk must be assumed.
If a formal DSMB is required, protocols for the structure, function and responsibilities of the DSMB, as well as the format and content of DSMB reports, statistical procedures and monitoring guidelines, should be clearly established before the start of the trial. Ellenberg and colleagues [7] and the DAta MOnitoring Committees: Lessons, Ethics, Statistics (DAMOCLES) study group [20] maintain that intervention trials would benefit from the development of a charter outlining the protocols and responsibilities for data and safety monitoring. Both groups provide examples of such a charter, encompassing guidelines for DSMB membership, responsibilities of the DSMB, protocols for the organization of DSMB meetings, data reporting, interim data analyses, and decision-making and reporting hierarchies. However, these charters, while useful as guidelines, are primarily oriented toward medical intervention trials and may need to be adapted for psychosocial interventions. Furthermore, they do not address the issue of defining adverse events or determining what constitutes “satisfactory” resolution of events that occur. We recommend that these proposed charters be considered a reference, and perhaps a checklist, for the issues that need to be addressed if a formal DSMB is required for a trial.
If a formal DSMB is required, the committee must include individuals with expertise in the clinical area being studied and the target population. Expertise in both is needed to ensure appropriate interpretation of adverse events. For example, in the case of REACH II having expertise in both caregiving and Alzheimer’s Disease was important. Having individuals with some knowledge and experience with clinical trials and data safety monitoring is also valuable. In any case, it is essential that all members of a DSMB have a thorough understanding of the intervention protocol, the problem area being addressed and the characteristics of the study population.
The size of the committee is also an important consideration, as the number of members is likely to affect the quality of the decision process.
Identifying and defining adverse events
Careful consideration also needs to be given to defining what constitutes an adverse event for a trial. Defining adverse events according to criteria developed for medical trials may not be appropriate for some types of interventions. As demonstrated in REACH II, events such as hospitalization and placement are common among patients with dementia and unlikely to be related to behavioral interventions. Reporting these types of events and investigating their causes may place undue burden on study personnel, DSMB members and local IRBs. A more effective strategy would be to have DSMB members and investigators reach consensus about the adverse events important for assuring the safety and well-being of the specific study population enrolled in the study. These judgements should be based on the types of individuals and problems being studied, the nature of the interventions being tested, and the findings from related prior research. Since ascertaining the causes of adverse events can be labor intensive and costly, the focus should be on adverse events that might reasonably be linked to the intervention.
Protocols for monitoring and reporting adverse events
Protocols also need to be developed for standardized tracking and reporting of adverse events. These protocols need to delineate the type of data to be reported (eg, group versus site level data) and the timing and frequency of data reports. This is especially important for multisite trials to ensure consistency in data reporting. Where possible, there should also be consistency in reporting requirements between local IRBs and the sponsoring agencies. This would help minimize duplication of effort and costs associated with data reporting. For example, in the case of REACH II, differences between the event report forms required by local IRBs and those developed for the trial caused a duplication of effort for study personnel. Equally important is ensuring that study personnel are trained in protocols for identifying and reporting adverse events. Criteria also need to be established for assigning event attribution.
Interim data analysis
Finally, procedures for the interim data analyses need to be clearly established prior to initiating the trial. Important issues that need to be considered include the outcome measures that will be included in the analyses, who will conduct the analyses, who will be included in the discussion of the analysis, and the extent to which the investigators are masked with respect to study outcomes. With respect to the issue of masking, we recommend that, at minimum, the study statistician be included in the discussion of the results of the interim analyses to ensure that the findings are interpreted appropriately by DSMB members.
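As a small illustration of the masking point above (not a procedure described for REACH II), interim reports circulated beyond the DSMB are sometimes prepared with neutral arm labels. The Python sketch below is hypothetical and assumes only two named arms.

```python
# Hypothetical sketch: relabel treatment arms with neutral names for an interim
# report so readers outside the DSMB cannot identify the arms. Illustrative only.
import random

def masked_arm_labels(arms: list, seed: int = 2004) -> dict:
    """Return a mapping from true arm names to neutral labels such as 'Group A'."""
    rng = random.Random(seed)          # a fixed seed keeps labels stable across reports
    shuffled = list(arms)
    rng.shuffle(shuffled)
    return {arm: f"Group {chr(ord('A') + i)}" for i, arm in enumerate(shuffled)}

print(masked_arm_labels(["intervention", "control"]))
# e.g. {'control': 'Group A', 'intervention': 'Group B'}
```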
Overall, there are a number of issues with data safety monitoring that need to be addressed within clinical trials. These issues are likely to become more salient as the demand for evidence-based treatment and translational research continues to grow. Practices for data and safety monitoring need to achieve an appropriate balance between the protection of research participants and maximizing the quality and scientific validity of research trials. It is hoped that the lessons learned from the REACH II trial will help other investigators establish protocols for data and safety monitoring in social/behavioral intervention trials.
Acknowledgments
This study was supported in part by grants from the National Institute on Aging and the National Institute of Nursing Research (AG13305, AG13289, AG13313, AG20277, AG13265, NR004261).
References
1.
Sydes MR, Spiegelhalter DJ, Altman DG, Babiker AB, Parmar MKB. DAMOCLES Group. Systematic qualitative review of the literature on data monitoring committees for randomized controlled trials. Clinical Trials. 2004;1:60–79. [PubMed]
2.
Ellenberg SS. Monitoring data on data monitoring. Clinical Trials. 2004;1:6–8. [PubMed]
3.
NIH Policy for Data and Safety Monitoring. http://grants1.nih.gov/grants/guide/notice-files/not98-084.html Access date 10 December 2004.
4.
DeMets D, Califf R, Dixon D, et al. Issues in regulatory guidelines for data monitory committees. Clinical Trials. 2004;1:162–69. [PubMed]
5.
Interim Guidelines for NIH Intramural Principal Investigators. http://www.nihtraining.com/ohsrsite/irb/attachments/5-10_serious_adverse_event_rep.htm Access date 13 December 2004.
6.
Clemens F, Elbourne D, Darbyshire J, Pocock S. DAMOCLES Group. Data monitoring in randomized controlled trials: surveys of recent practice and policies. Clinical Trials. 2005;2:22–33. [PubMed]
7.
Ellenberg SS, Fleming TR, DeMets DL. Data monitoring committees in clinical trials: A practical perspective. Wiley, 2003.
8.
O’Neill RT. Regulatory perspectives on data monitoring. Statistics in Medicine. 2002;21:2831–42. [PubMed]
9.
Diabetes Control and Complications Trial Research Group. The effect of intensive treatment of diabetes on the development and progression of long-term complications in insulin-dependent diabetes mellitus. NEJM. 1993;329:977–86. [PubMed]
10.
Friedman LM, Bristow JD, Hallstrom A, et al. Data monitoring in the Cardiac Arrhythmia Suppression Trial. Online Journal of Current Clinical Trials. 1993; Doc 79, July 31.
11.
Guidance on Reporting Adverse Events to Institutional Review Boards for NIH-Supported Multicenter Clinical Trials. http://grants1.nih.gov/grants/guide/notice-files/not99-107.html Access date 14 January 2005.
12.
Fleming ST. Complications, adverse events, and iatrogenesis: Classifications and quality of care measurement issues. Clinical Performance and Quality of Health Care. 1996;4:137–47.
13.
Raisch DW, Troutman WG, Sather MR, Fudala PJ. Variability in the assessment of adverse events in a multicenter clinical trial. Clinical Therapeutics. 2001;23:2011–20. [PubMed]
14.
Folstein MF, Folstein SE, McHugh PR. Mini-mental state: A practical method for grading the cognitive state of patients for the clinician. J Psychiatr Res. 1975;12:189–98. [PubMed]
15.
Belle SH, Czaja SJ, Schulz R, et al. Using a new taxonomy to combine the uncombinable: Integrating results across diverse caregiving interventions. Psychol Aging. 2003;18:396–405. [PubMed]
16.
Gitlin L, Belle SH, Burgio L, et al. Effect of multicomponent interventions on caregiver burden and depression: The REACH multisite initiative at 6-month follow-up. Psychol Aging. 2003;18:361–74. [PubMed]
17.
Schulz R, O’Brien AT, Bookwala J, Fleissner K. Psychiatric and physical morbidity effects of Alzheimer’s Disease caregiving: Prevalence, correlates, and causes. The Gerontologist. 1995;35:771–91. [PubMed]
18.
Schulz R, Belle SH, Czaja SJ, McGinnis KA, Stevens A, Zhang S. Long-term care placement of dementia patients and caregiver health and well-being. JAMA. 2004;292:961–67. [PubMed]
19.
Brass EP. Implementation of a data safety and monitoring plan in a general clinical research center. Journal of Investigative Medicine. 2001;49:479–85. [PubMed]
20.
DAMOCLES Study Group. A proposed charter for clinical trial monitoring committees: helping them do their job well. Lancet. 2005;365:711–22. [PubMed]