Advances in Patient Safety: From Research to Implementation. Volume 3. Implementation Issues

"Near-miss" Reporting System Development and Implications for Human Subjects Protection

Harvey J. Murff, Daniel W. Byrne, Paul A. Harris, Daniel J. France, Christa Hedstrom, Robert S. Dittus

Abstract

Background: Reviews of recent research-related fatalities have demonstrated that failures of the clinical research system likely contributed to these events. Current research safety-reporting mechanisms focus on individual protocols and are therefore less likely to detect system-level failures. Methods: We have implemented a "near-miss" reporting system for a general clinical research center to detect latent failures within the research environment. Results: One identified research-related near miss involved a research volunteer who was mistakenly directed into an incorrect protocol. Before beginning the incorrect study, the participant recognized that the protocol did not coincide with the consent document, and the error was detected without harm. The lack of reliable research-participant tracking and verification programs was believed to be an important latent failure within the research unit. Discussion: Collecting research unit-specific information on potential safety concerns could identify system failures that might not be identifiable through traditional human subjects protection programs.

Background

Ensuring that human subjects who volunteer for clinical trials are free from any undue harm should be the highest priority of the scientific community. Yet several widely publicized research-related fatalities have translated into an erosion of public trust in the medical research community. 1, 2 To regain this trust, tangible and sustainable improvements in research participant safety must be achieved. 3 Efforts are underway to improve human subjects protection programs through encouraging the use of data and safety monitoring boards (DSMBs), 4 increasing investigator education in research ethics, 5 and promoting human subjects protection accreditation programs. 6

To implement an effective human subjects protection program, however, all potential research risks must be identified and managed appropriately. Traditionally, risks associated with research have been viewed as intrinsic properties of either the investigational agent/device being studied (e.g., diarrhea occurring as a result of an experimental drug during a Phase I study) or the measurement tool being employed (e.g., hypoglycemia associated with a glucose-clamp experiment). However, additional research risks exist that are not intrinsic to one specific protocol, but are system problems existing at an organizational level. These risks are not optimally managed through traditional human subjects protection methods, but will require approaches similar to those employed in clinical medicine. 7

The nature of research-related risks

Risks associated with research activity have traditionally been viewed as intrinsic to a particular investigational agent or protocol. These risks are often associated with an investigational agent's pharmacological or physiological effects on the subject. Many of these risks, called "study risks," are often unknown (unanticipated) at the early phases of an experiment. As the experiment progresses, the body of knowledge that makes up the known (anticipated) risks of an experimental agent or procedure grows (Figure 1). Investigators, sponsors, and institutional review boards (IRBs) can reassess the risk-to-benefit ratio of a study based on this new knowledge and determine whether the study protocol should be modified or even discontinued. Information about these risks is presented to potential subjects during the consent process, allowing individuals to make an informed decision on whether to participate in the trial. The management strategy for study risks has centered on the timely identification and reporting of adverse events and the informed consent process.

Figure 1. Relationship of unanticipated study risks to anticipated study risks


There remains another research-related risk that is quite different from these traditional or study risks. These are risks that are not related to a specific protocol, but are related to constraints that might be organization-specific and extrinsic to any individual protocol; they are best described as system failures. As defined by Thomas Nolan, a system is "a collection of interdependent elements that interact to achieve a common purpose." 8 Whether the goal is to reduce LDL cholesterol in adults or to evaluate a novel therapeutic agent, multiple stakeholders are necessary to accomplish these objectives, and breakdowns in their interactions constitute system failures. This reasoning has begun to permeate clinical medicine, and many organizations are embracing systems theory as a means of improving medical quality and patient safety. 9-11

Traditional approaches to human subjects protection

Since the 1970s, the major focus of human subjects protection has remained the IRB. Over the past several decades, as IRBs have become more accountable to the Federal Government, the number of active clinical research protocols has increased at a tremendous rate. The primary means an IRB had to demonstrate that an institution was in compliance was documentation produced by the IRB or investigator. This had several unintended side effects. IRBs became overwhelmed, resulting in protocol-review delegation to smaller subcommittees and, ultimately, less time for scientific and ethical review. 12, 13 Investigators began to see the additional paperwork necessary to document compliance as a nuisance 14 and did not take it seriously. Consent documents became longer and more complex, with many IRBs producing consent document templates that exceed their own recommendations on readability standards. 15 The end result is that IRBs have become very effective at helping institutions and investigators comply with Office for Human Research Protections (OHRP) guidelines, but current practices do not necessarily optimize human subjects protection.

Several recently released requirements geared more toward investigators appear to take a similar regulatory focus. As a result of recent problems in clinical research programs conducted within the Veterans Health Administration (VHA), investigators are now required to complete a Web-based course on good clinical practice (GCP) and increase the frequency of their IRB training, thus enabling the local research and development office to document compliance with these guidelines. 16 The National Institutes of Health (NIH) has released several guidance documents related to data and safety monitoring plans and DSMBs and has required that all protocols (not just clinical trials) conducted within a general clinical research center (GCRC) contain a data and safety monitoring plan. 17

Finally, several important agencies and groups ---including OHRP, the National Bioethics Advisory Commission, the Institute of Medicine, the Association of American Medical Colleges, and the VA ---have promoted research program accreditation as a means for improving safety. 6 Several private groups have released proposed quality indicators for clinical research, and some institutions have even been evaluated. 6, 18

While these new proposals are laudable, their overall effectiveness at improving human subjects protection is questionable. In analyses of accidents within nonmedical industries, prespecified regulations have generally not been adequate to prevent major catastrophes, predominantly because the catastrophes were unforeseen, 19 and increasing regulations and paperwork is unlikely to yield substantial safety benefits. 20 The persistent emphasis on regulatory compliance has promoted what Jeffrey Kahn calls a "culture of compliance," not a "culture of conscience." 5

System analysis in clinical research

Any research on humans could loosely be defined as "clinical research." For the purposes of this manuscript, "clinical research" refers to clinical trials research, which typically involves the evaluation of a specific therapeutic or diagnostic intervention in a population at risk for or affected by a particular disease of interest. The clinical research enterprise comprises multiple complex systems and is not immune to organizational system problems. Several case reports suggest that these extrinsic risks do, in fact, exert significant pressure on research environments. 14, 18, 21, 22 Examples of systems in clinical research include a GCRC, a vaccine-trial consortium, or even a program for the development of novel drugs. Individual protocols, because they operate within the context of these systems, will be influenced by system factors. Organizational factors, such as how an institution manages investigator conflicts of interest or handles "whistleblowers," can affect the overall culture of an institution. 5 Team factors, such as inefficient communication, can also degrade the performance of a system. 23 Other factors include task-related factors, such as the introduction of new technology, and work environment factors, such as staffing levels. 24 Because these research risks associated with system flaws are similar to those often seen in clinical medicine, 20 we will refer to them as "clinical risks."

Clinical risks must be managed very differently from traditional "study risks" if system failures are to be minimized effectively (Table 1). Many clinical medicine organizations use institutional-level detection of adverse events, followed by aggregate analysis, to identify risks associated with system failures. The VHA provides two examples of the use of aggregate data to study health care quality and safety concerns. On a local level, VA hospitals use aggregate root-cause analysis to address system failures associated with frequent and severe adverse events. 25 Across the entire VHA, surgical outcomes are aggregated by institution and benchmarked against national levels for the National Surgical Quality Improvement Program (NSQIP). 26 In the first 3 years of the program's existence, 30-day surgical mortality decreased by 9 percent. The most important advantage of this strategy is that it allows both meaningful corrective action and postintervention evaluation of effectiveness. Outside the VHA, studies of aggregate data on adverse drug events determined that many medication errors occur during the ordering and administration stages of medication delivery. 27 This knowledge has led to targeted interventions, such as computerized physician order entry 28 and barcoding systems. 29

Table 1. Differences between research-related "study" risks and "clinical" risks and proposed management strategies to minimize these risks


| | Study risks | Clinical risks |
| --- | --- | --- |
| Definition | Risks directly associated with use of the investigational agent | Risks associated with the process of conducting research |
| Relationship to study protocol | Intrinsic; physiologic or pharmacological | Extrinsic; organizational system failures |
| Possible methods of risk identification | Study investigators, medical monitors, data and safety monitoring boards | Study investigators, research unit-based staff nurses, investigational drug service pharmacists, research participant ombudsman |
| Proposed reporting body | Institutional review board | Human subjects protection program quality officer or ombudsman |
| Proposed method of event analysis | Events analyzed within the context of an individual protocol | Events analyzed in aggregate |
| Proposed method of corrective action | Modification of informed consent or study protocol | Systemic changes at the organizational level |

The current structure for institutional human subjects protection is not designed to collect and analyze aggregate data from multiple protocols. Most analyses of adverse events are in the context of the individual protocols to which they are assigned. To understand how this approach could result in an inappropriate determination of research risk, we present the following example:

Suppose two studies are being conducted simultaneously within an organization. Both involve the administration of an intravenous agent. In protocol A, the agent is an investigational drug; in protocol B, the agent is an approved drug given for research purposes. Both studies use a research unit set aside for the volunteers to receive the agents, and both volunteers are consented about the potential risks of intravenous catheters. In protocol A, the primary investigator places the catheter; in protocol B, a unit nurse places the catheter. Both subjects develop catheter infections.

In protocol A, the investigator reports this as an anticipated, nonserious adverse event not attributed to the investigational agent. The event is reported to the IRB and categorized as a known risk of the procedure, which was included within the consent document. The event undergoes an expedited review by the IRB committee chair, and no action is suggested. In the context of a single, expected procedural complication, one could see little justification for recommending further action. In protocol B, the catheter infection is reported to the IRB as a protocol violation. (The approved protocol had specified that a co-investigator was to place the intravenous catheter.) The IRB issues a warning to the investigator, who subsequently submits a protocol modification allowing additional personnel to place intravenous catheters. The adverse event is recorded as nonserious, and corrective action to prevent future protocol violations is put in place.

Further suppose that the unit room in which both participants were studied offers limited access to sanitizing hand-washing preparations, a known barrier to hand-washing compliance. 30, 31 This system flaw could predispose other research participants to catheter infections and could easily be corrected. Yet neither event, reviewed within the context of its specific study protocol, would be expected to raise suspicion about a possible systemic problem. Only if a single protocol resulted in multiple similar events would an investigator or IRB member suspect that a system defect exists. This would be similar to a hospital trying to determine whether it has a systemic problem with catheter infections by recording only the infections associated with a single health care provider. While this approach might detect egregious problems, it is unlikely to reduce overall catheter infection rates.
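To make the contrast concrete, the following is a minimal sketch, in Python, of aggregate event analysis. The event records and category labels are hypothetical illustrations, not data from any actual reporting system; the point is simply that pooling events across protocols exposes a recurring failure that per-protocol review cannot see.

```python
from collections import Counter

# Hypothetical adverse event reports pooled from several protocols.
# Field names and values are illustrative only.
events = [
    {"protocol": "A", "type": "catheter infection", "location": "unit room 3"},
    {"protocol": "B", "type": "catheter infection", "location": "unit room 3"},
    {"protocol": "C", "type": "medication error",   "location": "pharmacy"},
]

# Per-protocol view: each protocol sees a single, isolated event,
# which is exactly what an IRB reviewing one protocol at a time sees.
per_protocol = Counter(e["protocol"] for e in events)
print(per_protocol)  # Counter({'A': 1, 'B': 1, 'C': 1})

# Aggregate view: pooling across protocols by event type and location
# shows "catheter infection / unit room 3" recurring under different
# protocols: a candidate system failure worth investigating.
aggregate = Counter((e["type"], e["location"]) for e in events)
for (etype, loc), n in aggregate.items():
    protocols = sorted({e["protocol"] for e in events
                        if e["type"] == etype and e["location"] == loc})
    if n > 1 and len(protocols) > 1:
        print(f"Possible system failure: {etype} at {loc} "
              f"({n} events across protocols {protocols})")
```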

With this in mind, the authors have sought to develop and implement a method for identifying and tracking potential system problems related to the daily operations of a GCRC. General clinical research centers are specialized facilities designed to promote patient-oriented research. 32 Studies performed in a GCRC have both inpatient and outpatient components. Research subjects admitted to a GCRC can face many of the same safety concerns experienced by patients admitted to a clinical ward, and they therefore make a good study population for evaluating a near-miss reporting system.

A "near miss" represents the identification of a potential safety problem, prior to it resulting in an injury. Typically the same system failures that might have resulted in an injury also are present in a near miss. Researchers in human error suggest that by identifying near misses, patient safety can be improved by identifying latent failure prior to the occurrence of a catastrophic injury. 20 The most well-accepted model for how catastrophic adverse events develop from human error is Reason's "Swiss cheese" model, 33 which suggests that there are multiple latent failures within any system. A latent failure is typically a subtle design flaw that generally goes unreported because in isolation, it does not result in an adverse event. However, these latent failures serve as "holes" in the usual safety mechanisms. When enough of these failures are present, major errors can travel unimpeded through the normal safety mechanisms, ultimately resulting in an adverse event.

Near-miss reporting systems have many potential benefits over adverse event detection systems. Because an injury has not occurred, liability is limited. There is likely a greater frequency of near misses than adverse events, making near misses easier to accumulate. Finally, valuable information can be analyzed regarding practitioners' methods to recover from potential events. 34 This information has great potential for quality improvement efforts.

Few studies have investigated the reporting of near misses, although many industries, including clinical medicine, have either implemented or are experimenting with near-miss reporting systems. 34, 35 Within clinical medicine, most near-miss systems currently in practice involve transfusion medicine. In a study performed in Scotland, the majority of errors detected were near misses, and the investigators were able to classify the safety mechanisms most successful at detecting them. 36

We are developing a novel near-miss reporting system designed for a clinical research unit and testing it at the Vanderbilt University Medical Center General Clinical Research Center. We believe this tool will allow us to identify latent failures within the clinical research enterprise that might represent human subjects safety issues. The remainder of this paper details the authors' implementation strategy for the system and gives preliminary examples of research-related near misses.

Methods

In 2003, the National Center for Research Resources (NCRR) allocated more than $280 million to fund GCRCs across the United States. 32 These specialized facilities are designed to provide inpatient and outpatient space, laboratories, equipment, and supplies for clinical research. The Vanderbilt University Medical Center General Clinical Research Center has been continuously funded since 1960. Fifty-one full-time equivalent positions are funded through the grant, including 22 full-time and part-time registered nurses. In 2003, the Vanderbilt GCRC supported approximately 185 protocols.

The first step to ensure successful implementation of any reporting system is education of end users. Through didactic sessions, GCRC staff and investigators are instructed on the rationale behind implementing the reporting system, the importance of the system, and how to use it. These sessions are intended to create a nonthreatening environment that encourages reporting, protects confidentiality, and is nonpunitive. Specific staff members are also identified to serve as "advocates" for the system, as local buy-in is critical for organizational acceptance. System advocates are selected to represent diverse positions within the GCRC, including nurses, clinical investigators, clinical staff, and study coordinators.

To encourage participation, several reporting mechanisms are being developed, including Web-based reports, paper-based reports, and direct contact with the unit safety officer. Reports are anonymous, and the information obtained is analyzed using established quality improvement techniques, such as Pareto diagrams and failure modes and effects analysis (FMEA). A Pareto diagram arranges data graphically (usually as a bar chart ordered by frequency, with a cumulative-percent line) so that process improvement efforts can be prioritized toward the failure types offering the most substantial net gain. FMEA is a prospective risk-assessment methodology that identifies a wide range of potential failure modes and then narrows the analysis, using formal methods, to address specific failure modes. These techniques have been used to enhance patient safety in clinical medicine, yet they have not been explored as methods of quality improvement in clinical research. These tools will help determine root causes of near misses and generate hypotheses for improvement. Regular feedback is given to the staff regarding improvements initiated through use of the system. The authors hope to use the model of the continuous quality feedback cycle to sustain the reporting system.
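As an illustration of how such a Pareto analysis might be scripted, the sketch below builds a Pareto diagram from hypothetical near-miss counts using Python and matplotlib; the category labels and counts are invented for illustration and do not come from the unit's data. An analogous FMEA prioritization would score each failure mode for severity, occurrence, and detectability, ranking modes by the product of the three scores (the risk priority number).

```python
import matplotlib.pyplot as plt

# Hypothetical near-miss counts by category; labels are illustrative only.
counts = {
    "participant mis-identification": 9,
    "study drug preparation error": 6,
    "missed protocol step": 4,
    "specimen labeling error": 2,
    "other": 1,
}

# Sort categories by frequency and compute the cumulative percentage,
# the two ingredients of a Pareto diagram.
items = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
labels = [k for k, _ in items]
values = [v for _, v in items]
total = sum(values)
cumulative = [100 * sum(values[: i + 1]) / total for i in range(len(values))]

fig, ax = plt.subplots()
ax.bar(range(len(values)), values)
ax.set_xticks(range(len(labels)))
ax.set_xticklabels(labels, rotation=30, ha="right")
ax.set_ylabel("Near-miss reports")

# The cumulative-percent line on a secondary axis shows which few
# categories account for most of the reports.
ax2 = ax.twinx()
ax2.plot(range(len(values)), cumulative, marker="o")
ax2.set_ylabel("Cumulative percent")
ax2.set_ylim(0, 105)

fig.tight_layout()
plt.show()
```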

Currently, reports are collected in a free-text format. As reports accumulate, the authors are developing a taxonomy for research-related near misses, which will allow for a standardized reporting system that facilitates sharing of research-safety data across institutions. Other strategies employed within the unit to promote a culture of safety include debriefing sessions after unanticipated serious adverse events. These sessions allow multiple parties to carefully review the circumstances surrounding an adverse event and determine strategies to prevent future occurrences.
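The taxonomy is still under development, so as a purely hypothetical sketch, a structured report derived from it might look something like the record below; every field and category name here is an assumption for illustration, not the authors' actual schema.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

# A sketch of what a structured near-miss report might look like once a
# taxonomy replaces free text. All fields and categories are hypothetical.
@dataclass
class NearMissReport:
    report_date: date
    location: str                  # e.g., "GCRC inpatient unit"
    event_category: str            # e.g., "participant mis-identification"
    contributing_factors: List[str] = field(default_factory=list)
    recovery_mechanism: str = ""   # how the error was caught
    narrative: str = ""            # free text preserved for context

report = NearMissReport(
    report_date=date(2004, 3, 15),
    location="GCRC inpatient unit",
    event_category="participant mis-identification",
    contributing_factors=["no participant-protocol verification step"],
    recovery_mechanism="participant compared protocol to consent document",
    narrative="Volunteer directed into the wrong protocol at admission.",
)
```

Keeping the narrative field alongside the coded fields preserves context for root-cause analysis while the coded fields support the aggregate counting that Pareto analysis requires.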

Results

The GCRC near-miss reporting system has only recently been implemented and data are limited. Nevertheless, the authors have already received several near-miss reports. For the purposes of this paper, the authors will present a few representative examples.

Near-miss example 1. A research volunteer arrived at the GCRC unit over the weekend to be admitted for a specific research protocol. The protocol for which the participant had consented was led by an individual who was a principal investigator for multiple GCRC-based protocols. The participant identified himself to the ward clerk as being in the investigator's study, but did not identify the study by name, and he did not have a copy of the consent document with him. The participant was then mistakenly directed into another protocol, one that involved the administration of an investigational agent and invasive procedures to which he had not consented. Before beginning the study, the participant recognized that the protocol did not coincide with the consent document and notified the GCRC staff nurse. The error was detected without harm to the participant. An important lesson from this near miss was that the lack of a reliable research-participant tracking and verification program constituted a latent failure within the research unit. As a result, improvements to the unit's census administration and tracking protocols have been implemented to ensure that this type of error does not recur.
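A tracking-and-verification step of the kind this near miss motivates could be as simple as checking each arriving participant against a unit census that records the protocol to which they actually consented. The sketch below, in Python, is a hypothetical illustration; the census structure, identifiers, and function name are assumptions, not a description of the unit's actual fix.

```python
# Hypothetical unit census mapping each expected participant (by medical
# record number) to the one protocol they consented to.
census = {
    "MRN-001234": {"participant": "J. Doe", "protocol_id": "GCRC-2004-017"},
}

def verify_admission(mrn: str, protocol_id: str) -> bool:
    """Return True only if this participant is expected for this protocol."""
    entry = census.get(mrn)
    if entry is None:
        raise LookupError(f"{mrn} is not on the unit census; stop and verify.")
    if entry["protocol_id"] != protocol_id:
        raise ValueError(
            f"{mrn} consented to {entry['protocol_id']}, "
            f"not {protocol_id}; do not proceed."
        )
    return True

# Usage: the ward clerk confirms participant and protocol together at
# admission, rather than relying on the investigator's name alone.
verify_admission("MRN-001234", "GCRC-2004-017")
```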

Near-miss example 2. A study was being conducted with the assistance of the investigational pharmacy. The regular pharmacist who worked in this specialized unit was unexpectedly unavailable, and a pharmacist unfamiliar with study protocols was temporarily transferred to the investigational pharmacy to assist. The pharmacist inadvertently prepared an incorrect concentration of a study infusion drug. This medication error was detected by a staff nurse prior to drug administration. Further investigation revealed no formal policy on how temporary pharmacists are oriented to the investigational pharmacy and its research protocols. Since identifying this potential error source, the investigational pharmacy has implemented safeguards to prevent experimental medication preparation errors.

Adverse event example. A debriefing strategy was employed after a recent unexpected serious adverse event on the unit. The event involved a research participant who developed severe abdominal pain after receiving an investigational agent. At the time of the event, several investigators, many of whom were not affiliated with this particular protocol, happened to be on the unit attending an educational conference. They were able to respond immediately to the staff nurses' requests for assistance. The participant was examined, the pain was managed with narcotic analgesics, and the participant was admitted to the acute care hospital for further management.

Although the stakeholders involved noted the rapid response to the participant's symptoms by multiple clinical personnel, the debriefing established that such an abundance of clinical researchers on the unit is not typical overnight. The night-shift nurses noted that identifying and contacting investigators at night was often problematic in two ways. First, it was often unclear from the nurses' perspective who was to be contacted for overnight concerns. Second, covering investigators were often unfamiliar with the protocol and unaware that a participant had been admitted to the GCRC. The nurses suggested that had this adverse event occurred overnight rather than during the day, the response to and management of the event would likely have been less timely.

Based on lessons learned from this debriefing, a clinical research sign-out system is being developed. This system will link investigators to specific research participants who have overnight stays as part of their research protocol. Copies of the sign-out document will be available to GCRC staff nurses, principal investigators, and covering investigators. Covering investigators will have a brief description of the protocol and the medical history of the admitted participant, which they may refer to while at home. Staff nurses will be able to immediately identify the relevant investigator to contact regarding potential problems. Prior research suggests that sign-out systems can be very effective for minimizing communication problems, 37 but the authors are unaware of any published description of a clinical research sign-out system.
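As a rough sketch of the sign-out record described above, the following shows the kind of fields such a system might carry for each overnight participant. All names, identifiers, and contact details are invented; the paper specifies only that covering investigators receive a brief protocol description and the participant's medical history.

```python
from dataclasses import dataclass

# Hypothetical sign-out entry linking an overnight research participant
# to the investigators responsible for coverage.
@dataclass
class SignOutEntry:
    participant: str          # admitted research participant
    protocol_id: str
    protocol_summary: str     # brief description for the covering investigator
    medical_history: str      # pertinent history for overnight coverage
    primary_investigator: str
    covering_investigator: str
    covering_pager: str

entry = SignOutEntry(
    participant="J. Doe",
    protocol_id="GCRC-2004-017",
    protocol_summary="Single-dose PK study of investigational agent X.",
    medical_history="Healthy volunteer; no known drug allergies.",
    primary_investigator="Dr. A. Smith",
    covering_investigator="Dr. B. Jones",
    covering_pager="615-555-0100",
)

# A staff nurse can immediately identify whom to page overnight:
print(f"Page {entry.covering_investigator} at {entry.covering_pager} "
      f"for {entry.participant} ({entry.protocol_id}).")
```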

Discussion

Managing clinical risks that threaten research-participant safety will require safety strategies novel to traditional human subjects protection plans. In this methodological perspectives paper, the authors have attempted to detail why system approaches to safety are applicable to clinical research and to describe a novel near-miss reporting system. Preliminary data suggest that latent failures can be identified and addressed through this method. Further evaluation of the tool's impact on unit culture and safety is planned. Other novel methods to identify and manage potential system failures in clinical research need to be explored.

Adopting a system approach to human subjects protection will require substantial cultural changes in patient-oriented research. The first step necessary for cultural change is to accept that many risks to research volunteers are not intrinsic to the protocol, but are systemic and can be minimized only through the collection of organizational data analyzed in aggregate. Currently, no data exist to suggest what proportion of adverse events or protocol violations in clinical research are due to system problems. While these clinical risks likely represent only a small fraction of the total risks a research participant might face, the major concern is that they remain unidentified. New methods must be explored to detect these clinical risks and appropriately minimize them. This approach faces numerous barriers: clinical research remains a very competitive enterprise, and investigators often have legitimate reasons for wanting to minimize data sharing. Furthermore, as in clinical medicine, 38 investigators may be concerned that reporting study problems could expose them to undue litigation risks. This concern is justifiable, considering the current rise in research-related litigation. 39

To manage these clinical risks, individuals or groups of individuals must be identified to monitor and track reported problems. Managing study risks is a full-time endeavor, and already-overstretched IRBs 12, 13 cannot be expected to take on this task. Research ombudsmen or research quality officers might be potential candidates, particularly if their role involves overseeing multiple protocols. A mechanism for evaluating research clinical risks could also be developed within the framework of a research accreditation program. 6

One challenge is determining optimal methods for identifying system risks. For investigators, this will require some education in systems theory. Adverse events are now graded for severity and attributed based on their relationship to an investigational agent. Perhaps investigators could also be asked to rate the likelihood that an adverse event resulted from an organizational problem rather than from an intervention within the protocol. An appropriately selected body could then further evaluate these investigator-identified adverse events. Several other members of the research community are well positioned to detect clinical risks. Research nurses or staff nurses (such as those in the GCRC) who engage in clinical research on a regular basis are ideally suited for this task: their involvement in multiple protocols, often in rapid succession, allows them to see more clearly the research-related risks that stem from organizational problems. Investigational drug service pharmacists might be similarly suited to identify these problems.

This paper does not seek to trivialize the tremendous effort that has gone into formulating and implementing human subjects protection plans. These programs have protected millions of participants from undue harm and have greatly advanced patient safety. However, ensuring human subjects protection and overcoming the substantial barriers that prevent complete participant safety require more than a certificate indicating compliance with investigator training. Greg Koski, former director of OHRP, in an open letter to the research community, notes that the "key element of the remodeling process in human research protections is the move from a system focused on regulatory compliance to a system focused on prevention of harm." 40 Highly reliable industries (industries that involve high-risk technologies yet incur few work-related fatalities) have embraced systems-thinking approaches and have produced cultures preoccupied with safety. 41 Medical institutions, most notably the VHA, are only now trying to adopt this organizational culture of safety. 42 The first steps in this process require acknowledging that potential safety concerns exist, promoting a blame-free environment that encourages reporting of problems, developing multidisciplinary collaboration involving all stakeholders to seek possible solutions, and building organizational support. 43 Barriers such as fear of litigation and skepticism about new approaches hinder the adoption of this culture in clinical medicine. For clinical research, conflicts of interest 44 and litigation liability 39 will likely represent the most substantial barriers.

While the authors have proposed a near-miss reporting system to promote system awareness, other strategies will also augment human subjects protection. Information technology should be designed to facilitate communication among investigators, data and safety monitoring committees, and institutional review boards. Web-based adverse event reporting will help notify the IRB of potential problems as they occur. Improving communication has been a priority for safety interventions in clinical medicine 45 and should take a similar priority in clinical research. Communication among participants, investigators, and research institutions should be developed and encouraged. Public trust can be improved by making clinical research more transparent, but transparency requires that participants fully understand the risks associated with participation, as well as any potential conflicts of interest.

Conclusions

Research risks have traditionally been viewed and managed in the context of a specific research protocol. However, this approach does not allow institutions to identify and manage system failures that might also threaten research participant safety. To manage these system problems, human subjects protection programs will need to implement novel strategies for research participant safety, including organizational-level surveillance and aggregate adverse event analyses. These activities will require new commitments from investigators, academic institutions, and research sponsors. The effort to ensure that human subjects receive maximal protection from research risks will require that all research risks be identified and appropriately managed.

Acknowledgments

Dr. Murff is supported in part by grant 5 M01 RR-000095 from the National Center for Research Resources, National Institutes of Health. The funding organization had no role in the design and conduct of this study; the collection, analysis, and interpretation of the data; or the preparation, review, and approval of the manuscript.

Author affiliations

Division of General Internal Medicine, Vanderbilt University, Nashville (HJM, RSD). Department of Veterans Affairs, Tennessee Valley Healthcare System, GRECC, Nashville (HJM, RSD). General Clinical Research Center, Vanderbilt University, Nashville (HJM, DWB, PAH, CH). Department of Anesthesiology, Vanderbilt University Medical Center, Nashville (DJF).

Address correspondence to: Harvey J. Murff, M.D., M.P.H., Department of Veterans Affairs, Tennessee Valley Healthcare System, GRECC, 1310 24th Avenue South, Nashville, TN 37212-2637; phone: 615-327-4751, ext 6823; e-mail: Harvey.murff@med.va.gov.

References

1. Shalala D. Protecting human subjects ---what must be done. N Engl J Med 2000;343(11):808-10.

2. Abate T. Experiments on humans: business of clinical trials soars, but risks unknown. San Francisco Chronicle 2002 Aug 4; p. A-1.

3. Faden R. Creating systems that work: part II, recommendations and solutions. Presented at Accountability in Clinical Research: Balancing Risks and Benefits; 2002; Indianapolis.

4. NIH policy for data and safety monitoring. National Institutes of Health; 1998 Jun 10. http://grants1.nih.gov/grants/guide/notice-files/not98-084.html (Accessed 2003 Aug.)

5. Kahn JP, Mastroianni AC. Moving from compliance to conscience: why we can and should improve on the ethics of clinical research. Arch Intern Med 2001;161(7):925-8.

6. Preserving public trust: accreditation and human research participant protection programs. Washington, DC: National Academies Press; 2001.

7. Vincent C. Understanding and responding to adverse events. N Engl J Med 2003;348(11):1051-6.

8. Nolan TW. Understanding medical systems. Ann Intern Med 1998;128(4):293-8.

9. Suchman AL. Error reduction, complex systems, and organizational change. J Gen Intern Med 2001;16:344-6.

10. Plsek PE. Innovative thinking for the improvement of medical systems. Ann Intern Med 1999;131:438-44.

11. Nolan TW. System changes to improve patient safety. BMJ 2000;320(7237):771-3.

12. Institutional review boards: a time for reform. Testimony by George Grob, Deputy Inspector General for Evaluation and Inspection, Department of Health and Human Services, before the Committee on Government Reform and Oversight. Washington, DC: Department of Health and Human Services; 1998.

13. Burman WJ, Reves RR, Cohn DL, et al. Breaking the camel's back: multicenter clinical trials and local institutional review boards. Ann Intern Med 2001;134(2):152-7.

14. Steinbrook R. Protecting research subjects ---the crisis at Johns Hopkins. N Engl J Med 2002;346(9):716-20.

15. Paasche-Orlow MK, Taylor HA, Brancati FL. Readability standards for informed-consent forms as compared with actual readability. N Engl J Med 2003;348(8):721-6.

16. Wray NP. Research and development stand down. Official memorandum. Washington, DC: VA Research and Development; 2003.

17. National Advisory Research Resources Council. Recommendations to general clinical research centers for patient safety in clinical research. http://www.ncrr.nih.gov/clinical/gcrcpatientsafety20010622.asp (Accessed 2002 Jul 2.)

18. Federman DD, Hanna KE, Rodriguez LL, editors. Responsible research: a systems approach to protecting research participants. Washington, DC: National Academies Press; 2003.

19. Reason J. Managing the risks of organizational accidents. Aldershot, UK: Ashgate; 1997.

20. Leape LL. Error in medicine. JAMA 1994;272(23):1851-7.

21. Steinbrook R. Improving protection for research subjects. N Engl J Med 2002;346(18):1425-30.

22. VA research: protections for human subjects need to be strengthened. Washington, DC: General Accounting Office, Health and Human Services Division; 2000.

23. Gosbee J. Communication among health professionals: human factors engineering can help make sense of the chaos. BMJ 1998;316:642.

24. Vincent C, Taylor-Adams S, Stanhope N. Framework for analysing risk and safety in clinical medicine. BMJ 1998;316:1154-7.

25. Neily J, Ogrinc G, Mills P, et al. Using aggregate root cause analysis to improve patient safety. Jt Comm J Qual Saf 2003;29(8):434-9.

26. Khuri SF, Daley J, Henderson W, et al. The Department of Veterans Affairs' NSQIP: the first national, validated, outcome-based, risk-adjusted, and peer-controlled program for the measurement and enhancement of the quality of surgical care. Ann Surg 1998;228:491-507.

27. Leape LL, Bates DW, Cullen DJ, et al. Systems analysis of adverse drug events. ADE Prevention Study Group. JAMA 1995;274(1):35-43.

28. Bates DW, Leape LL, Cullen DJ, et al. Effect of computerized physician order entry and a team intervention on prevention of serious medication errors. JAMA 1998;280(15):1311-6.

29. Bates DW, Gawande AA. Improving safety with information technology. N Engl J Med 2003;348:2526-34.

30. Harris AD, Samore MH, Nafziger R, et al. A survey on handwashing practices and opinions of healthcare workers. J Hosp Infect 2000;45:318-21.

31. Burke JP. Infection control ---a problem for patient safety. N Engl J Med 2003;348(7):651-6.

32. NIH National Center for Research Resources. http://www.ncrr.nih.gov/clinical/cr_gcrc.asp (Accessed 2004 Apr.)

33. Reason J. Human error. New York: Cambridge University Press; 1990.

34. Barach P, Small SD. Reporting and preventing medical mishaps: lessons from non-medical near miss reporting systems. BMJ 2000;320(7237):759-63.

35. Callum JL, Kaplan HS, Merkley LL, et al. Reporting of near-miss events for transfusion medicine: improving transfusion safety. Transfusion 2001;41(10):1204-11.

36. Ibojie J, Urbaniak SJ. Comparing near misses with actual mistransfusion events: a more accurate reflection of transfusion errors. Br J Haematol 2000;108(2):458-60.

37. Petersen LA, Orav EJ, Teich JM, et al. Using a computerized sign-out program to improve continuity of inpatient care and prevent adverse events. Jt Comm J Qual Improv 1998;24(2):77-87.

38. Leape LL. Reporting of adverse events. N Engl J Med 2002;347(20):1633-8.

39. Mello MM, Studdert DM, Brennan TA. The rise of litigation in human subjects research. Ann Intern Med 2003;139:40-5.

40. Koski G. An open letter to the human research community. Rockville, MD: Department of Health and Human Services, Office for Human Research Protections. http://ohrp.osophs.dhhs.gov/references/oltr2.pdf (Accessed 2003 Jul 9.)

41. Reason J. Human error: models and management. BMJ 2000;320(7237):768-70.

42. Weeks WB, Bagian JP. Developing a culture of safety in the Veterans Health Administration. Eff Clin Pract 2000;3(6):270-6.

43. Pizzi LT, Goldfarb NI, Nash DB. Promoting a culture of safety. In: Shojania KG, Duncan BW, McDonald KM, Wachter RM, editors. Making health care safer: a critical analysis of patient safety practices. AHRQ Evidence Report/Technology Assessment, Vol. 43. Rockville, MD: Agency for Healthcare Research and Quality; 2001. pp. 457-67.

44. Kelch RP. Maintaining the public trust in clinical research. N Engl J Med 2002;346(4):285-7.

45. Murff HJ, Bates DW. Information transfer. In: Shojania KG, Duncan BW, McDonald KM, Wachter RM, editors. Making health care safer: a critical analysis of patient safety practices. AHRQ Evidence Report/Technology Assessment, Vol. 43. Rockville, MD: Agency for Healthcare Research and Quality; 2001. pp. 481-96.