Advances in Patient Safety: From Research to Implementation. Volume 4. Programs, Tools, and Products

Online Patient Safety Climate Survey: Tool Development and Lessons Learned

Lynne M. Connelly, Judy L. Powers

Abstract

Objective: A key tenet of patient safety programs is the elimination of the "culture of blame." The On-line Patient Safety Climate Survey was developed to evaluate the corporate safety climate of the U.S. Army Medical Department (AMEDD). Methods: The survey tool was designed to measure willingness to report errors, problem-solving processes, and perceptions of the leadership's concern for patient safety. The survey included two demographic questions, 19 items using a 4-point Likert scale (1 = strongly disagree to 4 = strongly agree), and one text item that asked respondents to identify the number one safety issue at their facility. After the instrument was tested to evaluate its psychometric properties, it was administered at 37 military hospitals and clinics to establish a systemwide baseline. Results: In 2001, staff at 37 medical treatment facilities (MTFs) participated in the survey (N = 10,769). The overall systemwide score (all respondents) was positive (2.95), and analyses of specific items showed that error reporting was an area of concern. Conclusions: The On-line Patient Safety Climate Survey demonstrated adequate psychometric properties and the ability to provide an accurate assessment of the overall safety climate across the various clinical treatment facilities of an organized health care system. The results provided information useful for establishing a corporate baseline and identifying specific quality improvement needs.

Introduction

A great deal of recent media and consumer attention has focused on the health care industry's response to a national imperative to improve hospital patient safety. As a result, the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) has published patient safety standards emphasizing the critical role of organizational leaders in the development of a safety culture. 1 In addition, President Bill Clinton issued an executive order mandating that all Federal agencies involved with patient care act to reduce medical errors and improve clinical safety. In support of these regulatory requirements, the Army Medical Department (AMEDD) took a health care leadership role and implemented a proactive, standardized, and integrated corporate patient safety campaign.

The AMEDD leadership made a priority of collecting baseline data on staff perceptions related to the organizational safety climate prior to developing and implementing its quality improvement project. To this end, an Internet-based survey tool was created to maximize staff participation and ensure anonymity. The tool evaluated three safety climate factors: (1) staff willingness to report medical errors, (2) organizational problem-solving processes, and (3) perceptions regarding the organizational leadership's concern for patient safety. This initiative supported the AMEDD's comprehensive strategy for establishing an environment that encourages all personnel to identify actual and near-miss patient safety events, places a high value on the analyses of systems issues contributing to such events, and supports identification and redesign of vulnerable patient care processes.

Literature review

The 1999 Institute of Medicine (IOM) report, To Err Is Human: Building a Safer Health System, 2 addressed the high human and economic costs of medical errors and urged health care leaders to take immediate steps to improve patient safety. According to estimates in the scientific literature, as many as 100,000 deaths occur each year in the United States as a result of medical errors, and 50 percent of such deaths may be preventable. 2-6 Although earlier publications had reported similar estimates, it was the IOM report that drove home the importance of patient safety and focused the media on national efforts to inform the general public of this critical issue. 3, 5, 7, 8 As a result, the emphasis on improving safety to prevent patient harm has become a key priority for health care professionals, legislators, and accreditation organizations both nationally and internationally.

The authors of the IOM report emphasized the complexity of the nation's health care system, rather than professional incompetence or individual carelessness, as the key factor contributing to the vast majority of medical errors. Historically, the medical culture expected and reinforced the concept of infallibility, and the immediate response to a medical error was to identify, blame, and punish the individuals involved. 2, 9, 10 Errors were considered individual performance problems and were addressed through counseling, retraining, and practice restrictions. Blame was assigned routinely to the physician or nurse, without consideration of the possibility that organizational and systems-related factors may have been at fault. 11, 12 This punitive model provoked anger, guilt, and embarrassment in health care professionals, who feared a loss of professional status if they openly and honestly acknowledged errors. 6, 9, 12 The name-and-blame health care model discouraged error reporting, and years of opportunities that might have been devoted to objective outcome analyses, high-risk process redesigns, and the communication of lessons learned were lost.

The literature suggests that health care organizations would benefit tremendously from modeling their error reporting and analysis practices on the high-hazard, but highly reliable, aviation industry. 12-14 Successes in aviation safety have been attributed to the efforts of organizational leaders and their ability to recognize, prioritize, and implement effective safety improvement strategies in daily practice. Examples of these success strategies include confidential reporting of actual and near-miss errors; recognition and reward programs for individuals who report safety events; and structured crew resource management programs that improve communication and teamwork and facilitate shared decisionmaking. 12-14

Safety management programs command the full attention and priority of senior leaders in aviation and other high-hazard industries and are comprehensive, pervasive, and visible. These same attributes are considered the key components of high-reliability organizations. Researchers have found that errors are reported more often in nonpunitive environments. 3, 4, 12, 13 High-reliability organizations place a heavy emphasis on learning and recognize the fact that complex systems make error-free performance difficult for even the best trained and motivated workers. These organizations understand and plan for the most common types of errors, and they design systems to prevent errors or compensate for them before they cause harm. 10-12, 15-17 In high-reliability organizations, risk is readily acknowledged, error reporting is rewarded, and injury prevention is regarded as everyone's responsibility.

To achieve a culture of safety, we all must become personally accountable for safety. 18 In order to make health care safer, organizational systems must be redesigned to make the commission of errors more difficult. 19, 20 An interdisciplinary approach to patient safety begins with an organizational structure committed to cooperation and communication, and one that welcomes and encourages positive change. 9, 10 If we are to attain the level of change needed in the health care industry, a new understanding of accountability is needed, one that moves beyond blaming and punishing individuals to focusing on learning from errors and designing organizational systems that anticipate problems and implement effective solutions to protect patients from harm. 9, 10, 12

The success of the AMEDD patient safety program is dependent upon the creation of a positive safety climate in each of its medical treatment facilities (MTFs). According to Flin et al., "safety climate can be regarded as the surface features of the safety culture discerned from the workforce attitudes and perceptions at a given point in time. It is a snapshot of the state of safety providing an indicator of the underlying safety culture of the work group, plant or organization." 21 Although most studies addressing safety climate have emerged from industrial settings, the key components of a positive safety climate are transferable to the clinical health care environment. A positive safety climate includes the active involvement of the organizational leadership in safety programs, high rank and status for safety professionals, strong safety training programs and communication processes, and a clear emphasis on recognition for safe performance, rather than a reliance on punishment and enforcement. 12, 13, 16, 17, 18, 20, 22

Purpose

The "On-line Patient Safety Climate Survey" was developed to gauge the patient safety mindset of staff at Army medical organizations. This electronic tool was designed to 1) provide a quality improvement assessment of the patient safety climate at various facilities; 2) encourage participation through the use of a short, easy-to-complete format; 3) protect the identity of survey respondents; and 4) provide for easy analysis of the collected data. The Internet-based survey structure was selected for its ease of administration and respondent convenience.top link

Conceptual framework

Our framework was based on the need to reduce any "climate of blame" that may exist in our MTFs with regard to the reporting of unintentional clinical errors. We perceived the patient safety climate as consisting of three factors: 1) how most people in the organization regard the reporting of their own errors and those of others; 2) the willingness of people to cooperate in the development of solutions to patient safety problems; and 3) the perceived patient safety attitudes of people in leadership positions. We feel strongly that the safety climate of an organization is reflected in the majority opinion of its members. If the majority of staff in an MTF place a great deal of importance on these factors and hold them in high regard, then the organization will have a "climate of support" as opposed to a "climate of blame."

Setting

This project was part of an AMEDD patient safety program implemented in 37 small, medium, and large MTFs located throughout the U.S., Europe, Japan, and Korea. Most of the facilities include acute care and ambulatory units. The primary concerns that arose during development of the survey were the need for worldwide respondent access to the survey document and the need for ongoing data analysis as a means of monitoring participation. These concerns led to the decision to administer the survey over the Internet. The online survey was developed so that respondent data flowed immediately into a database that was set up to reverse-code negatively worded items and calculate basic descriptive statistics, including frequencies and means. The database provided ongoing participation information and permitted initial analysis. We provided periodic feedback on participation to survey coordinators at each facility, along with reminders on the importance of encouraging participation.

The survey needed to be short and easy to complete in order to encourage maximum participation from the very busy staff at each facility. We used appropriate tool development methods, but it is important to recognize that this tool was designed for quality improvement assessment, not for rigorous research. We decided, for example, not to ask for extensive demographics that could lead to the identification of particular respondents and discourage participation in smaller facilities. This decision required some explanation to staff at the participating facilities.

Survey development and psychometrics

The first step in developing the survey was to review similar surveys that were in use or published in the quality management literature. Most of these tools were too lengthy or did not address each of the specific topics that we needed to assess, so we decided to develop our own tool. The survey items were generated from the results of a literature review and were based on our conceptual framework. To ensure content validity, the items were submitted to a panel of five experts with backgrounds in either quality management or tool development. They were asked to rate each item on a 4-point scale (4 = very relevant; 3 = relevant but needs minor alterations; 2 = unable to assess relevance without item revision; 1 = not relevant). The Content Validity Index (CVI) for an item is the proportion of experts rating it as content valid (i.e., a rating of 3 or 4); with a panel of five experts, unanimous agreement establishes content validity beyond the .05 level of significance. 23 Any items with a CVI lower than 1.0 (i.e., those that were not rated a 3 or 4 by all of the experts) were either rewritten or eliminated. A second round of content validity testing was conducted to substantiate the revised items. We then conducted a pilot study with the final version to verify that the items were easily understood, to determine the time needed to administer the survey, and to ensure its ease of use.
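For readers implementing a similar content validation step, the brief sketch below (not the authors' actual code; the item names and ratings are hypothetical) shows how an item-level CVI can be computed from a panel's relevance ratings on the 4-point scale described above.

# Illustrative sketch only: item-level Content Validity Index (CVI) from expert
# relevance ratings on the 1-4 scale described in the text.

def item_cvi(ratings):
    """Proportion of experts rating an item 3 ('relevant') or 4 ('very relevant')."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

expert_ratings = {
    "willing_to_report_errors": [4, 4, 3, 4, 4],    # CVI = 1.0 -> retain
    "fear_negative_consequences": [4, 3, 2, 4, 4],  # CVI = 0.8 -> rewrite or eliminate
}

for item, ratings in expert_ratings.items():
    cvi = item_cvi(ratings)
    action = "retain" if cvi == 1.0 else "rewrite or eliminate"
    print(f"{item}: CVI = {cvi:.2f} ({action})")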

The final version of the survey consists of two demographic questions, 19 Likert scale items, and one text item. The Likert scale items were scored on a 4-point continuum, from strongly disagree to strongly agree. We deliberately selected a forced-choice scale with no neutral midpoint. 24 The text item asked the respondents, "What is your perception of the number one patient safety issue at your medical treatment facility?" As mentioned previously, the respondents were asked to provide only the name of their MTF and their position (staff nurse, physician, pharmacist, etc.) to better preserve the anonymity of the survey. Five items (items 3, 6, 10, 13, and 16) were worded negatively to prevent response set (answering all items the same way). These items were reverse-coded to provide a meaningful total score. In the tables presented, they appear as originally worded with their raw scores, so that the items read sensibly; the total score is based on the reverse-coded values.
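As a minimal sketch of this scoring scheme (not the authors' database code; the respondent data below are made up), reverse coding on the 1-4 scale simply maps 1 to 4 and 2 to 3 for the negatively worded items before item means and total scores are computed.

# Illustrative sketch only: reverse-code the negatively worded items (per Table 2)
# and compute a respondent's mean score across the 19 Likert items.

NEGATIVE_ITEMS = {3, 6, 10, 13, 16}   # negatively worded item numbers (see Table 2 note)

def reverse_code(item_no, score):
    """On the 1-4 scale, reverse coding maps 1<->4 and 2<->3."""
    return 5 - score if item_no in NEGATIVE_ITEMS else score

def respondent_mean(responses):
    """responses: dict mapping item number -> raw 1-4 rating for one respondent."""
    coded = [reverse_code(i, s) for i, s in responses.items()]
    return sum(coded) / len(coded)

# Hypothetical respondent who answered every item with a raw rating of 3.
example = {i: 3 for i in range(1, 20)}
print(round(respondent_mean(example), 2))   # negatively worded items recode to 2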

The following definitions of select terms were provided on the survey form to better clarify related survey items:

Patient safety is defined as actions undertaken by individuals and organizations to protect patients from being harmed by the effects of health care services.

A near miss / close call is an event that could have resulted in harm to a patient, but did not, either by chance or through timely interventions.

Sentinel events are unexpected occurrences involving death or serious physical or psychological injury.

Survey experts recommend reporting the overall reliability estimate, as well as the subscale estimates, to provide a total picture of an instrument's reliability. 25 The overall estimate reflects the internal consistency of all of the tool's items, and therefore the tool's ability to measure the overall construct of a patient safety climate. The subscale estimates demonstrate the reliability of the groups of items that make up the subcomponents of the larger construct.

To ensure the tool's reliability, we tested it for internal consistency (Cronbach's alpha = .86) prior to administration with a group of 43 students with clinical backgrounds attending a graduate program at the Army Medical Department Center and School. A test-retest reliability measurement (r = .98) was conducted two weeks later with the same group of students. Lynn recommends that reliability be tested on each administration of any instrument. 26 The Cronbach's alpha for the baseline assessment (n = 10,769) was .90. In addition, a paper version of the same instrument was administered at a small Naval hospital (n = 47), with a Cronbach's alpha of .91. On the first administration, the subscales also were tested for reliability: willingness to report errors (α = .724), problem-solving processes (α = .848), and attitude of leadership (α = .762).
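For reference, the sketch below shows the standard Cronbach's alpha computation used for such internal-consistency estimates. It is a generic illustration, not the authors' analysis code, and the demonstration data are fabricated.

import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents-by-items array of (reverse-coded) ratings."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 6 respondents x 4 items on the 1-4 scale.
demo = [[3, 3, 4, 3],
        [2, 2, 2, 3],
        [4, 4, 4, 4],
        [3, 2, 3, 3],
        [1, 2, 1, 2],
        [3, 3, 3, 4]]
print(round(cronbach_alpha(demo), 3))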

An exploratory factor analysis of the baseline responses also was conducted to assess construct validity. A principal components analysis with Varimax rotation was used, and the data loaded on a three-factor solution. The three factors were similar but not identical to the subscales. The first factor included items on reporting errors and sharing information. The second factor was related to problem-solving and the positive aspects of leadership, and the third factor included items related to the negative aspects of leadership and negative consequences. The three-factor solution explained 50.78 percent of the variance.
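The following sketch illustrates, in generic terms, how such an analysis can be run: principal components are extracted from the item correlation matrix and the retained loadings are Varimax-rotated. It is not the authors' code, and the data are random numbers used only to make the example runnable.

import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Kaiser's Varimax rotation of a p x k loading matrix (standard algorithm)."""
    p, k = loadings.shape
    rotation = np.eye(k)
    d_old = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3 - (gamma / p) * rotated
                          @ np.diag(np.diag(rotated.T @ rotated)))
        )
        rotation = u @ vt
        d = s.sum()
        if d_old != 0 and d / d_old < 1 + tol:
            break
        d_old = d
    return loadings @ rotation

def pca_varimax(data, n_factors=3):
    """Principal components of the item correlation matrix, Varimax-rotated.
    data: respondents x items array of reverse-coded 1-4 ratings."""
    corr = np.corrcoef(np.asarray(data, dtype=float), rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1][:n_factors]
    loadings = eigvecs[:, order] * np.sqrt(eigvals[order])   # unrotated loadings
    explained = eigvals[order].sum() / corr.shape[0]         # proportion of variance
    return varimax(loadings), explained

rng = np.random.default_rng(0)
demo = rng.integers(1, 5, size=(200, 19))      # hypothetical 200 respondents x 19 items
rotated_loadings, variance_explained = pca_varimax(demo)
print(rotated_loadings.shape, round(variance_explained, 3))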

Survey administration

The survey was conducted via an Internet Web site during an administration period that extended from mid-August to the end of September 2001. Originally, the survey was to remain online for about three weeks prior to the beginning of the AMEDD Patient Safety Training Program. The original timeframe had to be extended for approximately one month, however, due to military issues related to the terrorist attacks on September 11, 2001, and a flooding event in one of the medical facilities.

Each MTF commander (equivalent to a CEO) assigned an individual to serve as the survey coordinator for their facility. A standardized explanation of the survey, instructions for administering it, and the Web site URL (address) were provided to the coordinators. We communicated participation figures back to the commanders and coordinators by e-mail at frequent intervals. We also asked them to distribute weekly reminder messages encouraging staff participation; however, the surveys were not mandatory and military regulations prohibit the use of material incentives for volunteer recruitment.

Findings of baseline assessment

A total of 10,769 MTF staff members participated in the online survey, which translates to a corporate response rate of roughly 40 percent based on Army Medical Department staffing figures. More precise figures were difficult to obtain because of frequent military deployments during the administration period, so this percentage is an estimate of the overall response rate. No data were available on non-responders. The findings from the survey represent a baseline assessment of the patient safety climate in each MTF, and in the corporation as a whole, for quality improvement purposes. The breakdown of respondents by profession/specialty is provided in Table 1. The overall corporate score (all respondents) was 2.95 on a 4-point scale, which was in the desired positive direction.

Table 1. Respondent demographics


Respondents by position

Administrator/Supervisor 2,083
Dietary Technician 45
Dietitian 62
Lab Technician 332
LPN/91WM6 626
NA/91W 691
Nurse Practitioner 185
Occupational Therapist 42
Physician's Assistant 137
Pharmacist 152
Pharmacy Technician 200
Physical Therapist 133
Physician 1,349
Psychiatric Technician 101
Psychologist 108
Radiology Technician 193
Social Worker 227
Speech Therapist 16
Staff RN 1,108
Other 2,979

Total responses: 10,769

Table 2 shows the item means and standard deviations. The following score ranges were used to place the items into categories for comparative purposes: 3.50-4.00 = highly agree; 2.50-3.49 = agree; 1.50-2.49 = disagree; and 1.00-1.49 = strongly disagree.
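Expressed as a simple rule (an illustration only, not the authors' code), the categorization works as follows.

# Illustrative sketch of the categorization rule quoted above.
def score_category(mean_score):
    """Map a 1-4 item mean to the comparison categories used in Table 2."""
    if mean_score >= 3.5:
        return "highly agree"
    if mean_score >= 2.5:
        return "agree"
    if mean_score >= 1.5:
        return "disagree"
    return "strongly disagree"

print(score_category(2.95))   # 'agree' (the overall corporate score falls in this band)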

Table 2. Item analysis


Item  AMEDD*  SD†  Category

Most people in this MTF...
1. are willing to report clinical errors. 3.04 .681 Agree
2. agree that patients also play a role in preventing clinical errors. 3.25 .610 Agree
3. fear there will be negative consequences associated with reporting clinical errors. 2.50 .780 Agree
4. provide support for those who make unintentional clinical errors. 2.96 .666 Agree
5. cooperate with one another to resolve patient safety issues. 3.19 .648 Agree
6. are not willing to admit to patients when they make an error. 2.37 .747 Disagree
7. regularly report clinical errors. 2.70 .709 Agree
8. feel comfortable reporting unsafe patient conditions to the supervisor. 3.08 .712 Agree
9. believe things will be done to reduce the likelihood of a clinical mishap. 3.25 .557 Agree
10. do not believe the organization's senior leaders place a high priority on patient safety. 1.96 .804 Disagree
11. believe most clinical errors are preventable. 3.13 .525 Agree
12. are willing to discuss what went wrong when a sentinel event occurs. 3.10 .611 Agree
13. often blame others for their own mistakes. 2.23 .757 Disagree
14. are willing to report near miss/close call patient incidents. 2.74 .668 Agree
15. believe their immediate supervisors are committed to improving patient safety. 3.10 .646 Agree
16. hesitate to change practice habits to improve patient safety. 2.14 .731 Disagree
17. are willing to share information about clinical errors and what caused them. 2.95 .625 Agree
18. regularly report clinical errors whether or not the patient was harmed. 2.74 .701 Agree
19. believe MEDCOM leadership is truly committed to improving patient care. 3.06 .658 Agree

Average Overall Score 2.95 .403

* AMEDD = organization-wide mean score. Each item was scored on a 4-point scale: Strongly Disagree (1), Disagree (2), Agree (3), and Strongly Agree (4).
† SD = standard deviation.
Items 3, 6, 10, 13, and 16 are negatively worded; for these items, a "disagree" answer is the more positive response.
The following groupings were used to categorize items: 3.50-4.00 = highly agree, 2.50-3.49 = agree, 1.50-2.49 = disagree, and 1.00-1.49 = strongly disagree.

The scores for the survey items fell into the desired (positive) categories, with the exception of two items. The respondents agreed with the statement, "Most people in this MTF fear there will be negative consequences associated with reporting clinical errors." They also agreed with the statement, "Most people in this MTF often blame others for their own mistakes." In an effort to identify other potential problem areas, we looked at items that fell in the 2.50-2.75 scoring range for positively worded items and the 2.25-2.50 scoring range for negatively worded items; these ranges indicate items that are close to falling in the opposite direction. Two items related to error reporting had average scores in these ranges: "Most people in this MTF regularly report clinical errors," and "Most people in this MTF regularly report clinical errors whether or not the patient was harmed."
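The screening rule described above can be sketched as follows (illustrative only; the item means shown are a subset of those in Table 2, and the thresholds are those stated in the text).

# Illustrative sketch of flagging items whose means sit near the agree/disagree
# boundary: 2.50-2.75 for positively worded items, 2.25-2.50 for negatively
# worded items.

NEGATIVE_ITEMS = {3, 6, 10, 13, 16}

def near_boundary(item_no, mean_score):
    if item_no in NEGATIVE_ITEMS:
        return 2.25 <= mean_score <= 2.50
    return 2.50 <= mean_score <= 2.75

item_means = {2: 3.25, 7: 2.70, 18: 2.74}    # subset of the Table 2 means
flagged = [i for i, m in item_means.items() if near_boundary(i, m)]
print(flagged)   # items 7 and 18 (regular reporting of clinical errors)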

The survey instrument's three subscales also were analyzed. The average scores for the subscales were as follows: willingness to report errors (2.78), problem-solving processes (3.37), and perceptions of leadership (3.11) (Figure 1). Although the scores for each of the subscales fell into the positive categories, the willingness to report scores lagged behind the other two.

Figure 1. Subscale Items


Responses to the optional text item, "What is your perception of the number one patient safety issue at your medical treatment facility?", included 6,053 separate issues identified by 5,621 respondents. (Many respondents cited multiple issues, despite being asked to describe the number one patient safety issue.) Each comment was coded according to the topic described, and similar topics were then consolidated into larger categories. Table 3 summarizes the most frequently mentioned categories, indicating the number of times each was mentioned and its percentage of the overall responses. Staffing and medication errors were identified as the top two patient safety concerns.

Table 3. Number one patient safety issue at your MTF (Question 20)


Issue Number Identified Percent of Total

Medication Errors 920 15.20 %
Staffing 864 14.27 %
Facility 433 7.15 %
Inexperience/Lack of Training 362 5.98 %
Falls 294 4.86 %
Continuity of Care 267 4.41 %
Culture/Leadership 249 4.11 %
General Comments about Patient Safety 205 3.39 %
Equipment 164 2.71 %
Infection Control 147 2.43 %
Children Unattended or Uncontrolled 146 2.41 %
Documentation Errors 145 2.40 %
Reporting of Errors 131 2.16 %
Patient Identification 127 2.10 %
Communication 126 2.08 %
Lack of Time 119 1.97 %
Patient Education 113 1.87 %
Security 105 1.73 %
Poor Attitude 101 1.67 %
Scope of Practice 89 1.47 %
Housekeeping 79 1.31 %
Accountability/Attention to Detail 75 1.24 %
Lack of Supervision 63 1.08 %
Patient Confidentiality 49 0.81 %
Missed Diagnosis 43 0.71 %
Taskings 40 0.66 %
Transfer/Transport of Patients 40 0.66 %
Restraints 35 0.58 %
Needle Sticks 34 0.56 %
Not Following Orders/SOPs 26 0.43 %
Policy 24 0.40 %
Stress 24 0.40 %
Specific to Facility 16 0.26 %
Positive Comments Related to Organizational Support & Involvement in PS 352 5.82 %

Total 6,053 100.00 %
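As a generic illustration of how such coded comments can be tallied into the counts and percentages shown in Table 3 (not the authors' coding process; the category labels and responses below are examples only):

from collections import Counter

# Illustrative sketch only: tally coded free-text responses by category and
# report counts with percentages of the total, as in Table 3.
coded_responses = [
    "Medication Errors", "Staffing", "Medication Errors",
    "Falls", "Staffing", "Medication Errors",
]

counts = Counter(coded_responses)
total = sum(counts.values())
for category, n in counts.most_common():
    print(f"{category}: {n} ({n / total:.2%})")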

Lessons learned

The importance of working with a computer programmer who has the right level of programming knowledge cannot be overemphasized. We worked with the information management/information technology (IM/IT) developer to create an online report that could be generated directly from the survey database to provide respondent scores, aggregate scores for each MTF, and an average score for each survey item. We then fine-tuned the report by sitting down with the developer for half a day on two separate occasions. These consultations ensured that the online report would provide us with the data we needed. Frequent communication with the programmer also was necessary to work through unanticipated problems that arose as the survey was administered, and the programmer made suggestions that improved the reports. This process worked well until the developer with whom we were collaborating left the organization for another position. The individual who was subsequently assigned to work with us did not have the same level of skill and was unable to provide the same level of support.

The survey coordinators at each facility were essential, both for communicating participation reminders and for helping to secure access to computers. It also was necessary to educate those involved at the facility level about the nature of the survey and its use as a quality improvement tool. Researchers in the field often wanted to compare facilities to determine if they were "normal." As the project leadership, we had to emphasize frequently that normal is no longer a suitable patient safety goal, while reminding them that the survey was intended to measure relative internal improvement and was not designed for external comparisons. We also sought to reduce the natural competitiveness that sometimes arises in these situations. The facilities varied so greatly by size, personnel mix, geographic location, and mission that comparisons across facilities were meaningless. The goal of the development team was a safer patient care environment, achieved through encouraged and measured internal improvement, and this aim was decidedly more important than external comparisons. One additional problem that surfaced during the data collection period was the appearance of another Army-sponsored online survey, which led to confusion in at least some of the MTFs. Every attempt should be made to define the purpose and scope of the project in such a way that potential participants will not confuse it with another research effort taking place in the same facility.

Limitations

The survey findings should be reviewed with care, as this instrument was conceived as a quality improvement project and was not designed to support rigorous research. The response rate is an estimate based on Army-wide databases, so some caution is needed when interpreting the findings; considering the number of respondents, however, the survey appears to provide a clear picture of the patient safety climate in this multi-facility organization. Due to resource limitations, no attempt was made to examine non-responders. The desire for strong psychometric properties was balanced against the need for a survey instrument that was short and easily administered, to encourage the maximum participation of busy medical professionals. The final version of the tool is useful for internal quality improvement assessments, but the relatively small number of survey items warrants caution with regard to its use for other purposes.

Conclusions

The survey tool described herein supports and assists our MTFs and the entire AMEDD structure in meeting the JCAHO patient safety standards that require assessment of the organizational climate and of staff willingness to report medical errors. It provided the AMEDD leadership with valuable baseline information, while facilitating periodic reassessments to identify areas in need of improvement and strategies essential to achieving recognition as a high-reliability organization with a safety-focused culture.

Additionally, the instrument demonstrated acceptable psychometric properties and the ability to measure change over time. The findings indicate that the AMEDD patient safety organizational climate is relatively positive, although there is still work to be done. The key area of concern continues to be a reluctance on the part of the staff to report medical errors, an issue that also is reflected in the literature.

Acknowledgments

The opinions expressed in this article are solely those of the authors and do not necessarily represent the official views of the U.S. Government, the Department of Defense, or the Department of the Army.

We would like to thank Dr. Carol Reineck for reviewing an earlier version of this manuscript.

Author affiliations

Lynne M. Connelly, PhD, RN, Interim Associate Dean for Information Technology and Curriculum Resources, School of Nursing, University of Texas Health Science Center, San Antonio, TX (LMC). Great Plains Regional Medical Command, U.S. Army Medical Department, Fort Sam Houston, TX (JLP).

Address correspondence to: COL (Ret.) Judy L. Powers, MSN, CNAA; e-mail: jlpowerstexas@yahoo.com.

References

1. New safety and error reduction standards for hospitals. Joint Commission Perspectives 2001 Feb;21(2):1, 3.

2. Kohn LT, Corrigan JM, Donaldson MS. To err is human: building a safer health system. A report of the Committee on Quality of Health Care in America, Institute of Medicine. Washington, DC: National Academy Press; 2000.

3. Buerhaus P. Lucian Leape on the causes and prevention of errors and adverse events in health care [interview]. Image J Nurs Sch 1999;31(3):281-6.

4. Leape LL, Woods DD, Hatlie MJ, et al. Promoting patient safety by preventing medical error [editorial]. JAMA 1998;280(16):1444-7.

5. Brennan TA, Leape LL, Laird NM. Incidence of adverse events and negligence in hospitalized patients: results of the Harvard Medical Practice Study I. N Engl J Med 1991;324:370-7.

6. Leape LL. Error in medicine. JAMA 1994;272:1851-7.

7. Leape LL, Simon R, Kizer KW, et al. Reducing medical error: can you be as safe in a hospital as you are in a jet? National Health Policy Forum, Issue Brief 1999;740:2-8.

8. The Quality Interagency Coordination Task Force (QuIC). Doing what counts for patient safety: federal actions to reduce medical errors and their impact. 2000 Feb.

9. Berwick DM, Leape LL. Reducing errors in medicine: it is time to take this more seriously. BMJ 1999;319:136-7.

10. Classen DC, Kilbridge PM. Roles and responsibility of physicians to improve patient safety within health care delivery systems. Acad Med 2002;77(10):963-72.

11. Perry SJ. Profiles in patient safety: organizational barriers to patient safety. Acad Emerg Med 2002;9(8):848-50.

12. Sexton J, Thomas E, Helmreich R. Error, stress, and teamwork in medicine and aviation: cross sectional surveys. BMJ 2000;320:745-9.

13. Gaba DM, Singer SJ, Sinaiko AD, et al. Differences in safety climate between hospital personnel and naval aviators. Hum Factors 2003;45(2):173-85.

14. Goodman GR. Fragmented patient safety concept: the structure and culture of safety management in health care. Nurs Econ 2003;22(1):44-6.

15. Becher EC, Chassin MR. Improving quality, minimizing error: making it happen. Health Aff 2001;20(3):68-81.

16. Beyea SC. Creating a just safety culture. AORN J 2004;79(2):412-4.

17. Singer SJ, Gaba DM, Geppert JJ, et al. Culture of safety: results of an organization-wide survey in 15 California hospitals. Qual Saf Health Care 2003;12(2):112-8.

18. Geller ES. Ten leadership qualities for a total safety culture. Prof Safety 2000;45(5):38-41.

19. Buerhaus PI. Follow-up conversation with Lucian Leape on errors and adverse events in health care. Nurs Outlook 2001;49(2):73-7.

20. Berwick DM. Taking action to improve safety: how to increase the odds of success. In: Enhancing patient safety and reducing errors. Chicago: National Patient Safety Foundation; 1999. pp. 1-11.

21. Flin R, Mearns K, O'Connor P, et al. Measuring safety climate: identifying common features. Safety Science 2000;34:177-92.

22. Gershon RM, Karkashian CD, Grosch JW, et al. Hospital safety climate and its relationship with safe work practices and workplace exposure incidents. Am J Infect Control 2000;28(3):211-21.

23. Lynn MR. Determination and quantification of content validity. Nurs Res 1986;35:382-5.

24. Urden LD. Patient satisfaction measurement: current issues and implications. Lippincott's Case Management 2002;16(5):194-200.

25. LoBiondo-Wood G, Haber J. Nursing research: methods, critical appraisal and utilization. St. Louis: Mosby; 1994.

26. Lynn MR. Instrument reliability and validity: how much needs to be published? Heart & Lung 1989;18:421-3.