
Mistake-Proofing the Design of Health Care Processes

Chapter 4. Design Issues, Caveats, and Limitations

Introduction

Mistake-proofing is not without its pitfalls. In the 15th century, knights and men-at-arms wore heavy armor to protect themselves from their enemies' weaponry. In the context of mistake-proofing, this can be thought of as a strategy to reduce the impact of the "mistake" of being struck by enemy weapons. The French knights' heavy armor worked against them, however, when they fought the English in the Battle of Agincourt on October 25, 1415. Instead of saving lives, it contributed to their defeat. Heavy rains on the recently plowed battlefield created deep mud. Soldiers wearing heavy armor were unable to maneuver in the mud once they were even slightly injured or pushed to the ground; some even drowned in it. Lightly armored or unarmored English archers, on the other hand, were able to move more nimbly and inflict severe damage on the French.

One lesson to be learned from the Battle of Agincourt is that design issues must be taken into account as part of an effective implementation. Otherwise, mistake-proofing efforts intended to reduce errors or their impact could themselves cause significant problems. Implementation problems can be avoided by managing design issues while recognizing the limitations and liabilities of mistake-proofing. In the TRIZ methodology,1 problems associated with a solution are referred to as secondary problems. It is no surprise that most mistake-proofing devices involve secondary problems; almost every solution does. Several recurring mistake-proofing design issues must be taken into consideration. These include the need to:

  1. Mistake-proof the mistake-proofing.
  2. Avoid moving errors to another location.
  3. Prevent devices from becoming too cumbersome.
  4. Commit the appropriate resources.
  5. Avoid Type I error problems.
  6. Avoid unintended utilization of benefits.
  7. Prevent worker detachment from the process.
  8. Prevent workers from losing skills.

Each of these issues is discussed in this chapter.


Mistake-Proof the Mistake-Proofing

Mistake-proofing devices should be mistake-proofed themselves. They should be designed with the same rigor as the processes the devices protect. The reliability of mistake-proofing devices should be analyzed, and if possible, the device should be designed to fail in benign ways.

Reliability of Devices

Reason2 warns that systems with extensive automatic error detection and correction mechanisms are more prone to a devious form of failure called a latent error.3 Latent errors remain hidden until events reveal them and are very hard to predict, prevent, or correct. They often "hide" inside automatic error detection and correction devices. An error that compromises an inactive detection and recovery system is generally not noticed, but when the system is activated to prevent an error, it is unable to respond, leaving a hole in the system's security. This is an important design issue, although it is quite likely that the errors prevented by the automatic error detection and correction systems would have caused more damage than the latent errors induced by the systems.

Devices Sometimes Fail

The following scenario, in which a mistake-proofing device failed, is a tragic example of the type of latent error Reason identified. In Chapter 3, devices were modeled as if they were perfectly reliable; in practice, they are not. The analysis below suggests how device reliability can be modeled to assess the benefits and risks presented by the latent error.

In January 2002, two women died during the same routine heart procedure in the same room.4 They were both mistakenly given nitrous oxide instead of oxygen because a device that regulates oxygen flow was plugged into a receptacle that dispenses nitrous oxide (Figure 4.1).

The flow regulator was missing one of the index pins designed to prevent such mix-ups. The mistake-proofing depended on pins indexed at the 12 and 6 o'clock positions for the oxygen regulator and at the 12 and 7 o'clock positions for the nitrous oxide regulator. The pin had broken off. A mistake-proofing device had failed.
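
A minimal sketch of why a single broken pin defeats this kind of keying follows. It is purely illustrative: pin and socket positions are modeled abstractly as clock positions, and a connection is assumed to succeed whenever every remaining pin finds a matching socket.

# Illustrative model of pin indexing (not a representation of any actual
# hardware). A regulator seats on an outlet only if every pin it still has
# lines up with a socket on that outlet; a missing pin removes a constraint
# rather than adding one.
def can_connect(regulator_pins, outlet_sockets):
    return regulator_pins <= outlet_sockets  # subset test

oxygen_outlet = {12, 6}
nitrous_oxide_outlet = {12, 7}

intact_oxygen_regulator = {12, 6}
print(can_connect(intact_oxygen_regulator, oxygen_outlet))         # True: intended connection
print(can_connect(intact_oxygen_regulator, nitrous_oxide_outlet))  # False: mix-up blocked

damaged_regulator = {12}  # the 6 o'clock pin has broken off
print(can_connect(damaged_regulator, nitrous_oxide_outlet))        # True: the latent error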

The fact that devices fail, while sometimes tragic, does not mean that mistake-proofing is an unsound prevention strategy. Consider the fault trees in Figures 4.2 and 4.3. They correspond to the harmful event and benign failure shown in Chapter 3, Figure 3.9, except that here the mistake-proofing device is not perfectly reliable. In Chapter 3, Cause 4 was completely removed from the harmful event fault tree. Here, the device is not very reliable, failing 1 percent of the time. Cause 4 remains in the tree because it can still cause the harmful event whenever it occurs and the mistake-proofing device fails at the same time. The probability that both of these events occur is 0.0005 (1 percent of the 0.05 probability of Cause 4 alone), two orders of magnitude smaller than without the device. Device failures should be a catalyst for further exploration of improvements that could be incorporated into the device's design to increase its reliability.
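
The arithmetic behind the AND gate in these fault trees can be sketched as follows. The sketch is illustrative only; the 0.05 value for Cause 4 is inferred from the figures quoted above rather than taken from an additional source.

# Illustrative fault-tree arithmetic for the AND gate in Figures 4.2 and 4.3.
# Assumed inputs: the probability of Cause 4 is taken as 0.05 so that the
# result matches the text; the device failure rate is the stated 1 percent.
p_cause4 = 0.05          # probability that Cause 4 occurs (inferred)
p_device_fails = 0.01    # probability that the mistake-proofing device fails

# The harmful event via Cause 4 now requires both events to occur.
p_harmful_via_cause4 = p_cause4 * p_device_fails
print(p_harmful_via_cause4)             # 0.0005
print(p_cause4 / p_harmful_via_cause4)  # 100.0, i.e., two orders of magnitude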

Devices should fail benignly, too. One approach to improving the reliability of devices is to design them so that they fail in benign ways. The air brakes on tractor-trailer trucks engage if air pressure is lost. When scuba regulators fail, they are designed to deliver a constant flow of air instead of no flow at all. A more benign design for the pin indexing system that failed would use pressure on the index pins to open the flow of gas; if a pin broke off, the gas would not flow. If a device cannot be designed to fail benignly, its reliability should be improved by adding system redundancy, or a program of careful maintenance, calibration, and inspection should be put in place.


Avoid Moving Errors to Another Location

When designing mistake-proofing devices, it is important to avoid the common problem of moving errors instead of eliminating or reducing them. For example, in jet engine maintenance, placing the fan blades in the correct position is very important. The hub where the blade is mounted has a set screw that is slightly different in size for each blade so that only the correct blade will fit. This solves numerous problems in assembly and maintenance throughout the life of the engine. It also produces real problems for the machine shop that produces the hubs; it must ensure that each set screw hole is machined properly.

Moving the error to another location can provide a benefit in the following circumstances:

  1. The error is moved to a location in the process where interruptions are more controllable or less likely.
  2. The means of detection are better at the new location.
  3. The consequences of the error are less severe or are reversible.


Prevent Devices from Becoming Too Cumbersome

How mistake-proofing devices affect processes is another design issue that must be considered. A device can be cumbersome because it slows the process down during normal use or because, once the process has been stopped, it is difficult to determine why and to get it restarted.

Slow Down the Process

If a mistake-proofing device slows down the process, workers will find coping strategies (also known as "work-arounds") to enable them to get their work done. Consider the table saw, a common woodworking power tool. Each saw is equipped at the factory with a blade guard that covers the spinning blade as the wood is passed through the saw (Figures 4.4A and 4.4B). In an unscientific survey of manufacturing workers who own table saws at home, the majority reported removing the guard from their saws. When asked why, most responded that it "got in the way" or "did not operate smoothly." The lesson is that workers will circumvent cumbersome devices that make work more difficult.

Ideally, mistake-proofing should be designed so that the device is transparent to the process, like the orientation check of the 3.5-inch diskette drive. It does not slow down the process until an error occurs.

In some cases, mistake-proofing devices can actually make the correct execution of the process easier and faster. The pick-to-light bin system (Figure 4.5) is one such device. Workers select items from the bins to fill customers' orders, and each order is different. The bin system is linked to a computer that downloads each customer's order. Each bin has a light above it, and an infrared beam detects the insertion of the worker's hand. Workers fill the order by picking an item from each lighted bin. The light goes off automatically as the item is picked. An alarm sounds if a worker reaches into the wrong bin. The subsequent operation, order packaging, will not operate until all the lights on the bin system are off. The pick-to-light system improved worker productivity dramatically compared with paper orders. Omitted-part defects were reduced from 400 per million to 2 per million.5
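
The control logic described above can be sketched as follows. This is a simplified, hypothetical model written only to illustrate the interlocks; real pick-to-light systems implement the same logic in dedicated hardware and software.

# Simplified, hypothetical model of pick-to-light interlocks.
class PickToLightBins:
    def __init__(self, order):
        # Light every bin that the downloaded order requires.
        self.lights_on = set(order)

    def pick(self, bin_id):
        if bin_id not in self.lights_on:
            print(f"ALARM: bin {bin_id} is not part of this order")
            return False
        self.lights_on.discard(bin_id)   # the light goes off as the item is picked
        return True

    def ready_to_package(self):
        # Packaging is interlocked until every light is off.
        return not self.lights_on

bins = PickToLightBins(order={2, 5, 7})  # an order needing bins 2, 5, and 7
bins.pick(5)
bins.pick(3)                    # wrong bin: alarm sounds, order state unchanged
bins.pick(2)
bins.pick(7)
print(bins.ready_to_package())  # True only after all lights are off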

Difficulty Troubleshooting

If too many mistake-proofing devices stop the process in the same way, it can become difficult to determine which error is responsible and how to resume normal process operation.

During the Christmas season of 1997, Toymax sold a very popular toy called "Metal Molder" (Figure 4.6). This toy enabled children to mold molten metal into small charms and trinkets. The toy was thoroughly mistake-proofed to keep children's fingers separate from the molten metal. Various locking mechanisms prevented the process from proceeding until all the required conditions had been met, and all the previous steps had been completed. There were so many reasons that the process could be stopped that the children for whom it was intended (8-year-olds and older) were mystified about how to get it to work. Something was obviously wrong; they just could not figure out what it was.

Similar outcomes occurred with early computer software, when the descriptor "user-friendly" differentiated new versions from older ones. Some of these programs checked user inputs so carefully that it became difficult to know what was wrong or how to fix it.

Effective implementations of mistake-proofing must give different process stoppages manifestations that are distinct enough to make troubleshooting the error obvious and rapid.


Commit the Appropriate Resources

Because many mistake-proofing devices are simple and inexpensive, they pay for themselves very rapidly. In safety-critical industries where error costs are high, they could pay for themselves the first time an error is detected.

Under other circumstances, ensuring that a device is cost-justified requires careful cost-benefit analysis. Multimillion-dollar investments in high-technology solutions like computerized physician order entry, widespread bar-coding, and robotic pharmacies certainly require careful financial deliberations. Consumable devices that have small per-unit costs but are used in large quantities per error detected (i.e., errors are relatively rare) may also require careful cost justification (go to the Bloodloc™ section in Chapter 7, Example 7.8). In addition to traditional cost-benefit analysis, models based on the economic design of statistical process control charts are available.6
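
A rough sketch of this kind of justification for a consumable device follows. Every number in it is a hypothetical assumption used only to show the shape of the calculation: the consumable cost incurred per error caught is compared with the cost of the error itself.

# Hypothetical break-even calculation for a consumable mistake-proofing
# device. All figures are assumptions for illustration, not data from the
# text or from any actual device.
unit_cost = 5.00          # cost of the consumable each time it is used (assumed)
error_rate = 1 / 10_000   # errors per use of the process (assumed)
cost_of_error = 75_000.0  # average cost of one undetected error (assumed)

uses_per_error = 1 / error_rate                 # 10,000 uses to catch one error
spend_per_error_caught = unit_cost * uses_per_error
print(spend_per_error_caught)                   # 50000.0
print(spend_per_error_caught < cost_of_error)   # True: the device is cost-justified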

An unwillingness to invest enough in error reduction projects is common. Repenning and Sterman7 wrote an article with a particularly salient title: "Nobody Ever Gets Credit for Fixing Problems That Never Happened." It seems to be easier for managers to pay for a lawsuit after an incident than it is to justify investing in prevention before the fact. Repenning and Sterman7 describe how the difficult situations managers confront contribute to this bias against investing in prevention. They assume that productive capabilities deteriorate over time and that ongoing investments in capabilities are needed to ward off entropy. These capabilities lead to actual performance, which is assessed against desired performance. Managers typically feel pressure to resolve gaps between actual and desired performance.

Applied behavior analysis suggests that people respond best to outcomes and rewards that are "soon," "certain," and "positive." As a result, managers would rather not reallocate worker time to "improvement work" and instead concentrate on the core production work of the firm. They are biased toward production work because its impact is immediate (soon), it is completely within their control (certain), and it will likely reduce the performance gap in the short term (positive). Improvement work tends not to offer the same rewards. Its outcomes, though promising (positive), will be delayed while teams organize, define, measure, improve, and control (not soon), and if the performance problem is not solved, the effort will have been in vain (not certain).


Faced with these alternatives, managers decide to concentrate on core production work that improves performance in the short term, but that allows capabilities to deteriorate in the long term. These deteriorating capabilities give rise to performance gaps that generate more pressure on managers to focus on production work instead of improvement work, and a vicious downward spiral of capabilities ensues. Consequently, improvements, including the implementation of mistake-proofing, will require managers to subordinate short-term pressures to the long-term goals of the organization. The performance gap will need to widen initially if it is to narrow in the long term (Figure 4.7).


Avoid Type I Error Problems

If mistake-proofing is used for a mistake detection application and replaces an inspection or audit process in which sampling was used, changing to the 100 percent inspection provided by a mistake-proofing device may have unintended consequences. Specifically, there will be significantly more information collected about the process than there would be when only sampling is used.

Suppose the error of inferring that something about the process is wrong when, in fact, the process is normal (a Type I error) occurs only a small percentage of the time. Because every unit is now inspected, the number of opportunities for a Type I error increases dramatically. The relative frequency of Type I errors is unchanged, but the number of Type I errors per hour or per day increases. It is possible that too many instances requiring investigation and corrective action will occur, and properly investigating and responding to each one may not be feasible. Papadakis8 discusses this problem and a possible remedy.
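
A small numerical sketch (with hypothetical rates and volumes) shows the effect: the false-alarm probability per inspection stays the same, but the expected number of false alarms per day grows in proportion to the number of inspections.

# Hypothetical illustration of Type I (false alarm) frequency when moving
# from sampled inspection to the 100 percent inspection a device provides.
# Both the false-alarm rate and the daily volumes are assumed values.
p_type1 = 0.001           # false-alarm probability per inspection (assumed)
sampled_per_day = 50      # items inspected per day under sampling (assumed)
total_per_day = 10_000    # items inspected per day at 100 percent (assumed)

print(p_type1 * sampled_per_day)  # 0.05 expected false alarms per day
print(p_type1 * total_per_day)    # 10.0 expected false alarms per day
# The relative frequency (0.1 percent) is unchanged; only the number of
# signals requiring investigation grows.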


Avoid Unintended Utilization of Benefits

The benefits of mistake-proofing can include lower cognitive workload, reduced chances of error, and faster and more easily learned processes. Whether these benefits are used to generate patient safety or some other benefit is an important and open question. The strength of the organization's safety culture will dictate the answer.

Risk Homeostasis

Risk homeostasis, as presented by Wilde,9 maintains that:

In any activity, people accept a certain level of subjectively estimated risk to their health, safety, and other things they value in exchange for the benefits they hope to receive from that activity... people continuously check the amount of risk they feel they are exposed to. They compare this with the amount of risk they are willing to accept, and try to reduce any difference between the two to zero.9

In the context of mistake-proofing, risk homeostasis means that design changes intended to improve patient safety might actually result in changes of behavior that provide other benefits instead. Wilde9 points out that anti-lock brakes do not result in fewer or less severe accidents. Drivers choose to go faster in inclement weather because they know they have anti-lock brakes. They use the brakes to facilitate risky behavior while maintaining a constant overall risk level. All other things being equal, drivers essentially trade safety enhancements for additional speed.

Consider the case of the oxygen flow regulator with the missing index pin.4 The regulator was inserted into a nitrous oxide outlet where the view of the outlet was obstructed. The worker had a legitimate expectation that the regulator could not be inserted into an incorrect outlet. One might speculate that the obstructed view was tolerated, and that the labeling, color-coding, and other cues about what to do went unused, because of reliance on the pin indexing system. If the worker had known that the pin indexing system was not in place, he might have looked at the outlets more carefully. Here, a mistake-proofing device may have facilitated a behavioral change that traded a safety margin for convenience and speed.

Such behavioral changes should be anticipated as secondary problems during the design of a mistake-proofing device. FMEA, fault trees, and other analyses used throughout the design process can help identify and address them. The possibility of risk homeostasis also highlights how critical safety culture is in monitoring and managing the level of risk that individuals in the organization will tolerate.

Reduced Cognitive Content

Mistake-proofing can reduce the amount of cognitive content in work tasks. However, the benefits that accrue to organizations from this reduction can be perceived very differently according to the organization's intent, culture, and strategy.

Erlandson, Noblett, and Phelps10 studied the performance of students with cognitive impairments at Northwest Wayne Skill Center (NWWSC). These students/workers ranged in age from 15 to 22 years. Their IQ scores ranged from 45 to 86. Their job was to assemble fuel filter clamps for the automotive industry (Figures 4.8A and 4.8B). This task was initially very difficult for the students and led to low morale among those assigned to the task. Quality levels were between 35 percent and 70 percent acceptable production. The rate of production was approximately 62.5 assemblies/student-hour. Mistake-proofing was employed to create a work fixture that made it difficult to make mistakes. The mistake-proofed fixtures allowed for the use of "a much larger worker pool, reflecting a broader range of cognitive disabilities. The students were able to produce approximately 167 completed, acceptable assemblies/student-hour with accuracy rates approaching 100 percent."10 NWWSC found that worker morale improved. Workers reported to work early and, at the end of the day, congratulated each other for their significantly increased productivity. The purchasing company was also enthusiastic about the "quality and quantity" of production. NWWSC reported a zero return rate after producing 100,000 clamps following the introduction of mistake-proofing.

As was the case at NWWSC, when designing and implementing mistake-proofing devices, it is important to ensure that a culture exists in which the benefits will be used to enhance patient safety. Such a culture uses the benefits of mistake-proofing to free health care professionals from attending to the minute details of a process so that they can attend to more important aspects of patient care.

The methods employed at NWWSC seem much more ominous when employed by organizations that are less interested in promoting the well-being of workers. Social critics have found fault with the dehumanization of work since the dawn of the industrial revolution, and mistake-proofing has been used to exploit workers. Unfortunately, lowering the skill level or cognitive content of work tasks encourages some companies to reduce training costs and to show little concern for treating employees well enough to retain them over the long term. When processes are mistake-proofed, workers become interchangeable and can be treated as a disposable commodity.

Pursuing such a strategy to simplify the work also enables employers to employ individuals with fewer economic options. It could be argued that the human resources policies of these companies reveal that their intent is not the same as that of the Northwest Wayne Skill Center. NWWSC employs the disabled in a meaningful way, providing them with more options and upholding their dignity in a culture that encourages respect for workers.


Prevent Worker Detachment from the Process

Bose,11 a self-proclaimed proponent of mistake-proofing (poka-yoke), discusses concerns that the use of mistake-proofing devices may estrange workers. Overzealous mistake-proofing "generates its own cultural attributes, its own work ethic. It sows the seeds of operator detachment from the product..." Bose points out that North American industries have been trying to involve workers in process management for the past decade. He differentiates between useful simplification of the process, including eliminating the possibility of creating defects, and the detrimental elimination of required skills. Mistake-proofing should "enable the operator to easily oversee the process."11


Prevent Workers from Losing Skills

Bainbridge12 and Parasuraman et al.13 assert that reducing workers' tasks to monitoring and intervention functions makes those tasks more difficult. Bainbridge asserts that workers whose primary task is monitoring will see their skills degrade from lack of practice, so they will be less effective when intervention is called for. Workers will tend not to notice when usually stable process variables change and an intervention is necessary. Automatic features, like mistake-proofing devices, will isolate workers from the system, concealing knowledge about its workings that is needed during an intervention. Finally, automatic systems will usually make decisions at a faster rate than monitoring personnel can check them. Parasuraman, Molloy, and Singh13 looked specifically at the ability of operators to detect failures in automated systems. They found that the detection rate improved when the reliability of the system varied over time, but only when the operator was responsible for monitoring multiple tasks.

Know the Third Boundary

Rasmussen14 and Rasmussen, Pejtersen, and Goodstein15 warn that errors play an important role in learning to become more efficient. Extensive use of automatic error detection and correction mechanisms, such as mistake-proofing devices, could have a negative effect on this learning. Rasmussen and his co-authors argue that workers are constrained by three boundaries:

  1. The boundary of unacceptable workload, which workers will desire to move as far away from as possible.
  2. The boundary of financial breakdown, from which management will drive the workers away.
  3. The boundary of functionally acceptable behavior, beyond which system failures occur.

Efficiency is gained by learning the exact location of the third boundary so that processes can take place as far from the other two boundaries as possible without crossing the third. The location of this boundary is discovered through trial-and-error testing. Automatic error detection and correction mechanisms, by concealing the boundary of functionally acceptable behavior, can prevent the learning and skill development that might otherwise promote efficiency.

Conclusion

This chapter suggests that lessons about mistake-proofing in health care can be learned from several diverse disciplines. Mistake-proofing is not a panacea, and it is not without limitations and liabilities. Inattention to these limitations and liabilities in the design and implementation of mistake-proofing devices can lead to problems. Designing and implementing devices effectively requires careful thought. The design issues identified in this chapter should not be a deterrent to implementing mistake-proofing. Rather, they should serve as a basis for thorough and thoughtful consideration of mistake-proofing designs.

References

1. Innovation Workbench Masters Program. Self-sufficiency in inventive problem solving. Southfield, MI: Ideation International; 2001.

2. Reason JT. Managing the risks of organizational accidents. Aldershot, UK: Ashgate; 1997.

3. Perrow C. Normal accidents: living with high-risk technologies. New York: Basic Books; 1984.

4. Collins J. City police investigate St. Raphael's: Hospital could face charges for accidental deaths of two women. New Haven, CT: Yale Daily News; January 21, 2002. http://www.yaledailynews.com/Article.aspx?ArticleID=17694. Accessed December 2006.

5. Grout JR. Mistake-proofing: process improvement through innovative inspection techniques. In: Woods J, Cortada J, eds. The quality yearbook. New York: McGraw-Hill; 1998.

6. Downs BT, Grout JR. An economic analysis of inspection costs for mistake-proofing binomial attributes. J Qual Technol 1999; 31(4):417-26.

7. Repenning NP, Sterman JD. Nobody ever gets credit for fixing problems that never happened: creating and sustaining process improvement. California Management Review 2001; 43(4):64-88.

8. Papadakis EP. A computer-automated statistical process control method with timely response. Engineering Costs and Production Economics 18:301-10.

9. Wilde GJ. Target risk 2: new psychology of safety and health. Toronto: PDE Publications; 2001.

10. Erlandson RF, Noblett MJ, Phelps JA. Impact of a pokayoke device on job performance of individuals with cognitive impairments. IEEE Transactions on Rehabilitation Engineering 1998; 6(3):269-76.

11. Bose R. Despite fuzzy logic and neural networks, operator control is still a must. CMA 1995; 69:7.

12. Bainbridge L. Ironies of automation. In: Rasmussen J, Duncan K, Leplat J, eds. New technology and human error. New York: John Wiley & Sons; 1987.

13. Parasuraman R, Molloy R, Singh I. Performance consequences of automation-induced 'complacency.' The International Journal of Aviation Psychology 1993; 3(1):1-23.

14. Rasmussen J. The role of error in organizing behavior. Ergonomics 1990; 33(10/11):1185-99.

15. Rasmussen J, Pejtersen AM, Goodstein LP. Cognitive systems engineering. New York: John Wiley & Sons; 1994.
