Office of Public Health Emergency Preparedness (OPHEP)

Quality Improvement
Implications for Public Health Preparedness

Michael Seid, Debra Lotstein, Valerie Williams, Christopher Nelson, Nicole Lurie, Allison Diamant, Jeffrey Wasserman, Stefanie Stern

TR-316-DHHS

February 2006

Prepared for the U.S. Department of Health and Human Services Office of the Assistant Secretary for Public Health Emergency Preparedness

Contents

Preface
Figures
Executive Summary
Acknowledgments
Abbreviations
Chapter One.  Introduction
What Is Quality Improvement?
Examples of QI-Related Initiatives in Public Health
Issues Examined in This Report
Organization of This Report
Chapter Two.  Approach for Applying Quality Improvement in Public Health Emergency Preparedness
A Conceptual Framework for Applying Quality Improvement in Public Health Preparedness
Criteria Used in Our Evaluation
Approach for Site Visits
Chapter Three. Examples of Quality Improvement in Public Health Emergency Preparedness
All Sites Reported Having Performance Goals for PHEP
Some Sites Had Concerns About Using CDC Guidance as a Source for Their Goals
Many Sites Had Difficulty Prioritizing Goals
Some Sites Wanted a Clearer Definition of Preparedness
Sites Frequently Used Performance Measures for Routine Processes
Sites Commonly Used Drills and Exercises to Measure Rare Processes
Many of the Measures Were Imprecise or Lacked Clear Objectives
Some of the Measures Were Not Relevant for PHEP
The Use of Measurement Was Not Pervasive, and Documentation Was Often Lacking
Sites Had Typically Implemented Only a Few QI Practices
Several Sites Considered the Incident Command Structure to Be a Means of Improving PHEP in Response Situations
Different QI Practices Were Used at the State and Local Levels
QI Practices Were More Likely to Be Implemented If They Were Integrated into Daily Work
Successful Implementation of QI Practices Requires Specific Effort
Few Sites Had a Systematic Process for Incorporating Changes Suggested by Improvement Efforts
After Action Reports Were Often Not Acted Upon
Organizational Culture and Leadership Were Key to QI Efforts
Lack of Adequate Resources Was a Barrier to QI
Financial Resources Were Also Critical
Incentives for QI Were Typically Lacking
Many Sites Emphasized Training and Skill Development to Facilitate QI
Chapter Four. Overarching Themes and Recommendations
Overarching Themes
Limitations
Recommendations
Conclusion
Appendix A.  Quality Improvement Components and Contextual Factors
Appendix C.  Screening Interview Protocol
Appendix D.  Site Visit Interview Protocol
References
Copyright Information: RAND Corporation

Figures
Figure 2.1.  High-Level Schematic for Public Health Preparedness

Preface

Over the past four years, the U.S. Department of Health and Human Services (HHS) has made significant investments in state and local public health to enhance public health emergency preparedness. The RAND Corporation was contracted to work with the HHS Office of Public Health Emergency Preparedness (OPHEP) to develop resources and to prepare analyses to help describe and enhance key aspects of state and local public health emergency preparedness (PHEP).
As part of this contract, RAND was asked to develop a framework for and evaluate models of applying quality improvement (QI) in public health and to investigate the applicability of those models to PHEP. Specifically, RAND was to address the following questions:

  • Are there examples of QI in public health, and what can we learn from them with regard to PHEP?
  • What organizational and contextual factors facilitate QI, and what are the implications of such factors for PHEP?
  • How can public health leaders implement and accelerate QI as it relates to PHEP?

To answer these questions, we identified public health departments that were nominated as having promising practices with respect to quality improvement overall, and in public health emergency preparedness in particular; performed site visits with interviews at a subset of these sites; and drew themes from our interview notes.
This work was carried out from October 2004 through October 2005. This report was prepared specifically for the Office of Public Health Emergency Preparedness, but it should be of interest to individuals working in public health preparedness at the federal, state, and local levels.
Comments or inquiries should be sent to the RAND principal investigators, Nicole Lurie (Nicole_Lurie@rand.org) and Jeffrey Wasserman (Jeffrey_Wasserman@rand.org), or addressed to the first author of this report, Michael Seid (mseid@rand.org).
This work was sponsored by the OPHEP and was carried out within the RAND Health Center for Domestic and International Health Security. RAND Health is a division of the RAND Corporation. A profile of the Center, abstracts of its publications, and ordering information can be found at http://www.rand.org/health/healthsecurity/. More information about RAND is available at http://www.rand.org.

Executive Summary

Recent events such as the terrorist attacks of September 11, 2001, the anthrax attacks, the flu vaccine shortage of 2004-2005, and the response to Hurricane Katrina have all rekindled interest in strengthening the nation's public health infrastructure and, in particular, have shown the importance of public health emergency preparedness (PHEP).  To enhance the public health system and address gaps in preparedness, the U.S. government has spent billions of dollars since September 2001 to introduce surveillance systems, purchase equipment, and develop plans and measures.

Despite these efforts, concerns remain about the ability of the public health system to respond to emergencies.  Federal and state budget deficits strain the current system, while changes in the health care delivery system and ambivalence about the role of government have resulted in relatively low expectations for public health and what it can achieve.  Standards for defining and measuring preparedness are lacking, and there are few measures with which to assess the performance—and progress—of health departments in emergency preparedness or to implement systematic change.  Adding to these challenges is the complexity of the public health system itself, which includes thousands of county and city health departments; local boards of health; state and territorial health departments; tribal health departments; public and private laboratories; parts of multiple federal departments and agencies; hospitals and other health care providers; volunteer organizations, such as the Red Cross (Lister, 2005); and private vaccine and drug manufacturers and distributors.  Moreover, the broad mission of public health, which extends from the promotion of physical and mental health to disease prevention, means that emergency preparedness must compete with many other programs and activities.

The goal of this study is to help address gaps in public health by showing how quality improvement (QI) methods can be used to improve the emergency preparedness of the system.

What Is Quality Improvement?

The term “quality improvement” has been used broadly by practitioners in various fields to refer to a range of strategies and techniques designed to improve performance and quality. Originally developed in manufacturing, QI methods have been applied to health care, engineering, service industries, and emergency response organizations, although not widely within public health.

Our review of the literature suggests that, although different fields use QI in many different ways, all QI programs designed for complex organizations reflect a set of core concepts (Langley, Nolan, et al., 1996; Cretin, Shortell, et al., 2004; Lighter and Fair, 2004; Daita, 2005):

  • Emphasis on systems.  QI considers quality to be the result of complex, interdependent systems, with individuals working within these systems. Quality problems are solved by focusing on the system, not by exhorting poor-performing individuals to “try harder.”
  • Product or outcome focus.  QI efforts are oriented toward a specific product or outcome, and planning is focused on the needs of those served by the organization.  
  • Use of quantifiable measures.  Potential changes and procedures are evaluated according to their effect on quantifiable measures.
  • Reduction in variability.  The focus of QI efforts is on the design of processes to reduce unwarranted variability in the product or services provided (outcomes of the process).
  • Continuous improvement.  QI efforts need to be ongoing, rather than one-time initiatives, since organizational processes and systems will usually change with time.

In sum, this report defines QI as a multidisciplinary, systems-focused, data-driven method of understanding and improving the efficiency, effectiveness, and reliability of public health processes and practices related to preparedness. The ongoing process of QI requires all four of the following elements: performance goals, performance measures, QI practices, and feedback and reporting (Langley, Nolan, et al., 1996; Lighter and Fair, 2004).

Approach

In this study, we examined the QI practices used in a small group of public health departments (HDs).  We conducted site visits with seven HDs and telephone interviews with two others. During these interviews, we looked for examples of four components of QI:

  • Performance goals, i.e., agreed-upon targets for improvement efforts of the organization or work unit
  • Performance measures that focus on structures, processes, or outcomes of an organization’s work
  • QI practices, i.e., systematic techniques and approaches used to understand processes, identify needed changes, and implement improvements
  • Feedback and reporting mechanisms to document and track performance.

During our site visits, we also noted instances of several organizational and contextual factors that can affect the success of QI initiatives, including organizational culture, leadership, information systems, technical skills, and incentives. 

Key Findings

Although no sites had comprehensive, fully functioning QI processes for PHEP, many sites had one or more components of QI.  We highlight some key findings below.

All sites reported having performance goals for PHEP.  CDC cooperative-agreement guidance was the main source of performance goals; other HDs cited the National Public Health Performance Standards Program (NPHPSP) or, in some cases, developed their own performance goals. One local health department (LHD) used a special process to develop SMART performance goals--i.e., goals that are specific, measurable, achievable, relevant, and time-bound.  These characteristics facilitate the development of performance goals by precisely defining the target, stating the measure used to evaluate whether the target has been reached, naming the accountable parties, and ensuring that goals are neither too easy nor too difficult and that deadlines for achieving goals are clear.

Some sites expressed concern about using CDC guidance in setting their goals.  Some noted that the CDC guidance came out too late to be useful for planning. Another concern was that the guidance places too much emphasis on structure (for example, having one epidemiologist for every 500,000 people) and ease of measurement, as opposed to goals for improvement.

Several LHDs also noted difficulties in prioritizing performance goals. LHDs reported that performance goals were often developed on an ad hoc basis, in response to immediate needs and requirements. At several sites, the lack of a widely accepted, quantifiable definition of preparedness was also a stumbling block to engaging in QI in PHEP.

The use of some form of performance measurement existed for both routine and rare processes.  Several HDs had implemented measures for routine, ongoing processes, many in the area of infectious disease. Examples include a measure of the timeliness of disease-reporting across the state and a measure of LHD progress toward statewide performance goals for PHEP. HDs measured rare preparedness processes both in response to naturally occurring events and as a part of drills or exercises. HDs cited West Nile virus, anthrax, and flu season, among others, as opportunities to measure preparedness--e.g., the number of people vaccinated during a flu clinic, the extent to which the HD was able to shape communication to the public with regard to West Nile virus, and the HD's success in contacting individuals with special health care needs in the event of a natural disaster.  Drills and exercises were used to capture easily quantifiable metrics, such as the percentage of people reached in an alert. However, more-comprehensive and more-complex exercises represented a difficult measurement enterprise.
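
As a simple illustration of the kind of easily quantifiable drill metric described above, the sketch below computes the share of a notification roster reached in a call-down drill. The roster data, the 30-minute target, and the variable names are hypothetical, chosen only to show the arithmetic.

```python
from datetime import timedelta

# Hypothetical call-down drill log: minutes taken to reach each staff member,
# None if the person was never reached.
response_minutes = [3, 7, None, 12, 5, 41, None, 9, 16, 4]

TARGET = timedelta(minutes=30)  # assumed drill objective: reach staff within 30 minutes

reached = [m for m in response_minutes if m is not None]
within_target = [m for m in reached if timedelta(minutes=m) <= TARGET]

pct_reached = 100 * len(reached) / len(response_minutes)
pct_within_target = 100 * len(within_target) / len(response_minutes)

print(f"Reached: {pct_reached:.0f}% of roster")             # Reached: 80% of roster
print(f"Reached within 30 min: {pct_within_target:.0f}%")   # Reached within 30 min: 70%
```

Stating the target (30 minutes) before the drill is what turns the raw log into a measure of success against an a priori objective.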

Many of the measures were imprecise, lacked clear objectives, or were not relevant.  Specific exercise objectives and metrics for complex exercises were often not identified in advance. Many of the sites routinely drafted after action reports (AARs) following exercises and considered them to be valuable ways to measure rare or response processes. However, we found that AARs often measured “success” implicitly, without reference to a priori goals and measures.  Even if there were a priori goals, measures were often imprecise. Moreover, some of the measures were only indirectly related to actual preparedness--e.g., measures to assess whether a plan was written, whether a training was held, or whether certain equipment was purchased.

On the whole, measurement and documentation of PHEP processes were not pervasive.  Each of the nine sites had at least one example of a measure for PHEP; however, few sites could describe more than a handful. In addition, several sites lacked documentation for the measures they had developed.

Many sites had examples of QI goals and measurements; fewer had implemented many QI practices.  Few sites had command of more than one QI practice, and practices were often employed without conscious understanding that the specific instance was an example of a broader class of QI. Examples of QI practices included cyclical QI practices based on iterative cycles of setting improvement aims, testing a change, evaluating the effects of the change, and providing feedback to inform the next cycle.  Another QI practice was the use of a collaborative, which refers to a structure in which various agencies or health departments can share experiences, learn from one another, and find ways to cooperate and pool resources, training, and knowledge.
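
The cyclical QI practice described above is often formalized as a Plan-Do-Study-Act (PDSA) cycle. The following sketch walks one such cycle for a hypothetical disease-report processing time; the aim, the timing data, and the decision rules are illustrative assumptions, not findings from the sites.

```python
from statistics import median

# Plan: set an aim and a quantifiable measure.
aim_hours = 24  # illustrative target: median hours from report receipt to investigation

# Do: trial the change on a small scale and collect data (values are made up).
baseline_hours = [30, 42, 28, 55, 33]
after_change_hours = [20, 26, 18, 31, 22]

# Study: compare the measure before and after the change.
improved = median(after_change_hours) < median(baseline_hours)
met_aim = median(after_change_hours) <= aim_hours

# Act: adopt, adapt, or abandon the change, and set up the next cycle.
if improved and met_aim:
    next_step = "adopt change; begin next cycle with a tighter aim"
elif improved:
    next_step = "keep change; run another cycle toward the aim"
else:
    next_step = "revise the change and retest"

print(next_step)
```

The point of the structure is the feedback loop: each "Act" step feeds a new "Plan" step, which is what distinguishes a cyclical QI practice from a one-time fix.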

As to using QI practices for rare events, several of the HDs considered the skilled use of Incident Command Structure (ICS) to be a way to improve PHEP for response situations. The ICS refers to the combination of facilities, equipment, personnel, procedures, and communications used during emergency response.  When applied correctly, ICS lays out a systematic way to respond to emergencies that specifies clear roles and responsibilities and that has built-in processes to facilitate a cyclical planning and response process.  The HDs that found ICS most useful were those that incorporated ICS into routine practice or that had other opportunities to practice it.

State and local HDs typically implemented different types of QI practices appropriate to their respective roles.  LHDs would typically be first responders in the event of a public health incident; state HDs were more likely to help coordinate or facilitate the local and regional response. As a result, LHDs tended to focus internally on QI, whereas state HDs saw their QI role more in terms of providing resources and training, and facilitating structures within which LHDs could improve.

QI practices were more likely to be implemented if they were integrated into daily work and made work easier or more efficient.  For example, one LHD made changes to its infectious-disease-reporting process to reduce reporting time, which streamlined the data-entry process, thus making the work process easier as well as faster. Sites noted that implementation of QI practices was more successful when there was an explicit effort to do so. 

HDs typically did not have systematic feedback mechanisms for QI.  Many of the HDs we visited told us that they often made changes to the way they worked and created plans for preparedness based on lessons learned from actual events and exercises. However, we found few examples of HDs that used an explicit process to routinely incorporate back into practice those changes suggested by their improvement efforts. Moreover, the findings of AARs were often not acted upon following a drill.  Nor did sites routinely retest after changes had been made to determine the success of the changes.

Organizational culture and leadership were key to QI efforts.  Cultures that fostered cooperation, valued input from all staff, and empowered employees were important drivers in facilitating QI efforts.  Leadership style was critical in developing a supportive organizational culture.  Many different leadership styles and functions can support QI, including leaders who foster empowerment and accountability, those who use their authority to drive and enforce QI efforts, and those who use a more charismatic style to champion change.

Barriers to QI included a lack of resources and incentives.  A lack of time and, especially, an inadequate number of staff members were cited as key barriers to QI.  Not having sufficient or well-trained staff often means that health departments are in a responsive mode rather than a strategic mode and are often incapable of building an infrastructure for QI.  Other resources, such as data and information systems, were also viewed as crucial to establishing an infrastructure to facilitate QI.  Many sites emphasized training and skill development to achieve the skilled workforce needed to use available technologies effectively.  Other barriers included a lack of federal funding and the notion that PHEP itself is an add-on, rather than something integral to public health.

Overall, there were not strong incentives, either at the organizational or individual level, to engage in QI for PHEP.  Better performance was not linked to funding. Moreover, there was a reluctance, given the perception that preparedness-related funding might be time-limited, to invest in long-term improvement efforts.  Most HDs had not linked performance to staff salaries.

Recommendations

For QI to flourish and become standard practice, changes to the status quo are necessary. We make the following observations and recommendations.

Building QI capabilities and capacity is foundational. Implementing QI requires both theoretical knowledge and practical skills at all levels of the organization. Although much emphasis has been placed on the need to improve, less investment has been made in creating the organizational capacity to improve. The discipline of QI and the skills and techniques needed to pursue QI must be broadly disseminated throughout public health. Vehicles for increasing QI capacity could include development grants, education and training, technical assistance, tool (including information technology) development, leadership and management training, and grants that incentivize and reward QI practices and continuous improvement in performance.

Attention needs to be paid to organizational development and change. This study outlines many issues faced in PHEP, including the need to plan and develop a response capacity across well-established silos and the need to transform the traditional workforce into an integrated ICS structure in the face of an emergency.  These very difficult organizational challenges require sustained attention and resources to build the organizational and leadership capacity for public health (PH) to be prepared to protect the public in a sustained emergency. As above, vehicles for increasing organizational-development capacity could include development grants, education and training, technical assistance, and leadership and management training.

PHEP and QI for PHEP will be most successful when integrated into routine PH practice and into daily work.  The integration of QI into usual PH processes and into daily work provides an opportunity to practice QI skills and improve PHEP-relevant processes and capabilities while avoiding “preparedness burnout,” which could occur if staff are asked to add on additional work to address PHEP.

Performance goals and measures that are meant to facilitate improvement should be specific, measurable, relevant, and time-bound.  Performance goals and measures must be relevant to PHEP and structured so that they track and assess improvement. A useful approach for stating performance goals is the SMART method, i.e., make sure that goals are specific, measurable, achievable, relevant, and time-bound.  It is also important to match performance goals to the appropriate level (e.g., specific process, organization, state, or federal level).

Measuring and documenting key processes is essential to QI.  Measurement of key processes should be widespread and documented. Measurement for routine processes, such as surveillance and disease-reporting, training, and lab functions, can be replicated and expanded to other routine processes. AARs for drills should be structured using a template to facilitate the recording of consistent data.

Implementing QI practices systematically and rigorously is difficult, but essential.  HD staff need to become more systematic in their use of QI practices. It is helpful to use specific QI practices matched for the type of process to be improved. Routine processes are well suited to cyclical improvement practices. ICS training is being used to improve emergency response, although it should be considered the beginning, not the end, of QI practices.

Information from AARs will be more likely to be used to drive process change and set up the next measurement cycle if formalized procedures are implemented to do so.  For each problem identified by the AAR, HD staff should identify the work process(es) or capability(ies) involved in the problem, create SMART goals to address the problem, and design focused drills or mini-exercises to test the changes implemented.

State efforts to facilitate QI for PHEP at the local level should be emphasized. State HDs were very valuable QI facilitators. States should foster collaborative structures among LHDs; assist LHDs with measurement; disseminate best practices; provide training; and coordinate resources.

Federal grants should incentivize QI at the state and local levels.  Federal grant guidance should require state and local HDs to engage in QI and should be structured to create incentives for such engagement and for demonstrated improvement. Guidance could include specific language for developing goals and measures that facilitate improvement; supporting information-technology infrastructure to measure for improvement; training the public health workforce in QI strategies; and providing tools to incorporate lessons learned.

Acknowledgments

We gratefully acknowledge the directors and staff of the public health departments we talked with and visited. They made us feel welcome and were generous with their time and comments.  We thank our reviewers, Terri Tanielian, Lillian Shirley, and Paul Jarris, for their helpful critiques. Jennifer Li and Kristin Leuschner helped to edit this report. We also thank William F. Raub and Lara Lamprecht at the U.S. Department of Health and Human Services for their support and guidance throughout this project.

Abbreviations


AAR       after action report
CDC       Centers for Disease Control and Prevention
DHS       Department of Homeland Security
EH        environmental health
EMS       Emergency Medical Services
FMEA      Failure Mode and Effect Analysis
GPRA      Government Performance and Results Act
HD        health department
HHS       U.S. Department of Health and Human Services
HRSA      Health Resources and Services Administration
HSEEP     Homeland Security Exercise and Evaluation Program
ICS       Incident Command Structure
JCAHO     Joint Commission on Accreditation of Healthcare Organizations
LHD       local health department
MAPP      Mobilizing for Action through Planning and Partnerships
NACCHO    National Association of City and County Health Officials
NIMS      National Incident Management System
NPHPSP    National Public Health Performance Standards Program
OPHEP     HHS Office of Public Health Emergency Preparedness
PDSA      Plan-Do-Study-Act
PH        public health
PHEP      public health emergency preparedness
PHR       Public Health Ready
QA        quality assurance
QI        quality improvement
RWJF      The Robert Wood Johnson Foundation
SARS      Severe Acute Respiratory Syndrome
SMART     specific, measurable, achievable, relevant, and time-bound
STD       sexually transmitted disease

Chapter One.  Introduction

Such recent events as the terrorist attacks of September 11, 2001, the anthrax attacks, the flu-vaccine shortage of 2004-2005, and the response to Hurricane Katrina have all rekindled interest in strengthening the nation's public health infrastructure and, in particular, have shown the importance of public health emergency preparedness (PHEP).  To enhance the public health system and address gaps in preparedness, the U.S. government has spent billions of dollars since September 2001 to introduce surveillance systems, purchase equipment, and develop plans and measures.  Despite these efforts, concerns remain about the ability of the public health system to respond to emergencies.

Federal and state budget deficits strain the current system, and years of disinvestment, coupled with changes in the health care delivery system and ambivalence about the role of government, have resulted in relatively low expectations for public health and what it can achieve.  Standards for defining and measuring preparedness are lacking, and there are few measures with which to assess the performance—and progress—of health departments in emergency preparedness or to implement systematic change.

Adding to these challenges is the complexity of the public health system itself.  Public health includes more than 3,000 county and city health departments and local boards of health; 59 state and territorial health departments; tribal health departments; more than 160,000 public and private laboratories; parts of multiple federal departments and agencies; hospitals and other health care providers; volunteer organizations, such as the Red Cross (Lister, 2005); and private vaccine and drug manufacturers and distributors.  Responsibility for the system is divided among the states, which typically have greatest authority for public health; the federal government, which can influence public health through funding decisions and via its authority over interstate commerce; and local public health departments, which exercise a great deal of independence and authority in many states.  In addition, public health is only one part of a much broader emergency preparedness community, which also involves the heads of 32 federal agencies and departments, including the Department of Health and Human Services (HHS).

The broad mission of public health, which extends from the promotion of physical and mental health to disease prevention, means that emergency preparedness must compete with many other programs and activities.  Public health fulfills its mission through the exercise of three core functions: (1) assessment, including disease surveillance to identify disease outbreaks; (2) policy development, including development of recommendations to prevent further infections in health care and community settings and enforcement of quarantine laws; and (3) assurance, including responsibility for ensuring that the population receives needed preventive care and disease treatment, whether provided directly by the health department or in private settings (Institute of Medicine, 1988).

The goal of this study is to help address gaps in public health by showing how quality improvement (QI) methods can be used to improve PHEP. QI refers to a set of strategies and techniques to improve system performance in order to better achieve the desired outcomes. QI posits that a complex organization can be thought of as a “production system,” and QI methods are used to set goals, measure performance, and apply systematic changes to improve the performance of that system.

In the remainder of this introduction, we provide a more detailed definition of QI and offer a brief overview of some current initiatives in public health that use elements of QI.

What Is Quality Improvement?

The term quality improvement has been used broadly by practitioners in various fields to refer to a range of strategies and techniques designed to improve performance and quality. Originally developed in manufacturing, QI methods have been applied to health care, engineering, service industries, and emergency response organizations, although not widely within public health.

Our review of the literature suggests that, although different fields use QI in many different ways, all QI programs designed for complex organizations reflect a set of core concepts (Langley, Nolan, et al., 1996; Cretin, Shortell, et al., 2004; Lighter and Fair, 2004; Daita, 2005), which include:

  • Emphasis on systems.  QI considers quality to be the result of complex, interdependent systems, with individuals working within these systems. Quality problems are solved by focusing on the system, not by exhorting poor-performing individuals to “try harder.”
  • Product or outcome focus.  QI efforts are oriented toward a specific product or outcome, and the planning for such efforts is focused on the needs of those served by the organization.  
  • Use of quantifiable measures.  Potential changes and procedures are evaluated according to their effect on quantifiable measures.
  • Reduction in variability.  The focus of QI efforts is on the design of processes to reduce unwarranted variability in the product or services provided (outcomes of the process).
  • Continuous improvement.  QI efforts need to be ongoing, rather than one-time initiatives, since organizational processes and systems will usually change with time. 

In sum, this report defines quality improvement as a multidisciplinary, systems-focused, data-driven method of understanding and improving the efficiency, effectiveness, and reliability of public health processes and practices related to preparedness. The ongoing process of QI requires all four of the following elements (discussed further in Chapter Two): performance goals, performance measures, QI practices, and feedback and reporting (Langley, Nolan, et al., 1996; Lighter and Fair, 2004).

Examples of QI-Related Initiatives in Public Health

Many existing initiatives in public health have adopted components of QI, including the use of measures and quality indicators.  These initiatives have been fueled, in part, by an overall trend since the 1990s toward improving the quality of public programs and services.  The Government Performance and Results Act (GPRA), for example, requires federal agencies to routinely measure the performance and outcomes of the programs they administer, and to demonstrate accountability for the federal funds they use to support those programs.  Another important catalyst for performance improvement in public health has been the publication of the Institute of Medicine’s 1988 report The Future of Public Health.  Several recent initiatives reflect the use of performance measurement and QI-related strategies within public health organizations.

Mobilizing for Action Through Planning and Partnerships (MAPP).  This initiative, launched in 2001 by the National Association of City and County Health Officials (NACCHO) and the Centers for Disease Control and Prevention (CDC), is a communitywide strategic planning tool for improving community health. Facilitated by public health leadership, this tool helps communities identify and prioritize long-range community public health goals and identify resources for addressing them. Similar to QI programs, MAPP aims to improve public health processes, and it incorporates planning, implementation, and evaluation steps.  However, MAPP does not contain a mechanism for ensuring continuous improvement.

Project Public Health Ready (PHR).  This initiative, also developed by NACCHO in conjunction with CDC,2 is a voluntary recognition program to prepare local governmental public health agencies to protect the public’s health through a program of planning, competency-based training, and exercises.  Sites that successfully demonstrate preparedness are recognized as “Public Health Ready.”  PHR contains many of the elements of QI, including performance goals, measurement and analysis, and QI techniques.  It is less clear, however, how feedback reporting is structured to improve public health preparedness or whether there is an improvement step after the reporting.

Turning Point Program.  The Turning Point Program is an initiative of the Robert Wood Johnson Foundation and the Kellogg Foundation, designed to transform and strengthen the public health system. Turning Point has developed specific models for a more effective and more responsive public health system. Through 21 partnerships of state and local public health and community-based agencies, Turning Point has provided a number of tools for improving the accountability of public health efforts, including a model for performance management in public health. Turning Point’s efforts appear to have strengthened the public health infrastructure and this could, in turn, support QI initiatives in public health.

Development of Public Health Quality Indicators.  Many local health departments have developed public health quality indicators--i.e., quantitative statements about expected or recommended public health processes or outcomes.  For example, a health department might track the proportion of “immediate-response” infectious disease reports at the facility that are investigated within 24 hours as a means of assessing how well the department responds to potential infectious-disease outbreaks. Although not comprehensive, these indicators fill an important gap in public health quality-improvement efforts by providing methods for quantitative quality assessment.
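To make such an indicator concrete, the sketch below (in Python) computes the 24-hour investigation rate from report timestamps. The records and the `timely_investigation_rate` helper are invented for illustration and are not drawn from any health department's actual system.

```python
from datetime import datetime, timedelta

# Hypothetical report records: (time report received, time investigation began).
reports = [
    (datetime(2005, 6, 1, 9, 0), datetime(2005, 6, 1, 15, 0)),  # 6 hours later
    (datetime(2005, 6, 2, 9, 0), datetime(2005, 6, 3, 8, 0)),   # 23 hours later
    (datetime(2005, 6, 3, 9, 0), datetime(2005, 6, 5, 9, 0)),   # 48 hours later
]

def timely_investigation_rate(records, window=timedelta(hours=24)):
    """Proportion of reports whose investigation began within the window."""
    timely = sum(1 for received, started in records if started - received <= window)
    return timely / len(records)

print(round(timely_investigation_rate(reports), 2))  # 2 of 3 within 24 hours -> 0.67
```

A department tracking this figure over time would have a quantitative basis for judging whether its outbreak-response process is improving.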

Accreditation Programs.  Plans for accreditation of public health agencies also sometimes include QI-related components.  Accreditation, along with the related issue of staff certification, is seen by some as a priority in building and sustaining the public health infrastructure (Baker and Koplan, 2002).  To the extent that some degree of complementarity exists between accreditation scores and performance indicators, it is possible for accreditation to support quality improvement.  There are hopes that accreditation will increase performance measurement and accountability, allow public health departments to document performance, and establish performance standards that could be used to benchmark data for agency quality-improvement efforts (Halverson, Nicola, et al., 1998).  However, many accreditation efforts tend to focus on just one aspect of QI:  standardization.  Standardization for external evaluation and certification is different from the day-to-day application of the management science of QI throughout an organization. While an emphasis on accreditation might be useful in setting a lower bound on the public health processes performed, it will not automatically lead to improved outcomes in health and related domains.

Issues Examined in This Report

This report builds upon the discussion of QI-related initiatives and practices just described by considering site-specific examples of QI in public health and PHEP and by offering recommendations for how to develop and accelerate QI in PHEP. The study focuses on nine public health departments (five local and four state) and asks the following main questions:

  • Are there examples of QI in public health, and what can we learn from them with regard to PHEP?
  • What organizational and contextual factors facilitate QI, and what are the implications of those factors for PHEP?
  • How can public health leaders implement and accelerate QI as it relates to PHEP?

To answer these questions, we analyzed information from public health departments that were nominated as having promising practices with respect to QI in PHEP. 

Organization of This Report

The remainder of this report is divided into three chapters.  In Chapter Two, we explain our approach for examining the role of QI in public health, and in PHEP specifically.  In Chapter Three, we present the findings of our analysis.  In Chapter Four, we offer conclusions and recommendations.

Chapter Two.  Approach for Applying Quality Improvement In Public Health Emergency Preparedness

In this chapter, we describe our approach to examining the role of QI in public health emergency preparedness.  We begin by describing a conceptual framework for understanding PHEP.  We then describe the criteria used in our examination of QI and our method for conducting site visits.

A Conceptual Framework for Applying Quality Improvement in Public Health Preparedness

To understand how QI could be applied to PHEP, we can think in terms of a “preparedness production system,” such as that shown in Figure 2.1.3  The figure illustrates the various processes or activities that make up PHEP. On the left side, the figure shows that public health departments (HDs) engage simultaneously in both capability-building processes and ongoing surveillance and detection processes.  These processes can influence each other and both include a PHEP component and contribute to the HD’s response capabilities (shown in the center of the figure).  Response processes involve both the direct response of the HD as well as coordination with other entities also responsible for emergency preparedness.

Each set of processes can be subdivided further into its constituent subprocesses. Response processes, for example, include a set of processes for mitigation, such as prophylaxis, which in turn includes vaccine distribution, which in turn includes setting up a point of distribution, and so on.  Ultimately, PHEP produces outcomes (shown on the right), including reduced morbidity and mortality and enhanced psychological, social, and economic outcomes.

The figure suggests that the processes comprising PHEP can be classified into two broad categories: routine and rare. Routine processes, such as capability-building processes and surveillance and detection processes, occur regularly and are repeated with some appreciable frequency. Rare processes, such as response processes, are usually associated with the HD’s response to an emerging or declared public health emergency. This distinction has implications for measurement and improvement, as discussed further below.

Figure 2.1.  High-Level Schematic for Public Health Preparedness

NOTE: This figure assumes an existing infrastructure of public health laws, information technology, laboratory capacity, and equipment.

The figure also illustrates that PHEP is not something “extra” or in addition to the normal activities undertaken by HDs; instead, it is integrated with other routine activities and responsibilities.  Indeed, the activities shown in the figure (italicized) include eight of the ten “essential public health services”4 identified by a U.S. Centers for Disease Control and Prevention steering committee, working with representatives of U.S. Public Health Service agencies and other major national public health organizations.  In other words, routine public health processes should be considered part of the preparedness production system, and improving these processes should also improve preparedness.

A QI model can be applied to any of the sets of processes or the component processes represented in the preparedness schematic. Individual processes at any level can become the focus of improvement efforts. The evaluation conducted in this study looks at a variety of different processes and activities that fall into different parts of the framework shown here.

Criteria Used in Our Evaluation

We now describe the components of QI and the contextual factors used in our review of QI practices at the HDs.  During our site visits and interviews, we looked for examples of four factors.  The ongoing process of QI requires all four elements (Langley, Nolan, et al., 1996; Lighter and Fair, 2004). A fuller description of these elements can be found in Appendix A.

  • Performance Goals.  These are agreed-upon targets for improvement efforts of the organization or work unit.  Performance goals can be at the level of the organization, division, team, or individual.
  • Performance Measurement.  This is an activity that focuses on structures, processes, and/or outcomes of an organization’s work (Derose, Asch, et al., 2003).  Structural measures assess the capacity and characteristics of the organization.  Process measures focus on actions the organization takes, and include both technical processes and interactions among the organization and consumers and stakeholders. Outcome measures involve both short-term and long-term results of performance. Quantifiable measures are essential for QI.
  • Quality Improvement Practices.  These are systematic techniques and approaches used to understand processes, identify needed changes, and implement improvements.  They include approaches to define a roadmap for improvement, apply process-analysis techniques (e.g., process mapping, failure modes, and effect analysis), implement repeated small tests of change, and create a culture that is supportive of QI on an organizational scale.
  • Feedback and Reporting.  These are attempts to document and track performance within an organization and to apply lessons learned. Ideally, feedback is the first step in an ongoing improvement cycle.

During our site visits, we also noted instances of several organizational and contextual factors that can affect the success of QI initiatives.  These include organizational culture, leadership, information systems, technical skills, and incentives.   

Approach for Site Visits

Here we briefly describe our approach for conducting the site visits.  Details of our method are found in Appendix B.

To conduct our assessment, we visited nine sites, consisting of five local HDs (LHDs) and four state HDs.  These were selected from a large group of potential sites, including sites nominated by experts in the field as ones that understood and had implemented aspects of QI; sites that had participated in previous initiatives such as Public Health Ready, and other sites identified on their websites as having participated in a QI program or having a QI department.  We telephoned the sites on this initial list to verify, by the site’s self-report, that it had implemented some form of QI.  The screening protocol used for these calls is found in Appendix C.  We excluded certain nominated sites based on prior knowledge about the absence of QI at that site and on the basis of size (we also chose to exclude very small LHDs).

We selected a subsample of nine sites for further in-depth review.  We conducted in-person visits with seven of these sites and performed telephone interviews with two others.  The site visits or phone calls involved semistructured individual interviews with relevant health department personnel, supplemented by written materials where available.

A case-study protocol was developed to guide the interviews for each site visit.  This guide is provided in Appendix D. Detailed notes from each interview served as the main source of data for our analysis of QI elements implemented at the sites.

Chapter Three. Examples Of Quality Improvement In Public Health Emergency Preparedness

In this chapter, we present our findings. Although no sites had comprehensive, fully functioning QI processes for PHEP, many sites had one or more components of QI.  We looked for examples of the four components of QI, as well as for organizational and contextual factors affecting QI.  Many of the sites reported the use of performance goals, measurement, quality-improvement practices, and feedback and reporting. Sites also indicated that many of the factors cited as critical determinants of successful QI implementation were also important in PHEP. In identifying examples of QI components and other factors, we were inclusive, rather than exclusive. For example, if an interviewee mentioned an example even briefly, we counted it. We describe several examples below and also identify key factors affecting implementation. We focus first on the components of QI, and then on organizational and contextual factors.

All Sites Reported Having Performance Goals for PHEP

All sites reported having performance goals for PHEP. Most HDs, both state and local, acknowledged that CDC grant guidance was the main source of performance goals, because the CDC required reporting of performance as a condition of its grants.  Several HDs mentioned using the National Public Health Performance Standards Program (NPHPSP), a national partnership initiative that has developed performance standards for state and local public health systems and for public health governing bodies. Others developed their own performance goals in addition to or (in the case of LHDs) instead of the CDC guidance.

For example, one state HD worked closely with LHDs to identify needs and performance targets. Building on a multiyear effort to develop public health performance standards, the state HD created a strategic preparedness set of performance standards based on needs identified by both the state and LHDs. Some of the state and/or local performance goals overlap with federal guidance; others do not. Interviewees at this site reported that these goals made it possible for the state and LHDs to work in a sustained manner toward goals they thought were locally relevant.

In another state, performance goals were more directly based on the CDC guidance, but locally relevant modifications had also been made. The state HD, through iterative discussions with LHDs, translated the CDC guidance into LHD performance goals and deliverables. These, along with additional state-relevant indicators, are standard reporting requirements for the LHDs. In addition, the state HD provided education and training to the LHDs to facilitate reporting.  In this way, the state HD serves a coordinating and facilitative function with respect to developing performance goals. Further, by requiring standard reporting on common performance goals, the state HD set the stage for LHDs to compare their performance to that of others and share lessons learned.

Another state HD in a state with a strong home-rule tradition also developed its own improvement standards program, in which LHDs were invited to participate and for which they receive a small subsidy.  Although the subsidy was not large, all but two local departments participated. The improvement standards were based on the CDC’s 10 essential services and the National Public Health Performance Standards. There were six improvement standards, each of which had five substandards with several associated optional measures.  LHDs selected from among these measures.  For each substandard, an LHD had to identify at least one measure that it had adopted and one that it planned to adopt. Interviewees at this site reported that this program has given the state HD more (and more-honest) information about LHDs’ activities and capabilities, although the state HD admits that the program may need additional, more-specific measures with regard to PHEP. The early measures were reportedly unsatisfactory, but subsequent iterations improved them, and interviewees were ultimately more satisfied.

Another LHD used a special process to develop goals that are structured in a way to facilitate improvement. In this LHD, staff were trained to develop performance goals that are specific, measurable, achievable, realistic, and time-bound (SMART).  Performance goals with these characteristics facilitate improvement because the target is precisely defined, the measure used to evaluate whether the target has been reached is stated, the accountable parties are named, the goal is neither too easy nor too difficult, and deadlines for achieving the goal are set. Originally described by Peter Drucker in 1954 as part of his concept of “management by objectives” (Drucker, 1954), SMART goals have been used in many industries and settings.  In using the SMART method, this LHD has a systematic method for developing performance goals that are appropriate for any level of the organization and that lend themselves to improvement.
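A hedged illustration of what a SMART completeness check might look like in code: the sketch below simply verifies that a goal record carries all five elements. The field names and the example goal are hypothetical and are not taken from this LHD's actual process.

```python
# Illustrative check that a performance goal states all five SMART elements.
# The field names below are invented for this sketch.
REQUIRED = {"specific", "measurable", "achievable", "realistic", "time_bound"}

def is_smart(goal):
    """True if every SMART element is present and non-empty."""
    return REQUIRED <= {k for k, v in goal.items() if v}

goal = {
    "specific": "Complete the pandemic-flu call-down roster for all divisions",
    "measurable": "100% of staff listed with two working contact numbers",
    "achievable": True,
    "realistic": True,
    "time_bound": "By the end of the third quarter",
}
print(is_smart(goal))  # True
```

A goal missing, say, a deadline would fail the check, flagging it for revision before it enters the improvement cycle.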

Some Sites Had Concerns About Using CDC Guidance as a Source for Their Goals

Some sites expressed concern about using CDC guidance in setting their goals.  Some noted that the CDC guidance came out too late to be useful for planning. Sites could not shift emphasis if the guidance came out partway through the year. Another concern was that the guidance places too much emphasis on compliance with grant guidance and ease of measurement, as opposed to goals for improvement. For example, at one site, an interviewee reported that there was an emphasis on “measuring for grants versus measuring for best results. The tendency is to focus on checking the boxes for the guidance. . . . It seems to limit creativity.”  Or as another interviewee stated,

You don’t want to put numbers on something that is a poor proxy for preparedness and. . . [need] to be careful not to misplace what you are measuring with something that is not predicting if you are prepared.  You have to make sure you are not measuring something because it is easy to measure versus because it is related to preparedness.

Finally, many HDs felt that the guidance targets moved too frequently. Some characterized it as a “flavor-of-the-month” problem and felt that their efforts could not build on themselves.

Many Sites Had Difficulty Prioritizing Goals

Several LHDs also noted difficulties in prioritizing performance goals. LHDs reported that performance goals were often developed on an ad hoc basis, according to immediate needs and requirements. The need to prioritize performance goals was seen as a key operational challenge, with several interviewees noting that it was difficult to know where to start with so much to work on. 

Some Sites Wanted a Clearer Definition of Preparedness

At several sites, the lack of a widely accepted, quantifiable definition of preparedness was a key stumbling block to engaging in QI in PHEP. As one interviewee related,

What do people mean and interpret to be “preparedness”?  It’s used as a buzzword for a lot of different things.  You should try to define what you mean. . . . Different people use different definitions.  You can’t measure what you can’t define.

However, interviewees at other sites felt that a standard definition of preparedness was not necessary in order to have performance goals and to begin the process of QI.  These interviewees expressed the opinion that such a comprehensive definition was not needed for improvement as long as there was a clear understanding of which processes are important to preparedness.

Sites Frequently Used Performance Measures for Routine Processes

The use of performance measurement was also common at the sites visited.  We found several HDs that had implemented measures for routine, ongoing processes, many of which were in infectious disease. For example, one LHD shifted the focus of measurement within a sexually transmitted disease (STD) clinic from the number of patients seen to the infection rate in the targeted population. After it began to measure the infection rate, the LHD changed the goals for running the STD clinic, in the process improving access, outreach, and tracking of known contacts. By doing these things, the LHD was able to reduce the rate of infections over time.  Additionally, one state HD with robust information technology was able to measure the timeliness of disease-reporting across the state. 

We also found an example of a state HD that routinely measured LHDs’ progress towards statewide performance goals for PHEP and sent feedback to each LHD. The state allowed each LHD to choose how it would demonstrate that it is meeting the performance goals.  Although there are as yet no universal metrics for performance, and reporting and feedback are qualitative, interviewees felt that, over time, a set of agreed-upon measures would emerge from this process.

Sites Commonly Used Drills and Exercises to Measure Rare Processes

HDs measured rare preparedness processes both as a result of naturally occurring events and as a result of drills or exercises. HDs cited West Nile virus, anthrax, and flu season, among others, as opportunities to measure preparedness--e.g., in terms of the number of people vaccinated during a flu clinic, the extent to which the HD was able to shape communication to the public with regard to West Nile virus, and the HD’s success in contacting individuals with special health care needs after a natural disaster.

However, many interviewees stated that drills and exercises were the main way they measured preparedness. HDs tested processes that are important both in routine public health practice and in response to an emergency event. For example, as one interviewee described:

[I]n preparation for West Nile virus testing we sent text messaging alerts to a subset of the department [to see if they can] reply [in a timely way], including the use of a BlackBerry.  Can we all text message back in?  We do mini-tests from time to time.

A similar drill involved staff from a state HD making simulated media calls to LHDs to test their ability to respond to information queries from the press. State HD staff rated how well the LHD responded. Another example of a useful measurement exercise concerned how employees would be able to report to work during an emergency:

There was concern about the ability of environmental health (EH) employees to get in during an emergency.  In a staff meeting, an EH employee suggested that they do an exercise to resolve some of these issues; in the parking lot, they chalked out a map of the county and people stood on the spot representing where they live.  They discovered interesting things that they hope the health department will look at more broadly.  For example, in an emergency (e.g., earthquake), how do we know if bridges are out?  If they are, then three-fourths of EH people can’t get to their assigned location because of where they live.  There’s only one building on the west side of the river.  This exercise led them to make changes to their preparedness plan for EH.

These simple drills captured easily quantifiable metrics, such as the percentage of people reached in an alert. The more-comprehensive and more-complex exercises, however, represent a more difficult measurement enterprise.
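The easily quantifiable case can be shown in a short sketch. The drill log below is invented for illustration; it computes the share of staff who acknowledged an alert within a cutoff, the kind of metric the text-messaging mini-tests described above would yield.

```python
# Hypothetical alert-drill log: minutes until each staff member replied
# (None = never replied). All timings are invented.
reply_minutes = [4, 12, None, 30, 7, None, 55, 18]

def percent_reached(replies, cutoff_minutes=60):
    """Percentage of staff who acknowledged the alert within the cutoff."""
    reached = sum(1 for m in replies if m is not None and m <= cutoff_minutes)
    return 100 * reached / len(replies)

print(percent_reached(reply_minutes))  # 6 of 8 replied within an hour -> 75.0
```

Repeating the drill and comparing these percentages over time turns a one-off test into a trackable measure.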

Many of the Measures Were Imprecise or Lacked Clear Objectives

For complex exercises, it is necessary to specify, a priori, exercise objectives and metrics to measure how well these objectives are met. If the exercise has these objectives and metrics, performance can be summarized in an after action report (AAR), which can serve as a measure of preparedness. Many of the sites we spoke with routinely drafted AARs and considered them to be valuable ways to measure rare or response processes.

However, we found during our site visits that AARs often measured “success” implicitly, without reference to a priori goals and measures. Further, even if there were a priori goals, measures were often imprecise. As a result, there was a sense that AARs could be made more useful if exercise objectives and measures were made explicit and documented in a standardized fashion.

Some of the Measures Were Not Relevant for PHEP

Moreover, a few of the sites described measures that were not relevant to PHEP. For example, measures to assess preparedness during a rare event might include whether a plan was written, whether a training was held, or whether certain equipment was purchased. One site developed software to document the number of people attending training sessions. While these activities may be related to improved preparedness, interviewees expressed concern that using such criteria as measures of preparedness amounts to a “counting Band-Aids approach.” As one LHD emergency preparedness manager explained,

We need a change in philosophy to get away from the “counting Band- Aids” approach. Counts are okay. But no one ever evaluates what you can do with different count levels. I’m more concerned about how many resources does it take to accomplish [a desired outcome]. But instead, we just have a standard that specifies so many epidemiologists per 100,000 population… . They [CDC] don’t start with the outcome, as they should, but with the front end. But you need to map backwards (from our outcome) . . . .

The Use of Measurement Was Not Pervasive, and Documentation Was Often Lacking

Although we found many examples of measures in use at the sites, measurement and documentation of PHEP processes were not pervasive.  Whereas every site we visited had at least one example of a measure for PHEP, few sites could describe more than a handful. With respect to documentation, we noted that several sites described measures but did not have documents available to be shared. As one interviewee stated, “This is a very hectic place. We don’t have someone who can jot it all down.”

Sites Had Typically Implemented Only a Few QI Practices

While many sites had examples of QI goals and measurements, fewer had implemented many QI practices. For example, all sites had at least one example of a QI practice, but few had command of more than one, even if they described additional practices. Further, these QI practices were often employed without a conscious understanding that the specific instance was an example of a broader class of QI practices. This lack of understanding limited the extent to which HDs took advantage of their success in one instance to spread it to another.

We did find several examples of cyclical QI practices for routine processes. Such practices are based on iterative cycles of setting improvement aims, testing a change, evaluating the effects of the change, and feeding that information back to inform the next cycle (referred to as “Plan-Do-Study-Act,” or “PDSA” cycles).  Cyclical QI practices were sometimes used in surveillance and detection processes. For example, one LHD was using such a PDSA cycle to reduce STD reporting times for the county’s prison population. Another LHD had made several small-scale tests of changes to its infectious-disease-reporting process, thus reducing the time needed to report as well as improving the completeness of the reports.
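The PDSA loop just described can be sketched in a few lines. Everything here is illustrative: the baseline reporting time, the assumed effect of the change, and the `pdsa_cycle` helper are invented, not measured at any site.

```python
# A minimal sketch of one Plan-Do-Study-Act (PDSA) cycle for a routine
# process such as infectious-disease reporting. All numbers are invented.

def pdsa_cycle(baseline, change, better_than):
    """Run one small-scale test of change and report whether it helped."""
    # Plan: the proposed change and success criterion are the inputs.
    result = change(baseline)                 # Do: run the small test
    improved = better_than(result, baseline)  # Study: compare to baseline
    return result, improved                   # Act: keep or discard the change

# Hypothetical change: electronic entry assumed to cut the mean reporting
# time (in hours) by a third relative to a fax-based baseline.
result, improved = pdsa_cycle(
    baseline=36,
    change=lambda hours: hours * 2 / 3,
    better_than=lambda new, old: new < old,
)
print(result, improved)  # 24.0 True
```

In practice, the Act step would feed the result back into the next cycle's plan; the sketch stops after a single iteration.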

Variants of cyclical QI were used in capability-building processes as well.  For example, one state HD instituted a performance-improvement cycle for training local HD staff. The HD performed a survey to assess training needs and developed training to meet those needs. The improvement, however, came about after the department re-surveyed the same staff to see whether training needs had been met and then instituted another set of training sessions to address those subsequent needs.  A cyclical QI process was also used in developing preparedness plans. One state HD required LHDs to submit PHEP plans using a common template. Staff at the state HD then reviewed each plan and provided feedback and suggestions to the LHDs to facilitate further improvement.

Another type of QI practice we found was the use of a “collaborative”--i.e., a structure in which various agencies or health departments can share experiences, learn from one another, and find ways to cooperate and pool resources, training, and knowledge. One LHD described a successful regional collaborative structure that had given members opportunities to share experiences, coordinate resources, and learn from one another. A state HD described being able to collect best practices from LHDs, disseminate those best practices to the LHDs, and convene the LHDs on a regular basis to facilitate learning among the LHDs. These collaboratives were described as important both for sharing know-how and, equally, for creating an atmosphere of trust and familiarity that was thought to be beneficial in a response scenario.

Several Sites Considered the Incident Command Structure to Be a Means of Improving PHEP in Response Situations

With regard to using QI practices for rare events, several of the HDs saw in the Incident Command Structure (ICS) a way to improve PHEP in response situations.  The Department of Homeland Security’s National Incident Management System (NIMS) defines ICS as “the combination of facilities, equipment, personnel, procedures, and communications operating within a common organizational structure, designed to aid in domestic incident management activities” (Department of Homeland Security, 2004).  The ICS planning process includes five primary phases: (1) understand the situation, (2) establish incident objectives and strategy, (3) develop the plan, (4) prepare and disseminate the plan, and (5) evaluate and revise the plan. The planning process includes the requirement to evaluate planned events and check the accuracy of information to be used in planning for subsequent operational periods. Planned progress is to be regularly compared with actual progress in order to allow for modifications and subsequent iterations. In this way, ICS contains a potential improvement cycle.

Although ICS may not be widely thought of as an improvement strategy, we heard from several sites that ICS, when performed correctly, has an improvement cycle built into it. As one interviewee explained,

ICS has inherent in it an improvement cycle--it’s what’s expected.  If you talked to someone from Fire, they expect a hotwash and after action reports, additional training, increased capacity, etc.

The HDs that found ICS most useful had incorporated ICS into routine practice or had had multiple opportunities to practice it. For example, one LHD was confronted with a sudden, urgent requirement to restructure its department and cut a large percentage of its workforce. The LHD instituted an ICS, established an emergency operations center, staffed accordingly, and ran 24-hour planning cycles. As one LHD official said of the fire department he was attached to previously, “We used to stand up an incident command structure to plan our annual picnic.” Forcing ICS into routine public health practice was seen as crucial to becoming practiced and skilled at using ICS. Other HDs had benefited from having to use ICS in actual situations. For example, one LHD was using ICS for sheltering during hurricanes. The LHD faced challenges in applying this approach to a public health workforce that had never used it before, but, overall, the leadership felt that using ICS was improving performance. 

Different QI Practices Were Used at the State and Local Levels

One theme that emerged across site visits was the different roles that state and local HDs might play in a response situation and, concomitantly, the difference in the types of QI practices they might implement. Staff at LHDs, and most state HD staff, agreed that LHDs would be first responders in the event of a public health incident. As several interviewees stated, “all response is local.” On the other hand, state HDs in many cases saw themselves as coordinators and facilitators of local and regional response. As a result, LHDs tended to focus internally on QI, whereas state HDs saw their QI role more as one of providing resources and training, and facilitating structures for LHDs to improve.

QI Practices Were More Likely to Be Implemented If They Were Integrated into Daily Work

One theme cited by many interviewees was that potential QI practices were more likely to be implemented if they were integrated into, rather than added onto, existing work, and more successful if they made work easier or more efficient.  For example, the LHD in which staff made changes to its infectious-disease-reporting process found that these changes also streamlined the data-entry process, thus making the work process easier as well as faster.

Successful Implementation of QI Practices Requires Specific Effort

Another theme that emerged from the site visits was that implementing QI practices was more successful if there was an explicit effort to do so. Some sites used QI-related practices without intentional follow-through. In these cases, it appeared that more progress could have been made if the effort had been more intentional and sustained, or if sufficient knowledge had existed to spread the use of these QI strategies to other processes. 

Few Sites Had a Systematic Process for Incorporating Changes Suggested by Improvement Efforts

Many of the HDs we visited told us that they often made changes to the way they worked and created plans for preparedness based on lessons learned from actual events and exercises. However, we were particularly interested in finding examples of HDs that used an explicit process to routinely incorporate back into practice those changes suggested by their improvement efforts. We found fewer examples of this kind of feedback process.

One state HD mandated that the state’s LHDs complete AARs after every exercise. LHDs are then required to submit AARs to the state HD. The state HD, in turn, follows up and assesses the LHDs’ progress in addressing areas of concern.  We also found examples of sites that made changes as a result of an AAR. During last year’s flu season, one state HD had difficulty communicating health-risk information to the public because of a problem with the purchase order to print materials. The AAR described this problem and, as a result, the HD now plans to have public messages ready before the event takes place, recognizing that flu season brings similar issues each year.

After Action Reports Were Often Not Acted Upon

AARs were usually available following an exercise; however, often the findings were not acted upon.  One state HD’s Health Resources and Services Administration (HRSA) grant manager told us, in commenting on the usefulness of exercises to hospitals, that 

[a]necdotally people say that (doing exercises) makes them think about their plan, and that they go back and add to the hospital plans. They write the AAR.  We collect them, but do they (the AARs) make their way into the next training and next plan version?  At this point, I can’t tell if this occurs.  “Sentinel indicator” (the grant performance goal) asks how many drills occur and address[es] biological, chemical, and radiological issues.  However, did the information that came out (of this drill) really get back and establish learning that was implemented?  That’s the question. 

One interviewee mentioned DHS’s Homeland Security Exercise and Evaluation Program (HSEEP) as having a promising template for AARs. The HSEEP establishes a simplified after-action exercise-reporting format that reduces the burden of report writing yet is more specific in what must be done to correct an issue and who is responsible for taking necessary actions.

Interviewees reported that it is critical to explicitly recognize that improvement requires feedback and iteration, but they stated that these steps did not always happen. For example, an official at a state HD stated:

One challenge to measuring performance has been getting the LHDs out of a checklist mentality, which in the past has meant that they complete an activity, check it off the list, and never revisit it again. The state HD is trying to instill the mantra and philosophy that a preparedness plan is a living, breathing document, which will continually need revision.

A key implementation challenge is retesting after changes have been made. After the AAR is complete and specific changes are recommended and enacted, it is necessary to re-test the process to determine whether performance has improved.

Organizational Culture and Leadership Were Key to QI Efforts

Information collected from the site visits and interviews indicated that a key determinant in the development and implementation of QI efforts was the organizational culture. Cultures that fostered cooperation, valued input from all staff, and empowered employees were important drivers in facilitating QI efforts.  In some sites, developing such cultures involved changing the way things were traditionally done and breaking down “silos.” 

Our site visits confirmed the importance of leadership in shaping organizational culture and creating an environment in which QI can succeed. Our interviews indicated that many different leadership styles and functions supported QI.  One LHD favored a leadership style fostering empowerment and accountability. Many interviewees at this site credited the director with creating a learning-oriented and improvement-oriented environment, first by bringing in facilitative management and, second, by investing in ICS.  A strong, authoritative leader who is willing to use the bully pulpit can also drive QI. For example, at another LHD, progress was credited to the director, who “is always willing to be the hammer.”  In another LHD, charismatic leadership was cited as an important ingredient in spearheading coordinating efforts: 

Leadership is key. XX, the fire chief, like YY, is charismatic, has everyone’s respect.  The rest of the fire department follows him.  We have experienced, respected people leading the agencies--champions.  

One leader was cited as key to improvement because he protected his staff from political fallout if they admitted the need to improve. Moreover, because QI in many respects depends on public support, some sites suggested that leaders should also be able to communicate public health and its improvement activities to the public, particularly since the public is an essential player in any PHEP response. Finally, leadership should be willing to counsel people who are not willing or able to support QI efforts.

Lack of Adequate Resources Was a Barrier to QI

A number of sites indicated that the major barrier to QI was lack of time and an inadequate number of staff.  For most LHDs, interviewees reported that the largest performance constraint, one that consistently impeded meeting deliverables in full, was a personnel or workforce resource limitation.  Not having sufficient or well-trained staff often meant that health departments were in a responsive mode rather than a strategic mode and were often incapable of building an infrastructure for QI.  There was also a sense that PHEP itself was viewed as an add-on, rather than as something integral to the business of public health:

While preparedness is really important it is secondary to the day-to-day structure.  You have to elbow your way in to say it’s important.  It’s an organizational issue to get it moving when there’s a competing structure.  It competes with everyday life.  It’s not like we have a bunch of people waiting for an emergency.  It’s not like a fire department that goes out and practices. We’ve dealt with these issues by deliberately standing up Incident Command; it’s an opportunity to try out our roles.

Sites that have been able to overcome these limitations have focused on integrating PHEP and usual public health processes so that the same task serves both purposes.

Resources, such as data and information systems, were also viewed as crucial to establishing an infrastructure to facilitate QI initiatives. One state HD had developed Web-based and other electronic reporting systems that allow the department to access performance measures and track performance data.  In another state HD, the information technology system served as a unifying structure across all of the LHDs and, essentially, extended the reach of the state HD, thereby enhancing coordination.  Other sites that lacked this technological infrastructure were, nevertheless, aware of its importance.  For example, in one state HD, the lack of funding for technology was seen as a major barrier to QI efforts: 

The state legislature is very interested in accountability and performance measurement, but it doesn’t support much technology infrastructure.  This is contradictory.  We need improved technological infrastructure to do performance improvement and measurement.

Financial Resources Were Also Critical

Lack of financial resources such as federal funding was cited as a serious barrier to implementing and sustaining QI efforts.  In one state HD, the threat that federal funding might go away had the unanticipated effect of pushing state health departments toward quick fixes and wariness of long-term investments, such as QI.  However, in another LHD, loss of state funding had the opposite effect.  Rather than limiting QI efforts, concerns about state funding served to drive efforts toward improvement--specifically, toward greater efficiency--because doing so was seen as a way of dealing with continuing budget cuts and having to do more with fewer staff. 

Incentives for QI Were Typically Lacking

Overall, we noted that there were not strong incentives, at either the organizational level or the individual level, to engage in QI for PHEP. Interviewees noted that better performance was not linked in any way to funding. Moreover, there was a reluctance, given the perception that preparedness-related funding might be time-limited, to invest in long-term improvement efforts. Although some interviewees felt that incentives were not necessary, since LHDs did not lack motivation, others felt that the right incentives could motivate more efforts toward improvement. For example, incentives that served to increase public awareness of the success and the contributions of the health departments were seen as effective, as were incentives that instilled accountability systems (without penalties) and brought attention to areas in which performance was lacking. This was, however, dependent on the political context. For example, one LHD interviewee related that local government has a “stoning in the public square mentality” when something goes wrong. This context makes it difficult to engage people in measuring and improving their performance for fear that engaging in QI might be taken to mean that improvement is necessary and, therefore, the HD is not doing a good job of protecting the public.

Most HDs had not linked performance to staff salaries as an incentive. Mostly, this was because very few funds were budgeted to support differential pay based on performance. But there was also a sense that measurement and accountability systems had not reached the level of sophistication and precision needed to link pay to performance. Also, several HDs cited union contracts as a barrier to differential pay. Because they could not rely on individual financial incentives, HDs had to use nonfinancial incentives. One interviewee stated that

One strategy in a resource-poor setting is to use the opportunity to be creative as an incentive. Creative people will be rewarded by the opportunity to do a project where creativity is valued.

Similarly, finding people who are excited by change and allowing them to work on a change project is a way of providing nonfinancial incentives. As another interviewee stated, “we seek out early adopters to be informal leaders and engage them.  You line up the adrenaline junkies and they jump on the bus.”

In sum, incentives were not aligned with improvement at the organizational level and were limited to nonfinancial incentives at the individual level.

Many Sites Emphasized Training and Skill Development to Facilitate QI

An essential component of having the available technological infrastructure is developing a skilled workforce able to utilize the data and information in meaningful ways.  Thus, many sites have invested heavily in providing training opportunities for staff.  For example, one LHD requires all managers to be trained in Facilitative Management--a form of results-based management that emphasizes empowering employees to make improvements--and to be retrained every two years. At this LHD, there was also a major effort to get as many people as possible trained in ICS, because management in this LHD believes that ICS is a crucial skill for responding to events and for improvement in general.  In another LHD, initial efforts at training staff in the state quality improvement process, using hired consultants, did not go well because the audience had not been adequately prepared and viewed the QI process as being extremely complicated.  However, as a result of that experience, the LHD has instituted more in-house training, and the staff has been more receptive to the QI process.  In other sites, the lack of authority and the perception of siloed programs made it difficult to coordinate training efforts.  In one state HD, the personnel were divided into different focus areas and various centers.  Without the authority to mandate training, education, or improvement activities across focus areas or centers, it was difficult to get people to attend training programs on QI activities. 

Chapter Four. Overarching Themes and Recommendations

In this chapter, we identify some broad lessons from our site visits, describe the limitations of our study, and make some recommendations on how the use of QI might be facilitated in PHEP.

Overarching Themes

While the previous chapter showed that some elements of QI were evident at each of the sites we visited, we found that QI for PHEP is in the early stages of implementation. All sites were practicing discrete elements of QI, but no sites had worked their way up to a comprehensive QI system for PHEP. We found examples of performance goals, measurement, QI practices, and feedback that illustrate ways in which QI could be applied to PHEP. Options include the use of SMART goals, cyclical improvement strategies (such as PDSA) in routine processes, collaboratives among LHDs on either a statewide or a regional basis, and the use of ICS as a QI practice.

There were challenges regarding the implementation of QI methods, as would be expected for any complex endeavor.  For example, not all performance goals were relevant to PHEP, and the goals were not structured in a way that could be used to track improvement. Measurement occurred at all sites, but measurement for PHEP was neither pervasive nor routinely documented. Potential QI practices were difficult to implement systematically. And more needed to be done in feedback and reporting, to apply needed changes back into work processes and to test whether such changes made a difference. Addressing such issues will spur QI in PHEP.

Leadership and culture were cited as key facilitators of QI. There are several different leadership styles that are useful, and several ways to create an appropriate culture. Also important are information technology infrastructure and training.

Resources (money, time, and staff) are always a problem and can be a limiting factor. This should be distinguished from incentives. Having resources does not necessarily mean that the incentives are aligned correctly. And the right incentives for the right behaviors can motivate improvement, even in the face of scarce resources. Nevertheless, many sites we visited recognized that, at the organizational level, incentives are limited, and sometimes there are disincentives for investing or engaging in QI for PHEP.

Limitations

Our findings come with several caveats. First, we do not know whether the findings from these sites are generalizable to other HDs.  We sampled selectively to try to find HDs that were thought to be good examples of QI in PHEP. We do not draw generalizations from these findings to the whole population of HDs. Second, we may have missed other examples of QI in PHEP. We did not attempt to visit every site with exemplary QI for PHEP. Our purpose was, rather, to describe illustrative examples of how QI could be applied to PHEP. Third, we did not always verify, through observation or review of documentation, self-reports of QI practices. And, fourth, we did not attempt to examine whether QI actually improved PHEP, either in general or for our specific case examples.

Recommendations

Despite these limitations, the findings from our study suggest that QI is applicable to PHEP. But in order for QI to flourish and become standard practice, changes to the status quo are necessary. Below, we make several observations regarding the implementation and acceleration of QI for PHEP.

Building QI Capabilities and Capacity Is Foundational. Implementing QI requires both theoretical knowledge and practical skills at all levels of the organization. While much emphasis has been placed on the need to improve PHEP, less investment has been made in creating the organizational capacity to improve. The discipline of QI and the skills and techniques needed to pursue QI must be broadly disseminated throughout public health. Leaders and managers must have an understanding of QI in order to be able to formulate and communicate a vision for improvement. They, and program directors and staff, must have fundamental QI skills to translate this vision into practice.

Knowing how to choose appropriate outcomes to improve, how to write measurable objective goals, how to analyze and understand the work processes used to produce these outcomes, how to choose and implement small tests of process change to improve these outcomes, and how to evaluate the effectiveness of such small tests are all essential skills without which QI will not move forward.

Vehicles for increasing QI capacity could include development grants, education and training, technical assistance, tool development (including information technology), leadership and management training, and grants that incentivize and reward QI practices and continuous improvement in performance.

Attention needs to be paid to organizational development and change. This study outlines many issues faced in PHEP, including the need to plan and develop a response capacity across well-established silos and the need to transform the traditional workforce into an integrated Incident Command Structure in the face of an emergency.  These very difficult organizational challenges require ongoing attention and resources to build the organizational and leadership capacity that PH needs to protect the public in a sustained emergency. As noted above, vehicles for increasing organizational-development capacity could include development grants, education and training, technical assistance, and leadership and management training.

Our examples suggest that PHEP and QI for PHEP will be most successful when integrated into routine PH practice and into daily work.  We conceptualized PHEP as a production system consisting of sets of processes that interact to produce certain outcomes. Importantly, this system sees routine public health processes as important contributors to PHEP outcomes. The QI components we have identified can be applied to any process or set of processes within the PHEP system.  These ideas lead to two related recommendations:  first, that QI be incorporated into usual public health processes and, second, that QI be incorporated into daily work. The integration of QI into usual PH processes and into daily work serves two important functions. First, it provides an opportunity to practice QI skills and improve PHEP-relevant processes and capabilities as part of ongoing work. Second, it avoids “preparedness burnout,” which could occur if staff are asked to “add on” additional work to address PHEP.

Performance goals and measures that are meant to facilitate improvement should be specific, measurable, achievable, relevant, and time-bound.  Our site visits suggested several recommendations for performance goals. First, at the most basic level, performance goals and measures must be relevant to PHEP. Many interviewees warned of the dangers of “lamp post” goals or measures--goals that are easy to measure rather than those that actually improve preparedness. Creating performance goals and measures that are relevant to the structure or outcome of interest requires deep knowledge of the system and an understanding of what processes are key to PHEP. Therefore, goal-setting and measurement-development should involve experts within the organization who come from a variety of perspectives. Second, performance goals and measures must be structured so that they track and assess improvement. Simple yes/no goals are important for setting baseline levels of performance, but they are less useful for motivating improvement. Vague goals, goals without a time limit, or goals that cannot be measured concretely are similarly not useful for improvement. A useful method for stating performance goals for improvement is the SMART method, which helps ensure that goals are specific, measurable, achievable, relevant, and time-bound. Third, it is important to match performance goals to the appropriate level (e.g., specific process, organization, state, federal). Higher-level performance goals will inform lower-level goals, but this requires some translation and knowledge of the production system. Lower-level goals are more likely to be SMART goals.

Measuring and documenting key processes are essential to QI.  Measurement of key processes should be widespread and documented. This is a tall order, but it is a well-known management adage that “what gets measured gets done,” and measurement is key to QI. We found examples of robust information technology that allowed HD staff to create measures and take measurements with little additional effort. This sequence is most likely to happen when work is done in such a way that the process of doing the work creates the data itself. Thus, for example, online case-tracking will allow for measurement of epidemiological investigations. Even in the absence of sophisticated information technology, it is still possible to structure work processes in this way.

We found several examples of measurement of routine processes, such as surveillance and disease-reporting, training, and laboratory functions. These examples should be replicated and expanded to other routine processes. We also heard from many interviewees that drills and exercises should be used regularly as ways of measuring performance of processes used only during rare emergencies. We concur, but recommend that AARs be structured in a standard way, using a template, so that consistent data are recorded.

Implementing QI practices systematically and rigorously is difficult, but essential.  The heart of QI, and the hardest part of QI, is implementing improvement. This was also the area in which HDs were greatly challenged. First, we recommend that HD staff become more systematic in their use of QI practices. While we found examples of QI practices at every site we visited, such practices tended to be isolated rather than part of a broader set. While all improvement is change, not all change is improvement: More-systematic application of these practices would lead to more improvement in PHEP. Second, it is helpful to use specific QI practices matched to the type of process to be improved. Routine processes are well suited to cyclical improvement practices, such as the PDSA cycle, whereas ICS has been developed to respond to emergency events. Although we had not originally considered ICS as a QI practice, we learned during the course of the study how QI is “built in” to ICS (if executed correctly). Given that learning NIMS is a requirement in the new CDC guidance, HDs have an excellent opportunity to learn ICS as a QI practice. A caution and a recommendation, however: ICS training is not the end but the beginning. Several interviewees told us of the importance of finding ways to use ICS so that the skills are practiced rather than forgotten.

Information from AARs will be more likely to be used to drive process change and set up the next measurement cycle if formalized procedures are implemented to do so. AARs are important for measuring the response process and documenting lessons learned. We found that even those sites that routinely completed structured AARs often considered their task complete once the AAR was turned in. Other sites took this one step further and made changes to plans based on AARs. Neither is enough, however, to create improvement. What is necessary is to use the AAR to drive process change and then to test the result of these changes in a systematic way.  For each problem identified by the AAR, HD staff should identify the work process(es) or capability(ies) involved in the issue, create SMART goals to address the problem, and design focused drills or mini-exercises to test the changes implemented for that issue.

State efforts to facilitate QI for PHEP at the local level should be emphasized. Many of our interviewees commented on the difference between QI efforts at the state and local levels. The consensus seemed to be that state HDs were very valuable QI facilitators. The examples we found supported this notion. States should foster collaborative structures among LHDs, assist them in measurement, disseminate best practices, provide training, and coordinate resources.

Federal grants should incentivize QI at the state and local levels.  While leadership, culture, skills, training, resources, and incentives are important facilitators of QI, we learned that incentives were not aligned with improvement at the organizational level and were limited to nonfinancial incentives at the individual level. Federal grant guidance should require, and be structured to facilitate, state and local HD engagement in QI and demonstrable improvement. Guidance could include specific language for developing goals and measures that facilitate improvement, supporting information-technology infrastructure to promote measurement for improvement, training the public health workforce in QI strategies, and providing tools that facilitate incorporating lessons learned.

Conclusion

This report has documented examples of QI elements for PHEP at state and local HDs, as well as issues related to implementation, and it has described organizational and contextual factors that serve as barriers to or facilitators of QI.  Applying QI to improve PHEP is possible and holds promise for closing the gaps between public health’s current and ideal state of readiness. Implementing the recommendations contained within this report could spur QI in PHEP and improve the nation’s preparedness.

Appendix A.  Quality Improvement Components and Contextual Factors

In this appendix, we provide more-detailed descriptions of the QI components and contextual factors described in Chapter Two and used in the analysis.

Components of Quality Improvement

In identifying the components of quality improvement, we adapted the model for performance management developed by the Performance Management National Excellence Collaborative of the Turning Point initiative (Hassmiller, 2002; Nicola, 2005), described in Chapter One, for three reasons.  First, it presents a broad and generalizable framework that avoids the jargon seen in other QI models.  Second, the model describes QI at the level of an organization (i.e., at a department or division level, rather than a smaller work group), which was consistent with our organization-level analysis for this report.  Third, the model is part of a public health initiative and thus would likely be familiar to the HDs we interviewed (Public Health Foundation, 2002).

Performance Goals.  Performance goals refer to agreed-upon targets for improvement efforts of the organization or work unit.  These goals should be clearly outlined and communicated with the various members of the work team. Performance goals can be at the level of the organization, division, team, or individual. Broader goals are appropriate for the organization level, while specific goals are better at lower levels such as units or teams, which can define responsibilities precisely in terms of who is responsible for accomplishing the goal, what will be accomplished, and when and how it will be accomplished.  For example, a federal goal might state “PH department preparedness and response plans should be exercised on a regular basis;” a more appropriate LHD level goal would state “A pandemic influenza exercise, conducted by the PHEP division of the LHD, with hospital, EMS and other first response agencies, will be conducted every 6 months.”  Performance goals can be externally imposed or internally generated.

Performance Measurement.  Performance measures focus on structures, processes, or outcomes of an organization’s work (Derose, Asch et al. 2003).  Structural measures assess the capacity and organizational characteristics of the organization.  Actions the organization takes are measured as processes and include both technical processes and interactions between the organization and consumers and stakeholders.  Outcomes of the organization’s actions might include population health status and consumer satisfaction with the HD’s services.  The QI literature suggests that measures that are process- or outcomes-driven will be the most effective in terms of creating meaningful, lasting improvement (Langley, Nolan et al. 1996; Derose, Asch et al. 2003).  Objective, quantifiable measures are preferred for quality improvement (Langley, Nolan et al. 1996; Derose, Asch et al. 2003), as they represent higher levels of evidence of high-quality performance.  The more closely a process is linked to a given outcome (either by research or expert opinion), the more likely it is that improvement efforts focused on that process will lead to meaningful changes in outcomes (Derose, Asch et al. 2003).  However, measuring relevant processes relies on deep knowledge of key processes, a level of knowledge that is often not available in PHEP, given the lack of research and expert consensus on PHEP processes.

The measures selected will be influenced by the nature of the process or outcome targeted for improvement. If a process or outcome is routine or occurs regularly (e.g., laboratory or administrative functions), its outputs might be measured directly, allowing improvement efforts to be focused on day-to-day work processes.  However, for processes related to rare events such as errors, accidents, and responses to disasters, performance will have to be measured via proxy outcomes or through drills and exercises.

Data collection for QI needs to provide “just enough” data to indicate trends in performance and to separate out random chance variation in a “good enough” way (Solberg, Mosser et al. 1997). This emphasis on fast “good enough” measurement allows for quick, just-in-time changes to be made, allowing effective teams to work quickly and productively without getting bogged down in the process of measuring itself.
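The idea of “good enough” measurement that separates signal from routine chance variation can be made concrete with a simple Shewhart-style control-limit calculation. The sketch below is purely illustrative and is not drawn from the report; the data values and the weekly-count scenario are hypothetical, and three-standard-deviation limits are only one common convention for distinguishing common-cause variation from points worth investigating.

```python
# Illustrative sketch (hypothetical data): flag only measurements that fall
# outside simple three-sigma control limits computed from past performance,
# rather than reacting to every up-and-down in the numbers.
import statistics

def control_limits(samples):
    """Return (lower, upper) three-sigma limits for a series of measurements."""
    mean = statistics.mean(samples)
    sd = statistics.pstdev(samples)  # population standard deviation
    return mean - 3 * sd, mean + 3 * sd

def flag_signals(samples, new_points):
    """Return the new measurements that fall outside the historical limits."""
    lo, hi = control_limits(samples)
    return [x for x in new_points if x < lo or x > hi]

# Hypothetical weekly counts of completed case investigations
history = [22, 25, 24, 23, 26, 24, 25, 23]
flagged = flag_signals(history, [24, 31, 10])  # → [31, 10]
```

In this sketch, a new weekly count of 24 falls within the limits and is treated as routine variation, while 31 and 10 fall outside and would merit a closer look, which is the “just enough” discrimination the passage above describes.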

QI Practices. QI practices are systematic techniques and approaches used to understand processes, identify needed changes, and implement improvements.  QI practices include branded approaches that define a roadmap for improvement. These include Six Sigma and the Method for Improvement, a practice that has been applied for the past ten years in the personal health care industry (Langley, Nolan et al. 1996; Daita 2005).  Other QI practices include process-analysis techniques, such as process mapping and failure modes and effects analysis (a way to outline a process and the probability and risk of failure of key steps), both of which help improvement teams identify and test changes likely to lead to sustained improvement (Lighter and Fair 2004).  Another QI practice is the use of repeated small tests of change, called the Shewhart, or Plan-Do-Study-Act (PDSA), cycle (Langley, Nolan et al. 1996).  This cycle describes the steps a work group should go through in trying to implement changes that might lead to improved performance.
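The steps of a Plan-Do-Study-Act cycle can be sketched in miniature. The following Python sketch is purely illustrative and not from the report: the process, the proposed change, and the numeric target (reducing average data-entry time for disease reports) are all hypothetical, and a real PDSA cycle would of course involve a team, a small-scale trial, and repeated iterations rather than a single function call.

```python
# Illustrative sketch (hypothetical scenario): one Plan-Do-Study-Act iteration.
# A change is adopted only if measurement shows it met the planned target.

def pdsa_cycle(baseline, change, measure, target):
    """Run one PDSA iteration and return (resulting process, adopted?)."""
    # Plan: predict what the change should achieve (here, time at or below target)
    prediction = target
    # Do: try the change on a small scale
    trial = change(baseline)
    # Study: measure the result and compare it with the prediction
    result = measure(trial)
    improved = result <= prediction
    # Act: adopt the change if it improved performance; otherwise revert
    return (trial if improved else baseline), improved

# Hypothetical example: a revised form cuts average entry time from 12 to 9
# minutes, against a planned target of 10 minutes.
process = {"avg_entry_minutes": 12}
change = lambda p: {"avg_entry_minutes": p["avg_entry_minutes"] * 0.75}
measure = lambda p: p["avg_entry_minutes"]

process, adopted = pdsa_cycle(process, change, measure, target=10)
```

Because the measured result (9 minutes) meets the target, the change is adopted; had it not, the baseline process would be kept and a different change planned for the next cycle, which is the iterative character of the Shewhart cycle described above.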

Other efforts are central to creating the kind of culture that will support QI on an organizational scale.  One type of effort is focused on creating a learning organization--i.e., one that welcomes performance measurement and continuously seeks ways to improve performance and staff capacity. Another type focuses on improving coordination and teamwork among different individuals and parts of a system that must interact for a given process.

Feedback and Reporting.  Finally, observations about changes that lead to improvement, as well as about areas in need of ongoing improvement, should be fed back frequently to leaders within a team or organization. Such feedback allows changes that appear to improve performance to be integrated into daily operations, so that lessons learned become part of how the organization works from day to day.

Organizational and Contextual Factors

The effectiveness of QI initiatives has been shown to be highly variable.  QI is a complex and heterogeneous organizational intervention that, in many instances, is imposed on an equally complex and dynamic system.  Research that asks under what conditions QI is most effective, rather than simply whether it is effective, can therefore provide important insights into the factors that matter most for making QI work.  The success of QI is affected not only by the presence of the four components just described, but also by contextual and organizational factors, including skills, organizational culture, leadership, and incentives.  These factors are largely interdependent and mutually supportive.

Organizational Culture.  Successful implementation of QI requires cooperation and a willingness among individuals within an organization to examine their processes with a dispassionate eye and to accept and share “bad news.”  An organizational culture that fosters openness, collaboration, teamwork, and learning from mistakes appears to be optimal for QI (Shortell, O'Brien et al. 1995).  Moreover, certain organizational types appear to be more inclined towards characteristics linked to successful QI implementation--in particular, those that demonstrate what Kimberly and Quinn have called a “group culture,” which is based on norms and values associated with affiliation and teamwork (Carman 1996).  Other elements of culture that have been linked to QI success include transparent organizational structures; the sense that QI is not a distinct initiative but, rather, an everyday part of each employee’s job; high morale and low staff turnover; open communication; reward for risk-taking; and a culture that readily accepts change (Parker et al., 1999).

Leadership.  It is unlikely that QI will be sustainable without serious, long-term commitment from senior leadership.  Berwick and colleagues identified the support of management as one of the ten principles underlying successful quality-improvement efforts (Berwick, Blanton et al. 1990).  Specifically, management leadership that creates an organizational culture committed to continuous improvement and learning, as opposed to one of merely correcting deficiencies or meeting current standards, is critical to the successful implementation of QI initiatives (Carman 1996).  Bradley and colleagues (2003) identified the following five roles and activities that were important in quality improvement efforts within hospitals: (1) personal engagement of senior managers; (2) management’s relationship with clinical staff; (3) promotion of an organizational culture of quality improvement; (4) support of quality improvement with organizational structures; and (5) procurement of organizational resources for quality improvement efforts (Bradley, Holmboe et al. 2003).  Other important leadership characteristics include vision and the ability to motivate. The Baldrige Quality Program cites visionary leadership as an important determinant of quality, stating that senior leadership should set directions and create a customer focus, clear and visible values, and high expectations.  Senior leaders should also be inspirational and motivate staff to contribute, to develop and learn, to be innovative, and to be creative.

Information Systems.  Resources such as data and information systems are important in establishing an infrastructure to support QI initiatives.  QI initiatives cannot succeed without the capacity to measure and analyze performance data effectively and to manage organizational knowledge.  Finally, information systems are critical for generating population-level data that can be used to assess the performance of the health system in caring for discrete populations, improve the monitoring of disease outbreaks, and assess in “real time” whether a response to a public health emergency is effective.

Technical Skills.  Central to this use of data and information are individuals trained in specific skills, such as devising appropriate measurement strategies, analyzing data for patterns, making inferences about program design and performance based on data, and developing and executing sound recommendations for program redesign. These skills can be acquired through formal training and can often be “bought” on the labor market through the hiring of individuals with these skills already in hand.

However, the technical skills described above also rely heavily on “softer” skills, such as the ability to prioritize problems and think in systems terms.  Indeed, developing improvement plans inevitably relies on some sort of cause-effect understanding; and such knowledge often comes from experience with organizational systems.  Similarly, the ability to design and execute effective improvement plans often requires in-depth knowledge of the organization one seeks to change, the ability to anticipate reactions to the improvement plan, and project-management skills.

Incentives.  Incentives are often necessary to spur investment and engagement in QI.  Although most incentives are financial, nonfinancial incentives can also be influential. Incentives can operate at the organizational or individual level, or both.  Incentives must be structured and aligned correctly so that better performance is rewarded at a sufficiently greater rate than the status quo; this is often referred to as the “business case” for QI. In industries where consumers can readily measure quality and there is true competition--the automobile industry, for example--higher quality is rewarded through higher prices or greater market share.

A business case does not exist in every industry, however. In the personal health care system, for example, the absence of a business case for improving the quality of health care is widely acknowledged as one of the most important obstacles to improving health care in the United States (Blumenthal and Kilo 1998; Galvin 2001).  Similarly, there is no business case for quality for PH or PHEP, in which measuring preparedness is difficult and taxpayers are unable to shop around for public health services. In the absence of a business case, there are fewer incentives to invest the resources (time, staff, and money) necessary to pursue QI and fewer incentives to focus the organization on improving quality.

Appendix B.  Methodology for Site Visits

In this appendix we provide a detailed description of the methodology used for our site visits.

Sample Identification

We purposively sampled from the universe of all state and local health departments. We sought nominations from experts in the field for health departments that the experts judged to understand and/or have implemented QI in PHEP. We also included sites that had participated in previous preparedness initiatives, such as Public Health Ready, and in previous public health improvement initiatives, such as Turning Point. We supplemented these nominations with Internet searches to identify additional HDs; if an organization’s website mentioned a quality improvement program or a quality improvement department, we included that organization in our sample.

Identification of Potential Sites

To determine whether the nominated sites were engaged in some form of QI, we developed a standardized screening instrument. We telephoned potential sites and used this protocol to verify, by the sites’ self-report, that they had implemented some form of QI.  We also excluded certain nominated sites based on prior knowledge about the absence of QI at that site and on size (we chose to exclude very small LHDs). The screening protocol for these telephone calls is in Appendix C.

Site Visits

We selected a subsample of potential sites for further in-depth review through site visits. We selected this sample to be inclusive of both state and local HDs and diverse with respect to structure, size, and geographic region. We were further constrained to selecting sites within states of interest to other tasks within the project. With two exceptions, we traveled to these sites to conduct in-person interviews. In the case of the two exceptions, we performed telephone interviews.

For each site visit, the research team included two researchers and a note taker. The site visit involved semi-structured individual interviews with relevant health department personnel, supplemented by written materials where available. The specific job titles of interviewees differed slightly due to variation among health departments. Our aim was to interview as many relevant individuals as possible. Therefore, the site visit team usually interviewed at least the head of the department or his/her deputy, the individual in charge of coordinating or overseeing preparedness (e.g., a deputy director for preparedness or a bioterrorism coordinator and/or the nominal Principal Investigator for the CDC cooperative agreement), an individual involved in day-to-day preparedness activities (often this was the same person as the individual in charge of coordinating/overseeing these activities, but could also be an emergency response manager, an epidemiologist, or a similar individual within the communicable diseases division), and, if such a unit existed, the individual responsible for quality or QI within the department.

A case study protocol, built around a semi-structured interview guide, was developed to direct the interviews for each site visit. The Site Visit Interview Protocol is provided in Appendix D.  Interviews were not audiotaped. Notes were taken on a laptop computer and supplemented by handwritten notes. RAND’s Human Subjects Protection Committee approved the project.

Data Analysis

Detailed notes from each interview served as the main source of data. To analyze the data, a team of three researchers who had each performed at least two site visits reviewed the site visit notes. Based on site self-report, the researchers identified examples of QI elements implemented at the various sites. Through repeated reviews and discussions, the researchers also extracted crosscutting themes related to issues in implementing these QI elements or to barriers and facilitators of QI. Crosscutting themes were ideas or issues that occurred at several different sites and/or that cut across all sites or all types of departments. Potential themes were identified based on the literature, on our conceptual models of QI and of PHEP, and on whether they occurred at more than one site. These themes were then reviewed by other research team members as an accuracy check. We did not attempt to stratify these themes or our findings based on site characteristics such as state versus local HD, geography, centralization of PH infrastructure, or other factors. Examples of such themes include “incentives available to motivate QI,” “business case for QI,” and “performance goals structured for QI.”

Description of Sample

We identified 91 potential sites: 34 State HDs and 57 LHDs. We performed screening calls with 22 of these 91 (7 State HDs and 15 LHDs). The nine sites selected for site visits consisted of 5 LHDs and 4 State HDs. Of the 5 LHDs, 1 was in a large urban area, 3 were in medium urban areas, and 1 was in a small urban area. Of the 4 State HDs, 1 had a centralized PH structure, 2 had a decentralized structure, and 1 was mixed. Of the 9 HDs, 3 were on the West Coast, 2 were in the Mid-West, 2 were in the South, and 2 were in the Northeast.

Appendix C: Screening Interview Protocol

Screening Interview for Potential QI Site Visit Locations

Target Interviewee:
Person in charge of quality improvement activities (may or may not be someone high up in the organization).

Introduction:
I am a member of a RAND team working through the Department of Health and Human Services’ Office of the Assistant Secretary for Public Health Emergency Preparedness on several related projects to strengthen the public health infrastructure, specifically with regard to emergency preparedness. 

One of the tasks within this project is to find ways to improve preparedness in public health. The first step here is to identify public health agencies that are actively involved in quality-improvement projects, not necessarily in regard to emergency preparedness specifically, but rather in regard to public health functions generally. We are not doing a survey or collecting data at this time; we are just hoping to identify departments that are engaged in quality improvement efforts. 

By “quality improvement” I mean the continuous study and improvement of the processes of providing public health services, which generally includes the continuous cycle of setting aims, defining performance, gathering data on performance, implementing change in processes, and evaluating the results of that change.

(skip if speaking with someone in a QI dept, or with that title)
Is your agency actively engaged in quality improvement efforts either for preparedness or in general?
(If say do not know, could you recommend someone else in your agency that might know?)

I’d like to ask you some questions about quality improvement in your organization, both in general, and specifically regarding preparedness.  Today’s discussion should take approximately 10-20 minutes. Do you have any questions before we begin?

I.  QUALITY IMPROVEMENT PROCESS

Does your organization have processes in place to improve performance? Are any of these organized around preparedness? What about other agency functions? If so, are these processes standardized?

Do you have processes to implement needed changes in policies, programs or infrastructure? (Are the processes standardized?)

Is there a regular timetable for your QI process? (e.g., monthly meetings, 2 weeks after an event, etc.)

Does the organization regularly develop PI or QI plans that specify timelines, actions, and responsible parties?

Is QI training available to managers and staff?

II.  ORGANIZATION AND LEADERSHIP

Is there a stated commitment from high-level leadership to a quality improvement system? If so, in what ways are they showing this commitment?

Is there a team or an individual responsible for quality improvement efforts within various departments and/or across the organization?

Are these quality improvement efforts done in all areas of work that the agency does, or only in some?

Are personnel and financial resources assigned to quality improvement functions?

III. USING PERFORMANCE GOALS AND MEASURES
In this next set of questions, I will ask you about performance goals and measures.  I am using the term “performance measure” to refer specifically to quantitative measures of capacities, processes, or outcomes relevant to the assessment of a performance standard.

Do you use performance goals or targets in the work that you do?

Are managers and employees held accountable for meeting goals and targets?

Do you have specific measures for all or most of your established performance standards or goals? (In general and for preparedness)

Do you collect data for your measures?

Are personnel and financial resources assigned to collect this data?

How are these data integrated with the overall daily operations of the agency? Is information/data shared throughout the organization?

IV. FOLLOW-UP:

Do you think your organization would be a good case to study quality improvement processes? Why or why not?

Are you available for further follow-up, if we have questions?

Appendix D: Site Visit Interview Protocol

Enhancing Public Health Preparedness, Phase II:
Exercises, Exemplary Practices, and Lessons Learned
Task 7:  Developing a Quality Improvement Model for Public Health Preparedness

Site Visit Protocol Introduction Script

Introduction:

Thank you for taking the time to meet with us today.  [INTRODUCE SELF AND COLLEAGUES – PROVIDE BUSINESS CARDS];

[INTRODUCE RAND AND THE STUDY]:

Before we begin I’d like to give you some background information on RAND and this study and also go over our confidentiality agreement.  RAND is a private nonprofit research institution established in 1948 to conduct independent, objective research and analysis to advance public policy.  RAND has been contracted to work with the U.S. Department of Health and Human Services Office of the Assistant Secretary for Public Health Emergency Preparedness to develop resources and to prepare analyses to help describe and enhance key aspects of state and local public health emergency preparedness, including bioterrorism.  We have taken the approach of defining preparedness rather broadly, so we are interested in preparedness for any kind of a public health emergency, including emerging infectious diseases such as SARS or pandemic influenza, and the activities needed to support such preparedness.

Required Consent Procedures:  Before we get started, let me assure you that your responses to these questions will be held in strict confidence, except as required by law.  Summary information from these interviews, together with material taken from public documents, will be presented at the state level; however, no specific individual will be identified by name or affiliation in any reports or publications without his or her permission.  Findings from the study will be shared with all participants.

Your participation in this discussion is completely voluntary. We would like to have your responses to all of the questions; however, if you are uncomfortable with any question we can skip it.  We estimate that the interview will take about 1 hour.

Do you have any questions about our confidentiality procedures before we begin? (If yes, respond to all questions. If no, proceed with discussion).

First, I want to assure you that we are NOT evaluating your QI program in any way; we are here to learn about what health departments are doing for QI.  We want to understand what works, what doesn’t work and what lessons you’ve learned.  In addition we are hoping that some of the health departments we visit will be interested in participating in a learning collaborative.  We’ll talk more about that and whether your department might be interested at the end of our interview.

Now we would like to ask a few questions about your background and about your department [AGENCY, ORGANIZATION, ETC.].  Then we have some specific questions about quality improvement in your department, both in general, and specifically regarding preparedness. By ‘quality improvement’ I mean the continuous study and improvement of the processes of providing public health services, which generally includes the continuous cycle of setting aims, defining performance, gathering data on performance, implementing change in processes, and evaluating the results of that change.  Do you have any questions before we begin?

If no, proceed to interview.

AGENCY/ORGANIZATION:

DATE of INTERVIEW:

INTERVIEWERS:

INTERVIEWEE(S):

TITLE:

POSITION:

BACKGROUND: Based on the information you shared with us prior to our site visit, we understand the following… [briefly summarize information]. Is there anything you would like to add to this?

Description of the site and the department (follow-up questions will be based on pre-visit information gathering)

Respondent’s roles and responsibilities

Respondent’s time in this position

Respondent’s past QI experience

TOPICS
(NOTE: This is not meant to serve as a verbatim script; please follow the general topic areas)

DESCRIBE CHARACTERISTICS OF QI PROGRAMS

Describe the QI program at the Department overall.

How long has the program been in place?

How did it get started - what motivated the QI efforts in the agency generally? (looking for potential role of funding agencies/feds/state/local government)
Have these motivations changed?
How much momentum and support does it have (from whom)?

Extent of QI efforts: in all areas of work that the agency does, or only in some?
(Ask if there is someone else we should talk/meet with).

Are there any components of your QI program that focus on preparedness?  If so, please provide some specific examples.

How is the QI program organized (try to get organizational management chart prior to visit)?
Is there a team or individual responsible for QI activities across the organization, or is it department/division specific?
Describe the team or individual responsible for quality improvement efforts across the organization – structure, role, and resources (e.g., personnel, financial, time) available.

Do you use a formal QI method or framework?
Which one?  How did your HD find it and learn it?

What resources have been invested in QI at your site overall? Specifically for preparedness?

(time, personnel, money, training, attention from leadership, social or organizational capital)

What has your organization done to encourage “buy-in” by staff and management to QI?

Describe how, if at all, incentives for performance improvement are built into the structure of the organization. 
What did it take to achieve this?

DESCRIBE QUALITY IMPROVEMENT PROCESS 

How do areas that are the focus of QI activities get identified? 
How do problems come to light?
Is there a regular process for identifying problems, or is it just as they arise?
Who determines which areas are selected for QI focus?
Are these people held accountable for seeing the changes through?

What challenges have you experienced in measuring baseline performance?  In measuring change or performance improvement?  Specific to preparedness?
How did you overcome those challenges?
Do you use any performance standards or goals in measuring improvement?  If so, what have you used?

Does the organization regularly develop PI or QI plans that specify timelines, actions, and responsible parties? 
How well does the organization adhere to these plans?
What carrots and sticks are used to ensure follow through?

Focusing now on preparedness, is there a process to identify lessons learned after an event or a preparedness exercise? (AAR, look back activities, debriefing after an event)
How do you move from documenting lessons learned to actually making changes to improve preparedness performance?
How do you know that the changes you made worked?

FACTORS THAT MIGHT FACILITATE OR IMPEDE QI

How successful do you think your QI program is?
On what basis do you come to that conclusion?

What are the main factors in the success of your QI program?

What challenges/barriers have you faced in implementing your QI program?
How were they overcome?

We’ve discussed factors that have facilitated and hindered your QI program.  What would need to happen in order to make your QI program better, both in general, and related to preparedness?

EXPLORE DIFFERENCES AND SIMILARITIES BETWEEN QI FOR GENERAL PUBLIC HEALTH FUNCTIONS AND PREPAREDNESS

How is preparedness thought of in relationship to other more traditional public health functions? (tension, synergy, “biodiversion”)
Are there traditional PH activities that are also important for preparedness?
Do people in your HD see the connections?

How would you characterize the similarities and differences, in your experience, in QI for public health in general vs. for preparedness specifically?

What would need to happen in your department to expand the role of QI in preparedness activities specifically?

Do you think your QI program has improved your department's level of preparedness?  If so, how?

IDENTIFY AND DESCRIBE POTENTIAL TOOLS (I.E. ELEMENTS OF A CHANGE PACKAGE) FOR IMPROVING PREPAREDNESS

QUALITY IMPROVEMENT ACTIVITIES/PROJECTS

Describe a particular QI activity or project. 
(If possible, describe QI in preparedness. If none is available, discuss the most advanced or robust example of QI; may ask interviewees to think of an example or two in advance of the visit).
What was the issue(s) to be addressed/improved from this activity? 
How did it come to be identified as an issue or problem?
How was the aim established? 
How was optimal performance defined in relation to this aim?
How was data on this performance measure collected at baseline and subsequently?
What changes did you implement to improve performance?*
How did you go about making the change?
What are you doing differently as a result of your QI activity? 
How do you know things have improved?
Has performance improvement been sustained?
Are there continual cycles of measurement, or is this an activity that you do only once?

*[NOTE: We are also trying to get at the general idea behind the specific change (e.g., reduce the number of handoffs in tracking lab results, improve communication capabilities among potential first responders, increase shared situational awareness, etc.).]

Were there any QI projects that you tried and that didn’t work?
What were they?
Why do you think they didn’t work?

In the process of making changes to improve performance, did you use materials from other sites or develop your own?
Is there material that you used or developed that might be particularly useful to others? 

Would you be willing to share this experience/information with other HDs?

DERIVE LESSONS FOR HOW TO ENCOURAGE QI AT OTHER STATE AND LOCAL PUBLIC HEALTH SITES

What advice would you give to other sites just starting out in QI?
What are some specific things to watch out for?

What about sites that have some QI but want to expand it into the preparedness arena?

What could other sites do in the next few days or weeks that could enhance their QI program or activities?

What could other sites do in the next few days or weeks to improve preparedness?

CONCLUSION
Is there anything else that I haven’t asked about that you feel it is important for me to know in order to understand QI at your site?

We are in the process now of visiting sites in half a dozen states around the country. We hope to use these interviews to understand ways to improve QI in preparedness nationwide. To that end, we are going to be developing and testing whether a ‘learning collaborative’ of sites involved in preparedness QI can help spur improvement. In a learning collaborative, sites would share ideas about what has worked, commit to trying out new ideas at home, and report their experiences back to the group. Our first learning collaborative is going to be this summer. Would your department be interested in participating in this learning collaborative?

[THANK RESPONDENT; ASK IF WE CAN CALL OR E-MAIL FOR FURTHER INFORMATION/CLARIFICATION]

References

Baker, E. L., Jr. and J. P. Koplan (2002). "Strengthening the Nation's Public Health Infrastructure: Historic Challenge, Unprecedented Opportunity." Health Affairs (Millwood) 21(6): 15-27.

Berwick, D. M., G. Blanton, et al. (1990). Curing Health Care: New Strategies for Quality Improvement in Health Care, San Francisco: Jossey-Bass Publishers.

Blumenthal, D. and C. M. Kilo (1998). "A Report Card On Continuous Quality Improvement." Milbank Q 76(4): 625-48, 511.

Bradley, E. H., E. S. Holmboe, et al. (2003). "The Roles Of Senior Management In Quality Improvement Efforts: What Are The Key Components?" J Healthc Manag 48(1): 15-28; discussion 29.

Carman, J. M., et al. (1996). "Keys for Successful Implementation of Total Quality Management in Hospitals." Health Care Management Review 21(1): 48.

Cretin, S., S. M. Shortell, et al. (2004). "An Evaluation Of Collaborative Interventions To Improve Chronic Illness Care. Framework And Study Design." Eval Rev. 28(1).

Daita, L. (2005). The Quality Portal. Available at http://www.thequalityportal.com/. Last accessed November 22, 2005.

Department of Homeland Security (2004). National Incident Management System. Available at http://www.dhs.gov/interweb/assetlibrary/NIMS-90-web.pdf. Last accessed November 18, 2005.

Derose, S. F., S. M. Asch, et al. (2003). "Developing Quality Indicators For Local Health Departments: Experience in Los Angeles County." Am J Prev Med 25(4): 347.

Drucker, P. (1954). The Practice of Management. New York, Harper & Row.

Galvin, R. S. (2001). "The Business Case For Quality." Health Aff (Millwood) 20(6): 57-8.

Halverson, P. K., R. M. Nicola, et al. (1998). "Performance measurement and accreditation of Public Health Organizations: A call to action." Journal of Public Health Management and Practice 4(4): 5-7.

Hassmiller, S. (2002). "Turning Point: The Robert Wood Johnson Foundation's Efforts to Revitalize Public Health at the State Level." J Public Health Manag Pract 1: 1-5.

Institute of Medicine (1988). The Future of Public Health. Washington, DC, National Academies Press.

Langley, G. J., K. M. Nolan, et al. (1996). The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. San Francisco, CA, Jossey-Bass.

Lighter, D. E. and D. C. Fair (2004). Quality Management in Health Care: Principles and Methods. Boston, MA, Jones and Bartlett Publishers.

Lister, S. (2005). An Overview of the U.S. Public Health System in the Context Of Emergency Preparedness. Available at http://www.fas.org/sgp/crs/homesec/RL31719.pdf. Last accessed November 18, 2005.

Nicola, R. M. (2005). "Turning Point's National Excellence Collaboratives: Assessing A New Model For Policy And System Capacity Development." Journal Of Public Health Management and Practice 11(2): 101-108.

Parker, V. A., et al. (1999). "Implementing Quality Improvement in Hospitals: The Role of Leadership and Culture." American Journal of Medical Quality 14(1): 64.

Public Health Foundation (2002). Turning Point Performance Management Collaborative Survey on Performance Management Practices in States. Seattle, WA, Turning Point National Program Office at the University of Washington.

Shortell, S. M., J. L. O'Brien, et al. (1995). "Assessing the Impact Of Continuous Quality Improvement/Total Quality Management: Concept Versus Implementation." Health Serv Res 30(2): 377-401.

Solberg, L. I., G. Mosser, et al. (1997). "The Three Faces Of Performance Measurement: Improvement, Accountability, And Research." Jt Comm J Qual Improv 23(3): 135-47.

Copyright Information: RAND Corporation

The research described in this report was prepared for the U.S. Department of Health and Human Services. This research was produced within RAND Health’s Center for Domestic and International Health Security. RAND Health is a division of the RAND Corporation.

The RAND Corporation is a nonprofit research organization providing objective analysis and effective solutions that address the challenges facing the public and private sectors around the world. RAND’s publications do not necessarily reflect the opinions of its research clients and sponsors.

R® is a registered trademark.

A profile of RAND Health, abstracts of its publications, and ordering information can be found on the RAND Health home page at www.rand.org/health.

© Copyright 2006 RAND Corporation

All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from RAND.

Published 2006 by the RAND Corporation
1776 Main Street, P.O. Box 2138, Santa Monica, CA 90407-2138
1200 South Hayes Street, Arlington, VA 22202-5050
4570 Fifth Avenue, Suite 600, Pittsburgh, PA 15213
RAND URL: http://www.rand.org/
To order RAND documents or to obtain additional information, contact
Distribution Services: Telephone: (310) 451-7002;
Fax: (310) 451-6915; Email: order@rand.org

-------------------------------------------------------------------------

1 Efforts at measuring performance and improving quality in public health have existed since 1945 with the publication of the Emerson report, which advanced several national standards in public health. However, the difficulties of developing consensus on appropriate measures to judge the performance of public health departments and methods to assess quality have limited the wide-scale adoption of standard quality improvement initiatives.

2 The Center for Health Policy at the Columbia University School of Nursing was another initial PHR partner.

3 The figure draws upon the CDC definition of PHEP, which emphasizes the development of preparedness goals designed to measure public health system response performance, as well as the National Response Plan, which defines preparedness as “the existence of plans, procedures, policies, training, and equipment necessary at the Federal, State, and local level to maximize the ability to prevent, respond to, and recover from major events” (Homeland Security Presidential Directive (HSPD-8), December 17, 2003).