  • Expert Commentary
  • November 09, 2009

A Selection of Statistical Process Control Tools Used in Monitoring Health Care Performance

Statistical Process Control (SPC) is a methodology for the ongoing monitoring, control, and improvement of processes through the use of statistical tools (1). SPC comprises a number of graphical methods that help achieve several objectives: quantifying one or more measures of a process; determining whether the process is operating within an acceptable range of variability; identifying ways the process can be improved toward its best target value; and eliminating unacceptable variability. An example of a health care process is the prevention of patient falls in a hospital. A measure of this process might be the number of patient falls in a given month per 100 patient days. One would expect some patient falls every month, with the number varying from month to month due to natural variation.

The purpose of this article is to briefly outline some of the SPC tools available for monitoring a health care process with corresponding references to their application. Although variability is expected in any process, reducing this variation to the extent possible and operating the process at optimal levels are the main goals. Measurement is the first step, since only through measurement are we able to determine the stability of the process and whether it operates at an acceptable level. Furthermore, only through measurement can improvement efforts be undertaken to achieve their maximum effect.

Variation is an expected component of any health care process. The challenge of monitoring a health care process lies in distinguishing systematic deviations from the expected variation that is a normal part of a stable process. This expected variation is attributed to common causes of variation, while systematic deviations from the stable pattern are attributed to special causes of variation. Once all special causes of variation are identified and eliminated, the process can be predictably characterized and efficiently improved through systematic redesign of hospital processes, thus eliminating needless waste (2). In the patient falls example given above, a common cause of variation would be monthly differences in the number of patients on high-risk medications, which could affect a patient's risk of falling. An example of a special cause would be a new, inexperienced nurse added to a floor who has not yet been fully trained in fall prevention strategies.

Control charts are a primary tool used in SPC to quantify the amount of variation in a process, determine whether the process is operating predictably, and distinguish between common and special causes of variation. The basic control chart consists of two parts: (1) a series of measurements, or a summary of measurements within a particular time period, plotted in time order and connected with a line, together with a center line representing the typical (e.g., mean) level of the process; and (2) two lines that frame this center line: one drawn above it, called the upper control limit (UCL), and one drawn below it, called the lower control limit (LCL). The control limits are used to help distinguish common causes from special causes of variation (3). One way to determine these limits is to examine past years' measurements, which provide some indication of the degree of natural variation. An example of a control chart for patient falls is given in Figure 1.

Figure 1: Control Chart for Patient Falls Example

Figure description: Figure 1 is a line graph depicting an example of a control chart for patient falls. This chart displays the number of patient falls per 100 patient days over the course of 24 months, as follows: Month 1 was 5.9 falls per 100 days; Month 2 was 7.5 falls per 100 days; Month 3 was 6.4 falls per 100 days; Month 4 was 7.8 falls per 100 days; Month 5 was 3.4 falls per 100 days; Month 6 was 0.0 falls per 100 days; Month 7 was 4.3 falls per 100 days; Month 8 was 0.4 falls per 100 days; Month 9 was 4.7 falls per 100 days; Month 10 was 0.4 falls per 100 days; Month 11 was 2.9 falls per 100 days; Month 12 was 14.8 falls per 100 days; Month 13 was 3.1 falls per 100 days; Month 14 was 4.4 falls per 100 days; Month 15 was 5.5 falls per 100 days; Month 16 was 7.1 falls per 100 days; Month 17 was 6.4 falls per 100 days; Month 18 was 1.5 falls per 100 days; Month 19 was 7.1 falls per 100 days; Month 20 was 10.8 falls per 100 days; Month 21 was 2.3 falls per 100 days; Month 22 was 6.9 falls per 100 days; Month 23 was 6.6 falls per 100 days; Month 24 was 4.0 falls per 100 days. The Upper Control Limit was 12.0, the Center Line was 5.175, and the Lower Control Limit was 0.
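As a check on Figure 1, the center line and control limits can be reproduced from the monthly values. This sketch assumes each month represents roughly 100 patient days (an exposure of n = 1 in units of 100 patient days, an assumption not stated in the article) and applies the standard three-sigma u-chart formula:

```python
import math

# Monthly fall rates per 100 patient days, from Figure 1.
falls_per_100_days = [5.9, 7.5, 6.4, 7.8, 3.4, 0.0, 4.3, 0.4, 4.7, 0.4,
                      2.9, 14.8, 3.1, 4.4, 5.5, 7.1, 6.4, 1.5, 7.1, 10.8,
                      2.3, 6.9, 6.6, 4.0]

n = 1.0  # exposure per month, in units of 100 patient days (assumed)
center_line = sum(falls_per_100_days) / len(falls_per_100_days)  # u-bar

# Three-sigma u-chart limits: u-bar +/- 3 * sqrt(u-bar / n),
# with the lower limit truncated at zero.
sigma = math.sqrt(center_line / n)
ucl = center_line + 3 * sigma
lcl = max(0.0, center_line - 3 * sigma)
```

Under this assumption the calculation reproduces the published chart: a center line of 5.175, a UCL of about 12.0, and an LCL truncated at 0.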

There are different types of control charts; each type calculates the UCL and LCL in a different way. The correct control chart to use depends on the type of data being analyzed and the number of data points collected at each time point. The two main types of data are continuous data (e.g., time to antibiotic administration or blood pressure), which are measured on a continuous scale, and discrete data, which involve either counts of a quantity (e.g., number of patients or tests) or attributes (e.g., whether a patient died or received a pneumonia vaccination). The different types of data and the appropriate control charts to use are (4):

  • Continuous data with one data point available at a time: XmR chart (individuals chart)
  • Continuous data with 2-9 data points per time: x-bar and R charts
  • Continuous data with greater than or equal to 10 data points per time: x-bar and S charts
  • Count data: c-chart (using counts) or u-chart (using ratio of counts to size of population)
  • Attribute data: p-chart or np-chart
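The mapping above can be encoded as a small lookup helper; the function name and return strings here are illustrative, not part of any SPC library:

```python
def choose_chart(data_type, points_per_time=1):
    """Suggest a control chart per the data-type mapping above.

    data_type: 'continuous', 'count', or 'attribute'.
    points_per_time: observations collected at each time point
    (relevant for continuous data only).
    """
    if data_type == "continuous":
        if points_per_time == 1:
            return "XmR (individuals) chart"
        if 2 <= points_per_time <= 9:
            return "x-bar and R charts"
        return "x-bar and S charts"  # 10 or more points per time
    if data_type == "count":
        return "c-chart (counts) or u-chart (counts per unit of exposure)"
    if data_type == "attribute":
        return "p-chart or np-chart"
    raise ValueError("unknown data type: " + data_type)
```

For example, `choose_chart("continuous", 5)` suggests x-bar and R charts, consistent with the list above.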

For rare events such as hospital-acquired infections or asthma attacks, g-type control charts have been developed for the time or number of cases between events (5). The control chart of the falls example in Figure 1 is a u-chart, since the actual number of falls in a month is count data, expressed per 100 patient days.
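One common formulation of g-chart limits is based on the geometric distribution of the number of cases between events (5); the function below is a sketch of that formulation, with a three-sigma width assumed:

```python
import math

def g_chart_limits(gaps):
    """Approximate three-sigma g-chart limits from observed numbers of
    cases (or days) between adverse events.

    Uses the geometric-distribution-based standard deviation
    sqrt(g_bar * (g_bar + 1)), where g_bar is the mean gap.
    """
    g_bar = sum(gaps) / len(gaps)
    sigma = math.sqrt(g_bar * (g_bar + 1))
    ucl = g_bar + 3 * sigma
    lcl = max(0.0, g_bar - 3 * sigma)  # gaps cannot be negative
    return g_bar, ucl, lcl
```

Because the geometric distribution is strongly skewed, the lower limit is usually truncated at zero, as shown.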

Once a control chart is produced, it is used to identify special causes of variation. There are a number of abnormal patterns, called out-of-control patterns, that indicate the presence of a special cause of variation. One such pattern is one or more data points falling outside the control limits. Another is an abnormal, nonrandom data pattern, such as eight successive points on the same side of the center line (6,7).
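These two detection rules can be sketched directly; the function name and the default run length of eight are taken from the rules described above, and everything else is illustrative:

```python
def special_cause_signals(points, center, lcl, ucl, run_length=8):
    """Return indices flagged by two common out-of-control rules:
    (a) any point outside the control limits, and
    (b) the last point of a run of `run_length` consecutive points
        on the same side of the center line.
    """
    outside = [i for i, x in enumerate(points) if x > ucl or x < lcl]

    runs = []
    streak, last_side = 0, 0
    for i, x in enumerate(points):
        side = 1 if x > center else (-1 if x < center else 0)
        if side != 0 and side == last_side:
            streak += 1
        else:
            streak = 1 if side != 0 else 0
        last_side = side
        if streak >= run_length:
            runs.append(i)
    return outside, runs
```

A point exactly on the center line resets the run count in this sketch; published rule sets vary on that detail.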

When a process has been monitored for a period of time and is operating in control, it is desirable to detect small but sustained deviations from a performance standard early, which may not be possible using standard control chart methods. In this case, another type of control chart called the cumulative sum (CUSUM) control chart can be used. In its most basic form, a CUSUM chart for attribute data is a plot of the cumulative count of the number of cases with an attribute, for instance death, against the sequential case number. By drawing on the entire sequence of data points to reveal patterns of small sustained increases, sensitivity to small process shifts is increased. As with standard control charts, control limits are added to the plot to detect out-of-control patterns (8). The exponentially weighted moving average (EWMA) control chart is another method used to detect small changes in a process; it uses a moving average of the observations to monitor changes in the process. The moving average acts to "smooth" the data, or filter out some of the "noise" or variability in the data, so that significant trends can be more easily detected.
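A minimal sketch of both ideas follows, assuming the basic forms described above: a cumulative attribute count for the CUSUM plot, and the standard recursion z_t = lam * x_t + (1 - lam) * z_{t-1} for the EWMA. Function names and the default weight are illustrative:

```python
def cusum_counts(events):
    """Basic attribute CUSUM series: the cumulative count of cases
    with the attribute (e.g., death), plotted against case number."""
    total, cum = 0, []
    for e in events:
        total += 1 if e else 0
        cum.append(total)
    return cum

def ewma(points, lam=0.2, start=None):
    """Exponentially weighted moving average smoother:
    z_t = lam * x_t + (1 - lam) * z_{t-1}.

    `start` (z_0) defaults to the first observation; in practice it is
    usually set to the process target or historical mean.
    """
    z = points[0] if start is None else start
    smoothed = []
    for x in points:
        z = lam * x + (1 - lam) * z
        smoothed.append(z)
    return smoothed
```

Smaller values of `lam` weight the history more heavily, smoothing more aggressively and increasing sensitivity to small sustained shifts at the cost of slower response to large ones.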

When analyzing measures that represent clinical outcomes, such as mortality, it is important to adjust the analysis to account for differences among the individual units of analysis (e.g., patients) in their likelihood of experiencing the outcome. This is necessary in order to fairly compare the performance of providers such as hospitals with each other or to accurately evaluate an individual provider's performance over time. Methods have been developed for producing both risk-adjusted control charts (9) and risk-adjusted CUSUM charts (10).
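One simple form of risk adjustment is indirect standardization (an observed-to-expected ratio); the sketch below assumes the per-patient predicted probabilities come from a separate, hypothetical risk model, which the cited methods (9,10) build into the chart itself:

```python
def risk_adjusted_rate(outcomes, predicted_probs, overall_rate):
    """Indirectly standardized rate: observed events divided by the
    events expected under a risk model, scaled by the overall rate.

    outcomes: per-patient 0/1 indicators of the outcome (e.g., death).
    predicted_probs: per-patient probabilities from a (hypothetical)
        risk model reflecting each patient's likelihood of the outcome.
    overall_rate: the reference population rate.
    """
    observed = sum(outcomes)
    expected = sum(predicted_probs)
    return (observed / expected) * overall_rate
```

A provider treating sicker patients accumulates a larger `expected` total, so the same observed count yields a lower adjusted rate, which is the fairness property the paragraph describes.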

The monitoring methods considered above use internal standards. It is often desirable to compare an organization's data to external standards. The comparison chart is a tool for comparing an organization's process results to those of an external comparison group, providing guidance about whether the organization is performing at an acceptable level or should try to improve its current performance. An expected range is calculated around the comparison value as a function of the variability of the organization's observed value. Observed performance falling outside this expected range identifies performance that is significantly different from the comparison value (11).

When comparing the performance data of a number of organizations, a form of control chart known as a funnel plot has been recommended (12). In a funnel plot, each observed measure value is plotted against a measure of its precision, with a target line representing a comparison value and control limits superimposed on the plot. The funnel plot adjusts for the "over-dispersion," or excess variability, often found when comparing performance data over multiple disparate organizations, and can be adapted to risk-adjusted data. The funnel plot provides the control chart user with a more accurate method of identifying performance that is significantly different from the norm.
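For a proportion measure, the basic funnel-plot limits at each sample size can be sketched with a normal approximation to the binomial; exact binomial limits and the over-dispersion adjustment described by Spiegelhalter (12) are refinements of this simple form:

```python
import math

def funnel_limits(target, n, z=3.0):
    """Approximate funnel-plot control limits for a proportion measure
    at sample size n: target +/- z * sqrt(target * (1 - target) / n),
    clipped to the [0, 1] range of a proportion.
    """
    half_width = z * math.sqrt(target * (1 - target) / n)
    return max(0.0, target - half_width), min(1.0, target + half_width)
```

Evaluating these limits over a range of `n` produces the characteristic funnel shape: wide limits for small organizations, narrowing as sample size (precision) grows.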

In the falls example, it might be determined from the control chart that the monthly fall rate, although stable and in control, is too high. Or, the control chart may have identified a couple of months where the fall rate was unusually high. In the first case, interventions such as the initiation of a fall prevention program could be used to reduce the fall rate, and the control chart could serve as the basis for determining improvement. In the second case, investigation could begin to explore the special causes behind those identified months with high fall rates to help prevent this from occurring in the future. If special causes of variation are detected using the above tools, or if it is desired to reduce the common cause variation in the process, there are a variety of quality improvement tools that can be used to identify the source of the problem(s) and to initiate improvement efforts. These tools include brainstorming (a technique used to generate a large number of ideas in a short period of time), the affinity diagram (a tool to organize a large number of ideas into their natural relationships), the work-flow diagram (a diagram showing the movement of people, materials, or information through a process), the top-down flowchart (a diagram showing an overview of the most important steps in a process), the flowchart (a more detailed diagram of the separate steps of a process in sequential order), the Pareto chart (a bar graph whose bar heights reflect the frequency or impact of problems), and the fishbone diagram (a visual diagram of the causes and effects of a problem) (13).
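The ordering behind a Pareto chart can be sketched in a few lines; the fall-related categories in the usage example are hypothetical, chosen only to illustrate the calculation:

```python
def pareto_table(problem_counts):
    """Sort problem categories by frequency (the ordering behind a
    Pareto chart) and attach cumulative percentages.

    problem_counts: dict mapping category name -> frequency.
    Returns a list of (name, count, cumulative_percent) rows.
    """
    total = sum(problem_counts.values())
    ordered = sorted(problem_counts.items(),
                     key=lambda kv: kv[1], reverse=True)
    rows, cum = [], 0
    for name, count in ordered:
        cum += count
        rows.append((name, count, 100.0 * cum / total))
    return rows

# Hypothetical contributing causes for patient falls:
rows = pareto_table({"wet floor": 5, "missing rail": 3, "poor lighting": 2})
```

The cumulative-percent column is what lets an improvement team focus on the few categories accounting for most of the problem.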

Author

Stephen Schmaltz, PhD
The Joint Commission, Oakbrook Terrace, IL

Disclaimer

The views and opinions expressed are those of the author and do not necessarily state or reflect those of the National Quality Measures Clearinghouse™ (NQMC), the Agency for Healthcare Research and Quality (AHRQ), or its contractor, ECRI Institute.

Potential Conflicts of Interest

Dr. Schmaltz declared no potential conflicts of interest with respect to this commentary.

References

  1. Oakland JS. Statistical process control. 6th ed. Oxford, UK: Butterworth-Heinemann, 2008.
  2. Berwick DM. Controlling variation in health care: a consultation from Walter Shewhart. Med Care. 1991;29:1212-1225.
  3. Carey RG and Lloyd RC. Measuring quality improvement in healthcare: a guide to statistical process control applications. Milwaukee, WI: American Society for Quality, Quality Press, 2001.
  4. Mohammed MA, Worthington P, Woodall WH. Plotting basic control charts: tutorial notes for healthcare practitioners. Qual Saf Health Care. 2008;17:137-145.
  5. Benneyan JC. Number-between g-type statistical quality control charts for monitoring adverse events. Health Care Management Science. 2001;4:305-318.
  6. Gitlow H, Gitlow S, Oppenheim A, and Oppenheim R. Tools and methods for the improvement of quality. Homewood, IL: Irwin, 1989.
  7. Benneyan JC, Lloyd RC, and Plsek PE. Statistical process control as a tool for research and healthcare improvement. Qual Saf Health Care. 2003;12:458-464.
  8. Montgomery DC. Introduction to statistical quality control. 5th ed. New York: John Wiley, 2005.
  9. Hart M, Lee K, Hart R, and Robertson W. Application of attribute control charts to risk-adjusted data for monitoring and improving health care performance. Qual Man Health Care. 2003;12:5-19.
  10. Woodall WH. The use of control charts in health-care and public-health surveillance. Journal of Quality Technology. 2006;38:89-104.
  11. Lee YL and McGreevey C. Using comparison charts to assess performance measurement data. Jt Comm J Qual Improv. 2002;28:90-101.
  12. Spiegelhalter DJ. Funnel plots for comparing institutional performance. Stat Med. 2005;24:1185-1202.
  13. Tague NR. The quality toolbox. Milwaukee, WI: American Society for Quality, Quality Press, 1995.