
Using Control Charts or Pre-control Charts

Every process falls into one of four states:

  1. Ideal: produces 100 percent conformance and is predictable
  2. Threshold: predictable but produces the occasional defect
  3. Brink of chaos: not predictable and does not produce defects
  4. Chaos: not predictable and produces defects at an unacceptable rate

Processes tend to migrate toward chaos if not effectively managed.

Pre-control Charts

There are two basic philosophical differences between those who support control charts (or Shewhart charts, named for their developer, statistician Walter A. Shewhart) and those who support pre-control charts. The pre-control camp tends to view any product within specification as equally good. All outcomes are considered either “good” or “bad,” and the dividing line is a sharp cliff: a part that barely meets specification is as good as a part that is perfectly centered on the target (T) value. Producing product tighter than the specification limits is viewed as an unnecessary expense.

Figure 1: “Good” Within Product Specifications

Rath & Strong consultants, including statistician Frank Satterthwaite, developed pre-control charts in the 1950s. The technique focuses on the voice of the customer in that the pre-control limits are based on the upper and lower specification limits (USL and LSL). The hard-stop limits of a pre-control chart sit at the customer specification limits, and the cautionary limits sit at ±50 percent of the tolerance (see Figure 2).

Figure 2: Example of a Pre-control Chart

To establish process capability, five consecutive units must fall within the green region between the pre-control limits. After this condition is met, two successive units are sampled periodically. If both units fall in the green zone, continue production. If one unit falls in the green zone and the other in the yellow, continue production. If both units fall in the yellow zone, stop and adjust the process. If either unit falls in the red zone, stop and adjust the process. To resume normal production, five units in a row must again fall within the green zone. The sampling frequency is determined by dividing the average interval between stoppages by six.
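
These decision rules amount to a small state machine. Below is a minimal Python sketch of the zone classification and the two-unit sampling rule, assuming only the rules as stated above; the function names and spec values are illustrative, not from the article:

```python
# A minimal sketch of the pre-control rules described above. Zone boundaries
# come from the specification limits alone (voice of the customer): green is
# the middle half of the tolerance, yellow the outer quarters inside spec,
# red anything out of spec. Names here are illustrative, not a standard API.

def zone(x: float, lsl: float, usl: float) -> str:
    """Classify one measurement into a pre-control zone."""
    quarter = (usl - lsl) / 4.0
    if x < lsl or x > usl:
        return "red"
    if lsl + quarter <= x <= usl - quarter:
        return "green"      # middle 50 percent of the tolerance
    return "yellow"         # outer quarters, still inside spec

def decide(a: float, b: float, lsl: float, usl: float) -> str:
    """Apply the two-unit sampling rule to a pair of measurements."""
    zones = {zone(a, lsl, usl), zone(b, lsl, usl)}
    if "red" in zones or zones == {"yellow"}:
        return "stop and adjust"
    return "continue production"

# Example with a spec of 100 +/- 15, as in the article's later example:
print(decide(101.0, 108.5, lsl=85.0, usl=115.0))  # continue production
```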

Control Charts

Control chart philosophy more closely follows the Taguchi Loss Function, even though control charts were developed in the 1920s and the Taguchi Loss Function was not introduced until the 1960s. The Taguchi Loss Function states that as the parameter (x) varies about the target (T), there is a loss [L(x)] to society. Thus, a part produced at the target is more valuable than a part produced at the specification limits, because throughout the value stream accommodations must be made to tolerate that variation from the target value, adding cost to subsequent steps. (See Figure 3.)
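
Though the article never writes it out, the loss function is commonly given in its quadratic form, where k is a cost constant fitted to the application:

```latex
L(x) = k\,(x - T)^2
```

A part produced exactly at x = T incurs zero loss, and loss grows with the square of the distance from target; there is no cliff at the specification limits.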

Figure 3: Taguchi Loss Function

Shewhart chart control limits are chosen so that time is not wasted looking for unnecessary trouble; the practical goal is to take action only when necessary. Control limits are calculated by estimating the standard deviation from the sample data, adjusted for sample size, and multiplying that number by three. That number is then added to the average to give the upper control limit and subtracted from the average to give the lower control limit. Shewhart provided constants that ease these calculations. The control chart tests are designed to flag points that are not behaving “normally” (i.e., exhibiting special cause variation).
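
As a rough sketch of that calculation for an individuals (I-MR) chart, using the standard constants d2 = 1.128 (so 3/d2 ≈ 2.66) and D4 = 3.267 for a moving range of span two; the function name is illustrative:

```python
# Minimal sketch of Shewhart individuals-chart limits, estimated from the
# data itself (voice of the process) via the average moving range.
import statistics

def imr_limits(data):
    """Return center line and 3-sigma limits for an individuals chart."""
    mr = [abs(b - a) for a, b in zip(data, data[1:])]  # moving ranges, span 2
    xbar = statistics.mean(data)
    mrbar = statistics.mean(mr)
    sigma_hat = mrbar / 1.128           # d2 constant for subgroups of two
    return {
        "center": xbar,
        "ucl_x": xbar + 3 * sigma_hat,  # equivalently xbar + 2.66 * mrbar
        "lcl_x": xbar - 3 * sigma_hat,
        "ucl_mr": 3.267 * mrbar,        # D4 * mrbar; the MR chart's LCL is 0
    }
```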

The Shewhart chart focuses on the variation that is due to the process itself. Control limits are developed from the process data and are not tied to the specification limits. This is commonly referred to as the voice of the process (VOP), as the process is providing information about itself.

Figure 4: Example of a Control Chart

Control Chart Test for Special Cause Variation

There are eight control chart tests that can be used to reveal special cause variation. (Refer to Figure 4 for zone references; a minimal code sketch of the first two tests follows the list.)

  1. One point beyond Zone A detects a shift in the mean, an increase in the standard deviation or a single aberration in the process.
  2. Nine points in a row on one side (upper or lower) of the center line, in Zone C or beyond, detects a shift in the process mean.
  3. Six points in a row steadily increasing or decreasing detects a trend or drift in the process mean. Small trends will be signaled by this test before Test 1.
  4. Fourteen points in a row alternating up and down detects systematic effects such as two alternately used machines, vendors or operators.
  5. Two out of three points in a row in Zone A or beyond detects a shift in the process average or an increase in the standard deviation. Any two out of three points provide a positive test.
  6. Four out of five points in Zone B or beyond detects a shift in the process mean. Any four out of five points provide a positive test.
  7. Fifteen points in a row in Zone C, above and below the center line, detects stratification of subgroups, where the observations in a single subgroup come from various sources with different means.
  8. Eight points in a row on both sides of the center line with none in Zone C detects stratification of subgroups, where the observations in one subgroup come from a single source but subgroups come from different sources with different means.
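
As a hedged illustration of how such tests can be automated, here is a minimal Python sketch of Tests 1 and 2, assuming the center line and sigma have already been estimated; the function names are mine, not a standard library API:

```python
# Illustrative sketch of Tests 1 and 2 above. Zones A/B/C are the bands at
# 2-3, 1-2 and 0-1 sigma from the center line (see Figure 4).

def test1_beyond_zone_a(data, center, sigma):
    """Test 1: any point more than 3 sigma from the center line."""
    return [i for i, x in enumerate(data) if abs(x - center) > 3 * sigma]

def test2_nine_on_one_side(data, center):
    """Test 2: nine consecutive points on the same side of the center line."""
    hits, run, side = [], 0, 0
    for i, x in enumerate(data):
        s = 1 if x > center else -1 if x < center else 0
        run = run + 1 if s == side and s != 0 else 1
        side = s
        if run >= 9:
            hits.append(i)   # flag the point completing each run of nine
    return hits
```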

Shewhart charts determine what kind of variation the process is exhibiting. Common cause variation is systemic, chronic variation that is produced by any process. It is often thought of as “random” variation and is produced by the process itself. It can be large or small. Special cause variation is caused by a unique disturbance. It is unpredictable and can be large or small. The cause may be known or unknown and is not always bad.

What is the concern in identifying observed variation as common cause or special cause? Treating common cause variation as special cause increases variation, as illustrated by Dr. W. Edwards Deming’s funnel experiment described in Out of the Crisis. The experiment shows that treating common cause as special cause degrades process performance. Dr. Deming called this tampering.

Figure 5 displays results from a simulation that illustrates the effect of tampering: treating common cause variation as special cause variation greatly increases variation around the target value, making the problem worse. Conversely, if special cause variation is treated like common cause variation, the root of the problem is never found, and additional variation and cost are likely to be introduced into the process.
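
Figure 5’s simulation is not reproduced here, but the effect is easy to demonstrate. The sketch below, loosely following the funnel experiment’s rule 2 (my construction, not the article’s simulation), adjusts a stable process by the negative of each observed error and inflates the standard deviation by roughly a factor of the square root of two:

```python
# A minimal simulation in the spirit of the funnel experiment: "rule 1"
# leaves a stable process alone, while "rule 2" re-aims by the negative of
# each observed error, i.e., treats common cause as special cause.
import random
import statistics

random.seed(1)                      # arbitrary seed, for repeatability
N = 10_000
no_adjust, tampered, aim = [], [], 0.0
for _ in range(N):
    noise = random.gauss(0, 1)      # pure common cause variation
    no_adjust.append(noise)         # rule 1: hands off
    z = aim + noise
    tampered.append(z)              # rule 2: result after compensation
    aim -= z                        # "adjust" by the negative of the error

print(statistics.stdev(no_adjust))  # ~1.0
print(statistics.stdev(tampered))   # ~1.41: tampering inflates variation
```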

Figure 5: Effects of Tampering

Control Charts or Pre-control Charts: An Example

Much of the literature that supports the use of pre-control charts claims that control charts are a waste of time and too cumbersome to use. Those who hold to control charts often counter that pre-control charts cause users to tamper with their process and actually increase variation. Which group is correct? Consider the following example.

A set of 500 normally distributed data points with a mean of 100 and a standard deviation of 5 was created. Setting specification limits at 100 ±15 results in a Cpk of 1, which is optimum in pre-control terms. Because the data are normally distributed and centered on the target value, conditions are also fair for traditional control charts.
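
A sketch of that setup, assuming numpy and the usual Cpk definition, min(USL − μ, μ − LSL)/3σ (the seed is arbitrary):

```python
# Recreate the article's data set: 500 normal points, mean 100, sd 5,
# with specification limits of 100 +/- 15.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=100, scale=5, size=500)

lsl, usl = 85.0, 115.0
mu, sigma = data.mean(), data.std(ddof=1)
cpk = min(usl - mu, mu - lsl) / (3 * sigma)
print(f"Cpk ~= {cpk:.2f}")   # close to 1 for this centered process
```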

Figure 6: Probability Plot

The individuals chart (Figure 7), which is the closest Shewhart chart to the pre-control chart, flags points that fall more than three standard deviations from the process mean. This is expected, as the process is centered on the specification mean in this example; about 1 in 370 points is expected to fall beyond three standard deviations in a normal distribution. The individuals chart is also the most sensitive of the Shewhart charts but should always be used in conjunction with the moving range chart.

Short-term variation is not investigated by an individuals chart; that is the job of the moving range chart (Figure 8). The moving range chart indicates that seven moving-range points appear to be behaving abnormally and should be investigated.

The pre-control chart (Figure 9) flags eight additional adjacent pairs as falling two standard deviations away from the specification mean, thus requiring process adjustment. Following the pre-control rules here would lead to tampering. In total, 59 points require additional evaluation beyond those flagged by the Shewhart method in this example.

Figure 7: Individuals Chart

Figure 8: Moving Range Chart

Figure 9: Pre-control Chart

It appears that the pre-control chart has a higher false-positive rate and would encourage tampering. Pre-control measures compliance with the customer specification, the voice of the customer; control charts measure process variation, the voice of the process. Control charts offer analytical power, especially when rational subgrouping is used. Rational subgrouping also reduces the potential for false positives, and it is not possible with pre-control charts.

Pre-control charts have limited use as an improvement tool. Pre-control does not detect shifts, drifts and trends with the statistical confidence that control charts or run charts provide. See the table below for a side-by-side comparison of the two tools.

Comparison of Control and Pre-control Charts

| | Control Charts | Pre-control Charts |
| --- | --- | --- |
| Protects the Customer | In conjunction with process capability | The goal of pre-control charts |
| Useful in Process Improvement | Highly useful | Minimally useful |
| Variation Inflation Risk | Minimal | Likely |
| Ease of Use | Readily available software; chart-based | Must be developed manually or with custom software; charting not required |
| Broadly Accepted | Yes | No |
| Conducive to Rational Subgrouping | Yes | No |
| Statistically Valid | Yes | Questioned |

Many quality professionals have declared that pre-control charts have gone the way of the Dodo bird. They are, however, a helpful tool to use after changeovers. Pre-control charts can help to roughly center the process until there are enough points to calculate control limits and reestablish capability, but only if the rules are slightly modified: “If…, stop and adjust the process” should be changed to “If…, stop and investigate the process.” In the event of a pre-control chart trigger, problem-solving analysis tools should be employed rather than blindly adjusting the process.

By using this slightly modified pre-control charting as part of a changeover procedure, the customer can be protected until stability, control and capability are established. Recommendations for the number of points required to calculate control limits vary widely, from as few as 14 to as many as 100, with 30 being the most common. If an institution uses a higher number of points, there may be a place for pre-control charts in its changeover practices.

Resources

  1. Berardinelli, Carl, “A Guide to Control Charts,” iSixSigma.com, isixsigma.com/tools-templates/control-charts/a-guide-to-control-charts/.
  2. Bhote, Keki R. and Bhote, Adi K., World Class Quality: Using Design of Experiments to Make It Happen, American Management Association, 2000.
  3. Deming, W. Edwards, Out of the Crisis, The MIT Press, 1982.
  4. Martin, Tripp, “Shewhart Charts and Pre-Control: Rivals or Teammates?” Annual Quality Congress, May 1992, asq.org/qic/display-item/index.html?item=9811.
  5. Wheeler, Donald J. and Chambers, David S., Understanding Statistical Process Control, SPC Press, 1992.


Comments

  1. Mike Carnell

    This article has left me with very mixed feelings about what will happen if someone with very little experience takes it at face value. Normally I prefer not to get involved too deeply in these tool issues, but most of these isolated tool discussions absolutely fly in the face of what Deming referred to as profound knowledge.

    First, using pre-control in a machine setup process has been done for years. Pre-control becomes very useful there because, without a process that forces people to set up to a target rather than a range (USL to LSL), a process can be turned over to production set to operate just within a specification. Using a pre-control chart will force that setup to the center of the specification. The idea is that if I am setting up and just measuring a few parts, I am ignoring the distribution around that particular point. If I set up right on a specification limit of a normally distributed process, I should expect 50% defects. Even using pre-control and forcing the setup to the center of the spec, without knowledge of the std. dev. I still run the risk of building defects because I have no knowledge of capability.

    The second issue is that you state “Control chart philosophy more closely follows the Taguchi Loss Function even though control charts were developed in the 1920s and the Taguchi Loss Function was not introduced until the 1960s.” The problem is “more closely follows.” The article doesn’t specifically state that the specification range is being used as a target, but it does imply it. Your diagram of the Taguchi Loss Function is correct in that it shows the target as a point. Any time I produce something other than that point, there is a loss imparted to society, even when I am within the specification limits.

    Let’s use gold plating to demonstrate the issue, because it is easiest for people to understand the loss. I was required to plate gold within a specification. Any place within that specification was acceptable to the customer. That doesn’t mean it automatically becomes my target, based on the business I am working for. The most profitable point is the LSL as a mean with a std. dev. of zero. That isn’t going to happen, but it is the ideal target. So operationally, what is the target? It has to be somewhere within the spec, but how do I determine it? I don’t want to get into the shift vs. no-shift discussion, but it could be something between 3 and 6 std. deviations from the specification. That leaves you suboptimal to the Taguchi loss function (the loss imparted to society) but still in control (assuming the process is in control to begin with). The job then becomes reducing the std. dev. and shifting the process toward the LSL. I am not sure why there was no discussion of Cpm or, for old timers, Cpt, which is basically capability to hit a target. In most manufacturing situations the difference in the target can be invisible to the people on the commercial side of the business, but when it comes to gold the difference shows up very quickly. At that time my sector VP stated, “I can drive a Lincoln Continental off a bridge every day for what you scrap.” Obviously it left an impression, since I can still see him saying that almost 25 years later.

    Basically, a big part of this discussion that is missing is the effect of variation. When I am using a pre-control chart, or just an X-bar chart, I am ignoring the issue of variation. Measures of central tendency tend to be knob variables; shifting a process location should be relatively easy. There is also the effect of hetero- and homoscedasticity. Unless you are paying attention to the variation, tweaking can have a large number of ancillary effects that you may or may not understand.

    The whole drive behind Six Sigma was to stop focusing on the average and to understand the effects of variation. As much as I am an advocate of pre-control for machine setup, there is more to running an efficient process than setup. With today’s technology, even control charts have some serious limitations. Something as simple as bottle filling could be control charted, but at today’s line speeds, in the time it takes to pull samples, make measurements, plot points and react, you will have a disaster on your hands.

    Basically, as technology advances it is becoming so easy to build intelligence into a process that a lot of things have gone the way of the Dodo bird. The whole idea of turning your profitability and efficiency over to a chart can be particularly risky. Doing it without thoroughly understanding a process is even more risky.

    Just my opinion.

  2. Keller

    Despite the tease at the beginning, I’m glad to read the conclusion that pre-control charts are generally not recommended. Yet, there are a couple of issues that should be noted:
    1. You state “The individuals chart is also the most sensitive of the Shewhart charts…”. Actually, the Individuals chart is the LEAST sensitive of Shewhart control charts. The sensitivity of a Shewhart chart in detecting a process shift increases with subgroup size: the larger the subgroup size, the greater the sensitivity to detect a process shift. However, there is diminishing return as larger subgroups are used, and this must be balanced with the need for a rational subgroup. Larger subgroups tend to increase the chance of a special cause within the subgroup, which would lead to irrational subgroups. Therefore, stick with 3 to 5 observations in a subgroup or, for some processes, use individuals data. For individuals data, a Moving Average or EWMA chart would be preferred.
    2. I don’t see how anyone can credibly claim the pre-control chart “protects the customer”, as it’s well-known (thanks to Deming) that using specification limits to control your process will lead to tampering and a resulting increase in process variation. That’s hardly protecting the customer! Rather, while the control chart allows you to predict the process variation and ensure the process is reliably producing output that meets the specifications, the pre-control chart is incapable of prediction, and you are forced to sample 100% of your process output to ensure conformance to specification.

  3. Kathryn Loncle

    Hi Carl, enjoyed your article very much, and also the comments.

    I researched pre-control charts several years ago because I needed to find a solution for manufacturing operators in an environment where updating paper charts was difficult and where computer access was limited.

    I used a visual similar to your Figure 2, but the zones were not set up the same way. My intent was to avoid the situation Paul mentions about tampering: the possibility of someone getting a result near the spec line and retesting until they got a value that was in. It might be more accurate to call what I used a stop light chart (green, yellow, red), where you stop cold at the red zone.

    What we did differently was set the red area inside the spec limit on each side, not beyond the spec limit. All zones were therefore within the spec limits. I won’t get into the detail of what we put in place, how many points you had to get before action and what the actions were for each case, but the overall process worked very well for our purpose and prevented our knowingly producing bad product. I would be happy to post an example but am not sure that I can add an image in comments. Kathryn Jansen

  4. Kicab Castaneda-Mendez

    There are several reasons why your point that “Control chart philosophy more closely follows the Taguchi Loss Function” is not true. The Taguchi loss function is based on the premise that any deviation from nominal or target creates a loss; therefore, rather than having specifications, every deviation from nominal should be viewed negatively as something to prevent.
    1. You state that “pre-control folks tend to view any product within specification as equally good. All outcomes are considered to be ‘good’ or ‘bad’ and the dividing line is a sharp cliff.” However, using 3-sigma (standard error) limits on control charts creates exactly the cliff to which you object and contradicts the Taguchi loss function. A point just inside the 3-sigma limits is viewed as “good” while a point just outside is viewed as “bad.”
    2. Any control chart that aims to have stability but not zero deviation from nominal is inconsistent with the Taguchi loss function. Using Xbar or X charts, the center line should equal the nominal. If it doesn’t then it doesn’t matter whether the process is in control—you have on average a loss. Assuming that the center line of the control chart equals nominal, every point not on the center line has created a loss, according to the loss function—even if the process is stable. Equally, the variability charts S, R, and MR and the attribute charts P, NP, c, and u with nonzero control limits show losses even when displaying stability. Thus, these charts are inconsistent with the Taguchi loss function.
    3. Your simulation does not show the case where a process is slightly out of control, which the 3-sigma limits cannot detect or are less likely to detect.
    4. Even with the other rules, e.g., rule 3, using 6, 7 or 8 points instead of 9 is more likely to detect these small but loss-creating changes. In fact, only the first rule (a point beyond the control limits) relies on the 3-sigma control limits. All the other rules can be used with pre-control charts, contradicting the claim that “Pre-control does not detect shifts, drifts and trends with statistical certainty as control charts or run charts do.”
    5. If the control limits are not probability limits but empirical limits (as some people claim), then the claim that control charts are statistically valid while pre-control charts are not, is false or at least not applicable. The only requirement for applying statistics is an assumption of a probability distribution, which can be made with pre-control charts also. Thus, both or neither are statistically valid.
    Thus, it isn’t the type of control chart that makes the chart consistent or inconsistent with the Taguchi loss function. It is the 1) purpose of aiming for stability regardless of the amount of deviation and 2) rules that determine stability regardless of the size of deviations from nominal.
    You might respond by stating that we are not interested in all changes but only the critical few and not the trivial many. That would prove the point that control charts are inconsistent with the Taguchi loss function.
    You might respond by stating that there would be an additional cost of chasing many false alarms. However, since we don’t know how the “loss to society” is calculated (as it is merely conceptual), we don’t know 1) how that additional cost is added to the “loss to society” and 2) whether it exceeds the cost of failing to catch and correct the small deviations that did produce losses, which are sometimes financially huge or even fatal.
