iSixSigma

Carl Berardinelli

Activity

  • You should not recalculate control limits without a reason, such as a process change.

  • Please refer to Shewhart’s original works: Economic Control of Quality of Manufactured Product and Statistical Method from the Viewpoint of Quality Control.

  • What are these constants? Can they be calculated? Yes: they are based on d2, a control chart constant that depends on subgroup size (see below). The d2 constant is the result of an infinite integral. Thankfully, […]
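    A minimal sketch of how d2 could be approximated by simulation, since it is the expected range of a subgroup of independent standard normal values (published tables give, for example, 1.128 for n = 2 and 2.326 for n = 5):

    ```python
    import numpy as np

    def estimate_d2(subgroup_size, n_sims=200_000, seed=0):
        """Approximate the d2 control chart constant by Monte Carlo.

        d2 is the expected range of `subgroup_size` independent
        standard normal observations; exact values come from the integral.
        """
        rng = np.random.default_rng(seed)
        samples = rng.standard_normal((n_sims, subgroup_size))
        return (samples.max(axis=1) - samples.min(axis=1)).mean()

    for n in (2, 3, 4, 5):
        print(n, round(estimate_d2(n), 3))  # approx. 1.128, 1.693, 2.059, 2.326
    ```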

  • Control limits of a process control chart have no relationship to specification limits.
    If the process is stable and in control, then process capability can be determined. Process capability is a different […]
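    A rough sketch of that determination, assuming approximately normal data from a stable process and using the overall standard deviation for simplicity (a within-subgroup estimate such as R-bar/d2 is more typical for Cp/Cpk); the data and spec limits below are made up:

    ```python
    import numpy as np

    def cp_cpk(data, lsl, usl):
        """Capability indices: Cp compares the spec width to the 6-sigma spread;
        Cpk also accounts for how well the process is centered."""
        mu = np.mean(data)
        sigma = np.std(data, ddof=1)
        cp = (usl - lsl) / (6 * sigma)
        cpk = min(usl - mu, mu - lsl) / (3 * sigma)
        return cp, cpk

    # Hypothetical measurements and specification limits, for illustration only
    data = np.random.default_rng(1).normal(10.1, 0.2, size=100)
    print(cp_cpk(data, lsl=9.4, usl=10.6))
    ```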

  • There is a lot of truth in the saying: “The only thing worse than no data is bad data.”

  • Very nice article. These criteria would serve any deployment well.

  • Arun,
    I agree. You might want to review these articles too:
    A Guide to Control Charts http://www.isixsigma.com/tools-templates/control-charts/a-guide-to-control-charts

    Multivariate Control Charts: T2 and […]

  • Aishah, Jaime,

    Minitab automatically does the decomposition. If I remember correctly, JMP also does the decomposition.

    Thank You,

  • Deepak,

    You can certainly use a run chart to determine whether your data contain special cause variation; however, a run chart is not a substitute for a full gauge study. You seem to be describing a short-term […]

  • The run chart is a powerful, simple and easy-to-use process improvement tool. Often, the run chart is shortchanged because the statistical tests that can be used with it are overlooked. This article takes the […]
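    As one example of such a test, a rough sketch of the runs-about-the-median check (too few runs suggests clustering, too many suggests mixtures); this is a generic Wald-Wolfowitz-style version, not necessarily the exact form used in the article:

    ```python
    import statistics

    def runs_about_median_test(values):
        """Count runs above/below the median and compare to the expected count.

        Points equal to the median are dropped, as is conventional.
        A large |z| (roughly > 2) flags a non-random pattern.
        """
        med = statistics.median(values)
        signs = [v > med for v in values if v != med]
        n_above, n_below = sum(signs), len(signs) - sum(signs)
        runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)

        n = n_above + n_below
        expected = 2 * n_above * n_below / n + 1
        variance = 2 * n_above * n_below * (2 * n_above * n_below - n) / (n**2 * (n - 1))
        z = (runs - expected) / variance ** 0.5
        return runs, expected, z
    ```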

  • Every process falls into one of four states:

    Ideal: produces 100 percent conformance and is predictable
    Threshold: predictable but produces the occasional defect
    Brink of chaos: not predictable and does […]

    • This article has left me with very mixed feelings about what will happen if someone with very little experience takes it at face value. Normally I prefer not to get involved too deeply in these tool issues. Most of these isolated tool discussions absolutely fly in the face of what Deming referred to as profound knowledge.

      First, using pre-control in a machine setup process has been done for years. Pre-control becomes very useful there because, without a process that forces people to set up to a target rather than a range (USL to LSL), a process can be turned over to production set to operate just within a specification. Using a pre-control chart forces that setup toward the center of the specification. The idea is that if I am setting up and just measuring a few parts, I am ignoring the distribution around that particular point. If I set up right on a specification limit of a normally distributed process, I should expect 50% defects. Even using pre-control and forcing the setup to the center of the spec, without knowledge of the standard deviation I still run the risk of building defects because I have no knowledge of capability.
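      To illustrate that point roughly, assuming a normally distributed characteristic and invented numbers (spec of 9.0 to 11.0, sigma of 0.3):

      ```python
      from statistics import NormalDist

      def fraction_out_of_spec(mean, sigma, lsl, usl):
          """Expected out-of-spec fraction for a normal process at a given setup."""
          d = NormalDist(mean, sigma)
          return d.cdf(lsl) + (1 - d.cdf(usl))

      # Hypothetical spec of 9.0-11.0 with sigma = 0.3
      print(fraction_out_of_spec(11.0, 0.3, 9.0, 11.0))  # set up on the USL: ~0.50
      print(fraction_out_of_spec(10.0, 0.3, 9.0, 11.0))  # set up at center: ~0.0009
      ```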

      The second issue is that you state, “Control chart philosophy more closely follows the Taguchi Loss Function even though control charts were developed in the 1920s and the Taguchi Loss Function was not introduced until the 1960s.” The problem is with “more closely follows.” The article doesn’t specifically state that the specification range is being used as a target, but it does imply it. Your diagram of the Taguchi Loss Function is correct in that it shows the target as a point. Any time I produce something other than that point, there is a loss imparted to society, even when I am within the specification limits.
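      For reference, a minimal sketch of the quadratic loss in question; the loss coefficient k is whatever cost constant fits the product, so the value here is arbitrary:

      ```python
      def taguchi_loss(x, target, k=1.0):
          """Quadratic (Taguchi) loss: zero only at the target and growing with
          the squared deviation, even for values that are still in spec."""
          return k * (x - target) ** 2

      # Two values, very different losses (k chosen arbitrarily)
      print(taguchi_loss(10.0, target=10.0))  # on target: 0.0
      print(taguchi_loss(10.4, target=10.0))  # off target: 0.16, a loss even if still within spec
      ```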

      Let’s use gold plating to demonstrate the issue, because it is easiest for people to understand the loss. I was required to plate gold within a specification. Any place within that specification was acceptable to the customer. That doesn’t mean it automatically becomes my target, based on the business I am working for. The most profitable point is the LSL as the mean with a standard deviation of zero. That isn’t going to happen, but it is the ideal target. So operationally, what is the target? It has to be somewhere within the spec, but how do I determine it? I don’t want to get into the shift-versus-no-shift discussion, but it could be something between 3 and 6 standard deviations from the specification. That leaves you suboptimal relative to the Taguchi loss function (the loss imparted to society) but still in control (assuming the process is in control to begin with). The job then becomes reducing the standard deviation and shifting the process toward the LSL. I am not sure at this point why there was no discussion of Cpm (or, for old-timers, Cpt), which is basically capability to hit a target. In most manufacturing situations the difference in target can be invisible to the people on the commercial side of the business, but when it comes to gold the difference shows up very quickly. At the time, my sector VP stated, “I can drive a Lincoln Continental off a bridge every day for what you scrap.” Obviously it left an impression, since I can still see him saying that almost 25 years later.
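      A rough sketch of Cpm, which divides the spec width by six times the RMS deviation from the target; the plating numbers below are invented for illustration:

      ```python
      import math

      def cpm(mean, sigma, lsl, usl, target):
          """Taguchi capability index: spec width over 6x the RMS deviation from target."""
          tau = math.sqrt(sigma**2 + (mean - target)**2)
          return (usl - lsl) / (6 * tau)

      # Hypothetical plating thicknesses with the target set just above the LSL
      print(cpm(mean=2.6, sigma=0.1, lsl=2.5, usl=3.5, target=2.6))  # ~1.67
      print(cpm(mean=3.0, sigma=0.1, lsl=2.5, usl=3.5, target=2.6))  # ~0.40, penalized for drifting off target
      ```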

      Basically, a big part of this discussion that is missing is the effect of variation. When I am using a pre-control chart or just an X-bar chart, I am ignoring the issue of variation. Measures of central tendency tend to be knob variables; shifting a process location should be relatively easy. There is also the effect of heteroscedasticity and homoscedasticity. Unless you are paying attention to the variation, tweaking can have a large number of ancillary effects that you may or may not understand.

      The whole drive around Six Sigma was to stop focusing on the average and to understand the effects of variation. As much as I am an advocate of pre-control for machine setup, there is more to running an efficient process than setup. With today’s technology, even control charts have some serious limitations. Something as simple as bottle filling could be control charted, but at today’s line speeds, in the time it takes to pull samples, make measurements, plot points and react, you will have a disaster on your hands.

      Basically, as technology advances it is so easy to build intelligence into a process that a lot of things have gone the way of the dodo bird. The whole idea of turning your profitability and efficiency over to a chart can be particularly risky. Doing it without thoroughly understanding the process is even riskier.

      Just my opinion.

    • Despite the tease at the beginning, I’m glad to read the conclusion that pre-control charts are generally not recommended. Yet, there are a couple of issues that should be noted:
      1. You state, “The individuals chart is also the most sensitive of the Shewhart charts…”. Actually, the individuals chart is the LEAST sensitive of the Shewhart control charts. The sensitivity of a Shewhart chart in detecting a process shift increases with subgroup size: the larger the subgroup, the greater the sensitivity to detect a process shift (see the sketch at the end of this comment). However, there is a diminishing return with larger subgroups, and this must be balanced with the need for a rational subgroup: larger subgroups increase the chance of a special cause occurring within the subgroup, which would lead to irrational subgroups. Therefore, stick with 3 to 5 observations per subgroup or, for some processes, use individuals data. For individuals data, a Moving Average or EWMA chart would be preferred.
      2. I don’t see how anyone can credibly claim the pre-control chart “protects the customer”, as it’s well-known (thanks to Deming) that using specification limits to control your process will lead to tampering and a resulting increase in process variation. That’s hardly protecting the customer! Rather, while the control chart allows you to predict the process variation and ensure the process is reliably producing output that meets the specifications, the pre-control chart is incapable of prediction, and you are forced to sample 100% of your process output to ensure conformance to specification.
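      To put a number on the sensitivity point, a minimal sketch of the probability that a single subgroup average lands outside 3-sigma-of-the-mean limits after a 1-sigma shift in the process mean, for a few subgroup sizes:

      ```python
      from statistics import NormalDist

      def detection_prob(shift_sigmas, n):
          """P(subgroup mean beyond its 3-sigma limits) after the process mean
          shifts by `shift_sigmas` process standard deviations, subgroup size n."""
          z = NormalDist()
          d = shift_sigmas * n ** 0.5
          return (1 - z.cdf(3 - d)) + z.cdf(-3 - d)

      for n in (1, 3, 5, 9):
          print(n, round(detection_prob(1.0, n), 4))
      # n=1: ~0.023, n=5: ~0.222 -- larger subgroups catch the same shift far more often
      ```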

    • Hi Carl, enjoyed your article very much, and also the comments.

      I researched pre-control charts several years ago because I needed to find a solution for manufacturing operators in an environment where updating paper charts was difficult and where computer access was limited.

      I used a visual similar to your figure 2, but the zones were not set up the same way. My intent was to avoid the situation Paul mentions about tampering: the possibility of someone getting a result near the spec line and retesting until they got a value that was in spec. It might be more accurate to call what I used a Stop Light Chart (green, yellow, red), where you stop cold at the red zone.

      What we did differently was set the red area inside the spec limit on each side, not beyond the spec limit. All zones were therefore within the spec limits. I won’t get into the detail of what we put in place, how many points you had to get before action and what the actions were for each case, but the overall process worked very well for our purpose and prevented us from knowingly producing bad product. I would be happy to post an example, but I am not sure that I can add an image in comments. Kathryn Jansen
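      A rough sketch of that kind of stop-light classification, with the red zones placed just inside each spec limit; the zone fractions and spec values here are invented, not the actual settings described above:

      ```python
      def stoplight_zone(x, lsl, usl, red_fraction=0.10):
          """Classify a measurement into green/yellow/red zones, with the red
          zones sitting just *inside* each spec limit rather than beyond it."""
          red = red_fraction * (usl - lsl)
          if x < lsl or x > usl:
              return "out of spec"
          if x < lsl + red or x > usl - red:
              return "red"     # stop cold and investigate
          if x < lsl + 2 * red or x > usl - 2 * red:
              return "yellow"  # caution zone
          return "green"

      for x in (10.0, 10.7, 10.9):                # hypothetical spec of 9.0 to 11.0
          print(x, stoplight_zone(x, 9.0, 11.0))  # green, yellow, red
      ```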

    • There are several reasons why your point that “Control chart philosophy more closely follows the Taguchi Loss Function” is not true. The Taguchi loss function is based on the premise that any deviation from nominal or target creates a loss; therefore, rather than having specifications, every deviation from nominal should be viewed negatively as something to prevent.
      1. You state that “pre-control folks tend to view any product within specification as being of equal good. All outcomes are considered to be ‘good’ or ‘bad’ and the dividing line is a sharp cliff.” However, using 3-sigma (standard error) limits on control charts is creating exactly this cliff to which you object and contradicts the Taguchi loss function. A point just inside the 3-sigma limits is viewed as “good” while a point just outside is viewed as “bad.”
      2. Any control chart that aims to have stability but not zero deviation from nominal is inconsistent with the Taguchi loss function. Using Xbar or X charts, the center line should equal the nominal. If it doesn’t then it doesn’t matter whether the process is in control—you have on average a loss. Assuming that the center line of the control chart equals nominal, every point not on the center line has created a loss, according to the loss function—even if the process is stable. Equally, the variability charts S, R, and MR and the attribute charts P, NP, c, and u with nonzero control limits show losses even when displaying stability. Thus, these charts are inconsistent with the Taguchi loss function.
      3. Your simulation does not show the case in which a process is slightly out of control and the 3-sigma limits cannot detect it, or are at least less likely to detect it.
      4. Even the other rules, e.g., rule 3, if modified to use 6, 7 or 8 points instead of 9, would be more likely to detect these small but loss-creating changes. In fact, only the first rule (a point beyond the control limits) relies on the 3-sigma control limits. All the other rules can be used with pre-control charts, contradicting the claim that “Pre-control does not detect shifts, drifts and trends with statistical certainty as control charts or run charts do.” (See the sketch at the end of this comment.)
      5. If the control limits are not probability limits but empirical limits (as some people claim), then the claim that control charts are statistically valid while pre-control charts are not, is false or at least not applicable. The only requirement for applying statistics is an assumption of a probability distribution, which can be made with pre-control charts also. Thus, both or neither are statistically valid.
      Thus, it isn’t the type of control chart that makes the chart consistent or inconsistent with the Taguchi loss function. It is the 1) purpose of aiming for stability regardless of the amount of deviation and 2) rules that determine stability regardless of the size of deviations from nominal.
      You might respond by stating that we are not interested in all changes but only the critical few and not the trivial many. That would prove the point that control charts are inconsistent with the Taguchi loss function.
      You might respond by stating that there would be an additional cost of chasing many false alarms. However, since we don’t know how the “loss to society” is calculated (it is merely conceptual), we don’t know 1) how that additional cost is added to the “loss to society” and 2) whether it exceeds the cost of failing to catch and correct the small deviations that did produce losses, which are sometimes financially huge or even fatal.
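      To make the point about the rules concrete, a minimal sketch of two such tests: only the first depends on the 3-sigma limits, while a run-of-points rule needs only a center line and could, in principle, be applied to other chart types:

      ```python
      def points_beyond_limits(points, center, sigma):
          """Rule 1: any point beyond the 3-sigma control limits."""
          return [i for i, x in enumerate(points) if abs(x - center) > 3 * sigma]

      def run_on_one_side(points, center, run_length=9):
          """Run rule: `run_length` consecutive points on one side of the center line."""
          flags, streak, side = [], 0, None
          for i, x in enumerate(points):
              s = x > center
              streak = streak + 1 if s == side else 1
              side = s
              if streak >= run_length:
                  flags.append(i)
          return flags
      ```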

  • I would recommend that you study the CSSBB Primer from the Quality Council of Indiana and similar prep material.

    Here are some links that might be…[Read more]

  • Online training can provide basic knowledge; however, there is no substitute for hands-on application. A blended approach of online and classroom instruction has been found by researchers to be the most effective form of adult education. The next most effective is hands-on instruction, and lastly online only. I understand the logistics may force you…[Read more]

  • Yes, I-MR charts can be used to detect trends. You would have to consider your sampling strategy and measurement system capability.
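    A minimal sketch of how the I-MR limits would be computed for that purpose, using the usual constants for subgroups of size two (d2 = 1.128, D4 = 3.267):

    ```python
    import numpy as np

    def imr_limits(x):
        """Individuals and moving range control limits (Shewhart I-MR chart)."""
        x = np.asarray(x, dtype=float)
        mr = np.abs(np.diff(x))            # moving ranges of consecutive points
        mr_bar = mr.mean()
        sigma_hat = mr_bar / 1.128         # d2 for subgroups of size 2
        center = x.mean()
        i_limits = (center - 3 * sigma_hat, center + 3 * sigma_hat)
        mr_limits = (0.0, 3.267 * mr_bar)  # D4 = 3.267 for n = 2; lower limit is 0
        return i_limits, mr_limits
    ```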

  • Control charts have two general uses in an improvement project. The most common application is as a tool to monitor process stability and control. A less common, although some might argue more powerful, use of […]

  • Carl Berardinelli changed their profile picture 9 years, 6 months ago

  • Carl Berardinelli became a registered member 9 years, 9 months ago