Control Chart 3 sigma Control Limits
 This topic has 27 replies, 13 voices, and was last updated 16 years, 11 months ago by Ken Feldman.


July 15, 2003 at 9:13 am #32792
My question is: why do we set the control limits at 3 sigma in a control chart?
If we want to achieve Six Sigma quality, can it be detected by a control chart with 3 sigma control limits? For example, if the process is operating at 4.5 sigma, will the control chart detect that?
Please help me update my knowledge.
Thanks in advance.

July 15, 2003 at 11:39 am #87943
The limits are set so that you can predict where 99.7% of the material being processed is going to fall.
The Six Sigma philosophy is a methodology for continuous improvement. Achieving 6 sigma (3.4 PPM) is a philosophy. Control charts are today's reality.

July 15, 2003 at 1:37 pm #87949
Control charts do NOT measure the sigma level of an overall process. They measure whether a process is in statistical control, i.e., does the process generally follow a normal distribution. Since ±3 sigma encapsulates 99.73% of the data in a normal distribution, if your process falls within those limits, you have a process that is in statistical control, and now you can begin to apply some of the other tools. The sigma calculation is used to measure the effectiveness of the project on the end results of the process. These are separate measurements, and should always be treated as such.
July 15, 2003 at 1:39 pm #87950
Remember that the control limits on an Xbar chart are the ±3 sigma limits for averages, not individuals. The standard error of the mean is:
s_Xbar = s / √n
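For concreteness, here is a minimal Python sketch of that formula with made-up subgroup data. (Note: real Xbar charts usually estimate sigma from Rbar/d2 or Sbar/c4; the pooled standard deviation used here is just one simple estimate for illustration.)

```python
import math

# Hypothetical subgroup data: 5 subgroups of size n = 4
subgroups = [
    [10.1, 9.8, 10.0, 10.2],
    [9.9, 10.1, 10.0, 9.7],
    [10.3, 10.0, 9.9, 10.1],
    [9.8, 10.2, 10.0, 10.0],
    [10.0, 9.9, 10.1, 10.2],
]
n = len(subgroups[0])

xbars = [sum(s) / n for s in subgroups]
grand_mean = sum(xbars) / len(xbars)

# Pooled within-subgroup standard deviation (one common estimate of sigma)
sigma = math.sqrt(
    sum((x - sum(s) / n) ** 2 for s in subgroups for x in s)
    / (len(subgroups) * (n - 1))
)

# Standard error of the mean: sigma / sqrt(n)
se = sigma / math.sqrt(n)

# Control limits for the Xbar chart: grand mean +/- 3 standard errors
ucl = grand_mean + 3 * se
lcl = grand_mean - 3 * se
print(f"UCL = {ucl:.3f}, CL = {grand_mean:.3f}, LCL = {lcl:.3f}")
```

The key point is that the limits get narrower as the subgroup size n grows, because the chart plots averages rather than individual values.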
Also, control limits are based on historical data and process variation. As your process capability improves, the control limits will need to be recalculated. This may be evidenced by the lack of plotted points near the limits, points avoiding the outer zones, and various other out-of-control conditions. As a result of the reduced variation in your process, the spread of the new ±3 sigma limits will be less than the previous limits.
The ±3 sigma limits on the chart should be somewhat of a reflection of your process Sigma Level. The higher your SL, the tighter the control limits.
Hope you find this useful.
July 15, 2003 at 2:40 pm #87956
Gabriel
You are mixing two different things: stability and performance.
“3 sigma control limits” refers to stability. “Operating at n sigma” refers to performance.
Stability is consistency of behavior over time. The control limits define a zone where you expect a great amount of the population to fall if the process behaves as it was behaving when the control limits were defined. So if you find a point outside those limits, the reasoning is that it is so unlikely that such a point would be delivered by the process behaving as usual that one suspects it was not, and that there is a special cause of variation around that should be identified and eliminated to prevent recurrence (if it worsens the process), or standardized and made part of the standard process so it is no longer a "special cause" but a "common cause" (if that is feasible and improves the process, in which case the control limits must be recalculated to match the new process behavior).

Now, what about how many sigmas apart the limits are? That relates to the previous words "great amount" at the beginning of this paragraph: how "great"? If you use control limits at too few sigmas, almost any small shift in the process behavior will be detected (that's good), but also (because that "great amount" will not be so great) you will have too many points outside the limits just by chance when there is no special cause to address (that's bad). That is called a false alarm. Now, if you use control limits at too many sigmas, virtually the whole population will fall within the limits and there will be no false alarms (that's good), but something very strange must happen to the process to put a point outside those wide limits, so you will miss most shifts in the process behavior and hence most opportunities for improvement (that's bad), in which case, why are you charting the process in the first place?

So the key here is cost/risk management. How costly/risky is it to chart the process, find most special causes, but also have many false alarms? How costly/risky is it to chart the process, be virtually free of false alarms, but miss many of the special causes?
Control limits at 3 sigma were found (and are widely accepted) to be a good balance.
How does all this relate to process performance? It does not. Performance is how well your process meets the specifications. Note that "specification" was not even mentioned in the previous paragraphs, because it has nothing to do with stability. A process operating at 6 sigma means that the process average is 6 sigmas away from the closest specification limit (and virtually no part is out of specs). The process is what it is regardless of whether you chart it or not, or whether you are using 3 sigma control limits or others. Changing the control limits does not change your process performance, and hence does not change the "sigmas" at which your process operates. At best, charting the process will help you improve its performance and verify and monitor the improvement efforts. Also, once you reach an acceptable level of stability, you can feel confident that the process will perform in a consistent way. It is meaningless to say that the process IS 4.5 sigma if its behavior IS NOT consistent over time (i.e., if it is not stable).
By the way, note that if, for example, you used 6 sigma control limits instead of 3 sigma ones, you would not detect and correct most of the process shifts, so in the long term the process would perform worse (operate at fewer sigmas) than if you used 3 sigma limits (sounds ironic, doesn't it?).
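Gabriel's trade-off can be made numeric with a short Python sketch. This assumes a normal, stable process and considers only the single-point-beyond-the-limits rule; the 1.5 sigma shift is just an illustrative example, not a claim from the thread.

```python
import math

def norm_sf(z):
    """P(Z > z) for a standard normal variable."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def signal_prob(k, shift):
    """Probability that one plotted point falls outside +/- k sigma limits
    when the process mean has shifted by `shift` sigmas."""
    return norm_sf(k - shift) + norm_sf(k + shift)

for k in (2, 3, 6):
    false_alarm = signal_prob(k, 0.0)        # alarm with no real change
    catch_shift = signal_prob(k, 1.5)        # catching a 1.5 sigma shift
    print(f"{k}-sigma limits: false alarm rate {false_alarm:.5f}, "
          f"P(detect 1.5-sigma shift on next point) {catch_shift:.4f}")
```

With 2 sigma limits the false alarm rate is several percent per point; with 6 sigma limits a moderate shift is nearly invisible; 3 sigma sits in between, which is exactly the balance described above.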
Hope that helped more than confused.

July 15, 2003 at 2:51 pm #87957
Gabriel
Chris, I fully agree, but just to clarify:
"As a result of the reduced variation in your process, the spread of the new ±3 sigma limits will be less than the previous limits."
Not in terms of "how many sigmas". You set the original limits at ±3 sigma; then your process improves and the process sigma is reduced, so the control limits are now at more than ±3 sigma, and you recalculate them to be ±3 sigma again.
"The ±3 sigma limits on the chart should be somewhat of a reflection of your process Sigma Level. The higher your SL, the tighter the control limits."
Same as before. It's true in terms of units of measurement, such as mm. It is not true in terms of how many sigmas apart the control limits are. They are always at ±3 sigma regardless of the sigma level of the process.

July 15, 2003 at 11:49 pm #87986
Thank you all for the information you have provided. Now I understand: I was making a comparison between two different parameters.
July 21, 2003 at 5:25 am #88153
Jonathon
Just addressing the first question: why are control limits set at ±3 standard deviations?
As others have said, we are not looking at 3 standard deviations of the raw data (except for the individuals portion of an individuals-moving range chart). In fact, we use 3 standard deviations for charts displaying means of continuous data (either the individuals chart or the Xbar chart) and for attribute control charts. For charts displaying ranges, moving ranges, and standard deviations, the control limits are asymmetrical about the mean. So let's stay with those charts that do use three standard deviations.
The question remains: why 3 standard deviations, instead of 2, or 4, or any other number? As best I can tell, it is because 3 standard deviations corresponds to the lowest sum of alpha plus beta decision errors. One error is concluding that a special cause exists when it does not; the other is concluding that common cause prevails when it does not. At any value other than three standard deviations, the sum of those two probabilities increases.
I'm pretty sure that Dr. Shewhart also established the limits for the asymmetric charts based on minimizing decision error. Rumor has it that he was pretty astute.
For purists in the extreme, this means that neither Xbar nor individuals charts should be constructed using non-normal process data. Pragmatists recognize that control charts are fairly robust, even when non-normal data is used. I operate out of the latter camp, and have not been burned. My frustration is with organizations that claim to understand these charts, but relentlessly continue to make bad decisions using them.
I hope this doesn't muddy the waters.

July 21, 2003 at 6:58 am #88156
DANG Dinh Cung
Good morning,
When some event has a very small probability of happening, we first ask why it is happening. It may be statistically normal, because an event never has zero probability, but it is practically sure that something is going wrong.
The probability that a measurement falls outside the [mean − 3 sigma; mean + 3 sigma] bracket is about 0.0027. As this figure is very small, we should wonder whether something is going wrong even if we are still inside the tolerance bracket. In this case, which is attributed to a special cause, we stop the process, search for the cause of the malfunction, repair it, then restart. That is why the upper control limit is set to (mean + 3 standard deviations) and the lower control limit to (mean − 3 standard deviations).
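Under the normality assumption, these probabilities can be checked with a few lines of Python. The 2 sigma "surveillance" limits here are the warning limits mentioned in this post; exact run-rule probabilities vary by rule set, so this is only a sketch.

```python
import math

def norm_cdf(z):
    """P(Z <= z) for a standard normal variable."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

# Single point beyond the +/- 3 sigma control limits
p_beyond_3 = 2 * (1.0 - norm_cdf(3))

# Single point between the 2-sigma warning ("surveillance") limit
# and the 3-sigma control limit, on either side
p_warning_zone = 2 * (norm_cdf(3) - norm_cdf(2))

# Two successive points in the warning zone on the same side
p_two_in_a_row = 2 * ((norm_cdf(3) - norm_cdf(2)) ** 2)

print(f"P(beyond 3 sigma)          ~ {p_beyond_3:.5f}")
print(f"P(in warning zone)         ~ {p_warning_zone:.5f}")
print(f"P(two in a row, same side) ~ {p_two_in_a_row:.7f}")
```

Both events are rare enough under stable operation that seeing one is a reasonable trigger to go look for a special cause.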
Generally speaking, if you detect a rare event (an event with a very small probability of happening), you have to search for its cause instead of carrying on working. For instance: two successive measurements between the surveillance (warning) limits and the control limits, or seven successive sample means increasing or decreasing, etc.
The capability index Cp shows whether you are able to produce very few products outside the tolerance bracket if your process is centered. If it is less than 1.33, the probability of producing nonconformities is high. The bigger it is, the fewer nonconformities you produce. This index is used TO SELECT A PROCESS FOR MANUFACTURING.
The capability index Cpk has the same use when your process is not centered. This value is used while you are producing, TO DECIDE WHETHER TO GO ON WORKING OR TO STOP because of a too-high proportion of nonconformities.
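As a sketch of the two indices (the function name and the spec limits and process values are made up for illustration; the formulas are the standard Cp and Cpk definitions):

```python
def cp_cpk(lsl, usl, mean, sigma):
    """Process capability indices.
    Cp ignores centering; Cpk penalizes an off-center mean."""
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical process: spec 10 +/- 0.6, mean drifted to 10.2, sigma = 0.1
cp, cpk = cp_cpk(lsl=9.4, usl=10.6, mean=10.2, sigma=0.1)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```

Cpk is always at most Cp, and the gap between them shows how much capability is lost to the process being off-center.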
Best regards,
DANG Dinh [email protected]
July 21, 2003 at 12:09 pm #88158
Gabriel
Jonathon:
“For charts displaying ranges, moving ranges, and standard deviations, the control limits are asymmetrical about the mean”
That’s wrong. Look at this example, for subgroups of size 7:
For Range and Moving Range: LCL=0.08*Rbar, UCL=1.92*Rbar, (UCL+LCL)/2=Rbar
For Standard Deviation: LCL=0.12*Sbar, UCL=1.88*Sbar, (UCL+LCL)/2=Sbar
Also for p charts, for example, you use ±3 sigma.
So Shewhart was not "pretty astute" by "establishing limits for the asymmetric charts based on minimizing decision error". He still used ±3 sigma. He just was not so stupid as to use negative control limits for parameters that have an absolute bound at zero; when the 3 sigma limit was negative, it was defaulted to zero. That happens with small subgroups, when the average is already close to zero. You can use the negative limit if you want. Anyway, you will never get a point between zero and that limit, so there will be absolutely no difference.
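Gabriel's n = 7 factors appear to be the standard D3/D4 constants rounded to two decimals. A small sketch, with constants taken from standard SPC tables (D3 is shown as 0 for small subgroups, where the symmetric 3 sigma limit would be negative):

```python
# Standard Shewhart R-chart constants D3, D4 by subgroup size n
D3 = {2: 0.0, 3: 0.0, 4: 0.0, 5: 0.0, 6: 0.0, 7: 0.076, 8: 0.136}
D4 = {2: 3.267, 3: 2.574, 4: 2.282, 5: 2.114, 6: 2.004, 7: 1.924, 8: 1.864}

def r_chart_limits(rbar, n):
    """R-chart control limits: LCL = D3 * Rbar, UCL = D4 * Rbar."""
    return D3[n] * rbar, D4[n] * rbar

lcl, ucl = r_chart_limits(rbar=1.0, n=7)
print(f"n=7: LCL = {lcl:.3f} * Rbar, UCL = {ucl:.3f} * Rbar")
```

Note that for n = 7 the midpoint (LCL + UCL)/2 equals Rbar, illustrating that the underlying ±3 sigma limits are symmetric about the center line; for n ≤ 6 the lower limit has simply been clipped at zero.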
That puts into a different perspective your last paragraph: "For purists in the extreme, this means that neither Xbar nor individuals charts should be constructed using non-normal process data" (where "this means" referred to the previous incorrect comment about Shewhart using "special" limits for asymmetric charts).
Now, why ±3 sigma? Because it works! It was found to be a good balance between "too many false alarms" (when the limits are too narrow) and "never find a signal" (when the limits are too wide). If you find another number of "sigmas" that works better for you, then use it!

July 21, 2003 at 5:22 pm #88171
Jonathon
Gabriel, thanks for pointing out my error regarding the symmetry of control limits on range charts. You're right, I was in error.
I tried to point out that attribute charts do use ±3 standard deviations; I don't know if I stated that poorly. So we agree on that point.
As for whether normally distributed data is essential for control charts: I have heard the purists rant about such a need, and I know that the charts are robust when used with non-normal data. I could not tell whether you agree with that or not.
Finally, I think you and I agree, at least in part, on why ±3 standard deviations work. That "good balance" between false alarms and undetected changes is indeed the same as minimizing the sum of alpha and beta errors.
Where we disagree is on the choice to change the limits. If one is going to change the limits, one has tacitly agreed to incur more of one risk or the other.

July 22, 2003 at 12:39 pm #88200
Gabriel
Jonathon, sorry for the delay. I "lost" a somewhat long answer and didn't feel like typing it all over again yesterday. Well, it was more or less like this:
I had understood your point about attribute charts. I just tried to reinforce the concept by showing that it is just another example of symmetrical limits where the 3 sigma limit is defaulted to zero if negative.
I don't have a final position on when one should be concerned about lack of normality in SPC. What I can say is that I never normalized the charted parameter nor used "special" limits (such as percentile-based limits), even when I sometimes knew beforehand that the distribution was not normal. The standard use of ±3 sigma limits for other parameters known to be non-normal (such as the range) made me feel that that was the natural approach. Also, I never traced a problem to that lack of normality (which can mean that I never had such a problem, or that I failed to make the connection).
About the use of 3 sigma limits to minimize the sum of alpha and beta risks, I find that hard to "buy" for a number of reasons:
1) SPC is not really hypothesis testing. It looks and works like one, but it is not. For example, if you ask me the risk of a false alarm, I would say 100% (sooner or later). So what is the alpha risk?
2) While for each point the risk of a false alarm is constant (for example, 0.135% for one point above the UCL), the risk of missing a signal decreases as the subgroup size increases. So 3 sigma can't be the optimum solution for every case (optimum from your perspective).
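Gabriel's point 2 can be illustrated numerically (normal assumption, single-point rule only; the 1 sigma shift is a hypothetical example). On an Xbar chart, a shift of delta individual-sigmas is worth delta·√n standard errors, so bigger subgroups catch the same shift more easily while the per-point false alarm rate stays fixed:

```python
import math

def norm_sf(z):
    """P(Z > z) for a standard normal variable."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def xbar_detect_prob(shift_sigma, n, k=3):
    """Probability one subgroup mean falls outside +/- k sigma_xbar limits
    after the process mean shifts by shift_sigma individual-sigmas.
    On the Xbar chart the shift equals shift_sigma * sqrt(n) standard errors."""
    d = shift_sigma * math.sqrt(n)
    return norm_sf(k - d) + norm_sf(k + d)

for n in (1, 4, 9):
    p = xbar_detect_prob(1.0, n)
    print(f"n={n}: P(detect 1-sigma shift on next point) = {p:.3f}")
```

The false alarm probability is 0.27% per point for every n, but the detection probability climbs steeply with subgroup size, which is why the same ±3 sigma rule cannot minimize alpha + beta for all cases at once.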
3) The risk of a false alarm (what would be compared to alpha) is quite small. Have you ever used p < 0.00135? Typical values are from 0.1 to 0.01 (10 to 100 times more!). On the other hand, the chance of missing "a not very big change in the process behavior" (what would be compared to beta) is typically quite large. So it doesn't seem to be optimized from your perspective. And you still have to define what "a not very big change in the process behavior" is. Remember that in a hypothesis test (not this case), to define the beta risk you need to define a minimum difference you want to detect. If not, you can't quantify beta, then you can't quantify the sum of alpha and beta, and then you can't minimize that sum.
4) Even if you could quantify all that, you would still have to put the risks in context. In the end, one wants to minimize not the risk but the cost. How costly is the risk of missing a signal, thus increasing the risk of shipping a bad product and missing the opportunity for an improvement? How costly is the risk of stopping the process and making an investigation because of a false alarm? They are not the same for every process of every product of every organization in every circumstance. And anyway, these risks and associated costs are very difficult to quantify in such a way that you could find the "cost" function, differentiate it, and set that derivative to zero in order to find the minimum cost.
As I understand it, 3 sigma control limits were found to work well in a practical way, not derived in a theoretical way, and they have been used with success for the better part of a century now.
To summarize the "normality / lack of normality" and "how many sigmas?" issues, I would say: use the standard SPC control limits unless you have solid data-based evidence that strongly supports that using other limits would be better in that specific case. I have never run into one of these specific cases yet, but I never made a big effort to prove that other limits would be better either.

July 22, 2003 at 2:42 pm #88208
Jonathon
I think we're in a state of agreement on the practical use of control charts. If I understand your points as the rubber meets the road:
We both continue to use control charts even when data display non-normal distributions.
We seldom mess with the default choice of ±3 standard deviations. (Choosing narrower limits increases the likelihood of false alarms; wider limits increase the risk of missing a real process change.)
If the above two points are correct, then we are in accord on the most significant aspect of our chats.
As for the rest of our discourse: if you believe the choice was mostly empirical, and I believe it may have some degree of theoretical underpinnings, that is probably beside the point. If you feel some urgency to continue debating whether or not false alarms and missed special causes comprise formal decision errors, we may want to take the discussion offline. I suspect that it could prove distracting to some who may not really be interested.

July 22, 2003 at 4:10 pm #88219
Gabriel
Yes, we agree on the "how?", but not on the "why?" But the original question was:
“My Question is that why do we set the control limits as 3 Sigma in a control chart?”
I think that saying that you and I agree to use 3 sigma control limits will not be enough for him.
Your original answer to that original question was:
"The question remains: why 3 standard deviations, instead of 2, or 4, or any other number? As best I can tell, it is because 3 standard deviations corresponds to the lowest sum of alpha plus beta decision errors."
As I said, I do not agree with that, but I can be wrong.
Reading the original question and your answer, I find it relevant to discuss this online. If you still want to do it offline: [email protected]. I am still willing to get a better understanding on the subject.0July 22, 2003 at 5:03 pm #88222
Jonathon
I guess we should direct the question to the originator of the topic: have you had enough, or do you want us to continue this discussion through this forum?
July 23, 2003 at 1:42 pm #88252
HP Staber
The control chart was introduced by Dr. W. Shewhart at Western Electric in the 1920s.
He wrote a famous book, "Economic Control of Quality of Manufactured Product" (1931), in which he elaborates a practical and economical way to investigate quality performance (page 275 ff.).
He concludes that using Xbar ± 3 sigma is an economical way to do so.
BTW: data outside the control limits does not mean that the process is out of control. We have no EXACT knowledge of the process! "Suspicious" data is just a signal that it is worthwhile to start looking for a special cause in the process. The absence of "suspicious" data is just an indication that variation is due to common causes.
I have a spreadsheet template on my homepage with which you can establish XmR charts:
http://geocities.com/hpstaber/biz.htm

July 23, 2003 at 1:49 pm #88253
HP Staber
We have NO knowledge about the process. This includes that we do not know, and will never know, whether it is a normal distribution, a p-distribution, or whatever. Search the web for Dr. Deming's famous Red Bead experiment. It should be an eye-opener.
Shewhart was aware of this and proved that it is not necessary to know the type of distribution. Using Xbar ± 3 sigma is still sufficiently good to determine whether one should start looking for special causes or leave the process alone.

July 23, 2003 at 1:52 pm #88254
HP Staber
Go to the source: Dr. Shewhart in his 1931 book, page 275 and following. He explains it rigorously.
See my other post.

July 23, 2003 at 2:25 pm #88256
I sense that there is some confusion over what "±3 sigma" is. The Greek letter sigma is often used to represent the standard deviation, a measure of variation. The word sigma is also used in Six Sigma to describe the sigma level of processes. In this context, sigma is a statistical measure of variation based on defects and opportunities for defects. This universal sigma level allows people to compare processes across the world, no matter what company, business, etc.
The Empirical Rule shows us that, in a normal distribution, 99.7% of the points will fall within ±3 sigma (standard deviations) of the mean. We set our control chart limits at these levels in order to measure variation and identify out-of-control processes.

July 23, 2003 at 6:13 pm #88263
Gabriel
I agree that we cannot have complete knowledge about the process, but I think that by saying "We have NO knowledge about the process" you are going a bit too far.
You can't say that the process is normal, but you can say whether it is very far from normal or close to normal. You can't say that a point does not belong to the stable process distribution, but you can say that it would be very unlikely that it does (and then call it "suspicious"). You don't know the true process average, but you can define a confidence interval for it. And so on.
If we could have no knowledge about the process, then why bother trying to improve it? We wouldn't know how it is now, and we wouldn't know how it gets after the improvement effort.

July 25, 2003 at 8:59 pm #88357
Gabriel
Jonathon, it seems that the originator is not interested. I am. What about that claim about minimizing the sum of alpha and beta risks?
Would you discuss the subject with me, either online or offline? You have my email; I don't have yours.

December 11, 2004 at 2:03 am #112217
spc charter
The central limit theorem allows one to control chart a non-normal distribution and still use control chart rules to determine whether or not the process is in control. Even if the original distribution is not normal (or not known), the central limit theorem practically guarantees a normal distribution of the SAMPLE MEANS (for decent sample sizes).
Three standard deviations was chosen based on the probability of a sample point not falling beyond those limits. In other countries they use slightly different values, but close to 3. Choosing 2 or 4 should only be done based on knowledge of the process.

December 11, 2004 at 3:51 am #112219
Ken Feldman
Nice timing. Good post to a year-old thread. The problem is that you are incorrect. Please check the following link:
http://www.qualitydigest.com/sep96/spctool.html
December 11, 2004 at 11:59 am #112228
Mike Carnell
Darth,
Another one.
Regards

December 11, 2004 at 2:09 pm #112231
Ken Feldman
Mike, who appointed you my PP? That is Posting P i m p. The posts I respond to are those to which I feel I might make a little contribution, or those who need to be pointed in an appropriate direction. Plus the silly threads we love to glom onto, especially from Vinny and Phil, and most recently Mike R, your alter ego, and the Hon. Rev. Graham. I also select the shorter posts since my attention span is limited. The exceptions are things like Ambrose's recent ramblings; they were great. Of course, anything you or Stan write is also devoured, since you guys are my idols of SS. I will take anything offline to my DrDarth66 address as well, like you do. Seems Stan is the only smart one, since he rarely offers his offline email address. I do miss some of the olde posters like Gabriel and Statman. Learned a lot from their responses. Guess they got fed up with some of the BS being floated on the Forum. Now, instead of tossing posts my way, how about some big SS consulting gigs :).
December 11, 2004 at 5:45 pm #112232
Ken Feldman
Thanks for your offline message. I am still interpreting your comment as saying that the CLT is what makes a control chart work, which it doesn't.
If I have misinterpreted that, let me know.

June 18, 2005 at 10:45 pm #121747
With 3 sigma control limits, what percentage of points do we expect to fall inside the control limits when the process is in control?
a. less than 1%
b. 5%
c. 10%
d. almost all of them

June 19, 2005 at 2:20 am #121748
Ken Feldman
Yes. Now do your own test and don't cheat by posting on a public forum.