# UCL LCL

Six Sigma – iSixSigma › Forums › Old Forums › General › UCL LCL

- This topic has 46 replies and 11 voices, and was last updated 12 years, 10 months ago.

- April 19, 2007 at 5:48 am #46768
Can someone please explain the significance of UCL & LCL? I understand what both signify, but I am trying to figure out how these numbers come about. For example, a UCL of 22: why not 25 or 20? How is it determined that if a process exceeds a particular LCL it is out of specs?

Thanks

April 19, 2007 at 9:54 am #154962

Control limits are added to represent the range of expected variation and are approximately +/- 3 standard deviations off the average (99.7% of the points in a set of normally distributed data will fall between the limits). Do not confuse these with specification limits; control limits are based on data and tell you how the process is performing.

Hope this helps
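The 99.7% figure above can be checked with a short simulation. This is a minimal sketch, standard library only; the mean of 20 and standard deviation of 0.7 are invented illustration values, not from the thread:

```python
import random
import statistics

# Simulate in-control, normally distributed process readings and count
# how many fall inside limits placed at mean +/- 3 standard deviations.
random.seed(42)
data = [random.gauss(20, 0.7) for _ in range(10_000)]

centre = statistics.fmean(data)
sd = statistics.stdev(data)
ucl = centre + 3 * sd  # upper control limit
lcl = centre - 3 * sd  # lower control limit

coverage = sum(lcl <= x <= ucl for x in data) / len(data)
print(f"UCL={ucl:.2f}  LCL={lcl:.2f}  coverage={coverage:.4f}")
```

With normally distributed data the coverage comes out close to 0.997, which is the figure quoted in the reply above.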

April 19, 2007 at 10:18 am #154963

Caliandro (Participant):

UCL = CL + k * S

LCL = CL - k * S

Where

CL, center line, is the mean of observations

S is an appropriate estimate of the standard deviation

k is a coefficient chosen by the practitioner, usually but not always k = 3.

Why these limits are chosen to decide whether events are caused by assignable reasons, and why a process reporting observations outside the control limits is considered out of control, is a long story, amply covered in the SPC literature.

Search this site for UCL or Control Limits, in both the discussion forums and the articles.

Regards, Emilio
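Emilio's formulas can be sketched in a few lines of Python. His post leaves the choice of S open; the version below assumes an individuals (XmR) chart, where S is estimated from the average moving range divided by the bias-correction constant d2 = 1.128, and the readings are invented for illustration:

```python
import statistics

def individuals_limits(observations, k=3.0):
    """Return (LCL, CL, UCL) for an individuals (XmR) chart.

    S is estimated from the average moving range divided by d2 = 1.128,
    the usual bias-correction constant for subgroups of size two.
    """
    cl = statistics.fmean(observations)
    moving_ranges = [abs(b - a) for a, b in zip(observations, observations[1:])]
    sigma_hat = statistics.fmean(moving_ranges) / 1.128
    return cl - k * sigma_hat, cl, cl + k * sigma_hat

# Made-up readings, purely for illustration
lcl, cl, ucl = individuals_limits([20.1, 19.8, 20.4, 20.0, 19.9, 20.2, 20.3, 19.7])
print(f"LCL={lcl:.2f}  CL={cl:.2f}  UCL={ucl:.2f}")
```

Other chart types (X-bar/R, p, c, and so on) use different estimators for S, which is why the post calls it only "an appropriate" standard deviation.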

April 19, 2007 at 10:18 am #154964

Six Sigma Shooter (Member):

I respectfully submit you both read Walter Shewhart and how (and why) he developed control limits.

April 19, 2007 at 10:34 am #154965

I’m aware of Walter Shewhart’s accomplishments and also the further work of D. J. Wheeler. I’m also aware of the counter-arguments made by various authors, and have arrived at a practical application. I can only think that my question should instead have been: what size of changes are you looking to control/detect?

April 19, 2007 at 10:43 am #154966

Six Sigma Shooter (Member):

Again, I respectfully submit that there is a difference between knowing of someone’s work and understanding how and why that work was done. Shewhart control limits were established to minimize the economic impacts of process decision-making errors.

April 19, 2007 at 11:00 am #154968

…and by measuring those impacts and errors you are able to manage them. What did Shewhart develop as the measurement of this variation? I do not profess to be an expert on the work of Shewhart, nor the further work of Wheeler (having only read 3 publications); however, I set out to apply a practical answer to the OP’s question.

April 19, 2007 at 12:29 pm #154973

Six Sigma Shooter (Member):

I believe they were set more for the distinction between assignable and common causes and the Type I and II errors associated with them.

April 19, 2007 at 2:53 pm #154980

Jim Shelor (Participant):

I respectfully submit that you demonstrate your superior intellect by answering the question rather than making snide comments about other people’s knowledge.

I am laughing at the superior intellect.

April 19, 2007 at 3:02 pm #154982

Allthingsidiot O (Participant):

Agree.

Most contributors to this site make snide comments about other people’s knowledge (as you said), instead of answering the questions or giving hints, etc.

April 19, 2007 at 3:03 pm #154981

Six Sigma Shooter (Member):

Read on and you might find the answer. Do some research, and you might find the answer. Read Shewhart and you might find that the selection of control limits of +/- 3 has absolutely nothing to do with 99.73% or any association with probabilities.

I respectfully submit that if you see nothing wrong in the statement “Control limits are added to represent the range of expected variation and are approximately +/- 3 standard deviations off the average (99.7% of the points in a set of normally distributed data will fall between the limits)”, you may want to learn, rather than take offence at others’ so-called “superior intellect” and get huffy about it.

April 19, 2007 at 3:14 pm #154983

Six Sigma Shooter (Member):

If trying to point someone in the right direction where they can learn is considered “superior intellect” and “snide,” then I guess I am guilty as charged. However, that was not my intent. I’d rather teach someone to fish than give them a fish. It’s the difference between learning and memorization based upon what they hear from some unknown somebodies on a forum. My apologies to those whom I upset with my efforts.

April 19, 2007 at 3:40 pm #154985

Jim Shelor (Participant):

I was referring to your first reply.

“I respectfully submit you both read Walter Shewhart and how (and why) he developed control limits.”

That was simply nasty and contributed little to the conversation.

April 19, 2007 at 3:43 pm #154986

Caliandro (Participant):

Shooter (and all),

When I read the first post I was tempted to write several lines to help Steve, but because the original question looks more like a conceptual doubt, I humbly decided to direct him to what has been excellently reported by others more qualified than me.

No apologies from my side.

April 19, 2007 at 3:47 pm #154987

Six Sigma Shooter (Member):

My intent to be helpful aside, I guess I cannot be held responsible for how you take things. That’s an issue you have to solve for yourself. Best of luck.

April 19, 2007 at 4:09 pm #154989

Six Sigma Shooter, whilst your comments did appear to belittle the brief explanation I gave to the OP, there is no offence taken; it is typical of forums that you see a post from “nobodies” and judge the world’s exposure from that single post.

When I say more than aware, it pertains to more than something I heard or memorised; to build a judgement off a misunderstood statement is fairly poor. However, you challenged the response to the OP. I’m more than aware of the layers and calculations for computing limits to provide separation of signals and noise. I’m also aware that approximately 80% of the limits used have been +/- 3 SD; if you are asking about the significance of a CL, then a summary statement of the majority should answer the question.

Have you read Wheeler’s book, “Understanding Variation: The Key to Managing Chaos and Understanding Statistical Process Control”?

As you disagree so strongly, please submit the reasons for which you disagree. And again, I reiterate: answer the questions practically.

Many thanks

April 19, 2007 at 4:18 pm #154990

Six Sigma Shooter (Member):

The OP’s posting shows a fundamental lack of understanding of control limits, their use and what they do, as do the first eleven words of your reply to his question. As for the rest of it: already answered by my subsequent posts.

April 19, 2007 at 4:38 pm #154991

Jim Shelor (Participant):

Six Sigma Shooter,

I just went back and reviewed about 100 of your posts.

As hard as it is to admit, I misjudged you. In my very opinionated estimate, the probability that you will make an inappropriately snide remark is approximately 3% :-).

I recognize my sample size is too small to make a statistically significant judgement.

Please accept my apologies for my snide remark.

Respectfully,

Jim Shelor

April 19, 2007 at 6:11 pm #154996

I’ve pondered a myriad of responses to these posts. Whilst you say your previous posts have answered the rest, I beg to differ. One part of me wants to quantify and dissect the exact statement made; the other part tells me not to bother, as it is neither worth the bother nor the time.

Steve, I hope you have had some value out of this.

Regards

April 19, 2007 at 9:29 pm #155011

Six Sigma Shooter (Member):

Jim,

No worries, mate. Accepted and thanks. I’ve had my moments – we all do. Mine usually come after a 12 hour time change and crossing the international date line. :-)

Shooter

April 19, 2007 at 10:11 pm #155013

Hello Shooter,

I submit to you that the basis of using control charts has almost everything to do with probabilities. Start first with the most used measure of central tendency, the average: it assumes a 50/50 probability. Interestingly enough, the special cause rules Shewhart utilizes are also based on probabilities. That is why the UCL/LCL are 3 standard deviations from the average. We know that inherent in the calculation of the third standard deviation is the statistical fact that 99.73% of everything should normally fall between the UCL/LCL. Otherwise you have broken a law of probability, and that is an indication you have special cause variation in your process performance. Yes? That is partially what makes using control charts the most economical method by which to gauge and judge performance.

Does anyone agree?

Helper

April 20, 2007 at 12:00 am #155015

Six Sigma Shooter (Member):

Shewhart, Deming, Wheeler and I disagree with you. Here’s a link that does a far better job of explaining the issue than any feeble attempt I might make on this forum.

http://www.spcpress.com/pdf/Wheeler_Neave.pdf

April 20, 2007 at 8:38 am #155023

Thanks for posting that link; a good collection of excerpts from publications that works well in explaining your challenge.

I previously mentioned that in 80% of cases the method posted is more than sufficient. Practical experience raises the age-old question of “what am I trying to achieve by doing this?”, clustered amongst the other variables of the corporation you are working for, and thus dictates what is practically vs. statistically significant.

Despite the documented refresher, I still stand behind my original post, although with hindsight I should have been clearer on the application and on questioning what you actually want out of the tool.

Regards

April 20, 2007 at 9:40 am #155027

Six Sigma Shooter (Member):

To each their own, I guess. We must agree to disagree. If you have made the conscious decision to follow the probabilities approach to control charts, understanding its limitations, that is one thing. To teach others seeking knowledge and understanding of the concepts, one must teach the whole story and give enough information so that the student can then make their own decision in a fully informed manner.

Your answer to the OP, in my opinion, explains the probabilities approach (which you have accepted and support), thus limiting the full potential and power of control charting techniques, relegating them to a monitoring role and not to a continual improvement role within the organization, as has been well stated by Shewhart, Deming, Wheeler and Neave.

April 20, 2007 at 10:18 am #155029

Bower Chiel (Participant):

Hi Everyone

May I refer those interested in this discussion to another author – Professor William Woodall of Virginia Tech. His paper “Controversies and Contradictions in Statistical Process Control”, which he presented in 2000, is available at:

http://www.asq.org/pub/jqt/past/vol32_issue4/qtec-341.pdf

My copy is well thumbed and coffee-stained from my efforts to better understand control charts. I find the distinction between Phase 1 and Phase 2 applications very helpful. He refers to Phase 1 as “use of a control chart on a set of historical data to determine whether or not a process has been in statistical control” and to Phase 2 as “use prospectively with samples taken sequentially over time to detect changes from an in-control process”. He has a section on control charting and hypothesis testing and another on the role of theory. In the latter he states that “to first use a control chart in practice, however, no assumptions of normality or independence over time need to be made”.

Well worth reading!

Bower Chiel

April 20, 2007 at 1:08 pm #155030

Helper,

I agree that probability plays a big role in control chart design. Whether you specify the sigma multiplier or you back-calculate limits based on a Type I error rate of, say, .001, probability plays an important role.

The type I error rate for the 3 sigma approach is .0027. This means that even if the process remains in control, a point could plot beyond the control limits 1 time out of 370. Control charting also has a strong similarity to hypothesis testing. The ability to detect a shift of a certain size can be quantified with the use of an OC curve. The idea is to manage the risks effectively. (Am I over-rejecting the process or am I missing signals when a critical variable encounters a change?)
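The 1-in-370 figure in the paragraph above can be reproduced directly from the normal distribution. A minimal sketch using only the standard library (the normal CDF is built from the error function, so no SciPy is needed):

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Two-sided Type I error rate for 3-sigma limits on normal, in-control data
alpha = 2.0 * (1.0 - normal_cdf(3.0))
in_control_arl = 1.0 / alpha  # average run length between false alarms

print(f"alpha = {alpha:.4f}, ARL0 = {in_control_arl:.0f}")  # alpha ~0.0027, ARL0 ~370
```

So even a perfectly stable process will, on average, plot one point beyond the 3-sigma limits roughly every 370 points, exactly as stated above.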

The biggest issue I have with SPC is that we usually skip the process characterization piece! We miss the importance of rational subgrouping. We don’t study the distribution of means to see if the central limit theorem holds true. I suppose my concerns are not with the SPC itself, but the things that are skipped prior to implementing effective SPC.

HACL

April 20, 2007 at 1:12 pm #155031

Thanks for all the replies to my question. Not sure, however, if my original question was answered; perhaps there isn’t an answer. I have been doing research and have been unable to find one. I guess a fair response is that it doesn’t matter. Originally I wanted to find out the significance of the 3 standard deviations that make up the UCL and LCL. How do we come to the conclusion that if a process exceeds these limits it is out of spec? Why not 2 standard deviations or 4? A couple of posts stated the 99% figure, which makes sense, but I believe this was challenged. Anyway, thanks again.

I would like to make a comment on the tone of numerous posts. I think it is only fair that responses are made with the assumption that research and web searches were already done. It seems we have a bunch of insecure people who will take advantage of anyone in a forum given the chance. I don’t need responses stating to look it up, or do your own research, or do your own homework. My post is not directed to you. It is directed to the ones who are willing to offer some guidance, support, and knowledge.

Thanks

April 20, 2007 at 1:42 pm #155033

Steve, sorry that this turned into such a debate… I hope that you can find some value in the responses and that some of your questions were answered.

Best,

April 20, 2007 at 2:28 pm #155039

Steve, I think your question has been answered in the posts — the confusion is that you ask, “How do we come to this conclusion that if a process exceeds these limits it is out of spec?”. The control limits have nothing to do with that specific decision.

Not going through all of the posts here, but understanding whatever I say from now on is subject to criticism . . . the control limits help in decision-making about the process. Again, ignoring the potential criticism, a process that is in control is “semi”-predictable into the future within the limits of the LCL & UCL. If the control limits are at +/- 3 standard deviations, then the predictability is at that 99+%. At this level, there is a very good chance that a process result within the control limits is from the same population (e.g., the process output can be characterized by the same average and standard deviation) as represented by the previous results. The predictability of a process is important because, as a process manager, when the process becomes ‘unpredictable’, you no longer have control over the process. Likewise, the output of the process is no longer predictable to the customer. When a process becomes unpredictable, it is no longer from the same population as previous results; it and future results will be characterized by a different, but unknown, average and standard deviation; and you cannot even predict future results, because the process may change again. With respect to Wheeler, you are now managing chaos.

The control limits are decision-making tools to help understand if the process is stable. With the first control chart rule, when the process output exceeds a control limit, the process manager must take action — investigate the cause, and adjust the process if the cause for the process shift is identified (or correct the cause). If the control limits were set at +/- 2 standard deviations, there would be a much higher frequency of times when the process manager would have to take action. If the limits were set at +/- 4 standard deviations, the manager would take action at a lower frequency. Thus the cost-effectiveness of +/- 3 vs. +/- 2 (higher cost due to more oversight by the process manager) or vs. +/- 4 (higher cost due to the potential of more variable output to the customer).
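The trade-off described above can be put into numbers. A small sketch comparing how often an in-control, normally distributed process would trigger a false alarm at limits of +/- 2, 3 and 4 standard deviations:

```python
from math import erf, sqrt

def false_alarm_rate(k):
    """Chance that an in-control, normally distributed point plots
    outside centre +/- k standard deviations (two-sided).
    Equals 2 * (1 - Phi(k)) = 1 - erf(k / sqrt(2))."""
    return 1.0 - erf(k / sqrt(2.0))

for k in (2.0, 3.0, 4.0):
    rate = false_alarm_rate(k)
    print(f"k={k:.0f}: false alarm roughly 1 in {1.0 / rate:,.0f} points")
```

At +/- 2 sigma a stable process cries wolf about every 22 points; at +/- 3 sigma about every 370; at +/- 4 sigma only every 15,000-odd points, by which time real shifts are being missed. This is the economics behind the choice of 3.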

The control chart represents the process — it is sometimes called the ‘voice of the process’. A process does what it does, regardless of the specification. The specification is the ‘voice of the customer’ — this is what the customer wants. The way to compare the voice of the process vs. the voice of the customer is through process ‘capability’.

April 20, 2007 at 3:39 pm #155047

Steve,

To reiterate my point, the choice of limits will help to minimize Type I and Type II errors.

3 sigma limits would give one false alarm about every 370 points if the process is in control. The idea is to have the chart signal to you at the correct time, and allow you to react to special cause events.

The typical alpha risk level for hypothesis testing is .05. One might also ask where this came from and why it is the most common choice. I think your best bet is to learn about alpha and beta risks (Type I and II errors), OC curves, and the statistical basis of control charts.

Statistical Quality Control by Montgomery is a good resource, and there are several other good choices.

Good luck,

HACL

April 23, 2007 at 5:47 am #155100

Now that’s what I am talking about! Excellent reply, and I really do appreciate your help and everyone else’s. I was having trouble stating the question originally, and I think this is what brought on a lot of the confusion.

Thanks

April 23, 2007 at 5:52 am #155101

This helps a lot. You have helped with my question and cleared up a lot for me. Much of the reading tells you the hows but rarely the whys. I will start reading up on alpha and beta risks as well.

Thanks

April 23, 2007 at 10:40 pm #155153

Hello Six Sigma Shooter,

Please allow me to respectfully disagree with you regarding your suggestion that Shewhart and Deming agree with my assertions. It is from Shewhart’s 1926 publication and Deming’s subsequent writing that I have arrived at my conclusions. Their work provides the basis for the practical application of control limits. I am not too familiar with Wheeler’s contentions, but from some of his students/followers who have responded to this website, I would just say that you would need to do the math without any help from any software package, and the understanding will become very clear to you.

I will, however, take the time to read the Wheeler pdf and respond to you in due course.

Helper

April 23, 2007 at 11:07 pm #155154

Six Sigma Shooter (Member):

Hey Helper,

I’ll wait until you read the article by Wheeler and Neave before giving further comment. I am interested in hearing your take on the article.

Regards,

Shooter

April 23, 2007 at 11:09 pm #155155

OK,

I read Wheeler’s article. Interesting how he takes license and draws conclusions based on assertion. But here is the big question: why was the 3 sigma limit most economical as a control limit? Did the empirical evidence Shewhart studied merely confine itself to the laws of probability on which classical and modern statistical inference is based today? Is that why we are safe to assume that “within certain limits” we can expect a process to perform in such a manner that it is free of any culprit to which we can attribute a disruptive influence? Here is a question for Wheeler: “Should the method be called ‘I-guess process control’ or ‘statistical process control’?”

I will be waiting on your response….

Helper

April 23, 2007 at 11:27 pm #155156

Six Sigma Shooter (Member):

I think Wheeler and Neave are quite clear and specific, not taking license from assertion, but quoting both Shewhart and Deming on the subject: the +/- 3 sigma control limit selection had no basis in probabilities, and they state their position for saying so. You can try to back into some after-the-fact statistical reasoning behind their selection, but the guys who developed the control charts (Shewhart and Deming) are very clear about why the limits were selected and why probabilities cannot be used in association with the control chart.

To answer your question: they were selected as most economical because Shewhart’s studies identified them as such, not because of any assignment of probabilities to them. I believe the article and Shewhart’s and Deming’s writings are also quite clear on this point.

April 23, 2007 at 11:40 pm #155157

Shooter,

You make a very interesting point: the reasoning behind the selection was based upon the fact that empirical evidence supported the decision. Just to go a bit deeper: why did the empirical evidence support it? Wheeler brought up the flipping of a coin and the role that chance plays in the outcome. Without any intervening influence, randomness dictates that the odds of heads or tails are 50/50. I think we all agree with that. Let’s take that assertion and put it into the context of a control chart. If the mean is the measure of central tendency on that chart, as opposed to the median or the mode, then the probability of detecting a “shift” under the special cause rule is 0.5 to the power of 8. The empirical evidence supports that an outcome of this sort suggests the influence of an unstable force, or assignable cause variation. The laws of probability concur with that conclusion, in that the odds of 8 consecutive points falling above or below the mean are 0.0039.
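The arithmetic behind the .0039 figure is a one-liner. Note it is the probability for one specified side of the centre line; allowing a run on either side doubles it:

```python
# Under a fair 50/50 split around the centre line, the chance that 8
# consecutive independent points all fall on one given side is 0.5 ** 8.
p_one_side = 0.5 ** 8           # 0.00390625, the ~.0039 quoted above
p_either_side = 2 * p_one_side  # a run above OR below: 0.0078125

print(p_one_side, p_either_side)
```

Either way, a run of 8 is rare enough under pure chance that the run rule treats it as a signal rather than noise.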

Interested in your thoughts…..

Helper

April 24, 2007 at 12:10 am #155158

Six Sigma Shooter (Member):

Your assertion requires that you know the true distribution, mean and standard deviation of the data – you do not – all you have is an estimate. I also believe that Shewhart, Deming and Wheeler are quite clear on this point. As Shewhart and Deming stated, perfect world scenarios required for the application of the probability approach to control charting do not exist in the real world, therefore they cannot be applied to control charts. To quote Shewhart: “Some of the earliest attempts to characterise a state of statistical control were inspired by the belief that there existed a special form of frequency function f and it was early argued that the normal law characterised such a state. When the normal law was found to be inadequate, then generalised functional forms were tried. Today, however, all hopes of finding a unique functional form f are blasted.”

April 24, 2007 at 5:37 am #155162

Hi Shooter,

Interesting thought. Please allow me to provide some clarity. My assertion does not require us to know the true distribution, nor that of the population (if you believe there to be a difference). What we have is a sampling distribution, and because of the laws of probability and the central limit theorem, we know that we can approximate (or estimate) the distribution of the population. That is what makes sampling, rational subgrouping and SPC the powerful tools they are. They enable us, through samples, to approximate the attributes of the work products of the population based on what the sample yields to us. Without the laws of probability and the CLT, we are unable to do so.
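The central limit theorem claim above can be illustrated with a quick simulation. This is a sketch with invented parameters, assuming a deliberately non-normal (exponential) process; the CLT predicts the subgroup means will spread out roughly like sigma / sqrt(n):

```python
import random
import statistics

# Exponential process with rate 1: population mean 1, population sd 1.
random.seed(7)
n, num_subgroups = 25, 4000
subgroup_means = [statistics.fmean(random.expovariate(1.0) for _ in range(n))
                  for _ in range(num_subgroups)]

observed = statistics.stdev(subgroup_means)
predicted = 1.0 / n ** 0.5  # sigma / sqrt(n) = 0.2

print(f"observed sd of subgroup means {observed:.3f} vs predicted {predicted:.3f}")
```

The observed spread of the subgroup means comes out close to 0.2, even though the individual values are strongly skewed; this is the rational-subgrouping argument in miniature.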

A thought: is it that statisticians try to back into the findings of Shewhart, or did the findings of Shewhart merely confine themselves to the laws of probability that govern the Gaussian distribution, and hence the basics of the laws of physics? Which came first, the chicken or the egg?

I am interested in your response.

April 24, 2007 at 8:09 am #155166

Six Sigma Shooter (Member):

At this point, I will leave you with these final thoughts.

In one of your initial posts, you said that you’ve read Shewhart and that, from his publications, your understanding is that probabilities had everything to do with the setting of +/- 3. I have presented you with evidence that both Shewhart (the creator of control charting) and Deming (his colleague and friend) stated and well documented throughout their writings that the probability approach to control charts doesn’t work, nor was it used in selecting +/- 3.

I have provided you with an article by Wheeler and Neave that explores this issue, likewise using the writings of Shewhart and Deming to explain it. I have supported my position with evidence from the creator of control charts and the research of others who have done likewise.

Your counter has been to basically ignore all of it and to present your own theories which fly in the face of the evidence. Even in your own responses, you give the very reasons why the probability approach to control charts doesn’t work, yet you refuse to see it. This is the essence of the debate on this issue that has gone on for years and I am sure that it will continue long after you and I have assumed room temperature.

If you are truly interested in understanding this issue, there are many papers, books and articles that you can read and explore. I have provided you with but a few examples to support my position. You might call Fordham University and talk with Dr. Joyce Orsini on the topic (Director of the Deming Scholars MBA Program and Board President of the Deming Institute), or you can delve into the archives of Deming’s work through the W. Edwards Deming Institute. At this point, however, we must agree to disagree and leave it at that. I wish you well in your search for profound knowledge.

April 24, 2007 at 4:39 pm #155192

I have very much enjoyed our conversation. The article written by Wheeler and Neave is very interesting. This conversation has led me to further research to gain insight into the reasoning and conclusions of the article and the assertions you cite. And it has led me to answer the question, “What did both Deming and Shewhart mean by the term Statistical Control/Stable process?”

In Deming’s book, “The New Economics”, Chapter 8, Shewhart and Control Charts, Deming notes that “false signals” are possible when using control charts (pp. 176–177). He states, “It is possible that a control chart may fail to indicate existence of a special cause when one is actually present (Type II Error); it may send us scouting to find a special cause when there is none (Type I Error).” His next quote you will find interesting: “It is wrong (misuse of the meaning of a control chart) to suppose that there is some ascertainable probability that either of these false signals will occur. We can only say that the risk to incur either false signal is very small.” He further goes on to state that, “It is a mistake to suppose that a control chart furnishes a test of significance – that a point beyond a control limit is significant.” I think this serves your thinking. Yes?

Herein lies the argument. Deming was a statistician who led the census to its first use of sampling to estimate the US population based on statistically valid sample sizes. As you know, the level of confidence of not producing a Type I or Type II Error is typically 95% (significance testing). Deming’s conclusion is that a control chart cannot deliver that 95% level of confidence or 95% probability of committing the Type I or Type II error. The alpha and p-value of a control chart are unknown. Furthermore, on page 175 Deming cites that he and Dr. Shewhart agree that if all points fall within the control limits over a long period, assume that variation is random. To any statistician the word “random” is analogous to “chance”, like a 50/50 chance. In other words, the variation is probably due to common causes, so it would be uneconomical for us to go scouting for special causes when one probably does not exist. Hence the process is in statistical control. What level of confidence do we have that it is in statistical control? According to Shewhart and Deming, Wheeler and Neave… that is unknown.

Hope this helps……

Helper

April 24, 2007 at 4:41 pm #155193

An excerpt from an enlightening conversation between Helper and Six Sigma Shooter. I encourage all others to respond accordingly.

=================================================

(The excerpt repeats, verbatim, the text of #155192 above.)

Helper0April 24, 2007 at 4:50 pm #155194I trust you have found our conversation as useful as I have. You have extended a professional courtesy seldom found on this site.

Correction to part of my response:

Herein lies the argument. Deming was a statistician who led the census to its first use of sampling to estimate the population based on statistically valid sample sizes. As you know, the level of confidence of one not producing a Type I or Type II Error is typically 95%. Deming’s conclusion is that a control chart cannot deliver that 95% level of confidence or 5% probability of committing the Type I or Type II error. AND on page 175 Deming notes that he and Dr. Shewhart agree that if all points fall within the control limits over a long period, assume that variation is random. To any statistician the word “random” is analogous to “chance,” like a 50/50 chance. In other words, the variation due to common causes is probable, so it would be uneconomical for us to go scouting for special causes when one probably does not exist.
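[Editor's aside, not part of the original thread: the arithmetic behind the “very small” risk Deming mentions can be sketched numerically. The snippet below assumes an idealized, stable, exactly normal process, which is precisely the assumption Deming and Wheeler caution can never be verified in practice, so the numbers are nominal, not actual probabilities for any real process.]

```python
# Sketch only: assumes an idealized, stable, exactly normal process,
# the very assumption Deming says we cannot verify for real processes.
import math

def tail_beyond(k):
    """P(|Z| > k) for a standard normal Z, via the complementary error function."""
    return math.erfc(k / math.sqrt(2.0))

alpha_chart = tail_beyond(3.0)   # nominal false-alarm risk at 3-sigma limits
alpha_test = 0.05                # conventional significance-test alpha

print(f"Nominal P(point beyond 3-sigma limits) = {alpha_chart:.4f}")      # ~0.0027
print(f"Average run length between false alarms = {1 / alpha_chart:.0f}")  # ~370
print(f"Conventional test alpha = {alpha_test}")
```

Under this idealized assumption the nominal risk is roughly 3 in 1000, far below the 5% alpha of a conventional significance test, which is one way to read Deming's remark that the chart is not a test of significance.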

0April 24, 2007 at 4:55 pm #155195I like your point very much and I agree with your concern. For those trapped in the Wheeler argument, I would like to amend my postings and conversations with the following:

An excerpt from an enlightening conversation between Helper and Six Sigma Shooter. I encourage all others to respond accordingly.

=================================================

I have very much enjoyed our conversation. The article written by Wheeler and Neave is very interesting. This conversation has led me to further research to gain insight into the reasoning and conclusions of the article and the assertions you cite. And it has led me to answer the question, “What did both Deming and Shewhart mean by the term Statistical Control/Stable process?”

In Deming’s book, “The New Economics,” Chapter 8, Shewhart and Control Charts, Deming notes that “false signals” are possible when using control charts (pp. 176-177). He states, “It is possible that a control chart may fail to indicate existence of a special cause when one is actually present (Type II Error); it may send us scouting to find a special cause when there is none (Type I Error).” His next quote you will find interesting: “It is wrong (misuse of the meaning of a control chart) to suppose that there is some ascertainable probability that either of these false signals will occur. We can only say that the risk to incur either false signal is very small.” He further goes on to state that, “It is a mistake to suppose that a control chart furnishes a test of significance – that a point beyond a control limit is significant.” I think this serves your thinking. Yes?

Herein lies the argument. Deming was a statistician who led the census to its first use of sampling to estimate the US population based on statistically valid sample sizes. As you know, the level of confidence of one not producing a Type I or Type II Error is typically 95% (significance testing). Deming’s conclusion is that a control chart cannot deliver that 95% level of confidence or 5% probability of committing the Type I or Type II error. The alpha and p-value of a control chart are unknown. Furthermore, on page 175 Deming notes that he and Dr. Shewhart agree that if all points fall within the control limits over a long period, assume that variation is random. To any statistician the word “random” is analogous to “chance,” like a 50/50 chance. In other words, the variation due to common causes is probable, so it would be uneconomical for us to go scouting for special causes when one probably does not exist. Hence the process is in statistical control. What level of confidence do we have that it is in statistical control? According to Shewhart and Deming, Wheeler and Neave…that is an unknown.

Hope this helps……

Helper

0April 24, 2007 at 8:46 pm #155208Thanks for the great article from Wheeler and Neave. I have referred to Wheeler’s work many times, and seen Wheeler contend that control charts do not rely on normal distributions. Now I have a much better understanding of the source.

However, I have to agree with previous posts concerning the concept of ‘probability’ in relation to the control chart. Before reading the article, and based only on the posts, my thought was that Shewhart’s work was independent of probability. However, I think that is subject to interpretation. Both Deming and Shewhart were skilled statisticians, and understood the ‘probability’ significance of 3-sigma limits vs. 2- and 4-sigma limits for normal distributions. And the article references the understanding of Error Types. The article does state, page 4, that “the calculations that show where to place the control limits on a chart have their basis in the theory of probability”. To me, because Shewhart identified ‘3-sigma limits’, it indicates that he was trying to keep his method on a statistical, even probability, foundation. I think the thesis of the article is that if statistical rigor is applied to the Shewhart method, it would be considered rather weak, because real processes are not always stable, normally distributed populations unaffected by assignable causes. But instead of throwing away the tool, it is still valuable and useful for making decisions; thus, the control limits and the method have a strong economic, rather than statistical, basis. I get the sense that Shewhart and Deming were both defending the ease of use and universal application of the tool in the face of statistical rigor.
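[Editor's aside, not part of the original thread: the “3-sigma vs. 2- and 4-sigma” comparison above can be made concrete with a short, purely illustrative calculation. It assumes exact normality, which, as the article argues, real processes need not satisfy; the rates are nominal.]

```python
# Illustrative only: nominal false-alarm rates under an assumed normal,
# stable process. Real processes need not satisfy this assumption.
import math

for k in (2.0, 3.0, 4.0):
    p_false = math.erfc(k / math.sqrt(2.0))   # P(|Z| > k), both tails
    arl = 1.0 / p_false                       # average points between false alarms
    print(f"{k:.0f}-sigma limits: false-alarm rate {p_false:.5f}, ARL about {arl:.0f}")
```

Roughly: 2-sigma limits cry wolf about once every 22 points, 3-sigma about once every 370, 4-sigma almost never but at the cost of missing real signals. Shewhart's choice of k = 3 reads as exactly the economic balance the article describes, not a derived significance level.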

Again, great article and insight into the method, but after reading it, I think that the previous posts referring to probabilities are still on target. The reason is stated on page 3: “Exact mathematical methods are both easier to teach . . .” The original poster did not have a firm grasp of statistical theory, and needed to understand probability to grasp the significance of the control limits. (Even Wheeler shows a line of ‘normal distribution’ plots lined up on a control chart on the cover of one of his books . . .) It is kind of like teaching someone that atoms are round, look like planets rotating around the sun, follow momentum and mass laws, etc., and after you grasp all that, you find out that it was only a way to introduce you to quantum theory.

Thanks again for the article.0April 25, 2007 at 6:32 pm #155261

50 years ago?Participant@50-years-ago?**Include @50-years-ago? in your post and this person will**

be notified via email.Hey Helper,

Sorry it took me a while to get back to you – been kinda busy around here.

I have also enjoyed the dialogue and found it refreshing. As you zeroed in on in your last post, “What level of confidence do we have that it is in statistical control? According to Shewhart and Deming, Wheeler and Neave…that is an unknown.” It is also the unknown and unknowable that keep us from applying the probability approach to control charts. All we can do is gain an estimate of the true population, but the dynamic nature of manufacturing processes defies that approach. This is what Shewhart learned in his studies. After several attempts to apply distributions to control charts, he found they could not be used when working from sample data, since he could never know the true population.
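[Editor's aside, not part of the original thread: the point that limits are only estimates from sample data can be sketched with a small example. The data below are made up purely for illustration; the 2.66 factor is the standard XmR (individuals chart) constant 3/d2, with d2 = 1.128 for moving ranges of size 2, the construction Wheeler favors.]

```python
# Sketch: control limits estimated entirely from sample data, with no
# knowledge of the "true" population. Data are hypothetical.
data = [10.2, 9.8, 10.5, 10.1, 9.7, 10.3, 10.0, 9.9, 10.4, 10.2]

mean = sum(data) / len(data)                                  # center line
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]  # successive |differences|
mr_bar = sum(moving_ranges) / len(moving_ranges)              # average moving range

ucl = mean + 2.66 * mr_bar   # 2.66 = 3 / d2, d2 = 1.128 for n = 2
lcl = mean - 2.66 * mr_bar

print(f"CL={mean:.3f}  UCL={ucl:.3f}  LCL={lcl:.3f}")  # CL=10.110  UCL=11.174  LCL=9.046
```

Sampling the same process again would yield different limits, which is exactly the point: the limits are estimates of the process's behavior, not parameters of a known distribution.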

Very much appreciate your input. Take care and best wishes,

Shooter0 - AuthorPosts

The forum ‘General’ is closed to new topics and replies.