Control Chart Patterns (novice question)
 This topic has 41 replies, 21 voices, and was last updated 11 years, 11 months ago by Dharma Bum.


October 30, 2003 at 3:53 pm #33739
AlStats
Hi everybody,
I think this should be an easy question for the experts, so any help is appreciated.
When you have a run of 7 or 8 points on either side of the center line in a control chart, we should call this a ‘lack of control’ of the process. My question is:
Why 7 or 8 points and not 6 or 9 points?
Thanks in advance for your help
AlStats

October 30, 2003 at 4:10 pm #91823
Hi AlStats,
Assuming a stationary process, the chance for a data point to fall on one given side of the center line is p = 1/2.
What, then, is the chance of finding n points on the same side? It is P = p**n.
When you enter n = 7, the chance of finding 7 data points at random on the same side is P = 1/128 < 1%.
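binomi’s arithmetic can be checked with a couple of lines of Python (a sketch; the function name is my own):

```python
def run_probability(n: int, p: float = 0.5) -> float:
    """Chance that n consecutive independent points all fall on one
    given side of the center line, where each point lands on that
    side with probability p (1/2 for a stationary process)."""
    return p ** n

for n in (6, 7, 9):
    print(n, run_probability(n))
# 6 0.015625     -> about 1.6%, still above 1%
# 7 0.0078125    -> 1/128, below 1%
# 9 0.001953125  -> 1/512, about 0.2%
```

A run of 6 can still happen by chance more than 1% of the time, while 7 drops under 1%; 9 buys even more certainty at the cost of a slower signal.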
So you can be >99% sure that this event (7 data points found on one side) is systematic, not random. The answer for n = 6 or n = 9 should be obvious now.

October 30, 2003 at 5:20 pm #91825
Heebeegeebee BB
It’s the Western Electric Rules, baby!
Rule 5: Eight points in a row fall on one side of the mean.
Check it out:
http://www.linkllc.com/WesternElectric.htm

October 30, 2003 at 6:21 pm #91826
What are the “official” Western Electric rules?
It seems that different companies have different rules, but they all call them the “Western Electric” rules.
For example, for points on one side of the mean, I’ve seen 7, 8 and 9. The same goes for trends (increasing or decreasing); in the link from the previous post, trends are not even one of the rules. Also, for points in Zone A (1 sigma at each side of the central line) I’ve seen different numbers, and different criteria for “how many points out of how many in a row” in Zones B or beyond, or in Zone C, on one or either side of the average.
Could someone who has a reference to the original source post the “official” Western Electric Co. rules?

October 30, 2003 at 7:34 pm #91829
Mario Perez-Wilson
There was a carriage-return problem with the previous post. The Western Electric Tests are the following:
For the Upper half of the control chart:
A single point above the UCL
Two out of three consecutive points in Zone A or above
Four out of five consecutive points in Zone B or above
Eight consecutive points in Zone C or above
For the Lower half of the chart:
A single point below the LCL
Two out of three consecutive points in Zone A or below
Four out of five consecutive points in Zone B or below
Eight consecutive points in Zone C or below
If you want to know how to derive these tests, let me know. They are based on probabilities from the normal distribution.

October 30, 2003 at 9:00 pm #91837
Heebeegeebee BB,
Thanks; a different way to view the same story. When you want to derive the WE rules, you’ll do calculations as I wrote about, or visualize it with a probability tree. Just basic stochastics, no myth ;)
binomi

October 30, 2003 at 9:11 pm #91838
So the following OOC criteria are not from Western Electric, even though they are usually attributed to them?
15 points in a row within the C zones (at both sides of the central line)
7 or 8 points in a row consistently increasing or decreasing (trend)
8 consecutive points in the B zones or beyond (at both sides of the central line) (bimodal).

October 30, 2003 at 10:24 pm #91843
Heebeegeebee BB
Bingo, buddy!
October 31, 2003 at 1:41 pm #91871
Alstats,
You need to consider a few things before defining what a special cause signal is on your control chart.
1. Control charts do not have any definite probabilities associated with them.
2. You need to consider the economics associated with the potential of a special cause and the identification of the cause.
3. Dr. Shewhart (the developer of Economic Control) stated that a process is considered out of control when the chart exhibits a nonrandom pattern. In addition, consider the economics associated with it.
For example, in some processes, a run of six points, seven points, or even the magic 8 or 9 points is easily checked for a change in the process with little to no economic impact incurred. A slight shift in the mean above or below the average is of little consequence. It is easily detected and corrected. So, whether you investigate at 8 points or 12 points, no big deal. However, in some cases, the economics are considerable. I’ve worked in companies where a run of 5 was a real red flag. The consequences of a potential shift in the mean were serious enough to begin investigating the process. It was worth the time and money, even if no special cause was found.
These rules for out-of-control signals based on a defined number of points or areas of probability on the control chart are very dangerous and are not based on the extensive theory and work of Dr. Shewhart. Most of them came from individuals looking for a cookbook formula with little to no understanding of the fundamentals of control charting.
Eileen0October 31, 2003 at 2:18 pm #91878Each of the Western Electric Rules has about the same probability of occurance (approximately 3 times in 1,000).Rule 1: A single point falls above or below three standard deviations (beyond Zone A).Rule 2: Two out of three successive points lie in Zone A on one side of the mean.Rule 3: Four out of five successive points fall in or beyone Zone B on one side of the mean.Rule 4: Fifteen points in a row fall in Zone C on either side of the mean.Rule 5: Eight points in a row fall on one side of the mean.Note: Control limits on a control chart should be recalculated whenever there is a major shift in the process (planned or unplanned).
October 31, 2003 at 3:30 pm #91881
AlStats
All,
Thanks for your inputs, these are very helpful!
Best regards,
AlStats

November 2, 2003 at 9:14 pm #91950
Hi, AlStats!
I believe your main questions have been rather clearly answered by
(a) binomi’s basic exercise, which can be extended and explained in terms of binomial and multinomial probability applied discretely to any additional situations that might arise, and
(b) Eileen’s economic and/or reliability considerations associated with Dr. Shewhart’s original works.
November 4, 2003 at 12:52 am #92013
You are interested in the shift rule regarding special or assignable causes.
The shift rule is based on probabilities. The chance of 7 consecutive points or more either above or below the center line is the same as getting 7 tails in a row when flipping a coin – which is (.5^7), or .008. This says that the chance of seven consecutive points above or below the center line, if nothing has changed in the process, is .008. Highly unlikely.
Hope this helps…..
November 4, 2003 at 3:11 pm #92038
AlStats
Thanks, Helper, this helps.
November 4, 2003 at 3:21 pm #92039
No problema…
If you have more questions feel free to contact me at [email protected].
Regards

November 4, 2003 at 6:54 pm #92049
Alstats,
Just to reiterate – there are no defined probabilities associated with points on a control chart. Depending on a calculation of probability for an out-of-control signal is wrong.
Whether a shift in the process occurs depends on the process you are monitoring. What is the economic impact of a potential shift? How much does it cost to investigate a potential shift? How much would it impact your business if you missed the shift? Probability or mathematical equations do not answer your question.
Remember, Dr. Shewhart defined a signal of an out-of-control condition as a nonrandom pattern on the control chart. Are you really looking at a process, or just looking for something to put in teaching material?
Eileen

November 4, 2003 at 8:20 pm #92058
Gabriel
Eileen:
“How much does it cost to investigate a potential shift? ” “How much would it impact your business if you missed the shift?”
Who cares!? Anyway, I can have no clue how likely it is that what I am investigating as a shift is actually a shift or just chance, and no clue how likely it is that I am missing a shift. After all, probabilities have nothing to do with SPC, right? I don’t know how the word “Statistical” got in front of “Process Control” in the first place.
A point can be either above or below a given value. And there IS a probability that a point belonging to a distribution falls on either side. And the probability that 7 consecutive points from the same distribution all fall on the same side of that value is the probability for 1 point raised to the power of 7. That’s irrefutable (well, you may try).
The problem here is that we don’t know the exact distribution of Xbar, as we don’t know the exact distribution of any other real population. But we can have an estimation. And then we can have an estimation of the median which, by definition, is the value that leaves 50% of the population on each side. Then we can estimate as 0.5 the chance that a point falls on a given side of the median. Then we can estimate as 0.5^7 = 0.008, or 0.8%, the chance that 7 consecutive points fall on the same side of the median. If we take Xbar-bar as our estimation of the median of Xbar, then we can estimate that there is a probability of 0.8% that 7 consecutive points fall on the same side of Xbar-bar.
And no matter how many times you repeat that there are no probabilities associated with SPC, just repeating it will not convince me, and I think it will not convince many others either.

November 6, 2003 at 5:41 am #92143
Gabriel,
While I agree with you regarding the probabilities, doesn’t Eileen have a valid point when she says that the relative costs should be taken into consideration? Isn’t she basically saying that if the cost of a shift is relatively high, she can live with a higher false-alarm rate and investigate a trend of 5, 6, or 7 data points in a row?

November 6, 2003 at 2:08 pm #92167
It is sad that so many people do not understand the basis of statistical process control. Perhaps people really don’t care and just view the details as unnecessary. But I feel that only with a thorough understanding of the foundation of control charts will one be able to apply them correctly.
I have included an excerpt from an article I wrote on this topic as well as a quote from Dr. Deming.
There are no individuals with a greater understanding of the theory and development of statistical process control than Dr. Shewhart and Dr. Deming. For the few that are interested in learning, here are their thoughts on this topic.
Dr. Shewhart’s definition of an assignable cause of variation is:
“The principle function of the chart is to detect the presence of assignable causes. Let us try to get clear on just what this means from a practical and experimental viewpoint. We shall start with the phrase “assignable causes.” An assignable cause of variation as this term is used in quality control work is one that can be found by experiment without costing more than it is worth to find it. As thus defined, an assignable cause today might not be one tomorrow, because of a change in the economic factors of cost and value of finding the cause. Likewise, a criterion that would indicate an assignable cause when used for one production process is not necessarily a satisfactory criterion for some other processes.
Obviously, there is no a priori, formal, and mathematical method of setting up a criterion that will indicate an assignable cause in any given case. Instead, the only way one can justify the use of any criterion is through extensive experience. The fact that the use of a given criterion must be justified on empirical grounds is emphasized here in order to avoid the confusion of such a criterion with a test of statistical significance.”4
Control charts are not a mathematical test of hypothesis. Many authors and instructors of SPC approach a control chart as a statistical test of hypotheses. This is also very misleading. Dr. Shewhart wrote: “As a background for the development of the operation of statistical control, the formal mathematical theory of testing a statistical hypothesis is of outstanding importance, but it would seem that we must continually keep in mind the fundamental difference between the formal theory of testing a statistical hypothesis and the empirical testing of hypotheses employed in the operation of statistical control.”5 Dr. Deming has said, in his usual succinct manner, “Rules for detection of special causes and for action on them are not tests of hypothesis that the system is in a stable state.”6 Remember, there is not one mathematical model used in process control. There can be no assigned probability associated with the plotted points. Subsequently, without any probability distributions, there can be no statistical tests of hypotheses.
References:
4. Shewhart, W. A., Statistical Method from the Viewpoint of Quality Control (Graduate School of Agriculture, 1939) p. 30.
5. Shewhart, W. A., Statistical Method from the Viewpoint of Quality Control (Graduate School of Agriculture, 1939) p. 40.
6. Deming, W. Edwards, Out of the Crisis (Cambridge, MA: MIT Center for Advanced Engineering Study, 1986), p. 335.

In addition, Dr. Deming stated in his book Out of the Crisis, page 334:
“Control limits do not set probabilities. The calculations that show where to place the control limits on a chart have their basis in the theory of probability. It would nevertheless be wrong to attach any particular figure to the probability that a statistical signal for detection of a special cause could be wrong, or that the chart could fail to send a signal when a special cause exists. The reason is that no process, except in artificial demonstrations by use of random numbers, is steady, unwavering.”
For those who are still reading, I strongly recommend you read Dr. Shewhart’s book and Dr. Deming’s book.
Eileen

November 6, 2003 at 4:02 pm #92176
AlStats
Wow! Guys, I’m learning a lot from your inputs. I just want to thank everyone!
AlStats
November 6, 2003 at 5:08 pm #92182
Eileen,
What is sad is when people become dogmatic about statistical and probability application. The dogma is not useful.
First, you need to put the quotes of both Dr. Deming and Dr. Shewhart into their proper context. They were intended to point out the risk of applying enumerative statistics (hypothesis testing) to a process that is not in statistical control. There is no argument against this. However, this does not mean that the detection of special causes and the assessment of statistical control cannot be based on probability. The notion of alpha and beta risk still applies. In fact, probability-based decision analysis of stochastic processes has been successfully applied for many years. The bottom line is: it works!
I have used this George Box quote on this forum before, but I think it is apropos here as well: “All models are wrong, but some models are useful.” The probability models are useful. How is your model of control theory useful?
I have been in this game for a long time. I went through the period in which the Deming followers were all saying that enumerative statistics were bad and analytic statistics were good, to the point that I would hear statements like “you cannot run a DOE on a process that is not in statistical control.” My response was that neither one is good or bad; they both have their place and value. Now we are in an era where enumerative statistics are good and analytic statistics are bad, to the point that we see tools intended for a random sample of a defined population applied to processes that are not in statistical control; for example, using the Anderson-Darling test to test normality on data from a process that is not in statistical control. But my response is still the same: they both have their place and value.
This is not to take anything away from the tremendous contributions that both Dr. Deming and Dr. Shewhart have made to the world.
Statman

November 6, 2003 at 7:20 pm #92194
Wow! You are totally confused. Control charts are used for both enumerative and analytic studies (see the ASQ Statistics Division newsletter). The distinction between these studies has nothing to do with this discussion. And you are wrong about the quotes. They are not limited to an analytic situation. Oh, and by the way, an out-of-control chart has a special cause, and this has nothing to do with enumerative vs. analytic studies.
There are no alpha or beta errors or probabilities associated with statistical process control charts. Reread the quotes and the books by Dr. Shewhart/Dr. Deming.
Your statements alone do not compare to the references by Dr. Shewhart and Dr. Deming.
November 6, 2003 at 8:29 pm #92196
Hi Eileen,
I’m not sure what post you are responding to, because if it is mine, you have certainly read a lot more into it than what I said.
“Control charts are used for both enumerative and analytic studies”
– never said they were not
“And you are wrong about the quotes. They are not limited to an analytic situation”
– never said they were
“Oh, and by the way, an out-of-control chart has a special cause”
– never said it didn’t
“There are no alpha or beta errors or probabilities associated with statistical process control charts”
– doesn’t mean that alpha and beta risk cannot be applied
Please answer my question for me. How is your view of control chart theory in any way helpful to the practitioners? Also, if probability can be usefully applied to control charts for detection of special causes and assessment of statistical control, what is your problem with it?
Regards,
Statman

November 6, 2003 at 10:29 pm #92203
Gabriel
Absolutely!
But read what you said:
“saying if the cost of a shift is relatively high, she can live with a higher false alarm rate”
But what is “a higher false alarm rate” if, without using probabilities, you can’t have a clue about the false alarm rate to begin with?
I am not against taking the cost into consideration. What I said is that the “cost of an event” is not enough if you don’t know the chances of that event happening. You need both: cost and probabilities. But Eileen said something like “SPC has nothing to do with probabilities,” and that’s what I am against.

November 7, 2003 at 12:06 am #92208
Mario Perez-Wilson
The tests for unnaturalness, that is, tests to ascertain whether a control chart is exhibiting a pattern not under statistical control, are based on probabilities. In the case of the average (Xbar) chart, they are based on the normal distribution, the CLT, etc.
Take a look at the PDF file under Tests for Unnaturalness that I have posted at http://www.mpcps.com/Free.html
In there it shows how to derive the Western Electric Tests.
Besides these tests, there are others that will also tell you whether the pattern on your control chart is unnatural, nonrandom, or not under statistical control.
Now, this answers AlStats’s question. The only problem that I see with his question is that he refers to “lack of control” of the process. And to add more fuel to the fire, control charts do not control anything. They merely monitor the response or independent variable that you are plotting. The control part comes from the reaction that the “individual”, that is, the operator, technician, engineer or statistical methods engineer (SMEs, as I used to call them at Motorola), takes when the pattern fails a particular test.
Some of the tests actually tell you that something good is happening with the process. For example, if I have eight consecutive points in Zone C only (lower half of the chart), this pattern fails a test of unnaturalness. This does not mean that my process shows a “lack of control”; it merely tells me that something unnatural is happening and I need to investigate what it is. In this case, it might be very good, as my averages are coming closer together. It may be that a new lot of materials has been introduced into the process, and all of a sudden you discover that this is causing uniformity in the average and a reduction in variation (standard deviation or range). Now you have discovered that there is lot-to-lot variability from your supplier. (Aside: for those of you who love to argue, I am not implying that this is how you determine lot-to-lot variability.) You may also need to look at the range chart to see what the variability is doing. So, applying the tests to the control chart is done to detect whether something unnatural is happening. In other words, whether the pattern is nonrandom, or whether it is failing to behave as it did when you put the control chart in the production area and calculated its control limits.
To use the statement “out-of-control” in relation to a control chart that fails a test for unnaturalness is not good, because it gives the impression that your process is behaving totally chaotically, when in reality it may be producing more uniform product as it fails the tests.
Now, concerning the other arguments that I see from some posts, I see two things here.
1) The tests are based on probabilities and are needed for the operators to react and initiate the process of investigation when a control chart exhibits a nonrandom pattern.
2) The investigation of or reaction to the charts (establishing control) when they fail the tests. The failure of each test should be investigated every time. Now, to simplify your life, you may establish an Action Plan for reacting to the tests; that way, you can delegate some actions to the operator, technician or “Control Chart Keeper”. A reaction may be needed or not depending on how much knowledge you have of the process, but that is very particular to the response, the independent variables and the process. Definitely, I would not base it on economics. I can just see a lot of “couch potato” engineers who will tell you it is not economical to do something every time the chart is “OUT-OF-CONTROL.”

November 7, 2003 at 12:42 am #92209
Mario,
Great post, and I fully agree with your points of view and positions.
In the future, please attach the document rather than posting a link to your business web site. Let’s honor the rules about promoting products and services on this forum.
Statman

November 7, 2003 at 2:48 pm #92226
If I don’t use probabilities, then how would I go about choosing, let’s say, between 2 out-of-control rules (for example, 7 points or 8 points on one side of the centerline)?
November 7, 2003 at 4:02 pm #92231
Hi Migs,
I guess according to someone on this thread you will have to pull out the Ouija board or your crystal ball. Maybe you could use Eileen’s recently developed “out-of-control chart”.
Just joking of course.
Don’t worry about it. Of course your decision will be based on probabilities.
Statman

November 10, 2003 at 9:25 am #92313
Philip Whateley
I think that this debate highlights the problems associated with the two main strands of thought in quality control.
Strand 1 is the development of control charts (or process behaviour charts) from Shewhart, through Deming, and on to Don Wheeler in the present day.
Shewhart, Deming, and Wheeler all state that control charts are empirical. The rules, tests, etc. may be “consistent with” probability theory, but they are very specifically NOT “based on” probability theory. Probability theory was used by Shewhart to test the generality of the charts, but the 3-sigma limits etc. exist because they work, not because of any foundation in probability theory. I suggest you read “Advanced Topics in Statistical Process Control” and “Normality and the Process Behaviour Chart” by Don Wheeler.
Strand 2 happened when statisticians got hold of Shewhart’s theories and misunderstood the fact that Shewhart had used some statistical theory as implying that control charts were based on statistical theory. Strand 2 inevitably leads to assertions that control charts only work for “normally distributed” data, with all the associated rubbish about data transformation etc. It also led to the idea, popular in my home country (the UK) for some time, that control limits should be set at 3.09 sigma, as this is equivalent to a probability of 0.001. To paraphrase Deming, Wheeler etc.: “An understanding of statistics is a positive hindrance to the effective use of control charts”.
On the subject of enumerative vs analytical studies:
Enumerative studies are those conducted within a “frame” of data, for example a controlled trial. With proper “blocking”, the effects of uncontrolled (special cause) variation within the frame can be reduced. With proper randomisation, the effects of uncontrolled variation can be rolled into experimental error (noise) so they do not confound with the controlled factors. With proper identification and measurement of uncontrolled variation, ANCOVA can be used to reduce the effect and make the model more sensitive. However, the one thing that cannot be done in the presence of uncontrolled variation is to extrapolate the results of the study OUTSIDE the frame of data to other parts of the process. To quote Don Wheeler:
“Virtually every statistical technique for making comparisons between groups, items or categories is built on the assumption that the data within each category are homogeneous. When this is demonstrably and globally not true, all comparison techniques become arbitrary and capricious. And when your comparison technique is arbitrary and capricious, your results will be no better.
Think about what the preceding paragraph means with respect to the uncritical use of confidence intervals and tests of hypotheses to make comparisons….”
Analytical studies, on the other hand, attempt only to test, as each additional point is added, whether the process generating the points is predictable or unpredictable. If there is evidence of unpredictability, then the correct use of the charts is that you should look for and eliminate the cause. If there is no evidence of unpredictability YET, then you should continue to look. The fact that the principal use of control charts is analytical explains why the various rules are intended to be conservative in nature. For an analytical study, you cannot go back and replicate to confirm a result. It is therefore important that the risk of false alarms is kept very low. Each time you add an additional rule you increase the risk of false alarms. This is why it is actually better to use a run of 9 on either side of the mean, because the probability of it occurring at random is lower.
You should be very careful in using control charts “enumeratively”; they are actually not designed for this purpose, as the bias correction factors are not intended to work within a “frame” of data. This is why the bias correction factors are adjusted for ANOM to account for sample size “within the frame”. Run tests within “enumerative” control charts may also be shorter, as the purpose becomes one of clue generation for further tests. However, run tests in enumerative charts should also be based on the median rather than the mean.

November 10, 2003 at 9:42 am #92314
DANG Dinh Cung
Good morning,
A control chart is mainly used to survey a process.
While a process is running and something infrequent happens, it may be the consequence of chance or, more frequently, of a special cause.
When you detect an infrequent event in your process, i.e. an event whose probability of happening is close to zero, it is safe (a) to imagine something wrong is happening, (b) to stop the process even if your product is still conforming, (c) to find out and remove the special cause, (d) before restarting.
You can detect infrequent events on a control chart: a point outside the control limits, two consecutive points close to the control limits, seven consecutive points on one side of the central line, seven consecutive points showing growth or decrease, etc. The Western Electric Rules give some examples, but a probabilist can imagine an infinite number of other situations.
Best regards
DANG Dinh Cung
[email protected]

November 10, 2003 at 12:13 pm #92319
Gabriel
“It is therefore important that the risk of false alarms is kept very low. Each time you add an additional rule you increase the risk of false alarms. This is why it is actually better to use a run of 9 on either side of the mean, because the probability of it occurring at random is lower.”
Sounds a lot like probability theory and hypothesis testing…

November 10, 2003 at 6:49 pm #92338
Swaggerty
Fun discussion. Lots of thoughtfulness. Here’s some more (at least that’s the intention). In the world of SPC there are 3 concerns: magnitude of effect (or difference), relationship among effects, AND consistency of effect. Consistency of effect is often overlooked when examining and analyzing data. I’ve often wondered where these “rules of thumb” regarding control chart interpretation came from, but in examining a probability table for the Binomial Expansion, the rules line up very nicely with the probabilities. For example, 7 consistent points outside of a control limit has about a 0.01 probability of occurring by chance; 7 points with one of them being inconsistent (6 outside, 1 inside) has a probability of 0.06. 8 points improves those odds by about 50% (0.005 and 0.035, respectively).
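The figures above can be reproduced directly from the binomial expansion (a sketch; the function name is my own, using at-least-k tail probabilities with the 50/50 assumption):

```python
from math import comb

def prob_at_least(k: int, n: int, p: float = 0.5) -> float:
    """P(at least k of n independent points fall on the given side),
    assuming each side is equally likely (p = 0.5)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(prob_at_least(7, 7))  # all 7 consistent: 1/128, roughly 0.008
print(prob_at_least(6, 7))  # at least 6 of 7:  8/128, roughly 0.06
print(prob_at_least(8, 8))  # all 8 consistent: 1/256, roughly 0.004
print(prob_at_least(7, 8))  # at least 7 of 8:  9/256, roughly 0.035
```

These tail probabilities line up with the quoted 0.01/0.06 and 0.005/0.035 to the precision given.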
So, if you need a “solid reason” for using these rules, the Binomial Expansion may just provide it for you. Additionally, the Binomial Expansion makes no assumptions regarding underlying distributions (i.e., homogeneity of variance). One assumption that is made, however, is that two mutually exclusive events (e.g., heads-tails, inside-outside) have an equal probability of occurrence. If one event is favored over another, say 80/20, the 50/50 assumption leads to a conservative estimate of probability, and there is no danger of overinterpreting the data.

December 1, 2005 at 12:42 am #130548
As a section leader from a Science & Religion forum in a past life, I find this thread oddly familiar.
Yes, some may treat SPC rules as revealed dogma, but how can any of us who have a background in statistics not agree that the Western Electric rules work because they are consistent with probability theory?

December 1, 2005 at 10:04 am #130567
Dale,
If you admit a past life, you should also admit the possibility of autocorrelation :)
Andy

December 1, 2005 at 11:17 pm #130613
Andy,
Indeed, my present life is haunted by autocorrelation :)
Perhaps, as an accused, well, historian of this forum and this field, you might be in an ideal position to address my conjecture more directly.
Q: For some people, have Six Sigma and Quality and Control Charting taken on some aspects of revealed religious dogma, rather than being merely applied scientific/business tools that are eternally open to rational inquiry and empirical study and criticism and cost/benefit analysis and improvement?
Dale

November 11, 2009 at 7:49 pm #186753
The rules were developed by someone at Western Electric, thus the name, just like the majority of statistical manufacturing techniques were developed at Bell Labs; the Student t is the prime example.
November 11, 2009 at 7:52 pm #186754
The basic underlying assumption is a Normal distribution, which is the basis for all SPC. Any underlying distribution that is averaged in sample sizes of 5 becomes normally distributed.
November 11, 2009 at 8:25 pm #186756
It is truly astonishing what one can learn by opening up a random post on this forum… I guess 2003 was some kind of vintage year, since we seem to have revived a number of threads from that time.
As for new information – I never knew that Gosset worked for Bell Labs. If that’s true, then are we to assume the “t” in t-test was short for… telephone? All of the books I’ve read say Gosset developed the t-test while working for the Guinness Brewery in the early 1900s (and that it was Fisher who named the distribution the t-distribution, for reasons unknown).
The “fact” that averages of 5 values from any kind of underlying distribution will result in normally distributed values is also quite interesting, and also contrary to most everything I’ve read. While it is true that distributions of averages will tend to normal, the number of data points per average needed to make this happen will vary depending on the underlying distribution – for some extreme-value distributions the number per average can easily reach 100.

November 11, 2009 at 8:54 pm #186757
Wow, ask for your money back from any classes you’ve taken.
November 11, 2009 at 8:55 pm #186759
No, demand your money back.
November 11, 2009 at 9:09 pm #186761
Dharma Bum
Joe,
You couldn’t be more wrong… WOW

November 11, 2009 at 9:11 pm #186762
Dharma Bum
Joe, man… you are two for two. You are not in a position to be offering insight on the topic of SPC. Please stop, lest someone take your nonsense for gospel.
The forum ‘General’ is closed to new topics and replies.