Capability for a non normal distribution
 This topic has 59 replies, 19 voices, and was last updated 15 years, 1 month ago by Quainoo.


December 19, 2006 at 3:44 pm #45572
Hello everyone,
I am working on an inventory reduction project.
I have calculated how many months of inventory is currently in stock for each item.
The distribution is not normal.
My specification limits are: min: 0 and max: 2 (months of inventory).
Since the data is not normal, I followed the following procedure in order to calculate the capability.
Total number of references: 2517
Number of items with more than 2 months of inventory: 1783
DPU: 0.71 (*)
Z score: 0.37
Cp (**): 0.12 (Z/3)
(*) Because 0.71 represents both sides of the curve and I am looking at a Z table that takes into account only one side, it makes sense to me to divide this value by 2 (0.35) to get the Z value.
(**) I have a feeling that the result is more a Cp than a Cpk, but I am not sure.
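For reference, the arithmetic above can be reproduced with Python's standard library. This is a sketch of the steps as described, not an endorsement of the method; note that the conventional one-sided benchmark Z for a proportion defective above 50% would actually come out negative:

```python
from statistics import NormalDist

total, over_spec = 2517, 1783
dpu = over_spec / total                         # ~0.71 defective
# Vincent's step: halve the proportion, then do a one-sided
# inverse-normal ("Z table") lookup
z_vincent = NormalDist().inv_cdf(1 - dpu / 2)   # ~0.37
cp_vincent = z_vincent / 3                      # ~0.12
# Conventional one-sided benchmark Z: with more than half the
# items out of spec, this comes out negative
z_benchmark = NormalDist().inv_cdf(1 - dpu)     # ~ -0.55
print(round(z_vincent, 2), round(cp_vincent, 2), round(z_benchmark, 2))
```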
I would like to have your input about the above methodology.
If not correct, I would appreciate knowing the right way to calculate the capability in this situation.
Thanks
Vincent
December 19, 2006 at 4:38 pm #149282
Robert Butler (Participant)
The post below and the discussion thread that follows it may provide some answers to your question.
https://www.isixsigma.com/forum/showmessage.asp?messageID=17743
December 19, 2006 at 5:09 pm #149284
The Force (Member)
Use Weibull to determine capability for nonnormal data.
December 19, 2006 at 7:56 pm #149289
Capability means being consistently capable of meeting specification. To do this a process must be in control. It makes no difference what type of distribution the data has … and you will never exactly know it anyway.
December 19, 2006 at 8:16 pm #149291
Robert Butler (Participant)
With respect to standard capability calculations, the statement “It makes no difference what type of distribution the data has … and you will never exactly know it anyway.” is in error.
The assumption of normality is central to the Cpk calculation. The post below has the details.
https://www.isixsigma.com/forum/showmessage.asp?messageID=44476
As for the second part – the issue is not that of knowing a distribution exactly; the issue is being able to say with a degree of certainty that the data presented can be treated as though the parent distribution was (fill in the blank).
December 19, 2006 at 9:08 pm #149292
How many real world histograms have you ever looked at?
How many even remotely look like a normal distribution?
Take 100 random normally distributed points and create a histogram … it won’t look anything like a normal distribution.
A Pearson lack-of-fit test will test the fit of 100 data points out to just +/- 1.65 sigma.
Try some time based distributions … they are very asymmetric and often bimodal. Transforms don’t help such data.
Forget about normal distributions!!!!
December 19, 2006 at 9:40 pm #149293
Robert Butler (Participant)
How many real world histograms have you ever looked at?
Answer: Quite a few, but in and of themselves they aren’t worth much – it’s too easy to change the binning to modify their shape; as a result you can make them look like almost anything you choose.
The way you visually examine a distribution is to plot it on normal probability paper (or its computer graphic equivalent). The way you give yourself a sense of just how nonnormal perfectly normal data can look is to generate repeated samples of 10, 20, 30, 60, and 120 points using a generator of random numbers from a normal distribution and plot these results on normal probability paper.
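A numeric stand-in for that exercise, assuming scipy is available: `scipy.stats.probplot` computes the ordered sample against theoretical normal quantiles (i.e. the probability-paper plot), and `r` is the straight-line correlation on that plot.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
for n in (10, 20, 30, 60, 120):
    x = rng.normal(size=n)                 # truly normal data
    # r close to 1 means the points hug the straight line on
    # probability paper, even when a histogram of the same small
    # sample looks nothing like a bell curve
    (_, _), (_, _, r) = stats.probplot(x, dist="norm")
    print(n, round(r, 3))
```

Repeating this a few times with different seeds builds exactly the mental reference library Robert describes.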
How many even remotely look like a normal distribution?
Answer: Once you have generated the graphs in #1 above you will realize that they can be very far from the ideal normal curve (as displayed on the old ten Deutschmark note). But once you combine the visuals with various tests you can see that, even when they don’t visually meet your ideal of what a normal should look like, they still pass muster with respect to being viewed as data points from a normal parent distribution.
As far as forgetting about normal distributions – I wouldn’t recommend this.
1. If you choose to ignore the normal distribution then you are choosing to ignore all of residual analysis – this means you are choosing to ignore most of the guidelines surrounding the tests for significant terms in a regression model. This, in turn, means you are choosing to identify significant terms or insignificant terms….how? This also means you are going to determine model adequacy…how?
2. If you choose to ignore the normal distribution then you will go wrong with great assurance with respect to capability calculations if your data is nonnormal (Bothe Chapter 8 has the details).
3. If you ignore the normal distribution you run the risk of misapplying any number of statistical methods.
4. If you ignore the normal distribution then you are walking away from the utility of the concept of central tendency.
On the other hand, I wouldn’t stand around and assume that all was lost if my data failed a normality test.
As far as time based distributions are concerned – I don’t know the term; could you give some examples?
December 19, 2006 at 9:55 pm #149295
There are many real world distributions that are very non normal. Have you ever looked at time based processes, for example … call centres, help desks etc.?
If Bothe is your hero, no wonder you are confused !!!
December 19, 2006 at 10:06 pm #149296
Davis R. Bothe wrote an article entitled “Statistical Reason for the 1.5 Sigma Shift” in ASQ’s journal Quality Engineering (Vol. 14, No. 3, March 2002, pp. 479–487). There he addressed the question of why six sigma followers add a 1.5 sigma shift to the average before estimating process capability. If Bothe is the basis for your suggestions, you are a very poor statistician. I assume that you are also a believer in the 1.5 shift?
December 19, 2006 at 10:07 pm #149297
Sigmordial (Member)
Hi Steve,
If Robert Butler is confused, then sign me up for confusion. I have read quite a number of his posts, and he has been spot on. He has also been cordial to the rare snipes.
Kudos and thanks to Robert for his participation!
December 19, 2006 at 10:23 pm #149298
Robert Butler (Participant)
I didn’t think the issue was whether or not there were real world distributions other than the normal. I thought the issue, as presented, was that we were supposed to forget about the normal.
Based on what you have posted it would appear that what you mean by a time based distribution is the Poisson. As noted in the literature the Poisson is a good representation of the distribution of data that arises from the measure of events that occur over equal intervals of time (or space). There are others that are far more extreme. For example, most of my work is with data that can best be described as having underlying distributions that are either binary, Weibull, extreme value, log normal, or ZIP (zero inflated Poisson).
I’ve never met Bothe so I don’t know a thing about him. What I do know is that his book Measuring Process Capability is the best reference I’ve seen on the subject and, as I mentioned, Chapter 8 is an excellent discussion of computation of process capability in the presence of nonnormal data. On that same line, Chapter 9 does a great job of capability calculations when dealing with attribute data.
December 19, 2006 at 10:31 pm #149299
Robert Butler (Participant)
Well Phil, rather than go into all of the 1.5 stuff again, please just read what I posted on that subject a long time ago:
https://www.isixsigma.com/forum/showmessage.asp?messageID=39663
Actually, all Bothe showed was that if you made a series of assumptions about the process you could come up with 1.5. The big point he made in his paper was that if any of these assumptions were violated then all bets were off.
December 19, 2006 at 10:56 pm #149300
You seem to be saying that you agree that the 1.5 is crap but you do believe Bothe’s use of it in calculating capability!!!!????
December 19, 2006 at 11:02 pm #149302
Poisson is a theoretical distribution that can relate to time based processes. Real world processes are quite different. If there were a way to post graphics, I could post some help desk data … you could play around for days trying to fit distributions … but for what purpose!?
The meaning is in the data itself, no matter how skewed, lumpy or whatever, not in attempting to fit distributions to it.
The aim should be to use the histogram to gain insight into whatever the process is.
December 19, 2006 at 11:25 pm #149303
Monte Carlo simulation uses distribution fitting. Just curious, why do you have such heartburn over this? Are you just a fundamentalist?
December 19, 2006 at 11:28 pm #149304
I fully agree with the comments you made. Robert should be the forum’s and the field’s standard for both professional deportment and accuracy of input, versus the railed-at exception. I always enjoy reading and learning from Robert’s postings.
Dr. V.
December 19, 2006 at 11:35 pm #149305
Read “Normality and the Process Behaviour Chart”.
It will give you a new (non normal) way of looking at processes.
December 19, 2006 at 11:43 pm #149307
I wonder if we can analyse a distribution here for help desk service times … non normal of course
ooooooox
ooooxxx
oooxxxxxxx
ooxxxxxxxxxxxx
oxxxxxxxxxxxxxxxxxoooxoxooooooooxoooxxxxoox
xxxxxxxxxxxxxxxxxxxoxxxxxxxoxxoooxxxxxxxxxooxoox
December 19, 2006 at 11:55 pm #149308
A Dr. V learning from an M.S.? That’s not much of a Dr. now, is it?
December 20, 2006 at 2:21 am #149311
Mmmm… that didn’t work so well. However, I have found an interesting exercise on non normal distributions:
http://www.qskills.com/nm/nm.htm
December 20, 2006 at 2:22 am #149312
Robert Butler (Participant)
Regarding this statement: “You seem to be saying that you agree that the 1.5 is crap but you do believe Bothe’s use of it in calculating capability !!!!????”
I guess I don’t see how the two thoughts are connected. Bothe wrote an article which, as far as I’m concerned, put paid to any idea that 1.5 had any generalizable merit. His book is about the issues surrounding process capability. I looked over Chapter 8 (the one relevant to the initial poster’s question) and I can’t find a thing concerning 1.5 when computing the equivalent 6 sigma spread.
Just so we don’t leave Vincent in the lurch – if we assume your process is reasonably stable and if your data is nonnormal and if you have to provide some estimate of process capability then the method I would recommend, and the method I’ve used many times, is the one outlined in Chapter 8 of the Bothe book.
If you don’t have ready access to the book, the idea is this: take your data and plot it on normal probability paper and identify the .135 and 99.865 percentile values (Z = ±3). The difference between these two values is the span for producing the middle 99.73% of the process output. This is the equivalent 6 sigma spread and you can substitute this value in the equation for computing an equivalent capability index.
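A sketch of that calculation in Python, with two assumptions of mine: empirical percentiles stand in for the probability-paper fit the book actually uses, and the skewed "months of inventory" data below is simulated purely for illustration.

```python
import numpy as np

def equivalent_capability(data, lsl, usl):
    """Percentile-method (Bothe-style) equivalent Cp and Cpk:
    the 0.135th and 99.865th percentiles bound the middle 99.73%
    of output, standing in for the 6-sigma spread of normal theory."""
    p_lo, p_med, p_hi = np.percentile(data, [0.135, 50.0, 99.865])
    cp = (usl - lsl) / (p_hi - p_lo)            # equivalent Cp
    cpk = min((usl - p_med) / (p_hi - p_med),   # upper side
              (p_med - lsl) / (p_med - p_lo))   # lower side
    return cp, cpk

# Illustrative right-skewed data against Vincent's 0-2 month specs
rng = np.random.default_rng(1)
data = rng.lognormal(mean=-0.5, sigma=0.6, size=5000)
cp, cpk = equivalent_capability(data, lsl=0.0, usl=2.0)
print(round(cp, 2), round(cpk, 2))
```

Because the extreme percentiles are estimated from only a handful of order statistics, this wants a reasonably large, stable data set.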
As for the sidebar discussion about distributions all I can do is reiterate what I said at the beginning – the issue is not that of knowing a distribution exactly (nor of fitting a distribution exactly) the issue is being able to say with some degree of certainty that the data presented can be treated as though the parent distribution was (fill in the blank).
If I know my data can be treated as though it came from some parent distribution then I have some assurance that the data does not violate the basic theoretical assumptions of a given test/analysis. This, in turn, will mean I will be that much more certain that the conclusions I draw from the results of the test/analysis will be of use with respect to addressing whatever problem I was asked to solve.
December 20, 2006 at 3:40 am #149315
Prof Deventer (Participant)
Sounds good. You don’t support the 1.5 or six sigma tables. And it sounds as good an approach as any other for defining a capability – after all, these are just numbers. The traditional calculation of capability is just as good. We can calculate any numbers we like as long as we attribute proper meaning and utility to them.
Unfortunately as you know, it takes an extraordinarily large amount of data to make any assumptions about the parent distribution. Most commonly, this is not some “ideal theoretical” curve anyway. Rather than curve fitting, we can learn much more by considering why the histogram looks the way it looks … for example rather than trying to fit some theoretical distribution to a bimodal histogram, we may look for possible mixed data from 2 sources.
We should always have a purpose. For example, what is the real purpose in calculating capability, and what is the purpose in using this or that formula? For example, there is no real purpose or benefit in plotting a bimodal data set on normal probability paper.
It is easy for people to get lost in doing all sorts of calcs and transforms and manipulations, without thinking about what the data is really saying. The aim is not to generate numbers. The aim is to listen to what the process is saying and discover how to improve it.
December 20, 2006 at 1:17 pm #149320
That was not much of a question. Quite evidently you are not one familiar with what you do or don’t know, are you? Dr. V.
December 20, 2006 at 2:11 pm #149323
Robert Butler (Participant)
The statement “For example, there is no real purpose or benefit in plotting a bimodal data set on normal probability paper.” is in error and may indeed be at the heart of a lot of the misunderstanding in this thread.
If I have a bimodal distribution such that the bimodality consists of two separate and distinct peaks then it probably doesn’t matter whether I plot it as a histogram or plot it on normal probability paper. On the other hand, if the two distributions have a large degree of overlap then it is very easy to have binning in a histogram routine that will hide this fact and lead you to the false conclusion that bimodality isn’t present.
Take that same data and plot it on normal probability paper and the bimodal signature is unmistakable. Consequently, the real benefit of running a normal probability plot on a block of unknown data isn’t to see how close it falls to the ideal line but rather to see its shape and use what you see to make a judgment call with respect to the proper treatment of the data.
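A quick simulated illustration of that point, assuming scipy is available (the mixture parameters are my own, arbitrary choices): `r` is the straight-line correlation on the normal probability plot, and the bimodal mixture falls visibly off the line even when a coarse histogram can blur the dip between the modes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Two overlapping normal modes; coarse histogram binning can hide the dip
mix = np.concatenate([rng.normal(0.0, 1.0, 500),
                      rng.normal(4.0, 1.0, 500)])
pure = rng.normal(2.0, 2.2, 1000)            # single normal, similar scale
(_, _), (_, _, r_mix) = stats.probplot(mix, dist="norm")
(_, _), (_, _, r_pure) = stats.probplot(pure, dist="norm")
print(round(r_mix, 3), round(r_pure, 3))     # mixture gives the lower r
```

Passing `plot=plt` (with matplotlib) to `probplot` draws the actual plots, where the mixture’s S-shaped signature is unmistakable.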
Steve gives the impression he deals with data that could be viewed as possibly having an underlying Poisson or perhaps Negative Binomial distribution. Assuming he has some kind of package that allows him to generate Poissons with different lambda values, it would be worth his time to generate and plot a series of these and keep them as a reference. When he gets an unknown block of data, run a normal probability plot and check it against the ideal plots – if it’s close then he probably can’t go too far wrong if he treats the data as having come from a Poisson … and if it isn’t then he will have to think about it some more.
December 20, 2006 at 2:17 pm #149324
You’re welcome, always nice to see your fluffy rear end :)))).
December 20, 2006 at 3:25 pm #149325
I’d imagine so, as you appear to be attracted to fluffy rear ends.
December 20, 2006 at 6:21 pm #149333
Not much intellectual sharpness in that response either, “Dr.” V.
December 20, 2006 at 6:59 pm #149335
Can’t argue with you there.
If it would offend your sensitivities less, in the future when complementing a forum contributor on his/her consistency in responding in a professional and informative manner, I could leave off the Dr.
I understand both how in some circles it’s almost an expected title prefixed to a name and in other circles it’s not. Apparently it’s a “not” for you, so I’ll work with you and leave it off. But in my doing so you’ll just have to accept that the V. is for valedictorian of some really good schools with very rigorous programs – hope that’s not also offensive to you.
V.
December 20, 2006 at 8:52 pm #149342
Prof Deventer (Participant)
You don’t need to plot on probability paper to detect a bimodal distribution. Such plotting is an exercise in futility.
Curve fitting achieves no practical purpose. For all your lengthy diatribe, you have yet to ascribe any benefit to attempting to fit distributions.
December 20, 2006 at 8:58 pm #149343
Monte Carlo simulation uses distribution fitting.
December 20, 2006 at 9:50 pm #149347
Your posts are at the reading level of a 10th grader. So, that’s an accomplishment! Now let’s work on your grammar: “If it would offend your sensitivities less, in the future when complementing a forum contributor on his/her consistency in responding in a professional and informative manner, I could leave off the Dr.” You are only a few steps away from achieving your dream: that we all believe that you are a true “valedictorian of some really good schools”. How can I be offended? You have a dream. One day your dream will turn into a true “PhD.”. Until then, keep that acronym. It describes you well.
December 20, 2006 at 9:58 pm #149348
Robert Butler (Participant)
I guess I don’t see where I was advocating fitting distributions. What I said was:
“all I can do is reiterate what I said at the beginning – the issue is not that of knowing a distribution exactly (nor of fitting a distribution exactly) the issue is being able to say with some degree of certainty that the data presented can be treated as though the parent distribution was (fill in the blank).
If I know my data can be treated as though it came from some parent distribution then I have some assurance that the data does not violate the basic theoretical assumptions of a given test/analysis. This, in turn, will mean I will be that much more certain that the conclusions I draw from the results of the test/analysis will be of use with respect to addressing whatever problem I was asked.”
As far as believing probability plots of data are futile with respect to identifying a bimodal (or any other kind of) distribution, you are free to believe as you wish. My experience is to the contrary.
December 20, 2006 at 10:22 pm #149352
Phil,
I tried to have input regarding the capability issue a couple of nights ago, i have read some of your replies, you are rude, in fact you are a twat and hide behind the fact you are well educated in the subject we speak about, i am not and still learnig, but i know i will get there.
I can say without a doubt, you along with div(darth),steve are the biggest prats i have ever had to communicate with, you kill this subject because you like to jump onto others errors,lack of knowledge,i could go on but you and the other two di.k heads are not worth it.
pick all the grammer mistakes out of this post you tnuc.
Regards,
Dave
December 20, 2006 at 11:09 pm #149355
Don’t worry Dave. Phil’s not worth it. He has little imagination and even less capability. If all he can do is hide under the anonymity of a forum like this and take little potshots at those trying to communicate, he’s too wrapped up in his own insecurity to do other than snipe at you. His is the mark of a tiny mind.
Darth contributes to the forum, but he has little patience with those he perceives as not working on their own to come up with an answer.
December 20, 2006 at 11:34 pm #149356
I would be interested to see your curve fitting on this example. It is an exercise that always seems to give very skewed, non normal data. With the second option, my histogram was almost triangular but the standard XmR control chart worked well, with just 2 points out of limits, as I might have expected.
Click on the histogram to view the actual data.
http://www.qskills.com/nm/nm.htm
December 21, 2006 at 12:41 am #149357
Now that’s an interesting little twist in this cute soap opera: An anonymous Vincent initiated the thread, but now an equally anonymous Dave claims that he wanted to “have input regarding the capability issue a couple of nights ago”. Then miraculously Dr. V appears, who according to some posts a few months ago only “drops in to see what’s going on and evaluate the intellectual state of affairs from his aloof position as a true expert”. Now he is dropping the “Dr” because his obviously poor grammar does not add up to a venerable “PhD”. And Dave, who at one point obviously was Vincent, gets support from V (out of nowhere). Damn, when the curtain sinks and the lights get turned on in this big theater called “internet”. You are more than welcome to call me “rude”, a “twat”, a “tnuc” or put me into the venerable tradition of “di.k heads” such as Darth! Good luck in the pursuit of the educational goals of all three of your split personalities.
December 21, 2006 at 1:37 am #149361
Whoa Phil……. That insult/compliment was uncalled for. It has become apparent from the recent threads that you are not the true Dr. Phil whose long history of witty, insightful, humorous and knowledgeable posts has entertained the Forum for a long time. You are either a sham posing as the revered Dr. Phil or the old Dr. Phil from Home Depot in need of a serious intervention from Dr. Barry of the Bench by the Bay. Your strident and lowered standards of witty repartee are not what we would expect from the true Phil. I reject this Phil and call on the Almighty G_d of Six Sigma to cast out the devil in you and return to us the real Phil we loved and respected. Let us all join hands and send positive feelings to Phil in his darkest moments.
December 21, 2006 at 1:41 am #149362
Garth Edwards (Participant)
“The assumption of normality is central to the Cpk calculation”
Most will realise the above statement is utter BS.
Cpk = (1 − k) T / 6s … as hundreds of texts will tell you.
Cp and Cpk give general indications as to the state of a process compared to spec limits. These are not absolute figures in any way.
They attempt to compare “the voice of the process” with “the voice of the customer”.
There are measures of non normality such as skewness and kurtosis. Calculating these may be of interest but is of no real benefit in process management.
December 21, 2006 at 2:33 am #149363
Robert Butler (Participant)
No Garth, I’m afraid most won’t. See the post below for details and citations.
https://www.isixsigma.com/forum/showmessage.asp?messageID=44476
Anon, I don’t know where all of these notions concerning curve fitting are coming from. All I can do is refer you to the post below and ask that you reread the discussion.
https://www.isixsigma.com/forum/showmessage.asp?messageID=108908
December 21, 2006 at 4:54 am #149365
Sorry, but that is not the linguistic style of the “venerable” Darth. Try again!
December 21, 2006 at 6:10 am #149368
So just how normal does data have to be … we could use a Pearson’s lack-of-fit test out to 3 sigma … that would take about 3500 data points.
The only place you will ever see a perfect normal distribution is in a text book.
Set up a normal random number generator using Excel and plot histograms … they will never look normal unless you use thousands of data points.
Real world distributions are never normal.
December 21, 2006 at 1:51 pm #149383
Robert Butler (Participant)
One rather interesting side effect of this thread is that I’m getting to the point where I almost know my prior posting ID numbers by heart. What concerns me is that a lot of this discussion seems to be centered on what can only be described as a complete misunderstanding of the issues surrounding the normal (or any other distribution).
In responding, I don’t mean to single out Sam – his post is just the latest along this line. So, once again –
“the issue is not that of knowing a distribution exactly (nor of fitting a distribution exactly) the issue is being able to say with some degree of certainty that the data presented can be treated as though the parent distribution was (fill in the blank).
If I know my data can be treated as though it came from some parent distribution then I have some assurance that the data does not violate the basic theoretical assumptions of a given test/analysis. This, in turn will mean I will be that much more certain that the conclusions I draw from the results of the test/analysis will be of use with respect to addressing whatever problem I was asked to solve.”
Perhaps a concrete example will help:
You are asked to build an experimental design. You have 3 factors and your budget says you can run a total of 9 experiments. You run a full 2 level factorial with a single point repeated. Your boss wants a predictive equation relating the three variables of interest to the process output. You go out and do … what? Well, an obvious choice would be to run a stepwise regression in order to identify the variables that are significant. So let’s say you do this.
Question: If we try to view the question as Sam and others on this thread have attempted to do then how do we go about identifying significant terms and building our model?
The answer is – we can’t because one of the basic assumptions of residual analysis which is central to the issues of tests of variable significance is that the residuals are normally distributed random variables.
So, assuming our residual analysis didn’t show any obvious trends, how normal do they have to be? I guess as normal as 9 data points can be. And how normal is this? Well, you could run the data through a bunch of tests and, with that sample size, you will probably fail most of them. Or you could engage in what was described as an exercise in futility and plot them on a normal probability plot and visually compare their plot to other nine-point plots of data drawn from a known normal. If it looks “reasonable” (and, obviously, this is a judgment call) then at the end you will have to ask not “is the data normal?” but, as I said above, “can it be treated as though it came from a population that was?”
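A sketch of that scenario in numpy (the factor effects and noise level here are invented for illustration): a full 2^3 factorial plus one repeated corner gives the 9 runs, a main-effects model is fit by least squares, and the 9 residuals are what would go on the probability plot.

```python
import numpy as np

rng = np.random.default_rng(5)
levels = (-1, 1)
# Full 2^3 factorial in coded units, plus one repeated corner -> 9 runs
X = np.array([[a, b, c] for a in levels for b in levels for c in levels]
             + [[1, 1, 1]], dtype=float)
# Hypothetical true process: only factors A and C actually matter
y = 5.0 + 2.0 * X[:, 0] - 3.0 * X[:, 2] + rng.normal(0.0, 0.5, len(X))

A = np.column_stack([np.ones(len(X)), X])      # intercept + main effects
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef
# With only 9 residuals, formal normality tests have little power;
# the practical check is to probability-plot them and judge against
# reference plots of known 9-point normal samples
print(np.round(coef, 2), np.round(resid, 2))
```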
December 21, 2006 at 1:56 pm #149385
Darth, you’ve apparently mistaken Phil The Cunning Linguist for Dr. Phil The Witty. Dr. Phil The Witty interspersed humor with Six Sigma practice insights; Phil The Cunning Linguist does neither.
I concur with your call to cast out the imposter and assume you were beseeching the almighty Mike Carnell to hurl him bodily from the field.
Dr. V. (not related even electronically to Vincent nor Dave, despite Phil The Cunning Linguist’s pale analytical attempts at linkage – I hope that’s not an example of his Six Sigma project work)
December 21, 2006 at 2:28 pm #149387
Alas, it is truly I, the caped evil one, owner of the Vadermobile, attempted slayer of the Evil Princess, etc…. Style is more mellow due to the holiday season and the fact that I am on vacation the rest of the year.
December 21, 2006 at 2:53 pm #149388
K.M. Date (Participant)
Dear Vincent,
1. Although you are mentioning that the specs are Min: 0 m and Max: 2 m, in reality there is only a single spec, that is Max: 2 m! This is because even if one does not explicitly specify Min as 0, we would like/desire to see inventory levels close to zero as far as possible, provided the replenishment happens very very quickly. Cp is not relevant in such situations. Only Cpk is to be calculated. However, if the replenishment can not be done that fast, we need to arrive at the right Min value after studying the replenishment cycle. In which case, both Min and Max specs will come into picture and both Cp and Cpk will then become relevant.
2. Assuming Max spec only
Cpk = (Max value – Average of the 2517 reference values)/(3 times the standard deviation of the reference values).
3. Assuming both a non zero Min and a Max, the Cp and Cpk are:
Cp = (Max – Min)/(6 times the standard deviation referred to above), and
Cpk = Minimum of {(Max – Avg.)/(3 times std. dev.), and (Avg. – Min)/(3 times std. dev.)}
2. and 3. are valid provided the distribution of the values has a single mode (histogram/dot plot showing a single hump) and the shape of the distribution does not depart strongly from the Normal Distribution shape.
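A minimal numpy sketch of formulas 2. and 3. above (the simulated data is illustrative only, and assumes the roughly normal, single-humped shape that the caveat requires):

```python
import numpy as np

def cp_cpk(x, lsl, usl):
    """Classical normal-theory capability indices per the formulas above."""
    mu, s = np.mean(x), np.std(x, ddof=1)
    cp = (usl - lsl) / (6 * s)
    cpk = min((usl - mu) / (3 * s), (mu - lsl) / (3 * s))
    return cp, cpk

# Hypothetical months-of-inventory data against Vincent's 0-2 specs
rng = np.random.default_rng(2)
x = rng.normal(1.0, 0.5, 2517)
cp, cpk = cp_cpk(x, lsl=0.0, usl=2.0)
print(round(cp, 2), round(cpk, 2))   # Cpk can never exceed Cp
```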
Hope that clarifies your query.
Best of Luck
K.M.Date
December 21, 2006 at 8:33 pm #149405
The fact that a “valedictorian” such as Dr. V would even respond to a totally nonsensical thread of some “Phil” just makes me burst out in laughter. Anyway, Happy Holidays!
December 21, 2006 at 9:01 pm #149407
You think that even an accomplished academician turned industrial leader can’t enjoy engaging in slanderous repartee occasionally? My friend, you must lead a lonely and insular cube-dwelling existence.
Phil, instead of feeling anxious, inadequacy-based separation and attacking when you see the appended and appropriate title “Dr.”, you might try a multi-step recovery program: beginning with just breathing deeply and letting it go, moving toward seeking common ground, and winding up possibly even hugging and taking a PhD to lunch – a nice Chicken Caesar Salad would be nice.
This will have to be my last posting for a while as I am heading off for a nice holiday vacation, but I look fondly toward the upcoming New Year, in which you can demonstrate a new-found restraint, balance and tolerance toward others.
I wish you and yours a very Merry Christmas and a wonderfully prosperous and Happy New Year.
Dr. V.
December 21, 2006 at 9:03 pm #149408
Hello V,
In case phil is around, are we talking to ourselves, i think we have got to him.
Can’t wait for the reply off him!!!
I never thought i would lower myself to his level!!!
regards,
Dave
December 21, 2006 at 9:26 pm #149412
You must be joking. Do you really believe that you could determine that a set of 9 data points is from a normal distribution and not one of the other myriad of distributions?
50 data points are needed to test it out to +/- 1.28 sigma.
December 21, 2006 at 9:43 pm #149414
Of course you can. We see BBs do it all the time and lo and behold the nine data points always show a high p value on the normality test. Must be OK then if JMP and Mini do the calculations and the p value is high.
December 21, 2006 at 9:55 pm #149415
Dave, you have a Happy Holidays too. May Santa Claus send you a large selection of “blabla” for Dummies … :))))))))))))))))))))))
December 21, 2006 at 10:35 pm #149420
Phil,
You have a good one too mate!
Regards,
Dave
p.s. i hope i get the beginners guide to english
December 21, 2006 at 11:24 pm #149424
Begone Phalse Phil and start your eggnogging extravaganza.
December 22, 2006 at 1:26 am #149427
Big hug … and keep building your long list of publications by posting on the distinguished isixsigma forum. The collection of posts on groundbreaking topics such as “Capability for a Non Normal Distribution” will look very impressive on the CVs of all the renowned Drs. who publish so profusely on this site … LOLOLOLOLOLOL.
December 22, 2006 at 1:35 am #149428
You make a good point Phalse Phil. This Forum should be restricted to the publications of ex Home Depot Paint Department Managers, for they have a lot to say. Dr. V should confine himself to trips down drug-induced memory lanes and fantasies about tie-dyed t-shirts and cranial adornments.
You did hit on a potentially marketable product that possibly Stevo would be interested in getting involved in. Why not collect some of the best of the best AND worst of the worst of isicksigma posts and put them in a best selling book on SS that possibly MC might want to publish. There certainly are enough of each to fill many volumes and I bet we could outsell those stupid Dummy books. Maybe we could pattern the series after the Chicken Soup books. “Chicken Soup for the Six Sigma Soul” has a snappy title … what alliteration.
December 22, 2006 at 2:48 am #149430
Darth, you’re finally getting back in the swing of things! I was worrying about what the holidays were doing to you :). The dark force was starting to wane.
December 22, 2006 at 5:36 am #149432
Darth and Phil,
This forum is about Six Sigma topics. Your messages do not create any value for this forum. Please leave this forum; we do not need you.
Thanks
December 22, 2006 at 6:00 am #149434
Really????? Well, I hate to break it to you, but you’ll have to live with the likes of Darth and Phil. Just as you have to live with those in an organization who don’t buy into Six Sigma. Do you really think that the scientifically correct calculation of the capability for (lol) a non normal distribution (oops, now that is truly the most important question in six sigma) will allow you to fix the problems you are supposed to fix? Go and get a dose of reality, or even better jump on the bandwagon of reality. You’re welcome to ask me to leave, but this is the internet and your opinion ends at my keyboard. Get a value meal at McDonald’s; they’re really good on “value-add”. Happy whatever “Memets” celebrate :).
December 22, 2006 at 2:02 pm #149447
Hey Memet,
Here’s a challenge … let’s do a Forum archives search and see how much Phil and I have contributed over the years and how much you have contributed. If we find that you have contributed more, we will leave the Forum. If not, then you shut up and try contributing before asking others to leave. How about it? Are you up to the challenge?
December 22, 2006 at 4:11 pm #149459
Dear KM Date,
Thanks very much for your answer and many thanks to everyone on this forum who participated in answering my original question.
Merry Christmas to all
Vincent
The forum ‘General’ is closed to new topics and replies.