iSixSigma

Capability for a non normal distribution

Six Sigma – iSixSigma Forums Old Forums General Capability for a non normal distribution

Viewing 60 posts - 1 through 60 (of 60 total)
  • Author
    Posts
  • #45572

    Quainoo
    Member

    Hello everyone,
    I am working on an inventory reduction project.
    I have calculated how many months of inventory are currently in stock for each item.
    The distribution is not normal.
    My specification limits are: min: 0 and max: 2 (months of inventory).
    Since the data is not normal, I used the following procedure to calculate the capability.
    Total number of references: 2517
    Number of items with more than 2 months of inventory: 1783
    DPU: 0.71 (*)
    Z score: 0.37
    Cp (**): 0.12 (Z/3)
    (*) Because 0.71 represents both sides of the curve and I am looking at a Z table that takes into account only one side, it makes sense to me to divide this value by 2 (0.35) to get the Z value.

    (**) I have a feeling that the result is more a Cp than a Cpk, but I am not sure.
    I would like to have your input on the above methodology.
    If it is not correct, I would appreciate knowing the right way to calculate the capability in this situation.
    Thanks
    Vincent
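    For anyone following along, the conventional conversion from a nonconforming fraction to a Z value uses the inverse standard normal CDF directly, with no halving. A minimal sketch in Python (standard library only; the counts are the ones from the post above, and the conversion assumes approximate normality, which the poster says does not hold):

```python
from statistics import NormalDist

# Counts from the post: 1783 of 2517 items exceed the 2-month limit.
total = 2517
nonconforming = 1783
p = nonconforming / total  # fraction outside spec, about 0.71

# Conventional one-sided conversion: Z = Phi^-1(1 - p).
# With p > 0.5 the Z value comes out negative, i.e. more than
# half the output is outside the limit.
z = NormalDist().inv_cdf(1 - p)
print(f"p = {p:.3f}, Z = {z:.2f}")
```

    Note the result is a long-term Z near -0.55, not +0.37; dividing the defect fraction by 2 before the table lookup is what produces the discrepancy.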

    0
    #149282

    Robert Butler
    Participant

    The post below and the discussion thread that follows it may provide some answers to your question.
    https://www.isixsigma.com/forum/showmessage.asp?messageID=17743

    0
    #149284

    The Force
    Member

    use weibull to determine capability for non-normal data

    0
    #149289

    Theo
    Member

    Capability means consistently capable of meeting specification.  To do this a process must be in control.  It makes no difference what type of distribution the data has … and you will never exactly know it anyway.

    0
    #149291

    Robert Butler
    Participant

      With respect to standard capability calculations the statement “It makes no difference what type of distribution the data has … and you will never exactly know it anyway.” is in error.
     The assumption of normality is central to the Cpk calculation.  The post below has the details.
    https://www.isixsigma.com/forum/showmessage.asp?messageID=44476
      As for the second part – the issue is not that of knowing a distribution exactly; the issue is being able to say with a degree of certainty that the data presented can be treated as though the parent distribution was (fill in the blank).

    0
    #149292

    Ashman
    Member

    How many real world histograms have you ever looked at ?
    How many even remotely look like a normal distribution ?
    Take 100 random normally distributed points and create a histogram … it won’t look anything like a normal distribution.
    A Pearson lack-of-fit test will test the fit of 100 data points out to just +/-1.65 sigma.
    Try some time based distributions … they are very asymmetric and often bimodal.  Transforms don’t help such data.
    Forget about normal distributions !!!! 

    0
    #149293

    Robert Butler
    Participant

    How many real world histograms have you ever looked at ?
    Answer: Quite a few, but in and of themselves they aren’t worth much – it’s too easy to change the binning to modify their shape; as a result you can make them look like almost anything you choose. 
      The way you visually examine a distribution is to plot it on normal probability paper (or its computer graphic equivalent).  The way you give yourself a sense of just how non-normal perfectly normal data can look is to generate repeated samples of 10, 20, 30, 60, and 120 points using a generator of random numbers from a normal distribution and plot the results on normal probability paper.
    How many even remotely look like a normal distribution ?
      Answer: Once you have generated the graphs in #1 above you will realize that they can be very far from the ideal normal curve as displayed on the old ten Deutschmark note.  But once you combine the visuals with various tests you will see that even when they don’t visually meet your ideal of what a normal should look like, they still pass muster with respect to being viewed as data points from a normal parent distribution.
    As far as forgetting about normal distributions – I wouldn’t recommend this. 
    1. If you choose to ignore the normal distribution then you are choosing to ignore all of residual analysis – this means you are choosing to ignore most of the guidelines surrounding the tests for significant terms in a regression model. This, in turn, means you are choosing to identify significant terms or insignificant terms….how? This also means you are going to determine model adequacy…how?
    2. If you choose to ignore the normal distribution then you will go wrong with great assurance with respect to capability calculations if your data is non-normal (Bothe Chapter 8 has the details).
    3. If you ignore the normal distribution you run the risk of misapplying any number of statistical methods.
    4. If you ignore the normal distribution then you are walking away from the utility of the concept of central tendency.
      On the other hand, I wouldn’t stand around and assume that if my data failed a normality test that all was lost. 
     As far as time based distributions are concerned – don’t know the term- could you give some examples?

    0
    #149295

    Ashman
    Member

    There are many real world distributions that are very non normal.  Have you ever looked at time based processes for example … call centres, help desks etc  ?
    If Bothe is your hero, no wonder you are confused !!!
     

    0
    #149296

    Markert
    Participant

    Davis R. Bothe wrote an article entitled “Statistical Reason for the 1.5 sigma Shift,” in ASQ’s journal Quality Engineering (Vol. 14, No. 3, MARCH 2002, pp. 479-487). There he addressed the question why six sigma followers add a 1.5 sigma shift to the average before estimating process capability.  If Bothe is the basis for your suggestions, you are a very poor statistician. I assume that you are also a believer in the 1.5 shift ?  

    0
    #149297

    Sigmordial
    Member

    Hi Steve,
    If Robert Butler is confused, then sign me up for confusion.  I have read quite a number of his posts, and he has been spot on.  He has also been cordial to the rare snipes.
    Kudos and thanks to Robert for his participation!

    0
    #149298

    Robert Butler
    Participant

      I didn’t think the issue was whether or not there were real world distributions other than the normal. I thought the issue, as presented, was that we were supposed to forget about the normal.
      Based on what you have posted it would appear that what you mean by a time based distribution is the Poisson. As noted in the literature the Poisson is a good representation of the distribution of data that arises from the measure of events that occur over equal intervals of time (or space). There are others that are far more extreme. For example, most of my work is with data that can best be described as having underlying distributions that are either binary, Weibull, extreme value, log normal, or ZIP (zero inflated Poisson). 
        I’ve never met Bothe so I don’t know a thing about him.  What I do know is that his book Measuring Process Capability is the best reference I’ve seen on the subject and, as I mentioned, Chapter 8 is an excellent discussion of computation of process capability in the presence of non-normal data.  On that same line, Chapter 9 does a great job of capability calculations when dealing with attribute data.

    0
    #149299

    Robert Butler
    Participant

    Well Phil, rather than go into all of the 1.5 stuff again please just read what I posted on that subject a long time ago:
    https://www.isixsigma.com/forum/showmessage.asp?messageID=39663
    Actually, all Bothe showed was that if you made a series of assumptions about the process you could come up with 1.5.  The big point he made in his paper was that if any of these assumptions were violated then all bets were off. 

    0
    #149300

    Ashman
    Member

    You seem to be saying that you agree that the 1.5 is crap but you do believe Bothe’s use of it in calculating capability !!!!????

    0
    #149302

    Ashman
    Member

    Poisson is a theoretical distribution that can relate to time based processes.  Real world processes are quite different.  If there is a way to post graphics, I could post some help desk data … you could play around for days trying to fit distributions … but for what purpose !? 
    The meaning is in the data itself, no matter how skewed, lumpy or whatever, not in attempting to fit distributions to it. 
    The aim should be to use the histogram to gain insight into whatever the process is.

    0
    #149303

    Savage
    Participant

    Monte Carlo simulation uses distribution fitting.  Just curious, why do you have such heartburn over this?  Are you just a fundamentalist?

    0
    #149304

    V.
    Member

    I fully agree with the comments you made.   Robert should be the forum and the field’s standard for both professional deportment and accuracy of input versus the railed at exception.   I always enjoy reading and learning from Robert’s postings.  
    Dr. V.
     
      

    0
    #149305

    Hal
    Participant

    Read “Normality and the Process Behaviour Chart”.
    It will give you a new (non normal) way of looking at processes.

    0
    #149307

    anon
    Participant

    I wonder if we can analyse a distn here for a help desk service times … non normal of course
    ooooooox
    ooooxxx
    oooxxxxxxx
    ooxxxxxxxxxxxx
    oxxxxxxxxxxxxxxxxxoooxoxooooooooxoooxxxxoox
    xxxxxxxxxxxxxxxxxxxoxxxxxxxoxxoooxxxxxxxxxooxoox
     

    0
    #149308

    Markert
    Participant

    A Dr. V learning from an M.S? That’s not much of a Dr. now, is it? 

    0
    #149311

    anon
    Participant

    Mmmm… that didn’t work so well.  However, I have found an interesting exercise on non normal distributions:
    http://www.q-skills.com/nm/nm.htm
     

    0
    #149312

    Robert Butler
    Participant

     Regarding this statement “You seem to be saying that you agree that the 1.5 is crap but you do believe Bothe’s use of it in calculating capability !!!!????”
      I guess I don’t see how the two thoughts are connected.  Bothe wrote an article which, as far as I’m concerned, put paid to any idea that 1.5 had any generalizable merit.  His book is about the issues surrounding process capability. I looked over Chapter 8 (the one relevant to the initial poster’s question) and I can’t find a thing concerning 1.5 when computing the equivalent 6 sigma spread.
      Just so we don’t leave Vincent in the lurch – if we assume your process is reasonably stable and if your data is non-normal and if you have to provide some estimate of process capability then the method I would recommend, and the method I’ve used many times, is the one outlined in Chapter 8 of the Bothe book. 
      If you don’t have ready access to the book the idea is this: take your data and plot it on normal probability paper and identify the 0.135 and 99.865 percentile values (Z = ±3).  The difference between these two values is the span for producing the middle 99.73% of the process output.  This is the equivalent 6 sigma spread and you can substitute this value in the equation for computing an equivalent capability index.
      As for the sidebar discussion about distributions all I can do is reiterate what I said at the beginning – the issue is not that of knowing a distribution exactly (nor of fitting a distribution exactly); the issue is being able to say with some degree of certainty that the data presented can be treated as though the parent distribution was (fill in the blank).
      If I know my data can be treated as though it came from some parent distribution then I have some assurance that the data does not violate the basic theoretical assumptions of a given test/analysis. This, in turn will mean I will be that much more certain that the conclusions I draw from the results of the test/analysis will be of use with respect to addressing whatever problem I was asked to solve.
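    The percentile method Robert outlines can be sketched numerically. This is only an illustration of the idea (the data here is a made-up skewed lognormal stand-in for months-of-inventory, and Bothe reads the percentiles off a fitted probability plot rather than raw empirical quantiles as done below):

```python
import random

def percentile(data, p):
    """Empirical percentile (0-100 scale) with linear interpolation."""
    s = sorted(data)
    k = (len(s) - 1) * p / 100.0
    lo, hi = int(k), min(int(k) + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

random.seed(7)
# Hypothetical skewed data, same sample size as the original post.
data = [random.lognormvariate(0, 0.6) for _ in range(2517)]

lsl, usl = 0.0, 2.0               # spec limits from the post
p_lo = percentile(data, 0.135)    # Z = -3 equivalent
p_hi = percentile(data, 99.865)   # Z = +3 equivalent
spread = p_hi - p_lo              # span of the middle 99.73% of output

cp_equiv = (usl - lsl) / spread   # equivalent capability index
print(f"middle 99.73% span = {spread:.2f}, equivalent Cp = {cp_equiv:.2f}")
```

    The substitution is exactly the one Robert describes: the empirical 99.73% span stands in for 6 sigma in the usual capability formula.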

    0
    #149315

    Prof Deventer
    Participant

    Sounds good.  You don’t support the 1.5 or six sigma tables.  And it sounds as good an approach as any other in defining a capability – after all these are just numbers.  The traditional calculation of capability is just as good. We can calculate any numbers we like as long as we attribute proper meaning and utility to them.
    Unfortunately as you know, it takes an extraordinarily large amount of data to make any assumptions about the parent distribution.  Most commonly, this is not some “ideal theoretical” curve anyway.  Rather than curve fitting, we can learn much more by considering why the histogram looks the way it looks … for example rather than trying to fit some theoretical distribution to a bimodal histogram, we may look for possible mixed data from 2 sources.
    We should always have a purpose.  For example, what is the real purpose in calculating capability, and what is the purpose in using this or that formula?  For example, there is no real purpose or benefit in plotting a bimodal data set on normal probability paper.
    It is easy for people to get lost in doing all sorts of calcs and transforms and manipulations, without thinking about what the data is really saying.  The aim is not to generate numbers. The aim is to listen to what the process is saying and discover how to improve it.

    0
    #149320

    V.
    Member

    That was not much of a question.    Quite evidently not one familiar with what you do or don’t know, are you?      Dr. V.

    0
    #149323

    Robert Butler
    Participant

      The statement “For example, there is no real purpose or benefit in plotting a bimodal data set on normal probability paper.” is in error and may indeed be at the heart of a lot of the misunderstanding in this thread.
      If I have a bimodal distribution such that the bimodality consists of two separate and distinct peaks then it probably doesn’t matter whether I plot it as a histogram or plot it on normal probability paper.  On the other hand, if the two distributions have a large degree of overlap then it is very easy to have binning in a histogram routine that will hide this fact and lead you to the false conclusion that bimodality isn’t present. 
      Take that same data and plot it on normal probability paper and the bimodal signature is unmistakable.  Consequently, the real benefit of running a normal probability plot on a block of unknown data isn’t to see how close it falls to the ideal line but rather to see its shape and use what you see to make a judgment call with respect to the proper treatment of the data.
      Steve gives the impression he deals with data that could be viewed as possibly having an underlying Poisson or perhaps Negative Binomial distribution.  Assuming he has some kind of package that allows him to generate Poisson distributions with different lambda values it would be worth his time to generate and plot a series of these and keep them as a reference.  When he gets an unknown block of data, run a normal probability plot and check it against the ideal plots – if it’s close then he probably can’t go too far wrong if he treats the data as having come from a Poisson … and if it isn’t then he will have to think about it some more.

    0
    #149324

    Markert
    Participant

    You’re welcome, always nice to see your fluffy rear end :-)))).

    0
    #149325

    Dr. V.
    Participant

    I’d imagine so, as you appear to be attracted to fluffy rear ends.         
         

    0
    #149333

    Markert
    Participant

    Not much intellectual sharpness in that response either “Dr.” V.

    0
    #149335

    V.
    Member

    Can’t argue with you there.  
     
    If it would offend your sensitivities less, in the future when complementing a forum contributor on his/her consistency in responding in a professional and informative manner, I could leave off the “Dr.”   
     
    I understand both how in some circles it’s almost an expected title prefixed to a name and in other circles it’s not.  Apparently it’s a “not” for you so I’ll work with you and leave it off.    But in my doing so you’ll just have to accept the “V.” is for valedictorian of some really good schools with very rigorous programs – hope that’s not also offensive to you. 
     
    V.          

    0
    #149342

    Prof Deventer
    Participant

    You don’t need to plot on probability paper to detect a bimodal distribution.  Such plotting is an exercise in futility.
    Curve fitting achieves no practical purpose.  For all your lengthy diatribe, you have yet to ascribe any benefit in attempting to fit distributions.

    0
    #149343

    Savage
    Participant

    Monte Carlo simulation uses distribution fitting.

    0
    #149347

    Markert
    Participant

    Your posts are at a reading level of a 10th grader. So, that’s an accomplishment! Now let’s work on your grammar: If it would offend your sensitivities less, in the future when complementing a forum contributor on his/her consistency in responding in a professional and informative manner, I could leave off the “Dr.” . You are only a few steps away from achieving your dream: That we all believe that you are a true “valedictorian of some really good schools”. How can I be offended? You have a dream. One day your dream will turn into a true “PhD.”. Until then, keep that acronym. It describes you well.  

    0
    #149348

    Robert Butler
    Participant

      I guess I don’t see where I was advocating fitting distributions. What I said was:
    “all I can do is reiterate what I said at the beginning – the issue is not that of knowing a distribution exactly (nor of fitting a distribution exactly) the issue is being able to say with some degree of certainty that the data presented can be treated as though the parent distribution was (fill in the blank).
     
      If I know my data can be treated as though it came from some parent distribution then I have some assurance that the data does not violate the basic theoretical assumptions of a given test/analysis. This, in turn will mean I will be that much more certain that the conclusions I draw from the results of the test/analysis will be of use with respect to addressing whatever problem I was asked.”

    As far as believing probability plots of data are futile with respect to identifying a bimodal (or any other kind of) distribution – you are free to believe as you wish.  My experience is to the contrary.

    0
    #149352

    Ropp
    Participant

    Phil,
        I tried to have input regarding the capability issue a couple of nights ago, i have read some of your replies, you are rude, in fact you are a twat and hide behind the fact you are well educated in the subject we speak about, i am not and still learnig, but i know i will get there.
    I can say without a doubt, you along with div(darth),steve are the biggest prats i have ever had to communicate with, you kill this subject because you like to jump onto others errors,lack of knowledge,i could go on but you and the other two di.k heads are not worth it.
    pick all the grammer mistakes out of this post you tnuc.
     
     Regards,
     
     Dave
     
     

    0
    #149355

    V.
    Member

    Don’t worry Dave.   Phil’s not worth it.   He has little imagination and even less capability.   If all he can do is hide under the anonymity of a forum like this and take little potshots at those trying to communicate he’s too wrapped up in his own insecurity to do other than snipe at you.   His is the mark of a tiny mind. 
     
    Darth contributes to the forum, but he has little patience with those he perceives as not working on their own to come up with an answer.  

    0
    #149356

    anon
    Participant

    I would be interested to see your curve fitting on this example.  It is an exercise that always seems to give very skewed, non normal data. With the second option, my histogram was almost triangular but the standard XmR control chart worked well, with just 2 points out of limits, as I might have expected. 
    Click on the histogram to view the actual data.
    http://www.q-skills.com/nm/nm.htm

    0
    #149357

    Markert
    Participant

    Now that’s an interesting little twist in this cute soap opera: An anonymous Vincent initiated the thread, but now an equally anonymous Dave claims that he wanted to “have input regarding the capability issue a couple of nights ago”. Then miraculously Dr. V appears who according to some posts a few months ago only “drops in to see what’s going on and evaluate the intellectual state of affairs from his aloof position as a true expert”. Now, he is dropping the “Dr” because his obviously poor grammar does not add up to a venerable “PhD”. And Dave who at one point obviously was Vincent gets support from V (out of nowhere). Damn, when the curtain sinks and the lights get turned on in this big theater called “internet”. You are more than welcome to call me “rude”, a “twat”, a “tnuc” or put me into the venerable tradition of “di.k heads” such as Darth! Good luck in the pursuit of the educational goals of all three of your split personalities.
     

    0
    #149361

    Darth
    Participant

    Whoa Phil…….  That insult/compliment was uncalled for.  It has become apparent from the recent threads that you are not the True Dr. Phil whose long history of witty, insightful, humorous and knowledgeable posts have entertained the Forum for a long time.  You are either a sham posing as the revered Dr. Phil or the old Dr. Phil from Home Depot in need of a serious intervention from Dr. Barry of the Bench by the Bay.  Your strident and lowered standards of witty repartee are not what we would expect from the true Phil.  I reject this Phil and call on the Almighty G_d of Six Sigma to cast out the devil in you and return to us the real Phil we loved and respected.  Let us all join hands and send positive feelings to Phil in his darkest moments.

    0
    #149362

    Garth Edwards
    Participant

    “The assumption of normality is central to the Cpk calculation”
    Most will realise the above statement is utter BS.
    Cpk = (1-K) T / 6s  … as hundreds of texts will tell you.
    Cp and Cpk give general indications as to the state of a process compared to spec limits. These are not absolute figures in any way. 
    They attempt to compare “the voice of the process” with “the voice of the customer”.  
    There are measures of non normality such as skewness and kurtosis.  Calculating these may be of interest but is of no real benefit in process management.

    0
    #149363

    Robert Butler
    Participant

    No Garth, I’m afraid most won’t.  See the post below for details and citations.
    https://www.isixsigma.com/forum/showmessage.asp?messageID=44476
    Anon, I don’t know where all of these notions concerning curve fitting are coming from. All I can do is refer you to the post below and ask that you re-read the discussion.
    https://www.isixsigma.com/forum/showmessage.asp?messageID=108908

    0
    #149365

    Markert
    Participant

    Sorry, but that is not the linguistic style of the “venerable” Darth. Try again!

    0
    #149368

    mand
    Member

    So just how normal does data have to be … we could use a Pearson’s lack-of-fit test out to 3 sigma … that would take about 3500 data points.
    The only place you will ever see a perfect normal distribution is in a text book.
    Set up a normal random number generator using Excel and plot histograms … they will never look normal unless you use thousands of data points.
    Real world distributions are never normal.

    0
    #149383

    Robert Butler
    Participant

      One rather interesting side effect of this thread is that I’m getting to the point where I almost know my prior posting ID numbers by heart. What concerns me is that a lot of this discussion seems to be centered on what can only be described as a complete misunderstanding of the issues surrounding the normal (or any other distribution).
    In responding, I don’t mean to single out Sam; his post is just the latest along this line.  So, once again –
    “the issue is not that of knowing a distribution exactly (nor of fitting a distribution exactly) the issue is being able to say with some degree of certainty that the data presented can be treated as though the parent distribution was (fill in the blank).
      If I know my data can be treated as though it came from some parent distribution then I have some assurance that the data does not violate the basic theoretical assumptions of a given test/analysis. This, in turn will mean I will be that much more certain that the conclusions I draw from the results of the test/analysis will be of use with respect to addressing whatever problem I was asked to solve.”
      Perhaps a concrete example will help:
    You are asked to build an experimental design. You have 3 factors and your budget says you can run a total of 9 experiments.  You run a full 2 level factorial with a single point repeated.  Your boss wants a predictive equation relating the three variables of interest to the process output.  You go out and do….what?  Well, an obvious choice would be to run a stepwise regression in order to identify the variables that are significant.  So let’s say you do this. 
    Question: If we try to view the question as Sam and others on this thread have attempted to do then how do we go about identifying significant terms and building our model?
    The answer is – we can’t because one of the basic assumptions of residual analysis which is central to the issues of tests of variable significance is that the residuals are normally distributed random variables.
      So, assuming our residual analysis didn’t show any obvious trends, how normal do they have to be?  I guess as normal as 9 data points can be… and how normal is this – well, you could run the data through a bunch of tests and, with that sample size, you will probably fail most of them, or you could engage in what was described as an exercise in futility and plot them on a normal probability plot and visually compare their plot to other nine point plots of data drawn from a known normal.  If it looks “reasonable” (and obviously this is a judgment call) then at the end you will have to ask not – is the data normal – but, as I said above, can it be treated as though it came from a population that was?
     

    0
    #149385

    Dr. V.
    Participant

    Darth, you’ve apparently mistaken Phil The Cunning Linguist for Dr. Phil The Witty.     Dr. Phil The Witty interspersed humor with Six Sigma practice insights and Phil The Cunning Linguist does neither.    
     
    I concur with your call to cast out the imposter and assume you were beseeching the almighty Mike Carnell to hurl him bodily from the field.
     
    Dr. V.   (not related even electronically to Vincent nor Dave despite Phil The Cunning Linguist’s pale analytical attempts at linkage – I hope that’s not an example of his Six Sigma project work)            

    0
    #149387

    Darth
    Participant

    Alas, it is truly I, the caped evil one, owner of the Vadermobile, attempted slayer of the Evil Princess, etc….  Style is more mellow due to the holiday season and the fact that I am on vacation the rest of the year. 

    0
    #149388

    K.M.Date
    Participant

    Dear Vincent,
    1. Although you are mentioning that the specs are Min: 0 m and Max: 2 m, in reality there is only a single spec, that is Max: 2 m! This is because even if one does not explicitly specify Min as 0, we would like/desire to see inventory levels close to zero as far as possible, provided the replenishment happens very very quickly. Cp is not relevant in such situations. Only Cpk is to be calculated. However, if the replenishment can not be done that fast, we need to arrive at the right Min value after studying the replenishment cycle. In which case, both Min and Max specs will come into picture and both Cp and Cpk will then become relevant.
    2. Assuming Max spec only
    Cpk = (Max value – Average of 2517 reference values)/(3 times the standard deviation of the reference values).
    3. Assuming both a non zero Min and a Max, the Cp and Cpk are:
    Cp= (Max – Min)/6 times standard deviation referred to above, and
    Cpk = Minimum of {(Max – Avg.)/3 times std. dev, and (Avg. – Min)/3 times std. dev.}
    Formulas 2. and 3. are valid provided the distribution of the values has a single mode (Histogram/Dot Plot showing a single hump), and the shape of the distribution does not depart greatly from the Normal Distribution shape.
    Hope that clarifies your query.
    Best of Luck
    K.M.Date
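    K.M.Date’s formulas in 3. translate directly into code. A quick sketch (the sample below is made up just to exercise the formulas, and it uses the overall sample standard deviation, whereas capability work often uses a within-subgroup estimate):

```python
from statistics import mean, stdev

def cp_cpk(data, lsl, usl):
    """Cp and Cpk as given in the post: (Max-Min)/6s and
    min((Max-Avg)/3s, (Avg-Min)/3s)."""
    avg, s = mean(data), stdev(data)
    cp = (usl - lsl) / (6 * s)
    cpk = min((usl - avg) / (3 * s), (avg - lsl) / (3 * s))
    return cp, cpk

# Hypothetical months-of-inventory sample, specs from the thread.
sample = [0.5, 1.0, 1.2, 0.8, 1.5, 2.2, 0.9, 1.1, 1.8, 0.7]
cp, cpk = cp_cpk(sample, lsl=0.0, usl=2.0)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```

    As the post notes, these formulas carry the usual caveat: they presume a single-humped, roughly normal shape, which is exactly what Vincent says his data lacks.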

    0
    #149405

    Markert
    Participant

    The fact that a “valedictorian” such as Dr. V would even respond to a totally nonsensical thread of some “Phil” just makes me burst out in laughter. Anyway, Happy Holidays!

    0
    #149407

    Dr. V.
    Participant

    You think that even an accomplished academician turned industrial leader can’t enjoy engaging in slanderous repartee occasionally?    My friend, you must lead a lonely and insular cube-dwelling existence.
     
    Phil, instead of feeling anxious inadequacy-based separation and attacking when you see the appended and appropriate title “Dr.” you might try a multi-step recovery program beginning with just breathing deeply and letting it go, moving toward seeking common ground, and winding up possibly even hugging and taking a PhD to lunch – a nice Chicken Caesar Salad would be nice.   
     
    This will have to be my last posting for awhile as I am heading off for a nice holiday vacation, but I look fondly toward the upcoming New Year in which you can demonstrate a new found restraint, balance and tolerance toward others.    
     
    I wish you and yours a very Merry Christmas and a wonderfully prosperous and Happy New Year.   
     
    Dr. V.

    0
    #149408

    Ropp
    Participant

    Hello V,
            In case phil is around, are we talking to ourselves, i think we have got to him.
     
     Can’t wait for the reply off him!!!
     
     I never thought i would lower myself to his level!!!
     
     regards,
     
     Dave

    0
    #149412

    mand
    Member

    You must be joking.  Do you really believe that you could determine that a set of 9 data points is from a normal distribution and not one of the other myriad of distributions ?
    50 data points are needed to test it out to +/-1.28 sigma.

    0
    #149414

    Darth
    Participant

    Of course you can.  We see BBs do it all the time and lo and behold the nine data points always shows a high p value on the normality test.  Must be OK then if JMP and Mini do the calculations and the p value is high.

    0
    #149415

    Markert
    Participant

    Dave, you have a Happy Holidays too. May Santa Claus send you a large selection of “blabla” for Dummies … :-))))))))))))))))))))))

    0
    #149420

    Ropp
    Participant

    Phil,
       You have a good one too mate!
     
     Regards,
     Dave
    p.s. i hope i get the beginners guide to english

    0
    #149424

    Darth
    Participant

    Begone Phalse Phil and start your eggnogging extravaganza.

    0
    #149427

    Markert
    Participant

    Big hug … and keep building your long list of publications by posting on the distinguished isixsigma forum. The collection of posts on groundbreaking topics such as the “capability for a Non Normal Distribution” will look very impressive on the CV of all the renowned Drs. who publish so profusely on this site …LOLOLOLOLOLOL.

    0
    #149428

    Darth
    Participant

    You make a good point Phalse Phil.  This Forum should be restricted to the publications of ex Home Depot Paint Department Managers for they have a lot to say.  Dr. V should confine himself to trips down drug induced memory lanes and fantasies about tie dyed t shirts and cranial adornments.
    You did hit on a potential marketable product that possibly Stevo would be interested in getting involved in.  Why not collect some of the best of the best AND worst of the worst of isicksigma posts and put them in a best selling book on SS that possibly MC might want to publish.  There certainly are enough of each to fill many volumes and I bet we could outsell those stupid Dummy books.  Maybe we could pattern the series after the Chicken Soup book.  “Chicken Soup for the Six Sigma Soul” has a snappy title….what alliteration. 

    0
    #149430

    Markert
    Participant

    Darth, you’re finally getting back in the swing of things! I was worrying about what the holidays were doing to you :-).  The dark force was starting to wane.

    0
    #149432

    Memet
    Participant

    Darth and Phil,
    This forum is about Six Sigma topics. Your messages do not create any value for this forum. Please leave this forum we do not need you.
    Thanks

    0
    #149434

    Markert
    Participant

    Really????? Well, I hate to break it to you, but you’ll have to live with the likes of Darth and Phil. Just as you have to live with those in an organization who don’t buy into Six Sigma. Do you really think that the scientifically correct calculation of the capability for (lol) a non normal distribution (oops, now that is truly the most important question in six sigma) will allow you to fix the problems you are supposed to fix? Go and get a dose of reality, or even better jump on the bandwagon of reality. You’re welcome to ask me to leave, but this is the internet and your opinion ends at my keyboard. Get a value meal at McDonald’s, they’re really good on “value-add”. Happy, whatever “Memets” celebrate :-).

    0
    #149447

    Darth
    Participant

    Hey Memet,
    Here’s a challenge…..let’s do a Forum archives search and see how much Phil and I’ve contributed over the years and how much you have contributed.  If we find that you have contributed more, we will leave the Forum.  If not, then you shut up and try contributing before asking others to leave.  How about it?  Are you up to the challenge?

    0
    #149459

    Quainoo
    Member

    Dear KM Date,
    Thanks very much for your answer and many thanks to everyone on this forum who participated in answering my original question.
    Merry Christmas to all
    Vincent
     
     

    0

The forum ‘General’ is closed to new topics and replies.