iSixSigma

p value and six sigma


Viewing 13 posts - 1 through 13 (of 13 total)
  • Author
    Posts
  • #32200

    fernando
    Participant

I don’t understand how a Six Sigma process (3.4 defects per million opportunities) can be compatible with hypothesis testing, where we accept a confidence level of 95%.
Can anybody explain this apparent contradiction to me?
    Fernando

    0
    #85663

    Annie
    Participant

    Hi,
I am but a wee Green Belt… but my understanding is that the p-value and the six sigma level are both about the standard deviation… Six Sigma is 99.7% (supposedly the best) and the p-value is at 95% – close enough to the six sigma value to decide that the hypothesis cannot be rejected, because it is statistically normal. As the p-value strays from the six sigma level… the hypothesis starts to fall apart… and must be rejected.
    Is that what you meant?

    0
    #85667

    Zilgo
    Member

What you need to remember is that hypothesis testing is separate from the dpmo. When you test a hypothesis, for example testing normality, you are saying to yourself that if the p-value is low enough then I can reject my null hypothesis. You decide what that level of significance is. Most of the time, people are satisfied with 95% confidence in hypothesis testing. 3.4 dpmo is the result of working with the standard normal distribution to determine what defect rate is synonymous with a six sigma process. The research was done, and the result of that was 3.4 dpmo. There are whole other discussions about the validity of that, but that is a different topic. You are trying to join together two things that need to be treated separately.
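
    Zilgo’s derivation of the 3.4 dpmo figure can be checked directly. A minimal sketch, assuming the conventional (and much-debated) 1.5-sigma long-term shift, so the one-sided tail is taken at 6 − 1.5 = 4.5 standard deviations:

    ```python
    # Sketch: derive the famous 3.4 DPMO figure from the standard normal
    # tail, assuming the conventional 1.5-sigma long-term shift.
    import math

    def normal_sf(z: float) -> float:
        """Upper-tail probability of the standard normal distribution."""
        return 0.5 * math.erfc(z / math.sqrt(2))

    sigma_level = 6.0
    long_term_shift = 1.5  # the widely used (and debated) Motorola shift

    dpmo = normal_sf(sigma_level - long_term_shift) * 1_000_000
    print(f"{dpmo:.2f} defects per million opportunities")  # about 3.40
    ```

    Without the shift, a true 6-sigma tail gives about 0.001 dpmo, which is why the shift assumption matters so much to the headline number.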

    0
    #85686

    Mikel
    Member

    Annie,
Nothing personal, but you got shortchanged on your training.
Six Sigma is 99.9997%.
95% confidence is used as a nice balance between cost and risk. It is a decision-making tool to decide what is important and what is not. A decision that something is important is a decision to learn more about it. Any decisions that involve cost, changes, liability, etc. are, at a minimum, replicated before they are implemented, to reduce risk.

    0
    #85688

    Spiderman
    Member

    Hi Fernando and Annie…
Stan and Zilgo got it right. The p-value and 95% confidence are really two different things. The p-value actually refers to the probability of making a type 1 error, should you reject the null hypothesis. And the threshold for the p-value varies from business to business depending on the accuracy needed. 95% confidence usually refers to the confidence that sample means, medians, etc. fall within a certain band for the population.

    0
    #85689

    Annie
    Participant

My bad! Not that I didn’t know that, but I gave the wrong percent; I was looking at my notes… at an example that had both p-values and sigma levels…
How about this… the p-value is the chance that you could be wrong if you reject your hypothesis… Bob, Mary and Jane all paint the same number of houses, different thicknesses, different timelines… the company wants to see if Mary is better than the rest… they might all be bad (none of them performing at a Six Sigma level). You run ANOVA and determine that, statistically, they all perform the same… you fail to reject the hypothesis that all three paint the same (even though you thought Mary was better).
or…
“If the P is low… the Ho must go!”
But… it is totally not fair for you to judge my training on a simple mistake…
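
    Annie’s painter example can be run end to end. A hypothetical sketch using scipy’s one-way ANOVA; the coating-thickness numbers below are invented purely for illustration:

    ```python
    # Hypothetical illustration of Annie's painter example: one-way ANOVA
    # on invented coating-thickness measurements (mm) for three painters.
    from scipy.stats import f_oneway

    bob  = [1.0, 1.2, 0.9, 1.1]
    mary = [1.1, 1.0, 1.2, 0.9]
    jane = [0.9, 1.1, 1.0, 1.2]

    f_stat, p_value = f_oneway(bob, mary, jane)

    alpha = 0.05
    if p_value > alpha:
        # Fail to reject the null: no evidence that Mary differs.
        print(f"p = {p_value:.3f}: the three painters look statistically the same")
    ```

    With a high p-value you fail to reject the null hypothesis that all three paint alike, exactly the outcome Annie describes.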

    0
    #85693

    Spiderman
    Member

Annie… YEAH… get that Ho out of there if the P be low!!
And don’t feel bad… GBs aren’t expected (at least in my business) to be very versed in the whole p-value, 95% jive… that’s why they have me… the MBB… to be… and to see… that GBs do the right deed!!

    0
    #85704

    Mikel
    Member

    Annie,
    I am basically not fair.
I did not judge the simple mistake, but that coupled with the statement about the p-value.
There is a load of bad training out there; you would not be the first (or last).

    0
    #85716

    Annie
    Participant

    Stan –
My mistake was looking at the wrong example and typing before I comprehended what I was looking at… not a mistake of my training…
I’m sorry you’re so cynical; glad you’re not on my team.
    Annie
     

    0
    #85720

    Chugh
    Participant

    Annie & Stan
This forum is for learning, not for off-topic debate.
Let’s learn from each other.
    [email protected]

    0
    #85725

    fernando
    Participant

I thank you all for your answers, but there’s still something that is not completely clear to me.
Let’s assume I have a set of data that look normally distributed, with a certain p-value. Let’s assume that the process capability for this set of data is Z=6 (dpmo=3.4). Now, I have assumed that my data are normal with a confidence of 95% (I suppose using alpha=0.05). If I have understood correctly, this means that the real process capability is not exactly dpmo=3.4, because I computed the process capability assuming normal data, while there’s a certain level of uncertainty about normality that translates into an uncertainty around 3.4 that I cannot estimate. If I perform a normality test with alpha=0.01, the uncertainty tends to reduce, and I’m safer about the process capability. Of course, from a practical standpoint nothing changes.
Is this right?
    Thanks again
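
    Fernando’s two-step scenario can be sketched numerically. A hypothetical example (the data are simulated, not real process measurements), showing that alpha is only the threshold the normality-test p-value is compared against:

    ```python
    # Hypothetical sketch of Fernando's scenario: test normality before
    # trusting a normal-based capability (Z / dpmo) calculation.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    data = rng.normal(loc=10.0, scale=0.5, size=200)  # simulated process data

    stat, p = stats.shapiro(data)  # Shapiro-Wilk normality test

    for alpha in (0.05, 0.01):
        verdict = "reject normality" if p < alpha else "treat data as normal"
        print(f"alpha={alpha}: p={p:.3f} -> {verdict}")
    ```

    Note that the same p-value is compared against both thresholds; choosing a smaller alpha only makes it harder to reject normality, it does not by itself quantify the uncertainty in the dpmo estimate.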

    0
    #85728

    abasu
    Participant

The greater source of error in Z=6 (dpmo=3.4) will come from the actual Z shift in your process, not from deviations from normality under a 95% assumption.

    0
    #85912

    Damodaran
    Member

Let me just give you a brief overview of how the two are used.
There is a particular process “A” requiring improvement and producing a dpmo of, say, 20,000. You carry out improvements (or simulations, as the case may be) with x’s arising out of your fishbone diagram / FMEA / brainstorming, etc. You carry out a piloted solution and get a dpmo of 10,000.
Does this mean there is an improvement in the process? Maybe or maybe not, as the sample sizes are different. The data before improvement could have been for a period of three months or more. The pilot-solution data could be for a handful of samples, or maybe a week’s production. This is where you require a t-test. A p-value of less than 0.05 indicates that there is a significant difference, indicating process improvement. With this result, you can confidently (with 95% confidence) go in for a change of process. If the p-value is greater than 0.05, there is statistically no difference between the two lots, and you need to look at different solutions.
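
    Damodaran’s before/after comparison can be sketched in code. He mentions a t-test; since dpmo is a defect proportion, a two-proportion z-test is one common alternative for pass/fail data, and that is what is shown here. The counts below are hypothetical, chosen to match his 20,000 vs 10,000 dpmo figures:

    ```python
    # Hypothetical two-proportion z-test: did the pilot really lower the
    # defect rate? Counts are invented to match 20,000 vs 10,000 dpmo.
    import math

    def normal_sf(z: float) -> float:
        """Upper-tail probability of the standard normal distribution."""
        return 0.5 * math.erfc(z / math.sqrt(2))

    # Before: 60 defects in 3,000 opportunities (20,000 dpmo).
    # Pilot:  10 defects in 1,000 opportunities (10,000 dpmo).
    d1, n1 = 60, 3000
    d2, n2 = 10, 1000

    p1, p2 = d1 / n1, d2 / n2
    pooled = (d1 + d2) / (n1 + n2)          # pooled defect proportion
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * normal_sf(abs(z))          # two-sided p-value

    print(f"z = {z:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("Significant at 95% confidence: the pilot improved the process")
    ```

    With these particular (invented) counts the difference is significant at alpha = 0.05; with far fewer pilot samples the same dpmo drop could easily fail to reach significance, which is exactly Damodaran’s point about sample size.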

    0

The forum ‘General’ is closed to new topics and replies.