p value and six sigma
 This topic has 12 replies, 8 voices, and was last updated 19 years, 3 months ago by Damodaran.


May 8, 2003 at 6:36 am #32200
fernando (Participant)
I don’t understand how a six sigma process (3.4 defects per million opportunities) can be compatible with hypothesis testing, where we accept a confidence level of 95%.
Can anybody explain this apparent contradiction to me?
Fernando

May 8, 2003 at 1:50 pm #85663
Hi,
I am but a wee Green Belt… but my understanding is that the p-value and the six sigma level are both about the standard deviation… Six Sigma is 99.7% (supposedly the best) and the p-value is at 95%, close enough to the six sigma value to decide that the hypothesis cannot be rejected, because it is statistically normal. As the p-value strays from the six sigma level… the hypothesis starts to fall apart… and must be rejected.
Is that what you meant?

May 8, 2003 at 2:01 pm #85667
What you need to remember is that hypothesis testing is separate from DPMO. When you test a hypothesis, for example testing normality, you are saying to yourself that if the p-value is low enough, then I can reject my null hypothesis. You decide what that level of significance is. Most of the time, people are satisfied with 95% confidence in hypothesis testing. 3.4 DPMO is the result of working with the standard normal distribution to determine what defect rate is synonymous with a six sigma process. The research was done, and the result of that was 3.4 DPMO. There are whole other discussions about the validity of that, but that is a different topic. You are trying to join together two things that need to be treated separately.
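The point that 3.4 DPMO is just a tail-area calculation on the standard normal, separate from any hypothesis test, can be reproduced in a few lines. A minimal sketch using scipy, assuming the conventional 1.5-sigma long-term shift that underlies the 3.4 figure:

```python
# The "six sigma = 3.4 DPMO" figure comes from the conventional 1.5-sigma
# long-term shift: a process 6 sigma from the nearest spec limit is assumed
# to drift over time, leaving only 4.5 sigma of margin long term.
from scipy.stats import norm

def dpmo(z_short_term, shift=1.5):
    """Defects per million opportunities for a one-sided spec limit."""
    return norm.sf(z_short_term - shift) * 1_000_000

print(round(dpmo(6), 1))   # six sigma process: about 3.4 DPMO
print(round(dpmo(4.5)))    # 4.5 sigma process: about 1350 DPMO
```

This matches the standard sigma-level tables, and it has nothing to do with the alpha you choose for a hypothesis test.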
May 8, 2003 at 5:06 pm #85686
Annie,
Nothing personal, but you got short changed on your training.
Six Sigma is 99.9997%.
95% confidence is used as a nice balance between cost and risk. It is a decision-making tool to decide what is important and what is not. A decision that something is important is a decision to learn more about it. Any decisions that involve cost, changes, liability, etc. are, at a minimum, replicated before they are implemented to reduce risk.

May 8, 2003 at 5:23 pm #85688
Spiderman (Member)
Hi Fernando and Annie…
Stan and Zilgo got it right. The p-value and 95% confidence are really two different things. The p-value is the probability, assuming the null hypothesis is true, of seeing data at least as extreme as yours; in other words, it is your risk of a type 1 error should you reject the null. The significance threshold varies from business to business depending on the accuracy needed. 95% confidence usually refers to the confidence that sample means, medians, etc. fall within a certain band for the population.

May 8, 2003 at 5:33 pm #85689
My bad! Not that I didn’t know that, but because I gave the wrong percent; I was looking at my notes… at an example that had both p-values and sigma levels…
How about this… the p-value is the chance that you could be wrong if you reject your hypothesis… Bob, Mary, and Jane all paint the same number of houses, different thicknesses, different timelines… the company wants to see if Mary is better than the rest… they might all be bad (none of them performing at a six sigma level). You run ANOVA and determine that, statistically, they all perform the same… you accept the hypothesis that all three paint the same (even though you thought Mary was better).
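The painters scenario above maps directly onto a one-way ANOVA. A sketch with made-up thickness data (all numbers hypothetical, chosen so the three painters come out statistically the same):

```python
# One-way ANOVA on hypothetical paint-thickness data (mils) for three
# painters. A large p-value means we fail to reject the null hypothesis
# that all three paint to the same mean thickness.
from scipy.stats import f_oneway

bob  = [5.1, 4.9, 5.0, 5.2, 4.8]
mary = [5.0, 5.1, 4.9, 5.0, 5.1]
jane = [4.9, 5.2, 5.0, 4.8, 5.1]

stat, p = f_oneway(bob, mary, jane)
if p > 0.05:
    print("fail to reject Ho: no evidence the painters differ")
else:
    print("reject Ho: at least one painter differs")
```

With this data the group means are nearly identical, so the p-value comes out far above 0.05 and Ho stands, even if you suspected Mary was better.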
or…..
“If the P is love… the Ho must go!”
But… it is totally not fair for you to judge my training on a simple mistake…

May 8, 2003 at 5:43 pm #85693
Spiderman (Member)
Annie… YEAH, get that Ho out of there if the P be low!!
And don’t feel bad… GBs aren’t expected (at least in my business) to be very versed in the whole p-value, 95% jive… that’s why they have me, the MBB, to see that GBs do the right deed!!

May 8, 2003 at 9:57 pm #85704
Annie,
I am basically not fair.
I did not judge the simple mistake, but that coupled with the statement about p value.
There is a load of bad training out there; you would not be the first (or the last).

May 9, 2003 at 3:24 am #85716
Stan –
My mistake was looking at the wrong example, and typing before I comprehended what I was looking at…. not a mistake of my training…..
I’m sorry you’re so cynical, glad you’re not on my team.
Annie
May 9, 2003 at 9:54 am #85720
Annie & Stan
This forum is for learning, not for off-topic debate.
Let’s learn from each other.
[email protected]

May 9, 2003 at 11:48 am #85725
fernando (Participant)
I thank you all for your answers, but there is still something that is not completely clear to me.
Let’s assume I have a set of data that looks normally distributed, with a certain p-value from the normality test. Let’s assume that the process capability for this set of data is Z = 6 (DPMO = 3.4). Now, I have assumed that my data are normal with a confidence of 95% (say, alpha = 0.05). If I have understood correctly, this means the real process capability is not exactly DPMO = 3.4, because I performed the capability study assuming normal data, while there is a certain level of uncertainty about normality that translates into an uncertainty around 3.4 that I cannot estimate. If I perform the normality test with alpha = 0.01, the uncertainty tends to reduce, and I am safer about the process capability. Of course, from a practical standpoint nothing changes.
Is that right?
Thanks again

May 9, 2003 at 2:54 pm #85728
The greater source of error in Z = 6 (DPMO = 3.4) will come from the actual Z-shift in your process, not from deviations from normality under a 95% assumption.
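The Z-shift point can be made numerically: the assumed 1.5-sigma long-term shift moves the defect estimate by roughly three orders of magnitude, which dwarfs any uncertainty from a borderline normality test. A sketch using scipy, with the conventional shift value:

```python
# Compare DPMO for a Z = 6 process with and without the conventional
# 1.5-sigma long-term shift. The shift alone changes the estimate by
# a factor of thousands, swamping normality-test uncertainty.
from scipy.stats import norm

no_shift = norm.sf(6.0) * 1e6          # about 0.001 DPMO
shifted  = norm.sf(6.0 - 1.5) * 1e6    # about 3.4 DPMO

print(f"no shift:        {no_shift:.4f} DPMO")
print(f"1.5-sigma shift: {shifted:.1f} DPMO")
print(f"ratio:           {shifted / no_shift:.0f}x")
```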
May 15, 2003 at 1:12 pm #85912
Let me give you a brief explanation of how the two are used.
There is a particular process “A” requiring improvement and producing a DPMO of, say, 20,000. You carry out improvements (or simulations, as the case may be) with x’s arising out of your fishbone / FMEA / brainstorming, etc. You carry out a piloted solution and get a DPMO of 10,000.
Does it mean that there is an improvement in the process? Maybe, maybe not, as the sample sizes are different. The data before improvement could cover a period of three months or more; the pilot-solution data could be a handful of samples, or maybe a week’s production. This is where you require a t-test. A p-value of less than 0.05 indicates that there is a significant difference, indicating process improvement. With this result, you can confidently (with 95% confidence) go ahead with the process change. If the p-value is greater than 0.05, there is statistically no difference between the two lots, and you need to look at different solutions.
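The before/after comparison described above can be sketched as a two-proportion z-test, one common choice for comparing defect rates between lots of different sizes (the post says t-test; the opportunity counts below are invented for illustration):

```python
# Two-proportion z-test: did the pilot (10,000 DPMO over a small sample)
# really improve on the baseline (20,000 DPMO over a large sample)?
from math import sqrt
from scipy.stats import norm

def two_prop_pvalue(x1, n1, x2, n2):
    """One-sided p-value that lot 1's defect rate exceeds lot 2's."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)               # pooled defect rate under Ho
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return norm.sf(z)

# Baseline: 1,000 defects in 50,000 opportunities (20,000 DPMO).
# Pilot:       50 defects in  5,000 opportunities (10,000 DPMO).
p = two_prop_pvalue(1000, 50_000, 50, 5_000)
print(f"p-value = {p:.2g}")
```

With these (hypothetical) sample sizes the p-value lands well below 0.05, so the improvement is statistically significant; shrink the pilot to a handful of units and the same DPMO drop would not be.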
The forum ‘General’ is closed to new topics and replies.