Accepted P Value
Six Sigma – iSixSigma › Forums › Old Forums › General › Accepted P Value
 This topic has 11 replies, 7 voices, and was last updated 15 years ago by Fontanilla.


November 15, 2007 at 6:28 pm #48684
Jegan Sekar (Participant) @JeganSekar
Hello,
In Minitab, for nearly every analysis we run we get a p-value, which helps indicate whether the data follow a normal distribution or not.
I would like to know the accepted p-value (less than or greater than some threshold X) so that I can easily identify whether the data follow a normal distribution or not.
This would also help me decide whether to go on and normalize the data.
Thanks in advance.
Jegan Sekar

November 15, 2007 at 7:16 pm #164857
Robert Butler (Participant) @rbutler
Your post suggests you are taking data, dumping it into a computer program, clicking on “normality test,” and using this number to make decisions about your data’s normality. I’m not trying to be mean-spirited or sarcastic, but based on your post I would say you are trying to replace thinking with a number. I would strongly recommend you not do what your post suggests you are doing.
If your data sets are small, there is a good chance they will fail a test for normality even if they were drawn from a perfectly normal distribution. To better understand this, and to see what I would recommend instead, I’ll offer the following thread:
https://www.isixsigma.com/topic/pvalueof00595confidence/
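Robert Butler’s warning about leaning on a single small-sample p-value can be illustrated with a quick simulation. This is an editor’s sketch, not from the thread: it draws many small samples from a known-normal distribution and counts how often a Shapiro-Wilk test (used here as a readily available normality test; the thread discusses Minitab’s Anderson-Darling) “rejects” normality. The seed, sample size, and trial count are illustrative choices.

```python
# Sketch: even data drawn from a perfectly normal distribution will
# sometimes fail a normality test -- at alpha = 0.05, roughly 5% of
# truly normal samples are rejected by construction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_trials = 1000
rejections = 0
for _ in range(n_trials):
    sample = rng.normal(loc=0.0, scale=1.0, size=15)  # small, truly normal sample
    stat, p = stats.shapiro(sample)                   # Shapiro-Wilk normality test
    if p < 0.05:
        rejections += 1

# A single low p-value from a small sample is weak evidence on its own.
print(f"Rejected {rejections} of {n_trials} truly normal samples at alpha = 0.05")
```

The point is that a lone p-value near the threshold tells you little; you still have to think about sample size and the data themselves.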
November 15, 2007 at 7:45 pm #164858
Ed Stemborowski (Participant) @EdStemborowski
If the p-value < 0.05, reject the null hypothesis.
If the p-value > 0.05, we fail to reject the null hypothesis and accept that the data are normal.
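The decision rule above can be sketched in a few lines. This is an illustrative example using SciPy’s D’Agostino-Pearson test (`normaltest`) as a stand-in for Minitab’s Anderson-Darling test; the distributions, sizes, and seed are assumptions for demonstration.

```python
# Sketch of the p-value decision rule: H0 is "data follow a normal
# distribution"; reject H0 when p < alpha.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
normal_data = rng.normal(loc=50.0, scale=2.0, size=100)   # roughly normal
skewed_data = rng.exponential(scale=2.0, size=200)        # clearly non-normal

alpha = 0.05
stat_n, p_normal = stats.normaltest(normal_data)
stat_s, p_skewed = stats.normaltest(skewed_data)

for name, p in [("normal sample", p_normal), ("skewed sample", p_skewed)]:
    verdict = "reject H0 (not normal)" if p < alpha else "fail to reject H0"
    print(f"{name}: p = {p:.4f} -> {verdict}")
```

As the later posts point out, failing to reject H0 is not proof of normality; it only means this test found no evidence against it.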
November 15, 2007 at 7:51 pm #164859

Amen! “No analysis technique is perfect. You always have to think first, and then think statistically.”
Dr. Donald J. Wheeler

November 15, 2007 at 9:14 pm #164863
not a doc (Participant) @notadoc
Hi. This statement isn’t 100% accurate: “if the p-value > 0.05 we fail to reject the null hypothesis and accept that the data is normal.” You can have non-normal data in any nonparametric test; a p-value > 0.05 there won’t make the data normal.
You have to run a normality test before you run the appropriate statistical test. If you don’t, you could face either an alpha or a beta error.
It’s a good question and you’re on the right path by asking questions.

November 16, 2007 at 1:03 am #164876
Ed Stemborowski (Participant) @EdStemborowski
We should not complicate the answer. The question was prefaced with the fact that the value comes out of Minitab. The conclusion is that they already ran the Anderson-Darling normality test and simply wanted to know what the cutoff point was.
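The Anderson-Darling test that Minitab runs can be approximated outside Minitab. As an editor’s sketch (not part of the thread), SciPy’s `stats.anderson` reports the A² statistic against critical values at fixed significance levels rather than a single p-value; the data and seed below are assumed for illustration, and Minitab’s own p-value formula is not reproduced here.

```python
# Sketch: Anderson-Darling normality test via scipy. Reject H0
# ("data follow a normal distribution") when the test statistic
# exceeds the critical value for your chosen alpha.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.normal(loc=10.0, scale=1.5, size=60)

result = stats.anderson(data, dist="norm")
print(f"A-squared statistic: {result.statistic:.3f}")
for crit, sig in zip(result.critical_values, result.significance_level):
    verdict = "reject H0" if result.statistic > crit else "fail to reject H0"
    print(f"  alpha = {sig / 100:.3f}: critical = {crit:.3f} -> {verdict}")
```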
November 16, 2007 at 1:16 pm #164904

Oddly enough, the answer was only a mouse click away in Minitab’s Help menu (Stat > Basic Statistics > Normality Test > Help):
“Generates a normal probability plot and performs a hypothesis test to examine whether or not the observations follow a normal distribution. For the normality test, the hypotheses are H0: data follow a normal distribution vs. H1: data do not follow a normal distribution.
P-value: Determines the appropriateness of rejecting the null hypothesis in a hypothesis test. P-values range from 0 to 1. The smaller the p-value, the smaller the probability that rejecting the null hypothesis is a mistake. Before conducting any analysis, determine your alpha (α) level. A commonly used value is 0.05. If the p-value of a test statistic is less than your alpha, you reject the null hypothesis.”

November 17, 2007 at 3:08 am #164944
not a doc (Participant) @notadoc
This isn’t making more out of the question. Running a normality test is part of it, but running a test of equal variances may also be needed. At the end of the day, if you run the wrong test your p-value will hold no value; you will have either an alpha or a beta error. It all depends on the data.
I just don’t want to send you off in the wrong direction and have you go after something that won’t help.

November 17, 2007 at 3:56 am #164949

Equal variance? I was under the assumption he was concerned with only one distribution and its normality.
November 17, 2007 at 4:39 am #164956
not a doc (Participant) @notadoc
I didn’t read it that way. I didn’t see anything that indicated what kind of data he was working with.
November 30, 2007 at 6:47 am #165520

Hi there,
Your p-value cutoff will depend on your confidence level.
For example, in IT and ITES a confidence level of 95% is typical, meaning if p > 0.05 you consider your data set normal.
In manufacturing units a confidence level of 99% is typical, meaning if p > 0.01 you consider your data set normal.
Note: use at least 25–30 data points for the analysis.
Regards,
Sam

December 3, 2007 at 1:56 pm #165629

All of the posts naming 0.05 as the cutoff entirely miss the point. You base your p-value decision on the amount of risk you are willing to take. How much is it worth to you if you are wrong? If the risk is low, you might select a high cutoff such as 0.1 or 0.15. If you are risk-averse, you might use a cutoff of 0.01. Your risk of making alpha and beta errors is what you want to mitigate in your choice of cutoff. The commonly used 0.05 is a good suggestion, but it doesn’t address your specific level of risk acceptance.
So, what’s it worth to you if you are wrong? Base your p-value cutoff on that.
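The last two posts make the same underlying point: the verdict depends on the alpha you choose, not just on the p-value. A minimal sketch (editor’s illustration; the function name and the p-value of 0.03 are hypothetical):

```python
# Sketch: the same p-value can lead to opposite conclusions
# depending on the alpha (risk level) you chose beforehand.
def normality_verdict(p_value: float, alpha: float) -> str:
    """Decision rule for a normality test at a given alpha."""
    return "treat as normal" if p_value > alpha else "treat as non-normal"

p = 0.03  # hypothetical p-value from a normality test
print(normality_verdict(p, alpha=0.05))  # 95% confidence level
print(normality_verdict(p, alpha=0.01))  # 99% confidence level
```

At alpha = 0.05 the sample is flagged as non-normal, while at alpha = 0.01 the same p-value passes, which is exactly why alpha must be fixed, based on your risk tolerance, before running the test.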
The forum ‘General’ is closed to new topics and replies.