when pvalue is close to .05
 This topic has 35 replies, 32 voices, and was last updated 14 years, 9 months ago by Pankaj Bansod.


August 10, 2004 at 10:33 am #36479
Hi,
I have a doubt: what should our decision be in these cases?
i. 0.05 < p-value < 0.10
ii. 0.01 < p-value < 0.05
In both cases my sample size is, let’s say, 100.
regards
August 10, 2004 at 11:34 am #105342
Dr. Reiner Hutwelker
This is a matter of the objective of your investigation. If I want – with a low budget – to have a first look into the data with a small sample size, I do accept a bigger alpha error and, as a tradeoff, get a smaller beta error. So I accept 0.05 < p < 0.10: I want to “see” every possible cause (x) of my result (Y). If the result is significant at this level, I try again with a bigger investigation. If, on the other hand, I want to be sure that the new expensive machine we are trying out is really better than the old one, I adjust my alpha level to 0.01, i.e. p < 0.01. In this situation I accept a bigger beta error, for I want to see a really significant improvement.
But please keep in mind, that significance has nothing to do with the power of your effect. With a big sample size you might always get a significant result, but the improvement may be so small that nobody cares.
This is how I explain the topic to my trainees. I hope it is also helpful for you.
regards,
Reiner Hutwelker
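[Editor's note] Reiner's tradeoff between the alpha level, the beta error, and sample size can be made concrete with a short numerical sketch. This is an illustration only (standard-library Python, a one-sided z-test with known sigma; the effect sizes and sample sizes below are made up, not from the thread):

```python
import math
from statistics import NormalDist

z = NormalDist()  # standard normal distribution

def power_one_sided_z(alpha: float, n: int, delta: float) -> float:
    """Power of a one-sided z-test to detect a mean shift of `delta`
    standard deviations with sample size n at significance level alpha."""
    crit = z.inv_cdf(1 - alpha)               # critical value for the chosen alpha
    return 1 - z.cdf(crit - delta * math.sqrt(n))

# A looser alpha buys a smaller beta error (more power) at fixed n:
for alpha in (0.01, 0.05, 0.10):
    print(f"alpha={alpha:.2f}  power={power_one_sided_z(alpha, 100, 0.2):.3f}")

# ...and with a big enough sample, even a trivial 0.01-sd shift
# becomes "significant", though nobody would care about it in practice:
for n in (100, 10_000, 1_000_000):
    p = 1 - z.cdf(0.01 * math.sqrt(n))        # p-value for an observed 0.01-sd shift
    print(f"n={n:>9}  p={p:.4f}")
```

The first loop shows the alpha/beta tradeoff Reiner describes; the second illustrates his warning that with a very large sample, statistical significance says nothing about whether the effect is big enough to matter.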
August 10, 2004 at 11:34 am #105343
Preeti S
Dear Alifa,
If your p-value is above .05 – like the .10 you mentioned – then we fail to reject the null hypothesis.
Hope this helps.
August 10, 2004 at 11:50 am #105345
Robert Butler
The following post may be of some help.
https://www.isixsigma.com/forum/showmessage.asp?messageID=19089
August 10, 2004 at 2:45 pm #105367
KBailey
Alifa,
Don’t limit yourself to the other answers you’ve gotten. If your p-value is borderline, there are other things to look at before making your determination:
Look for sources of variation that should be included in your model.
Tighten up process control. Make sure everyone’s following SOPs and that those SOPs are adequate.
If you didn’t already, make sure to check for interactions and nonlinear relationships.
Good luck!
k
August 19, 2004 at 5:21 am #105961
Deependra
I carry the same opinion as Preeti: if the p-value is < 0.05, the null hypothesis is to be rejected. If it is a borderline case, maybe the test can be performed again with an increased sample size.
Hope this helps
August 19, 2004 at 5:29 am #105962
Vikas Nagpal
Alifa,
When the p-value is greater than .05 we accept the null hypothesis. When the p-value is less than .05 we accept the alternate hypothesis. When the p-value is close to .05 we generally accept the null hypothesis, but it depends upon how critical the process is.
August 19, 2004 at 6:54 am #105966
Guys, just remember this.
If the p-value is < 0.05, say 0.04… this means that there’s only a 4% chance of the null hypothesis happening. On the other hand, there’s a 96% chance of the alternate hypothesis happening.
And, in 6 Sigma, we never say that we accept the null hypothesis. We say that we don’t have enough evidence to reject the alternate, therefore we must declare the null hypothesis.
Hope this makes it clearer.
August 19, 2004 at 7:13 am #105970
Welwisher
Dear Vikas Nagpal,
In reality, when you do hypothesis testing you never accept a null hypothesis. We either reject the null hypothesis or fail to reject it. Please ensure the information is correct when you share in these forums, or else wait for experts to comment.
Secondly – the p-value is only a statistical guide for the next step in decision making. I like the explanation given by one of the respondents on what to do practically based on the p-value.
August 19, 2004 at 7:45 am #105971
Quote from George Box in “Statistics as a Catalyst to Scientific Learning by Scientific Method”, Journal of Quality Technology, January 1999:
“The concept of hypothesis testing at accepted fixed significance levels is, so far as it can be justified at all, designed for terminal testing, on past data, of a believable null hypothesis. This makes little sense in a context of exploratory inquiry. We should not be afraid of discovering something. If I know with only 50% probability that there is a crock of gold behind the next tree, should I not go and look?”
In the inquiry phase you can sometimes keep factors with p-values of 0.2, but when you put a process into production or buy a costly tool, the p-value should be much lower than 0.05.
Yves
August 19, 2004 at 8:20 am #105974
Hi
If the p-value is > 0.05 then accept the null hypothesis, and if p < 0.05 reject the null hypothesis.
If the p-value is near the border, then the experiment needs to be conducted with a larger sample size if possible, followed by root cause analysis.
Sanjay Yatgiri
August 19, 2004 at 9:55 am #105977
Pankaj Bansod
be notified via email.To accept the null hypothesis, the p value should be <=0.05
otherwise reject the hypothesis.
Regards,
Pankaj Bansod
August 19, 2004 at 11:03 am #105979
Gary A. Gack
Depends on what is at stake – p is just a measure of risk. If life is at stake you will certainly reject a higher p; if you only risk $1.50, a higher p is OK.
August 19, 2004 at 12:21 pm #105980
It depends on the risk you are willing to take. Why did you set your limit at .05? Is there high dollar value or a safety issue involved?
I normally set it at 0.1 and if I’m close I look at the data, possibly run another sample of larger size, determine if there are outliers, etc.
Don’t limit your options! Take a chance – I’ve actually had great success with 50% confidence levels. As our MBB said, if you’re not willing to gamble you’ll never win anything.
August 19, 2004 at 12:44 pm #105983
Yves and Gary – very good answers!
We must base our decisions on what the statistics and data tell us but we should never let numerical values set limitations on our rational thinking.
Suppose we were looking at four factors and the resulting p-values were .21, .18, .15, .06. According to the book we would say no significance, as we did not reach .05 or less on any factor. But with this large spread in p-values, the probability is high that something is going on with the factor producing the .06. Look over the data and testing procedure again and reinvestigate.
Don’t overlook the pot of gold behind the tree just because the tree did not carry the correct number.
August 19, 2004 at 12:47 pm #105984
Sigmordial
Hi Dr. Hutwelker,
There may be a possibility for misunderstanding with your statement: “But please keep in mind, that significance has nothing to do with the power of your effect.” That is true, but the use of significance and power could lead one to conclude that there is no relationship between the alpha and beta risks.
The stated alpha does impact beta (and hence the power), though not as strongly as sample size. It is an interesting exercise – if you have Minitab, keep the same sample size and stated effect, and see what happens to the power as you vary the alpha. Then keep alpha fixed and vary the sample size.
August 19, 2004 at 1:44 pm #105988
Luis Javier Rivas
Good responses.
You can also look at boxplots of both samples for the dispersion of the data. Consider the Cpk and verify what your objective is in the decision making. What is critical to satisfy for your customer?
August 19, 2004 at 2:17 pm #105992
Alfonso Villalobos
In my opinion it’s more a matter of your own criteria.
Remember that the p-value is a way of stating the probability of Ho being true… so common alpha values are 0.05 and 0.01, which only mark the point from which you consider rejecting or not rejecting Ho.
I think you should use your common sense and process expertise to make this kind of decision.
But, as I saw in some postings, I agree that a good idea is to sample some more to make a better decision.
Hope this helps.
August 19, 2004 at 9:39 pm #106028
atul singh
“To correct Mr. Pankaj Bansod”
Dear Pankaj,
We never accept the null hypothesis; we fail to reject the null hypothesis.
regards,
atul
August 20, 2004 at 5:12 am #106048
Shrinivas Pote
If P > 0.05 – with 95% confidence we do not have sufficient evidence to reject the null hypothesis, hence we fail to reject the null hypothesis.
If P < 0.05 – with 95% confidence we have sufficient evidence to accept the alternative hypothesis, hence we reject the null hypothesis.
Hope this makes it clear.
Pote.
August 20, 2004 at 12:13 pm #106060
I have always taught this in terms of cost benefit:
1. If the cost of a wrong decision is high and the gain is low in comparison to the cost, you probably want 95–99% confidence.
i.e. Gambling $500,000 to win $1,000, you want to know you have a 95–99% chance of winning.
2. If the cost of a wrong decision is low and the gain is high, you can use 75% or even lower.
Gambling $1,000 to win $500,000, if you lose every now and then it’s no big deal, because the possible improvements are worth the risk.
3. If the cost and the gain are comparable, I typically stay at the 95% level to ensure buy-in and acceptability, although I could look at 90% if the cost of further sampling is dramatically high.
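[Editor's note] The cost-benefit framing above reduces to a one-line expected-value calculation. A minimal sketch in Python, using the hypothetical stakes from the post:

```python
def expected_value(p_right: float, gain: float, cost: float) -> float:
    """Expected payoff of acting when the decision is right with
    probability p_right, winning `gain` or losing `cost`."""
    return p_right * gain - (1 - p_right) * cost

# Case 1: gamble $500,000 to win $1,000 - with these stakes even 99%
# confidence loses money on average, so you want near-certainty.
print(expected_value(0.99, gain=1_000, cost=500_000))

# Case 2: gamble $1,000 to win $500,000 - 75% confidence is already a great bet.
print(expected_value(0.75, gain=500_000, cost=1_000))
```

Interestingly, with these made-up stakes even 99% confidence has a negative expected value in case 1, which only strengthens the poster's point: the confidence level should follow from the economics, not from a textbook default.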
The hard and fast rule of “p < 0.05” is dangerous. What is needed is to always combine a financial and risk analysis of why a certain confidence level and sample size was picked. As a project sponsor asked to commit resources (people/time/money), I would be foolhardy to just accept a “textbook” answer that 95% confidence is the accepted method for hypothesis testing.
August 20, 2004 at 12:17 pm #106062
What a thing to say!!! 50%?
August 20, 2004 at 1:55 pm #106082
vineeth kumar
Hi,
Regarding the sample size – it should be enough for statistical analysis; a minimum of 30 is recommended.
Before doing the hypothesis testing you need to do a normality test on the data (I am assuming the data is continuous).
If the data is normal then you can do the hypothesis test.
1. If the p-value is < 0.05, reject the null hypothesis; the two samples are significantly different.
2. If the p-value is > 0.05, do not reject the null hypothesis; you need to do some further experimentation and analysis before concluding.
This is because experimental error or measurement system error can also cause the variation in your measurement. So check that first, fix it, and then check the p-value again.
If in repeated cases you are getting a p-value > 0.05, then you can say with 95% confidence that statistically these samples are not significantly different.
In my opinion, in hypothesis testing only two cases exist: p < 0.05 and p > 0.05.
So don’t get confused between 0.05 and 0.1!
I think I have clarified it for you.
Have a nice day,
vineeth
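[Editor's note] Vineeth's two-step recipe – check normality first, then run the hypothesis test – might look like this in Python with SciPy. The measurement data below are invented for illustration (he recommends at least 30 points per sample; 10 are used here only to keep the example short):

```python
from scipy import stats

# Hypothetical measurements from two machine settings (continuous data).
old = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.1, 9.9, 10.0]
new = [10.3, 10.5, 10.2, 10.6, 10.4, 10.3, 10.5, 10.2, 10.4, 10.6]

# Step 1: normality check (Shapiro-Wilk) on each sample.
for name, sample in (("old", old), ("new", new)):
    _, p_norm = stats.shapiro(sample)
    print(f"{name}: Shapiro-Wilk p = {p_norm:.3f}")  # a large p gives no evidence of non-normality

# Step 2: two-sample t-test, appropriate once normality looks reasonable.
t_stat, p_value = stats.ttest_ind(old, new)
if p_value < 0.05:
    print(f"p = {p_value:.4g}: reject the null - the samples differ significantly")
else:
    print(f"p = {p_value:.4g}: fail to reject the null - check the measurement system first")
```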
August 20, 2004 at 1:59 pm #106083
Preeti S
Vineeth,
Thanks for the reply. However, I would like to add a little here: if my p-value is = .05, which is quite possible in many situations, then I would take a bigger sample and perform the test again; and if my p-value is < .05, then I would again break down the data and perform the test to see any significant difference (e.g. taking a sample out of my bottom performers).
Cheers
Preeti S
August 20, 2004 at 2:14 pm #106086
aBBinMN
Alifa,
One poster was exactly right about things to look for. Most of the others are partially right in that you need to consider risk and cost in deciding how to proceed. However, they’re missing the significance of a p-value that doesn’t give you the confidence you’re looking for.
The reason your p-value is higher than you want is either because your model is inadequate/incorrect (missing factors or interactions), or because of variation in factors that are included in the model (inadequate controls), or both. You also need to consider whether you’ll be able to achieve adequate process control based on your model. If there’s a significant interaction or source of variation that you don’t know about or that you’re not controlling, you might correctly reject the null hypothesis but still end up with a process that isn’t capable.
A p-value of .1, .15, or .2 indicates it’s probably important to control those factors. It also means that you’re not very confident that you can control the process adequately by controlling just those factors. If you can operate well within customer specs, great! Otherwise, keep looking for the other significant factors/interactions.
August 20, 2004 at 3:19 pm #106099
Powers
Brains,
I believe your post was misstated…
It should say, “… We say that we don’t have enough evidence to reject the null, therefore we must declare the null hypothesis.”
August 20, 2004 at 11:46 pm #106150
Refer to a good textbook (e.g. Box and Hunter). The replies given by Butler, Yves and Ken are right on the money.
Blind statements such as “if p < 5%, reject the null” that we learned at school do more harm than good, because they replace good logical thinking and exploration with pat, canned statements. Taking more samples is always a good idea, albeit an expensive one.
There is nothing holy about 5%. Do people know where the 5% comes from? (Hint: from tomatoes – by the person to whom the Box and Hunter book is dedicated.)
Don’t believe everything you read on the bulletin board – including this posting.
August 21, 2004 at 4:02 pm #106166
Good call. Everyone should read page 109 of BHH to get a good perspective on significance levels.
Remember that it starts with risk and how big a difference you want to see – that gives you sample size. If you don’t do this, the p-value at any level has no meaning.
August 21, 2004 at 10:27 pm #106183
KBailey
Ram,
Good call on the “blind statements.” I’m not sure what everyone else is learning in school, but it’s always been my understanding that the p < 5% rule is a rule of thumb, based on 95% confidence being pretty good for most purposes. We need to remember that a useful generalization is a useful generalization, and part of our job is to recognize when it doesn’t apply.
August 23, 2004 at 3:20 am #106221
Edwin Garro
Excellent question.
This is the reason why we need the process owner really close to us during the statistical analysis. A p-value close to 0.05 calls for the expertise of the process owner to define the importance of the variable under study in practical terms. After all, that is why the analysis was conducted in the first place: to improve a process. Statistical significance is only a step towards the real goal.
E!
August 23, 2004 at 1:16 pm #106250
wrong
August 23, 2004 at 1:39 pm #106252
Remember that it starts with risk and how big a difference you want to see – that gives you sample size. If you don’t do this, the p-value at any level has no meaning.
Just ask the pharmaceutical companies about finding effects using a huge sample size.
August 23, 2004 at 10:29 pm #106289
Alifa, you sure have gotten some great responses so far.
While each statistical test has an associated null hypothesis, as you know by now, the p-value is the probability that your sample could have been drawn from the population(s) being tested (or that a more improbable sample could be drawn) given the assumption that the null hypothesis is true. A p-value of .05, for example, indicates that you would have only a 5% chance of drawing the sample being tested if the null hypothesis were actually true.
Null hypotheses are typically statements of no difference or no effect. A p-value close to zero signals that your null hypothesis is false, and typically that a difference is very likely to exist. Large p-values closer to 1 imply that there is no detectable difference for the sample size used. A p-value of 0.05 is a typical threshold used in industry to evaluate the null hypothesis. In more critical industries (healthcare, etc.) a more stringent, lower p-value may be applied.
More specifically, the p-value of a statistical significance test represents the probability of obtaining values of the test statistic that are equal to or greater in magnitude than the observed test statistic. To calculate a p-value, collect sample data and calculate the appropriate test statistic for the test you are performing – for example, the t-statistic for testing means, or the chi-square or F statistic for testing variances. Using the theoretical distribution of the test statistic, find the area under the curve (for continuous variables) in the direction(s) of the alternative hypothesis, using a lookup table or integral calculus. In the case of discrete variables, simply add up the probabilities of events occurring in the direction(s) of the alternative hypothesis at and beyond the observed test statistic value.
August 24, 2004 at 8:04 am #106297
Hi Alifa,
Maybe this is already a bit redundant, but I’d start from the interpretation of the p-value, which I understand to be the following:
P is the probability of measuring the value you actually got in a population where there is no effect (i.e. the null hypothesis is true and you got the said value through a fluke of sampling error).
So the question is – as many have posted already – how much risk are you willing to take to commit to a direction that could in fact be a false alarm?
Would you feel comfortable going to your boss and asking him to spend X dollars when there is a 5% chance that you will be proven wrong later? Would you feel comfortable if the chance of being wrong was 10%? 15%? I think this is the question (or a variant thereof) that you need to ask yourself. And of course it is much better to ask this question before you actually start the measurement.
Regards
Sandor
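[Editor's note] The definition several posters have given – the probability, under the null hypothesis, of seeing a result at least as extreme as the one observed – can be made concrete with a small permutation test. A sketch in standard-library Python only; the cycle-time data are made up:

```python
import random
from statistics import mean

random.seed(1)  # reproducible shuffles

# Hypothetical cycle times before and after a process change.
before = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 12.0]
after  = [11.5, 11.7, 11.4, 11.6, 11.8, 11.5, 11.3, 11.6]

observed = abs(mean(before) - mean(after))
pooled = before + after

# Under the null hypothesis the "before"/"after" labels are arbitrary,
# so shuffle them many times and count how often a difference at least
# as extreme as the observed one appears purely by chance.
n_perm = 10_000
extreme = 0
for _ in range(n_perm):
    random.shuffle(pooled)
    if abs(mean(pooled[:8]) - mean(pooled[8:])) >= observed:
        extreme += 1

p_value = extreme / n_perm
print(f"observed difference = {observed:.3f}, permutation p-value = {p_value:.4f}")
```

A p-value near zero here says exactly what the posters above say: a difference this large would almost never arise by sampling luck alone if the change had no effect.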
August 30, 2004 at 3:13 pm #106604
Hi everybody. This is my first post. Correct me if I am wrong!
The p-value is merely a measure. In very simple terms, it means that you are accepting or rejecting the hypothesis at a (1 − p) × 100% confidence level.
E.g. if the p-value is between 0.01 and 0.05, it would mean that the decision has a confidence between 95% and 99%. This is the beauty of the p-value. Now, whether to go for a p of 0.01, 0.05, 0.1, or any other value for that matter, will depend on the severity of the decision and the robustness of the sample. A call on it has to be made by the project leader and project owner in consultation with the affected parties. There are certain “thumb rules”, but of course, who knows your process better than yourself?
February 28, 2008 at 5:08 pm #169090
Pankaj Bansod
Hi,
This is Pankaj Bansod from Hyderabad.
Please contact me with your mail and let’s have more discussion on Six Sigma.
The forum ‘General’ is closed to new topics and replies.