When to reject a null hypothesis
 This topic has 30 replies, 17 voices, and was last updated 13 years, 1 month ago by Craig.


February 4, 2009 at 2:11 pm #51792
bigbavarian (@bigbavarian)
When performing hypothesis testing, when should you reject the null hypothesis? At p < .05?
I have been a little confused by some of the readings online.
Thanks
February 4, 2009 at 3:05 pm #180589
Ken Feldman (@Darth)
You can reject your null hypothesis when the p-value is less than your selected alpha level. It may be .05, .01, .10, or any other level of Type I error risk that you are willing to accept.
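To make the decision rule concrete, here is a minimal Python sketch. The data and the one-sample z-test are hypothetical, chosen only to illustrate comparing a p-value to a chosen alpha:

```python
from statistics import NormalDist, mean

def one_sample_z_p_value(sample, mu0, sigma):
    """Two-sided p-value for a one-sample z-test with known sigma."""
    n = len(sample)
    z = (mean(sample) - mu0) / (sigma / n ** 0.5)
    return 2 * (1 - NormalDist().cdf(abs(z)))

alpha = 0.05                                            # selected Type I risk
data = [10.2, 9.8, 10.5, 10.1, 9.9, 10.3, 10.4, 10.0]  # made-up measurements
p = one_sample_z_p_value(data, mu0=10.0, sigma=0.3)

# Reject the null only when p falls strictly below the chosen alpha.
decision = "reject H0" if p < alpha else "fail to reject H0"
```

With these made-up numbers, p is roughly 0.16, so the sketch fails to reject H0 at alpha = .05; at alpha = .10 the conclusion would be the same, which is why the alpha must be chosen before looking at the data.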
February 4, 2009 at 3:08 pm #180590
bigbavarian (@bigbavarian)
Thank you for confirming. I was starting to pull my hair out.
February 4, 2009 at 3:17 pm #180592
Where do you stand when the p-value equals your alpha risk exactly, at least to the decimal places displayed in Minitab?
Curiosity of this cat.

February 4, 2009 at 3:22 pm #180595
Robert Butler (@rbutler)
If the rejection criterion was p < .05 and you have a situation where, to the best of your knowledge, p = .05, then p is not less than .05, it does not meet your criterion, and you do not reject the null.
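Robert's point, that the comparison is strict, can be shown in a couple of lines of Python (alpha = .05 here is only an example):

```python
alpha = 0.05
# p == alpha does NOT satisfy "p < alpha", so the null is not rejected
# at exactly the boundary value.
decisions = {p: p < alpha for p in (0.049, 0.050, 0.051)}
```

Only the 0.049 case rejects; both 0.050 and 0.051 fail to reject under a strict `p < alpha` rule.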
February 4, 2009 at 4:28 pm #180598
Ken Feldman (@Darth)
"Use common sense" is the rule I use. The selection is arbitrary, so whether it is .05 vs. .04 or .06, make a reasonable decision and don't worry much about decimal places.
February 4, 2009 at 5:20 pm #180602
Darth:
R. A. Fisher (1890-1962): "…either there is something in the treatment, or a coincidence has occurred such as does not occur in 1 in 20 trials" (Journal of the Ministry of Agriculture of Great Britain, 1926). This always seemed like pretty good common sense to me.
Cheers, Alastair

February 5, 2009 at 2:04 pm #180668
R. Eric Reidenbach (@ericreidenbach)
When setting a p level, keep in mind what "rejecting the null hypothesis" really means. It means that you do not have sufficient information to accept the hypothesis. This is not quite as strong as "rejecting" a null hypothesis. Unfortunately, too many statistics users do not understand this, and the "rejection" becomes a lot stronger than it really is. Accordingly, the lower the p level (p of 5% or p of 1%), the more stringent the statement you can make about the hypothesis and the more sure you can be about your conclusions.
February 5, 2009 at 2:40 pm #180675
Swaggerty (@George)
Hi Eric: When I discuss rejecting the null hypothesis in class, I state that alpha (Type I risk: sending an innocent person to jail) is a basic guideline. When the p-value is significantly larger or significantly smaller than .05, it is really straightforward.
P-value high: do not reject Ho. At a 95% confidence level, we do not have enough evidence beyond a reasonable doubt to convict!
P-value low: reject Ho (p < .05).

February 5, 2009 at 4:51 pm #180686
Vallee (@HFChrisVallee)
Robert, I understand that in controlled situations .05 is a good cutoff, especially in my old psychology research world; however, in the common six sigma applications of today, do you think more is getting rejected or accepted than should be, or not enough? After all, the controls are not the same, and there is usually more noise present in everyday life. There was a big push in the academic world before I left it to accept more studies into peer-reviewed journals, to share lessons learned that would not have been accepted before because of a higher p-value.
HF Chris Vallee
February 5, 2009 at 5:35 pm #180692
When nothing happens, it is H0.
When you plan to achieve a target, it is H1.

February 5, 2009 at 5:35 pm #180693
Robert Butler (@rbutler)
HF,
I don’t have a sense that any more or less is being accepted/rejected. For me, and for those who review what I’ve done, the issue isn’t so much the choice of the cut point as it is one of “honoring the contract” and describing the significance of your results (or lack thereof) in terms of the originally stated goals.
There have been instances in peer-reviewed articles where we found things in the “close but no cigar” category which, even with the statistically insignificant observed numeric differences, would have been of value either physically or clinically. In cases such as this we’ll provide the usual summary table, note our failure to meet our pre-specified goal, comment on how close we came, and suggest that even though we did not reach our statistical goal there would be merit in additional investigation.
In a discussion last year I posted a summary of some ways one can go about reporting near misses of this type.
https://www.isixsigma.com/forum/showmessage.asp?messageID=145121
February 6, 2009 at 12:26 am #180721
Vallee (@HFChrisVallee)
Robert, thanks for the input, and for validating the sharing of “near misses” in some cases.
HF Chris Vallee
February 6, 2009 at 6:05 am #180729
Rajeev seth (@Rajeevseth)
If you have set the alpha risk at 5%, then you reject the null hypothesis if the p-value is less than 0.05. If it is more than 0.05, you fail to reject the null hypothesis.
Rajeev

February 6, 2009 at 10:50 am #180736
Chris Seider (@cseider)
If I take your last bit of logic further: if the risk to the business is so high, how would you know which side of the null hypothesis to take?
Not sure it makes sense from a practical point of view… but I understand your connection.
Also, if the risk to the business were paramount, then why set an alpha or beta risk at all? Just do the safest thing, and potentially never drive the business to new breakthroughs. I digress with my musings. So I'm clear: I'm NOT advocating use of business risk in a statistical test, which should be void of emotion.
February 6, 2009 at 11:46 am #180738
Hi,
A point that I find important is that the p-value gives the risk of a false alarm (alpha risk) under the assumption that everything else in the measurement was done perfectly: the MSA is fully OK, there is no bias in the sampling, etc.
The actual risk of making a Type I error is in reality far bigger; p only measures this one component of it. Of course, in a cleanly done DMAIC you are supposed to check all these factors before getting to a p-value, so the real question is: were those steps done correctly enough?
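Sandor's point can be illustrated by simulation: when the assumptions behind the test do hold, the false-alarm rate of the "reject when p < alpha" rule comes out close to alpha; any measurement or sampling problem adds risk on top of that. A small Monte Carlo sketch (the z-test and all parameters are arbitrary illustrations):

```python
import random
from statistics import NormalDist, mean

random.seed(42)
alpha, n, trials = 0.05, 30, 4000
false_alarms = 0
for _ in range(trials):
    # The null is true here: data really come from a N(0, 1) population.
    sample = [random.gauss(0, 1) for _ in range(n)]
    z = mean(sample) * n ** 0.5            # z statistic with sigma = 1
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    if p < alpha:
        false_alarms += 1                  # a Type I error

rate = false_alarms / trials               # empirically close to alpha
```

Adding a biased gauge or non-random sampling to the simulation would push the realized error rate away from the nominal 5%, which is exactly the gap Sandor describes.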
Regards
Sandor
February 6, 2009 at 2:12 pm #180751
Swaggerty (@George)
Certainly, if there is a business risk associated with the decision we reach, we would want to reduce alpha and/or beta risk, e.g. investment examples, call centres for medical questions, etc. The ramification of reducing these risks (i.e. reducing alpha and beta, so that we are not as ready to reject the null) is that the two risks are key components of sample size calculations for continuous data. The lower the risk we are willing to take, the larger the sample size should be.
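The risk-to-sample-size link Swaggerty describes can be sketched with the standard normal-approximation formula for comparing two means. All the numbers below are illustrative assumptions, not values from the thread:

```python
import math
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sample test
    of means: n = 2 * ((z_{1-alpha/2} + z_power) * sigma / delta)**2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Lowering alpha (or raising power) drives the required sample size up.
baseline = n_per_group(delta=0.5, sigma=1.0)              # alpha = .05
stricter = n_per_group(delta=0.5, sigma=1.0, alpha=0.01)  # alpha = .01
```

For a half-sigma difference at alpha = .05 and 80% power this gives 63 per group; tightening alpha to .01 pushes it to roughly 94, which is the trade-off between risk and sampling cost in concrete terms.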
February 6, 2009 at 2:23 pm #180752
Chris Seider (@cseider)
More samples are always good, if possible.
However, I’m just stating that the business impact is not lessened whether we take a 5% or a 1% risk, because the impact will be felt if we reach the wrong conclusion based on our alpha/beta risks. Only the CHANCE of the negative business impact is reduced, with more samples giving more confidence.
I had originally reacted because of someone’s earlier post suggesting that if one had an original 5% alpha risk, the rejection of the null hypothesis would be influenced by the business risk. The assessment of business risk should give you your alpha/beta risk BEFORE you start doing your statistical comparison.
February 6, 2009 at 2:54 pm #180754
Swaggerty (@George)
You hit the nail right on the head. Too many practitioners back into the setting of the risks after the fact. That’s why we refer to setting the risks “a proiri”.
February 6, 2009 at 2:55 pm #180755
Swaggerty (@George)
That should be “a priori”.
February 6, 2009 at 3:00 pm #180756
Chris Seider (@cseider)
I figured we were drilling in on the same point.
March 19, 2009 at 5:13 pm #182526
Abu Jazi (@AbuJazi)
Hi Eric,
I just want to ask: why is rejecting a null hypothesis stronger than failing to reject it (accepting it)? If anyone knows the answer, please tell me, and please give some links or books that address it.

March 19, 2009 at 9:33 pm #182539
R. Eric Reidenbach (@ericreidenbach)
Remember that to reject a null hypothesis says that we do not have enough information (proof) to accept it. You have to keep in mind that this is a probabilistic statement. Too often, significance tests are treated as if they were incontrovertible truth, when in fact they are not. You may accept a null hypothesis when you shouldn’t, and you may reject it when you shouldn’t.
As far as books go, I prefer Nunnally, Psychometric Theory.
I hope this helps.

March 19, 2009 at 9:43 pm #182540
Wrong.
March 19, 2009 at 11:21 pm #182548
Rejecting the null means we have very strong evidence that the null is not correct.

March 20, 2009 at 1:07 am #182554
Ken Feldman (@Darth)
Not sure I am comfortable with the word “strong”. That might be a function of the selected alpha error we are using as our decision point. We might have “significant” evidence.
0March 20, 2009 at 1:42 am #182557We can argue semantics, at least I’ve got the data going in the correct
direction.0March 20, 2009 at 8:15 am #182558
Bower ChielParticipant@BowerChiel Include @BowerChiel in your post and this person will
be notified via email.Hi EveryoneYou might find it interesting to look at the paper entitled “Sifting the evidencewhat’s wrong with significance tests?” by Jonathan A C Sterne and George Davey Smith available free at http://www.ptjournal.org/cgi/content/full/81/8/1464.I particularly like the “spectrum” diagram for suggested interpretation of P values from published medical research.Best Wishes
Bower Chiel0March 20, 2009 at 2:12 pm #182562
NatanyahooParticipant@Natanyahoo Include @Natanyahoo in your post and this person will
be notified via email.Wrong!
March 21, 2009 at 11:33 am #182603
When p is low, reject Ho.
That’s all you’ll ever need to know.
How poetic!
Oh, by the way: if you set alpha at .05 and the p-value was exactly .05… and you fail to reject the null… and you stopped investigating this X variable… YOU’RE FIRED.