Type II error
 This topic has 4 replies, 4 voices, and was last updated 14 years, 6 months ago by Ward.


November 20, 2007 at 12:28 pm #48722
Hello Everyone,
Can anyone refer me to a good online reference to review the theory behind Type II error?
I have a small binary sample that is showing no difference on a 1-proportion test, but I still think there is a difference; it's just not showing because the sample size is too small. I want to go through the theory and examples to understand this.
Thanks.

November 20, 2007 at 3:28 pm #165170
My first reaction is to suggest that you Google it and see what comes up. Here's an article that discusses Type I and Type II errors and the thought process behind determining which is more important in your particular test: http://core.ecu.edu/psyc/wuenschk/StatHelp/TypeIIIErrors.htm
The short answer to your problem, though, is that because you have a small sample size, you have a relatively wide confidence interval. You need to gather enough data so that your confidence interval shrinks to the point where your results are more reliable. You must also be prepared to discover that, despite your gut feeling and the additional data, there really might not be a difference.

November 20, 2007 at 3:39 pm #165172
Use power and sample size in MTB. Go into it and play around with the settings, and reference the HELP function as well. This will give you a good idea of how the different parameters interact and what might be happening with your analysis.
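The power-and-sample-size calculation the Minitab suggestion refers to can also be sketched by hand. A minimal Python sketch using the normal approximation (the baseline p0 = 0.5, alternative p1 = 0.6, and one-sided alpha = 0.05 are invented illustration values, not numbers from this thread):

```python
from math import sqrt
from statistics import NormalDist

def one_prop_power(n, p0, p1, alpha=0.05):
    """Approximate power of a one-sided, one-proportion z-test.

    Power = P(reject H0: p = p0 | true proportion is p1), using the
    normal approximation to the binomial. Low power means a high
    Type II error rate (beta = 1 - power).
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha)      # critical value for the test
    signal = sqrt(n) * abs(p1 - p0)     # grows as n grows
    noise0 = sqrt(p0 * (1 - p0))        # std. dev. under the null
    noise1 = sqrt(p1 * (1 - p1))        # std. dev. under the alternative
    return z.cdf((signal - z_alpha * noise0) / noise1)

# Same true difference (0.5 vs 0.6); only the sample size changes.
for n in (20, 50, 200):
    print(n, round(one_prop_power(n, p0=0.5, p1=0.6), 3))
```

Running this shows power climbing from roughly 0.2 at n = 20 toward 0.9 near n = 200, which is exactly the Type II error picture: with a small sample, the test will usually miss a real 0.5-vs-0.6 difference.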
November 20, 2007 at 6:08 pm #165182
Robino,
This is my 3rd attempt at responding. You do not have sufficient evidence to reject the null for a couple of reasons:
1) Your "noise" is too great. The noise is the denominator in the test statistic (in this case, "Z"). The noise is a function of the hypothesized p (what you are testing against): as your hypothesized p approaches .5, the denominator will be larger, creating more "noise". And because n is small, the denominator will be larger still, again creating more "noise".
2) Your "signal" is too small. The signal is the difference between your observed proportion and your hypothesized p. The greater the difference, the stronger the signal.
3) Finally, there may really be no difference.
The test statistic (Z value) is really a signal-to-noise ratio. The stronger the signal and the weaker the noise, the higher the value of Z, and subsequently the smaller the p-value.
You could also relax your criterion for rejecting the null hypothesis. For exploratory data analysis, an alpha value of .10, rather than .05, is acceptable. If your p-value is greater than .05 but less than .10, you could still reject the null hypothesis. The risk is that you have a slightly greater chance of rejecting the null when you should not.

November 20, 2007 at 6:16 pm #165183
Sorry for the typos! I meant greater than .05 but less than .10 in the second-to-last sentence!
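The signal-to-noise description of Z above can be made concrete. A minimal sketch of the one-proportion z-test (the counts 12/20 and 120/200 are invented to illustrate the sample-size effect, not the original poster's data):

```python
from math import sqrt
from statistics import NormalDist

def one_prop_ztest(successes, n, p0):
    """Two-sided one-proportion z-test (normal approximation).

    Z is a signal-to-noise ratio: the numerator is the gap between the
    observed proportion and the hypothesized p0; the denominator is the
    sampling noise, which shrinks as n grows.
    """
    p_hat = successes / n
    signal = p_hat - p0
    noise = sqrt(p0 * (1 - p0) / n)
    z = signal / noise
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Same observed proportion (0.60 vs hypothesized 0.50), different n:
print(one_prop_ztest(12, 20, 0.5))    # small n: large noise, p-value well above .05
print(one_prop_ztest(120, 200, 0.5))  # larger n: same signal, p-value below .05
```

With 12 of 20 the test cannot distinguish 0.60 from 0.50, while 120 of 200 (the identical proportion) is clearly significant: the "signal" is unchanged, but the "noise" has shrunk with the square root of n.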