# Type II error

Six Sigma – iSixSigma Forums Old Forums General Type II error

Viewing 5 posts - 1 through 5 (of 5 total)
#48722

Robino
Member

Hello everyone,
Can anyone refer me to a good online reference to review the theory behind Type II error?
I have a small binary sample that shows no difference on a 1-proportion test, but I still think there is a difference; it's just not showing because the sample size is too small. I want to go through the theory and some examples to understand this.
Thanks

#165170

Sloan
Participant

My first reaction is to suggest that you Google it and see what comes up. Here’s an article that discusses Type I and Type II errors and the thought process behind determining which is more important in your particular test: http://core.ecu.edu/psyc/wuenschk/StatHelp/Type-I-II-Errors.htm
The short answer to your problem, though, is that because you have a small sample size, you have a relatively wide confidence interval. You need to gather enough data that your confidence interval shrinks to the point where your results are more reliable. You must also be prepared to discover that, despite your gut feeling and the additional data, there really might not be a difference.
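To see how the interval shrinks with sample size, here is a small Python sketch (my own illustration, not from the article) using the standard normal-approximation confidence interval for a proportion:

```python
import math

def prop_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) 95% confidence interval for a proportion."""
    p_hat = successes / n
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return (max(0.0, p_hat - half_width), min(1.0, p_hat + half_width))

# Same observed proportion (40%), two sample sizes: the interval
# narrows as n grows, so a real difference is easier to detect.
small = prop_ci(8, 20)    # n = 20
large = prop_ci(80, 200)  # n = 200
```

With n = 20 the interval spans roughly 0.19 to 0.61; with n = 200 it is far narrower around the same 0.40 estimate, which is exactly why a real difference can hide inside a small sample.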

#165172

annon
Participant

Use Power and Sample Size in Minitab (MTB). Go into it and play around with the settings, and reference the Help function as well. This will give you a good idea of how the different parameters interact and what might be happening with your analysis.
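Minitab's Power and Sample Size tool does this interactively; as a rough sketch of what it computes, here is the normal-approximation power calculation behind a one-proportion test in Python (function names are my own, and this ignores the far rejection tail):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def one_prop_power(p0, p1, n, z_crit=1.96):
    """Approximate power of a two-sided one-proportion z-test:
    the chance of rejecting the null p0 when the true proportion is p1."""
    se0 = math.sqrt(p0 * (1 - p0) / n)  # standard error under the null
    se1 = math.sqrt(p1 * (1 - p1) / n)  # standard error under the alternative
    z = (abs(p1 - p0) - z_crit * se0) / se1
    return norm_cdf(z)

# A true shift from 0.5 to 0.6 is hard to detect at n = 30
# but easy at n = 200 -- power grows with sample size.
low_power = one_prop_power(0.5, 0.6, 30)
high_power = one_prop_power(0.5, 0.6, 200)
```

Playing with `p0`, `p1`, and `n` here mirrors playing with the settings in Minitab: power (1 minus the Type II error rate) climbs as the sample size or the true difference grows.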

#165182

Ward
Participant

Robino, this is my third attempt at responding. You may not have sufficient evidence to reject the null for a few reasons:

1) Your “noise” is too great. The noise is the denominator in the test statistic (in this case, Z). It is a function of the hypothesized p (what you are testing against): as the hypothesized p approaches .5, the denominator grows, creating more “noise”. When n is small, the denominator is also larger, again creating more “noise”.

2) Your “signal” is too small. The signal is the difference between your observed proportion and your hypothesized p. The greater the difference, the stronger the signal.

3) Finally, there may really be no difference.

The test statistic (Z value) is really a signal-to-noise ratio. The stronger the signal and the weaker the noise, the higher the value of Z, and subsequently the smaller the p-value.

You could also relax your criterion for rejecting the null hypothesis. For exploratory data analysis, an alpha value of .10, rather than .05, is acceptable. If your p-value is greater than .05 but less than .10, you could then reject the null hypothesis. The risk is a slightly greater chance of rejecting the null when you should not.
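The signal-to-noise idea can be made concrete in a few lines of Python (my own sketch of the standard one-proportion z-test, not anything specific to Robino's data):

```python
import math

def one_prop_z_test(successes, n, p0):
    """One-proportion z-test: Z is the signal (observed minus
    hypothesized proportion) divided by the noise (null standard error)."""
    p_hat = successes / n
    signal = p_hat - p0
    noise = math.sqrt(p0 * (1 - p0) / n)
    z = signal / noise
    # Two-sided p-value from the normal approximation
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Identical observed rate (60% vs. a hypothesized 50%), but the
# smaller sample has more noise, so the same signal fails to reach
# significance there while it easily does at the larger n.
z_small, p_small = one_prop_z_test(12, 20, 0.5)
z_large, p_large = one_prop_z_test(120, 200, 0.5)
```

This is exactly the Type II error trap in the original question: the signal may be real, but with small n the noise term swamps it and the test cannot reject.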

#165183

Ward
Participant

Sorry for the typos! To be clear, in the second-to-last sentence I meant: a p-value greater than .05 but less than .10.


The forum ‘General’ is closed to new topics and replies.