iSixSigma

Goodness of fit test for normality


This topic contains 11 replies, has 5 voices, and was last updated by  thandi 14 years, 8 months ago.

    #37218

    thandi
    Participant

    Dear All,
      I have a question on goodness of fit tests for normality. What concerns me is the interpretation of the significance level here. When I run a test for normality, my prime concern is that I should not wrongly conclude the data are normal when they are actually not normal. This means that my Type II error (beta) should be as low as possible. But since I cannot control beta directly, I have to rely on alpha. Since an increase in alpha means a decrease in beta, I would choose a larger alpha value, i.e. if my test would pass at the 0.1 level of significance rather than just pass at the 0.05 level, I would have greater protection against wrongly concluding that the data are normal.
    My question is: is the logic in the above discussion correct? That is, does choosing a higher level of significance, compared to a lower one, give more protection against wrongly concluding that my data are normal when they are actually not normal? If this logic is incorrect, please correct me and tell me what level of significance is usually desired for a goodness of fit test for normality.
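    To put the question in concrete terms, here is a minimal sketch (Python with scipy, assumed purely for illustration, with made-up data) of comparing one normality test's p-value against both significance levels:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        data = rng.normal(loc=50.0, scale=2.0, size=60)   # placeholder fill-weight data

        stat, p = stats.shapiro(data)                     # Shapiro-Wilk test for normality
        print(f"W = {stat:.3f}, p-value = {p:.3f}")

        for alpha in (0.05, 0.10):
            # Reject normality when p < alpha; a larger alpha rejects more readily,
            # i.e. it is harder for the data to "pass" as normal.
            verdict = "reject normality" if p < alpha else "no evidence against normality"
            print(f"alpha = {alpha:.2f}: {verdict}")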
    Thanks
    Guru

    #109221

    Markert
    Participant

    Fundamentally the issue is one of “what is a test for normality”.
    Many frequentist statistical techniques rely on an assumption of approximation to the theoretical normal distribution function, but the normal distribution is actually a mathematical abstraction. The purpose of so-called tests for normality is to see whether there is detectable evidence that the assumptions for a test are being violated (which would then make the model less than useful). Tests for normality are therefore actually “tests for non-normality”, but they do NOT (and cannot) tell you that your data are from a normal distribution (because there is actually no such thing as a normal distribution in nature). With a small sample, almost any data (even 20 telephone numbers) will “pass” a test for normality. However, the more data you have, the further out into the tails you are looking for a lack of fit and the more likely you are to detect that your distribution is non-normal.
    To use the terminology of Karl Popper, the assumption of normality is “falsifiable” but it is not provable, and if you cannot detect a lack of fit, it is only because you don’t yet have enough data.
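    A quick simulation illustrates the point (a sketch in Python with scipy; the mildly skewed gamma data are purely illustrative): with a small sample the test typically finds no evidence against normality, while with a large sample from the same distribution it almost always rejects.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        for n in (20, 2000):
            # Mildly skewed data: a gamma distribution with shape 10 looks "almost normal"
            sample = rng.gamma(shape=10.0, scale=1.0, size=n)
            p = stats.shapiro(sample)[1]
            verdict = "rejected" if p < 0.05 else "not rejected"
            print(f"n = {n:4d}: Shapiro-Wilk p = {p:.4f} -> normality {verdict} at alpha = 0.05")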
    Hope this helps
     

    #109239

    Salomon
    Member

    Your assumptions about the fit are somewhat misleading. Yes, your Type II error is not directly controllable; however, the documented tests for normality are essentially constructed to minimize this type of error. So you can perform a goodness of fit test, to a certain degree of confidence, knowing that your Type II error has already been minimized (by finding the minimum of the error function, using differentiation).
    Hope this helps
    ST

    #109257

    Ted
    Member

    The real question you seem to be asking is about the relationship between your Type I and Type II risks (no matter what hypothesis you are testing). Here is a good link (it includes an applet) which does a good job of explaining it.
    http://www.intuitor.com/statistics/T1T2Errors.html
    hope that helps

    #109401

    thandi
    Participant

    Given your comments I still have a doubt. Will increasing the significance level in a goodness of fit test for normality mean that I am more protected from wrongly concluding a distribution to be normal when it actually isn't?
    Guru

    #109446

    Salomon
    Member

    In theory, yes. Nevertheless, increasing the significance level means becoming more and more demanding on the results of the test. In other words, it will require a bigger sample size and fewer observations outside the “normal” bins, for instance, for a chi-square goodness of fit test. What the general formulae ensure is that, given a significance level, the likelihood of committing a Type II error is kept to a minimum. Practically, you need to establish the economic (or risk) cost of the Type II error, and then find what the acceptable level for this is for your organization. That will basically lead you to establish the significance levels for your tests, and from there to your sample size and the relevance of your goodness of fit test.
    Keep in mind that if the process you are assessing is truly stochastic in nature, having 100% confidence is practically impossible; in theory it is infinitely expensive, since it requires an ever-increasing sample size. A goodness of fit test (for normality), in a practical sense, will not tell you whether a given population is normally distributed, but rather whether you can use a parameterized (mu, sigma) normal to characterize the distribution of the data.
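    There is no simple closed-form beta for most normality tests, but you can estimate it by simulation. A minimal sketch (Python with scipy; the non-normal “truth”, sample size, and number of simulations are all illustrative assumptions) estimating the power of the Shapiro-Wilk test at two alpha levels:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        n_sims, n = 2000, 50          # number of simulated samples and sample size per test

        # Assumed non-normal truth: a moderately skewed gamma distribution
        p_values = np.array([stats.shapiro(rng.gamma(shape=4.0, scale=1.0, size=n))[1]
                             for _ in range(n_sims)])

        for alpha in (0.05, 0.10):
            power = np.mean(p_values < alpha)   # P(reject normality | data truly non-normal)
            print(f"alpha = {alpha:.2f}: power = {power:.2f}, beta = {1 - power:.2f}")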
    Very interesting questions of yours. What exactly are you trying to achieve?  I am intrigued.
       

    #109490

    thandi
    Participant

    Dear Salomon,
      I think you have answered my doubt pretty well. Actually I am in the process of doing a process capability study on a filling process. Before using process capability indices like Cpk I wanted to be sure that my process is normally distributed, since Cpk is a parametric index; otherwise I could use a non-parametric index. Which brings me back to another question: I have heard of, but do not know, any non-parametric process capability indices. I am fairly new to practical statistics and expect to clear many of my doubts using this site.
    Guru

    #109496

    Robert Butler
    Participant

      It’s true that the usual Cpk calculation assumes a normal distribution but capability calculations are not limited to normal processes.  At the risk of sounding like a broken record (I’ve cited this book and the particular chapters in other posts to this forum) I would recommend you check Bothe’s book Measuring Process Capability and, in particular, Chapter 8 – Measuring Capability for Non-Normal Variable Data and Chapter 9 – Measuring Capability with Attribute Data.
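    For reference, one common non-normal approach (in the spirit of the percentile methods covered in that book, though not necessarily its exact formulas) replaces the usual ±3-sigma limits with the 0.135% and 99.865% percentiles of a fitted distribution. A minimal sketch in Python with scipy, using an assumed lognormal fit and made-up spec limits:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        data = rng.lognormal(mean=3.9, sigma=0.05, size=200)   # placeholder fill weights
        LSL, USL = 46.0, 54.0                                  # hypothetical spec limits

        # Fit a lognormal and take the percentiles that play the role of mu +/- 3 sigma
        shape, loc, scale = stats.lognorm.fit(data, floc=0)
        p_low  = stats.lognorm.ppf(0.00135, shape, loc, scale)
        p_med  = stats.lognorm.ppf(0.5,     shape, loc, scale)
        p_high = stats.lognorm.ppf(0.99865, shape, loc, scale)

        # Percentile-based analogue of Cpk
        cpk_equiv = min((USL - p_med) / (p_high - p_med),
                        (p_med - LSL) / (p_med - p_low))
        print(f"percentile-based capability index ~ {cpk_equiv:.2f}")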

    #109537

    Salomon
    Member

    Interesting,
    As Robert noted very well, there are other distributions you can use to build a process capability study. Let me check, but I don't know how convenient a non-parametric index may be, especially for a filling process, which – I guess – may be a good candidate for a parametric index.
    A quick way to overcome the limitation on knowledge about the distribution of a given metric of your process – and it is rather easy to implement – is to work with the mean of the parameter you are estimating: take a batch or a time interval and measure the parameter, then obtain the mean of the sample (the batch size is not the critical number here, keep reading); repeat as many times as required – the number of repetitions is actually the sample size n for a study on the distribution of the mean. Your variable under study is then the mean of the batch means. Now you can apply any statistical study (capability, confidence intervals, etc.) using the appropriate estimator for the mean of the means. Theoretically, the mean of the means is an unbiased estimator of the true mean, its variance is sigma squared over n, and – by the central limit theorem – it will be approximately normally distributed regardless of the true distribution of the individual observations. You will not lose relevance or significance on your parameter, and you can set upper and lower control limits without further assumptions on the true distribution. Just remember to make the appropriate statement in your reports: it is not the distribution of the individual values, it is the distribution of the mean… so find a batch size or time interval and frequency of sampling relevant to the quality and cost objectives of your department.
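    A small simulation makes the point (a Python/scipy sketch with purely illustrative skewed data and an arbitrary subgroup size): the individual values are strongly skewed, while the subgroup means are far closer to normal.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)

        # Strongly skewed "individual" measurements (exponential, purely illustrative)
        individuals = rng.exponential(scale=2.0, size=5000)

        # Form subgroups of 25 and take each subgroup's mean
        subgroup_means = individuals.reshape(200, 25).mean(axis=1)

        for label, x in (("individuals", individuals[:200]), ("subgroup means", subgroup_means)):
            p = stats.shapiro(x)[1]
            print(f"{label:15s}: skewness = {stats.skew(x):+.2f}, Shapiro-Wilk p = {p:.4f}")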
    Good luck!

    #109545

    thandi
    Participant

    Salomon,
       I get what you are saying: basically, the distribution of the means will be approximately normal irrespective of the underlying distribution. But there is one hitch with my problem. I have specifications for fill weight for each individual unit, so a control chart will tell me that my process is in control, but it will not tell me my process capability with respect to complying with my weight tolerances. Any ideas? Please do share.
    Guru

    #109555

    Salomon
    Member

    Then you must certainly perform a process capability study based on individual information. Most QA texts will tell you how to do this if the population is normally distributed. There are twists, not so complicated, to the formulae in order to find the capability of any process distribution, but you must perform a goodness of fit test. My suggestion:
     
    1) Have a basic sample (or historical data) available – sample size will not be much of an issue at this point. Create frequency and probability histograms… see what the data look like in shape.
     
    2) Create three or four hypotheses on the distribution. My guess is that, given your process, you may be in good shape if you take the normal, the chi-square (only positive values, right?), a beta (it will be tricky to theorize the parameters, but in practice it turns out fairly easy, since in some cases a beta may converge to a normal, and you can actually start parameterizing from the mean), and an Erlang or lognormal.
     
    3) Using any sample size theory (and the initial values from the sample in step 1), find a new, significant sample size to perform your goodness of fit test. I would go for the chi-square test, as it has better discrimination power. Make sure you use the equal-probability bin technique (see the sketch after these steps). It is best if you use a sample collected independently of the one used in step 1. It may turn out that more than one distribution is a good fit; use the one that is farthest from the critical value (but still a fit!).
     
    4) Once you decide on the distribution to use, work out the percentiles that you are going to look for according to your confidence (or significance) levels, as we discussed before: 90, 95, 99, etc. Keep in mind that you want a two-sided interval, P(a <= x <= b) = 1 - alpha.
     
    5) Once set, you can actually determine the capability of the process: an interval (a,b) that should match your quality standards will lead you to find the % of units in compliance with your tolerance, or by setting the number of standard deviations away from your expected value, you can find the % within standards… you can always find the associated threshold values, it just requires some algebra with the cumulative distribution function.
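    As a concrete illustration of steps 2–5, here is a minimal sketch (Python with scipy; the data, candidate distributions, and spec limits are all made-up assumptions): a chi-square goodness of fit test with equal-probability bins, followed by the fraction of units within tolerance under the chosen fit.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        data = rng.normal(loc=50.0, scale=1.2, size=250)   # placeholder fill-weight sample
        LSL, USL = 46.0, 54.0                              # hypothetical tolerance limits

        k = 10                                             # number of equal-probability bins
        candidates = {"normal": stats.norm, "lognormal": stats.lognorm, "gamma": stats.gamma}

        best = None
        for name, dist in candidates.items():
            params = dist.fit(data)                        # maximum-likelihood fit
            # Equal-probability bins: cut points at the k-1 interior quantiles of the fit
            cuts = dist.ppf(np.linspace(0, 1, k + 1)[1:-1], *params)
            observed = np.bincount(np.searchsorted(cuts, data), minlength=k)
            expected = np.full(k, len(data) / k)
            # ddof accounts for the parameters estimated from the data
            chi2, p = stats.chisquare(observed, expected, ddof=len(params))
            print(f"{name:10s}: chi2 = {chi2:6.2f}, p = {p:.3f}")
            if best is None or p > best[1]:
                best = (name, p, params, dist)

        # Step 5: fraction of units within tolerance under the chosen fit
        name, p, params, dist = best
        in_spec = dist.cdf(USL, *params) - dist.cdf(LSL, *params)
        print(f"chosen fit: {name} (p = {p:.3f}); estimated fraction within spec = {in_spec:.4f}")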
     
    Note that the distributions I proposed are continuous. If the actual filling-related values are used to identify defective items, and the number of defectives is what your process is trying to reduce, you may want to try a binomial or a Poisson – not on the filling values, but on the number of defective items. You will get to a point where you may need to generate random deviates from these distributions; it is fairly easy to do this in Excel… see “inversion” in simulation or statistics books.
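    For the discrete case, the “inversion” idea can be sketched as follows (Python with scipy; the mean defect count per batch is a made-up assumption): draw uniform random numbers and push them through the inverse CDF.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(6)
        mu = 3.2                                         # assumed mean defective units per batch

        u = rng.uniform(size=1000)                       # uniform(0, 1) draws
        defects = stats.poisson.ppf(u, mu).astype(int)   # inverse-CDF ("inversion") sampling

        print(f"simulated mean = {defects.mean():.2f} vs theoretical mean = {mu}")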
     
    Keep it simple at first, to reduce the cost and time of your study. If more discrimination is needed, or you find the data lead you to an impractical conclusion (or are even inconclusive), then redo the study with a larger sample size until the appropriate metrics are reached. Rather than seeing this as a kind of “rework”, it actually decreases the cost of re-sampling when a study is not correctly performed, since these iterations prevent you from implementing a decision that later will not improve your quality. Most texts won't tell you about this iterative approach – they all theorize on estimators for the parameters… and will not tell you how to obtain your very first values, not to mention the confidence intervals on those…
     
    Let me know how it turns out… good luck.

    #109595

    thandi
    Participant

    Salomon,
      Your suggestions were really informative. Actually, I was aware of the tolerance interval method you suggested, but it assumes normality. I guess that, since my process is a filling process, I should have no problem proving normality. If you come across any non-parametric process capability indices, please do share. (I will try to find the book suggested by Robert.)
    Thanks a lot for all the help
    Guru
