iSixSigma

Normality Tests


Viewing 24 posts - 1 through 24 (of 24 total)
  • Author
    Posts
  • #32235

    Gavin
    Participant

    I have recently been instructed by one of our organisation's quality managers to perform normality tests on process data using Minitab. The results of these tests have produced P-values between 0.000 and 0.081. As I understand it, a P-value below 0.05 indicates non-normal data. The data is still within the control and specification limits, so should I be especially concerned with the results of the tests? Somehow, I can't help but feel that we are using this quality tool because it's available and are simply trying to baffle our senior managers with science. In many cases I have been instructed to produce these tests, but then nothing is done about the results. People's thoughts and opinions would be appreciated.

    0
    #85871

    Zilgo
    Member

    As unfortunate as that situation is, I find it common as well.  People want to see a p-value even though they have no idea what it tells them.  In any case, you are getting data that is generally non-normal.  So any analysis performed after that is likely to be invalid unless normality is not required (which is true in a few cases, but unlikely).  You could always point that out, explaining that non-normal data requires further effort to interpret correctly.  Or you could flat out refuse to give the results unless something is done about it.  If they just want pretty charts for a presentation, they can get them themselves.  Change often requires a kick in the ass to get people moving.

    0
    #85877

    Lomax
    Participant

    Gavin,
    I agree with Zilgo…Why are you conducting normality tests in the first place? Typically, normality tests are performed to satisfy the underlying assumptions of more powerful statistical tests.  What ultimate goal are you (or your Quality manager) trying to achieve? You already mentioned the product is meeting specifications.  I would be concerned about your reference to control limit (SPC) compliance, depending on the degree of non-normality in the data.
    Setting an alpha or beta value to compare the p-value to is nothing more than a line in the sand.  Where you draw that line depends on the risk level, financial impact, etc. of the decision towards some goal or objective.
    Call me a purist…but every tool has an intended purpose and limitations. When you extend beyond the purpose or ignore the limitations, something or someone typically gets hurt… 

    0
    #85902

    B. Wendell Jones
    Participant

    Good Morning,
    Good stuff.  Both Zilgo & Neal offer good points.  I recently attended Minitab training (but am still no expert) and there was a lot of emphasis placed on the various t-tests and on checking the normality of your data before performing them.  As I understood it, the check for normality is to confirm your data set has not deviated significantly from a normal distribution.  If it has not, proceed with the t-test, confident that your data set is “robust”.  If it has, proceed with caution and/or investigate what possible sources of variation might be contributing to your data being non-normal.
    Also,  even though your data set is operating within spec limits – does that also mean the process is producing a predictable result over time?  Having a process operating predictably is not the same as operating within spec limits. Non-normal data would lead me to believe the process is not behaving or producing a predictable result.  What are the natural process limits?
    My thoughts … I am learning as I go …
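    A minimal sketch of that workflow, assuming Python with scipy; the two groups, sample sizes and alpha below are illustrative, not from the original post:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    group_a = rng.normal(loc=10.0, scale=1.0, size=30)   # illustrative data
    group_b = rng.normal(loc=10.5, scale=1.0, size=30)

    alpha = 0.05  # illustrative risk level
    normal_a = stats.shapiro(group_a).pvalue > alpha
    normal_b = stats.shapiro(group_b).pvalue > alpha

    if normal_a and normal_b:
        # Both samples look consistent with normality: use the two-sample t-test.
        result = stats.ttest_ind(group_a, group_b)
    else:
        # Otherwise fall back to a nonparametric test (Mann-Whitney U).
        result = stats.mannwhitneyu(group_a, group_b)

    print(result.pvalue)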

    0
    #85905

    JZuzik
    Participant

    Sometimes your process will never be normal – it’s the process’ natural state due to inherent limitations (ceilings and floors). You don’t explain what you’re measuring, so I have no idea whether your process should naturally be normal or non-normal.  To go further in process improvement, transform the data set and pursue other tests.  I agree wholeheartedly with the other postings; sometimes managers want to impress others with high-level stuff, not really understanding that they look foolish because they’re presenting/explaining it the wrong way. And I also need to put my two cents in on this: if nothing is being done about any of the data you present, look for a new job with a manager that supports you!

    0
    #85908

    Six Sigma Saviour
    Member

    Hi Gavin,
    First, what type of non-normal data is it?  Left/right skewed, granular, high kurtosis or something else?  If you are not sure how to tell, do you have someone you can ask? If not, you can always send me the data and I will give you a hand trying to interpret it.  Secondly, what are you using this data for? Are you wanting a Cpk?  A p-value is just a hypothesis test of sorts to see whether a data set can be called a “normal” distribution or not, and is not particularly useful for decision making in itself.
    LOL @ baffling senior management with science :) Yes, I think this is somewhat universal.  I may laugh, but it’s an ironic laugh.  It sounds like you are being pushed to present data that will make your manager look hi-tech to their boss, but that has little or no value other than wow-factor.  Minitab is to management what fireworks are to big crowds – ooooooh, ahhhh.  Haha.
    Again, I’m not sure if I helped or just ranted.  But if you do want help with what you are doing, I will try.

    0
    #85909

    Bill Kleintop
    Participant

    A normality test is done to determine whether the data you are working with is significantly different from the shape a normal distribution (bell curve) takes on. It is somewhat of an art to determine this. Skewness and kurtosis statistics help. There are rules for the percentage of cases that fall within 1, 2 and 3 standard deviations of the mean in a normal distribution. You can plot a graph to check normality. A p-value alone will not tell you whether you have a normal distribution.
    t-Tests and other tests of means, variances, etc. depend, as one of their assumptions, on an underlying normal distribution of the data set. If the data is not normally distributed, the statistical test results will be inaccurate. There are nonparametric tests which can be used for non-normal data.
    There is some robustness in statistical tests that allows you to stretch the assumptions. If there is a large enough number of cases that you are studying, such as 50 and above (some say 30 and above), the central limit theorem lets you use a t-test without too much risk of violating the normality assumption.
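    A rough sketch of those checks (skewness, kurtosis and the 1/2/3 sigma coverage rule), assuming Python with scipy; the data below is simulated, not real process data:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    x = rng.normal(size=200)              # simulated data; replace with process data

    print("skewness:", stats.skew(x))     # near 0 for a normal distribution
    print("kurtosis:", stats.kurtosis(x)) # excess kurtosis, also near 0 for normal

    # Share of observations within 1, 2 and 3 standard deviations of the mean;
    # roughly 68%, 95% and 99.7% are expected under normality.
    z = np.abs((x - x.mean()) / x.std())
    for k in (1, 2, 3):
        print(f"within {k} sigma: {(z <= k).mean():.3f}")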

    0
    #85913

    Sach
    Member

    Capability indices and defects-per-million calculations are based on the data coming from a normal distribution. The whole notion that 6 sigma = 3.4 defects/million comes from the assumption that the data is normal. Hence you need to check normality; you cannot claim the capability of a process is good or bad when your data is not normal.
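    For illustration, a sketch of the normal-theory capability calculation being referred to, assuming Python with scipy; the specification limits and measurements are made up:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    x = rng.normal(loc=5.0, scale=0.1, size=500)   # made-up measurements
    lsl, usl = 4.7, 5.3                            # made-up spec limits

    mu, sigma = x.mean(), x.std(ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)

    # Predicted defects per million: valid only if the data really is normal.
    ppm = 1e6 * (stats.norm.cdf(lsl, mu, sigma) + stats.norm.sf(usl, mu, sigma))
    print(cp, cpk, ppm)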

    0
    #85918

    Dennis Yong
    Participant

    Hi friends.
    Control limits for our control charts are calculated based on the assumption that the distribution of the process being studied is normal. The process capability ratios Cp and Cpk are calculated based on the normality assumption too.
    In the electronics industry, there are a number of parameters which do not display a normal distribution. How do we proceed to construct control charts for non-normal distributions?
    1. Use percentiles: +/- 3 sigma represents 99.73% of the population.
    2. Normalize the data and then calculate the control limits as before.
    For Cp and Cpk, you can use:
    1. Pearson families of distributions: refer to Kotz and Lovelace (1998).
    2. Percentiles.
    3. Transformation of the data.
    There may be other methods, and I would be extremely happy if other methods could be shared with us here in this forum group.
    As I do not have Minitab, normality tests that can be performed include:
    1. Probability plots
    2. Shapiro-Wilk test
    3. Kolmogorov-Smirnov test
    4. Anderson-Darling statistic, etc.
    You can refer to the excellent book by Shapiro (1980) for more discussion.
    Personally, I believe we should perform tests for normality before we construct control charts and process capability studies, but I wonder how many people in industry are doing it. Good discussions of this are the Kotz and Lovelace book and D.C. Montgomery, Introduction to Statistical Quality Control.
    In any case, I am finding this forum group very interesting and I have learned a lot from it.
    Cheers and warmest regards to all,
    Dennis Yong
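    A minimal sketch of the tests listed above, with Python and scipy standing in for Minitab; the data is simulated purely for illustration:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x = rng.normal(size=100)   # simulated data; replace with real measurements

    print(stats.shapiro(x))    # Shapiro-Wilk test
    # Kolmogorov-Smirnov against a normal with parameters estimated from the sample
    # (estimating the parameters from the same data makes this p-value approximate).
    print(stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1))))
    print(stats.anderson(x, dist="norm"))   # Anderson-Darling: statistic plus critical values
    # stats.probplot(x, dist="norm") supplies the points for a normal probability plot.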
     
     

    0
    #85921

    Adam
    Participant

    All the replies you have had are good. As a summary, normality tests are used as a stepping stone to the next phase of analysis; they mean little on their own. Whether it matters that your process data is normal depends entirely on what it is you want to know.
    If you want to know how capable your process is, normality tests will inform you which capability analysis tool to use or that you need to transform your data first.
    If you want to analyse further using ANOVA, T Tests or other statistical tools then again normality tests will inform you what tools you can use or again that you need to transform your data before analysis.
    I can think of no circumstance where saying “Look, we’ve got normal data” would mean anything on its own.

    0
    #85922

    Carl H
    Participant

    I agree that normality tests are a proper first step before jumping right into parametric (normal) hypothesis tests.  If data is normal, great.  If not, proceed with caution and/or use non-parametric tests (less powerful).
    If data is not normal, this is often critical.  It can highlight special causes which ARE the problem (skewness, outliers, two or more modes).  Moving these from special to common cause often IS the solution.
    Carl

    0
    #85924

    Marc Richardson
    Participant

    Dennis,
    Unless you are running individuals charts on your processes, there is no need to assess the normality of the process outputs’ distribution prior to charting it. This is because, if you are running average charts, the central limit theorem comes into play. The theorem states that averages of subgroups drawn from the process stream tend to be normally distributed regardless of the shape of the underlying distribution.
    In my experience, lack of control on a control chart and non-normality in the distribution go hand in hand; if you have one, you have the other. For example, if you have a significant upwards shift in the control chart data, a histogram of the same data will show a definite right skew.
    Distributions are defined by three parameters: central tendency, spread and shape. If the shape of the distribution being analyzed does not conform to the shape of the normal distribution, then your assumptions about what percentage of the distribution is under the curve at the point of interest will be incorrect. It’s simple geometry really.
    Marc Richardson,
    Sr. Q.A. Eng.
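    A quick simulation of that point; the skewed distribution and subgroup sizes are arbitrary choices for illustration (Python with scipy assumed):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # A clearly right-skewed individuals distribution (exponential).
    individuals = rng.exponential(scale=1.0, size=60000)
    print("skewness of individuals:", stats.skew(individuals))

    # Skewness of subgroup averages shrinks as the subgroup size grows,
    # which is the central limit theorem at work.
    for n in (2, 5, 25):
        xbar = individuals.reshape(-1, n).mean(axis=1)
        print(f"skewness of subgroup means (n={n}):", stats.skew(xbar))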
     

    0
    #85931

    Mark Pembrooke
    Participant

    I am new to Six Sigma and thus a little confused about the discussion of P values and normality.
    I have only used P values in Minitab and SAS, not to test normality so much as to determine goodness of fit. If the P value for an (x) variable is less than alpha in a regression, then the (x) variable is said to be significant in explaining the (y) variation in the equation. The P value is used as a shortcut to determine “… the minimal level of significance, (alpha), that can be chosen for the test and result in rejection of the null hypothesis” (Dielman, Applied Regression Analysis for Business and Economics, second edition, p. 89).
    Normality is assumed from the sample (central limit theorem) in order to explain the characteristics of the population. I would agree with the others that there are other tests you can (should) use for normality tests.
    My question to the group is whether P values are taught in Six Sigma to be used as tests for normality? Why?
    I appreciate the discussion,
    Mark

    0
    #85933

    Zilgo
    Member

    P-values are used in all hypothesis tests.  Goodness of fit tests are just a type of hypothesis test that deals specifically with the impact of an X on a Y, used in model building.  But there are many other hypothesis tests besides goodness of fit tests.  One is a normality test, which tells you whether or not your data violates the assumption of normality for things like ANOVA and regression.
    So it’s not that p-values are also interpreted as normality tests.  They are not.  But you can run a normality test, and its results are expressed as a p-value.

    0
    #85934

    Mark Pembrooke
    Participant

    I agree, thanks for the clarification.
    Mark

    0
    #85937

    Keith M. Bower
    Participant

    My guess is that they will be requesting an inspection of the distribution to assess whether the normal distribution is an adequate fit, with an eye to process capability estimates. 
    As has been discussed in Quality Engineering (see Somerville & Montgomery, 1996), and elsewhere, capability estimates such as Cp and Cpk are highly sensitive to the assumption of normality.  Even mild departures from the normal model may provide highly misleading defect rates.
    When the assumption of normality is violated, other procedures, e.g. transforming the data or using an alternative model to assess the proportion defective, may be appropriate.
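    A rough illustration of that sensitivity, assuming Python with scipy; the lognormal model and the spec limit are invented for the example:

    import numpy as np
    from scipy import stats

    # A mildly skewed process modelled as lognormal, versus a normal model
    # fitted to the same mean and standard deviation.
    dist_true = stats.lognorm(s=0.25)          # invented shape parameter
    mu, sd = dist_true.mean(), dist_true.std()

    usl = mu + 3 * sd                          # invented upper spec limit

    ppm_normal_model = 1e6 * stats.norm.sf(usl, mu, sd)  # what the normal model predicts
    ppm_true = 1e6 * dist_true.sf(usl)                   # what the skewed process delivers
    print(ppm_normal_model, ppm_true)          # the two defect rates differ substantially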

    0
    #85947

    K.Lee
    Participant

    The p-value is only as good as the data, so use it as a guideline. You can also test your null hypothesis with regard to the reliability of the data. The p-value will (hopefully) indicate how much to trust the result, with 0.05 as the “borderline acceptance”. The higher the p-value, the weaker the evidence that the relationship among the variables in your sample reflects the corresponding relationship in the population. I hope this helps…
    KL

    0
    #85949

    Gabriel
    Participant

    Marc,
    As you said, “The theorem states that averages of subgroups drawn from the process stream tend to be normally distributed regardless of the shape of the underlying distribution”. I would add “as the subgroup size increases”. If the process is far from normal, then a small subgroup size may not be enough to get the averages normally distributed. Of course, instead of testing the process for normality, Xbar can be tested to check.
    Yet there is no general agreement about whether SPC can be used with any distribution without any special concern, even in those cases where the charted parameter is not normally distributed. To give some examples, R charts and even p (attributes) charts use ±3 sigma control limits, and those parameters are clearly not normal even if the individuals distribution is normal. So if we don’t care about normality for R and p, why would we care for Xi or Xbar?
    And finally we get to the capability indexes. One way to see those indexes is as a tolerance-to-process-spread ratio. The problem then is how to define “process spread”. We all know there are several parameters for spread, such as sigma, range, variance, interquartile span, etc.
    One way to define spread is just 6*sigma, and we have the classical formula for Cp and Cpk. If the process is normally distributed, then it happens that the central 6 sigmas cover from the 0.135% percentile to the 99.865% percentile. So another way to define “spread” could be the span between those percentiles. The two definitions are equivalent only for the normal distribution, so for other distributions using one or the other will lead to different values of the index. Does this mean that one way is correct and the other is not? I don’t think so.
    OK. If you use the “sigma” way and the process is not normally distributed, then the tables to convert Cp and Cpk to PPM do not work. And so what? One can make such a table for any distribution wanted. And after all, if the process is capable (let’s say Cpk = 1.5) the estimation will be very rough anyway, because you will have so few parts in the out-of-specification zone (if you have any parts at all in that zone) that it will be almost impossible to prove that the process matches ANY distribution that far in the tails, normal or other. Anyway, 0.43, 4.3 or 43 PPM is very good, and 1000, 2000 or 5000 PPM is bad, so who wants an exact value after all? The key is to improve. You can’t improve what is not measured, so to verify that you improved you have to measure the process before and after the improvement activities. And if you improved the process then the capability index will improve, no matter which formula was used (as long as you always use the same formula, of course). If you want an estimation of the process’ PPM, better to use the sample’s PPM (or an upper confidence limit for a given degree of confidence) as the estimator. The sample is not large enough to get a usable value of PPM? Then it is not large enough to validate whether the model distribution fits the process distribution in the out-of-tolerance zone.
    A final comment. I agree that, sometimes, a non-normal histogram is the image of an unstable process (especially when the histogram shows bimodality or outliers). Yet there are A LOT of characteristics for which the strange thing would be that they were normally distributed. Examples: roughness, ovality, position error, taper, sound level, endurance, number of defects, administrative errors, times to provide a service and, in general, any characteristic that has a physical bound and where the size of the spread is comparable to the position relative to that bound. If the time to deliver a pizza ranges from 25 to 30 minutes, it might be normally distributed. If it ranges from 2 to 10, I bet it will not.
    I guess that if we all spent more time managing variation (due to either special or common causes) and reducing it, without caring too much about normality, we would be adding more value than using this time fighting against (or managing) lack of normality when it is not of practical significance. This does not apply to those tests that are very sensitive to the normality assumption.
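    A sketch of the two definitions of spread described above, assuming Python with numpy; both the skewed data and the specification limits are made up:

    import numpy as np

    rng = np.random.default_rng(5)
    x = rng.lognormal(mean=0.0, sigma=0.3, size=2000)   # made-up skewed data
    lsl, usl = 0.3, 2.5                                 # made-up spec limits

    # "Sigma" definition of spread: 6 standard deviations.
    cp_sigma = (usl - lsl) / (6 * x.std(ddof=1))

    # "Percentile" definition of spread: span from the 0.135th to the 99.865th percentile.
    p_low, p_high = np.percentile(x, [0.135, 99.865])
    cp_pct = (usl - lsl) / (p_high - p_low)

    print(cp_sigma, cp_pct)   # the two definitions agree only for a normal distribution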

    0
    #86336

    M S SUDHAKAR
    Participant

    This is a problem that I face too. In my study on a particular system, the data that I collected lie between 0 and 1, and when I run the normality test, it is non-normal.
    When I record the data to the second decimal place, it becomes normal. But this is not always possible; I can get the data only to a single decimal place. How should I proceed – treat it as normal or non-normal?
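    A small sketch of that effect, assuming Python with scipy; the data is simulated, but it shows how rounding to one decimal place alone can make a normality test reject:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)
    x = rng.normal(loc=0.5, scale=0.05, size=100)   # simulated values between 0 and 1

    print("2 decimal places:", stats.shapiro(np.round(x, 2)).pvalue)
    print("1 decimal place: ", stats.shapiro(np.round(x, 1)).pvalue)
    # With one decimal place the data collapses onto a handful of distinct
    # values (granularity), and the test will typically flag it as non-normal.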

    0
    #86420

    Bruce Baker
    Participant

    The p value that falls out of a normality test is really the same as any other p value.  If it is lower than your selected acceptable alpha risk then you should reject the null hypothesis and therefore accept the alternate.  In the case of normality tests the null hypothesis is that the data is normally distributed. 
    Not all normality tests are equal.  Some are more sensitive to deviation from normality in different parts of the distribution.  For example, the Anderson-Darling procedure is more sensitive to deviation in the tails of the distribution.  On the other hand, the Kolmogorov-Smirnov test is more sensitive to deviation in the body of the distribution.  It is possible that a given data set will reject the null on one procedure and accept the null on another.
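    A sketch of how the two tests can disagree on the same data, assuming Python with scipy; the heavy-tailed sample and sample size are arbitrary choices:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    x = rng.standard_t(4, size=200)   # heavy tails, but a normal-looking body

    # Anderson-Darling: compare the statistic with its 5% critical value.
    ad = stats.anderson(x, dist="norm")
    print("A-D statistic:", ad.statistic, "5% critical value:", ad.critical_values[2])

    # Kolmogorov-Smirnov against a normal fitted to the sample.
    ks = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))
    print("K-S p-value:", ks.pvalue)
    # Because A-D weights the tails more heavily, it can reject while K-S does not.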

    0
    #86421

    Bruce Baker
    Participant

    You probably will baffle the senior management of your organization.  That doesn’t mean that it is a useless procedure. 
    When you have marginal p values you should look beyond the p value.  A good place to start is the normal plot of the data.  If the points fall on or near the line in the middle but off the line at the ends (tails), then it is possible that you have some unusual observations that indicate potential special causes.  If you have a definite nonlinear pattern, that is, the points form a kind of curve about the normal line, then you may have a non-normal but identifiable distribution in your data.  This is very frequent for cycle time data and where your process operates near some kind of natural bound.
    The histogram is another good diagnostic tool.  If you see bimodality, you should consider the possibility that the data are coming from different processes.  Perhaps they are normal individually, but when you mix them and call it all one distribution, it looks non-normal.
    Minitab also allows you to easily work with the Weibull distribution for capability indices, and it will try a Box-Cox transformation on non-normal data.
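    A sketch of the Box-Cox idea on made-up right-skewed data, with scipy's boxcox standing in for Minitab's implementation:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(9)
    x = rng.lognormal(mean=1.0, sigma=0.6, size=300)    # made-up right-skewed data

    print("before transform:", stats.shapiro(x).pvalue)

    # Box-Cox finds a power transform (lambda) that makes the data as normal as it can.
    x_t, lam = stats.boxcox(x)
    print("lambda:", lam)
    print("after transform: ", stats.shapiro(x_t).pvalue)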
     
     

    0
    #86602

    Marc
    Participant

    Hi, it has been very interesting reading all the comments.
    I have a question regarding Normality Tests using Minitab.
    I notice that when we conduct a normality test in Minitab, we do not specify an alpha value for the 3 tests in Minitab (Anderson-Darling, Shapiro-Wilk & Kolmogorov-Smirnov).
    Therefore, how do we decide whether the data is normal or non-normal? (based on which alpha value?)
    Hope someone can enlighten me on this issue. Thanx…:) 

    0
    #86603

    faceman888
    Participant

    What you do is determine how much risk you are willing to take of making a Type I error (rejecting the null hypothesis when it is in fact true), i.e. of saying that the data is not normal when it really is.  You need to express this risk as a probability (if you are willing to accept 5% risk, then your alpha is 0.05, for example).  Then run the test.  If the p value is less than the alpha that you selected, then you have a significant result: you reject the null hypothesis that the data is normally distributed and therefore adopt the alternate hypothesis that the data is not normal.  If the p value from the test is greater than the alpha that you picked, then you fail to reject the null hypothesis and proceed as if the data is normally distributed.
    Good luck.
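    A minimal sketch of that decision rule, assuming Python with scipy; the alpha value and data are illustrative:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    data = rng.normal(size=50)      # illustrative data

    alpha = 0.05                    # chosen Type I error risk
    p_value = stats.shapiro(data).pvalue

    if p_value < alpha:
        print("Reject the null hypothesis: treat the data as non-normal.")
    else:
        print("Fail to reject: proceed as if the data is normally distributed.")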

    0
    #99958

    Grossmann
    Participant

    I found this forum very interesting as well, but there is one “but”… I am still searching for the basics, e.g. what the AD value tells us in a normality test, or the KS value when running a second test – I have no idea what these values mean. I would like to know the title of a book or a site where I could learn how to interpret these values. Thanks for the help. Michal

    0

The forum ‘General’ is closed to new topics and replies.