iSixSigma

DOE to assess effects of factors on variance


Viewing 11 posts - 1 through 11 (of 11 total)
  • #31645

    johansson
    Participant

    Dear all,
    a question.
    If I want to assess effects on variance using a DOE, can I determine the number of replicates from the expected variance differences, run the DOE, and then use the Test for Equal Variances from the ANOVA folder in Minitab?
    The technique I was taught is, after running the DOE, to analyse the factorial design as usual (for means), then store the residuals, square them, compensate the degrees of freedom, then take the LOG of this to normalize – and then use the DOE factorial plots again. To make sure that it’s clear: LOG((resi**2)*(n+1)/n).
    Do I need to go this way if I do not care about interaction effects on variance – or does a simple ANOVA Test for Equal Variances do the job?
    Many thanks
     

    #83854

    johansson
    Participant

    It is me again – Max. No reply so far – does anyone have any idea?
    Thanks

    #83859

    Robert Butler
    Participant

       Your initial post gives the impression that you are trying to determine the number of replicates of an experimental design needed to assess the effects of DOE variables on measured responses by using a Box-Meyer analysis.  This doesn’t make sense.
      If you have an unreplicated design (or one with no replicates except at a few chosen points) and you are interested in assessing the impact of design variables on response variance, then the Box-Meyer approach for the assessment of dispersion effects is the method of choice, and it is run about the way you described in the second part of your initial posting:
    From Understanding Industrial Designed Experiments-Schmidt and Launsby
    Box-Meyer
    1. Use a resolution IV or higher design to avoid confounding interaction and dispersion effects.
    2. Fit the best model to the data.
    3. Compute the residuals.
    4. For each level of each design factor, compute the standard deviation of the residuals.
    5. Compute the difference between the two standard deviations for each factor.
    6. Rank-order the differences from #5.
    7. Use a Pareto chart, normal probability paper, or the log of the ratio of the high and low variances of the residuals of each factor to determine the important dispersion effects.
    8. For factors with important dispersion effects, determine the settings for least variation and set location-effect factors to optimize the response average.
    9. If there is a conflict of settings, use a trade-off study to determine which setting is most crucial to product quality.
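    The steps above can be sketched in Python on simulated data. Everything here is invented for illustration – the 2^3 design, the factor names A/B/C, and the dispersion effect placed on B – only the procedure follows the numbered steps:

```python
# Box-Meyer dispersion sketch on made-up data: factor A gets a location
# effect, factor B gets a dispersion (variance) effect.
import numpy as np

# Full-factorial 2^3 design in coded units, replicated 5 times
base = np.array([[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)])
levels = np.tile(base, (5, 1))

rng = np.random.default_rng(0)
noise_sd = np.where(levels[:, 1] > 0, 2.0, 0.5)      # B drives the variance
y = 10 + 3 * levels[:, 0] + rng.normal(0, noise_sd)  # A is a location effect

# Steps 2-3: fit the location model (main effects) and take residuals
X = np.column_stack([np.ones(len(y)), levels])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef

# Steps 4-7: residual SD at each level of each factor, then the log of
# the ratio of the high and low residual variances
disp = {}
for j, name in enumerate("ABC"):
    s_lo = resid[levels[:, j] < 0].std(ddof=1)
    s_hi = resid[levels[:, j] > 0].std(ddof=1)
    disp[name] = np.log(s_hi**2 / s_lo**2)
    print(f"{name}: s_lo={s_lo:.2f}  s_hi={s_hi:.2f}  log var ratio={disp[name]:+.2f}")
```

    A large |log variance ratio| flags a dispersion effect (step 7); here it should single out B, since B is the factor that was simulated with the variance effect.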

    #83860

    Jamie
    Participant

    MAX, since no one has responded, I’ll give it a shot. My answer is no, you cannot use ANOVA to test for differences in variance. On the contrary, ANOVA assumes equal variance within the groups and tests only the means. My suggestion would be: if you have multiple factors and can assume no interaction, then all you need to do is use Bartlett’s test for equal variances for each factor, one at a time. But you may be going out on a limb by assuming no interaction, especially for 2-way interactions. The squared-difference log approach really isn’t that hard to do once you have the data. If you have the data collected in a Minitab worksheet, it takes a few minutes to do.
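    The one-factor-at-a-time Bartlett pattern could be sketched like this in Python with scipy (the two-factor data, factor names A/B, and the simulated variance effect on A are all hypothetical):

```python
# One-factor-at-a-time Bartlett's test for equal variances. The data and
# the variance effect placed on A are invented; only the testing pattern
# is the point.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
base = [[a, b] for a in (-1, 1) for b in (-1, 1)]
levels = np.tile(base, (10, 1))               # 10 replicates per cell
sd = np.where(levels[:, 0] > 0, 3.0, 1.0)     # A inflates the variance
y = rng.normal(20.0, sd)

pvals = {}
for j, name in enumerate("AB"):
    lo, hi = y[levels[:, j] < 0], y[levels[:, j] > 0]
    stat, pvals[name] = stats.bartlett(lo, hi)
    print(f"{name}: Bartlett p = {pvals[name]:.4f}")
```

    Note this collapses each factor to its two levels and, as Jamie warns, it sees nothing about interactions.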
    A note on estimating the number of reps: you will want to do this by looking up, in an F table, the ratio of the two variances that you want to detect (I’m pretty sure Minitab can’t do this for you), where the numerator df = denominator df. The degrees of freedom + 1 will then be your sample size; then divide this number by (the number of corner points in your experiment / 2). So to pick up a 4-fold increase in variance in a 2-factor experiment, I look up 3.79 in the F table (the first number less than 4). I get 7 df, or a sample size of 8. I divide this by 2 (half the number of corner points, 4/2) and get 4 reps. I did this quickly, but I think I got it right.
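    This back-of-the-envelope calculation can be checked with scipy’s F quantiles standing in for the paper F table (the 4-fold ratio and the 2-factor full factorial are Jamie’s example):

```python
# Replicate-count check: find the smallest df with F_crit(0.05, df, df)
# below the variance ratio to detect, then convert df to reps as above.
from scipy import stats

target_ratio = 4.0        # 4-fold increase in variance to detect
alpha = 0.05
corner_points = 2 ** 2    # 2-factor full factorial

df = 1
while stats.f.ppf(1 - alpha, df, df) >= target_ratio:
    df += 1
sample_size = df + 1                       # df + 1 observations per level
reps = sample_size / (corner_points / 2)   # spread over half the corners
print(df, sample_size, reps)               # 7 8 4.0 - matches the 4 reps above
```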
    Hope this helps, Jamie

    #83861

    Jamie
    Participant

    MAX, my apologies … I just reread your post and see that you are clear on ANOVA (my first read was that you were asking if you could use ANOVA). But I did eventually answer your question: if you assume no interactions, then you can use Bartlett’s test for equal variances, one factor at a time.
    Jamie

    #83863

    johansson
    Participant

    Jamie,
    thanks a lot.
    Actually my question started when I did both – the DOE data manipulation as per my original message, and Bartlett’s. Bartlett’s gave me significance for a factor; the DOE way gave no significance, neither for the main factor nor for interactions. I was completely confused about which I should trust. Any idea what might have gone wrong, and more practically – which is more trustworthy?
     
    regards – and have a good weekend  

    #83870

    Jamie
    Participant

    MAX, about all I can say is, well… damn :) The two methods are not mathematically the same, so they could yield different results. Is the DOE method prone to error (meaning you might have made a mistake in the calculation)? I’d say yes, but I’m assuming that you checked it over and think you did it correctly. It does have a number of steps, and you need to make sure you use the correct response variable. Chances are you have 4 or more columns you could use as the response: the original observed response, the residual, the individual variance of the response (I don’t know what else to call (obs – predict)^2 * (n+1)/n), and the log of the variance. It’s possible you are looking at the wrong response; the response should be log(var). Assuming all is right, I would look at the main effects plots and the interaction plots (for log(var)). Also try plotting the straight residuals against each individual factor. Can you see a difference in the variation of the residuals – a different range for one level vs. another? This might tell some of the story. If Bartlett’s shows a difference for one factor, I would really expect to see it in the graphical analysis I’m describing. Possibly also consider the p-values in the DOE analysis. Are they “close” to significant? How different is the p-value from the DOE vs. from Bartlett’s? Also, are the within-group log(var) values normally distributed? Taking the log is done to make them normal, but it might not do so.
    I’m trying to think off the top of my head quickly since I have an appointment, but possibly some of these ideas may help ferret out the inconsistency. Though I’m leaning towards Bartlett’s, since it did show a difference.
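    A minimal sketch of the four candidate response columns named above, built from hypothetical observed and predicted values (the numbers and n = 8 runs are invented):

```python
# The four candidate response columns: observed, residual,
# (obs - predict)^2 * (n+1)/n, and its log. Only log_var should be used
# as the dispersion response in the factorial plots.
import numpy as np

n = 8                                   # runs in the design
obs = np.array([9.1, 12.8, 7.4, 13.9, 10.2, 12.1, 8.8, 14.5])
pred = np.array([9.5, 13.0, 8.0, 13.5, 9.5, 13.0, 8.0, 13.5])

resid = obs - pred                      # residuals
ind_var = resid**2 * (n + 1) / n        # individual variance of the response
log_var = np.log(ind_var)               # the response to analyze
print(np.round(log_var, 3))
```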
    Jamie

    #83871

    jamall Yamak
    Participant

    Max,
    When designing your experiment, you can use the variance as the response variable.  You can do that by replicating each treatment combination many times and calculating the variance.  As you know, the variance follows the chi-square distribution and you cannot use ANOVA directly, since ANOVA assumes equal variances.  So you need to do a variance-stabilizing transformation of the data (mostly using Ln Y to transform) and then do the ANOVA analysis.  This way you can understand which factor influences the variance.
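    A sketch of this approach, assuming a two-factor design with 10 replicates per treatment combination (everything simulated; factor A is given the variance effect purely for illustration):

```python
# Variance-as-response sketch: replicate each treatment combination,
# compute the replicate variance, and carry ln(variance) forward as the
# DOE response. Data and effects are invented.
import numpy as np

rng = np.random.default_rng(2)
ln_var = {}                                   # (a, b) -> ln(variance)
for a in (-1, 1):
    for b in (-1, 1):
        sd = 2.0 if a > 0 else 0.5            # A drives the variance
        reps = rng.normal(50.0, sd, size=10)  # 10 replicates per cell
        ln_var[(a, b)] = np.log(np.var(reps, ddof=1))

# ln(variance) is now the (roughly normalized) response for the analysis;
# a quick look at the main effect of A on that response:
effect_a = np.mean([v for (a, _), v in ln_var.items() if a > 0]) - \
           np.mean([v for (a, _), v in ln_var.items() if a < 0])
print(f"ln-variance effect of A: {effect_a:.2f}")
```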
    Hope this helped.
    Jamal

    #83872

    jamall Yamak
    Participant

    Max,
    Oops, I meant to say: since the variances follow the chi-square distribution (and ANOVA assumes normality), you need to normalize your data.  Sorry about that.
    Jamal

    #83884

    Robert Butler
    Participant

      The exchange between you and Jamie gives me a better understanding of what you are trying to do. I’d like to offer the following:
      There are a couple of problems with trying to compare Bartlett’s method with the Box-Meyer approach.  Jamie pointed out some, but there are other things to consider.  The first is the number of degrees of freedom associated with each of the standard deviations.  Bartlett’s test is not very robust with respect to lack of normality, and if you only have a few degrees of freedom for each standard deviation, it would be very easy for Bartlett’s test to be fooled by your data. 
      The second is the fact that you really should choose a test method before you begin your analysis.  As any statistician will tell you, if you try enough tests on a data set you are bound to find one or more that will give you a significant result.  Since you began with the Box-Meyer approach, I would recommend that you go with the results that you had from that test.  As Jamie noted, if you haven’t already done so, it would be worthwhile doing the Box-Meyer analysis graphically as well as numerically, just to see how close to a significant difference you might have – that is, compare graphs and compute the exact levels of significance.  If you have a situation where you are 92% confident of a difference but not 95% confident, you should certainly consider the possibility that there really is something there and that perhaps, as Jamie noted, the lack of 95% confidence may be due to nothing more than the degrees of freedom that you have available for the test.
      However, if you are determined to try another test, I would suggest the Foster-Burr test instead of Bartlett’s.  It is very robust with respect to non-normality and it does very well when the degrees of freedom associated with each standard deviation are 10 or less.

    #83890

    Jamal
    Participant

    Max
    What is the objective of your experiment?  Is it to know which factor affects the variance, so you can work with this factor to minimize the process variation, or is it to test for homogeneity of variances? These are two different objectives.  Based on which objective you are after, the proper test statistic can be recommended.  Based on your post, I believe it is the first objective.  Please clarify.
    Jamal


The forum ‘General’ is closed to new topics and replies.