iSixSigma

DOE Questions

Six Sigma – iSixSigma Forums Old Forums General DOE Questions

Viewing 3 posts - 1 through 3 (of 3 total)
  • Author
    Posts
  • #51475

    newbie
    Participant

How does running repeats (i.e., multiple parts) improve the precision of a given design?  For example, suppose more than one replicate is too expensive or time-consuming, but once a given treatment level has been established, running multiple parts would not be difficult.

Where is the benefit in this tactic?  Improved precision with respect to part and measurement variation?  Is the juice worth the squeeze on this practice (generally)?
    Blocking – is this technique used where the factor is known, yet uncontrollable?  For if it were controllable and potentially causal, one would simply include it as a factor of interest, no?
    Thanks!

    #178460

    Robert Butler
    Participant

It depends on the context of the repeated measure.  If you have a design where one of the responses is an attribute, then you will have to take X number of samples in order to estimate the proportion for that particular experimental combination.  If you have a repeated-measures design, where one of the components of the design is time duration, then you will need to take repeated measures as part of the data-gathering process (experiments where things continue to change after applying the given combination of X's – anything with growth or decay of the output over time).  If you have a standard design where the response is continuous and neither of the above applies, and it is just as easy to take X samples as to take one, you might want to do so if there is some question about how representative a single sample is.
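To make the precision point concrete, here is a minimal simulation sketch (not from the original post; the true mean of 10.0 and noise standard deviation of 2.0 are assumed values) showing how averaging several parts at one treatment combination shrinks the spread of the resulting estimate roughly by the square root of the number of parts:

```python
import random
import statistics

random.seed(42)

# Assumed values for illustration: one treatment combination whose true
# response is 10.0, with part/measurement noise of sd = 2.0.
TRUE_MEAN, NOISE_SD = 10.0, 2.0

def run_once(n_parts):
    """Return the average of n_parts repeat measurements at one setting."""
    return statistics.mean(random.gauss(TRUE_MEAN, NOISE_SD) for _ in range(n_parts))

# Repeat the whole experiment many times to see how much the estimate
# itself varies when each run uses 1 part vs. 10 parts.
singles = [run_once(1) for _ in range(2000)]
tens = [run_once(10) for _ in range(2000)]

print(round(statistics.stdev(singles), 2))  # near NOISE_SD = 2.0
print(round(statistics.stdev(tens), 2))     # near 2.0 / sqrt(10)
```

Note what this does and does not buy you: the averaging tightens the estimate against part-to-part and measurement noise within a run, but it is not a true replicate, so it does not capture run-to-run (setup) variation.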
     
Blocking can be used for both controllable and uncontrollable variables.  Split-plot designs are designs where the variable can be controlled and there is a need for blocks (splits).  In this case the effect of the variable assigned to the block can be estimated – the trick is identifying the proper error terms for computing effect significance.
In the uncontrolled case, blocking prevents confounding the effect of the blocked variable with the effects of the actual variables of interest.  For example, suppose you have two variables at two levels, you can only run two experiments per day, and you know there will be a problem with day-to-day variation.  With the four experiments (-1,-1), (1,-1), (-1,1), and (1,1), one would choose to run the first and the fourth on day 1 and the second and third on day 2.  Day is then confounded with the AB interaction and not with either A or B.
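The confounding in that layout can be checked by hand: coding day 1 as +1 and day 2 as -1, the day column is identical to the product column A×B, while A and B each appear at both levels on both days. A short sketch of that check:

```python
# 2^2 design run over two days, per the example above:
# day 1 gets (-1,-1) and (1,1); day 2 gets (1,-1) and (-1,1).
runs = [(-1, -1, 1), (1, 1, 1), (1, -1, 2), (-1, 1, 2)]

for a, b, day in runs:
    ab = a * b                        # AB interaction column
    day_coded = 1 if day == 1 else -1 # code day 1 as +1, day 2 as -1
    print(a, b, ab, day_coded)        # ab == day_coded on every run

# The AB column and the coded day column match run for run, so the day
# effect and the AB interaction cannot be separated.  A and B, however,
# have both levels represented on both days, so they remain estimable
# free of the day-to-day shift.
```

Running it prints each row with the AB and day columns equal, which is exactly the confounding pattern described: the day-to-day shift is sacrificed to the AB interaction rather than to the main effects.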
     
If you want to read more about these issues, a good book is
     
    Analysis of Messy Data, Volume 1 – Milliken and Johnson

    #178461

    newbie
    Participant

Per usual, thanks a million, doc.  I have already been to Amazon and will read up on it.  Thanks!


The forum ‘General’ is closed to new topics and replies.