Is stability required for DOE

  • #47734

    newbie
    Participant

    According to an interpretation of R.A. Fisher I have been reading, using a randomized block design would negate the necessity of the underlying NIID assumptions. Is this true, and if so, does it mean you can move ahead with a DOE even if the initial inspection indicates an out-of-control process? Thanks!

    #159530

    Chad Taylor
    Participant

    I have read something similar, and I believe this to be true only if the non-stable process is stationary. I also believe this would only be true of long-term process distributions. Maybe some of the other gurus can fill us in?
     
    Chad Taylor

    #159535

    Anonymous
    Participant

    If by out-of-control you are referring to a shift in the mean, and the X causing it is known, then blocking will keep the shift from biasing the model.
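    A minimal numpy sketch of that point (hypothetical numbers; it assumes the time of the shift is known and used as the block): when each level of X appears equally before and after the shift, the shift cancels out of the effect estimate, whereas running all of one level before the shift and all of the other after it confounds the shift with X.

```python
import numpy as np

rng = np.random.default_rng(1)
true_effect = 2.0   # assumed effect of moving X from -1 to +1
shift = 5.0         # assumed upward shift in the process mean

def run(x, shifted):
    """One experimental run: baseline + X effect + optional mean shift + noise."""
    return 10 + true_effect * x / 2 + (shift if shifted else 0.0) + rng.normal(0, 0.5)

# Time-ordered, unblocked: every x = -1 run happens before the shift and
# every x = +1 run after it, so the shift is confounded with X.
y_lo = [run(-1, shifted=False) for _ in range(4)]
y_hi = [run(+1, shifted=True) for _ in range(4)]
print("unblocked estimate:", np.mean(y_hi) - np.mean(y_lo))   # ~ true_effect + shift

# Blocked on the shift: each block contains both levels of X, so the shift
# cancels out of the within-block comparisons.
estimates = []
for shifted in (False, True):
    y_lo = [run(-1, shifted) for _ in range(2)]
    y_hi = [run(+1, shifted) for _ in range(2)]
    estimates.append(np.mean(y_hi) - np.mean(y_lo))
print("blocked estimate:", np.mean(estimates))                 # ~ true_effect
```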

    #159557

    Robert Butler
    Participant

    There are a number of ways to end your sentence: ...the underlying assumption of Normally, Independently, Identically Distributed...what? If you could give us the end of the sentence, and also a citation for what you are reading, perhaps I or someone else might be able to offer some additional thoughts.

    #159558

    Craig
    Participant

    Been there, done that!
    I was involved with a DOE for dry etch in semiconductors. We were aiming to minimize the across-wafer variation; in other words, we needed a process that would etch the center and edges of the wafer uniformly. We could build a decent model, but when it came time for the confirmation runs, we could never duplicate what the model predicted. There were lurking variables that caused the process to be unstable. If a process is very erratic, it can be a real pain to obtain DOE results of any value.
     

    #159559

    newbie
    Participant

    “Fisher's solution to the quandary (to discover how to run experiments on processes and systems that were never in a state of statistical control) of how to run such experiments was the invention of randomized blocks. He showed it was possible to obtain results that to an adequate approximation could be analyzed ‘as if’ the usual assumption about IID (Independence, Identical Distribution) errors were in fact true.”
    – Box, Hunter, and Hunter, Statistics for Experimenters: Design, Innovation, and Discovery, 2nd edition, pages 154-155
    Thanks, everyone.
     
     

    #159560

    Robert Butler
    Participant

    Ok, now I understand the direction of your question. The answer is yes, you can, and the results are very likely to be as hacl described. If you choose to identify significant variables under these conditions you are assuming that the variables you are controlling and changing will have a bigger impact on the process than the unknown variable(s) causing the system to be out of control.
    The possibilities are as follows:
    1. You have at least enough information about the process to block the design on the unknown variable. Result: you run the analysis and find that the block effect as well as other terms in the design are significant.
       a. If the block effect is smaller than the effect of the other significant variables then you will probably be able to change the other significant variables and impact the process in a meaningful fashion in spite of the variation of the unknown variable.
       b. If the block effect is larger than or about the same size as the other significant variables then you will have the situation described by hacl.
    2. You don't have enough information to block on the unknown variable(s). In this case you run a completely randomized design. The effect of the unknown variable(s) will show up in the expression for the model error.
       a. If the effect of the unknown variable(s) is equal to or greater than the effect of the variables of interest, most or perhaps even all of the variables you tested will fail to exhibit significance. In any event the final model will be of little utility.
       b. If the effect of the unknown variable(s) is less than the effect of the significant terms in the model then you will be back to 1a above.
       c. If the effect of the unknown variable(s) is larger than or about the same size as the effect of the variables you controlled, and if, by luck of the draw, you happen to run the randomized design during a period when the unknown variable is not changing significantly, you will most likely develop a good correlation between the controlled variables and the response.
    This model will work as long as the unknown variable(s) remain at a level that has an impact on the process that is less than that of the variables you are controlling. When the unknown variable(s) shift in a direction which increases the variation of the process, your model will fall apart and you will be back to the situation described by hacl.
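    A rough simulation of cases 1 and 2 above (effect sizes, seed, and column names are assumed for illustration, not taken from the thread): the same unknown variable is either absorbed by a block term or left to inflate the error of a completely randomized design, where it can wash out the significance of the controlled factors.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

# Replicated 2^2 design run in two blocks; the unknown variable adds a
# constant offset to everything run in the second block.
x1 = np.tile([-1, 1, -1, 1], 4)
x2 = np.tile([-1, -1, 1, 1], 4)
block = np.repeat([0, 1], 8)
lurking = 8.0                                      # assumed size of the unknown effect
y = 10 + 1.5 * x1 + 0.8 * x2 + lurking * block + rng.normal(0, 0.5, 16)
df = pd.DataFrame({"y": y, "x1": x1, "x2": x2, "block": block})

# Case 1: the block term absorbs the unknown variable; x1 and x2 stay significant.
blocked = smf.ols("y ~ x1 + x2 + C(block)", data=df).fit()
print(blocked.pvalues)

# Case 2: ignore the block; the unknown variable lands in the residual error
# and x1 and x2 can fail to show significance (case 2a above).
unblocked = smf.ols("y ~ x1 + x2", data=df).fit()
print(unblocked.pvalues)
```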

    #159562

    newbie
    Participant

    Excellent, thanks guys...but one clarification, Mr. Butler. When you speak of blocking on the unknown variable(s), you confused me. It was my understanding that we block on known, uncontrolled factors to neutralize their effects, and randomize the run order within the blocks themselves to neutralize, or at least mitigate, the effects of unknown variables (i.e., lurking variables)?
    So when (if ever) can one attempt to leapfrog process instability and attempt an analysis through a randomized block design? Is this always a bad idea, or does it have application under some set of circumstances?
    Thanks for your help.
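    A small sketch of that principle (factor and block names are assumed for illustration): repeat the design once per level of the known, uncontrolled factor and randomize the run order only within each block.

```python
import itertools
import random

random.seed(0)
factors = {"x1": (-1, 1), "x2": (-1, 1)}   # controlled factors
blocks = ["shift A", "shift B"]            # known but uncontrolled factor (assumed)

base_runs = list(itertools.product(*factors.values()))  # full 2^2 factorial

run_order = []
for block in blocks:
    runs = base_runs[:]
    random.shuffle(runs)                   # randomize only within the block
    run_order.extend((block, x1, x2) for x1, x2 in runs)

for i, (block, x1, x2) in enumerate(run_order, 1):
    print(f"run {i:2d}: block={block:8s} x1={x1:+d} x2={x2:+d}")
```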

    #159563

    newbie
    Participant

    Hacl,
    Did you know you had a highly erratic process before the attempt? Would you not detect this instability early on using run charts or control charts? Thanks.

    #159567

    Craig
    Participant

    Newbie,
    The initial problem was that the uniformity across the wafer was too high. The process was somewhat unstable, and we were trying to see if there was a better set point for 5 of the critical variables. Some runs had optimal uniformity (3 to 5%), and some had in excess of 10%.
    At the time we were using Design-Expert software, and it allowed us to determine a choice of optimal solutions. The only problem was that we couldn't hit the predicted values. I recall that it was a limitation of the mounting chuck in the etcher. Over time, as the chuck degraded, you could see the instability arise.

    #159568

    newbie
    Participant

    Thanks Hacl!

    #159573

    Craig
    Participant

    hacl - How did you measure uniformity? What dry etch process?
    Sunny

    #159574

    Craig
    Participant

    Sunny,
    You are making it seem as if I am talking to myself! :-)
    I believe we measured uniformity as shown below. This was over 4 years ago. It was the contact etch process.
    HACL

     
                s1      s2      s3      s4      s5      avg   range       %
    pre       2555    2566    2653    2455    2455
    post      1575    1567    1747    1540    1543
    delta      980     999     906     915     912    942.4      93     10%
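    For reference, a short Python rendering of the calculation in the table above (assuming uniformity % = range of the per-site etch deltas divided by their mean):

```python
pre   = [2555, 2566, 2653, 2455, 2455]   # pre-etch thickness at sites s1..s5
post  = [1575, 1567, 1747, 1540, 1543]   # post-etch thickness at the same sites

delta = [p - q for p, q in zip(pre, post)]   # material removed per site
avg   = sum(delta) / len(delta)              # 942.4
rng   = max(delta) - min(delta)              # 93
print(delta, avg, rng, f"{100 * rng / avg:.1f}%")   # about 10% across-wafer uniformity
```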
