DOE – Replication vs Repeated Measures
 This topic has 36 replies, 18 voices, and was last updated 17 years, 6 months ago by ashok khatri.


October 2, 2003 at 4:08 am #33470
Hoping someone can clarify the difference between replication and repeated measures in a DOE. For example, in an experiment to improve part strength in injection moulding, if you have a design with 8 runs and take 10 shots each run – is the strength of each of the 10 parts repeated measures or replicates? Or do you have to repeat the 8 runs, giving you 2 replicates at each setting?
October 2, 2003 at 6:26 am #90535
Raja Setlur
Repeats are consecutive experimental runs using the same treatment combination (no change in setup between runs), i.e. if you run 10 shots of the first run followed by 10 shots of the second run, and so on.
If however, you run all eight runs once and then all 8 runs once more and so on then it is considered as 10 replicates of the 8 runs. Replicates are duplicate runs of the entire experiment.
The latter is preferable for minimising errors. Sample size calculations apply to replicates not repeats.
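To make the distinction concrete, here is a small Python sketch of the two data-collection schemes (run labels and counts are hypothetical, chosen to match the 8-run, 10-shot example in this thread):

```python
import random

runs = [f"run{i}" for i in range(1, 9)]   # the 8 treatment combinations
shots = 10

# Repeats: set up each condition once, then take 10 consecutive shots.
repeats = [(r, shot) for r in runs for shot in range(shots)]

# Replicates: run the whole design 10 times, re-setting up and
# re-randomizing the 8 runs on each complete pass.
replicates = []
for _ in range(shots):
    block = runs[:]
    random.shuffle(block)   # random order within each replicate
    replicates.extend(block)

print(len(repeats), len(replicates))  # both schemes use 80 trials
```

Both schemes produce 80 measurements; the difference is entirely in how many times the machine is set up and in the run order.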
BTW, is your experiment a 2^3 full factorial DOE?

October 2, 2003 at 8:46 am #90539
Verity
I think this is repetition and not replication, as you are not changing the settings of your injection moulding machine between those 10 shots at each run. Can you send me your DOE analysis if possible?
Milind

October 2, 2003 at 8:59 pm #90584
Thanks for your replies. The reason I was asking is that in training on Taguchi, the trainer was explaining replicates as being 5 consecutive shots at the same setting, or, using the catapult, 5 consecutive throws at the same setting.
I haven’t run the experiment yet, but I’m intending to run an experiment on 4 factors at 2 levels, though probably only 8 runs instead of 16. Do you need to do a replicate when you are screening?

October 20, 2003 at 6:45 am #91226
ALEK DE
Are you changing the setup for the 10 readings? No. This clarifies that these are repeat measurements. Repeats reveal short-term variation; replicates reveal long-term variation, where people, instruments, and setups are changed and measurements are taken. Replication generates pure error; repetition does not.
Thanks
Alek

October 20, 2003 at 2:12 pm #91238
Verity
With Taguchi’s L8 orthogonal array you can examine 4 factors at 2 levels in the following simple way.
Expt no.   F1  F2  F3  F4
   1        1   1   1   1
   2        1   1   1   2
   3        1   2   2   1
   4        1   2   2   2
   5        2   1   2   1
   6        2   1   2   2
   7        2   2   1   1
   8        2   2   1   2
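As a quick sanity check (my addition, not part of the original post), the array above can be verified to be orthogonal — every pair of columns contains each level combination equally often — with a short Python script:

```python
L8 = [
    (1, 1, 1, 1),
    (1, 1, 1, 2),
    (1, 2, 2, 1),
    (1, 2, 2, 2),
    (2, 1, 2, 1),
    (2, 1, 2, 2),
    (2, 2, 1, 1),
    (2, 2, 1, 2),
]

def orthogonal(a, b):
    """True if every (level, level) pair occurs equally often in columns a, b."""
    counts = {}
    for row in L8:
        pair = (row[a], row[b])
        counts[pair] = counts.get(pair, 0) + 1
    return set(counts.values()) == {2}   # each of the 4 pairs appears twice

print(all(orthogonal(i, j) for i in range(4) for j in range(i + 1, 4)))  # True
```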
I hope it helps
Manee

October 20, 2003 at 3:58 pm #91244
As stated by Alek, the 10 shots are repeats and will give you a good idea of short-term variation (within a job, if you will). The 8 runs need to be repeated in random order to create replicates. Without replicates you won’t learn anything about variation “between jobs”. Good luck!
October 20, 2003 at 9:39 pm #91272
If the catapult lesson was the reason you posted, then I would be concerned about what the trainer has told you. Based on the responses so far, I believe that doing 5 shots in a row with a single combination of settings is more of a repetition than a replication. Minitab gets a little cranky if you try to fool her into doing repetitions, since she sets up the design matrix to do replicates.
October 21, 2003 at 2:41 pm #91291
The responses you’ve received are good. The 10 shots are REPEATS of a single run or experimental condition. If you wish to run a fractional factorial (2^(4-1)), which is a resolution IV design, I suggest a replicate of the 8 runs.
The repeats are good for using the average response of the 10 parts, especially when the MSA result was not <10% (for continuous data), to reduce the measurement variation.
drew

October 21, 2003 at 2:50 pm #91292
Why on earth would you replicate a fractional factorial when you could run the full factorial in the same number of runs with the same power?
October 21, 2003 at 3:28 pm #91294
“Without replicates you won’t learn anything about variation ‘between jobs’”
Your post would imply that replicates are required to determine the experimental error – or as you call it the “between job variation”. However, the majority of DOEs that I have been associated with have been without replication and successfully estimated the experimental error due to the hidden replication of factorial designs.
I cannot agree with your statement about replication.

October 21, 2003 at 4:40 pm #91297
Bob Peterson
I would agree with Statman. Generally a DOE does not need to be repeated in its entirety, certainly not when doing a screening experiment. There are mitigating circumstances, however, so think it through to the end and decide how much energy and money you wish to put into resetting process settings versus just gathering more repeats for a given setup.
October 21, 2003 at 5:27 pm #91301
Thanks for the correction, Statman. Replicates increase the accuracy of the estimate of experimental error, thus increasing the power of the experiment. Replication is costly because we reset the factors each time. Thus, its value depends on the situation. Is this correct?
October 21, 2003 at 5:32 pm #91303
I would not recommend this design, as it is a resolution III design.
A is confounded with BC
B is confounded with AC
C is confounded with AB
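The confounding pattern above is easy to see in ±1 coding, where an aliased main-effect column is literally the elementwise product of the other two columns. A small sketch (using a 2^(3-1) half fraction with generator C = AB purely for illustration; my example, not from the original post):

```python
from itertools import product

# 2^(3-1) half fraction with generator C = A*B (in -1/+1 coding).
design = [(a, b, a * b) for a, b in product((-1, 1), repeat=2)]

A = [row[0] for row in design]
B = [row[1] for row in design]
C = [row[2] for row in design]
BC = [b * c for b, c in zip(B, C)]

# The contrast column for A is identical to the one for BC:
# the two effects cannot be separated -- they are aliased.
print(A == BC)  # True
```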
Use the 2^(4-1) resolution IV design. No need to give up resolution with the same number of runs.

October 21, 2003 at 5:45 pm #91305
If the power were the same between a fraction and a full factorial, for the same number of runs, then one would choose to run the full factorial on the slight chance that the interaction effects (e.g., A*B*C) turned out to be significant. However, I believe that many times a fraction results in fewer total runs (though more replicates) than a full factorial of the same minimum power (above some threshold power of, say, 0.90). This can be seen by exploring the example in Minitab Help for calculating power for a two-level fractional factorial design.
October 21, 2003 at 5:45 pm #91306
Yes Gary,
At the risk of being too picky, they increase the precision not accuracy of estimating the experimental error.
It is almost always more efficient to add factors than to add replicates to increase the power of the DOE. For example, it is better to run a 2^4 than to replicate a 2^3, because you will learn about an additional factor at the same cost (number of runs). The power of a 2^4 will be nearly the same (depending on the residual degrees of freedom) as the power of a 2^3 replicated twice.
Experimental efficiency is getting the maximum information from each degree of freedom in a DOE.
Highest Regards,
Statman

October 21, 2003 at 6:22 pm #91308
Hi Gary,
First of all, Minitab will never give you fewer than 2 replicates in the power and sample size calculation for a 2^k factorial, and it will not allow the determination of power for an unreplicated design, so you cannot compare the power of an unreplicated design to a replicated design in Minitab. I don’t know why, but this is the way it is. What you can do in Minitab is compare replicated designs with different numbers of factors while holding the total number of runs constant, and see that the power stays the same. For example, let’s compare 16-run DOEs with 3, 4 and 5 factors:
Power and Sample Size
2-Level Factorial Design
Sigma = 1   Alpha = 0.05
Factors: 3   Base Design: 3, 8   Blocks: none

Center Points Per Block   Effect   Reps   Power
0                         2        2      0.9367

Power and Sample Size
2-Level Factorial Design
Sigma = 1   Alpha = 0.05
Factors: 4   Base Design: 4, 8   Blocks: none

Center Points Per Block   Effect   Reps   Power
0                         2        2      0.9367

Power and Sample Size
2-Level Factorial Design
Sigma = 1   Alpha = 0.05
Factors: 5   Base Design: 5, 8   Blocks: none

Center Points Per Block   Effect   Reps   Power
0                         2        2      0.9367
As you can see, the power remains the same even though I have looked at a full factorial, a resolution IV design, and a resolution III design.
For a balanced 2^k factorial design, the power depends on the total number of runs in the experiment, since the standard error of an effect is equal to 2*s/sqrt(N), where N is the total number of runs in the experiment.
To be precise, I should have said that the power will be nearly the same, depending on the number of degrees of freedom that are used to calculate the experimental error. In most cases, that number will be close to the same.
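A rough version of that power calculation can be scripted; this sketch uses a normal approximation in place of Minitab's t-based calculation, so the numbers will differ somewhat from the output above (the function name and example values are mine):

```python
from statistics import NormalDist

def factorial_power(effect, sigma, n_runs, alpha=0.05):
    """Approximate power to detect `effect` in a balanced 2-level
    factorial with n_runs total runs (normal approximation)."""
    se = 2 * sigma / n_runs ** 0.5          # SE of an effect = 2*s/sqrt(N)
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    shift = effect / se
    return (1 - nd.cdf(z_crit - shift)) + nd.cdf(-z_crit - shift)

# 16 total runs, effect = 2, sigma = 1, as in the Minitab output above:
print(round(factorial_power(2, 1, 16), 4))  # ~0.98 (Minitab's t-based answer is 0.9367)
```

The approximation is optimistic because it ignores the few error degrees of freedom, which is exactly the caveat made above.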
Regards,
Statman

October 21, 2003 at 7:10 pm #91311
Gabriel
Statman, please help me understand the following (keep in mind that DOE is not my strong side):
1) If you do no replicates, then the degrees of freedom you have left for the error is zero. I only know some approximate approaches for using DOE in those circumstances (visually finding outliers in a line fit, pooling up, taking the MS of the interactions as experimental error, etc.). Then, what is what you call “the power of an unreplicated design”?
2) I understand the difference between repeats and replications (basically, the replication will include all the variation included in the repetitions plus the between-setup variation (if there is any), which, by the way, would be a special cause of variation, and hence a sign that we are in front of an unstable process). Now, how is this difference between repetitions and replications taken into account in the mathematical construct of the DOE? If you take repetitions into account as “more data”, then they will be adding degrees of freedom, just as replications do. If, as someone suggested, you average the results of repetitions and take them as a single data point, then if you make no replications we would be in case 1) (no degrees of freedom left for the error), and if you make replications, you are falsely reducing the experimental error because of the averaging. So the significance level would not be the significance level, and the power would not be the power.

October 21, 2003 at 8:05 pm #91312
Gabriel –
I agree that there is no mathematical difference between repetitions and replications. How does the analysis technique KNOW whether the results are repeats or replicates? Must be some kind of smart analytic technique! It does not know the time history, only the numeric values. The analyst might detect it in a residual analysis, but not the technique itself.
I think what is important is that repeats only give an analytic estimate of error (see BHH section 10.6, pg 319 of the 1st edition). The estimation of total experimental error requires that genuine replicates be made, as George B. says in the cited reference.
I disagree with you on special cause :
“the replication will include all the variation included in the repetitions plus the between-setup variation (if there is any) which, by the way, would be a special cause of variation, and hence a sign that we are in front of an unstable process.”
Such variation may be just common cause in a very stable process. Certainly run-to-run and setup-to-setup variation exists in even the most stable process. Only if it is excessive would we say it is special cause.
It has been my experience that you should do as much of both repetition and replication as is economic, to get the best power and increase the inference space of your results. Unfortunately, this usually runs afoul of production and time needs. If, because of those constraints, you have only done repeats, you must be aware of what you can safely conclude.

October 21, 2003 at 11:02 pm #91318
Hi Gabriel,
Good questions. I am glad that you posted them, as I think this is confusing for many BBs (as well as some MBBs).
1. As you know, the power of a DOE is 1 − Pr(concluding no effect when a significant effect exists) and can be looked at as the minimum detectable effect given the size of the experiment and the experimental error. The issue with an unreplicated design is in determining the experimental error. You are correct that an unreplicated design will have zero degrees of freedom to estimate the experimental error – IF all effects (interactions and main effects) are significant. What we rely on, and research has verified, is that not all effects will be significant. This is called the Sparsity of Effects principle. The point behind this principle is that the nonsignificant effects can be modeled as a normal distribution with mean zero and a standard deviation equivalent to the experimental error (replicate error). The rule of thumb (and I hate rules of thumb) is that only 30% to 40% of the effects will be significant. So if we run a 16-run unreplicated design, we have 15 degrees of freedom estimating effects, and only about 4 to 6 effects will be significant (signals). The other 9 to 11 will be random, normally distributed noise.
A good way to visualize this is what happens when one main effect and its associated interactions are not significant (just random noise). If we started with a four factor, unreplicated design and one of the factors is noise, then the design will collapse to a 3 factor design with 2 replicates since the design matrix at the high and low level of the fourth factor are the same and this gives us 8 degrees of freedom to estimate the error. This argument can be extended to other orthogonal interactions.
If we can assume, prior to the analysis, that the majority of the effects will be noise and that the nonsignificant effects will have a random normal distribution (an assumption that will work in almost every DOE), then we know that the nonsignificant effects will form a line through the origin on a normal probability plot (NPP). The significant effects will show up as outliers on the NPP in either the upper right or lower left quadrant of the plot. Yes, the analysis is typically performed by visual assessment of the NPP. However, there are other methods that are more quantitative. For example, Russell Lenth has suggested a method for determining the pseudo standard error using the median of the inlying contrast coefficients (effects). Box & Meyer have suggested the calculation of posterior probabilities that each contrast is significant.
The bottom line is that not all the effects will be significant and the nonsignificant effects can be used to estimate the experimental error.
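For reference, Lenth's pseudo standard error mentioned above is simple to compute; this sketch uses made-up effect values (the data and helper name are mine, the method is Lenth's):

```python
from statistics import median

def lenth_pse(effects):
    """Lenth's pseudo standard error from the contrasts of an
    unreplicated design: a robust estimate built from the
    'inlying' (apparently inactive) effects."""
    abs_eff = [abs(e) for e in effects]
    s0 = 1.5 * median(abs_eff)
    inliers = [e for e in abs_eff if e < 2.5 * s0]   # trim likely signals
    return 1.5 * median(inliers)

# 15 contrasts from a hypothetical unreplicated 2^4:
# two clear signals (8.0 and -6.5) among small noise effects.
effects = [8.0, -6.5, 0.4, -0.3, 0.9, -1.1, 0.2, 0.7,
           -0.5, 0.1, 1.2, -0.8, 0.3, -0.6, 0.5]
print(lenth_pse(effects))  # 0.75
```

The two large contrasts are excluded by the trimming step, so the error estimate is driven by the noise effects only.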
2. The repeat measurements are averaged to give a single value for each run (or condition) in the DOE. You would not analyze the DOE without averaging the repeats as this would be picked up as replicates in the analysis and artificially inflate the degrees of freedom for estimating the error. You should look at repeats as a method to increase the precision of measuring each condition in the same way you average repeat readings on a gage for a single test determination to filter out the noise of a gage. Repeats will improve the power due to the averaging of the condition variation or gage variation but the improvement will be minimal compared to the increase in power you get from increasing the size of the experiment (more runs). You can also use the repeat variation as another response of the experiment and model the effect the factors have on the system variation across the inference space.
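In practice that collapsing step is trivial; a sketch with hypothetical data for the 10-shot example from this thread (values invented for illustration):

```python
from statistics import mean, stdev

# Hypothetical raw data: 10 repeat measurements for two of the runs.
raw = {
    1: [50.1, 49.8, 50.3, 50.0, 49.9, 50.2, 50.1, 49.7, 50.0, 50.2],
    2: [52.4, 52.1, 52.6, 52.3, 52.5, 52.2, 52.0, 52.7, 52.4, 52.3],
}

# One mean response per run for the DOE analysis, plus the within-run
# standard deviation as a second response for a variation study.
responses = {run: (mean(v), stdev(v)) for run, v in raw.items()}
for run, (m, s) in sorted(responses.items()):
    print(run, round(m, 2), round(s, 3))
```

Only the per-run means enter the factorial analysis; the per-run standard deviations can be modeled as a separate response, as described above.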
When we conclude that an effect is significant at a 1 − alpha level of confidence, we are making a statement (prediction) about the likelihood of drawing the same conclusions if the experiment were repeated in the future (back to our confidence interval discussion). This can only be accomplished by comparing the magnitude of the effect to the variation due to the experiment, not to the common cause variation of the system (repeat variation).
I could go on for hours on this topic but I have probably raised more questions than I have answered at this point.
Highest Regards,
Statman

October 21, 2003 at 11:55 pm #91321
One more thing, Gary,
You wrote:
“then one would choose to run the full on the slight chance that the interaction effects (e.g., A*B*C) turned out to be significant”
The suggested design was a resolution IV design, which will have confounded 2-factor interactions. There is significantly more than a slight chance that at least one of the 6 two-factor interactions is significant.
Cheers,
Statman

October 22, 2003 at 1:35 am #91322
Gabriel
Thanks for the answer and, yes, I have some new questions now. However, as I told you, DOE is not my strong side, so it would not be fair to ask very profound questions (I probably would not understand the answers in the first place). But…
1) a) If using the degrees of freedom of nonsignificant factors as the degrees of freedom of the error is OK, can I also use them even when I do replicates, to increase the degrees of freedom of the error even more and improve the power? If yes, how? If not, why?
b) By using the nonsignificant effects to estimate the error you are replacing the correct “There is not enough evidence to prove at this level that this effect is significant” with the incorrect “This effect is not significant”. Surely (well, I think) many factors and several interactions have an effect that would be measurable beyond the experimental error (i.e. significant) IF we had enough data (i.e. made a lot of replications), but that effect would probably be below what is called “practical significance”. So by introducing the MS of a nonsignificant factor or interaction you are, most probably, adding some variation due to special causes (the change in the factor or interaction level).
3) I still do not understand how you handle repetitions. If I make a DOE with, say, 10 repetitions and no replications: how many data points do I have for each experimental condition? 1 (the average) or 10? Do I have degrees of freedom free for the error?

October 22, 2003 at 2:02 am #91323
Having understood repeats and replicates, please help me analyze the following DOE.
– Design is 4 factor full factorial
– in each run I am able to collect 10 measurements, because I am using one strip (consisting of 10 units) per run.
– I am replicating the DOE 4 times.
When I analyze the data, say in Minitab, do I need to average the 10 measurements per run and input all the replicates?
Thanks
0October 22, 2003 at 2:16 am #91324Migs,Yes you will need to average the 10 measurements per run and input the averages for each run. So you will have 16×4 = 64 values that you put into minitab. go to:stat>DOE>Factorial>Create Factorial DesignDOE>Factorial>Create Factorial DesignDOE>Factorial>Create Factorial DesignFactorial>Create Factorial DesignFactorial>Create Factorial DesignCreate Factorial DesignCreate Factorial Designset the number of factors as 4. Under “designs…” choose full factorial and number of replicates as 4. Minitab will create the design matrix with the 64 runs.I of course do not know the application of this DOE but it seems like over kill with the number of runs. With 64 replicates you will be able to detect a minimum difference approximatly equal to the process standard deviation (1/6 the process range). Just out of curiosity, what considerations did you use to determine the size of this experiment?Statman
0October 22, 2003 at 2:23 am #91325That should be
Stat>DOE>Factorial>Create Factorial DesignDOE>Factorial>Create Factorial Design
Something happened in transmission0October 22, 2003 at 2:39 am #91326Hi
Going back to the original question I just wanted to point out that it is very dangerous to analyse repeats as though they were replications.
It is also very easy (and tempting) to think you are doing replications when you are actually doing repeats.
If in doubt, assume it is repeats.
Regards
Glen
October 22, 2003 at 3:05 am #91328
Statman,
thanks.
Correction… it should be 3 replicates, based on alpha = 5%, beta = 5%, able to detect a 2.72-fold change in standard deviation.
Rgds

October 22, 2003 at 3:12 am #91329
Dinesh Singh
Dear all,
In DOE, replication is done to minimize experimental error; repetition, on the other hand, takes care of error from the measurement system itself. We should not have confusion regarding this. So if we are sure of our measurement system we can do away with repetition, but replication is always recommended.
regards,
dinesh

October 22, 2003 at 11:59 am #91338
Gabriel
That makes no sense to me. If I wanted to improve the repeatability of the measurement system, then I would repeat the measurement on the same part, not on consecutive parts.
Repeats include not only measurement variation but also part-to-part (i.e. “within subgroup”) variation. In addition, replication would also include what in SPC would be “between subgroups” variation (assuming that different subgroups could belong to different setup occasions of the same process with the same settings). This variation should be negligible if the process is stable. So most of the experimental error should be part-to-part variation.
I don’t have much experience in DOE, but it seems that some of you are overreacting about the consequences of using repeats as replicates, especially when the process is known to be fairly stable.

October 22, 2003 at 1:45 pm #91342
I agree with Gabriel.
In the experiment I mentioned, one shot (one run) can manufacture 10 units. I measure the thickness of each unit. The reason I am doing this is that I want to capture the unit-to-unit variation (which may or may not be a major source). If I average the 10 readings, then I won’t be able to capture that.
Thanks
October 22, 2003 at 4:19 pm #91348
Gabriel,
Your questions were:
1. If using the degrees of freedom of nonsignificant factors as the degrees of freedom of the error is OK, can I also use them even when I do replicates, to increase the degrees of freedom of the error even more and improve the power? If yes, how? If not, why?
Yes, you can use the nonsignificant effects in the estimation of the error term even when the design has been replicated. However, it will usually have minimal effect on the power. What it will do is provide more degrees of freedom and reduce the magnitude of the t-value used in determining the confidence interval. This is a slight reduction for DOEs with more than 4 runs. Most software will include the nonsignificant effects in the estimation of the standard error when you leave the term (main effect or interaction) out of the model, but mechanically it can be done by least squares in the same manner that regression calculates the standard error. Also, a good software package will perform an F-test to determine whether the residual error (nonsignificant effects) is significantly larger than the pure error (variance from the replicates).
2. So by introducing the MS of a nonsignificant factor or interaction you are, most probably, adding some variation due to special causes (the change in the factor or interaction level).
Yes, there is always a risk that you are including active effects in the estimation of the experimental error when the design is not replicated. There has been quite a bit of research into methods to lower this risk; the work by Lenth and Box & Meyer that I mentioned in my previous post are examples. I am using the term nonsignificant effects loosely, as I should be saying effects that lack sufficient evidence to be deemed active.
You need to view this in the same way that we use normality testing. When we fail to reject the null in a normality test, there is still a risk that the data are not normal, but we proceed to apply normal theory to the data. A similar argument can be used when using the NPP to assess significant and nonsignificant effects.
3. If I make a DOE with, say, 10 repetitions and no replications: how many data points do I have for each experimental condition? 1 (the average) or 10? Do I have degrees of freedom free for the error?
You have one data point for each condition; the average of the repeats. You would not have any degrees of freedom for the error. Once again, you would have to rely on the effects that lack sufficient evidence to be deemed active.
I decided that I would demonstrate what I was getting at in my previous post, as the verbal explanation may not have been clear. The table below is intended to demonstrate the effect on the power of a 2^k factorial DOE of changing the number of repeat measurements and/or the number of runs in the experiment. I have looked at three scenarios by changing the ratio of the setup standard deviation to the on-condition standard deviation: equal to, half, and one quarter. The power is expressed as the minimum difference detectable at alpha = 0.05 with an unreplicated design (assuming the 70% rule for determining the df).
As you can see, the improvement is small when you increase repeats compared to increasing the size of the experiment (# of runs), and the impact of repeats decreases as the number of runs increases. Also, as the setup variation decreases relative to the on-condition variation, the impact of increasing the number of repeats increases.
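The arithmetic behind the table below can be reproduced with a few lines; the t-values here assume the 70% degrees-of-freedom rule mentioned above and are my approximation, not part of the original post:

```python
# Approximate two-sided t quantiles (alpha = 0.05) for the df implied
# by the 70% rule, df ~ 0.7 * (runs - 1): an assumption on my part.
T_CRIT = {4: 4.303, 8: 2.571, 16: 2.228}

def min_detectable(repeat_s, setup_s, n_repeats, n_runs):
    # Experimental error: setup variation plus repeat variation
    # reduced by averaging over the repeats.
    exp_err = (setup_s ** 2 + repeat_s ** 2 / n_repeats) ** 0.5
    se = exp_err / n_runs ** 0.5
    return exp_err, se, T_CRIT[n_runs] * se

err, se, diff = min_detectable(1, 1, 4, 4)     # first row of the table
print(round(err, 2), round(se, 2), round(diff, 2))  # 1.12 0.56 2.41
```

Each row of the table follows from the same three-line calculation with the corresponding inputs.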
Repeat s   Setup s   # of repeats   # of runs   Exp error   SE Exp   Diff at .05
1          1               4            4          1.12       0.56       2.41
1          1               8            4          1.06       0.53       2.28
1          1              16            4          1.03       0.52       2.22
1          1               4            8          1.12       0.40       1.02
1          1               8            8          1.06       0.38       0.96
1          1              16            8          1.03       0.36       0.94
1          1               4           16          1.12       0.28       0.62
1          1               8           16          1.06       0.27       0.58
1          1              16           16          1.03       0.26       0.57
1          0.5             4            4          0.71       0.35       1.52
1          0.5             8            4          0.61       0.31       1.32
1          0.5            16            4          0.56       0.28       1.20
1          0.5             4            8          0.71       0.25       0.64
1          0.5             8            8          0.61       0.22       0.56
1          0.5            16            8          0.56       0.20       0.51
1          0.5             4           16          0.71       0.18       0.39
1          0.5             8           16          0.61       0.15       0.34
1          0.5            16           16          0.56       0.14       0.31
1          0.25            4            4          0.56       0.28       1.20
1          0.25            8            4          0.43       0.22       0.93
1          0.25           16            4          0.35       0.18       0.76
1          0.25            4            8          0.56       0.20       0.51
1          0.25            8            8          0.43       0.15       0.39
1          0.25           16            8          0.35       0.13       0.32
1          0.25            4           16          0.56       0.14       0.31
1          0.25            8           16          0.43       0.11       0.24
1          0.25           16           16          0.35       0.09       0.19

October 22, 2003 at 7:20 pm #91356
If you average the results from the 10 measurements, you could also add the standard deviation as a response for the DOE, especially if you are trying to find the parameters to minimize variation.
October 22, 2003 at 7:36 pm #91359
Lee,
You have a good idea, but you cannot add the standard deviations; on the other hand, you can add the variances instead.
I am amazed at how a simple question about replicates vs. repeats can generate so much discussion.
Jamal
October 23, 2003 at 12:36 am #91377
I did not mean “add” but model. You could model the standard deviation (or variance) in much the same way you are modelling the measured response (average responses of repeated measurements). This would allow for finding the significant parameters and interactions affecting both the within and between variations.
November 18, 2003 at 10:13 pm #92656
Dear Statman,
Thanks for your responses; they have clarified a lot for me. Can you answer a couple more questions for me?
1. Degrees of freedom have been mentioned several times – why are they so important? Is there a minimum value for degrees of freedom? If below this value, is the analysis invalid? Can you increase them after you have run your DOE and found you have no degrees of freedom?
2. I understand that power is the probability of rejecting the null hypothesis when false and accepting the alternate when true – when you are setting up a DOE in Minitab, how do you calculate the power of the experiment?
3. Unrelated to any of the above – NESTED ANOVA – when do you use it, and how do you determine the order of nesting? Most of my testing is destructive – should I use nested ANOVA instead of Gage R&R to validate my gauge?
4. Can anyone recommend a book that is good for self-teaching in statistics? (I work as a BB in a company with no one trained in stats beyond BB training.)
Thanks
Verity

November 19, 2003 at 4:16 am #92668
Hi Verity,
Good Questions:
Let me see if I can address them.
1. Degrees of freedom are a measure of the size of the experiment in terms of the number of runs or conditions in the experiment and the number of unique contrasts (effects) that we can determine. For every run performed in the DOE we gain a degree of freedom; for every parameter we calculate we lose a degree of freedom. For example, if we run an unreplicated 2^4 full factorial DOE, we would have 15 degrees of freedom: 16 runs minus one degree of freedom for the grand mean. In this design, you will have 15 unique contrast vectors or effects (4 main effects, 6 two-factor, 4 three-factor, and one four-factor interaction). So when we calculate these fifteen one-degree-of-freedom effects, we have no degrees of freedom left to determine the experimental error.
There are only a couple of ways we can create degrees of freedom for determination of the experimental error. We can replicate part or all of the experiment. For example, if we replicate the experiment twice (32 runs), then we would have 16 degrees of freedom to estimate the error term. The other method is to assume some of the 15 effects are only random noise and use those to determine the error term. In the full factorial above, if one main effect, 3 two-factor, the 4 three-factor, and the one four-factor effect were deemed to be noise, you would have 9 degrees of freedom to estimate the error term.
2. The power is 1 − beta, where beta is the risk of concluding a factor has no effect when it actually does. One analogy that I like is the following. Imagine that you were hunting geese. You come to a field where the grass is 3 feet high but the geese are only two feet high. You can’t conclude that there are no geese in the field, because the noise (grass) is higher than the signal (geese). So your test (looking across the field) does not have sufficient power to detect geese.
The formula for sample size of a factorial DOE (no center points) is as follows.
N = (t_{alpha/2} + t_{beta})^2 * s^2 / delta^2
where N is the total number of runs in the DOE, t is the Student’s t value, s is the standard deviation of the experiment, and delta is the minimum detectable difference. You can, if you like, rearrange this formula and solve for 1 − beta to get the formula for power. Under the sample size module in Minitab you can determine the power for various sizes of experiments, alpha levels, and detectable differences. Minitab uses the above formula to determine this. You need to have an estimate of the experimental variation in order to use it.
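Turned into code (using normal quantiles in place of the Student's t values, which slightly understates N for small experiments; my simplification, not Minitab's exact method):

```python
from math import ceil
from statistics import NormalDist

def doe_total_runs(sigma, delta, alpha=0.05, beta=0.10):
    """Total runs N from the formula above,
    N = (t_{alpha/2} + t_{beta})^2 * s^2 / delta^2,
    with z quantiles standing in for the t values."""
    z = NormalDist().inv_cdf
    n = (z(1 - alpha / 2) + z(1 - beta)) ** 2 * sigma ** 2 / delta ** 2
    return ceil(n)

# To detect a difference equal to one standard deviation (delta = s):
print(doe_total_runs(sigma=1, delta=1))  # 11
```

Because the required N scales with (s/delta)^2, halving the detectable difference quadruples the run count.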
3. The typical DOE setup is one that has crossed factors. Two factors are said to be crossed if each level of one factor can be applied at each level of the other factor. Two factors are said to be nested if the levels of one factor are unique to a particular level of the higher-level factor. For example, parts made on one machine cannot be crossed with another machine. The parts can only come from one or the other machine. For nested factors, interactions are not possible. Your destructive test is another example of a nested design. Since the parts that are tested by one operator are destroyed during the testing, they are unique to that particular operator; so parts are nested within operator. The part-by-operator interaction does not make any logical sense. So yes, you should use the nested analysis for a destructive test. The typical application of nested ANOVA is in components-of-variation studies. These are studies to explore potential sources of variation in a process, determine which sources contribute the largest amount of variation to the process, prioritize areas for improvement or redesign, and determine the potential level of process performance (entitlement variation).
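A minimal sketch of the nested calculation for the destructive-test case (hypothetical data of my own; parts nested within operators, no part-by-operator interaction):

```python
from statistics import mean

# Hypothetical destructive-test data: 3 operators, 4 parts each.
# Each part is destroyed when tested, so parts are nested within operator.
data = {
    "op1": [10.2, 10.5, 10.1, 10.4],
    "op2": [11.0, 11.3, 10.9, 11.2],
    "op3": [10.6, 10.8, 10.5, 10.7],
}

n = 4                                        # parts per operator
grand = mean(v for vals in data.values() for v in vals)

# Between-operator and part-within-operator sums of squares.
ss_oper = n * sum((mean(vals) - grand) ** 2 for vals in data.values())
ss_part = sum((v - mean(vals)) ** 2 for vals in data.values() for v in vals)

ms_oper = ss_oper / (len(data) - 1)          # df = operators - 1 = 2
ms_part = ss_part / (len(data) * (n - 1))    # df = operators * (parts - 1) = 9
print(round(ms_oper, 3), round(ms_part, 4))  # 0.643 0.0278
```

Comparing MS(operator) against MS(part within operator) is the nested test; there is no interaction term to compute.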
Determining the nesting structure takes some practice. Sometimes it is fairly straightforward, but sometimes it can be difficult. It is important to have the right structure, as the nesting order will determine how the variation is extracted and assigned to the various factor levels. I am surprised (maybe I shouldn’t be) that this was not given major consideration in your BB training, as I think it is one of the most important and valuable application methods.
4. Statistics for Experimenters by Box, Hunter, and Hunter ISBN #0471093157
Implementing Six Sigma by Breyfogle ISBN #0471296597
Quality Control and Industrial Statistics by Duncan ISBN #0256035350
Applied Linear Statistical Models by Neter and Wasserman ISBN #0256014981
These are some of my favorites. There are a lot of good books out there. Just some advice: stick with the premium publishing houses for mathematics and statistics. Look for Wiley, Irwin, and McGraw-Hill, and stay away from publishing-for-consultants houses like Palladyne Publishing. Those books do not have the rigor of review of the premium publishers. They are only published to promote something.
If you are looking for some knowledgeable and experienced support, let me know how to contact you.
Regards,
Statman

May 26, 2004 at 7:33 am #100745
ashok khatri
Replication means repetition of a trial, but not at the same time; doing the same trial at a different time is replication (i.e. set the control conditions one day and do the trial, bring the machine back to the previous setting, set the same control conditions another day and do the trial).
The advantage of replication is to assess the effect of measurement variation (time to time) and the effect of some lurking (unknown) variables.
Repetition means to run the same batch of material (made under one controlled condition) and evaluate (i.e. make 50 pcs. under one controlled condition and validate the result in 5 runs of 10 pcs.).