Taguchi DOE vs Traditional DOE
Hi To All,
I do not have a particular problem to solve.
I just completed my first course on Taguchi DOE and I am curious as to which is more widely used, as well as some feedback on practical experiences, good or bad.
Regards,
Marty
Traditional is used more.
Taguchi designs are just highly fractionated factorial designs and as such are just traditional designs. The difference is in the analysis: if you take the advice of looking at variance as well as means using Box-Meyer methods, traditional is superior. If you just do ANOVA, Taguchi analysis methods can uncover things traditional cannot.
What is the particular problem you need to solve?
If you just want an overview, the post below may be of some help. I wrote it back in 2006 and it was filed under the locked posts. I tried to link to it and apparently links to the older section no longer work.
Posts with questions/observations/beliefs about the similarities and differences of Taguchi and traditional designs, as well as opinions concerning the value (or perceived lack thereof) of each show up on the forum with a fair degree of regularity. It is my perception that much of the confusion in these posts is due to a poor understanding of the technical terms used to frame these discussions. What follows is an attempt at term/concept clarification.
Orthogonal Arrays
The point of a design, any design, is to build experiments such that, when they are run, the effects of the variables of interest can be separated from one another. The highest degree of separation that can occur in a design, any design, is when the effects of one variable are completely separated from the effects of another variable. When this occurs we say the variables are orthogonal to one another. All basic experimental designs have this feature; thus all basic designs are orthogonal, and thus a traditional design is an orthogonal array.
The only issue, and this is an issue with all designs, is the question of what is going to be orthogonal with what. In other words, are the main effects clear of one another but confounded with two-way interactions, or are they to be both clear of each other with all two-way interactions clear of one another as well, or what? Taguchi appears to combine the notion of orthogonal and the definition of an orthogonal array to identify a design whose main effects are clear (orthogonal) of one another but are confounded with two-way interactions. In the statistics literature this kind of design is commonly known as a saturated design.
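To make the orthogonality idea concrete, here is a minimal sketch (Python/NumPy, using an illustrative 2^3 full factorial of my own choosing, not an example from this post): in a coded design matrix, every pair of distinct factor columns has a zero dot product.

```python
import numpy as np

# A 2^3 full factorial in coded -1/+1 units.
design = np.array([
    [-1, -1, -1],
    [ 1, -1, -1],
    [-1,  1, -1],
    [ 1,  1, -1],
    [-1, -1,  1],
    [ 1, -1,  1],
    [-1,  1,  1],
    [ 1,  1,  1],
])

# The Gram matrix design.T @ design has zero off-diagonal entries,
# i.e. every pair of distinct factor columns is orthogonal.
gram = design.T @ design
off_diagonal = gram - np.diag(np.diag(gram))
print(off_diagonal)
```

The same check applied to a saturated design would still show zero off-diagonal entries for the main-effect columns; what changes is which interaction columns those main effects are confounded with, not the orthogonality of the main effects themselves.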
The L Experimental Designs (Arrays):
L arrays are nothing more than standard 2 and 3 level factorial designs of varying degrees of saturation (Plackett-Burman, Fractional Factorial, and Latin Square) which have been cosmetically altered by changing the signs of some of the elements in the design matrix. Because they are just standard designs you can use any traditional design in their place and achieve exactly the same results.
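As an illustration of the "cosmetic alteration" point, the sketch below (Python/NumPy; my own construction, not taken from Taguchi's texts) recodes the familiar L4 array into -1/+1 units and shows that its third column is just the negative product of the first two, i.e. a standard 2^(3-1) half fraction with the generator's sign flipped.

```python
import numpy as np

# Taguchi's L4 array in its usual 1/2 level coding.
L4 = np.array([
    [1, 1, 1],
    [1, 2, 2],
    [2, 1, 2],
    [2, 2, 1],
])

# Recode level 1 -> -1 and level 2 -> +1 (conventional factorial units).
coded = np.where(L4 == 1, -1, 1)

# The third column equals minus the product of the first two: the
# 2^(3-1) half fraction with generator C = -AB, i.e. a standard
# fractional factorial up to a sign flip.
print(coded)
print((coded[:, 2] == -coded[:, 0] * coded[:, 1]).all())
```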
Renaming existing designs, highlighting a difference which isn't a difference (swapping -1 and +1 points), and choosing another term over an accepted one (saturated vs. orthogonal array) give the false impression that the L series are somehow different from traditional designs. This verbal fog only confuses and leads to comments/questions such as these:
I don’t know what you mean by a Taguchi L9? Taguchi uses orthogonal arrays and I doubt whether he would use an L9 OA anyway!
I am looking for a book/paper where the derivation of Taguchi orthogonal arrays is explained. Does anyone know where I can find the scientific articles (published in math or statistic journals) written by Dr. Taguchi? Especially what generators have been used
The real advantage of Orthogonal Arrays appears when the factors have more than three levels.
The role of orthogonal array is to check bad desing in confimation runs.
The Noise Issue
Taguchi chooses to distinguish between what are called process variables and noise variables.
Noise, in the statistical sense, is used to refer to the effects of unknown and hence uncontrolled process variables. Since they are unknown they cannot be subject to any kind of control. Standard statistical practice uses randomization to ensure the effects of these unknown variables will appear only as an increase in the residual error and not as an alias with one of the controlled variables in a design.
Noise, in the Taguchi sense, is just a term for a variable you would really rather not have to control once you start running your process. It is a known variable and it is one that, during the course of running your design, you will control in order to gain an estimate of its effect.
Failure to understand that Taguchi noise and statistical noise are not the same is the source of endless confusion and error as exemplified by the following:
Using traditional DoE (Classical approach) methodologies, we have not been entirely successful. We feel that we have not managed noise factors very well. We have used “Blocks” to handle noise, but the results never satisfied us.
This confusion is mild compared to some of the other situations I've witnessed. For example:
1) From time to time I've been brought into the aftermath of a design effort where the investigators really didn't know their process as well as they thought they did. They identified their process and noise parameters and ran the Taguchi design as recommended (no randomization). They found their optimum in the presence of the noise variable, ran the confirmations, set up the process, and watched everything fall apart. After a lot of intensive post-mortem work they found they had a noise variable in the statistical sense and that this variable had been confounded with one of the variables in the inner array. Their efforts failed because, after setting up the process, the statistical noise variable had shifted and their control/improvement plan, which was based on a variable of no consequence, fell apart.
2) A team sets up their inner and outer array and runs the design. As it turns out, one of the noise variables is really the key to process improvement. However, because it was labeled a noise variable the team ignores it and proceeds to identify an optimum in the presence of this noise. The end result is a grossly suboptimal process, but it is one that is doing the best it can given that the most important variable isn't being controlled. This may sound silly but I've run into this situation a couple of times.
Design Structure
Taguchi designs are nothing more than a combination of two traditional factorial designs. Taguchi calls the two designs the inner and the outer array. The concept is very simple and the confusion about supposed differences between these designs and traditional designs is due, again, to nothing more than wordplay.
The practice of running a saturated design at each design point of a smaller, full or fractional factorial design is something that has been done almost since the beginning of DOE in the 1920s. With perhaps the sole exception of mixture/process variable designs, the reason you usually don't see this kind of design in the statistics literature is its general inefficiency (that is, running more experiments than necessary).
Since a variable by any other name will vary just as much, the view of traditional design is that there is nothing to be gained by artificially labeling one variable a process variable and another a noise variable and separating them in the design space. Once run, the results of a traditional design are analyzed to identify those variables having a significant impact on the process. If, after having run and confirmed our results, we wish not to control some design variable (so that it is, in fact, contributing to the overall noise of the process) we will use the analysis of the design to identify the optimum that can be achieved under these circumstances and identify the probable variation in process output that will result from such an action.
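The run-count penalty of the crossed (inner x outer) structure is easy to see in a toy sketch (Python; the factor counts are illustrative, not from any specific design discussed in this thread):

```python
from itertools import product

# Inner array: 2 control factors, full 2-level factorial (4 runs).
inner = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
# Outer array: 1 "noise" factor at 2 levels.
outer = [(-1,), (1,)]

# A crossed design runs every inner point at every outer point.
crossed = [c + n for c, n in product(inner, outer)]
print(len(crossed))  # 8 runs

# A combined traditional design would simply treat the noise factor as
# a third design variable; a 2^(3-1) half fraction (generator C = AB)
# screens the same three main effects in only 4 runs.
half_fraction = [(a, b, a * b) for a, b in inner]
print(len(half_fraction))  # 4 runs
```

The gap widens quickly: crossing always multiplies the two run counts, while a combined design grows only as fast as the fraction you choose.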
Robust Designs
Statistically, a design is robust if it can suffer damage and still produce useful results. The damage can be of two types.
A) One or more of the experiments in the design fail to produce measurable results.
B) There were shifts, changes, or omissions with respect to the level settings of the variables in the design. All traditional designs are robust. Taguchi designs are just combinations of traditional designs; therefore, by definition, they too must be robust in this statistical sense.
Robust, in the Taguchi sense, has nothing to do with A or B. Robust is the term he uses to identify his design matrix. It is evident from the literature that inherent in the word choice is the idea that the use of these designs will provide results which will identify optimum process settings (mean process levels) which will result in product whose properties will remain at or near optimum in spite of the variability of the noise and process parameters. This is one of those cases where a single word really is the best choice as a descriptor for both the statistical concepts and the Taguchi labels. It does cause confusion but there isn't much you can do except pay close attention to the context in which it is used.
Other than the potential for confusion of meaning, the only other problem with the term robust design is that, for some, it conveys the idea that non-Taguchi designs are somehow fragile, because these designs are supposedly incapable of assessing the effects of design variables on both target and variability. This supposition is false.
Consequently, the statement:
However I do not know if you are familiar with the 2 step optimization in Taguchi method:-
1. minimize variability (NPM) and
2. adjust to target (TPM).
is, in fact, an optimization method that should also be applied to traditional designs.
Both Taguchi and traditional designs identify mean (target) adjustments using the methods of regression and ANOVA. In a traditional design one investigates the design variable effects on final product variation by running a Box-Meyer analysis.
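For readers unfamiliar with a Box-Meyer analysis, here is a minimal sketch (Python/NumPy; the design and responses are invented for illustration): fit the usual location (mean) model, then compare the spread of the residuals at the high and low level of each factor.

```python
import numpy as np

# Replicated 2^2 factorial in coded units (data invented for illustration).
X = np.array([
    [-1, -1], [1, -1], [-1, 1], [1, 1],
    [-1, -1], [1, -1], [-1, 1], [1, 1],
])
y = np.array([10.1, 12.0, 9.8, 12.3, 10.4, 11.9, 10.0, 12.1])

# Step 1: fit the location (mean) model by least squares and take residuals.
A = np.column_stack([np.ones(len(y)), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ beta

# Step 2: for each factor, compare residual variance at the high and
# low level; a log ratio far from zero flags a dispersion effect, i.e.
# a variable that moves the process variation, not just the mean.
for j in range(X.shape[1]):
    s2_hi = resid[X[:, j] == 1].var(ddof=1)
    s2_lo = resid[X[:, j] == -1].var(ddof=1)
    print(f"factor {j}: ln(s2+/s2-) = {np.log(s2_hi / s2_lo):+.2f}")
```

This is why a single traditional design can address both steps of the two-step optimization: the regression handles the target adjustment and the dispersion check handles the variability.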
Design Competition
There is an aspect of the Taguchi vs. Traditional arguments that I find curious in the extreme. It is exemplified in the following two quotes:
Japanese companies use Taguchi Methods – but I guess they’re wrong because according to some people they’re not efficient, don’t work properly, alias effects, etc. So there must be something about doing things wrong that leads to getting the right answer
Perhaps there is a good reason why Taguchi takes a different approach – as yet unappreciated :-)
The essence of these comments is as follows: The (fill in the blank) are running Taguchi designs. They have had successes with these designs in spite of the fact that these designs aren't the best or are different from traditional designs.
From this one is supposed to infer that this proves Taguchi designs are as good as if not better than traditional designs. However, this is the wrong inference, since the comparisons are not between traditional designs and Taguchi designs but between what (fill in the blank) got for their efforts before running a Taguchi design and what they got after they ran a Taguchi design. In my experience, what (fill in the blank) did before Taguchi was nothing but wonder-guess-putter. In short, (fill in the blank) is comparing the results they got using some form of organized inquiry (which is all a design is) to the results they got using disorganized inquiry. The simple fact is that organized inquiry, any organized inquiry, is going to produce results better, cheaper, and faster than disorganized inquiry.
Thus there is no argument that the Taguchi approach is more efficient than disorganized or one-at-a-time effort and thus less costly. It's been my experience that Taguchi designs are not as efficient as traditional designs. For me, this means a good traditional design is less costly than a Taguchi design and, all other things being equal, it will probably get to the answer sooner with less expenditure of funds.
The Value of Taguchi
I think the biggest contribution Taguchi has made is his popularization of the use of experimental designs and the promulgation of the concept of a loss function. As a statistician I am grateful that his efforts have raised the general awareness of the power of designs. This is something that should have been done by my colleagues in the universities. They have not done this; indeed, they have, in general, failed miserably to convey to the public the value of any kind of statistical analysis.
I also think focusing the efforts of a design to look for an optimum setting in the presence of known, uncontrolled, variation is an improvement over the simpler searches for optimum process settings.
My Reservations
There is a perception that statisticians do not like Taguchi designs. As with all things there is, I'm sure, a grain of truth to this statement. However, since I don't have the results of a general survey of my fellow statisticians on this point, I can't give you any measure of its validity. Speaking as a single statistician, my view is that, except for issues of efficiency, I don't care much one way or the other as far as Taguchi designs are concerned. However, I am one of those statisticians who do not have much regard for the Taguchi approach to inquiry and analysis.
My lack of fondness for Taguchi's approach is driven by issues that are practical and philosophical in nature. The practical reservations revolve around the issues of unknown and uncontrolled variation, randomization, and what appears to be a lack of understanding of design power and efficiency.
Taguchi's belief in the ability of an investigator to routinely possess process knowledge so detailed as to allow one to conclude there is no possibility of a significant unknown lurking variable is not in accordance with my experience. Time and again I've seen efforts come to naught because of a failure to use randomization to guard against the effects of unknown variables. Consequently, Taguchi's lack of concern with respect to the need for randomization is, for me, unacceptable.
My sense of his failure to understand design power and efficiency is based on posts like the following:
Just read Taguchi’s latest book : Taguchi’s Quality Engneering Handbook
In the book, you will finds the clear statement like this :
“when there is no interactions among control factors, DOE is a waste — you can use OFAT and get the same result.”
Admittedly, I have not gone back to Taguchi's publications to check the context of quotes like this, but the fact that I've had to repeatedly address this issue when talking with people who were ardent advocates of Taguchi methods suggests that, at worst, views such as these are indeed his, or, at best, these views are the result of a poor understanding of his position.
The focus of my philosophical displeasure is the needless renaming of designs and the re-definition of accepted statistical terms and concepts. These aspects of Taguchi methods serve only to obfuscate and confuse. They give the beginner a sense of differences where none exist and they hinder attempts to understand the overarching fundamentals of design philosophy and organized inquiry.
I hope the above will help clarify the issues surrounding differences in design and analysis of the two approaches.
As always, Robert, your understanding of the topic is accurate and impressive. I’d like to build on what you say with a few odds-and-ends comments if I may. I tend to favor “traditional” matrices most of the time, but I have found a few times when Taguchi’s contributions benefit what I do.
No matter whether the matrix is Taguchi or “traditional,” I find that the preparatory work before a DOE is vastly more important than selecting the “right” DOE matrix (assuming there ever will exist a single “right” matrix for a given context). That preparation almost always should include at least the following:
– Flow chart the process
– Brainstorm potential sources of variation
– Categorize sources of variation into “potentially controllable” and “not considered for control”
– Determine what the “Y” metric should be
– Validate the measurement system
– Identify factors for the DOE
– Determine how to minimize variation on factors not in the DOE
– Practice the DOE in a conference room “dry run”
– Be there during the DOE and note anything unexpected that occurs during the DOE
I probably missed a few steps. Teams that take shortcuts here are the ones that experience disappointing outcomes, no matter what matrix was chosen.
I seldom use inner & outer arrays, but once in a while they can be useful. However, the “noise” array NEVER should include a variable that could be controlled in normal use. Also, it DOES make sense to evaluate the impact of the “noise” variables, to see the extent to which each might impact the outcomes. If we think about it, it can be mighty hard to make a design “robust” if we ignore that knowledge. (I think this reinforces 99% of what you said about problems with inner-outer arrays.)
Once in a while I use a Taguchi matrix that mixes 2-level and 3-level variables. I agree they are not the optimum combination, but with solid pre-experimentation prep, I have managed to make them work. I tend to favor this option when the 2-level variables are mainly categorical and the 3-level variables are continuous. As you know, center points with categorical X’s can result in a very high run count.
I think Taguchi contributed a few new wrinkles of merit:
He made people aware that it is important to develop an effective response variable, and that doing so sometimes can entail a fair amount of work. Occasionally the signal-to-noise ratios are useful, but even in their absence investing time to develop a solid “Y” variable is wise. I think he made many people aware that sometimes the important “Y” is based less on the mean and more on some measure of its variation – I’m not aware of that being a prominent understanding before he came along.
I like to give him credit for separating sources of variation we can control from sources of variation against which the process or product must be robust. If he’s not the originator of that notion, I’d love to know where the credit lies.
That all being said, there are some flaws in his approach:
According to a discussion I once had with Mike Harry, Taguchi is no fan of randomizing run order. I think that the preparatory steps I described above greatly lower the risk of encountering a lurking variable. However, if it doesn’t set the experiment back too badly, I cannot see the harm of the added protection. I teach people to prepare properly, then randomize to the extent that is practical. Obviously, blocking variables help here.
I think there is both a mathematical and an intuitive elegance in “statistically standard” run order, which gets utterly messed up by the sequences and symbols of Taguchi arrays.
I think Taguchi’s approach to evaluating interactions is flawed and puts the experimental team at risk of making erroneous conclusions.
There. Now I’ve had my rant. And some exercise earlier. I feel better…
Good Afternoon,
Many thanks to all who responded, especially for the mini-treatise on Taguchi v Traditional DOE pro and con. I am close to completing my Black Belt training and thus far I find this aspect to be the most useful and the most interesting.
This is especially true given the power of various software packages and the complex issues or problems one can address with the same. Like many aspects of Six Sigma, $$$ looms ever large in terms of factors to consider.
The timing of your post is great…it will take me a while to digest this info.
Thanks again!
Marty
Good Morning Robert,
I finally got the time to read your response to my earlier post re: DOE vs. Taguchi. In a few words: great and very informative.
I do have a few comments & questions;
1) A little background: my university BB professor is a statistician as well, who teaches where Mr. Taguchi's son attended undergrad, therefore we get a strong emphasis on Taguchi.
2) To a greenhorn like me the Taguchi premise at first seems attractive but is more confusing than useful. Traditional DOE suits me fine, very powerful if used correctly.
3) To help make a bolder distinction as to statistical noise vs. Taguchi noise or “Z” factors, could you provide some theoretical or real-life examples of each?
Thanks again,
Marty
I wish that I could give you a thousand thumbs up Karmas – alas, you’ll need to settle for the one now, and another when I think about it again.
Robert & MBBinW,
A little more clarification regarding my Taguchi position is in order: I do not dislike Taguchi; rather, I find that, as a “green horn”, traditional DOE suits me better.
In retrospect my prof presented both methods equally well (this prof is an especially sharp statistician), showing that each is another “arrow in one's statistical/analytical quiver”; depending on one's problem and circumstances, the statistician/practitioner is left to select the appropriate method.
It may sound a bit dry and boring, but this prof's presentation and examples make “statistical analysis” dare I say “sexy”, or maybe it's the power it imparts when used correctly.
Marty
The thousand thumbs up were for Robert Butler. You won’t go wrong listening to Mr. Butler.
Yeah I hear you loud and clear; especially since the site reconfiguration, the voices of “rational and informed” discussion are few and far between… How long have you been involved with Six Sigma?
Unfortunately, this forum does not link one message to another, so unless you state to whom your comments are directed, it is difficult to tell. (I’ve already made that mistake). Wish that when you submitted a quick reply, the message said to which message the reply related to.
optomist1 wrote:
Yeah I hear you loud and clear; especially since the site reconfiguration, the voices of “rational and informed” discussion are few and far between… How long have you been involved with Six Sigma?
Unfortunately, this forum does not link one message to another, so unless you state to whom your comments are directed, it is difficult to tell. (I’ve already made that mistake). Wish that when you submitted a quick reply, the message said to which message the reply related to.
I guess the best way to do this is to use the “quote” instead of quick reply. Perhaps I can learn new things after all.
MBBinWI,
Sorry about that, still getting used to the “new and improved” configuration…
Marty
optomist1 wrote:
MBBinWI,
Sorry about that, still getting used to the “new and improved” configuration…
Marty
Aren’t we all!
Hi, Marty,
About Taguchi DOE vs Traditional DOE: if you have more than 3 factors you choose Taguchi DOE, because it will cost you less time and money. Of course, if the Taguchi DOE doesn't solve your problem, you can also continue with traditional DOE.
As far as I am concerned, before choosing between Taguchi DOE and traditional DOE, I choose Dorian Shainin's DOE. It's very useful, especially for engineers.
The statement “if you have more than 3 factors you choose Taguchi DOE, because it will cost you less time and money” is incorrect. As for Shainin DOE – once you get past his methods for variable level selection, all he uses are standard designs.
As mentioned previously, Taguchi designs are nothing more than cosmetically altered standard designs; thus, as far as identifying variable effects on mean output responses, there isn't any difference. As for the issue of variable effect on output variation, a standard design analyzed using Box-Meyer methods will require fewer experiments.
With respect to Shainin, the following might be of interest. Again, I posted this some time ago on the old forum which, as near as I can tell, is unsearchable and will not permit links.
Shainin and Classical DOE: A Comparison
In the time I've been reading and contributing to this forum there have been a number of threads and queries asking about the differences between Shainin methods and those of experimental design. The on-line paper cited by Andy U in his post to this forum provides an opportunity to offer a comparison. At the moment, the paper can be found at:
http://www.gcal.ac.uk/crisspi/downloads/publication11.pdf
In the event the link is severed, the title of the paper is “Training For Shainin's Approach to Experimental Design Using a Catapult”, Antony and Cheng, Journal of European Industrial Training, 27/8 [2003] 405-412.
(Note: As of 18 August 2010 this link no longer works)
The catapult problem described in the paper involves a search of seven (7) catapult variables in order to identify an optimum.
The variables are:
Variable            Worst   Best
Ball Type           Light   Heavy
Rubber Band Type    Brown   Black
Stop Position       1       4
Peg Height          2       4
Hook Position       2       4
Cup Position        5       6
Release Angle       170     180
The comparison will consist of summaries of statements in the paper concerning the Shainin approach followed by the approach/techniques used in traditional design.
The Paper Preliminary Work
In Phase I of the process the investigators develop a list of process parameters to be tested. The selection is based on meetings with people from design, QC, and various levels and departments in manufacturing. The variables are identified by brainstorming and examining earlier process output. Once the factors are identified they are ranked in order of perceived importance and two levels, +1 the best and -1 the worst, are identified for each of the variables of interest.
In the paragraphs following the opening comments about Phase I, the authors describe the initial variable search method, which consists of running an experiment with all of the variables set at their best level and another with all of the variables set at their worst level. A six-experiment study of these two experiments is set up; the two experiments are randomized within the study so that each one is run three (3) times, for a total of six (6) runs.
At the end of this initial study the difference between the medians of the two experiments is computed and an estimate of the range of the output responses is derived. In order to move on to Phase 2, the difference of the medians (DM) of the two sets of three experiments must meet the criterion DM > 1.25:1 (pp. 406). If this condition is not satisfied, then either the list does not yet contain the critical or key process variables or else the settings of one or more of the variables must be reversed.
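As a numeric illustration of the criterion (the data below are invented; only the DM > 1.25:1 rule comes from the paper), the check amounts to comparing the separation of the two medians against a range-based estimate of within-group spread:

```python
import numpy as np

# Three replicates at all-best settings and three at all-worst
# (numbers invented for illustration).
best = np.array([92.0, 95.0, 94.0])
worst = np.array([61.0, 58.0, 63.0])

# Separation of the medians...
d_m = np.median(best) - np.median(worst)
# ...against the average of the two within-group ranges.
d_bar = (np.ptp(best) + np.ptp(worst)) / 2.0

ratio = d_m / d_bar
print(ratio, ratio > 1.25)  # proceed to the swapping phase only if True
```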
Experimental Design Preliminary Work.
A traditional design, like the Shainin approach, would take advantage of the results of a brainstorming session with everyone. Since a process of this type is going to generate a lot of possible variables we would ask people to look at the final list and then independently rank them in terms of perceived importance. The results of these independent polls would be tallied and the variables ranked accordingly. The final choice of the number of variables for screening would be driven by the cost/time constraints of the operation. In keeping with the example of the paper we will assume the result of the brainstorming resulted in the seven preliminary variables as listed in Table 1 pp. 407.
The position of traditional design is that it is rare that the best and worst settings are known. Usually, what is known is a nominal setting (a setting at which the process makes useful product; it is this setting that would be the current focus of most process control efforts) and a bad setting, usually a setting known to have caused trouble in the past. Rather than focusing on either of these, the usual approach is to ask about feasible extreme settings of the process variables. Once these extremes are identified, the questions shift to issues concerning the validity and feasibility of running the process at these levels.
With the extreme settings identified and agreed upon the next phase would be the construction of a screen design.
Of the variables of interest only Rubber Band is a true categorical variable. A good traditional design would be a near-saturated 2 level, 8 point design for the six continuous variables with a single replication of one of the eight points. This 9 point design would be run for each of the two rubber band types (18 experiments). The design would be constructed and the settings of each experiment would be reviewed to make sure we hadn't knowingly set up conditions that were either impossible to run or that would be catastrophic in nature. Note: strictly from the standpoint of investigator comfort I would also add a 19th point, which would be an experiment with the variables at their current nominal settings.
The Paper Initial parameter investigation
After running the six experiments, computing confidence limits on the medians using range estimates, and finding a significant difference between the minimum and maximum median, the investigation moves on to putting control limits, which they also call prediction intervals, around the median for the variables set at their best settings (MB) and the median for the variables set at their worst settings (MW). With these limits in place the next step is that of separating the critical variables from the unimportant ones.
For this effort, called the swapping phase, a 7th experiment is run where the most important variable (as ranked by the team) is kept at its worst level while all the other factors are kept at their best level (pp. 406). This experiment is repeated but with the roles reversed: now the important variable is put at its best level and the others are set at their worst. If the output response values from the above two experiments fall within the median limits (by this I believe they mean within the limits surrounding either the best or worst median), then the influence of the swapped process variable can be disregarded (pp. 406). On the other hand, if the measured response falls outside these limits, then the swapped variables, or some interaction involving them, can be viewed as significant to the process. This process is repeated for all of the variables of interest.
After these runs have been made (for the 7 variables of interest this would imply an additional 14 runs), the Shainin approach recommends capping runs, which constitute runs confirming variable significance. For these, the variables identified in the swapping phase as being most important are set at their best levels and the remainder at their worst. As before, the settings of the important and unimportant variables are then reversed and another experiment is run. If the output responses are within the median limits, the runs are declared a success. The wording in the paper is such that I believe they mean that if the results of the best settings lie within the limits of the best median and the worst settings lie within the limits of the worst median, then they can proceed.
Experimental Design Initial parameter investigation
Once the 19 point design had been run it could be examined using standard regression methods. Over the ranges of the variables studied, the results of the analysis of the 19 experiments would identify those variables whose linear behavior significantly impacted the measured response and thus those variables which are critical to the process (the red X, pink X, and pale pink X of the Shainin approach).
The X's are normalized over the ranges they were actually run (as opposed to just blindly plugging in the ideal -1, +1 matrix). Then both backward elimination and forward selection with replacement are run. In most instances, if the actual design is reasonably orthogonal, the two methods will converge to the same model. Because the X's have been normalized, their respective coefficients can be directly compared and the variables ranked according to the impact a unit change in the variable (from -1 to +1) would have on the response.
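The normalization step is just a linear rescaling of each X over the range it was actually run; a minimal sketch (Python/NumPy, with illustrative values of my own):

```python
import numpy as np

# Two X's as actually run (values invented); note the run that
# drifted off its ideal settings.
actual = np.array([
    [170.0, 2.0],
    [180.0, 4.0],
    [172.0, 2.5],
    [178.0, 4.0],
])

lo = actual.min(axis=0)
hi = actual.max(axis=0)

# Linear map of each column onto [-1, +1] over its realized range,
# so fitted coefficients are directly comparable across X's.
coded = 2.0 * (actual - lo) / (hi - lo) - 1.0
print(coded)
```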
The DOE equivalent of capping runs is identified by studying the regression equation and using its predictive capability to identify combinations of X's whose predicted response is considered to be an optimum. These confirming experiments are run and, if the results are within the confidence limits of the predicted value, it is reasonable for the investigator to reach the same conclusions as those provided by the results of the Shainin capping runs.
The Paper: Factorial Analysis
In the paper, the results of the initial work have reduced the variables from 7 to 3. The paper then uses a standard 2-level, three-variable, full factorial design to investigate the main effects and their two-way interactions. The investigators ran 3 replicates on the 8-point design, computed the median values of the results of each experiment, and evaluated the results using ANOVA and interaction plots.
Experimental Design: Factorial Analysis
There are many ways to analyze a factorial design and the Shainin approach to this part of the analysis differs from traditional methods only in its use of medians. Like the traditional design, Shainin methods appear to leave the choice of analytical tools to the discretion of the investigator. The authors chose to use ANOVA and interaction plots.
My view is that the authors' choice of tools and replicates will give results that are much less informative than they could be. Since the final three parameters are all continuous variables, and since all of them have a center point, I would choose to run a full factorial two-level design with a replicate on the center point (10 points total). I would analyze the results of this design to ensure there is no significant curvature effect, and I would take the resultant equation and test it to see if it had identified the optimum settings. I would also use Box-Meyer methods to check for variables impacting the process variation.
Only after this would I give any consideration to more replication and/or design augmentation. The regression would identify significant interactions and, more importantly, it would quantify the effects of the X terms in the model. ANOVA will identify significant effects, but you cannot take the direct results of an ANOVA and start turning knobs. Similarly, interaction plots are useful, but if there are too many of them they can be difficult to use for process control.
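The center-point curvature check mentioned above can be sketched as follows (all responses are invented; they are chosen so the center mean matches the factorial mean, i.e. no evidence of curvature):

```python
import itertools
import statistics

# 2^3 full factorial on the three surviving (coded) factors,
# plus a replicated center point: 10 runs total
corners = list(itertools.product([-1, 1], repeat=3))
centers = [(0, 0, 0), (0, 0, 0)]

# invented responses for the 8 corner runs and the 2 center runs
y_corner = [30.0, 42.0, 35.0, 47.0, 33.0, 45.0, 38.0, 50.0]
y_center = [39.5, 40.5]

# if the surface is first-order over this region, the center mean and
# the factorial mean should agree to within run-to-run noise
curvature = statistics.mean(y_center) - statistics.mean(y_corner)
```

A curvature estimate that is large relative to the spread of the replicated center points says a first-order model is not enough over this region, which is exactly when design augmentation earns its keep.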
Some Final Thoughts
1. The initial search for significant variables, as outlined in the paper, is logical and organized; however, it does not appear to be particularly efficient. In particular, it would be very easy to set up combinations of best and worst settings for variables whose results were inconclusive. As offered by the paper, the only remedy to this problem (p. 406) is: "If this condition (DM > 1.25) is not satisfied, then either the list does not yet contain the critical or key process variables or else the setting of one or more variables must be reversed." Experiments run in this fashion usually have to be treated as stand-alone studies, which means that if their results are inconclusive, you cannot combine them with other similarly run experiments to identify variable effects, because of confounding. It doesn't take too many variables to turn a search of this type into a random walk through your design space. By contrast, a well thought out screening design will let you know, with a high degree of certainty, whether or not the variables you have chosen really matter.
2. Multiple Ys. The paper only addresses the issue of a single Y. Most efforts I've been involved in have at least 5 Ys of interest, and a large percentage of the work I've done has involved 10 or more. As described, the initial variable search would seem to require an independent effort for each Y. While you could measure multiple Ys at each of the points in the preliminary study, you would have to face the fact that the best and worst settings may be different for different Ys. This, in turn, would only add to the number of runs you would have to make. On a screening design, this is not an issue. You run the design, you measure the Ys, and you build predictive equations for each Y. From these you can quickly see which variable is significant for which Y and where best settings are in conflict.
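A sketch of the multiple-Y point: one replicated 2^2 screening design, two invented responses (the names "distance" and "scatter" are hypothetical), and one fitted equation per Y from the same set of runs:

```python
import numpy as np

# replicated 2^2 screening design in coded units
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]] * 2, dtype=float)
A = np.column_stack([np.ones(len(X)), X])

# invented responses: factor 1 drives "distance", factor 2 drives "scatter"
ys = {
    "distance": A @ np.array([50.0, 8.0, 0.5]),
    "scatter":  A @ np.array([5.0, 0.2, 2.0]),
}

# one predictive equation per Y, all from the same runs
coefs = {}
for name, y in ys.items():
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    coefs[name] = b[1:]
# comparing coefs across Ys shows which factor matters for which
# response and where the best settings are in conflict
```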
Robert Butler wrote:
The statement "if you have more than 3 factors you choose Taguchi DOE, coz it will cost you less time and low cost" is incorrect. As for Shainin DOE: once you get past his methods for variable level selection, all he uses are standard designs.
As mentioned previously, Taguchi designs are nothing more than cosmetically altered standard designs; thus, as far as identifying variable effects on mean output responses goes, there isn't any difference. As for the issue of variable effects on output variation: a standard design analyzed using the methods of Box-Meyer will require fewer experiments.
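A simplified sketch of the Box-Meyer dispersion check (the published statistic works from the residuals of the fitted location model; the residuals below are invented, with the spread deliberately driven by the third factor):

```python
import itertools
import math

def mean_sq(v):
    # mean squared residual for a group of runs
    return sum(x * x for x in v) / len(v)

# residuals from a fitted location model on an unreplicated 2^3 design
design = list(itertools.product([-1, 1], repeat=3))
resid = [0.2, -1.8, 0.1, 2.1, -0.3, -2.0, 0.4, 1.9]

# dispersion effect per factor: log ratio of residual spread at the
# factor's high level vs. its low level; a value far from 0 flags a
# variable that moves the process variance, not just the mean
disp = []
for j in range(3):
    hi = [r for row, r in zip(design, resid) if row[j] == 1]
    lo = [r for row, r in zip(design, resid) if row[j] == -1]
    disp.append(math.log(mean_sq(hi) / mean_sq(lo)))
```

This is why the same standard design can serve double duty: one pass of the residuals identifies the variance-driving variables without any extra runs.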
With respect to Shainin, the following might be of interest. Again, I posted this some time ago on the old forum which, as near as I can tell, is unsearchable and will not permit links.
Shainin and Classical DOE: A Comparison
In the time I've been reading and contributing to this forum there have been a number of threads and queries asking about the differences between Shainin methods and those of experimental design. The on-line paper cited by Andy U in his post to this forum provides an opportunity to offer a comparison. At the moment, the paper can be found at:
http://www.gcal.ac.uk/crisspi/downloads/publication11.pdf
In the event the link is severed, the title of the paper is "Training for Shainin's Approach to Experimental Design Using a Catapult", Antony and Cheng, Journal of European Industrial Training 27/8 (2003) 405-412.
(Note: As of 18 August 2010 this link no longer works)
The catapult problem described in the paper involves a search of seven (7) catapult variables in order to identify an optimum.
The variables are:
Variable            Worst   Best
Ball Type:          Light   Heavy
Rubber Band Type:   Brown   Black
Stop Position:      1       4
Peg Height:         2       4
Hook Position:      2       4
Cup Position:       5       6
Release Angle:      170     180

The comparison will consist of summaries of statements in the paper concerning the Shainin approach followed by the approach/techniques used in traditional design.
The Paper: Preliminary Work
In Phase I of the process the investigators develop a list of process parameters to be tested. The selection is based on meetings with people from design, QC, and various levels and departments in manufacturing. The variables are identified by brainstorming and by examining earlier process output. Once the factors are identified they are ranked in order of perceived importance, and two levels, +1 (the best) and -1 (the worst), are identified for each of the variables of interest.
In the paragraphs following the opening comments about Phase I the authors describe the initial variable search method, which consists of running an experiment with all of the variables set at their best level and another with all of the variables set at their worst level. A six-run study of these two experiments is set up: the two experiments are randomized within the study so that each one is run three (3) times, for a total of six (6) runs.
At the end of this initial study the difference between the medians of the two experiments is computed and an estimate of the range of the output responses is derived. In order to move on to Phase 2, the difference of the medians (DM) of the two sets of three experiments must meet the criterion DM > 1.25:1 (p. 406): "If this condition is not satisfied, then either the list does not yet contain the critical or key process variables or else the settings of one or more of the variables must be reversed."
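A sketch of the DM check as I read it (assuming DM is the difference of the two medians taken against the average within-group range; the run results are invented):

```python
import statistics

def dm_ratio(best_runs, worst_runs):
    # difference of the group medians over the mean within-group range
    d_cap = abs(statistics.median(best_runs) - statistics.median(worst_runs))
    d_bar = ((max(best_runs) - min(best_runs))
             + (max(worst_runs) - min(worst_runs))) / 2
    return d_cap / d_bar

# three replicates at all-best settings and three at all-worst (invented)
best = [108.0, 112.0, 110.0]
worst = [71.0, 78.0, 74.0]
ok_to_proceed = dm_ratio(best, worst) > 1.25
```

If the ratio fails the 1.25:1 test, the Shainin prescription is to go back and either add variables to the list or reverse some of the level assignments before continuing.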
Experimental Design: Preliminary Work
A traditional design, like the Shainin approach, would take advantage of the results of a brainstorming session with everyone involved. Since a process of this type is going to generate a lot of possible variables, we would ask people to look at the final list and then independently rank the variables in terms of perceived importance. The results of these independent polls would be tallied and the variables ranked accordingly. The final choice of the number of variables for screening would be driven by the cost/time constraints of the operation. In keeping with the example of the paper, we will assume the brainstorming resulted in the seven preliminary variables listed in Table 1, p. 407.
The position of traditional design is that it is rare that the best and worst settings are known. Usually, what is known is a nominal setting (a setting at which the process makes useful product; it is this setting that would be the current focus of most process control efforts) and a bad setting, usually a setting known to have caused trouble in the past. Rather than focusing on either of these, the usual approach is to ask about feasible extreme settings of the process variables. Once these extremes are identified, the questions shift to issues concerning the validity and feasibility of running the process at these levels.
With the extreme settings identified and agreed upon the next phase would be the construction of a screen design.
Of the variables of interest only Rubber Band is a true type (categorical) variable. A good traditional design would be a near-saturated 2-level, 8-point design for the six continuous variables with a single replication of one of the eight points. This 9-point design would be run for each of the two rubber band types (18 experiments). The design would be constructed and the settings of each experiment reviewed to make sure we hadn't knowingly set up conditions that were either impossible to run or catastrophic in nature. Note: strictly from the standpoint of investigator comfort I would also add a 19th point, which would be an experiment with the variables at their current nominal settings.
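The near-saturated 8-point design for the six continuous variables can be sketched as a 2^(6-3) fraction (the generators D = AB, E = AC, F = BC are one standard resolution III choice; they are an assumption on my part, not taken from the post):

```python
import itertools

# full 2^3 in factors A, B, C; the remaining three columns are
# generated as D = AB, E = AC, F = BC, giving a 2^(6-3) fraction
base = itertools.product([-1, 1], repeat=3)
design = [(a, b, c, a * b, a * c, b * c) for a, b, c in base]

for run in design:
    print(run)   # 8 runs covering all six factors
```

All fifteen pairs of columns are orthogonal, which is what lets 8 runs screen the main effects of six variables, at the usual resolution III price of main effects being aliased with two-way interactions.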
The Paper: Initial Parameter Investigation
After running the six experiments, computing confidence limits on the medians using range estimates, and finding a significant difference between the minimum and maximum median, the investigation moves on to putting control limits, which they also call prediction intervals, around the median for the variables set at their best settings (MB) and the median for the variables set at their worst settings (MW). With these limits in place the next step is that of separating the critical variables from the unimportant ones.
For this effort, called the "swapping phase", a 7th experiment is run where "the most important variable (as ranked by the team) is kept at its worst level while all the other factors are kept at their best level" (p. 406). This experiment is repeated with the roles reversed: now the important variable is put at its best level and the others are set at their worst. If the output response values from the above two experiments fall within the median limits (by this I believe they mean within the limits surrounding either the best or worst median), "then the influence of the swapped process variable can be disregarded" (p. 406). On the other hand, if the measured response falls outside these limits then the swapped variable, or some interaction involving it, can be viewed as significant to the process. This process is repeated for all of the variables of interest.
Hi, Robert,
Thanks for your views.
1.”The initial search for significant variables, as outlined in the paper, is logical and organized.”
2."As described, the initial variable search would seem to require an independent effort for each Y. While you could measure multiple Ys at each of the points in the preliminary study you would have to face the fact that the best and worst settings may be different for different Ys."
I really agree with you.
By the way, I can’t find the paper you gave in your replies.
This link “http://www.gcal.ac.uk/crisspi/downloads/publication11.pdf” is not served now. What a pity.
Can you send the PDF file to my email box, thanks a lot. My email adress is: shoutbug@yahoo.com.cn.
Best regards!
From leslile
CHINA :cheer:
I don't have the article; however, it is my understanding that you can order it online.
Robert Butler wrote:
I don't have the article; however, it is my understanding that you can order it online.
:) That’s ok, I will find some other readings to learn. Thank you all the same.
May you have a nice day!
© Copyright iSixSigma 2000-2014.