iSixSigma

Traditional vs. Taguchi: Similarities and Differences


Viewing 8 posts - 1 through 8 (of 8 total)
  • Author
    Posts
  • #45499

    Robert Butler
    Participant

    Posts with questions/observations/beliefs about the similarities and differences of Taguchi and “traditional” designs, as well as opinions concerning the value (or perceived lack thereof) of each show up on the forum with a fair degree of regularity.  It is my perception that much of the confusion in these posts is due to a poor understanding of the technical terms used to frame these discussions. What follows is an attempt at term/concept clarification.
     
     
    Orthogonal Arrays
     
    The point of a design – any design – is to build experiments such that, when they are run, the effects of the variables of interest can be separated from one another.  The highest degree of separation that can occur in a design is when the effects of one variable are completely separated from the effects of another variable.  When this occurs we say the variables are orthogonal to one another.  All basic experimental designs have this feature; thus all basic designs are orthogonal, and thus a “traditional design” is an “orthogonal array”. 
     
    The only issue – and this is an issue with all designs – is the question of what is going to be “orthogonal” with what.  In other words, are the main effects clear of one another but confounded with two-way interactions, or are the main effects clear of each other and all two-way interactions clear of one another as well, or…what?  Taguchi appears to combine the notion of orthogonality and the definition of an orthogonal array to identify a design whose main effects are clear (orthogonal) of one another but are confounded with two-way interactions.  In the statistics literature this kind of design is commonly known as a saturated design.
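    The orthogonality just described is easy to see numerically: in a -1/+1-coded two-level factorial, two effect columns are orthogonal when their dot product is zero.  A minimal Python sketch (the 2^3 full factorial here is only an illustration):

```python
import itertools

# Full 2^3 factorial in -1/+1 coding: 8 runs, 3 factors.
runs = [list(p) for p in itertools.product([-1, 1], repeat=3)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

cols = list(zip(*runs))  # one column of settings per factor

# Every pair of main-effect columns is orthogonal: dot product zero.
for i, j in itertools.combinations(range(3), 2):
    assert dot(cols[i], cols[j]) == 0

# In the full factorial the two-way interaction column (the
# elementwise product) is also orthogonal to the remaining main effect.
ab = [r[0] * r[1] for r in runs]
assert dot(ab, cols[2]) == 0
print("all effect columns are mutually orthogonal")
```

    In a saturated design the same zero-dot-product check would hold among the main-effect columns, but the interaction columns would coincide with (be aliased to) main-effect columns instead of being separate.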
     
    The “L” Experimental Designs (Arrays):
     
     L arrays are nothing more than standard 2- and 3-level factorial designs of varying degrees of saturation (Plackett-Burman, fractional factorial, and Latin square) which have been cosmetically altered by changing the signs of some of the elements in the design matrix.  Because they are just standard designs you can use any traditional design in their place and achieve exactly the same results. 
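    The “cosmetic alteration” is easy to demonstrate.  The Python sketch below takes the L4 array as usually tabulated (levels coded 1/2), recodes it to -1/+1 notation, and shows that flipping the sign of one column (i.e. relabeling its levels) gives exactly the standard 2^(3-1) half fraction built with the generator C = AB:

```python
# Taguchi's L4 array, levels coded 1/2 as usually tabulated.
L4 = [(1, 1, 1),
      (1, 2, 2),
      (2, 1, 2),
      (2, 2, 1)]

# Recode levels 1/2 to the -1/+1 notation of a "traditional" design.
pm = [tuple(-1 if x == 1 else 1 for x in row) for row in L4]

# Standard 2^(3-1) half fraction with generator C = AB.
half = [(-1, -1, 1), (1, -1, -1), (-1, 1, -1), (1, 1, 1)]

# The L4's third column is just C = -AB: flip the sign of column C
# (relabel its levels) and the two designs have the same run set.
flipped = {(a, b, -c) for a, b, c in pm}
assert flipped == set(half)
print("L4 is a relabeled 2^(3-1) fractional factorial")
```

    Since relabeling which level is called “low” and which “high” changes nothing about what the design can estimate, the two designs are interchangeable.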
     
     Renaming existing designs, highlighting a difference which isn’t a difference (swapping -1 and +1 points) and choosing another term over an accepted one (saturated vs. orthogonal array) give the false impression that the “L” series are somehow different from “traditional” designs. This verbal fog only confuses and leads to comments/questions such as these:
     
     “I don’t know what you mean by a Taguchi L9? Taguchi uses orthogonal arrays and I doubt whether he would use an L9 OA anyway!”
      “I am looking for a book/paper where the derivation of Taguchi orthogonal arrays is explained. Does anyone know where I can find the scientific articles (published in math or statistic journals) written by Dr. Taguchi? Especially what generators have been used”
     “The real advantage of Orthogonal Arrays appears when the factors have more than three levels.”
     “The role of orthogonal array is to check bad desing in confimation runs.”
     
    The Noise Issue
     
    Taguchi chooses to distinguish between what are called “process” variables and “noise” variables. 
     
     Noise, in the statistical sense, refers to the effects of unknown and hence uncontrolled process variables.  Since they are unknown they cannot be subject to any kind of control.  Standard statistical practice uses randomization to ensure the effects of these unknown variables will appear only as an increase in the residual error and not as an alias with one of the controlled variables in a design.
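    A sketch of that standard practice (the design and seed below are purely illustrative): randomize the run order of the design so an unknown, time-ordered variable cannot line up with any one factor column.

```python
import itertools
import random

# Eight design points of a 2^3 factorial, listed in standard order.
design = list(itertools.product([-1, 1], repeat=3))

# Randomize the run order so that any unknown, drifting ("lurking")
# variable shows up as residual error instead of being aliased with
# one of the controlled factors.
rng = random.Random(17)  # fixed seed only so the sketch is reproducible
run_order = rng.sample(range(len(design)), k=len(design))
for run, idx in enumerate(run_order, start=1):
    print(f"run {run}: set factors to {design[idx]}")
```

    In practice you would draw a fresh random order for each design (and randomize within blocks if blocking is used), not reuse a fixed seed.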
     
    Noise, in the Taguchi sense, is just a term for a variable you would really rather not have to control once you start running your process.  It is a known variable and it is one that, during the course of running your design, you will control in order to gain an estimate of its effect.
     
    Failure to understand that Taguchi noise and statistical noise are not the same is the source of endless confusion and error as exemplified by the following:
     
    “Using traditional DoE (Classical approach) methodologies, we have not been entirely successful. We feel that we have not managed noise factors very well. We have used “Blocks” to handle noise, but the results never satisfied us.”
     
      This confusion is mild compared to some of the other situations I’ve witnessed. For example:
     
    1) From time to time I’ve been brought into the aftermath of a design effort where the investigators really didn’t know their process as well as they thought they did. They identified their “process” and “noise” parameters and ran the Taguchi design as recommended (no randomization). They found their “optimum” in the presence of the “noise variable”, ran the confirmations, set up the process and watched everything fall apart.  After a lot of intensive post mortem work they found they had a noise variable in the statistical sense and that this variable had been confounded with one of the variables in the inner array. Their efforts failed because after setting up the process, the statistical noise variable had shifted and their control/improvement plan, which was based on a variable of no consequence, fell apart.
     
    2) A team sets up their inner and outer array and runs the design.  As it turns out, one of the “noise” variables is really the key to process improvement.  However, because it was labeled a “noise” variable the team ignores it and proceeds to identify an optimum in the presence of this “noise”.  The end result is a grossly suboptimal process but it is one that is doing the best it can given that the most important variable isn’t being controlled. This may sound silly but I’ve run into this situation a couple of times.
     
    Design Structure
     
       Taguchi designs are nothing more than a combination of two “traditional” factorial designs.  Taguchi calls the two designs the inner and the outer array.  The concept is very simple and the confusion about supposed differences between these designs and traditional designs is due, again, to nothing more than wordplay. 
     
    The practice of running a saturated design at each design point of a smaller, full or fractional factorial design is something that has been done almost since the beginning of DOE in the 1920s.  With perhaps the sole exception of mixture/process variable designs, the reason you usually don’t see this kind of design in the statistics literature is its general inefficiency (that is – running more experiments than necessary).
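    A quick bit of arithmetic makes the inefficiency concrete.  The design sizes below are hypothetical, chosen only to show how crossing an inner array with an outer array multiplies run counts:

```python
# Illustrative sizes: an inner array of 7 two-level control factors
# in 8 runs (a saturated 2^(7-4), Taguchi's L8) crossed with an
# outer array of 3 "noise" factors in 4 runs (an L4, i.e. a 2^(3-1)).
inner_runs, outer_runs = 8, 4

# Crossed layout: every inner-array point is run at every
# outer-array point, so run counts multiply.
crossed_total = inner_runs * outer_runs
print(f"crossed inner x outer design: {crossed_total} runs")

# A single "combined" fractional factorial treats all 10 variables
# together; a 2^(10-6) needs only 16 runs to screen the same
# main effects.
combined_total = 2 ** (10 - 6)
print(f"combined fractional factorial: {combined_total} runs")
```

    The gap only widens as the arrays grow, which is why the crossed layout is usually regarded as wasteful when a single combined design will answer the same questions.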
     
    Since a variable by any other name will vary just as much, the view of traditional design is that there is nothing to be gained by artificially labeling one variable a “process” variable and another a “noise” variable and separating them in the design space.  Once run, the results of a traditional design are analyzed to identify those variables having a significant impact on the process.  If, after having run and confirmed our results, we wish to not control some design variable (so that it is, in fact, contributing to the overall noise of the process), we will use the analysis of the design to identify the optimum that can be achieved under these circumstances and to identify the probable variation in process output that will result from such an action.
     
    Robust Designs
     
    Statistically, a design is robust if it can suffer “damage” and still produce useful results.  The damage can be of two types:
    A) One or more of the experiments in the design fail to produce measurable results.
    B) There were shifts, changes, or omissions with respect to the level settings of the variables in the design.
     
    All “traditional” designs are robust.  Taguchi designs are just combinations of traditional designs; therefore, by definition, they too must be robust in this statistical sense.
     
     “Robust”, in the Taguchi sense, has nothing to do with A or B.  Robust is the term he uses to identify his design matrix.  It is evident from the literature that inherent in the word choice is the idea that the use of these designs will provide results which will identify optimum process settings (mean process levels) which will result in product whose properties will remain at or near optimum in spite of  the variability of the “noise” and process parameters.  This is one of those cases where a single word really is the best choice as a descriptor for both the statistical concepts and the Taguchi labels.  It does cause confusion but there isn’t much you can do except pay close attention to the context in which it is used.
     
    Other than the potential for confusion of meaning the only other problem with the term “robust design” is that, for some, it conveys the idea that non-Taguchi designs are somehow “fragile” because these designs are supposedly incapable of assessing the effects of design variables on both target and variability.  This supposition is false.
     
    Consequently, the optimization method described in the statement:
     
    “However I do not know if you are familiar with the 2 step optimization in Taguchi method:-
    1. minimize variability (NPM) and
    2. adjust to target (TPM).”  
     
    is a method that should also be applied to traditional designs.
       Both Taguchi and traditional designs identify mean (target) adjustments using the methods of regression and ANOVA.  In a traditional design one investigates the design variable effects on final product variation by running a Box-Meyer analysis.  The post below gives the details:
     
      https://www.isixsigma.com/forum/showmessage.asp?messageID=80134
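    For readers who want the mechanics, here is a minimal Python sketch of the two steps with made-up numbers: for each control setting, the spread across the outer-array (noise) conditions is summarized by the log of the sample variance (the dispersion response of a Box-Meyer-style analysis); the lowest-variance setting is found first, and then the mean is adjusted to target.

```python
import math
from statistics import mean, variance

# Hypothetical data: each (control A, control B) setting was run at
# four outer-array (noise) conditions.  All numbers are made up.
responses = {
    (-1, -1): [10.2, 9.7, 10.5, 9.6],
    (-1, +1): [12.1, 12.0, 12.2, 11.9],
    (+1, -1): [8.3, 9.9, 7.8, 10.1],
    (+1, +1): [11.4, 11.7, 11.2, 11.6],
}

# Step 1 - minimize variability: rank settings by the log of the
# sample variance of the response across the noise conditions.
log_var = {pt: math.log(variance(y)) for pt, y in responses.items()}
best = min(log_var, key=log_var.get)
print("lowest-variance setting:", best)

# Step 2 - adjust to target: at the low-variance setting, use the
# fitted mean model (here just the raw mean) to pick a factor that
# shifts the mean onto target without inflating the spread.
print("mean response at that setting:", round(mean(responses[best]), 2))
```

    In a real analysis the log variances and the means would each be regressed on the control factors so that dispersion effects and location effects can be identified and traded off; the dictionary lookup above just stands in for those two fitted models.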
     
     
     
    Design Competition
     
    There is an aspect of the “Taguchi vs. Traditional” arguments that I find curious in the extreme.  It is exemplified in the following two quotes:
     
    “…Japanese companies use Taguchi Methods – but I guess they’re wrong because according to some people they’re not efficient, don’t work properly, alias effects, etc. So there must be something about doing things wrong that leads to getting the right answer”
    “Perhaps there is a good reason why Taguchi takes a different approach – as yet unappreciated :-)”
    The essence of these comments is as follows: “The (fill in the blank) are running Taguchi designs.  They have had successes with these designs in spite of the fact that these designs aren’t the “best” or are different from “traditional” designs.” 
     
     From this one is supposed to infer that this proves Taguchi designs are as good as if not better than traditional designs. However, this is the wrong inference since the comparisons are not between “traditional” designs and Taguchi designs but between what (fill in the blank) got for their efforts before running a Taguchi design and what they got after they ran a Taguchi design.  In my experience what (fill in the blank) did before Taguchi was nothing but “wonder-guess-putter.” In short, (fill in the blank) is comparing the results they got using some form of organized inquiry (which is all a design is) to the results they got using disorganized inquiry.  The simple fact is that organized inquiry – any organized inquiry – is going to produce results better, cheaper, and faster than disorganized inquiry. 
     
     Thus there is no argument that the Taguchi approach is more efficient than disorganized or one-at-a-time effort and thus less costly.  It’s been my experience that Taguchi designs are not as efficient as “traditional” designs.  For me, this means a good “traditional” design is less costly than a Taguchi design and, all other things being equal, it will probably get to the answer sooner with less expenditure of funds.
     
     
    The Value of Taguchi
     
    I think the biggest contribution Taguchi has made is his popularization of the use of experimental designs and the promulgation of the concept of a loss function.  As a statistician I am grateful that his efforts have raised the general awareness of the power of designs.  This is something that should have been done by my colleagues in the universities.  They have not done this, indeed they have, in general, failed miserably to convey to the public the value of any kind of statistical analysis.
     
    I also think focusing the efforts of a design to look for an optimum setting in the presence of known, uncontrolled variation is an improvement over the simpler searches for optimum process settings.
     
     
     My Reservations 
     
    There is a perception that statisticians do not like Taguchi designs.  As with all things there is, I’m sure, a grain of truth to this statement. However, since I don’t have the results of a general survey of my fellow statisticians on this point I can’t give you any measure of its validity.  Speaking as a single statistician my view is that, except for issues of efficiency, I don’t care much one way or the other as far as Taguchi designs are concerned. However, I am one of those statisticians who do not have much regard for the Taguchi approach to inquiry and analysis. 
     
     
    My lack of fondness for Taguchi’s approach is driven by issues that are practical and philosophical in nature.  The practical reservations revolve around the issues of unknown and uncontrolled variation and randomization and what appears to be a lack of understanding of design power and efficiency. 
     
    Taguchi’s belief in the ability of an investigator to routinely possess process knowledge so detailed as to allow one to conclude there is no possibility of a significant unknown lurking variable is not in accordance with my experience. Time and again I’ve seen efforts come to naught because of a failure to use randomization to guard against the effects of unknown variables. Consequently, Taguchi’s lack of concern with respect to the need for randomization is, for me, unacceptable. 
     
    My sense of his failure to understand design power and efficiency is based on posts like the following:
     
    Just read Taguchi’s latest book : Taguchi’s Quality Engneering Handbook
    In the book, you will finds the clear statement like this :
    “when there is no interactions among control factors, DOE is a waste — you can use OFAT and get the same result.”
    Admittedly, I have not gone back to Taguchi’s publications to check the context of quotes like this, but the fact that I’ve had to repeatedly address this issue when talking with people who were ardent advocates of Taguchi methods suggests that, at worst, views such as these are indeed his or, at best, these views are the result of a poor understanding of his position.
     
    The focus of my philosophical displeasure is the needless renaming of designs and the re-definition of accepted statistical terms and concepts.  These aspects of Taguchi methods serve only to obfuscate and confuse.  They give the beginner a sense of differences where none exist and they hinder attempts to understand the overarching fundamentals of design philosophy and organized inquiry.
     
    A Final Thought
     
      I hope the above will help clarify the issues surrounding differences in design and analysis and also I hope this post, in conjunction with the earlier one on  Shainin’s experimental design strategies,
     
    https://www.isixsigma.com/forum/showmessage.asp?messageID=63748
     
     
    will give you a better understanding of the concepts and terms of the various approaches to design and organized inquiry in general.

    #148853

    Deep
    Participant

    Robert Butler!!!!!!!!!!!!!!!Did not get a chance to read all that, but i saved all that and will read today evening..
    You are the center stage, you are the star…
    Thaaaaaaaaaaaaaaaaaaaaanks a million for being here and helping us….
    Deep

    #148855

    Mikel
    Member

    Robert,
    Very nice. I agree with Taguchi’s main contribution being the loss function. Six Sigma, in general, does not seem to appreciate the idea as we don’t spend much time on teaching people to go to target. As you know, I am a fan of Cpm and would do away with the traditional capability indices where target is known.
    The other value of Taguchi, in my opinion, is his interaction matrices and linear graphs. I like them for two reasons –
    1) For a person to understand how to construct their own linear graphs, they need to understand well how an experiment is constructed.
    2) IF there is good process knowledge on the team planning an experiment, there is a potential for reducing the size of the experiment by choosing which interactions to ignore. Note that it is a big if; it is not often you get that kind of knowledge in a Black Belt class.

    #148865

    Yoshi
    Member

    Robert:
    Perhaps you are right .. no Japanese statisticians have bothered to check ‘traditional DOE’ against Dr. Taguchi’s approach!!! In this regard they seem similar to their USA colleagues, but in a slightly different respect.
    Do you believe it !!!!!
    Too bad USA statisticians did not bother to check Dr. Harry’s 1.5 sigma shift. Ha, ha, ha!
    Can you believe it !!!!
    Yoshi
     

    #148875

    Markert
    Participant

    A major difference is that DOE is enumerative and Taguchi is analytic.

    #148881

    Waste
    Member

    Robert,
    Sometimes it is worth stopping by on this site. Thanks! That’s an excellent summary of the discussion.

    #157042

    mohamed elhassan abdelrhman
    Participant

    I need a handbook for design tures and twoers. Please help me to find that book.
    thank you

    #157043

    Robert Butler
    Participant

    You’ll have to clarify what you mean by “design tures and twoers”


The forum ‘General’ is closed to new topics and replies.