Is there a difference between Six Sigma and "robust design"?


Where the engineering of quality is concerned, Dr. G. Taguchi popularized the idea of “robustness” in the mid-1980s.  As one might suspect, the underlying definition of this term is fairly simple – per the dictionary, it is the ability to withstand or recover from unexpected (and expected) conditions during operation.  In this sense, the idea of “robustness” is related to the concept of “tolerance.”  In other words, a process is “robust” when its defining operations are relatively and collectively tolerant of (impervious to) expected and unexpected sources of random and nonrandom variation.

It was also during the mid-1980s that the idea of “Six Sigma” was first exploited at Motorola.  As many will attest, this particular approach to quality improvement was quite profound in its purpose – and arguably more straightforward than Dr. Taguchi’s aim of reducing losses to society.  By contrast, the Six Sigma methodology was focused on the improvement of critical-to-quality characteristics (CTQs).  This doctrine advocates that each CTQ exhibit no more than 3.4 defects per million opportunities (DPMO = 3.4).


Of course, such a discrete level of quality is statistically equivalent to a continuous unilateral capability of 4.5S, where S is the long-term process standard deviation.  Statistically compensating for the influence of long-term uncertainties, we compute the short-term capability expectation as 4.5S + 1.5S = 6S.  Thus, we say that a six-sigma level of short-term capability is generally equivalent to a long-term quality expectation of DPMO = 3.4.
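As a quick sanity check, the 3.4 figure is simply the one-sided tail area of a normal distribution beyond 4.5 standard deviations, scaled to a million opportunities. A minimal sketch using only the Python standard library (the function name is illustrative):

```python
from math import erfc, sqrt

def dpmo_from_sigma(z):
    """One-sided normal tail area beyond z standard deviations,
    scaled to defects per million opportunities (DPMO)."""
    tail = 0.5 * erfc(z / sqrt(2))  # P(Z > z) for a standard normal
    return tail * 1_000_000

# Long-term capability of 4.5 sigma (6 sigma short-term minus the
# conventional 1.5 sigma compensation) yields the familiar figure:
print(round(dpmo_from_sigma(4.5), 1))  # 3.4
```

Reading the relation the other way, 6S short-term capability minus the 1.5S long-term allowance leaves exactly the 4.5S margin that produces DPMO = 3.4.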

We further discover that a six-sigma level of capability is directly equivalent to an operating bandwidth of 50 percent.  This means that only 50 percent of the specification bandwidth (tolerance zone) is consumed by the process bandwidth.  Interestingly, such a level of process capability naturally provides for a high level of robustness to random and nonrandom process variations (by virtue of its narrow operating bandwidth). 
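The 50 percent figure follows directly from the definitions: a six-sigma process fits ±6S inside the specification, while its natural operating bandwidth is ±3S. A small numeric illustration (the specification limits are hypothetical, chosen only so the arithmetic is visible):

```python
# Hypothetical bilateral specification, for illustration only
LSL, USL = 4.0, 16.0           # tolerance zone 12 units wide
S = (USL - LSL) / 12           # six-sigma process: +/-6S spans the spec

process_bw = 6 * S             # +/-3S operating bandwidth of the process
spec_bw = USL - LSL            # specification bandwidth (tolerance zone)
print(process_bw / spec_bw)    # 0.5 -> 50 percent of the spec is consumed
```

The unconsumed half of the tolerance zone is precisely the margin that makes the process tolerant of centering errors.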

For example, consider a continuous CTQ with a bilateral performance specification.  Given this, the specified bandwidth of operation (tolerance zone) can be fully described by the quantity USL – LSL, where USL is the upper specification limit and LSL is the lower specification limit.  For the general case, we also recognize that T is the target specification, often symmetrically located at the midpoint of the tolerance zone such that T = (USL + LSL) / 2.  By quality convention and statistical declaration, we acknowledge that virtually all of the process distribution (99.73 percent – effectively unity) is contained within the ±3S limits.


Given these facts, we can confidently assert that only 50 percent of the design specification (USL – LSL) is consumed by the operating bandwidth of the process (±3S), where S is the standard deviation (index of variability).  Algebraically speaking, this means that the standard deviation of a six-sigma process can be sufficiently characterized as S = |USL – T| / 6 (equivalently |LSL – T| / 6 for a symmetric tolerance).  If this statistical construct can be realized in practice, the CTQ would be considered “Six Sigma.”
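That algebra translates into a one-line capability check. A sketch with a hypothetical specification (the numbers are invented for illustration):

```python
# Hypothetical CTQ specification, for illustration only
USL, LSL = 10.6, 9.4
T = (USL + LSL) / 2             # symmetric target at the midpoint

# Largest standard deviation at which the CTQ still rates "six sigma":
S_six_sigma = abs(USL - T) / 6  # half-tolerance divided by six
print(round(S_six_sigma, 4))    # 0.1
```

An observed standard deviation at or below this value means the half-tolerance spans at least six standard deviations, which is the defining condition.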

In this context, the process center (average) enjoys the freedom to “wiggle around a little” without any appreciable loss of quality.  This means the process can tolerate fairly large “shifts and drifts” in process centering.  More specifically, all of this is to say that a 6S level of capability is quite naturally “robust” to process-centering errors on the order of 1.5S or less.
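The robustness claim can be made concrete by recomputing the dominant tail after the mean drifts toward one specification limit. A sketch, assuming the standard one-sided approximation (function names are illustrative):

```python
from math import erfc, sqrt

def dpmo_with_shift(st_sigma, shift=1.5):
    """DPMO for the dominant tail after the process mean drifts
    `shift` standard deviations toward one specification limit."""
    z = st_sigma - shift                        # remaining margin in sigmas
    return 0.5 * erfc(z / sqrt(2)) * 1_000_000

print(round(dpmo_with_shift(6.0), 1))  # 3.4 -> a 6S process shrugs off the shift
print(round(dpmo_with_shift(3.0)))     # 66807 -> a 3S process does not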
