In my recollection, two recurring questions have dominated the field of six sigma.  The first inquiry can be described by the global question: “Why 6σ and not some other level of capability?”  The second inquiry is more molecular.  It can be summarized by the question: “Where does the 1.5σ shift factor come from, and why 1.5 versus some other magnitude?”  For details on this subject, see: Harry, M. J. Resolving the Mysteries of Six Sigma: Statistical Constructs and Engineering Rationale. First Edition. Palladyne Publishing: Phoenix, Arizona, 2003. (Note: this particular publication will be available by October 2003.)  But until then, we will consider the following thumbnail sketch.

At the onset of six sigma in 1985, this writer was working as an engineer for the Government Electronics Group of Motorola.  By chance, I connected with another engineer by the name of Bill Smith (originator of the six sigma concept in 1984).  At that time, he suggested that Motorola should require 50 percent design margins for all of its key product performance specifications.  Statistically speaking, such a “safety margin” is equivalent to a 6 sigma level of capability.

When considering the performance tolerance of a critical design feature, he believed a 25 percent “cushion” was not sufficient for absorbing a sudden shift in process centering.  Bill believed the typical shift was on the order of 1.5σ (relative to the target value); on a 4σ design, a 25 percent cushion amounts to only 1σ, too little to contain such a shift, whereas a 50 percent cushion provides 2σ.  In other words, a four sigma level of capability would normally be considered sufficient, if centered.  However, if the process center were somehow knocked off its central location (on the order of 1.5σ), the initial capability of 4σ would be degraded to 4.0σ − 1.5σ = 2.5σ.  Of course, this would have a consequential impact on defects.  In turn, a sudden increase in defects would have an adverse effect on reliability.  As should be apparent, such a domino effect would continue straight up the value chain.
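To make the arithmetic concrete, here is a small sketch (my illustration, not part of the original discussion) that computes the defect rate, in defects per million opportunities, for a characteristic with two-sided specification limits at ±kσ from the target, both centered and with a 1.5σ shift in process centering:

```python
# Defect-rate arithmetic for the shift argument described above.
# Assumes a normally distributed characteristic with two-sided spec
# limits at +/- k sigma from the target value.
from math import erf, sqrt

def phi(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def dpmo(k: float, shift: float = 0.0) -> float:
    """Defects per million opportunities for spec limits at +/- k sigma,
    with the process mean shifted off target by `shift` sigma."""
    out_of_spec = (1.0 - phi(k - shift)) + (1.0 - phi(k + shift))
    return 1e6 * out_of_spec

print(f"4 sigma, centered:       {dpmo(4.0):8.1f} DPMO")
print(f"4 sigma, 1.5 sigma shift:{dpmo(4.0, 1.5):8.1f} DPMO")
print(f"6 sigma, 1.5 sigma shift:{dpmo(6.0, 1.5):8.1f} DPMO")
```

Running it reproduces the familiar figures: roughly 63 DPMO for a centered 4σ process, about 6,210 DPMO once that process shifts 1.5σ, and about 3.4 DPMO for a 6σ process after the same shift.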

Regardless of the shift magnitude, those of us working this issue fully recognized that the initial estimate of capability often erodes over time in a “very natural way,” thereby increasing the expected rate of product defects (when considering a protracted period of production).  Extending beyond this, we concluded that the product defect rate was highly correlated with the long-term process capability, not the short-term capability.  Of course, such conclusions were predicated on the statistical analysis of empirical data gathered on a wide array of electronic devices. 
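For readers who want to see the short-term versus long-term distinction in practice, the sketch below simulates it.  The specification limit, drift, and noise magnitudes are hypothetical values of my own choosing; the convention it follows, estimating short-term sigma from within-subgroup variation and long-term sigma from the overall variation, is standard capability practice rather than anything specific to our original study:

```python
# Minimal simulation (hypothetical values throughout): short-term sigma
# is estimated from within-subgroup variation only, while long-term
# sigma comes from the overall spread, which also absorbs the drift in
# process centering between subgroups.
import random

random.seed(1)

TARGET, USL = 10.0, 10.4              # hypothetical target and upper spec limit

# 50 subgroups of 5 parts each; the process mean drifts between subgroups.
subgroups = []
for _ in range(50):
    drift = random.gauss(0.0, 0.10)   # between-subgroup centering error
    subgroups.append([random.gauss(TARGET + drift, 0.05) for _ in range(5)])

def stdev(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

# Short-term estimate: average within-subgroup standard deviation
# (ignoring the small-sample c4 bias correction for brevity).
sigma_st = sum(stdev(g) for g in subgroups) / len(subgroups)

# Long-term estimate: standard deviation of all measurements pooled.
all_x = [x for g in subgroups for x in g]
sigma_lt = stdev(all_x)

grand = sum(all_x) / len(all_x)
print(f"Z short-term = {(USL - grand) / sigma_st:.2f}")
print(f"Z long-term  = {(USL - grand) / sigma_lt:.2f}")
# The short-term figure comfortably exceeds the long-term one: the drift
# that inflates the overall spread is invisible within any one subgroup.
```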

Thus, we came to understand three things.  First, we recognized that the instantaneous reproducibility of a critical-to-quality characteristic is fully dependent on the “goodness of fit” between the operating bandwidth of the process and the corresponding bandwidth of the performance specification.  Second, the quality of that interface can be substantively and consequentially disturbed by process centering error.  Of course, both of these factors profoundly impact long-term capability.  Third, we must seek to qualify our critical processes at a 6σ level of short-term capability if we are to enjoy a long-term capability of 4.5σ (that is, 6.0σ − 1.5σ = 4.5σ).

By further developing these insights through applied research, we were able to greatly extend our understanding of the many statistical connections between such things as design margin, process capability, defects, field reliability, customer satisfaction, and economic success.
