1.5 sigma shift

Six Sigma – iSixSigma Forums Old Forums General 1.5 sigma shift

Viewing 5 posts - 1 through 5 (of 5 total)

    Mike Harry

    It is very interesting and enlightening to read all of the fine debate (both positive and negative) surrounding the 1.5 sigma shift, as such discussion well serves the need to “keep the idea alive,” so to speak.  In an effort to address the posted concerns and issues, I have recently completed a small booklet on this subject (to be published shortly after year’s end).  This particular booklet sets forth the theoretical constructs and statistical equations that undergird and validate the so-called “shift factor” commonly referenced in the quality literature. 
    The booklet is entitled: “Demystifying the 1.5 Sigma Shift — Supporting Engineering and Statistical Rationale.”  It should be recognized that this particular booklet has been prepared from a design engineering perspective.  Owing to this, it can fully support many of the aims associated with design-for-six-sigma (DFSS) – as well as processing-for-six-sigma (PFSS).  Although the booklet is skewed towards design engineers, it provides a methodology for risk analysis that would interest most producibility engineers.  In addition, the booklet is also intended for quality professionals and process engineers who are responsible for the “qualification” of a design or process. 
    As the booklet mathematically demonstrates, the “1.5 sigma shift” can be attributed solely to the influence of random sampling error.  In this context, the 1.5 sigma shift is a statistically based correction for scientifically compensating or otherwise adjusting the postulated model of instantaneous reproducibility for the inevitable consequences associated with dynamic random sampling error.  Naturally, such an adjustment (1.5 sigma shift) is only considered and instituted at the opportunity level of a product configuration.  Thus, the model performance distribution of a given critical performance characteristic can be effectively attenuated for many of the operational uncertainties associated with a design-process qualification (DPQ).
    Based on this quasi-definition, it should be fairly evident that the 1.5 sigma shift factor can be treated as a “statistical constant,” but only under certain “typical” engineering conditions.  By all means, the shift factor (1.5 sigma) does not constitute a “literal” shift in the mean of a performance distribution – as many quality practitioners and process engineers falsely believe or try to postulate through uninformed speculation and conjecture.  However, its judicious application during the course of engineering a system, product, service, event, or activity can greatly facilitate the analysis and optimization of “configuration repeatability.”
    By consistently applying the 1.5 sigma shift factor (during the course of product configuration), an engineer can meaningfully “design in” the statistical confidence necessary to ensure or otherwise assure that related performance safety margins are not violated by unknown (but anticipated) process variations.  Also of interest, its existence and conscientious application have many pragmatic implications (and benefits) for reliability engineering.  Furthermore, it can be used to “normalize” certain types and forms of benchmarking data in the interests of assuring a “level playing field” when considering heterogeneous products, services, and processes.
    In summary, the 1.5 sigma shift factor should only be viewed as a mathematical construct of a theoretical nature.  When treated as a “statistical constant,” its origin can be mathematically derived as an equivalent statistical quantity representing the “worst-case error” inherent to an estimate of short-term process capability.  Hence, the shift factor is merely an “algebraic byproduct” of the chi-square distribution. Its general application is fully constrained to engineering analyses – especially those that are dependent upon process capability data.  Perhaps even more importantly, it is employed to establish a “criterion” short-term standard deviation — prior to executing a six sigma design-process qualification (DPQ). 
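A minimal sketch of the kind of chi-square derivation described above. One commonly cited form treats the shift as the worst-case inflation of a 3-sigma limit caused by sampling error in the short-term standard deviation estimate: delta = 3 * (sqrt((n-1) / chi2(alpha, n-1)) - 1). The subgroup size n = 30 and alpha = 0.005 used here are illustrative assumptions, not values stated in the post, and the chi-square quantile is approximated with the Wilson-Hilferty formula to stay within the standard library.

```python
# Hedged sketch: the 1.5 shift as worst-case sampling error in a
# short-term capability estimate (n = 30 and alpha = 0.005 are assumptions).
from math import sqrt
from statistics import NormalDist


def chi2_quantile(p: float, df: int) -> float:
    """Wilson-Hilferty approximation to the chi-square quantile function."""
    z = NormalDist().inv_cdf(p)
    c = 2.0 / (9.0 * df)
    return df * (1.0 - c + z * sqrt(c)) ** 3


def shift_factor(n: int, alpha: float) -> float:
    """Worst-case inflation of a 3-sigma limit due to sampling error,
    delta = 3 * (sqrt((n-1) / chi2(alpha, n-1)) - 1)."""
    df = n - 1
    return 3.0 * (sqrt(df / chi2_quantile(alpha, df)) - 1.0)


if __name__ == "__main__":
    # With these assumed parameters the result lands close to 1.5.
    print(round(shift_factor(30, 0.005), 2))
```

Under these assumptions the factor comes out near 1.5, which is the sense in which it can be called an "algebraic byproduct" of the chi-square distribution rather than a literal mean shift.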


    Robert Butler

      If you are not aware of the Davis Bothe article on the same subject in Quality Engineering 14(3) 479-487 (2002), you may want to read it before you go to print with your booklet.  Based on your description of what you have done, it sounds like the two of you have done the same thing – worked on the theory that would justify 1.5.



    Theoretical justification is all well and good, but you still won’t be able to make it mathematically possible to estimate the z-scores associated with a DPMO probability without assuming the z-scores are equidistant from the mean.
    The formula for calculating a sigma level from a known DPMO likewise requires assuming z-scores equidistant from the mean, which is definitely not always the case.
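For reference, the conventional conversion the poster is questioning can be sketched as follows: the long-term DPMO is mapped through the inverse normal CDF and the 1.5 shift is added back, which implicitly assumes a normal distribution with all defects in a single tail, i.e. the very equidistance assumption being criticized.

```python
# Hedged sketch of the conventional DPMO -> sigma-level conversion.
# Assumes normality with all defects in one tail; the 1.5 shift is the
# convention under discussion in this thread, not a universal constant.
from statistics import NormalDist


def sigma_level(dpmo: float, shift: float = 1.5) -> float:
    """Long-term DPMO to short-term sigma level under the 1.5-shift convention."""
    return NormalDist().inv_cdf(1.0 - dpmo / 1_000_000.0) + shift


if __name__ == "__main__":
    # The canonical figure: 3.4 DPMO corresponds to "six sigma".
    print(round(sigma_level(3.4), 1))  # 6.0
```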



    I thought that confidence intervals based on sample size, rather than a constant shift, were used to address sampling error.
    For example, if you want to be 95% sure that the “real” Cpk is at least 1.5, you need a calculated Cpk of 1.81 if your sample size is 50, 1.71 if your sample size is 100, or 1.62 if your sample size is 200.
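The sample-size-dependent adjustment described above can be sketched with a common normal-approximation lower confidence bound for Cpk. The poster does not say which formula produced the quoted figures, so this is one plausible reconstruction; its results land close to, but not exactly on, those numbers.

```python
# Hedged sketch: approximate one-sided lower confidence bound for Cpk,
# Cpk_L = Cpk_hat - z * sqrt(1/(9n) + Cpk_hat^2 / (2(n-1))).
# The exact formula behind the poster's figures is an assumption here.
from math import sqrt
from statistics import NormalDist


def cpk_lower_bound(cpk_hat: float, n: int, conf: float = 0.95) -> float:
    """Approximate one-sided lower confidence bound for an estimated Cpk."""
    z = NormalDist().inv_cdf(conf)
    se = sqrt(1.0 / (9.0 * n) + cpk_hat ** 2 / (2.0 * (n - 1)))
    return cpk_hat - z * se


if __name__ == "__main__":
    # Check the posted examples: each observed Cpk should give a 95%
    # lower bound in the neighborhood of 1.5.
    for n, cpk in [(50, 1.81), (100, 1.71), (200, 1.62)]:
        print(n, round(cpk_lower_bound(cpk, n), 2))
```

Under this approximation the bounds come out within a few hundredths of 1.5 for all three sample sizes, consistent with the poster's point that the required observed Cpk shrinks toward the target as n grows.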


    Rich Schroeder

    Stop all this needless pontificating about things you have never really done.
    Go back to playing cowboy or I’ll start telling people that I made you.


The forum ‘General’ is closed to new topics and replies.