1.5 sigma shift
Six Sigma – iSixSigma › Forums › Old Forums › General › 1.5 sigma shift
 This topic has 4 replies, 5 voices, and was last updated 18 years, 10 months ago by Rich Schroeder.


November 21, 2002 at 7:21 pm #30852
Mike Harry (Participant)
It is very interesting and enlightening to read all of the fine debate (both positive and negative) surrounding the 1.5 sigma shift, as such discussion well serves the need to “keep the idea alive,” so to speak. In an effort to address the posted concerns and issues, I have recently completed a small booklet on this subject (to be published shortly after year’s end). This particular booklet sets forth the theoretical constructs and statistical equations that undergird and validate the so-called “shift factor” commonly referenced in the quality literature.
The booklet is entitled: “Demystifying the 1.5 Sigma Shift — Supporting Engineering and Statistical Rationale.” It should be recognized that this particular booklet has been prepared from a design engineering perspective. Owing to this, it can fully support many of the aims associated with design-for-six-sigma (DFSS) as well as processing-for-six-sigma (PFSS). Although the booklet is skewed towards design engineers, it provides a methodology for risk analysis that would interest most producibility engineers. In addition, the booklet is also intended for quality professionals and process engineers who are responsible for the qualification of a design or process.
As the booklet mathematically demonstrates, the “1.5 sigma shift” can be attributed solely to the influence of random sampling error. In this context, the 1.5 sigma shift is a statistically based correction for scientifically compensating or otherwise adjusting the postulated model of instantaneous reproducibility for the inevitable consequences associated with dynamic random sampling error. Naturally, such an adjustment (1.5 sigma shift) is only considered and instituted at the opportunity level of a product configuration. Thus, the model performance distribution of a given critical performance characteristic can be effectively attenuated for many of the operational uncertainties associated with a design process qualification (DPQ).
Based on this quasi-definition, it should be fairly evident that the 1.5 sigma shift factor can be treated as a statistical constant, but only under certain typical engineering conditions. By all means, the shift factor (1.5 sigma) does not constitute a literal “shift” in the mean of a performance distribution, as many quality practitioners and process engineers falsely believe or try to postulate through uninformed speculation and conjecture. However, its judicious application during the course of engineering a system, product, service, event, or activity can greatly facilitate the analysis and optimization of configuration repeatability.
By consistently applying the 1.5 sigma shift factor (during the course of product configuration), an engineer can meaningfully design in the statistical confidence necessary to ensure that related performance safety margins are not violated by unknown (but anticipated) process variations. Also of interest, its existence and conscientious application have many pragmatic implications (and benefits) for reliability engineering. Furthermore, it can be used to normalize certain types and forms of benchmarking data in the interests of assuring a level playing field when comparing heterogeneous products, services, and processes.
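The booklet's actual derivation is not reproduced in this thread, so the following is not Harry's math. It is only a minimal standard-library sketch (the function names are illustrative) of the underlying statistical point: a short-term sigma estimated from a finite sample can understate the true sigma, and the chi-square distribution quantifies by how much, eroding an apparent safety margin expressed in sigma units.

```python
from statistics import NormalDist

def chi2_ppf(p: float, k: int) -> float:
    """Chi-square quantile via the Wilson-Hilferty approximation
    (accurate to roughly two decimals for the k used here)."""
    z = NormalDist().inv_cdf(p)
    c = 2.0 / (9.0 * k)
    return k * (1.0 - c + z * c ** 0.5) ** 3

def sigma_inflation_factor(n: int, alpha: float = 0.05) -> float:
    """Upper (1 - alpha) confidence ratio of the true sigma to a sigma
    estimated from n samples, from (n-1)*s^2/sigma^2 ~ chi-square(n-1)."""
    return ((n - 1) / chi2_ppf(alpha, n - 1)) ** 0.5

# Sampling error alone can make the true sigma substantially larger than
# the short-term estimate, shrinking a margin stated in sigma units:
for n in (10, 30, 100):
    f = sigma_inflation_factor(n)
    print(f"n={n:3d}: true sigma may be {f:.2f}x the estimate; "
          f"a 4.5-sigma margin could really be {4.5 / f:.2f} sigma")
```

Whether this worst-case sampling error works out to exactly 1.5 sigma under "typical engineering conditions" is precisely the claim the booklet would have to establish.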
In summary, the 1.5 sigma shift factor should only be viewed as a mathematical construct of a theoretical nature. When treated as a statistical constant, its origin can be mathematically derived as an equivalent statistical quantity representing the worst-case error inherent to an estimate of short-term process capability. Hence, the shift factor is merely an algebraic byproduct of the chi-square distribution. Its general application is fully constrained to engineering analyses, especially those that are dependent upon process capability data. Perhaps even more importantly, it is employed to establish a “criterion” short-term standard deviation — prior to executing a six sigma design-process qualification (DPQ).

November 22, 2002 at 1:58 pm #80965
Robert Butler (Participant)
If you are not aware of the Davis Bothe article on the same subject in Quality Engineering 14(3), 479-487 (2002), you may want to read it before you go to print with your booklet. Based on your description of what you have done, it sounds like the two of you have done the same thing: worked on the theory that would justify 1.5.
November 22, 2002 at 3:15 pm #80974
Theoretical justification is all well and good, but you still won’t be able to estimate the z-scores associated with a dpmo probability without assuming the z-scores are equidistant from the mean.
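The conventional conversion being criticized can be sketched with the standard library. `sigma_level` is an illustrative name, and the one-tail-plus-1.5-shift convention it encodes is exactly the simplification in question, not a fact about any real process:

```python
from statistics import NormalDist

def sigma_level(dpmo: float) -> float:
    """Conventional 'sigma level' from defects per million opportunities.

    Assumes all defects fall in ONE tail of a normal distribution (the
    'z-scores equidistant from the mean' simplification) and adds the
    conventional 1.5-sigma shift."""
    z_long_term = NormalDist().inv_cdf(1 - dpmo / 1_000_000)
    return z_long_term + 1.5

print(sigma_level(3.4))      # the canonical six sigma level, ~6.0
print(sigma_level(66_807))   # ~3.0
```

For a process whose mean is not centered between the specification limits, the two tails contribute unequal defect rates, so a single dpmo figure no longer maps back to one z-score; that is the poster's objection.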
The formula to calculate sigma level from a known dpmo level is not possible without assuming z-scores equidistant from the mean, which is definitely not always the case.

November 22, 2002 at 8:31 pm #80988
Gabriel (Participant)
I thought that confidence intervals based on sample size, rather than a constant shift, were used to address sampling error.
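One common normal-approximation lower confidence bound for Cpk (several variants exist in the literature, so figures can differ slightly in the last decimal) approximately reproduces the numbers quoted in this post; `cpk_lower_bound` is an illustrative name:

```python
from statistics import NormalDist

def cpk_lower_bound(cpk_hat: float, n: int, confidence: float = 0.95) -> float:
    """Approximate one-sided lower confidence bound for a Cpk estimated
    from a sample of n parts (normal approximation)."""
    z = NormalDist().inv_cdf(confidence)
    return cpk_hat - z * (cpk_hat ** 2 / (2 * (n - 1)) + 1 / (9 * n)) ** 0.5

# Each observed Cpk below yields a 95% lower bound close to 1.5:
for cpk_hat, n in ((1.81, 50), (1.71, 100), (1.62, 200)):
    print(f"n={n:3d}, observed Cpk={cpk_hat}: "
          f"95% lower bound {cpk_lower_bound(cpk_hat, n):.2f}")
```

Because the bound tightens as n grows, the required observed Cpk shrinks toward the target with larger samples, which is the sample-size dependence the constant 1.5 shift ignores.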
For example, if you want to be 95% sure that the “real” Cpk is at least 1.5, you need a calculated Cpk of 1.81 if your sample size is 50, 1.71 if your sample size is 100, or 1.62 if your sample size is 200.

November 23, 2002 at 4:10 am #80992
Rich Schroeder (Member)
Mike,
Stop all this needless pontificating about things you have never really done.
Go back to playing cowboy or I’ll start telling people that I made you.
Love,
Rich
The forum ‘General’ is closed to new topics and replies.