Trying to Explain 1 1/2 Sigma Shift
This topic contains 14 replies, has 8 voices, and was last updated by Mike Bonnice 1 month, 1 week ago.
I am teaching a Lean Six Sigma class at the Green Belt level, and I am getting ready to discuss general statistics as part of the Measure phase of the DMAIC methodology. In reviewing the literature I am still a bit confused about how to explain the 1 1/2 sigma shift. I understand the shift represents long-term variability and is composed of both common cause and special cause variation. The shift Motorola observed over time suggests that, during the design stage, processes need to be robust enough to allow for this level of variation. OK, I get that.

However, when I talk about a normal distribution as a model for, say, making a widget, I usually say the distribution is defined by ±3 standard deviations. If you have upper and lower specifications set at ±6 standard deviations, then you will see escapes on the order of parts per billion if the process is in statistical control. When we apply the 1 1/2 sigma shift, we find that in the same situation the escapes associated with the process are 3.4 ppm. I think I have that correct.

Here is the problem. Is that a ±1 1/2 sigma shift (two tail), so that it is 1.7 ppm on either side? And isn't the process in that situation out of control, given that it includes special causes? The literature shows all kinds of pictures, none of which make any sense to me. Do I show that the process long term is really ±4 1/2 sigma wide rather than ±3 sigma wide? I could use some assistance here.
Thanks,
Manny
Look at a normal table. You’ll see 3.4 ppm at Z = 4.5.
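For anyone without a printed table handy, the figure is easy to reproduce. A minimal sketch, using only the standard library (the one-sided framing below is the usual convention behind the 3.4 ppm number):

```python
# Hedged sketch: reproduce the 3.4 ppm figure from the standard normal
# tail using only the math module. The one-sided tail area beyond Z is
# 0.5 * erfc(Z / sqrt(2)).
import math

def tail_ppm(z: float) -> float:
    """Parts per million beyond +z standard deviations (one tail)."""
    return 0.5 * math.erfc(z / math.sqrt(2)) * 1e6

# With a +/-6 sigma spec and the mean shifted 1.5 sigma toward one limit,
# only 6 - 1.5 = 4.5 sigma remains to that limit:
print(f"near tail: {tail_ppm(4.5):.2f} ppm")   # ~3.40 ppm
print(f"far tail:  {tail_ppm(7.5):.2g} ppm")   # negligible
```

Note that essentially all 3.4 ppm sits on the tail the mean shifted toward; the far spec limit, now 7.5 sigma away, contributes effectively nothing. That is the answer to the one-tail-or-two question in the original post.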
Forget the table. Let’s talk distributions. Are you saying that, long term, one is looking at a distribution that is ±4.5 standard deviations wide? And if that is the case, are the 3.4 defects half on the upper side and half on the lower side, assuming the process is centered? Looking at a table does not explain things.
Manny – The 1.5 sigma shift is a myth. It is something dreamed up to capture the fact that, over time, processes shift and drift more than they do in the short term. There has never been a solid proof that they do so by exactly 1.5 standard deviations. That said, it has become a common “adjustment” for estimating long-term process variation from short-term data. If you are a good Six Sigma practitioner, you should immediately ask for a definition of “short” vs. “long” term. And there’s the rub: for any process that is ongoing, there is always a “longer” term. If you capture the measurements of a process variable for one year, and that process continues on, then some unobserved source of variation may impact the process now that didn’t impact it before, and thus add more variation.
The way that I teach this is to “demystify” the 1.5 sigma shift by identifying it as a “reasonable” estimate of future process variation unaccounted for in current data. If there is a way to put controls in place that reduce or negate that future variation, then you can reduce this estimated future impact. Or you can rely strictly on short-term statistics – Cpk instead of Ppk. That is a safe bet, as it counts only the observed variation. You will also want to ensure good process control measures so that if some unaccounted-for variation affects the process, it does not impact the process for long.
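To make the Cpk-versus-Ppk distinction concrete, here is a minimal sketch with made-up subgroup data and hypothetical spec limits (LSL = 98, USL = 103). The pooled within-subgroup estimate stands in for short-term sigma; the ordinary standard deviation of everything stands in for long-term sigma:

```python
# Hedged sketch: short-term (Cpk) vs long-term (Ppk) capability from
# subgrouped data. All numbers here are invented for illustration.
import statistics as st

LSL, USL = 98.0, 103.0

# Five subgroups of five measurements each; imagine each subgroup was
# collected over a short interval, with the subgroups spread out over a
# longer period so the mean can drift between them.
subgroups = [
    [99.8, 100.1, 100.3, 99.9, 100.0],
    [101.2, 101.0, 100.8, 101.1, 100.9],
    [99.1, 99.3, 98.9, 99.0, 99.2],
    [100.5, 100.7, 100.4, 100.6, 100.5],
    [101.8, 102.0, 101.9, 101.7, 102.1],
]
all_data = [x for g in subgroups for x in g]
mean = st.mean(all_data)

# Short-term sigma: pool the within-subgroup variances, so drift
# between subgroups is deliberately excluded.
sigma_st = st.mean([st.variance(g) for g in subgroups]) ** 0.5

# Long-term sigma: standard deviation of everything, which absorbs the
# between-subgroup drift as well.
sigma_lt = st.stdev(all_data)

cpk = min(USL - mean, mean - LSL) / (3 * sigma_st)
ppk = min(USL - mean, mean - LSL) / (3 * sigma_lt)
print(f"sigma_st={sigma_st:.3f}  sigma_lt={sigma_lt:.3f}")
print(f"Cpk={cpk:.2f}  Ppk={ppk:.2f}  (Ppk < Cpk when the mean drifts)")
```

With data like this the short-term index looks excellent while the long-term index does not, which is exactly the gap the 1.5 shift tries to estimate in advance.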
Just my humble opinion.
The clearest explanation I’ve read is in the Expert Answers section of Quality Progress magazine, January 2017. If you know someone who’s an ASQ member you might ask to borrow their copy, or you might find it in a library. The long-term drift is real, and common sense: if you take two different sample data sets, they will almost always have different means. Is 1.5 the right value to adjust for this? The actual drift will almost always be somewhat more or less than that. But there is a statistical basis for using 1.5.
@Straydog – sorry, but you cannot find a mathematical proof for a 1.5 sigma shift. It is a practical approximation, but not a rigorously provable mathematical one.
The way that I have explained the 1.5 sigma shift in the past is to call it a tolerance for the statistical center of the process. The objective is to provide a statistical threshold for action: a point at which to introduce measures to tighten process controls so that the UCL and the LCL still remain within the customer specifications.
Watch the width of the process curve, watch the center of the curve, then set a point for recalculating the nominal center of the curve in order to see if the process is actually drifting.
Bringing it back to a business requirement, these are all reasonable things to do to make sure you are not spending resources unnecessarily responding to things that are not real problems, while continuing to meet the customer’s requirements.
@ronniewdoyle – so, essentially, you’ve established a guard-band of 1.5 sigma.
Way, way back I posted the following summary after reading a very long bit of back and forth discussion on this forum. The short version is as MBBinWI noted – nothing much to see.
My first encounter with 1.5 was not a pleasant one. The company I was working for was supplying product to a customer who made frequent comments concerning the capability of our process and the probability of their long term projections of our impending failure to meet requirements. It was my responsibility to meet with their rep to iron out the differences. In the meeting I was introduced to the notion of the inevitability of a 1.5 shift and the predicted failings of my company. I freely admitted that I had never heard of the concept and I asked for details.
I was told the idea had been around for a very long time, there were countless published papers on the subject and the 1.5 shift was a fact which had been verified on thousands of separate occasions. It was strongly intimated that I was not only a poor analyst but also hopelessly out of touch with current best practices. After recovering from this verbal beating, I asked for references. In so many words my assailant admitted he didn’t happen to know of any offhand, but he assured me the information was readily available and all I had to do was search the web to find it.
A search of the statistical literature didn’t reveal a thing. While searching the web I found the isixsigma site and posts like the following:
Is There Any Empirical Evidence To The 1.5 Shift?
Posted on: Friday, 11th October 2002
First, a long time is years. Long-term variation should only be talked about for a process that has been observed over years, not months or weeks as some would like to think. The empirical evidence comes from Motorola. They studied processes that they had applied Six Sigma to years after the project ended, and that is when they noticed a 1.5 sigma shift, on average. I don’t know whether that is published anywhere. Anybody with a link?
The answers went pretty much like this:
Posted on: Friday, 11th October 2002
There is a great deal of discussion here regarding the assumed 1.5 sigma shift. Okay, deal with it statistically: someone somewhere made a lot of observations, and so here we are.
What does it mean to you as a six sigma practitioner? It means that in most cases you will overstate your actual results by 1.5 sigma in the long term.
Long term is not relevant in a continuous improvement culture. Strive to maintain and develop a Six Sigma continuous improvement culture and the question becomes moot.
The answers to the 1.5 sigma shift have been well documented in countless responses.
In short, the web was equally silent with respect to information and case studies. I found the lack of solid information about a subject which had been so vehemently promoted by our customer to be very distressing. Finally, courtesy of this site, one of the posters referenced the then recently published article by Davis Bothe, “Statistical Reason for the 1.5 Sigma Shift,” Quality Engineering 14(3), 479-487 (2002). I immediately got a copy and read it. The last part of the opening paragraph of Bothe’s paper went right to the heart of the problem then, and it appears to be central to the recent exchanges on this forum.
“When asked the reason for such an adjustment, Six Sigma advocates claim that it is necessary, but they offer only personal experiences and three dated empirical studies (2-4) as justification (two of these studies are 25 years old, the third is 50).”
The papers referenced by Bothe are the Bender, Evans, and Evans papers listed below. Bothe showed that certain conditions will give a process shift between 1.3 and 1.7. However, he was also very clear that it could be more, it could be less and that all of what he had to say was predicated on the assumption of a stable process variance. He concluded by asking, “What if sigma is also subjected to undetected increases and decreases? Further studies are needed to determine how these changes would affect estimates of outgoing quality.”
The post of 26 August (below) is the only listing of citations in support of the 1.5 shift I have seen in the public domain and it mirrors the antiquity of the papers exhumed by Bothe’s careful research. It also reinforces the impression that these few articles are really all there is as far as formal support for the 1.5 shift is concerned.
‘Z’ Short Term And Long Term
Posted on: Tuesday, 26th August 2003
….” Again, you really need to read the following resources:
Harry, M.J. and Prins, J. (1991). The Vision of Six Sigma: Mathematical Constructs Related to Process Centering. Motorola University Press, Motorola Inc., Schaumburg Illinois.
Harry, M.J. and Stewart, R. (1988). Six Sigma Mechanical Design Tolerancing. Publication Number 6s-2-10/88. Motorola University Press, Motorola Inc., Schaumburg Illinois.
Harry, M.J. and Lawson, R.J. (1988). Six Sigma Producibility Analysis and Process Characterization. Publication Number 6s-3-03/88. Motorola University Press, Motorola Inc., Schaumburg Illinois.
Bender, A. (1975).”Statistical Tolerancing as it Relates to Quality Control and the Designer.” Automotive Division Newsletter of ASQC.
Evans, David H., (1974). “Statistical Tolerancing: The State of the Art, Part I: Background,” Journal of Quality and Technology, 6 (4), pp. 188-195.
Evans, David H., (1975). “Statistical Tolerancing: The State of the Art, Part II: Methods for Estimating Moments,” Journal of Quality and Technology, 7 (1), pp. 1-12.
Evans, David H., (1975). “Statistical Tolerancing: The State of the Art, Part III: Shifts and Drifts,” Journal of Quality and Technology, 7 (2), pp. 72-76.”
If this is the case, then the hundreds of independent confirmations and dozens of papers claimed by that customer rep of a few years ago (and also claimed by my Black Belt instructors and other Six Sigma practitioners) are reduced to three old papers, a few articles written by one other individual, anecdotes, and the Bothe summary. For a concept that is central to so much Six Sigma rhetoric, this is unacceptable.
My experience in industry closely mirrors the many facets of the Bothe paper. I have worked on processes that exhibited long term drift which could probably be summarized and guarded against by invoking 1.5 (or something more or less) as a production protection factor. I have worked on processes where the combination of changes in mean and variance worked, over the long term (as in 3.5 years long term), in the opposite direction and I have worked on processes where careful implementation of standard SPC practices held the long term variation and mean shift to within limits much less than those predicted by the automatic invocation of 1.5.
Given the nature of the second law of thermodynamics this really isn’t too surprising. Contrary to commonly held belief, the second law of thermodynamics does not say that entropy is constantly increasing in all systems (yes, I know, I have textbooks that offer up this kind of sound bite science too). What it does say is that entropy is constantly increasing in a closed system. I have never worked on a closed system process.
So, in summary, long term shift/drift happens but the 1.5 factor is an assumption? And the assumed drift is irrelevant if not a distortion? I’ve long been on the side for not using the 1.5 shift but since it’s “doctrinal” that would call our calculated sigma into question.
@rbutler – thanks for doing more research into this than it deserves.
All: Use Cpk and process control charts and forget about the 1.5 sigma shift BS.
I have done plenty of capability studies that prove a difference in variation from short term to long term.
I would never advocate an absolute 1.5 sigma shift but I can tell you that I’ve seen plenty of evidence of shifts in the chemical industry (which is QUITE broad), the food industry, and even in widget production.
Lastly, consider this: if you don’t think procurement policies have an impact on process variation, you’d be shocked.
Don’t let it become doctrinal or BS. It’s a model for a real phenomenon. Like all models, it’s wrong but it’s useful.
The 1.5 shift is not a physical property of nature. It’s not derivable statistically. It’s a rule of thumb. If the short term capability is good, assume 1.5 and move on. If the short term capability is poor, don’t assume anything; the challenge is to change something to get a better match between design requirements and manufacturing processes.
Mikel Harry’s book, “The Vision of Six Sigma: A Roadmap for Breakthrough,” shows data from Asea Brown Boveri (ABB) indicating that the mean shift (in number of standard deviations) is small when the capability is poor and large when the capability is high. One particular graph includes 25 data points and shows a wide range of values for mean shift. In the neighborhood of three sigma capability (Cpk = 1), the value of Z shift for seven different data points was between zero and 2.6.
For high quality processes, whether the amount of shift is 1 or 1.5 or 2 sigma, the effect on yield is still in the low PPM range, and hence the practical effect of not knowing the shift and drift is negligible. Just assume a reasonable number to add some margin for long-term production variation to the design tolerancing.
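The point above is easy to check numerically. A hedged illustration, assuming only a ±6 sigma design width and the usual one-sided normal tail:

```python
# Hedged illustration: for a +/-6 sigma design, the one-sided escape
# rate stays in the low-ppm range whether the assumed mean shift is 1,
# 1.5, or 2 sigma. The 6-sigma design width is the only assumption.
import math

def tail_ppm(z):
    # one-sided standard normal tail, in parts per million
    return 0.5 * math.erfc(z / math.sqrt(2)) * 1e6

for shift in (1.0, 1.5, 2.0):
    z_near = 6.0 - shift    # remaining distance to the nearer spec limit
    print(f"shift of {shift} sigma -> about {tail_ppm(z_near):.2f} ppm")
```

The escape rate ranges from well under 1 ppm (1 sigma shift) to a few tens of ppm (2 sigma shift), so the practical conclusion is the same no matter which shift you assume.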
For processes having poor quality, the issue is not to know the correct amount of shift but to change the design or manufacturing to achieve better quality.
I teach people how to get their own data and do their own shift and drift analysis. Inevitably, once someone takes the initiative to do the study, they become more compelled to understand the nature of the long term variation and less inclined to care about the magnitude of the shift and drift. They see how the shift and drift number is so heavily dependent on the nature of the underlying processes, then they work toward improving processes.
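A do-it-yourself shift-and-drift study of the kind described above can be sketched with a simulation. Everything here is invented for illustration (the drift size, the noise level, the one-sided spec limit), so the observed shift will vary run to run and is not 1.5 by construction:

```python
# Hedged sketch of a shift-and-drift study: simulate a process whose
# mean wanders slowly week to week, then compare short-term and
# long-term Z to the spec limit. All parameters are illustrative.
import math
import random
import statistics as st

random.seed(7)
USL = 6.0                  # one-sided spec, in units of short-term sigma
weeks = 52
per_week = 30

weekly = []
mu = 0.0
for _ in range(weeks):
    mu += random.gauss(0, 0.15)   # slow random-walk drift of the mean
    weekly.append([random.gauss(mu, 1.0) for _ in range(per_week)])

data = [x for w in weekly for x in w]
sigma_st = math.sqrt(st.mean([st.variance(w) for w in weekly]))  # within-week
sigma_lt = st.stdev(data)                                        # overall
mean = st.mean(data)

z_st = (USL - mean) / sigma_st
z_lt = (USL - mean) / sigma_lt
print(f"Z short-term {z_st:.2f}, Z long-term {z_lt:.2f}, "
      f"observed shift {z_st - z_lt:.2f} sigma")
```

Replacing the simulated weeks with real subgrouped data gives the same calculation on an actual process, which is the study being described: once you see how the answer depends on your own process's drift, the exact magic number matters much less.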
Origins are Motorola (as stated with references above). I had gotten it from the grapevine (and it seems to entertain and satisfy most of my students) that a 2 sigma shift felt like too much and a 1 sigma shift felt like too little. Although one of the international LSS consulting groups I’ve worked with wrote a several page paper justifying the 1.5 value.
Much of science and statistics is not a definitive precise process, rather as Douglas Hubbard puts it “a reduction of uncertainty.” Although academic applications evaluate students for precise values, little is actually precise. Most of it is hypothesis supported by observations & experimentation, presented to a peer community for consensus and approval. Look at the origins of why the standard deviation equation has a “n-1” in the denominator.
Mikel Harry’s book that I referenced actually shows that the shift and drift is about 1.5 sigmas if you want to assume the mean moves, and that the ratio of long-term to short-term standard deviation is about 1.5 if you want to assume the variation grows while the mean remains constant. The insight for me was that this is a useful engineering tool for tolerancing: you can use 1.5 as an adder or a multiplier depending on what you know, and like most tools used for engineering estimates, it is crude but helpful. There’s no need to try to prove it from first principles.
I saw a paper that proved that the adder and the multiplier couldn’t both be 1.5 except at a single Cpk condition. It seemed like an interesting proof that the author understood the mathematics of the Z table, but it had no relevance in the practical universe of design.
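The adder-versus-multiplier comparison is straightforward to work through numerically. A hedged sketch (the Z values and the framing are illustrative, not taken from the referenced paper):

```python
# Hedged sketch: 1.5 used as an adder (the mean shifts 1.5 sigma toward
# a limit) versus as a multiplier (the standard deviation grows by
# 1.5x). Long-term ppm is computed under each model for a few
# short-term Z values.
import math

def tail_ppm(z):
    # one-sided standard normal tail, in parts per million
    return 0.5 * math.erfc(z / math.sqrt(2)) * 1e6

results = {}
for z_st in (3.0, 4.0, 4.5, 6.0):        # short-term Z to the nearer limit
    adder = tail_ppm(z_st - 1.5)          # mean-shift model
    multiplier = tail_ppm(z_st / 1.5)     # variance-inflation model
    results[z_st] = (adder, multiplier)
    print(f"Z={z_st}: adder {adder:,.1f} ppm   multiplier {multiplier:,.1f} ppm")
```

The two models agree only at Z = 4.5 (where z - 1.5 = z / 1.5 = 3), which is consistent with the observation that the adder and the multiplier can’t both be 1.5 except at a single capability condition.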