Why “Zst = Zlt + 1.5” instead of “Zlt = Zst + 1.5”? Usually we use current data, sampling data, to predict the future.
I believe your equation is in error. The correct formulation would be: Z.st = Z.lt + 1.5 and Z.lt = Z.st – 1.5.
I mean, why “Zst = Zlt + 1.5” and not “Zlt = Zst + 1.5”? I feel it makes more sense to use short-term data to predict long-term behaviour. Thanks.
We add Z.shift to Z.lt so as to estimate Z.st. Note that Z.lt
is a long-term estimate of capability. This means that the
influence of process centering error is part of the long-
term estimate. So we add the 1.5 shift factor to Z.lt as a
way to artificially remove (or reduce) the influence of
process centering error and get a better picture of the
inherent short-term capability. Please also note that the
1.5 sigma shift is just a “compensatory measure” that
should only be used when the true effect is not known or
would be too costly or inconvenient to find out. Conversely,
we subtract Z.shift from Z.st to get Z.lt. By subtracting 1.5
we artificially inject the influence of long-term process
centering error into our estimate of capability. Again, the
true value of the shift should be used if it is known. If it is
not known, then use 1.5 as a “guiding principle.”
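The two directions of the shift described above amount to simple arithmetic. A minimal sketch (function names are mine; the 1.5 default is the conventional “compensatory measure” discussed in this thread, not a measured quantity):

```python
# Convert between long-term and short-term sigma levels using a shift factor.
# The default of 1.5 is the conventional value; if the true shift for the
# process is known, pass it in instead.

def z_short_term(z_lt: float, z_shift: float = 1.5) -> float:
    """Z.st = Z.lt + shift: removes long-term centering error from the estimate."""
    return z_lt + z_shift

def z_long_term(z_st: float, z_shift: float = 1.5) -> float:
    """Z.lt = Z.st - shift: injects long-term centering error into the estimate."""
    return z_st - z_shift

print(z_short_term(3.0))  # 4.5
print(z_long_term(4.5))   # 3.0
```

The two functions are exact inverses of each other, which is why the thread's two equations (Zst = Zlt + 1.5 and Zlt = Zst – 1.5) are the same statement rearranged.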
You may want to pose this question to Dr. Harry on his Q&A forum on this website. He knows more about the 1.5 sigma shift phenomenon than anyone on the planet.
I am not an expert on this, but from what I’ve seen on this web site (forum, articles, etc.) I understand the following:
1) Variation always exists.
2) Furthermore, special causes of variation always exist, even when the variation due to special causes is too small to be detected, for example, as out-of-control signals in a control chart.
3) That, plus the fact that things left on their own do not improve but worsen, makes us expect more variation in the long term than in the short term.
4) When we are working on a project, we usually do not have time to wait for the long term to show results. So the standard is to report the short-term sigma level (Zst).
5) Remember that a higher Z means better performance. As we expect a worse result in the long term, we subtract a “worsening” value from Zst to estimate Zlt. So Zlt = Zst – (worsening value). It is fairly well accepted that, in the absence of better data, 1.5 is a good worsening value to start with. Some people hold that 1.5 is always close enough to reality that it is not worth looking for a better value in each case, but that is not so widely accepted. If you accept the 1.5, then you get Zlt = Zst – 1.5.
6) Then, mathematically, Zst = Zlt + 1.5. Now, as I understand it, the important figure is Zlt. The only reason to calculate and report Zst instead is the time involved in long-term studies. But sometimes we actually have enough history to calculate Zlt (and sometimes Zst too) without any assumptions about the shift. So why is the sigma shift used to “reverse” the approach and estimate a Zst based on Zlt and an assumed shift, instead of just reporting Zlt? I don’t know. I could not find the answer to that question yet. Maybe I’ll go and ask Harry.
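The count-defects route sketched in the points above (count defects and opportunities, find how many sigmas a normal mean must sit from a one-sided limit to produce that defect rate, then add the shift) can be illustrated with Python’s standard library. This is my own hypothetical sketch of the usual conversion, not a formula from this thread:

```python
from statistics import NormalDist

def sigma_level_long_term(defects: int, opportunities: int) -> float:
    """Zlt: how many sigmas a normal distribution's mean must lie from a
    unilateral specification limit to yield the observed defects-per-
    opportunity rate in the upper tail."""
    dpo = defects / opportunities
    return NormalDist().inv_cdf(1 - dpo)

def sigma_level_short_term(defects: int, opportunities: int,
                           z_shift: float = 1.5) -> float:
    """Zst = Zlt + assumed shift (conventionally 1.5)."""
    return sigma_level_long_term(defects, opportunities) + z_shift

# The well-known 3.4 defects per million opportunities:
zlt = sigma_level_long_term(34, 10_000_000)   # about 4.5
zst = sigma_level_short_term(34, 10_000_000)  # about 6.0
```

This also shows where the famous “six sigma = 3.4 DPMO” figure comes from: 3.4 per million corresponds to a long-term level of about 4.5, which becomes 6.0 only after the assumed 1.5 shift is added back.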
Gabriel: I would like to commend your answer. It was very nicely stated. I would rather imagine that Dr. Harry can answer your question. I have heard him explain it, but maybe you should ask him. My compliments, Reigle Stewart
Done. Now let’s see what happens. It was hard work to shrink the question to 1,000 characters or less. I hope it’s still sound. It went like this:
Dear Dr. Harry: In the iSixSigma forum, Reigle Stewart did a nice job showing the validity of the 1.5 sigma shift in a DPQ context for specific samples, but told me it would be better to ask you the following: 1- How do we get from that to counting defects, counting opportunities, finding how many sigmas away a normal distribution’s average should be from a unilateral specification limit to produce as many defectives per unit as defects per opportunity were found, calling that the “long-term sigma level,” and adding 1.5 to get the “short-term sigma level,” regardless of the sample size, the process distribution, and how it will be controlled? 2- As I understand it, the important figure is Zlt. We calculate and report Zst instead due to the long time involved in long-term studies. But sometimes we have enough history to calculate Zlt without assumptions about the shift. Why is the sigma shift used to “reverse” the approach and estimate a Zst based on Zlt and an assumed shift, instead of just reporting Zlt? Thanks for your help.
Reigle, thanks for your commendation. Dr. Harry’s answer came pretty fast (in fact, it was the first answer after I posted the question).
However, I am really disappointed with the content of the answer because:
1) He used too many hypotheses that need to be met, are typically not met, and were specifically excluded in my question. Quote: “…Let us postulate a continuous performance variable … it is a naturally continuous random normal variable … it has been assigned a bilateral performance specification … its production volume is relatively low … all pursuant testing is executed on a pass-fail basis.” It is a known good practice in science to use as few assumptions as possible.
2) Even with all those assumptions, the scientific and factual explanation supported by data is: “In this case, we seek to set Z.shift at the convenient and conventional value of 1.5.” And, the best part, his explanation ends with the incredible “The reader is admonished to recognize this authors repeated use of the word approximation. In this context, Z.shift is an engineering approximation, not a statistical estimate. This is a difference that seems to escape many practitioners of Six Sigma.” (sic) It seems that Stan was right after all. There is no data to support the 1.5 after all (except, maybe, for some specific cases where the process is not known, like in DPQ).
3) He insisted on showing how to “shift” Zlt to convert it to Zst when, apparently, the idea of the shift was to account for the long-term variation that cannot be measured with Zst, and to use the shift to estimate Zlt. But he never answered why we would reverse the approach when we already have Zlt.
If my understanding of the 1.5 sigma shift was in twilight, now it is in darkness. Surely, I learnt nothing new. That’s not a good thing, especially when the question was answered by the master. My first feeling after reading the answer was that Dr. Harry intentionally avoided answering the two questions, and found an “elegant” and “scientific-looking” way to do it. But since there is no data to support that, I will not support that first feeling. After all, if he didn’t want to answer the question he could have given no answer at all.
Now, Reigle. You said that you knew the answers to my questions but that you thought it would be better to go to the source and ask Dr. Harry. Since that plan did not work, can we return to the original plan? Can you answer the two questions I originally asked you? (See the previous post in this thread for the questions.)
Thanks for your help.
Once again you are my hero.
We all need to be able to look through the cloud of fancy words and Dr. Harry’s assumption that if he uses enough big-sounding concepts, we will be afraid to say we don’t understand (it works great with executives).
You called him a Master. He is a master in the same way P. T. Barnum is a master.
Phineas Taylor Barnum (1810–1891) is one of the most colorful and well-known personalities in American history, a consummate showman and entrepreneur.
And remember P.T.’s most famous quote – “there’s a sucker born every minute”.