
Sigma calculation


    #53232

    Hans 't Jong
    Participant

    For a set of data (361 units, 1 defect) I have calculated Cpk and the sigma level (Z.bench, short term) with Minitab and also with the iSixSigma Sigma Calculator. Of course, in Minitab all 361 individual measurements and the LSL and USL are present.
    The result from the iSixSigma calculator (1.5 sigma shift for short term) is a Process Sigma of 4.274.
    On the other hand, Minitab calculates a Z.bench of 2.52 (Z.LSL is 2.52 and Z.USL is 4.75) with a Cpk of 0.84.
    Until now I have not been able to explain this difference. Perhaps someone can help me with this? Most likely it’s something obvious….
    Hans
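
    A minimal sketch, assuming Python with SciPy, of the attribute-based calculation behind the 4.27 figure (361 units, 1 defect); the variable names are mine:

    from scipy.stats import norm

    units, defects = 361, 1
    p_defective = defects / units              # observed proportion defective
    dpmo = p_defective * 1_000_000             # roughly 2770 DPMO

    z_long_term = norm.ppf(1 - p_defective)    # Excel's NORMSINV(1 - 1/361), about 2.77
    process_sigma = z_long_term + 1.5          # conventional short-term shift, about 4.27
    print(round(dpmo), round(z_long_term, 2), round(process_sigma, 2))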

    #189124

    MBBinWI
    Participant

    Cpk => uses the closest spec limit and the defects beyond it (disregarding any defects outside the other limit, if one exists).
    Z.bench => gives the number of standard deviations that would correspond to having ALL of the defects in one tail.
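
    A hedged illustration of that Z.bench idea, assuming Python with SciPy and reusing the Z.LSL and Z.USL values quoted in the original post:

    from scipy.stats import norm

    z_lsl, z_usl = 2.52, 4.75                   # values Minitab reported
    p_total = norm.sf(z_lsl) + norm.sf(z_usl)   # expected fraction defective beyond both limits
    z_bench = norm.isf(p_total)                 # the single z that puts that whole fraction in one tail
    print(round(z_bench, 2))                    # about 2.52; the upper tail contributes almost nothing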

    #189126

    Mikel
    Member

    What command did you use in Minitab? The data from Minitab makes no sense.
    Given all the stupid assumptions of the 1.5 shift, the iSixSigma calculator is correct (it’s just a simple Excel calculation using NORMSINV).
    Kind of makes you wonder why someone ever thought anything more than reporting a p-value was of value.

    #189147

    Bower Chiel
    Participant

    Hi Hans
    A very interesting question!
    The Sigma Calculator gives you 4.27 as an estimate of the Process Sigma. Using the formula Process Sigma = Z.bench + 1.5, you get 2.52 + 1.5 = 4.02 as another estimate. The answers differ because in the first case the estimate is based on the actual observed proportion of defective items, whereas in the second case it is based on the proportion of defective items predicted by a normal distribution fitted to the data.
    Best Wishes
    Bower Chiel

    #189148

    Mikel
    Member

    What the heck is z-bench?

    #189150

    Bower Chiel
    Participant

    Hi Stan
    With Capability Analysis > Normal in Minitab, under Options… one can select Benchmark Z’s (sigma level). You then get the Z.bench, Z.LSL and Z.USL output that Hans was referring to in the original post. If you drill down to Methods and Formulas in the Help you will get all the gory details. Whatever one’s view on all this stuff, I must say that I like the fact that this option provides confidence intervals for Cpk.
    Best Wishes
    Bower Chiel

    #189151

    Mikel
    Member

    I am glad you are happy about confidence intervals. Z.bench is meaningless.

    #189152

    Mikel
    Member

    The easy difference is that you are using attribute data for the calculator and variable data in Minitab. Variable data is clearly superior.
    Z-bench is still a stupid metric. Who cares what would happen if all the defects were in one tail? They aren’t. The observed and expected performance boxes at the bottom of the capability analysis tell you what you need to know.
    Bower, the confidence limits are there without the z nonsense.

    #189164

    Hans 't Jong
    Participant

    Hi Stan,
    Thanks for your replies. The main results for me are the Cp and Cpk values. Only some people within the organization want to know the sigma level (= Z.bench).
    That is why I was calculating and comparing these values with the iSixSigma calculator and Minitab. The difference (or gap) between the outcomes (4.27 versus 2.52) is large. Is this difference caused only by attribute versus variable data? Minitab calculates Z.bench for the lowest Cpk (one tail). Besides this it uses the within standard deviation, which is calculated from the average moving range.
    The iSixSigma calculator uses NORMSINV for a DPMO of 2770, so NORMSINV(1 - 1/361) = 2.77, plus 1.5 gives 4.27.
    Could you give a further explanation, please? I would like to understand.
    Thanks in advance.
    Hans

    #189166

    Mikel
    Member

    Hans,
    Your understanding of z-bench is not correct. It is not for the lowest Cpk. It is a statistical distortion at best. What z-bench does is look at the defects beyond both tails and then treat them as if they are all in one tail. Perhaps useful as a demonstration to help people understand in a class (maybe), but as a reported metric? Pretty stupid.
    What you have missed is that the 2.52 would have 1.5 added to it by the Six Sigma zombies, because of a fable told 20+ years ago that someone had data to prove it. They did not, and it’s stupid.
    So your real difference is 2.52 vs. 2.77, or 4.02 vs. 4.27, which is well within confidence limits.
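
    To put that comparison on a like-for-like basis (plain arithmetic, using only numbers already quoted in the thread):

    z_from_observed_defects = 2.77   # NORMSINV(1 - 1/361), the attribute-data path
    z_bench_from_fit = 2.52          # Minitab's Z.bench from the fitted normal
    shift = 1.5                      # the conventional short-term adjustment
    print(z_from_observed_defects + shift, z_bench_from_fit + shift)   # 4.27 vs. 4.02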

    #189173

    Jonathon Andell
    Participant

    Oddly enough, I find myself in full agreement with you, Stan. It’s disconcerting not to have even a minor quibble.
    I have been way beyond merely unimpressed with the whole “sigma shift” baloney ever since it came out. I spoke with Bill Smith once upon a time, and he seemed way too smart to miss the fact that “six sigma” statistically means 2 parts per billion. My personal theory is that Motorola brass liked the sound of the words “Six Sigma” but had no intention of signing up for 2 PPB, so people were ordered to mangle the definition so as to lower the standard. I’ll never be able to verify it, but it feels vastly more plausible than to think that Bill actually advocated such idiocy.
    The mess with how Minitab and others handle discrete data is but one of many ways that decision has screwed things up for people with a clue.
    The really scary part is how many otherwise bright people have invested significant amounts of time trying to justify and validate the nonsense.

    #189194

    Hans 't Jong
    Participant

    Hi Stan, thanks again.
    So if I understand you correctly, you do not use the 1.5 sigma shift. Then I suppose you calculate Cp and Cpk with the ordinary standard deviation (the traditional Excel one)?
    In Minitab that standard deviation is used to calculate the long-term (overall) Pp and Ppk. For Cp and Cpk, Minitab uses an estimated standard deviation that depends on the subgroup size (by default the average moving range for subgroup size = 1 and the pooled standard deviation for size > 1). The method of calculation can be changed.
    So in Minitab terms you are actually calculating Pp and Ppk? Or am I completely wrong?
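
    A sketch of that distinction, assuming Python with NumPy and individual measurements (subgroup size 1); the helper function is mine, and the constant 1.128 is the standard d2 value for a moving range of two observations:

    import numpy as np

    def cpk_ppk(data, lsl, usl):
        data = np.asarray(data, dtype=float)
        mean = data.mean()
        sigma_overall = data.std(ddof=1)                       # overall stdev, used for Pp/Ppk
        sigma_within = np.abs(np.diff(data)).mean() / 1.128    # average moving range / d2, used for Cp/Cpk
        cpk = min(usl - mean, mean - lsl) / (3 * sigma_within)
        ppk = min(usl - mean, mean - lsl) / (3 * sigma_overall)
        return cpk, ppk

    With stable, long-term data the two estimates converge; a large gap between them usually points at shifts or drifts that the moving-range estimate does not see.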

    #189195

    Mikel
    Member

    I use Minitab’s standard output for process capability, including Cpm.

    #189197

    Hans 't Jong
    Participant

    I suppose Cpm is only useful when there is a target value that is not centered between the lower and upper specification limits? Otherwise Cpk and Cpm will be nearly the same.
    If you use the standard results from Minitab, do you then in all cases use Pp and Ppk instead of Cp and Cpk?

    #189198

    Mikel
    Member

    You are not correct about Cpm and Cpk being the same if the target is centered.
    Use of the others depends on the nature and quality of my data.
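
    To illustrate with the usual Taguchi definition, Cpm = (USL - LSL) / (6 * sqrt(sigma^2 + (xbar - T)^2)); the numbers below are invented purely for illustration:

    import math

    def cpm(xbar, sigma, lsl, usl, target):
        return (usl - lsl) / (6 * math.sqrt(sigma**2 + (xbar - target)**2))

    def cpk(xbar, sigma, lsl, usl):
        return min(usl - xbar, xbar - lsl) / (3 * sigma)

    # x-bar equal to the target (which here also sits mid-spec): the two agree
    print(cpk(10.5, 0.5, 9, 12), cpm(10.5, 0.5, 9, 12, 10.5))   # 1.0 and 1.0
    # same process with an off-centre target: Cpm drops while Cpk is unchanged
    print(cpk(10.5, 0.5, 9, 12), cpm(10.5, 0.5, 9, 12, 11.0))   # 1.0 and about 0.71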

    #189202

    Mikel
    Member

    Let me elaborate on the quality and nature of the data. I did not have time before.
    The underlying assumption of Cp/Pp and Cpk/Ppk is that you have data gathered in rational subgroups over a long enough period of time to represent most of the variation that will be seen in the process. Some people call this long-term data. There is also an underlying assumption of normality if you are using the analysis called normal. If all of these conditions are true, use of all four is appropriate and informative.
    If the data is of ambiguous groupings, Cp/Cpk makes no sense.
    If the data does not represent many cycles of the process, Pp/Ppk makes no sense except directionally.
    If the data is truly a classical short-term study, Pp and Ppk are actually Cp and Cpk.
    In other words, know the source of your data and how it was gathered, and interpret accordingly.
    The only case where Cpm and Cpk are equal is where x-bar and the target are equal. All other cases are different.

