# Sigma calculation


- This topic has 15 replies, 5 voices, and was last updated 10 years, 6 months ago by Mikel.

- February 5, 2010 at 3:34 pm #53232

**Hans 't Jong** (Participant):

For a set of data (361 units, 1 defect) I have calculated Cpk and the Sigma level (Z.bench, short term) with Minitab and also with the iSixSigma Sigma Calculator. Of course, in Minitab all 361 individual measurements and the LSL and USL are present.

The result from the iSixSigma calculator (1.5 sigma shift for short term) is a Process Sigma of 4.274.

On the other hand, Minitab calculates a Z.bench of 2.52 (Z.LSL is 2.52 and Z.USL is 4.75) with a Cpk of 0.84.

Until now I cannot explain this difference. Perhaps someone can help me with this? Most likely it's something obvious.

Hans

- February 6, 2010 at 4:55 pm #189124

**MBBinWI** (Participant):

Cpk => uses the closest spec limit and the defects beyond it (disregarding the defects outside the other limit, if it exists).

Zbench => gives the number of standard deviations that would correspond to having ALL defects in one tail.
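A minimal sketch of the two quantities described above, assuming normally distributed data and hypothetical summary statistics (the thread never reports the mean or standard deviation); scipy's normal distribution stands in for the Excel/Minitab functions:

```python
from scipy.stats import norm

# Hypothetical summary statistics; the thread does not report these values.
mean, sd = 10.0, 1.2      # process mean and (within / short-term) standard deviation
lsl, usl = 7.0, 16.0      # lower and upper specification limits

# Cpk: distance from the mean to the *closest* spec limit, in units of 3*sd,
# ignoring whatever falls outside the other limit.
cpk = min(usl - mean, mean - lsl) / (3 * sd)

# Z.bench: add the predicted defect fractions beyond BOTH limits, then convert
# that total back into a single one-tailed z value.
p_below = norm.cdf((lsl - mean) / sd)
p_above = 1 - norm.cdf((usl - mean) / sd)
z_bench = norm.ppf(1 - (p_below + p_above))

print(f"Cpk = {cpk:.2f}, Z.bench = {z_bench:.2f}")
```

For reference, Z.LSL is simply 3 × Cpl and Z.USL is 3 × Cpu, which is why Hans's Cpk of 0.84 lines up with his Z.LSL of 2.52.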

- February 6, 2010 at 6:20 pm #189126

What command did you use in Minitab? The data from Minitab makes no sense.

Given all the stupid assumptions of the 1.5 shift, the iSixSigma calculator is correct (it's just a simple Excel calculation using NORMSINV).

Kind of makes you wonder why anyone ever thought anything more than reporting a p-value was of value.

- February 7, 2010 at 10:17 pm #189147

**Bower Chiel** (Participant):

Hi Hans

A very interesting question!

The Sigma Calculator gives you 4.27 as an estimate of the Process Sigma. Using the formula, Process Sigma = Zbench + 1.5, you get 2.52 + 1.5 = 4.02 as another estimate. The answers differ because in the first case the estimate is based on the actual observed proportion of defective items whereas in the second case the estimate is based on the predicted proportion of defective items from a normal distribution fitted to the data.

Best Wishes

Bower Chiel
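A minimal sketch of the two estimates compared above, using Python's scipy.stats.norm.ppf as a stand-in for Excel's NORMSINV; the 361 units, 1 defect, and Z.bench of 2.52 are the figures from the thread:

```python
from scipy.stats import norm

# Estimate 1: from the observed proportion defective, as the iSixSigma
# calculator does (NORMSINV), then add the conventional 1.5 sigma shift.
observed_dpo = 1 / 361
sigma_from_observed = norm.ppf(1 - observed_dpo) + 1.5   # about 4.27

# Estimate 2: from the proportion predicted by the fitted normal distribution,
# which Minitab summarises as Z.bench, plus the same 1.5 shift.
z_bench = 2.52                                            # value reported by Minitab
sigma_from_predicted = z_bench + 1.5                      # 4.02

print(f"{sigma_from_observed:.2f} vs {sigma_from_predicted:.2f}")
```

The gap between the two comes entirely from the difference between the observed defect rate (1/361, roughly 2770 DPMO) and the defect rate predicted by the fitted normal distribution.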

- February 7, 2010 at 10:31 pm #189148

What the heck is z-bench?

- February 7, 2010 at 10:48 pm #189150

**Bower Chiel** (Participant):

Hi Stan

With Capability Analysis > Normal in Minitab, under Options… one can select Benchmark Z's (sigma level). You then get the Zbench, Z.LSL and Z.USL output that Hans was referring to in the original post. If you drill down to Methods and Formulas in the Help you will get all the gory details. Whatever one's view on all this stuff, I must say that I like the fact that this option provides confidence intervals for Cpk.

Best Wishes

Bower Chiel

- February 7, 2010 at 11:36 pm #189151

I am glad you are happy about confidence intervals. Z-bench is meaningless.

- February 8, 2010 at 1:34 am #189152

The easy difference is that you are using attribute data for the calculator and variable data in Minitab. Variable data is clearly superior.

Z-bench is still a stupid metric. Who cares what would happen if all defects were in one tail? They aren't. The observed and expected performance boxes at the bottom of the capability analysis tell you what you need to know.

Bower, the confidence limits are there without the z nonsense.

- February 8, 2010 at 11:13 am #189164

**Hans 't Jong** (Participant):

Hi Stan,

Thanks for your replies. The main results for me are the Cp and Cpk values. It is only that some people within the organization want to know the sigma level (= Z.bench).

Therefore I was calculating and comparing these values with the iSixSigma calculator and Minitab. The difference (or gap) between the outcomes (4.27 versus 2.52) is high. Is this difference only caused by attribute versus variable data? Minitab calculates the Z.bench for the lowest Cpk (1-tail). Besides this, it uses st.dev.(within), which is calculated from the average moving range.

The iSixSigma calculator uses NORMSINV for a DPMO of 2770; so NORMSINV(1 - 1/361) = 2.77, plus 1.5 = 4.27.

Could you give a further explanation please? I would like to understand.

Thanks in advance.

Hans
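A minimal sketch of the "st.dev.(within) from the average moving range" estimate Hans mentions, using hypothetical measurements (the 361 actual values are not given) and the unbiasing constant d2 = 1.128 that applies to subgroup size 1:

```python
import statistics

# Hypothetical individual measurements standing in for the thread's 361 values.
data = [10.2, 9.8, 10.5, 10.1, 9.7, 10.3, 10.0, 9.9]

# Within (short-term) standard deviation for subgroup size 1:
# average moving range divided by the unbiasing constant d2 = 1.128.
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
sigma_within = statistics.mean(moving_ranges) / 1.128

# Overall (long-term) standard deviation: the ordinary sample standard deviation.
sigma_overall = statistics.stdev(data)

print(f"within = {sigma_within:.3f}, overall = {sigma_overall:.3f}")
```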

- February 8, 2010 at 12:06 pm #189166

Hans,

Your understanding of z-bench is not correct. It is not for the lowest Cpk. It is a statistical distortion at best. What z-bench does is look at the defects beyond both tails and then treat them as if they are all in one tail. Perhaps useful as a demonstration to help people understand in a class (maybe), but as a reported metric? Pretty stupid.

What you have missed is that the 2.52 would have 1.5 added to it by the Six Sigma zombies, because of a fable once told 20+ years ago that someone had data to prove it. They did not, and it's stupid.

So your real difference is 2.52 vs. 2.77, or 4.02 vs. 4.27, well within confidence limits.

- February 8, 2010 at 4:48 pm #189173

**Jonathon Andell** (Participant):

Oddly enough I find myself in full agreement with you, Stan. It's disconcerting not to have even a minor quibble.

I have been way beyond merely unimpressed with the whole "sigma shift" baloney ever since it came out. I spoke with Bill Smith once upon a time, and he seemed way too smart to miss the fact that "six sigma" statistically means 2 parts per billion. My personal theory is that Motorola brass liked the sound of the words "Six Sigma," but had no intention of signing up for 2 PPB, so people were ordered to mangle the definition so as to lower standards. I'll never be able to verify it, but it feels vastly more plausible than to think that Bill actually advocated such idiocy.

The mess with how Minitab and others handle discrete data is but one of many ways that decision has screwed things up for people with a clue.

The really scary part is how many otherwise bright people have invested significant amounts of time trying to justify and validate the nonsense.

- February 9, 2010 at 8:25 am #189194

**Hans 't Jong** (Participant):

Hi Stan, thanks again.

So if I understand you correctly, the 1.5 sigma shift is not used by you. Then I suppose you calculate Cp and Cpk with the stdev (traditional Excel)?

In Minitab this stdev is used to calculate the long term (overall) Pp and Ppk. For Cp and Cpk, depending on the subgroup size, Minitab uses an estimated stdev (by default the average moving range for subgroup size = 1, and the pooled standard deviation for size > 1). The calculation method can be changed.

So in terms of Minitab you are actually calculating Pp and Ppk? Or am I completely wrong?
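A minimal sketch of the within versus overall distinction Hans is asking about, with hypothetical numbers; Cp/Cpk use the within (short-term) sigma estimate, while Pp/Ppk use the overall (long-term) sample standard deviation:

```python
# Hypothetical numbers; sigma_within would come from the moving-range (or pooled)
# estimate, sigma_overall from the ordinary sample standard deviation.
mean, lsl, usl = 10.0, 8.5, 11.5
sigma_within, sigma_overall = 0.25, 0.40

def capability(sigma):
    """Potential (two-sided) and actual (closest-limit) index for a given sigma."""
    potential = (usl - lsl) / (6 * sigma)
    actual = min(usl - mean, mean - lsl) / (3 * sigma)
    return potential, actual

cp, cpk = capability(sigma_within)     # Cp / Cpk: within (short-term) sigma
pp, ppk = capability(sigma_overall)    # Pp / Ppk: overall (long-term) sigma
print(f"Cp={cp:.2f} Cpk={cpk:.2f}  Pp={pp:.2f} Ppk={ppk:.2f}")
```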

- February 9, 2010 at 9:03 am #189195

I use Minitab's standard output for process capability, including Cpm.

- February 9, 2010 at 10:12 am #189197

**Hans 't Jong** (Participant):

I suppose Cpm only applies when there is a target value that is not in the center between the lower and upper specification limits? Otherwise Cpk and Cpm will be nearly the same.

If you use standard results from Minitab, do you then use Pp and Ppk in all cases instead of Cp and Cpk?

- February 9, 2010 at 10:54 am #189198

You are not correct about Cpm and Cpk being the same if the target is centered.

Use of the others depends on the nature and quality of my data.

- February 9, 2010 at 12:33 pm #189202

Let me elaborate on the quality and nature of the data. I did not have time before.

The underlying assumption of Cp/Pp and Cpk/Ppk is that you have data gathered in rational subgroups over a long enough period of time to represent most of the variation that will be seen in the process. Some people call this long term data. There is also an underlying assumption of normality if you are using the one called normal. If all of these conditions are true, use of all four is appropriate and informative.

If the data is of ambiguous groupings, Cp/Cpk makes no sense. If the data does not represent many cycles of the process, Pp/Ppk makes no sense except directionally. If the data is truly a classical short term study, Pp and Ppk are actually Cp and Cpk.

In other words, know the source of your data and how it was gathered, and interpret accordingly.

The only case where Cpm and Cpk are equal is where x-bar and the target are equal. All other cases are different.
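A minimal sketch of the usual Taguchi-style Cpm alongside Cpk, with hypothetical numbers (Minitab's formula may differ in detail), illustrating the closing point: with a centred target, the two indices agree when x-bar sits on the target and diverge when it does not.

```python
import math

def cpk(mean, sigma, lsl, usl):
    # Actual capability: distance to the closest spec limit in units of 3*sigma.
    return min(usl - mean, mean - lsl) / (3 * sigma)

def cpm(mean, sigma, lsl, usl, target):
    # Taguchi-style index: penalises deviation of the mean from the target.
    return (usl - lsl) / (6 * math.sqrt(sigma**2 + (mean - target)**2))

# Mean on a centred target: the two indices agree (both 1.0 here).
print(cpk(10.0, 0.5, 8.5, 11.5), cpm(10.0, 0.5, 8.5, 11.5, target=10.0))

# Mean off target: they no longer agree.
print(cpk(10.3, 0.5, 8.5, 11.5), cpm(10.3, 0.5, 8.5, 11.5, target=10.0))
```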

The forum ‘General’ is closed to new topics and replies.