# Z-value

Viewing 51 posts - 1 through 51 (of 51 total)
• #39587

grover
Participant

How does the Z-value relate to Six Sigma? Thanks for your inputs!
Regards

#120723

Priya Sahu
Participant

The Z value is the process capability, i.e., the number of sigma (standard deviation) units that can fit between the LSL and the USL.

#120725

magesh
Participant

If your Z value is 4.5, then your process capability is 4.5 sigma.

#120728

Angler42
Participant

Hi,
I thought it was the number of sigma between the average and the closest specification limit….
Franck

#120733

Whitehurst
Participant

The Z value is the number of standard deviations a particular data point lies from its mean: Z = (x – mean) / standard deviation. For a standard normal distribution you use a z table to calculate the probability, i.e., to find the area under the standard normal curve.
Hope this helps. Robert Butler, if you read this thread, just correct me if I am wrong; you are the guru.
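As a quick sketch (with made-up numbers), the calculation described above looks like this in Python, using the standard library's `NormalDist` in place of a printed z table:

```python
# Sketch of the Z value described above: how many standard
# deviations a data point x lies from the mean.
# The numbers here are invented for illustration only.
from statistics import NormalDist

def z_score(x, mean, stdev):
    return (x - mean) / stdev

z = z_score(13.0, mean=10.0, stdev=2.0)  # 1.5 sigmas above the mean

# Area under the standard normal curve to the left of z
# (what a printed z table gives you).
p = NormalDist().cdf(z)
print(z, round(p, 4))  # 1.5 0.9332
```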

#120750

BTDT
Participant

Joe is the only one who is right so far. https://www.isixsigma.com/dictionary/Z-363.htm

BTDT

#120755

Ken Feldman
Participant

Also don’t forget that common practice is to add 1.5 to the Z value to get Sigma Level.  I didn’t say the right way, merely convention.

#120759

Utah123
Member

Z score: basically, it tells us how many standard deviations a specific data value is away from the mean. It can be computed for each data value as follows: Z = (x – x̄) / s.

The Z score is the number of standard deviations the data value falls above (+Z) or below (–Z) the mean.
Z scores are highly influenced by outliers, since they are calculated from the mean and std dev.
So if a data value has a Z score of 1.5, it is 1.5 standard deviations above (because it's positive) the mean. Z scores are measured in std dev and show us a data point's position relative to the mean. There is also Z bench, a process capability index, which is commonly used instead of or in conjunction with Cpk, and is essentially the same formula, replacing the value of the data point with the spec limit: (USL – x̄) / s or (x̄ – LSL) / s (use the min of the two to represent the process). This is my understanding, but you'll want to verify it, as you will with most info received here… although BTDT has always given me reliable info. Good luck.
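A minimal sketch of the Z-bench idea described above; the sample data and spec limits are invented for illustration:

```python
# The capability use of Z: replace the data point with a spec
# limit and take the worse (smaller) of the two sides.
# Data, USL and LSL below are illustrative only.
from statistics import mean, stdev

data = [9.8, 10.1, 10.3, 9.9, 10.0, 10.2]
usl, lsl = 11.0, 9.0

xbar = mean(data)
s = stdev(data)  # sample standard deviation

z_upper = (usl - xbar) / s
z_lower = (xbar - lsl) / s
z_process = min(z_upper, z_lower)  # the side closest to a spec limit
```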

#120798

grover
Participant

So, does it mean that when we say Six Sigma, we are talking about Z = 6.0? I thought that sigma is basically the standard deviation of the data points of the normal curve (meaning 1 sigma is one standard deviation from the mean value, 2 sigma is two standard deviations from the mean, etc.), and it is in this respect that we talk about Six Sigma… not a Z value… please correct me if I am wrong.
Regards

#120799

MAK
Participant

A Z value is a data point's position relative to the mean, measured in number of standard deviations. Z is a universal measurement because it can be applied to any unit of measure. Z is a measure of process capability and corresponds to the process sigma value that is reported by businesses. For example, a 3 sigma process means that three standard deviations lie between the mean and the nearest specification limit. Three is the Z value.
Regards,

#120834

Ken Feldman
Participant

I think that you and the other guy need to do a little research on this site regarding the 1.5 shift.  As I have already mentioned, it is convention that sigma level be calculated by adding 1.5 to the Z value to get Sigma Level.  A Z of 4.5 will equate to a Sigma Level of 6, as in “Six Sigma”.  The rationale behind the shift can be found in numerous threads on this site.

#120842

Utah123
Member

It is confusing… The key point to remember is that you are simply trying to determine the current capability of your process. This requires some metric that will allow you to compare the actual error in your process to the amount of error the customer will allow. Hence the need for the process standard deviation and customer tolerances.

There are many process capability metrics that you can use: DPMO, PPM, Cpk, Z score, sigma, Yield, DPU, etc. They are based on various assumptions and inputs, but tell you essentially the same thing: how well are we doing relative to our customer expectations? Also, most are roughly convertible to one another: a Cpk of 1.33 = Z score of 4.0 = sigma level of 4.0 = Yield of 95% = DPMO. As for the Z score:

The Z value, essentially, is the number of standard deviations between the mean and some number. When this number is a specification limit, it becomes a capability metric. It is synonymous with the sigma level: 6 sigma levels are equal to a Z score of 6. It is further explained by the equation:

ZLT = ZST – 1.5 SHIFT       or         ZST = ZLT + 1.5 SHIFT

ZST is a measure of process capability when all special cause is removed and the process is properly centered. It is how processes are compared to one another. It can be converted into ppm or other capability metrics using the appropriate conversion table.
ZLT is the long-term probability of producing a defect with the current process. It is the difference between the short-term Z and the Z shift. This is what your customer is experiencing.
The 1.5 shift is the average of the process shift over time. It is based on prior data gathered by Motorola on their processes and is often used as the standard in manufacturing. Although using the 1.5 shift is common, it is not universally accepted as best practice.
The above info is correct to the best of my knowledge, but you will want to validate before acting on it.  Hope it helps.
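The short-term/long-term conversion above is just an addition or subtraction; as a sketch, treating the 1.5 purely as the convention discussed in this thread (not a law, as the replies below argue):

```python
# The Zst/Zlt relationship from the post above.  The 1.5 is the
# Motorola convention; if you have real data, use a measured shift.
SHIFT = 1.5

def z_long_term(z_st, shift=SHIFT):
    return z_st - shift

def z_short_term(z_lt, shift=SHIFT):
    return z_lt + shift

# A long-term Z of 4.5 reported as "Six Sigma":
print(z_short_term(4.5))  # 6.0
```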

#120846

Mikel
Member

Several incorrect statements in your post. Among them:
Cpk of 1.33 is not a z score of 4, a Cp of 1.33 is,
There is no data from Motorola – no one has data, it was simply an observation by Bill Smith.
I know of anyone with real data that uses a shift of 1.5. If you have real data and still use 1.5, you did not learn much in BB training.

#120853

Ken Feldman
Participant

“I know of anyone with real data that uses a shift of 1.5. If you have real data and still use 1.5, you did not learn much in BB training.”
Change “anyone” to “no one” and you make sense.  If you have real data you should know whether the “shift” occurs or not.  The 1.5 is a substitute for real knowledge about the process.

#120858

BTDT
Participant

Stan: There is some data on the Zst and Zlt in Harry's clipart books, but they do not support the 1.5 shift. The graph on p. 9.17 of the 'White book' shows a curve. The data is purported as being from ABB. A second graph on p. 9.3 (Black book), repeated on p. 9.16 (White book), is fit using a linear relationship and shows an intercept of 0.75. I never did understand why the 'derivation' didn't fit his own data. BTDT

#120859

Ken Feldman
Participant

OH, NO, don’t get Stan started.  You have to read Harry’s ebook (aptly named Mysteries of Six Sigma) whereby he mathematically…I mean magically, comes up with his 1.47 shift…rounded to 1.5.

#120864

e
Participant

My friend….
Why are you saying Zst = 3*Cp and not Zst=3*Cpk.  What nuance are you emphasizing?

#120888

Mikel
Member

picky, picky, picky

#120889

Mikel
Member

I agree that there are numbers on those pages – that doesn’t make them data.

#120890

Mikel
Member

Do you understand the difference between Cp and Cpk?

#120891

grover
Participant

Stan and Darth,
Below I am summarizing the discussion which I saw related to my questions. I learned a lot from this discussion. Thanks to all of you. Please correct me if I am wrong in summarizing the subject.
“A Z value is a data point’s position between the mean and another location as measured by the number of standard deviations. Z is a universal measurement because it can be applied to any unit of measure. Z is a measure of process capability and corresponds to the process sigma value that is reported by the businesses. For example, a 3 sigma process means that three standard deviations lie between the mean and the nearest specification limit. Three is the Z value. But due to variation in the process we report this sigma level as Z-shortterm and Z-longterm. In case of short-term we will add 1.5 to the original value which we took from normal table. If we talk about long-term, then we will subtract 1.5 from the value we got from the normal table. For example as you said darth that A Z of 4.5 will equate to a Sigma Level of 6, as in “Six Sigma”, in case of short-term z-value.
So, it means that whenever I calculate a Z value from the table, I have to decide whether it is long-term or short-term, get the final Z value accordingly, and report this value as the sigma level. Correct…?
Best Regards

#120895

Whitehurst
Participant

Stan / Darth, correct me if I am wrong:
Zst = 3 Cp, OK, but what about Zst = 3 Cpk? Many organizations believe it; even some of the materials at GE suggest it, but there is no statistical validation for it. Yet many industries use the formula Zst = 3 Cpk as their short-term Z value.

#120903

Mikel
Member

Yes – they are wrong.

#120904

Whitehurst
Participant

Stan, then how does one calculate the Zst for a one-sided specification, where we don't get the Cp?

#120911

Ken Feldman
Participant

Naveen, you were doing OK until the very end.  Typically, the Z value is considered long term and thus you add the 1.5 to estimate the short term.  Short term variation will likely be lower than long term and thus the sigma level for short term will be higher than for long term.  The difference was estimated at about 1.5 and that has become the convention.  But, as Stan pointed out, if you have real data you can estimate the true shift if there is any and not be bound by the 1.5 convention which might be misleading.

#120917

e
Participant

Stan, the reason I ask about Cpk vs Cp is because… looking at the formula Z = (x – x̄)/s, it is 3 times the Cpk when x is the USL or LSL.
Zu should equal 3·CpkU and Zl should equal 3·CpkL. Then, to get an overall Z, you'd have to combine the overall ppm defective to get the Z.
Where am I wrong, my friend…
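A sketch of the tail-combining step described above, with illustrative one-sided Z values; `NormalDist.inv_cdf` from the standard library converts the total defect probability back to a single "bench" Z:

```python
# Combine the defect probability from each tail, then convert
# the total back to one overall Z.  The one-sided Z values are
# invented for illustration.
from statistics import NormalDist

nd = NormalDist()
z_u, z_l = 2.0, 2.5  # (USL - xbar)/s and (xbar - LSL)/s

p_defect = (1 - nd.cdf(z_u)) + (1 - nd.cdf(z_l))  # both tails
z_bench = nd.inv_cdf(1 - p_defect)  # a bit below min(z_u, z_l)
```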

#120938

BTDT
Participant

Dr. H never did respond to my queries about the difference between his own data and the derivation. Maybe he doesn't listen to queries. :) BTDT

#120973

Mikel
Member

The question you should be asking is 1) where am I right, and 2) can I get a refund for my training?

#120979

Mikel
Member

Short term variation will likely be lower than long term?
Give me an example where it’s not a sure thing.
On a more positive note, I’m glad to see you have time off from your cush LSS job to post lately. I’ll start paying more attention if you have time to play again.

#120983

Ken Feldman
Participant

You got me, I was being wishy washy for some reason.  Not like me.  Of course st can’t be more than lt.  They can be equal but more often than not, st will be less than lt.  As for all my apparent free time….summer vacation schedules are leaving a few more open coaching slots than usual.  That looks like it will continue until after the 4th…of July that is.  You seem to be paying plenty of attention, even picking up my cute quip about being a know it all.

#121142

grover
Participant

Darth, actually I was too busy to see your post yesterday, but when I saw it I was also thinking about your remarks about short-term variation; when I saw Stan's post, it was clear to me. Well, it's all clear to me now.
Darth/Stan, I work in a chemical lab and have some statistical data I want to share with you guys. I am making my case and will come back very soon. I would appreciate your help in this regard.
Warmest Regards

#121248

grover
Participant

Stan/Darth
Now I have made my case. I have some confusion in it which I want to share with both of you. I work in a chemical lab where we get samples of iron ore and chrome ore from our customers. They want us to find the iron %, chrome % and alumina % in both ores and report it to them, so that they can see whether the lot from which the sample was taken is acceptable or not. We report the same to them. Now, in order to analyse the above percentages, we have some standard procedures, like ISO procedures, for finding the % of iron or chrome in iron or chrome ores. We normally don't follow those procedures exactly as such and make some amendments to them. So our procedures are basically not standard procedures; they are rather non-standard. In order to make sure that our non-standard procedure is OK, we validate our procedures through an inter-laboratory comparison scheme. One of the validation schemes is ROUND ROBIN. What we normally do is send the same sample to several different labs, where they do the same test, for example finding iron in iron ore, according to their own procedures, and then send their results to the ROUND ROBIN. The organizers compile all the test results and send the summary to all related labs that participated in the ROUND ROBIN activity. Here is one summary:
|                    | Lab Mean | Lab Std. Dev | Std Z-score |
|--------------------|----------|--------------|-------------|
| Iron in iron ore   | 62.55    | 0.0          | 1.23        |
| Silica in iron ore | 4.05     | 0.0          | -0.87       |
We normally provide our four test values (for four replicate tests) to them, from which they calculate the std. dev and hence the z-score. It happens that all four of our values are the same, hence a std. dev of zero. The calculation of the z-score is explained below.
Along with this, they provide the data for the whole group that participated in the scheme, like this:

|                    | Group Mean | Group Std. Dev |
|--------------------|------------|----------------|
| Iron in iron ore   | 62.26      | 0.236          |
| Silica in iron ore | 4.20       | 0.178          |
Now, in order to calculate the Z-score, they take the difference between our test value (62.55) and the group mean (62.26), divided by the group std. dev (0.236): (62.55 – 62.26) / 0.236 = 1.23. Similarly they calculate the value for silica and for the other tests.
They then draw a normal curve (I don't know how to incorporate this curve on this forum, so I am going to explain it), where the centre value is shown as X-bar, with z-values of 1, 2, 3, -1, -2, -3 marked on both sides of the curve. Then they plot your calculated z-value on the graph so that the lab can see where it stands. Whenever the z-value comes out above 3 or below -3, they highlight the value and call it an outlier. Whenever any test result comes back to us as an outlier, we investigate what the problem is in our test method or any other potential problem.
Now, in my very first post I asked whether the z-value defines the sigma level, and you said yes, it is a sigma level after adding or subtracting 1.5 from the original z-value. What I understood before was that the z-value basically tells you how far your value sits from the average value of the normal distribution. For example, a z-value of 1.23 tells me that 39.07% of the area lies between the mean and my value, or a cumulative 78.13% of the area of the normal distribution within ±1.23 on both sides. What would my sigma standing be in this case? How would I decide whether to announce my sigma as long- or short-term?
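The round-robin z-score described above can be reproduced directly from the quoted numbers:

```python
# Round-robin z-score: lab result vs the group, using the
# iron-in-iron-ore numbers quoted in the summary above.
def round_robin_z(lab_value, group_mean, group_stdev):
    return (lab_value - group_mean) / group_stdev

z_iron = round_robin_z(62.55, 62.26, 0.236)
outlier = abs(z_iron) > 3.0  # flagged labs investigate their method
print(round(z_iron, 2))  # 1.23, matching the report
```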
Regards,

#121280

grover
Participant

Regards

#121286

j
Participant

Naveen,
You always report Zst. 3.4 defects per million opportunities is 6 sigma, which is Zst, and its Zlt is 4.5. Going by the case you are describing, it seems you are doing some kind of reproducibility check, and giving a Z value to it makes no sense to me.
Maybe Darth, Stan, Robert or BTDT can throw some light on it!!

#121294

BTDT
Participant

J: I never use 3.4 defects per million as 6 sigma. I calculate the Zshift using data and don't assume anything.
Zlt is the process capability using all your data, USL and LSL.
Zst is the process capability using your data in small, centered, time-limited subgroups, using the same USL and LSL.
Zshift is the difference between the two numbers: Zshift = Zst – Zlt.
The shift value is used as an estimate of the process capability when the process is controlled. There are no assumptions when you use continuous data. BTDT
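BTDT's recipe can be sketched as follows; the subgroup data and spec limits are invented, and averaging the subgroup standard deviations is a simplification of proper pooling:

```python
# Zlt from the overall spread of all data; Zst from the
# within-subgroup spread; same spec limits for both.
# Data and limits below are illustrative only.
from statistics import mean, stdev

subgroups = [
    [10.1, 9.9, 10.0],  # small, centered, time-ordered subgroups
    [10.4, 10.2, 10.3],
    [9.8, 9.7, 9.9],
]
usl, lsl = 11.0, 9.0

all_data = [x for sg in subgroups for x in sg]
xbar = mean(all_data)

s_lt = stdev(all_data)                      # long-term: overall spread
s_st = mean(stdev(sg) for sg in subgroups)  # short-term (simplified pooling)

z_lt = min(usl - xbar, xbar - lsl) / s_lt
z_st = min(usl - xbar, xbar - lsl) / s_st
z_shift = z_st - z_lt  # measured from the data, not assumed to be 1.5
```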

#121318

grover
Participant

J, you are right that it is really a reproducibility check, but in the round robin exercise they always show your result in the perspective of the others, and the Z-value is one of the tools they use to do it. For example, as I said, if the Z-value of your lab comes out at more than 3.0, they call the lab result an outlier, and if it comes in under 1.0, it means that your result is in line with the other labs.
Regards

#121371

grover
Participant

No answer from anyone!! We talk too much but don't prove it!!

#121382

BTDT
Participant

Are you asking that we prove we talk too much? Zlt is calculated from your continuous data; Zst is calculated from your continuous data. The Zshift is the difference between the two. Don't start adding 1.5 to things: you know both numbers. https://www.isixsigma.com/forum/showmessage.asp?messageID=72478 BTDT

#121412

Ken Feldman
Participant

Naveen, there has been considerable traffic on this thread. At some point interest is lost, and that is what has likely happened. No one really has the time or the inclination to get into great depth for a long period. Although it might appear lots of time is spent on silly threads, they are usually quick and require little thinking. They are good for some stress relief. Short posts which only require a quick response will elicit a greater response than long, in-depth questions. Possibly you can take this offline with someone willing to spend the time helping. Personally, I think the thread is dead.

#121413

Mikel
Member

Speak for yourself, Mr. Darkside. I personally research and spend hours thinking about each and every one of my posts.

#121415

Ken Feldman
Participant

Ah, now I recall some of the more eloquent posts:
“NO”
“You don’t know what you are talking about”
“You’re an idiot”
“Reigle, you’re an idiot”
“Contact me at [email protected] and I will sell you  (fill in the blank)…….”
“Jack and Coke is what you should be using”
“Dr. Harry is an idiot”
Maybe we should record a CD of Stan’s famous and helpful posts.  Shucks why waste a good CD for only 57kb of material.

#121419

Mikel
Member

But it takes me at least an hour for each of them.

#121421

Mikel
Member

Darth, sing with me (to the tune of the famous Curtis Mayfield hit  – The theme to Superfly – Freddie’s Dead) —
Hey, hey… Love, love… Yeah, yeah… Uh huh… This thread is dead. That's what I said… Let Naveen rap a plan; 'said he'd send him home. But his hope was a dope named Reigle, and he should have known. It's hard to understand; there was intelligence in this man. I'm sure all would agree, that his misery, was his z score and things. Now this thread is dead. That's what I said… Everybody's misused Naveen; made fun and abused him. Another BB playin'; pushin' meaningless z scores for the man. A terrible blow, but that's how it go… Naveen's on the corner now… if you wanna be a BB, wow… Remember, this thread is dead… We're all built up with progress, but sometimes I must confess, we can deal with rockets and dreams, but reality… what does it mean? Ain't nothing said. 'cause this thread is dead. Hey, hey… Love, love… Hey, hey… Yeah, yeah… Huh huh… Love, love… Yeah, yeah… Huh huh… Yeah, yeah…

#121423

Mikel
Member

Sorry, pretty sick I know, but I couldn’t resist. You know how something just sticks in your head?
Coming next Reigleman to the tune Pusherman also from the late great Curtis Mayfield.

#121428

Ken Feldman
Participant

That is probably your longest and most thought out post ever…congrats on a great one.  Yeah Yeah

#121432

grover
Participant

We are going nowhere!! Very disappointed!!

#121434

grover
Participant

0
#121438

Mikel
Member

Naveen,
Your use of z values for your lab round robins is basically sound, although I wonder why you are making it so hard. A round robin is just a repeatability study. Plug what you are doing into a GR&R and follow the rules of GR&R (they are basically sound).
MSA is an established method, follow either MSA from AIAG or MSE from Wheeler. Either is okay, just don’t fall for the BS from Wheeler that his method is better, it isn’t.

#121440

Mikel
Member

Naveen,
You are depending on others' brains to think for you. You have all you need to figure it out. Use your own brain, not mine or Darth's (his is a wrinkly old South Florida retiree-type brain anyway – yuk!).
Very disappointed!!

#121458

grover
Participant

Stan, my question was very simple, and I am not making it too hard; it's simple. I have a lot of data of the kind I showed in the thread (at least two years' worth). What I was asking is: is there any way that I can calculate the sigma value of my lab? I know people do it with the help of the DMAIC methodology. I read that one also. It talks about DPMO, which talks about opportunities. How do I assign opportunities in my case? Isn't that a very subjective approach… what do you think?
Also, just for my better understanding: a round robin is a reproducibility exercise, not repeatability. Correct me if I am wrong.

#122730

shawn
Member

z = (x – x̄) / s
A z score is a transformation of a data point into a unitless value of distance relative to the mean.
The z-score is the number of standard deviations' distance from the mean.

