EV/Tol % in Gage RR
This topic has 11 replies, 7 voices, and was last updated 19 years, 9 months ago by Gabriel.
September 18, 2002 at 6:56 pm #30366
If the tolerance is, say, 500 to 550 and the EV/TOL % from a Gage R&R is 20%, what does it really mean? Would I be correct to assume that measurements between 500 and 505 and between 545 and 550 are doubtful? Or can I assume that measurements between 500 and 510 and between 540 and 550 are doubtful?
Please explain the concept. (EV/TOL % is equipment variation as a percentage of the total tolerance.)

September 18, 2002 at 7:44 pm #79007
Aush,
I think it depends on what you mean by “EV” and “doubtful”. I will assume you define EV as 6 x SDmeasurement, and “doubtful” as either producer or consumer error (alpha and beta errors) being, say, >0.1%. If these assumptions are OK, then the following example may help:
If USL is 550, LSL is 500, the process mean is 545, SDprocess is 0.0, SDmeasurement is 1.667 and there is no measurement bias, then:
CR = 6 x SDmeas / TOL = 6 x 1.667 / (550 - 500) = 20%
About 0.13% of measurements will erroneously exceed the USL (alpha error), even though all parts are truly at 545.0: Zmeas = (550 - 545)/1.667 = 3.0, which gives 0.13% on a standard Z table.
If the process mean were 540, Zmeas = 6.0, or ~0% alpha error.
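These two figures are quick to verify numerically. A minimal Python sketch, assuming the normal model above (the variable names are illustrative, not from the post):

    from math import erfc, sqrt

    USL, LSL = 550.0, 500.0
    sd_meas = 1.667

    # CR = 6 x SDmeas / TOL
    cr = 6 * sd_meas / (USL - LSL)
    print(f"CR = {cr:.1%}")            # ~20.0%

    # Alpha error for parts truly at 545: upper-tail probability beyond Zmeas
    z = (USL - 545.0) / sd_meas        # = 3.0
    alpha = 0.5 * erfc(z / sqrt(2))    # P(Z > z) for a standard normal
    print(f"alpha = {alpha:.2%}")      # ~0.13%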
In reality, though, most SDprocesses are >0.0. If SDprocess is, say, 2.0, then both variances can lead to alpha and beta errors. A rough estimate (based on 10,000 random lots) breaks down the classification as shown:

                  MEASURED IN                   MEASURED OUT
    ACTUAL IN     96.81%                        2.42% (alpha/producer risk)
    ACTUAL OUT    0.27% (beta/consumer risk)    0.50%
Both alpha and beta risk grew to significant values as SDprocess increased. The alpha and beta risk percentages appear to be a function of the SDs of both the process and the measurement, as well as the distance from the USL (or LSL) to the process mean.
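The 10,000-lot estimate is easy to reproduce by simulation. A hedged sketch, assuming a normal process and normal, unbiased gage error (the seed and names are arbitrary):

    import numpy as np

    rng = np.random.default_rng(42)

    USL, LSL = 550.0, 500.0
    mean_p, sd_p = 545.0, 2.0               # process
    sd_m = 1.667                            # measurement (gage) only

    n = 10_000
    true = rng.normal(mean_p, sd_p, n)      # true part values
    meas = true + rng.normal(0.0, sd_m, n)  # what the gage reports

    act_in = (true >= LSL) & (true <= USL)
    mea_in = (meas >= LSL) & (meas <= USL)

    print(f"in/in:   {np.mean(act_in & mea_in):.2%}")    # correctly accepted
    print(f"in/out:  {np.mean(act_in & ~mea_in):.2%}")   # alpha / producer risk
    print(f"out/in:  {np.mean(~act_in & mea_in):.2%}")   # beta / consumer risk
    print(f"out/out: {np.mean(~act_in & ~mea_in):.2%}")  # correctly rejected

Repeated runs will scatter around the percentages in the table above.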
I hope this helps. I look forward to comments/critiques from others.
Regards,
Carl
September 18, 2002 at 11:14 pm #79010
The only thing wrong with your logic is that measurements are about individuals, not means. Given that, your percentages are grossly understated.
September 19, 2002 at 4:25 am #79015
Hi,
Well, yes, you are right. See, when your GR&R/TOL is 20%, it means that whatever your tolerance band is, 20% of it is occupied by measurement error. Which means that if your measured value is 540, the true value could be anywhere between 535 and 545 (imagine a bell curve with its mean at 540 and 3-standard-deviation limits at 535 and 545). Now if you have a reading of 547, then you are running the risk of either accepting a bad part or rejecting a good part, as the true value could lie anywhere between 542 and 552.
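This band arithmetic fits in a few lines. A minimal sketch, assuming EV is the full 6-sigma band (names are illustrative):

    tol = 550 - 500
    ev = 0.20 * tol          # EV = 6 x sigma_meas = 10
    half_band = ev / 2       # +/- 5 units around any reading

    for reading in (540, 547):
        lo, hi = reading - half_band, reading + half_band
        print(f"reading {reading}: true value likely within [{lo}, {hi}]")

This reproduces the 535-545 and 542-552 bands quoted above.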
Hope this helps.
Hemanth

September 19, 2002 at 10:37 am #79020
How did you get the alpha and beta risk? Could you explain in detail?
September 19, 2002 at 11:26 am #79022
Stan,
Yeah, I know that the first example (mean = 545.0, SDprocess = 0.00) probably does not happen in reality.
I think that the percentages should be close for both the first and second cases (SDproc = 0.0 up to 2.0).
If you or anyone has direct formulas for calculating alpha/beta risk as a function of the process mean, SDproc and SDmeas, these would be helpful; one possible numerical approach is sketched below.
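One numerical route, offered as a sketch rather than a standard formula: condition on the true value x, then integrate the chance of misclassification against the process distribution. This assumes normality for both process and gage, and uses SciPy's quad and norm (names are illustrative):

    from math import inf
    from scipy.integrate import quad
    from scipy.stats import norm

    def alpha_beta(mean_p, sd_p, sd_m, lsl, usl, bias=0.0):
        # P(measurement lands outside the limits | true value x)
        def p_out(x):
            return norm.cdf(lsl, x + bias, sd_m) + norm.sf(usl, x + bias, sd_m)

        def pdf(x):
            return norm.pdf(x, mean_p, sd_p)

        # alpha: true value inside spec, measured outside (producer risk)
        alpha, _ = quad(lambda x: pdf(x) * p_out(x), lsl, usl)
        # beta: true value outside spec, measured inside (consumer risk)
        b_lo, _ = quad(lambda x: pdf(x) * (1 - p_out(x)), -inf, lsl)
        b_hi, _ = quad(lambda x: pdf(x) * (1 - p_out(x)), usl, inf)
        return alpha, b_lo + b_hi

    print(alpha_beta(545.0, 2.0, 1.667, 500.0, 550.0))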
Thanks
Carl

September 19, 2002 at 12:19 pm #79026
Just to clarify further: is it all right to divide the 20% as 10% on both sides of the measurement, or should it be one-sided only? This is an error, correct? If so, will it be acceptable to divide this error equally? Is it possible that the 20% can be one-sided? If so, the conditions change.
September 19, 2002 at 12:29 pm #79027
Nice observation. While doing GR&R, normality is assumed; that is why you see formulae like Rbar/d2 in the AIAG manuals. Since normality is assumed, the 20% is divided into two equal halves around the measured value (a property of the normal distribution). I would suggest keeping it +/- 10% about the measured value, because the chance of the error going beyond 10% is very small: whatever error you got (EV) is a 6-standard-deviation band, so if you divide this band into two halves you are still taking 3 standard deviations from the measured value, and, as explained in one of the replies, the probability of the error exceeding this limit is very, very small.
It is very difficult to explain the whole thing here; I would recommend going through the AIAG QS-9000 MSA manual, which gives a good explanation of this.

September 19, 2002 at 12:47 pm #79031
Thanks
September 19, 2002 at 3:21 pm #79036
What you have stated is that the equipment variation is 20% of the total tolerance. Let's work backwards from here; it might help clear things up.
EV/Tol = 20% = 0.2 and Tol = 50
EV = .2*50 = 10
EV = 5.15*Sigma EV (most software, including AIAG and Minitab, uses 5.15 standard deviations for GR&R, which covers 99% on the Z table)
Sigma EV* 5.15 = 10
Sigma EV = 10/5.15
Sigma EV = 1.94
To assess your risk for any given point use
Z = (USL – (Measured Value))/Sigma EV
if measured value = 545, then
Z = (550-545)/1.94
Z = 2.58
Probability of error is .5 - .4951 = .0049, or ~0.5%, using a one-sided Z table.
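The same backwards calculation in code, a small sketch assuming the 5.15-sigma convention quoted above (names are illustrative):

    from math import erfc, sqrt

    tol = 550 - 500
    ev = 0.20 * tol                  # EV = 10
    sigma_ev = ev / 5.15             # ~1.94

    z = (550 - 545) / sigma_ev       # ~2.58
    p_err = 0.5 * erfc(z / sqrt(2))  # one-sided tail probability
    print(f"sigma_EV = {sigma_ev:.2f}, Z = {z:.2f}, P(error) = {p_err:.2%}")

Note the 0.5% here versus the 0.13% earlier in the thread: the only difference is dividing EV by 5.15 instead of 6 to recover the gage standard deviation.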
Hope this helps.
You still need to take into account the AV (appraiser variation). It is handled the same way.
PS: Purchase/acquire the MSA manual from the AIAG. It will help you a lot.
TMc
September 19, 2002 at 5:37 pm #79039
TMc,
Thanks for the insight and the calculations. Just to clarify further: what does the probability of error, calculated as 0.5%, signify? Does it mean that the chance of the measured value 545 actually falling beyond the tolerance of 550 is 0.5%? So as the value of Z decreases, the risk factor increases. To put it differently, the alpha risk is 0.5%; i.e., if the hypothesis states that the measured value 545 is within the tolerance, then the probability of rejecting this hypothesis when in fact it is true is 0.5%.
Please correct me if you feel I have not got it right. So the question is: what value of alpha should be used to decide which measurement values are questionable? The reason I want to do this is to decide on the gates within the tolerance band.
Your input will be very valuable. Thanks

September 19, 2002 at 8:59 pm #79049
Gabriel
This is all about managing the risks of measurement variation. By setting sorting limits that are tighter than the specification limits, you lower the probability of accepting bad parts (lower, not eliminate it). The cost is that you dramatically increase the probability of rejecting good parts.
If you want to calculate probabilities, forget EV/tol for a minute. EV/tol is just a ratio for easily comparing the gage variation against the tolerance; it is a “subproduct” of the gage standard deviation itself. So go straight to the gage standard deviation.
Imagine that you want to know the probability of accepting a part whose “true value” (something you will never actually know) is 545. Let's do it graphically; with your knowledge of statistics you will know how to calculate it.
Draw an X axis and cross it with two vertical lines representing the sorting limits (which could be the specification limits, or tighter). Draw a third vertical line at 545 (the true value), and another vertical line at a distance equal to the bias (for example, if the bias is +2 it will be at 547). Around this last line, draw a normal curve with standard deviation equal to the gage standard deviation. The area under the bell and within the sorting limits is the probability of accepting a part with a true value of 545. Of course, the probability of rejecting it is 1 minus the probability of accepting it. If you do the same for several values covering the whole tolerance and a bit beyond, you will get a curve that shows the probability of accepting a part as a function of its true value; this is called the “gage performance curve”.
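This graphical procedure maps directly to a few lines of code. A hedged sketch of the gage performance curve, assuming SciPy's norm and reusing the figures from the example above (names are illustrative):

    from scipy.stats import norm

    lsl, usl = 500.0, 550.0   # sorting limits (spec limits, or tighter)
    sd_gage = 1.667
    bias = 2.0                # the +2 bias from the example

    # P(accept | true value v): area of the measurement bell inside the limits
    def p_accept(v):
        return norm.cdf(usl, v + bias, sd_gage) - norm.cdf(lsl, v + bias, sd_gage)

    for v in (498, 500, 505, 525, 545, 550):
        print(f"true value {v}: P(accept) = {p_accept(v):.1%}")

Plotting p_accept over the whole tolerance and a bit beyond gives the gage performance curve described above.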
If your measurement system is appraiser-sensitive, then you should include the appraiser variation too, and use the total measurement system variation instead of the gage variation only.
Two final concepts:
Do not forget the process itself. No matter how bad your measurement system is, you will not accept bad parts if you do not produce bad parts (however, if your measurement system is too bad, how can you tell how good your process is?).
If your measurement system and your process are both pretty good, you can tighten the sorting limits a bit without increasing the chance of rejecting good parts too much, because you will not produce parts near the sorting limits. However, in such a good situation, why would you reduce the sorting limits? Unfortunately, the situations where you need to reduce the sorting limits are usually those where the risk of rejecting good parts increases dramatically, thus making “improving the measurement system” the best (most economic) choice.
Hope this helps.
The forum ‘General’ is closed to new topics and replies.