% GRR
 This topic has 16 replies, 7 voices, and was last updated 20 years, 4 months ago by Carl H.


July 22, 2002 at 2:51 pm #29925
The following table states the recommended %GRR levels for measuring instruments. Is there a Type I and Type II error tied to each of the levels? How do we assess the risk of an instrument having a %GRR of 10% versus another with a %GRR of 30%?
% R&R Results
July 22, 2002 at 3:31 pm #77445
Mike Carnell (Participant)
Marc,
There is an alpha and beta error associated with the study. I would use the percentages listed as a guideline, but not carved in stone. You need to understand what your process can tolerate. Like many other things it is a sliding scale, and the percentages are really relative to where you are operating.

How you assess the areas of risk depends on what you are looking at and the level your process is operating at. It is probably more a function of the level your process is operating at than of the difference between 10% and 30%. When your process is at a very low sigma level, the impact of the percent tolerance is really significant. The process is spread out and producing a lot of defects. The percent tolerance lets you assess a band around the spec limit that is a gray zone, defined by whatever percent tolerance your study turned up. If you have 10% as your percent tolerance, there is a 20% band around the spec, 10% inside and 10% outside, where good parts can be called bad and bad parts can be called good.

As your process improves and the sigma level increases, you still need to understand the % tolerance so you know how much actual spec you have to work with. You also need to look at the % study variation and % total variation to make sure that the gage isn't the largest source of variation in the process. If you constantly improve your process capability and do nothing for the measurement system, it will work its way up the Pareto until it becomes the significant piece.

If you want to do the 10-versus-30 comparison, imagine this (what we used to call guard banding). A gage with 10% leaves me with 80% of the spec where I can actually trust the measurements first pass, without some other verification test (10% off each end of the spec). A gage with 30% will leave me having to use the middle 40% of the spec to operate the process, since I have 30% on either end where the measurement system cannot be trusted.

An increased R&R (%Tolerance) reduces the workable region of your specification for the process to operate inside of. Ask any person whether they would prefer to operate in a window that is 80% of the spec or 40%. It should make for a short discussion.

With some luck Gabriel will also respond – he seems to like the R&R questions. We do not always agree, but you get a couple of perspectives.
Good luck.
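[Editorial sketch] The guard-banding arithmetic in the post above reduces to a couple of lines. A minimal illustration in Python; the function name is invented, and integer percentages are used so the results are exact:

```python
def usable_spec_pct(grr_pct_tolerance):
    """Percent of the tolerance width that can be trusted first pass after
    guard-banding one %R&R(tolerance) band off each end of the spec."""
    return max(0, 100 - 2 * grr_pct_tolerance)

print(usable_spec_pct(10))  # 80 -> a 10% gage leaves 80% of the spec
print(usable_spec_pct(30))  # 40 -> a 30% gage leaves only the middle 40%
```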
July 22, 2002 at 8:45 pm #77477
Gabriel (Participant)
Marc, two questions:
1) 10% of what? r&R is a measure of the variability of your measurement system. When you make an r&R study you get S(r&R), the standard deviation of the measurement results (in the range of the study only, and only if the measurement process shows stability). Then you multiply by 5.15 to get the r&R value, which is supposed to cover (in a normal distribution of measurement results) 99% of the results of measuring the same "true value" (same part), varying just by chance. But you do not have a central point, just the width. For example, an r&R of 1 mm means that, when you measure the same part, 99% of the measurement results will be within a range of 1 mm, but you don't know where this range will be located (that would be the bias). If you compare the r&R value with the tolerance width, you get r&R%(tolerance). If you compare the r&R value with the process spread (as the ±3 sigma range), you get r&R%(process).
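[Editorial sketch] Gabriel's definitions translate directly into arithmetic. A minimal illustration assuming the 5.15 multiplier he cites; the function name and the example numbers are made up:

```python
def rr_metrics(s_rr, tolerance_width, process_sigma, k=5.15):
    """From the measurement-system standard deviation S(r&R), compute the
    r&R width (k * S(r&R), ~99% coverage for normal data when k = 5.15)
    and express it against the tolerance and the 6-sigma process spread."""
    rr = k * s_rr
    return {
        "r&R": rr,
        "%tolerance": 100.0 * rr / tolerance_width,
        "%process": 100.0 * rr / (6.0 * process_sigma),
    }

# Hypothetical gage: S(r&R) = 0.02 mm, tolerance = 1.0 mm, process sigma = 0.1 mm
m = rr_metrics(0.02, 1.0, 0.1)
print(round(m["%tolerance"], 2))  # 10.3
print(round(m["%process"], 2))    # 17.17
```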
2) Risk of what? Examples: accepting bad parts, rejecting good parts, missing special causes of variation (in SPC) because process variation is masked behind measurement variation… If you are concerned about wrongly rejecting or accepting parts, you must be aware that the probability of doing so is a function of the measurement results' variation (r&R) but also of the measurement results' position (bias, linearity), and also of the process variation and position. I mean, you may have a great (small) r&R, but if the gage is too biased (and the result is not corrected with a calibration table), then the results will still be wrong. And not only that. No matter how bad your measurement system is: if you do not produce out-of-tolerance parts, then the risk of wrongly accepting bad parts is zero! You may want to check this thread too (click here), where this subject (% of process vs % of tolerance) was also discussed.
Hope this helps. Maybe now you want to rephrase your question?
Gabriel

July 22, 2002 at 10:05 pm #77487
Mike Carnell (Participant)
Marc,
I told you he liked the R&R questions.
Always worth reading.

July 23, 2002 at 1:06 am #77488
Thanks, got it. I’ll have to check the accuracy for any bias before looking at the risk.
July 23, 2002 at 12:01 pm #77496
Gabriel,
I would appreciate it if you could explain a point in your discussion: “Cp of 1.5 with an r&R of 10%, the Cp without the measurement influence (process alone) would be about 1.53”. In my opinion, if Cp is 1.5 with an R&R of 10%, the Cp without the measurement influence should be 1.66.
Please kindly advise. Thank you.

July 23, 2002 at 12:12 pm #77500
Type I and Type II refer to the risk of rejecting good material or accepting bad material.
These risks are considered and accepted by the manufacturing facility and are a consequence of the inspection technique and the MSA. You cannot separate the two when investigating this issue.
A gage R&R has accuracy risks involved within the process of performing the gage R&R itself. If you want to consider the entire system, utilize the components of variation (COV) approach and determine the error or risk factor in your entire system.

July 23, 2002 at 4:31 pm #77509
Gabriel (Participant)
Wara:
I don’t know how you get to 1.66. The full phrase was “if you find a Cp of 1.5 with an r&R of 10%, the Cp without the measurement influence (process alone) would be about 1.53. Now, if you found the same Cp of 1.5 but with an r&R of 50%, the Cp without the measurement influence would have been more than 3”. Those values were taken from a chart, so they are not exact, but in no way is it 1.66. What I didn’t notice then is that those r&R values are % of tolerance. Even though I didn’t say that, from the context one could think that they are % of tolerance. However, for a Cp of 1.5, both r&R’s are linked: r&R(%process) = 1.5 x r&R(%tolerance).
I prepared an Excel spreadsheet to get the “Cp free of measurement influence” from any observed Cp and r&R (both %process and %tolerance). I am sending the file to iSixSigma. They will attach it to this message soon. In the meanwhile, here are some results for an observed Cp of 1.5:

r&R(%p)   r&R(%t)   Cp free of measurement variation
10        6.7       1.51
15        10        1.52
50        33.3      1.84
75        50        3.08

As you see, the results coincide with those I got from the chart. The spreadsheet also has some definitions, calculations and explanations to show how the formulas were derived. Please come back with your ideas.
Gabriel
Influence of r&R on Cp Download [Microsoft Excel]
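[Editorial sketch] Because variances add (sigma_total² = sigma_process² + sigma_gage²), the spreadsheet's numbers can be reproduced directly. A short illustration, assuming the 5.15 multiplier used earlier in the thread; the function name is invented:

```python
import math

def cp_free_of_measurement(cp_observed, rr_pct_tolerance, k=5.15):
    """Back out the Cp of the process alone. Working in units of the
    tolerance width T: sigma_total = T / (6 * Cp_obs), and
    sigma_gage = (r&R%tolerance / 100) * T / k. Variances add, so
    sigma_process = sqrt(sigma_total**2 - sigma_gage**2)."""
    s_total = 1.0 / (6.0 * cp_observed)
    s_gage = (rr_pct_tolerance / 100.0) / k
    s_proc = math.sqrt(s_total**2 - s_gage**2)
    return 1.0 / (6.0 * s_proc)

# Reproduces the table's column for an observed Cp of 1.5:
for rr_t in (6.7, 10.0, 33.3, 50.0):
    print(rr_t, round(cp_free_of_measurement(1.5, rr_t), 2))
# -> 1.51, 1.52, 1.84, 3.08
```

Note how the correction is tiny at r&R 10% of tolerance (1.5 → 1.52) but huge at 50% (1.5 → 3.08), which is Gabriel's point against the linear subtraction that yields 1.66.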
July 23, 2002 at 8:23 pm #77520
Mark,
The guardbanding answer described earlier by Mike Carnell hit it on the head for me.
I tried to simulate Type I and Type II error for different %P/T ratios, assuming the process was centered. In EXCEL, set up the USL, process average, process SD and measurement SD. Set up normal random data based on the mean and SDs entered to determine the ACTUAL and MEASURED test results. Set up a formula (nested IF statement) that puts the proper case (Type I, Type II, in-in, out-out) in a table. This summarizes the risks based on Cp and %P/T.
Let me know if you want this posted as attachment.
Carl
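[Editorial sketch] Carl's Excel simulation can be paraphrased in a few lines of Python. This is an illustration, not his actual spreadsheet; the parameter values are made up, and a part is "good" when its ACTUAL value is in spec and "accepted" when its MEASURED value is:

```python
import random

def simulate_cases(n, mean, proc_sd, meas_sd, lsl, usl, seed=1):
    """Monte Carlo: draw an ACTUAL value from the process, add gage noise
    to get the MEASURED value, then tally the four accept/reject cases."""
    rng = random.Random(seed)
    counts = {"in-in": 0, "type I": 0, "type II": 0, "out-out": 0}
    for _ in range(n):
        actual = rng.gauss(mean, proc_sd)
        measured = actual + rng.gauss(0.0, meas_sd)
        good = lsl <= actual <= usl
        accepted = lsl <= measured <= usl
        if good and accepted:
            counts["in-in"] += 1       # correct accept
        elif good and not accepted:
            counts["type I"] += 1      # reject a good part
        elif not good and accepted:
            counts["type II"] += 1     # accept a bad part
        else:
            counts["out-out"] += 1     # correct reject
    return counts

# Cp = 1 process (spec at +/-3 sigma) with a fairly noisy gage:
counts = simulate_cases(100_000, mean=0.0, proc_sd=1.0, meas_sd=0.3,
                        lsl=-3.0, usl=3.0)
print(counts)
```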
July 23, 2002 at 9:10 pm #77523
Gabriel (Participant)
Mike, a question:
Do you actually use reduced sorting limits if your gage has an r&R of 10% of the tolerance? We do not.
We have already discussed this (loss function, remember?), but my view is that the design takes into account that processes are not perfect, including the measurement process. The probability of accepting a part that is out of tolerance by 10% of the tolerance width (assuming r&R of 10% of tolerance and no bias) is 0.5%. The probability of producing an out-of-tolerance part in that zone in a barely capable process (let’s say Cp=1) is less than 0.15%. The combined probability of producing and accepting such a part is 0.00075% (7.5 PPM). Yet any sound design with a minimum of robustness or safety margin (as any sound design has) will support such a part.

July 23, 2002 at 9:13 pm #77524
Gabriel (Participant)
Carl:
I am not Mark, but I would like to see that simulation. Would you attach it?
Thanks
Gabriel

July 24, 2002 at 8:46 am #77529
Gabriel,
You are correct! I forgot to think about the additivity of variances. I understood it this way: assume the standard deviation of our process is 1 (St=1), and we know the minimum acceptable Cp value should be 1.33. That means a tolerance width of 8 (8/6 = 1.33). * This allows for a 1-sigma shift of the process mean, because no one can keep the process mean constant. While if Cp=2, it means we can tighten the process to 6 out of USL−LSL = 12 (12/6 = 2). Please see the attachment for more details.
So if we have Cp = 1.5 (measurement error of 10% included), that means our tolerance = 9 (9/6 = 1.5). Then, if we do the calculation without measurement error, it would be 9/(6 − 10%) = 9/5.4 = 1.66. This is wrong!!! Standard deviations do not subtract linearly; the variances add.
Thank you for your distinct explanation.
Wara S.

July 30, 2002 at 2:51 pm #77706
Gabriel,
Sorry, I know you are not Mark. Here is the AB risk spreadsheet. Tell me if you think it’s correct. I wish I had a direct formula for these instead of simulation. Trying to attach the 1.7 MB sheet…
Carl
AB Risk Download [Microsoft Excel, 1.8 MB] AB Risk Download [Zipped File, 0.6 MB]
July 30, 2002 at 4:53 pm #77717
Gabriel,
Please note that I set recalc in EXCEL to MANUAL (F9 key). This was done since calculations were taking a while when parameters were edited.
Sorry,
Carl
July 30, 2002 at 9:32 pm #77734
Gabriel (Participant)
Carl,
Nice simulation.
There is a formula for this, but it is a bit complicated. (Those not interested in a long math development, leave this message now!)
In general: X is the actual value and Y is the measurement result; both X and Y are variables with probability density functions, f(X) for the process distribution and g(Y|Xo) for the measurement results of a part measuring Xo. Note that Y is highly correlated with X (that is the idea of measuring). You can define “the probability of accepting a part with an actual value Xo” as P(accept Xo) = P(LSL < Y < USL | X = Xo) = int(LSL,USL)(g(Y|Xo)dY) (read “integral between LSL and USL of…”). Note that this will be a function of the Xo chosen. If you draw P(accept X) as a function of X you get the “gage performance curve”, which gives you the probability of accepting a part with a true value of X, for any X. If you use sorting limits other than the specification limits, change LSL and USL to the lower and upper sorting limits (sometimes reduced sorting limits are used to minimize the “accept bad parts” type of error, but this increases the “reject good parts” type of error). Any of these limits can be +infinity or −infinity if you have a one-sided tolerance. By the way, the probability of rejecting a part with a true value of Xo would be P(reject Xo) = 1 − P(accept Xo).
Now, the probability of finding a part that actually measures exactly Xo is zero, but the probability of finding a part between Xo and Xo+dX is P(Xo < X < Xo+dX) = f(Xo)·dX (for an infinitely small dX). Assuming that P(accept X) remains constant and equal to P(accept Xo) in this dX range, the probability of finding a part in this range AND accepting it is P(Xo < X < Xo+dX)·P(accept Xo) = f(Xo)dX·P(accept Xo) (note that finding a part with a true value between Xo and Xo+dX is independent of accepting a part with a true value Xo, so the multiplication rule is valid). If we agree that a good part is a part where LSL < X < USL, regardless of the measurement result, and otherwise it is a bad part, then:
1) The probability of accepting a part randomly taken from the process that happens to be a good part (the “in-in” case) is int(LSL,USL)(f(X)·P(accept X)·dX).
2) The probability of accepting a part randomly taken from the process that happens to be a bad part (a Type II error) is int(−inf,LSL)(f(X)·P(accept X)·dX) + int(USL,+inf)(f(X)·P(accept X)·dX).
3) For the probability of rejecting a part randomly taken from the process that happens to be a good part (a Type I error), replace P(accept X) by (1 − P(accept X)) in 1).
4) For the probability of rejecting a part randomly taken from the process that happens to be a bad part (the “out-out” case), replace P(accept X) by (1 − P(accept X)) in 2).
The sum of 1), 2), 3) and 4) must be 1, because all possible situations are covered. Note that when we say “that happens to be good/bad” we don’t mean “given that it is good/bad”. For example, even if you had a very high probability of accepting a part given that it is bad (an awful gage), the probability of “accepting a part randomly taken from the process that happens to be bad” would still be very low if the probability of the part being bad is very low (a highly capable process). If you want the probabilities of accepting/rejecting given that the part is good/bad, just divide the previous formulas by the probability of the part being good/bad: int(LSL,USL)(f(X)dX) and 1 − int(LSL,USL)(f(X)dX), respectively.
Y = X + error, and error = B + U, where B is the bias and U is the random measurement error. Then Y = X + B + U. In the P(accept X) formula, where X is given, the only term affected by random variation is U. But note that, in general, the bias is a function of the value being measured (linearity error), and the measurement variation is also different at each point (usually higher when the value being measured is higher). So, in general, the bias has a value for each X, B = b(X), and U will have a probability density distribution for each given X, h(U|X). (By the way, U always has an average of zero, because the average of the error is the bias and U accounts only for the random part.)
However, in many applications it is OK to say that B is constant and that U is independent of X, at least in the range of the study. For example, a caliper may have different bias and different spread along the 0–150 mm range, but if your process and specification are within 49 and 51, who cares what happens outside this range? Probably the difference in bias and measurement variation will be undetectable between 49 and 51. In this case b(X) = Bo for any X, and h(U|X) = h(U).
In this case, a part measuring X will be accepted when LSL < Y < USL, hence LSL < X + B + U < USL, hence LSL − X − B < U < USL − X − B. Then P(accept X) = P(LSL − X − B < U < USL − X − B | X = X) = int(LSL−X−B, USL−X−B)(h(U)dU). Furthermore, if we assume that the measurement result Y for any X is normally distributed around X + B with standard deviation SDy, then U is normally distributed around 0 with standard deviation SDu = SDy (obtainable, for example, from an r&R study). If we call H(U) the cumulative probability function of h(U), then P(accept X) = H(USL − X − B) − H(LSL − X − B), and if we call N(Z) the standardized normal cumulative probability distribution and make Z = (U − 0)/SDu = U/SDy, then P(accept X) = N((USL − X − B)/SDy) − N((LSL − X − B)/SDy).
This simplifies a lot the calculation of the probability of accepting a part of a given true value. But you still have to put it into the other integral to find the probabilities of the four cases. You can further assume that the process is normally distributed with average Xbar and standard deviation SDx, but I didn’t get that far. Maybe, if you expand the cumulative normal function into its formula (e^…) and the normal distribution of X too, the integral may be easy to solve. I invite you to try it and see if you get a more usable formula with this assumption.
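[Editorial sketch] Even without a closed form, the integrals above are easy to evaluate numerically. A short illustration under Gabriel's simplifying assumptions (constant bias, normal U with standard deviation SDy, normal process), using the normal CDF via math.erf and a plain midpoint Riemann sum; the parameter values are made up:

```python
import math

def norm_cdf(z):
    """Standardized normal cumulative distribution N(Z)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_accept(x, lsl, usl, bias, sdy):
    """Gage performance curve: P(LSL < Y < USL | X = x), with Y = x + B + U."""
    return norm_cdf((usl - x - bias) / sdy) - norm_cdf((lsl - x - bias) / sdy)

def four_cases(lsl, usl, xbar, sdx, bias, sdy, steps=20_000):
    """Midpoint-rule integration of f(X)*P(accept X) over +/- 8 process sigma,
    split into the four accept/reject cases."""
    lo, hi = xbar - 8.0 * sdx, xbar + 8.0 * sdx
    dx = (hi - lo) / steps
    cases = {"in-in": 0.0, "type I": 0.0, "type II": 0.0, "out-out": 0.0}
    for i in range(steps):
        x = lo + (i + 0.5) * dx
        fx = math.exp(-0.5 * ((x - xbar) / sdx) ** 2) / (sdx * math.sqrt(2.0 * math.pi))
        pa = p_accept(x, lsl, usl, bias, sdy)
        good = lsl < x < usl
        cases["in-in" if good else "type II"] += fx * pa * dx
        cases["type I" if good else "out-out"] += fx * (1.0 - pa) * dx
    return cases

# Cp = 1 process (spec at +/-3 sigma), unbiased gage with SDy = 0.3:
c = four_cases(lsl=-3.0, usl=3.0, xbar=0.0, sdx=1.0, bias=0.0, sdy=0.3)
print(c)  # the four values sum to ~1, as the derivation requires
```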
I would like to know, at least, if you got to read all this mess!

July 31, 2002 at 12:41 pm #77739
Nice job. You are, indeed, the man.
July 31, 2002 at 1:39 pm #77742
Gabriel,
I got through your note (almost a whole cup of coffee’s worth!).
I agree that the probabilities of the four cases/conditions are a function of the PDFs of the process and the gage, and that the spec limit(s) set the “from–to” on the integrals. What got me stumped was trying to solve what I thought were double integrals of the two PDFs. The normal/Gaussian error function was too nasty for me to solve.
Graphically, I got an idea of what was going on by drawing the gage PDF on top of the process one. Again, since my math ability is limited, I set off to just try to simulate what was going on.
I was hoping to use this simulation (or any equations that can be developed) to do a few things: 1) help illustrate to our GB and BB trainees that poor measurement systems do lead to quantifiable producer/consumer risk, 2) use the simulation to determine where to practically focus our efforts on improving a measurement OR a process (i.e., some good MSA “metrics” can still have high risks, and the converse is true), and 3) determine where our measurement sampling/averaging might even be reduced without significant risk.
Best Regards,
Carl