# %GRR

Viewing 17 posts - 1 through 17 (of 17 total)
#29925

The following states the recommended %GRR for measuring instruments. Is there a Type I and Type II error tied to each of these levels? How do we assess the risk of an instrument having a %GRR of 10% vs. another with a %GRR of 30%? % R&R Results

#77445 Mike Carnell
Participant

#77477 Gabriel
Participant

Marc, two questions:
1) 10% of what? r&R is a measure of the variability of your measurement system.When you make an r&R study you get S(r&R) that is the standard deviation of the measurement results (in the range of study only and if the measurement process shows stability only). Then you multiply by 5.15 to get the r&R value, which is supposed to cover (in a normal distribution of measurement results) 99% of the results of the measurement of the same “true value” (same part) happening just by chance. But you do not have a central point, just the width. For example, an r&R of 1mm means that, when you measure the same part, 99% of the measurement results will be within a range of 1mm, but you don’t know where this range will be located (that would be the bias). If you compare the r&R value with the tolerance width then you get r&R%(tolerance). If you compare the r&R value with the process spread (as the +/-3 sigmas range) you get r&R%(process).
2) Risk of what? Examples: accepting bad parts, rejecting good parts, missing special causes of variation (in SPC) because process variation is masked behind measurement variation… If you are concerned about wrongly rejecting or accepting parts, you must be aware that the probability of doing so is a function of the measurement variation (r&R) but also of the measurement position (bias, linearity), and also of the process variation and position. I mean, you may have a great (small) r&R, but if the gage is too biased (and the result is not corrected with a calibration table) then the results will still be wrong. And not only that: no matter how bad your measurement system is, if you do not produce out-of-tolerance parts, then the risk of wrongly accepting bad parts is zero! You may want to check this thread too (click here), where this subject (% of process vs % of tolerance) was also discussed.
Hope this helps. Maybe now you want to re-phrase your question?
Gabriel
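Gabriel's definitions above can be sketched in a few lines of Python (my own illustration, not from any MSA package; the 5.15 multiplier and the 6-sigma process spread follow his description, and all numbers are made-up examples):

```python
# Sketch of the r&R metrics Gabriel describes. S(r&R) is the standard
# deviation of repeated measurements of the same part; 5.15 sigma covers
# about 99% of a normal distribution.

def rr_value(sd_rr):
    """99% spread of repeated measurements of the same part."""
    return 5.15 * sd_rr

def rr_pct_tolerance(sd_rr, tol_width):
    """r&R as a percentage of the tolerance width."""
    return 100.0 * rr_value(sd_rr) / tol_width

def rr_pct_process(sd_rr, sd_process):
    """r&R as a percentage of the +/-3 sigma process spread."""
    return 100.0 * rr_value(sd_rr) / (6.0 * sd_process)

# Example: a gage with S(r&R) = 0.0194 mm against a 1 mm tolerance
print(rr_pct_tolerance(0.0194, 1.0))   # about 10% of tolerance
```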

#77487 Mike Carnell
Participant

Marc,
I told you he liked the R&R questions.

#77488

Thanks, got it. I’ll have to check the accuracy for any bias before looking at the risk.

#77496

Gabriel,
I would appreciate it if you could explain one point in your discussion: “Cp of 1.5 with a r&R of 10%, the Cp without the measurement influence (process alone) would be about 1.53”. In my opinion, if Cp is 1.5 with an R&R of 10%, the Cp without the measurement influence should be 1.66.

#77500

Type I and Type II refer to the risk of rejecting good material or accepting bad material.
These risks are considered and accepted by the manufacturing facility and are a consequence of the inspection technique and the MSA. You cannot separate the two when investigating this issue.
Performing a gage R&R has accuracy risks of its own. If you want to consider the entire system, use the COV (components of variation) approach and determine the error or risk factor in your entire system.

#77509 Gabriel
Participant

Wara: I don’t know how you get to 1.66. The full phrase was “if you find a Cp of 1.5 with a r&R of 10%, the Cp without the measurement influence (process alone) would be about 1.53. Now, if you found the same Cp of 1.5 but with a r&R of 50%, the Cp without the measurement influence would have been more than 3”. Those values were taken from a chart, so they are not exact, but in no way is it 1.66. What I didn’t notice then is that those r&R values are % of tolerance. Even though I didn’t say that, from the context one could think that they are % of tolerance. However, for a Cp of 1.5, both r&R’s are linked, being r&R(%process) = 1.5 x r&R(%tolerance).
I prepared an Excel spreadsheet to get “Cp free of measurement influence” from any observed Cp and r&R (both %process and %tolerance). I am sending the file to iSixSigma. They will attach it to this message soon. In the meanwhile, here you have some results for a Cp (observed) of 1.5:

r&R(%p)   r&R(%t)   Cp free of measurement variation
10        6.7       1.51
15        10        1.52
50        33.3      1.84
75        50        3.08

As you see, the results coincide with those I got from the chart. The spreadsheet also has some definitions, calculations and explanations to show how the formulas were derived.
Please come back with your ideas.
Gabriel Influence of r&R on Cp Download [Microsoft Excel]
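The relationship behind that table can be sketched in Python (my own reconstruction, assuming independent normal process and measurement variation, r&R expressed as % of tolerance with the 5.15 convention, and variances adding in quadrature):

```python
from math import sqrt

def cp_free(cp_observed, rr_pct_tol):
    """Cp of the process alone, stripped of measurement variation."""
    T = 1.0                                   # tolerance width (units cancel out)
    sd_total = T / (6.0 * cp_observed)        # observed total sigma
    sd_meas = (rr_pct_tol / 100.0) * T / 5.15 # measurement sigma from r&R(%tol)
    sd_proc = sqrt(sd_total**2 - sd_meas**2)  # subtract variances, not sigmas
    return T / (6.0 * sd_proc)

# Reproduces the r&R(%t) column of the table for an observed Cp of 1.5
for rr in (6.7, 10.0, 33.3, 50.0):
    print(rr, round(cp_free(1.5, rr), 2))     # 1.51, 1.52, 1.84, 3.08
```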

#77520 Carl H
Participant

Mark,
The guardbanding answer described earlier by Mike Carnell hit it on the head for me.
I tried to simulate Type I and II errors for different %P/T ratios assuming the process was centered. In Excel, set up the USL, process average, process SD and measurement SD. Set up normal random data based on the mean and SDs entered to determine the ACTUAL and MEASURED test results. Set up a formula (nested IF statement) that puts the proper case (I, II, in-in, out-out) in a table. This summarizes the risks based on Cp and %P/T.
Let me know if you want this posted as attachment.
Carl
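For readers without the spreadsheet, the experiment Carl describes can be sketched in Python (my own version, not his file; the limits and sigmas below are arbitrary examples):

```python
import random

# Draw actual values from the process, add normal measurement error,
# and tally the four accept/reject cases, as in Carl's Excel setup.

def simulate(n, usl, lsl, mean, sd_process, sd_meas, seed=1):
    random.seed(seed)
    counts = {"good_accepted": 0, "good_rejected": 0,
              "bad_accepted": 0, "bad_rejected": 0}
    for _ in range(n):
        actual = random.gauss(mean, sd_process)          # true part value
        measured = actual + random.gauss(0.0, sd_meas)   # what the gage reports
        good = lsl <= actual <= usl
        accepted = lsl <= measured <= usl
        key = ("good" if good else "bad") + ("_accepted" if accepted else "_rejected")
        counts[key] += 1
    return {k: v / n for k, v in counts.items()}

# Centered process with Cp = 1 and a %P/T of roughly 30%
print(simulate(100_000, usl=3.0, lsl=-3.0, mean=0.0,
               sd_process=1.0, sd_meas=0.17))
```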

#77523 Gabriel
Participant

Mike, a question:
Do you actually use reduced sorting limits if your gage has an r&R of 10% of the tolerance? We do not.
We have already discussed this (loss function, remember?), but my view is that the design takes into account that processes are not perfect, including the measurement process. The probability of accepting a part that is out of tolerance by 10% of the tolerance width (assuming r&R of 10% of tolerance and no bias) is 0.5%. The probability of producing an out-of-tolerance part in that zone in a barely capable process (let’s say Cp=1) is less than 0.15%. The combined probability of producing and accepting such a part is 0.075% (750 PPM). Yet, any sound design with a minimum of robustness or safety margin (as any sound design has) will tolerate such a part.

#77524 Gabriel
Participant

Carl:
I am not, Mark, but I would like to see that simulation. Would you attach it?
Thanks
Gabriel

#77529

Gabriel,
You are correct! I forgot that variances are additive. My reasoning was: assume the standard deviation of our process is 1 (St=1), and we know the minimum acceptable Cp should be 1.33. That means a tolerance width of 8 (8/6 = 1.33). (This allows for a 1-sigma shift of the process mean, because no one can hold the process mean perfectly constant.) With Cp=2, the process is tightened so that USL-LSL = 12 (12/6 = 2). Please see the attachment for more details.
So if we have Cp = 1.5 (including 10% measurement error), that means our tolerance = 9 (9/6 = 1.5). Then I did the calculation without measurement error as 9/(6-10%) = 9/5.4 = 1.66. This is wrong!!!
Thank you for your distinct explanation.
Wara S.
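The slip in that first attempt (9/5.4 = 1.66) was subtracting sigmas linearly; since variances add, sigmas combine in quadrature. A tiny illustration (the numbers are just for the example):

```python
from math import sqrt

sd_total = 1.0             # observed sigma: process plus measurement
sd_meas = 0.10 * sd_total  # measurement sigma, 10% of the observed total
sd_proc = sqrt(sd_total**2 - sd_meas**2)  # subtract variances, not sigmas

print(round(sd_proc, 3))   # 0.995: the process sigma barely changes
```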

#77706 Carl H
Participant

Gabriel,
Sorry, I know you are not Mark. Here is the AB risk spreadsheet. Tell me if you think it’s correct. I wish I had a direct formula for these instead of simulation.
Trying to attach the 1.7 MB sheet…
Carl AB Risk Download [Microsoft Excel, 1.8 MB] AB Risk Download [Zipped File, 0.6 MB]

#77717 Carl H
Participant

Gabriel,
Please note that I set recalc in Excel to MANUAL (F9 key). This was done since calculations were taking a while when parameters were edited.
Sorry,
Carl

#77734 Gabriel
Participant

Carl,
Nice simulation.
There is a formula for this, but it is a bit complicated. (Those not interested in a long math development, leave this message now!)
In general: X is the actual value and Y is the measurement result. Both X and Y are variables with probability density functions: f(X) for the process distribution and g(Y|Xo) for the measurement results of a part measuring Xo. Note that Y is highly correlated with X (that is the idea of measuring). You can define “the probability of accepting a part with an actual value Xo” as P(accept Xo) = P(LSL<Y<USL | X=Xo) = int(LSL,USL)(g(Y|Xo)dY) (read “integral between LSL and USL of…”). Note that this will be a function of the Xo chosen. If you draw P(accept X) as a function of X you get the “gage performance curve”, which gives you the probability of accepting a part with a true value of X, for any X. If you use sorting limits other than the specification limits, change LSL and USL to the lower and upper sorting limits (sometimes, reduced sorting limits are used to minimize the “accept bad parts” type of error, but this increases the “reject good parts” type of error). Any of these limits can be +infinity or -infinity if you have a one-sided tolerance. By the way, the probability of rejecting a part with a true value of Xo would be P(reject Xo) = 1 - P(accept Xo).
Now, the probability of finding a part that actually measures exactly Xo is zero, but the probability of finding a part measuring between Xo and Xo+dX is P(Xo<X<Xo+dX) = f(Xo)*dX (for an infinitely small dX). Assuming that P(accept X) remains constant and equal to P(accept Xo) in this dX range, the probability of finding a part in this range AND accepting it is P(Xo<X<Xo+dX)*P(accept Xo) = f(Xo)dX*P(accept Xo) (note that finding a part with a true value between Xo and Xo+dX is independent of accepting a part with a true value Xo, so the multiplication rule is valid). If we agree that a good part is a part where LSL<X<USL, regardless of the measurement result, and otherwise it is a bad part, then:
1) The probability of accepting a part randomly taken from the process that happens to be a good part (case I-I) is int(LSL,USL)(f(X)*P(accept X).dX).
2) The probability of accepting a part randomly taken from the process that happens to be a bad part (case II-I) is int(-inf,LSL)(f(X)*P(accept X).dX) + int(USL,+inf)(f(X)*P(accept X).dX).
3) For the probability of rejecting a part randomly taken from the process that happens to be a good part (case I-II), replace P(accept X) by (1-P(accept X)) in 1).
4) For the probability of rejecting a part randomly taken from the process that happens to be a bad part (case II-II), replace P(accept X) by (1-P(accept X)) in 2).
The sum of 1), 2), 3) and 4) must be 1, because all possible situations are covered. Note that, when we say “that happens to be good/bad” we don’t mean “given that it is good/bad”. For example, even if you had a very high probability of accepting a part given that it is bad (an awful gage), the probability of “accepting a part randomly taken from the process that happens to be bad” would still be very low if the probability of the part being bad is very low (a highly capable process). If you want the probabilities of accepting/rejecting given that the part is good/bad, just divide the previous formulas by the probability of the part being good/bad: int(LSL,USL)(f(X)dX) and 1-int(LSL,USL)(f(X)dX) respectively.
Y = X + error, and error = B + U, where B is the bias and U is the random measurement error. Then Y = X + B + U. In the P(accept X) formula, where X is given, the only term affected by random variation is U. But note that, in general, the bias is a function of the value being measured (linearity error) and the measurement variation is also different at each point (usually higher when the value being measured is higher). So, in general, the bias has a value for each X, B = b(X), and U will have a probability density distribution for each given X, h(U|X). (By the way, U always has an average of zero, because the average of the error is the bias and U accounts only for the random part.)
However, in many applications it is OK to say that B is constant and that U is independent of X, at least in the range of the study. For example, a caliper may have different bias and different spread along the 0-150mm range, but if your process and specification are within 49 and 51, who cares what happens outside this range? Probably the difference in bias and measurement variation will be undetectable between 49 and 51. In this case b(X) = Bo for any X and h(U|X) = h(U).
In this case, we say that a part measuring X will be accepted when LSL<Y<USL, then LSL<X+B+U<USL, then LSL-X-B<U<USL-X-B. Then P(accept X) = P(LSL-X-B<U<USL-X-B | X=X) = int(LSL-X-B,USL-X-B)(h(U)dU). Furthermore, if we assume that the measurement result Y for any X is normally distributed around X+B with standard deviation SDy, then U is normally distributed around 0 with standard deviation SDu = SDy (obtainable, for example, from an r&R study). If we call H(U) the cumulative probability function of h(U), then P(accept X) = H(USL-X-B) - H(LSL-X-B), and if we call N(Z) the standardized normal cumulative probability distribution and make Z = (U-0)/SDu = U/SDy, then P(accept X) = N((USL-X-B)/SDy) - N((LSL-X-B)/SDy).
This simplifies a lot the calculation of the probability of accepting a part of a given true value. But you still have to put it into the other integral to find the probabilities of the I-I, I-II, etc. cases. You can further assume that the process is normally distributed with average Xbar and standard deviation SDx, but I didn’t get that far. Maybe, if you expand the cumulative normal function into its formula (e^…) and the normal distribution of X too, the integral may be easy to solve. I invite you to try it and see if you get to a more usable formula with this assumption.
I would like to know, at least, if you got to read all this mess!
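Gabriel’s integrals can also be evaluated numerically without a closed form. Below is my own sketch (not his spreadsheet), assuming a normal process f(X) and normal measurement error with constant bias B and sigma SDy, using P(accept X) = N((USL-X-B)/SDy) - N((LSL-X-B)/SDy) and a simple trapezoid rule for the outer integral:

```python
from math import erf, sqrt, exp, pi

def ncdf(z):
    """Standard normal cumulative distribution N(Z)."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def npdf(x, mu, sd):
    """Normal density: the process distribution f(X)."""
    return exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * sqrt(2.0 * pi))

def p_accept(x, lsl, usl, bias, sd_y):
    """Gage performance curve: P(accept a part with true value x)."""
    return ncdf((usl - x - bias) / sd_y) - ncdf((lsl - x - bias) / sd_y)

def four_cases(lsl, usl, xbar, sd_x, bias, sd_y, steps=4000):
    """Integrate f(X)*P(accept X) over good/bad regions (trapezoid rule)."""
    lo, hi = xbar - 8 * sd_x, xbar + 8 * sd_x
    h = (hi - lo) / steps
    cases = {"I-I": 0.0, "II-I": 0.0, "I-II": 0.0, "II-II": 0.0}
    for i in range(steps + 1):
        x = lo + i * h
        w = h * (0.5 if i in (0, steps) else 1.0)
        f, pa = npdf(x, xbar, sd_x), p_accept(x, lsl, usl, bias, sd_y)
        good = lsl <= x <= usl
        cases["I-I" if good else "II-I"] += w * f * pa       # accepted
        cases["I-II" if good else "II-II"] += w * f * (1.0 - pa)  # rejected
    return cases

# Centered process with Cp = 1, no bias, measurement sigma 0.17
print(four_cases(lsl=-3, usl=3, xbar=0, sd_x=1, bias=0, sd_y=0.17))
```

The four probabilities sum to 1, as Gabriel notes, which makes a handy sanity check on the numerics.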

#77739

Nice job. You are, indeed, the man.

#77742 Carl H
Participant

Gabriel,
I got through your note (almost a whole cup of coffee’s worth!).
I agree that the probabilities of the four cases/conditions are a function of the PDFs of the process and the gage, and that the spec limit(s) set the “from-to” on the integrals. What got me stumped was trying to solve what I thought were double integrals of the two PDFs. The normal/Gaussian error function was too nasty for me to solve.
Graphically, I got an idea of what was going on by drawing the gage PDF on top of the process one. Again, since my math ability is limited, I set off to just simulate what was going on.
I was hoping to use this simulation (or any equations that can be developed) to do a few things: 1) help illustrate to our GB and BB trainees that poor measurement systems do lead to quantifiable producer/consumer risk, 2) use the simulation to determine where to practically focus our efforts on improving a measurement OR process (i.e. some good MSA “metrics” can still have high risks, and the converse is true), and 3) determine where our measurement sampling/averaging might even be reduced without significant risk.
Best Regards,
Carl


The forum ‘General’ is closed to new topics and replies.