Gage RR and what it tells us?
 This topic has 12 replies, 6 voices, and was last updated 19 years, 2 months ago by vicki kolleck.


September 17, 2003 at 5:25 pm #33328
jediblackbelt

I understand that the Gage R&R basically tells me that X% of my overall variation is in the R&R, correct? What I am struggling with is: what does that really tell me in regards to my Type I and Type II errors?
If my Gage R&R is >30%, it tells me I need major work on the gage, and if it's <10% the gage is alright. But how does that equate back to "I have an X% chance of either passing bad or rejecting good"? My curiosity comes from having some OEM automotive people say that because my gage has an error of 20%, I cannot tell them what is good and bad. My thought is that this will only matter when I am at my spec boundaries. So how do I relate the Gage R&R back to what I can actually say is good or bad?
Struggling on the concept.

September 17, 2003 at 6:20 pm #89966
Mike Carnell

JediBB,
The first place you want to look is at the number of distinct categories. The Minitab help menu describes what that means (I won't retype it – you can look it up).
If you use the percent tolerance (P/T) calculation, take that percentage of the tolerance: that distance, at each end of the tolerance, is a zone where you have no ability to trust the test/inspection numbers. If you have a 20% P/T ratio, then the only area where you can trust your readings is the middle 60% of the tolerance. You can lay that on top of a capability study and figure out what percent of your product is a confident good call (basically a Z calculation).
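A minimal Python sketch of that trust zone and Z-style calculation; the spec limits, P/T ratio, and process distribution below are assumed for illustration, not taken from the thread:

```python
# Hypothetical numbers: trust zone implied by a 20% P/T ratio,
# overlaid on an assumed process distribution (the "Z calculation").
from statistics import NormalDist

LSL, USL = 10.0, 20.0       # assumed spec limits
pt_ratio = 0.20             # 20% P/T from the gage study
tol = USL - LSL

# Zone at each end of the tolerance where readings can't be trusted
band = pt_ratio * tol
trust_low, trust_high = LSL + band, USL - band   # the "middle 60%"

# Assumed process distribution from a capability study
process = NormalDist(mu=15.0, sigma=1.5)
confident_good = process.cdf(trust_high) - process.cdf(trust_low)
print(f"Trust zone: [{trust_low}, {trust_high}]")
print(f"Fraction of output that is a confident good call: {confident_good:.3f}")  # ~0.954
```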
From your side there is an equal distance (as previously mentioned) that you believe is bad based on the test/inspection data but that may actually be good because of measurement error.
Basically 20% is a bad number if you are serious about Six Sigma. Eventually the measurement error goes to number 1 on the Pareto.

September 17, 2003 at 6:40 pm #89967

I think your approach is correct. I once found bias in a gage and changed the product inspection limits to (USL − Bias, LSL + Bias). Before I implemented the change on the floor, I checked 12 months of inspection results and found none near enough to the limits to warrant the change, but I did the change anyway because it was easier than some convoluted argument for not doing it, and it did not demand more resources. Although this practice makes sense, I'm unaware of any MSA or calibration practices that recommend or require it. I did it only once, because it was a QS-9000 requirement. I can live with the risk of not doing it universally because there are so many other issues that have more effect on quality. Do a literature search to get expert advice. One way to determine Type I or II tendencies is to measure your GR&R parts with a more sensitive gage to get a baseline. An example is a caliper GR&R: measure the parts first on an optical comparator and serialize them, then compare the distributions of caliper measurements and comparator measurements to draw conclusions.
September 17, 2003 at 7:34 pm #89972
Mike Carnell

DaveG,
I have never done this with bias (calibration), but it makes sense. This is what we used to call guardbanding: anything in the guardband would see multiple tests to reduce the error.
Good luck.

September 17, 2003 at 9:08 pm #89981

This issue begs the question: why inspect product at all? In a perfect universe, product inspection would be done solely to link process controls to product characteristics, and product characteristics to CTQs / VOC.
The best way to approximate that universe and maximize quality (insert your definition here) is to use FMEA and associated/analogous tools. IMHO, eliminating inspection should help to create a culture that plans for quality and does not require inspection as a crutch or workaround.

September 18, 2003 at 12:47 am #89989
Mike Carnell

DaveG,
That is where we are headed. When you understand the leverage x’s that control the Y then you don’t need inspection/test to sort your process output.
The role we have BBs in is actually rework – fixing things in production that are already wrong. DFSS is the proactive approach to getting rid of the problems at the beginning.
Good luck.

September 18, 2003 at 12:04 pm #90012
faceman

I have seen this:
NewUSL = USL − 3*measurement standard deviation
NewLSL = LSL + 3*measurement standard deviation
Using a multiplier of 3 is probably conservative; you could use any of the normal sigma multipliers – 1.96, maybe.
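A short sketch of those buffered limits in Python; the measurement standard deviation and spec limits are assumed example values, not figures from the thread:

```python
# Buffered ("guardbanded") tolerance limits, per the formulas above.
# meas_sd, LSL, and USL are assumed for illustration.
meas_sd = 0.05              # measurement standard deviation from the GR&R
LSL, USL = 9.0, 11.0        # assumed spec limits
k = 3                       # multiplier; 1.96 would be less conservative

new_usl = USL - k * meas_sd   # buffered upper limit (10.85)
new_lsl = LSL + k * meas_sd   # buffered lower limit (9.15)

def accept(reading: float) -> bool:
    """Accept only readings inside the buffered limits."""
    return new_lsl <= reading <= new_usl
```

With these numbers, `accept(10.9)` returns False even though 10.9 is inside the original spec, because the reading falls in the buffer zone where measurement error could hide a bad part.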
I have heard this called buffering tolerance limits. It looks to me kind of like you are assuming a 'bad case' measurement error when you are around your tolerance limits, i.e. you assume that you are measuring 'smaller' than 'truth' at the upper end of your tolerance range, and vice versa at the lower end.

September 18, 2003 at 12:41 pm #90015
jediblackbelt

I appreciate everyone's responses; they have helped me understand a lot of different methods to adjust the spec limits to satisfy customer curiosity about how we protect them. However, I am still confused about what the actual Gage R&R % tells us. Is it as simple as X% of our variation is from the gage? Or is there something else you can relate it to?
Thanks especially to Mike for helping me understand the P/T ratio better.
Thanks,
September 18, 2003 at 12:56 pm #90018
faceman

Imagine a number line with tick marks identifying your upper and lower spec limits. Now imagine a normal-looking curve representing a 99% confidence interval based on your measurement error (5.15*measurement standard deviation) over that number line. If the width of that 99% confidence interval is 1/10 of the width of your tolerance range, then your %RR is 10%; if it 'covers' 3/10 of your tolerance range, then your %RR is 30%; and so on. The %RR is the ratio of a 99% confidence interval based on your measurement-system error to the width of your tolerance range.
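That ratio is a one-liner in code. A sketch in Python, with an assumed measurement standard deviation and spec limits:

```python
# %RR vs. tolerance, as described above; meas_sd and spec limits are assumed.
meas_sd = 0.01              # measurement standard deviation
LSL, USL = 9.75, 10.25      # assumed spec limits

rr_width = 5.15 * meas_sd              # ~99% spread of measurement error
pct_rr = 100 * rr_width / (USL - LSL)  # ratio to the tolerance width
print(f"%RR (tolerance) = {pct_rr:.1f}%")   # ~10.3% with these numbers
```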
September 18, 2003 at 1:32 pm #90028

Just remember it is not just the gauge, it is the system (i.e. personnel, fixturing, training, …). Bad R&R's are rarely an issue with the actual gauge.
September 18, 2003 at 4:39 pm #90041
jediblackbelt

Very good analogy. So if that is the case, then when I have a 10% Gage R&R, there is a chance of making a measurement-system error of ±10% around my gage reading. So if I make a reading from the gage, it could be off by the percent of my Gage R&R.
On that same note, if I use the P/T reading, that should be the range of parts that are possibly good/bad around that same area.

Correct thought???

September 19, 2003 at 1:08 pm #90089
faceman

JediBB (cool screen name, by the way),
Is P/T = 6*measurement standard deviation/(USL − LSL)?
If so, think of it this way:
%RR (Tolerance) = 5.15*measurement standard deviation/(USL − LSL)
You just change the 6 to 5.15.
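The two ratios can be put side by side in a few lines of Python, along with the total-variation flavor of %RR that is also in use; the standard deviations and spec limits here are assumed for illustration:

```python
# P/T vs. %RR(tolerance) vs. %RR(total variation); all inputs assumed.
from math import sqrt

meas_sd = 0.03              # measurement standard deviation
part_sd = 0.12              # part-to-part standard deviation from the study
LSL, USL = 1.0, 2.0         # assumed spec limits

pt     = 6.0  * meas_sd / (USL - LSL)    # P/T ratio
rr_tol = 5.15 * meas_sd / (USL - LSL)    # %RR vs. tolerance
total_sd = sqrt(meas_sd**2 + part_sd**2) # pooled total standard deviation
rr_tot = meas_sd / total_sd              # %RR vs. total variation
print(f"P/T = {pt:.1%}, %RR(tol) = {rr_tol:.1%}, %RR(total) = {rr_tot:.1%}")
```

Note the 5.15 factors cancel in the total-variation version, so it reduces to the ratio of the measurement standard deviation to the total standard deviation.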
There is another %RR in use that compares 5.15*MeasStdDev to 5.15*TotalStdDev, where TotalStdDev is the pooled standard deviation of the measurement error and of the parts used in the study.

September 24, 2003 at 5:56 pm #90239
vicki kolleck

Is the measurement uncertainty of the gauge calculated into the GRR formula, or is the uncertainty in addition to the variation of the measurement system?
Please elaborate on the subject.
The forum ‘General’ is closed to new topics and replies.