I have a process which manufactures several products, all of which are to have a finished thickness between 1.97 mm and 2.02 mm. There has been some question over the measuring system at final inspection, so I selected 15 random production samples and had three inspectors measure them twice on one gage, to three decimal places.
I ran a Gage R&R through Minitab and got a total Gage R&R of 35.37% study var and a part-to-part value of 93.53% study var. The number of distinct categories was calculated as 3.
I believed the results were due to the three-decimal-place accuracy, so I rounded the data to 2 decimal places and ran it through Minitab again. The resulting analysis was a total Gage R&R of 41.66% study var and a part-to-part of 90.91% study var, with the number of distinct categories remaining at 3.
On closer examination of the results, the repeatability was 39.77% SV and the reproducibility 12.39% SV. Does this tell me that we are affecting the samples with our measuring system?
According to the AIAG, a Gage R&R above 30% is unacceptable; the number of distinct categories is also unacceptable if below 5.
Does the above study conclude that our measuring system is not capable? This will be a massive issue, so I must be on solid ground.
Any advice or agreement would be appreciated.
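As a quick sanity check on figures like these: the %StudyVar components and the number of distinct categories are tied together arithmetically, so they can be cross-checked without the raw data. A minimal sketch in Python, using the percentages quoted above:

```python
import math

# %StudyVar figures quoted in the study
pct_grr = 35.37    # total gage R&R as % of study variation
pct_part = 93.53   # part-to-part as % of study variation

# The components are standard deviations, so they combine in
# quadrature and should come to roughly 100%
combined = math.sqrt(pct_grr**2 + pct_part**2)
print(round(combined, 2))  # 99.99, i.e. ~100

# AIAG number of distinct categories: 1.41 * (part SD / gage SD),
# truncated to an integer
ndc = int(1.41 * pct_part / pct_grr)
print(ndc)  # 3, matching the Minitab output
```

The internal consistency suggests the study itself was run correctly; the question is what the numbers mean, which the replies below take up.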
Gage R&R compares the variation that comes out of your process with the variation that comes out of your measurement system. That means your samples for the gage R&R should be representative of your process: you should have the same (or a similar) standard deviation in your sample parts as in your process. If the variation in your sample lot is smaller than the variation in your process, your gage R&R is more likely to fail, so I would check this first.
Thank you for your advice.
My thought process is to take another 15 random samples from the process and complete another gage R&R. If the results are of the same order, would I be right in concluding the measuring system is not capable? If the results indicate it is capable, then I am confused: would I conclude that the measuring system is not capable only when a certain product mix comes through? I would infer that I would see less variation from one set of styles than from another.
Am I on the right track?
Gauge R&R can compare the amount of variation coming from the gauge to either:
The amount of variation which comes from the process
The width of the tolerance.
You can say that if the R&R is low when comparing the gauge to the process variation, the measurement system is suitable for process improvement (i.e. reduction of the variation generated by the process)
You can say that if the R&R is low when comparing the width of the tolerance, the measurement system is suitable for identifying good and bad products.
So, you might have poor R&R values when you try to get the gauge to look at the process variation, but a good one when you get the gauge to look at the gap between the tolerances.
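To make that contrast concrete, here is a minimal sketch. The standard deviations are hypothetical, not from this study; only the 1.97 to 2.02 mm spec limits come from the thread. It shows how one gage can fail the %StudyVar criterion yet look usable against the tolerance:

```python
import math

sd_gage = 0.002       # hypothetical measurement-system SD, mm
sd_part = 0.004       # hypothetical part-to-part SD, mm
usl, lsl = 2.02, 1.97  # spec limits from the thread, mm

sd_total = math.sqrt(sd_gage**2 + sd_part**2)

# Gage vs process variation (what %StudyVar reports)
pct_study_var = 100 * sd_gage / sd_total

# Gage vs tolerance width (what a %Tolerance column reports,
# using the common 6-sigma spread for the gage)
pct_tolerance = 100 * 6 * sd_gage / (usl - lsl)

print(round(pct_study_var, 1))  # 44.7 -> fails the 30% rule
print(round(pct_tolerance, 1))  # 24.0 -> marginal but usable for sorting
```

Same gage, same numbers, two different questions: "can it see the process variation?" versus "can it sort good from bad against a 0.05 mm tolerance?"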
As Marcus indicated it is important that the products you use for the Gage are representative of the full range of possible variation in your process (including products outside spec limits).
If this was not your case in the first sample, rather than pull a second random sample it would be better to select each item carefully to ensure the full range is represented. Remember that we are not trying to learn anything about the process here, we are simply trying to determine whether the measurement system is adequate.
In the event that your first batch was representative, there is little point in repeating the Gage on a second sample. Instead, try to determine where the inadequacy lies and what you might be able to do about it.
If you want some advice, it would be helpful to have the full output of the Minitab session window, as well as the charts if possible.
Thank you for your offer to give some advice; I will endeavour to upload the screenshots when I return to the office. I am new to this site, so I do not know if this is possible.
“You can say that if the R&R is low when comparing the width of the tolerance, the measurement system is suitable for identifying good and bad products”
I do not fully understand this statement, but I feel the system we have can identify when a style is out of tolerance. Then I stop myself and say that if a part is near the USL or LSL, it may be measured as out of spec when it is in, or in spec when it is out. I know that a part that just passes is no better than a part that just fails, and then there is the tolerance on the gage itself. I am trying to give a professional appraisal of our measurement system and I am struggling.
I would really appreciate it if you could explain your statements in more detail; I have a gut feeling you are highlighting an area for further investigation.
I look forward to your reply.
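On the "in spec measured as out, out measured as in" worry: if the gage error is roughly normal, the misclassification risk near a limit can be estimated directly. A sketch, where the gage SD is an assumed value rather than one measured in this study:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

usl = 2.02        # upper spec limit from the thread, mm
sd_gage = 0.004   # assumed measurement SD, mm (hypothetical)

# A conforming part whose true thickness sits one gage-SD inside
# the USL: chance it is nevertheless measured as out of spec
true_value = usl - sd_gage
p_false_reject = 1 - norm_cdf((usl - true_value) / sd_gage)
print(round(p_false_reject, 3))  # 0.159 -> roughly 16% false rejects
```

A part sitting exactly on the limit is a coin flip either way, which is the formal version of "a part that just passes is no better than a part that just fails."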
Don’t round to 2 decimal places if your equipment can measure to three. Keep in mind that if your tolerance is expressed as +/- .xx, you want your gage to be able to measure to 3 decimal places.
How did the Minitab plots look from your gage study? Did you see signs of “chunky” data on the range chart (a sign of poor discrimination)?
Some observations based upon the data you presented:
1. Did you calibrate the gage prior to your study? I would suggest that if you plan to re-run the study, have the device calibrated, regardless of whether it is due or not!
2. Make sure you choose several “borderline good” and “borderline bad” parts.
3. As stated earlier, do not reduce the decimal places! Always use the maximum that the device offers!
I mention these items strictly because of your 39.77% SV, which speaks to a problem with your device.
Since you indicate that this problem could be controversial, you might consider also running a Gage Linearity and Bias study via Minitab. This could help you, should you re-run the study and find similar results.
Regarding the posting of your session window contents – if the worst comes to the worst then you could cut and paste directly into a reply window.
Clarification of Earlier Statement:
If you get the R&R to compare the variation-which-is-coming-from-the-measurement-system to the width-between-the-tolerance-limits, the answer gives you an idea as to whether the measurement system is good enough to differentiate between good and bad product.
As I predicted, this is turning into a hot potato.
Adrian, unfortunately I cannot get the session window into the reply window. Thank you for the offer.
The gauge is calibrated/checked each morning, and I did the study approx 1 hr after this process.
I will endeavour to find borderline passes.
The gage reads to 4 places after the decimal, which is absurd for this application; that is why I now want to ignore all but 2 places. Even then I get poor results.
Can I run the gage as a go/no-go gage, i.e. the part passes if it is above the lower spec and under the upper spec, without worrying about the actual number?
Is it true to say that the less the process variation, the more accurate the measuring device needs to be, whereas if the process variation were greater the same device would be OK?
You really need to use variable data if possible! Therefore, I would caution against trying to change your study to attribute data. Obviously, you have a gage problem and it needs to be fixed. Give us some details on this gage… is it a stand-alone caliper, digimatic indicator, height gage, etc., or is it a device that is incorporated into a custom checking fixture?
Take a very hard look at the system, including the inspectors’ techniques. Something is causing this… don’t give up until you find the underlying problem! There is some reason that your numbers are poor!
Talk to the inspectors… get their $.02 on the device. Don’t accept calibration as being sufficient. If it’s being performed daily, oversee it for a couple of days. I have seen more than one instance where an in-house calibration was a joke (i.e., recorded but not performed!).
Going with an attribute system is going backwards!
Do all of your samples have the same measurement specifications?
If they don’t that could be the reason your Gage R&R is high.
Do another Gage R&R with gage blocks that are close to the same size as your product. If your R&R with the gage blocks is vastly different from your test samples, then look at the finish or cleanliness of your sample. When you get down to the 3rd decimal place in mm, you are working at a scale where a change in temperature or a piece of dust could throw off your measurement.
What type of instrument are you using to measure with?
What is the surface finish of the part?
Is the room climate controlled or a clean room environment?
Thanks for the support on this I really think people in the forum are helping.
I have been monitoring the calibration for the past few days; it is being done, and to a consistent standard. What I worry about is that the calibration falls prey to the same problems as when the gage is used to measure production.
I am new to this particular industry (printing), and this instrument is standard kit (a dead-weight drop gage, I think they are termed). I believe that it is not capable of the job it is being asked to do.
I need to be confident in the way I performed the study so that I can present figures which prove that the kit is not capable. I do believe it can distinguish to two decimal places, but the R&R indicates that even at this resolution it is not capable, which concerns me.
Do you have experience in the electronics industry? If so, what placement tolerance is required for electronic parts, or what is the tolerance on cylinder bores for cars? I am trying to put the expected level of measurement into perspective for guys who have never been outside the printing industry.
Again thanks for all your help…
I’ve been staring at your clarification statement for days and I think I have finally understood it.
Are you saying I should select the parts for the gauge R&R to give me a selection across the tolerance range (below the lower limit, above the upper, in and out of spec)?
But as the gauge is therefore playing a part in the selection, will this not affect the analysis?
That isn’t good if my posting was difficult to understand. Sorry. I’m constantly trying to speak and write as clearly as possible. And I clearly failed in that instance. My fault.
You have hit the nail on the head.
1.) If an R&R is to be executed which is intended to compare the amount-of-variation-coming-from-the-measurement-system to the amount-of-variation-coming-from-the-process, then the whole thing hangs on the sampling from the process. If huge variation between the samples exists (e.g. if different part numbers are included in the samples), then the R&R will probably look fantastic, because the measurement system variation will be tiny compared to the big differences between the samples. For one of these R&Rs to be properly understood, the sampling needs to be detailed, perhaps in a variation map. Lots of fairly pointless work lies down this route unless you absolutely need to know how good the gauge is at seeing the variation in the process.
2.) The cleaner way to do an R&R (in this case) is to compare the amount-of-variation-coming-from-the-measurement-system to the gap between the tolerances. In Minitab set up for a crossed R&R as per normal and then hit Options and go into the Process tolerance field and enter the distance between your upper spec and your lower. The R&R analysis will execute as usual but you will have one additional column which details how the gauge does in comparison to the tolerance.
My suggestion is to do (2.) rather than get into all the tricksy stuff that can go on with (1.). You will get an answer which most closely fits the question that you were asked by your customer.
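For anyone without Minitab to hand, the ANOVA arithmetic behind a balanced crossed Gage R&R can be sketched in plain Python. This is a simplified version (two or more repeats required, no confidence intervals, negative variance estimates clipped to zero), not a replacement for the Minitab output:

```python
import math
from itertools import product

def gage_rr(data, n_reps):
    """Balanced crossed Gage R&R via the ANOVA method.
    data[part][operator] is a list of n_reps repeat readings."""
    parts = sorted(data)
    opers = sorted(data[parts[0]])
    p, o, r = len(parts), len(opers), n_reps

    all_vals = [x for i in parts for j in opers for x in data[i][j]]
    grand = sum(all_vals) / len(all_vals)
    part_mean = {i: sum(x for j in opers for x in data[i][j]) / (o * r) for i in parts}
    oper_mean = {j: sum(x for i in parts for x in data[i][j]) / (p * r) for j in opers}
    cell_mean = {(i, j): sum(data[i][j]) / r for i, j in product(parts, opers)}

    # Sums of squares: parts, operators, interaction, repeatability
    ss_part = o * r * sum((part_mean[i] - grand) ** 2 for i in parts)
    ss_oper = p * r * sum((oper_mean[j] - grand) ** 2 for j in opers)
    ss_po = r * sum((cell_mean[i, j] - part_mean[i] - oper_mean[j] + grand) ** 2
                    for i, j in product(parts, opers))
    ss_rep = sum((x - grand) ** 2 for x in all_vals) - ss_part - ss_oper - ss_po

    # Mean squares
    ms_part = ss_part / (p - 1)
    ms_oper = ss_oper / (o - 1)
    ms_po = ss_po / ((p - 1) * (o - 1))
    ms_rep = ss_rep / (p * o * (r - 1))

    # Variance components; negative estimates are clipped to zero
    v_rep = max(ms_rep, 0.0)
    v_po = max((ms_po - ms_rep) / r, 0.0)
    v_oper = max((ms_oper - ms_po) / (p * r), 0.0)
    v_part = max((ms_part - ms_po) / (o * r), 0.0)

    v_grr = v_rep + v_oper + v_po  # total gage R&R variance
    return 100 * math.sqrt(v_grr / (v_grr + v_part))  # %StudyVar
```

Feeding it the 15-part, 3-operator, 2-repeat layout from this study would reproduce the total Gage R&R %StudyVar; swapping the denominator for the tolerance width gives the tolerance-based view discussed above.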
I think your samples may not be uniform. My company does some flexo printing so I think I have some idea of what your samples look like.
First of all, AIAG is in a way wrong. Below 30% is acceptable if you are using the measurements ONLY to calculate capability, but it is NOT acceptable if you use the measurement system to evaluate individual parts. Do you want to get a measurement value that says your product is OK when the measurement error is really 30% of the tolerance? Your customer says no, no and hell no. But AIAG says yes, or does it?
The first thing to check is: WHAT shall we use the measurement results for? THAT determines what the acceptance criteria are. Sadly, many only know rules of thumb. So much for scientific management…
Secondly, if your measurement variation isn’t normally distributed, it may not be appropriate to use the standard deviation as a measure! This is the first thing to check: is the measurement variation normally distributed? If not, change the calculations or risk failure!
And ALWAYS try to examine your measurement results against your tolerance limits. Otherwise you run two risks: that your measurement system is more expensive than appropriate, or that you get lousy measurement results.
Even if you have a bad %Study Var, it doesn’t mean anything if you have a very capable process from the start. Or do you want to waste money developing a measurement system far better than you really need? Your manager and customer say NO!
The number of distinct categories can show a good result because you have a lousy process (and, according to all the rules of thumb, the spread in your Gage R&R samples reflects 80% of your process variation), but your measurement system stinks anyhow.
I have investigated measurement systems with a %Study Var of 50% that are state-of-the-art measurement systems (%Tolerance was below 10%, and the tolerance limits were 0.01 mm!). My suggestion is: read good books and stop listening to airhead rules of thumb.
So get back and investigate what the tolerance limits are. If you don’t get an answer, it will be evidence that either your R&D function doesn’t know what it is doing or that the measure isn’t important.
And finally: you are asking for help and are critical of your results. Great! It is evidence that you are smart! Keep up the good work. Many people don’t even question the results, but you do!
Hmmm. Think I will go out for another whiskey. I feel bright tonight.
You guys are putting the math and the methodologies above the fact that the sample may not be suited for the gage. If the sample is not homogeneous, you will not be able to achieve a repeatable measurement.
I don’t understand what you are saying. Do you want a homogeneous sample in the R&R analysis? Or do you mean that we need a homogeneous sample to be able to say something about our process? If we don’t have a good measurement system, we won’t be able to detect even a stable sample, or…
Please correct me; I think I have misunderstood what you really mean.
Homogeneous means “all the same” and that is the last thing you want your Gage sample to be.
The Gage sample must be representative of the entire range of output of your process, from one extreme to the other and the bits in the middle.
A homogeneous sample during a Gage R&R is a one way ticket to a high %contribution and perhaps concluding a gage is not acceptable when, in fact, it is.
In a Gage R&R the “repeatable” piece is on the SAME piece (one operator, one piece, multiple measures). It has nothing to do with all the samples being the same or even similar.
More to the point, length is what is being measured, and length is the easiest to measure of all the properties of an object. I use the word homogeneous in regard to the height of the sample: if the sample varies in height, the gauge could give a valid measurement but a Gage R&R would appear confounded. It seems to me most of the posts are suggesting that the R&R was performed incorrectly and needs to be repeated with a different method, or that something happened to the measurements to taint their value. That could very well be true, but what is the root cause? I think the gage used is a common drop gauge, or rather a digital indicator with a base and stand (with a resolution of 5 decimal places in inches and an accuracy of +/- .0001”). So I feel that unless the environment or the stand is set up wrong, the sample is not adequate.
If the process produces ‘non-homogeneous’ product (i.e. by your definition, product for which the same measure in different places will give a different result), the items used for the Gage should represent this.
It will be part of the development of the operational definition for this measure to recognise this and allow for it, either by specifying exactly the location where the measure should be taken or specifying that several measures should be taken in specific locations.
If the items used in the Gage are not representative of the process, a good gage % is no guarantee that the gage is an adequate measurement system for the real process.