iSixSigma

% Study Variation vs. % Tolerance


Viewing 13 posts - 1 through 13 (of 13 total)
  • #29544

    Kulungian
    Participant

    I have a gage that I consider to be in bad shape.
    % Study R&R is 51.00%, Repeatability is 27.47%, and Reproducibility is 42.96%, with 2 distinct categories…
    Our Quality group feels that the gage is good!  They use % tolerance instead of % study.  The results for % tolerance are as follows:
    % Tolerance R&R is 16.62%, Repeatability is 8.95%, and Reproducibility is 14.00%, with 2 distinct categories…
    What are the acceptance criteria for % tolerance?  Is this a good gage, and if not, how do I prove it?
    Thanks,
    JK
     

    #75967

    vin
    Member

    John,
    My opinion – it depends on what you intend to do with the gage.  Typically, a %Study Variation or P/T ratio of 10% or less is considered good; anything above that is a grey area up until 30% or so.  Above 30% is usually deemed “unacceptable.”
    If the gage is used to inspect parts and segregate good pieces from bad, then the P/T ratio (% Tolerance) is the metric you should be concerned with first.
    If the gage is to be used to measure parts as part of a project to reduce process/product variation, then %Study Variation is what matters. Also, Distinct Categories should probably be 6 or higher. (Some people say 4 or higher.)
    Of course, that’s just my opinion.  In a perfect world, all our measurement systems would have P/T ratios and % study variation below 10%.
     

    #75970

    John Evans
    Participant

    For its intended use, which I assume is the product whose tolerance you used, this gage is good.  If, however, you intend to use this gage with another product tolerance, you must check the GRR against that new tolerance to see whether the gage is still good.  For my experimental-error statements I typically write an equation that multiplies the standard deviation found in the GRR by 5.15 and by 100, and tell the user to divide by the tolerance to determine acceptability.  Then any user can test whether the gage is good enough to discriminate bad from good.
    I agree with the previous responder that you should use % study when you are trying to reduce the variation in a test method and also when a tolerance is not available for a product/process.
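    The "multiply by 5.15, divide by the tolerance" recipe above can be sketched as follows. This is a minimal illustration, not anyone's production code; the sigma and tolerance values in the example call are hypothetical, chosen only so the result lands near JohnK's reported 16.62%.

```python
# Sketch of the P/T (% Tolerance) calculation described above.
# The 5.15 multiplier (99% spread of measurement error) follows the
# older AIAG convention; many tools use 6.0 instead.

def percent_tolerance(grr_std_dev: float, tolerance_width: float,
                      k: float = 5.15) -> float:
    """P/T ratio as a percentage: k * sigma_GRR / tolerance * 100."""
    return k * grr_std_dev / tolerance_width * 100.0

# Hypothetical example: sigma_GRR = 0.01, tolerance width = 0.31
print(round(percent_tolerance(0.01, 0.31), 1))  # ~16.6, like JohnK's study
```

    Any user who knows the GRR standard deviation and the tolerance of the product at hand can run this check, which is exactly the portability John describes.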

    #75971

    Mike Carnell
    Participant

    John,
    I would be more concerned with the two distinct categories than with the % anything. With 2 distinct categories the gage can only distinguish two levels.
    As far as the % Tolerance goes, any measurement that falls within 14% of the tolerance of a spec limit risks an incorrect decision. That is, in a band of +/-14% of the tolerance inside each spec limit, a part that measures there may actually be somewhere else, which means accepting a bad part or rejecting a good part. Protecting against this is called guard-banding. If QA wants to use the gage, let them use it; they only have 72% of the tolerance left to work with.
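    Mike's guard-banding arithmetic can be sketched in a few lines. The spec limits below are made-up numbers; the 14% band is his figure, and 100 - 2*14 = 72 is where the "72% of the tolerance" comes from.

```python
# Minimal guard-banding sketch: pull the acceptance limits in from each
# spec limit by the measurement-uncertainty band (here 14% of tolerance).

def guard_band(lsl: float, usl: float, band_pct: float):
    """Return the tightened (guard-banded) acceptance limits."""
    band = (usl - lsl) * band_pct / 100.0
    return lsl + band, usl - band

lo, hi = guard_band(0.0, 100.0, 14.0)   # hypothetical spec 0..100
print(lo, hi)                            # 14.0 86.0
print((hi - lo) / (100.0 - 0.0))         # 0.72 -> 72% of tolerance left
```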
    Good luck.

    #75972

    Kulungian
    Participant

    Mike,
    I agree that the distinct categories are an issue.  I am under the impression that even with a 16% R&R for % Tolerance the gage has issues, because we still get only 2 distinct categories.  I was surprised to hear John Evans say the gage is good, factoring in the distinct-categories info.  My reading indicates that there is not one key number you can pin the success or failure of a gage on.  Is there any text that goes into some detail about % tolerance methods?
     
    Thanks,
    JK

    #75975

    vin
    Member

    Please keep in mind that distinct categories depend on the total variation, i.e. if you have a process sigma of 0.1″ and a tolerance of +/- 1.5″, you could end up with a very large % Study Var and a very low P/T ratio.  In this case you might not necessarily have to improve the gage, as long as the P/T ratio is low, since your process is very capable and there is no immediate need to improve it.  Spend the resources elsewhere…
    As I said before, in my humble opinion, distinct categories and %Study Var are important for process improvement and SPC, but they are not as important as the P/T ratio if you are using the gage for inspection.
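    Vin's hypothetical can be put into numbers. The process sigma (0.1″) and tolerance (+/-1.5″) are his; the measurement sigma of 0.05″ is an assumed value I picked to make the contrast visible: an ugly %Study Var and few distinct categories next to a comfortably small P/T ratio.

```python
import math

# Vin's hypothetical: very capable process, so %Study looks bad while
# P/T looks fine. sigma_meas = 0.05 is an assumption for illustration.
sigma_part, sigma_meas = 0.1, 0.05
tol_width = 3.0                                    # +/- 1.5 inches
sigma_total = math.hypot(sigma_part, sigma_meas)   # sqrt(sum of squares)

pct_study = 100 * sigma_meas / sigma_total         # ~44.7%  ("bad")
p_to_t    = 100 * 5.15 * sigma_meas / tol_width    # ~8.6%   ("good")
ndc       = int(1.41 * sigma_part / sigma_meas)    # AIAG ndc: 2 categories

print(round(pct_study, 1), round(p_to_t, 1), ndc)
```

    Same gage, two verdicts: it cannot resolve the (tiny) process spread, but it resolves the tolerance easily.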

    #75978

    Mike Carnell
    Participant

    JohnK,
    I haven’t seen much that goes beyond the stuff in the AIAG manual. Everyone does the basic formula crunch.
    Kim did a nice article that is featured on this site this week about R&R studies.
    We need to get Gabriel in this. He seems to have a pretty strong background in the R&R stuff.

    #75980

    Gabriel
    Participant

    Thanks, Mike:
    I agree with Vin. But be aware that my opinion (and Vin’s) is not shared by many people.
    Measurement (variation) is a part of the process (variation). But let’s exclude it for a while, so in this specific post when I say “process”, measurement is not included.
    If the measurement system (not the gage alone) has a good r&R as % of tolerance, then it can distinguish not only good from bad parts, but also several categories within the tolerance. So if you want to see whether the product is OK and how close you are to the tolerance limit or to the spec average, the measurement system is OK.
    Now, if you want to use the measurements to detect special causes of variation (as in SPC), for a process capability study, to see whether an improvement action was effective at reducing process variation, or for anything else related to process variation, then your measurement system must be able to distinguish several categories within the process spread, and that means a good r&R as a % of the total variation.
    This is what “the book” says.
    But imagine the following situation (not so rare, by the way). You have a process followed with SPC which has great capability (let’s say Cp=Cpk=4). The “variation” used to compute Cpk is the total variation, which includes the measurement system variation. That means your “process alone” variation is even better (i.e. the parts are closer to one another than they seem, and you have less likelihood of a nonconforming part than what you calculate from the Cp/Cpk). In fact, for a given total variation, the worse the measurement system variation is, the better your “process alone” variation is. For example, if you find a Cp of 1.5 with an r&R of 10%, the Cp without the measurement influence (process alone) would be about 1.53. Now, if you found the same Cp of 1.5 but with an r&R of 50%, the Cp without the measurement influence would have been more than 3! I don’t mean that it is better to have a bad gage, because for a given process the Cp will be worse when the gage is worse. But if you do a process capability study before performing the r&R (which is not supposed to happen) and then find a bad r&R as % of spread, you have at least one reason to be happy: the process is better than you thought!
    I have a case like this, where the r&R (% spread) is awful, the r&R (% tol) is acceptable, the Cp is about 4, and the process is followed with SPC. If I could do the measurements with a better system, the Cp would be much better. Are we missing special causes of variation that are masked by the measurement system’s normal variation? Sure. So the process can be out of control and we may not detect it? Yes. And if this happens, are we missing opportunities to look for root causes and take improvement action? Exactly. The point is, this process has a Cp of 4 even when we are measuring with a “poor” system. Do we want to spend time and money improving this measurement system so we can improve the process? Not now, when we have other processes with a Cpk that hardly reaches 1.33. It is called priorities. The measurement system we are using is good enough to detect good and bad parts, to tell us how close to a limit a part is, and to detect special causes of variation big enough to jeopardize product conformance or customer satisfaction. Would we like a better measurement system, with good discrimination of categories inside the process variation? Yes, of course. It would be great! But for now we have other places to invest with a better ROI.
    My example is a weight check. The tolerance is +/- 0.2 grams. The “process alone” variation is about +/- 0.02 grams. The scale accuracy is also about 0.02 grams. A better scale that can detect milligrams would cost several thousand dollars, and the measurement method would require a stabilized temperature, a perfect leveling of the scale, a vibration-free environment, a cover to avoid any air current, more time to let the reading stabilize, special operator training, etc., etc., etc. Nothing very comfortable to implement in a manufacturing environment. Is it worth the effort? Yes, if you have nothing more important to improve or invest in.
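    Gabriel's 1.5 → "about 1.53" and 1.5 → "more than 3" arithmetic can be reconstructed if his r&R percentages are read as % of tolerance with the 5.15 convention (my reading; he does not state the convention explicitly), using the usual variance decomposition sigma_total² = sigma_process² + sigma_meas².

```python
import math

# Back out the "process alone" Cp from an observed Cp and a P/T ratio,
# assuming P/T = 5.15 * sigma_meas / tolerance and normalizing the
# tolerance width to 1.

def cp_process_alone(cp_observed: float, pt_ratio_pct: float) -> float:
    sigma_total = 1.0 / (6.0 * cp_observed)     # from Cp = Tol/(6*sigma)
    sigma_meas = pt_ratio_pct / 100.0 / 5.15
    sigma_proc = math.sqrt(sigma_total**2 - sigma_meas**2)
    return 1.0 / (6.0 * sigma_proc)

print(round(cp_process_alone(1.5, 10.0), 2))   # ~1.52 ("about 1.53")
print(round(cp_process_alone(1.5, 50.0), 2))   # ~3.08 ("more than 3")
```

    Under this reading, both of Gabriel's figures come out as stated: the worse the gage, the more of the observed spread is measurement rather than process.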
    I must recognize that it was a bit difficult to convince the auditor about that. It is against “the book”.
    I don’t know if it helped or confused even more. If so, sorry. It was not my intention.
    Gabriel.

    #75994

    vin
    Member

    By the way, the opinion I voiced in the first paragraph of my last post pertains specifically to the hypothetical process I mentioned in that paragraph, NOT to JohnK’s process.  Depending on the specific circumstances around JohnK’s process (e.g. Is this the most problematic process in the company?  Is the characteristic being measured considered Critical and relates to a high RPN in his FMEA? etc.), he may want or need to work on the measurement system for it anyway.
    That said, knowing JohnK’s P/T Ratio and %Total Var, we could actually derive a rough estimate of his Cp.  My estimate is it is over 2. (I won’t bore you with the formula – besides, I don’t have the patience to type it all in here and it’s just a very rough estimate.)
    So, the question comes up…Assuming a reasonably high Cp even using a “poor” gage (from a %Total Var standpoint), is improving the “poor” gage a top priority?  Again, the answer depends on the specific circumstances around the process in question – I don’t think anybody outside the process can really give an answer for that since we don’t see the whole, big picture…
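    Vin skipped the formula, but a rough version follows from the fact that both percentages share the same sigma_GRR, which cancels when you take their ratio. This is my reconstruction (again assuming the 5.15 convention), not necessarily the exact formula vin had in mind.

```python
# Rough Cp estimate from the two R&R percentages:
#   %Tol   = 5.15 * sigma_GRR / Tol
#   %Study = 100  * sigma_GRR / sigma_total
#   => Cp  = Tol / (6 * sigma_total) = (5.15 / 6) * (%Study / %Tol)
# Note this uses sigma_total (measurement error included), so it
# understates the "process alone" Cp.

def rough_cp(pct_study: float, pct_tol: float) -> float:
    return (5.15 / 6.0) * (pct_study / pct_tol)

print(round(rough_cp(51.0, 16.62), 2))   # ~2.63, i.e. "over 2"
```

    Plugging in JohnK's 51% study variation and 16.62% tolerance gives roughly 2.6, consistent with vin's "over 2" estimate.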

    #76050

    RR Kunes
    Member

    Industry standard for the P/T ratio is 99%, depending upon what you are measuring and how critical it is. Don’t use a gage that does worse than this.

    #76051

    RR Kunes
    Member

     

    % R&R Results:
      • <5%: no issues
      • <=10%: gage is OK
      • 10%–30%: may be acceptable based upon importance of application and cost factors
      • Over 30%: gage system needs improvement / corrective action

    #76061

    Carl Haeger
    Participant

    Based on your %Tolerance value (16.6%), it is somewhere between “good” (10%) and “unacceptable” (30%).  Make sure the tolerance used is the tightest one you will be measuring against.
    Based on the %Study value and the number of distinct categories, it initially appears to be a “bad” gage.  These two values ASSUME that the parts selected for the R&R study have variation EQUAL to normal production variation.  If you selected parts which vary much less than normal, that is going to make your %Study go up.
    If you have SDtotal from historical data and SDmeas from the R&R, then SDmeas/SDtotal should ideally be <10%. Keeping this ratio low helps ensure you can detect shifts in the process, whether using SPC or running DOEs (i.e. significant Xs will show up as significant, not get lost in measurement noise).
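    Carl's point about part selection is easy to demonstrate numerically. The sigma values below are assumptions for illustration: the same measurement error looks five times worse as a %Study figure when the study parts span only a fifth of the production spread.

```python
import math

# %Study depends on the parts you pick: narrower part-to-part spread
# shrinks sigma_total, so the same sigma_meas takes a bigger share.

def pct_study(sigma_parts: float, sigma_meas: float) -> float:
    """%Study Variation = 100 * sigma_meas / sigma_total."""
    return 100.0 * sigma_meas / math.hypot(sigma_parts, sigma_meas)

print(round(pct_study(0.5, 0.05), 1))   # production-like spread: ~10
print(round(pct_study(0.1, 0.05), 1))   # too-narrow spread: ~44.7
```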
    Thanks
    Carl
     
     

    #76358

    Chris Seider
    Participant

    Carl,
    Very nicely stated on your last response.
    Many times I find that people either don’t have even 80% of their typical process variation represented in these gage studies, or they misuse a Gage R&R to claim a gage is great: they study 3 different products with widely different areas of operation, note that the “% variation is quite small,” yet could never use the gage to determine how much one product was changing within its own typical operating range.
    I personally don’t fault people for not getting 80% of the range, since I am in the camp that % tolerance is what matters unless you are worried about a gage’s ability to distinguish the changes needed within a variation study or a DOE.  Of course, I always encourage them to try to get 80% of the one product’s typical range.
     

