
Multiple Appraiser Attribute GRR (Visual Inspection)


  • #31506

    Bill K
    Participant

    I needed to perform an attribute GRR on visually inspected parts (Pass/Fail per the criteria). I have 9 inspectors who have each viewed the 20 samples 2 times. I have run Minitab and reviewed the results. We re-trained the inspectors and re-ran the study; we improved on some counts and got worse on others. I can only seem to find information on reviewing the results of a 2-person attribute GRR, and it shows that the acceptable limit would be 90% for within appraiser, between appraisers, and to the standard. My question is: is the criterion for an acceptable multi-appraiser (9) GRR the same 90% as for a 2-person study, or would it be something different?
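
    For reference, a minimal sketch of how the within-appraiser, appraiser-vs-standard, and between-appraiser percent-agreement figures Minitab reports for such a study could be computed. The 9 x 20 x 2 array layout is an assumption, and the data below are random placeholders, not Bill's results:

        import numpy as np

        rng = np.random.default_rng(0)
        ratings = rng.integers(0, 2, size=(9, 20, 2))  # 0 = Fail, 1 = Pass; placeholder data
        standard = rng.integers(0, 2, size=20)         # placeholder reference ("true") values

        # Within appraiser: fraction of parts where an appraiser's two trials agree.
        within = (ratings[:, :, 0] == ratings[:, :, 1]).mean(axis=1)

        # Each appraiser vs. the standard: both trials must match the reference value.
        vs_standard = (ratings == standard[None, :, None]).all(axis=2).mean(axis=1)

        # Between appraisers: parts on which all 18 assessments are identical.
        per_part = ratings.transpose(1, 0, 2).reshape(20, -1)
        between = (per_part == per_part[:, :1]).all(axis=1).mean()

        print(within, vs_standard, between)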

    #83137

    Mikel
    Member

    Bill,
    If this were a variable R&R, would you ask this question?
    Example – I have 9 simple meters measuring the flow of a liquid. Should I accept 50% of tolerance or total variation because there are so many meters? The answer is obviously no.
    The answer is the same for attribute. I would suggest that with only 20 parts (this is attribute data, after all, and does not contain much information) the number should be close to 100%. Look at the confidence intervals Minitab shows to better understand.
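
    To see how little information 20 attribute parts carry, here is a minimal sketch of the exact (Clopper-Pearson) confidence interval for an agreement proportion, the kind of interval Minitab reports next to each percent-agreement figure. The function name is just illustrative:

        from scipy.stats import beta

        def exact_ci(k, n, conf=0.95):
            """Clopper-Pearson interval for k agreements out of n parts."""
            alpha = 1 - conf
            lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
            hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
            return lo, hi

        # 18 of 20 parts matched (90% observed agreement): the interval runs
        # from roughly 68% to 99%, far too wide to distinguish 90% from 99%.
        print(exact_ci(18, 20))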

    #83141

    Chip Hewette
    Participant

    Can you provide us with more info?  What was the proportion of “fail” within the 20 samples?  Do you have adequate inference space to state that the inspectors had opportunity to find “pass” and “fail” parts?
    What is the likelihood of part failure in production?  Does its frequency of occurrence mean the inspectors would likely be ‘bored’ and miss the random failure?  Or do failures occur with some regularity?  How would you structure the measurement system study to consider this?
    What variation are you most concerned about?  Due to production requirements do you have many inspectors?  If so, do you fear that one of the nine might have a different interpretation of the true part condition?  If so, keep the AR&R large, and involve all nine inspectors.
    If the part is so difficult to inspect that one inspector might call it “pass” one time and “fail” another time, I don’t think two observations by each inspector will find such difficulties.

    #83243

    Markert
    Participant

    What you have is an interrater (between raters) and intrarater (within appraiser) agreement problem.  In order to assess the degree of agreement both within and between raters or appraisers, one has to adjust for chance agreement, which is surprisingly high.  The accepted method for determining chance-adjusted agreement is the kappa statistic: kappa = (observed agreement − chance agreement) / (1 − chance agreement).  Simply looking at raw agreement or using confidence intervals on proportions is not correct, since it does not adjust for chance agreement.
    The latest edition of the MSA manual from the Automotive Industry Action Group (AIAG) now recognizes the kappa statistic as a basis for evaluating intra- or inter-rater agreement.  There are endless references available on the kappa statistic and the rater agreement problem (a simple web search will turn up countless hits).
    BTW, 20 pieces is a very small sample for an attribute MSA, and you should consider at least doubling the sample size if possible.
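
    For anyone who wants to compute it, here is a minimal sketch of Fleiss' kappa, the multi-rater generalization of Cohen's kappa, using the statsmodels library. The library choice is an assumption (the MSA manual describes the statistic, not any particular software), and the data below are random placeholders:

        import numpy as np
        from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

        rng = np.random.default_rng(1)
        # Placeholder data: 20 parts rated 0 = Fail / 1 = Pass by 9 appraisers
        # (one trial per appraiser here, for simplicity).
        ratings = rng.integers(0, 2, size=(20, 9))

        # kappa = (p_o - p_e) / (1 - p_e): observed agreement p_o corrected for
        # the agreement p_e expected by chance alone.
        counts, _ = aggregate_raters(ratings)  # parts x categories count table
        print(fleiss_kappa(counts))            # values above ~0.75 are commonly read as good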

