iSixSigma

Attribute Gage R&R


Viewing 13 posts - 1 through 13 (of 13 total)
  • Author
    Posts
  • #37401

    RR
    Member

    Hi,
Ours is a garment factory. We have 12 assembly lines, where the garments are stitched and assembled to get the final product. We have two quality inspectors for each line. Recently we conducted an AGRR with 30 appraisers, and the score was only 15% (overall agreement among themselves and with the expert). Is it possible to get an AGRR score of 100% if the population of appraisers is larger? Is there a better way of doing the AGRR?
Thanks,
RR.
     
     

    0
    #110237

    V S Magesh
    Member

    Hi,
I hope you are asking for 100% acceptance of parts, right? If the GRR is 30%, you have to improve.
In this case, you have to measure the variable first. “Anything can be improved only if it is measured!”
Then you have to see whether the variation is coming from the appraisers or the measuring equipment.
Only then can you improve this.
    Regards,
    V S Magesh

    0
    #110240

    RR
    Member

    Hi Magesh,
Thanks for the reply. But your input applies when the measurement is done through a device. In my case the measurement is visual inspection: the supervisors inspect the quality of the stitched garments and decide whether each one is OK or NOT OK. What I am asking is: is it enough if I get an attribute agreement score of 75%, meaning that 75% of the time all the inspectors agree on the quality of a garment?
I hope my doubt is clear.
    Rgds,
    RR
     
     

    0
    #110241

    V S Magesh
    Member

    RR,
I have a lot of queries that need input from your end before I can suggest anything. Meanwhile, please give your e-mail ID so that we can get in touch.
    Regards,
    V S Magesh
    [email protected]

    0
    #110244

    arvind pathak
    Participant

Even before measuring and analyzing, why don’t you first elaborate and standardize the inspection check-list and SOP for the appraisers? This must be followed with a discussion-cum-training so that everyone counts defects or defectives the same way. Later on you can conduct the AG R&R.
Do you have an operational definition of the measurement? 15% shows how differently it is being measured. By the way, which one is more prominent: (i) operator consistency, (ii) mutual consistency, or (iii) operator efficiency?

    0
    #110245

    arvind pathak
    Participant

    Magesh!
Probably you are giving the acceptance criteria for continuous data. RR’s measurement is discrete, and he does not want to compromise on anything less than 100%!

    0
    #110246

    RR
    Member

    Hi Arvind,
Thanks. That is exactly what we did after we got this score of 15%. In our analysis we found that the inconsistency is greater between appraisers. We would like to repeat the study after the training is completed. What I would like to know is whether we can get a score of 100% agreement with a population of around 30.
    Rgds,
    RR

    0
    #110250

    arvind pathak
    Participant

    RR,
Please mail me the case study of your MSA with a short description of the problem statement at [email protected]. I am interested in understanding why you have taken such a big number, 30 appraisers. Can’t you conduct the study with 3 appraisers and 15 units or so at every assembly line, or at the end of the process, wherever you are measuring the output or your project Y? The idea is to measure the precision of the measurement system, which can be done in steps.
Sometimes the GRR itself becomes an issue. In one of my projects the overall measurement effectiveness was 87%, and it took one and a half months to reach 93% consistently (with 15 parts and two operators)!
As a thumb rule for discrete data, if the overall measurement effectiveness is more than 90% we can accept the measurement system. Time is money, RR! A journey from 15% to 100% with 30 operators… practically I don’t know how much time it will take. But theoretically, yes, we can, and in fact we should aim at 100% if the measurement system is robust.
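The step-wise study described above (a few appraisers, a handful of parts, repeated trials) can be scored with a simple spot-the-agreement count. The sketch below illustrates how the overall agreement-to-standard percentage is computed for a tiny Good/Bad study; the ratings, the "G"/"B" coding, and the study size are invented for illustration, not data from this thread.

```python
# Minimal sketch: scoring a small attribute agreement study against an
# expert standard (3 appraisers x 5 parts x 2 trials, Good/Bad ratings).
# All data below are made-up illustration values.

reference = ["G", "B", "G", "G", "B"]  # expert's verdict for each part

# ratings[appraiser] = list of trials; each trial is one rating per part
ratings = {
    "A": [["G", "B", "G", "G", "B"], ["G", "B", "G", "B", "B"]],
    "B": [["G", "B", "G", "G", "B"], ["G", "B", "G", "G", "B"]],
    "C": [["G", "B", "B", "G", "B"], ["G", "B", "G", "G", "B"]],
}

n_parts = len(reference)

def all_agree(part):
    """True if every appraiser, on every trial, matches the expert on this part."""
    return all(trial[part] == reference[part]
               for trials in ratings.values() for trial in trials)

overall = sum(all_agree(p) for p in range(n_parts)) / n_parts
print(f"Overall agreement vs. standard: {overall:.0%}")
```

With the toy data above, parts 3 and 4 each have one mismatch, so the overall agreement is 3 of 5 parts. The same count, scaled up to real part and trial numbers, is what the "overall effectiveness" percentage reports.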

    0
    #110278

    Dreemr
    Participant

     
A few years back, one of my first projects was exactly this. OK, well, not exactly: I was in an auto plant and the product we were checking was painted, but the basics were the same. We had to visually inspect a final product to determine if it was good, and the output was measured discretely. If it makes you feel any better, it can be done. We took our first GR&R at 13% and ended with a GR&R at 89%. We took this as a pretty darn good improvement (about 580%) and moved on to the next project.

I can share practices with you, but it should be done through email and not on the forum. If you would like to compare notes, please contact me directly or relay your email address to me.

    0
    #110296

    Thai
    Participant

    RR,
One question: are you having the inspectors determine if a garment is simply good or bad, or are you having them report the type of defect? That is a big difference. Most of the AGRRs I have run in the past, where I wanted to determine the R&R of inspectors on picking out the same defect type on each garment, failed miserably (typically with a value around 15%, like you received). The reason is that each inspector has their own “favorite” defects that they look for and typically find. Try running the GR&R with Good/Bad as the output, not the type of defect (if that is what you are doing); you may get a better result.
If you are having the inspectors report the defect type, a Kappa analysis may be a good method to determine if one particular defect is causing the issue.
    Kirk
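For readers unfamiliar with the Kappa statistic mentioned above: it compares the observed agreement between two appraisers with the agreement you would expect by pure chance, so a high raw agreement percentage can still yield a low kappa if one category dominates. Here is a minimal sketch of Cohen's kappa for a two-appraiser Good/Bad study; the ratings are invented for illustration.

```python
# Sketch of Cohen's kappa between two appraisers rating the same 10 parts.
# kappa = (observed agreement - chance agreement) / (1 - chance agreement)
# Values near 1 mean real agreement; values near 0 mean no better than guessing.
# The ratings below are illustrative, not from an actual study.

from collections import Counter

a = ["G", "G", "B", "G", "B", "B", "G", "G", "B", "G"]  # appraiser A
b = ["G", "B", "B", "G", "B", "G", "G", "G", "B", "G"]  # appraiser B

n = len(a)
observed = sum(x == y for x, y in zip(a, b)) / n

# Chance agreement: product of each rater's marginal proportions, summed
# over the categories both raters use.
ca, cb = Counter(a), Counter(b)
expected = sum((ca[c] / n) * (cb[c] / n) for c in set(a) | set(b))

kappa = (observed - expected) / (1 - expected)
print(f"observed={observed:.2f}, expected={expected:.2f}, kappa={kappa:.2f}")
```

A common rule of thumb treats kappa above roughly 0.75 as acceptable agreement and below roughly 0.40 as poor, though acceptance thresholds vary by source; Minitab's attribute agreement output reports a kappa of this kind per appraiser pair and per category.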

    0
    #110306

    RR
    Member

    Kirk,
Unfortunately the score is for the output only (Good/Bad). I did the analysis using Minitab but was unable to draw any inference from the Kappa statistics it shows, as I have no idea what Kappa statistics are or what they tell!
    RR
     

    0
    #110307

    RR
    Member

    Dreemr,
I am very eager to know how it was possible to take the score from 13% to 89%. My email ID is [email protected]; please send me your mail ID.
    Rgds,
    RR
     

    0
    #110316

    Jonathon Andell
    Participant

Unfortunately, a 15% agreement score is not all that unusual. Even more unfortunately, adding more inspectors DECREASES your likelihood of agreement, crudely akin to the idea of rolled throughput yield.
You need to treat your inspection step like the process it is. It sounds like your inspection process has undergone something like a Define and Measure phase; now you must proceed to Analyze, Improve, and Control it before you use it on the fabrication process it is supposed to measure.
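The rolled-throughput-yield analogy can be made concrete: if each appraiser independently matches the standard with some probability p, the chance that all n of them match is p raised to the power n, which shrinks quickly as appraisers are added. The p = 0.95 below is an assumed per-appraiser accuracy chosen purely for illustration, not a measured value from this thread.

```python
# Why adding appraisers lowers the chance that ALL agree: the all-agree
# probability multiplies one factor of p per appraiser, the same
# multiplication that drives rolled throughput yield down across steps.
p = 0.95  # assumed probability that one appraiser matches the standard

for n in (2, 3, 10, 30):
    print(f"{n:2d} appraisers: all-agree probability = {p**n:.1%}")
```

Even with each individual appraiser 95% accurate, the all-agree probability for 30 appraisers falls to roughly 21%, which is in the neighborhood of the 15% score reported at the start of this thread.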

    0
