iSixSigma

Help needed assessing accuracy of measures

  • #50824

    Steve H
    Member

    Morning everyone.  As a novice I’m undertaking my first project with little support from within the company in terms of Six Sigma experience.
    I am currently capturing data on the number of errors found on business data sent to us by selling agents.  The error checking is undertaken manually and therefore subject to some potential issues of repeatability.  Due to the small size of the company the error checking is undertaken by one person only.
    I am currently in the Measure phase and looking to close this off by assessing the accuracy of the data showing the number of errors found. I considered doing a Gage R&R by getting the same operator to measure a sample several times, but am confused as to whether this would be correct, as:

    Each sample will have a different number of defects
    There will, therefore, be massive part-to-part variation
    Does anyone have advice on how better to check that the operator is consistently finding all the errors?
    Thanks

    #175223

    Remi
    Participant

    Hi, Steve H.
    There is no problem; you are doing the right thing. In DMAIC3 you check the measurement system. For this you need products, and the variation of the products should be representative of the variation found in reality. This variation will be filtered out in the analysis in DMAIC3 (you get a % value).
    So points 1 and 2, as mentioned, are good news, not bad news.
    The question that DMAIC3 will answer is whether the operator finds the same result (repeatability) for each product in the product range. Reproducibility is not an issue because you have one operator. By the way, what happens if he has a car accident when you are in DMAIC7 and he cannot measure your DoE outcome?
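    To make that “% value” concrete, here is a minimal sketch in Python (not Minitab) of the variance arithmetic for a one-operator study; the batch error counts are invented for illustration:

        # Minimal sketch, assuming a balanced study: 5 hypothetical batches,
        # each checked 3 times by the same operator (counts are made up).
        import numpy as np

        counts = np.array([
            [12, 11, 12],
            [ 0,  0,  1],
            [45, 44, 45],
            [ 7,  8,  7],
            [23, 23, 22],
        ], dtype=float)

        r = counts.shape[1]                               # repeats per batch
        var_repeat = counts.var(axis=1, ddof=1).mean()    # repeatability (within-batch)
        ms_between = r * counts.mean(axis=1).var(ddof=1)  # between-batch mean square
        var_part = max((ms_between - var_repeat) / r, 0)  # part-to-part component

        total = var_repeat + var_part
        print(f"%Contribution, repeatability: {100 * var_repeat / total:.1f}%")
        print(f"%Contribution, part-to-part:  {100 * var_part / total:.1f}%")

    A tiny repeatability percentage with a dominant part-to-part percentage is exactly the “good news” described above: the batches genuinely differ, and the checker is consistent.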
    Success

    #175226

    Steve H
    Member

    Thanks Remi,
    Please forgive my ignorance, I’m finding it very different putting theory into practice!  Your response is most welcome.
    If I understand you correctly, I should continue getting the same operator to measure maybe 5 different sets of data 3 times each and record how many errors he finds each time. The Gage R&R study should naturally show massive part-to-part variation, because the 5 different datasets will correctly have different numbers of errors, but what I am looking for is insignificant repeatability variation?
    If that’s the case, what signifies insignificant repeatability variation, and what type of R&R study should I use in Minitab?
    As regards the very valid question on staff absence, I’m hoping it doesn’t happen, at least for the time being, until I can get another person trained up to the point that repeatability is also not an issue. Office politics require me to prove I can reduce the error rate first, though, easing the burden of the checking.

    #175228

    Remi
    Participant

    Hi Steve,
    Yes. Maybe 5 is low for representativeness (the analysis of the parts you use will be used to predict the quality of measuring all the parts that will be encountered); I would rather go for 10. Two repeats instead of 3 is good enough.
    Just choose the standard Gage R&R (crossed) on 3 columns (Part ID, Operator ID (all 1), and Measurement). Minitab will then forget about Reproducibility. Use the same rules as otherwise.
    Wait a minute. What do you mean by 5 dataSETS? 1 dataset = 1 product = 1 sample, yes?
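    For illustration, a small Python sketch (values are hypothetical) of building that 3-column worksheet from wide, trial-by-trial data:

        # Sketch only: reshape wide repeat measurements into the stacked
        # Part / Operator / Measurement layout a crossed Gage R&R expects.
        import pandas as pd

        wide = pd.DataFrame({
            "Part":   [f"Batch{i}" for i in range(1, 6)],
            "Trial1": [12, 0, 45, 7, 23],
            "Trial2": [11, 0, 44, 8, 23],
        })

        study = wide.melt(id_vars="Part", var_name="Trial", value_name="Measurement")
        study["Operator"] = 1    # single appraiser, so every row gets the same ID
        study = study[["Part", "Operator", "Measurement"]].sort_values("Part")
        print(study.to_string(index=False))

    The stacked table is what goes into Minitab’s crossed Gage R&R dialog.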
    Remi

    #175230

    Steve H
    Member

    Fantastic, I’m on the right lines, and you’re a star.
    Will take your advice and go for 10 instead of 5.
    The data I am using relates to the sale of insurance policies. Every week a selling agent will send us a batch of data relating to each policy they have sold in that week. They could have sold one policy or 100. When I say “dataset” I am referring to one such batch of data from an agent.
    If I am using 10 datasets then this will be 10 different batches, each probably from a different agent. Each batch is likely to contain a different volume of data and a different number of errors. Hopefully this will be OK, as I’m looking to check that the number of errors the operator finds on each batch can be repeated accurately?

    #175231

    Robert Butler
    Participant

    You said, “I am currently capturing data on the number of errors found on business data” 
    Questions: 
      What kinds of data – forms, spreadsheets, sales tickets, etc.?
      What kinds of errors – entry, omissions, transcription, etc.?
      Are we talking a single form/spreadsheet/sales ticket, etc., or are we talking about a group of forms/spreadsheets/sales tickets, etc., covering the same issue but from different vendors, departments, or divisions?
      Are we talking errors with respect to the entire form/spreadsheet/sales ticket, or are we talking errors with respect to various sections of the form/spreadsheet/sales ticket, etc.?

    #175232

    Robert Butler
    Participant

    I see our posts are passing each other in the ether. OK, before you start looking at what your operator is doing, you had better make sure you understand what your “suppliers” (the field agents) are doing.
    I’d recommend undertaking your own study and looking at the errors as a function of agent, volume, and location on the form (I’m assuming that this is what you mean when you say “batch of data relating to each policy”). Once you have some idea of how errors are distributed according to these variables, you will be in a position to start thinking about error detection rates.
    Based on the very little I’ve done in this area, I suspect you are going to find an interesting connection between error commission and error detection as a function of location on the form (and here we are assuming you are talking about a single standard form).
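    As a hedged illustration of that stratification (column names and numbers are invented):

        # Sketch: tally detected errors by agent (normalised by volume)
        # and by field, to see where errors concentrate.
        import pandas as pd

        # Hypothetical error log: one row per error found during checking
        errors = pd.DataFrame({
            "agent": ["A", "A", "B", "C", "C", "C"],
            "field": ["address", "name", "address", "premium", "address", "phone"],
        })
        volumes = pd.Series({"A": 120, "B": 40, "C": 300}, name="rows_checked")

        by_agent = errors["agent"].value_counts().rename("errors")
        rates = pd.concat([by_agent, volumes], axis=1)
        rates["errors_per_100_rows"] = 100 * rates["errors"] / rates["rows_checked"]
        print(rates)
        print(errors["field"].value_counts())  # where on the form errors cluster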

    #175234

    Steve H
    Member

    Afternoon Robert, thanks for your help on this one, it’s greatly appreciated.
    The data I am referring to is the details of individual insurance policies sold by a selling agent. Each week our agents each send in a spreadsheet showing line-by-line data on each sale made. Each line of data will contain financial info (premium, tax, amount of cover, etc.), personal info (name, address, title, telephone number), plus various other system codes. Before we import an agent’s batch of data onto our system, a manual check has to be undertaken to make sure the data is suitable. The check usually involves checking the name and address for typing errors, making sure data is in the correct field, and that no data is missing.
    The data I am capturing is a simple count of errors detected per agent’s batch, but I want to test the accuracy by measuring the repeatability of the results.
    So, to answer your question, I would say we are looking at a group of spreadsheets, each individually generated by a separate vendor. The errors can be anywhere on the spreadsheet, and the totals range from zero errors on some spreadsheets to many on others (I think the biggest so far is 3,634 errors on one spreadsheet).

    #175235

    Jabber
    Participant

    Quoting Steve: “I am currently in the Measure phase and looking to close this off by assessing the accuracy of the data showing the number of errors found. I considered doing a Gage R&R by getting the same operator to measure a sample several times but am confused as to whether this would be correct.”
    Robert,
    Is this not simply a case of running an Attribute Agreement Analysis to determine reliability, and then posing your questions once the measurement system has been validated? Just wondering… Thanks!
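    As a rough illustration (invented counts), the within-appraiser half of such an analysis boils down to:

        # Sketch: did the single checker return the same error count on
        # every repeat of the same batch? Perfect per-batch agreement is
        # the attribute-style repeatability check.
        trials = {
            "Batch1": [12, 12, 12],
            "Batch2": [0, 1, 0],
            "Batch3": [45, 45, 45],
        }

        agreed = sum(1 for counts in trials.values() if len(set(counts)) == 1)
        print(f"Within-appraiser agreement: {100 * agreed / len(trials):.0f}%")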

    #175237

    Steve H
    Member

    Just read up on Attribute Gage R&R, and this seems like it might be OK. To do this I need to get a measure of the actual number of errors first.
    I could get each of the project team members to assess each sample and clearly mark the errors they find, then pool the results to arrive at an actual number of errors and compare the operator’s results against this.
    Robert, your questions definitely need answering, but I had planned to tackle them later on when finding solutions to the problem.
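    A small sketch of that pooling step (the cell references are invented):

        # Sketch: take the union of the error locations each team member
        # marks as the working "true" error set for the batch.
        reviewer_1 = {"row3:name", "row7:premium"}
        reviewer_2 = {"row3:name", "row9:address"}
        reviewer_3 = {"row3:name", "row7:premium", "row9:address"}

        standard = reviewer_1 | reviewer_2 | reviewer_3  # pooled known errors
        print(f"{len(standard)} errors in the pooled standard: {sorted(standard)}")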
    Thanks guys, your help is greatly appreciated.

    #175239

    Jabber
    Participant

    Just note that your MSA for attribute data (i.e., defects/defectives) should focus on repeatability; if you have more than one operator, reproducibility; and finally accuracy against some known standard.
    So with a single appraiser, you could focus on bias (i.e., degree of accuracy) and precision (i.e., the ability of your single operator to capture similar observations when measuring the same sample using the same method).
    And remember: you choose a representative sample of your spreadsheets, then have your appraiser repeatedly measure them in such a way as to maintain the integrity of the collection effort. You are not using different samples each time, simply recycling the same fixed number you chose originally, while working to keep the appraiser “in the blind”.
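    A hedged sketch of the bias/accuracy check against that known standard (made-up counts):

        # Sketch: compare the appraiser's counts to the pooled "known
        # standard" per batch, and estimate the average bias.
        standard = {"Batch1": 12, "Batch2": 1, "Batch3": 45}  # pooled team counts
        operator = {"Batch1": 12, "Batch2": 0, "Batch3": 44}  # checker's counts

        for batch, truth in standard.items():
            found = operator[batch]
            print(f"{batch}: found {found}/{truth} ({100 * found / truth:.0f}% detected)")

        bias = sum(operator[b] - standard[b] for b in standard) / len(standard)
        print(f"Average bias: {bias:+.2f} errors per batch")  # negative = undercounting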

    #175264

    Fontanilla
    Participant

    Back to the original question! When an error isn’t detected, I assume that down the line, somewhere in the process, that error is detected. If that is correct, you can gain an understanding of the accuracy of your error-checker by comparing what that person has found versus what that person has missed.
    You’d do well to heed Mr. Butler’s advice and stratify your errors-found data within the spreadsheets and across the agents. Same for errors missed (and found later).
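    In code form, the arithmetic is simple (figures are hypothetical):

        # Sketch: estimate the checker's detection rate from errors caught
        # at the check versus errors that escaped and surfaced downstream.
        caught_at_check = 180    # errors the checker flagged
        found_downstream = 20    # errors that slipped through, caught later

        detection_rate = caught_at_check / (caught_at_check + found_downstream)
        print(f"Estimated detection rate: {detection_rate:.0%}")  # 90% here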
    Good luck!
