iSixSigma

Correcting your random sample


Viewing 6 posts - 1 through 6 (of 6 total)
  • #47748

    Kate Porter
    Participant

    Good morning,
    I prepared a random sample from a large (binomial) population of application forms, on which I needed to do an audit. During the course of the audit, a number of problems were identified with the sample. The applicant corrected these problems, and at the end of the audit period the error rate for the sample was 4.3%. I therefore applied this final error rate to the entire population.
    However, I have been criticised. Somebody has said that if you allow the applicant to ‘correct’ the random sample, then it is no longer statistically representative and that a bias has been introduced. They claim that I should apply the original error rate to the entire population in order to give my audit opinion. However, I don’t agree: I think that if the random sample was representative, then the applicant’s ability to correct the sample is also representative of the entire population.
    Any thoughts? Help would really be appreciated!
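    Whichever error rate is ultimately applied to the population, a sample proportion from a binomial population carries sampling uncertainty that can be bounded with a simple confidence interval. A minimal sketch, with purely illustrative numbers (13 errors in a hypothetical sample of 300, roughly 4.3%; the original post does not give the sample size):

```python
import math

def error_rate_ci(errors, n, z=1.96):
    """Normal-approximation confidence interval for a binomial error
    rate observed as `errors` failures out of `n` sampled items."""
    p = errors / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Hypothetical: 13 bad applications in a sample of 300 (about 4.3%)
rate, low, high = error_rate_ci(13, 300)
print(f"rate={rate:.3f}, 95% CI=({low:.3f}, {high:.3f})")
# rate=0.043, 95% CI roughly (0.020, 0.066)
```

    Note how wide the interval is even for a sample of several hundred: the audit opinion rests on the interval, not just the point estimate.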

    #159589

    qualitycolorado
    Participant

    Kate, good evening!

    Regarding your question about drawing a random sample and then allowing your applicant to “correct” portions of the sample: this is probably NOT what you want to hear, but please read on.

    If I understand your situation, you drew a random sample from a large population and conducted an audit of some type on the samples. It is not clear from your original posting what kinds of “problems” were identified with the sample, nor why you let the applicant correct these. It would be helpful to know the particulars. Without this additional information, and having only what you posted originally, I would tend to agree with whoever has criticized you.

    Think about it: you are auditing, I presume, to find some type of defect. You drew a random sample so that you could apply the rules of random sampling. If you let someone “fiddle” with the sample after you draw it, even with the best of intentions, you have suddenly polluted your sample: it is no longer random! And if it is no longer random, then you cannot apply the rationale of random sampling to whatever results you get. Consequently, you cannot indicate with any certainty that the error rate for the population is similar to the error rate for the “sample”. Your arguments about the sample being “representative” miss the point completely, and do not validate the “corrections” that your applicant applied to the sample. What you are left with is adulterated data, nothing more.

    The best procedure from here on out would be to 1) toss the original sample, and then 2) re-sample, with strict adherence to the rules of random sampling. Additional clarity about your specific situation would be important and could (maybe) change my mind, but for right now, I will side squarely with those who have criticized your technique.
    Best regards,
    QualityColorado
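    QualityColorado’s point about the polluted sample can be made concrete with a quick simulation. This is a hypothetical sketch with invented numbers: a population with a true 10% defect rate is randomly sampled, and then 60% of the sampled defects are “corrected” before the rate is computed:

```python
import random

random.seed(42)

TRUE_ERROR_RATE = 0.10      # assumed true population defect rate
POPULATION_SIZE = 100_000
SAMPLE_SIZE = 500

# 1 = defective application, 0 = correct one
population = [1 if random.random() < TRUE_ERROR_RATE else 0
              for _ in range(POPULATION_SIZE)]

sample = random.sample(population, SAMPLE_SIZE)
original_rate = sum(sample) / SAMPLE_SIZE

# Suppose applicants manage to "fix" 60% of the sampled defects mid-audit
corrected = [0 if d == 1 and random.random() < 0.6 else d for d in sample]
corrected_rate = sum(corrected) / SAMPLE_SIZE

print(f"original error rate:  {original_rate:.3f}")   # close to the true 10%
print(f"corrected error rate: {corrected_rate:.3f}")  # biased low
```

    The corrected rate says nothing about the population, because only the sampled items were fixed; the rest of the population is unchanged, so extrapolating the corrected rate understates the true defect level.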

    #159590

    KateP
    Participant

    Hi QualityColorado,
    Thanks for the quick reply. To give you more specifics:
    The Client pays a rebate on a certain type of appliance. The customer must fill in an application form, sign it, and return it to the Client with full proof of payment attached.
    I was required to audit the program to ensure that customers were:
    1. filling in the application forms correctly,
    2. signing the application forms
    3. attaching full proof of payment.
    Hence I prepared a random sample from the total applicants (the population).
    During the audit I found that a number of the applications in fact had ‘order forms’ attached rather than valid tax receipts. The Client contacted the applicants, and these applicants sent in the correct receipts – hence the “correction”. What do you think now?
    Thanks so much,
    Kate

    #159599

    clb1
    Participant

      I think the common word for that kind of “correction” is cheating and, yes, you have biased your results.
      I take an exam. The teacher checks my answers and provides me with a measure of my error rate. I take the exam back, correct all of the mistakes, and then demand a new calculation of the error rate. Of the two error rates, the original or the “revised”, which one is a measure of my knowledge at the time I took the test?
      Your problem is the same – one of the “defects” surrounds the issue of correctly following instructions the first time.  By allowing a “correction” you may be hiding a problem with respect to instruction clarity or something else that, if addressed properly, may result in a real improvement in the process.

    #159600

    qualitycolorado
    Participant

    Kate, Good morning!
     
    Thank you for the clarifying information. Others on the Forum here may have different opinions; here is my 2 cents’ worth:
     
    Keep Dr. Deming’s favorite question in mind: “What is your aim?” (or purpose)
     
    You have stated your purpose well: ensure that customers were:
    1. filling in the application forms correctly,
    2. signing the application forms
    3. attaching full proof of payment.
     
    Now, consider your sampling:  you pulled samples to determine whether they were doing these 3 things. So far, so good.
     
    Now, you discover that you reach a “dead end” on some of these questions, since the customer sent in the wrong thing, and there are no applications at all for these customers.
     
    That’s OK, too — I think that, due to not having the applications, in these situations the sampled item “fails” on all 3 counts — after all, if there is no application, the customer did not sign it correctly, etc. — right?
     
    If I were the client, I would, of course, want to contact these customers, get them to send in the application, etc.
     
    However, you should NOT correct your sample.  You should take these situations where there is no application at all, and count them as failing.
     
    This information may lead your client to do some good root cause analysis, regarding why the customer sent in the wrong things.
     
    If you “correct” the data, the client may NOT be led to do this. Again, ask yourself “What is my aim (purpose)?”, but put yourself in your client’s shoes: what is their aim in having this audit done? Surely, one of the aims is to help them do root-cause analysis on “failures”, so they need to know ALL of the failures.
     
    So, my original thoughts stand: you should NOT correct your audit data. For the situations with no application, these samples fail all 3 of your questions, and should be reported that way (with an annotation, perhaps, that there was no application at all for these samples). If the client wants to contact these people and get them to send in the right forms, that is fine, BUT do NOT correct your data for this.
     
    … again, just my 2 (and a half) cents worth ….
     
    Best regards,
    QualityColorado

    #159603

    New To Sigma
    Participant

    QC,
    I am still confused as to relationships between the following elements when determining proper sample size calculations:

    For variable data, are the following elements – alpha, beta, sample size, precision, and variability – always to be considered when one simply wants to determine a population parameter (mean, std dev, median, etc) with a corresponding confidence coefficient and supporting interval? 
    And does the following statement accurately summarize their relationship to one another?

    “N is the proper sample size for a given alpha and beta, at a given level of variability and a specific level of precision.” Does that capture the essential elements (all 5)?
    Can you point me in a direction for determining attribute sample sizes as well? 
    thanks, thanks, thanks!
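    For what it’s worth, the standard textbook estimation formulas tie n to alpha (via the z value), precision, and variability, for both variable and attribute data. A minimal sketch with illustrative numbers only; note these cover estimation, and bringing beta (power) in requires the power formulas with both a z for alpha and a z for beta:

```python
import math

def n_for_mean(sigma, precision, z=1.96):
    """Sample size to estimate a mean to within +/- precision
    at the confidence implied by z; variability enters via sigma."""
    return math.ceil((z * sigma / precision) ** 2)

def n_for_proportion(p, precision, z=1.96):
    """Attribute-data sample size; p = 0.5 is the conservative worst case."""
    return math.ceil(z ** 2 * p * (1 - p) / precision ** 2)

print(n_for_mean(sigma=10, precision=2))        # -> 97
print(n_for_proportion(p=0.5, precision=0.05))  # -> 385
```

    Variability (sigma, or p(1-p) for attributes) and precision dominate: halving the required precision quadruples n in both formulas.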


The forum ‘General’ is closed to new topics and replies.