Audit Frequency


    #43157

    Lee
    Participant

    Greetings,
    I work in a production environment where x units are produced per hour. The production rate varies by line and by specific product. Not all lines produce the same units, nor do they produce the same product all day. Currently the sampling for SPC charts, and everything else, is uniformly set at 5 units per line per hour, independent of standard deviation, units produced per hour, etc. It seems we may be sampling too often in some cases, yet not often enough in others.
    To add some logic/basis to our methods, I was going to try using ANSI/ASQ Z1.9 (basically the old MIL-STD-414). My thought was that historical data would give me the standard deviation estimate, the production rate per hour would be my lot size, and Z1.9 would then set the sample size per hour. That size divided by 5 would be a defensible basis for establishing the sampling frequency (e.g., once per 30 minutes, once per two hours, etc.).
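    For illustration, the arithmetic would look roughly like this (a sketch only; the lot-size-to-sample-size lookup below is a hypothetical stand-in, and the real values would come from the Z1.9 code-letter tables):

    ```python
    def z19_sample_size(lot_size: int) -> int:
        """HYPOTHETICAL lot-size -> sample-size lookup, standing in for
        the ANSI/ASQ Z1.9 code-letter tables (illustrative only)."""
        if lot_size <= 90:
            return 10
        if lot_size <= 280:
            return 15
        if lot_size <= 500:
            return 20
        return 25

    def audits_per_hour(units_per_hour: int, subgroup_size: int = 5) -> float:
        """Treat one hour's production as the lot; divide the Z1.9 sample
        size by the fixed 5-unit subgroup to get the audit frequency."""
        return z19_sample_size(units_per_hour) / subgroup_size

    # Example: a line running 300 units/hour would need 20 sampled units,
    # i.e., four 5-unit subgroups per hour (one every 15 minutes).
    print(audits_per_hour(300))  # -> 4.0
    ```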
    Is that an OK way to do it, or is something else better?
    Have a Six Sigma quality day,
    Eugene

    #136585

    Haugen
    Participant

    Eugene,
    The purpose of control charting is to monitor process variation and to react when variation goes from common to special. So the first thing you need to think about is all the potential causes of variation in each process. Usually there are potential causes such as time, temperature, operator, machine, batch, maintenance, and tool change, i.e., sources of "special cause" variation that you want to be able to pick up. Understanding potential causes is critical to planning your sampling strategy. Do you need a sample group in the morning at the start of shift, one at the end of shift, one at tool change?
    The size of each sample can depend on how much effort is required to measure. A subgroup of 3-5 parts gives you a good R, a sample size of more than 10 allows the use of s instead of R, and an in-line gage lets you sample 100%. You want to minimize variation within groups, so the samples within a group are usually consecutive, or close to it.
    Use knowledge of your process to set up your sampling, not Z1.9.  Hope this helps.
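    To illustrate the R-versus-s choice, here is a minimal sketch (not Haugen's code; the constants are the standard Shewhart X-bar chart factors, tabulated here only for the subgroup sizes shown):

    ```python
    import statistics

    # Standard Shewhart X-bar chart factors (only these sizes tabulated here).
    A2 = {2: 1.880, 3: 1.023, 4: 0.729, 5: 0.577}      # limits from R-bar
    A3 = {11: 0.927, 15: 0.789, 20: 0.680, 25: 0.606}  # limits from s-bar

    def xbar_limits(subgroups):
        """X-bar control limits, using R-bar for small subgroups (n <= 5)
        and s-bar for larger ones, per the rule of thumb above."""
        n = len(subgroups[0])
        grand_mean = statistics.mean(x for g in subgroups for x in g)
        if n <= 5:
            r_bar = statistics.mean(max(g) - min(g) for g in subgroups)
            half_width = A2[n] * r_bar
        else:
            s_bar = statistics.mean(statistics.stdev(g) for g in subgroups)
            half_width = A3[n] * s_bar
        return grand_mean - half_width, grand_mean + half_width

    groups = [[9.9, 10.1, 10.0, 10.2, 9.8], [10.0, 10.3, 9.9, 10.1, 10.0]]
    print(xbar_limits(groups))  # -> (~9.80, ~10.26)
    ```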

    #136632

    Prabal Aggarwal
    Participant

    I would like to make a point: if your logic or calculations call for different sample sizes on different days or at different hours, it will be really cumbersome to communicate the revised sampling plan to production or quality people each and every time.
    Prabal Aggarwal

    #136659

    Eugene Lanning
    Participant

    Good point on changing the audit frequency too much. I hope to validate that once per hour is sufficient most of the time, to identify a few cases where once per half-hour is needed, and to find a few cases where once per two hours is perhaps enough. Basically, I want to allocate our auditing time where it is really needed.
    Thanks for your input.
    Eugene

    #136666

    Andejrad Ich
    Participant

    Limits are based upon process variation. Sampling frequency is based upon… and nobody gets this… dollars. How much product (i.e., how many dollars, how much rework and sorting) are you willing to risk before detecting that the process shifted?
    Apply more frequent sampling to the more expensive products (cost-risk based sampling frequency).  That’s all there is to it.
    (This is where others will cry out, “but shouldn’t it be based on some sort of known process stability/instability” to which I have to reply in advance, “if you have known process stability/instability, then you have a known out-of-control process — making it NOT a candidate for control charting anyway.”)
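    A minimal sketch of that cost logic (all line names and numbers below are hypothetical):

    ```python
    def max_sampling_interval(units_per_hour: float,
                              unit_cost: float,
                              risk_budget: float) -> float:
        """Longest interval (hours) between samples such that the product
        exposed between samples stays within the dollar risk budget."""
        return risk_budget / (units_per_hour * unit_cost)

    # Two hypothetical lines with the same rate but different unit costs,
    # and a willingness to risk at most $500 of product between samples.
    for name, rate, cost in [("cheap line", 300, 0.50),
                             ("expensive line", 300, 8.00)]:
        print(f"{name}: sample at least every "
              f"{max_sampling_interval(rate, cost, 500.0):.2f} h")
    # cheap line: every 3.33 h; expensive line: every 0.21 h (~12 min)
    ```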
    Andejrad Ich

    #136671

    Ior
    Participant

    Dear Eugene,
    My experience with Six Sigma might be a bit limited, so please don't use my answer without consulting someone experienced. I do not know the standard that you refer to, but I will do my best to give you some help from my somewhat limited perspective.
    First of all, I think it is important to secure the production process. In order to get a predictable process (to enable SPC) you need to lock the production parameters.
    Second, to run SPC you need to verify that you don't have variation due to special causes; most preferably the output should be normally distributed.
    Steps 1 and 2 will probably need a validation to secure them, e.g., run the process for a couple of shifts (with locked or noted process settings) and analyze.
    If the above assumptions are verified, the third step would be to measure the standard deviation of each line (after calibration acceptance control, accuracy checks, and R&R for the measurement method, of course). Based on the standard deviation and how much shift you want to detect, calculate your sample size. You can probably use Minitab to calculate the sample size for each line (the "Power and sample size" function; help for it is in the Minitab help file). The answer you get from that function is how many products you need to measure to get a valid measurement of a certain population.
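    For reference, a similar one-sample calculation can be sketched outside Minitab. This is only an approximation using the standard normal-theory formula, with purely hypothetical numbers:

    ```python
    import math
    from scipy.stats import norm

    def sample_size(sigma: float, delta: float,
                    alpha: float = 0.05, power: float = 0.90) -> int:
        """Approximate n needed to detect a mean shift of `delta`:
        n = ((z_{alpha/2} + z_beta) * sigma / delta) ** 2."""
        z_alpha = norm.ppf(1 - alpha / 2)
        z_beta = norm.ppf(power)
        return math.ceil(((z_alpha + z_beta) * sigma / delta) ** 2)

    # Hypothetical line: sigma = 2.0 units, want to detect a 1.0-unit shift.
    print(sample_size(sigma=2.0, delta=1.0))  # -> 43
    ```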
    My answer will not give you all the answers, but I hope that I have contributed something of significance, and that it will help you search further.
    Best regards, Ior

    #136763

    Eugene Lanning
    Participant

    Thanks for the input, Ior. I'll look at that feature of Minitab.

    #136915

    AggieIE
    Participant

    There are two ways to establish sampling frequency. The first is to determine the amount of work and the number of personnel that would be required, or in other words, the resources that are available and the cost effectiveness. You don't want to overwhelm anyone or spend a lot of money.
    The second way is to determine the average run length (ARL), which is the average number of samples that plot within the control limits before a point falls outside them. With the control limits set, you can calculate the probability that a point will exceed a limit; the average run length is 1 / probability. For example, if the probability that a point goes out of control is 0.0027, then the average run length is 1 / 0.0027 ≈ 370. This means that, on average, a point will be out of control once every 370 samples. If the process rate is 370 units per hour, then a sample should be taken every hour.
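    In code form, that arithmetic looks like this (a small sketch; 0.0027 is the false-alarm probability for standard 3-sigma limits, derived here rather than hard-coded):

    ```python
    from scipy.stats import norm

    # Probability that an in-control point falls outside 3-sigma limits.
    p_out = 2 * (1 - norm.cdf(3))   # ~0.0027

    # Average run length: mean number of samples between false alarms.
    arl = 1 / p_out                 # ~370

    # At one sample per hour, that is roughly one false alarm
    # every 370 hours of production.
    print(f"p = {p_out:.4f}, ARL = {arl:.0f} samples")
    ```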
    Also, a common rule of thumb for sample size is 30, due to the central limit theorem.
