Control Chart – Sampling Frequency.


Viewing 5 posts - 1 through 5 (of 5 total)
  • #29282

    Ernesto Lugo

    I have a process with an output of 5000 units/minute and a lot size of 4,000,000 to 8,000,000 depending on the product.  Currently we take 10 units every 30 minutes and monitor them on a control chart.  I have used the n = (sigma/W)^2 equation to determine the proper sample size to obtain Xbar ± W at 99% confidence, but I am getting extremely low values for n.  Can someone tell me how many samples I should collect, at what frequency, and how to calculate it?  Thanks in advance.
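    The usual textbook form of this calculation includes the Z multiplier for the chosen confidence level, i.e. n = (Z*sigma/W)^2; if Z is left out, n comes out roughly Z^2 times too small, which could explain the very low values. A minimal sketch of that formula (the sigma and W values below are hypothetical, not from the original post):

```python
import math

def subgroup_size(sigma, half_width, z=2.576):
    """Subgroup size n so that Xbar lands within ±half_width of the
    true mean at the confidence implied by z (2.576 ~ 99% two-sided).
    Standard formula: n = (z * sigma / W)^2, rounded up."""
    return math.ceil((z * sigma / half_width) ** 2)

# Hypothetical numbers: process sigma = 2.0 units, desired W = 1.5
print(subgroup_size(2.0, 1.5))        # -> 12
print(subgroup_size(2.0, 1.5, z=1))   # -> 2 (what dropping Z would give)
```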


    RR Kunes

    Go on the internet and dig up MIL-STD-105; it will help you.


    Dave Strouse

    Ernesto –
    Are you wanting to continue control charting? Is your question how large to make the subgroups? If so –
     Get a good basic text on control charting and read up on the idea of rational subgrouping. Also, look at the ARL (average run length) for various subgroup sizes. You need to understand the variation in your process and how much undetected drift you can tolerate to set the sizes.
    MIL-STD-105 is about acceptance sampling by attributes and does not have anything to do with control charts. I think the previous poster misunderstood what you are looking for.
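    For an Xbar chart with the usual ±3-sigma limits, the ARL against a sustained mean shift can be computed directly from the normal distribution: the per-subgroup signal probability is p = 1 − Φ(3 − δ√n) + Φ(−3 − δ√n) for a shift of δ process standard deviations, and ARL = 1/p. A sketch (shift sizes chosen for illustration):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def arl_xbar(shift_sigmas, n, limit=3.0):
    """Average run length of an Xbar chart with +/-limit-sigma control
    limits against a sustained mean shift of shift_sigmas (in units of
    the process standard deviation), with subgroup size n."""
    d = shift_sigmas * math.sqrt(n)
    p = (1.0 - norm_cdf(limit - d)) + norm_cdf(-limit - d)
    return 1.0 / p

# With no shift, the in-control ARL is about 370 subgroups;
# larger subgroups catch the same shift sooner (smaller ARL):
for n in (2, 5, 10):
    print(n, round(arl_xbar(1.0, n), 1))
```

This is what "look at the ARL for various subgroup sizes" means in practice: pick n so that a drift you care about is flagged within an acceptable number of sampling intervals.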



    Another approach is to examine the frequency with which out-of-control (OOC) conditions have been encountered historically for this process, together with the criticality of the measure being charted.
    As a guide for many processes, if you are encountering OOC conditions more frequently than 1 out of 20 times a sample is selected, you should increase the control-charting sampling frequency.  If, on the other hand, you rarely get an OOC condition (perhaps rarer than 1 in 40 samplings), there may be an opportunity to extend the time between samplings.
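    The rule of thumb above is easy to mechanize. A sketch of that heuristic (my paraphrase of the post's 1-in-20 / 1-in-40 thresholds, not a published standard; the halve/double adjustment is an illustrative choice):

```python
def suggest_interval(ooc_count, total_samples, current_minutes):
    """Heuristic from the post: OOC oftener than 1 in 20 samplings ->
    sample more often; OOC rarer than 1 in 40 -> sampling can be
    relaxed; otherwise leave the interval alone."""
    rate = ooc_count / total_samples
    if rate > 1 / 20:
        return current_minutes / 2   # tighten: halve the interval
    if rate < 1 / 40:
        return current_minutes * 2   # relax: double the interval
    return current_minutes

print(suggest_interval(8, 100, 30))  # 8% OOC -> tighten to 15.0 min
print(suggest_interval(1, 100, 30))  # 1% OOC -> relax to 60 min
print(suggest_interval(3, 100, 30))  # 3% OOC -> keep 30 min
```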
    While I’m sure there are other more statistically based approaches to establishing sampling intervals, I have used these as a guide for 20+ years, and it has worked very well for me.



    As people have said before, you should always ‘know’ your process. Using numbers without the appropriate approach, or in the wrong context, can mislead anyone less careful (i.e. you may end up using numbers for your own benefit regardless of what’s really there, instead of letting the numbers help you interpret the process you’re trying to make sense of).
    Anyhow, here are two formulas which can help you determine the size of the subgroup.
    If you have proportions (i.e. Ok/Not Ok), then use:
    n = (Z/d)^2*p*(1-p) ; where Z is the sigma value (1.96 for 95% confidence) and d is the level of detection (if you want to be able to see a shift from 4% to 5%, then you have a 0.01 difference and d equals 0.01).
    If you have continuous values (variable data), then use:
    n = (Z/d)^2*s^2 ; where s is your true stdev (if not known, use an estimated value from previous runs).
    Remember, when using an Xbar/R chart you normally have the control limits at ±3s, which is a 99.73% area your data will fall into. If you use a 99% confidence level, then Z is approximately 2.6. That means you can’t calculate the subgroup size using a 99% confidence level and then run an Xbar/R chart with ±3s limits. (±3s is normally referred to as the interval where ‘all’ the values will be within.)
    An example: you have a true mean value in your process at, let’s say, 27, and you’re running a chart to detect any deviations from this value. How many do I pick as a subgroup to be able to spot any statistical differences? Let’s say I want to see if the mean differs by at least 0.5 from the nominal value. Previous data shows that my stdev is 1.3.
    n = (3/0.5)^2*1.3^2 = 61
    Don’t forget, you may very well get subgroups which don’t indicate any change (below 27.5, that is) even though the mean value has in fact changed. You’re simply dealing with subgroups, and their uncertainty comes with the territory. But on the other hand, if you do get a subgroup saying that the mean is 27.5, then you can be sure (99.73%) that the true mean has changed.
    If you’re dealing with proportions and don’t know the true p, then a common approach is to use p = 0.5, as this is ‘the worst-case scenario’: it gives the maximum value for n (smaller or greater p means smaller n).
    This reply is nothing but mathematical, so don’t forget to use your common sense and dive into your process with the aim of getting some understanding.
    /P3
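    Both formulas from the post, plus its worked example, can be checked in a few lines (the 1.96/0.01 numbers below are the post's own illustration values):

```python
import math

def n_proportion(z, d, p=0.5):
    """Subgroup size for attribute (Ok/Not Ok) data:
    n = (Z/d)^2 * p * (1 - p); p = 0.5 is the worst case."""
    return math.ceil((z / d) ** 2 * p * (1 - p))

def n_variable(z, d, s):
    """Subgroup size for continuous (variable) data:
    n = (Z/d)^2 * s^2, with s the process standard deviation."""
    return math.ceil((z / d) ** 2 * s ** 2)

# Worked example from the post: detect a shift of at least 0.5 from a
# mean of 27, s = 1.3, using the chart's own +/-3s limits (Z = 3):
print(n_variable(3, 0.5, 1.3))     # -> 61

# Attribute example: Z = 1.96 (95%), detect a 4% -> 5% shift (d = 0.01),
# worst-case p = 0.5:
print(n_proportion(1.96, 0.01))    # -> 9604
```

Note how sharply n grows as d shrinks (it scales as 1/d^2), which is why detecting small shifts in proportions requires such large subgroups.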


The forum ‘General’ is closed to new topics and replies.