iSixSigma

Sample Size Frequency


Viewing 18 posts - 1 through 18 (of 18 total)
  • Author
    Posts
  • #50487

    howe
    Participant

    Hello,
I am researching current sample size methodologies being used. I personally subscribe to the attribute sample using this formula, where I know from previous historical data what my error rate is.

n = p*(1-p)*(1.96/D)^2, where D is the desired precision and p is the estimated error rate, both expressed as decimals.
    What this translates into is
    n = .01*(1-.01)*(1.96/.02)^2 ≈ 95
    So this means by looking at 95 units I am 95% confident my estimate falls within 2%.
My question: we are using this to determine the sample size to inspect and report accuracy – in this case, claims paid wrong. The error rate stays pretty steady at about 1%. We do thousands of claims per day. If I use this formula, does this mean I only sample this amount per month, per week, per year? I know population size is usually not a factor in determining sample size, but I would have a hard time convincing people that out of 850,000 claims per year, we only need to sample 95.
    2) If we moved to a variable error mechanism ($’s paid wrong), what formula would be appropriate?
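    For reference, a minimal sketch of the calculation above in Python (the function name and the rounding-up step are my own; p = .01, D = .02 and z = 1.96 are the figures quoted):

```python
# A sketch of the attribute sample-size formula from the post, not a vetted tool.
import math

def attribute_sample_size(p, precision, z=1.96):
    """n = p*(1-p)*(z/D)^2, rounded up to the next whole unit (my own convention)."""
    return math.ceil(p * (1 - p) * (z / precision) ** 2)

# 1% estimated error rate, +/- 2% precision, 95% confidence (z = 1.96)
print(attribute_sample_size(p=0.01, precision=0.02))  # -> 96 (about 95 before rounding up)
```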

    0
    #173584

    DaveS
    Participant

    Mike,
The formula you reference is a modification of the sample size formula for a mean in a normal population.
    In order for your approximation to the normal to hold for the proportion, it is necessary that both pn and (1-p)n (or qn) are greater than 5. Given the sample size found (95) and checking the >5 criterion, you get pn in the sample as (0.01 * 95) = 0.95.
    One quick and dirty method is to increase n until the criterion is met, i.e., 5.26 times the sample size from the faulty formula, or 500.
    Setting this up in MINITAB generates 428 as the required sample. I think they are using a formula from “Sample Size Determination”, Mace, 1974. This is described in a good little book from ASQ, “How to Choose the Proper Sample Size” by Gary Brush, ASQ Press, and presented there as a nomogram. I get about 450 interpolating from that.
    You should use at least the 428, which is the more exact solution.
    If you use a $ paid metric, you can use the sample size calculator for one mean.
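    A minimal sketch of that quick-and-dirty check, assuming the 1% error rate from the original post (the function name and rounding convention are my own):

```python
# A sketch of the "increase n until np and (1-p)n exceed 5" rule of thumb.
import math

def min_n_for_normal_approx(p, threshold=5):
    """Smallest n with n*p >= threshold and n*(1-p) >= threshold."""
    return math.ceil(threshold / min(p, 1 - p))

print(min_n_for_normal_approx(0.01))  # -> 500, i.e. about 5.26 times the original 95
```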

    0
    #173587

    howe
    Participant

    Thanks for the reply
Can you expand more on why the formula was not accurate as it stood (i.e., where does the pn > 5 come from)?
    Is this the sample size that should be used for a day’s worth of claims inspection, a week’s, a month’s?
    Thanks again
    Also what selection in Minitab did you use for the 428?

    0
    #173591

    Ron
    Member

Use MIL-STD-105E; it is a great document and has been adopted by ISO.
    You can get it from the net for free.

    0
    #173606

    Michael Mead
    Participant

    Good suggestion Ron. At least it gives those who don’t understand sampling from an infinite universe the idea that the sample size gets bigger as the population gets bigger.
But what are you doing with the information? MIL-STD-105E is for acceptance sampling. Are you accepting anything?
    If you are using a control chart, you should have a sample size that, on average, gives you some defects in each sample. Thus, if you actually have 2% defective, the 95 number should work just fine.

    0
    #173672

    howe
    Participant

I am using this to generate a sample for the on-going auditing of claims processed – either they are wrong or right. Looking at the results from prior samples done for the whole year of 2007, the year-to-date error rate is only .01%.
    This, with 95% confidence and a precision of 2%, yields too few (95) to “feel” right. I would never be able to convince anyone that out of 500,00 claims a month we only need to sample 95. Any suggestions?

    0
    #173685

    Michael Mead
    Participant

Sorry Mike, I thought you said your error rate was a steady 1%, but now it is .01%?
    Using my theory that a sample should have an average of at least 1 defective, you now need a sample of 10,000. Somehow that seems a bit crazy to me.
    Where did the error rate number come from? If I had that information, I would probably pass on the sampling and maybe create a Pareto chart of defect types, sources, that kind of thing – and go directly to the “fix it” part of the process. The error rate is too small to detect with a sampling plan.
    Good luck.

    0
    #173693

    howe
    Participant

My mistake: 1% is correct (99% accurate). Does this change anything?

    0
    #173697

    Michael Mead
    Participant

    Hello Mike,
    For statistical process control purposes, in order to identify changes in the process, a sample of 200 would be good enough. I am sure you know the tradeoff between tighter control limits and the cost of larger samples. What are you intending to do with your project?
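    To illustrate that tradeoff, a minimal sketch assuming classical 3-sigma p-chart limits and the 1% error rate from this thread (the sample sizes compared are illustrative choices, not recommendations):

```python
# A sketch of 3-sigma p-chart limits for a 1% error rate at a few sample sizes.
import math

def p_chart_limits(p_bar, n):
    """Classical limits: p_bar +/- 3*sqrt(p_bar*(1-p_bar)/n), floored at zero."""
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)
    return max(0.0, p_bar - 3 * sigma), p_bar + 3 * sigma

for n in (95, 200, 500):  # illustrative sample sizes
    lcl, ucl = p_chart_limits(0.01, n)
    print(f"n={n}: LCL={lcl:.4f}, UCL={ucl:.4f}")  # limits tighten as n grows
```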
    I still think that if you have any kind of breakdown on last year’s data, you should start looking for opportunities to improve.
    Good luck.

    0
    #173729

    Mikel
    Member

“Your theory”? Do you know what the stats books say?

    0
    #173732

    Michael Mead
    Participant

    In his book, Introduction to Statistical Quality Control, Dr. Montgomery states: “If p is very small, we choose n sufficiently large so that we have a high probability of finding at least one nonconforming unit in the sample. Otherwise, we might find the control limits are such that the presence of only one nonconforming unit in the sample would indicate an out-of-control condition.” (3rd edition, page 262)  
    I am not bright enough to make this stuff up by myself.

    0
    #173736

    Mikel
    Member

If p = .01 and the sample size is 100, there is a fairly low probability of finding a defective in every sample – about 63%. Montgomery’s fear would be realized, as you would declare OOC every time either 0 or 2 defects are found – about 46% of the time.
    The advice that is worth listening to is that np should be greater than 5. At 5, the probability of getting at least 1 is over 99% for the first time. Understanding what Montgomery really means is important if you give advice. Montgomery’s advice is sound – always, but it requires a real understanding, not a superficial one.
    If you have been giving this interpretation of Montgomery’s advice, it is bad advice. Kind of lightweight.
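    A minimal sketch of the probabilities quoted above, assuming a constant 1% error rate and independent claims (the function name is illustrative):

```python
# A sketch of the chance of seeing at least one bad claim in a sample,
# assuming a constant 1% error rate and independent claims.
def p_at_least_one(p, n):
    """P(at least one nonconforming unit in n) = 1 - (1 - p)^n."""
    return 1 - (1 - p) ** n

print(f"n=100: {p_at_least_one(0.01, 100):.3f}")  # ~0.634 (the 63% above)
print(f"n=500: {p_at_least_one(0.01, 500):.3f}")  # ~0.993 (np = 5, over 99%)
```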

    0
    #173739

    Michael Mead
    Participant

I told him a sample size of 200. The probability of finding at least one defective is estimated at .86.
    I know what the statistics books say. I just quoted Doug Montgomery. How long did that take? Let’s try another one. Although Grant and Leavenworth are not as clear as Montgomery, they state:
    “In order to make effective use of the control chart for fraction rejected as a help in process control, there must be some rejects in the sample observed. It is obvious that the better the quality, the larger the sample in order to find some rejects in the majority of the samples. If only 0.1% of the product is rejected, the sample must be at least 1000 before there will be an average of one reject per sample.” (7th edition, page 248)
    So it seems you want to waste a lot of resources on sampling 1000 observations while the defect rate is 10 times their calculation. (By the way, a 1000-piece sample with .999 capability has a probability of showing a defect of .63. My first recommendation of 100 is equivalent.) Did you read the book, or are you just citing personal preferences? What book are you quoting?

    0
    #173743

    Mikel
    Member

    Before someone points it out, my statement about OOC at 0 and 2 was incorrect – working off the top of my head while taking care of two munchkins – not a good idea.

    0
    #173744

    Mikel
    Member

    Mr. Mead
I’m not going to spend much time on you. You are but the latest of a long line who show up on here and start answering everyone’s questions. You think you are hot sxxt, but your knowledge and experience are mediocre. Your advice is lightweight and more textbook nonsense than experience. Your answer on the visual controls is a great example of that. It is clear you have never had to really do what you gave the guy advice on.
    Two ways to look at the question of sample size:
    1) If your objective is to not declare OOC on one defect, any sample size above 9 will satisfy that objective when p = .01. So why would you take 100 or 200 if 10 satisfies that objective?
    2) The formulas you use for the p chart assume a normal approximation of a Poisson distribution. The chart doesn’t assume normality, the formulas do – specifically, the calculation of the standard deviation. Go look in any stats book at the assumptions for using the normal to approximate the Poisson. They are simple: np greater than or equal to 5 and n(1-p) also greater than or equal to 5. An easy place to find this is in Juran’s handbook.
    You will see that in many instances Minitab gives warnings if the cell count or expected values are less than 5 – it comes from the same place.
    By the way, the original poster’s formula and question were exactly right. His sample size is 95 if you are willing to tolerate a defect rate as high as 3% not being seen.
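    A minimal sketch of point 1, assuming standard 3-sigma p-chart limits (the brute-force search and function name are my own):

```python
# A sketch verifying point 1: the smallest n at which a single defect
# (p_hat = 1/n) no longer falls above the 3-sigma upper control limit.
import math

def smallest_n_one_defect_in_control(p):
    n = 1
    while 1 / n > p + 3 * math.sqrt(p * (1 - p) / n):
        n += 1
    return n

print(smallest_n_one_defect_in_control(0.01))  # -> 10, i.e. any sample size above 9
```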

    0
    #173750

    anon
    Participant

I may have missed it, but I don’t see anyone talking about frequency here.
    From my work in call centres I would do this weekly unless, of course, you think the day of the week may have an effect on the rate, in which case daily – but I would only do daily until I could prove it either way.

    0
    #173778

    Michael Mead
    Participant

First, to Stan. This is not a Poisson distribution, it is a Bernoulli process (binomial distribution). What you referred to are the requirements for using the Poisson distribution to estimate the binomial. That is not necessary in this case.
    Next, to anon. The samples should be taken at intervals that allow the process to change. If there is ever a process shift, and I don’t think there would be, we would also consider the average run length, with the goal of minimizing the total quality cost. This is where the rate of change in the appraisal cost is equal to the rate of change in the failure cost.
    I have written a little about this; I call it marginal inspection theory. Two simple corollaries are these:
    1. If the cost of a unit failure is high, or can cause downstream damage, much inspection is needed.
    2. If the cost of inspection is high or requires destructive testing, use as little inspection as necessary.
    In all cases, sample size and frequency are based on the knowledge of the process.
    With your process, and given that it is probably operator-dependent, I suggest a smaller sample (200) done daily.  Daily, since it is possible that weekends, holidays, or other events can influence workers’ performance.
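    As a rough illustration of the average run length idea mentioned above, a minimal sketch assuming standard 3-sigma p-chart limits, n = 200, an in-control error rate of 1%, and a hypothetical shift to 2% (the shifted rate is an illustrative assumption, not a figure from the thread):

```python
# A rough sketch of the average run length (ARL) of a p-chart: how many samples
# it takes, on average, to signal after the error rate shifts. The shifted rate
# (2%) is an illustrative assumption, not a figure from the thread.
import math

def p_chart_arl(p0, p1, n):
    """ARL = 1 / P(defect count falls outside the 3-sigma limits under p1)."""
    sigma = math.sqrt(p0 * (1 - p0) / n)
    lcl, ucl = max(0.0, p0 - 3 * sigma), p0 + 3 * sigma
    p_inside = sum(
        math.comb(n, k) * p1**k * (1 - p1) ** (n - k)
        for k in range(n + 1)
        if lcl <= k / n <= ucl
    )
    return 1 / (1 - p_inside)

print(round(p_chart_arl(p0=0.01, p1=0.02, n=200), 1))  # ~9 samples to catch a doubling
```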

    0
    #173791

    Gastala
    Participant

Stan, if you cannot reply to posts in a professional manner, perhaps you should not reply at all. I visit this forum every couple of months and am sick and tired of reading your rude and abusive nonsense. Why don’t you get a life and leave this forum to people who have a genuine interest in Six Sigma?

    0

The forum ‘General’ is closed to new topics and replies.