iSixSigma

small sample size and decisions


  • #30967

    mcleod
    Member

    I'm involved in a product testing process that currently tests 10 samples of a given product. From these we record resistance values and compute the average, standard deviation, and maximum.
    The current criterion for passing the test is that the maximum seen in the sample of 10 is less than the specification. The group is essentially using the sample maximum as a stand-in for the average plus 3 sigma, to get an idea of the maximum expected from the population. Obviously, the sample maximum is NOT a good estimate of the population maximum, because it keeps growing as the sample size increases.
    The problem with using the average plus 3 sigma directly with a sample size of 10 is that the estimate itself has high variance at such a small sample size.
    What can be done? Increasing the sample size and then using the average plus 3 sigma is cost prohibitive.
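    To make the instability concrete, here is a quick Python simulation sketch. The true mean of 100 and sigma of 5 are made-up values; any choice shows the same effect.

        import numpy as np

        rng = np.random.default_rng(0)
        true_mean, true_sigma, n = 100.0, 5.0, 10

        # Re-estimate "average + 3 sigma" from many independent samples of 10
        estimates = []
        for _ in range(10_000):
            sample = rng.normal(true_mean, true_sigma, size=n)
            estimates.append(sample.mean() + 3 * sample.std(ddof=1))
        estimates = np.array(estimates)

        print(f"true average + 3 sigma: {true_mean + 3 * true_sigma:.1f}")
        print(f"estimates from n = 10:  {estimates.mean():.1f} +/- {estimates.std():.1f}")
        # With n = 10 the estimate routinely swings by a sigma-unit or more,
        # which is exactly the instability described above.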
    Thanks,
    Scott

    #81307

    Gabriel
    Participant

    If the process is usually stable and capable (Cpk > 1 according to your criterion of maximum = average + 3S), you could chart your process.
    By now, you probably have enough historical data to chart several historical points (subgroup size 10) on an Xbar-R (or Xbar-S) chart, plot a histogram, calculate the limits, assess stability, and calculate Cpk. If Cpk is greater than 1 (remember that stability is a prerequisite for the Cpk calculation), then you can say that the average + 3 sigma of your process (what you called the "maximum") is below the specification.
    Then, the next time you take a sample of 10 items, instead of calculating the "maximum" from it, you plot Xbar and R (or S) on the chart. If the charts show no out-of-control points, there is no reason to think that the process distribution has changed, so you can assume the process average + 3S value remains the same, even though the sample's average + 3S will differ each time due to sampling error. Furthermore, an Xbar-R (or S) chart is quite sensitive with a subgroup size of 10, so small shifts in the process average can be detected.
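    For illustration, here is a rough Python sketch of those calculations. The subgroup data and the spec limit of 120 are invented; the chart constants are the standard ones for subgroup size 10.

        import numpy as np

        # Hypothetical historical data: 25 subgroups of 10 resistance values.
        # Replace with your own measurements (rows = subgroups).
        subgroups = np.random.default_rng(1).normal(100.0, 5.0, size=(25, 10))
        USL = 120.0  # invented upper specification limit

        # Standard control-chart constants for subgroup size n = 10
        A2, D3, D4, d2 = 0.308, 0.223, 1.777, 3.078

        xbars = subgroups.mean(axis=1)
        ranges = subgroups.max(axis=1) - subgroups.min(axis=1)
        xbarbar, rbar = xbars.mean(), ranges.mean()

        # Control limits: points outside these indicate an unstable process
        print(f"Xbar chart: {xbarbar - A2 * rbar:.2f} to {xbarbar + A2 * rbar:.2f}")
        print(f"R chart:    {D3 * rbar:.2f} to {D4 * rbar:.2f}")

        # Cpk (upper side only, since the spec is a maximum); only
        # meaningful once the charts show stability
        sigma_within = rbar / d2
        cpk = (USL - xbarbar) / (3 * sigma_within)
        print(f"Cpk: {cpk:.2f}")  # Cpk > 1: average + 3 sigma is inside spec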
    Note that we are using historical data to qualify present results. This is only valid if the process always behaves the same way (stability). What if it doesn't? Well, you simply cannot get enough information if there is not enough data. Then you can accept the risk, accept the extra cost of a larger sample size, or build in a safety margin (such as allowing a lower maximum, or using 4 sigma to calculate it).
    We have a case where we charted the process and found that the average changes from batch to batch, but the variation remains the same. What we do is take 10 samples, calculate the average and R, and plot only R on an R chart. We do not plot Xbar because we know beforehand that it is not stable. We then use the sample average as the estimator of the batch average but, instead of using the sample's standard deviation, we use the process standard deviation. In this way the sampling variation is limited to the average (which is about 1/3 of the variation of the population), while the variation of the standard deviation is eliminated, because as long as R remains stable we can say that it hasn't changed.
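    As a rough sketch of that scheme (the Rbar value and the batch measurements below are invented for illustration):

        import numpy as np

        d2, D3, D4 = 3.078, 0.223, 1.777  # constants for subgroup size 10
        rbar = 12.0                        # invented Rbar from the historical R chart
        sigma_process = rbar / d2          # long-run process sigma, not sample sigma

        # One new batch sample of 10 (invented values)
        batch = np.array([98.2, 101.5, 99.7, 100.3, 97.9,
                          102.1, 100.8, 99.4, 101.0, 98.6])
        r = batch.max() - batch.min()

        if D3 * rbar <= r <= D4 * rbar:
            # R in control: no evidence the spread changed, so reuse sigma_process
            print(f"batch average + 3 sigma: {batch.mean() + 3 * sigma_process:.2f}")
        else:
            print("R out of control: the process spread may have changed")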
    Hope it helps

    #81308

    Dave Strouse
    Participant

    Scott –
    It appears you desire a prediction interval.
    Look at section 44.47 of Juran's Handbook, fifth edition.
    Compare the spec to the limit of the prediction interval.
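    For example, here is a rough Python sketch of an upper one-sided prediction limit for one future unit (I have not checked this against Juran's exact formulation; the sample values, spec, and confidence level are invented):

        import numpy as np
        from scipy import stats

        sample = np.array([98.2, 101.5, 99.7, 100.3, 97.9,
                           102.1, 100.8, 99.4, 101.0, 98.6])  # invented data
        USL = 110.0   # invented specification limit
        alpha = 0.05  # 95% one-sided confidence

        n = len(sample)
        xbar, s = sample.mean(), sample.std(ddof=1)
        t = stats.t.ppf(1 - alpha, df=n - 1)

        # Upper prediction limit for a single future observation:
        # xbar + t * s * sqrt(1 + 1/n)
        upl = xbar + t * s * np.sqrt(1 + 1 / n)
        print(f"upper prediction limit: {upl:.2f}  (pass if <= {USL})")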
     

