When a process is unstable, Six Sigma team members often ask, “How much data do I need to establish the baseline?” There is no valid statistical calculation for sample size in this situation, but that is little comfort when you are trying to develop a sampling plan in the early stages of your Six Sigma project.
It is possible to apply common sense to the problem and to judge whether the samples taken are likely to give a reliable result for process capability – even to offer a range within which the true value probably lies. Here is how to go about it, including an Excel spreadsheet that can be used as a template for the calculation.
Let’s start with some basic guidelines for gathering a representative sample from a process that is subject to special causes. Following these will help you avoid some of the most common pitfalls:
The best way to evaluate this is to plot the way the average capability varies as you gather your data (we will call this the cumulative average). This enables you to get at least an intuitive feel for when you have enough data – as the cumulative average flattens out, despite the special causes that may occur from time to time, you start to build some confidence that you have seen ‘enough’ data. If the graph remains unstable or continues to trend up or down, this indicates that the more recent samples are above or below the level you have previously seen, and you need to continue gathering data until the cumulative average has stabilized.
How long should you wait, after the cumulative average has roughly leveled off, before being satisfied that you’ve seen enough? You will need to use your own judgment and knowledge of the behaviour of the process to decide. You might have seen the graph look perfectly level for a month because a problem that crops up every few weeks has not occurred during that time. This would not be sufficient to conclude that you have taken enough data. The best guideline is: If the cumulative average capability seems to be roughly stable over a period when the special causes are fairly representative, you should be safe to conclude your baselining study.
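If you want to see the calculation behind such a chart, here is a minimal Python sketch; the sample data is invented purely for illustration, so substitute your own OK/defective results in time order. It computes the running (cumulative) average percent defective as each sample arrives and plots it so you can watch for the curve to level off.

```python
# Minimal sketch: cumulative average percent defective, plotted sample by sample.
# The 'results' list is invented for illustration: 1 = defective, 0 = OK,
# recorded in the order the samples were taken.
import matplotlib.pyplot as plt

results = [0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0]

cumulative_pct_defective = []
defects = 0
for i, outcome in enumerate(results, start=1):
    defects += outcome
    cumulative_pct_defective.append(100.0 * defects / i)

# Watch for this curve to flatten out: that is the intuitive signal that
# enough data has been gathered, despite occasional special causes.
plt.plot(range(1, len(results) + 1), cumulative_pct_defective, marker="o")
plt.xlabel("Sample number")
plt.ylabel("Cumulative average % defective")
plt.title("Cumulative average percent defective")
plt.show()
```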
The attached Excel spreadsheet makes it easy to look at the cumulative average percent defective. It is based on attribute data because, when dealing with special causes, the simplest way to determine process capability is usually to count samples as OK or defective. Here are the steps to completing it:
Below is a sample graph produced by the Excel spreadsheet. It relates to documentation processing: for each document, the time required to process it was measured, and a defect was recorded if this exceeded the company’s standard. In the example, the estimated percentage defective is 50.8 percent, and we expect the true value to lie between 45 percent and 58 percent. For a stable process with about 50 percent defectives, we would need about 240 samples to obtain a confidence interval of ± 6.5 percent. Here, it took 360 samples to reach an estimate range of ± 6.5 percent; the special cause variation drove the necessary increase in sample size.
1. The formula used to calculate the sample size required for population sampling is n = 4p(1-p) / d²
Where: p is the proportion defective, and d is the maximum error at a 95% confidence level
For example, if you believe the proportion defective is 0.05 and need your estimate to be accurate to within 0.02, your sample size will need to be at least 4 x 0.05 x 0.95 / 0.02² = 0.19 / 0.0004 = 475.
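As a quick check of the arithmetic, here is a small Python sketch of the same formula (the function name is ours, not part of the spreadsheet). It reproduces the 475-sample example above and the roughly 240 samples quoted earlier for a stable process at about 50 percent defective with a ± 6.5 percent interval.

```python
import math

def required_sample_size(p: float, d: float) -> int:
    """Samples needed to estimate a proportion p to within +/- d at ~95% confidence: n = 4p(1-p)/d^2."""
    return math.ceil(4 * p * (1 - p) / d ** 2)

# Example from the note above: p = 0.05, maximum error d = 0.02
print(required_sample_size(0.05, 0.02))   # 475

# Stable-process figure quoted for the sample graph: p = 0.5, d = 0.065
print(required_sample_size(0.5, 0.065))   # 237, i.e. roughly 240
```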