# Minimum sample size for quality checks

iSixSigma Forums › General Forums › Minimum sample size for quality checks

Viewing 7 posts - 1 through 7 (of 7 total)
#255574

vsg1990
Participant

Hello,

I wondered if someone could help. I have a team that does quality checks on submitted invoices. The total population of invoices received last year was 45,000. The team quality-checked 30,000 of them and found a defect rate of 8%.

I think the volume of quality checks (currently 30,000) is very high. Can anyone help me calculate what the quality-check sample size should be, given that we know the defect rate is 8%?

Many thanks

#255577

Robert Butler
Participant

If the defects are actually random and are randomly distributed throughout the year, then, from time to time, you could take a random sample of 50 invoices and check whether you have approximately 4 errors. To determine how often you should take a random sample, take the data from the last year (we are assuming it is in time order), draw a random sample of 50 for each week in that period, determine the error rate, and make a time plot of the results. If the time plot looks random (no trending), repeat the process for each month and then for each quarter and see what you see. Since you know your error rate is 8%, you might be able to get away with a single sample per quarter.

Caveat: given your current high sampling rate, my guess is that when you analyze the results of your weekly random samples you are going to see excursions well away from the yearly grand average of 8%. If that is the case, then instead of trying to come up with a reduced sampling plan, your first job would be to understand the underlying causes of these excursions and eliminate them.
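The weekly sampling procedure described above can be sketched in a few lines. This is only an illustration: the simulated invoice data (a 0/1 defect flag per invoice, 8% defect rate) is an assumption standing in for the real records, and the names are hypothetical.

```python
# Sketch of the weekly-sampling check suggested above. The simulated
# data is a stand-in for real invoice records (1 = defect, 0 = ok).
import random

random.seed(42)

# Simulate last year's invoices: 45,000 invoices, 8% defect rate.
invoices = [1 if random.random() < 0.08 else 0 for _ in range(45_000)]

# Split the year into 52 weeks (time order assumed) and draw a random
# sample of 50 invoices from each week.
week_len = len(invoices) // 52
weekly_rates = []
for w in range(52):
    week = invoices[w * week_len:(w + 1) * week_len]
    sample = random.sample(week, 50)
    weekly_rates.append(sum(sample) / 50)

# A crude "time plot": print the weekly error rates so trends or
# excursions away from the 8% grand average become visible.
for w, rate in enumerate(weekly_rates, start=1):
    print(f"week {w:2d}: {rate:.0%}")
```

On real data you would replace the simulated list with the actual per-invoice defect flags and look at the printed (or plotted) series for trending before deciding how far to cut the sampling frequency.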

#255578

Strayer
Participant

Robert Butler is a statistics expert so I’d take his advice. My only comment is that I hope you looked up the standard formula for computing sample size. You can find it on this site or elsewhere with a quick search. There are also free sample size calculators if you’d rather not do the math.
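For reference, the standard formula mentioned here is n₀ = z²·p·(1−p)/E², usually followed by a finite-population correction when the population size is known. A minimal sketch, assuming a ±1 percentage point margin of error and 95% confidence (both illustrative choices, not from the thread):

```python
# Hedged sketch of the standard sample-size formula for estimating a
# proportion, with a finite-population correction for the known
# population of 45,000 invoices. E = 0.01 is an assumed margin of error.
import math
from statistics import NormalDist

N = 45_000        # total invoices last year
p = 0.08          # known defect rate
E = 0.01          # desired margin of error (+/- 1 percentage point)
z = NormalDist().inv_cdf(0.975)   # ~1.96 for 95% confidence

n0 = z**2 * p * (1 - p) / E**2    # infinite-population sample size
n = n0 / (1 + (n0 - 1) / N)       # finite-population correction
print(math.ceil(n0), math.ceil(n))
```

Under these assumptions the required sample is in the low thousands, far below the 30,000 checks currently being done; tightening or loosening E moves the number substantially.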

#255580

Robert Butler
Participant

Something else to consider – are you sure the issue is one of just confirming a known error rate of 8%, and not one of knowing, with a high degree of certainty, whether the error rate has changed from, say, 8% to 9%? If it is the latter, then you would need about 30,000 samples to confirm this kind of shift with 80% power and an alpha of .05.
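A rough normal-approximation calculation for detecting an 8% → 9% shift (two independent samples, two-sided alpha = .05, 80% power) can be sketched as below. This is illustrative only; the exact figure depends on the test chosen (one- vs two-sample, exact vs approximate).

```python
# Normal-approximation sample size for detecting a shift in a
# proportion from 8% to 9% (two independent groups, alpha = .05
# two-sided, 80% power). A sketch, not a definitive calculation.
import math
from statistics import NormalDist

p1, p2 = 0.08, 0.09
alpha, power = 0.05, 0.80
z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96
z_b = NormalDist().inv_cdf(power)           # ~0.84

p_bar = (p1 + p2) / 2
n_per_group = (
    (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
     + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    / (p2 - p1) ** 2
)
# Two groups of this size together run into the tens of thousands,
# the same order of magnitude as the figure quoted above.
print(math.ceil(n_per_group))
```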

#255658

Sugmadeek
Participant

Don’t know

#255746

ICTIME
Participant

It is best to use a simple algorithm in your favorite coding language; I prefer RStudio for this type of analysis. Take a sample of roughly 10 percent of the total population, then resample from it thousands of times with the code. That will provide the detail you are looking for. Here is some sample code if you want to try it – a sampling algorithm I designed for this very purpose. RStudio is free to use.

library(rafalib)   # for mypar() and popsd()

# datraw: the raw defect indicators (1 = defect, 0 = ok). Simulated
# here as a stand-in; replace with your real invoice data.
set.seed(1)
datraw <- rbinom(45000, 1, 0.08)

# Resample 30 invoices 10,000 times and record each sample mean.
n <- 10000
averages5 <- vector("numeric", n)
for (i in 1:n) {
  X <- sample(datraw, 30)
  averages5[i] <- mean(X)
}

# Plot the resampling distribution: density, normal Q-Q, histograms.
mypar(2, 2)
z <- (averages5 - mean(averages5)) / popsd(averages5)
d <- density(averages5)
plot(d)
qqnorm(averages5)
qqline(averages5)
hist(averages5)
ZZ <- 1 - pnorm(z)
hist(ZZ)

mean(averages5)   # should sit close to the 8% defect rate

# One-sample t-test of the resampled means against the raw-data mean.
t.test(averages5, mu = mean(datraw))

#255769

kathilee
Participant

Thanks for these recommendations.
