Confidence/Reliability Sample Size Calculations
I am currently using confidence/reliability tables to determine sample sizes for ATTRIBUTE data (i.e. Pass/Fail) for medical device inspections.
For example, if I want to determine with 95% confidence and reliability that 95% of the lot will meet specifications (95%/95%), with number of accepted failures = 0, my sample size will be 59.
This link references the tables: http://rac.alionscience.com/Toolbox/p05.htm
However, I am trying to determine the sample size for VARIABLE data, with the same confidence and reliability, and cannot find a resource. Any information will help. Thanks!
JB,
I’ve spent a few years working in the medical device industry, supporting, among other areas, quality management. I spotted your posting and thought that, while it’s not a Six Sigma question per se, I might be able to help anyway. I was curious about your use of a reliability sampling plan supporting failure-free testing to evaluate attribute measures, so I reviewed my reliability reference materials and verified that the binomial distribution is correctly used in this application, just as it is in attributes inspection. For anyone looking in, the sample size for failure-free testing is easily determined from the following equation:
n = ln(1-confidence level) / ln reliability
given,
confidence = 0.95
reliability = 0.95
n = ln(1-0.95) / ln(0.95) = 58.4, which rounds up to 59 samples (round up, not down, so the stated confidence is actually met; this matches the table value you quoted)
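For anyone who would rather compute than read tables, here is a minimal Python sketch of the equation above (the function name is mine, not from any standard library):

```python
import math

def zero_failure_sample_size(confidence: float, reliability: float) -> int:
    """Smallest n satisfying reliability**n <= 1 - confidence,
    i.e. n >= ln(1 - confidence) / ln(reliability).
    Rounds up so the stated confidence level is actually achieved."""
    n = math.log(1.0 - confidence) / math.log(reliability)
    return math.ceil(n)

print(zero_failure_sample_size(0.95, 0.95))  # 59, the 95%/95% table value
```

Note the ceiling: truncating 58.4 to 58 samples would fall slightly short of 95% confidence.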
The application of the above equation is as follows: select n units, subject all of them to the same test at the same time, and note the number of units that fail. So, I assume you are subjecting medical devices to some accelerated test or other and noting the number of failures after the test duration completes. Am I correct in this assumption? If instead you are selecting samples and performing a pass/fail inspection, then perhaps you might want to reference the ANSI/ASQ Z1.4 standard on sampling by attributes. In either case, the FDA would not consider the Internet an acceptable source for referencing sampling plans during a possible audit, and you could receive a finding from them. I’m not trying to give you a hard time here, but I’ve lived in your shoes and don’t want you to be caught with your pants down, if you know what I mean.
Okay, let’s look closer at your sampling plan to find an equivalent variables plan. Your plan above is n = 59 and a = 0, an accept-on-zero plan. As a side point, the O-C curve for this plan is fairly steep, providing a reasonable chance of rejecting the inspection/test (not sure which applies for you) at defect levels less than 5%. The key points on the O-C curve are as follows:
AQL = 0.09% (Prob of acceptance = 95%)
IQ = 1.17 % (Prob of acceptance = 50%)
LTPD = 3.82% (Prob of acceptance = 10%)
RQL(0.05) = 4.95% (Prob of acceptance = 5%)
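For an accept-on-zero plan the O-C curve has a simple closed form: the probability of acceptance is just the probability that all n units pass, Pa(p) = (1-p)^n. A small sketch reproducing the four points listed above (function name is mine):

```python
def prob_accept(p: float, n: int = 59) -> float:
    """Probability of accepting the lot under an n=59, a=0 plan:
    every one of the n sampled units must pass."""
    return (1.0 - p) ** n

# Defect levels from the key O-C curve points above
for p in (0.0009, 0.0117, 0.0382, 0.0495):
    print(f"defect level {p:.2%}: Pa = {prob_accept(p):.1%}")
```

Running this recovers approximately 95%, 50%, 10%, and 5% acceptance probabilities, matching the AQL, IQ, LTPD, and RQL points.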
With the values above you can sketch out a rough O-C curve and connect the dots to view its shape. An equivalent variables sampling plan would be n = 19 with k1 = 2.3883 for a one-sided tolerance, or n = 19 with k2 = 2.784 for a two-sided tolerance. The first plan is exact; the second is approximate, but close enough for government work. These plans assume the measurements follow a normal distribution with unknown standard deviation. To use these plans, select 19 samples and perform the measurement, then compute a one-sided statistical tolerance bound as X-bar + k1(SD) or X-bar - k1(SD), as required, or the two-sided limits X-bar +/- k2(SD). If this range is within the established specifications, then you pass the inspection; if one or both limits fall outside the established specification, then the sample fails the test/inspection, whichever applies.
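The acceptance check described above can be sketched in a few lines of Python. This is a one-sided (upper spec) illustration only, with made-up measurement data; the k1 value comes from the plan quoted above:

```python
import statistics

def variables_accept_one_sided(values, upper_spec, k1=2.3883):
    """Accept if X-bar + k1*SD falls within the upper specification
    limit (normal data, sigma unknown), per the n=19, k1=2.3883 plan."""
    xbar = statistics.mean(values)
    s = statistics.stdev(values)  # sample standard deviation (n-1 divisor)
    return xbar + k1 * s <= upper_spec

# Hypothetical measurements from 19 units against an upper spec of 10.0
sample = [9.1, 9.0, 9.2, 8.9, 9.1, 9.0, 9.3, 9.1, 9.0, 8.8,
          9.2, 9.1, 9.0, 9.1, 8.9, 9.2, 9.0, 9.1, 9.0]
print(variables_accept_one_sided(sample, upper_spec=10.0))
```

For a lower spec limit you would check X-bar - k1*SD >= lower_spec instead, and for a two-sided spec, both limits with k2 in place of k1.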
I should mention that you should not use my advice until you verify it with a qualified member of your quality team, preferably someone who has statistical training in acceptance sampling. I suggest this for your protection. Don’t believe anything you see on this site without first verifying it yourself, no matter how professional it sounds. That goes for anything I provide in this posting. Again, this is for your professional protection.
For additional details on the information provided, please go to this link: http://www.variation.com/library.html. You will find reputable articles on acceptance sampling from Dr. Wayne Taylor, for whom I used to work.
References:
Experimental Statistics Handbook, National Bureau of Standards (now NIST), 1991, PB93-196038, pp. T11-T16. (Most likely out of print, but you may get lucky and find a used copy at Amazon or another online source.)
Tables for Normal Tolerance Limits, Sampling Plans, and Screening, R.E. Odeh and D.B. Owen, Marcel Dekker, Inc., 1980, pp. 17-143. (This was the source reference for the Experimental Statistics Handbook, but it is most likely also out of print; if you're interested, you may be able to get a reprint from UMI Books on Demand via http://www.astrologos.org)
Software:
Sampling Plan Analyzer, Dr. W.A. Taylor, validated for the regulated industry, compatible with all Windows OSs, located at this link: http://www.variation.com/spa/index.html (I receive no monetary or other remuneration for my recommendation)
Good luck,
Ken
Ken,
Your information was very helpful to me, thank you. Where did the formula n = ln(1-confidence level) / ln(reliability) come from? I have a registered copy of SPA but can’t figure out how to derive this information.
Jim,

It has been quite some time since looking at this post. Perhaps as much as 3 years! First, you won’t find a derivation via Dr. Taylor’s SPA software. A better place would be a text reference on reliability engineering. The sample size estimate I provided was for a special case of one-shot reliability testing where one observes zero failures and wants to make a claim on the lower bound for the expected reliability of tested units. A key provision for this testing is that the test conditions are essentially identical for each unit. Given this condition, the point estimate for the reliability would be:

R = (n - r) / n

where,
n = number of units tested
r = number of observed failures

The point estimate is a very limited measure of performance, and as such it’s more useful to compute the lower 100(1-alpha)% confidence bound for the expected reliability. I am aware of two ways of computing this estimate using different reference distributions:

1) Normal reference:
Rl = (n - r - 1) / (n + Za(n(r+1)/(n-r-2))^(1/2))
via Ireson and Coombs, Handbook of Reliability Engineering, 1968

2) F-distribution reference:
Rl = (n - r) / ((n - r) + (r + 1)F(a)(2(r+1), 2(n-r)))
via a manuscript reference from past ASQ training

where,
Rl = the lower 100(1-alpha)% confidence bound for reliability
n = sample size
r = number of failures observed
F(a) = the F-value for a given alpha and degrees of freedom

Given expression #2, if the number of failures observed after testing is zero, the expression reduces to:

Rl = (alpha)^(1/n)   (this is the expression I used)

The derivation of the sampling expression from the above is straightforward when you substitute alpha as (1 - confidence level). I show the derivation rather crudely below:

Rl = (1 - gamma)^(1/n)
Ln(Rl) = (1/n)Ln(1 - gamma)
1/n = Ln(Rl) / Ln(1 - gamma)
n = Ln(1 - gamma) / Ln(Rl), with gamma = confidence level

so,
n = Ln(1 - confidence level) / Ln(Reliability)

Hope this helps,
Ken
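A quick numerical check of the zero-failure special case above, sketched in Python (variable names are mine):

```python
import math

confidence = 0.95
n = 59

# Lower reliability bound with zero failures: Rl = alpha^(1/n), alpha = 1 - confidence
alpha = 1.0 - confidence
r_lower = alpha ** (1.0 / n)
print(f"Rl = {r_lower:.4f}")  # roughly 0.95 for n = 59

# Round-trip: the sample-size formula recovers n from the bound
n_back = math.log(1.0 - confidence) / math.log(r_lower)
print(round(n_back))  # 59
```

The round-trip confirms the algebra: starting from Rl = alpha^(1/n) and solving for n gives back the posted sample-size formula exactly.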
For pass/fail data, the binomial expansion is the one to use (two possible outcomes). In its simplest form, you would test with 0 failures using the equation R = (1-C)^(1/n), where R is reliability, C is confidence, and n is sample size. The same expansion can be used to include failures, but I would recommend using a website such as
http://src.alionscience.com/cgi-src/calc.pl?pval=0.1&nval=&rval=4&clval=95
which will do the calculation for you.
When looking at continuous data and you want to do something similar, look at ISO 11969, which will give you tables that enable you to find predicted limits at given confidences and reliabilities, both for parametric and non-parametric data.
You will also see that the tables for non-parametric data coincide with the binomial expansion with zero failures.