Variable and Attribute Verification and Validation Plans
Tagged: Attribute, Sampling plan, Statistical, validation, Variable, Verification
This topic contains 2 replies, has 3 voices, and was last updated by Sergey 2 weeks, 3 days ago.
Hi all,
Sampling plans for verification and validation (i.e. OQ, PQ, PPQ) typically fall under either variable or attribute responses.
In the case of attribute (i.e. pass/fail) sampling, there is no issue in determining the required test sample size: non-parametric binomial success-run (Bayesian) formulas can be used to find the sample size for representative samples at a confidence of “X” and a reliability of “Y”.
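For completeness, the success-run sample size is usually computed as n = ln(1 − C) / ln(R); a minimal sketch (the function name is mine):

```python
import math

def success_run_n(confidence: float, reliability: float) -> int:
    """Zero-failure (success-run) sample size: the smallest n such that
    observing n passes demonstrates `reliability` at `confidence`,
    i.e. n = ln(1 - C) / ln(R), rounded up."""
    n = math.log(1.0 - confidence) / math.log(reliability)
    return math.ceil(n)

# The familiar 95% confidence / 95% reliability plan: 59 samples, zero failures
print(success_run_n(0.95, 0.95))  # -> 59
```

The same function reproduces other common plans, e.g. 95% confidence / 90% reliability gives 29 samples.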
Variable data poses a different problem. I understand normal tolerance intervals can be used, but the issue there is that the “k” factor, in conjunction with the standard deviation, is used to decide whether the process is capable, rather than a Cpk or Ppk.
What I am looking for in a variable sampling plan is the following:
1) a formula that accounts for confidence level and reliability (or equivalent)
2) no need for historical variance/std. deviation, or mean
3) no previously gathered sample data; an upfront approach similar to the non-parametric attribute test
4) tabulated factors can be used
5) Cpk or Ppk must be the end measure of capability.
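Regarding the “k” factor mentioned above: one-sided normal tolerance-interval k-factors can be computed from confidence and reliability alone, with no historical data, which covers points 1) through 4). A minimal Python sketch using a classical approximation (the function name is mine; for critical work, check against exact tabulated values):

```python
import math
from statistics import NormalDist

def k_one_sided(n: int, confidence: float, reliability: float) -> float:
    """Approximate one-sided tolerance-interval k-factor for a normal
    population: x_bar + k*s (or x_bar - k*s) covers a `reliability`
    fraction of the population with the given confidence.
    Classical normal approximation, reasonable for n >= ~10."""
    z_c = NormalDist().inv_cdf(confidence)   # confidence quantile
    z_r = NormalDist().inv_cdf(reliability)  # coverage (reliability) quantile
    a = 1.0 - z_c**2 / (2.0 * (n - 1))
    b = z_r**2 - z_c**2 / n
    return (z_r + math.sqrt(z_r**2 - a * b)) / a

# n = 30 at 95% confidence / 95% reliability: k is about 2.2,
# close to the exact tabulated value (~2.22)
print(round(k_one_sided(30, 0.95, 0.95), 3))
```

This also connects to point 5): for a one-sided upper limit, Ppk = (USL − x̄)/(3s), so requiring x̄ + k·s ≤ USL is equivalent to requiring Ppk ≥ k/3.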
A number of acceptance sampling plans exist but these seem more appropriate for incoming inspection of raw materials, etc.
I have no interest in incoming inspection; I am purely interested in verification and validation variable sampling plans.
Help is greatly appreciated.
Brendan C.
@brendans14a If the sampling plans you are referring to are the MIL standards, i.e. MIL-STD-105 and MIL-STD-414, they were developed to protect suppliers.
I am not really sure why this is so difficult for you, but it looks like you are spending a lot of time looking at formulas. How about doing some reading on Operating Characteristic (OC) curves so you understand the statistics behind sampling?
Just my opinion.
Sample size always depends on three things: process behavior, granularity, and risk. Process behavior is described by the standard deviation or the current defect level. Granularity is how precisely you want to measure your characteristic. Risk is managed through alpha and beta. Given your requirements (which read as “I know nothing about my process, but could you please advise?” :)), a rule of thumb for continuous data applies: take 100 samples in time order and you will get a good estimate of the mean, standard deviation, and Cpk/Ppk.
If you are not happy with that, a simple formula may be useful: n = (1.96 * std / delta)^2,
where 1.96 comes from alpha = 5% (I am not sure about power here, but it should be OK); std and delta (granularity) you can estimate from the specification. For example, suppose your target is 100 with a tolerance of 20, and you ultimately need a Cpk of 1.33. Then you might estimate a maximum std of about 2.5 and a granularity of about 0.5. As a result, the sample size would be 96, the same as the first rule of thumb!
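The arithmetic in this example can be checked directly; a short sketch, assuming the tolerance of 20 is the full specification width (i.e. ±10 around the target of 100) and a centred process:

```python
Z = 1.96            # two-sided 95% normal quantile (alpha = 5%)
target_cpk = 1.33
tolerance = 20.0    # assumed full spec width, +/-10 around the target

# A centred process has Cpk = (tolerance/2) / (3*std), so the largest
# allowable std is (tolerance/2) / (3*1.33), roughly 2.5 as estimated.
max_std = (tolerance / 2) / (3 * target_cpk)

std, delta = 2.5, 0.5           # the rounded estimates from the post
n = (Z * std / delta) ** 2      # n = (1.96*std/delta)^2
print(round(max_std, 2), round(n))  # -> 2.51 96
```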
The next level would be to use standard software, but you will need to put more knowledge into the calculation regarding std, granularity, and power. The calculation can be done separately for the mean and the variance, taking the worst case.
Hopefully it helps.