# Sample Size for Validation


- This topic has 8 replies, 5 voices, and was last updated 10 years, 6 months ago by Jonathon Andell.

- October 4, 2004 at 9:44 pm #25090
Is there a known method for establishing a sample size for validation purposes when a product is brand new? How would a person go about “calculating” a required sample size for determining stability or any statistical trends of a process when a product is new?

Thanks for the help.

-Mark

- October 5, 2004 at 2:09 pm #60310

Mark,

If you have a similar product and/or process, you can use the data (standard deviation) from that to estimate a sample size. Also, if you have any prototype or sample data, you can use that as well but lean on the conservative side.

Sheri

- October 6, 2004 at 2:08 pm #60314

First, you need to know the standard deviation (sigma) of the new product. Test the hypothesis that the new product's mean deviates from the target by the value E. Choose the confidence level for the test (say 95%) and its corresponding Z value. Here is the sample size equation:

n = Z² × σ² / E²

- October 7, 2004 at 3:10 am #60318

Mark,

Your question is key to any process improvement activity. We’ve all been in your shoes at one time or another. The answer is: “It Depends!” Let me explain further:

1. If you are trying to understand the characteristics of a new product or process, any data at all is precious! With just a few samples, you can begin to understand the central tendency (mean, median, mode) and variation (range, standard deviation, variance, etc.). Control charts are a great way to present your data in an easy-to-understand, graphical format. Use whatever data you have (or can easily get) and build your knowledge from there. If you only have a few data points, use them all! After 20-30 samples, you can feel confident in setting control limits as a basis for understanding common versus special cause events. By the way, be sure to perform a measurement system evaluation (MSE) as well on the gage or method you use to collect your sample data. You want to be sure that the measurement system is capable.

If getting "data samples" is expensive, be judicious in your selection so you can learn as much as possible about the various sources of variation (manufacturing lots or plants, customer application, age of product, etc.).

2. How much risk are you willing to take in drawing conclusions from your sample data? If you are looking for a gross improvement in your new product (compared to previous models, for example) and are willing to assume some moderate risk (say 5-10%) of calling your new product significantly different when it is not, you can get by with fewer samples. This technique is known as "Power and Sample Size" in many statistical analysis software applications. If, on the other hand, you are looking to validate small levels of improvement, you will need a larger sample size.
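As a sketch of the "Power and Sample Size" idea, the standard normal-approximation formula for detecting a shift in a mean is shown below; the sigma, delta, alpha, and power values are purely illustrative, not from any real product.

```python
import math
from statistics import NormalDist

def sample_size_mean(sigma, delta, alpha=0.05, power=0.90):
    """Approximate n for a one-sample, two-sided test to detect a shift
    of `delta` in the mean when the process standard deviation is `sigma`."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~1.28 for 90% power
    return math.ceil(((z_alpha + z_beta) * sigma / delta) ** 2)

# Illustrative numbers: sigma = 2 units, want to detect a 1-unit shift.
print(sample_size_mean(2.0, 1.0))              # → 43
print(sample_size_mean(2.0, 1.0, power=0.80))  # → 32
```

Note how relaxing power from 90% to 80% (accepting more beta risk) cuts the required sample size, which is exactly the risk trade-off described above.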

This whole sample size question should be approached from the practical side as well. What are the consequences of making a bad decision based on your sample collection and analysis? If the consequences are small (say, for instance, you are in the early stages of product development and you need some initial indicators), you may take more risk. If the decision that will be made is BIG (the introduction of a new vaccine to a general population, for example), you will want to minimize the risk of making a bad decision. In either case, a good continuous improvement practitioner should always state the assumptions and risks quantitatively to the stakeholders/sponsors of your project. When you present your findings, you can let them "participate" in the decision-making process.

Hope this helps you get started. Sorry for the lengthy response, but "It Depends" is a common theme in our continuous improvement world!

Good Luck,

Mark

- October 7, 2004 at 1:36 pm #60320

Mark,

Thank you for the response. This does seem like the thought process I follow prior to determining an appropriate sample size for my validations. I was more curious to know if there was some sort of equation (marketing perhaps) that I could use as a more "scientific" way of calculating the sample size requirement for a given validation. All the products "made" by my company are implantable, so their risk level is high. I would like to explain how I "justify" the sample sizes in my validations and welcome your insight on my process.

Based on the variability of historically "similar" products, a project team determines an appropriate initial sample size (almost always <10). This may seem low, but our annual sales per product are not astronomical and the products are extremely valuable. After testing is conducted, the equation n = (1.96σ/E)² is used to determine whether the actual sample size meets or exceeds the calculated sample size. This is done for each test conducted during the validation: a calculated sample size is determined for each test and compared to the actual sample size. If any test's sample size is deemed unacceptable, an additional sample is processed and tested. This process is repeated until the sample size requirement is fulfilled.
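That post-hoc check can be sketched as follows; the measurements and the tolerated error E here are hypothetical, chosen only for illustration.

```python
import math
from statistics import stdev

def required_n(sigma, E, z=1.96):
    """Post-hoc sample size check: n = (z * sigma / E)^2, where E is the
    tolerated error on the mean and z = 1.96 for 95% confidence."""
    return math.ceil((z * sigma / E) ** 2)

# Hypothetical test results (units arbitrary) and tolerated error.
measurements = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]
E = 0.1
n_required = required_n(stdev(measurements), E)
if len(measurements) >= n_required:
    print("actual sample size is acceptable")
else:
    print(f"process and test {n_required - len(measurements)} more unit(s)")
```

With these made-up numbers the check fails (14 units required versus 6 tested), which triggers the "process an additional sample and repeat" loop described above.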

Feedback is Welcomed,

Mark

- October 12, 2004 at 5:02 pm #60322

Mark,

From a product life cycle perspective, when you say "validation testing," I'm assuming you are referring to verifying new product performance prior to licensed release (as opposed to ongoing quality verification)? Are the sampled units considered saleable product after testing? Are the critical-to-quality (CTQ) measurements used in your testing variable data, or are you dealing with attribute characteristics (like pass/fail, good/better/best, etc.)? What basic decisions are you trying to make from your validation sample testing? Answers to these questions will help me provide a better answer to your question about sampling.

Mark

- October 12, 2004 at 5:30 pm #60323

Let me answer each of your questions…

I’m assuming you are referring to verifying new product performance prior to licensed release… That is true, and (to answer your next question) the units are not saleable product after testing. The validation testing is conducted prior to any market release of a particular product.

Are the critical-to-quality (CTQ) measurements used in your testing variable data or are you dealing with attribute characteristics… All data compiled during validation testing that requires this sample size calculation is quantifiable (variable) data as opposed to qualitative (pass/fail, good/better/best). Validations cannot be performed solely on the basis of qualitative (subjective) parameters.

What basic decisions are you trying to make from your validation sample testing… The variable characteristics of my products that require quantification center on the performance or physical characteristics of a product (chemical residue, mechanical strength, sterility, bioburden, cellular viability, leaching, shelf life, etc.). Statistical methods are employed to verify that a sample set of data demonstrates the process/product performs within a pre-determined range (acceptance criteria).
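One minimal way to sketch that "performs within a pre-determined range" check is shown below. This is a crude normal-approximation interval, not a formal t-interval or tolerance interval, and every number in it is hypothetical.

```python
import math
from statistics import mean, stdev, NormalDist

def mean_ci_within(data, low, high, conf=0.95):
    """Check whether a normal-approximation confidence interval for the
    sample mean falls entirely inside the acceptance range [low, high]."""
    m, s, n = mean(data), stdev(data), len(data)
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    half = z * s / math.sqrt(n)  # half-width of the interval
    return low <= m - half and m + half <= high

# Hypothetical chemical-residue readings (ppm) against a 0-5 ppm limit.
print(mean_ci_within([2.1, 2.4, 1.9, 2.6, 2.2], 0.0, 5.0))  # → True
```

For the small sample sizes discussed in this thread, a t-based interval (wider than the z-based one above) would be the more defensible choice; the sketch only illustrates the structure of the decision.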

Hope this helps. And once again thanks for your time. I truly appreciate it.

-Mark

- January 12, 2010 at 8:11 am #62531

edisoar

Hi Mark,

Touching base on this thread. Was there a final outcome? I have the same query. I am trying to define sample sizes for a new product that I want to validate, but obviously I want to ensure that I have an appropriate statistical rationale. There is some data available from previous testing. I want to select a sample size for both variable and attribute CTQs for the purposes of design validation.

Can you help?

Eamon

- January 27, 2010 at 1:34 pm #62537

Jonathon Andell

For now I will assume you are following changes in the mean (continuous data) or changes in the proportion (discrete data). For continuous data you also could establish sample sizes to track changes in standard deviation (actually variance, which is the square of standard deviation).

For continuous data you need to start with a reasonable approximation of the process standard deviation. Then you need to establish how much variation you can tolerate in the final outcome, expressed as "delta." The sample size formula is based on the ratio of delta to standard deviation (or vice versa).

The final needed knowledge is the amount of decision-error risk you can tolerate, which breaks into two components. One is the risk of failing to detect a real change in the process, known as beta. Some software packages ask you to establish a "power" level, which equals one minus beta. The other risk is that of a "false alarm" – obtaining data that indicate a change when one does not exist. That is expressed as "alpha."

The smaller you want your alpha, beta, and delta to be, the greater your sample size will need to be. Many statistical software packages can handle the number crunching.

For proportions, you use the anticipated proportion instead of standard deviation, and the allowable change in the proportion instead of delta. You still need to decide what alpha and beta you need. Again, software packages do the numbers.

A final decision has to do with whether you can do a one-tailed or a two-tailed hypothesis test. The latter slightly increases sample size, but it is a bit more rigorous.

Having said all that: there is some skill in establishing appropriate levels of alpha, beta, and delta. It comes down to understanding the underlying penalties of experiencing a statistical decision error – a situation where random luck of the draw leads you to the wrong conclusion. Once we decide to proceed, we never know when such an error has occurred. However, demanding small delta and risk values can drive sample sizes up very quickly.

We haven't even gotten into the issue of ensuring that your process is statistically stable. If that's not present, then the true risks will be distorted compared to the theoretical risks. The typical tool to evaluate stability is the control chart. If you see signs of instability, a variety of responses and adjustments could be called for.

I recommend you contact somebody with a lot of skill and experience before just picking some numbers. Perhaps you have access to a professor of applied statistics, or to a seasoned master black belt in your vicinity. I could help, but I'm not supposed to use this forum to seek consulting business.
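For the proportions case Jonathon describes, one common normal-approximation formula is sketched below (a one-sample, two-sided test); the baseline 10% and target 5% defect rates are made-up numbers, not from the thread.

```python
import math
from statistics import NormalDist

def n_for_proportion(p0, p1, alpha=0.05, power=0.90):
    """Approximate n for a one-sample test that the true proportion has
    shifted from p0 to p1, with two-sided alpha and the stated power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    num = (z_alpha * math.sqrt(p0 * (1 - p0))
           + z_beta * math.sqrt(p1 * (1 - p1))) ** 2
    return math.ceil(num / (p1 - p0) ** 2)

# Illustrative only: baseline 10% nonconforming, hoping to demonstrate 5%.
print(n_for_proportion(0.10, 0.05))              # → 301
print(n_for_proportion(0.10, 0.05, power=0.80))  # → 239
```

The large n illustrates Jonathon's warning: attribute (proportion) data demand far bigger samples than variable data for the same alpha, beta, and delta, which is why the required risk levels should be negotiated with stakeholders before testing begins.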


The forum ‘Healthcare’ is closed to new topics and replies.