Sample size for attribute data with 0 defects
This topic has 4 replies, 4 voices, and was last updated 12 years, 9 months ago by Fontanilla.
August 26, 2009 at 11:16 am #52570
Peter O (Participant, @Peter-O)
Hello everyone.
This has probably been discussed here several times before my posting (browsing the forum I did see some related topics).
I often get the question “How many samples do I need to take to be sure?” and the answer is normally “it depends” (soon I will answer 42, as a tribute to Douglas Adams ;-)), followed by “What do you need to know?”, and after that power, significance level and so on.
One thing (among others) bugs me a bit: attribute data and zero defects.
When starting up a new process/product in validation, people often want to prove there are NO problems and expect zero defects for a certain parameter, normally using a go/no-go test. Then the question comes: how many samples should we take? And 11 times out of 10, when I ask for the maximum acceptable fault rate, the answer is “no, we don’t have one, we just want to be sure that it is zero.” So how many samples? *sigh, 42*
My approach so far has been to bargain for the maximum number of samples (up to the point where it starts to hurt), calculate the upper confidence limit from that (normally 95%), and tell them:
“Well, if you take 42 samples and have zero defects, it might be that you have a zero-fault process, but on the other hand the scrap rate could be as high as 6.9%, and there is a 5% chance that I am wrong.”
And the answer: “WHAT!?! Is this what we pay you good money for? It’s useless, do your maths again…”
Humble engineer (me): “Well, if you can process 3539 pieces without any faults, I can be 99% sure that the scrap rate is below 0.13%.”
Upset boss: “Do you know what a part costs?!? 3539 parts! Come up with something better!”
Well, you get the picture =) (this was a fictional story; no animals, engineers or bosses were harmed).
So my question is whether this is a good approach, or whether someone has a better trick up their sleeve for being more confident with fewer samples.
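For anyone who wants to reproduce the numbers above, here is a minimal Python sketch of the zero-defect binomial bound they come from (solve (1 − p)^n = α for p, so p = 1 − α^(1/n); the function names are just illustrative):

```python
import math

def upper_bound_zero_defects(n, confidence=0.95):
    """One-sided upper confidence bound on the defect rate when n parts
    were inspected and zero defects were found.
    Solves (1 - p)**n = 1 - confidence for p."""
    alpha = 1.0 - confidence
    return 1.0 - alpha ** (1.0 / n)

def samples_needed(p_max, confidence=0.95):
    """Smallest n such that zero defects in n parts demonstrates the
    defect rate is below p_max at the given confidence level."""
    alpha = 1.0 - confidence
    return math.ceil(math.log(alpha) / math.log(1.0 - p_max))

print(f"{upper_bound_zero_defects(42, 0.95):.1%}")    # ~6.9% at 95% confidence
print(f"{upper_bound_zero_defects(3539, 0.99):.2%}")  # ~0.13% at 99% confidence
print(samples_needed(0.01, 0.95))                      # 299 parts to show < 1% scrap
```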
Regards, and thanks in advance.

August 26, 2009 at 12:13 pm #185005
Robert Butler (Participant, @rbutler)
I face this issue almost every day. The problem is the definition of the word “sure”. What you have is a situation where everyone is walking around saying “sure” when what they mean is “absolutely certain”… and the answer is exactly what you have given them: you can’t be absolutely certain, so how “sure” do you want to be?
Perhaps if you “make certain” they understand they are asking a probability question, things might go easier. What I usually do is put together a table that summarizes sample sizes, alpha and power.
In other words, I’d make a table that summarizes variations on your sentence:
“Well, if you take 42 samples and have zero defects, it might be that you have a zero-fault process, but on the other hand the scrap rate could be as high as 6.9%, and there is a 5% chance that I am wrong.”
I find that if I do this, they can use it to decide just how many samples they’re willing to test to meet their need for a sense of “sure”.

August 28, 2009 at 4:45 am #185052
Peter O (Participant, @Peter-O)
Thanks for the answer.
I did make such a table (though presented as a graph); hopefully I got my calculations right =)
As I said, the calculations are based on the assumption that the samples are taken in sequence (or at least from the same population) with no rejected parts; the result is the estimated maximum % fail.
Sample size    Max % defective (95% confidence)    Max % defective (99% confidence)
1              95.0                                99.0
10             25.9                                36.9
25             11.3                                16.8
50             5.8                                 8.8
75             3.9                                 6.0
100            3.0                                 4.5
150            2.0                                 3.0
200            1.5                                 2.3
300            1.0                                 1.5
500            0.6                                 0.9
750            0.4                                 0.6
1000           0.3                                 0.5
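If anyone wants to check or extend the table, here is a short sketch of the loop that generates these values (same zero-defect upper bound as above, 1 − α^(1/n), which also matches the one-sided Clopper–Pearson bound when zero defects are observed):

```python
# Reproduce the table: upper bound on % defective after n samples with zero
# defects, at 95% and 99% confidence.
sizes = [1, 10, 25, 50, 75, 100, 150, 200, 300, 500, 750, 1000]

print(f"{'n':>5}  {'95% CI':>7}  {'99% CI':>7}")
for n in sizes:
    p95 = 1.0 - 0.05 ** (1.0 / n)   # upper bound at 95% confidence
    p99 = 1.0 - 0.01 ** (1.0 / n)   # upper bound at 99% confidence
    print(f"{n:>5}  {100 * p95:>6.1f}%  {100 * p99:>6.1f}%")
```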
This will probably work well for raising the question: what is good enough?
Does anyone have another way of dealing with zero defects in attribute data?

August 28, 2009 at 12:06 pm #185057
Severino (Participant, @Jsev607)
If your organization sets AQL levels for lot acceptance activities, one thing you can do is use those AQL levels as the LTPDs for your validation lots. For example, if you use AQL 1.000% for normal lot acceptance, you’ll want to select a c=0 sampling plan with LTPD = 1.000% for your initial lots. This is outlined in the article “Selecting Statistically Valid Sampling Plans” by Wayne Taylor.
I sympathize with your plight; often people get frustrated when you don’t tell them what they want to hear. If they don’t like it, tell them to switch to variable data, which is more effective anyway.

November 5, 2009 at 2:25 pm #186622
Fontanilla
Perhaps considering historical error rates will help to explain this one. If you look at the history of the process, does the previous data allow you to see the performance over a bigger sample? That may help them to think it through.
The forum ‘General’ is closed to new topics and replies.