Calculating sample sizes when DPMO=1


February 12, 2007 at 7:24 pm #46094
We actually have a process operating at 3.4 DPMO and are looking to improve it to 1.0. I've been asked to determine our sample size, but every tool I have just doesn't go down that low, or gives me a sample size of more than 1 million. Our daily production is 10k–35k.
Does someone have a formula for that calculation? Thanks!

February 12, 2007 at 8:23 pm #151891
This sounds suspiciously like a homework question, because you say your process is currently at 3.4 DPMO. I can imagine the question saying something like "Your company is currently operating at 6 Sigma and your boss wants to improve to 6.25 Sigma. How many days will it take before you can say that you have achieved your goal if you produce between 10,000 and 35,000 units per day?"
Convert your DPMO into percent yield, then use the sample size calculator for discrete data, plugging in your alpha risk, the delta you want to detect, and your current proportion. You'll come up with a pretty big number – yes, larger than 1 million at alpha = .05.
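A minimal sketch of that calculation, assuming the standard normal-approximation sample-size formula for comparing two proportions (the kind of thing a discrete sample-size calculator does under the hood; the function name is hypothetical, scipy assumed):

```python
from math import ceil, sqrt
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    # Per-group sample size to detect a shift from p1 to p2
    # (normal approximation for a two-proportions test)
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    pbar = (p1 + p2) / 2
    num = (za * sqrt(2 * pbar * (1 - pbar))
           + zb * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

print(n_per_group(3.4e-6, 1.0e-6))  # roughly 6 million units per group
```

At 10,000–35,000 units per day, that works out to roughly six months to two years of production per group.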
February 12, 2007 at 8:30 pm #151890
Jim Shelor
Dear Dale,
You can do it the same way you found out you were running at 3.4 DPMO.
I would take a sample of 5 units/hour for 100 hours and plot them on an Xbar, R chart. You only need a minimum of 25 samples, but with the production rate you have, getting 100 is not an issue, and you will get a more stable answer for the analysis.
I would then run a capability analysis.
DPM is a direct calculation obtained from a capability analysis. You can convert to DPMO depending on how many opportunities there are for each part(?) to be defective.
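For reference, DPM from a capability study is just the normal tail area outside the spec limits, scaled to a million. A minimal sketch, assuming normally distributed data (names are illustrative, scipy assumed):

```python
from scipy.stats import norm

def dpm(mean, sigma, lsl, usl):
    # Tail probability outside the spec limits, scaled to defects per million
    p_out = norm.cdf(lsl, loc=mean, scale=sigma) + norm.sf(usl, loc=mean, scale=sigma)
    return p_out * 1e6

# ~3.4 DPM: the classic 6-sigma spec limits with a 1.5-sigma mean shift
print(dpm(mean=101.5, sigma=1.0, lsl=94.0, usl=106.0))
```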
I am wondering why you want to go from 3.4 to 1. In general, when you get down that low, further improvement usually costs a lot of money to obtain. Are you making some kind of safety-related part that would make it greatly worthwhile to try for a DPMO of 1?
Sincerely,
Jim Shelor

February 12, 2007 at 8:38 pm #151892
Heebeegeebee BB
I was thinking along the same lines as Outlier… If a real-world process were at 3.4 DPMO (6 sigma), then a seasoned Six Sigma type would pipe up and suggest a "lift and shift" to a different process. The ROI for an improvement from 3.4 to 1.0 DPMO may not be there.
Is this, in fact, a homework or test question?

February 12, 2007 at 8:47 pm #151894
Anonymous
A simple rule of thumb for attribute data: you need a sample size large enough to expect 5 defects (np >= 5). So n = 5 / 0.000001 = 5 million (see the sketch below). If you are a high-volume producer with automated data collection systems, then this can be acceptable.
If not, you will need to obtain continuous measures, where the sample size requirements are much smaller.

February 12, 2007 at 11:46 pm #151903
Of course this assumes the guy is smart enough to be checking attributes … which is always doubtful. He is probably taking measurements and comparing them with a spec limit. His low defect rate could simply mean he has followed Bill Smith and widened spec limits to reduce defects. Anyone can get 3.4 DPMO by Bill Smith's approach.
February 13, 2007 at 12:09 am #151905
Anonymous
Your statement on Bill Smith is totally incorrect. I personally knew Bill, and he would never advocate widening the spec limits.
I will tell my students to do this as a joke.

February 13, 2007 at 12:47 am #151908
This is not a test question. The situation is prescription drugs. (Now aren't you glad we have a 3.4 DPMO!) It really is a yes or no on defects, not a measurement issue. Today they test 100% for several days after a change, and are looking to cut back if possible.
Jim, thanks for the control chart idea. I used the best table I could find – the MIL-STD-105E table – but that only goes to 1/10,000. With a control chart and capability analysis, I think we could stick with the 1,250 I came up with, as long as there are no defects and the control chart and capability analysis look good.
Thanks!

February 13, 2007 at 3:20 am #151912
Grey eminence
I can just imagine the round of laughter and applause that the grey eminence of this site will get with her witty and well-thought-through joke. The world is glad to see that the anonymous high priest of the venerable iSixSigma forum is at last making efforts to ensure that her students don't die from premature boredom. Onwards and upwards.
February 13, 2007 at 10:42 am #151918
Anonymous
You would use a P chart, not an Xbar.
MIL-STD-105E will get you AQL levels (how much bad stuff is acceptable, in percent – forget about 1 DPMO). Minitab 15 has added acceptance sampling, so you could look at that.
Other than the np >= 5 rule of thumb, there are two other tools to consider.
First, consider the confidence interval. You could use an exact one-proportion calculator to see, for a given sample size with zero defects, what quality level is demonstrated at a 95% confidence level.
Second, you could use a power and sample size calculator for one proportion. Minitab and SigmaXL include these tools.
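A minimal sketch of that zero-defect confidence bound: the exact (Clopper-Pearson) upper limit reduces to a one-liner when x = 0 (function names hypothetical):

```python
from math import ceil, log

def p_upper_zero_defects(n, conf=0.95):
    # Exact (Clopper-Pearson) upper bound on p after n units with 0 defects
    return 1 - (1 - conf) ** (1 / n)

def n_zero_defects(p_max, conf=0.95):
    # Defect-free units needed to demonstrate p <= p_max at the given confidence
    return ceil(log(1 - conf) / log(1 - p_max))

print(n_zero_defects(1e-6))              # ~2,995,732 units, all defect-free
print(p_upper_zero_defects(1250) * 1e6)  # n = 1250 only demonstrates ~2400 DPMO
```

So the 1,250-unit plan mentioned earlier, even with zero defects, demonstrates roughly 2,400 DPMO at 95% confidence – nowhere near 1 DPMO.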
February 14, 2007 at 1:49 am #151965
You are WRONG!! Most people know that widening spec limits to improve quality is precisely what Bill Smith advocated. See Bill Smith's paper, "Making War on Defects," IEEE Spectrum, Sept. 1993, page 46. He also gives an example.
But I do agree, Bill Smith and his ideas are a joke.

February 14, 2007 at 2:33 pm #151980
Anonymous
I will visit the library and get back to you on this.
February 14, 2007 at 3:15 pm #151981
Once again your lack of experience betrays your dogmatic views.
What if Bill Smith’s comments were directed to inline process specifications, which are set by designers and not customer specifications?
What happens if the designer's inline specifications are too tight and do not impact the performance or reliability of the final product? (They have their reasons for doing this!!)
What happens if the inline measurement systems can’t meet the 10% rule?
What happens if there is no other measurement system on the market that can achieve the tolerances required?
Should one take multiple measurements and use an average? What happens to the specification if one uses an average of several measurements to disposition material?
Perhaps you don’t think this can happen?

February 14, 2007 at 3:25 pm #151982
Jim Shelor
Dear Dale,
I did not realize from your first post that the test you are doing is a pass/fail test.
You cannot use an Xbar, R chart for a pass/fail test.
You need to use a P Chart if your sample sizes are not equal or an NP Chart if your sample sizes are equal.
We can get you back to using an Xbar, R chart if the specific measurements you take to decide the pass/fail are continuous data, such as breaking strength, amount of medication in the pill, or whatever else you measure. We could use an Xbar, R chart for each type of failure and combine the results to give you an overall failure rate. I would think that any tests you do to determine pass/fail must be some kind of measurement against some kind of specification. (A sketch of the chart limits follows below.)
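A minimal sketch of those Xbar, R limits, assuming subgroups of 5 and the standard control-chart constants for that size (A2 = 0.577, D3 = 0, D4 = 2.114; function name hypothetical):

```python
def xbar_r_limits(xbars, ranges, a2=0.577, d3=0.0, d4=2.114):
    # Xbar, R control limits for subgroups of size 5
    xbb = sum(xbars) / len(xbars)      # grand average (center of Xbar chart)
    rbar = sum(ranges) / len(ranges)   # average range (center of R chart)
    return {"xbar": (xbb - a2 * rbar, xbb + a2 * rbar),
            "range": (d3 * rbar, d4 * rbar)}

# e.g. 100 hourly subgroups of 5 would supply the xbars and ranges lists
print(xbar_r_limits(xbars=[10.1, 9.9, 10.0], ranges=[0.4, 0.5, 0.3]))
```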
If you want to take this discussion off line, because it can could get to be a long discussion, my email is [email protected].
Otherwise, we can just keep going here. That way you would have the benefit of review by the other players in this discussion.
Sincere regards,
Jim Shelor

February 14, 2007 at 5:22 pm #151992
Sigmordial
Hi Dale,
Ran into a similar challenge with sterility testing for a medical device. Rather than running the conventional tests for proportions, I utilized sequential analysis (aka sequential probability ratio testing; I think ASQ has some literature on it). Google those terms and you should get some useful results.
Also, some time last year, iSixSigma had an article addressing this in their featured articles.
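A minimal sketch of the sequential probability ratio test idea for a pass/fail stream (Wald boundaries; the hypotheses and risks here are illustrative assumptions, not from the article mentioned):

```python
from math import log

def sprt(n, defects, p0=3.4e-6, p1=1.0e-6, alpha=0.05, beta=0.05):
    # Wald SPRT: H0 p = p0 (current rate) vs H1 p = p1 (improved rate)
    llr = defects * log(p1 / p0) + (n - defects) * log((1 - p1) / (1 - p0))
    if llr >= log((1 - beta) / alpha):
        return "accept H1: improvement demonstrated"
    if llr <= log(beta / (1 - alpha)):
        return "accept H0: no improvement shown"
    return "continue sampling"

print(sprt(n=1_300_000, defects=0))  # a defect-free run this long accepts H1
```

The advantage over a fixed plan is that a clean run can terminate early rather than committing to millions of units up front.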
February 14, 2007 at 8:03 pm #152007
lukasls
This is a perfect smart-ass answer for an interview when you're asked how you can reduce defects with continuous data, and a perfect example for a 'verifying intelligence' question: how can you very quickly improve a process from 6 to 7 sigma with no funds? :)
February 15, 2007 at 12:17 am #152021
John H.
Dale,
When a Poisson defects-per-million sequential sampling table is generated and the Type I and Type II errors are set at the usual .05, a sample size of at least 2 million is required (see below). The average sample number required for a test decision at both the 1 and 3.4 DPMO levels also calculates as 2 million.
Sample Size   Acc   Rej
2 million       1     7
3 million       3     9
4 million       5    11
5 million       7    13
John H.
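For reference, those accept/reject numbers are consistent with Wald's sequential binomial plan at p0 = 1 DPMO (acceptable), p1 = 3.4 DPMO (rejectable), and alpha = beta = .05. A sketch under those assumptions regenerates the table (names hypothetical):

```python
from math import ceil, floor, log

def wald_plan(n, p0=1.0e-6, p1=3.4e-6, alpha=0.05, beta=0.05):
    # Acceptance/rejection numbers of Wald's sequential binomial plan at n units
    g = log(p1 / p0) + log((1 - p0) / (1 - p1))
    s = log((1 - p0) / (1 - p1)) / g      # slope of the decision lines
    h1 = log((1 - alpha) / beta) / g      # acceptance-line intercept
    h2 = log((1 - beta) / alpha) / g      # rejection-line intercept
    return floor(s * n - h1), ceil(s * n + h2)

for millions in (2, 3, 4, 5):
    print(millions, *wald_plan(millions * 1_000_000))
# prints: 2 1 7 / 3 3 9 / 4 5 11 / 5 7 13, matching the table above
```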
February 20, 2007 at 2:21 pm #152215
Anonymous
The example in the Smith paper is simply talking about statistical tolerance design versus worst case tolerance design, not "opening the specs". Reread the paper.