
Calculating sample sizes when DPMO=1


Viewing 18 posts - 1 through 18 (of 18 total)
  • #46094

    Dale
    Participant

    We actually have a process operating at 3.4 DPMO and are looking to improve it to 1.0.  I’ve been asked to determine our sample size, but every tool I have either doesn’t go down that low or gives me a sample size of more than 1 million.  Our daily production is 10k-35k.
    Does someone have a formula for that calculation?  Thanks!

    0
    #151891

    Sloan
    Participant

    This sounds suspiciously like a homework question because you say your process is currently at 3.4 DPMO. I can imagine the question saying something like “Your company is currently operating at 6-Sigma and your boss wants to improve to 6.25-Sigma. How many days will it take before you can say that you have achieved your goal if you produce between 10,000 and 35,000 units per day?”
    Convert your DPMO into percent yield, then use the sample size calculator for discrete data, plugging in your alpha risk, the delta you want to detect, and your current proportion. You’ll come up with a pretty big number, yes, larger than 1 million at alpha = .05.
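    The discrete-data calculation described above can be sketched with the standard two-proportion normal approximation (the function name, the 80% power default, and the two-sided test are my assumptions, not anything specified in the thread):

```python
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per group to detect a change in defect
    proportion from p1 to p2 (two-sided z-test, normal approximation).
    Hypothetical helper for illustration."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for alpha risk
    z_b = NormalDist().inv_cdf(power)           # critical value for power
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return num / (p1 - p2) ** 2

# Detecting 3.4 DPMO -> 1.0 DPMO: roughly 6 million units per group
print(n_per_group(3.4e-6, 1.0e-6))
```

    This bears out the “larger than 1 million” remark: the tiny delta between the two proportions drives the requirement into the millions.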
     

    0
    #151890

    Jim Shelor
    Participant

    Dear Dale,
    You can do it the same way you found out you were running at 3.4 DPMO.
    I would take a sample of 5 units/hour for 100 hours and plot them on an X-bar, R chart.  You only need a minimum of 25 subgroups, but with the production rate you have, getting 100 is not an issue, and you will get a more stable answer for the analysis.
    I would then run a capability analysis.
    DPM is a direct calculation obtained from a capability analysis.  You can convert to DPMO depending on how many opportunities each part has to be defective.
    I am wondering why you want to go from 3.4 to 1.  In general, when you get down that low, further improvement usually costs a lot of money to obtain.  Are you making some kind of safety-related part that would benefit greatly from a DPMO of 1?
    Sincerely,
    Jim Shelor
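    The DPM-from-capability calculation Jim describes can be sketched for a normal process model (the function name and the example spec limits are mine, for illustration only):

```python
from statistics import NormalDist

def expected_dpm(mean, sd, lsl, usl):
    """Expected defects per million from a fitted normal capability model,
    with lower (lsl) and upper (usl) specification limits."""
    nd = NormalDist(mean, sd)
    p_defect = nd.cdf(lsl) + (1 - nd.cdf(usl))
    return p_defect * 1_000_000

# A process whose nearest spec limit is 4.5 sd away (6 sigma with the
# conventional 1.5-sigma shift) yields the familiar 3.4 DPM:
print(round(expected_dpm(1.5, 1.0, -6.0, 6.0), 1))
```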

    0
    #151892

    Heebeegeebee BB
    Participant

    I was thinking along the same lines as Outlier… If a real-world process was at 3.4 DPMO (6-sigma), then a seasoned Six Sigma type would pipe up and suggest a “lift and shift” to a different process.  The ROI for an improvement from 3.4 to 1.0 DPMO may not be there.
    Is this in fact a homework or test question?

    0
    #151894

    Anonymous
    Participant

    A simple rule of thumb for attribute data: you need a sample size large enough to observe 5 defects (np >= 5).  So n = 5/1e-6 = 5 million.  If you are a high volume producer with automated data collection systems, then this can be acceptable. 
    If not, you will need to obtain continuous measures, where the sample size requirements are much smaller.
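    That rule of thumb is a one-line calculation (the function name is mine):

```python
import math

def min_n_for_np5(p):
    """Smallest sample size n such that the expected defect count
    n * p reaches 5 (the np >= 5 rule of thumb for attribute data)."""
    return math.ceil(5 / p)

# At 1 DPMO (p = 1e-6) the rule demands 5 million units:
print(min_n_for_np5(1e-6))
```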

    0
    #151903

    anon
    Participant

    Of course this assumes the guy is smart enough to be checking attributes … which is always doubtful.  He is probably taking measurements and comparing them with a spec limit.  His low defect rate could simply mean he has followed Bill Smith and widened spec limits to reduce defects.  Anyone can get 3.4 dpmo by Bill Smith’s approach.
     

    0
    #151905

    Anonymous
    Participant

    Your statement on Bill Smith is totally incorrect. I personally knew Bill, and he would never advocate widening the spec limits. 
    I will tell my students to do this as a joke.

    0
    #151908

    Dale
    Participant

    This is not a test question.  The situation is prescription drugs.  (Now aren’t you glad we have a 3.4 DPMO!)  It really is a yes/no on defects, not a measurement issue.  Today they test 100% for several days after a change, and are looking to cut back if possible.
    Jim, thanks for the control chart idea.  I used the best table I could find, the MIL-STD-105E table, but that only goes to 1/10,000.  With a control chart and capability analysis, I think we could stick with the 1,250 I came up with, as long as there are no defects and the control chart and capability analysis look good.
    Thanks!
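    A quick sanity check on the 1,250 figure, using the exact binomial upper bound for zero observed defects mentioned elsewhere in this thread (the function name is mine; this is a sketch, not a validation plan):

```python
def upper_bound_zero_defects(n, conf=0.95):
    """Exact binomial upper confidence bound on the defect proportion
    when 0 defects are observed in n units: 1 - (1 - conf)**(1/n)."""
    return 1 - (1 - conf) ** (1 / n)

# about 2,400 DPMO demonstrated at 95% confidence
print(round(upper_bound_zero_defects(1250) * 1_000_000))
```

    In other words, 1,250 defect-free units only demonstrates roughly 2,400 DPMO at 95% confidence, far from demonstrating 1 DPMO; the control chart and capability analysis would be carrying most of the weight.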

    0
    #151912

    Grey eminence
    Participant

    I can just imagine the round of laughter and applause that the grey eminence of this site will get with her witty and well-thought-through joke. The world is glad to see that the anonymous high priest of the venerable iSixSigma forum is at last making an effort to ensure that her students don’t die of premature boredom. Onwards and upwards.

    0
    #151918

    Anonymous
    Participant

    You would use a P chart, not an X-bar.
    MIL-STD-105E will get you AQL levels (how much bad stuff is acceptable, in percentage terms; forget about 1 DPMO).  Minitab 15 has added acceptance sampling, so you could look at that.
    Other than the np >= 5 rule of thumb, there are two other tools to consider:
    First, consider the confidence interval.  You could use an exact one-proportion calculator to see, for a given sample size with zero defects, what the quality level is at a 95% confidence level.
    You could also use a power and sample size calculator for 1 proportion.  Minitab and SigmaXL include these tools.  
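    A minimal sketch of that zero-defect confidence calculation, turned around to give the required sample size (the function name is mine; this is the exact binomial result sometimes called the “rule of three”):

```python
import math

def n_for_zero_defects(p_target, conf=0.95):
    """Units that must all pass so that 0 observed defects demonstrates
    a defect proportion <= p_target at the given confidence level."""
    return math.ceil(math.log(1 - conf) / math.log(1 - p_target))

# Demonstrating 1 DPMO with zero defects takes about 3 million units:
print(n_for_zero_defects(1e-6))
```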
     

    0
    #151965

    Len
    Participant

    You are WRONG!!  Most people know that widening spec limits to improve quality is precisely what Bill Smith advocated.  See Bill Smith’s paper, “Making War on Defects”, page 46, IEEE Spectrum, Sept 1993.  He also gives an example.
    But I do agree, Bill Smith and his ideas are a joke.

    0
    #151980

    Anonymous
    Participant

    I will visit the library and get back to you on this.

    0
    #151981

    Chaser
    Participant

    Once again your lack of experience betrays your dogmatic views.
    What if Bill Smith’s comments were directed to in-line process specifications, which are set by designers and not customer specifications?
    What happens if the designer’s in-line specifications are too tight and do not impact the performance or reliability of the final product?  (They have their reasons for doing this!!)
    What happens if the in-line measurement systems can’t meet the 10% rule?
    What happens if there is no other measurement system on the market that can achieve the tolerances required?
    Should one take multiple measurements and use an average? What happens to the specification if one uses an average of several measurements to disposition material?
    Perhaps you don’t think this can happen?

    0
    #151982

    Jim Shelor
    Participant

    Dear Dale,
    I did not realize from your first post that the test you are doing is a pass/fail test.
    You cannot use an X-bar, R chart for a pass/fail test.
    You need to use a P chart if your sample sizes are not equal, or an NP chart if your sample sizes are equal.
    We can get you back to using an X-bar, R chart if the specific measurements you take to decide the pass/fail are continuous data, such as breaking strength, amount of medication in the pill, or whatever else you measure.  We could use an X-bar, R chart for each type of failure and combine the results to give you an overall failure rate.  I would think that any tests you do to determine pass/fail must be some kind of measurement with some kind of specification.
    If you want to take this discussion offline, because it could get to be a long discussion, my email is [email protected].
    Otherwise, we can just keep going here.  That way you would have the benefit of review by the other players in this discussion.
    Sincere regards,
    Jim Shelor

    0
    #151992

    Sigmordial
    Member

    Hi Dale,
    Ran into a similar challenge with sterility testing for a medical device.  Rather than running the conventional tests for proportions, I utilized sequential analysis (also known as sequential probability ratio testing; I believe ASQ has some literature under that name).  Google those terms and you should get some useful results.
    Also, some time last year, iSixSigma had an article addressing this in their featured articles.
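    A minimal sketch of the sequential approach for this problem, using Wald’s SPRT for a pass/fail (Bernoulli) defect rate; the hypothesized rates, risks, and function name are my assumptions for illustration:

```python
import math

def sprt_decision(defects, n, p0=3.4e-6, p1=1.0e-6, alpha=0.05, beta=0.05):
    """Wald sequential probability ratio test of H0: p = p0 (old rate)
    against H1: p = p1 (improved rate), after n units with the given
    number of defects."""
    ln_A = math.log((1 - beta) / alpha)   # accept-H1 boundary
    ln_B = math.log(beta / (1 - alpha))   # accept-H0 boundary
    llr = (defects * math.log(p1 / p0)
           + (n - defects) * math.log((1 - p1) / (1 - p0)))
    if llr >= ln_A:
        return "accept H1"   # data support the improved rate
    if llr <= ln_B:
        return "accept H0"   # data support the old rate
    return "continue"        # keep sampling

print(sprt_decision(0, 2_000_000))   # a long defect-free run decides for H1
```

    The attraction of the sequential scheme is that sampling stops as soon as either boundary is crossed, so the average sample number is usually well below the fixed-sample requirement.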

    0
    #152007

    lukasls
    Participant

    This is a perfect smart-ass answer for an interview when you are asked how you can reduce defects with continuous data, and a perfect example for creating a ‘verifying intelligence’ question: how can you very quickly improve the process from 6 to 7 sigma when you have no funds? :)

    0
    #152021

    John H.
    Participant

    Dale
     
    When a Poisson defects-per-million sequential sampling table is generated with the Type I and Type II errors set at the usual .05, a sample size of at least 2 million is required (see below).  The average sample number required for a test decision at both the 1 and 3.4 DPM levels also calculates as 2 million.
     
    Unit sample size (millions)     Acc     Rej
               2                     1       7
               3                     3       9
               4                     5      11
               5                     7      13
     
    -John H.
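    Those accept/reject numbers can be reproduced with Wald’s SPRT boundary formulas for a Poisson rate test (the parameter names are mine; the rates are in defects per million units):

```python
import math

def poisson_sprt_limits(n_millions, lam0=1.0, lam1=3.4, alpha=0.05, beta=0.05):
    """Wald SPRT accept/reject defect counts after inspecting n_millions
    million units, testing H0: lam0 defects per million against
    H1: lam1 defects per million.  Accept H0 if the cumulative defect
    count is <= acc, reject H0 if it is >= rej, else keep sampling."""
    ln_A = math.log((1 - beta) / alpha)
    ln_B = math.log(beta / (1 - alpha))
    drift = (lam1 - lam0) * n_millions
    denom = math.log(lam1 / lam0)
    acc = math.floor((ln_B + drift) / denom)
    rej = math.floor((ln_A + drift) / denom) + 1
    return acc, rej

for n in (2, 3, 4, 5):
    print(n, poisson_sprt_limits(n))   # reproduces the table above
```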

    0
    #152215

    Anonymous
    Participant

    The example in the Smith paper is simply talking about statistical tolerance design versus worst-case tolerance design, not “opening the specs”.  Reread the paper.
     
     
     

    0

The forum ‘General’ is closed to new topics and replies.