
DOE on rare events


  • #35802

    J. K. JONES
    Participant

    I am trying to set up a multi-factor-designed experiment in a difficult situation.
    Some of the factors are continuous. Some of the factors are categorical.
The outcome (y, or response) is categorical: binary, either defective or not. I could try to manipulate it into being continuous by using a percent defective.
    The real problem is that the defect I am trying to predict is extremely rare. The probability of it occurring in our process is less than 0.01. Based on historical data, some factor levels will increase this probability, but not by much. I am thinking I could run many repetitions of the various levels and still not have the defect occur.
    Any suggestions?

    #101456

    Tim F
    Member

If the parts are inexpensive and the testing is inexpensive, then using % defective as the response is certainly a viable option.
If possible, I would suggest finding some continuous variable that you can correlate to the binary type of defect you have. Is there some other measurement – thickness, resistance, color… – that relates to the failure? If so, you can measure the other variable and try to control it within reasonable limits so that the complete failure never occurs.
Tim F
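One quick way to test whether such a continuous variable is worth using: check its point-biserial correlation with the pass/fail result. A minimal Python sketch, with an entirely made-up measurement (“thickness”) and relationship:

```python
import numpy as np
from scipy.stats import pointbiserialr

# made-up data: does a continuous measurement ("thickness") separate
# defective parts from good ones?
rng = np.random.default_rng(3)
thickness = rng.normal(2.00, 0.05, 5000)
# assumed relationship: unusually thin parts tend to fail
defect = ((thickness + rng.normal(0, 0.02, 5000)) < 1.88).astype(int)

r, p = pointbiserialr(defect, thickness)
print(f"point-biserial r = {r:.2f}, p-value = {p:.1e}")
# a strong correlation supports using the continuous measurement,
# rather than the rare pass/fail result, as the DOE response
```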

    #101457

    mjones
    Participant

    You have two problems, and both are hard to address.
    First, with attribute Y data, you do not meet the basic assumptions for DOE and, consequently, ‘usual’ DOE won’t work — not well at least. I have done DOEs with attribute ‘count’ data (like you have) where I had a large number of observations of counts so the results look “normal” and continuous for the analysis. It worked OK with, say, 10 to 40 observations of ‘defects’ per cell, where the results were nicely spread. When there are some cells with very few (~0 defects) it won’t work well at all.
    Second, because you have ‘rare events,’ results will take a long time — regardless of what method you ultimately choose to use.
    Some alternatives:
Logistic regression works nicely for attribute Y, but it is awkward and complicated (especially the first time you do it) and takes a lot more data than it looks like you will have (a minimal sketch appears further down this post). Plus, you have no good way of making sure all the levels of your factors are included in the analysis…
A simulation could work — but it can be complex and messy, and the results are only as good as the data going in and the robustness of the model.
    Best suggestions I can think of: use a Y in your DOE based on correlation with (1) a precursor to the defect or (2) a closely correlated metric.
(1) If you have data or process knowledge showing the likelihood of a defect is related to some measure (a continuous metric would be good!), use that as your Y. Perhaps there is a precursor (a pressure, density, pH, etc.) which, as it increases (or decreases), suggests the defect is more likely to occur.
(2) For a correlated/comparable metric… For example, I may wish to run a DOE evaluating how often people are actually infected (discrete result) by a dangerous virus. Since it is not desirable to actually infect humans, an alternative is to run a DOE and measure the amount of contamination (continuous metric) for the virus, or for a comparable virus, at all the factor levels. If the team agrees that likelihood of infection is decently correlated with level/degree of contamination, and the processes are comparable and realistic, it can work quite well.
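For the logistic regression route, here is a minimal sketch of what the analysis can look like (Python/statsmodels; the factor names, effect sizes and data are all invented, and the sample size shows how many units it can take to capture a ~1% event):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# invented data: one continuous factor (temp) and one categorical (line),
# with a defect probability near 1% as in the original post
rng = np.random.default_rng(1)
n = 20000
temp = rng.normal(200.0, 5.0, n)
line = rng.choice(["A", "B"], n)
# assumed true model: higher temp and line B raise the log-odds slightly
logit_p = -5.0 + 0.05 * (temp - 200.0) + 0.4 * (line == "B")
defect = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

df = pd.DataFrame({"defect": defect, "temp": temp, "line": line})
fit = smf.logit("defect ~ temp + C(line)", data=df).fit()
print(fit.summary())  # coefficients are log-odds effects on the defect rate
```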
    Good luck J.K.

    #101484

    Gabriel
    Participant

It can work, but a lot of data will be needed with this low rate of defectives. It also depends on the size of the differences in that rate (between the different runs) you want to detect.
Let’s say that I use the rule of thumb that the average number of occurrences per sample must be at least 5 (used in SPC for attribute charts). Then the minimum sample size per experimental run (repetitions) is 5/0.01 = 500 (assuming an average defective rate of 1%).
But the 95% CI for a sample of 500 taken from a population with 1% defectives is 1 to 9 defectives (0.2 to 1.8%). Only an experimental run with zero defects would be suspected of producing less than 1% defectives (the probability of zero in 500 would be only 0.7% if the defective rate was 1%).
With a sample of 5000 the 95% CI narrows to 38 to 61 defectives (0.76 to 1.22%). But can you afford such a sample size?
Before someone jumps in saying I do not understand DOE: well, it is true. DOE is not my strong side.
I know that this is not the way DOE works. For example, if you run a full factorial 3-factor, 2-level design (8 runs), each level of each factor would participate in 4 experimental runs, so with 100 repetitions per run you would be testing each level of each factor 400 times.
I just wanted to give you an idea of the large amount of data needed for the combination attribute / rare event, which remains true for SPC, tests of proportions and DOE.
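Gabriel’s arithmetic is easy to reproduce exactly. A minimal sketch (Python/scipy), using only the 1% rate and sample sizes already quoted above:

```python
from scipy.stats import binom

p = 0.01  # assumed 1% defective rate, as above
for n in (500, 5000):
    lo, hi = binom.interval(0.95, n, p)  # central 95% range of defect counts
    print(f"n={n}: expect {n * p:.0f} defects, "
          f"95% range {lo:.0f}-{hi:.0f} ({100 * lo / n:.2f}%-{100 * hi / n:.2f}%)")

# chance of seeing zero defects in 500 parts if the true rate really is 1%
print(f"P(0 in 500) = {binom.pmf(0, 500, p):.4f}")  # roughly 0.007
```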

    #102486

    DANG Dinh Cung
    Participant

    Good morning,
I agree with Gabriel on the amount of experiments, and my questions are: (a) are you obliged to run a DOE study, and (b) is this rare event critical?
I suggest you run an FMEA, then decide on some design actions to reduce the occurrence and/or criticality of the event.
DANG Dinh Cung, [email protected]

    #102487

    Marz
    Participant

Try thinking differently about your problem: instead of trying to make good ones, think “how can I make bad ones?”

If you can only measure good or bad, then you will need a lot of replicates in your design. The easiest way to do that is to use EVOP.

Set up your design and run one combination for a whole week, then try one of the other combinations, run for a week, and so on, then analyse the data.
If you select the variable levels carefully, you will still be able to sell the product; all you are doing is running the process with some controlled variation.
It will take a few weeks to get enough data, but with a low defective rate that is the best you could hope for.

Also, have the team relook at the FMEA: is there anything there we can do to eliminate the defective? Is there a mistake-proofing improvement we can implement?
    Good Luck
    Kevin

    #102492

    Markert
    Participant

    Some thoughts:

Quantify the sample sizes you need to detect the size of change you want in the response (see CQPI Report 91 at the link below).
Consider using the inverse binomial distribution as a response, where the response becomes the number of good parts you need to produce in order to get a specified count of bad parts/events (see CQPI Report 152 at the link below; a small simulation sketch follows at the end of this post).

    Link to CQPI:
     http://www.engr.wisc.edu/centers/cqpi/
I would also suggest you start with a control chart for rare events, if you have not already, before you attempt a DOE, so you can see whether the rate of occurrence is in statistical control now. (Charts for rare events are charts where the charted variable is the opportunity between events rather than the frequency of the events themselves – see Don Wheeler’s book “Making Sense of Data” for examples – http://www.spcpress.com.)
This is a good first step because DOE can only be reliably used where the existing system of events is in statistical control. All statistical hypothesis tests, including DOE, rely on comparing two or more CONSISTENT sets of data for differences between them. If the data are not in statistical control, then the data sets are not consistent, and the results of your DOE will be arbitrary – which would be a shame if you have to spend a lot of time and money getting the data, only to find that your results cannot be extrapolated outside the frame of data within your DOE!
(Also, if you are in a position to run “one factor at a time”, you could try making changes to suspected factors and seeing what effect you get on the control charts.)
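As a rough illustration of the inverse binomial response mentioned above (the defect rates and run counts below are made up), simulating “parts inspected until the r-th defect” turns a rare binary event into a large, roughly continuous number:

```python
import numpy as np

# simulated "inverse binomial" response: how many parts do you inspect
# before the r-th defect appears? (made-up defect rates below)
rng = np.random.default_rng(7)

def parts_until_r_defects(p, r):
    # negative_binomial draws the number of good parts seen before the
    # r-th defect; add r to get the total number inspected
    return rng.negative_binomial(r, p) + r

for p in (0.010, 0.005):  # two hypothetical factor settings
    runs = [parts_until_r_defects(p, r=3) for _ in range(10)]
    print(f"p={p:.3f}: parts needed to see 3 defects, 10 runs -> {runs}")
```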
    Hope this helps
    Phil

    #102495

    Michael Schlueter
    Participant

    Hi,
Can you please give us some more details about the rare event you are trying to predict? Sometimes events are rare because conditions are not right. E.g. accidents are rare events; they happen when all required conditions are present.
What is your objective? I suppose you want to make the rare event even rarer, don’t you? That would make the verification step even more difficult, wouldn’t it?
Is there a threshold involved in your measurement? I mean, you do have criteria to decide whether it’s a good or a bad part, don’t you? E.g. when your product is made from 10 components and all 10 fail (rarely), it’s a defect. Perhaps 3 out of 10 fail more frequently, but that is neither a problem nor is it detected by your system or your criteria. Making your measurement more sensitive to below-defect-threshold behaviour may be an option for you.
    Kind regards, Michael Schlueter

    #102501

    Peppe
    Participant

    Dear JK,
you said “rare events” about a probability of 0.01 = 1% = 10,000 ppm? Is this correct? If so, what do you mean by “rare events”?
How many parts do you produce per day/week?
With the numbers in hand it will be easier to answer you.
    Rgs,
    Peppe

    #102506

    J. K. JONES
    Participant

The event in question is a defect that occurs 50-100 times per day in our process, out of about 44,000 units produced.
    It is desirable to never have this defect.

    #102507

    J. K. JONES
    Participant

    The event is a defect which occurs 50-100 times per day out of our process.  Our process produces 44,000 units per day.
     
     

    #102525

    Sinnicks
    Participant

Let’s go back to the question of “Why a DOE?” We all want fewer defects. Depending on the cost/severity of having the defects, the cost of doing the DOE, and your urgency to get an answer, you may be better off with a different approach.
For example, you could use a combination of an FMEA and a cause-and-effect diagram to better understand the factors and risks. In the meantime, a multi-vari study might get you nearly as much defect reduction at a lot lower cost. It will not give you as precise an answer as a successful DOE, but there is a lot less risk, hassle, and expense. It depends on the nature of your process and the defects.
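For a feel of what a multi-vari slice can look like before any DOE (every number and grouping below is invented), tabulating the defect rate by, say, shift and machine often shows where the variation lives at almost no cost:

```python
import pandas as pd

# invented production log: defect counts sliced by shift and machine
log = pd.DataFrame({
    "shift":   ["day", "day", "day", "day", "night", "night", "night", "night"],
    "machine": ["M1", "M2", "M1", "M2", "M1", "M2", "M1", "M2"],
    "units":   [5500] * 8,
    "defects": [4, 18, 6, 15, 5, 21, 7, 19],
})
log["rate_ppm"] = 1e6 * log["defects"] / log["units"]
# a large machine-to-machine gap (M2 >> M1 here) names the family of
# variation to chase before spending on a designed experiment
print(log.groupby(["shift", "machine"])["rate_ppm"].mean())
```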

    #102530

    Michael Schlueter
    Participant

    Thanks for this valuable information.
It sounds like you should run your DOE on the production system itself. If you strive for a tenfold improvement, defects should drop to 5-10 a day. Pragmatically, you can run about 3-5 experiments per day and still expect to find only about 1 defect per run should you succeed; the result will be significant when you manage to improve by a factor of 2-3 at least (see the quick check below).
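A quick tail-probability check of that claim, as a sketch only: it assumes a baseline of about 75 defects per day (the midpoint of the reported 50-100) and one full day per experimental run.

```python
from scipy.stats import poisson

baseline = 75  # assumed defects/day, midpoint of the reported 50-100
for factor in (1.5, 2, 3, 10):
    improved = baseline / factor
    # chance of a daily count this low if nothing actually changed
    p_val = poisson.cdf(improved, baseline)
    print(f"improvement x{factor}: ~{improved:.0f}/day, "
          f"P(count <= that | no change) = {p_val:.2g}")
```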
Did you try clue-generating tools beforehand to identify the Red X and the few Pink Xs? I’m thinking of paired comparisons of BOB (best-of-best) and WOW (worst-of-worst) units, or similar.
    Kind regards, Michael Schlueter

    #102531

    Peppe
    Participant

    Dear JK,
from your data the process is performing between 1136 ppm (4.55 sigma) and 2272 ppm (4.34 sigma), based on daily production and assuming each part is an opportunity for defects, so the quantity is high enough to give you the necessary detail (provided you separate the bad from the good, i.e. you do the right go/no-go control). About the steps to follow, with the few data available (I assume from your post that you have the right knowledge about the factors to analyze), I suggest performing a Tukey quick test and a repeatability test to understand quickly whether the factors addressed are significant. After that you can do a DOE using a Taguchi L8 or L9 array (depending on your needs).
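For what it’s worth, the ppm and sigma figures above check out. A small sketch of the conversion (using the conventional 1.5-sigma long-term shift):

```python
from scipy.stats import norm

# 50-100 defects/day out of 44,000 units, converted to ppm and sigma level
for defects_per_day in (50, 100):
    ppm = 1e6 * defects_per_day / 44000
    sigma = norm.isf(ppm / 1e6) + 1.5  # conventional 1.5-sigma shift
    print(f"{defects_per_day}/day -> {ppm:.0f} ppm -> {sigma:.2f} sigma")
```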
This is just my idea; with the full picture of the situation, something might be modified.
    Rgs,
    Peppe 

    #102532

    Michael Schlueter
    Participant

    Hi,
    You wrote: “(…) Based on historical data, some factor levels will increase this probability [of defects], but not by much.(…)”
Does the opposite hold, too? Can you reduce the defects a little when tuning this factor (or these factors) in the opposite direction?
If so, we can try to intensify the effect of this factor: it is already a control factor, just a little too weak, maybe.
    Kind regards, Michael Schlueter

    #102534

    Cone
    Participant

    J.K.,
With no disrespect to the other answers, they are taking you down the wrong path.
You do not want to worry about the method of setting up an experiment, or the method of analysis, until you have let the defective parts tell you something.
A good failure analysis of the parts should let you narrow down the number of inputs significantly. A simple analysis like a multi-vari study should also allow you to greatly reduce what needs to be looked at.
What do you know about the defective parts? Can you adequately describe how they are different from all the others?

    #102560

    Kim Niles
    Participant

    Dear J.K:
     
I like Michael’s idea of performing a DOE on the whole process, but I would suggest you test within acceptable tolerances on production, with every run being one entire day’s worth of production. Your response will simply be the quantity of special rejects for that day.

Since you don’t have a clue as to what causes the special reject, it is likely caused by an interaction, and therefore DOEs are the tool to use. If you test within normal operating tolerances, then all good product will be acceptable, and since you are running thousands of parts per run, even very slight changes to key process variables can be measured.
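Here is a sketch of how that day-per-run design might be analysed, assuming a 2^3 full factorial over eight days; the factor names, coded levels and daily reject counts are entirely invented:

```python
import pandas as pd
import statsmodels.formula.api as smf

# one row per day: a 2^3 full factorial, response = daily count of the
# special reject (all numbers invented for illustration)
data = pd.DataFrame({
    "A": [-1, -1, -1, -1, 1, 1, 1, 1],
    "B": [-1, -1, 1, 1, -1, -1, 1, 1],
    "C": [-1, 1, -1, 1, -1, 1, -1, 1],
    "rejects": [92, 81, 77, 70, 64, 55, 49, 41],
})
# Poisson regression suits count responses like rejects per day
fit = smf.poisson("rejects ~ A + B + C", data=data).fit()
print(fit.summary())  # negative coefficients = settings that lower the rate
```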
     
    Good luck,
    Sincerely,
    KN – http://www.KimNiles.com
    https://www.isixsigma.com/library/bio/kniles.asp

