DOE Optimization Suggestions
This topic has 16 replies, 7 voices, and was last updated 13 years, 1 month ago by Robert Butler.
May 9, 2009 at 10:49 am #52327
Hey all, been a while since I've been on the site, but as usual, the "regulars" make for fun entertainment. Anyway, I am looking for some suggestions on a project I am currently working on.

Background: I have used multiple smaller DOEs to identify and isolate significant factors related to failures I'm having in a soldering process. My final experiment included a response surface DOE of the two most significant factors in my process. I have been able to improve the process so that these inputs can begin to show zero defects, although not consistently... so here's the problem, as you may be guessing at this point.

Problem: The output of the DOEs has been counts of defects, which, of course, is not continuous data. I have been confident that I would have plenty of defects to work with, but now that I am getting the process optimized, I will start having lots of zeros and ones as my response. Not helpful, nor statistically informative. I cannot think of a way to measure the output with continuous data. Thus, my challenge.

Have you encountered this before? Any thoughts or suggestions? THANKS ALL!
May 11, 2009 at 3:59 pm #184001
DC: Welcome back.

When you have either small numbers of counts or a large beta error, all you can do is increase the sample size, A LOT!

Do some research into EVOP (Evolutionary Operation). You will be making very small changes in the variables, small enough that the output is still within specifications, but the changes in output will still allow you to slowly fine-tune the process. The data collection will be integrated into SOPs and control plans, and specific settings may be in place for weeks.

Cheers, Alastair

P.S. – Now we wait for Robert Butler to tell us all how it really should be done.
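For readers unfamiliar with EVOP, here is a minimal sketch of the arithmetic one phase involves, assuming a two-factor 2x2 pattern with a center point around the current operating settings. The factor names, settings, and responses in the code are invented for illustration; they are not taken from this thread.

```python
# A rough sketch of the bookkeeping behind one EVOP phase: a 2x2 pattern of
# small perturbations around the current settings, repeated cycle after cycle
# in normal production, with effects re-estimated as data accumulate.
import numpy as np

# Five operating points: current center plus four small perturbations, kept
# well inside the process window so product still ships (values are invented).
#                   preheat_C  dwell_s
points = np.array([[215.0, 30.0],   # center (current settings)
                   [213.0, 28.0],   # (-, -)
                   [217.0, 28.0],   # (+, -)
                   [213.0, 32.0],   # (-, +)
                   [217.0, 32.0]])  # (+, +)

# Response observed at each point in each cycle (e.g. splashes per 1000 joints).
# Rows are cycles, columns match the five points above; numbers are made up.
cycles = np.array([[1.8, 2.1, 1.5, 2.4, 1.9],
                   [2.0, 2.3, 1.4, 2.2, 1.7],
                   [1.7, 2.2, 1.6, 2.5, 1.8]])

n = cycles.shape[0]                      # number of completed cycles
mean = cycles.mean(axis=0)               # running average per operating point
s = cycles.std(axis=0, ddof=1).mean()    # rough pooled cycle-to-cycle sd

c = mean[1:]                             # corner averages: (--), (+-), (-+), (++)
effect_preheat = 0.5 * ((c[1] + c[3]) - (c[0] + c[2]))
effect_dwell   = 0.5 * ((c[2] + c[3]) - (c[0] + c[1]))
interaction    = 0.5 * ((c[0] + c[3]) - (c[1] + c[2]))
error_limit = 2 * s / np.sqrt(n)         # ~2-sigma limits for an effect

print("center settings:", points[0], "point averages:", np.round(mean, 2))
print(f"preheat effect {effect_preheat:+.2f}, dwell effect {effect_dwell:+.2f}, "
      f"interaction {interaction:+.2f}, error limits +/-{error_limit:.2f}")
# Only when an effect clearly exceeds its error limit would the center be moved
# a small step in the favorable direction and a new phase started.
```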
0May 11, 2009 at 8:40 pm #184008
Taylor
DC,
I have a little background with soldering. What constitutes a failure in your process? Do you have any physical attributes, such as solder thickness, that can be correlated to a failure?
May 11, 2009 at 11:49 pm #184013
Chad, a failure in this case is a solder splash. This is a vapor phase process. The previous DOEs have been targeted at flux type, flux application, recipe optimization and pre-process treatments. The solder type itself cannot change (Sn37Pb63). These splashes are evident on gold pins which are being soldered through the pwb. Unfortunately, the solder joint itself is left with no defect (i.e. blowholes, pinholes, dewetting, etc.). I believe the actual defect is being created while the reflow is occurring, and since there are no other residual effects, all I have to measure is the splashes. Last note: they cannot be reworked, so they are scrapped when the defect occurs. I'm mentioning this because it eliminates the option of doing large DOEs over long periods of time – too much cost.

In any case, I feel I'm knocking on the door to success, but I've got this last barrier to break. Ugh.

Hope the info wasn't too technical, and I'm confident in my factors and approach (not that I don't value your opinion), so I guess my real question is how to optimize using a DOE when I have count data with a hard lower limit and more and more results at that limit as I improve the process. Ironically, I think I may have accidentally made a prophetic statement when explaining my results to some of the leadership today, something like: "the danger of Lean Six Sigma champion training is that it often creates the expectation that these tools and this approach will, in theory, solve all the problems, when in reality, six sigma tools, like Lean tools, will have their greatest impact in improving and optimizing the processes." For this process, my success is only defined as zero failures – no splashes – so maybe I'm just making excuses for myself, lol.

Sorry to ramble.
May 12, 2009 at 2:12 am #184014
Eric Maass
DC,
If you can get the sample size up a bit, I think you can optimize using Binary Logistic Regression analysis. You can think of it as moving from the 0/1 that you're seeing now to analyzing the probability of having a defect, from 0 to 100%. I think you can stay with the CCD (Central Composite Design) for the response surface design, but the analysis will be a bit more involved. You can email me if you'd like some help with trying this approach for the first time: [email protected].

Best regards,
Eric Maass
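A hedged sketch of what Eric's suggestion might look like in Python with statsmodels, for two coded CCD factors. The factor names, replication, and simulated data below are placeholders, not the actual process data.

```python
# A sketch of the binary-logistic approach on a two-factor CCD: model the
# probability of a splash as a function of the coded factor settings.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Coded CCD settings for the two significant factors, with 20 boards per run
# so the 0/1 outcome carries enough information (layout is hypothetical).
design = pd.DataFrame({
    "preheat": np.repeat([-1, 1, -1, 1, -1.414, 1.414, 0, 0, 0], 20),
    "dwell":   np.repeat([-1, -1, 1, 1, 0, 0, -1.414, 1.414, 0], 20),
})

# Simulated outcome purely for illustration: 1 = splash on the board, 0 = clean.
logit_p = -2.0 + 0.9 * design["preheat"] - 0.6 * design["dwell"]
design["splash"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

# Second-order (response-surface style) logistic model in the two factors.
model = smf.logit(
    "splash ~ preheat + dwell + I(preheat**2) + I(dwell**2) + preheat:dwell",
    data=design,
).fit(disp=False)

print(model.summary())
center = pd.DataFrame({"preheat": [0.0], "dwell": [0.0]})
print("Predicted P(splash) at the center point:",
      np.asarray(model.predict(center))[0])
```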
May 12, 2009 at 2:18 pm #184022
Robert Butler
You said, "I have been confident that I would have plenty of defects to work with, but now that I am getting the process optimized, I will start having lots of zeros and ones as my response. Not helpful, nor statistically informative. I cannot think of a way to measure the output with continuous data."
In a follow-up post you gave a definition of these defects – solder splash – and you imply in that post that a solder splash is a real yes/no question: either it occurs or it doesn't. Your initial post concerning "lots of zeros and ones" suggests you expect to have way too many zeros and hardly any ones, thus making it difficult, if not impossible, to express the failure rates in percentage terms.
If this is the case then you could use the metric of mean time between splashes, or even just time between splashes for each setting, as the measurement – the optimum being the combination of factors resulting in the longest time between splashes.
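One way Robert's time-between-splashes metric could be analyzed, sketched in Python with statsmodels; the design points, the hours between splashes, and the log transform are illustrative assumptions rather than data from this process.

```python
# A sketch of the time-between-splashes idea: use (mean) time between splashes
# as the response and fit the usual second-order model to it.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "preheat": [-1, 1, -1, 1, -1.414, 1.414, 0, 0, 0, 0],
    "dwell":   [-1, -1, 1, 1, 0, 0, -1.414, 1.414, 0, 0],
    "mtbs_hr": [6.0, 14.0, 4.5, 9.0, 5.5, 12.0, 8.0, 7.0, 10.0, 11.0],
})

# Times between events tend to be right-skewed, so a log transform is a
# common (though not mandatory) choice before fitting the response surface.
data["log_mtbs"] = np.log(data["mtbs_hr"])

rsm = smf.ols(
    "log_mtbs ~ preheat + dwell + I(preheat**2) + I(dwell**2) + preheat:dwell",
    data=data,
).fit()
print(rsm.summary())
# The optimum here is the factor combination that maximizes predicted
# log_mtbs, i.e. the longest predicted time between splashes.
```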
May 12, 2009 at 2:26 pm #184024
MBBinWI
Robert: Exactly right. Too often I encounter folks stuck in a binary (discrete) world – yes/no, good/bad. Even when there is an underlying measurement on a continuous scale, they want to compare this to an acceptance level and bin it as good/bad.
I have not yet encountered a situation where I couldn’t convert some discrete metric to a continuous one. I had a challenge once on visual appeal of a product, but we were even able to do that via a judge panel with 1-100 scale ratings.
Think about what the customer finds important – often it is time to complete a task correctly. If you cannot find another usable continuous measure, that is usually a viable measure.

May 12, 2009 at 2:29 pm #184026
Robert, I always found EVOP to be the way to go in this scenario.
May 12, 2009 at 6:20 pm #184046
Taylor
DC,
Robert Butler gave some great advice, as usual, on the DOE, so I'm going to go in a bit of a different direction.
Have you tried to recreate the "splash" failure? Oftentimes we get so caught up in reducing variation and defects that we miss the true root cause of the problem, or the fact that one of the variables we chose not to use in the DOE really has significance when all the other variables are "optimized". Not really an answer to your question, but I hope it helps.
Good Luck
PS: I had a similar issue with solder links and found that humidity was building up in the enclosed fixture housing and "mini explosions" would create havoc on the links. Fix: we added silica desiccant bags in the base of the fixture box and added some high-flow air exchangers to the already air-conditioned room with desiccant dryers.

May 12, 2009 at 6:26 pm #184048
Okay, let the true debate begin! I have three options, although I admit that I'm a little frustrated with myself for not thinking of time between failures as an option for continuous output data. Here's what I would like to see debated, keeping in mind that each failure is a defect that is scrap. What is the best way to handle this situation? By the way, I'm not excluding one for another, and multiple approaches can be tried, although for the sake of argument, you must choose one of the following options and defend your choice. My options are as follows, so which one will provide the best data for process improvement with the least amount of cost to the company?

1. EVOP (keep in mind that I still have failures that induce scrap, so it's not like I have a six sigma process with a window of acceptable output)
2. Use mean time between failures as the continuous data to continue experimenting as I have been (keep in mind the challenge of overcoming variation in my results due to production requirements, machine maintenance, etc., which I would normally block)
3. Binary logistic regression (keep in mind that my output might not always be binary, but a count)

I'm actually very interested in hearing your defenses for the options you view as most effective, just as much as I am interested in resolving my problem.
May 12, 2009 at 6:33 pm #184049
Chad, great insight. I actually agree with your take, but one of my pre-treatments was baking to remove all moisture from the pwb. Of course, I'm then sending the board into a vapor, and as far as I can ascertain, these "mini-explosions" are typical in this process. Your point is very well taken; however, I am thus far confined to the inputs I have, and have taken the step to order pre-fluxed preforms in order to eliminate the variation that comes with manually applying a catalyst. To push for a process that involves using something other than preforms may require the manufacturing processes to change so much that the costs incurred might exceed even those of having these outsourced. And then, there's no guarantee that the supplier will be any better than me and my SS tools!! Ha, just a little self-indulgence if you don't mind, lol.
May 12, 2009 at 6:37 pm #184050
EVOP is the way you approach the experiment. Either 2 or 3 should work as the analysis/measurement.
May 12, 2009 at 6:48 pm #184051
Gary, while I admittedly have a limited knowledge of EVOP, let me throw this out there. I have a containment process that eliminates these defects even though it negatively impacts cycle time and manufacturing costs. In using EVOP, wouldn't I be opening myself up to possible customer defects and internally impacted DPMO metrics just to run this experiment? These splashes require a microscope to see, so you are suggesting that I weigh the cost of non-conformance against the cost of containment to decide which approach to use, and then use both outputs to verify my optimization? Is that correct?
May 12, 2009 at 7:09 pm #184053
Taylor
DC
Well deserved indulgence. Looks like you're on the right track...

May 12, 2009 at 9:47 pm #184060
No. You always protect the customer first. EVOP is just a DOE strategy that says I make small changes systematically and run each level of the experiment for a long time. The implication of this is that everyone involved in the process has to be knowledgeable about what is being done and why. You also have a higher responsibility to keep everyone involved in the results as well. If you need some reference material on this, let me know. I cannot get to it immediately, but I will get to it.

Chad's suggestion is worth pursuing. Soldering is just physics; go understand the physical phenomenon that is behind the splashes. It has to be minor explosions, if you will, of water or some kind of residual chemical. I'd look to the cleaning and handling of the boards prior to soldering.

May 12, 2009 at 10:37 pm #184064
Gary, thanks, but my containment action is to dip the pins in masking and bake it before running the boards. The splashes never stick with the masking, so if I'm going to pursue EVOP, then I can't insulate the customer. Your comments regarding understanding the physics of soldering are spot on, and I agree with Chad. This, however, has been no small undertaking. Meanwhile, I'll look at using the binary regression approach as well as time between failures, as long as I can continue to find ways to get the biggest bang for my buck with these DOEs so that I don't cost the company a bunch of money to prove this improvement. Thanks, Gary.
May 14, 2009 at 1:17 pm #184118
Robert Butler
I don't see that there is much of a debate. My impression from your earlier posts was that you had run some designs, you had chosen the metric of yes/no defects as the output measure, you had found that for the early work this measure was acceptable, you were contemplating building a design using the critical variables identified in the earlier work, and you were concerned that the yes/no measures would be such (too many 0's, not enough 1's) that the time/effort needed to run your new design (i.e. your EVOP) would be excessive.
To that end I offered the thought that it might be worth changing the measure to time or mean time to events and use that to further refine the process. I didn’t mean to imply you should try some kind of WGP (Wonder-Guess-Putter) approach using this measure just to see if you could luck into some kind of optimum.
You could run the binary logistic regression, but you would still have to have a lot of data for this to work, and at the end you would have odds ratios, which aren't the same thing as coefficients in a regression equation. They will tell you that your chances of something occurring are greater or less depending on what you do (i.e. 4 times more likely to happen if A is at a high setting as opposed to A at a low setting), but given what I think you want from your analysis, I don't think this would be the best choice.
The other thing to consider is this – you could be chasing a chimera. There is, of course, a chance that you will find a combination of factors that results in zero solder splashes, but in my experience it is far more likely that your efforts will result in identifying a combination of factors that will significantly reduce, but not completely eliminate, solder splash.
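For reference, Robert's odds-ratio point in a few lines of Python; the coefficient value below is made up for illustration.

```python
# A logistic-regression coefficient b turns into an odds ratio via exp(b); it
# describes the relative odds of a splash, not a predicted number of splashes.
import numpy as np

b_preheat = -0.9                 # hypothetical fitted coefficient (coded units)
print(np.exp(b_preheat))         # ~0.41: odds ratio per +1 coded unit
print(np.exp(2 * b_preheat))     # ~0.17: odds ratio for going from -1 to +1
```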