DOE Screening Designs

  • #54190

    Joel Smith
    Participant

    I have a quick poll for the LSS community…please respond with your answers, as well as roughly how many people you are answering for (so if you are a consultant and you train 200 GB’s and 50 BB’s a year and your answers apply to all of them, state that).

    1. In any of your LSS courses – and which ones – do you teach the concept of “screening” designs, where you first do a compact DOE to identify significant main effects and then you do a second DOE to more fully and accurately model terms including interactions?

    2. In the projects that you see DOE applied in, how many factors do they typically have?

    3. For those that do not teach the concept of successive designs (screening first and then modeling), how do practitioners usually narrow down their number of factors?

    Thanks for your time – if any of these seem obvious, keep in mind that I’m trying to hear some VOC on the issue.

    #194107

    Chris Seider
    Participant

    @joelatminitab
    Screening DOE’s are taught as a matter of fact for res III designs but not as a separate topic for BB’s.

    #194112

    Mike Carnell
    Participant

    @joelatminitab When people first start, I try to keep them in a half-fraction 5-factor design or a 3-factor full factorial. Once they get their legs under them, they can do whatever they want. I think those 2 designs are the easiest to make sense of, so they are a good foundation for other designs.
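
    For concreteness, here is a minimal Python sketch of those two designs (the half-fraction generator E = ABCD is just one common choice, not anything Mike specifies):

        # 3-factor full factorial (8 runs) and a 2^(5-1) half fraction (16 runs).
        import itertools
        import numpy as np

        full3 = np.array(list(itertools.product([-1, 1], repeat=3)))   # 8 x 3

        base = np.array(list(itertools.product([-1, 1], repeat=4)))    # A, B, C, D
        half5 = np.column_stack([base, base.prod(axis=1)])             # E = ABCD, resolution V

        print(full3.shape, half5.shape)   # (8, 3) (16, 5)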

    I really like Bob Launsby’s book “Straight Talk On Designing Experiments.”

    At the risk of sounding like a heretic, I don’t find DOE all that useful (with the exception of R&R or if you have interactions). I would rather see someone who is really good with hypothesis testing. People tend to overestimate the complexity of DOE analysis and underestimate the complexity of running the experiments.

    Just my opinion.

    #194117

    Joel Smith
    Participant

    @cseider and @mike-carnell So is it safe to say in either case that you’re not teaching “successive” DOE’s, where one DOE is done to screen factors and then a second one is done (perhaps by adding runs to the first) to fully model the remaining terms and interactions? There’s no right answer here – I’m just looking for information and want to make sure I’m understanding correctly.

    Anyone else want to share what they do?

    #194118

    Robert Butler
    Participant

    In those situations where I’ve taught DOE, the approach I take is driven by the industry and/or what it is that the specific group can and cannot do. If I have an option with respect to teaching, I usually take the screening design/follow-up design approach. However, there have been many times when time/cost constraints made this option untenable. What I have done in those cases is put together a D- or A-optimal design which essentially straddles the screening/follow-up approach.

    I build this kind of design only after asking and getting answers to a LOT of questions. When I build these designs, if it is possible, what I will use is a highly fractionated design as a starting point and then augment that design to go after specific interactions/curvilinear terms. I’ll block the design so that after a given number of runs I can start running an interim analysis.

    Once the analysis is complete I’ll use the resulting equations for the various responses to predict optimums and then we’ll run those predicted points and use them as confirmation checks on the work I’ve done.

    The number of X’s involved in these efforts typically falls in the 8-15 range, and the number of variables measured (and hence the number of predictive equations constructed) varies from 5 to 12. Of course there are exceptions – the biggest project I ever worked on had 151 variables and 42 responses.
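
    To illustrate the augmentation idea Robert describes, here is a toy sketch: a simple greedy search on det(X'X) that adds runs to a fractionated screen so a specific interaction can be estimated. It is only an illustration of the concept, not his actual procedure or any particular package’s algorithm.

        # Toy "augment a screen toward a specific interaction" sketch using a
        # greedy D-optimality criterion (maximize det(X'X)).
        import itertools
        import numpy as np

        def model_matrix(design, interaction=(0, 1)):
            # Intercept, main effects, and one chosen two-factor interaction.
            inter = design[:, interaction[0]] * design[:, interaction[1]]
            return np.column_stack([np.ones(len(design)), design, inter])

        # Starting screen: a 2^(4-1) half fraction with D = ABC.
        base = np.array(list(itertools.product([-1, 1], repeat=3)))
        screen = np.column_stack([base, base.prod(axis=1)])

        # Candidate runs: the full 2^4 factorial.
        candidates = np.array(list(itertools.product([-1, 1], repeat=4)))

        # Greedily add the candidate run that most increases det(X'X) for the
        # model that now includes the A*B interaction.
        design = screen.copy()
        for _ in range(4):                       # add four augmentation runs
            best_det, best_row = -np.inf, None
            for row in candidates:
                X = model_matrix(np.vstack([design, row]))
                det = np.linalg.det(X.T @ X)
                if det > best_det:
                    best_det, best_row = det, row
            design = np.vstack([design, best_row])

        print(design)                            # 8 screening runs + 4 added runs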

    #194121

    Mike Carnell
    Participant

    @joelatminitab Joel, I do my best not to teach in a classroom, so Chris can speak to the classroom stuff better than I can. When I am doing site support I will ask them to do screening designs if they have more than 3 factors. If they have more than 5 we have some serious discussions about what can and cannot be sorted out with hypothesis testing.

    Just my opinion.

    #194125

    Chris Seider
    Participant

    @joelatminitab Yes, those items you talk eloquently about are typically handled with site support. Remember, we are teaching theory, application in Minitab of that theory, and then class exercises to further develop learning.

    #194126

    Mike Carnell
    Participant

    @joelatminitab and @cseider I am sure that we cause some people to get their heads completely screwed up just by the way we have to teach it. They see DOE during Measure when they learn R&R but have no idea how the analysis is being done. They have to take it on faith that at some later date they will get it explained to them.

    When we get to teaching DOE, you have to teach it backwards from the way you use it. You teach full factorial designs before you teach fractional designs, even though in application they are done in the opposite order. The only way you are going to get that straight in people’s heads is by actually working through a couple of these.

    Just my opinion.

    #194127

    MBBinWI
    Participant

    @Mike-Carnell – Friend Mike, you were somehow mistreated by your DOE stats instructor. You are correct that without interactions, you can learn everything via other hypothesis tests – but DOE is still the most efficient means to gain that knowledge. Now, when there are interactions, the only way you will gain that knowledge is via DOE.
    You’re right that most training modules that I’ve seen allude to DOE in Measure but don’t get to details until Improve. I’ve found that if you merely teach the concept of averaging the faces of the cube (using a 3-factor design as the easiest to understand), people get the concept of DOE, and once they have that level of understanding you can go straight to fractional factorials/screening. Then, in Improve, you can go through a full explanation of the benefits/costs of various types of designs, and then proceed into robustness.
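
    To make the cube-face picture concrete, here is a tiny sketch (with made-up response values) showing each main effect as the difference between the averages of the two opposing faces of the 2^3 cube:

        # 2^3 full factorial: main effect of a factor = mean response on its
        # +1 face minus mean response on its -1 face (made-up data).
        import itertools
        import numpy as np

        runs = np.array(list(itertools.product([-1, 1], repeat=3)))  # 8 corners
        y = np.array([52., 60., 54., 63., 55., 67., 57., 70.])       # fake responses

        for i, name in enumerate("ABC"):
            effect = y[runs[:, i] == 1].mean() - y[runs[:, i] == -1].mean()
            print(f"main effect of {name}: {effect:.2f}")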

    @joelatminitab – That said, any experienced DOE practitioner will start with a res III screening DOE and control all conceivable variables that might have an impact – if I believe they will ultimately be noise variables, I choose to set them at the more undesirable level. The great thing is that you can treat this as one block and add the opposite levels later if you decide that you need to.
    The critical thing to get across to students/novices is that even the variables you won’t normally control need to be controlled (and intentionally set) during the experiment. I can’t tell you the number of times I’ve had someone start in on a DOE series without controlling some “noise” variable, only to find that they need to change it to a control factor and have no idea how it impacted the transfer function because they didn’t set it or record it.
    In DfSS, I always teach the screening method. I try to limit the number of factors, but often get up to 10 or so. As long as you control and intentionally set the variables that you believe will ultimately be noise, you don’t need to include them in the analysis. After you’ve determined the critical few, you can then add the noise variable(s) back in at the alternate settings to see the robustness impact.

    #194136

    Chris Seider
    Participant

    @MBBinWI 10 would be a challenge to manage. Nicely stated comment on the noise factor recording.

    #194137

    MBBinWI
    Participant

    @cseider – Actually, Chris, a 2-level, 10-factor res III design is only 16 experimental runs, so it’s not that bad. As I said, if you set and hold what you believe will be noise factors, you can drop them from the screening experiment (and since you set/held them, you can always add them back in later by adding the folded block).
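
    For anyone who wants to see that 16-run layout, here is a minimal sketch; the generators (E = AB, F = AC, and so on) are just one illustrative choice, not the only valid one:

        # One 16-run resolution III design for 10 two-level factors, plus its
        # fold-over block.
        import itertools
        import numpy as np

        # Base: full 2^4 factorial in A, B, C, D (16 runs).
        base = np.array(list(itertools.product([-1, 1], repeat=4)))
        A, B, C, D = base.T

        # Assign the remaining six factors to two-factor interaction columns;
        # that aliasing is what makes the design resolution III.
        design = np.column_stack([A, B, C, D, A*B, A*C, A*D, B*C, B*D, C*D])
        print(design.shape)   # (16, 10)

        # Fold-over: flip every sign.  Run as a second block, it de-aliases
        # main effects from two-factor interactions (combined resolution IV).
        fold = -design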

    #194143

    Mike Carnell
    Participant

    @MBBinWI You state that DOE is the most efficient. I don’t believe that. How did you determine that? DOE will always be an intrusive process, which is more complicated to arrange. You made the comment yourself that people don’t control noise variables. That is not restricted to noise variables, particularly when they are new to this.

    I can understand how, in Robert Butler’s area, he might get into 10 or more variables. In the stuff we do – you, Chris Seider and I – I have never seen a process where I needed to go beyond 5 variables. I have only seen one 3-way interaction. There is way too much buzz around DOE. I think it is a valuable tool for 2 things. First, as we mentioned, interactions – you have to have it for those. Second, it is a big gee-whiz factor that people throw into the training to scare the crap out of people and create a lot of the “don’t pay attention to the man behind the curtain” effect. It doesn’t actually need to be used that frequently.

    #194145

    MBBinWI
    Participant

    @Mike-Carnell – surely, Mike, you’re not going to require me to go through a proof that shows what you can learn by OFAT vs. DOE, are you?
    Not controlling suspected noise variables is not a condemnation of DOE as an approach to experimentation; it is a failure of experimental method generally. Too many folks believe that just because you won’t control something in normal operations, you should not control it during experimentation – this is one of the most difficult mind-sets to overcome.
    As for 10 factors – I’ve often faced that in design. While physics/engineering tells me the “pure” impact of the variables, the reality is that variation in those factors makes some more critical than the theoretical equation might suggest. Thankfully, I ran into Monte Carlo simulation a long time ago (my preferred tool is Crystal Ball – an Excel add-in). Using that tool I can either gather existing data on variation distributions or just assume some worst-case distributions and model the system to see its sensitivity to variation. This has allowed me to do less experimentation to determine sensitivities.
    The biggest mistake I see is folks jumping into a DOE without really thinking through what they are trying to learn. Usually, they are out to “prove” a pet theory and thus set the experimental range too narrowly to actually learn anything. They also tend to miss variables, or fail to control noises, so what they end up with is a really low r^2.
    All that said, I agree with you – if you don’t have interactions, you don’t need DOE (but without it, how can you be SURE you don’t have interactions?).
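
    On the Monte Carlo point above, here is a rough open-source analogue of that kind of sensitivity screen – a plain Python sketch with made-up input distributions and a made-up transfer function, not Crystal Ball output:

        # Propagate assumed input distributions through a hypothetical transfer
        # function and rank factors by how strongly they drive the output.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 100_000

        # Illustrative (assumed) distributions for three design factors.
        x1 = rng.normal(10.0, 0.5, n)
        x2 = rng.normal(5.0, 0.8, n)
        x3 = rng.uniform(0.9, 1.1, n)

        def transfer(x1, x2, x3):
            # Hypothetical engineering relationship, not from the thread.
            return 3.0 * x1 + 0.5 * x2**2 + 40.0 * x3

        y = transfer(x1, x2, x3)

        # Crude sensitivity ranking: correlation of each input with the output.
        for name, x in [("x1", x1), ("x2", x2), ("x3", x3)]:
            print(name, round(np.corrcoef(x, y)[0, 1], 3))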

    #194146

    MBBinWI
    Participant

    @Mike-Carnell – Before Chris can chime in about doing partial differentials to get an estimate on the variation, I HATE DIFF E-Q’s! Plus, the cool distributions and tornado graphs that CB provides are much easier to explain to people than an equation.

    #194219

    ivan
    Participant

    Here’s what we teach at my company:
    If the number of variables is 4 or fewer, go ahead and model; if it is 7 or more, do a screening DOE first; in between, it’s a toss-up depending on costs!
