DOE
 This topic has 10 replies, 7 voices, and was last updated 15 years, 11 months ago by melvin.


June 13, 2006 at 8:26 pm #43710
I seek to characterize a process using fractional factorial designs in MTB. I know the number of factors and runs available to me, so I will simply let the MTB table determine my resolution. I can also reference the MTB alias table to understand which effects are confounded. But how do I determine which treatment levels to actually include in the study? THANKS!
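As background for the resolution/aliasing question above, here is a minimal sketch of the mechanics Minitab performs behind its design table: a 2^(4-1) half-fraction built by running a full 2^3 factorial in A, B, C and generating D = ABC. The function name and structure are illustrative, not from Minitab.

```python
from itertools import product

def half_fraction_2_4_1():
    """Build a 2^(4-1) fractional factorial: full factorial in A, B, C,
    with the fourth factor generated as D = ABC (defining relation I = ABCD)."""
    runs = []
    for a, b, c in product([-1, 1], repeat=3):
        d = a * b * c  # generator column: D confounded with the ABC interaction
        runs.append((a, b, c, d))
    return runs

design = half_fraction_2_4_1()
# 8 runs instead of 16. The defining relation I = ABCD makes this resolution IV:
# main effects are aliased with three-factor interactions (A = BCD, etc.), and
# two-factor interactions are aliased in pairs (AB = CD, AC = BD, AD = BC).
for run in design:
    print(run)
```

This is the same alias structure the MTB alias table reports for a resolution IV half-fraction.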
June 13, 2006 at 8:30 pm #139051
Jered Horn (Participant, @HornJM)
From the results of your hypothesis testing.
June 13, 2006 at 8:35 pm #139053
OK… I was thinking something to the extent of running an FMEA off the PMAP to sort multiple potential KPIVs, then taking those multiple inputs (> 20) into a Plackett-Burman design. Are you suggesting running some form of HT prior to this to prioritize? What type of HT would you suggest when running over 20 variables? Thanks!
June 13, 2006 at 8:50 pm #139056
Butch Gruffman (Participant, @ButchGruffman)
You asked about “which treatment levels to actually include in the study.” Do you mean “what levels to set for each treatment” or “which factors to include in the study”? Both are actually answered by the experience of the team of study participants who have in-depth knowledge of the process and collectively make these decisions about study design.
Once conducted, your study will tell you which factors are significant contributors to influencing the process when varied between the levels included in the study.
June 13, 2006 at 8:57 pm #139057
Butch Gruffman (Participant, @ButchGruffman)
If the question is “which of the 20 possible factors should I actually include in my experiment” and you are using a team of experts, then you may have an application for the following Prioritization Matrix template.
https://www.isixsigma.com/library/content/c060529a.asp
June 13, 2006 at 9:16 pm #139062
Jered Horn (Participant, @HornJM)
Process map and FMEA are good first steps. But I wouldn’t go directly to a designed experiment from there. Six Sigma is about using data to drive your decisions.
From the process map and FMEA, you can construct a Critical-To matrix. That’s where you get your team of process “experts” together, list all the input variables, list all the output variables that are critical to the customer, and rate each output variable (on a scale of 1 to 10, 10 being most important). Then match each input variable to each output variable, rate the perceived importance of that input to the output, multiply the output importance by the input importance, and add across the row. That’ll allow you to narrow your list of input variables to, say, 10.
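The Critical-To matrix arithmetic described above can be sketched in a few lines. The output weights, input names, and ratings below are made-up illustrations, not data from this thread.

```python
# Customer importance of each output variable, rated 1-10 (hypothetical values).
output_weights = {"strength": 10, "finish": 5}

# Perceived influence of each input on each output, rated 1-10 (hypothetical).
input_ratings = {
    "temperature": {"strength": 9, "finish": 3},
    "line_speed":  {"strength": 1, "finish": 8},
}

def critical_to_scores(weights, ratings):
    """Row score for each input: sum over outputs of
    (output importance) * (perceived input-to-output importance)."""
    return {
        x: sum(weights[y] * r for y, r in ys.items())
        for x, ys in ratings.items()
    }

scores = critical_to_scores(output_weights, input_ratings)
# temperature: 9*10 + 3*5 = 105; line_speed: 1*10 + 8*5 = 50
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```

Sorting by the row score gives the ranked shortlist of candidate KPIVs to carry forward.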
I’ve always treated hypothesis testing as a way to monitor (NOT manipulate) these potentially critical input variables. If you can show through graphical methods, multi-vari analysis, tests for equal variance, t-tests, ANOVA, contingency tests, or regression analysis that you can significantly affect the output by varying a particular input, then it’s a candidate for DOE.
For levels of each factor, you can start with the extremes of your normal process variation.
Hope that helps.
June 19, 2006 at 6:30 am #139280
Developing an improvement hypothesis to take into an experimental design requires completion of exploratory data analysis / observation analysis in the Measure and Analyse phases.
The treatment levels should be based on results from data analysis and interviewing people with process knowledge. I recommend pulling together a proposal based on these different sources and getting feedback as much as possible before proceeding with experimentation.
As well, I would take a hard look before running a saturated design with 20+ variables, given the potential for confounding between first- and second-order effects. Have the results from your exploratory data analysis and tribal-knowledge collection left you with 20+ variables, or is this your process for narrowing from possible to critical Xs?
Bob
June 19, 2006 at 7:10 pm #139312
k.bhadrayya (Participant, @k.bhadrayya)
Hi,
Usual practice is to decide the number of variables with which one should go into experimentation. At this stage, the number of variables need not initially be limited. List as many theoretically possible variables as you can, based on discussions with your team members, reports, literature reviews, data collection and analysis, regression analysis, etc. If the number of variables is large, say 20, it is better to go to Plackett-Burman methods initially with two levels, as they offer the fewest experimental trials. From the analysis of the results, you will end up with 5 or 6 significant variables. Then one can take these significant variables and go for fractional factorial designs. Even here one need not assume any interactions. On analysis of the results, only if second-order effects are significant does one need to identify the real interactions by folding over the design. Otherwise only the main variables can be considered for further analysis and improvement.
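For reference, a two-level Plackett-Burman screening design of the kind suggested above can be written out by hand from its standard cyclic generator row. This sketch builds the 12-run design, which screens up to 11 factors; in practice Minitab (or pyDOE2's `pbdesign`) would generate this table for you, so treat the construction below as illustrative.

```python
# Standard generator row for the 12-run Plackett-Burman design (11 columns).
GENERATOR = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]

def plackett_burman_12():
    """12-run Plackett-Burman design for up to 11 two-level factors:
    11 cyclic shifts of the generator row, closed by a row of all -1s."""
    rows = []
    for shift in range(11):
        rows.append(GENERATOR[-shift:] + GENERATOR[:-shift] if shift else list(GENERATOR))
        # each row is the previous one rotated one position to the right
    rows.append([-1] * 11)  # final run with every factor at its low level
    return rows

design = plackett_burman_12()
# Each column is balanced (six +1s, six -1s) and all columns are mutually
# orthogonal, which is what lets 11 main effects be estimated from 12 runs.
for row in design:
    print(row)
```

With ~20 candidate factors, the analogous 24-run design would be used instead; the screening logic is the same.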
June 19, 2006 at 8:34 pm #139318
OK… let’s get the terminology straight. When you speak of “levels”, you are referring to factors. I suppose if you are running a 1-factor experiment, then you could use factor level and treatment interchangeably.
A treatment combination is a combination of factor levels for a given run (A low, B low, C low, etc.).
I am assuming you are trying to decide how to set your factor levels for the DOE, and that you have already “weeded” the list down to a manageable number. There are many approaches you can take, and here are 2 that I have used.
1) If you are completely unsure, use +/- 15% from the process of record. If your temperature setpoint is 100, use 85 for low and 115 for high.
2) If you have data on how your input factors vary over time, and you don’t know the effect on your response variable, use this data to arrive at your factor levels. If you set your temp at 100, and you see that the actual temperature varies anywhere from 93 to 107, use these extremes as the low and high for this factor.
In all cases, you need to set the factor levels far enough apart that you would expect to see a difference. Always make sure you evaluate the design space for extreme combinations that might be risky to run.
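The two level-setting approaches above amount to simple arithmetic; this sketch shows both side by side. Function names and the example numbers are illustrative, not from the original post.

```python
def levels_from_setpoint(setpoint, spread=0.15):
    """Approach 1: no prior knowledge -- take +/- 15% around the
    process-of-record setpoint as the low and high levels."""
    return (setpoint * (1 - spread), setpoint * (1 + spread))

def levels_from_history(observations):
    """Approach 2: use the observed extremes of normal process
    variation as the low and high levels."""
    return (min(observations), max(observations))

# Setpoint of 100 -> roughly (85.0, 115.0)
print(levels_from_setpoint(100))

# Actual temperatures seen to range 93-107 -> (93, 107)
print(levels_from_history([100, 93, 104, 107, 98]))
```

Either way, the candidate levels should still be sanity-checked against the warning above about risky extreme combinations before running.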
Hope this helps!
June 19, 2006 at 9:35 pm #139321
It is important that you choose factor levels far enough from nominal. If your levels are too close to your normal running environment, you run the risk of not finding any significant effects in your experiment. Pushing things to the outer limits will help your effects stand out, have a low p-value, etc.
June 19, 2006 at 9:54 pm #139322
Hi,
I’m just wondering why you would not take advantage of all the information that’s available for analysis before you go into any phase of experimentation. Don’t you see the Measure and Analyse phases as being processes of passive observation?
The forum ‘General’ is closed to new topics and replies.