Using Computer Simulation Results for Design of Experiments
 This topic has 3 replies, 4 voices, and was last updated 2 years, 1 month ago by Strayer.


December 17, 2019 at 10:21 am #244529
CzarLucMatic (Participant, @CzarLucMatic)
I have an ongoing study specifically aimed at improving inventory record accuracy for a company. So far I have identified the inputs and critical factors through Root Cause Analysis, a C&E Matrix, and FMEA. Having done all of this, I have the critical factors that can be tested for the improvement of record accuracy.
However, I am having trouble with the next step, which is to improve those factors. I intended to use Design of Experiments to learn which critical factor to focus on in order to improve inventory record accuracy. A physical trial of the improvements is not an option, as it would entail a large cost to the company. This led me to consider running a computer simulation model of the system and using its results as inputs for the design of experiments.
I have read some articles and journals that do use computer simulation results as inputs for a design of experiments. My question is whether the improvement of inventory accuracy can be modeled just like a manufacturing process. In other words, can system records also be simulated like other production-line models? Or is there a better way of testing my factors for this problem?
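In principle, system records can be simulated much like a production line: treat each inventory transaction as an event that may corrupt the record with some probability, and treat those probabilities as the experimental factors. Here is a toy sketch (every factor name and probability is invented purely for illustration, not taken from the poster's study):

```python
import random

def simulate_record_accuracy(n_transactions, p_scan_error, p_count_error,
                             cycle_count_interval, seed=0):
    """Toy sketch: track true vs. recorded stock for one SKU.

    Hypothetical factors: probability of a mis-scan per transaction,
    probability of a mis-count during a cycle count, and how often
    cycle counts reconcile the record. Returns the fraction of
    transactions after which the record matches reality.
    """
    rng = random.Random(seed)
    true_qty, recorded_qty = 1000, 1000
    accurate = 0
    for t in range(1, n_transactions + 1):
        move = rng.choice([-1, 1]) * rng.randint(1, 5)  # receipt or issue
        true_qty += move
        # a scan error books the wrong quantity into the system
        recorded_qty += move + (rng.choice([-1, 1])
                                if rng.random() < p_scan_error else 0)
        # periodic cycle count reconciles the record, but may itself miscount
        if t % cycle_count_interval == 0:
            recorded_qty = true_qty + (rng.choice([-1, 1])
                                       if rng.random() < p_count_error else 0)
        accurate += (recorded_qty == true_qty)
    return accurate / n_transactions
```

Each DOE run would then set the factor levels, call the simulator, and record the resulting accuracy as the response.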
December 17, 2019 at 2:13 pm #244538
Robert Butler (Participant, @rbutler)
All of the cases I'm aware of that have run simulations of the results of an experimental design have had actual response measurements from which to extrapolate. If the simulation is not grounded in fact, then all your simulation is going to be is some very expensive science fiction.
A few questions/observations and an approach to assessing what you have that might be of some value.
1. You said you identified the critical factors using a number of tools. In order for those tools to work you had to have actual measured responses to go along with them.
a. What kind of factors – continuous, categorical, nominal? A mix of all three? Or, what?
b. Apparently, you have some sort of measure or types of measures you view as correlating with record accuracy. What kind of measurements are these – simple binary – correct/incorrect? Some kind of ordered scale – correct, incorrect but no worries, incorrect but may or may not matter, incorrect and some worries, seriously incorrect? Or is the accuracy measure some kind of percentage or other measure that would meet the traditional definition of a continuous variable?
2. You said running an experimental design would entail cost. All experimental efforts cost money. The question you need to ask is this: given that running a design would cost money, and given that I have taken the time to generate a reasonable estimate of this cost, what is the trade-off? Specifically, if I were to spend this money, how much could I expect to gain if I identified a way to increase accuracy by X percent? We are, of course, assuming the accuracy issue is physically/financially meaningful and one where the change in percent would also be physically/financially meaningful. I make note of this because if you are at some high level of accuracy, like 99%, then you can only hope for some fraction of a percent of change, and the question becomes what a tiny change in that final 1% would translate into with respect to dollars saved or gained.
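To make that trade-off concrete, here is a back-of-the-envelope payback calculation (every figure below is an invented placeholder, not from the poster's company; substitute your own cost estimate and accuracy numbers):

```python
# All figures are hypothetical placeholders for illustration only.
design_cost = 20_000.0            # estimated cost of running the design
annual_inventory_value = 5_000_000.0
current_accuracy = 0.97           # today's record accuracy
target_accuracy = 0.99            # accuracy the design might unlock
loss_per_error_point = 0.02       # fraction of value lost per 1% inaccuracy

annual_saving = (annual_inventory_value * loss_per_error_point
                 * (target_accuracy - current_accuracy) * 100)
payback_years = design_cost / annual_saving
print(f"annual saving ${annual_saving:,.0f}; payback {payback_years:.2f} years")
```

If the payback period is short relative to how long the improved process will run, the design pays for itself.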
If you refuse to run even a simple main effects design (you should note that one does not need to interrupt the process in order to do this) then you are left with the happenstance data you gathered and tested using the tools you mentioned in your initial post.
In this case you could do the following: take the block of data you gathered and check the matrix of critical factors you have identified (the X's) for acceptable degrees of independence from one another. The usual way to do this is to compute eigenvalues and their associated condition indices and run a backward elimination on the X's, dropping the X with the highest condition index and then rerunning the analysis on the reduced X matrix. You would continue this process until you have a group of X's that are "sufficiently" independent of one another within that block of data. An often-used criterion for sufficiency is that all remaining X variables have a condition index < 10. If you are interested, the book Regression Diagnostics by Belsley, Kuh, and Welsch has the details.
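That pruning loop can be sketched in a few lines. This is a simplified stand-in for the full Belsley-Kuh-Welsch variance-decomposition diagnostics: columns are scaled to unit length, and the drop rule removes the column loading most on the weakest singular direction rather than using variance proportions.

```python
import numpy as np

def condition_indices(X):
    """Condition indices of a design matrix with columns scaled to unit length.

    Index k is the largest singular value divided by singular value k.
    """
    Xs = X / np.linalg.norm(X, axis=0)        # scale columns to unit length
    sv = np.linalg.svd(Xs, compute_uv=False)  # singular values, descending
    return sv.max() / sv

def prune_collinear(X, names, threshold=10.0):
    """Drop columns until every condition index is below the threshold."""
    names = list(names)
    while X.shape[1] > 1:
        if condition_indices(X).max() < threshold:
            break
        # column with the largest loading on the weakest singular direction
        _, _, Vt = np.linalg.svd(X / np.linalg.norm(X, axis=0))
        worst = int(np.argmax(np.abs(Vt[-1])))
        X = np.delete(X, worst, axis=1)
        del names[worst]
    return X, names
```

With a nearly collinear column present, the loop drops it and the surviving columns all sit below the condition-index cutoff.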
What this buys you is the following: you will know which X’s WITHIN the block of data you are using are sufficiently independent of one another. What you WON’T KNOW and can never know is the confounding of these variables with any process variables not included in the block of data and the confounding of these variables with unknown variables that might have been changing when the data you are using was gathered. You will also need to remember your reduced list of critical factors will likely fail to include variables you know to be important to your process. This failure will probably be due to the fact that variables known to be important to a process are being controlled which means the variability they could contribute to the block of data you have gathered is not significant – in short you will have a case of causation with no correlation.
Keeping in mind these caveats, you can take your existing block of data and build a multivariable model using those critical factors you have identified as being sufficiently independent of one another. Run a backward elimination regression on the outcome measures using this subgroup of factors to generate your reduced model. Test the reduced model in the usual fashion (residual analysis, goodness of fit – and make sure you do this by examining residual plots – Regression Analysis by Example by Chatterjee and Price has the details). Take your reduced model, apply it to your process and see what you get. Before you attempt to run the process using your equation, you will want to make sure the signs of the coefficients make physical sense and you will need to make sure everyone understands the model is based on happenstance data and failure of the model to actually identify better practices is to be expected.
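A bare-bones version of that backward elimination might look like the following. This sketch uses plain least squares with an intercept and an absolute t-statistic cutoff in place of exact p-values; the threshold and variable names are illustrative, not prescriptive.

```python
import numpy as np

def ols(X, y):
    """OLS fit with an intercept; returns coefficients and t-statistics."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    dof = len(y) - Xd.shape[1]
    s2 = resid @ resid / dof                      # residual variance
    cov = s2 * np.linalg.inv(Xd.T @ Xd)           # coefficient covariance
    return beta, beta / np.sqrt(np.diag(cov))

def backward_eliminate(X, names, y, t_crit=2.0):
    """Drop the weakest predictor (smallest |t|) until all |t| >= t_crit.

    A |t| cutoff of roughly 2 corresponds to p < 0.05; swap in exact
    p-values (e.g. from scipy.stats) if finer control is needed.
    """
    names = list(names)
    while names:
        _, t = ols(X, y)
        t_x = np.abs(t[1:])                       # skip the intercept
        weakest = int(np.argmin(t_x))
        if t_x[weakest] >= t_crit:
            break
        X = np.delete(X, weakest, axis=1)
        del names[weakest]
    return X, names
```

The reduced model is whatever survives; the residual and sign checks described above still have to be done by eye afterwards.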
December 17, 2019 at 2:43 pm #244541
Chris Seider (Participant, @cseider)
I can only imagine your approach COULD work IF…
1. The model is validated. No model is 100% accurate, so the predicted direction had better be correct, and the magnitude of error must not be large relative to the predicted changes in results. Validation implies you apply the factors as-is and get "close" to actual results. Don't ask me online (LOL) what "close" is, but I've used and seen cases where a choice of equipment arrangement was validated with real-world results before the model was applied.
2. You can trial ONE of your experimental runs on the site/process/etc., or you'll have a tough time seeing whether the model correctly "predicts" outside the present conditions.
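The first condition can be encoded as a small sanity check comparing predicted changes against observed ones. The function name and the 25% relative-error tolerance here are illustrative choices, not anything from the thread:

```python
def validate_model(predicted, observed, max_rel_error=0.25):
    """Check model-predicted changes against real-world observed changes.

    Direction must agree, and the error magnitude must be small relative
    to the predicted change. The tolerance is an illustrative default.
    """
    checks = []
    for p, o in zip(predicted, observed):
        same_direction = (p > 0) == (o > 0)
        rel_error = abs(p - o) / abs(p) if p != 0 else float("inf")
        checks.append(same_direction and rel_error <= max_rel_error)
    return all(checks)
```

Only a model that passes a check like this on at least one real trial run should be trusted to rank the DOE factors.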
December 18, 2019 at 12:42 am #244555
Strayer (Participant, @Straydog)
Are you being led down the garden path, thereby missing the real problem? There might be a hidden root cause. For instance, we once had a problem with billing errors. After lengthy, expensive statistical analysis and a DOE failed to find improvement, someone discovered that we'd missed the root cause. Variations from list pricing and standard discounts had to meet complex rules and be signed off by a corporate officer. But the salesman, customer, and official approver could reach agreement before the signature approval got into the system, because the paperwork took quite some time. As a result, shipment and billing could occur before the system knew about the agreed price. Once we realized the hidden root cause, it was a pretty easy fix. Most of our improvement effort before then had been wasted. Since then I've advocated brainstorming to look for the hidden X. In your case I'd start by asking whether something prevents your inventory records from being real time. Do the records show what's actually in inventory at any given point in time? If not, why not?