iSixSigma

Multi-Vari vs DOE vs Multiple Regression


This topic contains 44 replies, has 15 voices, and was last updated by  CIC 11 years, 2 months ago.

Viewing 45 posts - 1 through 45 (of 45 total)
  • #40594

    AT
    Participant

    All of MV (multi-vari), DOE and multiple regression can help identify the critical Xs, while DOE and MR can also help determine settings (optimization). So is there any logical order that is generally followed?
    For example, if I am trying to identify the major Xs for yield in a chemical process where temp, pressure and time have been identified as sources of variation, should I start with MV, confirm with MR and optimize with DOE? What's the ideal path in this case, and what's the logic behind selecting it?

    0
    #126386

    Michael Schlueter
    Participant

    Hello Ramesh,
    Please let me comment from an engineering perspective. In this situation I want to design the chemical process as simple, as predictable and as stable as possible.
    Unfortunately the characteristic chosen (yield) will not help me for this purpose. For example, when my average yield is 60 %, temperature can raise it by 30 %, and pressure by 25 %, I cannot expect (as a DOE approach would suggest) the yield to rise to 60 + 30 + 25 = 105 %.
    Yield lacks additivity most of the time. I suggest avoiding yield and instead measuring a quantity that is more closely related to your concrete reaction product and whose components add up independently (at least to a good approximation).
    With a more appropriate characteristic I do not expect many differences between the 3 approaches you mention. The differences become even smaller as my chemical reaction approaches the ideal chemical reaction (which hasn't been defined yet; the ideal chemical reaction would provide just your desired reaction product, cause [almost] no harm, and cost [almost] nothing). E.g. imagine the successful situation where variation caused by changes in temperature, pressure and time hardly changes your result any more.
    However, if time-to-market and cost are critical, and I need a low customer (or operator) complaint rate later, I would opt for Taguchi's Robust Parameter Design. It allows me to make the chemical reaction less sensitive to (even strong) changes in temperature, pressure and time.
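    To make the additivity point concrete, here is a minimal numeric sketch. It assumes (my assumption, not Michael's) that the factor effects combine roughly additively on the log-odds (logit) scale of yield rather than on the raw percentage scale; the 30 % and 25 % single-factor improvements from the post are reused.

```python
# Hypothetical illustration: yield percentages do not add, but effects often
# combine more nearly additively on a transformed scale such as log-odds (logit).
import math

def logit(p):        # p is a fraction, 0 < p < 1
    return math.log(p / (1 - p))

def inv_logit(z):
    return 1 / (1 + math.exp(-z))

base = 0.60                                 # 60 % baseline yield
effect_temp = logit(0.90) - logit(base)     # temperature alone takes yield to 90 % (a 30-point gain)
effect_pres = logit(0.85) - logit(base)     # pressure alone takes yield to 85 % (a 25-point gain)

# Adding the effects on the logit scale and transforming back stays below 100 %:
combined = inv_logit(logit(base) + effect_temp + effect_pres)
print(f"combined yield ≈ {combined:.1%}")   # ~97 %, not 105 %
```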
    Hope this helps. +++ Michael Schlueter
     
    You can cont(r)act me by replying to this post.

    0
    #126391

    AVY
    Participant

    Hi Ramesh
    You have asked a very good question, and many BBs have it in mind when they start their first Six Sigma project.
    In my opinion, the first question you need to ask yourself is "What are the factors affecting my Big Y?". You may use correlation analysis to filter the factors and check whether there is a linear relation between an input factor and the output. Many times there is interaction between the factors, and the significance of a factor is only highlighted when 2 or more factors come together. For this you can use a multi-vari chart, which will help you understand the interaction between the factors and their effect on the output.
    After you identify that these 2 or more factors are significant, the next step is to use regression analysis to find out how significant this interaction is. This will help you understand what percentage of the variation in the Y can be tackled by addressing these 2 or more factors.
    After regression analysis, you will have a confidence level that tells you how much of the variation is explained by the regression model generated from the available data. But how do you know what the optimum levels of these factors are that will give you the desired output? For this you may use DOE to optimize your X's.
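    As a rough illustration of the regression-with-interaction step described above, here is a minimal sketch using pandas and statsmodels. The file name and the column names (temp, pressure, time, y) are assumptions for the example, not anything from this thread.

```python
# Minimal sketch (assumed column names and data file): fit a multiple regression
# with an interaction term and see how much of the variation in Y it explains.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("process_data.csv")       # hypothetical historical data

model = smf.ols("y ~ temp * pressure + time", data=df).fit()
print(model.summary())                     # p-values show which terms, incl. temp:pressure, matter
print(f"R-squared: {model.rsquared:.2f}")  # share of the variation in y explained by the model
```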
    The best way to understand the usage of these tools is to identify what kind of analysis is needed at each particular point in the project.
    Hope this helps. Anyway, all the best!
    AVY

    0
    #126440

    AT
    Participant

    Thanks!
    You have suggested that DOE be used for the optimization step. But some people advise that it be utilized while identifying the key x's. Why so, when the key x's can anyway be identified by MV or regression? In fact, even the Y can be maximized/minimized once we know the regression equation and have controllable ranges for the key x's. What do you think?

    0
    #126456

    Robert Butler
    Participant

      MV, DOE, and multiple regression are different facets of the process of analysis. They are used as needed in an analysis and typically they will be used alone or in combination, repeatedly during the course of an analytical effort. If you attempt to reduce the process of analysis to some kind of linear progression from MV to regression to DOE you will do yourself and your team a great disservice.
     
      Typically, the data you will have to work with at the beginning of an effort will be happenstance data.  The data will be historical in nature and can be drawn from a number of sources – spot checks, control charts, summary reports, etc.  You can subject such data to MV and you can run regression on the data but you need to remember that the “critical variables” you find with such data may not be all that critical and you will probably also find that many variables you know to be critical to the process will not appear as such when you analyze this kind of data. 
     
      The failure of known critical variables to show up in data of this kind is due to the fact that if they are known then you will probably have controls in place to limit their range of values.  If these controls have been successful you will have limited their ranges to such an extent that they are not having any undue effect on the process – this is great for the process (process control) but it is terrible if you are going to try to express their impact on the process by analyzing data where they have not been allowed to vary.
     
    The second big drawback of analyzing happenstance data is the fact that you have little or no information about other things that changed in the process at the same time that your variables of interest were changing.  This means that a change due to some unknown could easily manifest itself as a change due to one of your variables.
     
     The third drawback to happenstance data is the problem of confounding.  It is unlikely that happenstance data will provide you with data that will have all of your variables of interest clear of one another.  The only way to check this would be to run diagnostics on your X matrix – linear regression will not identify these problems.
     
    A good approach (not an ideal path) to the data would be to take advantage of your happenstance data by doing a lot of graphing.  Good graphs will help you see holes in your data matrix and can give you a sense of what is confounded with what.  If you wish, you can run a regression, but you need to bear in mind all of the above.  What may result from the effort is a partial list of variables you might want to consider.  If these are coupled to known variables of interest (which may not have shown up as important because of the nature of the data), and if the list is large, you could run a screening design to reduce the list and/or confirm your suspicions of linear relationships.  Regression analysis of such a design should provide you with a lot of guidance with respect to further efforts.
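    A small sketch of the "graph the happenstance data first" advice, assuming pandas, seaborn and matplotlib, with hypothetical file and column names: the pair plot exposes holes and clustering in the X settings, and the simple correlation matrix gives a first, one-on-one hint of confounding.

```python
# Exploratory look at happenstance data before any modeling (assumed file/column names).
import pandas as pd
import seaborn as sns

df = pd.read_csv("historical_data.csv")

# Pairwise scatter plots: look for holes, clusters, and X's that move together.
grid = sns.pairplot(df[["temp", "pressure", "time", "yield_pct"]])
grid.savefig("pairs.png")

# One-on-one correlations among the X's (many-on-many checks need fuller
# diagnostics such as VIFs and condition indices).
print(df[["temp", "pressure", "time"]].corr())
```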

    0
    #126489

    AT
    Participant

    Thanks a lot, Robert. Your observations really clarify the points raised. However, I do not understand what you mean by "running diagnostics on the X matrix". Can you elaborate on that a bit?

    0
    #126491

    gvrk
    Participant

    Well, each of multi-vari and multiple regression has its own advantages, but I propose you go ahead with DOE. Looking at it from the DOE angle: if we have several Y's associated with several X's, it becomes a multivariate analysis. As posed in your question, you have one chemical process output (Y) and temp (x1), pressure (x2) and another parameter (x3). When you arrive at a transfer function/prediction equation, it contains all these X's and one output, e.g. maximizing or minimizing the chemical process response. Hence you can go ahead with multiple regression or a simple DOE. If you have several X's along with several Y's, then you should go ahead with multivariate analysis; as the name implies, multivariate means you have multiple outputs with respect to multiple variables. Hope this clarifies your doubt.

    0
    #126497

    Anonymous

    Gvrk,
    Just a couple of comments ….
    Unless I'm mistaken, multi-vari refers to the study of process 'noise factors'.
    A multivariate process of the form
    [Y] = F([X]), where [Y] and [X] are matrices,
    cannot be studied using a multi-vari chart.
    I should also like to point out that y = f(x) is not a transfer function. It generally describes a physical law.
    A real transfer function is of great interest, because it explains 'robust design':
    Y(m1, s1) = F(m2, s2) X(m3, s3)
    You might find that Taguchi’s approach is consistent with this form when there is no covariance between the Y’s and no collinearity between the x’s.
    But I have no idea how to interpret this in another domain.
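    (A small Monte Carlo sketch of that idea, to make it concrete: the output's mean and standard deviation follow from the means and standard deviations of the inputs through the functional form. The numbers and the assumed relation below are illustrative only, not from this thread.)

```python
# Propagating input means/SDs through an assumed physical relation (illustrative values).
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
temp = rng.normal(180.0, 2.0, n)        # X1 with assumed mean m and SD s
pressure = rng.normal(2.0, 0.05, n)     # X2 with assumed mean m and SD s

# Assumed relation y = f(x1, x2); robust design asks how the input SDs drive the output SD.
y = 0.4 * temp + 12.0 * pressure + 0.02 * temp * pressure

print(f"Y mean ≈ {y.mean():.2f}, Y SD ≈ {y.std(ddof=1):.2f}")
```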
    Cheers,
    Andy

    0
    #126507

    Robert Butler
    Participant

    One of the requirements for developing meaningful regression equations is the need for the X's to be "reasonably orthogonal".  In an ideal situation, such as a DOE where you have somehow managed to make sure that every value corresponding to a low setting (-1) was set at exactly that value every time it was called for (and likewise for the high setting (1)), and none of the experiments failed, all of your X's are independent of one another.  Thus, when a particular X tests as significant, you are assured that the significance is due to that X alone and not to some combination of the other X's you were using for your model building.
      In truth it is rare, even in the case of a design, for the X’s to be perfectly orthogonal because too often it isn’t possible to hit the exact low or high setting every time.  Usually, in a design, the deviations from the ideal setting are too small to matter and one can run the analysis without worrying about confounding of the X’s used in the model.
      With happenstance data there are no such guarantees.  It is very likely that for such data there will be multiple correlations between those variables you have identified as critical.  In order to test for this confounding you will have to run diagnostics on the X matrix – which is to say you will have to check the VIF (variance inflation factors) and run collinearity tests which will generate such things as condition indices which will tell you the nature of the confounding among your X’s. 
    Many packages will give you the VIFs, but I only know of two that will give you condition indices and other tests for collinearity.  You can attempt to look at the situation by running a simple correlation matrix for the X's, but this kind of test only checks one-on-one correlation, not the many-on-many checks that you really need.  However, VIFs and a simple correlation matrix are better than nothing, and they will at least give you some sense of the problems you may have with the X matrix from your happenstance data.
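    These diagnostics are not tied to any particular package; below is a minimal sketch of VIFs plus condition indices using numpy, pandas and statsmodels. The data file, the column names and the rule-of-thumb cutoffs in the comments are assumptions for the example.

```python
# X-matrix diagnostics on happenstance data: VIFs and condition indices (assumed names).
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

X = pd.read_csv("historical_data.csv")[["temp", "pressure", "time"]]

# Scale each X to the -1..+1 range, then add an intercept column.
X_scaled = 2 * (X - X.min()) / (X.max() - X.min()) - 1
X_design = np.column_stack([np.ones(len(X_scaled)), X_scaled.values])

# Variance inflation factors: values much above ~5-10 flag collinearity.
for i, name in enumerate(["intercept"] + list(X.columns)):
    print(name, variance_inflation_factor(X_design, i))

# Condition indices from the singular values of the column-normalized design matrix;
# indices above ~30 suggest serious near-dependencies among the X's.
norm = X_design / np.linalg.norm(X_design, axis=0)
s = np.linalg.svd(norm, compute_uv=False)
print("condition indices:", s[0] / s)
```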

    0
    #126537

    Anonymous

    Robert,
    I just wanted to compliment you on the quality of your posts.
    Best regards,
    Andy

    0
    #126556

    Robert Butler
    Participant

    Thanks for the kind words, Andy.

    0
    #126560

    “Ken”
    Participant

    Robert,
    Have you looked at Essential Regression, available on the Internet for free?  ES is an Excel add-in that works within MS Excel versions up to 2000.  I've used ES to successfully extract relationships from retrospective data using passive regression analysis for over 7 years.
    Over the past few years, I’ve found that interactions predominate in many chemical, pharma, and Bio-pharma processes I’ve supported.  ES allows me to easily select an interaction model using retrospective data, internally transform the output data as required, and perform the regression.  Minitab allows the same, but with greater difficulty.  ES also has an Auto mode that allows the selection of model terms via Stepwise regression.  Within ES you can perform collinearity evaluation and make the necessary model adjustments.  It also allows you to perform model validation, and construct surface plots with any combination of two variables at a time.  ES allows you to make predictions via user defined input settings, and calculate confidence and prediction intervals on the results. 
    ES comes with a manual that does an excellent job of explaining the basics of regression analysis.  Feel free to search the net for "Essential Regression" and pull it down.  By the way, it's a little tricky to install and set up.  If you have any problems, feel free to give me a yell.  I've installed it on hundreds of computer systems over the years, and think I've seen most of the problems one can encounter.
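    (For readers without ES or Minitab: the prediction-with-intervals feature described above has open-tool analogues. Here is a minimal sketch with statsmodels, using hypothetical column names and input settings; this is not the ES package itself.)

```python
# Point prediction plus confidence and prediction intervals from a fitted model
# (assumed data file, column names and new settings).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("process_data.csv")
model = smf.ols("y ~ temp * pressure + time", data=df).fit()

new_point = pd.DataFrame({"temp": [185.0], "pressure": [2.1], "time": [45.0]})
pred = model.get_prediction(new_point)
print(pred.summary_frame(alpha=0.05))   # mean, 95% confidence interval, 95% prediction interval
```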
    Good luck,
    Ken

    0
    #126567

    Anonymous

    Ken,
    I think I found your website .. are you the guy with twenty-five years experience in the pharmaceutical industry?
    Let's see .. thirteen years of experience working in the semiconductor industry, plus …. Wow, you're older than me!!
    PS: We’ll have to debate the phantom shift sometime!
    Cheers,
    Andy

    0
    #126591

    “Ken”
    Participant

    Andy,
    Nope, not my website.  Is it possible to be older than you?
    Ken

    0
    #126593

    Anonymous

    Ken,
    Too bad .. I had hoped you were Ken Myers?
    Cheers,
    Andy

    0
    #126596

    “Ken”
    Participant

    This is me.

    0
    #126600

    Anonymous

    Then this is not you …

    0
    #126604

    “Ken”
    Participant

    Andy,
    You're having too much fun on the Internet today.  So are others, as I can see from the previous posting.  Considering Stan doesn't do any graphics posting, I would suspect Darth or a close friend is enjoying the opportunity to play a bit!  You got the name right, but the experience timing seems a bit off.  I occasionally do work as an affiliate consultant with other companies, and some of these affiliates may tend to stretch the truth a bit to bolster their credentials.  Perhaps you can provide me with the link where you located the latest information on me, and I will clear up the misinformation.
    So, let's set the record straight so that you will be able to sleep well tonight.  The tally goes something like this: 13 years in microelectronics/materials science in the government contracting industry, 6 years in commercial electronics, and 13 years combined medical device, pharma, and bio-pharma industry experience.  I started young in industry and went to school the entire time, working as a Research Assistant, Development Engineer, Process and Quality Engineer, Manufacturing Engineer, and Process Improvement Manager, with side excursions as a Black Belt and Master Black Belt.  I have supported improvement efforts in international operations in 26 countries, and continue to do so today.  And yes, the forum comments on operating windows for controlled processes picked up by iSixSigma in the early days of its existence are mine.  Great sleuthing work on your part!
    OK, you know me.  So, for shxts and grins, why don't you tell us about yourself?  After we get the intros out of the way, maybe we can all get back to business as usual.
    Cheers,
    Ken 

    0
    #126607

    Andy’s Prison Guard
    Participant

    Ken, I am writing this on behalf of inmate Andy U.  He is unable to come to the computer right now since he has been misusing his Internet privileges by tracking down Forum posters so he can stalk them.  I thought I would warn you.  Below is a picture of him being escorted to his required Black Belt training.  We feel it is worthy punishment for his actions.

    0
    #126609

    AB
    Participant

    Andy’s prison guard,
    Have mercy. Sending him for BB training in the middle of the last test match between England and Australia? He is going to run away from jail to see that. Watch out.

    0
    #126610

    Andy’s Guard
    Participant

    You must be clairvoyant.  He tried to sneak out this morning, but the cameras caught him.  What a stupid disguise. He must have been on his way to the game.  As a result, he will be locked in solitary and made to read nothing but Vinny's posts for the last year.  After that, he will be forced to listen to Deming's tapes.  Arggggggg.  Nothing too cruel and unusual about that punishment.

    0
    #126613

    “Ken”
    Participant

    Andy’s Prison Guard(AKA Darth’s brother),
    It’s a sad state of affairs when forum members are stalking forum members…  What’s next?  Forum members passing themselves off as other forum members…  Then we’ll never know who we’re really communicating with… 
    It’s a sad day indeed!  Sorry to hear about Andy–he’s a good soldier.
    One who’s been stalked,
    Ken

    0
    #126614

    “Ken”
    Participant

    Forced to read nothing but Vinny’s posts.  That’s real cruelty!  So sorry for Andy.  He should just wait his time out like all the other stalkers…

    0
    #126625

    Robert Butler
    Participant

    I don't know that particular package, but your description of its use does raise some questions.  You said "(with) ES you can perform collinearity evaluation and make the necessary model adjustments" – the question is when this program does this.  For proper model building you need to check the proposed X matrix before you begin building a model, not after.
     
    I use SAS, and after scaling everything to a -1 to 1 range I run the X matrix through PROC REG with the VIF and COLLIN options, which generate VIF estimates as well as the eigenvalues/condition indices and the variance-proportion matrix.  That matrix is invaluable for assessing the many-to-many collinearities that usually show up in happenstance data and for guiding decisions concerning which terms must be dropped from consideration before starting a stepwise analysis.
     
    You also said, "ES also has an Auto mode that allows the selection of model terms via Stepwise regression."  This gives the impression of an automated approach to the model building.  If it is indeed automated, does it let you look at the results and the summary statistics of each step?  In SAS, once I have identified the terms that the X matrix will support, I run the data through forward selection with replacement and backward elimination.  I run it both ways because with happenstance data there is a good chance that the two methods will give me different models – sometimes radically different models.  While SAS too is automated, the output is such that the results of each step are printed. I manually examine each step of the SAS printout, watching the relationships between Mallows' Cp, the R2 and the MSE.  Many times the stepwise program will keep adding statistically significant terms long after there is any real improvement in MSE or R2.  Typically, in the latter stages, the process will be driven by Mallows' Cp.  What this means is that I may choose to truncate the addition of terms at the point where MSE or R2 or both seem to plateau.  I will run an analysis on the "full" models as well as on the truncated models to see if there is any major difference between the two, and then I will sit down with the investigator and we will talk about the physical significance of the various models from the standpoint of which one makes the most sense.
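    (An open-tool analogue of that step-by-step inspection, not Robert's SAS code: a plain forward selection – simpler than the with-replacement variant he describes – that prints R-squared, MSE and Mallows' Cp at every step so the point where improvement plateaus is visible. The data file, column names and candidate-term list are assumptions.)

```python
# Forward selection that reports R2, MSE and Mallows' Cp at each step (assumed names).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("historical_data.csv")
candidates = ["temp", "pressure", "time", "temp:pressure", "temp:time", "pressure:time"]

full = smf.ols("y ~ " + " + ".join(candidates), data=df).fit()
mse_full = full.mse_resid            # full-model MSE, used in the Cp calculation
n = len(df)

selected = []
while candidates:
    # Add the candidate term that most reduces the residual sum of squares.
    fit_with = lambda extra: smf.ols("y ~ " + " + ".join(selected + [extra]), data=df).fit()
    best = min(candidates, key=lambda t: fit_with(t).ssr)
    model = fit_with(best)
    selected.append(best)
    candidates.remove(best)
    p = len(model.params)            # parameters including the intercept
    cp = model.ssr / mse_full - n + 2 * p
    print(f"+ {best:16s}  R2={model.rsquared:.3f}  MSE={model.mse_resid:.4f}  Cp={cp:.2f}")
```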
     
      For the last couple of years I’ve been working in biostatistics and I’ve done a lot of work with retrospective data.  The biggest problem I’ve found with the medical data is the fact of constantly changing medical practice/record keeping procedures.  Often practice changes make it difficult to pool retrospective findings and even when “nothing has changed” many times the variables of interest have. Either the definition of the variable has changed, or the way the variable was measured/obtained has changed, or the variable of interest has only recently been recorded in a consistent manner.  If the variable does exist and if the definition hasn’t changed over time and if the earlier records aren’t too spotty one can try some data repair methods like multiple imputation before attempting model building but one has to be very careful with these methods. 
     
      I don’t mean to give the impression that retrospective studies can’t be done – they are done – all the time and I have already completed several.  It is just that I find I have to be even more vigilant with respect to investigating data compatibility with retrospective (happenstance) medical data than I had to with happenstance industrial data.
     
    As for the package, it sounds like it is worth knowing about. Thanks for the information concerning its availability.

    0
    #126627

    “Ken”
    Participant

    Robert,
    For many on this forum the use of SAS is both cost-prohibitive and beyond their needs and abilities.  While familiar with SAS, I don't use it in my work.  It's almost like hunting butterflies with an elephant gun, if you know what I mean.  The collinearity evaluation in ES is done much the same way it's done in other software packages, via the VIF estimates.  Model adjustments are made after computing the regression estimates and the interim regression model, by manually removing model terms, much the same way it can be done in Minitab and other packages.  One simply reruns the regression function to observe the new estimates and model terms.
    Stepwise regression is not a new method.  In fact, it is a combination of forward and backward regression in an iterative sweep, using significance levels for both movements.  Again, this procedure is available in Minitab and most stat packages out there.  Like the other packages, ES allows you to adjust the significance level for including a model term for both forward and backward regression.  In the exploratory analysis of empirical data I typically start with both alphas set at around 15-20%.  Doing this provides reasonable power for detecting differences in effects.
    Got to go for now!
    Cheers,
    Ken

    0
    #126634

    Anonymous

    Ken,
    I’m sure someone will find your background of interest … For myself, my interest was the post and I didn’t want to ask you directly, so I chose an indirect method that backfired …
    Coming back to the point of interest, and it's still not clear whether you are the author of the post I mentioned. If you are, how did you calculate a shift of 1.5 sigma based on a subgroup of n = 4?
    Some time ago Gabriel sent me a spreadsheet to calculate X-bar and R incorporating a random shift, and the largest shift we observed was about 0.6 sigma.
    Finally, I can tell you I’m only in contact with two other persons in this forum … Peppe and John H.
    Cheers,
    Andy

    0
    #126635

    Anonymous

    “Throw it in the trash” …

    0
    #126636

    Anonymous

    I believe in the due process of law. If laws can’t protect society; they should be changed!

    0
    #126638

    Racs
    Participant

    Mr. Ken,
    You have said in previous messages that you are not a Black Belt or Master Black Belt, but here you claim to be. Which is correct? Are you one who certifies yourself?

    0
    #126641

    “Ken”
    Participant

    Why do you ask?
     

    0
    #126643

    “Ken”
    Participant

    Andy,
    Glad to hear you got out of jail for stalking forum members.  Hopefully you got three squares a day while you were in.  Boy, I'm certainly glad someone will find my background of interest.
    I sensed you had an agenda, and suspected it was my comments on this forum many years back.  So, now you would like to revisit those comments?  We both know discussions of longer-term mean shifts of a stable process are available throughout this forum, as well as on the net.  Some of the forum discussions are even meaningful.  But most end up dying very ugly deaths.  I don't think I want to take part in another.  I would be glad to provide you some references on the subject, if interested.
    Have a good weekend!
    Cheers,
    Ken

    0
    #126645

    Darth
    Participant

    Ken, smart move on passing up the 1.5 shift debate, although we haven't had a great thread on that in a while.  Stan has just been waiting to challenge someone else to a Great Debate to settle this matter once and for all.  Once I read Harry's book where he derives the 1.46 shift, I had a new appreciation for it and for how much of the industry has misapplied his original and very narrow intent.

    0
    #126646

    “Ken”
    Participant

    Darth,
    Yes, I think we'll save a great debate for a subject that's worthy. A debate on longer-term mean shift is not that interesting these days. Have a good one!
    Ken

    0
    #126657

    Anonymous

    Yes, I would appreciate some references .. Thank you!
    Regards,
    Andy
     

    0
    #126669

    “Ken”
    Participant

    Andy,
    While by no means exhaustive, the following are a few references at my fingertips:
    Measuring Process Capability, Davis R. Bothe, ISBN-0-07-006652-3, 1997, McGraw-Hill, Chapter 14.
    Tolerance Design, C. M. Creveling, ISBN-0-201-63473-2, 1997, Addison-Wesley Publishing Co., Chapter 15.
    Bender, Art; Statistical Tolerancing as it Relates to Quality Control and the Designer, SAE Paper No. 680490, Society of Automotive Engineers, Southfield, MI, May 1968.
    Bender, Art; A Simple Practical Probability Method of Handling Tolerances for Limit Stack-Ups, Graphic Science, December 1962, pp. 17-21.
    Evans, David H.; Statistical Tolerancing Formulation, Journal of Quality Technology, Vol. 2, No. 4, October 1970, pp. 188-195.
    Evans, D. H.; Statistical Tolerancing: The State of the Art, Part I, Background, Journal of Quality Technology, Vol. 6, No. 4, October 1974.
    Evans, D. H.; Statistical Tolerancing: The State of the Art, Part II: Methods for Estimating Moments, Journal of Quality Technology, Vol. 7, No. 1, January 1975a, pp. 1-12.
    Six Sigma Producibility Analysis and Process Characterization, M. J. Harry and J. R. Lawson, ISBN-1-56946-051-5, 1998, Motorola University Press, Section 5.
    Process Tolerancing: A Solution to the Dilemma of Worst-Case versus Statistical Tolerancing, W. A. Taylor, ASQ Fall Technical Conference Presentation, 1995, presentation located at: http://www.variation.com/techlib/ta-2.html.
    Fan, John Y., Achieving 6-sigma in Design, 44th ASQC Annual Quality Congress Transactions, San Francisco, CA, May 1990, pp. 851-856.
    Indices of Capability: Classical and Six Sigma Tools; Koons, 1992, Addison-Wesley Publishing Co., pp. 22-23.
    Implementing Six Sigma; F. W. Breyfogle III, 1999, John Wiley & Sons, pp. 186-214.
    Hope this is useful in your research.
    Cheers,
    Ken
     

    0
    #126672

    Anonymous

    Ken,
    Thank you for the list of references … Would you be kind enough to point out which one claims that an X-bar and R chart with a subgroup size of 4 would lead to a 1.5 sigma shift?
    Cheers,
    Andy

    0
    #126677

    “Ken”
    Participant

    Andy,
    I don't remember which article or reference provides a specific discussion of controlling a process average in terms of a 1.5 sigma shift.  The terminology used in most of these references refers to control of the average within an operating window, or possibly a tolerance range, given a certain sample size.  Some articles refer to the degree of control by evaluating the Average Run Length (ARL).  I was just looking at the text from Acheson Duncan, which is a bit dated, who suggests a selection of 4 to 5 samples per subgroup as being the most reasonable when using range statistics.  Wheeler refers to range-based statistics supporting average charts in his paper "Range Based Analysis of Means", and suggests that with a subgroup size of 4 to 5 one can control the alpha level for the average chart to around 2%.  There are many older sources, including Deming, who suggest that the best overall economic subgroup size for variables control charts is about 4 to 5.
    Given a subgroup size of 4 or 5, it's not too challenging to estimate the degree of process control afforded the average using a typical Shewhart chart with 3-SE control limits.  The operating window of control for the average works out to around +/-1.33 to 1.5 SD (short-term).  Such control provides about a 70% chance of detecting a value outside the control limits for the average given an actual process change.  Perhaps Dr. Taylor's online reference uses the same standard degree of control, but I'm not sure.  To be honest, it's been a while since I've looked at Wayne's work.  When I worked for him he suggested using +/-1.5 SD (short-term) for the process operating window across the corporation when longer-term process data was not available.
    This is not to say I endorse such a thing on this site, because I know the difficulties most have with anything to do with these numbers.  However, if one assumes a process following a normal distribution with SD = (USL-LSL)/12, and an average within 4.5 SD of the nearest specification, you would find it produces a typical defect level of 3.4 dpm.  Such a process will have an operating window for the average of +/-1.5 SD about the target.  Hope this helps.
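    (A quick check of the arithmetic in that last paragraph, using scipy; the spec limits below are arbitrary placeholders, since only the tolerance width relative to the SD matters.)

```python
# Verify: SD = (USL-LSL)/12 and a mean 4.5 SD from the nearest spec give about 3.4 dpm.
from scipy.stats import norm

lsl, usl = 0.0, 12.0
sd = (usl - lsl) / 12.0             # SD is 1/12 of the tolerance, as in the post
target = (usl + lsl) / 2.0          # a centered mean sits 6 SD from each limit

# With subgroups of n = 4, 3-sigma X-bar limits sit at 3/sqrt(4) = 1.5 SD from the target.
mean = target + 1.5 * sd            # a mean at that limit is 4.5 SD from the upper spec

dpm = (norm.cdf(lsl, mean, sd) + norm.sf(usl, mean, sd)) * 1e6
print(f"defects per million ≈ {dpm:.1f}")   # ≈ 3.4
```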
    Ken

    0
    #126679

    Anonymous

    Thanks, Ken .. you've been most helpful. It'll take some time to take it all in … after which I hope we'll be able to share our thinking with you …
    Cheers,
    Andy

    0
    #126684

    Darth
    Participant

    “we’ll be able to share our thinking with you”
    Andy, “our thinking”?  You aren’t about to reveal that, like Stan, you are really a composite screen name made up of multiple people?  Or are you really British royalty?  Or schizoid?

    0
    #126690

    Markert
    Participant

    Here is an image of Andy’s group as they finish the final few minutes of the great cricket match and get ready for some serious collaborative work on the isixsigma web site.
    Rumor is that several of them contribute for peanuts, unlike that highly compensated group Stan Inc.
     

    0
    #126705

    Anonymous

    Darth,
    No … none of those, I think!
    Cheers,
    Andy
     
     

    0
    #126731

    Mikel
    Member

    It's the Bender drivel – not worth reading.

    0
    #126732

    Bender
    Participant

    Bite my shiny metal a**

    0
    #126733

    Mikel
    Member

    Good one, except it must have been your dad who wrote the article.
    It’s actually sad that the whole 1.5 shift thing is based on some empirical research from 1962.

    0
    #171445

    CIC
    Participant

    Hi,
    Has anyone got an idea of how this data should be collected?
    I have several X's with more than 3 levels, e.g. cutting machine availability as an X and 4 different types of cutting machines as levels.
    What is my measure for each level, and how do I relate it to my project Y, which is the number of sets per week? Thanks.
     
     

    0

The forum ‘General’ is closed to new topics and replies.