Bimodal Residuals and Transformation

This topic contains 56 replies, has 5 voices, and was last updated by  Dave Franco 6 months, 3 weeks ago.

  • #504508 Reply

    Hello community members,

    I recently completed a DOE and was trying to analyse the data; I have attached a screenshot of my results. I realised that my initial analysis did not fit a normal distribution, so I transformed the response by taking its square root.

    However, I noticed an S shape in the normal probability plot, and I read on a website to be wary of this, since it indicates bimodal residuals.
    How do bimodal residuals affect the final results? If I transform the data, does that nullify the negative effects of bimodal residuals, if any? Also, the model shows a very high R-sq, which is good I suppose (R-sq(pred) is really bad, meaning I would have to fit a response surface or higher-order model; the current linear model is not useful for prediction, I suppose).

    Any further links regarding bimodal residuals in DOE would be appreciated. Also, if anyone could comment on whether the transformed DOE response data can be used further, that would be great.

    Thank you in advance

    Regards

    Justin

    Attachments:
    #504510 Reply

    Oh, just to clarify the information in the screenshot.

    The first image is the actual data. The second image is the transformed data, after taking the square root. The worksheet corresponds to the transformed data, if that helps.

    Regards

    Justin

    #504511 Reply

    The fact that your data doesn’t fit a normal distribution has no impact on your regression. The issue of data distributions in linear regression has to do with the distribution of the residuals only.

    If you take a look at the histograms of your residuals, they approximate a heavy-tailed bell curve. Your plots of residuals vs. predicted in both instances don’t exhibit any kind of trending. The normal probability plot is interesting, but in light of the other two plots (histogram and residuals vs. predicted) my reaction would be to forget about the square root transform and go with the analysis of the untransformed data.

    What you really want to know is this: does the equation actually describe the reality of your process in sufficient detail to be of value? The only way to check that is to use the equation to predict an optimum and then run the process with the variables set to the levels indicated by your equation. If the results of that run (or those runs, if you want to try more than one setting based on your predictions) fall anywhere inside the 95% CI for individuals around your predicted value, then it would appear your equation has some value as far as your production efforts are concerned.
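
    If it helps to see that confirmation check in code, here is a minimal sketch in Python with statsmodels on synthetic placeholder data – the factor names, settings and numbers below are illustrative assumptions, not Justin’s experiment. The point is the 95% interval for an individual observation (obs_ci_*), which is wider than the CI for the mean:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # synthetic stand-in for a small two-factor coded design
    rng = np.random.default_rng(0)
    df = pd.DataFrame({"x1": rng.choice([-1.0, 1.0], 16),
                       "x2": rng.choice([-1.0, 1.0], 16)})
    df["y"] = 5 + 3 * df.x1 - 2 * df.x2 + rng.normal(0, 1, 16)

    fit = smf.ols("y ~ x1 + x2", data=df).fit()

    # predict at the settings the model says are best
    new = pd.DataFrame({"x1": [1.0], "x2": [-1.0]})
    pi = fit.get_prediction(new).summary_frame(alpha=0.05)
    print(pi[["mean", "obs_ci_lower", "obs_ci_upper"]])
    # a confirmation run landing inside [obs_ci_lower, obs_ci_upper]
    # supports the model in the sense described above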

    #504517 Reply

    Hi @rbutler

    Thanks a lot, as always. Yes, I checked the residuals for normality, but I found that even the residuals of the transformed data do not pass the Anderson-Darling test :(.

    I just wanted to clarify one thing. In the equation provided by the DOE, all the factors are linear. What does the coefficient of the center point represent? What would be the variable for the center point (since the design consists of 4 factors)?

    Regards

    Justin

    #504519 Reply

    Hi @rbutler: I have a quick question on the answer you gave. In the residual charts he provided, the residuals vs. fitted values have a funnel-like pattern (unequal variance, I guess). Are you saying to ignore that because the other plots are not showing anything abnormal?

    @justmattam: Please follow the advice of Robert Butler; he is an expert in this. Is it possible for you to post the session window of the analysis you did with the raw data (not transformed), for comparison with the results from the transformed data? It would be great if you could share the data here!


    #504521 Reply

    Justin, first – I’m not surprised the residuals failed the Anderson-Darling test – that’s the test for heavy tails. The thing to remember is this: normality tests are very sensitive to deviations from whatever feature of a normal distribution they are designed to test, which means you have an excellent chance of failing one or more of these tests when you apply them to residuals.

    You need to remember that it is the t-test that is used to assess term significance in a regression model, and that test is very robust to non-normal data, which means it will still provide correct results even if the data (residuals) aren’t perfectly normal.

    The question you need to answer is this: does that deviation from normality matter? The way you do that is by looking at the histograms and the plots of residuals against predicted values and asking yourself: does the histogram look more or less normal, and do the residuals vs. predicted plots look more or less random? Based on the plots you have posted, I would say the fact that the residuals failed the Anderson-Darling test is, in this instance, of no great concern.

    The center point – it sounds like you ran a factorial with replication on the center point. Given the symmetry of the data points around the zero line in the residuals vs. fit plots, it looks like the linear terms are all you need. You can go ahead and toss a squared term into the model for any one of your variables just to check for the possible existence of curvature, but I would be surprised if that term came up significant.

    The reason you can choose a squared term for any of the variables of interest is that when you run a two-level factorial with just a replicated center point, all the center point tells you (which is actually quite a bit) is whether or not there might be some kind of curvilinear effect due to one or more of the X’s. Because it is just a center point, you won’t be able to associate the curvilinear behavior with a particular X variable. If the curvature is significant, you will need to augment your existing design with additional experiments to identify the variable(s) associated with the curvilinear behavior.
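
    As a rough numeric illustration of what that center-point check amounts to (made-up responses, not Justin’s data): compare the mean of the replicated center points against the mean of the factorial corners, scaled by the pure-error noise from the replicates.

    import numpy as np

    # made-up responses: 8 factorial corner runs, 4 replicated center points
    corners = np.array([10, 14, 8, 18, 11, 15, 9, 19], dtype=float)
    center = np.array([16.0, 15.5, 16.2, 15.8])

    curvature = center.mean() - corners.mean()
    s = center.std(ddof=1)                      # pure error from the replicates
    se = s * np.sqrt(1 / len(center) + 1 / len(corners))
    print(f"curvature = {curvature:.2f}, rough t = {curvature / se:.1f}")
    # a large |t| flags curvature from *some* X, but cannot say which one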

    Sam – I agree, the residuals from the untransformed data do look slightly funnel shaped, and the residual plot for the square root looks like a funnel in reverse. To check this it would be worth running the model on the log of the Y’s, to see if anything changes with respect to the significant terms in the final reduced regression model, or to see if there is an improvement in the precision of the predictions of the final model.

    One final item: if you can get a copy of Fitting Equations to Data by Daniel and Wood through interlibrary loan, I would recommend you look at Appendix 3A, which has normal probability plots for samples of different sizes drawn from perfectly normal data. Should you do this, you will see just how crazy normal probability plots of this kind of data can be. I’d recommend copying those pages for future reference.

    #504523 Reply

    Thank you Robert. I will get a copy of this book.

    #504564 Reply

    Dear @rbutler and @Peach

    Thank you all for your insights; I really appreciate the help. I have attached more information regarding the experiment, for the actual (not transformed) data.

    Here I am detailing the steps and picture details.

    Step 1 (Picture 1): I had 4 factors under consideration and conducted a Resolution IV fractional factorial experiment with 4 center points and 2 replicates (this was for screening purposes).

    Step 2 (Picture 2): This consists of all the graphs that I showed earlier. A small change in the response data had to be incorporated today.

    Step 3 (Picture 3): This contains all the ANOVA data. As Robert Butler rightly hinted, the curvature is significant. However, since the 2-way interactions are aliased with other 2-way interactions, should I be concerned? Can I confirm this only through experiment, or is there a method to check whether the interactions provided are the right ones?

    Step 4 (Picture 4): I took Robert’s advice and wanted to add the squared term, so I performed a response surface analysis on the same experiment in Minitab. I do not know what type of design Minitab applied, but the picture shows the configuration after I defined a custom response surface design.

    Step 5 (Pictures 5 and 6): The regression values for prediction were very good, and @rbutler, I am planning to proceed by using these coefficients for confirmatory tests. What do you think? Would this be a good way to proceed?

    I really appreciate all your help, but I understand now that comprehending the results requires a lot of knowledge and experience. Since this is my first test, I am seeking your support.

    Attachments:
    #504566 Reply

    ANOVA test of the actual results (not transformed)

    Attachments:
    #504568 Reply

    Graphs of the DOE Analysis as done earlier

    Attachments:
    #504570 Reply

    Response surface design of the same data, done in Minitab

    Attachments:
    #504572 Reply

    ANOVA analysis of the response surface design

    Attachments:
    #504574 Reply

    Response surface ANOVA page 2 with coefficients

    Attachments:
    #504577 Reply

    Hi @justmattam: We will wait for the expert’s (Robert Butler’s) answer.
    I checked your data, and as you correctly pointed out, your center point is significant. Center points will only tell you whether there is curvature; they won’t tell you which of your four factors is non-linear. Also, your two-factor interactions are significant and are confounded with others, so we cannot separate them. My conclusion is that you need to run more experiments to include the curvature effect and find the significant interaction terms before you can run the confirmation runs. Again, please wait for Robert Butler; he will surely have more to add, and can correct my conclusion if I got this wrong.

    #504580 Reply

    @justmattam: I just realized that Robert, in his post above, already explained the center point effect I described in my comment.
    @rbutler: I have another quick question. Justin ran a 1/2 fraction replicated twice, for 16 runs excluding the center points. Since this is a 1/2 fraction, the 2-factor interactions are confounded. Wouldn’t it have been better for Justin to run the full factorial in 16 runs, so all two-factor interactions would be free of aliases?

    Sorry, Justin, for using your thread to ask questions…

    #504581 Reply

    It’s totally fine, @Peach. After all, this is a community where we share our experience and knowledge, so please feel free. Yes, I do get the feeling that I should have run a full factorial, but I opted for a screening design because the effort would otherwise have been too high. I think there should be a way out by now conducting a central composite design appended to this fractional factorial design; I remember @rbutler saying something about this in a previous thread. But I would like to hear his comments as well. It’s quite reassuring to hear from an expert.

    Last but not least, thank you to everyone who contributes.

    #504582 Reply

    Your design permits a check of all main effects, 3 specific two-way interactions (each confounded with another specific two-way interaction), and a generic squared term.

    If you scale the variables Pulse Duration, Overlap, Average Power and Frequency so they all run from -1 to 1, then a check of the collinearity diagnostics indicates the two-way interaction confounding is
    Overlap x Average Power = Pulse Duration x Freq
    Overlap x Freq = Pulse Duration x Average Power
    Average Power x Freq = Pulse Duration x Overlap.
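
    If you want to see that confounding fall out of the design matrix itself, here is a small Python/numpy sketch for a 2^(4-1) half fraction with defining relation I = ABCD; the generic factor names A–D stand in for the four machine settings:

    import itertools
    import numpy as np

    # build the 8-run half fraction: A, B, C are free, D = A*B*C (I = ABCD)
    base = np.array(list(itertools.product([-1, 1], repeat=3)))
    A, B, C = base.T
    D = A * B * C

    two_ways = {"AB": A * B, "AC": A * C, "AD": A * D,
                "BC": B * C, "BD": B * D, "CD": C * D}

    # interaction columns that are identical are completely confounded
    for (n1, c1), (n2, c2) in itertools.combinations(two_ways.items(), 2):
        if np.array_equal(c1, c2):
            print(n1, "is aliased with", n2)
    # prints AB/CD, AC/BD, AD/BC -- the same pairing pattern listed above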

    If you ignore the generic squared term and run a stepwise regression on the following variables – Pulse Duration, Overlap, Average Power, Frequency, Overlap x Average Power, Overlap x Freq, and Average Power x Freq – you get the following model for Jumma:

    Jumma = 8.9 + 3.3*NPulse Duration + 8.2*NOverlap - 3.8*NFrequency
    - 3.1*NOverlap x NFreq + 2.8*NAverage Power x NFreq

    An examination of the histogram of the residuals of this model (the top two plots in the graph frame) shows they are very non-normal, and the residual pattern has a definite curvature (note the hand-drawn line). This suggests you should include the generic squared term and rerun the analysis.

    If you include the generic squared term, you get a model where all of the terms are statistically significant (P < .05), a histogram of the residuals that looks reasonably normal, and a plot of residuals vs. predicted that does not exhibit any trends (bottom two plots in the graph frame).

    The model using scaled X’s is

    Jumma = 5.04 + 3.3*NPulse Duration + 8.2*NOverlap - 0.42*NAverage Power - 3.8*NFrequency
    - 0.53*NOverlap x NAverage Power - 3.1*NOverlap x NFreq + 2.8*NAverage Power x NFreq + 4.9*NFreq*NFreq

    Since all of the terms are scaled from -1 to 1, you can look at their coefficients and compare their effects directly. Thus the big hitters in your model are Overlap and the generic squared term, with Pulse Duration, Frequency, and the interactions involving frequency running a distant second. The significance of the squared term suggests you have one or more variables which produce a curvilinear response. This means you should augment your existing design with design points that permit an examination of the squared terms of the four variables in your design.
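
    For reference, the -1 to 1 scaling used above is just centering on the midpoint and dividing by the half-range. A minimal sketch (the low/high settings here are placeholders, not the actual machine settings):

    import pandas as pd

    def to_coded(x, low, high):
        """Map [low, high] linearly onto [-1, 1]."""
        return (x - (low + high) / 2.0) / ((high - low) / 2.0)

    # assumed illustrative ranges -- substitute the real low/high settings
    ranges = {"Overlap": (20.0, 80.0), "Freq": (100.0, 500.0)}
    df = pd.DataFrame({"Overlap": [20.0, 50.0, 80.0],
                       "Freq": [100.0, 300.0, 500.0]})
    for col, (lo, hi) in ranges.items():
        df["N" + col] = to_coded(df[col], lo, hi)
    print(df)   # NOverlap and NFreq come out as -1, 0, 1

    With every column on the same -1 to 1 scale, the coefficient magnitudes become directly comparable, which is what makes the “big hitter” reading legitimate.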

    #504583 Reply

    Ok…. the graph file is too large – let’s see if this reduced size makes it.

    Attachments:
    #504592 Reply

    Thank you so much Robert for the details.

    #504594 Reply

    @justmattam @rbutler
    Justin: Regarding your comment on fractional vs. full factorial design – the effort in your case would have been the same (with no replicate of the full factorial). Either way you run 16 experiments plus center points. I was asking Robert whether it was a good idea to run the full factorial in this particular case.

    I think in the future, if you have the “resources” to run a 1/2 fraction with a replicate, you should run the full factorial without replication and include a few center points. The center points give the design extra degrees of freedom to estimate the error term better, and also give you the curvature check.

    Now, what are your next steps? I think central composite designs (CCD) are an option, if you can move your -1 and +1 settings further out. With a CCD you run experiments at star points, add them to your existing design, and then analyze the complete design together. This lets you separate the curvature terms. I do not know what a CCD can add to a 2^4-1 design; there is a standard CCD for the 2^4 design. Robert or someone else can help here.
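
    To make the star-point idea concrete, here is a sketch (Python/numpy, coded units) of the runs a CCD is built from: the 2^k cube, 2k axial points at ±alpha, and some center points. Setting alpha = 1 gives the face-centered variant discussed later in this thread:

    import itertools
    import numpy as np

    def ccd_points(k, alpha=1.0, n_center=4):
        """Coded runs of a central composite design: cube + axial + center."""
        cube = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
        axial = np.vstack([sign * alpha * np.eye(k)[i]
                           for i in range(k) for sign in (-1.0, 1.0)])
        center = np.zeros((n_center, k))
        return np.vstack([cube, axial, center])

    design = ccd_points(k=4, alpha=1.0, n_center=4)
    print(design.shape)   # (28, 4): 16 cube + 8 axial + 4 center runs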

    #504595 Reply

    If the design is something that was run recently, and if you want to check the curvature to identify the variables responsible for its significance, as well as separate Overlap x Freq from Pulse Duration x Average Power, then you might want to consider augmenting the design.

    For this you would need to run something like a D-optimal design, where you force in the existing experiments and then ask the machine to identify the few additional runs needed. You would also want to include one or two of the original design points in the augmentation, to make sure that nothing of importance changed between the time you ran the initial design and the time you ran the augmentations. The reason for suggesting this approach is that, in the full model, other than the main effects, the big hitters are the generic squared term and the interaction mentioned above.

    Of course, if you can afford to run more augmented points, you could ask the D-optimal package to identify the experiments needed to separate all of the two-way interactions in addition to the curvature. This approach would be cheaper than starting from scratch with another design such as a composite; however, if you or your management are not comfortable with the idea of an augmented design, then a new composite design might be preferable.
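
    To show only the principle behind that augmentation (Minitab’s Select Optimal Design and the Fedorov/DETMAX exchange algorithms are far more refined than this), here is a toy greedy sketch: force in the existing runs, then repeatedly add the candidate run that most increases det(X'X) for the model you want to estimate.

    import itertools
    import numpy as np

    def model_matrix(runs):
        """Intercept, main effects, all two-way interactions, all squared terms."""
        k = runs.shape[1]
        cols = [np.ones(len(runs))]
        cols += [runs[:, i] for i in range(k)]
        cols += [runs[:, i] * runs[:, j]
                 for i, j in itertools.combinations(range(k), 2)]
        cols += [runs[:, i] ** 2 for i in range(k)]
        return np.column_stack(cols)

    def d_score(runs, ridge=1e-8):
        X = model_matrix(runs)
        # a tiny ridge keeps the log-determinant defined while X'X is singular
        sign, logdet = np.linalg.slogdet(X.T @ X + ridge * np.eye(X.shape[1]))
        return logdet

    def augment(existing, candidates, n_add):
        design = existing.copy()
        for _ in range(n_add):
            best = max(candidates, key=lambda c: d_score(np.vstack([design, c])))
            design = np.vstack([design, best])
        return design

    # existing 2^(4-1) half fraction (D = ABC); candidates on a 3-level grid
    cube = np.array(list(itertools.product([-1.0, 1.0], repeat=3)))
    existing = np.column_stack([cube, cube.prod(axis=1)])
    candidates = np.array(list(itertools.product([-1.0, 0.0, 1.0], repeat=4)))
    print(augment(existing, candidates, n_add=8).shape)   # (16, 4)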

    #504598 Reply

    Dear @rbutler

    It took me some time to absorb all the details, but I wanted to recapitulate what you said and would like your feedback on it.

    Option 1: We have significant interactions, which are unfortunately aliased. For de-aliasing I would have to perform another 16 experiments (a full factorial design with 2 replicates and 4 center points). Since I have already performed 20 experiments with 4 center points, I would have to perform the extra 16 just to de-alias the interaction effects.

    Option 2: To find the squared factor, is it sufficient to augment the existing design into a face-centered design with 5 center points? I am suggesting face-centered because these are the simplest to employ and require only 3 levels for each variable.

    Finally, my dilemma is the following, and this is where I require your expertise:

    – I can directly conduct the response surface design and determine the squared terms. However, the risk I am taking is that my interactions were aliased in the first case. Would this prove detrimental in my final confirmatory experiments?

    OR

    – I can conduct the full factorial and de-alias the interactions, and subsequently conduct the same face-centered experiment. However, the effort here is rather high; is it worth it?

    Thank you all

    Regards

    Justin Mattam

    #504599 Reply

    Oops, just saw your new post a little too late, @rbutler.

    #504603 Reply

    Dear @rbutler

    I just wanted to ask a confirmatory question. If I proceed with the face-centered response surface design, I read that the coefficients of the squared terms might not be estimated accurately.

    But can I say with sufficient confidence that the method can at least identify the variable carrying the squared term correctly?

    I am planning to conduct the full factorial experiment and then augment it with axial points. Unfortunately, I do not have access to a D-optimal package.

    Regards

    Justin

    #504606 Reply

    If what you mean by “accurately predicting” is that the variability of the estimates of the various term coefficients will not be the same for a face-centered composite, then yes, that is true, but that is as far as it goes.

    If you want to have equal variability for the coefficients then you would want to use a rotatable composite design.

    It’s a shame you don’t have access to a program like Minitab, JMP or Statistica, since all of them have DOE packages which include D-optimal designs.

    #504607 Reply

    RobertB: Can you share a good book that explains D-optimal designs, please?

    #504608 Reply

    Sure – Understanding Industrial Designed Experiments by Schmidt and Launsby.

    #504609 Reply

    @justmattam

    Trying to get to your original post… I looked at the attachment associated with the post “Graphs of the DOE Analysis as done earlier”…

    Just a few more points to add to those of my great colleague @rbutler.
    1. Consider a practice that’s not well “documented” out there: I’ve seen experimenters get rid of marginally important factors by not using an initial p-value of 0.15 when throwing out factors. Sometimes a factor you throw out would have turned significant once more information is added to the error term. So do the analysis with a 0.15 cutoff first, then turn right around and apply a 0.05 p-value for the final removal of statistically insignificant terms.
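
    A hedged sketch of that two-stage pruning, using statsmodels OLS on synthetic data (only the 0.15-then-0.05 cutoffs are taken from the advice above; everything else is illustrative):

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    X = pd.DataFrame(rng.choice([-1.0, 1.0], size=(16, 4)),
                     columns=["A", "B", "C", "D"])
    y = 5 + 3 * X["A"] + 0.6 * X["B"] + rng.normal(0, 1, 16)

    def backward_eliminate(X, y, alpha):
        """Drop the worst-p-value term until everything clears alpha."""
        X = sm.add_constant(X)
        while True:
            fit = sm.OLS(y, X).fit()
            pvals = fit.pvalues.drop("const")
            if pvals.empty or pvals.max() <= alpha:
                return fit
            X = X.drop(columns=[pvals.idxmax()])

    stage1 = backward_eliminate(X, y, alpha=0.15)        # lenient first pass
    keep = [c for c in stage1.params.index if c != "const"]
    stage2 = backward_eliminate(X[keep], y, alpha=0.05)  # final pruning
    print(stage2.params)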

    Your residuals in the attachment referenced above look just fine to me. I use the “fat pencil” rule for the normality check and don’t fret about the p-value of the Anderson-Darling test on the residuals. It’s not an exact test, but as long as the residuals don’t form a tail or an odd shape around the expected normal line in the graph, and could be covered by a fat pencil laid on the screen, I move on. I next look at the versus-fits plot for relatively equal variation across the fits; I don’t like to see more than a 2:1 ratio in the range of the residuals between one portion of the fits and another – you want your model to predict well across the whole range of fits, not just part of it. Then a quick glance at the histogram for a relative bell shape, which is the same as the fat pencil test above; close enough is fine. Lastly, the versus-order graph is very important: you don’t want to see a pattern in the residuals over the time of the experiment. You want to see (hard when there aren’t many points) an “in-control” control chart if you visualize control limits – no upward or downward trend, S shape, etc.
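
    Those four checks are essentially Minitab’s four-in-one residual plot. A matplotlib sketch with placeholder residuals, for anyone who wants to reproduce the layout outside Minitab:

    import matplotlib.pyplot as plt
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    fitted = rng.uniform(5, 25, 31)   # stand-ins for your model's fits
    resid = rng.normal(0, 1, 31)      # stand-ins for your residuals

    fig, ax = plt.subplots(2, 2, figsize=(8, 6))
    stats.probplot(resid, plot=ax[0, 0])         # the fat-pencil check
    ax[0, 0].set_title("Normal probability plot")
    ax[0, 1].scatter(fitted, resid)              # look for funnels/curvature
    ax[0, 1].axhline(0, color="gray")
    ax[0, 1].set_title("Versus fits")
    ax[1, 0].hist(resid, bins=8)                 # rough bell shape is enough
    ax[1, 0].set_title("Histogram")
    ax[1, 1].plot(resid, marker="o")             # no trends or S shapes in time
    ax[1, 1].axhline(0, color="gray")
    ax[1, 1].set_title("Versus order")
    plt.tight_layout()
    plt.show()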

    Hope this helps.

    #504610 Reply

    I forgot to say something… I saw something about squared terms. They aren’t squared terms, if I understand your DOE correctly; they are 2-way interactions. You get squared terms when you do response surface work, etc., but RSM isn’t necessary for many activities – depending on the business case, of course.

    #504623 Reply

    I am indebted to you all for your inputs; I don’t know how to thank you enough. @rbutler, I have Minitab 16, but I did not know how to access the D-optimal design. I will try to find a YouTube link on that. But thank you for the information.

    Wish me luck :D – excited to venture into the world of DOE!

    Regards

    Justin

    #504624 Reply

    Justin, a quick check of the Minitab home page indicates you get to the D-Optimal program by doing the following: “You can find Select Optimal Design under Stat > DOE > Factorial or Response Surface or Mixtures.”

    #504652 Reply

    Hello once again. I did a bit of research on D-optimal designs and found some information.

    I realized that I might have to do 60 runs in the case of a face-centered RSM: 3 continuous factors and 1 categorical factor (i.e., pulse duration is actually categorical). However, I am planning to use a D-optimal design and only do 30 runs. Does anyone have a clue which method should be used for creating the D-optimal design (the exchange method or the Fedorov method)? I am not able to see a significant difference between the two, except of course for the run numbers selected.

    Just to clarify

    1. A higher D-optimality (the determinant of X'X) and a lower A-optimality (the trace of (X'X)^-1) are preferred (this helps in the detection of significant factors).

    2. And for prediction, lower G- and V-optimality values (the maximum and the average prediction variance, respectively) are preferred, right?
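
    (For reference, here is how those quantities come out of the model matrix for a toy design in coded units – these are the textbook definitions, which is what makes the preferred directions above concrete; your Minitab version may scale or label them differently:)

    import itertools
    import numpy as np

    # toy design: 2^2 factorial, model = intercept + two main effects
    cube = np.array(list(itertools.product([-1.0, 1.0], repeat=2)))
    X = np.column_stack([np.ones(len(cube)), cube])

    XtX_inv = np.linalg.inv(X.T @ X)
    # scaled prediction variance x'(X'X)^-1 x at each design point
    pred_var = np.einsum("ij,jk,ik->i", X, XtX_inv, X)

    print("D =", np.linalg.det(X.T @ X))   # maximize
    print("A =", np.trace(XtX_inv))        # minimize
    print("G =", pred_var.max())           # minimize the worst-case variance
    print("V =", pred_var.mean())          # minimize the average variance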

    I would like to hear from those who have used the D-optimal methodology in Minitab 17.

    #504656 Reply

    Having built dozens of designs using various optimality criteria, it has been my experience that, from a usage standpoint, it doesn’t seem to matter much. Consequently I usually go with D-optimal as the choice. It is much the same situation with respect to how the design is built – I usually go with the program default, which in my case would be DETMAX.

    An item to note: pulse duration is not a categorical variable. You may have a situation where there are only X number of different pulse durations you can use, but that does not change the fact that, in practice, pulse duration is continuous, and you should treat it as such when building your design.

    #504657 Reply

    Thanks again, @rbutler. I know this might be premature to ask now, but is the definition of a categorical term in DOE that it cannot be quantified – more of a qualitative attribute?

    #504658 Reply

    OK, thank you @rbutler. Some further probing in Minitab helped me find an explanation: categorical items are only for grouping or subsetting the data, which is clearly not my scenario. Many thanks for the heads-up.

    http://support.minitab.com/en-us/minitab/17/topic-library/basic-statistics-and-graphs/introductory-concepts/data-concepts/cat-quan-variable/

    #504659 Reply

    Last but not least, I realized today that DOE is more of an art which requires years of practice. The next time I meet a certified Six Sigma Black Belt, I am going to view that certification with a healthy dose of skepticism.

    I don’t know how to show my gratitude to communities such as iSixSigma, which have helped me immensely. I guess giving back and sharing the knowledge is key.

    #504660 Reply

    Well, I’d at least give the guy/gal the benefit of the doubt with respect to knowledge of DOE construction – at least for the first meeting or so. What you should do, by way of providing yourself with a reality check, is take the time to understand the fundamentals of the basic designs and how they operate – that is, if you had to do the calculations manually, how would you proceed?

    If you understand the fundamentals of simple 2-level, 2-variable factorial design construction, you can mentally extrapolate to the higher and more complex constructs without having to actually run an analysis by hand.
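
    By way of a minimal “by hand” example (made-up responses): in a 2x2 factorial, each effect is just the average response at the factor’s high level minus the average at its low level.

    import numpy as np

    A = np.array([-1, 1, -1, 1])
    B = np.array([-1, -1, 1, 1])
    y = np.array([10.0, 14.0, 8.0, 18.0])   # made-up responses

    for name, col in [("A", A), ("B", B), ("AB", A * B)]:
        effect = y[col > 0].mean() - y[col < 0].mean()
        print(f"effect of {name}: {effect:+.1f}")
    # effect of A: +7.0, effect of B: +1.0, effect of AB: +3.0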

    As for the various optimality criteria – it is useful to know the thinking behind the various types, as well as the methods of construction and what motivated people to develop the various kinds of computer-generated designs.

    You should also make yourself familiar with the Box-Meyer approach to analyzing standard factorial designs. This approach addresses the question: which combination of factors will result in the minimum amount of variation in the output? It’s important that you know this, because sooner or later you are going to run into a Taguchi fan who will assure you that only Taguchi designs are capable of examining the effects of variable changes on the output mean and variance.
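
    A small sketch of the Box-Meyer idea on synthetic residuals: the dispersion statistic for each factor is the log ratio of the residual variances at its high and low levels, and a large magnitude flags a dispersion effect.

    import itertools
    import numpy as np

    design = np.array(list(itertools.product([-1.0, 1.0], repeat=3)))
    rng = np.random.default_rng(3)
    # synthetic residuals whose spread is driven by factor A (column 0)
    resid = rng.normal(0, 1, len(design)) * np.where(design[:, 0] > 0, 2.0, 0.5)

    for i, name in enumerate("ABC"):
        v_hi = resid[design[:, i] > 0].var(ddof=1)
        v_lo = resid[design[:, i] < 0].var(ddof=1)
        print(f"{name}: ln(s2_hi / s2_lo) = {np.log(v_hi / v_lo):+.2f}")
    # the factor with the largest |value| (A here) drives the output variance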

    #504899 Reply

    Dear @rbutler @cseider @Peach

    I wanted to update you with what happened in the past few weeks.

    I conducted the full factorial (4 factors, 16 runs), as the aliasing was a point of dilemma for my team. I then augmented the design with 7 center points and 8 face-centered axial points. I used that many center points so as to detect curvature, and also because I am only running one replicate. I conducted all 31 experiments together in one go, but used the data separately for finding significant factors and then for modeling.

    I had 2 responses for the Jumma machine: ablation depth and Rz. I did the analysis based on all the information I got from this forum. However, I have some contention regarding the residual analysis and would like to run it by the experts and hear your comments. I have attached 2 PDFs with the results; I have also written in red where I have specific queries about my interpretation. Your inputs and comments would be invaluable. Here is a brief outline of the steps involved:

    1. Ablation depth analysis
    a. DOE analysis to find significant terms
    b. Face-centered RSM to find squared terms
    c. Square root transformation of the response, based on lack of fit and residual analysis

    2. Rz analysis
    a. DOE analysis to find significant terms
    b. Face-centered RSM to find the squared term
    c. Log transform of the response

    #504900 Reply

    Here I have attached a drop box link to the 2 files. I have attached a word file also in addition to a pdf of the same file if you feel it is easier to add a comment in that.

    1. PDF version of ablation depth file link
    https://www.dropbox.com/s/76lgb7qrybjkbrt/DOE%20ablation%20depth%20sixsigma%20forum.pdf?dl=0

    2. PDF version of Rz depth file link
    https://www.dropbox.com/s/zv4voe0nf7qmouq/DOE%20Rz%20sixsigma%20forum.pdf?dl=0

    3. Word copy of both files together
    https://www.dropbox.com/s/tifyk36fxunhq2k/DOE%20and%20RSM%20both%20ablation%20and%20Rz%20for%20Isixsigma%20forum.docx?dl=0

    #504904 Reply

    Justin, any chance you can just attach those PDFs to your post? They’re on something called Dropbox, and the only way to see them is to sign up for whatever that is.

    #504906 Reply

    OK, the file I wanted to send does not meet the file size criterion. I am trying to compress it. If you could send me your mail ID, I could also send it directly.

    #504907 Reply

    Here is the second file attachment

    #504909 Reply

    The first file is here; I have tried to put in a link – hope it works.

    http://www.ilovepdf.com/download/71b5380a5d60dfc5eb7ecaacd053b6a8_198359f696c573ae5226646ba65f70bd

    Otherwise, I have attached a compressed file, but the image quality is pretty bad.

    #504916 Reply

    OK, I have split the first file and reattached the second file. So these are the new attachments:

    1. File name: DOE Ablation – Part 1
    2. File name: DOE Ablation – Part 2
    3. File name: DOE Rz

    #504918 Reply

    Second part attached here

    #504920 Reply

    The 3rd attachment, as already posted, is here.

    #504938 Reply

    I thought I had replied earlier via phone, but maybe that was a user error.

    The only caution I can give you is that I see an “odd pattern” in the residuals versus order: a downward path as you move left to right. This may influence your model too much if something else was going on in your process. I’d inquire as to what else was happening in the process.

    Hopefully you can verify your model’s findings and see whether the error is acceptable, by running at the more desirable conditions and seeing whether the process output(s) are acceptable.

    #505038 Reply

    Hi @cseider

    Thank you for the feedback, and sorry for the late reply. Yes, I did some confirmation runs and found them to be within my prediction interval. Unfortunately, I could not find the reason for the trend on my own. I did some re-measurements and found them to be the same. So I think I have to consult an expert for an explanation, or attribute it to random noise.

    Thanks again

    #505055 Reply

    @justmattam
    I noticed that in your RSM design you changed your axial (point type -1) design points. For example, one of the standard Minitab design points for your settings is this:

    StdOrder RunOrder PtType Blocks Pulse AvgPwr Ovrlap Freq AblaDept RzValue
    17 11 -1 1 -88 60 50 300 13.393 11.7490

    But you ran this combination:

    17 11 -1 1 8 60 50 300 13.393 11.7490

    Does this mean you did not run at the Pulse = -88 setting? If you did not run at the star points that Minitab gave you in the standard design, then I think your analysis is not correct.
    Assuming you ran all the settings that Minitab provided, including the star points, here is the model I got. The residuals vs. fitted values plot shows a curvature which I am not sure about (the residuals plot is attached).

    Model Summary

    S R-sq R-sq(adj) R-sq(pred)
    17.8045 84.52% 79.81% 62.96%

    Coded Coefficients

    Term Effect Coef SE Coef T-Value P-Value VIF
    Constant 19.34 4.08 4.73 0.000
    AvgPwr 43.48 21.74 3.63 5.98 0.000 1.00
    Ovrlap 41.20 20.60 3.63 5.67 0.000 1.00
    Freq -24.91 -12.45 3.63 -3.43 0.002 1.00
    Ovrlap*Ovrlap 12.49 6.24 3.28 1.90 0.070 1.00
    AvgPwr*Ovrlap 44.13 22.07 4.45 4.96 0.000 1.00
    AvgPwr*Freq -29.38 -14.69 4.45 -3.30 0.003 1.00
    Ovrlap*Freq -23.31 -11.65 4.45 -2.62 0.015 1.00

    Regression Equation in Uncoded Units

    AblaDept = -62.0 + 0.542 AvgPwr - 0.101 Ovrlap + 0.329 Freq + 0.00999 Ovrlap*Ovrlap
    + 0.02207 AvgPwr*Ovrlap - 0.00367 AvgPwr*Freq - 0.00466 Ovrlap*Freq

    Fits and Diagnostics for Unusual Observations

    Obs AblaDept Fit Resid Std Resid
    2 172.18 128.79 43.39 3.01 R
    10 55.13 85.51 -30.38 -2.55 R

    Attachments:
    #505060 Reply

    Also, here is the model I got for the Rz value output… the residuals look normal there.

    RzValue = 8.54 + 0.313 AvgPwr - 0.442 Ovrlap + 0.0113 Freq + 0.001016 AvgPwr*AvgPwr
    + 0.00401 Ovrlap*Ovrlap - 0.000654 AvgPwr*Freq

    Here are both models I came up with after centering and scaling the variables.

    AbDep = 19.34 + 21.74 B + 20.60 C - 12.45 D + 6.24 C*C + 22.07 B*C - 14.69 B*D - 11.65 C*D

    Rz = 10.53 + 9.555 B - 1.015 C - 2.800 D + 1.626 B*B + 2.506 C*C - 2.62 B*D

    #505094 Reply

    Hi @Peach

    Thank you for the feedback. I did not understand the point about the pulse duration being set at -88 instead of 8.
    I conducted a face-centered RSM, so my design points do not exceed the corner points. When I saw the residuals vs. fitted curve, I decided to use a square root transformation. I did the same for the Rz value and found that the fitted values improved in terms of equal variance.
    I also ran some tests against my model and found the results to be within the 95% prediction interval, although some came very close to the extreme end of the interval. I did 8 confirmation runs; I think many more are required, but for a coarse model I think it is sufficient. If the point about the pulse duration being 8 instead of -88 is still not clarified, please ping me.

    But thanks once again for going through the results.

    Regards

    Justin

    #505149 Reply

    @justmattam
    Please ignore my Pulse -88 comment; I did not read your posting carefully. I used your data and created a design with the default alpha (star points at ±2 in coded units), hence the design point at Pulse = -88. I analyzed the experiment using the wrong design I had created – sorry. I will redo my analysis later using the design you actually used. If you have already done confirmation runs and your values are in line with your model predictions, then you are good to go. The objective of this exercise is not to get a textbook pattern of residuals, but a useful model that predicts correctly.

    #703958 Reply

    Here are the compressed PDFs and an image of the graphs of the DOE analysis, as done.

    DOE 1.Pdf
    DOE 2.Pdf

    Attachments:
    #703961 Reply

    Sorry for not attaching the guides

    Here are the attachments

    DOE 1.pdf

    DOE2.pdf
