iSixSigma

Adjust IMR UCL/LCL?


This topic contains 18 replies, has 8 voices, and was last updated by  Lee 9 years, 11 months ago.

Viewing 19 posts - 1 through 19 (of 19 total)
  • Author
    Posts
  • #52816

    Lee
    Participant

    I have some data for which IMR charts are being developed.  In looking at the residuals (average minus observed data) for over 1,000 data points, I found that the kurtosis is almost 13.5 (from Minitab), so I do not consider the data to be normally distributed, yet that is an assumption for the IMR charts (right?).
    Should I make some sort of adjustment to the UCL and LCL in this case so that the traditional probability of finding outliers is retained?  I have not found anything on this, but perhaps someone on this list can point to a reference that speaks to adjusting the lines, or comment on the validity of my thoughts.
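    For readers who want to reproduce this kind of check outside Minitab, here is a minimal Python sketch, not the original analysis: it forms the residuals ("average minus observed") and reports skewness, excess kurtosis, and an Anderson-Darling statistic. The file name brix_readings.csv is purely illustrative.

    ```python
    # A minimal sketch, not the poster's Minitab analysis: load the readings,
    # form residuals as "average minus observed", and look at their shape.
    # The file name brix_readings.csv is hypothetical.
    import numpy as np
    from scipy import stats

    observed = np.loadtxt("brix_readings.csv")
    residuals = observed.mean() - observed   # "average minus observed data"

    print("n        :", residuals.size)
    print("skewness :", stats.skew(residuals))
    # scipy's kurtosis() reports excess kurtosis (0 for a normal distribution)
    print("kurtosis :", stats.kurtosis(residuals))

    # Anderson-Darling statistic as a rough check of the normality assumption
    ad = stats.anderson(residuals, dist="norm")
    print("A-D statistic:", ad.statistic, "critical values:", ad.critical_values)
    ```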

    0
    #186281

    GB
    Participant

    Eugene,
    With the limited data presented, do not adjust your control limits… The result is a symptom, or effect, of some function or interaction of inputs. Go look at your process. We have seen a lot of folks wanting to tweak their CLs or specs to fit their data, instead of walking the process and investigating the potential causes.

    0
    #186282

    Taylor
    Participant

    Eugene
    I agree with HBGB, but want to ask the question: How are you calculating your Control Limits?
    If they are simple guardbands, then that is your issue. (BOB DOEring take note).

    The control limits are set too tightly. This leads to over-adjustment and tampering with the process. Tampering adds to process variation, resulting in lower quality and higher costs.
    The control limits are set too loosely. Signals of process change are ignored and opportunities for process improvement are missed. The result is additional avoidable variation, lower quality, and higher costs.
     

    0
    #186283

    Lee
    Participant

    Thanks for the reply.
    The process is fairly complicated (50+ inputs, likely only one or two of which are the culprits), so I was exploring the line of thought I presented.  Apparently that is a dead end, so I just have to find a different solution.
    I now have to look at the efficiency of improving this process vs. another process.  Thus far the spread of the data has not been correlated with a significant change in the as-shipped product (after more process steps).
    Thanks, again, for your input.

    0
    #186284

    Lee
    Participant

    The limits are being calculated with the standard formulas for an IMR chart, with the moving range determined from successive times the process is used (Limits = Average +/- 2.659 * average moving range).
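    As a point of reference, the limit calculation quoted above (center line +/- 2.659 * average moving range, where 2.659 is approximately 3/1.128) can be sketched in a few lines of Python; the sample readings below are made up for illustration only.

    ```python
    # Sketch of the individuals-chart limit formula quoted above:
    # limits = average +/- 2.659 * average moving range, where 2.659 ~= 3/1.128.
    import numpy as np

    def imr_limits(x):
        x = np.asarray(x, dtype=float)
        mr = np.abs(np.diff(x))          # moving ranges of successive points
        mr_bar = mr.mean()
        center = x.mean()
        ucl = center + 2.659 * mr_bar    # individuals chart limits
        lcl = center - 2.659 * mr_bar
        mr_ucl = 3.267 * mr_bar          # D4 * MR-bar, upper limit of the MR chart
        return center, lcl, ucl, mr_bar, mr_ucl

    # Made-up Brix-like readings, for illustration only
    center, lcl, ucl, mr_bar, mr_ucl = imr_limits([21.3, 20.8, 22.1, 19.9, 21.0, 20.5])
    print(f"center={center:.2f}  LCL={lcl:.2f}  UCL={ucl:.2f}  MR UCL={mr_ucl:.2f}")
    ```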
    I had first looked at the residuals a while back, and I have noted that the number of outliers is much smaller than expected (out of 500-700+ readings, not one outlier was recorded).  My concern is that the limits are set too loose.  Although I could not locate material that spoke of adjusting the UCL and LCL, I thought the more experienced people here might know of it (I now consider it affirmed that the reason I could not find anything is that the concept is not sound).  It appears that some variable is not controlled and causes the non-normal distribution, as suggested by HBGB.  I have considered that when an outlier occurs the logs are being falsified to a value inside the acceptance range, but that is a painful idea to entertain except as a last resort, and it would have to be the norm across 8 operators (unlikely).
    Thanks for your time and thoughts …

    0
    #186285

    Lee
    Participant

    In your reply you wrote of “simple guardbands”.  If you are asking whether there is a natural boundary to the values, the answer is no.  The measuring scale runs from zero through 45 (anything over that is just recorded as “Over”).  The averages are around 20, with no average or measurement under 5, and the bulk of the values run from 15 to around 30.  There is no nearby constraint or limit on the values recorded.

    0
    #186286

    StuW
    Member

    I would check to see if your data shows positive autocorrelation.  That would account for a series of measurements that appear to run closer to the center line than expected.  Use the Autocorrelation and Partial Autocorrelation options under Stat > Time Series in Minitab to check it.  If the data does show autocorrelation, then you can either adjust the limits according to formulas that are available, or see if the autocorrelation between data points can be reduced in some way.  In some settings, increasing the time between sampled data points will reduce the serial correlation to a large degree, giving the plotted data behavior closer to the independent observations the chart assumes.
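    For anyone without Minitab at hand, a rough equivalent of this check can be sketched with statsmodels; the +/-1.96/sqrt(n) bound below plays the role of Minitab's red 5% significance lines, and the data shown is a random placeholder rather than anything from this thread.

    ```python
    # Rough stand-in for Minitab's autocorrelation check, using statsmodels.
    # The data here is a random placeholder; substitute the actual individuals.
    import numpy as np
    from statsmodels.tsa.stattools import acf, pacf

    rng = np.random.default_rng(0)
    x = rng.normal(20, 2, size=200)

    lags = 10
    r = acf(x, nlags=lags, fft=False)
    p = pacf(x, nlags=lags)

    # Approximate 5% significance bound for white noise, +/- 1.96/sqrt(n),
    # which plays the role of Minitab's red lines on the ACF plot.
    bound = 1.96 / np.sqrt(len(x))
    for k in range(1, lags + 1):
        flag = "  <-- significant" if abs(r[k]) > bound else ""
        print(f"lag {k:2d}: acf={r[k]:+.3f}  pacf={p[k]:+.3f}{flag}")
    ```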

    0
    #186289

    Darth
    Participant

    Eugene,
    First of all, you have a number of misconceptions regarding control charts. First, normality is not an assumption for using a control chart. Second, I saw repeated comments about the control limits being “set.” Control limits are not set; they are calculated from the data, and they are what they are. They should be relatively robust to non-normality. If the actual data is severely skewed, an I/MR chart may give you false signals.
    There was a recent thread and battle between Wheeler and Breyfogle regarding the use of transformed data in the event of severe skewness. Check the archives here and see if you can find the thread.
    You automatically went to an I/MR chart. Does the concept of rational subgrouping have any validity for your data/process? Is there a reason that the residuals are not normally distributed? What is the characteristic that you are measuring?

    0
    #186300

    GB
    Participant

    Well said, Darth.

    0
    #186302

    Darth
    Participant

    Were you able to tell it was me? I tried to disguise my voice behind the curtain.

    0
    #186306

    Lee
    Participant

    Thanks for the reply Darth.
    As far as the normality assumption goes, I thought I had read at one time that IMR charts were sensitive to that assumption, but that Xbar-R charts draw upon the Central Limit Theorem to remove that sensitivity to the underlying data distribution, and thus are more robust and more often used.  I’ll go back and review what I thought I knew.
    As far as the “setting of the control limits” goes, I agree 100% that the calculations are what they are.  That being said, my phraseology was misleading; what I meant is that I do the calculations and control the value (from the calcs) provided to the production floor, so from the operators’ view I “set” the number. 
    Now, for the use of an IMR chart: there is no rational subgrouping that I can justify.  There are about 100 different solutions (generically we refer to them as “brines”) that are made here, all in a batch process system.  There is no time pattern to when any specific brine is made: some are made once a month or so, some less frequently, some as often as several times a week.  Because of the batch process, the non-predictability of when any specific brine is made, and the fact that making a specific brine can occur on either shift, I did not perceive any rational subgrouping that would apply.  Hence (am I on track still?), I did not see the applicability of the Xbar-R chart for this situation. 
    Now, as to what is being measured: the pH of the solution and the Brix (%) of the solution.  The Brix (%) is a measure of the equivalent dissolved sugar level.  The meter has been calibrated, operators trained, etc.  It is a linear scale that runs from 0 through 45% for 0 through 45% dissolved sugar.
    I’ll check out the thread you mentioned; perhaps that will also provide some clues.  From an earlier study of the Brix (%) data, my notes (which I had almost forgotten) are:
    “The residuals were input into Minitab 15.1.0.0 for evaluation.  The shape of the distribution was compared to that of the following standard distributions: Normal, 3-Parameter Lognormal, 2-Parameter Exponential, 3-Parameter Weibull, Smallest Extreme Value, Largest Extreme Value, 3-Parameter Gamma, Logistic, and 3-Parameter Loglogistic.  It is noted that several of these fits are not possible because of the negative values inherent in residuals.
     

    Additionally, a Johnson transformation was requested that would yield a normal distribution for the residuals, so that possible transforms for the data prior to evaluation by Helix might be evaluated.  The automated Box-Cox transformation is also not possible because of the negative values inherent in residuals, although a series of manual transformations is feasible.
    No Johnson transformation with p > 0.1, or other distribution fit with p > 0.05, was found, so adjusting the limit lines to reproduce the classical probability lines is not possible.  Manual Box-Cox transformations were made at -2, -1.5, -1, -0.5, 0.5, 1.5, 2, and 0.37.  The p-value was at its maximum at 0.37, but was only 0.007, still under the traditional 0.05 above which normality would be supported.”
     
    Thanks for your time … I think I will gather a new set of data to examine and try looking at it with a set of “new eyes”.  My suspicion now is that there is a process variable at play that I do not know about.
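    As an aside, the manual lambda search described in the notes above can be approximated in Python. How the negative residuals were handled in the original analysis is not stated, so the sketch below simply shifts them to be strictly positive first; that is an assumption for illustration, not the poster's method, and the residuals shown are synthetic.

    ```python
    # Hedged sketch of the manual lambda search described in the notes above.
    # How negative residuals were handled originally is not stated; here they are
    # simply shifted to be strictly positive first (an assumption for illustration).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    residuals = rng.standard_t(df=3, size=500)    # heavy-tailed placeholder residuals

    shifted = residuals - residuals.min() + 1e-3  # make strictly positive

    for lam in (-2, -1.5, -1, -0.5, 0.37, 0.5, 1, 1.5, 2):
        y = np.log(shifted) if lam == 0 else shifted ** lam
        stat, p = stats.normaltest(y)             # D'Agostino-Pearson normality test
        print(f"lambda = {lam:5.2f}   p = {p:.4f}")
    ```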

    0
    #186307

    Lee
    Participant

    Could be; I have not done that to confirm.  Thanks.

    0
    #186308

    GB
    Participant

    I think the tight-fitting lucha-libre wrestling mask obscured your voice…
    ;-)

    0
    #186314

    StuW
    Member

    Just a few more things to consider based upon the added information provided in your last note:
    Have you checked for “fixed effects”, in other words, differences in response means between the brines?  These could be small shifts, but they are nonetheless important in how your I-MR chart looks.
    Have you done a variance components analysis and checked the “between-brine” versus “within-brine” variation patterns?  For most situations, you would want the brine-to-brine effect to be less than 20-25% of the total variation or so; otherwise you will again see patterns in the I-MR chart that are due to changes in the brine being made.
    Is it really reasonable to assume that the response means should be the same no matter which brine recipe is produced?
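    For illustration, the between-brine vs. within-brine split suggested above can be sketched with a simple one-way method-of-moments variance-components estimate; the column names and numbers below are invented, and a proper random-effects analysis would be the more rigorous route.

    ```python
    # Illustrative method-of-moments variance-components split: between-brine vs.
    # within-brine. Column names and numbers are invented for this sketch.
    import numpy as np
    import pandas as pd

    df = pd.DataFrame({
        "brine": ["A"] * 5 + ["B"] * 5 + ["C"] * 5,
        "brix":  [20.1, 20.4, 19.8, 20.0, 20.3,
                  22.5, 22.8, 22.1, 22.6, 22.4,
                  18.9, 19.2, 18.7, 19.0, 19.1],
    })

    groups = df.groupby("brine")["brix"]
    k = groups.ngroups
    n_i = groups.size().to_numpy()
    N = n_i.sum()
    grand = df["brix"].mean()

    ss_between = float((n_i * (groups.mean().to_numpy() - grand) ** 2).sum())
    ss_within = float(((df["brix"] - groups.transform("mean")) ** 2).sum())

    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (N - k)
    n0 = (N - (n_i ** 2).sum() / N) / (k - 1)   # effective group size (unbalanced-safe)

    var_within = ms_within
    var_between = max((ms_between - ms_within) / n0, 0.0)
    total = var_within + var_between
    print(f"between-brine share of total variance: {100 * var_between / total:.1f}%")
    ```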

    0
    #186320

    Utkarsh
    Member

    I would suggest finding out the cause of the non-normality. Do not adjust the control limits. The outliers should guide you in the direction of cause and effect. What is the gage R&R for the measurement system? How was the data collected? What are the KPIVs for process control?

    0
    #186326

    Lee
    Participant

    I completed examining the residuals for two of the 100+ brines.  No autocorrelation above the 5% significance level (I was not taught about this in the BB training I had, but it looks like in Minitab the goal is to stay between the red 5% lines, and it does).  Because the brines are not made on any fixed time frequency (i.e., not every Tuesday, not every 10th of the month, not every 5 hours, etc.), I do not really expect that any of the remaining brines has autocorrelation either.
    Thanks for the suggestion, though; I should be checking for it in other applications.

    0
    #186371

    Forrest W. Breyfogle III
    Member

    To answer your question, you should not adjust the control limits; however, that should be only part of the consideration. I noted that you made no mention of process capability or subgrouping-interval considerations in your question.  These are important issues that need to be addressed.
     
    Remember, out-of-control signals are only meant to identify special-cause occurrences, but what if common-cause variability is unsatisfactory relative to customer needs? Control charts of continuous data do not address this point.  When common-cause variability produces, for example, too high a non-conformance rate, the situation requires different action; i.e., you need to undertake a general process improvement project, where all the data in the latest region of stability can give you insight into what should be done differently to improve.  
     
    I highly suggest that you keep an open mind when reading the Quality Digest article below and its links.  This is one of the later articles in which Donald Wheeler and I described our differing points of view and my inclusion of process capability statements within this analysis discussion; note that this article was referenced earlier in another comment.
     
    Referenced article: “NOT Transforming the Data Can Be Fatal to Your Analysis: A case study, with real data, describes the need for data transformation”
    http://www.qualitydigest.com/inside/six-sigma-column/not-transforming-data-can-be-fatal-your-analysis.html
     
    To provide more specific suggestions, I would need to look at your data.  If you would like me to do so, let me know. My e-mail address is at the end of the Quality Digest article.
     
    Good luck.  

    0
    #186372

    lin
    Participant

    Hello,
    I read your article in your continuing debate with Dr. Wheeler. In your real data example, you say:
    “The example data used in this discussion is a true enterprise view of a business process. This reports the time to change from one product to another on a process line. It includes six months of data from 14 process lines that involved three different types of changeouts, all from a single factory. The factory is consistently managed to rotate through the complete product line as needed to replenish stock as it is purchased by the customers. The corporate leadership considers this process to be predictable enough, as it is run today, to manage a relatively small finished goods inventory. With this process knowledge, what is the optimal method to report the process behavior with a chart? Figure 1 is an individuals chart of changeover time. From this control chart, which has no transformation as Wheeler suggests, nine incidents are noted that should have been investigated in real time. In addition, one would conclude that this process is not stable or is out of control.”
    Do I understand you correctly that you used data from 14 lines and three different changeouts on the same chart? Isn’t that kind of mixing apples and oranges? I don’t think I would put three different types of changeouts on the same individuals chart. It might be interesting if you provided the actual data and showed the context in which it was taken, including process, type of changeover, etc. Are 9 out-of-control points (with over 500 data points) due to one type of changeout? I can’t tell based on your article.
    Regardless, interesting reading.
    Bill

    0
    #186444

    Lee
    Participant

    Forrest - thanks for the reply.  I have been without an internet connection for a couple of days, so I was not ignoring your advice & thoughts.
    I did try transforming the data and found the following: the Johnson transformation is the only one that produced a normally distributed data set, but I am very far from convinced that the transform has anything to do with the actual process going on.  Nevertheless, when the transform was applied, both the UCL and LCL shifted by only about 10% of their values, and the distance between the lines was essentially the same.  So, independent of the side one wants to take on transforms, for this application there was not much effect.
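    To make that kind of comparison concrete, here is a hedged sketch of computing I-MR limits on raw readings versus on transformed readings and back-transforming the latter. Box-Cox (valid here because Brix readings are positive) stands in for the Johnson transform mentioned in the post, and the data is a synthetic placeholder.

    ```python
    # Sketch of the comparison described above: I-MR limits on the raw readings
    # versus limits computed on transformed data and back-transformed. Box-Cox
    # stands in for the Johnson transform; the data is a synthetic placeholder.
    import numpy as np
    from scipy import stats
    from scipy.special import inv_boxcox

    def imr_limits(x):
        x = np.asarray(x, dtype=float)
        mr_bar = np.abs(np.diff(x)).mean()
        return x.mean() - 2.659 * mr_bar, x.mean() + 2.659 * mr_bar

    rng = np.random.default_rng(2)
    brix = rng.gamma(shape=20.0, scale=1.0, size=300)   # skewed positive readings

    raw_lcl, raw_ucl = imr_limits(brix)

    transformed, lam = stats.boxcox(brix)
    t_lcl, t_ucl = imr_limits(transformed)
    bt_lcl, bt_ucl = inv_boxcox(np.array([t_lcl, t_ucl]), lam)

    print(f"raw-data limits:         {raw_lcl:.2f} .. {raw_ucl:.2f}")
    print(f"back-transformed limits: {bt_lcl:.2f} .. {bt_ucl:.2f}")
    ```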
    My current thoughts center around common cause variability, as you suggested.
     

    0

The forum ‘General’ is closed to new topics and replies.