iSixSigma

Time Between Control Charts


Viewing 10 posts - 1 through 10 (of 10 total)
  • #48735

    Gastala
    Participant

    I’m wondering if anybody has any thoughts on, or experience with, time between control charts for monitoring rare events.
    One text I have suggests converting the time between events to an annual rate, and another suggests applying the transform x = y^0.2777 before using an ImR chart.
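For concreteness, the two suggestions above amount to something like this (a minimal sketch; the `days_between` values are made up for illustration):

```python
# Illustrative times (in days) between successive rare events -- made-up data.
days_between = [12.0, 30.0, 7.0, 55.0, 21.0, 3.0, 40.0]

# Option 1: convert each gap to an annualised rate before charting.
annual_rates = [365.0 / y for y in days_between]

# Option 2: the power transform x = y ** 0.2777 before using an ImR chart.
transformed = [y ** 0.2777 for y in days_between]
```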

    #165631

    Schwarz
    Participant

    The Poisson distribution models the number of rare events occurring in a defined interval of time or space.
    If the time between events is the concern, try the exponential distribution.
    Dave
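As a quick illustration of the exponential model (a sketch, with an assumed rate of one event per 10 days):

```python
import random

random.seed(1)
rate = 0.1  # assumed: on average one event every 10 days
gaps = [random.expovariate(rate) for _ in range(10000)]

mean_gap = sum(gaps) / len(gaps)  # should come out close to 1 / rate = 10 days
```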

    #165664

    Rick Haynes
    Member

    A great question.  I was asked the same by a student in the past month.
    The answer is to use a square root transformation.  It turns out that the time between failures typically has a Poisson distribution if the process is stable, and in that case the square root is the correct transformation.  You can check this with simulated time data using a Poisson random number generator; you will find the square root to be the right transform.  When the mean exceeds 40 to 50, the transform is still beneficial but no longer a requirement.
    It is great that you are using the ImR chart in non-traditional ways; most would say use a C or U chart, which will end up being truly useless for infrequent events.
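One way to run the check suggested above is to simulate skewed time data and compare its skewness before and after taking the square root (a sketch; here the gaps are drawn from an exponential distribution purely for illustration):

```python
import random

def skewness(xs):
    """Sample skewness: third central moment over the cubed standard deviation."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    s3 = sum((x - m) ** 3 for x in xs) / n
    return s3 / s2 ** 1.5

random.seed(42)
raw = [random.expovariate(0.1) for _ in range(5000)]  # assumed time-between data
rooted = [x ** 0.5 for x in raw]
# The square root pulls the long right tail in, so the skewness drops sharply.
```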

    #165667

    Gastala
    Participant

    Thanks. I’ll simulate all three methods that I’ve found so far (the rate, the square root and the power 1/3.6) and see what happens.
    My query was also prompted by a student, who works in healthcare, where there are many potential uses for this type of chart.

    #165700

    Jonathon Andell
    Participant

    I got a good idea from a talk by Don Wheeler. Try charting the number of “units” between events. For instance, a client was seeking to reduce the number of accidental needle sticks at a blood bank. We plotted days since the last needle stick. It made for a good I-MR chart, and allowed a process improvement to reveal itself as a single-observation “special cause” point above the upper control limit.
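For anyone wanting to reproduce that kind of chart, the individuals limits can be computed from the moving ranges in a few lines (a sketch with invented days-between-stick data; 2.66 is the usual 3/d2 constant for moving ranges of size 2):

```python
# Invented data: days between successive needle sticks.
days = [4, 9, 2, 15, 7, 11, 5, 20, 8, 13]

moving_ranges = [abs(b - a) for a, b in zip(days, days[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)
centre = sum(days) / len(days)

ucl = centre + 2.66 * mr_bar            # upper limit of the individuals chart
lcl = max(0.0, centre - 2.66 * mr_bar)  # gaps cannot be negative
# After an improvement, an unusually long gap would plot above `ucl`
# as a single-observation special cause.
```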

    #165708

    Savage
    Participant

    Which text are you referring to? Did any text mention the t or g chart?

    #165726

    Gastala
    Participant

    The transform y^(1/3.6) was from Introduction to Statistical Quality Control by Montgomery. The idea of converting to rates was from Implementing Six Sigma by Breyfogle.
    Somebody sent me a paper on g and h charts, by Benneyan, from the journal Health Care Management Science. It looks as though the g chart would also be suitable. I haven’t come across t charts.
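For reference, Benneyan's g chart treats the count between events as geometric, which gives simple limit formulas (a sketch with made-up counts of event-free days between successive events):

```python
# Made-up counts of event-free days between successive events.
between = [14, 3, 27, 9, 41, 6, 18, 11]

g_bar = sum(between) / len(between)
# Geometric model: the variance is roughly g_bar * (g_bar + 1).
sigma = (g_bar * (g_bar + 1)) ** 0.5

ucl = g_bar + 3 * sigma
lcl = max(0.0, g_bar - 3 * sigma)  # the lower limit is floored at zero
```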

    #165730

    Gastala
    Participant

    The snag with using the number of days directly is that their distribution will follow a Poisson distribution rather than a normal distribution. It would be possible to calculate control limits, but they would be ‘very asymmetric’ (Montgomery). This can be overcome by transforming the days between events so the distribution is near enough normal, and then using the ImR chart.
    Montgomery suggests the transform y^(1/3.6), a posting here suggested y^(1/2), and another suggestion is to convert to an annual rate (that is, 1/y scaled for convenience).
    Another alternative is to use a g chart, which is a type of attribute chart: it plots the proportion of ‘events’ to ‘instances’, where an instance can be a ‘day’.
    That’s what I’ve gathered so far. I’ve got some real data sets, and I can generate random variables in Excel. When I get the chance I’ll simulate all the solutions and see what happens.
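A simulation along those lines can be sketched in a few lines of Python rather than Excel; here the gaps are drawn from a unit-rate exponential (an assumption for illustration), and the distance between mean and median is used as a crude symmetry check:

```python
import random

def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

random.seed(0)
gaps = [random.expovariate(1.0) for _ in range(10000)]  # assumed exponential gaps

raw_gap = abs(mean(gaps) - median(gaps))
power = [y ** (1 / 3.6) for y in gaps]
power_gap = abs(mean(power) - median(power))
# For near-symmetric data the mean and median nearly coincide, so the
# power transform should shrink the mean-median gap dramatically.
```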

    #165742

    Savage
    Participant

    t charts are really just the time between rare events, e.g. days between shut-downs.

    #165744

    Jonathon Andell
    Participant

    I cannot argue with your approach, because the underlying theory is as sound as can be. (I wasn’t familiar with y^(1/3.6) – was that a derivation or an empirical finding?)
    However, I have found that control charts are fairly robust with respect to departures from normality. On a number of instances I have charted both transformed and “raw” data, and the resultant pairs of charts rarely lead to differing conclusions.
    In a process improvement environment, the improved performance is so significantly better that the new data point is FAR outside the old control limits.
    My bottom line is that asking people to interpret transformed data can create hassles equal to or greater than relying on the robustness of a not-quite-correct control chart.


The forum ‘General’ is closed to new topics and replies.