Time Between Control Charts
Six Sigma – iSixSigma › Forums › Old Forums › General › Time Between Control Charts
- This topic has 9 replies, 5 voices, and was last updated 14 years, 5 months ago by Jonathon Andell.
November 21, 2007 at 7:46 am #48735
I’m wondering if anybody has any thoughts on, or experience with, time between control charts for monitoring rare events.
One text I have suggests converting the time between events to an annual rate, and another suggests the transform x = y^0.2777 (that is, y^(1/3.6)) before using an ImR chart.

December 3, 2007 at 2:10 pm #165631
Schwarz
The Poisson distribution is used to measure rare-event occurrence in a defined interval of space or time.
If the time between events is the concern, try the Exponential distribution.
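If the times between events really are exponential, probability-based limits can be read straight off the exponential quantile function rather than from a normal approximation. A minimal Python sketch (the 30-day mean is a made-up example, and the function name is mine):

```python
import math

def exponential_control_limits(mean_time_between, p=0.00135):
    """Probability-based limits for exponentially distributed times
    between events, using the tail areas of conventional 3-sigma
    limits (0.135% in each tail)."""
    lcl = -mean_time_between * math.log(1 - p)  # lower 0.135% quantile
    ucl = -mean_time_between * math.log(p)      # upper 0.135% quantile
    return lcl, ucl

# e.g. a mean of 30 days between events (invented number)
lcl, ucl = exponential_control_limits(30.0)
print(f"LCL = {lcl:.3f} days, UCL = {ucl:.1f} days")
```

The limits come out wildly asymmetric (an LCL near 0.04 days against a UCL near 198 days for a 30-day mean), which is part of why the transform-then-ImR route discussed below is attractive.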
Dave

December 3, 2007 at 10:40 pm #165664
Rick Haynes
A great question. I was asked the same thing by a student in the past month.
The answer is to use a square root transformation. It turns out that the time between failures typically has a Poisson distribution if the process is stable, and in that case the square root is the correct transformation. You can check this with simulated time data using a Poisson random number generator; you will find the square root to be the right transform. When the mean exceeds 40 to 50, the transformation is still beneficial but no longer a requirement.
It is great that you are using the ImR chart in non-traditional ways; most would say use a c or u chart, which ends up being truly useless for infrequent events.

December 3, 2007 at 11:21 pm #165667
Thanks. I'll simulate all three methods that I've found so far (the rate, the square root, and the power 1/3.6) and see what happens.
My query was also prompted by a student; he works in healthcare, where there are many potential uses for this type of chart.

December 4, 2007 at 6:12 pm #165700
Jonathon Andell
I got a good idea from a talk by Don Wheeler: try charting the number of "units" between events. For instance, a client was seeking to reduce the number of accidental needle sticks at a blood bank. We plotted days since the last needle stick. It made for a good I-MR chart, and allowed a process improvement to reveal itself as a single-observation "special cause" point above the upper control limit.
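The I-MR limits for a days-between chart like that use the standard constants 2.66 and 3.267. A minimal sketch (the needle-stick intervals below are invented, not the client's data):

```python
def imr_limits(values):
    """Individuals-chart limits: mean +/- 2.66 * average moving range,
    plus the moving-range chart's upper limit 3.267 * MR-bar."""
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    x_bar = sum(values) / len(values)
    return x_bar - 2.66 * mr_bar, x_bar + 2.66 * mr_bar, 3.267 * mr_bar

# hypothetical days between needle sticks
days_between = [3, 8, 5, 12, 6, 9, 4, 30]
lcl, ucl, mr_ucl = imr_limits(days_between)
```

In practice the limits would be computed from baseline data and a post-improvement interval compared against them. Note that with skewed days-between data the computed LCL can easily go negative, which is the asymmetry problem raised elsewhere in this thread.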
December 4, 2007 at 7:11 pm #165708
Which text are you referring to? Did any text mention the t or g chart?
December 5, 2007 at 2:39 am #165726
The transform y^(1/3.6) was from Introduction to Statistical Quality Control by Montgomery. The idea of converting to rates was from Implementing Six Sigma by Breyfogle.
Somebody sent me a paper on g and h charts, by Benneyan, from the journal Health Care Management Science. It looks as though the g chart would also be suitable. I haven't come across t charts.

December 5, 2007 at 3:54 am #165730
The snag with using the number of days directly is that their distribution will follow an exponential distribution rather than a normal distribution. It would be possible to calculate control limits, but they would be ‘very asymmetric’ (Montgomery). This can be overcome by transforming the days between events so the distribution is near enough normal, and then using the ImR chart.
Montgomery suggests the transform y^(1/3.6), a posting here suggested y^(1/2), and another suggestion is to convert to an annual rate (that is, 1/y scaled for convenience).
Another alternative is to use a g chart, which is a type of attribute chart. It plots the number of ‘instances’ between ‘events’, where an instance can be a ‘day’.
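A hedged sketch of g-chart limits, assuming one common 3-sigma form for geometrically distributed counts between events (the counts below are invented):

```python
import math

def g_chart_limits(counts_between):
    """3-sigma limits for a g chart of units (e.g. days) between
    events, assuming a geometric distribution:
    center g_bar, limits g_bar +/- 3*sqrt(g_bar*(g_bar + 1))."""
    g_bar = sum(counts_between) / len(counts_between)
    sigma = math.sqrt(g_bar * (g_bar + 1))
    return max(0.0, g_bar - 3 * sigma), g_bar, g_bar + 3 * sigma

# hypothetical days between adverse events
lcl, center, ucl = g_chart_limits([4, 9, 2, 15, 6])
```

Benneyan's paper gives refinements (bias corrections, probability-based limits), so treat this as the simplest form only.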
That’s what I’ve gathered so far. I’ve got some real data sets, and I can generate random variables in Excel. When I get the chance I’ll simulate all the solutions and see what happens.

December 5, 2007 at 1:59 pm #165742
t charts are really just the time between rare events, e.g. days between shut-downs.
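The simulation described above can also be sketched in Python instead of Excel: draw exponential times between events and compare the skewness of the raw values against each candidate transform, since near-zero skewness is one sign that normal-based ImR limits will behave. The 30-day mean and sample size are arbitrary choices:

```python
import math
import random

def skewness(xs):
    """Sample skewness (population form): E[(x - m)^3] / s^3."""
    n = len(xs)
    m = sum(xs) / n
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / n)
    return sum((x - m) ** 3 for x in xs) / (n * s ** 3)

random.seed(1)
times = [random.expovariate(1 / 30.0) for _ in range(10_000)]

candidates = {
    "raw":       times,
    "y^(1/3.6)": [t ** (1 / 3.6) for t in times],
    "sqrt(y)":   [math.sqrt(t) for t in times],
    "rate 1/y":  [1 / t for t in times],
}
for name, data in candidates.items():
    print(f"{name:10s} skewness = {skewness(data):+.2f}")
```

On exponential data the y^(1/3.6) transform should come out closest to symmetric (it maps an exponential to roughly a Weibull with shape 3.6, which is nearly normal), while the raw times and the rate stay strongly skewed.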
December 5, 2007 at 4:00 pm #165744
Jonathon Andell
I cannot argue with your approach, because the underlying theory is as sound as can be. (I wasn’t familiar with y^(1/3.6); was that a derivation or an empirical finding?)
However, I have found that control charts are fairly robust with respect to departures from normality. On a number of occasions I have charted both transformed and “raw” data, and the resulting pairs of charts rarely led to differing conclusions.
In a process improvement environment, the improved performance is so much better that the new data point lands far outside the old control limits.
My bottom line is that asking people to interpret transformed data can create hassles equal to or greater than those from relying on the robustness of a not-quite-correct control chart.