Control Chart
Six Sigma – iSixSigma › Forums › Old Forums › General › Control Chart
 This topic has 12 replies, 10 voices, and was last updated 15 years, 4 months ago by Savage.


July 16, 2007 at 8:23 pm #47583
I have continuous data that is not normally distributed, which control charts should I consider using and why?
July 16, 2007 at 8:45 pm #158727
Omashi Sabachi
It depends.
July 17, 2007 at 1:40 am #158740
Use a variables Xbar/R chart. Your data don’t need to be normally distributed to use it.
July 17, 2007 at 2:16 am #158743
Chris Seider
Be wary of advice from those who seem so confident. Either use I-MR or Xbar-R charts. If you use I-MR charts, I’d consider identifying the distribution type and finding the 3-sigma equivalent limits, since your distribution is nonnormal. Some might say I’m wasting your time by suggesting you find the equivalent, but I’d hate for you to respond to “out of control” indications that weren’t.
Xbar-R charts should be used if you can find a rational basis for subgrouping. However, if little process knowledge is available, I’ve seen some respected individuals use Xbar-R just to get the process of investigating special causes started.
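For what it’s worth, the I-MR limit arithmetic is simple enough to sketch in a few lines of Python (the data values here are made up; 2.66 is the standard 3/d2 constant for moving ranges of size 2):

```python
# Minimal I-MR (individuals) control-limit sketch; data values are made up.
# Individuals limits: Xbar +/- 2.66 * MRbar, where 2.66 ~= 3 / d2 (d2 = 1.128).
data = [10.2, 11.1, 9.8, 10.5, 12.0, 10.9, 11.4, 10.1]

moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
x_bar = sum(data) / len(data)
mr_bar = sum(moving_ranges) / len(moving_ranges)

ucl = x_bar + 2.66 * mr_bar  # upper control limit for individuals
lcl = x_bar - 2.66 * mr_bar  # lower control limit
print(round(ucl, 2), round(lcl, 2))
```

Note these are control limits computed from the data, not spec limits.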
Good luck.
July 17, 2007 at 2:58 am #158746
Jim Shelor
Daniel,
If you have Minitab, the I-MR chart has an option for applying a Box-Cox transformation to the data as it is analyzed.
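If you don’t have Minitab, the transform itself is simple; a minimal Python sketch (the example values are mine, not from Minitab):

```python
import math

def boxcox(x, lam):
    # Box-Cox transform of one positive value:
    # log(x) when lambda == 0, else (x**lam - 1) / lam
    if x <= 0:
        raise ValueError("Box-Cox requires positive data")
    if lam == 0:
        return math.log(x)
    return (x ** lam - 1.0) / lam

# lambda = 0 is the natural-log transform discussed later in this thread
print(boxcox(math.e, 0))   # 1.0
print(boxcox(4.0, 0.5))    # (2 - 1) / 0.5 = 2.0
```

Minitab additionally estimates the best lambda from the data; this sketch just applies a given lambda.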
Jim Shelor
July 17, 2007 at 12:44 pm #158762
Daniel,
It might help if you explain your situation a little more. Why is the data nonnormal? Is it intrinsically nonnormal where you would generally use a transformation or did you just happen to fail a normality test on a batch of data? (Because of outliers, etc.)
HACL
July 17, 2007 at 4:31 pm #158770
What are the considerations for determining sample size in variable control charting? For example, using a larger sample size will result in tighter control limits, but is this always desirable?
What if you run an I-MR chart and obtain indications of instability, but an Xbar-R chart shows stability? What does that mean, and what is the approach?
THANKS
July 17, 2007 at 5:45 pm #158775
If you are using a Shewhart control chart, it does not require the data to be normal. That feature is one of the remarkable things about Shewhart control charts.
July 18, 2007 at 10:26 am #158797
I have non-normal service time data. As you can imagine, it’s skewed to the left. The data transforms well using the natural log (LN).
I have a process target/mean that I want to calculate upper and lower spec limits for based on a Cp of 1.33.
Can I calculate the USL and LSL using the sigma of the transformed data?
For example: take the transformed sigma, multiply it by 6, then multiply by 1.33. Divide the result by 2 and add it to the transformed mean value, then convert the result back.
July 18, 2007 at 1:07 pm #158799
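In code form, the calculation described above looks like this (a sketch only; the mean and sigma values are made up, and the replies below question whether deriving spec limits this way is sound):

```python
# Sketch of the proposed spec-limit calculation (numbers are hypothetical).
import math

mean_t = 2.0    # mean of ln(service time), hypothetical
sigma_t = 0.25  # std dev of ln(service time), hypothetical
cp = 1.33

half_width = 6 * sigma_t * cp / 2    # half of the Cp-scaled 6-sigma spread
usl = math.exp(mean_t + half_width)  # back-transform to original units
lsl = math.exp(mean_t - half_width)
print(round(usl, 2), round(lsl, 2))
```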
Jim Shelor
Daniel,
The USL and LSL are limits that should be supplied by the engineer or whoever owns the process. You should not be calculating these limits from your data.
There must be a time you do not want to exceed for your service. That time is your USL. Your LSL is naturally 0.
Best regards
Jim Shelor
0July 18, 2007 at 1:27 pm #158801Daniel,Just a nuance, but your data is skewed to the right. Your method will work assuming your lower limit does not go below
0.You should take Mr. Shelor’s advice about whether it is reasonable to
set your limits this way however. The upper limit should be VOC; the
lower limit is not 0, but instead should represent the absolute
minimum time that the service can be accomplished correctly.0July 18, 2007 at 3:03 pm #158805
Chris Seider
I’m not sure, from a business point of view, why you’re worried about calculating Pp, but if you are, remember that Minitab will calculate it for you if you know which distribution best fits your process data. Just be sure to use the capability nonnormal command.
It uses the best estimate of the spread of the data that corresponds to the 6-sigma equivalent spread used in the Pp calculation for normal distributions. Do all of this on non-transformed data.
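If you’re curious what that command does, my understanding (an assumption on my part, not something I’ve verified against Minitab’s documentation) is that the nonnormal Pp uses the percentile method: the spec spread divided by the distance between the 0.135th and 99.865th percentiles of the fitted distribution. A rough Python sketch with a hypothetical lognormal fit:

```python
# Percentile-method Pp sketch (assumed method; all numbers hypothetical).
# Pp = (USL - LSL) / (x_0.99865 - x_0.00135), percentiles of the fitted dist.
import math

mu, sigma = 2.0, 0.25  # hypothetical lognormal fit (parameters on log scale)
usl, lsl = 25.0, 3.0   # hypothetical spec limits

# For a lognormal, the p-th percentile is exp(mu + z_p * sigma);
# z for the 0.135th and 99.865th percentiles is -3 and +3.
p_hi = math.exp(mu + 3 * sigma)
p_lo = math.exp(mu - 3 * sigma)
pp = (usl - lsl) / (p_hi - p_lo)
print(round(pp, 2))
```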
My 3.4 cents. Good luck on reducing that service time… I’m presuming that’s the goal.
July 19, 2007 at 4:27 pm #158843
Stan,
You stated “the lower limit is not 0, but instead should represent the absolute minimum time that the service can be accomplished correctly.” I agree with you but could the lower limit also be defined as missing?
The reason I ask: if the minimum time in which the service can be completed is X (thus LSL = X), my process average is X+1, and my upper limit is X+10, then my Cpk/Ppk might be ‘artificially’ low.
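A quick numeric illustration of what I mean (all numbers hypothetical):

```python
# Cpk sketch: with LSL = X just below the mean, Cpk is driven by the lower
# side even though the upper side has plenty of room. Numbers hypothetical.
X = 5.0                     # minimum possible service time
mean, sigma = X + 1, 1.0    # process average of X+1, hypothetical sigma
usl, lsl = X + 10, X

cpu = (usl - mean) / (3 * sigma)  # upper capability index
cpl = (mean - lsl) / (3 * sigma)  # lower capability index
cpk = min(cpu, cpl)
print(cpk)  # ~0.33, dominated by the lower spec
print(cpu)  # 3.0, what a one-sided analysis (LSL missing) would report
```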
Matt
The forum ‘General’ is closed to new topics and replies.