# Individual Charts vs. Using Subgroups


- This topic has 9 replies, 3 voices, and was last updated 8 years, 3 months ago by cfb.

- March 6, 2012 at 12:09 pm #53984

Steve (Guest)

A basic question on the concepts of control charts: does using a subgroup size of n (n ≥ 2) to construct your control chart always give you a better chart, in terms of detecting when the process goes out of control, than using a sample size of 1 as in an individuals control chart?

I know you use n=1 when it’s difficult to obtain sample sizes > 1, but if you have a choice, shouldn’t you always use a subgroup > 1?

I have scanned a couple of textbooks but haven’t found it spelled out that you shouldn’t use I charts if you can get grouped data.

March 6, 2012 at 8:47 pm #192460

MBBinWI (Participant)

Steve: The “purest” form of a control chart would be to plot every individual point. Of course, that is not always practical, either because the number of points is very large, the individual-to-individual variation is relatively small, or there is so much noise in the system that you would see the noise more than you would see the process variation (and there may be other reasons, but it’s getting late and I’m tired).

In order to deal with these issues we sample. You can sample in several ways: some still use individuals, just not every individual (like taking every 10th item), and some take “rational subgroups.” One of the principal reasons for taking a subgroup sample is to dampen out the noise, whether in the process or in the measurement system (in fact, the fastest way to overcome a measurement system with poor repeatability is to use subgroups). Averaging a subgroup reduces the standard deviation of the plotted points by a factor of 1 over the square root of the subgroup size.
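As a quick sanity check of that 1-over-square-root-of-n claim, here is a minimal sketch (mine, not from the thread) using only the Python standard library; the in-control process with mean 50 and standard deviation 5 is an assumed example:

```python
import random
import statistics

random.seed(1)
n_sub = 5                                           # assumed subgroup size
data = [random.gauss(50, 5) for _ in range(5000)]   # simulated individual values

# Standard deviation of the raw individual values
sd_individuals = statistics.stdev(data)

# Standard deviation of the subgroup means
subgroup_means = [statistics.mean(data[i:i + n_sub])
                  for i in range(0, len(data), n_sub)]
sd_means = statistics.stdev(subgroup_means)

print(f"sd of individuals:    {sd_individuals:.2f}")   # close to 5
print(f"sd of subgroup means: {sd_means:.2f}")         # close to 5/sqrt(5) = 2.24
```

The subgroup means vary far less than the individuals, which is exactly why an Xbar chart’s limits are tighter around the centerline.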

The other thing to take into account (at least if you are using Minitab for your control charts) is that with a sample size of 1, Minitab defaults to using a point-to-point moving range to estimate the variation, whereas subgroups of 3 or more allow a direct standard deviation calculation.

There is no universal “better”; you just need to know when to apply each chart, and what the trade-offs are, so that you use the most appropriate one for the situation.

March 7, 2012 at 8:17 am #192463

Robert Butler (Participant)

One additional thought to add to MBBinWI’s post: many times there isn’t a rational subgroup, and the only meaningful choice is a sample size of 1.

March 7, 2012 at 12:19 pm #192467

Steve (Guest)

OK, I agree that taking samples in groups dampens out the noise. What I want to know is whether the math works out so you can conclude that using subgroups actually detects an out-of-control process better – i.e., has a lower beta (type II) error.

I took 200 data points where I forced the mean to shift from 50 to 60 at point 101 – i.e., halfway through. Then I made control charts using an I chart and an Xbar chart with subgroups of 5, to see whether the Xbar chart is a better detector of when a process goes out of control. It seems to be – the I chart flags 5 of the 100 points after the process shift, but the Xbar chart flags about 15 of the 20 sample points after the shift, making a much clearer distinction.

So can you conclude that the Xbar chart actually is statistically better, and should be used whenever both chart types are options?
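Steve’s experiment can be sketched in a few lines of Python (my reconstruction, not his actual Minitab session; the standard deviation of 5 is taken from his later post, and the limits use the standard SPC constants d2 = 1.128 for moving ranges and A2 = 0.577 for subgroups of 5). Limits are estimated from the in-control first half only, and since the shift is upward, only the upper limit is checked:

```python
import random
import statistics

random.seed(7)
sd = 5  # assumed process sd (stated later in the thread)
data = ([random.gauss(50, sd) for _ in range(100)] +   # in control
        [random.gauss(60, sd) for _ in range(100)])    # after the shift

# --- I chart: sigma estimated from the average moving range (d2 = 1.128)
in_control = data[:100]
mr_bar = statistics.mean(abs(b - a) for a, b in zip(in_control, in_control[1:]))
ucl_i = statistics.mean(in_control) + 3 * mr_bar / 1.128
i_flags = sum(x > ucl_i for x in data[100:])           # post-shift points above UCL

# --- Xbar chart, n = 5: limits from the average subgroup range (A2 = 0.577)
subgroups = [data[i:i + 5] for i in range(0, 200, 5)]
in_ctrl_sub = subgroups[:20]
xbar_bar = statistics.mean(statistics.mean(s) for s in in_ctrl_sub)
r_bar = statistics.mean(max(s) - min(s) for s in in_ctrl_sub)
ucl_x = xbar_bar + 0.577 * r_bar
x_flags = sum(statistics.mean(s) > ucl_x for s in subgroups[20:])

print(f"I chart:    {i_flags} of 100 post-shift points above UCL")
print(f"Xbar chart: {x_flags} of 20 post-shift subgroups above UCL")
```

With a 2-sigma shift like this one, the Xbar chart flags a much larger fraction of its post-shift points than the I chart does, which matches what Steve observed.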

March 7, 2012 at 12:49 pm #192468

MBBinWI (Participant)

@rbutler – Robert: If that’s all you have to add, then perhaps I may consider myself to have “touched the pebble.” (Obscure Kung Fu TV reference.)

Steve: To draw any conclusion based on your scenario, I’d first need to know what level of variation you had. Clearly, the larger the variation, the more difficult it is to see through that variation to detect a change.

When you indicate that there were flags on the two charts, the question is what was flagged. If you’re using Minitab with all tests enabled, different types of out-of-control behavior may be being identified.

If you can, post the data set (or at least a graphic image) so that we know we’re talking about the same thing.

March 7, 2012 at 2:15 pm #192473

Steve, I think you have two considerations.

1. Do you have rational subgroups or not? If not, then I would recommend not using subgroups and using an Individuals Chart assuming there is nothing unusual about your data (like extreme skewness).

2. If you do have subgroups, then you need to understand the variation each chart will use to establish its limits. For the Xbar chart, you are looking at behavior relative to the within-subgroup variation, and points will be flagged when they are extreme relative to this. But for a process where you know that parts are homogeneous within subgroups but vary greatly between subgroups, this does not really tell you anything about process consistency. In that case you may want to do an Individuals Chart on the means of each subgroup.

Basically you need to consider what kind of shift you are looking to capture and consider the variation used for each chart to decide your best path. It is difficult to comment on the results of your test without knowing the variation in the data and how that compares to the shift, but in SPC terms you would want to capture that shift long before you’ve allowed an additional 100 points to be taken and “drowned out” the shift.

March 9, 2012 at 11:03 am #192518

Steve (Guest)

I just have the plot I made a few months ago. I’d say I made the process go from a normal distribution with mean = 50, sd = 5, to mean = 60, sd = 5. Minitab only flagged test “1” (a point outside the UCL) in both chart types, and like I said, it’s a much more definitive signal on the Xbar chart than on the I chart.

It seems to me that if you are indifferent between the two chart types, you should just go with the Xbar. Is there really a downside to that choice?

March 9, 2012 at 12:03 pm #192519

MBBinWI (Participant)

Steve: If you’re taking a measurement from every item (or in an evenly distributed manner), use an I chart. If you’re taking a subgroup of measurements (the first 5 every hour, one from each filler head, etc.), use an Xbar chart. That subgroup needs to be “rational,” meaning the items reasonably relate to one another, but in a way different from the overall population.

March 11, 2012 at 5:47 pm #192546

Joan Ambrose (Participant)

@Steve The Xbar-R and Xbar-s charts are more sensitive to process changes than the Individuals-Moving Range (XmR) chart, so they would be your charts of choice if your process and sampling meet the requirements for their use (mentioned above by several other responders), which it sounds like they do.

March 12, 2012 at 11:42 am #192553

Rational subgrouping works when you have a rational reason to subgroup. The idea is to isolate the common-cause variation within the subgroups, thus highlighting the variation between them. But if there is no logical reason to group the points, don’t; transactional processes often fall into this category. You also have the assumption of normality to consider with individuals charts, which is less of a concern when subgrouping (given adequate subgroup sizes).

