To plot individual data or to group the data and plot the mean on a control chart, that is the question. Several authors (not including Shakespeare) have weighed in on this issue and I want to present their arguments and then my own.

First, let us assume that the process conditions are such that a subgroup size of *n = 1* is not logically required. Under these conditions many SPC texts recommend that the data be subgrouped so that the central limit theorem applies to the subgroup means, and hence the means will be approximately Normally distributed. These authors argue that the individuals control chart is very sensitive to non-normality, so the Type I error may differ significantly from what is expected. They conclude that the individuals control chart should be avoided unless the data are approximately Normal or can be transformed to an approximately Normal distribution [Montgomery, 2001].
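The central-limit-theorem effect is easy to demonstrate. Below is a minimal sketch (not from any of the cited texts) that draws heavily skewed exponential "individuals" and compares their sample skewness with that of subgroup means of size *n = 5*; the means are visibly closer to symmetric:

```python
import random
import statistics

def skewness(xs):
    """Sample skewness: the third standardized moment."""
    m = statistics.mean(xs)
    s = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

random.seed(42)
# Right-skewed individuals (exponential; theoretical skewness = 2).
individuals = [random.expovariate(1.0) for _ in range(5000)]

# Group consecutive observations into subgroups of n = 5, take each mean.
n = 5
means = [statistics.mean(individuals[i:i + n])
         for i in range(0, len(individuals), n)]

print(f"skewness of individuals:    {skewness(individuals):.2f}")
print(f"skewness of subgroup means: {skewness(means):.2f}")
```

The subgroup means are markedly less skewed than the raw observations, which is exactly why the subgrouping recommendation makes the Normal-theory control limits behave better.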

On the other hand, Dr. Wheeler says that control charts work irrespective of the distribution, so there is no reason not to use them for individuals data. I am going to take the liberty of suggesting that when Dr. Wheeler says they work, he means that for all practical purposes differences in the false alarm rate (i.e., Type I error) do not significantly compromise the adequacy of the control procedure [Wheeler, 1995]. The logic supporting this argument goes back to Shewhart, who based his conclusions on the Camp-Meidell theorem: if the distribution of *X* is unimodal (i.e., monotonically decreasing on each side of the mode) and the mode equals the mean, then the probability that *X* deviates from its mean by more than *k* standard deviations is at most 1/(2.25*k*²).
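To see what this bound buys you at the usual 3-sigma limits, here is a small sketch (my own illustration, not from Shewhart or Wheeler) that compares the Camp-Meidell bound with the exact two-sided Normal tail probability:

```python
import math

def camp_meidell_bound(k):
    """Camp-Meidell: P(|X - mean| > k*sigma) <= 1/(2.25 k^2)
    for unimodal distributions whose mode equals the mean."""
    return 1.0 / (2.25 * k ** 2)

def normal_two_sided_tail(k):
    """Exact probability a Normal variate falls beyond k sigma
    on either side of its mean."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(k / math.sqrt(2.0))))

for k in (2.0, 3.0):
    print(f"k = {k}: Camp-Meidell bound = {camp_meidell_bound(k):.4f}, "
          f"Normal tail = {normal_two_sided_tail(k):.5f}")
```

At *k* = 3 the bound is about 0.049, versus 0.0027 for the Normal: the guaranteed false alarm rate is looser than the Normal-theory figure, but still small enough that 3-sigma limits remain a workable economic rule for a wide class of unimodal distributions.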

Now here is my perspective on the issue. In contrast to Dr. Montgomery’s view, I feel that the individuals control chart is to be preferred over the mean chart, because the mean chart obscures the practitioner’s view of what the process actually looks like. That is, using a descriptive statistic (the mean) to characterize the behavior of the process rather than the actual data results in a loss of information, and game theory teaches us that the best decisions usually result from having the most information. I believe it is critically important for the analyst to have an accurate picture of the process, and the only way to get one is to look at the actual distribution of the individual observations.

The practitioner should be concerned about the false alarm rate if the cost of failure is high relative to the cost of inspection, or if too many false signals seem to be undermining the operators’ confidence in the program. If you are concerned about the false alarm rate, you can always model the observed data distribution and use probability control limits to get more accurate results. Remember that a picture is worth a thousand words, and an accurate picture is worth ten thousand! Do not throw away information!
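As one concrete (and hypothetical) illustration of probability control limits, the sketch below fits a lognormal model to right-skewed individuals data by matching the mean and standard deviation of log(*x*), then places the limits at the quantiles that a Normal 3-sigma rule implies for the fitted model. The lognormal choice is just one plausible model; any fitted distribution could supply the quantiles in the same way:

```python
import math
import random
import statistics

def lognormal_probability_limits(data, z=3.0):
    """Fit a lognormal by matching the mean and standard deviation of
    log(x), then return the control limits at the fitted quantiles
    corresponding to +/- z on the Normal scale.
    Assumes all observations are strictly positive."""
    logs = [math.log(x) for x in data]
    mu = statistics.mean(logs)
    sigma = statistics.stdev(logs)
    return math.exp(mu - z * sigma), math.exp(mu + z * sigma)

random.seed(7)
# Hypothetical right-skewed process data.
data = [random.lognormvariate(1.0, 0.4) for _ in range(200)]

lcl, ucl = lognormal_probability_limits(data)
print(f"LCL = {lcl:.2f}, UCL = {ucl:.2f}")
```

Unlike symmetric 3-sigma limits computed on the raw scale, these limits are asymmetric about the center of the data, which is what keeps the false alarm rate near the nominal value when the process distribution is skewed.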

**References**

Montgomery, D. C. (2001). *Introduction to Statistical Quality Control*, 4th ed. John Wiley and Sons, New York, NY.

Wheeler, D. J. (1995). *Advanced Topics in Statistical Process Control*. Statistical Process Controls, Inc., Knoxville, TN.