Are You Sure Your Data Is Normal?

Most processes, particularly those involving life data and reliability, are not normally distributed. Most Six Sigma and process capability tools, however, assume normality. Only through verifying data normality and selecting the appropriate data analysis method will the results be accurate. This article discusses the non-normality issue and helps the reader understand some options for analyzing non-normal data.

Introduction

Some years ago, some statisticians held the belief that when a process was not normally distributed there was something “wrong” with it, or even that the process was “out of control.” In their view, the purpose of the control chart was to determine when processes were non-normal so they could be “corrected” and returned to normality. Most statisticians and quality practitioners today recognize that there is nothing inherently normal (pun intended) about the normal distribution; its widespread use in statistics is largely due to its simplicity. It is well defined, so it is convenient to assume normality whenever the errors associated with that assumption would be minor. In fact, many efforts made in the interest of quality improvement lead to non-normal processes, since they try to narrow the distribution up against process stops (physical or procedural limits). Similarly, nature itself can impose stops on a process, such as a service process whose waiting time is physically bounded at the lower end by zero. The design of a waiting process would move the process as close to zero as economically possible, causing the process mode, median and average to move toward zero. Such a process would tend toward non-normality, regardless of whether it is stable.

Many processes do not follow the normal distribution. Some examples of process characteristics that often produce non-normal data include:

  • Cycle time
  • Calls per hour
  • Customer waiting time
  • Straightness
  • Perpendicularity
  • Shrinkage

To help you understand the concept, let us consider a data set of cycle time of a process (Table 1). The lower limit of the process is zero and the upper limit is 30 days. Using the Table 1 data, the process capability can be calculated. The results are displayed in Figure 1.

Table 1: Cycle Time Data
[Table of the 207 individual cycle-time observations, in days]

Figure 1: Process Capability Analysis for Cycle Time

If you observe the fitted normal curves for both the within and overall performance, you will see that they extend below zero, and the analysis reports the expected long-term PPM below zero as 36465.67.

Is this appropriate for a process that is bounded at zero? Using the normal distribution to calculate process capability actually penalizes this process, because it assumes data points fall below the lower specification limit (below zero) when that cannot occur.

The first step in data analysis should be to verify that the process is normal. If the process is determined to be non-normal, various other analysis methods must be employed to handle and understand the non-normal data.

For the above data, calculating the basic descriptive statistics indicates whether the data is normal. Figure 2 below shows that the data is not normal: the normality-test p-value of (essentially) zero and the skewed histogram confirm it. The fact that the process is bounded at zero is also an important point to consider.

Figure 2: Descriptive Statistics
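
Outside of Minitab, the same normality check can be reproduced in a few lines of code. The following is a minimal sketch, assuming the 207 cycle-time values have been saved to a hypothetical file named cycle_time.txt (one value per line):

```python
import numpy as np
from scipy import stats

# Hypothetical file holding the 207 cycle-time observations, one value per line.
cycle_time = np.loadtxt("cycle_time.txt")

# Anderson-Darling test for normality (the type of statistic Minitab reports
# alongside its descriptive statistics); compare the statistic to the critical values.
ad = stats.anderson(cycle_time, dist="norm")
print("Anderson-Darling statistic:", ad.statistic)
print("Critical values:", ad.critical_values)

# Shapiro-Wilk test gives a direct p-value; a p-value near zero rejects normality.
w, p = stats.shapiro(cycle_time)
print("Shapiro-Wilk p-value:", p)
```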

The most common methods for handling non-normal data are:

  • Subgroup averaging
  • Segmenting data
  • Transforming data
  • Using different distributions
  • Non-parametric statistics

Subgroup Averaging

  • Averaging the subgroups (recommended subgroup size greater than 4) usually produces an approximately normal distribution of the subgroup averages
  • This is often done with control charts
  • Relies on the central limit theorem (see the sketch below)
  • The more skewed the data, the larger the subgroups needed
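
To see why this works, here is a small illustrative sketch. It uses simulated right-skewed data (not the Table 1 values) and shows how the skewness of the subgroup means shrinks as the subgroup size grows:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated, strongly right-skewed "cycle times" -- illustrative only.
raw = rng.exponential(scale=20.0, size=10000)

# The central limit theorem: averages of subgroups look more and more normal
# (skewness approaches zero) as the subgroup size increases.
for n in (1, 5, 25):
    means = raw.reshape(-1, n).mean(axis=1)
    print(f"subgroup size {n:2d}: skewness of subgroup means = {stats.skew(means):.2f}")
```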

Segmenting Data

  • Data sets can often be segmented into smaller groups by stratification of data
  • These groups can then be examined for normality
  • Once segmented, non-normal data sets often become groups of normal data sets

Transforming Data

  • Box-Cox transformations of data
  • Logit transformation for Yes or No data
  • Truncating distributions for hard limits (like the data set presented here)
  • Application of meaningful transformations
  • Transformations do not always work; re-check normality after transforming (see the sketch below)
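
A minimal sketch of the idea, again assuming the data sits in a hypothetical cycle_time.txt file: apply a candidate transformation, then re-test normality. If the p-value is still near zero, the transformation did not work for this data set:

```python
import numpy as np
from scipy import stats

data = np.loadtxt("cycle_time.txt")  # hypothetical file name; values assumed >= 0

candidates = {
    "square root": np.sqrt(data),
    "natural log": np.log(data + 1),  # +1 guards against zero cycle times
}

for name, transformed in candidates.items():
    w, p = stats.shapiro(transformed)
    print(f"{name:12s}: Shapiro-Wilk p-value = {p:.3f}")
```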

Using Different Distributions

  • Weibull distributions (see the fitting sketch after this list)
  • Log normal
  • Exponential
  • Extreme value
  • Logistic
  • Log logistic
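
SciPy does not reproduce Minitab's probability plots directly, but a crude version of the same comparison can be made by fitting several candidate distributions by maximum likelihood and comparing their log-likelihoods. This is only a sketch, under the assumption that the positive-valued data is in a hypothetical cycle_time.txt file:

```python
import numpy as np
from scipy import stats

data = np.loadtxt("cycle_time.txt")  # hypothetical file name; values must be > 0

# Candidate non-normal distributions, all bounded below at zero.
candidates = {
    "Weibull":     stats.weibull_min,
    "lognormal":   stats.lognorm,
    "exponential": stats.expon,
}

for name, dist in candidates.items():
    params = dist.fit(data, floc=0)               # fix the location at the zero bound
    loglik = np.sum(dist.logpdf(data, *params))   # higher is (roughly) a better fit
    print(f"{name:12s} log-likelihood = {loglik:.1f}")
```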

Non-parametric Statistics

  • Used for statistical tests when data is not normal
  • Tests using medians rather than means
  • Most often used when sample sizes of groups being compared are less than 100, but just as valid for larger sample sizes

For larger sample sizes, the central limit theorem often allows you to use regular comparison tests.

When performing statistical tests on data, it is important to realize that many statistical tests assume normality. If you have non-normal data, there are non-parametric equivalents of these tests that should be employed instead. Table 2 below summarizes the statistical tests that assume normal process data along with their non-parametric equivalents (a short code sketch follows the table).

Table 2: Common Statistical Tests for Normal Data and Their Non-Parametric Equivalents
Assumes Normality | No Assumption Required
One-sample Z test | One-sample sign
One-sample t-test | One-sample Wilcoxon
Two-sample t-test | Mann-Whitney
One-way ANOVA (analysis of variance) | Kruskal-Wallis; Mood's median
Randomized block (two-way ANOVA) | Friedman test
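
As a quick illustration of the pairings in Table 2 (with two small made-up samples, not the article's data), the sketch below runs a two-sample t-test next to its non-parametric counterpart, the Mann-Whitney test:

```python
from scipy import stats

# Two small made-up samples of cycle times, in days -- illustrative only.
line_a = [12, 15, 9, 22, 30, 11, 18, 25, 14, 40]
line_b = [20, 35, 27, 16, 45, 31, 24, 52, 29, 38]

# Parametric test: two-sample t-test (assumes each sample is roughly normal).
t_stat, t_p = stats.ttest_ind(line_a, line_b, equal_var=False)

# Non-parametric counterpart: Mann-Whitney U test (compares ranks, not means).
u_stat, u_p = stats.mannwhitneyu(line_a, line_b, alternative="two-sided")

print(f"two-sample t-test p-value: {t_p:.3f}")
print(f"Mann-Whitney p-value:      {u_p:.3f}")
```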

If we look back at our Table 1 data set, where zero is a hard lower limit, we can illustrate what methods might be employed when dealing with non-normal data.

Setting a Hard Limit

If we set a hard limit at zero and re-run the process capability, the results are presented in Figure 3.

Figure 3: Process Capability Analysis for Cycle Time

Figure 3 now indicates that the long-term PPM is 249535.66, as opposed to 286011.34 in Figure 1. This illustrates that the quantification becomes more accurate once we first establish whether the distribution is normal or non-normal.
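
The effect can be seen in a few lines of code. The sketch below fits a normal distribution to the data (assumed to be in a hypothetical cycle_time.txt file) and shows that part of the predicted defect rate comes from the physically impossible region below zero; dropping that impossible tail is what moves the estimate from the Figure 1 value toward the Figure 3 value. This is the general idea, not necessarily Minitab's exact boundary calculation:

```python
import numpy as np
from scipy import stats

data = np.loadtxt("cycle_time.txt")   # hypothetical file name
usl = 30.0                            # upper specification limit (days)

# Normal fit using the sample mean and standard deviation (Minitab's "within"
# and "overall" sigma estimates will differ slightly).
mu, sigma = data.mean(), data.std(ddof=1)

ppm_below_zero = stats.norm.cdf(0.0, loc=mu, scale=sigma) * 1e6   # impossible tail
ppm_above_usl  = stats.norm.sf(usl, loc=mu, scale=sigma) * 1e6

print(f"PPM below zero (impossible for this process): {ppm_below_zero:,.0f}")
print(f"PPM above the {usl:g}-day limit:              {ppm_above_usl:,.0f}")
```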

Weibull Distribution

If we take this analysis a step further, we can determine which non-normal distribution is a best fit. Figure 4 displays various distributions overlaid on the data. We can see that the Weibull distribution is the best fit for the data.

Figure 4: Four-way Probability Plot for Cycle Time

Knowing that the Weibull distribution is a good fit for the data, we can then recalculate the process capability. Figure 5 shows that a Weibull model with the lower bound at zero produces a PPM of 233244.81. This estimate is far more accurate than the earlier estimate from the bounded normal distribution.
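
A sketch of the same calculation (not Minitab's exact estimation routine), again assuming the data is in a hypothetical cycle_time.txt file: fit a Weibull distribution with its location fixed at the zero bound, then take the survival function at the 30-day limit to get the predicted PPM:

```python
import numpy as np
from scipy import stats

data = np.loadtxt("cycle_time.txt")   # hypothetical file name; values must be > 0
usl = 30.0                            # upper specification limit (days)

# Maximum-likelihood Weibull fit with the location parameter fixed at zero.
shape, loc, scale = stats.weibull_min.fit(data, floc=0)

# Predicted long-term fraction beyond the 30-day limit, expressed in PPM.
ppm = stats.weibull_min.sf(usl, shape, loc=loc, scale=scale) * 1e6

print(f"Weibull shape = {shape:.2f}, scale = {scale:.2f}")
print(f"Predicted PPM above {usl:g} days = {ppm:,.0f}")
```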

Figure 5: Process Capability Analysis for Cycle Time

Box-Cox Transformation

The other method discussed earlier is transformation of the data. The Box-Cox transformation can be used to convert the data to a (near-)normal distribution, after which the process capability can be determined in the usual way. Figure 6 indicates that a lambda of 0.5 is most appropriate; this lambda is equivalent to taking the square root of the data.
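
In code, the lambda search and the transformation look roughly like the sketch below (data again assumed to be strictly positive values in a hypothetical cycle_time.txt file):

```python
import numpy as np
from scipy import stats

data = np.loadtxt("cycle_time.txt")   # hypothetical file name; values must be > 0

# Let Box-Cox search for the maximum-likelihood lambda (Figure 6 suggests ~0.5).
transformed, lam = stats.boxcox(data)
print(f"estimated lambda = {lam:.2f}")

# A lambda of 0.5 is simply the square-root transformation.
sqrt_data = np.sqrt(data)
w, p = stats.shapiro(sqrt_data)
print(f"Shapiro-Wilk p-value after square-root transform = {p:.3f}")
```

Note that the specification limits must be transformed with the same lambda before the capability is calculated on the transformed scale.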

Figure 6: Box-Cox Plot for Cycle Time

After using this data transformation, the process capability is presented in Figure 7. This transformation of data estimates the PPM to be 227113.29, which is very close to the estimate provided by the Weibull modeling.

Figure 7: Process Capability Analysis for Cycle Time

We have now seen three different methods for estimating the process capability when the data comes from a non-normal source: setting a hard limit on a normal distribution, using a Weibull distribution and using the Box-Cox transformation.

Subgroup Averaging

Now let us assume that the data is collected in time sequence with a subgroup size of one. An X-bar R chart with subgroups therefore cannot be used. Had the data been collected in subgroups, the central limit theorem would come into play and the subgroup averages would have tended toward normality. If we use the individuals and moving range (I-MR) chart – the appropriate chart in this situation – Table 3 displays the results.
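
For reference, the individuals-chart limits behind such an analysis can be reproduced with the standard moving-range constants (2.66 = 3/d2 with d2 = 1.128, and D4 = 3.267). This is a minimal sketch, with the time-ordered data assumed to be in a hypothetical cycle_time.txt file:

```python
import numpy as np

data = np.loadtxt("cycle_time.txt")   # hypothetical file name, values in time order

# Moving ranges between consecutive individual observations.
mr = np.abs(np.diff(data))
mr_bar = mr.mean()

# Individuals (I) chart limits: X-bar +/- 2.66 * average moving range.
x_bar = data.mean()
i_ucl, i_lcl = x_bar + 2.66 * mr_bar, x_bar - 2.66 * mr_bar

# Moving range (MR) chart upper limit: D4 * average moving range.
mr_ucl = 3.267 * mr_bar

# Points beyond the I-chart 3-sigma limits (Minitab's Test 1), 1-based numbering.
test1 = np.where((data > i_ucl) | (data < i_lcl))[0] + 1

print(f"I chart:  LCL = {i_lcl:.2f}, UCL = {i_ucl:.2f}")
print(f"MR chart: UCL = {mr_ucl:.2f}")
print("Test 1 failures at points:", test1.tolist())
```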

Table 3: I/MR for Cycle Time
Test Results for I Chart
TEST 1. One point more than 3.00 sigmas from center line.
Test Failed at points: 12 36 55 61 103 132
TEST 2. 9 points in a row on same side of center line.
Test Failed at points: 28
TEST 3. 6 points in a row all increasing or all decreasing.
Test Failed at points: 49 50
TEST 5. 2 out of 3 points more than 2 sigmas from center line (on one side of CL).
Test Failed at points: 23 24 25 61 80 104 203
TEST 6. 4 out of 5 points more than 1 sigma from center line (on one side of CL).
Test Failed at points: 15 22 23 24 25 26 27 45 82 159 160
TEST 8. 8 points in a row more than 1 sigma from center line (above and below CL).
Test Failed at points: 27
Test Results for MR Chart
TEST 1. One point more than 3.00 sigmas from center line.
Test Failed at points: 12 55 62 69 70 78 127 132 133
TEST 2. 9 points in a row on same side of center line.
Test Failed at points: 52 53 54 200 201 202 203

Figure 8 indicates that the process is plagued by special causes. If we focus only on those points that are beyond the three sigma limits on the I chart, we find the following data points as special causes.

Figure 8: I and MR Charts for Cycle Time

TEST 1. One point more than 3.00 sigmas from centerline.
Test Failed at points: 12 36 55 61 103 132.

The primary assumption in the Figure 8 control chart is that the data is normal. If we plot the I-MR chart after applying the Box-Cox transformation described above, the chart looks much different (see Figure 9).

Figure 9: I and MR Charts for Cycle Time

Table 4: I/MR for Cycle Time
Test Results for I Chart
TEST 1. One point more than 3.00 sigmas from center line.
Test Failed at points: 80
TEST 2. 9 points in a row on same side of center line.
Test Failed at points: 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 110 111
TEST 3. 6 points in a row all increasing or all decreasing.
Test Failed at points: 49 50
TEST 5. 2 out of 3 points more than 2 sigmas from center line (on one side of CL).
Test Failed at points: 61 80 92 104 165 203
TEST 6. 4 out of 5 points more than 1 sigma from center line (on one side of CL).
Test Failed at points: 15 22 23 24 25 26 27 45 82 113 159 160
TEST 8. 8 points in a row more than 1 sigma from center line (above and below CL).
Test Failed at points: 27
Test Results for MR Chart
TEST 1. One point more than 3.00 sigmas from center line.
Test Failed at points: 55 69 78 80 132 133 140
TEST 2. 9 points in a row on same side of center line.
Test Failed at points: 29 30 31 32 33 52 53 54 200 201

If we study the number of points outside the three-sigma limits on the I chart, we note that the test now fails at only one point – number 80 – and not at points 12 36 55 61 103 132 as indicated by the Figure 8 chart earlier. In fact, if you study the results of the other failed tests, you will realize the serious consequence of assuming normality: doing so might cause you to react to common causes as though they were special causes, which leads to tampering with the process. It is important to note that for an X-bar R chart the normality problem is not serious, thanks to the central limit theorem, but for an individuals and moving range chart the consequences can be serious.

Now let’s consider a case where we would like to determine whether the mean cycle time of this process is 20 days. If we assume the data is normal and run a one-sample t-test of this hypothesis, Table 5 displays the results.

Table 5: One-Sample T – Cycle Time
Test of mu = 20 vs mu not = 20
Variable | N | Mean | StDev | SE Mean | 95.0% CI | T | P
turn around | 207 | 21.787 | 12.135 | 0.843 | (20.125, 23.450) | 2.12 | 0.035

Based on these statistics, one would conclude at an alpha risk of 5 percent that the mean of the data set is different from 20. Had we first verified that the data is not normal, we would instead have run the one-sample Wilcoxon test, which is based on the median rather than the mean, and obtained the results found in Table 6.

Table 6: Wilcoxon Signed Rank Test: Cycle Time
Test of median = 20.00 versus median not = 20.00
Variable | N | N for Test | Wilcoxon Statistic | P | Estimated Median
turn aro | 207 | 202 | 11163.5 | 0.273 | 21.00

The Wilcoxon test indicates that the null hypothesis (the median is equal to 20) cannot be rejected; there is no statistical evidence that the median is different from 20.
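
The two tests are easy to run side by side. A minimal sketch, assuming the cycle-time values are in a hypothetical cycle_time.txt file (exact p-values will differ slightly from Minitab's depending on how ties and zero differences are handled):

```python
import numpy as np
from scipy import stats

data = np.loadtxt("cycle_time.txt")   # hypothetical file name
target = 20.0                         # hypothesized cycle time, in days

# Parametric: one-sample t-test of the mean against 20 (assumes normality).
t_stat, t_p = stats.ttest_1samp(data, popmean=target)

# Non-parametric counterpart: Wilcoxon signed-rank test of the median against 20.
# Observations exactly equal to the target are dropped, which is why Minitab
# reports an "N for Test" smaller than N.
w_stat, w_p = stats.wilcoxon(data - target)

print(f"one-sample t-test p-value:    {t_p:.3f}")
print(f"Wilcoxon signed-rank p-value: {w_p:.3f}")
```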

The above example illustrates that assuming the data is normal and blindly applying statistical tests is dangerous. A better strategy is to verify the normality assumption first and then, based on the results, use an appropriate data analysis method.

Comments (5)

  1. PAULO CESAR FERREIRA FRANCO

    Is it possible to contact the author directly?

  2. Kicab Castaneda-Mendez

    While the article in general does a good job of describing how one might analyze nonnormal data, there is one issue that needs further addressing. Your Table 2’s last column states “No Assumptions required” for nonparametric tests. This is false. ALL statistical tests require assumptions, including random sample and assumption of a probability distribution. That probability distribution may not have any parameters (hence, nonparametric) but it is still a distribution and it is still assumed. For example, a typical assumption for nonparametric tests is that the data are continuous and symmetrical (and perhaps even symmetrical about zero). One way to check this is simply do a search for “assumption of (blank) test”, e.g., Mann-Whitney.

  3. Indresh

    The article is rich in content and helps identify how to proceed statistically. However, we should also focus on how to proceed purely from a business perspective.
    Typically I have seen Six Sigma BBs get stuck on the data and not able to understand the business.

    The data shown in the table clearly indicates
    – major process wastage (special cause)

    which can be removed using Lean thinking. Secondly, normality is overrated, and in the service industry most data is non-normal. Looking at the spread, one can easily identify whether it is totally skewed or tending towards normal. Decisions on how to segregate the data, identify special cases, and analyse outlying data separately are the first steps towards analysis and decision making.

  4. Chris Seider

    Please don’t emphasize transformations.

    And don’t “imply” that one can/should run process capability on subgroup means. Means aren’t to be used in any process capability analysis.

    Just my two cents.

  5. manuela aumick

    For me, this article is helpful, but I don’t fully understand the graphics and want to see more information about them.

