Manufacturing processes vary over time: occasional large, special-cause events are sprinkled against a background of lesser, common-cause variation that is natural to the process. Most processes, as a result, use a control strategy, at the heart of which is a measurement system designed to manage the product properties that are critical to customers.

To maintain accuracy, this measurement system should be regularly monitored and validated. One way to do that is by using the *5/3 strategy*, a means of estimating how much a measurement system’s variation contributes to overall process variation. This strategy can be used as an alternative to gage R&R (GRR) studies.

### Case Study: Nylon Polymer Manufacturing Process

To see how the 5/3 strategy works, consider a nylon polymer manufacturing process, where diamine and dicarboxylic acid monomers are condensed into polymer chains before being melted and extruded as nylon plastic chips or fiber. Process conditions must be controlled to manufacture these polymer chains at targeted lengths, so workers must measure polymer viscosity and amine end group (AEG) levels periodically. Product offerings are differentiated, and their specifications set, by their respective AEG levels, their relative viscosity, their moisture levels and the levels of an array of different additives that influence product characteristics (e.g., luster, color, light fastness and strength) and other targeted properties.

The nylon manufacturing process, like most others, varies over time. Short term, it can vary within a multi-day manufacturing campaign and introduce product variation as process conditions shift or drift. Over the longer term, product variation might reflect targeting issues encountered from one production campaign to the next.

Process control strategies address not only these targeting issues, but also short-term (σ*_{ST}*) and long-term (σ*_{LT}*) variation in AEG, as well as in each of the polymer's other important characteristics. Product quality improves both with better targeting and by reducing σ*_{LT}* and σ*_{ST}*, usually in that order. Moreover, as a process continues to improve, a larger percentage of σ*_{ST}*, the lot-to-lot variation in a batch process or shift-to-shift variation in a continuous production line, is due to variation in the sampling and measurement system itself. Thus, measurement system validation strategies are important for each process characterization metric. For purposes here, we'll just consider AEG levels.

### Viewing AEG Measures

A multi-month plot of the shift-to-shift AEG measures (Figure 1) exhibits assorted types of variation: longer-term process drift (solid σ*_{LT}* line) and perhaps even a reduction in shorter-term, point-to-point variation (σ*_{ST}*) from the first month to the second. During this two-month period, six measures fell outside the upper and lower control limits (μ +/- 3σ*_{ST}*), while all data fell within the upper (USL) and lower (LSL) specification limits.

These six points aren't part of the normal “voice” of this process. According to normal probability statistics, the process engineer could expect just one point in 370 (1/0.0027) to fall outside these limits; six in 141 is too many to occur naturally. Therefore, these points represent special-cause variation, and, as such, they offer practitioners a chance to investigate further, learn something about the process and perhaps even improve its variability.
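That judgment is easy to check numerically. The short Python sketch below (an illustration, not part of the original analysis) uses the binomial distribution to compute the chance that six or more of 141 independent, in-control points would fall beyond ±3σ limits when each point has a 0.0027 probability of doing so:

```python
from math import comb

n, p = 141, 0.0027   # points plotted; chance a stable, normal process exceeds +/-3 sigma
expected_out = n * p # out-of-limit points expected from common causes alone
# probability that 6 or more of the 141 points fall outside the limits by chance
p_ge6 = sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(6, n + 1))
print(f"expected out-of-limit points: {expected_out:.2f}")
print(f"P(6 or more outside limits): {p_ge6:.1e}")
```

With well under one out-of-limit point expected and a tail probability on the order of one in a million, six such points almost certainly reflect special causes.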

In Figure 1, note the grouping of five data points to the right: the yellow dots track estimates of the percentage of total process variability contributed by the measurement system (read from the right-side axis). One of these estimates, at 42 percent, is very different from the other four, which are all less than 10 percent. It's likely in error.

Process variation is routinely composed of shorter-term, longer-term and measurement-system components, and each type of variation has its own causes and its own strategies for improvement. The process engineer must recognize their differences and relative impact before formulating improvement strategies.

### Estimating Measurement System Variation

As cut and dried as it might sound to analyze a measurement system’s contribution to variation, Don Wheeler, in his book, *EMP III Using Imperfect Data* (SPC Press, 2006), paints a very different picture:

“The problem of measurement error is so widely recognized that many different solutions, in different fields of endeavor, have been proposed. These various solutions cover the spectrum from naïve to theoretical, from simple to complex, and from right to wrong.”

There are at least four major problems with the typical GRR characterization of a measurement system.

1. Donald Ermer addressed one problem in “Improved Gauge R&R Measurement Studies,” an article in the March 2006 issue of *Quality Progress*. Ermer wrote,

“The most significant error is the final variation ratios – percent equipment variation, percent appraiser variation and percent part variation. These are calculated using standard deviations instead of variances. The results obtained exaggerate the proportional effects of the equipment, appraiser and part variation. Therefore, this incorrect type of study cannot provide an index of whether the components of the measurement process are capable for the part or product under study.”

2. The second problem with GRR characterization is less menacing. The measurement system’s variance contribution is assessed against the natural part-to-part variation of a process. The smaller that percentage, the better the measurement system can be judged. If the parts used in the assessment are not distributed evenly between the specification limits, the GRR assessment of the measurement system will have errors. Think of it this way: If there is little variation among tested parts because they are all near target (or more likely, “most near target with a few outliers”), the measurement system will be judged as worse than it actually is because the variation it is being assessed against has been narrowed artificially.

3. The third problem is that the typical GRR assessment is a once-only, static measure: it does not address the stability of the measurement system over time. Ideally, engineers would prefer that the measurement system be validated continuously, but GRR testing substantially increases a lab's workload, so labs are often forced to run it infrequently.

4. A fourth problem is that GRR testing is recognized by labs as a special test and treated as such. Technicians have a heightened awareness that they might not have when testing more routine samples.

What's needed is a simpler, less onerous strategy for assessing the quality of a measurement system continually. The intraclass correlation coefficient, first developed as a measure of association by R. A. Fisher in 1921, provides the basis for such a strategy.

As Fisher explained, the total variance of any process can be described as the sum of its component variances, provided the sources of those variances are independent, as the process and its measurement system are here:

*σ^{2}_{Total} = σ^{2}_{Process} + σ^{2}_{Meas Sys}*

The first variable, σ*^{2}_{Total}*, is long-term sigma squared (σ*^{2}_{LT}*), where sigma is calculated from routine product measures tested over the life of a production run (e.g., the one obtained from the 141 data points in Figure 1). A GRR (or the 5/3 strategy to be discussed) can be used to estimate σ*^{2}_{Meas Sys}*. The middle term, σ*^{2}_{Process}*, which is impossible to measure directly because it will always be confounded with the measurement system, is calculated as the difference of the other two.

The *intraclass correlation coefficient*, the ratio σ*^{2}_{Process} / σ^{2}_{Total}*, gives the percentage of variation explained by the process, while the ratio σ*^{2}_{Meas Sys} / σ^{2}_{Total}* provides an estimate of the measurement system's percentage of total process variation. But how does one estimate σ*^{2}_{Meas Sys}* directly? That turns out to be pretty simple.
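The partition itself is simple arithmetic. A small Python sketch, using the rounded case-study values that appear later in the article (because the inputs are rounded, the shares print as 88.0 and 12.0 percent rather than the article's 88.2 and 11.8 percent):

```python
# components of variance for the AEG case study (rounded values from the text)
sigma2_total = 0.133   # sigma squared from the 141 routine long-term measures
sigma2_meas = 0.016    # pooled estimate from the split-sample (5/3) study
# the process term is confounded with measurement, so it is found by difference
sigma2_process = sigma2_total - sigma2_meas

icc = sigma2_process / sigma2_total        # intraclass correlation coefficient
meas_share = sigma2_meas / sigma2_total    # measurement system's share of total
print(f"process share: {icc:.1%}, measurement share: {meas_share:.1%}")
```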

### 5/3 Strategy: AEG Measurement System Validation

The 5/3 strategy is used to estimate measurement system variability (σ* ^{2}_{Meas Sys}*) on a continuing basis using a method far less imposing to an analytical laboratory than that commonly encountered with full-blown GRR studies. Over a two-week period, five routine samples are each split into thirds, hence the name “5/3.” Blind to the lab, one from each grouping is submitted as the routine sample, while its two mates are submitted as additional samples over the next two shifts (if age is an issue, all three might be submitted on the same shift). The idea is, over time, to subject these groupings of three identical samples to the variation routinely introduced by the measurement system – be it people, equipment, reagents or something else – to include any variation in the split samples themselves (i.e., any uniformity problems in the samples).

Each triad provides an estimate of σ* ^{2}_{Meas Sys}*, albeit one that is not well-defined and that has only two degrees of freedom. Over the two weeks, however, these five triads provide a pooled estimate with 10 degrees of freedom. Ideally, another sample would then be split on a regular basis – say every other week to begin, then monthly. Over time, the pooled estimate would be even more accurately defined and continuously monitored.

Table 1 summarizes AEG data collected from the 5/3 testing, where five routine samples were split into thirds and then tested blindly on three different shifts (shown in Figure 1 as the five connected yellow dots to the right). The idea, again, is to get routine measurement system data from routine samples that reflect true lab variation. Assuming the subsample compositions are identical (well blended), their pooled variance provides an estimate of the measurement system’s variance.

**Table 1:** AEG Data – 5/3 Study

| Dates | Times | Techs | AEG | σ^{2} | Percent of Total (σ^{2}_{Total}) |
|---|---|---|---|---|---|
| 8/6–8/8/2008 | 1700, 1300, 1400 | DH, CW, NB | 45.32, 45.25, 45.15 | 0.007 | 5.5% |
| 8/11–8/13/2008 | 1700, 1500, 1400 | DB, HB, BC | 45.43, 45.31, 44.98 | 0.054 | 40.8% |
| 8/13–8/15/2008 | 1430, 1800, 1900 | BC, SH, RY | 45.23, 45.33, 45.30 | 0.003 | 2.0% |
| 8/15–8/17/2008 | 2200, 1100, 0200 | DH, SBA, DB | 44.78, 44.80, 44.70 | 0.003 | 2.1% |
| 8/17–8/19/2008 | 1230, 1515, 0155 | TSH, GOH, BC | 45.09, 44.90, 44.91 | 0.011 | 8.6% |
| **σ^{2}_{meas system} (pooled)** | | | | 0.016 | Pooled estimate: 11.8% |
| **σ_{meas system} (pooled)** | | | | 0.125 | |
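Because the triads are all the same size, the pooled estimate at the bottom of Table 1 is simply the average of the five per-triad sample variances, each carrying two degrees of freedom. A short Python sketch reproducing the table's arithmetic:

```python
from statistics import mean, variance

# AEG triads from Table 1: each routine sample was split into thirds
triads = [
    [45.32, 45.25, 45.15],
    [45.43, 45.31, 44.98],
    [45.23, 45.33, 45.30],
    [44.78, 44.80, 44.70],
    [45.09, 44.90, 44.91],
]
# each triad yields a sample variance with 2 degrees of freedom; with equal
# group sizes, the pooled (10 df) estimate is the mean of the five variances
per_triad = [variance(t) for t in triads]
pooled = mean(per_triad)
print([round(v, 3) for v in per_triad])  # [0.007, 0.054, 0.003, 0.003, 0.011]
print(round(pooled, 3))                  # 0.016
```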

Figure 2 depicts the individual AEG measures made on the five triads, where each measure is presented as a deviation from its triad mean. The sixth point is problematic: it inflates the second group's variance estimate and causes that grouping's severe estimate of percent total variation. Technician BC made that measurement but also made two others that agreed much better with their triad mates. The two sets of limits shown are based on pooled estimates of variation made with and without the second grouping. Clearly, the sixth measure is likely an outlier. Tracking these measurement characteristics on the initial 5/3 set, as well as subsequently, helps identify problems within the measurement system itself, be they operator- or equipment-related.
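The screening logic behind Figure 2 can be sketched in Python (an illustration, not the article's original computation): express each measure as a deviation from its triad mean, then compare those deviations against ±3σ limits built from the pooled within-triad variance, with and without the suspect second triad:

```python
from math import sqrt
from statistics import mean, variance

triads = [
    [45.32, 45.25, 45.15],
    [45.43, 45.31, 44.98],
    [45.23, 45.33, 45.30],
    [44.78, 44.80, 44.70],
    [45.09, 44.90, 44.91],
]
# deviation of each measure from its own triad mean (the quantity in Figure 2)
devs = [[x - mean(t) for x in t] for t in triads]

def three_sigma(groups):
    """+/-3 sigma limit from the pooled within-triad variance of the given groups."""
    return 3 * sqrt(mean(variance(g) for g in groups))

lim_all = three_sigma(triads)                        # limits using all five triads
lim_screened = three_sigma(triads[:1] + triads[2:])  # suspect second triad held out
flagged = [(i, round(d, 2)) for i, t in enumerate(devs) for d in t
           if abs(d) > lim_screened]
print(round(lim_all, 3), round(lim_screened, 3))
print(flagged)  # only the low measure in the second triad (index 1) is flagged
```

Only the 44.98 measure falls outside the tighter, screened limits, matching the article's conclusion that the sixth point is a likely outlier.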

The 5/3 strategy follows this initial two-week sampling with another single sample split every two weeks for another five groupings, and then transitions to collecting samples on a monthly basis going forward. Continuous validation of the AEG measurement system is thus provided at the cost of an extra two samples each month. If the samples show that the measurement system accounted for more than 30 percent of the total variation, an experiment designed around pertinent lab factors would be undertaken to improve the system.

For the AEG measure in this case, σ^{2}_{Meas Sys} was estimated to be 0.016, which, in turn, accounts for 11.8 percent of the σ^{2}_{LT} evident in the Figure 1 data. Table 2 partitions the variance into its components.

**Table 2:** Estimate of Process Variance Components

| σ^{2}_{Total} | = | σ^{2}_{Process} | + | σ^{2}_{Meas Sys} |
|---|---|---|---|---|
| 0.133 | = | 0.117 | + | 0.016 |
| 100% | = | 88.2% | + | 11.8% |

While the 5/3 strategy is simpler to execute than a full-blown GRR, its real advantages are that it uses routine samples, requires no special range of parts and directly measures the variation routinely encountered in the sampling and measurement system. So how does the 5/3 strategy compare, in terms of its confidence interval widths, to a full-blown GRR? Simulation studies answer that question.

### Simulating Measurement System Variation

Monte Carlo simulation is used here to compare the 5/3 strategy with GRR. The advantage offered by Monte Carlo simulation is that the entire model, with all its sources of variation, can be defined. Therefore, engineers can easily evaluate the quality of various estimation strategies both by how close they come to the true values and the variance surrounding their estimates.

Simulation studies can answer questions such as: Do five samples split into three subsamples each provide an accurate enough estimate of true measurement system variation compared to a full-blown GRR? Are 10 such samplings needed? Are two replicates of five samples each accurate enough if an engineer wants to estimate σ^{2}_{Meas Sys} at minimum analytical cost?

Twelve simulations were run after creating a distribution with a mean of 45 and a sigma of 0.35, then adding a measurement component equal to 11.8 percent of total variation. Each simulation was parameterized by the number of iterations (1,000), the number of master samples to be subsampled (5, 10 or 15) and the number of subsamples (2 to 5). Each provided 1,000 estimates of the measurement variation percentage, which could then be sorted and assigned empirical confidence limits.

Figure 3 tracks the 5 percent and 95 percent limits for each population's estimate of the measurement system's percentage variation. As the number of subsamples and the number of master samples increased, these limits narrowed: a 19.5-point range, from 4.5 to 24 percent, for the 5/3 simulation (10 degrees of freedom), versus a 10-point range, from 7.5 to 17.5 percent, for the 20/5 simulation (80 degrees of freedom). As sample splits accumulate, the measurement system's estimate becomes better defined, but even at the outset the 5/3 strategy's narrowing was worth the five additional samples versus the 5/2 strategy (a range from 2.5 to 28 percent, with only 5 degrees of freedom).
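A minimal version of this Monte Carlo comparison can be sketched in Python. It uses the stated mean of 45, total sigma of 0.35 and a true 11.8 percent measurement share; the random seed is illustrative, and for simplicity each estimate is expressed against the known total variance rather than a re-estimated one:

```python
import random
from statistics import mean, variance

random.seed(1)  # reproducible illustration

SIGMA2_TOTAL = 0.35 ** 2  # total process variance used in the article's simulation
MEAS_SHARE = 0.118        # true measurement share of total variation
SIGMA_MEAS = (MEAS_SHARE * SIGMA2_TOTAL) ** 0.5
SIGMA_PROC = ((1 - MEAS_SHARE) * SIGMA2_TOTAL) ** 0.5

def estimate_pct(n_masters, n_subs):
    """One simulated study: n_masters routine samples, each split into n_subs parts."""
    pooled_vars = []
    for _ in range(n_masters):
        true_val = random.gauss(45, SIGMA_PROC)           # the master sample's true AEG
        subs = [true_val + random.gauss(0, SIGMA_MEAS)    # identical splits, each measured
                for _ in range(n_subs)]
        pooled_vars.append(variance(subs))
    # pooled within-sample variance, expressed against the (known) total variance
    return mean(pooled_vars) / SIGMA2_TOTAL * 100

def empirical_interval(n_masters, n_subs, n_iter=1000):
    """Empirical 5th and 95th percentiles over repeated simulated studies."""
    ests = sorted(estimate_pct(n_masters, n_subs) for _ in range(n_iter))
    return ests[int(0.05 * n_iter)], ests[int(0.95 * n_iter)]

lo53, hi53 = empirical_interval(5, 3)     # the 5/3 design: 10 degrees of freedom
lo205, hi205 = empirical_interval(20, 5)  # a 20/5 design: 80 degrees of freedom
print(f"5/3:  {lo53:.1f}% to {hi53:.1f}%")
print(f"20/5: {lo205:.1f}% to {hi205:.1f}%")
```

The 5/3 interval comes out roughly twice as wide as the 20/5 interval, mirroring the narrowing Figure 3 shows as degrees of freedom accumulate.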

In summary, the 5/3 strategy can be used first to estimate a measurement system’s contribution to total process variation, and then to provide continuous monitoring at minimal cost. Further, it is less disruptive to the lab than a GRR study and requires no special range of samples. Some form of measurement system validation should be used for every measurement system that is critical to process control.


**Reader question:** Something in the terminology is unclear: how are five discrete parts split into thirds? Or does this apply only to systems where the item to be measured can be subdivided? And wouldn't the lab notice that they received one-third the amount of their routine samples?

**Author's response:** In most instances, I'm taking a polymer chip sample large enough to split into three regular-size samples. Except for the lab manager, the samples are blind to the lab, which treats them as routine samples.