Measurement system analysis (MSA) determines whether a measurement system is adequate, confirming that it does not introduce significant error into the measured value of a process characteristic. MSA is one of the most misunderstood and underused concepts in Six Sigma. This article highlights two common mistakes made during MSA studies and explains how to avoid them.

Maintain Low Measurement System Error

Mathematically, total variance is equal to the sum of the true variance and the measurement system error. Measurement system error should be zero, but in practice this is rarely the case because of factors such as worn or uncalibrated gauges, inconsistency within an appraiser, and differing knowledge levels among appraisers. Ideally, all of the total variation should arise from differences in the parts being measured, so it is important to keep measurement system error as low as possible.

Variance (total) = variance (true) + variance (measurement error)
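
To see this additivity in numbers, the short simulation below (a minimal sketch with made-up process values, not data from any study in this article) adds independent gauge noise to simulated true part values and compares the observed total variance with the sum of the two components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: 1,000 parts whose true values vary (part-to-part
# variation), measured with a gauge that adds independent random error.
true_values = rng.normal(loc=50.0, scale=2.0, size=1000)       # true process variation
measurement_error = rng.normal(loc=0.0, scale=0.5, size=1000)  # gauge/appraiser error
observed = true_values + measurement_error

print("variance (true)             :", true_values.var(ddof=1))
print("variance (measurement error):", measurement_error.var(ddof=1))
print("variance (total, observed)  :", observed.var(ddof=1))
# The observed total variance is approximately the sum of the two components,
# so a noisy gauge inflates the variation attributed to the process.
```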

Continuous or Discrete Data

To consider a measurement system adequate, there are set rules based on the type of data being used. For continuous data: 1) gage R&R must be no more than 10 percent of the total study variation (10 percent to 30 percent is allowed if the process is not critical), and 2) the number of distinct categories must be greater than four. For discrete data, where attribute agreement analysis is used, the kappa value must be at least 0.7 for nominal and ordinal data, and Kendall's correlation coefficient (with a known standard) must be at least 0.9 for ordinal data.
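
These cut-offs are simple enough to encode in a small helper. The sketch below is purely illustrative; the function names and arguments are ours, and it does nothing more than apply the thresholds listed above.

```python
def continuous_msa_adequate(grr_pct_study_variation, ndc, critical_process=True):
    """Continuous-data rules: %GR&R of total study variation and the
    number of distinct categories (ndc)."""
    grr_limit = 10.0 if critical_process else 30.0
    return grr_pct_study_variation <= grr_limit and ndc > 4


def discrete_msa_adequate(kappa=None, kendall=None):
    """Discrete-data rules: kappa >= 0.7 (nominal/ordinal data) and, for
    ordinal data with a known standard, Kendall's coefficient >= 0.9."""
    kappa_ok = kappa is None or kappa >= 0.7
    kendall_ok = kendall is None or kendall >= 0.9
    return kappa_ok and kendall_ok


# Example: a gauge consuming 8.5% of study variation with 6 distinct categories
print(continuous_msa_adequate(8.5, 6))       # True
print(discrete_msa_adequate(kappa=0.65))     # False - agreement below the 0.7 rule
```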

The process of conducting an MSA study is similar for continuous and discrete data: take 10 to 20 samples, provide them to two or three appraisers for a first trial, and then repeat the study for a second trial. The main difference is that for continuous data the appraisers measure each part with a gauge, while for discrete data the appraisers rely on their own knowledge to judge whether each transaction is defective.
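
One way to visualize that common structure is as a crossed layout in which every appraiser evaluates every sample in every trial. The sketch below (the sample count, appraiser names and file name are illustrative assumptions, not prescribed by the article) generates such a blank data-collection sheet.

```python
import csv
import itertools

samples = [f"PART-{i:02d}" for i in range(1, 11)]   # 10 to 20 samples
appraisers = ["Appraiser A", "Appraiser B"]         # two or three appraisers
trials = [1, 2]

# Crossed design: every appraiser sees every sample in every trial.
# For continuous data the "result" is a gauge reading; for discrete data it is
# the appraiser's defective/nondefective call.
with open("msa_collection_sheet.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["sample", "appraiser", "trial", "result"])
    for trial, appraiser, sample in itertools.product(trials, appraisers, samples):
        writer.writerow([sample, appraiser, trial, ""])  # filled in during the study
```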

MSA for Discrete Data

One common challenge in an MSA study of discrete data concerns the two trials. How can bias be removed when appraisers are given the same samples for both trials through email? If appraisers receive the same samples twice at the same time, they will simply give the experimenter identical results for Trials 1 and 2, so no repeatability issues will be detected when the study is done this way. Likewise, if the two appraisers know a study is being run, the reproducibility results will be biased. The following example highlights such a mistake being made during an MSA study.

Example: Compliance Project in Banking

A project leader at a financial institution was asked to do an MSA study to confirm that the measurement system was adequate. He ran the study over a week, putting 10 samples in a spreadsheet and sending them to the two appraisers. When the study was completed, the data was shared with the Black Belt (BB). The BB analyzed the data in a statistical analysis program and found no issue with repeatability. There were, however, some mismatches between the two appraisers. Curious, the BB asked the project leader how the study was conducted.

The project leader explained that he had documented the 10 samples in a spreadsheet and sent them to the two appraisers in separate emails. For the second trial, he again sent the same 10 samples in a spreadsheet via email. The BB pointed out that while this ensured the two appraisers did not know the study was being conducted with two different individuals, it introduced a repeatability bias. The BB suggested the project leader instead follow the procedure below to ensure that neither repeatability nor reproducibility bias was involved in the study.

  1. Write the unique identification numbers of ten transactions on paper and make a copy.
  2. Give each of those hard copies to the two appraisers (or subject-matter experts, SMEs) but do not tell the SMEs that two trials will be conducted.
  3. The SMEs should review the 10 transactions and provide their decisions on each transaction (defective or nondefective).
  4. Have the SMEs return their papers with the decisions recorded.
  5. After a week’s time has passed, put the same 10 samples again on paper. Make a copy.
  6. Give each of those two papers again to the same SMEs.
  7. Have the SMEs review the 10 transactions and make their decisions.
  8. Collect all four papers.
  9. Mark the SME names and trial numbers (1 or 2) on each paper and collate the results in a spreadsheet.
  10. Send the data to the BB to run the study in the statistical analysis program (a sketch of such an analysis follows this list).
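
A hedged sketch of the analysis the BB might then run on the collated spreadsheet is shown below: within-appraiser kappa (Trial 1 versus Trial 2 for the same SME) reflects repeatability, and between-appraiser kappa reflects reproducibility. The decisions are invented for illustration, and scikit-learn's cohen_kappa_score stands in for the statistical analysis program used in the article.

```python
from sklearn.metrics import cohen_kappa_score

# Illustrative decisions (D = defective, N = nondefective) for 10 transactions,
# collated from the four returned papers.
sme_a = {1: ["D", "N", "N", "D", "N", "D", "N", "N", "D", "N"],
         2: ["D", "N", "N", "D", "N", "N", "N", "N", "D", "N"]}
sme_b = {1: ["D", "N", "D", "D", "N", "D", "N", "N", "D", "N"],
         2: ["D", "N", "D", "D", "N", "D", "N", "N", "N", "N"]}

# Repeatability: within-appraiser agreement across the two trials
print("SME A repeatability kappa:", cohen_kappa_score(sme_a[1], sme_a[2]))
print("SME B repeatability kappa:", cohen_kappa_score(sme_b[1], sme_b[2]))

# Reproducibility: between-appraiser agreement on the same trial
print("A vs. B reproducibility kappa:", cohen_kappa_score(sme_a[1], sme_b[1]))

# A kappa of at least 0.7 on each comparison meets the rule cited earlier.
```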

The project leader took a new set of 10 samples and provided them to the SMEs following the newly documented method. This time there were differences within appraisers, but the kappa value was within the permissible limit. By using this process, the repeatability bias was removed and the true measurement system error was determined.

MSA for Continuous Data

Another common challenge arises when an MSA study is done with continuous data. How should samples be selected when the manufacturing process runs on a number of machines that produce varying product sizes? Can that influence the MSA study?

Example: Multiple Machines in Manufacturing

A supervisor was conducting an MSA study for the thickness parameter of a grinding wheel. She had parts produced on different presses, with thicknesses ranging from 5 mm to 200 mm (categorized as small, medium and large thickness wheels). The supervisor thought that one study of 10 samples done with two appraisers would be good enough.

She met with the Six Sigma expert in the organization and asked if she was using the right approach to conduct the study. The Six Sigma expert asked her how she would ensure that no measurement error was introduced, taking linearity into consideration, and recommended that she verify the gauge is linear across the entire range of measurements (the varying thicknesses).

The supervisor then took a set of 10 samples each of the small, medium and large thickness wheels and ran the gage R&R at each size to check the linearity of the gauges. In this way the supervisor ensured that both accuracy- and precision-related measurement errors were properly addressed during the study.
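
As a rough illustration of the linearity check (the reference values and readings below are invented, not the supervisor's data), one can measure reference parts spanning the 5 mm to 200 mm range, compute the bias as measured minus reference, and fit a straight line of bias against reference size; a slope near zero suggests the gauge behaves consistently across the range.

```python
import numpy as np

# Hypothetical reference wheels spanning the small/medium/large thickness range (mm)
reference = np.array([5.0, 25.0, 50.0, 100.0, 150.0, 200.0])
# Averaged gauge readings for those references (illustrative numbers)
measured = np.array([5.02, 25.03, 50.06, 100.11, 150.17, 200.22])

bias = measured - reference
slope, intercept = np.polyfit(reference, bias, 1)

print(f"bias slope     : {slope:.5f} mm per mm of thickness")
print(f"bias intercept : {intercept:.5f} mm")
# A slope near zero indicates the gauge bias is stable across the measurement
# range; here the bias grows with thickness, a sign of a linearity problem.
```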

Summary

While conducting MSA studies, be aware of the practical challenges involved and know how to address them so that measurement error is not introduced into the results.
