Verifying Their Data With Measurement System Analysis (MSA) Saved This Company $1M

Most businesses today brag about making data-driven decisions on important company issues. But what if the data is not accurate, or does not reflect what is really going on in the company? You might be making the wrong business decisions and inadvertently costing your company a lot of money.

Let’s see how one company almost made a bad financial and operational decision — but was saved by using Measurement System Analysis (MSA) to find out that the solution to their problem was simple. 

Can the data be trusted? 

A consultant was helping the U.S. division of a major German-based producer of paint, ink and chemicals implement a continuous improvement effort. Midway through the deployment, the consultant was asked to go to the automotive paint division located in Mexico City to determine whether they could assist the Mexican division with some of their quality problems and customer dissatisfaction. One of their dissatisfied customers made up approximately 75% of their business in Mexico. The loss of that customer would close the division down.

The consultant agreed to go to Mexico to see if they could help.

The problem revolved around frequent rejection of painted cars due to the thickness of the applied paint. Sample painted metal panels pulled from the line were inspected in the quality lab, and based on the test results, a decision was made to stop, rework or adjust the painting process on the manufacturing line.

The division’s director of quality had concluded the problem wasn’t on the line, but that the lab test equipment was faulty and thus the data couldn’t be trusted. A requisition had already been approved to purchase approximately $1,000,000 in new lab equipment. The consultant requested a day to test whether the director’s assumption of faulty test equipment was valid.

Was Measurement System Analysis (MSA) the solution? 

One of the core concepts of Lean Six Sigma is to make data-driven decisions. Since the consultant was not sure whether the director of quality had made his decision using data, they recommended an MSA study be done.

To review, MSA is a formal statistical study that determines whether your measurement systems, whether they be measuring devices or people, are capable of providing reliable data so that you can make the best possible data-driven decisions. The statistical study used for continuous data is called a Gage R&R study, and the tool used for discrete data is called an Attribute Agreement Analysis.

MSA has been successfully used to evaluate the consistency of:

  • Answers provided by call center personnel
  • Evaluations given by supervisors to their direct reports
  • Errors detected in written documents
  • Mistakes in calculating discounts on invoices
  • Timing of work activities
  • Counting of warehouse inventory
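Several of the examples above boil down to discrete pass/fail judgments made by people, which is where an Attribute Agreement Analysis applies. As a minimal sketch (the appraisers, items and ratings below are hypothetical), within-appraiser agreement gauges repeatability and all-appraiser agreement gauges reproducibility:

```python
# Toy Attribute Agreement Analysis: 3 appraisers rate 6 items twice each
# as "pass"/"fail". All data here is made up for illustration.
ratings = {
    "A": [("pass", "pass"), ("fail", "fail"), ("pass", "pass"),
          ("fail", "pass"), ("pass", "pass"), ("fail", "fail")],
    "B": [("pass", "pass"), ("fail", "fail"), ("pass", "fail"),
          ("fail", "fail"), ("pass", "pass"), ("fail", "fail")],
    "C": [("pass", "pass"), ("fail", "fail"), ("pass", "pass"),
          ("fail", "fail"), ("pass", "pass"), ("fail", "fail")],
}

def within_agreement(trials):
    """Share of items where an appraiser agreed with themselves (repeatability)."""
    return sum(a == b for a, b in trials) / len(trials)

def between_agreement(ratings):
    """Share of items where every appraiser gave identical ratings (reproducibility)."""
    n_items = len(next(iter(ratings.values())))
    agree = 0
    for i in range(n_items):
        # Collect every rating given to item i across all appraisers and trials
        labels = {r for trials in ratings.values() for r in trials[i]}
        agree += len(labels) == 1
    return agree / n_items

for name, trials in ratings.items():
    print(name, within_agreement(trials))
print("all agree:", between_agreement(ratings))
```

A full Attribute Agreement Analysis would also compare each appraiser against a known standard, but the two agreement rates above capture the core idea.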

Here was the as-is process the consultant observed:

  1. Metal panels with the dimensions of 12” x 6” were painted at random intervals during the actual painting of a car. These panels were sent to the lab to test the actual thickness of the paint application.
  2. There were 6 technicians using 5 similar models of equipment to do the testing. The test equipment consisted of a handheld wand that used ultrasound to measure the thickness.
  3. The technician placed the wand at three locations along the length of the panel and recorded the thickness at each. There were specs for the average thickness as well as for the three individual readings.

This was the setup for the MSA:

  1. The 5 test machines were tested to ensure there was no statistical difference in their ability to measure thickness.
  2. Ten panels were randomly selected from the 30 currently in the lab.
  3. Three technicians were selected for the study.
  4. A crossed design was used with each part being measured two times.
  5. Parts were randomized between the two runs.
  6. No attempt was made to alter the existing testing procedure.
  7. Once the study was run and data recorded, the consultant analyzed the results.
  8. As the director of quality had stated, the process had poor repeatability and reproducibility.
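The crossed design above (parts measured by every technician, twice each) is typically analyzed with the classic average-and-range Gage R&R calculation. The sketch below runs it on simulated thickness readings, not the company's data; the K1/K2/K3 values are the standard AIAG table constants for 2 trials, 3 operators and 10 parts:

```python
import random

# Average-and-range Gage R&R for a crossed study:
# 10 parts x 3 operators x 2 trials, mirroring the study described above.
K1 = 0.8862   # AIAG constant for 2 trials
K2 = 0.5231   # AIAG constant for 3 operators
K3 = 0.3146   # AIAG constant for 10 parts

def gage_rr(measurements, n_parts, n_ops, n_trials):
    """measurements[op][part] is a list of n_trials readings."""
    # Repeatability (EV): average within-cell range times K1
    ranges = [max(cell) - min(cell) for op in measurements for cell in op]
    r_bar = sum(ranges) / len(ranges)
    ev = r_bar * K1

    # Reproducibility (AV): spread of operator averages, corrected for EV
    op_means = [sum(sum(cell) for cell in op) / (n_parts * n_trials)
                for op in measurements]
    x_diff = max(op_means) - min(op_means)
    av_sq = (x_diff * K2) ** 2 - ev ** 2 / (n_parts * n_trials)
    av = max(av_sq, 0.0) ** 0.5

    # Part variation (PV): range of part averages times K3
    part_means = [sum(sum(op[p]) for op in measurements) / (n_ops * n_trials)
                  for p in range(n_parts)]
    pv = (max(part_means) - min(part_means)) * K3

    grr = (ev ** 2 + av ** 2) ** 0.5
    tv = (grr ** 2 + pv ** 2) ** 0.5
    return ev, av, grr, 100.0 * grr / tv   # %GRR of total variation

# Simulated study: true thickness varies part to part; small gauge noise
random.seed(1)
true_thk = [100 + random.gauss(0, 5) for _ in range(10)]
data = [[[t + random.gauss(0, 0.5) for _ in range(2)] for t in true_thk]
        for _ in range(3)]
ev, av, grr, pct = gage_rr(data, 10, 3, 2)
print(f"EV={ev:.2f}  AV={av:.2f}  GRR={grr:.2f}  %GRR={pct:.1f}")
```

A common rule of thumb treats %GRR under 10% as acceptable and over 30% as unacceptable; a result like the one the director saw would fall in the unacceptable range.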

But what the director didn’t know was that the consultant had observed something significant.

Was the problem due to the equipment or technician variation?

The process of taking the measurements on the panel, as described above, consisted of the technician placing the head of the wand on three locations along the length of the metal panel and recording the reading. It became apparent that the three technicians were not placing the wand in a consistent location on the panel. Physics informs us that the thickness of the paint will be less towards the edges since gravity will tend to pull the paint over the edge and thin the paint coating.

The solution became obvious to the consultant.

The random wand placement on the panel would lead to variation in the measurements. This variation was apparently sufficient to create a repeatability problem within each panel and across the two runs each technician performed. Likewise, the repeatability problem impacts reproducibility, since if each technician can’t be consistent and repeatable with themselves, there can’t be consistency or reproducibility between the three technicians. This would explain the poor results of the MSA study.

To demonstrate this, the consultant took a piece of cardboard, a small glass and a small knife and cut three round evenly spaced holes in the cardboard, the size of the wand head. This would serve as a template that the technician could overlay on the length of the panel. The MSA study was repeated with 10 new panels. Each technician was instructed to place the cardboard template on the panel and then place the head of the wand in the hole.

Measurements were recorded and the results, once again, analyzed by the consultant.

As the consultant had expected, the repeatability and reproducibility of the measurement system was now acceptable. The variation was not a result of the test equipment, but of the testing procedure. A new standard work document was created and the new process was put in place the next morning. Plastic templates were made to replace the cardboard one.

The director canceled the requisition for the new equipment, thus preventing the company from needlessly spending $1,000,000. Rework, line disruptions and rejections due to paint thickness were reduced by 85%.

A little extra help to become a better vendor 

At the conclusion of the consultant’s work, they were informed the company’s major customer was visiting the next day to review some of the company’s control charts and process capability analysis. The consultant was asked to stay another day and help the company prepare for the meeting. The consultant agreed. 

The consultant met with company personnel the next morning to review their control charts and process capability analysis. Unfortunately, the company had a poor understanding of how to create and use a control chart. The consultant quickly showed the company how to properly construct control charts for the critical metrics the customer was interested in reviewing. 

Upon reviewing the process capability studies they had prepared for the customer, the consultant realized the company had based most of the process capability analysis on arbitrary internal specs rather than customer specs. The consultant advised that the internal specs were unrealistic and showed the company in a bad light, and suggested the company revise their process capability studies to more accurately represent their true process. The consultant was asked to be present at the customer meeting.

During the meeting, the consultant advocated for the company and convinced the customer that the company was capable of meeting its specs. The eventual outcome was that the company was designated as a preferred vendor for the customer. This allowed the company to keep the customer and to become successful and profitable.

3 best practices when doing a Measurement System Analysis study

The use of MSA quickly spread through other areas of the organization where data was being used to make decisions. Here are some best practices for the endeavor.

1. Be clear on the parameters and desired outcomes from your MSA 

Be clear on which process and metrics you are concerned with in your measurement system. Will you be using the MSA for people or equipment? Will you be using continuous or discrete data? Do you have a repeatability or reproducibility issue with your measurement system?

2. MSA is not a one-time event

Unfortunately, processes, people and equipment have a tendency to shift and drift over time. Calibration is one method of realigning your measurement system, but it is also recommended you periodically repeat the MSA study to validate that the system is performing as it should.

3. Remove and mitigate any noise in the environment 

Your goal is to determine the sources of variation in your measurement system. If you permit variation from sources outside your study to impact your results, you will not have a true picture and thus may draw erroneous conclusions about your measurement system.

“In God we trust; all others bring data.” – W. Edwards Deming

Your biggest takeaway from this story should be that you can’t trust your data unless you validate the integrity of your measurement system. MSA is a statistical tool used to determine whether your measurement system is repeatable (variation within a person) and reproducible (variation between people). It can be used for continuous or discrete data. 

In the case of our paint company, they were prepared to needlessly spend $1,000,000 on new test equipment based on intuition rather than statistical knowledge of the capability of their measurement system. It is not uncommon for measurement issues to be resolved with simple process changes rather than expenditures on computers or sophisticated technology.
