# Normal Distribution Testing in the Micron Range


- This topic has 4 replies, 3 voices, and was last updated 2 years, 3 months ago by Chris Seider.

- January 29, 2018 at 6:08 am #55929
I am reviewing process control charts for customers and have a problem with tests of normality. The dimension tolerance is 100 microns and the measurements are in the micron range. At these sizes the instruments are, of course, digital, so the results come out grouped rather than continuous, and my p-value is <0.005. I have tried methods other than Anderson-Darling and that doesn't solve my problem. How can I convince my customer that the process control numbers are reasonable when the numbers don't indicate the distribution is normal? These are not Quality people, so they go by the numbers. This issue will keep coming up as parts get smaller in the future.

January 29, 2018 at 6:50 am #202199

Robert Butler (@rbutler)

Your statement "With the sizes, the instruments are, of course, digital so the results are grouped rather than continuous causing my P-score to be <0.005" does not make sense. There is no "of course" about a digital measurement that requires grouping. You have a continuous measurement (true, the values are small, but they are still continuous), so the analysis should be run on the raw data.

You didn't mention the sample size, but if it is large then the chance that you will fail one or more of the normality tests is almost guaranteed. The Anderson-Darling test looks at the "heaviness" of the tails, and for large counts it will fail even with data drawn from a normal random-number generator.

What you need to do is plot the data on probability paper and look at the results. If the plot passes the “fat pencil test” then the data is reasonably normal and the fact that it fails one or more of the normality tests is of academic interest only.
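A quick sketch of the two points above, assuming Python with NumPy and SciPy available: Anderson-Darling applied to a large sample of truly normal data rounded to a 1-micron digital resolution, and the probability-plot correlation that underlies the "fat pencil test". All sample sizes, means, and resolutions here are made up for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulate 5,000 dimensions (microns) from a truly normal process,
# then round to a 1-micron display resolution to mimic a digital gage.
raw = rng.normal(loc=50.0, scale=1.5, size=5000)
rounded = np.round(raw, 0)

# Anderson-Darling on the rounded readings: with this many points the
# discreteness tends to push the statistic past the 5% critical value
# even though the underlying process is normal.
result = stats.anderson(rounded, dist="norm")
print("A-D statistic:", round(result.statistic, 2))
print("5% critical value:", result.critical_values[2])

# The "fat pencil test" made numeric: a normal probability plot.
# If the points hug a straight line (r close to 1), treat the data
# as reasonably normal regardless of the hypothesis-test verdict.
(osm, osr), (slope, intercept, r) = stats.probplot(rounded, dist="norm")
print("Probability-plot correlation r =", round(r, 4))
```

The contrast between a "failing" Anderson-Darling statistic and a near-perfect straight-line fit on the probability plot is exactly the argument to show the customer.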

January 29, 2018 at 7:34 am #202200

Chris Seider (@cseider)

If parts are so small, why would the dimension tolerance stay the same? Just curious.

If the parts are getting smaller, and the de facto tolerance is getting smaller, then, to know where you are operating, do you have enough precision in your gage(s)? Did you do a gage R&R to see whether the % study variation is acceptable?
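The % study variation check mentioned above is a simple ratio; a minimal sketch, using hypothetical standard deviations in place of real gage R&R study results:

```python
# Is gage precision adequate relative to the study variation?
# Both numbers below are assumed values for illustration only.
sigma_measurement = 0.4   # microns, measurement-system std dev (assumed)
sigma_total = 2.5         # microns, total study std dev (assumed)

pct_study_variation = 100.0 * sigma_measurement / sigma_total
print(f"% study variation: {pct_study_variation:.1f}%")

# Common AIAG rule of thumb: <10% acceptable, 10-30% marginal,
# >30% unacceptable.
verdict = ("acceptable" if pct_study_variation < 10
           else "marginal" if pct_study_variation <= 30
           else "unacceptable")
print("Gage verdict:", verdict)
```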

February 5, 2018 at 5:22 am #202221

Mike Bonnice (@mbonnice)

Perhaps your customers need a better understanding of the measurement process, and you could do some work to improve their understanding.

We don't know the dimension you are measuring. If it has a one-sided tolerance then non-normality is common (flatness, roundness, runout, and so on).

By "grouped" perhaps you mean "stratified," meaning that there are too few decimal places. That is what Chris Seider refers to with gage precision. For a digital system you might be able to gain precision by changing the software to record more decimal places. Many of them will be insignificant; however, stratification in computer systems can be caused by software cutting off significant digits.

After increasing the number of decimal places, do a repeatability study to quantify the measurement variation. Check the repeatability for normality. Calculate the standard deviation and variance of measurement. If the errors in measurement are not normally distributed, you might have a mystery.

The number of significant digits is related to the standard deviation of the measurement process. There needs to be enough decimal places to be able to divide the histogram of the measurement variation into at least 5 or 6 bins.
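The bin guideline above can be checked directly: count how many distinct values the gage actually reports over a repeatability run. A minimal sketch with simulated readings (the repeatability std dev and resolutions are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)

def distinct_readings(readings):
    """Number of distinct values the gage actually reports."""
    return len(np.unique(readings))

sigma_repeat = 0.8  # microns, assumed repeatability std dev
true_values = rng.normal(50.0, sigma_repeat, size=200)

# Same measurements displayed at two different digital resolutions.
coarse = np.round(true_values, 0)   # 1.0-micron resolution
fine = np.round(true_values, 1)     # 0.1-micron resolution

print("distinct readings at 1.0 um:", distinct_readings(coarse))
print("distinct readings at 0.1 um:", distinct_readings(fine))
```

If the coarse display yields fewer than roughly 5-6 distinct values across the measurement variation, the resolution is masking the distribution, which is one way the "grouped" p-value problem arises.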

Having dealt with precision and the normality of the repeatability, work on the variation of the part population. Gather a large random sample of parts and measure them. Calculate the standard deviation and variance of the sample. Test the distribution for normality.

Calculate the ratio of measurement variance to sample variance. You would like this number to be small, like 10% or less. If it’s not, the measurement system variation is too large.
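The variance-ratio check above is one line of arithmetic; a sketch with hypothetical variances standing in for the repeatability-study and part-sample results:

```python
# Ratio of measurement variance to part-sample variance.
# Both values are assumed, for illustration only.
var_measurement = 0.16   # microns^2, from the repeatability study (assumed)
var_sample = 4.0         # microns^2, from the random sample of parts (assumed)

ratio = var_measurement / var_sample
print(f"measurement/sample variance ratio: {ratio:.1%}")

# Target from the post above: roughly 10% or less.
print("OK" if ratio <= 0.10 else "Measurement system variation too large")
```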

Somewhere throughout this process of evaluating the measurement system you will reveal the truth of the matter. There are many other aspects of measurement systems that could also be explored, but this seems sufficient to get the knowledge that your customer needs.

February 5, 2018 at 7:57 am #202224

Chris Seider (@cseider)

@mbonnice Great details in your post.

