NDC Mystery

Viewing 3 posts - 1 through 3 (of 3 total)

    Tim M

    I have been reading some of the stuff on here about NDC (number of distinct categories) but I haven’t quite found the answer to my question. I have a part with a specific width measurement. The tolerance is ±0.10 millimeter. Because this is a plastic part with a definite maximum material condition (MMC), I elected to use a caliper, which easily captured the MMC for all inspectors and returned a gage R&R of 13%.

    My customer did not like the NDC of 2 and asked for a different measurement method. So we used a digital micrometer that reads to 0.001 millimeter and performed a new gage R&R study. This time the R&R was 16%, which I could live with, but the NDC is still 2. I am wondering how this can be.

    We are using the standard AIAG worksheet and its formulas. I also typed the data into Minitab and the NDC was confirmed. But when I look at the micrometer data, it sure looks like there are a sufficient number of different readings. What am I missing?
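    For anyone following along, the number the AIAG worksheet and Minitab report is ndc = 1.41 × (PV/GRR), truncated to an integer, where PV is the part-variation standard deviation and GRR the combined repeatability-and-reproducibility standard deviation. A minimal sketch with made-up numbers (not Tim's data) shows how a gage can look acceptable on %GRR yet still return ndc = 2 when the part-to-part spread in the study sample is small relative to the gage noise:

    ```python
    # Sketch of the AIAG ndc calculation. The standard deviations below are
    # assumed values for illustration only, not Tim's study results.

    def ndc(pv, grr):
        """Number of distinct categories per the AIAG MSA manual:
        1.41 * (part variation / combined R&R), truncated to an integer."""
        return int(1.41 * pv / grr)

    grr = 0.008           # combined R&R standard deviation, mm (assumed)
    pv = 0.012            # part-variation standard deviation, mm (assumed)

    print(ndc(pv, grr))   # 1.41 * 1.5 = 2.115, which truncates to 2
    ```

    Note that ndc depends only on the ratio PV/GRR, so a more precise instrument that shrinks GRR a little, while the sampled parts stay tightly clustered, can leave the truncated value stuck at 2.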

    Now the customer wants a special fixture built to use a drop indicator for measurement. My gut tells me this is unnecessary and I also cannot guarantee improvement in the NDC based on what I have experienced so far.

    I would sure appreciate any insight from your experiences.



    Joe D

    Tim M
    According to Dr. Wheeler, the ndc is a meaningless number. In the article where he makes that argument, he states the following:
    “The citation given in the AIAG manual for the ndc formula is in the first edition of Evaluating the Measurement Process by myself and Richard Lyday. In that text we defined a quantity which we called the classification ratio, and the ndc does provide an estimate of this classification ratio. However, nowhere in that text did we ever suggest that this ratio would define the number of distinct categories.” (Text slightly modified.)
    The ndc is the ratio of the “Product Variation” to the “Combined R&R.” To increase the ndc you either need to increase the “Product Variation” or decrease the “Combined R&R.”
    The primary factor in the “Product Variation” is Rp, the range of the part averages. So to increase Rp you need to increase the range of parts sampled across the spec limits if possible. Here, the more variation the better.
    The other option is to decrease the “Combined R&R.” One way to do this is to decrease operator bias (Reproducibility) if possible. The other is to decrease the Repeatability, which is what you were trying to do by increasing the precision of your measurement device. Since that had virtually no effect on the ndc, it suggests that the Reproducibility is dominating. Check to see whether your Reproducibility is bigger than your Repeatability.
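    The two levers above can be sketched numerically. The standard deviations here are assumed values chosen so that reproducibility dominates, not figures from Tim's study; the point is only to show that widening the part sample moves the ndc while halving repeatability barely does:

    ```python
    import math

    def ndc(pv, grr):
        # AIAG ndc: 1.41 * (part variation / combined R&R), truncated
        return int(1.41 * pv / grr)

    def combined_grr(repeatability, reproducibility):
        # EV and AV combine in quadrature
        return math.sqrt(repeatability**2 + reproducibility**2)

    # Assumed baseline where reproducibility (AV) dominates repeatability (EV)
    ev, av = 0.004, 0.009       # standard deviations, mm (assumed)
    pv = 0.015                  # part variation, mm (assumed)

    base = ndc(pv, combined_grr(ev, av))
    print(base)                 # 2

    # Lever 1: sample parts across a wider range -> larger PV
    wider = ndc(0.030, combined_grr(ev, av))
    print(wider)                # 4

    # Lever 2: a better gage halves repeatability, but reproducibility
    # still dominates, so the combined R&R barely shrinks
    better_gage = ndc(pv, combined_grr(ev / 2, av))
    print(better_gage)          # still 2
    ```

    This mirrors what Tim saw: swapping the caliper for a micrometer attacks only the repeatability term, which is the smaller contributor if reproducibility dominates.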
    Good luck and let me know how things turn out.


    Chris Seider

    I’d be curious to see the raw data. Did you get typical process variation across various demographics of inputs?

    Your # of distinct categories is a bit small, implying you didn’t get a good range of samples to measure. One may say that I’m “gaming the system,” but I would reply that if you didn’t measure across your typical process range, you don’t know whether you have good repeatability/reproducibility across that range, so it isn’t gaming the system.

