capability and fraction nonconforming
 This topic has 11 replies, 10 voices, and was last updated 19 years ago by John J. Flaig.


January 13, 2003 at 5:48 pm #31215
What am I to make of Donald Wheeler’s statements in Understanding Statistical Process Control (Wheeler and Chambers, 1992, SPC Press, Inc.) to the effect that “The conversion of a capability number (i.e., a sigma calculation) into a fraction nonconforming will always require the use of an assumed distribution… such assumptions are essentially unverifiable… assumptions certainly cannot support computations in the parts per million range”? Wheeler proceeds to completely undermine any assertion that links ppm nonconforming to a process capability. Can anyone familiar with this work give an opinion on the correctness of his statements? Does this not entirely debunk sigma calculations in general, because the fraction nonconforming is illusory?
January 13, 2003 at 6:19 pm #82134 - Dave
This statement is true. What is meant is that at such low levels of fraction defective, it is difficult to say definitively what type of distribution (i.e., normal or not) holds toward the tails. However, I do not think it debunks Six Sigma. I think Six Sigma is much more than simply calculating sigma levels.

January 13, 2003 at 7:03 pm #82135

Agreed. Six Sigma is a comprehensive quality improvement strategy. But the “six sigma” part is what gets all the press, i.e., this company or that company is operating at 5 sigma, 6 sigma, etc., and that means 3.4 defects per million opportunities. It still seems a touch disingenuous given the statistical nature of its marquee catchphrase.
January 13, 2003 at 8:14 pm #82137

Dave,
You can test for a specific distributional form with a Chi-Square goodness-of-fit test or the Kolmogorov-Smirnov test. Given that you discover the true distributional form from one of these tests, you can use a spreadsheet to calculate Z and DPMO. We use a spreadsheet for this purpose. Minitab, with the Six Sigma extension, assumes a normal distribution for its process capability calculations. But your process may indeed have a different form.
The best way to guess at a distributional form is to make a histogram. Then get a stats book and look at the shapes of all the different distributions, e.g., lognormal, exponential, beta, gamma, etc. Generate some random data, with the number of points in the random sample equal to the number of points in your actual sample, and do the Chi-Square goodness-of-fit test.
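A minimal sketch of this procedure in Python, assuming SciPy is available; the sample data and the specification limit here are hypothetical. (One caveat the post doesn’t mention: the Kolmogorov-Smirnov p-value is optimistic when the distribution’s parameters are estimated from the same sample.)

```python
import numpy as np
from scipy import stats

# Hypothetical sample data; in practice, use your measured values.
rng = np.random.default_rng(42)
data = rng.normal(loc=10.0, scale=1.0, size=200)

# Estimate the parameters of the candidate (normal) distribution.
mu, sigma = data.mean(), data.std(ddof=1)

# Kolmogorov-Smirnov goodness-of-fit test against the fitted normal.
# Caveat: p-value is optimistic because mu and sigma came from the
# same sample (the Lilliefors correction addresses this).
ks_stat, p_value = stats.kstest(data, "norm", args=(mu, sigma))

# If the fit is not rejected, compute Z and DPMO for an upper spec limit.
USL = 13.0  # hypothetical upper specification limit
z = (USL - mu) / sigma
dpmo = stats.norm.sf(z) * 1_000_000

print(f"KS p-value: {p_value:.3f}, Z: {z:.2f}, DPMO: {dpmo:.0f}")
```

The same fitted-parameter caveat applies to the spreadsheet approach the post describes.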
January 14, 2003 at 2:32 pm #82153
Chip Hewette

Although I’ve never read the cited work, I have read Dr. Wheeler’s book Understanding Variation and sat in a private seminar given by Dr. Wheeler at my employer. Is this like staying at a Holiday Inn Express?
Perhaps the meaning is simply that converting from Cpk to dpmo for the population is inaccurate unless the underlying distribution of the population is known, not assumed.
I don’t think that Dr. Wheeler believes that the areas under a distribution curve are unknown, but that people make erroneous assumptions about which curve to apply.
For example, time and time again I have seen BBs attempt to use a normal distribution on a time measurement. The assumption that time follows a normal distribution is in error, but the BBs go merrily on calculating a Z score and ignoring the obvious.

January 20, 2003 at 8:46 am #82311
Reinaldo Ramirez

Please read page 130 of Dr. Wheeler’s book:
…Dr. Deming has said: “with enough data, no curve will fit the results of an experiment”… Therefore, the conversion of capability values into fraction nonconforming is an operation that has no contact with reality. It is nothing more than FANTASY, and the results are illusions, if not outright DELUSIONS.
He made no exception for the statistical assumptions of the Six Sigma approach.

January 20, 2003 at 4:39 pm #82320
Gabriel

If you know the probability distribution formula for your population, then you have all the data you need to get Cpk, sigma level, PPM, or whatever.
The problem is: YOU NEVER KNOW THE REAL DISTRIBUTION. For example, you never know the average; you just get an estimation. You never get the standard deviation, just an estimation. And you can never prove that a particular mathematical distribution fits the real one. You can only say “I do not have enough data to reject the hypothesis that it fits” (if you had enough data, you would find that it does not fit; it never does).
Only if you had all the information (the values for every possible individual of the population, not just a sample) could you get real values and not estimators. The problem is that the populations tend to be infinitely large. For example, if I say “the diameters of balls produced with this process fit the normal distribution,” I would need to measure the balls not yet manufactured to check that it is true for all the individuals of the population. But even then, you have measurement errors and variation.
So all you have are estimations. Estimations are a good thing. You can get an estimation of PPM based on an estimated distribution shape with an estimated average and an estimated variation. The problem is that these estimations tend to become less and less accurate as you go farther and farther into the tails.
For example, imagine that I have a process that produces a product where, for one characteristic, the estimated average is 10, the estimated standard deviation (sigma) is 1, and the distribution passes a goodness-of-fit test for normality. Imagine the following two situations, where the characteristic has only an upper specification limit:
a) USL=10. In this case, Cpk=0 and the calculated PPM=500,000. Here the calculated PPM will be a good estimation of the actual PPM: about half of the parts will be nonconforming. Small changes in the average, in sigma, or in the shape will not lead to great variations in the PPM.
b) USL=14.5. In this case the USL is 4.5 sigmas away from the average, Cpk=1.5, and the calculated PPM=3.4. Here the actual PPM can easily be as much as 10 times more or less (0.34 or 34) with little change in the average, sigma, or shape (especially the last two).
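These two cases can be checked numerically; here is a sketch in Python (SciPy assumed), reproducing both calculations and then perturbing sigma by 10% in each direction to show how violently the tail estimate swings in case b):

```python
from scipy.stats import norm

mean, sigma = 10.0, 1.0

# Case a) USL at the mean: about half the parts are nonconforming.
ppm_a = norm.sf((10.0 - mean) / sigma) * 1e6      # 500,000 PPM

# Case b) USL 4.5 sigma above the mean: about 3.4 PPM under normality.
ppm_b = norm.sf((14.5 - mean) / sigma) * 1e6

# A 10% change in sigma barely moves case a) but swings case b)
# by roughly an order of magnitude in each direction.
ppm_b_low = norm.sf((14.5 - mean) / 0.9) * 1e6    # sigma 10% lower
ppm_b_high = norm.sf((14.5 - mean) / 1.1) * 1e6   # sigma 10% higher

print(ppm_a, ppm_b, ppm_b_low, ppm_b_high)
```

The ratio between the high and low estimates in case b) is well over an order of magnitude, which is exactly the instability the post describes.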
Anyway, PPM (or DPMO) can be used as an indicator of capability, just like Cpk. The key is to know that it is just an indicator, not an actual value. For example, if I get a PPM of 500,000, the process is bad regardless of whether the actual value is 400,000 or 600,000. If I get a PPM of 3.4, the process is good regardless of whether the actual value is 0.34 or 34 (both limits are good).
A final note based on my personal experience: no process is perfectly stable, and conversions like the one between Cpk and PPM are only valid for stable periods. Even worse, not all instability events are detectable on a control chart. Sometimes a very strong special cause acts on just one or two parts (which turn out to be far out of tolerance) and then disappears until it appears again, several thousand parts later. Those parts are called outliers. Imagine a process that, on average, produces one of these parts every 10,000. It will be very difficult to “catch” one of those parts in a subgroup of a control chart or in the sample of a capability study. Even if you calculate a Cpk of 10 for this process, which would lead to a PPM of 0, the process will still have 100 PPM (1 out of 10,000). It is my experience that pretty stable processes with pretty good capability tend to have many more nonconforming parts than one would expect from PPM calculations based on the process distribution. This especially affects processes with high capability. In the previous examples (the ones with average 10), 1 out of 10,000 will not make a difference in case a), but it will make a major difference in case b).
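This outlier effect can be illustrated with a small simulation; the following Python/NumPy sketch uses hypothetical numbers (a rare special cause sending roughly 1 part in 10,000 far out of tolerance) to show a process whose sample Cpk looks excellent while its actual nonconforming rate is about 100 PPM:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
USL = 14.5

# Stable process: normal, mean 10, sigma 1 (Cpk = 1.5 against USL = 14.5).
values = rng.normal(10.0, 1.0, N)

# Rare special cause: about 1 part in 10,000 lands far out of tolerance.
outliers = rng.random(N) < 1e-4
values[outliers] = 20.0

# Actual nonconforming rate: dominated by the outliers (~100 PPM),
# not by the ~3.4 PPM the normal tail alone would predict.
actual_ppm = (values > USL).mean() * 1e6

# A typical capability-study sample is far too small to catch an
# outlier, so the estimated Cpk still looks excellent.
sample = values[:125]
cpk_hat = (USL - sample.mean()) / (3 * sample.std(ddof=1))

print(f"actual PPM: {actual_ppm:.0f}, estimated Cpk: {cpk_hat:.2f}")
```

The 125-piece capability sample almost never contains an outlier, so nothing in the study hints at the extra 100 PPM.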
Hope this helped.

January 21, 2003 at 8:56 am #82328
John Dubuc

Keep in mind that the best we can do to find truth is to question, sample, observe, and estimate (using approaches with few or no assumptions). This discussion has focused on questioning the rationality of converting capability estimates (say, Cpk) to nonconformities (say, PPM or DPMO). As nicely pointed out by everyone, beginning with Wheeler and Chambers [1], conversions from Cpk to PPM are laden with problems due to the forced specification of an underlying probability model to represent nature. By popular demand (or convenience), the normal probability model is used as the (incorrect) default. Translating capability ratios to PPM can get us into bad mud.
Why do we need to convert capability ratios to PPM when we can observe and estimate PPM with few or no assumptions? We observe nonconformities because specifications exist. When processes improve, the nonconforming events we observe come from the tails of the unknown underlying distribution. For processes operating close to Six Sigma, zero nonconformities or defects is the most common observation for a given area of opportunity or sample space. When we observe zero defects in a sample, should we report 0 PPM and say the process is performing at infinity Sigma? Oops! The iSixSigma Sigma calculator does.
Statistical Thinkers will calculate and report an upper 95% confidence bound for PPM [2]. For example, suppose you sample 300 units and observe 0 defects. A point estimate for PPM (or DPMO) is 0 and an upper 95% confidence bound for PPM (based on the Poisson probability model) is 9986 PPM. Thus we can report with 95% confidence that the process is performing no worse than 9986 PPM.
The benefit is that we don’t have to make assumptions about tail shapes; we just assume the defects are independent. Also, there are no further assumptions that can screw us up if we are asked to convert PPM to capability ratios or Sigma levels (i.e., converting in the opposite direction). For example, using the 9986 PPM upper bound estimate, we can report with 95% confidence that the process is performing no worse than 3.83 Sigma or 0.78 Cpk. These estimates are valid for benchmarking against processes that are closely approximated by the normal probability model, as dictated by the standard interpretation of these metrics.
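The numbers above can be reproduced; here is a sketch in Python with SciPy. The exact Poisson upper bound comes from the standard chi-square relationship, and the 1.5-sigma shift in the Sigma level is the conventional Six Sigma assumption:

```python
from scipy.stats import chi2, norm

n_units, defects = 300, 0
confidence = 0.95

# Exact upper confidence bound on the Poisson mean given x observed
# events: lambda_upper = chi2.ppf(conf, 2*(x+1)) / 2.  For x = 0 this
# reduces to -ln(1 - conf), about 2.996 expected defects.
lam_upper = chi2.ppf(confidence, 2 * (defects + 1)) / 2

ppm_upper = lam_upper / n_units * 1e6     # ~9986 PPM

# Converting back under the standard interpretation
# (normal model, 1.5 sigma long-term shift).
z_lt = norm.isf(ppm_upper / 1e6)          # long-term Z, ~2.33
sigma_level = z_lt + 1.5                  # ~3.83 Sigma
cpk = z_lt / 3                            # ~0.78

print(f"{ppm_upper:.0f} PPM, {sigma_level:.2f} Sigma, Cpk {cpk:.2f}")
```

Note that the conversion back to Sigma and Cpk reintroduces the normal model; only the PPM bound itself is distribution-light.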
[1] Understanding Statistical Process Control, 2nd Ed., by Donald J. Wheeler and David S. Chambers, 1992, SPC Press, pp. 129-130.
[2] Upper bound because one should be concerned with managing the risk associated with poor performance. Leave it to others to worry about something being too good.
Warmest regards

January 21, 2003 at 3:24 pm #82334
Thomas C. Trible
The contributors have presented good discussions concerning the underlying assumptions of process capability, percent-defective calculations, and DPMO. Notes on the Six Sigma Concept by William Latzko provides additional discussion of these assumptions:
http://deming.eng.clemson.edu/pub/den/six_sig.pdf

January 22, 2003 at 1:17 pm #82352
Gabriel

Why estimate PPM when we can measure it? I like it!
June 2, 2003 at 3:13 am #86561
Nivra Etrebac

I have verified in practice what Dr. Wheeler claims, but I cannot verify, under actual conditions, the practical benefit of PPM nonconforming as far as process capability is concerned. What is correct or not depends, in my opinion, on the purpose for which we use such numbers and how well they explain the current and future condition of the process. Dr. Wheeler has further explanation of this issue in his book Beyond Capability Confusion, 2nd Ed. As an engineer, I see the relevance of Dr. Wheeler’s argument in application; our educated customers never argue when we present the capability of our process using Dr. Wheeler’s approach, and they are happy with the quality of our products. In this context, Dr. Wheeler is right.
June 2, 2003 at 5:01 am #86563
John J. Flaig

Dave,

There are a lot of good points in the posts here. I would only like to add Dr. Deming’s comment on this subject: “What normal distribution?” John Dubuc discusses estimating the observed value of p and confidence intervals for the estimate.

Regards,
John
The forum ‘General’ is closed to new topics and replies.