Cp Cpk against Pp Ppk?
 This topic has 35 replies, 15 voices, and was last updated 20 years, 7 months ago by Gabriel.


February 27, 2002 at 6:29 pm #28871
I got this information from Ben. I understand the difference between Cp and Cpk perfectly well, but what is the difference between Cp & Cpk and Pp & Ppk?
——————————————————————————————
Posted by: Ben
Posted on: Thursday, 10th May 2001
Consider a car and a garage. The garage defines the specification limits; the car defines the output of the process. If the car is only a little bit smaller than the garage, you had better park it right in the middle of the garage (the center of the specification) if you want to get all of the car in the garage. If the car is wider than the garage, it does not matter if you have it centered; it will not fit. If the car is a lot smaller than the garage (a six sigma process), it doesn’t matter if you park it exactly in the middle; it will fit and you have plenty of room on either side. If you have a process that is in control and with little variation, you should be able to park the car easily within the garage and thus meet customer requirements. Cpk tells you the relationship between the size of the car, the size of the garage and how far away from the middle of the garage you parked the car. Hope this helps.
February 27, 2002 at 9:50 pm #72575
Pete Teti
I have several excellent papers that I have written and have graciously received from some statistician friends of mine that explain this concept in the clearest light. I will be happy to share! Call me at (860) 6544800. Also, go to http://WWW.AIAG.COM for an excellent text (the SPC Reference Manual) that explains this as well.
Pete Teti, Advanced Quality System Manager
Hamilton Sundstrand, United Technologies Corporation

February 27, 2002 at 11:12 pm #72576
Thanks, Pete Teti. Could you send that to my email? Or my fax number is 011526313111100, extension 3976.
thanks
February 28, 2002 at 2:35 pm #72599
One short article might help: http://www.pqsystems.com/eline/v200003/febanswer.htm
0February 28, 2002 at 2:38 pm #72601The difference between Cp and Pp is related to what data you are using. Typically, when a process is new, a preliminary capability is conducted. Any statistical assessment of capability on a new process/product based on a small sample size will be Pp and Ppk. Again, the P stands for preliminary. The capability assessment is done in a startup situation, frequently with engineers running the equipment, limited raw materials, and is a much more controlled assessment of the process variation. The time interval is usually only an hour or two. Once the process is running in normal production mode, then a capability assessment should be made. This “longterm” capability will allow variation from the different operators, different lots of materials, and all the normal type of variation that will effect the process during normal production periods. This data is most often collected on a statistical control chart and is usually over several days or longer. This capability assessment will use the Cp and Cpk designation.
Eileen
Quality Disciplines

February 28, 2002 at 2:56 pm #72603
Peter Wooding
You’ve said it all – who could ask for a more comprehensive answer? My ‘smart’ answer would have been: “If you measure Pp today, and it’s different tomorrow, it’s not Cp yet! Cp is what you get consistently, day after day, week after week.”
If ‘P’ stands for Preliminary (I didn’t know that), perhaps ‘C’ should stand for ‘Consistent’. By the way, what does the ‘k’ in Cpk stand for?

February 28, 2002 at 3:16 pm #72608
Eileen –
Other than yourself, can you offer any reference or resource that calls Ppk the “preliminary” process capability? This is the first mention I’ve ever seen. Is it simply terminology that you employ?
Most practitioners refer to the “P” indices as the long-term “performance” indices; see Breyfogle, “Implementing Six Sigma”, and Bothe, “Measuring Process Capability”. Bothe suggests both measures are “performance” capability indices, and uses the P indices for long-term capability and the C indices for short term.
A preliminary study is usually reported as Cpk since it is very short term.
The differences are in how sigma is estimated. In the C indices it is estimated from the range chart, and is thus a good indicator of the potential process capability without considering long-term shifts and drifts. In the P indices, sigma is estimated from all the data using the standard root-mean-square formula, adjusted by the unbiasing constants. This of course accounts for all the variation and gives an idea of how the process will perform over time versus customer requirements.

February 28, 2002 at 3:17 pm #72609
Peter, Cp & Cpk measure your “short-term” capability and Pp & Ppk measure process performance, the “long-term” capability. The “P” stands for performance index, not preliminary. The “C” is the capability index. Your analogy is valid (Pp today…), but it is transposed (Cp today…).
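A minimal sketch of the two sigma estimates described above (the d2 constants are the standard SPC table values; the subgroup data here are illustrative, not from the thread):

```python
import statistics

# d2 unbiasing constants for subgroup sizes 2..5 (standard SPC table values)
D2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326}

def sigma_within(subgroups):
    """Short-term estimate: average subgroup range divided by d2 (Rbar/d2)."""
    n = len(subgroups[0])
    rbar = sum(max(s) - min(s) for s in subgroups) / len(subgroups)
    return rbar / D2[n]

def sigma_overall(subgroups):
    """Long-term estimate: sample standard deviation of all readings pooled."""
    data = [x for s in subgroups for x in s]
    return statistics.stdev(data)
```

For a stable process the two estimates agree closely; a growing gap between them is itself a sign of special-cause variation.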
February 28, 2002 at 3:27 pm #72613
Eileen,
Quality Disciplines? Your answer is exactly backward.
The reference for the use of these terms is AIAG; they are defined on page 80 of the SPC book. C is always associated with short term, P is always associated with long term. Go look at a capability output from Minitab: Pp is always less than or equal to Cp, and Ppk is always less than or equal to Cpk. That is what happens when you consider all sources of variability.

February 28, 2002 at 3:41 pm #72617
Stan,
Don’t be rude!
I am speaking for automotive. Capability studies were used extensively during the 1980’s without any designation between preliminary (pre-production) studies and production (long-term) capability studies. In 1993, the AIAG group published the PPAP booklet, which on page 7 clearly shows the use of the P designation for the preliminary studies. I quote: “Preliminary process studies are short-term and will not predict the effects of time and variation, etc.”
In most cases, the preliminary capabilities (short-term Pp) will be less than the production capabilities (long-term Cp).
The confusion has come along with the current Six Sigma material written by Harry et al. There is considerable debate about the usage of the terms short-term and long-term. My response is based on the source of Harry’s work from Ford Motor Company.
We are all entitled to our own perspective on these concepts – especially if you have worked on process capability studies for 20 years.
EB

February 28, 2002 at 3:50 pm #72620
Dave,
No problem. My references are from automotive – the main source of these concepts for the past 20 years. Ford did extensive process capability studies in the 1980’s. There was no real distinction between the two studies. Initial studies were called pre-production or potential capability and still used the Cp designation. Production or long-term studies also used the Cp designation. In the early 1990’s this was changed: the preliminary or pre-production capability studies were to be designated as Pp or Ppk. This is clearly documented in the AIAG manual, Production Part Approval Process, on page 7, paragraph 2. It states: “Preliminary process studies are short-term and will not predict the effects of time and variation, etc.”
Since Davis Bothe is out of Detroit Diesel (GM), perhaps they had their own way. Hard to say why Harry and company presented these concepts the way they have in the Six Sigma material.
Eileen

February 28, 2002 at 4:31 pm #72625
Peter Wooding
Sorry Stan; I’m with Eileen on this. I’ve just refreshed my memory on the definitions on p80 of the SPC Manual and see no reference to ‘long’ or ‘short’ term. The distinction (ah yes, now I remember!) is whether the standard deviation (sigma) is calculated ‘properly’ as the rms deviation (Pp) or is estimated from Rbar/d2 (Cp). The latter only gives a ‘correct’ value for sigma if the process is in statistical control. In my experience, prior to quality maturation (i.e. before we’ve identified and got a handle on all process variables) the control charts vary with the weather and the value of the euro – you have to do the rms calculation. It’s only when everything is truly under control that Rbar/d2 works.
February 28, 2002 at 4:44 pm #72626
Peter Wooding
Ref my previous remarks (I’m with Eileen…): I omitted to remark that it’s not surprising you get a good preliminary Pp – you’re not representing all the variability. You might get just as good a Pp tomorrow, but the mean value might have shifted a few sigmas; if you added the two days’ worth together it wouldn’t look so good! In the longer run, after quality maturation, you’ll get the effect of all the variables, and the Cp might not be as high as the earlier Pp obtained with plenty of TLC, but at least it will be the same whether you look at a day’s worth, a week’s worth or a month’s worth.
February 28, 2002 at 4:46 pm #72627
Peter,
Your question is “What does the k stand for?” When you calculate the distance of the average from the closest specification limit, you use a z-score. That is, you calculate the distance of the average from the spec limit in units of the standard deviation (take the distance and divide by sigma). Then the minimum z value is divided by 3 to give Cpk. These normal z-scores are used extensively in statistical analysis.
This calculation used to be referred to as a k-value (1950’s?) (later referred to as z), hence the confusion. Maybe we should start calling it Cpz??
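A minimal sketch of that calculation (the mean, sigma and spec limits below are illustrative numbers, not from the thread):

```python
def cpk(mean, sigma, lsl, usl):
    """Cpk: the minimum z-score to a spec limit, divided by 3."""
    z_upper = (usl - mean) / sigma   # distance to upper limit, in sigmas
    z_lower = (mean - lsl) / sigma   # distance to lower limit, in sigmas
    return min(z_upper, z_lower) / 3

# A process centered at 10 with sigma 1 and specs 4 to 13:
# z_lower = 6, z_upper = 3, so Cpk = 3 / 3 = 1.0
```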
Eileen Beachell, Quality Disciplines

February 28, 2002 at 6:46 pm #72638
Another lovely debate on Ppk versus Cpk….
Luis – if you search the forum you will find a number of discussions on this subject. As for this discussion, I think both Eileen and Stan are correct (with the exception that Ppk is in most cases the lower number, i.e. less capable than Cpk, since the sigma estimate based on the range, used for Cpk, cannot be higher than the one based on the whole sample, used for Ppk).
I can see where in an automotive setting you will have a sample of 30 pcs for the PPAP approval. You will then run Ppk on these pieces for a preliminary estimate of the process potential – you will be hard pressed to apply Shewhart rational subgrouping to a 30-pc run in order to get to Cpk. You may also try to run a long-term study with Ppk using randomly selected, unordered data, although I do not advise that approach.
The one thing we need to remember is that whether you use Ppk or Cpk, your process has to be in statistical control or the calculated indices are meaningless.
As to the meaning of the letters in these symbols – that is a whole other discussion; let’s first try to agree on what each one of them means!
So here is my $0.02: any time you have unordered data, randomly selected from the process stream, use Ppk – but do not put too much weight on the results. If you have Xbar charts, then use Cpk.

February 28, 2002 at 6:55 pm #72640
Dear Peter. My email is [email protected]
Regards

February 28, 2002 at 7:15 pm #72642
Chaz,
Thanks for your clarification. I agree with you. Nice to hear a voice of reason.
The GE version of Minitab for the Six Sigma process report uses just the opposite convention for Cp and Pp. Here is how Minitab uses the indices:
Explanation of Cp and Pp from Minitab for GE Six Sigma:
Note: Cp and Cpk represent the potential (short-term) capability of the process. Therefore, these formulas assume that the process is centered on the target or on the midpoint between specs (since you can assume that this is possible to do). Cp and Cpk use short-term variability.
Note: Pp and Ppk represent the actual (long-term) capability of the process. Therefore, these formulas do not assume that the process is centered on the target or on the midpoint between specs. Pp and Ppk use long-term variability.
Here lies the source of much confusion.
Eileen
Quality Disciplines
February 28, 2002 at 8:10 pm #72647
Pat Hammett
I find this issue a great source of confusion for many students that I teach, so I thought I would respond. The issue of short term versus long term is not the best way to look at the difference between Cp/Cpk and Pp/Ppk.
Cp and Cpk use the method of Rbar/d2, or the within-subgroup method from ANOVA, in estimating the standard deviation. Implicit in the Cp and Cpk calculation is an assumption that the process is in a state of statistical control.
Pp and Ppk, on the other hand, use the sample standard deviation, or root mean square (the traditional standard deviation calculation, e.g. =STDEV(array) in Excel), in the calculation. In other words, you are not concerned with verifying a state of statistical control. Thus, over the long term, the mean of a process may be unstable (shifting around), but the overall distribution may still be estimated assuming normality. Since Pp/Ppk use the sample standard deviation instead of a within-subgroup estimate, Pp and Ppk are sometimes considered long-term capability (if the mean is unstable, the overall variation will be larger than the within-subgroup variation, and will result in a Pp which is smaller than Cp).
However, if you take, say, 30 consecutive measurements from a process, using the sample standard deviation is really a short-term measure of capability. Again, for small samples, Pp and Ppk are often used because the sample standard deviation is preferred over within-subgroup variation estimates such as Rbar/d2 (note: you have not had a chance to establish statistical control with such a small sample). In this example, Pp is really short-term capability.
Unfortunately, people try to equate Pp/Ppk vs Cp/Cpk with short-term vs long-term capability. In practice, it is better to recognize that they simply use different methods to estimate sigma. So, Pp could actually be a measure of short-term capability (based on measuring 30 consecutive parts) or of long-term (actual) capability (based on measuring parts over a longer term where the mean is shifting around). Cp and Cpk are more appropriately used where you have established a state of statistical control of your process and you wish to estimate capability based on a stable estimate of the variation (supported by in-control SPC charts).
By the way, the issue of centering really has to do with the difference between Cp/Pp and Cpk/Ppk. Cp/Pp exclude centering and Cpk/Ppk include centering in estimating capability.
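A quick simulation makes the point above concrete (a sketch with illustrative parameters, not from the original post): subgroup means drift while the within-subgroup spread stays fixed, so the overall standard deviation exceeds the within-subgroup estimate and Pp comes out below Cp.

```python
import random
import statistics

random.seed(1)

# Simulate 20 subgroups of 5 readings: the subgroup means drift (an unstable
# process), while the within-subgroup spread stays fixed at sigma = 1.
subgroups = []
for _ in range(20):
    shift = random.gauss(0, 1.5)                       # between-subgroup drift
    subgroups.append([random.gauss(10 + shift, 1) for _ in range(5)])

d2 = 2.326                                             # SPC table constant, n = 5
rbar = sum(max(s) - min(s) for s in subgroups) / len(subgroups)
sigma_within = rbar / d2                               # the Cp/Cpk estimate
all_readings = [x for s in subgroups for x in s]
sigma_overall = statistics.stdev(all_readings)         # the Pp/Ppk estimate

usl, lsl = 16.0, 4.0
cp = (usl - lsl) / (6 * sigma_within)
pp = (usl - lsl) / (6 * sigma_overall)
# Because the mean drifts, sigma_overall exceeds sigma_within, so pp < cp.
```

With a stable mean (shift of 0), the two estimates, and hence Cp and Pp, come out nearly equal.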
just my two cents on the topic,
Pat Hammett,
University of Michigan

March 1, 2002 at 1:53 pm #72695
Pat,
I too teach Six Sigma classes. Several friends also teach the Master Black Belt classes for GE. I have a master’s degree in statistics, and I understand the difference in the estimates of standard deviation.
The difference has occurred based on material out of Mikel Harry’s teachings. Your statements are correct, and are a nice summary of teachings found in the Harry and other materials.
All of these are interpretations of the work done at Ford in the 1980’s. Whether or not you are able to use a control chart on a new piece of machinery and estimate the capability of the process from Rbar/d2, many in automotive will define this as a potential or preliminary study. It doesn’t matter how you estimate the variation; you can use a small sample size of 30 or you can use a control chart with 100 samples. It is still trying to understand the inherent variation in the equipment and define it in some preliminary way. The time frame is still relatively short. Hence, short-term capability.
The long-term capability allows as much of the variation in the entire process as possible to be assessed. This will include the inherent machine variation, the raw materials, the environment, measurement, etc. The purpose is to capture the variation and quantify it. I know the Six Sigma material spends a great deal of effort regarding the shifting, leading up to the big shift of 1.5 sigma. Enough has already been said about this, but I have not seen processes behave in this way. Harry’s reference is from the 1950’s and is very controversial.
I know that the Minitab Six Sigma version agrees with your explanation, but not all software does. Nor do all companies.
That’s my one cent. EB
March 4, 2002 at 7:49 pm #72789
Gabriel
As far as I understand, P (in Pp, Ppk) is for performance, and C (Cp, Cpk) is for capability. The main and important difference is not in the length of the study, or in whether it is preliminary or not, nor in the formulae (which of course ARE different). It is a concept: Pp/Ppk is a report of what happened, and Cp/Cpk is a forecast of what we can expect in the future. The following is something I wrote in a previous thread:
I think that the main difference between Pp/Ppk and Cp/Cpk is not a formula, but a deep concept. Pp/Ppk refers to history, while Cp/Cpk refers to the future.
Pp/Ppk are the process PERFORMANCE indices. They tell you how the process PERFORMED in comparison with customer needs, only for the period of time of the study. On the other hand, Cp/Cpk are the CAPABILITY indices. They tell you how the process WILL PERFORM in the future in comparison with customer needs.
For the performance study (Pp/Ppk), time is not important, because the result is linked only to the period of time of the study. You can also have “special causes of variation” (an unstable process), because they are taken into account in the formula (the data must be more or less normally shaped for it to be accurate).
Instead, in the capability study, you are trying to predict the future, so:
– The study must take long enough to include all common causes of variation that could ever be involved (change of operator, raw material, environmental conditions, set-ups, perishable tooling status, etc.). That’s why it is tricky to perform a short-term Cp/Cpk calculation.
– The process must be in statistical control (i.e. stable; no special causes of variation must be present). There are two reasons for that: a) The formulae for Cp and Cpk do not account for variation due to special causes. If this were the only problem, it would be possible to use the same formulae as for Pp and Ppk; in fact, when a process is in statistical control over a long study, Cp and Pp will give you nearly the same figure, and the same goes for Cpk and Ppk. b) (Most important) If your process is not stable, you cannot know how much variation due to special causes you will have next time, so it is not possible to predict anything. That’s why, in any SPC manual, you can find many times a phrase like “WARNING: Cp and Cpk values have NO MEANING AT ALL if the process is not in statistical control”.
Hope this was helpful. Please give feedback.
Gabriel

March 5, 2002 at 2:40 pm #72823
Gabriel,
Your comments on Pp/Cp are somewhat different from my experience and from the Six Sigma material on capability by Harry. My perspective on this topic was formed in the 1980’s, when I worked on numerous process capability studies in the automotive industry. So, there are some aspects of capability on which we will just have to agree to disagree.
We do agree on the following:
1. A process can be assessed for short-term and long-term capability.
2. The short-term capability or performance will be better than the long-term capability.
3. The short-term capability is an assessment of what happened at the time of the study. It does not have the predictive ability of the long-term study. (I have no disagreement if you prefer to call it performance.)
4. A process capability study cannot be predictive without the verification of control via statistical process control. I agree that the long-term study truly reflects the real capability of the process. Again, if you prefer to call this the capability study, that’s fine by me.
I’m a little puzzled why you believe that you need not demonstrate control during the short-term study. I know a lot of material doesn’t require it, but it seems a bit risky. If any special causes do show up in the short-term (or performance) study, you will have a very unstable process that will need some attention. In other words, you don’t have a chance in your capability study.
I don’t believe that a capability study will have the same results as the short-term or performance study. Most of my short-term studies are done on brand new equipment and production lines. The studies are frequently conducted by engineers and very skilled technicians. There is much less variation in this assessment (your performance study) than in the actual long-term capability study. Entropy will come in and affect the process. Perhaps this is not when you do your performance assessments.
By the way, what calculations are you using in your performance and capability studies?
Eileen
Quality Disciplines
March 6, 2002 at 12:02 am #72859
Gabriel
Dear Eileen: I also work with SPC. I have been working 6 years as a Quality Engineer (now Quality Manager at SKF Argentina), and I also took a Master in Quality Engineering. There are different sources of information about SPC, and they don’t agree on everything, because each author uses his own criteria, and I also use mine. That’s why, and I want this to be very clear, all that I said and all that I will say is what I understand to be correct, which can differ from what others understand to be correct. Please forgive the length of this message; I am not a native English speaker, and I am being very detailed to be sure that those who read it can understand what I am trying to say.

Having said that, let me quote the current version (1995) of the SPC manual that is part of the QS9000 (AIAG – Ford, GM, Chrysler) package, on pages 79 and 80, under the title “Understanding process capability and process performance for variable data”:

1) Inherent process variation: That part of process variation due to common causes only, which can be estimated from the variation in each subgroup (Rbar/d2, Sbar/c4, etc. Note: Sbar is the average of the subgroups’ standard deviations in an Xbar–S chart).
2) Total process variation: Process variation due to both common and special causes, which is estimated from the sample standard deviation (Stot), using all of the individual readings obtained from either a detailed control chart or a process study. (Note from Gabriel: note here the “process study” as opposed to a detailed “control chart”.)
3) Process capability: The six sigma range of a process’s inherent variation (see 1), for statistically stable processes only.
4) Process performance: The six sigma range of a process’s total variation (see 2). (Note from Gabriel: no mention to stability here)
5) Cp: Capability index. Is the range of the specification divided by the process capability, irrespective of process centering.
6) Pp: Performance index. Is the range of the specification divided by the process performance, irrespective of process centering.
End of quote.

You can find similar definitions for Cpk and Ppk. There is no mention of whether something is preliminary or not, or short or long term. Now, my conclusions. Given one population, Rbar/d2 and Stot are two estimators of the same population characteristic: sigma. One addresses the question: from the population I take several samples of n (subgroups), and in each sample I find a max and a min; which is the most probable value of the population’s sigma? The other question would be: I have this sample of N (the total sample) from the population; if the sample has a given S, which is the most probable value for the population’s sigma?

What is the difference, then, between Cp and Pp? Your “population” is in fact an ongoing process. When you calculate Pp and Ppk, you forget about this fact and consider only the “final” population of pieces manufactured, as a whole batch. When you calculate Cp and Cpk, you are assuming that the different samples taken each time (subgroups) are taken from a process that was “delivering” the same distribution each time, and this is what we call a “stable process”. In other words: Pp doesn’t take “time” into account. All individual values have the same weight, no matter if it’s the first or the last, or whether it belongs to this or that subgroup. Cp does not directly take into account the average of each subgroup. If you forget for a second about the stability requirement, the following examples will lead to the same Cp, but different Pp (nine pieces, 3 subgroups of three each): Example 1) (9,10,11); (8,10,12); (10,10,10). Example 2) (9,10,11); (4,6,8); (15,15,15).

After this introduction, I will answer your “disagreements”, which I will rephrase as questions. Why should a capability study ever have the same results as a short-term or performance study?
I didn’t say that, because I didn’t say anything about “short term”, except that you should not perform a capability study and calculate Cp/Cpk in the short term, because you must ensure that all common causes of variation (operator, raw material…) are present in the study to make a valid prediction of what to expect from the process in the future, which is what a capability study is about. What I said is that, if the process is stable, Cp will be very similar to Pp, and Cpk to Ppk. Maybe I wasn’t clear last time, but I meant that you are using the same data to calculate the indices. I didn’t mean that if, in a stable process, you make a short-term study (two shifts), the Pp will be the same as the Cp for the next whole month of production. But if you calculate Cp and Pp from the same raw data (never mind whether it’s short or long term), and the process is stable, Cp and Pp will be very alike. That’s because both Cp and Pp are based on estimators of the same characteristic of the same population, only that one of the estimators (the one for Cp) needs the condition of process stability to be valid. Given this condition, both estimators are statistically good estimators of the population’s sigma (which, even when unknown, exists and is unique), so Cp and Pp are alike. Reread the definitions and look at this simple theorem!
– Cp = Spec / (6 x Inherent Variation)
– Pp = Spec / (6 x Total Variation)
– Inherent Variation = variation due to common causes
– Total Variation = variation due to (common + special) causes
– Let a process be statistically in control. By definition of a “statistically in-control process”, there are no special causes of variation. Then Inherent Variation = Total Variation. Then Cp = Pp. This means that, if a process is stable, Cp and Pp are not “alike”, but identical. If you don’t get identical figures, it is because a process is never 100% free of variation due to special causes, and because you are using only estimators of “Inherent Variation” and “Total Variation”, and so you get only estimators of Cp and Pp, and not the “true” values of these indices. Did you know that, because of the error of using a statistical estimator instead of true values, Cp, Cpk, Pp and Ppk have confidence intervals? Even more, on page 86 of the same reference it recommends making a graphical follow-up, plotting “inherent variation” versus “total variation”, because “the size of the gap between the process ‘capability’ and ‘performance’ is a measure of the degree to which the process is out of control” (quote). Understand? No gap = in control.

Why don’t you need to demonstrate control during the short-term study?
Again, I didn’t say that. I just said that “for a performance study (Pp/Ppk, and I didn’t say ‘short term’), time is not important (not ‘time is short’), because the result is linked only to the period of time of the study”, and that “you can also have ‘special causes of variation’ (an unstable process), because they are taken into account in the formula (the data must be more or less normally shaped for it to be accurate)”. So let’s rephrase the question: Why don’t you need to demonstrate control during a “performance” study? If you use Pp and Ppk as indices of performance, then you are using “total variation”, that is, the “variation due to common and special causes” (definition 2). What would be the logic of including the special causes of variation in the definition of performance if you put special causes of variation out of the scope of a performance study? Remember: performance = history; no prediction is involved in that. The performance study is a statistical method, based on sampling, to estimate, among other things, how many parts I can expect to be out of the specification in the process run that I have just finished (past tense). In the limit, I could measure 100% and I would have not a statistical estimation but the actual quantity of parts out of tolerance. I don’t need a stable process to do that. Now, if you have a new process and you are in the trial run, and you want to have an “idea” (don’t call it a “forecast”) of what the process performance could be, but the run is not long enough to include all possibly present common causes of variation, and because of that you cannot perform a “capability study”, so you perform a “performance study” (which, in this case, is also a “short-term study”) and use Pp and Ppk as the very first figures of a hoped-for capability, then you had better have a stable process! But that is because you are trying to, somehow, guess capability, not performance.
Doing this is not very reliable, and that’s why AIAG customers require a Ppk of 1.66 in a preliminary “short-term” performance study, with two intentions: 1) to have good certainty that all pieces delivered from the trial run are within specification; 2) to be more or less confident that a Cpk of 1.33 is attainable in serial-run conditions.
Why, then, do we use a short-term preliminary performance study as an indicator (a poor one, but an indicator) of expected performance? Because you cannot do anything else in a short-term run, and that is what a trial run usually is. That’s why, even if you have a very good Ppk (let’s say 2 or more) in such a study, the customer will still require a long-term “performance” study (which can only be long term) to verify whether the expectation of Cpk > 1.33 was finally attained or not in serial runs.

The disagreement began because you understood that I was speaking of a “short-term capability study” when, in fact, I was speaking of a “performance study”, and a “capability study” is a “long-term capability study”. I will finish with the following summary:
– In a “performance study” (Pp/Ppk) you are answering: “How DID the process perform in comparison with customer needs?” It can be long term or short term. The process may be stable or not. It doesn’t tell you how the process will perform next time.
– In a “capability study” (Cp/Cpk) you are answering: “What can I EXPECT from the process in the future?” It cannot be short term. The process must be stable.
– You cannot make a “short-term capability study”. If you need to have an idea of what the capability could be and you don’t have time for a long-term study, then you can perform a short-term “performance study”, but: a) the process MUST be stable; b) you must be conservative (look for Ppk = 1.66 if you want Cpk = 1.33); c) you have to confirm the results later with a true “capability study”.

About the calculations:
Cp = (USL-LSL)/(6 x Rbar/d2)
Cpk = min((USL - Xbarbar),(Xbarbar - LSL))/(3 x Rbar/d2)
Pp = (USL-LSL)/(6 x Stot)
Ppk = min((USL - Xbarbar),(Xbarbar - LSL))/(3 x Stot)
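A minimal sketch of these four formulas (mine, with illustrative spec limits of 4 and 16), checked against the two nine-piece examples given earlier in this post, where the subgroup ranges are identical but the overall spread differs:

```python
import statistics

D2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326}          # standard SPC table values

def indices(subgroups, lsl, usl):
    """Return (Cp, Cpk, Pp, Ppk) computed per the four formulas above."""
    n = len(subgroups[0])
    rbar = sum(max(s) - min(s) for s in subgroups) / len(subgroups)
    sigma_within = rbar / D2[n]                        # Rbar/d2
    data = [x for s in subgroups for x in s]
    sigma_total = statistics.stdev(data)               # Stot
    xbarbar = statistics.mean(data)
    cp = (usl - lsl) / (6 * sigma_within)
    cpk = min(usl - xbarbar, xbarbar - lsl) / (3 * sigma_within)
    pp = (usl - lsl) / (6 * sigma_total)
    ppk = min(usl - xbarbar, xbarbar - lsl) / (3 * sigma_total)
    return cp, cpk, pp, ppk

# The two nine-piece examples: identical subgroup ranges, different overall spread.
ex1 = [(9, 10, 11), (8, 10, 12), (10, 10, 10)]
ex2 = [(9, 10, 11), (4, 6, 8), (15, 15, 15)]
```

Running `indices(ex1, 4, 16)` and `indices(ex2, 4, 16)` gives the same Cp for both, but a much smaller Pp for the second, unstable example.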
Notes: Cp and Cpk as shown here are for an Xbar-R chart. Stot is the sample standard deviation of all individual values in all subgroups (in case you are using subgroups), all together.

Finally, I want to remark that this is my understanding. As I think it is correct, I hope I convinced you. If it is not correct, I hope you convince me, so I can do things right in the future and improve my knowledge. If we cannot reach an agreement on what the “correct” way is, I am ready to accept that there can be different opinions, and no one has the right to claim one of them as “the absolutely correct one”. Anyway, I will very much appreciate you telling me what you think about it. Best regards (and forgive my English)
Gabriel0March 6, 2002 at 4:46 pm #72894
Gabriel (Participant, @Gabriel)
Stan: I strongly disagree with you. I don't know what SPC manual you are reading, but my AIAG SPC manual, on page 80 (where the definitions of the indices are given), says nothing about "term", whether short or long. Also, when you consider all sources of variability (including special causes), Cp and Cpk do not exist (read page 80). How can you compare Pp with Cp, or Pp with Ppk, then? By the way, if you falsely consider that the Cp figure has any meaning at all in a non-stable process, then Pp will be more than Cp, not less as you said. I recommend you read my message that appears above in this thread.
Gabriel0March 6, 2002 at 6:11 pm #72896Dear Gabriel,
Thanks for the post. Now, I understand your perspective on the variation studies. Just to make certain, let me rephrase what you have stated. (I reread the AIAG SPC manual).
The performance study (Pp) includes the total variation in the index for Pp. This variation includes both common and special causes of variation. As a result of this definition, the process could be very unstable.
The Process Capability (Cp) is estimated only from a stable process, with no special causes on the control chart.
I think you need to look at the history and intent of the use of these indices to achieve high quality products: low variation centered around the target. The AIAG manual was taken from the Ford Motor Co. manual titled "Continuing Process Control and Process Capability Improvement", written by Pete Jessup in 1984. The intent of the manual was to help Ford and their suppliers understand and apply SPC to production processes, and to assess capability associated with a stable control chart created from typical production activity. There was no distinction between performance (or preliminary, for that matter) and this capability study. The assumption was that until a process was brought into a state of control, there was no production assessment of capability. Even by doing the number crunching and calculating a Pp on an unstable process, it really is of little value (my opinion).
In 1984, during the launch of a new transmission for Ford (AXOD), a decision was made to make certain that all machines purchased for making the parts would be capable of performing. I did extensive work on this effort. We made a decision to perform Machine Capability studies, sometimes called potential studies, on the machines at the machine builders' facilities. At that time, we had written in the contract that this assessment would have to meet a Cp of 1.33. Once the machine passed this capability requirement, it was shipped to the production plant and reassessed for capability. In this case, it was called a short-term or pre-production capability study; again, no Pp, just Cp. The goal was to achieve a capability of 1 during routine production. So, once in production, the process was assessed again for what was referred to as long-term capability: still Cp. The real goal was to ensure, at every critical engineering phase, that we could proceed with some level of confidence to the next stage. Upon the completion of a very successful launch, the decision was made to use this approach for the launch of a new engine called the Romeo engine. In this case, the machine builders were invited to a meeting before bids were placed and were told that they would have to meet a machine capability or preliminary capability of 2. This was based on our experience with the AXOD machines. The expectation for pre-production was 1.67, and for long-term capability 1.33. Again, there was never any reference to the performance concept. This strategy is really the basis of the material in the AIAG manual Production Part Approval Process. The use of the preliminary process capability studies is defined on page 7.
So, now the question: why was material on this performance study added to the AIAG manual? Who knows? I don't. It doesn't make any sense to me. If I made a guess, I would suspect that it was added to force suppliers to understand the influence of the special causes on the capability, even without stability. I suspect a lot of SQA engineers heard from their suppliers that the processes had special causes and they could not calculate capability; hence the performance assessment. In the AIAG book, they make the following comment on its usage: "It should be used to compare to or with Cp and to measure and prioritize improvement over time." So, is it being used when the process becomes unstable, to try to work back to the original stability of the process? If the process is unstable, the calculated index is meaningless. It doesn't reflect anything about the process, not even an assessment of its history. The greater the gap, the more priority it should be given?
As for the statistical debate, my perspective is different. Dr. Deming was my mentor for 12 years. Although trained as a classical statistician, I view most statistical applications from Dr. Deming's perspective. In this case, what does that mean? First, there is no such thing as a "true" value. Everything is an estimate. Most of the value in capability assessment is in the prediction. I don't see the relevance of the confidence intervals on the estimates. I believe this because it is an analytic study, not an enumerative study (prediction vs. history). If you are inclined to use confidence intervals, go ahead. I am curious about how you may use the information and at what level you choose (classical side coming out of me).
You are fortunate to work for SKF. I was offered a job with them in Europe to work on quality in the late 1980s. They are well ahead of most companies in the use and implementation of SPC and other statistical methods. Perhaps you should ask Chris Hellestrand (a student of George Box's) how he views the purpose and intent of the performance index.
You are correct in stating that once the special causes are removed from the process, it should be close to the Cp. It does make me wonder just exactly how you are using and interpreting the performance indices.
Out of curiosity, is your goal 1.33 for process capability? If so, why? I know how and why it was selected by Ford, but how did your company choose to use it?
Your English is excellent and I have no problem understanding it.
Sincerely,
Eileen
Quality Disciplines
0March 6, 2002 at 6:19 pm #72897
Gabriel (Participant, @Gabriel)
Pat:
Thank God you are there! I was beginning to think that I was the only one with a point of view like yours.
Your message is clear, concise, and correct.
Unfortunately I hadn't read your message before I wrote my five-pages-long message, which you can find above somewhere in this thread, where I tried to explain in detail what I understand to be correct about process studies. Your message, which came before mine, is a great summary of what I said. Anyway, if you don't mind getting bored, I would like you to read that message, just to confirm that we agree, and to give me more self-confidence that my concepts are not wrong. With so many people holding a different point of view…
Gabriel0March 6, 2002 at 11:33 pm #72908
Gabriel (Participant, @Gabriel)
Dear Eileen:
Thank YOU for your post. I will try to answer some of your questions.
====
QS9000, 4.9.2 - Preliminary Process Capability Requirements: "… a Ppk equal to or greater than 1.67 should be achieved for preliminary results and for chronically unstable processes"
QS9000, 4.9.3 - Ongoing Process Performance Requirements: "For stable processes … a Cpk equal to or greater than 1.33 should be achieved"
That is where I took the examples from, when I said what a customer will require. Now, don't ask me why in QS9000 they link Ppk with capability and Cpk with performance, when in the SPC manual they define the indices the opposite way. I like the "SPC manual" way.
====
You said: "The assumption was that until a process was brought into a state of control, there was no production assessment of capability. Even by doing the number crunching and calculating a Pp on an unstable process, it really is of little value (my opinion)". Well, it seems that our opinions fully agree on this point, as long as we are speaking of "capability" (prediction). Still, the Ppk can tell you, even in non-stable processes, how the process performed. Look at this: You come to my factory and, from a batch of 3000 finished parts, you take a random sample of 150 parts and measure a characteristic. You calculate the average and S of the sample, and find that Xbar - 3S is at 1S from the lower limit and that Xbar + 3S is at 2S from the upper limit. You also plot a histogram and find that the data from the sample fits the normal distribution more or less. Won't you feel confident that the whole batch will meet the specification? Well, this is Pp = 1.5 and Ppk = 1.33. And we said nothing about control charts, stability, etc. Now, don't use this data to say that next time you will expect the same figures, unless you know that the process is stable (and in this case, we know that Pp and Cp are statistically equal)
====
About the philosophical subjects of the "true value" and "confidence intervals". In the previous example, would you still be confident that the whole batch will meet the specifications if the same figures were obtained from a sample of 10 pieces? Probably not. Pp will still be 1.5 and Ppk will still be 1.33. So why are you not confident now? If I take several samples of 150 pieces from the same batch and perform the study for each, I will probably not find much difference between the figures from each sample. If I do the same with several samples of 10 pieces, I will probably find a BIG difference between the figures from each sample. In other words (the following figures are for the concept only, not calculated ones): if in a sample of 150 pieces I find a Pp = 1.5, I am 95% confident that the Pp of the population is somewhere between 1.3 and 1.7. If I find a Pp = 1.5 but in a sample of 10 pieces, I am 95% confident that the Pp of the population is somewhere between 0.8 and 2.2. And those are confidence intervals.
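The sample-size effect Gabriel describes (he stresses his interval numbers are for the concept only, not calculated) can be reproduced in spirit with a percentile bootstrap. The spec limits, seeds and sample sizes below are arbitrary choices for this sketch:

```python
import random
import statistics

def pp(values, usl, lsl):
    """Pp from raw values: (USL - LSL) / (6 * Stot)."""
    return (usl - lsl) / (6 * statistics.stdev(values))

def bootstrap_ci(values, usl, lsl, reps=2000, alpha=0.05, seed=42):
    """Percentile-bootstrap 95% confidence interval for Pp."""
    rng = random.Random(seed)
    estimates = sorted(
        pp([rng.choice(values) for _ in values], usl, lsl)
        for _ in range(reps)
    )
    return (estimates[int(reps * alpha / 2)],
            estimates[int(reps * (1 - alpha / 2)) - 1])

# Same underlying process (sigma = 1, spec width 9 -> Pp = 1.5),
# measured once with 150 pieces and once with only 10
rng = random.Random(0)
big = [rng.gauss(0, 1) for _ in range(150)]
small = [rng.gauss(0, 1) for _ in range(10)]
for sample in (big, small):
    lo, hi = bootstrap_ci(sample, usl=4.5, lsl=-4.5)
    print(len(sample), round(lo, 2), round(hi, 2))
```

The 10-piece interval comes out far wider than the 150-piece one, which is the whole point of the example.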
====
You ask: “how you may use the information and at what level you chose” (about confidence intervals), “how you are using and interpreting the performance indices”, is your goal 1.33 for process capability? If so, why?
I'll tell you the process when the limits are already defined, for example from a short-term Ppk study that showed stability on the control chart: We follow the process with an SPC chart. Periodically we load the data of the control chart into a PC and print a report. This report contains:
- The control chart itself.
- Two histograms, one including all points and one including only the in-control points.
- The Pp and Ppk values.
- The Cp and Cpk values, and the confidence limits for a two-sided 95% level (why 95%? I don't know).
- Three charts of historical values (from this and from previous reports) showing: a) Cp evolution, with calculated values, interval limits and the value assumed when the control limits were defined. It is expected that the assumed value falls within the confidence interval each time. It is also expected that the Cp value is continuously improving (that means that, at some point, the assumed value will fall outside the confidence limits and then the limits may need to be recalculated). b) The same for Cpk. c) Evolution of Cp and Pp in the same chart, to see the gap between Total Variation and Inherent Variation. If there is a gap, we expect it to get smaller and smaller.
About the goal for the Cpk, our target is to improve, with priority on the lower-Cpk processes. However, our automotive customers require 1.33. So we use "performance indices" as initial studies (I don't like to call it "capability") and for ongoing assessment of the "degree up to which the process is out of control", as said in the SPC manual.
Hope you make a comment.
Gabriel0March 7, 2002 at 3:24 pm #72950Gabriel,
Let me try to explain what QS9000 is referring to regarding capability.
Section 4.9.2 relates to the preproduction studies required for the Production Part Approval Process (PPAP). This is not meant for an existing process. Nor does this have anything to do with the SPC manual, which is intended for existing production processes. The intent of PPAP is to conduct variation analysis on the machines prior to installation on the factory floor (ideal) or just after installation in the factory, before production begins. Again, this is not to be used for an existing production process. The original use of the term Ppk was only in this application. It was intended to demonstrate the preliminary or potential capability of the equipment prior to production.
Section 4.9.3 relates to the production process. In most of Ford's original material, the Cpk index was always referred to as the Performance Index, since it uses both the process location (average) and the process variation. Again, at that time there was never a capability assessment on an unstable process as defined in the SPC manual. Most of Ford's material was used to create the AIAG quality manuals and the automotive requirements for QS9000.
It seems to me that you are using the confidence intervals on the performance index for batch sorting. You are correct that all the statistical analysis is appropriate including the calculations and usage of confidence intervals. You said:
If in a sample of 150 pieces I find a Pp = 1.5, I am 95% confident that the Pp of the population is somewhere between 1.3 and 1.7. If I find a Pp = 1.5 but in a sample of 10 pieces, I am 95% confident that the Pp of the population is somewhere between 0.8 and 2.2.
So, would you ship this product to an OEM knowing that the lower performance index confidence limit is 1.3 ?
(One final note on the production performance index)
Although it appears on two pages in the SPC manual, and people teaching the manual cover this concept, it is in conflict with previous and existing materials. I know that none of my clients are following or using that concept, at least not in my presence. I would advise companies to ignore it. It is nonsense to use this on production processes. I understand how you are using it in your company. How is this really working for you? It seems, with the amount of instability in your process, that you are simply chasing your processes month to month. You may never achieve any sense of real stability in your product. This certainly guarantees job security for the Quality department.
I asked you about your requirements for the process capability. You repeated the minimums specified in the OEM requirements. Is that all you are working to achieve? For the AXOD Transmission, there were 8000 dimensions, 1800 of which were critical characteristics. The average Cpk was 4. In precision manufacturing, such as grinding operations for ball bearings, a Cpk of 1.33 is not acceptable for producing a high-quality (manufactured at target, not just within spec) product. It is only a minimum to show compliance to engineering specifications.
Are you only trying to achieve capability on the automotive products? Perhaps that is why you are relying on the application of the so-called performance index to define capability for batches after production. Sounds like you are trying to inspect quality into a finished part.
Perhaps you should take a step back from all the requirements and think about what you would like to achieve with the product and the processes. Does it help you to look at the gap between the performance index and the capability index? Why do you even need to use the performance index? Why are you unable to keep your production processes stable? Are you only using SPC on the automotive OEM part numbers? If so, why? If you really want to satisfy your customers (all of them), focus on stabilizing the processes. Why do you have so many special causes? Can't you remove them from the process? What are you doing to continuously improve your processes, even those that have a Cpk of 1.33?
Hope this helps.
Eileen
Quality Disciplines0March 7, 2002 at 5:08 pm #72962
Mariano (Participant, @Mariano)
After reading all of the previous messages on this thread, and based on my previous knowledge, I would like to make a couple of comments and questions.
1. As long as our process is in control, we could use either Ppk or Cpk. Both will give us similar results, but with different assumptions (prerequisites) and different scope.
2. When you are going to study a new piece of equipment, you do not have enough history to determine whether it is a stable process. What index do you use then? Or how can you prove stability (knowing the "usual" restrictions of time and tests for production release) and then use these indices?
3. To my understanding, the formulation of these indices is based on a normally distributed process. In my experience (automotive and electronics industry), that seldom occurs (except in textbooks). Are there better (standardized) ways to analyze Capability and Performance?
Thanks.0March 7, 2002 at 11:07 pm #72969
Gabriel (Participant, @Gabriel)
Mariano:
About your 3 points:
1) This is mainly true, if the process is really stable. But there is one thing nobody wants to talk about. What is a stable process? A process with no special causes of variation. How do you tell that? Because the control chart shows no out-of-control point (or pattern). Nevertheless, a point out of control is one that has about a 99.7% chance of belonging to a distribution different from the "stable" one. What does that mean? That you can have special causes of variation that are not detected by a control chart. Of course, the variation due to such a special cause will not be too big. For example, how do you tell that the average has shifted up? You would expect to find 7 points in a row above the average if so. The probability for a point to be above the average is, in a stable process, 0.5 (50%). Then, the probability of seven points in a row above the average is 0.5^7 = 0.008 = 0.8% in a stable process. So, if you find this pattern, you are 99.2% (100 - 0.8) confident that these points do not belong to the stable process you had before they appeared. Now, let's suppose that at some point the process average shifts up by an amount equal to sigma/(2·n^0.5) (n being the subgroup size, this figure is 1/2 sigma of the averages' distribution that you chart). What is now the chance of finding 7 points in a row above the average? One point above the average is now about 69.15% probable, so seven points in a row is 0.6915^7 = 0.075 = 7.5% probable. So you can have a special cause of variation (for example, a stop has moved) and find no signal on the control chart. Again, to go unnoticed on the chart, the variation due to special causes must be small compared with the process spread. But it will still have some influence on Pp (the total spread will be a little larger than without the special cause), while Cp will give you the same figure with or without this special cause.
So the gap between Cp and Pp, especially if it is plotted in a "gap history" chart, may give you an idea of the extent to which the process is out of control, even without out-of-control signals on the chart. So, Cp will still tell you how well the process can perform when it is free of variation due to special causes, and Pp will still tell you how the process performed.
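The two run-rule probabilities above (0.8% for a stable process, about 7.5% after a half-sigma shift of the averages' distribution) can be checked directly; this is just a sketch, with an invented function name:

```python
from statistics import NormalDist

def prob_seven_in_a_row(shift):
    """Probability that 7 consecutive subgroup averages fall above
    the old centerline, when the process mean has shifted up by
    `shift` standard deviations of the averages' distribution."""
    p_above = 1 - NormalDist().cdf(-shift)  # P(one average above centerline)
    return p_above ** 7

print(prob_seven_in_a_row(0.0))  # stable process: 0.5**7 = 0.0078125
print(prob_seven_in_a_row(0.5))  # half-sigma shift: about 0.0756
```

So even after a real (small) shift, the 7-in-a-row rule only fires about 7.5% of the time in any given window of seven points, which is Gabriel's point about undetected special causes.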
2) The best you can do is to make a short-term Pp/Ppk study, verify that the process is stable as far as you can see in this limited control chart, and take Pp and Ppk as a very first expectation of what the capability could be. Remember that, in a short study like this, some normal causes of variation will be absent (changes in operators, in batches of raw material, etc.). So expect that the capability of the process under serial conditions may be lower than what you find in this study (QS9000 requires Ppk > 1.67 for preliminary studies, and Cpk > 1.33 for the serial process). Also, the length of the study is usually too short for the special causes to have a chance to appear. So don't be too sure that you will not later find that the process was not as stable as you thought based on this preliminary study. Remember that a stable process is an achievement, not a natural state.
3) Everywhere you read, you find the warning: "The data must be normally distributed". This is because, as you said, all the formulae are based on normal distributions (for example, S = Rbar/d2, 99.7% of the pieces within +/-3 sigma, etc.). Nevertheless, an Xbar chart will work pretty well with almost any kind of distribution. That is because the distribution of averages (that's what you plot in the Xbar chart) tends to be normally distributed, whatever the shape of the distribution of the individual values. This is truer when the sample size (subgroup size) is bigger, but it is a good approximation even for small samples. For example, suppose you have a process with only two possible outputs, 3mm and 7mm, and each piece has a 50% chance of having each value. The distribution of this process is absolutely not normal. Now plot the distributions of the averages for each sample size (like a histogram). The notation is Xbar(P), where Xbar are the possible values of the averages (mm) and P is the probability (%) of that value occurring.
Sample size=1: 3(50), 7(50)
Sample size=2: 3(25), 5(50), 7(25)
Sample size=3: 3(12.5), 4.3(37.5), 5.7(37.5), 7(12.5)
Sample size=4: 3(6.25), 4(25), 5(37.5), 6(25), 7(6.25)
Surprised? Except for sample size=1, the others look pretty normal. That's why you have to be very careful with the shape of the distribution if you are using an "individual values" chart.
So, in general, for control charts it is not a big deal to have a non-normal distribution.
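The table of averages above can be reproduced exactly by enumerating every possible subgroup; a small sketch (the function name is invented):

```python
from itertools import product
from collections import Counter
from fractions import Fraction

def average_distribution(outcomes, n):
    """Exact distribution of the mean of n draws from equally
    likely `outcomes`, as {average: probability in %}."""
    counts = Counter(
        Fraction(sum(combo), n) for combo in product(outcomes, repeat=n)
    )
    total = len(outcomes) ** n
    return {float(avg): 100 * cnt / total
            for avg, cnt in sorted(counts.items())}

print(average_distribution([3, 7], 4))
# -> {3.0: 6.25, 4.0: 25.0, 5.0: 37.5, 6.0: 25.0, 7.0: 6.25}
```

This matches the sample-size=4 row above, and increasing `n` shows the bell shape tightening, which is the central limit effect Gabriel is describing.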
The other part is the performance/capability studies. The Cp, Cpk, Pp and Ppk values are supposed to be linked to the proportion of pieces that will fall out of specification. For example, if in a stable process you have Cp = Cpk = 1, it is expected that 99.7% of the pieces will be within specification, and 0.3% will be out of specification. This is true only for a normal distribution. However (there is always a "however"), in any distribution you will find the great majority of the parts within +/-3S. How much "the great majority" is will depend on the shape of the distribution. It is 99.7% for a normal one, and it is more than 99% in about any non-normal distribution you can imagine; it can even be more than 99.7%. For example, a two-discrete-values distribution like the one from the previous example has 100% of the parts within +/-3S, and so does a rectangular distribution. A Cpk > 1.33 will be more or less equally good with any distribution. I read a paper a while ago that made a computer simulation of SPC and process studies, assuming populations distributed in many different ways. The conclusion was that the concepts of SPC and process studies, and the tools themselves, worked pretty well with about any shape. For those who want to achieve a six sigma level (+/-6s, or Cp = 2), don't worry about the shape of the distribution. Even with the famous 1.5 sigma shift (Cp = 2, Cpk = 1.5), this will mean single-digit PPM for any shape. But (there is always a "but" too) be careful with the outliers! (Outliers = isolated pieces, very rare in the process, where a strong special cause was present and therefore the piece clearly falls far outside the distribution of the process; for example, suppose that, on average, 1 out of 5000 pieces is not loaded correctly in the machine.) Because they are very few pieces in the population, outliers are very difficult to catch in the samples taken for the chart or for a process study.
So you can have a great value of Cp and Cpk (3, for example), and still have several PPM out of tolerance (1 in 5000 = 200 PPM).
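Gabriel's outlier warning can be illustrated with a small simulation. All numbers here are invented for the sketch: an in-control process with sigma = 1 against specs at +/-9 (so Cp = 3), plus a 1-in-5000 mis-load that produces a far-out part:

```python
import random

rng = random.Random(0)
LSL, USL = -9.0, 9.0   # spec at +/-9 with sigma = 1, so Cp = 3

def make_part():
    # Roughly 1 in 5000 parts is badly loaded in the machine: a strong
    # special cause that throws the piece far outside the distribution
    if rng.random() < 1 / 5000:
        return 25.0            # clearly out of tolerance
    return rng.gauss(0.0, 1.0)

# A 125-piece capability study will usually see no outlier at all...
study = [make_part() for _ in range(125)]
print(any(x < LSL or x > USL for x in study))

# ...but a long production run still ships scrap on the order of 200 PPM
run = [make_part() for _ in range(1_000_000)]
ppm = sum(x < LSL or x > USL for x in run) / len(run) * 1e6
print(round(ppm))
```

The study sample almost never contains an outlier (about a 97.5% chance of seeing none), yet the million-part run still produces roughly the 200 PPM out of tolerance mentioned in the post.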
Anyway, I know there are specific methods for non-normal distributions (lognormal, percentile, parametric, Weibull). But I can't help with those because I have never used them.
Hope it helped. Please give your feedback.
Gabriel
PS: Do you speak Spanish? People named "Mariano" usually do.0March 8, 2002 at 12:15 am #72971
Gabriel (Participant, @Gabriel)
Eileen:
You've been pretty hard this time! But I have to recognize that you have good creativity in reaching conclusions.
Do you have any doubt about the quality of SKF products? Where did I say that our target is Cpk = 1.33? That's a customer requirement. I clearly said that our target is to improve Cpk, whatever level we are at, with priority on those processes with lower Cpk. We have many processes with a Cpk of 4 or better. And they also meet the requirement of 1.33 for OEM customers, even if what we are manufacturing is not an OEM product. I understand what you say about Ford's original material. You mentioned a 1984 manual on which the AIAG SPC manual is based. Well, I have the 1983 version of this manual, edited by Ford Motor Argentina. I was 11 years old then. Have you heard about evolution, improvement, etc.?
What about your preliminary studies for new machines only? We make preliminary studies for new processes. And a process is much more than a machine (a new method, a new material, a new tooling design, a new grinding stone type…). And that is in line with QS9000, APQP and PPAP and, most important, with our Quality System.
You said that "at that time there was never a capability assessment on an unstable process as defined in the SPC manual". What SPC manual are you reading? In mine, there still isn't a capability assessment on an unstable process. Instead, it says: "The capability index is useful for determining whether or not a process is capable of meeting customer requirements. This use should not be applied to performance indices". Unless my English is not as good as you said, you should have it crystal clear that I fully subscribe to this.
Where did you get the idea that we are using SPC only on OEM products, that we are trying to achieve good capability only on them, that our processes are full of special causes of variation, etc.? In case it is not clear, the answer to both things is NO.
About how we use the performance indices, didn't I mention it before? Yes, I did, but just in case: preliminary capability studies (as they are called, even though I don't like the name) and the follow-up of the gap, IF there is a gap. By the way: "A perfect state of control is never attainable in a production process. The goal of the SPC is not perfection, but a reasonable and economical state of control. If a chart never went out of control we would seriously question whether the operation should be charted" (Statistical Quality Control Handbook, Western Electric Co). I will add that the goal is to achieve a reasonable level in the short term, and then continuously improve it.
Finally, on confidence intervals. Batch sorting? Where on earth did you get that from? The SPC manual, under "Descriptions and Assumptions", says: "There are four minimum conditions that must be met for all the capability measures described: 1) The process is stable. 2) More or less normal distribution of individual measurements. 3) Specifications based on customer requirements. 4) (look at this one!) There exists a willingness to accept the computed index value as the 'true' index, not taking sampling variation into account." We do that. We use the confidence interval as one of the indicators (not the only one) that the control limits may need to be reviewed because, you see, as we work for ongoing improvement, sometimes we do things right and we get a process with less spread, so the control limits need to be tightened.
By the way, job security is not guaranteed in the Quality Department, but leading improvement improves our confidence that we will not be fired.
Gabriel0March 8, 2002 at 1:46 pm #72985
EileenB (Participant, @EileenB)
Gabriel,
Yes. I did try to push you to think a little differently. Sounds like you have everything under control.
We just don't agree on the usage of the term "process performance". From my view, this is still a type of capability calculation, and it is not appropriate. You choose to use the so-called process performance calculation (I know you keep telling me this isn't a capability index; we disagree, but call it what you want). I don't see this as continuing to learn and improve; instead, this is simply creating voodoo statistics and performing a calculation without any theory.
When you do your studies on the new equipment (method, tool, design, etc.), you should be calculating the preliminary capability index called Pp. This was and is (my view) the appropriate use and designation of this index. This is clearly defined in the PPAP manual.
We do agree that there will never be a state of process control perfection. And there is not a true value (yes, I know the term is in the SPC manual; they are wrong again, no theory there).
A final comment for you: I wrote Ford's 8D problem-solving methodology. I can only imagine your comments on that one. Since it is from 1986-1987 (when you were a growing lad), it must be obsolete. And don't forget the 1931 Shewhart "Economic Control". Sad that you think that because something wasn't done while you were in the world, there is no purpose to it.
EB0March 8, 2002 at 3:10 pm #72996
Mariano Mar (Participant, @MarianoMar)
Gabriel
I am at Delphi Automotive in Mexico.
I have some experience in statistical analysis, but I am just starting my Six Sigma Black Belt training.
This Cpk vs. Ppk question has always been a constant discussion.
Regards and thanks.0March 8, 2002 at 3:30 pm #72999
Dewayne (Participant, @Dewayne)
From a management point of view, it has been awhile since I have enjoyed reading anything as much as I have this particular ongoing debate. This is what makes this site a bit different than most.
You would think that the technical aspects of process capability would be a bit dry, but no, there are fine elements of a novel here: perspective, confrontation, opinions, history, future, camps, personalities, and the agreement that it doesn't really matter what is right, as all paths are leading to the same destination of quality improvement versus better quality improvement.
What is important for me is that lots of facts, reasoning, opinions and above all, proposals are supplied as to how one might view and address the issues. From the information that has been supplied so far, I now have something I lacked before, a much better understanding of the subject, with different points of view providing much more knowledge than I could have received by attempting my own analysis of just one or two sources.
I now have that most valuable of ammunition to use in fighting anti-quality gremlins: choice as to what will fit our needs here, based on the various skilled inputs you have provided on the facts I need to know to make my own decisions. Thanks to all of you. And keep up the debates… on everything… I learn each time.0March 8, 2002 at 6:20 pm #73010Mariano,
From my perspective, I would use the notation Pp for a machine or process startup (before production). For the studies I have conducted, I always make certain the output is stable over time. If it is not, I would not accept the equipment. Something is wrong if the machine tryout is unstable under such a controlled environment. I had a lathe for a transmission carrier hub. The lathe was very unstable during the machine tryout. The machine builder had not optimized the lathe. There were issues with the amount of coolant, tooling, and speed and feed rates, to name a few. We ended up having to do several designed experiments to correct the issues with the machine. Once the setup was optimized, the machine output was very stable and a Pp (and Ppk) were calculated. I would use the designation Pp regardless of the formula used to calculate the preliminary capability.
I usually keep a simple run chart while I am conducting the study. Once I have completed the run, I construct an individuals and moving range chart on the data. If I have a high-volume output, then I use the X-bar and range chart. You are correct that this type of study can't be long enough to cover all the factors that can influence the machinery, such as seasonality. However, it does work very well.
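The within-versus-overall distinction behind the Cp/Cpk and Pp/Ppk pairs can be sketched in Python. This is a minimal illustration under common conventions, not anyone's production code: the overall sigma (for Pp/Ppk) is the plain sample standard deviation, while the within sigma (for Cp/Cpk) is estimated from the average moving range divided by d2 = 1.128, the constant for subgroups of two, as used with an individuals and moving-range chart. The spec limits and data here are made up.

```python
import numpy as np

def capability_indices(x, lsl, usl):
    """Return overall (Pp/Ppk) and within (Cp/Cpk) capability indices.

    Overall sigma: ordinary sample standard deviation.
    Within sigma: mean moving range / d2, with d2 = 1.128 for n = 2.
    """
    x = np.asarray(x, dtype=float)
    sigma_overall = x.std(ddof=1)
    mr = np.abs(np.diff(x))               # moving ranges of consecutive pieces
    sigma_within = mr.mean() / 1.128

    def pair(sigma):
        spread_index = (usl - lsl) / (6 * sigma)
        centering_index = min(usl - x.mean(), x.mean() - lsl) / (3 * sigma)
        return spread_index, centering_index

    pp, ppk = pair(sigma_overall)
    cp, cpk = pair(sigma_within)
    return {"Pp": pp, "Ppk": ppk, "Cp": cp, "Cpk": cpk}

rng = np.random.default_rng(1)
data = rng.normal(10.0, 0.05, size=100)   # a stable, in-control process
print(capability_indices(data, lsl=9.8, usl=10.2))
```

For a stable process like this one, the within and overall sigmas nearly agree, so Cp tracks Pp closely; shifts and drifts between pieces would inflate the overall sigma and pull Pp/Ppk below Cp/Cpk.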
If your process output is not normally distributed, you have a couple of choices (I won't reiterate what has already been posted). First, you can try to transform the data to achieve normality and then follow the typical calculations. Software can help with this.
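A hedged sketch of that transform route, assuming SciPy is available: `scipy.stats.boxcox` chooses the power-transform lambda that best normalizes the data, and the same lambda must then be applied to the specification limit before computing the usual normal-theory index. The lognormal sample and the one-sided spec limit of 4.0 are made-up illustration values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
skewed = rng.lognormal(mean=0.0, sigma=0.4, size=200)  # right-skewed sample

# Box-Cox requires strictly positive data; called without lmbda it
# also returns the lambda that maximizes normality of the result.
transformed, lam = stats.boxcox(skewed)

# Transform the upper spec limit with the SAME lambda, then compute
# the usual one-sided index on the transformed scale.
usl = 4.0
usl_t = stats.boxcox(np.array([usl]), lmbda=lam)[0]
mu, sd = transformed.mean(), transformed.std(ddof=1)
ppk_upper = (usl_t - mu) / (3 * sd)
print(f"lambda = {lam:.3f}, one-sided index = {ppk_upper:.2f}")
```

The key discipline is that every quantity compared to the data (spec limits, targets) must go through the identical transform; reporting the index back on the original scale is then just a matter of labeling.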
Second, for the variation index (Pp or Cp), you can estimate the spread of the distribution that contains 99.73% of the output under the appropriate distribution (exponential, for example). Once you calculate that range, use it to replace 6 sigma. For the Cpk, I just report the percentage of the process distribution outside the closest specification limit; you can then equate that to the equivalent capability index of a normal distribution.
Again, a good software package can do this.
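The percentile route just described can be sketched as follows, assuming SciPy; the exponential scale and spec limits are made-up illustration values. The 0.135% and 99.865% points of the fitted distribution bound 99.73% of the output, so their difference stands in for 6 sigma; the out-of-spec fraction at the nearest limit is then mapped back to an equivalent normal-based index via the standard normal quantile.

```python
import numpy as np
from scipy import stats

# Fit an exponential to (simulated) process data; the sample mean is
# the maximum-likelihood estimate of the exponential scale.
rng = np.random.default_rng(3)
data = rng.exponential(scale=1.0, size=500)
dist = stats.expon(scale=data.mean())

lsl, usl = 0.0, 8.0

# Percentile method: the 0.135% and 99.865% points bound 99.73% of
# the fitted distribution, replacing the 6-sigma spread.
spread = dist.ppf(0.99865) - dist.ppf(0.00135)
pp = (usl - lsl) / spread

# Equivalent index: total fraction outside the limits, converted to
# the normal-distribution index that gives the same fallout.
frac_out = dist.sf(usl) + dist.cdf(lsl)
ppk_equiv = -stats.norm.ppf(frac_out) / 3
print(f"Pp = {pp:.2f}, equivalent Ppk = {ppk_equiv:.2f}")
```

Note that for a skewed distribution these two numbers need not agree the way they do under normality; the percentile spread describes width only, while the equivalent index reflects where the spec limits actually sit relative to the long tail.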
A very good article on process capability indices is by Victor E. Kane in the Journal of Quality Technology (ASQ), Vol. 18, No. 1, January 1986. He gives the following caution: "There is a tendency to want to know the capability of a process before statistical control has been established. Capability refers to a quantification of common-cause variation and what can be expected from a process in the future. The presence of special causes of variation makes prediction impossible and the meaning of a capability index unclear." I believe this applies to the preliminary studies as well as to the so-called process performance.
Another caution: do not attach probabilities to points falling inside or outside the limits on control charts. The control limits are not probability limits.
Eileen
Quality Disciplines
0

March 8, 2002 at 6:39 pm #73011

Dear Gabriel, what is your email? I have some good examples of Cp & Cpk against Pp & Ppk.
Post it in this forum; I only check my emails on Thursdays.

0

March 12, 2002 at 11:38 am #73113
Gabriel
Participant
@Gabriel

Dear Luis:
Several days ago I sent two emails to [email protected]… as you asked (since you only check the @yahoo address on Thursdays). I was expecting a reply to those emails, in which you can find my address.
Please let me know if the emails did not reach you.
Best regards,
Gabriel Braun

0
The forum ‘General’ is closed to new topics and replies.