iSixSigma

Ppk Not Needed Since About 1980


  • #31632

    Schuette
    Participant

    Ppk may have been a good idea from about 1920 to 1980, but after that it just confuses things. I will try to prove that below. Apparently Cpk used to be the calculation for process capability. Then Dr. Shewhart decided that a second measurement was needed, and instead of making Ppk the estimate, he made Cpk, the standard based on Juran's writing prior to 1970, the estimate. Based on that fact alone, I'm not convinced Dr. Shewhart is the "shiniest coin in the fountain." The other way around makes more sense to me based on what Juran wrote before 1980. So this is the first thing that bugs me.

    Next, what does it matter? Allegedly Ppk is used to get a process capability for the entire sample, i.e. multiple sub-groups, whereas Cpk is now used for only one sub-group of samples. Why would we need this? To check whether one sub-group of samples is capable? We can do that without any estimating; we are only looking at one sub-group. I challenge someone to show me mathematically (and show your work, because talk is very cheap) that Cpk should be an estimate because it differs so much from Ppk. Unless your sub-group is 100 samples, it is NOT going to change that much. This is one of the silliest things I have ever seen.

    According to https://www.isixsigma.com/offsite.asp?A=Fr&Url=http://www.pqsystems.com/cpk-ppk.htm, "In 1991, the ASQC/AIAG Task Force published the 'Fundamental Statistical Process Control' reference manual…" Well, if you go to http://www.asq.org/info/glossary/c.html#top you will find Cp and Cpk, but if you go to http://www.asq.org/info/glossary/p.html there is no Ppk. So let's assume it mostly came from AIAG. Now there is a group that knows its quality control functions. Don't believe me? Then go to the Consumer Product Safety Commission web site and look up the number of recalls: http://www.cpsc.gov/ And they don't even list the defects in vehicles that are not safety related, of which there are thousands more.

    The best explanation I found was here: http://www.statware.com/bulletin/2000/enews_aug00.htm

    'Quality Query: Cpk vs. Ppk. Q. What is the difference between the Ppk values reported by Statit and the Cpk values? Why are they both reported? Which one is correct? A. For Pp and Ppk calculations, the standard deviation used in the denominator is based on all of the data evaluated as one sample, without regard to any subgrouping. For Cp and Cpk calculations, the standard deviation is based on subgroups of the data using subgroup ranges, standard deviations or moving ranges. This "within-subgroup" process variation can be considerably smaller than the overall standard deviation estimate, especially when there are long-term trends in the data. Learn about whether Ppk or Cpk best characterizes your process data at http://www.statware.com/statware/quality.htm.'

    According to http://www.freequality.org/beta%20freequal/fq%20web%20site/Training/Classes%20Fall%202002/Capability%20Analysis.doc

    'The Cpk is the most commonly used index for calculating capability, however some have found that the Ppk index is actually better. The Cpk is used to gauge the potential capability of a system, or in other words, a system's aptitude to perform. The Ppk (and the related Pp and Pr) actually measure the performance of the system. To determine which of the indexes to use, determine whether you want to analyze the actual performance (Ppk) or the potential capability of the system (Cpk). Cpk is calculated with sigma equal to 3, which is an estimated sigma. Calculating Ppk uses a calculated sigma from the individual data.'

    I guess they don't go far enough back to realize that Ppk didn't exist until not very long ago. And Juran in his books indicates that Cpk is… Cpk, Process Capability: the capability of the process expressed in relation to a worst-case view of the data. It is denoted by the symbol Cpk. The formula is Cpk = the lesser of (USL - mean)/3 sigma or (mean - LSL)/3 sigma: http://www.sixsigmaspc.com/images/cpk01.gif Juran says nothing about an estimate.

    So someone has taken something that is very pertinent to Six Sigma, changed it from an actual calculated measurement to an estimate, and created Ppk for some unknown reason. The only reason I can come up with is that it made things more confusing, so as to sell more books and training sessions.

    Now, what http://www.asq1212.org/NorEaster%200401/Noreaster%200401p1.html has to say about it makes a little more sense, because they also deal with the number of samples, which literally everyone else I have seen preach Ppk has left out. And that is:

    'The importance of targeting relative to both Cpk and Ppk was stressed. Ppk is only long term Cpk. Typically Cpk is just a snapshot of the total process encompassing 125 data points (thank you Dr. Shewhart), using typically subgroups of five, whereas Ppk is typically ten times or more the number of data points and should be over a time duration to include numerous tool changes. (One "crunches" the numbers for Ppk and Cpk the same way.) Ppk differs from Cpk in that it reveals the targeting shift over time. On the average a +/- 1.5 sigma shift can be expected in targeting alone. Dave Spengler pointed out that with the use of additional…'

    Based on this, who would want to use Ppk? Also, apparently neither measurement is accurate unless your process is in control, which is the only other reason I could see for splitting the original measurement.

    Another good reference is http://www.symphonytech.com/articles/processcapability.htm A little common sense goes a long way here. This is part of my last newsletter:

    'I can see why during the 30's, 40's, 50's, 60's, 70's and some of the 80's we may have needed to estimate sigma. This is because calculators were either not around, or were hard to use, expensive, etc. That is not the case today. My little TI stats calculator cost only $30.00. On top of that, I can enter 5 pieces of data and hit the mean, standard deviation, and range keys to get all the numbers faster than I can use a normal calculator to do the calculations and then look up the number in the factor table to estimate sigma. So why are we estimating anything these days? The really silly thing, and we are guilty of it as well, is: does it not make more common sense for a computer to always calculate sigma instead of estimating it? Do we not inject some error into the overall computations just by estimating? If we already have the data points entered into a computer, which in theory is a really good number cruncher, and then we estimate sigma, is it just me, or is this silly? Also, it makes the program more complex to estimate sigma. Most SPC software already must calculate sigma, so estimating it as well requires more lines of code. The more lines of code any given piece of software has, the higher the probability of an error. It's a numbers thing. What would happen if we stopped estimating sigma on the control limits of X-Bar and Range charts, etc. and started using the actual sigma number? Probably nothing with most processes. Yes, I am talking about changing the system. How taboo of me.'

    Even if I knew the factors off the top of my head, I could still do the math faster with my TI stats calculator. So why does Ppk exist? Mathematically (and show your work), how COULD it reflect the short or long term of a process? It's just the difference of an estimate, for crying out loud. I used to have a lot more respect for Dr. Shewhart than I do now. No doubt I have just made more enemies, but since I've already been blackballed, it doesn't matter! (GRIN)

    I was one of the very first people trained in Six Sigma when I was at Motorola. There have been a lot of things added to it since the original concepts, and I can't figure out where the buy-back would come from. As near as I can tell, a lot of the newer stuff came from ISOxxxx. And based on the Ford and Firestone fiasco, we should all be aware of just how well that works.

    One last point: it really does not matter how one attains a 4.5 sigma design margin, as long as one does. Too many written rules to follow makes it harder to achieve, a lot like ISOxxxx, etc. A lot of the time, obtaining the correct design margin just means changing a specification. Of course, if your specifications are looser than your competitors', you will lose business to them, so it is wise to ensure that your process is as tight as it possibly can be.

    Please, someone show me how the theory of Ppk vs. Cpk can distinguish long-term and short-term capability, and include your work with examples so that I and others can understand it.

    Jim Winings
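
    A minimal sketch, in Python, of the comparison the post asks for, using simulated data (the spec limits, target, and subgroup layout below are invented for illustration): on a stable, in-control process the within-subgroup estimate (Rbar/d2) and the overall standard deviation land very close together, so Cpk and Ppk nearly agree; they separate only when the subgroup means drift or shift.

```python
# Sketch only: compare Cpk (within-subgroup sigma via Rbar/d2) and Ppk
# (overall sample sigma) on a simulated stable process. All numbers invented.
import numpy as np

rng = np.random.default_rng(0)
USL, LSL = 106.0, 94.0        # hypothetical spec limits
n, m, d2 = 5, 25, 2.326       # subgroup size, number of subgroups, d2 for n=5

data = rng.normal(100.0, 1.0, (m, n))             # stable process, no drift

xbarbar = data.mean()
sigma_within = np.ptp(data, axis=1).mean() / d2   # Rbar / d2
sigma_overall = data.std(ddof=1)                  # all N points as one sample

cpk = min(USL - xbarbar, xbarbar - LSL) / (3 * sigma_within)
ppk = min(USL - xbarbar, xbarbar - LSL) / (3 * sigma_overall)
print(f"Cpk = {cpk:.2f}, Ppk = {ppk:.2f}")  # typically within a few percent of each other
```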

    0
    #83525

    Tierradentro
    Participant

    Wow, you are making this up as you go, aren't you?
    I don't believe you were one of the first at Motorola – that would have been the early eighties.
    And if you had ever taken data for yourself, you would know there is a difference between the long term and the short term, and the size of that difference is very important for which path you take to solve a problem.
    Your academic mumbo jumbo serves no purpose – go take real data and tell us about it.

    0
    #83526

    CSSBB
    Participant

    Thanks for a valuable discussion, Jim.

    0
    #83529

    changedriver
    Participant

    As a long-time user of Cp/Pp, let me share what we used it for – you can argue the validity of the approach all you want….
    We mostly use Ppk for decision making regarding supplier equipment.  In the manufacturing environment, when you run off a machine for approval to ship, or after a rebuild, you have a limited number of pieces and limited time to run; you are not experiencing shift changes, temperature changes, tool changes from your tool crib, etc.  So the capability is an estimate.  The problem is, this is a contractual arrangement.  So you need to make a judgement on whether the machine will meet your needs on your floor, with all the extra variables.  We required a 2.0 Ppk, and expected a 1.5 Cpk as a result on the floor (in case you didn't know, the auto companies used the reverse definitions – Cpk = long term, Ppk = Preliminary).  Because this is a contract arrangement, you need to define short term vs. long term – hence, Cpk/Ppk.  Sure, you could just say "give me 2.0 Cpk prior to ship", but as Cpk is defined statistically, you will not really achieve that, because you are not incorporating all variables and are using a small (125) sample.  So, you need a contract definition of short-term capability – Ppk (automotive).
    As for the shots at the auto industry in general – they aren't perfect, but show me another industry that makes a product that complex, subject to so many regulations, that performs on demand across all climates and terrain, and is expected to last ten years.  Try buying a car from Microsoft…

    0
    #83530

    Schuette
    Participant

    >Wow, you are making this up as ou go aren’t you
     
    Making what up? That is why I provided links to external sources. If you wish to have the page numbers I refer to from Juran’s book, I would be happy to give them to you.
     
    >I don’t believe you were one of the first at Motorola – that would have been the early eighties.
     
    It was 1985 when I released the first six sigma software for Motorola. It was called the A17/78 Supplement program. I worked in what was then the Communications Sector; it's now called Land Mobile, in Ft. Lauderdale, FL. Motorola's address there is/was 8000 W. Sunrise Blvd. I worked in component engineering and qualified transistors, diodes and IC chips, as well as supporting incoming inspection. We were the test bed for six sigma. I worked for Motorola for 11 years. People like Keki Bhote taught one of my six sigma classes. Bill Smith told my boss's boss, Pete Peterson, what to have me do with the software.
     
    >And if you had ever taken data for yourself,
     
    I have taken and entered hundreds of thousands of data points since 1982.
     
    >youwould know there is a difference in the long term and short term
     
    Based on the difference between a calculation and an estimate? Please, please explain it to me, because I just don't see it. You can adjust the sample size or look at the entire sample vs. a sub-group, but sigma is sigma.
     
    >and the size of that difference is very important for which path youtake tosolve a problem
     
    And I am waiting for someone to show me mathematically how that could be possible. If one is staying in control, which I hope one is, then how does long term apply? You seem to have all the answers; please explain it instead of just shouting more of the same.
     
    >Your academic mumbo jumbo serves no purpose – go take real data and tell us about it.
     
    Personally I think that you should be a little more civilized in your responses. I have a better idea: since you seem to have so much knowledge in the area, why don't you show me an example instead of just making accusations? I was just asking a question.
    Jim Winings
    http://www.sixsigmaspc.com
     

    0
    #83531

    Jim Winings
    Participant

    >…can argue the validity of the approach all you want…
    And that is my point. How valid is it? Is it worth the confusion in the long term, or should we just call it Cpk and calculate sigma? Does it really cause more defects doing it that way?
    >As for the shots at the auto industry in general …
    You are correct. The point was not to take a 'shot', but I am tired of purchasing autos that have too many defects to talk about, and I don't think they are qualified to set standards, based on the number of defective vehicles I personally have purchased. And I'm talking design defects, not manufacturing. Some, if not most, come directly from their suppliers, but someone somewhere approved those suppliers.
    Jim Winings
    http://www.sixsigmaspc.com
     

    0
    #83534

    ScottS
    Member

    Attached is an explanation of the subject from Minitab.  On a related note, Minitab has made a macro at my request to not calculate Cpk (based on Rbar) in capability analysis when using individuals (a rolling estimate of Rbar is typically used).
    The idea of long/short term variation has always seemed alien to me. Hasn't the long-term process drift that Motorola experienced been referred to as "seasonal" variation or something like that (i.e. assignable cause)?  Did they do this to say they had Cpk=2 when they were really short due to this cyclical variation?
    Another note of interest: I don't think ASQ has any mention of long term and short term variation in its CQE BOK.  I would think that short term would equal "machine capability" and long term would equal "process capability", but that is too easy…
     
    From Minitab:
    Description
    What do “within” and “overall” standard deviation mean in Capability
    Analysis and Sixpack (Normal)?
    Solution
    Within and overall refer to different ways of estimating process variation. A within estimate, such as Rbar/d2, is based on variation within subgroups. The overall estimate is the overall standard deviation for the entire study. Cp and Cpk are listed under Potential (Within) Capability because they are calculated using the within estimate of variation. Pp and Ppk are listed under Overall Capability because they are calculated using the overall standard deviation of the study. The within variation corresponds to the inherent process variation defined in the Statistical Process Control (SPC) Reference Manual (Chrysler Corporation, Ford Motor Company, and General Motors Corporation. Copyright by A.I.A.G) while overall variation corresponds to the total process variation. Inherent process variation is due to common causes only. Total variation is due to both common and special causes. Cp and Cpk are called potential capability in MINITAB, because they reflect the potential that could be attained if all special causes were eliminated.
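
    For readers who want to try the two estimators that the Minitab note describes, here is a bare-bones sketch, assuming equal-size subgroups and the standard d2 constants; it shows only the idea, not Minitab's actual implementation.

```python
# Within (Rbar/d2, used for Cp/Cpk) vs. overall (used for Pp/Ppk) sigma estimates.
import numpy as np

D2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326}   # standard d2 constants

def sigma_within(subgroups):
    """Rbar/d2: within-subgroup (common-cause) variation."""
    subgroups = np.asarray(subgroups)
    rbar = np.ptp(subgroups, axis=1).mean()
    return rbar / D2[subgroups.shape[1]]

def sigma_overall(subgroups):
    """Overall standard deviation of the whole study, all data as one sample."""
    return np.asarray(subgroups).std(ddof=1)
```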

    0
    #83535

    Tierradentro
    Participant

    Jim,
    I’ll give you this. In a very well controlled process the drifts are small, but most do not have very well controlled processes.
    I do not buy your claims about Shewhart and Cpk / Ppk. I think I have read Shewhart's work and don't recall any words about Cpk or Ppk. Please do give me the references for this.
    I do know that if the difference between long term and short term is quantified, you know whether you are dealing with a control issue (easy to fix) or a capability issue (harder to fix). Why in the world would you be against a tool that gives direction?

    0
    #83537

    Tierradentro
    Participant

    You former Motorola guys need to figure out where this really came from.
    You are the newest entry. Mikel Harry claims to be the center of this universe. Mike Carnell claims this all came out of Motorola's Government group (seems pretty unlikely). There is a guy named Lupienski who gives speeches that make it look like this all started in Buffalo, and Motorola University has a book out that doesn't give credit to any of these folks.
    What is the truth? (Where is our data?)

    0
    #83544

    Schuette
    Participant

    >I’ll give you this. In a very well controlled process the drifts are small, but most do not have very well controlled processes
    Based on the fact that six sigma gives us a 4.5 sigma design margin, .5 larger than Juran suggests in his book 'Quality Planning and Analysis', why would we spend money and time looking for small drifts? Is this really cost efficient? This is already accounted for with the added .5 sigma shift. If the process is not well controlled, what does the distribution tell you about it? (One reason why our charts look different from others', and why we were the first to plot a normal distribution curve based on the specification along with a curve based on the actual data.)
    >I do not buy your claims about Shewhart and Cpk / Ppk. I think I have read Shewharts work and don’t recall any words about Cpk or Ppk. Please do give me the reerences for this
    My reference is listed in the first post from http://www.asq1212.org/NorEaster%200401/Noreaster%200401p1.html
    Maybe I misinterpreted what was said. But as I am sure you well know, finding the source of some of this stuff is impossible. And there may be a good reason for that. But who actually decided the Ppk stuff really is not as important as the issue of how important it really is.
    >I do know that if the difference between long term and short term are quantified,
    And I’m not convinced that ‘long term’ is applicable in six sigma.
    >you know if you are dealing with a control issue (easy to fix)  or a capability issue (harder to fix).
    Sometimes these issues can be masked by other problems. This is where just some good old common sense comes into play. Of course, universities have not figured out how to teach common sense yet, and it is not a requirement for a Ph.D. Some Ph.D.'s I know are the smartest people I have ever met. Of course, I know some that are, huh, not.
    >Why in the world would you be against a tool that gives direction?
    I am not against any tool that aids in problem solving, as long as it can be proven mathematically, or by at least hundreds of case studies, that it can indeed aid in fixing a problem. Anything less may not be cost efficient, and that is contradictory to what I was taught six sigma is about. I don't take things as gospel just because someone says so. I want proof. And when I sit down and try to reckon and perhaps reverse engineer what they are saying and it doesn't make sense, as in the common type, I question it. Perhaps to the Nth degree. And this entire Ppk and Cpk thing is one of those issues.
    Jim Winings

    0
    #83547

    Schuette
    Participant

    >You former Motorola guys need to figure out where this really came from.
    It didn't necessarily come from one place. Each division had its own unique problems to solve, and any given fix for one may not have been the best choice for another. I don't know; it was teamwork before they had teleconferencing. We used Motorola's old mainframe email system. That's a whole other story.
    And indeed, we now have people saying you must do this, that, and the other to be six sigma. But this and that may not work for some processes and may actually cost more money to implement. Six sigma has become a set of rules, more like ISOxxxx, and that is NOT good. It used to be 'suggestions' on how to do this, that and the other. Not gospel, and all were backed by common sense ideas and methodologies that could be explained mathematically or based on hundreds of case studies. And going back that far, most of the actual case studies involved Japanese companies. (And looking at just the auto and electronics industries, maybe that should still be the case?)
    >You are the newest entry.
    Actually I've been around for a while. Or as I like to say, 'in the beginning. No, not that beginning, I'm not that old.' My proverbial 15 minutes came with writing the software to help Motorola's suppliers meet these new standards. No software was around to do so at the time. My software was the first ever, outside of Motorola, to place the design margin sticks on the distribution charts. I did not add to or change any of the methodologies. I just used them in Motorola to test the theories. After it looked like they might work, they sent me to minority suppliers to help them get on board with the software, because Motorola needed to ensure it kept the minority suppliers for government contract reasons, and Motorola had strict intentions that any supplier that could not meet a 4.5 sigma design margin was not going to be a Motorola supplier any longer.
    >Mikell Harry claims to be the center of this universe.
    Yea, go figure. Actually I never heard of Harry until I left Motorola. So I'm not sure whether he was in the late Bill Smith's camp in Chicago, or, based on the fact that he is in Phoenix, or somewhere around there, whether he may have come from the Semiconductor Products group. There were something like 25,000-50,000 employees at Motorola at the time. But I will say this: some of the first components I looked at from the Semiconductor Products group did not meet a 4.5 sigma design margin. So, being in component engineering where I qualified such devices, I rejected them for not meeting the minimum Cpk requirements. (Calculated sigma, by the way. (GRIN!)) I got a note back from the Semiconductor Products group saying they didn't have to meet the design margin. I guess since Motorola is Motorola's biggest supplier they thought that politics would save them. So I showed my boss, Doug Dolbee, who showed Pete Peterson, and I reckon Peterson sent it to Bill Smith. About 10 days later, the Semiconductor Products group was ready to embrace six sigma with very open arms.
    Funny isn’t it how politics alone can make or break good and bad ideas.
    >Mike Carnell claims this all came out of Motorola’s Government group
    I never heard of him, and still haven't. To the best of my knowledge, the Government group did a lot of special projects, and because of this everything was already of a higher quality due to the fact that parts were hand picked, etc. They were not in the initial testing of the six sigma methodologies, but I could be wrong. I know I never had any dealings with them, and I left Motorola in 1989.
    >is a guy named Lupienski who gives speaches that looks like this all started in Buffalo
    John I know. Or at least I have had phone conversations and email conversations with him. The Buffalo plant manufactured the Radius radio, which was Motorola's first attempt at a cheap 2-way radio. Due to this, John was in the Communications Sector. (Note that it is well documented all over the web that six sigma started in the Communications Sector.) My boss told me to send a copy of the software to every facility in the Communications Sector. So I did. As I said, I had in-plant email, which most people did not have at that time. For 21 days straight I checked my email faithfully. It wasn't easy back then. I had to load a VT100 terminal emulator, call into a modem, tell the node which mainframe I wanted to access, etc. etc. I wasn't getting any, so I didn't check my mail for 3 days. Son of a gun, wouldn't you know it, John sent me email asking a question about the software, and when he didn't get an answer, he called me on the phone. The first words he said to me were, 'Is this Jim?' I said yes, and he said, 'DON'T YOU EVER CHECK YOUR EMAIL?' I don't remember what the question was, but I remember his attitude. Of course this may have been before he had his required six sigma training. The same basic training we all had. I'm not sure what happened to John after that.
    We all had non-disclosure agreements we had to sign at Motorola that said 10 years was the limit. Some of these guys were a lot higher up in Motorola than I was, and apparently they didn't wait the required 10 years before starting six sigma training, etc. I, on the other hand, being a smaller fish and just for the ethics of the matter, signed a paper that said I would wait 10 years, and I did. It won't make you rich, but perhaps a little respect is due because of it.
    Jim Winings

    0
    #83552

    clb1
    Participant

    If you are citing the Spengler article with the title "Theory and Practice of Six Sigma Black Belts" as proof that Shewhart developed Cpk, I think you had better re-read the article.  The only sentence that seems to apply is the following:
    "Typically Cpk is just a snapshot of the total process encompassing 125 data points (thank you Dr. Shewhart)…"  If you check your basic texts on quality control and Xbar-R charts, the rule for minimum data is expressed in forms such as "an Xbar chart with 25 samples of 5 each has a very high probability of discovering such variation" (p. 446, Quality Control and Industrial Statistics, Duncan).  25 x 5 = 125, so it would appear that the "thank you" is aimed at the sample size, not Cpk.
      The earliest reference I can find to Cpk is Sullivan 1984 – Reducing Variability: A New Approach to Quality – Quality Progress.  In that article he refers to Cp and Cpk as being indices used in Japan and recommends them for use elsewhere.
     The various claims and counter claims surrounding who fathered 6 sigma bring to mind the old saying -“Success has a thousand fathers-failure is an orphan”

    0
    #83555

    ScottS
    Member

    Cpk came from Japan, at the latest in the '70s.  What does the "k" mean?  Obviously nothing in English; it stands for "katayori", which is shift or offset in Japanese.
     
    See previous thread on topic of origins of Cpk.

    0
    #83557

    Jim Winings
    Participant

    >as proof that Shewhart developed Cpk …
    Not Cpk, Ppk.
    >”Typically Cpk is just a snapshot of the total process encompassing 125 data points (thank you Dr. Shewhart)…..” 
    I thought the 'Thank You' was due to Ppk. I may be wrong about who said what and who invented what, but that is not the basis of the discussion. I've been looking for 3 years to find out where Ppk came from. At first I thought Harry came up with it. Just the fact that I cannot find the genius who came up with it is a red flag in my book. If it is the proverbial cat's butt, then I would think that whoever came up with it would be proud to say so.
    >…”an Xbar chart with 25 samples of 5 each has a very high probability of discovering such variation.”  (pp.446 Quality Control and Industrial Statistic-Duncan)…..25 x 5 = 125…
    Agreed; Juran says the same thing in his book Quality Planning and Analysis, 2nd edition, 1980. Motorola decided that 75 pieces would be enough. I don't know where they pulled that number from. That's what was in the A17/78 document, which was the supplier incoming inspection requirement and the document that my software was married to.
    Going back to my AT&T Statistical Quality Control Handbook, copyright 1957 Western Electric Company, I could not find a rule for sample size, but their examples show 100 samples. While this book is older than I am, it is where the WECO rules came from, so I keep it around, but I need to have it re-bound.
    >The earliest reference I can find to Cpk is Sullivan 1984
    Juran’s book before that, 1980, has Cpk in it. I’m looking for Ppk. And Juran does not say the sigma is an estimate.
    >”Success has a thousand fathers-failure is an orphan”
    I can’t argue with that! (GRIN)
    Jim Winings
    http://www.sixsigmaspc.com
     

    0
    #83558

    Jim Winings
    Participant

    Exactly! My Motorola handout indicates that k is…
    k = Process Shift / (Design Specification Width / 2)
    This is why I have preached and continue to preach design margins. There is a bunch more stuff that goes with that. I'm not going to type the entire bloody thing in, but you can find it here.
    http://www.sixsigmaspc.com/six_sigma.html
    Jim Winings
    http://www.sixsigmaspc.com
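
    For what it's worth, the usual way that kind of k is tied back to Cp and Cpk is Cpk = (1 - k) x Cp, with k = |spec midpoint - process mean| / (spec width / 2). A small sketch with invented numbers, assuming two-sided specs with the target at the midpoint:

```python
# Sketch of the k (shift) relationship: k = |midpoint - mean| / (spec width / 2),
# and Cpk = (1 - k) * Cp. Numbers below are invented for illustration.
def cp_and_cpk(mean, sigma, lsl, usl):
    cp = (usl - lsl) / (6 * sigma)
    k = abs((usl + lsl) / 2 - mean) / ((usl - lsl) / 2)
    return cp, (1 - k) * cp

# a 1.5 sigma shift off center: Cp = 2.0 but Cpk = 1.5
print(cp_and_cpk(mean=100.75, sigma=0.5, lsl=97.0, usl=103.0))
```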
     

    0
    #83565

    Mikel
    Member

    I thought there was an iSixSigma policy against promoting products or services on the forum.
    This looks pretty blatant to me.

    0
    #83570

    Mikel
    Member

    Jim,
    Interesting contention that Ppk has not been needed since 1980. Ppk is the invention of Ford in the late eighties.
    Shewhart had no opinion about Cpk or Ppk – this is not a language he ever used.
    Juran attributes Cpk to Kane in 1986, but talks about inherent reproducibility or machine capability (both short term) and process capability (long term) a long time before that.
    You may be interested that Juran Institute uses Cpk and Ppk in their Six Sigma training.
    You will find that Ford started the second measure because people were using snapshots to state long-term capability and then could not perform.
    I have looked at the product you are hyping on the link at the bottom of your posts. It looks to be useful, especially if it is inexpensive. Don’t blow it with a rant like this where you clearly have not done your homework.

    0
    #83571

    Jim Winings
    Participant

    Are you talking about me?? If so you put the message in the wrong place.
    Where is the line between promoting products and pointing to resources? Most companies that have products on their web sites also have information that may be vital to any given topic. When one not only points to their own web site, but to their competitors' as well, I think that just becomes a conversation and not promotion. Perhaps I am wrong, but it may be a personal issue as far as how one views it. I read very few discussions that don't have a link to some web site. Even pointing to a university could be viewed as promoting that university.
    The other issue is I personally feel that if someone is trying to make any kind of intelligent point it is important that the people reading the discussion can verify the source. Don’t you think it is important to verify sources or resources? If verification is not done, then having an intelligent conversation would be difficult. Hearsay means nothing.
    Can you point directly to what bothers you?
    Jim Winings
    http://www.sixsigmaspc.com

    0
    #83573

    Jim Winings
    Participant

    Stan;
    >Shewhart had no opinion about Cpk or Ppk – this is not a language he ever used
     
    Good. My respect for him just went up again. Thanks.
     
    >Juran attributes Cpk to Kane in 1986, but talks about inherent reproducibility or machine capability (both short term) and process capability (long term) a long time before that.
     
    I don't recall reading that (long term and short term) in my Planning and Analysis, but I could have missed it, even though I have read it cover to cover several times. I'm not as young as I used to be. Of course, who is? I still don't recall seeing anything in Juran's 1980 book about Ppk.
     
    >You may be interested that Juran Institute uses Cpk and Ppk in their Six Sigma training
     
    I will have to check their site out and see what is going on. If they are promoting Ppk, then I will have a few comments to give them about their methodologies.
     
    >You will find that Ford started the second measure because people were using snapshots to state long term capabulity and then could not perform.
     
    The search is over. Do you have a source to confirm that? Why did they decide that Cpk should be the estimate and not Ppk? Any ideas? That just seems so asinine to me that I can't find words to describe it.
     
    >I have looked at the product you are hyping on the link at the bottom of your posts.
     
    Not my intent. I have spent a lot of hours answering posts and doing research on this topic before posting. More than the profit per unit. If I am doing this for promotional reasons, I will be out of business soon. Personally I don't have a problem with anyone posting their URL or business name in any given post. I feel it adds credibility to the words in the post. Now, if I were giving the price and features and how to order, that would become a different issue. But I guess everyone has their own ideas and opinions. It's a basic human right.
     
    Jim Winings
    http://www.sixsigmaspc.com
     
     

    0
    #83574

    Jim Winings
    Participant

    Oops. Sorry, Stan. You did have the correct message. Ever notice how the older you get, the longer your arms need to be to read the numbers on your cell phone, and how you need a bigger monitor?
     

    0
    #83586

    Eileen
    Participant

    Jim,
     
    I have read your postings on this topic with great interest.
     
    I agree with you that you can conduct a capability study and calculate the Cp and Cpk for that study. Perhaps, I can explain the use of the other notations associated with a capability study.
     
    The different notations were used by Ford Motor Company, specifically the transmission division. Victor Kane and I started working in Ford's transmission division in Livonia in 1984. At that time, Vic was working on an existing transmission and I was working on a new transmission about to be launched. For the new transmission, there were three major capability studies that would be conducted. The first study was conducted at the vendor's site, usually on a machine. This is frequently referred to as a machine capability study. Once the machine passed this study, the machine was shipped and installed at the Livonia transmission plant. After installation, a second capability study was conducted to define the capability at the plant. Once production began, a third capability study was conducted to understand the variation associated with the day to day production. It became clear fairly quickly that there was a need to distinguish between the three types of capability studies. It was agreed upon by several groups of people that the following notation would be used:
     
    Cpt (Cpkt) for the machine tryout capability study
    Cpp (Cpkp) for the second capability assessment after installation but prior to production. This was called the machine potential study – hence the use of the p for potential
    CpL (CpkL) for the production or long-term capability study
     
    This provided a relatively easy way to understand the performance of the equipment across the various stages of usage. The indices notation helped to distinguish relative performance on a complex product (8000 characteristics) across numerous processes.
    As this approach spread across Ford and later was required of the supply chain, the machine tryout was essentially dropped. With the two remaining studies, it was simplified to distinguish the machine potential study as Pp (Ppk) from the production capability of Cp (Cpk). Of course, various companies and consultants have put their own spin on these indices. This has resulted in the general confusion associated with capability assessment.
     
    Eileen Beachell, Quality Disciplines

    0
    #83589

    Jim Winings
    Participant

    Eileen;
     
    Cpt (Cpkt) for the machine tryout capability study
    Cpp (Cpkp) for the second capability assessment after installation but prior to production. This was called the machine potential study – hence the use of the p for potential
    CpL (CpkL) for the production or long-term capability study
     
    This seems very reasonable, except for the long-term deal. If one keeps one's process in control, how could there be a long-term effect of any kind? Or do you mean that 'long-term' just indicates 'in production', not any type of long-term potential for errors in manufacturing? Sigma was always calculated for Cp/Cpk? Did you guys estimate it or calculate it for the above-mentioned measurements?
     
    >As this approach spread across Ford and later was required of the supply chain
     
    And I agree that each company has the right to request whatever they feel they need from their suppliers to meet quality as well as production requirements.
     
    >it was simplified to distinguish the machine potential study as Pp (Ppk) from the production capability of Cp (Cpk).
     
    But did Ford make the decision that Cpk et al. were to be estimated and Ppk et al. were to be calculated, even though for years Cpk was a calculated affair?
     
    >Of course, various companies and consultants have put their own spin on these indices
     
    Yea, well as I like to say, the patients are in charge of the asylum.
     
    >This has resulted in the general confusion associated with capability assessment.
     
    That would be an understatement. Too bad you didn’t know then what you know now huh??? (GRIN)
     
    But I'm still trying to figure out what genius decided to change a standard and make the original standard prone to errors by estimating sigma. Don't you agree this is asinine? As well as the claim that there is a difference. I understand how 'they' are using it; I just can't justify it personally. Mathematically it doesn't make sense. Not to mention that too many rules are contradictory to what I was taught six sigma is and should be about.
     
    Thanks for your response. '84, huh? That was about the same time I burnt up $250,000.00 worth of Motorola HC6811 processors in ALT because my big brain and I decided that instead of just biasing them we needed to clock them as well. But I forgot to adjust the ALT chamber temperatures to allow for the extra internal heating. Of course, I was just a young pup back then. Melted the lids right off the chip carriers. What a mess that was for me to clean up. I had green stuff all over the place. I hate when that happens. But the concept I still feel is right.
     
    One rhetorical question (+/- 2): Why hasn't anyone challenged this to the Nth degree until now? Or have they, and I don't know about it? Why have some just taken it as gospel?
     
    Jim Winings
    http://www.sixsigmaspc.com
     

    0
    #83590

    Jim Winings
    Participant

    Oops, HC6811 should be 68HC11. I had a gaseous brain discharge.
     

    0
    #83591

    Eileen
    Participant

    Jim,
    Again, I think much of this has been abused and the method of statistical analysis has been twisted. The so-called long-term study was intended to study variation as it happened in production. A simple example is a lathe. During production runs you would experience tool variation, more raw material (bar stock) variation, as well as maintenance and general entropy. We could not assess these sources of variation in a pre-production capability study.
    Both the machine tryout and the machine potential studies were lucky to have 20-30 parts. In most cases, these were taken in consecutive order and no sampling or subgrouping was possible. There was no way to take subgroups and execute a sampling plan. We were stuck. However, for companies with processes that can spit out a lot of parts and subgroup them, it does make sense to use the appropriate analysis. For transmission components, that was not possible.
    Whether you calculate the population sigma from a sample of 30 consecutive parts or from a subgroup on a control chart using R-bar, they are both estimates. For the production capability, control charts were used on the critical characteristics to assure stability over time and it was easy to use the R-bar (and it is more appropriate) to estimate the process variation.
    It is very interesting to see how other companies have struggled with this. In some companies and applications, it is not the best fit. The Livonia transmission plant for the new transmission did not make defects. All the processes were capable. The only issue was very small rejects at the test stands (less than .05%) due to tolerance stack-ups. There was no focus on defect reduction because there weren't any defects in the production process. The focus was to continue to reduce variation to improve overall performance by achieving a product on target.
    Eileen Beachell, Quality Disciplines

    0
    #83595

    Mikel
    Member

    Jim,
    Go simulate a .5 sigma shift in your control charts and see how long it takes, on average, to detect. It takes a load of samples.
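
    A quick Monte Carlo sketch of that suggestion, assuming subgroups of five, the usual +/- 3 sigma limits, and only the single-point-beyond-the-limits rule (all parameters are invented for illustration):

```python
# How many subgroups of 5 does an Xbar chart take, on average, to flag a
# 0.5 sigma shift in the mean? Only the "one point beyond 3 sigma" rule is used.
import numpy as np

rng = np.random.default_rng(0)
n, shift, trials = 5, 0.5, 2000
limit = 3 / np.sqrt(n)               # control limit for subgroup means, in sigma units

run_lengths = []
for _ in range(trials):
    count = 0
    while True:
        count += 1
        xbar = rng.normal(shift, 1 / np.sqrt(n))   # subgroup mean after the shift
        if abs(xbar) > limit:                      # out-of-control signal
            run_lengths.append(count)
            break

print("average run length:", np.mean(run_lengths))  # roughly 30+ subgroups
```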

    0
    #83596

    Mikel
    Member

    A failure for tolerance stackup is not a defect? I disagree.

    0
    #83598

    Chris Seider
    Participant

    I have enjoyed the postings on this topic.  I think it sets the record for passion, history, pot shots, etc. on the Six Sigma forum. 
    You ask the question: what good does discussing Cpk vs. Ppk do?  I find this is always a good point when interviewing candidates who say they are Six Sigma trained.  If they are trained in the more classical style (I've defined it as Motorola's, as adapted by GE, Honeywell, and others), they don't give me the dumbfounded look OR give me a line of "bull".
    One interesting point.  I find it interesting that many companies are adding these additional tools to the define phase (e.g. SIPOC).  This is an indication that leadership doesn't know how to define projects to begin with and just throws their hands up in the air and says "BB, MBB, or GB, fix it".  Of course, I've been giving advice to Six Sigma training candidates in other companies who come to me for help, and I'm amazed at the low threshold it takes to get certified by some organizations.  I saw one project recently that had a predetermined solution; the solution didn't work, yet the candidate got certified.  The candidate asked whether I would certify them, and I kindly told the person "no", since the project was a poor training project and he showed no improvement – but I said I wouldn't have allowed him to get trained on that type of project in the first place.  I said he had an understanding of the tools, but in the cultures I best relate to, success is driven by results, not learning tools.
    I've pontificated myself and am wondering if I'll get a new string of responses on this heavy subject. 

    0
    #83599

    Jim Winings
    Participant

    Eileen;
     
    >The so-called long-term study was intended to study variation as it happened in production.
    >We could not assess these sources of variation in a pre-production capability study.
     
    Perhaps a question is: did you need to assess these variations? See below for what I am referring to.
     
    Yes, I understand that. I think my AT&T Statistical Quality Control Handbook, (c) 1957 Western Electric Company, explains it best. The setup is that the calculated std. dev. is .86 whereas the estimated std. dev. is .68, based on the example data.
     
    and I quote…
     
    “In the case where the R chart is in control but the X-Bar chart is out of control, the estimate of the sigma(universe) which is obtained from the R chart will be a better estimate of the standard deviation of the underlying universe than the value obtained by calculating the ‘root mean square’ deviation.”
     
    R-bar for the example they show, based on subgroups of 5, was 1.59. The d2 factor for n=5 is 2.326, giving an estimate of 1.59 / 2.326 = .68.
     
    “This is a truer estimate of the standard deviation of the underlying process than the value of .86 which was calculated. This is because the distribution shifted its center during the period when the measurements were obtained, and the shift in center has inflated the estimate arrived at on page 130”  i.e. .86
     
    end quote…
     
     
    But, the center shifted. It was out of control. I would have to assume that you were in control and centered prior to establishing the capability? Literally 4 pages before that statement they talked about statistical tolerancing.
     
    Juran's book Quality Planning and Analysis, 2nd edition, 1980, indicates that, due to a potential process shift, 3 sigma is not good enough. He suggests going to 4 sigma. All Motorola did was add .5 sigma to that theory to get the 3.4 ppm number. Now, I don't know whether the 1st edition of Juran's book, which apparently was printed in 1970, also had the process shift theory or not. But we know that in 1980 it did exist.
     
    And this is another one of my pet peeves. I preach design margin (of course, that and 50 cents will get you a cup of coffee in SOME establishments). However, by using statistical tolerancing in combination with design margin, i.e. a worst-case process shift, can one not calculate the probability of manufacturing errors for a process that is in control, no matter how many part steps or parts there may be, assuming that a worst-case shift would not be greater than +/- 3 sigma of a normal distribution? (Ever notice that there are too many assumptions in statistics?) All of this is done, of course, in the design phase and not in the manufacturing phase, because you cannot inspect quality into any product. Once you have these numbers, you set up your machine and try to keep it in control, and if it drifts slightly, the margin of error that you established with statistical tolerancing should keep you from ever running defects.
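
    One common flavor of the statistical tolerancing mentioned above is the root-sum-square (RSS) stack-up, which assumes the contributors are independent and roughly normal. A minimal sketch with invented part dimensions (a worst-case stack would add the tolerances directly; RSS combines the standard deviations):

```python
# Root-sum-square (RSS) statistical tolerancing sketch. Part nominals and
# sigmas below are invented; assumes independent, roughly normal contributors.
import math

parts = [  # (nominal, standard deviation) for each dimension in the stack
    (10.00, 0.02),
    (25.00, 0.03),
    (5.00, 0.01),
]

nominal = sum(nom for nom, sig in parts)
sigma_stack = math.sqrt(sum(sig ** 2 for nom, sig in parts))

# e.g. a +/- 4.5 sigma design margin on the assembly dimension
print(f"stack nominal = {nominal:.2f}, +/- 4.5 sigma = {4.5 * sigma_stack:.3f}")
```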
     
    OK, I think I’m starting to confuse myself. I don’t think I have explained myself very well, sometimes I have problems putting into words what I can see in my mind. (Oh no wait, that’s a 70’s thing, never mind)
     
    As a matter of fact, isn’t that six sigma? However I never hear anyone talking about statistical tolerancing when referring to six sigma. Doing a search on isixsigma for “statistical tolerance” yielded no results. (of course it will now after google spiders this site and updates their databases)
     
    Heck, maybe I'm not making any sense at all; I've been up for 16 hours straight now. My point is: could you not have used these theories, which I didn't know about until about 1987 or so, to also obtain a capability for the process in reverse, so to speak?
     
    But we have kind of gotten away from the original topic, which was who changed the standard from Cpk to Ppk. But perhaps it is good reading to think about, at least.
     
    (Man that took awhile to research and type up)
    Jim Winings
     

    0
    #83600

    Cone
    Participant

    Chris,
    You have made my day with your comment about training. Keep up the good work

    0
    #83601

    Jim Winings
    Participant

    >I have enjoyed the postings on this topic.  I think it sets the record for passion, history, pot shots, etc. on the Six Sigma forum.
     
    You forgot bullheadedness! (GRIN)
     
    To me, six sigma has become something more akin to QSxxxx or ISOxxxx than to what I was originally taught. And there is probably a reason for that. Too many hands in the pot, all with a better wheel. (I couldn't think of any more clichés to add there.)
     

    0
    #83606

    Chris Seider
    Participant

    Is this Mr. Cone?  If so, please contact me at [email protected] since I’ve lost your contact info.  Even if this isn’t who I think, I’m glad I was able to make you smile. 

    0
    #83607

    Chris Seider
    Participant

    I forgot to ask: whatever prompted you, Jim, to start this string of comments?

    0
    #83608

    Jim Winings
    Participant

    Ah, annoyance??? (GRIN)
    I’ve been looking for the bloody person(s) who is trying to change a standard and make me rewrite my software. (Grrrrrrr) Not that I would. I want to disprove their theories and set some things straight and cause my competitors to rewrite their software. (they have already stolen ideas from me) Or be proven wrong and learn something. I’ve been on both ends of that stick before.
    After looking at several issues, I'm convinced that it is silly to preach that one MUST be accurate in measurements to do statistics, even if it costs a company far more money to achieve than it may be worth in terms of RONI or RONA, and then we screw up the accuracy with estimates. No one has proven to me mathematically that estimates are worth more than actual calculations when based on a normal distribution, which is what six sigma is based on. I am wondering what flavor six sigma would take on if estimates were outlawed and only calculations were allowed. Would quality get better, worse, or stay the same? Also, one would know that the 'experts' knew how to do square roots. And the list goes on and on and on, kind of like me.
     
     

    0
    #83609

    Mikel
    Member

    Jim,
    I just read the articles on your web site, and you are out of touch with Pp and Ppk the way the Six Sigma community is using them. Your Pp is the automotive industry's Cp and vice versa. The metric you are at odds with (or maybe don't understand) is the automotive industry's Cp.
    Go read the definitions in AIAG’s SPC manual.
    Minitab is set up to duplicate these definitions, except Minitab defaults to s-bar for the short term. R-bar/d2 can be chosen if you want.

    0
    #83611

    Jim Winings
    Participant

    Regardless of how one is using Ppk/Cpk et al., my point is still the same. Ppk adds no value and just makes things more confusing than it is worth. Six Sigma should be about making things easier, not more confusing.
     

    0
    #83613

    Jim Winings
    Participant

    Eileen;
    After thinking about this post for a while, I realized I had heard this story before. Looking in my Design for Manufacturability course material (Motorola, 1986, Issue 4), I came across a section in the back with several articles from the Journal of Quality Technology, Vol. 18, No. 1, and lo and behold I came across one from January 1986 called "Process Capability Indices" by Victor E. Kane, Ford Motor Company, Transmission and Chassis Division, PO Box 2097 (7), Livonia, MI 48150. Sound familiar? The funny thing is that Dr. Kane also quotes Juran/Gryna (we never seem to give credit to Gryna for some reason), Quality Planning and Analysis, 2nd Edition, just as I have. Of course, the reason for this may be that what was then the ASQC used it as their bible. I assume you have read this.
    I just thought it was funny, and I don’t mean ha, ha. (GRIN)
     

    0
    #83614

    Jim Winings
    Participant

    I wish there was some way to edit these posts!!!!! See what happens when one has been up for over 24 hours.
    I wish there was some way to edit these posts!!!!! See what happens when one has been up for over 24 hours.
    I wish there was some way to edit these posts!!!!! See what happens when one has been up for over 24 hours.
    ZZZZ zzzz ZZZZ zzzz ZZZZ zzzz ZZZZ zzzz ZZZZ zzzz ZZZZ zzzz
     

    0
    #83615

    Eileen
    Participant

    Jim,
    Thanks for your postings. I think you asked some really good questions in spite of your lack of sleep. Fundamentally, I think we are in agreement. Although statistics is useful as an aid to making judgements, it is not a substitute for good engineering knowledge. Of course, we could use statistical models, make a boatload of assumptions, and perhaps be able to better estimate the tolerances. Unfortunately, for transmission manufacturing, there are 8000 characteristics, with about 1500 being critical. The process capabilities vary from marginally capable (1.33 – 1.66) to highly capable (5 – 8). These are not set in stone – they do vary somewhat with time. It was very important to remove unwanted sources of variation in the production process. Of course, there is the matter of economics, and at some point you would have to stop. Some processes were fine at a Cpk of 2; others really needed to be higher. All processes do drift, and the margin of error does continue to protect the product. Even with the 4 sigma or 4.5 sigma, because of the number of components interfacing, it needed to be higher on some of the components. In addition, a lot of the processes are non-normal, which adds its own complexity.
    You are right – the processes were centered and stable prior to the calculation for capability – including the machine tryout and the potential study.
    I believe the Cpk was changed to Ppk within Ford simply to designate the two different studies. This was most likely done by a committee at some point – my money is on the Supplier Quality Assurance (SQA) group at Ford. Identifying a single person (even if you could) would not, I think, shed any more light on this issue.
    Again, thanks for your comments and perspective on Ppk.
    Eileen, Quality Disciplines

    0
    #83616

    Gabriel
    Participant

    Jim, I'm not sure what your point is about Cpk/Ppk and estimates/calculations. I hope you will tell me after reading this long post.
    But I will share my view on this subject, which is based on AIAG's SPC manual and my own working experience and use. That does not mean that either I or AIAG is right. It goes long, so sit down and relax. After reading it, I would like to know your opinion.
    I will use Xbar-R charts, but this could be extended to other charts.
    First, some definitions. For them we will think of a process as something that delivers individuals that are measured. The process is not limited to the period of time of the study, so the process individuals include those not delivered yet (or delivered before).
    Stable process: A process running without variation due to special causes. Such a process delivers the same distribution over time.
    n=subgroup size, m=number of subgroups in a sample (or in a chart), N=n times m=total sample size (number of parts in all subgroups in the study).
    X=measurement result of any individual of the process. If the measurement variation is low, it can be taken as the "true value" of that individual (we will assume that).
    Xbar=the average of the Xs in one subgroup.
    R=max(X1,X2,…,Xn)-min(X1,X2,…,Xn) in one subgroup.
    Mu=Mu(X)=Mu(Xbar)=true average of the process distribution*
    Mu(R)=The true average of the subgroup ranges distribution.*
    Sigma=Sigma(X)=true standard deviation of the process distribution*
    * These values are defined only for a stable process, because an unstable process cannot be characterized by ONE distribution unless the instability behavior is predictable. We will omit that last case. Note that these true values are unknowable because some of the individuals (and some subgroups) have not been delivered by the process yet.
    Xbarbar=average of Xbar1,Xbar2,…,Xbarm=average of X1,X2,…,XN.
    Rbar=Average of R1,R2,…,Rm.
    S=S(X)=whole sample standard deviation of the X1,X2,…,XN individuals.
    ^=Hat=Estimation. For example. Mu^ is an estimation of Mu.
    Within variation: Process variation due to common causes only. Characterized by Sigma(w)*
    Total variation: Variation due to common and special causes. Characterized by Sigma(t)*
    Cp=process CAPABILITY index=ratio between the tolerance range and the process within variation=T/6Sigma(w)*
    Pp=process PERFORMANCE index=ratio between the tolerance range and the process total variation=T/6Sigma(t).*
    *(T=tolerance range). These 4 definitions are from AIAG's SPC manual, slightly modified. The same conclusions derived below for Cp/Pp can be derived for Cpk/Ppk.
    d2=Mu(R)/Sigma in any normal distribution. It is a function of n. We will assume that the process, when stable, is normally distributed. Note, however, that a non-normally distributed process violates this, so Sigma will NOT be Mu(R)/d2, and the error involved in using this should be analyzed.
    MY VIEW
    Because the "variation due to special causes" cannot be negative, Sigma(t)>=Sigma(w), and then Pp<=Cp and Ppk<=Cpk.
    STABLE PROCESS:
    By definition, if the process is stable then the variation due to special causes=0, so within variation = total variation = process variation, and Sigma(w)=Sigma(t)=Sigma=Mu(R)/d2. Since Sigma(w)=Sigma(t)=Sigma, we have Cp=Pp=T/6Sigma.
    However, all these values can not be known, but can be estimated in these two ways:
    Way 1:
    Sigma(w)^=Sigma(t)^=Sigma^=S
    Cp^=Pp^=T/6S
    Way 2:
    Sigma(w)^=Sigma(t)^=Sigma^=Mu(R)^/d2, and we can estimate Mu(R)^=Rbar, so we get Sigma(w)^=Sigma(t)^=Sigma^=Rbar/d2
    Cp^=Pp^=T/6(Rbar/d2)
    Note that both S and Rbar/d2 are just two estimates of the same parameter, Sigma (the process standard deviation). Because of random error, either of these estimators can be greater or smaller than the actual value Sigma. Also, in one sample of m subgroups of n, either one of these estimators can be greater than the other. If you make several studies over time, always on the same stable process, and plot both estimates of Sigma against time, you will get two curves affected by random variation (like an Xbar plot) that cross each other several times and stay close to the same horizontal line, which is the process Sigma. The same happens if you plot both estimates of Cpk=Ppk.
    In this condition, any distinction between within and total variation, or Sigma(w) and Sigma(t), or Cp/Cpk and Pp/Ppk, is just stupid. It is all the same thing.
    However, for a reason we will see next, we use "Way 2" to estimate within variation and "Way 1" to estimate total variation, then:
    Sigma(w)^=Rbar/d2, Cp^=T/6(Rbar/d2)
    Sigma(t)^=S, Pp^=T/6S
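    As a quick illustration of the two estimators (a made-up sketch, not from the AIAG manual; all the numbers are assumed), here is a short Python run on a simulated STABLE process. For a subgroup size of n=5, the d2 constant is 2.326.
    import numpy as np
    
    rng = np.random.default_rng(1)
    n, m = 5, 25                # subgroup size and number of subgroups
    d2 = 2.326                  # d2 constant for n = 5
    mu, sigma = 10.0, 0.2       # "true" process parameters (unknowable in real life)
    T = 2.4                     # tolerance range USL - LSL (assumed)
    
    data = rng.normal(mu, sigma, size=(m, n))      # m rational subgroups of n parts each
    
    S = data.std(ddof=1)                           # Way 1: whole-sample standard deviation
    Rbar = (data.max(axis=1) - data.min(axis=1)).mean()
    sigma_w = Rbar / d2                            # Way 2: within-subgroup estimate
    
    print(f"S       = {S:.4f} -> Pp^ = {T / (6 * S):.2f}")
    print(f"Rbar/d2 = {sigma_w:.4f} -> Cp^ = {T / (6 * sigma_w):.2f}")
    Run it repeatedly with different seeds and the two estimates (and therefore Cp^ and Pp^) keep trading places around the true value, which is exactly the point about a stable process.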
    BUT, IS THE PROCESS STABLE?
    No, it's not. No real process is absolutely stable. Western Electric said that perfect stability cannot be reached, that if it could there would be no point in charting the process, and that the aim is to get a process that is stable enough.
    However, some "small" instabilities introduce "small" variations that are not detectable. For example, imagine that the average actually shifts 0.1 sigma due to a real special cause. How would you detect that? In SPC we say that the process is "in control" when the charts do not show "out-of-control" (OOC) signals. In fact, the best we can say in such a case is "I do not have enough evidence to suspect that the process is unstable (or affected by a special cause)". Sometimes we simplify and say "the process is stable (or free from special causes)", which is OK from a practical perspective but is not strictly correct.
    So, with "true" stability impossible to reach and, even if reached, impossible to prove, what do we do with Sigma(w) and Cp/Cpk?
    Imagine that you have a process that is stable during periods of time, but from period to period the average shifts due to a special cause. If you stay within one period, the process delivers the same distribution over time. If you compare one period with another, they both have the same distribution shape except that the position is shifted. But if you take several periods, you see that the individuals belong to a wider distribution because of the shift of the average. That means that the process variation within a period is equal in any period, but the variation across all periods together is greater. The first is the variation due to common causes only; the second is the variation due to common and special causes. If you could eliminate the special cause that makes the process average shift, you would get a total variation equal to the variation within any period of stability (that's obvious, of course: now it is all one period of stability).
    Imagine that we have an Xbar-R chart where every subgroup is made of individuals belonging to the same period of stability, and different subgroups can belong either to the same or to different periods of stability. How do we calculate the variation within the stability periods (which was assumed to be the same in every period)? That would be Sigma(w). We can't, but we can estimate it.
    As we saw, d2=Mu(R)/Sigma for any normal distribution (remember we assumed that, when stable, the process was normally distributed). We just said that within any period of stability, Sigma(w) was the same. Because d2 is a constant, that means that within any period Mu(R) must also be the same. That means that Sigma=Mu(R)/d2 will be the standard deviation within any one of the stability periods, i.e. Sigma(w)=Mu(R)/d2. We don't know Mu(R), but if we can estimate it we can estimate Sigma(w) as Sigma(w)^=Mu(R)^/d2. If we could know which subgroups belong to which stability period, we could average the R values for each stability period and get an Rbar for each. Then that Rbar could be used as an estimate of Mu(R) in each period. Because we know that Mu(R) is the same in all periods, we could average all the estimates of Mu(R) to get a better estimate; that would be averaging the Rbar of all periods. We don't know which subgroup belongs to which period, but averaging the averages is the same as averaging the whole original data without the intermediate step. So we can take the overall Rbar of all subgroups (regardless of the periods) as an estimate of Mu(R). Once we have made Mu(R)^=Rbar, we can make Sigma(w)^=Rbar/d2. That's why we used "Way 2" for Sigma(w) and Cp.
    Now, if you wanted to know how well the process performed, Sigma(w) would not be a good indicator because the process had more variation than "due to common causes only". For that we need Sigma(t). Sigma(t) should be calculated as the population standard deviation, but can be estimated as the standard deviation of a sample. The sample could be, for example, all the individuals in the subgroups in our chart. And then we get Sigma(t)^=S. That's why we used "Way 1" for Sigma(t) and Pp. Of course, you can take another sample from the batch and calculate S; it does not need to be the same sample used for the subgroups.
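    Here is a small sketch of exactly that scenario (again a made-up illustration with assumed numbers): each subgroup comes from one stability period, but the period averages shift due to a special cause. Rbar/d2 recovers the within-period sigma while S picks up the extra between-period variation.
    import numpy as np
    
    rng = np.random.default_rng(2)
    n, m, d2 = 5, 25, 2.326
    sigma_w = 0.2                               # common-cause sigma, identical in every period
    shifts = rng.normal(0.0, 0.3, size=m)       # special cause: the average shifts from subgroup to subgroup
    
    data = 10.0 + shifts[:, None] + rng.normal(0.0, sigma_w, size=(m, n))
    
    Rbar = (data.max(axis=1) - data.min(axis=1)).mean()
    print("Sigma(w)^ = Rbar/d2 =", round(Rbar / d2, 3))          # close to 0.2: within variation only
    print("Sigma(t)^ = S       =", round(data.std(ddof=1), 3))   # clearly larger: common + special causes
    With T fixed, the Cp^ computed from the first number comes out larger than the Pp^ computed from the second, as the definitions say it should.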
    Wait a minute! What if the distribution of the whole batch, mixing all the stability periods (and therefore including the variation due to common and special causes), happens to be normal? Then we could apply the d2 definition again and say that an estimate of Sigma(t) can be Sigma(t)^=Rbar/d2. That would make Sigma(t)^=Sigma(w)^, which is absurd because we know that, since there IS variation due to special causes, Sigma(t)>Sigma(w).
    The contradiction arises because the individuals in the subgroups are not really a random sample of the whole batch. We stated that all the individuals within any subgroup belonged to the same stability period, and hence to a stable distribution that is NOT the distribution of the whole batch. Then the R of each subgroup is NOT representative of the variation in the whole batch.
    If you want, you can take the whole batch and, at random, take parts and place them in subgroups without any rationality, meaning that any individual of the batch has the same probability of being chosen for any subgroup. Now you can calculate the R of the subgroups and get Rbar (which will be greater than the Rbar of the rational subgroups used to calculate Sigma(w)^), and make an estimate of Sigma(t) as Sigma(t)^=Rbar/d2. That would be a perfectly valid estimate of Sigma(t), just as Sigma(t)^=S is. Note that if you plot the data from those subgroups in an Xbar-R chart, it will tell you nothing about stability because the horizontal axis is no longer "time".
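    A quick way to see this (same made-up numbers as the sketch above): shuffle the shifted data into non-rational subgroups and Rbar/d2 moves up toward S.
    import numpy as np
    
    rng = np.random.default_rng(2)
    n, m, d2 = 5, 25, 2.326
    data = 10.0 + rng.normal(0.0, 0.3, size=m)[:, None] + rng.normal(0.0, 0.2, size=(m, n))
    
    shuffled = rng.permutation(data.ravel()).reshape(m, n)               # non-rational subgroups
    Rbar_rational = (data.max(axis=1) - data.min(axis=1)).mean()
    Rbar_shuffled = (shuffled.max(axis=1) - shuffled.min(axis=1)).mean()
    
    print("Rbar/d2, rational subgroups =", round(Rbar_rational / d2, 3))   # within variation
    print("Rbar/d2, shuffled subgroups =", round(Rbar_shuffled / d2, 3))   # close to total variation
    print("S, all data                 =", round(data.std(ddof=1), 3))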
    CAPABILITY VS PERFORMANCE
    In a process CAPABILITY study we estimate Cp/Cpk. It is about the FUTURE: what the process CAN DO. It tells you how the process PERFORMS when it is stable. It is useful to predict what the process will deliver. Indeed, when the process is stable it delivers the same distribution over time, so it is possible to predict that the same distribution will be delivered later. Note that if the process is not stable, then the distribution is changing and you do not know which distribution will be delivered next time. That, together with the fact that Rbar/d2 is an estimate of Sigma only if the process is stable, is why stability is a prerequisite for calculating Cp/Cpk.
    In a process PERFORMANCE study we estimate Pp/Ppk. It is about HISTORY: what the process DID, how the process PERFORMED during the time covered by the study. Stability is not a requisite. It would be stupid to require stability to calculate Pp, because the "variation due to special causes" is part of its definition. It cannot be used for prediction of future performance, because stability is not a requisite.
    Now, if you used Pp/Ppk AND the process is stable, you can make predictions. After all, we’ve already seen that in that condition Cp=Pp and Cpk=Ppk.
    I GOT A Cp SMALLER THAN Pp, HOW IS IT POSSIBLE?
    It is not. Sigma(t) can't be smaller than Sigma(w), so Pp can't be greater than Cp.
    Sometimes we write Cp=T/6(Rbar/d2) and Pp=T/6S. But that's a simplification. These are not Cp and Pp, but their estimators Cp^ and Pp^. Those estimators can be, at random, slightly above or below the actual value that they are estimating.
    If the process is stable then Cp=Pp. If the process is slightly unstable (the variation due to special causes is small) then Cp will be only a little larger than Pp. In these conditions, the estimators may "cross" and you can get an estimate Cp^ a little smaller than the estimate Pp^. But that does not mean that the actual Cp is less than the actual Pp. That is not possible.
    On the other hand, when a process is clearly unstable, Cp will be large enough compared to Pp to ensure that Cp^ will be larger than Pp^.
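    A rough Monte Carlo sketch of this "crossing" (assumed subgroup sizes, and a perfectly stable simulated process) shows how often the estimate Cp^ falls below Pp^ purely by random error:
    import numpy as np
    
    rng = np.random.default_rng(3)
    n, m, d2 = 5, 25, 2.326
    trials, crossings = 2000, 0
    for _ in range(trials):
        data = rng.normal(0.0, 1.0, size=(m, n))                       # stable process, sigma = 1
        sigma_w = (data.max(axis=1) - data.min(axis=1)).mean() / d2    # drives Cp^
        sigma_t = data.std(ddof=1)                                     # drives Pp^
        if sigma_w > sigma_t:                                          # then Cp^ < Pp^ for the same T
            crossings += 1
    print(f"Cp^ < Pp^ in about {100 * crossings / trials:.0f}% of simulated studies")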
    SHORT TERM VS LONG TERM
    The length of the study has nothing to do with Cp/Cpk/Pp/Ppk. Nowhere in the definitions of these indexes do the words "term", "short" or "long" appear.
    You can make a SHORT CAPABILITY study, a LONG CAPABILITY study, a SHORT PERFORMANCE study or a LONG PERFORMANCE study.
    However, the validity of a SHORT CAPABILITY study can be questioned. We saw that stability is a prerequisite for capability. Stability is usually assessed by the lack of OOC signals in the control chart. But if you make a study with, let's say, 100 consecutive parts processed in, let's say, half an hour, you leave a lot of potential sources of special causes out of the study: day/night, operator change, tool wear, tool change, a different batch of raw material, and many more. What would you tell me if I wanted to sell you that "the process has the capability to behave stably and with a standard deviation that is 1/9 of the allowed tolerance (Cp=1.5), because that's what I got in 100 parts made in half an hour"?
    That is a problem. If I build a machine or start a new process and have to make a short preliminary test to show my client its potential, how do I do it? My opinion is that you don't. You can't.
    The best you can do is to make a SHORT PERFORMANCE (Pp/Ppk) STUDY. Calculating Pp/Ppk is somewhat conservative because it tends to be smaller than Cp/Cpk even for small instabilities that are not detected as OOC signals in a chart, especially in the short term. Also, you will probably require a greater Pp/Ppk in the short-term study than the expected Cp/Cpk in the long term. And finally, you'd better get a "free of OOC signals" chart in the short term if you want to have any chance of having a stable process. Note that now you are requiring something like stability even when you are using Pp.
    I think that's why QS-9000 requires a preliminary Pp/Ppk > 1.66 for a new process and Cp/Cpk > 1.33 for ongoing processes.

    0
    #83617

    Jim Winings
    Participant

    HOLY COW!!!!!!!!!!!
    Gabriel gets the gold star for the longest, most complex post. I was just getting ready to write a summation as a checklist of what facts have been dug out so far. Thanks!! (GRIN)
     
    I'm still going to write the summation today, and will answer your post on Monday. I'm going to have to flow-chart that puppy out. However, a couple of points; and as I add all this stuff together, some things are starting to come to light.
     
    Six Sigma is NOT QS-9000, and AIAG does not set Six Sigma standards. I'm not convinced that anyone does, but I would have to say that ASQ (formerly known as ASQC) may be the closest, and they do not mention Pp/Ppk in the glossary on their web site.
     
    And believe me, do not get me started on QS and ISO standards, self-audits, et al. That would take another gig of iSixSigma's hard drive space.
     
    Jim Winings
    http://www.sixsigmaspc.com
     
     

    0
    #83618

    Jim Winings
    Participant

    Summation of facts so far uncovered.
     
    Pp/Ppk did not exist as a unit of measurement until around 1984 or a little later. (Eileen)
     
    Cp/Cpk did exist as a unit of measurement at least by 1980 (Juran/Gryna, Quality Planning and Analysis, 2nd Edition). If someone has a copy of the 1st Edition of Quality Planning and Analysis by Juran/Gryna, which should have come out around 1970 according to the 2nd edition's copyright notice, please look up Cp/Cpk and post the results here.
     
    Checking my AT&T Statistical Quality Control Handbook, (C) 1957: it does discuss process capability, but does not include the terms Cp/Cpk. Ergo, unless someone has another verifiable source, Cp/Cpk came into play as a unit of measurement sometime between 1957 and 1980.
     
    Sometime between 1984 and 1991, apparently either Ford's Supplier Quality Assurance or the AIAG set Ppk as using a calculated sigma and Cpk as using an estimated sigma. Not ASQC/AIAG, as some indicate on their web sites.
     
    Sidebar:
    Prior to giving up Six Sigma training, Motorola sent everyone that was to get the original Six Sigma training to a 'Basic Statistics' course, (c) 1985 Motorola, Issue #1, given by the Motorola Training and Education Center, the forerunner to Motorola University (ENG 125). It was based on Dr. Dale Besterfield's book Quality Control. The 2nd Edition of that book had just come out, so some participants got the 1st Edition and some got the 2nd Edition, and the course related to both books. The course material does not mention Cp/Cpk, but it does calculate sigma using the square root method except for the use in control charts. The chapter on capability studies uses Cumulative Frequency. Funny thing is that while it doesn't refer to Cp/Cpk, it does have a session on 'Setting Assembly Tolerances', and of course distributions, X-bar/R charts, P charts, etc.
     
    The AT&T Statistical Quality Control Handbook, (C) 1957, covers using an estimated sigma for process capability, but indicates that it is more accurate only if the process is shifted and out of control.
    Note: while I read this and posted it here, I still do not see a mathematical correlation.
     
    A Motorola suppliers document, apparently based on Dr. Dale Besterfield's Quality Control, 2nd Edition, and Juran/Gryna's Quality Planning and Analysis, 2nd Edition, clearly indicates that sigma should be calculated using the square root method for Cp/Cpk numbers. For exactly what it says, go to…
    http://www.sixsigmaspc.com/six_sigma_spc_about.html
    and click on the 'Sigma' link. It's in the 1st paragraph. You don't need to read it, just look for the link; it's in a link color. I don't want to upset Stan, but I also don't want to do something twice. That wouldn't be very Six Sigma of me now, would it?
     
    Further research that I have done in the past 24 hours indicates that there may be a difference in the way sigma is obtained between sigma universe, sigma actual, sigma population, and sigma (enter your favorite statistics word here), and this may be adding to the confusion. Also note that Dr. Dale Besterfield's Quality Control, 6th Edition, (c) 2001, shows Cp/Cpk estimated in at least 2 different ways and never calculated using the square root method. I'll cover this later perhaps, but he says that R-bar/d2 'does not give a true process capability and should only be used if circumstances require its use.' Of course he does not indicate what those 'circumstances' are, and this is different from his 1st and 2nd editions. However, in both books he says that, whether grouped or ungrouped, sigma by itself is always the square root method. I could not find anything on Pp/Ppk.
     
    Checking some other sources: Rath & Strong's Six Sigma Pocket Guide (a good, cheap little book, but hard to find anything in) – Rath & Strong apparently being another pioneer in Six Sigma, but then again who isn't – and The Six Sigma Way both cop out by using look-up tables for sigma. I guess that's one way to stay out of the problem, but it also may indicate something else. Neither refers to Pp/Ppk.
     
    OK, when in doubt, I go to a book that isn't based on manufacturing: Statistics: An Intuitive Approach, Weinberg and Schumaker, 3rd Edition. (No, it's not the meteorite Schumaker.) Now since this is not a manufacturing book, they don't mention process capability; ergo, they always show sigma to be calculated with the square root method. If all your previous processes were designed to six sigma specifications and monitored, in theory, you shouldn't have any long-term problems.
     
    I'm starting to see a trend here. Is anyone else? All my Motorola Six Sigma training material, and I still have it all, always indicates to calculate sigma using the square root method. Anyone have training material from GE or Honeywell that indicates differently?
     
    Pp/Ppk appears nowhere except, apparently, in the AIAG's SPC manual, QS-9000, and perhaps ISOxxxx.
     
    While different generic quality control books indicate calculating sigma in several different ways, there is no consistency or uniformity among all these Ph.D.s, and none of them shows mathematically why. They just say this is the fact, and none of them calls it Pp/Ppk.
     
    Now, I am a firm believer that everything in the universe can be explained with mathematics, with the exception perhaps of Michael Jackson. And nowhere has it been proven mathematically that estimating sigma for process capability, under any given circumstances, is more accurate than calculating it using the square root method. I still see it as common sense. I also see that the only consistent method is the square root method.
     
    It would appear to me that Six Sigma, which is a culture, is having a culture clash with people coming from the automotive industry. Now I'm not picking on the automotive industry; there are some fine autos (Toyota, Honda, Mitsubishi, Nissan), but I do see a trend here with this topic. Or is it just me?
     
    Jim Winings
    http://www.sixsigmaspc.com
     

    0
    #83619

    Loehr
    Member

    Hi Jim,
    One more source you might try in order to get some answers about Cpk versus Ppk is the book, Measuring Process Capability, by Davis Bothe.  It’s carried by both amazon.com and the ASQ Web site.  It addresses most of the issues raised in many of these posts.

    0
    #83620

    Chris Seider
    Participant

    Gabriel,
    We have had good discussion in here in the past.  This is a continuation of this exchange of intellectual firepower.
    I appreciate your treatise ( a compliment) above.  It has made me think about 2 things.
    1.  Am I incorrect that for a technically correct Cpk, Cp, Ppk, etc. calculation, the process must be in control?  I know that many do not worry about this.  I myself do not worry about this unless I have an extreme case of non-normality of data – e.g. definitively bimodal, extremely skewed, etc.  My question to you is this: do you feel the calculation of Pp, Ppk with a calculated standard deviation is correct even for non-normal-looking (or statistically failing a test such as Anderson-Darling) distributions?  I also ask the tricky question about being in control because I have heard all extremes of what "in control" means (all tests or just the 3 sigma tests).  Your specific comment or reference link would be appreciated.
    2.  It has been evident from past posts that our backgrounds are different and we have different application backgrounds.  I'm disturbed by your statement that it is NOT defined anywhere that Ppk is for long term capability and Cpk is for short term.  I agree that in the exact formulas it isn't stated.  However, everything I've been taught and everything I've experienced says the concept of long term vs short term is VERY important in problem solving methodology.  Are you saying that Cpk could be used to describe long term capability?  I have a gut problem with this because my understanding of Pp for long and Cp for short is the ONLY thing which keeps communication of capability well understood among the parties involved.  It is my understanding that a good part of the Six Sigma community
    I must admit my technical books are at work now so I can't consult AIAG and others right now.  I look forward to your comments.  I have other thoughts about machine capability only being capable 1/9th of the time, but I will let that pass unless you prod me.

    0
    #83621

    Chris Seider
    Participant

    Jim, please see my reply to Gabriel's post before replying.  However, I can tell you other industries are using Ppk and Cpk.  The chemical processing industry that is starting to use Six Sigma uses Ppk and Cpk.  However, I find most industries (maybe because of earlier quality training by those involved) don't calculate more than Cpk, and do it on longer-term data (though using an estimated s.d.), and therefore make it even more confusing out there.  I rarely see Ppk reported.  I've had connections with the automotive supplier base, chemical industry, converting facilities, etc.  Those I see talking about Ppk and Cpk are those applying the more classical Six Sigma approach, who use the short-term and long-term understanding to get a solution faster, not for reporting capability to customers, suppliers, etc.
    I believe Honeywell, GE, Dow, and DuPont all still have Ppk in their training materials, which involve many chemical industries.  However, you find many don't emphasize this distinction much because of the confusion on the concepts; just look at the chatter on this board and the lack of experience working with short- and long-term capabilities by some of the trainers.
    If others have knowledge otherwise or more enlightening, please post.

    0
    #83622

    John J. Flaig
    Participant

    Gabriel,
    I enjoyed reading your very thoughtful argument and I agree with the mathematics. However, I have a question for you. If a process is unstable, then it is unpredictable. Therefore, computing a metric like Pp or Ppk seems to me to be of no practical value since it does NOT predict anything.
    Regards,
    John

    John J. Flaig, Ph.D.
    Applied Technology
    http://www.e-AT-USA.com

    0
    #83624

    Jim Winings
    Participant

    Great! I'll do that, but it will take a while. I'll have to order it.
    (Just what I need, another freaking book I can't find anything in. I guess you can't have too many quality control books.)  GRIN

    0
    #83625

    Mikel
    Member

    Gabriel,
    You need to go back and read the SPC manual – short term is Cp/Cpk, not Pp/Ppk.

    0
    #83626

    Mikel
    Member

    ASQ may not mention Ppk on their web site but they do teach it in their Six Sigma training

    0
    #83645

    Gabriel
    Participant

    Hello Carl, just my view, ok?
    “1.  Am I incorrect that for a technically correct Cpk, Cp, Ppk, etc. calculation done, the process must be in control? “
    This is based on AIAG's SPC manual: For Cp/Cpk, the process must be in control or, if there are a few points showing OOC signals whose special cause has been identified and eliminated (so they won't happen again), those OOC points can be excluded from the calculation of Cp/Cpk. For Pp/Ppk, "variation due to common and special causes" is part of the definition. If you required stability, you would never have "variation due to special causes" and the definition would be stupid. If you accept instability, then you must accept OOC.
    “I myself do not worry about this unless I have an extreme case of nonnormality of data–e.g. definitively bimodal, extremely skewed, etc.”
    I hope we would agree that "stability/control" is one thing and "normality" is a completely different and independent thing. I think that this fact goes beyond "my view". If you don't agree, please tell me and we can go deeper on this.
    “Do you feel the calculation of Pp, Ppk with calculated standard deviation is correct even for non-normal distributions?”
    Well, this is a whole new subject, but briefly, it depends. If you want to use Cp/Cpk or Pp/Ppk as an absolute indication of how well the process is behaving relative to the allowed variation (tolerance), you should find a distribution that matches the "real" process distribution beyond the 0.135% and 99.865% percentiles (which correspond to +/-3 sigmas in a normal distribution) and beyond the specification limits (whichever is farther from the average). This can be almost impossible in a well-performing process. For example, at Cpk=1.5 you have 3.4 PPM. Can you imagine what a giant sample you would need to have enough data in that zone to find a mathematical distribution that matches the process distribution? If you want to monitor your improvement efforts, then you can use the formula with the standard deviation. If the Cp/Cpk improves, then the process improves, even when a Cpk of 1.33 calculated with this straight formula can mean different PPM for different distribution shapes. However, note that just about any imaginable process distribution has well over 99% of the individuals within +/-3 sigmas. For example, the rectangular and triangular have 100%.
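    For the 3.4 PPM figure, here is a quick check under the normal assumption (a sketch; the function name is just for illustration). The nearest-specification tail beyond Cpk corresponds to z = 3*Cpk:
    import math
    
    def tail_ppm(cpk: float) -> float:
        """One-sided defect rate in PPM implied by Cpk under a normal model."""
        return 0.5 * math.erfc(3 * cpk / math.sqrt(2)) * 1e6   # Phi(-3*Cpk), in PPM
    
    print(round(tail_ppm(1.5), 1))    # about 3.4 PPM
    print(round(tail_ppm(1.33), 0))   # about 33 PPM, which shows how fast the tail moves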
    “…what in control means (all tests or just the 3 sigma tests)”
    In control = no OOC signal in the chart. I'll let you decide which signals you will take into account. I admit that we don't use all 6 tests for both Xbar and R (12 in total). Something interesting: if you used those criteria on a perfectly stable process (such as a random computer simulation) you would get at least 1 OOC signal in about 1 out of 3 SPC sheets of 30 points each. Of course that would be a false alarm, because it is not associated with any special cause.
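    As a rough illustration of that false-alarm rate (a made-up simulation using only two of the run tests; the full battery of tests would push the rate higher, toward the 1-in-3 figure quoted above):
    import numpy as np
    
    rng = np.random.default_rng(4)
    n, m, trials = 5, 30, 5000          # subgroup size, points per chart, simulated charts
    alarms = 0
    for _ in range(trials):
        xbar = rng.normal(0.0, 1.0 / np.sqrt(n), size=m)     # Xbar points from a stable process
        limit = 3.0 / np.sqrt(n)                              # 3-sigma limits for Xbar
        rule1 = np.any(np.abs(xbar) > limit)                  # 1 point beyond 3 sigma
        signs = np.sign(xbar)
        rule4 = any(abs(signs[i:i + 8].sum()) == 8 for i in range(m - 7))  # 8 in a row on one side
        if rule1 or rule4:
            alarms += 1
    print(f"Charts with at least one false alarm: about {100 * alarms / trials:.0f}%")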
    “2 … I’m disturbed by your statement that it is NOT defined anywhere that Ppk is for long term capability and Cpk is for short term”
    Not in the AIAG’s SPC manual. I don’t mean that it is not defined anywhere.
    ” … the concept of long term vs short term is VERY important in problem solving methodology.  Are you saying that Cpk could be used to describe long term capability?  I have a gut problem with this because my understanding of Pp for long and Cp for short is the ONLY thing which keeps communication of capability well understood among parties involved.
    Something I forgot in my previous post. We have to distinguish "short" and "long" referring to the length of the study from "short" and "long" referring to the term of the variation. What I said in my previous post referred to the length of the study, and NOT to short-term/long-term variation. For that last subject, I will give you a real-life example from my own current experience.
    We have a process that is very stable and is monitored with SPC. Every month we take the data from the SPC and make a report that includes, among other things, the Cpk and Ppk values, which are plotted together in a chart with "date" on the horizontal axis. Both curves oscillate around the same value and cross each other several times (remember that they are estimates, so you have random error). This monitoring started about 1 year ago. We will agree that it is a long study. Now, Cpk shows short-term variation and Ppk shows long-term variation, ok? But they show the same figures! Of course, I forgot! If the process is stable, then "variation due to common causes" = "variation due to common and special causes" and then Cpk=Ppk (both by definition and in fact). Instead of interpreting that as "short term" vs "long term", I prefer to say that every time we prepared the report the process PERFORMED (Ppk, history) as it has the CAPABILITY TO PERFORM (Cpk, prediction). If the process is not very stable, then you will get a lower Ppk because the process has the CAPABILITY TO PERFORM (Cpk) better than what it ACTUALLY PERFORMED during the study. If short-term and long-term variation are not equal, then the process is not fully stable. If it were, then it would deliver the same distribution all the time, and then there would be no difference between short-term and long-term variation.
    “I have other thoughts about machine capability only being capable 1/9th of the time but I will let that pass, unless your prod me.”
    I prod you :-) What's that about being capable 1/9th of the time? I hadn't said anything about that. I said something about a process with a standard deviation that was 1/9th of the tolerance range, and that was Cp=1.5, but it has nothing to do with being capable 1/9th of the time, does it?

    0
    #83646

    Gabriel
    Participant

    John.
    We agree. Pp/Ppk does not predict anything because stability is not a must, and without stability you cannot know what will happen next time. So Pp/Ppk are of limited value.
    The practical value of Pp/Ppk is to analyze how the process performed during the time the data belongs to, and only in that time. For example, I make a batch of 10,000 parts in 24 hours, and during the manufacturing I take 5 parts every half hour, 240 parts in total. Then I can calculate Ppk to see the quality of this batch with respect to the specification. It's just like taking a sample of 240 parts from the finished batch and plotting the values in a histogram together with the specification limits. It just tells me about this batch. It tells me history: how the process PERFORMED, not how it WILL PERFORM.
    Note: In fact, when computing Pp/Ppk you put all the values together in the same bag for the S calculation, so "subgrouping" has no effect. So you can make the Pp/Ppk calculation post-mortem, just taking a random sample of the desired size from the finished batch and without plotting a control chart (it would be meaningless if the data is not time-ordered). Of course, this does not work for Cp/Cpk, because you need the subgroups to calculate Rbar/d2 and need the time axis to assess stability.
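    A minimal sketch of that post-mortem Ppk (the specification limits and data below are assumed, purely for illustration):
    import numpy as np
    
    rng = np.random.default_rng(5)
    LSL, USL = 9.4, 10.6                         # assumed specification limits
    sample = rng.normal(10.05, 0.15, size=240)   # e.g. 240 parts pulled at random from the finished batch
    
    mean, S = sample.mean(), sample.std(ddof=1)
    Ppk = min(USL - mean, mean - LSL) / (3 * S)
    print(f"Ppk^ = {Ppk:.2f}  (how this batch PERFORMED, not a prediction of future batches)")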

    0