# Ppk Not Needed Since About 1980


This topic contains 159 replies, has 23 voices, and was last updated by John J. Flaig 16 years, 3 months ago.

- March 5, 2003 at 10:43 am #31632
Ppk may have been a good idea from about 1920 to 1980, but after that it just confuses things. I will prove that below. Apparently Cpk used to be the calculation for process capability. Then Dr. Shewhart decided that a second measurement was needed. And instead of making Ppk the estimate, he made the standard (based on Juran's writing prior to 1970), Cpk, the estimate. I'm not convinced that that fact alone makes Dr. Shewhart the shiniest coin in the fountain. The other way makes more sense to me based on what Juran wrote before 1980. So this is the first thing that bugs me.

Next, what does it matter? Allegedly Ppk is used to get a process capability for the entire sample, i.e. multiple subgroups, whereas Cpk is now used for only one subgroup of samples. Why would we need this? To check if one subgroup of samples is capable? We can do that without any estimating. We are only looking at one subgroup. I challenge someone to show me mathematically (and show your work, because talk is very cheap) that Cpk should be an estimate because it differs so much from Ppk. Unless your subgroup is 100 samples, it is NOT going to change that much. This is just one of the silliest things I have ever seen.

According to https://www.isixsigma.com/offsite.asp?A=Fr&Url=http://www.pqsystems.com/cpk-ppk.htm: "In 1991, the ASQC/AIAG Task Force published the 'Fundamental Statistical Process Control' reference manual." Well, if you go to http://www.asq.org/info/glossary/c.html#top you will find Cp and Cpk, but if you go to http://www.asq.org/info/glossary/p.html, there is no Ppk stuff. So let's assume that it mostly came from AIAG. Now there is a group that knows their quality control functions. Don't believe me? Then go to the Consumer Product Safety Commission web site, http://www.cpsc.gov/, and look up the number of recalls. And they don't list the defects in vehicles that are not safety related, and there are thousands more of them. The best explanation I found was here:
http://www.statware.com/bulletin/2000/enews_aug00.htm

Quality Query: Cpk vs. Ppk. Q. What is the difference between the Ppk values reported by Statit and the Cpk values? Why are they both reported? Which one is correct? A. For Pp and Ppk calculations, the standard deviation used in the denominator is based on all of the data evaluated as one sample, without regard to any subgrouping. For Cp and Cpk calculations, the standard deviation is based on subgroups of the data, using subgroup ranges, standard deviations or moving ranges. This "within-subgroup" process variation can be considerably smaller than the overall standard deviation estimate, especially when there are long-term trends in the data. Learn about whether Ppk or Cpk best characterizes your process data at http://www.statware.com/statware/quality.htm.

According to http://www.freequality.org/beta%20freequal/fq%20web%20site/Training/Classes%20Fall%202002/Capability%20Analysis.doc: "The Cpk is the most commonly used index for calculating capability; however, some have found that the Ppk index is actually better. The Cpk is used to gauge the potential capability of a system, or in other words, a system's aptitude to perform. The Ppk (and the related Pp and Pr) actually measure the performance of the system. To determine which of the indexes to use, determine whether you want to analyze the actual performance (Ppk) or the potential capability of the system (Cpk). Cpk is calculated with sigma equal to 3, which is an estimated sigma. Calculating Ppk uses a sigma calculated from the individual data." I guess they don't go far enough back to realize that Ppk didn't exist until not very long ago. And Juran in his books indicates that Cpk is: "Cpk – Process Capability. This is the capability of the process expressed in relation to a worst-case scenario view of the data. It is denoted by the symbol Cpk. The formula is: Cpk = the lesser of…" http://www.sixsigmaspc.com/images/cpk01.gif Juran says nothing about an estimate.
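The within-vs-overall distinction quoted above is easy to see in code. Below is a minimal sketch (the data, spec limits, and d2 constant are illustrative, not from the thread) computing Cpk from the within-subgroup sigma estimate Rbar/d2 and Ppk from the overall standard deviation of all the data pooled:

```python
import statistics

# Hypothetical subgrouped measurements: 5 subgroups of size n = 5.
subgroups = [
    [10.1, 10.3, 9.9, 10.2, 10.0],
    [10.4, 10.6, 10.3, 10.5, 10.4],
    [9.8, 9.9, 10.0, 9.7, 9.9],
    [10.2, 10.1, 10.3, 10.2, 10.0],
    [10.5, 10.4, 10.6, 10.3, 10.5],
]
USL, LSL = 11.0, 9.0   # illustrative spec limits
d2 = 2.326             # control-chart constant for subgroup size n = 5

all_data = [x for sg in subgroups for x in sg]
xbar = statistics.mean(all_data)

# Cpk: "within" sigma estimated from the average subgroup range.
rbar = statistics.mean(max(sg) - min(sg) for sg in subgroups)
sigma_within = rbar / d2

# Ppk: "overall" sigma from all data treated as one sample.
sigma_overall = statistics.stdev(all_data)

cpk = min(USL - xbar, xbar - LSL) / (3 * sigma_within)
ppk = min(USL - xbar, xbar - LSL) / (3 * sigma_overall)
```

Because the subgroup means drift in this made-up data, the overall sigma exceeds the within sigma and Ppk comes out lower than Cpk; with a stable process the two converge, which is essentially the point being argued in this thread.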
So what someone has done is take something that is very pertinent to six sigma, change it from an actual calculated measurement to an estimate, and create Ppk for some unknown reason. The only one I can come up with is that it made things more confusing, so as to sell more books and training sessions. Now, what http://www.asq1212.org/NorEaster%200401/Noreaster%200401p1.html has to say about it makes a little more sense, because they also deal with the number of samples, which literally everyone else I have seen preach Ppk has left out. And that is:

"The importance of targeting relative to both Cpk and Ppk was stressed. Ppk is only long term Cpk. Typically Cpk is just a snapshot of the total process encompassing 125 data points (thank you Dr. Shewhart), typically using subgroups of five, whereas Ppk is typically ten times or more the number of data points and should be over a time duration that includes numerous tool changes. (One 'crunches' the numbers for Ppk and Cpk the same way.) Ppk differs from Cpk in that it reveals the targeting shift over time. On average a +/- 1.5 sigma shift can be expected in targeting alone. Dave Spengler pointed out that with the use of additional

Based on this, who would want to use Ppk? Also, apparently neither measurement is accurate and true unless your process is in control, which is the only other reason I could see for splitting the original measurement. Another good reference is http://www.symphonytech.com/articles/processcapability.htm. A little common sense goes a long way here.

This is a part of my last newsletter. I can see why during the 30's, 40's, 50's, 60's, 70's and some of the 80's we may have needed to estimate sigma. This is because calculators were either not around, or were hard to use, expensive, etc. Not the case today. My little TI stats calculator cost only $30.00.
On top of that, I can enter 5 pieces of data and hit the mean, standard deviation, and range keys to get all the numbers faster than I can use a normal calculator to do the calculations and then look up the number in the factor table to estimate sigma. So why are we estimating anything these days?

The really silly thing, and we are guilty of it as well, is this: does it not make more common sense for a computer to always calculate sigma instead of estimating it? Do we not inject some error into the overall computations just by estimating? If we already have the data points entered into a computer, which in theory is a really good number cruncher, and then we estimate sigma, is it just me, or is this silly? Also, it makes the program more complex to estimate sigma. Most SPC software already must calculate sigma, so estimating it as well requires more lines of code. The more lines of code any given piece of software has, the higher the probability of an error. It's a numbers thing. What would happen if we stopped estimating sigma on the control limits of X-Bar and Range charts, etc., and started using the actual sigma number? Probably nothing, with most processes. Yes, I am talking about changing the system. How taboo of me. Even if I knew the factors off the top of my head, I could still do the math faster with my TI stats calculator.

So why does Ppk exist? Mathematically (and show your work), how COULD it reflect the short or long term of a process? It's just the difference of an estimate, for crying out loud. I used to have a lot more respect for Dr. Shewhart than I do now. No doubt I have just made more enemies, but since I've already been blackballed, it doesn't matter! (GRIN) I was one of the first people at Motorola to go through Six Sigma training. There have been a lot of things added to it from the original concepts. And I can't figure out where the buy-back would come from.
As near as I can tell, a lot of the newer stuff came from ISOxxxx. And based on the Ford and Firestone fiasco, we should all be aware of just how well that works. One last point: it really does not matter how one attains a 4.5 sigma design margin, as long as one does. Too many written rules to follow makes it harder to achieve. A lot like ISOxxxx, etc. A lot of the time, obtaining the correct design margin just means changing a specification. Of course, if your specifications are looser than your competitors', then you will lose business to them, so it is wise to ensure that your process is as tight as it possibly can be.

Please, someone show me how the theory of Ppk vs Cpk can distinguish long-term and short-term capability, and include your work with examples so that I and others can understand it.

Jim Winings

March 5, 2003 at 4:46 pm #83525

Wow, you are making this up as you go, aren't you?

I don’t believe you were one of the first at Motorola – that would have been the early eighties.

And if you had ever taken data for yourself, you would know there is a difference in the long term and short term, and the size of that difference is very important for which path you take to solve a problem.

Your academic mumbo jumbo serves no purpose – go take real data and tell us about it.

March 5, 2003 at 4:54 pm #83526

Thanks for a valuable discussion, Jim.

March 5, 2003 at 5:33 pm #83529

changedriver (Participant)

As a long-time user of Cp/Pp, let me share what we used it for – you can argue the validity of the approach all you want….

We mostly use Ppk for decision making regarding supplier equipment. In the manufacturing environment, when you run off a machine for approval to ship, or after a rebuild, you have a limited number of pieces and limited time to run; you are not experiencing shift changes, temperature changes, tool changes from your tool crib, etc. So the capability is an estimate. The problem is, this is a contractual arrangement. So you need to make a judgement on whether the machine will meet your needs on your floor, with all the extra variables. We required a 2.0 Ppk, and expected a 1.5 Cpk as a result on the floor (in case you didn't know, the auto companies used the reverse definitions – Cpk = long term, Ppk = preliminary). Because this is a contract arrangement, you need to define short term vs. long term – hence, Cpk/Ppk. Sure, you could just say "give me 2.0 Cpk prior to ship", but as Cpk is defined statistically, you will not really achieve that, because you are not incorporating all variables and are using a small (125) sample. So you need a contract definition of short-term capability – Ppk (automotive).

As for the shots at the auto industry in general – they aren't perfect, but show me another industry that makes a product that complex, subject to so many regulations, that performs on demand across all climates and terrain, and is expected to last ten years. Try buying a car from Microsoft…

March 5, 2003 at 5:45 pm #83530

>Wow, you are making this up as you go, aren't you?

Making what up? That is why I provided links to external sources. If you wish to have the page numbers I refer to from Juran's book, I would be happy to give them to you.

>I don’t believe you were one of the first at Motorola – that would have been the early eighties.

It was 1985 when I released the first six sigma software for Motorola. It was called the A17/78 Supplement program. I worked in what was then the Communications Sector; it's now called Land Mobile, in Ft. Lauderdale, FL. Motorola's address there is/was 8000 W. Sunrise Blvd. I worked in component engineering and qualified transistors, diodes and IC chips, as well as supporting incoming inspection. We were the test bed for six sigma. I worked for Motorola for 11 years. People like Keki Bhote taught one of my six sigma classes. Bill Smith told my boss's boss, Pete Peterson, what to have me do with the software.

>And if you had ever taken data for yourself,

I have taken and entered hundreds of thousands of data points since 1982.

>youwould know there is a difference in the long term and short term

Based on the difference between a calculation and an estimate on the same sample? Please, please explain it to me, because I just don't see it. You can adjust the sample size, or look at the entire sample vs. a subgroup, but sigma is sigma.

>and the size of that difference is very important for which path youtake tosolve a problem

And I am waiting for someone to show me mathematically how that could be possible. If one is staying in control, which I hope one is, then how does long term apply? You seem to have all the answers; please explain it instead of just shouting more of the same.

>Your academic mumbo jumbo serves no purpose – go take real data and tell us about it.

Personally, I think that you should be a little more civilized in your responses. I have a better idea: since you seem to have so much knowledge in the area, why don't you show me an example instead of just making accusations? I was just asking a question.

Jim Winings

http://www.sixsigmaspc.com

March 5, 2003 at 5:55 pm #83531

Jim Winings (Participant)

>…can argue the validity of the approach all you want…

And that is my point. How valid is it? Is it worth the confusion in the long term, or should we just call it Cpk and calculate sigma? Does it really cause more defects doing it that way?

>As for the shots at the auto industry in general …

You are correct. The point was not to take a 'shot', but I am tired of purchasing autos that have too many defects to talk about, and I don't think they are qualified to set standards, based on the number of defectives I personally have purchased. And I'm talking design defects, not manufacturing. Some, if not most, come directly from their suppliers, but someone somewhere approved those suppliers.

Jim Winings

http://www.sixsigmaspc.com

March 5, 2003 at 6:39 pm #83534

Attached is an explanation of the subject from Minitab. On a related note, Minitab has made a macro at my request to not calculate Cpk (based on Rbar) in capability analysis when using individuals (a rolling estimate of Rbar is typically used).

The idea of long/short term variation has always seemed alien to me. Hasn't the long-term process drift that Motorola experienced been referred to as "seasonal" variation or something like that (i.e. assignable cause)? Did they do this to say they had Cpk = 2 when they were really short due to this cyclical variation?

Another note of interest, I don’t think ASQ has any mention of long term and short term variation in its CQE BOK. I would think that short term would equal “machine capability” and long term would equal “process capability”, but that is too easy…

From Minitab:

Description

What do “within” and “overall” standard deviation mean in Capability

Analysis and Sixpack (Normal)?

Solution

Within and overall refer to different ways of estimating process variation. A within estimate, such as Rbar/d2, is based on variation within subgroups. The overall estimate is the overall standard deviation for the entire study. Cp and Cpk are listed under Potential (Within) Capability because they are calculated using the within estimate of variation. Pp and Ppk are listed under Overall Capability because they are calculated using the overall standard deviation of the study. The within variation corresponds to the inherent process variation defined in the Statistical Process Control (SPC) Reference Manual (Chrysler Corporation, Ford Motor Company, and General Motors Corporation; copyright by A.I.A.G.), while overall variation corresponds to the total process variation. Inherent process variation is due to common causes only. Total variation is due to both common and special causes. Cp and Cpk are called potential capability in MINITAB because they reflect the potential that could be attained if all special causes were eliminated.

March 5, 2003 at 8:20 pm #83535

Jim,

I’ll give you this. In a very well controlled process the drifts are small, but most do not have very well controlled processes.

I do not buy your claims about Shewhart and Cpk/Ppk. I think I have read Shewhart's work and don't recall any words about Cpk or Ppk. Please do give me the references for this.

I do know that if the difference between long term and short term is quantified, you know if you are dealing with a control issue (easy to fix) or a capability issue (harder to fix). Why in the world would you be against a tool that gives direction?

March 5, 2003 at 8:55 pm #83537

You former Motorola guys need to figure out where this really came from.

You are the newest entry. Mikel Harry claims to be the center of this universe. Mike Carnell claims this all came out of Motorola's Government group (seems pretty unlikely). There is a guy named Lupienski who gives speeches claiming this all started in Buffalo, and Motorola University has a book out that doesn't give credit to any of these folks.

What is the truth? (Where is our data?)

March 6, 2003 at 6:30 am #83544

>I'll give you this. In a very well controlled process the drifts are small, but most do not have very well controlled processes.

Based on the fact that six sigma gives us a 4.5 sigma design margin, 0.5 larger than Juran suggests in his book Quality Planning and Analysis, why would we spend money and time looking for small drifts? Is this really cost efficient? This is already accounted for with the added 0.5 sigma shift. If the process is not well controlled, what does the distribution tell you about it? (One reason why our charts look different than others, and why we were the first to plot a normal distribution curve based on the specification alongside the curve based on the actual data.)
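The arithmetic behind the 4.5 sigma margin mentioned above can be sketched in a few lines. This is a hedged illustration of the conventional 1.5-sigma-shift bookkeeping, not anything taken from Juran or the Motorola documents:

```python
def index_from_sigma_level(sigma_level, shift=0.0):
    """Capability index implied by a sigma level after an assumed mean shift."""
    return (sigma_level - shift) / 3.0

# A six sigma design with an assumed 1.5 sigma drift in the mean
# leaves the 4.5 sigma margin discussed in this thread.
short_term = index_from_sigma_level(6.0)            # no shift: 6/3 = 2.0
long_term = index_from_sigma_level(6.0, shift=1.5)  # 4.5/3 = 1.5
```

In other words, the 1.5 index often quoted for "long term" six sigma performance is just the 2.0 short-term index with the assumed 1.5 sigma shift subtracted first.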

>I do not buy your claims about Shewhart and Cpk / Ppk. I think I have read Shewharts work and don’t recall any words about Cpk or Ppk. Please do give me the reerences for this

My reference is listed in the first post from http://www.asq1212.org/NorEaster%200401/Noreaster%200401p1.html

Maybe I misinterpreted what was said. But as I am sure you well know, finding the source of some of this stuff is impossible. And there may be a good reason for that. But who actually decided the stuff about Ppk is really not as important as the issue of how important it really is.

>I do know that if the difference between long term and short term are quantified,

And I’m not convinced that ‘long term’ is applicable in six sigma.

>you know if you are dealing with a control issue (easy to fix) or a capability issue (harder to fix).

Sometimes these issues can be masked by other problems. This is where just some good old common sense comes into play. Of course, universities have not figured out how to teach common sense yet, and it is not a requirement for a Ph.D. I know some Ph.D.s who are among the smartest people I have ever met. Of course, I know some that are, huh, not.

>Why in the world would you be against a tool that gives direction?

I am not against any tool to aid in problem solving, as long as it can be proven, mathematically or by at least hundreds of case studies, that it indeed can aid in fixing a problem. Anything less may not be cost efficient, and that is contradictory to what I was taught six sigma is about. I don't take things as gospel just because someone says so. I want proof. And when I sit down and try to reckon and perhaps reverse engineer what they are saying and it doesn't make sense, as in the common type, I question it. Perhaps to the Nth degree. And this entire Ppk and Cpk thing is one of those issues.

Jim Winings

March 6, 2003 at 8:34 am #83547

>You former Motorola guys need to figure out where this really came from.

It didn't necessarily come from one place. Each division had its own unique problems to solve. And any given fix for one may not have been the best choice for another. I don't know; it was team work before they had teleconferencing. We used Motorola's old mainframe email system. That's a whole other story.

And indeed, we now have people saying you must do this, that, and the other to be six sigma. But this and that may not work for some processes and may actually cost more money to implement. Six sigma has become a set of rules, more like ISOxxxx, and that is NOT good. It used to be suggestions on how to do this, that, and the other, not gospel, and all were backed by common-sense ideas and methodologies that could be explained mathematically or based on hundreds of case studies. And going back that far, most of the actual case studies involved Japanese companies. (And looking at just the auto and electronics industries, maybe that should still be the case?)

>You are the newest entry.

Actually I've been around for a while. Or, as I like to say, in the beginning. No, not that beginning; I'm not that old. My proverbial 15 minutes came with writing the software to help Motorola's suppliers meet these new standards. No software was around to do so at the time. My software was the first ever, outside of Motorola, to place the design margin sticks on the distribution charts. I did not add to or change any of the methodologies. I just used them in Motorola to test the theories. After it looked like they might work, they sent me to minority suppliers to help them get on board with the software, because Motorola needed to ensure it kept its minority suppliers for government contract reasons, and Motorola had strict intentions that any supplier that could not meet a 4.5 sigma design margin was not going to be a Motorola supplier any longer.

>Mikell Harry claims to be the center of this universe.

Yea, go figure. Actually I never heard of Harry until I left Motorola. So I'm not sure if he was in the late Bill Smith's camp in Chicago, or, based on the fact that he is in Phoenix or somewhere around there, he may have come from the Semiconductor Products group. There were something like 25,000-50,000 employees at Motorola at the time. But I will say this: some of the first components I looked at from Semiconductor Products did not meet a 4.5 sigma design margin. So, being in component engineering where I qualified such devices, I rejected them for not meeting the minimum Cpk requirements (calculated sigma, by the way – GRIN!). I got a note back from the Semiconductor Products group saying they didn't have to meet the design margin. I guess since Motorola is Motorola's biggest supplier, they thought that politics would save them. So I showed my boss, Doug Dolbee, who showed Pete Peterson, and I reckon Peterson sent it to Bill Smith. About 10 days later, the Semiconductor Products group was ready to embrace six sigma with very open arms.

Funny, isn't it, how politics alone can make or break good and bad ideas.

>Mike Carnell claims this all came out of Motorola’s Government group

I never heard of him, and still haven't. To the best of my knowledge, the Government group did a lot of special projects, and because of this everything was already of a higher quality due to the fact that parts were hand-picked, etc. They were not in the initial testing of the six sigma methodologies, but I could be wrong. I know I never had any dealings with them, and I left Motorola in 1989.

>is a guy named Lupienski who gives speaches that looks like this all started in Buffalo

John I know. Or at least I have had phone and email conversations with him. The Buffalo plant manufactured the Radius radio, which was Motorola's first attempt at a cheap 2-way radio. Due to this, John was in the Communications Sector. (Note that it is well documented all over the web that six sigma started in the Communications Sector.) My boss told me to send a copy of the software to every facility in the Communications Sector. So I did. As I said, I had in-plant email, which most people did not have at that time. For 21 days straight I checked my email faithfully. It wasn't easy back then: I had to load a VT100 terminal emulator, call into a modem, tell the node which mainframe I wanted to access, etc. I wasn't getting any, so I didn't check my mail for 3 days. Son of a gun, wouldn't you know it, John sent me email asking a question about the software, and when he didn't get an answer, he called me on the phone. The first words he said to me were, "Is this Jim?" I said yea, and he said, "DON'T YOU EVER CHECK YOUR EMAIL?" I don't remember what the question was, but I remember his attitude. Of course, this may have been before he had his required six sigma training, the same basic training we all had. I'm not sure what happened to John after that.

We all had non-disclosures that we had to sign at Motorola that said 10 years was the limit. Some of these guys were a lot higher in Motorola than I was, and apparently they didn't wait the required 10 years before starting six sigma training, etc. I, on the other hand, did, due to being a smaller fish and just the ethics of the matter. I signed a paper that said I would wait 10 years, and I did. It won't make you rich, but perhaps a little respect is due because of it.

Jim Winings

March 6, 2003 at 1:55 pm #83552

If you are citing the Spengler article with the title "Theory and Practice of Six Sigma Black Belts" as proof that Shewhart developed Cpk, I think you had better re-read the article. The only sentence that seems to apply is the following:

"Typically Cpk is just a snapshot of the total process encompassing 125 data points (thank you Dr. Shewhart)…" If you check your basic texts on quality control and Xbar-R charts, the rule for minimum data is expressed in forms such as "…an Xbar chart with 25 samples of 5 each has a very high probability of discovering such variation." (p. 446, Quality Control and Industrial Statistics, Duncan). 25 x 5 = 125, so it would appear that the "thank you" is aimed at the sample size, not Cpk.

The earliest reference I can find to Cpk is Sullivan 1984 – Reducing Variability: A New Approach to Quality – Quality Progress. In that article he refers to Cp and Cpk as being indices used in Japan and recommends them for use elsewhere.

The various claims and counter-claims surrounding who fathered six sigma bring to mind the old saying: "Success has a thousand fathers; failure is an orphan."

March 6, 2003 at 2:07 pm #83555

Cpk came from Japan in the 70's at the latest. What does the "k" mean? Obviously nothing in English, but it stands for "katayori", which is shift or offset in Japanese.

See the previous thread on the topic of the origins of Cpk.

March 6, 2003 at 2:40 pm #83557

Jim Winings (Participant)

>as proof that Shewhart developed Cpk …

Not Cpk, Ppk.

>”Typically Cpk is just a snapshot of the total process encompassing 125 data points (thank you Dr. Shewhart)…..”

I thought the 'Thank You' was due to Ppk. I may be wrong about who said what and who invented what, but that is not the basis of the discussion. I've been looking for 3 years to find out where Ppk came from. And at first I thought Harry came up with it. Just the fact that I cannot find the genius who came up with it is a red flag in my book. If it is the proverbial cat's butt, then I would think that whoever came up with it would be proud to say so.

>…”an Xbar chart with 25 samples of 5 each has a very high probability of discovering such variation.” (pp.446 Quality Control and Industrial Statistic-Duncan)…..25 x 5 = 125…

Agreed; Juran says the same thing in his book Quality Planning and Analysis, 2nd edition, 1980. Motorola decided that 75 pieces would be enough. I don't know where they pulled that number from. That's what was in the A17/78 documents that were the supplier incoming inspection requirements and the documents that my software was married to.

Going back to my AT&T Statistical Quality Control Handbook, copyright 1957 Western Electric Company, I could not find a rule for sample size, but their examples show 100 samples. While this book is older than I am, it is where the WECO rules came from, so I keep it around, but I need to have it re-bound.

>The earliest reference I can find to Cpk is Sullivan 1984

Juran’s book before that, 1980, has Cpk in it. I’m looking for Ppk. And Juran does not say the sigma is an estimate.

>”Success has a thousand fathers-failure is an orphan”

I can’t argue with that! (GRIN)

Jim Winings

http://www.sixsigmaspc.com

March 6, 2003 at 2:48 pm #83558

Jim Winings (Participant)

Exactly! My Motorola handout indicates that k is

k = Process Shift / (Design Specification Width / 2)

This is why I have preached, and continue to preach, design margins. There is a bunch more stuff that goes with that. I'm not going to type the entire bloody thing in, but you can find it here:

http://www.sixsigmaspc.com/six_sigma.html
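As a worked illustration of the k ("katayori") correction in the handout formula quoted above, here is a small sketch. The spec limits and process numbers are hypothetical, and `cpk_min` is the familiar lesser-of-two-sides form Juran gives:

```python
# Illustrative numbers only, not from the Motorola handout.
USL, LSL = 11.0, 9.0
target = (USL + LSL) / 2           # midpoint of the specification
mean, sigma = 10.3, 0.25           # hypothetical off-center process

half_width = (USL - LSL) / 2       # design specification width / 2
k = abs(mean - target) / half_width   # fractional shift: 0.3 here
cp = (USL - LSL) / (6 * sigma)        # potential capability, ignores centering

cpk = cp * (1 - k)                    # Cpk via the k correction
cpk_min = min(USL - mean, mean - LSL) / (3 * sigma)  # lesser-of-two-sides form
```

The two forms agree whenever the mean lies inside the spec limits, which is why the handout's k notation and Juran's lesser-of definition describe the same index.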

Jim Winings

http://www.sixsigmaspc.com

March 6, 2003 at 7:09 pm #83565

I thought there was an iSixSigma policy against promoting products or services on the forum.

This looks pretty blatant to me.0March 6, 2003 at 7:55 pm #83570Jim,

Interesting contention that Ppk has not been needed since 1980. Ppk is the invention of Ford in the late eighties.

Shewhart had no opinion about Cpk or Ppk – this is not a language he ever used.

Juran attributes Cpk to Kane in 1986, but talks about inherent reproducibility or machine capability (both short term) and process capability (long term) a long time before that.

You may be interested that Juran Institute uses Cpk and Ppk in their Six Sigma training.

You will find that Ford started the second measure because people were using snapshots to state long-term capability and then could not perform.

I have looked at the product you are hyping in the link at the bottom of your posts. It looks to be useful, especially if it is inexpensive. Don't blow it with a rant like this where you clearly have not done your homework.

March 6, 2003 at 7:55 pm #83571

Jim Winings (Participant)

Are you talking about me?? If so, you put the message in the wrong place.

Where is the line between promoting products and pointing to resources? Most companies that have products on their web sites also have information that may be vital to any given topic. When one not only points to one's own web site, but to competitors' as well, I think that just becomes a conversation and not promotion. Perhaps I am wrong, but it may be a personal issue as far as how one views it. I read very few discussions that don't have a link to some web site. Even pointing to a university could be viewed as promoting that university.

The other issue is that I personally feel that if someone is trying to make any kind of intelligent point, it is important that the people reading the discussion can verify the source. Don't you think it is important to verify sources or resources? If verification is not done, then having an intelligent conversation would be difficult. Hearsay means nothing.

Can you point directly to what bothers you?

Jim Winings

http://www.sixsigmaspc.com

March 6, 2003 at 8:08 pm #83573

Jim Winings (Participant)

Stan:

>Shewhart had no opinion about Cpk or Ppk – this is not a language he ever used

Good. My respect for him just went up again. Thanks.

>Juran attributes Cpk to Kane in 1986, but talks about inherent reproducibility or machine capability (both short term) and process capability (long term) a long time before that.

I don't recall reading that (long term and short term) in my Planning and Analysis, but I could have missed it, even though I have read it cover to cover several times. I'm not as young as I used to be. Of course, who is? I still don't recall seeing anything in Juran's 1980 book about Ppk.

>You may be interested that Juran Institute uses Cpk and Ppk in their Six Sigma training

I will have to check out their site and see what is going on. If they are promoting Ppk, then I will have a few comments for them about their methodologies.

>You will find that Ford started the second measure because people were using snapshots to state long term capability and then could not perform.

The search is over. Do you have a source to confirm that? Why did they decide that Cpk should be the estimate and not Ppk? Any ideas? That just seems so asinine to me that I can't find words to describe it.

>I have looked at the product you are hyping on the link at the bottom of your posts.

Not my intent. I have spent a lot of hours answering posts and doing research on this topic before posting. More than the profit per unit. If I am doing this for promotional reasons, I will be out of business soon. Personally, I don't have a problem with anyone posting their URL or business name in any given post. I feel it adds credibility to the words in the post. Now, if I were giving the price and features and how to order, that would be a different issue. But I guess everyone has their own ideas and opinions. It's a basic human right.

Jim Winings

http://www.sixsigmaspc.com

March 6, 2003 at 8:34 pm #83574

Jim Winings:

Oops. Sorry Stan, you did have the correct message. Ever notice how the older you get, the longer arms you need to read the numbers on your cell phone, and how you need a bigger monitor?

March 7, 2003 at 3:09 pm #83586

Jim,

I have read your postings on this topic with great interest.

I agree with you that you can conduct a capability study and calculate the Cp and Cpk for that study. Perhaps I can explain the use of the other notations associated with a capability study.

The use of the different notations comes from Ford Motor Company, specifically the transmission division. Victor Kane and I started working in Ford's transmission division in Livonia in 1984. At that time, Vic was working on an existing transmission and I was working on a new transmission about to be launched. For the new transmission, there were three major capability studies to be conducted. The first study was conducted at the vendor's site, usually on a machine. This is frequently referred to as a machine capability study. Once the machine passed this study, it was shipped and installed at the Livonia transmission plant. After installation, a second capability study was conducted to define the capability at the plant. Once production began, a third capability study was conducted to understand the variation associated with day-to-day production. It became clear fairly quickly that there was a need to distinguish between the three types of capability studies. Several groups of people agreed that the following notation would be used:

Cpt (Cpkt) for the machine tryout capability study

Cpp (Cpkp) for the second capability assessment, after installation but prior to production. This was called the machine potential study, hence the use of the p for potential

CpL (CpkL) for the production or long-term capability study

This provided a relatively easy way to understand the performance of the equipment across the various stages of usage. The indices notation helped to distinguish relative performance on a complex product (8000 characteristics) across numerous processes.

As this approach spread across Ford and was later required of the supply chain, the machine tryout was essentially dropped. With the two remaining studies, the notation was simplified: the machine potential study became Pp (Ppk), distinguished from the production capability Cp (Cpk). Of course, various companies and consultants have put their own spin on these indices. This has resulted in the general confusion associated with capability assessment.

Eileen Beachell, Quality Disciplines

March 7, 2003 at 4:01 pm #83589

Jim Winings:

Eileen;

>Cpt (Cpkt) for the machine tryout capability study

>Cpp (Cpkp) for the second capability assessment after installation but prior to production. This was called the machine potential study hence the use of the p for potential

>CpL (CpkL) for the production or long-term capability study

This seems very reasonable, except for the long-term part. If one keeps a process in control, how could there be a long-term effect of any kind? Or does long-term just indicate "in production," not some long-term potential for errors in manufacturing? And was sigma always calculated for Cp/Cpk? Did you proverbial guys estimate it or calculate it for the above measurements?

>As this approach spread across Ford and later was required of the supply chain

And I agree that each company has the right to request whatever they feel they need from their suppliers to meet quality as well as production requirements.

>it was simplified to distinguish the machine potential study as Pp (Ppk) from the production capability of Cp (Cpk).

But did Ford make the decision that Cpk et al. were to be estimated and Ppk et al. were to be calculated, even though for years Cpk was a calculated affair?

>Of course, various companies and consultants have put their own spin on these indices

Yeah, well, as I like to say, the patients are in charge of the asylum.

>This has resulted in the general confusion associated with capability assessment.

That would be an understatement. Too bad you didn't know then what you know now, huh??? (GRIN)

But I'm still trying to figure out what genius decided to change a standard and make the original standard prone to errors by estimating sigma. Don't you agree this is asinine? As well as how there is a difference. I understand how they are using it; I just can't justify it personally. Mathematically it doesn't make sense. Not to mention too many rules contradict what I was taught six sigma is and should be about.

Thanks for your response. '84, huh? That was about the same time I burnt up $250,000.00 worth of Motorola HC6811 processors in ALT, because me and my big brain decided that instead of just biasing them we needed to clock them as well. But I forgot to adjust the ALT chamber temperatures to allow for the extra internal heating. Of course, I was just a young pup back then. Melted the lids right off the chip carriers. What a mess that was for me to clean up. I had green stuff all over the place. I hate when that happens. But I still feel the concept is right.

One rhetorical question (+/-2): Why hasn't anyone challenged this to the Nth degree until now? Or have they, and I don't know about it? Why have some just taken it as gospel?

Jim Winings

http://www.sixsigmaspc.com

March 7, 2003 at 4:28 pm #83590

Jim Winings:

Oops, HC6811 should be 68HC11. I had a gaseous brain discharge.

March 7, 2003 at 4:55 pm #83591

Jim,

Again, I think much of this has been abused, and the method of statistical analysis has been twisted. The so-called long-term study was intended to study variation as it happened in production. A simple example is a lathe. During production runs you would experience tool variation, more raw material (bar stock) variation, as well as maintenance and general entropy. We could not assess these sources of variation in a pre-production capability study.

Both the machine tryout and the machine potential studies were lucky to have 20-30 parts. In most cases, these were taken in consecutive order, so no sampling or subgrouping was possible. There was no way to take subgroups and execute a sampling plan. We were stuck. However, for companies with processes that can spit out a lot of parts and subgroup components, it does make sense that they should use the appropriate analysis. With transmission components, that was not possible.

Whether you calculate the population sigma from a sample of 30 consecutive parts or from a subgroup on a control chart using R-bar, both are estimates. For the production capability, control charts were used on the critical characteristics to assure stability over time, and it was easy (and more appropriate) to use R-bar to estimate the process variation.
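Eileen's point that both routes are estimates of the same sigma can be sketched numerically. This is a hypothetical example with invented data, not her transmission measurements; the d2 value for n = 5 is the standard table constant.

```python
import random
import statistics

# Hypothetical subgrouped data: 6 subgroups of 5 parts from a stable
# process with true sigma = 0.9. Both routes below estimate that same
# sigma, as the post says.
random.seed(1)
TRUE_SIGMA = 0.9
D2_N5 = 2.326  # d2 table constant for subgroup size n = 5

subgroups = [[random.gauss(10.0, TRUE_SIGMA) for _ in range(5)]
             for _ in range(6)]

# Route 1: sample standard deviation of all 30 parts pooled together
all_parts = [x for sg in subgroups for x in sg]
s_overall = statistics.stdev(all_parts)

# Route 2: average subgroup range divided by d2
r_bar = statistics.mean(max(sg) - min(sg) for sg in subgroups)
sigma_rbar = r_bar / D2_N5

print(f"S = {s_overall:.3f}, R-bar/d2 = {sigma_rbar:.3f}")
```

With only 30 parts, both numbers carry noticeable sampling error, which is part of Jim's complaint about treating either one as "the" answer.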

It is very interesting to see how other companies have struggled with this. In some companies and applications, it is not the best fit. The Livonia transmission plant did not make defects on the new transmission. All the processes were capable. The only issue was a very small reject rate at the test stands (less than .05%) due to tolerance stack-ups. There was no focus on defect reduction because there weren't any defects in the production process. The focus was to continue to reduce variation and improve overall performance by achieving a product on target.

Eileen Beachell, Quality Disciplines

March 7, 2003 at 7:31 pm #83595

Jim,

Go simulate a .5 sigma shift in your control charts and see how long it takes, on average, to detect. It takes a load of samples.

March 7, 2003 at 7:33 pm #83596

A failure for tolerance stack-up is not a defect? I disagree.
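The 0.5-sigma-shift experiment suggested above is easy to run. This is a sketch with assumed settings (subgroups of n = 5, true sigma = 1, and only the +/-3-sigma-limit rule, no run rules): shift the mean by 0.5 sigma and count subgroups until the Xbar chart flags it.

```python
import random
import statistics

# Assumed chart: subgroups of n = 5, process sigma = 1, detection only
# when a subgroup average falls outside the +/-3-sigma control limits.
random.seed(2)
N = 5
SIGMA_XBAR = 1.0 / N ** 0.5  # standard deviation of the subgroup average

def subgroups_to_detect(shift):
    """Draw subgroups until one Xbar falls outside the control limits."""
    count = 0
    while True:
        count += 1
        xbar = statistics.mean(random.gauss(shift, 1.0) for _ in range(N))
        if abs(xbar) > 3 * SIGMA_XBAR:
            return count

runs = [subgroups_to_detect(0.5) for _ in range(2000)]
print(f"average subgroups to flag a 0.5-sigma shift: {statistics.mean(runs):.1f}")
```

On these assumptions the average run length comes out in the tens of subgroups, which is the poster's point: a small shift takes a load of samples to see.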

March 7, 2003 at 7:46 pm #83598

Chris Seider:

I have enjoyed the postings on this topic. I think it sets the record for passion, history, pot shots, etc. on the Six Sigma forum.

You ask what good discussing Cpk vs Ppk does. I find it a good point when interviewing candidates who say they are Six Sigma trained. If they are trained in the more classical style (I've defined it as Motorola's, as adapted by GE, Honeywell, and others), they don't give me the dumbfounded look OR a line of "bull."

One interesting point: I find it interesting that many companies are adding these additional tools to the define phase (e.g. SIPOC). This is an indication that leadership doesn't know how to define projects to begin with and just throws their hands up in the air and says "BB, MBB, or GB, fix it." I've been giving advice to Six Sigma training candidates in other companies who come to me for help, and I'm amazed at the low threshold it takes to get certified by some organizations. I saw one project recently that had a predetermined solution; the solution didn't work, yet the candidate got certified. The candidate asked whether I would have certified them, and I kindly told the person "no," since the project was a poor training project and showed no improvement. I also said I wouldn't have allowed him to train on that type of project in the first place. He had an understanding of the tools, but in the cultures I best relate to, success is driven by results, not by learning tools.

I've pontificated myself and am wondering if I'll get a new string of responses on this heavy subject.

March 7, 2003 at 8:00 pm #83599

Jim Winings:

Eileen;

>The so-called long-term study was intended to study variation as it happened in production.

>We could not assess these sources of variation in a pre-production capability study.

Perhaps a question is: did you need to assess these variations? See below for what I am referring to.

Yes, I understand that. I think my AT&T Statistical Quality Control Handbook, (c) 1957 Western Electric Company, explains it best. In the setup, the calculated Std. Dev. is .86, whereas the estimated Std. Dev. is .68, based on the example data.

and I quote

In the case where the R chart is in control but the X-Bar chart is out of control, the estimate of the sigma(universe) which is obtained from the R chart will be a better estimate of the standard deviation of the underlying universe than the value obtained by calculating the root mean square deviation.

R-bar for the example they show, based on subgroups of 5 samples, was 1.59. The d2 factor for n = 5 is 2.326, for an estimate of .68.

This is a truer estimate of the standard deviation of the underlying process than the value of .86 which was calculated. This is because the distribution shifted its center during the period when the measurements were obtained, and the shift in center has inflated the estimate arrived at on page 130 i.e. .86

end quote

But the center shifted; it was out of control. I would have to assume that you were in control and centered prior to establishing the capability? Literally 4 pages before that statement they talked about statistical tolerancing.
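The handbook's point can be reproduced with simulated data. This is a sketch, not the page-130 example: invented subgroups with a deliberate 2-sigma shift of the center partway through the study.

```python
import random
import statistics

# Within-subgroup sigma of 0.68 throughout, but the center shifts by
# 2 sigma after the first 10 subgroups. The root-mean-square calculation
# over all parts is inflated by the shift; R-bar/d2 is not.
random.seed(3)
D2_N5 = 2.326
TRUE_SIGMA = 0.68

subgroups = (
    [[random.gauss(0.00, TRUE_SIGMA) for _ in range(5)] for _ in range(10)]
    + [[random.gauss(1.36, TRUE_SIGMA) for _ in range(5)] for _ in range(10)]
)

all_parts = [x for sg in subgroups for x in sg]
s_calculated = statistics.stdev(all_parts)            # inflated by the shift
r_bar = statistics.mean(max(sg) - min(sg) for sg in subgroups)
s_estimated = r_bar / D2_N5                           # tracks within variation

print(f"calculated: {s_calculated:.2f}, estimated (R-bar/d2): {s_estimated:.2f}")
```

The calculated value comes out well above the estimated one, exactly the inflation the Western Electric quote describes.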

Juran's book Quality Planning and Analysis, 2nd edition, 1980, indicates that due to a potential process shift, 3 sigma is not good enough; he suggests going to 4 sigma. All Motorola did was add .5 sigma to that theory to get the 3.4 ppm number. Now, I don't know whether the 1st edition of Juran's book, apparently printed in 1970, also had the process shift theory. But we know it existed by 1980.

And this is another of my pet peeves. I preach design margin (of course, that and 50 cents will get you a cup of coffee in SOME establishments). However, by using statistical tolerancing in combination with design margin, i.e. a worst-case process shift, can one not calculate the probability of manufacturing errors for a process that is in control, no matter how many part steps or parts there may be, assuming that a worst-case shift would not be greater than +/-3 sigma of a normal distribution? (Ever notice that there are too many assumptions in statistics?) All of this would be done in the design phase, not the manufacturing phase, because you cannot inspect quality into any product. Once you have these numbers, you set up your machine and try to keep it in control; if it drifts slightly, the margin of error you established with statistical tolerancing should keep you from ever running defects.
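The statistical-tolerancing-plus-margin idea above can be sketched with a hypothetical stack (all numbers invented for illustration): independent dimensions add in quadrature, and the remaining margin is checked against an assumed worst-case drift of the center.

```python
import math

# Hypothetical four-dimension stack; per-dimension sigmas are assumed.
part_sigmas = [0.010, 0.015, 0.008, 0.012]   # mm
stack_sigma = math.sqrt(sum(s * s for s in part_sigmas))  # root-sum-square

assembly_tol = 0.200   # mm, assumed +/- assembly tolerance
# Allow the stack center an assumed worst-case drift of 1.5 stack-sigmas
# and still require 3 sigma of room inside the tolerance.
margin = assembly_tol - (1.5 + 3.0) * stack_sigma

print(f"stack sigma = {stack_sigma:.4f} mm, remaining margin = {margin:.4f} mm")
```

If the margin stays positive under the worst-case drift, a slightly drifting but in-control process should not produce defects, which is the design-phase argument Jim is making.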

OK, I think I'm starting to confuse myself. I don't think I have explained myself very well; sometimes I have trouble putting into words what I can see in my mind. (Oh no, wait, that's a '70s thing, never mind.)

As a matter of fact, isn't that six sigma? However, I never hear anyone talking about statistical tolerancing when referring to six sigma. Doing a search on isixsigma for "statistical tolerance" yielded no results. (Of course it will now, after Google spiders this site and updates its databases.)

Heck, maybe I'm not making any sense at all; I've been up for 16 hours straight now. My point is: could you not have used these theories, which I didn't know about until about 1987, to also obtain a capability for the process in reverse, so to speak?

But we have kind of gotten away from the original topic, which was who changed the standard from Cpk to Ppk. But perhaps it's good reading to think about, at least.

(Man, that took a while to research and type up.)

Jim Winings

March 7, 2003 at 8:46 pm #83600

Chris,

You have made my day with your comment about training. Keep up the good work.

March 7, 2003 at 9:27 pm #83601

Jim Winings:

>I have enjoyed the postings on this topic. I think it sets the record for passion, history, pot shots, etc. on the Six Sigma forum.

You forgot bullheadedness! (GRIN)

To me, six sigma has become something more akin to QSxxx or ISOxxx than what I was originally taught. And there is probably a reason for that: too many hands in the pot, all with a better wheel. (I couldn't think of any more clichés to add there.)

March 7, 2003 at 10:02 pm #83606

Chris Seider:

Is this Mr. Cone? If so, please contact me at udubseider@earthlink.net since I've lost your contact info. Even if this isn't who I think, I'm glad I was able to make you smile.

March 7, 2003 at 10:04 pm #83607

Chris Seider:

I forgot to ask: whatever prompted you, "Jim," to start this string of comments?

March 7, 2003 at 10:31 pm #83608

Jim Winings:

Ah, annoyance??? (GRIN)

I've been looking for the bloody person(s) who is trying to change a standard and make me rewrite my software. (Grrrrrrr) Not that I would. I want to disprove their theories, set some things straight, and cause my competitors to rewrite their software (they have already stolen ideas from me). Or be proven wrong and learn something. I've been on both ends of that stick before.

After looking at several issues, I'm convinced it is silly to preach that one MUST be accurate in measurements to do statistics, even if it costs a company far more money to achieve than it may be worth in terms of RONI or RONA, and then to screw up that accuracy with estimates. No one has proven to me mathematically that estimates are worth more than actual calculations when based on a normal distribution, which is what six sigma is based on. I wonder what flavor six sigma would take on if estimates were outlawed and only calculations were allowed. Would quality get better, worse, or stay the same? Also, one would know that the experts knew how to do square roots. And the list goes on and on and on, kind of like me.

March 7, 2003 at 10:55 pm #83609

Jim,

I just read the articles on your web site, and you are out of touch with Pp and Ppk the way the Six Sigma community is using them. Your Pp is the automotive industry's Cp and vice versa. The metric you are at odds with (or maybe don't understand) is the automotive industry's Cp.

Go read the definitions in AIAG’s SPC manual.

Minitab is set up to duplicate these definitions, except that Minitab defaults to s-bar for the short term. R-bar/d2 can be chosen if you want.

March 7, 2003 at 11:22 pm #83611

Jim Winings:

Regardless of how one is using Ppk/Cpk et al., my point is still the same. Ppk adds no value and just makes things more confusing than it is worth. Six Sigma should be about making things easier, not more confusing.

March 8, 2003 at 4:29 am #83613

Jim Winings:

Eileen;

After thinking about this post for a while, I realized I had heard this story before. Looking through my Design for Manufacturability course material, Motorola 1986 Issue 4, I came across a section in the back with several articles from the Journal of Quality Technology, Vol. 18, No. 1, and lo and behold, I found one from January 1986 called "Process Capability Indices" by Victor E. Kane, Ford Motor Company Transmission and Chassis Division, PO Box 2097 (7), Livonia MI 48150. Sound familiar? The funny thing is that Dr. Kane also quotes Juran/Gryna (we never seem to give credit to Gryna for some reason), Quality Planning and Analysis, 2nd Edition, just as I have. Of course, the reason for this may be that what was then ASQC used it as their bible. I assume you have read it.

I just thought it was funny, and I don't mean ha-ha. (GRIN)

March 8, 2003 at 4:34 am #83614

Jim Winings:

I wish there was some way to edit these posts!!!!! See what happens when one has been up for over 24 hours.

I wish there was some way to edit these posts!!!!! See what happens when one has been up for over 24 hours.

I wish there was some way to edit these posts!!!!! See what happens when one has been up for over 24 hours.

ZZZZ zzzz ZZZZ zzzz ZZZZ zzzz ZZZZ zzzz ZZZZ zzzz ZZZZ zzzz

March 8, 2003 at 2:22 pm #83615

Jim,

Thanks for your postings. I think you asked some really good questions in spite of your lack of sleep. Fundamentally, I think we are in agreement. Although statistics is useful as an aid to judgement, it is not a substitute for good engineering knowledge. Of course, we could use statistical models, make a boatload of assumptions, and perhaps better estimate the tolerances. Unfortunately, for transmission manufacturing there are 8000 characteristics, with about 1500 being critical. The process capabilities vary from marginally capable (1.33 - 1.66) to highly capable (5-8). These are not set in stone; they do vary somewhat with time. It was very important to remove unwanted sources of variation in the production process. Of course, there is the matter of economics, and at some point you would have to stop. Some processes were fine at a Cpk of 2; others really needed to be higher. All processes do drift, and the margin of error does continue to protect the product. Even with the 4 sigma or 4.5 sigma, because of the number of interfacing components, it needed to be higher on some of them. In addition, a lot of the processes are nonnormal, which adds its own complexity.

You are right – the processes were centered and stable prior to the calculation for capability – including the machine tryout and the potential study.

I believe Cpk was changed to Ppk within Ford simply to designate the two different studies. This was most likely done by a committee at some point; my money is on the Supplier Quality Assurance (SQA) group at Ford. I don't think identifying a single person (even if you could) would shed any more light on this issue.

Again, thanks for your comments and perspective on Ppk.

Eileen, Quality Disciplines

March 8, 2003 at 5:39 pm #83616

Gabriel:

Jim, I'm not sure what your point is about Cpk/Ppk and estimates/calculations. I hope you will tell me after reading this long post.

But I will share my view on this subject, which is based on AIAG's SPC manual and my own working experience and use. That does not mean that either I or AIAG is right. It goes long, so sit down and relax. After reading it, I would like to know your opinion.

I will use the Xbar-R charts, but could be extended to other charts.

First, some definitions. For them, we will think of a process as something that delivers individuals that are measured. The process is not limited to the period of time of the study, so the process individuals include those not delivered yet (or delivered before).

Stable process: A process running without variation due to special causes. Such a process delivers the same distribution over time.

n = subgroup size, m = number of subgroups in a sample (or in a chart), N = n×m = total sample size (the number of parts in all subgroups in the study).

X = measurement result of any individual of the process. If the measurement variation is low, it can be taken as the "true value" of that individual (we will assume that).

Xbar=the average of the Xs in one subgroup.

R=max(X1,X2,…,Xn)-min(X1,X2,…,Xn) in one subgroup.

Mu=Mu(X)=Mu(Xbar)=true average of the process distribution*

Mu(R)=The true average of the subgroup ranges distribution.*

Sigma=Sigma(X)=true standard deviation of the process distribution*

* These values are defined only for a stable process, because an unstable process cannot be characterized by ONE distribution unless the instability behavior is predictable. We will omit that last case. Note that these true values are unknowable because some of the individuals (and some subgroups) have not been delivered by the process yet.

Xbarbar = average of Xbar1, Xbar2, …, Xbarm = average of X1, X2, …, XN.

Rbar=Average of R1,R2,…,Rm.

S=S(X)=whole sample standard deviation of the X1,X2,…,XN individuals.

^=Hat=Estimation. For example. Mu^ is an estimation of Mu.

Within variation: Process variation due to common causes only. Characterized by Sigma(w)*

Total variation: Variation due to common and special causes. Characterized by Sigma(t)*

Cp=process CAPABILITY index=ratio between the tolerance range and the process within variation=T/6Sigma(w)*

Pp = process PERFORMANCE index = ratio between the tolerance range and the process total variation = T/6Sigma(t).*

*(T = tolerance range). These 4 definitions are from AIAG's SPC manual, slightly modified. The same conclusions derived below for Cp/Pp can be derived for Cpk/Ppk.

d2 = Mu(R)/Sigma in any normal distribution. It is a function of n. We will assume that the process, when stable, is normally distributed. Note, however, that a process that is not normally distributed violates this; then Sigma will NOT be Mu(R)/d2, and the error involved in using this should be analyzed.
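The d2 relationship just defined is easy to check by simulation (a quick sketch; 2.326 is the table value for n = 5):

```python
import random
import statistics

# Draw many subgroups of n = 5 from a standard normal distribution and
# average the subgroup ranges. With sigma = 1, Mu(R)/Sigma is just the
# mean range, which should land near the d2 table value of 2.326.
random.seed(4)
ranges = []
for _ in range(20000):
    sg = [random.gauss(0.0, 1.0) for _ in range(5)]
    ranges.append(max(sg) - min(sg))

d2_hat = statistics.mean(ranges)
print(f"simulated Mu(R)/Sigma for n = 5: {d2_hat:.3f} (table value 2.326)")
```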

MY VIEW

Because "variation due to special causes" cannot be negative, Sigma(t) >= Sigma(w), and then Pp <= Cp and Ppk <= Cpk.

STABLE PROCESS:

By definition, if the process is stable then variation due to special causes = 0, so within variation = total variation = process variation: Sigma(w) = Sigma(t) = Sigma = Mu(R)/d2. Being Sigma(w) = Sigma(t) = Sigma, then Cp = Pp = T/6Sigma.

However, none of these values can be known, but they can be estimated in these two ways:

Way 1:

Sigma(w)^=Sigma(t)^=Sigma^=S

Cp^=Pp^=T/6S

Way 2:

Sigma(w)^=Sigma(t)^=Sigma^=Mu(R)^/d2, and we can estimate Mu(R)^=Rbar, so we get Sigma(w)^=Sigma(t)^=Sigma^=Rbar/d2

Cp^=Pp^=T/6(Rbar/d2)

Note that both S and Rbar/d2 are just two estimations of the same parameter, Sigma (the process standard deviation). Because of random error, either of these estimators can be greater or smaller than the actual value Sigma. Also, in one sample of m subgroups of n, either one of these estimators can be greater than the other. If you make several studies over time, always on the same stable process, and plot both estimations of Sigma against time, you will get two curves affected by random variation (like an Xbar plot) that cross each other several times and stay close to the same horizontal line, which is the process Sigma. The same holds if you plot both estimations of Cpk = Ppk.

In this condition, any distinction between within and total variation, or Sigma(w) and Sigma(t), or Cp/Cpk and Pp/Ppk, is just stupid. It is all the same thing.

However, for a reason we will see next, we use "Way 2" to estimate within variation and "Way 1" to estimate total variation, so:

Sigma(w)^=Rbar/d2, Cp^=T/6(Rbar/d2)

Sigma(t)^=S, Pp^=T/6S

BUT, IS THE PROCESS STABLE?

No, it's not. No real process is absolutely stable. Western Electric Co. said that perfect stability cannot be reached, and that if it were, there would be no point in charting the process; the aim is to get a stable enough process.

However, some "small" instabilities introduce "small" variations that are not detectable. For example, imagine that the average actually shifts 0.1 sigma due to a real special cause. How would you detect that? In SPC we say that the process is "in control" when the charts do not show "out-of-control" (OOC) signals. In fact, the best we can say in such a case is "I do not have enough evidence to suspect that the process is unstable (or affected by a special cause)." Sometimes we simplify and say "The process is stable (or free from special causes)," which is OK from a practical perspective but is not strictly correct.

So, with "true" stability impossible to reach, and impossible to prove even if it were reached, what do we do with Sigma(w) and Cp/Cpk?

Imagine that you have a process that is stable within periods of time, but from period to period the average shifts due to a special cause. If you stay within one period, the process delivers the same distribution over time. If you compare one period with another, both have the same distribution shape, except that the position is shifted. But if you take several periods together, you see that the individuals belong to a wider distribution because of the shifts of the average. That means the process variation within a period is the same in every period, but the variation across all periods together is greater. The first is the variation due to common causes only; the second is the variation due to common and special causes. If you could eliminate the special cause that makes the process average shift, you would get a total variation equal to the variation within any period of stability (that's obvious, of course; now it is all stability).

Imagine that we have an Xbar-R chart where every subgroup is made of individuals belonging to the same period of stability, and different subgroups can belong either to the same or to different periods of stability. How do we calculate the variation within the stability periods (which was assumed to be the same in every period)? That would be Sigma(w). We can't calculate it, but we can estimate it.

As we saw, d2 = Mu(R)/Sigma for any normal distribution (remember we assumed that, when stable, the process is normally distributed). We just said that within any period of stability, Sigma(w) is the same. Because d2 is a constant, that means that within any period Mu(R) must also be the same. That means Sigma = Mu(R)/d2 will be the standard deviation within any one of the stability periods, i.e. Sigma(w) = Mu(R)/d2. We don't know Mu(R), but if we can estimate it, we can estimate Sigma(w) as Sigma(w)^ = Mu(R)^/d2. If we knew which subgroups belong to which stability period, we could average the R values for each stability period and get an Rbar for each. Then each Rbar could be used as an estimation of Mu(R) in its period. Because we know that Mu(R) is the same in all periods, we could average all the estimations of Mu(R) to get a better one; that would be averaging the Rbars of all periods. We don't know which subgroup belongs to which period, but an average of averages is the same as averaging the whole original data without the intermediate step. So we can take the overall Rbar of all subgroups (regardless of the periods) as an estimation of Mu(R). Once we make Mu(R)^ = Rbar, we can make Sigma(w)^ = Rbar/d2. That's why we used "Way 2" for Sigma(w) and Cp.

Now, if you want to know how well the process performed, Sigma(w) is not a good indicator, because the process had more variation than that "due to common causes only." For that we need Sigma(t). Sigma(t) should be calculated as the population standard deviation, but it can be estimated as the standard deviation of a sample. The sample could be, for example, all the individuals in the subgroups in our chart. Then we get Sigma(t)^ = S. That's why we used "Way 1" for Sigma(t) and Pp. Of course, you can take another sample from the batch and calculate S; it does not need to be the same sample used for the subgroups.
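The two estimates can be sketched on a simulated process of the kind described above: stable within periods, with the average shifting between periods (all numbers are assumed for illustration).

```python
import random
import statistics

# Four stability periods with different centers (the special cause) and
# the same within-period sigma of 1.0. Each subgroup of 5 stays inside
# one period. Rbar/d2 recovers the within variation; S over all parts
# also absorbs the between-period shifts, so Sigma(t)^ > Sigma(w)^.
random.seed(5)
D2_N5 = 2.326
WITHIN_SIGMA = 1.0
period_means = [10.0, 11.5, 9.0, 10.8]

subgroups = [[random.gauss(mu, WITHIN_SIGMA) for _ in range(5)]
             for mu in period_means for _ in range(6)]   # 6 subgroups/period

r_bar = statistics.mean(max(sg) - min(sg) for sg in subgroups)
sigma_w_hat = r_bar / D2_N5
sigma_t_hat = statistics.stdev(x for sg in subgroups for x in sg)

print(f"Sigma(w)^ = {sigma_w_hat:.2f}, Sigma(t)^ = {sigma_t_hat:.2f}")
```

The rational subgroups keep Rbar/d2 blind to the between-period shifts, which is exactly the derivation above.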

Wait a minute! What if the distribution of the whole batch, mixing all the stability periods (thus including the variation due to common and special causes), happens to be normal? Then we could apply the d2 definition again and say that an estimation of Sigma(t) can be Sigma(t)^ = Rbar/d2. That would make Sigma(t)^ = Sigma(w)^, which is absurd, because we know that, since there IS variation due to special causes, Sigma(t) > Sigma(w).

The contradiction arises because the individuals in the subgroups are not really a random sample of the whole batch. We stated that all the individuals within any subgroup belong to the same stability period, so they belong to a stable distribution that is NOT the distribution of the whole batch. Then the R of each subgroup is NOT representative of the variation in the whole batch.

If you want, you can take the whole batch and, at random, place parts into subgroups without any rationality, meaning that any individual of the batch has the same probability of being chosen for any subgroup. Now you can calculate the R of the subgroups and get Rbar (which will be greater than the Rbar of the rational subgroups used to calculate Sigma(w)^), and make an estimation of Sigma(t) as Sigma(t)^ = Rbar/d2. That would be a perfectly valid estimation of Sigma(t), just as Sigma(t)^ = S is. Note that if you plot the data from those subgroups in an Xbar-R chart, it will tell you nothing about stability, because the horizontal axis is no longer "time."

CAPABILITY VS PERFORMANCE

In a process CAPABILITY study we estimate Cp/Cpk. It is about the FUTURE. What the process CAN DO. It tells you how the process PERFORMS when it is stable. It is useful to predict what the process will deliver. Indeed, when the process is stable it delivers the same distribution over time, so it is possible to predict that the same distribution will be delivered later. Note that if the process is not stable, then the distribution is changing and you do not know which distribution will be delivered next time. That, together with the fact that Rbar/d2 is an estimation of Sigma only if the process is stable, is why stability is a prerequisite to calculate Cp/Cpk.

In a process PERFORMANCE study we estimate Pp/Ppk. It is about HISTORY. What the process DID. How the process PERFORMED during the time covered by the study. Stability is not a requisite. It would be stupid to require stability to calculate Pp, because the “variation due to special causes” is part of its definition. It can not be used for prediction of future performance, because stability is not a requisite.

Now, if you used Pp/Ppk AND the process is stable, you can make predictions. After all, we’ve already seen that in that condition Cp=Pp and Cpk=Ppk.
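For concreteness, the four indices can be lined up in code. This is my own sketch of the standard formulas (Cp/Cpk from the within sigma Rbar/d2, Pp/Ppk from the overall sample standard deviation S); the subgroup size of 5 and its d2 = 2.326 are assumptions taken from the usual SPC tables.

```python
import math

def capability_indices(subgroups, lsl, usl):
    """Estimate Cp/Cpk (within sigma, Rbar/d2) and Pp/Ppk (overall sigma, S).

    Assumes subgroups of size 5; d2 = 2.326 from the standard SPC tables.
    """
    d2 = 2.326
    rbar = sum(max(g) - min(g) for g in subgroups) / len(subgroups)
    sigma_w = rbar / d2  # variation due to common causes only

    data = [x for g in subgroups for x in g]
    mean = sum(data) / len(data)
    sigma_t = math.sqrt(sum((x - mean) ** 2 for x in data) / (len(data) - 1))

    tol = usl - lsl
    cp = tol / (6 * sigma_w)
    cpk = min(usl - mean, mean - lsl) / (3 * sigma_w)
    pp = tol / (6 * sigma_t)
    ppk = min(usl - mean, mean - lsl) / (3 * sigma_t)
    return cp, cpk, pp, ppk
```

For a stable process the two sigma estimates converge, so Cp is about equal to Pp and Cpk is about equal to Ppk, which is the point made above.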

I GOT A Cp SMALLER THAN Pp, HOW IS IT POSSIBLE?

It is not. Sigma(t) can’t be smaller than Sigma(w), so Pp can’t be greater than Cp.

Sometimes we write Cp=T/6(Rbar/d2) and Pp=T/6S. But that’s a simplification. These are not Cp and Pp, but their estimators Cp^ and Pp^. Those estimators can be, at random, slightly above or below the actual value that they are estimating.

If the process is stable then Cp=Pp. If the process is slightly unstable (the variation due to special causes is small) then Cp will be only a little larger than Pp. In these conditions, the estimators may “cross” and you can get an estimation Cp^ a little smaller than the estimation Pp^. But that does not mean that the actual Cp is less than the actual Pp. That is not possible.

On the other hand, when a process is clearly unstable, Cp will be large enough compared to Pp to assure that Cp^ will be larger than Pp^.
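The “crossing” of the estimators can be shown with a simulation (a sketch of my own, not from any manual): for a perfectly stable simulated process the true Cp and true Pp are identical, yet the estimates Cp^ and Pp^ land on either side of each other from trial to trial, purely by random error.

```python
import math
import random

random.seed(0)
d2 = 2.326  # bias-correction constant for subgroup size 5

trials = 200
crossings = 0
for _ in range(trials):
    # A perfectly stable process: every subgroup drawn from the same N(0, 1)
    groups = [[random.gauss(0, 1) for _ in range(5)] for _ in range(25)]

    rbar = sum(max(g) - min(g) for g in groups) / len(groups)
    sigma_w_hat = rbar / d2  # basis of Cp^

    data = [x for g in groups for x in g]
    m = sum(data) / len(data)
    s = math.sqrt(sum((x - m) ** 2 for x in data) / (len(data) - 1))  # basis of Pp^

    # Same tolerance in both formulas, so Cp^ < Pp^ exactly when
    # the within-sigma estimate exceeds the overall-sigma estimate.
    if sigma_w_hat > s:
        crossings += 1

print(crossings, "of", trials, "stable trials had Cp^ < Pp^")
```

Even though Cp equals Pp here by construction, a sizeable fraction of trials shows the estimates crossed, which is the point: an estimated Cp^ slightly below Pp^ does not mean the actual Cp is below the actual Pp.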

SHORT TERM VS LONG TERM

The length of the study has nothing to do with Cp/Cpk/Pp/Ppk. Nowhere in the definitions of these indexes do the words “term”, “short” or “long” appear.

You can make a SHORT CAPABILITY study, a LONG CAPABILITY study, a SHORT PERFORMANCE study or a LONG PERFORMANCE study.

However, the validity of a SHORT CAPABILITY study can be questioned. We saw that stability was a prerequisite for capability. Stability is usually assessed by the lack of OOC signals in the control chart. But if you make a study with, let’s say, 100 consecutive parts processed in half an hour, you leave a lot of potential sources of special causes out of the study, like day/night, operator change, tool wear, tool change, different batches of raw material, and many more. What would you tell me if I wanted to sell you that “the process has the capability to behave stably and with a variation that is 1/9 of the allowed tolerance (Cp=1.5) because that’s what I got in 100 parts made in 1/2 hour”?

That is a problem. If I build a machine or start a new process and have to make a short preliminary test to show my client its potential, how do I do it? My opinion is that you don’t. You can’t.

The best you can do is to make a SHORT PERFORMANCE (Pp/Ppk) STUDY. Calculating Pp/Ppk is somewhat conservative, because it tends to be smaller than Cp/Cpk even for small instabilities that are not detected as OOC signals in a chart, especially in the short term. Also, you will probably require a greater Pp/Ppk in the short term study than the expected Cp/Cpk in the long term. And finally, you’d better get a “free of OOC signals” chart in the short term if you want to have any chance of having a stable process. Note that now you are requiring something like stability even when you are using Pp.

I think that’s why the QS-9000 requires preliminar Pp/Ppk>1.66 for a new process and a Cp/Cpk>1.33 for ongoing processes.0March 8, 2003 at 7:05 pm #83617

Jim Winings (@Jim-Winings)

HOLY COW!!!!!!!!!!!

Gabriel gets the gold star for the longest, most complex post. I was just getting ready to write a summation as a checklist of what facts have been dug out so far. Thanks!! (GRIN)

I’m still going to write the summation today, and will answer your post on Monday. I’m going to have to flow chart that puppy out. However, a couple of points, and as I add all this stuff together some things are starting to come to light.

Six sigma is NOT QS-9000, and AIAG does not set six sigma standards. I’m not convinced that anyone does, but I would have to say that ASQ (formerly known as ASQC) may be the closest, and they do not mention Pp/Ppk in the glossary on their web site.

And believe me, do not get me started on QS and ISO standards, self-audits, et al. That would take another gig of isixsigma’s hard drive space.

Jim Winings

http://www.sixsigmaspc.com

March 8, 2003 at 10:49 pm #83618

Jim Winings (@Jim-Winings)

Summation of facts so far uncovered.

Pp/Ppk did not exist as a unit of measurement until around 1984 or a little later. (Eileen)

Cp/Cpk did exist as a unit of measurement at least by 1980 (Juran/Gryna, Quality Planning and Analysis, 2nd Edition). If someone has a copy of Quality Planning and Analysis, Juran/Gryna, 1st Edition, which should have come out around 1970 according to the 2nd edition’s copyright notice, please look up Cp/Cpk and post the results here.

Checking my AT&T Statistical Quality Control Handbook, (C) 1957: it does discuss process capability, but does not include the terms Cp/Cpk. Ergo, unless someone has another verifiable source, Cp/Cpk came into play as a unit of measurement sometime between 1957 and 1980.

Sometime between 1984 and 1991, apparently either Ford’s Supplier Quality Assurance or the AIAG set Ppk as a calculated sigma and Cpk as an estimated sigma. Not ASQC/AIAG, as some indicate on their web site.

Sidebar:

Prior to giving up Six Sigma training, Motorola sent everyone that was to get the original six sigma training to a Basic Statistics course, (c) 1985 Motorola, Issue #1, given by the Motorola Training and Education Center, the forerunner to Motorola University (ENG 125). It was based on Dr. Dale Besterfield’s book Quality Control. The 2nd Edition of that book had just come out, so some participants got the 1st Edition and some got the 2nd Edition, and the course related to both books. The course material does not mention Cp/Cpk, but does calculate sigma using the square root method, except for the use in control charts. The chapter on capability studies uses Cumulative Frequency. Funny thing is that while it doesn’t refer to Cp/Cpk, it does have a session on Setting Assembly Tolerances. And of course distributions, X-bar/R Charts, P Charts, etc.

The AT&T Statistical Quality Control Handbook, (C) 1957, covers using an estimated sigma for process capability, but indicates that it is more accurate only if the process is shifted and out-of-control.

Note: while I read this and posted it here, I still do not see a mathematical correlation.

A Motorola Suppliers document, based apparently on Dr. Dale Besterfield’s book Quality Control, 2nd Edition, and Juran/Gryna’s Quality Planning and Analysis, 2nd Edition, clearly indicates to calculate sigma using the square root method for Cp/Cpk numbers. For exactly what it says, go to

http://www.sixsigmaspc.com/six_sigma_spc_about.html

and click on the Sigma link. It’s in the 1st paragraph. You don’t need to read it, just look for the link; it’s in a link color. I don’t want to upset Stan, but I also don’t want to do something twice. That wouldn’t be very Six Sigma of me now, would it?

Further research that I have done in the past 24 hours indicates that there may be a difference in the way to obtain sigma between the sigma universe, sigma actual, sigma population, and sigma (enter your favorite statistics word here), and this may be adding to the confusion. Also note that Dr. Dale Besterfield’s book Quality Control, 6th Edition, (c) 2001, shows Cp/Cpk to be estimated in at least 2 different ways and never calculated using the square root method. I’ll cover this later perhaps, but he says that R-Bar/d2 does not give a true process capability and should only be used if circumstances require its use. Of course he does not indicate what these circumstances are, and this is different from his 1st and 2nd editions. However, in both books he says that, whether grouped or ungrouped, sigma by itself is always the square root method. I could not find anything on Pp/Ppk.

Checking some other sources: Rath & Strong’s Six Sigma Pocket Guide (a good cheap little book, but hard to find anything in), Rath & Strong being apparently another pioneer in six sigma, but then again who isn’t, and The Six Sigma Way. Both books cop out by using look-up tables for sigma. I guess that’s one way to stay out of the problem, but it also may indicate something else. Neither refers to Pp/Ppk.

Ok, when in doubt, I go to a book that isn’t based on manufacturing: Statistics: An Intuitive Approach, Weinberg and Schumaker, 3rd Edition. (No, it’s not the meteorite Schumaker.) Now, since this is not a manufacturing book, they don’t mention process capability; ergo, they show sigma always to be the calculated square root method. If all your previous processes were designed to six sigma specifications and monitored, in theory, you shouldn’t have any long-term problems.

I’m starting to see a trend here. Is anyone else? All my Motorola six sigma training material, and I still have it all, always indicates to calculate sigma using the square root method. Anyone have training material from GE or Honeywell that indicates different?

Pp/Ppk appears nowhere except, apparently, in the AIAG’s SPC manual, QS-9000 and perhaps ISOxxxx.

While different generic quality control books indicate calculating sigma in several different ways, there is no consistency or uniformity between all these Ph.D.s. And none of them show mathematically why. They just say, this is the fact, and none of them call it Pp/Ppk.

Now, I am a firm believer that everything in the universe can be explained with mathematics, with the exception perhaps of Michael Jackson. And nowhere has it been proven mathematically that estimating sigma for process capability, under any given circumstances, is more accurate than calculating it using the square root method. I still see it as common sense. I also see the only consistent method is the square root method.

It would appear to me that six sigma, which is a culture, is having a culture clash with people coming from the automotive industry. Now I’m not picking on the automotive industry, there are some fine autos: Toyota, Honda, Mitsubishi, Nissan. But I do see a trend here with this topic. Or is it just me?

Jim Winings

http://www.sixsigmaspc.com

March 9, 2003 at 12:15 am #83619

Hi Jim,

One more source you might try in order to get some answers about Cpk versus Ppk is the book Measuring Process Capability, by Davis Bothe. It’s carried by both amazon.com and the ASQ Web site. It addresses most of the issues raised in many of these posts.

March 9, 2003 at 1:15 am #83620

Chris Seider (@cseider)

Gabriel,

We have had good discussions in here in the past. This is a continuation of that exchange of intellectual firepower.

I appreciate your treatise (a compliment) above. It has made me think about 2 things.

1. Am I incorrect that for a technically correct Cpk, Cp, Ppk, etc. calculation, the process must be in control? I know that many do not worry about this. I myself do not worry about it unless I have an extreme case of non-normality of data, e.g. definitively bimodal, extremely skewed, etc. My question to you is this: do you feel the calculation of Pp, Ppk with a calculated standard deviation is correct even for non-normal looking (or statistically failing a test such as Anderson-Darling) distributions? I also ask the tricky question about being in control because I have heard all extremes of what in control means (all tests or just the 3 sigma tests). Your specific comment or reference link would be appreciated.

2. It has been evident from past posts that our backgrounds are different and we have different application backgrounds. I’m disturbed by your statement that it is NOT defined anywhere that Ppk is for long term capability and Cpk is for short term. I agree that in the exact formulas it isn’t stated. However, everything I’ve been taught and everything I’ve experienced says the concept of long term vs short term is VERY important in problem solving methodology. Are you saying that Cpk could be used to describe long term capability? I have a gut problem with this because my understanding of Pp for long and Cp for short is the ONLY thing which keeps communication of capability well understood among the parties involved. It is my understanding that a good part of the Six Sigma community

I must admit my technical books are at work now, so I can’t consult AIAG and others right now. I look forward to your comments. I have other thoughts about machine capability only being capable 1/9th of the time, but I will let that pass, unless you prod me.

March 9, 2003 at 1:26 am #83621

Chris Seider (@cseider)

Jim, please see my post to Gabriel’s post before replying. However, I can tell you other industries are using Ppk and Cpk. The chemical processing industry that is starting to use Six Sigma uses Ppk and Cpk. However, I find most industries (maybe because of earlier quality training by those involved) don’t calculate more than Cpk and do it on longer term data (though using estimated s.d.) and therefore make it even more confusing out there. I rarely see Ppk reported. I’ve had connections with the automotive supplier base, chemical industry, converting facilities, etc. Where I see those who talk about Ppk and Cpk, it is those applying the more classical Six Sigma approach, who use the short term and long term understanding to get a solution faster, not for reporting capability to customers, suppliers, etc.

I believe Honeywell, GE, Dow, and DuPont all still have Ppk in their training materials, which cover many chemical industries. However, you find many don’t emphasize this distinction much because of the confusion on the concepts; just look at the chatter on this board and the lack of experience with working with short and long term capabilities by some of the trainers.

If others have knowledge otherwise or more enlightening, please post.

March 9, 2003 at 1:46 am #83622

John J. Flaig (@John-Flaig)

Gabriel, I enjoyed reading your very thoughtful argument and I agree with the mathematics. However, I have a question for you. If a process is unstable, then it is unpredictable. Therefore, computing a metric like Pp or Ppk seems to me to be of no practical value since it does NOT predict anything.

Regards,
John

John J. Flaig, Ph.D.

Applied Technology

http://www.e-AT-USA.com

March 9, 2003 at 5:28 pm #83624

Jim Winings (@Jim-Winings)

Great! I’ll do that, but it will take awhile. I’ll have to order it.

(Just what I need, another freaking book I can’t find anything in. I guess you can’t have too many quality control books.) GRIN

March 9, 2003 at 5:33 pm #83625

Gabriel,

You need to go back and read the SPC manual. Short is Cp/Cpk, not Pp/Ppk.

March 9, 2003 at 5:39 pm #83626

ASQ may not mention Ppk on their web site, but they do teach it in their Six Sigma training.

March 10, 2003 at 1:23 pm #83645

Gabriel (@Gabriel)

Hello Carl, just my view, ok?

“1. Am I incorrect that for a technically correct Cpk, Cp, Ppk, etc. calculation done, the process must be in control? “

This is based on AIAG’s SPC: For Cp/Cpk, the process must be in-control or, if there are a few points showing OOC signals which special cause has been identified and eliminated (so they won’t happen again), those OOC points can be eliminated from the calculation of Cp/Cpk. For Pp/Ppk, “variation due to common and special causes” is part of its definition. If you requested stability, you would never have “variation due to special causes” and the definition would be stupid. If you accept unstability, then you must accept OOC.

“I myself do not worry about this unless I have an extreme case of nonnormality of data–e.g. definitively bimodal, extremely skewed, etc.”

I hope we would agree that “stability/control” is one thing and “normality” is a completely different and independent thing. I think that this fact goes beyond “my view”. If you don’t agree, please tell me and we can go deeper into this.

“Do you feel the calculation of Pp, Ppk with calculated standard deviation is correct even for non-normal distributions?”

Well, this is a whole new subject, but shortly: it depends. If you want to use Cp/Cpk or Pp/Ppk as an absolute indication of how well the process is behaving relative to the allowed variation (tolerance), you should find a distribution that matches the “real” process distribution beyond the percentiles 0.135% and 99.865% (which correspond to +/-3 sigmas in a normal distribution) and beyond the specification limits (whichever is farther from the average). This can be almost impossible in a well performing process. For example, at Cpk=1.5 you have 3.4 PPM. Can you imagine what a giant sample you would need to have enough data in that zone to find a mathematical distribution that matches the process distribution? If you want to monitor your improvement efforts, then you can use the formula with the standard deviation. If the Cp/Cpk improves, then the process improves, even when a Cpk of 1.33 calculated with this straight formula can mean different PPM for different distribution shapes. However, note that just about any imaginable process distribution has well more than 99% of the individuals within +/-3 sigmas. For example, the rectangular and triangular have 100%.
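One workaround sometimes used for non-normal data, sketched here as my own illustration (the function name is mine, and real methods fit a distribution rather than using raw empirical percentiles, precisely because of the giant-sample problem described above), is to replace the +/-3 sigma points with the matching 0.135% / 50% / 99.865% percentiles:

```python
def percentile_ppk(data, lsl, usl):
    """Percentile-style performance index for possibly non-normal data.

    Uses the empirical 0.135%, 50% and 99.865% percentiles (the points
    that +/-3 sigma and the median would be for a normal distribution)
    instead of mean +/- 3*S. A rough sketch only: the tail percentiles
    are very poorly estimated unless the sample is enormous.
    """
    xs = sorted(data)

    def pct(p):
        # simple nearest-rank percentile, adequate for a sketch
        k = min(len(xs) - 1, max(0, int(round(p * (len(xs) - 1)))))
        return xs[k]

    p_lo, med, p_hi = pct(0.00135), pct(0.5), pct(0.99865)
    return min((usl - med) / (p_hi - med), (med - lsl) / (med - p_lo))
```

For normal data this roughly reproduces the usual Ppk; for skewed data the two tails are treated separately, which is the whole point of the substitution.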

“…what in control means (all tests or just the 3 sigma tests)”

In control = no OOC signal in the chart. I’ll let you decide which signals you will take into account. I admit that we don’t use all 6 tests for both Xbar and R (12 in total). Something interesting is that if you used these criteria in a perfectly stable process (such as a random computer simulation) you would get at least 1 OOC signal in 1 out of 3 SPC sheets of 30 points each. Of course that would be a false alarm, because it is not associated with any special cause.

“2 … I’m disturbed by your statement that it is NOT defined anywhere that Ppk is for long term capability and Cpk is for short term”

Not in the AIAG’s SPC manual. I don’t mean that it is not defined anywhere.

” … the concept of long term vs short term is VERY important in problem solving methodology. Are you saying that Cpk could be used to describe long term capability? I have a gut problem with this because my understanding of Pp for long and Cp for short is the ONLY thing which keeps communication of capability well understood among parties involved.

Something I forgot in my previous post: we have to distinguish between “short” and “long” when we are speaking about the length of the study and “short” and “long” when we are speaking about the term of the variation. What I said in my previous post referred to the length of the study, and NOT to the short term / long term variation. For that last subject, I will give you a real life example from my own current experience.

We have a process that is very stable and is monitored with SPC. Every month we take the data from the SPC and make a report that includes, among other things, the Cpk and Ppk values, which are plotted together in a chart with “date” on the horizontal axis. Both curves oscillate around the same value and cross each other several times (remember that they are estimates, so you have random error). This monitoring started about 1 year ago. We will agree that it is a long study. Now, Cpk shows short term variation, and Ppk shows long term variation, ok? But they show the same figures! Of course, I forgot! If the process is stable, then “variation due to common causes” = “variation due to common and special causes” and then Cpk=Ppk (both by definition and by fact). Instead of interpreting that as “short term” vs “long term”, I prefer to say that every time we calculated the report, the process PERFORMED (Ppk, history) as it has the CAPABILITY TO PERFORM (Cpk, prediction). If the process is not very stable, then you will get a lower Ppk, because the process has the CAPABILITY TO PERFORM (Cpk) better than what it ACTUALLY PERFORMED during the study. If short term and long term variation are not equal, then the process is not fully stable. If it were, then it would deliver the same distribution all the time, and then there would be no difference between short and long term variation.

“I have other thoughts about machine capability only being capable 1/9th of the time but I will let that pass, unless your prod me.”

I prod you :-) What’s that about being capable 1/9th of the time? I hadn’t said anything about that. I said something about a process with a standard deviation that was 1/9th of the tolerance range, and that was Cp=1.5, but it has nothing to do with being capable 1/9th of the time, does it?

March 10, 2003 at 1:43 pm #83646

Gabriel (@Gabriel)

John.

We agree. Pp/Ppk does not predict anything, because stability is not a must, and without stability you can not know what will happen next time. So Pp/Ppk are of limited value.

The practical value of Pp/Ppk is to analyze how the process performed during the time the data belongs to, and only in that time. For example, I make a batch of 10000 parts in 24 hours, and during the manufacturing I take 5 parts every half an hour, for a total of 240 parts. Then I can calculate Ppk to see the quality of this batch regarding the specification. It’s just as if I took a sample of 240 parts from the finished batch and plotted the values in a histogram together with the specification limits. It just tells me about this batch. It tells me history. How the process PERFORMED. Not how it WILL PERFORM.
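This “history only” calculation takes just a few lines. The sketch below is my own (the sample values are invented for illustration): Ppk from a plain sample of the batch, with no subgrouping or time ordering at all.

```python
import math

def ppk_from_sample(sample, lsl, usl):
    """Post-mortem Ppk from a random sample of the finished batch.

    No subgrouping, no control chart, no time order: S simply pools
    all the values, so the result describes how the batch turned out
    (history), not what the process will deliver next (prediction).
    """
    n = len(sample)
    mean = sum(sample) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    return min(usl - mean, mean - lsl) / (3 * s)

# e.g. a handful of measurements drawn at random from the batch
print(ppk_from_sample([9.0, 10.0, 11.0, 10.0, 10.0], lsl=4.0, usl=16.0))
```

In practice you would use a sample like the 240 parts mentioned above; the order in which the parts were made never enters the formula.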

Note: In fact, when computing Pp/Ppk you put all the values together in the same bag for the S calculation, so “subgrouping” has no effect. Then you can make the Pp/Ppk calculation post-mortem, just taking a random sample of the desired size from the finished batch and without plotting a control chart (it would be meaningless if the data is not time-ordered). Of course, this does not work for Cp/Cpk, because you need the subgroups to calculate Rbar/d2 and need the time axis to assess stability.

March 10, 2003 at 1:52 pm #83647

Gabriel (@Gabriel)

Stan:

If by “SPC manual” you mean AIAG’s, then YOU need to go to it and discover that nowhere does it say that Cp/Cpk is short.

Another possibility is that I am blind and can’t find it. In that case, please forgive me and help me by telling me the page number where it is stated.

Anyway, I didn’t say neither that Pp/Ppk was short nor the opposite. I just said that Cp/Copk/Pp/Ppk are not “length” related.

At most, what I said was that a short, preliminary capability study where you calculate Cpk can be questioned, because it is hard to assess stability in a very short time. However, note that this relates to the length of the study and not to the term of the “sigma” (whether it is short term sigma or long term sigma).

March 10, 2003 at 3:07 pm #83652

Jim Winings (@Jim-Winings)

>Jim, please see my post to Gabriel’s post before replying

Well, before I post any technical response anyway. It’s going to take me a couple of days, because I need to do other stuff as well, to thoroughly understand the nuances of Gabriel’s post.

>not for reporting capability to customers, suppliers, etc.

Agree. And I’m afraid that is exactly what is happening in some industries/companies. So why do some companies and software use it that way? Where is the validity in it?

>just look at the chatter on this board and the lack of experience with working with short and long term capabilities by some of the trainers.

Experience, hell, I’m still waiting for the proof that the short/long-term deal is valid. And think about this: we all can’t be right. That means that x% is wrong and taking it as gospel. That alone is contradictory to what I was taught six sigma is about: removing confusion from the entire process. But at some point six sigma turned into something more like ISOxxx, which is confusing as hell for some companies, with all the different rules and regulations and theories. Are all these rules, regulations and theories really required to achieve 3.4ppm? I don’t think so, based on my original six sigma training, which may have been rotten because it was the first.

Let me explain about the short/long-term deal being valid.

I’m going to attack the issue from the point of economics and logic. Remember that I am a programmer, and some people say we are weird. I like to say we are just slightly skewed. Some to the left half of the brain and some to the right half of the brain.

Some background.

In the late 70s, one process at Motorola had a miced parts nest with x, y and z adjustments. It was on a screen printer. Each adjustment had a locking screw so it wouldn’t move once you got it set. One day I’m adjusting this thing by loosening one set screw at a time, moving the nest, then tightening the lock screw back down. A group leader came over and said I couldn’t do it that way. I looked at her funny and said WHAT??? She said that I had to loosen all 3 locking screws, move the nest, and then lock the 3 locking screws. Of course, if one did this it could, and usually did, throw one of the adjustments out. I asked her if she knew how these nests worked mechanically, and she confessed she did not. So then I asked her, so why do you think I have to loosen all the locking screws at once to make the adjustments? She said, because that’s the way we have always done it. Well, that went over real well with me.

When I first started in Component Engineering at Motorola, about 1983-84, they were testing memory chips by putting in a checkerboard pattern of 0 and 255. I said, guys, if the 0 programmed bit slips you will not know it; we should use 64 and 128. They changed it, and last I knew they were still doing it that way. But from day one to that point they used 0 and 255.

Just prior to six sigma, Motorola had a problem with 2 new big major products (the entire story is actually on my site), and I believe that this directly led them to develop the six sigma methodologies. They were modulating the 2-way radios with a 1 kHz tone to make some measurements. Since they were having problems, I decided this was not the best way to do it. So I started modulating the radios with a sweeping frequency, taking data and just doing a distribution chart. Just a distribution chart. The development manager saw that there was a big delta between the initial data they took and mine. So he had me run their data through the SPC software I was using, and that wasn’t it. So they observed how I tested the radio and saw the sweeping frequency thing. This enabled them to change a part and fix the radio, and they started testing the product with the sweeping frequencies. For 40 years Motorola had modulated radios with a 1 kHz tone.

My point thus far is: just because something has been done, perhaps for decades, doesn’t make it right. Things change. Long-term is mentioned in my AT&T Statistical Quality Control Handbook from 1957.

Now, I don’t know who the genius was (and I mean that literally) who came up with the factor table, or when, but based on the following I can reckon why they did. And since we are talking about statistics, I’m going to make a few assumptions.

The factor table came into play prior to 1960.

The factor table may have come into play prior to 1940.

Personally I believe it came into play during the 1930s or before.

Why did it come into play?

Going back to the 1800s and before, things were manufactured by craftsmen, and some stuff was of high quality because it was made by hand. There was no mass manufacturing going on yet. Of course, some things needed a way to be measured so that they could be improved, but more importantly so that production could be raised.

As Henry Ford developed the production line, the work force in general was not well educated. Smaller companies could not hire people with college degrees. Production started rising along with defects. Ergo, the more of anything you produce and the more complex it is, the higher the probability that an error will occur. So statistics started becoming much more important. But calculating a standard deviation was a long, drawn out affair, because the math had to be done longhand and only highly educated people could do it. At some point the slide rule came into play. While this made the calculations easier, you still pretty much needed a highly educated person to use it and understand what the results meant. So the labor force could not take data and do statistics, and smaller companies still could not afford the Harvard, et al. people. Industry needed a way to allow less educated people to perform standard deviation calculations, thus doing statistics, without the need for them all to have to go to Harvard, et al.

I truly believe that the factor table came into existence for this purpose. There needed to be a way to calculate control limits that was easy. It allowed less educated people to do basic statistics, which of course was more cost efficient than having an engineering manager run down to the production line to take data. It didn’t have to be that accurate, because any statistics being employed was better than no statistics at all. To me this makes the most logical, as well as common sense, reason for the factor table to have come into existence in the first place.

I do not believe that the factor table came into existence to be a magical crystal ball that all of a sudden allowed people to view long-term or short-term anything. I believe that at some point after the factor table came into existence, somewhere an engineer was having a product problem they couldn’t fix, so they started using the factor table just to get another number to track and analyze. Pretty much just as Eileen and Dr. Kane did back in 1984 (see previous post). But based on absolutely no valid mathematical formula at all, those old engineers decided this was a new unit of measurement. How many times have we all seen specifications just pulled out of the air? And they decided they would bless this off as short-term or long-term this, that or the other.

The next question I think we need to ask ourselves is: how much bias went into this decision making process? We all should realize how important bias is in such matters, and I don’t think we are looking at that. We pick up the theory and run with it, making additional modifications to it that fit our needs at any given time, and thus we end up with this topic that no one can agree on and that even Ph.D.s change their minds on. (See the post above about Besterfield changing Cp stuff between books.) Now, to add insult to injury, we start having committees made up of only God knows who, setting it in stone, still based on nothing more than perhaps bias. Now call me silly, but if any of these assumptions are correct, where does that leave us?

My point is that just because something has been done in the past, and a manager or committee decided that it was true at the point in time they decided it, does not mean that it is true and valid forever.

Since no one is talking about statistical tolerancing anymore, only about long-term/short-term stuff, have we gotten to the point where we are analyzing the hell out of our data using various tools and methods instead of just using a few basic statistical tools, common sense, and manufacturing experience to fix the problems? Are we hoping that we can justify any given problem by various statistical means so we don't have to fix it? There was/is a book called How To Lie With Statistics, and I think the simple fact that the book exists says something about all of us as humans.

Jim Winings
http://www.sixsigmaspc.com

March 10, 2003 at 3:08 pm #83653

Gabriel,

I don’t have it right in front of me but it is on page 81 or 82. The page that defines Cp/Cpk/Pp/Ppk. They give different symbols for the std dev associated with the C’s vs the P’s. At the top of the same page are the formulas for the different std dev’s.

You will find that the Cp/Cpk uses Jim's dreaded r-bar/d2, which is within, instantaneous, short term – whichever term you are comfortable with. The Pp/Ppk uses the overall std dev – the old square root of the sum of each individual's squared deviation from the mean, divided by n-1. The overall or achieved or long term – again, whichever term your religion allows for.
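For concreteness, here is a minimal sketch of the two calculations described above, using invented measurements and an invented spec of 8–12; the d2 = 2.326 constant for subgroups of 5 comes from the standard factor tables:

```python
# Sketch of the two sigma estimates behind Cpk (within) and Ppk (overall).
# All data and spec limits are made up for illustration.
import statistics

subgroups = [
    [10.1, 10.3, 9.9, 10.0, 10.2],
    [10.4, 10.6, 10.2, 10.5, 10.3],
    [9.7, 9.9, 9.8, 10.0, 9.6],
    [10.2, 10.1, 10.3, 9.9, 10.0],
]
USL, LSL = 12.0, 8.0
d2 = 2.326  # bias-correction constant for subgroup size n = 5

all_parts = [x for sg in subgroups for x in sg]
xbarbar = statistics.mean(all_parts)

# "Within" sigma: average subgroup range divided by d2 (Rbar/d2)
rbar = statistics.mean(max(sg) - min(sg) for sg in subgroups)
sigma_within = rbar / d2

# "Overall" sigma: sqrt of sum of squared deviations over (N - 1),
# taken over every individual part
sigma_overall = statistics.stdev(all_parts)

cpk = min(USL - xbarbar, xbarbar - LSL) / (3 * sigma_within)
ppk = min(USL - xbarbar, xbarbar - LSL) / (3 * sigma_overall)
print(f"sigma_within={sigma_within:.3f}  sigma_overall={sigma_overall:.3f}")
print(f"Cpk={cpk:.2f}  Ppk={ppk:.2f}")
```

Because the subgroup averages here deliberately wander, the overall sigma exceeds the within sigma, so Ppk comes out below Cpk.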

The term that Jim is ranting against is the term Cp/Cpk as defined in the AIAG SPC book.

March 10, 2003 at 3:13 pm #83654

>Experience, hell, I'm still waiting for proof that this short/long-term deal is valid. And think about this: we can't all be right. That means that x% are wrong and taking it as gospel. That alone contradicts what I was taught Six Sigma is about: removing confusion from the entire process. But at some point Six Sigma turned into something more like ISOxxx, which is confusing as hell for some companies, with all the different rules, regulations and theories. Are all these rules, regulations and theories really required to achieve 3.4 ppm? I don't think so, based on my original Six Sigma training, which may have been rotten because it was the first.

Jim, some people still think the world is flat. You are out of sync with the whole Six Sigma community – it passed you by 10-15 years ago. I challenged you to go look at data – to challenge your belief in SPC by seeing how insensitive SPC is to shifts of half a sigma.

Go look for yourself or end your rant.

March 10, 2003 at 3:15 pm #83655

Gabriel

Wow Jim!

Now that you explained that, I see how much we do agree.

A thing to correct from a strict formal point of view: your differentiation between "estimating as Rbar/d2" vs "calculating using the square root method" is not correct. And this is not just my view. It's a fact, and it can be demonstrated:

Both ways of approaching the process sigma are estimations, because you are estimating a "population parameter" using a "sample-based statistic" – unless you use the square root method with the full population and with N in the denominator instead of N-1.

Your approach of looking outside manufacturing to pure mathematics is good. And in that light, the following is true:

Given a normally distributed population with average Mu and standard deviation Sigma, and a sample of size N divided into m subgroups of n, both Rbar/d2 and S (the square root method) are unbiased estimators of Sigma. The demonstration for S can be found in any statistics book. In fact, the N-1 in the denominator of the sample standard deviation is there to get an unbiased estimator when the true average Mu is not known but is estimated as Xbar (or Xbarbar) in the (X-Xbar)^2 calculation inside the square root. (Strictly speaking, N-1 makes the sample variance S^2 unbiased for Sigma^2; S itself carries a small bias, corrected by the c4 constant, which becomes negligible for large N.) That's called the "sample standard deviation". If Mu were known and you used (X-Mu)^2, then you should use N in the denominator. That's why, when you perform the calculation with all the individuals in the population, not just a sample, you use N (because Xbar in that case is not an estimation of Mu, but Mu itself). That's called the "population standard deviation". Then we say that the "sample standard deviation", S, is an unbiased estimator of the "population standard deviation", Sigma.

About Rbar/d2, it comes from the definition of d2: d2=Mu(R)/Sigma in any normally distributed population with standard deviation Sigma. Then it is easy to see that Sigma=Mu(R)/d2. Now, Mu(R) is the true average of the population of ranges of subgroups of size n taken from the original population. If you knew the true Mu(R), then Mu(R)/d2 would not be an estimation of Sigma. It would be Sigma itself (exactly the same figure as the "square root" calculated with the whole population). The problem is that you don't have the data from the full population, but just from a sample of m subgroups. In this circumstance, you can estimate Mu(R) as Rbar. We know that in any population, the sample average is an unbiased estimator of the population average. In the case of the population of ranges from which we have a sample, Rbar is an unbiased estimator of Mu(R), and then Rbar/d2 becomes an unbiased estimator of Sigma, just as S is.

Notice that a normally distributed population has an infinite number of individuals. If it didn't, it would not be a normal distribution (remember, pure mathematics now). That means it is IMPOSSIBLE to compute or calculate Sigma, either with the square root or with Mu(R)/d2. The best you can do is estimate it, either with S or with Rbar/d2.

So, as you see, in a normal distribution both Rbar/d2 and S are unbiased estimations of Sigma, and none of them is a true "calculation" of Sigma.
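As a quick sanity check on that claim, a small Monte Carlo simulation (arbitrary parameters: Mu = 50, Sigma = 2, 25 subgroups of 5 per trial) shows both estimators landing on Sigma on average for a stable normal process:

```python
# Monte Carlo check: for a stable normal process, Rbar/d2 and the n-1
# sample standard deviation S both estimate Sigma without practical bias.
# All parameters here are arbitrary illustration choices.
import random
import statistics

random.seed(1)
SIGMA, N_SUB, SIZE, TRIALS = 2.0, 25, 5, 400
d2 = 2.326  # factor-table constant for subgroup size 5

est_r, est_s = [], []
for _ in range(TRIALS):
    subgroups = [[random.gauss(50, SIGMA) for _ in range(SIZE)]
                 for _ in range(N_SUB)]
    rbar = statistics.mean(max(sg) - min(sg) for sg in subgroups)
    est_r.append(rbar / d2)                                  # Rbar/d2 estimate
    est_s.append(statistics.stdev([x for sg in subgroups     # S estimate
                                   for x in sg]))

print(f"mean Rbar/d2 estimate: {statistics.mean(est_r):.3f}")
print(f"mean S estimate:       {statistics.mean(est_s):.3f}")
```

Both averages come out very close to the true Sigma of 2.0, which is the point of the post: for a stable normal process the two are equally valid estimators.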

Just a few things:

1) Rbar/d2 is an unbiased estimator of Sigma ONLY IN A NORMAL DISTRIBUTION. S is an unbiased estimator of Sigma in ANY DISTRIBUTION. That's a point for S. However, note that many people would not accept a Cpk or Ppk based on the process standard deviation if the process is not normally distributed. In that case, neither S nor Rbar/d2 would be usable.

2) If the process is unstable, then Rbar/d2 does not show the real process standard deviation. In fact, in that case the real process standard deviation does not exist, because the distribution is changing due to special causes of variation. However, you can think of a batch and imagine the distribution of this batch. If this batch was affected by instability, then Rbar/d2 is not an estimation of the Sigma of the distribution of the batch. That is because Rbar is based on rational subgroups, and the special causes do not show inside them (only between them). If the subgroups are not rational but random (i.e., any individual has the same chance of belonging to any subgroup regardless of when it was produced – something we don't do), then Rbar/d2 is an unbiased estimation of Sigma if the batch is normally distributed. On the other hand, for S you put all the individuals in the same bag, so you don't care if they belong to one subgroup or another. So S is an estimation of the Sigma of the batch even if the process is unstable (and, as said before, even if it is not normally distributed).

3) (bonus) If the process is stable AND normally distributed, then S and Rbar/d2 are two equally valid estimations of Sigma. In fact, because they are both unbiased estimators, you will get on average the same figure with either of them. That average being Sigma.

So your point is to calculate Cpk using the square root, as if it were Ppk. If you use S for both Cpk and Ppk, what's the difference between them? If, when the process is stable, they are the same, what's the point after all of using Rbar/d2? For a stable process, there is no point. For an unstable process, the only thing I can imagine is that Cpk calculated with Rbar/d2 can give you an idea of what the process could deliver in the future (but could not deliver up to now) if you removed all those special causes that are adding more variation on top of the variation due to common causes. However, if you don't know what those special causes are, or you don't know how to eliminate them, or you have decided to keep them, then Cpk tells you what you will never get.

I have an example of a slowly drifting process. The process drifts and has to be adjusted about every 8 hours. The variation in any subgroup of n consecutive parts is minimal compared to the variation due to the drift. Still, the adjustment is made every 8 hours because at this rate we are still comfortably inside the tolerance. The special cause is a grinding stone that wears. Nothing will be done to avoid that wear (impossible) or to automatically compensate for it (possible), because it would be very expensive compared with the benefit (avoiding easy adjustments every 8 hours). If you calculate Cpk using Rbar/d2 you get about 3 or 4 times better than Ppk. Meaning that if we did something that we will never do, we would get a process 3 or 4 times more capable than our actual process, which is capable enough. Useless info!
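A rough simulation of that drifting-grinder situation shows the same effect. All numbers below are invented for illustration, and the exact size of the Cpk/Ppk gap depends on the drift rate chosen:

```python
# Simulated drifting process: tight common-cause variation within
# consecutive parts, plus steady tool-wear drift that is reset by an
# adjustment every "shift". All parameters are invented.
import random
import statistics

random.seed(7)
USL, LSL = 10.0, 0.0
NOISE = 0.15            # common-cause sigma of consecutive parts
DRIFT_PER_PART = 0.02   # tool wear per part
PARTS_PER_SHIFT = 200
d2 = 2.326              # factor-table constant for subgroups of 5

data, subgroups = [], []
for _ in range(10):     # ten shifts, each ending with an adjustment
    level = 2.0         # process re-set near the low end after adjustment
    parts = [random.gauss(level + i * DRIFT_PER_PART, NOISE)
             for i in range(PARTS_PER_SHIFT)]
    data += parts
    # rational subgroups of 5 consecutive parts
    subgroups += [parts[i:i + 5] for i in range(0, PARTS_PER_SHIFT, 5)]

mean = statistics.mean(data)
sw = statistics.mean(max(s) - min(s) for s in subgroups) / d2  # within
st = statistics.stdev(data)                                    # overall
cpk = min(USL - mean, mean - LSL) / (3 * sw)
ppk = min(USL - mean, mean - LSL) / (3 * st)
print(f"Cpk (within) = {cpk:.1f}, Ppk (overall) = {ppk:.1f}")
```

The Rbar/d2 figure sees only the small within-subgroup noise, so Cpk comes out several times larger than Ppk, exactly the "useless info" the post describes: the capability of a process we will never run.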

By the way, this is an example of what I said I would not consider in my long post. There you have a predictable and repetitive instability. The process is changing its distribution over time. But take any 8-hour frame and you will always get the same. In this case, Ppk is perfectly suitable to predict the future. Cpk is not. Ooops!

March 10, 2003 at 3:15 pm #83656

Jim Winings

The scope of my proverbial rant is much larger. But I do find the term Cp/Cpk as defined in the AIAG SPC book to be asinine at least. But that is just my opinion, based on what I have read, common sense, lack of proof, and logic as I see it.

March 10, 2003 at 3:28 pm #83657

Gabriel

Ok, it says that Cp/Cpk use Rbar/d2 and call it "within", and that Pp/Ppk use S and call it "total". But it says nothing about short term / long term.

What it does say is that Cp/Cpk is meaningless without stability. And with stability, S(within)=S(total) because "variation due to special causes"=0. It is logical: with stability, "short term variation" MUST equal "long term variation". So what's the point?

Anyway, again, note that in my original post I was referring to the length of the study, and not the term of the "variation".

March 10, 2003 at 3:34 pm #83658

Jim Winings

Crap, I've got to get some work done, but…

>unless you use the square method with the full population and with N in the denominator instead of N-1.

And I use n unless n < 32, in which case I use n-1.

March 10, 2003 at 4:36 pm #83661

Gabriel

That's an approximation.

It's not that there is something magic about 32. It's just that sqrt[1/n] is very close to sqrt[1/(n-1)] for large n. How large should n be? The answer is: how small do you want the error to be? Look:

n     sqrt[1/n]   sqrt[1/(n-1)]
1     1           not defined (div by 0)
2     0.7071      1
5     0.4472      0.5
10    0.3162      0.3333
15    0.2582      0.2673
20    0.2236      0.2294
30    0.1826      0.1856
32    0.1768      0.1796
50    0.1414      0.1428
100   0.1000      0.1005

and so on.
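The comparison above can be reproduced in a few lines:

```python
# Reproduce the sqrt[1/n] vs sqrt[1/(n-1)] comparison from the post.
from math import sqrt

for n in (2, 5, 10, 15, 20, 30, 32, 50, 100):
    print(f"{n:4d}  {sqrt(1 / n):.4f}  {sqrt(1 / (n - 1)):.4f}")
```

The two columns converge as n grows, which is the whole argument: the "magic 32" is just a point past which the difference is deemed small enough to ignore.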

In the same way, as you increase n, the estimator Xbar gets closer to the real Mu. So it's coherent.

But you wanted pure mathematics, didn’t you?

I never understood why one should "use n instead of n-1 when n is larger than" a given value. Using n-1 is just as simple as using n, is simpler than switching formulas as a function of n, and it's the right thing to do.

March 10, 2003 at 5:31 pm #83664

Jim Winings

>But you wanted pure mathematics, didn't you?

Yea, well be careful of what you ask for! (GRIN)

>I never understood why one should "use n instead of n-1 when n is larger than" a given value. Using n-1 is just as simple as using n, is simpler than switching formulas as a function of n, and it's the right thing to do.

I saw why 32 somewhere but I can't remember where now. It's of course a degrees-of-freedom thing, and Juran says it is beyond the scope of his book, but the underlying concept can be stated: degrees of freedom (DF) is the parameter involved when, e.g., a sample std. dev. is used to estimate the true std. dev. of the universe. DF equals the number of measurements in the sample minus the number of constraints estimated from the data in order to compute the std. dev.

Besterfield says the reason for using n-1 is that one degree of freedom is lost due to the use of the sample statistic, X-bar, rather than the population parameter, Mu.

As I recall (and I don't know if it is related or not), Motorola had a sample plan where most lots ended up with a sample size of 32. Maybe that's it? I just did it that way because, as I said, I read it somewhere before, and that was the way Motorola's supplier document said to do it. So I figured someone who knew more about it than I do had a good reason for doing it that way.

March 10, 2003 at 5:51 pm #83666

Re: Sample size of 30, using n rather than (n-1).

I also tend to use about thirty, for empirical reasons.

Do a sensitivity analysis on the sd formula. The difference between dividing by 29 rather than 30 is on the order of 3%. Taking the square root further reduces the impact to around 1%. Close enough for most any technique.
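Checking those figures directly: the variance changes by the ratio n/(n-1), and the standard deviation by its square root, so at n = 30 the effect is about 3.4% on the variance and just under 2% on the standard deviation:

```python
# Sensitivity of the sd formula to n vs n-1 at n = 30.
n = 30
var_ratio = n / (n - 1)       # 30/29: effect on the variance
sd_ratio = var_ratio ** 0.5   # square root: effect on the std deviation
print(f"{(var_ratio - 1) * 100:.1f}% on variance, "
      f"{(sd_ratio - 1) * 100:.1f}% on sd")
```

Either way, the point stands: by n = 30 the choice of denominator is a rounding-error-sized decision for most practical work.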

Even Student's t distribution values converge to within 5% or less (depending on alpha level) at thirty.

My spin on that.

March 10, 2003 at 5:58 pm #83667

Jim Winings

>Close enough for most any technique.

Or as I like to say

Close enough for government work

March 10, 2003 at 6:00 pm #83668

Jim Winings

Apparently, pick a way, pick any way. (GRIN)

March 10, 2003 at 6:29 pm #83669

Gabriel

Ok, but why use n-1 until n=30 and then switch to n? Just because it is "close enough"? Using n-1 (even above 30) is even closer. What is the benefit of using n, then? Is it simpler? I just don't get it…

March 10, 2003 at 7:54 pm #83672

Jim:

First of all, it has been great reading this latest thread. As both a practitioner and an instructor, I think it is fantastic that you offer yourself up for battle and can keep your composure. It is a great learning medium.

Back to the concept at hand: Cpk vs. Ppk with a Sigma twist (shaken, not stirred).

As an interested learner and educator, I suppose what bothers me is how concepts become entrenched in the philosophy without proper disclosure, or rather without information regarding the "new" standard of measure (things seem to come in under the radar).

I can appreciate the 1.5 shift based on Motorola's study regarding long-term process variation (glad someone was awake at the wheel), and I feel this is an important facet of long-term process variation which every BB should know. However, when Minitab and almost all other process sigma tables and charts offer up actual long-term figures as short-term sigma, one begins to get a funny feeling.

From (isixsigma) . https://www.isixsigma.com/library/content/c020507a.asp

“Final Thought: When we talk about a Six Sigma process, we are referring to the process short-term (now). When we talk about DPMO of the process, we are referring to long-term (the future). We refer to 3.4 defects per million opportunities as our goal. This means that we will have a 6 sigma process now in order to have in the future (with the correction of 1.5) a 4.5 sigma process — which produces 3.4 defects per million opportunities”

Thus, we are really saying that to have a future defect rate of 3.4 DPMO (an absolute sigma of 4.5), today we would have to operate on the basis of 1,000,000,000 (a billion) units with 1 defect opportunity each and 1 reported defect, which works out to a short-term sigma of roughly 5.998, and then shift down to 4.5 over time. But we call it a 6 sigma process?!

OK, who is pulling whose shorts on this one? The corporate world has incorporated a feel-good factor of 1.5 sigma to make the world look a bit more appealing, but much less realistic in absolute terms. I suppose it is much like DPMO, in that the only one who cares about DPMO is the manufacturer, breaking his arm patting himself on the back, when the most important measure is based on the customer. Who cares how many googol opportunities you (the manufacturer) have to screw things up? It only matters when the customer has a defect-free good which exceeds his/her expectations; this is the true measure of quality. At least all my instruction emphasis is on VOC, not on how to make the business look good.

The bothersome aspect here is the assumption that the variation of this mythical process is in such a relationship with the specifications that it would be impossible to have anything less than a 4.5 sigma process with a shift of the process to either side by 1.5 sigma (how convenient to have the process also centered). Reality dictates that we might now have, in absolute terms, a 4.5 sigma process, and depending on the variation's relationship to the specification, we could possibly have a 3 sigma process in the future or a 6 sigma process: two very different worlds. As much as we would like to make data-based decisions with long-term data, we are still forecasting, and we know how badly future shocks can affect our businesses.

Therefore, back to a quality measure: Cpk vs. Ppk vs. Sigma. For those of us who like to look at the world and explain things to managers in "real" terms, what is the most appropriate measure of short-term capability? Keep in mind Minitab has already been mutated with this built-in shift. Realistically, it is not in our best interest to assume that the current measure is appropriate, as it has us all walking around commenting on how nice the emperor's new clothes look, when we have all been trained and encouraged to call things as they really are at this point. I would say the emperor has a rather fat arse with a couple of pimples and a very small sigma. Can we ever agree on a single standard? After all, we agreed on VHS once upon a time.

Thank you for your consideration

Steven

March 10, 2003 at 9:08 pm #83673

Jim Winings

>I think it is fantastic you offer yourself up for battle and can keep your composure. It is a great learning media.

There are good reasons for universities to have debate clubs. I feel those reasons should follow us through life. This topic seems to be one of those sticky wickets. I just wish that we had people like Juran, Besterfield, and the late Bill Smith in on the discussion. What would they say? Ever notice that Einstein always showed his work? Is Juran still alive? I just read that he was born in December 1904.

http://www.simplesystemsintl.com/quality_gurus/J_M_Juran.htm

>Can we ever agree to a single standard, afterall we agreed to VHS once upon a time.

This was due to the public, not corporations, voting with their pocketbooks. When you are dealing with YOUR money instead of shareholders' money, perhaps one's value system is different. What would happen if engineers and managers all of a sudden had to pay for mistakes out of their own pockets? I'll bet there would be zero defects in the world, and in short order. I'll bet there would be an entirely different set of rules, perhaps some none of us have heard of or created yet.

March 11, 2003 at 5:48 am #83679

The point is you don't get it. Statistical control using SPC does not mean within = overall. There is no way to keep the mean from moving; SPC minimizes that movement at best. Go try what I challenged Jim to do: create a chart and then feed it data where the mean is shifted by half a sigma. How long does it take to detect?

March 11, 2003 at 12:45 pm #83681

Gabriel

Yes, I don't get it. Please understand that I don't mean that my view is the truth. It's just how I understand things today, which can change if someone shows me I am wrong. Maybe we can work it out together. Let's see.

So what you mean is that the process is really unstable (the mean moves), but it cannot be detected using SPC, so it is in "statistical control". And you are right.

First question: does statistical control = stability? True or false? Even though we (including myself, and several references such as AIAG) use these terms interchangeably, I actually think that "statistical control" = "no OOC signal in the chart" = "lack of evidence of instability", which is not exactly the same as "stability", which would be "constant distribution over time" = "random variation only" = "no variation due to special causes".

Second question: which estimation of Sigma should be used to calculate the control limits? Yes, it's true: SPC is somewhat slow (or even unable) to detect small shifts of the mean. So we agree that the control limits for the mean should be calculated using S(within), which is not sensitive to changes in the average between subgroups and which, only if the process is really stable (regardless of what you can detect with SPC), will be equal to S(total).

Third question: and for process capability? What's the point of calculating a capability using S(within)? When the process is really stable, it is the same as using S(total). If the process is so unstable that it is out of control (OOC signals in the chart), then it just can't be calculated. If the chart shows no OOC signal, there may be some small shifts of the mean that could not be detected and that will not be detected in the future. So using S(within) for Cpk will give you a potential capability that you will never reach. It is not the past performance. It is not the future performance.

So, what's the point?

March 11, 2003 at 1:16 pm #83683

Gabriel –

Good point about why use both. In practice, I always use the (n-1) estimator. The only technique I know of that requires the (n) estimator is the Z test of hypothesis on the means. But using the sample standard deviation (n-1) gives a vanishingly small error there also.

I found some interesting reading on this in chapter one of Wheeler, "Understanding Industrial Experimentation". You might give it a try. He credits Sir R. Fisher with introducing the (n-1) estimator. It is an unbiased estimator of the variance, which is so important for the F test and ANOVA. Probably why Fisher used it.

Also from Dr. Don – the Wheeler, that is: he has some good thoughts on the capability issue in "Understanding SPC".

Besides capability indices being confusing, usually taken with much too small a sample size to estimate current capability within 20%, relying on an unlikely normality assumption to predict PPM defective (especially at the tails), and generating WAY too much paperwork, I have no problem with them.

For how to estimate what you are sending to the customer, I would again agree with Wheeler's approach. I usually do.

Dave

March 11, 2003 at 9:37 pm #83707

Good questions – I will try to behave and answer them.

Control charts use within for limits (the dreaded r-bar/d2), which is correct. By their nature, control charts don't readily detect small movements of the process center (not to be confused with a random sample where the mean did not shift but the sample mean does not equal Xbarbar). The alpha risk associated with each sample is 0.27%.
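A sketch of the half-sigma challenge raised earlier in the thread (a classic Xbar chart with 3-sigma limits and subgroups of 5; all parameters are arbitrary): simulate a sustained 0.5-sigma shift of the mean and count how many subgroups pass before a point falls outside the limits.

```python
# How long does an Xbar chart take to signal a sustained half-sigma shift?
# The 0.27% alpha above applies on target; here we estimate the average
# run length after the shift. Parameters are arbitrary illustration values.
import random
import statistics

random.seed(3)
MU, SIGMA, N = 0.0, 1.0, 5
limit = 3 * SIGMA / N ** 0.5   # 3-sigma control limits for subgroup means
shift = 0.5 * SIGMA

def run_length():
    """Subgroups drawn until a subgroup mean falls outside the limits."""
    t = 0
    while True:
        t += 1
        xbar = statistics.mean(random.gauss(MU + shift, SIGMA)
                               for _ in range(N))
        if abs(xbar - MU) > limit:
            return t

arl = statistics.mean(run_length() for _ in range(500))
print(f"average run length to detect a 0.5-sigma shift: {arl:.0f} subgroups")
```

The average comes out in the low tens of subgroups (the textbook ARL for this case is roughly 33), which is the point being argued: a half-sigma shift can sit inside "statistical control" for quite a while before the chart flags it.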

Why have capability metrics that reflect both within and overall? Simple – you want a simple directional tool when the overall is incapable; you want to know if it is a control issue or a technology issue. The path you follow is different.

That Mr. Winings started this rant with all sorts of false assumptions does not seem to deter anyone. I worry about anyone claiming that Ppk has not been needed since 1980 when it is clear the term was not even in existence in 1980. He clearly has not done his homework. He also did not know the formula for Ppk, and he is ranting against the very measure used for the Xbar-R chart that he advocates. He is at best confused and at worst unwilling or unable to do a little research and discover a few things on his own. But then again, that is not the least bit unusual on this site (i.e., "I am a student who must do a paper – please spoon feed me"). But I am not starting to rant.

Within/overall, short term/long term, or whatever you wish to call it (I like Fred and Ethyl personally) is strictly a directional tool and a good first cut at an incapable process. It would be the second thing I would look at, after the MSA.

March 11, 2003 at 10:17 pm #83709

Chris Seider

Gabriel, thanks for your response. As is always the case reading boards, things got missed by me, and I really would like to know one more thing.

Re: Cpk for short and Ppk for long – either you have changed your mind or I misread. It does seem you now agree Cpk should only be used to describe short-term variation and Ppk long-term variation. I do agree the study you described would seem to be longer-term variation.

My question is this: unless one does more diligent math or estimation, one cannot use the Cp calculation Cp = (USL - LSL)/6s, with s estimated, UNLESS the data come from an expected normal distribution, right? I am not saying one cannot report a Cpk for non-normal distributions; it just takes more thought.

Forgive me, I will post something on the 1/9th later. I had thought you meant something else. I am south of the Rio Grande today and this different keyboard is driving me crazy (I cannot find the keys quickly).

March 12, 2003 at 6:17 am #83720

Jim Winings

Stan;

My point was that it wasn't needed before it came into existence. Duh!

i.e. it wasn’t needed at all.

Jim Winings

http://www.sixsigmaspc.com

March 12, 2003 at 7:31 am #83723

Jim,

BS – duh

You were and are talking off the top of your head.

Your rant was to bring attention to your web site.

Stan

March 12, 2003 at 3:36 pm #83740

Jim Winings

Stan, Stan, Stan;

>BS duh

Come on now, I know you're more intelligent than that.

>You were and are talking off the top of your head.

As opposed to talking out of another orifice?

So are you indicating that all the references I quoted I just made up, or do you think I spent a lot of time looking things up so I could present facts, as best I could, as they exist in publications, etc.? Just FYI, some people agree with some things I have said, some agree with other things I have said, and some agree with none of it. But isn't that the point of a discussion group? Perhaps your Six Sigma training did not teach you that discussing differences of opinion is a good tool for working out problems, but mine did. Perhaps your Six Sigma training did not teach you that everyone's opinion is important, but mine did. You may not agree with me, but the fact that there have been several opinions about various aspects of the entire concept indicates to me that people see things differently.

Some people have indicated that they enjoy reading these threads. Maybe they find it useful, or heck, maybe they just find it to be like a reality TV show, but either way, isn't that what a discussion forum should be about? Do you ever have anyone play devil's advocate in your company's discussions, or does everyone just sit around and say "yes, yes sir, you're right," et al.?

Stan, do you ever talk to labor, the actual people who spend hundreds of hours actually running a process, or do you sit in an office and say, "this is the way we do it; there are no other opinions worth hearing"? More importantly, if you do talk to them, do you ever actually hear what they are saying?

>Your rant was to bring attention to your web site.

As I said before, I did that to add credibility to what I say. If you send email to CNBC, local TV stations, etc., they all require your name and perhaps your address. Why? So they can keep from broadcasting what nuts say. I am just applying the same reasoning. If I am wrong, then they must also be wrong.

I also put my real name in the posts. I do not hide behind any kind of alias. People look me up in the search engines by my name all the time, and have been for much longer than this thread has existed. I could have made my name my company name or my product name, but I didn't do that, did I? Because I don't hide who I am, unless I am missing some kind of logic, wouldn't the net result be the opposite of what you are indicating I am doing? Ergo, I would be shooting myself in the foot.

This tells me you have no concept of how to market, just for openers. First off, what I should have been doing instead of starting this thread was getting my late newsletter out. It produces 50% to 75% more hits right after it comes out than any other vehicle. That would have been much more cost- and time-effective if I were simply trying to promote my site. However, my newsletter is not a discussion forum, and sometimes one needs feedback, preferably from people who have different, intelligent opinions and can carry on a conversation without getting angry and personally attacking someone for their opinions. If this really annoys you, then wouldn't the smart person not answer any of my posts? Would a smart person keep attacking an individual so that they would, at some point, have to respond and thus keep the alleged marketing attempt alive? I just don't understand your ways and means.

Second, I am behind on getting other things done that would add 100% more to the bottom line than starting this thread did. Your argument just indicates your ignorance in the realm of marketing and site promotion. I'm not saying you're stupid; I'm just saying you do not know about marketing, so your point is moot.

To rant about an issue shows passion for that topic. To rant about a person shows anger. Personally, I do not think your rants about me are just due to me putting my web page address below my name as the very last line in SOME of my messages. I believe that I may have said something that hit closer to home and invoked such hostility. Not only are you apparently the only one to have a problem with it, you have a major problem with it. This looks like an outlier to me. Such anger will kill you, Stan. High blood pressure, etc. Calm down. Cooler heads always prevail. My opinion is that this ranting about my opinion, trying to skew it into something other than what it actually is (and you have done it on more than one occasion), and making personal attacks on me speaks volumes about you. But I reckon the fact that you can stay anonymous allows your true temperament to come forth.

You indicated that the Juran Institute and ASQ have Ppk in their training, despite the fact that ASQ does not mention Ppk in its glossary. Have you had both of those Six Sigma training courses?

Have I used this information for research? You bet. Isn't that why this forum exists? Did I ask the question to start a stimulating, intelligent discussion that goes much deeper than the typical one? I tried. Is your opinion the only one that counts? I doubt it. As a matter of fact, I noticed several people not even responding to your posts. Perhaps this is because they have been posting here longer than I have. I haven't responded to your personal attacks until now. Why? Because it wasn't worth my time, and probably still isn't.

March 12, 2003 at 4:37 pm #83743

Gabriel (Participant)

Carl, just my view, remember.

Long vs Short term:

S(within) as Rbar/d2 takes into account ONLY the variation INSIDE the subgroups. Imagine that a subgroup is composed of 3 consecutive parts manufactured within 3 minutes. Rbar/d2 takes into account the average variation in those 3-minute periods and, in that sense, it is short-term variation. Even if you make a 10-year study, if you calculate S as Rbar/d2 it will be short-term in that sense. Imagine that the raw material has a characteristic (like the thickness of the strip) that has negligible variation within each batch but is normally distributed between batches (always in specification). Imagine that your process uses 1 batch per hour, that that characteristic of the raw material has a direct but small effect on the characteristic you are SPCing, and that, for your SPC, you take 1 or more subgroups within each batch. Then, at each 1-hour period, the average shifts. The S(within) and Cpk tell you what happens within the batches, but not between them and, in that sense, reflect short-term variation. Now, the overall variation S(total) will be larger (and Ppk will be lower), because it also takes into account the variation between subgroups from the first to the last and, in this sense, it is long-term variation. And if S(t) is not equal to S(w), it is because the process is unstable. Let's see: is the process delivering the same distribution over time? No, it changes every hour. Is there a special cause? Yes, the thickness of the strip changes from batch to batch. Are there OOC signals in the chart? If the variation between subgroups is not very large, probably not. The shift will not be large enough to deliver a point beyond the control limit, and you don't have enough time to get 7 points on one side of Xbarbar within an hour (and if you had, that OOC signal might not show anyway). So, what does Cpk tell you about capability in that circumstance? "IF THE SPECIAL CAUSE WAS ELIMINATED, YOU MIGHT GET A PERFORMANCE OF Cpk."
But you can't get any estimation of how the process performs compared with the specification. What does Ppk tell you? "THE PROCESS PERFORMED Ppk." Furthermore, if the special cause and its effects are predictable and repetitive (as seems to be the case here), "THE PROCESS WILL PERFORM Ppk AGAIN NEXT TIME."
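Gabriel's batch-shift scenario can be sketched numerically. The simulation below is an editor's illustration, not Gabriel's data; all numbers are hypothetical (subgroups of 3 consecutive parts, a batch offset each "hour", and the standard d2 = 1.693 constant for subgroups of size 3). Sigma-within from Rbar/d2 misses the batch-to-batch shift, so Cpk comes out higher than Ppk:

```python
import random
import statistics

random.seed(1)

LSL, USL = 9.0, 11.0
D2 = 1.693  # control-chart constant for subgroup size 3

# 50 "batches"; each batch shifts the mean, three consecutive parts per subgroup
subgroups = []
for _ in range(50):
    batch_offset = random.gauss(0, 0.15)   # between-batch (special-cause) variation
    subgroups.append([10.0 + batch_offset + random.gauss(0, 0.10) for _ in range(3)])

all_parts = [x for sg in subgroups for x in sg]
mean = statistics.mean(all_parts)

r_bar = statistics.mean(max(sg) - min(sg) for sg in subgroups)
sigma_within = r_bar / D2                   # short-term: inside-subgroup variation only
sigma_total = statistics.stdev(all_parts)   # long-term: includes batch-to-batch shifts

cpk = min(USL - mean, mean - LSL) / (3 * sigma_within)
ppk = min(USL - mean, mean - LSL) / (3 * sigma_total)

print(f"sigma_within={sigma_within:.3f}  sigma_total={sigma_total:.3f}")
print(f"Cpk={cpk:.2f}  Ppk={ppk:.2f}")  # Ppk < Cpk: the hourly shift inflates total sigma
```

With no control-limit violations in sight, the gap between the two sigmas is exactly the "predictable instability" Gabriel describes.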

Normal vs non normal.

I agree with what you say, it's just that… what is the idea of reporting Cpk? If it is to make a "pairing" between Cpk and actual DPMO, it just doesn't work for capable processes. Besides what was said in the previous point, the process never matches the assumed distribution in the far tails where defects happen. And if it does, it would be very difficult to realize or prove. For a Cpk of 1.5 you have about 4.3 PPM. What sample size would you need to test whether the distribution fits in that zone (beyond 4.5 sigmas)? If you use a PPM or DPMO calculated from Cpk instead of an actual count, then it is just a capability indicator that improves when the process improves and worsens when the process worsens, just as Cpk does, but it is not an indicator of actual PPM unless the process distribution and the mathematical model match in that far-tail zone. Now, if you want to use PPM just as a capability indicator, then you can use Cpk calculated with the normal distribution or with the actual distribution (which will match in the middle zone but not in the defects zone). Any of the three will improve when the process improves and worsen when the process worsens, and none of the three will be directly correlatable to an actual level of defects. Yet using the percentiles method (which matches the S method for a normal distribution only) or another approach to the actual distribution (such as transformations to normal) will give you results that are more comparable between processes. But I really don't have much experience in the non-normal field.

March 12, 2003 at 5:58 pm #83746

I have copies of Juran Institute's and ASQ's training materials.

I have worked in Six Sigma for as long as you, and in the Automotive arena even longer. Long term / short term is used to define solution paths on a daily basis.

I am not anonymous; my email address is put in every time I post. I am not trying to make a name for myself – my intention is to challenge bogus information – thus my interest in your thread.

March 12, 2003 at 6:13 pm #83747

Jim Winings (Participant)

>Long term / short term is used to define solution paths on a daily basis.

How long have you used these to define solutions on a daily basis?

March 12, 2003 at 6:32 pm #83749

Texas MBB (Member)

HOLY SMOKES!!!

I, too, am a former "Motorolan," and thank the good Lord I no longer am! The primary reason for my happiness is people such as this guy "Jim" and his arrogant and confrontational attitude.

I do not even wish, nor do I have time, to read every post in this thread. But if a Motorola-trained belt does not understand why or how Ppk can add value, then he has bigger issues than this forum can answer.

One can argue anything if they put their mind to it. For example, how can anyone say a data set is normal? If there are a million pieces of data (say, in a Monte Carlo simulation), I venture to say you can assume your p-value will be zero. Does that mean you should throw the data out as "non-normal" even though it is quite clear it is normally distributed? Of course not!
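The million-point claim is easy to demonstrate. The sketch below is an editor's illustration, not Texas MBB's data: a sample that is 99% N(0,1) with 1% mild contamination would look perfectly bell-shaped on a histogram, yet at n = 1,000,000 the D'Agostino-Pearson omnibus test returns a p-value of essentially zero.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 99% N(0,1) plus 1% N(0,2): visually indistinguishable from normal on a histogram
n = 1_000_000
data = np.where(rng.random(n) < 0.99,
                rng.normal(0.0, 1.0, n),
                rng.normal(0.0, 2.0, n))

stat, p = stats.normaltest(data)   # D'Agostino-Pearson skewness/kurtosis test
print(f"p-value = {p:.3g}")        # essentially zero at this sample size
```

At this sample size even a trivial departure from normality is "statistically significant," which is exactly why rejecting the data as non-normal would be the wrong conclusion.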

For the record, Jim's attitude, along with folks like Mr. Chris Galvin, is the primary reason a certain Finnish company kicked and continues to kick the little "m's" butt all over the place.

March 12, 2003 at 7:19 pm #83752

Stan,

MANY of us know that Stan is not your real name. We even have a pretty good idea what it is. You’re too transparent…but if you haven’t noticed, your email address doesn’t get published with your posting unless you put it in the message.

Judy

March 12, 2003 at 8:02 pm #83757

Jim Winings (Participant)

>Jim's attitude along with folks like Mr. Chris Galvin

Well, that's the worst thing anyone has said yet! (GRIN) Comparing me to Chris. Geez.

>I do not even wish, nor do I have time to read every post in this thread.

Then how could one form an intelligent opinion? Estimate?

There is an issue that I tried to address before in this thread, and no one has had any comment on it. It is statistical tolerance. I mentioned it in a post to Eileen, so I'm not just now starting this as a topic.

I see everyone trying to predict what short-term and long-term effects there are on a given process. However, no one has addressed the issue of specifications. Would one not have to know what one's specification tolerance is before one starts comparing long-term or short-term anything? The specification tolerance is directly related to what your process capability MUST be, regardless of what unit of measurement one uses or whether one calculates or estimates it. True? But perhaps this is beyond the scope of this forum.

There are several issues.

Ppk and how it is used i.e. Long-Term or Short-Term

Cpk and how it is used i.e. Long-Term or Short-Term

The calculation to obtain either of the above

The mathematical proof that either or both are valid or more valid than the other based on how one views/calculates or estimates the above issues

How drastic are the effects of the above when the correct specification tolerances have been applied

Each process is different. Can one theory be applied as effectively to any given process as to any other?

I did indeed address the question before of whether this should be taken as gospel. And that seems to still be a question.

If your process naturally drifts, then using your X-bar chart you should be able to see the point on the time line when that process starts to drift and how much it drifts over any given time. Once again, your specification tolerances can be used to know the absolute limits your design can stand, and you can adjust, if required, the UCL and LCL using pre-control.
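Jim's drifting-process point can be quantified: fitting a least-squares line to the subgroup means estimates the drift per subgroup. A minimal sketch with made-up numbers (a tool-wear-style drift of 0.02 per subgroup is injected, then recovered from the chart data):

```python
import random
import statistics

random.seed(7)

# Tool-wear style drift: the process mean creeps up 0.02 per subgroup
means = []
for i in range(30):
    subgroup = [10.0 + 0.02 * i + random.gauss(0, 0.05) for _ in range(5)]
    means.append(statistics.mean(subgroup))

# Least-squares slope of subgroup means vs. subgroup index
n = len(means)
xbar = (n - 1) / 2
ybar = statistics.mean(means)
slope = (sum((i - xbar) * (y - ybar) for i, y in enumerate(means))
         / sum((i - xbar) ** 2 for i in range(n)))
print(f"estimated drift per subgroup: {slope:.4f}")  # close to the injected 0.02
```

Knowing the drift rate and the specification tolerance together tells you how long the process can run before an adjustment is forced, which is the pre-control idea Jim alludes to.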

I was taught that six sigma is all about design: product design and process design. If you have all design margins in place at each BOM, kit, or whatever, then when it gets to the next step you should not be starting out with an out-of-control condition or with a non-normal distribution based on your specifications. Sometimes our product and process do not economically allow for such design margins. If not, we must either change our specifications to fit what we can produce or change the process to fit the specifications, +/- the statistical tolerance. Anything else is just wishful thinking, is it not?

Now we can try to predict long-term and short-term capability, but how valid is that unless our specifications are set in a manner that lends itself to such measurements being accurate, i.e., realistic?

Every measurement is subject to a margin of error. Sometimes it is an operator, sometimes it is environment, sometimes it is natural, sometimes it is mathematical. What is the margin of error for the above items, Ppk, Cpk, et al.? Does anyone know? Would one not also have to factor the natural tendencies of any given process into that equation to have confidence in the prediction? Ergo, can we be sure that any given theory is applicable to all processes, given the vast number of processes that there can be for everything that is manufactured worldwide?

I guess it is like the question of whether one throws an outlier out of one's data set. All the books say you can, but what if the outlier happens naturally every x number of pieces, processes, etc.? If you throw it out, how do you find that?

March 12, 2003 at 8:50 pm #83758

John J. Flaig (Participant)

Gabriel, I can see we are in essential agreement. So let me pose a second question. You indicate that you would use Pp to tell you how the process performed in the past (i.e., as a descriptive statistic). Now this is certainly OK, but if I wanted to know how the process performed I would generate a frequency distribution with spec limits displayed and the common measures such as the mean, standard deviation, skewness, and kurtosis. I might even fit a curve to the observed data and use it to estimate the nonconformance rate and net sensitivity of the process.

One could compare Cp with Pp to get a measure of how unstable the process was. But a better approach, in my opinion, would be to compare the long-term sigma estimate with the short-term sigma estimate. The F* test can be used to do this, and tables of significance exist for this test. You can see an example of this type of analysis on my website under DEMO. Let me know if you would like references to the F* test.

Regards,
John

John Flaig
http://www.e-AT-USA.com

March 12, 2003 at 10:01 pm #83769

Gabriel (Participant)

Stan,

“Why have capability metrics that reflect both within and overall. Simple – you want a simple directional tool when the overall is incapable – you want to know if it is a control issue or a technology issue. The path you follow is different.”

Great, we agree! And sure, you presented the point clearly. Thanks!

I said before (in one of those posts) something like this: if Sigma(t) is not equal to Sigma(w), then the process is unstable (even if you see no OOC signal in the chart), and in that case Cpk tells you what the process may deliver (but could not deliver up to now) if you identified and eliminated all those special causes that added the "variation due to special causes" to the "variation due to common causes."

In another post I gave an example of a process (a drifting one) that was predictably unstable and performed well enough (good Ppk), and in that case Cpk told us what the process would deliver if we eliminated the instability, which we would not do because we don't need to and it is expensive. What is somehow implied but not stated is that if the Ppk was not good enough and you saw that the Cpk was good, then you should consider eliminating that special variation, or at least reducing it (for example, with a more frequent adjustment of the drifting process).

That leads to a new question:

What's the point of a customer requiring, for example, Cpk > 1.33? In the previous example, I can have that Cpk and an awful performance, with a lot of nonconforming parts.

Of course, if the customer also requires stability, then the Cpk is valid, but just as Ppk (they are the same in this case).

Wouldn't it be better to require, for example, Ppk > 1.33 and a stable process or, at least, a predictable instability due to known special causes?

March 12, 2003 at 10:35 pm #83771

Gabriel, I haven't read all the posts in this thread, but I think you make a number of good points here. My thoughts… the only reason a customer would demand Cpk > 1.33 without specifying a Ppk is that (1) they don't understand what they are asking for, or (2) they can easily adjust for differences between lots of raw materials and only need consistency within a lot. So it might not matter if there is a lot of overall variation, as long as there isn't much within a batch of material they will use at any one time. I don't have any real-world examples, but they might exist.

Jamie

March 12, 2003 at 10:51 pm #83772

Jim Winings (Participant)

Gabriel;

>you want to know if it is a control issue or a technology issue.

>if Sigma(t) is not equal to Sigma(w) then the process is unstable

BING!!!!!!!! (the little light bulb goes on)

I see what the deal is now. It's just that I don't finish the conversion to Ppk or whatever. I just look at sigma (spec) vs. sigma (actual) and then compare the Cpk/Cp and the delta between the nominal and the USL and the nominal and the LSL, etc. Because I am doing it programmatically, I can run comparison tests between it all a lot faster than a human. But the net result is the same by the time I divide. It indicates if the process is capable, needs to be re-centered, or both. Correct?

NEVER MIND!!
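For what it's worth, the capable / re-center / both logic Jim describes reduces to comparing Cp (spread only) against Cpk (spread plus centering). A hedged sketch; the 1.33 threshold is an assumed acceptance value, not something from Jim's software:

```python
def assess(mean, sigma, lsl, usl, target=1.33):
    """Classify a process from Cp (potential) and Cpk (actual)."""
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)
    if cpk >= target:
        verdict = "capable"
    elif cp >= target:
        verdict = "needs re-centering"  # spread is fine, centering is not
    else:
        verdict = "not capable"         # spread itself is too wide: technology issue
    return cp, cpk, verdict

print(assess(10.0, 0.20, 9.0, 11.0))  # Cp = Cpk = 1.67 -> capable
print(assess(10.5, 0.20, 9.0, 11.0))  # Cp = 1.67, Cpk = 0.83 -> needs re-centering
```

When Cp is high but Cpk is low, only the centering is wrong; when both are low, no amount of re-centering will help, which is the control-issue vs. technology-issue split discussed above.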

March 13, 2003 at 2:39 am #83774

Gabriel,

I agree. I think the issue is that both customers and suppliers talk out of both sides of their mouths.

Automotive suppliers have been required to meet these kinds of numbers for years, but everybody knows they don't. The problem? We manage design poorly; we release drawings to meet a date where people get rewards. We talk about 18-month design cycles and achieve 30 months, with the final 12 months compressed into 6.

The issue is to stop the lies which begin with how people get rewarded and punished. We get punished for telling the truth and rewarded for releasing a drawing with no content.

But I digress – until we start doing instead of talking, we will always be in theoretical debates over tools instead of fixing systems. The need for all the things called Six Sigma and Lean would be reduced by an order of magnitude if we had a defined and respected method of doing business.

March 13, 2003 at 2:43 am #83775

Jim,

Sigma total is always greater than sigma within. Always. Go analyze your own data and you will see it even if SPC has not shown anything! Your premise of perfect control of centering has no basis in reality.

March 13, 2003 at 4:21 am #83779

Jim Winings (Participant)

Stan;

>Sigma total is always greater than sigma within.

Actually, I have test data that shows it is not. Like I said, I didn't discover anything about six sigma; I was just doing what my bosses told me to do. But don't we all, most of the time? It may be that I didn't go into enough detail for you to see what I am talking about. I'm starting to think that the issue here may be mostly a difference in terminology. I'm sure I'm doing the same stuff; it's just that at some point someone hung an index on it to make it more readable to a human. This would make sense. Remember that when I had six sigma training there were no black belts. We were literally the first people ever to have six sigma training. Almost certainly among the first 50 or fewer people. Six Sigma as a methodology may well have still been in the "how the hell do we do this" stage.

I do need to do more research to compare the terminology used now with what I have. The 3 new six sigma books I have don't actually seem to help much. The more I read (and some of the G. man's posts I had to read several times – no pictures, you know), the more I'm convinced it is pretty close to being the same stuff, just not called what you are calling it today or referred to in the same manner. This really makes me feel silly. But it may go to show that no matter what technology one may have, as long as a human is involved to interpret what is said, there is a chance for a communications problem.

You wouldn't believe what I thought was going on. Furthermore, I was having a real hard time trying to figure out why no one could understand my point. Duh!

Now I'm not convinced, if I'm right, that the terminology is what I would have used. I have to see if the long-term you people are talking about correlates to the long-term terminology already used in my apparently ancient textbooks. If it does not correlate, then perhaps it should have been called something different. But this discussion has enlightened me. And I'm not convinced that someone should take a unit of measurement that was around for 20 years and redefine it.

The same darn thing happened to me when Windows 95 came out. I had been programming in C and Windows 3.x, and all of a sudden everything I knew was wrong. Well, not literally everything, but a lot of it. I hate when that happens.

March 13, 2003 at 6:04 am #83780

Gabriel (Participant)

“>Sigma total is always greater than sigma within.

Actually, I have test data that shows it is not.”

You don't. Re-read the definitions of sigma total and sigma within. Within = due to common causes. Total = within + due to special causes. "Due to special causes" must be zero or more in theory, and more than zero in real life (there is no perfectly stable process), even though it can be very small and therefore negligible (what we call a stable process).

If you got sigma within greater than sigma total, then it is just wrong.

What you probably got is an “estimation” of sigma within that was greater than the “estimation” of sigma total.
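Gabriel's distinction between the true sigmas and their estimates is easy to verify by simulation: on a perfectly stable process, the Rbar/d2 estimate of sigma-within lands above the overall-s estimate of sigma-total in a large fraction of studies, purely from sampling noise. An editor's sketch (assumed setup: 20 subgroups of 5, d2 = 2.326 for subgroup size 5):

```python
import random
import statistics

random.seed(3)
D2 = 2.326  # control-chart constant for subgroup size 5

def one_study():
    """One capability study on a perfectly stable N(10, 0.1) process."""
    subgroups = [[random.gauss(10, 0.1) for _ in range(5)] for _ in range(20)]
    r_bar = statistics.mean(max(sg) - min(sg) for sg in subgroups)
    s_within = r_bar / D2
    s_total = statistics.stdev([x for sg in subgroups for x in sg])
    return s_within > s_total

trials = 2000
frac = sum(one_study() for _ in range(trials)) / trials
print(f"estimated sigma_within exceeded sigma_total in {frac:.0%} of trials")
```

Since the true sigmas are equal here, roughly half of the studies report the "impossible" ordering, which is exactly the signal-buried-in-noise situation Gabriel describes.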

Imagine the following analogy. You measure a gauge block with a dial indicator several times and, because of the measurement variation, you get a distribution with average 10.01, a value that you take as the estimation of the actual thickness of the gauge block. Then you add a thin film to that gauge block and measure it again. Now you have another distribution of measurement results, with average 9.99, which you take as the estimation of the thickness of the gauge block + film. Would you conclude from that that the thickness of the gauge block + film is less than the thickness of the gauge block alone? I would conclude that the thickness of the film is small, and that the increase it introduces in the total thickness of gauge + film is smaller than the measurement variation (the "signal" is covered by the "noise").

March 13, 2003 at 6:23 am #83781

Gabriel (Participant)

John,

“…you would use Pp to tell you how the process performed in the past (i.e., as a descriptive statistic). Now this is certainly OK, but if I wanted to know how the process performed I would generate a frequency distribution with spec limits displayed and the common measures such as the mean, standard deviation,…”

First point. It is not "descriptive statistics" as I understand it. Descriptive statistics is when you only talk about your actual data. Inferential statistics is when you use your actual data to estimate information about the population your sample came from. For example, saying that the sample has an average of 10 is descriptive. Saying that the most probable value for the population average is 10, based on the same data, is inferential.

In that way, both Cp/Cpk and Pp/Ppk are inferential. You don't measure 100%, but take a sample and then draw conclusions about the population. The difference is that in some cases that population is a batch (no prediction) and in other cases the population is "the parts delivered by process A" (prediction).
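As a sketch of the Pp/Ppk estimation Gabriel describes – sample mean and overall sample standard deviation compared against the tolerance, with no subgroups required – using hypothetical data and assumed spec limits of 8 to 12:

```python
import statistics

def pp_ppk(sample, lsl, usl):
    """Estimate Pp/Ppk from a plain sample: no subgroups, no control chart."""
    mean = statistics.mean(sample)
    s = statistics.stdev(sample)  # overall sample standard deviation
    pp = (usl - lsl) / (6 * s)                      # tolerance vs. 6S
    ppk = min(usl - mean, mean - lsl) / (3 * s)     # nearest limit vs. 3S
    return pp, ppk

# Hypothetical data: nominal 10, spec 8-12
sample = [9.8, 10.1, 10.4, 9.9, 10.0, 10.2, 9.7, 10.3, 10.0, 9.6]
pp, ppk = pp_ppk(sample, lsl=8.0, usl=12.0)
print(f"Pp={pp:.2f}  Ppk={ppk:.2f}")
```

With the sample centered on nominal, Pp and Ppk coincide; any off-centering would pull Ppk below Pp. In practice a histogram is plotted alongside to check the shape of the distribution.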

Second point. Basically, Pp/Ppk does what you said you would do. You take a sample; no subgroups needed, no control chart needed (if you've got data in a control chart, then you can use it instead of taking and measuring a new sample, but it is not mandatory). Then estimate the batch's sigma using the sample standard deviation and the batch's average using the sample average. Then compare the specification against 6S, and the distance between the average and the closer specification limit against 3S. Usually a histogram is plotted to check the shape of the distribution.

March 13, 2003 at 6:39 am #83783

Gabriel (Participant)

Stan, by the way…

“The alpha risk associated with each sample is .27%”

This is for “one point beyond the control limits” only and with one chart only.

When you add the other criteria, and both charts together (such as Xbar and R), the rate of false alarms increases dramatically.

I did a simulation of a perfectly stable process (simulated by random generation, of course), with sheets of 30 points each, with both Xbar and R charts, using subgroups of 7 and using the classical 6 OOC criteria for both charts.

I found that about 1/3 of the sheets showed 1 or more OOC signals.
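Gabriel's simulation is easy to reproduce in outline. The sketch below uses my own, simpler assumptions (known limits, 30-point Xbar charts, and only two of the classic rules: one point beyond 3 sigma, and 7 in a row on one side of the centerline), yet already shows the per-chart false-alarm rate climbing well above what the 0.27%-per-point figure alone suggests:

```python
import random

random.seed(5)

def chart_has_ooc(points, use_run_rule=True):
    # Rule 1: one point beyond the 3-sigma control limits
    if any(abs(p) > 3 for p in points):
        return True
    if use_run_rule:
        # Rule 2: 7 consecutive points on the same side of the centerline
        run, last_side = 0, 0
        for p in points:
            side = 1 if p > 0 else -1
            run = run + 1 if side == last_side else 1
            last_side = side
            if run >= 7:
                return True
    return False

trials = 4000
charts = [[random.gauss(0, 1) for _ in range(30)] for _ in range(trials)]
rate1 = sum(chart_has_ooc(c, use_run_rule=False) for c in charts) / trials
rate2 = sum(chart_has_ooc(c, use_run_rule=True) for c in charts) / trials
print(f"rule 1 only:  {rate1:.1%} of stable charts flagged")
print(f"rules 1 + 2:  {rate2:.1%} of stable charts flagged")
```

With all 6 criteria on both an Xbar and an R chart, as in Gabriel's run, the rate rises further still, which is consistent with his roughly-1-in-3 result.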

And, by the way, the fact that "…control charts don't readily detect small movements of the process…" is related to the beta risk.

March 13, 2003 at 7:15 am #83785

Gabriel,

You are right.

I also apologize for any sarcasm I have sent your way. You are quickly becoming my hero after your last post to Jim.

March 13, 2003 at 2:15 pm #83801

Jim Winings (Participant)

>If you got sigma within greater than sigma total, then it is just wrong

We didn't use the phrases "sigma within" or "sigma total." Now maybe they are the same thing, but maybe they are not. I have to dig into it more.

At this point I don't want to go into detail. Like I said, it's not my invention; it is what Motorola wanted the software to do. In fact, their internal software does the same thing. I know because mine is modeled on it. The software I wrote for them was made public domain, so I can do it verbatim legally. In a day or so, after I get caught up on work, I'll explain more. We broke each measurement down and then compared them to various places within a range of sigmas and curves.

Maybe someone decided your way was better at some point in time, or easier for one to do without a computer program. And there is another story I know about yesteryear that makes that feasible. Before what I like to call the belted thing happened – which was very shortly after I took my money from Motorola and ran – they didn't have anyone who could even maintain the software I wrote. It was written in QuickBASIC 4 with calls into the DOS API. Some undocumented calls. Which means it broke forever as soon as Windows 95 came out. Back then it was complex because you could only get 64K of executable program and data into memory at the same time.

I think that as we uncover the mystery here, we will discover that the way was changed to make it more user-friendly to humans, but the original software version is more sensitive. Perhaps too sensitive. That could be another story. My program doesn't look at an index. It breaks the net results down into smaller pieces, i.e., each one of the math functions, and then compares them to x, y, and z, then puts the results into human verbiage for an incoming inspector or a minority supplier's inspector to understand. Remember, no green belts existed then.

Actually, I haven't looked at that part of the program for a while. I've been adding a Cf chart and fixing some bugs, like keeping end users from entering characters into numeric fields. But it looks like I need to revisit that and hang a Ppk number onto one of the ends of the sentences just to appease potential customers.

Or I could rip the guts out of the software and just do estimates and look-ups like the 3 new six sigma books I have. How do you guys and gals feel about getting DPMO to look up sigma, or however they are doing it?
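For reference, the DPMO-to-sigma lookup Jim mentions is a one-liner under the common 1.5-sigma-shift convention. This is an editor's sketch of that usual table convention, not of Jim's software or any particular book's procedure:

```python
from statistics import NormalDist

def sigma_level(dpmo, shift=1.5):
    """Long-term DPMO -> short-term 'sigma level' under the 1.5-sigma-shift convention."""
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + shift

def dpmo(sigma, shift=1.5):
    """Inverse lookup: sigma level -> long-term defects per million opportunities."""
    return (1 - NormalDist().cdf(sigma - shift)) * 1_000_000

print(f"{sigma_level(3.4):.2f}")  # 3.4 DPMO -> ~6.00 sigma
print(f"{dpmo(6.0):.1f}")         # 6 sigma -> ~3.4 DPMO
```

The familiar "3.4 defects per million = six sigma" pairing falls straight out of the shifted normal tail, which is why the printed tables in the newer books are just this formula evaluated at round numbers.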

Me

March 13, 2003 at 2:41 pm #83806

News flash to Jim – Motorola uses Minitab just like 90% of the rest of us.

I had a copy of your A17 software somewhere in a life far, far away, and it was useful at the time. This is like a 50-year-old bragging about being a high school football star – who cares?

March 13, 2003 at 3:57 pm #83809

Jim Winings (Participant)

News flash to Stan – you can't train labor to use Minitab if you have a high turnover rate. The 2 programs are used for different things in different areas of quality processing. I'm not claiming, nor have I ever claimed, that my software is better or does more than Minitab, but it sure is 99% easier to use. Why? It doesn't do as much. The less anything does, the easier it is to run. Also, the fewer lines of code, the less chance for errors. You are comparing apples to oranges.

My girlfriend, who is a BSRN, couldn't run Minitab even with instructions written out.

March 13, 2003 at 4:56 pm #83811

Jim Winings (Participant)

And ah, Stan, the price of our software is only $150.00. So as you can see, you really are comparing apples to oranges. I saw Consumer Reports one time compare an IBM to a Commodore 64. That was one of the silliest things I ever saw.

March 13, 2003 at 5:58 pm #83818

John J. Flaig (Participant)

Gabriel, now we have a problem. If I computed Pp, it would be with ALL the data in the population. Please check the definition (hence it is a descriptive statistic). Pp is NOT an estimator; Pp^ is. Second, I thought we agreed previously that Pp could NOT be used as an inferential statistic (i.e., it does NOT predict anything).

John

March 14, 2003 at 4:03 am #83828

Gabriel (Participant)

You are right, John. I made the mistake I said to avoid.

If you calculate Pp, then you are using all the individuals in the batch, so you are describing the batch. But if you estimate the Pp of the batch by estimating Mu(batch)^ = Xbar(sample) and Sigma(batch)^ = S(sample), then you are doing inferential statistics.

Maybe we are misunderstanding the word "predict." I took it as "to say in advance that (something) will happen, forecast," which happens to be the definition of "predict." Maybe you are taking it as "to reach (an opinion) from facts or reasoning," which happens to be the definition of "infer." I had to look in the dictionary because English is not my native language.

For example, if I take a random sample of 10,000 American men between 30 and 40 years old, weigh them, and use the data to estimate the distribution of weight in the population of American men between 30 and 40 years old, I am doing inferential statistics, am I not? Would you say I am predicting anything?

What I had said is that Pp does not require stability. If the process is unstable, neither Pp nor Cp lets you predict the future. If the process is stable, then Cp = Pp, so you can use either of them to predict the future.

But usually for Pp, and always for Cp, you work with samples, and then you don't get the actual values of Pp and Cp but estimations. Because you are estimating the Pp at which the process performed, or the Cp at which the process performs, using data from a sample, and not calculating the actual values using all the individuals (which is impossible for Cp and seldom done for Pp), in both cases you are doing inferential statistics.

Even if you are using all the individuals in a batch to calculate the actual Pp, you still have measurement variation, so the value of each individual is, in fact, an estimation of its true value. So you are inferring again. But this is splitting hairs.

March 14, 2003 at 5:16 am #83829

Sorry about your girlfriend – mine can do Minitab in her sleep. But then again, she understands Ppk.

March 14, 2003 at 5:26 am #83830

I am pretty sure that John can solve his problem if he does not use the data for anything – that is pretty much what I take from his posts.

John, your clarification of Pp^ has clarified my whole thinking on the subject.

March 14, 2003 at 1:53 pm #83837

Nonsense!! There are no probability values associated with control charts. There are no alpha and beta risks associated with control charts. Control charts are not tests of hypotheses. Read Walter Shewhart's book – Economic Control of Quality of Manufactured Product – and/or Dr. Deming's "Out of the Crisis."

Eileen

- AuthorPosts

The forum ‘General’ is closed to new topics and replies.