Ppk Not Needed Since About 1980
 This topic has 159 replies, 23 voices, and was last updated 19 years, 8 months ago by John J. Flaig.


March 14, 2003 at 2:52 pm #83842
Gabriel
Ok, it may not be a formal hypothesis test, but it looks very alike. Let's see:
You have a hypothesis: the process is stable, i.e. it delivers the same distribution over time, and same distribution = same average + same variance + same shape. To simplify, let's take part of the hypothesis:
The process delivers the same average Mu over time. This hypothesis can never be proven true, so it looks very much like an Ho.
You have an alternate hypothesis: now the process average is Mu1, different from Mu. It looks like an Ha.
You know that a point beyond the control limits is very unlikely if the average is Mu, so if you find one you reject that Mu1=Mu and suspect that Mu1 is different from Mu. It sounds as if you were rejecting Ho for H1, and as if the small chance of saying that Mu has changed when it hasn't were the alpha risk.
If you don't see an OOC point, then you don't have enough evidence to say with enough confidence that Mu has changed.
But how likely is it that Mu has changed and yet you don't see an OOC signal? It depends on the amount of change. For a very small change it is very probable that you don't detect it. For a big change it is very unlikely that you miss it. Furthermore, for a given change in Mu, the chance of missing it decreases with the subgroup size. The chance of saying that you have no evidence that Mu has changed, when it has, looks very much like a beta risk.
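This beta-risk behavior can be put in numbers. A minimal sketch (my own illustration, assuming a normal process, +/-3-standard-error Xbar limits, and only the single point-beyond-limits rule):

```python
from math import sqrt
from statistics import NormalDist

def miss_probability(shift_sigmas: float, n: int) -> float:
    """Chance that a subgroup mean of size n stays inside the
    +/-3-standard-error control limits after the process mean has
    shifted by `shift_sigmas` process standard deviations."""
    z = NormalDist()
    d = shift_sigmas * sqrt(n)  # the shift measured in standard errors
    return z.cdf(3 - d) - z.cdf(-3 - d)

# For a 1-sigma shift, the chance of missing it shrinks as the
# subgroup size grows (the "beta risk" in the analogy above):
for n in (2, 5, 10):
    print(n, round(miss_probability(1.0, n), 3))
```

With no shift at all, the same function returns about 0.9973, the familiar in-control coverage of 3-sigma limits.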
I like the analogy.

March 14, 2003 at 3:45 pm #83847
As I said before, you need to read and study the work done by Dr. Shewhart. Since he developed and established the theory of the control chart, I feel he is more qualified on this topic than you. I am sorry, but you are so wrong.
Eileen

March 14, 2003 at 4:03 pm #83850
Charles H
Eileen:
I never like to see someone stand alone when they are right. Hypothesis testing and probabilities have absolutely zero, zilch, nada to do with control charting.
Charles H.

March 14, 2003 at 5:29 pm #83856
John J. Flaig
Gabriel, I think this is a very good discussion we are having because it sheds light on a lot of misunderstood issues. Let me make a couple more observations.
1. You need to be sure to use the "hat" symbol when you're talking about an estimator.
2. Here is a very important point. Pp^ is NOT an estimator of Pp. I know this may come as a shock to you given your notes to me, so let me explain. In order for an estimator (sample value) to predict the parameter (population value), the estimator MUST be a RANDOM variable. This means it has a distribution, so we can say the population value lies between certain limits derived from the distribution of the RANDOM variable. The problem is that Pp^ is NOT a RANDOM variable. Pp^ has both SPECIAL and RANDOM causes of its variation. Hence, it does NOT have a fixed random distribution. Therefore, Pp^ is NOT a RANDOM variable and CANNOT be used to predict Pp.
After you recover from the shock, let me know if you understand this VERY important concept in Statistics (and pass it on to your friends).
Regards,
John

March 14, 2003 at 5:48 pm #83857
Jim Winings
I agree with Eileen. (But apparently, consider the source: me.)
Besterfield says (and for proof of this, he says, see Juran's Quality Control Handbook):
Averages are used on control charts rather than individual observations because average values will indicate a change in variation much faster.
Even though, to me, it would seem that a sample of 2 in each subgroup would show more of a change in the average compared to 10 samples per subgroup, because of smoothing.
I think about the best one can do with a control chart is to observe the WECO rules, which represent trending. If you need them tighter, then change the WECO rules slightly: for example, instead of eight successive points falling in Zone C or beyond, make it 5. We also use a best-fit line on our control charts, though how helpful that is depends on the data, which goes without saying.
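For what it's worth, the arithmetic on that run rule is simple. A rough sketch (my own, assuming independent points from a symmetric, in-control process, so each point is 50/50 on either side of the centerline):

```python
# False-alarm chance of the run rule "k successive points on the
# same side of the centerline," for an in-control symmetric process
# with independent points (each side has probability 1/2).
def run_rule_false_alarm(k: int) -> float:
    return 2 * 0.5 ** k  # times 2 because either side triggers it

print(run_rule_false_alarm(8))  # 0.0078125
print(run_rule_false_alarm(5))  # 0.0625
```

So tightening the rule from eight points to five raises the false-alarm chance of that window roughly eightfold, which is the trade-off behind any such tightening.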
Me

March 14, 2003 at 7:57 pm #83864
I like Gabriel's interpretation, or analogy. The OOC test for any sample mean beyond 3 standard errors from the mean sure seems to use all the same components as a 1-sample t-test: there is an alpha risk (1.997), you can compute a beta risk (you have a sample size and delta/sigma). You have a target (the process mean) you are testing against, a mean and a variance. You have a null hypothesis (sample mean = process mean) and an alternative hypothesis (sample mean different from the process mean). You are asking whether this sample could have come from the same population. If your answer is to reject the null, then you conclude the process has changed. How is this really so different that you say "zilch, nada," etc.? Does one really need to read several volumes to explain the difference? I think Gabriel did a nice job of justifying his thoughts, but I really haven't heard a counterargument.
One might be able to show it isn't mathematically the same as a t-test (though I'm not so sure of this), but I can't imagine how one could say it isn't a hypothesis test.
Jamie

March 14, 2003 at 11:33 pm #83873
Charles H
Dr. Deming talks about this in his last book, The New Economics, on pages 176-177: "It is wrong (misuse of the meaning of a control chart) to suppose that there is some ascertainable probability that either of these false signals will occur. We can only say that the risk to incur either false signal is very small. (Some textbooks on the statistical control of quality lead the reader astray on this point.)
It is a mistake to suppose that the control chart furnished a test of significance – that a point beyond a control limit is ‘significant.'”
The false signals refer to the two types of mistakes that can be made with control chart analysis, as defined by Shewhart (page 174):
“Mistake 1: To react to an outcome as if it came from a special cause, when actually it came from common cause variation.”
“Mistake 2: To treat an outcome as if it came from common causes of variation, when actually it came from a special cause.”
Went looking for my copy of Out of the Crisis for further definition on the topic, but can’t find it (what I get for moving over the holidays).
In reading Deming, one will note he never uses the term "probability" in association with control charts. He will say a process is "predictable" if it is stable and in control, but there are no probabilities associated with it.
Hope this helps the discussion.
Charles H.

March 14, 2003 at 11:39 pm #83874
Of course there are probabilities associated with control charts, unless of course you believe there are no probabilities associated with +/- 3 sigma limits.
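Those probabilities are easy to put numbers on. A rough sketch (my own, assuming a normal, in-control process):

```python
from statistics import NormalDist

# Chance that a single plotted point falls outside +/-3 sigma limits
# when the process is in control and normally distributed:
p_single = 2 * (1 - NormalDist().cdf(3))
print(round(p_single, 4))  # 0.0027

# Over 100 consecutive in-control points, the chance of at least one
# such false alarm (assuming independent points):
p_100 = 1 - (1 - p_single) ** 100
print(round(p_100, 2))  # 0.24
```

That second number is one way to see why false alarms do occur on perfectly stable processes, a point made later in this thread about the Red Beads Experiment.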
March 14, 2003 at 11:47 pm #83875
John,
The only shock I receive from your posts (all of them) is how little value can be derived from this theoretical diatribe.

March 14, 2003 at 11:48 pm #83876
Charles H
>If your answer is reject the null then you conclude the process has changed.<
In rereading Jamie's post, I noted this statement, which slipped past me. It is not correct. You cannot conclude the process has changed based upon an out-of-control condition. All you can do is ask whether the process has changed; you need to investigate and determine the root cause, if one is present. False alarms on control charts do happen, even in the Red Beads Experiment, which is a very controlled and stable process.
Charles H.

March 15, 2003 at 12:14 am #83877
Charles H
Correct, Stan. Though they have their basis in probabilities, there are no probabilities associated with control limits or with the reason for Shewhart choosing +/- 3 standard deviations (a potentially subtle but very important distinction). He did so because it was the most economic location for them, limiting the possibilities of making mistakes 1 and 2 and their subsequent economic impacts.
Charles H.
March 15, 2003 at 2:33 am #83878
John J. Flaig
Stan, since you seem unable to understand the significance of this issue, I can only assume that you need a refresher course in Statistics. Let me suggest you talk to someone in the Statistics department of your local university, or see Drs. Montgomery, Kotz, Lovelace, Wheeler, or Deming's comments on this subject. Then you might not think it is of such "little value."

March 16, 2003 at 2:22 pm #83891
There is statistical significance and practical significance.
This has no practical significance.

March 16, 2003 at 11:53 pm #83892
John J. Flaig
Stan, on what basis do you make the claim that Pp has "statistical significance and practical significance"? Of course everyone is entitled to their opinion, but you have made a claim WITHOUT DATA and WITHOUT STATISTICAL PROOF to support it. Do you have a statistical proof, or can you cite recognized statistical experts that agree with you?
As a scientist, I'll change my position as soon as you can do two things:
1. Show that the mathematical proof that I provided is incorrect, and
2. Convince the following list of renowned Statisticians that they are also wrong (Kotz, Montgomery, Lovelace, Johnson, Wheeler, Khorasani, and Gunter). I'd be happy to send you their email addresses so you can send them your "proof." Just let me know when you are ready to submit it.

March 17, 2003 at 2:53 am #83894
Gabriel
"…to me it would seem that a sample of 2 in each subgroup would show more of a change in the average compared to 10 samples per subgroup because of smoothing."
That's it. This summarizes your understanding of the subject.

March 17, 2003 at 3:01 am #83895
Gabriel
I insist. SPC might not be a formal hypothesis test, but I like the analogy, even based on what you posted to reject it:
“We can only say that the risk to incur either false signal is very small” (Sounds like Alpha risk?)
“The false signals refer to the two types of mistakes that can be made with control chart analysis” (Remember Type I and Type II errors?)
“Mistake 1: To react to an outcome as if it came from a special cause, when actually it came from common cause variation.” (Looks like Type I?)
"Mistake 2: To treat an outcome as if it came from common causes of variation, when actually it came from a special cause." (Looks like Type II?)

March 17, 2003 at 3:17 am #83897
Charles H
Application with knowledge provides value.
Application without knowledge is guessing.
I provided information and sources. You provide analogies without information or sources. Insist if you must, but you are wrong. Dr. Shewhart and Dr. Deming would tell you "don't mess with it; it works." They wrote the book, so, with all respect to you, Gabriel, I'll listen to them.
Regardless of whether I agree with you, your posts always add to the discussion. Thanks for your contributions to the forum.
Charles H

March 17, 2003 at 3:49 am #83898
Jim Winings
I thought that is what I said!
March 17, 2003 at 3:59 am #83899
Gabriel
"Pp is NOT an estimator, Pp^ is."
“Pp^ is NOT an estimator of Pp”
These phrases are from your two previous posts. So tell me: if Pp^ is NOT an estimator of Pp, but Pp^ IS an estimator, then Pp^ is an estimator of ______ (fill in the blank).
By the way, because Pp=Tolerance/Sigma(total) and Pp^=Tolerance/S(total), are you saying that S(total) is not an estimator of Sigma(total)?
By the way, for a given batch, Sigma(total) is the population's (batch's) standard deviation as defined in any book, and S(total) is the sample standard deviation as defined in any book. So you mean that the standard deviation of a sample from a population is not an estimator of the standard deviation of the population?
By the way, give me a batch and I will be able to take infinite random and independent samples of size n from it. For any of these samples you can calculate Tolerance/S, which happens to be Pp^. That will give you a distribution of Pp^ for samples of size n taken from that batch. So Pp^ IS a random variable with a fixed distribution for a given batch. And Pp^ is used to estimate the Pp for that batch.
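That resampling argument can be simulated directly. A sketch of my own (using the conventional Pp^ = Tolerance/(6*S) form, a made-up normal batch, and a hypothetical tolerance width; sampling with replacement keeps the draws independent):

```python
import random
import statistics

random.seed(42)

# A finished batch: time is no longer a variable, just a fixed population.
batch = [random.gauss(10.0, 1.0) for _ in range(5000)]
tolerance = 6.0  # hypothetical USL - LSL

def pp_hat(sample):
    """Pp^ = Tolerance / (6 * S) computed on one random sample."""
    return tolerance / (6 * statistics.stdev(sample))

# Repeated samples of size n from the batch give an empirical
# sampling distribution for Pp^ -- i.e., it behaves as a random
# variable with a fixed distribution for this batch.
pp_hats = [pp_hat(random.choices(batch, k=50)) for _ in range(2000)]
print(round(statistics.mean(pp_hats), 2), round(statistics.stdev(pp_hats), 2))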
By the way, if the process is unstable, then you cannot predict what will happen in the next batch, neither with Pp^ nor with Cp^. If the process is stable, then Cp=Pp, and you can use either Pp^ or Cp^ to predict what will happen in the next batch.
By the way, according to your reasoning, Xbar (calculated, for example, with the same sample you used to calculate Pp^) is not an estimator of the batch's average either. It is also affected by special causes.
The key is that, once the batch is made, you have eliminated "time" as a variable, and then there is no distinction between special and common causes. It is just a batch with a distribution you don't know but want to estimate. And that's what Pp^ does.

March 17, 2003 at 4:32 am #83901
Jim Winings
Rhetorical question:
Have you ever been really, really, really, really, really, really, really, really, really, really, really, really, sorry that you did something?
March 17, 2003 at 1:36 pm #83914
Absolutely. But don't be sorry; this was good.

March 17, 2003 at 1:47 pm #83916
Mistakes 1 and 2: do you mean Type I and Type II errors?
Limited possibilities?
Type I & II errors always have probabilities associated with them. Possibilities / probabilities – what is the difference?
I agree that Shewhart did not pick 3 sigma for the probability associated with it, but there is a probability associated with it.

March 17, 2003 at 1:55 pm #83917
Johnboy,
What a high-minded challenge.
Let's see: the automotive industry bases its APQP process on the demonstration of capability. Every properly trained Six Sigma professional uses capability. AIAG thinks so much of this much-ado-about-nothing Pp^ that they don't even mention it.
We’ll just keep on using these estimates while you high minded intellectuals talk about theory.
By the way, I respect Box, Montgomery, and Wheeler, but they have altered their stands to remain saleable. Just look at Box's stand on Taguchi in the '80s, and then look at the video series he did in the '90s, which featured a fully saturated L8.

March 17, 2003 at 1:56 pm #83918
Gabriel,
Well said.

March 17, 2003 at 3:05 pm #83923
John J. Flaig
Stan, the expected response from you:
1. You resort to demeaning insults.
2. You provide only anecdotal and irrelevant data. The statisticians that I listed have not changed their minds. Since you like Dr. Box, why don't you send an email to him? I'm sure he'd be happy to enlighten you.
3. You can't find one statistician who supports your claim of statistical significance for Pp.

March 17, 2003 at 3:20 pm #83926
John,
What did you consider demeaning? High minded intellectuals? I doubt that most disagree.
I know George, but put his and Doug's and Don's email addresses out here for everybody.
I know loads of statisticians who use Cp and Ppk on a daily basis for decision making. It is pretty arrogant to think you speak for all statisticians.

March 17, 2003 at 3:44 pm #83931
Charles H
Stan:
My experience has been that Dr. Deming and Dr. Shewhart were both very accurate and precise in the language they used. They did not say “Type I and Type II Error” for a reason. Now, why do you think that is? Was it a mistake on their part – careless use of language? Or were they trying to make an important distinction?
Charles H.

March 17, 2003 at 4:41 pm #83934
Charles,
Please help me understand the important distinctions. What are mistakes 1 and 2?

March 17, 2003 at 5:31 pm #83937
This just reinforces my opinion that the probability of two statisticians agreeing with one another is infinitesimally small. It never ceases to amaze me that statisticians can be so certain in their opinions about the science of uncertainty. I've had the pleasure on a number of occasions of watching the fur fly when our statistical friends debate classical vs. Taguchi vs. Shainin DOE techniques. They're almost as entertaining as Crossfire or Hannity and Colmes (or Aykroyd and Curtin).
March 17, 2003 at 5:37 pm #83938
Charles, I wanted to thank you for your references. To be fair, I need to really research "the experts." I must say, though, that your concluding I'm wrong almost seems to further my point (but I doubt this will add much to convincing you). Sure, there is always a chance of being wrong when you accept the alternative hypothesis. That chance is called alpha risk, the risk we are willing to accept. If you don't conclude there is a high probability something has changed, then why go investigate it? Would changing the OOC limits not change this risk (say, to 2 standard deviations instead of 3)? Since I'm not enough of a math whiz to be able to derive the probabilities (I'm assuming the true probabilities exist), I can't say anything more than Gabriel has: it is an analogy. The only thing I can add is that I'm not one to easily accept something just because an "expert" (even the original inventors) says it's so. Now, if they provide empirical evidence or mathematical proof, that's another story (and if I do my research I may find that they do). You can't post entire books, but what you did post certainly supported your point. Thanks for the discussion.
Jamie

March 17, 2003 at 5:39 pm #83939
Jim Winings
Same thing with economists. I once worked with a couple of economists, and one of them said that if you put 5 economists in a room and ask them all the exact same question, you will get 5 different answers, and none of them may be correct.
March 17, 2003 at 5:59 pm #83942
Kind of reminds me of the old joke:
How many statisticians does it take to solve a problem?

March 17, 2003 at 6:03 pm #83943
Jim Winings
(I've heard this in several different forms)
OK, I give up, …
How many statisticians does it take to solve a problem?
March 17, 2003 at 6:15 pm #83944
Stan, I'm assuming you are referring to the Type I and Type II errors that were posted by Charles. If so (in non-statistical terms): Type I, we conclude a difference, i.e. we say something has changed when indeed it has not. Type II, we conclude there is no difference, i.e. nothing has changed when indeed it did.
A question: I'm assuming we have been talking about the single test (or OOC condition) where we find a mean outside of 3 standard deviations. Is it possible that the language is very specific, in that it does not refer to Type I or Type II error because this is just one of many tests (there are, what, 8 common ones?), many of which look at time-dependent relationships? I would say that if we collectively look at the 8 tests, then we cannot make the same analogies between control chart tests and hypothesis testing or t-tests that have been discussed. I'm applying my thoughts to only the one most standard test.
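One reason the single-test analogy does not carry over cleanly to the full rule set: stacking detection rules inflates the overall false-alarm rate. A rough sketch of my own (hypothetically treating two rules as independent on an in-control normal process; rules applied to the same data are actually correlated, so this is only an approximation):

```python
# Illustrative per-point false-alarm rates for two common rules,
# assuming a normal, in-control process:
p_3sigma = 0.0027        # single point beyond +/-3 sigma
p_run8 = 2 * 0.5 ** 8    # 8 successive points on one side of center

# If (hypothetically) the rules fired independently, the chance that
# at least one of them signals at a given point:
p_any = 1 - (1 - p_3sigma) * (1 - p_run8)
print(round(p_any, 4))  # 0.0105
```

Roughly four times the single-rule rate, even with just two rules, which is why the alpha of the whole scheme is not the alpha of any one test.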
Jamie

March 17, 2003 at 6:22 pm #83945
How many statisticians does it take to solve a problem? After attempting to read through the 130+ posts in this thread, I'd say the number has to be significantly less than the number of non-statisticians attempting to solve the same problem.
:)

March 17, 2003 at 6:27 pm #83946
billybob
Hello folks,
Ok, you finally drew me into this thread. It's always seemed weird to me that the only way a statistician could say something was good was to say "fail to reject the null hypothesis." For cripes' sakes, we're not dealing with the French here, or the UN; if it's good, say it's good. If it's bad, say it's bad and move on!
And the Dixie Chichs suck, and i don’t care if its spelled wrong!
Later,
Billybob

March 17, 2003 at 6:29 pm #83947
Jim Winings
hehehehehehehehehehehehehehehehehehehehehehehehehehehehehehehehehehehehehehehehehehehehehehehe
March 17, 2003 at 6:30 pm #83948
Jim Winings
Ah, I wanted to ask, was that an estimate???
:))

March 17, 2003 at 7:32 pm #83949
Brandon
Hey Billybob,
How does it feel to be famous on isixsigma? Is this the start of your 15 minutes, or what?
Brandon
PS: I like the Dixie Chicks. What's the deal?

March 17, 2003 at 7:49 pm #83950
billybob
Hello folks,
Hello Brandon…cute name, were your parents hoping for a girl?
Search by author…Billybob…I’m already famous!
No doubt you like those unAmerican; pro Iraqi/French Dixie Chicks…go read the news and turn off the sitcoms.
Now lets get back on topic….
Later,
Billybob
March 17, 2003 at 9:09 pm #83952
Gabriel
English is not my home language, so I consulted the dictionary:
Mistake: Error, fault.
And this is from my primary school:
I: Roman number for 1.
II: Roman number for 2.
Ok, I know, the dictionary does not always cover the technical meanings of words.
I still like the analogy.
I remember that in my Fluid Mechanics course we derived the Navier-Stokes equations using the analogy of elastic solids. Of course, no one ever tried to convince us that fluids were elastic solids, but the analogy worked pretty well to derive the equations and to understand the concepts behind them.

March 18, 2003 at 1:21 am #83955
John J. Flaig
Stan, to refresh your memory:
1. I did not refer to you as Stanboy!
2. You are still unable to provide any statistical proof of your claim.
3. You are still unable to name ONE eminent Statistician who agrees with you.
Here's a quote for you to ponder: "The process performance indices Pp and Ppk are more than a step backwards. They are a waste of engineering and management effort — they tell you nothing." Douglas C. Montgomery, Introduction to SPC, 4th Ed., page 373.

March 18, 2003 at 2:15 am #83957
John J. Flaig
Gabriel, sorry for creating the confusion. When I said Pp^ was an estimator, I just meant to imply that the "hat" symbol is used to indicate an estimator of a population parameter. Again, Pp^ is NOT an estimator of process performance. Also, you're right: if the process is unstable, then xbar is NOT an estimator of the process mu either. Now let me address some of your other points.
1. In assessing process capability we are interested in what the process will do (i.e., prediction). The process continues to produce product (i.e., it is NOT a finite batch). It is an infinite time series population.
2. You said — Pp=Tolerance/Sigma(total) and Pp^=Tolerance/S(total), then you say that S(total) is not an estimator of Sigma(total)? For a finite population, or an infinite one from a random process, s is an estimator of sigma. However, if you have an infinite time series population generated from BOTH random and special cause variation, then s is NOT an estimator of sigma.
3. You said — Give me a batch and I will be able to take infinite random and independent samples of size n from it. For any of these samples you can calculate Tolerance/S, which happens to be Pp^. That will give you a distribution of Pp^ for samples of size n taken from that batch. So Pp^ IS a random variable with a fixed distribution for a given batch. And Pp^ is used to estimate the Pp for that batch. — This is true, because you have a finite batch size. But does it tell you anything about the process's next batch? No, it does not, because the process is subject to special cause variation. Also, your statement that you can take an infinite number of random samples from this finite batch is mathematically incorrect. You can only take a finite number.
4. You said — If the process is unstable, then you cannot predict what will happen in the next batch, neither with Pp^ nor with Cp^. If the process is stable, then Cp=Pp and then you can use either Pp^ or Cp^ to predict what will happen in the next batch. — What is the value of Pp^ if the process is stable?
5. You said — The key is that, once the batch is made, you eliminated "time" as a variable and then there is no distinction between special and common causes. It is just a batch with a distribution you don't know but want to estimate. And that's what Pp^ does. — What is the value of this knowledge? Nothing, because it does not tell you what the next batch will look like.

March 18, 2003 at 4:26 am #83958
John,
I am sorry you found "Johnboy" demeaning; I was just inviting you out to play. I will not do it again.
Doug is welcome to his opinion; the whole Automotive and Six Sigma community is at odds with him.
He probably doesn't care anymore anyway; making wine is more important than this nonsense.

March 18, 2003 at 4:33 am #83959
John,
Again I have to say, I doubt anyone knows what to do with this stuff you put out.
Your message in a nutshell – stats are important but don’t use them for anything.
Again I disagree. There is a load of information contained in Cp, Cpk, Pp, and Ppk (I know, I know – you think I should put hats on everything). I and thousands of others use the information daily to gain direction in order to solve problems.
0March 18, 2003 at 4:37 am #83960Do you know any statisticians that ever solved a problem?
The answer is obvious:
Zero, if you really want to solve the problem.

March 18, 2003 at 12:10 pm #83964
Gabriel
Jaime,
Same feeling. The more they try to convince you it is not a hypothesis test, the more it looks, at least, as if it worked like one (to avoid saying it IS one, which I can't support).
And, for the record, if you mention me in your post, it is Gabriel, not Gabrielle. I'm sure you will note the difference (which is not just "le"). :)

March 18, 2003 at 1:08 pm #83969
Gabriel
John, we seem to agree on most things. Let's see:
First: does "a batch" (understood as the output of a process in a defined period of time) qualify as a population from which one can estimate something based on sampling, or does only "the process" qualify as a population? For me, the answer is pretty clear: both qualify.
"Again, Pp^ is NOT an estimator of process performance. Also, you're right, if the process is unstable, then xbar is NOT an estimator of the process mu either."
Pp^ is an estimator of the process performance DURING THE TIME INVOLVED IN THE STUDY (i.e., how the process performed in the batch). It is not (and I NEVER said it was) an estimator of process capability or future process performance. The same goes for Xbar: it is an estimate of the batch's Mu, not the process Mu, unless the process is stable.
“1. In assessing process capability we are interested in what the process will do (i.e. prediction). The process continues to produce product (i.e., it is NOT a finite batch). It is an infinite times series population.”
That's why Pp is "process performance" and not "process capability." From that I conclude that, if your process is not fully stable, you don't care what on Earth it is actually delivering. You don't care about actual, real, current customer satisfaction. You are only interested in what it "could" deliver in the future if it were stable, i.e., its "potential capability" to satisfy your customer.
“2. You said — Pp=Tolerance/Sigma(total) and Pp^=Tolerance/S(total), then you say that S(total) is not an estimator of Sigma(total)? — For a finite population or an infinite one from a random process s is an estimator of sigma. However, if you have an infinite time series population generated from BOTH random and special cause variation, then s is NOT an estimator of sigma”.
In that case, the process distribution is not defined, because the distribution changes over time. Process Mu, Sigma, and shape do not exist. I said that in the "definitions" part of my first, long post, and as far as I remember I haven't said anything contrary to that. Sigma(total) and S(total) are applicable only to the period of the study the sample comes from (what I called "the batch").
"3. You said — Give me a batch and I will be able to take infinite random and independent samples of size n from it. For any of these samples you can calculate Tolerance/S, which happens to be Pp^. That will give you a distribution of Pp^ for samples of size n taken from that batch. So Pp^ IS a random variable with a fixed distribution for a given batch. And Pp^ is used to estimate the Pp for that batch. — This is true, because you have a finite batch size. But does it tell you anything about the process's next batch? No, it does not, because the process is subject to special cause variation. Also, your statement that you can take an infinite number of random samples from this finite batch is mathematically incorrect. You can only take a finite number."
No, it does not tell you anything about the future, including the next batch. How many times do I have to say it? You quoted me below saying "If the process is unstable, then you cannot predict what will happen in the next batch, neither with Pp^ nor with Cp^." And you are quoting me in this point saying "Pp^ is used to estimate the Pp for that batch." And about the samples: why couldn't I take an infinite number of samples of any size (even bigger than the population size) from a finite population? Just return the sample to the population after taking it. If you didn't, the samples would not be independent and the result would be biased.
“4. You said — If the process is unstable, then you can not predict what will happen in the next batch, neither with Pp^ nor with Cp^. If the process is stable, then Cp=Pp and then you can use either Pp^ or Cp^ to predict what will happen in the next batch. — What is the value of Pp^ if the process is stable?”
What’s the value of Cp if the process is stable? Since in that case Cp=Pp, I guess the value is the same (both the mathematical value and the practical value).
“5. You said — The key is that, once the batch is made, you eliminated the “time” as a variable and then there is no distinction between special and common causes. It is just a batch with a distribution you don’t know but want to estimate. And that’s what Pp^ does. — What is the value of this knowledge? Nothing, because it does not tell you what the next batch will look like.”
It does not tell you what the next batch will look like. That is true, and I have said it all along. As you see, we agree on about everything but one thing. You think that the only valuable information is information that lets you predict the future. I don’t. I also want to know what I am shipping to the customer now. And, by the way, even if I am not predicting the future it is inferential statistics anyway, because I am estimating parameters of the population (the batch) from sample statistics.
And finally, to add more to the discussion: there is one case where Pp^ is much better than Cp^ for predicting the future. When you have an unstable process where the special causes and their effects are known, predictable, and will stay there (no intention to remove them). This is what is called a predictable unstable process.
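To make the distinction concrete, here is a rough stdlib-only simulation (all numbers invented) of such a predictable unstable process: the mean drifts steadily, as with tool wear, while the short-term spread stays constant. Cp^, built from the within-subgroup spread only, ignores the drift; Pp^, built from the overall spread, reports the performance actually delivered:

```python
import random
import statistics

random.seed(1)

LSL, USL = 4.0, 16.0            # assumed spec limits for illustration
tol = USL - LSL

# 25 subgroups of 5; the mean drifts by a known, deliberate amount.
subgroups = [[random.gauss(8.0 + 0.1 * i, 0.5) for _ in range(5)]
             for i in range(25)]

# Cp^ from the short-term (within-subgroup) spread only.
s_within = statistics.mean(statistics.stdev(g) for g in subgroups)
cp = tol / (6 * s_within)

# Pp^ from the overall spread, drift included.
all_parts = [x for g in subgroups for x in g]
pp = tol / (6 * statistics.stdev(all_parts))

# Pp comes out lower than Cp because it "sees" the drift that will
# still be there, with the same effect, in the next batch.
print(round(cp, 2), round(pp, 2))
```

Since the drift is deliberate and will persist, the lower Pp figure is the one that describes what the customer will actually receive.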
In previous posts I gave two examples: a controlled drifting process, and a process affected by the thickness of a metal strip (raw material) where that thickness had a known variation from coil to coil but negligible variation within the coil. In those cases Cp tells you what you could get if you did what you won’t do. And that’s useless information. Pp, on the other hand, tells you how the process performed and how it will perform, since the special causes will be present with the same effects in the future.0March 18, 2003 at 1:36 pm #83971Gabriel, I enjoy your thoughts, thanks for all the posts. Sorry for the misspelling… and for the books it’s Jamie (not Jaime :)).
Jamie0March 18, 2003 at 3:33 pm #83978
Charles H
Thanks for your kind words, Jamie. I sense that you have adopted lifelong learning as part of your personal continual improvement philosophy. Bravo! I applaud your approach – investigate, read, learn – and then reach your own conclusions on the issues. The method of a true Six Sigma professional. Don’t take the “experts’” word for it – seek profound knowledge and understanding. We need assumption challengers out there! Your search will serve you well.
As we have seen in this thread, there are two camps involved in this discussion – those that believe as Deming and Shewhart prescribe (don’t associate probabilities and hypothesis tests with control charts), and those that believe differently. My belief is that many posting to this thread are reacting based on assumptions and intuition, not upon their own investigation, knowledge and understanding. All one can ask is that we look at the facts, read and understand the “experts,” then decide for ourselves. [By the way – is it just my perception or is there a general lack of knowledge and/or an aversion in the Six Sigma community towards Dr. Deming’s and Shewhart’s teachings?]
I do not find the analogies used in this discussion to be compelling. Analogies are great for getting a point across, but analogies are nothing more than a “resemblance in some particulars between things, otherwise unlike.” I find Gabriel’s analogy, good as it may be on the surface, to be of little value in this discussion. An analogy is not a good substitute for an accurate and precise statistical discussion, based upon data and facts. This is not a slam on Gabriel – I find his posts to be appropriate and well thought out (thanks Gabriel – good stuff).
Lastly, my thanks to Jim for raising this thread. I know you’ve taken a lot of hits, Jim. You put yourself out there on point and I gotta respect that – aside from whether we agree or disagree on the finer points. :) Good job!
Best to all,
Charles H.
0March 18, 2003 at 3:48 pm #83981Charles, Thanks for the post … it actually allows me to leave this long thread feeling pretty good. I’m off to learn more. Thanks to all for the great discussion.
Jamie0March 18, 2003 at 4:30 pm #83984
Gabriel
The “analogy” appeared in this thread because we were discussing how slow and incapable a control chart could be at detecting small shifts of the mean, and because of that the absence of OOC signals did not ensure that the process was stable (if we understand stable = delivering the same distribution over time). And Stan had the bad idea to address that point saying something like “the Alpha risk is 0.27%”, and I had the bad idea to reply to that using the analogy and saying that the risk of failing to detect the instability would be the “Beta risk”, not “Alpha”.
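The effect Gabriel describes (the risk of missing a shift shrinking as the shift grows and as the subgroup size grows) can be put in numbers with the usual normal-theory formula for the chance that one subgroup mean lands outside the 3-sigma Xbar limits. A small stdlib-only sketch; the shift sizes and subgroup sizes are arbitrary choices for illustration:

```python
import math

def phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * math.erfc(-z / math.sqrt(2))

def detection_chance(shift_sigma, n):
    """Chance that a single subgroup mean falls outside the 3-sigma
    Xbar limits when the process mean has shifted by shift_sigma
    process-sigmas (the complement is the per-point miss risk)."""
    se = 1.0 / math.sqrt(n)                     # sigma of Xbar
    above_ucl = phi((shift_sigma - 3.0 * se) / se)
    below_lcl = phi((-shift_sigma - 3.0 * se) / se)
    return above_ucl + below_lcl

# With no shift this reproduces the familiar 0.27% false-alarm rate;
# as the shift or the subgroup size grows, detection improves.
for shift in (0.5, 1.0, 2.0):
    for n in (1, 5):
        print(f"shift={shift} sigma, n={n}: "
              f"P(detect on one point) = {detection_chance(shift, n):.3f}")
```

The table this prints shows exactly the asymmetry under discussion: small shifts are very likely to be missed on any one point, large shifts much less so, and larger subgroups shrink the miss risk for a given shift.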
Then the thing began to grow and grow, adding what I think is a lot of information with little value. If you use the tool properly and get value from it, it is not very important whether you say that it is a hypothesis test, or that it works as one, or that it has nothing to do with a hypothesis test.0March 18, 2003 at 4:41 pm #83985
Charles H
Hey Gabriel:
You hit the nail on the head. When it’s all over and done with, it comes down to what gives you value and what doesn’t in the real world. My only caution is that, in ignoring the intent and details of the tools (the underlying statistical foundations), we risk misapplying the tool, getting erroneous information, and thus making an erroneous decision based upon the data – and “off we go to the Milky Way”. I have seen this happen to practitioners numerous times in my journey. We can only determine what is and is not significant to our given situation based upon a good understanding of the tools and their limitations. I think we would be in agreement on this point?
Charles H.0March 18, 2003 at 5:12 pm #83986
John J. Flaig
Gabriel,
It seems we are pretty much in agreement, but let me try to address the points where you feel that Pp^ is of value.
1. You’re right, Pp^ can be used as a metric for a fixed batch. But, as I mentioned previously, I think a better measure is S(LT) vs. S(ST), using the F* test, a histogram, the fraction nonconforming in the tails, and net sensitivity (see my article in Quality Engineering, Marcel Dekker, Vol. 11, No. 4, 1999).
2. I found the last paragraph of your response quite interesting. You gave an example of a process having controlled drift, and I think you’re suggesting that Pp^ could be useful in predicting the future process results. You’re right! However, you should ask yourself: is the process really unstable? The answer is NO. The process is actually in dynamic control and can be modeled using time series methods (see Montgomery’s SPC text for a tool wear example). So in this case you should use Cp for the dynamic model, because the process is actually in control.
I need to think about your coil example. I’ll get back to you if I can come up with an answer.
It is a pleasure to discuss these ideas with you. You have a very sharp mind and come up with very thoughtful examples.
Regards,
John
PS Here is a quote from Dr. Kotz (Professor of Ind. Eng. and the world’s leading authority on process capability indices): “We strongly recommend against the use of Pp and Ppk as these indices are actually a step backwards in quantifying process capability”0March 18, 2003 at 6:02 pm #83988
Gabriel
We are in full agreement. I’ve never liked those “instant pudding”, “don’t ask how it works”, “ready to use without any background knowledge” toolkits.
Imagine that I am leading an SPC course and I say: “There are some signals you can look for in the control chart that will seldom happen if the process distribution has not changed since the time the control limits were calculated. Take, for example, 7 points above the average. If the process is stable, any point can be either above or below the average with a probability of 0.5 for each case, independently of whether the previous points were above or below the average. Then the probability for any 7 consecutive points to be all above the average just by chance in a stable process is 0.5^7=0.008, or about 0.8%. So if you find that signal in a control chart you would reject the hypothesis that the process is stable, and will investigate, find and eliminate the special cause that, for example, moved the average upwards. Of course, there is a risk that you rejected the stability wrongly because, in a stable process, about 1 out of 128 groups of 7 points will show that signal just by chance, regardless of the subgroup size. It’s like the Alpha risk of a hypothesis test, remember? Note that there is also a risk that the average has actually shifted upwards but, by chance, you still have some points below the average, so you fail to detect the instability. It’s like the Beta risk of a hypothesis test. Note that this risk of failing to detect small shifts is greater than the risk of failing to detect large shifts and that, for a given shift, the risk is smaller for bigger subgroups because, as we’ve seen before, the larger the subgroups the smaller the variation of Xbar, and then it is less probable that a point will fall below the average line drawn in the chart if the actual average has shifted upwards, so it is more probable to get 7 consecutive points above the average”.
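The 0.5^7 arithmetic in the explanation above can be verified both directly and by simulation; here is a quick stdlib-only check (no real data involved):

```python
import random

random.seed(2)

# Theory: for a stable process, 7 consecutive points all above the
# average occur with probability 0.5**7.
p_theory = 0.5 ** 7
print(p_theory)            # 0.0078125, i.e. about 0.8%, or 1 in 128

# Simulation: draw many groups of 7 symmetric "points" and count how
# often all 7 land above the centre line.
trials = 200_000
hits = sum(all(random.random() > 0.5 for _ in range(7)) for _ in range(trials))
print(hits / trials)       # should land close to the theoretical value
```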
I used probabilities, and I used the analogy with a hypothesis test. Do you find the previous explanation confusing and misleading, or explanatory and helpful to understand the concepts and use of the tool, or something else?0March 18, 2003 at 6:34 pm #83991
Gabriel
John, I’m happy we agree so much.
I saw something about those tools and metrics you mention in a URL you posted before. It looked very interesting.
About the nonconforming fraction in the tails: I don’t like it as a real estimator of the fraction of nonconforming, especially if the process is capable (or performed very well in this batch), because in this case you have too few parts in the tails and probably no part from the sample will be that far out, so it will be very difficult to prove that any given mathematical distribution fits the process (or batch) distribution that far. Now, if you use that value not as a real estimator of the % nonconforming but just as a capability (or performance) index, it’s OK. My point is that if your process is very good (or performed very well in one batch), for example Cpk=1.5, then the process is very good, never mind whether you actually have 0.4, 4 or 40 PPM. And if the process before the last improvement was at Cpk=1, then it was a “barely acceptable” (or not, depending on particular criteria) process even with 1500, 2500 or 4500 PPM (the fraction nonconforming is more reliably estimated when it is bigger), and now it is better. I am not saying that it is wrong to estimate the nonconforming fraction. I just don’t like it for capable processes.
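Gabriel’s point about tail estimates can be illustrated with the textbook normal-theory conversion from Cpk to nonconforming PPM. The numbers below assume a perfectly normal process with the nearest spec limit at 3·Cpk sigmas, which is exactly the assumption that is hardest to verify that far out in the tail (a sketch, not a recommendation):

```python
import math

def nearest_tail_ppm(cpk):
    """Nonconforming PPM in the nearest tail implied by Cpk under a
    perfect normal model: the area beyond z = 3*Cpk."""
    z = 3.0 * cpk
    return 1e6 * 0.5 * math.erfc(z / math.sqrt(2))

# Near Cpk = 1 the process sits around 1350 PPM, where the estimate is
# comparatively robust; at Cpk = 1.5 the model claims a few PPM, a
# figure that rides entirely on the assumed distribution shape.
for cpk in (1.0, 1.33, 1.5):
    print(cpk, round(nearest_tail_ppm(cpk), 1))
```

The better the process, the more the PPM figure is an extrapolation of the model rather than something the sample can confirm, which is why treating it as an index rather than a literal defect count is the safer reading.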
About comparing S(LT) vs S(ST): it is the same as comparing Ppk^ with Cpk^. It is just that the F test (and, as far as I can see, the F* test too) is designed for S. From that point of view, using S is easier because you don’t have to develop a new test for Cp^ vs Pp^, which could be done.
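The idea behind comparing S(LT) with S(ST) can be sketched with a simple variance ratio. To be clear, this is NOT the F* test from Cruthis and Rigdon, just a stdlib-only illustration, on invented data, of what such a comparison responds to:

```python
import random
import statistics

random.seed(3)

def lt_over_st(subgroups):
    # Long-term variance over the mean within-subgroup (short-term)
    # variance; near 1 for a stable process, inflated by instability.
    s_lt2 = statistics.variance([x for g in subgroups for x in g])
    s_st2 = statistics.mean(statistics.variance(g) for g in subgroups)
    return s_lt2 / s_st2

def make_process(drift_per_subgroup):
    # 20 subgroups of 5 with an optional linear drift of the mean.
    return [[random.gauss(10.0 + drift_per_subgroup * i, 1.0) for _ in range(5)]
            for i in range(20)]

stable = lt_over_st(make_process(0.0))
drifting = lt_over_st(make_process(0.3))

# The drifting process shows a clearly inflated ratio, because the
# drift enters S(LT) but not S(ST).
print(round(stable, 2), round(drifting, 2))
```

The same logic is why Ppk^ falls below Cpk^ for an unstable process: the long-term spread picks up the between-subgroup variation that the short-term spread never sees.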
Now, I have to recognize my limitations. I don’t know the F* test or net sensitivity.
I also have to recognize that I don’t know the “dynamic control” concept. I will read the material someday, but I am curious: how do you calculate Cpk in that case?
And finally:
“We strongly recommend against the use of Pp and Ppk as these indices are actually a step backwards in quantifying process capability”
I agree. So let’s refrain from using them to quantify process capability. I never pretended to do that except, maybe, for the predictable unstable (or dynamically stable) process case.0March 18, 2003 at 8:47 pm #83994
John J. Flaig
Gabriel,
1. You’re right, the fraction nonconforming estimation can be tricky. In my software I used techniques from reliability mathematics to model the process even when no points are beyond the spec limits (see reliability analysis with censored data). I also developed a modified Weibull modeling approach that is more accurate than standard Weibull modeling. Combined, these techniques do an excellent job of estimating the nonconformance rate.
2. Here is the reference to the F* test: Cruthis, E. N. and Rigdon, S. E. (1993). “Comparing Two Estimates of Variance to Determine the Stability of a Process.” Quality Engineering, Vol. 5, No. 1.
3. To compute Cp for a process having uniform drift, you can use a transform to rotate the data so that it then appears as a standard control chart. You can then compare the spec tolerance to the process tolerance (i.e., Cp). Basically, the same approach can be used for Cpk, but using the transformed value of the (now constant) mean.
Regards,
John
PS I think you can get a copy of my paper on Net Sensitivity and my paper on Process Capability Optimization on the ASQ web site.0March 19, 2003 at 11:14 am #84004
Gabriel
“3. To compute Cp for a process having uniform drift you can use a transform to rotate the data so that it then appears as a standard control chart. You can then compare the spec tolerance to the process tolerance (i.e., Cp). Basically, the same approach can be used for Cpk, but using the transformed value of the (now constant) mean”
I was afraid that it was something like this. Now, I repeat, I don’t know this method, but it sounds to me that a process that begins with, let’s say, average 10 and ends with average 12, and another process that starts at 10.8 and ends with an average of 11.2, always with the same S(within), would have the same Cp and Cpk. No… I have to be wrong.0March 19, 2003 at 1:44 pm #84007Gabriel,
You are right, that is why Pp and Ppk are important. They would see the difference. Cp and Cpk will not.0March 19, 2003 at 4:51 pm #84021
John J. Flaig
Gabriel,
Get a calculus book and look for translation and rotation of axes. It should explain how to generate the transform. Once you have the equation you can put it in Excel and transform the original data (and the specification limits) into the new data. The new data will look like a standard Shewhart control chart. I hope this helps.
Regards,
John
PS Montgomery also covers this technique in his SPC text (see the tool wear example).0
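The rotation-of-axes idea can be approximated with a simple least-squares detrend (a crude stand-in for the formal coordinate transform, on invented data). The sketch below also illustrates Gabriel’s objection above: after detrending, a fast drift and a slow drift with the same short-term spread give nearly the same Cp, while Pp still tells them apart:

```python
import random
import statistics

random.seed(4)

def detrend(data):
    """Subtract a fitted linear trend; a crude stand-in for the
    translation/rotation transform described above."""
    n = len(data)
    xbar = (n - 1) / 2.0
    ybar = statistics.mean(data)
    slope = (sum((i - xbar) * (y - ybar) for i, y in enumerate(data))
             / sum((i - xbar) ** 2 for i in range(n)))
    return [y - slope * (i - xbar) for i, y in enumerate(data)]

# Two drifting processes, same short-term sigma (0.2), different drifts:
fast = [random.gauss(10.0 + 2.0 * i / 99, 0.2) for i in range(100)]  # 10 -> 12
slow = [random.gauss(10.8 + 0.4 * i / 99, 0.2) for i in range(100)]  # 10.8 -> 11.2

tol = 3.0  # assumed tolerance width for illustration

cp_fast = tol / (6 * statistics.stdev(detrend(fast)))
cp_slow = tol / (6 * statistics.stdev(detrend(slow)))
pp_fast = tol / (6 * statistics.stdev(fast))
pp_slow = tol / (6 * statistics.stdev(slow))

# After detrending, the two Cp values nearly coincide; Pp still
# distinguishes the heavy drift from the light one.
print(round(cp_fast, 2), round(cp_slow, 2), round(pp_fast, 2), round(pp_slow, 2))
```

Whether the detrended Cp or the raw Pp is the useful number depends on whether the drift will persist, which is the crux of the whole thread.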
The forum ‘General’ is closed to new topics and replies.