
Ppk Not Needed Since About 1980

  • #83842

    Gabriel
    Participant

    Ok, it may not be a formal hypothesis test, but it looks very much alike. Let's see:
    You have a hypothesis: the process is stable, i.e. it delivers the same distribution over time, and same distribution = same average + same variance + same shape. To simplify, let's take part of the hypothesis:
    The process delivers the same average Mu over time. This hypothesis can never be proven true, so it looks very much like an Ho.
    You have an alternate hypothesis: now the process average is Mu1, different from Mu. It looks like an Ha.
    You know that a point beyond the control limits is very unlikely if the average is Mu, so if you find one you reject that Mu1=Mu and suspect that Mu1 is different from Mu. It sounds as if you were rejecting Ho for Ha, and as if the small chance of saying that Mu has changed when it hasn't were the alpha risk.
    If you don't see an OOC point, then you don't have enough evidence to say with enough confidence that Mu has changed.
    But how likely is it that Mu has changed and you don't see an OOC signal? It depends on the amount of change. For a very small change it is very probable that you miss it. For a big change it is very unlikely that you miss it. And furthermore, for a given change in Mu, the chance of missing it decreases with the subgroup size. The chance of saying that you have no evidence that Mu has changed, when it has, looks very much like a beta risk.
    I like the analogy.
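    [Editor's sketch: a minimal Python rendering of the analogy above, assuming normal data and known sigma. Alpha is the chance of a point outside the ±3-sigma limits when nothing has changed; beta is the chance of missing a mean shift, which falls as the subgroup size grows. Function names and numbers are illustrative, not from the thread.]

    ```python
    # The "point beyond 3-sigma limits" rule read as a test on the subgroup mean.
    # alpha = P(point outside limits | mean unchanged)
    # beta  = P(point inside limits  | mean shifted by delta sigmas)
    from scipy.stats import norm

    def alpha_risk():
        # Two tails beyond +/-3 standard errors of the subgroup mean
        return 2 * norm.sf(3)

    def beta_risk(delta_sigma, n):
        # Limits sit at +/-3 * sigma/sqrt(n) around the old mean; after a
        # shift of delta_sigma (in sigmas), the subgroup mean is centered
        # delta_sigma*sqrt(n) standard errors away from the old center.
        shift = delta_sigma * n ** 0.5
        return norm.cdf(3 - shift) - norm.cdf(-3 - shift)

    print(f"alpha = {alpha_risk():.4f}")  # ~0.0027
    for n in (2, 5, 10):
        print(f"n={n:2d}  beta(0.5 sigma shift) = {beta_risk(0.5, n):.3f}")
    # Beta falls as n grows: bigger subgroups catch small shifts sooner.
    ```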

    0
    #83847

    Eileen
    Participant

    As I said before, you need to read and study the work done by Dr. Shewhart. Since he developed and established the theory of the control chart, I feel he is more qualified on this topic than you. I am sorry, but you are so wrong.
    Eileen

    0
    #83850

    Charles H
    Participant

    Eileen:
    I never like to see someone stand alone when they are right. Hypothesis testing and probabilities have absolutely zero, zilch, nada to do with control charting.
    Charles H.

    0
    #83856

    John J. Flaig
    Participant

    Gabriel, I think this is a very good discussion we are having, because it sheds light on a lot of misunderstood issues. Let me make a couple more observations.
    1. You need to be sure to use the "hat" symbol when you're talking about an estimator.
    2. Here is a very important point. Pp^ is NOT an estimator of Pp. I know this may come as a shock to you given your notes to me, so let me explain. In order for an estimator (sample value) to predict the parameter (population value), the estimator MUST be a RANDOM variable. This means it has a distribution, so we can say the population value lies between certain limits derived from the distribution of the RANDOM variable. The problem is that Pp^ is NOT a RANDOM variable. Pp^ has both SPECIAL and RANDOM causes of its variation. Hence, it does NOT have a fixed random distribution. Therefore, Pp^ is NOT a RANDOM variable and CANNOT be used to predict Pp.
    After you recover from the shock, let me know if you understand this VERY important concept in Statistics (and pass it on to your friends).
    Regards,
    John

    0
    #83857

    Jim Winings
    Participant

    I agree with Eileen. (But apparently consider the source: me.)
    Besterfield says, and he says for proof of this see Juran's Quality Control Handbook:
    "Averages are used on control charts rather than individual observations because average values will indicate a change in variation much faster."
    Even though, to me, it would seem that a sample of 2 in each sub-group would show more of a change in the average compared to 10 samples per sub-group, because of smoothing.
    I think about the best one can do with a control chart is to observe the WECO rules, which represent trending. If you need them tighter, then change the WECO rules slightly. For example, instead of eight successive points falling in Zone C or beyond, make it 5. We also use a "Best Fit" line on our control charts, but how helpful that is depends on the data, but that goes without saying.
    Me
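    [Editor's sketch: a minimal, hypothetical Python check for the run rule Jim mentions above – N successive points on one side of the center line – with the run length adjustable from the usual 8 down to 5. The function name, data, and center value are illustrative, not from the thread.]

    ```python
    # N successive points on the same side of the center line (Zone C or
    # beyond).  run_length=8 is the usual WECO form; tighten to 5 as Jim
    # suggests.
    def run_rule_violations(points, center, run_length=8):
        """Return indices where `run_length` consecutive points sit on
        the same side of `center`."""
        hits, streak, side = [], 0, 0
        for i, x in enumerate(points):
            s = 1 if x > center else -1 if x < center else 0
            streak = streak + 1 if s == side and s != 0 else (1 if s != 0 else 0)
            side = s
            if streak >= run_length:
                hits.append(i)
        return hits

    data = [10.2, 10.1, 10.4, 10.3, 10.2, 10.5, 10.1, 9.8]
    print(run_rule_violations(data, center=10.0, run_length=5))  # [4, 5, 6]
    ```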

    0
    #83864

    Jamie
    Participant

    I like Gabriel's interpretation, or analogy. The OOC test for any sample mean beyond 3 standard errors from the mean sure seems to use all the same components as a 1-sample t-test: there is an alpha risk (1-.997), you can compute a beta risk (you have sample size and delta/sigma), and you have a target (the process mean) you are testing against, a mean and a variance. You have a null hypothesis (sample mean = process mean) and an alternative hypothesis (sample mean ≠ process mean). You are asking whether this sample could have come from the same population. If your answer is to reject the null, then you conclude the process has changed. How is this really so different that you can say zilch, nada, etc.? Does one really need to read several volumes to explain the difference? I think Gabriel did a nice job of justifying his thoughts, but I really haven't heard a counter-argument.
    One might be able to show it isn't mathematically the same as a t-test (though I'm not so sure of this), but I can't imagine how one could say it isn't a hypothesis test.
    Jamie
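    [Editor's sketch: the correspondence Jamie describes, in minimal Python – the "one point beyond 3 standard errors" rule behaves like a two-sided z-test at alpha ≈ 1 − 0.9973. Assumes normal data with known sigma; all numbers are illustrative.]

    ```python
    from math import sqrt
    from scipy.stats import norm

    mu0, sigma, n = 10.0, 1.0, 5   # in-control mean, known sigma, subgroup size
    xbar = 11.5                    # observed subgroup mean (illustrative)

    z = (xbar - mu0) / (sigma / sqrt(n))
    p_value = 2 * norm.sf(abs(z))
    alpha = 2 * norm.sf(3)         # ~0.0027, i.e. 1 - 0.9973

    # Flagging the point is the same decision as rejecting at this alpha.
    print(f"z = {z:.2f}, p = {p_value:.5f}, reject = {p_value < alpha}")
    ```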

    0
    #83873

    Charles H
    Participant

    Dr. Deming talks about this in his last book, The New Economics, on pages 176-177: "It is wrong (misuse of the meaning of a control chart) to suppose that there is some ascertainable probability that either of these false signals will occur. We can only say that the risk to incur either false signal is very small. (Some textbooks on the statistical control of quality lead the reader astray on this point.)
    It is a mistake to suppose that the control chart furnishes a test of significance – that a point beyond a control limit is 'significant.'"
    The false signals refer to the two types of mistakes that can be made with control chart analysis, as defined by Shewhart (page 174):
    "Mistake 1: To react to an outcome as if it came from a special cause, when actually it came from common cause variation."
    "Mistake 2: To treat an outcome as if it came from common causes of variation, when actually it came from a special cause."
    I went looking for my copy of Out of the Crisis for further definition on the topic, but can't find it (what I get for moving over the holidays).
    In reading Deming, one will note he never uses the term "probability" in association with control charts. He will say a process is "predictable" if it is stable and in control, but there are no probabilities associated with it.
    Hope this helps the discussion.
    Charles H.

    0
    #83874

    Mikel
    Member

    Of course there are probabilities associated with control charts – unless, of course, you believe there are no probabilities associated with +/-3 sigma limits.

    0
    #83875

    Mikel
    Member

    John,
    The only shock I receive from your posts – all of them – is how little value can be derived from this theoretical diatribe.

    0
    #83876

    Charles H
    Participant

    >If your answer is reject the null then you conclude the process has changed.<
    In rereading Jamie's post, I noted this statement, which slipped past me. This is not correct. You cannot conclude the process has changed based upon an out-of-control condition. All you can do is ask whether the process has changed; you need to investigate and determine the root cause, if one is present. False alarms on control charts do happen – even in the Red Beads Experiment, which is a very controlled and stable process.
    Charles H.

    0
    #83877

    Charles H
    Participant

    Correct, Stan – though they have their basis in probabilities, there are no probabilities associated with control limits or with the reason for Shewhart choosing +/-3 standard deviations – a potentially subtle but very important distinction. He did so because it was the most economic location for them, limiting the possibilities of making Mistakes 1 and 2 and their subsequent economic impacts.
    Charles H.

    0
    #83878

    John J. Flaig
    Participant

    Stan, since you seem unable to understand the significance of this issue, I can only assume that you need a refresher course in Statistics. Let me suggest you talk to someone in the Statistics department of your local university, or see Drs. Montgomery, Kotz, Lovelace, Wheeler, or Deming's comments on this subject. Then you might not think it is of such "little value".

    0
    #83891

    Mikel
    Member

    There is statistical significance and practical significance.
    This has no practical significance.

    0
    #83892

    John J. Flaig
    Participant

    Stan, on what basis do you make the claim that Pp has "statistical significance and practical significance"? Of course everyone is entitled to their opinion, but you have made a claim WITHOUT DATA and WITHOUT STATISTICAL PROOF to support it. Do you have a statistical proof, or can you cite recognized statistical experts that agree with you?
    As a scientist, I'll change my position as soon as you can do two things:
    1. Show that the mathematical proof that I provided is incorrect, and
    2. Convince the following list of renowned statisticians that they are also wrong (Kotz, Montgomery, Lovelace, Johnson, Wheeler, Khorasani, and Gunter). I'd be happy to send you their e-mail addresses so you can send them your "proof". Just let me know when you're ready to submit it.

    0
    #83894

    Gabriel
    Participant

    “…to me it would seem that a sample of 2 in each sub-group would show more of a change in the average compared to 10 samples per sub-group because smoothing”
    That's it. This summarizes your understanding of the subject.

    0
    #83895

    Gabriel
    Participant

    I insist: SPC might not be a formal hypothesis test, but I like the analogy – even based on what you posted to reject it:
    "We can only say that the risk to incur either false signal is very small." (Sounds like alpha risk?)
    "The false signals refer to the two types of mistakes that can be made with control chart analysis." (Remember Type I and Type II errors?)
    "Mistake 1: To react to an outcome as if it came from a special cause, when actually it came from common cause variation." (Looks like Type I?)
    "Mistake 2: To treat an outcome as if it came from common causes of variation, when actually it came from a special cause." (Looks like Type II?)

    0
    #83897

    Charles H
    Participant

    Application with knowledge provides value.
    Application without knowledge is guessing.
    I provided information and sources. You provide analogies without information or sources. Insist if you must – but you are wrong. Dr. Shewhart and Deming would tell you "don't mess with it – it works." They wrote the book – so, with all respect to you, Gabriel, I'll listen to them.
    Regardless of whether I agree with you, your posts always add to the discussion. Thanks for your contributions to the forum.
    Charles H

    0
    #83898

    Jim Winings
    Participant

    I thought that is what I said!

    0
    #83899

    Gabriel
    Participant

    "Pp is NOT an estimator, Pp^ is."
    "Pp^ is NOT an estimator of Pp."
    These phrases are from your two previous posts. So tell me: if Pp^ is NOT an estimator of Pp, but Pp^ IS an estimator, then Pp^ is an estimator of ______ (fill in the blank).
    By the way, because Pp=Tolerance/Sigma(total) and Pp^=Tolerance/S(total), are you saying that S(total) is not an estimator of Sigma(total)?
    By the way, for a given batch, Sigma(total) is the population's (batch's) standard deviation as defined in any book, and S(total) is the sample standard deviation as defined in any book. So you mean that the standard deviation of a sample from a population is not an estimator of the standard deviation of the population?
    By the way, give me a batch and I will be able to take infinite random and independent samples of size n from it. For any of these samples you can calculate Tolerance/S, which happens to be Pp^. That will give you a distribution of Pp^ for samples of size n taken from that batch. So Pp^ IS a random variable with a fixed distribution for a given batch. And Pp^ is used to estimate the Pp of that batch.
    By the way, if the process is unstable, then you cannot predict what will happen in the next batch, neither with Pp^ nor with Cp^. If the process is stable, then Cp=Pp and you can use either Pp^ or Cp^ to predict what will happen in the next batch.
    By the way, according to your reasoning, Xbar (calculated, for example, with the same sample you used to calculate Pp^) is not an estimator of the batch's average either. It is also affected by special causes.
    The key is that, once the batch is made, you have eliminated "time" as a variable, and then there is no distinction between special and common causes. It is just a batch with a distribution you don't know but want to estimate. And that's what Pp^ does.
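    [Editor's sketch: Gabriel's thought experiment in minimal Python – repeated samples, drawn with replacement, of size n from one fixed batch, and the resulting sampling distribution of Pp^. The customary factor of 6 is included in the denominator, which the posts omit for brevity; batch, tolerance, and sample sizes are illustrative assumptions.]

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    batch = rng.normal(10.0, 0.5, size=5000)   # one finished batch (illustrative)
    tolerance = 6.0                            # USL - LSL (assumed)
    n = 50

    # Sampling with replacement keeps the draws independent, as Gabriel argues.
    pp_hats = []
    for _ in range(2000):
        sample = rng.choice(batch, size=n, replace=True)
        pp_hats.append(tolerance / (6 * sample.std(ddof=1)))

    pp_true = tolerance / (6 * batch.std(ddof=0))  # the batch's "population" Pp
    print(f"batch Pp       = {pp_true:.3f}")
    print(f"mean of Pp^    = {np.mean(pp_hats):.3f}")
    print(f"std dev of Pp^ = {np.std(pp_hats):.3f}")
    # Pp^ clusters around the batch Pp: a random variable estimating Pp.
    ```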

    0
    #83901

    Jim Winings
    Participant

    Rhetorical question:
    Have you ever been really, really, really, really, really, really, really, really, really, really, really, really sorry that you did something?

    0
    #83914

    Mikel
    Member

    Absolutely – but don’t be sorry – this was good

    0
    #83916

    Mikel
    Member

    Mistakes 1 and 2 – do you mean Type I and Type II errors?
    Limited possibilities?
    Type I & II errors always have probabilities associated with them. Possibilities / probabilities – what is the difference?
    I agree that Shewhart did not pick 3 sigma for the probability associated with it, but there is a probability associated with it.

    0
    #83917

    Mikel
    Member

    Johnboy,
    What a high-minded challenge.
    Let's see: the automotive industry bases their APQP process on the demonstration of capability. Every properly trained Six Sigma professional uses capability. AIAG thinks so much of the much-ado-about-nothing Pp^ that they don't even mention it.
    We'll just keep on using these estimates while you high-minded intellectuals talk about theory.
    By the way, I respect Box, Montgomery and Wheeler, but they have altered their stands to remain saleable. Just look at Box's stand on Taguchi in the '80s and then look at the video series he did in the '90s, which featured a fully saturated L8.

    0
    #83918

    Mikel
    Member

    Gabriel,
    Well said.

    0
    #83923

    John J. Flaig
    Participant

    Stan, the expected response from you:
    1. You resort to demeaning insults.
    2. You provide only anecdotal and irrelevant data. The statisticians that I listed have not changed their minds. Since you like Dr. Box, why don't you send an e-mail to him? I'm sure he'd be happy to enlighten you.
    3. You can't find one statistician that supports your claim of statistical significance for Pp.

    0
    #83926

    Mikel
    Member

    John,
    What did you consider demeaning? High-minded intellectuals? I doubt that most disagree.
    I know George, but put his and Doug's and Don's email addresses out here for everybody.
    I know loads of statisticians that use Cp and Ppk on a daily basis for decision making. It is pretty arrogant to think you speak for all statisticians.

    0
    #83931

    Charles H
    Participant

    Stan:
    My experience has been that Dr. Deming and Dr. Shewhart were both very accurate and precise in the language they used. They did not say "Type I and Type II Error" for a reason. Now, why do you think that is? Was it a mistake on their part – careless use of language? Or were they trying to make an important distinction?
    Charles H.

    0
    #83934

    Mikel
    Member

    Charles,
    Please help me understand the important distinctions. What are mistakes 1 and 2?

    0
    #83937

    CSSBB
    Participant

    Just reinforces my opinion that the probability of two statisticians agreeing with one another is infinitesimally small. It never ceases to amaze me that statisticians can be so certain in their opinions about the science of uncertainty. I've had the pleasure on a number of occasions of watching the fur fly when our statistical friends debate classical vs. Taguchi vs. Shainin DOE techniques. They're almost as entertaining as Crossfire or Hannity and Colmes (or Aykroyd and Curtin).

    0
    #83938

    Jamie
    Participant

    Charles, I wanted to thank you for your references. To be fair, I need to really research "the experts". I must say, though, your concluding I'm wrong almost seems to further my point (but I doubt this will add much to convincing you). Sure, there is always a chance of being wrong when you accept the alternative hypothesis. That chance is called alpha risk, the risk we are willing to accept. If you don't conclude there is a high probability something has changed, then why go investigate it? Would changing the OOC limits not change this risk (say to 2 std devs instead of 3)? Since I'm not enough of a math wiz to be able to derive the probabilities (I'm assuming the true probabilities exist), I can't say anything more than Gabrielle has, which is that it is an analogy. The only thing I can add is that I'm not one to easily accept something just because an "expert" (even those that were the original inventors) says it's so. Now if they provide empirical evidence or mathematical proof, that's another story (and if I do my research I may find that they do). You can't post entire books, but what you did post certainly supported your point. Thanks for the discussion.
    Jamie

    0
    #83939

    Jim Winings
    Participant

    Same thing with economists. I once worked with a couple of economists and one of them said that if you put 5 economists in a room and ask them all the exact same question, you would get 5 different answers, and none of them may be correct.

    0
    #83942

    Mikel
    Member

    Kind of reminds me of the old joke –
    How many statisticians does it take to solve a problem?

    0
    #83943

    Jim Winings
    Participant

    (I’ve heard this in several different forms)
    OK, I give up, …
    How many statisticians does it take to solve a problem?

    0
    #83944

    Jamie
    Participant

    Stan, I'm assuming you are referring to the Type I and Type II error that was posted by Charles. If so (in non-statistical terms): Type I – we conclude a difference, i.e. we say something has changed when indeed it has not. Type II – we conclude there is no difference, i.e. nothing has changed when indeed it did.
    Question: I'm assuming we have been talking about the single test (or OOC condition) where we find a mean outside of 3 std deviations. Is it possible that the language is very specific, and does not refer to Type I or Type II error, because this is just one of many tests (there are, what, 8 common ones?), many of which look at time-dependent relationships? I would say that if we collectively look at the 8 tests, then we cannot make the same analogies between control chart tests and hypothesis testing or t-tests that have been discussed. I'm applying my thoughts to only the one most standard test.
    Jamie

    0
    #83945

    Anon2
    Participant

    How many statisticians does it take to solve a problem? After attempting to read through the 130+ posts to this thread, I'd say the number has to be significantly less than for a group of non-statisticians attempting to solve the same problem.
    :)

    0
    #83946

    billybob
    Participant

    Hello folks,
    OK... you finally drew me into this thread. It's always seemed weird to me that the only way a statistician could say something was good was to say "fail to reject the null hypothesis." For cripes' sake, we're not dealing with the French here, or the UN; if it's good, say it's good. If it's bad, say it's bad and move on!
    And the Dixie Chichs suck, and i don’t care if its spelled wrong!
    Later,
    Billybob

    0
    #83947

    Jim Winings
    Participant

    hehehehehehehehehehehehehehehehehehehehehehehehehehehehehehehehehehehehehehehehehehehehehehehehe

    0
    #83948

    Jim Winings
    Participant

    Ah, I wanted to ask, was that an estimate???
    :-))

    0
    #83949

    Brandon
    Participant

    Hey Billybob,
    How does it feel to be famous on isixsigma? Is this the start of your 15 minutes, or what?
    Brandon
    PS I like the Dixie Chicks. What’s the deal?

    0
    #83950

    billybob
    Participant

    Hello folks,
    Hello Brandon…cute name, were your parents hoping for a girl?
    Search by author…Billybob…I’m already famous!
    No doubt you like those un-American, pro-Iraqi/French Dixie Chicks... go read the news and turn off the sitcoms.
    Now let's get back on topic.
    Later,
    Billybob

    0
    #83952

    Gabriel
    Participant

    English is not my home language, so I consulted the dictionary:
    Mistake: error, fault.
    And this is from my primary school:
    I: Roman numeral for 1.
    II: Roman numeral for 2.
    Ok, I know, the dictionary does not always cover the technical meanings of words.
    I still like the analogy.
    I remember that in my Fluid Mechanics course we derived the Navier-Stokes equations using the analogy of elastic solids. Of course, no one ever tried to convince us that fluids were elastic solids, but the analogy worked pretty well to derive the equations and to understand the concepts behind them.

    0
    #83955

    John J. Flaig
    Participant

    Stan, to refresh your memory:
    1. I did not refer to you as Stanboy!
    2. You are still unable to provide any statistical proof of your claim.
    3. You are still unable to name ONE eminent statistician that agrees with you.
    Here's a quote for you to ponder: "The process performance indices Pp and Ppk are more than a step backwards. They are a waste of engineering and management effort — they tell you nothing." Douglas C. Montgomery, Introduction to SPC, 4th Ed., page 373.

    0
    #83957

    John J. Flaig
    Participant

    Gabriel, sorry for creating the confusion. When I said Pp^ was an estimator, I just meant to imply that the "hat" symbol is used to indicate an estimator of a population parameter. Again, Pp^ is NOT an estimator of process performance. Also, you're right: if the process is unstable, then xbar is NOT an estimator of the process mu either. Now let me address some of your other points.
    1. In assessing process capability we are interested in what the process will do (i.e., prediction). The process continues to produce product (i.e., it is NOT a finite batch). It is an infinite time series population.
    2. You said — Pp=Tolerance/Sigma(total) and Pp^=Tolerance/S(total), then you say that S(total) is not an estimator of Sigma(total)? — For a finite population, or an infinite one from a random process, s is an estimator of sigma. However, if you have an infinite time series population generated from BOTH random and special cause variation, then s is NOT an estimator of sigma.
    3. You said — Give me a batch and I will be able to take infinite random and independent samples of size n from it. For any of these samples you can calculate Tolerance/S, which happens to be Pp^. That will give you a distribution of Pp^ for samples of size n taken from that batch. So Pp^ IS a random variable with a fixed distribution for a given batch. And Pp^ is used to estimate the Pp of that batch. — This is true, because you have a finite batch size. But does it tell you anything about the process's next batch? No, it does not, because the process is subject to special cause variation. Also, your statement that you can take an infinite number of random samples from this finite batch is mathematically incorrect. You can only take a finite number.
    4. You said — If the process is unstable, then you cannot predict what will happen in the next batch, neither with Pp^ nor with Cp^. If the process is stable, then Cp=Pp and then you can use either Pp^ or Cp^ to predict what will happen in the next batch. — What is the value of Pp^ if the process is stable?
    5. You said — The key is that, once the batch is made, you eliminated "time" as a variable and then there is no distinction between special and common causes. It is just a batch with a distribution you don't know but want to estimate. And that's what Pp^ does. — What is the value of this knowledge? Nothing, because it does not tell you what the next batch will look like.

    0
    #83958

    Mikel
    Member

    John,
    I am sorry you found Johnboy demeaning – I was just inviting you out to play – I will not do it again.
    Doug is welcome to his opinion – the whole Automotive and Six Sigma community is at odds with him.
    He probably doesn't care anymore anyway – making wine is more important than this nonsense.

    0
    #83959

    Mikel
    Member

    John,
    Again I have to say, I doubt anyone knows what to do with this stuff you put out.
    Your message in a nutshell – stats are important but don’t use them for anything.
    Again I disagree. There is a load of information contained in Cp, Cpk, Pp, and Ppk (I know, I know – you think I should put hats on everything). I and thousands of others use the information daily to gain direction in order to solve problems.

    0
    #83960

    Mikel
    Member

    Do you know any statisticians that ever solved a problem?
    The answer is obvious:
    Zero – if you really want to solve the problem

    0
    #83964

    Gabriel
    Participant

    Jaime,
    Same feeling. The more they try to convince you it is not a hypothesis test, the more it looks, at least, as if it worked like one (to avoid saying it IS one, which I can't support).
    And, for the record, if you mentioned me in your post, then it is Gabriel, not Gabrielle. I'm sure you will note the difference (which is not just the "le") :-)

    0
    #83969

    Gabriel
    Participant

    John, we seem to agree on most things. Let's see:
    First, does "a batch" (understood as the output of a process in a defined period of time) qualify as a population from which one can estimate something based on sampling? Or does only "the process" qualify as a population? For me, the answer is pretty clear: both things qualify.
    "Again Pp^ is NOT an estimator of process performance. Also, you're right, if the process is unstable, then xbar is NOT an estimator of the process mu either."
    Pp^ is an estimator of the process performance DURING THE TIME INVOLVED IN THE STUDY (i.e., how the process performed in the batch). It is not (and I NEVER said it was) an estimator of process capability or of future process performance. The same for Xbar: it is an estimate of the batch's Mu, not the process Mu, unless the process is stable.
    "1. In assessing process capability we are interested in what the process will do (i.e. prediction). The process continues to produce product (i.e., it is NOT a finite batch). It is an infinite time series population."
    That's why Pp is "process performance" and not "process capability". From that I conclude that if your process is not fully stable, then you don't care what on Earth it is actually delivering. You don't care about actual, real, current customer satisfaction. You are only interested in what it "could" deliver in the future if it were stable, or the "potential capability" to satisfy your customer.
    "2. You said — Pp=Tolerance/Sigma(total) and Pp^=Tolerance/S(total), then you say that S(total) is not an estimator of Sigma(total)? — For a finite population or an infinite one from a random process s is an estimator of sigma. However, if you have an infinite time series population generated from BOTH random and special cause variation, then s is NOT an estimator of sigma."
    In that case, the process distribution is not defined, because the distribution changes over time. Process Mu, Sigma and shape do not exist. I said that in the "definitions" part of my first, long post, and as far as I remember I haven't said anything contrary to that. Sigma(total) and S(total) are applicable only to the period of the study where the sample comes from (what I called "the batch").
    "3. You said — Give me a batch and I will be able to take infinite random and independent samples of size n from it. For any of these samples you can calculate Tolerance/S, which happens to be Pp^. That will give you a distribution of Pp^ for samples of size n taken from that batch. So Pp^ IS a random variable with a fixed distribution for a given batch. And Pp^ is used to estimate the Pp of that batch. — This is true, because you have a finite batch size. But does it tell you anything about the process's next batch? No it does not, because the process is subject to special cause variation. Also, your statement that you can take an infinite number of random samples from this finite batch is mathematically incorrect. You can only take a finite number."
    No, it does not tell you anything about the future, including the next batch. How many times do I have to say it? You quoted me below saying "If the process is unstable, then you cannot predict what will happen in the next batch, neither with Pp^ nor with Cp^". And you are quoting me in this point saying "Pp^ is used to estimate the Pp for that batch". And about the samples: why couldn't I take an infinite amount of samples of any size (even bigger than the population size) from a finite population? Just return the sample to the population after taking it. If you didn't sample with replacement, the samples would not be independent and the result would be biased.
    "4. You said — If the process is unstable, then you can not predict what will happen in the next batch, neither with Pp^ nor with Cp^. If the process is stable, then Cp=Pp and then you can use either Pp^ or Cp^ to predict what will happen in the next batch. — What is the value of Pp^ if the process is stable?"
    What's the value of Cp if the process is stable? Since in that case Cp=Pp, I guess the value is the same (both the mathematical value and the practical value).
    "5. You said — The key is that, once the batch is made, you eliminated "time" as a variable and then there is no distinction between special and common causes. It is just a batch with a distribution you don't know but want to estimate. And that's what Pp^ does. — What is the value of this knowledge? Nothing, because it does not tell you what the next batch will look like."
    It does not tell you what the next batch will look like. That is true, and I said it all the time. As you see, we agree on about everything but one thing. You think that the only valuable information is that which lets you predict the future. I don't. I also want to know what I am shipping to the customer now. And, by the way, even if I am not predicting the future, it is inferential statistics anyway, because I am estimating parameters of the population (batch) from sample statistics.
    And, finally, to add more to the discussion: there is one case when Pp^ is much better than Cp^ for predicting the future. When you have an unstable process where the special causes and their effects are known, predictable, and will stay there (no intention to remove them) – what is called a predictable unstable process.
    In previous posts I gave two examples: a controlled drifting process, and a process affected by the thickness of a metal strip (raw material) where that thickness had a known variation from coil to coil but negligible variation within the coil. In those cases Cp tells you what you could get if you did what you won't do, and that's useless information. Pp, on the other hand, tells you how the process performed and how it will perform, since the special causes will be present with the same effects in the future.
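    [Editor's sketch: a minimal Python simulation of Gabriel's coil example, assuming normal data and illustrative numbers throughout: within-coil sigma is small, coil-to-coil shifts are large and persistent, so a Cp built from within-coil variation alone overstates what Pp reports.]

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    coil_means = rng.normal(10.0, 0.8, size=20)        # persistent coil-to-coil shifts
    data = np.concatenate([rng.normal(m, 0.2, 50) for m in coil_means])

    tolerance = 6.0                                    # USL - LSL (assumed)
    sigma_within = 0.2                                 # known within-coil sigma
    sigma_total = data.std(ddof=1)

    print(f"Cp (within only) = {tolerance / (6 * sigma_within):.2f}")  # ~5
    print(f"Pp (total)       = {tolerance / (6 * sigma_total):.2f}")   # ~1.2
    # If the coil-to-coil effect stays, Pp is the honest predictor here.
    ```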

    0
    #83971

    Jamie
    Participant

    Gabriel, I enjoy your thoughts; thanks for all the posts. Sorry for the misspelling... and for the books, it's Jamie (not Jaime :)).
    Jamie

    0
    #83978

    Charles H
    Participant

    Thanks for your kind words, Jamie. I sense that you have adopted life-long learning as part of your personal continual improvement philosophy. Bravo! I applaud your approach – investigate, read, learn – and then reach your own conclusions on the issues. The method of a true Six Sigma professional. Don't take the "experts'" word for it – seek profound knowledge and understanding. We need assumption challengers out there! Your search will serve you well.
    As we have seen in this thread, there are two camps involved in this discussion – those that believe as Deming and Shewhart prescribe (don't associate probabilities and hypothesis tests with control charts), and those that believe differently. My belief is that many posting to this thread are reacting based on assumptions and intuition, not upon their own investigation, knowledge and understanding. All one can ask is that we look at the facts, read and understand the "experts," then decide for ourselves. [By the way – is it just my perception, or is there a general lack of knowledge of and/or an aversion in the Six Sigma community towards Dr. Deming's and Shewhart's teachings?]
    I do not find the analogies used in this discussion to be compelling. Analogies are great for getting a point across, but an analogy is nothing more than a "resemblance in some particulars between things otherwise unlike." I find Gabriel's analogy, good as it may be on the surface, to be of little value in this discussion. An analogy is not a good substitute for an accurate and precise statistical discussion based upon data and facts. This is not a slam on Gabriel – I find his posts to be appropriate and well thought out (thanks Gabriel – good stuff).
    Lastly, my thanks to Jim for raising this thread. I know you've taken a lot of hits, Jim. You put yourself out there on point and I gotta respect that – aside from whether we agree or disagree on the finer points. :-) Good job!
    Best to all,
    Charles H.

    0
    #83981

    Jamie
    Participant

    Charles, thanks for the post... it actually allows me to leave this long thread feeling pretty good. I'm off to learn more. Thanks to all for the great discussion.
    Jamie

    0
    #83984

    Gabriel
    Participant

    The "analogy" appeared in this thread because we were discussing how slow and incapable a control chart could be at detecting small shifts of the mean, and how, because of that, the absence of OOC signals does not ensure that the process is stable (if we understand stable = delivering the same distribution over time). And Stan had the bad idea of addressing that point by saying something like "the alpha risk is 0.27%", and I had the bad idea of replying that the risk of failing to detect the instability would be the "beta risk", not "alpha".
    Then the thing began to grow and grow, adding what I think is a lot of information with little value. If you use the tool properly and get value from it, it is not very important whether you say that it is a hypothesis test, or that it works like one, or that it has nothing to do with a hypothesis test.

    0
    #83985

    Charles H
    Participant

    Hey Gabriel:
    You hit the nail on the head. When it's all over and done with, it comes down to what gives you value and what doesn't in the real world. My only caution is that, in ignoring the intent and details of the tools (the underlying statistical foundations), we take the risk of misapplying the tool and getting erroneous information, and thus we make an erroneous decision based upon the data – and "off we go to the Milky Way". I have seen this happen to practitioners numerous times in my journey. We can only determine what is and is not significant to our given situation based upon a good understanding of the tools and their limitations. I think we would be in agreement on this point?
    Charles H.

    0
    #83986

    John J. Flaig
    Participant

    Gabriel, it seems we are pretty much in agreement, but let me try and address the points where you feel that Pp^ is of value.
    1. You're right, Pp^ can be used as a metric for a fixed batch. But as I mentioned previously, I think a better measure is S(LT) vs. S(ST) using the F* test, a histogram, the fraction nonconforming in the tails, and net sensitivity (see my article in Quality Engineering, Marcel Dekker, Vol. 11, No. 4, 1999).
    2. I found the last paragraph of your response quite interesting. You gave an example of a process having controlled drift, and I think you're suggesting that Pp^ could be useful in predicting the future process results. You're right! However, you should ask yourself: is the process really unstable? The answer is NO. The process is actually in dynamic control and can be modeled using time series methods (see Montgomery's SPC text for a tool wear example). So in this case you should use Cp for the dynamic model, because the process is actually in control.
    I need to think about your coil example. I'll get back to you if I can come up with an answer.
    It is a pleasure to discuss these ideas with you. You have a very sharp mind and come up with very thoughtful examples.
    Regards,
    John
    PS – Here is a quote from Dr. Kotz (Professor of Ind. Eng. and the world's leading authority on process capability indices): "We strongly recommend against the use of Pp and Ppk as these indices are actually a step backwards in quantifying process capability."

    0
    #83988

    Gabriel
    Participant

    We are in full agreement. I've never liked those "instant pudding" – "don't ask how it works" – "ready to use without any background knowledge" toolkits.
    Imagine that I am leading an SPC course and I say: "There are some signals you can look for in the control chart that will seldom happen if the process distribution has not changed since the time the control limits were calculated. Take, for example, 7 points above the average. If the process is stable, any point can be either above or below the average with a probability of 0.5 for each case, independently of whether the previous points were above or below the average. Then the probability that any 7 consecutive points are all above the average just by chance, in a stable process, is 0.5^7=0.008, or 0.8%. So if you find that signal in a control chart, you would reject that the process is stable and would investigate, find and eliminate the special cause that, for example, moved the average upwards. Of course, there is a risk that you rejected the stability wrongly because, in a stable process, about 1 out of 100 groups of 7 points will show that signal just by chance, regardless of the subgroup size. It's like the alpha risk of a hypothesis test, remember? Note that there is also a risk that the average has actually shifted upwards but, by chance, you still have some points below the average, so you fail to detect the instability. It's like the beta risk of a hypothesis test. Note that this risk of failing to detect a shift is greater for small shifts than for large ones and that, for a given shift, the risk is smaller for bigger subgroups because, as we've seen before, the larger the subgroup the smaller the variation of Xbar, so it is less probable that a point will fall below the average line drawn in the chart if the actual average has shifted upwards, and therefore more probable to get 7 consecutive points above the average."
    I used probabilities, and I used the analogy with a hypothesis test. Do you find the previous explanation confusing and misleading? Explanatory and helpful for understanding the concepts and use of the tool? Other?
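    [Editor's check of the arithmetic in the post above, in minimal Python: the 0.5^7 run probability for a stable process, and the same run probability once the mean has shifted upward. Normality is assumed and the shift values are illustrative.]

    ```python
    from scipy.stats import norm

    p_stable = 0.5 ** 7
    print(f"stable process: P(7 above) = {p_stable:.4f}")  # 0.0078, ~0.8%

    # After an upward shift of d standard errors, each point sits above the
    # old center line with probability P(Z > -d).
    for d in (0.5, 1.0, 2.0):
        p_above = norm.sf(-d)
        print(f"shift {d} SE: P(7 above) = {p_above ** 7:.3f}")
    ```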

    0
    #83991

    Gabriel
    Participant

    John, I'm happy we agree so much.
    I saw something about those tools and metrics you mention at a URL you posted before. It looked very interesting.
    About the nonconforming fraction in the tails: I don't like it as a real estimator of the fraction nonconforming, especially if the process is capable (or performed very well in this batch), because in this case you have too few parts in the tails and probably no part from the sample will be that far out, so it will be very difficult to prove that any given mathematical distribution fits the process (or batch) distribution that far. Now, if you use that value not as a real estimator of the % nonconforming but just as a capability (or performance) index, that's OK. My point is that if your process is very good (or performed very well in one batch), for example Cpk=1.5, then the process is very good, never mind whether you actually have 0.4, 4 or 40 PPM. And if the process before the last improvement was at Cpk=1, then it was a "barely acceptable" (or not, depending on particular criteria) process, whether at 1500, 2500 or 4500 PPM (the fraction nonconforming is more reliably estimated when it is bigger), and now it is better. I am not saying that it is wrong to estimate the nonconforming fraction. I just don't like it for capable processes.
    About comparing S(LT) vs. S(ST): it is the same as comparing Ppk^ with Cpk^, except that the F test (and, as far as I can see, the F* test too) is designed for S. From that point of view, using S is easier because you don't have to develop a new test for Cp^ vs. Pp^, which could be done.
    Now, I have to recognize my limitations: I don't know the F* test or net sensitivity.
    I also have to admit that I don't know the "dynamic control" concept. I will read the material someday, but I am curious: how do you calculate Cpk in that case?
    And finally:
    "We strongly recommend against the use of Pp and Ppk as these indices are actually a step backwards in quantifying process capability."
    I agree. So let's refrain from using them to quantify process capability. I never pretended to do that except, maybe, for the predictable unstable (or dynamically stable) process case.
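    [Editor's sketch, minimal Python with illustrative assumptions: (1) the normal-theory conversion from Cpk to a nearest-tail nonconforming rate in PPM, of the kind behind the PPM figures quoted above, and (2) the S(ST) vs. S(LT) comparison that mirrors Cpk vs. Ppk. The data and subgroup structure are invented for the demonstration.]

    ```python
    import numpy as np
    from scipy.stats import norm

    def cpk_to_tail_ppm(cpk):
        # Nearest spec limit is 3*Cpk sigmas from the mean (normal assumption).
        return norm.sf(3 * cpk) * 1e6

    for cpk in (1.0, 1.33, 1.5):
        print(f"Cpk {cpk:4.2f} -> {cpk_to_tail_ppm(cpk):8.1f} PPM (one tail)")

    # Short-term sigma from pooled within-subgroup variance, long-term sigma
    # from all data; their ratio is what separates Cpk from Ppk.
    rng = np.random.default_rng(3)
    subgroups = rng.normal(10, 1, (25, 5)) + rng.normal(0, 0.7, (25, 1))
    s_st = np.sqrt(subgroups.var(axis=1, ddof=1).mean())
    s_lt = subgroups.std(ddof=1)
    print(f"S(ST) = {s_st:.2f}, S(LT) = {s_lt:.2f}, ratio = {s_lt / s_st:.2f}")
    ```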

    0
    #83994

    John J. Flaig
    Participant

    Gabriel,
    1. You're right, the fraction nonconforming estimation can be tricky. In my software I used techniques from reliability mathematics to model the process even when no points are beyond the spec limits (see reliability analysis with censored data). I also developed a modified Weibull modeling approach that is more accurate than standard Weibull modeling. Combined, these techniques do an excellent job of estimating the nonconformance rate.
    2. Here is the reference for the F* test: Cruthis, E. N. and Rigdon, S. E. (1993). Comparing Two Estimates of Variance to Determine the Stability of a Process. Quality Engineering, Vol. 5, No. 1.
    3. To compute Cp for a process having uniform drift you can use a transform to rotate the data so that it then appears as a standard control chart. You can then compare the spec tolerance to the process tolerance (i.e., Cp). Basically, the same approach can be used for Cpk, but using the transformed value of the (now constant) mean.
    Regards,
    John
    PS – I think you can get a copy of my paper on Net Sensitivity and my paper on Process Capability Optimization on the ASQ web site.

    0
    #84004

    Gabriel
    Participant

    “3. To compute Cp for a process having uniform drift you can use a transform to rotate the data so that it then appears as a standard control chart. You can then compare the spec tolerance to the process tolerance (i.e., Cp). Basically, the same approach can be used for Cpk, but using the transformed value of the (now constant) mean”
    I was afraid it was something like this. Now, I repeat, I don't know this method, but it sounds to me that a process that begins with, let's say, average 10 and ends with average 12, and another process that starts at 10.8 and ends with an average of 11.2, always with the same S(within), would have the same Cp and Cpk. No... I have to be wrong.

    0
    #84007

    Mikel
    Member

    Gabriel,
    You are right, that is why Pp and Ppk are important. They would see the difference. Cp and Cpk will not.

    0
    #84021

    John J. Flaig
    Participant

    Gabriel, get a calculus book and look for translation and rotation of axes. It should explain how to generate the transform. Once you have the equation you can put it in Excel and transform the original data (and the specification limits) into the new data. The new data will look like a standard Shewhart control chart. I hope this helps.
    Regards,
    John
    PS – Montgomery also covers this technique in his SPC text (see the tool wear example).
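    [Editor's sketch: the "rotation" John describes amounts to removing the fitted drift before computing Cp. This minimal Python sketch substitutes a least-squares detrend for the axis-rotation transform – an assumption, not necessarily the exact method in Montgomery's tool-wear example – with illustrative numbers throughout.]

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    t = np.arange(100)
    data = 10.0 + 0.02 * t + rng.normal(0, 0.1, t.size)  # uniform tool-wear drift

    # Fit and subtract the linear drift; what remains is common-cause spread.
    slope, intercept = np.polyfit(t, data, 1)
    residuals = data - (intercept + slope * t)

    usl, lsl = 13.0, 9.0
    sigma = residuals.std(ddof=1)
    print(f"raw sigma       = {data.std(ddof=1):.3f}")   # inflated by drift
    print(f"detrended sigma = {sigma:.3f}")              # common-cause only
    print(f"Cp (detrended)  = {(usl - lsl) / (6 * sigma):.2f}")
    # As Gabriel notes next, this Cp ignores how much tolerance the drift
    # itself consumes over the run.
    ```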

    0
