iSixSigma

Short term and Long term


  • Author
    Posts
  • #33323

    David Neagle
    Participant

    I am more than a little confused, so any help is appreciated. I am trying to get to grips with the difference between Cpk and Ppk. As I understand it, Cpk is short term and Ppk is long term, and that is where the difficulty arises: short term what, and long term what? And what constitutes short term and long term in terms of time? I am asking because I need to decide which capability index is best suited to our processes. We have parts produced from dedicated tooling in relatively small batches, say 1,000 once every 2 or 3 months.

    #89997

    Gabriel
    Participant

    My opinion (and many will not agree) is that neither Cpk nor Ppk has anything to do with “term”. The best I can say in a few words is that Pp is what the process delivered (note the past tense), while Cp is what it would deliver (what Pp would be) if it were completely stable. If the process is actually stable, Cp and Pp will be (on average) the same figures. And because stability is consistency of behaviour over time, both Cp and Pp will be what the process delivered, delivers, and will deliver as long as it remains stable. There were long discussions on this subject before; if you search the forum you will find lots of threads on this topic.
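    Gabriel's distinction can be sketched numerically. The following Python snippet is an illustration, not from the thread (the function name `cp_pp` and the sample data are made up): it pools the within-subgroup variation for Cp and uses the overall variation for Pp, so a process whose subgroup means drift over time shows Pp falling well below Cp, while a stable process gives roughly equal figures.

```python
import math

def cp_pp(subgroups, lsl, usl):
    """Compute Cp (from pooled within-subgroup sigma) and Pp (from
    overall sigma). Cp estimates what the process *would* deliver if
    it were stable; Pp reflects what it actually delivered."""
    g = len(subgroups)                 # number of subgroups
    n = len(subgroups[0])              # readings per subgroup
    # sum of squares within subgroups (each reading vs. its subgroup mean)
    ss_w = sum(sum((x - sum(sg) / n) ** 2 for x in sg) for sg in subgroups)
    s_st = math.sqrt(ss_w / (g * (n - 1)))         # short-term sigma
    data = [x for sg in subgroups for x in sg]
    mean = sum(data) / len(data)
    s_lt = math.sqrt(sum((x - mean) ** 2 for x in data) / (len(data) - 1))
    cp = (usl - lsl) / (6 * s_st)
    pp = (usl - lsl) / (6 * s_lt)
    return cp, pp

# Subgroup means drifting from 1 to 11: Pp drops far below Cp.
print(cp_pp([[0.0, 2.0], [10.0, 12.0]], 0.0, 12.0))
# Identical subgroups (stable): Cp and Pp are close.
print(cp_pp([[0.0, 2.0], [0.0, 2.0]], 0.0, 12.0))
```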

    #90000

    Mike Carnell
    Participant

    David,
    Nobody is going to be able to give a concrete answer on this unless they understand the way you run your operation. That does not mean they won’t try.
    Look at the formulas and how the numbers are calculated. They are two different pieces of information. Look at your process and figure out what information you need. Then decide which formula, using which data, will tell you what you want to know. This is similar to Gabriel’s advice on “term”.
    Try something else. Do a short run – around 30 pieces. Control everything the best you can – one operator, characterized input material (so it is somewhat homogeneous), controlled tool wear, setup, etc. This will basically simulate perfect control (in all probability it will send the people who are “experienced” in capability ballistic – just ignore them). When you get your Cp, Cpk, Pp, and Ppk numbers, you will have a pretty good estimate of how much control you lose as the process runs by calculating the difference between those numbers and the short controlled run. It will give you an entitlement number. If the short run is the same as the “long term”, you will know you have a basic technology issue rather than a control issue. This always seemed to me a more valuable piece of information.
    Just my opinion. Good luck.
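    Mike's entitlement comparison might be summarized as a small helper. This is purely illustrative and not from the thread: the function name, the 1.33 capability target, and the 0.3 gap threshold are arbitrary assumptions.

```python
def diagnose(cpk_controlled, cpk_routine, target=1.33):
    """Compare a tightly controlled short run (entitlement) against
    routine production, following the idea in the post above.
    Thresholds here are arbitrary illustrations."""
    gap = cpk_controlled - cpk_routine
    if cpk_controlled < target:
        # Even under near-perfect control the process falls short.
        return "technology issue: controlled run is below target"
    if gap > 0.3:
        # Capability exists but is lost during normal running.
        return "control issue: capability lost in routine production"
    return "process is running near its entitlement"
```

    A large gap between the controlled run and routine production points at control; a short run that is no better than the long term points at the technology itself, which is Mike's closing observation.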

    #90022

    David Neagle
    Participant

    Thanks to both Gabriel and Mike. I can see I have a loooong way to go in my understanding.
    David
     

    #90025

    Mikel
    Member

    Try starting with the definitions in AIAG’s SPC manual pages 79 & 80.

    #90038

    Reigle Stewart
    Participant

    David: On Dr. Harry’s Q&A forum there are several questions related to yours. I have taken the liberty to “copy and paste” one of them for you. I hope it helps your understanding.

    The capability of a process has two distinct but interrelated dimensions. First, there is “short-term capability,” or simply Z.st. Second, we have the dimension “long-term capability,” or just Z.lt. Finally, we note the contrast Z.shift = Z.st – Z.lt. By rearrangement, we recognize that Z.st = Z.lt + Z.shift and Z.lt = Z.st – Z.shift. To better understand the quantity Z.shift, we must consider some of the underlying mathematics.

    The short-term (instantaneous) form of Z is given as Z.st = |SL – T| / S.st, where SL is the specification limit, T is the nominal specification, and S.st is the short-term standard deviation. The short-term standard deviation is computed as S.st = sqrt[SS.w / (g(n – 1))], where SS.w is the sum of squares due to variation occurring within subgroups, g is the number of subgroups, and n is the number of observations within a subgroup.

    It should be fairly apparent that Z.st assesses the ability of a process to repeat (or otherwise replicate) any given performance condition at any arbitrary moment in time. Owing to the merits of a rational sampling strategy, and given that SS.w captures only momentary influences of a transient and random nature, we are compelled to recognize that Z.st is a measure of “instantaneous reproducibility.” In other words, the sampling strategy must be designed so that Z.st does not capture or otherwise reflect temporal influences (time-related sources of error). The metric Z.st must echo only pure error (random influences).

    Now considering Z.lt, we understand that this metric is intended to expose how well the process can replicate a given performance condition over many cycles of the process. In its purest form, Z.lt is intended to capture and “pool” all of the observed instantaneous effects as well as the longitudinal influences. Thus, we compute Z.lt = |SL – M| / S.lt, where SL is the specification limit, M is the mean (average), and S.lt is the long-term standard deviation. The long-term standard deviation is given as S.lt = sqrt[SS.t / (ng – 1)], where SS.t is the total sum of squares. In this context, SS.t captures two sources of variation: errors that occur within subgroups (SS.w) as well as those created between subgroups (SS.b). Given the absence of covariance, we can compute SS.t = SS.b + SS.w.

    In this context, we see that Z.lt provides a global sense of capability, not just a “slice in time” snapshot. Consequently, we recognize that Z.lt is time-sensitive, whereas Z.st is relatively independent of time. Based on this discussion, we can now better appreciate the contrast Z.st – Z.lt. This contrast underscores the extent to which time-related influences can unfavorably bias the instantaneous reproducibility of the process. Thus, we compute Z.shift = Z.st – Z.lt as a variable quantity that corrects, adjusts, or otherwise compensates the process capability for the influence of longitudinal effects.

    If the contrast is related only to a comparison of short- and long-term random effects, the value of Z.shift can be theoretically established. For the common case ng = 30 and a type I decision error probability of .005, the equivalent mean shift will be approximately 1.5 S.st. If the contrast also accounts for the occurrence of nonrandom effects, the equivalent mean shift cannot be theoretically established; it can only be empirically estimated or judgmentally asserted.
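    The quoted definitions can be turned into a short computation. This Python sketch is an illustration (the function name and data are assumptions) that follows the formulas above term by term: S.st from SS.w with g(n – 1) degrees of freedom, S.lt from SS.t with ng – 1, then Z.st = |SL – T| / S.st, Z.lt = |SL – M| / S.lt, and Z.shift = Z.st – Z.lt.

```python
import math

def z_metrics(subgroups, sl, target):
    """Z.st, Z.lt and Z.shift as defined in the post above.
    `subgroups` is a list of equal-sized subgroups of readings,
    `sl` the specification limit, `target` the nominal (T)."""
    g = len(subgroups)                 # number of subgroups
    n = len(subgroups[0])              # observations per subgroup
    data = [x for sg in subgroups for x in sg]
    m = sum(data) / (n * g)            # grand mean M
    # SS.w: variation within subgroups only
    ss_w = sum(sum((x - sum(sg) / n) ** 2 for x in sg) for sg in subgroups)
    # SS.t: total variation (equals SS.b + SS.w absent covariance)
    ss_t = sum((x - m) ** 2 for x in data)
    s_st = math.sqrt(ss_w / (g * (n - 1)))   # short-term sigma
    s_lt = math.sqrt(ss_t / (n * g - 1))     # long-term sigma
    z_st = abs(sl - target) / s_st
    z_lt = abs(sl - m) / s_lt
    return z_st, z_lt, z_st - z_lt           # Z.shift
```

    With subgroup means that drift between samples, SS.b inflates S.lt, so Z.lt falls below Z.st and Z.shift comes out positive, which is the bias the post describes.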

    #90039

    Reigle Stewart
    Participant

    David: As a follow-on to my last post, you should understand that: Cp = |T – SL| / (3 * S.st) … Cpk = |M – SL| / (3 * S.st), where M is the process mean … Pp = |T – SL| / (3 * S.lt) … Ppk = |M – SL| / (3 * S.lt). It is just as important to understand that Cp is an index of what Dr. Harry calls “instantaneous reproducibility.” This is a term that describes how capable the process is if everything is perfect (the mean is on target and the process spread is due to random causes only). It is the moment-in-time estimate of capability, the absolute “best” the process technology can do. At the other extreme is Ppk. This says how “pragmatically capable” the process is over many cycles of operation (also called longitudinal capability because it is measured over a long period of time). Ppk considers any offset in process centering and also takes into account the long-term spread of the process (due to random and nonrandom causes). In his eight-volume set of books (Vision of Six Sigma), Dr. Harry says that if a good rational sample is taken, then Cp is a measure of how capable the process technology really is. He goes on to say that Ppk is a measure of how well that technology is controlled over time. If Cp = Ppk, we know the technology is being “statistically controlled” over time: no assignable causes were present over the long period in which the capability study was done.
    Hope this helps to clarify things.
    OBFG
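    The four indices quoted above might be sketched directly in code. Note this uses the single-specification-limit forms given in the post; conventional two-sided definitions use (USL – LSL) / 6σ for Cp/Pp and the minimum over both limits for Cpk/Ppk. The function name is an assumption for illustration.

```python
def capability_indices(s_st, s_lt, m, target, sl):
    """Cp, Cpk, Pp, Ppk per the single-limit forms quoted above:
    short-term sigma (s_st) for the C indices, long-term sigma
    (s_lt) for the P indices; m is the process mean, target the
    nominal specification, sl the specification limit."""
    cp  = abs(target - sl) / (3 * s_st)
    cpk = abs(m - sl)      / (3 * s_st)
    pp  = abs(target - sl) / (3 * s_lt)
    ppk = abs(m - sl)      / (3 * s_lt)
    return cp, cpk, pp, ppk

# A centered process (m == target) with s_lt twice s_st: the P
# indices come out at half the C indices.
print(capability_indices(1.0, 2.0, 10.0, 10.0, 16.0))
```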

    #90045

    Kim Niles
    Participant

    Dear Reigle:

    I’ve really enjoyed reading your posts and wish to thank you for taking the time to make them.

    Secondly, I see you worked with Dr. Harry in developing your book “Six Sigma Mechanical Design Tolerancing”, but the book appears to be out of print. How can I get a copy?

    Sincerely,
    KN – https://www.isixsigma.com/library/bio/kniles.asp, http://www.KimNiles.com

    #90048

    Reigle Stewart
    Participant

    Kniles: Thank you for your comments. The book you referenced is out of print after a 15-year run. Every now and then I see some for sale on the Internet. You might be able to call Motorola and still buy one.
    Regards,
    Reigle Stewart

    #90078

    David Neagle
    Participant

    I would like to say a big thank you to all the people who have taken the time to reply to my question. While it is not totally clear yet, it is becoming clearer. I am sure I will have many more questions to pose, and I am sure I will get a very good response. Once again, thank you everyone.
    David
     
     

