iSixSigma

kbower50

Forum Replies Created

Viewing 13 posts - 1 through 13 (of 13 total)
  • Author
    Posts
  • #68610

    kbower50
    Participant

    I was not advocating the use of Pp and Ppk.  In particular, I am in agreement with Montgomery’s discussion of this in his most recent SQC book; I was merely addressing the terminology.  Of course, with a stable process, Pp and Cp should be very similar, as should Cpk and Ppk.  Since many quality practitioners I meet use these indices, such a discussion is, of course, entirely valid in a forum such as this, if it is to be a healthy place of discussion.
    Incidentally, your comment:
    However, a crude estimate of yield and/or fraction non-conforming can be made from Capability Indicies, provided the data are distributed normally.
    could be broader.  In particular, it may be possible to perform a meaningful transformation of the data, perhaps using the Box-Cox algorithm, along with probability plots and good engineering sense.  The Weibull distribution may also be employed, if appropriate, and is included in MINITAB.  There is useful information in the help section, as well as a couple of papers on the website.
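    For what it’s worth, here is a rough sketch of the Box-Cox idea in Python (scipy rather than MINITAB), with made-up data and hypothetical spec limits.  The main point is simply that the spec limits must be transformed with the same lambda as the data before the indices are computed:

        import numpy as np
        from scipy import stats, special

        rng = np.random.default_rng(1)
        x = rng.lognormal(mean=0.0, sigma=0.4, size=200)   # made-up, right-skewed process data
        lsl, usl = 0.2, 4.0                                # hypothetical spec limits

        # Box-Cox needs positive data; lambda is estimated by maximum likelihood
        xt, lam = stats.boxcox(x)
        lsl_t = special.boxcox(lsl, lam)                   # transform the specs with the same lambda
        usl_t = special.boxcox(usl, lam)

        mu, s = xt.mean(), xt.std(ddof=1)                  # overall estimates on the transformed scale
        pp  = (usl_t - lsl_t) / (6 * s)
        ppk = min(usl_t - mu, mu - lsl_t) / (3 * s)
        print(f"lambda = {lam:.2f}, Pp = {pp:.2f}, Ppk = {ppk:.2f}")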

    0
    #68583

    kbower50
    Participant

    I just wanted to clarify one section of your answer as it may be misinterpreted from first inspection:
    In particular,
    Many people will tell you that the data needs to be normally distributed. That is simply not true. Cp is a just ratio–nothing more. Once you use Cp (or Cpk) to estimate percent defective then you must find if the data is truly normally distributed.
    You’re correct in the first sentence – if we are monitoring a process using a control chart, then of course it is (generally speaking) robust to the assumption of Normality – we can rely upon, e.g., Tchebycheff’s inequality to show that the probability of the subgroup mean falling more than 3S/√n from the centerline is small.  I’m concerned people may read the first statement and skip over your fourth sentence.  Capability indices such as Cp and Cpk are highly sensitive to the Normality assumption – a fact frequently ignored.  For further discussion, I would advise referring to Somerville & Montgomery (1996), “Process Capability Indices and Non-Normal Distributions,” Quality Engineering, Vol. 9.  It’s a very interesting paper.
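    To illustrate the point numerically, here is a small made-up example in Python (not MINITAB): two processes with the same mean and standard deviation, and hence the same capability index, but very different actual defect rates once Normality fails:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        usl = 8.0                                           # hypothetical upper spec limit

        samples = {
            "normal": rng.normal(5.0, 1.0, 100_000),
            "skewed": 4.0 + rng.gamma(1.0, 1.0, 100_000),   # same mean (5) and sd (1), but skewed
        }
        for name, x in samples.items():
            mu, s = x.mean(), x.std(ddof=1)
            cpu = (usl - mu) / (3 * s)                      # upper capability index
            ppm_if_normal = stats.norm.sf(3 * cpu) * 1e6    # ppm implied IF the data were Normal
            ppm_observed  = (x > usl).mean() * 1e6          # ppm actually observed in the sample
            print(f"{name}: CPU = {cpu:.2f}, implied ppm = {ppm_if_normal:.0f}, observed ppm = {ppm_observed:.0f}")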

    0
    #68567

    kbower50
    Participant

    You’re right – it used to be called the homogeneity of variance test, but was changed to “test for equal variance” due to customer demand.  I’ve a paper on using MINITAB to do ANOVA, along with relevant references, reproduced from SC&I at http://www.minitab.com/company/VirtualPressRoom/Articles/SCIFeb2000ANOVA.htm
    Hope it helps.  There’s also a paper on the paired t-test through http://www.minitab.com/company/virtualpressroom/Articles/index.htm
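    If you don’t have MINITAB to hand, a rough equivalent sketch in Python (scipy, with made-up samples) looks like this – Bartlett’s test is the classical homogeneity-of-variance test, while Levene’s is the more robust option:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        a = rng.normal(10, 1.0, 30)                 # three made-up samples
        b = rng.normal(10, 1.5, 30)
        c = rng.normal(10, 1.0, 30)

        # Bartlett's test is the classical (Normal-theory) test; Levene's is more
        # robust to departures from Normality.
        print("Bartlett:", stats.bartlett(a, b, c))
        print("Levene:  ", stats.levene(a, b, c))

        # One-way ANOVA on the means, for comparison
        print("ANOVA:   ", stats.f_oneway(a, b, c))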

    0
    #68568

    kbower50
    Participant

    Sir R.A. Fisher’s original work on DOE was in agricultural experimentation, so the experimenter will be in fine company.  I would urge you to consider effects such as blocking (piglets nested under each pig) when searching for the optimum feed, etc.  I’d refer you to Montgomery’s DOE books (e.g. 2000) for a fuller, contemporary discussion of DOE, as well as Box, Hunter & Hunter (1978), of course.
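    As a rough illustration of the blocking idea (a sketch only, with hypothetical pig and feed labels), the layout might be randomized along these lines in Python:

        # Randomizing feeds to piglets within each pig (pigs acting as blocks).
        import random

        random.seed(42)
        feeds = ["A", "B", "C", "D"]
        pigs = [f"pig_{i}" for i in range(1, 6)]     # 5 pigs (blocks), 4 piglets used from each

        plan = []
        for pig in pigs:
            assignment = feeds[:]                    # each feed appears once within each block
            random.shuffle(assignment)               # randomization is restricted to within the block
            for piglet, feed in enumerate(assignment, start=1):
                plan.append((pig, piglet, feed))

        for row in plan:
            print(row)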

    0
    #68571

    kbower50
    Participant

    Minor correction, if we are to use the Automotive Industry Action Group (AIAG) terminology.  Cp uses the within-subgroup (a.k.a. short-term) standard deviation, hence we require Cp = Cpk = 2 for a “six sigma” process.  So in the “short term” (theoretically, of course) we’re actually at about 0.001 ppm per tail – roughly two defects per billion in total.
    When the long-term shift of 1.5 sigma is taken into account, we move from a discussion of Cp and Cpk to Pp and Ppk.  It is therefore Ppk = 1.5 that is required, using their terminology, for a “six sigma” process.  Note that the estimate of the true process standard deviation used in Pp and Ppk is the “overall standard deviation,” as reflected in MINITAB output; the formulae are contained in the help section.
    Note that the Z.bench values may also be obtained in MINITAB by choosing “Options” through the Normal capability analysis dialog box.
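    To make the within/overall distinction concrete, here is a rough sketch in Python with made-up subgrouped data and hypothetical spec limits (the within-sigma here is a simple pooled estimate; it ignores any unbiasing constants, so it will not match MINITAB’s output exactly):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        lsl, usl = 4.0, 16.0                          # hypothetical spec limits
        # 40 made-up subgroups of size 5, with some drift in the subgroup means
        subgroups = np.array([rng.normal(10 + d, 1.0, 5) for d in rng.normal(0, 0.5, 40)])

        sigma_within  = np.sqrt(subgroups.var(axis=1, ddof=1).mean())   # pooled within-subgroup
        sigma_overall = subgroups.ravel().std(ddof=1)                   # overall (long-term)
        mu = subgroups.mean()

        cp,  cpk = (usl - lsl) / (6 * sigma_within),  min(usl - mu, mu - lsl) / (3 * sigma_within)
        pp,  ppk = (usl - lsl) / (6 * sigma_overall), min(usl - mu, mu - lsl) / (3 * sigma_overall)

        # Z.bench-style summary: total expected fraction nonconforming under Normality
        p_total = stats.norm.cdf(lsl, mu, sigma_overall) + stats.norm.sf(usl, mu, sigma_overall)
        z_bench = stats.norm.isf(p_total)
        print(f"Cp={cp:.2f} Cpk={cpk:.2f}  Pp={pp:.2f} Ppk={ppk:.2f}  Z.bench={z_bench:.2f}")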

    0
    #68474

    kbower50
    Participant

    The 2nd law of thermodynamics provides an incentive to give ourselves some margin of error when thinking of how a process will behave over time.  The use of 1.5 sigma as the value throughout all industries is controversial.  If the reader is pragmatic, however, at least we’re moving away from the unrealistic viewpoint of a process fluctuating randomly around some fixed mean for the indefinite future.  SQC does its best to reach that state, but let’s be honest…
    Obviously, we should carry on regardless!  Of course capability indices, etc. require evidence of stability for the parameter estimates to be used validly, so this concept shouldn’t be used as a scapegoat.  Incidentally, George Box has a nice discussion in Quality Engineering earlier this year.  Hope this helps.

    0
    #68289

    kbower50
    Participant

    Hello, I’m very concerned with your statement that you can report capability indices for a non-stable process.  This would not be good practice if one were interested in reporting indices to suggest to a customer how your process will perform over time.  If we have unreliable parameter estimates for the process mean and process standard deviation, then their usage in a capability index is disingenuous.  As a note, in the AIAG (Automotive Industry Action Group) manual, it states (SPC p. 80) that Pp and Ppk… “should be used only to compare to or with Cp and Cpk and to measure and prioritize improvement over time.”  In Montgomery’s newest SQC book, he has a discussion on this topic.  I would strongly urge you to consider his comments, as well as those by Shewhart and Deming on this topic.  For example, as Deming once wrote (Out of the Crisis, p. 314) “…a process has a capability only if it is stable.” 

    0
    #68290

    kbower50
    Participant

    Other good sources of information are Montgomery’s most recent SQC book and a couple of papers available from the Minitab website via the Company>Virtual Pressroom>Magazine Articles and Reprints route.
    There will also be a paper on the use of confidence intervals for Cp and Cpk at that site shortly; the accompanying macro may already be downloaded from the website.  As a follow-up to the previous respondent, I agree entirely that subgroup size should be a consideration – the confidence intervals seek to incorporate this information.

    0
    #68292

    kbower50
    Participant

    To reinforce the previous note, if you use the macro available from the Minitab website, using the information you provided, an approx 95% CI for Cp is 1.51 to 9.96.  For Cpk, an approx 95% CI is 1.06 to 9.66.
    The usual caveats of Normality and stability apply, of course, for this type of analysis to even make sense in the first place.
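    For anyone who wants to see roughly what is going on inside such a macro, here is a sketch in Python of the standard chi-square interval for Cp and Bissell’s approximation for Cpk – the macro itself may differ in detail, and the point estimates and n below are purely hypothetical:

        from scipy import stats

        def ci_cp(cp_hat, n, conf=0.95):
            # Chi-square interval for Cp (n = number of observations)
            a = (1 - conf) / 2
            lower = cp_hat * (stats.chi2.ppf(a,     n - 1) / (n - 1)) ** 0.5
            upper = cp_hat * (stats.chi2.ppf(1 - a, n - 1) / (n - 1)) ** 0.5
            return lower, upper

        def ci_cpk(cpk_hat, n, conf=0.95):
            # Bissell's normal approximation for Cpk
            z = stats.norm.isf((1 - conf) / 2)
            se = (1 / (9 * n) + cpk_hat ** 2 / (2 * (n - 1))) ** 0.5
            return cpk_hat - z * se, cpk_hat + z * se

        print(ci_cp(2.0, n=20))     # hypothetical point estimate and n
        print(ci_cpk(1.8, n=20))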

    0
    #68294

    kbower50
    Participant

    Matt, check out my second paper on capability analysis – it deals with non-Normality.  It can be found on the Minitab website via Company>Virtual Pressroom>Magazine Articles and Reprints.  There’s a discussion on Box-Cox transformations as well as Weibull modeling.  You can download the data I used for the examples in the paper from the Minitab site as well, if you so choose.
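    As a rough flavour of the Weibull approach (a sketch only, with made-up data, an upper spec only, and scipy rather than MINITAB), the fitted distribution supplies both an estimated fraction nonconforming and a percentile-based index:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(11)
        x = 3.0 * rng.weibull(1.8, 500)      # made-up, right-skewed process data
        usl = 9.0                            # hypothetical upper spec limit

        shape, loc, scale = stats.weibull_min.fit(x, floc=0)        # 2-parameter Weibull fit
        ppm = stats.weibull_min.sf(usl, shape, loc, scale) * 1e6    # estimated ppm above the USL

        # Percentile-based index: (USL - median) / (99.865th percentile - median)
        median  = stats.weibull_min.ppf(0.5,     shape, loc, scale)
        p99865  = stats.weibull_min.ppf(0.99865, shape, loc, scale)
        ppk_usl = (usl - median) / (p99865 - median)

        print(f"shape={shape:.2f}, scale={scale:.2f}, ppm={ppm:.0f}, Ppk(USL)={ppk_usl:.2f}")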
    Hope it’s useful.
     
    -Keith

    0
    #68297

    kbower50
    Participant

    The arguments for robustness to the Normality assumption are associated with Tchebycheff’s inequality, as covered in Shewhart’s original writings on control charting.  It’s true that the CLT will allow for approximate Normality of the subgroup means, hence the false alarm rates frequently referred to would be valid in such an instance (assuming independence as well, importantly).  I would be concerned about quoting such false alarm rates in the context of I-MR charts in the presence of non-Normality, as the CLT, by definition, cannot assist when n = 1.
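    A quick numerical comparison makes the point – the Tchebycheff bound holds for any distribution with finite variance, but it is far weaker than the Normal-theory figure:

        from scipy import stats

        k = 3.0
        chebyshev_bound = 1 / k ** 2          # P(|X - mu| >= k*sigma) <= 1/k^2 for any finite-variance distribution
        normal_rate = 2 * stats.norm.sf(k)    # exact two-sided tail probability under Normality

        print(f"Tchebycheff bound at 3 sigma: {chebyshev_bound:.4f}")   # ~0.1111
        print(f"Normal theory at 3 sigma:     {normal_rate:.4f}")       # ~0.0027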

    0
    #68299

    kbower50
    Participant

    I have a tough time with the concept of 1.5 sigma being the shift.  It may apply to a particular company at a particular time, but not to all companies.  However, I feel that one has to be pragmatic.  The second law of thermodynamics is a pain, but it exists.  If everybody uses the same sigma shift (1.5), then at least everybody is speaking the same language.  Personally, I’d rather just look at the short-term performance (within-sigma) and compute confidence intervals on a STABLE process before getting worked up about the long-term performance estimate.  To paraphrase John Maynard Keynes, in the long run we’re all dead.  Using this type of convention has some justification, I feel – at the very least we should remember that stability around a fixed mean is merely a theoretical concept; SQC tries to bring us to that state…
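    For completeness, the arithmetic behind the convention (just a back-of-the-envelope check in Python):

        from scipy import stats

        z_short_term = 6.0                    # "six sigma" short-term capability
        shift = 1.5                           # the conventional long-term mean shift
        z_long_term = z_short_term - shift    # 4.5 sigma left to the nearer spec limit
        dpmo = stats.norm.sf(z_long_term) * 1e6
        print(f"Z(long term) = {z_long_term}, defects per million = {dpmo:.1f}")   # about 3.4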

    0
    #68300

    kbower50
    Participant

    If there is a significant interaction effect, the Xbar-R computations give lousy estimates of the reproducibility component.  Using ANOVA we have ways around it (e.g. dropping the interaction term and refitting the model, as Minitab does if the p-value for the op*part effect is > 0.25).  You may want to check out my gage R&R paper at http://www.minitab.com/company/virtualpressroom/Articles/index.htm and the references therein, especially the Montgomery and Runger paper from Quality Engineering.
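    To show roughly what the ANOVA method is doing (a sketch only – the mean squares and design sizes below are hypothetical, and the details may differ from Minitab’s implementation):

        from scipy import stats

        # Hypothetical design sizes and mean squares from a crossed parts x operators study
        p, o, n = 10, 3, 2                                        # parts, operators, replicates
        ms_part, ms_op, ms_int, ms_err = 22.6, 1.6, 0.40, 0.35    # hypothetical mean squares

        # Test the operator*part interaction: F = MS(interaction) / MS(error)
        df_int, df_err = (p - 1) * (o - 1), p * o * (n - 1)
        p_value = stats.f.sf(ms_int / ms_err, df_int, df_err)

        if p_value > 0.25:
            # Drop the interaction: pool it with error, then estimate from the reduced model
            ms_pooled       = (df_int * ms_int + df_err * ms_err) / (df_int + df_err)
            repeatability   = ms_pooled
            reproducibility = max((ms_op - ms_pooled) / (p * n), 0.0)
            part_var        = max((ms_part - ms_pooled) / (o * n), 0.0)
        else:
            repeatability   = ms_err
            interaction_var = max((ms_int - ms_err) / n, 0.0)
            operator_var    = max((ms_op - ms_int) / (p * n), 0.0)
            reproducibility = operator_var + interaction_var
            part_var        = max((ms_part - ms_int) / (o * n), 0.0)

        total_grr = repeatability + reproducibility
        print(f"p(op*part) = {p_value:.3f}, GRR variance = {total_grr:.3f}, part variance = {part_var:.3f}")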
    Hope this helps.

    0
Viewing 13 posts - 1 through 13 (of 13 total)