Cp and Cpk too high?


This topic contains 3 replies, has 4 voices, and was last updated by  Erik L 12 years, 4 months ago.

Viewing 4 posts - 1 through 4 (of 4 total)

    Martin Burkert

    After calculating some Cp and Cpk values, many of them are very high. For example, a feature from an electronic board test:
    Lower limit: 4977.5 mV
    Upper limit: 5022.5 mV
    Target: 5000 mV
    Average (of 40 samples): 5003.05 mV
    Std. dev.: 0.13 mV
    For the above example, the calculated Cp and Cpk values are nearly 60! Is it senseless to test a feature with such wide tolerances (compared to the variation)? Is there a rule or "good practice" that says the tolerances should be reviewed when Cp/Cpk exceeds some value X?
    Thanks for any help.
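    For reference, the standard Cp and Cpk formulas applied to the numbers quoted above give roughly the values described. Here is a minimal sketch (the function name and variable names are my own, not from the thread):

    ```python
    def cp_cpk(lsl, usl, mean, stddev):
        """Return (Cp, Cpk) from spec limits and process statistics.

        Cp  = (USL - LSL) / (6 * sigma)
        Cpk = min(USL - mean, mean - LSL) / (3 * sigma)
        """
        cp = (usl - lsl) / (6 * stddev)
        cpk = min(usl - mean, mean - lsl) / (3 * stddev)
        return cp, cpk

    # Values from the board-test feature in the post:
    cp, cpk = cp_cpk(lsl=4977.5, usl=5022.5, mean=5003.05, stddev=0.13)
    print(f"Cp  = {cp:.1f}")   # ~57.7
    print(f"Cpk = {cpk:.1f}")  # ~49.9
    ```

    Cp only compares the tolerance width to the process spread; Cpk also penalizes the off-target mean (here 5003.05 mV vs. a 5000 mV target), which is why it comes out lower.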


    Dhruv Mittal

    Hi Martin,
    It seems your process is extremely capable relative to your tolerance. I would agree that it would be senseless to keep testing such a feature. If you want to be completely sure, you could measure over a longer period and compute Pp and Ppk, but with Cp and Cpk values this high it shouldn't be a problem anyway.
    What you could do with this, though, is tighten the spec limits if the customer values that. Customers often appreciate being told that your product holds a spec of 10 +/- 2 when the normal market spec is 10 +/- 30. This competitive advantage may help you win orders and sometimes command better prices.
    Dhruv Mittal  –  Master Black Belt



    Just because you have a high Cpk does not mean the process is in control. Review the rules for runs and trends and adjust the process according to those rules, and the Cpk will follow a more typical pattern. Tighter control limits can be applied to the process for the purpose of control, which will almost assure zero defects. Another approach is to lessen the frequency of charting, which should gain some spread in the data.
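    One of the run rules alluded to above (the Western Electric rules flag, among other patterns, a long run of points on one side of the centerline) can be sketched like this. The run length of 8 and the helper name are my assumptions, not from the thread:

    ```python
    def has_run(data, centerline, run_length=8):
        """Return True if `data` contains `run_length` consecutive points
        strictly on the same side of `centerline` (a common run rule
        signalling a process shift, even when Cpk looks excellent)."""
        count = 0
        last_side = 0
        for x in data:
            side = 1 if x > centerline else (-1 if x < centerline else 0)
            if side != 0 and side == last_side:
                count += 1
            else:
                count = 1 if side != 0 else 0
            last_side = side
            if count >= run_length:
                return True
        return False

    # A process drifting above its centerline trips the rule:
    print(has_run([5000.1, 5000.2] * 5, centerline=5000.0))  # True
    ```

    The point of a check like this is that capability indices say nothing about time order: a drifting but tight process can show Cpk near 60 while being statistically out of control.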


    Erik L

    My only caveat to what has already been posted would be to look at how appropriate the sample is that the data was taken from. Having an n, whatever that sample size is, does not in itself ensure believable data. Are the 40 data points independent? Did they come from the same shift? Same location? Same hour? Same instrument? Same batch of raw material? Same operator? The more we hold things constant, and/or fail to pay attention to the variance components of the process, the more we risk collecting dependent data, resulting in meaningless numbers that do not reflect the true voice of the process (VOP).


The forum ‘General’ is closed to new topics and replies.