iSixSigma

Karimkonda

Forum Replies Created

Viewing 8 posts - 1 through 8 (of 8 total)
    #57784

    Karimkonda
    Participant

    To my knowledge, we cannot estimate measurement uncertainty just from Gauge R&R results; there are many more factors we need to consider in an uncertainty budget.
    The ISO GUM gives very clear estimation methods for the various situations.
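    To give a feel for what the GUM combination looks like, here is a rough sketch; the component names and values are made up, and I assume unit sensitivity coefficients throughout:
    ```python
    import math

    # Hypothetical uncertainty budget: standard uncertainties (1 sigma),
    # all taken with sensitivity coefficients of 1 for simplicity.
    components = {
        "repeatability (from Gauge R&R)": 0.010,
        "reference standard calibration": 0.005,
        "thermal drift":                  0.004,
    }

    # GUM combination: root sum of squares of the standard uncertainties
    u_c = math.sqrt(sum(u ** 2 for u in components.values()))

    # Expanded uncertainty with coverage factor k = 2 (~95% confidence)
    U = 2 * u_c
    print(f"u_c = {u_c:.4f}, U (k=2) = {U:.4f}")
    ```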
     
    Thanks
    Ashwin

    #57783

    Karimkonda
    Participant

    I am well aware of measurement uncertainty principles and how to read the results of an uncertainty budget, but I have also started doing Gauge R&R studies in my factory to meet a customer requirement and our Six Sigma drive.
    I am confused about which method is more appropriate: can I really go with one single method and validate all my measurements?
     

    #163720

    Karimkonda
    Participant

    Hi,
    What I mean by accuracy rate is this: in any process, errors are critical. If we can control the number of errors, we reduce the error rate and in turn improve the accuracy rate. We can identify and improve the main causes of errors by following the DMAIC methodology.
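    For instance (numbers purely illustrative): 12 errors in 400 transactions is a 3% error rate, i.e. a 97% accuracy rate; halving the errors to 6 lifts accuracy to 98.5%.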
    Regards
    Ashwin

    #149433

    Karimkonda
    Participant

    defects: 12
    units: 150
    no. of opportunities per unit: 2

    DPO: 0.04
    DPMO/PPM: 40,000
    Sigma: 3.25
    Yield: 0.9599409
    Cpk: 1.0833
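    For anyone checking the arithmetic, here is a rough sketch of how those figures follow from the inputs, assuming the conventional 1.5-sigma shift; the small differences from the figures above come from rounding Z to 1.75 and Sigma to 3.25:
    ```python
    from scipy.stats import norm

    defects, units, opportunities = 12, 150, 2

    dpo = defects / (units * opportunities)   # 12/300 = 0.04
    dpmo = dpo * 1_000_000                    # 40,000 PPM
    z_lt = norm.ppf(1 - dpo)                  # long-term Z ~= 1.75
    sigma = z_lt + 1.5                        # short-term sigma level ~= 3.25 (1.5-sigma shift)
    process_yield = norm.cdf(z_lt)            # ~= 0.96 (0.9599409 if Z is rounded to 1.75)
    cpk = sigma / 3                           # ~= 1.08 (1.0833 if sigma is rounded to 3.25)

    print(f"DPO={dpo}, DPMO={dpmo:.0f}, Sigma={sigma:.2f}, "
          f"Yield={process_yield:.4f}, Cpk={cpk:.4f}")
    ```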
    email
    [email protected]

    #144058

    Karimkonda
    Participant

    Thanks, Dominic.
    I am already using tools like APQP and web-enabled follow-ups.
    What I wanted to ask is: what objectives (the common yardstick you mentioned) would this program have?
    Dealing with vendors on development raises issues like:
    1. timely deliveries against the action plan
    2. quality issues
    Sitting in a sourcing office, what should the control objectives and parameters be, and then the mechanism? I have some criteria and I do rank vendors on those parameters, with Quality, Cost and Deliveries being the major ones.
    In short: how would Six Sigma work as a strategy here, what project work would I take up, and can the existing tools I am working with improve the process of vendor development and monitoring?
    Thanks for your elaborate reply; let's go into the details.
    Regards
    Ashwin

    #129826

    Karimkonda
    Participant

    Sukumar,
    Don't waste your time. Six Sigma will not help you in any way. Focus on what you are doing. You are doing great.

    #88501

    Karimkonda
    Participant

    Rick,
    The long-term distinction between Cpk and Ppk often becomes irrelevant if your process is under statistical control. If your process is in control, has been for a while, and shows no sign of deviating (per the control charts and so forth), then the standard deviation calculated from the raw data and sigma-hat, the within-subgroup estimate from Rbar/d2, become more or less the same.
    As a result, you can use either one to get a fair picture of short- and reasonably long-term process capability. One thing to note: I have worked with companies that require Ppk and Cpk presented side by side.
    Also, I don't mean to start a flame war here or anything, but from my experience I don't think Cpk is going away any time soon.
    Also, if I may quote you from your previous post:
    “If Cpk goes up, you need to explain why?  If Cpk goes down, should I get a bonus? “
    I find this rather perplexing. The aim of studying your process is to increase Cpk (1.33 is a nice number, but the higher the better). Your statement seems to read otherwise. Cpk is obtained by dividing the distance from the mean to the nearer specification limit by the variation, so the smaller the variation, the larger the Cpk.
    As for the 'snapshot' comment, a process under statistical control will report similar (or, if a source of common-cause variation is found and eliminated, increased) values of both Cpk and Ppk. So Cpk and Ppk can both be used as long-term indicators of process capability and performance if and only if your process is under statistical control. Each becomes a mere snapshot when the process is not under statistical control, because the process is then subject to special causes of variation (a.k.a. assignable causes).
    Since these indices are statistical in nature in the first place, and there are robust theories behind them, I think you should not say such things about statisticians (I am an engineer myself). One more thing about the statistics: sigma = Rbar/d2 captures only within-subgroup variation, so it generally comes out at or below the sigma-hat calculated from the raw data. This is a consequence of Shewhart's methods.
    Personally, I think Cpk is the better one to use, simply because (as Rick said) more people know it and have a better idea of what it stands for.
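    To make the first point concrete, here is a quick simulation sketch of my own (the in-control data is made up): for a stable process, Rbar/d2 and the raw-data standard deviation land in about the same place.
    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # 200 subgroups of size 5 from a stable (in-control) process, true sigma = 0.5
    data = rng.normal(loc=10.0, scale=0.5, size=(200, 5))

    d2 = 2.326                                        # control-chart constant for n = 5
    sigma_within = np.ptp(data, axis=1).mean() / d2   # Rbar / d2
    sigma_overall = np.std(data, ddof=1)              # s from the raw data

    print(f"Rbar/d2 = {sigma_within:.3f}, raw-data s = {sigma_overall:.3f}")
    # Both come out close to the true 0.5 when the process is in control.
    ```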

    #88458

    Karimkonda
    Participant

    Hi migs,
    It's like this: Cpk and Ppk differ in just one fundamental respect, namely the kind of variation used to compute them. Cpk uses the within-subgroup variation estimated from Rbar, whereas Ppk uses the standard deviation of the data itself. As the names suggest, Cpk is for capability and Ppk is for performance.
    It all depends on what you want to do with the indices. I recall reading in this forum that, for a process in statistical control (no special causes), Cpk provides a sort of glimpse into the future of the process.
    If you're doing long-term analysis, I would go with Cpk. That's not to say you should ignore Ppk completely. A rough sketch of the two calculations follows.
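    Here the data and spec limits are made up purely for illustration:
    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    data = rng.normal(10.0, 0.5, size=(25, 5))    # 25 subgroups of size 5 (hypothetical)
    USL, LSL = 12.0, 8.0                          # hypothetical spec limits

    xbar = data.mean()
    d2 = 2.326                                    # control-chart constant for n = 5
    sigma_within = np.ptp(data, axis=1).mean() / d2   # Rbar/d2 -> used by Cpk
    sigma_overall = data.std(ddof=1)                  # s of all data -> used by Ppk

    cpk = min(USL - xbar, xbar - LSL) / (3 * sigma_within)
    ppk = min(USL - xbar, xbar - LSL) / (3 * sigma_overall)
    print(f"Cpk = {cpk:.2f}, Ppk = {ppk:.2f}")
    ```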
    I hope this helps.
    Ashwin.
