Karimkonda
@Ashwin, member since November 28, 2010
Forum Replies Created
December 12, 2009 at 4:07 am #57784
From my knowledge, we cannot estimate uncertainty just by knowing the Gauge R&R results; there are many more factors we need to consider in uncertainty budgeting.
The ISO GUM gives very clear estimation methods for various situations.
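To make the point concrete, here is a minimal sketch of a GUM-style uncertainty budget in Python. The component names and values are illustrative assumptions, not from the original post; the point is that the Gauge R&R repeatability is only one line item among several, combined in quadrature:

```python
import math

# Hypothetical uncertainty budget: names and values below are illustrative
# assumptions. Each entry is a standard uncertainty in mm.
components = {
    "repeatability (from Gauge R&R)": 0.012,
    "reference standard calibration": 0.008,  # from a calibration certificate
    "temperature effect": 0.005,              # e.g. from a rectangular distribution
    "instrument resolution": 0.003,           # resolution / (2 * sqrt(3))
}

# Combined standard uncertainty: root-sum-of-squares (GUM, uncorrelated inputs)
u_c = math.sqrt(sum(u ** 2 for u in components.values()))

# Expanded uncertainty with coverage factor k = 2 (approx. 95% coverage)
k = 2
U = k * u_c

print(f"Combined standard uncertainty u_c = {u_c:.4f} mm")
print(f"Expanded uncertainty U (k={k})     = {U:.4f} mm")
```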
Thanks
Ashwin
December 11, 2009 at 10:14 am #57783
I am well aware of the measurement uncertainty principles and the results of uncertainty budgets, but I have also started doing Gauge R&R studies in my factory in view of my customer requirements and our Six Sigma drive.
I am confused about which method is more appropriate. Can I really go with one single method and validate all my measurements?
December 22, 2006 at 5:49 am #149433
defects: 12
units: 150
no. of opportunities: 2

DPO     DPMO/PPM   Sigma   Yield       Cpk
0.04    40000      3.25    0.9599409   1.0833
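For reference, these figures follow from the standard DPMO-to-sigma conversion. A minimal Python sketch (using scipy, and assuming the conventional 1.5-sigma shift) reproduces the row; the post appears to round the sigma level to 3.25 before deriving yield and Cpk:

```python
from scipy.stats import norm

# Inputs from the post
defects = 12
units = 150
opportunities = 2  # opportunities per unit

dpo = defects / (units * opportunities)   # defects per opportunity = 0.04
dpmo = dpo * 1_000_000                    # defects per million opportunities = 40000

# Short-term sigma level, using the conventional 1.5-sigma shift
sigma_level = norm.ppf(1 - dpo) + 1.5     # = 3.2507, shown as 3.25 in the post

# Rounding sigma to 3.25 first reproduces the post's yield and Cpk values
sigma_rounded = round(sigma_level, 2)
yield_frac = norm.cdf(sigma_rounded - 1.5)  # = 0.959941, matching 0.9599409
cpk = sigma_rounded / 3                     # = 1.0833

print(f"DPO={dpo:.2f}  DPMO={dpmo:.0f}  Sigma={sigma_rounded:.2f}  "
      f"Yield={yield_frac:.7f}  Cpk={cpk:.4f}")
```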
email
[email protected]
September 30, 2006 at 6:52 am #144058
Thanks, Dominic.
I am already using all these tools, like APQP and web-enabled follow-ups.
What I wanted to ask is: what objectives (a common yardstick), as you mentioned, would this program have?
Dealing with vendors on development has issues like:
1. timely deliveries against the action plan
2. quality issues
Sitting in a sourcing office, what should the control objectives and parameters be, and then the mechanism? I have some criteria and I rank vendors on those parameters, with quality, cost, and deliveries being the major ones.
In short: how would Six Sigma work as a strategy, what project work would I take up, and can I improve the process of vendor development and monitoring with the existing tools I am working with?
Thanks for your elaborate reply; let's go into the details.
regards
Ashwin
July 31, 2003 at 6:08 am #88501
Rick,
The long-term issue of Cpk versus Ppk often becomes irrelevant if your process is under statistical control. If your process is under control, has been for a while, and shows no sign of deviating from control (as per control charts and so forth), then your actual process standard deviation, calculated from the raw data, and sigma_hat, the standard deviation estimated from Rbar/d2, become more or less similar.
As a result, you can use either one to give a fair picture of short- and reasonably long-term process capability. One thing to note: I have worked with companies that require Ppk as well as Cpk, presented side by side.
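As a rough illustration, here is a minimal Python sketch of both calculations; the spec limits, subgroup size, and simulated data are illustrative assumptions, not from the original discussion. For an in-control process the two sigma estimates, and hence Cpk and Ppk, come out close:

```python
import numpy as np

# d2 bias-correction factor for subgroups of size 5 (standard control-chart constant)
D2_N5 = 2.326

# Hypothetical spec limits and simulated in-control data (illustrative assumptions)
LSL, USL = 9.0, 11.0
rng = np.random.default_rng(0)
subgroups = rng.normal(loc=10.0, scale=0.25, size=(25, 5))  # 25 subgroups of 5

mean = subgroups.mean()

# Within-subgroup sigma from Rbar/d2 -> Cpk (capability)
rbar = np.mean(subgroups.max(axis=1) - subgroups.min(axis=1))
sigma_within = rbar / D2_N5
cpk = min(USL - mean, mean - LSL) / (3 * sigma_within)

# Overall sigma from the raw data -> Ppk (performance)
sigma_overall = subgroups.std(ddof=1)
ppk = min(USL - mean, mean - LSL) / (3 * sigma_overall)

print(f"Cpk = {cpk:.3f}  (sigma_within  = {sigma_within:.4f})")
print(f"Ppk = {ppk:.3f}  (sigma_overall = {sigma_overall:.4f})")
```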
Also, I don’t mean to start a flame war here or anything, but from my experience, I don’t think Cpk is going to go away soon.
Also, if I may quote you from your previous post:
“If Cpk goes up, you need to explain why? If Cpk goes down, should I get a bonus? “
I find this rather perplexing. The aim of studying your process is to increase Cpk (1.33 is a nice number, but the higher the better). Your statement seems to read otherwise. Cpk is obtained by dividing by the variation, so the smaller the variation the better, and hence the larger the Cpk.
As per the ‘snapshot’ comment, a process under statistical control will report similar (or, if a source of common-cause variation is found and eliminated, increased) values of both Cpk and Ppk. So Cpk and Ppk can both be used as long-term indicators of process capability and performance if and only if your process is under statistical control. It becomes a snapshot only if your process is not under statistical control, because the process is then subject to assignable causes of variation (a.k.a. special causes).
Since the use of these indices is statistical in nature in the first place, and there are robust theories behind them, I think you should not say such things about statisticians (I am an engineer myself). One more thing to be said about statistics: sigma = Rbar/d2 captures only the within-subgroup variation, so it generally underestimates the sigma calculated from the raw data whenever there is variation between subgroups. This is a consequence of how Shewhart methods estimate variation.
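A quick simulation makes the point; the drift pattern and values below are illustrative assumptions. When the subgroup means drift (an out-of-control process), the raw-data sigma picks up the between-subgroup variation that Rbar/d2 does not:

```python
import numpy as np

D2_N5 = 2.326  # d2 constant for subgroups of size 5
rng = np.random.default_rng(1)

# 25 subgroups of 5 with a drifting mean (hypothetical out-of-control pattern)
drift = np.linspace(0.0, 1.0, 25).reshape(-1, 1)
subgroups = rng.normal(loc=10.0 + drift, scale=0.25, size=(25, 5))

rbar = np.mean(subgroups.max(axis=1) - subgroups.min(axis=1))
sigma_within = rbar / D2_N5            # sees only within-subgroup spread (~0.25)
sigma_overall = subgroups.std(ddof=1)  # sees the drift as well, so it is larger

print(f"sigma from Rbar/d2   = {sigma_within:.4f}")
print(f"sigma from raw data  = {sigma_overall:.4f}")
```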
Personally, I think that Cpk is the better one to use, simply because (as Rick said) more people know about it, and have a better idea of what it stands for.
July 30, 2003 at 7:51 am #88458
Hi migs,
It’s like this: Cpk and Ppk differ on just one fundamental aspect, the kind of variation used to compute them. Cpk uses the within-subgroup variation estimated from Rbar, whereas Ppk uses the standard deviation of the data itself. As the indices’ names say, Cpk is for capability and Ppk is for performance.
It all depends on what you want to do with the indices. I recall reading somewhere in this thread that, for a process in statistical control (no special causes), Cpk provides a sort of glimpse into the future of the process.
If you’re doing long-term analysis, I would say go for Cpk. That’s not to say ignore Ppk completely.
I hope this helps.
Ashwin.