Apologies
Six Sigma – iSixSigma › Forums › Old Forums › General › Apologies
 This topic has 2 replies, 3 voices, and was last updated 16 years, 10 months ago by Anakin.


July 8, 2005 at 2:07 am #39937
To: All forum members
From: dUFO
I want to apologize for my idiotic posts yesterday. I am on serious meds for mental deficiencies and I forgot to take them yesterday.
Your (medicated) friend,
dUFO

July 8, 2005 at 2:22 am #122734
Stop it Stan!!!

July 8, 2005 at 2:31 am #122735
dUFO,
Don’t worry your pretty little head off. You remind me of me.
Anakin
PS – In addition to my dreams of Stan, I have been thinking of process indices. dUFO, what do you think of this?
Thanks for your response to my questions dated Apr 3 in the Cpk/Ppk for Anakin discussion thread. Your answer, for those interested, is at: https://www.isixsigma.com/forum/showmessage.asp?messageID=u812

In response to your answer I performed the requested simulation on one of the two setups. I reasoned that if I missed information on one side of the target I would also miss info on the other side. Instead of performing one study using a mean of 99 and an S.D. of 1, I performed 10 studies using a Monte Carlo routine I developed within Excel, assuming the data were normally distributed. I could have used another distribution, but that would have complicated the simulation and discussion. I then ported each run over into Minitab and performed 10 SixPack studies. I tabulated the Cp, Cpk, Pp, and Ppk for all runs. Next, I computed the average of each of the 10 indices, comprising estimates from a total of 1,000 random variates over 10 capability simulations. I then compared the estimates of Cp, Cpk, Pp, and Ppk for inconsistency. I next estimated the 95% confidence bounds for the Cp and Cpk averages and checked them for inconsistency against any of the four indices. The following tabulates my results:

Index:    Cp    Cpk   Pp    Ppk
Average:  2.18  1.80  2.12  1.75
95% conf. Cp  = 1.90 to 2.46
95% conf. Cpk = 1.51 to 2.08

To verify that my simulation spreadsheet was working correctly I conducted a bootstrapping analysis on sample data from the spreadsheet using mu=99 and sigma=1 (low setup per your requirements). After 4,000 total bootstraps of 4 independent datasets the calculated long-term mean = 98.995 and sd = 1.060. These numbers were very close to the requirements for the simulation, and therefore supported the Monte Carlo results I observed in the Minitab analysis.

My question is: what did I miss here? Looking at the indices above I don't see much difference. However, you claim Cpk will completely miss the off-target setup where the target is 100.
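[A sketch of the kind of study described above, in Python/NumPy rather than Excel and Minitab. The spec limits (100 +/- 6) and the mean-99, sd-1 setup come from the thread; the seed, subgroup size, and pooled-sigma shortcut are my assumptions, and Minitab's unbiasing constants are omitted, so the numbers will differ slightly from a SixPack.]

```python
import numpy as np

rng = np.random.default_rng(12345)  # arbitrary seed (assumption)

LSL, USL = 94.0, 106.0   # specs 100 +/- 6, per the thread
MU, SIGMA = 99.0, 1.0    # the "low" setup under discussion

def capability(data, subgroup_size=5):
    """Return (Cp, Cpk, Pp, Ppk) for one capability study.

    Cp/Cpk use a pooled within-subgroup sigma (short-term);
    Pp/Ppk use the overall sample sigma (long-term).
    """
    data = np.asarray(data, dtype=float)
    mean = data.mean()
    s_overall = data.std(ddof=1)
    n = len(data) // subgroup_size * subgroup_size
    groups = data[:n].reshape(-1, subgroup_size)
    s_within = np.sqrt(groups.var(axis=1, ddof=1).mean())  # pooled sigma
    def pair(s):
        return (USL - LSL) / (6 * s), min(USL - mean, mean - LSL) / (3 * s)
    cp, cpk = pair(s_within)
    pp, ppk = pair(s_overall)
    return cp, cpk, pp, ppk

# Ten studies of 100 normal variates each, mirroring the 10 SixPack runs
studies = [capability(rng.normal(MU, SIGMA, 100)) for _ in range(10)]
avg_cp, avg_cpk, avg_pp, avg_ppk = np.mean(studies, axis=0)
```

With the mean 1 sigma off the 100 midpoint, the averages land near Pp ≈ 2.0 and Ppk ≈ 1.67, in the same neighborhood as the tabulated results.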
In fact, my estimates indicate your values for the simulation yield better than Six Sigma performance, i.e., a defect rate of less than 3.4 dpmo. A reevaluation of your suggested simulation indicates the mean should in fact be closer to 98.5 with an sd of 1.0 to yield about 3.4 dpmo with specs at 100 +/- 6. Again, what am I missing here? How do you justify the statements you made? What references should I look at to gain the remarkable insight you alone have on all these things?

While I'm just trying to understand your point, in the final analysis I think we're splitting hairs. Dr. D. Wheeler and others have shown that minor changes in distributional shape parameters will have drastic effects on defect estimates in the dpm range. Simply going from a normal assumption to a Burr assumption causes a 10X change in these estimates. Isn't the point here to have methods which allow us to bring controlled improvements to our processes? Again, why do these things have to get so complicated and convoluted? With all the C's out there, how can anyone new to this stuff ever hope to fully understand?
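[A quick numeric check of the dpmo claims above, sketched in Python with the standard normal tail via erfc. The spec values come from the thread; the function name is mine.]

```python
from math import erfc, sqrt

def dpmo_below_lsl(mu, sigma, lsl):
    """One-sided normal defect rate, in parts per million."""
    z = (mu - lsl) / sigma           # distance to the spec in sigmas
    return 0.5 * erfc(z / sqrt(2)) * 1e6

# Specs from the thread: 100 +/- 6, so LSL = 94
rate_low_setup = dpmo_below_lsl(99.0, 1.0, 94.0)  # z = 5   -> ~0.29 dpmo
rate_shifted = dpmo_below_lsl(98.5, 1.0, 94.0)    # z = 4.5 -> ~3.4 dpmo
```

This bears out the post: a mean of 99 gives well under 3.4 dpmo, while a mean of 98.5 sits at the classic 4.5-sigma rate of about 3.4 dpmo.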
Wow dUFO, I didn’t expect you to do so much work. First of all, we are splitting hairs for the ones new to this. I’ll swap files with you to show you my point and so that I understand what you did. The point that I think you missed is that people are quite often analyzing mixed streams of production when they do capability analysis. Even though you and I know that everything must be in control (the real utility of the capability SixPack) to do the analysis, most do not respect the rule. If you had done the mixed-stream analysis I described, you would have found that Cp = Cpk even though we know both streams were off target. I find this quite often in situations of multiple machines, multiple heads, multiple operators, multiple setups, and so on. We shouldn’t trust a metric that is blind to process variation, and Cpk can be. My experience says we need to be concerned with what is possible (Cp) and reality (either Ppk or, better, Cpm) and make sure the gap between the two is reasonably small.
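[A minimal sketch of the mixed-stream blind spot described above: two streams each 1 sigma off target on opposite sides, concatenated in blocks. The specs and stream means come from the thread; the seed, sample sizes, and moving-range sigma estimate are my assumptions.]

```python
import numpy as np

rng = np.random.default_rng(7)  # arbitrary seed (assumption)
LSL, USL = 94.0, 106.0          # specs 100 +/- 6

# Two streams, each shifted 1 sigma off target on opposite sides,
# concatenated in blocks ("limited interspersing")
low = rng.normal(99.0, 1.0, 500)
high = rng.normal(101.0, 1.0, 500)
mixed = np.concatenate([low, high])

mean = mixed.mean()
# Short-term sigma from the average moving range (individuals chart,
# d2 = 1.128): blind to the block-to-block shift
s_within = np.abs(np.diff(mixed)).mean() / 1.128
# Long-term sigma from the overall spread: sees the mixture
s_overall = mixed.std(ddof=1)

cp = (USL - LSL) / (6 * s_within)
cpk = min(USL - mean, mean - LSL) / (3 * s_within)
pp = (USL - LSL) / (6 * s_overall)
ppk = min(USL - mean, mean - LSL) / (3 * s_overall)
```

Because the mixture's overall mean sits on target, Cpk comes out essentially equal to Cp (both near 2), while Pp/Ppk fall to about 1.4: close to the numbers reported later in this thread.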
Mixed-stream analysis now… I don’t believe you elaborated that very well in your earlier requirements. My practice before assuming homogeneity of multiple streams is to test the assumption using a variety of statistical comparison methods. If this is NOT done, I agree that combining the two or more streams will mask whether the process is achieving the target requirements. However, this is not a unique observation in process control methodology. When the between-stream mixture is greater than ~0.5 SE’s, one can usually observe the mixture pattern on a simple averages or individuals control chart, provided the sampling is not completely randomized between streams. This requirement is usually satisfied in between-shift data, and often in data obtained from the same shift due to proximity effects. For smaller differences one could use CUSUM or EWMA charting methods. Again, there is nothing new here. What is required in instances where mixture or stratification could be present is the proper use of standard statistical process evaluation methodology. All team members need to understand both the use and misuse of a capability assessment prior to commissioning the work. Why do we need to invent new, inefficient capability indices in order to contend with poor practice? Again, if the members of the process team are trained correctly they will know how to correctly conduct a process evaluation… I am not interested in methods that will allow me to uncover poor practice! It’s too late to use Ppk or Cpm if the work has been done incorrectly. I’m more interested that people understand the correct use of the tools, and use them correctly…
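[One simple version of the homogeneity test mentioned above, sketched as a Welch two-sample t statistic in Python. The 1-sigma-per-side stream means echo the thread's setup; the seed and sample sizes are my assumptions, and a real comparison would also check variances.]

```python
import numpy as np

rng = np.random.default_rng(42)   # arbitrary seed (assumption)
a = rng.normal(99.0, 1.0, 100)    # samples from stream A
b = rng.normal(101.0, 1.0, 100)   # samples from stream B

# Welch's two-sample t statistic: do the stream means differ?
se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
t_stat = (a.mean() - b.mean()) / se
# With ~200 observations, |t| well beyond 2 is strong evidence the
# streams are not homogeneous, so pooling them into one capability
# study would be unjustified
```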
O.K. After qualifying the mixture conditions for your test simulation I performed a reevaluation of the data. I conducted two tests involving limited and complete interspersing of the two populations, shifted 1 SE on each side of the target.

With limited interspersing of the data I received the following indices:
Cp = 1.92 (mixture is observable on the control chart)
Cpk = 1.91 (not very useful)
Pp = 1.41 (not very useful)
Ppk = 1.40
Cpm = 1.41 (not much different from Ppk)

With full interspersing of the data I received the following:
Cp = 1.77 (mixture is not well observed on the control chart)
Cpk = 1.70
Pp = 1.48
Ppk = 1.42 (more conservative than Cpm, but either could be used)
Cpm = 1.46 (not much different from Ppk)

Conclusion: In situations where mixture between two or more processes is possible, but not obvious on the control chart, either Ppk or Cpm provides a better estimate of actual process capability than Cpk. Thanks for the thought challenge!
I have one concern with the scenario discussed earlier. In my last message to you I agreed with the use of Cp and Ppk or Cpm. However, there is still a question concerning process stability when mixture is present. In all of the previous simulations, and in about 20 additional simulations I later performed, I was unable to achieve a stable combined process. This lack of stability was evident when mixture was present between processes having mean shifts of about 0.5 SE’s or greater from the target. How do you handle a situation where the combined output is flagged as unstable? Do you still report a Cp and Ppk? If not, then how much additional value will Ppk and/or Cpm provide to process understanding when the process is unstable? It appears that when the process is stable (has no mixture or stratification), both Cp and Cpk provide adequate measures of capability. Can you provide an example where the process is stable and Cp/Cpk give unreasonable information?
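[A sketch of the stability flag being discussed: an individuals-chart check that counts points beyond 3-sigma limits. The function name, seed, and the 1.5-sigma-per-side separation (exaggerated so the signal is unmistakable) are my assumptions; a real analysis would also apply run rules such as the Western Electric rules, which is how smaller 0.5 SE shifts get caught.]

```python
import numpy as np

def out_of_control(data):
    """Indices of points beyond the 3-sigma limits of an individuals
    chart, with sigma estimated from the average moving range (d2 = 1.128)."""
    data = np.asarray(data, dtype=float)
    sigma = np.abs(np.diff(data)).mean() / 1.128
    center = data.mean()
    return np.flatnonzero(np.abs(data - center) > 3 * sigma)

rng = np.random.default_rng(11)  # arbitrary seed (assumption)
stable = rng.normal(100.0, 1.0, 200)
# Blocked mixture: streams 1.5 sigma off target on each side
mixed = np.concatenate([rng.normal(98.5, 1.0, 100),
                        rng.normal(101.5, 1.0, 100)])
```

Run on the stable series, the check flags at most a few chance points; run on the blocked mixture, it flags many, because the moving-range sigma stays near 1 while the points spread over both streams.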
As you point out, those who know how to analyze the data would find discrepancies in the data before the analysis was done. The point is most either do not know how or just don’t do the analysis. We have Cp, Cpk, Pp, and Ppk numbers flying everywhere. Since I cannot be sure of the correct analysis (a simple understanding of the capability SixPack in Minitab would be adequate), I do not want a number that can fool people. Cpk can be fooled; Ppk cannot. By the way, Cpm was very close to Ppk in your analysis because, on average, the process was on target. Get the process off target and Cpm explodes (another number that is hard to fool).
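[A sketch of the Cpm behavior claimed above. The Taguchi-index formula is standard; the seed, sample sizes, and the 2-sigma off-target mean are my assumptions for illustration.]

```python
import numpy as np

def cpm(data, lsl, usl, target):
    """Taguchi index: like Pp, but the denominator also penalizes
    distance of the mean from the target."""
    data = np.asarray(data, dtype=float)
    tau = np.sqrt(data.var(ddof=1) + (data.mean() - target) ** 2)
    return (usl - lsl) / (6 * tau)

rng = np.random.default_rng(3)   # arbitrary seed (assumption)
on_target = rng.normal(100.0, 1.0, 400)
off_target = rng.normal(98.0, 1.0, 400)   # 2 sigma off target
```

For the on-target data Cpm ≈ Pp ≈ 2, which is why Cpm and Ppk agreed so closely in the earlier analysis. For the off-target data Ppk only drops to about 4/3, while Cpm falls below 1: the "hard to fool" reaction described above.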
Just got back into town from a business trip, but couldn’t help but look over the board before retiring. It appears Joe P. is still having difficulty with some basic understanding of variation, but that does not surprise me. I’m interested in making two hopefully noncontroversial comments about capability indices, with the desire of quickly closing this topic before it becomes another “+/- 1.5 shift” topic.

Question/Comment 1: Some time back I asked the question, not sure when: what do we plan to do with these indices? This question establishes the basis under which we construct the statements we will make about the process. If these statements involve capability estimates, it’s prudent to characterize the conditions and timing that comprise the statement(s).

Comment 2: Following from my previous comment: if our desire is to make a statement which compares the process variability to process/product requirements (e.g., estimate capability), then it’s incumbent upon us to ensure the reliability of this statement. This is done by first assessing whether the process we are describing is statistically stable, i.e., predictable. Any attempt to make a statement of capability without correctly assessing stability borders at the least on ignorance, and at the most on fraud. dUFO, I’m sure this comment makes reasonable sense to you as a consultant…

My point here is that while Ppk is a robust index for capability, as you have cited, what is the value of this estimate when the process is not stable? My exhaustive simulations and personal experience have found that if the process IS stable in both the center and variation, then Cpk is a reasonable estimator of short-term capability. If an estimate of long-term capability is desired, then compute the Ppk using the total standard deviation. Both of these estimates are subject to about +/- 15% error even with 100 variates. Using Ppk in place of Cpk because of a mixture concern is misguided direction. Why?
Because this action evades a more fundamental issue: that being to first ask if the process is stable… If the process is stable, then the next question could be to assess either the short- or long-term capability of the process, or both. I firmly contend that Ppk should not be used simply because there is a likelihood of incorrectly classifying the process as stable, AKA committing a Type II error in process evaluation. Rather, I again advocate the correct use of process evaluation methods to first correctly assess process stability. If mixture is a problem, then identify the cause. If two mixed streams of data are the cause, separate the data and reassess the two processes independently. This to me is the more prudent action. So, how many teeth does an ox have?
The forum ‘General’ is closed to new topics and replies.