# Use of 1.5 Sigma Shift and 3.4 PPM

Six Sigma – iSixSigma › Forums › Old Forums › General › Use of 1.5 Sigma Shift and 3.4 PPM

- This topic has 202 replies, 78 voices, and was last updated 10 years, 10 months ago by Kaare.

- October 10, 2005 at 3:07 pm #128130

**GomezAdams** (Participant):

Whoops! Thought this was going to be posted back in the isixsigma article section where I originally saw the article. Here is the context of the statement that I agree with.

In answer to Mike’s question, “Where does the number 3.4 come from?” I’ll pose a question to all readers: why would you want to use the fluctuation (variation) in coffee bean prices in Brazil to infer any kind of performance measurement in your processes or your suppliers’? The derivation, Mike, of the 3.4 PPM is the proportion defective outside of a normal curve, assuming the normal curve has drifted off target by 1.5 sigma. This figure (3.4) comes from data that Motorola collected on their processes after finding that those processes had a history of drifting about the mean by 1.5 sigma. The figure is useful only for the processes measured at Motorola, and must be verified every time they want to check the performance of those processes, because historical performance may not be relevant to the current process if it has improved or deteriorated. A really distorted and egotistical application of this data is to assume that it is applicable anywhere else in the world. What has Motorola’s process variation got to do with U.S. Steel, Pampers, Frisbees, or your and my processes, until data from our processes is evaluated? If you or others would like a table of zero to 6 sigma listings versus PPM rates, and versus PPM rates after a 1.5 sigma shift, just write me. You will see that 6 sigma equals 0.002 PPM, or 3.4 PPM after a 1.5 sigma shift off target.

- March 3, 2006 at 2:07 am #134569

**Jim Arrasmith** (Participant):

The 1.5 sigma shift is clearly a confusing topic to most people. I know it has been to me. I don’t know if my current understanding will be helpful and withstand the rigor of close inspection. But I think it will, and if so, please spread the word so that others will not have to wrestle with this anymore.

I think the key to understanding this is in two aspects. First, a Z-score as used in Six Sigma should be understood as a translation of a dppm or, better yet, a dpmo, into a different, but equivalent, measurement scale. Second, the 1.5 unit shift is to be applied to the calculated Z-score, not to the process mean. Let me elaborate on both of these statements.

Here’s how I learned to calculate Z-score for a process. First, figure out dppm on the lower side of the distribution. Then figure out dppm on the upper side of the distribution. Now, add these two numbers together to get the total dppm. Then, from the unit normal distribution, calculate the Z-score that equates to that total dppm. That’s all it takes. And in this sense, Z-score should always be understood as a number that is equivalent to and can be used to calculate total dppm for a process. With this foundation, the 1.5 sigma shift is easy to understand.

Imagine a process that in the best case short term produces 1000 dppm. Use the unit normal distribution to determine that the Z-score for this process is 3.09, or Zst = 3.09. Now, when this process is run over the long term in manufacturing with all the additional sources of variation, the process distribution will broaden, will spread out. The question is, how much? Six Sigma predicts that the Z-score will be reduced by 1.5 units. So the long term Z-score, Zlt = 3.09 – 1.5 = 1.59. How much is this in dppm? Again, use the unit normal distribution to calculate that we would expect 55,917 dppm.
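Jim’s walkthrough can be checked numerically with Python’s standard-library `NormalDist` (a sketch; rounding Zst to 3.09 follows the post, and the 1000 dppm figure is his example):

```python
from statistics import NormalDist  # standard library, Python 3.8+

nd = NormalDist()  # unit normal

# Short term: 1000 total dppm translated into an equivalent Z-score
dppm_st = 1000
z_st = round(nd.inv_cdf(1 - dppm_st / 1e6), 2)  # 3.09

# The 1.5 shift is applied to the Z-score, not to the process mean
z_lt = z_st - 1.5  # 1.59

# Translate the long-term Z-score back into a predicted dppm
dppm_lt = (1 - nd.cdf(z_lt)) * 1e6  # about 55,917 dppm

# The canonical figures: Z = 6 - 1.5 = 4.5 gives about 3.4 ppm one-sided,
# while an unshifted two-sided 6 sigma gives about 0.002 ppm
ppm_shifted = (1 - nd.cdf(4.5)) * 1e6
ppm_centered = 2 * (1 - nd.cdf(6.0)) * 1e6
```

The same two-step translation (dppm to Z, subtract 1.5, Z back to dppm) works for any starting dppm.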

If you look at the interplay between Zst and Zlt in this way, it’s really very easy. I think the confusion comes from using the term “sigma shift”. It makes it sound like we’re expecting a process to actually shift 1.5 sigma units away from the mean and stay there forever, over the long term. That’s clearly unrealistic, no process would ever run that way, and so we’re left scratching our heads and trying to understand how long the shift might last, etc. etc. Instead, we need to understand that the shift doesn’t apply to the process mean. It applies to that line we always draw on the unit normal distribution that illustrates how much area is representing our process dppm. From the example above, imagine a unit normal distribution with a vertical line drawn at Z=3.09. Shade in the area under the curve to the right of that line to represent the portion of the distribution that is defective. That’s the short term situation. Now, to predict what will happen in the long term, shift that vertical line 1.5 units to the left (which is 1.5 sigma on this unit normal distribution). Now shade in the area to the right of this line to represent the long term dppm. This is the 1.5 sigma shift that is spoken of.

- March 3, 2006 at 8:03 am #134577

Jim,

The whole purpose of Shewhart Charts is to avoid large shifts. A 1.5 sigma shift is easily detectable on a Shewhart chart using subgroups of n=3, g = 10, a total sample size of m=30 … a case so often cited in these discussions.

What I mean is a chart set up using SPC limits based on n=3, g=10 would be sensitive enough to detect a shift in the mean of about 0.6 sigma, which is a far cry from 1.5 sigma.

Of course, engineers who have actually taken responsibility for a process would typically use 30 subgroups, not 30 sequential measurements. In a semiconductor process this would comprise about 450 measurements, which only goes to show how out of touch some corporate people really are …

Now there are two ways to calculate sigma for a sample size of m = 30. You can either use the formula for sigma, or you can estimate it from g=10 subgroups of n = 3.
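The two calculations contrasted here can be sketched as follows. This is a hedged illustration on simulated stable data: the process mean, sigma, and seed are made up, while d2 = 1.693 is the standard control-chart constant for n = 3.

```python
import random
import statistics

random.seed(1)

# Hypothetical stable process: g = 10 subgroups of n = 3 (m = 30 readings)
subgroups = [[random.gauss(10, 1) for _ in range(3)] for _ in range(10)]
all_readings = [x for sg in subgroups for x in sg]

# Way 1: sigma from the formula applied to all m = 30 readings at once
sigma_overall = statistics.stdev(all_readings)

# Way 2: sigma estimated from the g = 10 subgroup ranges (R-bar / d2,
# with the usual control-chart constant d2 = 1.693 for n = 3)
r_bar = statistics.mean(max(sg) - min(sg) for sg in subgroups)
sigma_within = r_bar / 1.693

print(sigma_overall, sigma_within)
```

For stable data the two estimates agree apart from sampling noise; a gap opens up only when the mean drifts between subgroups, which is what the short-term versus long-term distinction tries to capture.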

If you take the trouble to do this you will find a difference, and the difference is about 1.7 sigma.

So the real question is why the difference … it has nothing to do with ‘manufacturing shifts.’ It is due to the ‘entropy’ of the calculation. One way of looking at this is to consider what happens to the subgroup averages of each of the g = 10 subgroups – they tend to vary more than the ‘pinned’ grand average of the m = 30 sigma calculation, which is an additional source of statistical noise.

Therefore, the subgroup method is ‘inflated’ … and the amount of inflation is about 1.7 sigma, so any adjustment based on a ‘1.5 sigma shift’ is unnecessary and flawed.

The idea that there is a difference between short term and long term capability is anathema to statistical process control. Yes, I do understand that Shewhart charts suffer a small shift of the mean, but a Cusum chart does a lot better.

Regards,

Andy Urquhart

- March 3, 2006 at 10:13 am #134581

**Praveen Gupta** (Participant):

Here is the simple reason for the 1.5 sigma shift:

First, it is the maximum allowable shift, not a typical shift.

Second, it is based on Shewhart’s subgroup size of 4. With a subgroup size of 4, the control limits for an Xbar chart are divided by the square root of the subgroup size, which is 2. Thus the control limits for the mean sit at 3/2 = 1.5 sigma of the population.
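The arithmetic behind that statement is just the standard error of the subgroup mean (a minimal sketch; units are arbitrary):

```python
import math

sigma = 1.0  # population sigma, arbitrary units
n = 4        # Shewhart subgroup size

# Xbar-chart control limits sit at +/- 3 * sigma / sqrt(n) around the mean,
# so for n = 4 they sit at +/- 1.5 sigma of the individuals' distribution
half_width = 3 * sigma / math.sqrt(n)
print(half_width)  # 1.5
```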

In the Six Sigma model, a maximum shift of 1.5 sigma is allowed, on the reasoning that any shift beyond that is at a gross level and will be noticed and corrected. If a shift of 1.5 sigma in a process mean is not detected and corrected, you have a long way to go in understanding and implementing Six Sigma.

I hope it helps.

Praveen Gupta

- March 3, 2006 at 2:13 pm #134600

Nonsense

- March 3, 2006 at 2:56 pm #134607

Stan:

You are good at writing ‘nonsense’! Anything better?

Paul

- March 3, 2006 at 3:47 pm #134611

Yes, but the stupidity of attributing the 1.5 shift to an SPC sample of 4 does not deserve anything else.

- March 3, 2006 at 5:18 pm #134619

**Jim Arrasmith** (Participant):

Andy, actually, my discussion wasn’t about sample sizes, control limits, and the ability to detect a process shift. I was actually saying that the infamous 1.5 sigma shift has very little to do with these topics. It’s just a rule of thumb that you can use to predict a likely long term dppm when you don’t have any long term data and only have very short term data from a very controlled environment (possibly in a lab in an R&D setting). If you understand the 1.5 sigma shift in these terms, then there’s no need for tortuous arguments about sample sizes and higher math and the inanity of expecting that a mfg organization would allow a process to run 1.5 sigma units off-center over time, etc.

As for the “anathema” of differences in short term and long term capability, I don’t really understand your statement. It may be a terminology issue. “Capability” refers to the ability of a process to meet specification. Even wildly out of control processes can be highly capable if the spec limits are very wide (don’t confuse spec limits with control limits). “In Control” means stable and predictable. Processes that are in control can have very poor Capability to meet process specs. So “In Control”, which is the concern of SPC, and “Capable” are two completely different measures of a process. If you understand the discussions about differences in short term and long term capability as assertions that short term processes can be in control while long term processes can’t, then I’d agree that there’s a contradiction with SPC theory and practice. But it’s also fairly clear that an on-target process will be more capable of meeting specification over a short term with few variation sources than over a long term with more variation sources adding to the spread of the distribution.

Jim

- March 3, 2006 at 5:28 pm #134621

I don’t know why this discussion is always on the board. There are plenty of places on the net (including this site) that have Mikel Harry’s definition of the Six Sigma 1.5 shift. Whether he is “right” or “wrong” I suppose is still arguable. But I should think he would know what he’s talking about regarding this… certainly more so than I and the majority of posters on this or any other chat board.

For more, please see “Q&A with Mikel Harry” on this site…

“Over time, there has been very fine debate (both positive and negative) surrounding the 1.5 sigma shift. As such, the on-going discussion well serves the need to “keep the idea alive,” so to speak. To this end, I have recently completed a small book on the subject, soon to be available on iSixSigma. The title of the book is “Resolving the Mysteries of Six Sigma: Statistical Constructs and Engineering Rationale”. Of interest, it sets forth the theoretical constructs and statistical equations that undergird and validate the so-called “shift factor” commonly referenced in the quality literature.

As mathematically demonstrated in the book, the “1.5 sigma shift” can be attributable solely to the influence of random sampling error. In this context, the 1.5 sigma shift is a statistically based correction for scientifically compensating or otherwise adjusting the postulated model of instantaneous reproducibility for the inevitable consequences associated with dynamic long-term random sampling error. Naturally, such an adjustment (1.5 sigma shift) is only considered and instituted at the opportunity level of a product configuration. Thus, the model performance distribution of a given critical performance characteristic (CTQ) can be effectively attenuated for many of the operational uncertainties encountered when planning and executing a design-process qualification (DPQ).

Based on this understanding, it should be fairly evident that the 1.5 sigma shift factor can be treated as a statistical constant, but only under certain engineering conditions. By all means, the shift factor (1.5 sigma) does not constitute a literal “shift” in the mean of a performance distribution, as many quality practitioners and process engineers falsely believe or try to postulate through uninformed speculation and conjecture. However, its judicious application during the course of engineering a system, product, service, event, or activity can greatly facilitate the analysis and optimization of configuration repeatability.

By consistently applying the 1.5 sigma shift factor (during the course of product configuration), an engineer can meaningfully design in the statistical confidence necessary to ensure or otherwise assure that related performance safety margins are not violated by unknown (but anticipated) process variations. Also of interest, its existence and conscientious application has many pragmatic implications (and benefits) for reliability engineering. Furthermore, it can be used to normalize certain types and forms of benchmarking data in the interests of assuring a level playing field when considering heterogeneous products, services, and processes.

In summary, the 1.5 sigma shift factor should only be viewed as a mathematical construct of a theoretical nature. When treated as a statistical constant, its origin can be mathematically derived as an equivalent statistical quantity representing the worst-case error inherent to an estimate of short-term process capability. Hence, the shift factor is merely an algebraic byproduct of the chi-square distribution. Its general application is fully constrained to engineering analyses, especially those that are dependent upon process capability data. Perhaps even more importantly, it is employed to establish a “criterion” short-term standard deviation – prior to executing a six sigma design-process qualification (DPQ).”

- March 3, 2006 at 5:44 pm #134622

**Jim Arrasmith** (Participant):

Thanks for this response. Harry’s writing is a little dense, but it confirms my simpler understanding hasn’t strayed from what I learned at his Academy. I looked around on this site and board trying to find something definitive before posting. Thanks for helping me find that.

- March 3, 2006 at 11:31 pm #134647

Jim,

I was always taught to check that a process is in a ‘state of control’ before measuring process capability. This usually means there are no assignable causes present.

I’m not trying to be argumentative, I just thought someone out there might be interested in understanding where ‘you know who’ went wrong. Of course, I’m not referring to you … I’m referring to the guy who is trying to create a myth.

Cheers,

Andy

- March 4, 2006 at 1:26 pm #134655

I actually agree with what is said below, but how do you go from there to the 1.5 correction being permanently built into the DPMO conversions?

All these explanations were done long after erroneous assumptions were built into dogma. They were just trying to justify what was done, but it doesn’t pass any sanity check.

- March 6, 2006 at 10:17 pm #134720

Stan,

I really don’t know. I think once they decided to “shift” 1.5 for design capability, they felt the need to standardize the shift across everything to keep it straight. From a pure statistics standpoint, I have no idea what works and what doesn’t from the capability / shift standpoint. I still use Cpk / Ppk / Cpm most of the time anyway –

Cheers –

- March 6, 2006 at 10:33 pm #134725

Andy,

Interesting discussion. I’m not sure I follow how you obtained the difference quantity of “1.7 sigma”, and am really not too interested. But I am interested in your remark “entropy of the calculation.” Never heard that one before anywhere. Did you make it up, or is there really such a thing as a calculation wearing down into chaos and random dissipation? Where can I obtain more information on this new and unique characteristic of calculations?

Your Italian Friend,

Vilfredo

- March 8, 2006 at 12:32 am #134775

Vilfredo

I discussed the entropy concept with Andy sometime ago and the below excerpt on Random Shift analysis from one of my old posts should help.

Random Shift Analysis

The observation that natural processes have natural mixing, disordering, and spreading-out tendencies over time can be illustrated through the use of simple independent-event probability models from statistical mechanics. Mean-shift events, such as a gas expanding in a vacuum, hot and cold liquids reaching temperature equilibrium, or extreme run patterns on a control chart (where the probability is 1/2 that the average of any subgroup falls above the universe average), occur because disordered arrangements significantly outnumber the ordered ones. According to Shannon’s measure of information content (in bit units), maximum uncertainty occurs for a total of N events when they are all equally probable: P = 1/N1 = 1/N2 = … = 1/Ni = 1/N. For two-state systems (e.g. a coin flip), maximum uncertainty results when P1 = P2 = 1/2. Item-by-item sequential analysis is an efficient and useful tool for studying shifts because it contains a continued-sampling uncertainty region whose geometry depends on the magnitude of the Type I and Type II errors and the sample size N. The worst-case error alpha = beta = 1/2 corresponds to the mean asymptotic limit line of the test.

Finally, complex real processes can have events that are not necessarily independent (conditional, chaotic, or fractal-based) and could require sophisticated computer modeling for accurate design-margin estimates. Many years ago, Grant and Leavenworth commented in the 4th edition of their Statistical Quality Control text that misuse of the hypothesis of event independence and the probability multiplication theorem could lead to unrealistically low estimates of a system’s reliability. For example, 2000 individual system components, each with an individual reliability of 0.999, have an overall estimated system reliability of 0.999^2000, or 0.135. That is, it is easier to say what not to do in estimating system reliability than to give advice on what to do.
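The Grant and Leavenworth figure is a one-liner to reproduce:

```python
# Naive series-system reliability under the independence assumption:
# 2000 components in series, each with individual reliability 0.999
system_reliability = 0.999 ** 2000
print(round(system_reliability, 3))  # 0.135
```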

Mathematical Notes:

(A) As a thought experiment, suppose two crystals, one consisting of two a atoms and the other of two b atoms, are brought together, (aa)(bb), and natural diffusion or mixing takes place. The six possible arrangements are easy to list: (aa)(bb), (ab)(ab), (ab)(ba), (ba)(ba), (ba)(ab), (bb)(aa). It is obvious that the mixed or disordered states predominate: unmixed = 2, mixed = 4. In general, the number of possible arrangements n for the total two-crystal system is n = (Na + Nb)! / (Na! Nb!). A two-crystal system consisting of 10 a atoms and 10 b atoms has n = 184756 possible arrangements, and the dominance of the mixed states over time is obvious: unmixed = 2, mixed = 184754. I.e., if one second were assigned to each arrangement, then this simple two-crystal system would spend 2 seconds in the ordered states and about 51 hours in the disordered states.

Similarly, for runs on a statistical control chart, the number of arrangements for a sequence of 14 points with 7 points appearing above the mean (Na = 7) and 7 below (Nb = 7) computes as n = 3432. For a sequence where, say, 13 points are above the mean and 1 appears below, n = 14.
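The counts in note (A) and the run-pattern counts above can be reproduced directly (a sketch using only the standard library):

```python
from math import factorial

# Number of arrangements of a two-crystal system: n = (Na + Nb)! / (Na! Nb!)
def arrangements(na, nb):
    return factorial(na + nb) // (factorial(na) * factorial(nb))

print(arrangements(10, 10))  # 184756 arrangements for 10 a and 10 b atoms
print(arrangements(7, 7))    # 3432 sequences of 14 points, 7 above and 7 below
print(arrangements(13, 1))   # 14 sequences with 13 points above and 1 below

# At one second per arrangement: about 51 hours disordered vs 2 s ordered
print((arrangements(10, 10) - 2) / 3600)
```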

(B) Process informational entropy: the great Bell Labs engineer Claude Shannon invented a quantitative definition of information that linked it to probability distributions and to the concept of entropy (a disorder metric) found in statistical mechanics. Shannon defined information as a decrease in uncertainty and, in quantitative terms, as the amount of information that allows one to distinguish between equally probable events. He cleverly defined the unit of information as a binary bit, 0 or 1. Thus, in a single flip of an unbiased coin (say heads h = 1, tails t = 0), one bit of information, (0) or (1), is required to reduce the initial field of 2 possible events or arrangements to a certainty; for 2 flips, 2 bits: (0,0) (0,1) (1,0) (1,1); for 4 flips, 4 bits; and so on. Shannon’s metric of the information content B of a process, in bits, is given by

(1) B = (1/ln 2) × Sum (over the n arrangements) of −Pn ln Pn, where Pn is the arrangement probability.

The metric predicts, for example, that processes with an assignable-cause bias require less information to reduce events to a certainty. For instance, in a single flip of a biased coin with heads appearing 80% of the time on average, Shannon’s metric computes as B = (1/ln 2) × (−0.80 ln 0.80 − 0.20 ln 0.20) = 0.7219 bits, as opposed to the 1 bit required for the unbiased coin. Now, it can be shown* that at the worst-case maximum state of process information uncertainty, all of the state probabilities are equal: P1 = P2 = P3 = … = Pn = 1/n. In this state of maximum process equilibrium or stability there is a complete lack of assignable causes.
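Shannon’s metric from equation (1) is easy to evaluate for the two coins (a minimal sketch):

```python
from math import log

def bits(probs):
    """Information content, in bits, of a distribution (equation (1))."""
    return sum(-p * log(p) for p in probs if p > 0) / log(2)

print(bits([0.5, 0.5]))  # 1.0 bit for the unbiased coin
print(bits([0.8, 0.2]))  # about 0.7219 bits for the biased coin
```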

*I.e., when dB/dPn = 0 and the probability normalization requirement P1 + P2 + … + Pn = 1 applies.

When the above worst-case constraints are imposed on equation (1), a process informational entropy metric S results:

(2) S = B = (1/ln 2) × Sum (over the n arrangements) of −(1/n) ln(1/n), or

(3) S = (1/ln 2) × ln n (each of the n equal terms in the sum contributes (1/n) ln n, so the sum collapses to ln n)

The metric measures the system’s randomness from a state of order to disorder, and it is identical to Boltzmann’s thermodynamic entropy metric (in erg/kelvin), S = k ln n, except for the difference in the constants (1/ln 2) and k. For two-state systems, such as extreme runs on a statistical control chart, S reaches its maximum value when there is a 50% probability that the average of any subgroup falls either above or below the universe average. As is the case with thermodynamic entropy, information entropy is additive, since the overall number of arrangements equals the product of the individual arrangements: N = N1 N2 = N2 N1, so S = (1/ln 2) × ln(N1 N2) = (1/ln 2) × (ln N1 + ln N2) = S1 + S2. Thus the metric could serve as a useful tool for process stability characterization.

Best Regards,

John H

- March 8, 2006 at 1:06 am #134778

Thanks, John, for providing the background information behind Andy’s assertion. Where was this prose taken from? Is there a substantive peer-reviewed reference supporting it in one or more journals that I can obtain, or was it written by someone on this forum? If there are details within a peer-reviewed journal, could you or someone else provide me with the name of the journal, volume, date, and title of the paper? Thanks much in advance.

I might humbly suggest there are vast differences between a classically derived model depicting molecular motion (and the mechanics therein) and the observation of empirical data from manufacturing processes. The latter does not lend itself to having a reliable underlying probability distribution, despite claims to such. For details, follow the work of Shewhart and Deming, both mathematical physicists, and more recently the work of D. J. Wheeler. I’m interested to know how the author of this passage manages the probabilistic distinctions between classical mechanistic theory and empirical data collection to support an entropic process.

Vilfredo

- March 8, 2006 at 1:17 am #134779

Does a +/-1.5 sigma shift have to be built into estimations using Poisson-distributed data? Would it be possible to simply estimate the probability of, say, achieving results without any defects using incremental DPMO estimates? I’m just trying to figure my way through the argument. Any help is greatly appreciated.

Vilfredo

- March 8, 2006 at 1:30 am #134780

Doggie,

Pyzdek stole that comment from Dr. George Box. Let’s give credit where credit is due!

Vilfredo

- March 8, 2006 at 3:29 am #134781

**John H.** (Participant):

Vilfredo

The prose is an excerpt from my old post https://www.isixsigma.com/forum/showmessage.asp?messageID=32928

that argues against the universality of the long-term 1.5 sigma shift, and I hope it does not ignite another debate on this old topic. Darth nicely summarized the topic recently by calling it a fudge factor, and I prefer leaving it at that. If you do some research with Google, you will find a substantive amount of literature linking entropy, information theory, and manufacturing (signal processing for manufacturing and machine monitoring). Entropy considerations apply to long-term processes and not to the statistical fluxing of small groups of samples.

Regards,

John H.

- March 8, 2006 at 8:13 am #134784

J. Hickey,

I get the sense that you may have brushed upon Taguchi’s work from time to time in the past, in addition to spending time within a diaphanous area of signal theory. You present an interesting twist on the continuing discussion supporting the so-called Six Sigma shift and its relationship to expected process performance. I looked over your 2003 notes to Reigle and attempted to compare them to the literature I could find on the internet per your suggestions. In most of that literature the word entropy was used to describe the complexity of a system, and therefore the level of information it possessed, higher complexity relating to higher entropy. Hartley’s formula was used to develop these concepts, but I must admit to having some difficulty following the details into control theory, and more specifically manufacturing process control. Your Word attachment provided a bit more detail than your recent forum post, and I thank you for the link. I was reasonably challenged following your logic from coin flips to crystal diffusion to process information entropy. It was a great ride, and I’m sure it all works out well in your mind. When all is said and done, I do not agree with Darth that the so-called +/-1.5 shift is a fudge factor. Instead, I rather agree with Mike Carnell that it is simply a reasonable starting point for the process operating window given no other available process data, and can be used until you obtain enough data over time to provide a more accurate measure of longer-term variation. I’ve seen the mathematical developments of the long-term shift factor by Bothe, Bender, Harry, and others. I must say yours is by far the most creative. Enjoyed the adventure. Thanks for the ride!

V

- March 8, 2006 at 8:20 pm #134803

‘Instead, I rather agree with Mike Carnell that it is simply a reasonable starting point for the process operating window given no other available process data, and can be used until you obtain enough data over time to provide a more accurate measure of longer-term variation’

And don’t you see the relation of this statement to the entropy of the data? What entropy do your data have at the ‘starting point of the process’, and what at ‘long-term variation’? And how many ‘short terms’ do you need to have a ‘long term’?

Or maybe it is just an ‘estimation matter’ for the sample variance (when does Student’s correction to the sample variance formula achieve a 95% confidence interval for the population variance?).

John is right. In any good probability theory book, you’ll find chapters about the entropy and information of a data set or system.

Cheers.

P.S. Vilfredo, are you really Italian?

- March 9, 2006 at 1:40 am #134811

March: This is Mikel Harry’s data from ABB with Z(lt) and Z(st). Please note that the calculated difference between the values is not 1.5. This has nothing to do with sample sizes, control charts, or whatever else you read. When you have continuous data you can calculate the shift, so do so. When you only have discrete data, you have to guess what the Z(shift) could be. Keep in mind that with discrete data, you will only be measuring the process capability of processes in the range of 1-3 sigma. In this region of the graph of the data, a value of 1.5 is as good a starting point as any.

| Z(lt) | Z(st) | Z(shift) |
|-------|-------|----------|
| 0.43 | 0.49 | 0.06 |
| 0.65 | 0.69 | 0.04 |
| 0.34 | 1.13 | 0.79 |
| 1.56 | 1.55 | -0.01 |
| 1.04 | 1.96 | 0.92 |
| 2.21 | 2.19 | -0.02 |
| 2.18 | 2.67 | 0.49 |
| 3.15 | 3.18 | 0.03 |
| 1.76 | 3.04 | 1.28 |
| 2.69 | 3.20 | 0.51 |
| 3.48 | 3.85 | 0.37 |
| 2.17 | 3.32 | 1.15 |
| 2.20 | 3.63 | 1.43 |
| 1.42 | 3.37 | 1.95 |
| 0.86 | 3.43 | 2.57 |
| 2.20 | 4.49 | 2.30 |
| 1.34 | 4.68 | 3.33 |
| 0.59 | 4.56 | 3.97 |
| 2.92 | 5.21 | 2.29 |
| 3.72 | 5.77 | 2.05 |
| 0.73 | 5.82 | 5.10 |
| 3.61 | 6.40 | 2.79 |
| 4.39 | 6.40 | 2.01 |
| 4.46 | 7.82 | 3.36 |
| 1.05 | 8.27 | 7.23 |
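For anyone who wants to check BTDT’s point, here is a sketch that recomputes the shifts from the (Zlt, Zst) pairs transcribed from the data above (small discrepancies against the printed Z(shift) values are rounding in the source):

```python
# (Zlt, Zst) pairs transcribed from the ABB case-study data above
pairs = [(0.43, 0.49), (0.65, 0.69), (0.34, 1.13), (1.56, 1.55), (1.04, 1.96),
         (2.21, 2.19), (2.18, 2.67), (3.15, 3.18), (1.76, 3.04), (2.69, 3.20),
         (3.48, 3.85), (2.17, 3.32), (2.20, 3.63), (1.42, 3.37), (0.86, 3.43),
         (2.20, 4.49), (1.34, 4.68), (0.59, 4.56), (2.92, 5.21), (3.72, 5.77),
         (0.73, 5.82), (3.61, 6.40), (4.39, 6.40), (4.46, 7.82), (1.05, 8.27)]

shifts = [round(zst - zlt, 2) for zlt, zst in pairs]
mean_shift = sum(shifts) / len(shifts)

# The observed shifts are all over the map, not clustered at 1.5
print(min(shifts), max(shifts), round(mean_shift, 2))
```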

Data from “The Vision of Six Sigma: A Roadmap for Breakthrough”, 4th Ed., p. 9.17, Mikel J. Harry, Ph.D., Sigma Publishing Company, Phoenix, Arizona, 1994.

Cheers, BTDT

- March 9, 2006 at 4:10 am #134814

The acceptance of a 1.5 sigma shift seems contrary to the idea of variation reduction, which is the foundation of Six Sigma. It also seems like a bit of an over-generalization. How can all processes, regardless of their design or nature, exhibit the same type of shift?

Just my humble opinion!

- March 9, 2006 at 4:57 am #134815

**Praveen Gupta** (Participant):

I think people have gotten it wrong. The simple assumption was that until Six Sigma, the best tool to control a process was the control chart. I have personally spoken with Bill Smith about the 1.5 sigma shift. It is an assumption about the maximum allowable shift. It does not mean each process shifts. Up to a 1.5 sigma shift of the process mean (Xbar), or a standard error of estimate of 1.5 sigma, it is assumed that the process is in statistical control based on the limits of the Xbar-R chart.

Variation is inherent in any system. The Motorola Six Sigma is a model; however, one can estimate a more accurate PPM based on the actual shift in the process mean.

I do know that Bill had not performed a very complicated statistical analysis to come up with the 1.5 sigma shift. It is more of a practical approach to determine a realistic sigma level compared to the sigma level with ‘no’ shift.

Again, according to the Six Sigma methodology, it is not the actual PPM or sigma level that matters as much as the rate of improvement, which must be aggressive to force creativity and innovation in developing dramatic improvement.

Praveen

- March 9, 2006 at 10:18 am #134819

BTDT, thanks for the reply, but your data, without an explanation of how they were calculated, mean nothing. Please provide the details of the formulas used, so I can analyze them and better answer you.

I agree with you on your last statement; in fact, I said it is just a matter of the confidence interval you want to use for your estimation at the starting point.

What is curious about this debate is that I don’t see anyone talking about the CI in the sigma calculation (my fault, maybe).

Cheers

- March 9, 2006 at 3:08 pm #134830

What is this new tool to control a process? I still use control charts, and they are very effective.

- March 9, 2006 at 3:58 pm #134832

March:

The data come from one of Harry’s books. They are presented as “Case study data from a manufacturing process. Data courtesy of Asea Brown Boveri, Ltd.” Harry’s data show that the 1.5 shift is not universal. When you have a look at enough processes, you will find the same relationship.

My point is that when you have continuous data, you can calculate Zst and Zlt, then subtract the two to find Zshift. There is no 1.5 shift found or observed. End of debate.

What I find curious about this whole debate is that it continues to occur at all.

Cheers, BTDT

P.S. – People don’t usually calculate a confidence interval for sigma; it requires bootstrapping. I don’t see why that would be your fault, though.

- March 9, 2006 at 4:32 pm #134839

I asked for the formulas used to calculate what you call Zst and Zlt, but I don’t see them in your reply.

Cheers.

- March 9, 2006 at 5:37 pm #134845

March:

Zst and Zlt are not just what I call them. These formulae are standard and have been around for years and years. You will find them in your SS training material and referenced hundreds of times on this site. I thought your post was a more general one about the endless debate on the 1.5 shift.

Cheers, BTDT

- March 9, 2006 at 5:51 pm #134846

You are wrong. The debate is ended, from my side, because you still do not provide formulas. I also checked the dictionary on this website, but there are just words, not formulas. Anyone who would debate about data needs numbers and formulas, or must admit it is just an opinion.

Cheers0March 9, 2006 at 6:20 pm #134847March:I’m not sure what you mean by ‘wrong’. You will find lots of references to the formulae. Here, for example, https://www.isixsigma.com/library/content/c040405a.asp.For yet more information, have a look at your Six Sigma training material. There will be a large section complete with subgroup definitions, ANOVA analysis with SS(within) and SS(between) calculations, Z calculations, discussion and definitions.The one part lacking in typical training material will be a detailed discussion of the 1.5 shift and 3.4 DPMO = 6 sigma. That is the center of the debate. Use the data I have posted to form your own conclusions.Cheers, BTDT

0March 9, 2006 at 6:47 pm #134849March:I checked the link, and something went wrong, I’ll try again.https://www.isixsigma.com/library/content/c040405a.aspCheers, BTDT

0March 9, 2006 at 9:56 pm #134858BTDT,

You’ve provided the best explanation for the 1.5 sigma ‘moveable feast’ I’ve encountered.

Let’s see if I understand it correctly. Someone sets up a new process with a Shewhart chart and uses control limits based on n = 3, g = 10. Later, they find out that a total sample size of 30 only corresponds to 95% confidence, but Shewhart charts use 99.73% limits, and as a consequence there was a shift in the process mean.

Now wiser, the person deletes all the assignable causes and updates the control limits based on a much larger sample size, i.e. greater than 50 :-)

Respectfully,

Andy

0March 9, 2006 at 10:06 pm #134859Andy:Thanks for the compliment, but you lost me in the Shewhart chart stuff.Are you just baiting me because I didn’t give the formulae?;)Cheers, BTDT

0March 10, 2006 at 10:02 am #134880BTDT,

Not at all .. that’s not my style. I’m not smart enough!

I had a long day as I’m working on the other side of London at present, and I expressed myself poorly.

My attempted reference was to the confidence intervals of the standard deviation. My understanding is that n = 30 provides a CI of ~95% for sigma, while n ~ 50 gives about 99.97%.

My thinking was if a person sets up a control chart on a new process with a n= 3, g= 10, a total sample size = 30, then the statistical limits would be somewhat ‘fuzzy,’ since as you pointed out the sample size is too small.

Later, when the person comes back to re-calculate the SPC limits – standard practice amongst process engineers – after removing all assignable causes but using a much larger sample size, he may well find the process mean has ‘shifted.’

Of course, processes under SPC do not ‘drift’ by an appreciable amount if corrective action is taken immediately and the data removed from the re-calculation: but this is only justified if the offending part is removed. This is why I think the control chart ‘argument’ is relevant, because as I mentioned previously it is usual practice to determine if a process is ‘stable’ over time before calculating process performance or process capability (individuals.)

When we use ‘short term’ and ‘long term’ capability, is it an admission that we ignore out-of-control situations, or is it a consequence of starting up a new control chart with a sample size that is too small? If the latter is true, then an obvious solution is to start off with a sufficiently large sample size and continue to use Cp and Cpk, as I do! :-)
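Andy’s “fuzzy limits” point can be illustrated numerically. A minimal Monte Carlo sketch on simulated normal data (not from any real process; the exact percentage claims in the post are not assumed here, only the general idea that small samples give noisy sigma estimates):

```python
import random
import statistics

# How fuzzy is a standard deviation estimated from n points?
# Simulate many samples from a process with true sigma = 1 and look at
# the middle 95% range of the resulting sample standard deviations.
def sd_interval(n, trials=5000, seed=1):
    rng = random.Random(seed)
    sds = sorted(
        statistics.stdev([rng.gauss(0.0, 1.0) for _ in range(n)])
        for _ in range(trials)
    )
    return sds[int(0.025 * trials)], sds[int(0.975 * trials)]

lo30, hi30 = sd_interval(30)     # roughly 0.74 to 1.26 for true sigma = 1
lo120, hi120 = sd_interval(120)  # noticeably tighter with more data
print(lo30, hi30, lo120, hi120)
```

With n = 30 the estimate of sigma can easily be 25% off in either direction, so control limits computed from it will move when recomputed from a larger sample – exactly the apparent ‘shift’ described above.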

Cheers,

Andy0March 10, 2006 at 4:55 pm #134898Or, if someone really wanted to know – they would simply go buy the book that the person says the data comes from. If they are really that interested in it.

0March 10, 2006 at 6:37 pm #134905

Mr am I ?Participant@Mr-am-I-?**Include @Mr-am-I-? in your post and this person will**

be notified via email.Buy? Do you mean, pay? For that?

Oh, don’t worry if you paid for it; everyone makes mistakes …

The important thing is not to persevere in them …

0March 11, 2006 at 3:48 am #134916When you bring confidence intervals into the discussion, this should make the 1.5 sigma shift believers start to quiver!

You need a stable process that follows a normal distribution in order to accurately determine a Cpk. If you want to transpose your Cpk into a sigma value, the same holds true. The point estimate and the confidence intervals rely on key assumptions. If my process jumped from +1.5 sigma, back to target, and then to -1.5 sigma, does that meet the stability assumptions? How about a tri-modal distribution?0June 1, 2006 at 7:36 pm #138520Mr. Joe Perito, could you please send me the table of zero to 6 sigma? I’ve been looking for this table.

Thank you!

“If you or others would like a table of zero to 6 sigma listings verses PPM rates and Verses PPM rates after a 1.5 sigma shift, just write me. You will see that 6 sigmas are equal to 0.002 PPM, or, 3.4PPM after a 1.5 sigma shift off target.”0June 2, 2006 at 4:18 pm #138553Mike,

We ran Perito’s mangy butt out of here years ago.

Go create the chart yourself in Excel. You only need the normsdist function and a basic understanding of what you are looking for.0June 2, 2006 at 6:40 pm #138564Mr. Stan, thank you for your reply.
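Stan’s Excel recipe translates directly into a few lines of Python – a sketch, with `normsdist` reproducing Excel’s NORMSDIST (the standard normal CDF) via `math.erf`:

```python
import math

def normsdist(z):
    # Standard normal CDF, the same quantity as Excel's NORMSDIST
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ppm(sigma_level, shift=0.0):
    # Two-sided defect rate in parts per million for spec limits at
    # +/- sigma_level, with the mean sitting `shift` sigmas off target
    return (normsdist(-(sigma_level - shift))
            + normsdist(-(sigma_level + shift))) * 1e6

for s in range(1, 7):
    print(s, round(ppm(s), 4), round(ppm(s, 1.5), 2))
# At 6 sigma: ~0.002 ppm centered, ~3.4 ppm with the 1.5 sigma shift,
# matching the figures quoted earlier in this thread.
```

The shifted column reproduces the familiar table values (66810 ppm at 3 sigma, 3.4 ppm at 6 sigma).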

I was wondering if you could tell me how to calculate the relationship between Cp and proportion defective when the process mean is correctly centered? For example:

Sigma=3, Cp=1, Defects=66810ppm (with the 1.5 sigma shift);

when correctly centered with Cp=1, Defects=2700ppm. Sigma=6, Cp=2, Defects=3.4ppm (with the 1.5 sigma shift);

when correctly centered with Cp=2, Defects=2ppb. Thank you!0July 4, 2006 at 4:07 pm #139934Let me first start off by saying I tried not to post to this discussion, but after a sleepless night I decided it was better to stir up trouble and sleep than to suffer in silence and not.

What truly upsets me is that many of you will accept measurement error and sampling error, but you will discount that your process is composed of multiple processes. Obviously we all know about measurement error; the same factors that affect measurement affect your process. Temperature, humidity, electrical noise, operator interference, thermal expansion, improper lubrication/improper maintenance – all contribute to measurement error. The machines that make the products you have been arguing the 1.5 sigma shift over are subject to even more variance than the measurement equipment. (This is the point where many of you will launch into a rant about data and your process being stable.) Every moving part of a machine is a separate process. Every heated/cooled system has multiple processes. Every pneumatic system has multiple processes. Every hydraulic system has multiple processes. Most machines use PLCs and have closed-loop systems to control machine automation, something we all take for granted. The controller takes readings from load cells, pressure transducers, encoders, and so on. Many of you know this, but what you don’t know is that most machines don’t use isolator cards. Without isolator cards even the best sensor will have noise that will affect the performance of the machine. (I could go off on a rant about sensors being used beyond their capability.) PID loops are often guessed at based on the face reading of a sensor and not in regard to the variation of the product. (I could really rant for days on programming errors that are truly illogical in their effect on product.) This, coupled with the fact that most machines only get PM when they break or are down for plant shutdowns, adds variance to the product.

All of the above is to establish that no one posting in this discussion is dealing with one process. This is why the 1.5 sigma shift does not coincide with Shewhart’s teachings. Also, when you are doing capability studies you are testing the capability of the machine and the measurement equipment together, even after doing an MSA and factoring out the measurement error. (Your measurement error can shift after you calculated it. DOH!) I admit I had never heard of the 1.5 sigma shift. Going back to my first SPC class, the consultant explained to me that everything affects a process, and a process that is in control will still have observed values throughout +/- 3 sigma. This could explain a general rule-of-thumb statement that a good process can have up to a 1.5 sigma shift. I think that’s a fair statement to make to machine operators and supervisors.

Before anyone launches a campaign about me being a flaming moron who is overly concerned with inconsequential factors, please do the following:

Go out to a machine and ask the operator, “Who do you think is the best operator and why?” Then ask, “What do you do to make sure you produce a good part?”

Go to the machine’s control cabinet and look inside. See any loose or disconnected wires? How does the inside of the cabinet look? If it’s not clean and organized, it should be. Pick a wiring address; it should just be a number taped to a wire going into the terminal block, or on the block itself.

Next go to your Maintenance department and ask them to find the wiring address in the machine wiring diagram. Ask one of the Maintenance people the next two questions: “How do you calibrate a load cell after you replace a bad one?” and “What does PID stand for?”

If you did what I ask and are still ok with your statement, I put myself at your mercy.

0August 7, 2006 at 2:52 am #141467Hi Joe,

Thanks for your comments. This has helped me think about the 1.5 Sigma Shift, which I used to consider. Please send me the table of zero to 6 sigma listings versus PPM rates, and versus PPM rates after a 1.5 sigma shift.

Thanks and Regards,

Sachin0August 7, 2006 at 1:45 pm #141485I believe we have a new record for response to an old post!!!!!!

Create your own table in Excel – it’s easy – it will also help you understand what it means.

BTW – the shift is a crock.0January 4, 2007 at 7:10 pm #149958

Ahmed EL KHAMLICHIParticipant@Ahmed-EL-KHAMLICHI**Include @Ahmed-EL-KHAMLICHI in your post and this person will**

be notified via email.I would like a table of zero to 6 sigma listings.

Thank you VM.

Sincerely.

A. EL KHAMLICHI

Moroccan Consultant in Casablanca0January 4, 2007 at 9:27 pm #149966

Marlon BrandoParticipant@Marlon-Brando**Include @Marlon-Brando in your post and this person will**

be notified via email.Just purchase one of the famous SS books, such as The Six Sigma Way, etc. Shouldn’t a real consultant have at least 5 books in his library?

0January 4, 2007 at 10:05 pm #149968It looks as though someone will have to keep posting the history of the six sigma tables until people realize they are nonsense:

Bill Smith, a Motorola engineer, claims that for uncontrolled processes “batch to batch variation can be as much as +/-1.5 sigma off target.” He gives no references or justification for this. In reality there is no limit to how much uncontrolled processes may vary.

At that time Motorola used Cp=1. Bill Smith suggested “Another way to improve yield is to increase the design specification width.” He broadens specification limits to Cp=2.

Mikel Harry derives +/-1.5 as a theoretical “shift” in the process mean, based on tolerances in stacks of disks. Stacks of disks obviously bear no relation to a process.

Harry names this his “Z shift”. The Z shift makes a number of additional errors: his derivation dispenses with time, yet he refers to time periods; he claims a “short term” and a “long term”, yet data for both are taken over the same time period.

Harry seems to realise his error in the 1.5 and says it “is not needed”.

Harry, in about 2003, makes a new derivation of 1.5 based on errors in the estimation of sigma from sample standard deviations. For the special case of 30 points, p=.95, he multiplies the chi-square factor by 3, subtracts 3, and gets “1.5”. The actual value ranges from 0 to 50+. He calls this a “correction”, not a shift.

Harry’s partner Reigle Stewart adds a new calculation he calls a “dynamic mean off-set”: 3 / sqrt( n ), where 3 is the value for the control limits and n is the subgroup size. For n=4 he gets “1.5”. Reigle says, “This means that the classic Xbar chart can only detect a 1.5 sigma shift (or larger) in the process mean when subgroup size is 4.” Reigle is quite incorrect. Such data are readily available from ARL (Average Run Length) plots.
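Reigle’s arithmetic and the ARL counter-argument can both be checked in a few lines. A sketch using the classic Shewhart Xbar-chart defaults; nothing here comes from Harry or Reigle beyond the numbers quoted above:

```python
import math

def phi(z):  # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

n = 4
offset = 3.0 / math.sqrt(n)  # Reigle's "dynamic mean off-set": 1.5 for n = 4

# But a 1.5 sigma mean shift puts the subgroup mean (n = 4) exactly ON the
# upper 3-sigma control limit, so each subgroup signals about half the time:
shift = 1.5
p_signal = 1.0 - phi(3.0 - shift * math.sqrt(n))  # prob. of a point beyond UCL
arl = 1.0 / p_signal                              # average run length
print(offset, p_signal, arl)  # 1.5, 0.5, 2.0
```

Smaller shifts are also detected, just more slowly; that is exactly what ARL curves tabulate, which is why the “can only detect 1.5 sigma or larger” reading is incorrect.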

In summary, the 1.5 does not exist, despite the many attempts to prop it up. Calculations involving 1.5 are hence meaningless. That is, 3.4 dpmo is meaningless. Six sigma tables are meaningless. Sigma levels are meaningless.0January 5, 2007 at 1:20 am #149974

Six Sigma ShooterMember@Six-Sigma-Shooter**Include @Six-Sigma-Shooter in your post and this person will**

be notified via email.Cool – so they don’t exist – what does it have to do with continually improving your processes and product and your learning, adapting, and applying? Go improve something and forget about the 1.5 shift.

0January 5, 2007 at 1:46 am #149976Agreed.

Forget about the 1.5.

Forget about the 3.4.

Forget about counting defects.

Continuous improvement is the key to emulating companies like Toyota.0January 5, 2007 at 2:15 pm #149996Does “forget about counting defects” mean don’t keep data on defects?

0January 5, 2007 at 2:25 pm #149999So – based on your list – all attribute control charts are crap? How can you advise someone not to determine if their product/service is defective?

0January 6, 2007 at 11:40 am #150047You sound as though you don’t know the difference between an attribute and a variable outside of specification … perhaps you think SS is all about attributes ?

0May 9, 2007 at 3:43 pm #155877

Michael MurphyParticipant@Michael-Murphy**Include @Michael-Murphy in your post and this person will**

be notified via email.As always Thomas… all well spoken and accurate!

0May 9, 2007 at 9:23 pm #155892As Mr. Pyzdek may no longer visit this site – can someone else answer on his behalf?As it is now common for Six Sigma to borrow Japanese words, such as Muda, Gemba, etc. What is the Toyota borrowed word for a process shift? Could it be ‘shifto’ by any chance because that approach often seems to work for me in Japan.Since Motorola-san doesn’t appear to have any data for their data-driven management-by-facts 1.5 sigma shift approach – perhaps we can borrow Toyota’s ‘shifto’ data!Mac

0July 11, 2007 at 5:03 pm #158477Joe,

could you please send me the list of defect rates for both the actual sigma ratings from 1-6 and those defect rates after the 1.5 sigma shift.

Also is the 1.5 sigma shift typically one-sided or two-sided?

thanks,

CC0July 11, 2007 at 5:38 pm #158479We have a new record for response to an old post! Over six years!

CC, just a hint. Joe has not been sitting there waiting for your post, do the table yourself in Excel.0December 20, 2007 at 11:24 pm #166431

Mike ClaytonParticipant@Mike-Clayton**Include @Mike-Clayton in your post and this person will**

be notified via email.Lighten up! This was actually NOT from Motorola data, but was APPLIED to characterization efforts as a way of comparing dissimilar processes and ranking them for actions. They found it in literature on 40 years of industrial research, and it was a “typical” shift across many industries.

So just study ANY process using stat methods, and find the variance components graphically (most stat software hides or misleads unless it graphs the families of variation in “multi-vari” format). Do the Cp and Cpk calculations if you have limits and the data is continuous, or do the PPM calculations if it’s attribute data, and then start driving improvement by DOEs or Lean methods until you run out of steam; then look at Design for Six Sigma methods… or Design for Manufacturability if it’s a manufacturing process. As Jack Welch says in “Winning”, it seems like a trip to the dentist, but it works.

For SERVICE people….good luck using attribute data only, as you need LOTS of it. But the idea of measuring something important to the customer and graphing the data, is good start.0December 21, 2007 at 10:15 am #166441You’re funny Mike ..

If people use a sample size of 30 to characterise equipment, as some people did at Motorola, then they will see a long term shift, even if they operate on the flat part of the curve.

Matt

0December 21, 2007 at 2:12 pm #166448Nonsense

0December 25, 2007 at 4:18 am #166506

Fake Harry AlertParticipant@Fake-Harry-Alert**Include @Fake-Harry-Alert in your post and this person will**

be notified via email.Why?

0December 25, 2007 at 7:24 am #166507

Eric MaassParticipant@poetengineer**Include @poetengineer in your post and this person will**

be notified via email.Hi Mike!

Long time no see!! I happened to see your response and thought I’d use this chance to say hi!

Happy Holidays, and a Happy New Year to you!

Best regards,Eric0January 10, 2008 at 4:40 am #167034

Dan ConnorsParticipant@Dan-Connors**Include @Dan-Connors in your post and this person will**

be notified via email.I guess I’m in favor of truth in advertising. If one is saying the process is now running at six sigma, then I think that means it is running at the .2 ppb (or whatever that number was). I think it’s fine to say one expects the mean to move by 1.5 SD over time and that shift will likely increase the number of defects. However, if I were the sponsor, I’d find that real hard to swallow. Seems like it falls under the old “measure it with micrometers, mark it with chalk, and cut it with an axe” syndrome. The probability that the sample mean can be off by 1.5 SD of the underlying data seems unbelievable to me. Rather, I’d say the x’s had been carefully selected to produce a spectacular result.

0January 10, 2008 at 7:00 am #167035Anyone interested in the origins and details of the 1.5 and 3.4ppm should read this excellent article :

http://qualitydigest.com/IQedit/QDarticle_text.lasso?articleid=119050January 10, 2008 at 11:23 am #167037Does Harry know who Stan is?

Stan spends too much energy trying to put this quality giant down.

Poor Stan, you are losing your sleep over Dr. Harry.0January 10, 2008 at 11:51 am #167039Joe honey,The 2 am Stan is a fake; Stan doesn’t lose sleep over the quality marketing “giant”. There are so many assumptions and just outright lies in the referenced article – the guy is a joke.

0January 10, 2008 at 6:06 pm #167053

TaylorParticipant@Chad-Vader**Include @Chad-Vader in your post and this person will**

be notified via email.I guess if you have a PhD behind your name you can publish anything.

0October 27, 2008 at 5:30 pm #177102

Érico Cantarelli JrParticipant@Érico-Cantarelli-Jr**Include @Érico-Cantarelli-Jr in your post and this person will**

be notified via email.Mr. Joe

I would like a table of zero to 6 sigma listings versus PPM rates, and versus PPM rates after a 1.5 sigma shift, in order to check the article.

Best Regards

Érico

0October 27, 2008 at 8:07 pm #177110Did you really expect to get an answer from a post that is 7+ years old?!?!

0October 27, 2008 at 11:24 pm #177116

Mr. JoeParticipant@Mr.-Joe**Include @Mr.-Joe in your post and this person will**

be notified via email.Erico,Thank you for your request. I have been waiting a long time for someone to ask. I have mailed out the table as you requested. Let me know when it arrives. Again, thank you for your interest.

0October 28, 2008 at 4:21 pm #177134that was awesome…

0October 28, 2008 at 8:39 pm #177144

Gary ConeParticipant@garyacone**Include @garyacone in your post and this person will**

be notified via email.You can do them for yourself in Excel How to – http://blog.gpsqtc.com/

0October 29, 2008 at 4:22 am #177158

Tasneem AlamMember@Tasneem-Alam**Include @Tasneem-Alam in your post and this person will**

be notified via email.Dear Joe!

It sounds from what you have written that PPM and DPMO are not different, since most quality professionals would calculate sigma levels in terms of DPMO, not PPM. The second thing is that Motorola’s philosophy of a 1.5 sigma shift may be good for their processes, since they have learnt it from their history, but can it be equally valid for other companies as well? For example, how would you compare a company with two processes, one with a Cpk of 1.5 and one with a Cpk of 3.0? Should we consider the same process shift for the two processes? I will indeed appreciate your expert opinion. Thanks in advance.0April 20, 2009 at 2:06 pm #183526Hi,

In my book it is a misconception that “long-term variation” is allowed with the 1.5 sigma shift. Once the critical parameter is placed under SPC, the system will start to generate alarms whenever the mean shifts. On the other hand, if a process change occurs, the mean is allowed to re-settle within the +/- 1.5 sigma region. The mean will NEVER be allowed to drift (even long-term), as this would mean that the process is out of control.

From a design perspective: When doing a new design, the first thing to establish is the specification limits. When the design is done, it would be tampering with the process to adjust the limits to fit the process. However, in order for the designers to have some degree of freedom, it is accepted that the mean of the design lands within +/- 1.5 sigma of the target value (really needed with specifications that have internal dependencies). But it is then required that the mean is stable at the designed value (or the SPC will yield alarms once more).

make any sense. Please elaborate.0April 20, 2009 at 3:01 pm #183529As a design engineer, you design against a specification.

When doing the optimization, it is necessary to hit the nominal value, get the variation in the design to meet six-sigma requirements (Cp > 2), and balance the design if there are more requirements that contradict each other. If two or more of the design objectives are critical parameters, you might not be able to hit the nominal value on both while maintaining low variation. Hence it is allowed to be offset 1.5 sigma from the nominal target (on both critical parameters) while maintaining low variance. This allows you to balance the design, and to decide when to stop optimization.

As soon as Cp > 2 AND Cpk > 1.5, the design criteria have been met, and you are able to establish the control limits for SPC (and implement this in production). In this case, Cpk and Ppk will be the same…

0April 20, 2009 at 3:15 pm #183531

TaylorParticipant@Chad-Vader**Include @Chad-Vader in your post and this person will**

be notified via email.I wish the moderator would remove and ban all conversation about 1.5 sigma shift.

0April 20, 2009 at 8:59 pm #183545Lets take an example:

Critical parameter: 200mm +/- 5%

The Capabillity Cp must be above 2, meaning the standart deviation must be belov 1.67mm.

The flexibillity allows the mean to vary 1.5 Sigma (2.5mm).

As long as BOTH the above criterias are meet, the process is 6-sigma.

But the if the mean of the pilot series are 202.5mm, then I will require that the mean is 202.5mm and the variation is 1.67mm when I access the process again in a years time.

The design freedom is in the mean hitting the 197.5mm to 202.5mm range.
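The arithmetic in this example checks out; a minimal sketch (units in mm, numbers taken from the post above):

```python
# Critical parameter: 200 mm +/- 5%  ->  spec limits 190 mm to 210 mm
usl, lsl, target = 210.0, 190.0, 200.0

# Cp = (USL - LSL) / (6 * sigma) >= 2 pins the standard deviation:
max_sigma = (usl - lsl) / (6 * 2)   # <= 1.667 mm
allowed_offset = 1.5 * max_sigma    # 1.5 sigma of design freedom = 2.5 mm

print(max_sigma, target - allowed_offset, target + allowed_offset)
# The "design freedom" range for the mean: 197.5 mm to 202.5 mm
```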

0April 20, 2009 at 9:14 pm #183548

TaylorParticipant@Chad-Vader**Include @Chad-Vader in your post and this person will**

be notified via email.All well and good if fit, form and function allow. Just not the way I would do things

0April 21, 2009 at 6:52 am #183552I do not see a contradiction. Could you please elaborate your question?

I have seen the loss function used to offset a design deliberately, and to design for the wear and tear on e.g. moulds to prolong the lifetime of the tool, but I am not a big fan of exploiting this on customer-critical parameters.

The quality that reaches the customer should be the same no matter at what point in time the purchase is made.0April 21, 2009 at 11:02 am #183554Thank you for the response. I wanted to see if you understood the loss function, which I believe you do not. You are advocating reducing variation with respect to limits.

Taguchi is advocating reducing variation with respect to where you want the process centered. This may sound trivial, but what you suggest could make it acceptable to have three-fourths of output above or below target. The practical implication is that a process with a Cp of 1.5 kept on target could be better and less expensive than your suggested process.0April 21, 2009 at 11:44 am #183555Actually, a centered distribution with Cp=1.5 would have (almost) twice the defect rate, since it has both “tails” outside the spec limits, compared to a process with Cp=2 and Cpk=1.5.

Also, the variation seen from a customer view would be larger.

Cp defines variation and Cpk depicts how well centered the distribution is. If completely centered, they are equal.

Of course the defect rate will be less with a centered distribution, and what the individual company aims for here is a decision to be made internally. On the other hand, the development cost associated with getting completely centered is higher, so it all comes down to the business case.

Also, if you can get from 3.4 dpM to 2 dpB, how many additional minutes are you allowed to spend in production per unit/process…
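The “almost twice the defect rate” claim is easy to verify; a sketch (normality assumed, defect rates from the normal CDF):

```python
import math

def phi(z):  # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ppm(z_near, z_far):
    # defect rate (ppm) when the two spec limits sit z_near and z_far sigmas
    # away from the process mean
    return (phi(-z_near) + phi(-z_far)) * 1e6

centered_cp15 = ppm(4.5, 4.5)  # Cp = Cpk = 1.5: both tails at 4.5 sigma
offset_cp2 = ppm(4.5, 7.5)     # Cp = 2, Cpk = 1.5: far tail is negligible
print(centered_cp15, offset_cp2)  # ~6.8 ppm vs ~3.4 ppm, a factor of ~2
```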

0April 21, 2009 at 1:13 pm #183556

TaylorParticipant@Chad-Vader**Include @Chad-Vader in your post and this person will**

be notified via email.“the development cost associated with getting compleetly centered is higher”

Simply not true0April 21, 2009 at 1:44 pm #183557

Gary ConeParticipant@garyacone**Include @garyacone in your post and this person will**

be notified via email.Four things – You comments show you don’t understand Taguchi’s Loss Function

conceptually.Cpk does not depict how centered a process is, it depicts how

close to an edge it is.You said – “On the other hand, the development cost associated

with getting compleetly centered is higher, so it all comes down to

the business case.” – I agree with Chad – what nonsense.You said – “Also, if you can get from 3.4dpM to 2dpB, how many

additional minuttes are you allowed to spend in the production per

unit/process.. ” – I try to never get into theoretical nonsense, when

you get a process where everything is 3.4 dpM, let me know and

then we will talk about 2 dpB.0April 21, 2009 at 1:50 pm #183558I will be willing to rephrase that to “could be higher”.

If the parameter is independent I agree, but if you are optimizing several linked parameters, you might not be able to get all of them centered (RSM is only going to give you the best compromise).

There could also be problems where the optimal variance yields an offset in mean. It is not always possible to adjust one without affecting the other.0April 21, 2009 at 2:48 pm #1835591: Do we agree that a centered distribution with Cp=2 might have a higher cost using Taguchi than a distribution with Cp=5 that is offset from the mean? It is the area covered by both the loss function and the distribution that indicates the cost.

2: This is a matter of semantics… Cpk is in my view an indicator of how far from center the distribution is. If it is equal to Cp, the distribution is centered. If it is equal to zero, the mean is on one of the limits, and if it is negative, the mean is outside the spec limits.

3: As I wrote to Chad, I can partially agree with this.

4: If you have tried to volume-produce equipment where a robot inserts 400+ screws into one unit, the screws had better perform well beyond 3.4 dpm. It takes seconds to insert a screw; if the diameter is too high and the head breaks off, it takes 30 to 60 minutes to replace it. Solder joints on a 400-component board are another example (more than 1000 solder joints on one board).

Gary ConeParticipant@garyacone**Include @garyacone in your post and this person will**

be notified via email.1. No, we don’t agree. 2. It is not semantics at all. Cpk alone tells you nothing about how centered a process is. If you also know Cp, you can figure it out. 3. There is nothing to partially agree with. Your statement is wrong. 4. I did not ask you for examples of what needs to be capable. I am saying your wanting to discuss better than 3.4 per million is theoretical. Assuming your screws or solder are better than that, what about all of the others supporting the same fulfillment process?0April 21, 2009 at 5:35 pm #183572Gary: WIN!

0April 21, 2009 at 5:56 pm #183573As I understand the Loss function, it tells you what cost is associated with any given sample in a given distance from center. Hence an offset distribution can never get below the cost at the mean, no matter how capable it is. A centered distribution can of cource approach zero, but if the variance is high, then some of the samples will have a very high cost and potentially bring the average cost above the before mentioned offset distribution. If this is not correct, I would like to know your interpretation (I am willing to learn here).

Cpk does not tell you how centered the distribution is on its own…I agree (but Cpk without Cp does not really tell you anything at all).

If we are not in agreement on the Taguchi loss, of course we will not agree on point 3 either.

For a system (more than one opportunity for failure) you design for a given dpmo, or a given first pass yield. But if each individual opportunity does not meet the six-sigma requirement (3.4dpm), would you then call the system/product six-sigma?0April 21, 2009 at 8:27 pm #183579

TaylorParticipant@Chad-Vader**Include @Chad-Vader in your post and this person will**

be notified via email.“but Cpk without Cp does not really tell you anything at all”

Wrong. In contrast, Cpk can tell you more about your process than Cp.

You are working completely in theory……………..

Normal distribution of a process is inherent within Cpk data. If the data is believable and good, then the Loss function has no relevance to cost. Scrap is scrap and should be dealt with accordingly. If a given part meets parameter specifications and fit, form and function are met, how can a cost be associated with variation from the nominal? Deviation from nominal may compromise the integrity of the final product, but if so, then the design was flawed from the beginning. It is irrational to think otherwise.

“you design for a given dpmo, or a given first pass yield”

I think I know what you mean, but please elaborate.

Forget 3.4 DPMO and 1.5 Sigma Shift…………0April 21, 2009 at 10:50 pm #183582

Mike CarnellParticipant@Mike-Carnell**Include @Mike-Carnell in your post and this person will**

be notified via email.Kaare,

I am in complete agreement with Gary’s comments. You do appear to be confused, particularly with regard to Taguchi.

I don’t buy your screw example. Regardless of how capable your process is for driving 400+ screws, I have never seen a process that produced screws that were better than 3 sigma.

Just my opinion.0April 22, 2009 at 6:20 am #183588OMG have you got a lot to learn – you say you have a book?

0April 22, 2009 at 6:22 am #183589Chad,Don’t bother with this guy. Another person that knows nothing beyond the training rhetoric.

0April 22, 2009 at 10:08 am #183593“Wrong, In contrast Cpk can tell you more about your process than Cp”

Cpk will tell you the likelihood of a defect occurring. If the calculation is based on history, it enables you to predict the level of defects to be expected. Cp, on the other hand, tells you how much variation the process exhibits relative to the variation the spec limits allow for. When designing a product, the biggest challenge is lowering the variation of the customer-critical parameters (in my experience), and this makes the Cp’s very valuable in optimizing the design. Cpk is in this context a measure of how well centered (or how close to the individual limits) the design is. The development strategy I have used in designs has been heavily based on reaching the desired variation (Cp) of the system. This design phase is then followed by a design centering where the means are centered to reach the desired Cpk (while still maintaining the required Cp). A lot of this work is done in simulation (both mechanical and electrical) and then followed by prototyping that verifies the expected performance.

“You are working completely in theory……………..”

No, this has been the way I have worked for the last 7 years

“Normal distribution of a process is inherent within Cpk data”

No; Cpk can be calculated for any distribution that can be transformed into a normal one.
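One way to make the "transform first" point above concrete: for right-skewed, lognormal-ish data, apply the same monotonic transform to both the data and the spec limits, then use the normal-theory Cpk formula on the transformed scale. This is an illustrative sketch (the function name is mine); in practice a fitted transform such as Box-Cox is more common.

```python
import math
from statistics import mean, stdev

def cpk_via_log_transform(data, lsl, usl):
    """Cpk for lognormal-ish data: log-transform the observations AND the
    spec limits, then apply the usual normal-theory formula."""
    t = [math.log(x) for x in data]                # transformed observations
    t_lsl, t_usl = math.log(lsl), math.log(usl)    # transformed spec limits
    mu, sigma = mean(t), stdev(t)
    return min(t_usl - mu, mu - t_lsl) / (3 * sigma)
```

The key point is that the limits must go through the same transform as the data; mixing transformed data with raw limits gives a meaningless index.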

“If the data is believable and good, then the loss function has no relevance to cost. Scrap is scrap and should be dealt with accordingly. If a given part meets parameter specifications and fit, form and function are met, how can a cost be associated with variation from the nominal? Deviation from nominal may compromise the integrity of the final product, but if so, then the design was flawed from the beginning. It is irrational to think otherwise.”

I agree that whenever a sample is inside the spec limits, the process or product should fulfill all requirements. I should probably have said unexploited value instead of cost, sorry!

The loss function is founded in the belief that a centered design performs to a higher quality level than an offset design. I do not believe this to be true for all parameters, but the example I pulled from the first textbook on my shelf is valid: it regards the diameter of the balls in a ball bearing. If the balls have a higher or lower diameter than nominal, the lifetime of the bearing is reduced. The design criterion for lifetime then gives you the specification limits for the diameter of the balls.

If the balls are produced with a very narrow distribution that is offset from the nominal value, of course the mean lifetime will be lower, but the minimum lifetime will be higher (hence the period you can guarantee operation will increase). Of course the minimum life would be even higher if the very narrow distribution was centered, but this might not always be possible. The increased minimum life (with the same specification) can justify an increased cost, as the customer might be willing to pay for it. In this context, the loss function relates to a potential value of your process, and you can use it for building a business case for an improvement project.
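The trade-off described above can be quantified with the quadratic (Taguchi) loss function: for any distribution with mean mu and standard deviation sigma, the expected loss is k(sigma^2 + (mu - T)^2). A small sketch with made-up numbers (k and the process figures are illustrative, not from the post):

```python
def expected_taguchi_loss(mu, sigma, target, k):
    """E[k*(X - target)^2] = k*(sigma^2 + (mu - target)^2); holds for any
    distribution with that mean and standard deviation, normal or not."""
    return k * (sigma ** 2 + (mu - target) ** 2)

# Illustrative comparison: a wide centered process vs. a narrow offset one.
centered_wide = expected_taguchi_loss(mu=10.0, sigma=0.20, target=10.0, k=100)
narrow_offset = expected_taguchi_loss(mu=10.1, sigma=0.05, target=10.0, k=100)
```

With these numbers the narrow offset process has the lower expected loss (1.25 vs. 4.0), which mirrors the point above: a small offset can be preferable when it buys a large reduction in variation.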

“”you design for a given dpmo, or a given first pass yield”

I think I know what you mean, but please elaborate.”

When there is a very limited set of opportunities for failure (the screw example), it is reasonable to set a dpmo target, as a 99% fty would not really be a challenge. When a system is comprised of several hundred components and thousands of opportunities, a dpmo target would potentially allow a relatively high defect rate on the system. In those cases, I have worked with DPM on individual critical parameters, typically 3.4 dpm (the real requirement for individual parameters is Cpk > 1.5 and Cp > 2), and a tighter overall fty, typically 90% for the system. These are requirements from the factory in order for the product to be released for production.

0April 22, 2009 at 10:41 am #183594

Stan,

I never claimed to be an expert on Taguchi. My work has been based on a Cp/Cpk approach. I have tried to relate my experience to Taguchi as I understand it, and I then hope for an open discussion (that should be the purpose of a forum, right?), and maybe some enlightenment regarding the loss function.

So far I have received two statements from you:

1: I am wrong

2: I do not know what I am doing

If you interpret the Taguchi loss function in a different way than I do, then please tell me where I am in error so I can learn from that mistake. I joined this forum to learn and share experience. If my inputs are not agreed to, then fine by me, but at least present your views so that I have a chance of learning from them. I might be wrong, but please convince me!

0April 22, 2009 at 10:55 am #183595

The screw example is a real-world case. We used approximately 2 million screws per week, and had repairs on 2–6 units a week after optimization.

This was based on a major effort to reduce screw defects, and included optimization of the screw robot, the cast receptacle and the screw spec.

0April 22, 2009 at 12:49 pm #183597

You interpret specs as absolutes and they are not. That is the message of the loss function.

If you want to have a rational discussion, I will. But you need to stop with all of your prepackaged answers. The truth is the best production in the world is done by people following the principles of the loss function.

Let’s talk about when your offset is an okay solution and when it’s not. You present it as “the answer”. It is not, so how do we decide when it’s okay?

0April 22, 2009 at 8:57 pm #183620

Mike Carnell, Participant (@Mike-Carnell)

Kaare,

I don’t buy it for two reasons. First, I have battled with crap screw suppliers for too many years; I know they don’t build screws that well, particularly in those volumes. Second, after reading your posts and your lack of understanding, I would be willing to bet you would not be optimizing anything.

Just my opinion.

0April 22, 2009 at 9:16 pm #183624

First, I am sorry if I have come across as square. This was never my intention.

“You interpret specs as absolutes and they are not.”

In the projects I have worked with, the spec has very rarely been open for discussion. Sure, the design concepts and underlying specifications could be changed, but a lot of the truly challenging specifications have been given by legislation or customer contracts.

Also, if the work with VOC and QFD has been done properly, you should have a good idea about where the spec limits are, and what the “cost” is if you need to compromise on one.

You are of course free to design a better system, but the spec is an acceptance criteria.

Also, if I send a spec to one of my suppliers, I do not expect to change it, particularly if I am introducing a second source for an existing component. I do not say all specs are absolute, but in my work most have been.

Could you give an example of a spec that is not absolute?

PS: Am I interpreting “absolute” correctly here?0

- AuthorPosts

The forum ‘General’ is closed to new topics and replies.