# Use of 1.5 Sigma Shift and 3.4PPM

Six Sigma – iSixSigma › Forums › Old Forums › General › Use of 1.5 Sigma Shift and 3.4PPM

This topic contains 10 replies, has 5 voices, and was last updated by Ken Myers 18 years, 6 months ago.

- March 20, 2001 at 5:00 am #27135

Joe Perito (@Joe-Perito)

I'll caution readers again that the 1.5 sigma shift is a number derived from Motorola's experience with their processes, and may not be (and probably is not) applicable to your processes until you have valid statistical data demonstrating that the Motorola process data is applicable to your process. Contrary to the suggestion in another article, this is not fancy or wishful thinking. The 1.5 sigma shift gaining popularity is reported by the best-known authorities to be the shift that Motorola has seen in their processes over time. It has no statistical relevance or formula derivation outside of Motorola's processes.

Contrary to a previous article, the claim that "the maximum expected shift using a sample size of four measures per subgroup would be approximately 1.5 standard deviations" has no factual basis in statistics. The sample size needed to detect a 1.5 sigma shift depends on: 1) the amount of variation in the process (sigma); 2) the confidence level you wish in the data (alpha, in the form of a Z factor); and 3) the detection level you want in determining the shift. The sample size is n = (Z squared times sigma squared) / d squared. It should be intuitively obvious to readers that processes with different amounts of variation would not all require the same sample size (4) to detect a 1.5 sigma shift. It should be equally obvious that a cutting tool breaking on a screw machine does not have to have a "maximum shift" of 1.5 sigma just because the sample size is 4.

Also take note that the "standard error of the mean" calculates the standard deviation (or variation) between multiple means, i.e., subgroup averages. That formula, the standard deviation of the individuals divided by the square root of the subgroup size, has nothing to do with calculating a "maximum shift". Again, it is missing the detection (or sensitivity) level you wish to detect (1.5 sigma), and it is missing the confidence level (Z) you want in making the calculation.

The previous article then states that 5 samples per subgroup are needed to detect a 1.34 sigma shift, and that the reason 4 or 5 samples are used in a subgroup is that this is the best economic trade-off for detecting shifts in the mean. 4 and 5 are the most commonly used subgroup sizes due to wide acceptance. In my 35 years of experience, whenever I asked why those sample sizes were selected for the process under discussion, the answer "
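Joe's sample-size formula, n = Z²·σ²/d², takes only a few lines to sketch. A minimal illustration, where the function name is mine and the choice of Z = 1.96 (roughly 95% confidence) is an assumed example, not a value from the post:

```python
import math

def sample_size(z: float, sigma: float, d: float) -> int:
    """Joe's formula: n = (Z^2 * sigma^2) / d^2, rounded up to a whole sample."""
    return math.ceil((z ** 2) * (sigma ** 2) / (d ** 2))

# Detecting a shift of d = 1.5 sigma at roughly 95% confidence (Z = 1.96):
print(sample_size(z=1.96, sigma=1.0, d=1.5))   # 2
# A finer detection level (d = 0.5 sigma) on the same process needs more samples:
print(sample_size(z=1.96, sigma=1.0, d=0.5))   # 16
```

This bears out Joe's point: the required n moves with sigma, the detection level d, and the chosen Z; it is not fixed at 4.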

March 20, 2001 at 5:00 am #66055

Ken Myers (@Ken-Myers)

Come on Joe, get the books out and try reading them… The +/-1.5 SD shift I speak of is calculated as the standard error of the mean. The mean shift in a process has typically been used as the measure of long-term variation.

I agree that for individuals or transactional data this shift may be different. For some automated processes I've observed shifts of less than +/-1.0 SD. This number comes from collecting data over months. If you are controlling your process using Statistical Process Control methodology, the control limits on the control chart of averages are placed at +/-1.5 standard deviations from the grand mean when using a subgroup sample size of 4. If you maintain a stable process, i.e., all values within the control limits, then this process will have a maximum shift in the MEAN of +/-1.5 standard deviations.

Yes, the reference of a +/-1.5 standard deviation shift in long-term variation was originally derived from Motorola’s processes. But, in lieu of collecting process data over long periods to determine the actual long-term variation, Motorola’s numbers serve as a guide in ESTIMATING the expected process performance using the SHORT-term variation.

Again, the slant here is towards the SPC methodology. I understand some of my colleagues may not agree with this thinking. But if you have limited data to work with and want to get some idea of the expected long-term process performance a +/-1.5SD shift is a reasonable starting point.

While we spend so much time debating the merits of this 1.5 SD shift, and whether it's reasonable, what are we really debating? Do we plan to do something with this expected shift, whatever it is? Perhaps we plan on using it as a basis for estimating the expected long-term process performance. Perhaps there are other reasons, which I'm interested to hear. The connection between long-term variation and the Six Sigma estimate of process quality, 3.4 dpmo, is highly dependent on how well the data approximates the normal distribution. While we are concerned with the overall accuracy of estimating process performance, we should be advised that our estimates come from the tails of the data. Usually, there is far more estimation error working in this region than the error associated with estimating the long-term variation.

March 20, 2001 at 5:00 am #66061

Kim Niles (@Kniles)

Dear Joe: I like your post as it is very statistically oriented and provides a clear and detailed point of view.

Dear Ken: I like your question of “what are we really debating?”. Wouldn’t a clear definition of a stable process solve the dilemma?

Suppose we define a stable process as that where the standard deviation used to define the process contains all of the variation possible that the process will ever see. Of course, it would make Six Sigma as achievable as “zero defects” but the process would never shift, negating the need for any process shift.

Now suppose we define a stable process as anything else. It could be the point where data taken from the process forms a normal distribution with greater than 65% confidence. It could be when greater than 50% of the data consistently falls within the specifications, or when our Cpk would always be above zero, etc. No matter how you look at it, one can always argue that the process will shift with time. Then it comes down to an argument over how much it should shift, and we've been there, done that.

Personally, I worked on a process used to make millions of gloves that was seven years “stable” and shifted from summer to winter so much that we had completely different material formulas in use to compensate for it. I like the 1.5 sigma shift for all the other completely non-statistical reasons, the emotional ones, the ones that motivate people to strive to reach realistic goals.

Let's accept the shift as one of the parts of Six Sigma that makes it novel, but in order to do this we need to develop a new definition of Six Sigma process stability.

Do you agree?

What would it be?

KN
http://www.znet.com/~sdsampe/kimn.htm

March 20, 2001 at 5:00 am #66062

Joe Perito (@Joe-Perito)

Ken, I have been in QA/QC work for 35 years and I have read the books… that's part of the reason I have a Bachelor's and a Master's degree in Quality Assurance Engineering.

I don't know where you are getting your information, but it is not out of a book. The control limits for subgroup data are based on the formulas: X-double-bar plus A2*R-bar for the upper control limit, and X-double-bar minus A2*R-bar for the lower control limit. A2 is a conversion factor, based on subgroup size, that converts the ranges of the subgroups into the 3-sigma limits. Thus, regardless of the subgroup sample size, the A2*R-bar term always yields the upper and lower 3-sigma control limits. This has nothing to do with, and does not calculate, a 1.5 sigma control limit. You will find with further study that as the subgroup sizes get larger, the A2 factor gets smaller and the 3-sigma control limits get tighter. That is because of the statistical fact that the larger the sample size, the more accurate the estimate of the true process mean, the smaller the standard error of the means, and thus the tighter the control limits. Individuals charts have control limits at X-bar plus and minus 3 sigma. Subgroup control limits get tighter as the subgroup size increases… but these are still 3-sigma control limits.
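The A2·R-bar recipe Joe describes can be sketched in a few lines. The A2 values below are the standard Shewhart constants from SPC tables; the numeric inputs in the example are hypothetical:

```python
# Standard Shewhart A2 constants by subgroup size (from published SPC tables).
A2 = {2: 1.880, 3: 1.023, 4: 0.729, 5: 0.577, 6: 0.483}

def xbar_control_limits(x_double_bar: float, r_bar: float, n: int):
    """X-bar chart limits: X-double-bar +/- A2 * R-bar (3-sigma limits for averages)."""
    half_width = A2[n] * r_bar
    return x_double_bar - half_width, x_double_bar + half_width

# Hypothetical process: grand mean 10.0, average range 2.0, subgroups of 4.
lcl, ucl = xbar_control_limits(x_double_bar=10.0, r_bar=2.0, n=4)
# lcl and ucl come out to about 8.542 and 11.458
```

Note how A2 shrinks as n grows (0.729 at n=4 vs. 0.483 at n=6), which is Joe's point about tighter limits at larger subgroup sizes.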

March 20, 2001 at 5:00 am #66065

Ken Myers (@Ken-Myers)

Joe,

What is your definition of the word “sigma”?

What is your definition of the words “standard deviation”?

Are they the same thing?

You are thinking correctly, but confusing the two sets of terms. No matter how much education and experience one has, one can sometimes be prone to confusion. This is an understandable, and usually acceptable, human trait. But when we refuse to listen, something else is most likely the issue…

Long-term variation in a process is the component of the total variation that is observed on the average control chart. Long-term variation is the variation that constitutes the "shift" we have all been talking about. When looking at any Six Sigma reference table, two columns are provided, one for short-term and one for long-term variation. The short-term variation is observed on the Range or Standard Deviation chart. The squares of the short- and long-term variations add together to equal the total variation. I hope you accept this mathematical trivia.

The control limits on the average chart are +/-3 STANDARD ERRORS from the grand mean, not +/-3 sigmas. A standard error is equal to: (std. deviation)/(n)^(1/2).

In words, that is the short-term variation divided by the square root of the subgroup sample size. If the control limits are +/-3 std. errors, then that is +/-3*(std. deviation)/(n)^(1/2). Please look at this equation closely… Given a subgroup sample size of 4, the calculation yields: (watch closely) +/-3*(SD)/(4)^(1/2) = +/-3*(SD)/2 = +/-1.5*(SD). That's +/-1.5 times the short-term variation computed using the Range chart, R-bar/d2, which for n=4 is R-bar/2.059. If you are using a chart of standard deviations, then you would compute the pooled standard deviation and substitute it directly. Notice that for a subgroup sample size of 4 you control the variation of the averages over time within +/-1.5 STANDARD DEVIATIONS, measured as short-term variation. As you correctly stated, increasing the samples per subgroup allows better control of the average. For example, for n=10 the average could be controlled to +/-0.95 STANDARD DEVIATIONS, not sigmas. Notice that when using a subgroup sample size other than 4, the shift in the mean will be larger or smaller than +/-1.5 standard deviations. That shift in the mean is what I have been talking about since the beginning, and it is the same shift the tables refer to when estimating the expected performance of the process. Sigma is a
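Ken's arithmetic, k standard errors re-expressed in short-term standard deviations as k/√n, can be checked with a tiny sketch (the function name is mine, for illustration):

```python
import math

def mean_shift_limit(n: int, k: float = 3.0) -> float:
    """k standard errors expressed in short-term standard deviations: k / sqrt(n)."""
    return k / math.sqrt(n)

print(mean_shift_limit(4))               # 1.5
print(round(mean_shift_limit(10), 2))    # 0.95
```

With n=4 the 3-standard-error limits sit exactly 1.5 short-term standard deviations from the grand mean, and with n=10 they tighten to about 0.95, matching the figures in the post.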

March 21, 2001 at 5:00 am #66071

Joe,

Simple question – How much shift are you detecting in your processes?

March 21, 2001 at 5:00 am #66074

You know, the answer posted by Mike on March 12 was right. Mike said:

How about finding your own short term capability by subgrouping and looking for actual process potential. Measuring the difference between Long term and short term tells you whether you are either out-of-control or the process technology requires improvement…in most cases both. The tool is called a Z Control vs. Technology plot. Isn’t that the point of Measure Phase?

What Mike is referring to is a simple tool used to great effect by those lightweights, GE. It says: find your short-term capability and your sigma shift (easy to do with Minitab). If your short-term capability is inadequate (say, a Cp of less than 1.5), you have a technology problem. That does not mean go buy new equipment, but it does mean taking on projects aimed primarily at variance reduction. If your sigma shift is large (> 1.5), it says you have a basic control problem and your projects should be focused on better controls. If you are both incapable and have too large a shift, work controls first (simpler) and variance reduction second. Oh, by the way, make sure the measurement system is capable before you start.
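The rule of thumb described here could be sketched as a simple classifier. The 1.5 thresholds for Cp and for the sigma shift come straight from the post; the function name and return values are illustrative assumptions:

```python
def diagnose(cp_short_term: float, sigma_shift: float) -> list:
    """Rule of thumb from the post: ST Cp < 1.5 suggests a technology problem
    (variance-reduction projects); sigma shift > 1.5 suggests a control problem
    (work on controls first, since that is simpler)."""
    problems = []
    if sigma_shift > 1.5:
        problems.append("control")        # tighten process controls first
    if cp_short_term < 1.5:
        problems.append("technology")     # then take on variance-reduction projects
    return problems or ["capable and in control"]

print(diagnose(cp_short_term=1.2, sigma_shift=1.8))   # ['control', 'technology']
```

The ordering of the checks mirrors the post's advice: when both conditions trip, controls come first.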

March 21, 2001 at 5:00 am #66076

Ken Myers (@Ken-Myers)

Gary,

Mike writes, and you reiterate:

How about finding your own short term capability by subgrouping and looking for actual process potential.

Ken responds:

What are we subgrouping? What is the measure of process potential? Is this the same method as used when doing SPC and computing capability indices?

Mike writes, and you reiterate:

Measuring the difference between Long term and short term (variation???) tells you whether you are either out-of-control or the process technology requires improvement…in most cases both.

Ken responds:

What is the process here? Comparing long-term to short-term variation? Do you mean looking at Cp and Cpk vs. Pp and Ppk? You mentioned Minitab does this, perhaps via the SixPack evaluation? However, don't you need to first ask if your process variation is stable on a short-term basis in order to determine if it is predictable? You don't evaluate process stability by comparing short- and long-term variation, do you?

More questions to ponder:

What is shifting in the process when speaking of a “sigma” shift?

Why do we need to talk about a new way of performing this assessment using a Z Control vs. Technology plot? Isn't this stuff confusing enough to newbies without introducing new language exclusive to the Six Sigma elite?

Isn't most of the work done in the Measure phase to determine how well your process meets the requirements really just the typical process characterization tools, such as Process Behavior Charts (AKA control charts), capability studies, and the like, that we've been using since the mid-1950s? Have processes changed that much? You've suggested a process is a process is a process. Why all the new terminology?

Is it really necessary to invent a new language, and new methods that look like the old ones, in order to do this work?

No wonder there is so much confusion on what the Six Sigma method is really all about…

Regards,

Ken

March 22, 2001 at 5:00 am #66081

Ken,

Actually the process is fairly simple. Z-Control vs. Technology takes a Cp from a logical subgroup that exhibits best practice (for whatever reason), plotted on the technology axis. Then the Ppk for the population (for argument's sake, let's assume the sample we are calling the population represents the LT process) is subtracted from the Cp value for our ST "best practice". That value is plotted on the control axis.

There is some argument and controversy over the value of the tool. For my use, it's a best guess early in the project that merely points in a direction for investigating "X's". If the plot shows an "out-of-control" condition, what about your best practice is different from the LT process? It can also set a benchmark for comparing the success of the project once the control phase is complete. I've never seen the tool used to determine LT variation in a process. My input was to discourage the generic practice of applying a 1.5 sigma shift to LT (population) data and calling that ST data. I think some get too caught up in the argument over rating a process's sigma level. Six Sigma is about making improvements towards a goal. The goal may vary from process to process and company to company. I like to assess improvements in terms of % as long as the measurement technique remains constant.
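A rough sketch of the control-axis computation described above, using the textbook Cp and Ppk definitions; all the spec limits and sigmas below are hypothetical numbers, not values from the thread:

```python
def cp(usl: float, lsl: float, sigma_within: float) -> float:
    """Short-term (potential) capability: (USL - LSL) / (6 * sigma_within)."""
    return (usl - lsl) / (6 * sigma_within)

def ppk(usl: float, lsl: float, mean: float, sigma_overall: float) -> float:
    """Long-term performance: nearest spec-limit distance / (3 * sigma_overall)."""
    return min(usl - mean, mean - lsl) / (3 * sigma_overall)

# Hypothetical data: a best-practice subgroup vs. the long-term population.
technology_axis = cp(usl=10.0, lsl=0.0, sigma_within=0.8)                 # ST Cp
control_axis = technology_axis - ppk(10.0, 0.0, mean=5.5, sigma_overall=1.2)
# technology_axis is about 2.08; control_axis is about 0.83
```

A large control-axis value (big gap between ST capability and LT performance) is what the post flags as a control problem rather than a technology problem.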

March 22, 2001 at 5:00 am #66082

Ken Myers (@Ken-Myers)

Thanks very much for your feedback.

