# sigma level question (from a naive newbie)


Viewing 24 posts - 1 through 24 (of 24 total)
#51441

Tim
Member

I’m confused as to how the ‘sigma level’ definitions have become bound to an arbitrary shift of 1.5 standard deviations from the equivalent numbers for a normal distribution (e.g. a 6 sigma level is the probability of a value being > 4.5 standard deviations above the mean).

As far as I can see, the 1.5 is a fudge factor from Motorola-specific processes and accounted for how their processes moved over time. If I understand the literature, the original explanation put the fudge down to sampling over relatively short time periods missing longer-term variations caused by not controlling for enough factors.

Is this changed use of the term sigma (i.e. standard deviation – 1.5) consistent throughout the six sigma fraternity: if a ‘six sigma’ person refers to an 8 sigma process, is that a 6.5 sigma process to the man on the street?

I also see many people commenting on two-tail variation from the mean, while I’m pretty sure that sigma level is only defined as above, with, I suppose, the other tail being considered negligible. I can see that either analysis could be reasonable depending on what’s being measured. Is this a common area of confusion?
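On the two-tail point, a quick sketch (Python, standard library only; the symmetric ±6 sd specification is assumed for illustration) shows why the convention can get away with quoting only one tail: once the mean is shifted 1.5 sd toward one limit, the far tail contributes essentially nothing.

```python
from statistics import NormalDist

nd = NormalDist()
# Mean shifted 1.5 sd toward the upper limit of a symmetric +/-6 sd spec:
upper_tail = 1 - nd.cdf(6.0 - 1.5)   # near tail, 4.5 sd away
lower_tail = nd.cdf(-6.0 - 1.5)      # far tail, 7.5 sd away
print(round(upper_tail * 1e6, 1))    # ≈ 3.4 DPMO
print(lower_tail * 1e6)              # vanishingly small, so one tail suffices
```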

#178289

GB
Participant

Try searching on the topic of the 1.5 sigma shift… There’s at least 6 years’ worth of material to sift through.

#178290

mojocrampin
Participant

It has been found that processes tend to exhibit more variation in the long term than in the short term.  This “shifting and drifting” of short term subgroup averages needed to be accounted for.  Examples could be tool wear, different operators, different lots of raw materials.
This is where the 1.5 comes from.  Practitioners found that if they shifted the mean of their short term distribution curves by 1.5 in each direction toward the upper and lower specification limits, then the amount of defects in the tails of the curve approximated the real time defects they could expect over the long term from variables (either uncontrollable or not cost effective to try and control) that will inherently be in every process.
Generally, practitioners referring to sigma levels are referring to the Z-table scores, which equate a long-term DPMO to a short-term sigma level; e.g. a process exhibiting 233 DPMO in the long term has a short-term sigma level of 5.0.
I agree, it is a bit confusing.  When in doubt, refer to your Z to DPMO conversion tables.
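The Z-table lookup described above is just the normal quantile plus the conventional 1.5 shift; a minimal sketch in Python (standard library only, function names my own):

```python
from statistics import NormalDist

_SHIFT = 1.5  # the conventional long-term shift discussed in this thread

def sigma_level(dpmo: float) -> float:
    """Short-term sigma level implied by a long-term defect rate (one-tailed)."""
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + _SHIFT

def dpmo(sigma: float) -> float:
    """Long-term DPMO implied by a short-term sigma level."""
    return (1 - NormalDist().cdf(sigma - _SHIFT)) * 1_000_000

print(round(sigma_level(233), 1))   # ≈ 5.0, matching the example above
print(round(dpmo(6.0), 1))          # ≈ 3.4, the famous "six sigma" figure
```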

#178292

Tim
Member

Thanks for the responses. I know what the data mean. I’m not familiar with six sigma, but I used to build mathematical and statistical models of various types of processes (inc. chemical reactions, physical systems, and manufacturing processes).

I found it very odd that a discipline that’s focused on understanding the quality of the data that it uses, and which rightfully emphasises the usual statistician’s passion for accurate and consistent definition of the meaning of the data, then redefined the meaning of the usual abbreviation for standard deviation (sigma). Surely it should be 4.5 sigma :-) If the term were used in its normal sense, there wouldn’t be any need for the conversion tables.

I don’t have much problem with the meanings of the terms; I’m just trying to assess the commonality of understanding among the six sigma community.

I’m also more than a bit worried about the size of the fudge factor (1.5 sd) – it sounds like an artifact of a limited sampling approach. Surely, if I can get in some decent automatic measurement, I can eliminate this: I can pick out the other factors and do some sensible time-series analysis to spot and eliminate any ‘shifting and drifting’, and remove such sources of variation just as I aim to do with the rest of the six sigma toolkit?

#178295

GB
Participant

I’ve seen some distributions exhibit in excess of 1.5… I’ve seen some exhibit <1.0. Bottom line: in my experience, most processes, if left alone, will tend to slump some.

#178298

Tim
Member

Of course they’ll move (usually becoming worse as they aren’t being watched). That’s the nature of any time-series data. But, as you say, the shift varies from case to case. So why hard-code 1.5 sd into the conversion tables and obfuscate the situation? Surely this just makes it harder to understand how the tools and techniques work, as there’s a bit of ‘magic’ in there? Whatever. It’s the way that it is.

#178304

Anonymous
Guest

Tim, you’re asking all the right questions… Some more questions you might like to ask are:
What method does Toyota use to count defects? :-)
What shift does Toyota use? :-)
How about its statistical approximation – Lean Enterprise?
Andy

#178306

Tim
Member

Thanks, Andy. I now feel like I’m not necessarily barking up the wrong tree. What’s the answer to your questions? I’m more interested in the approach to:
– working out a relevant fudge factor for a particular process domain (I doubt that’s corporate specific, more likely business process type).
– understanding the benefit/cost of continuous measurement so that there’s no need for a fudge factor.

My main problem domain of interest is the processes for managing IT in large enterprises (mostly post building the code). The common problem that I see is that the numbers used to run IT are very precise… and fabricated – so that they meet the goals, rather than providing a management tool. Few people measure error rates, but I’m seeing 800k DPMO for many IT management processes, with correspondingly large increases in capital and operational costs.

So, I’m interested in the value of data accuracy – hence my concern over such large fudge factors. I need to be able to justify these from a theoretical and practical perspective. I guess that it’s common knowledge in the six sigma community that the DPMO for 6 standard deviations is 0.00099, rather than 3.4. That’s quite a difference!
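For anyone who wants to check Tim’s 0.00099 figure: the two conventions differ only in whether the 1.5 sd shift is applied before reading off the normal tail (Python, standard library only):

```python
from statistics import NormalDist

nd = NormalDist()
unshifted = (1 - nd.cdf(6.0)) * 1_000_000       # 6 sd in the ordinary sense
shifted = (1 - nd.cdf(6.0 - 1.5)) * 1_000_000   # the six sigma convention
print(round(unshifted, 5))   # ≈ 0.00099 DPMO
print(round(shifted, 1))     # ≈ 3.4 DPMO
```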

#178308

Mikel
Member

Nonsense. Practitioners found nothing. The 1.5 is just dogma – it’s not based on any data.

The only worthwhile message is that processes will degrade somewhat over time. A well-controlled process will degrade less than 0.5; a poorly controlled process – who knows.

In the absence of any real process data, the 1.5 is an okay assumption. Nothing more. To build it permanently into the table is just more dogma.

#178312

Steven Crowden
Member

My understanding is that the 1.5 sigma shift came from Motorola: as they measured and remeasured their processes, they found this was an average shift over time. The problem with this, as touched on in a previous mail, is that if Motorola (or anybody) has inconsistent processes, the shift will be more or less over an extended (long-term) period of time. So, could it be said that if Motorola had started with “better” or more consistent processes, we may have grown up with a short- to long-term sigma shift of 0.75 or 1.0? Who knows? I also understand that GE found their own sigma shift was closer to 1.2 sigma over time. But even GE sticks to the 1.5 shift, the industry standard.
Having said all that, I am not a fan of the sigma score at all. My experience is that people (managers/directors) put a great deal of store in having a sigma score and then improving it. Why?
For me you must measure your process, and measure it in a way that doesn’t dictate how the process will be run; i.e. if I measure output, then people working the process will understand which circumstances end up with a pat on the back and which circumstances end up with no bonus. Therefore the measure will begin to dictate the method.
The key is to find the measure(s) that are complementary to the purpose of the process, and that cannot be manipulated by the operators. This measure is all you need to start with as a base for improvement.
From your measure you can implement change that improves the process, and then remeasure. The key is to always review your process with a view to making incremental improvements. What I am not saying here is that one measure taken at the beginning will be sufficient over a long period of time. But my point for this thread is that converting data into a sigma score is, for me, an extra process step that is generally not required.
Thanks

#178314

Tim
Member

The reason that there’s so much store in the numbers is that they give the managers something to use to manage the business that removes any interpretation. For a large business this is crucial – the org is managed by internal team benchmarking. If you can get the right metrics in place this works very well indeed.

The challenge is to continuously evaluate the metrics and keep them ‘clean’. In the worst cases, as you say, they can drive bad behaviour. If the culture’s not right (inc. rigour about the numbers), there’s also a tendency for middle management to become a process of ‘fixing’ the data, which not only drives the wrong business decisions, but is also a hugely expensive process in itself.

#178317

Confused
Participant

From my experience, lots of companies are moving away from sigma scores to DPMO or yields. This removes another area of confusion, and jargon to hide bad projects behind.

#178318

Anonymous
Guest

I agree with Stan’s view …

#178320

Robert Butler
Participant

If you are looking for published papers on the subject – there aren’t very many.  Some time ago I listed the then known documents in the post below:
https://www.isixsigma.com/forum/showmessage.asp?messageID=39663
Since then, the only “new” paper on the issue of which I’m aware is one written by George Box.  I haven’t read it but I’ve been told it is essentially a confirmation of Bothe’s conclusions.
My personal experience and the research I did in trying to understand the issue is in accordance with Stan’s post.

#178322

Anonymous
Guest

Tim,
What method does Toyota use to count defects? None – the line stops as soon as they encounter a defect.
What shift does Toyota use? None.
How about its statistical approximation – Lean Enterprise? None that I know of, but I’m not an expert in this system.

My own opinion about software development is that it is the same as patent drafting. Since both processes do not involve replication, I don’t see how we can take a statistical quality approach. For me, code design is similar to the design of a mechanical prototype – it is generally a heuristic approach. If we start measuring the number of false trails we follow, we soon end up with designer’s block!

#178325

Bill Fowlkes
Participant

Stan is absolutely right. The fudge factor is not based on any real cause; it is essentially dogma.
Two observations:
1. This discussion has been going on for at least 16 years. I know that because the first public Six Sigma class was in 1992, and it was taught then.
2. There is one valid reason for the shift to exist, and that is the fact that the mean of a sample is itself a statistic subject to some uncertainty. The common cause confidence interval can be estimated from the observation data, but it is never zero. There is also the likelihood that over a long time, special causes (special only in the sense that they may not have been affecting the initial observations) will cause deviations that exceed the expected estimate error.
Any discussion about what those “special causes” are, what their magnitude is, and how much the capability index will change as a result is pure speculation.
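The sampling-uncertainty point can be quantified: even a clean short-term study pins down the process mean only to within its standard error. A sketch in Python (standard library only; the subgroup size n = 30 is an assumption for illustration, not from this thread):

```python
from math import sqrt
from statistics import NormalDist

n = 30                       # hypothetical short-term capability sample size
se = 1 / sqrt(n)             # standard error of the mean, in process-sigma units
# 95% confidence half-width on the estimated mean:
half_width = NormalDist().inv_cdf(0.975) * se
print(round(half_width, 2))  # ≈ 0.36 sigma of uncertainty before any real drift
```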

#178328

Mikel
Member

The Motorola story is a myth. No one had any data.

#178329

Mikel
Member

DPMO and sigma scores are the same metric. If you know one, you know the other.

#178332

Jonathon Andell
Participant

I have yet to see anybody derive any useful process knowledge from the alleged 1.5-sigma shift. I wrote an article in ASQ’s Quality Management Forum in 1999, describing my cynical view of the concept. If you want a copy, you can contact me off-line at [email protected].

#178336

Mario Pere Wilson
Participant

I worked for Motorola, Inc. from 1984 to 1991. My direct responsibility, as head of the department of statistical methods, was to implement and disseminate the use of statistical methods to achieve and sustain Motorola’s corporate quality “Five Year Goal” which was: “Achieve Six Sigma Capability by 1992 – in Everything We Do”.
When the document “Our Six Sigma Challenge” was distributed on January 15, 1987, making reference to the plus or minus 1.5-sigma shift and the 3.4-ppm defect level, I communicated with corporate to find more factual details about these statements. The facts were always “anecdotal”. There was no data, no hard analysis, no conclusive evidence, and no statistical validation of these assertions. As the inquiries grew, so did the doubt about the quality goal. Later, in 1988, Mikel Harry and Riegle Stewart came to the rescue to add credibility to the statements by publishing an internal document, “Six Sigma Mechanical Design Tolerancing”. This document again presented no data to support any validation of the 1.5 sigma shift; however, it makes reference to articles written by David H. Evans and A. Bender.
If you follow the trail by reading the articles – David H. Evans, “Statistical Tolerancing: The State of the Art, Part I. Background,” Journal of Quality Technology, Vol. 6, No. 4 (October 1974), pp. 188-195; David H. Evans, “Statistical Tolerancing: The State of the Art, Part II. Methods for Estimating Moments,” Journal of Quality Technology, Vol. 7, No. 1 (January 1975), pp. 1-12; David H. Evans, “Statistical Tolerancing: The State of the Art, Part III. Shifts and Drifts,” Journal of Quality Technology, Vol. 7, No. 2 (April 1975), pp. 72-76; and A. Bender, “Benderizing Tolerances – A Simple Practical Probability Method of Handling Tolerances for Limit-Stack-Ups,” Graphic Science, (December 1962), pp. 17-21 – you will probably come to similar conclusions as I did, which is that there is nothing to substantiate the plus or minus 1.5 sigma shift. https://www.isixsigma.com/forum/showmessage.asp?messageID=39713
I can assert that there was never any data to support the “plus or minus 1.5 sigma shift”.
Anyone can make assertions about the 1.5 sigma shift, but I was there – inside Motorola – and there was never any data to support the 1.5 sigma shift. Back in 1987, Bob Galvin did not have it, Jack Germain did not have it, Bill Smith did not have it, Mikel Harry did not have it, Riegle Stewart did not have it, and I could not get it either.
It is a known fact that processes vary, but by how much we do not know. The second law of thermodynamics tells us that, left to itself, the entropy (or disorganization) of any system can never decrease. Although we cannot completely defeat this law, we can appease it by forcing the system (a process) to a state of functional equilibrium through process monitoring and process adjustments – hence, statistical process control.
My suggestion is, put this illegitimate subject to rest and focus on something more meaningful.

#178337

Gary Cone
Participant

Hear, hear. I was there too, and there was no data, although considerable time was spent trying to back into a justification for it. The method and topic are NVA.

#178339

Taylor
Participant

Finally. The awesome power of the truth.

#178340

GB
Participant

Exactly… Mario is “the horse’s mouth” in this case!

#178352

Tim
Member

Mario/Gary
Thanks. You’ve put my mind at rest.
Tim


The forum ‘General’ is closed to new topics and replies.