why six sigma not seven sigma
 This topic has 34 replies, 19 voices, and was last updated 15 years, 5 months ago by Lebowski.

AuthorPosts

March 5, 2007 at 11:28 am #46293
Amany Nady, Participant (@AmanyNady)
What is the relation between the number six and the name of Six Sigma?
March 5, 2007 at 11:31 am #152742
This is just a name that has been given to this methodology. It has its relevance to the extent that the Six Sigma level is significant to most industries, and is in a way the highest and optimum level at which to run the business. However, there are processes which run at 12 to 24 sigmas as well (medical systems, aerospace projects, etc.).
March 5, 2007 at 11:36 am #152743
Vidyadhar, Member (@Vidyadhar)
Sigma stands for standard deviation, which is a measure of variation for a process. When you say that a process operates at Six Sigma, it means that the process variation is so small that your specification limits are six standard deviations away from the centre (for a normal distribution).
For a process at 3 sigma, the area covered by the normal curve is approx 99.73%, which means you will still have 2,700 defects in a million opportunities; against a process at six sigma, where the normal curve would cover 99.99966% of the area, which means you will have 3.4 defects in a million opportunities.
Also, to your question about why six sigma and not seven: it depends on the type of product and service.
For some products a 4 sigma level is good enough, while for some even six sigma is not enough (as it still allows 3.4 defects in a million opportunities).
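The figures quoted in this thread come from the one-sided tail of the normal distribution with the conventional 1.5-sigma shift applied, so that a "6 sigma" process corresponds to a 4.5-sigma tail. A minimal Python sketch (the function name is ours) reproduces the standard sigma-to-DPMO conversion table:

```python
from statistics import NormalDist

def dpmo(sigma_level, shift=1.5):
    """Defects per million opportunities at a given sigma level,
    applying the conventional 1.5-sigma long-term shift."""
    # One-sided tail area beyond (sigma_level - shift) standard deviations
    tail = 1 - NormalDist().cdf(sigma_level - shift)
    return tail * 1_000_000

print(round(dpmo(6), 2))  # 3.4
print(round(dpmo(3)))     # 66807
```

Note that without the shift, the two-sided tail beyond +/- 3 standard deviations gives the 2,700-per-million figure; the difference between these two conventions is exactly what the later replies in this thread argue about.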
Let's take the example of the process of a delivered baby being given to its correct parents: if the process is at six sigma, then out of 2 million newborns about 7 would go to the wrong parents (3.4 per million). So what sigma level one needs to operate at is dictated by the nature of the process and service.

March 5, 2007 at 6:27 pm #152761
Hemant Gham, Participant (@HemantGham)
We have seen that Six Sigma is itself a stringent quality goal with a specific defect rate (3.4 defects per million parts or opportunities).
Six Sigma Level is almost always considered to be a “Goal” and organizations and individuals who have understood the value of the methodology strive to reach this stringent target.
Remember, when we say 6 Sigma, we are expecting a defect rate of 3.4 per 1,000,000 opportunities.
I cannot recall exactly, but I think I read somewhere that the average sigma level of manufacturing industries is between 3 and 3.5 sigma. In various countries you may find various levels. This is because the sigma level of a typical manufacturing company is strongly driven by three types of variation.
Example: Let us assume you are making some ‘X’ Product. There are three types of variations affecting the quality of this product. These are:
1) Unit-to-unit variations – These are product characteristics or process variables.
2) Usage variations – These are environmental factors (the environment in which the product is operated) like climate, location, mechanical and chemical stresses, vibration, etc.
3) Noise variation – This is the most important of the three. It includes variation in people's attitudes, behavioral aspects, and rigor in process implementation. You needn't be surprised if you find high sigma levels in Japanese companies.
Considering all the above variations in industry, it is far more difficult to go beyond a certain sigma level. Most processes do not go – or shall I say need not go – beyond certain performance levels. And that seems to be a hard fact.
A quick comparison between 3, 4, 6 and 7 sigma levels, showing the expected number of defects for a given number of opportunities or parts:

Opportunities or parts | @ 3 sigma | @ 4 sigma | @ 6 sigma | @ 7 sigma
100                    | 6.6807    | 0.621     | 0.00034   | 2.120E-06
1,000                  | 66.807    | 6.21      | 0.0034    | 2.12E-05
10,000                 | 668.07    | 62.1      | 0.034     | 0.000212
100,000                | 6,680.7   | 621       | 0.34      | 0.00212
1,000,000              | 66,807    | 6,210     | 3.4       | 0.0212
From the table above, suppose Company A is producing 100,000 parts; if there are 621 defective ones, then the company is at a 4 sigma level, and it may analyze its economic loss and take action. If Company B is producing the same number of parts but operating at a 6 sigma level, then its defects are almost zero (0.34). Another Company C at 7 sigma will produce 0.00212 defects per 100,000 parts.
A centered 6-sigma process (a process operated with perfection, no drift) would give us only about 0.002 defects per million (roughly 2 defects per billion opportunities).
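For reference, the centered figure can be checked directly from the two-sided normal tail beyond +/- 6 standard deviations (a quick sketch, no shift applied; the exact value is about 0.002 per million, i.e. roughly 2 per billion):

```python
from statistics import NormalDist

# Two-sided tail area beyond +/- 6 standard deviations of a centered process
tail = 2 * (1 - NormalDist().cdf(6))
print(tail * 1_000_000)  # defects per million, about 0.002
```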
To reach the 6 sigma level we need to act on variations 1), 2) and 3) above and see how much we could possibly reduce them, improving our capabilities and performance. For some companies it is an overwhelming task, very difficult due to several factors that include organization structure, nature of the business, attitude towards work, and contribution capability. These companies are satisfied with what they have. Will this satisfaction make them competitive? Will this satisfaction keep customers knocking at their doors in the long run? Questions to answer.

March 5, 2007 at 9:08 pm #152767
Heebeegeebee BB, Participant (@HeebeegeebeeBB)
Good catch, Brit.
Vidyadhar, no soup for you… ONE YEAR!
On a serious note, I use the measure with a boilerplate caveat about "the great shift debate".

March 5, 2007 at 9:16 pm #152765
Vidyadhar:
Your explanation of six sigma is wrong. The measure of sigma level is the number of standard deviations between the mean and one specification limit. Your explanation indicates the distance between two spec limits. The 99.73% comes from the natural spread of the normal distribution and should not be confused with the sigma level performance mentioned in the six sigma methodology. I personally don’t use the measurement due to all the hype surrounding the sigma shift. But if someone is explaining it, they should at least be in the ballpark.
This link from the iSixSigma.com website may help you:
https://www.isixsigma.com/library/content/c010101a.asp
A 3 sigma process does not have a 2700 DPMO.

March 5, 2007 at 10:10 pm #152768
Hello BritW,
I am interested in where Vidyadhar went wrong. I understood his explanation to be that if your spec limits (both) are 6 STDs from the mean, and your mean has that "natural 1.5 sigma drift", you would still have a 1.5 sigma cushion and theoretically achieve zero defects. I will read the link you attached, but I am interested in any insight you can provide.

March 6, 2007 at 4:11 am #152774
Helper, you are no help at all. Will people ever wake up to the fact that the 1.5 is nonsense, crap, BS? Read its history here:
http://qualitydigest.com/IQedit/QDarticle_text.lasso?articleid=11905
"Sigma levels" are meaningless. An in-control process does not have 99.7% of points within control limits, because we can never precisely know the distribution of our data, and control limits are not probability limits.
Zero defects means as little as 3.4 dpmo.
Does anybody here understand the basics?

March 6, 2007 at 9:49 am #152778
accrington, Participant (@accrington)
Len,
I think there are people visiting this forum who do have some understanding of the basics, but are weary of continually flogging the same dead horse.
I'd just let them get on with their quasi-religious babblings about normal distributions (has anyone ever seen one?), areas under the extreme tails of said distribution, and 1.5 sigma shifts.

March 6, 2007 at 10:32 am #152781
Vidyadhar, Member (@Vidyadhar)
Hi people,
What Brit pointed out was about deviations from the mean; I mentioned the word centre instead of mean, which is incorrect. Otherwise I don't see any difference between Brit's explanation and mine…

March 6, 2007 at 2:43 pm #152786
First, please everyone – let's not get into the thread about the sigma shift and whether "normal" exists or not. I am not advocating whether the sigma level measure is accurate or not. My point was that if you are going to explain sigma level to a novice, then it should be explained as intended – whether the theory is correct or not.
Vidyadhar confused the explanation of the % area under the normal curve between control limits (standard deviation spread) with the count of standard deviations between a spec limit and the mean (sigma level). It is common in this forum to confuse these and, in turn, confuse others. I hope some other posters can help me with the explanation…
Vidyadhar:
You said:
"For a process at 3 sigma, the area covered by the normal curve is approx 99.73%, which means you will still have 2,700 defects in a million opportunities; against a process at six sigma, where the normal curve would cover 99.99966% of the area, which means you will have 3.4 defects in a million opportunities."
A process measured at a 3 sigma level in six sigma terms means you can fit 3 standard deviations between the mean and one spec limit. Your statement refers to the normal distribution spread of 3 standard deviations from one side of the curve to the other BETWEEN CONTROL LIMITS. These are two different concepts and should not be used in the same explanation – it confuses the novice. Again, please visit the New to Six Sigma link for the complete statistical definition of what six sigma means.
Others – please chime in – and again, please let's stay away from the shift and normality thread…

March 6, 2007 at 3:55 pm #152794
Helper – Vidyadhar said the following:
Sigma – stands for standard deviation which is a measure of variation for a process (this is right).
"When you say that a process operates at Six Sigma, it means that the process variation is so small that your specification limits are six standard deviations away from the centre (for a normal distribution). For a process at 3 sigma, the area covered by the normal curve is approx 99.73%, which means you will still have 2,700 defects in a million opportunities (this is wrong); against a process at six sigma, where the normal curve would cover 99.99966% of the area, which means you will have 3.4 defects in a million opportunities."
Sigma level is a measure of performance between your target and one spec limit. It is how many SDs you can fit between the target and one spec limit. It has nothing to do with the control limits. The 99.73% refers to the area under the curve between two control limits centered on the mean. As we all know, the center may not be where our target is.
Operating at a 3 sigma level does not translate into 2,700 DPMO; it is more in the range of 66,800 DPMO (see any standard DPMO-to-sigma table). His explanation mixed the area under the normal curve between control limits with what sigma level means between target and spec limits. He mixed, in the same conversation, the 99.73% expectation under normality between +/- 3 standard deviations under the normal curve with the definition of sigma level (number of Z's between target and spec).
We expect, under normal circumstances, that our process will operate within +/- 3 SDs of the mean 99.73% of the time. This does not mean we are meeting our customer's specifications 99.73% of the time, which is what the sigma level is trying to measure.
Again – you will hear a lot of discussion about the 1.5 drift. It is relative nonsense, as has been discussed many times in this forum. All I can say is that if the sigma level theory is being discussed, spec limits and control limits should not be confused.

March 8, 2007 at 5:20 pm #152930
Thank you BritW,
Please answer my question with respect to the 1.5 sigma drift: six sigma performance accounts for this drift by achieving variation such that there are 6 STDs between the center line and either the upper or lower spec limit. As I understand it, that is the basis for establishing 6 sigma performance.
Helper

March 8, 2007 at 6:39 pm #152936
Your statement is correct, except it's not from the center line. It is from a target and one spec limit. Your process mean (the center line of a normal curve) may not equate to the target. Mixing the terminology between expected performance relative to the normal curve (99.73% for a +/- 3 sigma spread) and the sigma-level measure associated with six sigma is wrong – and that is how I took the original post.
As for the drift thing – yes, it references the distance between target and spec (not center line), where an arbitrary 1.5 long-term shift, based on experiential data at Motorola, was added to the expected 4.5 in the short term. People will argue on both sides that this is an accurate or inaccurate method for measuring the probability of a defect. You can find extensive threads on this site relative to that argument. I hope we don't start another one here on the "drift".

March 8, 2007 at 6:50 pm #152938
Len,
So control limits are not probability limits? Then why do they call the normal distribution a normal "probability" distribution? What is the 68-95-99.7 rule and what does it fundamentally mean?
I suspect that in a stable process/stable probability distribution, the chance of a data point falling outside of a control limit (based on +/- 3 sigma) is 1 − .9973, or .0027. Which means the odds are in your favor that such a point indicates special cause variation in your process performance. Can you agree with that? If you can, that means if we set our spec limits at the same place as our control limits, you can expect 2.7 defects per thousand. Yes? Now, if you can accept this principle/truth that physics and the science of statistics are built upon, you would have to accept that IF the variation in your process performance is so tight that your spec limits fall three sigma beyond your control limits, the mathematical conclusion is justified accordingly.
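The tail arithmetic invoked here is easy to verify for an exactly normal distribution (a sketch only; whether control limits may be read as probability limits at all is the point disputed in the replies below):

```python
from statistics import NormalDist

z = NormalDist()  # standard normal

# Area inside +/- 3 standard deviations of the mean
within = z.cdf(3) - z.cdf(-3)
print(round(within, 4))               # 0.9973
print(round((1 - within) * 1000, 1))  # 2.7 expected beyond the limits, per thousand
```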
If you disagree with any of what I mentioned above, I am open to any type of discussion you would like to continue. If you don't agree with the conclusions above then we can't, because you would be disagreeing with the basis of physics and the fundamentals of probability: the normal probability distribution, sampling distributions, and the central limit theorem. In that case, additional conversation would be a fool's errand. You agree?
Sigma levels are meaningless?
Hope this helps…….!
Helper

March 8, 2007 at 7:03 pm #152940
Hello BritW,
Thanks for your response. With respect to the distinction between the center line and the target, please refer to my latest posting in response to Len. (Please keep in mind that the response is an answer to Len's contention that "Helper isn't much help" – nothing personal.)
Helper

March 8, 2007 at 7:12 pm #152941
You wrote:
"So control limits are not probability limits? Then why do they call the normal distribution a normal 'probability' distribution? What is the 68-95-99.7 rule and what does it fundamentally mean?"
They do not measure the probability of being able to meet customer specifications – that is a capability measure. You can certainly have a control chart (or normal-shaped histogram) without customer specifications. You cannot have capability measurements without specifications. The spread percentages represented by the standard empirical rule are the probability that data will fall within +/- 1, 2 or 3 standard deviations of the mean, not the target (which is what sigma level measures).
Once again – I am not arguing the rationale of the normal probability distribution or its standard spreads in relation to the standard deviation. What I said was that the explanation of sigma levels in six sigma is different from the explanation of the area under the normal probability curve between the mean (center line) and control limits. Control limits are not probability limits in terms of meeting the customer's specifications. They are the expected probability of where data will fall, regardless of what the customer wants. Six sigma deals with the customer's expectations (specs) and a target (or goal); the normal probability issue deals with in-control or out-of-control based on statistical expectation, not customer expectation.
The two are different.
Secondly, you stated…:
“If you can, that means if we set our spec limits at the same place at our control limits you can expect 2.7 defects per thousand. Yes?”
Well – yes. But you don't set the spec limits; the customer does. On top of that, your control limits change as you improve or get worse and as the mean changes. You would have to reset customer specs every time your data changed! Impossible. Once again – control limits and spec limits are different. It isn't a physics and science issue.
Finally – I did not say that sigma levels are meaningless. I said that some will argue that they are (see countless threads on this site). I choose not to use sigma level in communicating measurement because my CEO and CFO don’t want to hear an hour dissertation on the evolution of sigma level, and I find the layperson has a hard time understanding the concept (as do some “experts”). They want to see results – that can be presented in other formats when dealing with the probability of meeting customer expectations.
Robert, Darth, HACL, Stan, somebody… Help!!!

March 8, 2007 at 7:14 pm #152942
I posted before I read "to Len".
I still stand by the post however, and I certainly didn’t take anything personal.
Hope others will chime in to help clear up the issue.

March 8, 2007 at 10:58 pm #152959
Ken Feldman, Participant (@Darth)
MBB, I was OK with your rantings until you made this point:
What is the 68-95-99.7 rule and what does it fundamentally mean?
It is just more nonsense from six sigma nut cases.
Not sure that Gauss would have considered himself a SS nut case. The Empirical Rule is what it is and is valid for its assumptions. Now, if you try to put it into the context of a control chart, then maybe it is a different story. Personally, I prefer Chubby Tchevy's inequality. Shewhart would have avoided a lot of controversy had he stuck with that for his control chart values. Unfortunately it was a little too loose to provide much help in distinguishing his assignable from his routine variation.
Maybe you should start handing out Wheeler’s book on Normality and the Process Behavior Chart. He makes some good arguments for not worrying about normality.
March 8, 2007 at 11:00 pm #152958
So control limits are not probability limits?
Correct.
Why do they call the normal distribution a normal “probability” distribution.
That’s what it is … a theoretical probability distribution
What is the 68-95-99.7 rule and what does it fundamentally mean?
It is just more nonsense from six sigma nut cases.
I suspect that in a stable process/stable probability distribution, the chance of a data point falling outside of a control limit (based on +/- 3 sigma) is 1 − .9973, or .0027.
Wrong. It is impossible to assign such probabilities. You never know the actual data distribution.
Which means the odds are in your favor that you have special cause variation in your process performance. Can you agree with that?
No. Control limits are economic limits. They provide a signal as to when you should investigate. They do not necessarily mean there is a “special cause”.
If you can, that means if we set our spec limits at the same place at our control limits you can expect 2.7 defects per thousand. Yes?
Wrong again.
If you don’t agree with the conclusions above then we can’t because you would be disagreeing with the basis of physics and the fundementals of probability the normal probability distributions, sampling distributions and the central limit theroem
Read Shewhart!!! Control charts have nothing to do with normal distributions or the CLT. This is a very common misunderstanding propagated by the many poor teachers in quality.
As a start, look up Burr distributions on the net. You will find that it is impossible to collect enough data (even thousands of points) to determine whether any data set is Burr or Normal in its distribution. Fortunately, it doesn't matter for control charts. As Shewhart proposed, the data distribution doesn't matter.
I accept that Shewhart’s “economic limits” are a difficult concept to grasp. Reading Deming will also help you understand the basics of Shewhart’s control charts.
March 8, 2007 at 11:39 pm #152961
MBB,
How misguided or misunderstood can you be? True, the normal probability distribution is in fact a theoretical distribution, just as one has a theoretical chance of flipping a coin and getting tails half the time. If you remember from high school physics, the 68-95-99.7 rule comes not from a six sigma nut case – that is, unless you refer to Carl Friedrich Gauss as a nut case. The bedrock of physics and statistics is built upon the law of averages and the rule mentioned above. Part of this bedrock dictates that the probability of a data point falling outside of 3 STDs under normal circumstances is 1 − .9973. No need to argue; it has been taken as scientific fact for well over 300 years. Which leads into the rationale for using control charts in the first place: because of the central limit theorem, we can estimate what is going on in the population of our data based on statistically valid sample sizes – hence the use of the rational subgroup.
As far as control limits being economic limits, they are so if the variation in the performance of your operation is performing efficiently. That is why managers try to reduce the variation in their performance… yes? But from a more fundamental perspective, as you know, control limits sit 3 STDs from the average and are calculated, in part, by finding the squared deviations from the mean. Of course you can agree that if you have wide variation that goes beyond spec limits defined by the break-even points in the cost of your operations, then your control limits are hardly economical… yes? The rules used for detecting special causes are the most economical part, not the limits themselves. The limits are nothing but a mathematical calculation of variation.
Yes, I have read Shewhart, but you can find a richer understanding and more depth by just picking up a book on basic statistics; it will confirm a lot of what Shewhart was talking about and give you a better context for understanding the validity of six sigma.
God! I hope this helps….
Helper

March 9, 2007 at 6:23 am #152967
Helper,
I can understand the difficulty you are having understanding the basics. If you had really read Shewhart as you claim, we wouldn't be having this discussion.
If you are stupid enough to believe "the law of averages"… try going to a casino… when you come home penniless you might start to see some sense.
Connecting the CLT with rational subgrouping is just too silly for words … you obviously don’t understand the meaning of either term.
If you read the following paper it will help you get started on the way to understanding what control charts really mean and how Shewhart intended them:
http://www.spcpress.com/pdf/Wheeler_Neave.pdf

March 9, 2007 at 8:49 am #152972
baakaran, Participant (@baakaran)
Hi friends,
It is called six sigma because the distance from your process average to your specification is six times your process standard deviation.
Baski

March 9, 2007 at 7:54 pm #153009
Sweta / Mintu, Participant (@Praveen)
It is 6, not 7, because good Mr Mikel Harry is a psychologist and he knows that 7 is a very unlucky number. 6 is very lucky for him and makes him lots of money for no good reason.
March 9, 2007 at 9:42 pm #153018
MBB,
I can see you are a very intelligent, well-schooled and experienced Master Black Belt, and you have responded to my posting to provide me and everyone else some enlightenment. I thank you for that.
Please review the following response and put the logic, math and rationale to the test. If you have the courage to respond with salient and definitive comments either proving or disproving the concepts and applications below, I challenge you to do so in an intelligent manner.
———————————————————————–
The Gaussian/normal distribution: Gauss laid the groundwork and basis for the theory of probability and statistics with his groundbreaking primary research on the normal distribution. He found that the law of averages and the additional laws associated with the bell curve (68-95-99.73) are among the most basic of all natural laws. True? If the population or process from which a sample is taken is normally distributed, then the sampling distribution of the mean will also be normally distributed, regardless of its sample size. Hence the sample mean is naturally an unbiased estimator of the population mean. The law of the normal distribution dictates it. I could go on and share with you how the normal distribution and its related laws are often used to approximate other distributions, like the binomial and Poisson distributions, but I think it would blow your mind.
CENTRAL LIMIT THEOREM: Let's talk about your contention that connecting rational subgrouping with the CLT is too silly to talk about. Of course you can agree with the well-accepted fact that the CLT dictates that as sample size is increased, the sampling distribution of the mean approaches a normal distribution in form, regardless of the form of the population distribution from which the sample was taken… yes? Now let's extend this little theorem to taking samples at some specified rate or time interval and plotting them on a discrete-data or continuous-data control chart. Do I need to say more, or is this silly expression not so silly any more?
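The CLT claim itself is standard and easy to demonstrate by simulation; here is a sketch drawing subgroup means from a deliberately skewed (exponential) population with mean 1:

```python
import random
import statistics

random.seed(1)

def subgroup_means(n, trials=10_000):
    """Means of `trials` random subgroups of size n drawn from an
    exponential population (mean 1, heavily right-skewed)."""
    return [statistics.fmean(random.expovariate(1.0) for _ in range(n))
            for _ in range(trials)]

# As subgroup size grows, the subgroup means cluster around the population
# mean (1.0) with spread shrinking like 1/sqrt(n), as the CLT predicts.
for n in (2, 10, 30):
    means = subgroup_means(n)
    print(n, round(statistics.fmean(means), 2), round(statistics.stdev(means), 3))
```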
To close, I just want to respond to the contention that, because we are having this discussion, I have not read nor understood Shewhart. This is going to really blow your mind, so I hope you are sitting down.
SHEWHART (1931): A phenomenon will be said to be controlled when, through the use of past experience, we can PREDICT, at least WITHIN LIMITS, how the phenomenon may be expected to vary in the future. Here it is understood that prediction within limits means that we can state, at least approximately, the PROBABILITY, that the observed phenomenon will fall within the given limits.
Helper contends that the LIMITS Shewhart was referring to were the control limits based upon the standard deviation calculation, and the PROBABILITY he was referring to is that based on the 68-95-99.73 laws embedded in the normal distribution. Prove Helper's contention wrong with at least 2 reputable sources of direct and explicit contrarian views… if you have the MBB courage.
By the way, isn't the calculation of the standard deviation based on the law of averages? If it is, the question becomes whether or not you should get your training and certification money back from whoever trained and certified you as an MBB.
Hope this helps……….
Helper

March 12, 2007 at 12:47 pm #153128
And let's not forget the seven seals – very lucky for a few, very unlucky for most.
March 12, 2007 at 1:04 pm #153125
accrington, Participant (@accrington)
You're wrong about 7 being unlucky. According to that father of Seven Sigma, the great Muddy Waters:
‘On the seventh hour, of the seventh day,
of the seventh month, the seven doctors say:
He was born for good luck, and that you see
I got seven hundred dollars, don’t you mess with me.’
(Of course, Muddy deserved every penny of his seven hundred dollars)
March 12, 2007 at 3:28 pm #153147
accrington, Participant (@accrington)
Yeah, but Seven Seals is old stuff. The father of Five Sigma, Son House, sings about that in John the Revelator.
March 12, 2007 at 4:26 pm #153156
Allthingsidiot, Participant (@Allthingsidiot)
Good Work,a real
March 12, 2007 at 4:40 pm #153152
Stan and Accrington,
Good one!

March 12, 2007 at 9:29 pm #153185
Lebowski, Participant (@Lebowski)
A real WHAT? What complete nonsense.
Lebowski

March 12, 2007 at 9:34 pm #153186
Lebowski, Participant (@Lebowski)
Now you have done it. We are going to get to see that complete piece of trash post – some chronology of Harry's esoteric excuse for a contribution – posted one more time, as if it actually means anything to anyone other than the mental midget that repeatedly posts it.
Time to ride the bike before wonder boy shows back up.
Lebowski

March 13, 2007 at 2:16 am #153193
50% of the population has below average intelligence… and most of them are obviously here.
You guys don't really believe this 3.4 dpmo "target" crap, do you?

March 13, 2007 at 3:49 am #153199
ET meets logic, Participant (@ETmeetslogic)
If 50% of the population is below average, and most of them are here, and ET is here… let me introduce you to "Barbara". Basic logic sometimes helps even those with an assumed above-average IQ. Congratulations on shooting yourself in the foot :).
March 13, 2007 at 9:10 pm #153253
Lebowski, Participant (@Lebowski)
ET,
"50% is below average intelligence" – so intelligence is normally distributed? Let's see that data.
"Most are obviously here." That is not obvious to me. How did you decide that? More data. Just because you aren't buying that Six Sigma crap doesn't mean you get to throw around inflammatory statements without something to substantiate them.
You seem to have a very superficial knowledge of Six Sigma. Exactly what in the DMAIC process relies on the 3.4 dpmo target, since that seems to be your only issue?
BTW, that makes about 50% of the sentences in your post crap, so there must be something to ET meets logic's post. You are sexist on top of it all. Women are allowed to and do post here as well as "guys". I guess that may put you about 6-7 sigma out on the negative end of your intelligence curve – without the shift, regardless of what size it is.
Lebowski
The forum ‘General’ is closed to new topics and replies.