iSixSigma

4.5 sigma?

Viewing 38 posts - 1 through 38 (of 38 total)
  • Author
    Posts
  • #32807

    melvin
    Participant

    Will somebody please explain to me how 3.4 DPMO does not, in fact, equal 4.5 sigma?

    0
    #88010

    walden
    Participant

    The DPMO is based on the premise of the “1.5 sigma shift”.
    You must subtract the 1.5 sigma shift from the +/- 6 sigma spread. This results in the process being 4.5 standard deviations from the nearest spec limit. As you state, if you look up 4.5 sigma in the standard normal curve table you will get 3.4 DPMO.
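The arithmetic is easy to verify. A minimal sketch (not from the original post) using only the Python standard library; the 4.5 is simply the 6-sigma design margin minus the assumed 1.5-sigma shift:

```python
from statistics import NormalDist

def dpmo_one_sided(z: float) -> float:
    """Defects per million opportunities when the process mean sits
    z standard deviations from the nearest spec limit (near tail only)."""
    return NormalDist().cdf(-z) * 1_000_000

# 6 sigma design margin minus the assumed 1.5 sigma shift
print(round(dpmo_one_sided(6.0 - 1.5), 1))  # 3.4
```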
    Hope this helps.
     

    0
    #88011

    melvin
    Participant

    I understand.  It seems to me that the whole premise on which 6 sigma is sold is a lie.  It should be called 4.5 sigma. 

    0
    #88012

    Taylor
    Member

    Why don’t you at least *try* to understand it before you put your ridiculous labels on six sigma? Processes shift over time. That is a fact with most processes. Not sure if yours does? Measure it and prove me wrong: show me the data that proves it doesn’t shift.
    I’m not exactly sure where the 1.5 comes from. There are conflicting stories. Some say it was measured over multiple processes at Motorola. Others say it was proven theoretically. I’m not sure, but regardless, it’s the standard way of reporting. Why hasn’t the US switched to metric units like the rest of the world? Because we’ve standardized on English units. There’s no better reason than that.
    Let me know what you learn after you’ve done some research. I’m sure everyone will benefit from your inquiry into this matter.
    Taylor

    0
    #88014

    Heebeegeebee BB
    Participant

    Hey Bob,
    Before opening your yap and betraying your predisposed prejudices, check this link out for clarification:
    https://www.isixsigma.com/library/content/c010701a.asp
    My $0.02.
    -Heebee

    0
    #88015

    melvin
    Participant

    That’s all well and good.  My point is simply this: 3.4 DPMO equals 4.5 sigma, not 6.  Sigma is a fixed area/percentage.  It can’t equal 3.4 out of a million one day and something else the next.
    My purpose for posting on this forum is to research this ‘misconception’.  Everyone I ask to tell me what percentage is equal to 6 sigma says 99.99966, and this number is very prevalent in the limited literature I’ve seen (I am in a green belt class as I write this).  This is not true.  99.99966 is about 4.5 sigma.
    I would appreciate any insight. 

    0
    #88017

    melvin
    Participant

    Thanks for that link.  It explains the origin of the ‘shift’ but not why the resources provided in my greenbelt class have led everyone to believe that 3.4 DPMO is 6 sigma.  It seems that since the whole system is called ‘6 Sigma’ that it would be explained.
    I was just curious.  You guys seem to take this way too seriously.

    0
    #88024

    Loehr
    Member

    Hi Bob,
    As you have discovered, one of the more confusing elements of six sigma involves the association of a 6 sigma level with 3.4 ppm.  The 6 sigma quality level refers to the short-term capability of a process, sort of an “instantaneous” capability, whereas the 3.4 ppm refers to the long-term performance of the process. 
    If there were no shocks to a 6 sigma process and the process average remained stationary, then there would be only 2 nonconforming parts out of every billion produced (assuming the process average is centered at the middle of the tolerance).
    However, Motorola discovered that over time, the average of a process would move around somewhat due to temperature changes, variation in different batches of material, machine vibration, influences of different operators, and a whole host of other small changes in the process.  Because these changes move the average away from the center of the tolerance and closer to either the LSL or the USL, the process would be producing more than 2 ppb.
    In somewhat of a worst-case scenario, Motorola assumes the process average could move by up to 1.5 sigma (either higher or lower).  Thus, a process displaying six-sigma quality in the short term, could be producing at only a 4.5 (6.0 – 1.5) sigma level in the long term.
    Not every process will experience shifts in its average as large as 1.5 sigma, but to be safe, and to assure that their quality goal of 3.4 ppm is met, Motorola assumed the worst case and requires every process to have a short-term capability of 6 sigma.  If a process reaches this goal, then it should produce no more than the goal of 3.4 ppm over the long term.
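    Both figures in the post above (roughly 2 ppb for a centered 6-sigma process, 3.4 ppm after a worst-case 1.5-sigma drift) can be checked with a short standard-library sketch; `z` and `shift` below are in units of the short-term standard deviation:

```python
from statistics import NormalDist

_nd = NormalDist()

def nonconforming_fraction(z: float, shift: float = 0.0) -> float:
    """Fraction outside symmetric spec limits at +/- z sigma when the
    process mean has drifted `shift` sigma toward one of the limits."""
    return _nd.cdf(-(z - shift)) + _nd.cdf(-(z + shift))

print(round(nonconforming_fraction(6.0) * 1e9, 1))       # ~2 per billion, centered
print(round(nonconforming_fraction(6.0, 1.5) * 1e6, 1))  # ~3.4 per million, shifted
```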
    There is an article offering statistical justification for a 1.5 sigma shift in the process average.  See “A Statistical Reason for the 1.5 sigma Shift,” by Davis Bothe in Quality Engineering, Vol. 14, No.3 , pp. 479-487.  If you’re really interested in understanding the 1.5 sigma shift, read this article.
    I hope this helps clear up some of the confusion about a confusing topic.

    0
    #88056

    BB
    Participant

    Have you ever taken out a loan, Bob? The loan officer tells you that your rate is 5% and you’re very happy. Then you get a truth-in-lending statement showing a higher Annual Percentage Rate (APR).
    Did your loan officer lie to you? No, the APR is just another metric describing the same loan rate. Which is “right”? Neither. You just need to be aware of which rate you are using, and be aware of what it means.
    The nice people on this forum have been trying to educate you on what is well documented on this site: the commonly used sigma values are based on a long term view showing drift in the mean. Is it easy to understand or explain? No. Try explaining the difference between your loan rate and an APR rate. Is it wrong? No. You just need to learn the technical terminology and the technical reasons for “Sigma Quality Levels.”
    Don’t get so frustrated with this “wrinkle” in sigma reporting that you miss the point: short term process control does not equal long term process control. Our sigma targets must be adjusted to take this into account.

    0
    #88060

    melvin
    Participant

    I now understand the source of this confusion and I thank everyone for their input.  Whoever compiled the materials for our class needs to clarify that you can claim a 6 sigma process when you achieve 4.5 sigma output, however.  The charts in our material are unambiguous in stating that 3.4 DPMO equals 6 sigma, and that is what everyone in our company thinks.  It may not be a deliberate lie, but it is still not true.

    0
    #88062

    Carlos Umbro
    Participant

    Thank you for your excellent post. I think it summarizes the point very nicely. You are very good at explaining such points with good examples from the real world.
    Carlos

    0
    #88064

    Gabriel
    Participant

    Ok, I just have one question:
    Why are we happy when the process reaches a given level in the “short term”? Why is our metric “short term performance” and not “long term performance”? Which one do you think has a greater impact on customer satisfaction and business results?
    “I measured 3.4 DPMO over one year. That means that the process average (if normally distributed) is 4.5 sigmas away from the closest specification limit (and much farther from the other limit). Now, if I assume that in this long term the process has drifted 1.5 sigmas to the wrong side, at the beginning it must have been at 6 sigmas in the short term. So I call this a 6 sigma process.”
    I just don’t like it. I agree that processes do shift, so one has to aim at 6 sigma in the short term to get 3.4 DPMO in the long term. But then, maybe it shifts less than that so it is better, or maybe it shifts more so it is worse. Why would I call it a 6 sigma process anyway? Ok, if I know the actual drift I can use it instead of the 1.5, right? Then the one with the larger shift is better!!!! Is that sound? (The reasoning: it had to be much better in the short term to get to 3.4 DPMO in the long term with a shift of more than 1.5 sigmas.)
    Again, I just don’t like it. I feel it would be much better, and sounder, to report long-term performance.
    Standards are standards. But standards can be changed if it is sound to do it.
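    Gabriel’s objection (a larger assumed shift yields a more flattering short-term claim for the same measured data) is easy to illustrate numerically. A sketch, assuming the convention that the claimed short-term sigma is the long-term one-sided z plus the assumed drift:

```python
from statistics import NormalDist

def claimed_short_term_sigma(long_term_dpmo: float, assumed_shift: float) -> float:
    """Short-term sigma level implied by an observed long-term DPMO,
    if one assumes the process drifted `assumed_shift` sigma."""
    z_long = NormalDist().inv_cdf(1 - long_term_dpmo / 1_000_000)
    return z_long + assumed_shift

# same measured long-term performance, three different assumed drifts
for shift in (0.0, 1.5, 2.5):
    print(shift, round(claimed_short_term_sigma(3.4, shift), 1))  # 4.5, 6.0, 7.0
```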

    0
    #88066

    Hans
    Participant

    The same reason a bank charges higher rates in the long term… We just don’t know what variation will happen long term.

    0
    #88069

    BB
    Participant

    Gabriel,
    Thanks for your post. I now better understand the confusion Bob is having.
    4.5 Sigma means 6.8 DPMO. If I experience a 1.5 sigma drift over the long term, my performance will get worse. The number of defects will increase to 1,350 DPMO, which is approximately 3.2 Sigma.
    Motorola decided instead of calling this performance 4.5 Sigma based on the short-term observation of 6.8 DPMO, they would call this performance a 3.2 Sigma Quality Level to indicate long term expected results of 1,350 DPMO. Thus, they invented a new measurement system called “Sigma Quality Level” that has been confusing people ever since. “Sigma” and “Sigma Quality Level” are two different measures and cannot be used interchangeably. They do not mean the same thing. Because just about everyone uses Sigma Quality Level, we often get lazy and report this as “Sigma”, but it is not the same thing.
    What I think Bob and Gabriel are so upset about is that 4.5 Sigma and 6 Sigma Quality Level have approximately the same number of defects. This makes it look like we are wrongly improving our Sigma score by 1.5! Back to our previous example, by using a Sigma Quality Level, our “sigma” score went down, not up.
    When you try to compare Sigma to Sigma Quality Level it is like comparing inches and centimeters. You can’t say that centimeters are wrong because 6 centimeters is really only 2.5 inches. They are different measures.
    Where people can go wrong is if they measure the short term performance as 3.4 DPMO and claim “6 Sigma” status.   3.4 DPMO assumes a 1.5 shift over the long run. If you see 3.4 DPMO in the short term, Motorola says you can expect approximately 1,350 DPMO over the long run, which is a 4.5 Sigma Quality Level. You have to use the right measure for the right thing.
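    For reference, the conventional Sigma Quality Level conversion (long-term one-sided z plus the assumed 1.5 shift added back) can be sketched in a few lines. Note that the 3.2 figure earlier in this post is a two-sided plain-sigma equivalent rather than a Sigma Quality Level, which is part of why the two scales are so easy to mix up:

```python
from statistics import NormalDist

def sigma_quality_level(long_term_dpmo: float, shift: float = 1.5) -> float:
    """Conventional Sigma Quality Level: one-sided z for the observed
    long-term defect rate, plus the assumed shift added back."""
    z = NormalDist().inv_cdf(1 - long_term_dpmo / 1_000_000)
    return z + shift

print(round(sigma_quality_level(3.4), 1))   # 6.0 -> "six sigma"
print(round(sigma_quality_level(1350), 1))  # 4.5
```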
    The 6 Sigma “program” has declared that all processes can expect a 1.5 sigma drift over time, and that we all should use Sigma Quality Level as the measure. While many have and will continue to debate the wisdom of this, it is the measurement system of the 6 Sigma program. You can try to fight it, but it is what it is. As part of a 6 Sigma program, you’ve got to use it. Just use it responsibly: as a measure of long term performance.
    Hope that helps!

    0
    #88076

    Gabriel
    Participant

    BB, note that I didn’t say that it is wrong. In fact, you and I are saying just the same thing (well, you are clearer). Just that I don’t like it. Just as I don’t like to use inches. And the world (except the US) dropped the British units because they found the metric ones easier to use (even the British did).
    And I don’t like it for 2 things: 1) It is obviously misleading. Any doubt about that? Just search the forum with the keyword “1.5” or “shift”. 2) It is based on the assumption that the process shifts 1.5 sigmas, which is just an assumption.
    You know, I can say that Alan is my brother, or that Alan is the son of that brother of the son of my grandfather who is not Alan’s uncle. Both things are right. I just don’t like the second one.

    0
    #88077

    Mikel
    Member

    Blah, blah. blah, blah …
    Much ado about nothing. Who cares?
    Every business has opportunity for improvement. Go find the greatest need and improve it. The business will be healthier.
    Bob, who started this thing, hasn’t even done a project yet. Why don’t we wait until he earns the right to ask stupid questions? Most don’t ask them by the time they have earned the right.

    0
    #88079

    BB
    Participant

    Gabriel,
    I noticed after I sent my last post that I missed your point entirely… Sorry! I was concerned about Bob’s final comment: “It may not be a deliberate lie but it is still not true.” I hope he doesn’t discount all of 6 Sigma over this one issue. Our MBBs avoid this issue in training if they can help it, but inevitably, someone picks up on it and confuses the group.
    I agree with you that it is confusing. I think that “short term DPMO” and “long term DPMO” would be better metrics than “Sigma” and “Sigma Quality Levels”. But 6 Sigma sounds so much cooler than “3.4 DPMO in the Long Run.”
    I agree with Stan’s latest post that focusing on improvement is more important than saying “my process is at a 4.21 Sigma Level.” But Stan, don’t lose your patience with the 6 Sigma newbies posting questions to this site. That’s what it’s here for. If we can’t convince the newbie on this site that 6 Sigma is a good deal, what happens when we get a newbie for a boss who wants to get rid of 6 Sigma? Or a boss who reads this site and is convinced that “It may not be a deliberate lie but it is still not true”?

    0
    #88083

    ishai
    Participant

    In the mid 80’s, failure rate levels in the electronics manufacturing industry were around 3 sigma (statistical sigma). At that time Motorola launched a program for continuous improvement in quality, known as the “Six Sigma” program.
    The aim was to improve the quality of the products by controlling the manufacturing processes (SPC). It made sense that the next step would be to improve from 3 sigma (statistical sigma) to 6 sigma (statistical sigma).
    Mmm… it looks like the guys who started to develop the 6 sigma (statistical sigma) program at Motorola probably knew a lot about statistics, but I am afraid they knew very little about manufacturing process capabilities. When Motorola realized that it would be impossible to achieve the manufacturing goal of quality improvement from 3 sigma (statistical sigma) to 6 sigma (statistical sigma), it was already too late… The genie was out of the bottle. So… Motorola came up with a new definition of the statistical 6 sigma: the Motorola “Six Sigma”.
    Instead of a target of 2 failures per billion opportunities in a statistical 6 sigma, the Motorola “Six Sigma” talks of a target of 3.4 failures per million opportunities. Still a very difficult target for the manufacturing guys, but not impossible!
    The Motorola “Six Sigma” is in fact a statistical 4.5 sigma. The 1.5 sigma shift is a correction factor/constant that Motorola invented to cover for the BIG MISTAKE!
    I have been in this industry for 15 years and have been involved in hundreds of DOEs and analyzed thousands of data sheets including many SPC sheets, and have never seen a long-term 1.5 sigma drift/shift.
    If this drift/shift indeed existed, it should impact 2 sigma, 3 sigma… any N-sigma processes, making it a very common and well-known factor, but it isn’t. (This is my personal view.)

    0
    #88084

    Mayes
    Participant

    I remember all this discussion in GB and MBB training years ago. My MBB instructor got so frustrated with me continually hammering away on this issue (a sincere attempt on my part to understand the “sigma shift”) that he finally accused me of being an engineer. He was right. My personal conclusion after considering this issue for years is this:
    Six Sigma simply sounds sweeter than “Four and a Half Sigma.”

    0
    #88104

    Hein Jan
    Participant

    Gabriel, and others,
    I do appreciate your comments and explanations on this and other subjects. I feel you also “improve” the discussion and the knowledge flowing on the forum.
    Ok, in relation to your uncomfortable feelings about short term capability:
    I share your concern, but… look at it this way:
    Suppose I have a process I wish to improve. Afterwards I wish to prove that I have indeed improved it. For this I (and the managers/bosses) don’t want to wait a year (long term). Managers tend to be an impatient lot.
    So the only reason to use short term capability is to make a quick estimate. And yes, it makes me a little uncomfortable too, but I do realise that it is a necessary evil.
     
    Kind regards,
     
    Hein-Jan

    0
    #88106

    Gabriel
    Participant

    Wow, thanks! You have a better image of me than I have of myself!
    I agree with almost all you said. You need a quick estimate of the result of the improvement effort. Just don’t call that a 6 sigma process just because it behaved at 6 sigma in a “quick estimate”, especially if you expect that the result will not be sustained! (for example, expecting a shift of 1.5 sigma)
    Of course, once we all agree on the definition (6 sigma level = 4.5 sigma long term), we can use it. But I think that that definition was unnecessary and only complicates the system.

    0
    #88109

    DaveG
    Participant

    Ishai,
    Excellent insight!  Here is another insight:  because all real-world distributions are discrete, as a process approaches zero defects (think of a mathematical limit) there must be a threshold value beyond which the defect rate “jumps” to 0.000000000000…  This process would be free of all the special causes that nature could possibly create.
    In other words, when a company approaches perfection, eventually they can predict and thus prevent ALL defects, so why would they ever make defective product?
    I hope to see this in my lifetime.  I think the most obvious, and mundane, example of an industry that approximates this performance is grocery stores (at least in the U.S.).  When did you ever see a product that was damaged or similarly compromised in some way?

    0
    #88117

    Hein Jan
    Participant

    Gabriel,
     
    True enough, the definition (6 sigma, but in fact 4.5, one-sided) complicates things for the statistically schooled, or the ones trying to become so, but…
    Suppose I tell management I have a Six Sigma process and that such a statement means it has a 3.4 ppm defect rate. Which is most of the time what is happening in actual life.
    I have now protected myself against the decision-makers. I say it is Six Sigma, and can prove this through a short term capability study, and still live with the estimated 1.5 sigma drift (long term) before too many defects start occurring. Management isn’t interested in Cpk’s etc. They’re just interested in low defect rates.
    So the definition helps in simplifying things for management.
    I find that Six Sigma has a good focus on getting results that convince management. It is we Belts who need to make the translations from data, tests etc. into proposals for management decisions.
    To me the “6 is 4.5 sigma” rule is a (somewhat difficult) part of this translation.
     
    Kind regards,
     
    Hein-Jan

    0
    #88121

    CM
    Participant

    Wow.  These guys really get worked up about their sigmas!  Responses seem to cover the range from concerned to angry to ridiculously paranoid.
    In many ways, the whole concept of shift is something of a philosophy… processes over time don’t tend to function like they do today. Mikel Harry gave the estimate of 1.5 sigma for this shift, and Minitab will actually calculate the long term shift for you, though I don’t know what algorithm it uses.
    The bottom line is that practitioners can buy into it or not. Personally, I don’t care, and many others have made the same comment to me. On a project basis, the salient issue is whether you have made improvement. By the way, you can’t really do this with a sigma… you can do it with such tools as ANOVA, Chi-Square, etc. using the original data. Sometimes sigma can be used to give a quick summary (if you have several factors that have improved), but you still need to confirm statistical improvement with the original data.
    I doubt that the shift is the result of a worldwide conspiracy of misinformation by Motorola to cover up someone’s aggressive goals, but I could be convinced that it was a marketing ploy by Mikel Harry to promote his product.
    Regardless, whether or not you buy into the shift (I buy into the concept, but not its usefulness), the real benefit is in demonstrating real improvement in your process or project.
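    The point about confirming improvement from the original data can be illustrated with a pooled two-proportion test, a stand-in here for the chi-square test mentioned above (for a 2x2 table the chi-square statistic is simply z squared). The before/after counts below are made up for illustration:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(x1: int, n1: int, x2: int, n2: int):
    """Pooled two-proportion z test of H0: p1 == p2.
    Returns (z, one-sided p-value for the alternative p1 > p2)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, NormalDist().cdf(-z)

# hypothetical: 120 defects before vs 70 after, 10,000 units each
z, p = two_proportion_z(120, 10_000, 70, 10_000)
print(round(z, 2), round(p, 4))  # a clearly significant improvement
```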
     
     

    0
    #144626

    tbatson
    Member

    I am currently in a Masters-level quality control class in college.  I have a question posted to the class: “Is Six Sigma enough, or should it actually be 4 Sigma?” It appears by your statement that you think it should be 4 to 4.5 Sigma. Why?
    Any explanation would be helpful.
    Thanks,
     

    0
    #144639

    Mike Carnell
    Participant

    tbatson,
    I refuse to get into the number selection game because it is a waste of time. What you will see about 99.9996% of the time is that capability based on short term data is overstated. You don’t like 1.5? Pick a number.
    In the context of your question, if you start with 6 short term you will get less. If you start with 4.5 you will still get something less. Take that “less” number and roll it about 10 times (simulating a process) and see if you get a number that represents the risk of shipping a defect or the cost of rework and scrap you are willing to take.
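    The “roll it about 10 times” suggestion corresponds to rolled throughput yield. A quick sketch, under the simplifying assumption of independent steps with identical per-step yield:

```python
def rolled_throughput_yield(step_yield: float, steps: int = 10) -> float:
    """Probability a unit passes every step defect-free, assuming
    independent steps with the same per-step yield."""
    return step_yield ** steps

# per-step defect rates of 3.4 DPMO vs 1,350 DPMO, rolled over 10 steps
print(round(rolled_throughput_yield(1 - 3.4e-6), 6))   # ~0.999966
print(round(rolled_throughput_yield(1 - 1350e-6), 4))  # ~0.9866
```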
    It’s just business, not a religion or law. You want 4.5?  Nobody goes to statistical jail for 4.5. The interesting part will be watching you get to 4.5. You’re working on the wrong end. Instead of wasting time on some esoteric idea, take Stan’s advice and figure out how to make stuff better. Maybe as you get better, SS won’t look so impossible.
    Just my opinion.
    Good luck

    0
    #144644

    tbatson
    Member

    Mike,
    Thank you very much for that reply.  The whole basis of this course is quality statistics. However, the real lesson is how much time is wasted tracking, measuring and verifying.  In actuality, what we need to do is what you and Stan suggest: find a way to build better stuff.
    Thanks,
     

    0
    #144654

    Mike Carnell
    Participant

    tbatson,
    You are welcome. Remember that is only my opinion.
    To build better stuff there is a degree of tracking, measuring and verifying that is necessary to enable that. The problem becomes what do you track, measure and verify, and for how long (it is a balancing act between too much and not enough, and the mean shifts 1.5 std dev. - just kidding). Try and remove a control chart sometime – it is like a quality pacifier. If you understand the DMAIC flow that works through a dependent variable to all the independent variables, it is there to sort out the variables that create leverage on the behavior of the Y. It puts some intelligence in the decision process on what and how to control something, and to what level.
    I haven’t been in the automotive world directly for about 5 years, but if you want to see an example of a customer driving waste up and their cost down, watch the behavior of the domestic automotive SQEs when they are in a supplier’s facility. They want lots of control charts on everything so they have proof they are “working” with their suppliers (whatever that means). In one Motorola facility I had 3 full-time people making control charts, auditing control charts, training people on control charts, etc., and 99% of them were what Marty Rayl (our Director of Quality) referred to as wallpaper (“Nice f__king wallpaper, but it is still wallpaper”). All neatly hung in their control chart prophylactics so the SQEs thought we built good stuff. We did build good stuff, but it had nothing to do with most of the wallpaper.
    Just my opinion.
    Good luck with your project.

    0
    #144657

    Ward
    Participant

    There are volumes of literature showing that 3.4 DPMO is garbage, despite Mikel Harry’s attempts to support it. The very foundation of SS is rubbish.
    Is there anyone who still actually believes in the 3.4 DPMO ?

    0
    #144658

    GB
    Participant

    Show us your data, Pete… Please leave “gut checks” and vague innuendo at the door.

    0
    #144660

    Mikel
    Member

    Do you prefer to have your processes running better or worse than 3.4 DPMO?

    0
    #144662

    Mikel
    Member

    It is still fun to walk into a facility with pretty charts and pull one out and write on it – the reaction hasn’t changed in 20 years.

    0
    #144670

    Orang_Utan
    Participant

    When I decided to remove all SPC wallpaper and WIP travellers in a new production line for IBM magnetic head assembly, the operators were the happiest lot.
    An IBM auditor commented that she had tried hard to convince IBM suppliers to eliminate all paperwork for the last 20 years, but had failed to do so until she visited my former company’s plant in Asia.
    The result was production volume up 5 times, 30% less floor space, and better yield than the old process designed by the R&D team based in the USA. Instead of hiring 1000 people to support the ramp-up plan, we managed to run with fewer than 300 operators.

    0
    #146998

    holein1
    Participant

    I understand the 4.5/6 sigma concept, but if I generate a control chart from short-term data with +/- “3 sigma” control limits, can someone explain why I cannot add the 1.5 sigma shift and produce long-term control charts with control limits at +/- “4.5 sigma”?
    Thanks

    0
    #147000

    Marlon Brando
    Participant

    Yes, I still believe in it.

    0
    #147011

    D K
    Participant

    I can help Pete out here. For those like HBGBB^2 who have been asleep, here’s a brief history of the 1.5.  I’m happy to give more details if you wish.

    1. Bill Smith observes “sudden shifts” (some claim long-term shifts) due to special causes and broadens the tolerance to Cp = 2.
    2. Mikel Harry derives +/-1.5 as a “shift” in the process mean, based on tolerances in stacks of disks. He calls this his “Z shift”.
    3. Harry seems to realise his error and says the 1.5 “is not needed”.
    4. Around 2003, Harry makes a new derivation of 1.5 based on errors in the estimation of sigma from sample standard deviations. For a special case of 30 points, p = .95, he multiplies a chi-square factor by 3, subtracts 3, and gets “1.5”. The actual value ranges from 1 to about 20. He calls this a “correction”, not a shift.
    5. Reigle adds a new calculation he calls a “dynamic mean offset”: 3/sqrt(n), where 3 is “the value for control limits” and n is the subgroup size. For n = 4 he gets “1.5”. Reigle says “This means that the classic Xbar chart can only detect a 1.5 sigma shift (or larger) in the process mean when subgroup size is 4”. Reigle is quite incorrect. Such data is available from ARL (Average Run Length) plots.
    In short … a collection of nonsense that has fooled the gullible.
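    The Xbar-chart claim above is straightforward to test numerically. With subgroup size 4 and the usual +/-3 sigma limits, a sustained 1.5-sigma mean shift is flagged on roughly half of all subgroups (average run length about 2), and smaller shifts are still detected, just more slowly, so 1.5 is not a detection floor. A sketch:

```python
from statistics import NormalDist

_nd = NormalDist()

def signal_probability(shift_sigma: float, n: int, limit: float = 3.0) -> float:
    """Per-subgroup probability that an Xbar chart with +/- `limit` sigma
    control limits signals a sustained mean shift of `shift_sigma`."""
    d = shift_sigma * n ** 0.5  # shift in units of the subgroup-mean sigma
    return _nd.cdf(-(limit - d)) + _nd.cdf(-(limit + d))

for shift in (0.5, 1.0, 1.5):
    p = signal_probability(shift, 4)
    print(shift, round(p, 3), round(1 / p, 1))  # shift, P(signal), ARL
```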

    0
    #147013

    Correction
    Participant

    DK,
    You will love the citations from the Quality Progress article :-))))).

    0

The forum ‘General’ is closed to new topics and replies.