MONDAY, JUNE 25, 2018

Negative Sigma Levels


This topic contains 27 replies, has 5 voices, and was last updated by  Rip Stauffer 4 days ago.

    #705044 Reply

    I just found an old discussion where people were talking about the possibility of a “negative process sigma.”
    This, to me, illustrates a couple of problems:
    1. General misunderstanding of process capability
    2. The problem with trying to use a metric like process sigma in the first place.

    1. In that discussion, one author accused another of not understanding the difference between process capability and process stability. One thing that seems to have been lost in much of the six sigma literature — a concept that was well-understood as a given in the statistical process control literature — is that there is no process capability without process stability! If you cannot predict the variation in your process, you can’t estimate how much of the output will be found outside the customer’s specifications. A histogram means very little in the absence of a state of statistical control. A good capability study measures the voice of the process (given by the centerlines and control limits in a chart for a stable process) against the voice of the customer (given by the customer’s specifications). If you think about it, the control chart gives you the raw capability of the process (what you get); comparing that raw capability to the spec limits (what you want) tells you how well the real world is doing in meeting your (customers’) desires.
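    That comparison reduces to a small calculation once stability is established. A minimal sketch (the process values and spec limits below are hypothetical, purely for illustration):

```python
def cpk(mean, sigma, lsl, usl):
    """Cpk: distance from the process center to the nearer spec limit,
    in units of 3 sigma. Only meaningful for a stable process, where
    'mean' and 'sigma' come from the control chart's centerline and
    limits (the voice of the process)."""
    return min(usl - mean, mean - lsl) / (3 * sigma)

# A stable process centered at 10.0 with sigma 0.5; specs at 8.5-11.5:
print(cpk(10.0, 0.5, 8.5, 11.5))  # 1.0, i.e. "three sigma" capability
```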

    2. So, the idea behind the process sigma was probably never a great one…it’s convenient to have a single number to compare to another single number, I guess; however, when those numbers are as counter-intuitive as the process sigma, it’s not necessarily a great idea. DPMO is a pretty good metric; Roger Hoerl and Roland Snee (and others) made the case in the late ’90s and early 2000s that DPMO was useful as a sort of Rosetta Stone for comparing data from binomial, Poisson, and continuous processes.
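    For reference, the DPMO arithmetic itself is a one-liner; this sketch assumes the usual definition (the counts below are invented, and deciding what counts as an opportunity is a separate argument):

```python
def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities: the 'Rosetta Stone' metric,
    comparable across binomial, Poisson, and continuous-data processes."""
    return defects / (units * opportunities_per_unit) * 1e6

# 15 defects found in 1,000 units, 5 opportunities per unit (made up):
print(dpmo(15, 1_000, 5))  # about 3000 DPMO
```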
    I’m not going to spend a lot of time with the “1.5-sigma shift.” It’s always been nonsense, and an indication that someone forgot how SPC works…It was a useful SWAG for design, but ridiculous as an operational metric. If you have SPC set up at all correctly, you cannot physically have a sustained undetected 1.5 sigma shift. So I’ll leave the debate about “Long-term” and “Short-term” variation to another discussion. I’m a little more concerned about this idea of a “negative” process sigma.
    The process sigma is intended to tell us how much of our data is (a) contained in the space under a normal curve, and (b) inside the specification limits. Essentially, without the 1.5 shift, a Cpk of 2 is equivalent to “Six Sigma” performance, a Cpk of 1 is equivalent to “Three Sigma,” etc. It used to be that we had tables, published in Six Sigma books, that showed us the DPMO-to-Process Sigma conversion tables for various shift assumptions (usually from 0 to 1.5). In those tables, for example, a DPMO of 833,804 would give you a process sigma of 0.6 (assuming the 1.5 shift). This was due to the fact that integrating the normal curve would tell you that 83.3804% of the area under the curve to one side of the limit (with the 1.5 shift) would be outside the specification limit, and so the probability of a defect (translated into DPMO) would be 833,804. With the exception of the shift assumption, this at least made some sense.
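    That table arithmetic is easy to reproduce. A sketch, assuming the two-sided convention the published tables used (shift the curve 1.5 sigma toward one spec limit, then add the areas beyond both limits):

```python
import math

def phi(z):
    """Standard normal CDF via the complementary error function."""
    return 0.5 * math.erfc(-z / math.sqrt(2))

def dpmo_from_sigma(sigma_level, shift=1.5):
    """Two-sided DPMO for a given process sigma level, as the old
    published tables computed it: shift the curve by 'shift' toward
    one spec limit and add the areas beyond both limits."""
    near_tail = phi(-(sigma_level - shift))
    far_tail = phi(-(sigma_level + shift))
    return 1e6 * (near_tail + far_tail)

print(round(dpmo_from_sigma(0.6)))     # the 833,804 figure cited above
print(round(dpmo_from_sigma(6.0), 1))  # the famous 3.4 DPMO
```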
    Somewhere along the way, that got lost. Packages like Minitab did not do us any favors by adding “Z-benchmark” scores that could go to values less than zero. So now, in addition to the 1.5-sigma shift, we are stuck with negative sigma values, which really make no sense.
    I have ceased teaching the process sigma, and the 1.5-sigma shift, as anything other than a historical footnote. Better to teach good SPC (in Measure, carried throughout), talk about Capability based on stability, and use DPMO or its inverse (yield) as progress and comparative measures. That way you are talking about real-world concepts. Managers and executives tend to instinctively understand DPMO and yield, and you don’t have to refer to tables.

    #705045 Reply

    Intriguing post and I’m sure it might get some passionate thoughts.

    I’ve seen many areas where a “negative sigma,” which you say makes no sense, was truly an indication of a process not meeting specs. I’m not sure I understand your distaste for negative sigmas, even after reading the post twice. Yes, DPMO is fine too, but I prefer sigma level (shifted or unshifted) or ppm, since the conversation about DPMO often becomes a discussion in itself about how many value-added opportunities there are.

    Nice post.

    @rbutler and @Mike-Carnell

    #705048 Reply

    @cseider Thanks Chris. I will be very quick on this, particularly after reading point #1. This is one of those “I long for the good old days of control charts, when we did everything right and the world was pure” posts. Maybe it was. That is an academic view of the world: quality was tolerated rather than important, it had no voice, and our factories produced junk.

    I have no interest in the SS-versus-control-charts argument. In the book Getting To Yes, they talk about negotiation: there are people who discuss issues and people who take positions. Nothing happens when you engage someone who has taken a position, so don’t waste your time. I am a believer in that advice from Mr. Fisher.

    You have 22 years that I am aware of as a practitioner on some of the best performing deployments in the world. This is esoteric. It is beneath you.

    Just my opinion.

    #705049 Reply

    …are we talking about the 27 September 2002 exchange? If so what I said then still applies:

    “The real problem here is one of labeling. Someone made a very poor choice when it came to renaming the Z score. Whoever they were they at least chose to call the Z score the Sigma Value and not Sigma. Unfortunately, the same cannot be said of any number of articles and computer programs. I know that Statistica calls their program the Sigma Calculator and the final result Sigma. The problems that can result from confusing Sigma Value (which can have negative values) with process standard deviation (which cannot have negative values) are too numerous to mention.”

    #706084 Reply

    Rip Stauffer
    Reputation - 176
    Rank - Aluminum

    I am just trying to understand how we got here…My feeling is that we got here by confusing a lot of things that should never have been mixed together. This is not, to me, an academic discussion at all. It’s a discussion about what happens in the real world. I also have quite a few years (27) and a lot of successful (and, yes, some unsuccessful) deployments under my belt. It’s not that I “long for” control charts; I just know that they (or, at least, some evidence of statistical control) are necessary before you can assess process capability. Process capability cannot be assessed until you have some indication of process behavior. That is just a fact, not a philosophical position or an opinion. There is no “SS v control chart” argument, at least not from me. An assessment of sigma level does, however, depend on an assessment of process control. Whether that comes from a control chart or an automated high-speed SPC system doesn’t matter; what matters is that you demonstrate control before you start talking about capability indices or DPMO or sigma levels. That was true before 1983 (when we first started using capability indices in the U.S.), and it’s true today.

    The question of how we got away from that basic understanding interests me, but what interests me more is understanding how we got away from the tables (which translated DPMO to process sigma level). Even with the 1.5-sigma shift nonsense, you didn’t have negative process sigma levels.

    #706086 Reply

    As I stated in that discussion back in 2002, you can get negative sigma values. From that discussion:

    ” If you go back to your sigma calculator in whatever program you are using (mine is on Statistica and it does add 1.5 to the Sigma Value) and plug in values such as 999,999.999 for your DPMO you will get a Sigma Value of -4.49 or -5.99 depending on how your particular calculator uses 1.5 when computing Sigma Values. These Sigma Values just mean that the vast majority of your product is completely outside the customer spec limits. It says nothing about the standard deviation of your process.”
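    The calculator behavior quoted above is straightforward to reproduce; this sketch uses Python’s standard library rather than Statistica, but the arithmetic is the same:

```python
from statistics import NormalDist

def z_bench(dpmo, shift=0.0):
    """Z-benchmark implied by a DPMO figure: the standard-normal
    quantile of the conforming fraction, optionally plus 1.5.
    Nothing stops this from going negative once DPMO exceeds 500,000."""
    return NormalDist().inv_cdf(1 - dpmo / 1e6) + shift

print(z_bench(999_999.999))       # about -6.0
print(z_bench(999_999.999, 1.5))  # about -4.5
```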

    I agree calculating the sigma level for the case where there is very little of your product within a customer’s specification is silly because it should be blindingly obvious that this is the case without recourse to any kind of calculation.

    What you do need to remember is this calculation is only commenting on how your process lines up with customer specifications – it doesn’t say anything about process stability. You can have a very stable process and still get this kind of a result. For example there is a sudden drastic change in the customer needs and the change is such that virtually none of the current product from your stable and in control process can meet the new requirements.

    #706087 Reply

    Rip Stauffer

    So, it sounds as though it was a matter of software manufacturers deciding to do it that way. I would like to talk to the people who originally made that decision. While I sort of understand their reasoning, it did represent a shift in the way the process sigma was originally taught; I’d be interested to know why they thought that was an improvement. It probably had something to do with ease of calculation within their algorithm. It was probably easier to do it that way than to shift the distribution (as the tables did).
    It’s interesting that books written after those software packages came out continued to use the tables. In those tables, the majority of your product would be defective at .6 sigma (without the shift) or 1.5 sigma (with the shift)…that’s where DPMO was at greater than 500K.

    #706162 Reply

    @ripstaur You made your stance on process control clear, along with your “fact.” Here is where I am: process control doesn’t move me a step closer to improvement. It is a nice-to-know number, tied to this Cpk/Ppk/Pp/Cp family that means nothing to people outside of Quality Assurance. In general I couldn’t care less. The whole idea that a process needs to be under control before you do capability is irrelevant to me. I don’t need capability, so I couldn’t care less.

    Now the assumption that it needs to be under control to improve. That is a circular argument. The reason things are a project is because they are out of control. I have done support on enough projects and found someone who is doing nothing because he is waiting for that first Control Chart to show control and stability. It doesn’t work that way.

    The 1.5 sigma shift died a long time ago. It was barely alive right from the beginning. If you have an issue with it you can commiserate with the guy from Australia who is a borderline psychotic about it.

    Sigma is a management metric. Most management teams do not have a firm grasp on what it does or doesn’t mean. Beyond comparing dissimilar products it is essentially useless. So the whole negative sigma discussion isn’t even remotely significant. In my opinion not at all.

    Just my opinion.

    #706215 Reply

    Rip Stauffer

    Mike, thanks for that clarification. I agree whole-heartedly with some of what you’ve said. I would like to think that the 1.5 Sigma shift died a long time ago, but it has been included in ANSI-ISO Six Sigma standards now “by convention.” So, unfortunately, there is a little more beating that has to be done before that horse is dead. I think I know the “guy from Australia” you are talking about…there is no commiserating with him.

    I have improved a lot of processes without their being in control at the beginning. That’s just the nature of the game, as you point out. Sometimes an important process requires improvement and the process owners didn’t know enough to apply SPC prior to the beginning of the project. That’s OK with me; I would almost never ask them to wait until the process is in control before starting the project. Usually, if we get the measures and charts going, acting on any special causes gives us quick and easy wins early on.

    I do disagree that SPC doesn’t get you any closer to improvement, though. There are many, many cases where managers have used SPC over time to get processes to much higher than 6 sigma, just by acting on the special causes when they come up. Granted, they had to do a lot of work early to get those processes in control, on target and with reduced variation; but once they did, monitoring was simple and continual improvement became possible.

    It’s also been my experience that knowing the baseline process center and spread are important early in the project, and then tracking those throughout the project will be important as well, to know how your project is progressing. The best tool I’ve found for that purpose is the control chart (process behavior chart). Run charts are OK, but it’s not like calculation of control limits is rocket science, especially with all the software available, and you can’t measure variation in a run chart. If the run chart shows a reasonable degree of statistical control, you could then use a histogram for variation. However, the control chart gives you both, and lets you know when (and to what degree) your improvement solutions have been successful.
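    Since the point here is that control limits are not rocket science: a minimal individuals-and-moving-range (XmR) sketch, with made-up data. The 2.66 and 3.268 constants are the standard d2-based factors for moving ranges of size 2.

```python
def xmr_limits(data):
    """Natural process limits for an individuals (XmR) chart."""
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    x_bar = sum(data) / len(data)
    return {
        "center": x_bar,
        "ucl": x_bar + 2.66 * mr_bar,  # upper natural process limit
        "lcl": x_bar - 2.66 * mr_bar,  # lower natural process limit
        "mr_ucl": 3.268 * mr_bar,      # upper limit for the mR chart
    }

# Hypothetical measurements, in production order:
limits = xmr_limits([10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7])
print(limits["center"], limits["ucl"], limits["lcl"])
```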

    #706223 Reply

    @ripstaur I will be honest with you: I have no appetite to be in this discussion, basically because I believe it is a waste of time. If you want to use your control charts, there is nobody stopping you. I have watched countless people do things on projects, and as the book “Getting to Yes” points out, if someone has taken a position they aren’t going to change. Well, the Quality profession took up control charts around 1931 with Shewhart’s book, Economic Control of Quality of Manufactured Product. The operative word being Economic. Basically it is in the same boat as Managerial Accounting: it was created at the turn of the 20th century and has not progressed a day since. These days we have controllers that sell for less than $100 that can control a process, and a controller shows up for work every day, never skips a sample because it is too busy, needs nobody to make blank charts for it, will shut down when you tell it to, etc., and that list goes on ad nauseam. Basically, there are currently better ways of controlling a process that are the Economic way to do it.

    Now if you really want to take the position that you can control a process beyond SS with the normal selection of attribute and variables control charts, you may want to work that through with some numbers. First, they are backward looking, so the defects have already occurred; the problem of going beyond 4.5 sigma, since you still believe in the shift, is pretty much impossible. By the time you recognize an out-of-control situation at that level of quality, you are no longer at that level of quality. Attribute charts will never get you there in the first place. If you can’t make it work at 4.5 sigma, and you believe in the true value of SS, it obviously will not work there either. I would be really interested in talking with a manager who was controlling a process beyond SS (regardless of which one you believe in) with a control chart. Mike Harry believed he could do it with EWMA and CUSUM charts, but to my knowledge he never really had processes (in a practical application) that ran at a high enough level to be able to prove it.

    Attribute charts don’t really do you much good to begin with. Once I am out of control on # of defects, % defective, etc., I still don’t know anything beyond “something is wrong,” and I still have no clue what it is. Rather than have this mess out there around an attribute chart, how about figuring out how to stop creating defects rather than counting them?

    So that leaves control charts for analysis. You agreed that you do have to work on a process before it comes into a state of statistical process control, so we understand I do not have to prove a process is in control before I do anything (so all of you out there sitting at your desks waiting for your process to bring itself into control need to get up and go work on the process). Control charts are a good tool to evaluate stability if I need to see the data in production order. If I just want to see what has happened to my process variation, I can do that with a homogeneity-of-variance test; I can then determine the power of the test for my sample size and get a statistical determination of whether there is a difference or a lack of one.
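    A sketch of the homogeneity-of-variance route described here, assuming SciPy’s Levene test is an acceptable choice (the before/after data are hypothetical):

```python
# Compare process variation before vs. after an improvement without
# needing the data in time order: a homogeneity-of-variance test.
from scipy import stats

before = [10.2, 9.8, 10.6, 9.4, 10.9, 9.1, 10.5, 9.6]
after = [10.1, 10.0, 10.2, 9.9, 10.1, 10.0, 9.9, 10.1]

stat, p_value = stats.levene(before, after)
print(f"Levene W = {stat:.2f}, p = {p_value:.4f}")
```

A small p-value suggests the variances differ; as the post says, pair the result with a power calculation at your sample size before claiming there is no difference.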

    So, to go back to where I began: if you like them, use them. It doesn’t make any difference to me. I use tools when I think they are appropriate. The problem we have here for the certification industry is that if you don’t teach control charts, then you have about 2 days’ worth of material to teach in the Control phase. Most people selling training don’t like that, and the Quality people have a fit if you don’t train the charts. Personally, I would just as soon buy everybody a Raspberry Pi controller and let them play with it for a couple of days as the Control phase training. This is the 21st century. We are 17 1/2 years into it, so we should probably join it. Our job is to build good products, not find ways to use our favorite analysis tools. (I have no financial interest in this company, and I am told even this is quickly becoming old technology. Maybe, or they see my grey hair and tell me that because they know I don’t know.)

    Just my opinion.

    #706224 Reply

    Ralph Stauffer

    Sorry you feel that way, Mike…if you look at Wheeler’s Japanese Control Chart, especially the discussion of it in Advanced Topics in Statistical Process Control, you’ll see one good example of what I was talking about. One note: If trainers are only teaching control charts in the CONTROL phase, they are doing a disservice to their trainees and to the organizations they are working for. Good luck to you.

    #706227 Reply

    Richard Heller

    For what it’s worth, I do agree with you about the utility of the 1.5 sigma shift. The shift isn’t what’s important to quality and measurements. However, I have found it helpful in countering the argument that setting specifications at +/- 3 standard deviations is a good idea. As I understand it, Motorola used the shift to address the problem of having a 10-15% reject rate on units built to 3-standard-deviation limits. My explanation was that if we take a sample from a production run, calculate the standard deviation from that run, and then establish specifications from that run, we overlook a lot of the real-world variability . . . different machines, different raw materials, different operators, different set-ups, etc., etc., etc.

    However, trying to use the shift to explain or even address a stable process doesn’t really tell us anything from a quality perspective. What the engineers who set the initial specifications failed to see, and what we in quality understood, is that we need to set specifications based on the real world.

    Fortunately, in the years since Motorola started this issue, I’ve seen designers improve their approach to specifications and, consequently, significant improvements in the overall quality of products. So, a historical oddity or not, the concept deserves a good deal of credit for aiding quality improvements.

    #706228 Reply

    @ripstaur I am not sure why people assume that when someone thinks something through on their own, it means they have not read Wheeler. How did you come to the conclusion I had not read Wheeler?

    If you read what I posted I did not say Control charts are all that they teach.

    Here is the part I do not understand, but it is pretty indicative of today’s society: “Sorry you feel that way, Mike…” Which means what? I think something different than you do, and you are sorry about that? You believe something different than I do, and I don’t feel bad about that.

    Just my opinion.

    #706235 Reply

    Rip Stauffer

    Mike, you said, “Now if you really want to take the position that you can control a process beyond SS with the normal selection of attribute and variables control charts you may want to work that through with some numbers. First they are backward looking so the defects have already occurred so the problem of going beyond 4.5 sigma, since you still believe in the shift, are pretty much impossible.”
    I was not taking a position, I was talking about what people in industry have done with control charts for many years now. I cited the Japanese Control Chart as one example. In the time period covered by that narrative, the workers at Tokai Rika started with a process in control, with a Cpk of greater than 2 (therefore already better than six sigma), and ended up cutting the variation even more just by monitoring with an XbarR chart and addressing special causes when they came up. I did not know whether you had read it or not…just pointing to it as an example.
    Just for clarity, also, I do not believe in the 1.5-sigma shift. It’s nonsense–it was intended as a design swag for testing specs in engineering simulations, and was not originally intended as an operational metric. I have been fighting the use of it for years, but the Six Sigma world is reluctant to let it go.
    I’m just sorry you feel that you’re wasting your time discussing this further, because I enjoy a good discussion.

    #706236 Reply

    Rip Stauffer


    A while ago, I started doing some research to write an article about the history of the shift. I happened to be working as a sub to the Six Sigma Academy at the time, and managed to get an interview with Mikel Harry. He said that the metric as it has evolved (into the process sigma levels that assume the 1.5 sigma shift) was absolutely NOT what was intended. He didn’t say it, so much as yell it–he was very emphatic. He then explained that he had used it as a design metric (this squared with what I had learned working with a statistician I knew who had run a pager operation at Motorola). He would basically tell his engineers that when they were running their systemic simulations they should shift all the output 1.5 sigma in the worst-case direction. Once the simulation was complete, then, they could see the effects of tolerance stack (or, more specifically, variation stack) on capability. That’s a good idea, from the viewpoint of robust design principles.

    However, to translate that design concept into a universal assumption that your process has a sustained undetectable shift of 1.5 sigma in the worst-case direction (which is what the process sigma level tables do) is ludicrous. If you are using SPC correctly, that cannot happen.
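    That claim can be put in numbers. A sketch, assuming an Xbar chart with the usual 3-sigma limits (the subgroup size n = 4 is just an illustrative choice):

```python
from statistics import NormalDist

def detect_prob_per_subgroup(shift_sigma=1.5, n=4):
    """Probability that a subgroup average plots beyond the near
    3-sigma limit after a sustained mean shift of shift_sigma (in
    units of the individual-value sigma). The subgroup mean moves
    shift_sigma * sqrt(n) of its own standard errors."""
    z = 3 - shift_sigma * n ** 0.5
    return 1 - NormalDist().cdf(z)

p = detect_prob_per_subgroup()  # n = 4: z = 3 - 3 = 0, so p = 0.5
print(p, "-> average run length about", 1 / p, "subgroups")
```

With n = 4, half of all subgroups signal immediately, so a sustained 1.5-sigma shift would be flagged after about two subgroups on average; “sustained and undetected” is not a realistic combination.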

    #706244 Reply

    @ripstaur I was in Phoenix for Thanksgiving in 2016. My wife and I spent 3 hours at lunch with Mike and Sandra Harry. You know how many times the 1.5 sigma shift came up? Never. If you got that interview when Mike owned SSA, then you are talking about a very long time ago.

    You want to paint the SS community as keeping that alive? This website sees over 500,000 visitors a month. That is a pretty big sample size. Now take you and your buddy from Oz out of the equation and see how often the topic comes up. It doesn’t.

    I saw a guy who is well known on this website talk about using metrics to understand what does and doesn’t make things relevant. The 1.5 sigma shift has become a footnote. It is irrelevant. The concept that a short-term capability study will generally be overstated, and that long-term capability will appear less capable, is generally true. If you want to know the difference, measure it. I have posts on this site going back to 2001 that state exactly that.

    If Mike raised his voice then I would be willing to bet he was probably irritated that he was taking time to speak with you and you were wasting his time discussing something he considered insignificant to what was happening at that time.

    If you want to know what your tolerance stack-up looks like, you run a Monte Carlo simulation with all your distributions, means, and standard deviations. That will tell you where you have issues. All of this design stuff is a function of how good your knowledge and assumptions are. “All models are wrong, some are useful” (George Box).
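    A minimal Monte Carlo tolerance-stack sketch of the kind described (every distribution, mean, sigma, and spec below is invented for illustration):

```python
import random

random.seed(42)  # reproducible run

def stack_yield(trials=100_000):
    """Sample each component dimension from its assumed distribution,
    sum the stack, and count how often the assembly misses its spec."""
    out_of_spec = 0
    for _ in range(trials):
        # three components, each normal with its own mean and sigma
        a = random.gauss(5.00, 0.02)
        b = random.gauss(3.00, 0.03)
        c = random.gauss(2.00, 0.01)
        total = a + b + c
        if not 9.90 <= total <= 10.10:  # hypothetical assembly spec
            out_of_spec += 1
    return 1 - out_of_spec / trials

print(f"simulated yield: {stack_yield():.4%}")
```

The stack sigma here works out to about 0.037, so the +/- 0.10 spec sits near 2.7 sigma and the simulated yield lands around 99%; tightening any one component sigma shows up directly in the yield.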

    #706245 Reply

    That sounds about right, Mike. I’m very glad to hear that the shift has become a footnote, at least here. I talked to Mikel Harry back in about 2011…at the time, I was on a couple of ANSI TAGs, including one for statistical methods. When I joined, they were putting the final touches on a DMAIC standard, in which they explained the metric and the shift, and said they were using it “by convention.” Unfortunately, I joined too late to get that statement removed. I still run across training materials that contain those tables and the shift. I won’t use them, but I still see them and talk to people who don’t know better but think that maybe I “just don’t understand it well enough.”
    Mikel was happy to talk to me, but he did express some disbelief that anyone was still interested.
    The Monte Carlos you are talking about are the very same simulations he was talking about.

    #706248 Reply

    @ripstaur Ok, so you (and your buddy from Oz) have made it clear that you two are stuck in the 1990s. I couldn’t care less about some ANSI standard. If they write nonsense like that, they obviously do not have people writing the standard who have a clue.

    So since you only believe in the real (unshifted) values from the Z table, you realize that a Cpk of 2 is a Z value of 6? From the table, that is 9.9 parts per billion defective.

    If that is what I have (Cpk = 2.0), then control charts that count defectives (P, NP, C, and U charts) are completely useless.

    #706249 Reply

    @ripstaur There were 73.5 million cars produced worldwide in 2017. So at that rate, we will see 10 defects worldwide every 13.6 years?

    #706250 Reply

    Rip Stauffer

    I get about 2 ppb, Mike. And of course you would not use attributes charts at anything close to those levels.

    #706251 Reply

    Rip Stauffer

    That would be highly unlikely, Mike. But if their cigarette lighter sockets were manufactured at Tokai Rika, it is also highly unlikely that any of those lighter sockets would be defective.

    #706253 Reply

    Rip Stauffer


    I’m curious about the Raspberry Pi controllers you mentioned. Are these programmed with the actual process control limits, to provide operators with special cause signals if they arise? Or are they programmed as p-controllers, with spec limits and rules that adjust the machines if they get too close to the specs?

    #706256 Reply

    @ripstauffer I won’t get into a disagreement over 2 or 9. If we use 2, you get 2 defects every 27.2 years in worldwide auto manufacturing. This is only to make a point. Even at a Cpk of 1.0, or 3 sigma, the only chart that may be remotely useful is a variables chart. Actually, what it does prove is how ridiculous it is to think there is a reason to be producing at those numbers. You said you knew someone from pagers. Then you understand that the MTBF was in excess of 100 years, and what that did was kill the market for pagers. A customer bought one and they were done for life. Here is a real-life example: my wife and I were buying flooring. There was flooring at one price with a 50-year guarantee, and flooring that cost more with a “lifetime” guarantee. I am 65; a 50-year guarantee is a lifetime guarantee. There is a point where you can calculate a number, but the question is whether you should, because it makes you look ridiculous.

    Now let’s remember what Shewhart called this: “The Economic Control of Quality of Manufactured Product.” So you choose a cigarette lighter to justify this discussion. A cigarette lighter in all probability costs less than $1. Let’s say I was the only manufacturer in the entire world, so I had the contract for 73.5 million per year. Let’s say I have a Cpk of 1, so I am seeing 3 defects per 1000. That means we will see 73,500 defects per year and still be in control. If I throw them away and don’t screw around with them, that is my cost. Now you want to implement a control chart to get me to a Cpk of 2, so I make a defect, by your calculation, every 27.2 years, and do it in the name of Economic Control of Manufactured Product. We won’t even get into the fact that you are improving a product that is pretty much not used any longer, so this is basically like being the best buggy-whip manufacturer in the world around the turn of the 20th century.

    Is that a serious question about controllers? It does what you program it to do. I really hope that question was not an implication that I do not understand the difference between Control Limits and Spec Limits. You chose to combine the concept of Cpk (spec limits) and Control Charts (control limits) so I assume you understand you can be completely in control and produce 100% defective product.

    #706257 Reply

    @ripstaur I have 2 other people who get a Z value of 6 at 9 ppb.

    #706261 Reply

    I’m not sure how they arrive at that; of course, operational definitions are important. Here’s how I did it: if you use =NORM.S.DIST(-6,TRUE) in Excel, you get 0.000000000987 for the fraction of the area under the curve beyond 6 sigma on one side; doubling that for the other tail yields 0.000000001973, which translates to about 2 ppb. That is what I have seen in unshifted tables for years (usually expressed as “0.002 ppm”). I’d be interested to see how you and your friends calculated it.
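    The same arithmetic outside of Excel, as a sketch using the erfc form of the standard normal CDF:

```python
import math

def phi(z):
    """Standard normal CDF: Phi(z) = erfc(-z / sqrt(2)) / 2."""
    return 0.5 * math.erfc(-z / math.sqrt(2))

one_tail = phi(-6)                  # matches =NORM.S.DIST(-6,TRUE)
two_tail = 2 * one_tail             # add the symmetric far tail
print(f"{one_tail:.3e}")            # about 9.866e-10
print(f"{two_tail * 1e9:.2f} ppb")  # about 1.97 ppb, i.e. "about 2 ppb"
```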

    #706262 Reply

    No, Mike; I’m not trying to imply that you don’t know the difference
    between spec limits and control limits; had I believed that, I wouldn’t
    have asked the question that way. I just wanted to know how you are using
    those controllers (I’d still like to know).

    I also realize that most people don’t use cigarette lighters any more
    (although they often still use cigarette lighter sockets). That was from
    the Japanese Control Chart example we discussed earlier. The actual part
    doesn’t matter to this discussion, anyway. They were, and are, achieving
    the same levels of quality in producing most of their components (many of
    which are still being produced for cars in 2018). I was responding to your comment that I “might want to work through that with some numbers;” the numbers are there, in the Wheeler example.

    Again, I am well aware that attributes charts are not useful once you get
    close to a Cpk of 1; I don’t know what I’ve said that makes you think I
    believe otherwise, but I must have said something, because you keep
    returning to that point.

    My point about the Tokai Rika chart was that they achieved that kind of
    quality using SPC; they just used the control chart (which they were using,
    anyway, to monitor the process). If they had had an automated way to get the special cause signals, they could have done the same thing (but it
    still would have been SPC). At the time, they just had a paper control

    #706264 Reply

    @ripstaur Here is the table. I have no idea how the other two arrived at their number.

    Let’s make sure we understand that you and I think differently. You seem to enjoy this esoteric nonsense. I have been involved in SS since it was deployed at Motorola, and consulting from ’95 till now, beginning with the AlliedSignal deployment. The difference came when I decided to put my money where my mouth was and purchased a business, which I ran for 9 years. I couldn’t care less about the esoteric nonsense. If I had had this discussion with someone I employed, I would have fired them. There is no quicker way to lose credibility in a factory than to start talking about 9 ppb. It is a nonsense number. You seem to want to walk around the Economic Control argument. The number of $73k is less than one FTE; from a business standpoint you are no longer doing something that makes business sense, therefore it becomes esoteric.

    I understand where the numbers come from. Just because I read something doesn’t make it true, and even if I accept the source as credible (which Wheeler is), it doesn’t mean I think it was the right thing to do. I told you to work through the numbers because to do that you need to do more than read Wheeler and regurgitate what someone else has said. You need to understand what you read, and then you need to do the math that says, “I see this as something that makes sense.” When people make arguments by quoting others, with none of their own work, it is generally a waste of time. I would rather spend that $73k on rewarding employees for attendance, idea submissions, etc.

    I asked you about the attribute control chart because I want to make sure you understand you are trying to make your point with regard to variables control charts only. Now you want to use variables control charts because they are the light and the way, and only the pure of heart do this. OK, so now I can do two things to get to a Cpk of 2. The simplest way would be to leave the process alone and widen my specs so wide I could pitch a dog through them. When I am dealing with some cutting-edge technology like a cigarette lighter, and those extremely tight tolerances demanded by that technology, this probably has a lot to do with it. Now, if I want to write an article about my world-class cigarette lighters, then I spend the time and resources to get to a Cpk of 2. So now I have a process with a capability of 9 ppb that I want to use an individuals chart on; how many of the trends, runs, shifts and cycles tests are actually meaningful? I am 6 standard deviations away. I can walk away from this process and run the n = 2 type control and never have to worry.

    It goes back to Economic Control. This isn’t some college game; this is business. In the late ’80s I sat in a discussion of control charts with Dr. Deming in Long Beach. There was an idiot who brought in a control chart of solder-pot temperature; he had taped pages together into a chart approximately 10 feet long. Dr. Deming pulled no punches explaining to him what an idiot he was, running a control chart on something with a controller. I went back to my job and played around with it. When I looked at all the drama that surrounds a control chart (people who do not shut down for OOC, the cost, almost never seeing one shut down because things were so good, etc.), they are generally more trouble than they are worth. You stick a controller on something that is a variables input to a process and you don’t have to screw around with measuring the output. That would be called a leverage variable, something we have discussed in SS since it began. If people were all distracted with some 1.5 sigma shift, they probably missed that.

    Controllers are used to get me to what I need for the operation. When you get a spec, anything within the spec should be OK with the customer. That allows me to manufacture inside the window that has the most economic advantage to my organization (the one that pays me), as long as I know the process and can choose how much risk I put my customer at based on the standard deviation; that is where the Taguchi Loss Function comes in.

    That is how this is supposed to work. You look at different ideas. You play with them yourself so you internalize the knowledge and you figure out how to help your organization without putting your customer at risk. That is what we get paid for. That is the job.

    #706266 Reply

    Rip Stauffer

    It might make you feel better to know that I would not talk about what you have disparaged as “esoteric nonsense” with the CEOs of companies; that’s like trying to teach a pig to dance. However, I think that in these professional discussions, these finer points should be discussed. Putting them into action on the shop floor doesn’t require these discussions.
    You have accused me of living in the ’90s; it sounds to me as though you are using those controllers as p-controllers, a practice that was discredited in the ’80s, once we started learning about SPC. One of the first things Wheeler or Deming would do was tell the manufacturers to turn them off: they are tampering engines, and they drive variation up.
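    The tampering point is easy to demonstrate with Deming’s funnel experiment: if a process is already stable, adjusting the setting to compensate for each deviation from target (Rule 2 of the funnel) doubles the variance rather than reducing it. A quick simulation sketch (the sigma and sample size are arbitrary choices of mine):

    ```python
    import random

    random.seed(42)
    SIGMA, N = 1.0, 100_000

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Rule 1: leave the stable process alone.
    no_adjust = [random.gauss(0, SIGMA) for _ in range(N)]

    # Rule 2 (tampering): after each result, move the setting to
    # "compensate" for the last deviation from target.
    setting, tampered = 0.0, []
    for _ in range(N):
        y = setting + random.gauss(0, SIGMA)
        tampered.append(y)
        setting -= y  # the adjustment a p-controller would make

    print(variance(no_adjust))  # ~1.0 (sigma squared)
    print(variance(tampered))   # ~2.0 (tampering doubles the variance)
    ```

    After each adjustment the new setting is the negative of the last noise term, so every reading carries two independent noise terms instead of one, hence twice the variance.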
    The Taguchi Loss Function, by the way, is the reason for continuing to improve. You can get everything within specs (and then you’re back in the ’70s for sure); what Taguchi demonstrated is that you need to keep the process on target and continually reduce variation. While we (in the ’60s, ’70s and early ’80s) were trying to build everything within specs, the Japanese were working to make all the parts identical. SPC was one of the strategies that made that possible (at a very low cost). If you haven’t yet, you might want to study Wheeler’s Advanced Topics. There is a section in there where he computes the Taguchi loss for that very well-controlled Tokai Rika process. The economic loss of even that process was much higher than most conventional accounting paradigms would tell you.
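    Taguchi’s quadratic loss makes that concrete: with a loss constant k and target T, the expected loss per part is k(σ² + (μ − T)²), so a process sitting inside the specs but off target can cost more than a noisier one centered on target. A minimal sketch (the k, target, and sigmas are illustrative numbers of mine, not Wheeler’s Tokai Rika figures):

    ```python
    def expected_loss(k, mu, sigma, target):
        """Expected Taguchi loss per part: k * (sigma^2 + (mu - target)^2)."""
        return k * (sigma ** 2 + (mu - target) ** 2)

    # Hypothetical part: target 10.0, specs 9.7 to 10.3, k = $50 per unit^2.
    T, k = 10.0, 50.0
    print(expected_loss(k, 10.0, 0.10, T))  # on target, sigma 0.10 -> $0.50/part
    print(expected_loss(k, 10.2, 0.03, T))  # in spec, off target -> ~$2.05/part
    ```

    Both processes ship nothing outside the specs, yet the off-target one loses roughly four times as much per part, which is exactly why "everything within specs" is not the finish line.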

    But enough esoteric nonsense for now.
