iSixSigma

Process Capability – Should be predictable first?


  • #39747

    kkndhz
    Participant

    Deming said: “A process has no measurable capability unless it is in statistical control.”
    My confusion is this: how can we compare the capability of two unstable processes?
    I have several metrics in hand for these two processes, i.e. standard deviation, stable or not, and process sigma value.
    How do I go ahead and select which process is better in each case, i.e. both unstable, both stable, or only one stable (and what if the unstable one’s process sigma is higher)?
    My understanding is that Deming comments on capability only from a process improvement perspective, but we do need a simple mechanism to select among alternative processes.

    0
    #121647

    Sanjeev Sadavarti
    Member

    A process out of control is a dangerous situation. It is better to focus on controlling it first and then comparing it with any other process.
    Yes, if you have to choose between a controlled process and an uncontrolled one, and you do not intend to bring the uncontrolled one under control, choose the process that is under control. On the other hand, if the process can be brought under control, then compare them.

    0
    #121654

    Anurag
    Participant

    Dear kkndhz
    Deming said: “A process has no measurable capability unless it is in statistical control.”
    Deming is absolutely right. A process in statistical control is one that is free from all special causes. You have to eliminate all the special causes associated with that process and make it a stable process; once you have checked the stability of the process (with a control chart), you can go further to process capability.
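    For example, a minimal sketch of such a control-chart stability check (my own illustration, assuming individual measurements and an I-chart with 3-sigma limits estimated from the average moving range):

    import numpy as np

    def i_chart_stability(x):
        # Individuals (I) chart: 3-sigma limits with sigma estimated as
        # (average moving range) / d2, where d2 = 1.128 for subgroups of 2.
        x = np.asarray(x, dtype=float)
        sigma_hat = np.abs(np.diff(x)).mean() / 1.128
        center = x.mean()
        ucl, lcl = center + 3 * sigma_hat, center - 3 * sigma_hat
        out = (x > ucl) | (x < lcl)
        return {"center": center, "UCL": ucl, "LCL": lcl,
                "stable": not out.any(), "points_out": int(out.sum())}

    # Hypothetical cycle-time data with one special-cause spike
    data = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7, 10.1, 10.2,
            9.9, 10.0, 10.5, 9.8, 10.1, 14.0, 10.2, 9.9, 10.0, 10.3]
    print(i_chart_stability(data))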

    0
    #121655

    kkndhz
    Participant

    I understand that Deming says that from a process improvement perspective, which means process improvement is the #1 priority.
    I’m a Six Sigma BB and face this in my real project.
    In other situations, like selecting the best process in a real-time situation, I have some confusion. I’ll give one example:
    1. There are 3 servers providing the same kind of service. Their performance is monitored by SPC.
    2. We need to select the “best” server as our provider every 30 seconds, which means we need to evaluate all 3 servers and select one “best” server regularly.
    3. Confusing situations:
    3.1 What if all servers are not stable? Which indicator can help?
    3.2 What if Server A is stable with a process sigma of 0.5, and Server B is not stable with a process sigma of 4? Shall we select Server A as our provider?
    My assumption is that we evaluate the servers from the customer’s perspective, and it is not the customer’s responsibility to improve the process; the customer just selects the BEST. Is a stable process always better than an unstable process from a service quality perspective? Shall we select the process sigma of 0.5 rather than the process sigma of 4?
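    To make the evaluation concrete, here is a rough sketch of what one 30-second check could look like (purely illustrative; the data, the 200 ms spec limit, and the tie-break rule of preferring stable servers and then higher sigma are my own assumptions, not the answer I am asking for):

    import numpy as np
    from scipy.stats import norm

    def evaluate(window, usl):
        # Score one server from its latest window of response times (smaller is better).
        x = np.asarray(window, dtype=float)
        sigma_hat = np.abs(np.diff(x)).mean() / 1.128            # I-chart sigma estimate
        stable = not np.any(np.abs(x - x.mean()) > 3 * sigma_hat)
        dpmo = np.mean(x > usl) * 1_000_000                       # defects = responses over the spec limit
        sigma_level = norm.isf(max(dpmo, 1) / 1_000_000) + 1.5    # conventional 1.5-sigma shift
        return stable, sigma_level

    def pick_best(windows, usl):
        # One possible policy: prefer stable servers, then the highest sigma level.
        scores = {name: evaluate(w, usl) for name, w in windows.items()}
        return max(scores, key=lambda n: (scores[n][0], scores[n][1]))

    # Hypothetical 30-second windows of response times (ms) and a spec limit of 200 ms
    rng = np.random.default_rng(7)
    windows = {
        "A": rng.normal(150, 40, 60),                             # stable, wide spread
        "B": np.append(rng.normal(120, 10, 58), [240, 260]),      # tight, two special-cause spikes
        "C": rng.normal(130, 15, 60),
    }
    print(pick_best(windows, usl=200))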

    0
    #121723

    kkndhz
    Participant

    I’m afraid that my last post was buried by so many new posts.
    I’m seeking help to resolve my confusion. Thanks.

    0
    #121725

    indresh
    Participant

    kkndhz,
    Seeing your example, one server is stable but has a sigma value of 0.5 and the other is unstable at 4: there is too much built-in process variation in the first, and there are some special causes occurring in server 2.
    All the people replying have hinted at the same thing: see if we can remove these special causes. Well, for the customer, if these special causes occur with so low a frequency as to still leave a capability of 4 sigma, then the customer will obviously choose server 2, subject to the criticality of the defect.
    Just check whether you have enough data points; otherwise collect data over a longer period and try subgrouping to assess and analyse stability.

    0
    #121734

    kkndhz
    Participant

    Can I say that process sigma is the only indicator of process performance? Please note I’m not saying process capability, since process capability requires a stable process first.
    Can I select a process based only on process sigma, as indresh’s comments indicate? Can I ignore process stability during the “best” selection?
    I think my confusion is created by terminology only; the meanings of performance and capability are different.
    Can I say performance is from the customer’s perspective, and capability is from the process improvement perspective?

    0
    #121757

    kkndhz
    Participant

    Can anyone help conclude on this?
    Stability, capability, process sigma, … which one should be the #1 criterion when selecting among alternative processes?

    0
    #122826

    Kiloli
    Participant

    Can anyone help conclude on this?
    I agree quality is about sharing some common sense with people, but sometimes we just confuse people.
    Can I say that from a process improvement perspective, we need to focus on stability? I fully agree on this point.
    Now to my challenging point: can we say that from the customer’s perspective, process sigma is the only indicator, since the customer will feel that the highest process sigma provides the best service? In this case, can we say there is no room for us to explain stability to the customer?
    But process sigma cannot give any indication of process stability.
    Can any guru conclude on this? I want a view from the customer’s perspective, not from the quality/process improvement perspective.

    0
    #122841

    anurag kabthiyal
    Participant

    Dear Kiloli, go back to basics: the customer is king (I know you’ll agree on this).
    The customer is least bothered about our sigma level or the stability of our process; he only counts the benefits he will get by buying our product or our services.
    Please note the customer always sees the variation.
    To delight our customer we have to make our process such that it always meets his expectations.
    For that we make our process stable, or we calculate the sigma level.
    Let’s see the views of the gurus.

    0
    #122842

    Kiloli
    Participant

    This is really a confusing concept to me, although I’m a BB.
    My question is:
    When we talk about stability, is stability a Boolean value or a continuous value?
    In other words, how do we describe stability? Do we just distinguish stable from unstable, or do we give the process standard deviation (or you can suggest a better indicator) to describe and compare stability?
    I believe the latter is better. Does anyone agree with me? If I confuse you, sorry in advance, but I sincerely invite you to go through my example, which will give you some hints.
    I gave an example on Jul 17. I post it again below.
    I understand that Deming says that from a process improvement perspective, which means process improvement is the #1 priority.
    I’m a Six Sigma BB and face this in my real project.
    In other situations, like selecting the best process in a real-time situation, I have some confusion. I’ll give one example:
    1. There are 3 servers providing the same kind of service. Their performance is monitored by SPC.
    2. We need to select the “best” server as our provider every 30 seconds, which means we need to evaluate all 3 servers and select one “best” server regularly.
    3. Confusing situations:
    3.1 What if all servers are not stable? Which indicator can help?
    3.2 What if Server A is stable with a process sigma of 0.5, and Server B is not stable with a process sigma of 4? Shall we select Server A as our provider?
    My assumption is that we evaluate the servers from the customer’s perspective, and it is not the customer’s responsibility to improve the process; the customer just selects the BEST. Is a stable process always better than an unstable process from a service quality perspective? Shall we select the process sigma of 0.5 rather than the process sigma of 4?
     

    0
    #122844

    HF Chris
    Participant

    To identify the customer need or impact, you need to perform a business diagnostic. You also need to map out your IPDS or IPO. You will find that not all improvement areas are about process stability. This is just one small part of the bigger picture. The goal in many companies is to look at growth from the customer’s needs; in other words, how can we make the customer successful?
    Chris

    0
    #122934

    Kiloli
    Participant

    I agree with Chris.
    However, it seems people are avoiding answering my doubts.
    The old quality gurus’ influence is quite big, but I feel there are some flaws in these concepts. I’ve given examples.

    0
    #122954

    BBMole
    Participant

    Ask yourself: would you spend your own money on a process that is out of control? Too often we forget the bottom line. What is it worth to get it into control?
    Try this: look at all the processes and decide which one has the best average (I know, I can’t believe I am saying this), and also look at the variance of each process. Using these numbers, compare each process and plump for the one that will achieve the optimum performance if you get it into control. The variance gives you the clue as to which one should be the easiest to bring into control.
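    As a rough sketch of that comparison (made-up data, purely for illustration; what counts as the “best” average depends on your target):

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical output data (e.g. cycle times) for three candidate processes
    processes = {
        "A": rng.normal(10.0, 2.5, 300),                          # on target, wide spread
        "B": rng.normal(10.5, 0.8, 300),                          # slightly off target, tight spread
        "C": np.concatenate([rng.normal(10.0, 0.8, 290),
                             np.full(10, 18.0)]),                 # tight, but with special-cause spikes
    }

    for name, x in processes.items():
        print(f"Process {name}: mean = {x.mean():.2f}, "
              f"variance = {x.var(ddof=1):.2f}, std dev = {x.std(ddof=1):.2f}")

    # The heuristic above: compare the averages first, then use the variance as the
    # clue to which process should be the easiest to bring into control.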
     

    0
    #123149

    Kiloli
    Participant

    I was not convinced by BBMole.
    Yes, from a process improvement perspective, stability should be the higher priority.
    But from the customer’s perspective, the ultimate result, process capability, is their only concern. They don’t care how you improve the process. Customers are result-oriented, not process-oriented.
     
     

    0
    #123166

    Mike Carnell
    Participant

    Kiloli,
    I do not agree with either you or BBMole on stability. Stability is a figment of someone’s imagination. You deal with whatever process you are given, and in or out of control, you work on the sources of variation.
    As far as what you work on, it is more than an issue of control or capability. There is a cost component to everything. Not every defect is of equal value. At my current customer we have a market that is in a worldwide shortage. Driving cost makes no sense. Driving throughput is where the money is for the company. Since it is a mining company and we get X grams per ton of ore hoisted, the best move for the company is to focus on recovery rates (which can be viewed as a capability measure) as we process the ore. That will in fact frequently increase cost, and the various steps are significantly different in the value of anything we lose.
    This is why all the focus on a lot of these obtuse metrics makes no sense if you are driving without a view of cost.
    Just my opinion.
    Good luck.

    0
    #123174

    Kiloli
    Participant

    I need some senior expert to clear my doubts:
    1. When we talk about stability, are we comparing stable against unstable? Or is it better to compare their standard deviations?
    2. When we talk about capability, must the process be stable first? Can we compare the performance of different processes in all cases (i.e. among stable and unstable processes)?
    Thanks to all who have been helping me on this topic. Mike, I agree that we need to take cost into account. That is above the Six Sigma level.

    0
    #123176

    Ray The real deal
    Participant

    Dear Kiloli, below are my answers/opinions on your questions. When we evaluate a process, we want to know whether it is stable or not. Typically, when the process is not stable, we find that the instability is due to special causes. We do not compare a stable process against an unstable one. An unstable process is one where we need to investigate further what happened in the process during our data sampling; it could be noise factors, operator factors, reliability factors, etc. Once we have identified the special causes, we can proceed and move to Measure/Analyze. As BBs, we are considered specialists; for example, a surgeon is also considered a specialist, agree?
    The next question is: have you or any of your relatives gone through major surgery? Recently, my uncle had triple bypass surgery. Prior to the operation, I noticed that the surgeon had to evaluate my uncle’s health systems, for example blood pressure, kidney problems, diabetes, and so forth. During the tests, I asked the surgeon, “Why do you need to do the health tests before you start operating on him?” The surgeon replied, “The reason we do that is because we want to make sure all of the systems are stable and OK; that tells us whether he is fit for surgery or not.” He then continued, “If his systems are all stable, we will proceed with the operation with minimal risk, but if a system is not stable and not performing well, we will give him some drugs and make sure he gets some rest so that he will be in the stable region.”
    Back to the BB’s life as a specialist: we always want to make sure that the process is stable before we conduct any improvements. If the process is not stable, we have other alternative means (i.e. formulae) to stabilize the process (Weibull, Johnson transformation, log, natural log, and many more). After we have confirmed the process is stable, we can proceed with the process capability study and make improvements (assuming that our MSA is good!). Hope it helps! Ray – The real deal.

    0
    #123177

    Kiloli
    Participant

    Dear Ray,
    I was impressed by your example. I agree with you on all your points.
    Is there a way/metric for us to compare the performance among processes?
    Let’s look at the simple example below:
    Suppose we are evaluating 3 vendors; all 3 charge the same price for the same kind of service, and we have no control over their processes (meaning process improvement is out of scope). Now all 3 vendors have given us their process output data for the past 30 days (around 900 data points). How do we select the best one of the 3?
    Let’s say:
    Vendor 1: Not stable, process sigma is 3.5
    Vendor 2: Stable, process sigma is 3
    Vendor 3: Not stable, process sigma is 3
    Which one is the best? In my view, we need to select Vendor 1. Is this correct?

    0
    #123178

    Ray The real deal
    Participant

    Dear Kiloli, if it were my company, I would choose Vendor 2.
    Reason: the process is stable with a process sigma of 3. Therefore, if we (the owners of the company) decide to work with the vendor by sharing our knowledge to improve the vendor’s process, the risk will be minimal. Having said that, I would suggest that Vendor 2 needs to provide us with its MSA results. Hope it helps! Ray – the Real Deal

    0
    #123179

    Kiloli
    Participant

    Dear Ray,
    We need to take the 2 points below into account. I’m worried about Vendor 2.
    1. We cannot influence the vendors’ processes. Process improvement is out of scope here.
    2. A process sigma of 3 has a DPMO of 66k, while a process sigma of 3.5 has a DPMO of only 22k. There is a big difference between Vendor 1 and Vendor 2.
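    For reference, those DPMO figures follow from the usual short-term sigma level with the conventional 1.5-sigma shift. A quick sketch of the conversion (my own illustration, using scipy):

    from scipy.stats import norm

    def sigma_to_dpmo(sigma_level, shift=1.5):
        # Short-term process sigma level to long-term DPMO, with the conventional 1.5-sigma shift.
        return norm.sf(sigma_level - shift) * 1_000_000

    def dpmo_to_sigma(dpmo, shift=1.5):
        # Inverse conversion: long-term DPMO back to a short-term sigma level.
        return norm.isf(dpmo / 1_000_000) + shift

    print(round(sigma_to_dpmo(3.0)))       # ~66,807 (Vendors 2 and 3)
    print(round(sigma_to_dpmo(3.5)))       # ~22,750 (Vendor 1)
    print(round(dpmo_to_sigma(66807), 2))  # ~3.0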

    0
    #123180

    Ray The real deal
    Participant

    Dear Kiloli, I understand the big difference. If process improvement is out of scope, I would ask myself: who is the customer now? If we cannot influence the vendors’ processes, then I would work not only with Vendor 2 but with all the vendors. Kiloli, you need to look at the long-term relationship between vendors and customers. What do we want as customers, and also as vendors to our own customers? Do we want a 1-year or a 5-year deal? Think about it. For example, when Hyundai introduced the 10-year warranty in the States, do you think Hyundai did it just for sales, or did they want to stay with their customers for another 10 years? And how do you think Hyundai got their suppliers to commit to a 10-year warranty? It’s all about vision, mission, and commitment, and knowing how long we want to be in the business. Hope it helps. Ray – The Real Deal

    0
    #123181

    HF Chris
    Participant

    But you can influence the vendors’ processes. I just attended a Six Sigma foundations course, taught by my company, side by side with one of our material vendors. It was a win-win situation because we also share business on the government side (which also uses a variation of Six Sigma).
    Chris

    0
    #123184

    Morning Li
    Participant

    Hi,
    Quality performance evaluation relates to many aspects. Cpk is a main factor; a stable process is too. You have to know your own situation before you select a vendor. Which one, Cpk or a stable process, is the first priority, given that the incoming material will be used in your process?
    At the same time, vendor attitude, on-time delivery, faulty-unit analysis, etc. should be considered.
    Evaluating the vendor with a matrix of quality indicators can show the factual vendor performance.

    0
    #123185

    Ray The real deal
    Participant

    Morning Li,I couldn’t agree more. Ray – The real deal

    0
    #123195

    “Ken”
    Participant

    “Stability is a figment of someone’s imagination.” How does one achieve a predictable level of process control without an understanding of process stability? Do you mean to say all the work of Shewhart, Deming, Juran, Feigenbaum, Wheeler, and others over the past 70 years is for naught? Please tell us more of these great ideas of yours.
    Oh, you’re doing Lean and throughput work at a mining co… Do you mean to tell me that your customers don’t want to be able to predict the future performance of the cost reduction work you’ve done to date? Or perhaps we leave it up to the gods…
    Bewildered,
    Ken

    0
    #123198

    Tierradentro
    Participant

    Hi Mike,
    Just out of curiosity, what is getting mined ? (the material)
    thanks
    John 

    0
    #123201

    BBMole
    Participant

    Cost is always a concern. Even in Mike’s mining example of worldwide shortage, there is a diminishing return. You could use the stability index of the process as a measure. I disagree that the end customer does not care about how you reach control. If it costs them more money, then they care. If the lead time is longer then they care. Anything that affects the customer CTQ, be sure they care about it.
    In my last post I made the mistake of assuming that we all routinely take costs into account when improving any process. It appears that not everyone does!
    I have attached a definition of stability (one of many) FYI
    Stability: Stability represents variation due to elapsed time. It is the difference between an individual’s measurements taken of the same parts after an extended period of time using the same techniques. Also, the PROCESS STABILITY INDEX is often used in SPC, where many charts are ranked by their %OOC (instability) from the application of control limits and alarm rules. Stability is not the same as capability: stability is based on statistical control limits, while capability is based on customer specification limits; they are often shown as %OOC and Cpk. Of course a stable process has a LOW %OOC, near zero, but never zero long term if the limits are set correctly, due to the false-alarm rate with good limits and rules. Stationary (lack of drift) is the opposite of dynamic, and not the same as stability. Engineers and statisticians argue about these terms.
    Posted by / Modified by: Mike Clayton. Last modified: Aug. 20, 2003.
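    As a rough sketch of the %OOC versus Cpk distinction in that definition (made-up data and spec limits; the %OOC part assumes an individuals chart with moving-range limits):

    import numpy as np

    def pct_out_of_control(x):
        # %OOC on an individuals chart: share of points beyond 3-sigma limits,
        # with sigma estimated as (average moving range) / 1.128.
        x = np.asarray(x, dtype=float)
        sigma_hat = np.abs(np.diff(x)).mean() / 1.128
        center = x.mean()
        out = (x > center + 3 * sigma_hat) | (x < center - 3 * sigma_hat)
        return 100.0 * out.mean()

    def cpk(x, lsl, usl):
        # Capability against customer specification limits (contrast with %OOC above).
        x = np.asarray(x, dtype=float)
        s = x.std(ddof=1)
        return min(usl - x.mean(), x.mean() - lsl) / (3 * s)

    rng = np.random.default_rng(0)
    x = rng.normal(50, 2, 500)
    print(f"%OOC = {pct_out_of_control(x):.1f}%, Cpk = {cpk(x, lsl=40, usl=60):.2f}")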

    0
    #123248

    Mike Carnell
    Participant

    John,
    Platinum is the primary metal. We get PGM and some base metals.
    Regards

    0
    #123250

    Mike Carnell
    Participant

    Kiloli,
    When you choose to compare two different processes, you compare distributions, and the analysis is based on the shape of the distribution. An “unstable” process is typically going to look as if it has more variation, or is non-normal, or both.
    In general, capability doesn’t move your project forward. It is not a problem-solving tool. If you present short-term data and classify it as such, then people generally understand that over the long term it will be different. In an “unstable” process you may see the effect of the instability in a short-term study, but you will definitely see it in the long-term study.
    Here is why stability is just a figment of someone’s imagination. I can consistently bring ore up from underground for 2 years, maintain a fairly consistent head grade, and produce a concentrate that is predictable going into my smelter. I can convince myself that this is the be-all and end-all of mining and assure my customers that this is it: here is two years’ worth of data, and look, it passes all these great stability tests. One night a drill blows a hose and dumps diesel fuel all over the ore. My concentrator goes on its butt until the diesel-fuel-soaked ore is gone. What did my two-year chart buy? It bought nothing. It didn’t stop any failures from happening. My 2-year chart was just a way of looking at data and convincing myself and my customers they were more secure than they really were.
    Some will say, OK, it is an assignable cause. First, who cares what we call the cause? It happened and performance was affected. Second, we have seen the process run and it could have been considered stable to some extent. Now we have the catastrophic failure and the process explodes. When we come back up, the “stability” means what? It means nothing. It doesn’t stop you from losing it a little or a lot. It may reduce the probability of a failure, but ultimately it is a false security blanket.
    The real issue with stability comes from those who seem to need it before they are willing to improve a process. We have been down this road repeatedly and watched hundreds of wanna-bes tell you they would love to help you out with a little process improvement but they haven’t reached stability yet. You are being asked to improve it, so stability is irrelevant. You study the process and the process variables and fix them as you go.
    Just my opinion.
    Good luck

    0
    #123253

    “Ken”
    Participant

    Mike,
    OK, now I see your perspective. I certainly agree with you that improvement doesn’t start after achieving stability. In fact, instability patterns on a control chart can provide clues for achieving process improvement. Oftentimes, random instability is due to errors in process setup and operation. Perhaps we’re reading too much into this stability thing. The general premise of process control is, in fact, fairly simple: control of the critical inputs of a process yields control of the key outputs of the process. This approach allows us to manage process variation and, with a little discipline, maintain it at the lowest level possible for the present system design.
    Equipment failure that produces drastic changes in the pattern of natural process variation is not evidence for damning the concept of stability. This observation doesn’t make logical sense. In fact, it’s evidence that the team missed something in the setup and/or maintenance aspect of the process, or possibly in understanding the reliability of the equipment, i.e. the concentrator. I suspect that if the equipment were better understood and/or maintained, the performance indicated by a stable control chart would suit its purpose. This is a great lead-in to TPM… Remember, it’s not unexplained events we are trying to predict, but the future performance of the process given its observed variation. If there are business requirements connected to the process, then you could reliably predict the expected capability of a stable process. The converse is also true: if the process is not stable, then you’re not able to reliably predict its future performance.
    None of what I’ve said above is new. Most of it has stood the test of time for close to 70 years. I’ve used the foundational concepts of process control to successfully improve both transactional and manufacturing processes across the spectrum. Maybe I missed something in your explanation. I’m open to understanding a new perspective on process control.
    As a good reference to this discussion consider looking at “Statistical Thinking, Improving Business Performance” by Roger Hoerl and Ronald Snee, Duxbury, 2002.  Both guys are quite prominent in the Six Sigma improvement game, and perhaps you may already know Roger from past work with WCBF.  As always the folks on this forum can review the posted concepts through Dr. D.J. Wheeler’s texts, “Making Sense of Data”, and “Understanding Statistical Process Control”, at his site http://www.spcpress.com.  (The former text is very good for folks at any level of training and experience)
    Great story about the ore mining ops!
    Ken

    0
    #123260

    Tierradentro
    Participant

    Hi Mike,
    Thanks for the info,
    I was ‘hoping’ that you might have said ‘fine quartz powders’; then I could have picked your brains a little more!! (We are customers of this mined material.)
    thanks anyhoo..
    John

    0
    #123295

    Mike Carnell
    Participant

    Ken,
    My position on this always gets that big response. Here is the reason this doesn’t mean that much to me.
    For years, if you were at a conference, a particular organization’s monthly meeting, a particular automotive supplier’s SQE review, or a Motorola review of some sort, and you put up some data, it would almost always draw one of two responses: 1. “Your data isn’t normal” (the first problem being that they had not tested it, so they didn’t know either, and the second that they were generally completely debilitated by non-normal data); 2. “The process isn’t stable so you can’t improve it” (here is the Catch-22: Deming and Juran have been credited with that comment, so if you disagree you disagree with them, and if you don’t, then you go back to the snail-paced incremental improvement mentality). Basically it was comments by spectators with no skin in the game trying to look profound on a couple of one-liners they stole.
    The real issue was that it bled into your organization. You assign a project to a person. Check in with them and it was “I can’t do anything yet, the process isn’t stable.” A month later, “Still not stable, but we are working on it.” A year later, “It’s almost stable now.” 18 months later, “It was stable for a while, but we had a bad data point last week so we can’t do anything yet.”
    They also get into the common cause/assignable cause game: “Sorry, that one is a common cause, we can’t do anything about that one.” There is more esoteric crap that slows or stops progress than helps.
    When we start, the process is what it is and that is where you begin. If it is unstable, like you said, you deal with it. If you want to sit around and get stable first, go work on someone else’s Six Sigma program.
    When we finish a project there does need to be proof that there has been a shift. I don’t care if they are doing it with a control chart, a regression analysis, or a hypothesis test. I do want to see that we have had a significant shift, which always comes down to sample size for a certain confidence. Beyond that I don’t worry too much. If you create this finish line called stability, everybody goes to sleep. Once a project is finished it may drop off the MBB/BB Pareto, but it isn’t off the GB’s Pareto, and they had better not go to sleep.
    The concept of stability for me has created a starting point and a stopping point and that is a problem if you are in the continuous improvement business.
    We do need a process that is predictable, as any other business does. It enables JIT and it helps with customers. No problem with either one of those issues. TPM and a real supply chain management program are invaluable. The only long-term solution to most of this is DFSS. Unfortunately it is now being sold as something new, which it isn’t.
    Just my opinion.
    Regards

    0
    #123299

    “Ken”
    Participant

    Mike, wow, it sounds like you had some experiences on the dark side of SPC. I suspect those experiences may limit the value of my response to you, but I am a consummate optimist when it comes to human nature.
    You know, over the years I’ve attended two of Deming’s seminars, prior to his passing in ’93, and one seminar of Juran’s, who will probably live forever. I think I’ve read everything Deming has written on the subject of quality, SPC, and management. I admit I’ve only used Juran’s Handbook for 15 years, and have only read his and Gryna’s book on quality at least twice (two different editions), usually when there was nothing else to read! Nowhere in any of these texts do I remember finding the statement, “if the process isn’t stable you can’t improve it.” I’ve even reviewed Shewhart’s work from time to time, who was a predecessor to Deming, and nope, it’s not there; reference “Economic Control of Quality of Manufactured Product.” I’ve reviewed Donald Wheeler’s work, a contemporary of both Shewhart and Deming, and don’t remember seeing such a statement. So, you got me. Maybe my memory is playing tricks on me, because you sound so certain about this statement. So, where shall I look for this statement, the one you use to support such strong claims? You’ve piqued my interest, and I probably won’t sleep until I locate it.
    By the way, back in the mid-’70s I once interpreted Deming as, “if the process is stable it’s no longer my problem; it’s the problem of the system, which is management’s problem.” I used this understanding as a basis for pushing the problem away from me, even though I was a representative of management through engineering. Well, in fact, on page 7 of Deming’s “Out of the Crisis” he stated, “any substantial improvement [of a stable system] must come from action on the system, the responsibility of management. Wishing and pleading and begging the workers to do better was totally futile.” [I so miss Deming’s roughneck approach to management.] I can see how having a stable system might cause one to freeze in their footsteps [as I did in the ’70s], but unstable systems were always actionable per these guys from as far back as I can remember. If not, how did Deming and Juran ever do anything in Japan? If neither stable nor unstable systems are actionable for improvement, what’s left?
    In fact, even stable systems are actionable. You just need the right tools that will help you to understand the primary variation generators. Six Sigma methodology provides such tools and methods. So, by the ’80s I obtained a bit of profound knowledge [as Deming would put it] which allowed me to address both stable and unstable variation. One primary tool for combating unstable variation is mistake proofing, aka Poka-Yoke. It works! My hat is off to Shigeo Shingo [a Toyota guy] for giving this one to us.
    When a process run record shows instability, I do a very strange thing that makes others scratch their heads: I jump up and down with glee. Why? Because I have signals that I can use to identify the causes of the instabilities. When a system is stable but NOT capable, I scratch my head while others jump up and down with glee. I ask the grasshoppers, “why are you jumping up and down for a system that consistently produces 20% defective product?” They say to me, “because it’s stable and predictable.” My response is, “what the f… are you talking about.” Mike, your experience and mine concerning systems and the course of our actions using data are very different. Of this I have no doubt.
    But I contend that if the simple process analysis tools are used correctly, by those who understand them, very wonderful results are achieved. I suggest to you that providing advice on the use of process data which omits a review of stability, and the correct actions therein, is inconsistent with almost 70 years of experience and understanding. Many look to you to provide them with sound guidance on this forum, and possibly outside of it. It might be a good idea to move beyond your past experiences to see where the flaws existed, and where the lapses in theory existed. Doing this will provide you with an ability to support your clients and others with excellence. I’ve successfully used a variety of SQC tools for over 20 years. I am always willing to help others where I can to remove the misconceptions about their application. No bragging, but a sincere willingness to help.
    Just my opinion.
    Ken

    0
    #123302

    “Ken”
    Participant

    Mike,
    Sorry, I missed a key comment in my previous post about a long-standing misconception in SPC evaluation, one that has been out there since the beginning: that the data need to be normally distributed for the SPC analysis to be valid. Nope! Neither Shewhart nor Deming supported this claim. Control charts work regardless of the underlying distribution of the data. No transformation is necessary. The Type I error increases on worst-case distributions, such as the gamma or exponential, but stays below roughly 5%.
    References, “Understanding SPC”, 2nd Ed., D.J. Wheeler and D.S. Chambers, 1992, pages 68-76.  “Normality and the Process Behavior Chart”, D.J. Wheeler, 2000, all SPCPress.
    A most notable reference for shaking out the normality requirement is an original “Statistical Quality Control Handbook”, Western Electric Company, 1956; there is lots of discussion of the normal distribution throughout this reference, but no specific link between normal distributions and the process control chart aside from that of the chart of averages. For those who have this first reference on SPC, consider reading section F-15, Natural Patterns, starting on page 170. For those still unimpressed, consider this: range and standard deviation charts have been used for years along with average charts. The rules for computing the control limits for these charts assume that ranges and SDs distribute normally, but neither do, and yet the limits are calculated and used today much as they have been in the past.
    Another problem, though not a misconception, is using the Western Electric trend rules on individuals charts. It is not always a good idea, because individuals data are sometimes non-normal, and the trend rules assume the data follow a normal distribution. This is a big problem for those using trend rules in the Pharma and BioPharma industries, because assay data tend to distribute log-normally.
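    For anyone who wants to check that numerically, here is a small simulation sketch (my own, not from the references above; it uses an individuals chart with moving-range limits, and exact rates will vary with the chart type and rules applied):

    import numpy as np

    rng = np.random.default_rng(42)

    def i_chart_false_alarm_rate(sampler, n_points=100, n_trials=2000):
        # Empirical Type I error of an individuals chart: fraction of in-control
        # points falling outside limits at mean +/- 3 * (average moving range / 1.128).
        alarms, total = 0, 0
        for _ in range(n_trials):
            x = sampler(n_points)
            sigma_hat = np.abs(np.diff(x)).mean() / 1.128
            center = x.mean()
            alarms += np.sum((x > center + 3 * sigma_hat) | (x < center - 3 * sigma_hat))
            total += n_points
        return alarms / total

    print("normal:     ", i_chart_false_alarm_rate(lambda n: rng.normal(0.0, 1.0, n)))
    print("exponential:", i_chart_false_alarm_rate(lambda n: rng.exponential(1.0, n)))
    # The normal case lands near the textbook 0.27%; the exponential (worst-ish case)
    # rate is higher, but still only a few percent, which is the point made above.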
    Not my opinion.
    Ken

    0

The forum ‘General’ is closed to new topics and replies.