iSixSigma

What is a stable process?


  • #27386

    Kim Niles
    Participant

    I think it’s funny that we keep on beating around the bush on issues that have an underlying theme of undefined process stability. I recently posted the following under “Any one could tell me when I should use CPK or PPK? THKS!”:
    +~+~+
    Montgomery states in his book (Montgomery, Douglas C., “Introduction to Statistical Quality Control”, Wiley & Sons, Inc., New York, 2001, 4th ed., pg. 372) that in 1991 the Automotive Industry Action Group (AIAG) was formed with one of their objectives being to standardize industry reporting requirements. He says that they recommend Cpk when the process is in control and Ppk when it isn’t. Montgomery goes on to get really personal and emotional about this, which is unique to this page compared with the rest of this book and the other books of his that I have. He thinks Ppk is baloney, as he states “Ppk is actually more than a step backwards. They are a waste of engineering and management effort – they tell you nothing”.

    While Montgomery gets frustrated over the use of Ppk, he does a poor job of explaining what a stable process is. I respect his work just the same, as he alone has at least attempted to explain the difference between a stable process and a non-stable one.
    +~+~+

    This argument, as well as numerous others that you can find on this site regarding the use of Cpk / Ppk metrics, the validity of Six Sigma shifts, process capability, SPC, etc., all reflect our lack of a definition of what process control is.

    So, how can we define process stability and/or process control? Perhaps we can agree on some given amount of process shifting (1.5 sigma)? Perhaps we can agree that a stable process is one whose Cpk values are above 1.67? Perhaps some combination of these or other events needs to take place, such as three consecutive Cpk samples over 1.67, etc.

    Until we can define what a stable process is, we are doomed to argue forever over the use of any statistical metric.

    For the love of all science, please help!!

    Sincerely,
    KN – http://www.znet.com/~sdsampe/kimn.htm

    0
    #66975

    “Ken”
    Participant

    Kim,

    It’s nice to hear from you under your own name. I hope you came through your last course with an “A”. After the last discussion about opportunities, I hope we can survive one dealing with process stability. Well, here goes my two cents’ input…

    First, I believe in your request you mentioned using the capability index as a measure of stability. Unfortunately, the use of “process” capability, Cp/Cpk, as a stability indicator is not done in common practice. These indices are used to compare the variation of the process against the requirements of the process. To make this comparison one assumes the process operates on a consistent basis. A consistent process is one in which the variation over time is stable. So, my initial answer begs the first question, “what is meant by process stability?” Here I will use my words, while others are probably more concise and elegant. In simple terms a “stable” process is considered one in which only “chance” variation exists. In other words, if the process exhibits any systematic variation beyond a certain degree, then it is called unstable. The degree of any possible systematic variation is usually limited to +/- 3 sigma (or standard errors) from the center.
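    As a rough illustration of that +/- 3 sigma notion, here is a minimal Python sketch (the data and the function name are made up for illustration; sigma is estimated from the moving range, the way an individuals chart would):

        import numpy as np

        # Sketch only: flag points showing variation beyond +/- 3 sigma of the center.
        # Sigma is estimated from the average moving range (d2 = 1.128 for n = 2).
        def stability_check(x):
            x = np.asarray(x, dtype=float)
            center = x.mean()
            sigma = np.abs(np.diff(x)).mean() / 1.128
            ucl, lcl = center + 3 * sigma, center - 3 * sigma
            out_of_limits = np.where((x > ucl) | (x < lcl))[0]
            return center, lcl, ucl, out_of_limits

        data = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 12.5, 10.1]   # hypothetical measurements
        center, lcl, ucl, signals = stability_check(data)
        print(f"center={center:.2f}, limits=({lcl:.2f}, {ucl:.2f}), out-of-limits points={signals}")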

    Please be aware there is great controversy in this definition, which I’m sure you will hear shortly. However, the definition I give is based, to the best of my recollection, on references from Shewhart. Even Shewhart’s close working partner Deming disagreed with the basic concept of stable processes when he was alive. Deming was known to suggest that there are no such things as stable processes. From a theoretical perspective I tend to agree with Deming. However, from a practical basis I believe we could define a stable process from an economic perspective. This is exactly what Shewhart was trying to get at in his 1931 publication, “Economic Control of Quality of Manufactured Product”.

    If you happen to have access to ASQ’s Journal of Quality Technology, then consider finding the October 2000 issue, Volume 32, No. 4. Pages 341-350 review the Controversies and Contradictions in Statistical Process Control as presented by William H. Woodall. From pages 351-377 prominent authors provide their comments and insights on Woodall’s article. It’s a great read during the first part of the day, but don’t attempt it late at night.

    Well, I hope this info helps. Tell me, are you preparing for another paper? If you are please feel free to sen

    0
    #66980

    Grant Blair
    Participant

    What a great question…you’ve either opened an interesting discussion };->, or a real can of worms };-<.
    First of all, as a Deming disciple (and one who saved $M applying his theories), you must understand there is no such thing as a truly stable process. Left unattended, and given enough time, all processes will deteriorate. The principle is called entropy, and it even affects humans (we grow old and die!!).
    However, from a practical standpoint, process stability is something which should be evaluated at start-up along with process capability. One of the hardest things to get people to do with a new process is to LEAVE IT ALONE when you are first determining process capability. Besides learning the true variation, you are also determining the inherent stability of your process…that is, how long you can run without requiring either adjustment or intervention.
    Many textbooks state that capability cannot be determined on an unstable process because the distribution is non-normal. This is incorrect. Theoretically, it can be determined with TOTAL sigma using Chebychev’s inequality. I like to draw this case as a large normal distribution with a smaller normal distribution shown as a “bump” in the tail. Applying the inequality, you will find that ~ 98% of individual points will fall between +/-9 sigma (Ppk>3.0) REGARDLESS of the shape of the distribution. It is no accident that this is the number accepted by AIAG in the revised QS-9000 standard.
    Now, for a stable process (single-peaked distribution) there is a modification of Chebychev called Camp-Meidel which states that ~98% of individual points will fall between +/-6 sigma (Cpk>2.0). Now, I draw this case as a normal distribution with a 1.5 sigma shift.
    We can put this in plain language as follows:
    If your process is one in which a bull occasionally runs through the china shop, you must have a Cpk(Ppk) of 3.0 or better to control it. However, if there are no bulls, a Cpk of 2.0 (or Six Sigma) will suffice.
    Note these principles apply REGARDLESS of the shape of the distribution. Obviously, the probabilities improve as the distribution approaches normality.
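    For anyone who wants to check those figures, a short Python sketch of the two bounds (formulas only; the Cpk/Ppk mapping and the AIAG/QS-9000 remarks above are not reproduced here):

        # Chebychev: for ANY distribution, at least 1 - 1/k^2 of values lie within +/- k sigma.
        # Camp-Meidel: for unimodal distributions, at least 1 - 1/(2.25 k^2) lie within +/- k sigma.
        def chebychev(k):
            return 1 - 1 / k**2

        def camp_meidel(k):
            return 1 - 1 / (2.25 * k**2)

        for k in (3, 6, 9):
            print(f"+/- {k} sigma: Chebychev >= {chebychev(k):.1%}, Camp-Meidel >= {camp_meidel(k):.1%}")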

    0
    #66996

    Kim Niles
    Participant

    Dear Ken: You were right, I found the excellent article you suggested at http://www.asq.org/pub/jqt/past/vol32_issue4/qtec-341.pdf and list a few highlights as follows:
    1- One purpose of SPC is to distinguish between common cause and assignable cause variation in order to prevent overreaction and underreaction to the process.
    2- The distinction between the two types of variation is context dependent such that they may switch places from time to time.
    3- The distinction can also change with sampling as one only wants to react if it is practical and economic to do so.
    4- A process is “in statistical control” if the probability distribution is constant over time.
    5- Deming (1986) advocates more than meeting specs…reduce variation; Taguchi (1981) advocates variation reduction until it isn’t economically advantageous.
    7- It is very important to distinguish control chart phases; phase 1 ~exploratory data analysis, phase 2 = in control.
    8- To view control charting as equivalent to hypothesis testing is an oversimplification
    9- Control limits of the X and R charts assume normality, yet non-normality appears to have little effect (Burr 1967).
    10- probability of SPC signals varies depending upon distribution shape, the degree of autocorrelation in the data, and number of samples.
    11- Wheeler (95) states autocorrelation coef >0.6 is significant, otherwise not.
    12- Deming…the shift in the mean of a normal distribution may also be shifting normally such that no process measures can be perfect.
    13 – Discussion on Bhote pre-control: process in control shows that it is good but it might not be capable based on specs that aren’t shown.
    14- ASQ references Bhote for the CQE yet Bhote refers to control charting as “a waste of time” and DOE as “of low statistical validity”. [good points but I support ASQ for this one]
    15- New process adjustment strategies include regression-based, multivariate, variance components, variable sampling, change-point techniques, etc.
    16 – The 7 or more consecutive points control chart method is ineffective and should be discontinued.
    17 – The scope of SPC should be broadened to include more understanding
    18- One communication problem is that researchers put narrow contributions into the context of an overall SPC strategy.

    KN http://www.znet.com/~sdsampe/kimn.htm

    0
    #67004

    “Ken”
    Participant

    Grant,

    You present some interesting propositions in your discussion. I am curious about your use of both Chebychev’s (always have difficulty spelling this!) inequality and the Camp-Meidel property in relation to a capability study. I’m not certain I clearly understand the tie to the original question, “What is a stable process?” Again, it seems we invariably get into a discussion of capability even when the question centers around “statistical stability.”

    I believe the use of capability indices explicitly assumes the variates are distributed normally. Perhaps I’m incorrect in this assessment, but many references seem to indicate the same. Without this assumption we are left wondering what the Cp and Cpk estimates tell us when using some other method for determining capability. Not to mention the problem that, even if we assume normality, the probabilities computed with high capability use the tails of the distribution, where considerable estimation error occurs.
    Could I suggest that if you would like to use Chebychev’s inequality, or the Camp-Meidel property, then your estimates should report the Pp or Ppk of the process. These indices do not explicitly require an assumption of normality. What do you think of this proposal?

    Ken

    0
    #67006

    Jim Parnella
    Participant

    Hi Kim,
    Good to hear from you again. I’m confused over these two points:
    14- ASQ references Bhote for the CQE yet Bhote refers to control charting as “a waste of time” and DOE as “of low statistical validity”.

    A “waste of time”? “Low statistical validity”? Wow! I guess I better read that article to find out what in the heck they are talking about.

    16 – The 7 or more consecutive points control chart
    method is ineffective and should be discontinued.

    Wow again! I guess I can’t think of anything more effective to show that a shift in process average has occurred (well maybe eight or more in a row).

    Thanks for bringing up the article – it looks really interesting (unusual?).
    Jim
    P.S. Had any power brownouts lately? Hope not.

    0
    #67009

    Grant Blair
    Participant

    I rechecked my post and I did state that application of both inequalities required the use of TOTAL sigma, not an estimate derived from ranges or any other type of successive difference.
    I’m not sure why you would think the use of sigma implies normality. This is one of the great strengths of Chebychev’s inequality (yes, it’s Russian and spelled differently by different authors), which states that for ANY distribution, the probability of any point falling within +/- N sigma units will be at least 1 - 1/N².
    In order to clarify my point, let me explain why BOTH my examples are, by definition, non-normal:

    In the first case, I draw the distribution for an unstable process as a large distribution, with a smaller peak in one of the tails. There may be other types of instabilities and resulting distributions, but I can tell you from experience, this is the most frustrating type to deal with…process will run fine for a week or two, then the “bull runs through the china shop”…may go away and you hope you will never see the bull again…but he’s back in another couple of weeks }:-(( Unless you have a highly capable process, (Cpk>3.0) when it is stable, you will be too busy segregating NCP for either scrap or rework to do an effective search for an immediate cause, and will NEVER find the resources to search for root cause. Since this is a double-peaked distribution, Chebychev’s inequality applies and at least 98% of the data will fall between +/- 9 sigma.

    Now, the second case is where 6 sigma theory comes in. By definition, there is NEVER a truly stable process…all processes exhibit entropy…things wear out, then break. This is why Six Sigma allows for a 1.5 sigma shift. I draw this case as a gradual shift between two overlapping normal distributions, which results in a broad, single-peaked distribution… still not strictly normal…but also notice that two overlapping distributions of ANY shape will still create a single-peaked distribution. The Camp-Meidel modification of the inequality states that for a unimodal distribution the probability becomes at least 1 - 1/(2.25N²), which gives you the identical 98.8% probability for 6 sigma units, or a Cpk = 2.0.

    Additionally, there are two further properties which work in your favor:
    1. Probabilities increase as distributions approach normality.
    2. Wheeler (and others) have proven empirically that “approximately 99%” of ANY distribution will fall between +/- 3 standard deviations. Obv

    0
    #67010

    Gary Cone
    Participant

    Thanks to Kim, Ken, and Jim for the good posts.

    As far as Keki and now Adi, god bless their worship of Dorian. Dorian was a very good guy for those who knew him. I have never seen anybody better at walking into a process cold and seeing things to be done. All of his stuff is statistically valid too. Email if you want to know why. Their tools were powerful and appropriate (probably still so in a process team environment), but with the computer and software tools available today, I cannot imagine why we would limit our tool set to the Shainin tools. Precontrol using capability data (set green equal to +/- 1.5 sigma and yellow +/- 3 sigma when Cp >> 1) is not too bad if you cannot depend on training and appropriate reaction from management to normal SPC.
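    A minimal sketch of the capability-based pre-control zones just described (target and sigma are hypothetical; as noted above, this only makes sense when Cp >> 1):

        # Sketch only: classify a reading into pre-control zones built from capability data.
        def precontrol_zone(x, target, sigma):
            dev = abs(x - target)
            if dev <= 1.5 * sigma:
                return "green"    # run
            if dev <= 3.0 * sigma:
                return "yellow"   # caution, check the next piece
            return "red"          # stop and adjust

        for reading in (10.02, 10.20, 10.45):    # made-up readings
            print(reading, precontrol_zone(reading, target=10.0, sigma=0.1))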

    Be soft with Keki; using Shainin tools is far superior to doing nothing, which is what most do.

    Gary

    0
    #67011

    Grant Blair
    Participant

    You referred to:
    >>>16 – The 7 or more consecutive points control chart
    method is ineffective and should be discontinued.

    Wow again! I guess I can’t think of anything more effective to show that a shift in process average has occurred (well maybe eight or more in a row).<<<

    There are some recent JQT articles which show that the “old” trend rule of 7 points steadily increasing or decreasing is ineffective and should be replaced by the two Western Electric rules:
    1. 4 out of 5 points outside 1 sigma
    2. 2 out of 3 points outside 2 sigma
    This is what the article is talking about.
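    A small Python sketch of those two rules, for anyone who wants to try them on their own data (the standardized values and windowing logic here are illustrative only):

        import numpy as np

        # Sketch only: flag the two Western Electric zone-rule signals on standardized data.
        def we_rule_signals(x, center, sigma):
            z = (np.asarray(x, dtype=float) - center) / sigma
            signals = []
            for i in range(len(z)):
                last5, last3 = z[max(0, i - 4): i + 1], z[max(0, i - 2): i + 1]
                # 4 out of 5 successive points beyond 1 sigma on the same side
                if len(last5) == 5 and (np.sum(last5 > 1) >= 4 or np.sum(last5 < -1) >= 4):
                    signals.append((i, "4 of 5 beyond 1 sigma"))
                # 2 out of 3 successive points beyond 2 sigma on the same side
                if len(last3) == 3 and (np.sum(last3 > 2) >= 2 or np.sum(last3 < -2) >= 2):
                    signals.append((i, "2 of 3 beyond 2 sigma"))
            return signals

        print(we_rule_signals([0.2, 1.3, 1.1, -0.4, 1.5, 1.2, 2.3, 2.4], center=0.0, sigma=1.0))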

    0
    #67014

    “Ken”
    Participant

    Jim,

    It’s difficult to get a real understanding of Woodall’s journal article in JQT, and the others’ responses, by reading Kim’s synopsis. In fact, there were two key issues all authors agreed upon: 1) issues with making decisions on charts having auto-correlated data (we’ve spoken personally on this), and 2) moving people away from the monotonically increasing or decreasing trend rules of 8-9 values on the control chart. There have been a considerable number of articles on this last topic over the past 2-3 years, and the general consensus is that 8-9 may not yield the desired significance for signalling a process change, especially if there is auto-correlation in the data greater than about 0.6. I instruct the folks I work with to observe at least 11 or 12 monotonically increasing or decreasing values on the control chart before qualifying a process change.

    Concerning Keki’s support of Dorian’s work… All authors made the same claim as Gary Cone. Anything is better than nothing, but today we have better high performance computing tools to work with. They all suggested working with methods that have stronger statistical foundation such as variables search methods trying to uncover the Red-X variable as suggested by Dorian via Keki. Obviously, the Multi-Vari charting approach is still a very useful tool for identifying the input with the greatest level of variation. However, today a simple nested ANOVA coupled with a quick analysis using Minitab or Stat-Graphics will characterize nicely the components of variation, and their percent contribution to the total process variation.
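    As a rough sketch of what that looks like in code (a balanced one-way layout rather than a full nested design, with made-up data, so this only illustrates the variance-components idea, not any particular Minitab or Stat-Graphics output):

        import numpy as np

        # Sketch only: estimate within-lot vs lot-to-lot variance components from the
        # ANOVA mean squares of a balanced one-way layout.
        rng = np.random.default_rng(0)
        lots, n = 8, 5
        data = rng.normal(10, 0.5, lots)[:, None] + rng.normal(0, 0.3, (lots, n))  # made-up data

        grand_mean = data.mean()
        ms_between = n * ((data.mean(axis=1) - grand_mean) ** 2).sum() / (lots - 1)
        ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (lots * (n - 1))

        var_within = ms_within                                  # repeatability component
        var_between = max((ms_between - ms_within) / n, 0.0)    # lot-to-lot component
        total = var_within + var_between
        print(f"within-lot : {var_within:.3f} ({100 * var_within / total:.1f}% of total)")
        print(f"lot-to-lot : {var_between:.3f} ({100 * var_between / total:.1f}% of total)")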

    Again, most of this stuff is NOT advanced. Rather, today many would consider it generally mainstream.

    Ken

    0
    #67037

    Kim Niles
    Participant

    Wow, Lots of really super posts…. THANKS!!

    So where are we now? The problem as I now see it with your help is that we are looking at a third order problem with first and second order thinking. There are obviously many different ways to convince many different people of what process stability is depending upon many different types of situations.

    We either need a universally acceptable solution supported by respectable organizations, or some other more obvious third-order solution (via breakthrough) that fits all types of situations. For example, ASQ and ASA could state that they support AIAG’s definition of process stability as something like “within 9 sigma regardless of distribution shape and/or process mean shifting”. A third-order example could be some very descriptive formula that allows anyone who reads it to fully understand the process being described in comparison to any other process. For example: the phase xx process being measured with xx capability is xx% stable over xx time given xx number and xx types of distribution assumptions.

    Any other ideas?

    0
    #67039

    Grant Blair
    Participant

    You said:
    “For example, ASQ and ASA could state that they support AIAG’s definition of process stability as something like ‘within 9 sigma regardless of distribution shape and/or process mean shifting’.”

    It would appear to me you are defining the real conflict as being between AIAG and the Six Sigma community. AIAG would support your definition; Six Sigma would restate it as “within 6 sigma regardless of distribution shape and/or process mean shifting by 1.5 sigma units”.
    Could it be the only one still using second-order thinking is the Six Sigma community? Remember, neither one believes that any process is truly stable.
    The theorem I presented was originally derived while I was working for a plant manager who was a Deming advocate. We had a plant-wide goal of attaining a Cpk of 3.0 on all critical processes. Not saying every process was doing this when I retired (reasons were often interesting! };->), but I’ve run a few 3.0 processes in my time and can tell you from experience that anyone who stops when they get to a Cpk of 2.0 is missing out on a really robust process.

    0
    #67041

    Sinnicks
    Participant

    Are we getting bogged down in terminology and looking for a universal mathematical formula?

    Let’s come at it from the perspective that the statistics we use, whether control charts, indices, etc., are mathematical models used to describe and predict a process’ outputs. There was some discussion on the fit of the model based on the distribution of the variation. Stability is then the fit of the model over time. In some processes the temporal variation will be greater than others based on the internal constituent components, complexity, and control strategy of the process. Then, you have the external process factors like the organization’s approach to continuous improvement, process monitoring, maintenance (equipment, training, measurement systems, …), and …

    That is what the long-term estimates like the 1.5 sigma shift are all about. It is a quick and easy estimate of temporal process shift. Is it right? NO! There are those that have arrived at the number as an “average”. A practical set of questions to ask before applying it could be: 1) Does it provide a forecast with a “reasonable” fit? 2) Is the expense of creating a more accurate forecasting model warranted by the value of having the greater accuracy? 3) How do you validate a model’s forecasting power / accuracy without waiting for the “long-term” variability to occur? 4) Would the model tend to be autocorrelating, since we would hope that one would act upon the forecast obtained in order to allocate improvement efforts to the most worthy processes?

    So, in summary we can attempt to construct a forecasting model that takes into account all the specific factors for the process (which may also change with time), or apply a constant or simple formula like the 1.5 sigma shift.

    In the end, the “goodness of fit over time” (stability) evaluation we make on the process cannot be based solely on the process output for a certain amount of time, like what a control chart records. A mathematical formula based on the data is like drawing a line on a graph (or chart) past the region where data has been collected. You can speculate on whether the line is straight or fuzzy. But it is still speculation.

    So, what do you say if management or a customer asks if a process is stable? Personally, I answer as if they asked the question, “Will this process’s output continue to be essentially the same assuming there are no significant changes to the process?”

    0
    #67042

    Grant Blair
    Participant

    To be perfectly honest, I could give you some of the requirements based on experience, for example, that a properly constructed control chart should routinely show 1-2% special cause points, the importance of control charting your measurement system, etc. Accordingly, my definition of a stable process would first come from capability studies…and I would expect to see no more than 1-2% random special cause over a week or two if the process was left alone.
    But, to be perfectly honest, I also think a period of a week or so may only work for my industry, and am hesitant to say it would be a universal rule.
    One reasonable approach might be to look at this the way a scientist would look at applications using light. Theoretically, light can be treated as either a wave or as particles. In practice, however, the theory depends upon the application you are studying.
    Therefore, it may be completely appropriate to say:
    If the process appears unstable over an appropriate time period based on your experience, use 9 sigma; otherwise use 6 sigma.

    0
    #67043

    Jim Parnella
    Participant

    Grant,
    I must have missed those JQT articles. Personally, I find the 7 in a row on the same side of the center line a dead ringer for a process shift (assuming no autocorrelation, as noted by Ken below). I wouldn’t want to give up this rule under any circumstances. I’ll keep an eye out for those JQT articles as I plan to go through all my quality magazines (I’m being overrun by them) and tear out and keep the good articles and toss the rest.

    0
    #67048

    Grant Blair
    Participant

    It’s a fairly easy matter to visualize the problem with the trend rule (7 points steadily increasing or decreasing): draw 7 points with an obvious trend, then just “jog” one of the points up or down a little bit.
    This is what really happens; there is a real (sometimes quite obvious }:-| ) trend, but common cause variation “jogs” the point enough so you won’t break the rule.
    It is important to REPLACE this rule with the two Western Electric rules, otherwise, you run the risk of too many false signals.
    Also, note that WE rules use 8 points either above or
    below the centerline, not 7. Again, not a big deal, but
    it reduces the risk of false signals.

    0
    #67059

    DRAGOS BANDUR
    Participant

    I think that a process can be considered stable if, given the level of significance, the subgroup points are randomly distributed within the control limits.
    Example: for a level of significance of 5% it is possible that, in the long run, one point out of 20 on average may go out of the limits without us jumping to the conclusion that the process is OOC, unless the randomness hypothesis was violated (previous points show some pattern).
    The capability coefficients are statistics that measure short- or long-term capability. I think that if these two measures of capability are not significantly different (by the way: what tests of significance can be used for capability coefficients?) for a certain period of time, then we can conclude that the process was also stable during that period of time.
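    One way to sketch that short-term vs long-term comparison in code (hypothetical subgrouped data; within-subgroup sigma via the s-bar/c4 convention, overall sigma from all the data pooled together):

        import numpy as np

        # Sketch only: short-term (Cpk, within-subgroup sigma) vs long-term (Ppk, overall sigma).
        def cpk_ppk(subgroups, lsl, usl):
            sub = np.asarray(subgroups, dtype=float)          # shape: (subgroups, n per subgroup)
            xbar = sub.mean()
            sigma_within = sub.std(axis=1, ddof=1).mean() / 0.9400   # c4 constant for n = 5
            sigma_overall = sub.ravel().std(ddof=1)
            cpk = min(usl - xbar, xbar - lsl) / (3 * sigma_within)
            ppk = min(usl - xbar, xbar - lsl) / (3 * sigma_overall)
            return cpk, ppk

        rng = np.random.default_rng(1)
        groups = rng.normal(10.0, 0.1, size=(20, 5))          # made-up, in-control data
        print("Cpk = %.2f, Ppk = %.2f" % cpk_ppk(groups, lsl=9.5, usl=10.5))

    A large gap between the two numbers would be the hint of instability the post describes; with stable data like this they should come out close together.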

    0
    #67062

    Kim Niles
    Participant

    I just thought I’d clean up loose ends as I prepare to summarize all the different thoughts. I’ve re-posted below the other three related posts made elsewhere on this site recently, in an effort to look at it all at once. I’m having a great time with this, and learning a lot, thanks.

    +~+~+
    Stable Process: Posted By: Rajanga Sivakumar Posted On: Wednesday, 13th June 2001

    Any process which performs in a predictable manner over a period of time, i.e., with known variance, can be considered stable. However, a stable process does not necessarily mean that it is the best performing or ideal process. If the process can be improved to the extent that only “natural variability” remains, then it could be considered the ideal process. This is my understanding and it may not be very correct.

    Rjanga Sivakumar
    +~+~+~+~+
    Stable process: what is it? Posted By: SAMIR MISTRY Posted On: Monday, 11th June 2001

    A stable process, in simple terms, is a process for which all the causes of variation are known and are acted upon, and the process is then governed by common causes of variation, where the output of the process is fairly predictable. A management decision is required to further increase the capability of the process.

    +~+~
    Re: Stable process: what is it? Posted By: Ken K. Posted On: Tuesday, 12th June 2001

    I wouldn’t go so far as to say that “all the causes of variations are known”. That is a pretty extreme statement.

    I would tend to say a stable process is one that is comprised of mostly common cause variation, as opposed to special cause variation. As you hinted, that common cause variation will be comprised of a whole bunch of sources of variation; some will be knowable and some won’t.

    The whole idea of process improvement is to understand many of those sources of variation and try to remove/control them.

    0
    #67068

    Grant Blair
    Participant

    Actually, these comments are pretty consistent with the Japanese definition of a stable process. Discussed this topic with a friend (who says “Sayonara Yawl” every time he leaves work ). Turns out their definition of a stable process is a process which REQUIRES NO CONTROL CHARTS.
    Don’t mistake this for lack of control. It just means that if you asked a person running the process to show you their charts, they’d think you were nuts. Everything is still being controlled, but it’s all in the background. If certain limits are exceeded, or if unusual patterns are detected, then the operator is alerted and will investigate. Also, there are certain critical customer parameters which are still subject to continual improvement by operator-mgt teams.
    Here’s an example from the automotive industry everyone should be able to relate to:
    When the automakers first heard about Japanese quality, and investigated, they found that Japanese automakers gave the workers the ability to stop the assembly line whenever they found the slightest defect. The worker was treated like a hero, and the cause was investigated. What everyone overlooked in all this was some simple statistics: If your defect level is ~zero, then EVERY DEFECT is a special cause. No control chart is required.

    0
    #67078

    Rajanga Sivakumar
    Participant

    Any process which performs in a predictable manner over a period of time, i.e., with known variance, can be considered stable. However, a stable process does not necessarily mean that it is the best performing or ideal process. If the process can be improved to the extent that only “natural variability” remains, then it could be considered the ideal process. This is my understanding and it may not be very correct.

    Rjanga Sivakumar

    0
    #67082

    Jayraman Anand Kumar
    Participant

    The process for plotting control charts is as below.

    1. Collect data over a period of time.
    2. Plot this on a run chart.
    3. Collect a minimum of 30 data sets (preferably subgroups).
    4. Look for the special cause trends (7 of them in all)
    5. If none of them exist then plot control limits and then calculate capabilities later on.
    6. Even in the phase of control charts one needs to look for special cause trends.

    These trends have been identified with very strong probability backgrounds, in tandem with the central limit theorem which forms the crux of control chart philosophy.

    As long as your process does not exhibit any of the special cause trends, the process is stable.

    Capability index Cpk is only an index for measuring the process against the customer spec and is not a measure of stability.
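    A minimal sketch of steps 3 to 5 above, using the usual X-bar/R constants for subgroups of five (the data and the choice of subgroup size are for illustration only):

        import numpy as np

        # Sketch only: X-bar and R chart limits from ~30 subgroups of size 5.
        A2, D3, D4 = 0.577, 0.0, 2.114     # standard control chart constants for n = 5

        def xbar_r_limits(subgroups):
            sub = np.asarray(subgroups, dtype=float)
            xbars = sub.mean(axis=1)
            ranges = sub.max(axis=1) - sub.min(axis=1)
            xbarbar, rbar = xbars.mean(), ranges.mean()
            return {"xbar chart": (xbarbar - A2 * rbar, xbarbar, xbarbar + A2 * rbar),
                    "R chart": (D3 * rbar, rbar, D4 * rbar)}

        rng = np.random.default_rng(2)
        print(xbar_r_limits(rng.normal(50, 2, size=(30, 5))))   # made-up data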

    0
    #67084

    Grant Blair
    Participant

    You are correct in saying that a stable process is not necessarily capable.
    However, be careful in saying that a stable process is predictable…even Shewhart and Deming had problems with this one!
    We can also argue that a stable process is UNPREDICTABLE. For example, if you asked me what the next point in the process will be, I will not be able to tell you, even though I may be confident it is likely to be within some range of values.
    I usually illustrate this in class by referring to a thermostat on the wall…If I take regular readings and keep getting the same number, I would suspect either:
    I’m reading the set point, or
    the thermostat is broken
    On the other hand, a process with special cause is PREDICTABLE:
    If I see a sudden, sharp change in the temperature, then I can PREDICT, with confidence, there is a problem with the heating/cooling that I can search for and find.
    Also, If I see the following pattern (Celsius) 20, 20.5, 21, 21.5, 22, 22.5, 23, 23.5, 24, 24.5…..Anyone in the class can PREDICT, with confidence, what the next reading will be.
    The only problem with making an accurate prediction turns out to be common cause variation, the UNPREDICTABLE part of the process. It may make the readings look like the problem has been fixed (say, it reads 24, but then the next reading
    is 25.5)

    0
    #67085

    “Ken”
    Participant

    Grant,

    You raise some interesting concepts concerning the definition of “stable” vs. “unstable” processes. However, I was unable to find references supporting your claims concerning Shewhart’s and Deming’s comments on “stable” and “predictable” processes.

    To the best of my research Deming was the first one to use the words “stable” and “predictable” together with reference to systems or processes. Clearly, in his 1982 book, “Out of the Crisis”, Deming did not appear to have problems with using these words together because on page 7 he states: …”This plot (a run chart) showed stable random variation above and below the average. The level of mistakes, and the variation day to day, were accordingly predictable. What does this mean? It means that here is a stable system for (the) production (of) defective items.”

    I was not able to find prose from Shewhart using the word “stable.” Instead, Shewhart used the words “chance cause” in conjunction with processes showing only random variation within a defined range.

    The nature of Shewhart’s original work was to establish a range of operation under which the economic control of processes could be established. Your claim that “common” or “chance cause” variation is “unpredictable” is an established fact. However, this is not where the definition of “stability” is constructed. Instead, “stability” is defined within the statistical limits which establish the demarcation between “explained” or “predictable” performance and “unexplained” performance for the process. Within these limits, if one can claim the process operates as it has in the past, then it will continue to operate in the same way in the future. This claim is a prediction of the future behavior of the process. If the process performs as predicted, then one can state they have established conditions under which a process may act in a “predictable” fashion. Under such conditions we normally use the term “stable” to describe the process. Again, stability is defined as a process exhibiting only chance or random variability. While that variability in and of itself is presently unpredictable, this does not mean the process is operating in an unpredictable manner.

    Your example supporting a “predictable” system having assignable causes assumes a priori that you have made a change to one or more inputs to the system. This example is better explained using control theory.

    Ken

    0
    #67092

    Grant Blair
    Participant

    Take a look at Figure 32 in Shewhart’s book “Statistical Method from the Viewpoint of Quality Control” and you will see what I’m talking about. Here, Shewhart has rearranged groups of drawings from a normal bowl in ascending order of magnitude. This is the same data as shown in Figure 8, but the conclusion he makes is that you can no longer conclude the data represents a stable process BECAUSE THE SEQUENCE OF POINTS NO LONGER BEHAVE IN A RANDOM MANNER.
    Shewhart repeatedly emphasizes that stability in a process is more than just showing a histogram that looks like a normal distribution…he even shows data sets which are clearly not normal (like the speed of light, Figure 17)…the real key to showing stability is demonstrating a random SEQUENCE of measures which show no assignable causes present.
    The exercise I do with the thermostat can actually be found in one of Deming’s books, but you will have to look in one of his earlier works to find it. Obviously, he didn’t use a thermostat back in the late 30’s, but the approach is similar. You seem to have missed the point of this exercise…I’m not “bumping” the set point to see what will happen. The readings I’m getting tell me something unusual is happening to my system, and I can PREDICT, with confidence, that a thorough search will find the cause.

    0
    #67093

    “Ken”
    Participant

    Grant,

    I reviewed the Figures and references you cited from Shewhart’s book. I read closely the section associated with Figure 32, entitled “The Specification of Accuracy and Precision”, Chapter IV. I believe Shewhart’s intent with Figure 32 is to illustrate the difficulty of judging whether a grouping of measures provides a valid understanding of both the accuracy and precision of the true value against the prescribed requirements. Nowhere in this chapter did I find the words “stable” or “stable process”. In constructing Figure 32 Shewhart ordered the random set of data from Figure 8 so he could describe an unusual run pattern. It’s clear to me the new pattern developed from ordered data “would” contain special causes if it existed in a natural stream. Shewhart used the highly unusual run pattern in Figure 32 to illustrate the nonsense of deriving meaningful information about the accuracy and precision of the process if assignable causes were present. Surely you would agree the chance of actually observing such patterns (shown in Figure 32) from a process exhibiting only chance causes would be astronomically low… I am unclear how Figures 8 and 32 support your claim that processes operating purely under a state of chance cause variation could be considered “unpredictable.”

    I agree with you that simply developing a normal histogram from process data does not define the stability of a process. However, I do not believe I suggested such in past discussions. To determine the “economic” stability of a process I would use tools Shewhart and others prescribe, e.g., process behavior or control charts. I would contend that many processes theoretically exhibit extraordinary variation even when the control chart signals the process is stable. Again, this comment is couched theoretically. In a practical sense, a process satisfying the basic Western Electric run rules would be considered stable. Therefore, from a practical perspective the 3SE limits provide very good guidelines for determining the stability of a process. These guidelines are clearly supported by many authors, not just Shewhart.

    I’m still not sure how your most recent response supports your original claim that processes with “assignable” causes, can be considered “predictable.” Perhaps I missed something here, or perhaps we’re now dealing with fine details that are of little use to most practitioners.

    Ken

    0
    #67110

    Grant Blair
    Participant

    You’re apparently so tangled up in the theory, you’ve missed the practical application of Shewhart’s charts. The discussion I presented earlier is what I use to illustrate the APPLICATION of Control Charts:
    A. A STABLE process will have ONLY common cause variation. (We hope…it’s really what we’re trying to find out in this discussion.) Shewhart said it “should look as though it was drawn from a bowl of normal random numbers.” I’ll make you the same offer I make all my students: “If you can PREDICT the next point (or x points in the series), you win a free trip to Las Vegas…I’ll pay for it, and we’ll gamble with my money.” Frankly, if you can get that much from reading Shewhart, I don’t see why you’re wasting your time doing Six Sigma. (but I have my suspicions };->
    B. UNSTABLE processes will have special cause variation. I can now make the following PREDICTIONS:
    1. It ain’t going away. It may look like it does, but sooner or later it will return, and will keep returning until the cause is found and fixed.
    2. The average grunt on the floor can find it. Usually takes a team, and it’s almost never easy, but they’ve been doing it since Shewhart’s time.
    Now, what really fries my bacon is when the “XXX XXXXX expert” (insert name of latest quality fad…Six Sigma fits fine) is called in and says:
    “Process is UNPREDICTABLE. This means I can’t calculate or use a Cpk, etc. because I don’t know the correct sigma, etc. etc. Once the grunts on the floor have fixed the special causes, give me a call, and you can pay me big bucks to do my XXX XXXXX stuff.” (Now you see my suspicions…probably pays better than Vegas !! };-|
    Now, please don’t take this personally, because I’m not trying to knock how someone makes a living. Just don’t try to say that Shewhart and Deming really supported you, because what I get from their writings supports what I just said and doesn’t appear to me to favor your position.

    0
    #67113

    “Ken”
    Participant

    Grant,

    I really wonder why this material gets a select few so worked up! I never disagreed with your specific claim of prediction efficiency “within” the control limits. Obviously it would be difficult, if not impossible, to predict the next observation from a process operating solely under chance variation. This observation and its associated claim is trivial.

    I believe our mutual confusion is over the conditions under which you claim to make the prediction(s) on the process. You claim, as I understand it, that once an “observation” of a special cause is observed on the control chart you can predict a cause will be found. Again, this is a trivial observation, provided we resign ourselves to the fact that we will NOT find an associated cause for the process change about 0.27% of the time for process data distributed normally. If the process data is NOT distributed normally the Type I error will be higher.
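    That 0.27% figure is just the two-sided normal tail area beyond 3 sigma; a quick check in Python (assuming scipy is available):

        from scipy.stats import norm

        # Sketch only: false-alarm rate of 3-sigma limits for a stable, normally distributed process.
        print(f"P(beyond +/- 3 sigma) = {2 * (1 - norm.cdf(3)):.4%}")   # roughly 0.27%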

    Again, “stability” is the claim that the next point will be found “somewhere within” the control limits if the process continues to operate in a stable manner. It is NOT a claim of determining the exact location within the control limits for the next observation. No one can make this claim! My definition of predictability for a stable process is based upon the work of Shewhart, Deming, Juran, Feigenbaum, and Wheeler, to name a few. This is a well established criterion… I’m surprised to see you take another understanding of this simple concept, trips to Las Vegas notwithstanding.

    May I suggest you give D. Wheeler’s material a look. He does a fine job of translating Shewhart’s concepts into basic understanding. The track to follow from Shewhart is Deming to Wheeler. I usually have to work far harder than I desire to understand Shewhart’s perspective, and I’m trained in the same area as Shewhart. Try Wheeler to clear up the concept confusion… :-)

    Ken

    0
    #67117

    Grant Blair
    Participant

    Opened up Wheeler’s “Advanced Topics in Statistical Process Control” and didn’t even have to get out of the first chapter:
    “We are not trying to find some exact model to describe the process, but we are trying to determine if the process fits (at least approximately) a very broad model of RANDOM BEHAVIOR……”
    “Furthermore, in “Out of the Crisis”, Deming points out that, in practice as opposed to mathematical theory or hypothesis, EXACTLY STABLE PROCESSES DO NOT EXIST…..”
    “Entropy is relentless. Because of it every process will naturally and inevitably migrate toward the STATE OF CHAOS……”

    Still looking for something which supports your position, but I’ve still got another 400 pages to go.
    Could you give me a hint?

    0
    #67118

    “Ken”
    Participant

    Grant,

    I’m not sure how to close this discussion on a clear note, but I’ll try. “Exactly stable”!!! Please feel free to point out the discussion note(s) I submitted using these words…

    You’re right, we will never know the “exact” model supporting the process we want to control. But that is not the goal of using SPC and control charts. Wheeler does a pretty good job of dispelling the need for the process having to behave normally, let alone “exactly” normally, in his reference Avoiding Man-Made Chaos, Part II. In it he identifies 1,143 unique probability distributions, and concludes that over 1,000 of them are adequately supported using three-sigma limits for detecting changes in a process. Notice I’m NOT trying to describe which distributions are supported OR are not supported using three-sigma limits. It’s just not that critical. Again, my concern with your comments centered around your claim that a process exhibiting ONLY common cause variation is “unpredictable,” and secondly, that a process exhibiting special cause variation IS “predictable.” Did I get your message wrong? If so, I apologize. If not, could you direct me to the reference that provides this exact language; then I could adequately review the merits of your statements. So far, none of the references you’ve given provide support. Remember, Shewhart operated on the data to make it appear to have special causes to illustrate a point. Thus, the references from Shewhart do not adequately support your claims.

    I’m glad you have such success with your students. Like you I’ve been using this stuff for well over 20 years. Been teaching it both in the industrial and college settings. It seems we both read the same material, but get a different understanding. Usually, when a situation such as this comes up it’s because we are using the same language to describe different things. I’m willing to accept this explanation, and close down this discussion thread. What do you think? If you would like to correspond via email, then I will try to reply as time permits. I believe we have hashed this discussion enough for the others…

    Ken

    0
    #67120

    Grant Blair
    Participant

    I agree it’s time to shut the discussion down.
    I’ll repeat again, one key reason control charts work on the floor is because Common Cause variation is unpredictable and Special Cause variation is predictable. Whenever we try to treat them the other way, we screw up the process. Obviously, if you didn’t get the reasoning for this statement from the thermostat example, then an infinite discussion of the number of angels currently dancing on Deming and Shewhart’s head won’t help….Commandments from the gods are never any use when you won’t listen to wisdom from the trenches.
    You’re looking at the cards and not even noticing that the Spades and Clubs are red, and the Hearts and Diamonds are black….not much I can do about that.

    0
    #67132

    Kim Niles
    Participant

    Well, I count 30 posts, not including this one, which when printed out is over 16 pages in 10-point text. The sad thing is that we are no closer today to defining what a stable process is than we were a week ago when this “discussion” started.

    From a really global perspective, I think we can all accept a definition that can’t possibly exist, that of “A stable process is one that contains only common cause variation”. Why is it that we can’t allow any reality into our definition of a stable process without a lot of debate? We just aren’t thinking outside the box!!

    My next step is to do as the Quality gurus would do and start placing all the key points on sticky notes, then organize them into categories using affinity diagrams in order to make an effort to think our way out of this.

    “Talk” to you later. Sincerely,
    KN http://www.znet.com/~sdsampe/kimn.htm

    0
    #67133

    Grant Blair
    Participant

    If we’re saying that a stable process can only have common cause variation?
    And we’re also saying that a process which only has common cause variation can’t possibly exist?
    Can we conclude there is no such thing as a stable process?
    This would certainly be consistent with Deming and Wheeler’s viewpoint.
    Same as?????:
    No cat has 9 tails
    Any cat has one more tail than no cat
    q.e.d. Any cat has 10 tails
    I really don’t think we’ve beaten the topic to death yet. There’s still hope that a couple of these squirrels will jump to another branch and find themselves in a new tree.

    0
    #67154

    Grant Blair
    Participant

    Saw a reference to Taguchi methods and thought I might make a run at shaking the branches a little.
    (De Bono says that if you can get a squirrel to jump from enough branches, he will eventually realize there are other trees out there. If not, then the squirrel thinks his tree is the whole universe };-)
    Taguchi theory says there are two types of variables which will define a system:
    1. Parameters in which level affects process variation.
    2. Parameters in which process variation is unaffected by level.
    The idea behind robust design is to set Type 1 parameters at the level which minimizes total process variation. Type 2 parameters are used to control and/or adjust the process.
    Can we assume from this that Taguchi would define a stable process as “robust”? That is, levels are chosen which will maintain the process “on target at minimum variance” as per Wheeler’s definition. Does this imply a truly stable process would self-correct?

    0
    #67187

    Kim Niles
    Participant

    Well, this isn’t the full summary of all our posts I promised but it’s a first step.

    I’ve reviewed all the posts and come to realize that they all fit into two global method categories for defining what a stable process is.

    The first category is using statistics to define a stable process. Most of our posts were related to this method given that it seems to have the most controversy. Sub-categories of this method likely include:
    1- Distribution type and importance
    2- Variation type and importance
    3- Entropy and philosophical potential

    The second category is using economics. We didn’t discuss this much but it makes a lot of sense to me. Shewhart was mentioned as taking this line of argument in defining what a stable process is. I suppose that any process that consistently produces good economic results relative to expectations and/or specifications could be considered “stable”, regardless of how much or what type of variation there is within those expectations.

    Why can’t we just accept this economic model as our definition? Is it too simple? Can we find a hybrid definition that includes statistical measurements as well?

    We are making progress!!
    Thanks.
    KN

    0
    #67188

    Grant Blair
    Participant

    Good summary, and excellent points.
    You may have found a good ally in Shewhart. Here’s what he had to say about the economic role of statistics in Statistical Methods From The Viewpoint of Quality Control. (Chapter 10)
    “In the future the statistician in mass production must do more than simply study, discover, and measure the effects of existing chance cause systems: he must devise means for modifying these cause systems to bring about results that are desirable in the most efficient use of materials. He must not be satisfied simply to measure the demand for goods; he must help to change that demand by showing, among other things, how to close up the tolerance range and to improve the quality of goods. He must not be content simply to measure production costs; he must help to decrease them.”

    You will also find another friend in Taguchi, who demonstrated that minimizing variation will minimize economic cost.

    Interesting that Shewhart mentioned tolerances, wonder if he meant the same thing as we do now?

    0
    #67199

    melvin
    Participant

    My group defines stability as the “absence of special causes”. That is, using the control chart for the process in question as your guide – are there any special causes present? There are several “boundaries” by which to define special causes (1 point in excess of a control limit, seven in a row above or below the mean, etc. etc.). We recommend using only four of these constraints at a time – reacting to more increases your risk of tampering. Once special causes have been identified, explained and removed – the process is stable.

    With regard to capability, my group defines the use of indices as follows:

    ppk is used when (and only when) your process mean is centered in between your process specs.

    cpk can be used whether or not your process mean is centered between your process specs. This is because cpk is equal to the lesser of cp upper and cp lower – each of which is calculated independently of the other.

    I hope this proves helpful – it works for us.

    ASIDE:
    I’m new at this discussion forum, but it hasn’t taken long for me to notice the few “chest beaters” (Allen – a.k.a “Ken”) that seem to float around discrediting everything not written by them. That said, I’m sure they’ll take a stab at the above as well…..this is fine, but do everyone a favor and have some tact (Allen – a.k.a. “Ken”). When you don’t, you’re a discredit to everyone in this profession and should not be taken seriously under any circumstance.

    0
    #67204

    Grant Blair
    Participant

    You won’t have to worry about Ken…Today, he just promised me that he won’t post on any thread I’m involved in
    Two quick questions:
    Noticed you use 7-point rules. Based on earlier posts in this thread, what’s your feeling about changing to Wheeler’s rules (8 in a row, 2 of 3, 3 of 4)?
    Been mulling over something Shewhart said. Where do you get your tolerances for Cpk, Ppk? Do you ever review them to see if they need to be changed? What would be your basis for changing them?

    0
    #67205

    Kim Niles
    Participant

    Dear Bob / anonymous

    Thanks for taking the time to post your sincere thoughts regarding the subject. However, regarding the “ASIDE”, I have to comment that you’ve strayed a bit off the subject, in what would appear to be paranoia, anger, and/or bad communication.

    I’m glad you posted under “anonymous” because you aren’t the only one that has done this, so I can more easily address it in generic terms. Three times during this “discussion”, from three different people (assuming you are different, and from your post I have high confidence in that), on-line and off, I have heard the same type of thing you posted: that of people who are afraid of others that post using different names, of others whose ideas are so crazy that they must be stopped or they will do harm to the truth, and/or of those that are trying to discredit others. Think about it; it’s paranoia, anger, and/or bad communication.

    Ways to combat this fear:
    1- Check the properties of the email address which often shows the true person behind it.
    2- Stay away from this site all together
    3- Learn to accept the worst and move on from that. Accept that some people are “bad guys” and that there is nothing you can do about that but lead by good example, post the truth as you know it, move on, and hope that rational people will see the difference.
    4- Try to be a better listener. Those “bad guys” aren’t bad because they just like to see others suffer. They are “bad” because they are having a hard time communicating their strongly held point of view. By really understanding their point of view, you might change yours…or at least learn something. Worst case, they relax because you really tried to understand them.
    5- Reference and or refer the “bad guy” to the on-line bible of internet etiquette at: http://www.fau.edu/netiquette/net/index.html

    By the way, I have even been accused of posting anonymously in order to slander others which I state loud and clear has never happened. That’s all I can do.

    Sincerely,
    Kim Niles – Quality Engineer
    Delta Design, Inc. (www.deltad.com)
    Phone: 858-848-8000; ext. 1295
    http://www.znet.com/~sdsampe/kimn.htm

    0
    #67207

    melvin
    Participant

    Thanks for the reply….

    Actually I’m not really complaining about the “real” Ken – just this “Allen” character that I think sometimes responds to his own messages (agreeing adamantly, of course) using the name “Ken”. In any case, on to more important things….

    With regard to special causes – we typically stick with a basic set of “tests for special causes”: out-of-bounds test, number-of-runs test, length-of-run test and trend test. We do realize the validity of the other tests and are not opposed to their use – we just do not advocate applying more than four tests to a data set. Minitab, I believe, offers eight or so different tests to check for special causes. Applying all of them at once increases the alpha risk.
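    A rough illustration of why capping the number of tests matters: if each test alone had some per-point false-alarm rate alpha and the tests were independent (a simplification; the real run tests have different rates and are not independent), the combined risk grows quickly. The numbers below are illustrative only:

        # Sketch only: combined false-alarm risk from applying k tests at once.
        alpha = 0.0027                      # nominal risk of a single 3-sigma test
        for k in (1, 4, 8):
            print(f"{k} test(s): combined alpha ~ {1 - (1 - alpha) ** k:.4f}")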

    0
    #67208

    melvin
    Participant

    With regard to our capability measurements, most of our black belts work with processes and key measures that do not have specification limits – in such cases, in lieu of coming up with specs “just for the sake of coming up with them”, we disregard capability indices as a measure for their improvement efforts. In the cases where viable specification limits exist, they are used in capability calculations, monitored as applied to measure the project’s on-going success and periodically assessed to ensure their continued relevance.

    Our company is not very big on capability as a measure – at least not when compared to most companies, it seems.

    0
    #67209

    melvin
    Participant

    Kim Niles:

    I apologize for putting the bee in your bonnet. I should have just kept my feelings to myself as I had not even been a victim (until today) of the verbal abuse. Who was I to complain?

    You know, you’re right – I believe I’ll take the option of staying away from these discussions – I really like the website otherwise. Thank you for allowing me to see this light.

    You seem to have quite a bit of time on your hands, so I’ll leave the question-answering to the likes of you -and the gentleman that responded to my posting without making some kind of joke out of the whole process (Thanks, Grant).

    Good luck in your Six Sigma endeavors,

    Sincerely,

    Good ‘ol “Anonymous” Bob

    0
    #67210

    Mike L.
    Participant

    Grant,

    I have a copy of the reference you cited by Shewhart, Statistical Methods from the Viewpoint of Quality Control. My copy has a total of four chapters. What page is Chapter 10 located on? Did you mean Chapter 1?
    Nice quote otherwise!

    0
    #67211

    PN
    Participant

    Use nine sigma instead of six sigma what?

    0
    #67213

    Bill P
    Participant

    Bob,

    Interesting comments about how your group uses the various indices for computing capability…

    Do you have a basis, Grant – er, I mean Bob – for using these indices in this fashion?

    0
    #67217

    Grant Blair
    Participant

    You’ll have to look at one of my earlier posts in this thread.
    I derived capabilities for non-normal distributions using Chebyshev’s inequality (9 sigma) and a modification of that inequality (6 sigma).
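    For anyone who wants to check the arithmetic behind those two figures, here is a minimal sketch; it assumes the “modification” is the Camp-Meidell inequality for unimodal distributions (my assumption, not stated above):

    def chebyshev_outside(k):
        # Chebyshev: for ANY distribution, P(|X - mu| >= k*sigma) <= 1/k^2
        return 1.0 / k ** 2

    def camp_meidell_outside(k):
        # Camp-Meidell: for a unimodal distribution, the bound tightens to 4/(9*k^2)
        return 4.0 / (9.0 * k ** 2)

    print(f"Outside 9 sigma (Chebyshev):    <= {chebyshev_outside(9):.2%}")
    print(f"Outside 6 sigma (Camp-Meidell): <= {camp_meidell_outside(6):.2%}")
    # Both bounds come out near 1.2%, which is why 9 sigma (any shape) and
    # 6 sigma (unimodal) can serve as roughly equivalent distribution-free limits.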

    0
    #67218

    Grant Blair
    Participant

    Absolutely, it’s the next-to-last paragraph in Chapter I, and when I turn the page it’s Chapter II (says Grant, with a stupid look on his face };-0.
    Obviously, the chapter before Chapter II is Chapter 10, right? Really glad it wasn’t in the next chapter,
    or you would have been looking for Chapter 110 (LOL).
    Sorry about the error.

    0
    #67219

    Mike L.
    Participant

    Sounds in part like a description of what the Six Sigma movement is all about. Perhaps Shewhart was well ahead of his time…

    0
    #67222

    Kim Niles
    Participant

    Admirable comeback. Anger gets the best of all of us from time to time. Makes one want to go squirrel hunting.

    KN

    0
    #67229

    Grant Blair
    Participant

    Agree with your point about using only 4 or so tests.
    I’ve been teaching all of the tests (plus one no one in the U.S. has ever heard of, but that’s another story!), but I’m thinking about changing my approach.
    I’m running into more and more references which say the 7-point trend rule is dangerous.
    BTW, I will also be glad to post some of the risks associated with runs rules, if anyone’s interested.
    Recently had a need to look them up in the Western Electric Handbook.

    0
    #67253

    Tony Foley
    Member

    I recently read a good definition along with case studies that may clarify the situation. This information was taken from: STATISTICAL THINKING: IMPROVING BUSINESS PERFORMANCE by Roger Hoerl & Ronald Snee, 2001, Duxbury Press. I will paraphrase their comments: the distinction between stable and unstable processes is typically made with Run Charts or Control Charts. Stability in general implies a lack of special causes, in which case, to improve the process, a fundamental change or changes have to be made to the process. A system of Common Causes typically characterizes a stable process.

    In contrast, instability implies the presence of special causes. To improve an unstable process, we need to identify and eliminate the root cause(s) of the instability. A system of Special Causes typically characterizes an unstable process.

    0
    #67257

    Grant Blair
    Participant

    Finally finished thinking through Shewhart’s comment about “narrowing the tolerances”, and think we can shake the tree a little more!!!
    One of my biggest problems with one product line was their definition of tolerances. Whenever they started a process, their definition of customer requirements was quite simple:
    1. Determine process sigma from a control chart
    2. Multiply process sigma by 4….this is your Customer Tolerance.
    3. Cp and Cpk are “always acceptable” at 1.33.
    Although this may seem stupid today, it beat the heck out of the old way, which was to send a product out to the customer, and if he said it was o.k., the range of individual values in that initial run became the customer tolerances. (3 sigma if you were lucky, Cpk=1 };->
    Now, the hardest part of this when you’re after Q1 and QS-9000 is getting EVERYONE to STOP!!! recalculating specifications when you improve the process, INCLUDING THE CUSTOMER! This also makes things tough for companies with 6 sigma or 9 sigma capability targets. Problem with this, if you’re a Deming Disciple, you learn that if you exceed customer requirements (delight the customer!!), this now becomes a customer expectation…you have to do it every time, and this now becomes the new spec. Personally, I think this also follows from Taguchi’s loss function. Accordingly, later this week, I will post two derivations of Taguchi’s function (don’t panic!, they’re so simple you won’t need any math to understand them.};-)
    At the other extreme, we had another product line which did a fantastic job of determining customer requirements by deliberately “tweaking” the pilot plant process and determining customer response. Only downside of this was that this product line had a real knack for hiring/promoting managers who could be very “creative” when determining customer requirements…whatever Cpk was needed, they could get it for you! (This is one reason I caution everyone about our plant manager’s target of Cpk>3.0!!!)

    0
    #67262

    Mike McBride
    Participant

    Several of the responses on this subject seem to be confusing stability & capability.

    Stability is simply the absence of assignable cause variation (including conformance with the Western Electric rules established by Walter Shewhart). It is the voice of the process, which is entirely independent of specifications.

    Common cause variation is inherent in the process as it currently exists; it is random, and thus it is virtually impossible to “know” all components of this type of variation. Assignable cause variation, as the name implies, is non-random variation which can be identified. Once identified, a cause can be “assigned” and eliminated to achieve process stability.

    Capability is where specifications come into the picture. Capability is a measure of how well our process is operating in terms of meeting customer requirements. It should be obvious that one must be sure about the stability of the process prior to using a capability index.

    A process can display statistical control and still not meet customer specs. In such cases a process change is in order. This change could be a shift in the mean or a reduction in common cause variation, depending on exactly why the process is not meeting requirements. Taguchi is famous for his “loss function”, which is a good treatment of this subject.

    We don’t need to reinvent the wheel as these concepts are well defined by Deming, Shewhart, and Taguchi. Another extremely good reference on this subject by Donald Wheeler is “Understanding Statistical Process Control”.
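    To make that sequencing concrete (stability first, capability second), here is a minimal sketch; the data, the single 3-sigma rule, and the specification limits are all invented for illustration:

    import statistics

    data = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7, 10.0, 10.2]
    usl, lsl = 10.8, 9.2                      # hypothetical customer specs

    mean = statistics.mean(data)
    sigma = statistics.stdev(data)            # crude overall estimate; R-bar/d2 is the usual within-subgroup choice

    ucl, lcl = mean + 3 * sigma, mean - 3 * sigma
    stable = all(lcl <= x <= ucl for x in data)   # one simple rule, not the full set of tests

    if stable:
        cp = (usl - lsl) / (6 * sigma)
        cpk = min(usl - mean, mean - lsl) / (3 * sigma)
        print(f"Process looks stable; Cp = {cp:.2f}, Cpk = {cpk:.2f}")
    else:
        print("Assignable-cause signal: find and remove it before quoting Cpk.")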

    0
    #67264

    JDG
    Participant

    Stability above all means predictability. In other words, based on today’s data, can I predict what the process will do tomorrow, and with what degree of confidence? Each and every process (and characteristic within that process) needs to be considered individually. Questions need to be asked such as, “What’s the shape of the distribution?”, “How critical is this characteristic?”, “Am I more concerned about small process shifts, or large ones?”, “Is there any serial correlation of the data?”, and others. Such questions should be asked and answered by a cross-functional team working under the guidance of someone who understands why it’s important to ask these questions.
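    One of those questions, serial correlation, is easy to screen for before trusting standard control limits; a minimal sketch (with made-up data) is a lag-1 autocorrelation check:

    data = [5.1, 5.3, 5.6, 5.5, 5.2, 5.0, 4.9, 5.1, 5.4, 5.6, 5.5, 5.3]  # illustrative readings

    n = len(data)
    mean = sum(data) / n
    num = sum((data[i] - mean) * (data[i + 1] - mean) for i in range(n - 1))
    den = sum((x - mean) ** 2 for x in data)
    r1 = num / den                             # lag-1 autocorrelation

    print(f"Lag-1 autocorrelation: {r1:.2f}")
    # Rule of thumb: |r1| much beyond about 2/sqrt(n) suggests the points are
    # not independent, so the usual +/- 3 sigma limits will be misleading.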

    0
    #67268

    JDG
    Participant

    I’ve enjoyed following this discussion train (see previous posting). What disturbs me is all the “fighting for position” that seems to be going on. I think the questions that have been raised here are much more important than the answers. I hope we keep asking those questions. I remember someone saying that the scientific method is the quest to prove yourself wrong. If we keep asking questions we’ll all get closer to Deming’s state of “profound knowledge.” If we stop, we’ll never get there.

    0
    #67270

    Grant Blair
    Participant

    This derivation will require you to draw some things as you go along. First, it’s an easy matter in a classroom to demonstrate a perfectly stable process…just give someone a single die and let them roll it. It demonstrates all the properties of a stable process:
    1. I can easily create a control chart for both X-Bar (and Range, if I take groups of samples): UCL=6, LCL=1, RUCL=5. It will work forever.
    2. It’s unpredictable. If you try to make adjustments based on the last roll, you will make things worse. (I have a standing offer of a trip to Las Vegas for anyone who can prove me wrong.) This also applies to runs within limits. Too many 3’s or 4’s and I get suspicious (change the die!!!)
    3. Special cause is predictable. We use the die to determine how long to cut a piece of standard paper. Then, I casually switch to legal paper.
    Next, I switch to an 8-sided die. The control chart will ALWAYS tell me something’s wrong.
    4. If you plot the distribution, it’s rectangular (start drawing!!!). Now, if you take an average (3.5) and report it to the customer, he could say “sounds good”. After getting over the shock, you agree on 3’s or 4’s. Problem is, you’ve got to either scrap or rework your rejects.
    5. It can be improved. Just replace the die with a poker chip (3 on one side, 4 on the other). Improve even more with a chip which has either 3 or 4 on both sides (like a two-headed coin). That’s as good as you can get…you’ve met your measurement capability!!!
    Now, most people think their processes run like this in real life (goalpost mentality): product is either awful (outside specs), or perfect (inside specs 1-6). Doesn’t take long for someone to figure this out, and they learn that awful product (0.999) can be made perfect (1.001) just by remeasuring or resampling. You can show this on the chart you just drew by labeling the vertical scale.
    Taguchi disagreed, and proved what everyone else already knew: product goes like this on the vertical scale: awful, poor, fair, good, better, perfect. If you draw this for a two-sided spec it will be a curve with a peak in the exact center (and should look familiar).
    Where did these specs come from? Taguchi said it was the POINT AT WHICH CUSTOMER COST became unacceptable. (Flip the curve upside down and label the vertical scale as $$$$.) There is still a cost to the customer when you are inside specs, and it is at a minimum when you make perfect product. Now this is an IMPORTANT PROPERTY of a normal distribution….If I

    0
    #67281

    Grant Blair
    Participant

    First, I wanted to say the purpose of this post is to agree with Mike, just in case someone misunderstood.
    Secondly, it is to add the part of the derivation that got cut off in the last message.
    In the case where the minimum cost lies at the center of the distribution, then the normal distribution provides the perfect solution to the problem of producing “perfect product”, because an important property of the normal distribution is that MOST of the distribution lies near the mean.
    You can look at this as God’s derivation of Taguchi’s loss function, or say, as I do, “When you have a normal distribution, God is on your side”. MOST of my product will be perfect, as long as I stay on target.
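    The same point can be checked numerically with the standard quadratic loss (a sketch, not Grant’s drawing): expected loss works out to k*((mu - T)^2 + sigma^2), so it is smallest on target with small variation. The constant k and the numbers below are arbitrary.

    def expected_loss(mu, sigma, target, k=1.0):
        # Quadratic (Taguchi-style) loss L(x) = k*(x - T)^2 has expected value
        # k*((mu - T)^2 + sigma^2): both off-target running and spread cost money.
        return k * ((mu - target) ** 2 + sigma ** 2)

    target = 10.0
    print(expected_loss(mu=10.0, sigma=0.1, target=target))   # on target, tight  -> smallest loss
    print(expected_loss(mu=10.4, sigma=0.1, target=target))   # off target, tight -> larger loss
    print(expected_loss(mu=10.0, sigma=0.4, target=target))   # on target, loose  -> larger loss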

    0
    #67311

    Grant Blair
    Participant

    This is the second derivation of Taguchi’s loss function. It applies to a one-sided specification, and is based on a “trick” which any good salesman knows instinctively.
    Let’s suppose I am a salesman trying to convince one of your customers he needs to buy my IDENTICAL product. All I have to do is get the customer to do a comparison to get ~ 25% of your business!!! Here’s how it works:
    Draw two identical normal distributions, labeled as MY PRODUCT and YOUR PRODUCT. Now draw a line through the centers, and label each side as 1/2 (50%). There will be four possibilities when the customer makes a comparison:
    My product above average, your product above average….no sale
    My Product below average, your product below average..no sale
    My product below average, your product above average..no sale
    My product above average, your product below average…BINGO!!! got the business!!!!!
    Now, suppose I learn enough about my process to REDUCE common cause variation, and I do it so well that all my product is PERFECTLY on target!! (Erase the MY PRODUCT distribution and leave only the center line.)
    This leaves only two possibilities:
    My Product at average, your product above average…no sale
    My Product at average, your product below average..BINGO!!! got the business!!!!!
    In other words, as I reduce variation, I can now take about 50% of your business.
    Now, if I have learned enough about my process while reducing variation, I will also be able to MOVE my product’s centerline (and, with a normal distribution, I don’t have to move it much to pick up a LOT of sales on the comparison…3 sigma units will give me 99+%, and my old process was already naturally capable of making an occasional part at this level).
    This theoretical case actually happened when Ford subcontracted some transaxles from Mazda, and noticed Mazda transaxles had significantly fewer customer complaints.
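    Those percentages are easy to sanity-check with a quick simulation (arbitrary numbers, not data from the Ford/Mazda case):

    import random

    random.seed(1)
    mu, sigma, trials = 100.0, 5.0, 100_000

    # Case 1: two identical normal processes; the "salesman" wins only when
    # his part measures above the common average and yours measures below it.
    wins_identical = sum(
        random.gauss(mu, sigma) > mu and random.gauss(mu, sigma) < mu
        for _ in range(trials)
    )

    # Case 2: the salesman's product sits exactly on the common average,
    # so he wins whenever your part happens to fall below it.
    wins_on_target = sum(random.gauss(mu, sigma) < mu for _ in range(trials))

    print(f"Identical processes: win rate ~ {wins_identical / trials:.1%}")       # ~25%
    print(f"Zero-variation challenger: win rate ~ {wins_on_target / trials:.1%}")  # ~50%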

    0
    #67425

    Kim Niles
    Participant

    I’ve been privy to some great offline “discussion” of terminology, posting guidelines, and “conversational direction pointing” with Ken, Bob, and Grant that I’d like to highlight in order to maintain the course of progress towards our goal of saving the future of all science from heated debate over “what is a stable process”.

    First of all, our string was summarized by Kerri Simon at:
    https://www.isixsigma.com/library/content/c010625a.asp

    Directional topics to explore are as follows:
    1. What are the economic properties of a stable process?
    2. What are the properties of an unstable process?
    3. How would we define “measurement capability”?
    4. What experiences do we have supporting capability measurement processes?

    Here are highlights of post guideline discussions:
    1. The discussion should be limited to the topic, and not involve any exchange of negative remarks of any kind.
    2. Let’s try to keep the discussion based on known understanding, not hearsay.
    3. When applied statistical methods for various techniques are provided, they should be accompanied by a supporting reference which includes author, title, and page numbers.
    4. Any methods suggested without references will be considered personal opinions, unless a derivation can be made from known reference.
    5. Personal opinions or experiences should not be considered theory.
    6. We should each keep in mind the primary goal of this discussion is to come to a common understanding and language of the topic. Any discussion point that does not have a line to the central topic will only add to confusion and misunderstanding.

    I hope this starts some fresh ideas.
    KN

    0
    #67446

    Grant Blair
    Participant

    Thanks for the direction forward. You’ll have to look around a little for the derivations I will refer to, but they are in this thread. Note that both of my proofs of Taguchi’s loss function depend upon making “perfect” product. Question is, is this an ideal, or is it really possible?
    This is really possible if you define PERFECT product as being INDISTINGUISHABLE as measured by the customer…that is, 100% of the product will appear IDENTICAL when measured (and/or used) by the customer.
    Now, we have a basis for defining a CAPABLE process:
    Definition: A capable process will approach, as a limit, 100% perfect product.
    Please note what did NOT enter into this definition:
    1. Tolerances: We assume the customer’s expectations set the tolerances, regardless of how much we improve the process. Measurement capability is the only limit to this….we cannot improve what cannot be measured.
    2. Cp, Cpk, 6 Sigma: Just a historical way of “keeping score”
    3. Stable Process: A stable process is not necessarily capable. We’re still searching for that definition.
    Now, I’ve previously defined process stability in terms of Cpk requirements: Stable processes require a Cpk>2, Unstable….a Cpk> 3.0.
    The next challenge is to see if a stable process can be defined without invoking Cpk.

    0
    #67449

    “Ken”
    Participant

    Kim,

    I’m glad to see the discussion guidelines I emailed to you posted on this discussion board. I would suggest that when we open a new central topic, these suggestions follow the question or topic item. Perhaps we could convince the folks at iSixSigma to post these discussion guidelines into the Articles Area of the site. Then all we would need to do is point to that link after introducing a new topic. Do you think this is a reasonable suggestion?

    Ken

    0
    #67455

    Kim Niles
    Participant

    Dear Ken:

    Yes, the idea of having a page of guidelines (copied out of our emails in this thread; incl “paranoia”) sounds like something I would do if I were in charge of the site.

    However, I am pleased so far with the management of this site and so am optimistic that iSixSigma will continue to grow and maintain its well-respected status with or without guidelines and/or other slightly off-focus / additional information.

    KN

    0
    #67475

    Mike McBride
    Participant

    Excellent points, Grant, but why would we use Cpk to define stability if, as you stated, stability is independent of customer requirements?
    A process is in control if the measure tracked forms a stable distribution over time. A simple control chart serves this purpose. Customer requirements are not needed to make a determination of process stability.
    Cpk is a measure of the “goodness” of a process in terms of its ability to satisfy the customer. In other words, how much of the aforementioned stable distribution fits within the customer specification.
    It is my understanding that Taguchi went one step further to explain that while we must stabilize our processes and improve our ability to meet customer expectations, it is false to assume that we don’t incur any loss when we fall within specifications. As you demonstrated in an earlier message, if the process misses the target a loss occurs even if the product is acceptable to the customer.
    P.S. – Great discussion. You guys are shaking the rust out of my neurotransmitters.
     

    0
    #67487

    Grant Blair
    Participant

    In an earlier post, I provided my definition of a stable process, which is pretty similar to the way you characterized it…just give someone a single die and let them roll it. It demonstrates all the properties of a stable process: 1. I can easily create a control chart for both X-Bar (and Range, if I take groups of samples): UCL=6, LCL=1, RUCL=5. It will work forever. 2. It’s unpredictable. If you try to make adjustments based on the last roll, you will make things worse. (I have a standing offer of a trip to Las Vegas for anyone who can prove me wrong.) This also applies to runs within limits. Too many 3’s or 4’s and I get suspicious (change the die!!!) 3. Special cause is predictable. In class, I use the die to determine how long to cut a piece of standard paper. Then, I casually switch to legal paper. Next, I switch to an 8-sided die. The control chart will ALWAYS tell me something’s wrong. 4. If you plot the distribution, it’s rectangular. Now, if you take an average (3.5) and report it to the customer, he could say “sounds good”. After getting over the shock, you agree on 3’s or 4’s. Problem is, you’ve got to either scrap or rework your rejects, so even though the process is stable, it’s not capable…it has to be improved!!!
    Now, in real life, I also look for:
    5. About 1-2% of points showing special cause. Statistically, I expect to see about that many “false alarms”. Also, I have learned that accumulating these alarms over time and applying the Pareto principle can provide important clues about sources of common cause variation.
    Accordingly, a first working definition of a stable process is: predictable limits over time, but completely random within those limits, with no more than 2 percent special cause indicated. The only variable not defined is the length of time the process needs to be stable, which is answered by leaving the process alone and seeing how long the output remains stable…my rule was a week, but I don’t think that will work for every process.
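    That 1-2% figure can be illustrated with a quick simulation of a genuinely in-control process on an individuals chart (invented parameters; the exact rate depends on which run rules are applied on top of the basic 3-sigma rule):

    import random

    random.seed(7)
    mu, sigma, n = 50.0, 2.0, 10_000

    points = [random.gauss(mu, sigma) for _ in range(n)]   # a truly stable process
    ucl, lcl = mu + 3 * sigma, mu - 3 * sigma

    alarms = sum(x > ucl or x < lcl for x in points)
    print(f"False alarms: {alarms} of {n} points ({alarms / n:.2%})")
    # Roughly 0.27% comes from the 3-sigma rule alone; stacking the run and
    # trend tests pushes the total toward the 1-2% range quoted above.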

    0
    #67560

    Kim Niles
    Participant

    Another Thought.
    Most would seem to agree that a capable process is one with a high Cpk value (1.33 or better), but by the dictionary definition of capable, a process would be capable if it can produce one good part consistently. If it can produce one part, then with tweaking it is capable of producing 100% good parts, etc. With this in mind, Grant and I have reached an agreement that a stable process is not necessarily capable and a capable process is not necessarily stable…… I’ve got a headache …
    KN

    0
    #68040

    Phil W.
    Participant

    We need to remember that the advantage of having a “stable” process is that the variable measurement is “predictable”. 
    An unstable process does not allow for reliable hypothesis testing (common variation root cause diagnosis) because the process mean, median and standard deviation are shifting from one measurement run/sequence to the next, meaning that there are unaccounted-for variables in our models.
    I’ve always used the standard tests for special cause variation, as listed in the previous posting.

    0
    #68046

    Ken K.
    Participant

    Can you give the section names that the reference from Montgomery is in? I have the 3rd Edition and would like to try to find it (if it’s there), but clearly the page numbering is different.

    0
    #68059

    Hemanth
    Participant

    Very true. The purpose of this whole thing called “stability” is to make sure the process remains within the control limits. I would say (please do correct me if I am wrong..) “If 1000 consecutive data points fall within my control limits then I can say with 99.999% confidence that my process is under control and stable.”
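    As a cross-check on that figure (a back-of-the-envelope sketch assuming independence, exact normality and standard 3-sigma limits), “all 1000 points inside the limits” is a stricter event than it first appears:

    p_inside_one = 0.9973                 # single point inside +/- 3 sigma limits
    p_all_inside = p_inside_one ** 1000   # all 1000 consecutive points inside

    print(f"P(all 1000 points inside limits) ~ {p_all_inside:.3f}")                  # about 0.07
    print(f"Expected false alarms in 1000 points ~ {1000 * (1 - p_inside_one):.1f}")
    # Even a perfectly stable process will usually show a couple of points
    # outside the limits over 1000 observations.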

    0
    #75453

    Emre ÜNAL
    Participant

    First of all, all the variation factors should be defined very carefully before calculating (or trying to calculate) the stability of the process.
    This is not a simple task. For example, for a certain type of experiment you can accept an operator variation factor of 10%, and then you have to check this assumption regularly. Second, known variation factors and noise factors must be defined. You have to decide which factors to control (simply by Pareto analysis).
    You can’t expect a process to be stable by itself; you can make a process stable. Many of the statistical assumptions do not fit with the process control problems. Briefly, if the process is under control then it is stable.
     

    0
    #75467

    RR Kunes
    Member

    A stable process is one that only contains common cause variation.

    0
    #77221

    Anilkumar
    Participant

    Hello,
    I would like to know more about AIAG’s definition of process stability, namely “within 9 sigma regardless of distribution shape and/or process mean shifting”. Where can I get more on “9 Sigma” process control?
    Anilkumar
     
     

    0
    #77223

    aush
    Participant

    Any data outside the control limits classifies the process as unstable.

    0
    #77233

    Mike Carnell
    Participant

    Aush,
    That definition will eliminate any process from ever being stable.

    0
    #77244

    Gabriel
    Participant

    Mike:
    stable = free of variation due to special causes. Stop.
    This definition also eliminates any process from being a stable one, because there will always be variation due to special causes. A process that, from time to time, goes out of control and where proper action follows, or where some small amount of “small special variation” doesn’t get detected by the control chart, is not a stable one but may be “acceptably close to stable”.
    “A perfect state of control is never attainable. The goal is not perfection, but a reasonable and economical state of control. For practical purposes, a controlled process is not one where the chart never goes out of control. If so, we would seriously question whether that operation should be charted. For shop purposes a controlled process is considered to be one where only a small percentage of the points go out of control and where out-of-control points are followed by proper action” (Western Electric Co Inc, Statistical Quality Control Handbook, 1956).

    0
    #129488

    Mahesh
    Participant

    What is this error: “Sorry, Unable to process request at this time — error 999”?

    0
    #131323

    HS Anand
    Participant

    Hello friend! I think I should give you an example from a live situation and you will never forget it. Let’s take my example. I am an unknown person to you, and it will certainly be in your interest to verify and be sure of my credentials and reliability. That is, whether or not I am a person whose information could be relied upon. In terms of process characteristics, whether or not I am centered in my behaviour, reliability and trust. Once you have done this, you will then take my information more seriously. Is it not so? If this much is understood by you, we can proceed further.
    Yes, now a situation has arrived where you have started trusting me and my words. But then, in this situation, will it be wiser to continue believing my words for the rest of your life? If you do this you could be cheated or face a negative situation at one time or the other. Agree! Now what should be done? It would be fair to perform some sample checks from time to time to ensure that you are not duped. This type of check falls under Cpk, where you take subgroups of 4-5 and, on averaging, check a sample size of say 60 to 120. This check will tell you that though Mr Anand has a stable process, at this time he is lying off centred, i.e. the process capability index at the time of the check is off centred, thus giving a low value to the reliability factor.
    NOTE: Please remember that in Ppk studies, piece by piece is taken into consideration, whereas in Cpk studies, a consecutive subgroup of 4-5 pcs is taken for consideration. Hence the Ppk study is more comprehensive and thorough. This is the reason most customers will ask you to first carry out the Ppk studies, and only when you prove the stability of the process will they ask you to switch over to Cpk studies. When Cpk is talked about for a manufacturing or servicing process, it only means that part of the checked population is falling outside the USL or LSL.
    Take another example of an intelligent and smart person who behaves perfectly (Ppk just right), but at the time of the check he might be under the influence of liquor, and as such his behaviour may be off-centred, i.e. the Cpk has gone low.
    Finally, to conclude, we might say that whenever we are introduced to a brand new process (New Product Development) we need to perform a Ppk (process performance check), and once the process is confirmed to be centred and behaving perfectly normally, that is the time to switch over to Cpk studies.
    If I have been able to make you understand to some extent, please revert back on email ID “anandqgp@rediffmail.com”. I shall be obliged.
    Sincerely yours,
    HS Anand

    0
    #146619

    Suhail Ansari
    Member

    Actually, the exact quantifying value varies from process to process and with the set specification limits. For example, it is different in a semiconductor process than in an automobile process.
    “Whatever the process, if the variables are within the specification limits and its process capability is Cpk >= 1.33, it is considered a stable process.”

    0
    #154131

    Alan Jung
    Participant

    But wait a minute, Boeing uses the phrase ” an in control and stable process”, so obviously they do not consider control as the one and only requirement for a stable process.  What else?
    Also, I am using individual X charts, and Boeing wants me to calculate Cpk using a minimum of 5 data points. Can any process be considered stable after only 5 data points? How many points are necessary? Is there any documentation to back this up?
    Thanks,
    alan
     

    0
    #154132

    Heebeegeebee BB
    Participant

    WOW!!!!
    Folks we have a winner!   The new posting response record is SIX YEARS!!!!!!
    I salute you:

     

    0
    #154133

    Jim Shelor
    Participant

    It sounds to me like Boeing is trying to tell you they want you to use an X-bar/R chart with a 5 sample subgroup instead of the I/mR chart you are currently using.
    The reason for shifting to an X-bar/R chart is that you get a much better estimate of sigma-common cause from an X-bar/R chart than you do from an I/mR chart.
    With an X-bar/R chart it takes 25 points to calculate a capability analysis (Cp, Cpk, Pp, Ppk) with any confidence.
    With an I/mR chart, it takes 30 points before a capability analysis can be made with any confidence.
    There are hundreds of references that talk to this.  Look on the left side of this screen and select Statistics & Analysis under Quality Directory.
    Also, talk to your representative at Boeing and ask them if what I have said above is what they want.  I think you will find it is.
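    One way to see why a handful of points is a problem (a simulation sketch with invented true values; the 25/30-point guideline itself comes from the references mentioned above): compare how much a Cpk estimate bounces around at n = 5 versus n = 30 when the true Cpk is 1.33.

    import random
    import statistics

    random.seed(3)
    mu, sigma = 100.0, 1.0
    usl, lsl = mu + 4 * sigma, mu - 4 * sigma      # true Cpk = 1.33 by construction

    def cpk_estimate(n):
        sample = [random.gauss(mu, sigma) for _ in range(n)]
        m, s = statistics.mean(sample), statistics.stdev(sample)
        return min(usl - m, m - lsl) / (3 * s)

    for n in (5, 30):
        estimates = sorted(cpk_estimate(n) for _ in range(2000))
        lo, hi = estimates[50], estimates[-50]     # rough central 95% band
        print(f"n = {n:2d}: Cpk estimates roughly {lo:.2f} to {hi:.2f}")
    # With only 5 points the estimate can land anywhere from well below 1.0
    # to well above 2.0, so small-sample Cpk values deserve heavy caveats.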

    0
    #154134

    Alan Jung
    Participant

    I would love to use an X-bar chart with a five-piece subgroup, but our manufacturing system is fairly unique in that we have many parts with a production rate of 1 part per month.  If I had to wait 5 months for a data point……… So it is an Individual X chart.
    I was thinking: stability in calibration and MSA means consistency over time. So a process in control with sufficient data points and with no indication of a process shift should be considered stable.
     
    Alan

    0
    #154175

    Jim Shelor
    Participant

    If you are only making one part per month, then you are undoubtedly doing 100% inspection.  Why do you need a capability analysis under those conditions?  Even with an I/mR chart you need 2.5 years of data to have a sufficient number of points for an adequate capability analysis.
    Are these the parts that Boeing wants you to do a capability analysis on?  Control charting and capability analysis are designed to give you confidence that the parts you do not inspect are likely to be in specification.  Since you are doing 100% inspection, what are you trying to find out from these analyses?
    Doing control charting and capability analysis on parts you only produce once a month is a useless exercise that tells you nothing, given that it takes 2.5 years to get anything but a preliminary answer.
    It is hard to believe Boeing actually wants you to do that.

    0
    #154177

    Alan Jung
    Participant

    You are correct.
    You know this and I know this, but my auditors (both Boeing and Internal) do not, and my bosses want to please the auditors.
    I don’t mind recording and plotting the data, as there is some internal value, but…..according to Boeing’s Advanced Quality System requirements, I am required to do a gage R&R and other costly investigations for calculated Cpk’s less than 1.33, with Cpk’s based on as few as five data points “if the process is in control and stable”.
    The only relief I can see is to determine with proof that the process is not stable.  Therefore I need a documented (official?) definition of process stability to back up my judgement that the process is not stable after 5, 10, or 25 data points.
     
     

    0
    #154191

    Jim Shelor
    Participant

    A process must be in control to conduct a capability analysis.
    A stable process is a process that is in control with only common cause variation.
    The last thing you want to do is to prove your process is unstable.  To say that your process is unstable makes it unpredictable.
    You can do what Boeing is telling you to do, but the control limits should be marked PRELIMINARY and the report should include the fact that control limits based on fewer than 30 points can be misleading and conclusions drawn on these control limits may lead to incorrect actions.  That having been said, you must decide whether or not you want to put those statements in your report.  If your bosses really want to bend over for the auditors, that action could be career limiting.
    Now, let’s talk about MSAs.  I got the impression that you are already making these parts and already delivering them to Boeing.  If you did not do an MSA using these parts and your measuring system prior to the final inspection, then you do not really know if you shipped good parts or not.  Without an MSA, you cannot determine if your measurement system is capable of discriminating between a good part and a bad part.
    Since you only make 1 a month, getting an MSA done is difficult because you need at least 3 and preferably 5 parts to perform a useful MSA.  The question is, do you make only 1 a month because that’s all the customer needs, or does it take a month to make one?
    You will have to take into account how much these parts cost, but you need a set to do an MSA with.
    I hope this helps a little.

    0
    #160077

    George Noyes
    Participant

    Now I am a bit confused by all these comments.  I always thought a process was stable when special causes of variation were removed and all that was left was common causes.  Thus the process was stable or predictable over time.  I also thought that being stable had nothing to do with spec limits and should not be used on the same chart.  You only talk spec limits when you start discussing being “in control.”

    0
    #186626

    Newby
    Participant

    Outside of an academic setting, I don’t see the value in debating the nuances of an idea that has forever been grounded in practicality and varying assumptions – what you measure, what you don’t, your methods, your device, your sampling plan, subgroup strategy, sample frame, etc. are all inexact calculations made by the investigator that have much greater relevance toward reducing your uncertainty about your process than debating ad nauseam details that aren’t operationally relevant.
    Work to build a theory of knowledge on your process and your business.  PDCA. Are you better today than yesterday?  Now do it again.
    I gotta get back to work. 
    But what do I know.  The name says it all.   
      

    0
    #186635

    Darth
    Participant

    The fact that you responded to a 2 year old post says even more about you. Keep working unless you can actually contribute in a timely manner.

    0
    #186636

    GB
    Participant

    Darth,
    Actually, he responded to an 8 yr old post…doh!

    0
    #186637

    Darth
    Participant

    Correctomundo….that makes him even less useful.

    0
    #186639

    clb1
    Participant

    I suppose you could look at it as responding to an 8 year old post. However, it could also be viewed as nothing more than the continuation of a tradition.  Other than a few outliers in 2002 (Anilkumar, Aush, Carnell, Gabriel, Emre and RR) and 2006 (Suhail), all of the other posts have been in odd-numbered years.  Since 2009 is coming to a close, perhaps the most recent poster just wanted to make sure that 2009 would not go unrepresented.

    0
    #186640

    Taylor
    Participant

    Holy cow, I thought I had too much time on my hands………………..lol

    0
    #186645

    Darth
    Participant

    I am still convinced that Katie posts one or two of these a day to get us going so we up the posting volume.  She does one in the morning that is plain idiotic and is posted with a name containing all consonants.  Then she posts one in the afternoon relating to the oldest thread she can find at the moment.  And we keep falling for it.

    0
    #186665

    MrMHead
    Participant

    Then it shouldn’t be long before we see another post asking about the Lt sigma shift. That always gets a flurry of responses!

    0