Random cause
This topic has 24 replies, 9 voices, and was last updated 17 years, 3 months ago by Six Sigma Tom.
April 27, 2005 at 5:51 pm #39151
Six Sigma Tom (Member, @Six-Sigma-Tom)

I posted a message about this topic in reply to another thread, but after thinking about it I decided that the topic might deserve its own discussion.

My premise is this: the concept of random causes often does more harm than good. People tend to treat so-called “random causes” as equivalent to no causes at all: they ignore them. However, even when variation in Y appears to be random, it is still caused. It’s just that there are many causes, each with a relatively small effect, which makes it harder to attribute any particular fluctuation to a particular cause.

But this doesn’t mean we shouldn’t try. True, it’s not a STOP THE PRESSES emergency the way a special cause might be. But the core idea of Six Sigma is that we should pursue variation to the PPM level, and that requires taking “random causes” seriously. It takes a lot more work and more sophisticated tools like DOE, but that’s what we’re paid for!
April 27, 2005 at 7:33 pm #118535
Reigle Stewart (Participant, @Reigle-Stewart)

Tom:
I would agree that this topic is worthy of further discussion. I would also agree that these two words, as well as their union, are frequently misunderstood and inappropriately applied (by novices and experienced practitioners alike).
Some would say that nothing in nature happens by chance; everything moves in some sort of trend, shift, or cycle. If this is true, then one would also tend to believe that every X plays a role (of some type or form) in the consequential determination of Y, given that Y = f(X1, …, XN). However, not all variables are created equal; some of the Xs are more influential than others. Hence, the vital few versus the trivial many.
This is not to say the trivial many Xs have a random influence on Y. Rather, the trivial many are not as influential in terms of their relative weight (with respect to a point estimate of Y or some parameter thereof). In other words, the related partial derivatives of the trivial many are of such small magnitude that the related set of variables is of no practical value during an improvement effort.
The capability of DOE to discern the vital few is well established; however, would you not agree that sample size plays a highly interactive role in this capability? The practice of DOE may be capable of separating the independent effects, but if the sample size is too low, the DOE will not be able to detect a given amount of change (for a selected level of alpha and beta risk). As sample size is increased (for a given alpha and beta risk), the probability of being able to detect a fixed amount of change (in the response mean or variance) also increases.
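To put a rough number on that trade-off, here is a minimal Python sketch using the standard normal-approximation sample-size formula for detecting a shift in a process mean against a known sigma; the alpha, beta, and shift sizes are illustrative choices only, not anything prescribed above.

# Rough number of observations needed to detect a mean shift of `delta_in_sigmas`
# (expressed in units of sigma) with a two-sided test at the given alpha and beta.
from scipy.stats import norm

def n_to_detect_shift(delta_in_sigmas, alpha=0.05, beta=0.10):
    z_a = norm.ppf(1 - alpha / 2)   # two-sided alpha risk
    z_b = norm.ppf(1 - beta)        # beta risk (power = 1 - beta)
    return ((z_a + z_b) / delta_in_sigmas) ** 2

for shift in (2.0, 1.0, 0.5, 0.25):
    print(f"shift = {shift} sigma -> n ~ {n_to_detect_shift(shift):.0f}")

Smaller shifts demand rapidly larger samples, which is the interaction between sample size and detectable effect described above.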
From another angle, we can consider the temporal behavior of Y (or any given X). If the outcome of an autocorrelation study fails to reveal an association (across all possible lags), is it safe to say the observed behavior is not patterned (i.e., random)?
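A bare-bones version of that check might look like the following sketch: simulated data, a simple sample-autocorrelation estimator, and the common +/- 2/sqrt(n) band as the significance guide. All of these are illustrative assumptions rather than a prescribed procedure.

import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(100, 10, size=200)          # stand-in for observed Y values

def acf(series, max_lag):
    # Sample autocorrelation at lags 1..max_lag.
    s = series - series.mean()
    denom = np.dot(s, s)
    return [np.dot(s[:-k], s[k:]) / denom for k in range(1, max_lag + 1)]

band = 2 / np.sqrt(len(y))
flagged = [k for k, r in enumerate(acf(y, 25), start=1) if abs(r) > band]
print("lags outside the +/- 2/sqrt(n) band:", flagged or "none")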
Is the idea of randomness a curiosity of the human imagination, or does it have some basis in the real world? If so, how do we conclusively prove it? If not, then what are the implications?
Reigle Stewart

April 27, 2005 at 8:07 pm #118536

I agree – all variation has cause. But is the cause random? I guess that depends on how we define “random”:
1. Having no specific pattern, purpose, or objective: random movements. See Synonyms at chance.
2. Mathematics & Statistics: Of or relating to a type of circumstance or event that is described by a probability distribution.
3. Of or relating to an event in which all outcomes are equally likely, as in the testing of a blood sample for the presence of a substance.
April 27, 2005 at 8:11 pm #118537

Six Sigma Tom (Member, @Six-Sigma-Tom)

Reigle,

Sure, sample size is an issue. A sample that’s too small will miss important effects. But with modern computers we often have the opposite problem: sample sizes so large that minuscule effects are flagged as “statistically significant.” But that’s a subject for another post!
It’s interesting that you mention the “trivial many,” a phrase coined by Dr. Joseph Juran. Dr. Juran has since said that he considers the cumulative impact of these variables to be more important than he’d originally thought and suggests calling them the “significant many.”
Although DOE requires an adequate number of replicates, it’s probably even more important to include the correct factors in the DOE and to analyze it properly. The key statistic is the F-ratio, which is a signal-to-noise ratio. If the correct factors are included in the best model, the denominator variance (“error” or “noise”) will be very small, MUCH smaller than the variance from a typical control chart placed on a process after addressing only the variables that produce out-of-control signals. Also, components-of-variance information must be used to select the proper error term for the denominator. If the best model includes the right factors and the ANOVA uses the right denominator, the error term is often very small even when the number of replicates isn’t large.
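A small simulated illustration of that signal-to-noise idea: when the factor that actually drives Y is in the model, the error (denominator) variance is far smaller than the overall variance a chart on the raw data would show. The factor and its two levels are invented for the sketch, and this is a plain one-way ANOVA rather than a full DOE analysis.

import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(7)
material_a = rng.normal(110, 5, 30)    # hypothetical factor level A
material_b = rng.normal(135, 5, 30)    # hypothetical factor level B

f_stat, p_val = f_oneway(material_a, material_b)
pooled = np.concatenate([material_a, material_b])
error_var = (material_a.var(ddof=1) + material_b.var(ddof=1)) / 2

print(f"overall variance (what the chart sees): {pooled.var(ddof=1):.1f}")
print(f"error variance (ANOVA denominator):     {error_var:.1f}")
print(f"F-ratio = {f_stat:.1f}, p = {p_val:.2g}")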
Your philosophical question (“Is the idea of randomness a curiosity…”) is also very interesting. IMO, there are events in the universe whose impact on subsequent events is so small and/or remote that it can safely be said that there is no cause-and-effect relationship. If I jump up and down 1,000 miles away from you, the earth beneath your feet may vibrate, but the effect is undetectable to our science. I’m not suggesting we chase causes to their theoretical or philosophical limit. But I am interested in Juran’s significant many. I don’t think we’ll reach Six Sigma performance without exploring these factors.

Tom
April 27, 2005 at 8:29 pm #118538
Six Sigma Tom (Member, @Six-Sigma-Tom)

Mr. IAM,

Here are some data: 110, 143, 145, 120, 111, 128, 144, 140, 105, 133, 138, 116, 125, 138, 149, 149, 117, 120, 134, 120, 110, 119, 132, 114, 133, 129, 101, 108, 125, 107, 105, 146, 145, 105, 109, 102, 132, 127, 110, 122, 129, 102, 108, 149, 106, 142, 108, 103, 123, 124.

Are these data random? If you plot them on a control chart and perform all of the run tests, the answer is yes. If you apply Mr. IAM’s definitions, it’s also yes.

Now, what if I investigated and learned that two people were involved in the process that generated these outcomes, Bob and Mary? Suppose I stratified the data according to who was running the process and came up with these results:

Bob: 110, 120, 111, 105, 116, 117, 120, 120, 110, 119, 114, 101, 108, 107, 105, 105, 109, 102, 110, 122, 102, 108, 106, 108, 103, 123, 124

Mary: 143, 145, 128, 144, 140, 133, 138, 125, 138, 149, 149, 134, 132, 133, 129, 125, 146, 145, 132, 127, 129, 149, 142

Suddenly the data are no longer “random.” Bob’s results are all 124 or less; Mary’s are 125 or more. But Bob’s data look random, as do Mary’s. Are they? I submit that they are not. We just need to figure out what’s causing the variability within each person’s data. And then on to the next sub-level, and the next.

It’s not a question of definitions, mathematics, statistics, or philosophy. The issue is: how do we economically eliminate variation? Whether the variation appears random or not doesn’t matter. (But economics do matter. We don’t want to spend dollars to save pennies.)

Randomness is just another word for ignorance about the causes.

Tom
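A short Python sketch using the numbers above makes the same point by comparing the overall spread of the combined series to the spread within each operator; most of the “random” variation in the combined data is really Bob-versus-Mary.

import numpy as np

bob = [110, 120, 111, 105, 116, 117, 120, 120, 110, 119, 114, 101, 108, 107,
       105, 105, 109, 102, 110, 122, 102, 108, 106, 108, 103, 123, 124]
mary = [143, 145, 128, 144, 140, 133, 138, 125, 138, 149, 149, 134, 132, 133,
        129, 125, 146, 145, 132, 127, 129, 149, 142]

combined = np.array(bob + mary)
print(f"combined std dev:    {combined.std(ddof=1):.1f}")
print(f"within-Bob std dev:  {np.std(bob, ddof=1):.1f}")
print(f"within-Mary std dev: {np.std(mary, ddof=1):.1f}")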
April 27, 2005 at 8:42 pm #118539

“But the core idea of Six Sigma is that we should pursue variation to the PPM level, which requires that we take ‘random causes’ seriously.”

I don’t necessarily agree with this approach to a Six Sigma project. Usually an SS project deals with either adjusting the process mean or reducing variation (from a stats perspective). If there’s an assignable cause, assign it and eliminate it. If there’s no assignable cause, reduce the variation.

Are you saying that a project is initiated, a control chart or capability study is completed, and the conclusion is that nothing needs to be done? If that’s the case, there’s probably a fundamental error in selecting the critical Y(s) of the project. Gotta be an identified performance gap BEFORE you start analyzing!
April 27, 2005 at 8:44 pm #118540

Reigle Stewart (Participant, @Reigle-Stewart)

Tom:
Great post! The concept of “significant many” is interesting. But how should one define the term “significant”? This is another tributary of the analytical Amazon that should eventually be explored.
Do you believe Dr. Juran’s meaning to be “statistically significant”? Could he mean “practically significant?” Yes, the cumulative influence of the “trivial many” might be large in some situations. But, would any of the contributing effects prove “significant” in any way? I would think not.
For any reasonably complex cause-and-effect scenario, would you agree that the slope of any given X will likely prove to be statistically and pragmatically insignificant? Accordingly, for any complex transform involving a large range of independent variables, it is doubtful that the associated distribution of partial derivatives will prove to be uniform. I would tend to think that such a distribution would be skewed, to such extent that only a few of the variables could be declared as “statistically AND pragmatically significant.”
Respectfully submitted,
Reigle Stewart
April 27, 2005 at 8:47 pm #118541
Reigle Stewart (Participant, @Reigle-Stewart)

Tom:

I love your last sentence: “Randomness is just another word for ignorance about the causes.” Great numerical example.

Reigle
April 27, 2005 at 9:03 pm #118542
Annonymous (Participant, @Annonymous)

Mr. Tom:
Do you believe the Zeta Distribution, Riemann Hypothesis, or Zipf’s Law has anything to do with this discussion?
Annonymous

April 27, 2005 at 9:20 pm #118543
Six Sigma Tom (Member, @Six-Sigma-Tom)

“Are you saying that a project is initiated, a control chart or capability study is completed, and the conclusion is nothing needs to be done?”

No. Instead, let’s say you complete the control chart/capability study and it shows statistical control, but your critical Y is still varying too much. Many would say, “Hey, random causes only, so we need to completely redesign the process.” I say “Nay, nay!” Look for the harder-to-find causes during the Analyze phase.
April 27, 2005 at 9:40 pm #118546
Six Sigma Tom (Member, @Six-Sigma-Tom)

“Do you believe Dr. Juran’s meaning to be ‘statistically significant’? Could he mean ‘practically significant’?”

Juran made the comments in the context of his model of optimal quality costs. This model showed that there is a quality level that optimizes the total cost of quality (the sum of the cost of poor quality and the cost of quality control). Juran’s original model showed the optimum as the minimum of a cost curve that rose on both sides of the optimum value. His revised model allowed that, for many processes, the total cost of quality might be optimized at or near perfect quality. With this model it makes sense to keep looking for causes even when the cost improvement might be small, provided we don’t spend too much to look.

You say: “For any reasonably complex cause-and-effect scenario, would you agree that the slope of any given X will likely prove to be statistically and pragmatically insignificant? … I would tend to think that such a distribution would be skewed, to such extent that only a few of the variables could be declared as ‘statistically AND pragmatically significant.’”

REPLY: Well, “any given X” might be a bit strong, but I agree with the latter part of your statement. If you run a screening experiment, you usually find that only a few factors rise to the top, and then you explore these further. Or, if you run a principal components analysis (or partial least squares regression) and look at the scree plot, the result is the highly skewed distribution you predicted.
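A hedged sketch of that scree-plot pattern: the Xs below are simulated so that a few latent drivers dominate, and the dimensions, noise level, and seed are arbitrary choices made only to show the mechanics.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
drivers = rng.normal(size=(500, 3))                        # a few real drivers
loadings = rng.normal(size=(3, 15))                        # spread across 15 Xs
X = drivers @ loadings + 0.3 * rng.normal(size=(500, 15))  # plus noise

ratios = PCA().fit(X).explained_variance_ratio_
for i, r in enumerate(ratios[:6], start=1):
    print(f"component {i}: {r:.1%} of the variance")

The explained variance falls off steeply after the first few components, which is the skewed distribution described above.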
April 27, 2005 at 9:51 pm #118547
Six Sigma Tom (Member, @Six-Sigma-Tom)

Annonymous,

First I’ll admit my ignorance: I had to look these terms up on the Internet. Having done so, let me state my understanding of your question and then try to reply. These things all relate to the density of primes.
In the Six Sigma context I think you’re saying that as we increase the number of variables in a model, the percent of variation explained by each new variable decreases and that a plot of this relationship would look similar to a Zeta distribution.
I agree. Juran thought it looked like a Pareto distribution, but the Zeta and the Pareto look alike to me, and I wouldn’t be surprised if they’re related mathematically.
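For the curious, a quick numerical comparison along those lines; the shape parameters are arbitrary, and this only shows that the two distributions have similar heavy-tailed shapes, not that they are formally equivalent.

import numpy as np
from scipy.stats import zipf, pareto

k = np.arange(1, 11)
zeta_pmf = zipf.pmf(k, a=2.0)        # zeta / Zipf distribution with exponent 2
pareto_pdf = pareto.pdf(k, b=1.0)    # Pareto distribution with shape 1

for x, (z, p) in zip(k, zip(zeta_pmf, pareto_pdf)):
    print(f"x = {x:2d}   zeta pmf = {z:.3f}   Pareto pdf = {p:.3f}")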
April 27, 2005 at 10:16 pm #118548

Annonymous (Participant, @Annonymous)

Mr. Tom:
This is my understanding as well. Perhaps our kind forum consultants might comment on this particular aspect of the discussion.
Annonymous

April 28, 2005 at 12:18 am #118557

“Perhaps our kind forum consultants might comment on this particular aspect of the discussion.”

Why? So you can jump on as Reigle and give us a Mikel or ASU reference.
April 28, 2005 at 12:25 am #118559

To conclude that a process needs to be redesigned because it is in control and has too much variation is complete stupidity. Only a person lacking implementation knowledge (like Reigle, Annonymous, and their esteemed hero) would even consider such a thing. It does not jibe with reality.
April 28, 2005 at 1:56 am #118565

How can a process be in control and have too much variation? I can understand that an in-control process can have random variation, but “too much variation” sounds more like assignable causes to me.
April 28, 2005 at 2:08 am #118566

You don’t think it’s possible to have a Cp of 0.5 and be in control?
April 28, 2005 at 2:30 am #118568
Reigle Stewart (Participant, @Reigle-Stewart)

Issa, a process can be in a state of statistical control (random variation only), yet be too wide relative to the performance specification limits. For example, consider a given manufacturing technology that is centered on the target value such that M = T, where M is the process mean and T is the design target. To further our discussion, it will also be understood that the individual “deviations” that comprise the variance are fully unpredictable (i.e., random). In this case, it would not be practical or economically feasible to track down and eliminate the source(s) of such deviations. Hence, it can be said that the given technology is centered and operating at its maximum capability, say 2 sigma. In this simple example, we can see that the process is “in control” but unfit for use – in control, yet producing a high defect rate. To resolve this problem, one must:

a) live with the variation,
b) increase the specifications,
c) find a robust solution that minimizes the variance,
d) upgrade the underlying technology,
e) block the effect of causative variables,
or some combination of the above.
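A quick numerical version of this example, with a mean, sigma, and spec limits invented so that the specification sits at +/- 2 sigma around a centered process:

from scipy.stats import norm

mean, sigma = 100.0, 5.0
lsl, usl = 90.0, 110.0                      # spec limits at +/- 2 sigma

cp = (usl - lsl) / (6 * sigma)              # process capability index
p_defect = norm.cdf(lsl, mean, sigma) + norm.sf(usl, mean, sigma)
print(f"Cp = {cp:.2f}, expected fraction out of spec = {p_defect:.2%}")

Under these assumptions the process is perfectly stable yet produces roughly 4.6% out-of-spec product, which is the “in control but unfit for use” situation described above.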
April 28, 2005 at 4:14 am #118571

If Cp < 1, the process is wider than the specified limits and is not capable. So why would you want to have a Cp of 0.5?

April 28, 2005 at 4:47 am #118572

Reigle Stewart,
I fully agree with you, and I do not see any contradiction with what I said.

April 28, 2005 at 6:27 am #118573

Reigle,
How about 100% inspection/measurement? While I agree this is not an ideal solution, it would reduce the number of defectives.
Processes without a natural process capability are not an uncommon problem, especially where high performance gives a marketing advantage. (I won’t mention TPS today!)
Do you remember the ‘fastest static RAM in the world?’
Cheers,
Andy

April 28, 2005 at 3:08 pm #118592

SS Tom –
Randomness is just ignorance of cause? You said just prior to that post that it’s a question of what is economically feasible to investigate. So, which is it? Is it ignorant to choose not to investigate? Is it possible to assign a cause to every variation? I don’t think it is, unless someone around here has reached godlike status!

April 28, 2005 at 3:34 pm #118594

I really agree with Tom.
Random causes are those which, when controlled, give you excellence.

Yes, the basic requirement is that you have a stable condition over a good time span. Whenever you accept random causes as “ignorable,” you can’t reach a best-in-class condition.

Moreover, we are all here to improve our processes and come out with excellent results.

One hidden benefit is that when you start working on random causes, you start your journey toward becoming a PROCESS EXPERT.
April 28, 2005 at 4:24 pm #118600

One of my favourite quotes: “The generation of random numbers is too important to be left to chance.” – Robert R. Coveyou, Oak Ridge National Laboratory
April 30, 2005 at 2:32 am #118659
Six Sigma Tom (Member, @Six-Sigma-Tom)

Mr. IAM,

Here’s my take on this: it’s economics.

1. If the unknown cause has such a big impact that it produces a point beyond the 3-sigma control limit, then it’s probably economical to look hard and look fast for the cause. In other words, do what Shewhart and Deming said: stop and look.

2. If the unknown cause produces a statistically significant run, but no points beyond the control limits, then it’s probably a good candidate for an off-line meeting between the engineer (or other technical expert) and the front-line supervisor.

3. If the unknown causes produce an apparently random pattern of variation, but one that is wide enough that customers or shareholders take unfavorable notice, then it’s probably a good candidate for a 6-sigma project. DMAIC will drill down to the cause or bundle of causes responsible for the unacceptable variation. DMAIC projects are more expensive and time-consuming than options 1 or 2.

Option #3 is what’s new about 6-sigma compared to TQM or traditional quality.

Tom
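As a toy summary of those three cases, here is a hedged Python sketch; the run-length-of-8 rule, the thresholds, and the function name are illustrative choices, not part of the post above.

import itertools
import numpy as np

def suggested_response(values, center, sigma, spec_half_width):
    values = np.asarray(values, dtype=float)
    # Case 1: any point beyond the 3-sigma limits -> look hard and look fast.
    if np.any(np.abs(values - center) > 3 * sigma):
        return "stop and look now (point beyond control limits)"
    # Case 2: a long run on one side of the center line -> off-line review.
    signs = np.sign(values - center)
    longest_run = max(len(list(g)) for _, g in itertools.groupby(signs))
    if longest_run >= 8:
        return "off-line review by the engineer and supervisor (run signal)"
    # Case 3: in control but wider than the customer can live with -> DMAIC.
    if 3 * sigma > spec_half_width:
        return "charter a DMAIC project (stable but too much variation)"
    return "no action needed"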
The forum ‘General’ is closed to new topics and replies.