1.5 SHIFT SIMULATION
This topic has 82 replies, 11 voices, and was last updated 18 years, 5 months ago by Statman.
January 19, 2004 at 2:26 am #34320
Reigle Stewart
To all that are interested in the 1.5 sigma shift: So as to actually visualize the 1.5 sigma shift in action,
try conducting the following Monte Carlo simulation (in
Excel). It only takes a few minutes to set up and is very
easy to execute and plot. From this simulation you will
discover (without the theoretical math) what the 1.5
sigma shift thing is all about. Beyond any doubt, you will
discover first-hand that: 1) the resulting shift factor is an
equivalent and compensatory off-set in the mean of a
population distribution that emulates the influence of
random sampling error at the limits of process capability
(i.e., unity), and 2) it is a statistical worst-case condition
that is fully independent of performance specification
limits. IT IS ESSENTIAL TO RECOGNIZE THAT THE
LIMITS OF PROCESS CAPABILITY ARE
CONVENTIONALLY ACCEPTED TO EXTEND FROM –
3.0S TO +3.0S. THIS IS TO SAY THAT 100% OF THE
AREA UNDER THE NORMAL CURVE IS DECLARED TO
EXIST BETWEEN -3.0S TO +3.0S. This is important to
understand because you will be comparing the sampling
limits of unity to the corresponding population limits. This
is vital to understanding the nature of the 1.5 sigma shift.
Step 1: Using an Excel spreadsheet, create n=30 random
normal numbers (with a population mean=100 and
population stdev=10) in cell locations A1:AD1. This can
be done by using the equation: =100+10*NORMSINV(RAND())
in each of the n=30 cells (A1:AD1).
Step 2: Copy the contents of A1:AD1 through cells
A1000:AD1000. At this point, you will have formed a
matrix containing 30 columns by 1,000 rows, or a total of
30,000 random normal numbers (randomly drawn from a
normal distribution with a mean of 100 and a standard
deviation of 10).
Step 3: In cell location AE1, input the
equation: =stdev(A1:AD1). This will compute the
sampling standard deviation for the first row of n=30
random normal numbers.
Step 4: In cell location AF1,
input the equation: =3*AE1. This equation provides the
upper limit of unity for the given sampling distribution
(defined by n=30 random normal numbers).
Step 5: In
cell location AG1, input the equation: =AF1-130. This
equation gives the difference between the sampling
distribution's upper limit of unity and the population's
upper limit of unity. Some of the differences will be
positive and others will be negative. Note that the value
130 is simply 100+(3*10), where 100 is the population
mean, 3 is the number of standard normal deviates that
defines unity, and 10 is the population standard deviation.
Step 6: In cell location AH1, input the equation: =AG1/10.
This equation provides the given difference in Z units
(relative to the population distribution, not the sampling
distribution). Thus, we have the proverbial shift factor for
the first row of n=30 random normal numbers. Z.shift is
the number of population standard deviations that
separates the upper limit of unity for the sampling and
population distributions.
Step 7: Copy the contents of
cells AE1:AH1 through cells AE1000:AH1000. You
should now have 1,000 Z.shift values.
Step 8: Plot the
contents of cells AH1:AH1000 (the Z.shift values) on a
simple line chart. Note the maximum and minimum
(worst-case) Z.shift value. Again, some will be positive
while others are negative.
Step 9: Recalculate the entire
spreadsheet by pressing the F9 button on your computer
and evaluate the new graph as before (looking for the
maximum and minimum Z.shift values). As you will see,
the statistical worst-case condition is APPROXIMATELY
+/- 1.5 sigma. Using the chi-square distribution with df=29
and alpha=.005, it is possible to compute the exact
worst-case condition. Doing so will reveal that Z.shift=1.46
(just as the Monte Carlo simulation suggests).

Reigle Stewart
January 19, 2004 at 2:38 am #94323
Reigle Stewart
Sorry for the oversight, Step 4 should read: In cell location
AF1, input the equation: =100+3*AE1. I forgot to add
the 100.

Reigle Stewart
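For readers who would rather run this outside Excel, here is a minimal Python sketch of the simulation as described above, using the corrected Step 4. The variable names are mine, and the chi-square cross-check assumes the lower-tail convention that reproduces the quoted 1.46 figure.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng()
MU, SIGMA, N, ROWS = 100.0, 10.0, 30, 1000  # population mean/stdev, n per row, rows

samples = rng.normal(MU, SIGMA, size=(ROWS, N))  # Steps 1-2: 1,000 rows of n=30
s = samples.std(axis=1, ddof=1)                  # Step 3: sample stdev per row
upper_unity = MU + 3.0 * s                       # Step 4 (corrected): M + 3*S
z_shift = (upper_unity - (MU + 3.0 * SIGMA)) / SIGMA  # Steps 5-6: difference in Z units

print(f"min/max Z.shift over {ROWS} rows: {z_shift.min():+.2f} / {z_shift.max():+.2f}")

# The claimed exact worst case via chi-square with df=29, alpha=.005:
c = np.sqrt(29 / chi2.ppf(0.005, 29))  # worst-case inflation ratio, about 1.487
print(f"Z.shift = 3*(c-1) = {3 * (c - 1):.2f}")  # about 1.46
```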
January 19, 2004 at 11:29 am #94329
Gabriel
Reigle,
Shouldn't it be AF1=AVERAGE(A1:AD1)+3*AE1? Why are you adding the "sampling" standard deviation to the "population" average, which is supposed to be unknown?
I mean, shouldn't we compare Xbar+3S (the upper limit you can tell from the sample) against µ+3sigma (the upper limit of the actual population)?

January 19, 2004 at 12:31 pm #94331
Gabriel
Reigle,
So that means that, if I have a process that has an actual (and unknown) short term performance of 3 sigma, then the short term performance calculated from a sample of 30 parts can be as low as 1.5 sigma or as high as 4.5 sigma.
This is true:
– Only if the sample size is 30.
– Only if the actual short term sigma level is 3.
– Only to convert the point estimation of the short term sigma level to the limits of a confidence interval (worst case) for the short term sigma level.
From your explanation, the 1.5 sigma shift does not work:
– If the sample size is other than 30.
– If the actual short term capability is other than 3 sigma.
– To convert sigma short term to sigma long term or vice versa, even if the sample size was 30 and the actual process short term performance was 3 sigma. Again, you are only taking a worst case of the short term performance based on a point estimation of the same short term performance.

January 19, 2004 at 1:17 pm #94333

Reigle,
Give it up. Mikel has been exposed for what he is not (a change agent) and your groupie behavior is sad.
Try this. Set up a control chart with a given mean and a given std dev. Shift the mean and run 30 samples of 5. How long is it before the shift is detected? One? Two at most?
Mikel and Reigle have no clothes. (Actually Mikel has no clothes and Reigle pretends not to see that.)

January 19, 2004 at 2:41 pm #94337
Reigle Stewart
Gabriel: The given cell equation is correct: M+3s, where M is the
population mean (center) and s is the sample standard
deviation. I have used the population mean (100)
because the process center is USUALLY easy to adjust
(at will); however, the variance cannot be so easily
adjusted. Besides, I am only interested in making a
comparison at the points of unity (not considering
centering analogous to Cp). So think of M like you
would T (target value). Even so, the conclusions basically
remain the same whether you use the population mean or
the sample mean. Gabriel, you speak of short-term
capability and sigma capability. Please re-read my
statement in ALL CAPS. The Monte Carlo is NOT
considering specification limits, so your implication of only
being applicable to 3 sigma short-term capability is not
appropriate in this context. You would be correct in
saying that it only applies at the limits of unity. Remember
that there is "no probability" beyond the limits of unity
(once the limits of unity have been declared). So if the
points of unity are exactly aligned, then the sampling
distribution and population distribution are
probabilistically equivalent at the points of unity only.
Again, remember that the points of unity define the limits
of process capability (by quality convention, not my
convention … like in CP we assume unity +/- 3s). You are
also correct in saying the conclusion only applies to the
case n=30, but this is a fairly typical sample size for
qualifying a process. The converting of short-term to
long-term has nothing to do with this simulation. It does
have something to do with the qualification procedure that
logically stems from this simulation. If a designer
postulates a given distribution (infinite degrees-of-
freedom), we then know how much to implode the
distribution to set a target variance for process
qualification. You are also correct in saying that I only
consider the worst-case condition. From a design
engineering perspective, that is the only case I would be
interested in. The statistical worst-case condition is what
I (as a designer) must guardband against by establishing
design margins (safety margin).

Reigle Stewart

January 19, 2004 at 4:47 pm #94341

Reigle,
Congratulations!
You have successfully provided a simulation showing that the upper (1-alpha)*100% confidence limit for the sample standard deviation, for n=30 and small alpha, is approximately 1.5 times the population value. Gee, I think they are doing a similar proof in green belt training now. Maybe you should sign up for one so that you can get a grasp on basic statistics.
Unfortunately, you have not provided a proof or simulation of the 1.5 sigma shift. Let's summarize your simulation:
Steps 1-3: Calculate a series of sample standard deviations (S) from a random sample of n=30, average = 100, s = 10.
Step 4: Determine the 3*standard deviation upper limit for the sample based on the population average (100+3*S). If we take the worst case level of S (upper 99.5 limit), S will be approximately 15 (1.5*s), so this value will be 145.
Step 5: Determine the difference between the upper 3*sigma population limit and the upper 3*S sample limit based on the population average (D): D = (100+3*S) - (100+3*s) = 3*(S-s). For the worst case, D = 3*(15-10) = 15. Essentially, we have calculated the linear distance between the upper 3*S and the upper 3*s limits.
Step 6: Divide D by s: shift* = 3*(S-s)/s = 3*(c-1), where c = S/s, and since c for n=30 and alpha = 0.005 is approximately 1.5, shift* = 1.5.
Step 7-8: Rinse and repeat and graph.
So we know that the worst case difference between the upper 3*S and the upper 3*s limits is 1.5*s.
So what! The whole purpose of calculating the Z value is to determine the probability of the process producing product beyond the spec limit, or the percent of the population out of spec. Determining the number of standard deviations between the points of unity of the short term and long term distributions is a meaningless operation. The distribution with the true population sigma does not have the same pdf as a distribution using the worst case sampling error estimate of sigma. Shifting the Zst (Z of the true population) by 1.5 sigma does not produce the same cumulative probability beyond the upper spec limit of the sample distribution except in the unique situation that Zst is 4.5.
If, in your example, the upper spec of this process is 145, then
Zst = (145-100)/10 = 4.5 and Zlt = (145-100)/15 = 3.0, and the Z shift is 1.5.
However, let's make the upper spec 130.
Then Zst = (130-100)/10 = 3.0 and Zlt = (130-100)/15 = 2.0. Now the shift is 1.0.
Want another example? Let's make the upper spec 160:
Then Zst = (160-100)/10 = 6.0 and Zlt = (160-100)/15 = 4.0. Now the shift is 2.0.
This example shows that, using your assumptions about sampling error, a process that is 6 sigma short term is 4 sigma long term, not 4.5.
Let's make this the general rule and throw out this 1.5 shift:
Let C = S.lt/S.st
Then Z.lt = Z.st/C
Or if the goal is to determine the S.qual based on the required S.design, then you can simply divide S.design by C. The argument to use the "shift" only adds complexity and confusion and limits the application of C.
Statman
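Statman's three worked examples are easy to check numerically; a small Python sketch (mine, not from the post) reproducing them:

```python
mu, s_st, s_lt = 100.0, 10.0, 15.0  # population mean; C = S.lt/S.st = 1.5
for usl in (130.0, 145.0, 160.0):
    z_st = (usl - mu) / s_st        # short-term Z against the spec
    z_lt = (usl - mu) / s_lt        # long-term Z against the same spec
    print(f"USL={usl:.0f}: Z.st={z_st:.1f}, Z.lt={z_lt:.1f}, shift={z_st - z_lt:.1f}")
# prints shifts of 1.0, 1.5 and 2.0: only Z.st = 4.5 gives exactly 1.5
```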
January 19, 2004 at 6:25 pm #94347

Reigle,
This is the greatest amount of BS that you have ever posted:
“So if the points of unity are exactly aligned, then the sampling distribution and population distribution are probabilistically equivalent at the points of unity only. Again, remember that the points of unity define the limits of process capability (by quality convention, not my convention … like in CP we assume unity +/- 3s)”
Maybe you should re-check the formula for CP. You will notice that the formula for CP is the ratio of the specification range to the 6*sigma range. The CP STATISTIC DOES INCLUDE THE SPEC RANGE!!!!
You are adding nothing but confusion by trying to justify this unjustifiable shift.
Statman

January 19, 2004 at 7:57 pm #94354
Reigle Stewart
Statman: First, as professional courtesy, I will ignore your
inferences about my intellectual background.
Really, this adds no value. On the topic at hand,
in previous posts, you completely dismissed
the entire shift thing. Next, you acknowledge a
shift, but say the only relevant measure is c =
S.design/S.sample (also from Dr. Harry's book).
In your last post, you state "Divide D by s: shift*
= 3*(S-s)/s = 3*(c-1) where c = S/s and
since c for n=30 and alpha = 0.005 is
approximately 1.5, shift* = 1.5. So we know
that the worst case difference between the
upper 3*s and the upper 3*S limits is 1.5*S." By
your own math, you are now showing the
algebraic equivalency between an inflation of
the standard deviation (due to random
sampling error) and a linear off-set in the mean
(exactly what Dr. Harry said in his book). Again,
you are trying (very hard) to include the
specification limits, but as I said, the spec limits
do not apply (because they algebraically cancel
out). You go on to say "So what! The whole
purpose of calculating the Z value is to
determine the probability of the process
producing product beyond the spec limit; or the
percent of the population out of spec.
Determining the number of standard deviations
between the point of unity of the short term and
long term distribution is a meaningless
operation." Well, the old dumb guy will say it
again, we are not trying to (or interested in) the
calculation of Z, but you keep providing
examples based on specification limits.
Statman, look at it from another angle.
Consider the .9973 confidence interval of the
mean. If the standard deviation (as related to
the standard error) is expanded (in accordance
with chi-square), then how much wider will the
confidence interval's 3 SE points actually be? I
do believe you will discover the confidence
interval will be approximately 1.46S wider at the
points of unity, or about +/-1.5 sigma. In other
words, the limits of unity will be shifted by
about 1.5s.

Respectfully,
Reigle Stewart

January 19, 2004 at 8:09 pm #94355

Reigle,
Don't you have something of your own to say here, or at least something worthwhile to go do? I am sure Mikel's boots need to be shined.

January 19, 2004 at 8:30 pm #94357
Reigle Stewart
Statman: You state: "Maybe you should re-check the
formula for CP. You will notice that the formula
for CP is the ratio of the specification range to
the 6*sigma range." Well, given a symmetrical
bilateral specification such that Cp=(USL-LSL)/
6S, we can split the equation into two parts
and generalize to the case Cp=|T-SL|/3S, where
T is the target (nominal) specification. Statman,
you should save your rhetoric about me trying to
confuse things to justify the shift. It is not me
that you should quibble with. I don't think your
verbal tactics will work during the debate (which
should be scheduled soon; I am still working
on getting the details together).

Regards,
Reigle Stewart

January 19, 2004 at 9:00 pm #94360

Hey Reigle – time to stop the BS about the debate as well. Mikel would not do anything where he does not dominate the stage.
January 19, 2004 at 10:57 pm #94361

Reigle,
You said:
By your own math, you are now showing the algebraic equivalency between an inflation of the standard deviation (due to random sampling error) and a linear off-set in the mean (exactly what Dr. Harry said in his book).
No, I said: Essentially, we have calculated the linear distance between the upper 3*S and the upper 3*s limits. And I go on (for about the fifth time on this forum) to prove that a linear off-set in the mean is not equivalent to an inflation of the standard deviation due to the non-equivalency in the pdfs of the two distributions.
Give me one example from process study or new product qualification in which there is no consideration of the range of acceptability of the process (i.e., the specs). Without the specifications, the capability of the process has no meaningful context.
Let me give you this from your example:
Mean short term is 100, standard deviation short term is 10. We will only launch the product if the PPM long term is less than 400.
It has also been determined (through extensive research) C= S.lt/S.st = 1.5
Do we launch the product?
Please answer this for me with no additional information
Statman

January 20, 2004 at 5:07 am #94366
Gabriel
"It has also been determined (through extensive research) C= S.lt/S.st = 1.5"
Ha,ha,ha!!!! Extensive research! You really made me laugh with that.
Did that research include reading Harry’s book?
Welcome back, Statman. And thanks for the humor.

January 20, 2004 at 5:25 am #94368
Reigle Stewart
Your request is: "Give me one example from process
study or new product qualification in which there is no
consideration of the range of acceptability of the process
(i.e., the specs). Without the specifications, the capability of
the process has no meaningful context." The PROCESS
STUDY example I offer is quite simple and is presented
as follows. Consider a factory that has multiple machines
performing the same task (like several punch presses that
are identical; this is quite common). Machine A1: S.st=10
and S.lt=15. Machine A2: S.st=12 and S.lt=18. Machine
A3: S.st=6 and S.lt=12. Which machine has the greatest
capability? Obviously Machine A3 (without regard to the
possibility of sampling error). This conclusion is SELF-
EVIDENT and did not require the use of specification
limits. Statman, there are several types and forms of
capability. I do believe that the process standard
deviation is a form of capability; it represents the
capability of a process to REPEAT itself regardless of
the specification limits. Several corporations retain
databases of process capability and only record (archive)
the standard deviations and not the specification limits.
I'm sorry, you only wanted one example and I provided
two. And yet another form of capability is the idea of
process reliability. Of course, estimates of process
reliability can be made without any regard to specification
limits. So there is a third example. Often in design
engineering work, we start with a process standard
deviation (capability) and then establish specifications,
not the other way around. After all, I do believe this is a
key principle in DFSS: establish specifications based
on the existing capability, not the other way around.
Simply knowing the Z value of a previous circumstance
does not help an engineer work through a current
circumstance, but knowledge of the process standard
deviation (capability) is essential. Forgive me for not
responding to your targeted example. We both know this
example is contrived, off base, and not related to our
discussion. You seem quite intent on the idea that I am
somehow attempting to prove equivalent pdfs, even
though I am not. Perhaps you should try to enlarge your
view of capability and reason through the situation
without regard to specification limits. You seek to ignore
the fact that the world of quality engineering often defines
"unity" as the +/-3s limits of a process (ignoring any tail
area probability beyond these limits).

Respectfully,
Reigle Stewart

January 20, 2004 at 5:26 am #94369
Gabriel
Reigle,
Sorry, but an X sigma shift that:
– is valid only for a sample size of 30,
– is valid only 3 sigmas away from the average (sorry, I won't call it "limit of unity"),
– is valid considering that sampling variation occurs only for the standard deviation because the average is easy to adjust (even leaving aside cases such as ovality, roughness, strength, noise, torque, flatness, transparency, impurities, etc., etc., etc.; the average will not be too easy to adjust beyond sampling variation of the average because, know what? when you try to adjust it in production the sampling variation of the average will still be around),
– does not take into account the specification limits (and note that (T-SL) are specification limits too),
– and would be used for process qualification without taking into account that the process will not perform in production as well as with 30 consecutive parts,
I do not care about.
You are right: X=1.5. And what?
I'm happy you found "this" 1.5 sigma shift useful. If I ever happen to feel I need it, I will review these threads in iSixSigma.

January 20, 2004 at 5:54 am #94370
Reigle Stewart
Gabriel: As you likely know, the purpose of Dr. Harry's book was to
discuss the logic and reasoning that was used to
establish Motorola’s Six Sigma initiative. This is well
stated in the book’s introduction. The book was intended
to answer two questions: 1) Why six sigma and not some
other value? and 2) Where does the 1.5 sigma shift come
from? Dr. Harry answered both of these questions to the
satisfaction of several well respected technical reviewers.
Perhaps you and Statman don't like his answers or
approach, but this was the technical reasoning at that
time at Motorola … and is still valid to this day.
Remember, it was Bill Smith at Motorola that first
advanced the idea of a 1.5 sigma shift. It was Dr. Harry
that invented the DMAIC process and invented the X-belt
structure that is so well proven. So many times we try to
teach people to “reason outside the box.” We teach the
idea of “innovation.” But alas, when they do venture
outside the box, there are those lying in the weeds,
instantly ready to degrade their efforts (without any
attempt to build on the ideas). Senior executives have
little time for such naysayers. Perhaps that is why they
are where they are in the corporate hierarchy. Those with
leadership, creativity, innovation, and courage usually
wind up on top, just like Dr. Harry did. There is rarely a
day that goes by that I don't personally witness such
professional jealousy as we see on this site. In earnest,
one should look at their own innovations and make them
known (if they have any to offer). Like it or not, Dr. Harry
has changed the course of the quality profession,
elevated the stature of statisticians, and helped businesses
in such a big way … even the American Statistical
Association publicly acknowledged this fact in 1999.

Respectfully,
Reigle Stewart

January 20, 2004 at 6:36 am #94372
Reigle Stewart
Stan: THE SIX SIGMA ZONE by Donald J. Wheeler. In this
technical publication, Dr. Wheeler says: Shifts of 3.0, 4.5,
and 6.0 standard deviations were used … simply because
shifts of this size are quite common in virtually all
production processes. Moreover, as will be illustrated in
the next section, even larger shifts are not uncommon …
By taking into account the fact that small shifts (one sigma
or less) are hard to detect, and shifts of up to 1.5 sigma
will not be detected quickly.
Respectfully,
Reigle Stewart

January 20, 2004 at 7:20 am #94373

Ok, so maybe this reply won't add value, but it's late and I can't sleep. Especially after reading Reigle's last response. Reigle, this isn't a jab – just some constructive criticism. I have yet to read a response from you that hasn't made my head spin. I openly admit I'm not deeply involved in math or statistics. The reason so many people anticipate responses from Statman, Stan, Gabriel, and Mike Carnell is because they stress simplicity. If you feel constant hostility after you post, I think it's because others are left confused by what you've said. I'm pretty sure every post from you is about the 1.5 sigma shift. Do you have ideas or experiences in other aspects of Six Sigma? One thing I can count on from your responses is that you'll use WAY TOO many words and not even answer the question posed. From someone sitting on the sidelines, I believe it's really easy to prove a theory when you constantly make up the numbers to support the theory. The Six Sigma community is large – if anyone has some real data that supports the shift, let's see it. Let's use it.
Enough said. I'm off the box now.

January 20, 2004 at 7:50 am #94374
Gabriel
Dear Reigle,
According to its position in the thread, your message is supposed to be a reply to my message, which is a reply to your message, which is a reply to my message, which is a reply to your message about the simulation in Excel.
Dr. Harry, "the book", Motorola, etc. had not even been mentioned in these messages (which was incredible coming from you), up to the last one.
That’s as far as I will get.
Take care,
Gabriel

January 20, 2004 at 8:12 am #94375
Gabriel
Using subgroups of 4 (to make it simple, because sqrt(4)=2), a shift in the process average equal to 1.5 process standard deviations equals a 3 sigma shift in Xbar (remember the central limit theorem?). Then your new shifted µ will match the control limit for Xbar. That means that 1 out of 2 points will be beyond the control limit. Using only the "beyond control limits" criterion, the chances to detect such a shift are 0.5, 0.75, 0.87, 0.94, 0.97, ..., 1-0.5^n in the first n points. Add to that the criterion of 2 out of the last three points in C zone (beyond 2 sigma to the same side) and you are almost certain to detect the shift in the first two points.
Stan is right on this. With subgroups of 5 (probably the most typical subgroup size in industry practice), a 1.5 sigma shift will be easily detected in the next point, next two points at most.
And you should know that. Or you could have run the simulation proposed by Stan.
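Gabriel's detection arithmetic can be sketched in a few lines of Python (my code; only the "one point beyond the 3-sigma limit" rule is modeled, as in his post):

```python
import math
from scipy.stats import norm

n_sub, shift = 4, 1.5                  # subgroup size; shift in process sigmas
shift_xbar = shift * math.sqrt(n_sub)  # = 3.0 sigmas of Xbar: new mean sits on the UCL
p_point = norm.sf(3.0 - shift_xbar)    # P(one subgroup average exceeds the UCL) = 0.5

for n in range(1, 6):
    print(f"P(detected within {n} points) = {1 - (1 - p_point) ** n:.2f}")
# 0.50, 0.75, 0.88, 0.94, 0.97 -- the 1 - 0.5^n sequence quoted above
```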
January 20, 2004 at 1:54 pm #94385

Your first example is bogus. The capabilities of the two machines could be equivalent from an economic impact standpoint if the range of acceptability is wide relative to the process spread of the two machines. Remember Reigle, a difference is a difference only when it makes a difference.
Your second example is silly. You can keep all the databases you want, but they only have meaning in the ability of the process to meet customer requirements.
You said: “After all, I do believe this is a key principle in DFSS establish specifications based on the existing capability, not the other way around”
I don't think there is a practitioner in DFSS who will agree with you on that one. The principle of DFSS is to meet or exceed customer expectations and produce the product at a high level of capability.
Since you are struggling with my example, let me give you some additional information:
the Z.st = 5.0
So mean, Std dev short term are 100 and 10
PPM defective must be less than 400.
C = 1.5
Do we launch?
Statman

January 20, 2004 at 1:58 pm #94386

Thanks Gabriel,
Good to be back and good to hear from you
Statman

January 20, 2004 at 2:18 pm #94387
Reigle Stewart
Statman: As usual, you find little merit or value in my discussion.
You ask for an example and I give you one, then you say
it is “bogus.” The idea of being “bogus” is predicated on
your postulate beginning with the word "IF." Well, anything
can be declared bogus or without merit IF we construct an
IF statement. Let's just agree to disagree.

Respectfully,
Reigle Stewart

January 20, 2004 at 2:33 pm #94388

Gabriel,
You forgot a few more constraints on the use of the 1.5 shift:
– Is valid only when the actual difference between short and long term cannot be empirically established through components of variation analysis
– Is valid only when the process is free from special causes and only subject to the uncertainty of sampling error
– Is valid only when the underlying distribution of the process is normal
– Is valid only when the metric is continuous
This is why I think a better title for Harry's book is "The Incredible Shrinking Shift."
I also love the argument that this book is not about the justification of the use of the 1.5 sigma shift but about where it comes from. This is like writing a book about why it was once believed that the earth was flat. The only difference is that we are still seeing the practice of inappropriate application of the shift. Only crack-pots of the flat earth society are doing calculations based on a flat earth.
Cheers,
Statman

January 20, 2004 at 2:52 pm #94390

No Reigle,
You are wrong about the use of "if" in a debate about the logic of an argument. The use of "if" is to declare a premise to the argument that shows that the argument is either invalid or limited. Your task is now to show why the premise is not valid.
The declaration to “agree to disagree” is often incorrectly thought to be a declaration that the debate is now at a stalemate. However, this is actually a declaration that you have lost the debate because you do not have a counter argument.
It has been, once again, enjoyable to engage in a mental joust with you.
Hope you can return once again when you have re-tooled your arguments.
Statman
January 20, 2004 at 4:25 pm #94393

Reigle,
Obviously, you have no intention to solve the problem that I gave you. I have posted the solution below. This is to show the users of this forum how misleading the 1.5 shift is and the dangers of using it even when we have established that the ratio of long to short term process standard deviation is 1.5. After reviewing this solution, it should be clear that the concept of shifting the distribution mean by C to compensate for an inflation in the standard deviation by C is just flat wrong.
Solution to the example:
Mean short term is 100
Sigma short term is 10
Z.st is 5.0
Ratio of the C = S.lt/S.st = 1.5
Requirement to launch the new product is that the PPM defective long term is no greater than 400.
DO WE LAUNCH THE PRODUCT?
Using the formula: Z.lt = Z.st/C
Z.lt = 5.0/1.5 = 3.3333
Pr(Z>3.333) = 1-NORMSDIST(3.3333) (in excel) = 0.000429
0.000429*1,000,000 = 429 PPM
Since 429 > 400, the product does not meet the requirement to launch. Do not launch the product.
Note: Using the Harry/Reigle method of the 1.5 sigma shift, we would draw the wrong conclusion since:
Z.lt = Z.st-1.5 = 5-1.5 = 3.5 and
Pr(Z>3.5) = 1-NORMSDIST(3.5) (in excel) = 0.000233
0.000233*1,000,000 = 233 PPM
Proof:
Z.st = (USL-T)/S.st
5.0 = (USL - 100)/10, so USL = 150
C = S.lt/S.st

January 20, 2004 at 4:28 pm #94394

Oops, cut off the proof:
Proof:
Z.st = (USL-T)/S.st
5.0 = (USL - 100)/10, so USL = 150
C = S.lt/S.st
1.5 = S.lt/10, so S.lt = 15
Z.lt = (USL - T)/S.lt = (150-100)/15 = 50/15 = 3.3333, not 3.5
Statman
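The arithmetic in the proof above is straightforward to verify; a minimal check (my code, with scipy's norm.sf standing in for Excel's 1-NORMSDIST):

```python
from scipy.stats import norm

z_st, c = 5.0, 1.5
print(round(norm.sf(z_st / c) * 1e6))    # Z.lt = 3.3333 -> about 429 PPM: do not launch
print(round(norm.sf(z_st - 1.5) * 1e6))  # Z.lt = 3.5 -> about 233 PPM: the wrong answer
```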
January 20, 2004 at 5:06 pm #94398

Reigle,
If it makes you feel better, I can agree to always disagree with you if you are just spouting Mikel rhetoric. Much ado about nothing of value.
Got anything worthwhile or original to say?

January 20, 2004 at 5:11 pm #94400

What Wheeler has to say doesn't matter. Go create a data set with a 1.5 sigma shift and apply SPC. One, maybe two samples are all that is needed to detect it, and if the shift occurs concurrent with some process change (most do) and we follow simple rules of how to turn on a process after a change, you will never turn on a 1.5 shift. Period. Go try it instead of coming back with something that is a quote.
By the way, I am honored that you answered me. Now take my advice and use real data and see what you find.

January 20, 2004 at 9:44 pm #94411
Philippe Marquis
An animated Shewhart chart with 1, 2 and 3 sigma shifts is available at http://www.multiqc.com/shewhart.htm
January 21, 2004 at 12:34 am #94412

Invented DMAIC? Not quite, it's just a relabeling of the Japanese PDCA and Juran's Journeys from Symptom to Cause and Cause to Remedy.
Invented x-belt? Not quite, it is just a label put on the change agents recognized (not invented) by Peters and Waterman in In Search of Excellence.
Mikel did not invent anything – he is a master spin artist (the best I have ever seen), a great salesman, and a world class BSer. That’s all.
So simple.

January 21, 2004 at 1:27 am #94413
Reigle Stewart
Stan: You should review the iSixSigma document that traced
the history of X-belts (black belt naming convention).
Their writer was given a copy of the contracts Dr. Harry
had with Unisys Corporation in 1986. In these contracts
Dr. Harry called out “Black Belts,” “Brown Belts,” “Green
Belts,” and “Champions.” This was the first use ever of
the naming convention. Dr. Harry also provided the writer
with the original correspondence from Mr. Cliff Ames
(Unisys facility manager at Salt Lake) where he set forth
the terms "Black Belt" and so on (and this was
acknowledged in the letter by Mr. Ames). In 1989 Dr.
Harry brought the X-belt concept to Motorola (where it
was adopted). From there to ABB, to Allied, to GE, and
other clients. The Breakthrough Strategy ™ was
developed by Dr. Harry and first published in 1984 (as the
logic filters), then revised and re-published in 1986 and
then again in 1994 (while a VP at ABB). I was with Dr.
Harry in California when he assembled all of the
documents for the iSixSigma writer. Stan, there is
DOCUMENTED EVIDENCE that shows how wrong and
off base you really are. Proof again that Stan lacks
integrity. This is why I will not converse with you Stan.
Statman and I may disagree, but he does have integrity.

Without formal courtesy,
Reigle Stewart

January 21, 2004 at 1:34 am #94414
Reigle Stewart
Stan: And now Dr. Wheeler is also wrong, as well as Dr. Harry.
Seems you are the only one that knows the “truth”. Stan,
no offense, but you seem to have a propensity to dismiss
the research and work of several distinguished quality
professionals without showing any supporting scientific
reasons … only conjecture and opinions … and then
professing you have reality in your back pocket. So, I will
just say OK Stan. Goodbye Stan.

Reigle Stewart

January 21, 2004 at 1:56 am #94415
Reigle Stewart
Statman: In your last post, Statman stated: "Note: Using the Harry/
Reigle method of the 1.5 sigma shift, we would draw the
wrong conclusion since: Z.lt = Z.st-1.5 = 5-1.5 = 3.5."
Statman, where did you get the 1.5 number from? You
are treating the shift as a constant when it is not. Dr.
Harry has never said it is a constant. In fact, he says in
the book it is not a constant. He says 1.5 number should
only be used if no other information is available. I believe
the correct math (not what you are putting in my mouth) is
Z.shift = Z.st - Z.lt. This is what Dr. Harry has always
published. He also demonstrated that k=1-(1/c). So,
k=1-(1/1.5)=.3333. Since you defined Z.st=5.0, then
Z.st(1-k)=Z.lt. = 5(1-.3333)=3.333, not the 3.5 you are
declaring that is our position. This only shows your lack
of understanding about the shift factor and its theoretical
basis. Come on Statman, you can do better than this. You
just PROVED that Dr. Harry is right without even
knowing it. You got the same answer as would be given
by the equations in his book. You just took the c route
and I took the k route. Read the part of his book where
he goes through all the Cpk stuff. Again, this is PROOF
that an inflation of the standard deviation can be
expressed as an equivalent mean-offset (shift).

Reigle Stewart

January 21, 2004 at 2:24 am #94417
Reigle Stewart
Statman: As you correctly stated: Z.lt = Z.st/c. It is also true that: Z.lt
= Z.st(1-k), so k=1-(1/c), or c = 1 / (1-k). Since you
declared that Z.st=5.0 and c=1.5, then k=.333; thus, Z.lt =
Z.st(1 – k) = 5(1 – .333) = 3.33, thus Z.shift = Z.st – Z.lt =
5.00 – 3.33 = 1.67. Hence, an inflation of c=1.5 is
equivalent to a shift of Z.shift=1.67. This method was
established by Dr. Harry and Dr. Lawson in their 1988
book “Six Sigma Producibility Analysis and Process
Characterization.” Sounds to me like you need to do
some reading as you are about 16 years behind the
technology curve.

Got Ya,
Reigle Stewart
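Both the "k route" and the "c route" in the exchange above reduce to the same arithmetic; a minimal check (my code):

```python
c = 1.5
k = 1 - 1 / c           # 0.3333...
z_st = 5.0
print(z_st * (1 - k))   # the k route: 3.33
print(z_st / c)         # the c route: 3.33
print(z_st - z_st / c)  # the implied Z.shift: 1.67, not 1.5
```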
January 21, 2004 at 2:42 am #94418

Reigle,
Thank you!!!!
This is what I have been saying all along! There is no linear shift in the numerator of the Z equation that will compensate for an inflation of sigma in the denominator and the shift is dependent on the size of Z.st.
We are finally in full agreement!!! Thanks for coming over to my side!
Since you arrived at 3.3333 (Z.lt) by multiplying Z.st by (1-K) and K = 1-1/c, it is quite easy to show that your equation is equal to my equation:
(1-k) = 1/C
So the correct calculation is Z.lt = Z.st/C, not Z.lt = Z.st - Z.shift
Of course, we always have an estimate of C, don't we? It has been shown in your simulation that it is the upper 99.5% CI on the sample standard deviation.
Therefore, we need to change all of those conversion tables, don't we?
No more shift – YES!
Statistics is saved!
Reigle, Please tell Dr. Harry for me that he has been wrong all this time in his advice to adjust Z.st by subtracting 1.5 to get long term. He may take it better from you.
Cheers and once again thank you,
Statman
Ding Dong, the shift is dead, the wicked shift is dead.

January 21, 2004 at 2:59 am #94420

Reigle,
Any thoughts on how we will now handle the issue that a 6 sigma process short term will produce 32 PPM long term when for years we have been advertising it as 3.4?
That change of almost an order of magnitude in the number of defects in a 6 sigma process may be hard for some to swallow.
Just curious,
Statman
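The 32 PPM figure follows directly from Z.lt = Z.st/c; a one-line check (my code):

```python
from scipy.stats import norm

print(norm.sf(6.0 / 1.5) * 1e6)  # Z.lt = 4.0 -> about 31.7 PPM, versus the advertised 3.4
```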
January 21, 2004 at 3:01 am #94421

Got me? What do you mean you got me?
I couldn't be happier! More evidence that the Shift is dead!
Ding Dong, the shift is dead......

January 21, 2004 at 3:04 am #94422
Reigle Stewart
Statman: Sorry, but your cheap shot does not cut the mustard.
What you call “your equation” was advanced by Dr. Harry
in 1988 and again in his new book. Yes, a numerator
correction can be made to compensate for a denominator
expansion, this is what Dr. Harry’s equation k=1-(1/c) is all
about. Read and study these equations on pages 108
through 110 of his new book. Again, Dr. Harry only
advises to use “1.5s” when no other information is
available. You try to create a smoke screen by saying Dr.
Harry is wrong and you are right. Such a declaration is
meaningless, especially since I demonstrated that your
so-called "proof" was wrong. My previous posts
mathematically demonstrate you have been in error,
statistically and now judgmentally, as well as ethically.
You continue to put words into my mouth and say things (I
or Dr. Harry) never said. Why don’t you point out a page
number and reference where Dr. Harry says the shift is
"always 1.5s." In fact, there are numerous publications
and books where Dr. Harry says the 1.5s should only be
used if no other information is available (but you
deliberately ignore this). The details of the “debate” are just about finalized, I will
be getting back to you with a series of possible dates.
Seems several recognized statisticians have read your
posts and found numerous errors in your reasoning and
statistics. I sure hope you bring more than rhetoric to
Arizona (assuming you don’t back down from the debate).
Don’t worry that this debate will go dead … I am preparing
some publicity on it, so you will not be able to “hide”
behind a code name and take pot-shots. You talk-the-
talk, but will you be able to walk-the-walk?

Reigle Stewart

January 21, 2004 at 3:16 am #94423

Reigle,
You talk-the- talk, but will you be able to walk-the-walk?
What are we going to do, arm wrestle over this?
I hope that the competition is better than what I have seen so far.
Statman
January 21, 2004 at 3:41 am #94424

Reigle,
You are an idiot.
Regards,
Stan

January 21, 2004 at 3:45 am #94425

Go create a data set with a 1.5 sigma shift and apply SPC. One, maybe two samples are all that is needed to detect it, and if the shift occurs concurrent with some process change (most do) and we follow simple rules of how to turn on a process after a change, you will never turn on a 1.5 shift. Period. Go try it instead of coming back with something that is a quote.
Reigle – go try it and report back the results.

January 21, 2004 at 3:47 am #94426
Gabriel
Seems that WWIII is about to begin.
I have a proposal:
Let’s stop this 1.5 sigma shit.
Until after the debate, at least? A break?

January 21, 2004 at 3:49 am #94427

Aside from silly labels, there is nothing the least bit original about Dr. Mikel.
Go read Managerial Breakthrough by Dr. Juran from 1964 if you want to see where Mikel got the idea for the Breakthrough Strategy. Mikel read it in 1983. Strange but true.
January 21, 2004 at 3:52 am #94428

I agree, let's also stop those posts that are only spouting rhetoric.
(Reigle – will you have anything to say?)

January 21, 2004 at 3:54 am #94429

Yes,
I’m done. The topic is now dead….like the shift
Cheers,
Statman
PS: What should we discuss?

January 21, 2004 at 4:34 am #94431
Reigle Stewart
Statman: It's not me that you will have to debate with. I am a
neophyte next to Dr. Harry.

Good Luck,
The I-am-glad-not-in-your-shoes-guy

January 21, 2004 at 4:44 am #94432
Reigle Stewart
Statman: Thank you, I too wish to stop now that your position has
miraculously changed mid-stream. You and Dr. Harry will
settle this debate soon (by action of qualified referees that
will publish the results and only tolerate documentation,
valid math, and facts with references). I will be back in a
few days to ask for your most available dates. I assume you
are not backing out of this high visibility “all expenses
paid” chance to prove you are right. Are you still on the
dance card?

Reigle Stewart

January 21, 2004 at 4:56 am #94433

I'm glad to hear that the expenses will be taken care of. I don't think that my parents would have paid for the trip. I just hope that they let me skip school on those days if it is before June.
Statman

January 21, 2004 at 5:21 am #94434
Reigle Stewart
Statman: Is that a yes or a no?
January 21, 2004 at 8:12 am #94436

In 1986 Jack Scholls, who had attended Juran training, published an internal memo describing the DMAIC process. This process was then adapted by Richard Schroeder in his groundbreaking 'Five Imperatives' presentation in about 1987. Jack's DMAIC process is different from the current version and is far more effective, because defining requirements and designing correct metrics can be a formidable challenge.
January 21, 2004 at 8:29 am #94437

Reigle,
Thank you for your efforts in raising these issues. I appreciate your dialectic. (Sometimes I suspect that you are playing the devil's advocate!)
Without wishing to offend Dr. Wheeler, may I suggest that he study Shainin’s (or whoever invented it) randomised sequence as I’ve found this helpful in isolating temporal variation – even when the objective is only achieved after several processing steps. Another method that uses randomisation to detect temporal variation is the ‘split lot.’
As has been pointed out previously, X-bar and R charts are very sensitive to small deviations of the process mean. More important though, in many processes the batch uniformity is a much larger source of variation than any variation in the mean, which was totally missed by corporate Motorola, but not by SPS. And it is truly sad to learn that SPS is to be sold off this first quarter.
Cheers,
Andy
January 21, 2004 at 3:32 pm #94442

Thanks ... I really like the bivariate plot. The Shewhart Chart also shows how easy it is to detect a shift.
January 22, 2004 at 11:11 am #94480

Reigle,
If you are really setting up a debate (I think you are just pretending – like playing house or playing cowboy, something you and Mikel should know about), I will cover any of Statman’s logistics including any and all costs. Statman – do you want to stay and eat at the Phoenician?
I will also make sure everyone interested in such a debate is given the simplest of simulations that shows that basing this argument on SPC is silly. A 1.5 shift would simply come and immediately be dealt with. So simple – not worthy of all the smoke and mirrors you and Mikel like so much.

January 22, 2004 at 11:23 am #94481

Philippe,
Nice contribution. Shows that basing the 1.5 debate on SPC is really stupid. Reigle – are you open minded enough to look?
People should know that your subgroup size is 3. It would be even more sensitive with larger subgroups.

January 22, 2004 at 3:49 pm #94495
Reigle Stewart
Stan: As a respectful suggestion, maybe you should
stick to more qualitative things. The control
limits of an Xbar chart are nothing more than
the 99.73% confidence limits of a mean. Note
that such limits for a confidence interval
represent the "points of unity." In turn, these
points define the idea of statistical control. A
control chart can also use the 95% limits (Z=
1.96) but, by way of quality convention, we
traditionally use Z=3.0 as the basis for declaring
an out of control condition. So, we recognize
that UCL = M + Z.a/2*(S/sqrt(N)), where M is the
mean, Z is the standard normal deviate
associated with the upper confidence interval
(UCL), a is the probability of type I decision
error (alpha=.0027), S is the standard
deviation, and N is the number of
observations. We also understand (from
available references) that a typical subgroup
size is N=4. Based on the case NID(0,1) and
N=4, we can determine the sensitivity of the
classic Xbar chart. Plugging in the numbers
reveals that 0+3*(1/sqrt(4))=1.5. This means that
the classic Xbar chart can only detect a 1.5
sigma shift (or larger) in the process mean
when subgroup size is 4 and the three sigma
limits are used to define an out of control
condition. This also means that a designer
should set tolerances that are robust to a 1.5S
shift or larger. Perhaps you can now see why
the 1.5 sigma shift is so important to both
process and design work. Try using a Monte
Carlo simulation that does not rely on a
sustained shift (i.e., use a shift that is dynamic
and embedded). If you use a simulation that is
based on a dynamic mean that momentarily
shifts and drifts by 1.5 standard errors, it is
difficult to detect. Dr. Harry regularly uses such
an Excel based simulation to demonstrate that
an Xbar and S (or R) chart is not so sensitive as
you make it out to be.

Respectfully,
Reigle Stewart
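The subgroup arithmetic in the post above generalizes directly: the smallest sustained mean shift that places the shifted mean on the 3-sigma limit is 3/sqrt(n) process sigmas. A small sketch (my code):

```python
import math

for n in (1, 2, 4, 5, 9):
    print(f"n={n}: 3/sqrt(n) = {3 / math.sqrt(n):.2f} sigma")
# n=4 gives 1.50, the figure quoted above; larger subgroups flag smaller sustained shifts
```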
January 22, 2004 at 5:38 pm #94499

What nonsense. The designer only needs to understand the inherent process capability of the processes their design will run on, plus the operating philosophy and "control" capability of the same process. I define the control capability as the amount of drift that occurs long term. If I know that my process has a Cp of 2, that my fulfillment philosophy is to always center my processes, and I know that my shift is only 0.5 sigma, I'll design for that and smoke you and your designers on cost and quality every day of the week.
So the answer to whether I now see why the 1.5, is no. No, because it is nonsense. And your explanation using SPC is smoke and mirrors. SPC is not an event where the probability of the specific subgroup determines my long term capability. It is a series of decisions. And as I pointed out to you before, most "shift" occurs when the process is being intentionally changed, and there we use more restrictive rules for interpreting the chart – rules that have been published in Juran's handbook for all of both of our careers. The rules, if followed, would prevent a 1.5 shift from ever starting.
It is clear that you and Mikel have never lived on the Operations side of things or you would know your arguments are hollow.

January 22, 2004 at 6:12 pm #94506

I concur ... the purpose of a control chart is to find sources of variation and eliminate them – not to guardband against them.
Even if there is a 'mathematical' type of variation in a 'theoretical' process, which I wouldn't describe as a shift, it is possible to use both warning limits (2 sigma) and control limits on a Shewhart Chart to detect smaller 'transients' in the mean, using the rules suggested by the Western Electric Company.
· 2 of 3 consecutive points fall outside warning (2-sigma) limits, but within control (3-sigma) limits.
· 4 of 5 consecutive points fall beyond 1-sigma limits, but within control limits.
· 8 consecutive points fall on one side of the centre line.
I agree with previous comments – the 1.5 sigma shift is unnecessary and confusing and detracts from the 'real' issues affecting western industry – rolled yield.

January 22, 2004 at 6:43 pm #94510
Reigle Stewart
Andy: I thought the purpose of DFSS was to create
designs that are “robust” to variation … so the
operations people don’t have to track down and
eliminate variation. You know, the idea that we
should "design to existing capability." I do
believe this is called “guardbanding against
variation."

Respectfully,
Reigle Stewart

January 22, 2004 at 6:45 pm #94512
Reigle Stewart
Statman: From our previous discussion, we AGREED that
k = 1 - (1/c), just as given in Dr. Harry's 1988
book and his 2003 book (as well as other
books authored by Dr. Harry). Expanding the
quantity k, we observe that: |T-M| / |T-SL| = 1 - (1/c).
From this, we note that: |T-M| = |T-SL| * [1 - (1/c)]
= |T-SL| - (|T-SL| / c). Dividing both
sides by the short-term standard deviation
(S.st), you must surely acknowledge that Z.shift
= (|T-SL| / S.st) - ((|T-SL| / S.st) / c) = Z.st - Z.lt.
So there you have it, Z.shift = Z.st - Z.lt. Thus,
we are able to equate or otherwise calibrate a
dynamic expansion of the short-term standard
deviation (in the denominator term) to a static
off-set in the mean of the numerator term (even
though you said this could not be done). So, as
you can mathematically see, the static quantity
k can be made equivalent (calibrated) to the
dynamic quantity c.

Respectfully,
Reigle Stewart

January 22, 2004 at 6:57 pm #94513

Reigle,
I said I was done commenting on this string, but this one I can't resist.
First of all, Stan is right and now you agree with him. Any sustained shift of 1.5 sigma will be detected quite rapidly with a common Shewhart control chart with n=4. Using the 8 Nelson/Western Electric rules, the shift will be detected in fewer than 5 subgroups.
So to wiggle out of this, you have had to further constrain the application of the 1.5 shift and suggest that we are not talking about sustained shifts but transient shifts in the subgroup averages. So we have now eliminated a good 95% of the types of process shifts that would inflate the long term sigma of the process.
This leaves us only with those random transient shifts that can go undetected in a common Shewhart control chart; those fleeting buggers that can't be detected in trend analysis. We are no longer talking about a shift in the numerator of Z but an inflation of the standard deviation in the denominator (back to the same old argument). As you and Dr. H have shown us, the inflation factor that gives an alignment of the 3 sigma tails by shifting the short term distribution 1.5 sigma is in the situation that:
C = S.lt/S.st = 1.5
So the question is: will an inflation of S.lt to 1.5*S.st due to transient random shifts between the subgroups go undetected in a common Shewhart Xbar-R chart with subgroup size of 4?
The answer is no, they will be detected quite easily.
First, let's define S.shift as the standard deviation of the transient random shifts. Then:
S.lt = (S.st^2 + S.shift^2)^0.5
Since S.lt = S.st*c and C = 1.5,
(1.5*S.st)^2 = S.st^2 + S.shift^2 and
S.shift = sqrt(1.25)*S.st = 1.118*S.st
Therefore, the standard deviation of the transient random shifts is greater than the short term (subgroup standard deviation). Since the control chart has subgroup of size 4, the standard deviation of the shifts is 2.24 times larger than the standard error of the control chart.
The transient shifts are random normal with mean 0 and standard deviation of S.shift in order for the control chart to not detect special causes and to apply normal theory. Therefore, with the condition of the ratio of S.lt to S.st of 1.5, (1-NORMDIST(1.5,0,1.118,TRUE))*2 = 0.1797 or about 18% of the shifts will be greater than the 3 sigma control limits and a shift will be detected in at least 5 subgroups.
To test this, I set up a random normal series of 200 data points with mean=100 and S.st = 10. I then added to that a random series of subgroup shifts with mean = 0 and S.shift = 11.18, and control charted the data with the random transient shifts in subgroups of 4. The results are below. The shift was detected at data point 4. I have also performed a components of variation analysis on the data. You can see the S.st (within), the S.shift (between) and the S.lt (total). C = 14.522/9.728 = 1.49.
Variance Components
Source     Var Comp.   % of Total   StDev
Between    116.240     55.12        10.781
Within      94.640     44.88         9.728
Total      210.880                  14.522
Xbar/S for C3
Test Results for Xbar Chart
TEST 1. One point more than 3.00 sigmas from center line.
Test Failed at points: 4 6 13 14 15 20 22 32 41 49 50
TEST 5. 2 out of 3 points more than 2 sigmas from center line
(on one side of CL).
Test Failed at points: 4 6 8 14 21 22 33 45
TEST 6. 4 out of 5 points more than 1 sigma from center line
(on one side of CL).
Test Failed at points: 4 6 31
Now I am sure you will come back and tell me that Dr. Harry already said this, or that he said it would only work when the planets were aligned a certain way or some silly thing like that. The point, Mr. Stewart, is the value of the 1.5 shift. Stay focused on that.
Focus.
Statman
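For anyone who wants to reproduce Statman's transient-shift experiment, here is a minimal Python sketch; the structure and names are mine, and the random draws will of course differ from his Minitab run:

```python
import numpy as np

rng = np.random.default_rng()
MU, S_ST, N_SUB, N_GROUPS = 100.0, 10.0, 4, 50

# Size the between-subgroup shifts so that S.lt/S.st = 1.5:
# (1.5*S.st)^2 = S.st^2 + S.shift^2  ->  S.shift = sqrt(1.25)*S.st = 11.18
s_shift = np.sqrt(1.25) * S_ST
shifts = rng.normal(0.0, s_shift, N_GROUPS)               # transient subgroup shifts
data = MU + shifts[:, None] + rng.normal(0.0, S_ST, (N_GROUPS, N_SUB))

xbar = data.mean(axis=1)
se = S_ST / np.sqrt(N_SUB)                                # limits from the short-term sigma
flagged = np.flatnonzero(np.abs(xbar - MU) > 3 * se) + 1
print("subgroups beyond the 3-sigma limits:", flagged)    # roughly a fifth of all points
```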
January 22, 2004 at 7:20 pm #94515

No, you told me that k = 1 - (1/c), then you multiplied Z.st by (1-k).
I told you that 1-k = 1/c and that you have done the same calculations that I have done to get the correct answer:
Z.lt = Z.st/c
and now we are in agreement that the formula
Z.lt = Z.st – Z.shift is BS
We have declared this string dead until after the debate. But if you would like to post a comment to it each day, I am all right with that because I like seeing the topic “Death blow to the 1.5 sigma shift” show up each day.
Statman

January 22, 2004 at 7:23 pm #94516
Reigle Stewart
Statman: You keep shifting your arguments.
You now state: “Therefore, with the ratio of S.lt to S.st equal to 1.5, (1-NORMDIST(1.5,0,1.118,TRUE))*2 = 0.1797, so about 18% of the subgroup shifts will fall beyond the 3 sigma control limits, and a shift will be detected, on average, within every 5 or 6 subgroups.” So 18% of the time,
the shift will go undetected? 18% of the time.
18% of the time!!! Seems to me the designer
should guardband against this … which is the
entire point to the 1.5 sigma shift. Before the
proverbial shift factor, designers just assumed
M=T. What about the first 4 subgroups that go
undetected, but the “shift” is there nonetheless?
Surely you are not saying this is acceptable?
And what about the case of n=30 during a
process qualification? Wow Statman, you are
really out there on this one.Respectfully,Reigle Stewart0January 22, 2004 at 7:28 pm #94517
Reigle Stewart (Participant): Statman: Dave Antis said to say Hi. As you know, Dave
Antis is president of Uniworld. He is looking
forward to your debate with Dr. Harry. Are you
still on? If so reply ASAP because I do not want
to proceed if you are not “in the game.” If I do
not receive a positive reply soon, I will assume
you are backing out.Respectfully,Reigle Stewart0January 22, 2004 at 7:47 pm #94518
Reigle Stewart (Participant): Statman: If you have not noticed, 1-k = 1/c is the same as
k=1-(1/c). Again, you will find that by dividing
both sides by S.st, we have Z.lt=Z.st(1-k). Thus,
Z.st(1-k) can be directly calibrated to Z.st/c. How
much simpler can it get? And yet you persistently
deny basic algegra.Reigle0January 22, 2004 at 7:50 pm #94520Never heard of him.
But tell him Hi for me and I look forward to meeting him. I did visit his —- website, which I had also never heard of. Tell him that he looked like a deer caught in the headlights on his CNBC appearance. I guess he is not used to being on TV.
I hope that you will have more than just gear heads – Mech-Es – at this debate.
Give me some possible dates.0January 22, 2004 at 8:23 pm #94522Ah. That's what I just said.
At least I can spell algebra0January 22, 2004 at 9:36 pm #94526
Gabriel (Participant): Hypothesis: Let S.lt be any real number. Let c be any real number other than zero. Let's define S.st = S.lt/c. Let's define k = 1 - 1/c.
Thesis: There exists an S.shift belonging to the real numbers such that: S.st = S.lt - S.shift
Demonstration: k = 1 - 1/c ==> c = 1/(1-k). S.st = S.lt/c = S.lt/[1/(1-k)] = S.lt*(1-k) = S.lt - k*S.lt. Because addition and division are closed operations on the real numbers, and c is a real number other than zero, k = 1 - 1/c is a real number. By the same closure property of multiplication, S.shift = k*S.lt is also a real number.
QED
So yes, S.st = S.lt - S.shift.
However, this is as trivial as saying that a number can be expressed as the sum of two other numbers. The question is whether there exists ONE S.shift or not, because we don't need Dr Harry to tell us that there always exists some number S.shift such that S.lt = S.st + S.shift. That number just happens to be S.lt - S.st.
And note that I didn't declare what S.st, S.lt, S.shift, c, or k are, and I didn't even mention standard deviations, chi-square distributions, or any other statistical concept, nor did I mention processes, specifications, subgroups, etc. It's just abstract algebra.
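(A throwaway numeric check in Python, with arbitrary illustrative values, just to show how trivial it is:)

c = 1.5                  # any real number other than zero
S_lt = 15.0              # any real number
S_st = S_lt / c          # 10.0
k = 1 - 1 / c            # 1/3
S_shift = k * S_lt       # 5.0
assert abs(S_st - (S_lt - S_shift)) < 1e-12   # S.st = S.lt - S.shift, trivially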
The shift exists, but we never said it was a constant. Gee, thanks for the new and revolutionary info.0January 22, 2004 at 10:10 pm #94528Yes Gabriel,
However, this is as trivial as saying that a number can be expressed as the sum of two other numbers. The question is whether there exists ONE S.shift or not, because we don't need Dr Harry to tell us that there always exists some number S.shift such that S.lt = S.st + S.shift. That number just happens to be S.lt - S.st.
You can always explain things more clearly than I can, and I am the one whose first language is English.
So we go back to my points from my first post on this topic in November:
The mysteries that Dr Harry has (knowingly or otherwise) uncovered (a quick numerical check of points 3 and 4 follows the list):
1. There is no constant shift of 1.5 (or any other number)
2. The shift from short term to long term depends on the Z.st value and the sigma inflation coefficient c
3. A six sigma process short term will be a 4 sigma process long term (owing to the worst-case sampling error c = 1.5), not a 4.5 sigma process long term
4. Because of point #3, a six sigma process will have a long term DPMO of 31.69, not 3.4.
5. A sigma shift of k in the numerator of the Z equation does not equate to a Z with a k*sigma inflation.
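A quick numerical check of points 3 and 4, as promised above (a sketch; scipy's norm.sf stands in for Excel's 1-NORMSDIST, and any small difference from the 31.69 figure comes down to the tail approximation used):

from scipy.stats import norm

Z_st, c = 6.0, 1.5
Z_lt = Z_st / c                # point 3: 6 sigma short term -> 4.0 sigma long term
dpmo = norm.sf(Z_lt) * 1e6     # one-sided tail area beyond Z = 4, in PPM
print(Z_lt, round(dpmo, 2))    # 4.0, ~31.67 PPM long term, not 3.4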
That's all I am trying to say.
Thanks and Best Regards,
Statman0January 22, 2004 at 10:13 pm #94529
Ken Feldman (Participant): Maybe I am missing something, but is the argument whether a shift exists or whether it is exactly 1.5? I have been debating the notion of the shift for the past decade, but never from the perspective of whether processes can shift over time. My issue is whether the absolute value of 1.5 applies to every organization. It has been my impression that each organization needs to assess its long term process variation and compare it to the short term variation to determine what an appropriate number is. Most of the wrangling in this thread has been to either prove or disprove the 1.5. I have been using control charts on a number of processes for 5 years. If I compare the variation over that time period and look at it on a composite month to month basis, I find that my process shifts only about .83 sigma. Shift happens, but why would I be inclined to use 1.5 rather than .83?
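One way to run that comparison on your own subgrouped data (a minimal sketch; the function name is hypothetical, and the Z.shift line assumes the Z.lt = Z.st/c relation argued earlier in this thread):

import numpy as np

def shift_from_data(data, z_st):
    """data: (subgroups, n) array. Returns (c, implied Z.shift) for a given Z.st."""
    s_st = np.sqrt(data.var(axis=1, ddof=1).mean())   # pooled within-subgroup sigma
    s_lt = data.ravel().std(ddof=1)                   # overall (long-term) sigma
    c = s_lt / s_st                                   # sigma inflation ratio
    return c, z_st * (1 - 1 / c)                      # Z.shift = Z.st - Z.st/c

Whether .83 comes out of a back-calculation like this, or from something else entirely, is exactly what Statman asks about a few posts below.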
0January 22, 2004 at 10:21 pm #94530
Praveen Gupta (Participant): It looks like your processes have shifted about .83. What kind of confidence interval do you have around that .83?
As far as I know, the 1.5 sigma shift in a process corresponds to the 3 sigma control limits for processes with a sample size of 4: the limits for subgroup means sit at 3*sigma/sqrt(4) = 1.5*sigma from the center line. Your interpretation can be thought of as saying: my processes never shift as much as 3 sigma, so why use 3 sigma control limits?
So, it is a methodology where the 1.5 sigma shift is built into the calculation to correlate with control chart theory. Of course, if you know the answer to the variation in your processes, you can use actual probabilities. However, you will have a hard time benchmarking against the industrywide practice of Six Sigma. You would be explaining to everybody that your six sigma level is different from the rest because of a smaller shift.
Another scenario: if everyone uses a different level of shift, then everyone would be calling their processes six sigma. Now, there could be a race: hey, my six sigma is better than yours because of a smaller shift!
Something to think about. I felt compelled to write about your thought-provoking ideas.
Regards,
0January 22, 2004 at 10:42 pm #94531
Reigle Stewart (Participant): Statman: Thank you for pointing out my misspelling of the
word “algegra.” Are you professionally secure
enough to point out your own errors as well?Reigle Stewart0January 22, 2004 at 11:03 pm #94532What do you think?
Do I sound like I am insecure?
I certainly haven’t spent the last 20 years of my life trying to justify a miscalculation … nor would I.0January 22, 2004 at 11:06 pm #94533Praveen,
you said:
“Another scenario: if everyone uses a different level of shift, then everyone would be calling their processes six sigma. Now, there could be a race: hey, my six sigma is better than yours because of a smaller shift!”
That is just flat silly.
The size of the shift is not of critical importance. What is important is that the correct compensation for long term inflation in the variation is used to precisely estimate the defect rate of the process.0January 23, 2004 at 12:01 am #94535
Reigle Stewart (Participant): Statman: You say "The size of the shift is not of critical importance. What is important is that the correct compensation for long term inflation in the variation is used to precisely estimate the defect rate of the process." First, I cannot believe that you say the "size of shift" doesn't matter. Wow,
talk about missing DFSS-101. Second, let me point out
that you must stop reiterating what Dr. Harry wrote about
16 years ago. I would recommend reading Dave Antis’s
book on DFSS (Prentice Hall, 2002). You know what they say
about size … I’m told it really does matter! Shift happens
baby. Now, even Praveen is silly. Where will this all end?
My guess is Arizona.Reigle Stewart0January 23, 2004 at 12:28 am #94538Stop your naked promotion of your group and its strategic partners!!!!
And I am not interested in your recommended reading list.
I will ask isixsigma to remove this and any other future business promotions.0January 23, 2004 at 1:37 am #94541
Reigle Stewart (Participant): Statman: Your last "post" speaks volumes. Sorry, I misspelled "post."
It should have been “poke.” Have you seen the movie
“Anger Management”? You might want to. Oops, there I go again, promoting. It's not my posts you should worry about. It's all of those posts in which you committed to your statistical position. I am sure you will see them again.Reigle Stewart0January 23, 2004 at 3:13 am #94542It is interesting that YOU are accusing someone of lacking integrity. Don't people with integrity do what they say they are going to do? I thought I remembered a string of comments from you saying that you were not going to participate in this forum anymore… And yet, here you are again pushing your 1.5 shift / 3.4 dpmo / Harry is the man agenda. If you had integrity, you would do what you said and stop taking up space.
I think you also said that there would be some great debate on this topic. Haven’t seen that yet either…0January 23, 2004 at 10:08 am #94550Reigle:
I agree with the first ‘principle’ you mentioned.
However, the strategy of designing products and processes according to a process capability is questionable, with or without a 1.5 sigma transient. Perhaps our definitions of a robust design differ. To my mind a robust design is achieved when non-linearity and scaling reduce the effect of statistical noise, which would include both variance and any small transient in the mean.
Now I'm sure there are exceptions to this rule in mechanical design, where it can be difficult to implement parameter design, especially with respect to mating parts. (A better approach would be to use bivariate tolerance design, because I have yet to come across an 'independent' mating component or process step.) But the 1.5 sigma shift has been put forward as a general rule, and even as an example of a Kuhnian paradigm shift; and nothing could be further from the truth.
You might be interested in the following link. (I have no association with them.)
http://www.som.cranfield.ac.uk/som/cbp/CBPupdate1-SixSigmaFriendOrFoe.pdf
Thanks for your efforts.
Andy
0January 23, 2004 at 1:25 pm #94557
Hein Jan van Veldhoven (Participant): Dear Gentlemen,
It seems that a public debate would be a good opportunity to escalate this mental combat you've been fighting on this forum.
I, myself, am based in Europe and know that there is a Six Sigma summit 27th through 30th of April in London. I suppose that in the US there must also be such a venue. I also assume that a company organizing such a venue would be more than willing to provide a stage for the above mentioned debate. You might well be at such a summit anyway. I also suppose that from within the audience (or indeed the audience itself) proper judges can be appointed.
So in true mathematical fashion: the problem can be solved, and is therefore no longer interesting, as it only requires basic hard work to get to the solution.
I implore all parties involved (Reigle Stewart, Statman and the party organizing a Six Sigma Summit) to perform the basic hard work to have this debate realized.
Kind Regards,
Hein-Jan0January 23, 2004 at 1:42 pm #94562
hestatis (Participant): Please change the title "Death Blow to…" till the debate is over. Reigle, for God's sake! Oh no, for Six Sigma's sake! No, for Dr. Harry's sake! Stop this and keep yourself busy organizing the event, or the entire community will lose interest in your postings, though they already laugh at your state. It seems to us that you are neither interested in nor capable of holding the debate. What is your problem, when Stan has come forward and offered a sponsorship to Statman? And Statman, now it is time to reveal your true name. Enough of your genius. Just reveal it and then go ahead and challenge the cowboy in the ring. Reigle, are you bent upon posting garbage after garbage and no debate? I hope it happens before a six-foot astronaut reaches Mars using a six sigma designed space ship. Will that shift from "Earth" to "Mars" make him, according to you, exactly four and a half (4.5) feet?
0January 23, 2004 at 4:16 pm #94570Hi Darth,
I am responding to your post so that the title of this string will show up again today. Just kidding. :)
I am working on a theory about the ratio of long term to short term variation of a process, and the number you said you have observed for the short-to-long difference in your process (0.83 sigma) actually fits a hypothesis of mine. But I am curious about a couple of things.
- How did you calculate this? Did you determine it by subtracting Z.lt from Z.st?
- Is your process centered at target? And if not, did you use |SL - Xbar| or |SL - T| in the numerator of the Z value?
- Is your process in (or nearly in) statistical control?
- What is your Z.st?
- What is the ratio S.lt/S.st (S is the standard deviation)?
Just want to check a few things out. I am assuming that this process performance is measured with a variables output.
As far as your question about the debate, I don't think anyone will argue against the long term variation being greater than the short term, even when the process demonstrates statistical control. This has been recognized for years. In fact, it is the concept behind the Cp vs Pp calculations, which have been around much longer than six sigma. The question is about the 1.5 shift and its appropriateness in estimating long term capability.
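For reference, the Cp vs Pp distinction in code form (a sketch reusing the same within/overall sigma estimates as the earlier snippets; the spec limits are whatever your process calls for):

import numpy as np

def cp_pp(data, lsl, usl):
    """Cp from short-term (within-subgroup) sigma, Pp from long-term (overall) sigma."""
    s_st = np.sqrt(data.var(axis=1, ddof=1).mean())
    s_lt = data.ravel().std(ddof=1)
    return (usl - lsl) / (6 * s_st), (usl - lsl) / (6 * s_lt)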
Thanks,
Statman
The forum ‘General’ is closed to new topics and replies.