
Simple Explanation of Shift


    #36143

    Reigle Stewart
    Participant

    TO ALL: Here is a very simple analogous explanation of
    the infamous shift factor. Consider two groups of data.
    Group A = 1,2,3,4. Group B = 5, 6,7,8. The “process
    spread” of group A is Range = Max – Min = 4 – 1 = 3. For
    group B the Range is also 3. The “total” spread when
    considering BOTH groups is: Range = 8 – 1 = 7. Now
    look at the “mean shift” between the two groups of data:
    If the range of each individual group remains constant
    over time, but the mean difference increases, the “total
    aggregate range” will increase. In other words, as D =
    Xbar.B – Xbar.A gets bigger, then R.total also gets bigger.
    Hence, we are able to use process spread to estimate
    process-centering error. In practice, we use one-way
    analysis of variance to make this computation: SST =
    SSW + SSB. So, SSB = SST – SSW. Again, we use the
    subgroup “variances” to report on “mean differences.”
    Now, the question becomes: “If I have a process that shifts
    and drifts randomly over time, but the short-term spread is
    consistent over time, then what is the long-term spread?”
    By computing the short-term spread “within groups” and
    the long-term spread over time, we can readily estimate
    the “shift” between groups. Respectfully, Reigle Stewart
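
    A minimal Python sketch of the same arithmetic (illustrative only; the numbers are the ones from the post): each group’s range stays fixed while the total range grows with the mean difference D.

        # Two groups with identical spread; only the mean difference D changes.
        group_a = [1, 2, 3, 4]
        for d in [4, 8, 12]:                        # increasing mean difference D
            group_b = [x + d for x in group_a]      # group B is group A shifted by D
            r_within = max(group_a) - min(group_a)  # 3 for both groups
            r_total = max(group_b) - min(group_a)   # total spread; grows with D
            print(f"D={d}: R.within={r_within}, R.total={r_total}")

    With D = 4 this reproduces the post’s numbers: R.within = 3 and R.total = 7.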

    #103364

    leroy
    Participant

    Now it makes sense. No, wait… sorry, no, it still seems like an illogical fudge factor. I thought I had it for a minute, but it slipped away. Thanks though. I was almost there. It’s just too elusive for me. A few more examples, and I think I’ll be on board. It’s a darn shame that there is nobody posting on the forum who was actually there when it was derived – or who can at least reach one of the original Six Sigma thought leaders. I’d really like to know the true genesis of the shift.

    #103365

    Phil Campus
    Participant

    And now let’s just examine our own belly buttons……..

    #103366

    Reigle Stewart
    Participant

    You can find the “original” thinking. Look at the header
    bar on this web page under the red button called “New
    eBook”. Reigle

    #103367

    Robert Butler
    Participant

    Leroy,
      I can’t put you in contact with “thought leaders”, but if you are interested in reading the only papers that seem to have been published on the subject, they are listed in the following post.
    https://www.isixsigma.com/forum/showmessage.asp?messageID=39663

    #103369

    tom.g
    Member

    I’m also curious why General Custer went into the Little Big Horn like he did. That made no sense either. Then there’s the German army’s march into Russia in the dead of winter – what was up with that? And Clinton, throwing away his presidential legacy over Monica!! What a doofus. Geeze… where’d that come from??? But as odd as those and other historical incidents are, they pale in comparison to the best and the brightest in industry and academia accepting the concept of a 1.5 sigma process shift. 3.4 DPMO is Six Sigma?? – no, stat people, it’s not – regardless of the convoluted rationales presented.

    #103372

    Sorry Reigle
    Member

    Wrong, it was Bill Smith, and Mikel wasn’t there. It is just another (biased) account from a storyteller.

    #103373

    Reigle Stewart
    Participant

    To All on iSixSigma:
     
    To help clarify the simplicity of the shift (which others try to make complicated), I am providing instructions for a most elementary Excel-based simulation. We will use a uniform distribution to keep things simple, but you can also use a normal distribution. The first time through, though, use the uniform, since it is simple and will illustrate the principles.
     
    Here are the steps for constructing the simulation.
     
    Step 1: Create a rational subgroup of data.  To do this, we must create a random number in cell locations A1, B1, C1, D1 and E1; i.e., put the Excel equation “ = rand() ” in each of the 5 cells. You have now created the first row of n = 5 random numbers.  This row constitutes or otherwise models a “rational subgroup.”
     
    Step 2: Create 50 rational subgroups.  To do this, we repeat step 1 for rows 2 through 50.  Now, we have g = 50 rows of n = 5 random numbers.  At this point, we now have a “process” that was operated over some period of time, but we are only sampling its performance on 50 occasions – each time making 5 measurements.
     
    Step 3: Compute the “range” for each of the g = 50 rows.  The range of each row is computed by subtracting the minimum value from the maximum value.  As an example, we would input the equation: = max(A1:E1) – min(A1:E1) for the first row of data.  This calculation would be repeated for each of the g = 50 subgroups (rows of data), thereby creating 50 unique ranges in column F.
     
    Step 4: Compute the “grand range” for the aggregate set of ng = 5*50 = 250 random numbers. The Excel equation for computing the grand range is: = max(A$1:E$50) – min(A$1:E$50).  Locate this equation in cell locations G1 through G50.  Doing so will create a column with the same exact value in all 50 cells of column G.
     
    Step 5: Create a simple line chart that graphs columns F and G.  One of the lines on the graph should be a straight horizontal line.  This is the composite range (i.e., grand range) of all 50 sets of data (i.e., it is the overall range of all 50 rational subgroups treated as a single set of data).
     
    Step 6: Draw Conclusions.  Notice that all of the subgroup ranges are less than the grand range.  In fact, the average within-group range is less than the grand range.  Why is this true?  Because no single within-group range can ever be bigger than the grand range; thus, the average within-group range will certainly be even less than the grand range.  The individual subgroup averages RANDOMLY bounce around, making the grand range larger than any given within-group range.  So, the total variability of a process will always be larger than that of any given “slice in time.”  If we average the “slices in time,” we have the “short-term” standard deviation.  If we concurrently consider all of the measurements (not just individual slices in time), we can compute the “long-term” standard deviation.  Thus, we have the ratio c = S.lt / S.st.  As the value “c” gets bigger, the average group-to-group “shift” also increases in magnitude.  In this manner, we are able to study “mean shift” by looking at the ratio of variances, just as the “F test” is able to test “mean difference” by looking at variance ratios.
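
    For those who prefer a script to a spreadsheet, here is a rough Python equivalent of steps 1 through 4 (a sketch only; the line chart of step 5 is left to the reader):

        import random

        random.seed(0)  # fix the seed for a reproducible trial
        g, n = 50, 5    # g = 50 rational subgroups of n = 5 uniform(0,1) values
        subgroups = [[random.random() for _ in range(n)] for _ in range(g)]

        ranges = [max(sg) - min(sg) for sg in subgroups]   # step 3: column F
        flat = [x for sg in subgroups for x in sg]
        grand_range = max(flat) - min(flat)                # step 4: column G

        print(f"average within-group range: {sum(ranges) / g:.3f}")
        print(f"grand range:                {grand_range:.3f}")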
     
    Regards,
     
    Reigle Stewart

    #103376

    Reigle Stewart
    Participant

    To the poster “Sorry Reigle.” OK, if you want me to be
    wrong, so be it … but that does not change its historical
    truthfulness, nor its mathematical validity. You cannot
    convince anyone with an “opinion.” Show us your
    evidence (verifiable facts), like any good Six Sigma
    practitioner would do. As I have said many times on this
    forum, and as Dr. Harry has published in many books and
    articles, Bill Smith had the intuitive belief that a 1.5 shift
    happens in a stable system, but it was Dr. Harry that “put
    the math to it” so as to demonstrate the plausibility of
    what Bill “suspected” to be true. This is also documented
    in the eBook currently being offered on this website. I am
    reminded of the old saying “you can lead a horse to
    water, but you cannot make it drink from the bucket.” If
    someone believes airplanes are “very unsafe,” then no
    matter how much scientific data and information you
    present to that person, they will never travel in an
    airplane. In the world of human psychology, this is called
    a “phobia.” Of course, phobias are not based in rational
    thought but founded in irrational thought; to the
    thinker, such things cannot be differentiated. Reigle
    Stewart.

    #103377

    Stephens
    Participant

    Here is the explanation I heard from a resource who WAS at Motorola at the time.
    They wanted to move the improvement bar up. At the time, the recommended Cpk was 1.33, so they considered moving it to 1.5 – a reasonable increase, but not a dramatic one.  This equates to 4.5 sigma, but unfortunately that is not a “cool logo”.  They liked the Six Sigma look… turn a 6 on its side… voilà… sigma.  Hence the development of the dreaded long-term versus short-term variation and the 1.5 shift.  Convenient, so that 4.5 becomes 6.0!
    Hence, it wasn’t statistical in nature at all, but rather marketing-driven… a desire to use 6 sigma instead of 4.5!
    Oh, and the same resource says she does not know of any study by Motorola in which “world class quality” companies were found to be operating at Six Sigma in their processes.  Another folklore story!

    #103379

    Reigle Stewart
    Participant

    Craig: Funny thing, the benchmarking data has been
    published for nearly 20 years in several Motorola
    documents, training materials, and other books and
    articles … I would suggest you investigate deeper than a
    “person” you happen to know that worked at Motorola
    (then). Oh, by the way, my wonderful wife (Susan) has
    worked at Motorola (Semiconductor Group) for 35 years
    (and is still working there). The validity of your comments
    seems to fit with the old phrase “a sample size of one
    does not make a universe.” Again, believe what you want
    … freedom of speech is the law, but the truthfulness of that
    speech is NOT guaranteed by that law. Have a great day
    and keep on blasting away without any facts. Cite me a
    reference that I can read and verify for myself what you
    are saying. This is not a Herculean task, just cite your
    references. Reigle

    #103381

    Gabriel
    Participant

    Reigle, now you have really screwed it up. Your concepts are more confused than I thought. You are simply confusing “sampling variation” with “shift”.
    COUNTER-EXAMPLE 1:
    Repeat your example, but with the following differences:
    – Instead of taking subgroups of 5, take subgroups of 20.
    – We will treat this test as multiple parallel simulations made with subgroup sizes ranging from 2 to 20.
    – For that, we compute the subgroup ranges for size 2 as = max(A1:B1) – min(A1:B1) through = max(A50:B50) – min(A50:B50), and the grand range for size 2 as = max(A1:B50) – min(A1:B50). The same is done for all subgroup sizes up to size 20: range = max(A1:T1) – min(A1:T1), …, = max(A50:T50) – min(A50:T50), and grand range = max(A1:T50) – min(A1:T50).
    Now draw 19 charts; each will show, for one subgroup size, the grand range as a straight line together with all 50 individual subgroup ranges.
    My God! The “shift” between the average subgroup range and the grand range is, for subgroup size 2, much larger than for subgroup size 5, which itself is much larger than for subgroup size 20! By the way, for subgroup size 20 the difference is nearly negligible! Wait a minute: these subgroups are all sequential samples of the same process. It is the same process, and there is no way to say that the process shifted more or less depending on the subgroup size used. It is history; either the process shifted or it did not.
    Know the following, Reigle: if a process shifted by this much, it shifted by this much whether you are taking subgroups of size 1, of size 20, or not measuring anything at all.
    COUNTER-EXAMPLE 2:
    Say that you have one batch of, say, 1,000,000 parts with a characteristic that is randomly and uniformly distributed between 0 and 1. These parts were produced by a process in sequential order, but now they are all mixed, so the original order is lost forever.
    Step 1: Take 5 samples, measure them, and write down the 5 values in cells A1:E1.
    Step 2: Repeat with another five and another five until you fill 50 subgroups (i.e., down to cells A50:E50).
    Steps 3 to 6: As you explained.
    You will arrive at exactly the same conclusions that you arrived at with your example. Now tell me: which process shifted in this case? When did it shift, given that the parts within each subgroup can represent any time in the process?
    COUNTER-EXAMPLE 3:
    Repeat exactly the six steps of your example, just replacing the individual subgroup ranges with the unbiased subgroup standard deviations (in Excel, use =stdev(A1:E1)/0.94 and so on; 0.94 is c4 for n = 5) and replacing the grand range with the grand standard deviation (=stdev(A1:E50)). Add to the chart a third straight line with the average of the unbiased subgroup standard deviations. Press F9 several times to see what you get in different trials of the same simulation.
    Holy S##T! Not only is the average standard deviation, in every trial, very, very close to the grand standard deviation – in some cases it is even greater!!!!! This means that… THE LONG TERM STANDARD DEVIATION CAN BE SMALLER THAN THE SHORT TERM STANDARD DEVIATION! A NEGATIVE SHIFT! EUREKA! (Of course, all this according to Reigle’s reasoning, which is wrong.)
    One thing is an outlier, another thing is sampling variation, and a third and different thing is a shift in the process.
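
    For completeness, a rough Python rendering of counter-examples 1 and 3 (a sketch, under the same uniform(0,1) assumptions as the Excel version):

        import random
        import statistics

        random.seed(0)
        data = [[random.random() for _ in range(20)] for _ in range(50)]

        # Counter-example 1: on the SAME data, the gap between the grand range
        # and the average subgroup range shrinks as the subgroup size grows.
        for n in (2, 5, 20):
            rows = [row[:n] for row in data]
            avg_r = sum(max(r) - min(r) for r in rows) / 50
            flat = [x for r in rows for x in r]
            print(f"n={n:2d}: avg subgroup range={avg_r:.3f}, "
                  f"grand range={max(flat) - min(flat):.3f}")

        # Counter-example 3: with unbiased standard deviations (c4 = 0.94 for
        # n = 5), the "short-term" estimate can even exceed the "long-term" one.
        rows5 = [row[:5] for row in data]
        avg_s = sum(statistics.stdev(r) / 0.94 for r in rows5) / 50
        grand_s = statistics.stdev([x for r in rows5 for x in r])
        print(f"avg unbiased subgroup stdev={avg_s:.3f}, grand stdev={grand_s:.3f}")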

    #103383

    Reigle Stewart
    Participant

    Gabriel: You state that “THE LONG TERM STANDARD DEVIATION CAN BE SMALLER THAN THE SHORT TERM STANDARD DEVIATION! A NEGATIVE SHIFT!” My answer is simple: you are not correcting for differences in the corresponding degrees of freedom. If the sums-of-squares between groups is zero, then the long-term variance S.lt^2 = SST / (ng – 1) is smaller than the short-term variance S.st^2 = SSW / (g(n – 1)). Even though SST = SSW, the standard deviations are not equal because the degrees of freedom are NOT equal. If you correct your equations to compensate for the differences in degrees of freedom, you will discover that S.lt = S.st when SSB = 0, but when SST > SSW, then SSB > 0. If there is no subgroup shifting (all subgroups have the same mean and variance), the between-group sums-of-squares will be ZERO once the degrees of freedom are corrected. As SSB increases, SST also increases. You really need a simple education on the components-of-variance model (i.e., 1-way ANOVA). With further investigation, you will also discover that Z.shift is a “natural artifact” of subgrouping. As I have so often stated, the Z.shift is NOT AN ACTUAL SHIFT IN THE UNIVERSE AVERAGE; IT IS A “TYPICAL” SHIFT IN SUBGROUP-TO-SUBGROUP AVERAGES. I have said many times that it is a “compensatory off-set” that models an expansion of the variance. Prove it to yourself … simply plot the cumulative sums-of-squares: SST, SSW, and SSB. For “typical” subgroup sizes (4 < n < 6), you will see first hand that SSB > 0. You will find that the “typical” Xbar – Xbarbar is about 1.5 sigma. Pretty soon, you might get the problem defined correctly. Only then will you pursue the correct answer. Math is never wrong, only the problem definitions. Reigle
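
    The sums-of-squares bookkeeping in this post is easy to check numerically; here is a short Python sketch using the degree-of-freedom conventions given above:

        import random

        random.seed(0)
        g, n = 50, 5
        data = [[random.random() for _ in range(n)] for _ in range(g)]
        flat = [x for row in data for x in row]
        grand_mean = sum(flat) / (g * n)

        sst = sum((x - grand_mean) ** 2 for x in flat)                        # total
        ssw = sum(sum((x - sum(row) / n) ** 2 for x in row) for row in data)  # within
        ssb = sst - ssw                                                       # SST = SSW + SSB

        s_lt = (sst / (g * n - 1)) ** 0.5    # long-term:  ng - 1 df
        s_st = (ssw / (g * (n - 1))) ** 0.5  # short-term: g(n - 1) df
        print(f"SSB = {ssb:.4f}, c = S.lt / S.st = {s_lt / s_st:.4f}")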

    #103384

    Mikel
    Member

    Craig,
    I have to agree with Reigle on this one. Your resource does not know of what they speak. What nonsense.

    #103385

    Mikel
    Member

    Hey Reigle – back to the parlour tricks?

    #103388

    howe
    Participant

    Stan,
    Ok, so can you explain why Six Sigma? Why not 5 Sigma, why not 7 Sigma or 4.5 Sigma? Also, is this a short-term or a long-term capability?
    P.S. You insulted Craig quickly but did not provide your version of the story.
     

    #103390

    Gabriel
    Participant

    I did not compare sums of squares without taking the degrees of freedom into account. I compared unbiased estimators of standard deviations, and the formula includes (n-1) or (ng-1) in the denominator. Exactly the same conclusions would have been reached by comparing variances.
    And you are still missing something: the subgroups are samples of a population. Even if the population were always exactly the same, the subgroups would have different averages and standard deviations. That is the very basis of random experiments, and it is not a shift. Throw 5 dice at once and record the average and the range. Repeat several times. You will not always get the same average and range. This process is perfectly stable and does not shift. The fact that the sample is different does not mean that the population changed at all.
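
    Gabriel’s dice experiment takes only a few lines of Python to try (a sketch; five fair dice per trial, twenty trials):

        import random

        random.seed(0)
        # A perfectly stable "process": nothing about it ever changes, yet the
        # subgroup averages and ranges wander through sampling variation alone.
        for trial in range(1, 21):
            dice = [random.randint(1, 6) for _ in range(5)]
            print(f"trial {trial:2d}: avg={sum(dice) / 5:.1f}, "
                  f"range={max(dice) - min(dice)}")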

    #103394

    Reigle Stewart
    Participant

    Gabriel: The shift factor is related to sampling means, not
    a shift in the population mean.

    #103400

    Mikel
    Member

    I did not insult Craig. His story is just so ridiculous that it does not merit any counter explanation.

    #103401

    howe
    Participant

    Stan,
    You didn’t answer the question. Can you explain where the name Six Sigma comes from? Why not 5 Sigma, why not 7 Sigma or 4.5 Sigma? Also, is this a short-term or a long-term capability? We all know SSA’s version of this naming. What’s your version?

    #103410

    Anonymous
    Guest

    Gabriel,
    I was looking forward to your response.  Can this issue be resolved once and for all?
    Andy U

    #103411

    Mikel
    Member

    Yes I can.
    With the exception of giving all credit for everything Six Sigma to Mikel, Reigle’s answer is correct.

    #103429

    Gabriel
    Participant

    No, it cannot.
    Reigle is right. Sample means are different from the population mean. If he wants to call that a shift in the sample means, what can I do? We used to call that sampling variation.
    If this is the shift Reigle is talking about, I don’t care about it. I care whether my process shifts, drifts, or changes its variation. Not whether the samples do. They do. That’s natural. That’s what random experiments are about. Throw 5 dice 20 times. Chances are that you will get lots of different results (the average will change from trial to trial, the range will change from trial to trial). That does not mean that something changed, shifted, inflated, or whatever in the “throwing dice” process. So who cares? Now, show me a poker of aces in 3 consecutive trials and I will jump in: “Wait a minute, what on earth is going on with this process?” That’s not natural behaviour. That’s instability. That’s a sign of a shift in the process.

    #103461

    Anonymous
    Guest

    Gabriel,
    I asked the following two questions:
    Why would anyone want to estimate a process performance – average and sigma – based on a subgroup?
    Why would anyone then want to take the worst case of several subgroups?
    If I have understood the debate so far, I believe the answer might be provided by the way Motorola engineers controlled their processes prior to 1984. At that time, few Motorola statisticians, quality engineers, or device engineers recognised the importance of rational subgrouping, so that in photo processing, for example, a subgroup actually corresponded to an individual wafer – with measurements taken from the top, centre, bottom, left, and right, providing a subgroup of 5.
    This approach was widely used across most of the semiconductor industry at the time – for example, at Inmos, TI (Houston), GI (Scotland), and Siemens in Ballanstrasse, but not at one excellent bipolar fab in Phoenix, where they had an excellent SPC engineer. He used wafer averages as individuals, while in Austin we – process and yield enhancement engineers – preferred to use multi-vari charts in order to study sources of variation, as recommended by Shewhart.
    Presumably, this is why certain individuals tried to ‘improve’ the statistical methods that had previously made Motorola’s wafer fabs so successful. Like many others, they were more interested in ‘statistical political correctness’ than actual improvements.
    In summary, subgroups are not entities and should not be treated as such. If a process engineer uses a subgroup size of 3 and takes 10 subgroups, giving n = 30, then the population x-bar-bar can be estimated to within about 0.6 sigma.
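
    One plausible reading of the “about 0.6 sigma” figure (an assumption; the post does not state its basis) is a three-standard-error bound on x-bar-bar:

        # If "within about 0.6 sigma" means a 3-standard-error bound on the
        # grand average, then with n = 30 individual measurements:
        n = 30
        print(3 / n ** 0.5)  # ~0.55 sigma, consistent with "about 0.6 sigma"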
    I believe this is an important issue because, as Stan previously pointed out, many USA companies are already uncompetitive due to lacklustre ‘system performance’, mainly because they have not understood the ‘step by step confirmation’ method (100% inspection) of the Toyota Production System. Allowing for a 1.5 sigma shift will only make a bad situation worse, which is why the Japanese company I worked for did not use any shift to design its world-class image setter.
    Respectfully,
    Andy Urquhart
     

