iSixSigma

sigma range


Viewing 22 posts - 1 through 22 (of 22 total)
    #33389

    Lynn
    Participant

    When a process has achieved 6 sigma, are there 3 or 6 sigmas on either side of the mean? I.e., are there 6 or 12 sigmas between the lower specification limit and the upper specification limit?

    #90202

    DaveG
    Participant

    There are 2 * 6 = 12 (actually 2 * 4.5 = 9, see 1.5 sigma shift).
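    A minimal sketch of the arithmetic behind these figures, assuming a normal distribution and the conventional 1.5 sigma shift (Python, using scipy only for the normal tail area); the printed values are the familiar one-sided defect rates.

    ```python
    # Sketch: tail areas behind the usual "six sigma" defect-rate figures.
    # Assumes normality and the conventional 1.5 sigma long-term mean shift.
    from scipy.stats import norm

    z_design = 6.0               # spec limit sits 6 sigma from the short-term mean
    z_shifted = z_design - 1.5   # effective distance after the assumed shift

    print(norm.sf(z_design) * 1e6)    # ~0.001 defects per million opportunities
    print(norm.sf(z_shifted) * 1e6)   # ~3.4 defects per million opportunities
    ```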

    #90203

    DogFood
    Participant

    9 sigma long term and 12 sigma short term

    #90221

    Dr. Scott
    Participant

    Lynn,
    Normally I would not respond to such a simple question (no offense intended), but the responses above prompted me to do so.
    There are at least 6 sigma between the mean and the spec limits on either side of a six sigma process.
    There is no such animal as long term or short term, or a 1.5 sigma shift. It is only a crutch for those unable to control their processes once improved.
    Dr. Scott

    #90222

    Tony Burns
    Member

    Dr Scott,
    You said: “There is no such animal as long term or short term, or a 1.5 sigma shift.”
    There is indeed such an animal. He was a horse called “Clever Hans” and he was part of what is known as the Pygmalion Effect, or the self-fulfilling prophecy. If you expect the mean to shift, it eventually will if nothing is done to manage the process … and of course it won’t find any reason to stop at 1.5 sigma. That is one of the reasons for quality programs: to keep processes on target. Dr Scott, it’s good to see someone able to see through the nonsense.
    Six sigma programs can bring great benefits to companies, because at last there is commitment to quality from senior management. At the same time, it is important to view all information critically.
    Dr Tony Burns
    [email protected]

    #90243

    Dr. Scott
    Participant

    Dr. Burns,
    I have toyed with the idea of creating a new program called “4 Sigma”, only because “4.5 Sigma” doesn’t have much of a ring to it.
    However, my marketing folks tell me I might have to reduce my price by a third since I am delivering two fewer sigma.
    Just a little humor,
    Dr. Scott

    #90251

    John Hickey
    Participant

    Dr. Scott
    Re: The 1.5 Sigma Shift
    Amen!!
    John Hickey

    #90275

    Reigle Stewart
    Participant

    Dr. Scott and John Hickey:
    It was said in an earlier post, “There is no such animal as long term or short term, or a 1.5 sigma shift. It is only a crutch for those unable to control their processes once improved.”
    Consider this … We frequently encounter situations where we must audit or verify the capability of a process with a highly limited and intermittent sample. In this kind of situation the sample may be limited to about 4 < n < 6 consecutive pieces. For a relatively high-volume process the 4 < n < 6 consecutive pieces would be gathered within a very short period of time … but we know that not all sources of random variation have been captured in such a limited window … so we continue to take “sets” of samples over time. Typically for a capability study we sample 25 < g < 100 groups or sets; several noted authors have recommended this sampling range for a capability study. So it is common to see n = 5 and g = 50, for ng = 250 observations.
    We also know that a process has many sources of random error, and it takes time for all of these sources to become known. So we have short-term random error and long-term random error. After an extended period of sampling we can use one-way ANOVA to compute the “within set” variation and the total variation. This gives us the sum of squares within sets (short term) and the total sum of squares (long term). In my experience of conducting process capability studies the short- and long-term sums of squares are never equal … SS long term is always larger than SS short term. The difference is due to the variation that occurs “between sets.” This type of variation (SS between sets) is computed as the sum over sets of n(Xbar.set − Xdoublebar)^2 … or simply SSB = SST − SSW. If SSB > 0, then the process has “natural shift and drift” due to random sources of variation. If the SSB term gets too big, the difference is no longer due to “random” causes alone. At the threshold value of the alternate hypothesis Ha for a typical sampling plan and alpha, you will find the “equivalent shift” for such a difference in variation to be about 1.5 sigma (due to random effects alone). So we can see that with limited sampling, we can expect some shift and drift due to expected random sampling error. So you see, it is possible (and likely) to have short-term and long-term variation when verifying the capability of a process.
    These calculations were worked out by Dr. Mikel Harry when he designed the MiniTab Six Sigma modules that MiniTab uses to this day. It was Dr. Terry Zimmer who programmed the modules when he worked for MiniTab. These modules have been thoroughly tested and have been available for around 6 years now. The modules make all of the necessary calculations I have just discussed and are annotated in the help menus.
    Regards,
    Reigle Stewart
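    A minimal sketch of the within/between decomposition described above, assuming g = 50 balanced sets of n = 5 pieces; the simulated set-to-set wander and the numbers it prints are illustrative, not taken from the post.

    ```python
    # Sketch: one-way ANOVA decomposition of subgrouped data into
    # within-set (short-term) and between-set (shift-and-drift) components.
    import numpy as np

    rng = np.random.default_rng(0)
    g, n = 50, 5
    # Simulated process: within-set sigma = 1, plus a small set-to-set wander.
    data = rng.normal(loc=rng.normal(0, 0.5, size=(g, 1)), scale=1.0, size=(g, n))

    grand_mean = data.mean()
    set_means = data.mean(axis=1)

    ss_within = ((data - set_means[:, None]) ** 2).sum()    # short-term
    ss_between = (n * (set_means - grand_mean) ** 2).sum()  # between sets
    ss_total = ((data - grand_mean) ** 2).sum()             # long-term

    print(ss_total, ss_within + ss_between)  # SST = SSW + SSB (up to rounding)
    ```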

    #90288

    Mikel
    Member

    Poor control methods do not justify the assumptions you are making.

    #90325

    Tony Burns
    Member

    Reigle,
    Your note focuses on the differences between “short term” and “long term” variances: “…After an extended period of sampling we can use one-way ANOVA to compute the ‘within set’ variation and the total variation. This gives us the sum of squares within sets (short term) and the total sum of squares (long term). …”
    This is precisely what the range chart of a control chart does! Range control charts compare the variation within groups (short term) to the variation between all the groups (long term). The ±1.5 sigma theory confuses differences in variances with differences in means. I suggest reading Wheeler, “Advanced Topics in SPC,” Chapters 4 and 6, for a detailed understanding.
    It would be extraordinary if we could do some theoretical calculations that would force our process means to drift, despite our best efforts to keep processes on target with minimum variation!
    Dr Tony Burns
    [email protected]
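    A minimal sketch of the comparison the range chart makes, assuming subgroups of n = 5 consecutive pieces and the standard Shewhart constants for that subgroup size (A2 = 0.577, D3 = 0, D4 = 2.114); the data are simulated for illustration only.

    ```python
    # Sketch: Xbar-R chart limits from subgrouped data. Within-subgroup ranges
    # set the limits; the Xbar chart then shows whether between-subgroup
    # (long-term) variation exceeds what short-term variation predicts.
    import numpy as np

    A2, D3, D4 = 0.577, 0.0, 2.114  # Shewhart constants for subgroup size n = 5

    def xbar_r_limits(subgroups):
        """subgroups: 2-D array, one row per subgroup of 5 consecutive pieces."""
        xbars = subgroups.mean(axis=1)
        ranges = subgroups.max(axis=1) - subgroups.min(axis=1)
        xdbar, rbar = xbars.mean(), ranges.mean()
        return {
            "xbar": (xdbar - A2 * rbar, xdbar, xdbar + A2 * rbar),  # LCL, CL, UCL
            "range": (D3 * rbar, rbar, D4 * rbar),
        }

    rng = np.random.default_rng(1)
    print(xbar_r_limits(rng.normal(10.0, 0.2, size=(50, 5))))
    ```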

    #90330

    Reigle Stewart
    Participant

    Dr. Tony Burns:
    Yes, I have the book you referenced and frequently use it; however, it has nothing to do with the discussion I offered concerning the shift factor. I guess we will just have to agree to disagree on this point.
    On another angle, you kindly stated: “It would be extraordinary if we could do some theoretical calculations that would force our process means to drift, despite our best efforts to keep processes on target with minimum variation!” I do believe you mean “to not drift” rather than “to drift,” or were you just adding some humor?
    The Old-Bald-Fat-Guy
    Reigle Stewart

    #90334

    Tony Burns
    Member

    Reigle,
    It’s great to see that you follow the Wheeler “bible”. It is a wonderful book that everyone on the forum should read and study. The chapters I referred to discuss control charts in great detail and the difference between “short term” variation within groups and “long term” variation. For Wheeler’s view on six sigma, you have to go to page 202:
    ” Failure to operate a process on target with minimum variance will inevitably result in dramatic increases in the Average Loss Per Unit of Production. Such losses may be severe and are always unnecessary.
    …Six-Sigma Quality and all other specification-based nostrums miss this point. … The sooner one wakes up to this fact of life, the sooner one can begin to compete.”
    And in a personal correspondence from him:
    “The only antidote to ignorance is education.  My most effective antidote is known as Understanding Variation, the Key to Managing Chaos.”
    Dr Tony Burns
    [email protected]

    #90349

    Reigle Stewart
    Participant

    Dr. Tony Burns:
    Thank you for the reference to page 202. My humble opinion is that Wheeler missed the point about six sigma. I guess it’s a matter of perspective. Without question, when you look at processing for six sigma (PFSS) the task is to minimize variation about the mean of a process and to keep the mean on target over time for those parameters that are truly critical … but how do you (as a process engineer) know which design features are critical?
    When you look at designing for six sigma (DFSS) the task is somewhat different. As a 30-year design engineering veteran I can assure you that my job IS NOT to put processes on target with minimum variance … my job (from the big-picture view) is to configure a product concept that has form, fit and function (by the customer’s edict) … and then specify optimal tolerances for each of those key design features that are critical to Y, where Y is just about anything … serviceability, reliability, maintainability, cost, delivery, satisfaction, and so on. The trick is to find the ones that are “critical to.” The tolerances I then assign must be “robust” to process (and environmental) variations that I cannot possibly foresee but know will exist at some point in time. I must also specify optimal nominal conditions for all design features (target values). Guess what? The laws of physics usually govern the assignment of target values and tolerances … not process engineers working with control charts. Their control charts are only as good as the specifications I assign to the product design.
    Remember, Dr. Burns, it is possible to EXERT PERFECT STATISTICAL CONTROL over a product characteristic that HAS THE WRONG SPECIFICATION. In that situation defects will be made by the truckload, no matter how many control charts you use. What Wheeler says is true … but only after the RIGHT design has been put in place, and putting the RIGHT design in place requires more than what the good Donald Wheeler has to offer.
    For example, a design engineer can use the 1.5 sigma shift concept to “test” how robust his or her design is to process centering error. By analytical test, the designer can explore different nominal conditions and tolerances that maximize “robustness” while concurrently meeting performance requirements. If the RIGHT combination is found by diligent engineering methods, the tolerances of the TRIVIAL MANY characteristics can be greatly widened … which means we can tolerate the variation and don’t even need to use control charts during production … REMEMBER, WE DON’T NEED TO CONTROL WHAT IS NOT SENSITIVE. We do not need to “put on target and minimize variation” for what is not sensitive to the design requirements. This is what causes us to waste so much money … too many design tolerances are way too tight. If we find the “vital few” CTQs and properly treat them during design, then we do not need to control the “trivial many.” And if we don’t need to control the trivial many, then we certainly do not waste our time “putting on target” and “minimizing variation” among the trivial many.
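    A minimal sketch of the kind of robustness test described above, assuming a simple three-part linear tolerance stack; the part nominals, sigmas, gap specification, and the choice to shift the gap mean by 1.5 sigma are all illustrative assumptions, not values from the post.

    ```python
    # Sketch: testing whether an assembly gap stays inside its spec when the
    # gap mean is allowed to shift by 1.5 sigma (the centering error being tested).
    # All nominals, tolerances and the gap spec below are made up for illustration.
    import math

    # (nominal, sigma) for a housing and two stacked parts; gap = housing - (a + b)
    housing = (50.00, 0.02)
    part_a = (24.90, 0.01)
    part_b = (24.90, 0.01)
    gap_spec = (0.05, 0.45)   # hypothetical lower/upper spec limits on the gap
    shift = 1.5               # assumed long-term centering error, in gap sigmas

    nominal_gap = housing[0] - part_a[0] - part_b[0]
    sigma_gap = math.sqrt(housing[1] ** 2 + part_a[1] ** 2 + part_b[1] ** 2)

    for mean_gap in (nominal_gap - shift * sigma_gap, nominal_gap + shift * sigma_gap):
        margin = min(mean_gap - gap_spec[0], gap_spec[1] - mean_gap) / sigma_gap
        print(f"shifted gap mean {mean_gap:.3f}: {margin:.1f} sigma to nearest limit")
    ```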

    #90351

    Dr. Scott
    Participant

    Reigle,
    You say:

    “We frequently encounter situations where we must audit or verify the capability of a process with a highly limited and intermittent sample. In this kind of situation the sample may be limited to about 4 < n < 6 consecutive pieces.”
    First, why would we be “highly limited” in measuring a characteristic that is critical to either the customer or the process? Based on statements made in your earlier posts, I am sure you would agree that if it is critical, it should be measured and controlled. If it is not, it should be ignored.
    More important, though, is the second part of your statement: “In this kind of situation the sample may be limited to about 4 < n < 6 consecutive pieces.” Why consecutive pieces? This is not a correct application of basic sampling procedures. Such procedures suggest that the within-subgroup variation should represent the natural variation in the process. Consecutive pieces are unlikely (as you point out) to do so. Therefore, consecutive-piece sampling would be an improper approach for process control purposes. There ARE times when one WOULD use consecutive sampling, but only in special circumstances, for example where measurement system concerns exist in a destructive test scenario. In such a case, the average of the subgroup would be charted using I and mR charts, not X-bar and R.
    Bottom line: sure, if you use improper sampling procedures (i.e., consecutive-piece sampling) you would expect more shift and drift than is represented by the within-subgroup variation. However, to say this shift is expected to be 1.5 sigma is just a shot in the dark. It might just as easily be 0.15 sigma or even 15 sigma. Again, proper sampling for control charts dictates that the within-subgroup variation (that represented by the R chart) represent the natural sources of variation in the process.
    Dr. Scott
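    A minimal sketch of the I and mR charting of subgroup averages mentioned above for the destructive-test case, using the standard individuals-chart constants (2.66 and 3.267); the series of averages is simulated for illustration.

    ```python
    # Sketch: I-mR limits applied to subgroup averages, as one might do when a
    # destructive test forces consecutive-piece subgroups. Data are illustrative.
    import numpy as np

    def i_mr_limits(values):
        """Individuals and moving-range limits for a 1-D series of averages."""
        values = np.asarray(values, dtype=float)
        moving_ranges = np.abs(np.diff(values))
        mr_bar = moving_ranges.mean()
        center = values.mean()
        return {
            "individuals": (center - 2.66 * mr_bar, center, center + 2.66 * mr_bar),
            "moving_range": (0.0, mr_bar, 3.267 * mr_bar),
        }

    rng = np.random.default_rng(3)
    subgroup_averages = rng.normal(0.0055, 0.0003, size=40)  # e.g. hourly averages
    print(i_mr_limits(subgroup_averages))
    ```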

    #90355

    Reigle Stewart
    Participant

    Dr. Scott:
    The use of destructive sampling often requires that we use a “highly limited” sample … due to the economics of testing or other mitigating circumstances. I do believe that you will find that “consecutive sampling” is not improper … that is why sequential procedures and sequential tests exist … such as Wald’s sequential test of the mean and sequential probability ratio tests (SPRT). Many well-known statistical procedures are based on sequential sampling techniques … even certain types of designed experiments are predicated on sequential sampling. In the real world it is not always possible to get a random sample … especially in continuous processes such as those in the chemical industry. With typical sample sizes and accepted levels of alpha, it is not possible to acquire a “0.15 sigma or even 15 sigma” shift.
    Respectfully,
    Reigle Stewart

    #90363

    Gabriel
    Participant

    Haven’t you heard about rational subgroups? It is the recommended sampling strategy in SPC, and it consists of taking the parts for each subgroup as close together as possible, so as to have a sample of what the process was doing at one instant.
    “Such procedures suggest that the within-subgroup variation should represent the natural variation in the process.”
    What’s wrong with that? Subgroup variation actually SHOULD represent variation due to common causes only. That’s the key of SPC.

    #90365

    Dr. Scott
    Participant

    Reigle,
    As I said before, I am NOT saying that sequential sampling should never be used. However, it should almost never be used for control charts (again, with very few exceptions, one of which I mentioned before). For SPRT or DOE there might well be good reasons for taking consecutive samples. But again, consecutive samples are generally not appropriate for control chart applications.
    With respect to “With typical sample sizes and accepted levels of alpha, it is not possible to acquire a ‘0.15 sigma or even 15 sigma’ shift”: with all due respect, you are simply mistaken. Consider this example. Two consecutive ounces of a chemical are taken from a 1000 gallon/hr process. The measure is “impurity.” The range of impurity in the first hour’s sample is 0.00001%, and its average is 0.005%. In hour two, another two consecutive ounces are sampled. Again, the sample range is 0.00001%, and now the average impurity is 0.006%. This behavior is repeated for thousands of hours with similar results.
    To sample this way for a control chart (as you previously prescribed) would result in an estimate of within-subgroup sigma of about 0.000009 and between-subgroup sigma of about 0.00033. So in this case the sigma shift as you have described it (i.e., between relative to within) would actually be about 37 sigma! Similarly, if the averages of the samples were much closer to each other, the shift could be 0.37 sigma.
    Perhaps we are misunderstanding each other. It wouldn’t be our first time (humor).
    Good to hear from you again,
    Dr. Scott
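    A minimal sketch of the within- versus between-subgroup arithmetic in the impurity example above, assuming subgroups of two consecutive ounces (d2 = 1.128 for n = 2) and hourly averages that wander between 0.005% and 0.006%; the simulation and its exact output are illustrative.

    ```python
    # Sketch: within- vs between-subgroup sigma when consecutive pieces are
    # nearly identical but the hourly averages wander.
    # Values are illustrative, patterned on the impurity example above.
    import numpy as np

    d2 = 1.128  # Shewhart d2 constant for subgroups of size n = 2

    rng = np.random.default_rng(2)
    subgroup_means = rng.uniform(0.005, 0.006, size=1000)  # hourly averages
    mean_range = 0.00001                                   # within-subgroup range

    sigma_within = mean_range / d2               # ~0.000009
    sigma_between = subgroup_means.std(ddof=1)   # ~0.0003

    print(sigma_within, sigma_between, sigma_between / sigma_within)  # ratio ~30-40
    ```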
     

    #90369

    Dr. Scott
    Participant

    Gabriel,
    Yes, I have heard of rational subgroups. One of the properties of a rational subgroup is that the individual measurements within the subgroup are independent. Too often in high-volume manufacturing, measurements of consecutive pieces are autocorrelated, that is, they lack independence. Such was the case in the example I presented.
    Dr. Scott
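    A minimal sketch of the independence check implied above: estimate the lag-1 autocorrelation of consecutive measurements, where values well away from zero suggest that consecutive pieces do not form a rational subgroup. The simulated data are illustrative.

    ```python
    # Sketch: lag-1 autocorrelation of consecutive piece measurements.
    # A strongly positive value means consecutive pieces are not independent,
    # so they make a poor rational subgroup. Data below are simulated.
    import numpy as np

    def lag1_autocorr(x):
        x = np.asarray(x, dtype=float)
        x = x - x.mean()
        return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

    rng = np.random.default_rng(4)
    noise = rng.normal(0, 1, size=500)
    slow_drift = np.cumsum(rng.normal(0, 0.2, size=500))  # wandering process level

    print(lag1_autocorr(noise))               # near 0: independent measurements
    print(lag1_autocorr(noise + slow_drift))  # well above 0: consecutive pieces correlated
    ```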

    #90370

    Reigle Stewart
    Participant

    Dr. Scott:
    I believe we are at a point where a grease board and some markers would be of huge benefit. Trying to describe such technical things without the benefit of drawings is frustrating. Your thinking is put forward with great conviction, and I admire that. Your skills and experience are most impressive. I have no doubt there is scientific merit to your arguments … we just need the ability to create drawings and talk in a highly interactive and graphical manner. Unfortunately, simple textual discussion will not get us there. Perhaps we will cross paths at a conference … be able to go to a bar with a pad of paper, a pencil, and cold beer … then we can really communicate our ideas, principles, and practices. Till then, we will do our best with words. Thank you for your time and effort on the subject at hand.
    The Old-Bald-Fat-Guy
    Reigle Stewart

    #90371

    Dr. Scott
    Participant

    Reigle,
    Again, I strongly agree with you. Verbal communication via text only is not robust enough to achieve thorough understanding.
    And, I am pleased to hear you say “cold beer” rather than the usual response of “A cold beer”. One just wouldn’t be enough.
    BTW, what is the next conference of great value in your opinion?
    Thanks,
    Dr. Scott

    #90393

    John Hickey
    Participant

    Dr. Scott
    Re: SPRT sequential sampling and its uses
    Be advised that sequential sampling is not strictly limited to its economic applications, as implied in your recent post to Reigle Stewart. Group sampling is allowed, and it has the effect of lowering the Type I/II errors of the test. Also, random sampling throughout the process is allowed. That is, aside from the sampling savings, the difference between, say, a binomial attribute sequential sampling plan and an old MIL-STD-105D X-F-2 plan results from the fixed sampling of the former as opposed to the variable sampling of the latter, which is unknown beforehand. The usual procedure in this case is to estimate the samples needed from the test’s computed Average Sample Numbers (ASN) and the selected Type I/II errors. Listed for your perusal at the website below are some application references:
    http://citeseer.nj.nec.com/context/78315/0
    Respectfully,
    John Hickey
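    A minimal sketch of a Wald sequential probability ratio test for a binomial attribute plan of the kind mentioned above, assuming illustrative quality levels p0 and p1, producer's and consumer's risks alpha and beta, and the usual decision boundaries A = (1 − beta)/alpha and B = beta/(1 − alpha).

    ```python
    # Sketch: Wald SPRT for fraction defective, deciding accept/reject/continue
    # as units are inspected. p0, p1 and the risks are illustrative values only.
    import math

    p0, p1 = 0.01, 0.05        # acceptable / rejectable fraction defective
    alpha, beta = 0.05, 0.10   # producer's / consumer's risks

    log_A = math.log((1 - beta) / alpha)   # reject H0 at or above this
    log_B = math.log(beta / (1 - alpha))   # accept H0 at or below this

    def sprt_state(n, defects):
        """Log likelihood ratio after n units with `defects` failures."""
        llr = (defects * math.log(p1 / p0)
               + (n - defects) * math.log((1 - p1) / (1 - p0)))
        if llr >= log_A:
            return "reject (process worse than p0)"
        if llr <= log_B:
            return "accept (process consistent with p0)"
        return "continue sampling"

    print(sprt_state(50, 0))   # likely "continue" or "accept"
    print(sprt_state(50, 6))   # likely "reject"
    ```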

    #90407

    John Hickey
    Participant

    Dr. Scott:
    Typo Correction on my message 33511
    Fixed sampling of the latter (MIL-STD-105) and variable sampling of the former (Sequential Sampling).
    Sorry
    John Hickey
     


The forum ‘General’ is closed to new topics and replies.