iSixSigma

Could you explain….?


Viewing 100 posts - 1 through 100 (of 119 total)
  • Author
    Posts
  • #27000

    planb
    Participant

    Could you explain in more detail what Six Sigma is in statistical terms? I do not understand the logic of the curve. If you have a wide curve extending toward six sigma, you will have more variation (as I saw in the chart in this newsletter).

    PB

    0
    #65667

    Tierradentro
    Participant

    The wider the curve, the greater the variation. You want a nice “tight” curve around the mean (average) so there is little variation. This increases your process sigma.

    0
    #65669

    Jim Parnella
    Participant

    planb,
    I’m e-mailing you some curves that will explain Six Sigma better than the ones referenced in this week’s newsletter.

    What you need to realize is that Motorola studies determined that processes will shift as much as 1.5 sigma from the desired process average. That shift is why a 6 sigma process delivers no more than 3.4 ppm defects rather than the roughly 2 parts per billion that would be expected from a centered 6 sigma process.

    I would have attached the charts to this message but apparently the web site doesn’t allow attachments.
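For readers who cannot receive the charts, the arithmetic can be reproduced with a few lines of Python (a sketch using only the standard library; in the shifted case only the near tail is counted, since it dominates):

```python
from statistics import NormalDist

z = NormalDist()  # standard normal distribution

# Centered 6 sigma process: defects are both tails beyond +/-6 sigma.
centered_ppb = 2 * (1 - z.cdf(6.0)) * 1e9
print(f"centered 6 sigma: {centered_ppb:.1f} defects per billion")

# Mean shifted 1.5 sigma toward one limit: the near tail is at
# 6 - 1.5 = 4.5 sigma and swamps the far tail.
shifted_ppm = (1 - z.cdf(4.5)) * 1e6
print(f"shifted 6 sigma:  {shifted_ppm:.2f} defects per million")
```

The first figure comes out near 2 ppb and the second near 3.4 ppm, matching the numbers quoted in the thread.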

    0
    #65703

    Thomas Pyzdek
    Participant

    I’ve heard countless rumors about these internal Motorola studies. Does anyone know of any PUBLISHED studies that document the claims of processes drifting 1.5 sigma? In six sigma we are supposed to be data driven. In God we trust, all others bring data! :)

    0
    #65735

    Engineer X
    Participant

    I have an Excel spreadsheet with randomly-generated normal data in it. This process is far more stable than almost any real industrial process. The “process mean” can “shift” from 1 to 2 sigma depending on how big a sample I take.

    People from Motorola will sometimes tell you that the mean shift came from studies done at Western Electric, but I’ve never seen a citation. I think the short answer is that somebody made it up because “Six Sigma” sounds better than “Four-Point-Five Sigma.”

    0
    #65761

    Cone
    Participant

    I am surprised that a guy with a book on 6 sigma and a consulting practice around it asks such a fundamental question.

    Truth is that it does not matter. You obviously have lots of data on long-term and short-term capability; what is the average shift?

    0
    #65903

    Cathy Crisostomo
    Participant

    Jim,
    You mentioned that you have graphs that will better explain the statistical foundation of Six Sigma and that you will be sending them to planb. I am currently doing some research to learn more about Six Sigma; could you also send me the graphs you were talking about?

    I would also like to ask if you know of some other sources of materials regarding Six Sigma. Would you recommend a book that is good for somebody who is just starting to learn about Six Sigma?

    Thank you very much.

    Cathy Crisostomo

    0
    #66008

    James Durowse
    Participant

    Yes, I understand the 1.5 sigma shift.

    0
    #66013

    Marc Richardson
    Participant

    Can we create an operational definition for the term “shift” for the purposes of this discussion? By shift do we mean any point beyond the 3 sigma control limits, seven points in a row without crossing the centerline, zone violations and so on, or are we discussing longer-term shifts such as seasonal shifts?
    Marc

    0
    #66021

    Joe Perito
    Participant

    The “shift” in the process comes from the statistical sense that the process has had an assignable cause forcing it to deviate from its historical mean, as shown by one of the recognizable trend patterns: one point beyond the three sigma limits, two out of three beyond two sigma, four out of five beyond one sigma, 7 on one side of the center line, 7 consistently rising, 7 falling, and 14 points cycling. All of these are a shift from a stable process.

    0
    #66028

    Ken Myers
    Participant

    Marc,

    The shift everyone speaks of is the natural movement of the process center from its target. This natural movement is not due to any one systematic cause, but to combinations of small random causes over time. As many have suggested, each process will vary or shift from its target over a long period of time by a certain amount. We have heard numbers like +/-1.5 sd’s from the target, while others have claimed different values. In any case, the idea here is that it is impossible to hold any process on target for a long period of time. Therefore, we often refer to this natural shift as the long-term variation of the process. The Six Sigma method uses a worst-case value for this long-term variation of +/-1.5 sd’s. This came from the work of Motorola, and is considered by some a controversial topic. However, it simply provides a reference within which stable processes typically perform. This is not to say your process will exhibit this much or this little shift from the process target; the actual shift in your process center from the desired target may be larger or smaller. However, it is important to have some way of expressing this performance because it is inherent in all processes.

    If we can assign a probability distribution to the process under observation, then using the long-term variation we can forecast the expected performance of the process over time. This predictive capability sets the stage for the Six Sigma improvement activity.

    Hopefully, my explanation above will not be considered philosophical by my colleagues out there. Most of this understanding was first expressed in general form by Dr. Walter Shewhart back in the late 1940’s. As such, it is not really a new concept, just an old one being put to practical use.

    Ken

    0
    #66040

    Cone
    Participant

    Ken,

    Wow, great explanation.

    The only thing I would add is use the +/- 1.5 unless you have real data on your process that reflects your true control process. If you have the data, use what it tells you.

    Everyone that is trying to take this back to SPC would lead us to believe that they actually use, interpret and react to SPC criteria, as defined so many years ago by Shewhart, on all of the critical inputs to their processes. Most 1) don’t know all of their critical inputs (we do this with data, not opinion sharing), 2) don’t strictly follow the rules, and 3) couldn’t possibly react to that much SPC. I would also contend that you have to have the right training and setup philosophy, which most do not. If you think you do, I would like to see your process at my expense; please email me.

    0
    #66049

    Ken Myers
    Participant

    Gary,

    Point taken… As long as we agree we are speaking of the response of a process, I can provide a typical example of using SPC to monitor the maximum shift in a process through various stages of improvement. The critical inputs were identified using a sequential experimental approach starting with a Hadamard-based screening experiment, followed by a response surface DOE, and closed using variation transmission analysis to set the tolerances for the critical inputs. This is a typical example used in many parts of our Six Sigma training curriculum.

    I would be glad to show you an excerpt, but cannot email out of the company.

    Ken

    0
    #66090

    Jim Grizzard
    Participant

    I recently received Six-Sigma Champion training. I was very interested in the 1.5 shift theory, particularly in what effect it may have relative to the requirements of QS9000 4.9.2 and PPAP (this should be the topic of much discussion!). I was under the impression that Motorola developed this theory. Further research reveals that John Bender was working on this idea in the early sixties with design tolerances. The term “Benderizing” tolerances came from his work (reference Graphic Science, Dec 1962). Davis R. Bothe credits Bender in Chapter 14, Understanding the Six-Sigma Philosophy, in his book Measuring Process Capability.

    0
    #66091

    Schuette
    Participant

    It’s Arthur Bender, Jr. not John Bender

    0
    #66092

    Ken Myers
    Participant

    Jim,

    Thanks for bringing this tidbit of information to our attention. I was trying to place when I first heard of this concept of mean shift, and you’re right. It was Mr. Bender who first proposed the concept in dimensional tolerancing practice. I had the occasion to both hear and meet Mr. Bender at a past AQC conference where he recounted the beginnings of what he called “Benderized Tolerancing”. He is, to say the least, an interesting man.

    More recently, a gentleman for whom I worked over the last 6 years proposed a development of Benderized Tolerancing, suggesting an approach called Process Tolerancing. This tolerancing approach advocates setting requirements both on the average, called an operating window, and on the individual values, which are the standard tolerances we are familiar with. If you are interested in more on this subject, Dr. W. Taylor has a site at http://www.variation.com.

    Thanks again for the comments.

    Ken

    0
    #66095

    Joe Perito
    Participant

    Planb, you have probably seen the normal curve drawn between the upper and lower spec limits as given by Engineering or your customers, right? That’s two vertical lines, one on the left of the curve and one on the right. The curve should not overlap either line if the process is in spec. If this curve fills all the space between the spec limits, the process cannot get off target or the ends of the curve will cross the spec limits and be out of spec. Customers will not allow this, and require the width of the curve to be smaller. In other words, the process standard deviation must be smaller. As you compress the width of the curve (and the process variation), it allows more room for the process average to drift back and forth and still be in spec. The normal curve perfectly matching (or just touching) the spec limits has +/-3 standard deviations fitting between the spec limits. If the process variation (standard deviation) can be made small enough, you will have six standard deviations fitting between the mean and each spec limit. The normal curve (besides its +/-3 sigma width) will be able to drift back and forth an additional 3 sigma without crossing the spec limits. This then is a +/-six sigma process. A process with this little variation will produce only 3.4 PPM, not the usual 2700 PPM (the complement of a 99.73%, +/-3 sigma process).

    0
    #66097

    Cone
    Participant

    A six sigma process that drifts three sigma, as you suggest, will produce 1350 ppm at the time of the shift. Not 3.4 as you wrote. This is confusing enough without stating it wrong. A six Sigma process that drifts 1.5 sigma will produce 3.4 ppm at the time of the shift.
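Cone's figures can be checked directly against the standard normal distribution (a quick Python sketch; it counts the one-sided near tail at the moment of the shift):

```python
from statistics import NormalDist

def sf(x):
    """Upper-tail probability of the standard normal, P(Z > x)."""
    return 1 - NormalDist().cdf(x)

# 6 sigma process whose mean drifts 3 sigma toward a limit:
print(f"3.0 sigma drift: {sf(6 - 3.0) * 1e6:.0f} ppm")
# 6 sigma process whose mean drifts 1.5 sigma:
print(f"1.5 sigma drift: {sf(6 - 1.5) * 1e6:.1f} ppm")
```

The 3 sigma drift gives about 1350 ppm and the 1.5 sigma drift about 3.4 ppm, exactly as stated in the post.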

    0
    #66206

    lin
    Participant

    Gary,

    Will you please provide more detail to your March 24 message?

    Thanks

    Bill

    0
    #67528

    Isabel
    Participant

    Could you explain with more details what is Six Sigma in statistical terms?

    0
    #70022

    Sukmi
    Member

    As a student I really want to know what sigma level the Motorola Company runs at, because it was the first company to pioneer the Six Sigma concept for all of us.
    Thanks

    0
    #70487

    Bonnie Paris
    Participant

    Gary,
    My interpretation of Mr. Pyzdek’s question is a bit different than yours, and I thought you were dismissive of his request for information.  He wants to know if there are any published studies related to the 1.5 sigma shift used by Motorola & others so he can better understand how this practice came about.  What’s the history of using the 1.5 sigma shift?
    You chastised him for asking “such a fundamental question” and went on to say “Truth is that it does not matter.”
    By this, do you mean that the truth does not matter?  History is important, especially today.  A firm understanding of principles and history — how we derive the equations we use so freely — is what enables us to learn and add to the knowledge of our world.
    Regards,
    Bonnie Paris

    0
    #72713

    Jim deVries
    Participant

    Ken,
    Great information, and I agree with your thoughts.  However, I am not familiar with Hadamard-based screening experiments.  How are they different from standard screening DOEs?  In addition, standard DOEs rely on having discrete inputs while RSMs allow you to combine discrete and continuous data inputs.  What are the advantages of utilizing a screening tool that relies on discrete inputs if you already have continuous and discrete inputs?
    Just a note: I fully agree that screening tools are under-utilized in determining potential causes, even though there is a danger of mixing interactions.
    Again, I appreciate your excellent comments.
    Cheers!
    Jim

    0
    #76325

    Jamie
    Participant

    I’m concerned less with where the 1.5 came from (though I agree with those that would like to see data behind it) than I am with the differences between “long” and “short” term. The terms long vs. short would lead one to believe that I collect samples and assign the term (long vs. short) in a discrete manner, i.e. I’ve either collected enough data to be long term or I have not, and if I have not, the 1.5 sigma shift applies. I tend to argue that this 1.5 number is actually continuous and should be treated as such. In working with people on projects this is a most confusing problem. For example, I ask the question “does the data you’ve collected over the last year include all the likely variation we expect in this system?” If the answer is yes, let’s treat it as long-term data. Then a new sample is taken for the last month and it’s treated as short-term data. Well, what if instead I have historical data from 2001 that is treated as long term for base capability, and now I have 5 months of data from 2002? Are the 5 months long term or short term, or are they really something in between? Is the 2001 data truly long term? The only right answer is that the exact 1.5 constant is wrong; the question is how wrong it is and when we can use it. I’m stuck in the case where I have data that is certainly between long and short term (which is probably most of the time). And don’t tell me to just use 0.75 :). Does anyone even know if the shift should follow a linear relationship as one goes from short- to long-term data?
    My personal opinion is we should stop using the 1.5 altogether; it only seems to confuse people (i.e. is NVA). Simply calculate capability/sigma level, then qualify the capability as “long”, “short” or, for me, “intermediate”.
    Jamie

    0
    #76935

    Rob Williamson
    Member

    I think that the 1.5 sigma (or similar allowance) is not intended to be ‘long’ or ‘short’ term, but is to allow for irreversible change rather than ‘statistical’ variation.
    A typical source of this variation might be tool wear in an ‘uncorrected’ process.
    For instance, in an injection moulding process, the mould (tooling) wears and the component size gradually increases. This continues until the ‘open loop’ moulding process becomes ‘closed loop’, by means of the mould being replaced.
    For this reason, it’s good if you can understand the sources of your variation.
    The decision of when to ‘close the loop’ in the process control (i.e. to make a correction) is a statistically complex one! However, for processes with this type of irreversible change, it’s not unreasonable to say that many people will make the correction when they’re sure the drift is significant and permanent. A 1.5 sigma permanent drift is probably a good typical variation up to which the drift is tolerated, and therefore a reasonable figure to subtract from the six-sigma ‘fluctuating’ variation.
    Rob Williamson

    0
    #81683

    Rob Vreeland
    Member

    I agree with you Jamie. Whether there is or isn’t a shift of 1.5 sigma, the founders of six sigma should not have altered the definition of a six sigma process to acknowledge it. The concept of a six sigma process is based on the use of the standard normal table and the calculation of a z-score. You look up your process yield in the table and find the corresponding z-score. That is your sigma level. 3.4 DPMO is a 4.5 sigma process (not a 6 sigma process). If they wanted to acknowledge the difference between short-term and long-term performance, they should have just added a qualifier to their statement of the sigma level. They turned a fairly easy concept into a point of confusion by messing with everyone’s understanding of normal statistics.
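The table lookup described above is just the inverse normal CDF. A sketch in Python (standard library only; the +1.5 "sigma level" convention shown is the one being debated in this thread, not a statistical necessity):

```python
from statistics import NormalDist

def z_score(dpmo: float) -> float:
    """z such that the upper-tail area equals the defect rate."""
    return NormalDist().inv_cdf(1 - dpmo / 1e6)

dpmo = 3.4
print(f"z-score for 3.4 DPMO:         {z_score(dpmo):.2f}")
print(f"'sigma level' with 1.5 shift: {z_score(dpmo) + 1.5:.2f}")
```

The straight z-score for 3.4 DPMO comes out at 4.5; only after adding the 1.5 shift does it read as "six sigma".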

    0
    #81736

    Schuette
    Participant

    For more information on the 1.5 sigma shift, please refer to a paper by Davis R. Bothe, “Statistical Reason for the 1.5 Sigma Shift”. This paper was published in Quality Engineering in 2002. It cites 3 studies that are 25 to 50 years old and have, by Mr. Bothe’s conclusion, statistical validity. So it wasn’t the six sigma practitioners that suggested the shift but research completed long before the inception of the six sigma methodology.

    0
    #81737

    Schuette
    Participant

    You may already have gotten this information but I will note it for you if you haven’t. In Quality Engineering 14(3), 479-487 (2002) Davis Bothe wrote an article about the shift you might find interesting.

    0
    #81739

    Schuette
    Participant

    There are several articles stating the statistical validity of the 1.5 sigma shift that date back to the 1950’s and are based around statistical tolerancing. The most recent is an article published in the Quality Engineering journal in 2002 by Davis Bothe, “Statistical Reason for the 1.5 Sigma Shift”. He believes the studies have significance and the shift is reasonable. As practitioners of six sigma we should follow the data, and if the data shows there is a shift then the shift needs to be part of our reasoning and analysis.

    0
    #81752

    John J. Flaig
    Participant

    Jamie,

    Excellent post. You have hit on some of the key statistical issues with the 1.5 sigma mean shift.

    1. Define long and short term. Based on the 6s philosophy, no matter how long or how much data you collect, it is always “short term”. This does NOT make sense.
    2. The idea that the process deteriorates is based on empirical observation, we are told, but there is NO DATA provided to support it. Further, it is from one company’s processes. Are all processes subject to the same “shift”?
    3. Is just the mean shifting? How about sigma? Are these shifts or trends?

    My suggestion is to model the data using EWMA charts and project the trend in both the mean and sigma with confidence intervals. Using this methodology eliminates the long term/short term issue and puts the whole process on a scientific footing rather than the “take my word for it” approach.

    John J. Flaig, Ph.D.
    Managing Director
    Applied Technology (http://www.e-AT-USA.com)
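The EWMA suggestion above can be sketched in a few lines of Python (hypothetical subgroup means; the smoothing constant lambda = 0.2 and the asymptotic 3 sigma limit are common textbook choices, not values from the post):

```python
from statistics import mean, stdev

def ewma(data, lam=0.2):
    """Exponentially weighted moving average of a sequence."""
    z = data[0]  # initialize at the first observation
    out = []
    for x in data:
        z = lam * x + (1 - lam) * z
        out.append(z)
    return out

# Hypothetical subgroup means from a process drifting upward.
xs = [10.1, 9.9, 10.0, 10.2, 10.1, 10.4, 10.3, 10.6, 10.5, 10.8]
smoothed = ewma(xs)

centre, s = mean(xs[:5]), stdev(xs[:5])          # baseline estimate
ucl = centre + 3 * s * (0.2 / (2 - 0.2)) ** 0.5  # asymptotic 3 sigma limit
signals = [i for i, z in enumerate(smoothed) if z > ucl]
print("EWMA values:", [round(z, 2) for z in smoothed])
print("first signal at index:", signals[0] if signals else None)
```

The chart flags the drift as a trend in the smoothed mean rather than forcing a fixed long-term/short-term split, which is the point Flaig is making.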

    0
    #81776

    boettler
    Member

    Jim,
    I am not questioning the 1.5 sigma shift.  What I am questioning is the idea of convoluting the standard normal distribution table to account for it. 
    I created my own z table in Excel and extended the z’s out to 6.  A z-score of 6 corresponds to about 1 defect per billion opportunities ( not 3.4 defects per million opportunities).  A z-score of 4.5 corresponds to about 3.4 defects per million opportunities.  In most six sigma calculators, a defect level of 3.4 DPMO will return a sigma level of 6 because it accounts for the 1.5 sigma shift between long term and short term data.
    Why wouldn’t you leave the standard normal distribution table as it is in every stat book we have ever seen, and acknowledge the difference between short-term and long-term sigma with a qualifier based on the data you used?  For example, if you used short-term data to calculate your first pass yield, then you would state that your process has a short-term sigma level of x.  If you wanted to estimate your long-term sigma level and first pass yield, you could then subtract 1.5 sigma and look up the corresponding yield in the standard normal table.
    I think the practitioners at Motorola took a fairly simple concept and complicated it for reasons that are still unclear to me.  Any thoughts?
     
     
     

    0
    #81806

    Schuette
    Participant

    My thoughts are these: first, it wasn’t Motorola that suggested the 1.5 shift; it was J. Gilson in 1951 in a published paper titled “A New Approach to Engineering Tolerances” (Machinery Publishing Co., London, 1951). The most recent is an article by Davis Bothe, “Statistical Reason for the 1.5 Sigma Shift” (Quality Engineering 14(3), 479-487, 2002). Each article illustrates the statistical reasoning for the shift.
    You can either accept the reasons or go off on your own based on your own data. Always remember that with six sigma we follow the data. If your data shows something else then follow it.

    0
    #84546

    Ravi.T.
    Participant

    Please explain the statistical terms of 6 sigma

    0
    #84843

    S.A.BENEDICT RAJESH
    Member

    What is the statistical explanation of Six Sigma?

    0
    #84847

    Chugh
    Participant

    Hi Cathy, if you receive a soft copy of any material on this topic, could you please send it to me also, at [email protected]
    rs
     
    manish

    0
    #84900

    Helper
    Participant

    Defined?  Well, statistically, it indicates that your process is 99.99966% effective at producing conformances, assuming normality of course.  This yield translates into 3.4 defects per million opportunities, versus the traditional 3 sigma paradigm, which has a yield of 2,700 non-conformances per million opportunities.  Whereas the traditional paradigm permitted a standard deviation no greater than 1/6 of the total allowable spread, under the practice of 6 Sigma, variation is not allowed to exceed 1/12 of the total allowable spread.
    Hope this helps………..
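The spread arithmetic above can be restated in Cpk terms (a rough Python sketch; the relation sigma level = 3 x Cpk for a centred process is a textbook rule of thumb, not something from the post):

```python
def cpk(mean, sd, lsl, usl):
    """Process capability index: distance to the nearest spec limit
    divided by 3 standard deviations."""
    return min(usl - mean, mean - lsl) / (3 * sd)

# A "six sigma" process: spec limits 6 sd either side of the mean,
# i.e. the spread is 12 sd wide, so Cpk = 2.0.
print(cpk(mean=0.0, sd=1.0, lsl=-6.0, usl=6.0))

# A traditional 3 sigma process just touching the limits gives Cpk = 1.0;
# the familiar Cpk >= 1.33 criterion corresponds to a 4 sigma process.
print(cpk(mean=0.0, sd=1.0, lsl=-4.0, usl=4.0))
```
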

    0
    #85600

    xu
    Participant

    Jim,
     
    I have long wondered where I was going wrong in being unable to interpret 3.4 defects as 6s.  When I finally got to undergo education at Motorola University, I was so rattled when we got to the calculation exercise that I had to leave the room for a while.

    Your explanation is the best I’ve seen, and I am able to reconcile it for myself.  I’m still left with one minor problem: if there is a long-term shift, then when taking the measurement, shouldn’t we be subtracting (as you stated), rather than adding, 1.5 in the calculation?

    The bottom line is that 4.5s performance is the objective of the “Six Sigma” process.  After all, what’s in a name?

    0
    #87053

    Anonymous
    Participant

    I’m a mathematician and it is quite evident that the six sigma stuff is just junk, particularly the 1.5 sigma shift!

    0
    #88926

    KY
    Participant

    I hope I am getting this message correct: in the classical traditional quality approach, 3 sigma process control means a Cpk of more than 1.33. What is the equivalent in the six sigma case?
    Thanks

    0
    #89228

    Shabbir Kagalwala
    Member

    Hi
    Please can you explain this 1.5 shift in more details
     
    Thanks
    Shabbir Kagalwala

    0
    #89229

    Reigle Stewart
    Participant

    I believe Dr. Harry has a new booklet on the topic entitled “Resolving the Mysteries of Six Sigma: Statistical Constructs and Engineering Rationale.” In addition, several of the questions (and answers) on Dr. Harry’s forum are directed to this issue. His forum is on the isixsigma.com website.

    0
    #89236

    Soumish Dev
    Member

    I am really not aware of the earlier discussions, but to make it concise: 1.5 is the assumed shift/drift between the long-term process capability and the short-term process capability.
    It is assumed that long-term process capability includes both special cause and common cause variations, while short-term process capability includes only the common cause variations (which is more of an ideal case). However, for reporting purposes the short-term process capability is used. To quantify this shift it has been standardized as 1.5, a value fixed by Motorola Inc. However, this shift need not be 1.5.

    0
    #89251

    Reigle Stewart
    Participant

    Soumish Dev:
    What you say is generally true, but there are exceptions. You may want to read the other threads that I have recently been involved with (since they all relate to the 1.5 sigma shift). Please remember that the shift factor is not a “literal” shift in the mean. It is a compensatory offset to account for long-term inflation in the standard deviation.

    0
    #89702

    ROSS
    Member

    People interested in the origins of the 1.5 sigma drift should read Bender’s original paper. In particular, note in the opening paragraphs explaining what the paper relates to:
    “… inputs from processes that are off-center or lack statistical control.”
    The paper is about tolerancing. It does not relate to the in-control process most people are involved in.
    If a process is operating within control limits, the process is stable and predictable. If the mean drifts, by 1.5 sigma or otherwise, it will begin to produce points outside the control limits. Such a process is not predictable. It may begin to produce defects, no matter where the specification limits are located, even if they have been pushed out to 6 sigma!
     

    0
    #89886

    Lynn
    Participant

    When six sigma is achieved for a process, are there 3 or 6 sigmas between the mean and the upper and lower spec limits (i.e. are there 6 or 12 sigmas between the upper and lower spec limits)?  I have seen conflicting information on this topic.

    0
    #89988

    Cannizzo
    Participant

    12.
    Here’s a more detailed answer to your question:
    https://www.isixsigma.com/library/content/c010101a.asp

    0
    #90914

    G.Prabhakar
    Participant

    Please explain and let me know the calculations of six sigma. Also, please send me classic examples where the 6 sigma concept succeeded, and failed, if there are any such cases.

    0
    #90934

    Schuette
    Participant

    Ken, I liked your post so much that I felt moved to edit it a touch, with a few added observations from my own experiences. Hope you don’t mind.

    There is variation over time in all processes. This is due to reasons that are usually physical, mechanical, or technical, or any combination thereof. In a process whose output depends on many sub-processes functioning simultaneously, this variation is considered natural, generally due to very slight movements in combinations of sub-processes which together produce the final output of the overall process. This creates a natural shift, which ordinarily does not fall outside of the desired upper or lower tolerances. This shift is therefore the natural movement of the process center from its target. As stated, this natural movement is not necessarily due to any one systematic cause, but may be due to combinations of small random causes over time. As many have suggested, each process will vary or shift from its target over a long period of time by a certain amount. We have heard numbers like +/-1.5 sd’s from the target while others have claimed different values. In any case, the idea is that it is impossible for any process left to operate on its own to hold on target for a long period of time. Therefore, we often refer to this natural shift as the long-term variation of the process. The Six Sigma method uses a worst-case value for this long-term variation of +/-1.5 sd’s. This came from the work of Motorola, and is considered by some a controversial topic. However, it simply provides a reference within which otherwise stable processes typically perform. This is not to say your process will exhibit this much or this little shift from the process target; the actual shift in your process center from the desired target may be larger or smaller. However, it is important to have some way of expressing this performance because it is inherent in all processes.

    The benefit of this kind of “handicapped” shift value is that it can help the operators of a process determine whether or not adjustments must be made to the sub-processes that effect the change or “shift”, so as not to end up chasing the target value unnecessarily. After all, if this shift is a natural one, it may well return to normal on its own; just as it shifted out, it could shift back. Indeed, if one tries to control this natural shift by adjusting the process from random points within the tolerance range, he may end up out of control. Why? Because when he made adjustments at some arbitrary value within the tolerance range, the natural shift in the process may not have finished shifting; he has in essence shifted the target value.

    It is when the shift is in excess of this “natural” shift that the operator of the process is alerted that something has possibly gone awry.

    If we can assign a probability distribution to the process under observation, then using the long-term variation we can forecast the expected performance of the process over time. This predictive capability sets the stage for the Six Sigma improvement activity.
    0
    #92442

    Murray
    Participant

    Ken,
    This is the first posting in this entire conversation that I read and thought, OK, here’s someone who really understands what the heck is going on. Nice explanation; it should be in BOLD for everyone to see.
    Nick Ruhmann
    Certified Blackbelt, Tenneco Automotive

    0
    #94256

    Alfonso Mireles
    Participant

    I’m very interested in getting a copy of these graphs.  Do you think it is possible to send them to [email protected] ?

    0
    #94259

    Alfonso Mireles
    Participant

    Jim, where can I get this paper?

    0
    #94371

    Kavita Nadkarni
    Participant

    Could you explain the meaning of ‘six process standard deviations’ and the terms ‘lower or upper specification limits’? Unless we know the meaning of these basics, with examples, it is very difficult to understand the statistical presentation of a six sigma process. Also, how does the deviation matter? Please mail your reply to my e-mail ID as mentioned above.
    Thanks
    Kavita
     

    0
    #94378

    mman
    Participant

    Kavita
    Some time ago I posted an excellent example allowing everybody (even non-technical readers) to understand Cp, USL, LSL, UCL, LCL, VOC, etc. It demonstrated those concepts through a metaphor. Please go through the messages to find it; it was titled: “The car in the garage: a simple metaphor.”
    Good luck.      MMAN

    0
    #94380

    c.s.sivakumar
    Participant

    Hi !
    I can understand your confusion.
    Standard deviation is the dispersion/variation from the average of the population, or of a sample of observations of a variable under study.
    A specification is the requirement. Say you are ordering a noon meal from a nearby hotel, stating that the delivery should be made at 12.00 noon, and you can tolerate a span of 10 minutes within which you expect the delivery to be made at your office (5 minutes before 12.00 and 5 minutes after 12.00, i.e. between 11.55 am and 12.05 pm).
    Here 11.55 am is the lower specification limit and 12.05 pm is the upper specification limit. The target for the supplier is 12.00 noon.
    If you start plotting the delivery time for 30 days, you can observe the dispersion of the actual delivery times from the target time.
    The deviation is nothing but the variation of the actual delivery times around the average delivery time.
    Am I clear?
    Please give feedback to my e-mail address [email protected]
    with regards siva
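Siva’s noon-meal metaphor can be put into a few lines of Python. This is only a sketch; the delivery times below are made-up illustrative data:

```python
# Spec limits from the noon-meal example, in minutes after 11:00:
# LSL = 11:55, target = 12:00, USL = 12:05
LSL, TARGET, USL = 55, 60, 65

# Hypothetical delivery times observed over ten days (minutes after 11:00)
deliveries = [58, 61, 63, 59, 60, 66, 57, 62, 64, 54]

mean = sum(deliveries) / len(deliveries)
deviations = [t - mean for t in deliveries]   # dispersion about the average

out_of_spec = [t for t in deliveries if t < LSL or t > USL]
print(f"average delivery: {mean:.1f} minutes after 11:00")
print(f"out of spec: {len(out_of_spec)} of {len(deliveries)} deliveries")
```

Plotting `deliveries` over the 30 days Siva mentions would show this dispersion around the 12.00 target directly.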

    0
    #102898

    Suresh Narayan
    Member

    Sir,
Can you send the attachments on Cp, LSL, etc. to me at my above email ID?
    Suresh
    [email protected]

    0
    #102901

    Suresh Narayan
    Member

Hi Jim, can you send the 1.5 sigma shift graphs to my email ID below?
    Suresh
    [email protected]

    0
    #103162

    Michael Clark
    Participant

    I just ran across this 1.5 sigma shift while trying to reconcile some notes from 6-sigma class with the standard normal distribution table.  It is disappointing that the whole 6-sigma culture could not maintain mathematical purity–and had to resort to a fudge factor to explain variance from an ‘expected’ value.
    My grad school prof once lectured me on honesty with data.  It seems the 6-sigma culture has gotten off track with the honesty paradigm. 
    I’m not the expert and don’t have all the answers, but just had to post my little frustration.

    0
    #103165

    Ken Feldman
    Participant

    If you ignore all the theatrics associated with the 1.5 shift and look at it from a simpler perspective…  In the short run, there is likely to be less variation in the observed output of a process than in the long run.  This is due to a wider time frame in which wider swings in process variation can occur.  As an analogy…a small number of samples will not give you a true picture of a process distribution.  The odds of observing some of the extremes is less than if you took a large number of samples.  Given time and data, you can calculate what that “shift” is for your process.  It is not a lack of mathematical purity that is at fault just the impossibility of creating a universal truth as is possible in much of math and physics.  Even Dr. Harry never professed that his 1.5 is a universal truth but applicable for a very specific and narrow set of assumptions.  That is why many of us ignore all the hoopla about the 1.5 and seek to calculate our own process drift.
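Darth’s short-run vs. long-run point is easy to demonstrate with simulated data. This is a sketch only; the drift size, noise level, and sample sizes are arbitrary assumptions:

```python
import random
random.seed(1)

# Simulate 50 days of a process whose center wanders slowly (long-run drift)
data = []
mu = 100.0
for day in range(50):
    mu += random.gauss(0, 0.2)                           # slow day-to-day drift
    data += [random.gauss(mu, 1.0) for _ in range(20)]   # 20 parts measured per day

def stdev(xs):
    """Sample standard deviation."""
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

short_run = stdev(data[:20])   # variation seen in a single day
long_run = stdev(data)         # variation seen across all 50 days
print(f"short-run sigma: {short_run:.2f}, long-run sigma: {long_run:.2f}")
# The long-run estimate also picks up the drift, so it is typically larger.
```

Running this with your own process data, as Darth suggests, is how you would calculate your process’s actual “shift” rather than assuming 1.5.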

    0
    #103172

    Gabriel
    Participant

    Darth,
Yes, but from what Reigle told us, Mr Harry supports his 1.5 shift on the sampling variation (lack of perfect certainty in the figures you derive from the sample) and not on the fact that any real-life process shifts, drifts and has other instabilities in the long run. They are two clearly distinguishable things. You have sampling variation even if you take the sample from a static population (not a process); would you call that a shift? And your process will still shift and drift even if you never take a sample of it in your life; how does sampling variation explain that? Also, sampling variation can give you a too-optimistic or a too-pessimistic estimate of your process variation. The shifts and drifts, on the other hand, can only make it worse. And believe me when I say that this shift can be much worse than 1.5 sigma if appropriate process control measures are not in place, and that it will be much smaller than 1.5 sigma after an appropriate technique is used to identify and eliminate special causes of variation and to keep the process under control.
    Michael,
    Welcome to the 1.5SSMC (the “1.5-Sigma-Shift Myth” Club).

    0
    #109285

    Steve Prevette
    Member

    Pure drivel.  The 1.5 sigma shift doesn’t exist.  If you are doing SPC properly, with a reasonable set of trend rules (such as http://www.hanford.gov/safety/vpp/spc.htm), there is no way the process will vary by 1.5 sigma over the long run without being detected.
    My opinion at least.
    Steve Prevette
pretty much dyed in the wool against Six Sigma
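One of the standard trend rules Steve alludes to is the “eight consecutive points on one side of the centerline” rule. A minimal sketch (with made-up data) shows how quickly it flags a sustained 1.5 sigma shift:

```python
def eight_in_a_row(points, center):
    """Return indices where 8 consecutive points fall on one side of the
    centerline (a classic SPC trend rule)."""
    signals = []
    for i in range(len(points) - 7):
        window = points[i:i + 8]
        if all(p > center for p in window) or all(p < center for p in window):
            signals.append(i + 7)  # the rule fires on the 8th consecutive point
    return signals

# Illustrative data: a stable process around 0, then a sustained 1.5 sigma shift
stable = [0.2, -0.3, 0.1, -0.1, 0.4, -0.2, 0.3, -0.4, 0.1, -0.1]
shifted = [1.5 + d for d in (0.1, -0.2, 0.3, 0.0, -0.1, 0.2, 0.1, -0.3)]
points = stable + shifted

print(eight_in_a_row(points, center=0.0))  # -> [17]: flagged on the 8th shifted point
```

In other words, a properly run chart detects a sustained 1.5 sigma shift within a handful of points, which is exactly Steve’s objection to treating such a shift as permanently undetected.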

    0
    #109288

    Mikel
    Member

Not only dyed in the wool, but slow to respond (the post you responded to is from 2001)
    I agree with you about the drivel.

    0
    #109290

    Loehr
    Member

    Hi Steve,
I’m not a big fan of the 1.5 sigma shift either, but I came across an article that did a pretty good job of explaining why it might exist. Take a look at “Statistical Reason for the 1.5 Sigma Shift,” by Davis Bothe in Quality Engineering, Vol. 14, No. 3, pp. 479-487.
It made me think a little bit more about the possibility of such shifts occurring.
    Hope this helps.

    0
    #109308

    Ken Myers
    Participant

I happened to drop in to look at the postings after a long working stint in Europe.  It’s good to see that many appreciate a few of my past ramblings (postings). 
I referred to Davis Bothe in the past concerning the explanation of using a +/-1.5 SD shift in the process mean for computing the sigma quality level.  I should mention that his book, “Measuring Process Capability,” provides a reasonable explanation of this concept in Chapter 14, sections 14.2 and 14.3.  Feel free to look here:
    http://www.amazon.com/exec/obidos/tg/detail/-/0070066523/qid=1098160664/sr=1-7/ref=sr_1_7/002-4014706-6651229?v=glance&s=books
    if you would like to obtain a copy of Davis’ book.
     
    Cheers,
    Ken

    0
    #110242

    Madhu
    Participant

I am Madhu Maroutu, doing my course on total quality management, and I seek your help to understand what Six Sigma actually is and how it was developed (the basic concept). Why can’t we go with process capability; why six sigma?
 
Thanking you,
 
Yours sincerely,
Madhu Maroutu

    0
    #110275

    Mike Carnell
    Participant

    madhu,
    There must be some kind of secret society of you academic TQMers out there that just has to put up one stupid post per day.
    If you want to understand SS there is a blue bar to your left and at the top it says “New to Six Sigma?” – that is you. You may also want to do something like “New to research” as well.
    How it was developed is irrelevant and if you read the new to Six Sigma section you will understand your last question is pretty ignorant.
    Good luck

    0
    #110276

    Mike Carnell
    Participant

You tied your post to another post that was over a year old. What is that all about?

    0
    #110277

    Stevo
    Member

Mike, you’re not taking your good friend’s advice regarding “not posting until after noon”.
     
    Stevo

    0
    #110279

    Mike Carnell
    Participant

    Stevo,
We are in South Africa so we are EST +7. Maybe it is just an adjustment period?
    This thing is like a curse.
    Regards

    0
    #110284

    u r pointless
    Member

Carnell – why do you feel the need to “belittle” everyone?  Does that make you feel like a bigger man?  Take your self-righteous attitude and beat it, unless you want to be civil and actually help people for a change.
     
    JUST MY OPINION……GOOD LUCK!

    0
    #110670

    Uthappa
    Member

Please explain what you mean by six standard deviations from the mean, with the curve formulae and their significance.

    0
    #113802

    Saurabh Tyagi
    Member

    Hi Kavita,
To know the standard deviation, first of all you should know the definition of variance.
Variance is an index that tells us the spread of the process about its mean, and the square root of variance is known as the standard deviation.
The formula is:
 Std. Dev. (sigma) = sqrt( sum of (x - xbar)^2 / (n - 1) )
Specification is nothing but the boundaries given by the customer to run his part or process within.
USL is the upper specification limit, meaning the tolerance or relaxation given by the customer on the positive side, while LSL is the lower specification limit, meaning the tolerance or relaxation given on the negative side.
With the help of these specifications we can easily drive the control limits, which are known as LCL & UCL, meaning lower control limit and upper control limit.
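The sample standard deviation formula above can be sketched directly in Python (the measurements are made-up illustrative data):

```python
# Sample standard deviation exactly as in the formula above:
#   sigma = sqrt( sum((x - xbar)^2) / (n - 1) )
def sample_std_dev(xs):
    n = len(xs)
    xbar = sum(xs) / n                       # sample mean
    ss = sum((x - xbar) ** 2 for x in xs)    # sum of squared deviations
    return (ss / (n - 1)) ** 0.5

measurements = [9.8, 10.1, 10.0, 10.3, 9.9, 10.2]  # made-up part measurements
print(round(sample_std_dev(measurements), 4))      # -> 0.1871
```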

    0
    #113804

    Varun
    Member

    Saurabh,
One comment on your response to Kavita.  The spec limits and control limits are in a sense independent.  So, when you say that “with the help of these specifications (LSL/USL), we can easily drive the control limits” – it is somewhat misleading. 
Knowledge of the spec limits (LSL/USL) can give one a better understanding of process capability, given that the process data meet certain statistical requirements. 
Control limits are mathematically independent of the specification limits and are related to the statistical parameters of the process.   
    Regards,
    Varun
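Varun’s point can be illustrated with an individuals chart, where the control limits come from the process data (mean +/- 2.66 times the average moving range, the standard individuals-chart constant) and the spec limits never enter the calculation. The data below are illustrative:

```python
# Illustrative process data (individuals chart)
data = [10.2, 10.5, 10.1, 10.4, 10.3, 10.6, 10.2, 10.4, 10.5, 10.3]
LSL, USL = 9.0, 12.0   # customer spec limits: not used in the control-limit math

mean = sum(data) / len(data)
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Standard individuals-chart limits: mean +/- 2.66 * MR-bar
UCL = mean + 2.66 * mr_bar
LCL = mean - 2.66 * mr_bar
print(f"control limits: {LCL:.2f} to {UCL:.2f} (spec limits: {LSL} to {USL})")
```

Changing `LSL` and `USL` changes nothing in the computed `LCL`/`UCL`, which is exactly the independence Varun describes.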

    0
    #114568

    Wagh
    Participant

Please explain and let me know the calculations of six sigma. Also, please send me the classic examples where the six sigma concept succeeded, and failed if any cases exist.

    0
    #114584

    Mikel
    Member

    Would you like that in ten words or less?

    0
    #115962

    abhai
    Participant

Ma’am,
Please let me know why six sigma is called a statistical term.

    0
    #117224

    a k narvilkar
    Participant

A six sigma methodology means continuous improvement in any process through statistical quality control, such as analysing the data for diesel consumption in locomotives used for coal transportation. The data would reveal the consumption patterns with respect to the amount of coal transported. The process also involves other activities, like maintenance and shunting of locos and wagons. The actual consumption, or the actual requirement of diesel, depends on the operator’s skill and many other factors, including loco performance, and so it varies.
     

    0
    #117914

    dibyalochandas
    Participant

What is the use of six sigma in industry? Give some examples and explain.

    0
    #122744

    A M
    Participant

If variation is low but the entire distribution is below the LSL, how would you depict the sigma level (process sigma) using (X - mu)/std dev?
     

    0
    #124404

    RAVINDR DESHPANDE
    Participant

     
    Could you explain with more details what is Six Sigma in statistical terms? I do not understand the logic of the curve. If you have a wide curve tending to six sigma you will have more variation (as I saw in the chart of this newsletter). PB
    Dear PB
I give below the “logic of the curve” in a manufacturing situation. I have kept things a bit approximate in the interest of simplicity of explanation:
In a normal distribution curve, about 0.3% of the readings fall beyond the 3 sigma limits, and only about 2 readings in a billion fall beyond the 6 sigma limits (the familiar 3.4 per million figure assumes the conventional 1.5 sigma shift of the mean).
Let us assume we are making a part with a diameter of 100 mm +/- 1 mm. We have set up and controlled the manufacturing process such that the mean of the curve representing the population of these parts matches the required diameter, that is 100 mm. Let us assume that the standard deviation is controlled at 0.33 mm. So 1 sigma is 0.33 mm and 3 sigma is 1 mm. Thus about 0.3% of the parts will fall outside the range 99 to 101 mm. In short, about 0.3% of parts will be rejected.
Now if we adopt the six sigma standard and make the necessary changes to get the process to perform at six sigma capability, then the 6 sigma limit will be equal to 1 mm, which means 1 sigma will be 0.17 mm. So only a few parts per billion will fall outside the range of 99 to 101 mm.
Thus in the six sigma scenario almost the whole population is now within the tolerance limits of 99 to 101 mm, as opposed to the earlier 99.7%. Instead of more variation, as you say, there will be less variation. This has happened because we halved the standard deviation through the necessary changes to the process.
This is the logic.
RAVINDR DESHPANDE
rdeshpande@kenstar-appliances.com
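The exact normal-curve fractions behind this logic can be checked with `math.erf` from the Python standard library (the figures quoted in forum posts are usually rounded):

```python
from math import erf, sqrt

def fraction_outside(k):
    """Fraction of a centered normal population lying beyond +/- k sigma."""
    return 1 - erf(k / sqrt(2))

print(f"beyond +/-3 sigma: {fraction_outside(3):.4%}")                    # about 0.27%
print(f"beyond +/-6 sigma: {fraction_outside(6) * 1e9:.1f} per billion")  # about 2 per billion
# The famous 3.4 ppm corresponds to one tail at 4.5 sigma (6 sigma minus a 1.5 shift):
print(f"one tail at 4.5 sigma: {fraction_outside(4.5) / 2 * 1e6:.1f} ppm")
```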

    0
    #134064

    ww
    Member

Congratulations!!  You have single-handedly set the SS movement back 5 years and blown off some of the very people we are trying to educate in the ways of Six Sigma.  If we don’t teach the newbies then we don’t move forward.  If we don’t treat our future BBs with basic human respect then we deserve to fail.  Your truly bad manners have set us back and make us out to be overbearing, insolent worms like yourself.  Nice job.  You’re fired as a SS spokesman!!

    0
    #134081

    Ken Feldman
    Participant

    Congratulations to you as well.  Guess you finally have the time machine working and have come back from 2003/2004 when the original message was posted.  Possibly your brain is working a little slow.  What an idiot!!!  Hope that this isn’t disrespectful.

    0
    #137543

    geeta
    Participant

Madam/Sir, I wanted to know some information:
1. What is six sigma in statistics?
2. Compute the six sigma scale for mean = 70, SD = 7.
                       Thank you

    0
    #137547

    Brit
    Participant

Click on the “New to Six Sigma?” link on the left side panel. All your answers will be there.  There’s a good search engine as well at the top right side of the screen.

    0
    #138910

    Lala
    Member

Could you provide a detailed explanation of six sigma to help me understand?

    0
    #138911

    Ken Feldman
    Participant

    Not sure you have the ability to understand if you couldn’t figure out how to do a Google search before posting your note or looking to the left at the BIG BLUE BAR and finding the link entitled “New to Six Sigma?”

    0
    #138924

    Mike Carnell
    Participant

    …..and you are hung over from what?

    0
    #138932

    Big Als Steakhouse
    Participant

    Mr Carnell – must you reply to every post on this site?  Take a break

    0
    #138935

    Charles Hannabarger
    Participant

Do a little research on Mike Carnell and I think you will find that we are very fortunate that he takes the time to contribute on this forum.  It must come from the Texas beef, or something, but Texans do like to interact with others, and say “howdy” to a friend or two along the way.  They work hard and play hard, in typical Texan “BIG” style.  And Mike is “BIG” on this forum in his contributions.  Thanks Mike.

    0
    #138936

    Quentin
    Participant

Hear, hear.
    And while you are at it Big Al, stick an apostrophe between the “Al” & “s”.
    QT
     

    0
    #138943

    Mike Carnell
    Participant

    Charles,
    Thank you. You are always welcome in my part of Texas.
    Regards

    0
    #138944

    Mike Carnell
    Participant

    Quentin,
    Thanks. With a post like that I think you may have just become Darth’s new best friend.
    Regards

    0
    #138959

    Swaggerty
    Participant

Regal, please explain that comment further; it reads like a whole lot of drivel. I know it’s an old thread, but I can’t believe no one else has asked you to explain the BS before.

    0
    #138968

    Heebeegeebee BB
    Participant

    Uh, George…
    You just responded to a 3-year old posting.
    DOH!

    0
    #139876

    Gerald Joseph
    Participant

    Please send the SIX POINTS

    0
    #140476

    hlanganani tutu
    Participant

    Please could you explain to me six sigma and 1.5 sigma drift?

    0
    #140477

    Mikel
    Member

Yes, the shift is an urban legend with no data to support it.
Six Sigma is easy – a good process needs margin between its center and the point where the customer becomes dissatisfied. Six standard deviations of margin has become the default definition of adequate margin. Truth is, find where your important factors should be centered and minimize variation around that center. That’s all. Cpm is the most important capability index to know.
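The Cpm index mentioned above (the Taguchi capability index) penalizes being off-target as well as spread, which is why it captures “minimize variation around the center” in one number. A minimal sketch with illustrative data:

```python
# Cpm = (USL - LSL) / (6 * sqrt(s^2 + (mean - target)^2))
# The extra (mean - target)^2 term penalizes running off-target.
def cpm(data, lsl, usl, target):
    n = len(data)
    mean = sum(data) / n
    s2 = sum((x - mean) ** 2 for x in data) / (n - 1)   # sample variance
    tau = (s2 + (mean - target) ** 2) ** 0.5            # spread about the target
    return (usl - lsl) / (6 * tau)

on_target = [99.9, 100.1, 100.0, 100.2, 99.8, 100.0]
off_target = [x + 0.5 for x in on_target]   # same spread, shifted off target

print(round(cpm(on_target, 99.0, 101.0, 100.0), 2))    # -> 2.36
print(round(cpm(off_target, 99.0, 101.0, 100.0), 2))   # -> 0.64: same variation, worse Cpm
```

Unlike Cp, which is identical for both data sets, Cpm drops as soon as the process drifts off target.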

    0
    #140489

    Ken Feldman
    Participant

    Stan,
    Looks like you finally found the correct dosage for your new meds.  Typically you would rant while I would point the young poster to the left to find the “New to Six Sigma?” link.  Good chatting with you this morning albeit so briefly.

    0
    #140491

    Anonymous
    Guest

    Darth/ Stan,
    How come you guys all know each other? Do you all meet up down Maggie Mays on Fridays, or something?
    Cheers,
    Andy

    0
Viewing 100 posts - 1 through 100 (of 119 total)

The forum ‘General’ is closed to new topics and replies.