iSixSigma released a process sigma calculator that allows the operator to input process opportunities and defects and easily calculate the process sigma to determine how close (or far) a process is from 6 sigma. One of the caveats, written in fine print, is that the calculator uses a default process shift of 1.5 sigma. In an earlier poll, more than 50 percent of the quality professionals surveyed indicated that they did not know why a process might shift 1.5 sigma. My goal is to explain it here.
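If you want to reproduce the arithmetic behind such a calculator yourself, the short Python sketch below shows one way to turn defects and opportunities into a reported process sigma. It assumes SciPy is available and uses the conventional 1.5 sigma shift; the helper name process_sigma is purely illustrative and is not the iSixSigma calculator’s own code.

```python
from scipy.stats import norm

def process_sigma(defects, opportunities, shift=1.5):
    """Approximate short-term process sigma from a defect count,
    using the conventional 1.5 sigma shift (a sketch, not the iSixSigma tool)."""
    dpmo = defects / opportunities * 1_000_000
    # Long-term Z from the observed defect rate, plus the shift,
    # gives the short-term sigma value that calculators normally report.
    return norm.ppf(1 - dpmo / 1_000_000) + shift

# 3.4 defects per million opportunities reports as roughly 6.0 sigma.
print(round(process_sigma(defects=34, opportunities=10_000_000), 2))
```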
I am not going to bore you with the hard-core statistics. Every Green, Black and Master Black Belt learns the calculation process in class. If you did not go to class (or you forgot!), the standard normal distribution table is used to calculate the process sigma. Most of these tables, however, end at a z value of about 3. In 1992, Motorola published a book (see chapter 6) entitled Six Sigma Producibility Analysis and Process Characterization, written by Mikel J. Harry and J. Ronald Lawson. It contains one of the few published standard normal distribution tables extending out to a z value of 6.
Using this table, you will find that 6 sigma actually translates to about 2 defects per billion opportunities, and that 3.4 defects per million opportunities, the figure we normally associate with 6 sigma, really corresponds to a sigma value of 4.5. Where does this 1.5 sigma difference come from? Motorola determined, through years of process and data collection, that processes vary and drift over time – what they call the Long-Term Dynamic Mean Variation. This variation typically falls between 1.4 and 1.6 sigma.
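If you do not have the Harry and Lawson table at hand, these tail areas are easy to check numerically. The sketch below (my own illustration, again assuming SciPy) confirms the figures quoted above; note that the often-quoted “about 2 defects per billion” at 6 sigma counts both tails of the distribution.

```python
from scipy.stats import norm

# Area beyond z = 6, both tails, scaled to defects per billion opportunities.
print(2 * norm.sf(6) * 1e9)   # ~2 per billion

# Area beyond z = 4.5, scaled to defects per million opportunities.
print(norm.sf(4.5) * 1e6)     # ~3.4 DPMO

# Inverting: the z value whose tail area is 3.4 defects per million.
print(norm.isf(3.4e-6))       # ~4.5
```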
After a process has been improved using the Six Sigma DMAIC methodology, we calculate the process standard deviation and sigma value. These are considered short-term values because the data contains only common cause variation – DMAIC projects and the associated collection of process data occur over a period of months rather than years. Long-term data, on the other hand, contains common cause variation and special (or assignable) cause variation. Because short-term data does not contain this special cause variation, it will typically reflect a higher process capability than the long-term data. This difference is the 1.5 sigma shift. Given adequate process data, you can determine the factor most appropriate for your own process.
In Six Sigma: The Breakthrough Management Strategy Revolutionizing the World’s Top Corporations, Harry and Schroeder write:
“By offsetting normal distribution by a 1.5 standard deviation on either side, the adjustment takes into account what happens to every process over many cycles of manufacturing… Simply put, accommodating shift and drift is our ‘fudge factor,’ or a way to allow for unexpected errors or movement over time. Using 1.5 sigma as a standard deviation gives us a strong advantage in improving quality not only in industrial process and designs, but in commercial processes as well. It allows us to design products and services that are relatively impervious, or ‘robust,’ to natural, unavoidable sources of variation in processes, components, and materials.”
Statistical Take Away: The reporting convention of Six Sigma requires the process capability to be reported in short-term sigma – without the presence of special cause variation. Long-term sigma is determined by subtracting 1.5 sigma from our short-term sigma calculation to account for the process shift that is known to occur over time.
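As a worked example of that convention (a sketch only, using the standard 1.5 sigma figure rather than a shift estimated from your own process data):

```python
from scipy.stats import norm

short_term_sigma = 6.0
shift = 1.5                      # conventional allowance for long-term drift

long_term_sigma = short_term_sigma - shift           # 4.5
long_term_dpmo = norm.sf(long_term_sigma) * 1e6      # defects per million

print(long_term_sigma, round(long_term_dpmo, 1))     # 4.5  3.4
```

In other words, reporting 6 sigma short term and expecting about 3.4 defects per million long term are two statements of the same convention.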


Comments
Well explained, and the last paragraph, the “Statistical Take Away,” summarizes it perfectly.
True. The answer I needed was in it.
So, is there anyone who takes it all into account and does not use any fudge factors?
Because that is who I want to work for...
Perfect explanation – buy-in communication about the subject.
Nicely explained.
Excellent explanation – very simple and informative.
Very clear and effective explanation!
The author seems to have mixed up what occurs in processes over time, notably in the ‘Statistical Take Away’ – probably a typographical error, but it needs correction as there are readers who don’t understand either processes or statistics.
To correct the error, please replace the long-term sigma sentence with: “Long-term sigma is determined by /adding/ 1.5 sigma /to/ our short-term sigma calculation to account for the process shift that is known to occur over time.”
This correction reflects that long-term variation has all the short-term common cause variability, plus the long-term shifting that occurs. And that’s why AIAG manuals state short-term capability may be 1.33 Cpk, but long-term capability may be 1.66 Cpk (yes, the AIAG math adds 1 sigma, rather than 1.5 sigma).
I have read the above, and I just don’t buy into the math. Here is the problem: by definition, a process that is in statistical control and follows a normal distribution will have 99.7% of its output fall within ±3 standard deviations. If the process goes beyond +3 standard deviations, then technically it is no longer in statistical control. So when you add 1.5 standard deviations to the curve on both sides, giving you a ±4.5 sigma curve, you no longer have a statistically controlled process. I understand the logic; it just does not pan out. Yes, we need to make our processes robust enough to handle variation and still deliver. I am trying to explain this to my Green Belt class. I don’t really want to tell them to ignore it all, but I also do not want to provide incorrect information.
Manny
OK, so I have a question. I am currently in my Green Belt course and heard that a 6 sigma level translates to 3.4 ppm rejects or a 99.73% success rate. Why, at the 8 sigma level, does it translate to 64 ppm rejects? If your sigma level improves, I thought your rejects went down.