# Use of 1.5 Sigma Shift and 3.4 PPM

Six Sigma – iSixSigma Forums › Old Forums › General › Use of 1.5 Sigma Shift and 3.4 PPM

Viewing 100 posts - 1 through 100 (of 203 total)
#27114

Joe Perito
Participant

In answer to Mike’s question, “Where does the number 3.4 come from?” I’ll pose a question to all readers: why would you want to use the fluctuation (variation) in coffee bean prices in Brazil to infer any kind of performance measurement in your processes or your suppliers’? The derivation, Mike, of the 3.4 PPM is the proportion defective outside of a normal curve, assuming the normal curve has drifted off target by 1.5 sigma. This figure (3.4) comes from the data that Motorola collected from their processes after finding that those processes had a history of drifting about the mean by 1.5 sigma. The figure is useful only for the processes measured at Motorola, and must be verified every time they want to check the performance of those processes, because historical performance may not be relevant to the current process if it has improved or deteriorated. A really distorted and egotistic application of this data is to assume that it is applicable anywhere else in the world. What has Motorola’s process variation got to do with U.S. Steel, Pampers, Frisbees, or your and my processes, until data from our processes is evaluated? If you or others would like a table of zero to 6 sigma listings versus PPM rates, and versus PPM rates after a 1.5 sigma shift, just write me. You will see that 6 sigma equals 0.002 PPM, or 3.4 PPM after a 1.5 sigma shift off target.
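Joe's closing figures are easy to reproduce. Below is a minimal sketch using only Python's standard library; the `tail` and `ppm` names are illustrative, and the shifted case sums both tails even though the far tail is negligible at these levels:

```python
from math import erfc, sqrt

def tail(z):
    # Upper-tail probability of the standard normal: P(Z > z)
    return 0.5 * erfc(z / sqrt(2))

def ppm(sigma_level, shift=0.0):
    # Defects per million opportunities for a two-sided spec at
    # +/- sigma_level, with the mean offset by `shift` sigma.
    return (tail(sigma_level - shift) + tail(sigma_level + shift)) * 1e6

for k in range(7):
    print(f"{k} sigma: {ppm(k):.4f} PPM centered, {ppm(k, 1.5):.4f} PPM shifted")
```

At a 6 sigma level this prints roughly 0.002 PPM centered and 3.4 PPM after a 1.5 sigma shift, which are the two figures Joe quotes.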

#65951

Neil Polhemus
Participant

I agree with you regarding the arbitrary nature of the 3.4 PPM. But lack of a reasonable target also has its problems. Some years ago, I did some work for the FAA on jet route separation in the North Atlantic. Part of that work involved assuring that we met an established TLS (Target Level of Safety), defined as the maximum acceptable rate of mid-air collisions. If you wish to use statistical methods, a target of 0 can never be met. So you establish a target which you can design to. Is it the right target? Obviously, that depends a lot upon the process. Are we filling boxes of cereal or are we making jet engines? The hard decisions aren’t statistical in nature.

#65952

Mark T.
Participant

Hi Neil,

>”Part of that work involved assuring that we met an established TLS (Target Level of Safety), defined as the maximum acceptable rate of mid-air collisions.”

How can a TLS be established for this type of analysis? Was it specified before you took the project, or was it negotiable? How did this project work out?

Curious,
Mark

#65953

Neil Polhemus
Participant

The International Civil Aviation Organization (ICAO) was the body that established the TLS, after considerable research into the risks encountered in other modes of transportation, various occupations, diseases, etc. I forget the exact number, but basically they set it up so that a reasonably healthy person had an order of magnitude greater chance of dying from natural causes (heart attack, stroke, etc) while on the plane than from running into another aircraft.
In fact, the TLS probably had little relationship to reality, just as the 3.4 PPM has little relationship to reality in many Six Sigma efforts. What a target such as that does, however, is force you to collect and actually LOOK AT data. What we found was:
1. We could give basic guidance on what type of separation standards were necessary to reduce the risk from common cause variation (equipment errors and such) to essentially zero.
2. After that, the risk was dominated by human factors, such as a pilot entering the wrong latitude and longitude.
With Six Sigma, you do the same. Force the common cause nonconformities to a very low number (3.4, 34, 0.34; it probably doesn't matter which). Then work on solving the human factors problems.

#65964

Cone
Participant

I would be really interested in what Joe's EXPERIENCE is with how much a process shifts. It seems to me his answer is philosophical. Tell us about your actual experience, Joe. Everyone knows that processes shift, so quoting short-term 6 sigma numbers (about 2 per billion) has absolutely no application in real life.

It would be useful if we restricted this discussion to people who actually have data.

I have data with shifts of as little as 0.3 sigma where strict SPC is used with SMED and respect for Taguchi's loss function. But let's be honest, most do none of the above. I have data where the shift was greater than 10 when we started. My experience is that the 1.5 is realistic where well-trained people and well-defined procedures are used without strict SPC procedures, but if you have actual data reflecting your actual methods, use that data, not 1.5.

#65966

deveshchouhan
Participant

Where is this variation (sigma) taken from? I mean, is it taken directly from the R chart,
or is the standard deviation calculated using the standard formula?

#65969

Cone
Participant

It's the difference between long-term and short-term capability. Easy to calculate if you have long-term data. Didn't you get this in Black Belt training?

#65972

howe
Participant

How about finding your own short-term capability by subgrouping and looking for actual process potential. Measuring the difference between long-term and short-term tells you whether you are out of control or the process technology requires improvement; in most cases both. The tool is called a Z Control vs. Technology plot. Isn't that the point of the Measure phase?

#68893

Arturo González
Participant

I want to know about a list of z values from 0 to 6.
Can you help me with this?

#68900

Cone
Participant

Use the NORMSDIST and NORMSINV functions in Excel. They are all you will ever need.

#73005

Michael Doherty
Participant

https://www.isixsigma.com/library/content/c010701a.asp
“Long-term sigma is determined by subtracting 1.5 sigma from our short-term sigma calculation to account for the process shift that is known to occur over time.”
So there's a 1.5 sigma process shift over time? How much time? Why not correct for this? Why does it stop at 1.5 sigma and not keep drifting? Now, if I measure 3.4 ppm defects in the short term, then that's a 4.5 sigma process. If the process is expected to drift by 1.5 sigma, and it drifts in the wrong direction, then it's effectively a 3 sigma process (if the process is symmetrical, i.e. can produce scrap at both the upper and lower ends, then a drift will always result in a LOWER sigma).
So how does a drift in time IMPROVE the sigma level? Or does this magical drift always just occur in the correct direction? Maybe there's really 5.5 sigma noise on everything, but it's too high a frequency to measure, so we should call 4.5 sigma 10 sigma!

I still can’t see through the con. A 4.5 sigma process is a 4.5 sigma process not a process that thinks it’s a 6 sigma one given a fair wind.

#73018

TCJ
Member

Mike, a true six sigma process yields a Cp of 2 and a Cpk of 1.5, which accounts for a 1.5 sigma shift in either direction, assuming bilateral specification limits. Remember, Cp is process potential ("how well can your process fit within your limits"); this must be 2, which corresponds to about 0.002 ppm. Cpk accounts for centering ("where is my process in relation to my limits"); a Cpk of 1.5 corresponds to about 3.4 ppm.
Hope that helps.
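TCJ's arithmetic can be verified numerically. A sketch under illustrative assumptions (unit sigma, target of 0, spec limits 6 sigma either side; the `capability` helper is mine, not from any poster):

```python
from math import erfc, sqrt

def tail(z):
    # Upper-tail probability of the standard normal: P(Z > z)
    return 0.5 * erfc(z / sqrt(2))

# Illustrative process: sigma = 1, target = 0, spec limits at +/- 6 sigma
LSL, USL, SIGMA = -6.0, 6.0, 1.0

def capability(mu):
    cp = (USL - LSL) / (6 * SIGMA)               # potential: spread vs. limits
    cpk = min(USL - mu, mu - LSL) / (3 * SIGMA)  # actual: penalizes off-center mean
    ppm = (tail((USL - mu) / SIGMA) + tail((mu - LSL) / SIGMA)) * 1e6
    return cp, cpk, ppm

print(capability(0.0))  # centered: Cp = 2, Cpk = 2
print(capability(1.5))  # shifted 1.5 sigma: Cp = 2, Cpk = 1.5
```

The centered case gives roughly 0.002 PPM and the 1.5-sigma-shifted case roughly 3.4 PPM, matching the two figures in the post above.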

#73072

Michael Doherty
Participant

True, a Cp of 2 is a true 6 sigma process.
A Cpk of 1.5 could be anything between 4.5 and infinity. The way to find out is to inspect the mean. If it’s centered between the spec limits then the process is a 4.5 sigma one. So why not measure where the mean is and adjust it so that it is centered?
3.4 defects per million opportunities isn’t a six sigma process. It’s not even a 4.5 sigma process – you should only be measuring the defects that are outside of either the Upper or Lower specs (whichever was used to determine Cpk).
The customer wants to know the defect rate (total defects) so why not just say so – don’t try to pretend that the process is better than it is and say in small print that there’s a data shift.
So provide Cp to say what the data spread is like and then add the mean value to show if the data is skewed and then specify the likely defect rate.

#73073

RR Kunes
Member

The 1.5 sigma shift is placed into play to highlight what Six Sigma is all about: VARIATION.
Anyone who uses 3.4 PPM as a target does not understand the Six Sigma concept. Data is needed to make decisions; the 3.4 PPM is supported by hard numbers showing what life would be like if we operated at that level.
It is up to the individual company what their targets can afford to be. Years ago we used to see signs such as "Zero Defects" and GM's "Search for Product Excellence", all meaningless as they contained no way to determine the effects of accomplishing these ends. Obviously GM never quite got there.
My first view of Six Sigma came from military defense contracts. These contracts had twenty-year warranty clauses. You had better know your reliability profile if you plan to meet those requirements.
So please sit back and view these types of data as mechanisms by which we evaluate, with data, our individual conditions.

#74923

Reza
Participant

Dear Sir,
Hello,
Thanks for your nice article about the use of the 1.5 sigma shift.
I read it, but I still need some more explanation,
because I could not understand it perfectly.
If possible, please tell me in a simple way: what is the 1.5 sigma shift?
Best regards
Reza

#74928

Member

To all, a lot of people keep wondering about the reason for the 1.5 sigma shift and debate it from time to time. My personal opinion is that the reason for this 1.5 sigma is primarily driven by the fact that understanding variation is the key to success. To understand variation, the most commonly used tools are control charts. While control charts are excellent at detecting drifts and shifts in processes that are more than 1.5 sigma, they have poor sensitivity in detecting shifts of less than 1.5. In such cases EWMA or CUSUM charts are used, but they are rarely used on line due to the complexity of application. I have a strong belief that the 1.5 shift is primarily motivated by this background. Maybe Mike Carnell and others from Motorola can throw some light on the same. Shree Phadnis

#74939

Mike Carnell
Participant

Shree,
I appreciate the invitation into the thread but I have intentionally avoided it.
The answer from Gary on March 12 would be a basic answer. For years people have done a 30-piece sample, produced a capability study, and masqueraded short-term data as long-term data, or at the very least not distinguished between the two. Now all of a sudden everyone is indignant about a 1.5 sigma shift. I would think the masquerading of ST data as LT data should be more of an issue.
The solution is simple: if you do not like it, don't use it. Nobody goes to statistical jail for using something else. If you are working for some steel company and the thought of using Motorola's shift is so absolutely unethical that you cannot bear to use it, generate your own data. Create your own number, or prove that in your world there is no difference between short-term process capability and long-term capability. I keep waiting for the thread that says "I have data that my shift is different from 1.5 sigma" rather than the string that begins with some esoteric discussion and no data.
Personally, I do not have an emotional attachment to any number, and 1.5 works as well as any until I get a hard number from the data. It is design margin for a process.
The part I do have an emotional reaction to is any group that considers this an issue still discussing it if they are over 12 months into a deployment. There might be a question of "is it real?" when you begin the deployment, and that is OK. If you believe it is a real issue and 12 months later you still have not accumulated enough data to substantiate your beliefs, then your discussion is purely mental masturbation and you are wasting people's time.
There is an interesting discussion of the shift in the book "Six Sigma" by Mario Perez-Wilson, pages 221-223 (ISBN 1-883237-68-8). Mario takes a very strong position on this. If you know him, then you realize he would not put this in his book unless he could back it up.
Thanks Shree – now I am sure I will get hammered the rest of the week over this.
Good luck.

#75015

Member

Dear Mike,
Thanks a lot for your insight. I am fully with you on this issue.

#75017

Mikel
Member

Mike,
Well said, I agree.
Shree may want to try to simulate the use of normal control charts and see if a 1.5 sigma shift can actually happen. It cannot; anything greater than a 1 sigma shift is immediately detected, and it is hard for even a half-sigma shift to go undetected after only a few samples. The real deal here is that we cannot put SPC and all of its rules in place at very many places if we really expect the rules to be followed. Pointing normal users to EWMA or CUSUM is not rational; the calculations and rules are too complicated for normal users.
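Stan's claim about detection speed can be checked analytically for a Shewhart X-bar chart. A sketch assuming subgroup size 4 and only the beyond-3-sigma rule (no run rules); the `arl` helper is illustrative:

```python
from math import erfc, sqrt

def tail(z):
    # Upper-tail probability of the standard normal: P(Z > z)
    return 0.5 * erfc(z / sqrt(2))

def arl(shift, n):
    # Average run length of an X-bar chart: expected number of subgroups
    # (size n) until a point falls outside the +/- 3 sigma control limits,
    # after the process mean shifts by `shift` sigma. Beyond-limits rule only.
    p = tail(3 - shift * sqrt(n)) + tail(3 + shift * sqrt(n))
    return 1 / p

for shift in (0.0, 0.5, 1.0, 1.5):
    print(f"shift of {shift} sigma: signal after ~{arl(shift, n=4):.1f} subgroups")
```

At zero shift the ARL of roughly 370 subgroups is the chart's false-alarm rate; a 1.5 sigma shift signals after about two subgroups on average, which is the substance of Stan's objection to a sustained undetected 1.5 sigma shift.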

#75071

Mike Carnell
Participant

Stan,
I appreciate the comments on the Cusum chart and EWMA. I agree with you on that.

#75149

Mikel
Member

Mike,
Does this mean we are bonding? I am looking forward to the day when you hype me like you hype Mr. Brue. And the good news for you is that I actually can do this stuff.

#79084

Darrell Tomlin
Participant

I would love to receive a copy of that table. We have discussed this point endlessly in our workspace. Hope it's no trouble.

#79102

Chris Gates
Participant

For a nice, recent investigation into the "Statistical Reason for the 1.5 Sigma Shift," see the article of that name by Davis R. Bothe in Quality Engineering 14(3), 479-487 (2002).

#79104

Mike Carnell
Participant

Chris,
Thank you for the reference. I am looking forward to reading it.
You must have quite a memory. That string is over 3 months old.
Thanks again.

#79107

anon
Participant

Mr. Perito no longer graces us with his invaluable opinions based on, well frankly, his opinions. But the good news is that you, too, have the ability to create this invaluable chart yourself.
For the one with no shift, create in column A the desired z values in increments of .1, .01, .001, or my favorite, .0001. In the next column, use the NORMSDIST function from the paste-function dialog (or just put =1-NORMSDIST(A1) in cell B1), then drag the formula down the entire length of the values in column A. For example:
| Column A (z) | Column B |
| --- | --- |
| 5.981 | 0.0000000011 |
| 5.982 | 0.0000000011 |
| 5.983 | 0.0000000011 |
| 5.984 | 0.0000000011 |
| 5.985 | 0.0000000011 |
| 5.986 | 0.0000000011 |
| 5.987 | 0.0000000011 |
| 5.988 | 0.0000000011 |
| 5.989 | 0.0000000011 |
| 5.990 | 0.0000000011 |
| 5.991 | 0.0000000010 |
| 5.992 | 0.0000000010 |
| 5.993 | 0.0000000010 |
| 5.994 | 0.0000000010 |
| 5.995 | 0.0000000010 |
| 5.996 | 0.0000000010 |
| 5.997 | 0.0000000010 |
| 5.998 | 0.0000000010 |
| 5.999 | 0.0000000010 |
| 6.000 | 0.0000000010 |
The formula in column B is =1-NORMSDIST(A1).
Note that this is only 1 DPBO (defect per billion opportunities) from one tail; since the distribution is perfectly centered, double it for 2 DPBO in total.
For the shift of 1.5, change the formula in column B to =1-NORMSDIST(A1-1.5), and you will get:

| Column A (z) | Column B |
| --- | --- |
| 5.981 | 0.0000037180 |
| 5.982 | 0.0000037007 |
| 5.983 | 0.0000036834 |
| 5.984 | 0.0000036661 |
| 5.985 | 0.0000036490 |
| 5.986 | 0.0000036319 |
| 5.987 | 0.0000036149 |
| 5.988 | 0.0000035980 |
| 5.989 | 0.0000035812 |
| 5.990 | 0.0000035644 |
| 5.991 | 0.0000035477 |
| 5.992 | 0.0000035311 |
| 5.993 | 0.0000035145 |
| 5.994 | 0.0000034981 |
| 5.995 | 0.0000034817 |
| 5.996 | 0.0000034654 |
| 5.997 | 0.0000034491 |
| 5.998 | 0.0000034329 |
| 5.999 | 0.0000034168 |
| 6.000 | 0.0000034008 |
You will note that the value for 6 is the magical 3.4 DPMO.
You may want to note that some of the more retentive contributors to this forum will want to figure out the other tail of this non centered process as well. If you are worried about such things, I am sure you will know what to do.
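The same table can be generated outside Excel; a sketch using Python's `math.erfc` in place of NORMSDIST (the function name is mine):

```python
from math import erfc, sqrt

def one_minus_normsdist(z):
    # Equivalent of Excel's =1-NORMSDIST(z): the upper-tail probability P(Z > z)
    return 0.5 * erfc(z / sqrt(2))

# z from 5.981 to 6.000 in steps of 0.001, as in the columns above
for i in range(5981, 6001):
    z = i / 1000
    centered = one_minus_normsdist(z)         # no shift (one tail)
    shifted = one_minus_normsdist(z - 1.5)    # after a 1.5 sigma shift
    print(f"{z:.3f}  {centered:.10f}  {shifted:.10f}")
```

The last digits may differ slightly from the Excel output quoted above, since older versions of NORMSDIST used a cruder approximation in the far tails.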

#79564

sanjeev
Member

I have a question: why is Six Sigma called Six Sigma? Can anyone reply?

regds
sun

#82816

Thomas Pyzdek
Participant

I think the real point is that the quality experienced by customers over the life of the product isn't quite as good as our internal metrics suggest. The 1.5 sigma factor is just a way of adjusting our internal predictions to come closer to the real world. Is it really 1.5? No, but it's some number greater than zero. If it's worth it to you, go ahead and get a better estimate for your own operations. However, in most cases, Six Sigma practitioners have little interest or time to pursue these questions of theory. They're more interested in putting dollars on the bottom line and adding to their bonus checks.

#87098

dick
Participant

Remember, the 1.5 sigma shift factor is just an estimate, a default.  If you only have defect data (long term) you can estimate process entitlement.  If you only have a single sample of continuous data for capability, you can estimate long term performance.
Mikel is very careful to also indicate that capability studies should be done using rational subgroups, and when data has been captured in that way, we can calculate the actual process shift being experienced on that process.  When that information is available, the 1.5 sigma estimate is no longer used.
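When rational subgroups are available, the calculation described above can be sketched as follows (the data, the upper spec limit, and the variable names are illustrative; d2 = 2.059 is the standard control-chart constant for subgroups of 4):

```python
import statistics

# Rational subgroups: each inner list is sampled close together in time
subgroups = [
    [10.1, 10.3, 9.9, 10.2],
    [10.6, 10.4, 10.7, 10.5],
    [9.8, 10.0, 9.7, 10.1],
    [10.4, 10.2, 10.5, 10.3],
]
D2 = 2.059  # control-chart constant for subgroup size 4

rbar = statistics.mean(max(s) - min(s) for s in subgroups)
sigma_st = rbar / D2                    # short-term sigma: within-subgroup only
flat = [x for s in subgroups for x in s]
sigma_lt = statistics.stdev(flat)       # long-term sigma: includes drift between subgroups
mu = statistics.mean(flat)

USL = 11.0  # illustrative upper spec limit
z_st = (USL - mu) / sigma_st
z_lt = (USL - mu) / sigma_lt
print(f"Z.st = {z_st:.2f}, Z.lt = {z_lt:.2f}, observed shift = {z_st - z_lt:.2f}")
```

Because the subgroup means wander in this made-up data, the overall sigma exceeds the within-subgroup sigma and the observed shift Z.st - Z.lt comes out positive; with real subgrouped data, that observed value replaces the 1.5 default.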

#94147

Siva
Member

That's a nice way of saying it may not be the same for all processes; better to list it out for individual processes. But if I have no knowledge about the shift, then it is better to go with the 1.5 sigma shift and then make refinements wherever necessary.
Because the benchmark of 1.5 sigma was obtained from N number of processes, with that prior knowledge I can follow it and reduce risk; it is a risky business on my side to simply assume a 1 sigma or 10 sigma shift. So I'll go with the 1.5 sigma shift and refine wherever possible.

#98335

SF Lau
Member

Hi Joe,
I was enlightened to read the discussion you posted regarding the 1.5 sigma shift. I would like to request from you a copy of the table of zero to 6 sigma listings versus PPM level, before and after the 1.5 sigma shift.
Thanks n best regards,
SF Lau

#98593

Gary Smith
Participant

Joe, I agree with your comments about the 1.5 sigma shift. I would be grateful if you could please pass me a copy of the z-table which takes z-score values to 6. I'm developing a sigma calculator (without the 1.5 sigma shift included) and would like to refer to your z-table. Stats books I've looked at only seem to go to a z-score of 4.
Thanks – Gary

#98601

Mikel
Member

Try Excel: NORMSDIST and NORMSINV are all you ever need to know. There is no rocket science here.

#98603

Gary Smith
Participant

Stan –
thanks for the excel formula tip.

#100045

Casey
Participant

Motorola (a.k.a. Mikel Harry) does not have data showing that processes shift 1.5 standard deviations. If anyone can produce that data, I will of course retract my claim and apologize. Harry's original book describing the shift was based on two references, both of which show merely a theoretical calculation and a simulation. He does not state the assumptions of his simulation, which could have been made to produce any shift desired. In addition, his simulation also showed a 40% increase in variability. (And you thought that the nonsensical phrase "Dynamic Fluctuation" only made the mean shift!) Somehow that increase never entered the scale.
Those familiar with control charts know that there are two sources of variability: common cause and special cause. Which is this Dynamic Fluctuation? Why, neither! Harry's discovery of another type of variation goes unremarked. Perhaps because the first two are exhaustive and mutually exclusive, this discovery is not a real one.
Can you imagine what kind of study would have to be done to show that a change in a process was not due to special cause variation? I cannot.

#100048

Mikel
Member

Motorola is not also known as Mikel Harry.
Harry only played in a very small group (not Sector), and had only a very small impact on that group. He mostly sought recognition from above and did hardly anything other than theoretical writing.
His justification of the shift is a farce, but Reigle will be on here anytime now to give the gospel according to the Rev. Dr. Harry. In a real process, this input is known as noise.

#100051

mjones
Participant

Sorry, I don’t have what most would consider data either, but…
I was taught that the 1.5 sigma shift was a general, typical, rule-of-thumb concept reflecting the difference between short-term and long-term data: more or less, the typical shift in the mean and drift in the variation of processes. I was never told it was based on math theory, but rather on empirical information and logic.
Later, as an MBB instructor, I reviewed and approved dozens of BB reports for certification over the years. Calculation of the sigma shift was a suggested, but not required, deliverable. I found it interesting, but not surprising, that most had a shift of about 1.5 sigma, i.e., about 1.4 to 1.6. About 20% were less than 1.4 and about 20% were more than 1.6.
In every case where the shift was << 1.5, data were obviously short term (a few weeks) or very short term (a few days), with shorter term data having less shift.
When the shift was >> 1.5, the data were usually long term, and always collected with a poor measurement system, e.g., less than 5 categories, and/or had obvious, clearly identifiable major changes in the process (in one case, removing the effect of the change moved the shift much lower, to about 1.4, because variation was reduced and the smaller data set was more short-term).
While I found all this interesting, it was consistent with what I had been taught and was logical, so I never bothered to tabulate the results. Now, with all the discussion of "Is there or isn't there a 1.5 shift...", I sincerely wish I'd required the shift to be included and discussed in every report, and that I'd recorded the results. But, with all the emotion around this issue, I'm not sure my data would be any more convincing than my memory to Casey or anyone; at least I'd feel better about it, though.
And the math theory of the shift? I look forward to the outcome of the debate. I'm very curious.
Meanwhile, I certainly know what the practical, real-world answer is. I have seen it, felt it, lived it. There is a shift, folks, and it is about 1.5, really.

#100053

Mikel
Member

The only thing your data tells you is that you have really missed the opportunity to have good controls in place.
In addition to your 1.5 empirical shift, I'll bet all of your processes have a sigma level of less than 4.5 (Cp of 1.5). Want to share your capability data?
Just because you were taught to accept mediocrity and you got it does not prove anything, except that people do just enough to keep their heads above water. Set expectations low and you achieve them.

#100060

mjones
Participant

Stan-
I agree with part of your response, but it leads me to some other thoughts.
Certainly, when the shift was large due to poor measurement systems (in some cases limited by available technology), there were obvious major opportunities for improvement. In others, no doubt, stronger, more effective control methods could have reduced the shift. In fact, later follow-on projects did that by identifying additional sources of variation and reducing them.
As to capability, some were not great, as you suggest; but some were pretty good. I recall some with 5 to 7 ST sigma, with shifts of about 1.5. So you'd lose your bet on "all less than 4.5". [Unfortunately, 'capability' often was of little practical value because spec requirements came from engineering/design/technical departments and had little relationship to customer requirements. Yep, an opportunity for DFSS; that's another story...]
Certainly, people live up to or down to expectations. The expectation was to improve time, quality and cost, to save $$$. There was no expectation to get a particular shift, or even to hit a Cp or Cpk target. It was about improvement and money. Controls were expected to be strong enough to hold the gains as defined, period.
This was not a matter of setting mediocre levels and living with them for the long term. The logic is to get the immediate gains and hold them; define the next project to build on lessons learned, obtain more gains, etc.
There is another factor here: maturity of the Six Sigma deployment. My data are mostly from BB certification projects in the early phase of deployment. We knew not all sources of variation had been identified. Most projects were the first or second whack at the big end of the Pareto. My bet is that your thought context is a well-deployed Six Sigma process where specs roughly match the VOC, control systems are (or should be) mature, etc. For that context, maybe the 1.5 shift is substantially less, maybe approaching 0. Then, if there is no shift, if long-term sigma equals short-term sigma, you have identified all the sources of variation and your control is perfect. Could be. I've not seen it yet. Could it be that the degree of shift is a measure of the maturity of the deployment? I.e., a measure of the amount of variation not yet identified and effectively controlled?

#100063

Anonymous
Guest

My understanding is that the reason for the shift has itself shifted. Originally it was meant to allow the process mean to drift (a false proposition for mechanical device manufacturing), and later it became a 'sampling error' in the estimation of sigma from the process performance (X-bar and R chart so-called process capability, assuming no adjustment after preventive maintenance, and shifts after re-setting, or assignable causes such as tool wear...).
Since, according to Taguchi, the greatest loss is due to variation of the process mean, and since we generally use parametric design to reduce variance using non-linearity and scaling, the only shift I care about is the one I can detect on an X-bar and R chart.
As someone stated previously, if I told our Japanese mechanical design engineers to 'factor in' a 1.5 sigma shift, they would roll around the aisles laughing.

#100073

Gabriel
Participant

mjones,
I have a few doubts and questions.
First (and obvious) one: how was the shift measured? Was it a comparison between the z-scores using the "sigma within" and "sigma total" estimations of sigma (i.e. Rbar/d2 and sqrt(sum.../(n-1)) respectively)? Was it the difference between the expected out-of-tolerance rate (derived from Cp/Cpk) and an actual defectives rate? Another choice?
At one point you said something like, when the shift was small it was usually short-term data. If the shift is between "short" and "long" term, how can it be assessed with short-term data only?
In that case, what was the uncertainty of the estimation of the sigma shift? It is interesting to note that you mentioned the lack of a suitable measurement system as a cause of the shift. That shift, if it exists at all, should be buried in the noise of the poor measurement system, and thus not distinguishable.
Another theoretical source for the shift that is usually put on the table is the imperfect power of SPC charts to detect shifts. In that case, a shift that happened but was not detected is also unknown unless measured by an independent means.
A final comment on your statement, "Then, if there is no shift, if long term = short term sigma, you have identified all the sources of variation and your Control is perfect":
I guess you meant "all the special causes of variation", right?

#100075

Reigle Stewart
Participant

mjones: Your source of information and data is very credible and is most consistent with what Dr. Harry has asserted and demonstrated over the last 20 years: the "typical" CTQ from a "typical" process in a state of "typical control" within a "typical" factory will "typically" shift and drift between 1.4 sigma and 1.6 sigma, with the average shift being about 1.5 sigma (over many CTQs). His data was based on the simple calculation Z.shift = Z.st - Z.lt. In fact, Dr. Harry has published such empirical findings resulting from "typical" process studies undertaken at Motorola and ABB. Recently, on this website, there was a headline paper on the shift factor that cited Dr. Harry's research data as being the only published data currently available on the subject. You should consider publishing on this website. They talk a lot, but never produce any empirical data to support their claims, nor do they publish any type or form of technical papers on the subject; they don't even use their real names (wonder why?). Understand that they will "poo-poo" what you do simply because they don't want the shift factor to be true (probably because they have publicly committed to this position without any data or math). As one automotive executive recently observed: "These hand-full of [deleted] don't live by their own gospel, that being the power of data." Keep up the good work and keep reporting your findings. You really put them on the run with your recent post. Reigle Stewart

#100085

Anonymous
Guest

Reigle,
You are so full of shift ..
Andy

#100087

fernando
Participant

As an alternative to the Zshift = 1.5, I've sometimes heard talk of an inflation rate for the standard deviation; in particular: stdev LT = 1.3 * stdev ST.
To me it makes more sense than the Zshift because, given a dataset with rational subgroups, the Zshift can change strongly depending on the spec limits, while the stdev ST and LT are always the same.
If you deflate the Z LT by a fixed Z shift, independently of the value of Z LT itself (1.8, or 2.5, or 4.5, etc.), the result could be misleading.
So, if we don't have rational subgroups, don't you think it would be better to correct the standard deviation with an inflation rate and calculate the corresponding Zshift?
I look forward to hearing from you
Fernando
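Fernando's comparison is easy to tabulate. A sketch (the 1.3 factor is the one he quotes; the helper names are mine):

```python
def z_lt_fixed_shift(z_st, shift=1.5):
    # Additive rule: subtract an assumed 1.5 sigma shift from Z.st
    return z_st - shift

def z_lt_inflated_sigma(z_st, factor=1.3):
    # Multiplicative rule: inflate sigma by `factor`, which divides Z.st
    return z_st / factor

for z_st in (1.8, 2.5, 4.5, 6.0):
    print(f"Z.st = {z_st}: fixed shift -> {z_lt_fixed_shift(z_st):.2f}, "
          f"inflated sigma -> {z_lt_inflated_sigma(z_st):.2f}")
```

For a marginal process (Z.st = 1.8) the fixed shift leaves almost nothing, while the inflation rule scales the correction to the process, which is Fernando's point.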

#100089

Mikel
Member

Dear Guess Who, nope StatmanToo, nope Reigle;
Got us on the run?
A noted Automotive Exec?
Andy U is right, only without the f.

#100094

Schuette
Participant

Reigle Stewart,
Hello, you are losing it! What seems to be the problem? Is "Dr. Harry" not feeding you enough, or is he feeding you too much? What a joke to quote an automotive executive! For a guy who claims that his boss invented Six Sigma, did you look at the quality level of that automotive executive's company? I know for sure he could not have been from Toyota. Those folks do NOT have time for consultants like you and your boss. They have a job to do. You are now beginning to sound like Richard.

#100108

Ken Feldman
Participant

Now now, let’s not get personal.  The upcoming debate will put to rest all this discussion about the validity of the 1.5 shift and will likely conclude that while the math might be error free, the underlying assumptions are bogus and the practical use of the shift meaningless.  Let’s refrain from any pre-fight, weigh-in scuffling.  And Stan, please don’t bite off Dr. Harry’s ear during the clenches.

#100109

Mikel
Member

I think we need to worry more about ankle biting from the other side.

#100110

clb1
Participant

Stan, not to worry, a good pair of cowboy boots will protect you from even the worst ankle bites….and since it’s in Arizona there shouldn’t be any problem with finding a pair.   :)

#100113

mjones
Participant

Good questions, Gabriel…
The shift was measured by the Z-scores ST and LT from Minitab most of the time. Some BBs were able to use Cp/Cpk estimates of expected vs. actual; but, as mentioned, most spec limits were arbitrary (no relation to customer needs/requirements) and were either way too wide or way too tight, so Cp/Cpk on those limits meant little and was rarely used.
I certainly agree that comparison of ST vs. LT means little if the data are short term. My point was: when I saw small sigma shifts, like 0.5 or 0.8, the data came from a short period of time, and/or relatively few cycles of production, raw materials, etc.; i.e., there was no reason to expect a large shift, and there was not one.
That measurement system error affects both ST and LT? Certainly. But there were some cases where repeatability was fair but reproducibility was quite bad. By selecting the operator and making repeated measurements we reduced the overall error some, but we probably reduced within variation more than between variation. At least that was the perception of the BBs and their teams, and their logic and data supported it. Again, I simply observed that when I saw a large shift (1.7, 1.9, etc.), measurement systems were often poor. No doubt there were other factors as well (e.g., changes in raw materials, major process changes, seasonal shifts, etc.), but the Gage R&R data were quantified and very visible.
About theoretical shift and drift undetected by SPC… It is there, no doubt. A few of us played with this, attempting to calculate or estimate beta risk, etc., but quickly lost interest when it got complicated and didn’t seem to be of much help. Our concern was that a ‘successful’ reduction of beta risk would likely encourage process tampering… something we already had enough problems with.
My statement of, “…no shift, means ST = LT, all sources of variation have been identified and there is perfect control…” was just a logical conclusion of a hypothetical situation. My logic is: In theory, you can have ST = LT if you have identified ALL your sources of variation, then defined which are “special” and you control them perfectly, and defined “common causes” you have chosen to not control.
Obviously, this is not practical or realistic. Since variation is the enemy, once we have this profound knowledge of the sources of common cause variation it is unlikely we could resist looking for ways to reduce the biggest source of common cause variation, i.e., make it a ‘special’ cause to manage and control. Again, practically speaking, I don’t know how you can ever get ST = LT so you have no shift.
I know you can reduce it because I’ve had BBs reduce it from ~2 down to <1.5. Certainly it’s not always 1.5. But my experience is that the order of magnitude is about right.

0
#100116

howe
Participant

Wow, you guys are amazing! Making everything so complicated that at the end no one knows what the message was.  Look, there is no doubt that shift exists in any system. It is common sense. The cause of it could be due to a number of things such as environment, material degradation, aging, etc. But to claim that this shift is 1.5 sigma and apply it to any product or industry is just beyond my imagination. It’s more amazing to me to hear some folks claiming that they have a theoretical proof for it! Where were you guys educated?

0
#100123

KBailey
Participant

Casey, I have to question what you said about common cause and special cause being exhaustive and mutually exclusive sources of variability.
It seems to me that there are two key measurements that define a cause of variability: frequency and magnitude of effect. We can also think of frequency as a measure of the probability of an occurrence within a given period of time or per million opportunities. Magnitude of effect is simply how big of an impact the cause has on the output. There will be variation in both frequency and magnitude of effect, but for our purposes we’ll consider that variation to be part of the measurement of frequency and magnitude of effect.
We tend to think of special causes as discrete, or even unique events. The magnitude of the effect of such a cause may be discrete or continuously variable, although it must be large enough to produce an “out of control” warning on our control chart, in order for us to notice it. I submit, however, that if a process is allowed to run long enough, virtually any special cause will repeat. Isn’t that why we take steps to prevent recurrence of special causes, when we identify them? If we wanted, we could eventually capture enough data points of the special cause to model it statistically. Special causes can be discrete events, or they can be continuous like common waves which occasionally combine together to suddenly create an extraordinarily high peak.
Common cause variation, we’re more likely to visualize as continuously variable in both frequency and magnitude of effect. However, this is also inaccurate. Any cause can be called common as long as it doesn’t produce one of the out of control symptoms. In reality, common cause variation includes a multitude of discrete “special causes” which mask each other or – by sheer random chance – don’t happen to have enough impact on the output to show up as out of control at a given time.
In other words, there’s really no difference in essence between what we label as common cause and what we call special cause variation. It’s a continuum. I agree with you that “Dynamic Fluctuation” is not a third type of variation. Where I would differ with you is that I would say it is just a name for sources of variation which fall in the middle area of the continuum I described.

0
#100165

fernando
Participant

Any updates on this? It would be nice to have feedback from some experts.
thanks
——————–
As an alternative to the Zshift = 1.5, I’ve sometimes heard talk of an inflation rate for the standard deviation. In particular: stdev LT = 1.3 * stdev ST.
To me it makes more sense than the Zshift because, given a dataset with rational subgroups, the Zshift can change strongly depending on the spec limits, while the stdev ST and LT are always the same.
If you inflate the Z LT with a fixed Z shift independently of the value of Z LT itself (1.8, or 2.5, or 4.5, etc.), the result could be misleading.
So, if we don’t have rational subgroups, don’t you think it would be better to correct the standard deviation with an inflation rate and calculate the corresponding Zshift?
I look forward to hearing from you
Fernando
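Fernando’s proposal can be put side by side with the fixed shift. The sketch below uses a purely hypothetical process (mean 10, short-term sigma 1 — illustrative numbers, not real data) and shows how differently the two conventions degrade Z as the spec limit moves:

```python
# Hypothetical process for illustration only: mean 10, short-term sigma 1
mu, sigma_st = 10.0, 1.0

for usl in (12.5, 14.5, 16.0):
    z_st = (usl - mu) / sigma_st
    z_lt_shifted = z_st - 1.5                      # fixed 1.5-sigma shift
    z_lt_inflated = (usl - mu) / (1.3 * sigma_st)  # Fernando's inflated sigma
    print(f"Z.ST={z_st:.2f}  shifted Z.LT={z_lt_shifted:.2f}  "
          f"inflated Z.LT={z_lt_inflated:.2f}")
```

The two conventions coincide only where Z.ST − 1.5 = Z.ST / 1.3, i.e. at Z.ST = 6.5; over the usual range of Z values the fixed shift is the more pessimistic correction.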

0
#100169

DaveG
Participant

Reigle,
Any chance of a webcast of the debate?  Or a (free) video link afterward?  Will the judges’ decision be published?
Thanks

0
#100175

Mikel
Member

Fernando,
It is the same thing.

0
#100180

Reigle Stewart
Participant

DaveG: Yes, the position papers and referees decision
will be published. It is likely that a court recorder will be
retained to document the event. If so, the transcripts will
also be published. Reigle Stewart.

0
#100182

fernando
Participant

Maybe there’s something not fully clear to me. I was looking at the example SIXSIGMA.MTW in Minitab.
I want to calculate the process capability for the column “alldata” using the subgroups in “subg”. I got these results:
USL = 41, Zshift = 0.1965
USL = 43, Zshift = 0.5901
USL = 45, Zshift = 0.9838
This leads me to conclude that the Z shift changes depending on the spec limit, while the standard deviations LT and ST are always the same (0.8451 and 1.1815 respectively).
So I suppose that I can define a standard inflation rate for the std dev (for example 1.3), but a standard Zshift cannot be defined at all, because its value depends on the spec limit.
Where am I wrong?
Thanks again
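Fernando’s point can be checked without Minitab. Using the simple definition Zshift = Z.ST − Z.LT (Minitab derives its figures via Z.Bench, so the hypothetical sigmas below are not meant to reproduce the SIXSIGMA.MTW numbers exactly), the shift grows with the spec limit while the ratio of the two sigmas does not move:

```python
# Hypothetical sigmas for illustration (ST < LT, as is typical); these
# will not reproduce the Minitab output above exactly
mu, s_st, s_lt = 40.0, 0.85, 1.18

for usl in (41, 43, 45):
    z_st = (usl - mu) / s_st
    z_lt = (usl - mu) / s_lt
    print(f"USL={usl}  Zshift={z_st - z_lt:.3f}")

# The sigma "inflation rate" is the same whatever the spec limit
print(f"inflation rate = {s_lt / s_st:.3f}")
```

Since Zshift = (USL − mu)(1/sigma.ST − 1/sigma.LT), it scales linearly with the distance to the spec limit, which is exactly why a single “standard” Zshift cannot be pinned down while a sigma ratio can.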

0
#111091

Mike Clayton
Participant

If I remember correctly, the guru at Motorola had used 40 years of research reports on manufacturing variability from many industries.
It was not Motorola-only data.  What was remarkable was that this shift (roughly) was independent of industry, but common to manufacturing methods OF THAT TIME which were mostly mechanical, mechanized, but not automated with adaptive feedback. Modern APC methods can cut that shift in half…again roughly.
These are just useful concepts for planning characterization, optimization and control projects.   That’s all.  Nothing magic.
It moved management’s ideas of “good enough” quality up a few notches, and allowed allocation of resources for dissimilar processes and products, focusing on variation instead of “on target” or “in spec.”
But it is only as good as the Engineering Limits that define DEFECTIVE units.  That’s another subject.  The combination of Six Sigma ideas and the Japanese “deviation from target” approach is useful also.  Cpm instead of Cpk, since Spec Limits are not always as available as Targets!!  Keeping a process on target, while reducing variation around that target, is what this is all about.  DPMO’s and PPM’s are just concepts, rough ones at that, and processes that are dynamic must be controlled.  CONTROL THEORY needs to be wedded to STATISTICAL THEORY for real success.

0
#111110

Mikel
Member

40 years’ worth of data? Wow, do you also believe in Santa Claus?
I agree about Cpm, but not just where there are no specs. It is a better measure than Cp, Cpk, Pp, Ppk.

0
#111134

SemiMike
Member

There indeed were war-time and cold-war studies of major manufacturing plants that were compiled by universities, and one manager at Motorola looked at all that and simply stated that the 1.5 sigma shift was TYPICAL.   We all accepted that simply as something to beat by better control methods.  But there are some practical barriers to keeping that “natural behavior” small after you have already reduced variation by 2x.  As sigma gets smaller, the 1.5 sigma shift is smaller. So you are chasing your tail(s).  So some people simply wanted to call this a 5 sigma effort, or some even more “precise” number.  But the Cp of 2 being twice the “natural tolerance” or individuals chart control limit spread made a lot of sense to people.  So that became the Six Sigma goal…long term.  If Cp was > 2, then management could just tell manufacturing it was their job to keep things centered and stable (Cpk > 1.5).  Mgmt had the right machines and gages and spec limits if Cp > 2….at least potentially.  Of course spec limits were always suspect.  The Japanese had TARGETS.  Better idea.  Control instead of containment.
The old Ford and other goals of Cpk > 1.33 and talking of 4 sigma spec limits was a uniquely American mistake, in my opinion, for years.  But when you are in the swamp at Cpk < 1.0, the alligators keep you from setting reachout goals.  Rework was a major US problem for years. Hit to fit.
Cpm is not as popular in the US and EU as “Robust Cpk” (using a trimmed mean or median, and different sigma estimates for the high side and low side when data is skewed, instead of using transformations). It’s close enough for something as fuzzy as a capability metric.  Stability metrics are more common in my world: a daily report on %OOC with monthly summaries to focus effort.  Cpk is the residual of that effort and the DOE efforts.  So shop floor teams keep things stable, and engineers and managers try to fund or fix the bigger problems of bad machines, bad gages, etc. So Cpk is a management performance metric in my mind.
Santa Claus visits our grandchildren every year, so yes, I believe in him, as he is me.

0
#111171

Mikel
Member

Where do you come up with this nonsense?

0
#111175

SemiMike
Member

Could you be more specific?  What is nonsense to you?  Everything I ever say?  Or just something that you disagree with?

0
#111176

Anonymous
Guest

SemiMike,
My recollection is that some folks at Motorola were confused between Shainin 1.5 sigma limits (Yellow) and 3 sigma limits – this is the true origin of the shift.
As for processes drifting over time, I still have photolithography data from Motorola that shows no such shift over a six month period. In fact, our process control in MOS 3 and MOS 8 would not allow such a shift, which is why the Ford Motor Company recognised MOS 8 as the best controlled Waferfab in North America at that time. Of course, the elimination of temporal causes of variation was accomplished using Shainin’s ‘randomized sequence’ tests.
Once we eliminated the temporal variation, we were able to use a  kind of pre-control (using Shewhart charts to plot the process mean and the process uniformity) to maintain the status quo.
The relevant question is what are the alpha and beta risks associated with the first, second, and subsequent runs. Fortunately, someone has since solved the question of the 1.5 sigma shift and will shortly publish the results on iSixSigma – hopefully this will resolve the issue of the shift once and for all.
Regards,
Andy

0
#111178

Mikel
Member

Your alleged history lesson is nonsense.

0
#111190

Dog Sxxt
Participant

Did Bill Smith or someone else come up with the original idea of the 1.5 sigma shift?

0
#111191

Dog Sxxt
Participant

Pyzdek suggested that all models are wrong; the issue is which ones are more useful than others. http://www.qualitydigest.com/may01/html/sixsigma.html

0
#111192

Anonymous
Guest

I don’t doubt that it was someone; I just doubt the veracity of the idea. Just think about it for a minute – I’ve seen your posts – you seem a rational person. Where is the 1.5 sigma shift in Geometric Dimensional Tolerancing? Putting it another way, if the shift can’t guarantee a defect level for a bivariate distribution, why have one? Better to find the sources of temporal variation and remove them. No wonder the US economy is going down the tubes! Pretty soon China, a communist country, will become the largest world economy, and I can’t see them championing democracy, can you?
Why am I against the shift … because the shift can cause companies to become uncompetitive; moreover it can cause a lot of good process and device engineers to quit the industry and go and sell BMWs instead. Who wants to buy slow fast static RAMs with Six Sigma L effective!!!!
Andy

0
#111195

Dog Sxxt
Participant

Just for your update, communist China is the first country in the world (I hope I am not wrong in this statement) to form a national-level six sigma council, fully sponsored by the government and also having a nationwide black belt certification program. Meanwhile, the “democratic” government in my country is still talking about the greatness of ISO9000. I have the same hopeless feeling as you when I see made-in-China goods all over the shelves in the supermarkets and roadside stalls.

0
#111197

Anonymous
Guest

If what you say is true, then there is still hope for the free world … unless of course some Japanese company teaches them one-by one confirmation and autonomous inspection, then we’ll really be in the sxxt.
Cheers,
Andy

0
#111199

Ken Feldman
Participant

It was George Box who made the statement about all models being wrong but some useful.  Great, Communist China is a role model for change.  I am sure a government sponsored centralized agency will be very successful.  Chinese made items are on the shelves because they pay workers a few pennies a day versus “democratic” free societies.  I don’t see anybody claiming the products are of superior quality, only that they are cheap.  Everyone thought India would be the next big world competitor.  Yes, they have stolen some jobs due to low costs but didn’t become the big threat everyone thought.  Same with Brazil.  China, unless it changes, will likely follow the same model.  Be grateful that Taiwan is democratic and free, at least as long as the USA provides protection.

0
#111202

Dog Sxxt
Participant

Cultural difference is a barrier for the Chinese and others to imitate the Japanese way of doing things. Through my real-life encounters, I have only found South Korean thinking to be closer to Japanese. Mainland and overseas Chinese people are no longer staunch followers of Confucianism. Dr. Deming in his “Out of the Crisis” book (published in the early 90s) says only Koreans from Asia can be a real competitor to the Japanese in the world market.

0
#111204

Dog Sxxt
Participant

A free and democratic society or political system is not the main factor in creating a quality-minded and competitive economic entity. India being the biggest democratic country does not automatically make it a good success model. Please note that Taiwan’s and Korea’s phenomenal growth in the 70s and 80s was under authoritarian governments. The right mixture of culture with some freedom elements, such as entrepreneurship, is the key. For me, China has a fairly good chance to be a “quality” competitor and a role model by 2020.

0
#111205

SemiMike
Member

Some people in AUSTIN may have been confused, old buddy, but the rest of the company got that idea from Bill Miller’s comments to Galvin and the years of factory variation research he quoted, as far as I know.  Shainon’s ideas were good, as far as they went, and were very useful first steps, some of which saved most of Motorola’s quality efforts at that time.  But Box and Hunter’s DOE world was even more effective, as taught internally by Tony A, now at Cypress, in the 80’s.  We all did great things with both of these sets of tools.  Later, the Six Sigma people added Project Management disciplines which really helped.  But the company lost the focus in the ’90’s as you know.

0
#111206

Heebeegeebee BB
Participant

Dog SXXT,
I totally and utterly reject your 1st hypothesis concerning influences of democratic societies upon economic models.   Rubbish.   The data simply does not support your statement.
And India as a “truly” democratic nation…PUH-LEEEEEEEEZZZZZ.
While the US is not perfect, at least we don’t cling to INSTITUTIONALIZED and Gov’t sponsored discrimination.
As is proven by the data (your past posts), you reveal your lack of data driven conclusions.
rant off.

0
#111207

Heebeegeebee BB
Participant

Great, the CHICOMs bungle up 6S with totalitarian bureaucracy and you call it a step ahead?!?!?!?!?!
A national council…NONSENSE

0
#111208

Anonymous
Guest

MOS 8’s DOE courses were provided by Jones Reilly Associates. Taguchi Methods were taught by Alan Wu and Shin Taguchi. Personally, I don’t think Box’s approach is good enough and I’ve explained why previously. (Variance scaling.)
One aspect I do agree with though is that Moto lost the plot in the 1990s, which raises the question How and Why?
Andy

0
#111209

Dog Sxxt
Participant

Data must be in figures and numbers? Korea and Taiwan’s story in the 70s and 80s is not hard data? As a matter of fact, Taiwanese society went into chaos after following the democracy model with free elections. Infighting in the Taiwanese parliament is more hard data for you to see that democracy is not a panacea for economic success and social stability. Another hard fact is that many democratic countries like the Philippines are in economic mishaps. What makes you think India is not a democratic country? “While the US is not perfect, at least we don’t cling to INSTITUTIONALIZED and Gov’t sponsored discrimination.” For me, the US government’s affirmative action for minorities is a kind of state-sponsored discrimination.

0
#111211

Dog Sxxt
Participant

Whether what they did is nonsense or not, you will feel the pain in the next 5 to 10 years for your arrogant and ignorant attitude.

0
#111212

Mike Carnell
Participant

Heebeegeebee,
Welcome to “the non-data driven six sigma world according to Dog Sxxt” who will profess he respects your right to your opinion as long as it is coincidental with his. If not you are considered arrogant and ignorant.
Regards

0
#111213

Dog Sxxt
Participant

One aspect I do agree with though is that Moto lost the plot in the 1990s, which raises the question How and Why?
Why? The successor after Bob Galvin is not a good leader. He was booted out from Board two or three years after a series of blunders in 90s.

0
#111214

Dog Sxxt
Participant

No, I am not so barbaric as Bush, attacking Iraq without support from data. Data for you is a selective business, and I hardly see you provide data in your arguments while you ask others to do so.
Are you following Bush or American logic?

0
#111215

Mike Carnell
Participant

Dog Sxxt,
There is no possible way you could read anything you write before you post it. That is the most disjoint misdirected faux logic I have read in a while.
I rest my case.

0
#111217

Dog Sxxt
Participant

Ooo… This is your usual logic. Either other people’s brains are too tiny to absorb your view, or they are not logical like you. Have a good rest anyway.

0
#111219

Ken Feldman
Participant

Better not rile up Carnell otherwise he will send in the Marines and bomb your dip sxxt little country into the dark ages.  Have a Happy Thanksgiving.  Wait isn’t that the holiday where we celebrate tricking the Indians into giving the Pilgrims food so we could steal their country.

0
#111223

Mikel
Member

Hey genius, it’s Bill SMITH and SHAININ.
Come back when you know what you are talking about.

0
#111224

Whatsinaname
Member

It’s shameful how you guys are using a professional forum like this to promote your racial, truly one-sided opinions. Get a life and stick to the cause of building careers and companies.

0
#111229

SemiMike
Member

Andy, IMHO it was the mgmt metrics that changed.
They got the LEAN message, CT and OTD, and many of their managers could only keep one or two metrics on their plate.  Many companies over reacted to the LEAN ideas and Re-engineering ideas.  But they tried to start over recently I hear.
As far as Classical vs Taguchi methods, both work fine if you use them correctly.  I like Optimal designs as well for some cases.  The war between Box and Taguchi was settled long ago, I think, and new software supports the best of both worlds.  Shainon’s non-parametric ideas were also great, but in a few cases much less powerful than the parametric tests.   Nowadays we have such nice computers, I just graph all the data and 80% of the time I don’t need to do much else.  But for that other 20%, I will use any tool that works, and get help if I can’t handle the tool well.  The web is great for that.  So many retirees!!!
One bad Taguchi example was some Motorolan running a giant L32 array experiment with Hfe as the response variable and multi-step factors.  It took him 6 months and never got run quite right.  Smaller sequential fractional factorials would have done that job much better, learning as they went.
If I remember, your last name started Ur??  We should exchange email addresses some time. I think KB oversold Shainon-only idea, personally.  Ford seemed to agree.  But like the guys say, Toyota rules and they mostly use FOCUS ON THE TARGET, and some good discipline, and lots of LEAN ideas but without any sacrifice in quality.  Their managers seem to be able to handle multi-tasking and multiple metrics.  Same story in Korea now.  Samsung rules their world.  Hyundai has improved greatly. BMW has serious first-year rel issues.
Corporate Quality cycles in US and Europe, but seems to just get better in Japan and Korea, up to a point.  They do have problems.

0
#111235

Dog Sxxt
Participant

“Many companies over reacted to the LEAN ideas and Re-engineering ideas. But they tried to start over recently I hear.”
Over-reacted to lean? I don’t think so. Boeing used lean as the main framework for improvement, and not the oversold six sigma. I can confirm the lean re-initiative in M; hope they don’t lose steam as last time. I did say in this forum once, a senior staff member from Dell, a former Motorolan, said both companies are staffed by the same capable army, but Dell can execute much faster than Motorola. This is the main differentiator. You need SPEED besides quality. Samsung will overtake Motorola as number 2 in the handphone business in the next three months. Let’s wait and see my forecast.

0
#111241

Anonymous
Guest

Stan,
Glad you sorted that out  .. the only guy I could think of was the one with a good sauce.
Andy

0
#111242

Anonymous
Guest

SemiMike,
Yes, my last name is Urquhart. You can contact me through my website, by searching Google. I noted that you used the term hfe … does this mean you were out of Phoenix?
Andy

0
#111245

SemiMike
Member

Local Boeing people tell me they have Lean-Sigma (both) working well.   The Lean efforts were done earlier and got the low hanging fruit. At Motorola, the Sigma efforts were done first and then they REALLY killed it with the METRICS for “speed” as you call it, like Cycle Time, and then they recently put back a balanced Lean-Sigma approach…or so they tell me.  I see this all over the world: a battle between managers who think SPC is “non-value added” due to their new Lean training and remove much of it, then they end up getting in quality trouble and put it back.  It does not matter what you call your QM model; it matters how you execute it.  It’s all about EXECUTION, and that gave Re-Engineering a really bad name (cost-cutting with a vengeance), and at many sites Lean got tagged the same way, as just an excuse for layoffs and outsourcing.  Sigma was done wrong at many places by big training companies, too expensive, and they did not know the industry, as some have pointed out.  But at others, it was a great success.  The key to execution seems to be KNOWLEDGE of the INDUSTRY and LEADERSHIP that can BALANCE the rigors of data-driven quality and cost-cut-driven lean efforts. Both can save money and speed up flow.   Just my experience.  Yours may be different.

0
#111248

Mikel
Member

Your alleged knowledge of Motorola continues to be bogus and incorrect.

0
#111250

K.Subbiah
Participant

Gents:
What is the best method to calculate LT Standard Deviation? Dr. Harry, in his book titled “Six Sigma Producibility and Process Characterization”, suggests the use of RTY for calculating LT Standard Deviation. Are there any other ways of determining the LT Standard Deviation? Thanks.

0
#111253

Mikel
Member

Yes, use real variable data collected over many cycles.
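Stan’s one-line answer can be made concrete. One common convention (sketched below with made-up readings — this is the within/overall approach, not Dr. Harry’s RTY method) takes the long-term sigma from all the data pooled together and the short-term sigma from within-subgroup variation only:

```python
import statistics

# Hypothetical subgrouped data: 4 subgroups of 5 readings each,
# with a wandering subgroup mean (made-up numbers for illustration)
subgroups = [
    [9.8, 10.1, 10.0, 9.9, 10.2],
    [10.4, 10.6, 10.5, 10.3, 10.7],
    [9.6, 9.9, 9.7, 9.8, 9.5],
    [10.2, 10.0, 10.1, 10.3, 9.9],
]

# Long-term sigma: overall sample stdev of every reading
all_data = [x for g in subgroups for x in g]
sigma_lt = statistics.stdev(all_data)

# Short-term sigma: pooled within-subgroup standard deviation
n = len(subgroups[0])
pooled_var = sum((n - 1) * statistics.variance(g) for g in subgroups) / (
    len(subgroups) * (n - 1))
sigma_st = pooled_var ** 0.5

print(f"sigma ST = {sigma_st:.3f}, sigma LT = {sigma_lt:.3f}")
# LT exceeds ST whenever the subgroup means wander between subgroups
```

The gap between the two estimates is exactly the “shift and drift” this thread argues about: with perfectly stable subgroup means the two sigmas converge.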

0
#111255

Mikel
Member

Shree, go set up a control chart and then provide it with data that is shifted by only one sigma. Tell us how long it takes to detect.
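Stan’s challenge has a well-known answer that is easy to simulate. For an individuals chart with ±3-sigma limits, the chance that any one point lands outside the limits after a 1-sigma mean shift is only about 2.3%, so the average run length to a signal is roughly 44 points. A quick Monte Carlo sketch (only the ±3-sigma rule, no extra run rules):

```python
import random
import statistics

random.seed(1)  # reproducible sketch

def runs_to_signal(shift, n_trials=2000):
    """Average number of points until an individuals chart with
    +/-3-sigma limits signals, once the mean (in-control mean 0,
    sigma 1) has shifted by `shift` sigma."""
    lengths = []
    for _ in range(n_trials):
        count = 0
        while True:
            count += 1
            if abs(random.gauss(shift, 1.0)) > 3.0:
                break
        lengths.append(count)
    return statistics.mean(lengths)

print(runs_to_signal(1.0))  # theory says ~44 points on average
print(runs_to_signal(1.5))  # theory says ~15 points on average
```

With no shift at all, the same chart averages about 370 points between false alarms, which is why a sustained drift of this size can sit inside the limits long enough to inflate the long-term sigma well before anyone reacts.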

0
#111267

SemiMike
Member

Stan, why not be specific instead of just insulting?
Tell me where I am wrong?  I worked there more than 30 years. Naturally, everyone worked in different divisions, so they have different views of what went wrong, but we should all agree they did go wrong in the 1990’s.  Most of the managers disagree to this day about why that happened, but it did.   Many blame specific leaders.  I think that is valid, but I can’t name names on this board.  Some of the equipment decisions for fabs were really stupid, in hindsight.  But leadership was clearly flawed and the 800 pound gorillas clearly prevented their leaders from implementing solutions.  Tell your view, don’t just attack.

0
#111591

Heebeegeebee BB
Participant

oooh, another non-humored “Shameful” posting…
Yes, yes, we mustn’t, no, daren’t sully this holy of holy forums with open and free communication…
We must used hushed, reverent tones, lest we offend…
We must bow courteously and should someone by chance not agree with our position, we should pin on them names such as racist and sexist…
And should we be confronted with a spot-on analysis of our lack of data-driven conclusions, we should resort to making lightly veiled threats about what to expect from the sleeping dragon in the next 5-10 years.
I’ll bring the incense, can someone else dim the lights and begin the monastic jedi-chanting?
Gee-whiz… between Dog SXXt’s postings and the “Hallowed halls” crowd, I am thoroughly amused!!!
Lighten up…

0
#116736

Naek Milton
Participant

I think I want to recognize the actual variation, whether that variation is caused by “normal” events or by a drift in the mean.  What is the purpose in not recognizing the actual variation?  It is what it is!

0
#118917

CS
Participant

Hi Joe,
Could you send over, in Excel or Word format, the table below?  Thank you
“If you or others would like a table of zero to 6 sigma listings verses PPM rates and Verses PPM rates after a 1.5 sigma shift, just write me. You will see that 6 sigmas are equal to 0.002 PPM, or, 3.4PPM after a 1.5 sigma shift off target.”

Best rgds,
CS Foo
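The table Joe describes is easy to regenerate rather than mail around. A short sketch using the normal CDF (note the usual convention: the centered figures count both tails, while the shifted figures count only the dominant tail — which is how 6 sigma gives 0.002 PPM centered and 3.4 PPM after a 1.5-sigma shift):

```python
from math import erf, sqrt

def tail_ppm(z):
    # Area of one upper tail of the standard normal beyond z, in PPM
    return (1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0)))) * 1e6

print(f"{'Sigma':>5} {'Centered PPM':>14} {'Shifted 1.5 PPM':>16}")
for z in range(0, 7):
    centered = 2.0 * tail_ppm(z)   # both tails, process on target
    shifted = tail_ppm(z - 1.5)    # dominant tail after a 1.5-sigma shift
    print(f"{z:>5} {centered:>14.4g} {shifted:>16.4g}")
```

The last row reproduces the figures quoted in the first post: about 0.002 PPM for a centered 6-sigma process and about 3.4 PPM once the mean is assumed to sit 1.5 sigma off target.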

0
#128129

Participant

I agree.
“A theory is just a theory unless it can be scrutinized scientifically and shown within reasonable limits of accuracy that it can be replicated elsewhere”.
Swallowing the 1.5 sigma shift statement wholesale is equivalent to
physicists saying that they have developed the “theory of everything”.
There are infinitely many variables associated with this last statement, just as there is the potential for infinitely many variables between industries and their respective processes. Use common sense.
Statistics is not an exact science. Stop trying to force feed the rest of us that it is.

0
Viewing 100 posts - 1 through 100 (of 203 total)

The forum ‘General’ is closed to new topics and replies.